19 Jun 2007
In the spirit of Sean’s tomato porn, I give to you, What Real Strawberries Are Like.
First, Real Strawberries come from very close by, preferably within a few feet of your table, like our front-yard patch:
Second, you pick Real Strawberries yourself, off the bush, and when you do, they look like this:
Third, a Real Strawberry is a perfect red color, and is perfectly clean right off the bush. You don’t wash Real Strawberries, because they don’t have anything on them that needs washing:
Finally, Real Strawberries are red all the way through, because they are ripe on the bush:
If you are lucky, you can grow enough Real Strawberries to have a whole dessert from them, like I did tonight:
Oh, yeah, mmmmmm, Real Strawberries, you make life worth living.
18 Jun 2007
As Western hyper-consumerists go, I’m pretty light on the old carbon footprint. No car, walk to work, living in a temperate climate not requiring the furnace to work too hard, new insulation and weatherstripping on the house. Having gotten so far, it is annoying to blast all my progress out of the water every time I have to take a business trip on a jet plane. It would be nice to keep the hair shirt in place, even while cruising at 30,000 feet. I would pay money to do so.
On the surface, it seems like purchasing carbon offsets is the solution made for me. Air Canada even lets me purchase them right online with my ticket.
The trouble is, it looks like carbon offsets are a crock. The goal of an offset is to make it as if my polluting trip never happened. So if my trip dumps X tonnes of carbon into the atmosphere, then to make it like it never happened, I either have to remove X tonnes myself (a direct offset), or somehow get someone else who would otherwise release X tonnes to not do so (an indirect offset).
The direct offset seems like the best bet, but there are practically no ways of removing substantial quantities of CO2 from the carbon cycle. Note that I did not say “atmosphere”, I said “carbon cycle”. The global warming problem is that we’ve been digging carbon out of the ground and adding it to the biosphere, where it cycles from air to ocean to plants and people. Planting trees, the most popular direct offset, removes carbon from the atmosphere but still retains it in the biosphere… it is not back underground, it is just in a tree. When the tree dies in a couple hundred years, back into the atmosphere the carbon will go. Perhaps someone will start a carbon offset program that involves clear-cutting mature forests and sinking them into the deep ocean.
The indirect offsets suffer from the fact that while everyone wants a carbon market, no one has built one that actually limits carbon output yet. If I knew a credit I bought on a market was tied to a verifiable, and permanently capped, amount of carbon, I could offset by buying a credit, then flushing it down the toilet.
Pollution markets tend to be tied to big fixed industries with measurable outputs. A verifiable carbon market would be great for individuals but tough on the big polluters, because individuals could suck credits out of the system, and would probably do so even faster than the planned cap reductions. If they did, then inevitably politics would result as the big industries cried foul, the planned caps would be relaxed, and the purchased “offsets” would suddenly lose much of their offsetting value.
So, direct offsets are basically impossible, because no one is pumping carbon back into the crust, and indirect offsets are very subject to manipulation and impotence. For now I’m left with most of my carbon footprint in jet fuel.
15 Jun 2007
It is still early days for registration at FOSS4G, but one trend is showing up loud and clear — workshops are popular. Most of the registrants so far have chosen to attend the Monday hands-on workshops. If that trend continues, the popular workshops will fill up fast, so putting off registration is a good way to lose out on going to workshops.
If you want your pick of the workshop crop, register now! Space is literally limited; we only have 240 lab seats to work with.
14 Jun 2007
Some commenters have noted that I am turning into something of a negative nelly. So, time to fill up the karma gas tank and accentuate the positive!
What I like about ESRI:
- Corporate environmentalism and the “big picture” corporate attitude it implies (there’s more to life than software).
- The old school AML folks, and the kind of roll up your sleeves and make things work attitude they have. Nothing empowers scientists like flexible tools, and ESRI delivered.
- ArcMap. Bar none, the most powerful single bundle of editing, analytics, and cartography out there. No other single install puts so much stuff under your mouse in one go.
What I like about Oracle:
- SQL Developer. The hard core swear by TOAD, but for me, SQL Developer is just right.
- OTN and the culture of free downloads for developers. Oracle knows you have to put the tools in front of the users if you want them to try them and recommend them. I recently downloaded both Oracle and DB2 to put up test servers, and the experience was night and day. I still have achy muscles from jumping over all the DB2 hurdles.
- Seven-letter executable names. TNSLSNR, I love you!
- Online documentation. Love them or hate them, you have no excuse to be ignorant of them. The docs are good, they are complete, and they are all there.
- Buying open source companies. It gives a guy hope, you know?
- Xavier Lopez (Oracle Spatial product manager). Extremely gracious man, willing to put up with a lot of guff from open source folks (like, er, me) at FOSS4G06 and stay positive. Hope to see him in ‘07.
12 Jun 2007
The reason I was thinking about performance improvements — and how billing by CPU usage provides vendors with no incentive to work on them — is because we have been thinking about a particular PostGIS use case recently.
Suppose you have a very large candidate table of smallish things, 10s of millions of them, and you want to find all of the smallish things that are contained by a largish polygon.
The spatial index will be very useful for quickly winnowing down the 10s of millions of things to the 10s of thousands of things that might fall inside the polygon. However, after that, you’re left testing each of the smallish things individually for containment. And a majority of the smallish things will be unambiguously inside the large polygon, not even close to the edge, so a great deal of computation will be wasted. The same issue applies to Intersects(), and inversely to Within().
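The index-winnow-then-exact-test pattern can be sketched in plain Python. This is only an illustration, not how PostGIS implements it; all function names here are invented, and the candidates are reduced to points to keep the sketch short:

```python
# Two-phase containment: a cheap bounding-box prefilter (standing in
# for the spatial index) followed by an exact point-in-polygon test
# on the survivors. All names here are illustrative.

def bbox(poly):
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_contains(box, pt):
    xmin, ymin, xmax, ymax = box
    x, y = pt
    return xmin <= x <= xmax and ymin <= y <= ymax

def point_in_polygon(poly, pt):
    # Standard ray-casting test: count how many polygon edges a
    # rightward ray from the point crosses; odd count means inside.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def contained_points(poly, candidates):
    box = bbox(poly)
    # Phase 1: index-style winnow, very cheap per candidate.
    maybe = [p for p in candidates if bbox_contains(box, p)]
    # Phase 2: exact test, expensive when the polygon boundary is big.
    return [p for p in maybe if point_in_polygon(poly, p)]
```

The phase-2 cost is the pain point: every exact test walks the whole boundary of the large polygon, even for candidates sitting comfortably in the interior.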
The trick, clearly, is to provide some kind of short circuit, so that the “easy” cases can be trivially dealt with and only the boundary cases need a full test.
A nice approach for generally convex polygons would be a “maximum inscribed rectangle” (MIR) — any small thing whose MBR fits in the MIR is definitely contained. However, then you have to calculate a MIR, which is itself costly.
A variation on the MIR approach is just to superimpose the polygon on a grid and find all those squares that are fully contained in the polygon. Any smallish feature whose MBR is fully contained by “inside grid squares” is itself fully contained.
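A minimal sketch of that grid short circuit, assuming a regular grid; as an approximation it marks a cell “inside” when all four of its corners are inside (a concave boundary could clip a cell without touching its corners, so a real version would also test for edge/cell crossings):

```python
def point_in_polygon(poly, pt):
    # Standard ray-casting test: odd crossing count means inside.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def inside_cells(poly, origin, cell, nx, ny):
    # Mark the grid cells whose four corners all fall inside the
    # polygon (approximation: see caveat about concave boundaries).
    ox, oy = origin
    cells = set()
    for i in range(nx):
        for j in range(ny):
            corners = [(ox + (i + di) * cell, oy + (j + dj) * cell)
                       for di in (0, 1) for dj in (0, 1)]
            if all(point_in_polygon(poly, c) for c in corners):
                cells.add((i, j))
    return cells

def quick_contains(cells, origin, cell, mbr):
    # True means "definitely contained"; False only means "cannot
    # short-circuit, fall back to the full containment test".
    ox, oy = origin
    xmin, ymin, xmax, ymax = mbr
    i, j = int((xmin - ox) // cell), int((ymin - oy) // cell)
    return ((i, j) in cells
            and xmax <= ox + (i + 1) * cell
            and ymax <= oy + (j + 1) * cell)
```

Any candidate whose MBR lands inside a marked cell is accepted without touching the polygon boundary at all; the grid is built once and amortized over millions of candidates.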
What it looks like we’ll do first is speed up the general calculation of containment, by caching a topologized version of the large polygon. The topologized version will have an index on all the edge segments, for fast testing of whether a given candidate crosses the boundary, and an index for fast point-in-polygon testing. The idea is that first you see if the candidate crosses the boundary; if it does not, then it must be either fully inside or fully outside, so you then use a point-in-polygon test on one of its end points to see which it is.
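Here is a hypothetical sketch of that cached two-step test, with per-edge bounding boxes standing in for a real segment index. “Prepare” the big polygon once, then each candidate costs a boundary-crossing check plus, at most, one point-in-polygon test:

```python
def _orient(a, b, c):
    # Sign of the cross product: which side of line a->b is c on?
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, q1, q2):
    # Proper crossing test: each segment's endpoints straddle the other.
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def point_in_polygon(poly, pt):
    # Standard ray-casting test: odd crossing count means inside.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def prepare(poly):
    # "Topologize": break the big polygon into edge segments, each
    # tagged with its bounding box as a poor man's segment index.
    edges = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        box = (min(a[0], b[0]), min(a[1], b[1]),
               max(a[0], b[0]), max(a[1], b[1]))
        edges.append((a, b, box))
    return poly, edges

def prepared_contains(prep, cand):
    poly, edges = prep
    # Step 1: does any candidate edge cross the big boundary?
    m = len(cand)
    for i in range(m):
        p1, p2 = cand[i], cand[(i + 1) % m]
        cbox = (min(p1[0], p2[0]), min(p1[1], p2[1]),
                max(p1[0], p2[0]), max(p1[1], p2[1]))
        for a, b, box in edges:
            if (cbox[0] <= box[2] and cbox[2] >= box[0] and
                    cbox[1] <= box[3] and cbox[3] >= box[1]):
                if _segments_cross(p1, p2, a, b):
                    return False  # straddles the boundary
    # Step 2: no crossing, so the candidate is wholly in or out;
    # one point-in-polygon test on any vertex settles which.
    return point_in_polygon(poly, cand[0])
```

In a real implementation the inner loop would be an index probe rather than a scan over all edges, which is where the win over the naive test comes from.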
All in all, it is a lot of complexity for what seems like a very common hard-to-index case: test a large number of candidates for containment.