Like Cattle to Slaughter

When I first got a cell phone, some 10 years ago, I deliberately chose the hip new alternative on the block, FIDO, because their costs were competitive, and they didn’t have an “N-year contract” system. You paid your bill until you were done with the service, and then you stopped – sounded fair to me.

However, the Canadian industry has consolidated down to two major carriers and a handful of small regional ones, and FIDO was bought up by the Rogers behemoth. They still operate as an independent brand, but clearly things are changing. For one thing, they have added multi-year contracts, and are now pulling maneuvers to try to get people to leave behind their month-to-month plans.

My wife got a call from a marketer – would you like a better plan? We’ll give you two months free and blah blah blah blah blah. She was holding a screaming baby at the time, and said “yeah, sure”. Never say yes to a salesman in haste. Turns out the new plan was a 3-year contract.

Today, I found out that the phone I have was quietly switched to a 3-year plan when they sent me a new handset last year. Surprise! FIDO had sent new handsets in the past, so I didn’t notice, but apparently the cost of my new phone was that I am now in cell-phone shackles until January of 2010.

You’re not a customer to the phone company; you’re an ambulatory money beast. Mmoooooooo.

Timmy's Telethon #5

It’s not all about Timmy; Timmy worries about the little people too!

  1. Business model: I think I’m missing the business opportunity behind open source. Perhaps it’s altruism or a hobby… but that only goes so far. How are people leveraging this to pay the bills? Are services where the money is at?

Do you litter? When you put your garbage in a can, instead of dropping it on the street, is it altruism, or self-interest? After all, you’re keeping your own environment clean along with everyone else’s. What about the fellow who actually bends over and picks up someone else’s empty coffee cup to put into the can?

Open source turns software into a commons, like the environment, but because we are used to thinking of software as property, our minds find it hard to figure out why open source software aggregates effort and gets cleaner, faster, stronger. Unlike picking up litter, which everyone can do, improving open source can only be done by a small percentage of the population who can understand and work on the code, and that makes it even harder for the rest of the population to understand what is going on. WTF are these lunatics up to, moving bits of used paper into cylindrical holding devices?

The people improving open source fall into lots of categories, but here are some broad ones:

  • The altruist / tinkerer. The most popular media archetype, the altruist / tinkerer probably accounts for the smallest amount of effort on mature open source technologies, but occasionally will create a new technology from scratch that is so good and compelling that it gathers other types who then keep it alive and move it into wider utility.
  • The service provider. The most widely understood business model, running from the one-man band (take a bow, Frank Warmerdam) to billion-dollar companies like Red Hat. They will sell you support, or custom development, directly on the software. Because they are tightly associated with the software, this is the easiest model for “vendor-minded” folks to mentally grasp.
  • The systems integrator. Working with the open source software, but not necessarily on the software, the system integrator is easily drawn into fixing bugs and adding new functionality on projects as the client requires. Systems integrators love open source because it allows them to meet client needs without being stuck behind a vendor’s development priorities (“we’ve got your bug report on file for the next release”, “that feature will be available in 18 months”).
  • The company man. Easily the least appreciated member of the open source pantheon, because he is not paid to work on open source. He is paid to work on fish inventories. Or carbon models. Or inventory management. Or tax collection. But he has a bit of discretionary time, and he uses it to make the tools he works with work better.

Notably missing from this list is “the billionaire” and “the venture capitalist”. That is because, while there is money to be made in open source, there is not a lot of money to be made. People who try to put up fences in the open source commons find that all the rest of the players end up routing around them, and the value slowly drains from their little patch of land, leaving them only with the tried and true proprietary model to fall back on.

Attention Neotards: Your GPS Sucks!

Reading through an update from Open Street Map (incredibly, it has taken them mere months to load TIGER into their futuristic system), I came across this little gem regarding aerial photography:

Since the TIGER map data was produced from aerial photography, and was originally intended to assist Census Bureau officials in the field, such problems [misalignment of roads] are bound to occur and are unlikely to have undergone official correction.

I’m not sure what to mock first: the leap to the conclusion that TIGER data problems are a result of aerial photography, or the related conclusion that GPS tracing is somehow superior to aerial photography. They are, of course, closely related in the mind of the neotard: “Hmm, TIGER is old; aerial photography is old; TIGER sucks; aerial photography is old and expensive (so it sucks); therefore, the reason TIGER sucks is because aerial photography sucks!”

(Admission: I don’t know why TIGER positions are so bad in places. The answer may well be the source, but it cannot be hung off of aerial photography. Much of TIGER is sourced from lower levels of government and then stitched in, so it is probably a pastiche of capture methods. I wouldn’t be surprised if some of it was digitized off of paper road maps. Or if some of it has not been positionally updated in 25 years.)

Paleotard

Oh, pity the poor paleotards, who don’t know any better, wasting good money flying about taking error-prone “photographs”, instead of doing the smart thing and walking around with a Garmin. (Is that a Garmin in your pocket, or are you just happy to see me?)

I admit, I suffered from “GPS is magic” syndrome for quite a while, but I had the fortune to be exposed to people whose job it is to make base maps, who have to ensure that the lines they place on the map (in the database) are as close to “true” location as possible, given the technology available. That exposure taught me some useful things about source data collection, and one thing it taught me is that GPS traces are extremely hard to quality control.

The GPS unit has a very hard job to do. It has to read in multiple signals from satellites and calculate its location based on very, very, very small time differences. What happens when the signals intermittently drop because trees overhead block them? Or bounce off a nearby structure? The unit makes errors. Which would be fine, except it’s hard to isolate which observations are good and which ones are bad. Too often, GPS track gathering falls back on heuristics that delete “zingers” (really obviously bad readings, spotted visually or with a simple algorithm) and assume every other observation is “good”. If you delete zingers and take enough repeated traces, you can slowly get averaged lines that converge towards meter-level accuracy. However, the need for multiple traces radically increases the manpower and cost of gathering decent data, and the accuracy level does max out after a while.
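As an illustration of that heuristic, here is a minimal sketch in Python of “delete the zingers, average the rest”, applied to repeated readings collected at a single location. The function names, the 30-metre threshold, and the simple median test are my own assumptions for the example, not a description of any real GPS post-processing tool.

    import math
    import statistics

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 lat/lon points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def drop_zingers(readings, threshold_m=30.0):
        """Discard readings further than threshold_m from the median position."""
        med_lat = statistics.median(lat for lat, lon in readings)
        med_lon = statistics.median(lon for lat, lon in readings)
        return [(lat, lon) for lat, lon in readings
                if haversine_m(lat, lon, med_lat, med_lon) <= threshold_m]

    def average_position(readings, threshold_m=30.0):
        """Average whatever survives; every non-zinger is simply assumed
        to be 'good', which is exactly the weakness described above."""
        good = drop_zingers(readings, threshold_m)
        return (statistics.fmean(lat for lat, lon in good),
                statistics.fmean(lon for lat, lon in good))

The more repeated readings you feed in, the closer the average creeps toward the true position, but only as long as the surviving errors really are random rather than systematic (a building that bounces the signal the same way on every pass, for instance).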

The answer to getting good positional information over a large area is to tie together the strengths of GPS systems and the strength of remote sensing (aerial, satellite) systems.

  • Take a picture from above (or better, borrow one from the USDA or USGS that has already been “orthocorrected”). This provides a very good source of relative measurement information: you can determine very precisely the distance and bearing between any points A and B. But it has no absolute positioning: how do I know the longitude/latitude of A and B?
  • Find several points at the edges of your photograph that are clearly visible from above and identifiable from the ground. Take your GPS to those points, and take long, long, long collections of readings (an hour is good) at each one. Take those readings home, and post-process them to remove even more error. Average your corrected readings for each point, and you now have “ground control points”.
  • Use those control points to fit your aerial picture to the ground in some nice planar projection (like UTM); a rough sketch of that fitting step follows this list. Digitize all the rest of your locations directly off the picture.
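For the fitting step, here is a minimal sketch, assuming numpy is available, of a least-squares affine fit from image pixel coordinates to UTM coordinates using the averaged ground control points. The function names and the control point values are invented for illustration.

    import numpy as np

    def fit_affine(pixel_xy, utm_xy):
        """Least-squares affine fit mapping pixel (col, row) to UTM (easting, northing)."""
        px = np.asarray(pixel_xy, dtype=float)
        gt = np.asarray(utm_xy, dtype=float)
        design = np.hstack([px, np.ones((len(px), 1))])  # extra column for the translation term
        coeffs, *_ = np.linalg.lstsq(design, gt, rcond=None)
        return coeffs  # 3x2 matrix of affine parameters

    def pixel_to_utm(coeffs, col, row):
        """Map a digitized pixel location into UTM metres."""
        return np.array([col, row, 1.0]) @ coeffs

    # Hypothetical control points: pixel locations paired with averaged GPS readings in UTM.
    gcp_pixels = [(120, 80), (1900, 95), (1850, 1400), (140, 1350)]
    gcp_utm = [(480210.0, 5455790.0), (481990.0, 5455775.0),
               (481940.0, 5454470.0), (480230.0, 5454520.0)]
    coeffs = fit_affine(gcp_pixels, gcp_utm)
    print(pixel_to_utm(coeffs, 1000, 700))  # UTM coordinates of a point digitized off the photo

Three well-spread control points are the bare minimum for an affine fit; using more lets the least-squares step average out the residual GPS error at each point. Real orthorectification tools do considerably more than this, but the principle is the same.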

Take a deep breath. You are now a paleotard, but a happy, contented paleotard with very well located data.

This posting derives from a very interesting discussion on Geowanking from some months back. The question was “how do I map my two acre property at very high accuracy?”, and while the initial guess of the poster involved using a variation on GPS tracing, the best final answer was a hybrid solution.

Wheels Within Wheels

Sean says he supported Obama because he is more “electable”. So, what are we to make of this notice in the New York Times?

Senator John McCain’s presidential campaign said Thursday that it stood by a year-old pledge made with Senator Barack Obama that each would accept public financing for the general election if the nominee of the opposing party did the same.

This isn’t about campaign financing, since that issue would keep until after the Democratic nominee is decided. This is about pumping up Obama by treating him like the presumptive nominee, putting yet more wind in his sails heading into Texas and Ohio, where an Obama win will put a stake through the heart of the Clinton campaign. Could it be the McCain campaign doesn’t share Sean’s take on who the more electable Democrat is?

Keep your Stinking Rasters Out of My Database

W00t! Yeah! Sing it brother!

http://spatialgalaxy.net/2008/02/15/rasters-in-the-database-why-bother/