Timmy's Telethon #6

And finally:

  1. Third party business applications: It could be argued that an enterprise GIS exists to support business requirements which often call for a third party client solution. Are vendors building COTS apps against these third party solutions?

I think the wording is a little off here, and it should be “are vendors building third party COTS solutions against these open source apps?”

And the answer is “sure!” Not as many as against the ESRI stack, but that is to be expected: ISVs support things where there is demand for support, and the market leader obviously drives a lot more demand. However, the amount of ISV support for open source is greater than zero, and growing. On the PostGIS side, for example:

  • Safe Software’s FME supports it
  • Mapguide supports it
  • Ionic Red Spider supports it
  • ESRI’s Interoperability Extension supports it
  • Manifold supports it (!!!)
  • CadCorp SIS supports it
  • ArcSDE 9.3 supports it
  • MapDotNet Server supports it
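
What “supports it” means in practice is pleasantly mundane: PostGIS is just PostgreSQL plus spatial types and functions, so a third party client connects the way it connects to any database and asks for geometries in SQL. A minimal sketch of the kind of query such a client issues (the connection details, table name, and column names here are all hypothetical):

    # Sketch of what a third-party client does when it "supports PostGIS":
    # ordinary PostgreSQL access plus spatial SQL functions. Connection
    # details, table name, and column names below are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=gis user=gis host=localhost")
    cur = conn.cursor()
    # ST_AsText returns WKT that any client can parse; most clients
    # actually fetch binary via ST_AsBinary for speed.
    cur.execute("""
        SELECT name, ST_AsText(geom)
        FROM roads
        WHERE ST_Intersects(geom,
              ST_MakeEnvelope(-123.5, 48.0, -123.0, 48.5, 4326))
    """)
    for name, wkt in cur.fetchall():
        print(name, wkt)
    cur.close()
    conn.close()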

Most of the web-services side of things is handled via open standards, so Mapserver slots into all kinds of infrastructures and third party tools via the WMS standard, for example. I imagine that if the WMS standard did not exist, more software would talk directly to Mapserver via its own CGI dialect, but WMS made that redundant.
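
For the curious, the whole “slots in via WMS” trick is visible in a single URL. Here is a sketch of the GetMap request a third party client builds against a Mapserver endpoint; the host, map file path, and layer name are hypothetical, but the parameters are the standard WMS 1.1.1 ones:

    # Sketch of a WMS 1.1.1 GetMap request against a Mapserver CGI
    # endpoint. Host, map file path, and layer name are hypothetical;
    # any WMS client, open source or proprietary, builds this same URL.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = {
        "map": "/maps/demo.map",            # hypothetical Mapserver map file
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "roads",                  # hypothetical layer name
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": "-123.5,48.0,-123.0,48.5",  # minx,miny,maxx,maxy
        "WIDTH": "512",
        "HEIGHT": "512",
        "FORMAT": "image/png",
    }
    url = "http://example.com/cgi-bin/mapserv?" + urlencode(params)
    png = urlopen(url).read()               # a rendered map image, as bytes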

And on the client side, I found this nugget from Bill Dollins’ ESRI Fed UC review intriguing:

So the take away was “we’ve got the server to crunch your data and give you good analysis results, display it any way you want.” There was no OpenLayers demo but it was mentioned several times and something that should be able to leverage new APIs.

In the open source world, OpenLayers is already the “default map component” for web apps; it is interesting to see it banging on the doors of the proprietary world as well.


Great quotation on the OSGeo-Discuss list:

The brutal truth is this: when your key business processes are executed by opaque blocks of bits that you can’t even see inside (let alone modify) you have lost control of your business. You need your supplier more than your supplier needs you—and you will pay, and pay, and pay again for that power imbalance. You’ll pay in higher prices, you’ll pay in lost opportunities, and you’ll pay in lock-in that grows worse over time as the supplier (who has refined its game on a lot of previous victims) tightens its hold.

I heard Chris DiBona of Google say exactly this at a conference in DC last year with respect to Google’s own strategic software direction. For anything that is core to their business, they use open source if it is good enough, or build it themselves if it is not. Being beholden to a vendor would be a huge competitive disadvantage to them.

Like Cattle to Slaughter

When I first got a cell-phone, some 10 years ago, I deliberately chose the hip new alternative on the block, FIDO, because their costs were competitive, and they didn’t have an “N-year contract” system. You paid your bill until you were done with the service, and then you stopped – sounded fair to me.

However, the Canadian industry has consolidated down to 2 major carriers and a handful of small regional ones, and FIDO was bought up by the Rogers behemoth. They still operate as an independent brand, but clearly things are changing. For one thing, they have added multi-year contracts, and are now pulling maneuvers to try and get people to leave behind their month-to-month plans.

My wife got a call from a marketer – would you like a better plan? We’ll give you two months free, and blah blah blah blah blah. She was holding a screaming baby at the time, and said “yeah, sure”. Never say yes to a salesman in haste. Turns out the new plan was a 3-year contract.

Today, I found out that the phone I have was quietly switched to a 3-year plan when they sent me a new handset last year. Surprise! FIDO had sent new handsets in the past, so I didn’t notice, but apparently the cost of my new phone was that I am now in cell-phone shackles until January of 2010.

You’re not a customer to the phone company, you’re an ambulatory money beast. Mmoooooooo.

Timmy's Telethon #5

It’s not all about Timmy, Timmy worries about the little people too!

  1. Business model: I think I’m missing the business opportunity behind open source. Perhaps it’s altruism or a hobby… but that only goes so far. How are people leveraging this to pay the bills? Are services where the money is at?

Do you litter? When you put your garbage in a can, instead of dropping it on the street, is it altruism, or self-interest? After all, you’re keeping your own environment clean along with everyone else’s. What about the fellow who actually bends over and picks up someone else’s empty coffee cup to put into the can?

Open source turns software into a commons, like the environment, but because we are used to thinking of software as property, our minds find it hard to figure out why open source software aggregates effort and gets cleaner, faster, stronger. Unlike picking up litter, which everyone can do, improving open source can only be done by a small percentage of the population who can understand and work on the code, and that makes it even harder for the rest of the population to understand what is going on. WTF are these lunatics up to, moving bits of used paper into cylindrical holding devices?

The people improving open source fall into lots of categories, but here are some broad ones:

  • The altruist / tinkerer. The most popular media archetype, the altruist / tinkerer probably accounts for the smallest amount of effort on mature open source technologies, but occasionally will create a new technology from scratch that is so good and compelling that it gathers other types who then keep it alive and move it into wider utility.
  • The service provider. The most widely understood business model, running from the one man band (take a bow, Frank Warmerdam) to billion-dollar companies like Red Hat. They will sell you support, or custom development, directly on the software. Because they are tightly associated with the software, this is the easiest model for “vendor minded” folks to mentally grasp.
  • The systems integrator. Working with the open source software, but not necessarily on the software, the systems integrator is easily drawn into fixing bugs and adding new functionality on projects as the client requires. Systems integrators love open source because it allows them to meet client needs without being stuck behind a vendor’s development priorities (“we’ve got your bug report on file for the next release”, “that feature will be available in 18 months”).
  • The company man. Easily the least appreciated member of the open source pantheon, because he is not paid to work on open source. He is paid to work on fish inventories. Or carbon models. Or inventory management. Or tax collection. But he has a bit of discretionary time, and he uses it to make the tools he works with work better.

Notably missing from this list is “the billionaire” and “the venture capitalist”. That is because, while there is money to be made in open source, there is not a lot of money to be made. People who try to put up fences in the open source commons find that all the rest of the players end up routing around them, and the value slowly drains from their little patch of land, leaving them only with the tried and true proprietary model to fall back on.

Attention Neotards: Your GPS Sucks!

Reading through an update from Open Street Map (incredibly, it has taken them mere months to load TIGER into their futuristic system) I came across this little gem regarding aerial photography:

Since the TIGER map data was produced from aerial photography, and was originally intended to assist Census Bureau officials in the field, such problems [misalignment of roads] are bound to occur and are unlikely to have undergone official correction.

I’m not sure what to mock first: the leap to the conclusion that TIGER data problems are a result of aerial photography, or the related conclusion that GPS tracing is somehow superior to aerial photography. They are, of course, closely related in the mind of the neotard: “Hmm, TIGER is old; aerial photography is old; TIGER sucks; aerial photography is old and expensive (so it sucks); therefore, the reason TIGER sucks is because aerial photography sucks!”

(Admission: I don’t know why TIGER positions are so bad in places. The answer may well be the source, but it cannot be hung on aerial photography. Much of TIGER is sourced from lower levels of government and then stitched in, so it is probably a pastiche of capture methods. I wouldn’t be surprised if some of it was digitized off of paper road maps. Or if some of it has not been positionally updated in 25 years.)


Oh, pity the poor paleotards, who don’t know any better, wasting good money flying about taking error prone “photographs”, instead of doing the smart thing and walking around with a Garmin. (Is that a Garmin in your pocket, or are you just happy to see me?)

I admit, I suffered from “GPS is magic” syndrome for quite a while, but I had the fortune to be exposed to people whose job it is to make base maps, who have to ensure that the lines they place on the map (in the database) are as close to “true” location as possible, given the technology available. That exposure taught me some useful things about source data collection, and one thing it taught me is that GPS traces are extremely hard to quality control.

The GPS has a very hard job to do. It has to read multiple signals from satellites and calculate its location based on very, very, very small time differences. What happens when the signals intermittently drop because trees overhead block them? Or bounce off of a nearby structure? The unit makes errors. Which would be fine, except that it is hard to isolate which observations are good and which ones are bad. Too often, GPS track gathering falls back on heuristics that delete “zingers” (really obviously bad readings, determined visually or with a simple algorithm) and assume every other observation is “good”. If you delete zingers and take enough repeated traces, you can slowly get averaged lines that converge towards meter-level accuracy. However, the need for multiple traces radically increases the manpower and cost of gathering decent data, and the accuracy level does max out after a while.
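
To make the “delete zingers and average” heuristic concrete, here is a toy sketch; the 30 meter threshold is invented, and real post-processing is far more sophisticated than this. (For a sense of why the unit errs at all: light travels about 0.3 meters per nanosecond, so a 10 nanosecond timing error is roughly 3 meters of position error.)

    # Toy sketch of the "delete zingers, average the rest" heuristic
    # for repeated GPS observations of a single point. The 30 m zinger
    # threshold is invented for illustration.
    from statistics import median

    def clean_and_average(points):
        """points: list of (x, y) fixes of one location, in meters
        (e.g. UTM). Returns the average of the non-zinger fixes."""
        mx = median(x for x, _ in points)
        my = median(y for _, y in points)
        # A "zinger" is a fix implausibly far from the median position.
        good = [(x, y) for (x, y) in points
                if ((x - mx) ** 2 + (y - my) ** 2) ** 0.5 < 30.0]
        return (sum(x for x, _ in good) / len(good),
                sum(y for _, y in good) / len(good))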

The answer to getting good positional information over a large area is to tie together the strengths of GPS systems and the strength of remote sensing (aerial, satellite) systems.

  • Take a picture from above (or better, borrow one from the USDA or USGS that has already been “orthocorrected”). This provides a very good source of relative measurement information: you can determine very precisely the distance and bearing between any points A and B. But it has no absolute positioning: how do I know the longitude/latitude of A and B?
  • Find several points at the edges of your photograph that are clearly visible from above and identifiable from the ground. Take your GPS to those points, and take long, long, long collections of readings (an hour is good) at each one. Take those readings home, and post-process them to remove even more error. Average your corrected readings for each point, and you now have “ground control points”.
  • Use those control points to fit your aerial picture to the ground in some nice planar projection (like UTM), as sketched below. Digitize all the rest of your locations directly off the picture.
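
That fitting step is just a small linear model: solve for the six coefficients of an affine transform that maps pixel coordinates onto your surveyed ground control points. A sketch with numpy, using made-up control point values:

    # Sketch of fitting a 6-parameter affine transform from image
    # pixel coordinates to ground (UTM) coordinates using ground
    # control points. The coordinate values below are made up.
    import numpy as np

    # (pixel_col, pixel_row) -> surveyed (easting, northing). Three
    # non-collinear points is the minimum; more lets least-squares
    # average out the per-point GPS error.
    pixels = np.array([[120, 80], [1900, 110], [150, 1400], [1850, 1500]])
    ground = np.array([[480120.5, 5470880.2], [481030.1, 5470862.7],
                       [480139.9, 5470210.4], [481005.6, 5470155.8]])

    # Solve ground = [col, row, 1] @ coeffs for the affine coefficients.
    design = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeffs, *_ = np.linalg.lstsq(design, ground, rcond=None)

    def pixel_to_ground(col, row):
        """Georeference a pixel: "digitizing off the picture" is this."""
        return np.array([col, row, 1.0]) @ coeffs

    print(pixel_to_ground(1000, 750))  # easting, northing near image center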

Take a deep breath. You are now a paleotard, but a happy, contented paleotard with very well located data.

This posting derives from a very interesting discussion on Geowanking from some months back. The question was “how do I map my two acre property at very high accuracy?”, and while the initial guess of the poster involved using a variation on GPS tracing, the best final answer was a hybrid solution.