Standards for Geospatial REST?

One of the things that has been powerful about the open source geospatial community has been the care with which open standards have been implemented. Frequently the best early implementations of the OGC standards have been open source ones. It would be nice if the open source community could start providing REST APIs to the great core rendering and data access services that underlie products like Mapserver, Geoserver, and so on.

However, it’s bad enough that Google, Microsoft, and ESRI all invent their own addressing schemes for data and services. Do we want the open source community to create another forest of different schemes, one for each project? Also, inventing a scheme takes time and lots of email wanking in any community with more than a handful of members. Replicating that effort for each community will waste a lot of time.

No matter how much we (me) might bitch about OGC standards, they have a huge advantage over DIY, in that they provide a source of truth. You want an answer to how that works? Check the standard.

There’s been enough test-bedding of REST, and there are some nice working implementations; perhaps it is time to document some of that knowledge and get onto the playing field of the OGC to create a source of truth to guide the large community of implementors.

A lot of the work is done, in that most of the potential representations have been documented: GML, KML, GeoRSS, GeoJSON, WKT. Perhaps there is little more to be done: writing up how to apply the Atom Publishing Protocol (APP) to feature management.
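To make it concrete, here is a minimal sketch of one of those documented representations, a GeoJSON feature of the kind a REST feature API would pass around. The coordinates and property names are my own illustrative choices, not from any particular spec draft:

```python
import json

# A minimal GeoJSON Feature: one geometry plus a bag of properties.
# The place and property names here are illustrative only.
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-122.4194, 37.7749],  # longitude, latitude
    },
    "properties": {"name": "San Francisco"},
}

# A REST service would serve this as the response body for GET /features/{id}
print(json.dumps(feature))
```

The appeal for REST is exactly that the representation question is already settled; what APP would add is the uniform create/read/update/delete choreography around documents like this one.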

For Sean

In talking about mass-scaling geospatial web services, Sean Gillies expressed that he wasn’t sure that Mapserver (for example) was up to the task, or alternatively, that it really provided anything that couldn’t be provided with ArcServer (he’s not a fanboi, but he does like playing devil’s advocate).

For mass scaling a system on Mapserver, I’d have Mapserver generating maps out to a tile cache, so really, the number of Mapservers I would need would be limited to the amount of fresh tiles I was generating at any given time. Sean correctly pointed out that in that architecture, ArcServer wouldn’t be nearly as punitive, from a licensing point-of-view, as I was painting it, since it too could be deployed in more limited numbers while the tile cache handled most of the load.
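The tile cache in that architecture works because tiles have a fixed addressing scheme, so cached renderings can be shared across all requests. Here is a sketch of the common XYZ (“slippy map”) tile math in spherical Mercator; the function name is mine, not from Mapserver or any particular cache:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a WGS84 lon/lat to XYZ tile indices in the
    spherical-Mercator scheme most tile caches use."""
    n = 2 ** zoom                       # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)  # longitude maps linearly to x
    lat_rad = math.radians(lat)
    # Mercator projection for y; north pole is y = 0
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

Because every client asking for the same area at the same zoom hits the same (zoom, x, y) key, the renderer only runs on cache misses, which is why the number of Mapserver (or ArcServer) instances stays small.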

I just thought of an interesting use case which is totally incompatible with a proprietary per-CPU licensing scheme: suppose I want to generate a complete tile cache for the USA. Do a TIGER street map of all of the USA, for example? No problem, I fire up 1000 EC2 instances and let ‘em rip! With Mapserver, that’s not a problem. With ArcServer, or any other per-CPU licensed product, it’s just not on.
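A back-of-envelope count shows why that job is embarrassingly parallel and why per-CPU licensing stings. The bounding box and zoom range below are my own rough assumptions for the continental USA, not numbers from any TIGER product:

```python
import math

def tiles_in_bbox(west, south, east, north, zoom):
    """Count XYZ tiles covering a lon/lat bounding box at one zoom level."""
    n = 2 ** zoom
    def x(lon):
        return int((lon + 180.0) / 360.0 * n)
    def y(lat):
        r = math.radians(lat)
        return int((1.0 - math.asinh(math.tan(r)) / math.pi) / 2.0 * n)
    # Higher latitude means a smaller y index, so north is the low corner.
    return (x(east) - x(west) + 1) * (y(south) - y(north) + 1)

# Rough continental-US box (my assumption): west, south, east, north
usa = (-125.0, 24.0, -66.0, 50.0)
total = sum(tiles_in_bbox(*usa, z) for z in range(0, 17))
print(f"tiles for zooms 0-16: {total:,}")
```

The total runs into the tens of millions of tiles; since each tile renders independently, splitting the list across 1000 machines is trivial, and the only thing that scales with machine count in the open source case is the EC2 bill.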

The whole area of virtual rentable infrastructure is a place where old fashioned proprietary licensing has a problem.

Train Dreams

The web is littered with half-baked train “projects” that need little more than some love, public support, and a couple billion dollars to get off the ground. Vector1 gives us a particularly fluffy example today, the solar train. But why stop there? Here’s a gallery of not-gonna-happen/didn’t-happen rail “proposals” culled in just a few minutes from the Goog:

Some of these attempts actually spent millions (like the Texas TGV), but they all met/are meeting the same fate. Millions won’t do it, try billions, guys. That’s what it took to build the continental railway system in the first place, in a financial bubble worthy of the dot-com era – and don’t forget, generous government subsidies via land grants.

Meanwhile, freight rail in North America is going strong, stronger than Europe, even! But before passenger rail takes off, North Americans need to be willing to park their cars. So far, the gas price surge has only generated a surge of news stories on more efficient cars, electric cars, and small cars, but very few stories about not using your car.

Get out of the office much?

In an article about the future of DeCarta, and other DeCarta-flavored beverages, the Virtual Earth Blog serves up this opinion-nugget:

If your web based map app calls for supreme control and customization of cartography you historically would build your own cluster around ESRI’s universe of software and get to coding. For small- to mid-sized apps this was OK assuming you could make the development investment, but it broke down when scaling forced you to build out that cluster. This is where hosted solutions like Virtual Earth come in…

He goes on to beat up DeCarta/Telcontar for having the pricing sensibilities of the ESRI solution (high) with the cartographic flexibility of Virtual Earth (low).

Somehow, he missed the option where you build your customized cluster on open source, and scale it out for the cost of commodity hardware. He also skipped past the fact that Virtual Earth and Google Maps have a pricing advantage over DeCarta/Telcontar because they are currently giving their services away at a loss, and that is going to go on… forever?

In the business of building potentially high volume geospatial sites, my feeling is that you have two choices, just like the Virtual Earth blogger; it’s just that my two options are different from his: hosted (Google/Microsoft/Yahoo/DeCarta), or open source. Anything else will break the bank when scaling time comes.

Sick, sick, sick

Sean has started posting early-season vegetable porn… time to head to the back yard to get some pictures of my hotties (oh, hoping for baby fingerling potatoes with aioli tomorrow).