Titanic Exhibit @ Royal BC Museum

For FOSS4G 2007 we have rented several galleries at the Royal BC Museum for a stand-up reception amongst the exhibits. It should be very cool.

The Museum is currently hosting a “special traveling exhibit” of artifacts from the Titanic, but the cost of renting the Titanic gallery was exorbitant (more than the rest of the galleries put together), so we did not rent it. Thank goodness!

This weekend was rainy, so I took my daughter to the Museum on Sunday, and the price of getting in included the Titanic exhibit. So in we went. In fairness to the exhibitors, they did their best with what they had to work with. But let’s remember what those things are: artifacts barely 100 years old, and pretty ordinary ones at that; the limited selection of things they could drag up from two miles under the ocean that had not been completely pulped by time.

“Wow, what an attractive chamber pot.” “Goodness, who knew they had taps back then.” “Hm, a bottle full of an unknown fluid.”

If you want a visceral experience of the Titanic… watch James Cameron’s blockbuster movie. There is also an IMAX movie with underwater shots of the wreck showing at the theater attached to the Museum, which might be a bit more impressive.

While the Titanic exhibit is garnering all the hoopla, upstairs in the First Nations gallery is where the really good stuff is (we’ve got that one rented)! 100-year-old totem poles, exquisite art, great historical presentation. And, especially good right now, they have the Dundas collection of Tsimshian art, collected 150 years ago. Really magnificent stuff, genuinely museum-worthy and well worth seeing. Sadly, the Dundas is also a traveling exhibit, and by the time FOSS4G rolls into town it will have already moved on.

REST Feature Server

Piling on this meme! Pile!

Chris Holmes has taken a stab at some of the semantics of a REST feature server, and of course, Chris Schmidt has already written one. (!!!!!)

I would like to take issue with one of Chris Holmes’ design points:

The results of a query can then just return those urls, which the client may already be caching

http://sigma.openplans.org/geoserver/major_roads?bbox=0,0,10,10

returns something like

<html>
<a href='http://sigma.openplans.org/geoserver/major_roads/5'>5</a>
<a href='http://sigma.openplans.org/geoserver/major_roads/1'>1</a>
<a href='http://sigma.openplans.org/geoserver/major_roads/3'>3</a>
<a href='http://sigma.openplans.org/geoserver/major_roads/8'>8</a>
</html>

Ouch! Resolving a query that returns 100 features would require traversing 100 URLs to pull in the resources. What if we include the features themselves in the response instead? Then we are potentially sending the client objects it already has. What is the solution?

It is already here! Tiling redux! Break the spatial plane up into squares, and assign features to every square they touch. You get nice big chunks of data that are relatively stable, so they can be cached. Each feature can also be referenced singly by URL, just as Chris suggests, but the tiled access allows you to pull many features at a time.
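
Here is a little sketch of the assignment step (Python, with a made-up fixed tile size and bounding boxes; the feature URLs are borrowed from the listing above):

from collections import defaultdict

TILE_SIZE = 10.0  # degrees per tile edge, an arbitrary choice for this sketch

def tiles_touched(minx, miny, maxx, maxy, tile_size=TILE_SIZE):
    """Yield the (column, row) index of every tile the bounding box touches."""
    for col in range(int(minx // tile_size), int(maxx // tile_size) + 1):
        for row in range(int(miny // tile_size), int(maxy // tile_size) + 1):
            yield (col, row)

def build_tile_index(features):
    """Assign each feature to every tile its bounding box touches."""
    index = defaultdict(list)
    for feature in features:
        for tile in tiles_touched(*feature["bbox"]):
            index[tile].append(feature)
    return index

# Hypothetical features, each carrying the URL of its individual resource.
features = [
    {"url": "http://sigma.openplans.org/geoserver/major_roads/5", "bbox": (8, 2, 12, 3)},
    {"url": "http://sigma.openplans.org/geoserver/major_roads/1", "bbox": (1, 1, 3, 2)},
]
index = build_tile_index(features)
# Road 5 crosses x=10, so it is assigned to both tile (0, 0) and tile (1, 0).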

What about duplication? Some features will fall in more than one tile. What about it? Given the choice between pulling 1000 features individually and removing a few edge duplicates as new tiles come in, I know which option I would choose. Because each feature in the tile includes the URL of the feature resource, it is easy to identify dupes and drop them as the tiles are loaded into the local cache.
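
And a matching sketch of the client-side merge, keyed on the feature URL so that a feature arriving in two tiles gets stored only once (again, the data structures are invented for illustration):

def merge_tile(cache, tile_features):
    """Merge a freshly fetched tile into a local cache keyed by feature URL."""
    for feature in tile_features:
        # A feature straddling a tile boundary arrives in more than one tile;
        # its URL identifies the duplicate, so later copies are simply skipped.
        cache.setdefault(feature["url"], feature)
    return cache

# Road 5 appears in two adjacent tiles, but the cache holds it only once.
tile_a = [{"url": "http://sigma.openplans.org/geoserver/major_roads/5"},
          {"url": "http://sigma.openplans.org/geoserver/major_roads/1"}]
tile_b = [{"url": "http://sigma.openplans.org/geoserver/major_roads/5"},
          {"url": "http://sigma.openplans.org/geoserver/major_roads/8"}]
cache = {}
merge_tile(cache, tile_a)
merge_tile(cache, tile_b)
assert len(cache) == 3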

Tiling on the brain, tiling on the brain…

Everybody Loves Metadata

Or, more precisely, everybody loves <Metadata>.

KML has hardly entered the OGC standards process, and already folks are getting ready to standardize what goes into the anonymous <Metadata> block.

Ron Lake thinks that … wait for it … wait for it … GML would be an “ideal” encoding to use in <Metadata>.

Chris Goad at Platial thinks that we should be doing content attribution (who made this, who owns this) in <Metadata>.

Even Google is getting into the game. The explanations of how to integrate your application schema for <Metadata> extensions into the KML schema are a nice reminder of the sort of eye-glazing details that have made life so hard for GML. Doing things right is hard.

It is particularly delicious that the very thing that makes adding information to <Metadata> fiddly is the preparation of schemas: you need metadata about the metadata you are adding to <Metadata>.
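
To make that concrete, here is a little sketch (Python, with an invented attribution namespace; the KML 2.1 namespace is quoted from memory) of what a namespaced extension inside <Metadata> might look like, and how a client that knows the extension schema could pull data out of it:

import xml.etree.ElementTree as ET

# A KML 2.1 fragment with a made-up attribution block inside <Metadata>.
DOC = """<kml xmlns="http://earth.google.com/kml/2.1">
  <Placemark>
    <name>Somewhere</name>
    <Metadata>
      <attribution xmlns="http://example.com/attribution/0.1">
        <creator>somebody</creator>
        <owner>somebody else</owner>
      </attribution>
    </Metadata>
  </Placemark>
</kml>"""

ns = {
    "kml": "http://earth.google.com/kml/2.1",
    "attr": "http://example.com/attribution/0.1",
}
root = ET.fromstring(DOC)
creator = root.find(".//kml:Metadata/attr:attribution/attr:creator", ns)
print(creator.text)
# A client that knows the attribution schema can read the block;
# one that does not simply ignores everything inside <Metadata>.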

Where will this all end? I think it will end with the Google team picking one or a few <Metadata> encodings to expose in their user interfaces (Earth and Maps). At that point all content will rapidly converge on that encoding, and the flexibility of <Metadata> will be largely ignored.

OGC Apologies

I owe an apology to OGC, and the participants of the Mass Market working group. The post below on KML includes information about preliminary decisions that are not the policy of the OGC until they are officially approved by the higher governing bodies of the OGC. So to Mark and Carl and anyone else who has more stress to deal with today than they did yesterday because of me, a heartfelt “I am very very sorry”.

KML @ OGC

Well, the Open Geospatial Consortium (OGC) Technical Committee (TC) meeting is once again proving to be the Most Boring Meeting in the Known Universe. The best part came on the first day: the plenary session talks, and the Mass Market meeting, which saw the entrance of Google Earth’s KML into the OGC.

I attempted to capture Michael Jones’ (CTO of Google) explanation verbatim, but I am not a professional stenographer, so take the following as a loose quotation:

“Why go this far [submitting KML to OGC]? So we could go further. A lot of people are using Google Earth. It is thrilling to us. It [KML] is a pretty standard format, even though it is vendor controlled. It seemed to us like the Microsoft Word .doc format. Will become more and more a de facto standard. If we could put it into a standards body, they could shape it. It would be a better outcome. We wanted [KML to be] HTML for the geospatial web. Our fledgling efforts have got us this far. We don’t want to bail on this [continuing development of KML], but will support a standards effort in the future.”

“We thought we could codify the current state [of KML]. And then over the next while work together to build a new KML that is a lot like the current KML, but would be built according to the standards body process. Between here and there, could be mid-points, stops along the way, but that is the path. This [submitting KML 2.1 to the OGC] is not the whole activity, it is the first step along the way.”

The quotation captures Jones’ excitement about the potential of KML as an interchange format, a way of sharing and linking geospatial data. The process of bringing KML into the OGC will be:

  • The OGC will develop a “position paper” explaining how KML relates to the OGC standards baseline.
  • KML 2.1 will be re-published as an official OGC “best practices document”.
  • The OGC Mass Market working group will begin working on KML 3.0.
  • In the meantime, Google may add changes to the KML 2.X stream, and the OGC Mass Market group will decide whether they are worthy of propagation into KML 3.0.

The whole process is anticipated to take less than a year.

The pivot on which all this turns is the idea that KML is similar to HTML, as a content encoding that will be consumed by multiple clients (“spatial browsing software”), not just Google Earth. The market already seems to be well on the way to that end state – there are lots of KML consumers (ArcGIS, FME, World Wind) and producers (ArcServer, PostGIS, Mapguide). Given such an emerging market, KML will be healthier and grow even faster under third-party control (OGC) than under corporate control (Google).

There were some questions after.

The OGC has spent a good deal of time and effort carefully separating concerns in its standards: there are content encoding standards (GML), styling standards (SLD), filtering standards (Filter), and behavior standards (WMS/WFS), and they have been deliberately kept separate. KML merrily stuffs all those concerns into one pot. Asked about this disconnect, Google reps were unapologetic, saying that the very combination of concerns allows for the simple user experience that makes Google Earth (and by extension KML) so compelling. I agree. Bundling the concerns together makes deployment simple, and does not preclude the overriding of styling, etc., by the client software.

I had to ask the question that was on my mind: if KML 2.1 was going to be an OGC “best practices” document, but Google was going to continue to add features to the 2.X series of KML, how would the KML 3.0 work re-converge? Basically, was the OGC just going to end up rubber-stamping whatever features Google felt like tossing into the 2.X series in the period prior to the 3.0 release?

Michael Jones answered that vendor extensions are a time-honored part of the standards development process, and that the innovation around standards is what drives them forward. Citing his time in the OpenGL process, Jones noted that companies would add proprietary extensions to their hardware that were not in the standard, but that the worthy extensions would end up pulled into the standard at the next revision. I replied that there was only one vendor in this case, Google, which made for an unhealthy balance of power. Jones said he had approached Microsoft and ESRI to join in the standardization process of KML, and would continue to invite them to the table. In the end, an OGC process with Google and invitations to the others is better than no process at all. I find it hard to disagree with that.

A similar process has driven the development of HTML – it is not always pretty, and sometimes people yell at each other, but in the end the standard moves forward and everyone has a known baseline to work against. With more and more applications using and creating KML, it is important for companies to have a solid baseline they can trust.