REST Feature Server

Piling on this meme! Pile!

Chris Holmes has taken a stab at some of the semantics of a REST feature server, and of course, Chris Schmidt has already written one. (!!!!!)

I would like to take issue with one of Chris Holmes’ design points:

The results of a query can then just return those urls, which the client may already be caching

A query returns something like

<a href=''>5</a>
<a href=''>1</a>
<a href=''>3</a>
<a href=''>8</a>

Ouch! Resolving a query that returns 100 features would require traversing 100 URLs to pull in the resources. What if we include the features themselves in the response instead? Then we are potentially sending the client objects it already has. What is the solution?

It is already here! Tiling redux! Break the spatial plane up into squares, and assign features to every square they touch. You get nice big chunks of data that are relatively stable, so they can be cached. Each feature can also be referenced singly by URL, just as Chris suggests, but the tiled access allows you to pull more than one at a time.
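The tile-assignment step can be sketched in a few lines. This is a minimal illustration, assuming a fixed-size square grid in map units; the tile size and function names are hypothetical, not from any particular server.

```python
import math

# Hypothetical tile size in map units: each tile covers a 10x10 square.
TILE_SIZE = 10.0

def tiles_for_bbox(xmin, ymin, xmax, ymax):
    """Return the (col, row) index of every tile a bounding box touches.

    A feature is assigned to each of these tiles, so a feature that
    straddles a tile edge appears in more than one tile -- that is the
    duplication discussed below, and it is harmless.
    """
    col0 = math.floor(xmin / TILE_SIZE)
    col1 = math.floor(xmax / TILE_SIZE)
    row0 = math.floor(ymin / TILE_SIZE)
    row1 = math.floor(ymax / TILE_SIZE)
    return [(c, r) for c in range(col0, col1 + 1)
                   for r in range(row0, row1 + 1)]

# A feature entirely inside one tile...
print(tiles_for_bbox(1, 1, 2, 2))    # [(0, 0)]
# ...and one crossing a tile boundary lands in two tiles.
print(tiles_for_bbox(8, 1, 12, 2))   # [(0, 0), (1, 0)]
```

Because the grid is fixed, a tile's contents only change when a feature inside it changes, which is what makes the tiles cache-friendly.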

What about duplication? Some features will fall in more than one tile. What about it? Given the choice between pulling 1000 features individually, and removing a few edge duplicates as new tiles come in, I know what option I would choose. Because each feature in the tile includes the URL of the feature resource, it is easy to identify dupes and drop them as the tiles are loaded into the local cache.
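The client-side dedup is equally simple: key the local cache on the feature URL and keep the first copy seen. A minimal sketch, assuming each feature in a tile carries the URL of its canonical feature resource (the "url" field name here is hypothetical):

```python
# Merge a newly fetched tile into the local cache, dropping any
# feature already present under the same canonical URL.
def merge_tile(cache, tile_features):
    for feature in tile_features:
        cache.setdefault(feature["url"], feature)
    return cache

cache = {}
tile_a = [{"url": "/features/5", "geom": "..."},
          {"url": "/features/8", "geom": "..."}]
tile_b = [{"url": "/features/8", "geom": "..."},  # edge duplicate
          {"url": "/features/3", "geom": "..."}]
merge_tile(cache, tile_a)
merge_tile(cache, tile_b)
print(sorted(cache))   # ['/features/3', '/features/5', '/features/8']
```

The duplicate copy of /features/8 arriving in the second tile is silently dropped, so the cache ends up with exactly one object per feature URL.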

Tiling on the brain, tiling on the brain…

Everybody Loves Metadata

Or, more precisely, everybody loves <Metadata>.

Hardly has KML entered the OGC standards process, and already folks are getting ready to standardize what goes into the anonymous <Metadata> block.

Ron Lake thinks that … wait for it … wait for it … GML would be an “ideal” encoding to use in <Metadata>.

Chris Goad at Platial thinks that we should be doing content attribution (who made this, who owns this) in <Metadata>.

Even Google is getting into the game. The explanations of how to integrate your application schema for <Metadata> extensions into the KML schema are a nice reminder of the sort of eye-glazing details that have made life so hard for GML. Doing things right is hard.
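For the curious, the extension mechanism looks something like the fragment below: you declare your own XML namespace and nest your elements inside <Metadata>. This is a hand-rolled sketch; the "myapp" namespace and its attribution elements are entirely hypothetical, and a real deployment would also need the schema-declaration plumbing that the Google documentation describes.

```xml
<kml xmlns="http://earth.google.com/kml/2.1"
     xmlns:myapp="http://example.com/myapp">
  <Placemark>
    <name>City Hall</name>
    <!-- Anything from a foreign namespace can ride along here -->
    <Metadata>
      <myapp:attribution>
        <myapp:author>Jane Mapper</myapp:author>
        <myapp:license>CC-BY</myapp:license>
      </myapp:attribution>
    </Metadata>
    <Point><coordinates>-123.36,48.43,0</coordinates></Point>
  </Placemark>
</kml>
```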

It is particularly delicious that the very thing that makes adding information to <Metadata> fiddly is the preparation of schemas: you need metadata about the metadata you are adding to <Metadata>.

Where will this all end? I think it will end with the Google Team picking one or a few <Metadata> encodings to expose in their user interfaces (Earth and Maps). At that point all content will converge rapidly on that encoding, and the flexibility of <Metadata> will be rapidly ignored.

OGC Apologies

I owe an apology to OGC, and the participants of the Mass Market working group. The post below on KML includes information about preliminary decisions that are not the policy of the OGC until they are officially approved by the higher governing bodies of the OGC. So to Mark and Carl and anyone else who has more stress to deal with today than they did yesterday because of me, a heartfelt “I am very very sorry”.


Well, the Open Geospatial Consortium (OGC) Technical Committee (TC) meeting is once again proving to be the Most Boring Meeting in the Known Universe. The best part came on the first day, with the plenary session talks and the Mass Market meeting, which saw the entrance of Google Earth's KML into the OGC.

I attempted to capture Michael Jones’ (CTO Google) explanation verbatim, but am not a professional stenographer. So take the below as a loose quotation:

“Why go this far [submitting KML to OGC]? So we could go further. A lot of people are using Google Earth. It is thrilling to us. It [KML] is a pretty standard format, even though it is vendor controlled. It seemed to us like the Microsoft Word .doc format. Will become more and more a de facto standard. If we could put it into a standards body, they could shape it. It would be a better outcome. We wanted [KML to be] HTML for the geospatial web. Our fledgling efforts have got us this far. We don’t want to bail on this [continuing development of KML], but will support a standards effort in the future.”

“We thought we could codify the current state [of KML]. And then over the next while work together to build a new KML that is a lot like the current KML, but would be built according to the standards body process. Between here and there, could be mid-points, stops along the way, but that is the path. This [submitting KML 2.1 to the OGC] is not the whole activity, it is the first step along the way.”

The quotation captures Jones’ excitement about the potential of KML as an interchange format, a way of sharing and linking geospatial data. The process of bringing KML into the OGC will be:

  • The OGC will develop a “position paper” explaining how KML relates to the OGC standards baseline.
  • KML 2.1 will be re-published as an official OGC “best practices document”.
  • The OGC Mass Market working group will begin working on KML 3.0.
  • In the meantime, Google may add changes to the KML 2.X stream, and the OGC Mass Market group will decide whether they are worthy of propagation into KML 3.0.

The whole process is anticipated to take less than a year.

The pivot on which all this turns is the idea that KML is similar to HTML, as a content encoding that will be consumed by multiple clients (“spatial browsing software”), not just Google Earth. The market already seems to be well on the way to that end state – there are lots of KML consumers (ArcGIS, FME, World Wind) and producers (ArcServer, PostGIS, Mapguide). Given such an emerging market, KML will be healthier and grow even faster under third-party control (OGC) than under corporate control (Google).

There were some questions after.

The OGC has spent a good deal of time and effort carefully separating concerns in their standards: there are content encoding standards (GML), and styling standards (SLD), and filtering standards (Filter), and behavior standards (WMS/WFS), and they have been carefully kept separate. KML merrily stuffs all those concerns into one pot. Asked about this disconnect, Google reps were unapologetic, saying that the very combination of concerns allows for the simple user experience which makes Google Earth (and by extension KML) so compelling. I agree. Bundling the concerns together makes deployment simple, and does not preclude the over-riding of styling, etc, by the client software.

I had to ask the question that was on my mind: if KML 2.1 was going to be an OGC “best practices” document, but Google was going to continue to add features to the 2.X series of KML, how would the KML 3.0 work re-converge? Basically, was the OGC just going to end up rubber-stamping whatever features Google felt like tossing into the 2.X series in the period prior to the 3.0 release?

Michael Jones answered that vendor extensions are a time honored part of the standards development process, that the innovation around standards is what drives them forward. Citing his time in the OpenGL process, Jones noted that companies would add proprietary extensions to their hardware that were not in the standard, but that the worthy extensions would end up pulled into the standard at the next revision. I replied that there was only one vendor in this case, Google, which made for an unhealthy balance of power. Jones said he had approached Microsoft and ESRI to join in the standardization process of KML, and will continue to invite them to the table. In the end, an OGC process with Google and invitations to the others is better than no process at all. I find it hard to disagree with that.

A similar process has driven the development of HTML – it is not always pretty, and sometimes people yell at each other, but in the end the standard moves forward and everyone has a known baseline to work against. With more and more applications using and creating KML, it is important for companies to have a solid baseline they can trust.

FOSS4G 2007 Registration Open!

Another milestone passed! The online registration for FOSS4G 2007 in Victoria is now open, and the conference hotel blocks are available.

Firstly, let me plead with folks who are planning to come to the conference to register early. Knowing the attendance makes all the planning so much easier. So don’t wait until the last day before Early Bird rates end to register, just do it now! Please, please, please!

Secondly, let me warn folks who really like to put things off until the last minute. When we booked our hotel blocks, the hotels curiously did not ask for any penalties in the case of our not filling the blocks. In fact, they have a schedule for taking the rooms back from us and putting them back into their general pool. The implication is that they expect to have no trouble filling all their rooms in that time period, at higher rates than we got for our block. So book your hotel early.

Finally, for those flying in from the USA via Seattle, note that there is a relatively modest number of seats available each day on the SEA->YYJ run (I calculated about six daily flights of a 40-seat plane). That is probably enough for all our US travelers coming to the conference — but of course, you won’t be the only ones going through SeaTac to Victoria that day. I will not be surprised if all those SEA->YYJ flights end up full quite early during our conference start/end periods.

Happy travels!