FOSS4G 2011 Decision Process

The 2011 FOSS4G siting process has begun. After the 2010 process, where there were too many good proposals and not enough slots, we decided to do a couple of things to lower the effort level required of bidders in aggregate.

First, we wanted to be more definitive about our regional siting plan, which is to rotate through Europe (2010), North America (2011), and elsewhere (with elsewhere being Asia (2012) in the current rotation plan). For 2011 the regional arrow points to North America, since 2010 is in Europe.

Second, we wanted to lower the bar to submitting a proposal. In 2010 we got four full proposals and could choose only one, so 75% of the bidders did a large amount of preliminary research and spade-work for no purpose. For the 2011 process, we will start with a “letter of intent” phase: a short document outlining an expression of interest. Hopefully having all the interested parties publicly declared will encourage locations to come together and form joint bids.

The OSGeo conference committee will vote on the letters and only the top two will be asked to prepare full bid documents. That would still leave one potentially disappointed party at the end of the process, but it is a big improvement over leaving three, as happened in the 2008 and 2010 processes. And hopefully the letters phase will encourage groups to consolidate bids so we won’t even have to run the final round.

The winning site will still have to submit a proposal and budget, so that the Board and Conference Committee have confidence that the local team has thought things through. But the research needed to prepare a full proposal is a necessary precursor to putting on a conference anyway, so the effort will be put to good purpose.

I’m looking forward to seeing the letters!

How to Get Your Bug Fixed

Mike Leahy is providing a textbook demonstration of bug-dogging on the PostGIS users list this week, and anyone interested in learning how to interact with a development community to get something done would do well to study it.

Some key points:

He does as much of the diagnostic work as possible himself: combing through Google for references, cutting the test data down as far as he can, and trying to find smaller cases that exercise the issue.

He responds very quickly (you’ll have to read the timestamps) to questions and suggestions for gathering more information. Since the problem appears initially to manifest only on his machine, any delay on his part risks disengaging the folks helping him.

He prepares a sample database and query so the development team can easily replicate the situation on their own machines (a sketch of that kind of self-contained test case follows after these points).

And he also gets lucky. The problem is replicable, and the discussion catches the attention of Greg Stark, who recalls and digs up some changes Tom Lane made to PostgreSQL, which in turn leads me to find the one-argument change that removes the problem. Very lucky, really; it’s unlikely I would have been able to debug it by brute force.
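
For anyone wondering what a good self-contained reproduction looks like, here is a minimal sketch: a throwaway script that builds the smallest possible table, loads a handful of rows, and runs the one query that misbehaves. The table, data, and query here are hypothetical stand-ins, not the actual issue from Mike’s thread.

```python
# Hypothetical reproduction script (not the actual bug from the thread).
# It builds the smallest table that still shows the problem, loads a few
# rows, and runs the one misbehaving query, so a developer can replicate
# the situation in seconds. Assumes psycopg2 and a local test database.
import psycopg2

conn = psycopg2.connect("dbname=bug_repro")
cur = conn.cursor()

# Smallest possible schema and data that still exercise the issue.
cur.execute("DROP TABLE IF EXISTS repro")
cur.execute("CREATE TABLE repro (id integer, label text)")
cur.executemany(
    "INSERT INTO repro VALUES (%s, %s)",
    [(1, "first"), (2, "second"), (3, "third")],
)
conn.commit()

# The one query that demonstrates the problem, and nothing else.
cur.execute("SELECT id, label FROM repro ORDER BY id")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```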

NoNoSQL

I’m really looking forward to seeing the “NoSQL” buzzword crest the top of the hype cycle and start heading downwards, since I think it’s doing real damage to what should be a fairly straightforward discussion of matching customer use cases to appropriate technology.

Because the term is framed (“No!”) in opposition to almost the entire family of existing data persistence technology, anyone who comes to the discussion fresh assumes there’s a replacement process going on, wherein NoSQL stands in opposition to SQL.

That’s a shame, because the term is only one letter away from a (slightly) less polarizing buzzword: NotSQL. Even then, though, the core discussion would be lost, because the big difference isn’t programmatic API versus 4GL. The big difference is use case matching, in particular the high-volume, high-availability use case which has emerged in the age of consumer web services.

People with public-facing web applications face a potentially unconstrained read/write load (in their happiest dreams) and the techniques necessary to scale a traditional RDBMS to match that load proceed from the straightforward at the low end to the increasingly byzantine at the high end.

The scaling story for traditional RDBMS technology just isn’t great: start by adding servers and extra technology to hook them together; get increasingly smart people to handle your increasingly complex infrastructure; and finally start hacking at your data model to allow even further partitioning and duplication.
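
To make “hacking at your data model” concrete, here is roughly what the simplest form of application-level sharding looks like: a hash of some key decides which database server owns each record, and every query in the application has to be routed accordingly. This is an illustrative sketch with assumed server names, not any particular product’s API.

```python
# Illustrative sketch of naive application-level sharding over N database
# servers: a hash of the user id picks the shard. The connection strings
# are made-up placeholders; this is not any particular library's API.
import hashlib

SHARD_DSNS = [
    "host=db0 dbname=app",
    "host=db1 dbname=app",
    "host=db2 dbname=app",
]

def shard_for(user_id: str) -> int:
    """Map a key onto one of the shards with a stable hash."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % len(SHARD_DSNS)

def dsn_for(user_id: str) -> str:
    return SHARD_DSNS[shard_for(user_id)]

# Every read and write in the application now has to pass through this
# routing step, and adding a fourth server changes the modulus, which means
# re-hashing (and physically moving) most of the existing rows.
print(dsn_for("alice"), dsn_for("bob"))
```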

The new breed of databases has a great scaling story: once you get set up, scaling requires plugging in new nodes and turning them on. That’s it. No model changes, no extra replication or high-availability technology.
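
The reason adding nodes can be that painless is the way these systems map keys to nodes. Many of the Dynamo-descended stores use consistent hashing (or something like it), so a new node claims only a slice of the keyspace instead of forcing the kind of global re-hash the sharding sketch above would need. Again, a simplified sketch (no virtual nodes or replication), not any particular product’s internals.

```python
# Sketch of consistent hashing, the kind of key placement many of the new
# distributed stores rely on: nodes and keys hash onto the same ring, and a
# key belongs to the first node clockwise from its position. A new node
# claims only the keys between it and its predecessor; everything else stays.
import bisect
import hashlib

def ring_position(value: str) -> int:
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((ring_position(n), n) for n in nodes)

    def add_node(self, node):
        bisect.insort(self._ring, (ring_position(node), node))

    def node_for(self, key: str) -> str:
        pos = ring_position(key)
        index = bisect.bisect_right(self._ring, (pos, "")) % len(self._ring)
        return self._ring[index][1]

ring = HashRing(["node-a", "node-b", "node-c"])
keys = ("user:1", "user:2", "user:3", "user:4")
before = {k: ring.node_for(k) for k in keys}
ring.add_node("node-d")                    # plug in a new node...
after = {k: ring.node_for(k) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
print(moved)                               # ...and only some keys move
```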

There’s no free lunch, though. In exchange for the high throughput and high availability, you lose the expressiveness and power of SQL. Henceforth you will write your joins and summaries yourself, at the application level. Henceforth performing an ad hoc query may require you to build and populate a whole new “table” (the terminology is highly variable at this point) in your model. And of course this technology is all pretty fresh meat, so just learning enough to get started can be a bit of a slog – kids aren’t exactly coming out of school with a course in this stuff under their belts.
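
As an illustration of what “write your joins and summaries yourself” means in practice, the sketch below does by hand what a one-line SQL join plus GROUP BY would do for you, with plain dictionaries standing in for two key-value “tables”. The data and field names are made up.

```python
# Sketch of an application-level join and summary over two key-value
# "tables" (plain dicts stand in for whatever store you are using). In SQL
# this would be a JOIN with a GROUP BY; here the application does the work.
from collections import defaultdict

users = {
    "u1": {"name": "Alice", "city": "Victoria"},
    "u2": {"name": "Bob", "city": "Calgary"},
}
orders = {
    "o1": {"user_id": "u1", "total": 25.0},
    "o2": {"user_id": "u1", "total": 10.0},
    "o3": {"user_id": "u2", "total": 40.0},
}

# Roughly: JOIN orders ON users.id = orders.user_id GROUP BY city SUM(total)
totals_by_city = defaultdict(float)
for order in orders.values():
    user = users.get(order["user_id"])   # the join, one lookup at a time
    if user is not None:
        totals_by_city[user["city"]] += order["total"]

print(dict(totals_by_city))   # {'Victoria': 35.0, 'Calgary': 40.0}
```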

So, it’s a whole new world, and if you are planning on serving an application where the numbers (hits, pages, requests, whatever) are heading into seven digits, it might be a good idea to start with this technology. (Can you tell I can’t stand using the “NoSQL” term? It’s just too awful; there really needs to be a non-pejorative name for this application category.)

Of course, there’s nothing new under the sun. Getting the best performance for a specialized use case requires (a) modifying your data model and (b) using technology that can leverage your specialized data model. This is exactly what OLAP databases have been doing for a generation to provide data analysis on multi-billion record historical databases (special data model, special technology).

Database guru Michael Stonebraker wrote a nice article about the brave new world of databases, called “One Size Fits All: An Idea Whose Time Has Come and Gone,” in 2005, and his conclusion is that we are going to see increasing fragmentation of database technology based on use case. “NoSQL” (shudder) is just the latest iteration in this process.

Meanwhile, I’ll put my oar in for the general-purpose database: it’s easy to run OLAP queries on a general-purpose database; you just can’t do it once your table size gets over a billion rows. It’s easy to run a public web site on a general-purpose database; you just can’t do it once your load gets over a million. On the other hand, it’s well nigh impossible to run even a small web site on an OLAP database, and pretty darn hard to build even a small OLAP system on a NoSQL foundation.

Horses for courses, folks, horses for courses.

For more of this kind of geekery, see the several articles linked off of “The Case for the Bit Bucket” at the Oracle Nerd blog.

Where 2.0 Drinking Game

It’s that time of year again, and I’ll be sitting in the audience with my flask (I hope you will be too!) playing the Where 2.0 drinking game. Here are some of my phrases; what are yours?

  • … find a Starbucks…
  • … we’re releasing an API …
  • … friends list …

Also, take a big slug if someone talks about working with geospatial data more complex than a lat/lon point!

Send us Jeremy and Keyur!

ESRI (pretty new web page, by the way) has put their open source position online and has also produced a short podcast with Victoria Kouyoumjian on the same topic.

http://www.esri.com/news/podcasts/audio/speaker/staff_kouyoumjian.mp3

One thing that struck me in the podcast was when Victoria noted that ESRI has sponsored open source events in the past (most notably FOSS4G 2007 directly, but also 2008 and 2009 to a lesser extent through 50°North). She says,

These events allow us the opportunity to engage in conversations and dialogs with various technologists because we want to gain feedback about the needs of open source developers and users. The objective of course is to channel this information back to development so we can reflect this in future products and business decisions in order to best support our customers.

So far ESRI attendance has been at the managerial level, and while I love those guys (hugs to Satish and Victoria!), some real sparks could fly and serious interoperability improvements could be made if we started seeing the developers (the project leads and software designers) at the events. We can do better than “channeling” information back to development; let’s immerse development in it!

Update: We promise to send them back. Really.