Concurrency for PostGIS

I recently had an opportunity to perform an experiment of sorts on an open source community, the PostGIS community. The hypothesis was: an open source community would contribute financially to a shared goal, to directly fund development, with multiple members each contributing a little to create a large total contribution. Note that this is different from a single community member funding a development they need (and others might also need but not pay for). This is a number of entities pitching in simultaneously, and really is the preferred way (I think) to share the pain and share the gain.

The story started with a post from Oleg Bartunov on the PostgreSQL hackers list:

I want to inform that we began to work on concurrency and recovery support in GiST on our’s own account and hope to be ready before 8.1 code freeze. There was some noise about possible sponsoring of our work, but we didn’t get any offering yet, so we’re looking for sponsorship! Full email…

The original noise about sponsorship had actually come from me, about 18 months earlier, when it became clear that the full table locks in the existing GiST index code were going to be a future bottleneck. So, I felt obligated to follow up! Here was a chance to see open source “values” in action: everyone would benefit from this improvement, so surely when given the opportunity to help fund it and ensure it came to fruition, contributors would leap to the fore. I wrote to postgis-users:

Oleg and Teodor are ready to attack GiST concurrency, so the time has come for all enterprise PostGIS users to dig deep and help fund this project … Refractions Research will contribute $2000 towards this development project, and will serve as a central bundling area for all funds contributed towards this project. As such, we will provide invoices that can be provided to accounts payable departments and cash cheques on North American banks to get the money overseas to Oleg and Teodor. No, we will not assess any commissions for this: we want to see this done. Full email…

The response was… Underwhelming. There is nothing like a sense of urgency to provoke motion. The problem is analogous to standing in front of a mass of people with an unpleasant task and yelling “any volunteers?” Everyone waits for someone else to step forward. As the silence stretches out, the pressure builds and eventually volunteers step forward. On an email list, though, no one can hear the silence! My second email was an attempt to make the silence palpable:

Just an update. There are 700 people on this mailing list currently. As of this morning, I have received zero (0) responses on this. Full email…

At this point the dam finally broke and multiple companies came forward to contribute. Some contributors were companies that use PostGIS in their operations and could actually anticipate the future need for improved concurrency. This was classic enlightened self-interest, and my only disappointment was that such a relatively small cross-section of the PostGIS user community was capable of or interested in seeing the long term benefit of the short term investment.

Other contributors signed on purely because it was the “right thing to do”. This was also a form of enlightened self-interest, but one with a much broader view of future personal benefit. The best example of this was Cadcorp, a spatial software company from the UK. Cadcorp makes proprietary GIS software. Cadcorp does not even use PostGIS themselves (except for testing). But, their clients do use PostGIS, and their product can act as a PostGIS data viewer and editor. Cadcorp invested in improving PostgreSQL, because it would improve PostGIS, which would improve their clients’ experience of using their software with PostGIS, which would (in the long view) enhance the value of their software. Now that is seeing the big picture.

The final totals were both quite good (about $8000 total contributions, with no individual contribution exceeding $2000) and not so good (only 9 contributing companies given 700+ members on the postgis-users mailing list).

The final technical results were great! Index recovery from system crashes now works transparently, and performance under concurrent read/write no longer scales into a brick wall.

I am pleased that this work got done, but I think the experience has taught me a few things about drumming shared contributions out of the community:

  • A sense of urgency is required. The work had to be done before the PostgreSQL 8.1 release, or a whole release cycle would go by before we had an opportunity to get this into PostgreSQL again.
  • A visible target would help. Having a “United Way thermometer” would probably have been a good thing. Having a fixed funding target would also help.
  • A well-written prospectus helps. I did not have one to start, but did one up and it was used by a number of technical community members to get approval from their non-technical managers.
  • A thick skin would help (I don’t have one). The number of people who will free-ride is very high, and even some very deep-pocketed organizations will free-ride. This is deeply annoying when you see individuals and smaller organizations coming to the table generously, but there is nothing to be done about it.

So now we are done, the press release is written, and hopefully we will have an article about this on Newsforge soon!

Simple Web Services Catalogues

Web services are a “big deal” these days, garnering lots and lots of column inches in mainstream IT magazines. The geospatial world is no exception: after years of stagnation the number of Open Geospatial Consortium (OGC) web services publicly available on the internet is starting to really explode, going from around 200 to 1000 over the last nine months.

Great news! Except… now we have to find the services so we can use them in clients. The web services mantra is “publish, find, bind”. “Publish” is going great guns, “bind” is working well with good servers and clients, but “find”… now that is another story.

The OGC has a specification for a “Catalogue Services for the Web” (CSW), which is supposed to fill in the “find” part, but:

  • It is 180 pages long;
  • You also need a “profile” for services, another 40 pages; and,
  • The profile catalogues information at a “service” level, rather than at a “layer” level.

The OGC specification is designed to handle a large number of use cases, most of which are irrelevant to spatial web services clients currently in action. By designing for the future, they may have foreclosed on the present, because there are much easier ways to get at catalogue information than via CSW.

For example, if you want a quick raw listing of spatial web services, try this link:

Instant gratification, and you did not have to read a single page of the CSW standard!

With a little Perl, some PostgreSQL and PostGIS you can turn the results of the above into a neat layer-by-layer database and allow people to query it with a very simple URL-based API:
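As a rough illustration of the harvesting step, here is a minimal sketch in Python rather than Perl, using SQLite as a self-contained stand-in for PostgreSQL/PostGIS. The server URL, table schema, and the shape of the capabilities document are all assumptions for illustration, not the actual uDig catalogue:

```python
# Hypothetical sketch: pull layer entries out of a WMS GetCapabilities
# response and load them into a flat catalogue table that a simple
# keyword query (the "URL-based API") can search.
import sqlite3  # stand-in for PostgreSQL/PostGIS, to keep the sketch self-contained
import xml.etree.ElementTree as ET

# A tiny, made-up capabilities fragment; a real harvester would fetch
# this XML from each server's GetCapabilities URL.
CAPABILITIES = """<WMT_MS_Capabilities>
  <Capability>
    <Layer>
      <Layer><Name>roads</Name><Title>Road Network</Title></Layer>
      <Layer><Name>parks</Name><Title>City Parks</Title></Layer>
    </Layer>
  </Capability>
</WMT_MS_Capabilities>"""

def harvest_layers(capabilities_xml, server_url):
    """Yield (server, name, title) for every named layer in the document."""
    root = ET.fromstring(capabilities_xml)
    for layer in root.iter("Layer"):
        name = layer.findtext("Name")
        title = layer.findtext("Title")
        if name:  # top-level grouping layers carry no Name; skip them
            yield (server_url, name, title)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE catalogue (server TEXT, name TEXT, title TEXT)")
db.executemany("INSERT INTO catalogue VALUES (?, ?, ?)",
               harvest_layers(CAPABILITIES, "http://example.com/wms"))

# A keyword search over layer titles -- the essence of the simple API.
rows = db.execute(
    "SELECT server, name FROM catalogue WHERE title LIKE ?", ("%Road%",)
).fetchall()
print(rows)  # [('http://example.com/wms', 'roads')]
```

The point of the sketch is how little machinery is needed: one table of layers keyed by server, and a `LIKE` query, get a client from keyword to loadable layer without touching the CSW specification.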

This kind of simple catalogue access became a clear necessity while we were building uDig – there were no CSW servers available with a population of publicly available services, and in any event the CSW profiles did not allow us to search for information by layer, the fundamental unit of interactive mapping.

By using this simple API and properly populated catalogue, we turned uDig into a client that implemented all the elements of the publish-find-bind paradigm, and exposed them with a drag-and-drop user interface. A user types a keyword into the uDig search panel, selects a result from the return list, and drags it into the map panel, where it automatically loads.

So where does this leave the OGC catalogue effort? Still waiting for clients. There are server implementations from the vendors who sponsored the profile effort, but the client implementations are the usual web-based show’n’tell clients, not clients cleanly integrated into actual GIS software. All web services standardization efforts have a chicken-and-egg quality to them, as the network effects of the standard do not become apparent until a critical mass of deployed clients can hit a critical mass of populated servers. Both halves of the equation are required: useful clients, and useful servers.

Open Source Adoption

I will write a technically oriented entry eventually, but right now there are some business issues I still want to explore!

Because most open source proponents are developers themselves, they tend to cast the adoption problem in terms of technical features. The theory goes: “once a product achieves a particular feature level, it will be rapidly adopted, because it is better”. This view has a couple of problems, in that:

  • it ignores switching costs entirely (both monetary and emotional); and,
  • it assumes the open source product can provide every feature required by the market.

Switching costs are almost insurmountable, regardless of the technology, and regardless of whether the alternative is proprietary or open source. Generally speaking, once a piece of technology is deployed, it is not going anywhere, until it hits end of life. Exceptions abound, of course, but it is a good rule of thumb.

Looking at the open source deployment success stories, most back up this theory: Linux and Apache grew like crazy in green fields, they were rarely replacing existing systems, they were filling in gaps and providing new capability. Again, exceptions abound, but keep your eye on the larger picture.

Feature completeness is a different beast, and it is where open source has a hard time competing with proprietary. Once an organization has gotten over the switching hump, for whatever reason, they have to decide what to switch to. They tot up the lists of features they “need”, and then put out a request to vendors to propose products that meet their list of needs. Bzzt! Strike one for open source, there is no dedicated vendor! Which means either the organization itself needs to take the initiative to bring open source to the table, or some other company who sees a side opportunity to make money has to bring it in.

Now they compare their feature “needs” with the products. Almost invariably, no product meets 100% of the needs, but that is OK, because vendors can provide a “roadmap” of future development, which hopefully covers their requirements.

Bzzt! Strike two for open source. Open source projects generally make no promises about particular features by particular dates. At best a wish list of future features (“subject to interest and time”) is provided. So open source projects which provide only partial coverage of requirements are rejected. Do not pass go, do not collect any dollars.

The problem is that the technology evaluation and standardization process in many large organizations has been shaped over the years to optimally match the properties of the existing proprietary software market:

  • products are provided “finished”, for an up front purchase price;
  • products are marketed by vendors, they do not exist in isolation;
  • products are owned and supported by a single organization;
  • the onus is on the vendor to supply most of the effort in proving the worth of the product (“respond to the following 256 categories in the Request for Proposals”), not the client to evaluate the product independently.

For an organization to start to make optimal use of open source, habits developed over decades have to be stood on their head:

  • products are only as good as the client makes them (through the strategic application of money or effort);
  • great products must be sought out by the client;
  • support may be provided by many vendors, or by the client itself, if their pockets are deep enough to hire a pool of expertise;
  • the onus is on the client to evaluate the worth of the product.

But that hardly seems like an improvement! Now the client is doing way more work! And that is a big hurdle in getting people to invert their thinking about product evaluation and adoption. In order to reap the privilege of receiving software without licensing costs, they must shoulder more responsibility for their own technological health. With great privileges come great responsibilities.

Open Source Company: Oxymoron?

People seem surprised when I tell them that our company does open source development and has a couple of large open source projects (PostGIS and uDig). They have a very valid reason for their surprise: there seems to be no reasonable way such projects could make money. In fact, they seem destined to lose money. Lots of it.

Superficially, I cannot help but agree. When I think back on the profile of clients we have added over the past several years, there does not seem to be that much work that ties back directly to either uDig or PostGIS. And yet, the company has been growing pretty steadily over that same time period, and the number of people working on the open source code in one way or another has also been growing. So clearly something is afoot.

I think the answer is in the nature of a consulting company, which is what we are. Ordinarily (open source excluded) our business proceeds in lock step with the relationships we can develop with current and potential clients. Those relationships lead to referrals, to more work, and so on. The currency of the relationship is competence and trust. We demonstrate our competence and trustworthiness on a project, and gain a referral.

What the open source effort is buying us, over time, is a proxy source of referrals and trust. By using our software for free, clients are gaining a sense of our competence. By asking us questions on mailing lists, they are also gaining a sense of trust in our reliability. So even though our direct revenue stream from PostGIS consulting is pretty small, the indirect stream of trust leads clients to contact us directly for partially related or unrelated work. We don’t necessarily book the work as “PostGIS derived” or a “uDig lead”, and over time we forget the source.

There is something mildly corrosive about this process, because it can potentially lead us to under-invest in our open source work. After all, it appears to be primarily a cost center. Recognizing the benefit requires extra work: noting where we save money by using PostGIS instead of Oracle Spatial for project work; sourcing leads back to the open source work, recording it, and recognizing the sales expense that would have been required to generate that lead in a conventional way.

Eric Raymond spelled out a number of business models for “open source companies” in his paper The Magic Cauldron and none of them really fits us well. We use open source as a loss leader (9.1), but not for proprietary software. We offer services around the open source projects (9.3), but they are not our principal source of revenue, nor are they the primary benefit the open source software provides us.

Our model is what I call the “business card model”. Our open source software projects are tokens of our wider capability in understanding and solving geospatial problems. The projects provide us with an introduction to a large number of potential clients, and a (very) small percentage will eventually contact us with work, but the cost of doing the development will still be much smaller than the conventional sales effort needed to generate the same leads. Our open source projects act like a virtual salesman, flying around slapping backs and handing out business cards. And, as an added bonus, we get cool software too!

Open source: it’s cheaper than a salesman, and you can burn it to a CD-ROM (try that with a salesman).