An open source GIS on every Desktop?

Steven Citron-Pousty has a good posting about why he cannot move his shop to an open source basis in the near term. He took a look at uDig (thanks!) but, unsurprisingly, finds that it ain’t ArcGIS quite yet. His prescription is not for the faint of heart.

If you want people to switch, you need to make the transition as painless as possible. Firefox got people to switch from IE by

  • Making better software
  • Not making users learn a new UI for interacting with the web
  • Importing all their IE favorites
  • THEN building in cool new features that keep people around

So, all we have to do is make something better than ArcGIS, but not so much better that it is not familiar to the existing user base, that works transparently with all their existing data and presumably their .mxd files too. And give it away for free.

“Just be like Firefox.” There are a couple of problems with this idea.

  • The first problem is the idea that garnering users is “the goal”. It is not. The misunderstanding is reasonable, because for proprietary companies it is the goal – more users implies more licensing dollars. For open source projects, more users just means somewhat more download bandwidth and a slightly higher number of beginner questions on the mailing lists. What open source projects want to attract is not users – it is developers. Developers will make the project stronger, add features, fix bugs, do all the things that end users want, but cannot do for themselves.
  • The second problem is that Firefox is not a normal open source project. The Mozilla Foundation has a lot of employees, most of them working on development, and a deal with Google that nets them millions of dollars each year. They can afford to be end user focused, because they have a paid pool of developers already in house.

It seems like all of the “user” success stories in the open source world (Firefox, Open Office, some of the “desktop Linux” efforts) have at their core one common feature – a large and ongoing firehose of money. Absent the firehose, it is hard to aggregate enough continuous effort to create desktop applications that include both the number of features and quality of finish necessary to entice the otherwise unmotivated “user”.

uDig has enough features and stability to be useful right now to a narrow pool of developers creating custom applications with specific toolsets. Hopefully in the future it will have more features and stability, and the pool will be less narrow. But I predict it will still be dominated by developers.

And that’s fine with me.

If Steven adopted uDig and PostGIS for his shop, it would not do a thing for my bottom line. But if he built an application around it, he would either add a little functionality (which would help me with my clients) or maybe hire us to add a little functionality (which would help the bottom line directly).

Open source is not about users, it is about developers. It is only about users in so far as users become sufficiently engaged in the project that they either become developers themselves, or support developers through careful bug finding or documentation.

The correct models are not Firefox or Open Office (unless someone wants to point a money firehose at me… I won’t object); those projects are aberrations. The correct models are Linux, Apache, Perl, PostgreSQL – not user-friendly, but still very useful.

Case Studies Considered Harmful?

Over the past month, I have been trying to compile a list of good case studies of organizations using PostGIS in their daily business. So far, it seems hard to get people out of their shells and say what they are doing – even when you promise to do all the work of writing up the story!

I know from things like the membership of the postgis-users mailing list that there are some big companies using PostGIS. Big names in the geospatial world use it for all kinds of production-oriented tasks. But apparently they do not want their stories told.

This is not a problem unique to PostGIS. Other open source projects suffer from the same “shy user” syndrome. I read the postgresql-advocacy list and often see comments to the effect that “my client is a huge company, and they love the performance they are getting from PostgreSQL, but they do not want to be publicly named”.

What will it take to get big organizations to “out” themselves?

ArcSDE comes to PostgreSQL?

It has long been rumoured that ESRI might move their “database neutral” ArcSDE to the ultimate “neutral database”, PostgreSQL. I have heard versions of this idea since around 2003, but I never thought it would come to pass. So, mea culpa to all the people I told “it will never happen”… it has!

Yes, ESRI is currently in the process of developing support for PostgreSQL. We have done all the necessary testing to ensure that this will continue to be a viable product in the future. We plan to release this capability sometime after ArcGIS 9.2.

So, what does this mean for PostGIS? Same thing it means for Oracle Spatial – not very much. ESRI may, or may not, support using PostGIS native spatial geometries as the geometry type in ArcSDE. For Oracle, the default ESRI position has always been that their SDEBINARY format performs better than SDO_GEOMETRY, so it does not sound like using native types holds any particular allure for ESRI.
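The difference is not academic. PostGIS will hand geometries back as standard OGC well-known binary (WKB) via its AsBinary() function, which any language can decode in a few lines; an opaque SDEBINARY blob offers no such entry point. A minimal sketch of decoding a WKB point (the function name is mine, and the bytes are constructed inline here rather than fetched from a database):

```python
import struct

def parse_wkb_point(wkb):
    """Decode an OGC well-known-binary (WKB) 2D point.

    Layout: 1 byte byte-order flag (0 = big-endian, 1 = little-endian),
    a 4-byte unsigned geometry type (1 = Point), then two 8-byte doubles.
    """
    endian = "<" if wkb[0] == 1 else ">"
    (geom_type,) = struct.unpack(endian + "I", wkb[1:5])
    if geom_type != 1:
        raise ValueError("not a WKB Point")
    x, y = struct.unpack(endian + "dd", wkb[5:21])
    return x, y

# A little-endian WKB encoding of POINT(-123.1 49.25), built inline for
# illustration; in practice the bytes would come from AsBinary(the_geom).
wkb = struct.pack("<BIdd", 1, 1, -123.1, 49.25)
print(parse_wkb_point(wkb))  # → (-123.1, 49.25)
```

That is the whole interoperability argument in twenty lines: a standard format means every client gets to play.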

Even if ArcSDE does support PostGIS types, the ArcSDE versioning model means that all changes to the geometries will have to be done through the SDE API, in order to ensure the versioning metadata remains consistent.

Still, from a read-only perspective, if ArcSDE does support PostGIS as a geometry type, then the following architecture becomes possible, which could represent a big opportunity for some jurisdictions:

  • (DBMS) PostgreSQL Database
  • (ESRI Pound of Flesh) ArcSDE for PostgreSQL using PostGIS geometries
  • (Desktop Editing / Cartography) ArcGIS
  • (Desktop Viewing) QGIS
  • (Analysis Engine) GRASS
  • (Web Map Publishing) Mapserver
  • (Web Feature Publishing) Geoserver

If, on the other hand, ArcSDE on PostgreSQL only supports SDEBINARY, then this will be a non-event from an open source interoperability point of view. I look forward to hearing some reports from the ESRI UC – someone buttonhole those ArcSDE developers and find out what the plan is!

The State of the State of Open Source GIS

About a week ago, my colleague Jody Garnett posted a link to a paper of mine on the OSGeo mailing list. From there, some other folks passed it on, and finally the cycle closed again, as I found a posting-about-a-posting on James Fee’s excellent blog.

The paper itself is a backgrounder for a talk I have been trying with varying degrees of success to shoehorn into 30-minute time slots for the past several years. Any student of the open source GIS world will soon realize that in order to try and talk about it coherently, a great deal will have to be left out. That said, my criteria for taking and leaving have hardly been scientific.

  • I have tried to include projects that are “important” for infrastructural reasons (like the widely re-used libraries).
  • I have tried to include projects that are “big” in terms of having more than a few developers or users.
  • I have included projects that meet neither of the above criteria, but I just personally find “neat” or “promising”.

    • “Neat” like GMT, which has a small development community and no technical ties to the rest of its tribe, but is really useful for UNIX types stringing calculations together – a great power-user tool for non-developers.
    • “Promising” like Mapnik, with a very focussed developer with some very clear design goals up front, and a shameless love of the bleeding edge.

James guesses that my choices have something to do with focussing on “the open source GIS clique that revolves around Refractions and OSGeo”, which is not correct. I include references to projects (QGIS, gvSIG, GMT, TerraLib, OpenEV, OpenMap, Mapbender) which our company has never touched, or which have no association with OSGeo. Like everyone, I write through the frame of my experience, but I do try to get beyond it occasionally.

The original post took issue either with my failure to include a “.Net tribe”, or perhaps with the whole “tribe” concept entirely.

I won’t defend the lack of a “.Net tribe” section. The document was largely written about three years ago, and updated more or less incrementally since then. When I did the initial survey, there was no .Net GIS community that minimally met the standards I was holding the other tribes to. That has changed now. Attendees at my talks this year will have heard me point out that gap in my materials.

I do defend the “tribe” concept, as a way of organizing a talk about open source GIS, because I believe that the “community” is more important than the technology. Do a little association graph of high profile members of the communities and you will find they all clump around particular groups of projects, and those groups tend to be defined by their implementation language.
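That clumping claim can be made concrete with a toy sketch: link projects that share contributors, and take the connected components of the resulting graph – the components are the “tribes”. The contributor data below is entirely made up for illustration; real input would come from commit logs and mailing lists.

```python
from collections import defaultdict

# Hypothetical contributor -> projects map, purely for illustration.
contributors = {
    "alice": ["GEOS", "PostGIS"],
    "bob":   ["PostGIS", "Mapserver"],
    "carol": ["GeoTools", "GeoServer"],
    "dave":  ["GeoServer", "uDig"],
}

# Link any two projects that share at least one contributor.
adjacency = defaultdict(set)
for projects in contributors.values():
    for p in projects:
        adjacency[p].update(q for q in projects if q != p)

def tribes(adjacency):
    """Connected components of the project graph, via depth-first search."""
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(adjacency[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

print(tribes(adjacency))
# Two clumps emerge: a C tribe (GEOS/PostGIS/Mapserver) and a
# Java tribe (GeoTools/GeoServer/uDig).
```

Even with made-up data the point holds: the grouping falls out of who works with whom, and in practice those groups line up with implementation language.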

The same seems to be true of the .Net tribe. As I read the contributor names to those projects, I find few points of reference to contributors in the other tribes, with the exception of some people who have gotten involved in the Java side just enough to port things to .Net.

Anyhow, if people are going to read my work and take it seriously, I have a responsibility to update it a little more carefully and frequently. Among the things I noted while doing talks this spring:

  • I do not have an entry for Worldwind! Oops!
  • OpenEV is pretty irrelevant these days; the world has passed it by. So it is due to be cut.
  • Same thing for WKB4J, all its functionality seems to have migrated to other places.
  • GeoAPI should probably be cut from the talk. Though it is a dependent library for GeoTools, its purpose is just too abstract to clearly explain.
  • The “web” projects are hard to talk about definitively, because there are so many and it is still so unclear where the balance of implementation is settling. And so many people just roll their own web interfaces anyways.

    • In any event, I am missing OpenLayers, which is talking about merging with ka-map.
  • I need to spend the time figuring out which parts of the .Net tribe should be included in a new .Net section.

Frankly, even with my arbitrary cutting of whole swathes of projects from my “survey”, the survey itself is getting unwieldy, which seems to call for yet another approach. I did the survey as a sort of reaction to the FreeGIS approach, which, while egalitarian, has resulted in an undifferentiated mass of entries. Comprehensiveness can be a curse.

I think my next series of talks or documents will focus on particular combinations of projects that can be stacked into solutions.

More Simple Web Services Catalogues

A couple months ago I wrote a piece about how uDig made use of a simple web services catalogue to complete the web services publish-find-bind chain of being in a nice clean transparent way. I was very proud, because at the time it was the only example of a decent desktop interface with a proper web services hook.

The times, they are a-catching-up to me! About the same time we were cobbling together our first catalogue of OGC WMS and WFS services, Jeremy Bartley at the University of Kansas was doing a similar thing, except he was mining Google for ArcIMS services. (It should come as some disappointment to boosters of the OGC that he found about 10 times as many ArcIMS servers as we found OGC WMS servers of all types, including ArcIMS.)

Bartley took the results of his mining, and like us, stuffed them into a database for searching in useful ways: by layer and keyword. He exposed the result as Mapdex. For a while, Mapdex was just an HTML web user interface, which made it an interesting novelty, but not exactly an integrated experience.

Then Bartley added a (again, simple!) web services API to the Mapdex database. Now the doors were wide open, and along came the final piece: an ArcMap toolbar that allows direct searching of Mapdex and adding of services to the ArcMap application in real time. So now uDig is no longer unique in providing this particular capability.
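For flavour, here is roughly what such a “simple” catalogue interface amounts to: a keyword search as a plain HTTP GET, returning a list of services ready to bind. The endpoint and the response shape below are hypothetical stand-ins, not Mapdex’s actual API.

```python
import json
from urllib.parse import urlencode

# Hypothetical catalogue endpoint, for illustration only.
BASE = "https://catalogue.example.com/search"

def search_url(keyword, layer=None):
    """Build a catalogue search request: a plain GET with query-string args."""
    params = {"q": keyword, "format": "json"}
    if layer is not None:
        params["layer"] = layer
    return BASE + "?" + urlencode(params)

# A hypothetical JSON response: just enough information for a client
# to bind the service straight into a desktop map.
response = json.loads("""
[{"title": "City Parcels", "type": "WMS",
  "url": "https://maps.example.com/wms?"}]
""")

print(search_url("parcels"))
# → https://catalogue.example.com/search?q=parcels&format=json
for service in response:
    print(service["type"], service["url"])
```

No SOAP, no session tokens, no forty-page binding document – which is precisely why desktop tools can hook into it in an afternoon.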

I hope this stuff is not being lost on the OGC: facts on the ground are being established, and things that work now will be adopted in favour of things that do not. Complex standards have utility, but if you want them to be adopted you need to provide a real reference implementation that people can deploy easily, without necessarily understanding the standard itself. I would suggest BSD- or MIT- licensed open source code for things like GML parsing, Catalogue client, WFS client, WMS client, OpenLS client, and so on.

The existence of freely-usable reference client code would allow people to quickly enrich their particular client applications with OGC web services ability, making the protocols more relevant and allowing the server makers (who form the bulk of the standards writing pool) to sell more of their product.
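To give a sense of scale, the core of a read-only WMS client really is small: build a GetCapabilities URL, then pull the layer names out of the response. The request parameters and element names below follow the WMS 1.1.1 specification; the sample document is a trimmed, made-up stand-in for a real server response.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def capabilities_url(server):
    """Build a standard WMS 1.1.1 GetCapabilities request URL."""
    return server + "?" + urlencode(
        {"SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetCapabilities"})

def layer_names(capabilities_xml):
    """Pull the named (requestable) layers out of a capabilities document."""
    root = ET.fromstring(capabilities_xml)
    return [layer.findtext("Name") for layer in root.iter("Layer")
            if layer.findtext("Name")]

# A trimmed, fabricated stand-in for a real capabilities response.
SAMPLE = """<WMT_MS_Capabilities version="1.1.1">
  <Capability>
    <Layer><Title>Root</Title>
      <Layer><Name>roads</Name><Title>Road Network</Title></Layer>
      <Layer><Name>parcels</Name><Title>Parcel Fabric</Title></Layer>
    </Layer>
  </Capability>
</WMT_MS_Capabilities>"""

print(capabilities_url("https://maps.example.com/wms"))
print(layer_names(SAMPLE))  # → ['roads', 'parcels']
```

Wrap that in a BSD license, add GetMap URL construction, and most of the “WMS client” box on an architecture diagram is filled in.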

Ding! Ding! Ding! Time to wake up, the future is here, and it is not using OGC standards!