28 Mar 2013
There is a good crowd of MapServer developers here in Boston, too!
Jeff McKenna, Jonas Lund Nielsen, and Sarah Ward have been working hard on the documentation tree. As of the last release, “MapServer” is no longer just MapServer: it also includes MapCache and TinyOWS. The documentation is being reorganized to make the parts of the system clearer, and to make the web site look less like a documentation page.
MapServer éminence grise Daniel Morissette spent the first part of the sprint getting familiar with the new GitHub workflow for working with MapServer: in his words, “overcoming his GitHub fear”.
Thomas Bonfort, the force behind MapServer’s migration to git last year, is making another big infrastructural change this year, migrating the build system from autotools to CMake. This should make multi-platform build support a little more transparent.
Steve Lime, the father of MapServer, has been writing up designs for improved expression handling, pushing logical expressions all the way down into the database drivers. This will make rendering more efficient, particularly for things like WFS requests, which often include expressions that could be evaluated more efficiently by the databases themselves. Steve is also working on a design to support UTFGrid as an output format.
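To make the pushdown idea concrete, here’s a toy sketch, not MapServer code, of the difference between evaluating a logical expression in the client and pushing it into the database as a WHERE clause. The “roads” table and connection string are hypothetical:

```python
# A toy illustration of expression pushdown; not actual MapServer code.
# The "roads" table and connection settings are made up for the example.
import psycopg2

conn = psycopg2.connect("dbname=gisdata")
cur = conn.cursor()

# Without pushdown: fetch every feature, evaluate the logical
# expression in the client, and throw most of the rows away.
cur.execute("SELECT gid, type FROM roads")
highways = [row for row in cur.fetchall() if row[1] == "highway"]

# With pushdown: the same expression becomes a WHERE clause, so the
# database can use its indexes and ship only the matching rows.
cur.execute("SELECT gid, type FROM roads WHERE type = %s", ("highway",))
highways = cur.fetchall()
```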
Alan Boudreault has been very busy: adding support for the “DDS” format to GDAL, which means that MapServer can now serve textures directly to NASA WorldWind; completing support for on-the-fly generation of contour lines from DEM files; and completing a design for on-the-fly line smoothing.
Stephan Meissl worked on resolving some outstanding bugs with dateline wrapping, and has been tracking down issues related to getting MapServer “officially” certified as an OGC compliant server for WFS, WMS and WCS.
Note that much of the work done by the MapServer crew at the “code sprint” has actually been design work, not code. I think this is really the best use of face-to-face time in projects. Online, design discussions can take weeks and consume hundreds of e-mails; face-to-face, ideas can be hashed out much more effectively, getting to agreed designs in just a few hours.
26 Mar 2013
Rather than try to do a day-by-day take on the sprint, this year I’ll try to outline, project by project, what folks are hoping to accomplish and who is attending.
This year we have the largest contingent of PostGIS folks ever, and work is being done on old-school PostGIS as well as the raster and geocoding portions that I usually look in on with a bit of fear and trembling.
Azavea has sent four staff to the sprint. David Zwarg has worked on PostGIS raster at previous sprints and is continuing the tradition this year, focussing on building raster overviews inside the database. Justin Walgran is updating GDAL to do EPSG code determination for arbitrary WKT strings. Adam Hinz is returning 1.5 semantics to some OGC geometry functions, and exploring an SP-GiST implementation for PostGIS. And Kenny Shepard is adding geometry validity checking and policy enforcement to the shape loader utilities. Update: Also attending is Rob Emanuele, who worked on QGIS support for PostGIS raster and the GDAL QGIS plug-in. Thank him, QGIS users!
Regina Obe has been working with Steven Woodbridge on an upgrade to the TIGER GeoCoder, integrating a C module for address normalization and a flatter scheme for storing address information. Early tests have shown this new approach to be much faster than the old PL/PgSQL code, which should make all the bulk geocoders out there very happy.
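For anyone who hasn’t used it, the entry point to the TIGER GeoCoder is the geocode() SQL function from the postgis_tiger_geocoder extension. A minimal sketch of calling it from Python; the connection string and test address are made up:

```python
# Requires a database with the postgis_tiger_geocoder extension and
# loaded TIGER data; connection details here are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=gisdata")
cur = conn.cursor()

# geocode() returns a rating (lower is better), a normalized address,
# and a point geometry for each candidate match.
cur.execute("""
    SELECT g.rating, pprint_addy(g.addy), ST_AsText(g.geomout)
    FROM geocode(%s, 1) AS g
""", ("1 Devonshire Place, Boston, MA 02109",))
print(cur.fetchone())
```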
Stephen Mather is working on polygon skeletonization, converting polygonal features like lakes and roads into their linear (centerline) equivalents.
Oslandia has sent two representatives. Olivier Courtin and Hugo Mercier are working on SFCGAL, a C wrapper around the C++ CGAL computational geometry library. SFCGAL will let PostGIS use CGAL’s 3D geometry algorithms, as well as algorithms like line Voronoi generation, which will be useful for Stephen’s skeleton project.
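As an example of what the SFCGAL bridge exposes, PostGIS builds compiled with SFCGAL support gain functions like ST_StraightSkeleton, which is exactly the kind of primitive a skeletonization workflow needs. A minimal sketch, assuming an SFCGAL-enabled PostGIS and a hypothetical connection string:

```python
# Requires a PostGIS build compiled with SFCGAL support.
import psycopg2

conn = psycopg2.connect("dbname=gisdata")
cur = conn.cursor()

# Skeletonize a long, thin polygon (think of a simplified lake or road).
cur.execute("""
    SELECT ST_AsText(ST_StraightSkeleton(
        'POLYGON((0 0, 10 0, 10 2, 0 2, 0 0))'::geometry
    ))
""")
# Returns a MULTILINESTRING tracing the polygon's internal skeleton.
print(cur.fetchone()[0])
```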
The two heavyweights of the PostGIS raster world, Pierre Racine and Bborie Park, are here. Bborie is improving the performance of expression-based map algebra functions. Always pushing the leading edge, Pierre is coordinating the raster work, and collecting up new functional routines in PL/PgSQL into a library of utilities.
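“Expression-based” map algebra means handing ST_MapAlgebra a SQL expression that is evaluated once per pixel, which is flexible but slow, hence the performance work. A sketch of the pattern using the four-argument expression form from the PostGIS 2.1 raster API; the “dem” table is hypothetical:

```python
# The "dem" table (rows with a raster column named "rast") is made up.
import psycopg2

conn = psycopg2.connect("dbname=gisdata")
cur = conn.cursor()

# Rescale a metre-valued elevation band to feet: the quoted expression
# is evaluated for every pixel, with [rast] standing in for the pixel value.
cur.execute("""
    SELECT ST_MapAlgebra(rast, 1, '32BF', '[rast] * 3.28084')
    FROM dem
    WHERE rid = 1
""")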
I’m spending time fixing my bugs in the 2.1 milestone, completing distance calculation for curved geometry types, and working on the new pointcloud extension for LIDAR storage in PostgreSQL.
26 Feb 2013
Let me preface this by saying that “wow, you guys over there are just uniquely kicking ass” is an evaluation that “you guys” almost always dispute.
I know this from personal experience, since most of the world seems to think that Canada is uniquely kicking ass in the field of open source geospatial, a myth founded on a few hundred thousand dollars in Federal funding that a couple of companies bent themselves into pretzels to obtain and use for open source development work. Often, the exterior impression is simplistic, or only takes in one data point. (If the plural of “anecdote” is not data, neither is the singular.)
All that before I go on and say: “the UK seems to be really ahead of the curve on a lot of things that are important in government and IT”. Ahead in talking about them explicitly, at the least, and perhaps in doing them too.
First, get this: Martha Lane Fox is the “UK Digital Champion”, appointed by the government, and the report is just one of the things she has produced. Imagine, a “digital champion”!
Second, check out the core recommendation: rip responsibility for web and digital interaction with citizens out of the line ministries and into a dedicated digital office.
Well, they did it, or seem to have, and the result is digital.cabinetoffice.gov.uk. It’s worth checking out the site, if only for the pictures of youthful nerd-hood, the independent job postings, and the attempt to promote a more innovative and active digital culture than one typically finds within government.
The intent is clearly to create a new centre of mass of government IT, with a more innovative and web-oriented culture, and from there push out into the line ministries over time, rather than trying to alter the whole IT culture in one go. In 20 years, will the IT leadership of the UK be made up of digital.cabinetoffice.gov.uk alumni?
Update: Check out their design principles.
23 Feb 2013
Apropos of nothing, here’s every RFP document issued for the ICM project in BC (and there were 10 different RFP/ITQ/RFI/NOI postings over the years) rendered as a word cloud.
Notice the prominence of “child” and “family”. When you get down to brass tacks, enterprise IT doesn’t much care about what you do, but it very intensely cares about how you do it.
19 Feb 2013
Today in question period, the opposition NDP in British Columbia devoted five of their six questions to information technology issues! It’s pretty rare that the snooze-fest that is IT rises to the level of political notice, so I don’t expect a repeat display in my lifetime.
Some of the questions:
A. Dix: My question is to the Minister of Citizens’ Services. An independent report on integrated case management across ministry computer systems, meant to improve services to vulnerable British Columbians, states that the system may never be fixed. A system that cost over $200 million may have to be thrown out because of mismanagement. The report says that in the bidding phases for this technology, the ministries in question were in chaos, going through so-called transformation with a revolving door of ministers. The children’s representative has said: “This report speaks to incompetent stewardship.” With such damning comments, and a lack of faith that the system can be fixed, can the minister explain what went wrong and when the government will take responsibility for this mess?
R. Austin: It’s not just in the child welfare system that this mismanagement has occurred but also in education. We know that this government spent almost $100 million on their electronic data system. What we don’t know is how much was also spent by school boards on training, implementation and licensing costs. Communities and taxpayers should not have to pay the costs of Liberal mismanagement on these colossal computer projects. Can the Minister of Education tell this House how much in total was spent by districts and the government on this failed computer program?
The systems in question, ICM and BCeSIS, cost the government about $200M and $100M respectively. Those figures are far beyond the outer edge of where we can intellectually grasp how much money is involved, or where the effort goes.
Here’s one way of understanding what a $200M project budget means: look at it in terms of people.
Assume a $200K fully loaded annual staff cost (salary, overhead, benefits, everything). That’s a big number, but let’s assume experienced professionals, paid commensurately. That means $200M equates to 1,000 person-years of effort. Or, over a 5-year project span, a 200-person staff. What could you deliver, in 5 years, with 200 highly trained professionals?
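Spelling out that back-of-envelope arithmetic:

```python
# The arithmetic from the paragraph above, nothing more.
total_budget = 200_000_000       # $200M project budget
cost_per_person_year = 200_000   # fully loaded annual cost per person

person_years = total_budget / cost_per_person_year  # 1,000 person-years
staff_for_5_years = person_years / 5                # 200 full-time staff

print(person_years, staff_for_5_years)  # 1000.0 200.0
```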
Here’s another way of looking at what $200M spent on IT means. Over the four year period from 2003 to 2007, Facebook grew from Mark Zuckerberg’s dorm project to a company with 50 million active users. In that time, during which Facebook had no revenue, investors put just under $40M into the company. So it cost $40M to build the 2007-era Facebook.
So, $200M to build a failed case management system that (poorly) serves a working population of a few thousand civil servants, against $40M to build an intricate social network that serves a population of 50 million.
There’s no way to get around it: enterprise IT is hopelessly broken, and much of the money we spend on systems development is wasted. We need a new way of doing business.