BC ICM Footnote

Early this year, the ICM project released a system assessment report from Queensland Consulting which was seized on by the BCGEU and others as showing ICM to be “deeply flawed”, with serious questions remaining about the software’s suitability for child protection work.

I found two things curious about the Queensland report: first, it was willing to talk about the failings of COTS methodology in a way that is rare in the stodgy, conservative world of enterprise IT; and second, a large number of the process recommendations were reiterations of a previous report, a “Readiness Assessment” by Jo Surich, Victoria tech eminence grise and former BC CIO. If the Surich report was so good that Queensland was quoting from it rather than writing their own material, I wanted to read it, so I filed an FOI request.

You can read the full Surich report now, on the BC open info site.

I found two items of particular interest in Surich’s report.

First, the date. Surich reported out in April of 2011. Over a year later, the Queensland consultants were reiterating many of his recommendations. Surich was not listened to. Since a second set of consultants saw the same problems Surich did, letting his recommendations gather dust was a tactical error, to say the least.

Second, the big picture plan. Surich notes that, in addition to replacing all the tracking and reporting systems used in child protection, the Ministry was simultaneously changing the practice model (the “business process”, in the usual IT terminology) that social workers use for their cases.

Simultaneously changing the business process and technology is a time-honoured and widely replicated failure mode in enterprise IT development, because it makes so much sense. If you’re changing the business process, you’ll need to change the systems to match the business process. So, why not replace the systems at the same time as you change the business process?

Lots of reasons!

  • Your mutating business process will constantly change the system requirements underneath you, resulting in lots of back-tracking and re-coding.
  • You won’t know if your business process is bad until you deliver your system, which will then require further system changes as you again alter the business process.
  • You double down on changes your staff need to ingest, in both tooling and methodology, and triple down as you add in fixes to business process after deployment.
  • It’s been done before, and it’s led to some epic, epic failures.

As I learn more about the background to ICM, I have to ask myself if I would have done any differently. Particularly given the timelines and promises that backstop the huge capital commitment that gave birth to ICM, I find myself saying “no”, I’d have made the same (similar) mistakes. I’d have walked down a very similar path. Probably not using COTS, but still trying to do business process and technology at once, trying to deliver a complete replacement system instead of evolving existing ones. Taking a set of rational, correct, defensible decisions leading down a dead-end path to failure.

The Microsoft Era

In many ways, the “Microsoft Era” has been over for quite some time. The most important developments in technology, the ones that change the way we work as IT practitioners, have been coming from other organizations for at least five years, maybe a decade (first open source, then the cloud companies).

But Microsoft has held on, and even garnered some of the aura of the scrappy underdog as it tries to compete in the new “network is the computer” (so close, Sun: right slogan, wrong decade) world. The reason Microsoft has been able to hang in this fight for so long is the continuing “price of doing business” revenue it has been able to extract from its operating system and office automation franchises.

Why, despite its continuing technological failures to deliver useful new functionality into its offerings, is Microsoft still hanging in there, still receiving billions of dollars a year from operating system and office software?

Because it’s good enough, and because there is no alternative.

More realistically, because we BELIEVE it is good enough, and there is no alternative. As long as we believe that, we won’t spend any time evaluating alternatives to see if they too are good enough.

It is the self-reinforcing belief that Microsoft produces and supports good enough software, and has the business continuity to CONTINUE to produce and support good enough software, that allows conservative IT managers to get to sleep at night, safe in the knowledge that they have backed a winner.

But what if they are backing a loser? Or, more to the point, what if they begin to BELIEVE they are backing a loser?

They are going to start looking for alternatives. And suddenly Microsoft will BE a loser. And the feedback loop will intensify.

All this to emphasize how significant it is that even ZDNet, yes, ZDNet, is starting to lose faith in the market dominance of Microsoft.

I think Microsoft could continue to dominate the important, but no longer growing, desktop market for years, even decades to come. However, I don’t think they will.

The analysts have already tracked the decline of Windows relative to tablet and phone operating systems. The CIOs are working on “bring your own device” policies, which will liberate countless desktops from Microsoft monoculture. The trends are not good, and as the trends are publicized more and more, they will only get worse.

Bye bye Microsoft, I wish I could say I’ll miss you.

Boston Code Sprint: MapServer

There is a good crowd of MapServer developers here in Boston, too!

Jeff McKenna, Jonas Lund Nielsen, and Sarah Ward have been working hard on the documentation tree. Since the last release, “MapServer” hasn’t been just MapServer anymore: it also includes MapCache and TinyOWS. The documentation is being reorganized to make the parts of the system clearer, and to make the web site look less like a documentation page.

MapServer eminence grise Daniel Morissette spent the first part of the sprint getting familiar with the new GitHub workflow for working with MapServer: “overcoming his GitHub fear”, in his words.

Thomas Bonfort, the force behind MapServer’s migration to git last year, is making another big infrastructural change this year, migrating the build system from autotools to CMake. This should make multi-platform build support a little more transparent.

Steve Lime, the father of MapServer, has been writing up designs for improved expression handling, pushing logical expressions all the way down into the database drivers. This will make rendering more efficient, particularly for things like WFS requests, which often include expressions that could be more efficiently evaluated by databases. Steve is also working on a design to support UTFGrid as an output format.
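Pushing an expression “down” just means translating the filter into something the database can evaluate itself, rather than fetching every feature and testing the expression in MapServer. Here is a minimal sketch of the idea (illustrative only; the function names and expression shape are my assumptions, not Steve’s actual design):

```python
# Illustrative sketch (not MapServer code): instead of evaluating a
# filter expression row-by-row in the application, compile it to a
# SQL WHERE clause and let the database do the filtering.

def compile_expression(expr):
    """Compile a tiny (op, lhs, rhs) expression tree to a SQL fragment."""
    op, lhs, rhs = expr
    if op in ("AND", "OR"):
        # logical nodes recurse into both sides
        return f"({compile_expression(lhs)} {op} {compile_expression(rhs)})"
    if op in ("=", "<", ">", "<=", ">="):
        # comparison nodes: quote string literals, pass numbers through
        value = f"'{rhs}'" if isinstance(rhs, str) else str(rhs)
        return f"({lhs} {op} {value})"
    raise ValueError(f"unsupported operator: {op}")

# A WFS-style filter: population > 10000 AND type = 'city'
filter_expr = ("AND", (">", "population", 10000), ("=", "type", "city"))
where = compile_expression(filter_expr)
sql = f"SELECT * FROM places WHERE {where}"
```

The payoff is that the database can use its indexes to satisfy the WHERE clause, instead of MapServer reading and discarding thousands of non-matching features.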

Alan Boudreault has been very busy: adding support for the “DDS” format to GDAL, which means that MapServer can now serve textures directly to NASA WorldWind; completing support for on-the-fly generation of contour lines from DEM files; and completing a design for on-the-fly line smoothing.

Stephan Meissl worked on resolving some outstanding bugs with dateline wrapping, and has been tracking down issues related to getting MapServer “officially” certified as an OGC compliant server for WFS, WMS and WCS.

Note that much of the work done by the MapServer crew at the “code sprint” has actually been design work, not code. I think this is really the best way to make use of face-to-face time in projects. Online, design discussions can take weeks and consume hundreds of e-mails. Face-to-face, ideas can be hashed out much more effectively, getting to agreed designs in just a few hours.

Boston Code Sprint: PostGIS

Rather than try to do a day-by-day take on the sprint, this year I’ll try to outline project-by-project what folks are hoping to accomplish and who is attending.

This year we have the largest contingent of PostGIS folks ever, and work is being done on old-school PostGIS as well as the raster and geocoding portions that I usually look in on with a bit of fear and trembling.

Azavea has sent four staff to the sprint. David Zwarg has worked on PostGIS raster at previous sprints and is continuing the tradition this year, focussing on building raster overviews inside the database. Justin Walgran is updating GDAL to do EPSG code determination for arbitrary WKT strings. Adam Hinz is returning 1.5 semantics to some OGC geometry functions, and exploring an SP-GiST implementation for PostGIS. And Kenny Shepard is adding geometry validity checking and policy enforcement to the shape loader utilities. Update: and Rob Emanuele, who worked on QGIS support for PostGIS raster and the GDAL QGIS plug-in. Thank him, QGIS users!

Regina Obe has been working with Steven Woodbridge on an upgrade to the TIGER GeoCoder, integrating a C module for address normalization and a flatter scheme for storing address information. Early tests have shown this new approach to be much faster than the old PL/PgSQL code, which should make all the bulk geocoders out there very happy.
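For readers unfamiliar with what a geocoder’s “normalization” step does, here is a toy sketch (purely illustrative; the field names and suffix table are my inventions, with no relation to the actual C module’s internals): free-form address text gets broken into the flat fields that can be matched against street data.

```python
import re

# Toy illustration of address normalization (NOT the TIGER GeoCoder
# code): split a free-form address into flat, canonical fields.
SUFFIXES = {"st": "Street", "ave": "Avenue", "rd": "Road", "blvd": "Boulevard"}

def normalize(address):
    """Split '123 Main St' into (number, street name, canonical suffix)."""
    m = re.match(r"^\s*(\d+)\s+(.*?)\s+(\w+)\.?\s*$", address)
    if not m:
        return None  # unparseable input; a real normalizer tries much harder
    number, name, suffix = m.groups()
    return (int(number), name.title(), SUFFIXES.get(suffix.lower(), suffix.title()))
```

Doing this kind of string-chopping in C, over a flat table, is exactly the sort of work that is painfully slow in PL/PgSQL and fast in a compiled module, which is where the bulk-geocoding speedup comes from.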

Stephen Mather is working on polygon skeletonization, to convert lakes and roads to linear equivalent features.

Oslandia has sent two representatives. Olivier Courtin and Hugo Mercier are working on SFCGAL, a C library wrapper around the C++ CGAL computational geometry library. SFCGAL will let PostGIS use the CGAL 3D geometry algorithms, as well as algorithms like line Voronoi generation, which will be useful for Stephen’s skeleton project.

The two heavyweights of the PostGIS raster world, Pierre Racine and Bborie Park, are here. Bborie is improving the performance of expression-based map algebra functions. Always pushing the leading edge, Pierre Racine is coordinating the raster work, and collecting up new functional routines in PL/PgSQL into a library of utilities.

I’m spending time fixing my bugs in the 2.1 milestone, completing distance calculation for curved geometry types, and on the new pointcloud extension for LIDAR storage in PostgreSQL.

digital.cabinetoffice.gov.uk

Let me preface this by saying that “wow, you guys over there are just uniquely kicking ass” is an evaluation that “you guys” almost always dispute.

I know this from personal experience, since most of the world seems to think that Canada is uniquely kicking ass in the field of open source geospatial, a myth founded on a few hundred thousand dollars in Federal funding that a couple of companies bent themselves into pretzels to obtain and use for open source development work. Often, the exterior impression is simplistic, or only takes in one data point. (If the plural of “anecdote” is not data, neither is the singular.)

All that before I go on and say “the UK seems to be really ahead of the curve on a lot of things that are important in government and IT”. In talking about them explicitly, at the least, and perhaps in doing them too.

First, get this: Martha Lane Fox is the “UK Digital Champion”, appointed by the government, and the report is just one of the things she has produced. Imagine, a “digital champion”!

Second, check out the core recommendation: rip responsibility for web and digital interaction with citizens out of line ministries, into a dedicated digital office.

Well, they did it, or seem to have done, and the result is digital.cabinetoffice.gov.uk. It’s worth checking out the site, if only for the pictures of youthful nerd-hood, the independent job postings, the attempt to promote a more innovative and active digital culture than one typically finds within government.

The intent is clearly to create a new centre of mass of government IT, with a more innovative and web-oriented culture, and from there push out into the line ministries over time, rather than trying to alter the whole IT culture in one go. In 20 years, will the IT leadership of the UK be made up of digital.cabinetoffice.gov.uk alumni?

Update: Check out their design principles.