BC IT in the New New Era

Every change of government provides an opportunity for substantial change in either broad or particular areas of policy, even ones that don’t involve a change in party. On taking the Premiership two years ago, Christy Clark ratified and amplified the nascent open data policies her predecessor had been toying with, which for techno-weenies (guilty as charged) constituted a significant positive change. (Non-techno-weenies will note the degradation in support for open government in general, but we’ll ignore them because they don’t like computers.)

Now, after a surprise victory that gave her full control of her party and vanquished her enemies both outside and within it, will Christy Clark bring in any big changes in the world of BC government IT? My guess is “no”, or “yes, if a lot more of the same constitutes a change”.

The big tell is in today’s news release on the new cabinet, which, in addition to the political leadership (yawn!), lists the roster of Deputy Ministers, the administrative leadership.

Tell #1: Bette-Jo Hughes becomes the CIO. No longer acting (?), the architect of the IBM desktop services (we use the term loosely) agreement continues at the OCIO. So, continuity of policy in the boring but important worlds of procurement and standards, and presumably in major initiatives like the BC ID card as well.

Tell #2: Former CIO Dave Nikolejsin is plucked from purgatory at the Environmental Assessment Office and plunked down as DM of Energy and Mines (and Core Review). This one I think is far more interesting.

The “core review” gets to poke its fingers into any and every program in government. What will the man who brought us ICM, the BC Services Card, and the upcoming “one stop” natural resources system rebuild do when placed in charge of a review of all government programs?

I don’t know, but I have a theory: it will somehow involve Deloitte & Touche. Nikolejsin’s other initiatives have all been big-ticket, vendor-led enterprises, so it seems reasonable to conclude that D&T’s blend of management consultantese and IT bafflegabbery will prove irresistible in this case too. Think of all the business processes that can be streamlined and integrated with a mere $100M in IT capital spend! The savings on paperclips alone…!

One thing I’m fairly certain of: there will be no shortage of IT boondoggles to write about over the next four years. With old boondoggles still in train (ICM), older boondoggles rebooting (BCeSIS), new boondoggles in the works (natural resources) and top-secret boondoggles waiting to come to light (eHealth), all of them nine figures and up, the world of BC government IT will not disappoint.

BC ICM Footnote

Early this year, the ICM project released a system assessment report from Queensland Consulting, which was seized on by the BCGEU and others as showing that ICM is “deeply flawed” and that “serious questions remain regarding the software’s suitability for child protection work”.

I found two things curious about the Queensland report: first, it was willing to talk about the failings of COTS methodology in a way that is rare in the stodgy, conservative world of enterprise IT; and second, a large number of the process recommendations were reiterations of a previous report, a “Readiness Assessment” by Jo Surich, Victoria tech eminence grise and former BC CIO. If the Surich report was so good that Queensland was quoting from it rather than writing their own material, I wanted to read it, so I filed an FOI.

You can read the full Surich report now, on the BC open info site.

I found two items of particular interest in Surich’s report.

First, the date. Surich reported out in April of 2011. Over a year later, the Queensland consultants were reiterating many of his recommendations. Surich was not listened to, and since a second set of consultants saw the same problems he did, letting his recommendations gather dust was a tactical error, to say the least.

Second, the big-picture plan. Surich notes that, in addition to replacing all the tracking and reporting systems used in child protection, the Ministry was simultaneously changing the practice model (the “business process”, in the usual IT terminology) that social workers use for their cases.

Simultaneously changing the business process and the technology is a time-honoured and widely replicated failure mode in enterprise IT development, because it makes so much sense. If you’re changing the business process, you’ll need to change the systems to match it. So why not replace the systems at the same time as you change the business process?

Lots of reasons!

  • Your mutating business process will constantly change the system requirements underneath you, resulting in lots of back-tracking and re-coding.
  • You won’t know if your business process is bad until you deliver your system, which will then require further system changes as you again alter the business process.
  • You double down on the changes your staff need to absorb, in both tooling and methodology, and triple down as you add fixes to the business process after deployment.
  • It’s been done before, and it’s led to some epic, epic failures.

As I learn more about the background to ICM, I have to ask myself whether I would have done anything differently. Particularly given the timelines and promises that backstop the huge capital commitment that gave birth to ICM, I find myself saying “no”: I’d have made the same (or similar) mistakes. I’d have walked down a very similar path. Probably not using COTS, but still trying to do business process and technology at once, trying to deliver a complete replacement system instead of evolving the existing ones. Taking a set of rational, correct, defensible decisions leading down a dead-end path to failure.

The Microsoft Era

In many ways, the “Microsoft Era” has been over for quite some time. The most important developments in technology, the ones that change the way we work as IT practitioners, have been coming from other organizations for at least five years, maybe a decade (first open source, then the cloud companies).

But Microsoft has held on, and even garnered some of the aura of the scrappy underdog as it tries to compete in the new “network is the computer” (so close, Sun: right slogan, wrong decade) world. The reason Microsoft has been able to hang in this fight for so long is the continuing “price of doing business” revenue it has been able to extract from its operating system and office automation franchises.

Why, despite its continuing failure to deliver useful new functionality into its offerings, is Microsoft still hanging in there, still receiving billions of dollars a year from operating system and office software?

Because it’s good enough, and because there is no alternative.

More realistically, because we BELIEVE it is good enough, and there is no alternative. As long as we believe that, we won’t spend any time evaluating alternatives to see if they too are good enough.

It is the self-reinforcing belief that Microsoft produces and supports good enough software, and has the business continuity to CONTINUE to produce and support good enough software, that allows conservative IT managers to get to sleep at night, safe in the knowledge that they have backed a winner.

But what if they are backing a loser? Or, more to the point, what if they begin to BELIEVE they are backing a loser?

They are going to start looking for alternatives. And suddenly Microsoft will BE a loser. And the feedback loop will intensify.

All this is to underline how significant it is that even ZDNet, yes, ZDNet, is starting to lose faith in Microsoft’s market dominance.

I think Microsoft could continue to dominate the important, but no longer growing, desktop market for years, even decades to come. However, I don’t think they will.

The analysts have already tracked the decline of Windows relative to tablet and phone operating systems. The CIOs are working on “bring your own device” policies, which will liberate countless desktops from Microsoft monoculture. The trends are not good, and as the trends are publicized more and more, they will only get worse.

Bye bye Microsoft, I wish I could say I’ll miss you.

Boston Code Sprint: MapServer

There is a good crowd of MapServer developers here in Boston, too!

Jeff McKenna, Jonas Lund Nielsen, and Sarah Ward have been working hard on the documentation tree. Since the last release “MapServer” hasn’t been just MapServer anymore: it also includes MapCache and TinyOWS. The documentation is being reorganized to make the parts of the system clearer, and to make the web site look less like a documentation page.

MapServer eminence grise Daniel Morissette spent the first part of the sprint getting familiar with the new GitHub workflow for working with MapServer: “overcoming his GitHub fear”, in his words.

Thomas Bonfort, the force behind MapServer’s migration to git last year, is making another big infrastructural change this year, migrating the build system from autotools to CMake. This should make multi-platform build support a little more transparent.

Steve Lime, the father of MapServer, has been writing up designs for improved expression handling, pushing logical expressions all the way down into the database drivers. This will make rendering more efficient, particularly for things like WFS requests, which often include expressions that could be more efficiently evaluated by databases. Steve is also working on a design to support UTFGrid as an output format.
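
To make the idea concrete, here’s a minimal, entirely hypothetical Python sketch of what “pushing an expression down” means in practice: rather than evaluating a filter in the map engine after fetching every feature, the filter is rewritten as SQL and handed to the database, where indexes can do the work. None of this is MapServer’s actual code; the tiny filter grammar and helper names are invented for illustration.

```python
# A hand-wavy sketch of expression push-down: translate a simple filter into
# SQL so the database (and its indexes) do the work, instead of pulling every
# feature back and filtering in the map engine. Grammar and helpers are
# hypothetical, purely illustrative.

OPS = {"eq": "=", "ne": "<>", "lt": "<", "gt": ">", "like": "LIKE"}

def translate_filter(attribute, op, value):
    """Render one (attribute, operator, value) filter as a SQL predicate."""
    if isinstance(value, str):
        value = "'%s'" % value.replace("'", "''")  # naive quoting, demo only
    return '"%s" %s %s' % (attribute, OPS[op], value)

def build_query(table, filters):
    """Build a query whose WHERE clause is evaluated inside the database."""
    where = " AND ".join(translate_filter(a, o, v) for a, o, v in filters)
    return 'SELECT * FROM "%s" WHERE %s' % (table, where)

# A WFS-style request for large parcels of a given zoning class becomes one
# indexed query instead of a client-side scan over the whole table:
print(build_query("parcels", [("zone", "eq", "RS-1"), ("area", "gt", 10000)]))
```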

Alan Boudreault has been very busy: adding support for the “DDS” format to GDAL, which means that MapServer can now serve textures directly to NASA WorldWind; completing support for on-the-fly generation of contour lines from DEM files; and completing a design for on-the-fly line smoothing.
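
As an aside, the contouring operation itself is something GDAL already exposes, so a rough sketch of the underlying call (using the GDAL/OGR Python bindings; the file names are placeholders, and MapServer’s on-the-fly version is of course implemented in C) looks something like this:

```python
# A rough sketch of generating contour lines from a DEM band with GDAL/OGR.
# File names are placeholders; this only illustrates the underlying operation.
from osgeo import gdal, ogr, osr

dem = gdal.Open("dem.tif")              # placeholder DEM
band = dem.GetRasterBand(1)

srs = osr.SpatialReference()
srs.ImportFromWkt(dem.GetProjection())

drv = ogr.GetDriverByName("ESRI Shapefile")
out = drv.CreateDataSource("contours.shp")
layer = out.CreateLayer("contours", srs, ogr.wkbLineString)
layer.CreateField(ogr.FieldDefn("id", ogr.OFTInteger))    # field index 0
layer.CreateField(ogr.FieldDefn("elev", ogr.OFTReal))     # field index 1

# 10-unit contour interval, no fixed levels, honour nodata if the band has one
nodata = band.GetNoDataValue()
gdal.ContourGenerate(band, 10.0, 0.0, [], nodata is not None,
                     nodata or 0.0, layer, 0, 1)
out = None  # close and flush the shapefile
```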

Stephan Meissl worked on resolving some outstanding bugs with dateline wrapping, and has been tracking down issues related to getting MapServer “officially” certified as an OGC compliant server for WFS, WMS and WCS.

Note that much of the work done by the MapServer crew at the “code sprint” has actually been design work, not code. I think this is really the best way to make use of face-to-face time in projects. Online, design discussions can take weeks and consume hundreds of e-mails. Face-to-face, ideas can be hashed out much more effectively, getting to agreed designs in just a few hours.

Boston Code Sprint: PostGIS

Rather than try to do a day-by-day take on the sprint, this year I’ll outline, project by project, what folks are hoping to accomplish and who is attending.

This year we have the largest contingent of PostGIS folks ever, and work is being done on old-school PostGIS as well as the raster and geocoding portions that I usually look in on with a bit of fear and trembling.

Azavea has sent four staff to the sprint. David Zwarg has worked on PostGIS raster at previous sprints and is continuing the tradition this year, focussing on building raster overviews inside the database. Justin Walgran is updating GDAL to do EPSG code determination for arbitrary WKT strings. Adam Hinz is returning 1.5 semantics to some OGC geometry functions, and exploring an SP-GiST implementation for PostGIS. And Kenny Shepard is adding geometry validity checking and policy enforcement to the shape loader utilities. Update: add Rob Emanuele to the list; he worked on QGIS support for PostGIS raster and the GDAL QGIS plug-in. Thank him, QGIS users!
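
For a sense of what that lookup involves, here’s a small sketch using the existing GDAL/OGR Python bindings to ask for an EPSG match against a WKT string. The WKT below is a tidy textbook example; real-world strings are usually messier, which is exactly why better matching logic in GDAL is welcome.

```python
# A small sketch of the lookup in question: asking GDAL/OGR's spatial
# reference machinery to identify an EPSG code for an arbitrary WKT string.
from osgeo import osr

wkt = (
    'GEOGCS["WGS 84",DATUM["WGS_1984",'
    'SPHEROID["WGS 84",6378137,298.257223563]],'
    'PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433]]'
)

srs = osr.SpatialReference()
srs.ImportFromWkt(wkt)

# try to match the definition against the EPSG database; 0 == OGRERR_NONE
if srs.AutoIdentifyEPSG() == 0:
    print(srs.GetAuthorityName(None), srs.GetAuthorityCode(None))
else:
    print("no confident EPSG match found")
```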

Regina Obe has been working with Steven Woodbridge on an upgrade to the TIGER GeoCoder, integrating a C module for address normalization and a flatter scheme for storing address information. Early tests have shown this new approach to be much faster than the old PL/PgSQL code, which should make all the bulk geocoders out there very happy.
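
If “address normalization” sounds abstract, here’s a toy Python stand-in for the kind of decomposition a normalizer does: splitting a free-form address into the flat fields a geocoder matches against. This is purely illustrative and bears no relation to the actual C module.

```python
# A toy illustration of address normalization: split a free-form street
# address into flat, comparable fields. Hypothetical stand-in for the concept.
import re

SUFFIXES = {"st": "St", "street": "St", "ave": "Ave", "avenue": "Ave",
            "rd": "Rd", "road": "Rd", "blvd": "Blvd"}

def normalize(address):
    m = re.match(r"^\s*(\d+)\s+(.*?)\s+(\w+)\.?\s*$", address)
    if not m:
        return None
    number, name, suffix = m.groups()
    return {
        "house_number": int(number),
        "street_name": name.title(),
        "street_type": SUFFIXES.get(suffix.lower(), suffix.title()),
    }

print(normalize("1207 douglas street"))
# {'house_number': 1207, 'street_name': 'Douglas', 'street_type': 'St'}
```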

Stephen Mather is working on polygon skeletonization, to convert lake and road polygons into their linear equivalents.
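
For the curious, one common approximation of a polygon skeleton can be sketched with Shapely’s Voronoi support: densify the boundary, take the Voronoi edges of the sample points, and keep only the edges that fall inside the polygon. This is an illustration of the general idea, not the CGAL-backed approach being worked on here.

```python
# A rough, illustrative approximation of a polygon "skeleton" (centreline):
# densify the boundary, build a Voronoi diagram of the sample points, and
# keep only the Voronoi edges that lie inside the polygon.
from shapely.geometry import Polygon, MultiPoint
from shapely.ops import voronoi_diagram, linemerge

def approximate_skeleton(poly, samples=50):
    boundary = poly.exterior
    points = MultiPoint([boundary.interpolate(i / samples, normalized=True)
                         for i in range(samples)])
    edges = voronoi_diagram(points, edges=True)
    # flatten to individual segments regardless of the container type returned
    parts = list(edges.geoms) if hasattr(edges, "geoms") else [edges]
    segments = []
    for part in parts:
        segments.extend(part.geoms if hasattr(part, "geoms") else [part])
    # edges fully inside the polygon approximate the centreline
    inside = [s for s in segments if s.within(poly)]
    return linemerge(inside) if inside else None

# a long thin rectangle standing in for a lake or a wide road casing
lake = Polygon([(0, 0), (100, 0), (100, 20), (0, 20)])
print(approximate_skeleton(lake))
```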

Oslandia has sent two representatives, Olivier Courtin and Hugo Mercier, who are working on SFCGAL, a C library wrapper around the C++ CGAL computational geometry library. SFCGAL will let PostGIS use CGAL’s 3D geometry algorithms, as well as algorithms like Voronoi line generation, which will be useful for Stephen’s skeleton project.

The two heavyweights of the PostGIS raster world, Pierre Racine and Bborie Park, are both here. Bborie is improving the performance of expression-based map algebra functions. Always pushing the leading edge, Pierre is coordinating the raster work and collecting new PL/PgSQL routines into a library of utilities.
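
To give a flavour of what “expression-based map algebra” means, here’s a conceptual sketch with numpy, entirely outside the database: evaluate a per-pixel arithmetic expression over a couple of raster bands. In PostGIS raster this is the job of ST_MapAlgebra; numpy just stands in to show the kind of operation whose performance is being tuned.

```python
# A conceptual, out-of-database sketch of expression-based map algebra:
# evaluate a per-pixel arithmetic expression over two raster bands.
import numpy as np

red = np.array([[0.10, 0.20], [0.30, 0.40]])
nir = np.array([[0.50, 0.60], [0.70, 0.80]])

# NDVI-style expression: ([band2] - [band1]) / ([band2] + [band1])
ndvi = (nir - red) / (nir + red)
print(ndvi)
```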

I’m spending my time fixing my bugs in the 2.1 milestone, completing distance calculation for curved geometry types, and working on the new pointcloud extension for LIDAR storage in PostgreSQL.