Victoria gets Portal'ed

For years, I assumed that, as a cash-strapped municipality carrying much of the service load of the region (Victoria is the small (80K) urban centre of a metro area (330K) comprising ten separate municipalities), my city would never have enough money to splash out on a mapping webstacle. Today, I was proven wrong.

I give you: VicMap

It’s got everything you expect from a “municipal map portal”. (For explanation of the scare quotes, please see parts 1, 2, 3, 4 and 5 of “why map portals don’t work”.)

It’s got the Silverlight requirement, and associated “loading” gadget.

It’s got the pop-up disclaimer dialogue.

And it’s got the menu of all available layers.

It’s also got a task-oriented attempt to match usage to users, a good attempt, marred somewhat by GIS terminology (“identify”, anyone?).

Most of the data is already on the city’s open data site, except the aerial photography. The only positive I can see is that the site is built with local technology.

Anyhow, now that we have a “new shiny”, I hope the City follows up and keeps track of who uses it and how, and changes appropriately. I’ve long noted the popularity of the CRD online atlas (Capital Regional District), another “portal par excellence”, but even my limited sampling has turned up a 100% use case of “viewing the hi-res imagery and parcel outlines” – the rest of it could be thrown out, or hidden in a file drawer somewhere.

Keeping metrics, honestly evaluating the utility of the site, and not being beholden to the all-in-one framework are important next steps. The site is probably not being used for what you think it is, and your users might be better served not with one big slow map, but several small fast ones.

But you won’t know unless you keep track of:

  • Usage patterns day by day, hour by hour, region by region
  • Usage of layers, which are popular and when
  • Usage of features, which are often used, which are never used
  • Usage of locality, where do people look most often, and at what layers

With that information in hand, the one big map can be taken apart into pieces that address the actual needs of users as efficiently as possible.
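As a sketch of what collecting those metrics might look like: a short Python script that tallies layer popularity out of ordinary web access logs, assuming the portal serves WMS-style GetMap requests carrying a LAYERS parameter. The log lines and parameter names here are illustrative assumptions, not anything specific to VicMap.

```python
# Minimal sketch: tally layer popularity from web access logs.
# Assumes WMS-style GetMap requests with a LAYERS query parameter;
# the log format and parameter names are assumptions, not VicMap specifics.
from collections import Counter
from urllib.parse import urlparse, parse_qs

def layer_counts(log_lines):
    """Count how often each map layer appears in map requests."""
    counts = Counter()
    for line in log_lines:
        # The request line is the first quoted field: 'GET /url HTTP/1.1'
        try:
            url = line.split('"')[1].split()[1]
        except IndexError:
            continue
        query = parse_qs(urlparse(url).query)
        # Parameter casing varies between clients
        layers = query.get("LAYERS", query.get("layers", []))
        for value in layers:
            counts.update(name.strip() for name in value.split(","))
    return counts

logs = [
    '1.2.3.4 - - [10/Apr/2013] "GET /wms?REQUEST=GetMap&LAYERS=parcels,aerial HTTP/1.1" 200 5120',
    '1.2.3.5 - - [10/Apr/2013] "GET /wms?REQUEST=GetMap&LAYERS=parcels HTTP/1.1" 200 4096',
]
print(layer_counts(logs).most_common())
```

The same loop extends naturally to the other questions: bucket by hour for usage patterns, by bounding box for locality, by REQUEST type for feature usage.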

BC IT in the New New Era

Every change of government provides an opportunity for substantial change in either broad or particular areas of policy, even ones that don’t involve a change in party. On taking the Premiership two years ago, Christy Clark ratified and amplified the nascent open data policies her predecessor had been toying with, which for techno-weenies (guilty as charged) constituted a significant and important positive change. (Non-techno-weenies will note the degradation in overall support for open government in general, but we’ll ignore them because they don’t like computers.)

Now, after a surprise victory, taking full control of her party and vanquishing her enemies both outside and within her party, will Christy Clark bring in any big changes in the world of BC government IT? My guess is “no”, or “yes, if a lot more of the same constitutes a change”.

The big tell is in today’s detailed news release on the new cabinet which in addition to the political leadership (yawn!) details the roster of Deputy Ministers, the administrative leadership.

Tell #1: Bette-Jo Hughes becomes the CIO. No longer acting (?), the architect of the IBM desktop services (we use the term loosely) agreement continues at the OCIO. So, continuity of policy in the boring but important worlds of procurement and standards, and presumably in major initiatives like the BC ID card as well.

Tell #2: Former CIO Dave Nikolejsin is plucked from purgatory at the Environmental Assessment Office and plunked down as DM of Energy and Mines (and Core Review). This one I think is far more interesting.

The “core review” gets to poke its fingers into any and every program in government. What will the man who brought us ICM, the BC Services Card, and the upcoming “one stop” natural resources system rebuild do when placed in charge of a review of all government programs?

I don’t know, but I have a theory: it will somehow involve Deloitte & Touche. Nikolejsin’s other initiatives have all been big-ticket, vendor-led enterprises, so it seems reasonable to conclude that D&T’s blend of management consultantese and IT bafflegabbery will prove irresistible in this case too. Think of all the business processes that can be streamlined and integrated with a mere $100M in IT capital spend! The savings on paperclips alone…!

One thing I’m fairly certain of, there will be no shortage of IT boondoggles to write about over the next four years. With old boondoggles still en train (ICM), older boondoggles rebooting (BCeSIS), new boondoggles in the works (natural resources) and top-secret boondoggles waiting to come to light (eHealth), all of them 9-figures and up, the world of BC government IT will not disappoint.

BC ICM Footnote

Early this year, the ICM project released a system assessment report from Queensland Consulting which was seized on by the BCGEU and others as showing ICM to be “deeply flawed”, and that “serious questions remain regarding the software’s suitability for child protection work”.

I found two things curious about the Queensland report: first, it was willing to talk about the failings of COTS methodology in a way that is rare in the stodgy conservative world of enterprise IT; and second, a large number of the process recommendations were reiterations of a previous report, a “Readiness Assessment” by Jo Surich, Victoria tech eminence grise and former BC CIO. If the Surich report was so good that Queensland was quoting from it rather than writing their own material, I wanted to read it, so I filed an FOI.

You can read the full Surich report now, on the BC open info site.

I found two items of particular interest in Surich’s report.

First, the date. Surich reported out in April of 2011. Over a year later, the Queensland consultants were re-iterating many of his recommendations. Surich was not listened to. Since a second set of consultants saw the same problems Surich did, letting his recommendations gather dust was a tactical error, to say the least.

Second, the big picture plan. Surich notes that, in addition to replacing all the tracking and reporting systems used in child protection, the Ministry was simultaneously changing the practice model (the “business process”, in the usual IT terminology) that social workers use for their cases.

Simultaneously changing the business process and the technology is a time-honoured and widely replicated failure mode in enterprise IT development, because it makes so much sense. If you’re changing the business process, you’ll need to change the systems to match the business process. So, why not replace the systems at the same time as you change the business process?

Lots of reasons!

  • Your mutating business process will constantly change the system requirements underneath you, resulting in lots of back-tracking and re-coding.
  • You won’t know if your business process is bad until you deliver your system, which will then require further system changes as you again alter the business process.
  • You double down on changes your staff need to ingest, in both tooling and methodology, and triple down as you add in fixes to business process after deployment.
  • It’s been done before, and it’s led to some epic, epic failures.

As I learn more about the background to ICM, I have to ask myself if I would have done any differently. Particularly given the timelines and promises that backstop the huge capital commitment that gave birth to ICM, I find myself saying “no”, I’d have made the same (similar) mistakes. I’d have walked down a very similar path. Probably not using COTS, but still trying to do business process and technology at once, trying to deliver a complete replacement system instead of evolving existing ones. Taking a set of rational, correct, defensible decisions leading down a dead-end path to failure.

The Microsoft Era

In many ways, the “Microsoft Era” has been over for quite some time. The most important developments in technology, the ones that change the way we work as IT practitioners, have been coming from other organizations for at least five years, maybe a decade (first open source, then the cloud companies).

But Microsoft has held on, and even garnered some of the aura of the scrappy underdog as it tries to compete in the new “network is the computer” (so close, Sun: right slogan, wrong decade) world. The reason Microsoft has been able to hang in this fight for so long is the continuing “price of doing business” revenue it has been able to extract from its operating system and office automation franchises.

Why, despite its continuing technological failures to deliver useful new functionality into its offerings, is Microsoft still hanging in there, still receiving billions of dollars a year from operating system and office software?

Because it’s good enough, and because there is no alternative.

More realistically, because we BELIEVE it is good enough, and there is no alternative. As long as we believe that, we won’t spend any time evaluating alternatives to see if they too are good enough.

It is the self-reinforcing belief that Microsoft produces and supports good enough software, and has the business continuity to CONTINUE to produce and support good enough software, that allows conservative IT managers to get to sleep at night, safe in the knowledge that they have backed a winner.

But what if they are backing a loser? Or, more to the point, what if they begin to BELIEVE they are backing a loser?

They are going to start looking for alternatives. And suddenly Microsoft will BE a loser. And the feedback loop will intensify.

All this to emphasize the significance of the fact that even ZDNet, yes, ZDNet, is starting to lose faith in the market dominance of Microsoft.

I think Microsoft could continue to dominate the important, but no longer growing, desktop market for years, even decades to come. However, I don’t think they will.

The analysts have already tracked the decline of Windows relative to tablet and phone operating systems. The CIOs are working on “bring your own device” policies, which will liberate countless desktops from Microsoft monoculture. The trends are not good, and as the trends are publicized more and more, they will only get worse.

Bye bye Microsoft, I wish I could say I’ll miss you.

Boston Code Sprint: MapServer

There is a good crowd of MapServer developers here in Boston, too!

Jeff McKenna, Jonas Lund Nielsen, and Sarah Ward have been working hard on the documentation tree. Since the last release, “MapServer” hasn’t been just MapServer anymore: it also includes MapCache and TinyOWS. The documentation is being reorganized to make the parts of the system clearer, and to make the web site look less like a documentation page.

MapServer eminence grise Daniel Morissette spent the first part of the sprint getting familiar with the new GitHub workflow for working with MapServer: “overcoming his GitHub fear”, in his words.

Thomas Bonfort, the force behind MapServer’s migration to git last year, is making another big infrastructural change this year, migrating the build system from autotools to CMake. This should make multi-platform build support a little more transparent.
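To illustrate why such a migration helps (this is a toy example of the general CMake pattern, not MapServer’s actual build configuration), a CMake build declares targets and dependencies once, and CMake generates native build files for each platform (Makefiles, Visual Studio projects, etc.) instead of relying on a POSIX-shell autotools chain:

```cmake
# Illustrative only: a toy CMakeLists.txt showing the cross-platform
# pattern CMake enables. This is NOT MapServer's actual build file.
cmake_minimum_required(VERSION 2.8)
project(mapexample C)

# Feature toggles replace hand-maintained configure switches
option(WITH_PNG "Build with PNG output support" ON)

add_executable(mapexample main.c)

# find_package locates dependencies the same way on Linux, OSX and Windows
if(WITH_PNG)
  find_package(PNG)
  if(PNG_FOUND)
    include_directories(${PNG_INCLUDE_DIRS})
    target_link_libraries(mapexample ${PNG_LIBRARIES})
  endif()
endif()
```
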

Steve Lime, the father of MapServer, has been writing up designs for improved expression handling, pushing logical expressions all the way down into the database drivers. This will make rendering more efficient, particularly for things like WFS requests, which often include expressions that could be more efficiently evaluated by databases. Steve is also working on a design to support UTFGrid as an output format.

Alan Boudreault has been very busy: adding support for the “DDS” format to GDAL, which means that MapServer can now serve textures directly to NASA WorldWind; completing support for on-the-fly generation of contour lines from DEM files; and completing a design for on-the-fly line smoothing.

Stephan Meissl worked on resolving some outstanding bugs with dateline wrapping, and has been tracking down issues related to getting MapServer “officially” certified as an OGC compliant server for WFS, WMS and WCS.

Note that much of the work done by the MapServer crew at the “code sprint” has actually been design work, not code. I think this is really the best way to make use of face-to-face time in projects. Online, design discussions can take weeks and consume hundreds of e-mails. Face-to-face, ideas can be hashed out much more effectively, getting to agreed designs in just a few hours.