Some Great Things about PostgreSQL

I spent the last few months using PostgreSQL for real work, with real data, and I’ve been really loving some of the more esoteric features. If you use PostgreSQL on a regular basis, learning these tools can make your code a lot more readable and possibly faster too.

Distinct On

A number of the tables I had to work with included multiple historical records for each individual, but I was only interested in the most recent value. That meant that every query had to start with some kind of filter to pull off the latest value for joining to other tables.

It turns out that the PostgreSQL DISTINCT ON syntax can spit out the right answer very easily:

SELECT DISTINCT ON (order_id) orders.*
FROM orders
ORDER BY orders.order_id, orders.timestamp DESC

No self-joins or subqueries here: the tuple set is sorted into id/time order, and then the DISTINCT ON clause pulls the first entry (which is the most recent, thanks to the sorting) off of each id grouping.
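
Since the original problem was pulling the latest values for joining to other tables, the DISTINCT ON query drops neatly into a CTE. A minimal sketch, assuming a hypothetical customers table and customer_id key that aren’t in the example above:

WITH latest_orders AS (
    SELECT DISTINCT ON (order_id) orders.*
    FROM orders
    ORDER BY orders.order_id, orders.timestamp DESC
)
SELECT customers.customer_name, latest_orders.*
FROM latest_orders
JOIN customers USING (customer_id)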

Filtered Aggregates

I was doing a lot of reporting, so I built a BI-style denormalized reporting table, with a row for every entity of interest and a column for every variable of interest. Then all that was left was the reporting, which rolled up results across multiple groupings. The trouble was, the roll-ups were often highly conditional: all entities with condition A but not B, compared with those with B but not A, compared with all entities in aggregate.

Ordinarily this might involve embedding a big case statement for each conditional, but with filtered aggregates we get a nice terse layout that also evaluates faster.

SELECT
    store_territory,
    Count(*) FILTER (WHERE amount < 5.0) 
        AS cheap_sales_count,
    Sum(amount) FILTER (WHERE amount < 5.0) 
        AS cheap_sales_amount,
    Count(*) FILTER (WHERE amount < 5.0 AND customer_mood = 'good') 
        AS cheap_sales_count_happy,
    Sum(amount) FILTER (WHERE amount < 5.0 AND customer_mood = 'good')
        AS cheap_sales_amount_happy
FROM bi_table
GROUP BY store_territory
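
For comparison, before FILTER arrived in PostgreSQL 9.4, the same effect required a CASE expression inside each aggregate. A sketch of the first two columns in that older style (CASE with no ELSE yields NULL, which Count and Sum both ignore):

SELECT
    store_territory,
    Count(CASE WHEN amount < 5.0 THEN 1 END)
        AS cheap_sales_count,
    Sum(CASE WHEN amount < 5.0 THEN amount END)
        AS cheap_sales_amount
FROM bi_table
GROUP BY store_territory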

I would routinely end up with 20-line versions of this query, which spat out spreadsheets that analysts were extremely happy to take and turn into charts and graphs and even decisions.

Window Functions

My mind aches slightly when trying to formulate window functions, but I was still able to put them to use in a couple places.

First, even with a window wide enough to cover the whole table, window functions can be handy! For example, adding a percentile column to an entire table:

SELECT bi_table.*, 
    ntile(100) OVER (ORDER BY amount) 
        AS amount_percentile
FROM bi_table
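
If integer buckets aren’t quite what you want, a continuous percentile is one window function away: cume_dist returns the fraction of rows at or below the current row’s value. A variant sketch:

SELECT bi_table.*,
    cume_dist() OVER (ORDER BY amount)
        AS amount_percentile_fraction
FROM bi_table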

Second, using ordinary aggregates in a window context can create some really groovy results. Want cumulative sales over store territories? (This might be better delegated to front-end BI display software, but…)

WITH daily_amounts AS (
    SELECT 
        sum(amount) AS amount,
        store_territory,
        date(timestamp) AS date
    FROM bi_table
    GROUP BY store_territory, date
)
SELECT 
    sum(amount) OVER (PARTITION BY store_territory ORDER BY date) 
        AS amount_cumulate,
    store_territory, date
FROM daily_amounts

Alert readers will note the above example won’t produce a perfect output table if there are days without any sales at all, which brings me to a cool side-note feature: PostgreSQL’s generate_series function (Regina Obe’s favourite function) supports generating time-based series!

SELECT generate_series(
    '2017-01-01'::date, 
    '2017-01-10'::date, 
    '18 hours'::interval);

Normally you’ll probably generate boring 1-day, or 1-week, or 1-month series, but the ability to generate arbitrarily stepped time series is pretty cool and useful. To solve the cumulation problem, you can generate a full series of the days of interest, left join the calculated daily amounts to it, and then cumulate over the joined result to get a clean one-value-per-day output.
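
Here’s one way that recipe might look, reusing the daily_amounts roll-up from above (the date range here is arbitrary, and the extra CROSS JOIN against the distinct territories ensures every territory gets a row for every day):

WITH days AS (
    SELECT generate_series(
        '2017-01-01'::date,
        '2017-01-31'::date,
        '1 day'::interval)::date AS date
),
daily_amounts AS (
    SELECT
        sum(amount) AS amount,
        store_territory,
        date(timestamp) AS date
    FROM bi_table
    GROUP BY store_territory, date
),
territories AS (
    SELECT DISTINCT store_territory FROM bi_table
)
SELECT
    territories.store_territory,
    days.date,
    sum(Coalesce(daily_amounts.amount, 0))
        OVER (PARTITION BY territories.store_territory
              ORDER BY days.date)
        AS amount_cumulate
FROM days
CROSS JOIN territories
LEFT JOIN daily_amounts
    ON daily_amounts.date = days.date
    AND daily_amounts.store_territory = territories.store_territory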

Left Join and Coalesce

This is not really an advanced technique, but it’s still handy. Suppose you have partial data on a bunch of sales from different sources, in different tables. You want a single output table that includes your best guess about the value. What’s the easiest way to get it? Left join and coalesce.

Start with a base table that includes all the sales you care about, left join all the potential sources of data, then coalesce the value you care about into a single output column.

SELECT
    base.order_id,
    Coalesce(oi1.order_name, oi2.order_name, oi3.order_name) 
        AS order_name
FROM base
LEFT JOIN order_info_1 oi1 USING (order_id)
LEFT JOIN order_info_2 oi2 USING (order_id)
LEFT JOIN order_info_3 oi3 USING (order_id)

The coalesce function returns the first non-NULL value among its parameters. The practical effect is that, in the case where the first two tables have no rows for a particular base record and the third does, the coalesce skips past the two NULLs and returns the value from the third. This is a great technique for compressing multiple sparse input sources into a single terse, usable output.
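
A quick illustration of the semantics:

SELECT Coalesce(NULL, 'second', 'third');
-- returns 'second', the first non-NULL parameter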

Christy Clark's $1M Faux Conference Photo-op

On February 25, 2013, Christy Clark mounted the stage at the “International LNG Conference” in Vancouver to trumpet her government’s plans to have “at least one LNG pipeline and terminal in operation in Kitimat by 2015 and three in operation by 2020”.

Notwithstanding the Premier’s desire to frame economic development as a triumph of will, and notwithstanding the generous firehosing of subsidies and tax breaks on the still nascent sector, the number of LNG pipelines and terminals in operation in Kitimat remains stubbornly zero. The markets are unlikely to relent in time to make the 2020 deadline.

And about that “conference”?

Like the faux “Bollywood Awards” that the government paid $10M to stage just weeks before the 2013 election, the “LNG in BC” conference was a government organized “event” put on primarily to advance the pre-election public relations agenda of the BC Liberal party.

In case anyone had any doubts about the purpose of the “event”, at the 2014 edition an exhibitor helpfully handed out a brochure to attendees, featuring an election night picture of the Premier and her son, under the title “We Won”.

The “LNG in BC” conference continued to be organized by the government for two more years, providing a stage each year for the Premier and multiple Ministers to broadcast their message.

The government is no longer organizing an annual LNG confab, despite their protestations that the industry remains a key priority. At this point, it would generate more public embarrassment than public plaudits.

Instead, we have a new faux “conference”, slated to run March 14-15, just four weeks before the 2017 election begins: the #BCTech Summit.

Like “LNG in BC”, the “BCTech Summit” is a government-organized and government-funded faux industry event, put on primarily to provide an expensive backdrop for BC Liberal politicking.

The BC Innovation Council (BCIC) that is co-hosting the event is itself a government-funded advisory council run by BC Liberal appointees, many of whom are also party donors. To fund the inaugural 2016 version of the event, the Ministry of Citizens Services wrote a direct award $1,000,000 contract to the BCIC.

The pre-election timing is not coincidental; it is part of a plan that dates all the way back to early 2015, when Deputy Minister Athana Mentzelopoulos directed staff to begin planning a “Tech Summit” for spring of the following year.

“We will not be coupling the tech summit with the LNG conference. Instead, the desire is to plan for the tech summit annually over the next couple of years – first in January 2016 and then in January 2017.” – email from A. Mentzelopoulos, April 8, 2015

The intent of creating a “conference” to sell a new-and-improved government “jobs plan”, and the source of that plan, was made clear by the government manager tasked with delivering the event.

“The push for this as an annual conference has come from the Premier’s Office and they want to (i) show alignment with the Jobs Plan (including the LNG conference) and (ii) show this has multi-ministry buy-in and participation.” – S. Butterworth, April 24, 2015

The event was not something industry wanted. It was not even something the BCIC wanted. It was something the Premier’s Office wanted.

And so they got it: everyone pulled together, the conference was put on, and it made a $1,000,000 loss, which was dutifully covered by the Province via the BC Innovation Council, laying the groundwork for 2017’s much more politically potent version.

This year’s event will be held weeks before the next election. It too will be subsidized heavily by the government. And as with the LNG conference, exhibitors and sponsors will plunk down some money to show their loyalty to the party of power.

The platinum sponsors of LNG in BC 2015 were almost all major LNG project proponents: LNG Canada, Pacific Northwest LNG, and Kitimat LNG. Were they, like sponsors at a normal trade conference, seeking to raise their profile among attendees? Or were they demonstrating their loyalty to the government that organized the event and then approached them for sponsorship dollars?

It is hard to avoid the conclusion that these events are just another conduit for cash in our “wild west” political culture, a culture that shows many of the signs of “systematic corruption” described by economist John Wallis in 2004.

“In polities plagued with systematic corruption, a group of politicians deliberately create rents by limiting entry into valuable economic activities, through grants of monopoly, restrictive corporate charters, tariffs, quotas, regulations, and the like. These rents bind the interests of the recipients to the politicians who create them.”

Systematically corrupt governments aren’t interested in personally enriching their members; they are interested in retaining and reinforcing their power, through a virtuous cycle of favours: economic favours are handed to compliant economic actors who in turn do what they can to protect and promote their government patrons.

The 2017 #BCTech conference already has a title sponsor: Microsoft. In unrelated news, Microsoft is currently negotiating to bring their Office 365 product into the BC public sector. If the #BCTech conference was an ordinary trade show, these two unrelated facts wouldn’t be cause for concern. But because the event is an artificially created artifact of the Premier’s Office, a shadow is cast over the whole enterprise.

Who is helping whom here, and why?

A recent article in Maclean’s included a telling quote from an anonymous BC lobbyist:

If your client doesn’t donate, it puts you at a competitive disadvantage, he adds. It’s a small province, after all; the Liberals know exactly who is funding them, the lobbyist notes, magnifying the role donors play and the access they receive in return.

As long as BC remains effectively a one-party state, the cycle of favours and reciprocation will continue. Any business subject to the regulatory or purchasing decisions of government would be foolish not to hedge their bets with a few well-placed dollars in the pocket of BC’s natural governing party.

The cycle is systematic and self-reinforcing, and the only way to break the cycle is to break the cycle.

Two Thoughts About Enterprise IT

While thinking about the failure of the Cerner roll-out in Nanaimo, where an expensive COTS electronic health record (EHR) system has been withdrawn after the medical staff rejected it, I have had a couple of thoughts.

Origins Matter

As I noted in my post, I actually expected the Cerner roll-out on Vancouver Island to go fine. Proven software, specialized company, no shortage of cash. But it didn’t go fine. Among the complaints were things you’d expect to have been ironed out of a mature product ages ago: data entry efficiency, user-friendly displays of relevant data at relevant places, plain old system speed and responsiveness.

Why would a system rolled out in multiple US hospitals display these drawbacks?

I’m wondering if some of the issue might be the origin story: Cerner is an EHR developed and deployed first in the USA. As such, one of the main value propositions of the system would be the fine-grained tracking and management of billable events. To the people purchasing the system, the hospital managers, the system would pay for itself many times over just by improving the accounting aspect of hospital management. Even if it made patient care less efficient, the improvement in billing alone could make the system fiscally worthwhile.

Recently, a colleague told me about the ambulance dispatch system BC procured a decade ago. Like Cerner, it was a US system. It handled dispatch fine, and still does. What was odd about the system was that when you cracked open the user manuals, over 75% of the material was about how to configure and manage billing. The primary IT pain point for these US organizations is the incredibly complicated accounting issues associated with their multiple-payer private health insurance system, and the system reflects that.

What I’m getting at here is that if you go to a US reference customer for one of these systems and ask “how did it work for you, was it good?”, when they answer “it is great, it has really made things better!” what they mean by “better” might not be what you might mean by “better”, at least not 100%.

Cost of Safe, Proven Systems

My second thought on the Cerner roll-out was about the timelines associated with software procurement, particularly when the goal is a “safe, proven choice”. IT managers – particularly ones who are, deep in their hearts, afraid of technology – like a safe, proven choice. They like software that someone else with a similar business is already using, in a similar way.

When you’re talking about big production systems that may take a couple of years to set up, and another couple of years to roll out progressively to a large front-line staff, there are a number of big lags:

  • If you take two years to configure and a year to roll out, the software will be three years old before your staff is all using it.
  • Your procurement process will itself probably take a year.
  • If during procurement, you only consider software that has been in operation for a year somewhere else, that adds a year.
  • The organization you are using as your reference will in turn have taken 3 years to prepare and roll-out.
  • So, assuming the software was brand new when the reference organization chose it, your staff will be getting software at least 8 years old by the time your roll-out is complete.

This is more-or-less exactly what happened to the Ministry of Children and Families in the ICM procurement.

The software that runs their basically “brand new” system (I believe final roll-out was officially finished last year) is now over a decade old, and shows it.

The reason consumer-facing systems are always au courant, lovely and fast, is because they are maintained by dedicated teams who are charged with continuous improvement. As long as we purchase systems like we build bridges, as a one-time capital expense followed by decades of disintegration and then replacement, we can expect our enterprise systems to lag far far behind.

Super Expensive Cerner Crack-up at Island Health

Kansas City, we have a problem.

A year after roll-out, the Island Health electronic health record (EHR) project being piloted at Nanaimo Regional General Hospital (NRGH) is abandoning electronic processes and returning to pen and paper. An alert reader forwarded me this note from the Island Health CEO, sent out Friday afternoon:

The Nanaimo Medical Staff Association Executive has requested that the CPOE tools be suspended while improvements are made. An Island Health Board meeting was held yesterday to discuss the path forward. The Board and Executive take the concerns raised by the Medical Staff Association seriously, and recognize the need to have the commitment and confidence of the physician community in using advanced EHR tools such as CPOE. We will engage the NRGH physicians and staff on a plan to start taking steps to cease use of the CPOE tools and associated processes. Any plan will be implemented in a safe and thoughtful way with patient care and safety as a focus.
Dr. Brendan Carr to all Staff/Physicians at Nanaimo Regional General Hospital

This extremely expensive back-tracking comes after a year of struggles between the Health Authority and the staff and physicians at NRGH.

After two years of development and testing, the system was rolled out on March 19, 2016. Within a couple months, staff had moved beyond internal griping to griping to the media and attempting to force changes through bad publicity.

Doctors at Nanaimo Regional Hospital say a new paperless health record system isn’t getting any easier to use.

They say the system is cumbersome, prone to inputting errors, and has led to problems with medication orders.

“There continue to be reports daily of problems that are identified,” said Dr. David Forrest, president of the Medical Staff Association at the hospital.
– CBC News, July 7, 2016

Some of the early problems were undoubtedly of the “critical fault between chair and keyboard” variety – any new information interface quickly exposes how much we use our mental muscle memory to navigate both computer interfaces and paper forms.

So naturally, the Health Authority stuck to their guns, hoping to wait out the learning process. Unfortunately for them, the system appears to have been so poorly put together that no amount of user acclimatization can save it in the current form.

An independent review of the system in November 2016 has turned up not just user learning issues, but critical functional deficiencies:

  • High doses of medication can be ordered and could be administered. Using processes available to any user, a prescriber can inadvertently write an order for an unsafe dose of a medication.
  • Multiple orders for high-risk medications remain active on the medication administration record resulting in the possibility of unintended overdosing.
  • The IHealth system makes extensive use of small font sizes, long lists of items in drop-down menus and lacks filtering for some lists. The information display is dense making it hard to read and navigate.
  • End users report that challenges commonly occur with: system responsiveness, log-in when changing computers, unexplained screen freezes and bar code reader connectivity.
  • PharmaNet integration is not effective and adds to the burden of medication reconciliation.

The Health Authority committed to address the concerns of the report, but evidently the hospital staff felt they could no longer risk patient health while waiting for the improvements to land. Hence a very expensive back-track to paper processes, and then another expensive roll-out process in the future.

This set-back will undoubtedly cost millions. The EHR roll-out was supposed to proceed smoothly from NRGH to the rest of the facilities in Island Health before the end of 2016.

This new functionality will first be implemented at the NRGH core campus, Dufferin Place and Oceanside Urgent Care on March 19, 2016. The remaining community sites and programs in Geography 2 and all of Geography 1 will follow approximately 6 months later. The rest of Island Health (Geographies 3 and 4) will go-live roughly 3 to 6 months after that.

Clearly that schedule is no longer operative.

The failure of this particular system is deeply worrying because it is a failure on the part of a vendor, Cerner, that is now the primary provider of EHR technology to the BC health system.

When the IBM-led EHR project at PHSA and Coastal Health was “reset” (after spending $72M) by Minister Terry Lake in 2015, the government fired IBM and turned to a vendor they hoped would be more reliable: EHR software maker Cerner.

Cerner was already leading the Island Health project, which at that point (mid-2015) was apparently heading to a successful on-time roll-out in Nanaimo. They seemed like a safe bet. They had more direct experience with the EHR software, since they wrote it. They were a health specialist firm, not a consulting generalist firm.

For all my concerns about failures in enterprise IT, I would have bet on Cerner turning out a successful if very, very, very costly system. There’s a lot of strength in having relevant domain experience: it provides focus and a deep store of best practices to fall back on. And because Cerner is an EHR specialist, a failed EHR project will injure Cerner’s reputation in ways a single failed project will barely dent IBM’s clout.

There will be a lot of finger-pointing and blame shifting going on at Island Health and the Ministry over the next few months. The government should not be afraid to point fingers at Cerner and force them to cough up some dollars for this failure. If Cerner doesn’t want to wear this failure, if they want to be seen as a true “partner” in this project, they need to buck up.

Cerner will want to blame the end users. But when data entry takes twice as long as paper processes, that’s not end users’ fault. When screens are built with piles of non-relevant fields, and poor layouts, that’s not end users’ fault. When systems are slow or unreliable, that’s not end users’ fault.

Congratulations British Columbia, on your latest non-working enterprise IT project. The only solace I can provide is that eventually it will probably work, albeit some years later and at several times the price you might have considered “reasonable”.

On "Transformation" and Government IT

“You may find yourself,
In a beautiful house,
With a beautiful wife.
You may ask yourself,
well, how did I get here?”
– David Byrne, Once In a Lifetime

I’ve spilled a lot of electrons over the last 5 years talking about IT failures in the BC government (ICM, BCeSIS, NRPP, CloudBC), and a recurring theme in the comments is “how did this happen?” and “are we special or does everybody do this?”

The answer is nothing more sophisticated than “big projects tend to fail”, usually because adding more people to an IT project just adds organizational churn: more reporting, more planning, more reviewing of same, all undertaken by the most expensive managerial resources.

That being so, why do we keep approving and attempting big IT projects? BC isn’t unique in doing so, though we have our own organizational tale to tell.

“Transformation”

Both the social services Integrated Case Management project (2009) and the Natural Resources Permitting Project (2013) were born out of “transformation” initiatives, attempts to restructure the business of a Ministry around new principles of authority and information flow.

Even for businesses as structured and regimented as a Department of Motor Vehicles, these projects can be risky. For a Ministry like Children & Families, where the stakes are children’s lives, and the evaluations of the facts of cases are necessarily subjective, the risk levels are even higher.

Nonetheless, in the mid-2000s, the Province gave the Ministry of Management Services a new mission, to “champion the transformation of government service delivery to respond to the everyday needs of citizens, businesses and the public sector.” In particular, the IT folks in the Chief Information Officer’s department took this mission in hand. This may be a clue as to why IT projects became the central pivot for “transformation” initiatives.

Around the same time, the term “citizen centred service delivery” began to show up in Ministry Service Plans. Ministries were encouraged to pursue this new goal, with the assistance of Management Services (later renamed Citizens’ Services). This activity reached a climax in 2010, with the release of Citizens @ The Centre: B.C. Government 2.0 by then Deputy to the Premier Alan Seckel.

The “government 2.0” bit is a direct echo of hype from south of the border, where in 2009 meme-machine Tim O’Reilly kicked off a series of conferences and articles on “Government 2.0” as a counterpart to his “web 2.0” meme.

Enthusiasm for technology-driven “government as a platform” resulted in some positive side effects, such as the “open data” movement, and the creation of alternative in-sourced delivery organizations, like the UK Government Digital Service and the American 18F organization in the General Services Administration. In BC the technology mania also wafted over the top levels of the civil service, resulting in the Citizens @ The Centre plan.

As a result, the IT inmates took over the asylum. Otherwise sane Ministries were directed to produce “Transformation and Technology Plans” to demonstrate their alignment with the goals of “Citizens @ the Centre”. Education, Transportation, Natural Resources and presumably all the rest produced these plans during what turned out to be the final years of Premier Gordon Campbell’s rule.

The 2011 change in leadership from Gordon Campbell’s technocratic approach to the “politics über alles” style of Christy Clark has not substantially reduced the momentum of IT-driven transformation projects.

Part of this may be a matter of senior leadership personalities: the current Deputy to the Premier, Kim Henderson, was leading Citizens’ Services when it produced Citizens @ The Centre; the former CIO Dave Nikolejsin, an architect of the catastrophic ICM project, remains involved in the ongoing NRPP transformation project, despite his new perch in the Ministry of Natural Gas Development.

Ballooning Budgets

The IT-led do-goodism of “transformation” explains to some extent how systems became the organizing principle for these disruptive projects, but it doesn’t fully explain their awesome size. Within my own professional memory, only 15 years ago, $10M was a surprisingly large IT opportunity in BC. Now I can name half a dozen projects that have exceeded $100M.

The social services Integrated Case Management project provides an interesting study in how a budget can blow up.

“Integrated Case Management” as a desirable concept was suggested by the 1996 Gove Report (yes, 1996), leading to an “Integrated Case Management Policy” in 1998, and by 1999 a Ministry working group examining “off-the-shelf” (COTS) technology solutions. The COTS review did not turn up a suitable solution, and the Ministry buried itself in a long “requirements gathering” process, culminating in a working prototype by 2002. In 2003, an RFP was issued to expand the prototype into a pilot for a few hundred users.

The 2003 contract to build out the pilot was awarded to GDS & Associates Systems in the amount of $142,800. No, I didn’t drop any zeroes. Six years before a $180,000,000 ICM contract was awarded to Deloitte, the Ministry thought a working pilot could be developed for 1/1000 of the cost.

What happened after that is a bit of a mystery. Presumably the pilot process failed in some way (would love to know! leave a comment! send an email!) because in 2007 the government was back with a Request to procure a “commercial off-the-shelf” integrated case management solution.

This was the big kahuna. Instead of piloting a small solution and incrementally rolling out, the government already had plans to roll the solution out to over 5000 government workers and potentially 12000 contractors. And the process was going to be incredibly easy:

  • Phase 1: Procure Software
  • Phase 2: Planning and Systems Integration
  • Phase 3: Blueprint and Configure
  • Phase 4: Implementation
  • Phase 5: Future Implementation

Here’s where things get confusing. Despite using “off-the-shelf” software to avoid software development risks, and having a simple plan to just “configure” the software and roll it out, the new project was tied to a capital plan for $180M, 1000 times the budget of the custom pilot software from a few years earlier.

Why?

Whereas the original pilot aimed to provide a new case management solution, full stop, the new project seems to have been designed to “boil the ocean”, touching and replacing hundreds of systems throughout the Ministry.

At this point the psychology of capital financing and government approvals starts to come into play. Capital financing can be hard to get. Large plans must be written and costed, and business cases built to show “return on investment” to Treasury Board.

One way to gain easy “return” is to take all the legacy systems in the organization, fluff up their annual operating costs as much as possible, bundle them together and say “we’re going to replace all this, for an annual savings of $X, which in conjunction with efficiencies of $Y from our new technology and centralized maintenance gives us a positive ROI!”

I’m pretty sure this psychology applies, since almost exactly the same arguments backstop the “business case” for the ongoing Natural Resource Permitting Project.

A natural effect of bundling multiple system integrations is to blow up the budget size. This is actually a Good Thing (tm) from the point of view of technology managers, since it provides some excellent resumé points: “procured and managed 9-digit public IT project.”

A manager who successfully delivers a superb $4M IT project gets a celebratory dinner at the pub; a manager who brings even a terrible $140M IT project to “completion” can write her ticket in IT consulting.

The only downside of huge IT projects is that they fail to provide value to end-users a majority of the time, and of course they soak the taxpayers (or shareholders in the case of private sector IT failures, which happen all the time) for far more money than they should.

Can We Stop? Should We?

We really should stop. The sources of the problems are pretty clear: overly large projects, and heavily outsourced IT.

Even the folks at the very top can see the problem and describe it, if they have the guts.

David Freud, former Conservative minister at the UK Department for Work and Pensions, was in charge of “Universal Credit”, a large social services transformation project which included a large, poorly-built IT component – a project similar in scope to our ICM. He had this to say in a debrief about what he learned in the process:

The implementation was harder than I had expected. Maybe that was my own naivety. What I didn’t know, and I don’t think anyone knew, was how bad a mistake it had been for all of government to have sent out their IT.

It happened in the 1990s and early 2000s. You went to these big firms to build your IT. I think that was a most fundamental mistake, right across government and probably across government in the western world…

We talk about IT as something separate but it isn’t. It is part of your operating system. It’s a tool within a much better system. If you get rid of it, and lose control of it, you don’t know how to build these systems.

So we had an IT department but it was actually an IT commissioning department. It didn’t know how to do the IT.

What we actually discovered through the (UC) process was that you had to bring the IT back on board. The department has been rebuilding itself in order to do that. That is a massive job.

The solution will be difficult, because it will involve re-building internal IT skills in the public service, a work environment that is drastically less flexible and rewarding than the one on offer from the private sector. However, what the public service has going for it is a mission. Public service is about, well, “public service”. And IT workers are the same as anyone else in wanting their work to have value, to help people, and to do good.

The best minds of my generation are thinking about how to make people click ads.
– Jeffrey Hammerbacher

When the Obamacare healthcare.gov site, built by Canadian enterprise IT consultant CGI, cratered shortly after launch, it was customer-focused IT experts from Silicon Valley and elsewhere who sprang into action to rescue it. And when they were done, many of them stayed on, founding the US Digital Service to bring modern technology practices into the government. They’re paid less, and their offices aren’t as swank, but they have a mission, beyond driving profit to shareholders, and that’s a motivating thing.

Government can build up a new IT workforce, and start building smaller projects, faster, and stop boiling the ocean, but they have to want to do it first. That’ll take some leadership, at the political level as well as in the civil service. IT revitalization is not a partisan thing, but neither is it an easy thing, or a sexy thing, so it’ll take a politician with some guts to make it a priority.