Comox Valley 2013 Absentee Ballots

For all the electrons spilled speculating on what trends might apply to the Courtenay-Comox absentee ballots being counted next week, I feel like I haven’t seen the actual numbers from 2013 in print anywhere, so here they are from the 2013 Statement of Votes:

Section                        GP    NDP    CP    LIB   Total      %
s. 98  Special                 20     83     8     57     168   4.8%
s. 99  Absentee - in ED       219    607    86    560    1472  42.0%
s. 100 Absentee - out of ED    42    132     6    111     291   8.3%
s. 101 Absentee - advance       8     41     3     41      92   2.7%
s. 104 Voting in DEO office   119    519    74    601    1313  37.5%
s. 106 Voting by mail          18     74    15     61     168   4.8%
Total                         426   1456   192   1431    3505   100%
%                           12.2%  41.5%  5.5%  40.8%    100%      -

Some caveats:

  • Redistribution made the 2017 riding somewhat weaker for the NDP than it was in 2013. (Advantage: Liberals)
  • In 2017 the NDP candidate did somewhat better than in 2013. (Advantage: NDP)
  • In 2013 the NDP candidate lost the riding but (barely) won the absentee tally. (Advantage: NDP)

With those caveats in mind, the final conclusion: anyone who tells you that there’s a predictable direction the absentee ballots will go based on past results is blowing smoke up your ***.

Some Great Things about PostgreSQL

I spent the last few months using PostgreSQL for real work, with real data, and I’ve been really loving some of the more esoteric features. If you use PostgreSQL on a regular basis, learning these tools can make your code a lot more readable and possibly faster too.

Distinct On

A number of the tables I had to work with included multiple historical records for each individual, but I was only interested in the most recent value. That meant that every query had to start with some kind of filter to pull off the latest value for joining to other tables.

It turns out that the PostgreSQL DISTINCT ON syntax can spit out the right answer very easily:

SELECT DISTINCT ON (order_id) orders.*
FROM orders
ORDER BY orders.order_id, orders.timestamp DESC

No self-joins or other complexity here: the tuple set is sorted into id/time order, and then the DISTINCT ON clause pulls the first entry (which is the most recent, thanks to the sorting) off of each id grouping.
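
For contrast, here’s a sketch of what the same query looks like without DISTINCT ON, using a window function and a wrapper query (the self-join formulations are longer still):

SELECT *
FROM (
    SELECT orders.*,
        -- number the records in each order_id group, newest first
        row_number() OVER (PARTITION BY order_id
                           ORDER BY timestamp DESC) AS rn
    FROM orders
) numbered
WHERE rn = 1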

Filtered Aggregates

I was doing a lot of reporting, so I built a BI-style denormalized reporting table, with a row for every entity of interest and a column for every variable of interest. Then all that was left was the reporting, which rolled up results across multiple groupings. The trouble was, the roll-ups were often highly conditional: all entities with condition A but not B, compared with those with B but not A, compared with all entities in aggregate.

Ordinarily this might involve embedding a big CASE statement in each aggregate, but with filtered aggregates we get a nice terse layout that also evaluates faster. (For comparison, the older CASE spelling appears after the query below.)

SELECT
    store_territory,
    Count(*) FILTER (WHERE amount < 5.0) 
        AS cheap_sales_count,
    Sum(amount) FILTER (WHERE amount < 5.0) 
        AS cheap_sales_amount,
    Count(*) FILTER (WHERE amount < 5.0 AND customer_mood = 'good') 
        AS cheap_sales_count_happy,
    Sum(amount) FILTER (WHERE amount < 5.0 AND customer_mood = 'good')
        AS cheap_sales_amount_happy
FROM bi_table
GROUP BY store_territory
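
For comparison, here’s a sketch of the first pair of columns in the older spelling, with the condition buried in a CASE inside each aggregate (FILTER only arrived in PostgreSQL 9.4):

SELECT
    store_territory,
    -- CASE with no ELSE yields NULL, which Count() and Sum() skip
    Count(CASE WHEN amount < 5.0 THEN 1 END)
        AS cheap_sales_count,
    Sum(CASE WHEN amount < 5.0 THEN amount END)
        AS cheap_sales_amount
FROM bi_table
GROUP BY store_territory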

I would routinely end up with 20-line versions of this query, which spat out spreadsheets that analysts were extremely happy to take and turn into charts and graphs and even decisions.

Window Functions

My mind aches slightly when trying to formulate window functions, but I was still able to put them to use in a couple places.

First, even with a window wide enough to cover the whole table, window functions can be handy! For example, adding a percentile column to every row:

SELECT bi_table.*, 
    ntile(100) OVER (ORDER BY amount) 
        AS amount_percentile
FROM bi_table
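
One wrinkle: window functions can’t appear in a WHERE clause, so filtering on the new column means wrapping the query. A minimal sketch that pulls just the top decile from the same hypothetical bi_table:

SELECT *
FROM (
    SELECT bi_table.*,
        ntile(100) OVER (ORDER BY amount)
            AS amount_percentile
    FROM bi_table
) ranked
-- ntile(100) buckets run from 1 to 100
WHERE amount_percentile > 90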

Second, using ordinary aggregates in a window context can create some really groovy results. Want cumulative sales over store territories? (This might be better delegated to front-end BI display software, but…)

WITH daily_amounts AS (
    SELECT 
        sum(amount) AS amount,
        store_territory,
        date(timestamp) AS date
    FROM bi_table
    GROUP BY store_territory, date
)
SELECT 
    sum(amount) OVER (PARTITION BY store_territory ORDER BY date) 
        AS amount_cumulate,
    store_territory, date
FROM daily_amounts

Alert readers will note the above example won’t provide a perfect output table if there are days without any sales at all, which brings me to a cool side feature: PostgreSQL’s generate_series function (Regina Obe’s favourite function) supports generating time-based series!

SELECT generate_series(
    '2017-01-01'::date, 
    '2017-01-10'::date, 
    '18 hours'::interval);

Normally you’ll probably generate boring 1-day, or 1-week, or 1-month series, but the ability to generate arbitrarily stepped time series is pretty cool and useful. To solve the cumulation problem, generate a full series of the days of interest, left join the calculated daily amounts to it, and then cumulate over the result to get a clean one-value-per-day output.
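
Putting the pieces together, here’s a sketch of that recipe against the same hypothetical bi_table, assuming a January 2017 reporting window: generate the calendar, cross join it to the territory list, left join the daily totals, and cumulate over the gap-free result.

WITH days AS (
    -- one row per calendar day, sales or no sales
    SELECT generate_series(
        '2017-01-01'::date,
        '2017-01-31'::date,
        '1 day'::interval)::date AS date
),
territories AS (
    SELECT DISTINCT store_territory FROM bi_table
),
daily_amounts AS (
    SELECT
        store_territory,
        date(timestamp) AS date,
        sum(amount) AS amount
    FROM bi_table
    GROUP BY store_territory, date
)
SELECT
    t.store_territory,
    d.date,
    -- missing days coalesce to zero, keeping the running sum smooth
    sum(coalesce(da.amount, 0))
        OVER (PARTITION BY t.store_territory ORDER BY d.date)
        AS amount_cumulate
FROM territories t
CROSS JOIN days d
LEFT JOIN daily_amounts da
    ON da.store_territory = t.store_territory
    AND da.date = d.date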

Left Join and Coalesce

This is not really an advanced technique, but it’s still handy. Suppose you have partial data on a bunch of sales from different sources, in different tables. You want a single output table that includes your best guess about the value. What’s the easiest way to get it? Left join and coalesce.

Start with a base table that includes all the sales you care about, left join all the potential sources of data, then coalesce the value you care about into a single output column.

SELECT
    base.order_id,
    Coalesce(oi1.order_name, oi2.order_name, oi3.order_name)
        AS order_name
FROM base
LEFT JOIN order_info_1 oi1 USING (order_id)
LEFT JOIN order_info_2 oi2 USING (order_id)
LEFT JOIN order_info_3 oi3 USING (order_id)

The coalesce function returns the first non-NULL value among its parameters. The practical effect is that, in the case where the first two tables have no rows for a particular base record and the third does, the coalesce will skip past the first two and return the non-NULL value from the third. This is a great technique for compressing multiple sparse input sources into a single usable output.
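
A trivial demonstration of that first-non-NULL behaviour:

SELECT Coalesce(NULL, NULL, 'three', 'four');
-- returns 'three'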

Christy Clark's $1M Faux Conference Photo-op

On February 25, 2013, Christy Clark mounted the stage at the “International LNG Conference” in Vancouver to trumpet her government’s plans to have “at least one LNG pipeline and terminal in operation in Kitimat by 2015 and three in operation by 2020”.


Notwithstanding the Premier’s desire to frame economic development as a triumph of will, and notwithstanding the generous firehosing of subsidies and tax breaks on the still nascent sector, the number of LNG pipelines and terminals in operation in Kitimat remains stubbornly zero. The markets are unlikely to relent in time to make the 2020 deadline.

And about that “conference”?

Like the faux “Bollywood Awards” that the government paid $10M to stage just weeks before the 2013 election, the “LNG in BC” conference was a government-organized “event” put on primarily to advance the pre-election public relations agenda of the BC Liberal Party.

In case anyone had any doubts about the purpose of the “event”, at the 2014 edition an exhibitor helpfully handed out a brochure to attendees, featuring an election night picture of the Premier and her son, under the title “We Won”.

We Won

The “LNG in BC” conference continued to be organized by the government for two more years, providing a stage each year for the Premier and multiple Ministers to broadcast their message.

The government is no longer organizing an annual LNG confab, despite their protestations that the industry remains a key priority. At this point, it would generate more public embarrassment than public plaudits.

Instead, we have a new faux “conference”, slated to run March 14-15, just four weeks before the 2017 election begins: the #BCTech Summit.

Like “LNG in BC”, the “BCTech Summit” is a government-organized and government-funded faux industry event, put on primarily to provide an expensive backdrop for BC Liberal politicking.

BC Innovation Council

The BC Innovation Council (BCIC) that is co-hosting the event is itself a government-funded advisory council run by BC Liberal appointees, many of whom are also party donors. To fund the inaugural 2016 version of the event, the Ministry of Citizens’ Services wrote a direct-award $1,000,000 contract to the BCIC.

The pre-election timing is not coincidental: it is part of a plan that dates all the way back to early 2015, when Deputy Minister Athana Mentzelopoulos directed staff to begin planning a “Tech Summit” for the spring of the following year.

“We will not be coupling the tech summit with the LNG conference. Instead, the desire is to plan for the tech summit annually over the next couple of years – first in January 2016 and then in January 2017.” – email from A. Mentzelopoulos, April 8, 2015

The intent of creating a “conference” to sell a new-and-improved government “jobs plan”, and the source of that plan, was made clear by the government manager tasked with delivering the event.

“The push for this as an annual conference has come from the Premier’s Office and they want to (i) show alignment with the Jobs Plan (including the LNG conference) and (ii) show this has multi-ministry buy-in and participation.” – S. Butterworth, April 24, 2015

The event was not something industry wanted. It was not even something the BCIC wanted. It was something the Premier’s Office wanted.

And so they got it: everyone pulled together, the conference was put on, and it made a $1,000,000 loss, which was dutifully covered by the Province via the BC Innovation Council, laying the groundwork for 2017’s much more politically potent version.

This year’s event will be held weeks before the next election. It too will be subsidized heavily by the government. And as with the LNG conference, exhibitors and sponsors will plunk down some money to show their loyalty to the party of power.

LNG BC Sponsors

The platinum sponsors of LNG in BC 2015 were almost all major LNG project proponents: LNG Canada, Pacific Northwest LNG, and Kitimat LNG. Were they, like sponsors at a normal trade conference, seeking to raise their profile among attendees? Or were they demonstrating their loyalty to the government that organized the event and then approached them for sponsorship dollars?

It is hard to avoid the conclusion that these events are just another conduit for cash in our “wild west” political culture, a culture that shows many of the signs of “systematic corruption” described by economist John Wallis in 2004.

“In polities plagued with systematic corruption, a group of politicians deliberately create rents by limiting entry into valuable economic activities, through grants of monopoly, restrictive corporate charters, tariffs, quotas, regulations, and the like. These rents bind the interests of the recipients to the politicians who create them.”

Systematically corrupt governments aren’t interested in personally enriching their members; they are interested in retaining and reinforcing their power, through a virtuous cycle of favours: economic favours are handed to compliant economic actors, who in turn do what they can to protect and promote their government patrons.

Circle of Graft

The 2017 #BCTech conference already has a title sponsor: Microsoft. In unrelated news, Microsoft is currently negotiating to bring their Office 365 product into the BC public sector. If the #BCTech conference was an ordinary trade show, these two unrelated facts wouldn’t be cause for concern. But because the event is an artificially created artifact of the Premier’s Office, a shadow is cast over the whole enterprise.

Who is helping who here, and why?

A recent article in Maclean’s included a telling quote from an anonymous BC lobbyist:

If your client doesn’t donate, it puts you at a competitive disadvantage, he adds. It’s a small province, after all; the Liberals know exactly who is funding them, the lobbyist notes, magnifying the role donors play and the access they receive in return.

As long as BC remains effectively a one-party state, the cycle of favours and reciprocation will continue. Any business subject to the regulatory or purchasing decisions of government would be foolish not to hedge its bets with a few well-placed dollars in the pocket of BC’s natural governing party.

The cycle is systematic and self-reinforcing, and the only way to break the cycle is to break the cycle.

Two Thoughts About Enterprise IT

While thinking about the failure of the Cerner roll-out in Nanaimo, where an expensive COTS electronic health record (EHR) system has been withdrawn after the medical staff rejected it, I have had a couple of thoughts.

Origins Matter

As I noted in my post, I actually expected the Cerner roll-out on Vancouver Island to go fine. Proven software, specialized company, no shortage of cash. But it didn’t go fine. Among the complaints were things you’d expect to have been ironed out of a mature product ages ago: data entry efficiency, user-friendly displays of relevant data at relevant places, plain old system speed and responsiveness.

Why would a system rolled out in multiple US hospitals display these drawbacks?

I’m wondering if some of the issue might be the origin story: Cerner is an EHR developed and deployed first in the USA. As such, one of the main value propositions of the system would be the fine-grained tracking and management of billable events. To the people purchasing the system, the hospital managers, the system would pay for itself many times over just by improving the accounting aspect of hospital management. Even if it made patient care less efficient, the improvement in billing alone could make the system fiscally worthwhile.

Recently, a colleague told me about the ambulance dispatch system BC procured a decade ago. Like Cerner, it was a US system. It handled dispatch fine, and still does. What was odd about the system was that when you cracked open the user manuals, over 75% of the material was about how to configure and manage billing. The primary IT pain point for these US organizations is the incredibly complicated accounting issues associated with their multiple-payer private health insurance system, and the system reflects that.

What I’m getting at here is that if you go to a US reference customer for one of these systems and ask “how did it work for you, was it good?”, when they answer “it is great, it has really made things better!”, what they mean by “better” might not be what you mean by “better”, at least not 100%.


Cost of Safe, Proven Systems

My second thought on the Cerner roll-out was about the timelines associated with software procurement, particularly when the goal is a “safe, proven choice”. IT managers – particularly ones who are, deep in their hearts, afraid of technology – like a safe, proven choice. They like software that someone else with a similar business is already using, in a similar way.

When you’re talking about big production systems that may take a couple of years to set up, and another couple of years to roll out progressively to a large front-line staff, there are a number of big lags:

  • If you take two years to configure and a year to roll out, the software will be three years old before your staff is all using it.
  • Your procurement process will itself probably take a year.
  • If, during procurement, you only consider software that has been in operation for a year somewhere else, that adds a year.
  • The organization you are using as your reference will in turn have taken 3 years to prepare and roll-out.
  • So, assuming the software was brand new when the reference organization chose it, your staff will be getting software at least 8 years old by the time your roll-out is complete.

This is more-or-less exactly what happened to the Ministry of Children and Families in the ICM procurement.

The software that runs their basically “brand new” system (I believe final roll-out was officially finished last year) is now over a decade old, and shows it.

The reason consumer-facing systems are always au courant, lovely and fast is that they are maintained by dedicated teams charged with continuous improvement. As long as we purchase systems like we build bridges, as a one-time capital expense followed by decades of disintegration and then replacement, we can expect our enterprise systems to lag far, far behind.

Super Expensive Cerner Crack-up at Island Health

Kansas City, we have a problem.


A year after roll-out, the Island Health electronic health record (EHR) project being piloted at Nanaimo Regional General Hospital (NRGH) is abandoning electronic processes and returning to pen and paper. An alert reader forwarded me this note from the Island Health CEO, sent out Friday afternoon:

The Nanaimo Medical Staff Association Executive has requested that the CPOE tools be suspended while improvements are made. An Island Health Board meeting was held yesterday to discuss the path forward. The Board and Executive take the concerns raised by the Medical Staff Association seriously, and recognize the need to have the commitment and confidence of the physician community in using advanced EHR tools such as CPE. We will engage the NRGH physicians and staff on a plan to start taking steps to cease use of the CPOE tools and associated processes. Any plan will be implemented in a safe and thoughtful way with patient care and safety as a focus.
Dr. Brendan Carr to all Staff/Physicians at Nanaimo Regional General Hospital

This extremely expensive back-tracking comes after a year of struggles between the Health Authority and the staff and physicians at NRGH.

After two years of development and testing, the system was rolled out on March 19, 2016. Within a couple of months, staff had moved beyond internal griping to complaining to the media and attempting to force changes through bad publicity.

Doctors at Nanaimo Regional Hospital say a new paperless health record system isn’t getting any easier to use.

They say the system is cumbersome, prone to inputting errors, and has led to problems with medication orders.

“There continue to be reports daily of problems that are identified,” said Dr. David Forrest, president of the Medical Staff Association at the hospital.
– CBC News, July 7, 2016

Some of the early problems were undoubtedly of the “critical fault between chair and keyboard” variety – any new information interface quickly exposes how much we use our mental muscle memory to navigate both computer interfaces and paper forms.

IHealth Terminal & Trainer

So naturally, the Health Authority stuck to their guns, hoping to wait out the learning process. Unfortunately for them, the system appears to have been so poorly put together that no amount of user acclimatization can save it in its current form.

An independent review of the system in November 2016 turned up not just user learning issues, but critical functional deficiencies:

  • High doses of medication can be ordered and could be administered. Using processes available to any user, a prescriber can inadvertently write an order for an unsafe dose of a medication.
  • Multiple orders for high-risk medications remain active on the medication administration record resulting in the possibility of unintended overdosing.
  • The IHealth system makes extensive use of small font sizes, long lists of items in drop-down menus and lacks filtering for some lists. The information display is dense making it hard to read and navigate.
  • End users report that challenges commonly occur with system responsiveness, log-in when changing computers, unexplained screen freezes, and bar code reader connectivity.
  • PharmaNet integration is not effective and adds to the burden of medication reconciliation.

The Health Authority committed to address the concerns of the report, but evidently the hospital staff felt they could no longer risk patient health while waiting for the improvements to land. Hence a very expensive back-track to paper processes, and then another expensive roll-out process in the future.

This setback will undoubtedly cost millions. The EHR roll-out was supposed to proceed smoothly from NRGH to the rest of the facilities in Island Health before the end of 2016.

This new functionality will first be implemented at the NRGH core campus, Dufferin Place and Oceanside Urgent Care on March 19, 2016. The remaining community sites and programs in Geography 2 and all of Geography 1 will follow approximately 6 months later. The rest of Island Health (Geographies 3 and 4) will go-live roughly 3 to 6 months after that.

Clearly that schedule is no longer operative.

The failure of this particular system is deeply worrying because it is a failure on the part of a vendor, Cerner, that is now the primary provider of EHR technology to the BC health system.

Cerner

When the IBM-led EHR project at PHSA and Coastal Health was “reset” (after spending $72M) by Minister Terry Lake in 2015, the government fired IBM and turned to a vendor they hoped would be more reliable: EHR software maker Cerner.

Cerner was already leading the Island Health project, which at that point (mid-2015) was apparently heading to a successful on-time roll-out in Nanaimo. They seemed like a safe bet. They had more direct experience with the EHR software, since they wrote it. They were a health specialist firm, not a consulting generalist firm.

For all my concerns about failures in enterprise IT, I would have bet on Cerner turning out a successful if very, very, very costly system. There’s a lot of strength in having relevant domain experience: it provides focus and a deep store of best practices to fall back on. And because Cerner is an EHR specialist, a failed EHR project will injure its reputation in ways a single failed project would barely dent IBM’s.

There will be a lot of finger-pointing and blame shifting going on at Island Health and the Ministry over the next few months. The government should not be afraid to point fingers at Cerner and force them to cough up some dollars for this failure. If Cerner doesn’t want to wear this failure, if they want to be seen as a true “partner” in this project, they need to buck up.

Cerner will want to blame the end users. But when data entry takes twice as long as paper processes, that’s not end users’ fault. When screens are built with piles of non-relevant fields, and poor layouts, that’s not end users’ fault. When systems are slow or unreliable, that’s not end users’ fault.

Congratulations British Columbia, on your latest non-working enterprise IT project. The only solace I can provide is that eventually it will probably work, albeit some years later and at several times the price you might have considered “reasonable”.