@OSCon 2012

I’m at OSCON this week, taking in the open source gestalt in its most grandiose form: a gathering of over 3000 technophiles in the “open source city”, Portland, Oregon.

The keynotes this morning were a lovely mix of core open source concerns and philosophy, and related-yet-different topics.

The core of open source is sharing and cooperation, and David Eaves addressed that in his talk, advocating a new focus on community building. Not from a qualitative “boy, it would be good to pay attention to community” point of view, but from a quantitative, “hey, we can and should measure engagement and seek to improve it, in our tooling and our processes”. If we do so, we’ll learn some obvious things (being blunt is good for saving keystrokes but bad for incubating new contributors) and maybe we’ll also learn from surprising things, like who our best bug reporters are.

Danny Hillis gave an overview of one of his current projects, creating a “learning map” that allows agents to help guide learning. He used a wonderful analogy to his elementary school librarian, who guided his early learning not only by finding him information on the topics he said he was interested in, but also by bringing him information she thought he might be interested in and was ready to comprehend.

Kay Thaney talked about changing the process of research science, to modernize and align the incentives of researchers for faster progress. I feel like she talked over the subject a bit, with not enough concrete examples of how modern science is being held back by traditional promotion and funding structures, but I agree with her overall premise. I recently read a piece on the travails of a computer science PhD candidate which distilled a bunch of the problems with modern academia: the power of the principal investigator and his/her grants; the requirement for publication first, foremost and always; and the power of academic social networks in controlling access to funding and career opportunities.

One of my database idols, Brian Aker, talked about his role at (!!) HP, overseeing their public cloud offering based on OpenStack. Learning about OpenStack and the incredible development velocity around the project has been one of the huge eye-openers for me here at OSCon.

And finally Tim O’Reilly brought us back to the open source philosophy with a talk about how we value open source contribution to society (tl;dr: we don’t). Like all the energy we save through conservation or all the fresh air trees and plants produce, the value of freely available open source is unaccounted for in the formal economy.

My previous experience of OSCon was one of my favourite conference experiences, and informed how I tried to organize FOSS4G 2007 the following year. This event is the same: quality speakers, good opportunities to network and meet as many people as you could wish, and a range of surprising new things that send you home going “wow”. I’m looking forward to the rest of the day and tomorrow!

More ICM

Who would have thought “Integrated Case Management” could be such an interesting topic! But then, who would have thought that you could spend $180M on a technology project?

An Ugly Duckling

The ICM demonstration video is a great way to get a feel for the reality of the project. Remember, this is the best possible face they can put on the system, it’s the public demo video.

A non-technical observer, used to the wonders of the consumer web, might wonder, “why is that $180M software so damned ugly?”

The reason it is ugly is that it is built using a Customer Relationship Management (CRM) system called Siebel. Siebel was the market leader in CRM during the late 1990s (starting to understand the look-and-feel now?) and was purchased by Oracle Corporation in 2006. Suffice to say, the Glory Days of Siebel are now far in the past.

Customers or Clients?

A non-technical observer might also wonder, “why is our social services case management system built using ‘Customer Relationship Management’ software?”

The cynic would answer that it’s another example of the corporatisation of government, but the cynic would be wrong. CRM software was “invented” (“evolved” is probably a better term) because Fortune 500 companies wanted to ensure all branches of their business had complete information about their customers throughout the lifecycle of the customer relationship.

A “customer” starts out as a sales lead, eventually becomes a prospect, and is (hopefully) sold some product. The sales department handles all this. The product is shipped to the customer. The shipping department handles this, and it would be good for them to know any special conditions the sales team recorded in their talks with the customer. On delivery, the customer might have some questions about the product, and would contact the support department. In order to provide good support, the support department should have access to all the previously gathered information about the customer and their purchase. After a few years, it makes sense to contact the customer to inquire about purchasing a new model. The inside sales department would do this, using all the customer history to craft the best possible pitch.
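The lifecycle above boils down to one shared record that every department reads and annotates. A toy sketch (all class, field, and department names here are my own illustrative inventions, not drawn from any real CRM product):

```python
# Toy model of the core CRM idea: a single shared customer record,
# annotated by each department in turn, so later departments can
# see the context earlier departments recorded.
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    name: str
    history: list = field(default_factory=list)

    def note(self, department: str, entry: str) -> None:
        # Every department appends to the same shared history.
        self.history.append((department, entry))

customer = CustomerRecord("Acme Ltd")
customer.note("sales", "lead qualified, sold model X, special delivery terms")
customer.note("shipping", "delivered per conditions recorded by sales")
customer.note("support", "setup question answered using purchase history")
customer.note("inside sales", "upgrade pitch crafted from full history")
```

The point of the structure is simply that shipping, support, and inside sales all see what sales wrote, which is exactly the property the social services application wants across its departments.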

At this point, even the non-technical observer should notice the similarity between the customer management problem and the social services application. Multiple departments, sharing information to provide the best possible service. Clients being contacted by different branches of an organization with the system helping ensure the left and right hands know what they are doing.

CRM #fail

That’s the theory. You merge all your departmental databases into one master CRM database, provide uniform access across the enterprise and magic occurs!

The practice, unfortunately, has not always been positive, as the Wikipedia article on “CRM” makes abundantly clear. The very first section of the article is “Challenges”, which are as follows:

2 Challenges
2.1 Complexity
2.2 Poor usability
2.3 Fragmentation
2.4 Business reputation
2.5 Security, privacy and data security concerns

Anything sound familiar? It’s a prescient catalogue of the complaints and concerns we are hearing about ICM.

The reason CRM systems are complex and hard to use is that they have been evolved from the specific purpose of managing the sales process, to the general purpose of managing all customer information across a complex organization, while trying to minimize custom user interface programming.

The $180M Off-the-shelf Solution

For reasons that escape my understanding, IT managers hate programming. They gravitate readily towards “off the shelf” solutions, pieces of software that can be purchased, easily installed and set up, and left to run. This makes sense for well-understood pieces of functionality, like word processors and spreadsheets.

Where things start to go sideways is when “off the shelf” mania spreads into more and more murky problem spaces. As “off the shelf” solutions become more generic, they require substantial “configuration” stages before they can be left to run. (Astute cynics will note that “off the shelf” solutions always require “configuration” and never need “programming”.)

I have to admit, case management feels kind of like an off-the-shelf problem. It’s just paper shuffling, the kind of thing hundreds of other jurisdictions do. Rather than building a custom case management system, it makes sense to buy an existing one, right?

Yes and no, apparently. If you’re going to spend $180M configuring your new system, the savings you got by starting from an “off the shelf” solution start to seem pretty illusory, particularly if one of the drawbacks you accept by using “off the shelf” software is a more complex user experience than you could have provided with standard programming tools.

A $180M Black Box

If an “off the shelf solution that requires $180M in configuration” sounds like a contradiction in terms, how about “a solution that requires no programming that no one can understand”?

Victoria is absolutely lousy with software developers who understand Java and SQL and HTML and JavaScript, the kinds of tools you might use to build a case management system “from scratch”.

However, there are very few IT people in Victoria who know how to configure Siebel. In fact, there are so few Siebel people in Victoria that, in order to get access to a top notch Siebel team, the ICM project has been flying in experts from abroad. They stay in Victoria for the week, then return home on the weekend.

Unfortunately, the choice of Siebel doesn’t just affect staffing availability, it also affects the future flexibility of the ICM system.

Siebel is one big, tightly coupled piece of software. Interfaces depend on database structures and configurations. Upgrades will mean upgrading everything at once: in the presence of $180M worth of configuration and customization, I wonder if the Siebel software will ever get upgraded. The risk of failure will be too high. That means that the system you see today is the system you’ll see 10 years from now. Do you know what “good” technology will look like in 10 years? Me neither. But it won’t look like ICM.

The End?

Major IT projects never fail. Never. At least, not until the CIO moves on to a new job. Then the truth can be spoken.

The industry experience with Siebel has been so bad that “CRM Magazine” actually writes of a “Siebel effect”:

Fair or not, Siebel Systems—the CRM pioneer later acquired by Oracle—is often considered the poster child for failed CRM system implementations. I’ve heard dozens of war stories about overaggressive and excessively large CRM implementations that did not come close to realizing their promised benefits—and wasted a lot of money …. The so-called “Siebel effect” is a fear of large, expensive, and often very lengthy projects intended to revamp all aspects of an enterprise’s customer-facing solutions.

With luck, BC’s “Siebel effect” will cause us to re-value building small, interoperable systems. Big, reliable systems (like Amazon, or Facebook, or Netflix) can be engineered from small, comprehensible, interoperable pieces, using smaller, more flexible teams. And if we encourage in-house technical expertise, we’ll give government the deep technical understanding it needs to evolve systems over time, instead of committing to failure-prone IT mega-projects like ICM.

Let’s give it a try. It can’t be worse than the alternative.

Web Architectures for SQL

Like bicycles for fishes and tuxedos for monkeys, web architectures for SQL don’t get no respect. I aim to change that.

Take Smaller Bites

Yesterday, contrary to the usual course of events, a question about IT was raised in the BC Legislature, regarding the Integrated Case Management (ICM) system:

C. Trevena: The government has been phasing in a $182 million integrated case management system in the Ministries of Social Development and Children and Families. It’s supposed to improve services, but feedback from social workers using it shows the opposite. Social workers are talking about some very serious concerns. They say it’s cumbersome and can result in dangerous mistakes being made. In the Ministry of Children and Families they can’t get the information they need about children at risk quickly, and that often results in a crisis. What is the Minister of Children and Family Development doing to make sure this computer system is not going to hamper the work of child protection officers?

To which the Minister responded:

Hon. M. McNeil: The integrated case management system that is in its phase 2 has, as with every new technology, some challenges when it first begins. But I have to tell you that this integrated case management system for our ministry alone is replacing 64 individual technologies. I think what it’s going to be able to do is to allow our social workers and our front-line workers, once they get fully trained and fully up to speed on the actual system, to be able to communicate with each other so that communication between the various entities will ensure that children and youth in this province are well served.

And I have to admit, the first thing I do with this kind of thing is think “$180M is a lot of money, a lot, I wonder what that buys?”. And the answer is, over a 10 year life-span, using a $150,000 fully loaded FTE cost, $180M is equivalent to 120 staff. That’s how much incremental extra efficiency the new system has to provide over the status quo. Were the old systems wasting the full-time efforts of 120 people?
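The back-of-the-envelope math above can be checked in a few lines, using only the assumptions stated in the text (a $150,000 fully loaded annual FTE cost and a 10-year lifespan):

```python
# Rough equivalence of the ICM budget in full-time staff,
# using the figures from the text: $180M total cost, $150,000
# fully loaded cost per person per year, 10-year lifespan.
budget = 180_000_000        # total project cost, dollars
fte_cost = 150_000          # fully loaded annual cost per FTE
lifespan_years = 10

staff_equivalent = budget / (fte_cost * lifespan_years)
print(staff_equivalent)     # -> 120.0 full-time staff over the decade
```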

But that kind of analysis is reductive. Systems get old, they need to be replaced, and integration has benefits in terms of new capabilities that can be offered that the old processes could not provide. So, was there a way to make things better, but maybe without spending $180M on a huge waterfall project?

I mused about ICM for a while. $1M isn’t actually a lot of money. You can get a team of perhaps 5 senior contractors for that kind of money. And once you commit to converting 64 systems simultaneously into an integrated system, it seems no wonder that the budget blows up. On the other hand, these large waterfall projects have a tendency to fail, precisely because they do try to solve so many problems over such a wide scope in one go.

I think the “answer”, if you can call it that, is to commit to incrementalism in all things. IT folks desperately want to re-write. We desperately want to fix everything at once. But that’s not the only way to get new and better systems.

What if the ICM project had, instead of deciding to integrate all 64 systems at once, decided to integrate just two of the largest systems? It would have created a project with far smaller scope and risk, and a far lower budget. The trouble with a team of 200 (or 100, or 50) is that, thanks to the miracle of Brooks’ Law, large teams waste an awful lot of time in coordination costs. Meetings, emails, status updates, reporting. All the effluvia of project management.
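The coordination cost behind Brooks’ Law grows quadratically: a team of n people has n(n-1)/2 pairwise communication paths. A quick sketch (pure arithmetic, no assumptions beyond the team sizes mentioned above):

```python
# Pairwise communication paths in a team of n people:
# n * (n - 1) / 2, the combinatorial root of Brooks' Law.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for team_size in (5, 50, 100, 200):
    print(team_size, communication_paths(team_size))
# A team of 5 has 10 possible paths; a team of 200 has 19,900.
```

A 40x increase in headcount buys a roughly 2,000x increase in possible coordination overhead, which is why small teams on small scopes spend so much more of their time actually building.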

Unfortunately, there’s a bureaucratic problem with approaching ICM by integrating just a few systems and then incrementally adding new systems over time. Doing so would mean the project could no longer be funded as a one-time capital expense. It would have to be funded as an ongoing operating cost. And capital costs are treated very differently from operating costs. When the media reports that “government is running a deficit”, they are reporting on the operating deficit. So there is a lot more political sensitivity (at the moment) around operating expenditures than there is around capital expenditures.

The end result? The provincial accounting system is forcing IT into a waterfall methodology, as if building information systems is the same as building bridges.

I don’t think that’s the right way to treat IT systems. They aren’t bridges. Unlike bridges, it’s possible to completely re-write and re-build a running IT system 100% (or 200% or 300%) while keeping it in operation. For example, Facebook has been incrementally re-architected and re-built several times since it started, expanding a million fold over the period, and hasn’t had to be shut down for upgrade once. The same is true of Google, Amazon, and eBay.

Continuous change and incremental upgrade on running systems should be considered a best practice! Why? Because they force IT teams to break projects into discrete testable chunks, instead of attempting grand moon-shot re-writes-from-scratch.

Committing to continuous change in systems, instead of a build/maintain/retire cycle, would have all sorts of positive effects. The systems would have dedicated development staff who understand the whole thing soup to nuts. New features wouldn’t require outside contractors who themselves need to spend time learning the system (imperfectly) before they can start. User interface and ergonomics could be continuously measured and improved over time, like Facebook and Google do, instead of guessed at in waterfall design documents ahead of time.

Getting there from here will be hard, because the capital funding model is going to continue to push projects into the classic waterfall failure mode: over-promise on scope and features; under-budget to get past Treasury Board; hire expensive international system integrators because only they can show the past experience with $100M project scopes.

But if we know why it’s broken, maybe we can fix it!

ESRI Dev Summit Snark

Every year, without fail, ESRI embraces and endorses interoperability! This year, again:

Takeaway 5: ArcGIS Embraces Interoperability
ArcGIS continues to embrace interoperability by supporting new Open Geospatial Consortium, Inc. (OGC), specifications (Web Map Tile Service [WMTS] and Web Processing Service [WPS]), making public its GeoServices REST specification and file geodatabase API. ArcGIS 10.1 also features enhanced support for native spatial types in commercial and open source databases.

Thought experiment: if ESRI truly embraced interoperability, at some point would one not expect it to cease to be a noteworthy topic?