Going "All In" on IT Projects

I was thinking about IT project failure this weekend (OK, actually I was thinking about the ICM project, but you can see how I got there) and did a little Googling around on failure rates.

The estimated range of failure rates is surprisingly large, from a high of 68% down to a lower bound around 30%. A lot of the variability is definitional (what is a “failure”, after all?) and some of it seems to be methodological (who are you asking, and what are you asking them about?). However, the recent studies seem to cluster more around the lower range.

The web references are all pretty confident that poor requirements gathering is the root of most of the problems, and I can see some validity in that, but the variable I keep coming back to in my mental models is project size. The Sauer/Gemino/Reich paper takes on project size directly, and finds that the odds of failure (or non-success) go up steeply as project size increases.

Sadly, they don’t find that there is an optimal small-team size that brings down failure rates to what might be considered an acceptable level: even small-team projects fail at a rate of at best 25%.

So this brings me back to my weekend musings: suppose you have a capital budget of, say, $180M. Even assuming all projects of all sizes fail at the same rates (and as we’ve seen, they don’t), if you bet all your money on a single project there’s at least a 25% chance that you’ll lose 100% of it in a failure.

On the other hand, if you split your money into multiple smaller independent projects (and smaller teams), the impact of any one failure goes way down. Rather than losing your whole capital budget (and, we hope, your job) in a massive IT meltdown, you lose a few of your sub-projects, about 25% of your total budget.

Fundamentally, organizing a project to deal with the 100% certainty of a 25% failure rate is a more defensible approach than gambling with the 25% chance of a 100% failure rate.
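To put some rough numbers on that intuition, here is a little simulation sketch. The flat 25% failure rate and the 12-way budget split are assumptions for illustration only, not figures from any of the studies above:

```python
# Hypothetical illustration: compare betting a $180M budget on one project
# versus splitting it across 12 independent smaller projects, with every
# project assumed to fail outright with probability 0.25.
import random

BUDGET = 180_000_000
FAIL_RATE = 0.25      # assumed flat failure rate, for illustration only
N_PROJECTS = 12       # assumed portfolio size
TRIALS = 100_000

def loss_all_in():
    # all-or-nothing: lose the whole budget 25% of the time
    return BUDGET if random.random() < FAIL_RATE else 0

def loss_portfolio():
    # each sub-project fails independently; losses add up
    share = BUDGET / N_PROJECTS
    return sum(share for _ in range(N_PROJECTS) if random.random() < FAIL_RATE)

all_in = [loss_all_in() for _ in range(TRIALS)]
portfolio = [loss_portfolio() for _ in range(TRIALS)]

print("Mean loss, all-in:      $%.0fM" % (sum(all_in) / TRIALS / 1e6))
print("Mean loss, portfolio:   $%.0fM" % (sum(portfolio) / TRIALS / 1e6))
print("P(lose everything), all-in:    %.3f" % (sum(l == BUDGET for l in all_in) / TRIALS))
print("P(lose everything), portfolio: %.6f" % (sum(l == BUDGET for l in portfolio) / TRIALS))
```

The expected loss is the same either way (about $45M), but the all-in bet loses everything one run in four, while the portfolio essentially never does.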

Don’t just go “all in”, don’t bet it all on black, know that the unknowns inherent in software development make it a probabilistic process, not a deterministic one, and plan appropriately.

End note: Expecting failure, and planning for it, makes more sense than crossing your fingers and hoping you’ll dodge the bullet. And it surfaces some attractive approaches to achieving success: small independent teams can be evaluated and measured against one another, allowing you to find and better utilize your top performers; because you now expect some teams to fail, you can insulate yourself against some failures by running duplicate teams on critical sub-projects; because small failures are non-fatal, they can be used as learning tools for succeeding rounds. The similarity to agile development is probably no coincidence.
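The arithmetic on those duplicate teams is pleasantly simple, again assuming an independent 25% failure rate per team (an assumption, not a measured number):

```python
# If each team fails independently 25% of the time, a critical sub-project
# is only lost when *both* of its duplicate teams fail.
fail_rate = 0.25
one_team  = fail_rate              # 25% chance the sub-project is lost
two_teams = fail_rate * fail_rate  # about 6%: both teams must fail
print(f"one team: {one_team:.1%}, duplicate teams: {two_teams:.1%}")
```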

BC IT Outsourcing Update

This week the BC 2011/2012 Public Accounts (finally) came out, so that means it’s time to update our tracking of IT outsourcing spending with another year’s data points!

The headline number for 2012: $321,773,530, a new record! After a brief dip last fiscal year, IT outsourcing continued growing robustly in 2011/2012.

Of course, there are always winners and losers, and the surprise loser this year is perennial outsourcing champ, IBM. With only $61M in billings in 2011/2012, IBM notched its lowest take since 2005! Fortunately, HP Advanced Solutions was there to pick up the slack, hauling in $109M this fiscal year for not only a personal best, but an all-time record in single-firm IT billings (besting the old record of $107M set by IBM in 2010).

Also notable this fiscal year are the rise of Deloitte (to $37M) and local body-shoppers TP Systems ($9M) and Quartech ($13M) on the strength of the ongoing ICM debacle (in related news, the BC Children’s Advocate is the latest to take a dump on ICM).

Thanks to all our contestants, I know you’ll all be back next year!

Obscure end material: One surprise entry in the larger IT spends is “Oracle Microsystems” at $24M. What’s that? Looks like the remains of “Sun Microsystems”, since their entry is gone. It also leads me to wonder, why are we still dropping tens of millions of bucks on Sun hardware at this late date in the enterprise computing era? Guys, it’s called the “cloud”, look it up.

@OSCon 2012

I’m at OSCON this week, taking in the open source gestalt in its most grandiose form: a gathering of over 3000 technophiles in the “open source city”, Portland, Oregon.

The keynotes this morning were a lovely mix of core open source concerns and philosophy, and related-yet-different topics.

The core of open source is sharing and cooperation, and David Eaves addressed that in his talk, advocating a new focus on community building. Not from a qualitative “boy, it would be good to pay attention to community” point of view, but from a quantitative, “hey, we can and should measure engagement and seek to improve it, in our tooling and our processes”. If we do so, we’ll learn some obvious things (being blunt is good for saving keystrokes but bad for incubating new contributors) and maybe we’ll also learn some surprising things, like who our best bug reporters are.

Danny Hillis gave an overview of one of his current projects, creating a “learning map” that allows agents to help guide learning. He used a wonderful analogy to his elementary school librarian, who guided his early learning by not only finding him information on the topics he said he was interested in, but also bringing him information she thought he might be interested in and was ready to comprehend.

Kay Thaney talked about changing the process of research science to modernize and align the incentives of researchers for faster progress. I feel like she talked around the subject a bit, with not enough concrete examples of how modern science is being held back by traditional promotion and funding structures, but I agree with her overall premise. I recently read a piece on the travails of a computer science PhD candidate which distilled a bunch of the problems with modern academia: the power of the principal investigator and his/her grants; the requirement for publication first, foremost and always; and the power of academic social networks in controlling access to funding and career opportunities.

One of my database idols, Brian Aker, talked about his role at (!!) HP, overseeing their public cloud offering based on OpenStack. Learning about OpenStack and the incredible development velocity around the project has been one of the huge eye-openers for me here at OSCon.

And finally Tim O’Reilly brought us back to the open source philosophy with a talk about how we value open source contribution to society (tl;dr: we don’t). Like all the energy we save through conservation or all the fresh air trees and plants produce, the value of freely available open source is unaccounted for in the formal economy.

My previous experience of OSCon was one of my favourite conference experiences, and informed how I tried to organize FOSS4G 2007 the following year. This event is the same: quality speakers, good opportunities to network and meet as many people as you could wish, and a range of surprising new things that send you home going “wow”. I’m looking forward to the rest of the day and tomorrow!

More ICM

Who would have thought “Integrated Case Management” could be such an interesting topic! But then, who would have thought that you could spend $180M on a technology project?

An Ugly Duckling

The ICM demonstration video is a great way to get a feel for the reality of the project. Remember, this is the best possible face they can put on the system: it’s the public demo video.

A non-technical observer, used to the wonders of the consumer web, might wonder, “why is that $180M software so damned ugly?”

The reason it is ugly is that it is built using a Customer Relationship Management (CRM) system called Siebel. Siebel was the market leader in CRM during the late 1990s (starting to understand the look-and-feel now?) and was purchased by Oracle Corporation in 2006. Suffice to say, the Glory Days of Siebel are now far in the past.

Customers or Clients?

A non-technical observer might also wonder, “why is our social services case management system built using ‘Customer Relationship Management’ software?”

The cynic would answer that it’s another example of the corporatisation of government, but the cynic would be wrong. CRM software was “invented” (“evolved” is probably a better term) because Fortune 500 companies wanted to ensure all branches of their business had complete information about their customers throughout the lifecycle of the customer relationship.

A “customer” starts out as a sales lead, eventually becomes a prospect, and is (hopefully) sold some product. The sales department handles all this. The product is shipped to the customer. The shipping department handles this, and it would be good for them to know any special conditions the sales team recorded in their talks with the customer. On delivery, the customer might have some questions about the product, and would contact the support department. In order to provide good support, the support department should have access to all the previously gathered information about the customer and their purchase. After a few years, it makes sense to contact the customer to inquire about purchasing a new model. The inside sales department would do this, using all the customer history to craft the best possible pitch.
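To make the “one shared record” idea concrete, here is a toy sketch of the lifecycle above. The names are hypothetical and have nothing to do with the actual Siebel data model:

```python
# Toy sketch of the CRM idea: one shared customer record that every
# department reads from and appends to across the relationship lifecycle.
from dataclasses import dataclass, field

@dataclass
class Customer:
    name: str
    stage: str = "lead"                       # lead -> prospect -> customer
    history: list = field(default_factory=list)

    def note(self, department, text):
        # every department writes into the same shared history
        self.history.append(f"[{department}] {text}")

c = Customer("Acme Widgets")
c.note("sales", "wants delivery before fiscal year-end")
c.stage = "customer"
c.note("shipping", "expedited, per the sales note above")
c.note("support", "install question; full history is visible, no cold start")
c.note("inside sales", "due for a model refresh in two years")
print("\n".join(c.history))
```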

At this point, even the non-technical observer should notice the similarity between the customer management problem and the social services application. Multiple departments, sharing information to provide the best possible service. Clients being contacted by different branches of an organization with the system helping ensure the left and right hands know what they are doing.

CRM #fail

That’s the theory. You merge all your departmental databases into one master CRM database, provide uniform access across the enterprise and magic occurs!

The practice, unfortunately, has not always been positive, as the Wikipedia article on “CRM” makes abundantly clear. The very first section of the article is “Challenges”, which are as follows:

2 Challenges
2.1 Complexity
2.2 Poor usability
2.3 Fragmentation
2.4 Business reputation
2.5 Security, privacy and data security concerns

Anything sound familiar? It’s a prescient catalogue of the complaints and concerns we are hearing about ICM.

The reason CRM systems are complex and hard to use is that they have been evolved from the specific purpose of managing the sales process, to the general purpose of managing all customer information across a complex organization, while trying to minimize custom user interface programming.

The $180M Off-the-shelf Solution

For reasons that escape my understanding, IT managers hate programming. They gravitate readily towards “off the shelf” solutions, pieces of software that can be purchased, easily installed and set up, and left to run. This makes sense for well-understood pieces of functionality, like word processors and spreadsheets.

Where things start to go sideways is when “off the shelf” mania spreads into more and more murky problem spaces. As “off the shelf” solutions become more generic, they require substantial “configuration” stages before they can be left to run. (Astute cynics will note that “off the shelf” solutions always require “configuration” and never need “programming”.)
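As a caricature of what that “configuration” tends to look like in practice (a made-up example, not Siebel’s actual configuration format), picture a generic screen engine driven entirely by data:

```python
# Hypothetical example: behaviour pushed into "configuration" that a generic
# engine interprets at runtime, instead of being written directly in code.
SCREEN_CONFIG = {
    "entity": "case",
    "fields": [
        {"name": "client_name", "type": "text",   "required": True},
        {"name": "risk_level",  "type": "choice", "options": ["low", "med", "high"]},
        {"name": "next_visit",  "type": "date",   "visible_if": {"risk_level": "high"}},
    ],
}

def render(config, record):
    # the generic engine decides what to show by walking the config
    for f in config["fields"]:
        condition = f.get("visible_if", {})
        if all(record.get(k) == v for k, v in condition.items()):
            print(f"{f['name']:12} [{f['type']}] = {record.get(f['name'], '')}")

render(SCREEN_CONFIG, {"client_name": "J. Doe", "risk_level": "high", "next_visit": "2012-08-01"})
```

The logic lives in data that the vendor’s engine interprets, so on paper it is all still “configuration”, however complex it gets and however much it starts to resemble programming in a clumsier language.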

I have to admit, case management feels kind of like an off-the-shelf problem. It’s just paper shuffling, it’s the kind of thing 100s of other jurisdictions do. Rather than building a custom case management system, it makes sense to buy an existing one, right?

Yes and no, apparently. If you’re going to spend $180M configuring your new system, the savings you got by starting from an “off the shelf” solution start to seem pretty illusory, particularly if one of the drawbacks you accept by using “off the shelf” software is a more complex user experience than you could have provided with standard programming tools.

A $180M Black Box

If an “off the shelf solution that requires $180M in configuration” sounds like a contradiction in terms, how about “a solution that requires no programming that no one can understand”?

Victoria is absolutely lousy with software developers who understand Java and SQL and HTML and JavaScript, the kinds of tools you might use to build a case management system “from scratch”.

However, there are very few IT people in Victoria who know how to configure Siebel. In fact, there are so few Siebel people in Victoria that, in order to get access to a top notch Siebel team, the ICM project has been flying in experts from abroad. They stay in Victoria for the week, then return home on the weekend.

Unfortunately, the choice of Siebel doesn’t just affect staffing availability, it also affects the future flexibility of the ICM system.

Siebel is one big, tightly coupled piece of software. Interfaces depend on database structures and configurations. Upgrades will mean upgrading everything at once: in the presence of $180M worth of configuration and customization, I wonder if the Siebel software will ever get upgraded. The risk of failure will be too high. That means that the system you see today is the system you’ll see 10 years from now. Do you know what “good” technology will look like in 10 years? Me neither. But it won’t look like ICM.
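For contrast, here is a sketch of what one small, loosely coupled piece might look like (hypothetical names, not a description of ICM): consumers call a narrow, versioned interface and never touch the tables underneath, so the implementation can be upgraded or replaced on its own schedule.

```python
# Hypothetical sketch of a narrow service interface: callers see a stable,
# versioned JSON contract, never the underlying product's schema.
import json

def get_case(case_id):
    # implementation detail, free to change without breaking callers
    row = {"id": case_id, "client": "J. Doe", "status": "open"}
    return json.dumps({"version": 1, "case": row})

# A tightly coupled consumer would instead query the vendor's tables directly,
# and break the moment the underlying product is upgraded or reconfigured.
print(get_case(42))
```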

The End?

Major IT projects never fail. Never. At least, not until the CIO moves on to a new job. Then the truth can be spoken.

The industry experience with Siebel has been so bad that “CRM Magazine” actually writes of a “Siebel effect”:

Fair or not, Siebel Systems—the CRM pioneer later acquired by Oracle—is often considered the poster child for failed CRM system implementations. I’ve heard dozens of war stories about overaggressive and excessively large CRM implementations that did not come close to realizing their promised benefits—and wasted a lot of money …. The so-called “Siebel effect” is a fear of large, expensive, and often very lengthy projects intended to revamp all aspects of an enterprise’s customer-facing solutions.

With luck, BC’s “Siebel effect” will cause us to re-value building small, interoperable systems. Big, reliable systems (like Amazon, or Facebook, or Netflix) can be engineered from small, comprehensible, interoperable pieces, using smaller, more flexible teams. And if we encourage in-house technical expertise, we’ll give government the deep technical understanding it needs to evolve systems over time, instead of committing to failure-prone IT mega-projects like ICM.

Let’s give it a try. It can’t be worse than the alternative.

Web Architectures for SQL

Like bicycles for fishes and tuxedos for monkeys, web architectures for SQL don’t get no respect. I aim to change that.