IT Revolution

Back when I first started working in the British Columbia government, in 1993, there existed a Crown Corporation called “BC Systems Corporation”, BCSC. Through the 1980s, major pieces of government operations (timber tenuring, financial tracking, health insurance) had been computerized using the finest technology of the era (mostly mainframes) and BCSC had grown up as the central IT branch of the government.

They ran the big machines and the terminals that accessed them, and were taking on the newfangled PCs increasingly showing up on the desks of drones like me (I answered correspondence, there still being a brisk trade in “mail” at the time).

The extent to which BCSC was wedded to the technology of the previous era can best be understood by noting that one of their last major accomplishments was writing a web browser, Charlotte, for the 3270 green-screen terminal.

Being stocked full of the finest minds in computerdom, BCSC briskly alienated pretty much all the folks it was supposed to be serving. Its services were too expensive, the old mainframe systems too inflexible, the response times too long. Shortly after I left government to finish my graduate degree, BCSC was blown up and its functions distributed to IT departments in the Ministries themselves.

And so the process began anew. Just as BCSC had stitched together systems from multiple ministries into a centralized empire over a decade, so too did the centralizing imperative immediately begin its work once again. First e-mail. Then major data centre needs. First by choice, then by fiat, the Ministry IT departments were gradually stripped of line responsibility for their systems and left with a residue of business analysts and managers.

A new wrinkle was added this time around, though: rather than centralizing entirely to another agency, many of the services were centralized and then bundled off to outside companies. So desktop support was re-centralized and taken over by IBM. The result has been service just as poor as in the old days (worse, in fact! Back in the BCSC days there was a PC tech co-located right on our office floor, a luxury most government workers could hardly imagine now) with prices just as high.

BCSC was blown up when Ministries realized that they could provide the same services for themselves that BCSC did using PCs, and started doing so. Once the mainframe boys weren’t necessary any more, they were dispensed with, joyfully. Ministries wanted to Get Things Done, and the main role of BCSC was to say You Can’t Do That.

Today we have another round of folks saying You Can’t Do That. The desktop outsource vendor says you can’t have a new computer, and if you do get one, you can’t install any software on it. The server outsource vendor says you can’t afford a system to run your code on. Or if you can, then you need to go through a three-month process to get the server first. And the CIO’s office says you can’t run that programming language because it’s not government standard.

And here comes the cloud. Just as the PC freed the Ministries from the mainframe boys, the cloud is going to free folks from IT once again. Need a web site? Need a small system? Maybe an App Engine deployment. Need some data processing? Maybe an EC2 server will do the trick. It’s all well under credit card spending limits. (My first government supervisor, bless her heart, bought me a copy of MS Access on her department credit card so I could avoid the All Databases Must Be Oracle directive and actually Get Some Work Done On My Computer).

The revolution is coming; it’ll start quiet, but it’ll get ugly. Hopefully soon.

Postscript: Michael Weisman, in a comment on my last post, points to an article in the Washington Post about superstar CIO Vivek Kundra (coincidentally, the topic of another upcoming post of mine). Says Mr. Kundra, “People have better access to information technology at their homes than they do at work, and that’s especially true in the public sector”. And another great quote from the article: Asked what he typically hears from workers about government- or corporate-provided technology, Kundra said, “It’s not a question of whether they don’t like it. They despise it.”

Kundra seems to have hit upon a winning formula: stop saying “no” and start saying “yes”. The career risk of saying “no” is now higher than the risk of saying “yes”. Welcome to the new conservatism. Give the people what they want, or things will get… unpleasant.

Broken Enterprise IT

For as long as I have been doing technology, I have watched enterprise IT and said to myself “this seems incredibly broken, why do people put up with this?” This includes the period when I was doing consulting and running teams that specifically had to mould their reporting and processes to fit into the requirements of government (enterprise) IT. Actually doing it didn’t make it seem any more sane.

At this point, I should explain what seemed wrong. The processes around IT actions in the enterprise environment (and my experience was in government, but my glancing exposure to Fortune 500 environments didn’t uncover any great differences there) all seemed geared towards slowing down progress: the many layers of approvals and plans, the disconnection between operations and development, the migration of core technical decisions upwards to non-technical management staff.

Compare the outputs of a start-up company, turning out an aesthetically pleasing piece of software usable by ordinary folks within a timeline of a few months, to those of an enterprise system-building team that may or may not deliver something ugly and slow in a couple of years. What’s different? The people are generally of similar capability and intelligence. But the culture and processes they are embedded in are radically different.

As a 25-year-old smart-ass I was always pretty sure a team of 5 working outside the enterprise IT structure could produce a better system faster than a team of any size working within it. As a 40-year-old smart-ass, for some reason I still haven’t altered that conception. I do, however, have more sympathy for the folks working within the system. As a 25-year-old I naturally assumed they were just stupid. As a 40-year-old I now see them as trapped. Very smart, capable people, stuck within a system that seems built to frustrate and block them from actually Making Their Computers Do Useful Things for Other People.

The genesis of the enterprise IT homunculus seems semi-explainable. If you put a large collection of detail-oriented control freaks (hello, computer people) into a single organization, the need of the (detail-oriented, control-freak) management to avoid non-linearity in their organization will naturally lead to layer upon layer of process and reporting. Otherwise, how can the CIO sleep at night?

But even if the genesis is explainable, that still doesn’t make the result good, or the end state desirable. This is still a system wasting the potential of the great majority of its participants. The non-technical masters of these organizations (CEOs, other upper managers) see clearly that they are broken (they manifestly cannot deliver), but the fixes only seem to make things worse: my organization is broken, so I’ll outsource to another organization; my organization is broken, so I’ll bring in consultants (who carry the same organizational biases); my organization is broken, so I’ll go ISO9000 (embrace the dysfunction!).

The British Columbia government seems to be in the middle of the outsourcing “fix” right now. The remains of the central IT department are being reviewed for their incredibly high internal pricing, and meanwhile major server and network infrastructure is being outsourced to HP and shipped out of province. The irony is that, should the central IT department be found to be “too inefficient”, the “solution” will be another round of outsourcing to a contractor who will end up being (contractually) largely immune to any review.

I think we are only a few years from a serious IT revolution in the provincial government, and by revolution I mean the peasants are going to take up their pitchforks and torches and storm the keep. More on that tomorrow.

In the meanwhile, I am collecting stories of IT transformations that have actually worked. There seem to be very few, if any, out there. The superstar CIOs seem to rely on various combinations of smoke and mirrors to burnish their reputations; very few seem to have attacked the root of the IT cultures in their organizations. But I would love to hear some stories about those who did and succeeded (or failed).

Keynote May

It’s a little too late now to promote my keynote presentation at ASPRS 2011, since I already gave it on Tuesday… but! I can still promote my upcoming keynote at PgCon, which I’m looking forward to immensely (and getting nervous about in equal measure).

The ASPRS talk went well (I thought), delivering a little capsule history of open source and reasons why managers might (should) want to start adopting it (I had to shorten my Duluth version of the talk a bit to fit the ASPRS time slot). The reviews afterwards were positive (Twitter is an attention-hog’s wet dream (said the attention-hog)).

The PgCon talk will be a new challenge. I’m usually generalizing my technical knowledge up for a less specialized audience of geo-people. But for PgCon I have to narrow and focus my general geo-knowledge down for a very specialized audience of PostgreSQL database developers. Having a room full of the foremost experts and developers of PostgreSQL to talk to is the most daunting audience I’ve faced since my graduate oral exams.

However, my reward after the keynote will be two days of attending talks by those same foremost experts, so I’m really looking forward to PgCon! Hope to see you there!

Spatial Partitioning in PostGIS

I’ve been meaning for a long time to see what an implementation of spatial partitioning in PostGIS would look like, and a trip next week to the Center for Topographic Information in Sherbrooke has given me the excuse to try a toy implementation.

Imagine map data constrained to a 1km square. Here is an example that partitions that square into a left side and a right side, and then inserts the data into the appropriate table as it comes in. Features that straddle the two halves get put into a third special-case table for handling overlaps.

-- parent table
create table km (
id integer,
geom geometry
);

-- left side
create table km_left(
check ( _st_contains(st_makeenvelope(0,0,500,1000,-1),geom ) )
) inherits (km);

-- right side
create table km_right(
check ( _st_contains(st_makeenvelope(500,0,1000,1000,-1),geom ) )
) inherits (km);

-- border overlaps
create table km_overlaps(
check ( _st_intersects(st_makeline(st_makepoint(500,0),st_makepoint(500,1000)),geom ) )
) inherits (km);

-- indexes
create index km_left_gix on km_left using gist (geom);
create index km_right_gix on km_right using gist (geom);
create index km_overlaps_gix on km_overlaps using gist (geom);

-- direct insert to appropriate table
CREATE OR REPLACE FUNCTION km_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( _st_contains(st_makeenvelope(0,0,500,1000,-1),NEW.geom) ) THEN
        INSERT INTO km_left VALUES (NEW.*);
    ELSIF ( _st_contains(st_makeenvelope(500,0,1000,1000,-1),NEW.geom) ) THEN
        INSERT INTO km_right VALUES (NEW.*);
    ELSIF ( _st_intersects(st_makeline(st_makepoint(500,0),st_makepoint(500,1000)),NEW.geom) ) THEN
        INSERT INTO km_overlaps VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Geometry out of range.';
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_km_trigger
    BEFORE INSERT ON km
    FOR EACH ROW EXECUTE PROCEDURE km_insert_trigger();

-- add some data
insert into km (id, geom) values (1, 'POINT(50 50)');
insert into km (id, geom) values (2, 'POINT(550 50)');
insert into km (id, geom) values (3, 'LINESTRING(250 250, 750 750)');

-- see where it lands
select * from km;
select * from km_right;
select * from km where st_contains('POLYGON((20 20,20 60,60 60,60 20,20 20))',geom);

It’s still a toy; I need to put more data into it and see how well the indexes and constraint exclusion mechanisms actually work in this case.
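For what it’s worth, a quick way to check whether the planner is actually pruning the child tables is to turn on constraint exclusion and look at the query plan. The following is just a sketch of the kind of test I have in mind; the envelope in the WHERE clause is an arbitrary box that falls entirely in the left half of the square.

-- make sure the planner considers the child tables' check constraints
set constraint_exclusion = on;

-- if exclusion is working, the plan should only scan km_left (and
-- perhaps km_overlaps); if all three child tables show up, the planner
-- can't reason about the spatial check constraints
explain
select * from km
where _st_contains(st_makeenvelope(0,0,500,1000,-1), geom);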

ESRI FGDB API: No Servers need Apply

When downloading the ESRI file-based geodatabase API, you are required to accept a license agreement which includes this lovely clause in the section about acceptable uses of the API:

“Single Use License.” Licensee may permit a single authorized end user to install and use the Software, Data, and Documentation on a single computer for use by that end user on the computer on which the Software is installed. Remote access is not permitted. Licensee may permit the single authorized end user to make a second copy for end user’s exclusive use on a portable computer as long as only one (1) copy of the Software, Data, and Documentation is in use at any one (1) time. No other end user may use the Software, Data, or Documentation under the same license at the same time for any other purpose.

Note, only one user may use the software, and remote access is not permitted. That is, you can’t run a mapping server on top of this API, because then multiple users would be using it, remotely. Impressive. Point to Redlands. I will console myself in the usual way: “better than nothing.”

Update: Martin Daly asked me to take a second look, and someone Twittered asking if the clause was boilerplate. On second examination, it could simply be boilerplate used for other desktop-oriented software that has been used without thought as to how it muddies the uses for non-desktop systems. The full text is here for those with ESRI logins. Never attribute to malice what you can attribute to accident, another good rule to live by (but how empty I would be without my little conspiracy theories!). The precautionary principle would dictate not web-serving with it until/unless it is clarified, but in the meanwhile bulk loaders and translators could certainly be put together.

Update 2: Though I find the license language somewhat unclear, the intent appears to be to allow any application to sit on top of the API. Crisis averted, situation normal.

Update 3: Just got off the plane and the ESRI folks have left me a nice voice mail explaining that the intent of the clause is with respect to developers downloading it (once, for one person) not for uses of derived products. The moral of the story is that ESRI people are pretty darn nice, and I am not so very nice. Something for me to work on in the new year.

Update 4: Received email from ESRI confirming that the final license will be reviewed to ensure there are no ambiguities and that it reflects their intent: that the API be usable by any application in any application category, and that the derived product be freely redistributable and royalty free. So to the extent that the current license has any ambiguity, it shouldn’t be taken as a sign that the final one will.