Broken Enterprise IT

For as long as I have been doing technology, I have watched enterprise IT and said to myself “this seems incredibly broken, why do people put up with this?” This includes the period when I was doing consulting and running teams that specifically had to mould their reporting and processes to fit into the requirements of government (enterprise) IT. Actually doing it didn’t make it seem any more sane.

At this point, I should explain what seemed wrong. The processes around IT actions in the enterprise environment (and my experience was in government, but my glancing exposure to Fortune 500 environments didn’t uncover any great differences there) all seemed geared towards slowing down progress: the many layers of approvals and plans, the disconnection between operations and development, the migration of core technical decisions upwards to non-technical management staff.

Compare the outputs of a start-up company, turning out an aesthetically pleasing piece of software usable by ordinary folks within a few months, to those of an enterprise system-building team that may or may not deliver something ugly and slow in a couple of years. What’s different? The people are generally of similar capability and intelligence. But the culture and processes they are embedded in are radically different.

As a 25-year-old smart-ass I was always pretty sure a team of 5 working outside the enterprise IT structure could produce a better system faster than a team of any size working within it. As a 40-year-old smart-ass, for some reason I still haven’t altered that conviction. I do, however, have more sympathy for the folks working within the system. As a 25-year-old I naturally assumed they were just stupid. As a 40-year-old I now see them as trapped: very smart, capable people, stuck within a system that seems built to frustrate and block them from actually Making Their Computers Do Useful Things for Other People.

The genesis of the enterprise IT homunculus seems semi-explainable. If you put a large collection of detail-oriented control freaks (hello, computer people) into a single organization, the need of the (detail-oriented, control-freak) management to avoid non-linearity in their organization will naturally lead to layer upon layer of process and reporting. Otherwise, how can the CIO sleep at night?

But even if the genesis is explainable, that still doesn’t make the result good, or the end state desirable. This is still a system wasting the potential of most of its participants. The non-technical masters of these organizations (CEOs, other upper managers) see clearly that they are broken (they manifestly cannot deliver), but the fixes only seem to make things worse: my organization is broken, so I’ll outsource to another organization; my organization is broken, so I’ll bring in consultants (who carry the same organizational biases); my organization is broken, so I’ll go ISO9000 (embrace the dysfunction!).

The British Columbia government seems to be in the middle of the outsourcing “fix” right now. The remains of the central IT department are being reviewed for their incredibly high internal pricing, while major server and network infrastructure is being outsourced to HP and shipped out of province. The irony is that, should the central IT department be found “too inefficient”, the “solution” will be another round of outsourcing to a contractor who will end up being (contractually) largely immune to any review.

I think we are only a few years from a serious IT revolution in the provincial government, and by revolution I mean the peasants are going to take up their pitchforks and torches and storm the keep. More on that tomorrow.

In the meanwhile, I am collecting stories of IT transformations that have actually worked. It seems like there are very few, if any, such stories out there. The super-star CIOs seem to rely on various combinations of smoke and mirrors to burnish their reputations; very few seem to have attacked the root of the IT cultures in their organizations. But I would love to hear some stories about those who did and succeeded (or failed).

Keynote May

It’s a little too late now to promote my keynote presentation at ASPRS 2011, since I already gave it on Tuesday… but! I can still promote my upcoming keynote at PgCon, which I’m looking forward to immensely (and getting nervous about in equal measure).

The ASPRS talk went well (I thought), delivering a little capsule history of open source and reasons why managers might (should) want to start adopting it (I had to shorten my Duluth version of the talk a bit to fit the ASPRS time slot). The reviews afterwards were positive (Twitter is an attention-hog’s wet dream (said the attention-hog)).

The PgCon talk will be a new challenge. I’m usually generalizing my technical knowledge up for a less specialized audience of geo-people. But for PgCon I have to narrow and focus my general geo-knowledge down for a very specialized audience of PostgreSQL database developers. A room full of the foremost experts and developers of PostgreSQL is the most daunting audience I’ve faced since my graduate oral exams.

However, my reward after the keynote will be two days of attending talks by those same foremost experts, so I’m really looking forward to PgCon! Hope to see you there!

Spatial Partitioning in PostGIS

I’ve been meaning for a long time to see what an implementation of spatial partitioning in PostGIS would look like, and a trip next week to the Center for Topographic Information in Sherbrooke has given me the excuse to try a toy implementation.

Imagine map data constrained to a 1km square. Here is an example that partitions that square into a left side and a right side, and then inserts data into the appropriate table as it comes in. Features that straddle the two halves get put into a third special-case table for handling overlaps.

-- parent table
create table km (
    id integer,
    geom geometry
);

-- left side
create table km_left (
    check ( _st_contains(st_makeenvelope(0,0,500,1000,-1), geom) )
) inherits (km);

-- right side
create table km_right (
    check ( _st_contains(st_makeenvelope(500,0,1000,1000,-1), geom) )
) inherits (km);

-- border overlaps
create table km_overlaps (
    check ( _st_intersects(st_makeline(st_makepoint(500,0),st_makepoint(500,1000)), geom) )
) inherits (km);

-- indexes
create index km_left_gix on km_left using gist (geom);
create index km_right_gix on km_right using gist (geom);
create index km_overlaps_gix on km_overlaps using gist (geom);

-- direct insert to appropriate table
CREATE OR REPLACE FUNCTION km_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF ( _st_contains(st_makeenvelope(0,0,500,1000,-1),NEW.geom) ) THEN
        INSERT INTO km_left VALUES (NEW.*);
    ELSIF ( _st_contains(st_makeenvelope(500,0,1000,1000,-1),NEW.geom) ) THEN
        INSERT INTO km_right VALUES (NEW.*);
    ELSIF ( _st_intersects(st_makeline(st_makepoint(500,0),st_makepoint(500,1000)),NEW.geom) ) THEN
        INSERT INTO km_overlaps VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Geometry out of range.';
    END IF;
    -- returning NULL stops the row from also landing in the parent table
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_km_trigger
    BEFORE INSERT ON km
    FOR EACH ROW EXECUTE PROCEDURE km_insert_trigger();

-- add some data
insert into km (id, geom) values (1, 'POINT(50 50)');                 -- left half: goes to km_left
insert into km (id, geom) values (2, 'POINT(550 50)');                -- right half: goes to km_right
insert into km (id, geom) values (3, 'LINESTRING(250 250, 750 750)'); -- straddles the border: goes to km_overlaps

-- see where it lands
select * from km;        -- inheritance returns the rows from all the child tables
select * from km_right;  -- just the right-side point
select * from km where st_contains('POLYGON((20 20,20 60,60 60,60 20,20 20))',geom);  -- query window entirely inside the left half

It’s still a toy; I need to put more data in it and see how well the indexes and constraint exclusion mechanisms actually work in this case.
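
As a starting point, here’s a minimal sketch of the kind of check I have in mind (the query window below is just an example I made up): turn on constraint exclusion and inspect the query plan to see whether the planner skips partitions whose check constraints can’t match the query. Whether it can actually prove that from these function-based constraints is exactly the open question.

-- enable constraint exclusion and look at the plan for a query
-- restricted to the left half of the square
set constraint_exclusion = on;

explain
select * from km
where geom && st_makeenvelope(0, 0, 500, 1000, -1);

-- if exclusion works, km_right should not appear in the plan;
-- if it doesn't, all three child tables will be scanned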

ESRI FGDB API: No Servers need Apply

When downloading the ESRI file-based geodatabase API, you are required to accept a license agreement which includes this lovely clause in the section about acceptable uses of the API:

“Single Use License.” Licensee may permit a single authorized end user to install and use the Software, Data, and Documentation on a single computer for use by that end user on the computer on which the Software is installed. Remote access is not permitted. Licensee may permit the single authorized end user to make a second copy for end user’s exclusive use on a portable computer as long as only one (1) copy of the Software, Data, and Documentation is in use at any one (1) time. No other end user may use the Software, Data, or Documentation under the same license at the same time for any other purpose.

Note, only one user may use the software, and remote access is not permitted. That is, you can’t run a mapping server on top of this API, because then multiple users would be using it, remotely. Impressive. Point to Redlands. I will console myself in the usual way: “better than nothing.”

Update: Martin Daly asked me to take a second look, and someone on Twitter asked if the clause was boilerplate. On second examination, it could simply be boilerplate from other desktop-oriented software, reused without thought as to how it muddies the uses for non-desktop systems. The full text is here for those with ESRI logins. Never attribute to malice what you can attribute to accident: another good rule to live by (but how empty I would be without my little conspiracy theories!). The precautionary principle would dictate not web-serving with it until/unless it is clarified, but in the meanwhile bulk loaders and translators could certainly be put together.

Update 2: Though I find the license language somewhat unclear, the intent appears to be to allow any application to sit on top of the API. Crisis averted, situation normal.

Update 3: Just got off the plane, and the ESRI folks have left me a nice voice mail explaining that the intent of the clause is with respect to developers downloading it (once, for one person), not for uses of derived products. The moral of the story is that ESRI people are pretty darn nice, and I am not so very nice. Something for me to work on in the new year.

Update 4: Received email from ESRI confirming that the final license will be reviewed to ensure there are no ambiguities and that it reflects their intent: that the API be usable by any application in any application category, and that derived products be freely redistributable and royalty free. So to the extent that the current license has any ambiguity, it shouldn’t be taken as a red flag that the final one will share it.

BC: Power up your Vote!

I gave this spiel at the Victoria GeoGeeks meet-up last week, but it’s worth giving to the world at large: if you want your vote to count in choosing your representatives, exercise it at every opportunity! For those in the USA, that means participating in primary elections, and for us in British Columbia right now that means participating in the party leadership contests.

When you participate in early elections like primaries or leadership races, your vote gets more powerful.

In the last British Columbia election (2009), 1,651,567 people voted to choose the government. But they didn’t get to choose the Premier; they just got to choose a party or a local candidate. The next Premier of BC will be the leader of either the BC Liberals or the BC NDP. The combined membership voting in those leadership races will probably be on the order of 60,000. So a vote cast in the leadership contests is roughly 27 times (1,651,567 / 60,000) more determinative of who will be the Premier than a General Election vote.

Who wouldn’t want their vote to be 27 times more powerful?!? But here’s the rub: only party members get to vote.

For the BC Liberals, the election will occur on February 26, but the cut-off for membership is 5PM on February 4. You can sign up online here ($10 annual membership).

For the BC New Democrats, the election will occur on April 17, but the cut-off is midnight on January 17, and you can sign up online here ($10 minimum donation).

If you live in BC, this is your chance to have an outsized influence on who makes the decisions for our future. Get to it! I have!