Thursday, January 08, 2015

Can Procurement do Agile?

Some very interesting news out of the USA today, as the GSA (the biggest, baddest procurement agency in the world) has released a Request For Information (RFI) on agile technology delivery.

To shift the software procurement paradigm, GSA’s 18F Team and the Office of Integrated Technology Services (ITS) is collaborating on the establishment of a BPA that will feature vendors who specialize in Agile Delivery Services. The goal of the proposed BPA is to decrease software acquisition cycles to less than four weeks (from solicitation to contract) and expedite the delivery of a minimum viable product (MVP) within three months or less.

In a wonderful "eat your own dogfood" move, the team working on building this new procurement vehicle are themselves adopting agile practices in their own process. Starting small with a pilot, working directly with the vendors who will be trying the new vehicle, etc. If the hidebound old GSA can develop a workable framework for agile procurement, then nobody else has an excuse.

(The reason procurement agencies have found it hard to specify "agile" is that agile by design does not define precise deliverables in advance, so it is damnably hard to fit into a "fixed cost bid" structure. In places where time-and-materials vehicles are already in place, lots of government organizations are already working with vendors in an agile way, but for the kinds of big, boondoggle-prone capital investment projects I write about, the waterfall model still predominates.)

Tuesday, December 30, 2014

Give in, to the power of the Client Side...

Brian Timoney called out this Tom MacWright quote, and he's right: it deserves a little more discussion:

…the client side will eat more of the server side stack.

To understand what "more" there is left to eat, it's worth enumerating what's already been eaten (or, which is being consumed right now, as we watch):

  • Interaction: OK, so this was always on the client side, but it's worth noting that the impulse towards using a heavyweight plug-in for interaction is now pretty much dead. The detritus of plug-in based solutions will be around for a long while, inching towards end-of-life, but not many new ones are being built. (I bet some are, though, in the bowels of organizations where IE remains an unbreakable corporate standard.)
  • Single-Layer Rendering: Go back almost 10 years and you'll find OpenLayers doing client-side rendering, though using some pretty gnarly hacks at the time. Given the restrictions in rendering performance, a common way to break down an app was a static, tiled base map with a single vector layer of interest on top. (Or, for the truly performance oriented, a single raster layer on top, only switching to vector for editing purposes.) With modern browser technology, and good implementations, rendering very large numbers of features on the client has become commonplace, to the extent that the new bottleneck is no longer the CPU, it's the network.
  • All-the-layers Rendering: Already shown in principle by Google Maps, tiled vector rendering has moved rapidly over the last 12 months from wow-wizzy-demo to oh-no-not-that-again status. Rather than rendering to raster on the server side, send a simplified version of the data to the client for rendering there. For base maps there's not a *lot* of benefit over pre-rendered raster, but there is some: dynamic labelling means orientation is completely flexible and allows for multiple labelling options; and simplified vector tiles can serve a wider range of zoom levels while remaining attractively rendered, so the all-important network bandwidth issues can be addressed for mobile devices.
  • "GIS" operations: While large scale analysis is not going to happen on a web page, a lot of visual effects that were otherwise hard to achieve can now be pushed to the client. Some of the GIS operations are actually in support of getting attractive client-side rendering: voronoi diagrams can be a great aid to label placement; buffers come in handy for cartography all the time.
  • Persistence: Browser storage isn't really designed for the long term, but since any mobile application on a modern platform now has access to a pretty large storage area, there's nothing stopping these new "client" applications from wandering far and completely untethered from the server/cloud for long periods of time.
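
As a small illustration of that kind of in-browser cartographic GIS work, here's a minimal sketch using Turf.js. The library choice, the sample points, the bounding box and the buffer distance are all assumptions made up for the example, not anything from a real project.

import * as turf from '@turf/turf';

// Hypothetical label points we would like to place attractively (coordinates approximate).
const labelPoints = turf.points([
  [-119.59, 49.49], // Penticton
  [-119.47, 49.03], // Osoyoos
  [-118.44, 49.03], // Grand Forks
]);

// Voronoi polygons partition the map, giving each label a cell of its own to sit in.
const cells = turf.voronoi(labelPoints, { bbox: [-121, 48, -118, 50.5] });

// Small buffers around the points make handy keep-out zones for other cartography.
const keepOut = labelPoints.features.map((pt) =>
  turf.buffer(pt, 2, { units: 'kilometers' })
);

console.log(cells.features.length, keepOut.length);

Nothing here touches a server: the Voronoi cells and buffers are computed on the client, ready to feed straight into whatever is doing the rendering.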

Uh, what's left?

  • Long term storage and coordination remain. If people are going to work together on data, they need a common place to store and access their information from.
  • Large scale data handling and analysis remain, thanks to those pesky narrow network pipes, for now.
  • Coordination between devices still requires a central authority. Though maybe not for long: with web sockets, I'm sure some JavaScript wizard has already cooked up a browser-to-browser peer-to-peer scheme, so the age of the fully distributed OpenStreetMap will catch up with us eventually.

Have I missed any?

Once all applications are written in 100% JavaScript we will have finally achieved the vision promised to me back in 1995, a write-once, run-anywhere application development language, where applications are not installed but are downloaded as needed over the network (because "the network is the computer"). Just turns out it took 20 years longer and the language and virtual machine are different (and there's this strange "document" cruft floating around, a coccyx-like evolutionary remnant people will be wondering about for years).

Wednesday, December 03, 2014

Building CUnit from Source

I haven't had to build CUnit myself for a while, because most of the systems I work with have it in their packaged software repositories, but it's not there for Solaris, and, it turns out, it's quite painful to build!

  • First, the latest version on sourceforge, 2.1.3, is incorrectly bundled, with files (config.h.in) missing, so you have to use 2.1.2.
  • Second, the directions in the README don't include all the details of what autotools commands need to be run.

Here are the commands I finally used to get a build. Note that you do need to run libtoolize to get some missing support scripts installed, and that you also need to run automake in "add missing" mode to pull in yet more support scripts. Then and only then do you get a build.

# fetch and unpack the 2.1-2 source bundle (the 2.1.3 bundle is missing config.h.in)
wget http://downloads.sourceforge.net/project/cunit/CUnit/2.1-2/CUnit-2.1-2-src.tar.bz2
tar xvfj CUnit-2.1-2-src.tar.bz2
cd CUnit-2.1-2
# libtoolize and "automake --add-missing" install the support scripts the bundle lacks
libtoolize -f -c -i \
&& aclocal \
&& autoconf \
&& automake --gnu --add-missing \
&& ./configure --prefix=/usr/local \
&& make \
&& make install

Update: The bootstrap file does provide the required autotools flags.

Tuesday, December 02, 2014

The Tyranny of Environment

Most users of PostGIS are safely ensconced in the world of Linux, and their build/deploy environments are pretty similar to the ones used by the developers, so any problems they might experience are quickly found and removed early in development.

Then there are the users on Windows. They are our most numerous user base, so we at least test that platform preemptively before release and make sure it is as good as we can make it.

And then there's the rest. We've had a passel of FreeBSD bugs lately, and I've found myself doing Solaris builds for customers, and don't get me started on the poor buggers running AIX. One of the annoyances of trying to fix a problem for a "weird platform" user is just getting the platform set up and running in the first place.

So, having recently learned a bit about Vagrant, and seeing that some of the "weird" platforms have boxes already, I thought I would whip off a couple of Vagrant configurations so it's easy in the future to throw up a Solaris or FreeBSD box, or even a quick CentOS box, for debugging purposes.

I've just been setting up my Solaris Vagrantfile and using my favourite Solaris crutch: the OpenCSW software repository. But as I use it, I'm not just adding the "things I need", I'm implicitly choosing an environment:

  • my libxml2 is from OpenCSW
  • so is my gcc, which is version 4, not version 3
  • so is my postgres

This is convenient for me, but what are the chances that it'll be the environment used by someone on Solaris having problems? They might be compiling against libraries from /usr/sfw/bin, or using the Solaris gcc-3 package, or any number of other variants. At the end of the day, when testing on such a Solaris environment, will I be testing against a real situation, or a fantasyland of my own making?

For platforms like Ubuntu (apt) or Red Hat (yum) or FreeBSD (ports), where there is One True Way to get software, the difficulties are smaller, but even then there is no easy way to get a "standard environment", or to quickly replicate the combination of versions a user might have run into that is causing problems (libjson is a current source of pain). DLL hell has never really gone away; it has just found new ways to express itself.

(I will send a psychic wedgie to anyone who says "docker", I'm not kidding.)

Friday, November 21, 2014

What to do about Uber (BC)

Update: New York and Chicago are exploring exactly this approach, as reported in the New York Times: "Regulators in Chicago have approved a plan to create one or more applications that would allow users to hail taxis from any operators in the city, using a smartphone. In New York, a City Council member proposed a similar app on Monday that would let residents “e-hail” any of the 20,000 cabs that circulate in the city on a daily basis."


Nick Denton has a nice little article on Kinja about Uber and how they are slowly taking over the local transportation market in the cities where they have been allowed to operate.

it's increasingly clear that the fast-growing ride-hailing service is what economists would call a natural monopoly, with commensurate profitability... It's inevitable that one ride-sharing service will dominate in each major metropolitan area. Neither passengers nor drivers want to maintain accounts with multiple services. The latest numbers on [Uber], show a business likely to bring in nearly $1bn a month by this time next year, far ahead of any competitor

BC has thus far resisted the encroachment of Uber, but that cannot last forever, and it shouldn't: users of taxis in Vancouver aren't getting great service, and that's why there's room in the market for Uber to muscle in.

Like Denton, I see Uber as a mixed bag: on the one hand, they've offered a streamlined experience which is qualitatively better than the old taxi service; on the other, in setting up an unregulated and exploitative market for drivers, they've sowed the seeds of chaos. The thing is, many of the positive aspects of Uber are easily duplicable by existing transportation providers: app-based dispatching and payment aren't rocket science by any stretch.

As an American, Denton naturally reaches for the American solution to the natural monopoly: regulated private enterprise. In the USA, monopolists (electric utilities, for example) are allowed to extract profits, but only at a regulated rate. As Canadians, we have an additional option: the Crown corporation. Many of our natural monopolies, like electricity, are run by government-owned corporations.

Since most taxis are independently owned and operated anyways, all that a Crown taxi corporation would need to do is provide a central dispatching service, with enough ease-of-use to compete with Uber and its like. The experience of users would improve: one number to call, one app to use, no payment hassles, optimized routing, maybe even ride sharing. And the Crown corporation could use supply management to prevent a race to the bottom that would impoverish drivers and reduce safety on the roads.

There's nothing magical about what Uber is doing: they are arbitraging a currently inefficient system. But the system can save itself, and keep all its positive aspects, by recognizing that and reforming now. Bring on our next Crown corporation, "BC Dispatching".


Tuesday, November 04, 2014

My Tax Dollars at Work?

Some days I feel like I'm king of the world, and some days I feel like the weak little fool that I am, and today is one of the latter.

I have written previously about the upcoming systems re-write in the British Columbia natural resources sector. This is a big one. Over $100M will be spent on IT services and software. Before you spend that kind of money, you should be clear about what the business value is, about why this big moon-shot of a project is the right thing to do, relative to the other options.

You should prepare a business plan. I wonder what that looks like?

I know something about data management in the natural resources sector, so I was actually more interested in the implementation plans than the business plans. I FOI'ed:

Any business case documents or presentations; System development timelines or development plans; Software or product comparisons

Potentially, this could be a lot of material; at least the FOI person thought so. They suggested I narrow my request:

...they’ve stated that the first option, “starting from the original request but limiting to only documents delivered to Doug Say”, would be the best option for narrowing the request and making the fee smaller, as he would be the only one conducting the search and this would significantly reduce the amount of time that would be spent on searching for records...

So, they were OK with my more limited request, that's great. Until they send back the fee estimate:

28.5 hour(s) to locate and retrieve records @ $30.00/hour = $855.00
31.5 hour(s) to produce records @ $30.00/hour = $945.00
31.5 hour(s) to prepare and handle records for disclosure @ $30.00/hour = $945.00
1 disc(s) @ $4.00/disc = $4.00
2400 scanned electronic copies of paper records @ $0.10/page = $240.00
Total = $2,989.00

Fortunately, they accept VISA, Mastercard and Amex. Actually, no. Chastened, I reduce my request yet further:

I feel like I’m fishing with hand grenades here. Can I just have a copy of the business plan and delivery plan (project milestones and expected deliverables?) without an overview I’ll never figure out what to rationally ask for.

So now we're down to 1-2 documents, no need for a big expensive search, the fee dogs are called off. No doubt, I'll get my documents shortly. The deadline rolls around:

I am writing to ask that you grant a 30 business day extension to the above-noted FOI request. The additional 30 days are required in order to obtain further information from the Ministry as to the status of the records in their submission to Cabinet.

OK, I'm not a total a**hole, I can be patient here. And heck, they are making a good faith effort to ensure I can see the docs by checking them against cabinet submissions. What an awesome crew. 30 business days go by:

We are nearing the end of the review of the records for this request, however we will require some additional time to have it completed. I am writing to request that you grant a 10 business day extension so that we may provide a complete response to this request.

Again, I'm a man of infinite patience, particularly when a "complete response" is in the offing! By all means, squire, take your 10 business days! 10 business days go by:

Please see the attached documents in response to your FOI request with the Ministry of Forests, Lands and Natural Resource Operations.

Oh, frabjous day! Calooh! Calay! Let's open up our giftie and see what the FOI elves have brought!

  • A signature page.
  • A contacts page.
  • The first 3 lines of the table of contents, the remainder redacted because it constitutes "policy advice or recommendations". That's right, the table of contents is policy advice.
  • The first page of the executive summary.
  • The first page of the "narrative" section (with the middle part of that page redacted for "policy advice" reasons).
  • The first page of the "investment context" section (with the end part of that page redacted for "policy advice" reasons).

That's it! The remaining 185 pages of this 204 page business case are withheld! You want to know the reasoning behind the expenditure of over $100M on IT services and software? Too f***ing bad, chump.

Yep, feeling totally played for the chump. They will spend that $100M, by god, and they will spend it without anyone outside the "yes, minister, yes deputy, yes director" echo chamber seeing the rationale or commenting on it.

Update: I did vent a little back at the FOI officer (which, let's be clear, isn't really fair, he's just the messenger here):

I'm concerned that this "process" is a farcical waste of your time. If you're going to redact the entire document, save yourself the effort and tell me ahead of time, "BTW, the whole thing will be redacted". Why even pretend, at this point? Who reviewed this document and decided on the redactions?

To which he patiently replied:

I apologise for not making you aware ahead of time that very little information would be released. I was hoping by this time that more information could be provided to you, however the Ministry informed that nothing has been implemented with regard to the initiatives, so the majority of the records are required to be withheld [em. mine] at this time. Should you wish to request these records at a later time, please feel free to contact me before submitting your request to determine their status and whether or not more information would be made available at that time.

I'm not sure how the implementation or not of these initiatives bears on their releasability; perhaps an FOI expert could drop some wisdom in the comments. It seems like the ultimate Catch-22: "we'll explain the reasoning behind our decisions once they are set in stone and cannot be changed".

Tuesday, October 07, 2014

Failure never smelled so sweet

Politics/IT crossover moment! In his Globe & Mail column today, Gary Mason has this to say about the negotiations underway to bring LNG terminals to the BC coast:

Quick primer for the IT audience: in the last election the government promised 100s of thousands of jobs and 100s of billions of revenue (and amazingly, I'm not exaggerating those figures) should BC successfully seed an LNG "industry" on our coast. It was basically all they talked about in the campaign. Unsurprisingly, having placed all their political eggs in the LNG basket, the government is now at the mercy of the companies that are supposed to bring us this windfall, as Mason notes below.

There is an enormous amount at stake for the Premier and her government. From the outside, it appears B.C. needs Petronas and the others more than they need B.C. Ms. Clark comes out the loser if those companies walk away and her LNG dream evaporates. She can also lose if her government signs desperate deals [emphasis mine] that are deemed to be so slanted in favour of the project promoter that the province becomes a global laughingstock.

Actually no, as anyone on a failed IT project knows, after a desperate deal is signed, both the LNG proponent (vendor) and government (manager) will get together and sing the praises of the finished deal.

"It's a world class deal, and it was a tough negotiation, but we did it", the government will say.

"They drove a hard bargain, but we think we can make it work", the proponent will say.

The media at best will run a he said/she said with the government's claims against the opposition's analysis, and we'll all move on. The government won't be the loser in the case of a bad, desperate deal, the people of BC will.

Just as, when a vendor and a dependent manager deliver a shitty IT project and declare it the best thing since sliced chips, it's the users who suffer in the end.

Friday, October 03, 2014

Solving Boundary-Similkameen

When analyzing the new legislation governing redistricting in British Columbia this cycle, I noted in passing that the creation of three "protected regions" creates an isolated, unprotected region in the Okanagan. If there are districts that have unbalanced population in the Okanagan, the only way to solve the problem is moving population around inside the Okanagan.

Well, there is a district with unbalanced population: Boundary-Similkameen (37,840) is 30% below the provincial population average (54,369), and 36% below the average population of unprotected ridings (58,810). So about 15,000 people need to be added.

On the face of it, this is no problem: the other ridings in the region have excess population in the 4% to 14% range, so in theory there are lots of people to transfer to Boundary-Similkameen. The trouble is, people don't live in a nice uniform distribution over the whole land area of the region. They are clumped together.

Here's what the situation looks like now:

The commission cannot take population from the east or west, since those are both protected regions; the only direction to go is north. But to the north is Penticton, with a population of 33,000. The whole of Penticton cannot be added, so the only way to balance Boundary-Similkameen is going to be splitting Penticton in half.

Splitting communities in half, particularly ones much smaller in population than the district itself, is generally avoided by Canadian boundary commissions, because retaining jurisdictional integrity is one of the concerns they attempt to address (unlike in the US of A). One of the "failures" of the last commission was the splitting of Williams Lake between Cariboo North and South.

If the commission were drawing borders without the artificial restriction of the protected regions, it would be possible for Boundary-Similkameen to discard some of its communities on the east and west edges (into underpopulated ridings that need the help) and transform into a simple north-south oriented riding running from Penticton down to the US border.

However, that's not on. It's not the only conundrum the commission will be wrestling with, either.

Monday, September 22, 2014

PostGIS Feature Frenzy

A specially extended feature frenzy for FOSS4G 2014 in Portland. Usually I only frenzy for 25 minutes at a time, but they gave me an hour long session!

PostGIS Feature Frenzy — Paul Ramsey from FOSS4G on Vimeo.

Thanks to the organizers for giving me the big room and big slot!
