31 May 2016
“Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun. Orbiting this at a distance of roughly ninety-two million miles is an utterly insignificant little blue green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.”
- Douglas Adams, HHGTTG
25 May 2016
I like the internet: I use a lot of sites, and I don’t fear online shopping, discussion, communication or banking. As a result, I have a pretty healthy footprint of accounts, and I have been finding recently that my brain can no longer keep up.
So I’ve started using a password manager, LastPass, which works just fine.
Once I fully committed to it, the number of passwords managed started a slow and seemingly relentless climb, as over time I returned to all the many sites at which I had been forced to register accounts in the past. As a result, I’ve learned useful things:
- The number of sites I have accounts on is far larger than I thought. I have 40 entries in LastPass already, and I imagine I’m only about 2/3 of the way through adding all the sites I use more than once a year. Of those, easily a dozen are what you might call “important”: banks, brokerages, email, OAuth sources (Twitter, Facebook, Google), etc. Far more sites than I kept separate passwords for!
- The centrality and potential vulnerability of the email account is hard to overstate. For 90% of the accounts I have added, the first step was “reset the password”, since I had forgotten it. (In fact, that was my old access mechanism, since I accessed many sites so rarely.) And “reset” uses access to email as a source of proxy authentication. So, 0wn my email address, 0wn me, entirely. If you haven’t enabled two-factor authentication on your email account yet, you need to, because of this.
Usually, improving security means things get more inconvenient, but in the case of a password manager it has actually been a net improvement. No more time spent trying to remember which of the passwords in my limited brain key-chain I had used for a site. No more reset-password-and-wait at the many sites where I had no idea what the password was.
LastPass has been very good, and I have only one note/caveat. The password manager works by installing a plug-in into your browser, so if you use multiple browsers (I use both Chrome and Safari regularly) you’ll end up with multiple plugins. The preferences on those plugins are managed separately. Same thing if you use multiple computers (I do, desktop and laptop). Each plugin on each computer has separate preferences.
This is important because the LastPass preferences are, in my opinion, a little loose. Once you provide the master password, by default the password vault remains unlocked and available until you actually shut down your browser. I can go days without shutting down my browser. So, I changed that preference to a 15-minute time-out instead. But I had to change it on every browser and on every computer I own, which was not intuitive, since the plugin is good about transparently sharing other information with other installs (add a password in one browser, and it’s available in all).
So far I’ve been using the free version of LastPass, but as I move into the mobile world I will probably have to pony up for the paid version to get support on mobile devices. Given how much it has simplified my life, I won’t begrudge them the dollars.
Bonus paragraph: Isn’t saving all my passwords on a cloud service really dumb? Not so much. The passwords are only ever decrypted locally by the password manager plugin. The cloud just stores a big encrypted lump of passwords, and I have a lot of faith in AES-256. However, my security now has one big central point of failure: the master password. But since I only have to remember one master password, I have been able to make it nice and long, so any remote technical attack on my security is unlikely. That just leaves all the other kinds of attack (social engineering, keylogging, device theft, human factors, etc, etc).
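To make the “encrypted lump” idea concrete, here is a minimal sketch of the encrypt-locally, sync-the-blob model, written in Python with the cryptography package. It is not LastPass’s actual implementation (their key derivation, iteration counts and vault format are their own); it just shows a master password being stretched into an AES-256 key, with nothing leaving the machine unencrypted.

```python
# Illustrative sketch only: not LastPass's real vault format or key
# derivation, just the general "encrypt locally, sync the blob" pattern.
# Requires the 'cryptography' package (pip install cryptography).
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the (long!) master password into a 256-bit AES key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(master_password.encode("utf-8"))


def encrypt_vault(master_password: str, vault: dict) -> dict:
    # Everything sensitive happens on the local machine.
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(master_password, salt)
    blob = AESGCM(key).encrypt(nonce, json.dumps(vault).encode("utf-8"), None)
    # This dictionary is the only thing that ever goes to the cloud.
    return {"salt": salt.hex(), "nonce": nonce.hex(), "blob": blob.hex()}


def decrypt_vault(master_password: str, stored: dict) -> dict:
    key = derive_key(master_password, bytes.fromhex(stored["salt"]))
    plaintext = AESGCM(key).decrypt(bytes.fromhex(stored["nonce"]),
                                    bytes.fromhex(stored["blob"]), None)
    return json.loads(plaintext)


if __name__ == "__main__":
    vault = {"example.com": {"user": "me", "password": "correct horse"}}
    stored = encrypt_vault("a nice long master password", vault)
    assert decrypt_vault("a nice long master password", stored) == vault
```

The point of the sketch is that the only artifact worth syncing is the dictionary returned by encrypt_vault, and it is useless without the master password, which is exactly why that one password is worth making long.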
24 May 2016
Funding enterprise IT projects as “capital investments” is something I consider potentially dangerous, because it lumps IT “assets” in with much more durable and valuable physical assets.
A key question to ask of an “IT asset” is just how valuable and permanent the asset is. In the venture capital world, nothing makes investors happier than hearing that a company is spending their investment dollars creating “intellectual property”. This is an intellectual product that is defensible and ownable: it’s not locked up between some employee’s ears, it’s owned by the company.
In contrast, building “intellectual capital” is a much more risky proposition. Intellectual capital is my stock in trade, it’s what distinguishes me from a random C/C++ programmer: I know some very detailed things about some specific open source projects that would take quite a long time to learn from scratch. It’s valuable capital, but it lives between my ears. It’s mine and mine alone.
Why is this important for enterprise IT projects? Because so much of the value created in IT projects is “intellectual capital”. Staff are assembled by a consultancy, and over months and years they learn the business domain, the particular tools applied to the problem, and the particular code base that is the system itself.
And then when the project is done… they are sent off to the consultancy’s next project. Flush…
Treating enterprise IT projects as capital projects encourages miscategorization all over the place:
- The “system” is perceived as the target of the investment. But the value of the pile of code and hardware rapidly diminishes to zero once the knowledgeable staff are removed. Changes and enhancements that would take the original developers minutes now require days and weeks of staff learning time before they can be attempted.
- The nature of the funding assumes that at some point the system will be “complete”. It will be in “maintenance” mode. Except the budget assigned to maintenance is too low to make any substantial changes in response to unexpected future needs. So when serious new requirements arise, the response is always to start again from scratch. With a new team.
It’s weird that IT projects get this special treatment, because other areas of government are perfectly aware that when staff leave, they carry out the accumulated intellectual capital of years of learning.
The very nature of the outsourced IT relationship has obscured the fact that government is routinely building up very expensive stores of intellectual capital in its contractors’ heads, and then sending those people on their way only a handful of months or years after the capital has been built up.
Maybe it’s time to get back to building systems in house?
16 May 2016
Back when I first discovered open source software, at the dawn of my computing and consulting career, I was pretty sure it was some kind of rainbow magic, better than sliced bread. Once other folks found this stuff, it was going to catch on like wildfire, and why not?
- It was way more flexible to build systems with than proprietary black boxes.
- The community around it was helpful and knowledgeable, further speeding development.
- The price of deployment was (obviously) unbeatable, and just as importantly, the lack of legal restrictions neatly avoided convoluted “licensing-oriented architectures”.
That was over 15 years ago, so clearly I was really, really wrong, at least from the point of view of governments and large corporate enterprises (in the start-up space, the revolution happened over 10 years ago, and nobody is looking back). Over time, I began to reformulate my thesis:
- Open source is a toolkit best appreciated by tool users, and,
- Managers have generally moved beyond direct tool using and use proxy data for decision making, but
- Today’s young staffer is tomorrow’s manager, so,
- Eventually the sands of time will deliver an open source literate population of managers into decision making authority.
This morning I saw a tweet that fit this thesis nicely, which was fun.
That’s one good data point, hooray!
But in all honesty, I don’t feel like there is necessarily an open source wave cresting, certainly not in the big organization I’m nearest to, the BC government.
- Yes, there are some young (and not so young) advocates, who have increased their decision making power over time. But, there is a larger population of similarly aged folks who, from an innovation and risk-taking point of view, might as well be from the last generation. Their ascendance will assure continuity: the only change will be from expensive proprietary on-site software to expensive proprietary SaaS solutions.
- Yes, the overall IT environment is more accepting of open source in general (there is even Linux being run in government!! ooo!). But, in general, there is a mismatch in sales firepower between solutions that have high-priced outside sales people promoting them and those that have to be dragged in by staff. Managers (even young ones) have moved beyond direct tool use, and are sitting ducks for a good sales presentation.
So it still falls to in-house innovation centers like GDS or the US Digital Service to try to demonstrate, mandate, and teach a new way of doing things to the old guard of IT.
It’s not clear they are succeeding, even on their own terms. More on that another day.
Do you have links to interesting experiments in doing enterprise IT in a new way? Drop them in the comments; I think it’s time to revisit government enterprise IT and what kind of program can chip away at the culture that has accreted over the last generation.
There are some interesting experiments out there, so let’s hear about them.
10 May 2016
Economist Paul Krugman loves to coin phrases, and a favourite of mine is the “magic asterisk”, referring to a footnoted extra calculation that is both
- critical to the financial credibility of a piece of public policy, and
- completely made up.
The NRPP Business Case doesn’t really hide its magic asterisk; it puts it right in the executive summary.
See it? That little stack of green bars? The NRPP business case would have you believe that the lion’s share of project benefit is going to come from “economic growth and job creation” as a result of this technology and business process realignment project.
Please stop laughing.
Please stop.
The leap of faith you must make is to believe not only that an improved system will speed up the time between natural resource project conception and delivery (I can buy that), but also that it will result in a permanent and persistent rise in the rate of resource extraction in the province (that one is harder). That’s where the green bars marching to the right come from.
So, pop quiz: what is more likely to result in a lasting rise in rates of resource extraction, the speed of the government approvals process or the state of global commodity markets?
Like all good magic asterisks, the NRPP plan involves real mathematics applied to nonsensical inputs. In this case, net present value calculations applied to unrealistic projections about likely upcoming major project investments in BC (a rough sketch of the arithmetic follows the quoted assumptions below):
- Calculation based on approval of 8 new mines and 3 LNG facilities after NRPP is implemented
- 12-month acceleration of decision timelines for major mines and LNG facilities will create a present value benefit from faster revenue realization
- Conservative estimate which excludes any impact on other revenue streams
- Data for calculations was sourced from the BC Jobs Plan and LNG Strategy
- Present value calculations assume a 4.5% discount rate and a 30-year timeframe
NRPP Business Case, Key Revenue Assumptions, Page 59
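For the curious, here is a rough Python sketch of the kind of present value arithmetic that turns “faster revenue realization” into a benefit. The 4.5% discount rate, 30-year timeframe and 12-month acceleration come from the assumptions quoted above; the annual revenue figure and approval year are placeholder numbers of my own, not figures from the business case.

```python
# Rough sketch of the "faster revenue realization" arithmetic. The 4.5%
# discount rate, 30-year horizon and 12-month acceleration are from the
# quoted assumptions; the revenue and start-year figures are made up.

def present_value(cash_flows, discount_rate=0.045):
    # Discount a stream of annual cash flows (year 1, year 2, ...) to today.
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

ANNUAL_REVENUE = 50_000_000  # hypothetical revenue from one approved project

# The same 30 years of revenue either way; only the start year differs.
slow = [0] * 5 + [ANNUAL_REVENUE] * 30  # approval under the current process
fast = [0] * 4 + [ANNUAL_REVENUE] * 30  # approval 12 months sooner

benefit = present_value(fast) - present_value(slow)
print(f"PV benefit of a 12-month acceleration: ${benefit / 1e6:.0f}M")
```

The arithmetic itself is perfectly sound; the magic lives entirely in the inputs.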
See it? In order to make millions through faster approvals of major projects, only two things have to happen:
- We need 8 new mines and 3 LNG projects to go through.
- The new process actually has to make them go through faster.
So, no problem.
Stop laughing.
Please stop it.
It will be on time, on budget. Guaranteed.