Update: Commenter cpnet notes that in EAO documents the proponent estimated a plant power usage of 1,500,000 MWh / year, which is three times larger than my guesstimate. The EAO document notes it is a very conservative (large) estimate, but at the outside it would imply a subsidy three times larger – about $45,000,000 per year compared to the old LNG power pricing deal. The real number is probably somewhere in between.
The British Columbia government’s new “eDrive” rate for LNG producers will create new jobs at an ongoing cost of $138,000 per job at the electrically powered Woodfibre LNG plant in Squamish.
Let’s do the math, shall we? Here’s the input data:
The price BC originally set for electricity used for LNG was $83.02 per MWh, compared to a prevailing industrial rate of $54.34 per MWh. The new “eDrive” rate is actually just the industrial rate, so the announcement today amounts to a rate change of $28.68 per MWh.
The Woodfibre facility will create 100 long-term jobs. Since the cost of electricity inputs will also run for the lifespan of the facility, this seems like the right number to compare to.
Now the math:
2,100,000 tonnes of LNG times
230 kWh of electricity per tonne is
483,000,000 kWh per year of usage. Which can also be stated as
483,000 MWh per year. Times
$28.68 per MWh in eDrive subsidy equals
$13,852,440 per year in foregone revenue for BC Hydro, which for
100 permanent jobs is
$138,524 per job
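As a sanity check, the whole chain of arithmetic above fits in a few lines of Python:

```python
# Inputs from the post
lng_tonnes_per_year = 2_100_000   # annual LNG production, tonnes
kwh_per_tonne = 230               # electricity needed per tonne liquefied
old_rate = 83.02                  # original LNG electricity rate, $/MWh
edrive_rate = 54.34               # new "eDrive" (industrial) rate, $/MWh
jobs = 100                        # long-term jobs at Woodfibre

mwh_per_year = lng_tonnes_per_year * kwh_per_tonne / 1000
subsidy_per_year = mwh_per_year * (old_rate - edrive_rate)

print(f"{mwh_per_year:,.0f} MWh/year")         # 483,000 MWh/year
print(f"${subsidy_per_year:,.0f}/year")        # $13,852,440/year
print(f"${subsidy_per_year / jobs:,.0f}/job")  # $138,524/job
```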
If we want to create 100 new government-funded jobs:
Why are we paying $138,524 for each of them; and,
Why are they freezing methane, and not teaching kids or building transit or training new engineers?
Government is about choices, and this government is making some batshit crazy choices.
The CloudBC initiative to “facilitate the successful adoption of cloud computing services in the British Columbia public sector” is now a little over a year old, and is up to no good.
Like any good spawn of enterprise IT culture, CloudBC’s first impulse has been to set themselves up as the arbiter of cloud vendors in BC, with a dedicated revenue stream for themselves to ensure their ongoing hegemony.
The eligibility request currently online[1] for CloudBC is a master-work of institutional over-reach:
Only CloudBC approved services[2] can be sold to the BC public sector.
Approved services will add an ‘administration fee’ to all their billing[3] to the BC public sector and remit that fee to CloudBC.
The fee[4] will be 2%.
And in exchange for that 2% of all cloud services, CloudBC will provide what in return?
Well, they’ll set the eligibility list, so the BC public sector will be literally paying for someone to tell them “no”. Setting the list will include a few useful services like FOIPPA reviews and making the vendors cough up promises to follow various international standards that nobody reads and few people audit. So that’s something. But mostly just more reasons to say “no”.
I misspent some hours reviewing the agendas [Part-1, Part-2] of CloudBC for its first year in operation, and among the interesting tidbits are:
The request to vendors was supposed to be released in October 2015, but was actually released in fall of 2016.
Negotiations with Microsoft for Office 365 and what was dubbed the “Microsoft opportunity” were started in summer of 2015, but shut down in spring of 2016: “decision to communicate that CCIO[5] will not pursue the deal as presented by Microsoft”
Taking the website live was targeted for June 2016, but as of writing it remains “under construction”.
Spring 2016 plans include contracting with a vendor for a “CloudBC digital platform”, so we’ll at least have an expensive under-utilized web presence “soon” (no RFP exists on BC Bid).
CloudBC was budgeted $1.6M for year one, and managed to under-spend by about 20%. Getting almost nothing done has its benefits!
An office and several staff have been seconded, so from an institutional existence point-of-view, CloudBC is off to a roaring start.
When I first heard about CloudBC, I was pretty excited. I naïvely thought that the mandate might include promoting, for example, cloud infrastructure inside BC.
Our FOIPPA Act requires that personal information of BC citizens not be stored outside the jurisdiction or by foreign-controlled corporations. That makes using cloud services (usually hosted outside BC, usually owned by foreign corporations) a hard lift for government agencies.
Wouldn’t it be nice if someone did something about that? Yes it would. cough
While setting up “private cloud” infrastructure is anathema (it’s very hard to find success stories, and all signs point to public cloud as the final best solution), in BC there are some strong incentives to take the risk of supporting made-in-Canada clouds:
Thanks to FOIPPA, the alternative to made-in-BC cloud is almost no cloud at all. Only apps with no personal information in them can go on the US-owned public clouds, which is a sad subset of what government runs.
There are other jurisdictions and other technology domains that need non-US sourced cloud infrastructure. Seeding a Canadian-owned-and-operated PaaS/IaaS cloud industry would open the door to that marketplace.
“Just” getting the FOIPPA Act changed would be the cheapest, “simplest” solution (ignoring the humungous, intense, non-negotiable, insuperable political issues). Since that’s unlikely to occur, the alternative is DIY. I thought CloudBC might be that initiative, but turns out it’s just another enterprise IT control-freak play.
[2] “As only Eligible Customers with a written agreement in effect with the Province will be permitted to use the procurement vehicle established by this Invitation, including the CloudBC Marketplace, the Province intends to establish and maintain a list of Eligible Customers on the CloudBC Marketplace for use by Eligible Cloud Providers.”

[3] “CloudBC Framework Agreements will appoint and require the Eligible Cloud Provider to collect and remit as agent an incremental Administration Fee to be paid by Eligible Customers with Contracts in order to fund the CloudBC operations administered by the Province.”

[4] “An amount equal to 2% of the fees for all services provided”
This week I had the pleasure of presenting the morning keynote at Geomatiqué 2016 in Montreal. I’ve been thinking a lot recently about the symbiosis of technology and culture: how new technology generates new norms to go along with it. We humans are enormously adaptable, so principles that were sacrosanct to one generation become unknown to the next, and back again.
On the chopping block for our generation: privacy.
Not that privacy is any human absolute. I doubt a hunter-gatherer had much personal space or privacy: the smaller the group, the more fevered the gossip-mill. On the other hand, the abolition of privacy within the context of an industrial-sized polity will truly be something new under the sun. It could be fine, in its way; it could also be Nineteen Eighty-Four realized.
Anyways, the talk is mostly a survey of technology trends, with some philosophizing at the end. Unfortunately, no video at this event, but if you’d like me to deliver this talk to your organization, drop me a line.
Are your queries unexpectedly slow in PostgreSQL? Is the index not doing what you expect? Surprise! You’ve just discovered a PostgreSQL quirk.
TL;DR: If you are running a locale other than “C” (show LC_COLLATE to check) you need to create a special index to support pattern searching with the LIKE operator: CREATE INDEX myindex ON mytable (mytextcolumn text_pattern_ops). Note the specification of the text_pattern_ops operator class after the column name.
As a beginner SQL student, you might have asked “will the index make my ‘like’ query fast” and been answered “as long as the wildcard character is at the end of the string, it will.”
That statement is only true in general if your database is initialized using the “C” locale (the North America/English-friendly UNIX default). Running with “C” used to be extremely common, but is less and less so, as modern operating systems automagically choose regional locales to provide appropriate time and number formatting for end users.
For example, I run Mac OSX and I live in British Columbia, an English-speaking chunk of North America. I could use “C” just fine, but when I check my database locale (via my collation), I see this:
```
pramsey=# show LC_COLLATE;
```
It’s a good choice, it’s where I live, it supports lots of characters via UTF-8. However, it’s not “C”, so there are some quirks.
I have a big table of data linked to postal codes; this is what the table looks like:
```
   Column    |     Type     | Modifiers
-------------+--------------+-----------
 postal_code | text         | not null
 segment     | character(4) |
Indexes:
    "postal_segments_pkey" PRIMARY KEY, btree (postal_code)
```
Note the index on the postal code, a standard btree.
I want to search rows based on a postal code prefix string, so I run:
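Judging from the Filter condition in the plan output below, the query is a prefix LIKE search of this form (table and column names from the schema above):

```sql
EXPLAIN ANALYZE
SELECT * FROM postal_segments
WHERE postal_code LIKE 'V8V1X%';
```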
```
 Seq Scan on postal_segments  (cost=0.00..2496.85 rows=10 width=68) (actual time=30.320..34.219 rows=4 loops=1)
   Filter: (postal_code ~~ 'V8V1X%'::text)
   Rows Removed by Filter: 100144
 Planning time: 0.250 ms
 Execution time: 34.263 ms
```
I have an index on the postal code, so why am I getting a sequence scan?!?! Because my index is no good for doing pattern matching in any collation other than “C”. I need a special index for that, which I create like this:
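Combining the TL;DR recipe with the index name that appears in the plan below, the statement would be something like:

```sql
CREATE INDEX postal_segments_text_x
  ON postal_segments (postal_code text_pattern_ops);
```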
The magic part is at the end, invoking text_pattern_ops as the opclass for this index. Now my query works as expected:
```
 Index Scan using postal_segments_text_x on postal_segments  (cost=0.29..8.31 rows=10 width=68) (actual time=0.067..0.073 rows=4 loops=1)
   Index Cond: ((postal_code ~>=~ 'V8V1X'::text) AND (postal_code ~<~ 'V8V1Y'::text))
   Filter: (postal_code ~~ 'V8V1X%'::text)
 Planning time: 0.532 ms
 Execution time: 0.117 ms
```
I have gotten so used to PostgreSQL doing exactly the right thing automatically that it took quite a long time to track down this quirk when I ran into it. I hope this page helps others save some time!
Since the Dawn of Time, people have found PostGIS upgrades difficult and confusing, and this is entirely to be expected, because a PostGIS upgrade consists of a number of interlocking parts. Sometimes, they “upgrade” their version of PostGIS and find out they’ve bricked their system. What gives?
What Makes PostGIS Work?
Before talking about upgrades, it’s important to understand how PostGIS works at all, because that understanding is key to seeing how upgrade scenarios go bad.
PostGIS is a “run-time loadable library” for PostgreSQL. That means we have a block of C code that is added to a running PostgreSQL database. That C code sits in a “library file” which is named (for the current 2.2 version): postgis-2.2.so.
Just to add to the confusion: for Windows, the name of the library file is postgis-2.2.dll. For every rule, there must be an exception. For users of Apple OSX, yes, there’s a further exception for you: even though most dynamic libraries on OSX are suffixed .dylib, the PostgreSQL modules on OSX are suffixed .so, just like their Linux counterparts.
The location of the postgis-2.2.so file will vary from system to system.
The presence of the postgis-2.2.so alone is not sufficient to “PostGIS enable” a database. PostGIS consists of a large collection of SQL functions in the database.
The SQL functions are created when you run the CREATE EXTENSION postgis command. Until that time your database knows nothing about the existence or definition of the PostGIS functions.
Once the extension is installed, you can see the definitions of the PostGIS functions in the system tables.
The use of dynamic function and type management catalogs is one of the things that makes PostgreSQL so incredibly flexible for extensions like PostGIS.
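For example, a catalog query along these lines (shown in psql expanded-display mode, \x) produces the record below:

```sql
SELECT * FROM pg_proc WHERE proname = 'st_pointonsurface';
```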
```
-[ RECORD 1 ]---+--------------------
proname         | st_pointonsurface
pronamespace    | 2200
proowner        | 10
prolang         | 13
procost         | 100
prorows         | 0
provariadic     | 0
protransform    | -
proisagg        | f
proiswindow     | f
prosecdef       | f
proleakproof    | f
proisstrict     | t
proretset       | f
provolatile     | i
pronargs        | 1
pronargdefaults | 0
prorettype      | 667466
proargtypes     | 667466
prosrc          | pointonsurface
probin          | $libdir/postgis-2.2
```
Lots to see here, but the most important bit is the entry for the probin column: $libdir/postgis-2.2. This function (like all the other PostGIS functions) is bound to a particular version of the PostGIS C library.
Those of you thinking forward can now begin to see where upgrades could potentially go wrong.
How Things Go Wrong
The most common way for things to go wrong is to upgrade the library on the system without upgrading the database.
So, in Red Hat Linux terms, perhaps running:

```bash
yum upgrade postgresql94-postgis
```
This seems straight-forward, but think about what a package manager does during an upgrade:
Downloads a new version of the software
Removes the old version
Copies in the new version
So, if we had PostGIS 2.1.3 installed, and the latest version is 2.2.2, what has happened?
The postgis-2.1.so file has been removed
The postgis-2.2.so file has been added
So, the pg_proc entries in every PostGIS-enabled database now point to a library file that does not exist
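A quick way to see which library files a database expects is to ask the catalog directly, with something like:

```sql
SELECT DISTINCT probin FROM pg_proc WHERE probin LIKE '%postgis%';
```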
Fortunately this mismatch between the pg_proc entries and the system state is usually solved during the very next step of the upgrade. But it’s a manual step, and if the DBA and system administrator are different people with different schedules, it might not happen.
Your next step should be to go and update the SQL function definitions by running an extension update on all your databases:
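That extension update is a one-line command, run in each PostGIS-enabled database:

```sql
ALTER EXTENSION postgis UPDATE;
```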
If you don’t, you’ll find that none of the PostGIS functions work. That, in fact, you cannot even dump your database. The very act of outputting a representation of the geometry data is something that requires the PostGIS C library file, and until you run ALTER EXTENSION the database doesn’t know where the new library file is.
Since the use of CREATE EXTENSION postgis (available since PostgreSQL 9.1+ and PostGIS 2.0+) became commonplace, migrations now almost always “just work”, which is excellent news.
When you dump a modern PostGIS-enabled database, that was created using the CREATE EXTENSION postgis command, the dump file just includes a CREATE EXTENSION postgis command of its own at the top.
When you load the dump file into a new version of PostgreSQL, even one with a new version of PostGIS, the extension is created and the data magically loads.
However, there are still some old databases around that were created before the PostgreSQL extension system was invented, and when you dump them you get not only the data, but all the “custom” function and type definitions, including the definitions for PostGIS. A function definition looks like this:
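A pre-extension dump entry for a PostGIS 1.5 function would look something like this (reconstructed for illustration; the exact signature will vary):

```sql
CREATE FUNCTION st_pointonsurface(geometry)
    RETURNS geometry
    AS '$libdir/postgis-1.5', 'pointonsurface'
    LANGUAGE c IMMUTABLE STRICT;
```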
And look what is hiding inside of it: a reference to a particular version of the PostGIS library! So you cannot simply dump your old PostGIS 1.5 database on PostgreSQL 8.4 and load it into a fresh PostGIS 2.2 database on PostgreSQL 9.5: the function definitions won’t reference the right library file.
The best bet for a really old database that was created without the extension mechanism is to use the “hard upgrade” process. The hard upgrade works by:
Taking a special “custom-format” back-up that includes an object catalog;
Filtering the back-up to clean out all the PostGIS-specific function and object definitions; and then
Loading the “cleaned” back-up into a new database with the desired version of PostGIS already installed (using CREATE EXTENSION postgis this time, so you never have to hard upgrade again).
In the case of upgrades that change out the underlying library, and other situations that result in a mismatch between the SQL definitions in the database and the state of the system, there are a couple of hacks that provide short-term fixes for emergencies:
Symlink the library name the database is looking for to the library name you have. So if your database wants postgis-2.1.so and all you have is postgis-2.2.so, you can ln -s postgis-2.2.so postgis-2.1.so and your database will “work” again.
Update the PostgreSQL catalog definitions for the functions. As a super-user, you can do all kinds of dangerous things, and one of them is to just UPDATE pg_proc SET probin = '$libdir/postgis-2.2' WHERE probin ~ 'postgis-2.1'.
Both hacks “work” because the PostGIS project doesn’t change underlying function names often, and inter-version changes mostly involve adding functions to the C library, not removing old ones.
However, there’s no guarantee that an underlying function name hasn’t changed between versions; it’s just unlikely. In the worst case, the function name hasn’t changed, but the parameters have, so it’s now possible that calling the function will crash your database.
All this to say: linking and SQL catalogue hacks should only be used temporarily, until you can properly upgrade your database using a hard upgrade.