I am Metadata Man

I have been keeping a secret for the last couple years.

BC CIO Bette-Jo Hughes knows my secret:

Though once considered benign to many, metadata found in government e-mail and server logs is beginning to generate interest from knowledgeable freedom-of-information applicants who file requests for these logs spanning large time frames.

And Information and Privacy Commissioner Elizabeth Denham knows my secret:

In fact, my Office recently issued an Order on this very issue, in which an applicant requested metadata from message tracking logs relating to email traffic for several government ministries and public sector agencies.

I can no longer deny the truth: I am Metadata Man

I am Metadata Man

The Origin Story

I donned my suit back in July of 2013. It was an interesting time:

  • Elizabeth Denham had reported out on the FOI implications of the “ethnic outreach scandal” and included some interesting notes about the use and abuse of government e-mail.
  • A scandal in Ontario revolving around the deliberate deletion of e-mail records made me wonder about the persistence of log files as a kind of guarantee of data integrity.
  • Edward Snowden had revealed that the US government was gathering massive amounts of metadata about ordinary citizens; we would later learn that the Canadian government also engages in this kind of surveillance (it still does).

I wondered if government had the same open attitude about its own metadata that it appeared to have for the general public’s metadata.

Thanks to IT centralization, every mail sent or received by government goes through a handful of SMTP servers. Looking at headers on mail I had received from government, I could see the servers were running Microsoft Exchange. And like other servers, Exchange automatically generates logs of every e-mail processed.
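You can check this yourself on any message from a government address: a Received header stamped by an Exchange hub looks roughly like the following (addresses and version numbers invented for illustration, but the “Microsoft SMTP Server” banner is the giveaway):

Received: from E7HUB01.idir.bcgov (192.0.2.10) by
 E7HUB02.idir.bcgov (192.0.2.11) with Microsoft SMTP Server
 (TLS) id 14.3.174.1; Wed, 3 Jul 2013 10:15:22 -0700

So I asked: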

Message tracking log files (files beginning with MSGTRK in %ExchangeInstallPath%TransportRoles\Logs\MessageTracking) from e7hub01.idir.bcgov, e7hub02.idir.bcgov, e7hub03.idir.bcgov, e7hub04.idir.bcgov, e7hub05.idir.bcgov, e7hub06.idir.bcgov, e7hub07.idir.bcgov, e7hub08.idir.bcgov, e7hub09.idir.bcgov.
Date range is January 1, 2013 to July 3, 2013.

The log files do not contain email contents, only metadata such as:

  • the sender email address
  • the recipient email address
  • the date and time the email transited the server
  • the unique message number for the email
  • the subject line of the email (Microsoft Exchange only)
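To make that concrete, here is a simplified, invented example of a single log entry (the real MSGTRK*.log files are CSV records with a couple dozen fields per line):

date-time,event-id,sender-address,recipient-address,message-subject
2013-02-13T09:15:22.481Z,RECEIVE,john@gov.bc.ca,jim@gov.bc.ca,Meeting notes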

Mail server log files are not under the control of mail recipients and senders; they are under the control of central IT administrators. So while political staff can delete the mails they receive or the mails they send, they cannot delete the log entries that prove the emails existed in the first place.

Except for the subject line, which is easily stripped from the log file, there’s little in any particular log entry that could be considered unreleasable under FOI exceptions:

  • no cabinet confidences
  • no policy advice
  • no personal information
  • etc…

When each entry can be boiled down to “John sent an e-mail to Jim on February 13, 2013”, it’s hard to see how any particular entry can be withheld. So I entertained the (naïve) hope that the whole corpus would be releasable post-haste.

Ha ha ha ha ha. Ha ha.

The First Battle

The government initially attempted to deny the request based on “Section 43” of the Act.

43 If the head of a public body asks, the commissioner may authorize the public body to disregard requests under section 5 or 29 that
(a) would unreasonably interfere with the operations of the public body because of the repetitious or systematic nature of the requests, or
(b) are frivolous or vexatious.

Section 43 is supposed to protect public bodies from harassment. For example, if I submitted thousands of requests in an attempt to bury the government in paperwork, the requests could be denied under Section 43.

The Ministry argued that they would have to manually review every line of the log files prior to release, and since the log files were hundreds of millions of lines long, obviously that was an undue burden, so, “go away, Metadata Man!”.

The size and detail of the Ministry’s response were the first hint of just how uneven this field of battle was going to be.

  • The Ministry fielded real, live lawyers, whose full-time job is to argue FOI cases.
  • They took actual affidavits from staff and submitted them as evidence.
  • They cited relevant case law from both courts and previous FOI cases.
  • They double-spaced it all and neatly numbered every paragraph!

However, they also called me “frivolous or vexatious”, which was hurtful, so I felt duty bound to sit down and hammer out a response. I too double-spaced and numbered every paragraph.

In the end, the Adjudicator ignored most of the arguments made by both the Ministry and me, and ruled that my request was neither “frivolous” nor “vexatious”, according to definitions clarified in prior FOI rulings.

And so I learned another useful lesson about the FOI appeals process: not only was I heavily out-gunned from a legal skills and resources perspective, but the decision might in the end hinge on some fact I knew nothing about in the first place.

The Second Battle

Having lost the first round, the Ministry fell back to more defensible territory: Section 22, “a public body must refuse to disclose personal information to an applicant if the disclosure would be an unreasonable invasion of a third party’s personal privacy”.

If Section 22 can be found to apply to records in the log file, then the Ministry can circle back around and point to the volume of data and say “sorry Metadata Man, there’s no way we can review all those records to remove the private bits”.

The trouble with that argument, from the Ministry’s point of view, is that any given record is completely anodyne: “John sent an e-mail to Jim on February 13, 2013”.

So they set about finding real examples of personal information in the data by having BC Stats run network analyses on the log files and comment on the resulting network diagram.

From my review of the diagram, reporting structures are evident. In addition, relationships with individuals outside the BC public service become evident. These may be business relationships or personal correspondence.

In my opinion, the graph also begins to hint at the sensitive information that might be revealed in this data set. In the lower centre, an individual has sent an email to “Fitness Centre”. There may be other accounts in government of a much more sensitive nature.
— Brad Williams, Affidavit #1

To the staff in BC Stats who got to step out of the ordinary day-to-day grind and do network and relationship analyses to feed this process, depending on your level of intellectual curiosity: “you’re welcome”; or, “sorry”.

My case was getting harder to argue, but not impossible. There was definitely some provable personal information in the corpus, but the Ministry was still unable to point to any particular record and say “there! that is personal and private”. They were stuck hand-waving at potential “patterns of information” that might be there.

So my counter-argument acknowledged that while there might be some personal information in the data, the public interest in being able to monitor the actions of government (things like deleting important emails, and so on) outweighed the private interest in keeping these thus-far-unproven data patterns private.

The message tracking logs are an important example of a government record, in that they are a huge corpus of data which should, because of their uncontroversial nature (from, to, date, id), be immediately releasable, but because of the possibility of personal information patterns, have become subject to this contentious process.

Imagine a document warehouse full of boxes of completely uncontroversial, releasable files. Imagine that the government refuses to release the files, because an employee at the warehouse may have placed his university transcript in one of the boxes. They are not sure whether he did or not, but the effort of searching all the boxes is too high; therefore, none of the boxes is releasable.

The situation with email logs is fundamentally the same. The question is not whether the effort of searching the boxes is too high: that’s a given, it’s too high. The question is whether the potential presence of a small piece of personal information is sufficient to block the release of a huge volume of public information.
— OIPC File F14-58135, Paul Ramsey, Mar 4, 2015

I liked my argument: it was simple and clear. But I also submitted my final argument with low expectations: the Information and Privacy Commissioner has always seemed to me to be an “information and PRIVACY Commissioner” in her priorities. In an OIPC decision balancing public access rights and privacy rights, the latter is more likely to win out.

Judgement Day

In the end, the result was as I expected:

the Ministry’s evidence demonstrates that the Logs reveal personal information, including information that is subject to a presumption that disclosure would be an unreasonable invasion of personal privacy. Against that, I recognize the valuable insights into the practical workings of government that could be gained from the applicant having access to the information.

However, this is insufficient to rebut the presumptions that apply to some of the information or to overcome the invasion of personal privacy that would result from disclosure of the Logs. Disclosure of some information in the requested information in the Logs would be an unreasonable invasion of personal privacy under s. 22.
— Hamish Flanagan, OIPC, Order F15-63

After over two years of process, my bulk request for the email server logs was at an end: Metadata Man was defeated.

Metadata Man Vindicated

However, within the ashes of defeat, the seeds of a few victories could be found sprouting.

The OIPC did not rule that all email log requests could be rejected, only those for which it was not possible to purge private information.

To be clear, my conclusion does not mean that metadata in the Logs can never be disclosed under FIPPA. The breadth of the applicant’s request means that the volume of responsive information allows patterns to be discerned that make disclosure in this particular case an unreasonable invasion of personal privacy. In a smaller subset of the same type of information, such patterns may not be so easily discerned so the personal privacy issues may be different.
— Hamish Flanagan, OIPC, Order F15-63

And the OIPC itself made use of email logs in investigating the illegal destruction of records in its report, “Access Denied”.

My investigators asked the Investigations and Forensics Unit to provide government’s message tracking logs for all of Duncan’s emails prior to November 21, 2014. My investigators then looked for terms in the subject lines of the emails such as “Highway of Tears” or “Highway 16”.

The query revealed six emails in his account predating his November 20, 2014 search.
— OIPC Report F15-03, “Access Denied”

While denying the public the access needed to carry out such research directly, the OIPC used the same technique to prepare one of its most politically explosive investigation reports.

The OIPC decision makes clear the technique needed to make a releasable request for message tracking logs:

  • Make the request small
  • Make the request targeted

So, if you think a particular employee is deleting the targets of your FOI requests, make simultaneous requests for the email itself and also for all message tracking log entries from and to the employee’s address.

For example, directed to the employee’s ministry:

All emails sent and received by jim.bob@gov.bc.ca between Jan 1 and Jan 8, 2017.

And directed to the Ministry of Citizens’ Services:

All e-mail message tracking log entries To: jim.bob@gov.bc.ca or From: jim.bob@gov.bc.ca between Jan 1 and Jan 8, 2017. Entries should include To, From, CC, Datetime, Message ID and Subject lines.

Even a very busy person is unlikely to generate more than a few dozen emails a day, so it is possible for FOI reviewers to manually scrub any private information from the log files. Similarly, because the volume of records is low, there’s no leverage for pattern recognition algorithms to ascertain any information not visible in a line-by-line manual review.

For folks contemplating an FOI appeal, I have kept the full collection of documents submitted by myself and the Ministry during the process. Be prepared to spend a moderate amount of time reading and writing legalese.

Beyond the Borders of BC

The OIPC ruling should be of interest to information and privacy nerds in other parts of Canada and beyond because it provides real-world precedents of:

  • A government arguing forcefully that metadata is personal information.
  • A privacy commissioner ruling that metadata is personal information.

Against the backdrop of bulk harvesting of Canadian metadata by federal security agencies, and the same practice in the USA, having some institutional examples of governments arguing the opposite position might help in arguing for better behaviour at higher levels of government.

The Fall Guy

The BC government’s email retention policy (delete it all, whenever possible) was briefly back in the news last week as a BC Liberal staffer was brought up on charges:

George Steven Gretes has been charged with two counts of wilfully making false statements to mislead, or attempt to mislead, under the province’s Freedom of Information and Protection of Privacy Act.
— CBC News

The Fall Guy

Sometimes, taking one for the team really means taking one for the team. But it’s important to remember that, however personally reprehensible Gretes’ actions were, his behaviour is just the tip of the ethical iceberg when it comes to the current government’s attitude towards record keeping.

It has been obvious for years that the government has a deliberate policy of poor management of digital records, and that there is a strong desire in high places to keep that policy in place. Gretes is not an isolated figure, he’s just the only person foolish and unlucky enough to be caught in deliberate law-breaking, instead of quietly taunting the public from the grey area.

Right in the middle of the exposure of Mr. Gretes’ actions last fall, the Opposition brought forward more evidence that political staffers routinely destroy records:

What we did last November is we asked for information pertaining to any e-mails from the chief of staff to the Minister of Natural Gas, Tobie Myers. Ms. Myers, of course, at the time we asked, was in discussions with people within the sector about legislation that was going to be before this House. What we got back from that request for information over a three-week period were three e-mails, just three.

It was curious to us that there would only be three e-mails in existence coming from the minister’s office over a three-week period, when flagship legislation was being tabled. So we asked for the message-tracking documents from the Minister of Citizens’ Services.

We determined through that route … that Ms. Myers sent 800 e-mails over that three-week period. So 797 triple deletes is a whole lot of triple deletes.

In those 800 e-mails, there were e-mails sent to Mr. Spencer Sproule, who may be familiar to members on this side. He used to work in the Premier’s office as her issue management director. He now, of course, is the chief spokesperson for Petronas, the lead agency looking at natural gas here in British Columbia.

Jared Kuehl, the head deputy of government relations at Shell; Neil Mackie, from AltaGas; and right to the minister’s office in Ottawa — 800 e-mails, and we got three.

My question is to the minister of openness and transparency in B.C. Liberal–land. Can he explain how it is that when we asked for 800, we only got three?

— John Horgan, Leader of the Opposition, Oct 26, 2015

The usual excuses are rolled out every time: the emails are “transitory” or they are filed “elsewhere”. Except these emails were from a high-ranking Natural Gas ministry staffer to highly placed members of the industry! Transitory? Really? These were all just making plans to get coffee, 800 times?

The Information and Privacy Commissioner noted the same pattern in record keeping by the Premier’s Deputy Chief of Staff.

The Deputy Chief of Staff stated that her practice was to delete emails from her Sent Items folder on a daily basis and if all emails in that folder were of a transitory nature, she would delete all of them. Her evidence was that her Deleted Items folder was set to purge at the end of each day when she exited Microsoft Outlook.
— p50, Access Denied, OIPC, Oct 22, 2015

George Gretes must be quietly eating his liver, to be prosecuted for actions that he knows his former colleagues engage in every single day in their offices at the highest levels of government.

But that’s what it means to be the fall guy. The public thinks little of you because you were venal enough to do the crime. Your former friends think little of you because you were stupid enough to get caught.

“It’s done. Now you don’t have to worry about it anymore.”
— George Gretes, on illegally deleting the target of an FOI request

libpostal for PostgreSQL

Dealing with addresses is a common problem in information systems: people live and work in buildings which are addressed using “standard” postal systems. The trouble is, the postal address systems and the way people use them aren’t really all that standard.

libpostal for PostgreSQL

Postal addressing systems are just standard enough that your average programmer can whip up a script to handle 80% of the cases correctly. Or a good programmer can handle 90%. Which just leaves all the rest of the cases. And also all the cases from countries where the programmer doesn’t live.

The classic resource on postal addressing is called Falsehoods programmers believe about addresses and includes such gems as:

An address will start with, or at least include, a building number.

Counterexample: Royal Opera House, Covent Garden, London, WC2E 9DD, United Kingdom.

No buildings are numbered zero

Counterexample: 0 Egmont Road, Middlesbrough, TS4 2HT

A street name won’t include a number

Counterexample: 8 Seven Gardens Burgh, WOODBRIDGE, IP13 6SU (pointed out by Raphael Mankin)

Most solutions to address parsing and normalization have used rules, hand-coded by programmers. These solutions can take years to write, as special cases are handled as they are uncovered, and are generally restricted in the language/country domains they cover.

There’s now an open source, empirical approach to address parsing and normalization: libpostal.

Libpostal is built using machine learning techniques on top of OpenStreetMap input data to produce parsed and normalized addresses from arbitrary input strings. It has bindings for lots of languages: Perl, PHP, Python, Ruby and more.

And now, it also has a binding for PostgreSQL: pgsql-postal.

You can do the same things with the PostgreSQL binding as you can with the other languages: convert raw strings into normalized or parsed addresses. The normalization function returns an array of possible normalized forms:

SELECT unnest(
  postal_normalize('412 first ave, victoria, bc')
  );
                  unnest                  
------------------------------------------
 412 1st avenue victoria british columbia
 412 1st avenue victoria bc
(2 rows)

The parsing function returns a jsonb object holding the various parse components:

SELECT postal_parse('412 first ave, victoria, bc');
         postal_parse                                   
----------------------------------
 {"city": "victoria", 
  "road": "first ave", 
  "state": "bc", 
  "house_number": "412"}
(1 row)
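And because the parse result is ordinary jsonb, the standard PostgreSQL operators can pull out individual components:

SELECT postal_parse('412 first ave, victoria, bc')->>'city' AS city;
   city   
----------
 victoria
(1 row)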

The core library is very fast once it has been initialized, and the binding has been shown to be acceptably fast, despite some unfortunate implementation tradeoffs.

Thanks to Darrell Fuhriman for motivating this work!

Enter the Panopticon

In 1790, Jeremy Bentham published his plans for the “panopticon”, his design for a new kind of prison that would leverage surveillance to ensure well-behaved prisoners. The panopticon was laid out so that all cells were visible from a central monitoring station, from which a hidden guard could easily watch any inmate, without being observed himself.

Enter the Panopticon

Bentham expected the panopticon approach to improve inmate behaviour at low cost: inmates would obey the rules because they could never be certain when the observer was watching them or not.

The developed world is rapidly turning into a digital panopticon.

Your trail through digital media is 100% tracked.

  • Every web site you visit is traceable, via ad cookies, “like” badges, or Google Analytics.
  • Every web search is tracked via cookies or even your own Google login info.

In some respects this tracking is still “opt in” since it is possible, if undesirable, to opt out of digital culture. Drop your email, eschew the web, leave behind your smart phone.

But your trail in the physical world is increasingly being tracked too.

  • If you carry a cell phone, your location is known to within one mobile “cell”, as long as the device is powered.
  • If you use a credit or debit card, or an ATM, you are localised to a particular point of sale when you make a purchase.
  • If you use a car “safety” system like OnStar your location is known while you drive.

Again, these are active signals, and you could opt out. No cell phone, cash only, no vehicles after 1995.

But as I discussed this year in my talk about the future of geo, we are rapidly moving beyond the point of “opt out”.

  • Within our lifetimes, most urban areas will be under continuous video surveillance, and more importantly,
  • within our lifetimes, the computational power and algorithms to make sense of all those video feeds in real time will be available.

We take for granted that we have some moments of privacy. When I leave the house and walk to pick up my son at school, for 15 minutes, nobody knows where I am. Not for much longer. All too soon, it will be possible for someone in a data center to say “show me Paul” and get a live picture of me, wherever I may be. A camera will see me, and a computer will identify me in the video stream: there he is.

Speculative fiction is a wonderful thing, and there are a couple of books I read in the last year that are worth picking up for anyone interested in what life in the panopticon might be like.

  • Rainbows End by Vernor Vinge (2006) is an exploration of the upcoming world of augmented reality and connectivity. In many ways Vinge leaps right over the period of privacy loss: his characters have already come to terms with a world of continuous visibility.
  • The Circle by Dave Eggers (2013) jumps into a world right on the cusp of the transition from our current “opt-in” world of partial privacy to one of total transparency, of life in the panopticon.

Both books are good reads, and tightly written, though in the tradition of science fiction the characterizations tend to be a little flat.

The Guardian also has a recent (2015) take on the digital panopticon:

In many ways, the watchtower at the heart of the panopticon is a precursor to the cameras fastened to our buildings – purposely visible machines with human eyes hidden from view.

Once you come to terms with the idea that, at any time, you could be surveilled, the next question is: does that knowledge alter your behaviour? Are you ready for the panopticon? I’m not sure I am.

Paris Code Sprint, PostGIS Recap

At the best of times, I find it hard to generate a lot of sympathy for my work-from-home lifestyle as an international coder-of-mystery. However, the last few weeks have been especially difficult, as I try to explain my week-long business trip to Paris, France to participate in an annual OSGeo Code Sprint.

Paris Code Sprint

Yes, really, I “had” to go to Paris for my work. Please, stop sobbing. Oh, that was light jealous retching? Sorry about that.

Anyhow, my (lovely, wonderful, superterrific) employer, CartoDB, was an event sponsor and sent me and my co-worker Paul Norman to the event, which we attended with about 40 other hackers on PDAL, GDAL, PostGIS, MapServer, QGIS, Proj4, PgPointCloud etc.

Paul Norman got set up to do PostGIS development and crunched through a number of feature enhancements. The feature enhancement ideas were courtesy of Remi Cura, who brought in some great power-user ideas for making the functions more useful. As developers, we frequently find it hard to distinguish between features that are interesting to us and features that are useful to others, so having feedback from folks like Remi is invaluable.

The Oslandia team was there in force, naturally, as they were the organizers. Because they work a lot in the 3D/CGAL space, they were interested in making CGAL faster, which meant they were interested in some “expanded object header” experiments I did last month. Basically the EOH code allows you to return an unserialized reference to a geometry on return from a function, instead of a flat serialization, so that calls that look like ST_Function(ST_Function(ST_Function())) don’t end up with a chain of three serialize/deserialize steps in them. When the deserialize step is expensive (as it is for their 3D objects) the benefit of this approach is actually measurable. For most other cases it’s not.

(The exception is in things like mutators, called from within PL/PgSQL, for example doing array appends or insertions in a tight loop. Tom Lane wrote up this enhancement of PgSQL with examples for array manipulation and did find big improvements for that narrow use case. So we could make things like ST_SetPoint() called within PL/PgSQL much faster with this approach, but for most other operations the overhead of allocating our objects probably isn’t high enough to make it worthwhile.)
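To make the call-chain point concrete, here is the sort of nested expression we are talking about; a sketch only, with a hypothetical roads table and illustrative function choices:

-- 'roads' and 'geom' are hypothetical; any chain of geometry
-- functions behaves the same way. Without expanded headers, each
-- inner call returns a flat serialized geometry that the next
-- call must deserialize again before doing any work.
SELECT ST_AsText(ST_Buffer(ST_Simplify(geom, 10), 100))
FROM roads;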

There was also a team from Dalibo and 2nd Quadrant. They worked on a binding for geometry to the BRIN indexes (9.5+). I was pretty sceptical, since BRIN indexes require useful ordering, and spatial data is not necessarily well ordered, unlike something like time, for example. However, they got a prototype working, and showed the usual good BRIN properties: indexes were extremely small and extremely cheap to build. For narrow range queries, they were about 5x slower than GIST-rtree; however, the differences were on the order of 25ms vs 5ms, so not completely unusable. They managed this result with presorted data, and with some data in its “natural” order, which worked because the “natural” order of GIS data is often fairly spatially autocorrelated.
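At the SQL level the prototype worked like any other BRIN index; a sketch, with hypothetical table and column names, and operator class details that may differ in the final version:

-- One tiny summary (a bounding box) is stored per block range,
-- which is why the index is so small and so cheap to build.
CREATE INDEX parcels_geom_brin ON parcels USING brin (geom);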

I personally thought I would work on merging the backlog of GitHub pull requests that had built up on the PostGIS git mirror, and did manage to merge several, both new ones from Remi’s group and some old ones. I merged in my ST_ClusterKMeans() clustering function, and Dan Baston merged in his ST_ClusterDBSCAN() one, so PostGIS 2.3 will have a couple of new clustering implementations.
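Both are window functions, so usage looks something like this sketch (the stations table and its columns are hypothetical):

-- Partition the rows into 5 clusters using K-means
SELECT id, ST_ClusterKMeans(geom, 5) OVER () AS cluster_id
FROM stations;

-- Density-based clustering: points within 100 units of each
-- other, with at least 5 points required to form a cluster
SELECT id, ST_ClusterDBSCAN(geom, 100, 5) OVER () AS cluster_id
FROM stations;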

However, in the end I spent probably 70% of my time on a blocker in 2.2, which was related to upgrade. Because the bug manifested during upgrade, when there are two copies of the postgis.so library floating in memory, and because it only showed up on particular Ubuntu platforms, it was hard to debug; but in the end we found the problem and put in a fix, so we are once again able to do upgrades on all platforms.

The other projects also got lots done, and there are more write-ups at the event feedback page. Thanks to Olivier Courtin from Oslandia for taking on the heavy weight of organizing such an amazing event!