ArcIMS Retrieving...
19 Jun 2019
The @spatialpunk asked if the old ArcIMS loading GIFs were still around, and I managed to find a live ArcIMS site to scavenge them from. For posterity…
…and…
…ah, the good old days.
10 Jun 2019
When talking to government audiences, I like to point out that the largest national mapping agency in the world resides in a building decorated like a hypertrophic kindergarten, in Mountain View, California. This generally gets their attention.
The second most important year in my career in geospatial was 2005¹, the year that Google Maps debuted and began the migration of “maps on computers” from a niche market dominated by government customers to the central pivot of multiple consumer-facing applications.
The echoes of 2005 lasted for several years.
A funny thing happened in the shakeout, though. Nobody ever caught up to Google. Or even came very close. While Google has abandoned Maps Engine (their foray into “real GIS”), their commitment to core location data, knowing where everything is and what everything is, all the time and in real time, remains solid and far in advance of all competitors.
Probably even before they rolled out the iPhone in 2007, Apple knew they had a “maps problem”. Once they added a GPS chip (a development that was only awaiting a little more battery oomph), they would be beholden to Google for all the directions and maps that made the chip actually useful to phone users.
And so we eventually got the Apple maps debacle of 2012. It turns out that building a global basemap that is accurate and includes all the relevant details users have come to expect is really, really, really hard.
In an excellent article on the differences between Google and Apple’s maps, Justin O’Beirne posited:
Google has gathered so much data, in so many areas, that it’s now crunching it together and creating features that Apple can’t make—surrounding Google Maps with a moat of time.
Even the most well-funded and motivated competitors cannot keep up. Microsoft came the closest early on with Bing Maps, but seems to have taken their foot off the gas, in acknowledgement of the fact that they cannot achieve parity, let alone get ahead.
And so what? So what if Google has established Google Maps as a source of location data so good and so unassailable that it’s not worth competing?
Well, it sets up an interesting future dynamic, because the competitive advantage those maps provide is too large to leave unchallenged.
Someone is going to take another run at Google; they have to. My prediction is that it will be AWS, either through acquisition (Esri? Mapbox?) or just building from scratch. There is no doubt Amazon already has some spatial smarts, since they have to solve huge logistical problems in moving goods around for the retail side, problems that require quality spatial data to solve. And there is no doubt that they do not want to let Google continue to leverage Maps against them in Cloud sales. They need a “good enough” response to help keep AWS customers on the reservation.
How will we know it’s happening? It might be hard to tell from outside the Silicon Valley bubble. Most of the prominent contributors to the New Mapping Hegemony live behind the NDA walls of organizations like Google and Apple, but they are the kinds of folks Amazon will look to poach, in addition to members of Microsoft’s Bing team (more conveniently already HQ’ed in Seattle).
I think we’re due for another mapping space race, and I’m looking forward to watching it take shape.
1. The most important event of my geospatial career was the release of the iPhone 3GS with a GPS chip as a standard component.
07 Jun 2019
In my last post I demonstrated that PostgreSQL 12 with PostGIS 3 will provide, for the first time, automagical parallelization of many common spatial queries.
This is huge news, as it opens up the possibility of extracting more performance from modern server hardware. Commenters on the post immediately began conjuring images of 32-core machines reducing their query times to milliseconds.
So, the next question is: how much more performance can we expect?
To investigate, I acquired a 16 core machine on AWS (m5d.4xlarge), and installed the current development snapshots of PostgreSQL and PostGIS, the code that will become versions 12 and 3 respectively, when released in the fall.
The number of workers assigned to a query is determined by PostgreSQL: the system looks at a given query, and the size of the relations to be processed, and assigns workers proportional to the log of the relation size.
For parallel plans, the “explain” output of PostgreSQL will include a count of the number of workers planned and assigned. That count is exclusive of the leader process, and the leader process actually does work outside of its duties in coordinating the query, so the number of CPUs actually working is more than num_workers, but slightly less than num_workers + 1. For these graphs, we’ll assume the leader fully participates in the work, and that the number of CPUs in play is num_workers + 1.
PostgreSQL’s automatic calculation of the number of workers could be a blocker to performing analysis of parallel performance, but fortunately there is a workaround.
Tables support a “storage parameter” called parallel_workers. When a relation with parallel_workers set participates in a parallel plan, the value of parallel_workers overrides the automatically calculated number of workers.

In order to generate my data, I re-ran my queries, upping the number of parallel_workers on my tables for each run.
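Each run bumped the storage parameter with an ALTER TABLE, something like this (the worker counts varied per run):

```sql
-- Force the planner to use 8 workers for scans of these tables,
-- regardless of what it would calculate from the relation sizes.
ALTER TABLE pd SET (parallel_workers = 8);
ALTER TABLE pts_10 SET (parallel_workers = 8);
```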
Before running the tests, I set all the global limits on workers high enough to use all the CPUs on my test server.
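On the 16-core test machine, that meant settings along these lines (the exact values just need to exceed your core count):

```sql
-- Raise the global caps so a single query can use all the CPUs.
SET max_parallel_workers = 16;
SET max_parallel_workers_per_gather = 16;
```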
I also loaded my data and created indexes as usual. The tables I used for these tests were:
- pd: a table of 69,534 polygons
- pts_10: a table of 695,340 points

I tested two kinds of queries: a straight scan query, with only one table in play, and a spatial join with two tables. I used the usual queries from my annual parallel tests.
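Roughly, the two queries look like this (a sketch of my standard tests, not the verbatim benchmark SQL):

```sql
-- Scan: a spatial function computed against every row of one table.
SELECT Sum(ST_Area(geom))
  FROM pd;

-- Join: polygons spatially joined to their points.
SELECT Count(*)
  FROM pd
  JOIN pts_10 pts
    ON ST_Intersects(pd.geom, pts.geom);
```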
Scan performance improved well at first, but started to flatten out noticeably after 8 cores.
| CPUs      | 1   | 2   | 4   | 8  | 16 |
|-----------|-----|-----|-----|----|----|
| Time (ms) | 318 | 167 | 105 | 62 | 47 |
The default number of CPUs the system wanted to use was 4 (1 leader + 3 workers), which is probably not a bad choice, as the expected gains from additional workers shallow out as the core count grows.
The join query computes the join of 69K polygons against 695K points. The points were generated from the polygons, with precisely 10 points falling in each polygon, so the resulting relation is about 695K records long.
For unknown reasons, it was impossible to force a join plan with only 1 worker (aka 2 CPUs), so that part of the chart/table is empty.
| CPUs      | 1     | 2 | 4    | 8    | 16   |
|-----------|-------|---|------|------|------|
| Time (ms) | 26789 | - | 9371 | 5169 | 4043 |
The default number of CPUs is 4 (1 leader + 3 workers) which, again, isn’t bad. The join performance shallows out faster than the scan performance, and above 10 CPUs it is basically flat.
27 May 2019
For the last couple years I have been testing out the ever-improving support for parallel query processing in PostgreSQL, particularly in conjunction with the PostGIS spatial extension. Spatial queries tend to be CPU-bound, so applying parallel processing is frequently a big win for us.
Initially, the results were pretty bad.
With PostgreSQL 12 and PostGIS 3, all that has changed. All standard query types now readily parallelize using our default costings. That means parallel execution of sequential scans, aggregates, and joins.
PostgreSQL 12 and PostGIS 3 have finally cracked the parallel spatial query execution problem, and all major queries execute in parallel without extraordinary interventions.
With PostgreSQL 11, most parallelization worked, but only at much higher function costs than we could apply to PostGIS functions. With higher PostGIS function costs, other parts of PostGIS stopped working, so we were stuck in a Catch-22: improve costing and break common queries, or leave things working with non-parallel behaviour.
For PostgreSQL 12, the core team (in particular Tom Lane) provided us with a sophisticated new way to add spatial index functionality to our key functions. With that improvement in place, we were able to globally increase our function costs without breaking existing queries. That in turn has signalled the parallel query planning algorithms in PostgreSQL to parallelize spatial queries more aggressively.
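In PostGIS 3 terms, the key functions are declared with both a high cost and a planner support function. Schematically it looks like this (a sketch of the internal declaration, with an illustrative cost value; this is extension-build SQL, not something you run against a live database):

```sql
CREATE OR REPLACE FUNCTION ST_Intersects(geom1 geometry, geom2 geometry)
  RETURNS boolean
  AS 'MODULE_PATHNAME', 'ST_Intersects'
  LANGUAGE c IMMUTABLE STRICT PARALLEL SAFE
  COST 10000                         -- high cost nudges the planner toward parallel plans (value illustrative)
  SUPPORT postgis_index_supportfn;   -- new in PG12: injects the && index condition automatically
```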
In order to run these tests yourself, you will need the development snapshots of PostgreSQL 12 and PostGIS 3.
You’ll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best.
The setup instructions show where to download the Canadian polling division data used for the testing:
- pd: a table of ~70K polygons
- pts: a table of ~70K points
- pts_10: a table of ~700K points
- pts_100: a table of ~7M points

We will work with the default configuration parameters and just mess with max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes. When max_parallel_workers_per_gather is set to 0, parallel plans are not an option.
max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2.

Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I had accidentally tested with max_parallel_workers set to 1, which results in two processes working: the leader process (which does real work when it is not coordinating) and one worker.
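A quick way to avoid that mistake is to check (and set) the parameters explicitly at the top of each session, for example:

```sql
-- Verify the worker limits before timing anything.
SHOW max_parallel_workers;
SHOW max_parallel_workers_per_gather;

-- Toggle parallelism for comparison runs.
SET max_parallel_workers_per_gather = 0;  -- parallel plans off
SET max_parallel_workers_per_gather = 4;  -- parallel plans on, up to 4 workers
```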
Behaviour for aggregate queries is still good, as seen in PostgreSQL 11 last year.
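The aggregate is a simple spatial summary over the polygon table, roughly (a sketch of the standard test, not the verbatim SQL):

```sql
EXPLAIN ANALYZE
SELECT Sum(ST_Area(geom))
  FROM pd;
```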
Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.
The simplest spatial parallel scan adds a spatial function to the target list or filter clause.
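Judging from the plan below (one 8-byte value per polygon row), the query is along these lines:

```sql
-- Spatial function in the target list, computed for every row.
EXPLAIN
SELECT ST_Area(geom)
  FROM pd;
```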
Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan. This query did not work out-of-the-box with PostgreSQL 11.
```
 Gather  (cost=1000.00..27361.20 rows=69534 width=8)
   Workers Planned: 3
   ->  Parallel Seq Scan on pd  (cost=0.00..19407.80 rows=22430 width=8)
```
Start with a simple join of all the polygons to the 100 points-per-polygon table.
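The query is the usual spatial join, sketched here from the standard tests:

```sql
EXPLAIN
SELECT *
  FROM pd
  JOIN pts_100 pts
    ON ST_Intersects(pd.geom, pts.geom);
```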
Right out of the box, we get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.
```
 Gather  (cost=1000.28..837378459.28 rows=5322553884 width=2579)
   Workers Planned: 4
   ->  Nested Loop  (cost=0.28..305122070.88 rows=1330638471 width=2579)
         ->  Parallel Seq Scan on pts_100 pts  (cost=0.00..75328.50 rows=1738350 width=40)
         ->  Index Scan using pd_geom_idx on pd  (cost=0.28..175.41 rows=7 width=2539)
               Index Cond: (geom && pts.geom)
               Filter: st_intersects(geom, pts.geom)
```
The only quirk in this plan is that the nested loop join is being driven by the pts_100 table, which has 100 times as many records as the pd table.

The plan for a query against the pts_10 table also returns a parallel plan, but with pd as the driving table.
Right out of the box, we still get a parallel plan! No amount of begging and pleading would get a parallel plan in PostgreSQL 11.
```
 Gather  (cost=1000.28..85251180.90 rows=459202963 width=2579)
   Workers Planned: 3
   ->  Nested Loop  (cost=0.29..39329884.60 rows=148129988 width=2579)
         ->  Parallel Seq Scan on pd  (cost=0.00..13800.30 rows=22430 width=2539)
         ->  Index Scan using pts_10_gix on pts_10 pts  (cost=0.29..1752.13 rows=70 width=40)
               Index Cond: (geom && pd.geom)
               Filter: st_intersects(pd.geom, geom)
```
17 May 2019
Last month I was invited to give a keynote talk at FOSS4G North America in San Diego. I have been speaking about open source economics at FOSS4G conferences more-or-less every two years since 2009, and I look forward to revisiting the topic regularly: the topic is ever-changing, just like the technology.
In 2009, the central pivot of thought about open source in the economy was professional open source firms in the Red Hat model. Since then we’ve taken a ride through a VC-backed “open core” bubble and are now grappling with an environment where the major cloud platforms absorb most of the value of open source while contributing back proportionally very little.
What will the next two years hold? I dunno! But I have increasingly little faith that a good answer will emerge organically via market forces.
If you liked the video and want to use the materials, the slides are available here under CC BY.