Inquiring about some slow cases in the JTS buffer algorithm, Michaël Michaud asked:
I have no idea why a large buffer should take longer than a small one, and I don't know if there is a better way to compute a buffer.

Martin Davis and I dragged out the usual hacks to make large buffers faster: simplifying the input geometry, or buffering outwards in stages. Said Martin:
The reason for this is that as the buffer size gets bigger, the "raw offset curve" that is initially created prior to extracting the buffer outline gets more and more complex. Concave angles in the input geometry result in perpendicular "closing lines" being created, which usually intersect other offset lines. The longer the perpendicular line, the more other lines it intersects, and hence the more noding and topology analysis has to be performed.

A couple of days later, Michaël came back with this:
I made some further tests which show there is room for improvement with the excellent stuff you recently added.

Holy algorithmic cow, Batman! Creating and merging 1000 wee buffers is orders of magnitude faster than computing one large buffer, if you merge them in the right order (by using the cascaded union). Huge kudos to Michaël for attempting such a counter-intuitive processing path and finding the road to nirvana.
I created a simple linestring: length = 25000m, number of segments = 1000
Trying to create a buffer at a distance of 10000 m ended with a heap-space exception (it took more than 256 MB!) after more than 4 minutes of intensive computation!
The other way is: decompose the line into segments (less than 1 second); create individual buffers (OpenJUMP says 8 seconds, but I think that includes redrawing); cascaded union of the resulting buffers (6 seconds).
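Michaël's pipeline can be sketched in a few lines of Python using Shapely, which wraps GEOS (the C++ port of JTS), so `buffer` and `unary_union` below are the same underlying operations. This is a hedged sketch of the idea, not the actual OpenJUMP/JTS code he ran:

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

def buffer_by_segments(line, distance):
    """Buffer a linestring by buffering each segment separately,
    then merging the small buffers with a cascaded (tree) union."""
    coords = list(line.coords)
    # Step 1: decompose into two-point segments and buffer each one.
    # Each tiny buffer has a short, simple raw offset curve.
    pieces = [LineString(coords[i:i + 2]).buffer(distance)
              for i in range(len(coords) - 1)]
    # Step 2: unary_union merges the pieces pairwise up a tree
    # (the "cascaded union"), rather than in one long linear sweep.
    return unary_union(pieces)

# A small example line; the union of per-segment round buffers
# covers essentially the same area as buffering the whole line.
line = LineString([(0, 0), (100, 0), (100, 100), (0, 100)])
fast = buffer_by_segments(line, 50.0)
slow = line.buffer(50.0)
```

The reason the merge order matters is the tree: unioning buffer 1 with buffer 2, buffer 3 with buffer 4, and so on, then unioning the results, keeps each intermediate geometry small and spatially compact, instead of repeatedly unioning one small buffer into one ever-growing blob.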