12 Aug 2008
I host my @cleverelephant.ca email using Google Apps, and I even go so far as to pay the $50/year for “Premier” service to have no ads on my account. Having run self-hosted infrastructure for 10 years at Refractions (I was President, CEO, and sysadmin for too many years), I find it pretty pleasant to have zero maintenance issues keeping my email online, my spam well filtered, and my web pages up.
However, he who lives by the cloud, dies by the cloud.
Yesterday, my web mail access was unavailable for about one hour, exceeding the 99.9% availability per month guaranteed in the Google Apps SLA. For those doing the math, 99.9% availability means that 45 minutes per month of downtime is considered acceptable.
So, I am going to try and get my account credit of three free days of service (retail value, 3 * $50 / 365 = $0.41) from Google. (Does anyone else think that’s a pitiful SLA credit? If they go below 95% uptime (37 hours of downtime) they will credit you a whole 15 days of service – $2.05.)
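(For anyone who wants to check that arithmetic, here it is as a throwaway sketch: Java only because it was handy, assuming a 31-day month, with all the dollar figures taken straight from the numbers above.)

```java
// Back-of-the-envelope SLA arithmetic, using the numbers quoted above.
public class SlaMath {
    public static void main(String[] args) {
        double yearlyFee = 50.0;               // Premier price, dollars per year
        double dayValue = yearlyFee / 365.0;   // retail value of one day of service

        // 99.9% availability over a 31-day month: 0.1% of the month may be down.
        double allowedMinutes = 31 * 24 * 60 * 0.001;   // about 45 minutes
        // Below 95% uptime: 5% of the month, the threshold for the bigger credit.
        double downtimeHoursAt95 = 31 * 24 * 0.05;      // about 37 hours

        System.out.printf("Downtime allowed at 99.9%%: %.0f minutes%n", allowedMinutes);
        System.out.printf("Downtime threshold at 95%%: %.0f hours%n", downtimeHoursAt95);
        System.out.printf("3-day credit:  $%.2f%n", 3 * dayValue);
        System.out.printf("15-day credit: $%.2f%n", 15 * dayValue);
    }
}
```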
Step one, who the heck do I contact? Unfortunately, the SLA is completely silent on the matter. Presumably someone in “support”, so I go hunting through the support pages, where I find a reference to 24x7 support for Premier customers (my $50 in action!). And at the bottom of that page, the answer:
You can submit a priority email request as follows:
- Log in to your Google Apps control panel.
- Click on ‘Domain Settings,’ then on ‘Account Information.’
- Click ‘Contact us online.’
So, I submit my request for account credit, and wait. I get an auto-response right away saying they have created a ticket. So the clock is running on Google support, started at 4:30pm, Monday August 11. How long to a response? Updates will follow.
Update (Aug 12, 11:40pm): Received a human response from the Google collective. No answer on the SLA credit yet.
Hello Paul,
Thanks for your message.
I’ve reviewed your report and have forwarded the information you provided to the appropriate team for additional investigation. I appreciate your patience and apologize for any inconvenience this may have caused.
Sincerely,
Shashank
The Google Apps Team
Update (Aug 15, 5:46pm): More signs of life at the GooglePlex:
Hello Paul,
Thanks for your response.
Our billing team is currently reviewing this case. We will let you know when we’ve processed your service credit request.
You can continue to contact the support team, who can advocate on behalf of your domain in situations such as these.
Thank you for your continued support and patience.
Sincerely,
Elliot
The Google Apps Team
Update (Aug 27): The big G has decided that the August down-times warrant a big Mea Culpa. Clearly committed to keeping faith in the cloud high, they are extending a larger credit than strictly required to all of us Premier customers (not just whiners like me):
Given the production incidents that occurred in August, we’ll be extending the full SLA credit to all Google Apps Premier customers for the month of August, which represents a 15-day extension of your service. SLA credits will be applied to the new service term for accounts with a renewal order pending. This credit will be applied to your account automatically so there’s no action needed on your part.
Top marks to the big G, way to not be Evil!
05 Aug 2008
I have been trying to squeeze every last erg of performance out of a Mapserver installation for a client. Their workload is very raster-based, pulling from JPEG-compressed, internally-tiled TIFF files. We found one installation mistake, and fixing it was good for a big performance boost (do not fill your directories with too many files, or the operating system will spend all its time searching through them). But once that was dealt with, only the raster algorithms themselves were left to be optimized.
Firing up Shark, my new favorite tool, I found that about 60% of the cycles were being spent in JPEG decompression and 30% in the Mapserver average re-sampling. Decompressing the files did make things faster, but at a 10x file size penalty – unacceptable. Removing the average re-sampling did make things faster, but with a very visible aesthetic penalty – unacceptable.
Using two tools from Intel, I have been able to cut about 40% off of the render time, without changing any code at all!
First, Intel publishes the “Integrated Performance Primitives”, basically a big catch-all library of multi-media and mathematics routines, built to take maximum advantage of the extended instruction sets in modern CPUs. Things like SSE, SSE2, SSE3, SSE4 and so on.
Ordinarily, this would only be of use if I had a couple weeks to examine the code and fit the primitives into the existing code base. Fortunately for me, one of the example programs Intel ships with IPP is “ijg”, a port of the Independent JPEG Group reference library that uses IPP acceleration wherever possible. So I could simply compile the “ijg” library and slip it into place instead of the standard “libjpeg”.
Second, Intel also publishes their own C/C++ compiler, “icc”, which uses a deep understanding of the x86 architecture and the extensions referenced above to produce optimized code. I used “icc” to compile both Mapserver and GDAL, and saw performance improve noticeably.
So, amazingly, between simply re-compiling the software with “icc” and slipping in the “ijg” replacement library to speed up JPEG decompression, I’ve shaved about 40% off the render time without touching the code of Mapserver, GDAL, or LibJPEG at all.
28 Jul 2008
I’ve put my Mapserver slides online from my recent gig at GeoWeb. We’ll be in town until Friday! Try the veal!
26 Jul 2008
Update: Martin Davis tried out a variant of simplification, and produced results even faster than Michaël’s.
Inquiring about some slow cases in the JTS buffer algorithm, Michaël Michaud asked:
I have no idea why a large buffer should take longer than a small one, and I don’t know if there is a better way to compute a buffer.
Martin Davis and I dragged out the usual hacks for making large buffers faster: simplifying the input geometry, or buffering outwards in stages. Said Martin:
The reason for this is that as the buffer size gets bigger, the “raw offset curve” that is initially created prior to extracting the buffer outline gets more and more complex. Concave angles in the input geometry result in perpendicular “closing lines” being created, which usually intersect other offset lines. The longer the perpendicular line, the more other lines it intersects, and hence the more noding and topology analysis has to be performed.
A couple days later, Michaël comes back with this:
I made some further tests which show there is place for improvements with the excellent stuff you recently added.
I created a simple linestring: length = 25000m, number of segments = 1000
Trying to create a buffer at a distance of 10000 m ended with a heap space exception (took more than 256 Mb!) after more than 4 minutes of intensive computation!
The other way is: decompose into segments (less than 1 second); create individual buffers (openjump says 8 seconds but I think it includes redrawing); cascaded union of resulting buffer (6 seconds).
Holy algorithmic cow, Batman! Creating and merging 1000 wee buffers is orders of magnitude faster than computing one large buffer, if you merge them in the right order (by using the cascaded union). Huge kudos to Michaël for attempting such a counter-intuitive processing path and finding the road to nirvana.
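For the curious, the processing path looks roughly like this in JTS. This is a sketch of the idea rather than Michaël’s actual code, and it assumes a plain LineString input and the CascadedPolygonUnion class that Martin recently added:

```java
import java.util.ArrayList;
import java.util.List;

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.LineString;
import com.vividsolutions.jts.operation.union.CascadedPolygonUnion;

public class SegmentBuffer {

    // Buffer a linestring by decomposing it into two-point segments, buffering
    // each segment on its own, and merging the pieces with a cascaded union.
    public static Geometry segmentBuffer(LineString line, double distance) {
        GeometryFactory gf = line.getFactory();
        List<Geometry> pieces = new ArrayList<Geometry>();

        // One small buffer per segment of the input line.
        for (int i = 0; i < line.getNumPoints() - 1; i++) {
            Coordinate[] seg = new Coordinate[] {
                line.getCoordinateN(i), line.getCoordinateN(i + 1)
            };
            pieces.add(gf.createLineString(seg).buffer(distance));
        }

        // Merge the small buffers in a balanced (cascaded) order, which is
        // dramatically faster than accumulating them one at a time.
        return CascadedPolygonUnion.union(pieces);
    }
}
```

Each two-point segment produces a trivially simple raw offset curve, so all the hard work moves into the union step, where the cascaded (balanced) merge order keeps the intermediate geometries small.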
25 Jul 2008
If I hear this LBS “use case” one more time, I think my ears are going to start to bleed.