First and foremost, my heart goes out to those who have lost loved ones during this unprecedented event. The human tragedy coupled with the extraordinary loss of property makes everything else seem unimportant.
That said, everyone I know has been in cleanup and rebuild mode since the storm passed – so let's take a moment to think about how the most successful business continuity plans worked and how others failed.
This is a picture of 200 West Street, the headquarters of Goldman Sachs. The firm was pilloried for it in Facebookistan, the Twitisphere and on most of the InterWeb for being lit up while NYU Langone Medical Center on the other side of Manhattan Island was evacuating patients.
It's easy to take shots at Goldman Sachs for all kinds of things, but not for its best-practices business continuity planning. This picture is harsh – but only if you are one of the thousands of other businesses in lower Manhattan that did not plan for a power outage on this scale. How did Goldman Sachs do this? I asked some friends who work there and I was politely told that Goldman's business continuity plans are highly confidential and considered a competitive advantage for the firm. Not a huge surprise, but I had to ask.
I thought Hurricane Irene was a teaching moment. So last year, we set up special monitoring software, purchased better uninterruptible power supplies (UPSs), mirrored our critical servers with servers in different physical locations and switched some of our email accounts to cloud service providers. We did a pretty good job convincing ourselves that a storm like Irene comes once in a hundred years and we probably would not be hit harder in the near future. Oops!
Hurricane Sandy flooded power stations and tunnels, took out power for everyone below 39th Street in Manhattan (the best, most optimistic estimates for power restoration are sometime over the weekend) and wreaked havoc and unimaginable devastation from Atlantic City, NJ across the entire South Shore of Long Island. Another "once-in-a-hundred-year storm" the very next year ... and this one was far worse.
In some locations, good old-fashioned twisted-pair copper phone lines (known as POTS, Plain Old Telephone Service) were the first to go down. This is highly unusual, as POTS is usually the last service to go out. But this storm was business unusual.
Next, low-end data centers without adequate back-up generators went. This took out thousands of websites and email accounts.
Then, six to eight hours after the power failures, anyone with a triple-play digital phone system lost phone service – although some customers had already lost theirs to flooding.
What worked? Cloud-based email like Google Apps for Business, Hotmail, normal Gmail, AOL, Yahoo!, 1and1 and Microsoft Office 365. Big data centers did well.
Cellphones, smartphones and tablets ruled. 3G cell and data service was up the whole time, but 4G LTE service (in lower Manhattan) on Verizon was gone a few hours after the power.
Things that should have failed, worked perfectly. Things that should have worked perfectly, failed. For many of us, technical infrastructure may need to be rethought. Is it time to put all of our servers in the cloud? What will backup look like? How will we deal with wireless vs. wired phone service? There are dozens of questions with long, subtle answers to be discussed. And, there are new business continuity schemas to be thought through.
In a world where we literally can't function without our connections, staying connected may need some serious rethinking.
The business lessons from Post-Tropical Storm Sandy are plentiful and, over the next few weeks, we are going to cover as many as we can. For today, let's just be thankful for what's left, offer help to our neighbors, keep those who have experienced loss in our thoughts and prayers and keep rebuilding.