Things That Go “Bump” in the Data Center

Telling tales of data center horrors this Halloween

October 31 is coming up soon, so it’s time for costume parties, haunted houses, and your favorite scary movies. Halloween stories are frighteningly fun, whether told at a party or while wrapped up in a blanket at home. We all love a good Dracula or Frankenstein tale!

In the spirit of Halloween, we may tell stories of things going “bump” in the night, but nothing’s as terrifying to tech pros as things that go bump in the data center.

To celebrate Halloween this year, we want to know your scariest, most goosebump-inducing data center fears and anecdotes that not only scare you this season, but year-round. Did you have a mishap with a chiller, HVAC, generator, or branch circuit? Or did something outside the data center affect your ability to conduct business as usual?

Tell us your scary stories and how you fought back against these spooky situations by Friday, October 11, and we’ll put 250 THWACK points in your account.

Anonymous
  • After a heavy storm one evening, we discovered a small leak in the data center. Someone decided to place a bucket above the drop ceiling to keep the leak from dripping onto the ceiling tiles. After about a week, the bucket became so full it broke through the ceiling and landed on a server rack, causing a huge outage.

  • A company I worked for had equipment in sheds between buildings in New York City. The sheds were constantly being broken into and used as a toilet by the homeless population.

  • Nope. We opted to just make fun of him about it forever.

  • We had just purchased new core switches (Catalyst 4507s). We had everything ready to go and plugged in, but left the power off since we were having consultants help us with the initial setup and config. The day comes when the consultants are onsite and we turn everything on. The entire datacenter (and part of the whole basement) goes dark...instantly (lights, not servers, etc.). The room UPS kicks in, thankfully, and things keep running, but we only have about 20 minutes to figure out the problem. It turns out the circuit feeding the room UPS, which was labeled 100 amps, was not 100 amps upstream. It was 30 amps. We turned the switches back off, figured out what the problem was, and restored power. Maintenance upgraded the circuit breaker and we were fine after that...but we didn't have much time left (a couple of minutes) on the room UPS.

    The next nightmare relates to the previous one.

    We had just purchased a new Cisco UCS 5-blade chassis and a SAN for our new EMR and production environment (coming from a rack full of physical servers). We had everything ready to go and plugged in, but left the power off since we were having consultants help us with the initial setup and config. The day comes when the consultants are onsite and we turn everything on. The entire datacenter (and part of the whole basement) goes dark...instantly (lights, not servers, etc.). The room UPS kicks in, thankfully, and things keep running (with about 30 minutes on the UPS timer). But we were floored that we had this problem, since we had checked with maintenance on power availability. We shut the UCS down, but we didn't want to shut the EMC SAN down dirty, so we left it running while we scrambled to figure out the problem and, hopefully, fix it quickly. Time was dwindling, and we came to find out that the circuit feeding the panel that fed these circuits (the same panel as in the previous story) was only on a 40 amp breaker.

    So if you're keeping score:

    Story 1: 40 amp to 30 amp to 100 amp.

    Story 2: 40 amp to 100 amp to 100 amp.

    Apparently maintenance never bothered to check all the way upstream.

  • A CSX derailment cut the fiber feed along our light rail system. A number of our infrastructure guys worked extra hours for a while to help re-establish communications along the line.