Things That Go “Bump” in the Data Center

Community Manager

Telling tales of data center horrors this Halloween

October 31 is coming up soon, so it’s time for costume parties, haunted houses, and your favorite scary movies. Halloween stories are frighteningly fun during parties or wrapped up in a blanket at home—we all love a good Dracula or Frankenstein tale!

In the spirit of Halloween, we may tell stories of things going “bump” in the night, but nothing’s as terrifying to tech pros as things that go bump in the data center.

To celebrate Halloween this year, we want to know your scariest, most goosebump-inducing data center fears and anecdotes that not only scare you this season, but year-round. Did you have a mishap with a chiller, HVAC, generator, or branch circuit? Or did something outside the data center affect your ability to conduct business as usual?

Tell us your scary stories and how you fought back against these spooky situations by Friday, October 11, and we'll put 250 THWACK points in your account.

Level 13

Surprisingly, I've never experienced any major mishaps so far, at least none that were unrecoverable or devastating.  I have always feared the typical corporate shutdown/layoff type of thing, but my true fear is a government takeover of tech in general. Or, vice versa, tech merging with government.  That's how you get zombies, you know.....

Level 15

Years ago we had a single (as in 'one') special motor generator that generated power for our mainframe. The center of the generator weighed several thousand pounds and spun at several thousand RPM. Once every three months, on a Sunday night, we would shut the mainframe down, and while it was down we would grease the bearings on the motor generator, which were about 18 inches across. When we were done we started spinning the generator back up, and one of the bearings started squealing; the faster it went, the louder it got.

I could only picture the bearing seizing up and the generator rolling out of its casing, so it was pretty scary. We had to leave it running for about 12 hours to get the mainframe back up while a new bearing was flown in. A second motor generator was soon purchased.

Level 9

At one of my old jobs, our financial system was on an AS/400 (mind you, this was about 7 years ago), and running updates on it required me to be there physically in the middle of the night. Around 2 AM, as I was sitting there watching the update progress... I felt something watching me. I turned and looked out my window and BAM, there was a cat staring at me. Scared me to death, as I was the only one in the facility at that time. Other employees, since I was still fairly new there, had told me to be careful at night because the building was haunted, so I was already nervous!  Stinking cat must have seen my light on through the window and wanted to see what was going on. I think I scared him as much as he scared me!


There was the one time during month-end processing on a Sunday morning (back in the 80's), about 02:30, when the building across the street blew up... it literally took out the whole block.  The blast blew out all the glass on one side of our building, moved walls, etc.  The mainframe and all the minis kept on rocking.  No head crashes, no errors.  It felt like a 4-5+ on the Richter scale.

We had a power outage in our building, which happens a few times per year. The datacenter kicked over to battery backup and the generator spooled up as per usual. The transfer switch automatically flipped everything over to the generator and everything was happy. Until it wasn't.

The generator blew a seal and all the coolant leaked into the parking lot. Once it inevitably overheated, the transfer switch brought everything back to battery backup... which lasted for about another 20 minutes. The datacenter went dark. It took us an entire weekend of working in shifts around the clock to recover. The list of casualties was:

- The packet shaper, for which we didn't have a redundant unit (the only non-redundant hardware in our datacenter... figures)

- The entire Call Manager database (4,000+ phones)

- BOTH of our UCS Fabric Interconnects had corrupted flash and needed to be reformatted

- A firewall in our test environment

Level 13

I have a few scary incidents to recite.

Major campus power outage.  Running the campus from large semi-truck generators that kept running out of fuel.  Power ran from these generators for a couple of months or so.  It was before my time at this place, but a scary tale nevertheless.  They never knew when the power would drop for the whole campus.  The server room initially had no emergency power, no separate generator, and only a limited battery backup for a few devices like the core router.

Server room/data center with no dedicated cooling, well, unless you count the jury-rigged hole in the building wall with a bathroom fan in it.  When I first started, there was no wall between this 'server room' and two IT personnel cubes.  They had to work in the noise.  Historically, this room had once been the college president's office; yes, it still had the carpeting while serving as the 'server room'/'data center'. It had wood walls and high lofted wood ceilings.  And the local fire chief/fire code REQUIRED water sprinklers.

Sewage backup in the service tunnels.  The mess covered cabling between buildings.  Oh, and that backup happened at the one building that had network gear in the basement/service tunnel level.  No one wanted to touch any of that equipment ever again.  People in hazmat suits had to sanitize the basement/tunnels with a spray-down to clean out the sewage.  But that still left bits of paper all over everything.  Yes, I believe it was toilet paper.

New data center!  But there was a misconfiguration of the generator and transfer switch, which failed to switch over to the large Liebert UPS that should have prevented any power drop.  When the generator went into test mode, the transfer switch lagged on switching, dropping power to all the racks for a couple of seconds.  Crashed everything.

Also the new data center.  Misconfiguration of new dedicated AC equipment: 2 AC units, but no alerts on failed failovers, including the scheduled weekly switch from one AC to the other.  These were managed by a standalone monitoring system, not networked, not accessible to our monitoring, and managed by facilities. The server room overheated until things started shutting down and people noticed services not working.  We lost a couple of servers that didn't shut down soon enough to protect themselves, and several hard drives in the SAN and direct-attached storage.  It took facilities 3-4 hours to get the ACs working and the room cooled enough to start powering things back up.


There are always the issues with power: out-of-phase power, overvoltage, undervoltage, losing a single leg, etc.  All of these can cause a drain on the UPS while it corrects the situation.  Hopefully it switches over to generator and notification is made.  In my current shop, a brief power hit causes an hour's run on generator after clean power is noted.  I recall a remote site in a past life that had undervoltage issues all the time, so the UPS had to boost the voltage.  Even though it never went to battery backup, it ended up failing due to a drained battery while still on main power.  Then the site went down due to the undervoltage issues....

Level 15

Many moons ago, way back in the days when landlines were king (or maybe not that long ago), we had a situation where we lost the RPC service on the PDC in the main datacenter, thanks to a recently dismissed IT team member.   He renamed the RPC.exe service executable to RPCC.exe, and it took nearly 48 hours, many calls, and many, many people pacing outside the server room before we were able to discover this dastardly deed.  It kept backups from working, databases from loading, etc.  This was in a Windows 2000 AD environment.   He also deleted the DNS before it was integrated with AD.   So much fun.   We secured the environment and never had the issue again, but it sure did scare me good... or it could have been the gallons of caffeine I had in me.   Either way, I got a chill.

Level 11

Back when Cisco UCS blades were still fairly new we had a fun few months.  Had multiple issues and support techs come out and replace parts trying to figure it out.  Ended up replacing 98-99% of the parts (including moving blades from one chassis to another to rule them out).  Finally got to a breaking point and demanded the chassis be fully replaced with a new one.  Never did understand what it was about it, but that chassis ate parts.  Sometimes it was the same part, sometimes it was a new one.


Ah, then there was the McAfee virus.  A McAfee push had a bad definition that flagged svchost.exe as a virus, and every Windows device basically stopped communicating.  Hilarity ensued, as it required every Windows device to be visited with a thumb drive to resolve the issue... after we got the fix.  We had machines (laptops, PCs, servers, etc.) scattered to the four winds worldwide (30+ sites on at least 4 continents).  This had us down as a company for 2-3 days at HQ, and then various sites were down for the next week until we could get things resolved.


This happened about a year ago. Sequence of events: sewage backs up due to a blockage; Friday night, the toilet waste pipe collapses under the weight of the sewage; the pipe is directly above a UPS, which goes bang when covered in the proverbial muck.
Luckily the secondary UPS coped and all was not lost, but it was so close, so scary, and so messy.
Thankfully I didn't have to clean it up.

Level 12

Once upon a time, as a young helpdesk person, I needed to deploy Office to a PC. Our solution copied the install bits to the PC that was receiving the deployment.

This was for a major global company, many thousands of desktops/laptops. It was also early this century, so most sites were connected via T1s.

At deployment, I accidentally selected all of the PCs in the organization to deploy Office to. Roughly 300MB was getting pushed to each of many thousands of PCs, over T1 links....

Calls started coming in immediately as I essentially saturated the entire internal network. The server/solution also stopped responding, as it was trying to satisfy all of these thousands of network requests, so I couldn't even cancel the job.

Nothing to do at that point but literally sprint to the datacenter, and yank the power cord of the server.

Magically everything just started working again...
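For a sense of scale, here's some rough back-of-the-envelope math on that push. The 300MB payload and T1 links are from the story above; the exact PC count is my own illustrative guess, since the poster only says "many thousands":

```python
# Rough arithmetic for the accidental Office deployment.
# 300MB payload and T1 links come from the story; the PC count is assumed.
payload_megabytes = 300          # install bits copied to each PC
pc_count = 5000                  # "many thousands" of PCs -- illustrative guess
t1_megabits_per_sec = 1.544      # standard T1/DS1 line rate

total_megabits = payload_megabytes * 8 * pc_count
hours_on_one_t1 = total_megabits / t1_megabits_per_sec / 3600

# Even divided across dozens of sites, every T1 is instantly saturated.
print(f"{hours_on_one_t1:,.0f} hours of traffic for a single T1")
```

With those assumed numbers that's over two thousand hours of traffic, so it's no surprise the calls started immediately and the only fix was pulling the server's power cord.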

Level 14

Many, many years ago I was a computer operator working the 3rd (overnight) shift. My co-worker on the previous shift had given me turnover and proceeded to leave. I continued working at the console, swapping tapes, and was in the beginning stages of reading an unusual message on the console when BANG, CLANG, and THUD! Scared the living daylights out of me. Turns out he had put his thermos on top of a printer which was out of my line of vision; this printer was the kind whose lid came up when it needed paper! After I put my heart back in my chest, I left the thermos on his desk. We had a good laugh the next night.

Level 12

We have two UPSes that supply our data center; one is relatively new, the other is not. These are connected to a generator that is capable of powering the entire building. Occasionally the generator will kick in and spike the power, which will cause the UPS to go into bypass, cutting off utility power, but every once in a while it will also cause the UPS to fault and shut itself off to protect the batteries... So, every time I see the lights flicker and hear the generator start, I am terrified to go into the data center....

Level 9

Building fire, back in my mainframe days.  We were never in danger, and neither was the computer room, but this was in the days of huge vacuum-loading tape drives, and they sucked in ALL THE SMOKE.  Once we were allowed back in the building, I had to spend an hour cleaning equipment before I could get back to processing.  The horror!

Level 12

Once upon a Tuesday dreary, while I pondered, weak and weary,

Over many a quaint and curious KB of forgotten lore-

  While I nodded, nearly napping, suddenly there came mass ringing,-

and of several someone's mumbling, bumbling past each other on the floor

"'Tis some MI" I muttered, "something went wrong for the floor"

          Another Tuesday, nothing more

The noise it grew, the people buzzing, the tickets flew, the system chugging,

I look up to seek the wallboard, calls queuing: 24.

"OK", I thought, seen worse, seen better, luckily I'm not a fretter,

Opened a chat to find the trend. Calls queuing: 54.

"What's down?!" I ask. "Exchange, Citrix, maybe Notes and Avocor"

         Whelp, not seen this before

A world away team BigFix panicked, "Stop it!" "I can't, the system's had it!"

An order sent with beefy privilege: "Reboot, yes please, I'm sure"

   Nodes go dark, reports turn red. "That's not what I thought I said!"

A simple click misplaced, mishandled: Invert - All BUT the box in Singapore

Descending steps 6-a-stride. Inside, pulling cords from hosts by the 3 or 4

         "Did we save it?" "Can't be sure."

"Hello!", I say with cheery greeting, "Is your call related to faults we're seeing?

-With Email, Internet, IM, Appliances through the core..."

  Talking quick, my larynx flapping, I list more off, now nearly rapping

-"..Or MDM, IBM, ECM, Mainframe..". Calls queuing 104

I mute the line: "Good god what happened, we're screwed!", I think I swore.

         We can not take much more.

It gets worse and worse as things shut down, gloom descends, deepened frown.

Then at its peak, its high, its zenith- Calls queuing: 984-

   -A calm appears, arises, as support and business realize this:

It's dead, Jim. It's buggered, scuppered, let's get coffee from next door

A state of zen becomes the normal, informal chatter 'cross the floor

        Quoth the Tech Guy, "Panic nevermore."

Level 8

We have a tech that enjoys putting Jump-Scares in the Datacenter around Halloween. They're always different. Always gets me, haha.

Level 9

Holy cow... explosions and sewage (thankfully not together)?  I can't compete with that.

For me, it'd be the time that, during cable cleanup efforts, someone opted to use scissors to remove some old fiber rather than undoing the Velcro on the ladder rack.  Had the right cables been cut, it wouldn't have been a huge deal, but... well... you know.

Level 9

When I first got into the field, I worked at a small retail company with management that didn't understand the importance of safeguarding the servers/networking equipment.  When I walked in to where the racks were, the first thing I noticed was that their form of cooling was setting the AC very low in the whole room where we sat; mind you, there were no vents in the actual rack area.  Secondly: sprinklers.  Actual water sprinklers.  Right above all of the racks.

Another brief tale: we were located in a rather run-down building where the roof required patching at least once every two months.  At one point during a very rainy period we sprung a large leak, right above the racks.  We had to scramble to find something to divert the incoming water while someone ran out to get tarps.  Thankfully, we only had one box go down (this was years ago and I forget what it was), and we were able to get a workaround in place.  It took us the better part of half a day, affecting our warehouse.  The only good that came of it was that the CFO finally sat down to seriously discuss DR.

Level 10

Long ago in a galaxy far, far away... we used to have a Network Management System station in our IT operations office. When a system went down it would both send an SMS and sound a loud audible alarm (an mp3) until the alert was acknowledged. For our most critical systems we used a sound bite from a nuclear power plant: a really loud siren with a robotic voice constantly saying "red alert, red alert". One night I was on call and received an SMS. I drove in to fix the issue and found a building security guy hiding in a corner, because he had been surprised by the alarm during his rounds and was scared to death. That was probably my most Halloween-worthy moment in IT; everything else was just "normal" IT emergencies.

Level 8

Years ago, on a dark and stormy night, there was smoke, or someone thought there was smoke, in the data center.  Long story short, someone hit the button to turn off power to the DC because they thought they smelled or saw smoke.  Processes changed, the button got a cover on it, everyone got training. Happily ever after.

Level 9

More on the fun side: we had an empty full-size rack on our glassed computer floor. Before our morning operators came in, I crawled into the rack. Once they arrived, I would move slightly. It took them some time to notice, but one finally came out on the floor to investigate. After she attempted to steady the rack to no avail (I kept moving after she would stop the rack), I reached out through an opening and was greeted by a blood-curdling scream and a run for the hills.

Level 9

BWAHAHA... actually LOLed.

Level 13

Most Halloween-worthy would probably be the times (plural!) that several racks' worth of new IBM servers spontaneously shut down (not just hung, actually shut themselves off) within minutes of each other in the middle of the night. I think we attributed it to a power or temperature issue (we had no actual evidence for this besides a history of HVAC problems in the server room), even though the server room lights and A/C were running fine. A few weeks later it happened again, and again there was nothing obvious that would explain a dozen newish servers all shutting down at once. These servers weren't accessible from the internet, and I don't think we even had WiFi in the building then, so we couldn't even blame hackers. Spooky.

A couple of weeks after that, I was on the phone with an IBM tech for some other issue. He asked if we'd applied a particular BIOS update. It wouldn't fix the problem we were discussing, but it did address a bug that caused a specific generation of hardware to automagically shut down after being up for 75 (?) days. I checked my records and lo: the first shutdown was 75 days after we'd installed all the servers, and the second shutdown was 75 days after that.

I updated the BIOS and that particular problem went away. I've never gotten an explanation of exactly what was going on in the BIOS - I'm guessing the memory occupied by an uptime counter overflowed when the number got too big. Fun times.
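The overflow guess is plausible: a fixed-width uptime counter wraps after a predictable interval, which would explain crashes landing on a regular ~75-day cadence. As a hedged illustration only (not the actual IBM firmware internals, which were never published here), the arithmetic looks like this:

```python
# Illustrative sketch -- not the real IBM BIOS, whose internals are unknown.
# A fixed-width tick counter runs for a predictable time before wrapping,
# which is the classic mechanism behind "crashes every N days of uptime" bugs.
def days_until_wrap(counter_bits: int, ticks_per_second: float) -> float:
    """Days an unsigned counter of the given width runs before wrapping to zero."""
    return (2 ** counter_bits) / ticks_per_second / 86_400

# Well-known instances of the pattern: a 32-bit millisecond counter wraps
# after ~49.7 days (the famous Windows 9x uptime bug); treating the same
# counter as signed trips it at the halfway point, ~24.9 days.
print(round(days_until_wrap(32, 1000), 1))
print(round(days_until_wrap(31, 1000), 1))
```

A 75-day period doesn't match one of the textbook widths exactly (and the poster marks the number with a "?"), but some counter width and tick rate in that BIOS evidently lined up to produce it.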

Level 8

We had a Cisco unit with 3 fans, 2 of which were dead. SolarWinds caught the issue and thankfully we were able to avert disaster in the datacenter by performing an emergency maintenance and hot-swapping a new fan chassis in about 5 minutes or less without any detrimental changes.


Level 9

I work for a manufacturer. Their HQ (including most of the IT staff, oh, and a small data center) is in the same building as their main manufacturing facility. We've had the occasional fire situation out in the plant that forces us out of the building but thankfully nothing that has turned the fire suppression system on. This setup often lends itself to dirty power and power fluctuations, but for the most part it's all kept under check. We'll have brown-outs in parts of the building, but the data center is kept up on generators, thankfully.

However, for a few days in the summer every year the plant gets shut down for cleaning and maintenance. So all of those big machines that draw a ton of power are no longer on, leaving the power to flow elsewhere. This tends to end up in our data center, at the chillers... overloading one or both, causing the temperature to rise... evidently the equipment in the data center doesn't like that, we've discovered.

Level 13

A few that haunt me.

Way back in the day I was a S/36 operator.  I worked for a cable network and we did all the logs on this box using JDS.  The head of Traffic (a lady named Kathy) came in one day, said the system was slow, and proceeded to flip off the power!!!!  It took me nearly an hour to IPL the box and get it back up.  We called it the Kathy switch from then on.

Years later, new server room: the VP of IT is showing off our new Cabletron gear.  While talking about the redundant power supplies, he rips one out right in the middle of the day.  Of course, the other one wasn't seated all that well, and the whole thing shuts down and takes down our entire Token Ring network.  I went around right after that and screwed them in so tight you had to have a screwdriver to get them out.

About a week later the same guy is leaving the server room and hits the red switch by the door to open the mag lock, except he misses and hits the scram switch for the UPS.  Never could figure out why the electrician put it near the door....  We put a cover on it first thing after we got everything back up.

The last one was a hurricane. We had a really high-grade generator that would fire up within seconds of losing utility power (and we exercised it every week), except this time the breaker went on the transfer switch, and the generator was left sitting humming to itself while the UPS slowly bled to death. We monitored a lot of stuff, but that bit wasn't monitored, so we were none the wiser until the UPS finally gave up the ghost.  Try to get a new breaker after a major hurricane.  We were down 2 or 3 days.  Not fun.

Level 8

Swapping out network gear for a hospital datacenter.  During the swap, the switch was running on a single power supply.  A guy tripped over the remaining power cable, about 15 seconds before someone on the same floor coded and died.  He always felt like it was his fault....


We had a vendor mopping under the raised floor in the datacenter.  The exit had a big red button by the door to release the maglock.  One of the vendor's employees instead went to the wall 15 feet away, where there was a small red button under a clear plastic cover labeled Emergency Power Off.  Yep, you guessed it: lights off, power off, nothing but the deafening sound of an entire DASD farm spinning down.....  The sound of server cooling fans spinning down is one thing, especially in a room full of them.  An entire DASD farm is totally different in sound and sensation when you have entire cabinets (the size of server racks), each with six 12" multi-platter disc drives (IBM 3390), and 8 or more of these cabinets.


Level 14

Yup, had a similar incident... The guy tried to book it, but the Data Center Supervisor held him captive until the head of security and the CIO got there.

One of my team didn't pay attention to the accordion cable management unit on the back of a server. He was standing in the front, and the server must simply have been stuck on its sliding rails, so he just tugged a lot harder. Then there was a loud pop, sparks flew, and things went dark. He had cut through the server's power cords, which were not properly placed in the cable management. Fortunately no one was standing by the backside of the units.

At another job, in another city, we had moved our data center and people out of a very old facility. However, because we still owned the building and needed access for security, etc., all of the utilities were still on. Well, one unfortunate soul noticed the "abandoned building" and decided they could make a quick buck by repurposing all of the copper. As my co-worker/friend's wife, who happened to be the police sergeant that responded to the call, told us the next day, they had crawled underneath the fencing. Then, while lying on the ground on their belly, they proceeded to use bolt cutters to clip the big fat utility lines going into the building. As she described it: "You know how a hot dog or sausage casing splits open when you cook it? Well, that is what this person looked like." The individual was literally cooked and died very quickly. The electricity went through them and to ground along the entire length of their body. The intense heat baked everything, so there were no bodily fluids to clean up.


The technical term for that in the EMS/Fire industry is "Crispy Critter".

Level 9

One day I got a call to assist with a move. A business unit was moving from one part of a building into a newly refreshed work space. I got called because they had learned at the last minute that some servers and routers located in an office needed to move as well. Simple: once the business gave me permission, I would shut down the servers, load them onto a cart, push them to the new server room, connect them all back up, and power them up. Hurry up and wait; something that should have taken about 15 minutes took all afternoon. When asked about the routers, I said the project manager had called the provider, and they would be there on Monday to move them.

After I had moved the servers and tested with the users, I was confronted by the vice president of the business unit. He asked me what I had done when I was moving the servers. I stated I had just tested and everything should be working. He said they had lost their connection to all other applications. I went back to the office where the routers were located, and they were powered off. I discovered the circuit breaker had been shut off. I looked around until I found the project manager for the construction, and in a little while he got the power turned back on. I quickly went back to have someone test; after about 10 minutes it was pretty clear it wasn't working. I went back to check the router, and something about the CSU was not looking good, with flashing red lights.

I then ran around until I found the construction manager again and asked if the work they were doing nearby could have cut any telephone lines. He led me to an area where there was a dumpster nearly full of 50-pair cable, in several lengths, several hundred feet in all. I told him I suspected my connectivity problem was related to all that copper not being properly terminated. I went back to inform the vice president of the bad news, but before I got there I ran into a coworker and asked if he had his cabling tools. I explained the situation.
He laughed: we were not going to untangle this mess. I asked if we could use a 300-foot patch cable to connect the router to another company location across the parking lot. After a few crimps, a password-break procedure on a router, and a little bit of hacking, suddenly we had connectivity. Add a few parking cones and a few feet of yellow tape, and we had a solution until Monday. Not too scary.

Level 11

We were informed by Facilities that there was an issue with a UPS in the main data centre.

Two of their engineers, plus two UPS engineers, attended to investigate the issue, with me in attendance in case 'anything unexpected' happened.

UPS guy tells us he's putting the UPS into bypass so he can replace the batteries, and throws the switch... at which point everything goes silent and the emergency lights come on.

"Hmm, that wasn't supposed to happen" he says, as we all look at a completely dead DC...

Once upon a time 3 years ago

Beginning of a long holiday from Saturday to Tuesday.

At the end of the Friday shift, around 6 pm, it was raining heavily, with lightning and a thunderstorm. We aligned with team members to be on standby, to keep an eye on the storm and any possible problems in the data center.

We had temperature sensors (which sent alerts if the temperature dropped or rose too much) and energy sensors that monitored the power company's feed, 2 parallel UPSes, and a power generator container with 8 hours of autonomy.

At about 11 pm (the same Friday), and after a few glasses of wine, I received the first call from the sensor that monitored the power feed ("power outage, power generator started").

At this moment I started to scratch my head, and the lightning continued with force.

30 minutes later another sensor called ("generator alert"): the generator system had shut itself down due to the power disturbances (thunderstorm, sparks...) to avoid damage to the equipment.

Well, we were running on the UPS (1 hour of autonomy), and the only option was to take a taxi in and try to restart the generator. I called my analyst to meet me at the company.

Arriving at the data center, the situation was like a Christmas tree (everything blinking disorderly): a generator stopped with a warning, a UPS near the end of its charge, and us, obviously, desperate.

We called the generator vendor, cleared the alert, and turned the generator back on. It then started recharging the UPS.

We tested all the servers and services and sent emails to all the directors explaining what happened... when, around 2:30 am, lightning/thunder caused the generator to shut down once again, the UPS screaming and us desperate one more time.

We called the generator vendor back and reset it, and it went back to work.

We tested everything again: servers, services, UPS, etc. We finished around 5:30 am (new email sent, explaining again), and luckily the storm was over; utility power returned, and the generator finally got a rest.

We went home (I arrived with the breakfast and the newspaper) and I was finally able to sleep at about 7 am on Saturday morning... until I got a call at 8:30 am from the CEO, thanking me and my analyst for the hard work and saying that everything was working....

Level 9

Um....did they get fired?

Level 8

Two or three jobs back I was tasked with the unenviable job of relocating a very old SCO Unix box which ran the payroll for about 3,000 staff. I started to panic a little when I discovered the uptime on the server was approximately 10 years, and that there was no way of knowing whether the backups, which had diligently run every day, would actually restore successfully. After procrastinating over this for a while, I decided to take the plunge. When the server came back online, I have to say it was the biggest relief I have ever felt; it was the longest 10 minutes of my life waiting for it to boot! The next project was to modernize the payroll.

Level 8

We have three cooling units in our datacenter.  Two units do most of the cooling and the third one is essentially a backup.  We had a major issue with one a few weeks ago and the tech had to take it offline.  He said we should be fine with two until parts came in to fix it, but if we lost another we'd be in trouble.  The next day, another one died.  Suffice it to say, that was a long couple of days waiting for parts and hoping the last one survived.

Level 13

Quite a while ago, we had a large data center with a few AS/400s running various applications across Canada. The server room was pretty state-of-the-art at the time, with central cooling, proper cable management, alarms, fire suppression, cameras, and even an emergency power shutdown button.

Unfortunately, this button was at "butt" height, and didn't have a protective cover on it.  One administrator bent over, hit the button with his behind, and killed power to the entire room, including the AS/400s, shutting down all the enterprise applications across the country.

The individual had a very embarrassing week trying to recover all the data and get things back up and running...

Level 9

A coworker and I were in the data center wiring some new fiber into our core switches. There had been some recent construction, as our new data center was being built right on the side of our old data center, and conduits had been installed in preparation for fiber and copper. I was over at the KVM console configuring things, and my colleague opened the back of a rack to run some additional cabling, when an Ethernet cable fell off the roof of the cabinet, or so he thought. He looked down in time to see it uncoil and begin slithering under the rack and along the DC floor. He yelled in a high-pitched, unintelligible voice, and I could only make out the word "snake"; it must have been sleeping on top of the rack for warmth in the cold DC. So I ran over to try to find it, and spotted it under the racks as it slid down into a hole in the floor tiles. I started popping tiles and could not locate it. Environmental Services came in with bags and we searched for some time, but never found the snake. My coworker is happy that I saw the snake too, because no one really believed him. He also said it was lucky for the snake that it didn't start climbing his leg, because it would've drowned... due to a loss of bodily function.

So the legend still stands that after years and years of living below the floor and sharing wall space with the kitchen, this snake has grown to massive anaconda size and will devour anyone dumb enough to pop a tile in the old DC. If you are mining out cables, just make sure the cable you are pulling on is not something living... beware "The Anaconda of the DC."

Level 8


Level 7

The Bumpin' Plump Dump

I remember something that went "bump," or rather "dump," in the night. It may not have been a large DC, but while doing some routine maintenance at a remote location on our CAN, I noticed the door leading outside from the network closet was mysteriously illuminated along its bottommost portion. The light was rather bright, so I knelt down to investigate... To my shock, a hole had rusted through! However minuscule it may have been, it was still enough for mammalian intruders of a certain small size. Once my gaze was off the door imperfection, I was even more shocked to see a sizeable pile of what could only be described as a semi-solid, pudding-like mound. I contacted the appropriate people to dispose of the suspect animal excrement. The perpetrator is still at large and may never see true justice. Our theme today: ensure your network locations are properly secured.

Exhibit A


Exhibit B


We were adding a secondary fiber connection to all our off-site locations that had two possible service providers. At one location, a third party was running the line from the pole under a parking lot to our building when he hit a gas line. The equipment operator figured it out quickly, and the nearby buildings were evacuated just moments before the leak reached the hot pizza ovens next door. The pizza shop exploded with enough force to cause structural damage to our building and other nearby ones. We didn't reopen that location for six months.

Gas leak causes explosion, massive fire in west Columbus | WBNS-10TV Columbus, Ohio | Columbus News,...

I have also seen floods, small fires, and electrical issues in data centers, but we mostly stayed online despite those.

Level 9

The first thing that comes to mind is the time a co-worker and I were working in the server room putting in a new server. My co-worker said, "Hey, look at this," and touched a light on one of the servers (not the one we were working on), and the server shut down. Mind you, this was the server in vCenter that hosted email and our main network drive. The server's power button was a sensor, not a push button. This was the first of many times this person shut down that server by "looking at the power button," according to him. I have since removed that server from vCenter and set up failover. This still haunts me to this day, as a lot of the new servers have sensors rather than an actual push button.

The second one that comes to mind: we have a remote office in Houston, TX, and if you know the area you probably know what I am talking about. Every time there is heavy rain, a hurricane, or something similar, our office floods with about a foot of water. Luckily, in this area none of the buildings have any electrical wiring low on the wall. We have started taking bets on how much rain it takes to flood the office. So far, if it's three inches in an hour, the office will start to flood. So when we have bad weather in Houston, I don't hold my breath.

Level 8

Imagine you send two UPS systems to the repair shop for battery replacement and maintenance. After a couple of weeks the UPSes came back, the maintenance bill was paid the same day, and it was not cheap. My colleague and I spent about three hours mounting the UPSes and rewiring everything as it was before. A couple of days later there was a power outage, and we weren't worried because we knew we had UPSes with brand-new batteries. Suddenly the whole IT environment crashed and servers went down... the UPSes weren't working, no signs of life. My colleague and I quickly pulled a UPS out, started troubleshooting, and guess what? The batteries had never been replaced. The service shop had sent our UPSes back in exactly the same state we sent them in: broken, leaked, and inflated batteries.

We learned the lesson the hard way, and here's the important thing: DO NOT TRUST ANYONE UNTIL YOU SEE IT WITH YOUR OWN EYES.

Level 7


Level 15

This reminds me of when we had a "lights out" secondary data center and the lights were literally out; the room was pitch black.

The emergency shutoff switch was a large red push button inside the door on the right hand side. The lights were on the left hand side.

When you walked into the room it was completely dark. I pointed out that it was only a matter of time..... Now there is a plastic cover on the red switch and a few lights are permanently on, enough to see the switches.

Never became scary, but the potential was there.

Level 13

Yep. Deafening silence.  Listening to those guys spin down is something. Also a not so gentle reminder that you're going to have to spin them back up again....

Level 9

My company dodged a major incident on a cool late-October evening several years ago. Most of the IT team had been out at a local establishment eating and enjoying the evening. At about 9:30 PM, as the last of us headed toward our cars, we observed smoke coming from the basement beneath our DC. It turned out that an old electrical connection had mistakenly been left live for decades and just happened to spark a fire that evening. We were really fortunate to catch the fire in its infancy, as the basement was full of paper document archives and the fire surely would have destroyed the DC. After about two hours we were permitted to enter the building. We stayed all night to air it out, and by 6:00 AM the decision was made to open for business that day. The only evidence of the incident for the several hundred employees was a chilly office with a slight smoke smell.


When you are spinning up a number of cabinets, you may have to do them sequentially; otherwise the combined startup load may pop breakers.

Then you need to be sure they are all available before performing your IPL, so you have drives available.
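The sequential spin-up described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the cabinet names, the delay value, and the `power_on` stub are all made up, and in real life the stub would be replaced by an actual PDU or IPMI call (e.g. `ipmitool chassis power on`).

```python
import time

# Hypothetical cabinet list -- in practice this would come from your inventory.
CABINETS = ["cab-01", "cab-02", "cab-03"]
STAGGER_SECONDS = 0.1  # realistically more like 30-60 s per cabinet

def power_on(cabinet: str) -> str:
    # Stand-in for a real power-on command against a PDU or BMC.
    return f"{cabinet}: power on issued"

def staggered_startup(cabinets, delay):
    """Power cabinets on one at a time, pausing between each so the
    inrush current can settle before the next cabinet starts drawing."""
    results = []
    for cab in cabinets:
        results.append(power_on(cab))
        time.sleep(delay)
    return results

if __name__ == "__main__":
    for line in staggered_startup(CABINETS, STAGGER_SECONDS):
        print(line)
```

The key design point is simply the pause between cabinets: it trades a longer total startup time for a lower peak load on the upstream breakers.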