So, back in my first IT job, the hospital I worked at was expanding and wanted to move the Education and Training department out of the main building into a "mobile unit" (a.k.a. a trailer). It needed networking - and this was before WiFi (late '90s) - so it had to be physically connected to the network. My boss and I decided to dig a trench ourselves, bury copper Cat-5 Ethernet cable in the ground, and connect the trailer to the computer room. We weren't completely ignorant - we did terminate both ends on lightning protection. It worked great for a while ...
The hospital had a proprietary hospital information system (HIS) built on a hierarchical database (versus a relational one). We had four CPUs - four mid-range Data General systems, each with its own CLARiiON disk array, striped and mirrored - RAID 1+0 (10). The way the system was designed, any of the CPUs could take over the load of the others if necessary (degraded) and also run their disk arrays. Each machine (A-D) had a console directly connected to an I/O controller. Each night, an automated process would take the mirrors offline, back them up to tape, and then bring them back online and re-synchronize them. If anything ever interrupted the process, someone had to log into the console and manually re-synchronize the mirrors. For security, the backup and disk operations could only be managed from a directly connected console - and this was before remote desktop stuff (Windows 3.11 back then).
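That nightly split-mirror cycle can be sketched as a tiny state machine. To be clear, none of the names below are real Data General or CLARiiON commands - this is just an illustration of the sequence, including the failure mode that bit us:

```python
# Illustrative sketch of a split-mirror backup cycle.
# The class and function names are made up for this example; they model
# the process described above, not any actual DG/CLARiiON tooling.

class MirrorSet:
    def __init__(self, name):
        self.name = name
        self.synced = True          # mirror attached, in sync with primary

    def split(self):
        self.synced = False         # detach mirror; primary keeps serving I/O

    def resync(self):
        self.synced = True          # re-attach; copy changed blocks back


def nightly_backup(mirror, backup_to_tape):
    """Split the mirror, dump it to tape, then re-attach it.

    If anything fails mid-cycle, the mirror is left split -- exactly
    the state that required a manual resync from the console.
    """
    mirror.split()
    try:
        backup_to_tape(mirror)
    except IOError:
        return False                # mirror stays split; operator must resync
    mirror.resync()
    return True


def failing_tape(mirror):
    raise IOError("tape write error")


a = MirrorSet("A")
ok = nightly_backup(a, failing_tape)
print(ok, a.synced)                 # prints: False False
```

The key point the sketch makes: the error path leaves the mirror detached, and the only way back was a console that had just been fried.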
On a Thursday, there was a lightning strike, and the lightning came in on the buried Ethernet, jumped the protection, and burned out equipment on both ends. In the trailer, some ports were fried. In the computer room, one of the affected devices was the I/O controller (the console) on one of the processors - the A machine. The A machine continued to operate, but without a console. Vendor support was called; they had to order a card and would be out on Friday to work on it.
That night, the nightly backups automatically ran. The A machine's mirror was taken offline as normal - but there was an error, so it did not automatically re-synchronize. Manual re-synchronization wouldn't work, because the console was inoperable. Still OK - the primary set of drives was still running the CPU.
On Friday, the vendor tech arrived, and his process was to move the drives from A over to B, which he did - now B was running both the A drives and the B drives, and the hierarchical database was functioning normally, slightly degraded. After the tech replaced the card, he went to move the drives from B back to A - and one of the drives failed.
Now the A drives were down. And the system, because of the way it was designed, was down. Now we needed the backup tapes. But by this time it was after 5 PM on a Friday, and the tapes were at the off-site storage location. The off-site storage was very secure, but really hadn't been thought through for 24/7 access (this was before Iron Mountain got big and ubiquitous). Off-site storage was at a bank - in the '90s, before banks were open on weekends. The hospital CEO called the bank president, but even he couldn't open the bank to get the tapes - because of the time-lock vault, which wouldn't open until Monday morning.
So, on Monday, we got the tapes and restored the whole hospital information system - back to the Wednesday of the week before. FOUR days of hospital data had to be manually re-entered from paper records (back then, most records were written on paper first and then keyed into the system).
Figuring all the man-hours across IT, Nursing, and the business units involved (Medical Records, Lab, Radiology, Business Office, etc.), it would have been A LOT cheaper to hire cabling experts in the first place - they would have recommended fiber-optic cable, which doesn't conduct lightning, and trenched and installed it properly.
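A back-of-envelope comparison makes the point. Every number below is an assumption I'm inventing for illustration - the real incident costs were never tallied for me - but even conservative guesses tell the story:

```python
# Back-of-envelope comparison. All figures are assumptions for
# illustration, not numbers from the actual incident.

STAFF_REENTERING   = 40    # assumed: nurses, lab, radiology, billing, IT staff
HOURS_PER_PERSON   = 20    # assumed: catch-up data entry spread over days
LOADED_HOURLY_RATE = 35    # assumed: late-'90s fully loaded cost, USD/hour

reentry_cost = STAFF_REENTERING * HOURS_PER_PERSON * LOADED_HOURLY_RATE

FIBER_INSTALL_QUOTE = 10_000   # assumed: contractor trench + fiber + termination

print(f"Re-entry labor: ${reentry_cost:,}")        # prints: Re-entry labor: $28,000
print(f"Fiber install:  ${FIBER_INSTALL_QUOTE:,}")
```

And that ignores the clinical risk of a hospital running on paper for a weekend, which is the part no spreadsheet captures.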
I learned to at least make a recommendation - do it right the first time - and let the business make the pick-two decision between good, cheap, and fast.