
Got the Worst IT Horror Story Ever? Maybe an iPad will Ease Your Pain.

If you work in IT, chances are you have your very own IT horror story. Who are we kidding? Chances are you have several IT horror stories. We want your worst—the one your IT buddies tell their IT buddies. We’ll compare the stories we receive and post the three most horrible in our October newsletter, and we’ll post the rest of them in the blog throughout the year. Don’t worry; we’ll change the names to protect the guilty (and the companies that employ them).

And get this: the sender of the absolutely most frightening IT horror story gets an iPad 3. Sure takes the sting out of the trauma, doesn’t it?


5 comments

Why are you running a competition to reward failure rather than one aimed at ending it? Is this beyond IT?

CERN finds the Higgs while Facebook seriously dents its own worth through an inability to manage basic mobile user connectivity. How can this be the case in 2012?

Software functionality has remained stagnant for 15 years. Why & how is this possible under the influence of Moore’s Law etc? Corporates gain major IT contracts & deliver NOTHING but just say the ‘computer’ word and it’s all written off.

If you do run the alternative competition, then I can win. It’s easy: reverse the priority read between source code and data, and all the self-indulgent industry hubris is exposed as the counter-productive bullshit it is.

I started a job at a rural hospital in California in the fall of 2009. On my first day we started getting tickets that the clocks on almost all of the PCs were off by an hour. It didn’t take me long to figure out that the PCs had not been patched in quite some time. After a little sleuthing I discovered that none of the 250+ XP PCs had been patched beyond Service Pack 1, and many didn’t even have that. I asked the current sysadmin what was going on, and he said he had set up a WSUS server to take care of everything and that the PCs should be patched. I tried to log into the WSUS server and it was dead (we eventually figured out it was a failed motherboard, but that’s another story).

I was starting to wonder how someone could fail to notice that a WSUS server was down and that the PCs had not been patched for several years, but first I had to figure out how to fix the issue. No problem, I thought: we had spare room on another server, so I would just rebuild WSUS there. I informed the IT director of what was happening and what I was going to do to remedy the situation. I built a new WSUS server and started downloading all of the needed patches through our only connection to the Internet (a single T1 line). I let it run overnight, figuring that all of the patches would be downloaded by the time I got in the next morning.

The next day I came in and we were completely offline. Nobody seemed to know what was going on. Most pings to hosts outside our network timed out, and the few that did come back had extremely high latency. I soon realized to my horror that all 250+ PCs were trying to download several years’ worth of patches, along with several service packs, from Microsoft through our single T1 line. I immediately asked the sysadmin what he had changed, and he said he had deleted the GPO for the previous WSUS server since it wasn’t working. He didn’t realize that without that policy, the PCs would fall back to getting their patches directly from Microsoft.

I quickly created a new GPO that pointed to the new WSUS and we were back online later that morning. This was only one of many interesting IT experiences at this hospital.
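For anyone wondering what that GPO actually controls: a WSUS policy ultimately boils down to a handful of registry values on each client, roughly like the sketch below (the server address is hypothetical). When the policy is removed, these values disappear and the Windows Update client reverts to its default source, Microsoft’s own update servers.

```
Windows Registry Editor Version 5.00

; Point the Windows Update client at an internal WSUS server.
; http://wsus.example.local:8530 is a hypothetical address.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus.example.local:8530"
"WUStatusServer"="http://wsus.example.local:8530"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
; 1 = use the WUServer above; if these values are deleted,
; clients fall back to downloading directly from Microsoft.
"UseWUServer"=dword:00000001
```

That fallback is exactly what saturated the T1 line: 250+ clients, no longer pointed at the local WSUS cache, each pulling years of patches over the same WAN link.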

I started work at a company where at least 80% of the computers had no antivirus software, and even the company web site was infected. I cleaned it all up and built a new web site. At one point, however, I upgraded their main switch to a gigabit managed switch and all hell broke loose. To make a long story short, their Category 5 cabling had the pairs terminated on the wrong pins. Since 10BASE-T and 100BASE-TX use only two of the four pairs while gigabit Ethernet uses all four, the cabling worked at 10 Mbps, generally worked at 100 Mbps, but failed completely at gigabit speeds. The solution was to replace the wiring where possible, and where the cabling could not be replaced, to re-terminate the connectors at both ends.

Before long the new web site was comfortably beating all of their competitors’, everything was running at gigabit speeds, and the boss said it was the neatest IT setup he had ever seen. Then he decided to hire someone to improve IT further; the new hire’s first job was to collect every password, and then he quit.

Let’s go back to 2013. I had just returned from the military with my IT experience and was ready to take on the corporate world. I was hired as a Level 1 Incident Response Technician, working nights in the US to support the EAME locations during their day shifts. I received a call around 3 AM from a very irate employee. He was at a vendor’s site in a remote location (literally in Siberia), in the middle of processing a contract worth over 10 million dollars, and the VPN client had stopped working on his PC.

Here’s what had happened: the new systems engineer at HQ had accidentally deployed the new VPN client to EVERY SINGLE EMPLOYEE who was on the network at the time. He thought he was in the dev environment; it turned out he was not. Part of the deployment script was configured to wipe out the old client’s registry entries, so it blew away those registry keys as well. Could you imagine if every single affected employee had to send in their laptop to be rebuilt? Well, that is exactly what happened. Thankfully I wasn’t on that team. I don’t think the systems engineer was employed there after that incident.