So far I’ve written about my love and hatred of what AI can do for me (or cost me) as a network owner. This post is going to cover why I’m afraid to let a machine take the wheel and use all that artificial brain power to drive my network.


A big joke across any field affected by AI is that Skynet and the Terminators are coming. While I don’t share that exact fear, I do wonder about the dangers of letting an unfeeling, politically unaware machine make important operational decisions for my network. I’m also not in the camp that thinks AI will replace me. It may change my role somewhat, but nothing more in the foreseeable future. So what about AI gives me panic attacks? One word: decisions.


As I covered in previous posts, the ability to find and correlate information at scale is a huge benefit for anyone running a network. When an AI can scour all my logs and watch live traffic flows to find issues and alert on them, it’s a massive gain. The next logical step would be to allow the AI to run preconfigured actions to remedy the situation. Now you’ve got an intelligently automated network. But what if we go one step further and let the AI start making decisions about the network and servers? What if we let the AI start optimizing the network? I’m not talking about a simple "if this, then that" automation script. I’m talking about letting the AI actually pore over the data, devices, and history to make its own decisions about how the network should be configured, for the greater good. This is where things get a bit hairy for me. In the past I’ve used a somewhat morbid example of autonomous vehicles to make the point, and I think it’s a pretty good analogy.
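To make the distinction concrete, here’s a minimal sketch of the "if this, then that" style of automation I mean: a fixed condition mapped to a fixed, preconfigured action, with nothing decided on the fly. The alert names and actions are hypothetical, just for illustration.

```python
# A preconfigured remediation table: each known alert triggers one
# fixed action. The script never invents a new fix on its own.
REMEDIATIONS = {
    "interface_flapping": "bounce_the_port",
    "high_cpu": "failover_to_standby",
}

def remediate(alert: str) -> str:
    """Return the preconfigured action for a known alert.

    Anything the table doesn't cover goes to a human -- this is
    automation, not decision-making.
    """
    return REMEDIATIONS.get(alert, "page_on_call_engineer")
```

The point of the sketch is what it *can’t* do: an AI "optimizing" the network would be writing and rewriting that table itself, which is exactly the step that makes me nervous.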


Imagine an autonomous vehicle, with you as a human passenger, driving down a suburban street, humming along in what has been deemed the safest and most efficient manner. Suddenly, from behind a blind intersection, a group of children pops out on the left while a group of business people out to lunch emerges on the right. The AI quickly scans the environment and sees only three possible outcomes.


  • Option 1: Save the children by driving into the group of business people.
  • Option 2: Save the business people by driving into the children.
  • Option 3: Save them both by swerving into a nearby brick wall, ultimately resulting in the passenger’s doom.


How is this decision made? Should the developers hardcode a command to always save the passenger? What if the toll is higher next time and it’s a school bus, or even a high-profile politician about to pass laws feeding the hungry while simultaneously removing all taxes? It’s a bit of a conundrum, and like I said, it’s a bit morbid. But it does highlight some things we can’t yet teach AI: situational knowledge, context outside of its training, and politics. Yes, politics. Every office has them.


Take that scenario and translate it into our world. Your network is humming along beautifully when suddenly one of your executives attempts to connect an ancient wireless streaming audio device that will have a huge performance impact on all the workers around them. The logical thing to do in this situation is to simply deny the device’s connection. Clearly the other employees’ ability to do their jobs outweighs a CxO’s ability to stream some audio on an old device, and the greater good is obviously more important. Unless it’s not. This is what scares me about AI.
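The "unless it’s not" is the whole problem, and it’s easy to see in code. Here’s a hedged sketch of that admission decision, assuming a hypothetical device record with `standard` and `owner` fields: the "greater good" rule denies legacy radios, and the political exception gets bolted on as an allowlist a human had to know to add.

```python
def admit_device(device: dict, vip_owners: set) -> bool:
    """Decide whether a client may join the WLAN.

    Legacy 802.11b/g radios force the whole cell to slow down, so the
    'greater good' rule is to deny them -- unless the owner is on the
    politically untouchable list, which no AI learns from telemetry.
    """
    if device.get("standard") in {"802.11b", "802.11g"}:
        return device.get("owner") in vip_owners
    # Modern clients don't drag the cell down; let them in.
    return True
```

An AI optimizing purely on throughput data would never populate `vip_owners` on its own; that knowledge lives in office politics, not in the logs.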


Even though my example story may have been a bit out there, I hope it helps show why I’m afraid of what infrastructure AI will likely be poised to do in the not-too-distant future (if not already).