One of my biggest pet peeves with AI is the word itself. Words matter, especially in IT. Whether you’re nailing down a scope of work for a project, troubleshooting an outage, or simply trying to buy a product, if everyone isn’t using the same terminology, you’re headed for trouble -- potentially even disaster. In my day-to-day work I deal with this constantly.
The wireless world is rife with fractured vernacular and colloquialisms. Words like "WAP" and "coverage" can be confusing, misleading, or even frowned upon in the industry. Even a word as simple as "survey" can cause major issues when discussing what work needs to be done. For a team of cable installers, it can mean a site visit to determine cable paths. For a wireless engineer, the same word could mean a site walk to assess the current wireless environment, or the validation of a design that has yet to be deployed (typically via what’s called an AP-on-a-stick, or APoS, survey). Often, before heading out to perform a job, I’ll set up a meeting to review the information and level-set on terminology.

What does all this confusion in Wi-Fi have to do with AI? Like I said, the word itself is what pains me.
Artificial Intelligence has a different definition depending on who you ask, but it seems to bring the same images to mind for everyone: HAL 9000, WOPR, and more recently TARS, along with every other movie AI -- a system capable of thinking like a human but with instantaneous access to information beyond our wildest dreams. Yet in the marketplace of tools dubbed AI, there isn’t a single "intelligent" system to be seen. To me, Artificial Intelligence means a general intelligence capable of simulating human thought and response, drawing on vast amounts of experience and seemingly unrelated information to make its decisions. As I posited in past posts, we’re not even close. AI today is merely making decisions for us in our networks based on the inputs of its creators (developers).
Now why does this bother me so much? What’s wrong with calling advanced machine learning "artificial intelligence"? If we keep deploying machine learning and edging toward true AI without drawing a line in the sand about what each really is and isn’t, the definitions will blur, even though the use cases, costs, and abilities are vastly different. As that happens, the machine learning systems corporations are buying now will become antiquated and obsolete, while your customers (or even your executives) scream about all the money and time invested -- now wasted. This isn’t a simple case of aging tech, like an old firewall that can’t do what the current generation can. It’s a fundamentally different issue: like comparing a first-generation Ethernet hub moving 10 Mbps across a shared medium to today’s core switches with layer 3 functions, firewall capabilities, and much more. Machine Learning is a small subset of Artificial Intelligence, able to perform only a small subset of its tasks. Calling Machine Learning "AI" is imprecise and leads to confusion.
I’d argue there are maybe three or four systems in existence (that we know of) that could be considered AI, and I promise you, no one is selling you one of them. Amazon Alexa, Apple Siri, IBM Watson, and whatever Google has dubbed its intelligent system are just about the only contenders, and the argument could still be made that even these aren’t intelligent -- they’re merely working off training sets too vast to house in a typical data center. Until we have machines with awareness of more than just the training sets provided to them, let’s stick to calling it Machine Learning.