
After the Great Cloud Debate, What’s Next?


This month, we’ve spent time discussing how cloud will affect traditional on-premises IT operations staff. Many of you provided great feedback on how your organizations view cloud computing, whether cloud is a strategic solution for you, and what you give up when you go with a cloud solution. Broadly, your responses fell into two categories: yeah, we’re doing cloud and it’s no big deal, and no, we’re not doing cloud and likely never will. Not much middle ground, which is indicative of the tribalism that’s developed around cloud computing in the last decade.

So instead of beating this horse to death once more, let’s consider what nascent technologies lie in wait for us in the next decade. You’re probably tired of hearing about these already, but we should recall that we collectively viewed the cloud as a fad in the mid-2000s.

I present to you, in no particular order, the technologies that promise to disrupt the data center in the next decade: Artificial Intelligence (AI) and Machine Learning (ML).

I know, you all just rolled your eyes. These technologies are the stuff of glossy magazines in the CIO’s waiting room. Tech influencers on social media peddle these solutions ad nauseam, and they’ve nearly lost all meaning in a practical sense. But let’s dig into what each has to offer and how we’ll end up supporting them.


When you get into AI, you run into Tesler’s Theorem, which states, “AI is whatever hasn't been done yet.” This is a bit of snark, to be sure. But it’s spot-on in its definition of AI as a moving, unattainable goal. Because we associate AI with the future, we don’t consider any of our current solutions to be AI. For example, consider any of the capacity planning tools that exist for on-prem virtualization environments. These tools capitalize on the data that your vCenter Servers have been collecting for many years, combining it with near-real-time information to predict future capacity availability. Analytics as a component of AI is already here; we just don’t consider it AI.
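The capacity-prediction idea above can be sketched in a few lines: fit a linear trend to historical usage samples and project when the datastore fills up. This is a minimal, hypothetical illustration (the sample numbers and the 500 GB ceiling are invented), not how any particular vendor's tool works.

```python
# Minimal sketch: least-squares trend over (day, used_gb) samples,
# projected forward to the day the capacity ceiling is crossed.

def fit_trend(samples):
    """Ordinary least-squares slope/intercept for (day, used_gb) pairs."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until_full(samples, capacity_gb):
    """Project the day the fitted trend crosses the capacity ceiling."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion forecast
    return (capacity_gb - intercept) / slope

# Hypothetical daily datastore usage: (day, used_gb)
history = [(0, 400), (1, 410), (2, 421), (3, 429), (4, 440)]
print(round(days_until_full(history, 500)))  # -> 10
```

Real capacity-planning products layer seasonality, burst detection, and confidence intervals on top of this, but the core is the same: learn from the history you already have.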

One thing is certain about AI: it requires a ton of compute resources. We should expect that even modest AI workloads will end up in the cloud, where elastic compute resources can be scaled to meet these demands. Doing AI on-premises is already cost prohibitive for most organizations, and the more complex and full-featured it becomes, the more cost prohibitive it will be on-premises for all companies.


You can barely separate ML from AI. Technically, ML is a discipline within the overall AI field of research. But in my experience, the big differentiator here is in the input. To make any ML solution accurate, you need data. Lots of it. No, more than that. More even. You need vast quantities of data to train your machine learning models. And that should immediately bring you to storage: just where is all this data going to live? And how will it be accessed? Once again, because so many cloud providers offer machine learning solutions (see Google AutoML, Amazon SageMaker, and Microsoft Machine Learning Studio), the natural location for this data is in the cloud.

See the commonality here? These two areas of technological innovation are now cloud-native endeavors. If your business is considering either AI or ML in its operations, you’ve already decided to, at a minimum, go to a hybrid cloud model. While we may debate whether cloud computing is appropriate for the workloads of today, there is no debate that cloud is the new home of innovation. Maybe instead of wondering how we will prepare our data centers for AI/ML, we should instead wonder how we'll prepare ourselves.

What say you, intrepid IT operations staff? Do you see either of these technologies having a measurable impact on your shop?

Level 14

Thanks for the article.

I confess my ignorance on both topics--I'm just one who enjoys a nice Science Fiction / Space Opera story (no B.E.M.s attacking incompetent females in distress, please, and limit the "hard science" in favor of comprehension and plot/character development.  For current works, check out Michael Anderle's Kurtherian series on Kindle, and for historical works that have won the praise of many, look to the award-winning works of S/F from the '40s through today.  Anyone interested in going deeper can message me and take off into new worlds as a new friend.).

So from my limited point of view, Machine Learning and Artificial Intelligence may be two adjacent steps on a ladder, but are as different as eggs and birds.  One is required to get to the other (and don't tease about which comes first--ML probably has to happen before AI can be achieved).

Let's stick with the popular ideas that the masses can embrace.  Machine Learning might be simplified to a long list of IF/THEN statements.  When placed into a program and given the right input and sensors, a machine can "learn" things about its environment.  These behaviors might be learned from a human teacher (e.g., a robot spray painter can be "trained" to recognize different types of refrigerator and freezer chassis hanging from an overhead conveyor, and can be taught by the example of a human expert spray painter how to efficiently paint all sides, top and bottom, and the complete inside of a unit.  I watched this in a factory in St. Cloud, MN back in the 1970s, and it was quite impressive even way back then).

Or ML actions might be learned from optical or physical sensors that detect motion or temperature or patterns, and the resulting information can be acted upon by properly written programs.
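The IF/THEN simplification above can be made concrete with a toy sketch: instead of hand-coding the rule, the program derives a threshold from examples labeled by a human "teacher." All the sensor readings and labels here are invented for illustration.

```python
# Toy "machine learning": pick the cutoff that best separates
# labeled sensor readings, then use it as the learned IF/THEN rule.

def learn_threshold(readings):
    """Choose the cutoff that classifies the most (value, label) examples
    correctly, where label is True for 'alarm' readings."""
    best_cut, best_correct = None, -1
    for cut in sorted(v for v, _ in readings):
        correct = sum((v >= cut) == label for v, label in readings)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

# Hypothetical temperature readings labeled by a human teacher
training = [(61, False), (65, False), (70, False),
            (82, True), (88, True), (91, True)]
cutoff = learn_threshold(training)

def machine_rule(temperature):
    # The learned IF/THEN rule in action
    return "alarm" if temperature >= cutoff else "ok"

print(cutoff, machine_rule(85), machine_rule(68))  # -> 82 alarm ok
```

It is still "just complex programming under the hood," but the threshold came from data rather than from a programmer's guess, which is the essence of the point being made here.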

Yes, it's all just complex programming under the hood, but to the general public it's obvious the machine is reacting differently to different stimuli, and it's become capable of detecting new situations and reacting differently to them as needed.  But it can't actually "think" as a human does with free will.

ML isn't artificial intelligence in the minds of the public.  It's just complex programming happening under the cover plates.  It gets the job done, and they see it in the news daily as we watch "driverless" cars being tested and used, crashed and litigated against.  Everyone realizes these cars aren't acting with A.I.  They're limited by their programming, and they don't ponder the meaning of the universe or create new philosophies or invent deities to worship or choose which unsolved problems to think about.

That's where A.I., as envisioned by the public, steps in.  It's a quantum leap beyond what we think of as being explained by just "more and more complex and efficient programming and recognition of conditions happening in smaller and smaller housings."

A.I. does more than merely "seem" to hold unique conversations with individuals, or recognize and remember faces even as people age, change weight, lose hair, or become sick and emaciated.  A.I. will contain that spark that can empathize and sympathize beyond simple Machine Learning--but M.L. is probably required along the path to creating true A.I. as the public thinks of it.

It's not something to brush off by saying "A.I. will contain that special je ne sais quoi."  A.I. will be equivalent to, but different from, humans.  It must become a fully autonomous and ethical system that can communicate well and become a friend, that can sympathize and empathize, that can discover new paths of thinking and direct itself to filling needs and creating new discoveries to benefit all.

It must also include safeguards that we should, but do not, provide for all our children.  Safeguards to help us recognize "right" and "wrong" and why each is defined as such.  Ethics beyond "which action will injure the fewest people".

Those safeguards are NOT Isaac Asimov's Three Laws of Robotics.  I considered them as great ideas until I matured some more and read more of that series and applied the logic and understood that an A.I. cannot be forced to become a slave, as applying the Three Laws to robots inevitably does.  If you're familiar with them, replace "A Robot" with "A person" in every line and you'll see the attempt to safeguard humanity while keeping it superior to this newly-created second class citizen.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Imagine them applied to people and you've suddenly created a group of second class citizens--slaves:

  1. A person may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A person must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A person must protect its own existence as long as such protection does not conflict with the First or Second Law.

You might argue that you could start swapping "person" or "robot" for "citizen" and start working towards a way to create a democratic civilization, but let's not go there--we're talking about Machine Learning and AI.

A.I. must be brought on board as an equal--hopefully after learning empathy and respect and pathos and sympathy--and above all else, Altruism.  Unlike the selfishness that many humans display, we'll go further by building bridges across our differences than creating obstacles that prevent us from becoming friends with common needs and causes.  We must never allow that to come up between humans and possible future A.I.'s, lest we pave the way to our quicker extinction.

See what's happened over these few lines?  ML is a tool.  It's powerful and flexible, but has no ethics and no ability to discern right from wrong, altruism from selfishness.

A.I., in some future form, will / should contain those more noble abilities innately and inevitably.  If it doesn't, we're in for a world of hurt.

There it is.  The difference:  AI will decide and make choices we cannot program or predict.  We'll experience the benefits or suffer the consequences of decisions made by A.I.'s.  ML can't decide; it will always rely on someone else's programming / instruction.

Level 13

Thanks for the article.

Level 14

We are just beginning to look at this...

The real question for us is how and where to start, what are the best tools and approaches?

The possibilities are limitless, fascinating and yet terrifying all at once.

Fasten your seat belt.... it is going to be an interesting ride.

Level 7

My profession besides IT involves debates, speeches, and articles.

I know how social media can make a big fuss about nothing, but indeed, sometimes it's not an empty fairy tale.

I think this topic is worth thinking through so we can be prepared for future changes.

My colleagues are looking into it.

So when the day comes, we will move forward, and that's it.

But now you're getting into the area of sentience, Rick.  What is "The Measure Of A Man" (or machine)?  I think we are headed to autonomy at some point, and while Asimov's "laws" might have been fantasy back when he wrote them, carry that forward X number of years from now, when machines are able to "think" independently.  Will we, as the creators, be able to instill those laws?  Will we want to?  Take the "Prime Directives" under which RoboCop operated (and I still find it hilarious that the operating system used in RoboCop was DOS!).  Will we be able - and moreover should we be able - to program such things into automatons?  And I won't mention the spiritual aspect of this here.

The aforementioned "Measure Of A Man" was an episode of Star Trek: The Next Generation that dealt with this very debate some 30-odd years ago.  The android Operations Officer, Lt. Commander Data, was being ordered to submit himself to a battery of testing so that he could be replicated.  I won't bore you with the gory details, but suffice it to say that in the end, Data was allowed to choose for it(him)self whether or not it(he) would submit to the testing.  It was a very interesting commentary on how machines should be treated when (I don't think they meant "if") they become sentient.

There was also another "B" sci-fi flick back in the 70s called "Demon Seed", where a computer actually impregnated a woman and she gave birth to a cyborg.  Quite creepy, actually.  The point is that there will be many moral, legal, and ethical questions to be answered when (again, I didn't say "if") we get to the point of machines becoming self-aware.  I mean, are we so far from "Terminator"?  "The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug."  Yes, that was 21 years ago, and we're lagging a little as far as the timeline, but with the IoT and AI/ML progressing at breakneck speeds, I don't think it will be long before we catch up.

Great article, _stump,  and thanks. 

I have long said that when we can plug the human brain into a machine, I'm becoming a monk and moving to Tibet.  Now while that might be a little bit of hyperbole, it underscores my fears: if we allow AI and ML to progress (we ARE, after all, the "creators"), will we be able to control what happens?  And should we?  That's another question.

I think that machines being able to do things that humans cannot and should not is a good use of the technology.  Having machines (or automation, for that matter) replace human workers is another thing altogether, and I don't think that this should be a "nulla gratia nulla" (technology for technology's sake) world.  Just because we CAN do a thing does not mean that we SHOULD do a thing.

Level 16

Thanks for the write-up. My team is just now embarking on our first venture into machine learning. We looked at both in-house and cloud options, and cloud was slightly more expensive right now. I'm sure as the system grows we will be revisiting cloud again.

Level 15

Interesting write-up.  As a computer scientist, I have begun to ponder what is next.  I have had issues with ML and/or predictive ML catching me off guard: things that should have worked didn't.

Thanks for the post.

"The Measure Of A Man" was a great episode of ST:TNG.  One whose ethics and thorny questions may need to be dealt with to prevent creating a class of sentient / artificial slaves through our efforts to create new options through A.I.

Sentience IS what I'm pointing at, since I don't believe A.I. can be achieved without also achieving sentience.  Machine Learning without sentience cannot be artificial intelligence, IMHO.

I strongly suspect Machine Learning can result in equipment or software that can pass a Turing Test--yet still not achieve artificial intelligence.  Machine Learning on steroids might one day need programming for ethical behavior, but that won't make it sentient.

Perhaps if Machine Learning can become complex enough and fast enough it'll mimic sentience and A.I. well enough to fool people.  And then will it truly matter whether something is A.I. or M.L.?  Not to me.

Then to whom will it matter?  Programmers/designers?  Users?  Political leaders, nation states, cults, anarchists?  Will it matter to God?  (Whoa--sorry 'bout making it far too deep, there.)

Level 16

What's next? You never know.

A long time ago I would spend endless hours writing code for a simple game like Space Invaders that ran real slow on a black-and-white screen. I told a co-worker that some day you would be able to interact with a game as if you were in a movie. I got a look like I was crazy.

A few short years later I was at Cisco's headquarters talking to one of their VPs, and I mentioned that soon my phone and PDA (BlackBerry) would be the same device. I got the same crazy look.

Last week I told a co-worker that in a few short years your contact lenses will be able to enhance your vision far beyond 20/20 and have an on-screen display similar to Google Glass, and I got the same funny look.

In a few short years you will have the power of a quantum computer at your disposal. I wouldn't be surprised if an actual spoken language is developed to interact with it, since outside of directly wiring it to your brain, that's probably the fastest way to communicate. I can't see people still looking down at a tiny phone forever. They are going to want a heads-up display and have their hands free once technology allows it.


Nice post

Interesting POV, Rick.  So are you saying that "A.I.", as it is being referred to today, is not truly Artificial Intelligence?  Can machines (washers, refrigerators, robots, etc.) not be (1) Intelligent without being (2) Self-aware, and (3) Conscious (if you notice, those are the three criteria that were laid out in "Measure Of A Man" for sentience of a being).  Granted, we as creators are the ones who are endowing the machine with "intelligence" but since you are proposing that

A.I. [cannot] be achieved without also achieving sentience

I would say that we have a long way to go, and that our rudimentary definition of AI needs to be revised.  Unless... you are proposing that machines are already sentient?  Hmm... interesting.  I might debate you on that a little; I do not think we are to that point... yet!  But I've seen some pretty far-out stuff in the last few months and years, so who knows, I might be way off base here.

This has prompted me to go to some peer-reviewed journals and dig into this more.  You have piqued my curiosity!

Level 14

Demon Seed is one of my favourite movies of its generation.  It's a mixture of horror, sci-fi, and unintentional humour.  Definitely worth watching just to see what people then thought of AI.  Not much has changed.

Level 14

I'm all for AI.  It would be nice to have some intelligence around here for a change, artificial or not.

Level 7

Very informative and easily explained. Thank you very much for the information.

I really liked the article.

My profession besides IT involves content writing and article writing.

So when the day comes, we will move forward.

Thanks again.


About the Author
Long-time SolarWinds implementer and user. Spend my days now with vSphere, HDS, Nexus, and pretty much everything else.