In this final post of the series, I want to talk about some things we have not yet touched on and how they relate to what we’ve already discussed. First, let’s take a quick recap of what we’ve covered.


In post 1, we talked about the pace at which people are moving to the public cloud. You need to do your homework before moving, as it can be costly, or the platform can turn out to be wrong for the workload.


In post 2, we covered some of the positives and negatives of moving to the public cloud. We also considered that just because a workload starts in one place doesn’t mean it has to stay there for its whole lifespan.


Post 3 discussed the current wave of digital transformation and why the strategy needs top-down direction. We also started to look at agile deployment and paying off your technical debt.


Post 4 went on to look at the revolution happening in application deployment and how new technologies, such as containers and microservices, can benefit your applications.


Number 5 in the series centered on how proper planning can help you achieve a private cloud, and why spending money won’t automatically solve all your problems. Metrics can be your friend: recording them can help with decisions you may have to make in the future and prove out a solution you implement.


So, in this post, I want to look to the future at topics that may start to appear in more conversations once you have a solid hybrid cloud strategy. I’m hopeful you have figured out a method for using the various hyper-scalers, service providers, and online services, and how you will adopt them moving forward (thankfully, no cloud project is a big-bang scenario that reorganizes everything in one hit). It’s a hard thing to do, especially if you have a traditional deployment model.


A more traditional approach to IT is to deploy an application and, barring minor upgrades and support during its life span, leave it basically as it was on the day of deployment. That is why we still have companies with applications stuck on an NT4 platform. A newer, more agile way of attacking the aging waterfall model is to constantly revisit and improve areas. As previously mentioned, this is one of the reasons people who move their applications to the cloud are generally more successful: they respect the fact that there’s always room for improvement and that there are things to learn from getting it wrong.


For more traditional IT teams, or those in more heavily regulated industries, we still see people turning to “cloud seeding.” Cloud seeding is generally the step before a proof of concept (POC): identifying your best option, discussing it with your board, and perhaps researching what your competitors are up to, so that when the time comes, you can bring up a workload quickly. It can also describe part of your disaster recovery methodology. These organizations have a lot of work to do before they can start using a hybrid cloud approach.


But there are other areas that may shape your business over the coming years. While you may not initially plan to undertake any of these, having an understanding of, and an opinion on, whether they could fit your organization will help you make more strategic decisions about the evolution of your hybrid cloud strategy. Many of these go hand in hand, and hopefully they will spark some ideas.


The Internet of Things (IoT) seems to have been on the cusp of modern IT deployments for many years now, but in recent months, more and more conversations seem to include some form of IoT. I feel the term “things” doesn’t really do these clever pieces of equipment justice at times. This device-to-device communication, which is fueling IPv6 adoption, is more about problem-solving than IT. Industrial use cases have been around for many years, and we’re seeing more and more uses spread into everyday business. From monitoring crowd movements to smart devices, wearables, and connected bins, IoT is being deployed across a vast range of opportunities: spotting slow points or breakdowns in critical business processes before they happen, so we can deal with them proactively. We also need to consider the security of the devices in question, the information they gather, and where this information should live.


Mobile edge computing is simply where some form of computation happens at the edge device or devices before data is sent on to a cloud. A strong example is connected driverless cars. It’s impossible for them to transmit every bit of data back to the core, so some analysis must be done on the device; the bandwidth required for all the devices to upload, and for the cloud in question to receive it all, would be phenomenal. Take the example of trying to find a needle in a haystack. Do you move the haystack or stacks back to the farmyard and sort through them? Or do you burn the hay where it sits and carry the needle home? Given the power going into handheld devices today, it would be wise to start using some of that horsepower before sending data to a cloud or clouds for further use. This type of deployment, with its wide variety of devices, is going to take a lot of planning and evolution over time.
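To make the "burn the hay, carry the needle home" idea concrete, here is a minimal sketch (all names and thresholds are hypothetical, not from any particular edge platform) of device-side preprocessing: instead of uploading every raw sensor reading, the device summarizes a window locally and sends only a small summary plus any anomalous readings.

```python
# Hypothetical edge-side preprocessing: reduce a window of raw sensor
# readings to a compact payload before any upload to the cloud.
from statistics import mean, pstdev

def summarize_window(readings, z_threshold=3.0):
    """Summarize raw readings locally; keep only outliers (the 'needles')."""
    mu = mean(readings)
    sigma = pstdev(readings)
    anomalies = [r for r in readings
                 if sigma > 0 and abs(r - mu) / sigma > z_threshold]
    return {"count": len(readings), "mean": mu,
            "stdev": sigma, "anomalies": anomalies}

# 1,000 raw temperature readings shrink to one summary plus the outliers.
window = [20.0] * 998 + [20.5, 95.0]   # one obvious sensor spike at 95.0
payload = summarize_window(window)      # anomalies contains only 95.0
```

The device uploads `payload` (a handful of numbers) rather than 1,000 readings, which is the bandwidth saving the paragraph above describes.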


Distributed ledgers—I’m not talking about cryptocurrencies, but their use as immutable systems of record—have some very interesting use cases attached. The most interesting one I have heard of recently is in the fishing industry: showing the traceability of where the fish were caught, by whom, where they were landed, and so on, across the supply chain of this food resource. Whether in a private cloud or in the public cloud space, this can have huge implications for how we do business in the future.


Serverless (the millennial way of expressing utility computing) is the idea of pay-as-you-go for a service or function. It removes the need for you to purchase and maintain the platform that carries out the required computation. Workloads with spikes and troughs in demand can really capitalize on this deployment model, as you only pay for the code executed. So, if you average 1,000 operations a day but see a ten-times increase at Christmas, you only pay for the extra computation during that season, rather than leaving capacity sitting underutilized for the rest of the year.
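The 1,000-operations-a-day example can be put into rough numbers. This is a back-of-the-envelope sketch with entirely hypothetical prices (the function names and rates are mine, not any provider's), comparing a server provisioned for the Christmas peak against pay-per-execution billing:

```python
# Hypothetical cost comparison: always-on capacity vs. pay-per-execution.

def provisioned_cost(days, daily_server_cost):
    """A server sized for the Christmas peak runs, and is billed, all year."""
    return days * daily_server_cost

def serverless_cost(ops_by_day, cost_per_op):
    """With serverless, you pay only for operations actually executed."""
    return sum(ops_by_day) * cost_per_op

# 1,000 ops/day normally, a ten-times spike over a 30-day holiday season.
year = [1_000] * 335 + [10_000] * 30
always_on = provisioned_cost(365, daily_server_cost=5.00)  # ≈ $1,825/year
on_demand = serverless_cost(year, cost_per_op=0.001)       # ≈ $635/year
```

Under these made-up rates, the pay-per-execution model costs roughly a third of the always-on server, precisely because the peak capacity isn't billed for the other eleven months.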


Artificial intelligence (AI) is the shiny new thing—everyone wants to see what it can do for them. With various hyper-scalers offering frameworks, you can easily implement some form of machine inference within minutes. There are many subfields of AI, including machine learning, deep learning, speech processing, neural networks, and vision, to name a few. These share some common techniques but also take radically different approaches to building knowledge.


Deep learning for security could be a way to penetration test. Imagine combining it with public key infrastructure for authenticating users and a real-time inventory of devices: you could truly understand what’s going on within your environment, including unmanaged devices, and so secure it accurately.


Cyberterrorism, corporate espionage, and ransomware are on the rise, and password-cracking algorithms can easily break ten-character passwords. Using AI for white-hat hacking is gaining a lot of traction. Joseph Sirosh, CTO of AI at Microsoft, recently said, “Artificial intelligence is the opposite of natural stupidity.” By using AI to keep checks on your security model, you can improve your overall infrastructure, especially if it lives in the public cloud, where it can be discovered and presents an attack surface.


If you’re wondering about the fortifications you have built to keep intruders out, don’t forget the people who enter from 9 to 5, five days a week. There’s a scene in Eraser (1996) where the protagonists go back to where one of them worked to decrypt information she removed for whistleblowing purposes. Even though the only apparent way to read the disk is inside a highly secure vault, the designer left a back door. This highlights the human aspect that security teams must consider. I recently read about a university that’s using AI to try to spear-phish its own employees, to make sure they’re following the most recent rules and guidelines, which shows just another of the many possible uses for AI.


I understand that I’ve only briefly touched on some of the topics that will follow from hybrid cloud adoption, but it’s good to start thinking about the next big thing and how you can help move your company forward. For some of these and other use cases, a properly adopted hybrid cloud strategy will be needed for the new venture to succeed.


Maybe you feel you have already adopted an updated hybrid cloud mode of IT deployment. Maybe you’ve relaxed the dress code for your development team (hoodies optional). Maybe you’ve smartened up the web front end of your application, moving from Java to HTML5. These changes may be the start of something bigger. You may be thinking about the different tools and methodologies, because hybrid cloud is here to stay, and fighting against it is like King Canute trying to halt the incoming tide. It's inevitable, so embracing and preparing for it is the smart move. It’s the processes you implement, and how quickly you can deal with change, that will put you, your team, and your department in good stead with your board and, truth be told, make you a more successful business. It's about looking past the current challenges and deciding where you want to be in five years—and how best to deliver it. It's about keeping your options open and questioning those who say, “Because it has always been done that way, it can’t change.” And with that, I wish you luck.