Geek Speak

10 Posts authored by: ian0x0r

Old Tech, New Name?

Posted by ian0x0r Jan 20, 2020

The marketing machines of today often paint new technologies to suggest they’re the best thing since sliced bread. Sometimes though, the new products are just a rehash of an existing technology. In this blog post, I’ll look at some of these.


As some of you may know, my tech background is heavily focused around virtualization and the associated hardware and software products. With this in mind, this post will have a slant towards those types of products.


One of the recent technology trends I have seen cropping up is something called dHCI or disaggregated hyperconverged infrastructure. I mean, what is that? If you break it down to its core components, it’s nothing more than separate switching, compute, and storage. Why is this so familiar? Oh yeah—it’s called converged infrastructure. There’s nothing HCI about it. HCI is the convergence of storage and compute onto a single chassis. To me, it’s like going to a hipster café and asking for a hyperconverged sandwich. You expect a ready-to-eat, turnkey sandwich but instead, you receive a disassembled sandwich you have to construct yourself and then somehow it’s better than the thing it was trying to be in the first place: a sandwich. No thanks. If you dig a little deeper, the secret sauce to dHCI is the lifecycle management software overlaying the converged infrastructure but hey, not everyone wants secret sauce with their sandwich.


If you take this a step further and label these types of components as cloud computing, nothing has really changed. One could argue true cloud computing is the ability to self-provision workloads, but rarely does a product labeled as cloud computing deliver those results, especially private clouds.


An interesting term I came across as a future technology trend is distributed cloud.¹ This sounds an awful lot like hybrid cloud to me. Distributed cloud is when public cloud service offerings are moved into private data centers on dedicated hardware to give a public cloud-like experience locally. One could argue this already happens the other way around with a hybrid cloud. Technologies like VMware on AWS (or any public cloud for that matter) make this happen today.


What about containers? Containers have held the media’s attention for the last few years now as a new way to package and deliver a standardized application portable across environments. The concept of containers isn’t new, though. Docker arguably brought containers to the masses but if you look at this excellent article by Ell Marquez on the history of containers, we can see its roots go all the way back to the mainframe era of the late 70s and 80s.


The terminology used by data protection companies to describe their products also grinds my gears. Selling technology on the strength of being immutable. Immutable, meaning it cannot be changed once it has been committed to media. Err, WORM media, anyone? This technology has existed for years on tape and hard drives. Don’t try and sell it as a new thing.


While this may seem a bit ranty, if you’re in the industry, you can probably guess which companies I’m referring to with my remarks. What I’m hoping to highlight, though, is that not everything is new and shiny; some of it is just wrapped up in hype or clever marketing.


I’d love to hear your thoughts on this, if you think I’m right or wrong, and if you can think of any examples of old tech, new name.


¹Source: CRN


OK, so the title is hardly original, apologies. But it does highlight that the buzz around Kubernetes is still out there, showing no signs of going away anytime soon.


Let’s start with a description of what Kubernetes is:


Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.¹


Let me add my disclaimer here. I’ve never used Kubernetes or personally had a use case for it. I have an idea of what it is and its origins (do I need a Borg meme as well?) but not much more.


A bit of background on myself: I’m predominantly an IT operations person, working for a Value-Added Reseller (VAR) designing and deploying VMware-based infrastructures. The organizations I work with are typically 100 – 1,000 seats in size across many vertical markets. I’d be naive to think none of those organizations are thinking about using containerization and orchestration technology, but genuinely none of them currently are.


Is It Really a Thing?

In the past 24 months, I’ve attended many trade shows and events, usually focused around VMware technology, and the question is always asked: “Who is using containers?” The percentage of hands going up is always less than 10%. Is it just the audience type, or is this a true representation of container adoption?


Flip it around and when I go to an AWS or Azure user group event, it’s the focus and topic of conversation: containers and Kubernetes. So, who are the people at these user groups? Predominantly the developers! Different audiences, different answers.


I work with one of the biggest Tier-1 Microsoft CSP distributors in the U.K. Their statistics on Azure consumption by type of resource are enlightening. 49% of billable Azure resources are virtual machines, and around 30% is object storage consumption. There was a small slice of the pie at 7% for miscellaneous services, including AKS (Azure Kubernetes Service). This figure aligns with my first observation at trade events, where less than 10% of people in the room were using containers. I don’t know, though, whether those virtual machines are running container workloads.


Is There a Right Way?

This brings us to the question and part of the reason I wrote this article: is there a right way to deploy containers and Kubernetes? Every public cloud has its own interpretation—Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, you get the idea. Each one has its own little nuances capable of breaking the inherent idea behind containers: portability. Moving from one cloud to another, the application stack isn’t necessarily going to work right away.


Anyways, the interesting piece for me, because of my VMware background, is Project Pacific. Essentially, VMware has gone all-in embracing Kubernetes by making it part of the vSphere control plane. IT Ops can manage a Kubernetes application container in the same way they can a virtual machine, and developers can consume Kubernetes in the same way they can elsewhere. It’s a win/win situation. And with VMware taking another step toward becoming the management plane for everything (think public cloud, on-premises infrastructure, and the software-defined data center), Kubernetes moves ever closer to my wheelhouse.


No matter where you move the workload, if VMware is part of the management and control plane, then user experience should be the same, allowing for true workload mobility.



Two things for me.


1. Now more than ever seems like the right time to look at Kubernetes, containerization, and everything it brings.

2. I’d love to know if my observations on containers and Kubernetes adoption are a common theme or if I’m living with my head buried in the sand. Please comment below.


¹ Kubernetes description, from the official Kubernetes documentation, “What is Kubernetes?”


In a roundabout continuation to one of my previous blog posts, Did Microsoft Kill Perpetual Licensing, I’m going to look at the basic steps required for setting up an Office 365 tenant. There are a few ways you can purchase Office 365. There’s the good old credit card on a monthly subscription, bought directly from Microsoft. This is the most expensive way to buy Office 365 as you will be paying Recommended Retail Price (RRP). Then there are purchasing models typically bought from an IT reseller. The reseller typically either helps add the subscription to an existing Microsoft Enterprise agreement with licenses available on a Microsoft volume license portal, or more likely the reseller will be what’s known as a cloud solution provider (CSP). CSP licensing can be bought on monthly or yearly commitments, with prices lower than RRP. The CSP model offers great flexibility as it’s easy to increase or decrease license consumption on a monthly basis, thus you’re never overpaying for your Office 365 investment.


Now you may be reading this wondering what on earth a Microsoft Office 365 tenant is. Don’t worry—you’re not alone. Although Office 365 adoption is growing quickly, I saw a statistic suggesting something like only one in five IT users use Office 365. That’s still only 20% market saturation.


In basic terms, an Office 365 tenant is the focal point from where you manage all the services and features of the Office 365 package. When creating an Office 365 tenant, you need a name for the tenant, which will make up a domain name like yourtenantname.onmicrosoft.com. At a minimum, the portal will incorporate a version of Azure Active Directory for user management. Once licenses have been assigned to the users, options are open for using services like Exchange Online for email, SharePoint Online for intranet and document library-type services, OneDrive for users’ personal document storage, and one of Microsoft’s key plays at the moment, Teams. Teams is the central communication and content collaboration platform bringing many of the Office 365 components together into one place.


Cool, now what?

Setting up your own Office 365 portal may seem like a daunting task, but it doesn’t have to be. I’ll walk you through the basics below.


At this point, I must point out I work for a cloud solution provider, so I’ve already taken the first step in creating the Microsoft tenant. You can do this any way you want—I outlined the methods of payment above.


However you arrive at this point, you’ll end up with a login name like user@yourtenantname.onmicrosoft.com. You need this to access the O365 portal at portal.office.com.


When you first log in, you’ll see something like this. Click on Admin.


The admin panel looks like this when you first log in.


Domain Verification

Select Setup and add a domain. Here we’ll associate your mail domain with the O365 portal.


I’m going to add and verify a domain I own. This O365 portal will be deleted before this article is published.



At this point, you must prove you own the domain. My domain is hosted with 123-reg. I could log in to 123-reg from the O365 portal and it would sort out the domain verification for me, but instead I’ll manually add a TXT record to show what’s involved.


To verify the domain, I have to add a TXT DNS record with the value specified below.


On the 123-reg portal, it looks like this. It’ll be similar for your DNS hosting provider.


Once the DNS record has been added, click Verify the Domain, and with any luck, you should see a congratulations message.

We now have a new verified domain.
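Behind the Verify button, the check is conceptually simple: Microsoft looks up your domain’s TXT records and confirms the token it gave you is among them. Here’s a minimal Python sketch of that idea (the token value is made up, and in practice the records would come from a DNS resolver rather than a hard-coded list):

```python
def domain_is_verified(txt_records, expected_token):
    """Return True if the expected verification token appears in the domain's TXT records."""
    return any(record.strip() == expected_token for record in txt_records)

# Hypothetical TXT records, as a DNS lookup for the domain might return them
records = [
    "v=spf1 include:spf.protection.outlook.com -all",
    "MS=ms12345678",  # made-up verification token Microsoft asked us to publish
]

print(domain_is_verified(records, "MS=ms12345678"))  # True once the record has propagated
print(domain_is_verified(records, "MS=ms87654321"))  # False: wrong token
```

If verification fails right after adding the record, it’s usually just DNS propagation delay—wait a little while and click Verify again.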


User Creation

There are a couple of ways to create users in an O365 portal. They can be synchronized from an external source like an on-premises Active Directory, or they can be manually created on the O365 portal. I’ll show the latter here.


Click Users, Active users, Add a user. Notice the newly verified domain is now an option for the domain suffix.


Fill in the user's name details.

If you have any licenses available, assign them here. I didn’t have any licenses available for this environment.


Assign user permissions.


And click Finish.

You now have a new user.



Further Config

I’ve shared some screenshots from my organization’s setup below. Once you have some licenses available in your O365 portal including Exchange Online, then there’s some further DNS configuration to put in place. It’s the same idea as above when verifying the domain, but the settings below are used to configure your domain to route mail via O365, amongst other things.
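To give an idea of the shape of those records, the mail-related entries Microsoft asks for follow a standard pattern, something like the below (using a placeholder domain; the exact MX target for your tenant is shown in the portal):

```
yourdomain.com.               MX     0  yourdomain-com.mail.protection.outlook.com.
yourdomain.com.               TXT    "v=spf1 include:spf.protection.outlook.com -all"
autodiscover.yourdomain.com.  CNAME  autodiscover.outlook.com.
```

The MX record routes inbound mail via Exchange Online, the SPF TXT record authorizes Microsoft’s servers to send mail for your domain, and the autodiscover CNAME lets Outlook clients find their mailbox settings automatically.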


Once all that’s in place, you’ll start to see some usage statistics.



In this how-to guide, we’ve created a basic Office 365 portal, assigned a domain to it, and created some users. We’ve also seen how to configure DNS to allow mail to route to your Office 365 portal.


Although this will get you to a point of having a functioning Office 365 portal with email, I would stress that you continue to configure and lock down the portal. Security and data protection are of paramount importance. Look at security offerings from Microsoft or other third-party solutions.


If you’d like any further clarification on any aspect of this article, please comment below and I’ll aim to get back to you.

Continuing from my previous blog post, Meh, CapEx, I’m going to take a cynical look at how and why Microsoft has killed its perpetual licensing model. Now don’t get me wrong, it’s not just Microsoft – other vendors have done the same. I think a lot of folks in IT can say they use at least one Microsoft product, so it’s easily relatable.



Let’s start with the poster child for SaaS done right: Office 365. Office 365 isn’t merely a set of desktop applications like Word and Excel with such cool features as Clippy anymore.


No, it’s a full suite of services such as email, content collaboration, instant messaging, unified communications, and many more, but you already knew that, right? With a base of 180 million active users as of Q3 2019¹ and counting, it’d be silly for Microsoft not to invest their time and effort into developing the O365 platform. Traditional on-premises apps, though, are lagging in feature parity or have in some cases changed in a way that, to me at least, seems like a blatant move to push people towards Office 365. Let’s look at the minimum hardware requirements for Exchange 2019, for example: 128GB of memory required for the mailbox server role². ONE HUNDRED AND TWENTY-EIGHT! That’s a 16-times increase over Exchange 2016³. What’s that about then?


To me, it seems like a move to guide people down the path of O365. People without the infrastructure to deploy Exchange 2019 likely have a small enough mail footprint to easily move to O365.


Like I said in my Meh, CapEx blog post, it’s the extras bundled in with the OpEx model that make it even more attractive. Microsoft Teams is one such example of a great tool that comes with O365 and O365 only. Its predecessors, Skype for Business and Lync on-premises, are dead.


Now, what about Microsoft Azure? Check out this snippet from the updated licensing terms as of October 1, 2019:

Beginning October 1, 2019, on-premises licenses purchased without Software Assurance and mobility rights cannot be deployed with dedicated hosted cloud services offered by the following public cloud providers: Microsoft, Alibaba, Amazon (including VMware Cloud on AWS), and Google.

So basically, no more perpetual license on one of the big public cloud providers for you, Mr./Mrs. Customer.


Does this affect you? I’d love to know.


I saw some stats from one of the largest Microsoft distributors in the U.K.: 49% of all deployed workloads in Azure that are part of a CSP subscription they’ve sold are virtual machines. I’d be astonished if this license change doesn’t affect a few of those customers.


Wrap It Up

In my cynical view, Microsoft is leading you down a path where subscription licensing is more favorable. You only get the cool stuff with a subscription license, while traditional on-premises services are being made to look less favorable one way or another. And guess what—they were usually licensed with a perpetual license.


It’s not all doom and gloom though. Moving to services like O365 also removes the headache of having to manage services like Exchange and SharePoint. But you must keep on paying, every month, to continue to use those services.



¹ Microsoft third quarter earnings call transcript, page 3


² Exchange 2019 system requirements


³ Exchange 2016 system requirements


⁴ Source, Microsoft licensing terms for dedicated cloud


Meh, CapEx

Posted by ian0x0r Nov 7, 2019

Do you remember the good old days when you saved up your hard-earned cash to buy the shiny thing you’d been longing for? You know, like that Betamax player, the TV with the lovely wood veneer surround, or a Chevrolet Corvair? What happened to those days?


I remember as a young lad growing up in the U.K., there were shops like Radio Rentals or Rumbelows where you could get something “On Tick.” ¹ It didn’t seem like the done thing at the time; it almost seemed like a dirty word. “Hey, have you heard, Sheila got her fridge-freezer on tick.” The look of shock on people’s faces!


Now fast forward 25 years and here we are—almost everything is available on a buy now, pay later or rental model. Want a way to access films quickly? Here’s a Netflix subscription. Need something to watch them on? Hey, here’s a 100” Ultra HD 8K, curved, HDR, smart TV with built-in Freeview and Freesat, yours for 60 low monthly payments of £200. Need a new set of wheels? No problem. Something like 82% of all new cars in the U.K. are purchased with a PCP payment plan. This is people reaching for the shiny thing and being able to get it when previously they couldn’t or shouldn’t.


The OpEx Model


So, what’s my point and how is this relevant to the IT landscape?


First and foremost, rental models can work out more profitable for the vendor selling the goods. Slap a little interest premium on the payments as the cost is spread over a longer term. Rental income is more predictable as well; annuity is the name of the game.


Incentivizing the rental model makes it look more attractive. Look at the cost of an iPhone, for example. Each new release gets more and more expensive. My first four cars combined cost less than a new iPhone 11 Pro. But Apple (and others) are smart. Pay your monthly sum of cash and every 12 months, they’ll give you a new phone. And oh heck, if you drop it and smash the screen, no problem—it’s covered in the monthly cost. You don’t get that service if you buy an iPhone upfront with your hard-earned cash. It makes customers sticky, as in they’re less likely to move elsewhere.


Now let’s look at IT vendors. Microsoft, Amazon, Dell, HPE, you name it, their rental model fits nicely into an OpEx purchasing plan.


There’s no denying Microsoft has killed it with Office 365. This is something I’ll dive deeper into in an upcoming blog post. AWS and all public clouds allow you to pay for what you consume, on a monthly basis, although this cost isn’t always predictable. 


Even hardware vendors are at it. Dell can wrap up your purchase in a nice finance package spread over 3 – 5 years. HPE has their Green Lake program, which is, and I quote, “Delivering the As A Service experience. We live in a consumption-based world—music, TV shows, groceries, air travel and much more. Why? Because consuming services delivers a better outcome faster, in our personal lives and in business. It’s true for music, and it’s also true for IT.” ²




So why Meh, CapEx? In an increasingly diverse IT landscape, with more and more solutions delivered As-A-Service, coupled with the increased pace of innovation, it can be difficult to predict costs and get it right with a CapEx investment lasting 3 – 5 years. I mean, who wants to still be rocking an iPhone 11 in 2024?


¹ On Tick – To pay for something later, via Urban Dictionary

² HPE Green Lake Quote

Before we can examine if cloud-agnostic is a thing, we must understand what it is in the first place. There are a multitude of opinions on this matter, but from my standpoint, being cloud-agnostic is about being able to run a given workload on any cloud platform, without having to modify the workload for it to work correctly. Typically, this would be in a public cloud like AWS, Azure, or GCP.


With that in mind, this article examines if cloud-agnostic is achievable and if it should be considered when deploying workloads in the public cloud.


Let’s start by abstracting this thinking away from public cloud workloads. What if we apply this to cars? Imagine you wanted an engine to work across platforms. Could you take the engine out of a BMW and drop it into a Mercedes? Probably, but it wouldn’t work very well because the rest of the platform wasn’t designed for that engine. For true engine portability, you’d have to design the engine to fit both the BMW and the Mercedes from the outset and understand how each platform works. This would likely give a compromised setup whereby the engine cannot take full advantage of either platform’s capabilities anymore. Was the additional effort worth it to design a compromised solution?


In cloud computing terms, is it worth spending extra time developing an app so it can run in multiple clouds, or is it better to concentrate on one platform and take full advantage of what it has to offer?


There are some exceptions to this, and the use of containers seems to be one way to work around this cloud-agnostic conundrum. What’s a container, you ask? If we take Docker’s explanation, it’s “a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.”


So, in other words, a container is a level of abstraction where all the software and code are wrapped up in a nice little package and it doesn’t really matter where it runs provided the underlying infrastructure can cater to its requirements.


Let’s use another car analogy. What if a child’s car seat is a container? It’s not tied to one manufacturer and is expected to work the same way across platforms. This is possible due to a set of guidelines and standardization as to how a car seat works. ISOFIX is to a car seat what Docker is to a container. (Don’t ask me how to create a Kubernetes analogy for this, I don’t know; leave your thoughts in the comments.)
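To make the packaging idea concrete, this is roughly what that standardized wrapper looks like in practice: a minimal, hypothetical Dockerfile that bundles an application and its dependencies into one portable unit (the file names are invented for illustration):

```dockerfile
# The base image supplies the runtime, identical wherever the container runs
FROM python:3.11-slim

WORKDIR /app

# Bundle the dependencies with the code so nothing is assumed of the host
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py .

# The same command runs identically on a laptop, AWS, Azure, or GCP
CMD ["python", "app.py"]
```

The point is that everything the application needs travels with it, which is exactly the ISOFIX-style standardization the analogy describes.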


Which brings us to vendor lock-in. Things work better if you fully embrace the ecosystem. Got an iPhone? Bet that works best with a Mac. Using Exchange online for email? I bet you’re using Outlook to access it.


Workloads designed to run on a single-vendor public cloud can take advantage of all the available adjacent features. If you’re running a web application in AWS, it would make sense to use AWS S3 storage in the back-end, or maybe AWS RDS for database tasks. They’re designed from the outset to interoperate with each other. If you’re designing for cloud-agnostic, then you can’t take advantage of features like this.


In conclusion, yes, cloud-agnostic is a thing, but is it something you should embrace? That’s for you to decide.


I welcome any comments on this article. I feel like there's a lot more that could be discussed on this topic and these are just my opinions. What are yours?

All aboard the hype train—departing soon!


I think most people love to have the latest and greatest thing. Lines outside of Apple stores waiting for each new product are proof enough.


I’m terrible for it myself, like a magpie drawn to a shiny object. If something is new and exciting, I tend to want to check it out. And it doesn’t just apply to tech products, either. I like to read up about new cars as well… but I digress.


So, what’s the purpose of my article here? HYPE! Hype is the bane of anyone’s life whose job it is to identify if a new tech product has any substance to it. I want to try to help identify if you’re looking at marketing hype or something potentially useful.


Does any of this sound familiar?


  • Everything will move to the cloud; on-premises is dead.

  • You don’t need tape for backups anymore.

  • Hyper-converged infrastructure will replace all your traditional three-tier architectures.

  • Flash storage will replace spinning disk.


The above statements are just not true. Well, certainly not as blanket statements. Of course, there are companies with products out there to help move you in the direction of the hype, but it can be for a problem that never needed to be solved in the first place.


The tech marketing machines LOVE to get a lot of information out into the wild and say their product is the best thing since sliced bread. This seems to be particularly prevalent with start-ups who’ve secured a few successful rounds of venture capital funding and can pump it into marketing their product and bringing it to your attention. Lots of company-branded swag is usually available to try and entice you to take a peek at the product on offer. And who can blame them? At the end of the day, they need to shift product.


Unfortunately, this makes choosing products tough for us IT professionals, like trying to find the diamond amongst the coal. If there’s a lot of chatter about a product, it could be mistaken for word-of-mouth referrals. You know, like, “Hey Jim, have you used Product X? I have and it’s awesome.” The conversation might look more like this if it’s based on hype: “Hey Jim, have you seen Product X? I’m told it’s awesome.”


The key difference here is giving a good recommendation based on fact vs. a bad recommendation based on hearsay. Now, I’m not pooh-poohing every new product out there. There are some genuinely innovative and useful things available. I’m saying, don’t jump on the bandwagon or hype train and buy something just because of a perception in the marketplace that something’s awesome. Put those magpie tendencies to one side and exercise your due diligence. Don’t buy the shiny thing because it’s on some out-of-this-world deal (it probably isn’t). Assess the product on its merits and what it can do for you. Does it solve a technical or business problem you may have? If yes, excellent. If not, just walk away.


A Little Side Note

If I’m attending any IT trade shows with lots of exhibitors, I apply similar logic to identify to whom I want to speak. What are my requirements? Does vendor X look like they satisfy those requirements? Yes: I will go and talk to you. No: Walk right on by. It can save you a lot of time and allows you to focus on what matters to you.

This is a continuation of one of my previous blog posts, Which Infrastructure Monitoring Tools Are Right for You? On-Premises vs. SaaS, where I talked about the considerations and benefits of running your monitoring platform in either location. I touched on the two typical licensing models geared either towards a CapEx or an OpEx purchase. CapEx is something you buy and own, or a perpetual license. OpEx is usually a monthly spend that can vary month-to-month, like a rental license. Think of it like buying Microsoft Office off-the-shelf (who does that anymore?), where you own the product, or buying Office 365 and paying month-by-month for the services you consume.


Let’s Break This Down

As mentioned, there are the two purchasing options. But within those, there are different ways monitoring products are licensed, which could ultimately sway your decision on which product to choose. If your purchasing decisions are ruled by the powers that be, then cost is likely one of the top considerations when choosing a monitoring product.


So what license models are there?


Per Device Licensing

This is one of the most common licensing models for a monitoring solution. What you need to look at closely, though, is the definition of a device. I have seen tools classify a network port on a switch as an individual device, and I have seen other tools classify the entire switch as a device. You can win big with this model if something like a VMware vCenter server or an IPMI management tool is classed as one device: you can effectively monitor all the devices managed by that tool for the cost of a single device license.
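Some back-of-a-napkin maths shows how much the definition of a device matters (the prices and counts here are invented for illustration):

```python
def annual_license_cost(device_count, price_per_device):
    """Total yearly cost under a per-device licensing model."""
    return device_count * price_per_device

price = 100  # hypothetical cost per device license, per year

# Tool A counts every port on a 48-port switch as a device
print(annual_license_cost(48, price))  # 4800

# Tool B counts the whole switch as a single device
print(annual_license_cost(1, price))   # 100
```

Same switch, same price per device, a 48x difference in cost, purely down to how the vendor defines “device.”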


Amount of Data Ingested

This model seems to apply more to SaaS-based security information and event management (SIEM) solutions, but may apply to other products. Essentially, you’re billed per GB of data ingested into the solution. The headline cost for 1GB of data may look great, but once you start pumping hundreds of GBs of data into the solution daily, the costs add up very quickly. If you can, evaluate log churn in terms of data generated before looking at a solution licensed like this.
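The way a cheap headline price balloons with volume is easy to demonstrate (the per-GB price and ingest figures below are invented for illustration):

```python
def monthly_ingest_cost(gb_per_day, price_per_gb, days=30):
    """Estimated monthly bill under a per-GB-ingested licensing model."""
    return gb_per_day * price_per_gb * days

# A headline price of $2/GB looks cheap on a tiny 1 GB/day feed...
print(monthly_ingest_cost(1, 2.0))    # 60.0 per month

# ...until you point a few hundred GB of daily logs at it
print(monthly_ingest_cost(300, 2.0))  # 18000.0 per month
```

This is why measuring your log churn first matters: the variable you control, GB per day, dominates the bill.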


Pay Your Money, Monitor Whatever You Like

Predominantly used for on-premises monitoring solution deployments: you pay a flat fee and you can monitor as much or as little as you like. The initial buy-in price may be high, but if you compare it to the other licensing models, it may work out cheaper in the long run. I typically find the established players in the monitoring game have both on-premises and SaaS versions of their products. There will be a point where paying the upfront cost for the on-premises perpetual license is cheaper than paying for the monthly rental SaaS equivalent. It’s all about doing your homework on that one.
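That homework boils down to a break-even calculation, sketched below with invented figures (perpetual upfront plus yearly maintenance versus a flat monthly SaaS fee):

```python
def breakeven_months(perpetual_upfront, yearly_maintenance, saas_monthly):
    """Month at which cumulative SaaS spend first matches or exceeds the
    perpetual license's cumulative cost; None if it never does within 10 years."""
    for month in range(1, 121):
        perpetual = perpetual_upfront + yearly_maintenance * (month / 12)
        saas = saas_monthly * month
        if saas >= perpetual:
            return month
    return None

# Hypothetical figures: $20k upfront + $4k/yr maintenance vs. $1k/month SaaS
print(breakeven_months(20_000, 4_000, 1_000))  # 30 -> perpetual wins after 2.5 years
```

If your planned ownership of the tool runs past the break-even point, the perpetual license is the cheaper option; if not, the SaaS subscription is.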



I may be preaching to the choir here, but I hope there are some useful snippets of info you can take away and apply to your next monitoring solution evaluation. As with any project, planning is key. The more info you have available to help with that planning, the more successful you will be.

If, like me, you believe we’re living in a hybrid IT world when it comes to workload placement, then it would make sense to also consider a hybrid approach when deploying IT infrastructure monitoring solutions. What I mean by this is deciding where to physically run a monitoring solution, be that on-premises or in the public cloud.


I work for a value-added reseller, predominantly involved with deploying infrastructure for virtualization, end-user computing, disaster recovery, and Office 365 solutions. During my time I’ve run a support team offering managed services for a range of different-sized customers from SMB to SME, and managed several different monitoring solutions. These ranged from SaaS products used to bolster our managed service offering, to monitoring applications installed locally at customer sites. Each has its place and there are reasons for choosing one platform over another.


Licensing Models

Or what I really mean, how much does this thing cost?


Typically for an on-premises solution, there will be a one-off upfront cost and then a yearly maintenance plan. This is called perpetual licensing. The advantage here is that you own the product with the initial purchase, and then maintaining the product support and subscription entitles you to future upgrades and support. Great for a CapEx investment.


SaaS offerings tend to be a monthly cost. If you stop paying, you no longer have access to the solution. This is called subscription licensing. Subscription licensing is more flexible in that you can scale up or scale down the licenses required on a month-by-month basis. I will cover more on this in a future blog post.


Data Gravity

This is physically where the bulk of your application data resides, be that on-premises or in the public cloud. Some applications need to be close to their data so they can run quickly and effectively, and a monitoring solution should take this into consideration. It could be bad to have a SaaS-based monitoring solution if you have time-sensitive data running on-premises that needs near-instantaneous alerts. Consider time sensitivity and criticality when deciding what a monitoring solution should deliver for you. If being alerted about something up to 15 minutes after an event has occurred is acceptable, then a SaaS solution for on-premises workloads could be a good fit.


Infrastructure to Run a Monitoring Solution

Some on-premises monitoring solutions can be resource-intensive, which can prove problematic if the existing environment is not currently equipped to run it. This could lead to further CapEx expenditure for new hardware just to run the monitoring solution, which in turn is something else that will need to be monitored. SaaS, in this instance, could be a good fit.


Also, consider this: do you want to own and maintain another platform, or do you want a SaaS provider to worry about it? If we compare email solutions like Exchange and Exchange Online, Exchange is in many instances faster for user access and you have total control over how to run the solution. Exchange Online, however, is usually good enough and removes the hassle of managing an Exchange server. Ask yourself: do you need good enough or do you need some nerd knobs to turn?



A lot of this article may seem obvious, but the key to choosing a monitoring solution is to start with a set of criteria that fits your needs and then evaluate the marketplace. Personally, I start with a list of requirements and assign a weighting to each item; whichever product scores the highest is the best fit.


To give you an idea of those requirements, I typically start out with things like:

  • Is the solution multi-tenanted?
  • How much does it cost?
  • How is it licensed?
  • What else can it integrate with (like a SIEM solution, etc.)?
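The weighted-scoring approach described above can be sketched as follows. The criteria, weights, and vendor scores are invented for illustration ("Product A" and "Product B" are placeholders); replace them with your own shortlist and a 1-5 score per criterion.

```python
# Hypothetical weights and scores for illustration only.
weights = {
    "multi_tenancy": 3,
    "cost": 5,
    "licensing_model": 2,
    "integrations": 4,   # e.g., SIEM integration
}

# Score each candidate 1-5 against every criterion.
vendors = {
    "Product A": {"multi_tenancy": 5, "cost": 2, "licensing_model": 4, "integrations": 3},
    "Product B": {"multi_tenancy": 3, "cost": 4, "licensing_model": 3, "integrations": 5},
}

def weighted_score(scores):
    """Sum of (criterion weight x vendor score) across all criteria."""
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(vendors.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores)}")
```

The weighting is the important part: a product that excels at something you barely care about shouldn't win on raw score alone.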


A good end product is always the result of meticulous preparation and planning.

Today’s public cloud hyperscalers, such as Microsoft Azure, AWS, and Google Cloud, provide a whole host of platforms and services enabling organizations to deliver pretty much any workload imaginable. However, they aren’t the be-all and end-all of an organization’s IT infrastructure needs.


Not too long ago, the hype in the marketplace was very much geared toward moving all workloads to the public cloud. If you didn’t, you were behind the curve. The reality is, though, it’s just not practical to move all existing infrastructure to the cloud. Simply taking workloads running on-premises and rehosting them in the public cloud, referred to as a “lift and shift,” is considered by many to be the wrong approach, though that’s not to say it’s wrong for every workload. Things like file servers, domain controllers, and line-of-business application servers tend to cost more to run as native virtual machines in the public cloud and introduce extra complexity around application access and data gravity.


The “Cloud-First” mentality adopted by many organizations is disappearing and gradually being replaced with “Cloud-Appropriate.” I’ve found a lot of the “Cloud-First” messaging has been pushed from the board level without any real consideration or understanding of what it means to the organization beyond the promise of cost savings. Over time, the pioneers who adopted public cloud first have gained the knowledge and wisdom of what operating in a “Cloud-First” environment looks like. The operating costs don’t always work out as expected—and can even be higher.


Let’s look at some examples of what “Cloud-Appropriate” may mean to you. I’m sure you’ve heard of Office 365, which offers an alternative to on-premises workloads such as email and SharePoint servers, and adds value with tools like workplace collaboration via Microsoft Teams and task automation with Microsoft Flow. This Software as a Service (SaaS) solution, born in the public cloud, can take full advantage of the infrastructure underpinning it. As an organization, the cost of managing the traditional infrastructure for those services disappears; you’re left with a largely predictable bill, an arguably superior service offering, and little more to do than monitor Office 365 itself.


Application stack refactoring is another great place to think “Cloud-Appropriate.” You can take advantage of the services available in the public cloud, such as managed, highly performant database solutions like Amazon RDS, or use the public cloud’s elasticity to spin up more workloads in a short amount of time.


So where does that leave us? A hybrid approach to IT infrastructure. Public cloud is certainly a revolution, but for many organizations, the evolution of their existing IT infrastructure will better serve their needs. Hyperconverged infrastructure is a fitting example: an evolution of the traditional three-tier architecture comprising networking, compute, and storage. The services offered are the same, but the footprint in terms of space, cooling, and power consumption is smaller while performance is greater, which ultimately delivers better value to the business.



Further Reading

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns.
