Geek Speak

6 Posts authored by: ian0x0r

Meh, CapEx

Posted by ian0x0r Nov 7, 2019

Do you remember the good old days when you saved up your hard-earned cash to buy the shiny thing you’d been longing for? You know, like that Betamax player, the TV with the lovely wood veneer surround, or a Chevrolet Corvair? What happened to those days?

 

I remember as a young lad growing up in the U.K., there were shops like Radio Rentals or Rumbelows where you could get something “On Tick.” ¹ It didn’t seem like the done thing at the time; it almost seemed like a dirty word. “Hey, have you heard, Sheila got her fridge-freezer on tick.” The look of shock on people’s faces!

 

Now fast forward 25 years and here we are—almost everything is available on a buy-now-pay-later or rental model. Want a way to access films quickly? Here’s a Netflix subscription. Need something to watch them on? Hey, here’s a 100” Ultra HD 8K, curved, HDR, smart TV with built-in Freeview and Freesat, yours for 60 low monthly payments of £200. Need a new set of wheels? No problem. Something like 82% of all new cars in the U.K. are purchased on a PCP (personal contract purchase) plan. This is people reaching for the shiny thing and being able to get it when previously they couldn’t or shouldn’t.

 

The OpEx Model

 

So, what’s my point and how is this relevant to the IT landscape?

 

First and foremost, rental models can work out more profitable for the vendor selling the goods: slap a little interest premium on the payments as the cost is spread over a longer term. Rental income is also more predictable; the annuity is the name of the game.

 

Incentivizing the rental model makes it look more attractive. Look at the cost of an iPhone, for example. Each new release gets more and more expensive. My first four cars combined cost less than a new iPhone 11 Pro. But Apple (and others) are smart. Pay your monthly sum of cash and every 12 months, they’ll give you a new phone. And oh heck, if you drop it and smash the screen, no problem—it’s covered in the monthly cost. You don’t get that service if you buy an iPhone upfront with your hard-earned cash. It makes customers sticky, as in they’re less likely to move elsewhere.

 

Now let’s look at IT vendors. Microsoft, Amazon, Dell, HPE, you name it, their rental model fits nicely into an OpEx purchasing plan.

 

There’s no denying Microsoft has killed it with Office 365. This is something I’ll dive deeper into in an upcoming blog post. AWS and all public clouds allow you to pay for what you consume, on a monthly basis, although this cost isn’t always predictable. 

 

Even hardware vendors are at it. Dell can wrap up your purchase in a nice finance package spread over 3 – 5 years. HPE has its GreenLake program, which is, and I quote, “Delivering the As A Service experience. We live in a consumption-based world—music, TV shows, groceries, air travel and much more. Why? Because consuming services delivers a better outcome faster, in our personal lives and in business. It’s true for music, and it’s also true for IT.” ²

 

Conclusion

 

So why Meh, CapEx? In an increasingly diverse IT landscape, with more and more solutions delivered As-A-Service, coupled with the increased pace of innovation, it can be difficult to predict costs and get it right with a CapEx investment lasting 3 – 5 years. I mean, who wants to still be rocking an iPhone 11 in 2024?

 

¹ On Tick – To pay for something later, via Urban Dictionary https://www.urbandictionary.com/define.php?term=get%20it%20on%20tick

² HPE Green Lake Quote https://www.hpe.com/uk/en/services/it-consumption.html

Before we can examine if cloud-agnostic is a thing, we must understand what it is in the first place. There are a multitude of opinions on this matter, but from my standpoint, being cloud-agnostic is about being able to run a given workload on any cloud platform, without having to modify the workload for it to work correctly. Typically, this would be in a public cloud like AWS, Azure, or GCP.

 

With that in mind, this article examines if cloud-agnostic is achievable and if it should be considered when deploying workloads in the public cloud.

 

Let’s start by abstracting this thinking away from public cloud workloads. What if we apply this to cars? Imagine you wanted an engine to work across platforms. Could you take the engine out of a BMW and drop it into a Mercedes? Probably, but it wouldn’t work very well because the rest of the platform wasn’t designed for that engine. For true engine portability, you’d have to design the engine to fit both the BMW and the Mercedes from the outset and understand how each platform works. This would likely give a compromised setup whereby the engine cannot take full advantage of either platform’s capabilities anymore. Was the additional effort worth it to design a compromised solution?

 

In cloud computing terms, is it worth spending extra time developing an app so it can run in multiple clouds, or is it better to concentrate on one platform and take full advantage of what it has to offer?

 

There are some exceptions to this, and the use of containers seems to be one way to work around this cloud-agnostic conundrum. What’s a container, you ask? If we take Docker's explanation (https://www.docker.com/resources/what-container), it’s “a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.”

 

So, in other words, a container is a level of abstraction where all the software and code are wrapped up in a nice little package and it doesn’t really matter where it runs provided the underlying infrastructure can cater to its requirements.
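To make that a little more concrete, here’s a minimal, illustrative sketch of a container image definition. The base image, file names, and port are assumptions invented for the example, not taken from any real project:

# Dockerfile - a minimal illustrative sketch (base image, files, and port are hypothetical)
FROM python:3.12-slim                                  # runtime baked into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt     # dependencies packaged alongside the code
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]                               # same start command wherever the image runs

Build it once (docker build -t myapp .) and the resulting image should run the same way on a laptop, an on-premises host, or a managed container service in any of the big clouds, provided the underlying infrastructure can run containers.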

 

Let’s use another car analogy. What if a child’s car seat is a container? It’s not tied to one manufacturer and is expected to work the same way across platforms. This is possible thanks to a set of guidelines and standardization governing how a car seat fits and works. ISOFIX is to a car seat what Docker is to a container. (Don’t ask me how to create a Kubernetes analogy for this, I don’t know; leave your thoughts in the comments.)

 

Which brings us to vendor lock-in. Things work better if you fully embrace the ecosystem. Got an iPhone? Bet that works best with a Mac. Using Exchange online for email? I bet you’re using Outlook to access it.

 

Workloads designed to run on a single-vendor public cloud can take advantage of all the available adjacent features. If you’re running a web application in AWS, it would make sense to use AWS S3 storage in the back end, or maybe AWS RDS for database tasks. They’re designed from the outset to interoperate with each other. If you’re designing to be cloud-agnostic, you can’t take full advantage of features like these.
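To illustrate that coupling, here’s a hedged sketch of what a web app wired directly into AWS services might look like. The bucket name, database endpoint, and credentials are all hypothetical, invented purely for the example:

# Illustrative sketch only - the resource names and credentials are hypothetical.
import boto3          # AWS SDK for Python
import psycopg2       # PostgreSQL driver, pointed here at an RDS endpoint

# Store an uploaded report in S3 - this call is AWS-specific.
s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-app-uploads", "reports/report.pdf")

# Query a database hosted on Amazon RDS - the endpoint is AWS-specific,
# even though the SQL itself is portable.
conn = psycopg2.connect(
    host="my-app-db.abc123.eu-west-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    dbname="appdb",
    user="app_user",
    password="not-a-real-password",
)
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders;")
    print(cur.fetchone())

Moving this workload to another cloud means swapping the S3 calls for that provider’s object storage SDK and re-pointing (or re-platforming) the database, which is exactly the trade-off the cloud-agnostic approach tries to avoid.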

 

In conclusion, yes, cloud-agnostic is a thing, but is it something you should embrace? That’s for you to decide.

 

I welcome any comments on this article. I feel like there's a lot more that could be discussed on this topic and these are just my opinions. What are yours?

All aboard the hype train—departing soon!

 

I think most people love to have the latest and greatest thing. The lines outside Apple stores before each new product launch are proof enough.

 

I’m terrible for it myself, like a magpie drawn to a shiny object. If something is new and exciting, I tend to want to check it out. And it doesn’t just apply to tech products, either. I like to read up about new cars as well… but I digress.

 

So, what’s the purpose of my article here? HYPE! Hype is the bane of anyone whose job it is to work out whether a new tech product has any substance to it. I want to try to help you identify whether you’re looking at marketing hype or something potentially useful.

 

Does any of this sound familiar?

 

  • Everything will move to the cloud; on-premises is dead.

  • You don’t need tape for backups anymore.

  • Hyper-converged infrastructure will replace all your traditional three-tier architectures.

  • Flash storage will replace spinning disk.

 

The above statements are just not true, certainly not as blanket statements. Of course, there are companies with products out there to help move you in the direction of the hype, but sometimes they solve a problem that never needed solving in the first place.

 

The tech marketing machines LOVE to get a lot of information out into the wild and say their product is the best thing since sliced bread. This seems particularly prevalent with start-ups that have secured a few successful rounds of venture capital funding and can pump it into marketing their product and bringing it to your attention. Lots of company-branded swag is usually on offer to entice you to take a peek at the product. And who can blame them? At the end of the day, they need to shift product.

 

Unfortunately, this makes choosing products tough for us IT professionals, like trying to find the diamond amongst the coal. If there’s a lot of chatter about a product, it could be mistaken for word-of-mouth referrals. You know, like, “Hey Jim, have you used Product X? I have and it’s awesome.” The conversation might look more like this if it’s based on hype: “Hey Jim, have you seen Product X? I’m told it’s awesome.”

 

The key difference here is giving a good recommendation based on fact vs. a bad recommendation based on hearsay. Now, I’m not pooh-poohing every new product out there. There are some genuinely innovative and useful things available. I’m saying, don’t jump on the bandwagon or hype train and buy something just because of a perception in the marketplace that something’s awesome. Put those magpie tendencies to one side and exercise your due diligence. Don’t buy the shiny thing because it’s on some out-of-this-world deal (it probably isn’t). Assess the product on its merits and what it can do for you. Does it solve a technical or business problem you may have? If yes, excellent. If not, just walk away.

 

A Little Side Note

If I’m attending an IT trade show with lots of exhibitors, I apply similar logic to decide whom I want to speak to. What are my requirements? Does vendor X look like they satisfy those requirements? Yes: I’ll go and talk to you. No: walk right on by. It saves a lot of time and lets you focus on what matters to you.

This is a continuation of one of my previous blog posts, Which Infrastructure Monitoring Tools Are Right for You? On-Premises vs. SaaS, where I talked about the considerations and benefits of running your monitoring platform in either location. I touched on the two typical licensing models, geared towards either a CapEx or an OpEx purchase. CapEx is something you buy and own, like a perpetual license. OpEx is usually a monthly spend that can vary month to month, like a rental license. Think of it as the difference between buying Microsoft Office off the shelf (who does that anymore?), where you own the product, and subscribing to Office 365, where you pay month by month for the services you consume.

 

Let’s Break This Down

As mentioned, there are two purchasing options. But within those, there are different ways monitoring products are licensed, which could ultimately sway your decision on which product to choose. If your purchasing decisions are ruled by the powers that be, then cost is likely one of the top considerations when choosing a monitoring product.

 

So what license models are there?

 

Per Device Licensing

This is one of the most common licensing models for a monitoring solution. What you need to look at closely, though, is the definition of a device. I have seen tools classify a network port on a switch as an individual device, and I have seen other tools classify the entire switch as a device. You can win big with this model if something like a VMware vCenter server or an IPMI management tool is classed as one device: you can effectively monitor everything managed by that tool for the cost of a single device license.

 

Amount of Data Ingested

This model seems to apply more to SaaS-based security information and event management (SIEM) solutions, but may apply to other products. Essentially, you’re billed per GB of data ingested into the solution. The headline cost for 1GB of data may look great, but once you start pumping hundreds of GBs of data into the solution daily, the costs add up very quickly. If you can, evaluate log churn in terms of data generated before looking at a solution licensed like this.
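A quick back-of-the-envelope calculation shows how quickly it adds up. The per-GB price here is a made-up figure for illustration, not a real vendor quote:

# Illustrative only - the price per GB is an assumption, not a real vendor quote.
price_per_gb = 0.50      # hypothetical cost per GB ingested
daily_ingest_gb = 200    # e.g., firewalls, servers, and endpoints all shipping logs

monthly_cost = price_per_gb * daily_ingest_gb * 30
annual_cost = monthly_cost * 12

print(f"Monthly: {monthly_cost:,.2f}")   # 3,000.00
print(f"Annual:  {annual_cost:,.2f}")    # 36,000.00

Knowing your real daily log volume up front turns that surprise into a number you can actually budget for.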

 

Pay Your Money, Monitor Whatever You Like

Predominantly used for on-premises monitoring deployments, you pay a flat fee and can monitor as much or as little as you like. The initial buy-in price may be high, but compared to the other licensing models, it may work out cheaper in the long run. I typically find the established players in the monitoring game have both on-premises and SaaS versions of their products. There will be a point where paying the upfront cost for the on-premises perpetual license is cheaper than paying for the monthly rental of the SaaS equivalent. It’s all about doing your homework on that one.
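If you want to do that homework quickly, a simple break-even sketch like the one below can help. Every figure is a hypothetical placeholder; substitute the quotes you’re actually given:

# Hypothetical figures for illustration - substitute the quotes you actually receive.
perpetual_upfront = 20000    # one-off cost of the on-premises perpetual license
annual_maintenance = 4000    # yearly support and subscription renewal
saas_monthly = 1500          # equivalent SaaS subscription per month

for year in range(1, 6):
    on_prem_total = perpetual_upfront + annual_maintenance * year
    saas_total = saas_monthly * 12 * year
    cheaper = "on-prem" if on_prem_total < saas_total else "SaaS"
    print(f"Year {year}: on-prem {on_prem_total:>7,}  SaaS {saas_total:>7,}  -> {cheaper}")

In this made-up example, the perpetual license overtakes the subscription during year two; your numbers will differ, which is exactly why it’s worth running them.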

 

Conclusion

I may be preaching to the choir here, but I hope there are some useful snippets of info you can take away and apply to your next monitoring solution evaluation. As with any project, planning is key. The more info you have available to help with that planning, the more successful you will be.

If, like me, you believe we’re living in a hybrid IT world when it comes to workload placement, then it would make sense to also consider a hybrid approach when deploying IT infrastructure monitoring solutions. What I mean by this is deciding where to physically run a monitoring solution, be that on-premises or in the public cloud.

 

I work for a value-added reseller, predominantly involved with deploying infrastructure for virtualization, end-user computing, disaster recovery, and Office 365 solutions. During my time I’ve run a support team offering managed services to a range of customers, from SMB to SME, and have managed several different monitoring solutions. They range from SaaS products that bolster our managed service offering to monitoring applications installed locally at customer sites. Each has its place, and there are reasons for choosing one platform over another.

 

Licensing Models

Or, what I really mean: how much does this thing cost?

 

Typically, an on-premises solution has a one-off upfront cost and then a yearly maintenance plan. This is called perpetual licensing. The advantage here is that you own the product with the initial purchase, and keeping the support and subscription current entitles you to future upgrades and ongoing support. Great for a CapEx investment.

 

SaaS offerings tend to be a monthly cost. If you stop paying, you no longer have access to the solution. This is called subscription licensing. Subscription licensing is more flexible in that you can scale up or scale down the licenses required on a month-by-month basis. I will cover more on this in a future blog post.

 

Data Gravity

This is physically where the bulk of your application data resides, be that on-premises or in the public cloud. Some applications need to be close to their data to run quickly and effectively, and a monitoring solution should take this into consideration. A SaaS-based monitoring solution may be a poor fit if you have time-sensitive workloads running on-premises that need near-instantaneous alerts. Consider time sensitivity and criticality when deciding what a monitoring solution should deliver for you. If being alerted up to 15 minutes after an event has occurred is acceptable, then a SaaS solution for on-premises workloads could be a good fit.

 

Infrastructure to Run a Monitoring Solution

Some on-premises monitoring solutions can be resource-intensive, which can prove problematic if the existing environment isn’t equipped to run them. This could lead to further CapEx spend on new hardware just to run the monitoring solution, which in turn is something else that will need to be monitored. SaaS, in this instance, could be a good fit.

 

Also, consider this: do you want to own and maintain another platform, or do you want a SaaS provider to worry about it? Compare email solutions like Exchange and Exchange Online: on-premises Exchange is in many instances faster for user access and gives you total control over how the solution runs, while Exchange Online is usually good enough and removes the hassle of managing an Exchange server. Ask yourself: do you need good enough, or do you need some nerd knobs to turn?

 

Conclusion

A lot of this article may seem obvious, but the key to choosing a monitoring solution is to start out with a set of criteria that fits your needs and then evaluate the marketplace. Personally, I’ve started with a list of requirements, with a weighting on each item. Whichever product scored the highest ended up being the best choice.

 

To give you an idea of those requirements, I typically start out with things like:

  • Is the solution multi-tenanted?
  • How much does it cost?
  • How is it licensed?
  • What else can it integrate with (like a SIEM solution, etc.)?
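As a rough sketch of how that weighted scoring might look in practice (the criteria, weights, and scores below are invented for illustration):

# A minimal weighted-scoring sketch - criteria, weights, and scores are all hypothetical.
criteria_weights = {        # weight reflects how much each requirement matters to you
    "multi_tenanted": 3,
    "cost": 5,
    "licensing_model": 2,
    "integrations": 4,
}

# Score each candidate product against each criterion (say, 1-5).
products = {
    "Product A": {"multi_tenanted": 5, "cost": 2, "licensing_model": 4, "integrations": 3},
    "Product B": {"multi_tenanted": 3, "cost": 4, "licensing_model": 3, "integrations": 5},
}

for name, scores in products.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: {total}")   # the highest weighted total wins the evaluation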

 

A good end product is always the result of meticulous preparation and planning.

Today’s public cloud hyperscalers, such as Microsoft Azure, AWS, and Google, provide a whole host of platforms and services to enable organizations to deliver pretty much any workload you can imagine. However, they aren’t the be-all and end-all of an organization’s IT infrastructure needs.

 

Not too long ago, the hype in the marketplace was very much geared toward moving all workloads to the public cloud; if you didn’t, you were behind the curve. The reality, though, is that it’s just not practical to move all existing infrastructure to the cloud. Simply taking workloads running on-premises and running them unchanged in the public cloud, referred to as a “lift and shift,” is considered by many to be the wrong way to do it. That’s not to say it’s wrong for every workload, but things like file servers, domain controllers, and line-of-business application servers tend to cost more to run as native virtual machines in the public cloud and introduce extra complexity around application access and data gravity.

 

The “Cloud-First” mentality adopted by many organizations is disappearing and gradually being replaced with “Cloud-Appropriate.” I’ve found a lot of the “Cloud-First” messaging has been pushed from the board level without any real consideration or understanding of what it means to the organization beyond the promise of cost savings. Over time, the pioneers who adopted public cloud early have gained the knowledge and wisdom of what operating in a “Cloud-First” environment really looks like. The operating costs don’t always work out as expected and can even end up higher.

 

Let’s look at some examples of what “Cloud-Appropriate” may mean to you. I’m sure you’ve heard of Office 365, which offers an alternative to on-premises workloads such as email servers and SharePoint servers, and adds value with tools like workplace collaboration via Microsoft Teams, task automation with Microsoft Flow, and so on. This Software as a Service (SaaS) solution, born in the public cloud, can take full advantage of the infrastructure that underpins it. For the organization, the cost of managing the traditional infrastructure behind those services disappears. You’re left with a largely predictable bill and an arguably superior service, and all that remains is to monitor Office 365.

 

Application stack refactoring is another great place to think about “Cloud-Appropriate.” You can take advantage of the services available in the public cloud, such as highly performant managed database solutions like Amazon RDS, or use the public cloud’s elasticity to spin up more workloads in a short amount of time.

 

So where does that leave us? A hybrid approach to IT infrastructure. Public cloud is certainly a revolution, but for many organizations, the evolution of their existing IT infrastructure will better serve their needs. Hyper-converged infrastructure is a fitting example of the evolution of a traditional three-tier architecture comprising networking, compute, and storage. The services offered are the same, but the footprint in terms of space, cooling, and power consumption is lower, while performance is greater, which ultimately offers better value to the business.

 

 

Further Reading

CRN and IDC: Why early public cloud adopters are leaving the public cloud amidst security and cost concerns. https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey
