Geek Speak

6 Posts authored by: jabenedicic

As we head into the final post of this series, I want to thank you all for reading this far. For a recap of the other parts, please find links below:


Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT

Part 5 – Choosing the right location for your workload in a Hybrid IT world


To close out the series, I’d like to talk about my most recent journey in this space: using DevOps tooling and continuous integration/deployment (CI/CD) pipelines. A lot of this will come from my experiences using Azure DevOps, as I’m most familiar with that tool. However, there are a lot of alternatives out there, each with their own pros and cons depending on your business or your customers.


I’ve never been a traditional programmer/developer. I’ve adopted some skills over the years as that knowledge can bring benefit to many areas of IT. Being able to build infrastructure as code or create automation scripts has always served me well, long before the more common use cases evolved from public cloud consumption. I feel it’s an important skill for all IT professionals to have.


More recently, I’ve found that the relationship between traditional IT and developers is growing closer. IT departments need to provide tools and infrastructure to the business to speed development and get products out the door quicker. This is where the DevOps culture has come to the forefront. It’s no longer good enough to just develop a product and throw it over the fence to be managed. The systems we use and the platforms available to us mean that we must work together. To help this new culture, it’s important to have the right DevOps tools in place: good code management repositories, artifact repositories, container registries, and resource management tools like Kanban boards. These all play a role for developers and IT professionals. Bringing all this together into a CI/CD process, however, involves more than just tools. Processes and business practices may need to be adjusted as well.


I’m now working more in this space. It’s a natural extension of the automation work I did previously, and it overlaps quite nicely. Working with businesses to set up pipelines and gain velocity in development has taken me on a great journey. I won’t go into detail on the code side of this, as that’s something for a different blog. What’s important and relevant in hybrid IT environments is how these CI/CD processes integrate into the various environments. As I discussed in my previous post, choosing the right location for your workloads is important, and this carries over into these pipelines.


During the software development life cycle, there are stages you may need to go through. Unit, functional, integration, and user acceptance testing are commonplace. Deploying throughout these various stages means there will be different requirements for infrastructure and services. From a hybrid IT perspective, having the tools at hand to deploy to multiple locations of your choice is paramount. Short-lived environments can use cloud-hosted services such as hosted build agents and cloud compute. Medium-term environments that run in isolation can again be cloud-based. Longer-term environments or those that use legacy systems can be deployed on-premises. The toolchain gives you this flexibility.
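As a rough illustration of that flexibility, the placement decision can be sketched as a simple rule. The thresholds below are invented for the example; real placement depends on your costs, data, and tooling:

```python
def place_environment(lifetime_days: float, uses_legacy_systems: bool) -> str:
    """Illustrative placement rule for test environments in a hybrid IT setup.

    The thresholds are assumptions for this sketch, not hard rules.
    """
    if uses_legacy_systems:
        return "on-premises"            # legacy dependencies keep it local
    if lifetime_days <= 7:
        return "cloud (ephemeral)"      # hosted build agents, throwaway compute
    if lifetime_days <= 90:
        return "cloud (persistent)"     # isolated medium-term environment
    return "on-premises"                # long-lived environments

print(place_environment(1, False))      # a unit-test run
print(place_environment(30, False))     # a medium-term integration environment
print(place_environment(365, True))     # long-lived environment with legacy links
```

The point is not the exact numbers but that the toolchain lets you encode a rule like this once and apply it consistently across every pipeline.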


As I previously mentioned, I work mostly with Azure DevOps. Building pipelines here gives me an extensive set of options, as well as a vibrant marketplace of extensions built by the community and vendors. If I want to deploy code to Azure services, I can call on Azure Resource Manager (ARM) templates to build an environment. If I include cloud-native services, I have richer plugins available to deploy configurations to things like API Management. When it comes to on-premises deployments, I can have DevOps agents deployed within my own data centre, allowing build and deployment pipelines to reach local systems. I can configure groups of deployment agents that connect me to my existing servers and services. There are options for me to deploy PowerShell scripts, call external APIs from private cloud management platforms like vRealize Automation, or hook into Terraform, Puppet, Chef, etc.
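Pipelines can also be driven programmatically through the Azure DevOps REST API. The sketch below only builds the request for the "Runs - Run Pipeline" endpoint rather than sending it; the organisation, project, and pipeline id are placeholders, authentication (a personal access token header) is omitted, and the exact `api-version` accepted may vary in your organisation:

```python
import json

def pipeline_run_request(organisation: str, project: str, pipeline_id: int,
                         branch: str = "refs/heads/main"):
    """Build the URL and JSON body to queue an Azure DevOps pipeline run.

    Sending the request would also need an Authorization header carrying a
    personal access token; that part is deliberately left out here.
    """
    url = (f"https://dev.azure.com/{organisation}/{project}"
           f"/_apis/pipelines/{pipeline_id}/runs?api-version=7.1")
    body = json.dumps({
        "resources": {"repositories": {"self": {"refName": branch}}}
    })
    return url, body

url, body = pipeline_run_request("my-org", "my-project", 42)
print(url)
```

This kind of call is how external systems, on-premises schedulers included, can kick off the same pipelines the portal does.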


I can also hook these deployment processes into container orchestrators like Kubernetes, storing and deploying Helm charts or Docker Compose files. These are ideal situations for traditionally siloed teams to work together. Developers know how the application should work, and operations and infrastructure people know how they want the system to look. Pulling together code that describes how the infrastructure deploys, heals, upgrades, and retires needs input from all sides. When using these types of tools, you’re looking to achieve an end-to-end system of code build and deployment. Putting in place all the quality gates, deployment processes, and testing removes human error and speeds up business outcomes.


Outside of those traditional SDLC use cases, I’ve found these types of tools beneficial in my daily tasks as well. Automation and infrastructure-as-code work follows a similar process, and I maintain my own projects in this structure. I keep version-controlled copies of ARM templates, CloudFormation templates, Terraform code, and much more. The CI/CD process allows me to bring non-traditional elements into my own deployment cycles: testing infrastructure with Pester, checking security with AzSK, or just making sure I clean up my test environments when I’ve finished with them. From my experiences so far, there’s a lot for traditional infrastructure people to learn from software developers, and vice versa. Bringing teams and processes together helps build better outcomes for all.


With that, we come to the close of this series. I want to thank everyone who took the time to read my posts and those who provided feedback or comments. I have enjoyed writing these and hope to share more opinions with you all in the future.

If you’re a returning reader to this series, thank you for reading this far. We have a couple more posts in store for you. If you’re a new visitor, you can find previous posts below:

Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT


In this post, I’ll be looking at how I help my customers assess and architect solutions across on-premises platforms and the major public cloud offerings. I’ll look at how best to use public cloud resources, how to fit those to use cases such as development/testing, and when to bring workloads back on-premises.


In most cases, modern applications built cloud-native, such as those using functions or as-a-service style offerings, will have a natural fit with the cloud they’ve been developed for. However, a lot of the customers I work with and encounter aren’t that far along the journey. That’s the desired goal, but it takes time to refactor or replace existing applications and processes.


With that in mind, where do I start? The first and most important part is in understanding the landscape. What do the current applications look like? What technologies are in use (both hardware and software)? What do the data flows look like? What does the data lifecycle look like? What are the current development processes?


Building a service catalogue is an important step in making decisions about how you spend your money and time. There are various methods out there for structuring these assessments, like TIME analysis (Tolerate, Invest, Migrate, Eliminate) or the 6 Rs (rehost, replatform, repurchase, refactor, retire, retain). Armed with as much information as possible, you’re empowered to make better decisions.
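At its simplest, a TIME-style assessment plots each application on two axes and reads off a quadrant. The scoring scale and the midpoint split below are invented for the sketch; real assessments weigh far more inputs from the service catalogue:

```python
def time_category(business_value: int, technical_quality: int) -> str:
    """Illustrative TIME (Tolerate/Invest/Migrate/Eliminate) bucketing.

    Scores run 1-10; the split at 5 is an assumption for this sketch.
    """
    if business_value > 5:
        return "Invest" if technical_quality > 5 else "Migrate"
    return "Tolerate" if technical_quality > 5 else "Eliminate"

print(time_category(business_value=8, technical_quality=8))  # high value, healthy app
print(time_category(business_value=8, technical_quality=2))  # valuable but creaking
print(time_category(business_value=2, technical_quality=2))  # candidate for retirement
```

Even a crude pass like this over the catalogue quickly separates the applications worth cloud investment from those to tolerate in place or retire.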


Next, I usually look at where quick wins can be made: where the best bang-for-your-buck changes can be implemented to show a return to the business. This usually starts in development/test environments and potentially pre-production environments. Increasing velocity here can provide immediate results and value to the business. Another area to consider is backup and long-term data retention.


Development and Testing


For development and test environments, I look at the existing architecture. Are these traditional VM-based environments? Can they be containerised easily? Where possible, is containerisation a good step toward more cloud-native behaviour and thinking?


In traditional VM environments, can automation be used to quickly build and destroy environments? If I’m building a new feature and I want to do integration testing, can I use mocks and other simulated components to reduce the amount of infrastructure needed? If so, then these short-lived environments are a great candidate for the public cloud. Where you can automate and have predictable lifecycles into the hours, days, and maybe even weeks, the efficiencies and savings of placing that workload in the cloud are evident.


When it comes to longer cycles like acceptance testing and pre-production, perhaps these require a longer lifetime or greater resource allocation. In these circumstances, traditional VM-based architectures and monolithic applications can become costly in the public cloud. My advice is to use the same automation techniques to deploy these to local resources with more reliable costs. However, the plan should always look forward and assess future developments where you can replace components into modern architectures over time and deploy across both on-premises and public cloud.
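The arithmetic behind that advice is simple. Both rates below are invented for illustration; substitute your own hourly cloud rate and amortised on-premises cost:

```python
def cheaper_location(hours_running: float,
                     cloud_rate_per_hour: float = 0.20,
                     onprem_monthly_cost: float = 100.0) -> str:
    """Back-of-the-envelope placement check with assumed example rates.

    On-premises capacity is assumed to be committed in whole months;
    cloud compute is billed per hour.
    """
    cloud_cost = hours_running * cloud_rate_per_hour
    months_committed = max(hours_running / 730, 1)   # at least one month
    onprem_cost = months_committed * onprem_monthly_cost
    return "cloud" if cloud_cost < onprem_cost else "on-premises"

print(cheaper_location(8))      # a day of integration testing
print(cheaper_location(4380))   # six months of pre-production
```

Short, predictable lifetimes favour the cloud; always-on environments cross the break-even point and favour local resources, which is exactly the split described above.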


Data Retention


As I mentioned, the other area I often explore is data retention. Can long-term backups be sent to cold storage in the cloud? For infrequently accessed data, the benefits over tape management are often significant. Restore access may be slower, but how often are you performing those operations? How urgent is a restore from, say, six years ago? Many times, you can wait to get this information back.
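To give a rough sense of scale, here is the tiering arithmetic with invented per-GB rates; check your provider's current pricing before drawing conclusions:

```python
def monthly_storage_cost(tb: float, rate_per_gb_month: float) -> float:
    """Monthly cost for a given capacity at a given per-GB-month rate."""
    return tb * 1024 * rate_per_gb_month

hot_rate = 0.018       # assumed hot/primary tier rate, per GB-month
archive_rate = 0.001   # assumed archive/cold tier rate, per GB-month

print(monthly_storage_cost(50, hot_rate))      # 50 TB of backups on a hot tier
print(monthly_storage_cost(50, archive_rate))  # the same data in cold storage
```

An order of magnitude or more between tiers is why slower restores are usually an acceptable trade for long-term retention data.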


Continuing the theme of data, it’s important to understand what data you need where and how you want to use it. There are benefits to using cloud native services for things like business intelligence, artificial intelligence (AI), machine learning (ML), and other processing. However, you often don’t need the entire data set to get the information you need. Look at building systems and using services that allow you to get the right data to the right location, or bring the data to the compute, as it were. Once you have the results you need, the data that was processed to generate them can be removed, and the results themselves can live where you need them at that point.


Lastly, I think about scale and the future. What happens if your service/application grows beyond your expectations? Not many people will be the next Netflix or Dropbox, but it’s important to think about what would happen if that came about. While uncommon, there are situations where systems scale to a point where using public cloud services becomes uneconomical. Have you architected the solution in a way that allows you to remove yourself? Would major work be required to build back on-premises? In most cases, this is a difficult question to answer, as there are many moving parts, and the answer depends on levels of success and scale that may not have been predictable. I’ve encountered this type of situation over the years, usually not to the full extent of complete removal of cloud services. I commonly see it in data storage. Large amounts of active data can become costly quickly. In these situations, I look to solutions that let me leverage traditional storage arrays placed near the cloud, usually in data centres with direct access to cloud providers.


In my final post, I’ll be going deeper into some of the areas I’ve discussed here and will cover how I use DevOps/CI/CD tooling in hybrid IT environments.


Thank you for reading, and I appreciate any comments or feedback.

Welcome back to this series of blogs on my real-world experiences of hybrid IT. If you’re just joining us, you can find previous posts here, here and here. So far, I have covered a brief introduction to the series, public cloud costs and experiences, and how to build better on-premises data centres.




In this post, I’ll cover something a little different: location and regulatory restrictions driving hybrid IT adoption. I am British, and as such, a lot of this is going to come from my view of the world in the U.K. and Europe. Not all of these issues will resonate with a global audience; however, they are good food for thought. With adoption of the public cloud, there are many options available to deploy services within various regions across the world. For many, this won’t be much of a concern. You consume the services where you need to and where they need to be consumed by the end users. This isn’t a bad approach for most global businesses with global customer bases. In my corner of the world, we have a few options for U.K.-based deployments when it comes to hyperscale clouds. However, not all services are available in these regions, and, especially for newer services, they can take some time to roll out into these regions.


Now, I don’t want to get political in this post, but we can’t ignore the fact that Brexit has left everyone with questions over what happens next. Will the U.K. leaving the EU have an impact? The short answer is yes. The longer answer really depends on what sector you work in. Anyone who works with financial, energy, or government customers will undoubtedly see some challenges. Certain industries must comply with regulations and security standards that govern where services can be located. There have always been restrictions for some industries that mean you can’t locate data outside of the U.K.; however, there are other areas where being hosted in the larger EU area has been generally accepted. Data sovereignty needs to be considered when deploying solutions to public clouds. When there is finally some idea of what’s happening with the U.K.’s relationship with the EU, and what laws and regulations will be replicated within the U.K., we in the IT industry will have to assess how that affects the services we have deployed.


For now, the U.K. is somewhat unique in this situation. However, the geopolitical landscape is always shifting: treaties change, safe harbour agreements come to an end, and trade embargoes or sanctions crop up over time. You need to be in a position where repatriation of services is a possibility should such circumstances come your way. Building a hybrid IT approach to your services and deployments helps with mobility: being able to move data between services, whether on-premises or to another cloud location. Stateless and cloud-native services are generally easier to move around and have fewer moving parts that require significant reworking should you need to move to a new location. Microservices, by their nature, are smaller and easier to replace; moving them between cloud providers or infrastructure should be a relatively trivial task. Traditional services, monolithic applications, databases, and data are not as simple a proposition. Moving large amounts of data can be costly; egress charges are commonplace and can be significant.


Whatever you are building or have built, I recommend having a good monitoring and IT inventory platform that helps you understand what you have in which locations. I also recommend using technologies that allow for simple and efficient movement of data. As mentioned in my previous post, there are several vendors now working in what has been called a “Data Fabric” space. These vendors offer solutions for moving data between clouds and back to on-premises data centres. Maintaining control of the data is a must if you are ever faced with the proposition of having to evacuate a country or cloud region due to geopolitical change.


Next time, I’ll look at how to choose the right location for your workload in a hybrid/multi-cloud world. Thanks for reading, and I welcome any comments or discussion.

If you’re just joining the series now, you can find the first two posts here and here.


Last time, I talked about the various decision points and considerations when using the public cloud. In this post, I’m going to look at how we build better on-premises data centres for hybrid IT. I’ll cover some of the technologies and vendors that I’ve used in this space over the last few years, what’s happening in this space, and some of the ways I help customers get the most from their on-premises infrastructure.


First of all, let’s start with some of my experience in this area. Over the years, I have spoken at conferences on this topic and recorded podcasts (you can find my most recent automation podcast here), and I’ve worked on automation projects across most of the U.K. In my last permanent role, I worked heavily on a project productising FlexPod solutions, including factory automation and off-the-shelf private cloud offerings.


Where Do You Begin?

Building better on-premises infrastructure doesn’t start where you would expect. It isn’t about features and nerd-knobs; these should be low on the priority list. Over the last few years, and arguably ever since the mainstream adoption of smartphones and tablets, end users have had much higher expectations of IT in the workplace. The simplicity of on-demand apps and services has set the bar high; turning up at work to a Windows XP desktop and a three-week wait for a server to be provisioned just doesn’t cut it.


I always start with the outcomes the business is trying to achieve. Are there specific goals that would improve time-to-market or employee efficiency? Once you understand those goals, look at the current processes (or lack thereof) and get an idea of what work creates bottlenecks, and where processes span multiple teams so that delays build up as tasks move between them.


Once you’ve established these areas, you can start to map technologies to the desired outcome.


What Do I Use?

From a hardware perspective, I’m looking for solutions that support modularity and scalability. I want to be able to start at the size I need now and grow if I must. I don’t want to be burdened later with forklift replacement of systems because I’ve outgrown them.


Horizontal growth is important. Most, if not all, of the converged infrastructure and hyper-converged infrastructure platforms offer this now. These systems often allow some form of redistribution of capacity as well. Moving under-utilised resources out to other areas of the business can be beneficial, especially when dealing with a hybrid IT approach and potential cloud migrations.


I’m also looking for vendors that support or work with the public cloud, allowing me to burst into resources or move data where I need it, when I need it there. Many vendors now have at least some form of “Data Fabric” approach, and I think this is key. Giving me the tools to make the best use of resources, wherever they are, makes life easier and gives me options.


When it comes to software, there are a lot of options for automation and orchestration. The choice will generally fall to what sort of services you want to provide and to what part of the business. If you’re providing an internal service within IT as a function, then you may not need self-service portals that would be better suited to end users. If you’re providing resources on-demand for developers, then you may want to provide API access for consumption.


Whatever tools you choose, make sure that they fit with the people and skills you already have. Building better data centres comes from understanding the processes and putting them into code. Having to learn too much all at once detracts from that effort.


When I started working on automating FlexPod deployments, the tool of choice was PowerShell. The vendors already had modules available to interact with the key components, and both myself and others working on the project had a background in using it. It may not be the choice for everyone, and it may seem basic, but the result was a working solution, and if need be, it could evolve in the future.


For private cloud deployments, I’ve worked heavily with the vRealize suite of products. This was a natural choice at the time due to the size of the VMware market and the types of customer environments. What worked well here was the extensible nature of the orchestrator behind the scenes, allowing integration into a whole range of areas like backup and disaster recovery, through to more modern offerings like Chef and Ansible. It was possible to create a single customer-facing portal with Day 2 workflows, providing automation across the entire infrastructure.


More recently, I’ve begun working with containers and orchestration platforms like Kubernetes. The technologies are different, but the goals are the same: getting the users the resources that they need as quickly as possible to accelerate business outcomes.


But Why?

You only have to look at the increasing popularity of Azure Stack or the announcement of AWS Outposts to see that the on-premises market is here to stay; what isn’t are the old ways of working. Budgets are shrinking, teams are expected to do more with less equipment and/or people, businesses are more competitive than ever, and if you aren’t being agile, a start-up company can easily come along and eat your dinner.


IT needs to be an enabler, not a cost centre. We in the industry all need to be doing our part to provide the best possible services to our customers, not necessarily external customers, but any consumers of the services that we provide.


If we choose the right building blocks and work on automation as well as defining great processes, then we can all provide a cloud-like consumption model. Along with this, choosing the right vendors to partner with will open a whole world of opportunity to build solutions for the future.


Next time I will be looking at how location and regulatory restrictions are driving hybrid IT. Thanks for reading!

My previous post provided a short introduction to this series.


In this post, I’m going to be focusing on public cloud. I’ll cover my day-to-day experiences in using the major hyperscalers, costs, how I think about managing them, and my reasons for adopting hybrid IT as an operational model.


By now, almost everyone reading this should have had some experience with using a public cloud service. Personally, I’m using public cloud services daily. Being an independent consultant, I run my own company and I want to be able to focus on my customers, not on running my own IT.


When setting up my business, it made perfect sense for me to utilise public offerings such as Office 365 to get my communication and business tools up and running. When I want to work on new services or train myself, I augment my home lab with resources within the public clouds. For these use cases, this makes perfect sense: SaaS products are cheap, reliable, and easy to set up. Short-lived virtual machines or services for development/testing/training are also cost effective when used the right way.


This works for me, but I’m a team of one. Does this experience translate well into other cases? That’s an interesting question, because it isn’t one-size-fits-all. I’ve worked with a wide range of customers over the years, and there are many different starting points for public cloud. The most important part of any cloud journey is understanding what tools to use in what locations. I’ve seen lift-and-shift style migrations to the cloud, use of cloud resources for discrete workloads like test/development, consumption of native services only, and every combination in between. Each of these has pros and cons, and there are areas of consideration that are sometimes overlooked when making these decisions.


I want to break down my experiences into the three areas where I’ve seen cost concerns arise, and how planning a hybrid IT approach can help mitigate these.


Virtual Machines

All public clouds offer many forms of virtual machines, ranging in cost, size, and capabilities. The provisioning model of the cloud makes these easy to consume and adopt, but this is a double-edged sword. There are several brilliant use cases for these machines. It could be that you have a short-term need for additional compute power to supplement your existing estate, or that you need extra resources for a round of testing. Other options include being able to utilise hardware that you wouldn’t traditionally own, such as GPU-enabled platforms.


When planned out correctly, these use cases make financial sense. It is a short-term need that can be fulfilled quickly and allows business agility; the cost vs. benefit is clear. On the flip side, leaving these services running long-term can start to spiral out of control. From my own test environment, I know that a running VM you forget about can run up bills very quickly. While my own environment and use cases are relatively small, bills into the hundreds of pounds (or dollars) per month for a couple of machines I had forgotten to shut down or destroy are common. Multiply that to enterprise scale, and the bill can become very difficult to justify, let alone manage. Early adopters who took a lift-and-shift approach to cloud without a clear plan to refactor applications are now hitting these concerns. The initial savings of moving expenditure from CapEx to OpEx masked the long-term impact to the business.
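The arithmetic behind that spiral is mundane. The hourly rate here is an assumption for the example; real rates vary by VM size and region:

```python
hourly_rate = 0.35       # assumed cost of a mid-size VM, per hour
hours_in_month = 730     # average hours in a month
vms_forgotten = 2

monthly_bill = hourly_rate * hours_in_month * vms_forgotten
print(f"{monthly_bill:.2f} per month")   # 511.00 per month for two idle VMs
```

Two forgotten machines at a modest rate already land in the hundreds per month; the same habit across hundreds of teams is how enterprise bills get away from people.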



Storage

The cloud offers easy access to many different types of storage. The price varies with the requirements of that storage: primary tier storage is generally the most expensive to consume, while archive tiers are the cheapest. Features of storage in the cloud are improving all the time and catching up to what we have come to expect from the enterprise storage systems we have used on-premises over the years, but often at additional cost.


Consuming primary tier storage for long-term usage quickly adds up; this goes hand in hand with the VM monitoring examples above. We can’t always plan for growth within our businesses; what starts off as small usage can quickly grow to multiple terabytes or petabytes. Managing this growth long-term is important for keeping costs low, and ensuring that only the required data is kept on the most expensive tiers is key. We’ve seen public examples where this kind of growth has required a rethink; the most recent that comes to mind is Dropbox. That might be an exception to the rule, but it highlights the need to be able to support data migrations, either between cloud services or back to on-premises systems.
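To put a number on "quickly": under an assumed compound growth rate (invented for this sketch), even a small dataset reaches petabyte scale within a few years:

```python
def months_until(start_tb: float, monthly_growth: float, target_tb: float) -> int:
    """Months of compound growth needed for a dataset to reach a target size."""
    tb, months = start_tb, 0
    while tb < target_tb:
        tb *= 1 + monthly_growth
        months += 1
    return months

print(months_until(1, 0.10, 1024))   # 1 TB growing 10% per month, to 1 PB
```

Growth compounds quietly, which is why tier management needs to be an ongoing process rather than a one-off exercise at migration time.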



Data Transfer

Getting data to the cloud is now a relatively easy task and, in most instances, incurs little to no cost. However, moving data within or between clouds, and in cases of repatriation, back to your own systems, does incur cost. In my experience, these charges are often overlooked. Sprawl within the cloud, adopting new features in different regions, or running a multi-region implementation can all increase traffic between services and the associated costs.


Starting with a good design and maintaining locality between services helps minimise this. Ensuring that the data an application needs sits as close to it as possible, and that as little traffic as possible crosses regions or clouds, should be considered from the very start of a cloud journey.
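A quick sketch of why locality matters; the flat per-GB egress rate below is an assumption for illustration, as actual pricing varies by provider, region, and volume tier:

```python
def egress_cost(gb_moved: float, rate_per_gb: float = 0.08) -> float:
    """Cost of moving data out of a cloud region at a flat, assumed rate."""
    return gb_moved * rate_per_gb

print(egress_cost(100))      # a chatty cross-region service, per month
print(egress_cost(10_240))   # one-off repatriation of a 10 TB dataset
```

Small cross-region chatter is tolerable; bulk movement, or repatriation of a large dataset, is where the charges become a line item worth designing around.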


Why a Hybrid IT Mindset?

With those examples in mind, why should we adopt a hybrid IT mindset? Having all the tools available to you within your toolbox allows for solutions to be designed that maximise the efficiencies and productivity of your business whilst avoiding growing costs. Keeping long-running services that are low on the refactor/replace priorities in on-premises infrastructure is often the most cost-effective method. Allowing the data generated by these services to take advantage of cloud-native technologies and systems that would be too costly to develop internally (such as artificial intelligence or machine learning) gives your business the best of both worlds. If you have short-term requirements for additional capacity or short-lived workloads like dev/test, giving your teams access to resources in the cloud can speed up productivity and even out spikes in demand throughout the year. The key is in assessing the lifetime of the workload and what the overall costs of the services consumed are.


Next time I’ll look at how we build better on-premises data centres to adopt cloud-like practices and consumption. Thank you for reading and I welcome any questions in the comments section.

Over the coming weeks, I will be writing a series of posts on my varied experiences with hybrid IT. These experiences come from countless projects over the last ten years in a consultancy role.


So, let’s start with a bit of background on me. I’m Jason Benedicic, an independent consultant based in Cambridge, U.K. I have been working in IT for the last twenty years, and the last ten of those have been in professional services consultancy across the U.K. My background is traditionally in infrastructure, covering all areas of on-premises solutions such as virtualisation, storage, networking, and backup.


A large part of this time has been spent working on converged infrastructure systems such as FlexPod. As the industry has changed and evolved, I have moved into additional skill sets, focusing on automation and infrastructure as code, and more recently working in cloud-native technologies and DevOps/CI/CD.


What is Hybrid IT?

The term hybrid IT isn’t exactly new. It has been used in various ways since the early days of public cloud. However, the messaging often changes, and the importance or even relevance of this operating model is regularly put into question.


For me it boils down to a simple proposition. Hybrid IT is all about using the right tools in the right locations to deliver the best business outcome, making use of cloud-native technologies where they fit the business requirements and cost models, and maintaining legacy applications across your existing estate.


Not all applications can be modernised in the required timeframes. Not every application will be suited to cloud. Hybrid IT focuses on flexibility and creating a consistent delivery across all platforms.


What will I be covering?

The series of posts will build on the above concepts and highlight my experiences in each of several areas.


Public Cloud Experiences, Costs, and Reasons for using Hybrid IT

This post will focus on my experiences in deploying services into public clouds, ranging from lift-and-shift type migrations to deploying cloud-native services like functions. I will look at where you make cost decisions and how you can assess what the long-term impact can be to your business.


Building Better On-Premises Data Centres for Hybrid IT

Here I will look at the way we can adopt cloud-like practices on-premises by modernising existing data centre solutions, refining process, and using automation. I will look at the various technologies I have used over the recent years, what the major vendors are doing in this space, and ways you can get the most from your on-premises infrastructure by providing a consistent consumption model for your business.


Location and Regulatory Restrictions Driving Hybrid IT

Constant change throughout the industry, governments, and regulating bodies requires us to continually evaluate how we deploy solutions and where we deploy them. I will look at recent changes such as GDPR, and at how uncertainties surrounding political events such as the U.K.’s exit from the European Union affect decisions. I will also cover how building a hybrid approach can ease the burden of these external factors on your business.


Choosing the Right Location for your Workload in a Hybrid and Multi-Cloud World

This post will look at how I help customers to assess and architect solutions across the wide landscape of options, including determining how best to use public cloud resources, looking at what use cases suit short-lived cloud services such as Development/Testing, and when to bring a solution back on-premises.


DevOps Tooling and CI/CD in a Hybrid and Multi-Cloud World

My final post will focus on my recent journey in making more use of DevOps tooling and Continuous Integration/Deployment pipelines. I will discuss the tools I have been using and experiences with deploying the multiple life-cycle stages of software into multiple locations.


I hope that you will find this series helpful and enjoy reading through my journey.
