
Geek Speak


As we head into the final post of this series, I want to thank you all for reading this far. For a recap of the other parts, please find links below:


Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT

Part 5 – Choosing the right location for your workload in a Hybrid IT world


To close out the series, I’d like to talk about my most recent journey in this space: using DevOps tooling and continuous integration/deployment (CI/CD) pipelines. A lot of this will come from my experiences using Azure DevOps, as I’m most familiar with that tool. However, there are a lot of alternatives out there, each with their own pros and cons depending on your business or your customers.


I’ve never been a traditional programmer/developer. I’ve adopted some skills over the years as that knowledge can bring benefit to many areas of IT. Being able to build infrastructure as code or create automation scripts has always served me well, long before the more common use cases evolved from public cloud consumption. I feel it’s an important skill for all IT professionals to have.


More recently, I’ve found that the relationship between traditional IT and developers is growing closer. IT departments need to provide tools and infrastructure to the business to speed development and get products out the door quicker. This is where the DevOps culture has come to the forefront. It’s no longer good enough to just develop a product and throw it over the fence to be managed. The systems we use and the platforms available to us mean that we must work together. To help this new culture, it’s important to have the right DevOps tools in place: good code management repositories, artifact repositories, container registries, and resource management tools like Kanban boards. These all play a role for developers and IT professionals. Bringing all this together into a CI/CD process, however, involves more than just tools. Processes and business practices may need to be adjusted as well.


I’m now working more in this space. It’s a natural extension of the automation space I previously worked in, and it overlaps quite nicely. Working with businesses to set up pipelines and gain velocity in development has taken me on a great journey. I won’t go into detail on the code side of this, as that’s something for a different blog. What’s important and relevant in hybrid IT environments is how these CI/CD processes integrate into the various environments. As I discussed in my previous post, choosing the right location for your workloads is important, and this carries over into these pipelines.


During the software development life cycle, there are stages you may need to go through. Unit, functional, integration, and user acceptance testing are commonplace. Deploying throughout these various stages means there will be different requirements for infrastructure and services. From a hybrid IT perspective, having the tools at hand to deploy to multiple locations of your choice is paramount. Short-lived environments can use cloud-hosted services such as hosted build agents and cloud compute. Medium-term environments that run in isolation can again be cloud-based. Longer-term environments or those that use legacy systems can be deployed on-premises. The toolchain gives you this flexibility.
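As a hedged sketch of the placement logic above (the stage names and rules here are illustrative assumptions, not from any particular toolchain), the stage-to-location decision might look like:

```python
# Sketch: choose a deployment location per SDLC stage, per the guidance above.
# Stage names and placement rules are illustrative assumptions, not a real API.

PLACEMENT = {
    "unit": "cloud-hosted build agent",         # short-lived
    "functional": "cloud compute",              # short-lived
    "integration": "cloud compute (isolated)",  # medium-term, runs in isolation
    "uat": "on-premises",                       # longer-term environments
}

def choose_location(stage: str, uses_legacy_systems: bool = False) -> str:
    """Pick where a pipeline stage should deploy its environment."""
    if uses_legacy_systems:
        return "on-premises"  # legacy dependencies stay in the data center
    return PLACEMENT.get(stage, "on-premises")

print(choose_location("unit"))                                  # cloud-hosted build agent
print(choose_location("functional", uses_legacy_systems=True))  # on-premises
```

In a real toolchain the same decision is usually expressed through agent pools and pipeline variables rather than code, but the flexibility is the same: each stage targets the location that suits its lifespan and dependencies.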


As I previously mentioned, I work mostly with Azure DevOps. Building pipelines here gives me an extensive set of options, as well as a vibrant marketplace of extensions built by the community and vendors. If I want to deploy code to Azure services, I can just call on Azure Resource Manager Templates to build an environment. If I include cloud-native services, I have richer plugins available to deploy configurations to things like API Management. When it comes to on-premises deployments, I can have DevOps agents deployed within my own data center, enabling build and deployment pipelines there. I can configure groups of deployment agents that connect me to my existing servers and services. There are options for me to deploy PowerShell scripts, call external APIs from private cloud management platforms like vRealize Automation, or hook into Terraform, Puppet, Chef, etc.


I can also hook these deployment processes into container orchestrators like Kubernetes, storing and deploying Helm charts or Docker Compose files. These are ideal opportunities for traditionally siloed teams to work together. Developers know how the application should work, and operations and infrastructure people know how they want the system to look. Pulling together code that describes how the infrastructure deploys, heals, upgrades, and retires needs input from all sides. When using these types of tools, you’re looking to achieve an end-to-end system of code build and deployment. Putting in place all the quality gates, deployment processes, and testing removes human error and speeds up business outcomes.


Outside of those traditional SDLC use cases, I’ve found these types of tools to be beneficial in my daily tasks as well. Automation and infrastructure-as-code work follows a similar process, and I maintain my own projects in this structure. I keep version-controlled copies of ARM templates, CloudFormation templates, Terraform code, and much more. The CI/CD process allows me to bring non-traditional elements into my own deployment cycles: testing infrastructure with Pester, checking security with AzSK, or just making sure I clean up my test environments when I’ve finished with them. From my experiences so far, there’s a lot for traditional infrastructure people to learn from software developers and vice versa. Bringing teams and processes together helps build better outcomes for all.
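Reliable cleanup of test environments is easy to forget, so one useful pattern is a context manager that guarantees teardown even when tests fail mid-run. A minimal sketch, where the provision/destroy calls are illustrative stand-ins rather than a real cloud SDK:

```python
import contextlib

# Sketch: guarantee test-environment cleanup with a context manager.
# provision()/destroy() are illustrative stand-ins, not a real cloud SDK.

INVENTORY = {}  # name -> state, simulating deployed environments

def provision(name: str) -> None:
    INVENTORY[name] = "running"

def destroy(name: str) -> None:
    INVENTORY.pop(name, None)

@contextlib.contextmanager
def ephemeral_environment(name: str):
    provision(name)
    try:
        yield name
    finally:
        destroy(name)  # always runs, even if a test raises

with ephemeral_environment("ci-test-env") as env:
    assert INVENTORY[env] == "running"  # run infrastructure tests here

assert "ci-test-env" not in INVENTORY   # torn down on exit
```

In a pipeline, the same guarantee usually comes from an "always run" cleanup stage, but the principle is identical: teardown should never depend on the tests passing.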


With that, we come to a close on this series. I want to thank everyone who took the time to read my posts and those who provided feedback or comments. I have enjoyed writing these and hope to share more opinions with you all in the future.

The Final Eight are upon us. It’s a little less clear this year whether any Cinderellas have made it into these final rounds as all of these superpowers seem a little Cinderella-y (WAY before she met her fairy godmother). And, most of these battles are a lot closer than we anticipated.


There’s certainly been some personal disappointment. (Just to reiterate...it’s PEE. In the pool you’re SWIMMING IN. Granted, it may not help you fight crime, but you can’t fight crime if you catch something gross. Think about it.)


We’re also struggling with what it means for us as a society when we believe 140 characters is all we need to get the gist of someone’s mind. But now isn’t the time for big thoughts and existential drama…


It’s time to announce who has advanced to the next round.


Bracket Battle 2019 - Round 2: USBz vs. Wormz USBz moves on to the next round with 61% of the votes

Bracket Battle 2019 - Round 2: 2ndz vs. zhceepS 2ndz moves on with 51%

Bracket Battle 2019 - Round 2: Binaryz vs. Cheez Binaryz moves on with 59%

Bracket Battle 2019 - Round 2: Tumblz vs. Slowz Tumblz moves on with 65%

Bracket Battle 2019 - Round 2: Tweetz vs. Sortz Tweetz moves on with 82%

Bracket Battle 2019 - Round 2: Legoz vs. Hovrz Hovrz moves on with 59%

Bracket Battle 2019 - Round 2: Giftz vs. Veggz Giftz moves on with 62%

Bracket Battle 2019 - Round 2: Sqrlz vs. Dropz Dropz moves on with 56%




Bracket Schedule:

  • Bracket release is March 18
    • Voting for each round will begin at 10 a.m. CT
    • Voting for each round will close at 11:59 p.m. CT on the date listed on the Bracket Battle home page
  • Play-in battle opens TODAY, March 18
  • Round 1 OPENS March 20
  • Round 2 OPENS March 25
  • Round 3 OPENS March 28
  • Round 4 OPENS April 1
  • Round 5 OPENS April 4
  • The Most USEFUL Useless Superpower will be announced April 10

I'm home from Redmond and the annual Microsoft MVP Summit. Attending Summit is one big #sqlfamily reunion for me. It was great seeing the familiar faces that help Microsoft deliver the best data platform tools, products, and services in the world.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


The U.S. government plans to drop $500M on a ridiculously powerful supercomputer

And it still won't be able to have more than 6 tabs open in Google Chrome.


Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

I'm starting to think that maybe Facebook isn't so good at the privacy thing.


Cyber Attack Puts a Spotlight on Fragile Global Supply Chain

Good reminder that as your systems become more complex you need to have a solid business continuity plan. Otherwise, when disruptions hit, you could find yourself out of business.


They didn’t buy the DLC: feature that could’ve prevented 737 crashes was sold as an option

I cannot understand how safety was allowed to be optional.


Study confirms AT&T’s fake 5G E network is no faster than Verizon, T-Mobile or Sprint 4G

It's almost as if AT&T were comfortable with the idea of lying to customers.


MySpace has lost all the music users uploaded between 2003 and 2015

Losing 12 years’ worth of data is the second most shocking part of this article. The first being, of course, that MySpace still exists.


Repeated Giving Feels Good

Having an “attitude of gratitude” is a great way to change your mood, and quick.


I don't care what that brogrammer on Reddit says, the answer is still "no":


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article on containers, which are growing in importance to our government customers. Containers play an important role in scaling cloud applications and they need to be monitored.


Hybrid IT and cloud computing are completely changing the computing landscape, from ownership and payment methods to the way applications are developed. The perfect example of this is the seemingly overnight adoption of containers. According to the 2018 SolarWinds IT Trends Report, nearly half of the public sector IT pros who responded ranked containers as the most important technology priority today.


This ranking comes with good reason. Containers simplify the application development process and, in particular, provide a critical level of application portability and agility within a hybrid or cloud environment.


All this said, containers are still relatively new. It’s important for every federal IT pro to understand containers and their potential value in helping to enhance an agency’s computing efficiency as the agency moves to a hybrid IT/cloud environment.


What is a container?


When applications are developed, they’re built within a development or staging environment, either on a virtual machine in the cloud or on a developer’s laptop. The challenge is that the environment where the application will ultimately run may be different from the environment where the application was developed. This has been an application development challenge for years.


Enter the container. A container is a complete environment—the application and all the technological dependencies it needs to run (libraries, binaries, configuration files, etc.), all in a single container. This means the application can run anywhere; the underlying infrastructure becomes irrelevant since the application already has everything it needs to get its job done.
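To make this concrete, here is a minimal sketch of a container definition for a hypothetical Python app (the base image, file names, and entry point are illustrative assumptions, not from the article):

```dockerfile
# Illustrative sketch only: a container bundles the app plus everything it
# needs to run, so the underlying infrastructure no longer matters.

# Base layer: the language runtime the app depends on
FROM python:3.9-slim
WORKDIR /app

# Install the app's library dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application code and configuration files
COPY . .

# The same entry point runs identically on a laptop, a VM, or a cluster
CMD ["python", "app.py"]
```

Everything listed in the file travels with the image, which is exactly why the environment where the container runs becomes irrelevant.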


Why are containers important?


First and foremost, containers can enhance the adoption of hybrid and cloud environments as they make applications completely portable. That’s invaluable as agencies look to make the cloud migration process as easy as possible.


Containers can also help agencies meet additional challenges introduced by hybrid IT/cloud environments. According to the 2018 SolarWinds IT Trends Report, those challenges include:


  • Environments that are not optimized for peak performance (46% of respondents)
  • Significant time spent reactively maintaining and troubleshooting IT environments (45% of respondents)


There is another forward-looking advantage of containers. The accelerated development cycle enabled by containers can help open the way to implement automated systems and technologies, further streamlining agency dev cycles.


According to the report, while hybrid IT/cloud ranked highest among the top five most important technologies to their organization’s IT strategies, automation was ranked second in the category of “most important technologies needed for digital transformation over the next three to five years.”


In a nutshell: containers can help agencies more effectively move to hybrid IT/cloud environments, which can then help agencies more effectively incorporate automation—which has the potential to change the game completely.


Find the full article on Government Technology Insider.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

I am so excited about this year’s Bracket Battle that I jumped on a plane yesterday to come to Vegas. I hear this is the place to be for the first weekend of the battle… a place to actually get paid for making the right predictions.


Turns out, the bookmakers aren’t very well-versed in our game and there is some OTHER bracket that everyone here is betting on. Go figure. It certainly doesn’t seem to be as interesting or thought-provoking as ours. But I guess that everyone has their “thing.”


Before we get into the first-round results, let me take a quick moment to answer a lingering question. There has been a bit of debate about the objective of this year’s battle: what EXACTLY are we voting for here? We understand. We debated it mightily as we brainstormed and plotted. Here is where we landed: we are looking for the BEST of the worst (read: useless) superpowers. Between any two given match-ups, which superpower would you prefer to have for its relative utility? (Which might prove more useful than the other?)


Based on the results, it looks like y’all got the hang of it…mostly. I can’t quite account for why the masses think that knowing the exact amount of pee in a pool is less useful than always coming in second. It’s PEE… in the POOL… that you are swimming in. Are you planning to melt down all of those second-place medals into some kind of weapon?


Welp, like I said… I guess everyone has their “thing.”


Here is where we stand as we head into Round 2:


Play-in Round Result: Bracket Battle 2019: Ticklz vs. Tweetz

TWEETZ comes out on top with 74% of the votes—a solid victory. The majority certainly believe that being able to read minds in any capacity is pretty helpful, even if it is just 140 characters.


Round 1 Results:

These rounds proved to be, on the whole, a lot closer than the play-in, with just a few exceptions:


Bracket Battle 2019: Dwaynez vs. Squirrlz  – It turns out the ability to go full "The Rock" with your eyebrows is nothing compared to an army of empathetic squirrels, as Squirrlz edges out Dwaynez with 73%.


Bracket Battle 2019: Giftz vs. Qwertyz – Randomly assigning useless superpowers to create a useless troupe of super pals who CAN remember your current password appears to outweigh the ability to never forget your old password. Giftz takes down Qwertyz with 61% of the vote.


Bracket Battle 2019: 2ndz vs. Peez – We had some schemers with entrepreneurial spirits insightfully point out that a number of second-place cash prizes can build you quite the smaller competitor to Wayne Enterprises. Enjoy your second-fastest Bat Mobile, Robin! Peez is defeated by 2ndz, which garnered 60% of the votes.


Bracket Battle 2019: Wormz vs. Tastez – What happens when worm ESP and squirrel empathy clash? We may find out later in the season! Sounds like more people would rather taste-test things for poison than originally thought. Wormz took 64% of the votes.


Bracket Battle 2019: USBz vs. Opaquez – Maybe this clash would have been less even before the release of USB Type-C, as 48% of THWACK® apparently still prefer to turn invisible when nobody is looking. Don't you remember the pain and agony of thinking, "maybe I'll rotate it," before realizing you had it right the first time? Gen Z will never understand...


Bracket Battle 2019: Blurz vs. Hovrz – Jumping short heights can be appealing to anyone, unless you're watching a horror movie. WHAT THEN, THWACK? WHAT THEN?? Blurred vision loses to short hovering, which took 72% of the votes.


Bracket Battle 2019: Sortz vs. Picz – This one has us curious as to whether there's an underground gambling ring based on our annual Bracket Battles. 65% of THWACK chose the ability to accurately rate the uselessness of a superpower (Sortz) over taking a great license and passport photo. Too bad you can't take down a villain by looking goooooooood.


Bracket Battle 2019: Cheez vs. Whipez – Really? Only 54% of you want to take down your lactose-intolerant enemies with your epic lactokinesis? Maybe too many of us are sick of trying to clean whiteboards. Think of the possible plans you could possibly write on a potentially clean whiteboard!


Bracket Battle 2019: Toastz vs. Tumblz – You don't know fear until you've been chased through the desert by a rogue tumbleweed. Tumblz takes down foot toast (Pedicrunch?) with 62% of THWACK votes. I guess we'll never find a good use for our toe jam.


Bracket Battle 2019: Binaryz vs. Yumz – It's possible we could've put black licorice up against anything and it would still lose. Our fellow geek brethren decided speaking in binary tastes much better on the tongue. 1000100% of the vote.


Bracket Battle 2019: zhceepS vs. Loadz – We're fighting the urge to type this whole summary backwards as zhceepS wins with 60%. Maybe it's because we already know all load times are a SHAM, MADE UP by whichever EVIL coders decided to give us FALSE HOPE about THE END being near. We're not mad.


Bracket Battle 2019: Foldz vs. Dropz – Everyone's days of paper airplanes are over. 'Nuff said. Dropz wins with 62%.


Bracket Battle 2019: Callz vs. Veggz – Whoever doubts the power of vegetables has never seen this epic video of a vegetable orchestra. Go ahead. You know you want to watch it. Veggz (70%) beats Callz.


Bracket Battle 2019: Critz vs. Legoz – In a battle of entertainment, Legos appear to still be relevant. Fans of the Swedish foot-killers take down critical misses with a narrow six-percent margin.


Bracket Battle 2019: Tweetz vs. Forgetz – Remember round one? Forgetz doesn't. Reading the first 140 characters of people's thoughts is preferred by 71% of THWACK.


Bracket Battle 2019: Loopz vs. Slowz – Ahhh. There's nothing like saving the slowest for last. The tortoise takes down a perfect belt-looping. Slowz – 57%.



Curious as to what other useless superpowers you were surprised to see missing from our competition this year? Let us know below.


It’s time to check out the updated bracket and start voting in Round 2! We need your help and input as we get one step closer to crowning the ultimate legend!


Access the bracket and make your Round 2 picks HERE>>


Cloud Anxiety

Posted by michael stump Expert Mar 25, 2019

When electric vehicles leapt out from the shadows and into the light of day, pundits and contrarians latched on to a perceived fatal flaw in their design: a limited range of travel on a single charge. It began in earnest with the Chevy Volt, which initially offered around 40 miles of range when operating in all-electric mode.


“Where can you go on 40 miles of range?”

“You’ll run out and be stranded!”

“My Ford Excursion can drive 300 miles on a tank!” (Except it has a 44-gallon tank, which in the pricey days of the mid-2000s worked out to a cool $160 a fill-up.)


EV makers persisted, and today we have EVs with ranges equal to or greater than their gas-powered predecessors. And if your trip takes you beyond the range of a single charge, there’s a growing network of charging stations to get you where you’re going.


Range anxiety is dead. Right? Maybe not, as you’ll still find columns devoted to the topic on autoblogs, especially those that cozy up to the ICE makers of the world. But the debate is over; only tribalism keeps it alive.


Sound familiar? It should. The same anxiety surrounds cloud computing.


“Cloud is just someone else’s computer!”

“You lose control over everything in the cloud!”

“My mainframe has been doing just fine for the last 30 years!”


In the meantime, cloud service providers kept innovating and improving upon their services. They engaged in an ambitious public relations campaign to educate IT professionals on how to manage cloud solutions. And they won major contracts in the federal IT space for cloud that solidified their ability to deliver massive, secure services to sensitive customers like the CIA.


The debate is over, but cloud anxiety lingers. Why?


It’s easy to throw out words like tribalism and intransigence and stubbornness when talking about change-related anxiety. But these terms serve only to push us to our pre-selected corner; these are not words of unification, they’re words of division.


The reason for any change-related anxiety is evolutionary: humans prefer the known over the unknown. Once you’ve acquired a decade or more of experience managing on-premises solutions, your comfort is in continuing to do what you’ve been doing. The thought of drastic change, such as changing your delivery model to hybrid or bi-modal IT, represents the unknown. And when faced with that uncertainty, we start creating reasons not to embrace change. This is where all the “I think the old way’s better” thinking comes into play.


It boils down to this: people are afraid of change until they understand how that change will affect them on a personal level.


IT operations staff have been led to believe that their profession is about to be made obsolete by cloud. DBAs, vSphere admins, and hardware geeks have heard a thousand times: your job won’t be around in n years’ time, where n is somewhere between two and six. The result is that these IT workers develop a general sense of uneasiness about cloud computing, and for good reason: they believe it’s here to take their jobs away.


But let's switch back to the EV analogy for a moment. Now that every car manufacturer has announced plans to electrify their line-ups in the coming years, the grease monkeys of the world have a choice: either embrace the incoming influx of EV service needs and prepare to fix a new class of vehicle, or stick with their current skillset and spend the rest of their careers supporting legacy vehicles that run on gas.


The same holds true for IT: cloud is coming. The question is, will you embrace that eventuality and prepare yourself to support a cloud-based future, or will you stick with the comfort of your on-prem IT and support legacy for the remainder of your career?


So, THWACK community: what will you do?

Back in January, I had the opportunity to sit in on a stop of the Bob Woodward speaking tour. (Bob Woodward of: Woodward/Bernstein/Washington Post/Watergate/Nixon.) Mr. Woodward ended his speaking hour with a little story on the biggest lesson he learned from the whole Watergate experience. Bob told the audience how he learned of President Nixon’s resignation, and how, only a few months later, he learned that Nixon’s successor, President Ford, had pardoned him.


Both Bob and his partner, Carl Bernstein, were crestfallen. They had sacrificed years of their lives investigating the biggest scandal in U.S. history, ultimately bringing down a sitting president. They both believed that more backroom politicking had occurred, and now Nixon was going to walk free in return for handing Ford the presidency.


Years passed, and Bob interviewed and spoke with Ford several more times. Right before Ford’s death some 30 years after Watergate, Ford revealed to Bob that Bob had never asked him if Nixon’s men had approached him prior to the resignation about a pardon in exchange for the presidency. (Bob explained to the audience that he assumed this was the case.) Ford admitted that they had. Bob exclaimed in his head, “Aha! I knew it! Vindication!” Ford finished his sentence, “I turned them down. I knew Nixon was finished as president. They were negotiating a losing hand.”


As for why Ford pardoned Nixon, I’ll let you research that for yourselves. It’s a fascinating piece of U.S. history and a textbook case of political suicide. So… why am I bringing all of this up in a THWACK blog? Here’s why. If there is ever an example of an SME, or subject matter expert, it’s Bob Woodward and the Watergate scandal. He and Carl Bernstein lived, breathed, ate, and slept Watergate for over 30 years, long after it was over. And as for why he assumed Ford had pardoned Nixon as part of a deal, his knowledge and expertise were his undoing. His assumptions and suspicions poisoned his critical thinking skills for 30 years. Bob spent those 30 years convinced that Nixon’s team and Ford were in cahoots on some level.


I’m surrounded by some very talented IT professionals and have been for the past 27 years. For the past 20 years, ever since I gave up the technical path, I’ve led these technical professionals as a manager, in incident response, in war rooms, and on projects. I’ve seen countless examples where the best and smartest experts and engineers were firmly convinced that solution A, or “this” path to resolution, was the right answer: so convinced of their decision that, whether out of professional reputation or stubbornness, they remained adamant even when there were clues stating otherwise.


As a leader, this can be challenging. Pray that you never find yourself in the situation where you have two or more experts with differing, passionate opinions and are expected to make the decisions on whose opinion to follow. So, what are you to do in these tense situations? How do you work with a “Bob Woodward”-type personality?


When it comes to defusing situations like these and (hopefully) gaining consensus, I’ve got some tips I’ve used throughout my career that can help ease heated discussions among headstrong personalities.

  • Food! The tighter the tension, the better the food should be. Good food is perhaps the best tension breaker. If the hours are long, or you’re working on weekends or on a very tight deadline with high visibility, don’t order the team pizza (unless they all agree on pizza). Splurge a little and bring in catering, with the nice thick paper plates, fancy plastic silverware, and absorbent napkins. The team will feel appreciated and much more refreshed. They will convene with a fresh perspective, and you’ll come out being the good guy. A tough decision will land softer.
  • Changing subjects, even to the point of talking about something that isn’t related to the task at hand. How’s the weather? How’s that local sports team performing? Who saw that show last night? Who’s following the latest trends on Twitter? Changing the topic to get people to break their concentration and hard-set thought patterns is a sensible approach when the team is in a rut. As a leader, keep an eye out for those who keep their noses in their laptops and don’t engage in the lighthearted banter. They either refuse to budge on their opinions or are onto something that could endanger the collaborative atmosphere.
  • Taking breaks, even if it’s a simple adjournment for 15 minutes. Surprisingly, this is often neglected in war rooms and incident response teams. Keep in mind that you don’t have to dismiss everyone at once; you can stagger the breaks in intervals. Once again, keep an eye out for those who refuse to leave. We leaders refer to them as “martyrs.” Martyrs can be dangerous, as they tend to be the first to crack. They are the first to get snippy, overly opinionated, and uncooperative, and they can be morale killers. In the worst cases, they have the potential to become HR write-ups. And then you come out looking like the bad guy.
  • Play dumb! This is my personal favorite. I often refer to myself as the dumbest person in our IT (because most times it’s true). I use this to my advantage when I need to challenge an expert on his/her opinion. The expert then must break their opinion down to a level that I can understand. That allows me to ask “stupid” (aka “leading”) questions and manipulate the conversation to get the expert to try another way of thinking. During my 27 years in IT, I have learned that technology has changed, but human behavior is basically the same.

There are certainly other tactics to use, and I would love to hear your favorites as well as how and when you use them in the comments section below.


Also, given that many of us in the THWACK community are the technical experts, I must ask, have you ever found yourself to be so unwaveringly convinced of something IT related only to find out later that you were wrong? What was it? Did you ever admit you were wrong? Why or why not?


Finally, I'll leave you with a little bit of trivia on President Ford that you may not know. He is the only person to have served as vice president and U.S. president having never been elected to either office.

In my previous articles, I’ve looked at the challenges and decisions people face when undertaking digital transformation and transitioning to a hybrid cloud model of IT operation. Whether it’s using public cloud infrastructure or changing operations to leverage containers and microservices, we all know not everything can or even should move to the public cloud. There’s still a need to run applications in-house. Yet everyone still wants the benefits of the public cloud in our own data center. How do you go about addressing these private cloud needs?


Let’s take an example scenario. You’ve decided it's time to upgrade the company’s virtual machine estate and roll out the latest and greatest version of hypervisor software. You know that there are heaps of great new features in there that will make all the difference in your day-to-day operations, but that doesn’t mean anything to your board or to the finance director, who you need to win over to get a green-light to purchase. You need to present a proposal containing a set of measurables indicating when they are going to see a return on the money you’re asking them to release. In the immortal words of Dennis Hopper in Speed, “…what do you do? What do you do?”


First, you need a plan. The good news is, you can basically reuse the same one time and again. Change a few names here and a few metrics there, and you’ll have a winner straight out of the Bill Belichick of IT’s playbook.

The framework you should build a plan around has roughly nine sections you must address.


  • First, you need to outline the Scenario and the problems you’re facing.
  • This leads you to the Solution you’re proposing.
  • Then the Steps needed to get to this proposed solution.
  • Next, you would outline the Benefits of this solution.
  • And any Risks that might arise while transitioning and once up and running.
  • You would then summarize the Alternatives, including the fact that doing nothing will only exacerbate the issue.
  • After that, you want to profile the costs and compare them to the previous system for Cost Comparisons, detailing as much as possible on each and highlighting the TCO (Total Cost of Ownership). You may think you can finish here, but two important parts follow.
  • Highlight the KPIs (Key Performance Indicators).
  • Finally, the Timeline to implement the solution.


KPIs, or Key Performance Indicators, are a collection of statements related to the new piece of hardware, software, or even the whole system. You may say that you can reduce query time by five seconds during a normal day, or that you will reduce power consumption by 10 kWh per month. They have to be measurable and determinable via a value. You can’t say “it will be faster or better,” as you cannot quantify that. Your KPIs may also have a deadline or date associated with them, so you can definitively say whether there’s been a measured improvement or not.
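To make "measurable and determinable via a value" concrete, a KPI can be modeled as a baseline, a target, and a deadline, with a yes/no check against what was actually measured. A minimal sketch (the field names and figures are illustrative assumptions, not from any framework):

```python
from dataclasses import dataclass
from datetime import date

# Sketch: a KPI must be quantifiable, so model it as numbers plus a deadline.
# Field names and example figures are illustrative assumptions.

@dataclass
class KPI:
    description: str
    baseline: float
    target: float              # the value you committed to reach
    deadline: date
    lower_is_better: bool = True

    def met(self, measured: float, on: date) -> bool:
        """A KPI is either met or not: no 'it feels faster' allowed."""
        if on > self.deadline:
            return False
        if self.lower_is_better:
            return measured <= self.target
        return measured >= self.target

query_time = KPI("Reduce query time by 5 s", baseline=12.0, target=7.0,
                 deadline=date(2019, 12, 31))
print(query_time.met(6.4, on=date(2019, 11, 1)))   # True: quantified, on time
print(query_time.met(9.8, on=date(2019, 11, 1)))   # False: target missed
```

The point of the structure is that every claim in the proposal can later be answered with a definite yes or no, which is exactly what the finance director will ask for.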


Sometimes it can be hard or near impossible to pull some of the details in the plan together, but remember, the finance department will have any previous purchases of hardware, software, professional services, or contractors in a ledger. Hopefully you know the number of man-hours per month it takes to maintain the environment, along with the application downtime, profit lost to downtime, and so on. Sometimes there will be points that you need to put down to the best of your knowledge at the time. Moving forward, you’ll start to track the figures the board wants, showing a willingness to see things from their perspective, as the nuances of new versions of hardware and software can be lost on them.


Once you have a good understanding of the problem(s) you’re facing, you need to look at the possible solutions, which may mean a combination of demonstrations, proof of concepts (PoCs), evaluations, or try-and-buys for you to gain an insight into the technology available to solve the problem.


Next, it’s time to size up the solution. One of the hardest things to do when you adopt a new technology is to adequately size for the unexpected. Without properly understanding the requirements your solution needs to meet, how can you safely say the proposed new solution is going to fix the situation? I won’t go into how to size your solution as each vendor has a different method. The results are only as good as the figures you put in. You need to decide how many years you want to use the assets for, then figure out a rate of growth over that time. Don’t forget, updates and upgrades can affect the system during this timeframe, and you may need to take these into account.
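Every vendor’s sizing method differs, but the growth arithmetic underneath is the same. As a rough illustration (the footprint and growth rate below are assumptions, not any vendor’s figures), compounding growth over the life of the asset looks like this:

```python
def projected_capacity(current_tb: float, annual_growth: float, years: int) -> float:
    """Compound the current footprint by a yearly growth rate.

    e.g., 40 TB growing 20% per year for 5 years -> roughly 99.5 TB.
    """
    return current_tb * (1 + annual_growth) ** years

# Size for end-of-life, not day one, and leave headroom for upgrades.
needed = projected_capacity(40.0, 0.20, 5)
print(f"Plan for roughly {needed:.1f} TB")
```

The lesson hiding in the numbers: at 20% annual growth, the estate more than doubles over five years, so sizing for today’s load guarantees a mid-life crisis for the asset.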


Another known problem in the service provider space: you size a solution for 1,000 users, move 50 on to begin with, and sure enough, they get blazing speed and no contention. But as you approach 500, the original users start to notice that tasks take longer than when they first started using the new system. They’ll complain that they’re not getting the 10x speed they had when they first moved, even though you spec’d the system for a 5x improvement and they’re still seeing a 6-7x improvement over the legacy system. You want to try to avoid this. This pioneer syndrome needs some form of quality of service to prevent it arising: a “training wheels protocol,” if you will.


Now that you’ve identified your possible white knight, it’s time to do some due diligence and verify that it will indeed work in your environment, that it’s currently supported, and so on before going off half-cocked and purchasing something because you were wooed by the sales pitch or one-time only special end of quarter pricing.


I think too many people purchase new equipment and software and are in such a rush to use the shiny new toy that they forget some of the most important steps: benchmarking and baselining. I refer to benchmarking as understanding how a system performs when a known amount of load is put upon it. For example, when I have 50 virtual machines running, or 100 concurrent database queries, what happens when I add another 50 or 100? I monitor this increase and record the changes. Keep adding in known increments until you see the resources max out and adding any more has a degrading effect on the existing group.

Baselining is taking measurements once a system goes live to see what a normal day’s operation does to it. For example, you may have 500 people log on at 9 a.m., with peak load around 10:30. It then tails off until 2 p.m., when people start to come back from lunch, and spikes again around 5 p.m. as they clear their desks, finally dropping to a nominal load by 7 p.m. Only by having statistics on hand for how everything in the system performs during this typical day can we make accurate measurements. Setting tolerances will help you decide if there’s actually a problem and, if so, narrow the search when a user opens a support ticket. This process of baselining and benchmarking will ultimately help you determine SLA (service level agreement) response times and define items that are outside the system’s control.
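To illustrate the benchmarking half of that, here’s a sketch of an incremental load test that stops once per-unit throughput starts to degrade; the measure_throughput function is a stand-in for whatever your tooling actually reports:

```python
def find_knee(measure_throughput, start=50, step=50, limit=1000, tolerance=0.9):
    """Add load in known increments; stop when per-unit throughput
    falls below `tolerance` of the best value seen so far, and
    report the last load level that held up."""
    best_per_unit = 0.0
    load = start
    while load <= limit:
        per_unit = measure_throughput(load) / load
        best_per_unit = max(best_per_unit, per_unit)
        if per_unit < tolerance * best_per_unit:
            return load - step  # the previous increment was the knee
        load += step
    return limit

# Stand-in system: scales linearly to 400 units of load, then saturates.
fake_system = lambda n: min(n, 400) * 10
print("Degradation begins beyond:", find_knee(fake_system))
```

In a real benchmark, `measure_throughput` would spin up the extra VMs or fire the extra queries and record the result, but the stopping logic is the same: keep adding known increments until adding more hurts the existing group.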


The point is, you’ll need a standard of measurement and documentation; and like any good high school science experiment, you’re probably going to need a hypothesis, method, results, conclusion, and evaluation. What I’m trying to say is you need to understand what you are measuring and its effect on the environment. Yes, there are some variables everyone’s going to measure: CPU and memory utilization, network latency, and storage capacity growth. But your environment may require you to keep an eye on other variables, like web traffic or database queries, and knowing a good value from a bad is critical in management.


The system management tools you’ll use should be tried and tested. If you cannot receive the results you need easily with your current implementation, it may be time to look at something new. It may be something open-source or it may be a paid solution with some professional services and training to maximize this new investment. As long as you’re monitoring and recording statistics that control your environment, you should be in a great position to evaluate new hardware and software options.


You may have heard comments about the “cloud being a great place for speed and innovation,” which is truer now more than ever, given how quickly providers begin to monitor, and bill you for, your usage. I believe that to be a proper private cloud or the on-premises part of a hybrid cloud, you need to be able to monitor detailed usage growth and have the potential to show or charge departments for their IT usage. By monitoring hybrid cloud metrics, you can make a better-informed decision about moving applications to the cloud. As with any expenditure, you should also look at adding functionality that gives you cloud-like abilities on-premises. Maybe begin by implementing a strategy to show chargeback to different departments or line-of-business application owners. Start making the move from keeping the lights on to innovation, and have IT lead the drive to a competitive advantage in your industry.
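A chargeback model can start very simply. This sketch apportions a monthly platform cost by each department’s share of a single usage metric (the department names, rates, and figures are all illustrative):

```python
def chargeback(monthly_cost: float, usage_by_dept: dict) -> dict:
    """Split a platform's monthly cost by each department's share of
    one usage metric (VM-hours here, but any metered value works)."""
    total = sum(usage_by_dept.values())
    return {dept: round(monthly_cost * used / total, 2)
            for dept, used in usage_by_dept.items()}

# Illustrative month: $6,600 platform cost split across three departments.
usage = {"Finance": 1200, "Engineering": 4800, "HR": 600}  # VM-hours
print(chargeback(6600.0, usage))
```

Even a showback report like this (shown, not billed) starts the conversation about which workloads justify their cost, which is exactly the data you need before deciding what moves to the cloud.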


Moving from a data center to a private cloud isn’t as simple as changing the name on the door. It takes time and planning by implementing new processes to achieve goals and providing some form of automation and elasticity back to the business, along with ways to monitor and report trends. Like any problem, breaking it up into bite-size chunks not only gives you a sense of achievement, but also more control on the project going forward.


Whether you are moving to or even from the cloud, the above process can be applied. You need to understand the current usage, baselines, costs, and predicted growth rates, as well as any SLAs that are in place, then how this marries up with the transition to the new platform. It’s all well and good reaching for the New Kid on the Block when they come around telling you that their product will solve everything up to and possibly including “world hunger,” but let’s be realistic and make sure you’ve done your homework and you have a plan of attack. Predetermined KPIs and deliverables may seem like you’re adding shackles to the project, but it helps keep you focused on the goal and delivering back results to your board.


  Does investment equal business value? Spending money on the new shiny toy from Vendor X to replace aging infrastructure doesn’t always mean you’re going to improve things. It’s about what business challenges you’re trying to solve. Once you have set your sights on a challenge, it’s about determining what success looks like for the project; what KPIs and milestones you’re going to set and measure; what tools you’ll use; and how you can prove the value back to the board, so the next time you ask for money, it’s released more easily and quickly. 

I’m in sunny Redmond this week for the annual Microsoft MVP Summit. This is my 10th Summit, and I’m just as excited for this one as if it were my first.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Massive Admissions Scandal

Having worked in higher education for a handful of years, none of this information is shocking to me. The most shocking part is that any official investigation was done, and indictments handed down. I truly hope this marks a turning point for our college education system.


Philadelphia just banned cashless stores. Will other cities follow?

One of the key benefits touted by cryptocurrency advocates is “access for everyone.” Well, the City of Brotherly Love disagrees with that viewpoint. The poor have cash, but not phones. Good for Philly to take a stand and make certain stores remain available to everyone.


Facebook explains its worst outage as 3 million users head to Telegram

A bad server configuration? That’s the explanation? Hey, Facebook, if you need help tracking server configurations, let me know. I can help you be back up and running in less than 15 hours.


Your old router is an absolute goldmine for troublesome hackers

Included for those who need reminding: don’t use the default settings for the hardware you buy off the shelf (or the internet). I don’t understand why there isn’t a standard setup that forces a user to choose a new password; it seems like an easy step to help end users stay safe.


Why CAPTCHAs have gotten so difficult

Because computers are getting better at pretending to be human. You would think that computers would also be better at recognizing a robot when they see one, and not bother me with trying to pick out all the images that contain a signpost or whatever silly thing CAPTCHAs are doing this week.


The Need for Post-Capitalism

Buried deep in this post is the comment about a future currency based upon social capital. You may recall that China is currently using such a system. Last week, I thought that was crazy. This week, I think that maybe China is cutting edge, and trying to help get humanity to a better place.


The World Wide Web at 30: We got the free and open internet we deserve

“We were promised instant access to the whole of humanity's knowledge. What we got was fake news in our Facebook feeds.” Yep, that about sums up 30 years of the web.


Ever feel as if you are the dumbest person in the room? That's me at MVP Summit for 4 days:


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s a helpful article on how to achieve your cloud-first objectives for cloud-based email. Email is a critical application for government, and there are several suggestions for improving reliability.


The U.K. government’s Cloud First policy mandates the need to prioritize cloud-based options in all purchasing decisions—and email providers are no exception to this rule. The rationale is clear: to deliver “better value for money.” Cloud-based email can help with this—offering huge operational benefits, especially considering the sheer number of users and the broad geographical footprint of the public sector. It can also be much simpler and cheaper to secure and manage than on-prem email servers.


However, while email services based in the cloud can offer a number of advantages, such services also pose some unique challenges. IT managers in the public sector must track their email applications carefully to help ensure cloud-based email platforms remain reliable, accessible, and responsive. In addition, it’s important to monitor continuously for threats and vulnerabilities.


Unfortunately, even the major cloud-based email providers have had performance problems. Microsoft Office 365, a preferred supplier with whom the U.K. government has secured a preferential pricing deal, has been subject to service outages in Europe and in the United States, as recently as late last year.


Fortunately, many agencies are already actively monitoring cloud environments. In a recent SolarWinds FOI request, 68% of NHS and 76% of central government organisations reported having migrated some applications to the cloud and using monitoring tools to oversee them. Although monitoring in the cloud can be daunting, organisations can apply many of the best practices used on-prem to the cloud (and often even use the same tools) as part of a cloud email strategy that can help ensure a high level of performance and reliability.


Gain visibility into email performance


Many of the same hiccups that affect the performance of other applications can be equally disruptive to email services. Issues including network latency and bandwidth constraints, for example, can directly influence the speed at which email is sent and delivered.


Clear visibility into key performance metrics on the operations of cloud-based email platforms is a must for administrators. They need to be able to proactively monitor email usage throughout the organisation, including the number of users on the systems, users who are running over their respective email quotas, archived and inactive mailboxes, and more.
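As a sketch of that kind of proactive check, here’s how flagging users approaching their quotas might look; the mailbox data structure is an assumption for illustration, and in practice the records would come from your monitoring tool or mail platform’s reporting API:

```python
def over_quota(mailboxes, threshold=0.9):
    """Return (user, fill_ratio) pairs for mailboxes at or above the
    given fraction of their quota, worst offenders first."""
    flagged = [(m["user"], m["used_mb"] / m["quota_mb"])
               for m in mailboxes
               if m["used_mb"] / m["quota_mb"] >= threshold]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

# Illustrative data; real figures would be pulled from the email platform.
mailboxes = [
    {"user": "alice", "used_mb": 4800, "quota_mb": 5000},
    {"user": "bob",   "used_mb": 1200, "quota_mb": 5000},
    {"user": "carol", "used_mb": 4950, "quota_mb": 5000},
]
for user, ratio in over_quota(mailboxes):
    print(f"{user}: {ratio:.0%} of quota")
```

Running a report like this on a schedule turns quota problems from help-desk tickets into warnings administrators see first.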


When working across both a cloud-based email platform and an on-prem server, in an ideal world, administrators should set up an environment that allows them to get a complete picture across both. Currently, however, many U.K. public sector entities are using four or more monitoring tools—as is the case for 48% of the NHS and 53% of central government, according to recent SolarWinds FOI research. This highlights a potential disconnect between different existing monitoring tools.


Monitor mail paths


When email performance falters, it can be difficult to tell whether the fault lies in the application or the network. This challenge is often exacerbated when the application resides in the cloud, which can limit an administrator’s view of issues that might be affecting the application.


By using application path monitoring, administrators can gain visibility into the performance of email applications, especially those that reside in a hosted environment. By monitoring the “hops,” or transfers between computers, that requests take to and from email servers, administrators can build a better picture of current service quality and identify any factors that may be inhibiting email performance. In a job where time is scarce, this visibility can help administrators troubleshoot problems without the additional hassle of determining if the application or network is the source of the problem.
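To illustrate the idea, here’s a sketch that takes cumulative round-trip times from a traceroute-style probe and points at the link adding the most latency; the hop names and RTT figures below are made up:

```python
def slowest_link(hops):
    """Given (hop_name, cumulative_rtt_ms) pairs in path order,
    return the hop whose link added the largest latency increase."""
    worst_name, worst_delta = None, 0.0
    prev_rtt = 0.0
    for name, rtt in hops:
        delta = rtt - prev_rtt   # latency added by this single link
        if delta > worst_delta:
            worst_name, worst_delta = name, delta
        prev_rtt = rtt
    return worst_name, worst_delta

# Hypothetical path from an office to a hosted mail service.
path = [("office-gw", 1.2), ("isp-edge", 9.8),
        ("cloud-edge", 48.3), ("mail-frontend", 51.0)]
print(slowest_link(path))
```

In this made-up path, the jump into the cloud edge dominates, which immediately tells the administrator the network, not the email application, deserves the first look.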


By applying existing standard network monitoring solutions and strategies to email platforms, administrators can gain better insight into the performance of cloud email servers. This will help keep communications online and running smoothly.


Find the full article on GovTech Leaders.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

It’s the most wonderful time of year. The time to come together as a community and – through pure opinion and preference – crown someone (or something) VICTORIOUS.


SolarWinds Bracket Battle 2019 is here. And, for its 7th anniversary, have we got something truly thought-provoking, debate-inducing, and oddly… meta for you.


The Premise:

It’s a well-known and well-understood fact that not all superpowers are created equal. Even amongst those blessed with supernatural or mutant abilities, someone’s going to end up with the fuzzy end of the lollipop.


And so we ask: If one were to end up in the shallow end of the superpowers pool, would it be better to just be normal? Or, is having ANY power – even something as random and seemingly worthless as the ability to turn into a bouncing ball (Bouncing Boy, DC Comics, 1961) or to turn music into light (Dazzler, Marvel, 1980) – better than nothing at all?


If the chance to don a super suit, hang at the mutant mansion, or chill with the League is worth enduring an endless stream of ridicule and continuous side-eye, then what IS the ULTIMATE USELESS SUPERPOWER?


Would you rather have the chance to parley with some tree rodents or parlez-vous binary? Superhuman strength for a millisecond or lactokinesis (which is basically milk mind-control)?


These are just some of the real head-scratchers we have in store for you in this year’s SolarWinds Bracket Battle.


Starting today, 33 of the most random and useless superpowers that we could imagine will battle it out until only one remains and reigns supreme as the BEST of the WORST.


We picked the starting point and initial match-ups; however, just like in Bracket Battles of the past, it’ll be up to the THWACK community to decide the winner.


Don’t Forget to Submit Your Bracket: If you correctly guess the final four bracket contestants, we’ll gift you a sweet 1,000 THWACK points. To do this, you’ll need to go to the personal bracket page and select your pick for each category. Points will be awarded after the final four contestants are revealed.


Bracket Battle Rules:


Match-Up Analysis:

  • For each useless superpower match-up, we’ve provided short descriptions of each power. Just hover over or click each one to decipher what it does before you vote.
  • Unlike in past years, there really isn’t an analysis of how the match-up would work because – well, let’s just say we want your imaginations to run wild. (Quite frankly, these are all pretty useless to begin with, it’s hard to really grasp how you'd make some of these abilities work to your advantage.)
  • Anyone can view the bracket and match-ups, but in order to vote or comment, you must have a THWACK® account.



  • Again, you must be logged in to vote and trash talk
  • You may vote ONCE for each match-up
  • Once you vote on a match-up, click the link to return to the bracket and vote on the next match-up in the series



  • Please feel free to campaign for your preferred form of uselessness, or debate the relative usefulness of any entry (also, feel free to post pictures of bracket predictions on social media)
  • To join the conversation on social media, use the hashtag #SWBracketBattle
  • There’s a printable PDF version of the bracket available, so you can track the progress of your favorite picks



  • Bracket release is TODAY, March 18
  • Voting for each round will begin at 10 a.m. CT
  • Voting for each round will close at 11:59 p.m. CT on the date listed on the Bracket Battle home page
  • Play-in battle opens TODAY, March 18
  • Round 1 OPENS March 20
  • Round 2 OPENS March 25
  • Round 3 OPENS March 28
  • Round 4 OPENS April 1
  • Round 5 OPENS April 4
  • The Most USEFUL Useless Superpower will be announced April 10


If you have any other questions, please feel free to comment below and we’ll be sure to get back to you!


What power is slightly better than nothing at all? We’ll let the votes decide!


Access the Bracket Battle overview HERE>>

Migration to the cloud is just like a house move: the amount of preparation done beforehand determines how smoothly the move goes. Similarly, there are many technical and non-technical actions you can take to make a move to the cloud calm and successful.


Most actions (or decisions) will be cloud-agnostic and can be carried out at any time, but once a platform is chosen, it unlocks even more areas where preparations can be made in advance.


In no particular order of importance, let’s look at some of these preparations.


Workload Suitability

Some workloads may not be suitable for migration to the cloud. For example, compliance, performance, or latency requirements might force some workloads to stay on-premises. Even if a company adopts a “cloud-first” strategy, such workloads could force a move to a hybrid cloud model.


If not done initially, identification of such workloads should be carried out as soon as the decision on the platform is made so the design can cater for them right from the start.


Hybrid IT

Most cloud environments are a hybrid of private and public cloud platforms. Depending on the size of the organization, it’s common to arrange for multiple high-speed links from the on-premises environment to the chosen platform.


However, those links are quite cumbersome to set up, as many different parties are involved, such as the provider, carrier, networking, and cloud teams. Availability of ports and bandwidth can also be a challenge.


Suffice it to say that lead times for the end-to-end commissioning process typically range from a few weeks to a few months. For that reason, it’s recommended to prioritize identifying whether such link(s) will be required, and to which data centers, and to get the commissioning process started early.


Migration Order

This is an interesting one as many answers exist and all of them are correct. It really depends on the organization and maturity level of the applications involved.


For an organization where identical but isolated development environments exist, it’s generally preferred to migrate those first. However, you may find exceptions in cases where deployment pipelines are implemented.


It’s important to keep stakeholders fully involved in this process, not only because they understand the application best and can foresee potential issues, but also so they’re aware of the migration schedule and of what constitutes a reliable test before signing off.


Move First, Improve Later

Most organizations like to move their applications to the cloud first and improve them later. This is especially true if there’s a deadline and migration is imminent. It makes sense to divide the whole task into two clear and manageable phases, as long as the improvement part isn’t forgotten.


That said, the thinking process on how to refactor existing applications post-migration can start now. There are some universal concepts for public cloud infrastructure like autoscaling, decoupling, statelessness, etc., but there will be some specific to the chosen cloud platform.


Such thinking automatically forces the teams to consider potential issues that might occur and therefore provides enough time to mitigate them.


Operations and Support Readiness

Operations and support teams are extremely important in the early days of migration, so they should be comfortable with all the processes and escalation paths if things don’t go as planned. However, it’s common to see organizations force rollouts as soon as initial testing is done (to meet deadlines) before those teams are ready.


This can only cause chaos and result in a less-than-ideal migration journey for everyone involved. A way to ensure readiness is to do dry runs with a few select low-impact test environments, driven by the operations and support team solving deliberately created issues. The core migration team should be contactable but not involved at all.


Migration should only take place once both teams are comfortable with all processes and the escalation paths are known to everyone.


Training

The importance of training cannot be emphasized enough, and it’s not just about technical training for the products involved. One often-forgotten exercise is to train staff outside the core application team, e.g., operations and support, on the applications being migrated.


There can be many post-migration factors to consider that make it necessary to provide training on applications, such as application behavior changes, deployment mechanism changes, security profile, and data paths.


Training on the technologies involved can start as early as the platform decision. Application-specific training should occur as soon as an application is ready for migration, but before the dry runs. Both combined will stand the teams in good stead when migration day comes.


Conclusion

Preparation is key for a significant task like cloud migration. With a bit of thought, many things can be identified that are not dependent on platform choice or the migration and can therefore be taken care of well in advance.


A successful cloud migration often comes down to how many factors are in play. Reducing the number of outstanding tasks can mean less stress for everyone. It pays to be prepared. As Benjamin Franklin put it:

  “By failing to prepare, you are preparing to fail.” 

If you’re a returning reader to my series, thank you for listening this far. We have a couple more posts in store for you. If you’re a new visitor, you can find previous posts below:

Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT


In this post, I’ll be looking at how I help my customers assess and architect solutions across the options available throughout on-premises solutions and the major public cloud offerings. I’ll look at how best to use public cloud resources and how to fit those to use cases such as development/testing, and when to bring workloads back on-premises.


In most cases, modern applications that have been built cloud-native, such as functions or using as-a-service style offerings, will have a natural fit to the cloud that they’ve been developed for. However, a lot of the customers I work with and encounter aren’t that far along the journey. That’s the desired goal, but it takes time to refactor or replace existing applications and processes.


With that in mind, where do I start? The first and most important part is in understanding the landscape. What do the current applications look like? What technologies are in use (both hardware and software)? What do the data flows look like? What does the data lifecycle look like? What are the current development processes?


Building a service catalogue is an important step in making decisions about how you spend your money and time. There are various methods out there for achieving these assessments, like TIME analysis or The 6 Rs. Armed with as much information as possible, you’re empowered to make better decisions.


Next, I usually look at where the quick wins can be made—where the best bang for your buck changes can be implemented to show return to the business. This usually starts in development/test environments and potentially pre-production environments. Increasing velocity here can provide immediate results and value to the business. Another area to consider is backup/long-term data retention.


Development and Testing


For development and test environments, I look at the existing architecture: are these traditional VM-based environments? Can they be containerized easily? Is containerization, where possible, a good step toward more cloud-native behavior and thinking?


In traditional VM environments, can automation be used to quickly build and destroy environments? If I’m building a new feature and want to do integration testing, can I use mocks and other simulated components to reduce the amount of infrastructure needed? If so, these short-lived environments are a great candidate for the public cloud. Where you can automate, and lifecycles are predictable and measured in hours, days, or maybe even weeks, the efficiencies and savings of placing that workload in the cloud are evident.
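The economics here are easy to sanity-check. This sketch compares metered cloud pricing against a fixed on-premises allocation for the hours an environment actually lives (all rates and figures below are assumptions for illustration):

```python
def cheaper_in_cloud(hours_alive: float, cloud_rate_per_hour: float,
                     onprem_monthly_cost: float) -> bool:
    """Short-lived, automated environments win in the cloud when their
    metered cost undercuts the fixed monthly on-prem allocation."""
    return hours_alive * cloud_rate_per_hour < onprem_monthly_cost

# A test rig that lives 72 hours at $0.50/hr vs. a $600/month on-prem slice.
print(cheaper_in_cloud(72, 0.50, 600.0))
```

Run the same comparison with a long-lived pre-production environment (say, 720 hours a month) and the answer can flip, which is exactly the distinction the next paragraph draws.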


When it comes to longer cycles like acceptance testing and pre-production, perhaps these require a longer lifetime or greater resource allocation. In these circumstances, traditional VM-based architectures and monolithic applications can become costly in the public cloud. My advice is to use the same automation techniques to deploy these to local resources with more predictable costs. However, the plan should always look forward and assess where you can move components to modern architectures over time and deploy across both on-premises and public cloud.


Data Retention


As I mentioned, the other area I often explore is data retention. Can long-term backups be sent to cold storage in the cloud? The benefits over tape management for infrequently accessed data are often compelling. Restore access may be slower, but how often do you perform those operations? How urgent is a restore from, say, six years ago? Most of the time, you can wait to get this information back.


Continuing the theme of data, it’s important to understand what data you need where and how you want to use it. There are benefits to using cloud native services for things like business intelligence, artificial intelligence (AI), machine learning (ML), and other processing. However, you often don’t need the entire data set to get the information you need. Look at building systems and using services that allow you to get the right data to the right location, or bring the data to the compute, as it were. Once you have the results you need, the data that was processed to generate them can be removed, and the results themselves can live where you need them at that point.


Lastly, I think about scale and the future. What happens if your service/application grows beyond your expectations? Not many people will be the next Netflix or Dropbox, but it’s important to think about what would happen if that came about. While uncommon, there are situations where systems scale to a point where using public cloud services becomes uneconomical. Have you architected the solution in a way that allows you to remove yourself? Would major work be required to build back on-premises? In most cases, this is a difficult question to answer, as there are many moving parts, and the answer depends on levels of success and scale that may not have been predictable. I’ve encountered this type of situation over the years, though usually not to the extent of complete removal of cloud services. I see it most commonly in data storage: large amounts of active data can become costly quickly. In these situations, I look to solutions that let me leverage traditional storage arrays near-cloud, usually systems placed in data centers with direct access to cloud providers.


In my final post, I’ll be going deeper into some of the areas I’ve discussed here and will cover how I use DevOps/CICD tooling in hybrid IT environments.


Thank you for reading, and I appreciate any comments or feedback.

Saw Captain Marvel this past weekend. It's a good movie. You should see it, too. Make sure you stick around for the second end credit scene!


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Quadriga's Wallets Are Empty, Putting Fate Of $137 Million In Doubt

Somehow, I don’t think “create Ponzi scheme using crypto and fake my own death” was the exact pitch to investors. Incidents like this are going to give cryptocurrencies a bad name.


Part 1: How To Be An Adult— Kegan’s Theory of Adult Development

One of the most important skills you can have in life is empathy. Take 10 minutes to read this and then think about the people around you daily, and their stage of development. If you start to recognize the people that are Stage 2, for example, it may change how you interact, and react, with them.


Volvo is limiting its cars to a top speed of 112 mph

Including this because (1) we got a new Volvo this week and (2) the safety features are amazing. There are many times it seems as if the car is driving itself. It takes a while to learn to let go and accept the car is now your pilot.


This bill could ban e-cigarette flavors nationwide

"To me, there is no legitimate reason to sell any product with names such as cotton candy or tutti fruitti, unless you are trying to market it to children.” Preach.


Microsoft is now managing HP, Dell PCs via its Managed Desktop Service

And we move one step closer to MMSP – Microsoft Managed Service Provider.


A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians

We are at the beginning of a boom in machine learning. Unfortunately, most of the data we use comes with inherent bias. Data is the fuel for our systems and our decisions. Make sure you know what fuel you are using to power your algorithms.


Burnout Self-Test

We’ve all been there, or are there. Do yourself a favor and take 5 minutes for this quiz. Then, assess where you are, where you want to be, and the steps you need to take. Mental health is as important as anything else in your life.


The last real snowfall of the season, so I took the time to make a Fire Angel (it's like a Snow Angel, but in my Fire Circle):


The most difficult step in any organization’s journey to the cloud is the first one: where do you start? You’ve watched the industry slowly adopt cloud computing over the last decade, but when you look at your on-premises data center, you can’t conceptualize how to break out. You’ve built your own technology prison, and you’re the prisoner.


You might have a traditional three-tier application, and the thought of moving the whole stack to the cloud induces anxiety. You won’t control the hardware, and you won’t know who has access to the hardware. You won’t know where your data is, or who has access to it. It’s the unending un-knowing of cloud that makes so many of us retreat to the cold aisle, lean against the KVM, and clutch our tile pullers a little tighter.


Then you consider the notorious migration methods you’ve read about online.


Lift-and-shift cloud migrations are harrowing events that we should all experience at least once, and should never, ever experience more than once. Refactoring is often an exercise in futility, unless you have a crystal-clear understanding of what the resulting product will look like.


So how do you ease into cloud computing if lift-and-shift and refactoring aren’t practical for you?


You start considering a cloud-based solution for every new project that comes your way.


For example, I’ve recently been in discussions with an application team to improve the resiliency of their database solution. The usual solutions were kicked around: a same-site cluster for high availability (HA), a multi-site cluster for HA and disaster recovery (DR), or an active-active same-site cluster for HA and fault tolerance (FT). Of course, in each case, there’s excess hardware capacity that will sit idle until a failure event. The costs associated with these three solutions would inspire any savvy product manager to think, “there’s got to be a better way.”


And there is. It’s a cloud-native database service with an SLA for performance and availability, infinite elasticity, and bottomless storage. (Yes, I’m exaggerating a bit here, but look at the tech specs for Google Cloud SQL or Amazon RDS; infinite is only a mild stretch.) You pay for the service based on consumption, which means all those idle cycles that would otherwise consume power and cooling are poof, gone. You’ll need to sort out the connectivity and determine the right way for your enterprise to connect with your cloud services, but that’s certainly easier than designing, procuring, and implementing the hardware and licenses for your on-prem HA solutions.
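To make that "pay for consumption" argument concrete, here's a back-of-the-envelope sketch comparing an always-on HA pair (primary plus idle standby) against a consumption-billed managed database. Every rate and utilization figure below is a hypothetical assumption for illustration, not a quote from any provider's price list:

```python
# Hypothetical cost comparison: always-on HA pair vs. consumption-billed
# managed database service. All rates below are illustrative assumptions.

HOURS_PER_MONTH = 730

def on_prem_monthly(cost_per_node_hour: float, nodes: int = 2) -> float:
    """Both nodes run 24/7; the idle standby bills just like the primary."""
    return cost_per_node_hour * nodes * HOURS_PER_MONTH

def managed_db_monthly(cost_per_hour: float, busy_hours: int,
                       idle_multiplier: float = 0.1) -> float:
    """Consumption pricing: full rate while busy, a small floor while idle."""
    idle_hours = HOURS_PER_MONTH - busy_hours
    return cost_per_hour * (busy_hours + idle_hours * idle_multiplier)

ha_pair = on_prem_monthly(cost_per_node_hour=0.50)                # 730.0
managed = managed_db_monthly(cost_per_hour=0.50, busy_hours=300)  # 171.5
```

At the same hourly rate, the managed service wins here simply because you stop paying full price for standby capacity that sits idle; the exact break-even point depends entirely on your real rates and duty cycle.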


Your application team gets the service they want without investing in bare metal that would only serve to make your data center chillers work a little harder. And more importantly, you’ve taken your first step in the journey to the cloud.


A successful migration can spark interest in cloud as a solution for other components. The same application team, realizing now that their data is in the cloud and their app servers aren’t, might express an interest in deploying an instance group of VMs into the same cloud to be close to their data. They’ll want to learn about auto-scaling next. They’ll want to learn about cost savings by moving web servers to Debian. They’ll want to know more about how to set up firewall rules and cloud load balancers. They’ll develop an appetite for all that cloud has to offer.
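Auto-scaling sounds magical, but the decision at its heart is simple: size the instance group so each VM runs near a target utilization. The sketch below illustrates that logic generically; the function name, thresholds, and capacity figures are assumptions for illustration, not any cloud provider's actual API:

```python
import math

# Illustrative sketch of an auto-scaler's core decision: pick an instance
# count that keeps per-instance utilization near a target, clamped to
# configured minimum and maximum group sizes. All parameters are
# hypothetical; real providers expose this via their autoscaler policies.

def desired_instances(current_rps: float, rps_per_instance: float,
                      target_utilization: float = 0.6,
                      min_instances: int = 2, max_instances: int = 20) -> int:
    """Scale so each instance handles ~target_utilization of its capacity."""
    needed = math.ceil(current_rps / (rps_per_instance * target_utilization))
    return max(min_instances, min(max_instances, needed))

desired_instances(900, rps_per_instance=100)  # ceil(900/60) = 15 instances
desired_instances(50, rps_per_instance=100)   # clamps up to min_instances = 2
```

The min/max clamp is what keeps a traffic spike (or a metrics glitch) from scaling you into a surprise bill, which is exactly the conversation you'll end up having with that newly cloud-curious application team.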


And while you may not be able to indulge all their cloud fantasies, you’ll find that moving to the cloud is a much simpler and more enjoyable effort when you’re working in partnership with your application team.


That’s the secret to embracing not just cloud, but any new technology: let your business problems lead you to a solution.
