Small-to-medium-sized businesses (SMBs) tend to be overlooked when it comes to infrastructure monitoring solutions that fit their needs. Many of the tools that exist today cater to the enterprise, where there’s a much larger architecture with many moving parts. The enterprise data center requires solutions for monitoring all the systems within it, systems an SMB might not have. The SMB or remote office/branch office (ROBO) is a much smaller operation, sometimes running just a few servers and some smaller networking gear. There may be a storage solution on the back end, but it’s more likely that the data for their systems is stored on a local drive within one or all of their servers.


It seems unfair for the SMB to be ignored as developers typically work to create solutions that will fit better in an enterprise architecture than in a smaller architecture, but that’s the nature of the beast. There’s more money in enterprise solutions, with enterprise licensing agreements (ELAs) reaching into the millions of dollars for some clients. It would make sense that enterprise software is more readily available than SMB software, but that doesn’t mean there aren’t solutions out there for the SMB.

Find What’s Right for YOU

Don’t pick a solution with more than what you need for the infrastructure within your organization. If your organization consists of a single office with three physical servers, a modem/router, and a direct-attached storage (DAS) solution, you don’t need an enterprise solution to monitor those systems. There are many less expensive or open-source server monitoring tools out there with plenty of documentation to help you through installation and configuration. Enterprise solutions aren’t always the answer just because they’re “enterprise solutions.” Bigger isn’t always better. If a solution with a support agreement is more in line with your expectations, there are quite a few providers that can offer an SMB-class monitoring solution for you.


Don’t Overpay

Software salespeople are all about selling, selling, selling. Many times, salespeople are sent along on a client call with a solutions engineer (SE) who has more technical experience than the salesperson. Focus more attention on the SE and less on the salesperson. There’s no need to shell out a ton of money for an ELA that’s way more than what you need for your SMB infrastructure. Quality is often associated with cost, and that’s just plain false. When it comes to choosing a monitoring tool for your SMB, "quality over quantity" should be your mantra. If you don’t require 24/7/365 monitoring and platinum-level SLAs with two-minute response times, don’t buy them. Find a tool that fits your SMB budget, and don’t feel slighted because you didn’t buy the shinier, more expensive enterprise solution.


Pick a Vendor That Will Work for YOU

Software vendors, especially those that work to develop large enterprise monitoring solutions, don’t always have the best interests of the SMB in mind when building a tool. By focusing your search on vendors that scale to the SMB market, you’ll find that the sales process and the support will be tailored to the needs of your organization. With vendors building scalable tools, customization becomes a key selling point. Vendors can cater to the needs and requirements of the customer, not the market.


Peer Pressure is Real

Don’t take calls from software vendors that cater only to the needs of enterprise-scale monitoring solutions. Nothing against enterprise monitoring solutions—they’re needed for the enterprise. However, focusing on the chatter and the “latest and greatest” types of marketing will make your SMB feel even smaller. There’s no competition. Pick what works for your SMB. Don’t overpay. Find a vendor that will support you. By putting all these tips in place, you can find a monitoring tool for your SMB that won’t make you feel like you had to settle.

As we head into the final post of this series, I want to thank you all for reading this far. For a recap of the other parts, please find links below:


Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT

Part 5 – Choosing the right location for your workload in a Hybrid IT world


To close out the series, I’d like to talk about my most recent journey in this space: using DevOps tooling and continuous integration/deployment (CI/CD) pipelines. A lot of this will come from my experiences using Azure DevOps, as I’m most familiar with that tool. However, there are a lot of alternatives out there, each with their own pros and cons depending on your business or your customers.


I’ve never been a traditional programmer/developer. I’ve adopted some skills over the years as that knowledge can bring benefit to many areas of IT. Being able to build infrastructure as code or create automation scripts has always served me well, long before the more common use cases evolved from public cloud consumption. I feel it’s an important skill for all IT professionals to have.


More recently, I’ve found that the relationship between traditional IT and developers is growing closer. IT departments need to provide tools and infrastructure to the business to speed development and get products out the door quicker. This is where the DevOps culture has come to the forefront. It’s no longer good enough to just develop a product and throw it over the fence to be managed. The systems we use and the platforms available to us mean that we must work together. To help this new culture, it’s important to have the right DevOps tools in place: good code management repositories, artifact repositories, container registries, and resource management tools like Kanban boards. These all play a role for developers and IT professionals. Bringing all this together into a CI/CD process, however, involves more than just tools. Processes and business practices may need to be adjusted as well.


I’m now working more in this space. It’s a natural extension of the automation space I previously worked in, and it overlaps quite nicely. Working with businesses to set up pipelines and gain velocity in development has taken me on a great journey. I won’t go into detail on the code side of this, as that’s something for a different blog. What’s important and relevant in hybrid IT environments is how these CI/CD processes integrate into the various environments. As I discussed in my previous post, choosing the right location for your workloads is important, and this carries over into these pipelines.


During the software development life cycle, there are stages you may need to go through. Unit, functional, integration, and user acceptance testing are commonplace. Deploying throughout these various stages means there will be different requirements for infrastructure and services. From a hybrid IT perspective, having the tools at hand to deploy to multiple locations of your choice is paramount. Short-lived environments can use cloud-hosted services such as hosted build agents and cloud compute. Medium-term environments that run in isolation can again be cloud-based. Longer-term environments or those that use legacy systems can be deployed on-premises. The toolchain gives you this flexibility.
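The stage-to-location mapping described above can be sketched as a multi-stage Azure Pipelines definition. This is purely illustrative: the stage names, scripts, and the `OnPremAgents` pool are hypothetical placeholders, not a prescribed layout.

```yaml
# Hypothetical multi-stage pipeline: hosted agents for short-lived build and
# test work, a self-hosted agent pool for the long-term on-premises environment.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildAndUnitTest
        pool:
          vmImage: ubuntu-latest      # short-lived, cloud-hosted build agent
        steps:
          - script: ./build.sh        # placeholder for your build steps

  - stage: IntegrationTest
    dependsOn: Build
    jobs:
      - job: TestInCloud
        pool:
          vmImage: ubuntu-latest      # isolated, medium-term cloud environment
        steps:
          - script: ./run-integration-tests.sh

  - stage: Production
    dependsOn: IntegrationTest
    jobs:
      - deployment: DeployOnPrem
        pool:
          name: OnPremAgents          # self-hosted agents in your data center
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh
```

The point isn’t the specific steps; it’s that each stage can target a different agent pool, so a single pipeline can span cloud-hosted and on-premises locations.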


As I previously mentioned, I work mostly with Azure DevOps. Building pipelines here gives me an extensive set of options, as well as a vibrant marketplace of extensions built by the community and vendors. If I want to deploy code to Azure services, I can just call on Azure Resource Manager templates to build an environment. If I include cloud-native services, I have richer plugins available to deploy configurations to things like API Management. When it comes to on-premises deployments, I can have DevOps agents deployed within my own data center, enabling build and deployment pipelines there. I can configure groups of deployment agents that connect me to my existing servers and services. There are options for me to deploy PowerShell scripts, call external APIs from private cloud management platforms like vRealize Automation, or hook into Terraform/Puppet/Chef, etc.


I can also hook these deployment processes into container orchestrators like Kubernetes, storing and deploying Helm charts or Docker Compose files. These are ideal opportunities for traditionally siloed teams to work together. Developers know how the application should work, and operations and infrastructure people know how they want the system to look. Pulling together code that describes how the infrastructure deploys, heals, upgrades, and retires needs input from all sides. When using these types of tools, you’re looking to achieve an end-to-end system of code build and deployment. Putting in place all the quality gates, deployment processes, and testing removes human error and speeds up business outcomes.
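To make that concrete, here’s a minimal, hypothetical Docker Compose file of the kind a pipeline might deploy. The image names and service layout are invented for illustration only:

```yaml
# Hypothetical docker-compose.yml: a web app plus its database dependency.
version: "3.8"
services:
  web:
    image: registry.example.com/myapp:1.0.0   # pushed by the CI build stage
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example              # use a secret store in practice
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

A file like this is exactly the kind of artifact that benefits from input on both sides: developers own the application image, while operations weigh in on ports, persistence, and upgrade behavior.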


Outside of those traditional SDLC use cases, I’ve found these types of tools to be beneficial in my daily tasks as well. Working in automation and infrastructure as code follows a similar process, and I maintain my own projects in this structure. I keep version-controlled copies of ARM templates, CloudFormation templates, Terraform code, and much more. The CI/CD process allows me to bring non-traditional elements into my own deployment cycles: testing infrastructure with Pester, checking security with AzSK, or just making sure I clean up my test environments when I’ve finished with them. From my experiences so far, there’s a lot for traditional infrastructure people to learn from software developers and vice versa. Bringing teams and processes together helps build better outcomes for all.
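As one example of those non-traditional elements, a pipeline can run Pester tests and publish the results. This sketch uses the standard `PowerShell@2` and `PublishTestResults@2` tasks; the test path and output file name are illustrative, and it assumes an agent with the Pester module available:

```yaml
# Hypothetical post-deployment validation: run Pester, publish the results.
steps:
  - task: PowerShell@2
    displayName: Validate infrastructure with Pester
    inputs:
      targetType: inline
      script: |
        Invoke-Pester -Path ./tests -OutputFile results.xml -OutputFormat NUnitXml

  - task: PublishTestResults@2
    inputs:
      testResultsFormat: NUnit
      testResultsFiles: results.xml
```

Failing tests then fail the stage, which is what turns an infrastructure check into a genuine quality gate.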


With that we come to a close on this series. I want to thank everyone that took the time to read my posts and those that provided feedback or comments. I have enjoyed writing these and hope to share more opinions with you all in the future.

The Final Eight are upon us. It’s a little less clear this year whether any Cinderellas have made it into these final rounds as all of these superpowers seem a little Cinderella-y (WAY before she met her fairy godmother). And, most of these battles are a lot closer than we anticipated.


There’s certainly been some personal disappointment. (Just to reiterate: it’s PEE. In the pool you’re SWIMMING IN. Granted, it may not help you fight crime, but you can’t fight crime if you catch something gross. Think about it.)


We’re also struggling with what it means for us as a society when we believe 140 characters is all we need to get the gist of someone’s mind. But now isn’t the time for big thoughts and existential drama…


It’s time to announce who has advanced to the next round.


Bracket Battle 2019 - Round 2: USBz vs. Wormz USBz moves on to the next round with 61% of the votes

Bracket Battle 2019 - Round 2: 2ndz vs. zhceepS 2ndz moves on with 51%

Bracket Battle 2019 - Round 2: Binaryz vs. Cheez Binaryz moves on with 59%

Bracket Battle 2019 - Round 2: Tumblz vs. Slowz Tumblz moves on with 65%

Bracket Battle 2019 - Round 2: Tweetz vs. Sortz Tweetz moves on with 82%

Bracket Battle 2019 - Round 2: Legoz vs. Hovrz Hovrz moves on with 59%

Bracket Battle 2019 - Round 2: Giftz vs. Veggz Giftz moves on with 62%

Bracket Battle 2019 - Round 2: Sqrlz vs. Dropz Dropz moves on with 56%




Bracket Schedule:

  • Bracket release is March 18
    • Voting for each round will begin at 10 a.m. CT
    • Voting for each round will close at 11:59 p.m. CT on the date listed on the Bracket Battle home page
  • Play-in battle opens TODAY, March 18
  • Round 1 OPENS March 20
  • Round 2 OPENS March 25
  • Round 3 OPENS March 28
  • Round 4 OPENS April 1
  • Round 5 OPENS April 4
  • The Most USEFUL Useless Superpower will be announced April 10

I'm home from Redmond and the annual Microsoft MVP Summit. Attending Summit is one big #sqlfamily reunion for me. It was great seeing the familiar faces that help Microsoft deliver the best data platform tools, products, and services in the world.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


The U.S. government plans to drop $500M on a ridiculously powerful supercomputer

And it still won't be able to have more than 6 tabs open in Google Chrome.


Facebook admits it stored ‘hundreds of millions’ of account passwords in plaintext

I'm starting to think that maybe Facebook isn't so good at the privacy thing.


Cyber Attack Puts a Spotlight on Fragile Global Supply Chain

Good reminder that as your systems become more complex you need to have a solid business continuity plan. Otherwise, when disruptions hit, you could find yourself out of business.


They didn’t buy the DLC: feature that could’ve prevented 737 crashes was sold as an option

I cannot understand how safety was allowed to be optional.


Study confirms AT&T’s fake 5G E network is no faster than Verizon, T-Mobile or Sprint 4G

It's almost as if AT&T were comfortable with the idea of lying to customers.


MySpace has lost all the music users uploaded between 2003 and 2015

Losing 12 years’ worth of data is the second most shocking part of this article. The first being, of course, that MySpace still exists.


Repeated Giving Feels Good

Having an “attitude of gratitude” is a great way to change your mood, and quick.


I don't care what that brogrammer on Reddit says, the answer is still "no":


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article on containers, which are growing in importance to our government customers. Containers play an important role in scaling cloud applications and they need to be monitored.


Hybrid IT and cloud computing are completely changing the computing landscape, from ownership and payment methods to the way applications are developed. The perfect example of this is the seemingly overnight adoption of containers. According to the 2018 SolarWinds IT Trends Report, nearly half of the public sector IT pros who responded ranked containers as the most important technology priority today.


This ranking comes with good reason. Containers simplify the application development process—and, in particular, provide a critical level of application portability and agility within a hybrid or cloud environment.


All this said, containers are still relatively new. It’s important for every federal IT pro to understand containers and their potential value in helping to enhance an agency’s computing efficiency as the agency moves to a hybrid IT/cloud environment.


What is a container?


When applications are developed, they’re built within a development or staging environment, either on a virtual machine in the cloud or on a developer’s laptop. The challenge is that the environment where the application will ultimately run may be different from the environment where it was developed. This has been an application development challenge for years.


Enter the container. A container is a complete environment—the application and all the technological dependencies it needs to run (libraries, binaries, configuration files, etc.), all in a single container. This means the application can run anywhere; the underlying infrastructure becomes irrelevant since the application already has everything it needs to get its job done.


Why are containers important?


First and foremost, containers can enhance the adoption of hybrid and cloud environments as they make applications completely portable. That’s invaluable as agencies look to make the cloud migration process as easy as possible.


Containers can also help agencies meet additional challenges introduced by hybrid IT/cloud environments. According to the 2018 SolarWinds IT Trends Report, those challenges include:


  • Environments that are not optimized for peak performance (46% of respondents)
  • Significant time spent reactively maintaining and troubleshooting IT environments (45% of respondents)


There is another forward-looking advantage of containers. The accelerated development cycle enabled by containers can help open the way to implement automated systems and technologies, further streamlining agency dev cycles.


According to the report, while hybrid IT/cloud ranked highest among the top five most important technologies to their organization’s IT strategies, automation was ranked second in the category of “most important technologies needed for digital transformation over the next three to five years.”


In a nutshell: containers can help agencies more effectively move to hybrid IT/cloud environments, which can then help agencies more effectively incorporate automation—which has the potential to change the game completely.


Find the full article on Government Technology Insider.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

I am so excited about this year’s Bracket Battle that I jumped on a plane yesterday to come to Vegas. I hear this is the place to be for the first weekend of the battle… a place to actually get paid for making the right predictions.


Turns out, the bookmakers aren’t very well-versed in our game and there is some OTHER bracket that everyone here is betting on. Go figure. It certainly doesn’t seem to be as interesting or thought-provoking as ours. But I guess that everyone has their “thing.”


Before we get into the first-round results, let me take a quick moment to answer a lingering question. There has been a bit of debate about the objective of this year’s battle—what EXACTLY are we voting for here? We understand. We debated it mightily as we brainstormed and plotted. Here is where we landed: we are looking for the BEST of the worst (read: useless) superpowers. Between any two given match-ups, which superpower would you prefer to have for its relative utility? (Which might prove more useful than the other?)


Based on the results, it looks like y’all got the hang of it…mostly. I can’t quite account for why the masses think that knowing the exact amount of pee in a pool is less useful than always coming in second. It’s PEE… in the POOL… that you are swimming in. Are you planning to melt down all of those second-place medals into some kind of weapon?


Welp, like I said… I guess everyone has their “thing.”


Here is where we stand as we head into Round 2:


Play-in Round Result: Bracket Battle 2019: Ticklz vs. Tweetz

TWEETZ comes out on top with 74% of the votes—a solid victory. The majority certainly believe that being able to read minds in any capacity is pretty helpful, even if it is just 140 characters.


Round 1 Results:

These rounds proved to be, on the whole, a lot closer than the play-in, with just a few exceptions:


Bracket Battle 2019: Dwaynez vs. Squirrlz  – It turns out the ability to go full "The Rock" with your eyebrows is nothing compared to an army of empathetic squirrels, as Squirrlz takes down Dwaynez with 73%.


Bracket Battle 2019: Giftz vs. Qwertyz – Randomly assigning useless superpowers to create a useless troupe of super pals who CAN remember your current password appears to outweigh the ability to never forget your old password. Giftz takes down Qwertyz with 61% of the vote.


Bracket Battle 2019: 2ndz vs. Peez – We had some schemers with entrepreneurial spirits insightfully point out that a number of second-place cash prizes can build you quite the smaller competitor to Wayne Enterprises. Enjoy your second-fastest Bat Mobile, Robin! Peez is defeated by 2ndz, which garnered 60% of the votes.


Bracket Battle 2019: Wormz vs. Tastez – What happens when worm ESP and squirrel empathy clash? We may find out later in the season! Sounds like more people would rather taste-test things for poison than originally thought. Wormz took 64% of the votes.


Bracket Battle 2019: USBz vs. Opaquez – Maybe this clash would have been less even before the release of USB Type-C, as 48% of THWACK® apparently still prefer to turn invisible when nobody is looking. Don't you remember the pain and agony of thinking, "maybe I'll rotate it," before realizing you had it right the first time? Gen Z will never understand...


Bracket Battle 2019: Blurz vs. Hovrz – Jumping short heights can be appealing to anyone, unless you're watching a horror movie. WHAT THEN, THWACK? WHAT THEN?? Blurred vision loses to short hovering, which took 72% of the votes.


Bracket Battle 2019: Sortz vs. Picz – This one has us curious as to whether there's an underground gambling ring based on our annual Bracket Battles. 65% of THWACK chose the ability to accurately rate the uselessness of a superpower (Sortz) over taking a great license and passport photo. Too bad you can't take down a villain by looking goooooooood.


Bracket Battle 2019: Cheez vs. Whipez – Really? Only 54% of you want to take down your lactose-intolerant enemies with your epic lactokinesis? Maybe too many of us are sick of trying to clean whiteboards. Think of the possible plans you could possibly write on a potentially clean whiteboard!


Bracket Battle 2019: Toastz vs. Tumblz – You don't know fear until you've been chased through the desert by a rogue tumbleweed. Tumblz takes down foot toast (Pedicrunch?) with 62% of THWACK votes. I guess we'll never find a good use for our toe jam.


Bracket Battle 2019: Binaryz vs. Yumz – It's possible we could've put black licorice up against anything and it would still lose. Our fellow geek brethren decided speaking in binary tastes much better on the tongue. 1000100% of the vote.


Bracket Battle 2019: zhceepS vs. Loadz – We're fighting the urge to type this whole summary backwards as zhceepS wins with 60%. Maybe it's because we already know all load times are a SHAM, MADE UP by whichever EVIL coders decided to give us FALSE HOPE about THE END being near. We're not mad.


Bracket Battle 2019: Foldz vs. Dropz – Everyone's days of paper airplanes are over. 'Nuff said. Dropz wins with 62%.


Bracket Battle 2019: Callz vs. Veggz – Whoever doubts the power of vegetables has never seen this epic video of a vegetable orchestra. Go ahead. You know you want to watch it. Veggz (70%) beats Callz.


Bracket Battle 2019: Critz vs. Legoz – In a battle of entertainment, Legos appear to still be relevant. Fans of the Danish foot-killers take down critical misses with a narrow six-percent margin.


Bracket Battle 2019: Tweetz vs. Forgetz – Remember round one? Forgetz doesn't. Reading the first 140 characters of people's thoughts is preferred by 71% of THWACK.


Bracket Battle 2019: Loopz vs. Slowz – Ahhh. There's nothing like saving the slowest for last. The tortoise takes down a perfect belt-looping. Slowz – 57%.



Curious as to what other useless superpowers you were surprised to see missing from our competition this year? Let us know below.


It’s time to check out the updated bracket and start voting in Round 2! We need your help and input as we get one step closer to crowning the ultimate legend!


Access the bracket and make your Round 2 picks HERE>>


Cloud Anxiety

Posted by michael stump Expert Mar 25, 2019

When electric vehicles leapt out from the shadows and into the light of day, pundits and contrarians latched on to a perceived fatal flaw in their design: a limited range of travel on a single charge. It began in earnest with the Chevy Volt, which initially offered around 40 miles of range when operating in all-electric mode.


“Where can you go on 40 miles of range?”

“You’ll run out and be stranded!”

“My Ford Excursion can drive 300 miles on a tank!” (Except it has a 44-gallon tank, which in the pricey days of the mid-2000s worked out to a cool $160 a fill-up.)


EV makers persisted, and today we have EVs with ranges equal to or greater than their gas-powered predecessors. And if your trip takes you beyond the range of a single charge, there’s a growing network of charging stations to get you where you’re going.


Range anxiety is dead. Right? Maybe not, as you’ll still find columns devoted to the topic on autoblogs, especially those that cozy up to the ICE makers of the world. But the debate is over; only tribalism keeps it alive.


Sound familiar? It should. The same anxiety surrounds cloud computing.


“Cloud is just someone else’s computer!”

“You lose control over everything in the cloud!”

“My mainframe has been doing just fine for the last 30 years!”


In the meantime, cloud service providers kept innovating and improving their services. They engaged in an ambitious public relations campaign to educate IT professionals on how to manage cloud solutions. And they won major federal IT cloud contracts that solidified their ability to deliver massive, secure services to sensitive customers like the CIA.


The debate is over, but cloud anxiety lingers. Why?


It’s easy to throw out words like tribalism and intransigence and stubbornness when talking about change-related anxiety. But these terms serve only to push us to our pre-selected corner; these are not words of unification, they’re words of division.


The reason for any change-related anxiety is evolutionary: humans prefer the known over the unknown. Once you’ve acquired a decade or more of experience managing on-premises solutions, your comfort is in continuing to do what you’ve been doing. The thought of drastic change, such as shifting your delivery model to hybrid or bi-modal IT, represents the unknown. And when faced with that uncertainty, we start creating reasons not to embrace change. This is where all the “I think the old way’s better” thinking comes into play.


It boils down to this: people are afraid of change until they understand how that change will affect them on a personal level.


IT operations staff have been led to believe that their profession is about to be made obsolete by cloud. DBAs, vSphere admins, and hardware geeks have heard it a thousand times: your job won’t be around in n years’ time, where n is somewhere between two and six. The result is that these IT workers develop a general sense of uneasiness about cloud computing, and for good reason: they believe it’s here to take their jobs away.


But let's switch back to the EV analogy for a moment. Now that every car manufacturer has announced plans to electrify their line-ups in the coming years, the grease monkeys of the world have a choice: either embrace the incoming influx of EV service needs and prepare to fix a new class of vehicle, or stick with their current skill sets and spend the rest of their careers supporting legacy vehicles that run on gas.


The same holds true for IT: cloud is coming. The question is, will you embrace that eventuality and prepare yourself to support a cloud-based future, or will you stick with the comfort of your on-prem IT and support legacy for the remainder of your career?


So, THWACK community: what will you do?

Back in January, I had the opportunity to sit in on a stop of the Bob Woodward speaking tour. (Bob Woodward of: Woodward/Bernstein/Washington Post/Watergate/Nixon.) Mr. Woodward ended his speaking hour with a little story on the biggest lesson he learned from the whole Watergate experience. Bob told the audience how he learned of President Nixon’s resignation, and how, only a few months later, he learned that Nixon’s successor, President Ford, had pardoned him.


Both Bob and his partner, Carl Bernstein, were crestfallen. They had sacrificed years of their lives investigating the biggest scandal in U.S. history, ultimately bringing down a sitting president. They both believed that more backroom politicking had occurred, and now Nixon was going to walk free in return for handing Ford the presidency.


Years passed, and Bob interviewed and spoke with Ford several more times. Right before Ford’s death some 30 years after Watergate, Ford revealed to Bob that Bob had never asked him if Nixon’s men had approached him prior to the resignation about a pardon in exchange for the presidency. (Bob explained to the audience that he assumed this was the case.) Ford admitted that they had. Bob exclaimed in his head, “Aha! I knew it! Vindication!” Ford finished his sentence, “I turned them down. I knew Nixon was finished as president. They were negotiating a losing hand.”


As for why Ford pardoned Nixon, I’ll let you research that for yourselves. It’s a fascinating piece of U.S. history and a textbook case of political suicide. So… why am I bringing all of this up in a THWACK blog? Here’s why. If there was ever an example of an SME, or subject matter expert, it’s Bob Woodward and the Watergate scandal. He and Carl Bernstein lived, breathed, ate, and slept Watergate for over 30 years, long after it was over. And as to why he assumed Ford’s pardon was part of a deal, his knowledge and expertise were his undoing. His assumptions and suspicions poisoned his critical thinking for 30 years. Bob spent those 30 years convinced that Nixon’s team and Ford were in cahoots on some level.


I’m surrounded by some very talented IT professionals and have been for the past 27 years. What I’ve done for the past 20 years, ever since I gave up the technical path, is lead these technical professionals as a manager, in incident response, in war rooms, and on projects. I’ve seen countless examples where the best and smartest experts and engineers are firmly convinced that solution A, or “this” path to resolution, is the right answer, so convinced that, whether out of professional reputation or stubbornness, they remain adamant even when the clues point elsewhere.


As a leader, this can be challenging. Pray you never find yourself in a situation where you have two or more experts with differing, passionate opinions and are expected to decide whose opinion to follow. So, what are you to do in these tense situations? How do you work with a “Bob Woodward”-type personality?


When it comes to defusing situations like these and (hopefully) gaining consensus, I’ve got some tips I’ve used throughout my career that can help ease heated discussions among headstrong personalities.

  • Food! The tighter the tension, the better the food should be. Good food is perhaps the best tension breaker. If the hours are long, or you’re working on weekends or on a very tight deadline with high visibility, don’t order the team pizza (unless they all agree on pizza). Splurge a little and bring in catering, with the nice thick paper plates, fancy plastic silverware, and absorbent napkins. The team will feel appreciated and much more refreshed. They will convene with a fresh perspective, and you’ll come out being the good guy. A tough decision will land softer.
  • Changing subjects, even to the point of talking about something that isn’t related to the task at hand. How’s the weather? How’s that local sports team performing? Who saw that show last night? Who’s following the latest trends on Twitter? Changing the topic to get people to break their concentration and hard-set thought patterns is a sensible approach when the team is in a rut. As a leader, keep an eye out for those who keep their nose in their laptop and don’t engage in the lighthearted banter. They either refuse to budge on their opinions or are onto something that could be dangerous to the collaborative nature.
  • Taking breaks, even if it’s a simple adjournment for 15 minutes. Surprisingly, this is often neglected in war rooms and incident response teams. Keep in mind that you don’t have to dismiss everyone at once; you can stagger breaks in intervals. Once again, keep an eye out for those who refuse to leave. We leaders refer to them as “martyrs.” Martyrs can be dangerous, as they tend to be the first to crack. They’re the first to get snippy, overly opinionated, and uncooperative, and they can kill morale. In the worst cases, they have the potential to become HR write-ups. And then you come out looking like the bad guy.
  • Play dumb! This is my personal favorite. I often refer to myself as the dumbest person in our IT department (because most times it’s true). I use this to my advantage when I need to challenge an expert on their opinion. The expert then must break their opinion down to a level that I can understand. That allows me to ask “stupid” (aka “leading”) questions and manipulate the conversation to get the expert to try another way of thinking. During my 27 years in IT, I have learned that technology has changed, but human behavior is basically the same.

There are certainly other tactics to use, and I would love to hear your favorites as well as how and when you use them in the comments section below.


Also, given that many of us in the THWACK community are the technical experts, I must ask, have you ever found yourself to be so unwaveringly convinced of something IT related only to find out later that you were wrong? What was it? Did you ever admit you were wrong? Why or why not?


Finally, I'll leave you with a little bit of trivia on President Ford that you may not know. He is the only person to have served as vice president and U.S. president having never been elected to either office.

In my previous articles, I’ve looked at the challenges and decisions people face when undertaking digital transformation and transitioning to a hybrid cloud model of IT operation. Whether it’s using public cloud infrastructure or changing operations to leverage containers and microservices, we all know not everything can or even should move to the public cloud. There’s still a need to run applications in-house. Yet everyone still wants the benefits of the public cloud in our own data center. How do you go about addressing these private cloud needs?


Let’s take an example scenario. You’ve decided it's time to upgrade the company’s virtual machine estate and roll out the latest and greatest version of hypervisor software. You know that there are heaps of great new features in there that will make all the difference in your day-to-day operations, but that doesn’t mean anything to your board or to the finance director, who you need to win over to get a green light to purchase. You need to present a proposal containing a set of measurables indicating when they are going to see a return on the money you’re asking them to release. In the immortal words of Dennis Hopper in Speed, “…what do you do? What do you do?”


First, you need a plan. The good news is, you can basically reuse the same one time and again. Change a few names here and a few metrics there, and you’ll have a winner straight out of the Bill Belichick of IT’s playbook.

The framework you should build a plan around has roughly nine sections you must address.


  • First, you need to outline the Scenario and the problems you’re facing.
  • This leads you to the Solution you’re proposing.
  • Then the Steps needed to get to this proposed solution.
  • Next, you would outline the Benefits of this solution.
  • And any Risks that might arise while transitioning and once up and running.
  • You would then summarize the Alternatives, including the fact that doing nothing will only exacerbate the issue.
  • After that, you want to profile the costs and compare them to the previous system for Cost Comparisons, detailing as much as possible on each and highlighting the TCO (Total Cost of Ownership). You may think you can finish here, but two important parts follow.
  • Highlight the KPIs (Key Performance Indicators).
  • Finally, the Timeline to implement the solution.


KPIs, or Key Performance Indicators, are a collection of statements related to the new piece of hardware, software, or even the whole system. You may say that you can reduce query time by five seconds during a normal day, or that you will reduce power consumption by 10 kWh per month. They have to be measurable and determinable via a value. You can’t say “it will be faster or better,” as you cannot quantify these. Your KPIs may also have a deadline or date associated with them, so you can definitively say whether there’s been a measured improvement or not.
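As a rough sketch of what “measurable and determinable via a value” means in practice, a KPI can be modeled as a target value with a unit and a deadline, so progress can be checked mechanically. The names and figures below are invented for illustration; the two example KPIs echo the query-time and power-consumption statements above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KPI:
    name: str
    baseline: float  # measured value on the current system
    target: float    # value the new solution must reach
    unit: str
    deadline: date

    def met(self, measured: float) -> bool:
        # A KPI is met when the measured value reaches the target.
        # Lower is better here (seconds, kWh); flip for throughput KPIs.
        return measured <= self.target

# Illustrative KPIs, echoing the examples in the text.
kpis = [
    KPI("avg query time", baseline=12.0, target=7.0, unit="s",
        deadline=date(2019, 9, 1)),
    KPI("power consumption", baseline=110.0, target=100.0, unit="kWh/month",
        deadline=date(2019, 12, 1)),
]

print(kpis[0].met(6.5))  # True: 6.5 s beats the 7 s target
```

The point of the structure is that “it will be faster” cannot be expressed here; every KPI is forced to carry a number and a date.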


Sometimes it can be hard or near impossible to pull some of the details in the plan together, but remember, the finance department will have any previous purchases of hardware, software, professional services, or contractors in a ledger. Hopefully you know the number of man-hours per month it takes to maintain the environment, along with the application downtime, profit lost to downtime, and so on. Sometimes there will be points that you need to put down to the best of your knowledge at hand. Moving forward, you’ll start to track some of the figures that the board wants, showing a willingness to try seeing things from their perspective, as the nuances of new versions of hardware and software can be lost on them.


Once you have a good understanding of the problem(s) you’re facing, you need to look at the possible solutions, which may mean a combination of demonstrations, proofs of concept (PoCs), evaluations, or try-and-buys for you to gain an insight into the technology available to solve the problem.


Next, it’s time to size up the solution. One of the hardest things to do when you adopt a new technology is to adequately size for the unexpected. Without properly understanding the requirements your solution needs to meet, how can you safely say the proposed new solution is going to fix the situation? I won’t go into how to size your solution as each vendor has a different method. The results are only as good as the figures you put in. You need to decide how many years you want to use the assets for, then figure out a rate of growth over that time. Don’t forget, updates and upgrades can affect the system during this timeframe, and you may need to take these into account.
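To illustrate the “results are only as good as the figures you put in” point, sizing usually boils down to compounding a growth rate over the asset’s lifetime and adding some headroom. The numbers below are invented; each vendor’s real methodology will be more involved.

```python
def sized_capacity(current, annual_growth, years, headroom=0.2):
    """Project capacity needs: compound growth plus safety headroom.

    All figures are illustrative; real sizing should follow your
    vendor's methodology and your own measured growth rates.
    """
    projected = current * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# 10 TB today, 25% annual growth, 5-year asset life, 20% headroom:
print(round(sized_capacity(10.0, 0.25, 5), 1))  # → 36.6 (TB)
```

Note how sensitive the answer is to the growth figure: guess 15% instead of 25% and the five-year number drops by roughly a third, which is exactly why the inputs matter more than the formula.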


Another known problem in the service provider space is that you may size a solution for 1,000 users but move only 50 on to begin with, and sure enough, they get blazing speed and no contention. But as you begin to reach 500, the original users start to notice that tasks are taking longer than when they first started using the new system. You want to try to avoid this. You’ll start to get your original users complaining that they aren’t getting the 10x speed they had when they first moved, even though you spec’d it for a 5x improvement and they’re still getting a 6-7x improvement over the legacy system. This pioneer syndrome needs some form of quality of service to prevent it from arising: a “training wheels protocol,” if you will.


Now that you’ve identified your possible white knight, it’s time to do some due diligence and verify that it will indeed work in your environment, that it’s currently supported, and so on before going off half-cocked and purchasing something because you were wooed by the sales pitch or one-time only special end of quarter pricing.


I think too many people purchase new equipment and software and then rush to use the shiny new toy, forgetting some of the most important steps: benchmarking and baselining. I refer to benchmarking as the ability to understand how a system performs when a known amount of load is put upon it. For example, when I have 50 virtual machines running, or I have 100 concurrent database queries, what happens when I add another 50 or 100? I monitor this increase and record the changes. Keep adding in known increments until you see the resources max out and adding any more has a degrading effect on the existing group.


Baselining is getting measurements once a system goes live and seeing what a normal day’s operation does to a specific system. For example, you may have 500 people log on at 9 a.m. with peak load around 10:30. It then tails off until 2 p.m. when people start to come back from lunch, and spikes again around 5 p.m. as they clear their desks, finally dropping to a nominal load by 7 p.m.


Only by having statistics on hand as to how everything in this system performs during this typical day can we make accurate measurements. Setting tolerances will help you decide if there’s actually a problem, and if so, narrow the search down when a user opens a support ticket. This process of baselining and benchmarking will ultimately help you determine SLA (Service Level Agreement) response times and define items that are outside the system’s control.
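The benchmarking loop described above — add load in known increments, record the changes, and stop when adding more degrades the existing group — can be sketched roughly as follows. `measure_latency` and all the numbers are stand-ins for whatever your real tooling measures.

```python
def benchmark(measure_latency, start=50, step=50, max_load=500, degrade_factor=1.5):
    """Increase load in known increments; stop when latency degrades sharply.

    measure_latency(load) is a stand-in for a real measurement, e.g. mean
    query time with `load` concurrent clients.
    """
    results = {}
    previous = None
    for load in range(start, max_load + 1, step):
        latency = measure_latency(load)
        results[load] = latency
        # Flag the knee: this increment degraded the group sharply.
        if previous is not None and latency > previous * degrade_factor:
            break
        previous = latency
    return results

# Simulated system: latency is flat until ~300 concurrent users, then spikes.
def simulated(load):
    return 10.0 if load <= 300 else 10.0 * (load / 300) ** 3

curve = benchmark(simulated)
```

The recorded curve is exactly the artifact you want on file before go-live: it tells you where the resources max out, and later, the live baseline can be compared against it when setting tolerances.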


The point is, you’ll need a standard of measurement and documentation; and like any good high school science experiment, you’re probably going to need a hypothesis, method, results, conclusion, and evaluation. What I’m trying to say is you need to understand what you are measuring and its effect on the environment. Yes, there are some variables everyone’s going to measure: CPU and memory utilization, network latency, and storage capacity growth. But your environment may require you to keep an eye on other variables, like web traffic or database queries, and knowing a good value from a bad one is critical in management.


The system management tools you’ll use should be tried and tested. If you cannot easily get the results you need with your current implementation, it may be time to look at something new. It may be something open-source, or it may be a paid solution with some professional services and training to maximize this new investment. As long as you’re monitoring and recording the statistics that matter in your environment, you should be in a great position to evaluate new hardware and software options.


You may have heard comments about the “cloud being a great place for speed and innovation,” which is truer now than ever, given the speed at which providers start to monitor and bill you for your usage. I believe that to be a proper private cloud, or the on-premises part of a hybrid cloud, you need to be able to monitor detailed usage growth and have the potential to begin to show or charge departments for their IT usage. By monitoring hybrid cloud metrics, you can make a better-informed decision around moving applications to the cloud. As with any expenditure, you should also look at adding in functionality that starts to give you cloud-like abilities on-premises. Maybe begin by implementing a strategy to show chargeback to different departments or line-of-business application owners. Start making the move from keeping the lights on to innovation, and have IT lead the drive to a competitive advantage in your industry.
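A showback/chargeback model can start as simply as apportioning measured resource usage per department against unit rates. The departments, usage figures, and rates below are entirely invented for illustration; real figures would come from your monitoring data.

```python
# Unit rates per month — assumptions for illustration only.
RATES = {"vcpu": 15.0, "gb_ram": 5.0, "gb_storage": 0.10}

# Measured monthly usage per department (invented figures).
usage = {
    "finance":   {"vcpu": 8, "gb_ram": 32, "gb_storage": 500},
    "marketing": {"vcpu": 4, "gb_ram": 16, "gb_storage": 2000},
}

def monthly_charge(dept_usage):
    """Sum measured usage times unit rate for one department."""
    return sum(RATES[resource] * amount for resource, amount in dept_usage.items())

showback = {dept: monthly_charge(u) for dept, u in usage.items()}
# finance: 8*15 + 32*5 + 500*0.10 ≈ 330 per month
```

Even a toy model like this is enough to start the “show” half of showback; once departments trust the numbers, moving to actual chargeback is a policy decision rather than a technical one.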


Moving from a data center to a private cloud isn’t as simple as changing the name on the door. It takes time and planning: implementing new processes to achieve goals, providing some form of automation and elasticity back to the business, and building ways to monitor and report trends. Like any problem, breaking it up into bite-size chunks not only gives you a sense of achievement, but also more control over the project going forward.


Whether you are moving to or even from the cloud, the above process can be applied. You need to understand the current usage, baselines, costs, and predicted growth rates, as well as any SLAs that are in place, then how this marries up with the transition to the new platform. It’s all well and good reaching for the New Kid on the Block when they come around telling you that their product will solve everything up to and possibly including “world hunger,” but let’s be realistic and make sure you’ve done your homework and you have a plan of attack. Predetermined KPIs and deliverables may seem like you’re adding shackles to the project, but it helps keep you focused on the goal and delivering back results to your board.


  Does investment equal business value? Spending money on the new shiny toy from Vendor X to replace aging infrastructure doesn’t always mean you’re going to improve things. It’s about what business challenges you’re trying to solve. Once you have set your sights on a challenge, it’s about determining what success is for the project, what KPIs and milestones you’re going to set and measure, what tools you’ll use, and how you can prove the value back to the board, so the next time you ask for money, it’s released a lot more easily and quickly.

I’m in sunny Redmond this week for the annual Microsoft MVP Summit. This is my 10th Summit, and I’m just as excited for this one as if it were my first.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Massive Admissions Scandal

Having worked in higher education for a handful of years, none of this information is shocking to me. The most shocking part is that any official investigation was done, and indictments handed down. I truly hope this marks a turning point for our college education system.


Philadelphia just banned cashless stores. Will other cities follow?

One of the key benefits touted by cryptocurrency advocates is “access for everyone.” Well, the City of Brotherly Love disagrees with that viewpoint. The poor have cash, but not phones. Good for Philly to take a stand and make certain stores remain available to everyone.


Facebook explains its worst outage as 3 million users head to Telegram

A bad server configuration? That’s the explanation? Hey, Facebook, if you need help tracking server configurations, let me know. I can help you be back up and running in less than 15 hours.


Your old router is an absolute goldmine for troublesome hackers

Included for those that need reminding: don’t use the default settings for the hardware you buy off the shelf (or the internet). I don’t understand why there isn’t a default standard setup that forces a user to choose a new password; it seems like an easy step to help end users stay safe.


Why CAPTCHAs have gotten so difficult

Because computers are getting better at pretending to be human. You would think that computers would also be better at recognizing a robot when they see one, and not bother me with trying to pick out all the images that contain a signpost or whatever silly thing CAPTCHAs are doing this week.


The Need for Post-Capitalism

Buried deep in this post is the comment about a future currency based upon social capital. You may recall that China is currently using such a system. Last week, I thought that was crazy. This week, I think that maybe China is cutting edge, and trying to help get humanity to a better place.


The World Wide Web at 30: We got the free and open internet we deserve

“We were promised instant access to the whole of humanity's knowledge. What we got was fake news in our Facebook feeds.” Yep, that about sums up 30 years of the web.


Ever feel as if you are the dumbest person in the room? That's me at MVP Summit for 4 days.


By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s a helpful article on how to achieve your cloud-first objectives for cloud-based email. Email is a critical application for government, and there are several suggestions for improving reliability.


The U.K. government’s Cloud First policy mandates the need to prioritize cloud-based options in all purchasing decisions—and email providers are no exception to this rule. The rationale is clear: to deliver “better value for money.” Cloud-based email can help with this—offering huge operational benefits, especially considering the sheer number of users and the broad geographical footprint of the public sector. It can also be much simpler and cheaper to secure and manage than on-prem email servers.


However, while email services based in the cloud can offer a number of advantages, such services also pose some unique challenges. IT managers in the public sector must track their email applications carefully to help ensure cloud-based email platforms remain reliable, accessible, and responsive. In addition, it’s important to monitor continuously for threats and vulnerabilities.


Unfortunately, even the major cloud-based email providers have had performance problems. Microsoft Office 365, a preferred supplier with whom the U.K. government has secured a preferential pricing deal, has been subject to service outages in Europe and in the United States, as recently as late last year.


Fortunately, many agencies are already actively monitoring cloud environments. In a recent SolarWinds freedom of information (FOI) request, 68% of NHS organisations and 76% of central government organisations reported having migrated some applications to the cloud and using monitoring tools to oversee this. Although monitoring in the cloud can be daunting, organisations can apply many of the best practices used on-prem to the cloud—and often even use the same tools—as part of a cloud email strategy that can help ensure a high level of performance and reliability.


Gain visibility into email performance


Many of the same hiccups that affect the performance of other applications can be equally disruptive to email services. Issues including network latency and bandwidth constraints, for example, can directly influence the speed at which email is sent and delivered.


Clear visibility into key performance metrics on the operations of cloud-based email platforms is a must for administrators. They need to be able to proactively monitor email usage throughout the organisation, including the number of users on the systems, users who are running over their respective email quotas, archived and inactive mailboxes, and more.
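As a first pass at the usage checks described above — users over quota and inactive mailboxes — something like the sketch below could run against exported mailbox records. The records and thresholds are invented for illustration; a real deployment would pull this data from the mail platform’s reporting API.

```python
from datetime import date, timedelta

# Invented mailbox records for illustration.
mailboxes = [
    {"user": "alice", "used_mb": 4900, "quota_mb": 5000, "last_login": date(2019, 3, 10)},
    {"user": "bob",   "used_mb": 5200, "quota_mb": 5000, "last_login": date(2019, 3, 12)},
    {"user": "carol", "used_mb": 300,  "quota_mb": 5000, "last_login": date(2018, 6, 1)},
]

def over_quota(boxes, threshold=0.95):
    # Flag mailboxes at or above the given fraction of their quota.
    return [b["user"] for b in boxes if b["used_mb"] >= b["quota_mb"] * threshold]

def inactive(boxes, today, days=90):
    # Flag mailboxes with no login in the last `days` days.
    return [b["user"] for b in boxes if today - b["last_login"] > timedelta(days=days)]

print(over_quota(mailboxes))                   # ['alice', 'bob']
print(inactive(mailboxes, date(2019, 3, 18)))  # ['carol']
```

Reports like these are cheap to automate, and the thresholds become the tolerances administrators alert on proactively rather than waiting for a full-mailbox ticket.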


When working across both a cloud-based email platform and an on-prem server, in an ideal world, administrators should set up an environment that allows them to get a complete picture across both. Currently, however, many U.K. public sector entities are using four or more monitoring tools—as is the case for 48% of the NHS and 53% of central government, according to recent SolarWinds FOI research. This highlights a potential disconnect between different existing monitoring tools.


Monitor mail paths


When email performance falters, it can be difficult to tell whether the fault lies in the application or the network. This challenge is often exacerbated when the application resides in the cloud, which can limit an administrator’s view of issues that might be affecting the application.


By using application path monitoring, administrators can gain visibility into the performance of email applications, especially those that reside in a hosted environment. By monitoring the “hops,” or transfers between computers, that requests take to and from email servers, administrators can build a better picture of current service quality and identify any factors that may be inhibiting email performance. In a job where time is scarce, this visibility can help administrators troubleshoot problems without the additional hassle of determining if the application or network is the source of the problem.
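To illustrate the idea of watching per-hop performance, the sketch below takes traceroute-style (hop, latency) samples — hardcoded here rather than gathered live — and flags the hop where latency jumps most, which is often the segment worth investigating first.

```python
# (hop number, round-trip latency in ms) — an invented path to a mail server.
hops = [(1, 0.8), (2, 1.2), (3, 5.1), (4, 6.0), (5, 48.3), (6, 49.1)]

def worst_jump(samples):
    """Return (hop, delta_ms) for the hop that added the most latency."""
    jumps = [
        (curr_hop, curr_ms - prev_ms)
        for (_, prev_ms), (curr_hop, curr_ms) in zip(samples, samples[1:])
    ]
    return max(jumps, key=lambda j: j[1])

hop, delta = worst_jump(hops)
# Here hop 5 adds ~42 ms — plausibly the WAN segment toward the hosted platform.
```

A commercial path-monitoring tool does far more than this, of course, but the core signal it surfaces is the same: which transfer along the path is responsible for the slowdown, the network side or the application side.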


By applying existing standard network monitoring solutions and strategies to email platforms, administrators can gain better insight into the performance of cloud email servers. This will help keep communications online and running smoothly.


Find the full article on GovTech Leaders.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

It’s the most wonderful time of year. The time to come together as a community and – through pure opinion and preference – crown someone (or something) VICTORIOUS.


SolarWinds Bracket Battle 2019 is here. And, for its 7th anniversary, have we got something truly thought-provoking, debate-inducing, and oddly… meta for you.


The Premise:

It’s a well-known and well-understood fact that not all superpowers are created equal. Even amongst those blessed with supernatural or mutant abilities, someone’s going to end up with the fuzzy end of the lollipop.


And so we ask: If one were to end up in the shallow end of the superpowers pool, would it be better to just be normal? Or, is having ANY power – even something as random and seemingly worthless as the ability to turn into a bouncing ball (Bouncing Boy, DC Comics, 1961) or to turn music into light (Dazzler, Marvel, 1980) – better than nothing at all?


If the chance to don a super suit, hang at the mutant mansion, or chill with the League is worth enduring an endless stream of ridicule and continuous side-eye, then what IS the ULTIMATE USELESS SUPERPOWER?


Would you rather have the chance to parley with some tree rodents or parlez-vous binary? Superhuman strength for a millisecond or lactokinesis (which is basically milk mind-control)?


These are just some of the real head-scratchers we have in store for you in this year’s SolarWinds Bracket Battle.


Starting today, 33 of the most random and useless superpowers that we could imagine will battle it out until only one remains and reigns supreme as the BEST of the WORST.


We picked the starting point and initial match-ups; however, just like in Bracket Battles of the past, it’ll be up to the THWACK community to decide the winner.


Don’t Forget to Submit Your Bracket: If you correctly guess the final four bracket contestants, we’ll gift you a sweet 1,000 THWACK points. To do this, you’ll need to go to the personal bracket page and select your pick for each category. Points will be awarded after the final four contestants are revealed.


Bracket Battle Rules:


Match-Up Analysis:

  • For each useless superpower match-up, we’ve provided short descriptions of each power. Just hover over or click each one to read the description before you vote.
  • Unlike in past years, there really isn’t an analysis of how the match-up would work because – well, let’s just say we want your imaginations to run wild. (Quite frankly, these are all pretty useless to begin with; it’s hard to really grasp how you'd make some of these abilities work to your advantage.)
  • Anyone can view the bracket and match-ups, but in order to vote or comment, you must have a THWACK® account.



  • Again, you must be logged in to vote and trash talk
  • You may vote ONCE for each match-up
  • Once you vote on a match-up, click the link to return to the bracket and vote on the next match-up in the series



  • Please feel free to campaign for your preferred form of uselessness, or debate the relative usefulness of any entry (also, feel free to post pictures of bracket predictions on social media)
  • To join the conversation on social media, use the hashtag #SWBracketBattle
  • There’s a printable PDF version of the bracket available, so you can track the progress of your favorite picks



  • Bracket release is TODAY, March 18
  • Voting for each round will begin at 10 a.m. CT
  • Voting for each round will close at 11:59 p.m. CT on the date listed on the Bracket Battle home page
  • Play-in battle opens TODAY, March 18
  • Round 1 OPENS March 20
  • Round 2 OPENS March 25
  • Round 3 OPENS March 28
  • Round 4 OPENS April 1
  • Round 5 OPENS April 4
  • The Most USEFUL Useless Superpower will be announced April 10


If you have any other questions, please feel free to comment below and we’ll be sure to get back to you!


What power is slightly better than nothing at all? We’ll let the votes decide!


Access the Bracket Battle overview HERE>>

Migration to the cloud is just like a house move. The amount of preparation done before the move determines how smoothly the move goes. Similarly, there are many technical and non-technical actions that can be taken to make a move to the cloud successful and calm.


Most actions (or decisions) will be cloud-agnostic and can be carried out at any time, but once a platform is chosen, it unlocks even more areas where preparations can be made in advance.


In no particular order of importance, let’s look at some of these preparations.



Workload Suitability

Some workloads may not be suitable for migration to the cloud. For example, compliance, performance, or latency requirements might force some workloads to stay on-premises. Even if a company adopts a “cloud-first” strategy, such workloads could force the change of model to a hybrid cloud.


If not done initially, identification of such workloads should be carried out as soon as the decision on the platform is made so the design can cater for them right from the start.


Hybrid IT

Most cloud environments are a hybrid of private and public cloud platforms. Depending on the size of the organization, it’s common to arrange for multiple high-speed links from the on-premises environment to the chosen platform.


However, those links are quite cumbersome to set up, as many different parties are involved, such as the provider, carrier, networking, and cloud teams. Availability of ports and bandwidth can also be a challenge.


Suffice it to say that lead times for commissioning such links end to end typically range from a few weeks to a few months. For that reason, it’s recommended to prioritize identifying whether such link(s) will be required and to which data centers, and to get the commissioning process started.


Migration Order

This is an interesting one as many answers exist and all of them are correct. It really depends on the organization and maturity level of the applications involved.


For an organization where identical but isolated development environments exist, it’s generally preferred to migrate those first. However, you may find exceptions in cases where deployment pipelines are implemented.


It’s important to keep stakeholders fully involved in this process, not only because they understand the application best and can foresee potential issues, but also so they’re aware of the migration schedule and what constitutes reliable tests before signing off.



Post-Migration Improvements

Most organizations like to move their applications to the cloud and improve later. This is especially true if there’s a deadline and migration is imminent. It makes sense to divide the whole task into two clear and manageable phases, as long as the improvement part isn’t forgotten.


That said, the thinking process on how to refactor existing applications post-migration can start now. There are some universal concepts for public cloud infrastructure like autoscaling, decoupling, statelessness, etc., but there will be some specific to the chosen cloud platform.


Such thinking automatically forces the teams to consider potential issues that might occur and therefore provides enough time to mitigate them.



Operational Readiness

Operations and support teams are extremely important in the early days of migration, so they should be comfortable with all the processes and escalation paths if things don’t go as planned. However, it’s common to see organizations force rollouts as soon as initial testing is done (to meet deadlines) before those teams are ready.


This can only cause chaos and result in a less-than-ideal migration journey for everyone involved. A way to ensure readiness is to do dry runs with a few select low-impact test environments, driven by the operations and support team solving deliberately created issues. The core migration team should be contactable but not involved at all.


Migration should only take place once both teams are comfortable with all processes and the escalation paths are known to everyone.



Training

The importance of training cannot be emphasized enough, and it’s not just about technical training for the products involved. One often-forgotten exercise is to train staff outside the core application team, e.g., operations and support, on the applications being migrated.


There can be many post-migration factors to consider that make it necessary to provide training on applications, such as application behavior changes, deployment mechanism changes, security profile, and data paths.


Training on the technologies involved can start as early as the platform decision. Application-specific training should occur as soon as the application is ready for migration but before the dry runs. Both combined will stand the teams in good stead when migration day comes.



Conclusion

Preparation is key for a significant task like cloud migration. With a bit of thought, many things can be identified that are not dependent on platform choice or the migration itself and can therefore be taken care of well in advance.


A successful cloud migration depends on many factors, and reducing the number of tasks left until migration day means less stress for everyone. It pays to be prepared. As Benjamin Franklin put it:

  “By failing to prepare, you are preparing to fail.” 

If you’re a returning reader to my series, thank you for listening this far. We have a couple more posts in store for you. If you’re a new visitor, you can find previous posts below:

Part 1 – Introduction to Hybrid IT and the series

Part 2 – Public Cloud experiences, costs and reasons for using Hybrid IT

Part 3 – Building better on-premises data centres for Hybrid IT

Part 4 – Location and regulatory restrictions driving Hybrid IT


In this post, I’ll be looking at how I help my customers assess and architect solutions across the options available throughout on-premises solutions and the major public cloud offerings. I’ll look at how best to use public cloud resources and how to fit those to use cases such as development/testing, and when to bring workloads back on-premises.


In most cases, modern applications that have been built cloud-native, such as functions or using as-a-service style offerings, will have a natural fit to the cloud that they’ve been developed for. However, a lot of the customers I work with and encounter aren’t that far along the journey. That’s the desired goal, but it takes time to refactor or replace existing applications and processes.


With that in mind, where do I start? The first and most important part is in understanding the landscape. What do the current applications look like? What technologies are in use (both hardware and software)? What do the data flows look like? What does the data lifecycle look like? What are the current development processes?


Building a service catalogue is an important step in making decisions about how you spend your money and time. There are various methods out there for achieving these assessments, like TIME analysis or The 6 Rs. Armed with as much information as possible, you’re empowered to make better decisions.


Next, I usually look at where the quick wins can be made—where the best bang for your buck changes can be implemented to show return to the business. This usually starts in development/test environments and potentially pre-production environments. Increasing velocity here can provide immediate results and value to the business. Another area to consider is backup/long-term data retention.


Development and Testing


For development and test environments, I look at the existing architecture: are these traditional VM-based environments? Can they be containerized easily? Is containerization, where possible, a good step toward more cloud-native behavior and thinking?


In traditional VM environments, can automation be used to quickly build and destroy environments? If I’m building a new feature and I want to do integration testing, can I use mocks and other simulated components to reduce the amount of infrastructure needed? If so, then these short-lived environments are a great candidate for the public cloud. Where you can automate and have predictable lifecycles into the hours, days, and maybe even weeks, the efficiencies and savings of placing that workload in the cloud are evident.
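The build-and-destroy pattern for these short-lived environments can be expressed as a context manager, so teardown is guaranteed even when tests fail. `create_env` and `destroy_env` below are stand-ins for whatever provisioning tooling you actually use (Terraform, cloud SDK calls, etc.):

```python
from contextlib import contextmanager

def create_env(name):
    # Stand-in for real provisioning (Terraform apply, cloud API calls, ...).
    print(f"provisioning {name}")
    return {"name": name, "status": "running"}

def destroy_env(env):
    # Stand-in for real teardown — the step the cloud savings depend on.
    env["status"] = "destroyed"
    print(f"destroyed {env['name']}")

@contextmanager
def ephemeral_environment(name):
    env = create_env(name)
    try:
        yield env
    finally:
        destroy_env(env)  # always torn down, even if the tests raise

with ephemeral_environment("feature-123-integration") as env:
    # Run integration tests against env here.
    assert env["status"] == "running"
```

The design choice is the `finally` block: a predictable lifecycle measured in hours or days only stays cheap in the public cloud if nothing is ever left running by accident.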


When it comes to longer cycles like acceptance testing and pre-production, perhaps these require a longer lifetime or greater resource allocation. In these circumstances, traditional VM-based architectures and monolithic applications can become costly in the public cloud. My advice is to use the same automation techniques to deploy these to local resources with more reliable costs. However, the plan should always look forward and assess future developments, so you can replace components with modern architectures over time and deploy across both on-premises and public cloud.


Data Retention


As I mentioned, the other area I often explore is data retention. Can long-term backups be sent to cold storage in the cloud? For infrequently accessed data, the benefits over tape management are often prominent. Restore access may be slower, but how often are you performing those operations? How urgent is a restore from, say, six years ago? Many times, you can wait to get this information back.


Continuing the theme of data, it’s important to understand what data you need where and how you want to use it. There are benefits to using cloud native services for things like business intelligence, artificial intelligence (AI), machine learning (ML), and other processing. However, you often don’t need the entire data set to get the information you need. Look at building systems and using services that allow you to get the right data to the right location, or bring the data to the compute, as it were. Once you have the results you need, the data that was processed to generate them can be removed, and the results themselves can live where you need them at that point.


Lastly, I think about scale and the future. What happens if your service/application grows beyond your expectations? Not many people will be the next Netflix or Dropbox, but it’s important to think about what would happen if that came about. While uncommon, there are situations where systems scale to a point that using public cloud services becomes uneconomical. Have you architected the solution in a way that allows you to remove yourself? Would major work be required to build back on-premises? In most cases, this is a difficult question to answer, as there are many moving parts, and the answer relies on levels of success and scale that may not have been predictable. I’ve encountered this type of situation over the years, usually not to the full extent of complete removal of cloud services. I commonly see it in data storage. Large amounts of active data can become costly quickly. In these situations, I look to solutions that allow me to leverage traditional storage arrays that can be near-cloud, usually systems placed in data centers that have direct access to cloud providers.


In my final post, I’ll be going deeper into some of the areas I’ve discussed here and will cover how I use DevOps/CICD tooling in hybrid IT environments.


Thank you for reading, and I appreciate any comments or feedback.

Saw Captain Marvel this past weekend. It's a good movie. You should see it, too. Make sure you stick around for the second end credit scene!


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Quadriga's Wallets Are Empty, Putting Fate Of $137 Million In Doubt

Somehow, I don’t think “create Ponzi scheme using crypto and fake my own death” was the exact pitch to investors. Incidents like this are going to give cryptocurrencies a bad name.


Part 1: How To Be An Adult— Kegan’s Theory of Adult Development

One of the most important skills you can have in life is empathy. Take 10 minutes to read this and then think about the people around you daily, and their stage of development. If you start to recognize the people that are Stage 2, for example, it may change how you interact with, and react to, them.


Volvo is limiting its cars to a top speed of 112 mph

Including this because (1) we got a new Volvo this week and (2) the safety features are amazing. There are many times it seems as if the car is driving itself. It takes a while to learn to let go and accept the car is now your pilot.


This bill could ban e-cigarette flavors nationwide

"To me, there is no legitimate reason to sell any product with names such as cotton candy or tutti fruitti, unless you are trying to market it to children.” Preach.


Microsoft is now managing HP, Dell PCs via its Managed Desktop Service

And we move one step closer to MMSP – Microsoft Managed Service Provider.


A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians

We are at the beginning of a boom with regards to machine learning. Unfortunately, most of the data we use comes with certain inherent biases. Data is the fuel for our systems and our decisions. Make sure you know what fuel you are using to power your algorithms.


Burnout Self-Test

We’ve all been there, or are there. Do yourself a favor and take 5 minutes for this quiz. Then, assess where you are, where you want to be, and the steps you need to take. Mental health is as important as anything else in your life.


The last real snowfall of the season, so I took the time to make a Fire Angel (it's like a Snow Angel, but in my Fire Circle):


The most difficult step in any organization’s journey to the cloud is the first one: where do you start? You’ve watched the industry slowly adopt cloud computing over the last decade, but when you look at your on-premises data center, you can’t conceptualize how to break out. You’ve built your own technology prison, and you’re the prisoner.


You might have a traditional three-tier application, and the thought of moving the whole stack to the cloud induces anxiety. You won’t control the hardware, and you won’t know who has access to the hardware. You won’t know where your data is, or who has access to it. It’s the unending un-knowing of cloud that makes so many of us retreat to the cold aisle, lean against the KVM, and clutch our tile pullers a little tighter.


Then you consider the notorious migration methods you’ve read about online.


Lift-and-shift cloud migrations are harrowing events that we should all experience at least once, and should never, ever experience more than once. Refactoring is often an exercise in futility, unless you have a crystal-clear understanding of what the resulting product will look like.


So how do you ease into cloud computing if lift-and-shift and refactoring aren’t practical for you?


You start considering a cloud-based solution for every new project that comes your way.


For example, I’ve recently been in discussions with an application team to improve the resiliency of their database solution. The usual solutions were kicked around: a same-site cluster for HA, a multi-site cluster for HA and DR, or an active-active same-site cluster for HA and FT. Of course, in each case, there’s excess hardware capacity that will sit idle until a failure event. The costs associated with these three solutions would inspire any savvy product manager to think, “there’s got to be a better way.”


And there is. It’s a cloud-native database service with an SLA for performance and availability, infinite elasticity, and bottomless storage. (Yes, I’m exaggerating a bit here, but look at the tech specs for Google Cloud SQL or Amazon RDS; infinite is only a mild stretch.) You pay for the service based on consumption, which means all those idle cycles that would otherwise consume power and cooling are poof, gone. You’ll need to sort out the connectivity and determine the right way for your enterprise to connect with your cloud services, but that’s certainly easier than designing, procuring, and implementing the hardware and licenses for your on-prem HA solutions.


Your application team gets the service they want without investing in bare metal that would only serve to make your data center chillers work a little harder. And more importantly, you’ve taken your first step in the journey to the cloud.


A successful migration can spark interest in cloud as a solution for other components. The same application team, realizing now that their data is in the cloud and their app servers aren’t, might express an interest in deploying an instance group of VMs into the same cloud to be close to their data. They’ll want to learn about auto-scaling next. They’ll want to learn about cost savings by moving web servers to Debian. They’ll want to know more about how to set up firewall rules and cloud load balancers. They’ll develop an appetite for all that cloud has to offer.


And while you may not be able to indulge all their cloud fantasies, you’ll find that moving to the cloud is a much simpler and more enjoyable effort when you’re working in partnership with your application team.


That’s the secret to embracing not just cloud, but any new technology: let your business problems lead you to a solution.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article on BYOD, including data on DHS employees. I feel that, in some cases, a better balance needs to be achieved on this issue.


Agencies are still grappling with the types of devices using their networks and with how to make those devices secure. Today, the mobile device challenge has gotten even more complex.


Welcome to BYOD’s second act, which may be even bigger than the first.


The numbers from the Department of Homeland Security tell the tale. According to DHS, its employees are currently using 90,000 devices. Thirty-eight percent of those employees are using government-issued devices, while the rest rely on their personal iPhone or Android mobile devices.


Although policies and guidance attempt to ensure mobile device security, initiatives like the DHS Mobile Device Security project and the Committee on National Security Systems (CNSS) Policy No. 11 go only so far. Employees don’t necessarily want to carry highly encrypted or modified devices. Like everyone else, they are accustomed to their phones being easy to use, not a burden.


While programs like these are necessary, and must be encouraged and followed, agencies should consider augmenting their mobile device security efforts with a few additional strategies.


Let employees keep their devices. Employees will inevitably use their personal devices over government networks. The trick is to make those devices secure while letting employees continue to use them with minimal inconvenience.


Keep tabs on those devices. Agencies must balance the reality of personal device use with security measures that allow administrators to easily manage and secure those devices, preferably from a central location. Administrators should be able to remotely wipe and lock devices, set passwords, and implement mobile device tracking that uses GPS to find lost and stolen devices.


Go beyond the devices into the network itself. Automated threat monitoring solutions that employ constantly updated threat intelligence and continuously scan for potential anomalies are good places to start. Agency teams should consider complementing this tactic with user device tracking to quickly identify and locate unauthorized devices. Monitoring and capturing network logins and other events can also help detect questionable network activity and prevent unwanted intrusions.


Get a handle on bandwidth. Device management also involves managing the impact that devices can have on the network. Mobile devices used for bandwidth-hogging applications, such as video, can significantly slow down the network. Agency administrators should consider implementing network bandwidth analysis solutions that allow them to identify which applications and endpoints are consuming the most bandwidth. Through device tracking, they can also track excessive bandwidth usage back to a particular user and mobile device.
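As a sketch of the underlying idea, with made-up flow records standing in for what a flow collector (NetFlow/sFlow, say) would export, finding top talkers is just a sum-and-sort:

```python
from collections import defaultdict

# Sketch: given flow records (device, application, bytes), find the
# endpoints consuming the most bandwidth. The records are invented sample
# data for illustration.
flows = [
    {"device": "10.0.0.12", "app": "video", "bytes": 900_000_000},
    {"device": "10.0.0.45", "app": "email", "bytes": 40_000_000},
    {"device": "10.0.0.12", "app": "video", "bytes": 600_000_000},
    {"device": "10.0.0.77", "app": "web",   "bytes": 120_000_000},
]

# Total bytes per endpoint
usage = defaultdict(int)
for flow in flows:
    usage[flow["device"]] += flow["bytes"]

# Heaviest consumers first
top_talkers = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
print(top_talkers[0])  # ('10.0.0.12', 1500000000)
```

From there, device tracking can map that IP back to a particular user and mobile device, as described above.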


Although most of the focus on BYOD has been on security, mobile device management really must be a two-pronged approach. Security is and always will be important, but the ability to ensure that networks continue to operate efficiently and effectively in the midst of a device onslaught is also critical.


It’s also something that many agencies are still grappling with, nearly 10 years after BYOD was first introduced. We’ve come a long way since then, as the programs initiated by the DHS and other agencies show. But we still have far to go.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

In early January we traveled back in time to see how SolarWinds works with Cisco gear, in preparation for Cisco Live EMEA.
Now we’re talking about Microsoft and VMware, but we won’t travel back in time, and there’s no event happening. Why? New releases, of course.


But let’s take a few steps back to reiterate where we are, and what’s happening.

Looking at the Microsoft universe, what are you guys monitoring?

The most essential thing is probably the operating system. SolarWinds® supports Windows Server 2003-2016 out of the box with Server & Application Monitor (SAM) templates, and there’s a Server 2019 template already in the community, and I’d guess that it might turn into an “official” template at some point.



The templates can be assigned to nodes after a search, and they provide well-rounded monitoring with a few mouse clicks. Each template can be customized based on your needs, but essentially, they are ready to use, and the key performance indicators monitored make sure the machine is not acting up.


Next, there are applications.

A quick search for “Microsoft” here shows 71 templates built into the product, and 263 templates contributed by the community. Thanks, guys!



Some of the templates are quite old, while others have been updated recently. You might find yourself in a situation with a template available for “Solution X 2016,” but you just updated to “2019” or whatever—I suggest giving the 2016 template a try first, as new products usually don’t change too much about monitoring.

It’s all nice and shiny, but there’s more.


The next step up is a feature called AppInsight, and this brightens up the day.
There are a couple of Microsoft solutions that are oh-so-popular that almost every organization is using them, and managing these applications can be unnecessarily complicated.
At SolarWinds, we don’t like complicated things, so in 2013, AppInsight for SQL came onboard, followed by Exchange and IIS in later versions. AppInsight isn’t exactly magic, but it’s not too far from it.

During the process of adding a Windows node, SAM will automatically check if AppInsight is going to be a match and will suggest adding it in one of the steps. Just a single mouse click and loads of KPIs are monitored without further intervention required. And the best thing is, each KPI comes with supporting information to explain what exactly is going on if it suddenly turns red. This is invaluable for someone like me, who thinks SQL was invented by the devil to bug humankind.
But we didn’t stop with the three AppInsights. There was a race here on THWACK for quite a while between different feature requests, but this one won by far, so SAM 6.8 comes with AppInsight for AD—great!


Let’s move on.
At some point, I think it was 2017, we saw the first templates for Office365. They have been updated recently, and you’ll find a quick overview here. As Microsoft does not provide access to logs—have a look here—the templates instead require the AzureAD module and use PowerShell for monitoring.



Together with NetPath, you get an excellent overview of Office365 as shown here in the online demo.

Likewise, in 2017, we added support for monitoring Azure.



A few articles explain the what and the how, and I suggest starting with this one. On a basic level, you attach the instance, throw an agent on the box, and monitor whatever is running on it. Again, NetPath is your friend even for this scenario.


One more thing regarding Microsoft: Hyper-V.
You can monitor the basics with SAM or Network Performance Monitor (NPM), and both provide information about used resources, what machines are running on a host, and how they relate to each other with Orion® Maps.
That’s nice, but we can do better, and here’s how:


Let me introduce you to Virtualization Manager (VMAN), which will deal with both Hyper-V and VMware.

VMAN goes much deeper into virtual environments and containers, and trust me: virtualization isn’t dead at all.

Besides checking everything between a VM and the datastore—even vSAN—VMAN comes with pretty cool features like capacity planning, which just now received multi-clustering as a feature, and my all-time favorite: VM sprawl.
It contains this guy:


I so enjoy clicking “power off VM” to see if someone complains.


The latest version finally added another longtime feature request, VMware events.
Some of you guys used workarounds to get these events into the Orion Platform in the past.
We use the API to retrieve events in near real-time from vCenter or standalone hosts, and it works automatically—you add the gear, and we’ll do the rest for you.


As I said earlier, we don’t like complicated things here at SolarWinds.

Monitoring tools are vital to any infrastructure. They analyze and give feedback on what’s going on in your data center. If there are anomalies in traffic, a network monitoring tool will catch them and alert the administrators. When disk space is getting full on a critical server, a server monitoring tool alerts the server administrators that they need to add space. Some tools are only network tools, or only systems tools. However, these may not always provide all the analysis you need. There are additional monitoring tools that can cover everything happening within your environment.


In searching for a monitoring tool that fits the needs of your organization, it can be difficult to find one that’s the right size for your environment. Not all monitoring tools are one-size-fits-all. If you’re searching for a network monitoring tool, you don’t need to purchase one that covers server performance, storage metrics, and more. There are several things to consider when choosing a monitoring tool that fits your environment.


Run an Analysis on Your Environment

The first order of business when trying to determine which monitoring tool best fits your needs is to analyze your current environment. There are tools on the market today that help map out your network environment and gather key information such as operating systems, IP addresses, and more. Knowing which systems are in your data center, what types of technologies are present, and what application or applications they support will help you decide which tools are the best fit.
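As a tiny taste of the facts such a tool collects (run here against the local machine only; a real discovery product sweeps the whole network and repeats this for every node it finds):

```python
import platform
import socket

# Minimal sketch of the per-node inventory record a discovery tool builds:
# hostname, operating system, and OS version for the local machine.
node = {
    "hostname": socket.gethostname(),
    "os": platform.system(),         # e.g., "Linux" or "Windows"
    "os_version": platform.release(),
}
print(node)
```

Aggregating records like this across the data center is what tells you which systems, technologies, and applications your monitoring tool actually needs to cover.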


Define Your Requirements

There may be legal requirements defining what tools need to be present in your environment. Understanding these specific requirements will likely narrow down the list of potential tools that will work for you. If you’re running a Windows environment, there are many built-in tools that perform the tasks needed in an environment. Additionally, if your organization is using these built-in tools, it may not be necessary to spend money on another tool to do the same thing.


Know Your Budget

Budgetary demands typically determine these decisions for most organizations. Analyzing your budget will help you understand which tools you can afford and will narrow the list down. Many tools do more than some organizations need, so it isn’t necessary to overspend on a tool that sits outside your budget.


On-prem or Cloud?

When picking a monitoring tool, it’s important to research whether you want an on-premises tool or a cloud-based one. SaaS tools are very flexible and can store the information the tool gathers in the cloud. On the other hand, having an on-premises tool keeps everything in-house and provides a more secure option for data gathered. Choosing an on-prem tool gives you the ability to see your data 24/7/365 and have complete ownership of it. With a SaaS tool, it’s likely you could lose some visibility into how things are operating on a daily basis. Picking the right hosting option should be strictly based on your requirements and comfort with the accessibility of your data.


Just Pick One Already

This isn’t meant to be harsh, but spending too much time researching and looking for a tool that fits your needs may put you in a bad position. While you’re trying to choose between the best network monitoring tools, you could be missing out on what’s actually going on inside your systems. Analyze your environment, define your requirements, know your budget, pick a hosting model, and then make your selection. Ensuring the monitoring solution fits the needs of your environment will pay dividends in the end.


There’s a revolution underway with application deployment, and you may or may not be aware of it. We’re seeing a move by businesses to adopt technology the large public cloud providers have been using for many years, and that trend is only going to increase. In my previous post on digital transformation, I looked at examining the business processes you run and how to position a new digital strategy within your organization. In this article, we look at one of the more prevalent methods of modernizing and paying off some of that technical debt. The large cloud providers, and Google in particular, have been deploying applications using a technology called containerization for years to allow them to run and scale isolated application elements and gain greater amounts of CPU efficiency.


What is Containerization?

While virtualization is hardware abstraction, containerization is operating system abstraction. Containerization has been growing in popularity, as this technology can get around some of the limitations of machine virtualization, like the sheer size of operating systems, the necessary overhead associated with getting the operating system up and keeping it running, and, as previously mentioned, the lower CPU utilization. (Remember, it’s the application we really want to be running and interacting with; the operating system is just there to allow it to stand up.)


Benefits of Containerization

A key benefit of containerization is that you can now run multiple applications within the user space of a Linux operating system, kept separate from the kernel. While each application requires its own dedicated operating system when it’s virtualized, containers hold only what’s required to run the application (an encapsulated runtime environment, if you will). Because of this encapsulation, the application doesn’t see processes or resources outside of itself. As isolation is done down at the kernel level, this removes the need for each application to have its own bloated operating system. It also allows the application to move without any reliance on the underlying operating system and hardware, which in turn gives greater reliability for the application and removes migration issues. As the operating system is already up and running and there’s no hypervisor getting in the way of the execution path, you can spin up a single container or thousands within seconds.


One of the earliest mainstream use cases of containers with which the wider audience may have interacted is Siri, Apple’s voice and digital assistant. Each time you record some audio, it’s sent to a container to be analyzed and a response generated. Then the application quits, and the container is removed. This helps explain why you can’t get a follow-up query to work with Siri. Another key benefit of containerization is its ability to help speed up development. Docker’s slogan "run any app anywhere" comes from the idea that if a developer builds an application on their laptop and it works there, it should run on a server elsewhere. This is the origin of the idea around improved reliability. In turn, the development environment no longer needs to be exactly like production, reducing costs and letting teams tackle and resolve the real issues we see with applications.


One of the major advantages of moving to the cloud is elasticity, and a great way to make use of this is to start using containers. By starting on-premises with legacy application stacks and then slowly converting, or refactoring, their constituent parts to containers, you can make the transition to a cloud provider with greater ease. After all, containerization is basically a way of abstracting away the differences in OS distributions and underlying infrastructure by encapsulating the application’s files, libraries, environment variables, and dependencies into one neat package. Containers also help with application management issues by breaking applications up into smaller parts that function independently. You can monitor and refine these components, which leads us nicely to microservices.



Applications separated into microservices are easier to manage, as you can alter various smaller parts for improvement while not breaking the overall application. Or, individual instances can be brought online immediately when required to meet growing demand.


By using microservices, you become more agile as you move to independently deployable services and the carving up of an application into smaller pieces. This allows for independent development, testing, and deployment of a service on a more frequent schedule, which should let you start paying off some of that previously discussed technical debt.


Understanding the Market

There are several different types of container software, and it seems sometimes this subdivision is misunderstood when talking about the various products available. These are container engine software, container management software, container monitoring software, and container network software. The main bit of confusion in the IT market comes between container engine platforms and orchestration/management solutions. Docker, Kubernetes, Rancher, Amazon Elastic Container Service, Red Hat OpenShift, Mesosphere, and Docker Swarm are just some of the more high-profile names thriving in this marketplace. Two of the main players are Docker and Kubernetes.


Docker is a container platform designed to help build, ship, and run applications. At its heart, the Docker Engine is the core container runtime environment and the foundation for running containers. Docker Swarm is part of their enterprise suite and provides orchestration and management functionality similar to that of Kubernetes. Docker Swarm and Kubernetes are interchangeable if you’re using the Docker Engine.


Kubernetes is an open-source container management platform system that grew out of a software development project at Google. It can deploy, manage, and scale containerized applications on a planetary scale.


There is still some debate as to whether virtualized or containerized applications are more or less secure than the other, and while there are reasons to argue for each, it’s worth noting these two technologies can be used in conjunction with each other. For instance, VMware has Photon OS, a container-optimized operating system that can run on vSphere.


When it comes to dealing with containers, there are some design factors and ideals that differ from those of running virtual machines. Instances are disposable: if an instance stops working, just kill it and start another. Log files are saved externally from the instance, so they can be analyzed later or collated into a central repository. Applications need the ability to retry operations rather than crashing, which allows new microservices to be started if demand cannot be met. Persistent data needs to be treated as special, so how it is accessed and stored needs consideration. Containers consist of two parts: an image file, which is like a snapshot of the required application, and a configuration file. These are read-only, and therefore you need to store data elsewhere or it will be deleted on clean-up. By planning for redundancy and scalability, you are planning how best to help the container improve over time. You must have a method to check that a container is both alive and ready, and if it’s not responding, quickly kill that instance and start another.
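A minimal sketch of the "retry rather than crash" ideal, using a made-up flaky dependency and bounded exponential backoff (the timings and attempt count are illustrative):

```python
import time

# Wrap an operation in bounded retries with exponential backoff. If all
# attempts fail, re-raise and let the orchestrator kill and replace the
# instance, per the disposability ideal.
def with_retries(operation, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let the orchestrator replace us
            time.sleep(base_delay * 2 ** attempt)  # back off, then try again

calls = {"count": 0}

def flaky_dependency():
    """Stand-in for a service that isn't ready on the first two calls."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("dependency not ready yet")
    return "ok"

result = with_retries(flaky_dependency)
print(result)  # "ok", succeeding on the third attempt
```

The same pattern doubles as the liveness/readiness logic mentioned above: probe, tolerate a bounded number of failures, then give up and replace the instance.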


Automation and APIs

Application programming interfaces (APIs) are sections of code exposed by an application to allow other code or applications to control its behavior. So, you have the CLI and GUI for human interaction with an application, and an API for machines. By allowing other machines to access APIs, you gain the ability to automate the application and the infrastructure as a whole. There are tools available today to ensure applications are in their desired state (i.e., working as intended with the correct settings enabled) and modify them if necessary. This interaction requires access to the application’s API both to make changes and to interrogate it to see whether it’s indeed in the desired state.
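As a toy illustration of that desired-state check (the settings here are hypothetical application configuration, not any real tool’s schema), the core of such a tool is a diff between what the API reports and what you want:

```python
# Compare the configuration an API reports against the configuration we
# want, and compute exactly what needs to change.
def drift(actual, desired):
    """Return the settings that must change to reach the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

# Hypothetical settings read back from an application's API:
actual = {"log_level": "debug", "replicas": 2, "tls": True}
# The state we want the application to be in:
desired = {"log_level": "info", "replicas": 2, "tls": True}

changes = drift(actual, desired)
print(changes)  # {'log_level': 'info'}
```

An empty diff means the application is already in its desired state; a non-empty one is the set of changes the automation would push back through the API.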


Containers and Automation

As mentioned previously, the ability to spin up vast numbers of containers with next to no delay to meet spikes in demand requires a level of monitoring and automation that removes the need for human interaction. Whether you’re Google doing several billion start-ups and stops a week, a website that finds cheap travel, or a billing application that needs to scale depending on the fiscal period, there’s a benefit to looking at the use of containers and automation to meet the range of demands.


As we move to a more hybrid style of deploying applications, having the ability to run these as containers and microservices allows you the flexibility to transition this to the best location possible for the workload, which may or may not be a public cloud, without fear of breaking the service. Whether you start in one and move to another, this migration will soon be viewed in the same way a version update happens in an environment, and it will be just another task that needs to be undertaken during an application’s lifespan.


Containers offer a standardized way of deploying applications, and automation is a way to accomplish repetitive tasks and free up your workforce. As the two become more entwined in your environment, you should start to see a higher standard and faster deployment within your infrastructure. This then leads you on the journey to continuous integration and continuous delivery, but that’s a story for another day.

March is here, and I can only hope that means the worst of winter snow is behind us. I’m looking forward to getting the fire circle operational again. It’s amazing what a little bit of yard work can do to help alleviate stress. Burning things while sipping scotch helps, too.


As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!


Experiments, growth engineering, and exposing company secrets through your API: Part 1

For the 1% of people out there who know how to capture network traffic *AND* analyze what is happening, you have a lot of value to offer software companies.


California bill aims to strengthen data breach notification law

I do wish that we had some GDPR-type support at the federal level. It’s beyond time for our government to step in and help protect data privacy for all citizens.


Plain wrong: Millions of utility customers’ passwords stored in plain text

As I was just saying...


Rotten Tomatoes Bans User Comments Before Films’ Release

I’m a bit surprised this was even allowed, but the ban won’t stop the trolls. I’ve seen similar issues at conferences, where you are allowed to rate sessions and speakers despite not having attended the session or watched it later.


America’s Cities Are Running on Software From the ’80s

Easily the least surprising headline of 2019.


China banned millions of people with poor social credit from transportation in 2018

Maybe instead of denying these citizens access to transportation, they should force them all to travel together. I’m certain Fox would purchase the broadcast rights to “Big Brother Airlines.”


Microsoft starts rolling out ability to turn photos of table data into Excel spreadsheets

While I love this, the part of me that cherishes data quality just died a little bit inside.


Microsoft Certified Bacon Engineer:


One is spoiled for choice when it comes to choosing a cloud provider. Cloud platforms have come a long way since their humble beginnings and now offer a myriad of services to suit most use cases customers might have. The question on every CTO's mind is, “Which cloud platform is the best for my business?”


So, what should you look for when choosing a cloud provider in 2019? Let’s look at some common factors in choosing the best cloud computing option.


Cost

This is where most cloud platform evaluations start, as the desire to save on costs is natural for any company. It also makes sense, as the cloud is consumption-based: the cheaper the platform’s services are to start with, the lower the recurring costs will be.


If an audit of existing infrastructure has been carried out, estimating costs for an equivalent deployment on each cloud in scope shouldn’t be too difficult. Network traffic charges and applications might not be as straightforward, so some guesstimates, based on realistic assumptions, might be necessary.
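To make the idea concrete, here’s a back-of-the-envelope sketch of that comparison. Every rate below is a hypothetical placeholder, not any provider’s actual pricing; you’d substitute figures from each provider’s pricing calculator for your audited inventory.

```python
# Back-of-the-envelope monthly cost comparison for an equivalent deployment.
# All rates are hypothetical placeholders -- substitute real figures from
# each provider's pricing calculator for your audited inventory.

def estimate_monthly_cost(vm_count, vm_rate, storage_gb, storage_rate_gb,
                          egress_gb, egress_rate_gb):
    """Sum compute, storage, and network egress into one monthly estimate."""
    compute = vm_count * vm_rate
    storage = storage_gb * storage_rate_gb
    egress = egress_gb * egress_rate_gb  # a guesstimate, per the text above
    return compute + storage + egress

# Hypothetical audited inventory: 10 VMs, 2 TB of storage, 500 GB egress/month.
providers = {
    "cloud_a": estimate_monthly_cost(10, 150.00, 2048, 0.02, 500, 0.09),
    "cloud_b": estimate_monthly_cost(10, 140.00, 2048, 0.025, 500, 0.11),
}

for name, cost in sorted(providers.items(), key=lambda kv: kv[1]):
    print(f"{name}: ~${cost:,.2f}/month")
```

The point of the exercise isn’t the exact totals (which, as noted below, will shift with the eventual design); it’s having a consistent model so the providers are compared on the same inventory.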


However, bear in mind that even if it seems cheaper, those costs are estimates at this point and might change as a result of the eventual design. The same cloud might also become more expensive if the infrastructure profile changes in the long term, perhaps due to refactoring.


Existing Infrastructure


Cost should never be the only factor considered for this decision. There are many others, and existing infrastructure is among them.


Traditional infrastructure has grown organically in every company. Platform and technology choices were driven by the needs of the company at the time. Some went the open-source way, while many had no choice but to run proprietary software.

When moving to the cloud, that history can influence the choice of platform. This is especially true for larger companies that might have enterprise licenses for software, translating into discounts on the platform and lowering costs.


Existing Expertise


Existing infrastructure also affects the expertise that exists within the teams that will be working with the chosen platform going forward. It’s important to take that into consideration, given that learning a new cloud platform takes a lot of time and effort.

Consider that the teams have worked with their existing environment for years to develop their expertise, but for the new platform, they’re expected to be up and running in a fraction of that. It helps if the platform chosen reuses at least some of their existing expertise.


Future Roadmap


What will application development and infrastructure look like in the future? Platforms aren’t changed frequently, so the ones that fit that vision should weigh more heavily than the ones that don’t when considering a cloud platform.


Be careful here. Popular opinion might put one cloud platform in front of another for certain services, but is the company likely to use those services, and if so, would it use the features that differentiate that platform from the others?


Services Offered


Assuming the migration needs to happen soon, does the cloud platform cover (or provide equivalents for) all the services required today? Keeping the on-premises environment and going hybrid might be an option if some applications are not suitable for migration or too difficult to refactor, but it’s safer to look for a cloud provider that can provide the needed services from the start.


One significant consideration here is database platforms that may not have a natively licensed version on the chosen cloud. One workaround is to migrate to a cloud-based database platform, but that’s difficult, especially within the timescales for migration, and comes with a certain amount of risk. Another is to host the database on dedicated instances, but that’s an expensive and inflexible workaround, and is best avoided.



Multi-Cloud
Some organizations have their sights set on a multi-cloud deployment, which, if successful, reduces the risk of choosing the wrong cloud. It might work, but only if there’s existing knowledge of those cloud platforms and compelling reasons to do so, e.g., some functionality that a platform excels in.


However, if there’s no existing knowledge and experience, then it could be a risky strategy. Becoming comfortable with one cloud platform is difficult enough with all the innovation and options available. Adding another platform will stretch the teams too much, without much gain in capability.


A better way is to focus on one platform and learn it really well and in depth. Cloud concepts translate well between platforms, so there’s no reason another can’t be added to the overall infrastructure later.


In the meantime, applications should be built on platform-agnostic infrastructure with standard interfaces between services, which should allow cloud mobility when more options become available.




Picking the best cloud provider is no easy task and a lot of thought goes into it. Comparative cost is never the only factor, and there are many other considerations that can influence the decision.


There are so many choices available that it’s hard to find a use case that can’t be catered for by today’s cloud platforms. While that makes the decision-making harder, it’s a nice problem to have.

By Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article on the internet of things (IoT) and security threats. We’ve all been expecting IoT devices to be problematic and it’s good to see recognition that better controls are needed for the federal government.


The Department of Defense is hearing the IoT alarm bells.


Did you hear about the heat maps used by GPS-enabled fitness tracking applications, which the U.S. Department of Defense (DOD) warned showed the location of military bases, or the infamous Mirai Botnet attack of 2016? The former led to the banning of personal devices from classified areas in the Pentagon, as well as a ban on all devices that use geolocating services for deployed personnel. While the latter may not have specifically targeted government networks, it still served as an effective wakeup call that connected devices have the potential to create a large-scale security crisis.


Indeed, the federal government is evidently starting to hear the alarm bells, considering the creation of the IoT Cybersecurity Act of 2017. The act emphasizes the need for better controls over the procurement of connected devices and assurances that those devices are vulnerability-free and easily patchable.


Physical and cultural silos


Technical, physical, and departmental silos could undermine the government’s IoT security efforts. The DOD comprises about 15,000 networks, many of which operate independently of one another. According to respondents cited in SolarWinds’ 2018 IT Trends Report, federal agencies are susceptible to inadequate organizational strategies and a lack of appropriate training on new technologies.


Breaking the silos


Bringing technology, people, and policy together to protect against potential IoT threats is a tricky business, particularly given the complexity of DOD networks. But it is not impossible, as long as defense agencies adhere to a few key points.


Focus on the people


First, it is imperative that federal defense agencies prioritize the development of human-driven security policies.


Malicious and careless insiders are real threats to government networks—perhaps just as much as, if not more than, external bad actors. Policies regarding which devices are allowed on the network—and who is allowed to use them—should be established and clearly articulated to every employee.


Agencies must also try to ensure everyone understands how those devices can and cannot be used, and continually emphasize those policies. Implementing a form of user device tracking—mapping devices on the network directly back to their users and potentially detecting dangerous activity—can assist in this effort.


Gain a complete view of the entire network


DOD agencies should provide their IT teams with tools that allow them to gain a complete, holistic view of their entire networks. They must institute security and information event management to automatically track network and device logins across these networks and set up alerts for unauthorized devices.
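The core of that automated tracking is simple in principle: compare observed device logins against an inventory of authorized devices and alert on anything unknown. The sketch below illustrates just that idea; the device names, MAC addresses, and event fields are all invented for illustration, and a real SIEM platform does far more.

```python
# Minimal illustration of automated device-login tracking: flag any login
# event whose device is not in the authorized inventory. All identifiers
# below are invented placeholders; a real SIEM does far more than this.

AUTHORIZED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def find_unauthorized(login_events):
    """Return the events whose device MAC is not in the authorized set."""
    return [e for e in login_events if e["mac"] not in AUTHORIZED_MACS]

events = [
    {"user": "jsmith", "mac": "00:1a:2b:3c:4d:5e", "switch_port": "gi0/12"},
    {"user": "unknown", "mac": "de:ad:be:ef:00:01", "switch_port": "gi0/14"},
]

for alert in find_unauthorized(events):
    print(f"ALERT: unauthorized device {alert['mac']} on {alert['switch_port']}")
```

Mapping each flagged MAC back to a switch port (and from there to a user) is what turns a raw alert into the user device tracking described earlier.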


Get everyone involved


It is incumbent upon everyone to be vigilant and involved in all aspects of security, and someone has to set this policy. That could be the chief information security officer or an authorizing official within the agency. People will still have their own unique roles and responsibilities, but just like travelers in the airport, all agency employees need to understand the threats and be on the lookout. If they see something, they need to say something.


Finally, remember that networks are evolutionary, not revolutionary. User education, from top management on down, must be as continuous and evolving as the actions taken by adversaries. People need to be regularly updated and taught about new policies, procedures, tools, and the steps they can take to be on the lookout for potential threats.


As the fitness tracking apps issue and the Mirai Botnet incident have shown, connected devices and applications have the potential to do some serious damage. While government legislation like the IoT Cybersecurity Act is a good and useful step forward, it’s ultimately up to agency information technology professionals to be the last line of defense against IoT security risks. The actions outlined here can help strengthen that line of defense and effectively protect DOD networks against external and internal threats.


Find the full article on SIGNAL.



Welcome back to this series of blogs on my real-world experiences of Hybrid IT. If you’re just joining us, you can find the previous posts here, here, and here. So far, I’ve covered a brief introduction to the series, public cloud costing and experiences, and how to build better on-premises data centres.




In this post, I’ll cover something a little different: location and regulatory restrictions driving hybrid IT adoption. I am British, and as such, a lot of this is going to come from my view of the world in the U.K. and Europe. Not all of these issues will resonate with a global audience; however, they are good food for thought. With adoption of the public cloud, there are many options available to deploy services within various regions across the world. For many, this won’t be much of a concern. You consume the services where you need to and where they need to be consumed by the end users. This isn’t a bad approach for most global businesses with global customer bases. In my corner of the world, we have a few options for U.K.-based deployments when it comes to hyperscale clouds. However, not all services are available in these regions, and, especially for newer services, they can take some time to roll out into these regions.


Now I don’t want to get political in this post, but we can’t ignore the fact that Brexit has left everyone with questions over what happens next. Will the U.K. leaving the EU have an impact? The short answer is yes. The longer answer really depends on what sector you work in. Anyone who works with financial, energy, or government customers will undoubtedly see some challenges. There are certain industries that comply with regulations and security standards that govern where services can be located. There have always been restrictions for some industries that mean you can’t locate data outside of the U.K. However, there are other areas where being hosted in the larger EU area has been generally accepted. Data sovereignty needs to be considered when deploying solutions to public clouds. When there is finally some idea of what’s happening with the U.K.’s relationship with the EU, and what laws and regulations will be replicated within the U.K., we in the IT industry will have to assess how that affects the services we have deployed.


For now, the U.K. is somewhat unique in this situation. However, the geopolitical landscape is always changing, and treaties often change, safe harbour agreements can come to an end, and trade embargos or sanctions crop up over time. You need to be in a position where repatriation of services is a possibility should such circumstances come your way. Building a hybrid IT approach to your services and deployments can help with mobility of services—being able to move data between services, be that on-premises or to another cloud location. Stateless services and cloud-native services are generally easier to move around and have fewer moving parts that require significant reworking should you need to move to a new location. Microservices, by their nature, are smaller and easier to replace. Moving between different cloud providers or infrastructure should be a relatively trivial task. Traditional services, monolithic applications, databases, and data are not as simple a proposition. Moving large amounts of data can be costly; egress charges are commonplace and can be significant.
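To see why those egress charges deserve respect, a little arithmetic helps. The per-GB rate below is a hypothetical placeholder, not any provider’s published price, but the scaling is the real lesson: repatriating a large dataset is rarely cheap.

```python
# Rough illustration of why egress charges matter when repatriating data.
# The per-GB rate is a hypothetical placeholder, not any provider's price.

EGRESS_RATE_PER_GB = 0.09  # hypothetical USD per GB transferred out

def egress_cost(dataset_tb):
    """Cost to move a dataset of the given size (in TB) out of a cloud region."""
    return dataset_tb * 1024 * EGRESS_RATE_PER_GB

for tb in (1, 50, 500):
    print(f"{tb:>4} TB -> ${egress_cost(tb):,.2f}")
```

Even at modest per-GB rates, moving hundreds of terabytes runs into tens of thousands of dollars, which is why stateless and cloud-native services, with little data to drag along, are so much easier to relocate.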


Whatever you are building or have built, I recommend having a good monitoring and IT inventory platform that helps you understand what you have in which locations. I also recommend using technologies that allow for simple and efficient movement of data. As mentioned in my previous post, there are several vendors now working in what has been called a “Data Fabric” space. These vendors offer solutions for moving data between clouds and back to on-premises data centres. Maintaining control of the data is a must if you are ever faced with the proposition of having to evacuate a country or cloud region due to geopolitical change.


Next time, I’ll look at how to choose the right location for your workload in a hybrid/multi-cloud world. Thanks for reading, and I welcome any comments or discussion.

At the start of this week, I began posting a series of how-to blogs over in the NPM product forum on building a custom report in Orion®. If you want to go back and catch up, you can find them here:


It all started when a customer reached out with an “unsolvable” problem. Just to be clear, they weren’t trying to play on my ego. They had followed all the other channels and really did think the problem had no solution. After describing the issue, they asked, “Do you know anyone on the development team who could make this happen?”


As a matter of fact, I did know someone who could make it happen: me.


That’s not because I'm a super-connected SolarWinds employee who knows the right people to bribe with baklava to get a tough job done. (FWIW, I am and I do, but that wasn’t needed here.)


Nor was it because, as I said at the beginning of the week, “I’m some magical developer unicorn who flies in on my hovercraft, dumps sparkle-laden code upon a problem, and all is solved.”


Really, I’m more like a DevOps ferret than a unicorn: a creature that scrabbles around, seeking out hidden corners and openings, and delving into them to see what secret treasures they hold. Often, all you come out with is an old wine cork or a dead mouse. But every once in a while, you find a valuable gem to tuck away into your stash of shiny things. And that leads me to the first big observation I recognized as part of this process:


Lesson #1: IT careers are more often built on a foundation of “found objects”: small tidbits of information or techniques we pick up along the way, which we string together in new and creative ways.


And in this case, my past ferreting through the dark corners of the Orion Platform had left me with just the right stockpile of tricks and tools to provide a solution.


I’m not going to dig into the details of how the new report was built, because that’s what the other four posts in this series are all about. But I *do* want to list out the techniques I used, to prove a point:

  • Know how to edit a SolarWinds report
  • Understand basic SQL queries (really just select and joins)
  • Have a sense of the Orion schema
  • Know some HTML fundamentals


Honestly, that was it. Just those four skills. Most of them are trivial. Half of them are skills that most IT practitioners may possess, regardless of their involvement with SolarWinds solutions.
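As an aside, the second skill on that list really is just SELECTs and JOINs. The sketch below shows the shape of such a query using an in-memory SQLite database; the table and column names are simplified stand-ins for illustration, not the actual Orion schema.

```python
import sqlite3

# Simplified stand-in tables (NOT the real Orion schema) to show the shape
# of the "basic SQL" skill: one SELECT with one JOIN.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Nodes (NodeID INTEGER PRIMARY KEY, Caption TEXT);
    CREATE TABLE Interfaces (InterfaceID INTEGER PRIMARY KEY,
                             NodeID INTEGER, Name TEXT, Status TEXT);
    INSERT INTO Nodes VALUES (1, 'core-sw-01'), (2, 'edge-rtr-01');
    INSERT INTO Interfaces VALUES (10, 1, 'Gi0/1', 'Up'),
                                  (11, 1, 'Gi0/2', 'Down'),
                                  (12, 2, 'Se0/0', 'Up');
""")

# Every down interface, joined back to its parent node for the report.
rows = con.execute("""
    SELECT n.Caption, i.Name
    FROM Interfaces AS i
    JOIN Nodes AS n ON n.NodeID = i.NodeID
    WHERE i.Status = 'Down'
""").fetchall()

for caption, name in rows:
    print(f"{caption}: {name} is down")
```

That’s the entire depth of SQL the report needed: joining two tables and filtering the result.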


Let’s face it, making a loaf of bread isn’t technically complicated. The ingredients aren’t esoteric or difficult to handle. The process of mixing and folding isn’t something that only trained hands can do. And yet it’s not easy to execute the first time unless you are comfortable with the associated parts. Each of the above techniques had some little nuance, some minor dependency, that would have made this solution difficult to suss out unless you’d been through it before.


Which takes me to the next observation:


Lesson #2: None of those techniques are complicated. The trick was knowing the right combination and putting them together.


I had the right mix of skills, and so I was able to pull them together. But this wasn’t a task my manager set for me. It’s not in my scope of work or role. This wasn’t part of a side-hustle that I do to pay for my kid’s braces or feed my $50-a-week comic book habit. So why would I bother with this level of effort?


OK, I'll admit I figured it might make a good story. But besides that?


I’d never really dug into Orion’s web-based reporting before. I knew it was there, I played with it here and there, but really dug into the guts of it and built something useful? Nah, there was no burning need. This gave me a reason to explore and a goal to help me know when I was “done.” Better still, this goal wouldn’t just be a thought experiment, it was actually helping someone. And that leads me to my last observation:


Lesson #3: Doing for others usually helps you more.


I am now a more accomplished Orion engineer than I was when I started, and in the process I’ve (hopefully) been able to help others on THWACK® become more accomplished as well.


And there’s nothing complicated about knowing how that’s a good thing.
