
Geek Speak


It's been a few weeks since VMworld Europe, and that's given Sascha and me a chance to digest both the information and the vast quantities of pastries, paella, and tapas we consumed.


VMworld was held again in Barcelona this year and came two months after the U.S. convention, meaning there were fewer big, jaw-dropping, spoiler-filled announcements, but more detail-driven, fill-in-the-gaps statements to clarify VMware's direction and plans.


As a refresher, at the U.S. event, some of the announcements included:

  • VMware Tanzu – a combination of products and services leveraging Kubernetes at the enterprise level.
  • Project Pacific – related to Tanzu, this will turn vSphere into a Kubernetes native platform.
  • Tanzu Mission Control – will allow customers to manage Kubernetes clusters regardless of where in the enterprise they're running.
  • CloudHealth Hybrid – will let organizations update, migrate, and consolidate applications from multiple points in the enterprise (data centers, alternate locations, and even different cloud providers) as part of an overall cloud optimization and consolidation strategy.
  • The intent to acquire Pivotal
  • The intent to acquire Carbon Black


Going into the European VMworld, one could logically wonder what else there was to say. It turns out there were many questions left hanging in the air after the booths were packed and the carpet pulled up in San Francisco.


Executive Summary

Since selling vCloud Air to OVH, VMware has been looking into other ways to diversify its business and embrace the cloud. The latest acquisitions show there's a vision, and the company's earnings calls show it's a successful one. (https://ir.vmware.com/overview/press-releases/press-release-details/2019/VMware-Reports-Fiscal-Year-2020-Third-Quarter-Results/default.aspx)



At both the U.S. and Europe conventions, Tanzu was clearly the linchpin initiative around which VMware's new vision for itself revolves. While the high-level sketch of Tanzu products and services was delivered in San Francisco, in Barcelona we also heard:

  • Tanzu Mission Control will allow operators to set policies for access, backup, security, and more to clusters (either individual or groups) across the environment.
  • Developers will be able to access Kubernetes resources via APIs enabled by Tanzu Mission Control.
  • Project Pacific does more than merge vSphere and Kubernetes. It allows vSphere administrators to use tools they’re already familiar with to deploy and manage container infrastructures anywhere vSphere is running—on-prem, in hybrid cloud, or on hyperscalers.
  • Conversely, developers familiar with Kubernetes tools and processes can continue to roll out apps and services using the tools THEY know best and extend their abilities to provision to things like vSphere-supported storage on-demand.


The upshot: Tanzu and the goal of enabling complete Kubernetes functionality are more than a one-trick-pony idea. This is a broad and deep range of tools, techniques, and technologies.


Carbon Black

In September we had little more than the announcement of VMware's "intent to acquire" Carbon Black. By November the ink had dried on that acquisition and we found out a little more.

  • Carbon Black Cloud will be the preferred endpoint security solution for Dell customers.
  • VMware AppDefense and Vulnerability Management products will merge with several modules acquired through the Carbon Black acquisition.


While a lot more still needs to be clarified (in the minds of customers and analysts alike), this is a good start in helping us understand how this acquisition fits into VMware's stated intent of disrupting the endpoint security space.



The week before VMworld US, VMware announced its Q2 earnings, which showed NSX adoption had increased more than 30% year over year. This growth explains the VMworld Europe announcement of new NSX distributed IDS and IPS services, as well as "NSX Federation," which lets customers deploy policies across multiple data centers and sites.


In fact, NSX has come a long way. VMware offers two flavors of NSX: the well-known version, now called NSX Data Center for vSphere, and its younger sibling, NSX-T Data Center.

The vSphere version has continuously improved in the two areas that once held back wider adoption, user experience and security, and is now a mature, reliable technology.

NSX-T has been around for two years or so, but realistically it always lagged in features and wasn't as valuable. As it turns out, things have changed: NSX-T now fits well into the greater scheme of things and is ready to play with the other guys in the park, including Tanzu and HCX.



Pivotal was initially acquired by EMC, which combined it with assets from another of its acquisitions: VMware. Next, Dell acquired EMC, and a little later both VMware and Pivotal became individual publicly traded companies, with Dell EMC remaining the majority shareholder. And now, in 2019, VMware has acquired Pivotal.


One could call that an on/off relationship, similar to the one cats have with their owners, er, servants. It’s complicated.


Pivotal offers a SaaS solution to create other SaaS solutions, a concept which comes dangerously close to Skynet, minus the self-awareness and murder-bots.


But the acquisition does make sense, as Pivotal Cloud Foundry (PCF) runs on most major cloud platforms, on vSphere, and (to no one's surprise) on Kubernetes.


PCF lets developers ignore the underlying infrastructure and is therefore completely independent of the deployment type. It will help companies in their multi-cloud travels while still allowing them to remain VMware customers.


New Announcements

With all of that said, we don't want you to think there was nothing new under the unseasonably warm Spanish sun. In addition to the expanded information above, we also heard about a few new twists in the VMware roadmap:

  • Project Galleon will see the speedy delivery of an app catalog, with greater security being key.
  • VMware Cloud Director service was announced, giving customers multi-tenant capabilities in VMware Cloud on AWS. This will allow Managed Service Providers (MSPs) to share the instances (and costs) of VMware Cloud on AWS across multiple tenants.
  • Project Path was previewed.
  • Project Maestro was also previewed—a telco cloud orchestrator designed to deliver a unified approach to modelling, onboarding, orchestrating, and managing virtual network functions and services for Cloud Service Providers.
  • Project Magna, another SaaS-based solution, was unveiled. This will help customers build a “self-driving data center” by collecting data to drive self-tuning automations.


Antes Hasta Tardes

Before we wrap up this summary, we wanted to add a bit of local color for those who live vicariously through our travels.


Sascha loved the “meat with meat” tapas variations and great Spanish wine. Even more, as someone who lives in rainy Ireland, he enjoyed the Catalan sun. It was fun to walk through the city in a t-shirt while the locals considered the November temperature barely acceptable.

Similarly, Leon (who arrived in Barcelona three days after it had started snowing back home) basked in the warmth of the region and of the locals willing to indulge his rudimentary Spanish skills, and basked equally in the joy of kosher paella and sangria.


Until next time!

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Jim Hansen where he discusses some ideas on improving agency security, including helping your staff develop cyberskills and giving them the tools to successfully prevent and mitigate cyberattacks.


Data from the Center for Strategic and International Studies paints a sobering picture of the modern cybersecurity landscape. The CSIS, which has been compiling data on cyberattacks against government agencies since 2006, found the United States has been far and away the top victim of cyber espionage and cyber warfare.


These statistics are behind the Defense Department’s cybersecurity strategy for component agencies, which details how they can better fortify their networks and protect information.


DoD’s strategy is built on five pillars: building a more lethal force, competing and deterring in cyberspace, strengthening alliances and attracting new partnerships, reforming the department, and cultivating talent.


While aspects of the strategy don’t apply to all agencies, three of the tactics can help all government offices improve the nation’s defenses against malicious threats.


Build a Cyber-Savvy Team


Establishing a top-tier cybersecurity defense should always start with a team of highly trained cyber specialists. There are two ways to do this.


First, agencies can look within and identify individuals who could be retrained as cybersecurity specialists. Prospects may include employees whose current responsibilities feature some form of security analysis and even those whose current roles are outside IT. For example, the CIO Council’s Federal Cybersecurity Reskilling Academy trains non-IT personnel in the art and science of cybersecurity. Agencies may also explore creating a DevSecOps culture intertwining development, security, and operations teams to ensure application development processes remain secure and free of vulnerabilities.


Second, agencies should place an emphasis on cultivating new and future cybersecurity talent. To attract new talent, agencies can offer potential employees the opportunity for unparalleled cybersecurity skills training, exceptional benefits, and even work with the private sector. The recently established Cybersecurity Talent Initiative is an excellent example of this strategy in action.


Establish Alliances and Partnerships


The Cybersecurity Talent Initiative reflects the private sector’s willingness to support federal government cybersecurity initiatives and represents an important milestone in agencies’ relationship with corporations. Just recently, several prominent organizations endured what some called the cybersecurity week from hell when multiple serious vulnerabilities were uncovered. They’ve been through it all, so it makes sense for federal agencies to turn to these companies to learn how to build up their own defenses.


In addition to partnering with private-sector organizations, agencies can protect against threats by sharing information with other departments, which will help bolster everyone’s defenses.


Arm Your Team With the Right Tools


It’s also important to have the right tools to successfully prevent and mitigate cyberattacks. Continuous monitoring solutions, for example, can effectively police government networks and alert managers to potential anomalies and threats. Access rights management tools can ensure only the right people have access to certain types of priority data, while advanced threat monitoring can keep managers apprised of security threats in real time.


Of course, IT staff will need continuous training and education. A good practice is implementing monthly or at least bi-monthly training covering the latest viruses, social engineering scams, agency security protocols, and more.


The DoD’s five-pillared strategy is a good starting point for reducing risk to the nation. Agencies can follow its lead by focusing their efforts on cultivating their staff, creating stronger bonds with outside partners, and supporting this solid foundation with the tools and training necessary to win the cybersecurity war.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

A few years back I was working at a SaaS provider when we had an internal hackathon. The guidelines were simple: As part of your project you had to learn something, and you had to choose a framework/methodology beneficial to the organization. I was a Windows and VI admin, but along with my developer friend, we wrote an inventory tool that was put into operational practice immediately. I learned a ton over those two days, but little did I know I’d discovered a POwerful SHortcut to advancing my career as well as immediately making my organization’s operations more efficient. What POtent SHell could have such an effect? The framework I chose in the hackathon: PowerShell.


A Brief History of PowerShell

Windows has always had some form of command line utility. Unfortunately, those tools never really kept pace, and by the mid-2000s a change was badly needed. Jeffrey Snover led the charge that eventually became PowerShell. The goal was to produce a management framework for Windows environments, and as such it was originally used to control Windows components like Active Directory. The Microsoft Exchange UIs were even built on top of PowerShell, but over time it evolved into way more.


Today, one of the largest contributors to the PowerShell ecosystem is VMware, a company that competes with Microsoft in multiple spaces. Speaking of Microsoft, it has shed its legacy of being walled off from the world and is now a prolific open-source contributor; one of its biggest moves was making PowerShell open-source in 2016. Since then, you can run PowerShell on Mac and Linux computers and use it to manage the big three cloud providers (AWS, Azure, and Google Cloud).


Lots of People Are on Board With PowerShell, But Why Do YOU Care?

In an IT operations role, with no prior PowerShell experience, I was able to create a basic inventory system leveraging WMI, SNMP, the Windows Registry, and PowerCLI, among others. I mention this example again because it demonstrates two of the most compelling reasons to use PowerShell: its low barrier to entry and its breadth and depth.


Low Barrier to Entry

We already determined you can run PowerShell on anything, but it’s also been included in all Windows OSs since Windows 7. If you work in an environment with Windows, you already have access to PowerShell. You can type powershell.exe to launch the basic PowerShell command window, but I’d recommend powershell_ise.exe for most folks who are learning, as the lightweight ISE (Integrated Scripting Environment) will give you some basic tools to troubleshoot and debug your scripts.


Once you’re in PowerShell, it’s time to get busy. The things performing work in PowerShell are called cmdlets (pronounced command-lets). Think of them as tiny little programs, or functions, each doing a unit of work. If you retain nothing else from this post, please remember this next point and I promise you can become effective in PowerShell: all cmdlets take the form of verb-noun and, if properly formed, will describe what they do. If you’re trying to figure something out, as long as you can remember Get-Help, you’ll be OK.
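As a quick, hypothetical illustration of the verb-noun pattern and Get-Help (the specific cmdlets here are my own picks, not from the original post):

```powershell
# Verb-noun discovery: list every cmdlet that operates on the "Process" noun
Get-Command -Noun Process

# Built-in help, including usage examples, for one of them
Get-Help Get-Process -Examples

# Verb-noun in action: the five busiest processes by CPU time
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
```

These lines work in both Windows PowerShell 5.1 and the cross-platform PowerShell 7+.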


Here’s the bottom line on having a rapid learning curve: there are a lot of IT folks who don’t have background or experience in writing code. We’re in a period where automation is vitally important to organizations. Having a tool you can pick up and start using on day one means you can increase your skill set and increase your value to the organization easily. Now if only you had a tool that could grow with you as those skillsets evolved…


Depth and Breadth

At its most fundamental level, automation is about removing inefficiencies. A solution doesn’t need to be complex to be effective. When writing code in PowerShell, you can string together multiple commands, where the output of one cmdlet is passed along as the input of the next, via a process called the pipeline. Chaining commands together can be a simple and efficient way to get things done more quickly. Keep in mind PowerShell is a full-fledged object-oriented language, so you can write functions, modules, and thousands of lines of code as your skills expand.
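The pipeline idea fits in a few lines; here's a hypothetical example (the numbers are arbitrary) where each cmdlet's output becomes the next cmdlet's input:

```powershell
# Take 1..10, keep the even numbers, square them, then sum the squares.
# Each stage receives the previous stage's output via the pipeline.
1..10 |
    Where-Object { $_ % 2 -eq 0 } |      # 2, 4, 6, 8, 10
    ForEach-Object { $_ * $_ } |         # 4, 16, 36, 64, 100
    Measure-Object -Sum |                # wrap the results in a summary object
    Select-Object -ExpandProperty Sum    # 220
```

A one-liner like this replaces a loop, a temporary variable, and an accumulator in most other languages.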


So, you can go deep on the tool, but you can go wide as well. We already mentioned you can manage several operating systems, but software across the spectrum is increasingly enabling management via PowerShell snap-ins or modules. This includes backup, monitoring, and networking tools of all shapes and sizes. But you're not limited to the tools vendors provide you; you can write your own. That's the point! If you need some ideas to jumpstart your automation practice, here's a sampling of fun things I've written in PowerShell: a network mapper, a port scanner, environment provisioning, an ETL engine, and web monitors. The only boundary to what you can accomplish is the limit of your imagination.


What’s Next

For some people, PowerShell may be all they need and as far as they go. If this PowerShell exploration just whets your appetite, though, remember you can go deeper. Much of automation is moving toward APIs, and PowerShell gives you a couple of ways to begin exploring them. Invoke-WebRequest and Invoke-RestMethod allow you to take your skills to the next step and build your familiarity with APIs and their constructs within the friendly confines of the PowerShell command shell.
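A small sketch of where those cmdlets take you (the GitHub URL is just an illustrative public endpoint, not something from the original post):

```powershell
# Invoke-RestMethod calls a REST endpoint and converts the JSON response
# straight into PowerShell objects, e.g.:
#   $repo = Invoke-RestMethod -Uri 'https://api.github.com/repos/PowerShell/PowerShell'
#   $repo.stargazers_count
#
# The same JSON-to-object conversion can be tried offline with ConvertFrom-Json:
$json = '{ "name": "PowerShell", "open_source_since": 2016 }'
$obj  = $json | ConvertFrom-Json
$obj.open_source_since   # 2016
```

Because the response arrives as objects rather than raw text, you can pipe API results into the same Where-Object and Sort-Object patterns you already know.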


No matter how far you take your automation practice, I hope you can use some of these tips to kickstart your automation journey.

We’re more than halfway through the challenge now, and I’m simply blown away by the quality of the responses. While I’ve excerpted a few for each day, you really need to walk through the comments to get a sense of the breadth and depth. You’ll probably hear me say it every week, but thank you to everyone who has taken time out of their day (or night) to read, reply, and contribute.


14. Event Correlation

Correlating events—from making a cup of coffee to guessing at the contents of a package arriving at the house—is something we as humans do naturally. THWACK MVP Mark Roberts uses those two examples to help explain the idea that, honestly, stymies a lot of us in tech.


Beth Slovick Dec 16, 2019 9:46 AM

Event Correlation is automagical in some systems and manual in others. If you can set it up properly, you can get your system to provide a Root Cause Analysis and find out what the real problem is. Putting all those pieces together to set it up can be difficult in an ever-changing network environment. It is a full-time job in some companies with all the changes that go on. The big problem there is getting the information in a timely manner.


Richard Phillips  Dec 17, 2019 1:02 PM

She’s a “box shaker!” So am I.


Flash back 20 years—a box arrives just before Christmas. The wife and I were both box shakers and proceeded to spend the next several days, leading up to Christmas, periodically shaking the box and trying to determine the contents. Clues: 1) it’s light 2) it doesn’t seem to move a lot in the box 3) the only noise it makes is a bit of a scratchy sound when shaken.


Finally Christmas arrives and we anxiously open the package to find a (what was previously a very nice) dried flower arrangement—can you imagine what happens to a dried flower arrangement after a week of shaking . . .

Matt R  Dec 18, 2019 12:57 PM

I think of event correlation like weather. Some people understand that dark clouds = rain. Some people check the radar. Some people have no idea unless the weather channel tells them what the weather will be.


15. Application Programming Interface (API)

There’s nobody I’d trust more to explain the concept of APIs than my fellow Head Geek Patrick Hubbard—and he did not disappoint. Fully embracing the “Thing Explainer” concept—one of the sources of inspiration for the challenge this year—Patrick’s explanation of “Computer-Telling Laws” is a thing of beauty.


Tregg Hartley Dec 15, 2019 11:37 AM

I click on an icon

You take what I ask,

Deliver it to

The one performing the task.


When the task is done

And ready for me,

You deliver it back

In a way I can see.


You make my life easier

I can point, click and go,

You’re the unsung hero

and the star of the show.


Vinay BY  Dec 16, 2019 5:45 AM

API to me is a way to talk to a system or an application/software running on it, while we invest a lot of time in building that we should also make sure it’s built with standards and rules/laws in mind. Basically we shouldn’t be investing a lot of time on something that can’t be used.


Dale Fanning Dec 16, 2019 9:36 AM

In many ways an API is a lot like human languages. Each computer/application usually only speaks one language. If you speak in that language, it understands what you want and will do that. If you don’t, it won’t. Just like in the human world, there are translators for computers that know both languages and can translate back and forth between the two so each can understand the other.


16. SNMP

Even though “simple” is part of its name, understanding SNMP can be anything but. THWACK MVP Craig Norborg does a great job of breaking it down to its most essential ideas.


Jake Muszynski  Dec 16, 2019 7:16 AM

SNMP still is relevant after all these years because the basics are the same on any device with it. Most places don’t have just one vendor in house. They have different companies. SNMP gets out core monitoring data with very little effort. Can you get more from SNMP with more effort? Probably. Can other technologies get you real time data for specialty systems? Yup, there is lots of stuff companies don’t put in SNMP. But that’s OK. Right up there with ping, SNMP is still a fundamental resource.


scott driver Dec 16, 2019 1:38 PM

SNMP: Analogous to a phone banking system (these are still very much a thing btw).


You have a Financial Institution (device)

You call in to an 800# (an oid)

If you know the right path you can get your balance (individual metric)


However when things go wrong, the fraud department will reach out to you (Trap)


Tregg Hartley Dec 17, 2019 12:10 PM

Sending notes all of the time

For everything under the sun,

The task is never ending

And the Job is never done.


I can report on every condition

I send and never look back,

My messages are UDP

I don’t wait around for the ACK.


17. Syslog

What does brushing your teeth have to do with an almost 30-year-old messaging protocol? Only a true teacher—in this case the inimitable “RadioTeacher” (THWACK MVP Paul Guido)—could make something so clear and simple.


Faz f Dec 17, 2019 4:54 AM

Like a diary for your computer


Juan Bourn Dec 17, 2019 11:24 AM

A way for your computer/server/application to tell you what it was doing at an exact moment in time. It’s up to you to determine why, but the computer is honest and will tell you what and when.


18. Parent-Child

For almost 20 days, we’ve seen some incredible explanations for complex technical concepts. But for day 18, THWACK MVP Jez Marsh takes advantage of the concept of “Parent-Child” to remind us our technical questions and challenges often extend to the home, but at the end of the day we can’t lose sight of what’s important in that equation.


Jeremy Mayfield  Dec 18, 2019 7:41 AM

Thank you, this is great. I think of the parent-Child as one is present with the other. As the child changes the parent becomes more full, and eventually when the time is right the child becomes a parent and the original parent may be no more.


The parent-child relationship is one that nurtures the physical, emotional, and social development of the child. It is a unique bond that every child and parent can enjoy and nurture. ... A child who has a secure relationship with a parent learns to regulate emotions under stress and in difficult situations.


Dale Fanning Dec 18, 2019 8:36 AM

I’m a bit further down the road than you, having launched my two kids a few years ago, but I will say that the parent-child relationship doesn’t change even then, although I count them more as peers than children now. I’m about to become a grandparent for the first time, and our new role is helping them down the path of parenthood without meddling too much hopefully. It’s only much later that you realize how little you knew when you started out on the parent journey.


Chris Parker Dec 18, 2019 9:37 AM

In keeping with the IT world:


This is the relationship much like you and your parent.


You need your parents/guardians in order to bring you up in this world and without them you might be ‘orphaned’

Information on systems sometimes need a ‘Parent’ in order for the child to belong

You can get some information from the Child but you would need to go to the Parent to know where the child came from

One parent might have many children who then might have more children but you can follow the line all the way to the beginning or first ‘parent’


19. Tracing

I’ve mentioned before how LEGO bricks just lend themselves to these ELI5 explanations of technical terms, especially as it relates to cloud concepts. In this case, Product Marketing Manager Peter Di Stefano walks through the way tracing concepts would help troubleshoot a failure many of us may encounter this month—when a beloved new toy isn’t operating as expected.


Chris Parker Dec 19, 2019 4:57 AM

Take apart the puzzle until you find the piece that is out of place


Duston Smith Dec 19, 2019 9:26 AM

I think you highlight an important piece of tracing—documentation! Just like building LEGOs, you need a map to tell you what the process should be. That way when the trace shows a different path you know where the problem sits.


Holly Baxley Dec 19, 2019 10:15 AM

Hey Five-year-old me,


Remember when I talked about Event Correlation a while back and told you that it was like dot to dot, because all the events were dots and if you connected them together you can see a clearer “picture” of what’s going on?


Well, today we’re going to talk about Tracing, which “seems” like the same thing, but it isn’t.


See in Event Correlation you have no clue what the picture is. Event Correlation’s job is to connect events together, so it can create as clear a picture as it can of the events to give you an outcome. Just remember, Event Correlation is only as good as the information that’s provided. If events (dots) are left out—the picture is still incomplete, and it takes a little work to get to the bottom of what’s going on.


In tracing—you already know what the picture is supposed to look like.


Let’s say you wanted to draw a picture of a sunflower.


Your mom finds a picture of the sunflower on the internet and she prints it off for you.


Then she gives you a piece of special paper called “vellum” that’s just the right amount of see-through (the fancy term is translucent), so you can still see the picture of the sunflower underneath it. She gives you a pencil so you can start tracing.


Now in tracing does it matter where you start to create your picture?


No it doesn’t.


You can start tracing from anywhere.


In dot-to-dot, you can kinda do the same thing if you want to challenge yourself. It’s not always necessary to start at dot 1, and if you’re like me (wait...you are me)...you rarely find dot 1 the first time anyway. You can count up and down to connect the dots and eventually get there.


Just remember—in this case, you still don’t know what the picture is and that’s the point of dot to dot—to figure out what the picture is going to be.


In tracing—we already know what the picture either is, or at least is supposed to look like.


And just like in tracing, once you lift your paper off the picture, you get to see—did it make the picture that you traced below?


If it didn’t—you can either a) get a new sheet and try again or b) start with where things got off track and erase it and try again.


To understand tracing in IT, I want you to think about an idea you’ve imagined. Close your eyes. Imagine your picture in your mind. Do you see it?


Good.


We sometimes say that we can “picture” a solution, or we “see” the problem, when in reality, a problem can be something that we can’t really physically see. It’s an issue we know is out there: e.g., the network is running slow and we see a “picture” of how to fix it in our mind; a spreadsheet doesn’t add up right like it used to, and we have a “picture” in our mind of how it’s supposed to behave and give the results we need.


But we can’t physically take a piece of paper and trace the problem.


We have programs that trace our “pictures” for us and help us see what went right and what went wrong.


Tracing in IT is a way to see if your program, network, spreadsheet, document, well...really anything traceable did what it was supposed to do and made the “picture” you wanted to see in the end.


It’s a way to fix issues and get the end result you really want.


Sometimes we get our equipment and software to do what it’s supposed to, but then we realize—it could be even BETTER, and so we use tracing to figure out the best “path” to take to get us there.


That would be like deciding you want a butterfly on your Sunflower, so your mom prints off a butterfly for you and you put your traced Sunflower over the butterfly and then decide what’s the best route to take to make your butterfly fit on your sunflower the way you want it.


And just like tracing—sometimes you don’t have to start at the beginning to get to where you want to be.


If you know that things worked up to a certain point but then stopped working the way you want, you can start tracing right at the place where things aren’t working the way you want. You don’t always have to start from a beginning point. This saves time.


There’s lots of different types of tracing in IT. Some people trace data problems on their network, some people trace phone problems on their network, some trace document and spreadsheet changes on their files, some trace database changes. There’s all sorts of things that people can trace in IT to either fix a problem or make something better.


But the end question of tracing is always the same.


Did I get what I “pictured?”


And if the answer is “yes” - we stop and do the tech dance of joy.


It’s a secret dance.


Someday, I’ll teach you.


20. Information Security

THWACK MVP Peter Monaghan takes a moment to simply and clearly break down the essence of what InfoSec professionals do, and to put it into terms that parents would be well-advised to use with their own kids.


(while I don’t normally comment on the comments, I’ll make an exception here)

In the comments, a discussion quickly started about whether using this space to actually explain infosec (along with the associated risks) TO a child was the correct use of the challenge. While the debate was passionate and opinionated, it was also respectful and I appreciated that. Thank you for making THWACK the incredible community that it has grown to be!


Holly Baxley Dec 20, 2019 3:18 PM (in response to Jeremy Mayfield)

I think Daddy's been reading my diary

He asks if I'm okay

Wants to know if I want to take walks with him

Or go outside and play


He tells Mommy that he's worried

There's something wrong with me

Probably from reading things in the diary

Things he thinks he shouldn't see


But I'll tell you a little secret

That diary isn't real

I scribbled nonsense in that journal

And locked away the one he can't steal


If Daddy was smart he woulda noticed

Something he's clearly forgot

Never read a diary covered with Winnie the Poo

Whose head is stuck in the Honeypot.


Jon Faldmo Dec 20, 2019 1:27 PM

I haven't thought of how information security applies or is in the same category as privacy and being secure online. I have always thought of Information Security in the context of running a business. It is the same thing, just usually referenced differently. Thanks for the write up.


Tregg Hartley Dec 20, 2019 3:33 PM

The OSI model

Has seven layers,

But it leaves off

The biggest players.


The house is protected

By the people inside,

We are all on watch

As such we abide.


To protect our house

As the newly hired,

All the way

To the nearly retired.


OK, so the title is hardly original, apologies. But it does highlight that the buzz around Kubernetes is still out there and shows no signs of going away anytime soon.


Let’s start with a description of what Kubernetes is:


Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.¹


Let me add my disclaimer here. I’ve never used Kubernetes or personally had a use case for it. I have an idea of what it is and its origins (do I need a Borg meme as well?) but not much more.
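The “declarative configuration and automation” part of that definition is the heart of Kubernetes: you declare a desired state, and control loops continuously reconcile reality against it. Here’s a minimal, purely illustrative sketch of that reconciliation idea (a toy model, not the real Kubernetes API):

```python
# Toy sketch of Kubernetes' declarative reconciliation model (illustration
# only -- not the real Kubernetes API). You declare a desired state; a
# control loop compares it to the observed state and converges the two.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to move observed state toward desired state."""
    actions = {}
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions[name] = f"start {want - have} replica(s)"
        elif have > want:
            actions[name] = f"stop {have - want} replica(s)"
    return actions

desired = {"web": 3, "worker": 2}   # what you declared in a manifest
observed = {"web": 1, "worker": 2}  # what is actually running

print(reconcile(desired, observed))  # {'web': 'start 2 replica(s)'}
```

Real controllers do exactly this in spirit: watch the declared state, observe the actual state, and issue whatever actions close the gap.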


A bit of background on myself: I’m predominantly an IT operations person, working for a Value-Added Reseller (VAR) designing and deploying VMware-based infrastructures. The organizations I work with are typically 100 – 1000 seats in size across many vertical markets. I would be naive to think none of those organizations are thinking about containerization and orchestration technology, but genuinely none of them are currently using it.


Is It Really a Thing?

In the past 24 months, I’ve attended many trade shows and events, usually focused on VMware technology, and the question is always asked: “Who is using containers?” The percentage of hands going up is always less than 10%. Is it just the audience type, or is this a true representation of container adoption?


Flip it around, and when I go to an AWS or Azure user group event, containers and Kubernetes are the focus and topic of conversation. So, who are the people at these user groups? Predominantly developers! Different audiences, different answers.


I work with one of the biggest Tier-1 Microsoft CSP distributors in the U.K. Their statistics on Azure consumption by resource type are enlightening: 49% of billable Azure resources are virtual machines, and roughly 30% is object storage consumption. A small 7% slice of the pie covers miscellaneous services, including AKS (Azure Kubernetes Service). This figure aligns with my first observation at trade events, where less than 10% of people in the room were using containers. I don’t know, however, whether those virtual machines are running container workloads.


Is There a Right Way?

This brings us to the question and part of the reason I wrote this article: is there a right way to deploy containers and Kubernetes? Every public cloud has its own interpretation—Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, you get the idea. Each one has its own little nuances capable of breaking the inherent idea behind containers: portability. Moving from one cloud to another, the application stack isn’t necessarily going to work right away.


Anyway, the interesting piece for me, because of my VMware background, is Project Pacific. Essentially, VMware has gone all-in on Kubernetes by making it part of the vSphere control plane. IT Ops can manage a Kubernetes application container in the same way they manage a virtual machine, and developers can consume Kubernetes in the same way they do elsewhere. It’s a win/win situation. In another step by VMware to become the management plane for all people (think public cloud, on-premises infrastructure, and the software-defined data center), Kubernetes moves ever closer to my wheelhouse.


No matter where you move the workload, if VMware is part of the management and control plane, then user experience should be the same, allowing for true workload mobility.



Two things for me.


1. Now more than ever seems like the right time to look at Kubernetes, containerization, and everything it brings.

2. I’d love to know if my observations on containers and Kubernetes adoption are a common theme or if I’m living with my head buried in the sand. Please comment below.


¹ Kubernetes Description - https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

I visited the Austin office this past week, my last trip to SolarWinds HQ for 2019. It’s always fun to visit Austin and eat my weight in pork products, but this week was better than most. I took part in deep conversations around our recent acquisition of VividCortex.


I can’t begin to tell you how excited I am for the opportunity to work with the VividCortex team.


Well, maybe I can begin to tell you. Let’s review two data points.


In 2013, SolarWinds purchased Confio Software, makers of Ignite (now known as Database Performance Analyzer, or DPA), for $103 million. That’s where my SolarWinds story begins, as I was included with the Confio purchase. I had been with Confio since 2010, working in sales engineering, customer support, product development, and corporate marketing. We made Ignite into a best-of-breed monitoring solution that’s now the award-winning, on-prem and cloud-hosted DPA loved by DBAs globally.


The second data point is from last week, when SolarWinds bought VividCortex for $117.5 million. One thing I want to make clear is SolarWinds just doubled down on our investment in database performance monitoring. Anyone suggesting anything otherwise is spreading misinformation.


Through all my conversations last week with members of both product teams, one theme was clear. We are committed to providing customers with the tools necessary to achieve success in their careers. We want happy customers. We know customer success is our success.


Another point made clear is that the VividCortex product will complement, not replace, DPA, expanding our database performance monitoring portfolio in a meaningful way. Sure, there is some overlap with MySQL, as both tools offer support for that platform. But the tools have some key differences in functionality. Currently, VividCortex is a SaaS monitoring solution for popular open-source platforms (PostgreSQL, MySQL, MongoDB, Amazon Aurora, and Redis). DPA provides both monitoring and query performance insights for traditional relational database management systems and is not yet available as a SaaS solution.


This is why we view VividCortex as a product to enhance what SolarWinds already offers for database performance monitoring. We’re now stronger this week than we were just two weeks ago. And we’re now poised to grow stronger in the coming months.


This is an exciting time to be in the database performance monitoring space, with 80% of workloads still Earthed (that is, on-premises). If you want to know about our efforts regarding database performance monitoring products, just AMA.


I can't wait to get started on helping build next-gen database performance monitoring tools. That’s what VividCortex represents, the future for database performance monitoring, and why this acquisition is so full of goodness. Expect more content in the coming weeks from me regarding our efforts behind the scenes with both VividCortex and DPA.

As we head into the new year, people will once again start quoting a popular list describing the things kids starting college in 2020 will never personally experience. Examples of these are things like “They’re the first generation for whom a ‘phone’ has been primarily a video game, direction finder, electronic telegraph, and research library.” And “Electronic signatures have always been as legally binding as the pen-on-paper kind.” Or most horrifying, “Peanuts comic strips have always been repeats.”


That said, it’s also interesting to note the things that fell into obsolescence over the last few decades. In this post, I’m going to list and categorize them, and add some of my personal thoughts about why they’ve fallen out of vogue, if not use.


It’s important to note many of these technologies can still be found “in the wild”—whether because some too-big-to-fail, mission-critical system depends on it (cf. the New York Subway MetroCard system running on the OS/2 operating system—https://www.vice.com/en_us/article/zmp8gy/the-forgotten-operating-system-that-keeps-the-nyc-subway-system-alive); or because devotees of the technology keep using it even though newer, and ostensibly better, tech has supplanted it (such as laserdiscs and the Betamax tape format*).


Magnetic Storage

This includes everything from floppy disks (whether 8”, 5.25”, or 3.5”), video tapes (VHS or the doubly obsolete** Betamax), DAT, cassette tapes and their progenitor reel-to-reel, and so on.


The reason these technologies are gone is that they weren’t as good as what came after. Magnetic storage was slow, prone to corruption, and often delicate and/or difficult to work with. Once a superior technology was introduced, people abandoned these as fast as they could.


Disks for Storage

This category includes the previously mentioned floppy disks, but extends to CDs, DVDs, and the short-lived MiniDisc. All have, by and large, fallen by the wayside.


The reason for this is less that these technologies were bad or hard to use, per se (floppies notwithstanding), and more that what came after (flash drives, chip-based storage, SSDs, and cloud storage, to name a few) was so much better.


Mobile Communications

Since the introduction of the original cordless phone in 1980, mobile tech has become ubiquitous and has been an engine of societal and technological change. But not everything invented has remained with us. Those cordless phones I mentioned are a good example, as are pagers and mobile phones that are JUST phones and nothing else.


It’s hard to tell how much of this is because the modern smartphone was superior to its predecessors, and how much was because the newest tech is so engaging—both in terms of the features it contains and the social cachet it brings.


Portable Entertainment

Once a juggernaut in the consumer electronics sector, the days of the Walkman, Discman, and portable DVD player have largely ended.


In one of the best examples of the concept of “convergence,” smartphone features have encompassed and made obsolete the capabilities once performed by all of those mobile entertainment systems.


School Tech

There was a range of systems which were staples in the classroom until relatively recently: if the screen in the classroom came down, students might turn their attention to information emanating from an overhead projector, a set of slides, a filmstrip, or even an actual film.


Smartboards, in-school media servers, and computer screen sharing all swooped in to make lessons far more dynamic, interactive, and (most importantly) simple for the teacher to prepare. And no wonder, since no teacher in their right mind would go back to the long hours spent drawing overhead cells in multiple marker colors, only to have that work destroyed by a wayward splash of coffee.


A Short List of More Tech We Don’t See (Much) Any More:

  • CRT displays
  • Typewriters
  • Fax machines (won’t die, but still)
  • Public phones
  • Folding maps
  • Answering machines

What other tech or modern conveniences of a bygone era do you miss—or at least notice is missing? Talk about it in the comments below.


* Ed. note: Betamax was far superior, especially for TV usage, until digital recording became commercially acceptable from a budget perspective, thankyouverymuch. Plus, erasing them on the magnet thingy was fun.

** Ed. note: Rude.

I hope this edition of the Actuator finds you and yours in the middle of a healthy and happy holiday season. With Christmas and New Year's falling on Wednesday, I'll pick this up again in 2020. Until then, stay safe and warm.


As always, here's a bunch of stuff I found on the internet I thought you might enjoy.


Why Car-Free Streets Will Soon Be the Norm

I'm a huge fan of having fewer cars in the middle of any downtown city. I travel frequently enough to European cities and I enjoy the ability to walk and bike in areas with little worry of automobiles.


Microsoft and Warner Bros trap Superman on glass slide for 1,000 years

Right now, one of you is reading this and wondering how to monitor glass storage and if an API will be available. OK, maybe it's just me.


The trolls are organizing—and platforms aren't stopping them

This has been a problem with online communities since they first started; it's not a new problem.


New Orleans declares state of emergency following cyberattack

Coming to a city near you, sooner than you may think.


Facebook workers' payroll data was on stolen hard drives

"Employee wasn’t supposed to take hard drives outside the office..." Security is hard because people are dumb.


A Sobering Message About the Future at AI's Biggest Party

The key takeaway here is the discussion around how narrow the focus is for specific tasks. Beware the AI snake oil salesman promising you their algorithms and models work for everyone. They don't.


12 Family Tech Support Tips for the Holidays

Not a bad checklist for you to consider when your relatives ask for help over the holidays.


Yes, I do read books about bacon. Merry Christmas, Happy Holidays, and best wishes.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by Jim Hansen about leveraging access rights management to reduce insider threats and help improve security.


According to the SolarWinds 2019 Cybersecurity Survey, cybersecurity threats are increasing—particularly the threat of accidental data exposure from people inside the agency.


According to the survey, 56% of respondents said the greatest source of security threats to federal agencies is careless and/or untrained agency insiders; 36% cited malicious insiders as the greatest source of security threats. Nearly half of the respondents—42%—say the problem has gotten worse or has remained a constant battle.


According to the survey, federal IT pros who have successfully decreased their agency’s risk from insider threats have done so through improved strategy and processes to apply security best practices.


While 47% of respondents cited end-user security awareness training as the primary reason insider threats have improved or remained in control, nearly the same amount—45%—cited network access control as the primary reason for improvement, and 42% cited intrusion detection and prevention tools.


The lesson here is good cyberhygiene in the form of access management can go a long way toward enhancing an agency’s security posture. Certain aspects of access management provide more protection than others and are worth considering.


Visibility, Collaboration, and Compliance


Every federal IT security pro should be able to view permissions on file servers to help identify unauthorized access or unauthorized changes to more effectively prevent data leaks. Federal IT pros should also be able to monitor, analyze, and audit Active Directory and Group Policy to see what changes have been made, by whom, and when those changes occurred.


One more thing: be sure the federal IT team can analyze user access to services and file servers with visibility into privileged accounts and group memberships from Active Directory and file servers.


Collaboration tools—including SharePoint and Microsoft Exchange—can be a unique source of frustration when it comes to security and, in particular, insider threats. One of the most efficient ways to analyze and administer SharePoint access rights is to view SharePoint permissions in a tree structure, easily allowing the user to see who has authorized access to any given SharePoint resource at any given time.


To analyze and administer Exchange access rights, start by setting up new user accounts with standardized role-specific templates to provide access to file servers and Exchange. Continue managing Exchange access by tracking changes to mailboxes, mailbox folders, calendars, and public folders.


Finally, federal IT pros know that while managing insider threats is of critical importance, so is meeting federal compliance requirements. Choose a solution with the ability to create and generate management- and auditor-ready compliance reports showing user access rights, as well as the ability to log activities in Active Directory and file servers by user.




There are options available to dramatically help the federal IT security pro get a better handle on insider threats and go a long way toward mitigating risks and keeping agency data safe.


Find the full article on our partner DLT’s blog Government Technology Insider.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

I was in the pub recently for the local quiz and afterwards, I got talking to someone I hadn’t seen for a while. After a few minutes, we started discussing a certain app he loves on his new phone, but he wished the creators would fix a problem with the way it displayed information, so it looks like it does when he logs in on a web browser.


“It’s all got to do with technical debt,” I blurted out.


“What?” he replied.


“When they programmed the app, the programmers took an easier route rather than figuring out how to display the details the same way your browser does, so they could ship it to you, the consumer, more quickly, and they have yet to repay the debt. It’s like a credit card.”


It’s fine to have some technical debt, like having an outstanding balance on a credit card, and sometimes you can pay off the interest, i.e., apply a patch; but there comes a point when you need to pay off the balance. This is when you need to revisit the code and implement a section properly; and hence pay off the debt.


There are several reasons you accrue technical debt, one of which is a lack of experience or skills on the coding team. If the team doesn’t have the right understanding or skills to solve the problem, it’ll only get worse.


How can you help solve this? I’m a strong proponent of the education you can glean from attending a conference, whether it be Kubecon, Next, DEFCON, or AWS re:Invent, which I just attended. These are great places to sit down and discuss things with your peers, make new friends, discover fresh GitHub repositories, learn from experts, and hear about new developments in the field, possibly ahead of their release, which may either give you a new idea or help solve an existing problem.


Another key use case for attending is the ability to provide feedback. Feedback loops are a huge source of information for developers. Getting actual customer feedback, good or bad, helps shape the short-term to long-term goals of a project and can help you understand if you’re on the right path for your target audience.


So, how do you get around these accrued debts? First, you need a project owner whose goal is to make sure the overall design and architecture are adhered to. It should also be their job to enforce coding standards and make sure documentation is created to accompany the project. Then, with the help of regression testing and refactoring over time, you’ll find problems and defects in your code and be able to fix them. Any rework from refactoring needs to be planned and assigned correctly.
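That regression-testing safety net can be as lightweight as pinning the current behavior before you refactor. A minimal sketch, with an invented `format_price` function standing in for real debt-laden code:

```python
# Minimal sketch of a regression test guarding a refactor. The function and
# its "legacy" implementation are hypothetical, invented for illustration.

def format_price_legacy(cents: int) -> str:
    # Original, hard-to-read implementation accumulated as technical debt.
    s = str(cents)
    while len(s) < 3:
        s = "0" + s
    return "$" + s[:-2] + "." + s[-2:]

def format_price(cents: int) -> str:
    # Refactored implementation; it must behave identically.
    return f"${cents // 100}.{cents % 100:02d}"

# Regression test: the refactor must reproduce the legacy behavior exactly.
for cents in [0, 5, 99, 100, 1234, 100000]:
    assert format_price(cents) == format_price_legacy(cents), cents
print("refactor preserves behavior")
```

The point isn’t the function itself: it’s that paying down debt safely means capturing today’s behavior as tests first, so tomorrow’s cleaner code can prove it didn’t change anything.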


There are other ways to deal with debt, like bug fix days and code reviews, and preventative methods like regular clear communication between business and developer teams, to ensure the vision is implemented correctly and it delivers on time to customers.


Another key part of dealing with technical debt is taking responsibility and everyone involved with the project being aware of where they may have to address issues. By being open rather than hiding the problem, it can be planned for and dealt with. Remember, accruing some technical debt is always going to happen—just like credit card spending.

Scenario: a mission-critical application is having performance issues during peak business hours. App developers blame the storage. The storage team blames the network. The network admin blames the infrastructure. The cycle of blame continues until finally someone shouts, “Why don’t we just put it in the cloud?” Certainly, putting the application into the public cloud will solve all these issues, right? Right?! While this might sound like a tempting solution, simply installing an application on a server in the public cloud may not resolve the problem—it might open the company to more unforeseen issues.


Failure to Plan Is Planning to Fail


The above adage is one of the biggest roadblocks to successful cloud migrations. Often, when an application is considered for a move to the cloud, the scope of its interactions with servers, networks, and databases isn’t fully understood. What appears to be a Windows Server 2016 box with four vCPUs and 16 GB of RAM running an application turns out to be an interconnected series of SQL Server instances, Apache web servers, load balancers, application servers, and underlying data storage. If this configuration is giving your team performance issues on your on-premises hardware, why would moving it to hardware in a different data center resolve the problem?


If moving to the cloud is a viable option at this juncture of your IT strategy, it’s also time to consider refactoring the application into a more cloud-native format. What is cloud-native? Per the Cloud Native Computing Foundation (CNCF), the definition of cloud-native is:


“(Cloud-native) technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.


These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”


Cloud-native applications have been developed or refactored to rely on heavy automation, run in containers, shed operating system dependencies, and deliver elastic scalability that traditional persistent virtual servers cannot provide. With this model, applications become efficient not only in performance, but in cost as well. However, refactoring an application to a cloud-native state can take a lot of time and money.


The Risks of Shadow IT


If you’ve taken the time to understand the application dependencies, a traditional application architecture can be placed in a public cloud while the app is refactored, to help alleviate some issues. But again, the process can be time-consuming. Administrators can grow impatient during these periods, or, if their requests for additional resources have been denied, can grow frustrated. The beautiful thing about public clouds is the relative ease of entry into services. Any Joe Developer with a credit card can fire up an AWS or Azure account on their own and have a server up and running within a matter of minutes by following a wizard.


Cool, my application is in the cloud and I don’t have to wait for the infrastructure teams to figure out the issues. Problem solved!


Until an audit finds customers’ credit card data in an AWS S3 bucket open to the public. Or when the source of a ransomware outbreak is traced back to an unsecured Azure server linked to your internal networks. Oh, and let’s not even discuss the fact an employee is paying for these services outside of the purview of the accounting department or departmental budgets (which is a topic for another blog post later in this series).


Security and compliance can be achieved in the cloud, but much like before, it comes down to planning. By default, many public cloud services aren’t locked down to corporate or compliance standards. Unfortunately, this information isn’t widely known or advertised by the cloud vendors. It’s on the tenant to make sure their deployments are secure and the data is backed up. Proper cloud migration planning involves all teams of the business’s IT department, including the security team. Everyone should work together to make sure the cloud architecture is designed in a way allowing for performance, scalability, and keeping all data secure.
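The kind of audit that catches an open storage bucket can start as something very simple. A toy sketch (the bucket names and ACL strings here are hypothetical, and a real audit would query the cloud provider’s API rather than a local inventory):

```python
# Toy sketch of a tenant-side storage audit (illustration only; a real
# audit would pull bucket ACLs from the cloud provider's API).

def find_public_buckets(buckets):
    """Flag buckets whose ACL grants public access.

    `buckets` maps bucket name -> ACL string.
    """
    public_acls = {"public-read", "public-read-write"}
    return sorted(name for name, acl in buckets.items() if acl in public_acls)

inventory = {
    "payments-data": "public-read",   # should never be public!
    "static-assets": "public-read",
    "internal-logs": "private",
}

print(find_public_buckets(inventory))  # ['payments-data', 'static-assets']
```

Even a check this crude, run on a schedule, catches the “Joe Developer with a credit card” scenario before an external audit (or an attacker) does.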


Throwing an application at the cloud isn’t a fix for poor architecture or aging technologies. It can be a valuable tool to help in the changing world of IT, but without proper planning it can burn your company in the end. In the next post in the “Battle of the Clouds” series, we’ll look at determining the best location for the workload and how to plan for these moves.

Week 2 of the challenge has brought even more insights and wisdom than I imagined - although I should have expected it, given how incredible the THWACK community is day after day, year in and year out. As a reminder, you can find all the posts here: December Writing Challenge 2019.


I also wanted to take a moment to talk about the flexibility of the ELI5 concept. If you have a child, or have been around a child, or ever were a child, you’re probably acutely aware no two kids are exactly alike. Therefore, “Explain Like I’m Five” (ELI5) implicitly allows for a range of styles, vocabularies, and modalities. Like some of the best ideas in IT (or at least the ones making the most impact), there’s not a single, correct way to “do” explain-it-simply. ELI5 is not a single standard; it’s a framework, a way of approaching a task. Explanations can use simple words; or present simple concepts using more sophisticated words; or use examples familiar to a child; or even be presented in pictures instead of words. Because the best thing about explaining something simply is there are many ways to do it.


With that said, here are the featured words and lead writers for this week, and some of the notable comments from each day.



7. Troubleshoot

Kicking off the second week of the challenge, THWACK MVP Nick Zourdos tackles one of the most common tasks in IT—one of the things we most hate to do, and yet also one of the skills we take most pride in.


Jake Muszynski  Dec 7, 2019 6:53 PM

In IT the ability to troubleshoot problems will set you apart. So many people I have worked with go in circles or have no idea how to move forward to resolve issues. Starting with ruling out the things that are right, and listing what you don’t know goes a long way to a resolution.


Tregg Hartley Dec 8, 2019 4:33 PM

Understanding how things work

Is at the very core,

Of knowing how to troubleshoot

And doing well, this chore.



Knowing which tools to use

Will also help with this,

To localize the issue

And return to cyber bliss.


Thomas Iannelli  Dec 10, 2019 11:46 AM

SUZIE: Uncle Tom?

TOM: Yes Suzie.

SUZIE: Mom says you troubleshoot computers. What’s troubleshoot?

TOM: Well Suzie, see Alba over there?

SUZIE: uh huh

TOM: See how she is just laying there?

SUZIE: uh huh

TOM: Is she sleeping or dead?


TOM: How can you tell she is not dead?

SUZIE: I can see her chest moving.

TOM: What else?

SUZIE: When I squeak this toy her head will pop up, watch.

[Suzie squeaks the toy, but Alba doesn’t move.]

TOM: Oh, no Suzie Alba didn’t move. What next?

SUZIE: I’ll give her a treat.

[Suzie repeatedly says Alba’s name and offers a treat, but Alba is not interested.]

TOM: Oh, no Suzie Alba didn’t move again! I think I know a good way to test if she is still alive.

TOM: Hey, Alba do you want to go for a ride?

[At which point Alba jumps up, almost knocking Suzie over, and heads toward the garage door.]

TOM: You see Suzie, troubleshooting is like trying to answer the question whether Alba was alive or dead. It is a problem to be solved. You did very good things to find out if she was alive and kept trying. Sometimes it just takes someone with a little more experience who knows the right question to ask or thing to do in order to solve a problem. That is the same thing I do when I troubleshoot computers. But see next time you will know to simply ask if Alba wants to go for a ride, we all learn from each other.

SUZIE: Uncle Tom, I also learned not to get between Alba and the garage door when you ask her if she wants to go for a ride!

[They both laugh and go take Alba for a ride around the neighborhood. Otherwise she will stand by the garage door barking for the next 30 minutes, definitely letting everyone know she is alive.]


8. Virtualization

The second word of the week has—as many of the commenters said—completely changed the nature of IT for many of us. SolarWinds SE Colin Baird gives a simple, but not simplistic, explanation of what this technology is and how it has been so transformative.


Faz f Dec 9, 2019 4:10 AM

I have a very big Cardboard box, too big for me, cardboard, Scissors and sellotape.

My friend comes and also wants a box,

I get the cardboard, scissors and sellotape and make my friend a box inside my box,

My box is still too big for me.


Another friend comes who wants a box.

I get the cardboard, scissors and sellotape and make my friend another box inside my box,

Next to my first box.

My box is still too big for me.


Another friend comes who wants a box.

I get the cardboard, scissors and sellotape and make my friend another box inside my box,

Next to the other boxes.

I think my box is now just right for me,

My friends are having fun in their Virtualisation of a box.


George Sutherland Dec 9, 2019 8:29 AM

The pie analogy is perfect. Virtualization is the natural progression of computing....

I also think that virtualization is “divide and conquer” a large box can support a number of smaller boxes, each solving a needed business problem.


scott driver Dec 9, 2019 12:01 PM

Thank you for getting back to the ELI5 approach.


Virtualization: Computers running inside other computers


9. Cloud Migration

THWACK MVP Holger Mundt kicks off a series of days focusing not only on cloud-based technologies and techniques, but also featuring those little plastic blocks kids (of all ages) love to play with to build new things, worlds, and dreams.


Chris Parker Dec 9, 2019 3:29 AM

All your precious items

Saved at home

Under your care, in your hands


But in time there are too many

Not enough space

A single collection

A risk, danger


A solution, though not always best

Someone else to take care for you

The burden lifted from your hands

A Gringotts in the sky


A cost attached

But sometimes needed

Safest option to suit most needs


But be warned

The goblins can be tricky

The cloud unmanaged

A cost too big


Control passed over

Hard to return


Sascha Giese  Dec 9, 2019 3:51 AM

Not gonna migrate my LEGO Super Star Destroyer!


Michael Perkins Dec 9, 2019 8:50 AM

I am old enough (barely) to remember when computers were usually big machines in central locations accessed via dumb terminals. The big machine’s owner or administrator sold or doled out resources to you: storage, processor time, etc. I grew up through the PC revolution—the first box on which I actually worked was a 6k Commodore PET (one for the whole school), followed quickly by an Apple IIe (one in each classroom). My first home PC was the 128k Mac—the same one advertised on the ‘1984’ Super Bowl ad. I’ve used various flavors of DOS, Linux/UNIX, MacOS, and Windows through grade school, high school, undergraduate and graduate work, home and employment.


Now, everyone is migrating to the cloud. The big machine at the other end is a lot more complex: more redundant, better connected, faster. It offers more services than the old ones, at least if you purchase the right ‘aaS.’ At its heart, though, we are going back to paying for processor cycles, storage, and connectivity to it.


Everything old is new again.


10. Container

David Wagner is one of the product managers for the team building and supporting SolarWinds solutions for the cloud, so it makes sense for him to tackle this word.


Kelsey Wimmer Dec 10, 2019 12:21 PM

In some ways, it’s like keeping the forks, knives, and spoons in one drawer that has dividers rather than keeping forks, knives, and spoons in different drawers. That last part sounds silly, but that’s exactly what people who developed containers thought.


Rick Schroeder  Dec 10, 2019 4:52 PM

Some containers let us manage many smaller items that are put into groups, and it’s a huge time-saver, and very powerful. Rather than contacting 100,000 soldiers individually, one might contact “The army” container. Or one of several Corps, Divisions, Brigades or Regiments, Battalions, Companies, Platoons, right down to squads. Managing by containers, or by groups, is part of what makes Active Directory powerful—or ridiculously complex and inefficient, depending on one’s great planning and experience—or the lack thereof.


Other containers are computer environments that are isolated from other systems, and that allow us to execute commands without impacting resources that should NOT be disturbed. Containers can make installing/running apps on a Linux server simpler and more uniform. And that makes for faster deployment and better security.


Matt R  Dec 11, 2019 10:31 AM

Ha, this is perfect. My child has a specific definition of containers, as well. We had this conversation last year:


(daughter): Mommy, will you sit in the trash can (next to potty) while I go potty?

(mom): People don’t sit in the trash

(daughter): Except for when they die, then we throw them in the trash

(mom): We don’t throw dead people away

(daughter): Oh, only animals. What do we do with dead people?


So, be careful what you define as a container or it may end up with some...unwanted results.


Laura Desrosiers Dec 11, 2019 11:49 AM

I keep everything as neat, clean and simple as possible. I don’t like to over complicate things and everything has its place.


11. Orchestration

Another day of cloud-based topics, and product manager Dave Wagner is back to explain how yesterday’s word and today’s fit together to create a more automated environment.


Anthony Hoelscher Dec 11, 2019 12:22 PM

Another way to imagine this is baking a cake. It’s awfully hard to find a substitute for an egg when you are out. All the ingredients must be added within a certain time to be effective. There are certain subtasks that must be completed to achieve a delicious cake: you beat the egg before you add it to your working recipe, and you always crack it open carefully, so as not to lose any shell in the batter.

Everything has its place, and recipes help achieve the same result every time. Just don’t leave out the eggs.


Holly Baxley Dec 11, 2019 12:59 PM (in response to Dave Wagner)

Workflow: Mom’s before-bed-to-do-list

Orchestration: Mom directing all of us to do our tasks before bed


Jason Scobbie Dec 11, 2019 12:46 PM

Automation is a great thing... Combining these tasks and processes through orchestration is the difference between fixing things for an engineer or small team and turning it into an enterprise-wide improvement. When you can automate not just a change, but also the change ticket, taking the device in/out of monitoring, pre/post change verification, and NOC notification, all from a single click to start, that is a key to greatness.


12. Microservices

For this cloud-centric term, SolarWinds product manager Melanie Achard once again invoked the (practically) holy LEGO concept, to great effect.


Jeremy Mayfield  Dec 12, 2019 8:33 AM

Of course I am a fan of the Lego analogies. Great way to explain this. Just to be different today, right from Google: The honeycomb is an ideal analogy for representing the evolutionary microservices architecture. In the real world, bees build a honeycomb by aligning hexagonal wax cells. They start small, using different materials to build the cells. Construction is based on what is available at the time of building. Repetitive cells form a pattern and result in a strong fabric structure. Each cell in the honeycomb is independent but also integrated with other cells. By adding new cells, the honeycomb grows organically to a big, solid structure. The content inside each cell is abstracted and not visible outside.


Kelsey Wimmer Dec 12, 2019 9:27 AM

A microservice is a small program that does one job but does it really well. It doesn’t try to do everything. Just its job. It needs to communicate with other programs but it doesn’t do their jobs. You can put a bunch of microservices together and do even bigger things.


Holly Baxley Dec 12, 2019 10:54 AM

Hey Five-year-old me,


Do you remember the Power Rangers? How cool they are? Remember how you always wished you were the Pink Ranger, even though you were told the Green Ranger was always the strongest? You thought gymnastic skills could kick butt over raw brawn any day.


Well, keep that in your mind, as we talk about IT Microservices.


Just like each Power Ranger can stand on their own and have their own cool robot technology without affecting anyone else, the Rangers can take their powers and robots and combine them to make one HUGE super cool mega Ranger that can fight any beast.


Sometimes the Rangers had to work independently to root out the bad guys, and sometimes it takes a very big robot as a unified team to really tackle some big battles.


Microservices work like that in IT.


Each Microservice can stand on its own, like each Power Ranger. It can have its own skills, be upgraded independently, and get some really cool features—without affecting anyone else.


Each Microservice is very specific, just like a Power Ranger has very specific powers and skills it brings to the team.


But what’s cool is if you take several of these microservices and connect them together, they morph into a bigger application—just like the Power Rangers could morph into one unified giant robot ranger. This bigger application can tackle some giants that other applications and software on its own can’t.


Maybe that’s why giants such as Amazon and Netflix use Microservices in their IT architecture.


Maybe they should really call microservices: “Mighty Morphin’ Microservices!”


Yes, I suppose the nano-bots on Tony Stark’s Iron Man suit are microservices too. Maybe Tony uses microservices to create the nano-bots to do what they do to form Iron Man’s suit. You think?


13. Alert

For the last word of the week, THWACK MVP Adam Timberley gave us what amounts to D&D character cards, explaining the different personas that you may meet when working with alerts.


Faz f Dec 13, 2019 6:54 AM

Alerts you know,

Your Alarm clock in the Morning (this could be Mum)

When Dad is cooking and the oven beeps and dinner is ready!

At School when the dinner bell rings and you can play outside.

These are all alerts you know.


Mike Ashton-Moore Dec 13, 2019 9:24 AM

holy smokes, I read that and kept expecting a truncated post message

Love the detail and the archetypes - and recognize many of them, I have examples of most of them in my team.

My problem with alerts is what the intended use is.

I would advise going to the googles and searching "Red Dwarf Blue Alert"

I love my Trek/Wars etc, but Red Dwarf is aimed squarely at grown ups


George Sutherland Dec 13, 2019 1:00 PM

Alert: SHIELDS UP!!!!!


Seriously... instinctively, it’s the fight-or-flight dilemma we face when confronted with the barrage of atomic-particle-sized pieces of information.


(great graphics and analysis of the people types involved.... WELL DONE!)


I use the STEP technique

Survey the situation

Take the appropriate action based on what is presented

Evaluate your response

Prepare for the next situation


In a roundabout continuation to one of my previous blog posts, Did Microsoft Kill Perpetual Licensing, I’m going to look at the basic steps required for setting up an Office 365 tenant. There are a few ways you can purchase Office 365. There’s the good old credit card on a monthly subscription, bought directly from Microsoft. This is the most expensive way to buy Office 365, as you will be paying Recommended Retail Price (RRP).

Then there are purchasing models typically bought from an IT reseller. The reseller either helps add the subscription to an existing Microsoft Enterprise agreement, with licenses available on a Microsoft volume license portal, or, more likely, the reseller will be what’s known as a cloud solution provider (CSP). CSP licensing can be bought on monthly or yearly commitments, with prices lower than RRP. The CSP model offers great flexibility, as it’s easy to increase or decrease license consumption on a monthly basis, so you’re never overpaying for your Office 365 investment.


Now, you may be reading this wondering what on earth a Microsoft Office 365 tenant is. Don’t worry—you’re not alone. Although Office 365 adoption is often described as high, I saw a statistic suggesting only around one in five IT users actually use Office 365. That’s still only 20% market saturation.


In basic terms, an Office 365 tenant is the focal point from where you manage all the services and features of the Office 365 package. When creating an Office 365 tenant portal, you need a name for the tenant, which will make up a domain name with something like yourcompanyname.onmicrosoft.com. At a minimum, the portal will incorporate a version of Azure Active Directory for user management. Once licenses have been assigned to the users, options are open for using services like Exchange Online for email, SharePoint Online for Intranet and document library-type services, OneDrive for users’ personal document storage, and one of Microsoft’s key plays at the moment, Teams. Teams is the central communication and content collaboration platform bringing much of the Office 365 components together into one place.


Cool, now what?

Setting up your own Office 365 portal may seem like a daunting task, but it doesn’t have to be. I’ll walk you through the basics below.


At this point, I must point out I work for a cloud solution provider, so I’ve already taken the first step in creating the Microsoft tenant. You can do this any way you want—I outlined the methods of payment above.


However you arrive at this point, you’ll end up with a login name like admin@yourcompanyname.onmicrosoft.com. You need this to access the O365 portal at https://portal.microsoft.com.


When you first log in, you’ll see something like this. Click on Admin.


The admin panel looks like this when you first log in.


Domain Verification

Select Setup and add a domain. Here we’ll associate your mail domain with the O365 portal.


I’m going to add and verify the domain snurf.uk. This O365 portal will be deleted before this article is published.



At this point, you must prove you own the domain. My domain is hosted with 123-reg. I could log in to 123-reg from the O365 portal at this point, and it would sort out the domain verification for me. Instead, I’ll manually add a TXT record to show what’s involved.


To verify the domain, I have to add a TXT DNS record with the value specified below.


On the 123-reg portal, it looks like this. It’ll be similar for your DNS hosting provider.


Once the DNS record has been added, click Verify the Domain, and with any luck, you should see a congratulations message.

We now have a new verified domain.
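For reference, the verification record boils down to a single TXT entry at the root of the domain. As a sketch, in standard zone-file notation it would look like the line below; the `MS=...` token here is a placeholder, since the real value is generated by the portal for your tenant:

```
; Hypothetical zone-file entry for O365 domain verification.
; The actual MS=... value is shown in the admin portal during setup.
snurf.uk.    3600    IN    TXT    "MS=ms12345678"
```

You can confirm the record has propagated before clicking Verify by querying it yourself, for example with `nslookup -type=TXT snurf.uk`.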


User Creation

There are a couple of ways to create users in an O365 portal. They can be synchronized from an external source like an on-premises Active Directory, or they can be manually created on the O365 portal. I’ll show the latter here.


Click Users, Active users, Add a user. Notice snurf.uk is now an option for the domain suffix.


Fill in the user's name details.

If you have any licenses available, assign them here. I didn’t have any licenses available for this environment.


Assign user permissions.


And click Finish.

You now have a new user.
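If you ever need to create users in bulk, the same operation can be scripted against the Microsoft Graph API (`POST https://graph.microsoft.com/v1.0/users`) rather than clicked through the portal. As a minimal sketch, the helper below only builds the request body; the function name and all values are hypothetical, and you would still need an authenticated Graph client to actually send it:

```python
import json

def build_user_payload(display_name, alias, domain, temp_password):
    """Build the JSON body for a Microsoft Graph 'create user' request.

    Field names follow the Graph v1.0 user resource; the values passed
    in here are illustrative placeholders.
    """
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": alias,
        "userPrincipalName": f"{alias}@{domain}",
        "passwordProfile": {
            # Force a password change at first sign-in, as the portal does
            "forceChangePasswordNextSignIn": True,
            "password": temp_password,
        },
    }

payload = build_user_payload("Test User", "test.user", "snurf.uk", "Chang3Me!now")
print(json.dumps(payload, indent=2))
```

The payload mirrors the fields you filled in manually above: display name, sign-in name on your verified domain, and an initial password.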



Further Config

I’ve shared some screenshots from my organization’s setup below. Once you have some licenses available in your O365 portal, including Exchange Online, there’s some further DNS configuration to put in place. It’s the same idea as above when verifying the domain, but the settings below configure your domain to route mail via O365, amongst other things.
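To give a feel for what that mail-routing configuration involves, the sketch below shows the typical shape of the records in zone-file notation. The domain and TTLs are placeholders; always use the exact values your own portal displays:

```
; MX: route mail for the domain via Exchange Online Protection
snurf.uk.               3600  IN  MX     0  snurf-uk.mail.protection.outlook.com.

; SPF: authorize Office 365 to send mail on behalf of the domain
snurf.uk.               3600  IN  TXT    "v=spf1 include:spf.protection.outlook.com -all"

; Autodiscover: let Outlook clients locate their mailbox settings
autodiscover.snurf.uk.  3600  IN  CNAME  autodiscover.outlook.com.
```

Note the MX target is derived from your domain name with dots replaced by dashes, which is why the portal generates it for you.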


Once all that’s in place, you’ll start to see some usage statistics.



In this how-to guide, we’ve created a basic Office 365 portal, assigned a domain to it, and created some users. We’ve also seen how to configure DNS to allow mail to route to your Office 365 portal.


Although this will get you to a point of having a functioning Office 365 portal with email, I would stress that you continue to configure and lock down the portal. Security and data protection are of paramount importance. Look at security offerings from Microsoft or other third-party solutions.


If you’d like any further clarification on any aspect of this article, please comment below and I’ll aim to get back to you.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner with ideas for improving the management of school networks by analyzing performance and leveraging alerts and capacity planning.


Forty-eight percent of students currently use a computer in school, while 42% use a smartphone, according to a recent report by Cambridge International. These technologies provide students with the ability to interact and engage with content both inside and outside the classroom, and teachers with a means to provide personalized instruction.


Yet technology poses significant challenges for school IT administrators, particularly with regard to managing network performance, bandwidth, and cybersecurity requirements. Many educational applications are bandwidth-intensive and can lead to network slowdowns, potentially affecting students’ abilities to learn. And when myriad devices are tapping into a school’s network, it can pose security challenges and open the doors to potential hackers.


School IT administrators must ensure their networks are optimized and can accommodate increasing user demands driven by more connected devices. Simultaneously, they must take steps to lock down network security without compromising the use of technology for education. And they must do it all as efficiently as possible.


Here are a few strategies they can adopt to make their networks both speedy and safe.


Analyze Network Performance


Finding the root cause of performance issues can be difficult. Is it the application or the network?


Answering this question correctly requires the ability to visualize all the applications, networks, devices, and other factors affecting network performance. Administrators should be able to view all the critical network paths connecting these items, so they can pinpoint and immediately target potential issues whenever they arise.


Unfortunately, cloud applications like Google Classroom or Office 365 Education can make identifying errors challenging because they aren’t on the school’s network. Administrators should be able to monitor the performance of hosted applications as they would on-premises apps. They can then have the confidence to contact their cloud provider and work with them to resolve the issue.


Rely on Alerts


Automated network performance monitoring can save huge amounts of time. Alerts can quickly and accurately notify administrators of points of failure, so they don’t have to spend time hunting; the system can direct them to the issue. Alerts can be configured so only truly critical items are flagged.


Alerts serve other functions beyond network performance monitoring. For example, administrators can receive an alert when a suspicious device connects to the network or when a device poses a potential security threat.
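To make the idea concrete, an alert rule generally combines a condition, a scope, and a notification action. The fragment below is a generic, hypothetical configuration sketch (the field names are illustrative, not any specific product’s schema) showing how scoping and suppression keep alerts limited to truly critical items:

```yaml
# Hypothetical alert rule in a generic monitoring tool's syntax
alert:
  name: core-switch-interface-down
  condition: interface.status == "down"
  scope: devices.tagged("core")         # only truly critical gear
  suppress_during: maintenance_windows  # avoid noise from planned work
  severity: critical
  notify:
    - email: netadmin@school.example
    - ticket: auto-create
```

The scope and suppression settings are what separate a useful alert from a noisy one.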


Plan for Capacity


A recent report by The Consortium for School Networking indicates within the next few years, 38% of students will use, on average, two devices. Those devices, combined with the tools teachers are using, can heavily tax network bandwidth, which is already in demand thanks to broadband growth in K-12 classrooms.


It’s important for administrators to monitor application usage to determine which apps are consuming the most bandwidth and address problem areas accordingly. This can be done in real time, so issues can be rectified before they have an adverse impact on everyone using the network.


They should also prepare for and optimize their networks to accommodate spikes in usage. These could occur during planned testing periods, for example, but they also may happen at random. Administrators should build in bandwidth to accommodate all users—and then add a small percentage to account for any unexpected peaks.
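A back-of-the-envelope version of that provisioning rule is simple to sketch. The function and every figure below are illustrative assumptions, not vendor guidance; real per-device demand varies widely by application mix:

```python
def required_bandwidth_mbps(users, devices_per_user, mbps_per_device, headroom_pct=15):
    """Estimate bandwidth to provision: peak concurrent demand
    plus a headroom percentage for unexpected spikes."""
    peak = users * devices_per_user * mbps_per_device
    return peak * (1 + headroom_pct / 100)

# e.g., 500 students, 2 devices each, ~0.5 Mbps per device, 15% headroom
print(required_bandwidth_mbps(500, 2, 0.5))  # roughly 575 Mbps
```

Tuning the headroom percentage against historical peak data (see below) is what turns this from a guess into a plan.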


Tracking bandwidth usage over time can help administrators accurately plan their bandwidth needs. Past data can help indicate when to expect bandwidth spikes.


Indeed, time itself is a common thread among these strategies. Automating the performance and optimization of a school network can save administrators from having to do all the maintenance themselves, thereby freeing them up to focus on more value-added tasks. It can also save schools from having to hire additional technical staff, which may not fit in their budgets. Instead, they can put their money toward facilities, supplies, salaries, and other line items with a direct and positive impact on students’ education.


Find the full article on Today’s Modern Educator.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Garry Schmidt first got involved in IT Service Management almost 20 years ago. Since becoming the manager of the IT Operations Center at SaskPower, ITSM has become one of his main focuses. Here are part 1 and part 2 of our conversation.


Bruno: What have been your biggest challenges with adopting ITSM and structuring it to fit SaskPower?

Garry: One of the biggest challenges is the limited resources available. Everybody is working hard to take care of their area of responsibility. Often you introduce new things, like pushing people to invest time in problem management, for example. The grid is full. It’s always a matter of trying to get the right priority. There are so many demands on everybody all day long that even though you believe investing time in improving the disciplines is worthwhile, you still have to figure out how to convince people the priority should be placed there. So, the cultural change aspect, the organizational change, is always the difficult part. “We’ve always done it this way, and it’s always worked fine. So, what are you talking about, Schmidt?”


Taking one step at a time and having a plan of where you want to get to. Taking those bite-sized pieces and dealing with them that way. You just can’t get approval to add a whole bunch of resources to do anything. It’s a matter of molding how we do things to shift toward the ideal instead of making big leaps.


It’s more of an evolution than a revolution. Mind you, a big tool change or something similar gives you a platform to do a fair amount of the revolution at the same time. You’ve got enough funding and dedicated resources to be able to focus on it. Most of the time, you’re not going to have that sort of thing to leverage.


Bruno: You’ve mentioned automation a few times in addition to problem resolution and being more predictive and proactive. When you say automation, what else are you specifically talking about?

Garry: Even things like chatbots that can respond to requests and service desk contacts. I think there’s more and more capability available. The real challenge I’ve seen with AI tools is that it’s hard to differentiate between those with just a new marketing spin on an old tool and the ones with some substance to them. And they’re expensive.


We need to find automation capability that leverages the tools we’ve already got, so it’s an incremental investment rather than a wholesale replacement. After our enterprise monitoring and alerting experience, I’m not willing to go down the path of replacing all our monitoring tools and bringing everything into a big, jumbo AI engine again. I’m skittish of that kind of stuff.


Our typical pattern has been that we buy a big, complicated tool, implement it, and then use only a tenth of its capability. And then we go looking for another tool.


Bruno: What recommendations would you make to someone who is about to introduce ITSM to an organization?

Garry: Don’t underestimate the amount of time it takes to handle the organizational change part of it.


You can’t just put together job aides and deliver training for a major change and then expect it to just catch on and go. It takes constant tending to make it grow.


It’s human nature: you’re not going to get everything the first time you see new processes. So, have a solid support structure in place to continually coach and even evolve the things you do. We’ve changed and improved lots of things based on the feedback we’ve gotten from the folks within the different operational teams. Our general approach was to get the input and involvement of the operational teams as much as we could. But there were cases where we had to make decisions on how it was going to go and then teach people about it later. In both cases, you get smarter as you go forward.


This group operates a little differently than all the rest, and there are valid reasons for it, so we need to change our processes and our tools to make it work better for them. You need to make it work for them.


Have the mentality that our job is to make them successful.


You just need to have an integrated ITSM solution. We were able to make progress without one, but we hit a ceiling.


Bruno: Any parting words on ITSM, or your role, or the future of ITSM as you see it?

Garry: I really like the role my team and I have. Being able to influence how we do things. Being able to help the operational teams be more successful while also improving the results we provide to our users, to our customers. I’m happy with the progress we’ve made, especially over the last couple of years. Being able to dedicate time towards reviewing and improving our processes and hooking it together with the tools.


I think we’re going to continue to make more good progress as we further automate and evolve the way we’re doing things.
