Geek Speak Blogs - Page 2


This week's Actuator comes to you from the suddenly mild January here in the Northeast. I'm taking advantage of the warm and dry days up here, spending time walking outdoors. Being outdoors is far better than the treadmill at the gym.

As always, here's a bunch of links from the internet I hope you will find useful. Enjoy!

Jeff Bezos hack: Amazon boss's phone 'hacked by Saudi crown prince'

I don't know where to begin. Maybe we can start with the idea that Bezos uses WhatsApp, an app with well-documented security problems, owned by the equally problematic Facebook. I'm starting to think he built a trillion-dollar company by accident, not because he's smart.

New Ransomware Process Leverages Native Windows Features

This is notable, but not new. Ransomware often uses resources available on the machine to do damage. For example, VB macros embedded in spreadsheets. I don't blame Microsoft for saying they won't provide security service for this, but it would be nice if they could hint at finding ways to identify and halt malicious activity.

London facial recognition: Metropolitan police announces new deployment of cameras

Last week the EU was talking about a five-year ban on facial recognition technology. Naturally, the U.K. decides to double down on their use of that same tech. I can't help but draw the conclusion this shows the deep divide between the U.K. and the EU.

Security Is an Availability Problem

I'm not certain, but I suspect many business decision-makers tend to think "that can't happen to us," and thus fail to plan for the day when it does happen to them.

Apple's dedication to 'a diversity of dongles' is polluting the planet

Words will never express my frustration with Apple for the "innovation" of removing a headphone jack and forcing me to buy additional hardware to continue to use my existing accessories.

Webex flaw allowed anyone to join private online meetings - no password required

The last thing I'm doing during the day is trying to join *more* meetings.

Play Dungeons & Deadlines

You might want to set aside some time for this one.

I walked through Forest Park this past Sunday; after a rainstorm the day before, the temperature was perfect to catch the steam coming off the trees.



Back in October 2019, I shared my love of both Raspberry Pi devices and the Pi-Hole software, and showed how, with a little know-how about Application Programming Interfaces (APIs) and scripting (in this case, I used it as an excuse to make my friend @kmsigma happy and expand my knowledge of PowerShell), you could fashion a reasonable API-centric monitoring template in Server & Application Monitor (SAM). For those who are curious, you can find part 1 here: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2), and part 2 here: Don’t Shut Your Pi-Hole, Monitor It! (part 2 of 2).

It was a good tutorial, as far as things went, but it missed one major point: even as I wrote the post, I knew @Serena and her daring department of developers were hard at work building an API poller into SAM 2019.4. As my tutorial went to post, this new functionality was waiting in the wings, about to be introduced to the world.

Leaving the API poller out of my tutorial was a necessary deceit at the time, but not anymore. In this post I’ll use all the same goals and software as my previous adventure with APIs, but with the new functionality.

A Little Review

I’m not going to spend time here discussing what a Raspberry Pi or Pi-Hole solution is (you can find that in part 1 of the original series: Don’t Shut Your Pi-Hole, Monitor It! (part 1 of 2)). But I want to take a moment to refamiliarize you with what we’re trying to accomplish.

Once you have your Raspberry Pi and Pi-Hole up and running, you get to the API by going to http://<your pi-hole IP or name>/admin/api.php. When you do, the data you get back looks something like this:


If you look at it with a browser capable of formatting JSON data, it looks a little prettier:


That’s the data we want to collect using the new Orion API monitoring function.
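If you'd like to poke at that payload yourself before wiring up the poller, here's a small Python sketch. The field names mirror the Pi-Hole summary API, but the sample values (and the `fetch_summary` helper) are invented for illustration:

```python
import json
from urllib.request import urlopen

# Hypothetical sample of the Pi-Hole summary payload. The field names
# mirror the real API; the values are made up for illustration.
SAMPLE = (
    '{"domains_being_blocked": 115897, "dns_queries_today": 8432,'
    ' "ads_blocked_today": 1275, "ads_percentage_today": 15.12}'
)

def fetch_summary(host):
    """Pull the live JSON from a Pi-Hole (requires network access)."""
    with urlopen(f"http://{host}/admin/api.php") as resp:
        return json.load(resp)

def pretty(payload):
    """Re-indent the flat JSON the way a JSON-aware browser would."""
    return json.dumps(json.loads(payload), indent=2, sort_keys=True)

print(pretty(SAMPLE))
```

Running `pretty` on the raw response gives you the same nicely indented view a JSON-capable browser shows.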

The API Poller – A Step-by-Step Guide

To start off, make sure you're already monitoring the Raspberry Pi in question, so there's a place to display this data. What's different from the SAM component version is that you can monitor it using the ARM agent, or SNMP, or even as a ping-only node.

Next, on the Node Details page for the Pi, look in the “Management” block and you should see an option for “API Poller.” Click that, then click “Create,” and you’re on your way.


You want to give this poller a name, or else you won’t be able to include these statistics in PerfStack (Performance Analyzer) later. You can also give it a description and (if required) the authentication credentials for the API.


On the next screen, put in the Pi-Hole API URL. As I said before, that’s http://<your pi-hole IP or Name>/admin/api.php. Then click “Send Request” to pull a sample of the available metrics.


The “Response” area below will populate with items. For the ones you want to monitor, click the little computer screen icon to the right.


If you want to monitor the value without warning or critical thresholds, click “Save.” Otherwise change the settings as you desire.
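Conceptually, the warning/critical evaluation is just a pair of comparisons. This sketch assumes a "higher is worse" metric with greater-than-or-equal comparisons; the actual operators and thresholds are whatever you configure in the poller:

```python
def classify(value, warn, crit):
    """Classify a metric reading against warning/critical thresholds.

    Assumes a higher-is-worse metric (e.g., ads_blocked_today) with
    >= comparisons; both are configurable in the real poller UI.
    """
    if value >= crit:
        return "critical"
    if value >= warn:
        return "warning"
    return "ok"

print(classify(1275, warn=1000, crit=5000))  # → warning
```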


As you do, you’ll see the “Values to Monitor” list on the right column populate. Of course, you can go back and edit or remove those items later. Because nobody’s perfect.


Once you’re done, click “Save” at the bottom of the screen. Scroll down on the Node Details page and you’ll notice a new “API Pollers” Section is now populated.


I’m serious, it’s this easy. I’m not saying coding API monitors with PowerShell wasn’t a wonderful learning experience, and I’m sure down the road I’ll use the techniques I learned.

But when you have several APIs, with a bunch of values each, this process is significantly easier to set up and maintain.

Kudos once again to @kmsigma for the PowerShell support; and @serena and her team for all their hard work and support making our lives as monitoring engineers better every day.

Try it out yourself and let us know your experiences in the comments below!


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by Jim Hansen about using patching, credential management, and continuous monitoring to improve security of IoT devices.

Security concerns over the Internet of Things (IoT) are growing, and federal and state lawmakers are taking action. First, the U.S. Senate introduced the Internet of Things Cybersecurity Improvement Act of 2017, which sought to “establish minimum security requirements for federal procurements of connected devices.” More recently, legislators in the state of California introduced Senate Bill No. 327, which stipulated manufacturers of IoT devices include “a reasonable security feature” within their products.

While these laws are good starting points, they don’t go far enough in addressing IoT security concerns.

IoT Devices: A Hacker’s Best Friend?

Connected devices all have the potential to connect to the internet and local networks and, for the most part, were designed for convenience and speed—not security. And since they're connected to the network, they offer a backdoor through which other systems can be easily compromised.

As such, IoT devices offer tantalizing targets for hackers. A single exploit from one connected device can lead to a larger, more damaging breach. Remember the Target hack from a few years ago? Malicious attackers gained a foothold into the retail giant’s infrastructure by stealing credentials from a heating and air conditioning company whose units were connected to Target’s network. It’s easy to imagine something as insidious—and even more damaging to national security—taking place within the Department of Defense or other agencies, which have been early adopters of connected devices.

Steps for Securing IoT Devices

When security managers initiate IoT security measures, they’re not only protecting their devices, they’re safeguarding everything connected to those devices. Therefore, it’s important to go beyond the government’s baseline security recommendations and embrace more robust measures. Here are some proactive steps government IT managers can take to lock down their devices and networks.

  • Make patching and updating a part of the daily routine. IoT devices should be subject to a regular cadence of patches and updates to help ensure the protection of those devices against new and evolving vulnerabilities. This is essential to the long-term security of connected devices.

The Internet of Things Cybersecurity Improvement Act of 2017 specifically requires vendors to make their IoT devices patchable, but it’s easy for managers to go out and download what appears to be a legitimate update—only to find it’s full of malware. It’s important to be vigilant and verify that security packages are genuine before applying them to devices.

  • Apply basic credential management to interaction with IoT devices. Managers must think differently when it comes to IoT device user authentication and credential management. They should ask, “How does someone interact with this device?” “What do we have to do to ensure only the right people, with the right authorization, are able to access the device?” “What measures do we need to take to verify this access and understand what users are doing once they begin using the device?”

Being able to monitor user sessions is key. IoT devices may not have the same capabilities as modern information systems, such as the ability to maintain or view log trails or delete a log after someone stops using the device. Managers may need to proactively ensure their IoT devices have these capabilities.

  • Employ continuous threat monitoring to protect against attacks. There are several common threat vectors hackers can use to tap into IoT devices. SQL injection and cross-site scripting are favorite weapons malicious actors use to target web-based applications and could be used to compromise connected devices.

Managers should employ IoT device threat monitoring to help protect against these and other types of intrusions. Continuous threat monitoring can be used to alert, report, and automatically address any potentially harmful anomalies. It can monitor traffic passing to and from a device to detect whether the device is communicating with a known bad entity. A device in communication with a command and control system outside of the agency’s infrastructure is a certain red flag that the device—and the network it’s connected to—may have been compromised.
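One simple form of that traffic check is comparing flow destinations against a known-bad list. The sketch below uses documentation-range IP addresses as a stand-in for a real threat-intelligence feed; in practice the list would come from a monitoring platform, not a hard-coded set:

```python
# Stand-in for a threat-intelligence feed (documentation-range IPs).
KNOWN_BAD = {"203.0.113.10", "198.51.100.7"}

def flag_suspicious(flows, blocklist=KNOWN_BAD):
    """Return the flows whose destination appears on the blocklist."""
    return [f for f in flows if f["dst"] in blocklist]

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.10"},  # device calling a bad host
    {"src": "10.0.0.5", "dst": "192.0.2.44"},    # normal traffic
]
print(flag_suspicious(flows))
```

A hit here is exactly the red flag described above: a device talking to a command-and-control host outside the agency's infrastructure.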

The IoT is here to stay, and it’s important for federal IT managers to proactively tackle the security challenges it poses. Bills passed by federal and state legislators are a start, but they’re not enough to protect government networks against devices that weren’t designed with security top-of-mind. IoT security is something agencies need to take into their own hands. Managers must understand the risks and put processes, strategies, and tools in place to proactively mitigate threats caused by the IoT.

Find the full article on Fifth Domain.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.


Submitted for your approval: a story of cloud horrors.

One of performance issues impacting production.

Where monthly cloud billing began spiraling out of control.

The following story is true. The names have been changed to protect the innocent.

During my consulting career, I’ve encountered companies at many different stages of their cloud journey. What was particularly fun about walking into this shop is they were already about 75% up into public cloud. The remaining 25% was working towards being migrated off their aging hardware. They seemed to be ahead of the game, so why were my services needed?

Let’s set up some info about the company, which I’ll call “ABC Co.” ABC Co. provides medical staff and medical management to many hospitals and clinics, with approximately 1,000 employees and contractors spread across many states. Being in both medical staffing and recordkeeping, ABC Co. was subject to many compliance regulations such as HIPAA, PCI, etc. Their on-premises data center was on older hardware nearing end of life, and given the size of their IT staff, they decided to move out of the data center business.

The data center architect at ABC Co. did his homework. He spent many hours learning about public cloud, crunching numbers, and comparing virtual machine configurations to cloud-based compute sizing. Additionally, due to compliance requirements, ABC Co. needed to use dedicated hosts in the public cloud. After factoring in all the sizing, storage capacity, and necessary networking, the architect arrived at an expected monthly spend number: $50,000. He took this number to the board of directors with a migration plan and outlined the benefits of going to the cloud versus refreshing their current physical infrastructure. The board was convinced and gave the green light to move into the public cloud.

Everything was moving along perfectly early in the project. The underlying cloud architecture of networking, identity and access management, and security were deployed. A few workloads were moved up into the cloud to great success. ABC Co. continued their migration, putting applications and remote desktop servers in the cloud, along with basic workloads such as email servers and databases. But something wasn’t right.

End users started to complain of performance issues on the RDP servers. Application processing had slowed to a crawl, impeding employees' ability to perform their tasks. The architect and cloud administrators added more remote desktop servers to the environment and increased their size. Sizing on the application servers, which were just Microsoft Windows Servers in the public cloud, was also increased. This alleviated the problems, albeit temporarily. As more and more users logged in to the public cloud-based services, performance and availability took a hit.

And then the bill showed up.

At first, the bill crept up slowly toward the anticipated $50,000 per month. Unfortunately, as a side effect of the ever-increasing resources, it eventually rose to more than triple the original estimate presented to the board of directors. At the peak of the “crisis,” the bill surpassed $150,000 per month. This put the C-suite on edge. What was going on with the cloud migration project? How could the bill be three times what they had been promised? It was time for the ABC Co. team to call for an assist.

This is where I entered the scene. I’ll start this next section of the story by stating this outright: I didn’t solve all their problems. I wasn’t a savior on a white horse galloping in to save the day. I did, however, help ABC Co. start to reduce their bill and get cloud spend under control.

One of the steps they implemented before I arrived was a scripted shutdown of servers during non-work hours. This cut off some of the wasteful spend on machines not being used. We also looked at the actual usage of all servers in the cloud. After running some scans, we found many servers that hadn't been used in 30 days or more were being left on and piling onto the bill. These servers were promptly shut down, archived, then deleted after a set time. Applications experiencing performance issues were analyzed, and it was determined they could be converted to a cloud-native architecture. And those pesky, ever-growing remote desktop boxes? Smaller, more cost-effective servers were placed behind a load balancer to automatically boot additional servers should the user count demand it. These are just a few of the steps taken to reduce the cloud bill. Many things occurred after I left, but it was a start to send them down the right path.
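The off-hours logic behind such a scripted shutdown can be sketched in a few lines. The 07:00–19:00 business window and the always-on exemption here are assumptions for illustration, not ABC Co.'s actual schedule:

```python
from datetime import time

# Assumed business window; production-critical boxes opt out via always_on.
WORK_START, WORK_END = time(7, 0), time(19, 0)

def should_shut_down(now, always_on=False):
    """True when a non-critical instance falls outside the work window."""
    if always_on:
        return False
    return not (WORK_START <= now <= WORK_END)

print(should_shut_down(time(23, 30)))  # overnight → True
```

A scheduler (cron, or the cloud provider's own automation) would run this check per instance and stop anything the function flags.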

So, what can be learned from this story? While credit should be given for the legwork done to develop the strategy, on-premises virtual machines and public cloud-based instances aren't an apples-to-apples comparison. Workloads behave differently in the cloud. The way resources are consumed has costs behind it; you can't just throw RAM and CPU at a problem server like you can in your own data center (nor is that often the correct solution). Many variables go into a cloud migration. If your company is looking at moving to the cloud, be sure to ask the deep questions during the initial planning phase—it may just save hundreds of thousands of dollars.


Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the ...

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn't steal anything, they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content allowed. Similar to how I feel developers should be held accountable for deploying software that's not secure, or leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.



Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of these vulnerabilities had been discovered only within the past year—a sure sign of rising risk levels.

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

1. Identify Existing Threats and Vulnerabilities

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.
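Automated configuration scanning can be as simple as fingerprinting each device's config and comparing it against an approved baseline between scans. This is a minimal sketch of the idea; the config text is invented for the example:

```python
import hashlib

def fingerprint(config_text):
    """Hash a config so any change between scans is detectable."""
    return hashlib.sha256(config_text.encode()).hexdigest()

# Approved baseline vs. what the scanner pulled today (made-up configs).
baseline = fingerprint("interface eth0\n ip address 10.1.1.1/24\n")
current = fingerprint("interface eth0\n ip address 10.1.1.99/24\n")

print("drift detected" if current != baseline else "in compliance")
```

A real scanner would go further and diff the texts to show *what* changed, but the hash comparison is enough to flag unmanaged changes for review.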

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally as important to be able to spot potential vulnerabilities in the cloud.

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.

3. Establish Ongoing User Training and Education Programs

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risk to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.


While there are many silly depictions of machine learning and artificial intelligence throughout Hollywood, the reality delivers significant benefits. Administrators today oversee many tasks, like system monitoring, performance optimization, network configuration, and more. Many of these tasks are monotonous, tedious, and required daily. In these cases, machine learning helps ease the burden on administrators and makes them more productive with their time. Lately, however, more people seem to think too much machine learning may replace the need for humans to get a job done. While there are instances of machine learning eliminating the need for some tasks to be performed by a human, I don’t believe we’ll see humans replaced by machines (sorry, Terminator fans). Instead, I’ll highlight why I believe machine learning matters now and will continue to matter for generations to come.

Machine Learning Improves Administrators’ Lives

Some of the tasks administrators are responsible for are tedious and take a long time to complete. With machine learning, those daily chores can be automated to run on a schedule and become more efficient as system behavior is learned and optimized on the fly. A great example comes in the form of spam mail or calls. Big-name telecom companies are now using machine learning to filter out the spam callers flooding cell phones everywhere. Call-blocker apps can screen calls for you based on spam-call lists analyzed by machine learning and then block potential spam. In other examples, machine learning can analyze system behavior against a performance baseline and then alert the team to any anomalies and/or the need to make changes. Machine learning is here to help the administrator, not give them anxiety about being replaced.
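The baseline-and-alert idea can be illustrated with a simple z-score check: flag any reading that lands too many standard deviations from the recent baseline. The CPU numbers and the three-sigma threshold below are invented for the example, and real systems use far more sophisticated models:

```python
from statistics import mean, stdev

def is_anomalous(sample, baseline, threshold=3.0):
    """Flag a reading more than `threshold` std devs from the baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(sample - mu) > threshold * sigma

# Hypothetical recent CPU% readings establishing "normal."
baseline = [52, 48, 50, 49, 51, 50, 53, 47]

print(is_anomalous(95, baseline))  # → True (well outside the baseline)
```

When the check fires, the monitoring platform raises the alert instead of an administrator eyeballing dashboards all day.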

Machine Learning Makes Technology Better

There are so many amazing software packages available today for backup and recovery, server virtualization, storage optimization, and security hardening. There’s something for every type of workload. When machine learning is applied to these technologies, it enhances the application and increases ease of use. Machine learning does just what the name says: it’s always learning. If an application workload suddenly increases, machine learning captures the change and uses an algorithm to determine how to react in those situations. When there’s a storage bottleneck, machine learning analyzes the traffic to determine what’s causing the congestion and then works out a possible solution for administrators to implement.

Machine Learning Reduces Complexity

Nobody wants their data center to be more complex. In fact, technology trends in the past 10 to 15 years have leaned towards reducing complexity. Virtualization technology has reduced the need for a large footprint in the data center and reduced the complexity of systems management. Hyperconverged infrastructure (HCI) has gone a step further and consolidated an entire rack’s worth of technology into one box. Machine learning takes it a step further by enabling automation and fast analysis of large data sets to produce actionable tasks. Tasks requiring a ton of administrative overhead are now reduced to an automated and scheduled task monitored by the administrator. Help desk analysts benefit from machine learning’s ability to recognize trending data to better triage certain incident tickets and reduce complexity in troubleshooting those incidents.

Learn Machine Learning

If you don’t have experience with machine learning, dig in and start reading everything you can about it. In some cases, your organization may already be using machine learning. Figure out where it’s being used and start learning how it affects your job day to day. There are so many benefits to using machine learning—find out how it benefits you and start leveraging its power.


The marketing machines of today often paint new technologies to suggest they’re the best thing since sliced bread. Sometimes though, the new products are just a rehash of an existing technology. In this blog post, I’ll look at some of these.

As some of you may know, my tech background is heavily focused around virtualization and the associated hardware and software products. With this in mind, this post will have a slant towards those types of products.

One of the recent technology trends I’ve seen cropping up is something called dHCI, or disaggregated hyperconverged infrastructure. I mean, what is that? If you break it down to its core components, it’s nothing more than separate switching, compute, and storage. Why is this so familiar? Oh yeah—it’s called converged infrastructure. There’s nothing HCI about it. HCI is the convergence of storage and compute onto a single chassis. To me, it’s like going to a hipster café and asking for a hyperconverged sandwich. You expect a ready-to-eat, turnkey sandwich but instead, you receive a disassembled sandwich you have to construct yourself, and somehow it’s better than the thing it was trying to be in the first place: a sandwich. No thanks. If you dig a little deeper, the secret sauce of dHCI is the lifecycle management software overlaying the converged infrastructure, but hey, not everyone wants secret sauce with their sandwich.

If you take this a step further and label these types of components as cloud computing, nothing has really changed. One could argue true cloud computing is the ability to self-provision workloads, but rarely does a product labeled as cloud computing deliver those results, especially private clouds.

An interesting term I came across as a future technology trend is distributed cloud.¹ This sounds an awful lot like hybrid cloud to me. Distributed cloud is when public cloud service offerings are moved into private data centers on dedicated hardware to give a public cloud-like experience locally. One could argue this already happens the other way around with a hybrid cloud. Technologies like VMware on AWS (or any public cloud for that matter) make this happen today.

What about containers? Containers have held the media’s attention for the last few years now as a new way to package and deliver a standardized application portable across environments. The concept of containers isn’t new, though. Docker arguably brought containers to the masses but if you look at this excellent article by Ell Marquez on the history of containers, we can see its roots go all the way back to the mainframe era of the late 70s and 80s.

The terminology used by data protection companies to describe their products also grinds my gears: selling technology on being “immutable,” meaning it cannot be changed once it has been committed to media. Err, WORM (write once, read many) media, anyone? This technology has existed for years on tape and hard drives. Don’t try and sell it as a new thing.

While this may seem a bit ranty, if you’re in the industry, you can probably guess which companies I’m referring to with my remarks. What I am hoping to highlight though is not everything is new and shiny, some of it is wrapped up in hype or clever marketing.

I’d love to hear your thoughts on this, if you think I’m right or wrong, and if you can think of any examples of old tech, new name.

¹Source: CRN


2019 was a busy year for DevOps, as measured by the events held on the topic. Whether it be DevOps Days around the globe, DockerCon, DevOps Enterprise Summits, KubeCon, or CloudNativeCon, events are springing up to support this growing community. With a huge number of events already scheduled for 2020, people clearly plan on improving their skills with this technology. This is great—it’ll allow DevOps leaders to close capability gaps, and it should be a must for those on a DevOps journey in 2020.

Hopefully, we’ll see more organizations adopt the key stages of DevOps evolution (foundation building, normalization, standardization, expansion, automated infrastructure delivery, and self-service) by following this model. Understanding where you are on the journey helps you plan what needs to be satisfied at each level before trying to move on to an area of greater complexity. By looking at the levels of integration and the growing toolchain, you can see where you are and plan accordingly. I look forward to reading about the trials organizations face, and how they overcome them, as they further their DevOps movement in 2020.

You’ll probably hear terms like NoOps and DevSecOps gain more traction over the coming year from certain analysts. I believe the name DevOps is fine for what you’re trying to achieve: if you follow correct procedures, then security and operations already make up a large subset of your workflows, so you shouldn’t need to call them out as separate terms. If you’re not pushing changes to live systems, then you aren’t really doing any operations, and you’re not truly testing your code; how can you go back and improve or iterate on it? As for security, while it’s hard to implement correctly and just as difficult to get teams working collaboratively, there’s a greater need than ever to adopt it properly. Organizations that have matured and evolved through the stages above are far more likely to place emphasis on the integration of security than those just starting out. Improved security posture will be a key talking point as we progress through 2020 and into the next decade.

Kubernetes will gain even more ground in 2020 as more people look to it for a robust method of container orchestration to scale, monitor, and run any application. Many big-name software vendors are investing in what they see as the “next battleground” for variants of the open-source application management tool.

Organizations will start to invest in more use of artificial intelligence, whether it be for automation, remediation, or improved testing. You can’t deny artificial intelligence and machine learning are hot right now, and they’ll seep into this aspect of technology in 2020. The best place to try this is with a cloud provider, saving you the need to invest in hardware; the provider can get you up and running in minutes.

Microservices and container infrastructure will be another area of growth in the coming 12 months. Container registries are beneficial to organizations: they allow companies to apply policies, whether security, access control, or otherwise, to how they manage containers. JFrog Container Registry will probably lead the charge in 2020, but don’t think they’ll have it easy, as AWS, Google, Azure, and other software vendors have products fighting for this space.

These are just a few areas I see becoming topics of conversation and column inches as we move into 2020 and beyond, but they tell me this is the area in which to develop your skills if you want to be in demand as we move into the second decade of this century.


In Austin this week for our annual meeting of Head Geeks. The first order of business is to decide what to call our group. I prefer a "gigabyte of Geeks," but I continue to be outvoted. Your suggestions are welcome.

As always, here's a bunch of links from the internet I hope you find interesting. Enjoy!

Facebook again refuses to ban political ads, even false ones

Zuckerberg continues to show the world he only cares about ad revenue, for without that revenue stream his company would collapse.

Scooter Startup Lime Exits 12 Cities and Lays Off Workers in Profit Push

Are you saying renting scooters that your customers then abandon across cities *is not* a profitable business model? That's crazy!

Russian journals retract more than 800 papers after ‘bombshell’ investigation

I wish we could do the same thing with blog posts, old and new.

Alleged head of $3.5M crypto mining scam bought stake in nightclub

A cryptocurrency scam? Say it isn't so! Who knew this was even possible?

Ring confirms it fired four employees for watching customer videos

Ah, but only after an external complaint, and *after* their actions were known internally. In other words, these four would still have jobs if not for the external probe.

Tesla driver arrested for flossing at 84 mph on autopilot

Don't judge, we've all been there, stuck in our car and in need of flossing our teeth.

It's helpful for a restaurant to publish their menu outside for everyone to see.



Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Brandon Shopp with ideas about modernizing security along with agency infrastructure to reduce cyberthreats.

As agencies across the federal government modernize their networks to include and accommodate the newest technologies such as cloud and the Internet of Things (IoT), federal IT professionals are faced with modernizing security tactics to keep up.

There’s no proverbial silver bullet, no single thing capable of protecting an agency’s network. The best defense is implementing a range of tactics working in concert to provide the most powerful security solution.

Let’s take a closer look.

Access Control

Something nearly all of us take for granted is access. The federal IT pro can help dramatically improve the agency’s security posture by reining in access.

There can be any number of reasons for federal IT pros to set overly lenient permissions—from a lack of configuration skills to a limited amount of time. The latter is often the more likely culprit, as access control applies to many aspects of the environment. From devices to file folders and databases, managing access rights is difficult and time-consuming.

Luckily, an increasing number of tools are available to help automate the process. Some of these tools can go so far as to automatically define permission parameters, create groups and ranges based on these parameters, and automatically apply the correct permissions to any number of devices, files, or applications.

Once permissions have been set successfully, be sure to implement multifactor authentication to ensure access controls are as effective as possible.
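As a rough illustration of the automation described above, here's a minimal Python sketch of deriving rights from roles with a default-deny fallback. The role names, rights, and users are invented for illustration; real tools would pull this data from Active Directory or LDAP rather than a hard-coded table.

```python
# Hypothetical role-to-rights table; a real tool would read this from a
# directory service, not hard-code it.
ROLE_RIGHTS = {
    "analyst":  {"read"},
    "operator": {"read", "write"},
    "admin":    {"read", "write", "manage"},
}

def effective_rights(role):
    # Default-deny: an unknown role gets no rights, never lenient ones.
    return ROLE_RIGHTS.get(role, set())

users = [("alice", "admin"), ("bob", "analyst"), ("carol", "operator")]
grants = {name: effective_rights(role) for name, role in users}
print(grants)
```

The design point is the default-deny fallback: automation should fail closed, so a misconfigured or unknown role never inherits lenient access.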

Diverse Protection

The best protection against a complex network is multi-faceted security. Specifically, to ensure the strongest defense, invest in both cloud-based and on-premises security.

For top-notch cloud-based security, consider the security offerings of the cloud provider with as much importance as other benefits. Too many decision-makers overlook security in favor of more bells and whistles.

Along similar lines of implementing diverse, multi-faceted security, consider network segmentation. If an attack happens, the federal IT pro should be able to shut down a portion of the network to contain the attack while the rest of the network remains unaffected.


Once the federal IT pro has put everything in place, the final phase—testing—will quickly become the most important aspect of security.

Testing should include technology testing (penetration testing, for example), process testing (is multi-factor authentication working?), and people testing (testing the weakest link).

People testing may well be the most important part of this phase. Security threats caused by human error are increasingly becoming one of the federal government’s greatest risks. In fact, according to a recent Cybersecurity Survey, careless and malicious insiders topped the list of security threats for federal agencies.


There are tactics federal IT pros can employ to provide a more secure environment, from enhancing access control to implementing a broader array of security defenses to instituting a testing policy.

While each of these is important individually, putting them together goes a long way toward strengthening any agency’s security infrastructure.

Find the full article on Government Technology Insider.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.


Part 1 of this series introduced IT Service Management (ITSM) and a few of the adaptable frameworks available to fit the needs of an organization. This post focuses on the benefits of using a couple of the principles from the Lean methodology to craft ITSM.

I’d like to acknowledge and thank Trish Livingstone and Julie Johnson for sharing their expertise in this subject area. I leaned on them heavily.

The Lean Methodology

The Lean methodology is a philosophy and mindset focused on driving maximum value to the customer while minimizing waste. These goals are accomplished through continuous improvement and respect for people. More on this in a bit.

Because the Lean methodology originated in the Toyota Production System, it’s most commonly associated with applications in manufacturing. However, over the years, Lean has brought tremendous benefits to other industries as well, including knowledge work. Probably the most recognizable form of Lean in knowledge work is the Agile software development method.

Continuous Improvement

“If the needs of the end user are the North Star, are all the services aligned with achieving their needs?” — Julie Johnson

Continuous improvement should be, not surprisingly, continuous. Anyone involved in a process should be empowered to make improvements to it at any time. However, some situations warrant bringing a team of people together to effect more radical change than can be achieved by a single person.

3P—Production, Preparation, Process

One such situation is right at the beginning of developing a new product or process.

This video and this video demonstrate 3P being used to design a clinic and a school, respectively. In both cases, the design team includes stakeholders of all types to gather the proper information to drive maximum value to the customer. The 3P for the clinic consists of patients, community members, caregivers, and support staff. The same process for the school includes students, parents, teachers, and community members.

While both examples are from tangible, brick-and-mortar situations, the 3P is also helpful in knowledge work. One of the most challenging things to gather when initiating a new project is proper and complete requirements. Without sufficient requirements, the creative process of architecting and designing IT systems and services often ends up with disappointing outcomes and significant rework. This 3P mapped out the information flows that fed directly into the design of new safety alert system software for healthcare.

The goal of the 3P is to get the new initiative started on the right path by making thoughtful, informed decisions using information gathered from all involved in the process. A 3P should begin with the team members receiving a short training session on how to participate in the 3P. Armed with this knowledge and facilitated by a Lean professional keeping things on track, the 3P will produce results more attuned to the customer’s needs.

Value Stream Mapping (VSM)

“If you bring visibility to the end-to-end process, you create understanding around why change is needed, which builds buy-in.” — Trish Livingstone

Another situation warranting a team of people coming together to effect more radical change is when trying to improve an existing process or workflow. Value Stream Mapping (VSM) is especially useful when the process contains multiple handoffs or islands of work.

For knowledge work, VSM is the technique of analyzing the flow of information through a process delivering a service to a customer. The process and flow are visually mapped out, and every step is marked as either adding value or not adding value.

There are many good people in IT, and many want to do good work. As most IT departments operate in silos, it’s natural to think if you produce good quality work, on time, and then hand it off to the next island in the process, the customer will see the most value. The assumption here is each island in the process is also producing good quality work on time. This style of work is known as resource efficiency. The alternative, referred to as flow efficiency, focuses on the entire process to drive maximum customer value. This video, although an example from healthcare and not knowledge work, explains why flow efficiency can be superior to resource efficiency.

I was presented with a case study where a government registrar took an average of 53 days to fulfill a request for a new birth certificate. VSM revealed many inefficiencies because of tasks adding no value. The process after the VSM fulfilled requests in 3 days without adding new resources. The registrar’s call volume fell by 60% as customers no longer needed to phone for updates.

It’s easy to see how Value Stream Mapping could help optimize many processes in IT, including change management, support ticket flow, and maintenance schedules, to name a few.
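To make the value-added analysis concrete, here's a small Python sketch that marks each step of a hypothetical request process as value-adding or not and computes the ratio. The step names and durations are invented, chosen to loosely echo the 53-day/3-day registrar case study above.

```python
# Each step is (name, duration in days, adds value?). Durations are made up
# for illustration only.
steps = [
    ("Receive request",   0.25, True),
    ("Queue for review", 20.0,  False),
    ("Verify identity",   0.5,  True),
    ("Wait for sign-off", 30.0, False),
    ("Print and mail",    2.25, True),
]

total_days = sum(d for _, d, _ in steps)
value_added_days = sum(d for _, d, adds_value in steps if adds_value)

print(f"Lead time: {total_days} days")
print(f"Value-added: {value_added_days} days ({value_added_days / total_days:.0%})")
```

Even in this toy example, the map makes the waste obvious: almost all the lead time is waiting between islands of work, which is exactly what a VSM exercise surfaces.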

Respect for People

Respect for people is one of the core tenets of Lean and a guiding principle for Lean to be successful in an organization.

Respect for the customer eliminates waste. Waste is defined as anything the customer wouldn’t be willing to pay for. In the case of government services, waste is anything taxpayers wouldn’t want their tax dollars to pay for.

Language matters when respecting the customer. The phrase “difficult user” is replaced with “a customer who has concerns.” As demonstrated in the 3P videos above, rather than relying on assumptions or merely doing things “the way we’ve always done them,” customers are actively engaged to meet their needs better.

Lean leadership starts at the top. Respect for employees empowers them to make decisions allowing them to do their best work. Leadership evolves to be less hands-on and takes on a feeling of mentorship.

Respect for coworkers keeps everyone’s focus on improving the processes delivering value to the customer. Documenting these processes teaches new skills, so everyone can participate and move work through the system faster.

The 4 Wins of Lean

Using some or all of the Lean methodology to customize IT service management can be a win, win, win, win situation. Potential benefits could include:

1. Better value for the ultimate customer (employees)

    • Reduced costs by eliminating waste
    • Faster service or product delivery
    • Better overall service

2. Better for the people working in the process

    • Empowered to make changes
    • Respected by their leaders
    • Reduced burden

3. Better financially for the organization

    • Reduced waste
    • Increased efficiency
    • Reduced cost

4. Better for morale

    • Work has meaning

    • No wasted effort

    • Work contributes to the bottom line


There have been so many changes in data center technology in the past 10 years, it’s hard to keep up at times. We’ve gone from a traditional server/storage/networking stack with individual components, to a hyperconverged infrastructure (HCI) where it’s all in one box. The data center is more software-defined today than it ever has been with networking, storage, and compute being abstracted from the hardware. On top of all the change, we’re now seeing the rise of artificial intelligence (AI) and machine learning. There are so many advantages to using AI and machine learning in the data center. Let’s look at ways this technology is transforming the data center.

Storage Optimization

Storage is a major component of the data center. Having efficient storage is of the utmost importance. So many things can go wrong with storage, especially in the case of large storage arrays. Racks full of disk shelves with hundreds of disks, of both the fast and slow variety, fill data centers. What happens when a disk fails? The administrator gets an alert and has to order a new disk, pull the old one out, and replace it with the new disk when it arrives. AI uses analytics to predict workload needs and possible storage issues by collecting large amounts of raw data and finding trends in the usage. AI also helps with budgetary concerns. By analyzing disk performance and capacity, AI can help administrators see how the current configuration performs and order more storage if it sees a trend in growth.
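The trend analysis described above can be sketched in a few lines of Python: fit a straight line to daily capacity samples and estimate when the array fills. The figures are invented, and a real tool would work with far noisier data and a more robust model.

```python
# Ordinary least-squares fit over synthetic, perfectly linear usage data.
def fit_line(xs, ys):
    """Least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

days = list(range(30))                   # last 30 days of samples
used_tb = [40 + 0.5 * d for d in days]   # usage growing 0.5 TB/day

slope, _ = fit_line(days, used_tb)
capacity_tb = 100
days_until_full = (capacity_tb - used_tb[-1]) / slope
print(f"Growth: {slope:.2f} TB/day; full in roughly {days_until_full:.0f} days")
```

An estimate like this is what lets an administrator order more storage ahead of a capacity crunch instead of reacting to a full array.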

Fast Workload Learning

Capacity planning is an important part of building and maintaining a data center. Fortunately, with technology like HCI being used today, scaling out as the workload demands is a simpler process than it used to be with traditional infrastructure. AI and machine learning capture workload data from applications and use it to analyze the impact of future use. Having a technology aid in predicting the demand of your workloads can be beneficial in avoiding downtime or loss of service for the application user. This is especially important in the process of building a new data center or stack for a new application. The analytics AI provides can help administrators see the full needs of the data center, from cooling to power and space.

Less Administrative Overhead

The new word I love to pair with artificial intelligence and machine learning is “autonomy.” AI works on its own to analyze large amounts of data and find trends and create performance baselines in data centers. In some cases, certain types of data center activities such as power and cooling can use AI to analyze power loads and environmental variables to adjust cooling. This is done autonomously (love that word!) and used to adjust tasks on the fly and keep performance at a high level. In a traditional setting, you’d need several different tools and administrators or NOC staff to handle the analysis and monitoring.

Embrace AI and Put it to Work

The days of AI and machine learning being a scary and unknown thing are past. Take stock of your current data center technology and decide whether AI and/or machine learning would be of value to your project. Another common concern is AI replacing lots of jobs soon; while that may be partially true, it isn’t something to fear. It’s time to embrace the benefits of AI and use it to enhance the current jobs in the data center instead of fearing it and missing out on the improvements it can bring.


There are many configuration management, deployment, and orchestration tools available, ranging from open-source tools to automation engines. Ansible is one such software stack available to cover all the bases, and seems to be gaining more traction by the day. In this post, we’ll look at how this simple but powerful tool can change your software deployments by bringing consistency and reliability to your environment.

Ansible gives you the ability to provision, control, configure, and deploy applications to multiple servers from a single machine. Ansible allows for successful repetition of tasks, can scale from one to 10,000 or more endpoints, and uses YAML, which is easy to read and understand, to apply configuration changes. It’s lightweight, uses SSH, PowerShell, and APIs for access, and, as mentioned above, is an open-source project. It’s also agentless, differentiating it from some similar tools in this marketplace. Ansible is designed with your whole infrastructure in mind rather than individual servers. If you need dashboard monitoring, then Ansible Tower is for you.

Once installed on a master server, you create an inventory of machines or nodes for it to perform tasks on. You can then start to push configuration changes to those nodes. An Ansible playbook is a collection of tasks you want to be executed on a remote server, held in a configuration file. Playbooks can range from simple management and configuration of remote machines all the way to multifaceted deployments. Here are five tips to start getting the most out of what the tool can deliver.

  1. Passwordless keys (for SSH) are the way to go. This is probably something you should undertake from day one, and not just for Ansible. Public key authentication shares a key between hosts based on the SSH v2 standard; most default OSs create 2048-bit keys, though in certain situations this can be increased up to 4096-bit. No longer do you have to type in long, complex passwords for every login session—this more reliable and easier-to-maintain method makes your environment both more secure and easier for Ansible to work in.
  2. Use check mode to dry run most modules. If you’re not sure how a new playbook or update will perform, dry runs are for you. With configuration management and Ansible’s ability to provide you with desired state and your end goal, you can use dry run mode to preview what changes are going to be applied to the system in question. Simply add the --check command to the ansible-playbook command for a glance at what will happen.
  3. Use Ansible roles. This is where you break a playbook out into multiple files. The file structure consists of a grouping of files, tasks, and variables, which moves you toward modularization of your code and thus independent adaptation and upgrade, and allows for reuse of configuration steps, making changes and improvements to your Ansible configurations easier.
  4. Ansible Galaxy is where you should start any new project. Access to roles, playbooks, and modules from community and vendors—why reinvent the wheel? Galaxy is a free site for searching, rating, downloading, and even reviewing community-developed Ansible roles. This is a great way to get a helping hand with your automation projects.
  5. Use a third-party vault software. Ansible Vault is functional, but a single shared secret makes it hard to audit or control who has access to all the nodes in your environment. Look for something with a centrally managed repository of secrets you can audit and lock down in a security breach scenario. I suggest HashiCorp Vault as it can meet all these demands and more, but others are available.
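As a small illustration of tip 2, here's a hypothetical Python wrapper that builds an ansible-playbook dry-run command ("site.yml" and "inventory.ini" are placeholder file names) and only executes it if Ansible is actually installed, so it's safe to run anywhere:

```python
# Build the dry-run command from tip 2 and execute it only when Ansible is
# present on the machine.
import shutil
import subprocess

def check_mode_cmd(playbook, inventory):
    # --check previews changes without applying them; --diff shows what
    # each change would modify.
    return ["ansible-playbook", "-i", inventory, playbook, "--check", "--diff"]

cmd = check_mode_cmd("site.yml", "inventory.ini")
print(" ".join(cmd))

if shutil.which("ansible-playbook"):
    subprocess.run(cmd, check=False)
```

Running in check mode first, then rerunning without `--check` once the preview looks right, is a low-risk habit worth building into any deployment routine.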

Hopefully you now have a desire to either start using Ansible and reduce time wasted on rinse and repeat configuration tasks, or you’ve picked up a few tips to take your skills to the next level and continue your DevOps journey.


Welcome back! I hope y'all had a happy and healthy holiday break. I'm back in the saddle after hosting a wonderful Christmas dinner for 20 friends and family. I had some time off as well, which I used to work a bit on my blog as well as some Python and data science learning.

As usual, here's a bunch of links from the internet I hope you'll find useful. Enjoy!

Team that made gene-edited babies sentenced to prison, fined

I wasn't aware we had reached the point of altering babies' DNA, but here we are.

2019 Data Breach Hall of Shame: These were the biggest data breaches of the year

I expect a longer list from 2020.

Bing’s Top Search Results Contain an Alarming Amount of Disinformation

A bit long, but worth some time and a discussion. I never think about how search engines try to determine the veracity of the websites returned in a search.

Google and Amazon are now in the oil business

File this under "Do as I say, not as I do."

Seven Ways to Think Like a Programmer

An essay about data that warmed my heart. I think a lot of this applies to every role, especially for those of us inside IT.

The other side of Stack Overflow content moderation

Start this post by reading the summary, then take in some of the specific cases he downvoted. The short of it is this: humans are horrible at communicating through texts, no matter what the forum.

This Is How To Change Someone’s Mind: 6 Secrets From Research

If you want to have more success at work, read this post. I bet you can think of previous discussions at work and understand where things went wrong.

For New Year's Eve I made something special - 6 pounds of pork belly bites in a honey soy sauce. They did not last long. No idea what everyone else ate, though.



Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen about the state of security and insider threats for the federal government and what’s working to improve conditions. We’ve been doing these cyber surveys for years and I always find the results interesting.

Federal IT professionals feel threats posed by careless or malicious insiders and foreign governments are at an all-time high, yet network administrators and security managers feel like they’re in a better position to manage these threats.

Those are two of the key takeaways from a recent SolarWinds federal cybersecurity survey, which asked 200 federal government IT decision makers and influencers their impressions regarding the current security landscape.

The findings showed enterprising hackers are becoming increasingly focused on agencies’ primary assets: their people. On the bright side, agencies feel more confident to handle risk thanks to better security controls and government-mandated frameworks.

People Are the Biggest Targets

IT security threats posed by careless or untrained insiders and nation states have risen substantially over the past five years. Sixty-six percent of survey respondents said things have improved or are under control when it comes to malicious threats, but when asked about careless or accidental insiders, the number decreased to 58%.

Indeed, hackers have seen the value in targeting agencies’ employees. People can be careless and make mistakes—it’s human nature. Hackers are getting better at exploiting these vulnerabilities through simple tactics like phishing attacks and stealing or guessing passwords. The most vulnerable are those with access to the most sensitive data.

There are several strategies agencies should consider to even the playing field.

Firstly, ongoing training must be a top priority. All staff members should be hyper-aware of the realities their agencies are facing, including the potential for a breach and what they can do to stop it. Simply creating unique, hard-to-guess passwords or reporting suspicious emails might be enough to save the organization from a perilous data breach. Agency security policies must be updated and shared with the entire organization at least once a month, if not more often. Emails can help relay this information, but live meetings are much better at conveying urgency and importance.

Employing a policy of zero trust is also important. Agency workers aren’t bad people, but everyone makes mistakes. Data access must be limited to those who need it and security controls, such as access rights management, should be deployed to monitor and manage access.

Finally, agencies must implement automated monitoring solutions to help security managers understand what’s happening on their network at all times. They can detect when a person begins trying to access data they normally wouldn’t attempt to retrieve or don’t have authorization to view. Or perhaps when someone in China is using the login credentials of an agency employee based in Virginia. Threat monitoring and log and event management tools can flag these incidents, making them essential for every security manager’s toolbox.
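A toy version of that detection logic might look like the sketch below. The accounts and login events are fabricated; a real monitoring tool would use geolocation feeds and learned behavioral baselines rather than a hard-coded home-country table.

```python
# Fabricated home-country table and login events for illustration only.
home_country = {"jsmith": "US", "mlee": "US"}

events = [
    {"user": "jsmith", "country": "US"},
    {"user": "jsmith", "country": "CN"},  # credentials used from abroad
    {"user": "mlee",   "country": "US"},
]

# Flag any login from a country other than the one on file for the account.
alerts = [e for e in events if e["country"] != home_country.get(e["user"])]
for e in alerts:
    print(f"ALERT: {e['user']} logged in from {e['country']}")
```

The value of automating even simple rules like this is coverage: the tooling watches every login around the clock, which no security manager can do manually.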

Frameworks and Best Practices Being Embraced, and Working

Most survey respondents believe they’re making progress managing risk, thanks in part to government mandates. This is a sharp change from the previous year’s cybersecurity report, when more than half of the respondents indicated regulations and mandates posed a challenge. Clearly, agencies are starting to get used to—and benefit from—programs like the Risk Management Framework (RMF) and Cybersecurity Framework.

These frameworks help make security a fundamental component of government IT and provide a roadmap on how to do it right. With frameworks like the RMF, developing better security hygiene isn’t a matter of “should we do this?” but a matter of “here’s how we need to do this.” The frameworks and guidelines bring order to chaos by giving agencies the basic direction and necessities they need to protect themselves and, by extension, the country.

A New Cold War

It’s encouraging to see recent survey respondents appearing to be emboldened by their cybersecurity efforts. Armed with better tools, guidelines, and knowledge, they’re in a prime position to defend their agencies against those who would seek to infiltrate and do harm.

But it’s also clear this battle is only just beginning. As hackers get smarter and new technologies become available, it’s incumbent upon agency IT professionals to not rest on their laurels. We’re entering what some might consider a cyber cold war, with each side stocking up to one-up the other. To win this arms race, federal security managers must continue to be innovative, proactive, and smarter than their adversaries.

Find the full article on Federal News Network.



“Security? We don’t need no stinking security!”

I’ve actually heard a CTO utter words to this effect. If you subscribe to a similar mindset, here are five ways you too can stink at information security.

  • Train once and never test

Policy says you and your users need to be trained once a year, so once a year is good enough. Oh, and make sure you never test the users either—it’ll only confuse them.

  • Use the same password

It just makes life so much easier. Oh, and a good place to store your single password is in your email, or on Post-It notes stuck to your monitor.

  • Patching breaks things, so don’t patch

Troubleshooting outages is a pain. If you don’t patch and you don’t look at the device in the corner, then it won’t break.

  • The firewall will protect everything on the inside

We have the firewall! The bad guys stay out, so on the inside, we can let everyone get to everything.

  • Just say no and lock EVERYTHING down

If we say no to everything, and we restrict everything, then nothing bad will happen.

OK, now it’s out of my system—the above is obviously sarcasm.

But some of you will work in places that subscribe to one or more of the above. I’ve been there. But what can YOU do? Well, it’s 2020, and information security is everyone’s responsibility. One thing I commonly emphasize with our staff is that no cybersecurity tool can ever be 100% effective. To even approach 100% efficacy, everyone has to play a role as part of the human firewall. As IT professionals, our jobs aren’t just to put the nuts and bolts in place to keep the org safe. It’s also our job to educate our staff about the impact information security has on them.

So, let’s flip the above “tips” on their head and talk about what you can do to positively affect the cyber mindsets in your organization.

Train and Test Your Users Often

Use different training methods. Our head of marketing likes to use the phrase “six to eight to resonate.” You’re trying to keep the security mindset at the front of your staff’s consciousness. In addition to frequent CBT trainings, use security incidents as a learning mechanism. One of our most effective awareness campaigns was when we gamified a phishing campaign. The winner got something small like a pair of movie tickets. This voluntary “training” activity got a significant portion of our staff to actively respond. Don’t minimize the positive effect incentives can have on your users.

Lastly, speaking of incentives, make sure you run actual simulated phishing exercises. It’s a safe way to train your users. It’s also an easy way to test the effectiveness of your InfoSec training program and let users know how important data security is to the business.

Practice Good Password Hygiene

Security pros generally agree you should use unique, complex passwords or passphrases for every service you consume. This way, when (not if) an account you’re using is compromised, the account is only busted for a single service, rather than everywhere. If you reuse passwords across sites, you may be susceptible to credential-stuffing campaigns.

Once you get beyond a handful of sites, it’s impossible to expect your users to remember all their passwords. So, what do you do? The easiest and most effective thing to do is introduce a password management solution. Many solutions out there run as a SaaS offering. The best solutions will dramatically impact security, while simplifying operations for your users. It’s a win-win!

One final quick point before moving on: make sure someone in your org is signed up for notifications from Have I Been Pwned (HIBP). At the time of this writing, there are over 9 BILLION accounts on HIBP. This valuable free service can be an early warning sign if users in your org have been scooped up in data breaches. Additionally, SolarWinds Identity Monitor can notify you if your monitored domains or email addresses have been exposed in a data leak.
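For the curious, the related Pwned Passwords API uses a k-anonymity scheme: only the first five hex characters of the password's SHA-1 hash are ever sent to the service, and the suffix is matched locally against the candidate list it returns. A minimal Python sketch of the client-side hashing (no network call is made here):

```python
# Only the 5-character hash prefix ever leaves your machine; the suffix is
# compared locally against the candidate list the API returns.
import hashlib

def hibp_prefix_suffix(password):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)   # sent to https://api.pwnedpasswords.com/range/<prefix>
print(suffix)   # matched locally against the response
```

The design is worth appreciating: you can check a password against billions of breached credentials without ever disclosing the password, or even its full hash, to anyone.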

Patch Early and Often

I’m guessing I’m not alone in having worked at places afraid of applying security patches. Let’s just say if you’ve been around IT operations for a while, chances are you have battle scars from patching. Times change, and in my opinion, vendors have gotten much better at QAing their patches. Legacy issues aside, I’ll give you three reasons to patch frequently: Petya, NotPetya, and WannaCry. These three instances of ransomware caused some of the largest computer disruptions in recent memory. They were also completely preventable, as Microsoft released a patch plugging the EternalBlue vulnerability months before attacks were seen in the wild. From a business standpoint, patching makes good fiscal sense. The operational cost related to a virus can be extreme—just ask Maersk, the company projected to lose $300 million from NotPetya. This doesn’t even account for the reputational risk a company can suffer from a data breach, which in many cases can be just as detrimental to the long-term vibrancy of a business.

Firewall Everywhere

If you’re breached, you want to limit the bad actors’ ability to pivot their attack from a web server to a system with financials. This technique is demonstrated with a DMZ approach. However, a traditional DMZ may not be enough, which has led to the rise of micro-segmentation over the last few years. A fun added benefit of a micro-segmentation approach: while you’re limiting the attack surface, you can also handle events programmatically, like having the firewall automatically isolate a VM when a piece of malware has been observed on it.
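The “automatically isolate a VM” idea can be sketched as an event handler that turns a malware alert into deny-all rules for that one machine. The alert and rule shapes here are invented for illustration; a real deployment would push these to the micro-segmentation platform’s API.

```python
def isolation_rules(alert):
    """Translate a malware alert into deny-all micro-segmentation rules for one VM."""
    vm = alert["vm"]
    return [
        {"vm": vm, "direction": "inbound",  "action": "deny", "reason": alert["signature"]},
        {"vm": vm, "direction": "outbound", "action": "deny", "reason": alert["signature"]},
    ]
```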

Work With the Business to Understand the “Right” Level of Security

If you’ve read my other blog posts, you know I believe IT organizations should partner with business units. But more than a couple of us have seen InfoSec folks who just want to lock everything down to the point where running the business can be difficult. When this sort of combative approach is taken, distrust between the units can be sown, and shadow IT is one of the possible results.

Instead, work with the BUs to understand their needs and craft your InfoSec posture based on that. After all, an R&D team or a Dev org needs different levels of security than credit card processing, which must follow regulatory requirements. This for me was one of the most resonant messages to come out of The Phoenix Project: if you craft the security solution to fit the requirements, the business can better meet their needs, Security can still have an appropriate level of rigor, and better relationships should ensue. Win, win, win.

Security is a balancing act. We all have a role to play in cybersecurity. If you can apply these five simple information security hygiene tips, then you’re on the path towards having a secure organization, and I think we can all agree, that’s something to be thankful for.


Two roads diverged in a yellow wood,

And sorry I could not travel both

-Robert Frost

At this point in our “Battle of the Clouds” journey, we’ve seen what the landscape of the various clouds looks like, cut through some of the fog around cloud, and glimpsed what failing to plan can do to your cloud migration. So, where does that leave us? Now it’s time to lay the groundwork for the data center’s future. Beginning this planning and assessment phase can seem daunting, so in this post, we’ll lay out some basic guidelines and questions to help build your roadmap.

First off, let’s start with what’s already in the business’s data center.

The Current State of Applications and Infrastructure

When looking forward, you must always look at where you’ve been. By understanding previous decisions, you can gain an understanding of the business’s thinking, see where mistakes may have been made, and work to correct them in the newest iteration of the data center. Inventory everything in the data center, both hardware and software. You’d be surprised what may play a critical role in preventing a migration to new hardware or a cloud. Look at the applications in use not only by the IT department, but also the business, as their implementation will be key to a successful migration.

  • How much time is left on support for the current hardware platforms?
    • This helps determine how much time is available before the execution of the plan has to be done
  • What vendors are currently in use in the data center?
    • Look at storage, virtualization, compute, networking, and security
    • Many existing partners already have components to ease the migration to public cloud
  • What applications are in use by IT?
    • Automation tools, monitoring tools, config management tools, ticketing systems, etc.
  • What applications are in use by the business?
    • Databases, customer relationship management (CRM), business intelligence (BI), enterprise resource planning (ERP), call center software, and so on
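One small, hypothetical example of putting that inventory to work: computing the planning runway from hardware support end dates. The record shape is invented for illustration.

```python
from datetime import date

def migration_runway(inventory, today):
    """Days until the earliest hardware support contract lapses: the planning deadline."""
    return min((item["support_ends"] - today).days for item in inventory)
```

The item with the soonest support expiry effectively sets the deadline for executing the plan.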

What Are the Future-State Goals of the Business?

As much as most of us want to hide in our data centers and play with the nerd knobs all day, we’re still part of a business. Realistically, our end goal is to deliver consistent and reliable operations to keep the business successful. Without a successful business, it doesn’t matter how cool the latest technology you installed is, its capabilities, or how many IOPS it can process—you won’t have a job. Planning out the future of the data center has to line up with the future of the company. It’s a harsh reality we live in. But it doesn’t mean you’re stuck in your decision making. Use this opportunity to make the best choices on platforms and services based on the collective vision of the company.

  • Does the company run on a CapEx or OpEx operating model, or a mixture?
    • This helps guide decisions around applications and services
  • What regulations and compliances need to be considered?
    • Regulations such as HIPAA
  • Is the company attempting to “get out of the data center business”?
    • Why does the C-suite think this, and should it be the case?
  • Is there heavy demand for changes in the operations of IT and its interaction with end users?
    • This could lead to more self-service interactions for end users and more automation by admins
  • How fast does the company need to react and evolve to changes in the environment?
    • DevOps and CI/CD can come into effect
    • Will applications need to be spun up and down quickly?
  • Of the applications inventoried in the current state planning, how many could be moved to a SaaS product?
    • Whether moving to a public cloud or simply staying put, the ability to reduce the total application footprint can affect costs and sizing.
    • This can also call back to the OpEx or CapEx question from earlier

Using What You’ve Collected

All the information is collected and it’s time to start building the blueprint, right? Well, not quite. One final step in the planning journey should be a cloud readiness assessment. Many value-added resellers, managed services providers, and public cloud providers can help the business in this step. This step will collect deep technical data about the data center and applications, map all dependencies, and provide an outline of what it’d look like to move them to a public cloud. This information is crucial, as it lays out what can easily be moved and which applications would need to be refactored or completely rebuilt. The data can be applied to a business impact analysis as well, which will give guidance on what these changes can do to the business’s ability to execute.
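A toy sketch of the dependency-mapping output: given an (invented) app inventory with dependencies, find the apps that could lift-and-shift as-is. An app only moves easily if everything it depends on can move too.

```python
def easy_movers(apps):
    """Apps whose every dependency is itself cloud-ready can lift-and-shift;
    the rest need refactoring or rebuilding."""
    ready = {name for name, a in apps.items() if a["cloud_ready"]}
    return sorted(name for name in ready
                  if all(dep in ready for dep in apps[name]["deps"]))
```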

This seems like a lot of work. A lot of planning and effort into deciding to go to the public cloud or to stay put. To stick to “what works.” Honestly, many companies look at the work and decide to stay on-premises. Some choose to forgo the planning and have their cloud projects fail. I can’t tell you what to do in your business’s setting—you have to make the choices based on your needs. All I can do is offer up advice and hope it helps.


Social media has become a mainstream part of our lives. Day in and day out, most of us are using social media to do micro-blogging, interact with family, share photos, capture moments, and have fun. Over the years, social media has changed how we interact with others, how we use our language, and how we see the world. Since social media is so prevalent today, it’s interesting to see how artificial intelligence (AI) is changing social media. When was the last time you used social media for fun? What about for business? School? There are so many applications for social media, and AI is changing the way we use it and how we digest the tons of data out there.

Social Media Marketing

I don’t think the marketing world has ever been more excited about an advertising vehicle than it is about social media marketing. Did you use social media today? Chances are this article triggered you to pick up your phone and check at least one of your feeds. When you scroll through your feeds, how many ads do you get bombarded with? The way I see it, there are two methods of social media marketing: overt and covert. Some ads are overt and obviously placed in your feed to get your attention. AI has allowed those ads to be placed in user feeds based on user data and browsing habits. AI crunches the data and pulls in ads relevant to the current user. Covert ads are a little sneakier, hence the name. Covert social media marketing is slipped into your feeds via paid influencers or YouTube/Instagram/Twitter mega users with large followings. Again, AI analyzes the data and posts on the internet to bring you the most relevant images and user posts.
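As a rough illustration of relevance-based ad placement (a drastic simplification of the models real platforms use), here’s a sketch that scores ads by how much their tags overlap a user’s observed interests. All names and data shapes are invented for the example.

```python
def relevance(user_interests, ad_tags):
    """Jaccard overlap between a user's observed interests and an ad's tags:
    a toy stand-in for the models that rank ads in a feed."""
    u, a = set(user_interests), set(ad_tags)
    return len(u & a) / len(u | a) if u | a else 0.0

def rank_ads(user_interests, ads):
    """Order candidate ads so the most relevant lands at the top of the feed."""
    return sorted(ads, key=lambda ad: relevance(user_interests, ad["tags"]), reverse=True)
```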

Virtual Assistants

Siri, Alexa, Cortana, Bixby… whatever other names are out there. You know what I’m talking about: the virtual assistant living in your phone, car, or smart speaker, always listening and willing to pull up whatever info you need. There’s no need to tweet while driving or search for the highest-rated restaurant on Yelp while biking; let Siri do it for you. When you want to use Twitter, ask Alexa to tweet and then compose it all with your voice. Social media applications tied into virtual assistants make interacting with your followers much easier. AI allows these virtual assistants to tweet, type, and text via dictation, easily and accurately.

Facial Recognition

Facebook is heavily invested in AI, as evidenced by their facial recognition technology tagging users in a picture automatically via software driven by AI. You can also see this technology in place at Instagram and other photo-driven social media offerings. Using facial recognition makes it easier for any user who wants to tag family or friends with Facebook accounts. Is this important? I don’t think so, but it’s easy to see how AI is shaping the way we interact with social media.

Catching the Latest Trends

AI can bring the latest trends to your social media feed daily, even hourly if you want it to. Twitter is a prime example of how AI is used to crunch large amounts of data and track trends in topics across the world. AI can analyze traffic across the web and present the end user with breaking news, or topics suddenly generating a large spike in internet traffic. In some cases, this can help social media users get the latest news, especially as it pertains to personal safety and things to avoid. In other cases, it simply leads us to more social media usage, as witnessed by the recent meteoric trend when a bunch of celebrities started using FaceApp to see how their older selves might look.
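A crude sketch of spike-based trend detection, standing in for the large-scale analysis platforms like Twitter perform: flag a topic when the latest hour’s mention count sits several standard deviations above its recent baseline. The threshold and data are arbitrary assumptions.

```python
from statistics import mean, stdev

def is_trending(hourly_mentions, threshold=3.0):
    """Flag a topic when the latest hour's mentions sit `threshold` standard
    deviations above its recent baseline."""
    *history, latest = hourly_mentions
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (latest - mu) / sigma > threshold
```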

What About the Future?

It seems like what we have today is never enough. Every time a new iPhone comes out, you can read hundreds of articles online the next day speculating on the next iPhone design and features. Social media seems to be along the same lines, especially since we use it daily. I believe AI will shape the future of our social media usage by better aligning our recommendations and advertisements. Ads will become much better targeted to a specific user based on AI analysis and machine learning. Apps like LinkedIn, Pinterest, and others will be much more usable thanks to developers using AI to deliver social media content to the user based on their data and usage patterns.


December Writing Challenge Week 5 Recap

And with these last four words, the 2019 writing challenge comes to a close. I know I’ve said it a couple of times, but even so, I cannot express enough my gratitude and appreciation for everyone who took part in the challenge this year—from the folks who run the THWACK® community, to our design team, to the lead writers, to the editing team, and, of course, to everyone who took time out of their busy day (and nights, in some cases) to thoughtfully comment and contribute.

Because we work in IT, and therefore have an ongoing love affair with data, here are some numbers for you:

The 2019 December Writing Challenge

  • 31 days, 31 words
  • 29 authors
    • 13 MVPs

It’s been an incredible way to mentally pivot from the previous year, and set ourselves up for success, health, and joy in 2020.

Thank you again.

  • - Leon

Day 28. Software Defined Network (SDN)

THWACK MVP Mike Ashton-Moore returned to the ELI5 roots, by crafting his explanation using the xkcd Simple Writer online tool. Despite limiting himself to the first 1,000 (or “ten-hundred,” to use the xkcd term) words, Mike created a simple and compelling explanation.

George Sutherland Dec 29, 2019 8:49 PM

SDN is the network version of the post office. You put your package in the system and let the post office figure out the best and fastest mode of delivery.

Ravi Khanchandani  Dec 30, 2019 12:58 AM

Life comes a full circle with SDN, from centralized routing to distributed routing and back to centralized routing.

The SDN controller is like the Traffic Police HQ that sends out instructions to the crossings or edge devices to control the traffic. How much traffic gets diverted to what paths, what kind of traffic goes which path, who gets priority over the others. Ambulances accorded highest priority, trucks get diverted to the wider paths, car pools & public transport get dedicated lanes, other cars get a best effort path

Juan Bourn Dec 30, 2019 10:37 AM

I gotta admit, I had to read this twice. Not being very familiar with SDN prior to this, I didn’t understand the special boxes and bypassing them lol. I couldn’t make the relationship tangible. But after a second read through, it made sense. Good job on making it easy to understand. You can’t do anything about your audience, so no knock for my inability to understand the first time around!

Day 29. Anomaly detection

As product marketing manager for our security portfolio, Kathleen Walker is extremely well versed in the idea of anomaly detection. But her explanation today puts it in terms even non-InfoSec folks can understand.

Vinay BY  Dec 30, 2019 5:06 AM

As the standard definition states -> “anomaly detection is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data”

Something unusual from the data pattern that you see on a regular basis, this as well helps you to dig down further to understand what happened exactly and why was it so. Anomaly detection can be performed in several areas, basically performed before aggregating the data into your system or application.

Thomas Iannelli
  Dec 30, 2019 5:27 AM

We don’t have kids living with us, but we do the same thing for our dog, Alba, and she for us. We watch her to make sure she eats, drinks, and performs her biological functions. When one of those things is off we either change the diet or take her to the vet. She watches us. She even got used to my wife, who works at home, going into her office at certain times, taking a break at certain times to put her feet up, or watch TV during lunch. So much so that Alba will put herself in the rooms of the house before my wife. She does it just so casually. But when my wife doesn’t show up, she frantically goes thru the house looking for her. Why isn’t she where she is supposed to be. Alba does the same thing when I go to work. It is fine that I am leaving during the week. She will not fuss and sometimes will greet me at the door. But if I get my keys on the weekend or in the evening she is all over me wanting to go for the ride. There is a trip happening out of the ordinary. When we have house guests, as we did over the holiday, she gets very excited when they arrive, and even the next morning will try to go to the guest bedroom and check to make sure they are still here. But after a day or two it is just the new normal. Nothing to get too excited about. The anomaly has become the norm.

I guess the trick is to detect the anomaly and assess quickly if it is outlier, if it is going to be the new normal, or if it is a bad thing that needs to be corrected.

Paul Guido  Dec 30, 2019 9:32 AM (in response to Charles Hunt)

cahunt As soon as I saw the subject, I thought of this song. “One of these things is not like the other” is one of my primary trouble-shooting methods to this very day.

Once I used the phrase that “The systems were nominal” and people did not understand the way I used “nominal.” I was using it in the same way that NASA uses it in space systems that are running within specifications.

In my brain, an anomaly is outside the tolerance of nominal.

Day 30. AIOps

Melanie Achard returns for one more simple explanation, this time of a term heavily obscured by buzzwords, vendor-speak, and confusion.

Vinay BY  Dec 30, 2019 10:21 AM

AIOps or artificial intelligence for IT operations includes the below attributes in one or the other way, to me they are all interlinked:

Proactive Monitoring

Data pattern detection, Anomaly detection, self-understanding and Machine Learning which improvises the entire automation flow

Events, Logs, Ticket Dump

Bots & Automation

Reduction of Human effort, cost reduction, time reduction and service availability

Thomas Iannelli  Dec 30, 2019 5:41 AM

Then the computer is watching and based on either machine learning or anthropogenic algorithms process the data for anomaly detection and then takes some action. In the form of an automated response to remediate the situation or to alert a human that something here needs you to focus attention on it. Am I understanding correctly?

Jake Muszynski Dec 30, 2019 12:13 PM

Computers don't lose focus. My biggest issue with people reviewing the hordes of data that various monitors create is that they get distracted; they only focus on the latest thing. AI helps by looking at all the things, then surfacing what might need attention. In a busy place, it can really make a difference.

Day 31. Ransomware

On our final day of the challenge, THWACK MVP Jeremy Mayfield spins a story bringing into sharp clarity both the meaning and the risk of the word of the day.

Faz f Dec 31, 2019 5:57 AM

This is when your older Sibling/friend has your toy and will not give it back unless you do something for them. It's always good to keep your toys safe.

Michael Perkins Dec 31, 2019 11:36 AM

Ransomware lets a crook lock you out of your own stuff, then make you pay whatever the crook wants for the key. This is why you keep copies of your stuff. It takes away the crook's leverage and lets you go "Phbbbbbbbbbbt!" in the crook's face.

Brian Jarchow Dec 31, 2019 12:38 PM

Ransomware reminds me of the elementary school bully who is kind enough to make sure you won't get beaten up if you give him all of your lunch money.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen where he provides tips on leveraging automation to improve your cybersecurity, including deciding what to automate and what tools to deploy to help.

Automation can reduce the need to perform mundane tasks, improve efficiency, and create a more agile response to threats. For example, administrators can use artificial intelligence and machine learning to ascertain the severity of potential threats and remediate them through the appropriate automated responses. They can also automate scripts, so they don’t have to repeat the same configuration process every time a new device is added to their networks.

But while automation can save enormous amounts of time, increase productivity, and bolster security, it’s not necessarily appropriate for every task, nor can it operate unchecked. Here are four strategies for effectively automating network security within government agencies.

1. Earmark What Should—And Shouldn’t—Be Automated.

Setting up automation can take time, so it may not be worth the effort to automate smaller jobs requiring only a handful of resources or a small amount of time to manage. IT staff should also conduct application testing themselves and must always have the final say on security policies.

Security itself, however, is ripe for automation. With the number of global cyberattacks rising, the challenge has become too vast and complex for manual threat management. Administrators need systems capable of continually policing their networks, automatically updating threat intelligence, and monitoring and responding to potential threats.

2. Identify the Right Tools.

Once the strategy is in place, it’s time to consider which tools to deploy. There are several security automation tools available, and they all have different feature sets. Begin by researching vendors with a track record of government certifications, such as Common Criteria, or that are compliant with Defense Information Systems Agency requirements.

Continuous network monitoring for potential intrusions and suspicious activity is a necessity. Being able to automatically monitor log files and analyze them against multiple sources of threat intelligence is critical to discovering and, if necessary, denying access to questionable network traffic. The system should also be able to automatically implement predetermined security policies and remediate threats.
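A minimal sketch of that log-versus-threat-intelligence matching; the log format, regex, and blocklist are invented for illustration, and a real system would pull the blocklist from live threat-intel feeds.

```python
import re

BLOCKLIST = {"203.0.113.7", "198.51.100.23"}  # would come from threat-intel feeds

def review_log(lines):
    """Match source IPs in log lines against threat intelligence and emit deny actions."""
    actions = []
    for line in lines:
        m = re.search(r"src=(\d+\.\d+\.\d+\.\d+)", line)
        if m and m.group(1) in BLOCKLIST:
            actions.append({"action": "deny", "ip": m.group(1)})
    return actions
```

Each emitted action would then feed the automated policy engine described above, with a human auditing the results.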

3. Augment Security Intelligence.

Artificial intelligence and machine learning should also be considered indispensable, especially as IT managers struggle to keep up with the changing threat landscape. Through machine learning, security systems can absorb and analyze data retrieved from past intrusions to automatically and dynamically implement appropriate responses to the latest threats, helping keep administrators one step ahead of hackers.

4. Remember Automation Isn’t Automatic.

The old saying “trust but verify” applies to computers as much as people. Despite the move toward automation, people are and will always be an important part of the process.

Network administrators must conduct the appropriate due diligence and continually audit, monitor and maintain their automated tasks to ensure they’re performing as expected. Updates and patches should be applied as they become available, for example.

Automating an agency’s security measures can be a truly freeing experience for time- and resource-challenged IT managers. They’ll no longer have to spend time tracking down false red flags, rewriting scripts, or manually attempting to remediate every potential threat. Meanwhile, they’ll be able to rest easy knowing the automated system has their backs and their agencies’ security postures have been improved.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.



The year is winding down, and—while it’s not something I do every year—I thought I’d take a moment to look ahead and make a few educated guesses about what the coming months have in store for us nerds, geeks, techies, and web-heads (OK, the last category is for people from the Spider-verse, but I’m still keeping them in the mix.)

As with any forward-looking set of statements, decisions made based on this information may range from “wow, lucky break” to “are you out of your damn mind?” And, while I could make many predictions about the national (regardless of which nation you live in) and/or global landscape as it relates to economy, politics, entertainment, cuisine, alpaca farming, etc., I’m going to keep my predictions to tech.

Prediction 1: The Ever-Falling Price of Compute

This one is a no-brainer, honestly. The cost of compute workloads is going to drop in 2020. This is due to the increased efficiencies of hardware and the rising demand for compute resources—especially in the cloud.

I can also make this prediction because it’s basically been true for the last 30 years.

With that said, it’s worth noting that, according to some sources, the following milestones/benchmarks will be reached:

  • (Moore’s Law) Calculations per second, per $1,000, will reach 10^13 (equivalent to one mouse brain)
  • Average number of connected devices, per person, is 6.5
  • Global number of internet-connected devices reaches 50,050,000,000
  • Predicted global mobile web traffic equals 24 exabytes
  • Global internet traffic grows to 188 exabytes


  • Share of global car sales taken by autonomous vehicles will be about 5%
  • World sales of electric vehicles will reach 6,600,000

In addition, in 2017, Elon Musk posited it would take 100 square miles of solar panels total to provide all the electricity used in the U.S. on an average day. In 2018, another estimate took a swipe at it and figured the number slightly higher—21,500 sq. miles. But that’s still 0.5% of the total available land in the U.S. and amounts to (if you put it all in one place, which you would not) a single square of solar panels 145 miles on each side.

What I’m getting at is that the impending climate crisis and the improving state of large-storage batteries and renewable energy sources may push the use of environmentally friendly transportation options even further than expected. If nothing else, these data points will provide background to continue to educate everyone across the globe about ways to make economically AND ecologically healthy energy choices.

*Ra’s Al Ghul to Bruce Wayne, “Batman Begins”

Prediction 4: Say “Blockchain” One. More. Time.

Here’s a non-prediction prediction: People (mostly vendors and dudes desperate to impress the laydeez) are going to keep throwing buzzwords around, making life miserable for the rest of us.

HOWEVER, eventually enough of us diligent IT folks nail down the definitions and the hype cycle quiets down. In 2020, I think at least a few buzzwords will get a little less buzz-y.

One of those is “AI” (artificial intelligence). IT professionals and even business leaders are finally coming to grips with what this ISN’T (androids like Data, moderately complex algorithms, or low-paid offshore workers doing a lot of work without credit) and will more clearly be able to understand when true AI is both relevant and necessary.

Closely related, machine learning (the “ML” in the near-ubiquitous “AI/ML” buzzword combo) will also reach a state of clarity, and businesses wanting to leverage sophisticated automation and behavioral responses in their products will avoid being caught up (and swindled) by vendors hawking cybernetic snake oil.

Finally, the term 5G is going to get nailed down and stop being seen as “better because it’s got one more G than what I have today.” This is more out of necessity than anything else, because carriers are building out their 5G infrastructure and selling it, and the best cure for buzzword hype is vendor contracts clearly limiting what they’re legally obligated to provide.

Prediction 5: Data As A Service

While this effort was well under way in 2019 from the major cloud vendors, I believe 2020 is when businesses will, en masse, take up the challenge of building both data collection and data use features into their systems. From the early identification of trends and fads; to flagging public health patterns; to data-based supply chain decisions—the name of the game is to use massive data sets to analyze complex financial behaviors and allow businesses to react more accurately and effectively.

Again, this isn’t so much the invention of something new as it is the adoption of a capability providers like AWS and Azure have made available in various forms since 2018 and putting it to actual use.

Prediction N: We’re So Screwed

Security? Privacy? Protection of personal information? Everything I described above—plus the countless other predictions which will come out to be true in the coming year—is going to come at the cost of your information. Not only that, but the primary motivator in each of those innovations and trends is profit, not privacy. Expect a healthy helping of hacks, breaches, and data dumps in 2020.

Just like last year.


One of the things I like most about the writing challenge is we’ve set it at a time when many of us are either “off” (because how many of us in tech are ever REALLY “off”) or at least find ourselves with a few extra compute cycles to devote to something fun. This week, more than any so far, has shown this to be true.

Despite a conspicuous absence of references to brightly colored interlocking plastic blocks, our ELI5 imaginations ran wild, from tin can telephones to poetry (with and without illustration) to libraries.

I’m thrilled with how the challenge has gone so far, and what other examples are yet to come as we finish strong next week.

21. Routing

Kevin Sparenberg—former SolarWinds customer, master of SWUG ceremonies, semi-official SolarWinds DM, and owner of many titles both official and fictitious—takes the idea of routing back to its most fundamental and builds it back up from there.

Mike Ashton-Moore  Dec 22, 2019 10:43 AM

I love this challenge—all these definitions that I can now use when a non-geek asks me what I do.

For adults (so, not five-year-olds), I always revert to sending a letter through the post office, which seems to cover it.

Jeremy Mayfield  Dec 23, 2019 9:21 AM

Holly Baxley Dec 23, 2019 10:37 AM (in response to Kevin M. Sparenberg)

I use that same analogy when I explain to my unfortunate Sales Agents who work in neighborhoods with a shared DSL line. I always get calls around the holidays that the internet in their model home has suddenly started crawling. It’s too hard for me to explain shared DSL lines: that we have no control over what’s put under our feet when we build houses, and that ISPs with older cable lines will “store” up a certain amount of data per neighborhood, and depending on how heavily it’s used during the day, it can make all the difference with those speeds “UP TO 75 Mbps.”

If I tried to explain how the old neighborhood DSLs route and “borrow” data when it’s not being used by someone else, their heads would explode.

So, I use our highway as an example.

“You know how the internet’s called an Information Highway? Well, in your neighborhood it works a lot like that. During the day, your speeds are okay because most people are at work and school. They’re not on your ‘information highway.’ But when the holidays hit, you got kids at home streaming and gaming and suddenly your own internet’s gonna drop because now many people are on your ‘highway.’ Just like when you get on the highway to go home—if you don’t have many people on the road, you can go the 60–70 mph that you’re allowed on the posted signs. But if it’s the rush hour—and cars are jammed for miles—it doesn’t matter if the posted signs say ‘60 mph’—you’re gonna go the same crawling 30 mph that everyone else is, because it’s jammed.

Right now, you got a heavy ‘rush hour’ on your DSL line because there’s a lot of people in your neighborhood on it.”

What I wouldn’t give to have us all on fiber.

But such is the life of a home builder.

  1. ... you’re my only friend.

I also think about it as getting to work. I know the preferred path, but due to insane drivers outside of my control, I’m sometimes forced to take alternate paths to get to the same place. If there’s an accident on the main road I take, US 301, then I might have to take Interstate 75, which is often flowing smoothly until it gets backed up, and then I might need to take the turnpike. Luckily for me, there’s almost no time difference in getting from home to work and vice versa, but at the end of the day, I’m only able to measure the difference in distance traveled. It’s more miles to use I-75 and/or the turnpike. So, my routes are within minutes of each other, but the distance traveled is much greater when I don’t get to use my preferred path.
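The commute analogy maps neatly onto metric-based path selection: a router picks the usable path with the lowest current cost, not the one that is shortest on paper. A toy sketch, with the path records invented for the example:

```python
def best_path(paths):
    """Pick the usable path with the lowest current cost, like detouring around
    an accident: the preferred route isn't always the available one."""
    usable = [p for p in paths if not p["down"]]
    return min(usable, key=lambda p: p["minutes"])["name"]
```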

22. Ping

When a few folks here at SolarWinds began talking about “NetFlowetry”—mostly as a silly idea—we had no idea how it would take off. THWACK MVP Thomas Iannelli’s entry shows how much the idea has caught on, and how well it can be used to make a challenging concept seem accessible.

Shmi Ka  Dec 23, 2019 6:30 AM

This is so wonderful! This is so great for non-experts in this subject. Your poem is full of visual words for a visual learner like me. Thank you!

Rick Schroeder  Dec 26, 2019 12:52 PM

We rely on ping for a lot, but we as Network Analysts understand much about pings that many other folks may not. For example, a switch or router may be humming along, working perfectly, forwarding and routing packets for users without a single issue. But pinging that switch or router may not be the best way to discover latency between that switch and any other device. This is because ICMP isn’t as important to forward or respond to as TCP traffic.

A switch or router “knows” its primary job is to forward data; replying to pings as fast as possible just isn’t as important as moving TCP packets between users. So, a perfectly good network and set of hardware may serve users quite well, but might simultaneously show varying amounts of latency. That’s because we may be monitoring a switch that’s busy doing other things; when it gets a free microsecond, it might reply to our pings. Or it might not. And users aren’t experiencing slowness or outages when the switch starts showing higher latency than it did when there was very little traffic going through it.

It’s important to not place excessive reliance on pings “to” routers or switches for this very reason.

However, you might just find pings more valuable if you ping from endpoint to endpoint instead of from a monitoring station to switches or routers. The switch or router will forward the ICMP traffic nicely, and may do so much better than it will REPLY to the pings.

So, ping from a workstation to another workstation, or to a server, or server to server, instead of from a workstation to a router or switch that might have better things to do with its processing resources than reply to your ping quickly.
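Rick’s endpoint-to-endpoint advice can be sketched in a couple of lines of PowerShell. The hostname below is a placeholder, and note the property name varies by version: Windows PowerShell’s Test-Connection returns a ResponseTime property, while newer PowerShell releases call it Latency.

```powershell
# Ping endpoint to endpoint, not monitoring station to switch.
# 'server02' is a hypothetical hostname; substitute your own endpoint.
$results = Test-Connection -ComputerName 'server02' -Count 4
$avg = ($results | Measure-Object -Property ResponseTime -Average).Average
Write-Output "Average round-trip to server02: $avg ms"
```

Because the switches along the path merely forward this ICMP traffic rather than answer it, the number you get reflects what users actually experience.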

Greg Palgrave Dec 22, 2019 9:56 PM

Give me a ping, Vasili. One ping only, please.

23. IOPS

When explaining the speed of reads and writes, most people’s minds wouldn’t think about libraries. But THWACK MVP Jake Muszynski isn’t like most people, and his example was brilliantly, elegantly simple.

Tregg Hartley Dec 23, 2019 10:33 AM

Reads and writes per second

Is a metric measured here?

Where is the system bottleneck

Of our data we hold so dear?

Vinay BY  Dec 23, 2019 11:26 AM

IOPS—Read and write without any latency, most of us would want the data on our screen in split seconds and IOPS does contribute to it, we would always love to keep this as healthy as possible, with data pouring in we need to keep these things at scale -> IOPS, data storage, data retrieval and throughput.

George Sutherland Dec 23, 2019 12:14 PM

Well said sir... I love the book analogy.

It’s also like a puzzle... except you get the same picture but the number of pieces in the box changes... sometimes 50, others 100, others 500, and some even 1,000 pieces. Same view, just more to consider.

Or when I mentioned your post to an accountant friend of mine.... debits=credits!!!!

24. Virtual Private Network (VPN)

THWACK MVP Matthew Reingold finds what is perhaps the most amazing, simplest, and most accurate ELI5 explanation for virtual private networks I’ve ever seen. You can bet I will be adding it to my mental toolbox.

Beth Slovick Dec 24, 2019 4:16 AM

We use VPNs for everything from connecting to the office to protecting our torrent downloads from nosy ISPs. Everyone uses a VPN these days to encrypt and protect their information from prying eyes.

Kelsey Wimmer Dec 24, 2019 10:59 AM

You could also describe it as using a water hose through a pool. You get to go through the pool, but the hose hides your data and what comes out of the pool is only what has gone through your hose.

Tregg Hartley Dec 24, 2019 11:17 AM

Open a connection between me and you

Encrypt the data before it goes through,

Then the only people who can see

The flowing data is you and me.

25. Telemetry

The word telemetry is still obscured by a healthy dose of “hand wavium” from companies and individuals who don’t understand it but want to sound impressive. – Josh Biggley, who has devoted a good portion of his career both to building systems that gather and present telemetry data and to clarifying what the word means.

Vinay BY  Dec 26, 2019 9:13 AM

To me, telemetry is reaching a point or milestone that a normal, generic process or procedure can’t: be it collecting data, monitoring, issuing instructions, or any other possible thing.

Juan Bourn Dec 26, 2019 11:05 AM

If telemetry can tell a pit crew in NASCAR exactly how the race car is behaving, it can do the same (and possibly more) for us. The idea is, as mentioned by the author, to remove the noise. What do we care about? What matters? What is measurable vs. what is observable? Finally, how do we put that into a dashboard that we can use to have an overview of everything at once? That’s where telemetry really is useful, the combined overview of all our metrics.

Brian Jarchow Dec 26, 2019 4:44 PM

I’ve known people who worked on Boeing’s Delta program and the SpaceX Falcon 9 program. In rocketry, a lot of telemetry data is the difference between “it exploded” and “here’s what went wrong.”

26. Key Performance Indicator (KPI)

If Senior UX researcher Rashmi Kakde ever thought about a second career, I’d suggest writing and illustrating tech books for kids. Her poetic story about KPI is something I plan to print and use often.

Jake Muszynski  Dec 26, 2019 10:24 AM

I have started working with KPIs that I track for the Orion® Platform. As I delegate work to others, or if (when) I get distracted, I need an easy way to verify the Orion Platform is doing what I expect it to. I have overall system health from App monitors and the “my Orion deployment” page, but what about all those things that are more like housekeeping? Things like custom properties. Unknown devices. Nodes missing polls. I build out dashboards and reports to let me know how the processes I have in place (both automated and human) are getting things done. I pull them into a PowerShell monitor from SAM via SWQL queries.

Did I have a spike in unmanaged devices? Do I need to find out why?

Do all my Windows servers have at least one disk?
Are there disks that need to be removed?

Not all of them are important, at least not right now. But once I gather stats on what we need to clean up to be current, I choose a few significant metrics to improve. Those are my KPIs. I look at the numbers for a quarter and try to improve the process and the automation to make sure stuff doesn’t fall through the cracks. And having stats over time means I can see if things change and need my attention. If I make a few things better and other stuff suffers, I change my KPIs.

Mike Ashton-Moore  Dec 26, 2019 12:18 PM

I love, love, love this one, especially the pictures

For me the most important part of KPIs is to try to refer to them by their full name rather than the TLA.




I’ve seen several “service desk” systems that try to label ticket close rates as a KPI.

Where something is measured because it was easy to measure, not because it indicates how well the service desk is being run.

That isn’t a Key Performance Indicator any more than MPG is a KPI for how comfortable a car is.

It’s just an interesting statistic, not a KPI.

We need to remember what Performance our Indicator is Key for highlighting to us and why it is important enough to make it “Key.”

Michael Perkins Dec 26, 2019 2:55 PM

The trick with KPIs is figuring out what is actually “key” and observable to system performance. Of course, one must begin by asking what it means that the system is performing well.

I was laid off years ago because someone “upstairs” decided to change what was key without telling anyone. I was laid off for handling fewer tickets than my colleagues. For months, if not a couple of years, I had been an unofficial escalation point—working high-priority tickets and customers. That took—with explicit approval from managers with whom I shared space—more time than ordinary tickets, so I handled fewer overall. I also would help colleagues if they had questions.

Well above those folks, it was decided that my group would have one KPI—number of tickets processed. On Friday going into Labor Day Weekend that year, I was working with a customer, who thanked me profusely, when I heard my manager (two levels above me), getting rather upset. I found out later that was when higher-ups told him I was getting laid off. I found out about 20 – 30 minutes later.

So, was processing tickets quickly the KPI? Should it have been combined with, say, customer satisfaction, perhaps measured via survey? What about some sort of metric in which the severity or difficulty of the tickets was taken into account? What was really key to the support desk’s performance?

27. Root Cause Analysis

Principal UX researcher Kellie Mecham is trying to inspire an entire new generation of UX/UI folks with her explanation, pointing out that the ability to ask questions is a core skill. By way of example, she shows how enough “why” questions can uncover the root cause of any situation.

Richard Phillips  Dec 27, 2019 9:12 AM

Root cause analysis is critical to understanding the past and why things happened. Along with RCA, I like to include the “how” questions: How can we prevent this in the future? How can we use this information to make things better, faster, and more resilient? When asked for the root cause, I like to provide not just the answer, but the value obtained from that answer.

Tregg Hartley
Dec 27, 2019 10:37 AM

Getting to the bottom of things

Is what we are looking for,

Diagnose the disease

Leave the symptoms at the door.

Brian Jarchow Dec 27, 2019 11:03 AM

Unfortunately, I have worked with people who would then take it to the level of: “Why do we need to pay? Why can’t we just have?”

A root cause analysis can only go so far, and some people have difficulty with reasonable limits.



It's been a few weeks since VMworld Europe, and that's given Sascha and me a chance to digest both the information and the vast quantities of pastries, paella, and tapas we consumed.

VMworld was held again in Barcelona this year and came two months after the U.S. convention, meaning there were fewer big, jaw-dropping, spoiler-filled announcements, but more detail-driven, fill-in-the-gaps statements to clarify VMware's direction and plans.

As a refresher, at the U.S. event, some of the announcements included:

  • VMware Tanzu – a combination of products and services leveraging Kubernetes at the enterprise level.
  • Project Pacific – related to Tanzu, this will turn vSphere into a Kubernetes native platform.
  • Tanzu Mission Control – will allow customers to manage Kubernetes clusters regardless of where in the enterprise they're running.
  • CloudHealth Hybrid – will let organizations update, migrate, and consolidate applications from multiple points in the enterprise (data centers, alternate locations, and even different cloud providers) as part of an overall cloud optimization and consolidation strategy
  • The intent to acquire Pivotal
  • The intent to acquire Carbon Black

Going into the European VMworld, one could logically wonder what else there was to say. It turns out many questions were left hanging in the air after the booths were packed and the carpet pulled up in San Francisco.

Executive Summary

Since selling vCloud to OVH, VMware has been looking into other ways to diversify its business and embrace the cloud. The latest acquisitions show it’s a vision, and their earnings calls show it’s a successful one.


At both the U.S. and Europe conventions, Tanzu was clearly the linchpin initiative around which VMware's new vision for itself revolves. While the high-level sketch of Tanzu products and services was delivered in San Francisco, in Barcelona we also heard:

  • Tanzu Mission Control will allow operators to set policies for access, backup, security, and more to clusters (either individual or groups) across the environment.
  • Developers will be able to access Kubernetes resources via APIs enabled by Tanzu Mission Control.
  • Project Pacific does more than merge vSphere and Kubernetes. It allows vSphere administrators to use tools they’re already familiar with to deploy and manage container infrastructures anywhere vSphere is running—on-prem, in hybrid cloud, or on hyperscalers.
  • Conversely, developers familiar with Kubernetes tools and processes can continue to roll out apps and services using the tools THEY know best and extend their abilities to provision to things like vSphere-supported storage on-demand.

The upshot is Tanzu and the goal of enabling complete Kubernetes functionality is more than a one-trick-pony idea. This is a broad and deep range of tools, techniques, and technologies.

Carbon Black

In September we had little more than the announcement of VMware's "intent to acquire" Carbon Black. By November the ink had dried on that acquisition and we found out a little more.

  • Carbon Black Cloud will be the preferred endpoint security solution for Dell customers.
  • VMware AppDefense and Vulnerability Management products will merge with several modules acquired through the Carbon Black acquisition.

While a lot more still needs to be clarified (in the minds of customers and analysts alike), this is a good start in helping us understand how this acquisition fits into VMware's stated intent of disrupting the endpoint security space.


NSX

The week before VMworld US, VMware announced its Q2 earnings, which showed NSX adoption had increased more than 30% year over year. This growth explains the VMworld Europe announcement of new NSX distributed IDS and IPS services, as well as "NSX Federation," which lets customers deploy policies across multiple data centers and sites.

In fact, NSX has come a long way. VMware offers two flavors of NSX: the well-known version, now called NSX Data Center for vSphere, and its younger sibling, NSX-T Data Center.

The vSphere version has continuously improved in the two areas that were preventing broader adoption, user experience and security, and is nowadays a mature and reliable technology.

NSX-T has been around for two years or so, but realistically it was always behind in features and not as valuable. As it turns out, things have changed, and NSX-T fits well into the greater scheme of things and is ready to play with the other guys in the park, including Tanzu and HCX.


Pivotal was initially acquired by EMC, and EMC combined it with assets from another acquisition: VMware. Next, Dell acquired EMC, and a little later both VMware and Pivotal became individual publicly traded companies with DellEMC remaining as the major shareholder. And now, in 2019, VMware acquired Pivotal.

One could call that an on/off relationship, similar to the one cats have with their owners (read: servants). It’s complicated.

Pivotal offers a SaaS solution to create other SaaS solutions, a concept which comes dangerously close to Skynet, minus the self-awareness and murder-bots.

But the acquisition does make sense, as Pivotal Cloud Foundry (PCF) runs on most major cloud platforms, on vSphere, and (to no one's surprise) on Kubernetes.

PCF allows developers to ignore the underlying infrastructure and is therefore completely independent of the type of deployment. It will help companies in their multi-cloud travels while still allowing them to remain a VMware customer.

New Announcements

With all of that said, we don't want you to think there was nothing new under the unseasonably warm Spanish sun. In addition to the expanded information above, we also heard about a few new twists in the VMware roadmap:

  • Project Galleon will see the speedy delivery of an app catalog, with greater security being key.
  • VMware Cloud Director service was announced, giving customers multi-tenant capabilities in VMware Cloud on AWS. This will allow Managed Service Providers (MSPs) to share the instances (and costs) of VMware Cloud on AWS across multiple tenants.
  • Project Path was previewed.
  • Project Maestro was also previewed—a telco cloud orchestrator designed to deliver a unified approach to modelling, onboarding, orchestrating, and managing virtual network functions and services for Cloud Service Providers.
  • Project Magna, another SaaS-based solution, was unveiled. This will help customers build a “self-driving data center” by collecting data to drive self-tuning automations.

Antes Hasta Tardes

Before we wrap up this summary, we wanted to add a bit of local color for those who live vicariously through our travels.

Sascha loved the “meat with meat” tapas variations and great Spanish wine. Even more, as someone who lives in rainy Ireland, he enjoyed the Catalan sun. It was fun to walk through the city in a t-shirt while the locals considered the November temperature barely acceptable.


Similarly, Leon, (who arrived in Barcelona three days after it had started snowing back home) basked in the warmth of the region and of the locals willing to indulge his rudimentary Spanish skills; and basked equally in the joy of kosher paella and sangria.


Until next time!


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

Here’s an interesting article by my colleague Jim Hansen where he discusses some ideas on improving agency security, including helping your staff develop cyberskills and giving them the tools to successfully prevent and mitigate cyberattacks.

Data from the Center for Strategic and International Studies paints a sobering picture of the modern cybersecurity landscape. The CSIS, which has been compiling data on cyberattacks against government agencies since 2006, found the United States has been far and away the top victim of cyber espionage and cyber warfare.

These statistics are behind the Defense Department’s cybersecurity strategy for component agencies, which details how they can better fortify their networks and protect information.

DoD’s strategy is built on five pillars: building a more lethal force, competing and deterring in cyberspace, strengthening alliances and attracting new partnerships, reforming the department, and cultivating talent.

While aspects of the strategy don’t apply to all agencies, three of the tactics can help all government offices improve the nation’s defenses against malicious threats.

Build a Cyber-Savvy Team

Establishing a top-tier cybersecurity defense should always start with a team of highly trained cyber specialists. There are two ways to do this.

First, agencies can look within and identify individuals who could be retrained as cybersecurity specialists. Prospects may include employees whose current responsibilities feature some form of security analysis and even those whose current roles are outside IT. For example, the CIO Council’s Federal Cybersecurity Reskilling Academy trains non-IT personnel in the art and science of cybersecurity. Agencies may also explore creating a DevSecOps culture intertwining development, security, and operations teams to ensure application development processes remain secure and free of vulnerabilities.

Second, agencies should place an emphasis on cultivating new and future cybersecurity talent. To attract new talent, agencies can offer potential employees the opportunity for unparalleled cybersecurity skills training, exceptional benefits, and even work with the private sector. The recently established Cybersecurity Talent Initiative is an excellent example of this strategy in action.

Establish Alliances and Partnerships

The Cybersecurity Talent Initiative reflects the private sector’s willingness to support federal government cybersecurity initiatives and represents an important milestone in agencies’ relationship with corporations. Just recently, several prominent organizations endured what some called the cybersecurity week from hell when multiple serious vulnerabilities were uncovered. They’ve been through it all, so it makes sense for federal agencies to turn to these companies to learn how to build up their own defenses.

In addition to partnering with private-sector organizations, agencies can protect against threats by sharing information with other departments, which will help bolster everyone’s defenses.

Arm Your Team With the Right Tools

It’s also important to have the right tools to successfully prevent and mitigate cyberattacks. Continuous monitoring solutions, for example, can effectively police government networks and alert managers to potential anomalies and threats. Access rights management tools can ensure only the right people have access to certain types of priority data, while advanced threat monitoring can keep managers apprised of security threats in real-time.

Of course, IT staff will need continuous training and education. A good practice is implementing monthly or at least bi-monthly training covering the latest viruses, social engineering scams, agency security protocols, and more.

The DoD’s five-pillared strategy is a good starting point for reducing risk to the nation. Agencies can follow its lead by focusing their efforts on cultivating their staff, creating stronger bonds with outside partners, and supporting this solid foundation with the tools and training necessary to win the cybersecurity war.

Find the full article on Government Computer News.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.


A few years back, I was working at a SaaS provider when we had an internal hackathon. The guidelines were simple: as part of your project, you had to learn something, and you had to choose a framework or methodology beneficial to the organization. I was a Windows and VI admin, but along with my developer friend, I wrote an inventory tool that was put into operational practice immediately. I learned a ton over those two days, but little did I know I’d discovered a POwerful SHortcut to advancing my career as well as immediately making my organization’s operations more efficient. What POtent SHell could have such an effect? The framework I chose in the hackathon: PowerShell.

A Brief History of PowerShell

Windows has always had some form of command-line utility. Unfortunately, those tools never really kept pace, and by the mid-2000s a change was badly needed. Jeffrey Snover led the charge that eventually became PowerShell. The goal was to produce a management framework for Windows environments, and as such it was originally used to control Windows components like Active Directory. The Microsoft Exchange UIs were even built on top of PowerShell, but over time it evolved into way more.

Today, one of the largest contributors to the PowerShell ecosystem is VMware, which competes with Microsoft in multiple spaces. Speaking of Microsoft, it has shed its legacy of being walled off from the world and is now a prolific open-source contributor, with one of its biggest contributions being making PowerShell open-source in 2016. Since then, you can run PowerShell on Mac and Linux computers, and use it to manage all of the big three cloud providers (AWS, Azure, Google Cloud).

Lots of People Are on Board With PowerShell, But Why Do YOU Care?

In an IT operations role, with no prior experience with PowerShell, I was able to create a basic inventory system leveraging WMI, SNMP, the Windows Registry, and PowerCLI, amongst others. I mention this example again because it demonstrates two of the most compelling reasons to use PowerShell: its low barrier to entry and its breadth and depth.

Low Barrier to Entry

We already determined you can run PowerShell on anything, but it’s also been included in all Windows OSs since Windows 7. If you work in an environment with Windows, you already have access to PowerShell. You can type powershell.exe to launch the basic PowerShell command window, but I’d recommend powershell_ise.exe for most folks who are learning, as the lightweight ISE (Integrated Scripting Environment) will give you some basic tools to troubleshoot and debug your scripts.

Once you’re in PowerShell, it’s time to get busy. The things performing work in PowerShell are called cmdlets (pronounced “command-lets”). Think of them as tiny little programs, or functions, each doing a unit of work. If you retain nothing else from this post, please remember this next point and I promise you can become effective in PowerShell: all cmdlets take the form of verb-noun and, if properly formed, will describe what they do. If you’re trying to figure something out, as long as you can remember Get-Help, you’ll be OK.
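A quick sketch of the verb-noun idea, using only built-in cmdlets (the service name queried below is the Windows Update service, included here purely as an example):

```powershell
# Cmdlet names are verb-noun, and the name describes what they do.
Get-Process                    # list running processes
Get-Service -Name 'wuauserv'   # get one service by name (Windows Update)
Get-Help Get-Service -Examples # built-in help and examples for any cmdlet
Get-Command -Verb Stop         # discover every cmdlet that stops something
```

Get-Command is the companion to Get-Help: one finds cmdlets, the other explains them.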

Here’s the bottom line on the learning curve: there are a lot of IT folks who don’t have a background or experience in writing code. We’re in a period where automation is vitally important to organizations. Having a tool you can pick up and start using on day one means you can build your skill set and increase your value to the organization easily. Now if only you had a tool that could grow with you as those skills evolved…

Depth and Breadth

At its most fundamental level, automation is about removing inefficiencies. A solution doesn’t need to be complex to be effective. When writing code in PowerShell, you can string together multiple commands, where the output of one cmdlet is passed along as the input of the next, via a process called the pipeline. Chaining commands together can be a simple and efficient way to get things done more quickly. Keep in mind PowerShell is a full-fledged object-oriented language, so you can write functions, modules, and thousands of lines of code as your skills expand.
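As a minimal illustration of the pipeline, each cmdlet below hands its output to the next:

```powershell
# Find the five processes using the most memory:
# Get-Process emits process objects, Sort-Object orders them by working set,
# and Select-Object keeps the top five with two properties each.
Get-Process |
    Sort-Object -Property WorkingSet64 -Descending |
    Select-Object -First 5 -Property Name, WorkingSet64
```

Because objects (not text) flow down the pipeline, each stage can reference properties like WorkingSet64 by name rather than parsing output.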

So, you can go deep with the tool, but you can go wide as well. We already mentioned you can manage several operating systems, but software across the spectrum is increasingly enabling management via PowerShell snap-ins or modules. This includes backup, monitoring, and networking tools of all shapes and sizes. But you’re not limited to the tools vendors provide; you can write your own. That’s the point! If you need ideas to jumpstart your automation practice, here’s a sampling of fun things I’ve written in PowerShell: a network mapper, a port scanner, environment provisioning, an ETL engine, and web monitors. The only boundary to what you can accomplish is the limit of your imagination.

What’s Next

For some people, PowerShell may be all they need and as far as they’ll go. If this PowerShell exploration just whets your appetite, though, remember you can go deeper. Much of automation is moving toward APIs, and PowerShell gives you a couple of ways to begin exploring them. Invoke-WebRequest and Invoke-RestMethod will allow you to take your skills to the next step and build your familiarity with APIs and their constructs within the friendly confines of the PowerShell command shell.
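For example, a single Invoke-RestMethod call returns parsed objects rather than raw JSON. The URL below is a placeholder, not a real endpoint:

```powershell
# Call a hypothetical JSON API; Invoke-RestMethod parses the response for you.
$response = Invoke-RestMethod -Uri 'https://api.example.com/v1/status' -Method Get
# JSON properties become object properties — no manual parsing needed.
$response | Get-Member -MemberType NoteProperty
```

Invoke-WebRequest works similarly but hands back the raw response (status code, headers, body), which is handy when you need more than the payload.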

No matter how far you take your automation practice, I hope you can use some of these tips to kickstart your automation journey.


We’re more than halfway through the challenge now, and I’m simply blown away by the quality of the responses. While I’ve excerpted a few for each day, you really need to walk through the comments to get a sense of the breadth and depth. You’ll probably hear me say it every week, but thank you to everyone who has taken time out of their day (or night) to read, reply, and contribute.

14. Event Correlation

Correlating events—from making a cup of coffee to guessing at the contents of a package arriving at the house—is something we as humans do naturally. THWACK MVP Mark Roberts uses those two examples to help explain the idea that, honestly, stymies a lot of us in tech.

Beth Slovick Dec 16, 2019 9:46 AM

Event Correlation is automagical in some systems and manual in others. If you can set it up properly, you can get your system to provide a Root Cause Analysis and find out what the real problem is. Putting all those pieces together to set it up can be difficult in an ever-changing network environment. It is a full-time job in some companies with all the changes that go on. The big problem there is getting the information in a timely manner.

Richard Phillips  Dec 17, 2019 1:02 PM

She’s a “box shaker!” So am I.

Flash back 20 years—a box arrives just before Christmas. The wife and I were both box shakers and proceeded to spend the next several days leading up to Christmas periodically shaking the box and trying to determine the contents. Clues: 1) it’s light; 2) it doesn’t seem to move a lot in the box; 3) the only noise it makes is a bit of a scratchy sound when shaken.

Finally Christmas arrives, and we anxiously open the package to find a (what was previously very nice) dried flower arrangement—can you imagine what happens to a dried flower arrangement after a week of shaking . . .

Matt R  Dec 18, 2019 12:57 PM

I think of event correlation like weather. Some people understand that dark clouds = rain. Some people check the radar. Some people have no idea unless the Weather Channel tells them what the weather will be.

15. Application Programming Interface (API)

There’s nobody I’d trust more to explain the concept of APIs than my fellow Head Geek Patrick Hubbard—and he did not disappoint. Fully embracing the “Thing Explainer” concept—one of the sources of inspiration for the challenge this year—Patrick’s explanation of “Computer-Telling Laws” is a thing of beauty.

Tregg Hartley Dec 15, 2019 11:37 AM

I click on an icon

You take what I ask,

Deliver it to

The one performing the task.

When the task is done

And ready for me,

You deliver it back

In a way I can see.

You make my life easier

I can point, click and go,

You’re the unsung hero

and the star of the show.

Vinay BY  Dec 16, 2019 5:45 AM

API to me is a way to talk to a system or an application running on it. While we invest a lot of time in building one, we should also make sure it’s built with standards and rules in mind. Basically, we shouldn’t be investing a lot of time on something that can’t be used.

Dale Fanning Dec 16, 2019 9:36 AM

In many ways an API is a lot like human languages. Each computer/application usually only speaks one language. If you speak in that language, it understands what you want and will do that. If you don’t, it won’t. Just like in the human world, there are translators for computers that know both languages and can translate back and forth between the two so each can understand the other.
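Staying with Dale’s translator analogy, PowerShell’s serialization cmdlets do exactly this kind of translating between its object “language” and the JSON most web APIs speak. The object here is made up purely for illustration:

```powershell
# Translate a PowerShell object into JSON an API would understand...
$order = [pscustomobject]@{ Item = 'coffee'; Quantity = 2 }
$json  = $order | ConvertTo-Json
# ...and translate the API's JSON reply back into an object.
$reply = $json | ConvertFrom-Json
Write-Output $reply.Item      # -> coffee
```

The same round trip happens under the hood whenever two systems that “speak” different formats talk through an API.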

16. SNMP

Even though “simple” is part of its name, understanding SNMP can be anything but. THWACK MVP Craig Norborg does a great job of breaking it down to its most essential ideas.

Jake Muszynski  Dec 16, 2019 7:16 AM

SNMP is still relevant after all these years because the basics are the same on any device that supports it. Most places don’t have just one vendor in house; they have different companies. SNMP gets core monitoring data out with very little effort. Can you get more from SNMP with more effort? Probably. Can other technologies get you real-time data for specialty systems? Yup; there’s lots of stuff companies don’t put in SNMP. But that’s OK. Right up there with ping, SNMP is still a fundamental resource.

scott driver Dec 16, 2019 1:38 PM

SNMP: Analogous to a phone banking system (these are still very much a thing btw).

You have a Financial Institution (device)

You call in to an 800# (an OID)

If you know the right path you can get your balance (individual metric)

However, when things go wrong, the fraud department will reach out to you (a trap)

Tregg Hartley Dec 17, 2019 12:10 PM

Sending notes all of the time

For everything under the sun,

The task is never ending

And the Job is never done.

I can report on every condition

I send and never look back,

My messages are UDP

I don’t wait around for the ACK.

17. Syslog

What does brushing your teeth have to do with an almost 30-year-old messaging protocol? Only a true teacher—in this case the inimitable “RadioTeacher” (THWACK MVP Paul Guido)—could make something so clear and simple.

Faz f Dec 17, 2019 4:54 AM

Like a diary for your computer

Juan Bourn Dec 17, 2019 11:24 AM

A way for your computer/server/application to tell you what it was doing at an exact moment in time. It’s up to you to determine why, but the computer is honest and will tell you what and when.
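That “diary” even has a well-defined entry format on the wire. Here’s a small sketch of the classic BSD syslog line described in RFC 3164, with a made-up hostname and app name; the PRI value at the front packs the facility and severity into one number.

```python
# A minimal sketch of what a syslog "diary entry" looks like on
# the wire, in the classic BSD format (RFC 3164). The hostname
# and app name below are made up for illustration.
import time

def syslog_line(facility, severity, host, app, message):
    # PRI encodes facility and severity in a single number.
    pri = facility * 8 + severity
    timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {host} {app}: {message}"

# facility 16 (local0), severity 6 (informational) -> PRI 134
print(syslog_line(16, 6, "web01", "backup", "nightly backup completed"))
```

The computer is honest, as Juan says: the timestamp records exactly when, and the message records exactly what.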

18. Parent-Child

For almost 20 days, we’ve seen some incredible explanations for complex technical concepts. But for day 18, THWACK MVP Jez Marsh takes advantage of the concept of “Parent-Child” to remind us our technical questions and challenges often extend to the home, but at the end of the day we can’t lose sight of what’s important in that equation.

Jeremy Mayfield  Dec 18, 2019 7:41 AM

Thank you, this is great. I think of Parent-Child as one being present with the other. As the child changes, the parent becomes more full, and eventually, when the time is right, the child becomes a parent and the original parent may be no more.

The parent-child relationship is one that nurtures the physical, emotional, and social development of the child. It is a unique bond that every child and parent can enjoy and nurture. ... A child who has a secure relationship with a parent learns to regulate emotions under stress and in difficult situations.

Dale Fanning Dec 18, 2019 8:36 AM

I’m a bit further down the road than you, having launched my two kids a few years ago, but I will say that the parent-child relationship doesn’t change even then, although I count them more as peers than children now. I’m about to become a grandparent for the first time, and our new role is helping them down the path of parenthood without meddling too much hopefully. It’s only much later that you realize how little you knew when you started out on the parent journey.

Chris Parker Dec 18, 2019 9:37 AM

In keeping with the IT world:

This is a relationship much like the one between you and your parents.

You need your parents/guardians in order to bring you up in this world and without them you might be ‘orphaned’

Information on systems sometimes needs a ‘Parent’ in order for the child to belong

You can get some information from the Child but you would need to go to the Parent to know where the child came from

One parent might have many children who then might have more children but you can follow the line all the way to the beginning or first ‘parent’
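Chris’s description maps directly onto how parent-child data is usually modeled: each child holds a link to its parent, and you follow the links back to the first parent. A minimal sketch (the names are purely illustrative):

```python
# A small sketch of a parent-child hierarchy like the one described
# above: each node knows its parent, so from any child you can
# follow the line all the way back to the first 'parent' (the root).

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # None means this node is the root

    def ancestry(self):
        # Walk the parent links back to the beginning of the line.
        line, node = [], self
        while node is not None:
            line.append(node.name)
            node = node.parent
        return line

root = Node("grandparent")
child = Node("parent", parent=root)
grandchild = Node("child", parent=child)
print(grandchild.ancestry())  # ['child', 'parent', 'grandparent']
```

A node created without a parent is exactly the ‘orphan’ Chris mentions: it has no line to follow.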

19. Tracing

I’ve mentioned before how LEGO bricks just lend themselves to these ELI5 explanations of technical terms, especially as it relates to cloud concepts. In this case, Product Marketing Manager Peter Di Stefano walks through the way tracing concepts would help troubleshoot a failure many of us may encounter this month—when a beloved new toy isn’t operating as expected.

Chris Parker Dec 19, 2019 4:57 AM

Take apart the puzzle until you find the piece that is out of place

Duston Smith Dec 19, 2019 9:26 AM

I think you highlight an important piece of tracing—documentation! Just like building LEGOs, you need a map to tell you what the process should be. That way when the trace shows a different path you know where the problem sits.

Holly Baxley Dec 19, 2019 10:15 AM

Hey Five-year-old me,

Remember when I talked about Event Correlation a while back and told you that it was like dot to dot, because all the events were dots and if you connected them together you can see a clearer “picture” of what’s going on?

Well, today we’re going to talk about Tracing, which “seems” like the same thing, but it isn’t.

See in Event Correlation you have no clue what the picture is. Event Correlation’s job is to connect events together, so it can create as clear a picture as it can of the events to give you an outcome. Just remember, Event Correlation is only as good as the information that’s provided. If events (dots) are left out—the picture is still incomplete, and it takes a little work to get to the bottom of what’s going on.

In tracing—you already know what the picture is supposed to look like.

Let’s say you wanted to draw a picture of a sunflower.

Your mom finds a picture of the sunflower on the internet and she prints it off for you.

Then she gives you a piece of special paper called “vellum” that’s just the right amount of translucent (a fancy term for see-through), so you can still see the picture of the sunflower underneath it. She gives you a pencil so you can start tracing.

Now in tracing does it matter where you start to create your picture?

No it doesn’t.

You can start tracing from anywhere.

In dot-to-dot, you can kinda do the same thing if you want to challenge yourself. It’s not always necessary to start at dot 1, and if you’re like me (or are me), you rarely find dot 1 the first time anyway. You can count up and down to connect the dots and eventually get there.

Just remember—in this case, you still don’t know what the picture is and that’s the point of dot to dot—to figure out what the picture is going to be.

In tracing—we already know what the picture either is, or at least is supposed to look like.

And just like in tracing, once you lift your paper off the picture, you get to see—did it make the picture that you traced below?

If it didn’t—you can either a) get a new sheet and try again or b) start with where things got off track and erase it and try again.

To understand tracing in IT, I want you to think about an idea you’ve imagined. Close your eyes. Imagine your picture in your mind. Do you see it?

Good.

We sometimes say that we can “picture” a solution, or we “see” the problem, when in reality, a problem can be something that we can’t really physically see. It’s an issue we know is out there: e.g., the network is running slow and we see a “picture” of how to fix it in our mind; a spreadsheet doesn’t add up right like it used to, and we have a “picture” in our mind of how it’s supposed to behave and give the results we need.

But we can’t physically take a piece of paper and trace the problem.

We have programs that trace our “pictures” for us and help us see what went right and what went wrong.

Tracing in IT is a way to see if your program, network, spreadsheet, document, well...really anything traceable did what it was supposed to do and made the “picture” you wanted to see in the end.

It’s a way to fix issues and get the end result you really want.

Sometimes we get our equipment and software to do what it’s supposed to, but then we realize—it could be even BETTER, and so we use tracing to figure out the best “path” to take to get us there.

That would be like deciding you want a butterfly on your Sunflower, so your mom prints off a butterfly for you and you put your traced Sunflower over the butterfly and then decide what’s the best route to take to make your butterfly fit on your sunflower the way you want it.

And just like tracing—sometimes you don’t have to start at the beginning to get to where you want to be.

If you know that things worked up to a certain point but then stopped working the way you want, you can start tracing right at the place where things aren’t working the way you want. You don’t always have to start from a beginning point. This saves time.

There’s lots of different types of tracing in IT. Some people trace data problems on their network, some people trace phone problems on their network, some trace document and spreadsheet changes on their files, some trace database changes. There’s all sorts of things that people can trace in IT to either fix a problem or make something better.

But the end question of tracing is always the same.

Did I get what I “pictured?”

And if the answer is “yes” - we stop and do the tech dance of joy.

It’s a secret dance.

Someday, I’ll teach you.
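For the IT side of Holly’s analogy, here’s a toy sketch: if you know the “picture” (the expected path through a system) and you record the path actually taken, tracing is largely about finding the first place the two diverge, which is exactly where you start erasing and trying again. The step names below are invented for illustration.

```python
# A sketch of the tracing idea above: compare the expected path
# (the picture you're tracing) with the path actually taken, and
# report the first place they diverge. Step names are hypothetical.

def first_divergence(expected, observed):
    # Note: this simple version only compares steps pairwise; it
    # won't flag steps missing from the *end* of the observed path.
    for i, (want, got) in enumerate(zip(expected, observed)):
        if want != got:
            return i, want, got
    return None  # the trace matched the picture

expected = ["receive", "validate", "enrich", "store", "ack"]
observed = ["receive", "validate", "store"]  # a step went missing

print(first_divergence(expected, observed))  # (2, 'enrich', 'store')
```

And when the function returns None, you got what you “pictured,” and the tech dance of joy may commence.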

20. Information Security

THWACK MVP Peter Monaghan takes a moment to simply and clearly break down the essence of what InfoSec professionals do, and to put it into terms that parents would be well-advised to use with their own kids.

(while I don’t normally comment on the comments, I’ll make an exception here)

In the comments, a discussion quickly started about whether using this space to actually explain infosec (along with the associated risks) TO a child was the correct use of the challenge. While the debate was passionate and opinionated, it was also respectful and I appreciated that. Thank you for making THWACK the incredible community that it has grown to be!

Holly Baxley Dec 20, 2019 3:18 PM (in response to Jeremy Mayfield)

I think Daddy's been reading my diary

He asks if I'm okay

Wants to know if I want to take walks with him

Or go outside and play

He tells Mommy that he's worried

There's something wrong with me

Probably from reading things in the diary

Things he thinks he shouldn't see

But I'll tell you a little secret

That diary isn't real

I scribbled nonsense in that journal

And locked away the one he can't steal

If Daddy was smart he woulda noticed

Something he's clearly forgot

Never read a diary covered with Winnie the Pooh

Whose head is stuck in the Honeypot.

Jon Faldmo Dec 20, 2019 1:27 PM

I haven't thought of how information security applies or is in the same category as privacy and being secure online. I have always thought of Information Security in the context of running a business. It is the same thing, just usually referenced differently. Thanks for the write up.

Tregg Hartley Dec 20, 2019 3:33 PM

The OSI model

Has seven layers,

But it leaves off

The biggest players.

The house is protected

By the people inside,

We are all on watch

As such we abide.

To protect our house

As the newly hired,

All the way

To the nearly retired.




OK, so the title is hardly original, apologies. But it does highlight that the buzz around Kubernetes is still out there, showing no signs of going away anytime soon.

Let’s start with a description of what Kubernetes is:

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available¹
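The “declarative configuration” part of that definition is worth making concrete. Below is a minimal, purely illustrative Deployment manifest (the name `hello-web` and image are made up): you declare the state you want, in this case three replicas of a container, and Kubernetes continuously works to make reality match it, rather than you scripting the steps.

```yaml
# Illustrative only: declares "I want three replicas of this image
# running" rather than scripting how to start them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image would do
        ports:
        - containerPort: 80
```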

Let me add my disclaimer here. I’ve never used Kubernetes or personally had a use case for it. I have an idea of what it is and its origins (do I need a Borg meme as well?) but not much more.

A bit of background on myself: I’m predominantly an IT operations person, working for a Value-Added Reseller (VAR) designing and deploying VMware-based infrastructures. The organizations I work with are typically 100–1,000 seats in size, across many vertical markets. I would be naive to think none of those organizations are considering containerization and orchestration technology, but genuinely none of them are using it today.

Is It Really a Thing?

In the past 24 months, I’ve attended many trade shows and events, usually focused on VMware technology, and the question is always asked: “Who is using containers?” The percentage of hands going up is always less than 10%. Is it just the audience type, or is this a true representation of container adoption?

Flip it around and when I go to an AWS or Azure user group event, it’s the focus and topic of conversation: containers and Kubernetes. So, who are the people at these user groups? Predominantly the developers! Different audiences, different answers.

I work with one of the biggest Tier-1 Microsoft CSP distributors in the U.K. Their statistics on Azure consumption by type of resource are enlightening. 49% of billable Azure resources are virtual machines, 30-odd% is object storage consumption. There was a small slice of the pie at 7% for misc. services, including AKS (Azure Kubernetes Service). This figure aligns with my first observation at trade events, where less than 10% of people in the room were using containers. I don’t know if those virtual machines are running container workloads.

Is There a Right Way?

This brings us to the question and part of the reason I wrote this article: is there a right way to deploy containers and Kubernetes? Every public cloud has its own interpretation—Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, you get the idea. Each one has its own little nuances capable of breaking the inherent idea behind containers: portability. Moving from one cloud to another, the application stack isn’t necessarily going to work right away.

Anyway, the interesting piece for me, because of my VMware background, is Project Pacific. Essentially, VMware has gone all-in on Kubernetes by making it part of the vSphere control plane. IT Ops can manage a Kubernetes application container in the same way they can a virtual machine, and developers can consume Kubernetes the same way they can elsewhere. It’s a win/win situation. In another step toward VMware becoming the management plane for everything (think public cloud, on-premises infrastructure, and the software-defined data center), Kubernetes moves ever closer to my wheelhouse.

No matter where you move the workload, if VMware is part of the management and control plane, then user experience should be the same, allowing for true workload mobility.


Two things for me.

1. Now more than ever seems like the right time to look at Kubernetes, containerization, and everything it brings.

2. I’d love to know if my observations on containers and Kubernetes adoption are a common theme or if I’m living with my head buried in the sand. Please comment below.

¹ Kubernetes Description -


I visited the Austin office this past week, my last trip to SolarWinds HQ for 2019. It’s always fun to visit Austin and eat my weight in pork products, but this week was better than most. I took part in deep conversations around our recent acquisition of VividCortex.

I can’t begin to tell you how excited I am for the opportunity to work with the VividCortex team.

Well, maybe I can begin to tell you. Let’s review two data points.

In 2013, SolarWinds purchased Confio Software, makers of Ignite (now known as Database Performance Analyzer, or DPA), for $103 million. That’s where my SolarWinds story begins, as I was included with the Confio purchase. I had been with Confio since 2010, working in sales engineering, customer support, product development, and corporate marketing. We made Ignite into a best-of-breed monitoring solution that’s now the award-winning, on-prem and cloud-hosted DPA loved by DBAs globally.

The second data point is from last week, when SolarWinds bought VividCortex for $117.5 million. One thing I want to make clear is SolarWinds just doubled down on our investment in database performance monitoring. Anyone suggesting anything otherwise is spreading misinformation.

Through all my conversations last week with members of both product teams, one theme was clear. We are committed to providing customers with the tools necessary to achieve success in their careers. We want happy customers. We know customer success is our success.

Another point that was made clear is the VividCortex product will complement, not replace DPA, expanding our database performance monitoring portfolio in a meaningful way. Sure, there is some overlap with MySQL, as both tools offer support for that platform. But the tools have some key differences in functionality. Currently, VividCortex is a SaaS monitoring solution for popular open-source platforms (PostgreSQL, MySQL, MongoDB, Amazon Aurora, and Redis). DPA provides both monitoring and query performance insights for traditional relational database management systems and is not yet available as a SaaS solution.

This is why we view VividCortex as a product to enhance what SolarWinds already offers for database performance monitoring. We’re now stronger this week than we were just two weeks ago. And we’re now poised to grow stronger in the coming months.

This is an exciting time to be in the database performance monitoring space, with 80% of workloads still Earthed (that is, still on-premises). If you want to know about our efforts regarding database performance monitoring products, just AMA.

I can't wait to get started on helping build next-gen database performance monitoring tools. That’s what VividCortex represents, the future for database performance monitoring, and why this acquisition is so full of goodness. Expect more content in the coming weeks from me regarding our efforts behind the scenes with both VividCortex and DPA.



As we head into the new year, people will once again start quoting a popular list describing the things kids starting college in 2020 will never personally experience. Examples of these are things like “They’re the first generation for whom a ‘phone’ has been primarily a video game, direction finder, electronic telegraph, and research library.” And “Electronic signatures have always been as legally binding as the pen-on-paper kind.” Or most horrifying, “Peanuts comic strips have always been repeats.”

That said, it’s also interesting to note the things that fell into obsolescence over the last few decades. In this post, I’m going to list and categorize them, and add some of my personal thoughts about why they’ve fallen out of vogue, if not use.

It’s important to note many of these technologies can still be found “in the wild”—whether because some too-big-to-fail, mission-critical system depends on them (cf. the New York Subway MetroCard system, which runs on the OS/2 operating system), or because devotees of the technology keep using it even though newer, and ostensibly better, tech has supplanted it (such as laserdiscs and the Betamax tape format*).

Magnetic Storage

This includes everything from floppy disks (whether 8”, 5.25”, or 3.5”), video tapes (VHS or the doubly obsolete** Betamax), DAT, cassette tapes and their progenitor reel-to-reel, and so on.

The reason these technologies are gone is because they weren’t as good as what came after. Magnetic storage was slow, prone to corruption, and often delicate and/or difficult to work with. Once a superior technology was introduced, people abandoned these as fast as they could.

Disks for Storage

This category includes the previously mentioned floppy disks, but extends to include CDs, DVDs, and the short-lived MiniDisc. All have—by and large—fallen by the wayside.

The reason for this is less that these technologies were bad and/or hard to use, per se (floppies notwithstanding), and more that what came after—flash drives, chip-based storage, SSDs, and cloud storage, to name a few—was so much better.

Mobile Communications

Since the introduction of the original cordless phone in 1980, mobile tech has become ubiquitous and served as an engine of societal and technological change. But not everything invented along the way has remained with us. Those cordless phones I mentioned are a good example, as are pagers and mobile phones that are JUST phones and nothing else.

It’s hard to tell how much of this is because the modern smartphone was superior to its predecessors, and how much was because the newest tech is so engaging—both in terms of the features it contains and the social cachet it brings.

Portable Entertainment

Once juggernauts of the consumer electronics sector, the Walkman, Discman, and portable DVD player have largely had their day.

In one of the best examples of the concept of “convergence,” the smartphone has absorbed and made obsolete the capabilities once provided by all of those portable entertainment devices.

School Tech

There was a range of systems which were staples in the classroom until relatively recently: if the screen in the classroom came down, students might turn their attention to information emanating from an overhead projector, a set of slides, a filmstrip, or even an actual film.

Smartboards, in-school media servers, and computer screen sharing all swooped in to make lessons far more dynamic, interactive, and (most importantly) simple for the teacher to prepare. And no wonder, since no teacher in their right mind would go back to the long hours drawing overhead cells in multiple marker colors, only to have that work destroyed by a wayward splash of coffee.

A Short List of More Tech We Don’t See (Much) Any More:

  • CRT displays
  • Typewriters
  • Fax machines (won’t die, but still)
  • Public phones
  • Folding maps
  • Answering machines

What other tech or modern conveniences of a bygone era do you miss—or at least notice is missing? Talk about it in the comments below.

* Ed. note: Betamax was far superior, especially for TV usage, until digital records became commercially acceptable from a budget perspective, thankyouverymuch. Plus, erasing them on the magnet thingy was fun.

** Ed. note: Rude.
