Home this week and getting ready for Microsoft Ignite next week in Orlando. If you're at Ignite, please stop by the booth and say hello. I love talking data with anyone.


As always, here's a bunch of links I found interesting. Enjoy!


Microsoft beats Amazon to win the Pentagon’s $10 billion JEDI cloud contract

The most surprising part of this is that an online bookstore thought they were the frontrunner. This deal underscores the difference between an enterprise software company with a cloud, and an enterprise infrastructure hosting company that also sells books.


Google claims it has achieved 'quantum supremacy' – but IBM disagrees

You mean Google would embellish upon facts to make themselves look better? Color me shocked.


Amazon migrates more than 100 consumer services from Oracle to AWS databases

"Amazon doesn't run on Oracle; why should you?"


“BriansClub” Hack Rescues 26M Stolen Cards

Counter-hacking is a thing. Expect to see more stories like this one in the coming years.


Berkeley City Council Unanimously Votes to Ban Face Recognition

Until the underlying technology improves, it's best for us to disallow the use of facial recognition for law enforcement purposes.


China’s social credit system isn’t about scoring citizens — it’s a massive API

Well, it's likely both, and a possible surveillance system. But if it keeps jerks away from me when I travel, I'm all for it.


Some Halloween candy is actually healthier than others

Keep this in mind when you're enforcing the Dad Tax on your kid's candy haul tomorrow night.


Every now and then my fire circle regresses to its former life as a pool.


Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner with suggestions on improving your agency’s FITARA score. FITARA rolls up scores from other requirements to provide a holistic view of agency performance.


The most recent version of the scorecard measuring agency implementation of the Federal IT Acquisition Reform Act gave agencies cause for both celebration and concern. On the whole, scores in December’s FITARA Scorecard 7.0 rose, but some agencies keep earning low scores.


Agencies don’t always have the appropriate visibility into their networks to allow them to be transparent. All agencies should strive for better network visibility. Let’s look at how greater visibility can help improve an agency’s score and how DevOps and agile approaches can propel their modernization initiatives.


Software Licensing


Agencies with the lowest scores in this category failed to provide regularly updated software licensing inventories. This isn’t entirely surprising; after all, when licenses aren’t immediately visible, they tend to get forgotten or buried as a budget line item. Out of sight, out of mind.


However, the Making Electronic Government Accountable by Yielding Tangible Efficiencies Act (MEGABYTE Act) of 2016 is driving agencies to make some changes. MEGABYTE requires agencies to establish comprehensive inventories of their software licenses and use automated discovery tools to gain visibility into and track them. Agencies are also required to report on the savings they’ve achieved by optimizing their software licensing inventory.


Even if an agency doesn’t have an automated suite of solutions, it can still assess its inventory. This can be a great exercise for cleaning house and identifying “shelfware,” software purchased but no longer being used.


Risk Management


Risk management is directly tied to inventory management. IT professionals must know what applications and technologies comprise their infrastructures. Obtaining a complete understanding of everything within those complex networks can be daunting, but there are solutions to help.


Network and inventory monitoring technologies can give IT professionals insight into the different components affecting their networks, from mobile devices to servers and applications. They can use these technologies to monitor for potential intrusions and threats, but also to look for irregular traffic patterns and bandwidth issues.


Data Center Optimization


Better visibility can also help IT managers identify legacy applications to modernize. Knowing which applications are being used is critical to being able to determine which ones should be removed and where to focus modernization efforts.


Unfortunately, agencies often discover they still need legacy solutions to complete certain tasks. They get stuck in a vicious circle where they continue to add to, not reduce, their data centers. Their FITARA scores end up reflecting this struggle.


Applying a DevOps approach to modernization can help agencies achieve their goals. DevOps is often based on agile development practices enabling incremental improvements in short amounts of time; teams see what they can realistically get done in three to five weeks. They prioritize the most important projects and strive for short-term wins. This incremental progress can build momentum toward longer-term goals, including getting all legacy applications offline and reducing costly overhead.


While visibility and transparency are essential for improvements across all these categories, FITARA scorecards themselves are also useful for shining light on the macro problems agencies face today. They can help illuminate areas of improvement, so IT professionals can prioritize their efforts and make a significant difference to their organizations. Every government IT manager should stay up-to-date on the scoring methodologies and how other agencies are doing.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Following up on Will I Save Money by Moving to the Cloud?, this post is part two of an atypical look at the public cloud and why you may or may not want to leverage it.


If you stop and think for a moment, cloud computing is still in its youth. EC2 has only been a public offering since 2006, and Gartner first treated “the cloud” as a real thing about a decade ago. Seven years ago, I saw an early version of the IaaS offering from one of the big three, and it was almost unusable. From this perspective, “the cloud” is still maturing, and the last several years have seen a dramatic evolution in the breadth of offerings.


Cloud has fundamentally changed the technological landscape, much as virtualization did a few years before. The benefits of cloud have had many going nuts for a while, with cheers of “Cloud first!” and “We’re all in on cloud!” But what if you’re hesitant, wondering if the cloud is right for you and your organization? That’s OK, and it's part of what we’ll explore today: some reasons you may or may not want to consider staying on-premises.


What’s Your Motivation?

In my mind, this is the biggest question you need to ask yourself, your org, and your stakeholders. Why are you moving to the cloud?

  • Is your business highly dynamic or does it need to scale rapidly?
  • Do you need to leverage an OpEx cost model for cash flow purposes?
  • Does your app need a refactor to leverage new technologies?


These are some of the best drivers for moving to the cloud, and they bear further investigation.


  • Is your manager prone to moving you on to the next big thing, but only until the next big thing comes along?
  • Are you making the move simply because everyone else is doing it?
  • Do you believe you’ll inherently save money by shifting your infrastructure to cloud?

These things should give you pause, and you’ll need to overcome them if you want a successful cloud strategy.


Risk and Control

In my experience, most people hesitate to move to the cloud because of risk. Specifically, it comes down to your tolerance for risk within your information security program. It seems every week we hear news of a breach caused by an insecure cloud configuration. Now, is the cloud provider to blame for the breach? Almost certainly not. However, depending on several factors, most notably your in-house competencies, the cloud may make it easier to leave yourself open to risk. Can the same situation occur on-premises? Absolutely. Just remember, breaches happened before cloud was a thing. Ask yourself if any additional risk from changing providers or paradigms is within your tolerance level. If it is, great! You’re ready to continue your cloud analysis. If not, you need to determine the better move for you: do you change your platform, or do you change your risk tolerance?


What about where your data is and who has access to it? One of the early IaaS leaders, who’s still one of the top 10 providers, required administrative access to your servers. How particular are you and your organization about data locality and retention times? What happens to your data when you leave a provider? All these problems can be overcome, but before committing to any change in direction, ask yourself where you want to spend your resources: on changing how you mitigate risk in a new environment, or on managing a known commodity.



Your People and Your Tools

What do you want your people to do, and what do they want to do? Chances are your IT organization has somewhere between one and hundreds of technologists. Switching platforms requires retraining and leveling these people up. You need to consider your appetite for these tasks and weigh it against the costs of running your business should you stick with the status quo.


You should have a similar discussion around your toolsets. In the grand scheme of things, cloud is still relatively young. Many vendors aren’t ready to support a multi-cloud or hybrid cloud approach. As it relates to operations, do you need to standardize and have a single pane of glass or are you OK with different toolsets for different environments?


Finally, you need to think about how your strategies affect your staff and what it means for employee retention. If your business is cutting-edge, pushing boundaries, and disrupting without leveraging the cloud, you could end up with a people problem. Conversely, if you operate in a stable, predictable environment, you’ll need to consider whether disruption from changing your infrastructure is worth upending your team. Don’t get me wrong, you shouldn’t decide on a business strategy solely on employee happiness. On the other hand, engaged teams are routinely shown to be more effective, so it’s a factor to consider.


Cost, as it pertains to the cloud, is a complicated matter, and you can look at it from many different angles. I explore several relevant questions in my post Will I Save Money by Moving to the Cloud?


All these questions aside, neither the cloud nor the legacy data center is going anywhere anytime soon. Heck, I just installed a mainframe recently, so you can trust most technology has varying degrees of stickiness. I want you and your organization to choose the right tool for the situation. Hopefully, considering a couple of different viewpoints helps you make the right choice.


The conversation continues in part three of the series as we take a look at SaaS in Beyond IaaS, Cloud for the SMB, the Enterprise, and Beyond!


VMworld EMEA 2019

Posted by Sascha Giese, Oct 27, 2019

It's that time of the year again! VMworld EMEA is back in Barcelona.


As one of the annual family reunions of all things data center, virtualization, and cloud, we can't miss it. This year the event is November 4 – 7, and you can find us at stand B524.


Last year, we had VMAN 8.3 out the door a few months before the event, and now we’re on version 8.5.

If things work out, we might be able to show you some of what we’re working on. And while we can't promise anything, it's looking good so far. Fingers crossed!


Tom and Patrick are attending MS Ignite in Orlando, which is happening at the same time, so it will be Leon and me, plus a group of experts from our EMEA offices, helping you in Barcelona.

Last November I saw this thing but unfortunately, I couldn’t find time to play with it:


Now, who wouldn’t want to play (I mean, do research) with a virtual shovel excavator? Will these guys return? Asking for a friend!

And Barcelona being Barcelona, there’s good food to look forward to again:



And for sure, we’re also looking forward to seeing you guys again!

In my last post I gave some background on one of my recent side projects: setting up and then monitoring a Raspberry Pi running Pi-Hole. In this post, I’m going to dive into the details of how I set up the actual monitoring. As a reminder, you can download these Server & Application Monitor (SAM) templates from the THWACK content exchange:



Also, the SolarWinds legal team has persistently insisted I remind you that these are provided as-is, for educational purposes only. The user agrees to indemnify the author, the author’s company, and the author’s third-grade math teacher against any unexpected side effects, such as drowsiness, nausea, ability to fly, growth of extra limbs, or attacks by flightless waterfowl.


Setting Up Monitoring

As I said at the start of this series (**LINK**), on top of enjoying what Pi-Hole was doing for my home browsing experience, I also wanted to see if I could collect meaningful monitoring statistics from an application of this type.

I started off with the basics—getting the services monitored. There weren’t many, and it looked like this once I was set up.



In the end, the services I needed to monitor were:

  • pihole-FTL
  • lighttpd
  • lightdm
  • dhcpd


Because monitoring services is sort of “basic blocking and tackling,” I’m not going to dig too deep here. Also, because I’ve provided the template for you to use, you shouldn’t have to break a sweat over it.


Next, I wanted to capture all those lovely statistics the API provides. The only way I could do this was by building a script-based component in SolarWinds SAM. Now, I’m no programmer, more of a script kiddie, but I can sling code in a pinch, so I wasn’t overly worried…


…Until I realized I didn’t want to do this in Perl. It’s one thing to shoehorn Perl into making JSON calls because I wanted to prove a point. But since I wanted to put this template on THWACK for other folks to use, I had to do it in a scripting language that hadn’t celebrated more anniversaries than my wife and I had (31 years and going strong, thank you very much. My marriage, I mean, not Perl.). So, I took a good, hard look in the mirror and admitted to myself it was finally time to hunker down and write some code with PowerShell.


Jokes aside, for a project where I knew I’d be interacting with web-based API calls returning JSON-style data, I knew PowerShell was going to give me the least amount of friction, and cause others who used my code in the future the least amount of grief. I also knew I could lean on Kevin Sparenberg, Steven Klassen, and the rest of the THWACK MVP community when (sorry, if) I got stuck.


I’m happy to report it didn’t take me too long to get the core functionality of the script working—connect to the URL, grab all the data, and filter out the piece I want. It would look something like this:

# Query the Pi-Hole summary API, pull out a single statistic, and print it
$pi_data = Invoke-RestMethod -Uri "http://mypihole/admin/api.php"
$pi_stat = $pi_data.domains_being_blocked
Write-Host "Statistic: " $pi_stat

Now I needed not only to pretty this up, but also to add a little bit of error-checking and adapt it to the conventions SAM script components expect. Luckily, my MVP buddies rose to the challenge. It turns out Kevin Sparenberg had already created a framework for SAM PowerShell script components. This gem ensured I followed good programming standards and output the right information at the right time. You can find it here.
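To give a flavor of where this ends up, here’s a minimal sketch of the summary check with basic error handling bolted on. Treat it as a starting point rather than the finished template: the URL is a placeholder, the "Blocked" name is one I made up, and the Message/Statistic output pairs reflect the general convention SAM script components expect.

```powershell
# Sketch only: Pi-Hole summary poll with basic error handling.
# "mypihole" is a placeholder for your own Pi-Hole's address.
try {
    $pi_data = Invoke-RestMethod -Uri "http://mypihole/admin/api.php" -ErrorAction Stop
}
catch {
    # A non-zero exit code tells SAM the component is down
    Write-Host "Message.Blocked: Unable to reach the Pi-Hole API: $($_.Exception.Message)"
    exit 1
}

# SAM script components look for Message.<name> and Statistic.<name> pairs
Write-Host "Message.Blocked: Domains currently on the blocklist"
Write-Host "Statistic.Blocked: $($pi_data.domains_being_blocked)"
exit 0
```

Kevin's framework wraps this same idea in much more robust scaffolding, so lean on it rather than rolling your own.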


As I began to pull my basic script into the SAM template, I immediately ran into a problem: Raspberry Pi doesn’t run PowerShell, but the script was attempting to run there anyway.


After a bit of digging, I realized the problem. First, I was monitoring the Raspberry Pi itself using a SolarWinds agent. When you do that, SAM “presumes” you want to run script components on the target, instead of the polling engine. In most cases, this presumption is true, but not here. The fix is to change the template advanced options to run in agentless mode.


Once that was done, the rest was simple. For those reading this who have experience building script components, the process is obvious. For those of you who don’t have experience, trust me when I say it’s too detailed for this post, but I have plans to dig into the step-by-step of SAM script monitors later!


Looking Ahead

At the time I was playing with this, script monitors were the best way to get API data out of a system. HOWEVER, as you can see on the SAM product roadmap page, one of the top items is a built-in, generic API component.


I think I just found my next side project.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Jim Hansen about the benefits and challenges of edge computing. Ultimately, this new technology requires scrutiny and planning.


Edge computing is here to stay and it’s no wonder. Edge computing provides federal IT pros with a range of advantages they simply don’t have with more traditional computing environments.


First, edge computing brings memory and computing power closer to the source of data, resulting in faster processing times, lower bandwidth requirements, and improved flexibility. It can also be a source of cost savings: because data is processed in real time on the edge devices, it saves computing cycles on cloud servers and further reduces bandwidth needs.


However, edge computing also introduces its share of challenges. Among the greatest are visibility and security, owing to the decentralized nature of edge computing.




As with any technology implementation, start with a strategy. Remember, edge devices are considered agency devices, not cloud devices; therefore, they’re the responsibility of the federal IT staff.

Include compliance and security details in the strategy, as well as configuration management. Create thorough documentation. Standardize wherever possible to enhance consistency and ease manageability.


Visualization and Security


Remember, accounting for all IT assets includes edge-computing devices, not just those in the cloud or on-premises. Be sure to choose a tool that not only monitors remote systems but also provides automated discovery and mapping, so you have a complete understanding of all edge devices.

In fact, consider investing in tools with full-infrastructure visualization, so you can have a complete picture of the entire network at all times. Network, systems, and cloud management and monitoring tools will optimize results and provide protection across the entire distributed environment.


To help strengthen security all the way out to edge devices, be sure all data is encrypted and patch management is part of the security strategy. Strongly consider using automatic push update software to ensure software stays current and vulnerabilities are addressed in a timely manner. This is an absolute requirement for ensuring a secure edge environment, as is an advanced Security Information and Event Management (SIEM) tool to ensure compliance while mitigating potential threats.


A SIEM tool will also assist with continuous monitoring, which helps federal IT pros maintain an accurate picture of the agency’s security risk posture, providing near real-time security status. This is particularly critical with edge-computing devices, which can often go unsecured.




Edge computing is distributed by nature and growing in complexity, with more machines, greater management needs, and a larger attack surface.


Luckily, as computing technology has advanced, so has monitoring and visualization technology, helping federal IT pros realize the benefits of edge computing without additional management or monitoring pains.


Find the full article on Government Technology Insider.



If you’ve read my posts for any length of time, you know I sometimes get caught up in side projects. Whether it’s writing an eBook, creating a series of blog posts about custom SolarWinds reports, or figuring out how to make JSON requests in Perl, when my ADD and inspiration team up to conspire against me, I have no choice but to follow. The good news is I usually learn something interesting along the way.


That’s what this series of posts is going to be about—yet another trip down the technical rabbit hole of my distractibility. Specifically, I implemented Pi-Hole on a spare Raspberry Pi at home, and then decided it needed to be monitored.


In the first part of the series (today’s post), I’m going to give some background on what Pi-Hole and the Raspberry Pi are and how they work. In the next installment, I’ll cover how to monitor it using SolarWinds Server & Application Monitor (SAM).


If you’re impatient, you can download all three of the templates I created from the THWACK content exchange. The direct links are here:


Please note these are provided as-is, for educational purposes only. Do not hold the author, the author’s company, or the author’s dog responsible for any hair loss, poor coffee quality, or lingering childhood trauma.


What Is a Raspberry Pi?

For those who haven’t had exposure to these amazing little devices, a Raspberry Pi is a small, almost credit-card-sized (3.5” x 2.25”) full computer on a single board. It has a CPU, onboard memory, GPU, and support hardware for a keyboard, mouse, monitor, and network connection. While most people use the “Raspbian” operating system (a Debian Linux variant), it also supports several other OS options built off variants of Linux, RISC OS, and even Microsoft Windows.


What Is Pi-Hole?

Pi-Hole software makes your home (or work, if your IT group is open-minded enough) network faster and safer by blocking requests to malicious, unsavory, or just plain obnoxious sites. If you’re using Pi-Hole, it’ll be most noticeable when advertisements on a webpage fail to load, like this:


BEFORE: pop-overs and hyperbolic ads.



AFTER: No pop-overs, spam ads blocked


But under the hood, it’s even more significant:


BEFORE: 45 seconds to load



AFTER: 6 seconds to load



Look in the lower-right corner of each of those images. Load time without Pi-Hole was over 45 seconds. With it, the load time was 6 seconds. You may not think there are many of these requests, but your computer is making calls out to these sites all the time. Here are the statistics from my house on a typical day.



The Pi-Hole software was originally built for the Raspberry Pi but has since been extended to run on full computers (or VMs) running Ubuntu, CentOS, Debian, or Fedora, or in Docker containers hosted on those systems. That said, I’m focusing on the original, Raspberry Pi-based version for this post.


What Is an API?

If you’ve already dug into APIs as part of your work, you can probably skip this section. Otherwise, read on!

An Application Programming Interface (API) is a way of getting information out of (or sometimes into) a program without using the normal interface. In the case of Pi-Hole, I could go to the web-based admin page and look at statistics on each screen, but since I want to pull those statistics into my SolarWinds monitoring system, I’m going to need something a bit more straightforward. I want to be able to effectively say directly to Pi-Hole, “How many DNS queries have you blocked so far today?” and have Pi-Hole send back “13,537” without all the other GUI frou-frou.

SHAMELESS PROMOTION: If you find the idea of APIs exciting and intriguing, I should point you toward the SolarWinds Orion Software Developer Kit (SDK)—a full API supporting the language of your choice (yes, even Perl. Trust me. I tried it.). There’s a whole forum on THWACK dedicated to it. Head over there if you want to find out how to add nodes, assign IP addresses, acknowledge alerts, and perform other forms of monitoring wizardry.
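To make that question-and-answer idea concrete before we get into the mechanics, here’s a hypothetical one-liner: mypihole stands in for your own Pi-Hole’s address, and ads_blocked_today is one of the fields the summary API exposes.

```powershell
# Ask Pi-Hole how many queries it has blocked today via the summary API
(Invoke-RestMethod -Uri "http://mypihole/admin/api.php").ads_blocked_today
```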


How Does the Pi-Hole API Work?

If you have Pi-Hole running, you get to the API by going to http://<your pi-hole url>/admin/api.php.

There are two modes for extracting data—summary and authorized. Summary mode is what you get when you hit the URL above. It will look something like this:




If you look at it with a browser capable of formatting JSON data, it looks a little prettier:

Meanwhile, the authorized version is specific to certain data elements and requires a token you get from the Pi-Hole itself. You view the stats by adding ?<the value you want> along with &auth=<your token> to the end of the URL, so to get the topItems data, it would look something like this:


And the result would be:

You get a token by going to the Pi-Hole dashboard, choosing Settings, clicking the “API/Web Interface” tab, and clicking the “Show Token” button. Meanwhile, the values requiring a token are described on the Discourse page for the Pi-Hole API.
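Putting the pieces together, a hypothetical authorized call from PowerShell might look like the sketch below. The token here is made up, and mypihole again stands in for your own device:

```powershell
# Authorized-mode sketch; replace the token with the one from your own
# Pi-Hole (Settings > API/Web Interface > Show Token)
$token = "0123456789abcdef"
$top = Invoke-RestMethod -Uri "http://mypihole/admin/api.php?topItems&auth=$token"

# topItems returns top_queries and top_ads as domain/count pairs
$top.top_queries
$top.top_ads
```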


Until Next Time

That’s it for now. In my next post of the series, I’ll dig deep into building the SAM template. Your homework is to repurpose, dust off, or buy a Raspberry Pi, load it up with Pi-Hole, and get it configured. Then you’ll be ready to try out the next steps when I come back. And if you want to have those templates ready to go, you can download them here:


Perhaps the title of this post should be “To Hyperconverge or Not to Hyperconverge,” since that’s the question at hand. Understanding whether HCI is a good idea requires a hard look at metrics and budget. If your data center has been running a traditional server, storage, and network architecture for a while, it should be easy to gather the metrics on power, space, and cooling. With these metrics, you can compare the total cost of ownership (TCO) of running a traditional architecture versus an HCI architecture.
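As a back-of-the-napkin illustration of the comparison, the sketch below totals up-front hardware plus recurring power, cooling, and space over three years. Every figure is invented; plug in your own numbers:

```powershell
# Hypothetical three-year TCO comparison; all numbers below are made up
$years = 3

# up-front hardware + yearly (power and cooling + floor space)
$traditionalTco = 250000 + $years * (30000 + 20000)
$hciTco         = 300000 + $years * (18000 + 8000)

"Traditional architecture: $traditionalTco"
"HCI architecture:         $hciTco"
```

Even a rough model like this makes the trade-off visible: in this sketch, HCI’s higher up-front cost is weighed against lower recurring power, space, and cooling costs.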


Start by getting an accurate comparison. While having a TCO baseline will help with the comparison, you need to consider a few other items before making a final decision.


Application Workload

Forecasting the application workloads running in your current environment is important when considering HCI over traditional infrastructure. Current workloads alone aren’t a good indicator of what your infrastructure will need down the road. A good rule of thumb is to forecast three years out, which gives you a game plan for upgrading or scaling out your current configuration. If you’re only running a few workloads and they aren’t forecast to outgrow your capacity within three years, you probably don’t need to upgrade to HCI. Re-evaluate HCI in two years, again looking three years ahead.


Time to Scale

Having an accurate three-year workload forecast will help you understand when you need to scale out. If you need to scale out now, I suggest going all in on HCI. Why? Because scaling with HCI is a piece of cake. It doesn’t require a forklift upgrade, and you can scale on demand. With most HCI vendors, you scale out simply by adding a new block or node to your existing HCI deployment. You can add more than one block or node at a time, making the choice to go HCI very attractive. The process of scaling out a traditional infrastructure costs more and takes considerably longer.


Staff Experience

You can’t afford to overlook this area in the decision process. Moving from traditional infrastructure to HCI can present a learning curve for some. In traditional infrastructure, most technologies are separate and require a separate team to manage them. In some cases where security is a big requirement, there’s a separation of duties, in which different admins manage network, compute, and storage. Upgrading to an HCI deployment gives you the benefit of having all components in one. Moving to any new technology involves a learning curve, and this couldn’t be truer with HCI: the interfaces differ, and the way the technologies are managed differs. Some HCI vendors offer a proprietary hypervisor, which will require retraining for teams used to the old industry-standard hypervisor.


Make an Informed Decision

If you’re determined to transition to a new HCI deployment, make sure you consider all the previous items. In too many cases, decision makers attend vendor conferences, get excited about new technology, buy into it, and leave it to their IT staff to figure out. Ensure there’s open communication, and consider staff experience, TCO, and forecasted application workload. If you consider those things, you can feel good about making an informed decision.

In Austin this week for THWACKcamp. I hope you're watching the event and reading this post later in the day. We tried a new format this year; I hope you enjoy what we built.


As always, here are some links I found interesting this week. Enjoy!


GitHub renews controversial $200,000 contract with ICE

“At GitHub, we believe in empowering developers around the world. We also believe in basic human rights, treating people with respect and dignity, and cold, hard, cash.”


NASA has a new airplane. It runs on clean electricity

I hope this technology doesn't take 30 years to come to market.


Revealed: the 20 firms behind a third of all carbon emissions

Maybe we need to work on electric projects for these companies instead.


WeWork expected to cut 500 tech roles

It seems every week there's another company collapsing under the weight of its absurd business model.


Visa, MasterCard, Stripe, and eBay all quit Facebook’s Libra in one day

I don't understand why they were involved to begin with.


Linus Torvalds isn't concerned about Microsoft hijacking Linux

Microsoft is absolutely a different company. It's good to see Linus acknowledge this.


Elizabeth Warren trolls Facebook with 'false' Zuckerberg ad

Here's a thought: maybe don't allow any political ads on Facebook. That way, we don't have to worry about what's real or fake. Of course, that can't happen, because Facebook wants money.


The leaves have turned, adding some extra color to the fire circle.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Mav Turner about how complying with federal frameworks can help improve security. Intruders are using new tactics and threats are increasing, so please take note.


Over the past few years, several critical cybersecurity frameworks have been introduced to help agency IT professionals detect and deter stealthy intruders. These include the Cyber Threat Framework (CTF), the Federal Risk and Authorization Management Program (FedRAMP), and the Continuous Diagnostics and Mitigation (CDM) Program. Let’s take a look at each of these and identify strategies you can employ to support and strengthen these frameworks.


CTF Strategies: Assessment and Intelligence


The CTF is about learning hackers’ patterns and trends. Administrators should strive to gain as much information as possible about their own networks and the known and unknown security threats putting their systems and data at risk.


Begin by establishing a baseline inventory of the systems and applications on the network. This assessment can help establish “normal” network behaviors and patterns. From there, you can better detect if something is amiss—an unauthorized user or device, for example—and raise a flag.


Take time to understand the breadth and depth of the attacks being used by malicious actors to attack unsuspecting users. Online security forums and websites are a good starting point.


FedRAMP Strategies: Patching and Education


FedRAMP is as vital today as it was when it was first introduced nearly a decade ago. FedRAMP provides useful guidance on different factors, but one of the most important is the need for frequent patching. Vendors are required to patch their systems on a routine basis and report those actions to retain their FedRAMP designations.


Beyond patching, FedRAMP also makes a case for continuing education. Administrators are required to do monthly system scans and annual assessments, reviewing system changes and updates. Stay informed about threats and the latest techniques and technologies to combat those threats.


CDM Strategies: Monitoring Activity and Devices


The CDM program asks you to continuously monitor activity, including data at rest and in transit, user behaviors, and more. You must be able to see who’s connected, when they’re connected, and what they’re connected to, and be able to discern deviations from the norm. This requires mechanisms to detect odd usage and irregular behaviors and issue alerts when an unknown or unauthorized device is detected. You must be prepared to respond quickly to these incidents or be able to automatically remediate the problem.


Ideally, administrators should also go beyond simple device monitoring to a more in-depth analysis of device behavior. A simple printer could be used as an information-sharing device. Administrators must be able to detect when something is being used in an unusual way.


Each framework approaches cybersecurity from a slightly different direction, but they all have one thing in common: the need for constant vigilance and complete awareness. Administrators must do whatever it takes to gain complete visibility into their network operations, using all the tools at their disposal to shine a light into every corner and keep intruders out.


Find the full article on GovLoop.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

This post is the first in a short series taking an atypical look at the public cloud and why you may or may not want to leverage it. Part 2 asks the question I'm Still On-Premises, Is That OK? and in Part 3 we explore SaaS in Beyond IaaS, Cloud for the SMB, the Enterprise, and Beyond!


Did you know AWS came about partly because Amazon realized it could achieve cost efficiencies within its own internal infrastructure? Did you also know AWS launched right around the same time the virtualization revolution took off? I believe these two massively disruptive technologies (virtualization and public cloud) launching around the same time caused a lot of people to take the cost savings they immediately recognized from virtualization and transfer the same philosophy to cloud. In fact, this was one of the earliest rationales for moving to the cloud: you'll save money. "Cost" is a broad paradigm, and it's not as simple as saying "if I'm all in on cloud, I'll save money." Today we'll explore some of the cost decisions you'll have to make, whether your plan is to stay on-premises or a move to the cloud is in your future.


Before we go any further, let’s settle on a few definitions. “The cloud” is no more a single entity than “the web” is. Cloud offerings are diverse, but for the sake of these conversations, I’d like to define the three primary types.


  • SaaS (Software as a Service) – You subscribe to and access software that most likely lives elsewhere. Salesforce.com is one well-known example.

  • PaaS (Platform as a Service) – All you do is develop applications and manage data in a PaaS. Everything else is abstracted away and operated by the provider.

  • IaaS (Infrastructure as a Service) – You own and manage everything from the operating system up, and the provider owns the underlying infrastructure. It’s the closest to your traditional on-premises data center, since you manage the server OSs themselves.


For the sake of time, today we’ll primarily focus on the differences between IaaS and on-premises solutions and how you pay for them.


Cash Flow

Does your business operate primarily on an Operational Expense (OpEx) basis? Are significant cash outlays OK within the organization? How you answer questions like these may help you decide right away whether on-premises or cloud is right for you. Nearly all cloud offerings sell on an OpEx model, which you can think of as a subscription, where you pay for what you use on a monthly basis. Traditionally, on-prem infrastructure was sold outright, and hence incurred large initial cash outlays. The prospect of these large capital expenditures can be undesirable or even scary for many organizations, particularly young ones needing to delicately balance their cash flow. The cash decision is less of a differentiator than it once was: some cloud providers ask for long-term commitments to get the best rate, and traditional infrastructure vendors are now more flexible with their financing options. Nonetheless, the CapEx vs. OpEx question is one you’ll want to account for in your organization.


Total Cost of Ownership (TCO)

Coming up with a TCO should be simple, right? The months I spent building a lift-and-shift IaaS cost comparison model would disagree. There’s a lot in common between what’s required to run an IaaS solution and an on-prem solution. With both, you’ll need to back up and monitor your solution. You’ll also need to protect it from the bad guys. Let’s consider these items a wash for now. You also pay for your operating system either way, so what’s primarily left to differentiate IaaS from on-premises is the infrastructure to run your stuff. If you’re on-premises, you’ll need to account not just for the hardware and virtualization platform, but for everything down to your environmental controls (power, cooling, fire suppression, etc.), the physical space your infrastructure occupies, and all the odds and ends (racks, wires) that make the blinking lights go. Yes, a colocation facility can help with many of these, but that’s a conversation for another day.


On the other side of the coin, with an IaaS solution, you can sometimes get away with paying for just what you use, but you’ll be paying for what you use in perpetuity. If you’re the type of car buyer who drives a car until its wheels fall off, an OpEx model may not be attractive to you. Further, if you keep your infrastructure for long periods of time, on-premises will likely save you money, as you can depreciate the gear over a longer time. Lastly, when you have a system storing data in the cloud, you need to consider egress costs, where it’s free or cheap to get your data into the cloud but potentially expensive to get your data back out.


So, what’s the verdict when it comes to TCO? Remember TCO is measured over time. Cloud can be a significant cost savings, especially if your organization has a flexible workload, where you don’t need to outright buy the capacity and performance to handle peak workloads. However, if your workloads and environment are stable and predictable, depreciating the costs of on-premises equipment over a longer time period may make fiscal sense.
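To illustrate how time drives the verdict, here's a back-of-the-envelope sketch of the break-even math; every figure in it is hypothetical:

```python
# Back-of-the-envelope TCO comparison; every figure here is hypothetical.
capex = 120_000              # one-time on-prem hardware purchase
onprem_monthly_opex = 1_500  # power, cooling, space, support per month
cloud_monthly = 4_500        # equivalent IaaS bill per month

def cumulative_onprem(months):
    return capex + onprem_monthly_opex * months

def cumulative_cloud(months):
    return cloud_monthly * months

# Find the month where staying on-premises becomes cheaper overall.
breakeven = next(m for m in range(1, 121)
                 if cumulative_onprem(m) < cumulative_cloud(m))
print(f"On-prem breaks even at month {breakeven}")
```

With these made-up numbers, on-prem wins after about three and a half years, which is exactly why the length of time you keep your gear matters so much to the answer.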


Opportunity and People Costs

Spinning a server up or down in the cloud, and dynamically changing the infrastructure supporting it, is a massive selling point for those who want to go faster and push the envelope. What if your business or industry doesn’t operate this way? You need to decide if it’s worth taking time away from your core competencies to embark on a cloud endeavor.


As it pertains to people costs, you need to understand your business, your people, and what makes them tick. We’ll continue our exploration of these topics in the second part of this atypical look at the cloud: I’m Still On-Premises, Is That OK?

During this series, we’ve looked at virtualization’s past, the challenges it has created, and how its evolution will allow it to continue to play a part in infrastructure delivery in the future. In this final part, I’d like to draw these strands together and share how I believe the concept of virtualization will continue to be a core part of infrastructure delivery, even if it’s a little different from what we’re used to.


Will Cloud Be the End of Virtualization?

When part one of this series was published, one question caught my attention: “Will public cloud kill virtualization?”


I hadn’t considered this for this series, but it intrigued me nonetheless.


It caught my attention not because I believe cloud will be its end, but because it’s played a significant part in redefining the way we think about infrastructure delivery, and consequently, how we need to think about virtualization.


Redefining Infrastructure

This redefinition is not just a technical one; it’s also a fundamental shift in focus. When we discuss infrastructure in a cloud environment, we don’t think about vendor preferences, hardware configs, or individual component configs. We focus on the service, from virtual machine to AI and all in between. Our focus is the outcome—what the service delivers—not the technicalities of how we deliver it.


I believe this change in expectation drives the future evolution of virtualization.


Virtualizing All the Things

This future is based on virtualizing increasing elements of our infrastructure stack, not just servers, but networking and storage. It's about abstracting the capabilities of each of these elements from specific custom hardware and allowing them to be deployed in any compatible environment.


Beyond this, making more of our infrastructure software-based allows us to more easily automate deployment, deliver our infrastructure as code, and provide the flexibility and portability a modern enterprise demands.


Abstracting Even Further

However, this isn’t where the evolution of virtualization stops. We’re already seeing the development of its next phase.


New models like containerization and serverless functions are abstracting away not only the reliance on hardware but also on operating systems. They’re designed to be ephemeral: created or called on demand, delivering their outcome before disappearing, to be recreated whenever and wherever they’re needed rather than remaining in our infrastructure forever and creating an endless sprawl of virtual resources.


Virtualization Next

Virtualizing infrastructure has, over the last 20 years, transformed the way we deliver our IT systems and has allowed us to deliver models focused on outcomes, provide flexibility, and quickly meet our needs.


At the start of this series, we asked whether virtualization has a future. As we start to rethink not only how we deliver infrastructure but also how we architect it, does virtualization still have a place?


The new architectures we’re building are inspired by the large cloud providers: built at scale, deployed at speed anywhere we want, without much consideration of the underlying infrastructure, and, where appropriate, existing only as long as needed.


Virtualization remains at the very core of these new infrastructures, whether as software-defined, containers, or serverless. These are all evolutions of the virtualization concept and while it continues to evolve, it will remain relevant for some time to come.


I hope you’ve enjoyed this series. Thank you for your comments and getting involved. Hopefully it’s provided you with some new ideas around how you can use virtualization in your infrastructure now and in the future.

This blog series has been all about taking a big step back and reviewing your ecosystem. What do you need to achieve? What are the organization’s goals and mandates? What assets are in play? Are best practices and industry recommendations in place? Am I making the best use of existing infrastructure? The more questions asked and answered, the more likely you are to build something securable without ignoring business needs or compromising usability. You’ve also created a baseline defining a normal working environment.


There’s no such thing as a 100% protected network. Threats evolve daily. If you can’t block every attack, the next best thing is detecting when something abnormal is occurring. Anomaly detection requires the deployment of methodologies beyond the capabilities of the event logs generated by the elements on the network. Collecting information about network events has long been essential to providing a record of activities related to accounting, billing, compliance, SLAs, forensics, and other requirements. Vendors have provided data in standardized forms such as Syslog, as well as device-specific formats. These outputs are then analyzed to provide a starting point for business-related planning, security breach identification and remediation, and many other outcomes.


In this blog, I’ll review different analysis methods you can use to detect threats and performance issues based on the collection of event log data from any or all systems in the ecosystem.


Passive/deterministic traffic analysis: Rule- and signature-based detection that continually monitors traffic for protocol anomalies, known threats, and known behaviors. Examples include tunneled protocols such as IRC commands within ICMP traffic, the use of non-standard ports and protocol field values, and inspecting application-layer traffic to observe unique application attributes and behaviors to identify operating system platforms and their potential vulnerabilities.
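As a rough illustration, signature-based detection boils down to matching traffic against known patterns. The rules, field names, and payloads below are invented for the sketch:

```python
# Minimal sketch of rule/signature-based detection, assuming traffic is
# represented as simple dicts; the signatures are illustrative only.
RULES = [
    ("IRC-over-ICMP tunnel", lambda p: p["proto"] == "icmp"
        and any(cmd in p["payload"] for cmd in (b"NICK ", b"JOIN #"))),
    ("HTTP on non-standard port", lambda p: p["payload"].startswith(b"GET ")
        and p["dst_port"] not in (80, 8080)),
]

def match_rules(packet):
    """Return the names of every rule the packet trips."""
    return [name for name, rule in RULES if rule(packet)]

suspect = {"proto": "icmp", "dst_port": 0, "payload": b"NICK botnet123"}
print(match_rules(suspect))  # flags the ICMP tunnel signature
```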


Correlating threat information from intrusion prevention systems and firewalls with actual user identities from identity management systems allows security professionals to identify breaches of policy and fraudulent activity more accurately within the internal network.


Traffic flow patterns and behavioral analysis: Capture and analysis using techniques based on flow data. Although some formats of flow data are specific to one vendor or another, most include traffic attributes with information about what systems are communicating, where the communications are coming from and headed to, and in what direction the traffic is moving. Although full-packet inspection devices are a critical part of the security infrastructure, they’re not designed to monitor all traffic between all hosts communicating within the network interior. Behavior-based analysis, as provided by flow analysis systems, is particularly useful for detecting traffic patterns associated with malware.


Flow analysis is also useful for specialized devices like multifunction printers, point-of-sale (POS) terminals, automated teller machines (ATMs), and other Internet of Things (IoT) devices. These systems rarely accommodate endpoint security agents, so techniques are needed to compare actions to predictable patterns of communication. Encrypted communications are yet another application for flow and behavioral analysis. Increasingly, command-and-control traffic between a malicious server and a compromised endpoint is encrypted to avoid detection. Behavioral analysis can be used for detecting threats based on the characteristics of communications, not the contents. For example, an internal host is baselined as usually communicating only with internal servers, but it suddenly begins communicating with an external server and transferring large amounts of data.
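That last example can be sketched in a few lines. The hosts, baseline peer sets, and byte counts below are all hypothetical:

```python
# Hedged sketch: flag a host whose flow pattern deviates from its baseline.
# Hosts, peers, and byte counts are invented for illustration.
baseline_peers = {"10.0.0.5": {"10.0.0.10", "10.0.0.11"}}  # known internal servers
flows = [
    {"src": "10.0.0.5", "dst": "10.0.0.10", "bytes": 12_000},        # normal
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 850_000_000},  # new external peer
]

alerts = []
for f in flows:
    known = baseline_peers.get(f["src"], set())
    # A never-before-seen peer plus a large transfer is worth an alert.
    if f["dst"] not in known and f["bytes"] > 100_000_000:
        alerts.append(f"{f['src']} moved {f['bytes']} bytes to new peer {f['dst']}")
print(alerts)
```

Note the check never inspects payload contents, which is why this approach still works on encrypted traffic.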


Network Performance Data: This data is most often used for performance and uptime monitoring and maintenance, but it can also be leveraged for security purposes. For example, availability of Voice over IP (VoIP) networks is critical, because any interruptions may cripple telephone service in a business. CPU and system resource pressure may indicate a DoS attack.


Statistical Analysis and Machine Learning: Allows us to determine possible anomalies based on how threats are predicted to be instantiated. This involves consuming and analyzing large volumes of data using specialized systems and applications for predictive analytics, data mining, forecasting, and optimization. For example, a statistics-based method might detect anomalous behavior, such as higher-than-normal traffic between a server and a desktop. This could indicate a suspicious data dump. A machine learning-based classifier might detect patterns of traffic previously seen with malware.
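A simple statistics-based check like the higher-than-normal-traffic example might look like this sketch, with invented numbers:

```python
import statistics

# Statistics-based sketch: flag traffic volumes more than three standard
# deviations above a host's historical mean. All numbers are invented.
history = [48, 52, 50, 47, 53, 49, 51, 50]  # MB/hour, normal baseline
observed = 240                               # today's server-to-desktop volume

mean = statistics.mean(history)
stdev = statistics.stdev(history)
z = (observed - mean) / stdev
anomalous = z > 3
print(f"z-score {z:.1f}, anomalous={anomalous}")
```

Production systems use far richer models than a z-score, but the principle is the same: learn what normal looks like, then measure how far today is from it.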


Deriving correlated, high-fidelity outputs from large amounts of event data has created the need for different methods of analysis. The large number of solutions and vendors in the SIEM, MSSP, and MDR spaces indicates how important event ingestion and correlation have become in the evolving threat landscape, as organizations seek a full view of their networks from a monitoring and attack mitigation standpoint.


Hopefully this blog series has been a catalyst for discussions and reviews. Many of you face challenges trying to get management to understand the need for formal reviews and documentation. Presenting data on real-world breaches and their ramifications may be the best way to get attention, as is reminding decision makers of their biggest enemy: complacency.

Can you believe THWACKcamp is only a week away?! Behind the scenes, we start working on THWACKcamp in March, maybe even earlier. I really hope you like what we have in store for you this year!


As always, here are some links I found interesting this week. Enjoy!


Florida man arrested for cutting the brakes on over 100 electric scooters

As if these scooters weren't already a nuisance, now we have to worry about whether they've been tampered with before you use one. It's time we push back on these things until the service providers can demonstrate a reasonable amount of safety.


Groundbreaking blood test could detect over 20 types of cancer

At first I thought this was an old post for Theranos, but it seems recent, and from an accredited hospital. As nice as it would be to have better screening, it would be nicer to have better treatments.


SQL queries don't start with SELECT

Because I know some of y'all write SQL every now and then, and I want you to have a refresher on how the engine interprets your SELECT statement to return physical data from disk.
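As a quick refresher, the engine evaluates clauses in a logical order that differs from how you write them: FROM, then WHERE, GROUP BY, HAVING, SELECT, ORDER BY, and finally LIMIT. Here's a toy simulation of that order; the table and data are made up:

```python
from itertools import groupby

# Sketch of the logical order a SQL engine applies to a SELECT:
# FROM -> WHERE -> GROUP BY -> HAVING -> SELECT -> ORDER BY -> LIMIT.
# Hypothetical equivalent query:
#   SELECT city, COUNT(*) AS n FROM orders WHERE amount > 10
#   GROUP BY city HAVING COUNT(*) >= 2 ORDER BY n DESC LIMIT 1;
orders = [  # FROM
    {"city": "Austin", "amount": 20}, {"city": "Austin", "amount": 15},
    {"city": "Boston", "amount": 5},  {"city": "Boston", "amount": 30},
]
rows = [r for r in orders if r["amount"] > 10]                 # WHERE
rows.sort(key=lambda r: r["city"])                             # groupby needs sorted input
groups = [(k, len(list(g)))
          for k, g in groupby(rows, key=lambda r: r["city"])]  # GROUP BY
groups = [(city, n) for city, n in groups if n >= 2]           # HAVING
result = [{"city": city, "n": n} for city, n in groups]        # SELECT
result.sort(key=lambda r: r["n"], reverse=True)                # ORDER BY
result = result[:1]                                            # LIMIT
print(result)
```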


Facebook exempts political ads from ban on making false claims

This is fine. What's the worst that could happen?


Data breaches now cost companies an average of $1.41 million

But only half that much for companies with good security practices in place.


Decades-Old Code Is Putting Millions of Critical Devices at Risk

Everything is awful.


How Two Kentucky Farmers Became Kings Of Croquet, The Sport That Never Wanted Them

A bit long, but worth the time. I hope you enjoy the story as much as I did.



Even as the weather turns cold, we continue to make time outside in the fire circle.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article from my colleague Jim Hansen about ways to reduce insider threats. It comes down to training and automation.


A recent federal cybersecurity survey by SolarWinds found federal IT professionals feel threats posed by careless or malicious insiders or foreign governments are at an all-time high. Worse, hackers aren’t necessarily doing technical gymnastics to navigate through firewalls or network defenses. Instead, they’re favoring some particularly vulnerable targets: agency employees.


Who hasn’t worked a 12-hour shift and, bleary-eyed at the end of a long night, accidentally clicked on an email from a suspicious source? Which administrator hasn’t forgotten to change user authorization protocols after an employee leaves an agency? A recent study found 47% of business leaders claimed human error caused data breaches within their organizations.


The “People Problem”


Phishing attacks and stealing passwords through a keylogger attack are some of the more common threats. Hackers have also been known to simply guess a user’s password or log in to a network with former employees’ old credentials if the administrator neglects to change their authorization.


This “people problem” has grown so big that attempting to address it through manual security processes has become nearly impossible. Instead, agency IT professionals should automate their security protocols so their systems look for suspicious user patterns and activities that would go undetected by a human network administrator.


Targeting Security at the User Level


Automating access rights management and user activity monitoring brings security down to the level of the individual user.


It can be difficult to ascertain who has or should have access rights to applications or data, particularly in a large Department of Defense agency. Reporting and auditing of access rights can be an onerous task and can potentially lead to human error.


Automating access rights management can take a burden off managers while improving their security postures. Managers can leverage the system to assign user authentications and permissions and analyze and enforce those rights. Automated access rights management reinforces a zero-trust mentality for better security while ensuring the right people have access to the right data.


User activity monitoring should be considered an essential adjunct to access rights management. Administrators must know who’s using their networks and what they’re doing while there. Managers can automate user tracking and receive notifications when something suspicious takes place. The system can look for anomalous behavioral patterns that may indicate a user’s credentials have been compromised or if unauthorized data has been accessed.


Monitoring the sites users visit is also important. When someone visits a suspicious website, it’ll show up in the user’s log report. High-risk staff should be watched more closely.


Active Response Management


Some suspicious activity is even harder to detect. The cybercriminal on the other end of the server could be gathering a treasure trove of data, or gaining the ability to compromise the defense network, and no one would know.


Employing a system designed to specifically look for this can head off the threat. The system can automatically block the IP address to effectively kick the attacker out, at least until they discover another workaround.
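A minimal sketch of that kind of automated response, with invented thresholds and addresses:

```python
from collections import Counter

# Hedged sketch of automated response: block a source IP after repeated
# failed logins. The threshold and addresses are illustrative only.
FAIL_THRESHOLD = 5
failed_logins = Counter()
blocked = set()

def record_failure(ip):
    failed_logins[ip] += 1
    if failed_logins[ip] >= FAIL_THRESHOLD and ip not in blocked:
        blocked.add(ip)  # in practice: push a firewall or ACL rule here
        print(f"Blocking {ip} after {failed_logins[ip]} failures")

for _ in range(6):
    record_failure("198.51.100.7")
print("198.51.100.7" in blocked)
```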


Staying Ahead in the Arms Race


Unfortunately, hackers are industrious and indefatigable. The good news is we now know hackers are targeting employees first. Administrators can build automated defenses around this knowledge to stay ahead.


Find the full article on Fifth Domain.



vSphere, which many consider to be VMware’s flagship product, is a virtualization platform comprising the vCenter management software and the ESXi hypervisor. vSphere is available in three different licenses: vSphere Standard, vSphere Enterprise Plus, and vSphere Platinum. Each comes with a different cost and set of features. The current version of vSphere is 6.7, which includes some of the following components.


Have a spare physical server lying around that can be repurposed? Voila, you now have an ESXi Type 1 Hypervisor. This type of hypervisor runs directly on a physical server and doesn’t need an operating system. This is a perfect use case if you have an older physical server that meets the minimum requirements. The disadvantages of this setup include higher costs, the need for rack space, higher power consumption, and lack of mobility.


What if you don’t have a physical server at your disposal? Your alternative is to run ESXi on top of a Type 2 hypervisor, which runs on a host operating system instead of directly on hardware. A great example is my test lab, which consists of a laptop meeting the minimum requirements. The laptop runs Windows 10 Pro as its host operating system, and my lab runs as a virtual image inside VMware Workstation. The advantages of this setup include minimal costs, lower power consumption, and mobility.


To provide some perspective, the laptop specifications are listed below:

  • Lenovo ThinkPad with Windows 10 Pro as the host operating system
  • Three hard drives: one 140GB drive as the primary partition and two 465GB drives acting as datastores (DS1 and DS2, respectively); 32GB RAM
  • One VMware ESXi Host (v6.7, build number 13006603)
  • Four virtual machines (thin provisioned)
    • Linux Cinnamon 19.1 (10GB hard drive, 2GB RAM, one vCPU)
    • Windows 10 Pro 1903 (50GB hard drive, 8GB RAM, two vCPUs)
    • Windows Server 2012 R2 (60GB hard drive, 8GB RAM, two vCPUs)
    • Pi-Hole (20GB hard drive, 1GB RAM, one vCPU)


With the introduction of vSphere 6.7, VMware delivered significant improvements over its predecessor, vSphere 6.5. Some of these improvements and innovations include:

  • Simple and efficient management at scale
  • Two times faster than v6.5
  • Three times less memory consumption
  • New APIs improve deployment and management of the vCenter Appliance
  • Single reboot and vSphere Quick Boot reduce upgrade and patching times
  • Comprehensive built-in security for the hypervisor and guest OS also secures data across the hybrid cloud
  • Integrates with vSAN, NSX, and vRealize Suite
  • Supports mission-critical applications, big data, artificial intelligence, and machine learning
  • Run any workload across hybrid, public, and private clouds
  • Seamless hybrid cloud experience, with a single pane of glass to manage multiple vSphere environments on different versions between an on-premises data center and any public cloud vCenter, like VMware Cloud on AWS


If you’re interested in learning more about vSphere, VMware provides an array of options to choose from, including training, certifications, and hands-on labs.

So far in this virtualization series, we’ve discussed its history, the pre-eminence of server virtualization, the issues it has created, and how changing demands placed on our infrastructure are leading us to consider virtualizing all elements of our stack and move to a more software-defined infrastructure model.


In this post, we explore the growing importance of infrastructure as code and the part virtualization of our entire stack plays in delivering it.


Why Infrastructure as Code (IaC)

According to Wikipedia, IaC is


“the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources.”


Why is this important?


Evolving Infrastructure Delivery

Traditional approaches required us to individually acquire and install custom hardware, then configure, integrate, and commission all of its elements before presenting our new environment. This both introduced delays and opened up risk.


As enterprises demand more flexible, portable, and responsive infrastructure, these approaches are no longer acceptable. Therefore, the move to a more software-defined, virtual hardware stack is crucial to removing these impediments and meeting the needs of our modern enterprise. IaC is at the heart of this change.


The IaC Benefit

If you need the ability to build, change, or extend infrastructure at speed, at scale, and with consistency, all while following your best practices, then Infrastructure as Code is worthy of your consideration.


What does this have to do with virtualization? If we want to deploy as code, then our infrastructure must, in its own way, be code. Virtualizing is how we software-define it and gain the ability to deploy anywhere compatible infrastructure exists.


IaC in Action

How does IaC work? Public cloud perhaps provides us with the best examples, as we can automate cloud infrastructure deployment at scale and with consistency, unaffected by concerns of underlying hardware infrastructure.


If I want to create 100 virtual Windows desktops, I can, via an IaC template, call a predefined image, deploy it onto a predefined VM, connect it to a predefined network, and automate the delivery of 100 identical desktops into the infrastructure.
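A toy sketch of the template idea, where every name and size is made up: one definition expands into 100 identical desktop specs.

```python
# Illustrative-only sketch of template-driven deployment: one definition,
# rendered into 100 identical desktop specs. Names and sizes are invented.
TEMPLATE = {"image": "win10-gold", "vcpus": 2, "ram_gb": 8,
            "network": "desktops-vlan"}

def render(template, count):
    """Expand one template into `count` identical VM specs."""
    return [{**template, "name": f"desktop-{i:03d}"}
            for i in range(1, count + 1)]

desktops = render(TEMPLATE, 100)
print(len(desktops), desktops[0]["name"], desktops[0]["image"])
```

Real IaC tools declare this in a template file rather than a script, but the payoff is the same: rerun the template anywhere compatible and get the identical result every time.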


Importantly, the template means I can recreate the infrastructure whenever I like and, regardless of who deploys it, know it will be consistent, delivering the same experience every time.


The real power of this approach doesn’t come from a template working in only one place. The increasing number of standardized approaches will allow us to deploy the same template in multiple locations. When our template can be deployed across multiple environments, in our data center, in the public cloud, or in a mix of both, it provides the flexibility and portability crucial to modern infrastructure.


As our enterprises demand quick, consistent response, at scale, across hybrid environments, standardizing our deployment models becomes crucial. This can only be done if we standardize our infrastructure elements, which ties back to the importance of virtualization not only in delivering our contemporary infrastructure, but also in the way we’ll deliver infrastructure in the future.


We started this series asking the question of whether virtualization would remain relevant in our future infrastructure. In the final part, we’ll look to summarize how future virtualization will look and why its concepts will remain a core part of our infrastructure.

Back from Austin and THWACKcamp filming. Can you believe the event is only two weeks away? I'm excited for what we have in store for you this year. It's a lot of work to pull TC together, but the finished product always makes me smile. Wearing the bee suit helps, too.


As always, here are some links I found interesting this week. Enjoy!


15,000 private webcams left open to snooping, no password required

The manufacturers of these devices should be held accountable. Until actions are taken against the makers, we will continue to have incidents like this.


Microsoft: Customers are entitled to know about federal data requests

Great moment for Microsoft here, stepping forward as an advocate for customer privacy rights.


Crown Sterling Claims to Factor RSA Keylengths First Factored Twenty Years Ago

A silly marketing stunt, and I have no idea why they would do this except the idea that there's no such thing as bad publicity. But I think they're hurting their reputation with stunts like this one.


Doordash Discloses Massive Data Breach That Affected 4.9 Million People

Interesting that new users are not affected. Makes me think perhaps the hackers got hold of an older database, maybe a backup.


The simplest explanation of machine learning you’ll ever read

Next time you're in a meeting and someone brings the machine learning hype, just ask yourself, "Do we need a label maker?"


IBM will soon launch a 53-qubit quantum computer

I'm excited for the possibilities brought about by quantum computing, and cautiously optimistic this won't result in Skynet.


Banks, Arbitrary Password Restrictions and Why They Don't Matter

Great summary of the security issue faced by online banking.


If you ever get the chance to have a beef rib at Terry Black's in Austin, you will not be disappointed.

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering


Here’s an interesting article by my colleague Brandon Shopp about emerging technology adoption in the federal space. It’s easy to get caught up in the hype, but our IT Trends survey results seem more grounded.


Exciting new technologies like artificial intelligence, blockchain, and the Internet of Things are dominating news cycles, but are they dominating federal IT environments? Maybe not.


According to the latest SolarWinds® IT Trends Report, emerging technology may be more of a pain than a benefit. Public sector IT managers in North America, the U.K., and Germany said they believe they’re currently ill-equipped to manage AI and blockchain with their current skillsets. Meanwhile, these same managers believe they need more training on the cloud and hybrid IT, established technologies we all seem to take for granted.


What’s going on here?


For many agencies, AI and blockchain are not yet considered essential. Agencies aren’t heavily investing in AI training, and managers don’t have the time or inclination to teach themselves about the tools.


On the other hand, survey respondents said they expected cloud and hybrid IT to be the most important technologies to learn about over the next three to five years. They also noted developing skills to manage hybrid IT environments has been one of their top priorities over the past 12 months. This indicates the importance of the cloud and hybrid IT for their organizations.


Managers want to learn, but it's hard to do when they're also trying to migrate legacy applications to the cloud. The migration process takes time, and juggling new projects while also trying to "keep the lights on" will always be a challenge.


Still, respondents listed “technology innovation” as their top career development goal over the next three to five years. How can they achieve this goal with so many obstacles seemingly in their way?


Leverage Third-Party Contractors With Specific Expertise


Third-party contractors aren’t just for implementing technology roadmaps; they’re also excellent sources of knowledge. What better way for an agency’s IT team to learn than from a skilled contractor working on-site? A contractor can show the team how it’s done and equip agency staff with the necessary knowledge.


Encourage Participation in User Groups and Online Forums


There are several free government-centric user groups where IT managers can find answers to questions and hone their skills. They’re great resources for problem-solving and learning about new technologies.


There are also online forums and communities professionals can leverage. From technology-specific communities to internal government message boards, there’s a strong argument for interacting with like-minded individuals willing to help each other out.


Attend Trade Shows and Industry Events


Trade shows and industry events can be exceptional resources for learning about what’s next. Everyone from less experienced employees to more seasoned professionals can benefit from sitting in on workshops, listening to presentations, or simply wandering the show floor. Here’s a great one coming up this year… Blockchain Expo North America.


Regardless of how it’s done, agencies and managers must invest in learning about emerging and evolving technologies because AI, blockchain, and the cloud will affect the careers of public sector IT professionals for the next several years.


Find the full article on Government Computer News.


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
