Product Blog

10 Posts authored by: KMSigma Administrator
As of Orion Core version 2019.4, SolarWinds Service Desk has native integration with the Orion Platform.

When we launched SolarWinds® Service Desk (SWSD), I couldn’t wait to get my hands on it. I was very excited to see a new solution to handle incident management, asset management, an internal knowledge base, problem management, and an employee self-service portal. There’s so much in this new product to unpack that I needed to figure out where to start. Thankfully, there was already an excellent document I could read introducing everyone to the solution.


For the past three years, I’ve been getting deeper and deeper into leveraging various APIs to do my bidding. This lets me go nuts on my keyboard and automate out as many repeatable functions as possible. No, I’m not breaking up with my mouse. We had a healthy discussion, my mouse and I, and he’s fine with the situation. Really. What was I talking about? Oh yeah, APIs!


One of the things I absolutely love about working with APIs (and scripting languages as well) is there’s no one way to do something. If you can think it, you can probably do it. Most RESTful APIs allow you to work with whatever language you prefer. You can use the curl executable, Perl (tagging Leon here), PowerShell, or nearly anything else. PowerShell is my personal preference, so I’m doing my scripting with it. But more on those details later.


You’ve seen me write and talk about using the SolarWinds® Orion® API to help automate your monitoring infrastructure. I’ve even gotten some of my friends in on the trend. But, the launch of SWSD opened a brand-new API for me to explore. I started where I always do with something new: by reading the manual. SolarWinds Service Desk has extensive documentation about using the API. There’s so much there for me to explore, but I had to limit myself. In trying to pick a place to start, I thought about my past.


SolarWinds has always been in the business of helping IT professionals do their jobs better. Many of us technology professionals, like me, started our careers working on a help desk. Based on everything SWSD offers, I limited myself to the Incidents Management area. Then I just had to think about how I would leverage this type of solution in some of my previous roles.


As a help desk supervisor who went on to be a monitoring engineer, I thought about how great it would be to get tickets automatically created based on an alert. I could talk all day about what qualifies for an alert (I have) and what’s best to include in an alert message (that, too), but the biggest thing to strive towards is some level of tracking. The most common tracking method for alerts has been email notifications. This is the default for most people, and 90% of the time it’s fine. But what about the times when email is the problem? You need another way to get your incidents reported and tracked.


Like scripting languages, the Orion alerting engine allows for multiple ways to handle alert logic—not just for the trigger conditions, but also for the actions when the trigger occurs. One of those mechanisms is to execute a program. On the surface, this may sound boring, but not to me and other keyboard junkies. This is a great way to leverage some scripting and the SWSD API to do the work for us.


First things first, we need to decide how to handle the calls to the API. The examples provided in the API documentation use the curl program to do the work, but I’m not in love with the insanely long command lines required to get it to work. But since this is a RESTful API, I should be able to use my preferred scripting language, PowerShell. (I told you I’d get back to it, didn’t I?)


Let’s assemble what you need to get started. First you need your authentication. If you’re an administrator in SWSD, you can go to Setup, Users & Access, and then select yourself (or a service account you want to use). Inside the profile, you’ll find the JSON web token.



This is how you authenticate with the SWSD API. The web token is a single line of text. In the web display, it’s been wrapped for visual convenience. Copy that line of text and stash it somewhere safe. This is basically the API version of “you.” Protect it as you would any other credentials. In a production system, I’d have it set up to use the service account for my Orion installation.
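Since the token is effectively a credential, it's worth keeping it out of your script files entirely. As a rough sketch of one option (the file path is my own choice, not anything SWSD prescribes), you can use PowerShell's DPAPI-backed secure strings on Windows, which tie the encrypted file to your user account and machine:

```powershell
# One-time step: encrypt the token to a file (DPAPI binds it to this user + machine)
Read-Host -Prompt "Paste your JSON web token" -AsSecureString |
    ConvertFrom-SecureString |
    Out-File -FilePath "C:\Scripts\SwsdToken.txt"

# In your scripts: read it back and recover the plain text for the header
$SecureToken  = Get-Content -Path "C:\Scripts\SwsdToken.txt" | ConvertTo-SecureString
$JsonWebToken = [System.Net.NetworkCredential]::new("", $SecureToken).Password
```

This keeps the token readable only by the account that stored it, which fits nicely with running the alert action as a dedicated service account.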


API Test

For the API call, we need to send over some header information. Specifically, we need to send over the authorization, the version of the API we’ll be using, and the content type we’ll be sending. I found these details in the API documentation for Incidents. To start things off, I did a quick test to see if I could enumerate all the existing incidents.


I’m trying to get more comfortable with JSON, so I’m using it instead of XML. In PowerShell, the HTTP header construction looks like this:

$JsonWebToken = "Your token goes here. You don't get to see mine."

$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken"
              "Accept"                   = "application/vnd.samanage.v2.1+json"
              "Content-Type"             = "application/json" }


Basically, we’re saying (in order): this is me (auth), I’d like to use this version of the API with JSON (accept), and I’m sending over JSON as the request itself (content-type).


This block of headers is your pass to speak with the API. I’m testing this from the United States, so I’ll use the US base URI listed in the documentation. There’s a separate base URI specifically for EU customers; if you are in the EU, that’s the one you should be using.

To list out the incidents, we make an HTTP GET call to the “incidents” URI as specified in the documentation. I saved this as a variable so I wouldn’t have copy/paste failures later.


$URI = ""


Then to get the list of all incidents, I can just invoke the REST method.

Invoke-RestMethod -Method Get -Headers $Headers -Uri $URI



Excellent! I can talk to the API and get some information back. This means I’m authenticating correctly and getting the list of incidents back. Time to move on.

Creating a Test Incident

To create an incident, I technically only need three fields: the name (of the incident), the requester, and the title. I’ve seen this called the payload, the body, or the contents. To stay on the same page with the PowerShell parameters, I’ll refer to it as the body. Using it, I built a very small JSON document to see if this would work with the script I’ve started developing. The beauty of it is I can reuse the header I already built. I’ve put the JSON in a string surrounded by @" and "@. In PowerShell this is called a here-string, and there are many things you can do with it.

$TestBody = @"
{
  "incident": {
    "name":      "Testing Incident - Safe to Close with no notes",
    "priority":  "Critical",
    "requester": { "email" : "" }
  }
}
"@

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $TestBody


When I run it, I get back all kinds of information about the incident I just created.

But to be really, doubly sure, we should check the web console.

There it is. I can create an incident with my script.


So, let’s build this into an actual alert script to trigger.


Side note: When I “resolved” this ticket, I got an email asking if I was happy with my support. Just one more great feature of an incident management solution.

Building the new SolarWinds Service Desk Script

For my alert, I’m going with a scenario where email is probably not the best alert avenue: your email server is having a problem. This is a classic downstream failure. We could create an email alert, but since the email server is the source, the technician would never get the message.



The above logic looks only for nodes with names containing “EXMBX” (Exchange Mailbox servers) whose status is not Up (Down, Critical, or Warning, for example).


Now that we have the alert trigger, we need to create the action of running a script.


For a script to be executed by the Orion alerting engine, it should “live” on the Orion server. Personally, I put them all in a “Scripts” folder in the root of the C: drive. Therefore, the full path to my script is “C:\Scripts\New-SwsdIncident.ps1”


I also need to tweak the script slightly to allow for command line parameters (how I send the node and alert details). If I don’t do this, then the exact same payload will be sent every time this alert triggers. For this example, I’m just sticking with four parameters I want to pass. If you want more, feel free to tweak them as you see fit.


Within a PowerShell file, you access command line parameters via the $args variable, with the first argument being $args[0], the next being $args[1], and so on. Using those parameters, I know I want the name of the alert, the details on the alert, the IP of the node, and the name of the node. Here’s what my script looks like:
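Putting the pieces together, a minimal version of New-SwsdIncident.ps1 might look like the sketch below. The argument order assumes the order used in the alert action's command line, and the extra JSON fields (priority, description) are my own illustration, not required by the API:

```powershell
# New-SwsdIncident.ps1 (sketch)
# Arguments, in the order the alert action passes them:
#   $args[0] = status description, $args[1] = node caption,
#   $args[2] = node IP address,    $args[3] = alert name
$StatusDescription = $args[0]
$Caption           = $args[1]
$IpAddress         = $args[2]
$AlertName         = $args[3]

# Same headers as before
$JsonWebToken = "Your token goes here."
$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken"
              "Accept"                   = "application/vnd.samanage.v2.1+json"
              "Content-Type"             = "application/json" }
$URI = ""  # the incidents URI from the documentation

# Build the body from the alert details so each trigger sends a unique payload
$Body = @"
{
  "incident": {
    "name":        "$AlertName : $Caption",
    "priority":    "Critical",
    "description": "$Caption ($IpAddress) reported status: $StatusDescription",
    "requester":   { "email" : "" }
  }
}
"@

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $Body
```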

You can see I added a few more fields to my JSON body so a case like this could be routed more easily. What did I forget? Whoops, it should have said this was a test incident. Not quite ready for production, but let’s move on.

When we build the alert, we set one of the trigger actions as execution of an external program and give it an easily recognizable name.



The full command line I put here is:


C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Scripts\New-SwsdIncident.ps1" "${N=SwisEntity;M=StatusDescription}" "${N=SwisEntity;M=Caption}" "${N=SwisEntity;M=IP_Address}" "${N=Alerting;M=AlertName}"


This is the path and executable for PowerShell, the script file we want to execute, and the parameters (order is important) we want to pass to the script. I’ve also surrounded the parameters with double quotes because they *may* contain spaces. In this case, better safe than sorry.


Then I just need to sit back and wait for an alert matching my criteria to trigger. There’s one now!



Just like every alert I write, I’ve already found ways to improve it. Yes, I know this is a very rudimentary example, but it’s a great introduction to the integrations possible. I’ll need to tweak this alert a little bit before I’d consider it ready for prime time, but it’s been a great learning experience. I hope you learned a little bit along with me.


So, I ask you all: where should I go next?

Anyone who knows me knows that I’m a fan of PowerShell. “Fan” is a diminutive version of the word “fanatic,” and in this instance both are true. That’s why I was so excited to see that PowerShell script output is now supported in Server Configuration Monitor (SCM).


Since SCM’s release, I’ve always thought it was a great idea to monitor the directory where you store your scripts to make sure they don’t vary, to validate changes over time, and even to revert a change made without approval. That part was already available in the initial release of SCM. Using it, you can monitor your C:\Scripts\*.ps1 files and get notified when any deviate from their baselines.
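The baseline idea itself is simple to picture. As a rough illustration of the concept (this is not how SCM stores its baselines), hashing the scripts gives you a fingerprint to compare against later:

```powershell
# Record a baseline of script hashes
$Baseline = Get-ChildItem -Path "C:\Scripts\*.ps1" | Get-FileHash

# Later: compare the current hashes against the baseline
$Current = Get-ChildItem -Path "C:\Scripts\*.ps1" | Get-FileHash
Compare-Object -ReferenceObject $Baseline -DifferenceObject $Current -Property Path, Hash |
    Where-Object { $_.SideIndicator -eq "=>" } |
    Select-Object -Property Path   # scripts that deviate from the baseline
```

SCM does the recording, comparing, and alerting for you; this just shows why a changed file is immediately detectable.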


Using PowerShell scripts to pull information from systems you’re monitoring is only limited by your scripting prowess. But let me say this plainly: You don’t need to be a scripting genius. The THWACK® members are here to be your resources. If you have something great you wrote, post about it. If you need help formatting output, post about it. If you can’t remember how to get a list of all the software installed on a system, post about it. Someone here has probably already done the work.


Monitoring the Server Roles

Windows now handles many of the “roles” of a machine (web server, Active Directory server, etc.) based on the installed features. There’s never been a really nice way to understand which roles are installed on a machine outside of Server Manager. This is especially true if you’re running Windows Server Core, because it has no Server Manager.


Now, you can just write yourself a small PowerShell script:

Get-WindowsFeature | Where-Object { $_.Installed } | Select-Object -Property Name, DisplayName | Sort-Object -Property Name


…and get the list of all features displayed for you.


Name                      DisplayName

----                      -----------

FileAndStorage-Services   File and Storage Services

File-Services             File and iSCSI Services

FS-Data-Deduplication     Data Deduplication

FS-FileServer             File Server

MSMQ                      Message Queuing

MSMQ-Server               Message Queuing Server

MSMQ-Services             Message Queuing Services

NET-Framework-45-ASPNET   ASP.NET 4.7

NET-Framework-45-Core     .NET Framework 4.7

NET-Framework-45-Features .NET Framework 4.7 Features

NET-WCF-Services45        WCF Services

NET-WCF-TCP-PortSharing45 TCP Port Sharing

PowerShell                Windows PowerShell 5.1

PowerShell-ISE            Windows PowerShell ISE

PowerShellRoot            Windows PowerShell

Storage-Services          Storage Services

System-DataArchiver       System Data Archiver

Web-App-Dev               Application Development

Web-Asp-Net45             ASP.NET 4.7

Web-Common-Http           Common HTTP Features

Web-Default-Doc           Default Document

Web-Dir-Browsing          Directory Browsing

Web-Dyn-Compression       Dynamic Content Compression

Web-Filtering             Request Filtering

Web-Health                Health and Diagnostics

Web-Http-Errors           HTTP Errors

Web-Http-Logging          HTTP Logging

Web-ISAPI-Ext             ISAPI Extensions

Web-ISAPI-Filter          ISAPI Filters

Web-Log-Libraries         Logging Tools

Web-Metabase              IIS 6 Metabase Compatibility

Web-Mgmt-Compat           IIS 6 Management Compatibility

Web-Mgmt-Console          IIS Management Console

Web-Mgmt-Tools            Management Tools

Web-Net-Ext45             .NET Extensibility 4.7

Web-Performance           Performance

Web-Request-Monitor       Request Monitor

Web-Security              Security

Web-Server                Web Server (IIS)

Web-Stat-Compression      Static Content Compression

Web-Static-Content        Static Content

Web-WebServer             Web Server

Web-Windows-Auth          Windows Authentication

Windows-Defender          Windows Defender Antivirus

WoW64-Support             WoW64 Support

XPS-Viewer                XPS Viewer


This is super simple. If someone adds or removes one of these features, you’ll know moments after it’s done because it would deviate from your baseline.

Monitoring Local Administrators

This got me thinking about all manner of other possible PowerShell script uses. One that came to mind immediately was local security. We all know the local Administrators group is an easy way for people to circumvent security best practices, yet knowing who is in that security group has proven difficult.


Now that we don’t have those limitations, let’s look at the local Administrators group and filter for local users.


Get-LocalGroupMember -Group Administrators | Where-Object { $_.PrincipalSource -eq "Local" } | Sort-Object -Property Name


Now, you’ll get back a list of all the local users in the Administrators group.

ObjectClass Name                         PrincipalSource
----------- ----                         ---------------
User        NOCKMSMPE01V\Administrator   Local
User        NOCKMSMPE01V\Automation-User Local

Now we’ll know if someone is added or deleted. You could extend this to know when someone is added to power users or any other group. If you really felt like going gang-busters, you could ask for all the groups, and then enumerate the members of each.
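Going gang-busters as described might look something like the sketch below, using the same LocalAccounts cmdlets. (Some built-in groups may be empty or throw errors for orphaned accounts, hence the silent continue.)

```powershell
# Enumerate every local group, then list each group's members
Get-LocalGroup | ForEach-Object {
    $GroupName = $_.Name
    Get-LocalGroupMember -Group $GroupName -ErrorAction SilentlyContinue |
        Select-Object -Property @{ Name = "Group"; Expression = { $GroupName } },
                                ObjectClass, Name, PrincipalSource
}
```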


Local Certificates

These don’t have to be relegated to PowerShell one-liners either. You can have entire scripts that return a value that you can review.


Also, on the security front, it might be nice to know if random certificates start popping up everywhere. Doing this by hand would be excruciatingly slow. Thankfully it’s pretty easy in PowerShell.


$AllCertificates = Get-ChildItem -Path Cert:\LocalMachine\My -Recurse

# Create an empty list to keep the results
$CertificateList = @()

ForEach ( $Certificate in $AllCertificates )
{
    # Check to see if this is a "folder" or a "certificate"
    if ( -not ( $Certificate.PSIsContainer ) )
    {
        # Certificates are *not* containers (folders)
        # Get the important details and add them to the $CertificateList
        $CertificateList += $Certificate | Select-Object -Property FriendlyName, Issuer, Subject, Thumbprint, NotBefore, NotAfter
    }
}

# Output the list so it can be captured
$CertificateList


As you can see, you aren’t required to stick with one-liners. Write whatever you need for your input. As long as there’s output, SCM will capture it and present it in a usable format for parsing.

FriendlyName : SolarWinds-Orion
Issuer       : CN=SolarWinds-Orion
Subject      : CN=SolarWinds-Orion
Thumbprint   : AF2A630F2458E0A3BE8D3EF332621A9DDF817502
NotBefore    : 10/12/2018 5:59:14 PM
NotAfter     : 12/31/2039 11:59:59 PM


FriendlyName :
Issuer       : CN=SolarWinds IPAM Engine
Subject      : CN=SolarWinds IPAM Engine
Thumbprint   : 4527E03262B268D2FCFE4B7B4203EF620B41854F
NotBefore    : 11/5/2018 7:13:34 PM
NotAfter     : 12/31/2039 11:59:59 PM


FriendlyName :
Issuer       : CN=SolarWinds-Orion
Subject      : CN=SolarWinds Agent Provision - cc10929c-47e1-473a-9357-a54052537795
Thumbprint   : 2570C476DF0E8C851DCE9AFC2A37AC4BDDF3BAD6
NotBefore    : 10/11/2018 6:46:29 PM
NotAfter     : 10/12/2048 6:46:28 PM


FriendlyName : SolarWinds-SEUM_PlaybackAgent
Issuer       : CN=SolarWinds-SEUM_PlaybackAgent
Subject      : CN=SolarWinds-SEUM_PlaybackAgent
Thumbprint   : 0603E7052293B77B89A3D545B43FC03287F56889
NotBefore    : 11/4/2018 12:00:00 AM
NotAfter     : 11/5/2048 12:00:00 AM


FriendlyName : SolarWinds-SEUM-AgentProxy
Issuer       : CN=SolarWinds-SEUM-AgentProxy
Subject      : CN=SolarWinds-SEUM-AgentProxy
Thumbprint   : 0488D26FD9576293C30BB5507489D96C3ED829B4
NotBefore    : 11/4/2018 12:00:00 AM
NotAfter     : 11/5/2048 12:00:00 AM


FriendlyName : WildcardCert_Demo.Lab
Issuer       : CN=demo-EASTROOTCA-CA, DC=demo, DC=lab
Subject      : CN=*.demo.lab, OU=Information Technology, O=SolarWinds Demo Lab, L=Austin, S=TX, C=US
Thumbprint   : 039828B433E38117B85E3E9C1FBFD5C1A1189C91
NotBefore    : 3/30/2018 4:37:41 PM
NotAfter     : 3/30/2020 4:47:41 PM

Antivirus Exclusions

How about your antivirus exclusions? I’m sure you really, really want to know if those change.


$WindowsDefenderDetails = Get-MpPreference

$WindowsDefenderExclusions = $WindowsDefenderDetails.ExclusionPath

$WindowsDefenderExclusions | Sort-Object


Now you’ll know if something is added to or removed from the antivirus exclusion list.

C:\Program Files (x86)\Common Files\SolarWinds
C:\Program Files (x86)\SolarWinds

Trying to find this out by hand would be tedious, so let’s just have SCM do the work for you.


This is all just a sample of the power of PowerShell and SCM. We’d love to know what you’ve got in mind for your environment. So, download a trial or upgrade to the latest version of SCM. Be sure to share your excellent scripting adventure so the rest of us can join in the fun!

Doing more with less seems to be a staple of working in IT today.  Part of that also includes doing more with the existing tools at your disposal or adding tools to make your job easier, giving you back your well-earned free time.  For IT professionals, getting the information into a network monitoring software (NMS) solution is only the first step in the battle.  Deciding how to work with that information is the next phase in your evolution as a monitoring professional.


It's no secret that I was a customer before I came to work for SolarWinds.  As a former customer, I tried my best to make sure that my monitoring solution did as much of the heavy lifting for my organization as possible.  In my eyes, what good was a system that just alerted me to problems without actively trying to correct them?  Were the metrics important to specific teams being shown in the proper light to guide business decisions?  Were our help desk and IT teams properly informed about the state of the infrastructure?  These are just a few of the questions I asked myself on a near-daily basis.  Making tweaks and updates to the NMS became a process of care and feeding for the solution.  Thankfully, I had the THWACK® community and the plethora of content therein to help guide me, but every environment is different.  I often thought it would be nice to get first-hand information and ask questions of the experts about my situation.


I've spoken at multiple SolarWinds User Groups (SWUGs) about the ways to help optimize and scale out your build, streamline your support and monitoring processes, isolate and pinpoint the root cause of issues, eliminate noise in your alerting, and leverage the data collected to make informed decisions.  Any one of these can help you do more with what you already have.  The more feedback we get from the community, the more resources we can provide to give you back your weekends.  We've listened, and that's part of what makes THWACKcamp so epic.


This year, THWACKcamp is packed full of knowledge bombs that can help you do your job better, more efficiently, and with less stress.


If you are thinking about working towards automating your infrastructure, take what you've learned and leverage the SolarWinds Orion API with “There's an API for That: Introduction to the SolarWinds Orion SDK.”  We are even including our code samples (here, here, and here) so that you can play along with the home game.


There's an API for That: Introduction to the SolarWinds Orion SDK

Repetitive tasks are boring and repetitive. Why do we have computer systems if not to make our lives easier? In this session, learn why we want to leverage the SolarWinds® Orion API to do just that with me and fellow THWACK MVPs Leon Adato and Zack Mutchler.


Do you have a large or growing deployment and are worried about optimizing your build?  There's a session just on “Optimizing Orion.”

Optimizing Orion

In this session, we will help you understand how to optimize your Orion® Platform. We will discuss topics such as when to know if you should scale up or scale out, and how to identify issues with your current Orion installation.


Are you struggling beneath a pile of unnecessary alerts and trying to find the light?  Why not have your alerts work for you, instead of against you?  “Alerts, How I Hate Thee” is the session for you.

Alerts, How I Hate Thee

Many IT professionals believe if you can't get an alert when something is going wrong, why monitor at all? Yet alerting is also seen as a curse, a source of constant interruptions, false alarms, and “noise.” In reality, it can be so much better. Well-crafted alerts based on insightful monitoring are a benefit to the business and a gift to the recipient, saving hours of investigation and thousands of dollars. Whether alerts are a blessing or a curse depends largely on their design and implementation (more so than any specific monitoring tool or technique) and thankfully, good design can be taught and learned. In this session, we'll take a brief tour of the alerting hall of horrors, and then look at real-world, vendor-agnostic techniques to make alerts meaningful, effective, valuable, and actionable.


If it's time to get the biggest bang for your bits with your NMS, it's probably worth perusing the goodness during “Tips & Tricks: Thinking Outside the Box.”

Tips & Tricks: Thinking Outside the Box

Making its third round at THWACKcamp, this popular session covers the tips and tricks for our beloved SolarWinds products that are a way of life for me and Head Geek Destiny Bertucci. Our combined forces make for some great tips and tricks to help you think outside the box for your day-to-day monitoring needs. There are always ways to create solutions with just a little tweaking to make your SolarWinds products the highlight of any organization.


The best part about a THWACKcamp session?  They are completely free for everyone – you just need to sign up, select your sessions, and then sit back and absorb all the nerdy goodness online. Plus, if you watch live, you can join the chat and ask questions of the experts while the session is happening. What are you waiting for?  Sign up now!


In my previous posts, I talked about building the virtual machine and then about prepping the disks.  That's all done for this particular step.


This is a long set of scripts.  Here's the list of what we'll be doing:

  1. Variable Declaration
  2. Installing Windows Features
  3. Enabling Disk Performance Metrics
  4. Installing some Utilities
  5. Copying the IIS Folders to a new Location
  6. Enable Deduplication (optional)
  7. Removing unnecessary IIS Websites and Application Pools
  8. Tweaking the IIS Settings
  9. Tweaking the ASP.NET Settings
  10. Creating a location for the TFTP and SFTP Roots (for NCM)
  11. Configuring Folder Redirection
  12. Pre-installing ODBC Drivers (for SAM Templates)


Stage 1: Variable Declaration

This is super simple (as variable declarations should be).

#region Variable Declaration
$PageFileDrive = "D:\"
$ProgramsDrive = "E:\"
$WebDrive      = "F:\"
$LogDrive      = "G:\"


Stage 2: Installing Windows Features

This is the longest part of the process, and it can't be helped.  The Orion installer will do this for you automatically, but if I do it in advance, I can play with some of the settings before I actually perform the installation.

#region Add Necessary Windows Features
# this is a list of the Windows Features that we'll need
# it's being filtered for those which are not already installed
$Features = Get-WindowsFeature -Name FileAndStorage-Services, File-Services, FS-FileServer, Storage-Services, Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext, Web-Net-Ext45, Web-Asp-Net, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-Features, NET-Framework-Core, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-MSMQ-Activation45, NET-WCF-Pipe-Activation45, NET-WCF-TCP-Activation45, NET-WCF-TCP-PortSharing45, MSMQ, MSMQ-Services, MSMQ-Server, FS-SMB1, User-Interfaces-Infra, Server-Gui-Mgmt-Infra, Server-Gui-Shell, PowerShellRoot, PowerShell, PowerShell-V2, PowerShell-ISE, WAS, WAS-Process-Model, WAS-Config-APIs, WoW64-Support, FS-Data-Deduplication | Where-Object { -not $_.Installed }
$Features | Add-WindowsFeature


Without the comments, this is 2 lines.  Yes, only 2 lines, but very important ones.  The very last Windows Feature that I install is Data Deduplication (FS-Data-Deduplication).  If you don't want this, you are free to remove this from the list and skip Stage 6.


Stage 3: Enabling Disk Performance Metrics

Disk performance counters are disabled in Windows Server by default, but I like to see them, so I re-enable them.  It's super simple.

#region Enable Disk Performance Counters in Task Manager
Start-Process -FilePath "C:\Windows\System32\diskperf.exe" -ArgumentList "-Y" -Wait


Stage 4: Installing some Utilities

This is entirely for me.  There are a few utilities that I like on every server that I use regardless of version.  You can configure this to do it in whatever way you like.  Note that I no longer install 7-zip as part of this script because I'm deploying it via Group Policy.

#region Install 7Zip
# This can now be skipped because I'm deploying this via Group Policy
# Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "\\Path\To\Installer\7z1604-x64.msi", "/passive" -Wait
#region Install Notepad++
# Install NotePad++ (current version)
# Still need to install the Plugins manually at this point, but this is a start
Start-Process -FilePath "\\Path\To\Installer\npp.latest.Installer.exe" -ArgumentList "/S" -Wait
#region Setup UTILS Folder
# This contains the SysInternals and Unix Utils that I love so much.
$RemotePath = "\\Path\To\UTILS\"
$LocalPath  = "C:\UTILS\"
Start-Process -FilePath "C:\Windows\System32\robocopy.exe" -ArgumentList $RemotePath, $LocalPath, "/E", "/R:3", "/W:5", "/MT:16" -Wait
$MachinePathVariable = [Environment]::GetEnvironmentVariable("Path", "Machine")
if ( -not ( $MachinePathVariable -like "*$( $LocalPath )*" ) )
{
    $MachinePathVariable += ";$LocalPath;"
    $MachinePathVariable = $MachinePathVariable.Replace(";;", ";")
    Write-Host "Adding C:\UTILS to the Machine Path Variable" -ForegroundColor Yellow
    Write-Host "You must close and reopen any command prompt windows to have access to the new path"
    [Environment]::SetEnvironmentVariable("Path", $MachinePathVariable, "Machine")
}
else
{
    Write-Host "[$( $LocalPath )] already contained in machine environment variable 'Path'"
}


Stage 5: Copying the IIS folders to a New Location

I don't want my web files on the C:\ Drive.  It's just something that I've gotten in the habit of doing from years of IT, so I move them using robocopy.  Then I need to re-apply some permissions that are stripped.

#region Copy the IIS Root to the Web Drive
# I can do this with Copy-Item, but I find that robocopy works better at keeping permissions
Start-Process -FilePath "robocopy.exe" -ArgumentList "C:\inetpub", ( Join-Path -Path $WebDrive -ChildPath "inetpub" ), "/E", "/R:3", "/W:5" -Wait
#region Fix IIS temp permissions
$FolderPath = Join-Path -Path $WebDrive -ChildPath "inetpub\temp"
$CurrentACL = Get-Acl -Path $FolderPath
$AccessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "NT AUTHORITY\NETWORK SERVICE", "FullControl", ( "ContainerInherit", "ObjectInherit" ), "None", "Allow"
# Add the rule to the ACL, then write the ACL back to the folder
$CurrentACL.SetAccessRule($AccessRule)
$CurrentACL | Set-Acl -Path $FolderPath


Stage 6: Enable Deduplication (Optional)

I only want to deduplicate the log drive, and I do it via this script.

#region Enable Deduplication on the Log Drive
Enable-DedupVolume -Volume ( $LogDrive.Replace("\", "") )
Set-DedupVolume -Volume ( $LogDrive.Replace("\", "") ) -MinimumFileAgeDays 0 -OptimizeInUseFiles -OptimizePartialFiles


Stage 7: Remove Unnecessary IIS Websites and Application Pools

Orion will create its own website and application pool, so I don't need the default ones.  I destroy them with PowerShell.

#region Delete Unnecessary Web Stuff
Get-WebSite -Name "Default Web Site" | Remove-WebSite -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0" -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0 Classic" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5 Classic" -Confirm:$false
Remove-WebAppPool -Name "Classic .NET AppPool" -Confirm:$false
Remove-WebAppPool -Name "DefaultAppPool" -Confirm:$false


Stage 8: Tweak the IIS Settings

This step is dangerous.  There's no other way to say this.  If you get the syntax wrong, you can really screw up your system, which is also why I save a backup of the file before I make any changes.

#region Change IIS Application Host Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Application Host settings
$ConfigFilePath = "C:\Windows\System32\inetsrv\config\applicationHost.config"
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    Copy-Item -Path $ConfigFilePath -Destination "$ConfigFilePath.orig"
}
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.applicationHost'.log.centralBinaryLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.log.centralW3CLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logFormat", "W3C" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logExtFileFlags", "Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("period", "Hourly")
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.traceFailedRequestsLogging.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\FailedReqLogFiles" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("directory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp\IIS Temporary Compressed File" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("maxDiskSpaceUsage", "2048" )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("minFileSizeForComp", "5120" )
# Save the file
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue


There's a lot going on here, so let me see if I can't explain it a little.

I'm accessing the IIS Application Host configuration file and making changes.  This file governs the entire IIS install, which is why I make a backup.

The changes are:

  • Change the log file locations to the log drive
  • Define the log type as W3C
  • Set the fields I want captured in the logs
  • Set the log roll-over period to hourly
  • Set the location for temporary compressed files
  • Set my compression settings


Stage 9: Tweaking the ASP.NET Configuration Settings

We're working with XML again, but this time it's for the ASP.NET configuration.  I use the same process as Stage 8, but the changes are different.  I take a backup again.

#region Change the ASP.NET Compilation Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Compilation settings in the ASP.NET Web Config
$ConfigFilePath = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config"
Write-Host "Editing [$ConfigFilePath]" -ForegroundColor Yellow
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    Copy-Item -Path $ConfigFilePath -Destination "$ConfigFilePath.orig"
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("tempDirectory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp") )
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("maxConcurrentCompilations", "16")
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("optimizeCompilations", "true")
# Save the file
Write-Host "Saving [$ConfigFilePath]" -ForegroundColor Yellow
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion Change the ASP.NET Compilation Settings


Again, there's a bunch going on here, but the big takeaway is that I'm changing the temporary location for ASP.NET compilations to the drive where the rest of my web content lives, along with the number of simultaneous compilations.


Stage 10: Create NCM Roots

I hate having uploaded configuration files (from network devices) saved to the root drive.  This short script creates folders for them.

#region Create SFTP and TFTP Roots on the Web Drive
# Check for & Configure SFTP and TFTP Roots
$Roots = "SFTP_Root", "TFTP_Root"
ForEach ( $Root in $Roots )
{
    if ( -not ( Test-Path -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) ) )
    {
        New-Item -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) -ItemType Directory
    }
}
#endregion Create SFTP and TFTP Roots on the Web Drive


Stage 11: Configure Folder Redirection

This is the weirdest thing that I do.  Let me see if I can explain.


My ultimate goal is to automate installation of the software itself.  The default directory for installing the software is C:\Program Files (x86)\SolarWinds\Orion (and a few others).  Since I don't really like installing any program (SolarWinds stuff included) on the O/S drive, this leaves me in a quandary.  I thought to myself, "Self, if this was running on *NIX, you could just do a symbolic link and be good."  Then I reminded myself, "Self, Windows has symbolic links available."  Then I just needed to tinker until I got things right.  After much annoyance, and rolling back to snapshots, this is what I got.

#region Folder Redirection
$Redirections = @()
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]1; SourcePath = "C:\ProgramData\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]2; SourcePath = "C:\ProgramData\SolarWindsAgentInstall"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]3; SourcePath = "C:\Program Files (x86)\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]4; SourcePath = "C:\Program Files (x86)\Common Files\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]5; SourcePath = "C:\ProgramData\SolarWinds\Logs"; TargetDrive = $LogDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]6; SourcePath = "C:\inetpub\SolarWinds"; TargetDrive = $WebDrive } )
$Redirections | Add-Member -MemberType ScriptProperty -Name TargetPath -Value { $this.SourcePath.Replace("C:\", $this.TargetDrive ) } -Force
ForEach ( $Redirection in $Redirections | Sort-Object -Property Order )
{
    # Check to see if the target path exists - if not, create the target path
    if ( -not ( Test-Path -Path $Redirection.TargetPath -ErrorAction SilentlyContinue ) )
    {
        Write-Host "Creating Path for Redirection [$( $Redirection.TargetPath )]" -ForegroundColor Yellow
        New-Item -ItemType Directory -Path $Redirection.TargetPath | Out-Null
    }
    # Build the string to send to the command prompt (/J creates a directory junction)
    $CommandString = "mklink /J `"$( $Redirection.SourcePath )`" `"$( $Redirection.TargetPath )`""
    Write-Host "Executing [$CommandString]... " -ForegroundColor Yellow -NoNewline
    # Execute it
    Start-Process -FilePath "cmd.exe" -ArgumentList "/C", $CommandString -Wait
    Write-Host "[COMPLETED]" -ForegroundColor Green
}
#endregion Folder Redirection

The reason for the "Order" member in the Redirections object is that certain folders have to be built before others - for example, I can't build X:\ProgramData\SolarWinds\Logs before I build X:\ProgramData\SolarWinds.


When complete the folders look like this:


Nice, right?


Stage 12: Pre-installing ODBC Drivers

I monitor many database server types with SolarWinds Server & Application Monitor.  Each requires its own driver, so I install them in advance (because I can).

#region Pre-Orion Install ODBC Drivers
# This is for any ODBC Drivers that I want to install to use with SAM
# You don't need to include any driver for Microsoft SQL Server - it will be done by the installer
# I have the drivers for MySQL and PostgreSQL in this share
# There is also a Post- share which includes the files that I want to install AFTER I install Orion.
$Drivers = Get-ChildItem -Path "\\Path\To\ODBC\Drivers\Pre\" -File
ForEach ( $Driver in $Drivers )
{
    if ( $Driver.Extension -eq ".exe" )
    {
        Write-Host "Executing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath $Driver.FullName -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    elseif ( $Driver.Extension -eq ".msi" )
    {
        # Install it using msiexec.exe
        Write-Host "Installing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "`"$( $Driver.FullName )`"", "/passive" -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    else
    {
        # Anything else in the share is unexpected
        Write-Host "Bork-Bork-Bork on $( $Driver.FullName )"
    }
}
#endregion Pre-Orion Install ODBC Drivers


Running all of these with administrator privileges cuts this process down to 2 minutes and 13 seconds.  And over 77% of that is installing the Windows Features.


Execution time: 2:13

Time saved: over 45 minutes


This was originally published on my personal blog as Building my Orion Server [Scripting Edition] – Step 3 – Kevin's Ramblings


In a previous post I showed off a little PowerShell script that I've written to build my SolarWinds Orion servers.  That post left us with a freshly imaged Windows Server.  Like I said before, you can install the O/S however you like.  I used Windows Deployment Services because I'm comfortable with it.


I used Windows Server 2016 because this is my lab and...



Now I've got a list of things that I want to do to this machine.

  1. Bring the Disks Online & Initialize
  2. Format Disks & Disable Indexing
  3. Configure the Page File
  4. Import the Certificate for SSL (Optional)


Because I'm me, I do this with PowerShell.  I'm going to go through each stage one by one.

Stage 0: Declare Variables

I don't list this as a numbered stage because it's something I have in every script.  Before I get into anything else, I need to define my variables.  For this script, that's disk numbers, drive letters, and labels.

#region Build Disk List
$DiskInfo  = @()
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]1; DriveLetter = "D"; Label = "Page File" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]2; DriveLetter = "E"; Label = "Programs" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]3; DriveLetter = "F"; Label = "Web" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]4; DriveLetter = "G"; Label = "Logs" } )
#endregion Build Disk List


This looks simple, because it is.  It's simply a list of the disk numbers, the drive letter, and the labels that I want to use for the additional drives.

Stage 1: Bring the Disks Online & Initialize

Since I need to bring all offline disks online and choose a partition type (GPT), I can do this all at once.

#region Online & Enable RAW disks
Get-Disk | Where-Object { $_.OperationalStatus -eq "Offline" } | Set-Disk -IsOffline:$false
Get-Disk | Where-Object { $_.PartitionStyle -eq "RAW" } | ForEach-Object { Initialize-Disk -Number $_.Number -PartitionStyle GPT }
#endregion Online & Enable RAW disks


Stage 2: Format Disks & Disable Indexing

This is where I really use the variables that are declared in Stage 0.  I do this with a ForEach Loop.

#region Create Partitions & Format
$FullFormat = $false # $false indicates a "quick" format
ForEach ( $Disk in $DiskInfo )
{
    # Create Partition and then Format it
    New-Partition -DiskNumber $Disk.DiskNumber -UseMaximumSize -DriveLetter $Disk.DriveLetter | Out-Null
    Format-Volume -DriveLetter $Disk.DriveLetter -FileSystem NTFS -AllocationUnitSize 64KB -Force -Confirm:$false -Full:$FullFormat -NewFileSystemLabel $Disk.Label
    # Disable Indexing via WMI (Put() commits the change)
    $WmiVolume = Get-WmiObject -Query "SELECT * FROM Win32_Volume WHERE DriveLetter = '$( $Disk.DriveLetter ):'"
    $WmiVolume.IndexingEnabled = $false
    $WmiVolume.Put() | Out-Null
}
#endregion Create Partitions & Format


We're getting closer!  Now we've got this:



Stage 3: Configure the Page File

The "best practices" for page files are all over the board.  Are you using flash storage?  Do you keep it on the O/S disk?  Do you declare a fixed size?  I decided to fall back on settings I've used for years.

  • The page file does not live on the O/S disk.
  • The page file is statically set.
  • The page file size is RAM size + 257MB.

In script form, this looks something like this:

#region Set Page Files
$CompSys = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
# is the system set to use system managed page files
if ( $CompSys.AutomaticManagedPagefile )
{
    # if so, turn it off (Put() commits the change)
    $CompSys.AutomaticManagedPagefile = $false
    $CompSys.Put() | Out-Null
}
# Set the size to 16GB + 257MB (per Microsoft Recommendations) and move it to the D:\ Drive
# as a safety-net I also keep 8GB on the C:\ Drive.
$PageFileSettings = @()
$PageFileSettings += "c:\pagefile.sys 8192 8192"
$PageFileSettings += "d:\pagefile.sys 16641 16641"
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\" -Name "pagingfiles" -Type multistring -Value $PageFileSettings
#endregion Set Page Files


Stage 4: Import the Certificate for SSL (Optional)

Since this is my lab, I get to do what I want.  (See above)  I include using SSL for Orion.  I have a wildcard certificate that I can use within my lab, so if I import it, then I can enable SSL when the configuration wizard runs.  This certificate is saved on a DFS share in my lab.  This is the script to import it.

#region Import Certificate
# Lastly, import my internal PKI Certificate for use with HTTPS
$CertName = "WildcardCert_demo.lab"
$CertPath = "\\Demo.Lab\Files\Data\Certificates\"
$PfxFile = Get-ChildItem -Path $CertPath -Filter "$CertName.pfx"
$PfxPass = ConvertTo-SecureString -String ( Get-ChildItem -Path $CertPath -Filter "$CertName.password.txt" | Get-Content -Raw ) -AsPlainText -Force
# Import into the local machine's Personal store so IIS can bind to it
Import-PfxCertificate -FilePath $PfxFile.FullName -CertStoreLocation Cert:\LocalMachine\My -Password $PfxPass
#endregion Import Certificate


That's it.  Now the disks are set up, the page file is set, and the certificate is installed.  Next, I rename the computer, reboot, run Windows Updates, reboot, run Windows Updates, reboot, run Windows Updates... (you see where this is going, right?)
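Even the rename-and-reboot cycle can be scripted.  Here's a minimal sketch - the computer name is a placeholder, and this isn't part of my actual build script:

```powershell
# Rename the machine (placeholder name) and restart so the change takes effect
Rename-Computer -NewName "OrionServer" -Force
Restart-Computer -Force
```

After the reboot, the Windows Update merry-go-round is still point-and-click for me.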


Execution Time: 16 seconds

Time saved: at least 15 minutes.


There's still some prep work I can do via scripting and I'll provide that next.


This is a cross-post from my personal blog post Building my Orion Server [Scripting Edition] – Step 2 – Kevin's Ramblings

As the Product Manager for Online Demos, I need to install the SolarWinds Orion platform frequently... sometimes as many as 4 times per month.  This can get tiresome, but I've gotten some assistance from PowerShell, the community, and some published help documents.


I've thought about scripting these out for a while now and I came up with a list of things to do.

  1. Build the virtual machines
  2. Pre-configure the virtual machine's disks
  3. Prep the machine for installation
  4. Install the software (silently)
  5. Finalize the installation (silently)

This post is the first step in this multi-step process - Building your virtual machine.


Now, depending on your hypervisor, there are two different paths to follow: Hyper-V or VMware.  In my lab, I've got both because I try to be as agnostic as possible.  It's now time to start building the script.  I'm going to use PowerShell.


Scripting Preference: PowerShell

Preference Reasoning: I know it and I'm comfortable using it.


Hyper-V vs. VMware


Each hypervisor has different requirements when building a virtual machine, but some are the same for both - specifically the number & size of disks, the CPU count, and the maximum memory.  The big deviation comes from the way each hypervisor handles memory & CPU reservations.


Hyper-V handles CPU reservation as a percentage of total, whereas VMware handles it via the number of MHz.  I've elected to keep the reservation as a percentage.  It seemed easier to keep straight (in my head) and only required minor tweaks to the script.
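To keep the percentage model on VMware, the percentage has to be converted to a MHz value at run time.  A quick sketch of the arithmetic, using made-up host numbers:

```powershell
# Hypothetical host: 20 cores totaling 48,000 MHz
$CpuTotalMhz = 48000
$NumCpu      = 20
# MHz available per core
$MhzPerCpu = [math]::Floor( $CpuTotalMhz / $NumCpu )     # 2400
# A 4-vCPU VM reserving 50% works out to 4 * 0.5 * 2400 MHz
$ReservationMhz = 4 * ( 50 / 100 ) * $MhzPerCpu          # 4800
```

The full script below pulls the real CpuTotalMhz and NumCpu values from the host via PowerCLI.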


Step 1 - Variable Declaration

  • VM Name [string] - both
  • Memory (Max Memory for VM) [integer] - both
  • CPU Count (number of CPUs) [integer] - both
  • CPU Reservation (percentage) [integer] - both
  • Disk Letters and Sizes - both
  • Memory at Boot (Memory allocated at boot) [integer] - Hyper-V
  • Memory (Minimum) [integer] - Hyper-V
  • Use Dynamic Disks [Boolean] - Hyper-V
  • VLAN (VLAN ID to use for Network Adapter) [integer] - Hyper-V
  • vCenter Name [string] - VMware
  • ESX Host [string] - VMware
  • Disk Format ("thin", "thick", etc.) [string] - VMware
  • VLAN (VLAN name to use for Network Adapter) [string] - VMware
  • Guest OS (identify the Operating System) [string] - VMware


Step 2 - Build the VM

Building the VM is an easy step that actually takes only one line using the "New-VM" command (regardless of hypervisor).  The syntax and parameters change depending on the hypervisor, but otherwise we just build the shell.  In Hyper-V, I do this in two commands and in VMware I do it in one.


Step 3 - Assign Reservations

This is a trickier step in VMware because it uses MHz and not percentages.  For that, I need to know the speed in MHz of the processors in the host.  Thankfully, this can be calculated pretty easily.  Then I just set the CPU & memory reservations based on each hypervisor's requirements.


Step 4 - Assign VLAN

Hyper-V uses the VLAN ID (integer) and VMware uses the VLAN Name (string).  It's nearly the same command with just a different parameter.


Step 5 - Congratulate yourself.



Execution Time: 9 seconds on either architecture.

Time saved: at least 10 minutes.


The full script is below.


#region Variable Declaration
$VMName       = "OrionServer" # Virtual Machine Name
$Architecture = "Hyper-V"     # (or "VMware")
# Global Variable Declaration
$CPUCount     = 4             # Number of CPU's to give the VM
$CPUReserve   = 50            # Percentage of CPU's being reserved
$RAMMax       = 16GB          # Maximum Memory
# Sizes and count of the disks
$VHDSizes = [ordered]@{ "C" =  40GB; # Boot
                        "D" =  30GB; # Page
                        "E" =  40GB; # Programs
                        "F" =  10GB; # Web
                        "G" =  10GB  # Logs
                      }
#endregion Variable Declaration
# Architecture-specific commands
if ( $Architecture -eq "Hyper-V" )
{
    #region Import the Hyper-V Module & Remove the VMware Module (if enabled)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "VMware.PowerCLI" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "Hyper-V" -Force
    }
    #endregion Import the Hyper-V Module & Remove the VMware Module
    $RAMBoot      = 8GB           # Startup Memory
    $RAMMin       = 8GB           # Minimum Memory (should be the same as RAMBoot)
    $DynamicDisks = $true         # Use Dynamic Disks?
    $Vlan         = 300           # VLAN assignment for the Network Adapter
    # Assume that we want to make all the VHDs in the default location for this server.
    $VHDRoot = Get-Item -Path ( Get-VMHost | Select-Object -ExpandProperty VirtualHardDiskPath )
    # Convert the hash table of disks into PowerShell Objects (easier to work with)
    $VHDs = $VHDSizes.Keys | ForEach-Object { New-Object -TypeName PSObject -Property ( [ordered]@{ "ServerName" = $VMName; "Drive" = $_ ; "SizeBytes" = $VHDSizes[$_] } ) }
    # Extend this object with the name that we'll want to use for the VHD
    # My naming scheme is [MACHINENAME]_[DriveLetter].vhdx - adjust to match your own.
    $VHDs | Add-Member -MemberType ScriptProperty -Name VHDPath -Value { Join-Path -Path $VHDRoot -ChildPath ( $this.ServerName + "_" + $this.Drive + ".vhdx" ) } -Force
    # Create the VHDs
    $VHDs | ForEach-Object {
        if ( -not ( Test-Path -Path $_.VHDPath -ErrorAction SilentlyContinue ) )
        {
            Write-Verbose -Message "Creating VHD at $( $_.VHDPath ) with size of $( $_.SizeBytes / 1GB ) GB"
            New-VHD -Path $_.VHDPath -SizeBytes $_.SizeBytes -Dynamic:$DynamicDisks | Out-Null
        }
        else
        {
            Write-Host "VHD: $( $_.VHDPath ) already exists!" -ForegroundColor Red
        }
    }
    # Step 1 - Create the VM itself (shell) with no Hard Drives to Start
    $VM = New-VM -Name $VMName -MemoryStartupBytes $RAMBoot -SwitchName ( Get-VMSwitch | Select-Object -First 1 -ExpandProperty Name ) -NoVHD -Generation 2 -BootDevice NetworkAdapter
    # Step 2 - Bump the CPU Count
    $VM | Set-VMProcessor -Count $CPUCount -Reserve $CPUReserve
    # Step 3 - Set the Memory for the VM
    $VM | Set-VMMemory -DynamicMemoryEnabled:$true -StartupBytes $RAMBoot -MinimumBytes $RAMMin -MaximumBytes $RAMMax
    # Step 4 - Set the VLAN for the Network device
    $VM | Get-VMNetworkAdapter | Set-VMNetworkAdapterVlan -Access -VlanId $Vlan
    # Step 5 - Add Each of the VHDs
    $VHDs | ForEach-Object { $VM | Add-VMHardDiskDrive -Path $_.VHDPath }
}
elseif ( $Architecture -eq "VMware" )
{
    #region Import the VMware Module & Remove the Hyper-V Module (if enabled)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "Hyper-V" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "VMware.PowerCLI" -Force
    }
    #endregion Import the VMware Module & Remove the Hyper-V Module
    $vCenterServer = "vCenter.Demo.Lab"
    $DiskFormat = "Thin" # or "Thick" or "EagerZeroedThick"
    $VlanName = "External - VLAN 300"
    $GuestOS = "windows9Server64Guest" # OS Identifier of the Machine
    #region Connect to vCenter server via Trusted Windows Credentials
    if ( -not ( $global:DefaultVIServer ) )
    {
        Connect-VIServer -Server $vCenterServer
    }
    #endregion Connect to vCenter server via Trusted Windows Credentials
    # Find the host with the most free MHz or specify one by using:
    # $VMHost = Get-VMHost -Name "ESX Host Name"
    $VMHost = Get-VMHost | Sort-Object -Property @{ Expression = { $_.CpuTotalMhz - $_.CpuUsageMhz } } -Descending | Select-Object -First 1
    # Calculate the MHz for each processor on the host
    $MhzPerCpu = [math]::Floor( $VMHost.CpuTotalMhz / $VMHost.NumCpu )
    # Convert the Disk Sizes to a list of numbers (for New-VM Command)
    $DiskSizes = $VHDSizes.Keys | Sort-Object | ForEach-Object { $VHDSizes[$_] / 1GB }
    # Create the VM
    $VM = New-VM -Name $VMName -ResourcePool $VMHost -DiskGB $DiskSizes -MemoryGB ( $RAMMax / 1GB ) -DiskStorageFormat $DiskFormat -GuestId $GuestOS -NumCpu $CPUCount
    # Setup minimum resources
    # CPU is Number of CPUs * Reservation (as percentage) * MHz per Processor
    $VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz ( $CPUCount * ( $CPUReserve / 100 ) * $MhzPerCpu ) -MemReservationGB ( $RAMMax / 2GB )
    # Set my VLAN
    $VM | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $VlanName -Confirm:$false
}
else
{
    Write-Error -Message "Neither Hyper-V nor VMware defined as `$Architecture"
}


Next step is to install the operating system.  I do this with Windows Deployment Services.  Your mileage may vary.


After that, we need to configure the machine itself.  That'll be the next post.


About this post:

This post is a combination of two posts on my personal blog: Building my Orion Server [VMware Scripting Edition] – Step 1 & Building my Orion Server [Hyper-V Scripting Edition] – Step 1.

To receive updates on the Patch Manager roadmap, JOIN thwack and BOOKMARK this page.


We recently released Patch Manager 2.0, which included support for Windows Server 2012 R2 and Windows 8.1. We're now working on simplifying the interface and making the Patch Administrator's time more efficient, along with a number of other ease-of-use enhancements.

PLEASE NOTE: We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel. If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

On November 11th, Microsoft released a total of 16 security updates for November's Patch Tuesday which mitigate potential security threats in Office, Windows, SharePoint, and Internet Explorer. If you are a patch administrator for your company, then it's worth your time to read the Microsoft Article (Microsoft Security Bulletin Summary for November 2014).


In theory, there are several issues in the article that could cause concern, but the one that seems to be generating the most buzz is MS14-066: Vulnerability in Schannel Could Allow Remote Code Execution (2992611).  This vulnerability was additionally reported as CVE-2014-6321.  Although there are no known exploits of this vulnerability, it is quite serious and you should take note.  The vulnerability is in Microsoft Secure Channel (Schannel), a set of security protocols used to encrypt traffic between endpoints, primarily for Internet communications over HTTPS.  This particular patch is applicable to every Windows operating system under active maintenance, ranging from Windows Server 2003 SP2 and Windows Vista SP2 through Windows Server 2012 R2 and Windows 8.1.
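As a quick, informal spot-check, you can ask an individual machine whether the update is installed by its KB number from PowerShell ("TestServer01" is a placeholder name):

```powershell
# KB2992611 is the update associated with MS14-066
# Get-HotFix raises an error when the update is absent, so force it to terminate
try
{
    Get-HotFix -Id "KB2992611" -ComputerName "TestServer01" -ErrorAction Stop
    Write-Host "KB2992611 is installed"
}
catch
{
    Write-Host "KB2992611 was not found"
}
```

That's fine for one box; for fleet-wide visibility, the Patch Manager views and reports below are the way to go.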


Although the media is touting both the scope and the number of updates as the craziest thing we've ever seen in patching, this isn't even the largest bundle of patches that Microsoft has released for a single Patch Tuesday. The current record belongs to April 2011, with a total of 29. But fear not, Patch Administrators: although the quantity seems daunting, the process is still the same. This is most definitely not a "sky is falling" moment - we're here to help.


One thing people seem to forget is that patch administration is the same on Patch Tuesday whether there are 2 patches or 100 patches. If you follow the same procedure from start to finish, you can be sure your environment is up to date and secure.  The best practice for any software (not just patches) is to test it on a small segment of your infrastructure before sending it everywhere. Thankfully, you can do this easily with SolarWinds Patch Manager.



Download the Updates from Microsoft to your WSUS Server

I've found that the easiest way to see these updates is within a Custom Updates View. I have one called "Microsoft Updates - This Week" which is defined as "Updates were released within a specific time period: Last Week" and "Updates source is Microsoft Update." If you need to create one, you can navigate to "Patch Manager\Enterprise\Update Services\WSUS Server\Updates" and then right-click on the Updates node and select "New Update View." Feel free to use this screenshot as a reference.


On that view, I tweak a few of the settings so I can get a direct look at the updates relevant to this particular Patch Tuesday. I start by flipping the "Approval Settings" to "All" and the "Status" to "Any" and let the list refresh. Then I group it by the MSRC Number, which I do by dragging the "MSRC Number" header to the gray area just above the headings.


Now I have a list of all the items released by Microsoft within the last week, grouped by MSRC Number. After that, it's as easy as expanding each group, scanning through the list, and seeing if all the updates applicable to my environment are approved. (You can also flip the Approval filter at the top to "Unapproved," but I like seeing all the information.) It's also good to check the "State" field to make sure the updates are "Ready for Installation."


If you don't see any of this information, it means the updates haven't yet been synchronized to your WSUS server. Running a manual synchronization with Patch Manager is simple - highlight your WSUS server in the left pane and click "Synchronize Server" in the Action Pane (right side of the screen). Click Finish and it's kicked off.


After the synchronization is completed, you can go back and verify that the updates are available using the Update View that we just finished building. Or you can use a report. I've crafted one especially for this month's updates.


To use it, download (Software Updates for MS14-NOV) and import it into the Windows Server Update Services folder. This report has a prerequisite that the WSUS Inventory job has completed after the updates have been synchronized to WSUS. This is normally a scheduled job that runs daily, so you can either kick it off yourself, or just wait for tomorrow to run the report.


This report tells you the WSUS Servers in the environment, the Security Bulletin updates, the Approval Status, the Product Titles, and the Update Title, filtered to only those updates in the MS14-NOV Bulletin.


Test Group

You should run updates against a test group whenever possible.  I've got a test group with a few different versions of Windows in it, so I'll use that.


Approve the Updates for a Test Group

If you need to approve the updates, just right-click on them, select Approve, then "Approve for Install" in the Approve Updates window, and (recommended) scope it only to your testing computers. They may already be approved based on your automatic approval rules. If that's the case and you are trusting, then you are good to go!  If not, send the updates to a test group first.


Theoretically, you can stop here and the updates will apply based on your defined policies (either via GPO or Local Policies), but where's the fun in that?

Run a Simulation of the Patch Installation on the Test Group

For any Patch Tuesday, I'm a fan of creating a set of Update Management Rules and saving it as a template. That way I can refer to it for pre-testing, deployment to the test group, and then deployment to the rest of the organization. You can either create your own MS14-NOV template (within the Update Management Wizard) or download and use mine. It's defined to include all Security Bulletins from MS14-064 through MS14-079.


Now it's time to pre-test these patches. I right-click on my Test Group and then select "Update Management Wizard."


Select "Load existing update management rules" and select the MS14-NOV entry from the drop-down. (If you need to build your own, you can select "Create custom dynamic update management rules"). Click Next.


Verify that the Dynamic Rule shows Security Bulletins from MS14-064 through MS14-079 and click Next.


You can leave most of the defaults on the Options page, but be sure to check the "Run in planning mode" checkbox in the Advanced Options. Click Finish.


Either change your scope to include a few other computers or add additional computers for the testing and then click Next.


Select the schedule (I am a fan of "now") and Export or Email the results as you like and click Finish.


Planning mode is an oft-overlooked feature that you should definitely use for large patch deployments.


This gives you, in three quick tabs, the overall status summary of the job, the per-patch and per-computer details, and the distribution of the job (if you have multiple Patch Manager servers in your environment).




Pre-Stage the Patch Files on a Test-Group (optional)

If you have a large or highly distributed environment, you can use the Update Management Wizard to deploy the patches to the endpoint, but hold off on installing them. This can be staged to run over several hours or multiple days. This is as simple as running through the same steps as the previous wizard and then checking a different box in the "Advanced Options." Leave the Planning Mode checkbox unchecked and check the box for "Only download the updates, do not install the updates."


That's it. Just use the rest of the wizard as before and check this one box to pre-stage the updates to your environment.

Install the Patches on your Test Group

Same rules apply here. Just make sure that you leave the Planning Mode and Download Only checkboxes empty.  Yeah - it's really just that simple.


Reporting on the Results

To report on the status of these patches within your environment, you can use any number of our pre-built reports or customize your own for your needs.  Likewise, you should take advantage of all the Shared Reports in the Content Exchange here on Thwack, and download this one, kindly donated by LGarvin and repurposed (very) slightly by yours truly: Report: Computer Update Status for MS14-NOV.  This report shows the status of the MS14-NOV patches in your environment.


Plan & Test, Stage, Install: Three steps, one wizard, and you have successfully deployed those pesky patches to your test environment.  So what's up next?  Moving to production...

Patching Production Endpoints

You ever read the instructions on the back of a bottle of shampoo?  They say "Lather, rinse, repeat."  This is no different.

The previous process (hopefully) has been run against a group of test computers in your environment. If that goes well, then it's time to schedule the updates for the rest of your environment.  Just use the same few steps (Approve, Test in Planning, Deploy, and Install) to deploy the patches all at once, in waves, or staggered based on your needs.


Next Steps...

Hopefully, this "day-in-the-life" snapshot for a Microsoft Patch Tuesday has been helpful, but this just scratches the surface of what SolarWinds Patch Manager can do to help you keep your environment running smoothly.  Now that you've got Operating System patching under control, extend your knowledge of Patch Manager by keeping on top of patching Third Party Updates from Adobe, Google, Mozilla, Sun, and more!


If you need more help, like everything SolarWinds, start on Thwack.  We've got the best user community bar none.  Just ask a question in the forums and watch the people come out of the woodwork to help.  Start with the Patch Manager Forum and you can go from there.

With the release of Orion NPM 11.0 and the SAM 6.2 Beta, we are introduced to a new extension to the Orion Family: the SolarWinds Orion Agent.  We cover the procedure below step-by-step in the Orion NPM Administrator's Guide, but we skip all of the "whys and whats" about the process.  I'm hoping this post will demystify some of the steps involved with setting this up within Patch Manager.


I chose to call out the Orion Agent to showcase custom package creation within Patch Manager because it's an almost perfect application for a couple of reasons:

  1. It is packaged as an MSI file
  2. It has simple rules about its compatibility
  3. It has some customization that needs to be passed to the installer

All in all, it gives me the ammunition to show how to use Patch Manager to install a custom application with some (but not too much) complexity.  Although this step-by-step calls out the Orion Agent for package creation, there is no reason that you cannot use this procedure with any custom package.  In fact, after you go through this process with the Orion Agent, I highly encourage you to try it with a package of your own choosing.  But, without further ado, let's dig in!



Isn't Patch Manager just for Patches?

If you really think about it, patches are nothing but small software programs which just happen to "fix" other preexisting software problems.  The beauty and elegance of the Patch Manager solution is that you don't need to learn sixty different command line parameters, all of the switches for the Microsoft installer, or pretty much anything when deploying Third Party Software patches.  With Patch Manager, patching third party applications is as easy as 1 (download the package), 2 (publish the package), and 3 (approve for deployment).


So, you ask yourself, "Self, if patches are really just little programs with some rules, what's preventing me from using Patch Manager to deploy my own programs?"  The short answer:  Nothing!


Some background on the Windows Update Agent's Checks

If you dig into any of the predefined packages within the SolarWinds Patch Manager 3rd Party Update Catalog (3PUP), you'll see all kinds of goodness that we pack into the tabs along the bottom.  If you don't have Patch Manager, download a copy and you can follow along.  Open up the Patch Manager Console and check out some of the entries in the catalog.  Go ahead, I'll wait.... **humming Tetris® theme to self**


So now that we're all looking at the same screens, I can continue.  There are a total of six tabs for every package: Package Details, Prerequisite Rules, Applicability Rules, Installed Rules, Content, and Version History.  I'm only showing the first four above, and I'm only going to deep dive on the Prerequisite Rules, Applicability Rules, and Installed Rules tabs.  I think of these rule tabs as answering very simple questions about the package:

  1. Prerequisite Rules
    Does this program apply to the target computer as a whole?
  2. Applicability Rules
    Does this program apply to the installed software on the target computer?
  3. Installed Rules
    How do I know if this is already installed on the target computer?

If you really want a super deep dive on these rule sets, you can dig into Update Applicability Rules on the Microsoft MSDN pages.  If you plan on digging into further custom packages (which you should), then I highly encourage you to read this article on the deep logic used in these rules and how it applies to Patch Manager.  Otherwise, just consider this post a quick primer: we'll give you enough information to get started.


Let's Build Us a Package!


Step 1 - Get the Installer Files

Since I've chosen the Orion Agent as our software package for custom package deployment, we'll need to get the necessary files for installing it.  The Orion Agent consists of two files, an MSI (the installer) and an MST (a transform file).  In very simple terms, a transform file is a "recording" of the information that you enter while running an installer.  Think of it as a script for all the text boxes, radio buttons, and check boxes for which the installer prompts.


You get these two files from within the Orion Web Console.  Launch the Orion Web Console with Admin rights, and go to Settings, then Agent Settings, then Download Agent Settings.  From there, download the two files (MSI and MST) from the Mass Deployment pane.  In my environment, I downloaded the MSI and MST files and saved them with the names SolarWinds_Agent_1.0.0.866.msi and SolarWinds_Agent_1.0.0.866.mst.  You can name them whatever you want - the above naming scheme just helps me keep the versions straight, and I find that replacing any spaces with underscores generally tends to be helpful.  I also saved them in a staging folder (C:\Staging\Tools\OrionAgent).  Again, this was just to help me keep everything straight.


Now it's time to start the package creation.  Navigate down in the Patch Manager Console to "SolarWinds, Inc. Packages" (you should find it under Patch Manager / Administration and Reporting / Software Publishing / SolarWinds, Inc. Packages) and click on it.  Now click on "New Package" in the Action Pane on the far right side of your screen.  Please note that you don't really have to navigate down to the "SolarWinds, Inc. Packages" entry in the tree; you can create a package from any node under the Software Publishing node.  Again, this is just my preferred way so that I can keep things straight in my head.


Step 2 - Package Creation

Package Information Screen

Now you are in the Package Wizard. (Click on the image to pop up a larger version of each)


Above are the minimum entries you'll have to make for a package.  Please be sure to change this to the proper version of the program.  Please take note that the Product entry (where it says Orion Agent) needs to be hand-typed the first time you run through the wizard.  Once the information looks good, click Next to move on to the next screen.

What did we just do?

We just created the base information that categorizes the software package and how it is deployed.  The two important fields here are the Impact and the Reboot Behavior.  The Impact is used to determine the push schedule.  If the "Allow Automatic Updates immediate installation" setting within your Windows Update Policy is set to True and the Impact is set to Minor, the install can take place immediately at the next check-in by the Windows Update Agent.  If it is set to Normal, it is handled like all other patch tasks.

Reboot Behavior determines if the product requires a reboot.  Many products will, but some may not.  This is normally determined by the software package itself.  The Orion Agent does not require a reboot, so we've selected Never Reboots.

Prerequisite Rule Screen

Here is where having some background on the Rules is super helpful.  From reading about the Orion Agent, we already know that it supports Windows Server operating systems with Windows 2008 R2 and later.  It's also not recommended to be run on Domain Controllers.  So with that in mind, let's build that rule.


On the Prerequisite Rule Screen, click Add Rule.  Select Create Basic Rule as the format, and then Windows Version as the Rule Type.

  1. For Comparison, select Greater than or Equal to
  2. Enter 6 for the Major Version
  3. Enter 1 for the Minor Version
  4. Leave SP Major Version and SP Minor Version as 0
  5. Leave the Build Number empty
  6. Select Server for the Product Type

Click OK to save the rule and Next when you see the rule in the list.


What did we just do?

In simple words, we just created a prerequisite rule that says "check the Windows version of the target computer and verify that it is Windows Server 2008 R2 or later running on a Server."  Windows version information (5.2, 6.0, 6.1, etc.) is explained a little more here.  Additionally, if you feel this rule could be used repeatedly, you could have checked the box and saved the rule with a name (like "Windows 2008 R2 Servers or Later").
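The comparison the rule encodes can be sketched in a few lines. This is only an illustration of the logic (the `meets_prerequisite` name and string product type are my own inventions, not the Windows Update Agent's actual API): Windows reports a major.minor version, and 6.1 on a Server SKU corresponds to Windows Server 2008 R2.

```python
# A sketch of the prerequisite rule's logic: the target must report a
# Windows version >= 6.1 (Windows Server 2008 R2) and be a Server SKU.
# Names here are illustrative, not the Windows Update Agent's real API.
def meets_prerequisite(major, minor, product_type):
    # Tuple comparison handles "Greater than or Equal to" across both fields.
    return (major, minor) >= (6, 1) and product_type == "Server"

print(meets_prerequisite(6, 1, "Server"))       # True  (Server 2008 R2)
print(meets_prerequisite(6, 0, "Server"))       # False (Server 2008 RTM)
print(meets_prerequisite(6, 2, "Workstation"))  # False (client SKU)
```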


Select Package Screen

Here's where we tell the software how to install.  With MSI files, it's pretty straightforward, but I'll run you through the process step-by-step.

  1. Select Windows Installer as the Package Type.
  2. Move the radio button under Details down to I already have the package, click the browse button on the right and browse to your MSI file.
  3. You will get a popup indicating that the package has been downloaded and then one which asks if you trust the package.  Click OK to dismiss each of the pop-ups.
  4. Now take a step back.  You'll see something new on this screen.  The Product ID has been populated near the top (just under the Package Type).  Select this and copy it to the clipboard.  You'll need it in the next step.
  5. Further down in Details, check the box next to Include additional files with the package and click the package content button on the right side.  On the select additional files screen, click Add File and browse to your MST File.  Then click OK to close the Package Content Editor.  Click on Yes to copy these files.
  6. Back on the Select Package Screen, select the Binary Language as None.
  7. Finally, enter TRANSFORMS=<Full Name of MST File> in the Command Line (silent install) field.  This is the part where replacing spaces in the MST file with underscores (or something else) is handy.  If you have spaces, you'll need to surround the MST file with double-quotes.  This is one of the reasons that I saved the files with the names that I used above.
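The quoting rule in step 7 is easy to get wrong, so here's a small sketch of how the TRANSFORMS property ends up looking with and without spaces in the MST name (the `transforms_property` helper is hypothetical, just to show the string that lands in the silent-install field):

```python
# Illustrative: building the TRANSFORMS property for the silent-install
# command line. MST names containing spaces must be wrapped in double
# quotes, which is why underscores in the file name keep things simple.
def transforms_property(mst_name):
    if " " in mst_name:
        mst_name = '"%s"' % mst_name
    return "TRANSFORMS=" + mst_name

print(transforms_property("SolarWinds_Agent_1.0.0.866.mst"))
# TRANSFORMS=SolarWinds_Agent_1.0.0.866.mst
print(transforms_property("SolarWinds Agent.mst"))
# TRANSFORMS="SolarWinds Agent.mst"
```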

Here is your before and after...


When you are satisfied, click Next to continue to the Applicability Rules screen.


What did we just do?

We have selected the type of installer (which determines the necessary command line parameters), the installer file, the transform file to be included, and then told that installer to use that MST file during install.


Applicability Rules Screen

Click on Add Rule and select MSI Rule as the format, Product Installed as the Type, and check the box for Not Rule.

Now paste the Product ID we copied from the previous page into the Product Code field.  Be sure to remove the curly braces so the format matches the example.  The rest can be left blank.

Click OK to save the Rule, then click Next to move to the Installed Rules Screen.


What did we just do?

We just created a rule that asks the Windows Update Agent to check to see if a package with Package ID {E59C88B3-8C59-43A0-846D-B8AFC36D78C6} is installed on your computer.  If it's NOT, then the package that we're making now is applicable to be installed.  In essence, we said, "if the Orion Agent isn't already installed, it can be installed."
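That NOT rule boils down to a single membership test. A minimal sketch (the `is_applicable` function and `installed_products` set are stand-ins for what the Windows Update Agent actually queries from the MSI database on the target):

```python
# Sketch of the applicability check: the package applies only when the
# MSI product code is NOT already registered on the target computer.
PRODUCT_CODE = "E59C88B3-8C59-43A0-846D-B8AFC36D78C6"

def is_applicable(installed_products):
    return PRODUCT_CODE not in installed_products

print(is_applicable(set()))           # True: agent absent, OK to install
print(is_applicable({PRODUCT_CODE}))  # False: already installed
```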

Installed Rules Screen

Lastly, we need to define the rules for how we detect whether the new package is already installed.  There are many ways to check if a package is installed.  You can use any rule you like (MSI Rules are a popular selection), but I chose to use File Version with Registry Value for this example.  The Installed Rules are how the Windows Update Agent determines if an installation succeeded.  For this package, we'll only go semi-complex.  Other examples within the catalog are much more complex, and I encourage you to look at them for more details on how you can detect installation.


Many times, figuring out the appropriate registry settings for this process requires you to install the software on at least one machine so that you can dig down through the registry and file system.  I'll save you that step and just give you the information that you need to build the rule.

  1. As before, click on Add Rule, but this time select Basic Rule for the format and File Version with Registry Value for the Rule Type.
  2. In the Registry Key field, enter "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SolarWinds\Agent"
  3. In the Registry Value field, enter "InstallDir"
  4. In the Sub-Path field, enter "SolarWinds.Agent.Service.exe"
  5. In the Comparison Field, select Greater Than or Equal To
  6. In the Version Field, enter the version number of the installer (in this case, 1.0.0.866)

You'll end up with what you see below.  Click OK to save the Rule and then Next to get to the Summary screen.


What did we just do?

This one is more complex than all the previous rules, so let's go through it a step at a time.  We just created a rule that asks the Windows Update Agent to check the Registry key at "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SolarWinds\Agent" and extract the contents of the "InstallDir" value.  It then takes the content from that registry key & value (C:\Program Files (x86)\SolarWinds\Agent\) and appends "SolarWinds.Agent.Service.exe" to the end, which yields C:\Program Files (x86)\SolarWinds\Agent\SolarWinds.Agent.Service.exe.  Next, it extracts the version from the file at that path.  Finally, it compares the version extracted from the file to the one we hand-entered in the rule.  If the value matches or is newer, then the software has been successfully installed.
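The same logic can be sketched end to end. Everything here is illustrative: the registry and file-version lookups are faked as plain parameters, since on a real target the Windows Update Agent performs those reads itself, and the 1.0.0.866 default mirrors the installer version used in this walkthrough.

```python
# Sketch of the installed-rule logic: take the InstallDir read from the
# registry, append the service executable name, and compare that file's
# version against the version hand-entered in the rule.
def check_installed(install_dir, file_version, rule_version="1.0.0.866"):
    path = install_dir.rstrip("\\") + "\\SolarWinds.Agent.Service.exe"
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    # "Greater Than or Equal To": a matching or newer file version passes.
    return path, as_tuple(file_version) >= as_tuple(rule_version)

path, ok = check_installed("C:\\Program Files (x86)\\SolarWinds\\Agent\\",
                           "1.0.0.866")
print(path)  # C:\Program Files (x86)\SolarWinds\Agent\SolarWinds.Agent.Service.exe
print(ok)    # True
```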


Summary Screen

You are almost done!  I know you've seen enough screenshots, so I'll be skipping the one here... yes, you're welcome.  If you want to add notes, do so, and then click Next to actually build the package.  When it's all done, you will get a confirmation message - just click on OK.  That's it!  The package has been built!


Step 3 - Publish & Approve

Now this package is just like any third party update.  Just right-click on it and select Publish Packages.  Normally, you just keep the defaults and click on Next to publish it and click Finish to confirm it.  It's now been published to your default WSUS Server.


Finally, in the Patch Manager Console, move up to Patch Manager / Enterprise / Update Services / (Your WSUS Server Name) / All Updates.  Click on the newly created and published package.  Right-click on it and select Approve.  From here, you can determine which groups can get the new package and setup a schedule.  Like I said, treat this just like any other update from this point.


The Orion Agent is awesome!  What if I want to deploy now?!?

Patch Manager gives you that option as well!  Just right-click on the package within Update Services and select Update Management.  Click OK on the "Update Management - 1 Updates" screen.  On the Task Options Wizard screen, you can add computers to the list.  Click on Next and choose the timing of the deployment and whether you want to schedule the update, configure logging, or add email notifications.  Click Next once more and then click Finish to complete the deployment.  The beauty of this is that if you try to deploy this package to machines where the update isn't applicable, it won't get installed, because of the rules we built together.  If you want to watch the installation process, you can go down to the Active Tasks to watch the Agent be deployed in real time.


Where can I get more information?

Like everything SolarWinds, I'd tell you to start on Thwack.  We've got the best user community bar none.  Just ask a question in the forums and watch the people come out of the woodwork to help.  We also have some excellent articles written up about the rules for custom update packages, troubleshooting the installation, and the logic of the Windows Update Agent.  However, if reading isn't your thing and you prefer the wonderful world of video, we've got the Package Creation using SolarWinds Patch Manager and Package Creation Fundamentals videos for additional guidance.


And don't forget that one of the biggest resources of untapped information in Patch Manager are the packages that we already make for you in the Third Party Catalog.  You can use these as a resource for learning and building your own rules.  I encourage everyone to pop open a few of them to look into the way that we build the rules for each of them.  We've already done the heavy lifting for you with these packages.  Learn from us.  Now go forth and create your own packages!

Storage Manager 5.7 is in General Availability

It is my honor to announce the General Availability (GA) of Storage Manager 5.7.  This release is the culmination of all the great feedback that we received from the Beta and Release Candidate (RC) community!  If you participated in the Beta or RC program, give yourself a pat on the back.  In large part, this release is thanks to you!

There are many improvements and enhancements, but there are two that need a special call-out: the new User-Defined LUN Groupings and Storage Manager Health Status Overview.

User-defined LUN Grouping

Collecting and displaying metrics is great, but organizing them into something that makes more sense in your environment is even better.  It's been a longstanding request that Storage Manager be able to group information about your LUNs.  Customers have asked for this for a multitude of reasons.  If you are an MSP, you'd like to create a view broken down for each customer.  If you are geographically dispersed, you'll want to see what's happening in a specific region in a single view.  If you're concerned about a Tier 1 application (think Microsoft Exchange, SQL, CRM, etc.), you'll benefit from having all of those LUNs in a single view.  In 5.7, that's delivered using User-defined LUN Grouping!


There is a step-by-step for creating your own User-defined LUN Groups in the Beta 2 post.

Performance Charts

There are two charts I'd like to bring to the forefront for everyone. They are the VM Disk Metrics per LUN and the LUN Metrics per VM. The names are similar, but they each give you a different insight into your storage and virtual infrastructure.


VM Disk Metrics per LUN: Select the LUN to display the Virtual Machine information.
LUN Metrics per VM: Select the Virtual Machine to display the LUN information.

Image Source: VMware Storage Infrastructure

These are great charts that can be used to identify problem virtual machines or problem LUNs within your environment. Storage I/O drops are some of the biggest culprits of performance drops in any virtualized environment (see Configuring Database Servers for Optimal Performance). These charts let you see if you are taxing a particular LUN and, conversely, which VMs that might affect.  Each view provides insight into the other.


Ever wish that you could see Read IOPS and Write IOPS for a LUN in one view? We did too, so we added that as well.


Just pick the "LUN Performance Comparison" report from the Performance Tab, select your LUN, and select your additional metric, and presto-chango, you've got both statistics. This is just one of the new features added as part of the entire charting enhancements in 5.7. You'll get the full breakdown below.


Storage Manager Health Status Overview

Monitoring the health of your Storage Manager server is critical to making sure that you are collecting valid data consistently.  We've heard many of the same questions over the years regarding sizing the memory, processors and storage in the Storage Manager server, setting up memory allocation on services, determining which devices are failing on collection and why, and questions about scaling your environment with additional agents.  All of this information could be gathered by running from one page to another in previous versions of Storage Manager.


We knew that we could do better.  Storage Manager would be no good to anyone if the server on which it was running started having problems.  There are many parts to this page and I'll only cover a few in details, but if you'd like a full breakdown of all of the elements on the page, you can review the Beta 2 posting.

Everything critical - all at a glance...



Any monitoring system worth its salt is good at collecting metrics.  Storage Manager is no exception.  For quite some time we've polled metrics from the server itself, but there's never been a "single pane of glass" from which to review this information.  Since the Storage Manager Server is the brain of the solution, we want to make sure it stays healthy. Part of keeping Storage Manager in tip-top shape is sizing it correctly for your constantly evolving environment. There's an excellent resource on thwackCamp about how to size your infrastructure depending on your environmental needs.


The Storage Manager Server Performance Metrics displays the overall health of the Storage Manager Server.  In a quick visual, you see the CPU, RAM, and Disk consumption of the Storage Manager Server.  These are all linked to more detailed information so that you can dig in and view detailed graphs of the metric over time.  The graphing engine has also been drastically improved, but I'll talk about that a little further down.


If you breach a threshold on your disk consumption, there's also a cool warning that displays (cool that it's there, not that you have a disk in a warning state).  The warning will let you be proactive about the disk space on the Storage Manager Server itself.  Based on the metrics that it's calculated so far, it lets you know if your server is projected to run out of room and when.  Lack of disk space on a Storage Manager Server or an STM Proxy Agent are key reasons for collection failures, so this allows you to prevent any future issues.


The Storage Manager Services pane displays memory consumption and Java Heap memory allocation of the various services critical to Storage Manager. If any of these services are stopped, they will be classified as "offline."  When these values start pushing up against thresholds, we provide a convenient help link so you can learn how to allocate physical memory for services.


Within the Database Status pane, you are presented with critical stats revolving around the database as a whole: the database size, the largest table, and the last maintenance date. With a quick click, you can get the processes within the database, see any crashed database tables (if they exist), and learn how to properly run maintenance on the database.


STM Proxy Agents are responsible for reporting back information from various parts of the infrastructure.  They are akin to an additional polling engine from Orion and are used so that you can get data from devices in a different location, or simply to split up the work in very large environments. From this single pane, you can see the status of all of the STM Proxy Agents within your environment. If any are overworked, I'd recommend taking a look at the thwackCamp resource for infrastructure sizing.


Getting information about such a complex system is very useful.  With that in mind, we've included links to the most helpful information we have available.  At the very bottom of the Storage Manager Health Status Overview are a few links to your best bets for assistance: the Storage Manager Video Resources, the Thwack Storage Community, and the Storage Manager Admin Guide.  Each of these is a wealth of information, and you should avail yourself of them.


Did I forget to mention?

Storage Manager Health Status Overview and User-Defined LUN Grouping are great features, but there's even more in this release.  I'll touch on each of them in turn, but first up is…

Improved Chart Performance & Data Visibility

Yes, I'm finally going to talk about it.  The previous charting engine was dated.  There's no nicer way to say it.  The graphs were created as PNG files on the fly, which limited the interactivity of the metrics that could be shown.  Shifting the time scale (the x-axis) was not as intuitive as it should have been.  Limiting your view (say, to only two or three data series from a larger set) was impossible.  Lastly, we didn't like the restrictive time selection.  It's your data, and you should be able to view whatever window of time you choose.


In light of these limitations, we've adopted the same charting package being used by the SolarWinds Orion products.  The voodoo magic of this charting engine gives us many, many enhancements and addresses all of the charting concerns.


There is native "hover-over" support for looking at individual data points on any graph. This is especially useful when dealing with multiple data sets. You can easily zoom in by horizontally dragging on the chart area, and zoom out (or in) by moving the edges of the selector in the y-axis.  You can also dynamically select the data sets using the check boxes at the bottom.  It's all very quick and easy.


EMC's Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a new technology which takes advantage of the high I/O capability of SSD drives.  It's currently supported on the VMAX/Symmetrix and VNX arrays.  And although the technological theory differs greatly between the product lines, it all falls under the umbrella of "FAST VP."  If you want to dig deeper into the details of the technology, I'd recommend Vijay Swami's post as an ideal reference.  That being said, we've adopted support for this technology within Storage Manager.


We can go on and on regarding the specifics of the data which is collected.  In summary, for VMAX gear, we collect the disk groups, the virtual pools, the tiers, the policies, and the storage groups.  For the VNX gear, we pull information on the physical drives, the storage pool, the LUN, and the tiering policy of those LUNs.


When you select an EMC SAN Array which supports FAST VP, you'll see this additional information in the sub-views in the Storage Tab.


Deprecation of Generic "Storage Array" Reports

The "Storage Array" report category is being deprecated.  In the announcement for Beta 1, we give the reason they are being deprecated.  If you want specifics, I'd recommend that you check out that post.


Already a Storage Manager customer? Go to the Customer Portal to download your copy today!
