
In my previous posts, I talked about building the virtual machine and then about prepping the disks.  That work is all done, so now it's time for this step.

 

This is a long set of scripts.  Here's the list of what we'll be doing:

  1. Variable Declaration
  2. Installing Windows Features
  3. Enabling Disk Performance Metrics
  4. Installing some Utilities
  5. Copying the IIS Folders to a new Location
  6. Enable Deduplication (optional)
  7. Removing unnecessary IIS Websites and Application Pools
  8. Tweaking the IIS Settings
  9. Tweaking the ASP.NET Settings
  10. Creating a location for the TFTP and SFTP Roots (for NCM)
  11. Configuring Folder Redirection
  12. Pre-installing ODBC Drivers (for SAM Templates)

 

Stage 1: Variable Declaration

This is super simple (as variable declarations should be)

#region Variable Declaration
$PageFileDrive = "D:\"
$ProgramsDrive = "E:\"
$WebDrive      = "F:\"
$LogDrive      = "G:\"
#endregion

 

Stage 2: Installing Windows Features

This is the longest part of the process, and it can't be helped.  The Orion installer will do this for you automatically, but if I do it in advance, I can play with some of the settings before I actually perform the installation.

#region Add Necessary Windows Features
# this is a list of the Windows Features that we'll need
# it's being filtered for those which are not already installed
$Features = Get-WindowsFeature -Name FileAndStorage-Services, File-Services, FS-FileServer, Storage-Services, Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext, Web-Net-Ext45, Web-Asp-Net, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-Features, NET-Framework-Core, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-MSMQ-Activation45, NET-WCF-Pipe-Activation45, NET-WCF-TCP-Activation45, NET-WCF-TCP-PortSharing45, MSMQ, MSMQ-Services, MSMQ-Server, FS-SMB1, User-Interfaces-Infra, Server-Gui-Mgmt-Infra, Server-Gui-Shell, PowerShellRoot, PowerShell, PowerShell-V2, PowerShell-ISE, WAS, WAS-Process-Model, WAS-Config-APIs, WoW64-Support, FS-Data-Deduplication | Where-Object { -not $_.Installed }
$Features | Add-WindowsFeature
#endregion

 

Without the comments, this is 2 lines.  Yes, only 2 lines, but very important ones.  The very last Windows Feature that I install is Data Deduplication (FS-Data-Deduplication).  If you don't want this, you are free to remove this from the list and skip Stage 6.

 

Stage 3: Enabling Disk Performance Metrics

Disk performance metrics are disabled in Windows Server by default, but I like to see them, so I re-enable them.  It's super simple.

#region Enable Disk Performance Counters in Task Manager
Start-Process -FilePath "C:\Windows\System32\diskperf.exe" -ArgumentList "-Y" -Wait
#endregion

 

Stage 4: Installing some Utilities

This is entirely for me.  There are a few utilities that I like on every server that I use regardless of version.  You can configure this to do it in whatever way you like.  Note that I no longer install 7-zip as part of this script because I'm deploying it via Group Policy.

#region Install 7Zip
# This can now be skipped because I'm deploying this via Group Policy
# Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "\\Path\To\Installer\7z1604-x64.msi", "/passive" -Wait
#endregion
#region Install Notepad++
# Install NotePad++ (current version)
# Still need to install the Plugins manually at this point, but this is a start
Start-Process -FilePath "\\Path\To\Installer\npp.latest.Installer.exe" -ArgumentList "/S" -Wait
#endregion
#region Setup UTILS Folder
# This contains the SysInternals and Unix Utils that I love so much.
$RemotePath = "\\Path\To\UTILS\"
$LocalPath  = "C:\UTILS\"
Start-Process -FilePath "C:\Windows\System32\robocopy.exe" -ArgumentList $RemotePath, $LocalPath, "/E", "/R:3", "/W:5", "/MT:16" -Wait
$MachinePathVariable = [Environment]::GetEnvironmentVariable("Path", "Machine")
if ( -not ( $MachinePathVariable -like "*$( $LocalPath )*" ) )
{
    $MachinePathVariable += ";$LocalPath;"
    $MachinePathVariable = $MachinePathVariable.Replace(";;", ";")
    Write-Host "Adding C:\UTILS to the Machine Path Variable" -ForegroundColor Yellow
    Write-Host "You must close and reopen any command prompt windows to have access to the new path"
    [Environment]::SetEnvironmentVariable("Path", $MachinePathVariable, "Machine")
}
else
{
    Write-Host "[$( $LocalPath )] already contained in machine environment variable 'Path'"
}
#endregion

 

Stage 5: Copying the IIS folders to a New Location

I don't want my web files on the C:\ Drive.  It's just something that I've gotten in the habit of doing from years of IT, so I move them using robocopy.  Then I need to re-apply some permissions that are stripped.

#region Copy the IIS Root to the Web Drive
# I can do this with Copy-Item, but I find that robocopy works better at keeping permissions
Start-Process -FilePath "robocopy.exe" -ArgumentList "C:\inetpub", ( Join-Path -Path $WebDrive -ChildPath "inetpub" ), "/E", "/R:3", "/W:5" -Wait
#endregion
#region Fix IIS temp permissions
$FolderPath = Join-Path -Path $WebDrive -ChildPath "inetpub\temp"
$CurrentACL = Get-Acl -Path $FolderPath
$AccessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "NT AUTHORITY\NETWORK SERVICE", "FullControl", ( "ContainerInherit", "ObjectInherit" ), "None", "Allow"
$CurrentACL.SetAccessRule($AccessRule)
$CurrentACL | Set-Acl -Path $FolderPath
#endregion

 

Stage 6: Enable Deduplication (Optional)

I only want to deduplicate the log drive, so that's the only volume this script touches.

#region Enable Deduplication on the Log Drive
Enable-DedupVolume -Volume ( $LogDrive.Replace("\", "") )
Set-DedupVolume -Volume ( $LogDrive.Replace("\", "") ) -MinimumFileAgeDays 0 -OptimizeInUseFiles -OptimizePartialFiles
#endregion

 

Stage 7: Remove Unnecessary IIS Websites and Application Pools

Orion will create its own website and application pool, so I don't need the default ones.  I destroy them with PowerShell.

#region Delete Unnecessary Web Stuff
Get-WebSite -Name "Default Web Site" | Remove-WebSite -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0" -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0 Classic" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5 Classic" -Confirm:$false
Remove-WebAppPool -Name "Classic .NET AppPool" -Confirm:$false
Remove-WebAppPool -Name "DefaultAppPool" -Confirm:$false
#endregion

 

Stage 8: Tweaking the IIS Settings

This step is dangerous.  There's no other way to say this.  If you get the syntax wrong, you can really screw up your system... which is also why I save a backup of the file before I make any changes.

#region Change IIS Application Host Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Application Host settings
$ConfigFilePath = "C:\Windows\System32\inetsrv\config\applicationHost.config"
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.applicationHost'.log.centralBinaryLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.log.centralW3CLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logFormat", "W3C" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logExtFileFlags", "Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("period", "Hourly")
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.traceFailedRequestsLogging.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\FailedReqLogFiles" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("directory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp\IIS Temporary Compressed File" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("maxDiskSpaceUsage", "2048" )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("minFileSizeForComp", "5120" )
# Save the file
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

There's a lot going on here, so let me see if I can't explain it a little.

I'm accessing the IIS Application Host configuration file and making changes.  This file governs the entire IIS install, which is why I make a backup.

The changes are:

  • Change every log file location to the log drive (the central binary, central W3C, and site default log directories, plus failed request tracing)
  • Define the log type (W3C)
  • Set the fields that I want in the logs
  • Set the log roll-over period to hourly
  • Set the location for temporary compressed files to the web drive
  • Set my compression settings (maximum disk space usage and minimum file size for compression)

 

Stage 9: Tweaking the ASP.NET Configuration Settings

We're working with XML again, but this time it's for the ASP.NET configuration.  I use the same process as Stage 8, but the changes are different.  I take a backup again.

#region Change the ASP.NET Compilation Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Compilation settings in the ASP.NET Web Config
$ConfigFilePath = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config"
Write-Host "Editing [$ConfigFilePath]" -ForegroundColor Yellow
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("tempDirectory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp") )
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("maxConcurrentCompilations", "16")
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("optimizeCompilations", "true")
# Save the file
Write-Host "Saving [$ConfigFilePath]" -ForegroundColor Yellow
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

Again, there's a bunch going on here, but the big takeaway is that I'm changing the temporary location of the ASP.NET compilations to the drive where the rest of my web content lives, and bumping up the number of simultaneous compilations.

 

Stage 10: Create NCM Roots

I hate having uploaded configuration files (from network devices) saved to the root drive.  This short script creates folders for them.

#region Create SFTP and TFTP Roots on the Web Drive
# Check for & Configure SFTP and TFTP Roots
$Roots = "SFTP_Root", "TFTP_Root"
ForEach ( $Root in $Roots )
{
    if ( -not ( Test-Path -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) ) )
    {
        New-Item -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) -ItemType Directory
    }
}
#endregion

 

Stage 11: Configure Folder Redirection

This is the weirdest thing that I do.  Let me see if I can explain.

 

My ultimate goal is to automate installation of the software itself.  The default directory for installing the software is C:\Program Files (x86)\SolarWinds\Orion (and a few others).  Since I don't really like installing any program (SolarWinds stuff included) on the O/S drive, this leaves me in a quandary.  I thought to myself, "Self, if this was running on *NIX, you could just do a symbolic link and be good."  Then I reminded myself, "Self, Windows has symbolic links available."  Then I just needed to tinker until I got things right.  After much annoyance, and rolling back to snapshots, this is what I got.

#region Folder Redirection
$Redirections = @()
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]1; SourcePath = "C:\ProgramData\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]2; SourcePath = "C:\ProgramData\SolarWindsAgentInstall"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]3; SourcePath = "C:\Program Files (x86)\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]4; SourcePath = "C:\Program Files (x86)\Common Files\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]5; SourcePath = "C:\ProgramData\SolarWinds\Logs"; TargetDrive = $LogDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]6; SourcePath = "C:\inetpub\SolarWinds"; TargetDrive = $WebDrive } )
$Redirections | Add-Member -MemberType ScriptProperty -Name TargetPath -Value { $this.SourcePath.Replace("C:\", $this.TargetDrive ) } -Force
ForEach ( $Redirection in $Redirections | Sort-Object -Property Order )
{
    # Check to see if the target path exists - if not, create the target path
    if ( -not ( Test-Path -Path $Redirection.TargetPath -ErrorAction SilentlyContinue ) )
    {
        Write-Host "Creating Path for Redirection [$( $Redirection.TargetPath )]" -ForegroundColor Yellow
        New-Item -ItemType Directory -Path $Redirection.TargetPath | Out-Null
    }
    # Build the string to send to the command prompt
    $CommandString = "mklink /D /J `"$( $Redirection.SourcePath )`" `"$( $Redirection.TargetPath )`""
    Write-Host "Executing [$CommandString]... " -ForegroundColor Yellow -NoNewline
    # Execute it
    Start-Process -FilePath "cmd.exe" -ArgumentList "/C", $CommandString -Wait
    Write-Host "[COMPLETED]" -ForegroundColor Green
}
#endregion

The reason for the "Order" member in the Redirections object is that certain folders have to be built before others.  For example, I can't build X:\ProgramData\SolarWinds\Logs before I build X:\ProgramData\SolarWinds.

 

When complete the folders look like this:

mklinks.png

Nice, right?

 

Stage 12: Pre-installing ODBC Drivers

I monitor many database server types with SolarWinds Server & Application Monitor.  They each require drivers, so I install them in advance (because I can).

#region Pre-Orion Install ODBC Drivers
#
# This is for any ODBC Drivers that I want to install to use with SAM
# You don't need to include any driver for Microsoft SQL Server - it will be done by the installer
# I have the drivers for MySQL and PostgreSQL in this share
#
# There is also a Post- share which includes the files that I want to install AFTER I install Orion.
$Drivers = Get-ChildItem -Path "\\Path\To\ODBC\Drivers\Pre\" -File
ForEach ( $Driver in $Drivers )
{
    if ( $Driver.Extension -eq ".exe" )
    {
        Write-Host "Executing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath $Driver.FullName -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    elseif ( $Driver.Extension -eq ".msi" )
    {
        # Install it using msiexec.exe
        Write-Host "Installing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "`"$( $Driver.FullName )`"", "/passive" -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    else
    {
        Write-Host "Bork-Bork-Bork on $( $Driver.FullName )"
    }
}
#endregion

 

Running all of these with administrator privileges cuts this process down to 2 minutes and 13 seconds.  And over 77% of that is installing the Windows Features.

 

Execution time: 2:13

Time saved: over 45 minutes

 

This was originally published on my personal blog as Building my Orion Server [Scripting Edition] – Step 3 – Kevin's Ramblings

Since we launched the PerfStack™ feature at the beginning of 2017, we have seen a lot of interesting use cases from our customers. If you are unfamiliar with PerfStack, you can check out Drag & Drop Answers to Your Toughest IT Questions, where aLTeReGo outlines the basic functions of the feature. Those who are familiar have come to realize just how powerful the feature is for troubleshooting issues within their environment. Whether you're a system, network, virtualization, storage, or other IT administrator, being able to see metric data across the entire infrastructure stack is very valuable.

 

One of the common use cases I hear about lately is visualizing synthetic web transaction metrics in the context of application performance metrics. For instance, let's say you have an intranet site and you need to monitor it for performance issues, including end-user access and performance from multiple locations. With WPM and SAM, this is a reality. However, before PerfStack, you needed to browse multiple pages to compare metric data against each other.

 

In PerfStack, you can easily add all of the response times for WPM transactions, from multiple locations, to a single metric palette and quickly visualize those metrics together. In the scenario of the intranet site mentioned above, you can see the average response time from each of the four locations that are monitoring this particular transaction.

You can also easily add all of the transaction steps for a particular transaction to provide a more granular view of the response time of your web applications. All you have to do is click the related entities icon for a given transaction, then add the related transaction step entities and their response time metrics. This will allow you to quickly see which steps are contributing to the elevated response time for the related transaction.


But what about the performance metrics of the application itself, and the infrastructure that hosts it? Those metrics are crucial to determining the root cause of application issues. With PerfStack, it is easy to quickly add those metrics to the metric palette using the related entities function. This does require a prerequisite to be configured: when setting up your WPM transaction, you will need to define related applications and nodes as shown below.

Once that is done, Orion does the rest. As you can see in AppStack, the relationship from the transaction to the application and the rest of the infrastructure stack is fully visible.


This will allow you to add and remove all of the necessary entities and metrics to a PerfStack project, to complete your troubleshooting efforts. The more Orion Platform products you have, the more entities you will be able to collect metrics from and be able to visualize in PerfStack. Below you can see the multitude of data available through the relationships in Orion. When all of the needed metrics are added, you can save the PerfStack project to recall for future troubleshooting efforts.


We have tried to make it easy to access the data needed to quickly troubleshoot any issues that arise in your environment. With the data at your fingertips, you can drastically reduce the mean time to resolution for many issues. We all know that a shorter mean time to resolution is important, because performance problems equate to unhappy end users. And when end users aren't happy...


Hopefully you are already reaping the benefit from the many improvements that were made in Network Performance Monitor 12.1, Server & Application Monitor 6.4, Storage Resource Monitor 6.4, Virtualization Manager 7.1, Netflow Traffic Analyzer 4.2.2, and Network Configuration Manager 7.6. If you haven't yet had a chance to upgrade to these releases, I encourage you to do so at your earliest convenience, as there are a ton of exciting new features that you're missing out on.

 

Something a few of you who have already upgraded may have seen is one or more deprecation notices within the installer. These may have included references to older Windows operating systems or Microsoft SQL versions. Note that these deprecation notices will only appear when upgrading to any of the product versions listed above, and only if you are installing on one of the Windows OS or SQL versions deprecated in those releases. But what does it mean when a feature or software dependency has been deprecated? Does this mean it's no longer supported, or that those versions can't be used anymore?

 


 

Many customers throughout the years have requested advance notice whenever older operating systems and SQL database versions would no longer be supported in future versions of Orion, allowing them sufficient time to properly plan for those upgrades. Deprecation does not mean that those versions can't be used, or that they are no longer supported at the time the deprecation notice is posted. Rather, those deprecated versions remain fully supported today, but future Orion product releases will likely no longer support them. As such, all customers affected by these deprecation notices should take this opportunity to begin planning their migrations if they wish to stay current with the latest releases. So what exactly was deprecated with the Q1'17 Orion product releases?

 

Windows Server 2008 R2

 

Windows Server 2008 R2 was released on October 22, 2009, and Microsoft ended mainstream support for Windows Server 2008 R2 SP1 on January 13, 2015. For customers, this means that while new security updates continue to be made available for the aging operating system, bug fixes for critical issues require a separate purchase of an Extended Hotfix Support contract agreement, in addition to paying for each fix requested. Since so few of our customers have such agreements with Microsoft, the only recourse is often an unplanned, out-of-cycle operating system upgrade.

 

Microsoft routinely launches new operating system versions, with major releases on average every four years and minor releases approximately every two. As new server operating system versions are released, customer adoption begins immediately thereafter; sometimes even earlier, during the Community Technology Preview, when some organizations place production workloads on the pre-release operating system. Unfortunately, leveraging the technological advances these later versions of Windows provide occasionally requires losing backwards-compatibility support for some older versions along the way. Similar challenges also occur during QA testing whenever a new operating system is released. At some point it's simply not practical to thoroughly and exhaustively test every possible permutation of OS version, language, hotfix rollup, or service pack. Eventually the compatibility matrix becomes so unwieldy that a choice between quality and compatibility must be made; and really that's not a choice at all.

 

 

SQL Server 2008 / 2008 R2

 

SQL Server 2008 was released on August 6, 2008, with SQL 2008 R2 being released just four months shy of two years later, on April 21, 2010. Seven years later, there have been tremendous advances in Microsoft's SQL server; from the introduction of new redundancy options, to technologies like OLTP and columnstore indexes, which provide tremendous performance improvements. Maintaining compatibility with older versions of Microsoft SQL precludes Orion from being able to leverage these and other advances made in later releases of Microsoft SQL Server, many of which have potential to tremendously accelerate the overall performance and scalability of future releases of the Orion platform.

 

If you happen to be running SQL Server 2008 or SQL 2008 R2 on Windows Server 2008 or 2008 R2, not to worry. There's no need to forklift your existing SQL server prior to upgrading to the next Orion release. In fact, you don't even need to upgrade the operating system of your SQL server, either. Microsoft has made the in-place upgrade process from SQL 2008/R2 to SQL 2014 extremely simple and straightforward. If your SQL server is running on Windows Server 2012 or later, then we recommend upgrading directly to SQL 2016 SP1 or beyond so you can limit the potential for additional future upgrades when/if support for SQL 2012 is eventually deprecated.

 

Older Orion Version Support for Windows & SQL

 

Once new Orion product module versions are eventually released which no longer support running on Windows Server 2008, 2008 R2, or SQL 2008/R2, SolarWinds will continue to provide official support for those previously supported Orion module versions running on these older operating systems and SQL Server versions. These changes only affect Orion module releases running Orion Core versions later than 2017.1. If you are already running the latest version of an Orion product module on Windows Server 2008/R2 or SQL 2008/R2 and have no ability to upgrade either of those in the near future, not to worry. Those product module versions will continue to be supported on those operating system and SQL versions for quite some time to come.

 

Monitoring vs Running

 

While the next release of Orion will no longer support running on Windows or SQL 2008/R2, monitoring systems which are running on these older versions of Windows and SQL will absolutely remain supported. This also includes systems where the Orion Agent is deployed. That means if you're using the Orion Agent to monitor systems running on Windows Server 2008 or Windows Server 2008 R2, rest assured that support for monitoring those older systems with the Orion Agent remains fully intact in the next Orion release. The same is also true if you're monitoring Windows or SQL 2008/R2 agentlessly via WMI, SNMP, etc. Your next upgrade will not impact your ability to monitor these older operating systems or SQL versions in any way.

 

 

32-Bit vs. 64-Bit

 

Support for installing evaluations on 32-bit operating systems will also be dropped from all future releases of Orion product modules, allowing us to begin migrating the Orion codebase to 64-bit. Doing so should improve stability, scalability, and performance for larger Orion deployments. Once new product versions begin to be released without support for 32-bit operating systems, users wishing to evaluate Orion-based products on a 32-bit operating system are encouraged to contact Sales to obtain earlier product versions which support 32-bit operating systems.

 

 

.NET 4.6.2

 

Current Orion product module releases, such as Network Performance Monitor 12.1 and Server & Application Monitor 6.4, require a minimum version of .NET 4.5.1. All future Orion product module releases built atop Core versions later than 2017.1 will require a minimum version of Microsoft .NET 4.6.2, which was released on 7/20/2016. This version of .NET is also fully compatible with all currently shipping and supported versions of Orion product module releases, so there's no need to wait until your next Orion module upgrade to update to this newer version of .NET. Subsequently, .NET 4.7 was released on 5/2/2017 and is equally compatible with all existing Orion product module versions, in the event you would prefer to upgrade directly to .NET 4.7 and bypass .NET 4.6.2 entirely.

 

It's important to note that Microsoft's .NET 4.6.2 has a hard dependency on Windows Update KB2919355, which was released in May 2014 for Windows Server 2012 R2 and Windows 8.1. This Windows Update dependency is rather sizable, coming in between 319MB and 690MB. It also requires a reboot before .NET 4.6.2 can be installed and function properly. As a result, if you don't already have .NET 4.6.2 installed, you may want to plan for this upgrade during your next scheduled maintenance window to ensure your next Orion upgrade goes as smoothly and as quickly as possible.
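If you want to verify which .NET Framework 4.x release a server already has before scheduling that work, a quick registry check is enough. This is only a minimal sketch, not part of any official upgrade guidance, using the documented Release value thresholds for 4.6.2 (394802) and 4.7 (460798):

# Read the .NET Framework 4.x "Release" value from the registry
$Release = ( Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" -Name Release ).Release
if     ( $Release -ge 460798 ) { Write-Host ".NET 4.7 or later is installed (Release $Release)" }
elseif ( $Release -ge 394802 ) { Write-Host ".NET 4.6.2 or later is installed (Release $Release)" }
else                           { Write-Host ".NET is older than 4.6.2 (Release $Release) - plan the update before your next Orion upgrade" }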

 

Minimum Memory Requirements

 

With many of the changes referenced above, minimum system requirements have also needed adjustment. Windows Server 2012 and later operating systems utilize more memory than previous versions. Similarly, .NET 4.6 can also utilize slightly more memory than .NET 4.5.1. And as we move forward, 64-bit processes inherently use more memory than the same process compiled for 32-bit. To ensure users have a pleasurable experience running the next version of Orion products, we will be increasing the absolute minimum memory requirement from 4GB to 6GB of RAM for future versions of Orion product modules. The recommended minimum memory requirement, however, will remain at 8GB.

 

While most readers today would never consider running a Windows 10 laptop day in and day out with just 4GB of RAM, those same people likely wouldn't imagine running an enterprise-grade, server-based monitoring solution on a system with similar specs either. If you do, however, find yourself in an environment running Orion on 4GB of RAM today, an 8GB memory upgrade can typically be had for less than $100.00. This can be done before the next release of Orion product modules and will likely provide a significant and immediate improvement to the overall performance of your Orion server.

 

 

How to Prepare

 

All items listed above can be completed prior to the release of the next Orion product module versions and will help ensure your next upgrade goes off without a hitch. This post is intended to provide anyone impacted by these changes with sufficient notice to plan any of these upgrades during their regularly scheduled maintenance periods, rather than during the upgrade process itself. In-place upgrades of SQL, as stated above, are a fairly simple and effective way to get upgraded quickly with the least possible amount of effort. If you're running Orion on Windows Server 2008 or 2008 R2, in-place OS upgrades are also feasible. If either of these is not feasible or desirable for any reason, you can migrate your Orion installation to a new server or migrate your Orion database to a new SQL server by following the steps outlined in our migration guide.

 

Other Options

 

If for any reason you find yourself running Orion on Windows Server 2008, Server 2008 R2, or on SQL 2008/R2 and unable to upgrade, don't fret. The current releases of Orion product modules will continue to remain fully supported for quite some time to come. There is absolutely zero requirement to be on the latest releases to receive technical support. In almost all cases, you can also utilize newly published content from Thwack's Content Exchange with previous releases, such as Application Templates, Universal Device Pollers, Reports, and NCM Configuration Change Templates. When you're ready to upgrade, we'll be here with plenty of exciting new features, enhancements and improvements.

 

 

Planning Beyond The Next Release

 

At any given time, Orion supports running on a minimum of three major versions of the Windows Operating System and SQL database server. When a new server OS or SQL version is released by Microsoft, SolarWinds makes every effort possible to support up to four OS and SQL versions for a minimum of one Orion product module release. If at any time you find yourself four releases behind the most current OS or SQL server version, you may want to begin planning an in-place upgrade or migration to a new server during your next regularly scheduled maintenance window to ensure your next Orion product module upgrade goes flawlessly.

 

For your reference, below is a snapshot of Windows Operating Systems and SQL Server versions which will be supported for the next release of Orion product modules. This list is not finalized and is still subject to change before release. However, nothing additional will be removed from this list, though there could be additional version support added after this posting.

 

Supported Operating System Versions    Supported Microsoft SQL Server Versions
Server 2012                            SQL 2012
Server 2012 R2                         SQL 2014
Server 2016                            SQL 2016


In a previous post, I showed off a little PowerShell script that I've written to build my SolarWinds Orion servers.  That post left us with a freshly imaged Windows Server.  Like I said before, you can install the O/S however you like.  I used Windows Deployment Services because I'm comfortable with it.

 

I used Windows Server 2016 because this is my lab and...


 

Now I've got a list of things that I want to do to this machine.

  1. Bring the Disks Online & Initialize
  2. Format Disks & Disable Indexing
  3. Configure the Page File
  4. Import the Certificate for SSL (Optional)

 

Because I'm me, I do this with PowerShell.  I'm going to go through each stage one by one.

Stage 0: Declare Variables

I don't number this as a stage because it's something that I have in every script.  Before I even get into this, I need to define my variables.  For this script, it's disk numbers, drive letters, and labels.

#region Build Disk List
$DiskInfo  = @()
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]1; DriveLetter = "D"; Label = "Page File" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]2; DriveLetter = "E"; Label = "Programs" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]3; DriveLetter = "F"; Label = "Web" } )
$DiskInfo += New-Object -TypeName PSObject -Property ( [ordered]@{ DiskNumber = [int]4; DriveLetter = "G"; Label = "Logs" } )
#endregion

 

This looks simple, because it is.  It's simply a list of the disk numbers, the drive letter, and the labels that I want to use for the additional drives.

Stage 1: Bring the Disks Online & Initialize

Since I need to bring all offline disks online and choose a partition type (GPT), I can do this all at once.

#region Online & Enable RAW disks
Get-Disk | Where-Object { $_.OperationalStatus -eq "Offline" } | Set-Disk -IsOffline:$false
Get-Disk | Where-Object { $_.PartitionStyle -eq "RAW" } | ForEach-Object { Initialize-Disk -Number $_.Number -PartitionStyle GPT }
#endregion

 

Stage 2: Format Disks & Disable Indexing

This is where I really use the variables that are declared in Stage 0.  I do this with a ForEach Loop.

#region Create Partitions & Format
$FullFormat = $false # indicates a "quick" format
ForEach ( $Disk in $DiskInfo )
{
    # Create Partition and then Format it
    New-Partition -DiskNumber $Disk.DiskNumber -UseMaximumSize -DriveLetter $Disk.DriveLetter | Out-Null
    Format-Volume -DriveLetter $Disk.DriveLetter -FileSystem NTFS -AllocationUnitSize 64KB -Force -Confirm:$false -Full:$FullFormat -NewFileSystemLabel $Disk.Label
    
    # Disable Indexing via WMI
    $WmiVolume = Get-WmiObject -Query "SELECT * FROM Win32_Volume WHERE DriveLetter = '$( $Disk.DriveLetter ):'"
    $WmiVolume.IndexingEnabled = $false
    $WmiVolume.Put()
}
#endregion

 

We're getting closer!  Now we've got this:

DiskConfiguration.png

 

Stage 3: Configure the Page File

The "best practices" for page files are all over the board.  Are you using flash storage?  Do you keep it on the O/S disk?  Do you declare a fixed size?  I decided to fall back on settings that I've used for years:

  • The page file does not live on the O/S disk.
  • The page file is statically set.
  • The page file size is RAM size + 257MB.

In script form, this looks something like this:

#region Set Page Files
$CompSys = Get-WmiObject -Class Win32_ComputerSystem -EnableAllPrivileges
# is the system set to use system managed page files
if ( $CompSys.AutomaticManagedPagefile )
{
    # if so, turn it off
    $CompSys.AutomaticManagedPagefile = $false
    $CompSys.Put()
}
# Set the size to 16GB + 257MB (per Microsoft Recommendations) and move it to the D:\ Drive
# as a safety-net I also keep 8GB on the C:\ Drive.
$PageFileSettings = @()
$PageFileSettings += "c:\pagefile.sys 8192 8192"
$PageFileSettings += "d:\pagefile.sys 16641 16641"
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\" -Name "pagingfiles" -Type multistring -Value $PageFileSettings
#endregion

 

Stage 4: Import the Certificate for SSL (Optional)

Since this is my lab, I get to do what I want (see above), and I want to use SSL for Orion.  I have a wildcard certificate that I can use within my lab, so if I import it now, I can enable SSL when the Configuration Wizard runs.  The certificate is saved on a DFS share in my lab, and this is the script to import it.

#region Import Certificate
# Lastly, import my internal PKI Certificate for use with HTTPS
$CertName = "WildcardCert_demo.lab"
$CertPath = "\\Demo.Lab\Files\Data\Certificates\"
$PfxFile = Get-ChildItem -Path $CertPath -Filter "$CertName.pfx"
$PfxPass = ConvertTo-SecureString -String ( Get-ChildItem -Path $CertPath -Filter "$CertName.password.txt" | Get-Content -Raw ) -AsPlainText -Force
Import-PfxCertificate -FilePath $PfxFile.FullName -Password $PfxPass
#endregion

 

That's it.  Now the disks are set up, the page file is set, and the certificate is installed.  Next, I rename the computer, reboot, run Windows Updates, reboot, run Windows Updates, reboot, run Windows Updates... (you see where this is going, right?)
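If you'd rather script that last stretch as well, here's a minimal sketch.  The computer name is just a placeholder, and the Windows Update step assumes the optional community PSWindowsUpdate module, which is not something the scripts above rely on:

# Rename the machine and restart it in one shot (placeholder name)
Rename-Computer -NewName "OrionServer" -Force -Restart
# Optional: drive the update/reboot cycle with the community PSWindowsUpdate module
# Install-Module -Name PSWindowsUpdate
# Get-WindowsUpdate -Install -AcceptAll -AutoReboot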

 

Execution Time: 16 seconds

Time saved: at least 15 minutes.

 

There's still some prep work I can do via scripting and I'll provide that next.

 

This is a cross-post from my personal blog post Building my Orion Server [Scripting Edition] – Step 2 – Kevin's Ramblings

As the Product Manager for Online Demos, I need to install the SolarWinds Orion platform frequently... sometimes as often as 4 times per month.  This can get tiresome, but I've gotten some assistance from PowerShell, the community, and some published help documents.

 

I've thought about scripting these out for a while now and I came up with a list of things to do.

  1. Build the virtual machines
  2. Pre-configure the virtual machine's disks
  3. Prep the machine for installation
  4. Install the software (silently)
  5. Finalize the installation (silently)

This post is the first step in this multi-step process - Building your virtual machine.

 

Now, depending on your hypervisor, there are two different paths to follow: Hyper-V or VMware.  In my lab, I've got both because I try to be as agnostic as possible.  It's now time to start building the script.  I'm going to use PowerShell.

 

Scripting Preference: PowerShell

Preference Reasoning: I know it and I'm comfortable using it.

 

Hyper-V vs. VMware

 

Each hypervisor has different requirements when building a virtual machine, but some are the same for both - specifically the number & size of disks, the CPU count, and the maximum memory.  The big deviation comes from the way that each hypervisor handles memory & CPU reservations.

 

Hyper-V handles CPU reservation as a percentage of the total, whereas VMware handles it via the number of MHz.  I've elected to keep the reservation as a percentage.  It seemed easier to keep straight (in my head) and only required minor tweaks to the script.

 

Step 1 - Variable Declaration

  • VM Name [string] - both
  • Memory (Max Memory for VM) [integer] - both
  • CPU Count (number of CPUs) [integer] - both
  • CPU Reservation (percentage) [integer] - both
  • Disk Letters and Sizes - both
  • Memory at Boot (Memory allocated at boot) [integer] - Hyper-V
  • Memory (Minimum) [integer] - Hyper-V
  • Use Dynamic Disks [Boolean] - Hyper-V
  • VLAN (VLAN ID to use for Network Adapter) [integer] - Hyper-V
  • vCenter Name [string] - VMware
  • ESX Host [string] - VMware
  • Disk Format ("thin", "thick", etc.) [string] - VMware
  • VLAN (VLAN name to use for Network Adapter) [string] - VMware
  • Guest OS (identify the Operating System) [string] - VMware

 

Step 2 - Build the VM

Building the VM is an easy step that actually only takes 1 line using the "New-VM" command (regardless of hypervisor).  The syntax and parameters change depending on the hypervisor, but otherwise we just build the shell.  In Hyper-V, I do this in two commands and in VMware I do it in one.

 

Step 3 - Assign Reservations

This is a trickier step in VMware because it uses MHz and not percentages.  For that, I need to know the MHz rating of the processors in the host.  Thankfully, this can be calculated pretty easily.  Then I just set the CPU & memory reservations according to each hypervisor's requirements.

 

Step 4 - Assign VLAN

Hyper-V uses the VLAN ID (integer) and VMware uses the VLAN Name (string).  It's nearly the same command with just a different parameter.

 

Step 5 - Congratulate yourself.

Hyper-V: OrionServer_Hyper-V.png  |  VMware: OrionServer_VMware.png

 

Execution Time: 9 seconds on either architecture.

Time saved: at least 10 minutes.

 

The full script is below.

 

#region Variable Declaration
$VMName       = "OrionServer" # Virtual Name
$Architecture = "Hyper-V"     # (or "VMware")
# Global Variable Declaration
$CPUCount     = 4             # Number of CPU's to give the VM
$CPUReserve   = 50            # Percentage of CPU's being reserved
$RAMMax       = 16GB          # Maximum Memory
# Sizes and count of the disks
$VHDSizes = [ordered]@{ "C" =  40GB; # Boot
                        "D" =  30GB; # Page
                        "E" =  40GB; # Programs
                        "F" =  10GB; # Web
                        "G" =  10GB; # Logs
                       } 
#endregion
# Architecture-specific commands
if ( $Architecture -eq "Hyper-V" )
{
    $RAMBoot      = 8GB           # Startup Memory
    $RAMMin       = 8GB           # Minimum Memory (should be the same as RAMBoot)
    $DynamicDisks = $true         # Use Dynamic Disks?
    $Vlan         = 300           # VLAN assignment for the Network Adapter
    # Assume that we want to make all the VHDs in the default location for this server.
    $VHDRoot = Get-Item -Path ( Get-VMHost | Select-Object -ExpandProperty VirtualHardDiskPath )
    # Convert the hash table of disks into PowerShell Objects (easier to work with)
    $VHDs = $VHDSizes.Keys | ForEach-Object { New-Object -TypeName PSObject -Property ( [ordered]@{ "ServerName" = $VMName; "Drive" = $_ ; "SizeBytes" = $VHDSizes[$_] } ) }
    # Extend this object with the name that we'll want to use for the VHD
    # My naming scheme is [MACHINENAME]_[DriveLetter].vhdx - adjust to match your own.
    $VHDs | Add-Member -MemberType ScriptProperty -Name VHDPath -Value { Join-Path -Path $VHDRoot -ChildPath ( $this.ServerName + "_" + $this.Drive + ".vhdx" ) } -Force
    # Create the VHDs
    $VHDs | ForEach-Object { 
        if ( -not ( Test-Path -Path $_.VHDPath -ErrorAction SilentlyContinue ) )
        {
            Write-Verbose -Message "Creating VHD at $( $_.VHDPath ) with size of $( $_.SizeBytes / 1GB ) GB"
            New-VHD -Path $_.VHDPath -SizeBytes $_.SizeBytes -Dynamic:$DynamicDisks | Out-Null
        }
        else
        {
            Write-Host "VHD: $( $_.VHDPath ) already exists!" -ForegroundColor Red
        }
    }
    #region Import the Hyper-V Module & Remove the VMware Module (if enabled)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue )
    {
        Remove-Module VMware.PowerCLI -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "Hyper-V" -Force
    }
    #endregion Import the Hyper-V Module & Remove the VMware Module
    # Step 1 - Create the VM itself (shell) with no Hard Drives to Start
    $VM = New-VM -Name $VMName -MemoryStartupBytes $RAMBoot -SwitchName ( Get-VMSwitch | Select-Object -First 1 -ExpandProperty Name ) -NoVHD -Generation 2 -BootDevice NetworkAdapter
    # Step 2 - Bump the CPU Count
    $VM | Set-VMProcessor -Count $CPUCount -Reserve $CPUReserve
    # Step 3 - Set the Memory for the VM
    $VM | Set-VMMemory -DynamicMemoryEnabled:$true -StartupBytes $RAMBoot -MinimumBytes $RAMMin -MaximumBytes $RAMMax
    # Step 4 - Set the VLAN for the Network device
    $VM | Get-VMNetworkAdapter | Set-VMNetworkAdapterVlan -Access -VlanId $Vlan
    # Step 5 - Add Each of the VHDs
    $VHDs | ForEach-Object { $VM | Add-VMHardDiskDrive -Path $_.VHDPath }
}
elseif ( $Architecture -eq "VMware" )
{
    #region Import the VMware Module & Remove the Hyper-V Module (if enabled)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "Hyper-V" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue ) )
    {
        Import-Module VMware.PowerCLI -Force
    }
    #endregion Import the VMware Module & Remove the Hyper-V Module
    $vCenterServer = "vCenter.Demo.Lab"
    $DiskFormat = "Thin" # or "Thick" or "EagerZeroedThick"
    $VlanName = "External - VLAN 300"
    $GuestOS = "windows9Server64Guest" # OS Identifer of the Machine
    #region Connect to vCenter server via Trusted Windows Credentials
    if ( -not ( $global:DefaultVIServer ) )
    {
        Connect-VIServer -Server $vCenterServer
    }
    #endregion Connect to vCenter server via Trusted Windows Credentials
    # Find the host with the most free MHz or specify one by using:
    # $VMHost = Get-VMHost -Name "ESX Host Name"
    $VmHost = Get-VMHost | Sort-Object -Property @{ Expression = { $_.CpuTotalMhz - $_.CpuUsageMhz } } -Descending | Select-Object -First 1
    # Calculate the MHz for each processor on the host
    $MhzPerCpu = [math]::Floor( $VMHost.CpuTotalMhz / $VMHost.NumCpu )
    # Convert the Disk Sizes to a list of numbers (for New-VM Command)
    $DiskSizes = $VHDSizes.Keys | Sort-Object | ForEach-Object { $VHDSizes[$_] / 1GB }
    # Create the VM
    $VM = New-VM -Name $VMName -ResourcePool $VMHost -DiskGB $DiskSizes -MemoryGB ( $RAMMax / 1GB ) -DiskStorageFormat $DiskFormat -GuestId $GuestOS -NumCpu $CPUCount
    # Setup minimum resources
    # CPU is Number of CPUs * Reservation (as percentage) * MHz per Processor
    $VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz ( $CPUCount * ( $CPUReserve / 100 ) * $MhzPerCpu ) -MemReservationGB ( $RAMMax / 2GB )
    # Set my VLAN
    $VM | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $VlanName -Confirm:$false
}
else
{
    Write-Error -Message "Neither Hyper-V nor VMware defined as `$Architecture"
}

 

Next step is to install the operating system.  I do this with Windows Deployment Services.  Your mileage may vary.

 

After that, we need to configure the machine itself.  That'll be the next post.

 

About this post:

This post is a combination of two posts on my personal blog: Building my Orion Server [VMware Scripting Edition] – Step 1 & Building my Orion Server [Hyper-V Scripting Edition] – Step 1.

We’re happy to announce the release of SolarWinds® Port Scanner, a standalone free tool that delivers a list of open, closed, and filtered ports for each scanned IP address.

Designed for network administrators of businesses of all sizes, Port Scanner gives your team insight into TCP and UDP port statuses, can resolve hostnames and MAC addresses, and detects operating systems. Even more importantly, it enables users to run the scan from a CLI and export the results into a file.

What else does Port Scanner do?

  • Supports threading using advanced adaptive timing behavior based on network status monitoring and feedback mechanisms, in order to shorten the scan run time
  • Allows you to save scan configurations into a scan profile so you can run the same scan again without redoing previous configurations
  • Resolves hostnames using default local machine DNS settings, with the alternative option to define a DNS server of choice
  • Exports to XML, XLSX, and CSV file formats

For more detailed information about Port Scanner, please see the SolarWinds Port Scanner Quick Reference guide here on THWACK®: https://thwack.solarwinds.com/docs/DOC-190862

 

Download SolarWinds Port Scanner: http://www.solarwinds.com/port-scanner

We are excited to share that we've reached GA for Web Help Desk (WHD) 12.5.1.

 

This service release includes:

 

Improved application support

  • Microsoft® Office 365
  • Microsoft Exchange Server 2016
  • Microsoft Windows Server® 2016

Improved protocol and port management

  • The Server Options setting provides a user interface to manage the HTTP and HTTPS ports from the Web Help Desk Console. You can enable the listening port to listen for HTTP or HTTPS requests, configure the listening port numbers, and create a custom port for generated URL links.

Improved keystore management

  • The Server Options setting also includes Keystore Options. Use this option to create a new Java Keystore (JKS) or a Public-Key Cryptography Standards #12 (PKCS12) KeyStore to store your SSL certificates.

Improved SSL certificate management

  • The Certificates setting allows you to view the SSL certificates in the KeyStore that provide a secure connection between a resource and the Web Help Desk Console.

Improved APNS management

  • The Certificates setting also allows you to monitor your Apple Push Certification Services (APNS) Certificate expiration date and upload a new certificate prior to the expiration date.

Brand new look and feel

  • The new user interface offers a clean visual design that eliminates visual noise to help you focus on what is important.

Other improvements

  • Apache Tomcat updated to 7.0.77

 

We encourage all customers to upgrade to this latest release which is available within your customer portal.

Thank you!

SolarWinds Team

With the key application I support, our production environment is spread across a Citrix Farm of 24 servers connected to an AppServer Farm of 6 servers, all with load-balanced Web and App Services.  So the question is: when is my application down?  If a Citrix server is offline?  If a Web or App Service is down on one AppServer?  We have to assess the criticality of our application status.

 

We have determined that if 1 service is down, it does not affect the availability of the application or even the experience of our users.  Truth be told, the application can support all users even if only one AppServer is running the Document Service, for example.  Of course, in that scenario we have no redundancy and no safety net.

 

So I created a script that allows us to look at a particular service and, based on the number of instances running, determine a criticality.

 

Check out Check Multiple Nodes for a Running Service, Then Show Application Status Based on Number of Instances

 

Within this script, you can identify a list of servers to poll and a minimum critical value, and return either up, warning, or critical for the application component based on the number of instances.
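The linked script has the full details; as a rough sketch of the idea (the server names, service name, and threshold below are placeholders, and the output should be adapted to whatever format your monitor expects), the logic boils down to counting running instances and mapping that count to a status:

# Placeholders - substitute your own servers, service name, and minimum healthy instance count
$Servers          = "APPSERVER01", "APPSERVER02", "APPSERVER03"
$ServiceName      = "DocumentService"
$MinimumInstances = 2
# Count how many of the servers report the service as running
$RunningCount = @( $Servers | ForEach-Object {
        Get-Service -ComputerName $_ -Name $ServiceName -ErrorAction SilentlyContinue
    } | Where-Object { $_.Status -eq "Running" } ).Count
# Map the instance count to an application component status
if     ( $RunningCount -ge $MinimumInstances ) { Write-Host "Up: $RunningCount instances running" }
elseif ( $RunningCount -ge 1 )                 { Write-Host "Warning: only $RunningCount instance(s) running" }
else                                           { Write-Host "Critical: no instances of $ServiceName running" }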

What is NIST 800-171 and how does it differ from NIST 800-53?

 

NIST SP 800-171 – "Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations" – provides guidance for protecting the confidentiality of Controlled Unclassified Information (CUI) residing in non-federal information systems and organizations. The publication is focused on information that is shared by federal agencies with a non-federal entity. If you are a contractor or sub-contractor to governmental agencies whereby CUI resides on your information systems, NIST 800-171 will impact you.

 

Cybercriminals regularly target federal data such as healthcare records, Social Security numbers, and more. It is vital that this information is protected when residing on non-federal information systems. NIST 800-171 has an implementation deadline of 12/31/2017, which has contractors scrambling.

 

Many of the controls contained within NIST 800-171 are based on NIST 800-53, but they are tailored to protect CUI in nonfederal information systems. There are 14 “families” of controls within NIST 800-171, but before we delve into those, we should probably discuss Controlled Unclassified Information (CUI) and what it is.

 

There are several categories and subcategories of CUI, which you can view here. You may be familiar with Sensitive But Unclassified (SBU) information—there were various categories that fell under SBU—but CUI replaces SBU and all its subcategories. CUI is information which is not classified but is in the federal government’s best interest to protect.

 

NIST 800-171 Requirements

As we mentioned above, there are 14 families of controls within NIST 800-171. These are:

 

  • Access Control
  • Awareness and Training
  • Audit and Accountability
  • Configuration Management
  • Identification and Authentication
  • Incident Response
  • Maintenance
  • Media Protection
  • Personnel Security
  • Physical Protection
  • Risk Assessment
  • Security Assessment
  • System and Communications Protection
  • System and Information Integrity

 

We will now delve further into each of these categories and discuss the basic and derived security requirements where SolarWinds® products can help. Basic security requirements are high-level requirements, whereas derived requirements are the controls you need to put in place to meet the high-level objective of the basic requirements.

 

3.1 Access Control

3.1.1 – Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).

 

3.1.2 – Limit information system access to the types of transactions and functions that authorized users are permitted to execute.

 

This category limits access to systems to authorized users only and limits user activity to authorized functions only. There are a few areas within Access Control where our products can help, but many of these controls are implemented at the policy or device levels.

 

3.1.5 – Employ the principle of least privilege, including for specific security functions and privileged accounts.

SolarWinds Log & Event Manager (LEM) can audit deviations from least privilege—e.g., unauthorized file access and unexpected system access. Auditing can be done in real-time or via reports. LEM can also monitor Microsoft® Active Directory® (AD) for unexpected escalated privileges being assigned to a user.

 

3.1.6 – Use of non-privileged accounts when accessing non-security functions.

SolarWinds LEM can monitor privileged account usage and audit the use of privileged accounts for non-security functions.

 

3.1.7 – Prevent non-privileged users from executing privileged functions and audit the execution of such functions.

Execution of privileged functions such as creating and modifying registry keys and editing system files can be audited in real-time or via reports in LEM. On the network device side, SolarWinds Network Configuration Manager (NCM) includes a change approval system which helps ensure that non-privileged users cannot execute privileged functions without approval from a privileged user.

 

3.1.8 – Limit unsuccessful logon attempts.

The number of logon attempts allowed before lockout is generally set at the domain/system policy level, but LEM can confirm if the lockout policy is being enforced via reports/nDepth. LEM can also be used to report on unsuccessful logon attempts, as well as automatically lock a user account via the Active Response feature.
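For reference, on a standalone Windows server you can inspect or set the local lockout threshold from an elevated prompt (in a domain, this is normally driven by Group Policy); the threshold of 5 below is only an example:

# Show the current local account policy, including the lockout threshold
net accounts
# Example only: lock accounts after 5 failed logon attempts
net accounts /lockoutthreshold:5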

 

3.1.12 – Monitor and control remote access sessions.

LEM can monitor and report on remote logons. Correlation rules can be configured to alert and respond to unexpected remote access (e.g., access outside normal business hours). SolarWinds NCM can audit how remote access is configured on your network device, identify any configuration violations, and remediate accordingly.

 

3.1.21 – Limit use of organizational portable storage devices on external information systems.

LEM can audit and restrict usage of portable storage devices with its USB Defender feature.

 

3.2 Awareness and Training

3.2.1 – Ensure that managers, systems administrators, and users of organizational information systems are made aware of the security risks associated with their activities and of the applicable policies, standards, and procedures related to the security of organizational information systems.

 

3.2.2 – Ensure that organizational personnel are adequately trained to carry out their assigned information security-related duties and responsibilities.

 

This section relates to user awareness training, especially around information security. Users should be aware of policies, procedures, and attack vectors such as phishing, malicious email attachments, and social engineering. Unfortunately, SolarWinds can’t provide information security training for your users—we would if we could!

 

3.3 Audit and Accountability

3.3.1 – Create, protect, and retain information system audit records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity.

 

3.3.2 – Ensure that the actions of individual information system users can be uniquely traced to those users so they can be held accountable for their actions.

 

This set of controls helps to ensure that audit logs are in place and that they are monitored to identify unauthorized or suspicious activity. These controls relate to the data you want LEM to ingest and how those logs are protected and retained. LEM can help satisfy some of the controls in this section directly.

 

3.3.3 – Review and update audited events.

LEM helps with the review of audited events, provided the appropriate logs are sent to LEM.

 

3.3.4 – Alert in the event of an audit process failure.

LEM can generate alerts when agents go offline or the log storage database is running low on space. LEM can also alert on behalf of systems when audit logs are cleared—e.g., if a user clears the Windows® event log.

 

3.3.5 – Correlate audit review, analysis and reporting processes for investigation and response to indications of inappropriate, suspicious, or unusual activity.

LEM’s correlation engine and reporting can assist with audit log reviews and help ensure that administrators are alerted to indications of inappropriate, suspicious, or unusual activity.

 

3.3.6 – Provide audit reduction and report generation to support on-demand analysis and reporting.

Audit logs can generate a huge amount of information. LEM can analyze event logs and generate scheduled or on-demand reports to assist with analysis. However, you will need to ensure that your audit policies and logging levels are appropriately configured.

 

3.3.7 – Provide an information system capability that compares and synchronizes internal system clocks with an authoritative source to generate time stamps for audit records.

LEM satisfies this requirement through Network Time Protocol server synchronization. LEM also includes a predefined correlation rule that monitors for time synchronization failures.
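
For the Windows hosts feeding logs into LEM, the built-in w32tm utility is one quick way to confirm they are synchronizing with an authoritative source. A minimal sketch (time.example.com is a placeholder for your organization's approved time source):

# Check the current time source and synchronization status
w32tm /query /status

# Point the Windows Time service at an authoritative NTP source, then force a resync
w32tm /config /manualpeerlist:"time.example.com" /syncfromflags:manual /update
w32tm /resync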

 

3.3.8 – Protect audit information and audit tools from unauthorized access, modification, and deletion.

LEM helps satisfy this requirement through the various mechanisms outlined in this post: Log & Event Manager Appliance Security and Data Protection.

 

3.3.9 – Limit management of audit functionality to a subset of privileged users.

As per the response to 3.3.8, LEM provides role-based access control, which limits access and functionality to a subset of privileged users.

 

3.4 Configuration Management

3.4.1 Establish and maintain baseline configurations and inventories of organizational information systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

 

3.4.2 Establish and enforce security configuration settings for information technology products employed in organizational information systems.

 

Minimum acceptable configurations must be maintained and change management controls must be in place. Inventory comes into play here, too. NCM will have the biggest impact here (on the network device side), thanks to its ability to establish baseline configurations and report on violations. LEM and SolarWinds Patch Manager can also play roles within this set of controls.

 

3.4.3 – Track, review, approve/disapprove, and audit changes to information systems.

NCM’s real-time change detection, change approval management and tracking reports can be used to detect, validate, and document changes to network devices. LEM can monitor and audit changes to information systems, provided the appropriate logs are sent to LEM.

 

3.4.8 – Apply deny-by-exception (blacklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

LEM can monitor for the use of unauthorized software. Thanks to Active Response, you can configure LEM to automatically kill nonessential programs and services.

 

3.4.9 – Control and monitor user-installed software.

LEM can audit software installations and alert accordingly. Patch Manager can inventory machines on your network and report on the software and patches installed.
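
If you just need a quick, local spot-check of what is installed on a single Windows machine (as a complement to, not a replacement for, the inventory Patch Manager collects), a rough sketch that reads the standard registry uninstall keys looks like this:

# Rough local software inventory from the registry uninstall keys (64-bit and 32-bit hives)
$uninstallKeys = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
)

Get-ItemProperty -Path $uninstallKeys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher, InstallDate |
    Sort-Object DisplayName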

 

3.5 Identification and Authentication

3.5.1 Identify information system users, processes acting on behalf of users, or devices.

 

3.5.2 Authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite to allowing access to organizational information systems.

 

This section includes controls such as using multifactor authentication, enforcing password complexity and storing/transmitting passwords in an encrypted format. SolarWinds does not have products to support these requirements.

 

3.6 Incident Response

3.6.1 Establish an operational incident-handling capability for organizational information systems that includes adequate preparation, detection, analysis, containment, recovery, and user response activities.

 

3.6.2 Track, document, and report incidents to appropriate officials and/or authorities both internal and external to the organization.

 

There is only one derived security requirement within the Incident Response section, namely:

3.6.3 Test the organizational incident response capability.

 

LEM can play a role in the incident generation and the subsequent investigation. LEM can generate an incident based on a defined correlation trigger and respond to an incident via the Active Responses. Reports can be produced based on detected incidents.

 

3.7 Maintenance  

3.7.1 Perform maintenance on organizational information systems.

 

3.7.2 Provide effective controls on the tools, techniques, mechanisms, and personnel used to conduct information system maintenance.

 

SolarWinds isn’t relevant to most of the requirements in this section. Controls contained within the maintenance category include ensuring equipment removed for off-site maintenance is sanitized of CUI, checking media for malicious code, and requiring multifactor authentication for nonlocal maintenance sessions.

 

LEM can assist with the 3.7.6 requirement that states “Supervise the maintenance activities of maintenance personnel without required access authorization.” Provided the appropriate logs are being generated and sent to LEM, reports can be used to audit the activity performed by maintenance personnel. NCM also comes into play, allowing you to compare configurations before and after maintenance windows.

 

3.8 Media Protection

3.8.1 Protect (i.e., physically control and securely store) information system media containing CUI, both paper and digital.

 

3.8.2 Limit access to CUI on information system media to authorized users.

 

3.8.3 Sanitize or destroy information system media containing CUI before disposal or release for reuse.

 

Most of the controls within the Media Protection section are not applicable to SolarWinds products. However, LEM can assist with one control.

 

3.8.7 – Control the use of removable media on information system components. 

LEM’s USB Defender feature can monitor for usage of USB removable media and can automatically detach USB devices when unauthorized usage is detected.

 

3.9 Personnel Security

3.9.1 Screen individuals prior to authorizing access to information systems containing CUI.

 

3.9.2 Ensure that CUI and information systems containing CUI are protected during and after personnel actions such as terminations and transfers.

 

There are no derived security requirements within this section. LEM can assist with 3.9.2 by auditing usage of credentials of terminated personnel, validating that accounts are disabled in a timely manner, and validating group/permission changes after a personnel transfer.

 

3.10 Physical Protection

3.10.1 Limit physical access to organizational information systems, equipment, and the respective operating environments to authorized individuals.

 

3.10.2 Protect and monitor the physical facility and support infrastructure for those information systems.

 

SolarWinds cannot assist with any of the physical security controls contained within this section.

 

3.11 Risk Assessment

3.11.1 Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational information systems and the associated processing, storage, or transmission of CUI.

 

Vulnerable software poses a great risk to every organization. These vulnerabilities should be identified and remediated—that is exactly what the controls within this section aim to do.

 

Risk Assessment involves lots of policies and procedures; however, Patch Manager can be leveraged to keep systems up to date with the latest security patches.

 

3.11.2 – Scan for vulnerabilities in the information system and applications periodically and when new vulnerabilities affecting the system are identified.

Patch Manager cannot perform vulnerability scans, but it can be used to identify missing application patches on your Windows machines. NCM identifies risks to network security based on device configuration. NCM also accesses the NIST National Vulnerability Database to get updates on potential emerging vulnerabilities in Cisco® ASA and IOS® based devices.

 

3.11.3 – Remediate vulnerabilities in accordance with assessments of risk.

Patch Manager can remediate software vulnerabilities on your Windows machines via Microsoft® and third-party updates. Patch Manager can be used to install updates on a scheduled basis or on demand. On the network device side, NCM performs Cisco IOS® firmware upgrades to potentially mitigate identified vulnerabilities.

 

3.12 Security Assessment

3.12.1 – Periodically assess the security controls in organizational information systems to determine if the controls are effective in their application.

 

3.12.2 – Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational information systems.

 

3.12.3 – Monitor information system security controls on an ongoing basis to ensure the continued effectiveness of the controls.

We can help with one of the Security Assessment controls. LEM can monitor event logs relating to information system security and perform correlation, alerting, reporting, and more. SolarWinds has several other modules that support monitoring the health and performance of your information systems and networks.

 

3.13 System and Communications Protection

3.13.1 – Monitor, control, and protect organizational communications (i.e., information transmitted or received by organizational information systems) at the external boundaries and key internal boundaries of the information systems.

 

3.13.2 – Employ architectural designs, software development techniques, and systems engineering principles that promote effective information security within organizational information systems.

 

Many of the controls in this section involve protecting the confidentiality of CUI at rest, ensuring encryption is used and keys are appropriately managed, and segmenting networks. However, the basic security requirement 3.13.1 is certainly an area where SolarWinds can assist. This requirement involves monitoring (and controlling/protecting) communication at external and internal boundaries. LEM can collect logs from your network devices and alert on any suspicious traffic. SolarWinds NetFlow Traffic Analyzer (NTA) can also be used to monitor traffic flows for specific protocols, applications, domain names, ports, and more.

 

3.13.6 Deny network communications traffic by default and allow network communications traffic by exception (i.e., deny all, permit by exception).

LEM can ingest logs from network devices, providing auditing to validate that traffic is being appropriately denied/permitted. NPM and NTA can also be used to monitor traffic. NCM can provide configuration reports to help ensure that your access control lists are compliant with “deny all and permit by exception,” as well as providing the ability to execute scripts to make ACL changes en masse.

 

3.13.14 – Control and monitor the use of VoIP technologies.

NPM/NTA and SolarWinds VoIP & Network Quality Manager can be used to monitor VoIP traffic/ports.

 

3.14 System and Information Integrity

3.14.1 – Identify, report, and correct information and information system flaws in a timely manner.

 

3.14.2 – Provide protection from malicious code at appropriate locations within organizational information systems.

 

3.14.3 – Monitor information system security alerts and advisories and take appropriate actions in response.

 

The controls within this section set out to ensure that neither the information system nor the information within it has been compromised. Patch Manager and LEM can play a role in system/information integrity.

 

3.14.4 Update malicious code protection mechanisms when new releases are available.

Essentially, this control requires you to patch your systems. Patch Manager provides the ability to patch your systems with Microsoft and third-party updates on a scheduled or ad-hoc basis. Custom packages can also be created to update products that are not included in our catalog.

 

3.14.5 Perform periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened or executed.

This control ensures that you have an anti-virus tool in place to scan for malicious files. LEM can receive alerts from a wide range of anti-virus/malware solutions to correlate, alert, and respond to identified threats.

 

3.14.6 Monitor the information system including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks.

This security control is very well suited to LEM—the correlation engine can monitor logs for any suspicious or malicious behavior. LEM can be used to monitor inbound/outbound traffic, while NPM/NTA can also be used to detect unusual traffic patterns.

 

3.14.7 Identify unauthorized use of the information system.

LEM can monitor for unauthorized activity. User-defined groups come into play here, allowing you to create blacklists/whitelists of authorized users and events.

 

Still with me? As you can see, there is a substantial number of requirements within the 14 sets of controls, but when implemented correctly, the framework can go a long way toward ensuring the confidentiality, integrity, and availability of Controlled Unclassified Information and your information system as a whole. The SolarWinds products I’ve mentioned above all include a wide variety of out-of-the-box content such as rules, alerts, and reports that can help with the NIST 800-171 requirements.

 

I hope this blog post has helped you untangle some of the NIST 800-171 requirements and understand how you can leverage SolarWinds products to help. If you’ve got any questions or feedback, please feel free to comment below.

cobrien

GNS3 Launches Version 2.0

Posted by cobrien Employee May 3, 2017

Version 2.0 is a new major release of GNS3, which brings significant architectural changes and new features to the growing community of over 900,000 registered network professionals. Since the project's inception, GNS3 has made over 79 releases and been downloaded over 13,693,000 times.

 

GNS3 started as a desktop-only application, from the first version up to version 0.8.3. With the more recent 1.x versions, GNS3 grew to allow the use of remote servers. With version 2.0, multiple clients can control GNS3 at the same time, and all of the “application intelligence” has been moved to the GNS3 server.

 

What does it mean?

 

  • Third parties can build applications that control GNS3. This will also allow individuals to easily configure and spin up a virtual network with pre-established templates

  • Multiple users can be connected to the same project and see each other's modifications in real time, allowing individuals in remote locations to work and collaborate on projects in the GNS3 virtual network environment

  • No need to duplicate your settings on different computers if they connect to the same central server.

  • It is easier to contribute to GNS3, as the separation between the graphical user interface and the server/backend is a lot clearer

  • GNS3 now supports the following vendor devices: Arista vEOS, Cumulus VX, Brocade Virtual ADX, Checkpoint GAiA, A10 vThunder, Alcatel 7750, NetScaler VPX, F5 BIG-IP LTM VE, MikroTik CHR, Juniper vMX and more....


All the complexity of connecting multiple emulators has been abstracted into what we call the controller (part of the GNS3 server). From a user's point of view, it means that it is possible to start a packet capture on any link, connect anything to a cloud, etc. Finally, by using the NAT object in GNS3, connections to the Internet work out of the box (note this is only available with the GNS3 VM or on Linux with libvirt installed).

 

Get started with GNS3 v2.0 now!

 

NEW FEATURES DETAILS

 

Save as you go

Your projects are automatically saved as you make changes to them, so there is no need to press a save button anymore. An additional benefit is that this avoids synchronization issues between the emulators’ virtual disks and projects.

 

Multiple users can be connected to the same project

Multiple users can be connected to the same project, see each other's changes in real time, and collaborate. If you open a console to a router, you will see the commands sent by other users.

 

Smart packet capture

Starting a packet capture is now as easy as clicking on a link and asking for a new capture. GNS3 will pick the best interface to capture from. The packet capture dialog has also been redesigned to allow changing the name of the output file or to prevent automatically starting Wireshark:

NEW API

 

Developers can find out how to control GNS3 using an API here: http://api.gns3.net/en/2.0/. Thanks to our controller, it is no longer required to deal with multiple GNS3 servers, since most of the information is available in the API. All the visual objects are exposed as SVG.
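
As a rough sketch of what that looks like in practice, the snippet below queries a local GNS3 2.0 server over its REST API from PowerShell. It assumes the server is listening on the default http://127.0.0.1:3080 address; check the API documentation linked above for the authoritative endpoint list.

# Talk to a local GNS3 2.0 server over its REST API
# (assumes the default server address of http://127.0.0.1:3080)
$gns3 = 'http://127.0.0.1:3080/v2'

# Confirm the server version
Invoke-RestMethod -Uri "$gns3/version"

# List existing projects by name and ID
Invoke-RestMethod -Uri "$gns3/projects" | Select-Object name, project_id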

 

Key Videos on GNS3 v2.0

 

This video explains and demonstrates how to upgrade both the GNS3 Windows GUI and the GNS3 VM to version 2.0.0 of GNS3. This video uses Windows 10 for the demonstrations.

 

 

More helpful videos on GNS3 v2.0:

 

milan.hulik

Localize your DameWare

Posted by milan.hulik Employee Apr 27, 2017

Our distributor in Germany has created French, German, and Spanish language packs. The language packs are not full versions. If you have already installed the English version, copy (replace) the language pack files into the program installation directory:

 

German v12.0.4 files:

French v12.0.4 files:

Spanish v12.0.3 files:

Is "A Picture Worth A Thousand Words?"  Maybe even more than that!

 

I frequently receive requests to show departments / teams how their system(s) may be performing on the network.  It's not enough to just tell them "Your server's latency is sub-millisecond, it's connected at 1 Gb/s at Full Duplex, and no network errors have been recorded on its switchports in the last three months."  When you offer that kind of information it's accurate, but it may not fill the actual need of the request.

 

Maybe they want to know a LOT more of what NPM can show about even the basic network status of a system:

  • Latency
  • QoE
  • Throughput on multiple interfaces
  • Traffic trending over time--especially with an eye towards predicting whether more resources must be allocated for their system(s) in the future:
    • More memory
    • More bandwidth
    • More storage
  • Growth over the last year in various metrics:
    • Server bandwidth consumed
    • WAN bandwidth utilization
    • Latency changes

 

I never found a canned NPM Report that would show everything the customer wanted.  But I learned how to build one!

 

Check out my method of building a Custom View that shows multiple windows in a single screen--my own little "single-pane-of-glass" deployment here:  How to create a simple custom view of multiple interfaces' bandwidth utilization

 

It's incredibly easy to build a blank custom View that has as many monitors/windows/reports as I want:

  1. Create a new View
  2. Add the right number of columns and add the right Custom HTML (or other) resources to them
  3. Edit those resources to display useful and intuitive titles and subtitles
  4. Test that they show the time frames and polling frequency most useful to the customers
  5. Give the customer access to them
  6. Sit back and listen to the praise and excitement!

 

 

I just built a new view for all the interfaces on a Core 7609 yesterday using this process, and will build another one today on another 7609.  In the screen shot below I've zoomed far out to give you an idea of what I can see--a graph for each interface's utilization.  Normally I'd have the browser window zoomed in enough that the three columns fill a 24" display and are easily readable, and easy to scroll through vertically.

 

Benefits:

  • I need just one screen that shows everything physically connected to those core routers.
  • My team sees the labels and information displayed; that helps them understand what needs to be worked on as we move those interfaces onto replacement Nexus 7009's.
  • My boss is able to track progress, see interfaces and throughput change.
  • His boss knows the work's being done, sees what's left to do, and can answer the CIO's questions quickly and easily.

 

What would you pay an outside vendor to build this kind of custom view for you?  $5K?  More?   Depending on the complexity and number of interfaces, I can start and complete a new multi-interface view, complete with custom labels and customized monitoring periods and granularity in less than ten minutes--AND provide a wealth of useful, specialized information to whichever customer wants it.  Better still, I can show them how to tweak the individual windows' views to reflect differing amounts of polling granularity and time covered by the graphs.

 

The ability to build this has filled needs for my team, for our IBM Big Iron team (always wanting to see how much of the multiple 10 Gig interfaces they're consuming at once), our Bio Med group (which LANTronix devices have the best or worst uptime--and where they're located!), the Help Desk (what sites or systems are impacted by any given port being down), and more.  I've built all these specialized views / reports and made them available via NPM to multiple teams, and the need for this info only grows.  I've also built custom multi-graph windows that provide information about:

  • Corporate WAN utilization for interfaces that connect campuses in multiple states
  • Contracted EMHR customers' uptime, reliability, and throughput
  • Performance and availability of WAN connected sites based on WAN vendor to track SLA's
  • Cisco UCS Chassis interface utilization
  • Vendor-specific systems and hardware (particularly useful when the vendor blames our network for their hardware or application issues)

 

Take a look at my "How to" page here:  How to create a simple custom view of multiple interfaces' bandwidth utilization.  Then talk over the idea of providing this kind of information with your favorite (or problem) customers, and you'll see you can build bridges and service unmet needs a lot easier than you expected.  It's another way NPM shines--with information already available to you!

Our Desktop Support Team (which I'll call "EUPS" from here on--for End User Platform Support) rarely unpatches data cables from switches when PC's or printers or other devices move or are retired.  That results in switches and blades with potentially many ports patched, and nothing using those ports.

 

That's a waste of money.

 

When someone has a new device to add to a network, possibly requiring a new data drop to be pulled to the network room, and there's no open ports on a stack of switches or in a chassis switch, we've got few options:

  1. Tell the customer "Sorry--there's no room in the inn."  You know that won't float.
  2. Tell the customer "Sorry, we're out of network switch ports, and no one in your department budgeted for adding a new switch or blade (between $5K and $8K, depending on the hardware).  When you can come up with the funds, we'll order the hardware and install it.  Probably in three weeks, if you get the funds to us today."  Nope, that won't float either--although sometimes we have to play hard ball to ensure folks think before they add new devices to a network.
  3. Take a new/spare switch from inventory, install it, and take heat from up above for not planning adequately.  Naw, that's not right, either.
  4. Run Rick's Favorite Report #1 and find which ports haven't been used in a year, have EUPS unpatch them, and then patch in the new devices.  TAH-DAH!  Money saved, customer up and running right away, budget conserved, resources reused--we're the Facilitators that make things happen instead of the No-Sayers that are despised.

 

So how does this magical report work?  Easily and efficiently!  Check out how to build it here:  https://thwack.solarwinds.com/docs/DOC-188091

 

Once it's built for a switch, it's easily modified to change switches--just change the switch name in the title, and change the switch name in Datasource1, and run the report.
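
If you prefer to pull the same data from the command line, a hypothetical sketch using the SwisPowerShell module is shown below. The entity and field names (Orion.NPM.Interfaces, Caption, Status) are from memory, and the host, credentials, and switch name are placeholders; verify the query against your own Orion schema (for example, in SWQL Studio) before relying on it.

# Hypothetical sketch: list "down" interfaces on one switch via the SolarWinds Information Service
Import-Module SwisPowerShell

$swis   = Connect-Swis -Hostname 'orion.example.local' -UserName 'admin' -Password 'changeme'
$switch = 'CORE-7609-01'    # substitute the switch the report targets

# Status 2 is assumed to mean "Down" here -- confirm the status codes in your environment
$query = @'
SELECT I.Caption, I.Status, I.Node.Caption AS Switch
FROM Orion.NPM.Interfaces AS I
WHERE I.Node.Caption = @switch AND I.Status = 2
ORDER BY I.Caption
'@

Get-SwisData $swis $query @{ switch = $switch }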

 

 

My team uses this almost every day, and I bet I use it weekly.  How many switches has this saved us from buying, how many ports have we been able to reuse?  Let's say we use it only twice a week.  That's over a hundred ports every year that are repurposed at no cost!  And since they're typically in different network rooms, you might say we avoid having to buy between fifty and a hundred new switches or blades every year. 

 

A network switch port costs about $169 (including 10/100/1000/POE) if it's in a high-density high-availability chassis switch that's fully populated, and about the same if it's in a stackable switch.

 

So the actual cost of 50 ports X $169 = $8,450.  That's not too bad since it's money not spent for recovered ports.  100 ports is $16,900.   Not insignificant, and not something you want to waste.

 

But let's build a worst-case scenario: 

  • Every port on a switch is used
  • You have to buy another switch every time someone needs to patch something new into the network.
  • 50 devices X $5K per switch is a QUARTER MILLION DOLLARS.
  • Perhaps a more realistic approach: Suppose your ports aren't so perfectly mispatched. Maybe only every tenth port to be patched requires adding another switch.  So if you find 100 ports incorrectly patched, you'd spend up to $80K on additional switches.

 

Some organizations offer a bonus to employees who discover and recommend process changes that result in significant cost decreases to the company, and the company bonus could be equal to 10% to 25% of the annual savings. If someone offered me 25% of $80K for saving the company from having to buy more switches every year, I'd be all over that!

 

And this easy SolarWinds report does it for free. Did the money saved pay for something significant to the company? Did you get a juicy bonus for your money-saving suggestion?

 

;^)

 

p.s.:  This report ALSO prevents unnecessary downtime--we don't end up guessing about the purpose of a port, then unpatching and repurposing mission-critical ports that are only used once every few months or years--because we label those ports in the switches.  The report includes those labels in its output along with how long the ports have been down.  It even displays them by length of down time, from longest to shortest.  Schweet!

OpsGenie – a cloud based alert and notification management solution – has recently announced integration with SolarWinds Web Help Desk. So, how does it work?

  1. WHD sends an email to OpsGenie, which in turn creates a new alert in OpsGenie.
  2. OpsGenie then sends alert actions back to WHD via the Web Help Desk API. OpsGenie can make a web request to WHD and update the ticket with a note. WHD needs to have a web-based URL that is accessible from the internet (http://hostname:port).
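
To make step 2 a little more concrete, the request OpsGenie issues back to WHD is, roughly speaking, an authenticated HTTP call against the ticket resource, as sketched below. The sketch is illustrative only: the resource path, payload shape, host, port, and API key are placeholders, so check the Web Help Desk API documentation for the exact format your WHD version expects.

# Illustrative only -- the rough shape of a web request that adds a note to a WHD ticket.
# Path, payload, host, port, and API key are placeholders; consult the WHD API docs for specifics.
$whdBase  = 'http://whd.example.com:8081'     # must be reachable from the internet for OpsGenie
$apiKey   = '<your-WHD-API-key>'
$ticketId = 12345

$body = @{ note = 'Alert acknowledged in OpsGenie' } | ConvertTo-Json

Invoke-RestMethod -Method Put `
    -Uri "$whdBase/helpdesk/WebObjects/Helpdesk.woa/ra/Tickets/${ticketId}?apiKey=${apiKey}" `
    -ContentType 'application/json' `
    -Body $body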

WHD-OpsGenie.jpg

 

By using OpsGenie with the SolarWinds Web Help Desk integration, you can forward SolarWinds Web Help Desk tickets to OpsGenie. OpsGenie can then determine the right people to notify based on on-call schedules, and notify them via email, text messages (SMS), phone calls, and iOS and Android push notifications. OpsGenie will continue to escalate the alert until it is acknowledged or closed.

For more information, please refer to the following support document.

If you have been keeping up with the Thwack product blog lately, you know that Drag & Drop Answers to Your Toughest IT Questions revealed PerfStack, the new way to view performance analysis by visualizing and correlating data within the Orion web interface, and that Get Your Head Out of the Clouds, and Your Data Too showed that you can collect configuration and metric data directly from Amazon Web Services® (AWS) and have that data visualized along with the rest of the environment you are already monitoring in SAM 6.4 or VMAN 7.1.  This is great news for any administrator who needs to troubleshoot a Hybrid IT application with on-premises VMs and AWS cloud instances.

 

The good news is that Virtualization Manager (VMAN) 7.1 allows you to leverage the "new hotness" found in PerfStack and cloud infrastructure monitoring to correlate and pinpoint where the application performance issue is in your hybrid infrastructure. In the following example, we have a hybrid cloud application with resources in the cloud (SQL AWS) as well as an on-premises virtual server (Analytics VM), both of which are monitored with VMAN. As with most IT incidents, the administrator is left trying to figure out what exactly is causing the performance degradation in the application, with little to go on other than "the application is slow." Using PerfStack, you can quickly dive into each KPI and drag and drop the metrics you want to compare until the troubleshooting surfaces what the issue is or isn't. Because VMAN includes cloud infrastructure monitoring, you can add AWS counters from your cloud instances into PerfStack and correlate them with other cloud instances or with your on-premises VMs to troubleshoot your hybrid infrastructure with VMAN.

hybrid-2.jpg

 

In the example above, the cloud instance, SQL AWS, is experiencing some spikes in CPU load, but it is well within normal operating parameters, while the on-premises VM, Analytics VM, is experiencing very little CPU load. With PerfStack, my attention is easily drawn to the memory utilization being high on both servers that participate in the application's performance, and the fact that my on-premises VM has an active alert tells me I need to dig into that VM further.

 

By adding Virtualization Manager counters that are indicators of VM performance (CPU Ready, ballooning, swapping), I see that there are no hidden performance issues from the hypervisor configuration (figure below).

Perfstack -2.jpg

From the Virtualization Manager VM details page for the Analytics VM, I see that the active alert is related to large snapshots on the VM, which can be a cause of performance degradation. In addition, the VM resides on a host with an active alert for host CPU utilization, which may be a growing concern for me in the near future.  Monitoring hybrid cloud infrastructure in VMAN allows the administrator to create a highly customized view for discovery and troubleshooting, with the contextual data necessary minus the alert noise that can regularly occur.

activealerts.jpg

One of the added benefits of monitoring cloud instances with VMAN is that you can now build on the single-pane-of-glass view that will be the monitoring authority for both your on-premises and cloud environments. Not only is it essential to have one place to monitor and correlate your data when applications span hybrid environments, but having visibility into both environments from SolarWinds allows you to determine what requirements will be necessary when moving workloads into your AWS cloud or off your AWS environment.

 

on-prem2.jpg         cloud-sum-2.jpg

 

For more information on PerfStack and Hybrid end-to-end monitoring, check out the following link.

Hybrid IT – Hybrid Infrastructure | SolarWinds

 

Don't forget to check these blog posts for deeper dives into PerfStack & Cloud Monitoring respectively.

Drag & Drop Answers to Your Toughest IT Questions

Get Your Head Out of the Clouds, and Your Data Too
