
Product Blog


As the Product Manager for Online Demos, I need to install the SolarWinds Orion platform frequently... sometimes as many as four times per month.  This can get tiresome, but I've gotten some assistance from PowerShell, the community, and some published help documents.


I've thought about scripting these out for a while now and I came up with a list of things to do.

  1. Build the virtual machines
  2. Pre-configure the virtual machine's disks
  3. Prep the machine for installation
  4. Install the software (silently)
  5. Finalize the installation (silently)

This post covers the first step in this multi-step process: building your virtual machine.


Depending on your hypervisor, there are two different paths to follow: Hyper-V or VMware.  In my lab, I've got both because I try to be as agnostic as possible.  It's now time to start building the script.  I'm going to use PowerShell.


Scripting Preference: PowerShell

Preference Reasoning: I know it and I'm comfortable using it.


Hyper-V vs. VMware


Each hypervisor has different requirements when building a virtual machine, but some are the same for both - specifically the number & size of disks, the CPU count, and the maximum memory.  The big deviation comes from the way each hypervisor handles memory & CPU reservations.


Hyper-V handles CPU reservation as a percentage of the total, whereas VMware handles it as a number of MHz.  I've elected to keep the reservation as a percentage.  It seemed easier to keep straight (in my head) and only required minor tweaks to the script.


Step 1 - Variable Declaration

  • VM Name [string] - both
  • Memory (Max Memory for VM) [integer] - both
  • CPU Count (number of CPUs) [integer] - both
  • CPU Reservation (percentage) [integer] - both
  • Disk Letters and Sizes - both
  • Memory at Boot (Memory allocated at boot) [integer] - Hyper-V
  • Memory (Minimum) [integer] - Hyper-V
  • Use Dynamic Disks [Boolean] - Hyper-V
  • VLAN (VLAN ID to use for Network Adapter) [integer] - Hyper-V
  • vCenter Name [string] - VMware
  • ESX Host [string] - VMware
  • Disk Format ("thin", "thick", etc.) [string] - VMware
  • VLAN (VLAN name to use for Network Adapter) [string] - VMware
  • Guest OS (identify the Operating System) [string] - VMware


Step 2 - Build the VM

Building the VM is an easy step that takes only one line using the "New-VM" command (regardless of hypervisor).  The syntax and parameters change depending on the hypervisor, but otherwise we just build the shell.  In Hyper-V, I do this in two commands; in VMware, I do it in one.


Step 3 - Assign Reservations

This is a trickier step in VMware because it uses MHz and not percentages.  For that, I need to know the MHz rating of the processors in the host.  Thankfully, this can be calculated pretty easily.  Then I just set the CPU & memory reservations based on each hypervisor's requirements.
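To make the conversion concrete, here is the reservation arithmetic from the VMware branch of the script, sketched in Python for illustration (the script itself does this in PowerShell with [math]::Floor); the host numbers below are hypothetical:

```python
import math

# Hypothetical host: 16 logical CPUs totaling 41,600 MHz (2.6 GHz each)
host_total_mhz = 41600
host_num_cpu = 16
mhz_per_cpu = math.floor(host_total_mhz / host_num_cpu)  # 2600 MHz per CPU

# VM settings from the script: 4 CPUs reserved at 50%
cpu_count = 4
cpu_reserve_pct = 50
reservation_mhz = int(cpu_count * (cpu_reserve_pct / 100) * mhz_per_cpu)
print(reservation_mhz)  # 5200 - the value passed to -CpuReservationMhz
```

The percentage stays constant in the variable declarations; only this last multiplication differs between the two hypervisor branches.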


Step 4 - Assign VLAN

Hyper-V uses the VLAN ID (integer) and VMware uses the VLAN Name (string).  It's nearly the same command with just a different parameter.


Step 5 - Congratulate yourself.



Execution Time: 9 seconds on either hypervisor.

Time saved: at least 10 minutes.


The full script is below.


#region Variable Declaration
$VMName       = "OrionServer" # Virtual Machine Name
$Architecture = "Hyper-V"     # (or "VMware")
# Global Variable Declaration
$CPUCount     = 4             # Number of CPUs to give the VM
$CPUReserve   = 50            # Percentage of CPU being reserved
$RAMMax       = 16GB          # Maximum Memory
# Sizes and count of the disks
$VHDSizes = [ordered]@{ "C" =  40GB; # Boot
                        "D" =  30GB; # Page
                        "E" =  40GB; # Programs
                        "F" =  10GB; # Web
                        "G" =  10GB  # Logs
                      }
#endregion Variable Declaration
# Architecture-specific commands
if ( $Architecture -eq "Hyper-V" )
{
    $RAMBoot      = 8GB           # Startup Memory
    $RAMMin       = 8GB           # Minimum Memory (should be the same as RAMBoot)
    $DynamicDisks = $true         # Use Dynamic Disks?
    $Vlan         = 300           # VLAN assignment for the Network Adapter
    #region Import the Hyper-V Module & Remove the VMware Module (if loaded)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "VMware.PowerCLI" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "Hyper-V" -Force
    }
    #endregion Import the Hyper-V Module & Remove the VMware Module
    # Assume that we want to make all the VHDs in the default location for this server.
    $VHDRoot = Get-Item -Path ( Get-VMHost | Select-Object -ExpandProperty VirtualHardDiskPath )
    # Convert the hash table of disks into PowerShell objects (easier to work with)
    $VHDs = $VHDSizes.Keys | ForEach-Object { New-Object -TypeName PSObject -Property ( [ordered]@{ "ServerName" = $VMName; "Drive" = $_ ; "SizeBytes" = $VHDSizes[$_] } ) }
    # Extend this object with the name that we'll want to use for the VHD
    # My naming scheme is [MACHINENAME]_[DriveLetter].vhdx - adjust to match your own.
    $VHDs | Add-Member -MemberType ScriptProperty -Name VHDPath -Value { Join-Path -Path $VHDRoot -ChildPath ( $this.ServerName + "_" + $this.Drive + ".vhdx" ) } -Force
    # Create the VHDs (skipping any that already exist)
    $VHDs | ForEach-Object {
        if ( -not ( Test-Path -Path $_.VHDPath -ErrorAction SilentlyContinue ) )
        {
            Write-Verbose -Message "Creating VHD at $( $_.VHDPath ) with size of $( $_.SizeBytes / 1GB ) GB"
            New-VHD -Path $_.VHDPath -SizeBytes $_.SizeBytes -Dynamic:$DynamicDisks | Out-Null
        }
        else
        {
            Write-Host "VHD: $( $_.VHDPath ) already exists!" -ForegroundColor Red
        }
    }
    # Step 1 - Create the VM itself (shell) with no hard drives to start
    $VM = New-VM -Name $VMName -MemoryStartupBytes $RAMBoot -SwitchName ( Get-VMSwitch | Select-Object -First 1 -ExpandProperty Name ) -NoVHD -Generation 2 -BootDevice NetworkAdapter
    # Step 2 - Bump the CPU count and set the reservation
    $VM | Set-VMProcessor -Count $CPUCount -Reserve $CPUReserve
    # Step 3 - Set the memory for the VM
    $VM | Set-VMMemory -DynamicMemoryEnabled:$true -StartupBytes $RAMBoot -MinimumBytes $RAMMin -MaximumBytes $RAMMax
    # Step 4 - Set the VLAN for the network adapter
    $VM | Get-VMNetworkAdapter | Set-VMNetworkAdapterVlan -Access -VlanId $Vlan
    # Step 5 - Add each of the VHDs
    $VHDs | ForEach-Object { $VM | Add-VMHardDiskDrive -Path $_.VHDPath }
}
elseif ( $Architecture -eq "VMware" )
{
    #region Import the VMware Module & Remove the Hyper-V Module (if loaded)
    # This is done because there are collisions in the names of functions
    if ( Get-Module -Name "Hyper-V" -ErrorAction SilentlyContinue )
    {
        Remove-Module -Name "Hyper-V" -Confirm:$false -Force
    }
    if ( -not ( Get-Module -Name "VMware.PowerCLI" -ErrorAction SilentlyContinue ) )
    {
        Import-Module -Name "VMware.PowerCLI" -Force
    }
    #endregion Import the VMware Module & Remove the Hyper-V Module
    $vCenterServer = "vCenter.Demo.Lab"
    $DiskFormat    = "Thin"                   # or "Thick" or "EagerZeroedThick"
    $VlanName      = "External - VLAN 300"
    $GuestOS       = "windows9Server64Guest"  # OS identifier of the machine
    #region Connect to vCenter server via trusted Windows credentials
    if ( -not ( $global:DefaultVIServer ) )
    {
        Connect-VIServer -Server $vCenterServer
    }
    #endregion Connect to vCenter server via trusted Windows credentials
    # Find the host with the most free MHz, or specify one by using:
    # $VMHost = Get-VMHost -Name "ESX Host Name"
    $VMHost = Get-VMHost | Sort-Object -Property @{ Expression = { $_.CpuTotalMhz - $_.CpuUsageMhz } } -Descending | Select-Object -First 1
    # Calculate the MHz for each processor on the host
    $MhzPerCpu = [math]::Floor( $VMHost.CpuTotalMhz / $VMHost.NumCpu )
    # Convert the disk sizes to a list of numbers (for the New-VM command)
    $DiskSizes = $VHDSizes.Keys | Sort-Object | ForEach-Object { $VHDSizes[$_] / 1GB }
    # Create the VM
    $VM = New-VM -Name $VMName -ResourcePool $VMHost -DiskGB $DiskSizes -MemoryGB ( $RAMMax / 1GB ) -DiskStorageFormat $DiskFormat -GuestId $GuestOS -NumCpu $CPUCount
    # Set up minimum resources
    # CPU reservation is Number of CPUs * Reservation (as a percentage) * MHz per processor
    $VM | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz ( $CPUCount * ( $CPUReserve / 100 ) * $MhzPerCpu ) -MemReservationGB ( $RAMMax / 2GB )
    # Set my VLAN
    $VM | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName $VlanName -Confirm:$false
}
else
{
    Write-Error -Message "Neither Hyper-V nor VMware defined as `$Architecture"
}


Next step is to install the operating system.  I do this with Windows Deployment Services.  Your mileage may vary.


After that, we need to configure the machine itself.  That'll be the next post.


About this post:

This post is a combination of two posts on my personal blog: Building my Orion Server [VMware Scripting Edition] – Step 1 & Building my Orion Server [Hyper-V Scripting Edition] – Step 1.

We’re happy to announce the release of SolarWinds® Port Scanner, a standalone free tool that delivers a list of open, closed, and filtered ports for each scanned IP address.

Designed for network administrators at businesses of all sizes, Port Scanner gives your team insight into TCP and UDP port statuses, can resolve hostnames and MAC addresses, and detects operating systems. Even more importantly, it enables users to run the scan from a CLI and export the results to a file.

What else does Port Scanner do?

  • Supports threading using advanced adaptive timing behavior based on network status monitoring and feedback mechanisms, in order to shorten the scan run time
  • Allows you to save scan configurations into a scan profile so you can run the same scan again without redoing previous configurations
  • Resolves hostnames using default local machine DNS settings, with the alternative option to define a DNS server of choice
  • Exports to XML, XLSX, and CSV file formats

For more detailed information about Port Scanner, please see the SolarWinds Port Scanner Quick Reference guide on THWACK®.


Download SolarWinds Port Scanner:

We are excited to share that we've reached GA for Web Help Desk (WHD) 12.5.1.


This service release includes:


Improved application support

  • Microsoft® Office 365
  • Microsoft Exchange Server 2016
  • Microsoft Windows Server® 2016

Improved protocol and port management

  • The Server Options setting provides a user interface to manage the HTTP and HTTPS ports from the Web Help Desk Console. You can enable the listening port to listen for HTTP or HTTPS requests, configure the listening port numbers, and create a custom port for generated URL links.

Improved keystore management

  • The Server Options setting also includes Keystore Options. Use this option to create a new Java Keystore (JKS) or a Public-Key Cryptography Standards #12 (PKCS12) KeyStore to store your SSL certificates.

Improved SSL certificate management

  • The Certificates setting allows you to view the SSL certificates in the KeyStore that provide a secure connection between a resource and the Web Help Desk Console.

Improved APNS management

  • The Certificates setting also allows you to monitor your Apple Push Certification Services (APNS) Certificate expiration date and upload a new certificate prior to the expiration date.

Brand new look and feel

  • The new user interface offers clean visual design that eliminates visual noise to help you focus on what is important.

Other improvements

  • Apache Tomcat updated to 7.0.77


We encourage all customers to upgrade to this latest release which is available within your customer portal.

Thank you!

SolarWinds Team

With the key application I support, our production environment is spread across a Citrix farm of 24 servers connected to an AppServer farm of 6 servers, all with load-balanced Web and App Services.  So the question is: when is my application down?  If a Citrix server is offline?  If a Web or App Service is down on one AppServer?  We have to assess the criticality of our application status.


We have determined that if one service is down, it does not affect the availability of the application or even the experience of our users.  Truth be told, the application can support all users even if only one AppServer is running the Document Service, for example.  Of course, in that scenario we have no redundancy and no safety net.


So I created a script that lets us look at a particular service and, based on the number of instances running, determine a criticality.


Check out Check Multiple Nodes for a Running Service, Then Show Application Status Based on Number of Instances


Within this script, you can identify a list of servers to poll and a minimum critical value, and return either Up, Warning, or Critical for the application component based on the number of instances.
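The thresholding idea can be sketched as follows (shown in Python for illustration; the SAM component itself is a PowerShell script, and the function and parameter names below are hypothetical):

```python
def component_status(instances_running: int, min_critical: int = 1) -> str:
    """Map the number of servers found running the service to a component status."""
    if instances_running > min_critical:
        return "Up"        # redundancy intact
    if instances_running == min_critical:
        return "Warning"   # application still works, but there is no safety net
    return "Critical"      # below the minimum needed to serve users

# e.g., polling 6 AppServers for the Document Service:
print(component_status(6))  # Up
print(component_status(1))  # Warning
print(component_status(0))  # Critical
```

The minimum critical value is the knob: with it set to 1, a single surviving instance keeps the application available but drops the component into Warning.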

What is NIST 800-171 and how does it differ from NIST 800-53?


NIST SP 800-171 – "Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations" provides guidance for protecting the confidentiality of Controlled Unclassified Information (CUI) residing in non-federal information systems and organizations. The publication is focused on information that is shared by federal agencies with a non-federal entity. If you are a contractor or sub-contractor to governmental agencies and CUI resides on your information systems, NIST 800-171 will impact you.


Cybercriminals regularly target federal data such as healthcare records, Social Security numbers, and more. It is vital that this information is protected when residing on non-federal information systems. NIST 800-171 has an implementation deadline of 12/31/2017, which has contractors scrambling.


Many of the controls contained within NIST 800-171 are based on NIST 800-53, but they are tailored to protect CUI in nonfederal information systems. There are 14 “families” of controls within NIST 800-171, but before we delve into those, we should probably discuss Controlled Unclassified Information (CUI) and what it is.


There are several categories and subcategories of CUI, which you can view here. You may be familiar with Sensitive but Unclassified (SBU) information and the various categories that fell under it; CUI replaces SBU and all its sub-categories. CUI is information that is not classified but is in the federal government’s best interest to protect.


NIST 800-171 Requirements

As we mentioned above, there are 14 families of controls within NIST 800-171. These are:


  • Access Control
  • Awareness and Training
  • Audit and Accountability
  • Configuration Management
  • Identification and Authentication
  • Incident Response
  • Maintenance
  • Media Protection
  • Personnel Security
  • Physical Protection
  • Risk Assessment
  • Security Assessment
  • System and Communications Protection
  • System and Information Integrity


We will now delve further into each of these categories and discuss the basic and derived security requirements where SolarWinds® products can help. Basic security requirements are high-level requirements, whereas derived requirements are the controls you need to put in place to meet the high-level objective of the basic requirements.


3.1 Access Control

3.1.1 – Limit information system access to authorized users, processes acting on behalf of authorized users, or devices (including other information systems).


3.1.2 – Limit information system access to the types of transactions and functions that authorized users are permitted to execute.


This category limits access to systems to authorized users only and limits user activity to authorized functions only. There are a few areas within Access Control where our products can help, but many of these controls are implemented at the policy or device levels.


3.1.5 – Employ the principle of least privilege, including for specific security functions and privileged accounts.

SolarWinds Log & Event Manager (LEM) can audit deviations from least privilege—e.g., unauthorized file access and unexpected system access. Auditing can be done in real time or via reports. LEM can also monitor Microsoft® Active Directory® (AD) for unexpected escalated privileges being assigned to a user.


3.1.6 – Use non-privileged accounts when accessing non-security functions.

SolarWinds LEM can monitor privileged account usage and audit the use of privileged accounts for non-security functions.


3.1.7 – Prevent non-privileged users from executing privileged functions and audit the execution of such functions.

Execution of privileged functions such as creating and modifying registry keys and editing system files can be audited in real-time or via reports in LEM. On the network device side, SolarWinds Network Configuration Manager (NCM) includes a change approval system which helps ensure that non-privileged users cannot execute privileged functions without approval from a privileged user.


3.1.8 – Limit unsuccessful logon attempts.

The number of logon attempts before lockout is generally set at the domain/system policy level, but LEM can confirm whether the lockout policy is being enforced via reports/nDepth. LEM can also be used to report on unsuccessful logon attempts, as well as automatically lock a user account via the Active Response feature.


3.1.12 – Monitor and control remote access sessions.

LEM can monitor and report on remote logons. Correlation rules can be configured to alert and respond to unexpected remote access (e.g., access outside normal business hours). SolarWinds NCM can audit how remote access is configured on your network device, identify any configuration violations, and remediate accordingly.


3.1.21 – Limit use of organizational portable storage devices on external information systems.

LEM can audit and restrict usage of portable storage devices with its USB Defender feature.


3.2 Awareness and Training

3.2.1 – Ensure that managers, systems administrators, and users of organizational information systems are made aware of the security risks associated with their activities and of the applicable policies, standards, and procedures related to the security of organizational information systems.


3.2.2 – Ensure that organizational personnel are adequately trained to carry out their assigned information security-related duties and responsibilities.


This section relates to user awareness training, especially around information security. Users should be aware of policies, procedures, and attack vectors such as phishing, malicious email attachments, and social engineering. Unfortunately, SolarWinds can’t provide information security training for your users (we would if we could!).


3.3 Audit and Accountability

3.3.1 – Create, protect, and retain information system audit records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity.


3.3.2 – Ensure that the actions of individual information system users can be uniquely traced to those users so they can be held accountable for their actions.


This set of controls helps ensure that audit logs are in place and that they are monitored to identify unauthorized or suspicious activity. These controls relate to the data you want LEM to ingest and how those logs are protected and retained. LEM can help satisfy some of the controls in this section directly. NCM also includes some powerful features which can assist with the Audit and Accountability controls, including real-time change detection, configuration change reports, and a change approval system.


3.3.3 – Review and update audited events.

LEM helps with the review of audited events, provided the appropriate logs are sent to LEM.


3.3.4 – Alert in the event of an audit process failure.

LEM can generate alerts when agents go offline or the log storage database is running low on space. LEM can also alert on behalf of systems when audit logs are cleared—e.g., if a user clears the Windows® event log.


3.3.5 – Correlate audit review, analysis and reporting processes for investigation and response to indications of inappropriate, suspicious, or unusual activity.

LEM’s correlation engine and reporting can assist with audit log reviews and help ensure that administrators are alerted to indications of inappropriate, suspicious, or unusual activity.


3.3.6 – Provide audit reduction and report generation to support on-demand analysis and reporting.

Audit logs can generate a huge amount of information. LEM can analyze event logs and generate scheduled or on-demand reports to assist with analysis. However, you will need to ensure that your audit policies and logging levels are appropriately configured.


3.3.7 – Provide an information system capability that compares and synchronizes internal system clocks with an authoritative source to generate time stamps for audit records.

LEM satisfies this requirement through Network Time Protocol server synchronization. LEM also includes a predefined correlation rule that monitors for time synchronization failures.


3.3.8 – Protect audit information and audit tools from unauthorized access, modification, and deletion.

LEM helps satisfy this requirement through the various mechanisms outlined in this post: Log & Event Manager Appliance Security and Data Protection.


3.3.9 – Limit management of audit functionality to a subset of privileged users.

As per the response to 3.3.8, LEM provides role-based access control, which limits access and functionality to a subset of privileged users.


3.4 Configuration Management

3.4.1 Establish and maintain baseline configurations and inventories of organizational information systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.


3.4.2 Establish and enforce security configuration settings for information technology products employed in organizational information systems.


Minimum acceptable configurations must be maintained and change management controls must be in place. Inventory comes into play here, too. NCM will have the biggest impact here (on the network device side), thanks to its ability to establish baseline configurations and report on violations. LEM and SolarWinds Patch Manager can also play roles within this set of controls.


3.4.3 – Track, review, approve/disapprove, and audit changes to information systems.

NCM’s real-time change detection, change approval management and tracking reports can be used to detect, validate, and document changes to network devices. LEM can monitor and audit changes to information systems, provided the appropriate logs are sent to LEM.


3.4.8 – Apply deny-by-exception (blacklist) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

LEM can monitor for the use of unauthorized software. Thanks to Active Response, you can configure LEM to automatically kill nonessential programs and services.


3.4.9 – Control and monitor user-installed software.

LEM can audit software installations and alert accordingly. Patch Manager can inventory machines on your network and report on the software and patches installed.


3.5 Identification and Authentication

3.5.1 Identify information system users, processes acting on behalf of users, or devices.


3.5.2 Authenticate (or verify) the identities of those users, processes, or devices, as a prerequisite to allowing access to organizational information systems.


This section includes controls such as using multifactor authentication, enforcing password complexity and storing/transmitting passwords in an encrypted format. SolarWinds does not have products to support these requirements.


3.6 Incident Response

3.6.1 Establish an operational incident-handling capability for organizational information systems that includes adequate preparation, detection, analysis, containment, recovery, and user response activities.


3.6.2 Track, document, and report incidents to appropriate officials and/or authorities both internal and external to the organization.


There is only one derived security requirement within the Incident Response section, namely:

3.6.3 Test the organizational incident response capability.


LEM can play a role in incident generation and the subsequent investigation. LEM can generate an incident based on a defined correlation trigger and respond to it via Active Responses. Reports can be produced based on detected incidents.


3.7 Maintenance  

3.7.1 Perform maintenance on organizational information systems.


3.7.2 Provide effective controls on the tools, techniques, mechanisms, and personnel used to conduct information system maintenance.


SolarWinds isn’t relevant to most of the requirements in this section. Controls contained within the Maintenance category include ensuring equipment removed for off-site maintenance is sanitized of CUI, checking media for malicious code, and requiring multifactor authentication for nonlocal maintenance sessions.


LEM can assist with the 3.7.6 requirement that states “Supervise the maintenance activities of maintenance personnel without required access authorization.” Provided the appropriate logs are being generated and sent to LEM, reports can be used to audit the activity performed by maintenance personnel. NCM also comes into play, allowing you to compare configurations before and after maintenance windows.


3.8 Media Protection

3.8.1 Protect (i.e., physically control and securely store) information system media containing CUI, both paper and digital.


3.8.2 Limit access to CUI on information system media to authorized users.


3.8.3 Sanitize or destroy information system media containing CUI before disposal or release for reuse.


Most of the controls within the Media Protection section are not applicable to SolarWinds products. However, LEM can assist with one control.


3.8.7 – Control the use of removable media on information system components. 

LEM’s USB Defender feature can monitor for usage of USB removable media and can automatically detach USB devices when unauthorized usage is detected.


3.9 Personnel Security

3.9.1 Screen individuals prior to authorizing access to information systems containing CUI.


3.9.2 Ensure that CUI and information systems containing CUI are protected during and after personnel actions such as terminations and transfers.


There are no derived security requirements within this section. LEM can assist with 3.9.2 by auditing usage of credentials of terminated personnel, validating that accounts are disabled in a timely manner, and validating group/permission changes after a personnel transfer.


3.10 Physical Protection

3.10.1 Limit physical access to organizational information systems, equipment, and the respective operating environments to authorized individuals.


3.10.2 Protect and monitor the physical facility and support infrastructure for those information systems.


SolarWinds cannot assist with any of the physical security controls contained within this section.


3.11 Risk Assessment

3.11.1 Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational information systems and the associated processing, storage, or transmission of CUI.


Vulnerable software poses a great risk to every organization. These vulnerabilities should be identified and remediated; that is exactly what the controls within this section aim to do.


Risk Assessment involves lots of policies and procedures; however, Patch Manager can be leveraged to keep systems up to date with the latest security patches.


3.11.2 – Scan for vulnerabilities in the information system and applications periodically and when new vulnerabilities affecting the system are identified.

Patch Manager cannot perform vulnerability scans, but it can be used to identify missing application patches on your Windows machines. NCM identifies risks to network security based on device configuration. NCM also accesses the NIST National Vulnerability Database to get updates on potential emerging vulnerabilities in Cisco® ASA and IOS® based devices.


3.11.3 – Remediate vulnerabilities in accordance with assessments of risk.

Patch Manager can remediate software vulnerabilities on your Windows machines via Microsoft® and third-party updates. Patch Manager can be used to install updates on a scheduled basis or on demand. On the network device side, NCM performs Cisco IOS® firmware upgrades to potentially mitigate identified vulnerabilities.


3.12 Security Assessment

3.12.1 – Periodically assess the security controls in organizational information systems to determine if the controls are effective in their application.


3.12.2 – Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational information systems.


3.12.3 – Monitor information system security controls on an ongoing basis to ensure the continued effectiveness of the controls.

We can help with the monitoring of the Security Assessment controls via modules such as Network Configuration Manager (NCM) and Log & Event Manager (LEM). When monitoring security controls and performing assessments, network configuration should not be overlooked. NCM enables you to standardize network device configuration, detect out-of-process changes, audit configurations, and correct violations. LEM can monitor event logs relating to information system security and perform correlation, alerting, reporting, and more. SolarWinds provides several other modules that support monitoring the health and performance of your information systems and networks.


3.13 System and Communications Protection

3.13.1 – Monitor, control, and protect organizational communications (i.e., information transmitted or received by organizational information systems) at the external boundaries and key internal boundaries of the information systems.


3.13.2 – Employ architectural designs, software development techniques, and systems engineering principles that promote effective information security within organizational information systems.


Many of the controls in this section involve protecting confidentiality of CUI at rest, ensuring encryption is used and keys are appropriately managed, and networks are segmented. However, the basic security requirement 3.13.1 is certainly an area where SolarWinds can assist. This requirement involves monitoring (and controlling/protecting) communication at external and internal boundaries. LEM can collect logs from your network devices and alert to any suspicious traffic. SolarWinds NetFlow Traffic Analyzer (NTA) can also be used to monitor traffic flows for specific protocols, applications, domain names, ports, and more.


3.13.6 Deny network communications traffic by default and allow network communications traffic by exception (i.e., deny all, permit by exception).

LEM can ingest logs from network devices, providing auditing to validate that traffic is being appropriately denied/permitted. NPM and NTA can also be used to monitor traffic. NCM can provide configuration reports to help ensure that your access control lists are compliant with “deny all, permit by exception,” as well as providing the ability to execute scripts to make ACL changes en masse.


3.13.14 – Control and monitor the use of VoIP technologies.

NPM/NTA and SolarWinds VoIP & Network Quality Manager can be used to monitor VoIP traffic/ports.


3.14 System and Information Integrity

3.14.1 – Identify, report, and correct information and information system flaws in a timely manner.


3.14.2 – Provide protection from malicious code at appropriate locations within organizational information systems.


3.14.3 – Monitor information system security alerts and advisories and take appropriate actions in response.


The controls within this section set out to ensure that neither the information system nor the information within it has been compromised. Patch Manager and LEM can play a role in system/information integrity.


3.14.4 – Update malicious code protection mechanisms when new releases are available.

Essentially, this control requires you to patch your systems. Patch Manager provides the ability to patch your systems with Microsoft and third-party updates on a scheduled or ad-hoc basis. Custom packages can also be created to update products that are not included in our catalog.


3.14.5 – Perform periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed.

This control ensures that you have an anti-virus tool in place to scan for malicious files. LEM can receive alerts from a wide range of anti-virus/anti-malware solutions to correlate, alert, and respond to identified threats.


3.14.6 – Monitor the information system, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks.

This security control is very well suited to LEM: the correlation engine can monitor logs for any suspicious or malicious behavior. LEM can be used to monitor inbound/outbound traffic, while NPM/NTA can be used to detect unusual traffic patterns.


3.14.7 – Identify unauthorized use of the information system.

LEM can monitor for unauthorized activity. User-defined groups come into play here, allowing you to create blacklists/whitelists of authorized users and events. 


Still with me? As you can see, there is a substantial number of requirements within the 14 sets of controls, but when implemented correctly, the framework can go a long way to ensure the confidentiality, integrity, and availability of Controlled Unclassified Information and your information system as a whole. The SolarWinds products I’ve mentioned above all include a wide variety of out-of-the-box content such as rules, alerts, and reports that can help with the NIST 800-171 requirements.


I hope this blog post has helped you untangle some of the NIST 800-171 requirements and shown how you can leverage SolarWinds products to help. If you’ve got any questions or feedback, please feel free to comment below. 

Version 2.0 is a new major release of GNS3, which brings significant architectural changes and new features to a growing community of over 900,000 registered network professionals. Since the project's inception, GNS3 has made over 79 releases and been downloaded over 13,693,000 times.


GNS3 started as a desktop-only application, from the first version up to version 0.8.3. With the more recent 1.x versions, GNS3 grew to allow the use of remote servers. With version 2.0, multiple clients can control GNS3 at the same time, and all of the “application intelligence” has been moved to the GNS3 server.


What does it mean?


  • Third parties can build applications that control GNS3. This will also allow individuals to easily configure and spin up a virtual network from pre-established templates

  • Multiple users can be connected to the same project and see each other's modifications in real time, allowing individuals in remote locations to work and collaborate on projects in the GNS3 virtual network environment

  • No need to duplicate your settings on different computers if they connect to the same central server.

  • It is easier to contribute to GNS3, as the separation between the graphical user interface and the server/backend is a lot clearer

  • GNS3 now supports the following vendor devices: Arista vEOS, Cumulus VX, Brocade Virtual ADX, Checkpoint GAiA, A10 vThunder, Alcatel 7750, NetScaler VPX, F5 BIG-IP LTM VE, MikroTik CHR, Juniper vMX and more....

All the complexity of connecting multiple emulators has been abstracted in what we call the controller (part of the GNS3 server). From a user point of view, this means it is possible to start a packet capture on any link, connect anything to a cloud, etc. Finally, by using the NAT object in GNS3, connections to the Internet work out of the box (note this is only available with the GNS3 VM or on Linux with libvirt installed).


Get started with GNS3 v2.0 now!




Save as you go

Your projects are automatically saved as you make changes to them; there is no longer any need to press a save button. An additional benefit is that this avoids synchronization issues between the emulators’ virtual disks and projects.


Multiple users can be connected to the same project

Multiple users can be connected to the same project, see each other's changes in real time, and collaborate. If you open a console to a router, you will see the commands sent by other users.


Smart packet capture

Starting a packet capture is now as easy as clicking on a link and asking for a new capture. GNS3 will pick the best interface to capture from. The packet capture dialog has also been redesigned to allow changing the name of the output file or to prevent Wireshark from starting automatically:



Developers can find out how to control GNS3 using an API here: Thanks to our controller, it is no longer necessary to deal with multiple GNS3 servers, since most of the information is available in the API. All the visual objects are exposed as SVG.
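As a rough illustration of driving the server from a script (a sketch, not official client code; the port and base path assume a default local GNS3 2.x server):

```python
import json
import urllib.request

GNS3_SERVER = "http://localhost:3080"  # default GNS3 2.x server address (assumption)

def endpoint(path):
    """Build a v2 controller API URL."""
    return f"{GNS3_SERVER}/v2/{path.lstrip('/')}"

def list_projects():
    """Fetch every project known to the controller."""
    with urllib.request.urlopen(endpoint("projects")) as resp:
        return json.load(resp)

# Example (requires a running local GNS3 server):
# for project in list_projects():
#     print(project["name"], project["project_id"])
```

Because the intelligence lives in the server, the same few lines work whether the backend is your laptop or a shared remote GNS3 VM.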


Key Videos on GNS3 v2.0


This video explains and demonstrates how to upgrade both the GNS3 Windows GUI and the GNS3 VM to version 2.0.0 of GNS3. This video uses Windows 10 for the demonstrations.



More helpful videos on GNS3 v2.0:


Our distributor in Germany has created French, German, and Spanish language packs. The language packs are not full versions. If you have already installed the English version, copy (replace) the language pack files into the program installation directory:


German v12.0.4 files:

French v12.0.4 files:

Spanish v12.0.3 files:

Is "A Picture Worth A Thousand Words?"  Maybe even more than that!


I frequently receive requests to show departments / teams how their system(s) may be performing on the network.  It's not enough to just tell them "Your server's latency is sub-millisecond, it's connected at 1 Gb/s at Full Duplex, and no network errors have been recorded on its switchports in the last three months."  That kind of information is accurate, but it may not fill the actual need behind the request.


Maybe they want to know a LOT more than basic network status, and NPM can show it for a system:

  • Latency
  • QoE
  • Throughput on multiple interfaces
  • Traffic trending over time--especially with an eye towards predicting whether more resources must be allocated for their system(s) in the future:
    • More memory
    • More bandwidth
    • More storage
  • Growth over the last year in various metrics:
    • Server bandwidth consumed
    • WAN bandwidth utilization
    • Latency changes


I never found a canned NPM Report that would show everything the customer wanted.  But I learned how to build one!


Check out my method of building a Custom View that shows multiple windows in a single screen--my own little "single-pane-of-glass" deployment here:  How to create a simple custom view of multiple interfaces' bandwidth utilization


It's incredibly easy to build a blank custom View that has as many monitors/ windows/ reports as I want:

  1. Create a new View
  2. Add the right number of columns and add the right Custom HTML (or other) resources to them
  3. Edit those resources to display useful and intuitive titles and subtitles
  4. Test that they show the time frames and polling frequency most useful to the customers
  5. Give the customer access to them
  6. Sit back and listen to the praise and excitement!



I just built a new view for all the interfaces on a Core 7609 yesterday using this process, and will build another one today on another 7609.  In the screen shot below I've zoomed far out to give you an idea of what I can see--a graph for each interface's utilization.  Normally I'd have the browser window zoomed in enough that the three columns fill a 24" display and are easily readable, and easy to scroll through vertically.



  • I need just one screen that shows everything physically connected to those core routers.
  • My team sees the labels and information displayed; that helps them understand what needs to be worked on as we move those interfaces onto replacement Nexus 7009's.
  • My boss is able to track progress, see interfaces and throughput change.
  • His boss knows the work's being done, sees what's left to do, and can answer the CIO's questions quickly and easily.


What would you pay an outside vendor to build this kind of custom view for you?  $5K?  More?   Depending on the complexity and number of interfaces, I can start and complete a new multi-interface view, complete with custom labels and customized monitoring periods and granularity in less than ten minutes--AND provide a wealth of useful, specialized information to whichever customer wants it.  Better still, I can show them how to tweak the individual windows' views to reflect differing amounts of polling granularity and time covered by the graphs.


The ability to build this has filled needs for my team, for our IBM Big Iron team (always wanting to see how much of the multiple 10 Gig interfaces they're consuming at once), our Bio Med group (which LANTronix devices have the best or worst uptime--and where they're located!), the Help Desk (what sites or systems are impacted by any given port being down), and more.  I've built all these specialized views / reports and made them available via NPM to multiple teams, and the need for this info only grows.  I've also built custom multi-graph windows that provide information about:

  • Corporate WAN utilization for interfaces that connect campuses in multiple states
  • Contracted EMHR customers' uptime, reliability, and throughput
  • Performance and availability of WAN connected sites based on WAN vendor to track SLA's
  • Cisco UCS Chassis interface utilization
  • Vendor-specific systems and hardware (particularly useful when the vendor blames our network for their hardware or application issues)


Take a look at my "How to" page here:  How to create a simple custom view of multiple interfaces' bandwidth utilization.  Then talk over the idea of providing this kind of information with your favorite (or problem) customers, and you'll see you can build bridges and service unmet needs a lot easier than you expected.  It's another way NPM shines--with information already available to you!

Our Desktop Support Team (which I'll call "EUPS" from here on--for End User Platform Support) rarely unpatches data cables from switches when PC's or printers or other devices move or are retired.  That results in switches and blades with potentially many ports patched, and nothing using those ports.


That's a waste of money.


When someone has a new device to add to a network, possibly requiring a new data drop to be pulled to the network room, and there are no open ports on a stack of switches or in a chassis switch, we've got few options:

  1. Tell the customer "Sorry--there's no room in the inn."  You know that won't float.
  2. Tell the customer "Sorry, we're out of network switch ports, and no one in your department budgeted for adding a new switch or blade (between $5K and $8K, depending on the hardware).  When you can come up with the funds, we'll order the hardware and install it.  Probably in three weeks, if you get the funds to us today."  Nope, that won't float either--although sometimes we have to play hard ball to ensure folks think before they add new devices to a network.
  3. Take a new/spare switch from inventory, install it, and take heat from up above for not planning adequately.  Naw, that's not right, either.
  4. Run Rick's Favorite Report #1 and find which ports haven't been used in a year, have EUPS unpatch them, and then patch in the new devices.  TAH-DAH!  Money saved, customer up and running right away, budget conserved, resources reused--we're the Facilitators that make things happen instead of the No-Sayers that are despised.


So how does this magical report work?  Easily and efficiently!  Check out how to build it here:


Once it's built for a switch, it's easily modified to change switches--just change the switch name in the title, and change the switch name in Datasource1, and run the report.



My team uses this almost every day, and I bet I use it weekly.  How many switches has this saved us from buying, how many ports have we been able to reuse?  Let's say we use it only twice a week.  That's over a hundred ports every year that are repurposed at no cost!  And since they're typically in different network rooms, you might say we avoid having to buy between fifty and a hundred new switches or blades every year. 


A network switch port costs about $169 (including 10/100/1000/POE) if it's in a high-density high-availability chassis switch that's fully populated, and about the same if it's in a stackable switch.


So the value of 50 recovered ports is 50 X $169 = $8,450.  That's not too bad, since it's money not spent.  100 ports is $16,900.   Not insignificant, and not something you want to waste.


But let's build a worst-case scenario: 

  • Every port on a switch is used
  • You have to buy another switch every time someone needs to patch something new into the network.
  • 50 devices X $5K per switch is a QUARTER MILLION DOLLARS.
  • Perhaps a more realistic approach: Suppose your ports aren't so perfectly mispatched. Maybe only every tenth port to be patched requires adding another switch.  So if you find 100 ports incorrectly patched, you'd spend up to $80K on additional switches.
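The scenarios above are easy to sanity-check in a few lines, a back-of-the-envelope sketch using the post's own figures (per-port and per-switch costs vary by hardware):

```python
PORT_COST = 169        # approximate per-port cost, chassis or stackable (from the post)
SWITCH_COST_LOW = 5_000   # low-end cost of a new switch or blade
SWITCH_COST_HIGH = 8_000  # high-end cost of a new switch or blade

# Value of recovered ports
print(50 * PORT_COST)    # 8450
print(100 * PORT_COST)   # 16900

# Worst case: every one of 50 new devices forces a new switch purchase
print(50 * SWITCH_COST_LOW)  # 250000 -- a quarter million dollars

# More realistic: one new switch avoided per ten recovered ports
recovered_ports = 100
switches_avoided = recovered_ports // 10
print(switches_avoided * SWITCH_COST_HIGH)  # 80000
```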


Some organizations offer a bonus to employees who discover and recommend process changes that result in significant cost decreases to the company, and the company bonus could be equal to 10% to 25% of the annual savings. If someone offered me 25% of $80K for saving the company from having to buy more switches every year, I'd be all over that!


And this easy SolarWinds report does it for free. Did the money saved pay for something significant to the company? Did you get a juicy bonus for your money-saving suggestion?




p.s.:  This report ALSO saves unnecessary downtime--we don't end up guessing about the purpose of a port, and unpatching and repurposing mission critical ports that are only used once every few months or years--because we label those ports in the switches.  The report includes those labels in its output along with how long the ports have been down.  It even displays them by length of down time, from longest to shortest.  Schweet!

OpsGenie – a cloud based alert and notification management solution – has recently announced integration with SolarWinds Web Help Desk. So, how does it work?

  1. WHD sends an email to OpsGenie, which in turn creates a new alert in OpsGenie.
  2. OpsGenie then sends alert actions back to WHD via Web Help Desk API. OpsGenie can make a web request to the WHD and update the ticket with a note. WHD needs to have a web-based URL that is accessible from the internet (http://hostname:port).
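The write-back in step 2 is an ordinary authenticated REST call. A minimal sketch of what such a request might look like, with the hostname, ticket ID, API key, and payload shape all illustrative placeholders rather than the authoritative WHD API contract:

```python
import json
import urllib.request

def build_note_request(base_url, ticket_id, api_key, note_text):
    """Assemble a request that appends a note to a WHD ticket.

    The path and payload here are illustrative placeholders; consult
    the Web Help Desk API documentation for the exact contract.
    """
    url = (f"{base_url}/helpdesk/WebObjects/Helpdesk.woa/ra/Tickets/"
           f"{ticket_id}?apiKey={api_key}")
    payload = json.dumps({"notes": [{"noteText": note_text}]}).encode()
    return urllib.request.Request(
        url, data=payload, method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Example (not executed here -- requires an Internet-reachable WHD instance):
# req = build_note_request("http://whd.example.com:8081", 1234, "MY-KEY",
#                          "Acknowledged in OpsGenie by on-call engineer")
# urllib.request.urlopen(req)
```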



By using OpsGenie with SolarWinds Web Help Desk integration, you can forward SolarWinds Web Help Desk tickets to OpsGenie. OpsGenie can then determine the right people to notify based on on-call schedules, and reach them via email, text message (SMS), phone call, and iOS & Android push notifications. OpsGenie will continue to escalate the alert until it’s acknowledged or closed.

For more information, please refer to the following support document.

If you have been keeping up with the Thwack product blog lately, you know that Drag & Drop Answers to Your Toughest IT Questions revealed PerfStack, the new way to view performance analysis by visualizing and correlating data within the Orion web interface, and that Get Your Head Out of the Clouds, and Your Data Too showed how you can collect configuration and metric data directly from Amazon Web Services® (AWS) and have that data visualized along with the rest of the environment you are already monitoring in SAM 6.4 or VMAN 7.1.  This is great news for any administrator who needs to troubleshoot a Hybrid IT application with on-premises VMs and AWS cloud instances.


The good news is that Virtualization Manager (VMAN) 7.1 lets you leverage the "new hotness" found in PerfStack and cloud infrastructure monitoring to correlate and pinpoint where the application performance issue is in your hybrid infrastructure. In the following example, we have a hybrid cloud application with resources in the cloud (SQL AWS) as well as an on-premises virtual server (Analytics VM), both of which are monitored with VMAN. As with most IT incidents, the administrator is left trying to figure out what exactly is causing the performance degradation in their application, with little to go on other than "the application is slow." Using PerfStack, you can quickly dive into each KPI and drag and drop the metrics you want to compare until troubleshooting discovery surfaces what the issue is or isn't. Because VMAN includes cloud infrastructure monitoring, you can add AWS counters from your cloud instances into PerfStack and correlate them with other cloud instances or with your on-premises VMs to troubleshoot your hybrid infrastructure with VMAN.



In the example above, the cloud instance, SQL AWS, is experiencing some spikes in CPU load, but it is well within normal operating parameters, while the on-premises VM, Analytics VM, is experiencing very little CPU load. With PerfStack, my attention is easily drawn to the memory utilization being high on both servers that participate in the application's performance, and the fact that my on-premises VM has an active alert tells me I need to dig into that VM further.


By adding Virtualization Manager counters that are indicators of VM performance (CPU Ready, ballooning, swapping), I can see that there are no hidden performance issues from the hypervisor configuration (figure below).

Perfstack -2.jpg

From the Virtualization Manager VM details page for the Analytics VM, I can see that the active alert is related to large snapshots on the VM, which can be a cause of performance degradation. In addition, the VM resides on a host with an active alert for host CPU utilization, which may be a growing concern in the near future.  Monitoring hybrid cloud infrastructure in VMAN allows the administrator to create a highly customized view for discovery and troubleshooting, with the contextual data necessary minus the alert noise that can regularly occur.


One of the added benefits of monitoring cloud instances with VMAN is that you can now build on a single-pane-of-glass view that serves as the monitoring authority for both your on-premises and cloud environments. Not only is it essential to have one place to monitor and correlate your data when applications span hybrid environments, but having visibility into both environments from SolarWinds lets you determine what will be required when moving workloads into or out of your AWS environment.


on-prem2.jpg         cloud-sum-2.jpg


For more information on PerfStack and Hybrid end-to-end monitoring, check out the following link.

Hybrid IT – Hybrid Infrastructure | SolarWinds


Don't forget to check these blog posts for deeper dives into PerfStack & Cloud Monitoring respectively.

Drag & Drop Answers to Your Toughest IT Questions

Get Your Head Out of the Clouds, and Your Data Too

I’m pleased to announce three new releases are now available!


NPM 12.1


PerfStack™

All your IT metrics on a shared timeline.  >> Learn more


Meraki Wireless Monitoring

API-based monitoring for your cloud-managed wireless gear.  >> Learn more

Along with:

  • Mute alerts
  • Minor NetPath enhancements
  • Improved Arista 7500E support


Check out all the details in the release notes.  You can find NPM 12.1 now on the Customer Portal!



NCM 7.6


Firmware Upgrades


Leverage the power and simplicity of the new Firmware Upgrade Wizard to help

with all your Cisco IOS upgrades.  >> Learn more


Check out all the details in the release notes.  You can find NCM 7.6 now on the Customer Portal!



VNQM 4.4


  • Updated Cisco Call Manager Support (11.x)
  • Avaya Aura Communication Manager Support (6.3.x)
  • Web-based Alerting & Reporting


Check out all the details in the release notes.  You can find VNQM 4.4 now on the Customer Portal!

There has been a lot of focus on security lately in the news, and rightfully so.  Seemingly each week there’s news of companies being hacked, data being stolen, and mass DDoS attacks.  With the amount of news on this topic, I sometimes wonder if companies are actually taking the steps to protect themselves.  Granted, taking the proper steps to protect your network can be time-consuming and tedious work, which most engineers don’t have time for.  Well, times are a-changing, and now with SolarWinds Network Configuration Manager 7.6 and the new Firmware Upgrade feature---Everyone Has Time for That!


Before I start to dig into the new features of NCM 7.6, let’s back up and talk about a previous version of NCM (7.4) where SolarWinds introduced the Firmware Vulnerabilities feature.   This feature leverages the National Vulnerability Database to notify NCM users when they’re running firmware that potentially has a serious vulnerability.

Network Configuration Manager Vulnerability Summary


Vulnerability Details



I’ve received a lot of really positive feedback about this feature, but the obvious question that always comes up after I show it to customers is: “Can SolarWinds fix this for me?”  Historically I would have said yes: using Network Configuration Manager and its scripting technologies, you can upgrade your firmware.  Now I’m pleased to say I can answer that question differently.  Using the new Firmware Upgrade Wizard in NCM 7.6, you can upgrade one or many of your Cisco IOS devices.


According to Cisco documentation, there are 11 steps needed to complete a firmware upgrade on your Cisco IOS devices.  While 11 steps don’t sound too bad, there are actually several sub-steps, which drag this process out to over 40 tasks a user must complete to upgrade a SINGLE device.  Seriously, who has time for that?  We at SolarWinds decided we could save our users the time and misery of completing these upgrades by simplifying the process and adding a bit of automation.




NCM Firmware Upgrade Wizard



The new Firmware Upgrade feature in NCM offers a three-step process for upgrading your devices.  During this process, we collect a wealth of data about the devices you want to upgrade, including several important settings, and of course we ensure there is enough free space to successfully transfer your new image.  In addition, we automatically back up the running and startup configuration files and do a comparison after the upgrade has completed.  We’ve taken the necessary steps to make this process as smooth and safe as possible.  




After you’ve verified all of the settings and options, you can proceed to run the upgrade immediately or schedule it for a later date.  You can always keep track of the upgrade’s progress on the Firmware Upgrade Operations page.  Hopefully you’ll agree that this is a much-improved process over the standard method of upgrading your Cisco IOS devices.  Ready to give it a try?  You can find the Release Candidate in your customer portal if you’re under active maintenance for NCM.  Otherwise you’ll have to wait until the official release of NCM 7.6.


Everyone Has Time for That!

If you are in the cloud or heading there, I'm excited to tell you Database Performance Analyzer 11.0 has your databases covered.  DPA already monitors databases on cloud VMs and Amazon RDS, but now we cover each vendor's DBaaS offerings as well.  Also, the updated DPA Integration Module (DPAIM) shows these new databases in Orion, and adds SRM integration.  Here are some of the great features in DPA/DPAIM 11.0 RC:

  • Azure SQL Database... and in Orion too!
  • Amazon RDS Aurora support
  • SQL Server Availability Groups
  • Oracle Adaptive Plans
  • SRM Integration
  • GUI Improvements
  • Updated wait type descriptions


Monitor Azure SQL Databases

Microsoft did an awesome job creating a DBaaS option in Azure using SQL Server technology, and now we can analyze those databases just like we do SQL Server, with cloud database metrics too!

Azure SQL DB is measured and priced by DTUs (Database Transaction Units).  The more CPU, memory, or I/O your SQL DB needs, the more money you pay.  In addition to measuring wait time, DPA captures DTU size and utilization, and captures CPU, memory, and I/O in terms of percent utilization of the DTU, making it easy to see which resource is driving your DTU consumption.

So if you're bumping up against your DTU limit, before you move up to the next DTU tier and increase your OPEX, try tuning some queries, eliminating blocking, or adding some indexes.
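Since DPA exposes CPU, memory, and I/O as percentages of the DTU limit, spotting the driver is just a comparison. A tiny sketch, under the assumption (which you should verify against Microsoft's documentation) that DTU consumption is governed by whichever resource is busiest:

```python
def dtu_driver(cpu_pct, data_io_pct, log_io_pct):
    """Return the resource driving DTU consumption and its utilization.

    Assumes DTU %% tracks the busiest resource -- an illustration of the
    model, not an official Azure formula.
    """
    resources = {"cpu": cpu_pct, "data_io": data_io_pct, "log_io": log_io_pct}
    driver = max(resources, key=resources.get)
    return driver, resources[driver]

# A CPU-bound workload: tune queries before buying a bigger DTU tier.
print(dtu_driver(88.0, 22.5, 10.0))  # ('cpu', 88.0)
```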

And you can use Azure SQL DB as your DPA repository too.


Azure SQL Database in Orion too!

Oh, did I mention this shows up in Orion too?  If you have the DPA Integration Module (DPAIM) installed and configured, as soon as you add Azure SQL to DPA, you'll see your Azure databases in Orion as well.

Now you can map your DBaaS to your applications running on Azure VMs to fully support a Hybrid IT, end-to-end, single pane of glass!!!


Monitor Amazon Aurora

Aurora, Amazon's MySQL-compatible database, is now fully supported by DPA, rounding out support for Amazon's database offerings.


Support for SQL Server Availability Groups

Availability Groups are one of the most popular features of SQL Server these days, and DPA can now show you the health and status of your availability groups and their member replicas and databases.  You can configure DPA two ways:

  • Monitor via the listener - DPA will follow the primary replica from server to server.
  • Monitor each instance in the cluster

Either way, you'll see the same data for the primary server - the status of all availability groups for that server.

And if you drill down on an individual availability group, you can see the status of all the replicas and databases.

When an instance is not the primary replica, you can still see the status of that instance in the availability group, but not the overall AG health or the health of other replicas.

So now, when you see HADR wait types increasing, you can drill in and see the health and status of your availability groups.


See Oracle Predicates and Adaptive Plans

For all Oracle users, we've added structure to our plan view and made it easy to filter out noise by hiding/showing the predicates.  Oracle 12c instances get the added bonus of seeing how Oracle is adapting plans for their queries.  And you can download the plan in an easy-to-use-and-share text version, complete with a link back to the plan view.


Storage Resource Monitor Integration

If you are a DBA and need to take a deeper dive into your storage array, you can now monitor your arrays with SRM and build relationships to the databases you are monitoring with DPA.  Once built, you can see the capacity utilization and performance of the LUNs connected to your databases, including where you are at risk of running out of storage or performance capacity.


And a lot more!

In every release, we do a lot more than we can include here, but here are a couple more features worth mentioning:

  • GUI Improvements - Streamlined home page and filters, pages require less scrolling, simplified flow to Advisors and Historical Charts.
  • Updated Expert Advice - the expert advice for the most common wait type descriptions was expanded and improved... and to complement the new Availability Group feature, we updated the HADR wait types as well.
  • Simplified Help - we unified our help into a single "Learn More" button and began adding and updating content, especially training around wait time analysis to help new users... more to come.
  • To see more, check out the release notes.


So, what are you waiting for? Log into the Customer Portal and download DPA 11.0. If you have any feedback or questions, feel free to post them in the Database Performance Analyzer RC group as well.

Now that you’ve seen how to use PerfStack for networks, it’s time to check out the improved Meraki wireless monitoring of NPM 12.1!


This feature was born from a simple customer request: cover Meraki gear the same way we do all the other wireless vendors, showing hybrid infrastructure in a single pane of glass.  Additional research showed a common need: don’t develop some different monitoring specific to Meraki, just cover all my wireless gear in one spot, in the same way.


NPM has been able to monitor Meraki wireless gear for quite some time.  Use the Add Node wizard to add APs with SNMP polling, and you’re off to the races.  But there were a couple of problems:

  1. Each AP had to be added individually.  Discovery can speed this up, but many users prefer not to run comprehensive discovery often. This really clashes with the idea that Meraki APs can be deployed with near zero touch.
  2. Client information is not available via SNMP, so it is missing in NPM.  Turns out client information is kind of important if you want to know what’s going on with your wireless service.


Essentially, these two issues crop up because of the unique and innovative way Meraki technology works. In traditional thin AP deployments, APs connect to a physical wireless controller that is on-prem.  The controller controls (obviously) the APs and provides a central spot for configuration and management.  Meraki compounds the benefit of the wireless controller by replacing the physical unit with a logical controller in the Meraki cloud.  This means you can express the wireless policy for all of your locations on Meraki’s dashboard, at virtually any scale.  You have one wireless configuration.  As new APs are added to the network, they can be deployed with virtually zero touch.  And you don’t have to manage any physical controllers.


Pretty slick.


How do you poll the cloud controller, though?  SNMP really doesn’t make sense in a Meraki environment, where the controller is logical and you have to reach it over an insecure medium like the Internet.


The solution is to poll via API.  TLS protects the communication just like it does your credit card number when you make a purchase online.  And RESTful APIs are just a more modern, intelligent solution.
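Conceptually, the first such call is a key-authenticated HTTPS request to the Meraki cloud for your organization list, the same list NPM retrieves in the Add Node wizard. A hedged sketch (the base path and header name reflect Meraki's public Dashboard API as I understand it at the time of writing; verify against current Meraki documentation):

```python
import json
import urllib.request

API_BASE = "https://api.meraki.com/api/v0"  # Dashboard API base path (assumption)

def organizations_request(api_key):
    """Build the authenticated request for the caller's organization list."""
    return urllib.request.Request(
        f"{API_BASE}/organizations",
        headers={"X-Cisco-Meraki-API-Key": api_key},  # key from the Meraki Dashboard
    )

# Example (requires a valid API key and Internet access):
# with urllib.request.urlopen(organizations_request("YOUR-API-KEY")) as resp:
#     for org in json.load(resp):
#         print(org["id"], org["name"])
```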


So we partnered with Meraki to get that done.  In NPM 12.1, you’ll notice a new option in the Add Node wizard:



Upon selecting Meraki Wireless: API, you’ll be prompted for your API key.  You can find this in your Meraki Dashboard.  Once that’s filled in, NPM will connect to the Meraki cloud and retrieve your Organization list.  Most companies will have a single Organization, but MSPs or companies that go through a lot of acquisitions may have multiple.  After selecting a specific organization and clicking next, NPM will discover all of the APs and list them just as we would for a traditional wireless controller.  You can select which you’d like to monitor, and additional APs can be monitored automatically.  With that done, you’ll see the Meraki logical controller, APs, and clients in your Node Details and Wireless views:




That’s it!  All the complexity happens on the backend, and the UI you use stays the same, just with more data.  Simple, right?


Some additional facts you may be interested in:

  • Licensing works just like it does for traditional on-prem wireless controllers and thin APs.  The controller costs one node license.  The thin APs do not take licenses!
  • While NPM shows more metrics than before, we’re still missing a few things.  We’ll look to improve this as the data becomes available in the RESTful API.  And some data just doesn’t make sense for Meraki gear, for example Wireless Controller CPU and RAM.


We’re very excited to continue down this path of providing complete monitoring visibility of hybrid infrastructure.  Special thanks to Meraki for providing us with shiny new Meraki gear for our lab and working side by side with our development team.



NPM 12.1 is out, with Meraki wireless monitoring!  Check out the video to see how it works.  Current customers can find it in the Customer Portal.  If you don't own NPM and want to try it, or want to try it in a lab, you can get it here.

