The team at SolarWinds has been hard at work producing the next release of Virtualization Manager, and it is my pleasure to announce the Virtualization Manager 6.2 Beta, which is packed with features and continues VMAN's journey of integration with Orion. The Beta is open to VMAN customers on active maintenance; to try it out, please fill out this short survey.


VMAN Beta button.png



Beta Requirements

Orion integration with VMAN Beta 1 requires an install of the SAM 6.2 Beta.  Check out the three-part blog post (links to the posts can be found here), which details the new features of SAM, including AppStack support for end-to-end visualization of the environment: App > Server/VM/Host > Storage (datastore).  As a reminder, beta software is for testing only and should not be used in a production environment.



Essential to diagnosing an issue (or preventing one) is understanding the dependencies and relationships in a virtual environment.  Most administrators start troubleshooting armed only with the knowledge that an application is slow or a server process is functioning sub-optimally.  AppStack provides an at-a-glance view of environmental relationships by mapping dependencies end to end and displaying easily identifiable statuses that help pinpoint root cause.  If a VM is reporting high latency, an administrator can quickly identify the datastore, host, and related VMs to determine where the issue may be.  In cases where NPM, SAM, and VMAN are integrated, AppStack can quickly identify whether a SAM threshold alert is attributable to network latency or virtual host contention.


AppStack can provide a view of your entire environment with relational dependencies visible end-to-end.  Status from additional integrated SolarWinds products such as SAM, SRM, and WPM are provided and the relationship mapped to the virtual environment.



AppStack is also visible from an individual Virtual Machine's Details View and displays the relational context of that virtual machine only.



Management Actions

A great new feature is the ability to perform virtual management actions without leaving Virtualization Manager.  With all your virtualization monitoring information at hand, you can decide to stop, start, or pause a VM from within VMAN.  In addition to power management options, a virtual admin can also create and delete snapshots without ever leaving the VMAN console or logging into vCenter. An administrator can now act on an alert seen in the dashboard, such as VMs with Old Snapshots, and then delete those snapshots from within Virtualization Manager.  Management actions also allow a virtual administrator to delegate operational tasks to non-virtual admins or teams (such as the helpdesk) without providing access to vCenter.

Mgmt actions 1.JPG


What you will see

Power Management Actions

  • Power Off VM
  • Suspend VM
  • Reboot VM

Selecting a power management action will present a pop-up confirming your action.


Snapshot Management Actions

  • Take Snapshot of VM
  • Delete Snapshots

Creating a snapshot will present a pop-up to confirm the snapshot and enter a custom name.

Create Snapshot.JPG

Deleting a snapshot will present a pop-up to select the snapshot to delete.


Delete snapshot.JPG

Management Action Permissions

To execute management actions, the account whose credentials SolarWinds uses to connect to your vCenter Server must be granted the necessary permissions. The minimum vCenter Server permissions required are:


  • Virtual machine > State > Power Off
  • Virtual machine > State > Power On
  • Virtual machine > State > Reset
  • Virtual machine > State > Suspend
  • Virtual machine > Snapshot management > Create snapshot
  • Virtual machine > Snapshot management > Remove snapshot

Note: In vCenter Server 5.0, the snapshot privileges are located under:

  • Virtual machine > State > Create snapshot
  • Virtual machine > State > Remove snapshot
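For scripted environments, the required privilege set can be checked programmatically. A minimal sketch follows; the privilege IDs use vSphere's dotted naming convention and should be verified against your vCenter version:

```python
# Sketch: verify that a vCenter account's granted privileges cover the
# minimum set VMAN needs for management actions. Privilege IDs follow
# vSphere's dotted naming; confirm them against your vCenter version.
REQUIRED_PRIVILEGES = {
    "VirtualMachine.Interact.PowerOff",
    "VirtualMachine.Interact.PowerOn",
    "VirtualMachine.Interact.Reset",
    "VirtualMachine.Interact.Suspend",
    "VirtualMachine.State.CreateSnapshot",
    "VirtualMachine.State.RemoveSnapshot",
}

def missing_privileges(granted):
    """Return the required privileges the account does not yet have."""
    return sorted(REQUIRED_PRIVILEGES - set(granted))
```

A non-empty result means the role assigned to the monitoring account needs to be extended before management actions will work.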


Managing access to the Management Actions within SolarWinds

Once permissions are set on the account used to connect to vCenter, you can delegate access within SolarWinds to execute the management actions. The default permission for management actions is Disallow, which disables the virtual machine power management and snapshot management options.  To enable these features for a user or group, select Settings in the upper right-hand corner of Orion.  Then select Manage Accounts under User Accounts and select the account or group to edit. Expand Integrated Virtual Infrastructure Monitor Settings and select Allow for the management action you wish to enable.


Mgmt Actions Permissions.JPG

Virtual Machine Power Management

     Allow - Enable the options to start, stop, or restart a virtual machine.

     Disallow - Do not enable the options to start, stop, or restart a virtual machine.


Snapshot Management

    Allow - Enable the options to take snapshots of a virtual machine, or to delete snapshots.

    Disallow - Do not enable the options to take snapshots of a virtual machine, or to delete snapshots.


If an admin attempts to perform a management action without being granted access within SolarWinds, they will receive an error similar to the one below.

failed snapshot.JPG



Co-stop % Counter

A new counter has been added to the VM Sprawl dashboard that is useful for detecting when too many vCPUs have been allocated to a VM, resulting in poor performance.  Co-Stop (%CSTP as it is represented in ESXTOP) identifies the delay incurred by a VM as the result of too many vCPUs deployed in an environment.  Any co-stop value above 3 indicates that virtual machines configured with multiple vCPUs are experiencing performance slowdowns due to vCPU scheduling. The expected action is to reduce the number of vCPUs or vMotion the VM to a host with less vCPU contention.
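As an illustrative sketch only (not VMAN's internal logic), flagging VMs against that co-stop threshold might look like:

```python
# Illustrative: flag VMs whose co-stop percentage exceeds the threshold
# called out above. Input is a mapping of VM name -> co-stop %.
CO_STOP_THRESHOLD = 3.0

def vms_with_cpu_contention(co_stop_by_vm):
    """Return names of VMs likely slowed by vCPU scheduling delays."""
    return [vm for vm, pct in co_stop_by_vm.items()
            if pct > CO_STOP_THRESHOLD]
```

Any VM returned here is a candidate for fewer vCPUs or a vMotion to a less contended host.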



CoStop - VIM.JPG


Web Based Reports

Web-based reporting, introduced in VMAN 6.1, exposed all of the data in the VMAN integration to the web-based report writer, making it easy to create powerful, customizable reports in the web interface.  We have extended this functionality to include out-of-the-box web-based reports previously found only in the VMAN console. These new web-based reports include:


    • Disconnected Hosts
    • High Host CPU Utilization
    • Hosts with Recent Reboots
    • Newly Added Hosts
    • Datastores in Multiple Clusters

    • All VMs
    • Existing VMs with Recent Reboot
    • High VM CPU Utilization - Latest
    • High VM Memory Utilization - Latest
    • Linux VMs
    • Low VM CPU Utilization – Day
    • Low VM Memory Utilization – Day
    • Newly Added VMs
    • Stale VMs – Not Recently Powered On
    • VMs configured for Windows
    • VMs that are not Running
    • VMs Using over 50GB in storage
    • VMs Using Thin Provisioned Virtual Disks
    • VMs With a Bad Tools Status
    • VMs With a Specific Tools Version
    • VMs With Less Than 10% Free Space
    • VMs With Less Than 1GB Free Space
    • VMs With Local Datastore
    • VMs With More Than 20GB Free Space
    • VMs With More Than 2 snapshots
    • VMs With No VMware Tools
    • VMs With old snapshots (older than 30 days)
    • VMs With One Disk Volume
    • VMs With One Virtual CPU
    • VMs With One Virtual Disk
    • VMs With over 2GB in snapshots
    • VMs With RDM Disks
    • VMs With Snapshots
    • Windows VMs
    • High Host Disk Utilization
    • VMs on Full Datastores (Less than 20% Free)
    • VMs Using More Than One Datastore

Web Based Alerts

As discussed in the Virtualization Manager 6.1 blog post, web-based alerting was introduced, which allowed virtual administrators to take advantage of baselines and dynamic thresholds. Originally we included only a small subset of the standard VMAN alerts out of the box, but we have now extended the out-of-the-box alerts to include:


The alerts are grouped by cluster, host, VM, and datastore/Cluster Shared Volume:


    • Cluster low VM capacity
    • Cluster predicted disk depletion
    • Cluster predicted CPU depletion
    • Cluster predicted memory depletion

    • Host CPU utilization
    • Host Command Aborts
    • Host Bus Reset
    • Host console memory swap
    • Host memory utilization
    • Hosts - No heartbeats
    • Hosts - No BIOS ID
    • Host to Datastore Latency

    • Guest storage space utilization
    • Inactive VMs - disk
    • Stale VMs
    • VM - No heartbeat
    • VM Memory Limit Configuration
    • VM disk near full
    • VMs with Bad Tools
    • VMs with Connected Media
    • VMs with Large Snapshots
    • VMs with Old Snapshots
    • VMs With More Allocated Space than Used
    • Disk 100% within the week
    • VM Phantom Snapshot Files
    • Datastore Excessive VM Log Files


Appliance Password Improvements

Given the ease of setup provided by the virtual appliance, it is easy to overlook changing the appliance admin account (Linux) password or the Virtualization Manager admin account password.  In Virtualization Manager 6.2, the initial login will display a password change reminder with links that make it easy to change both passwords.

Password change.JPG

Linux Password.JPG 

That's it for now.  Don't forget to sign up for the Beta and provide your feedback in the Virtualization Manager Beta forum.

VMAN Beta Button 2.png

In my many chats with customers, I’ve found they didn’t know we have tools that allow them to synchronize files across servers automatically – and we give them away for FREE!

Some history here: when we acquired RhinoSoft many moons ago, they sold a crazy-popular FTP client called FTP Voyager.  As part of the acquisition, we decided to make it free to the world.  A hidden gem within this product is its Scheduler functionality, a service that allows you to automate file transfer operations on a scheduled basis.

Once you have installed FTP Voyager, you can open this by right-clicking on the task bar icon and selecting “Start Management Console”.



Once you launch it, you get a dialog with some options to start from.



Backup and Synchronize are purpose-built wizards that ultimately create the same thing as the first option: a task.

Within a task you can perform many operations and configure different behaviors, for example:

  • Tasks starting additional tasks
  • Email notifications on success, failure & task completion
  • Tray icon balloons for custom alerts
  • Multiple transfer sessions
  • Conditional event handling
  • Optional trace logs (with log aging) for each task
  • Custom icons & colors for each task


The Backup wizard backs up files or folders and allows you to restore data lost to hardware failure, software glitches, or other causes. Many organizations perform backups of day-to-day activities to ensure minimal downtime in case a disaster like a corrupted hard drive occurs.


Synchronization focuses on keeping folders and files on your local machine in sync with a remote server. For example, web developers may use synchronization to maintain a local copy of an entire website. This allows the web developer to make changes to the website offline, and then upload those changes when they are complete.


I can already read your mind and predict your next question via the power of thwack.

               “This is awesome Brandon, but how do I do this in bulk or push out to many machines?”

Here’s the bare minimum you would need for a fully configured FTP Voyager Scheduler with no end-user interaction.

  1. Deploy FTP Voyager via your favorite app deployment application or leveraging GPO
  2. The registration ID: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerID.txt
  3. Pre-configured Scheduled tasks: C:\ProgramData\RhinoSoft\FTP Voyager\Scheduler.Archive
  4. Pre-configured site profiles for Scheduler: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyager.Archive
  5. Pre-configured transfer settings for Scheduler:  C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerSettings.Archive


This assumes that all machines receiving these files use the same file system structure defined in the tasks, that all required folders for those tasks already exist, etc.

    NOTE: When doing #3 and #4, you must copy them from the same machine, as the items configured in step #3 use values present in #4 to link the two together. If different machines are used to generate the data, the values will not link properly.
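The copy steps above can be sketched in a small script. This is a hypothetical helper, not a supported deployment tool; the file names come from the list above, and the staging layout is an assumption:

```python
import shutil
from pathlib import Path

# Hypothetical deployment helper. The file names match the FTP Voyager
# paths listed above; the source/target layout is an assumption.
CONFIG_DIR = Path(r"C:\ProgramData\RhinoSoft\FTP Voyager")
CONFIG_FILES = [
    "FTPVoyagerID.txt",            # registration ID
    "Scheduler.Archive",           # pre-configured scheduled tasks
    "FTPVoyager.Archive",          # site profiles for Scheduler
    "FTPVoyagerSettings.Archive",  # transfer settings for Scheduler
]

def push_config(source_dir, target_dir):
    """Copy the Scheduler configuration files to a target directory.
    Both .Archive files must come from the same source machine so
    their internal links match (see the NOTE above)."""
    target_dir = Path(target_dir)
    target_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in CONFIG_FILES:
        src = Path(source_dir) / name
        if src.exists():
            shutil.copy2(src, target_dir / name)
            copied.append(name)
    return copied
```

In practice you would run something like this from your deployment tooling against each machine's admin share, after FTP Voyager itself has been installed.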

That’s it, simple as that.  Whether you want to deploy this on one of your machines or many in your environment, now you can have your files automatically backed up or synchronized to your Serv-U server for safe keeping.

It's 2014 and simple bandwidth saturation continues to be one of the most common network problems.  Every time we make advances in data transit rates, we seem to find new ways to use more bandwidth.  Or is it the other way around?  QoS helps us use the bandwidth we have more effectively, but adds a layer of complexity.  Another interesting side effect of QoS in many organizations is that, by successfully protecting critical application traffic, QoS allows you to defer bandwidth upgrades.  By the numbers, keeping your circuits saturated absolutely makes business sense.  Who wants to pay for bandwidth you don't use?  The strategy is so effective that saturated circuits are becoming the new normal.


When bandwidth saturation is the normal state and QoS is responsible for protecting our critical traffic, how do we monitor and troubleshoot slowness?  We no longer have an available bandwidth buffer that we can monitor as an indication of the health of the circuit and the quality of the service we're providing to all of the transit traffic.  Saturated circuits require a different strategy.


1-minute max to tell us about your IT troubleshooting pain points


Rummaging Through the Tool Box


Let's take a look in our tool box and see what makes sense to use with saturated circuits:

Ping - Ping falls into only one QoS category, and it generally isn't the same category as your important traffic.  Accordingly, ping is not a very good indicator of health.

Bandwidth Utilization - High bandwidth utilization no longer means bad service.  In fact, arguably, high bandwidth utilization is exactly what you're trying to achieve.  This is also not a good indicator of health.

NTA - Knowing what different types of traffic make up your total traffic load continues to be important.  Knowing how much traffic is being sent or received within each QoS class and what types of traffic make up the contents of that QoS class are more important than ever.  Additionally, QoS queue monitoring provides a window into when QoS is making the decision to drop your traffic.

VNQM - VNQM uses IP SLA operations that can masquerade as most other types of traffic.  You can monitor how VoIP, SQL, HTTP, and other traffic are each uniquely serviced.

QoE - Where VNQM uses synthetic transactions (read: monitoring system generated) to constantly measure the service you're providing to a particular type of traffic, QoE watches your real transit traffic to see what kind of service you're actually providing your users.


Some of our older tools no longer have the fidelity and granularity we need to help.  Ping and simple bandwidth utilization are no longer good indicators of health.  We have to rely on more advanced testing tools like VNQM and QoE.  NTA remains our detailed view into what our traffic actually consists of, with a shift of importance to the QoS metrics.

It's interesting to me that QoE overlaps in some ways with NTA and in other ways with VNQM.  Like NTA, QoE can tell you what traffic is traversing a specific link (using NPAS) or a specific device (SPAS).  NTA is much better at it than QoE, though.  Like VNQM, QoE allows you to continually monitor the quality of service your network is providing, but in a different way.  Where VNQM sends synthetic traffic and analyzes the results, QoE analyzes real end-user traffic.  QoE's strong suit is providing visibility into the service level you're really providing to your end users and then breaking that down into network service time and application service time.

Now that we've reviewed our tools in this new light, let's look at how our monitoring and troubleshooting processes have changed.


Monitoring Saturated Circuits


Here's an example of how we traditionally monitored for slowness:

1) Ping across the circuit and watch for high latency.

2) Monitor interfaces and watch for high bandwidth utilization.

3) Set up flow exports so that when an issue occurs, you can understand the traffic.


When circuit saturation is expected or desirable, the steps change:

1a) Set up IP SLA operations to test network service at all times, and/or

1b) Set up QoE to monitor real user traffic whenever it is occurring.

2) Set up flow exports so that when an issue occurs, you can understand the traffic.


One thing that is very important when all your circuits are full is being able to ignore things that don't matter.  If BitTorrent is running slow for your end users, do you care?  When all of your circuits are running hot, you need to be able to detect when critical applications are impacting your users' ability to do their jobs, and ignore the rest.


Troubleshooting Saturated Circuits


Here's an example of how we traditionally troubleshot network slowness:

1) Receive high latency alert, high bandwidth utilization alert, or user complaint.

2) Isolate slowness to a specific circuit by finding the high bandwidth interface in the transit path.

3) Analyze flow data to determine what new traffic is causing the congestion.

4) Find some way (nice or not nice) to stop the offending traffic.


When circuit saturation is the norm, troubleshooting looks more like this:

1) Receive an alert for poor service from VNQM or QoE, or receive a user complaint.

2) Isolate slowness to a specific circuit by finding high bandwidth interfaces in the transit path and comparing which endpoints or IP SLA operations are experiencing the slowness.

3) Determine which QoS class to focus on by mapping the type of traffic affected to a specific QoS class.

4) Determine if the QoS class is being affected by competition from other traffic in the same class or in a higher priority class.

5) Analyze flow data within the offending QoS class to find the traffic causing the congestion.

6) Find some way (nice or not nice) to stop the offending traffic.
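For step 3, a simple lookup from DSCP marking to QoS class can help map the affected traffic to a class. The EF/AF values below are standard code points, but the class names are placeholders for whatever your QoS policy defines:

```python
# Illustrative DSCP-to-class lookup for mapping affected traffic to a
# QoS class. EF/AF values are standard code points; class names are
# placeholders for your own policy.
DSCP_CLASS = {
    46: "EF (voice)",
    34: "AF41 (interactive video)",
    26: "AF31 (call signaling)",
    18: "AF21 (transactional data)",
    0:  "best effort",
}

def qos_class(dscp):
    """Map a DSCP marking to its QoS class name."""
    return DSCP_CLASS.get(dscp, "unclassified")
```

With flow data in hand, grouping conversations by `qos_class` of their DSCP marking tells you which queue to investigate for competition.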


Your Thoughts?


Everyone has different tools in their toolbox and different environments necessitate different approaches.  What tools do you use to monitor and troubleshoot your saturated circuits?  What process works for your environment?


1-minute max to tell us about your IT troubleshooting pain points

It is always exciting to present what is new in our products, and this time I have something I'm particularly happy about: Web Help Desk (WHD) 12.2 and Dameware (DW) 11.1 integration. The Beta starts today, and it also includes the features mentioned in Web Help Desk 12.2.0: Looking behind the curtain. If you are interested in participating, please go ahead and sign up here:




Note: You must own at least one of these products to participate in this Beta. Also, this Beta is not suitable for a production environment; you will need a separate test system.


Before we talk about the details, let's quickly introduce both products.


Web Help Desk is a powerful and easy-to-use help desk solution. It does not overload you with tons of configuration settings, but it's still enormously flexible, and for many use-cases it works out of the box. You can integrate it with Active Directory, turn Orion alerts into tickets automatically, use it for Asset Management, or use the powerful Knowledge Base to divert new service requests by proactively displaying possible service fulfillment articles.


Dameware, on the other hand, is an efficient remote support and administration tool, with support for multiple protocols (native MRC, RDP, or VNC) and the ability to connect to remote machines on the LAN as well as over the Internet.


Help desk and remote support tools are natural complements and part of every technician's workflow. You have to track incidents and service requests, and in many cases you need to connect remotely to fix an issue or to collect more information as part of the investigation. It is a no-brainer that this workflow should be smooth, without unnecessary steps and clumsy moving between systems. Therefore we have worked hard to integrate WHD and DW to create such workflows, and I want to share some initial results with you.


Seamless Start of a Remote Session

It was possible to integrate WHD and DW before by using the "Other Tool Links" feature of WHD (read more in the Dameware / Web Help Desk Integration article). That integration lets you start remote sessions directly from WHD and requires little configuration. However, it also requires the installation of a third-party tool, CustomURL, on every technician's computer. To avoid this hassle, in this Beta version Dameware can register itself as the protocol handler for custom Dameware links. This works in all major browsers (specifically IE, Chrome, Firefox, Safari, and Opera).


The links have the format of a normal URL, with the scheme part changed to "dwrcc". The browser recognizes such a URL and passes it to Dameware. Here is an example of the URL:




Dameware takes this URL, extracts information such as the IP address ( in this example) or the host name if used, the WHD URL ( in the example above), and so on, and tries to locate the remote machine in the Saved Host list. If the IP address or the host name is found, Dameware uses the saved credentials and other information, such as the protocol, to start a remote session. If nothing is present in the Saved Host list, you will see a connection dialog where you can enter the credentials. Yes, they will be saved for later re-use. :) In the following screenshot you can see that Dameware runs in Web Help Desk integration mode, indicated by the yellow bar at the top of the window.
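Because a dwrcc link is just a URL with a custom scheme, the host portion can be pulled out with standard URL parsing. A small sketch, where the query-parameter names are hypothetical and not Dameware's documented format:

```python
from urllib.parse import urlparse, parse_qs

def parse_dwrcc_link(link):
    """Extract the remote host and query parameters from a dwrcc://
    link. The parameter names are hypothetical, not Dameware's
    documented format."""
    parts = urlparse(link)
    if parts.scheme != "dwrcc":
        raise ValueError("not a Dameware link")
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return parts.hostname, params
```

For example, a link such as `dwrcc://10.0.0.5/?proto=mrc` (hypothetical) parses to host `10.0.0.5` with `proto=mrc`.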




The MRC application exits after the session ends, which differs from the normal behavior when you start a session manually.


We also added a new configuration option to WHD to display Dameware links. You can enable the links by selecting the check box under Setup -> Assets -> Options -> DameWare Integration Links Enabled (this is actually all that is needed to configure the integration):




Notice that there is also the possibility to define the Request Type. This relates to the second part of the integration available in this Beta: the ability to save session data to WHD.


Saving Session Data into Tickets

When the remote session ends, you have the opportunity to save session data to WHD: session meta-data, chat history, and any screenshots made from Dameware. The meta-data saved includes the following:


  • Technician's host name or IP address
  • End user's host name or IP address
  • Session start time
  • Session end time
  • Duration
  • Session termination reason (e.g. "The session was closed by technician")
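As a rough sketch, the meta-data above could be assembled like this; the field names are illustrative, not the WHD API schema:

```python
# Illustrative only: assemble the session meta-data for a ticket note.
# start/end are datetime objects; field names are NOT the WHD API schema.
def build_session_note(tech_host, user_host, start, end, reason):
    """Build the meta-data dictionary saved alongside a ticket note."""
    return {
        "technician_host": tech_host,
        "end_user_host": user_host,
        "session_start": start.isoformat(),
        "session_end": end.isoformat(),
        "duration_seconds": int((end - start).total_seconds()),
        "termination_reason": reason,
    }
```

The duration field is what ends up recorded as work time on the note, alongside the chat history and screenshot attachments described below.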


Apart from meta-data, Dameware will also pass the chat history to WHD in the form of an RTF document. This document preserves all formatting and is easy to transfer for later use. The RTF document will be attached to a newly created ticket note, which will also include the aforementioned meta-data. Any screenshot you take will also be saved to WHD as an attachment. (Just be aware of the size limit for WHD attachments and make sure it is sufficient for screenshots.) DW uses the WHD API to pass the information and inform you about the progress. If anything goes wrong (for example, an attachment is too big), it will be displayed in a dialog, and you will have the opportunity to save the session data locally, so no information is lost.

All data is saved into a Note of the given Ticket. Attachments are attached to the Note, rather than to the Ticket, to make it easier to identify the attachments related to a given session. This not only provides customer evidence for remote sessions, it also preserves important information for troubleshooting and makes it possible to track work time and report on it.


Connection from existing Ticket

A common scenario is that a user creates a new ticket reporting an issue on their desktop. You open the ticket details and find information missing, so you decide to connect remotely. You open the Asset Info tab and click the Dameware connection icon.




If you have previously connected to this machine and the credentials are saved in the Saved Host list, you are seamlessly connected to the user's desktop. Again, notice the yellow WHD integration banner.


Yellow Banner.png


While you troubleshoot, you take several screenshots, ask the user for additional details over chat, and investigate. When you are finished you close the session, and since the remote session was started from a WHD ticket, the integration dialog is displayed immediately (in some cases the workflow is different, but we will discuss that in a minute).


Ticket update dialog - CHAT.png


In the window title you can see the ticket number. The session meta-data section is displayed and you can describe what happened during the session in the text field below. This text will be saved into the ticket note. The duration of the session will be saved as work time in the same note. You can also decide to attach the chat history or any screenshot you made. Finally you can also decide whether to make this note visible to the client or not. Then you click on "Save".


In the next dialog window you can observe the progress of the communication between Dameware and WHD. If anything goes wrong, you will see it here. If the session data was saved successfully, you will see a confirmation dialog.


Upload progress.png


The following video demonstrates the workflow:



Incident reported over phone

There are cases when the user calls you directly, and there is no ticket. If the user is reporting an issue on their laptop, you can still connect remotely and troubleshoot the issue.


You can first search for the user's assets and connect directly from the resulting list.




In this case the session is initiated from the asset, not from a ticket. You connect remotely and do whatever you need to do on the remote machine. When you are done, you close the session normally, but this time there is no ticket to update. Dameware will therefore display the list of existing tickets linked to the given asset. Only tickets that are not closed will be displayed, and you can update a ticket if the remote session was a follow-up on an existing incident.


Existing tickets list.png


If this is a completely new incident, you have the opportunity to create a new ticket. Click "Create new ticket" and a simplified new ticket form is displayed. Only the subject and the request details are available; in the next step you can define more details of the newly created ticket (again, notice the ticket number in the window title).


New ticket.png


The rest of the workflow is the same as in the previous case. You can add a note, save any screenshots or chat into the ticket, and the newly created ticket is updated.


Over the Internet session

Sometimes you have mobile users and need to connect to them over the Internet. The workflow is similar: you click the Dameware connection icon, and Dameware tries to connect. If it doesn't work, you are taken back to the connection dialog, where you can click "Internet connection" and establish an OTI session. Once you connect successfully, the workflow is the same as in the previous cases: you are asked to update a ticket, or you are offered a list of existing tickets and can update one as before.


This integration works with all supported protocols: MRC, RDP and VNC. The only difference is that with RDP we do not support screenshots and chat, and with VNC only screenshots are supported.



This integration aims to streamline incident resolution and make the workflow smoother. I'm looking forward to hearing from you. If you have any feedback, questions, comments, or ideas, please let us know in the comments.


Also, do not forget to sign up for the Beta here!

One evening this week, I was reading the latest tech news on Engadget and Re/code about yet another organization whose network and data had been compromised. With businesses like Target, Home Depot, and even JPMorgan Chase falling victim to Advanced Persistent Threats, I wondered what controls, processes, and procedures these organizations had in place to monitor suspicious activity and the sharing and storing of sensitive files. Add concerns with compliance requirements like those mandated by PCI and HIPAA, and you end up with a severe migraine.


There are logs, logs everywhere, with tons of data, and there are solutions in the SIEM space that analyze all of these logs from a security perspective, but this is typically reactive in nature. Organizations need proactive protection of data while it resides on the corporate network – they need encryption of data at rest.


The reality is, you need protection both in transit and at rest.  Serv-U MFT Server protects data in transit using SSL and SSH. Serv-U Gateway, the reverse-proxy add-on that prevents the storage of data in the DMZ, further reduces risk.  However, data-at-rest encryption is another important part of the picture, protecting data while it resides on network storage or on a server.


Image 1.png


There are several options available to customers seeking to provide this additional layer of security on their network. Typically, encrypted file systems are the optimal choice, as they are usually the easiest to deploy. Depending on the platform you want to secure, there are a couple of different options.


Image 3.png

You can leverage EFS, or Encrypting File System, a feature already built into many Windows versions, including the newest versions of Windows and Windows Server.  There is another file encryption feature within Windows called BitLocker – but don’t confuse this with CryptoLocker. You can read more about BitLocker vs. EFS here.


If you are looking for non-Windows options, or even Windows options not created by Microsoft, historically many folks used an open-source program called TrueCrypt, but active development recently ended.  You can still use the product, but know that any new issues will not be fixed.  That said, the code base has been forked and is in the process of being turned into a free product called CipherShed, which will work on Windows, Mac OS, and GNU/Linux.


If none of the above fits the bill, here are some other options to look at and consider.


Combining Serv-U with one of the options listed above ensures that your data is completely secure, both in transit and at rest.

Hello fellow network diagram geeks!  I'm excited to announce that NTM v2.2 beta1 is now available.  It has been quite some time since we've had a beta for NTM, but we're making some big changes and we need your input.


Let's take a look at what's new in this beta.


Modular Scanning


This is the big one!  We've fundamentally changed how scanning works in NTM.


What is Modular Scanning?

Since NTM's inception, one scan has resulted in one map.  To make multiple maps, you perform multiple scans.  Each map may contain any or all nodes found during the scan.  If you want to map a different part of your network, start over with a scan.  If you want to create a different view of the same network (L2 vs L3 diagrams anyone?), start over with a new scan.  Map getting too big and unwieldy?  Start over with a smaller scan.  This worked, but we think there's a better way.


In beta1, we introduce the concept of topology databases (shout out to you OSPF engineers out there!).  Now, when you perform a network scan, what you discover with NTM is saved in what we call a topology database.  You then build maps based upon the data in the topology database.  One map, 20 maps; however many maps you want.  All maps are stored in the topology database they're built from.  A single file on your hard drive contains a topology database and all maps built upon it.
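For readers who think in data structures, the relationship can be sketched roughly like this (Python, purely illustrative; the class and field names are hypothetical and not NTM's actual file format):

```python
# Illustrative data model only: one topology database, many maps.
# Names and structure are hypothetical sketches, not NTM internals.

class TopologyDatabase:
    def __init__(self):
        self.nodes = {}   # node_id -> node attributes found during a scan
        self.maps = {}    # map_name -> list of node_ids placed on that map

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def create_map(self, name, node_ids):
        # A map only references nodes; the scan data lives in one place.
        self.maps[name] = [n for n in node_ids if n in self.nodes]

db = TopologyDatabase()
db.add_node("sw1", kind="switch")
db.add_node("r1", kind="router")
db.add_node("srv1", kind="server")

# Two different views built from the same single scan:
db.create_map("L2 view", ["sw1", "srv1"])
db.create_map("L3 view", ["r1"])
```

The point of the sketch: scanning populates `nodes` once, and any number of maps are just selections over that shared data, so rescanning updates every map at the same time.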


This realignment of core functionality is simple to explain, but has had a big impact on how NTM works and how you interact with it.


How Does Modular Scanning Help Me?


Modular scanning lets you:

  • Scan once for your whole environment.
  • Build maps on the fly without performing new scans.
  • Flip between maps quickly because the maps are based on the same topology database.
  • Copy nodes (including their layout) between maps because the maps are based on the same topology database.
  • Keep your NTM diagrams updated with a single manual rescan or rescan schedule.
  • Spend more of your time making useful maps and less of your time configuring, scheduling, and waiting on scans.


In addition, the rendering changes (described in the next section) that were required to deliver modular scanning properly provide the following additional benefits:

  • NTM is faster and scales better in medium and large environments.  Try it.
  • Auto Arrange functions tend to produce more appropriate icon layouts without so much extra space.


Rendering Change

When we started down this path, we knew we would also have to significantly change how we render nodes.  In previous versions of NTM, filters controlled whether nodes were displayed or not.  Enabling or disabling filters does not give NTM any indication of where you would actually like to place those nodes.  As a result, whether you were displaying all of your discovered nodes or 5% of them, NTM always kept track of every node and its position on the screen to prevent nodes from overlapping.  Although there were some benefits to this, there were two big negative effects:

  • Displaying maps where many nodes were discovered took more compute resources than necessary.  Maps with many discovered nodes could feel slow or laggy, even if only a handful of nodes were actually visible.
  • When many nodes were hidden, the Auto Arrangement functions would often separate node icons by huge amounts of distance to make room for nodes that were hidden in between the visible nodes.


We knew modular maps would naturally result in users doing bigger scans.  The performance and layout issues noted above would cause severe problems at that scale.


So we fixed them.  NTM now only renders nodes that are actually on the map.  All nodes exist in the topology database.  You place nodes onto your map by dragging and dropping from the left-hand pane (a la Network Atlas), and the nodes that you don't want to see don't bog you down or cause rendering oddities.


While NTM's scalability and performance used to revolve around the total number of discovered nodes regardless of how many nodes were displayed, it now revolves around how many nodes are displayed on the map you're looking at.

Filters.png

Drag and Drop.png

Icons, Icons, Icons!


You want better icons, bigger icons, smaller icons, custom icons, icon alignment, icon spacing, and anything else icon related we're willing to build.  We've heard you.  Let's see how much of this functionality we can show in one screenshot:




  1. All new out of the box icons in a scalable format
  2. Re-sizable icons
  3. Custom icon import
  4. Icon alignment tools
  5. Even icon spacing tools
  6. Text alignment
  7. Arbitrary text placement.

Best of all, you can do all of these with bulk operations.



Enhanced Etherchannel Support


This wouldn't be an NTM release if it didn't improve NTM's ability to map your network.  This beta now does a much better job of discovering and mapping your Etherchannels, port-channels, or whatever you choose to call your L2 aggregated links.  Take a look:




  1. Accurately detecting Etherchannel
  2. Rendering member links of Etherchannel as parallel
  3. Excluding links that are not part of Etherchannel, even if they are parallel
  4. Identifying port-channel name
  5. Protocol name
  6. Protocol mode
  7. Aggregate bandwidth
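As a quick worked example of the last item, the aggregate bandwidth of an Etherchannel is simply the sum of its member links' speeds (illustrative Python; the link speeds are hypothetical):

```python
# Illustrative only: aggregate Etherchannel bandwidth is the sum of the
# member links' speeds. A four-member 1 Gbps bundle is assumed here.

member_links_mbps = [1000, 1000, 1000, 1000]  # four 1 Gbps member links

aggregate_mbps = sum(member_links_mbps)
print(aggregate_mbps)  # 4000 Mbps, i.e. a 4 Gbps aggregate link
```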


So Much More


In addition to the big items above, we've made a truckload of other smaller improvements.  Here are some of them:


  • Not sure what connects to the nodes on your map?  Try right clicking and selecting "Add Neighbors".
  • How do you interact with multiple topology databases at the same time?  A drop down in the application?  Multiple levels of tabs?  Yuck.  In this beta, for the first time, you can open multiple instances of NTM.
    • Each NTM instance holds all the maps for one topology database.
    • If you have two topology databases open, you can scan in one window and continue working in the other.
    • If you have three topology databases open, you can scan in two windows (parallel scanning) and continue working in the remaining one.
  • Shortcuts allow you to add intelligent groupings of nodes to your map all at once.
  • Select nodes in the map you're looking at intelligently by clicking on individual nodes, groups of nodes, or node shortcuts in the left hand pane.  For example, selecting the "Router" group in the left hand pane selects all routers on your current map.
  • Connection jumps.
  • Copy paste nodes or groups of nodes between your maps.  Layout is preserved.
  • Improved Credential Management UI.
  • Extensible regex interface name handler to intelligently shorten interface names so they fit well on the map.
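To illustrate that last bullet, here is a rough sketch of what a regex-based interface name shortener can do (Python; the patterns and abbreviations below are my own illustrations, not NTM's built-in rules):

```python
import re

# Illustrative sketch of regex-driven interface name shortening in the
# spirit of NTM's handler. These patterns are hypothetical examples.

ABBREVIATIONS = [
    (re.compile(r"^GigabitEthernet"), "Gi"),
    (re.compile(r"^FastEthernet"), "Fa"),
    (re.compile(r"^TenGigabitEthernet"), "Te"),
    (re.compile(r"^Port-channel", re.IGNORECASE), "Po"),
]

def shorten(name: str) -> str:
    for pattern, abbrev in ABBREVIATIONS:
        if pattern.match(name):
            return pattern.sub(abbrev, name, count=1)
    return name  # leave unrecognized names untouched

print(shorten("GigabitEthernet0/1"))  # Gi0/1
print(shorten("Port-channel10"))      # Po10
```

An extensible table like this is easy to grow: adding support for a new vendor's interface naming is just one more (pattern, abbreviation) pair.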



Known Issues in This Beta


  • Undo/Redo buttons sometimes do not work.
  • Etherchannels that are configured statically (mode "ON") are not currently detected as Etherchannels.




We're very excited about this release and can't wait to see the maps you guys create.  Feel free to post your feedback or your new maps in the NTM Beta Forum or email me directly at chris.obrien{at}




As a reminder, beta software is for testing only and should not be used in a production environment.

Up until this point, much of the noise surrounding the Server & Application Monitor 6.2 beta has been focused exclusively on a new optional agent that allows for (among many other things) polling servers that reside in the cloud or DMZ. More information regarding this new optional agent can be found in my three part series entitled "Because Sometimes You Feel Like A Nut" linked below.



If you can believe it, this release is positively dripping with other incredibly awesome new features and it's time now to turn the spotlight onto one that's sure to be a welcome addition to the product.


Since the advent of AppInsight for SQL in SAM 6.0, and AppInsight for Exchange in 6.1, you may have grown accustomed to the inclusion of a new AppInsight application with each new release. Well, I'm happy to announce that this beta release of SAM 6.2 is no different, and includes the oft-requested AppInsight for Microsoft's Internet Information Services (IIS).






As with all previous AppInsight applications, monitoring your IIS servers with SAM is fairly straightforward. For existing nodes currently managed via WMI, simply click List Resources from the Node Details view and select Microsoft IIS directly beneath AppInsight Applications. You will also see this option listed for any new IIS servers added individually to SAM and managed via WMI using the Add Node Wizard.


You can add AppInsight for IIS applications individually using the methods above, or en masse using the Network Sonar Discovery Wizard.  One-time Discovery and scheduled recurring discovery of IIS servers in the environment using Network Sonar are fully supported. Either method will allow you to begin monitoring all IIS servers running in the environment quickly and easily.


AppInsight for IIS has been designed for use with IIS 7.0 and later running on Windows Server 2008 and later operating systems. To monitor earlier versions, such as IIS 6.0 running on Windows Server 2003 and 2003 R2, you should use the Internet Information Services (IIS) 6 template available in the Content Exchange.


AppInsight for IIS discovery is currently limited to WMI managed nodes. For nodes managed via SNMP, ICMP, etc. you can manually assign AppInsight for IIS to the node no differently than any other application template in SAM.

List Resources.png


Network Sonar One Time Discovery.png

Network Sonar Discovery Results.png



AppInsight for IIS Configure Server.png

AppInsight for IIS leverages PowerShell to collect much of its information about the IIS server. As such, PowerShell 2.0 must be installed on the local Orion server or the Additional Poller to which the node is assigned. PowerShell 2.0 must also be installed on the IIS server being monitored. Windows 2008 R2 and later operating systems include PowerShell 2.0 by default, so you only need to worry about this requirement if you are running Orion on Windows 2008 (non-R2) or are planning to monitor servers running IIS 7.0.


Beyond simply having PowerShell installed, Windows Remote Management (WinRM) must also be configured. This is true both locally on the Orion server, as well as on the remotely monitored IIS host. If you're not at all familiar with how to configure WinRM, don't worry. We've made this process as simple as clicking a button.


After discovering your IIS servers and choosing which of them you wish to monitor, whether through the Add Node Wizard, List Resources, or Network Sonar Discovery, you will likely find them listed in the All Applications tree resource on the SAM Summary view in an "Unknown" state. This is because WinRM has not yet been configured on either the local Orion server or the remotely monitored IIS host. Clicking on any AppInsight for IIS application in an "Unknown" state from the All Applications resource launches the AppInsight for IIS configuration wizard.

When the AppInsight for IIS configuration wizard is launched, you will be asked to enter credentials that will be used to configure WinRM. These credentials will also be used for ongoing monitoring of the IIS application once configuration has completed successfully. By default, the same credentials used to manage the node via WMI are selected. Under some circumstances, however, the permissions associated with that account may not be sufficient to configure WinRM. If that is the case, you can select from the list of existing credentials in your Credential Library, or enter new credentials for use with AppInsight for IIS.


Once you've selected the appropriate existing or newly defined credential for use with AppInsight for IIS, simply click "Configure Server". The configuration wizard will do the rest. It should only take a minute or two, and then you're up and monitoring your IIS server.


If configuring WinRM to remotely monitor your IIS server isn't your jam, or if perhaps you'd simply prefer not to use any credentials at all to monitor your IIS servers, AppInsight for IIS can be used in conjunction with the new optional agent, also included as part of this SAM 6.2 beta. When AppInsight for IIS is used with the agent, you can monitor IIS servers running in your DMZ, at remote sites, or even in the cloud, over a single encrypted, NAT-friendly port that's resilient enough to monitor across high-latency, low-bandwidth links.




Sites and Pools.png

As with any AppInsight application, AppInsight for IIS is designed to provide a nearly complete, hands-off monitoring experience. All Sites and Application Pools configured on the IIS server appear in their respective resources. As Sites or Application Pools are added or removed through the Windows IIS Manager, they are automatically added to or removed from monitoring by AppInsight for IIS. This low-touch approach allows you to spend more time designing and building your IT infrastructure, rather than managing and maintaining the monitoring of it.


Each website listed in the Sites resource displays the current status of the site, its state (started/stopped/etc.), the current number of connections to the site, the average response time, and whether the site is configured to start automatically when the server is booted.


It is all too common for people to simply stop or disable the Default Web Site or other unused sites in IIS rather than delete them entirely. To reduce or eliminate false positive "Down" alert notifications in these scenarios, any sites that are in a Stopped state when AppInsight for IIS is first assigned to a node are placed into an unmanaged state automatically. These sites can of course be easily re-managed from the Site Details view at any time should you wish to monitor them.


The Application Pools resource also displays a variety of useful information, such as the overall status of the Application Pool, its current state (stopped/started/etc.), the current number of worker processes associated with the pool, and the total CPU, memory, and virtual memory consumed by those worker processes. It's the perfect at-a-glance view for identifying runaway worker processes, or noisy-neighbor conditions that can occur as a result of resource contention when multiple Application Pools are competing for the same limited share of resources.


As one might expect, clicking on a Site or Application Pool listed in either of these resources will direct you to the respective Site or Application Pool details view where you will find a treasure trove of valuable performance, health, and availability information.



Response Time.png


The release of Network Performance Monitor v11 included an entirely new method of monitoring end-user application performance in relation to network latency with the advent of Deep Packet Inspection, from which the Quality of Experience (QoE) dashboard was born. The Top XX Page Requests by Average Server Execution Time resource is powered by the very same agent as the Server Packet Analysis Sensor included in NPM v11. AppInsight for IIS complements the network and application response time information provided by QoE by showing you exactly which pages are taking the longest to be served up by the IIS server.


This resource associates user-requested URLs with their respective IIS site and average execution time. Expanding any object in the list displays the HTTP verb associated with that request, along with the date and time of the web request, the total elapsed time, the IP address of the client that made the request, and any URL query parameters passed as part of the request.
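The aggregation behind such a resource can be pictured with a small sketch (Python; the request log data below is entirely hypothetical):

```python
from collections import defaultdict

# Hypothetical request log illustrating the kind of aggregation behind a
# "top page requests by average server execution time" view.
requests = [
    ("/orders", 120), ("/orders", 180), ("/home", 30),
    ("/reports", 900), ("/home", 50),
]  # (url, server execution time in ms)

totals = defaultdict(lambda: [0, 0])  # url -> [total_ms, request count]
for url, ms in requests:
    totals[url][0] += ms
    totals[url][1] += 1

averages = {url: total / count for url, (total, count) in totals.items()}
top = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
print(top)  # [('/reports', 900.0), ('/orders', 150.0), ('/home', 40.0)]
```

A single very slow page (here `/reports`) floats to the top even if it is requested far less often than fast pages, which is exactly what makes this view useful for spotting the worst offenders.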


New linear gauges found in this resource now make it possible to easily understand how the value relates to the warning and critical thresholds defined. Did I just barely cross over into the "warning" threshold? Am I teetering on the brink of crossing into the "critical" threshold? These are important factors that weigh heavily into the decision-making process of what to do next.


Perhaps you've just barely crossed over into the "warning" threshold and no corrective action is really required. Or maybe you've blown so far past the "critical" threshold that you can barely even see the "good" or "warning" thresholds anymore, and a full-scale investigation into what's going on is in order. In either case, understanding where you are in relation to the thresholds defined is critical to determining both the severity of the incident and an adequate response.
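The threshold logic a linear gauge visualizes can be sketched like this (Python; the millisecond thresholds below are made-up values, not SAM defaults):

```python
# Hypothetical threshold check illustrating how a linear gauge maps a
# value onto good/warning/critical bands. Thresholds are invented here.

WARNING_MS = 200
CRITICAL_MS = 500

def classify(execution_ms: int) -> str:
    if execution_ms >= CRITICAL_MS:
        return "critical"
    if execution_ms >= WARNING_MS:
        return "warning"
    return "good"

print(classify(180))   # good
print(classify(210))   # warning: just barely over the line
print(classify(2500))  # critical: far past the threshold
```

The gauge adds what the raw label cannot: 210 ms and 490 ms are both "warning", but they call for very different levels of urgency.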


Server execution time is the time the server spends processing the user's request. This includes the web server's CPU processing time, backend database query time, and everything in between. The values shown in this resource are irrespective of network latency; that is, page load times will never be better than what's shown here without backend server improvements, regardless of what network performance looks like. Those improvements could be as simple as rebuilding database indexes, defragmenting the hard drive, or adding RAM to the server. Either way, high server execution time means users are waiting on the web server or backend database queries to complete before the page can be fully rendered.


AppInsight for IIS - Management.png



What good is the entire wealth of information that AppInsight for IIS provides if there is no way to remediate issues when they occur?

In addition to a powerful assortment of key performance indicators, you will notice new remediation options available within the Management resource of the AppInsight for IIS "Site Details" and "Application Pool Details" views. Within each respective resource is the ability to stop, start, and even restart the selected site or application pool, as well as unmanage it. Remediation actions executed through the web interface are fully audited, no differently than terminating processes through the Real-Time Process Explorer, or stopping/starting/restarting services via the Windows Service Control Manager.

IIS Alert Trigger Action.png
In conjunction with one of the many other improvements also included in the SAM 6.2 beta, it is now possible to automate these IIS remediation actions from within the all-new web-based Alert Manager. Not only can you, say, restart a site that has failed or stop an Application Pool that's running amok, but you can perform those remediation actions against any IIS site or Application Pool in your environment, regardless of whether it's the site or application pool in distress. This is important for complex distributed applications, where issues with backend application servers or database servers may require a restart of the frontend IIS site or application to resolve. Now this can be completely automated through the Alert Manager, and you can get some much-needed shuteye.

These are just a few of the capabilities of AppInsight for IIS. If you'd like to see more, you can try it out for yourself by participating in the SAM 6.2 beta. To do so, simply sign-up here. We thrive on feedback, both positive and negative. So kick the tires on the new SAM 6.2 beta and let us know what you think in the SAM Beta Forum.


Please note that you must currently own Server & Application Monitor and be under active maintenance to participate in this beta. Betas must be installed on a machine separate from your production installation and used solely for testing purposes. Also, if you're already participating in the NPM v12 beta, the SAM 6.2 beta can be run on the same server, alongside the NPM v12 beta.
