It's a rare thing these days to see a product essentially reinvent itself. Most product improvements across the industry tend to be more evolutionary than revolutionary, and with Server & Application Monitor 6.0 beta 3 (sign up here) we are definitely aiming for the latter. Since we first sat down at the drawing board to begin designing SAM 6.0, our mantra has been "question everything". We wanted to provide a tremendous level of additional application monitoring depth while simultaneously simplifying the entire experience of initial setup and configuration of application monitoring; we also wanted to reduce the overhead associated with configuration changes made in the environment. As a result of this endeavor, AppInsight was born.


AppInsight is a new application monitoring technique we are introducing as part of the Server & Application Monitor 6.0 release. Beginning with Microsoft SQL Server, AppInsight provides a whole new level of application monitoring detail that was previously very difficult, if not impossible, to achieve using Application Templates. AppInsight is not a direct replacement for Application Templates, but rather an entirely new monitoring concept within SAM. Application Templates remain the primary method for quickly monitoring virtually any commercial, open source, or homegrown application imaginable. In contrast, AppInsight is more akin to an entirely new product deeply embedded within SAM: designed from the top down to solve common yet complex problems for a specific application, rather than merely a new feature.


AppInsight for SQL


Microsoft's SQL Server is a ubiquitous beast at the very core of many organizations' critical business applications, both large and small. Unfortunately, without the right tools it has proven challenging to gain the necessary visibility into the various aspects of the SQL Server instance, or worse yet, the databases themselves, that affect the performance of the applications relying on the database server.


With SAM 6.0, we knew we wanted to deliver comprehensive SQL database server monitoring in a simple, intuitive, easy-to-use manner that would help anyone, from novice systems administrators to veteran DBAs, identify both common and complex SQL issues in their environment.




The AppInsight experience, as with most things, starts at the beginning: during discovery. When adding nodes individually, all available SQL Server 2008, 2008 R2, and 2012 instances found on the host are listed. As you can see in the screenshot to the right, you can select each individual instance you'd like to monitor, the same as you would for volumes or interfaces.


If adding nodes individually isn't your speed, AppInsight for SQL has also been tightly integrated into the Network Sonar Discovery wizard. This allows you to easily scan an entire subnet, group of subnets, IP address ranges, or a list of individual IP addresses to quickly discover hosts on your network and any SQL Server instances installed on those hosts. Regardless of which method you choose, monitoring your SQL Server instances is as easy as checking the box.

Add Node - AppInsight.png

Once you've enabled AppInsight for SQL monitoring for any SQL Server instance(s), you will see them listed within the same All Applications resource, alongside the template-based applications you're monitoring with SAM today.


A new icon adorns AppInsight for SQL instances in the All Applications resource, making them easily distinguishable from traditional template-based applications.

AppInsight Dashboard


When you click on any AppInsight for SQL instance listed in the All Applications tree, you will be taken to a new, radically overhauled version of the Application Details view that's been designed from the ground up to serve as the dashboard for that Microsoft SQL Server instance.


The very first thing you're likely to notice when accessing this new Application Details view is the newly redesigned chart resources, which combine information normally found within the Components resource, such as status and current value, with those metrics' historical values charted over time. This allows you to see not only how the application is performing now, but also how that performance compares to historical trends, without needing to drill deeper into the Component Details view.

All Applications - AppInsight.png

There is such an enormous amount of valuable information on this view that it has been categorized and broken up logically across multiple resources, a theme carried over from the Asset Inventory subview I discussed in my previous SAM 6.0 beta blog post. This prevents anyone from becoming overwhelmed by the information shown, while also making root-cause diagnosis easier. For example, if you're troubleshooting a potential memory issue on your SQL Server, all information related to memory is contained within the Memory resource. There are similar resources for latches & locks, connections, cache, paging, buffers, and more.


Database Details


AppInsight for SQL doesn't merely gather information about the SQL Server instance itself, but also about the individual databases managed by that SQL Server. Contained within the Application Details view, you will find the All Databases resource, which displays status and basic size information for the databases running on the SQL instance. As databases are created on the SQL Server, AppInsight automatically begins monitoring them. Conversely, when databases are deleted from the SQL Server, they disappear from the All Databases resource. Click on any of the databases listed and you'll be taken to the Database Details view, which contains all information specific to that database.

All databases.png

Database Index Fragmentation.png


The Database Details view helps identify various database performance issues, such as those associated with poorly performing indexes, through the top 10 clustered and non-clustered indexes by fragmentation resources. There's also a wealth of other valuable information in this view that will help you determine which tables consume the most storage space, the number of rows in each table, and the white space remaining in the database, as well as resources to help isolate disk I/O contention issues for both database and transaction log files.
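Behind the fragmentation percentages these resources display sits a widely used DBA rule of thumb from Microsoft's index maintenance guidance: roughly, below 10% fragmentation leave the index alone, between 10% and 30% reorganize it, and above 30% rebuild it. Here is a rough illustration of how such a ranking works (my own sketch, not SAM's actual logic):

```python
# Hypothetical sketch (not SAM's logic): rank indexes by fragmentation
# and apply the common reorganize/rebuild rule of thumb.
def recommend_action(fragmentation_pct):
    """Common DBA guideline: <10% leave alone, 10-30% REORGANIZE, >30% REBUILD."""
    if fragmentation_pct < 10:
        return "none"
    elif fragmentation_pct <= 30:
        return "reorganize"
    return "rebuild"

def top_fragmented(indexes, n=10):
    """indexes: list of (index_name, fragmentation_pct) tuples."""
    ranked = sorted(indexes, key=lambda ix: ix[1], reverse=True)
    return [(name, pct, recommend_action(pct)) for name, pct in ranked[:n]]

# Invented example data, the kind of numbers a fragmentation scan might report
stats = [("IX_Orders_Date", 42.5), ("PK_Customers", 3.1), ("IX_Items_SKU", 18.0)]
print(top_fragmented(stats, n=3))
```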

Database - Tables by Size.png
Expensive Queries

What do you do when "Sally" in accounting complains that the ERP system, which is critical to the business and relies on Microsoft SQL Server, is "slow", but you've exhausted all avenues trying to isolate the cause of the issue? SAM's AppInsight for SQL may be telling you that the issue is CPU related, but how do you determine the actual cause? This is where having insight into the most expensive queries executing on the SQL Server becomes absolutely essential. Within the Application Details view you will find the Top 10 Expensive Queries resource, which shows the most expensive queries running across all databases for that SQL Server instance. Additionally, you will find the same Top 10 Expensive Queries resource on the Database Details view, showing the most expensive queries running against that specific database. If those resources don't provide an adequate level of detail to isolate the issue, there's also a dedicated Queries subview where you can filter on a specific database, time frame, host, or user. From this view you can quickly identify that the high CPU utilization on the SQL Server, and the ERP system's poor performance, are caused by "Jan" in the Finance department running her quarterly reports. That's actionable information everyone can use.
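For the curious, SQL Server itself keeps cumulative query cost counters (for example, total worker time and execution count in the sys.dm_exec_query_stats DMV). The sketch below is a hypothetical illustration, not SAM's implementation, of how queries might be ranked by average CPU cost per execution from such counters:

```python
# Hypothetical sketch: rank query stats shaped like SQL Server's
# sys.dm_exec_query_stats rows by average CPU cost per execution.
def top_expensive_queries(stats, n=10):
    """stats: list of dicts with 'query', 'total_worker_time' (microseconds,
    cumulative) and 'execution_count' keys."""
    def avg_cpu(row):
        # Guard against a zero execution count in freshly cached plans
        return row["total_worker_time"] / max(row["execution_count"], 1)
    return sorted(stats, key=avg_cpu, reverse=True)[:n]

# Invented sample rows for illustration only
sample = [
    {"query": "SELECT ... FROM Orders",   "total_worker_time": 9_000_000,  "execution_count": 3},
    {"query": "SELECT ... FROM Lookup",   "total_worker_time": 1_000_000,  "execution_count": 500},
    {"query": "EXEC QuarterlyReport ...", "total_worker_time": 60_000_000, "execution_count": 2},
]
for row in top_expensive_queries(sample, n=2):
    print(row["query"])
```

Note the ranking is per-execution cost, so a report run twice can outrank a cheap lookup run five hundred times.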

Most Expensive Queries.png

Download Details


There's still a ton more AppInsight for SQL goodness that I haven't yet shown. If you'd like to see it in action for yourself, sign up here to participate in the SAM 6.0 beta. Note: you must already own a license of Server & Application Monitor that's currently under active maintenance to participate in the beta.

The new IP Address Manager 4.0 has just arrived. The upgrade is free for all IPAM customers under active maintenance and can be downloaded from the SolarWinds customer portal.


Version 4.0 comes with the following new features and improvements:

  • Support for BIND DNS management and monitoring
    • Create, edit, delete DNS zones and DNS records for BIND DNS v8.x and v9.x (A, AAAA, MX, PTR, CNAME)
    • Monitor BIND DNS service status
    • Monitor BIND DNS zone status
  • Active IP Address conflict detection in both static and DHCP environments.
  • Integration of IPAM with our UDT product via subviews
    • See Port and User information on the same page as IP address Host or DNS assignment history
    • Shutdown port remotely in case of IP address conflict (UDT functionality)
  • New Icon pack for IP addresses, DNS zones and DHCP scopes.


Some of you may have read the IPAM 4.0 RC blog post that presents BIND DNS management and monitoring and IP address conflict detection. For those who didn't, here is a short summary:


BIND DNS Monitoring and Management

BIND is the de facto DNS standard and is widely used in IT. Anyone who has configured BIND via the CLI knows there is a lot of room for human error when adding new DNS zones or updating DNS records. I can also imagine that admins are not comfortable handing out CLI credentials to read, write, create, and modify BIND config files to people who just need to maintain DNS records. That's why we added support for BIND management to IPAM - no more errors during configuration and no more sharing of CLI credentials.


Adding and configuring BIND DNS servers is a matter of a few minutes. As a user, you need to take three steps:

  1. Add the device that hosts BIND DNS as a node in IPAM (this also allows you to monitor device performance)
  2. Let IPAM scan the BIND configuration (enable scanning)
  3. Allow/deny other users the ability to change the BIND configuration (the Power User role and above can do that)
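For context, the zones and records IPAM manages here are ordinary BIND zone-file entries. A minimal, purely illustrative zone fragment (all names and addresses hypothetical) covering the record types listed above might look like:

```
; Hypothetical example.com zone fragment showing record types IPAM can manage
; (PTR records live in the corresponding reverse zone)
$TTL 86400
@       IN  SOA   ns1.example.com. admin.example.com. (
                  2013052001 ; serial
                  3600       ; refresh
                  900        ; retry
                  604800     ; expire
                  86400 )    ; minimum TTL
        IN  MX    10 mail.example.com.
ns1     IN  A     192.0.2.10
mail    IN  A     192.0.2.20
www     IN  CNAME mail.example.com.
host1   IN  AAAA  2001:db8::1
```

Forgetting to bump the serial number, or a stray typo in a record, is exactly the class of manual mistake the IPAM UI is meant to eliminate.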


BIND How To (arrows) resize.png


Active IP address conflict & User Device Tracker integration

IPAM 4.0 can detect IP address conflicts (in both static IP and DHCP environments) and help you troubleshoot the problem. If you have UDT installed alongside IPAM, it also allows you to shut down the offending port remotely. Let's take an example of an IP conflict and how IPAM can help you:


Client_conflict wireless.png

IPAM 4.0 offers the following IP address conflict detection workflow and historical data to help you identify the devices in conflict:

  1. An IP address conflict is triggered.
  2. IPAM keeps an IP address assignment history, where you can see the MAC addresses using the conflicting IP.
  3. If UDT is installed, the UDT view provides device, port, and Active Directory information.
  4. UDT can remotely shut down the switch port, freeing the IP address in conflict.
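Conceptually, conflict detection boils down to spotting an IP address claimed by more than one MAC address at the same time. A hypothetical sketch (invented addresses, not IPAM's actual algorithm):

```python
# Hypothetical sketch of IP conflict detection: flag any IP address that
# current observations show bound to more than one MAC address.
def find_conflicts(assignments):
    """assignments: list of (ip, mac) pairs currently observed."""
    seen = {}
    conflicts = {}
    for ip, mac in assignments:
        if ip in seen and seen[ip] != mac:
            conflicts.setdefault(ip, {seen[ip]}).add(mac)
        else:
            seen.setdefault(ip, mac)
    return {ip: sorted(macs) for ip, macs in conflicts.items()}

# Invented example: two devices claim 10.0.0.5 at once
observed = [("10.0.0.5", "00:1B:44:11:3A:B7"),
            ("10.0.0.6", "00:1B:44:11:3A:B8"),
            ("10.0.0.5", "00:0C:29:4F:8E:35")]
print(find_conflicts(observed))
```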

IP Address conflict_cr 1.png

IP Address conflict_cr 2.png

For more details about the IPAM 4.0 release, visit the SolarWinds IP Address Manager Release Notes.

We are pleased to announce the general availability of SolarWinds Web Help Desk v12. This release contains the following product improvements and new features:


  • Improved LDAP Directory and Active Directory auto-discovery
  • Native Asset Discovery (using WMI)
  • Native integration of the Asset Database with the asset databases of SolarWinds Network Performance Monitor (NPM), Network Configuration Manager (NCM), and Server & Application Monitor (SAM)
  • Native integration with Alerts from SolarWinds NPM, NCM and SAM
  • New Getting Started wizard
  • Automatic setup of WHD database in the wizard
  • Support for Windows 2012, Windows 8 (only for evaluation version of WHD)
  • Support for MS SQL 2012, support for embedded PostgreSQL database
  • Migration utility from Frontbase to PostgreSQL


For details on these features, please visit the previous announcement about the Release Candidate on the Product Blog.


You can view the full set of release notes, including problems fixed, here.



Download Web Help Desk now and have fun!

The latest and greatest version of NPM, 10.5, is ready for download from your customer portal.


Here are the major improvements and new features:

  • Routing information including alerting for major routing protocols (RIP, OSPF, BGP)
    • View and search in routing tables.
    • See changes in default routes and flapping routes
    • View router topology and neighbor statuses
  • New interface filtering UI for importing discovery results:
    • Exclude virtual interfaces and access ports, or specific interface types
    • Select interfaces based on pattern matching, including regex formulas
    • A new preview UI for final selection of imported interfaces
  • Multicast traffic information monitoring and alerting, including topology information.
    • Automatic detection of multicast protocol and multicast group import into NPM
    • Display multicast information, route information, and device information in a single unified view
    • View multicast topology using upstream and downstream device list information
    • Generate intelligent alerts based on multicast errors
  • Interface Auditing
    • View user actions related to interface monitoring in NPM


As you may notice from above, this version of NPM adds another important element for successful and effective troubleshooting of connectivity and performance issues - monitoring of OSI L3 routing protocol information.


How does routing information impact network performance and availability?
     IP networks are business-critical, and it's not only router or switch hardware that can impact your network's availability and performance. Incorrect routing or routing issues can also cause undesirable performance degradation, flapping, and/or downtime. Getting this information requires analytic tools. In bigger networks, routing can change very dynamically (adding/removing switches, APs, VPNs, routers, and IP subnets), and it's almost impossible to monitor routing changes from inside the routers themselves (too much simultaneous data).


Also, information such as flapping routes is not directly available on the device for most protocols. A flapping route causes frequent recalculation of the network topology by all participating routers, or floods the network with update packets. In both cases it prevents the network from routing and addressing packets correctly. Another common issue is human error when creating static routes. ICMP ping won't help you here, since the device appears up and running; you really need to see the router's settings and check how packets are routed.

How is routing presented in NPM?

     If NPM recognizes that a node is actually a router running one of the OSPF, BGP, or RIP protocols, it automatically gets routing information from the device. Routing information is then mapped to existing nodes and interfaces in NPM, so you can see device availability statuses and network performance data. All the new resources are under the "Networking" tab on the Node Details page.


NPM 10.5 also automatically detects multicast protocol traffic and imports multicast groups as monitored objects.

What is multicast?

      IP multicast is widely deployed in enterprises and multimedia content delivery networks. A common enterprise use of IP multicast is IPTV. It's also used for video and teleconferencing, and its purpose is to transmit a single message to a selected group of recipients. A simple example of multicasting is sending an e-mail message to a mailing list. Multicast is well known as a bandwidth-conserving technology: it reduces traffic because it simultaneously delivers a single stream of information to thousands of recipients.
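The bandwidth saving is easy to quantify: a unicast design sends one copy of the stream per receiver, while multicast sends a single copy regardless of group size. A quick back-of-the-envelope sketch (illustrative numbers only):

```python
# Back-of-the-envelope comparison of unicast vs. multicast source bandwidth.
# Assumes every receiver gets the same stream; all numbers are illustrative.
def source_bandwidth_mbps(stream_mbps, receivers, multicast=True):
    """Bandwidth the source must send: one copy total for multicast,
    one copy per receiver for unicast."""
    return stream_mbps if multicast else stream_mbps * receivers

stream = 4      # a 4 Mbps IPTV stream
viewers = 1000
print(source_bandwidth_mbps(stream, viewers, multicast=False))  # 4000 Mbps unicast
print(source_bandwidth_mbps(stream, viewers, multicast=True))   # 4 Mbps multicast
```

The source-side load is constant for multicast; it is the replication inside the network that can still saturate links, which is why the group status and topology views below matter.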

How to understand multicast availability and performance information?

     Because the topology is sometimes complicated, it's difficult to understand who can and who can't receive multicast traffic. Network availability is key, and the status of the whole multicast group represents whether the multicast stream is being transferred successfully.

Even though multicast is designed to save bandwidth, with many groups and many multicast sources and receivers the network can become saturated very quickly. Tools like NTA can analyze network data and tell you which apps are causing the traffic, but people usually need a quick read of multicast node traffic utilization in bits per second. Looking at the overall traffic consumed by a multicast node, you can see the ratio of multicast versus other types of traffic on the network, and then optimize the QoS configuration to increase/decrease traffic priority, upgrade the link, or change the multicast routing so the traffic takes another line. If the multicast group status reports a problem, you need information about the multicast topology. Only topology information lets you quickly jump between upstream and downstream devices and find the root cause of the problem fast.

How does NPM 10.5 solve the multicast problem?

     During the first network scan, NPM detects multicast groups and the list of nodes subscribed to each one.


NPM then creates a topology of upstream and downstream devices within a group and detects interfaces that forward or receive multicast traffic.


Re-worked Interface filtering page

     This is something you've asked us to improve (for example here or here on Thwack). We understand that filtering and importing only the desired interfaces took some time with the previous version of the interface discovery filter. If you run the Discovery Import wizard in NPM 10.5, you'll find a reworked UI for interface importing. You can filter by interface status, VLAN, port type, protocol type, and hardware type. It now supports advanced filtering and regex conditions, so creating a filter that imports just the physical interfaces belonging to a specific VLAN ID is a matter of a few seconds.
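To give a feel for what regex-based filtering makes possible, here is a hypothetical sketch (interface names invented; not NPM's actual filter engine) that keeps only physical Gigabit ports and drops virtual interfaces:

```python
import re

# Hypothetical sketch of regex-based interface filtering: keep physical
# GigabitEthernet ports, exclude virtual interfaces such as Vlan, Loopback,
# and Tunnel. Names follow common Cisco-style conventions for illustration.
PHYSICAL = re.compile(r"^GigabitEthernet\d+/\d+$")

def filter_interfaces(names):
    return [n for n in names if PHYSICAL.match(n)]

discovered = ["GigabitEthernet0/1", "GigabitEthernet0/2", "Vlan10",
              "Loopback0", "Tunnel1"]
print(filter_interfaces(discovered))  # ['GigabitEthernet0/1', 'GigabitEthernet0/2']
```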


I believe you will like the new functionality of NPM 10.5. Stay tuned, because we are already working on the next version.

Log & Event Manager's latest Release Candidate (v5.6) is now available on everyone's customer portal. As always, this release candidate is supported in production, so if you have any questions or issues, post them on our Security Event Manager (SEM) Release Candidate forum or contact our awesome support team. Here are the two big items that will make it worth your while to upgrade.



Let's take all those rules... and move them to categories!

LEM ships with a lot of rules out of the box... a lot. The problem we've had is that they are hard to find - if you're looking for "rules that help with PCI Compliance" you have to cross reference a separate list. Well, no longer! We have categorized and tagged everything into different areas that are oriented much more toward how you're actually using rules.



  • To use a rule template, click the "Gear" to the right of the rule you want to use, then choose "Clone". (This isn't new, but where it's located is!)
  • Your rules will appear in the "Custom" category (underneath "Compliance" in the screenshot above).
  • We've moved some of the advanced refinements (searching by date, the user who last modified the rule, etc.) to the "Advanced Search" area on the top left.
  • There are some rules that monitor common traffic patterns that are enabled by default. They will show up as Created By "Unknown" and be enabled (easy to spot if you click on All Rules).


When editing a rule...


  • Click the link next to "Tags" to the top left to add categories and tags. The "Tags" dialog will open where you can add your own or check off existing tags.
  • To delete a category/tag from the list, remove the category/tag from any rules that are using that category/tag.
  • You can create your own categories/tags on the fly, too: just add them when you're editing the rule and check them off. You can always go back and add those tags to new rules.


Some cool things you can do with categories and tags:

  • Tag rules you're using for compliance so that they don't get inadvertently disabled.
  • Categorize rules used for production, lab, and other environments so that you know how rules are used.
  • Tag "in progress" or "testing" rules so that you can find rules that you're working on developing.
  • Categorize rules for different departments or teams (sort of like how we have Security and IT Operations) so that each team can find their relevant rules quickly.




Next up....

Improvements to Database Storage Infrastructure, Archiving, and nDepth

We've done some revamping of our database storage backend in order to satisfy some internal and external requirements. What does this mean to you?

  • Your data will be migrated to a new format during upgrade. You'll want to take a snapshot (upgrade will remind you) or archive before starting the migration since it's a one-time operation.
  • You can resize your appliance beyond the previous 1TB limit (the next most common barrier is 2.2TB, based on virtual infrastructure capabilities to address a single disk).
  • Database archiving is no longer a full archive each time; only what's new is archived. The first time your archive runs after you upgrade (use "archiveconfig" in the CMC to check), it will effectively be a "full" archive, though, so be prepared.


What you'll see during migration:

  • The console will show the progress of your migration, along with an estimate for completion. For most people, it should be a few days to a week. People with larger databases could experience longer migrations (and if you've got a full 1TB database it could be a couple weeks).
  • While the migration is taking place, you'll be able to search new data and migrated data, but not older data. Most recent data is prioritized, as is processing real-time data vs. migrating.
  • The Database Maintenance Report will also show you the status and historical info (so you can tell how far back you've migrated in days, rather than in percentages/numbers).


With nDepth search, you'll see a cool new feature that draws the charts dynamically as your data is returned, rather than waiting until the end. You'll also see that you can now sort results from oldest to newest or newest to oldest, rather than always having them in the same order.


How to Upgrade

  1. Go to the customer portal. Scroll to the "Release Candidates" area. Click on the LEM v5.6 RC. Download the "Upgrade" bit (zip file).  If it's not in the Release Candidates section, you can go to License Management, then under your LEM license you'll see a "Release Candidates" area.
  2. Extract the zip file contents. Put the "TriGeo" and "Upgrade" folders on a Windows file share that the appliance can access. (It's big, so you probably don't want to pull it over the WAN or anything.)
  3. Log in to the CMC via SSH, your hardware appliance, or your virtual appliance console's "Advanced Configuration".
  4. Run the "upgrade" command
  5. Answer the prompts.
  6. That's it! (It'll take about 5 minutes to run through everything, then migration starts in the background)


Get help, ask questions, tell us what we missed!

Come to the Security Event Manager (SEM) Release Candidate forum and tell us what you think. There's also an additional thread over there with some more technical details: LEM 5.6 Release Candidate Notes & Info (RC2 - Available Now).

As described in this blog post, SolarWinds Network Configuration Manager v7.2 has reached Beta status. In the meantime, we have been working on further enhancements and improvements. One of the features we focused on is the ability to attach end-of-life information to nodes managed by NCM. This blog post describes the new version of this feature, available in NCM v7.2 Beta 2. To participate in the Beta program, simply fill out this survey and you will be sent the download links for the Beta. (If you have already subscribed, you don't have to do it again.) Remember, Betas cannot be installed in production and you cannot upgrade the Beta to any future version.


NCM EOL Summary Page


Why Do I Need to Know the EoL Status of My Devices?

As described in SolarWinds EOL Lookup: Which of My Devices Will Become End-of-Life?, this is important for planning purposes, both budgetary and operational. Your organization may also be subject to policies that require running up-to-date equipment. Please take a look at the referenced post for more details.


As you can see in the picture above, the End-of-Life management screen is accessible directly from the NCM menu bar. The main grid provides an overview of various device attributes that help the user assign the End-of-Sales and End-of-Support dates (together called End-of-Life dates). By default, the nodes are grouped by matching type.


Assigning EoL Items to Devices

Matching EoL information to devices is not an easy task. To address it, we have developed this End-of-Life feature, which works in the following way: when a user wants to assign EoL information to a device, NCM searches its EoL database and suggests a few possibilities to choose from (ordered by rank). The user then chooses the best match. To make the choice easier, NCM supplies additional information such as node details, custom properties, a link to the vendor's EoL website (if available), etc. If there is no suitable EoL item, the user can enter their own dates. Another option is to mark the device as ignored by the End-of-Life feature (applicable, e.g., to some special devices).


Each night, NCM processes its EoL database and tries to find suggestions for devices with no EoL information assigned. (This can also be triggered on demand.) The EoL information for a particular device can be in one of the following states (called the matching type):

  • Suggested Dates Found -- NCM found suggestions for EoL information and expects the user to choose one of them.
  • Suggested Dates Assigned -- The user has assigned one of the suggested EoL items to this node.
  • Custom Dates Assigned -- The user entered their own EoL dates.
  • No Suggestions -- NCM has not found any EoL candidates for the device.
  • Ignored -- For some reason, the user does not want to manage EoL info for this device.


Show me the Workflow!

A typical workflow looks like this:

  1. Go to the End-of-Life management screen (see the picture above) to check if there are any devices with suggested dates found.
  2. Select one or more nodes and click "Assign Dates".

    NCM EOL Summary

  3. On the Assign page, you can select one of the suggestions or enter your own EoL dates.


  4. You can also easily select more nodes that will be assigned the same EoL dates. By default, devices of the same type (same SysOID) will be pre-selected.

    NCM EOL Add More Nodes

  5. You may enter a comment explaining your choice or any other information you want to attach.
  6. Click Assign.
  7. The devices you just processed can be found in the "Suggested Dates Assigned" category.

    NCM EOL Summary

You can take the same steps for devices with no suggestions, too. There will just be no options to choose from.
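The pre-selection by device type in step 4 boils down to grouping nodes by their SysOID. A minimal sketch (hypothetical node names and OIDs, not NCM's implementation):

```python
from collections import defaultdict

# Hypothetical sketch of the step-4 pre-selection: group nodes by SysOID
# so devices of the same type can be assigned the same EoL dates together.
def group_by_sysoid(nodes):
    """nodes: list of (node_name, sys_oid) tuples."""
    groups = defaultdict(list)
    for name, oid in nodes:
        groups[oid].append(name)
    return dict(groups)

nodes = [("core-rtr-1", "1.3.6.1.4.1.9.1.222"),
         ("core-rtr-2", "1.3.6.1.4.1.9.1.222"),
         ("edge-sw-1",  "1.3.6.1.4.1.9.1.716")]
print(group_by_sysoid(nodes))
```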


NCM stores the assigned EoL dates in the database, so that they are not overwritten the next time NCM generates suggestions. However, the user can delete the EoL dates manually.


How Do I Create a Report?

You can create a report by adjusting the information shown in the main grid and exporting the result in Excel or CSV format. Let's explore the flexibility you have:

  • You can change the grouping. (Default is matching type.)

    NCM EOL Group By

  • You can add or remove columns.

  • You can filter the information. The EoL dates have a few predefined filters to make the reporting easier.


Last but not least, we have a new resource that can be placed e.g. on the summary page:


NCM EOL Resource

I am very excited to announce the beginning of the Virtualization Manager 6.0 beta (sign up for the beta).  Many of you have been voting for your favorite feature on the Virtualization Manager Feature Requests forum, and no feature is requested more than integration with SAM and NPM. Well, you are finally going to get your wish!  Before we dive into the details of the integration, let's review all the goodness that is coming with 6.0.

  • Integration with Orion - we will talk about this feature more below.
  • Resizable widgets - one size does not fit all, and now you can change the size of your widgets.  Check out the details in the feature request Add Ability to Adjust Widget Grid for all the details and some cool screenshots.
  • Performance Analyzer (charting) - we have made numerous improvements like better date selections, allowing you to adjust the size of the legend, and improving the segmentation of data.  Be watching the feature request Change the format of the legend for charts for more details.

  • Capacity Planner - speed improvements and bug fixes
  • Speed and Security - we made adjustments to improve the speed of collection and the GUI.  A few of these sneaked into the recent 5.1.1 service release, but most arrive in this release.  We have also updated the core components of the appliance to address known issues.


Sign Up: Don't forget to sign up for the beta at Virtualization Manager 6.0 Beta Participation Survey.


Integration - what does it mean?

Before we dig into the features and screenshots, a couple of notes on what integration means:

  • Seeing it in Orion - we aren't just pasting some Virtualization Manager widgets into Orion views; we're actually exposing the data in Orion in native resources.
  • Keeping it in Orion - we wanted to keep you navigating in Orion naturally, moving from Application to VM to Host to Datastore and back again without leaving the Orion interface - unless you want to.
  • Linking it to Virtualization Manager - we do want to expose the Virtualization Manager interface, but contextually.  When you see a link to Virtualization Manager, it will be in context of the Orion object you are viewing, and it will be clear that it links to Virtualization Manager.

With that out of the way, let's get to the cool stuff.

What can you do with all this Goodness?

This is a lot of new data we are exposing in Orion but how will you be able to leverage it?  Here are a few new and enhanced ways you can solve problems:

  • Enhanced:  Why is my application slow? 
    My application is running slow. I drill down to the VM; CPU and memory look good... storage feels slow, but I don't see anything suspicious on the VM. Click the storage subview and drill down to the Datastore View. From here, I see another VM is using all the I/O on the datastore; click through to that VM, see the applications associated with it, and diagnose the issue from there.
  • New: Which of my datastores are busiest?
    My hypervisor is supposed to load balance everything, but I constantly run into storage performance issues. With the new Top 10 lists, you can quickly find out which datastores are busiest and which VMs are causing the problem.
  • New: Do I have any CPU/memory/storage capacity I can reclaim?
    With the sprawl view, Orion will give you recommendations on modifying or deleting VMs to reclaim precious CPU, Memory and Storage resources - but before you do, you can drill down to the application view to make sure there is nothing important running on that server.

List of New Views

There are many new views, subviews and resources - here is a quick list.

  • Added to the Virtualization Tab
    • Storage - summary of all virtualization storage, both capacity and performance.
    • Sprawl - where you can reclaim CPU, memory, and storage
    • Map - show Virtualization Manager map embedded within Orion
    • Reporting - show Virtualization Manager reports within Orion
  • Added to the some virtual object views in Orion (Cluster, Host and Virtual Machine)
    • Storage Subview - summarizing the storage related to that virtual object, both capacity and performance
  • Datastore View: Summary of the capacity and performance of a datastore

I'll focus on the datastore view next.



Datastore View Details

The datastore view focuses on a single datastore, presenting its capacity, utilization, and performance, as well as the other virtualization objects and applications related to it. Yes, applications too.


List of Resources:

  • Virtualization Manager Tools:  Links to Virtualization Manager from this object
  • Virtualization Manager Alerts: All Virtualization Manager alerts for this object
  • Datastore Info: Status, type and location of the datastore
  • Datastore usage: Capacity, utilization, over provisioning and predicted depletion date
  • Related nodes: All the nodes in Orion related to this datastore.
  • Applications on this Datastore: The applications running on the VMs on this datastore.
  • Top Ten VMs by Used Space: Which VMs are consuming the most space.
  • Top Ten VMs by Allocated Space: Which VMs have been allocated the most space.
  • Top Ten VMs by Low Storage Space: Which VMs are almost out of space.
  • Datastore IOPS and Latency gauges: The last value collected for IOPS and latency.
  • IOPS (Datastore & Top VMs): Chart of datastore IOPS over time, overlaid with IOPS from the busiest VMs.
  • Latency (Datastore & Top VMs): Chart of datastore latency over time, overlaid with the latency of the busiest VMs.
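The "predicted depletion date" in the Datastore Usage resource is presumably a trend-based forecast. As a rough illustration of the idea (this is my own sketch, not Virtualization Manager's actual forecasting model), a naive linear extrapolation looks like this:

```python
from datetime import date, timedelta

def predicted_depletion(capacity_gb, used_gb, daily_growth_gb, today):
    """Naive linear extrapolation: days until the datastore fills
    at the current growth rate. Returns None if usage isn't growing."""
    if daily_growth_gb <= 0:
        return None
    days_left = (capacity_gb - used_gb) / daily_growth_gb
    return today + timedelta(days=days_left)

# A 1 TB datastore with 700 GB used, growing 10 GB/day, fills in 30 days.
print(predicted_depletion(1000, 700, 10, date(2013, 1, 1)))  # 2013-01-31
```

A real forecaster would smooth the growth rate over a window rather than trust a single day's delta, but the output is the same kind of date you see in the resource.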


More to Come

We couldn't list everything or cover every aspect in a single post.  There will be lots of questions about core features (alerts, reports, configuration, etc.) that we will cover in future posts, but please ask questions here or contact me directly if you want to have a more in-depth conversation about integration.

Disclaimer:  The screenshots presented here are mockups and are not identical to the final deliverable.





From the beginning, one of our key objectives at SolarWinds has been to make your lives better and your jobs a little easier.  We believe that commitment can be seen in our products and in the strong user community that we have around those products…


But, we haven’t always done the best job of making it easy to work with us.

We hear you and understand that this is an important part of your experience with SolarWinds. For the past several months, we’ve been working on a handful of initiatives that we hope will change that.


Today, we’re tackling one request that we have heard time and time again.


You told us you wanted a personalized experience when working with SolarWinds, including a dedicated team that knows your account, time to evaluate products on your own schedule and a place where you can go as a customer for all your needs. (For reference, visit this thread: Pet Peeve)


SolarWinds is rolling out new processes, tools, and procedures to better serve you in this regard.


  • We’re introducing a customer sales experience featuring a team of hand-picked, experienced professionals who will take the time to understand your current environment and the business challenges you are facing.

    The team, as a whole, aims to deliver a host of value-added services to customers, including customer-only offers, trainings, webcasts, and more.


  • We’ve launched new functionality allowing you, our customer, to download trials of other SolarWinds products directly from the license management page inside the Customer Portal.

    When you download a trial through the Portal, there won’t be any more registration forms and you won’t be added to our marketing emails, which should cut down on the volume of emails about other unrelated SolarWinds products.

    What will happen when you download from the Customer Portal?  You can expect to receive no more than two calls from a dedicated customer sales team representative.

    The first, shortly after download, to make sure you have their contact information and all the resources you need to install and evaluate your new software.

    Then – unless we hear from you first – you will receive only one more call a bit later to find out what you thought of the evaluation and determine if you have any questions.

    That’s all.




In order to help us improve our interaction and conversations, you now have direct access to our Senior Sales Management team to share feedback and frustrations in real-time. Think of it as our “How’s our Selling?” hotline.


  • Contact us via phone (+1 512.498.6510) or email. These messages will route directly to all SolarWinds sales VPs, SVPs and EVPs.


  • Within 48 hours of sending feedback, you will get a personal response from a VP and we will take immediate action to correct the situation.


This content is outdated and has been updated here.


In the next few weeks, we will be rolling out a big change to the SolarWinds Customer Portal, adding individual user profiles to be used for login and account management moving forward. For purposes of logging in, changing/resetting passwords, managing account information, and more, you will now use a login based on your email address instead of a shared SolarWinds Customer ID (SWID).


Changes will be immediately obvious the next time that you log in to the Customer Portal.


Frequently Asked Questions About Individual User Profiles:


Will I still log in with my SolarWinds Customer ID (SWID)?

No, you will no longer be able to log in using your SWID. You will need to create a user profile in order to log in to the Customer Portal.


Do I still need my SWID?

Yes, your SWID (SolarWinds Customer ID) is still used to identify your account with SolarWinds. When logged into the portal, you will be able to see your company name and SWID at the top of the page.


What is the new individual user profile, and why do I have to create one?

For purposes of logging in, changing/resetting passwords, managing account information, and more, you will now use a login based on your email address instead of a shared SolarWinds Customer ID (SWID). This will make it much simpler for you to manage your account, and more secure as well, because you will no longer have to share credentials among multiple team members.


Help: I can't create an individual account?

In order to create an individual account, you will need to already have a SolarWinds ID (SWID) and be a SolarWinds customer. You will use your SWID and password to create your user profile. If you do not have a SWID but are a customer, contact customer service to get one. If you do not know your password, use the SWID password retrieval links on the Customer Portal log in page to retrieve it. You need to be listed as your account's primary contact in order to recover these credentials.


What are the different types of individual account? How do I know which one I have?

Currently, there are two types of accounts: Standard Access and Account Administrator. Standard Access gives you full access to the customer portal as you know it today. Account Administrators have access to additional account administration functionality within the portal and can modify contact types and roles for other users on the SWID.


Are there plans to add more roles?

Yes, we are planning to add additional roles and more granular access levels in the future.


I am the main contact on my account. Can I add other users to the account?

Yes, you can add contacts and users to the account. You will need to do this from the Company Account Settings screen. The only way to access this page is to be listed as an Account Administrator.


How do I become an account administrator for my company?

If there is an existing account administrator for your company’s Customer Portal, they can change the roles of other users and make them account admins. If no one is an account administrator, a support rep can assist you in becoming one. In order to request this role, please submit a support ticket for customer service.


I have access to several different SWIDs. Do I need different user profiles for each SWID's Customer Portal I need to log in to?

You will only have one user profile for your email address. This profile can be linked to access multiple SWIDs.


How do I link different SWIDs?

You can link your user profile to additional SWIDs by going to your User Profile Settings page after logging in to the Customer Portal. In order to link an additional SWID, you will need the SWID and password for the account that you wish to link.


Can you help me create an individual user profile?

Please refer to the tutorial below on how to create your user profile for the first time. We tried to make the process as simple as possible, but if you do experience issues while creating your account, this tutorial should help.


Can I change my password for my user profile?

Yes, you can change your individual user profile password from within the customer portal on your profile page.


What if I forget my password?

You can get a temporary password and reset it to a new password using the Forgot Password link on the SolarWinds Customer Portal login page.


What if someone else in my organization forgets their password?

The user who has forgotten their password will need to reset the password themselves.


Will Customer Service give me my password if I forget it?

No, for your security, SolarWinds Customer Service will not have access to your password. You will need to retrieve and change your user profile password yourself.  


How do I make updates to my company’s information that is on file with SolarWinds?

On the Company Account update page (accessible by the account admin), users can download the Account Information Update form and submit it to customer service via email to be updated.


What if I need the password for my shared account in order to create a user profile?

The person who is listed as the primary account contact with SolarWinds can retrieve this information by using the Forgot Password dialog and clicking I need to retrieve account information for a SWID. This will allow the primary contact to retrieve the shared credentials that are needed in order to create an individual account.


When you come to the Customer Portal, you will use your SWID and password as you have in the past if you have not created your user profile. If you have created a user profile already, log in with your email address and the password you selected.


[Screenshot: Customer Portal log in page]


After logging in with the shared credentials, you will be taken to the following page to create an individual user profile. Click Get Started and head to the form. If you have already done this, click I'm already set up and log in.




Fill out all fields on the form to create your account.


[Screenshot: user profile creation form]


After filling out this form, click Create Account and you will immediately be logged in to the customer portal. After your initial log in, you will receive an email at the address provided with a link to confirm your account. You are required to click the link before your next log in to the customer portal. If you do not confirm your account, you will not be able to log in.


[Screenshot: confirm account dialog]


If you return to the portal and re-enter your shared SWID credentials, you will be prompted again to create a user profile. If you have already created a user profile, simply click the Log In button. If you have access to more than one SWID, we recommend linking your user profile to the other SWIDs so that you only have to remember a single set of credentials. To do this, log in to your account and navigate to the user profile settings page:


[Screenshot: home page with unconfirmed account]

On this page, you will have the option to link to another account. You will need the SWID and password of that account in order to link it to your user profile.


[Screenshot: link profile screen]

If you are the administrator for your account, you can also access the company profile settings screen where you can view users who have access to the Customer Portal for your company, add users, assign roles and contact types and review other information related to your account.


[Screenshot: company profile settings page]


[Screenshot: manage users interface]

In the previous post you have seen how to handle the perils of JavaScript and we also touched on the topic of clean and maintainable transactions. Today we will continue with a few more tips on how to make the transactions more maintainable and we will discuss common issues with playbacks.


Add some pauses

There are many variables which influence the performance of monitored web applications. Responses are sometimes quick and your page is displayed instantly, but if IT decides it’s a good time to back up the main servers, you might experience delays and variations in response times. There can be many other reasons: Internet connection problems, performance issues in your browser or operating system, or simply a web application that is still under development and has not been optimized for performance. Java applets are a good example where you might need to watch the load time more carefully. Slow load times can not only cause variations in response times, but can even cause the transaction to fail.


Variations in response times and false alerts about failed transactions make it difficult to fine-tune your web application performance monitoring. Thresholds you have defined can be reached and false alarms can be triggered just because of this variance in the network.


If you have these kinds of problems, consider adding a few ‘waits’ to the transaction. If you see the response time of an action vary between 1 and 4 seconds, simply add a 5 second wait after that step to accommodate the variation. Wait times won’t count toward the overall timing of the transaction, but will help to absorb the changes in response times. Click on the Add Wait Time button (in yellow on the screenshot below) and define how long to wait (in blue).
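A simple rule of thumb for sizing that wait, sketched in Python (the function is my own illustration, not part of WPM): take the worst response time you have observed and add a margin of about a second.

```python
import math

def suggested_wait_s(observed_response_times_s, margin_s=1.0):
    """Wait long enough to absorb observed variation: the worst
    observed response time plus a margin, rounded up to whole seconds."""
    return math.ceil(max(observed_response_times_s) + margin_s)

# Responses observed between 1 and 4 seconds -> add a 5 second wait.
print(suggested_wait_s([1.2, 2.8, 4.0]))  # 5
```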




Alternatively, you can use an image match. An image match will wait up to a defined number of seconds (30 by default) for the image to load. This waiting time is counted toward the duration of the transaction; however, if the image match expires before the image is loaded, the whole transaction will fail. Click on the Image Match button (in yellow), mark the area to match (in red) and define how long to wait (in blue).






Browser Versions

Web Performance Monitor uses the Internet Explorer browser to play back your transactions. One of the great features of WPM is the ability to use remote players and thus provide performance data from various locations (offices, branches, customer regions and so on). However, each remote player might use a different version of Internet Explorer, and this brings variation into your playbacks. Different versions of Internet Explorer can interpret code differently and display the page with small variations that might result in different response times or failures during transaction playback.


When you are recording a transaction, double-check the version of the browser on the machine where you make the recording and the versions on all your remote players, and make sure they are the same. Even a small difference can make actions like 'Image Match' fail.




Optimize the load

A WPM playback replays exactly what a user does in Internet Explorer, and this imposes certain memory and CPU requirements. That is also the reason why you can’t have hundreds of transactions on one player: it would be like having hundreds of users on the same computer (just recall how slow your browser can be when you have too many tabs open). If you assign too many transactions to one player, it might affect response times and performance data and cause false alarms about failed transactions.


To help you optimize utilization of the players, we provide a load indicator for each player.




The value of the indicator can range from 0 to several hundred (%).  What does this value mean? Here is a simplified formula for calculating the load:


player_load = (number_of_running_playbacks / total_number_of_playback_workers) * 100 + transactions_waiting_for_playback


The transactions_waiting_for_playback value is based on the sum of the wait times of the transactions on the player before they are played back. The longer transactions wait for playback on the player, the higher this value gets.


Basically, it says how well you utilize your player for transaction playback. Most of the time you want it to be around 100% (or even slightly above). There might be ups and downs, and you might experience short spikes, so what you need to look at is the long-term trend. If the load is consistently below 100%, there is still capacity to handle more transactions. If the load is consistently above 100%, your player is too busy and it might have an impact on performance data.
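The formula above can be sketched as follows (the waiting term is simplified here to a precomputed percentage, since its exact computation isn't shown):

```python
def player_load(running_playbacks, playback_workers, waiting_penalty=0.0):
    """Simplified WPM player load in percent: the fraction of busy
    playback workers, plus a penalty that grows with the time
    transactions spend queued waiting for playback."""
    return running_playbacks / playback_workers * 100 + waiting_penalty

# 5 of 7 workers busy, no queuing: ~71% -> capacity for more transactions.
print(round(player_load(5, 7)))   # 71
# All 7 workers busy plus queued work: 120% -> the player is too busy.
print(player_load(7, 7, 20))      # 120.0
```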




What are the options?


  1. Simplify transactions
  2. Reduce the frequency of playbacks
  3. Move transactions to another player
  4. Add resources to the player


Simplify the transactions as we described in the previous post. Make sure each transaction is simple and minimalistic and contains only actions that are needed to verify application functionality or for which you want to collect performance data.


Reduce the frequency of playbacks so that it still gives you the information you need, but balances the number of transactions on a given player. Simply edit the transaction in the UI and make it run once per hour instead of every 5 minutes.
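The effect of that schedule change on the player's workload is easy to quantify:

```python
# Playbacks per day before and after the schedule change described above.
every_5_min = 24 * 60 // 5   # 288 playbacks per day
hourly = 24                  # 24 playbacks per day
print(every_5_min, hourly, every_5_min // hourly)  # 288 24 12
```

Moving a single transaction from a 5-minute to an hourly schedule frees up twelve times its share of the player's capacity.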




Consider moving the transaction to another player and check the impact on the load indicator of the original player. You might want to group similar transactions on one player, or dedicate a player to the transactions which exercise some of the more complex business logic of your application and consume more resources and time.


Add resources to the player. More RAM and CPU can help, but not always, and there are internal limitations of Internet Explorer. WPM therefore limits the number of workers on external players to 7 (workers on the main poller are limited to 2). Generally, horizontal scaling works better than vertical scaling.


In this post we have learned how adding waits to a transaction can help to absorb variations in response times. We also learned that different versions of browsers interpret the page differently, sometimes causing false alerts, and lastly we discussed how to optimize the load on your players.


In the next post we will look at how to use and troubleshoot transactions that monitor desktop applications through Citrix XenApp.


Get the most out of your Web Performance Monitor

If you own SolarWinds Engineer’s Toolset (ETS) and any SolarWinds product shipped with the Orion platform, you can leverage the functionality of Toolset directly from Orion platform based products. For more on Orion integration with Engineer’s Toolset, refer to Craig’s blog.


You may have ETS installed on a laptop, a helpdesk agent system, or any other system on your network, and you may already use the integration functionality with Orion. If so, you are familiar with the tools you call by right-clicking on a node. The tool runs on the local machine, so the traffic is sourced from your local machine and not the server. The diagram below gives a visual representation of a deployment where Engineer’s Toolset resides outside of the Orion server. Using the Engineer’s Toolset - Orion integration function by right-clicking a node (switch-32-01), the source traffic originates from the system running Engineer’s Toolset ( and the reply is sent to the same system, while the local machine displays the output.



Why would I need to source the tool from my Orion server, and how do I achieve it?


You may have restrictions within your organization that do not allow your ETS machine to access some parts of your network, and you may want to call simple tools from your Orion server without buying another ETS license. In this case, you need some mechanism to perform this functionality. Let's see how to call the PING tool from an Orion server.

  1. Download and install PsTools from Microsoft; specifically, you will need PsExec. Install this on your Orion server. PsExec allows you to run commands on a remote machine.
  2. Create a batch script (*.bat) and save it into the same local folder where PsTools was installed. Batch files are useful for storing a set of DOS commands that are executed by calling the file's name.

Here's a sample batch script.


@echo off

cmd /K <path>\psexec.exe \\%1 <remote cmd> %2

To break down the batch script parameters:

- @echo off: stops the batch file from echoing each command as it runs,

- /K: carries out the command specified by the string and continues (in this case, the cmd.exe window stays open after psexec.exe finishes),

- <path>: is the local (Orion server) folder where psexec.exe is installed,

- %1: expands to the first command-line argument (the IP address of the Orion server),

- <remote cmd>: is any DOS command which will be executed on the source machine/Orion server, and

- %2: expands to the second command-line argument (the IP address of the remote device you want to PING).


This script executes PING on the Orion server (so the traffic is sourced from the Orion server) and displays the resulting output locally:


@echo off
cmd /K C:\Tools\psexec.exe \\%1 ping %2


and the following for TRACEROUTE:


@echo off
cmd /K C:\Tools\psexec.exe \\%1 tracert %2


  3. Next, on your ETS system, go to the Engineer’s Toolset Integration tray icon and update the right-click menu to include your remotely executed command.



  4. Go to the Menu Items tab and click “Create, delete and edit menu items”.


  5. Input the Target path and Command-Line Arguments on the Item Details page (as shown below).



  6. Transfer the new menu items from the Available field into the Selected Menu field.


  7. Open the Orion Web Console and use the Engineer’s Toolset Integration menu. The new menu items should appear and can be used to execute the remote command against targeted nodes.


If you're an Orion customer and you haven't tried the Engineer's Toolset, you can learn more about it here.

SolarWinds NTA v3.11 is now available for download in your customer portal.

NTA v3.11 brings these notable improvements:


  • Support for sampled NetFlow
  • QoS hierarchy and performance updates
  • Support for query of IP ranges/CIDR within Flow Navigator


You can view the full set of release notes, including problems fixed here.

Here is a preview of how Nested CBQoS Policies are displayed in v3.11:


SolarWinds FSM v6.5 is now available for download in your customer portal.

FSM v6.5 brings these notable improvements:


  • Juniper SRX support
  • Increased support for managing, tracking, searching and documenting business justification rules in IOS
  • Extended rule/object change analytics, now including IOS
  • Enhanced change modeling capabilities for predicting the impact of rule/object changes on security and traffic flow, now including IOS


You can view the full set of release notes, including problems fixed here.

Note about upgrading to FSM v6.5: The release notes referenced above contain important information about upgrades from your current version of FirePAC or FSM to FSM v6.5. We recommend that users spend a few minutes reviewing them, especially if you plan to upgrade from FirePAC deployed in standalone mode to FSM v6.5.

We have recently officially reached the Release Candidate (RC) phase for our next release, Web Help Desk 12.0. RC is the last step before general availability and is a chance for existing customers on active maintenance to get the newest functionality before it is available to everyone else. You can find the RC binaries for download in the SolarWinds customer portal.


If you have any questions I encourage you to leverage the WHD RC group on thwack:


This release contains the following product improvements and new features:


  • Improved LDAP Directory and Active Directory auto-discovery
  • Native Asset Discovery (using WMI)
  • Native integration of the Asset Database with the SolarWinds Network Performance Monitor (NPM), Network Configuration Manager (NCM), and Server & Application Monitor (SAM) asset databases
  • Native integration with Alerts from SolarWinds NPM, NCM and SAM
  • New Getting Started wizard
  • Automatic setup of WHD database in the wizard
  • Support for Windows 2012, Windows 8 (only for evaluation version of WHD)
  • Support for MS SQL 2012, support for embedded PostgreSQL database
  • Migration utility from Frontbase to PostgreSQL


Let's look at some of these features in more detail.


Native WMI Asset Discovery

WHD is now capable of discovering assets in your network using WMI, so it doesn't have to rely on external asset databases. We gather information like hostname, IP, domain, MAC address, OS, SP level, user profiles, CPU and memory information, installed printers, HD size and model, and installed software and patches. By simply defining an IP range, credentials for WMI, and a schedule for discovery and synchronization, WHD will keep your asset database synced with what's actually on your network. You can find the configuration details in the following screenshot.



Once you schedule discovery and it has successfully completed, you can find the discovered assets in the database. Please note the Discovery Connection column, which shows where each asset is coming from.
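For reference, the inventory fields listed above map naturally onto standard WMI classes. The table below is my own illustration of where such data typically lives in WMI, not a statement of WHD's actual query set:

```python
# Standard WMI classes that carry the asset fields described above.
ASSET_FIELD_SOURCES = {
    "hostname, domain, memory": "Win32_ComputerSystem",
    "IP and MAC address":       "Win32_NetworkAdapterConfiguration",
    "OS and SP level":          "Win32_OperatingSystem",
    "user profiles":            "Win32_UserProfile",
    "CPU":                      "Win32_Processor",
    "installed printers":       "Win32_Printer",
    "HD size and model":        "Win32_DiskDrive",
    "installed software":       "Win32_Product",
    "installed patches":        "Win32_QuickFixEngineering",
}

for field, wmi_class in sorted(ASSET_FIELD_SOURCES.items()):
    print(f"{field:28s} <- {wmi_class}")
```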



SolarWinds Asset Database Integration

To make it easier to integrate WHD with other SolarWinds products, we introduced native integration for NPM, NCM and SAM. WHD now natively provides the mapping from each product's database to the WHD asset database, so you only need to provide connection details and credentials. On the screenshot below you can see the configuration options for integration with NPM.



Once the discovery is finished, you can see the imported data in WHD. Please note the Discovery Connection column, which indicates where the nodes are coming from.



SolarWinds Alerts Integration

Another great feature we introduced is native support for SolarWinds alerts from NPM, NCM and SAM. Create a connection and define a few simple rules. Each rule tells WHD when to accept an alert and create a ticket. When an alert is triggered and it matches the filters, a ticket will be created or updated (if it already exists). On the following screenshot you can see the configuration for an NPM connection.


We will define a filter to accept all alerts which have a severity other than "Low". This is what it looks like in WHD:


If you are familiar with Alert Central, you should be very comfortable here, but the filter definitions are intuitive even if you have not tried Alert Central yet. Now let's say you have an alert defined in NPM which is triggered when a node is unmanaged. If we unmanage node NPM_SG9323P038, we will see the following alert:


Immediately we will notice there is a new ticket in WHD.
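The rule logic itself amounts to a simple predicate over the alert's properties. Sketched here in Python with illustrative field names (not WHD's actual schema):

```python
def accept_alert(alert, blocked_severities=("Low",)):
    """Accept any alert whose severity is not in the blocked set,
    mirroring the 'severity other than Low' filter above."""
    return alert["severity"] not in blocked_severities

alerts = [
    {"name": "Node NPM_SG9323P038 is unmanaged", "severity": "Critical"},
    {"name": "Disk space low on volume C:", "severity": "Low"},
]
# Only the alerts that pass the filter become tickets.
tickets = [a["name"] for a in alerts if accept_alert(a)]
print(tickets)  # ['Node NPM_SG9323P038 is unmanaged']
```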


The connection between WHD and NPM is bi-directional. As soon as you add the first note to a ticket, WHD will notify NPM and acknowledge your original alert. Any text you put into the notes in WHD is also added to the alert notes in NPM. On the next two screenshots you can see the same note added in WHD and how it is displayed in NPM.


The same note is also visible from the NPM web console.



Active Directory and LDAP Directory Auto-Discovery

Active Directory is one of the most common user databases, and we wanted to make it as easy as possible to configure. With the new auto-discovery feature, you now only need to define connection details like the hostname and click the Detect Settings button; WHD will not only find out whether there is an Active Directory or generic LDAP directory server, but also pre-configure basic settings like the user DN or search filter. Then you only need to define the connection account details and you can start to use your directory server. The following screenshot shows the much simplified configuration form. Please also note the new Test Settings button, which will help you make sure the settings are correct.


Once you have successfully configured the connection, you can search for users (just don't forget to tick the Search LDAP checkbox):

