
Product Blog


Content is key as new applications come to market and as new versions of long-standing products are released. Application templates are a critical component of what makes Server & Application Monitor (SAM) great, and we’re constantly taking feedback on how to enhance the content we have and what additional content folks would like to see. The following post is part 1 of a series on net-new and enhanced application monitoring templates for Server & Application Monitor. As always, if you have comments or feedback, or if there are application templates you would like to see that we do not offer today, please let us know.



SAP HANA is a net-new addition to our library. Unlike many of our other templates, there are some prerequisites to get monitoring to work properly.


This template can be found on THWACK® at the following URL, or, if you have SAM, you can find it on the application templates page, which connects to SAM.

SAP HANA 2.0.apm-template

From the server that will be polling your HANA instances, you’ll need the 32-bit or 64-bit HANA ODBC drivers, which you should be able to download from the SAP portal. You also need the ODBC credentials to access SAP HANA 2.0 Express Edition. Note that if you install the 64-bit drivers, you will need to update the template to use the 64-bit job engine instead of the default 32-bit engine.
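As a sketch of what the polling side needs, here is how an ODBC connection string for a HANA instance might be assembled. The driver name HDBODBC and port 39015 are typical defaults for the HANA client and Express Edition, but verify them against your installation; the host and credentials below are invented.

```python
def hana_odbc_connection_string(host, port, user, password):
    """Assemble a DRIVER-based ODBC connection string for SAP HANA.

    HDBODBC is the usual name registered by the HANA ODBC client;
    SERVERNODE takes host:port, where the port depends on the
    instance number (Express Edition commonly uses 39015).
    """
    parts = {
        "DRIVER": "{HDBODBC}",
        "SERVERNODE": f"{host}:{port}",
        "UID": user,
        "PWD": password,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = hana_odbc_connection_string("hana-express", 39015, "MONITOR", "secret")
print(conn_str)
```

A string like this is what the ODBC layer on the polling server ultimately consumes, whichever tool builds it.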


If you have an account for the SAP Support Portal (customer, partner, or ask your administrator), just enter SAP HANA client in the search bar, select version 2.0, and choose the operating system (such as Windows).

If you don’t have an SAP support account, you can also download the SAP HANA client from the developer community at https://www.sap.com/developer/trials-downloads.html. This will direct you to the SAP store; it also requires an account, but this one is free.

The metrics we are gathering for HANA include the following. (For more details on what the counters mean, how they are calculated, and any reference documentation, please see the links to the templates; in this case, for HANA,
SAP HANA 2.0.apm-template.)


  • CPU Utilization %
  • I/O Read Throughput in MB - DATA volume
  • I/O Read Throughput in MB - LOG Volume
  • I/O Write Throughput in MB - DATA Volume
  • I/O Write Throughput in MB - LOG Volume
  • System Memory Used %
  • Heap Memory Used %
  • Connections
  • Active Statements
  • Active Procedures
  • Table Lock Count
  • Record Lock Count
  • Blocked Transaction Count


Here is how this looks in SAM:



Enhanced Exchange 2016:

Next up is a set of enhancements to an existing template we already offer today, Microsoft Exchange 2016. We just added some new experience monitors as well as some component monitors within the template itself.



There are now four templates available for Exchange 2016.

  • Active Sync Connectivity
  • Edge Transport Role Counters & Services
  • Mailbox Role Counters & Services
  • OWA Form Login (PowerShell)




Prerequisites for these templates:

  1. WMI access to the Exchange server.
  2. Credentials: Windows Administrator on the target server.
  3. To run the template “Exchange Active Sync Connectivity Template”:
      1. The Exchange 2016 Management Tools must also be installed on the polling machine. Once installed, load the snap-in in PowerShell via this command:
        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn;
      2. Double-click the Exchange Server installer. It will prompt for a folder in which to save the extracted files. Once extraction is complete, go to the Scripts folder and run the script “new-testcasconnectivityuser.ps1”. This script creates the test user needed to fetch output from the “Test-ActiveSyncConnectivity” command used in the script.
      3. “Test-ActiveSyncConnectivity” requires a Client Access Server (CAS). You can find the server name by executing the PowerShell command “Get-ExchangeServer” and noting the “Name” value.
      4. Verify that http://<Hostname>/powershell or https://<Hostname>/powershell is working.
  4. To run the template “Exchange 2016 OWA Form Login (PowerShell)”:
      1. Resolve the IP of the node this script will run against, and add an entry for that IP to the etc/hosts file.
      2. Verify that http://<Hostname>/owa or https://<Hostname>/owa is working.
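Step 4.1 above (resolving the node’s IP and adding it to the hosts file) can be sketched as follows. This is only an illustration of the lookup-and-format step; “localhost” stands in for your Exchange node’s hostname, and the helper name is made up.

```python
import socket

def hosts_file_entry(hostname):
    """Resolve a hostname with the OS resolver and format an
    etc/hosts line (IP, a tab, then the hostname)."""
    ip = socket.gethostbyname(hostname)
    return f"{ip}\t{hostname}"

# "localhost" is used here only so the sketch runs anywhere;
# substitute the name of the node the script will run against.
print(hosts_file_entry("localhost"))
```

The resulting line is what you would append to the hosts file on the polling machine.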


SQL 2016 on Windows:

You can read more about and download these two templates here.



There are now two templates available for SQL Server 2016 on Windows.

  • Analysis Services
  • Reporting Services


For SQL 2016 Analysis Services, we are collecting the following metrics/info.

  • Service: SQL Server Analysis Services
  • Cache: Direct hits/sec
  • Cache: Lookups/sec
  • Cache: Direct hit ratio
  • Cache: Current entries
  • Cache: Current KB
  • Cache: Inserts/sec
  • Cache: Evictions/sec
  • Cache: Misses/sec
  • Cache: KB added/sec
  • Cache: Total direct hits
  • Cache: Total evictions
  • Cache: Total filtered iterator cache hits
  • Cache: Total filtered iterator cache misses
  • Cache: Total inserts
  • Cache: Total lookups
  • Cache: Total misses
  • Connection: Current connections
  • Connection: Current user sessions
  • Connection: Requests/sec
  • Connection: Failures/sec
  • Connection: Successes/sec
  • Connection: Total failures
  • Connection: Total requests
  • Connection: Total successes
  • Data Mining Prediction: Queries/sec
  • Data Mining Prediction: Predictions/sec
  • Locks: Current latch waits
  • Locks: Current lock waits
  • Locks: Current locks
  • Locks: Lock waits/sec
  • Locks: Total deadlocks detected
  • Locks: Latch waits/sec
  • Locks: Lock denials/sec
  • Locks: Lock grants/sec
  • Locks: Lock requests/sec
  • Locks: Unlock requests/sec
  • MDX: Total NON EMPTY unoptimized
  • MDX: Total recomputes
  • MDX: Total Sonar subcubes
  • Memory: Cleaner Memory shrinkable KB
  • Memory: Cleaner Memory nonshrinkable KB
  • Memory: Cleaner Memory KB
  • Memory: Cleaner Balance/sec
  • Memory: Filestore KB
  • Memory: Filestore Writes/sec
  • Memory: Filestore IO Errors/sec
  • Memory: Quota Blocked
  • Memory: Filestore Reads/sec
  • Proactive Caching: Notifications/sec
  • Proactive Caching: Processing Cancellations/sec
  • Proc Aggregations: Temp file bytes written/sec
  • Processing: Rows read/sec
  • Processing: Rows written/sec
  • Processing: Total rows read
  • Processing: Rows converted/sec
  • Processing: Total rows converted
  • Processing: Total rows written
  • Storage Engine Query: Queries from cache direct/sec
  • Storage Engine Query: Queries from cache filtered/sec
  • Storage Engine Query: Queries from file/sec
  • Storage Engine Query: Avg time/query
  • Storage Engine Query: Measure group queries/sec
  • Storage Engine Query: Dimension queries/sec
  • Threads: Processing pool idle I/O job threads
  • Threads: Processing pool busy I/O job threads
  • Threads: Processing pool job queue length
  • Threads: Processing pool job rate


Here is how that will look in SAM:


For Reporting Services, we are collecting the following metrics/info:

  • MSRS Windows Service: Active Sessions
  • MSRS Windows Service: Cache Flushes/Sec
  • MSRS Windows Service: Cache Hits/Sec
  • MSRS Windows Service: Cache Hits/Sec (Semantic Models)
  • MSRS Windows Service: Cache Misses/Sec
  • MSRS Windows Service: Cache Misses/Sec (Semantic Models)
  • MSRS Windows Service: Delivers/Sec
  • MSRS Windows Service: Events/Sec
  • MSRS Windows Service: Memory Cache Hits/Sec
  • MSRS Windows Service: Memory Cache Miss/Sec
  • MSRS Windows Service: Reports Executed/Sec
  • MSRS Windows Service: Requests/Sec
  • MSRS Windows Service: Snapshot Updates/Sec
  • MSRS Windows Service: Total Processing Failures
  • MSRS Windows Service: Total Rejected Threads
  • MSRS Windows Service: Report Requests
  • MSRS Windows Service: First Session Requests/Sec
  • MSRS Windows Service: Next Session Requests/Sec
  • MSRS Windows Service: Total App Domain Recycles
  • MSRS Windows Service: Total Cache Flushes
  • MSRS Windows Service: Total Cache Hits
  • MSRS Windows Service: Total Cache Hits (Semantic Models)
  • MSRS Windows Service: Total Cache Misses
  • MSRS Windows Service: Total Cache Misses (Semantic Models)
  • MSRS Windows Service: Total Deliveries
  • MSRS Windows Service: Total Events
  • MSRS Windows Service: Total Memory Cache Hits
  • MSRS Windows Service: Total Memory Cache Misses
  • MSRS Windows Service: Total Reports Executed
  • MSRS Windows Service: Total Requests
  • MSRS Windows Service: Total Snapshot Updates
  • Report Server: Active Connections
  • Report Server: Bytes Received/sec
  • Report Server: Bytes Sent/sec
  • Report Server: Errors/sec
  • Report Server: Logon Attempts/sec
  • Report Server: Logon Successes/sec
  • Report Server: Memory Pressure State
  • Report Server: Memory Shrink Amount
  • Report Server: Memory Shrink Notifications/sec
  • Report Server: Requests Executing
  • Report Server: Requests/sec
  • Report Server: Tasks Queued
  • Service: SQL Server Reporting Services
  • Report Server TCP Port
  • Report Server: Bytes Received Total
  • Report Server: Bytes Sent Total
  • Report Server: Errors Total
  • Report Server: Logon Attempts Total
  • Report Server: Logon Successes Total
  • Report Server: Requests Disconnected
  • Report Server: Requests Not Authorized
  • Report Server: Requests Rejected
  • Report Server: Requests Total


SQL 2017 on Windows:

You can read more about and download the template here.


This template uses Windows performance counters to assess the status and performance of Microsoft SQL Server 2017 Analysis Services.




Below are the metrics and counters we will gather:

  • Service: SQL Server Analysis Services
  • Cache: Direct hits/sec
  • Cache: Lookups/sec
  • Cache: Direct hit ratio
  • Cache: Current entries
  • Cache: Current KB
  • Cache: Inserts/sec
  • Cache: Evictions/sec
  • Cache: Misses/sec
  • Cache: KB added/sec
  • Cache: Total direct hits
  • Cache: Total evictions
  • Cache: Total filtered iterator cache hits
  • Cache: Total filtered iterator cache misses
  • Cache: Total inserts
  • Cache: Total lookups
  • Cache: Total misses
  • Connection: Current connections
  • Connection: Current user sessions
  • Connection: Requests/sec
  • Connection: Failures/sec
  • Connection: Successes/sec
  • Connection: Total failures
  • Connection: Total requests
  • Connection: Total successes
  • Data Mining Prediction: Queries/sec
  • Data Mining Prediction: Predictions/sec
  • Locks: Current latch waits
  • Locks: Current lock waits
  • Locks: Current locks
  • Locks: Lock waits/sec
  • Locks: Total deadlocks detected
  • Locks: Latch waits/sec
  • Locks: Lock denials/sec
  • Locks: Lock grants/sec
  • Locks: Lock requests/sec
  • Locks: Unlock requests/sec
  • MDX: Total NON EMPTY unoptimized
  • MDX: Total recomputes
  • MDX: Total Sonar subcubes
  • Memory: Cleaner Memory shrinkable KB
  • Memory: Cleaner Memory nonshrinkable KB
  • Memory: Cleaner Memory KB
  • Memory: Cleaner Balance/sec
  • Memory: Filestore KB
  • Memory: Filestore Writes/sec
  • Memory: Filestore IO Errors/sec
  • Memory: Quota Blocked
  • Memory: Filestore Reads/sec
  • Proactive Caching: Notifications/sec
  • Proactive Caching: Processing Cancellations/sec
  • Proc Aggregations: Temp file bytes written/sec
  • Proc Aggregations: Current partitions
  • Proc Aggregations: Total partitions
  • Proc Aggregations: Memory size rows
  • Proc Aggregations: Memory size bytes
  • Proc Aggregations: Rows merged/sec
  • Proc Aggregations: Rows created/sec
  • Proc Aggregations: Temp file rows written/sec
  • Processing: Rows read/sec
  • Processing: Rows written/sec
  • Processing: Total rows read
  • Processing: Rows converted/sec
  • Processing: Total rows converted
  • Processing: Total rows written
  • Storage Engine Query: Queries from cache direct/sec
  • Storage Engine Query: Queries from cache filtered/sec
  • Storage Engine Query: Queries from file/sec
  • Storage Engine Query: Avg time/query
  • Storage Engine Query: Measure group queries/sec
  • Storage Engine Query: Dimension queries/sec
  • Threads: Processing pool idle I/O job threads
  • Threads: Processing pool busy I/O job threads
  • Threads: Processing pool job queue length
  • Threads: Processing pool job rate


The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

I am happy to announce the General Availability of Database Performance Analyzer (DPA) 12.0. This release focuses on analysis with two major features: Query Performance Analyzer (QPA) and Table Tuning Advisor. We have also improved our integration with the Orion® Platform by adding blocking, deadlocks, and wait time status to the PerfStack feature. In this post, I will cover Table Tuning Advisor, while QPA will be covered in another post.


Table Tuning Advisor

Every database has inefficient queries—ones that perform many logical reads but retrieve a relatively small number of rows. In other words, they do a lot of work for a small return. This type of inefficiency can result in higher I/O, longer wait times, greater amounts of blocking, and increased resource contention.
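The inefficiency described above can be pictured as a reads-per-row ratio: how much work a query does for each row it returns. The following is an illustrative sketch only; the threshold and function names are made up and are not DPA’s actual algorithm.

```python
def reads_per_row(logical_reads, rows_returned):
    """Work done per row retrieved; higher means less efficient."""
    return logical_reads / max(rows_returned, 1)

def is_inefficient(logical_reads, rows_returned, threshold=1000):
    """Flag a query that performs many logical reads for a small return.
    The threshold of 1,000 reads/row is an arbitrary example value."""
    return reads_per_row(logical_reads, rows_returned) > threshold

assert is_inefficient(5_000_000, 120)    # millions of reads for 120 rows
assert not is_inefficient(4_000, 200)    # 20 reads per row is reasonable
```

Queries on the wrong side of a ratio like this are the ones that drive up I/O, wait times, and blocking.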


Tuning inefficient queries can be difficult, and many questions tend to surface as part of the process. DPA 12.0 with Table Tuning Advisor can help lead you to answers to some of these common questions.

  • Should I tune the query? Add a new index? Or maybe add columns to an existing index?
  • Plans are complex and hard to analyze; which steps are the ones I should pay attention to?
  • Which predicates in the plans are causing inefficient data access and a high number of reads?
  • Are there recommendations I can use as a starting point?
  • Are there other inefficient queries that access the same table and could be affected by indexing decisions?
  • How many indexes currently exist on the table and how are they designed?
  • How much data churn (inserts, deletes, and sometimes updates) does the table undergo?


DPA’s Table Tuning Advisor is designed to analyze expensive queries and plans to help identify tables that have inefficient workloads running against them. For each table, the advisor page displays aggregated information about the table and its inefficient queries. You can use this information to make informed decisions about database performance optimization opportunities, and to weigh the potential costs and benefits of adding an index.



There are two ways to get to the advisor page:

  • A new Tuning super-tab near the top of the page appears after clicking into an instance. This will take you to a page that combines the Query and Table Tuning Advisors.

  • The new Query Performance Analyzer (QPA) page with the Table Tuning Advisors section provides a summary of the advice aggregated to the table level and includes links to the advisor detail page.

Advisor Page Layout

The Table Tuning Advisor page has three main areas:

  • Inefficient SQL – a list of queries accessing the table ranked by relative workload.
  • SQL and Plan Details – SQL and Plan details for the selected query.
  • Table and Index Information – current table information, existing indexes on the table and the table’s columns.



Table Tuning Advisor Example

Let’s assume we are being proactive and want to tune something that will have a big impact. At a summary level, the tuning tab shows the tables with inefficient queries and ranks them based on workload. The list includes an aggregated view of wait time for each table, the number of queries that have inefficient plan steps on the table, and the number of index recommendations. This list quickly gives insight into the tables that have the highest inefficient workloads executing against them. These are prime opportunities for tuning.
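The ranking described above can be sketched as a simple sort over per-table aggregates. The table names and wait values below are invented sample data, not DPA output.

```python
# Per-table aggregates: (total wait seconds from inefficient
# queries, number of inefficient queries touching the table).
inefficient = {
    "orders":    (5400.0, 7),
    "customers": (320.0, 2),
    "lineitem":  (8100.0, 4),
}

# Rank tables by aggregated wait time, highest workload first.
ranked = sorted(inefficient.items(), key=lambda kv: kv[1][0], reverse=True)
top_table = ranked[0][0]
print(top_table)  # the prime tuning opportunity in this sample
```

The table at the top of such a list is where a tuning effort pays off most.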


Are there any recommendations to use as a starting point? Clicking on the “orders” table takes us to the Table Tuning Advisor page that provides details about inefficient queries accessing the table. This page pulls together what you need to know about the table regarding inefficient usage patterns, statistical information, design of current indexes, and much more. Index recommendations appear near the top of the page and may provide a good starting point for a solution.


Which steps in the plans are inefficient and does it align with the recommendation? DPA uses a proprietary algorithm to find plan steps that are inefficient and causing issues. Inefficient “producer” steps (for example, full table/index scans) read data to be processed later by subsequent "consumer" plan steps. While consumer steps (for example, sorts) can have a high plan cost, they are usually affected by a preceding producer step that read too much data. DPA can point out the inefficient producer plan steps that should be the focus of tuning efforts.


In this example, DPA identified two steps that are inefficient:

  1. INDEX SCAN – Step 64 – A full scan of the o_totalprice_index index. Notice the predicate value that shows a function named CONVERT. The query is using a CONVERT function against the o_totalprice column, which will often negate effective use of an index. An INDEX SCAN reads the entire index, which is why the step shows 15 million rows associated with it.
  2. CLUSTERED INDEX SCAN – Step 69 – A full scan of the orders table. Notice the CONVERT_IMPLICIT function within the predicate value. This indicates an implicit conversion, i.e., data type mismatch, and DPA displays a predicate warning as a result. Click on the warning to get additional information. Other potential warnings include:
    1. Lookup Warning – The plan uses an index but is required to go back to the table to “look up” other needed information. Adding a “covering” index can potentially help tune this issue.
    2. Spool Warning – The plan step is storing data for later use, but large amounts of spooling can cause disk overhead.
    3. Parallel Warning – DPA has detected a parallelism step later in this query's execution, implying that this step's intermediate result set is likely large enough to exceed parallel processing cost thresholds. Look for ways to rewrite the query to reduce the size of intermediate result sets earlier in the query. For example, look for a sub-select that could produce fewer rows.


Based on the data shown by DPA in this example, the index recommendation may help tune the clustered index scan in step 69. However, tuning step 64 will likely require a modification to the query to remove the CONVERT function on the o_totalprice column. Gleaning this information via manual plan analysis would probably take hours. Plan analysis is difficult, so let Table Tuning Advisor help get you to a good starting place.


Are there other inefficient queries that access this table? The left pane of the Table Tuning Advisor page shows other inefficient queries, ranked by relative workload, accessing the “orders” table. Pay attention to the queries near the top of this list because they cause more workload against the table. Conversely, you should not spend as much time on queries near the bottom with small relative workloads. These queries could be affected by a new or modified index on the table.


How many indexes currently exist on the table and how are they designed? Toward the bottom of the Table Tuning Advisor page, the current indexes and their columns are shown along with information about statistics and usage. Also shown are fragmentation percentages, sizes of the table and indexes, the table’s columns, and more. This is important for several reasons:

  • Is the data churn for the table high? If so, this means insert/delete activity is high and a new index could cause more harm than good.
  • Is there an existing index that already contains the o_shippriority column? If so, can the index be modified to benefit this query versus creating a new index?
  • Were optimizer statistics generated recently? If not, and churn is high, updating the statistics for the table may be a good first step.
  • Are indexes fragmented? If they are and scans are performed against them, defragmenting them may help performance.


What Did You Find?

Our development team uses DPA to help make sure our code performs well. When using the Table Tuning Advisor, it pointed them to a problematic set of tables. Within a couple of hours, they tuned the queries with a simple rewrite and saved hours of database time every night during the cleaning process. If you find interesting stories in your environment, let us know by leaving comments on this blog post.


We would love to hear feedback about the following:

  • Does this improve your workflow when tuning a query? How much time does it save you?
  • Are there tuning questions that are not answered by the page?
  • Is all of the assembled data important to you when tuning?


What’s Next?

Don’t forget to read Brian’s blog about Query Performance Analyzer (QPA). To learn more about other DPA 12.0 new features, see the DPA Documentation library and visit your SolarWinds Customer Portal to get the new software.


If you don't see the features you've been wanting in this release, check out the What We Are Working On for DPA post for what our dedicated team of database nerds are already looking at. If you don't see everything you've been wishing for there, add it to the Database Performance Analyzer Feature Requests.

To kick off the Q3 systems releases, I am happy to announce the General Availability of Database Performance Analyzer version 12.0. This release focuses on analysis with two major features: Query Performance Analyzer (QPA) and the Table Tuning Advisor. We've also improved our integration with the Orion® Platform by adding blocking, deadlocks, and wait time status to PerfStack™. In this post, I'll cover QPA and the Orion integration. Table Tuning Advisor will be covered in another post.


Query Performance Analyzer

QPA is designed to intelligently assemble current and historical data for a query, combining all the information about a query into one place, including the query analysis (summarized per day) and the historical charts (30 days of data, down to 10-minute intervals). QPA analyzes the data about the query and automatically expands sections and selects metrics to show you the most relevant data. It also allows you to change time ranges on the query, and still has the great drill-down capability you are used to. You can use QPA for queries in any database supported by DPA.
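The per-day summarization described above can be illustrated with a small sketch: 10-minute samples rolled up into daily totals. The timestamps and wait values are invented, and DPA’s actual aggregation is internal to the product.

```python
from collections import defaultdict
from datetime import datetime

def summarize_per_day(samples):
    """Roll (timestamp, wait_seconds) samples taken at 10-minute
    intervals up into one total per calendar day."""
    totals = defaultdict(float)
    for ts, wait in samples:
        totals[ts.date()] += wait
    return dict(totals)

samples = [
    (datetime(2018, 4, 23, 9, 0), 12.5),
    (datetime(2018, 4, 23, 9, 10), 30.0),
    (datetime(2018, 4, 24, 9, 0), 5.0),
]
daily = summarize_per_day(samples)
```

Keeping both granularities, as QPA does, lets you spot a trend in the daily view and then drill down to the 10-minute data behind it.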


Since QPA has all the data you previously saw on multiple screens, all links on query hashes and names now go to QPA, keeping your current timeframe. So now, when you go to look at a query in the product, you get QPA!


New Charting Capabilities

QPA uses SolarWinds' new Nova GUI components, allowing us to assemble and present data in new ways. We are very excited to have adopted this technology. There are a few nifty features that you'll see in the screenshots below.

  • Charts all have the same x-axis, even if their data is at different frequencies or ranges
  • As you roll over the chart, the values and time are shown both for the chart you are on and all other charts displayed
  • In all charts, you can uncheck one of the items on the legend to remove it
  • When you roll over an item in the chart legend, it is highlighted while other items are grayed out

All of these combine to make it very easy to inspect and correlate data across multiple charts.


QPA Layout

QPA has two main areas:

  • The Wait Type Chart and Time Navigation
  • Three tabs showing different data and analysis


Top Chart - Wait Types and Navigation (yes it's sticky!)

DPA is all about waits, so the top chart shows the total wait time by wait type, and it is sticky: it stays at the top of the page, making it easy to correlate the waits with the data in the charts below it. The new time navigation at the top of the chart allows you to choose a pre-defined time range or build your own. And now, you can display data further back than 30 days if you need to.


Tabs - Intelligent Analysis, SQL Text and Supporting Data

QPA has three tabs which we cover in detail below.

  • Intelligent Analysis: Intelligently assembles and displays the most relevant data about the query
  • SQL Text: A nicely formatted version of the SQL text
  • Supporting Data: Additional performance data about the query, available for time ranges of 24 hours or less


Intelligent Analysis

QPA can intelligently assemble the most important information about a query and allow you to customize your view to meet your needs. Intelligence includes expanding sections to show you relevant data and picking metrics based on the predominant wait type.


Sections include:

  • Query Advisor: Latest advice for the query in the current time period
  • Table Tuning Advisor: Latest Table Tuning Advisor results for the query in the current time period
  • Statistics: Query statistics, both the actual value and per execution
  • Blocking: Shows blocking info (blockee and blocker) if it sees significant blocking
  • Plans: Shows plan information if more than one plan is used for the current time period
  • Resource Metrics for the Instance: Displays instance resources based on the predominant wait time

Here is a query with both Query Advisors and Table Tuning Advisors.

Keep scrolling to see multiple plans and PLE (and more CPU/memory resources). Note that:

  • The wait type chart shrinks and stays at the top of the page
  • Rolling over a chart shows detailed data on each chart
  • QPA selected which sections to expand and which metrics to show


SQL Text

Formatted SQL text that is easy to read, as well as easy to copy.


Supporting Data

Supporting data is additional per-query data we collect, and it is only available for timeframes of 24 hours or less. Sections auto-expand if DPA detects interesting data.



Analyzing a Query with QPA (Example)

If we look at the following query for 30 days in QPA, we can see that wait time started increasing around April 23. The query advisors show advice for the latest day (just like on the trends page), but instead of drilling in, let’s scroll down the page.

I see the number of executions is unchanged, but wait time per execution increased with wait time... so it looks like something changed.

If I keep scrolling, I'll see that the blocking section is closed (so no blocking), but the plans section is open and showing multiple plans. DPA noticed a plan change and displayed this chart automatically. If there was only one plan, DPA closes the chart and just gives you a link to the plan.

Note that the increases in wait time and wait time per execution correspond to the same time as the plan change on April 23. Bingo!

If I want to see more detail on April 23, I can drill by clicking on the bar chart (just like on the trends page).  I can click it on the top chart, or any other bar chart (like the plans chart).

When I drill into Apr 23, I can see that the change correlates to the plan change. Note that I can also see the instance statistics, and they don't indicate any kind of resource pressure.


From here, I can drill down to an hour if I want, or I can click the plan hashes and take a look at the differences between them.


Blocking, Deadlocks and Wait Time Status in PerfStack

We don't have a new DPA Integration Module (DPAIM) for the 12.0 release, but PerfStack is so versatile, we can share new data with it and have it available automatically. Now blocking (root blocking and blockee), deadlocks, and wait time status are available in PerfStack.

When you highlight the blocking info, you can see the queries in the data explorer.


What did you find in your environment?

We'd love to hear your story about queries and indexes you've improved in your environment. Feel free to post your stories here and commiserate with your fellow admins. For example, during an RC-assisted upgrade, we helped a customer upgrade and walked through the new features, and in just a few minutes, we found a query with over six hours of wait time in QPA. By drilling into the new Table Advisor, we were able to discover the table was missing an index.


What's Next?

Don't forget to read Dean's blog on the Table Tuning Advisor and the DPA 12.0 Release Notes.


If you don't see the features you've been wanting in this release, check out the What We Are Working On for DPA (Updated August 29, 2018) post for what our dedicated team of database nerds and code jockeys are already looking at.  If you don't see everything you've been wishing for there, add it to the Database Performance Analyzer Feature Requests.

NetFlow Traffic Analyzer

Faster. Leaner. More Secure.


The new NetFlow Traffic Analyzer leverages the power of columnstore technology in MS SQL Server to deliver answers to your flow analysis questions faster than ever before. MS SQL 2016 and later runs in a more efficient footprint than previous flow storage technologies, making better use of your infrastructure. Support for TLS 1.2 communication channels and monitoring of TCP and UDP Port 0 traffic helps to secure your environment.


Version 4.4 also introduces a new installation process to confirm that you have the necessary prerequisites, and to guide you through the installation and configuration process.


NTA 4.4 is now available in the Customer Portal. Check out the Release Notes for an overview of the features.



The latest release of NTA uses Microsoft’s latest SQL Server columnstore technology for its flow storage database. Columnstore databases organize and query data by column, rather than by row index, and are the optimal technology for large-scale data warehouse repositories, like massive volumes of individual flow records. Our testing and our beta customers’ experiences indicate that columnstore indexes deliver substantial performance improvements in both query speed and data compression efficiency.
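To see why column orientation suits this workload, here is a minimal sketch contrasting row-wise and column-wise layouts of flow records. The records are invented sample data, and real columnstores add compression and batch execution on top of this idea.

```python
# Row store: each record is a tuple; a single-column aggregate
# still touches every field of every row.
rows = [
    ("10.0.0.1", 443, 1500),
    ("10.0.0.2", 80, 900),
    ("10.0.0.3", 443, 600),
]
row_total = sum(r[2] for r in rows)

# Column store: each column is stored contiguously, so the same
# aggregate reads only the one column it needs. Values within a
# column are alike, which is also why they compress so well.
columns = {
    "src":   ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
    "port":  [443, 80, 443],
    "bytes": [1500, 900, 600],
}
col_total = sum(columns["bytes"])

assert row_total == col_total  # same answer, far less data scanned
```

For queries like "total bytes over the last hour" across millions of flow records, scanning one compressed column instead of whole rows is where the speedup comes from.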


NTA was an early adopter of columnstore technology to enhance the performance of our flow storage database. As Microsoft’s columnstore solutions have matured, we’ve chosen to adopt the MS SQL 2016 and later versions as the supported flow storage technology. That offers our customers the ability to standardize on MS SQL across the Orion platform, and to manage their monitoring data using a common set of tools with common expertise. We’ve made deployment and support simpler, more robust, and more performant.



This same columnstore technology also runs more efficiently within the existing resource footprint. The solution builds and maintains columnstore indexes in memory, and then manages bulk record insertions with much less intensive I/O to the disk storage. The CPU required to build indexes is also substantially lower than in our previous versions. As a result, this version will make better use of the same resources to run more efficiently.


More Secure

This version of NTA supports TLS 1.2 communication channels, required in many environments to secure communications with client users.


Beginning in this version, NTA will explicitly monitor network flows that are destined to TCP or UDP service port 0. Traffic addressed to TCP or UDP port 0 is either malformed or malicious. This port is reserved for internal use, and network traffic on the wire should never appear addressed to it. By highlighting and tracking flows addressed to port 0, NTA helps network administrators identify sources of malicious traffic that may be attacking hosts in their network, and provides the information they need to shut that traffic down.
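The port 0 check can be pictured as a simple filter over flow records. The field names and sample flows below are hypothetical, not NTA’s internal schema.

```python
def port_zero_flows(flows):
    """Return flows destined to port 0: traffic that should never
    legitimately appear on the wire, so each hit is worth a look."""
    return [f for f in flows if f["dst_port"] == 0]

flows = [
    {"src": "10.1.1.5", "src_port": 51000, "dst": "10.2.2.9", "dst_port": 0},
    {"src": "10.1.1.6", "src_port": 51001, "dst": "10.2.2.9", "dst_port": 443},
]
suspect = port_zero_flows(flows)  # the first flow is flagged
```

Surfacing these flows as a distinct application, as NTA does, makes them visible in every application resource rather than buried in raw flow data.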


NTA will surface port 0 traffic as a distinct application, so the information is available in all application resources.

NTA Port 0 Traffic

Supported Database Configurations

This version of NTA maintains a separate database for Flow Storage. NPM also maintains the Orion database for device and interface data. Both of these databases are built in MS SQL instances.


New installations of NTA and upgrades to version 4.4 and later will require an instance of MS SQL 2016 Service Pack 1 or later version for flow storage. For evaluation, the express edition is supported. For production deployments, we support the Standard and Enterprise editions.
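As a quick illustration of that requirement, SQL Server 2016 SP1 corresponds to product version 13.0.4001, so a hypothetical preflight helper could compare an instance's version string against that floor (illustrative only, not part of the installer):

```python
# Minimum product version for NTA flow storage: SQL Server 2016 SP1.
MINIMUM = (13, 0, 4001)

def meets_minimum(product_version: str) -> bool:
    """Compare a SQL Server version string such as '13.0.4001.0'
    against the 2016 SP1 floor."""
    parts = tuple(int(p) for p in product_version.split("."))
    return parts[:3] >= MINIMUM
```

For example, `meets_minimum("12.0.5000.0")` is false because SQL Server 2014 (major version 12) is below the supported floor.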


When upgrading to this version from an older version on the FastBit database, data migration is not supported. The upgrade will build out a new, empty database in the new MS SQL instance. The existing flow data in the FastBit database will not be deleted or modified in any way; that data can be archived for regulatory requirements, and customers can run older product versions in evaluation mode to temporarily access it.


In previous NTA versions, we required a separate dedicated server for Flow Storage. The simplest upgrade uses that dedicated server with the new release to host an instance of MS SQL 2016 SP1 or later for flow storage. Many of our customers will be interested in running both the Orion database and the NTA Flow Storage database in the same MS SQL instance. We support that, but for most customers it will take some planning to consolidate, and to size that instance appropriately to support both databases.


Here's a more detailed discussion of NTA's New MS SQL Based Flow Storage Database. Also, a knowledge base article on NTA 4.4 Adoption is available, with frequently asked questions.


We’re doing some testing now to provide performance guidance and key performance indicators to monitor. One of the benefits of using MS SQL technology for both of these databases is that many common tools and techniques are available to monitor and tune MS SQL databases. We plan to provide guidance for both monitoring and deployment planning.



Please visit the NetFlow Traffic Analyzer Forum on THWACK to discuss your experiences and new feature requests for NTA.

I am very excited to announce that SolarWinds NCM 7.8 is available for download in the Customer Portal! This release brings many valuable features, and the release notes are a great resource for exploring them.


Network Insight for Cisco Nexus
This is the third iteration in our Network Insight series, and in this release we have extended those insights to Cisco Nexus. We understand that your Cisco Nexus devices are a sizable investment that comes with a host of valuable features, and that you expect deeper insight from your SolarWinds monitoring and management tools as a result. That meant going back to develop some new features and expand existing ones to ensure the relevant information is presented properly, so that your workflows are logical and more time efficient.



Virtual Port Channels

One of the really awesome features of a Cisco Nexus, and one that comes with a good deal of complexity, is the ability to create and deploy vPCs. A vPC operates as a single logical interface but is actually a group of interfaces working together. This means managing vPCs can become a time drain as the number of vPCs increases and as the number of interfaces on each vPC pair grows. Network Insight provides a view showing each vPC and its member interfaces. This is covered in the NPM v12.3 release blog.


In addition to this view, there is another layer of detail that shows the configuration of each vPC and its member interfaces. To see this detail, click "View Configs" on the vPC page. This page displays the configuration details for each side of the vPC and the configuration of each member interface, allowing you to save time by more efficiently identifying configuration errors within the vPC and its member interfaces. I think we can all agree that not having to hop across multiple windows and run manual searches or commands to find issues is a major workflow improvement!
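Spotting a mismatch between the two sides of a vPC is essentially a config diff. A generic sketch of that comparison with Python's standard difflib (illustrative only, with made-up interface configs):

```python
import difflib

# Running config of the same member interface on each vPC peer.
primary = """interface Ethernet1/1
  switchport mode trunk
  channel-group 10 mode active""".splitlines()

secondary = """interface Ethernet1/1
  switchport mode access
  channel-group 10 mode active""".splitlines()

# Any +/- lines indicate a configuration mismatch between the peers;
# the file-header lines (---/+++) are filtered out.
mismatch = [
    line for line in difflib.unified_diff(primary, secondary, lineterm="")
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
```

Here the trunk/access mismatch on Ethernet1/1 is the only difference, so `mismatch` contains exactly those two lines.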


The example below is a vPC with multiple member interfaces:


Virtual Device Contexts

As covered here, each VDC is essentially a VM on a Cisco Nexus (Cisco ASAs have an equivalent concept!), and each context is configured separately and provides its own set of services. These configurations are downloaded and backed up by NCM, and they are referenced by all the features in this release.


To manage a context in NCM, just click "Monitor Node" and walk through the node addition process; once that has concluded, each configuration is downloaded and stored separately.


Access Control Lists

ACLs define what to do with network traffic. They can be complicated to manage because each ACL contains rules (Access Control Entries), and those rules reference object groups: containers that hold the specific information for a given rule, such as the interfaces a particular MAC address should be blocked from traversing. This layering creates problems. Done manually, you need to verify that rules are actually handling traffic by examining hit counts, and that none of the rules are shadowed or redundant. Lastly, to ensure we met all of your needs for ACLs, we extended the existing Access Control List (ACL) functionality beyond Port Access Control Lists (PACLs) and VLAN Access Control Lists (VACLs) to include MAC ACLs and non-contiguous subnet masks.


ACLs are super easy to add: once your Nexus nodes are in NCM, it automatically discovers their ACLs and gives you access to all the information inside them. You won't need to spend copious amounts of time digging into each ACL to determine whether changes occurred and what those changes were.


To see the list of ACLs for a particular Nexus, mouse over the entities on the side panel and select “Access Lists.”

Access Control List Entity View


With this view you can see the historical record of each ACL, including the date and time of each revision and whether any overlapping rules exist inside each version of the ACL. To expose a previous version for viewing, just expand the view. From this same screen you can view the ACL details and compare against the next most recent revision, an older revision, or a different node's ACL.

ACL detail view and rule alerts


When you navigate into the ACL, each of the rules in that ACL is displayed, including all of its syntax. In this view each rule provides a hit counter, making it easy to see which rules are impacting traffic and which are not. You can also drill down into the object groups.


Viewing conflicting rules is simple in NCM. Expanding the alert, you can see the shadowed or redundant rules.

  • Redundant: a rule earlier in the list overlaps this rule, and does the same action to the matched traffic.
  • Shadowed: a rule earlier in the list overlaps this rule, and does the opposite action.
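For simple one-dimensional rules, that classification can be sketched as follows. This is a toy model matching only on destination port ranges; real ACE overlap analysis also considers addresses, protocols, and object groups:

```python
# Each rule: (action, (low_port, high_port)). Earlier rules win, so a
# later rule fully covered by an earlier one is redundant (same action)
# or shadowed (opposite action).
def classify(rules, index):
    action, (lo, hi) = rules[index]
    for prev_action, (plo, phi) in rules[:index]:
        if plo <= lo and hi <= phi:  # earlier rule covers this one
            return "redundant" if prev_action == action else "shadowed"
    return "active"

rules = [
    ("permit", (0, 1023)),     # earlier, broad rule
    ("permit", (80, 80)),      # covered, same action     -> redundant
    ("deny",   (443, 443)),    # covered, opposite action -> shadowed
    ("permit", (8080, 8080)),  # not covered              -> active
]
```

Running `classify` over each position reproduces the two alert types above: rule 2 is redundant, rule 3 is shadowed, and rule 4 is active.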


Interface Config Snippets

At some point during the course of your day, you will identify one or more interfaces that warrant deeper inspection. Based on feedback from many of you, we discovered that once you reached this point you needed more information, specifically the configuration of that interface. Normally you would have had to dig into the overall running or startup configs, navigating away from the interface screen. This is why we created interface config snippets, and it is probably my favorite feature in this Network Insight release.


These snippets are the running configurations of the specific interface you are viewing.

Interface Config Snippet

Once you have found the snippet on the page, you can verify which configuration the snippet is pulled from and the date and time when it was downloaded.

Interface Config Snippet details + history



That is all I have for now on this release, but I recommend you check out our online demo and visit the customer portal to click through this functionality and see all the great features available in this release. My colleague cobrien put together a great blog on Network Performance Monitor's v12.3 release for Network Insight, and I highly recommend you head over and give it a read! I look forward to hearing your feedback once you have this new release up and running in your environment!


Starting with NPM 12.2, SolarWinds embarked on a journey to transform your Orion deployment experience with fast and frequent releases of key deployment components. The first step was revamping the legacy installer into the new and improved SolarWinds Orion installer, which could deploy or upgrade an entire main poller in one seamless session. The second iteration added the capability to do the same for your scalability engines. In this release, NTA has been updated to use an MS SQL database, allowing us to happily say that the SolarWinds Orion installer is truly an all-in-one solution for your Orion deployment. For NPM 12.3, we have also made tremendous scalability improvements that allow you to utilize even more scalability engines. As a result, Orion deployment upgrades grow in complexity, so the installer team is providing additional updates to how you can stage your environment for minimal upgrade time.


Normal Upgrade Process


Using the All-in-One SolarWinds Orion installer, your upgrade process will look like the following.


Step one:


Review all system requirements, back up your database, and if possible snapshot the Orion deployment. This is especially important in this release, as the NTA Flow Storage database requirements have changed. Note: the Flow Storage database is the database instance that stores NTA-collected flow data. In previous versions it used a FastBit database, but in this release it has been updated to use MS SQL, with a minimum version of 2016. The Orion database is the primary database that stores all polled data from NPM and other Orion products.


Step two:


Download the NPM 12.3 installer, selecting either the online or the offline variant according to your system requirements.


Step three:


Run the installer on your main poller and let the upgrade run to completion. If you have any other Orion product modules installed, the installer will upgrade them to the latest versions at the same time to maintain compatibility with the new Orion Platform 2018.2. If new database instances need to be configured, that is handled during the Configuration Wizard stage of the main poller upgrade. This release of the installer has a new type of preflight check that requires confirmation from you before proceeding; the example below is one for the NTA upgrade. Click for details to see the confirmation dialog and select yes or no.


Configuration Wizard step for NTA:


Step four:


If you don’t have any scalability engines (e.g., Additional Polling Engines, Additional Websites, or HA Backups), you’re ready to explore all of the new features available in this version!


Scalability Engines


For those environments utilizing scalability engines or for those who are looking to try them out, this section will guide you through the process of deployment. Even if you have not utilized scalability engines previously, trying them out to test the scale improvements is incredibly easy. Like every SolarWinds Orion product, they are available for an unlimited 30-day free evaluation.


Deploying a fresh scalability engine is handled with the same installer that you downloaded for the main poller.


1. Copy the installer to your intended server, then right-click it and select “Run as Administrator.”


Note: The offline installer is about 2 GB, so copying it to your server can take some time, and it does not currently stage the scalability engine for a faster upgrade (something we’d like to improve in a future release). To shorten that initial transfer, you can use the online installer instead; it is about 40 MB, so getting it onto the server is much quicker. This still meets offline requirements, because when you select the “Add a Scalability Engine” option it downloads from the main poller to maintain version compatibility and does not require internet access. As always, the 40 MB scalability engines installer is also available for download from the All Settings -> Polling Engines page.


2. Select the “Add a Scalability Engine” option.


first screen of installer


3. Similar to the main poller upgrade process, at this point system checks that are specific to scalability engines will be run.


Note: Anything tagged as a blocker may need confirmation or action from you before proceeding. If that is the case, address those issues and run the installer again. Items tagged as warnings or informational messages are simply for your awareness and will not prevent your installation from proceeding.


4. Select the type of scalability engine that you are looking to deploy, and then complete the steps in the wizard to finish your installation per your normal process.



Upgrading a scalability engine is also handled through the same installer. However, this is where you have an opportunity to use our staging feature.

Note: If you were to proceed with your normal practice of putting the scalability engines installer on each server you need to upgrade, and then manually upgrading, that process will work perfectly well with no changes. Please read through the “Staging Your Environment for your Scalability Engines Upgrade” section below to see the alternative workflow that allows you to stage your environment.


Staging Your Environment for Your Scalability Engines Upgrade


For customers with more than a handful of scalability engines, or with some distributed over WAN links, we noticed occasional extremely high download times from the main poller to the scalability engines. In addition, there was no centralized place to see the upgrade state of the scalability engines. Navigate to "All Settings" and click "High Availability Deployment Summary," and you will see the foundational pieces of an Orion deployment view.


The Servers tab contains the original High Availability Deployment Summary content, and is where you can continue to set up additional HA pools and your HA environment.


Check out the new Deployment Health tab! You may not have heard of our Active Diagnostics tool, but it comes prepackaged with every install of the Orion Platform, with test suites designed to catch our most common support issues. We've brought that in-depth knowledge to your web console in the new Deployment Health view. With tests run nightly across your Orion deployment, every time you visit this page you will see whether there are any issues that could affect the performance of Orion or your upgrades.


You can refresh a check if you're working on an issue and want to see an updated test result. If there are tests you don't want to address, silence them to hide the results from the web console. Click the caret to the right to see more details and a link to a KB article with remediation advice.


The Updates tab is where you can stage your scalability engines.


The first page of the wizard lets you know whether updates are available to install on your scalability engines. At this point you've upgraded your main poller, so there are definitely updates available! Click "Start" to get started!


The second page tests the connection to each of the scalability engines. If we can determine the status of these engines, we'll give you the green light to proceed to the next step. A common issue that can prevent this from succeeding is that the SolarWinds Administration Service has not been updated to the correct version, or is not up and running at this point. Click "Start Preflight Checks" to proceed.


Similar to the Deployment Health tab, these are preflight checks run across your Orion deployment. You'll see all of the same preflight checks that were available through the installer client, centralized in one view. If blockers are present on this screen, you can still proceed as long as at least one scalability engine is ready to go, but note down the scalability engines with blockers; you will need to address them before an upgrade can occur on those servers. Click "Start download" to start the staging process.




At this point, we start downloading every MSI needed to upgrade your scalability engines. In this example I'm only staging one scalability engine, but if you have multiple, you can see the time savings right away: all of the downloads are triggered in parallel.

Sit back and relax as we stage your environment for you. You can even open up RDP sessions to those servers with one click from this page.


When everything has finished downloading, we will let you know which servers are ready to install. Click on the "RDP' icon to open your RDP session to the server.


On your desktop, you should see the SolarWinds scalability engines installer waiting for you to click on and finish the upgrade.


Visually you will run through the same steps you normally would when clicking through the installer wizard. However, when you get to the installation itself, you'll notice that no download phase appears in the progress bar. Finish your upgrade and move on to the next!


I hope you enjoy this update to how you upgrade your Orion deployment. I'm always looking for feedback on how we can make this as streamlined as possible for you.

NPM 12.3 is available today, May 31st, on the Customer Portal!  The release notes are a great place to get a broad overview of everything in the release.  Here, I'd like to go into greater depth on Network Insight for Cisco Nexus including why we built it and how it works.  Knowing that should help you get the most out of the new tech!


Network Insight

What's all this "Network Insight" talk?  If you haven't heard of this big theme we've been building on for a few years, start here.  If you know the story, skip ahead to the Network Insight for Cisco Nexus section.


We live in amazing times.  Every day new technologies are invented that change how we interact, how we build things, how we learn, how we live.  Many (most?) of these technologies are only possible because of the relatively new ability for endpoints to talk to each other over a network.  Networking is a key enabling technology today, much as electricity was in the 1800s and 1900s, paving the way for a whole wave of new technologies.  The better we build the networks, the more we enable this technological evolution.  That's why we believe in building great networks.


A great network does exactly one thing well: connects endpoints.  The definition of "well" has evolved through the years, but essentially it means enabling two endpoints to talk in a way that is high performance, reliable, and secure.  Turns out this is not an easy thing to do, particularly at scale.  When I first started maintaining, and later building, networks, I discovered that monitoring was one of the most effective tools I could use to build better networks.  Monitoring tells you how the network is performing so you can improve it.  Monitoring tells you when things are heading south so you can get ahead of the problem.  Monitoring tells you if there is an outage so you can fix it, sometimes even before users notice.  Monitoring reassures you when there is not an outage so you can sleep at night.


Over the past two decades, we believe as a company and as an industry we have done a good job of building monitoring to cover routers, switches, and wireless gear.  That's great, but virtually every network today includes a sprinkling of firewalls, load balancers, chassis switches, and maybe some web proxies or WAN optimizers.  These devices are few in number, but absolutely critical.  They're not simple devices either.  Monitoring tools have not done a great job with these other devices.  The problem is that we mostly treat them like just another router or switch.  Sure, there are often a few token extra metrics like connection counts, but that doesn't really represent the device properly, does it?  The data that you need to understand the health and performance of a firewall or a load balancer is just not the same as the data you need for a switch.  This is a huge visibility gap.


Network Insight is designed to fill that gap by finally treating these other devices as first-class citizens: acquiring and displaying exactly the right data set to understand the health and performance of these critical devices.


Network Insight for Cisco Nexus

Network Insight for Cisco Nexus is our third installment in the Network Insight story, following Network Insight for F5 and Network Insight for ASA.  Nexus chassis switches are used to build high performance, scalable, and virtually indestructible data center networks.  That's why Nexus switches are at the heart of many of the largest data centers.  Nexus are switches, so our traditional switching data is still important, but a $300k chassis switch has a lot of additional capabilities that a $5k switch does not.  As you saw with F5 and ASA, Network Insight for Cisco Nexus takes a clean slate approach.  We asked ourselves (and many of you) questions like:


  • What role does this device play in connecting endpoints?
  • How can you measure the quality with which the device is performing that role?
  • What is the right way to visualize that data to make it easiest to understand?
  • What are the most common problems that occur with this device?  What are the most severe?
  • Can we detect those problems?  Can we predict them?


With these learnings in hand, we built the best monitoring we could from the ground up.


VDC Aware


Similar to ASAs, Nexus switches can be split into virtual instances.  Nexus calls them Virtual Device Contexts, while ASA calls them Contexts.  VDCs are to Nexus what VMs are to servers, allowing a single piece of hardware to be split into several logical nodes.  Each logical node, or VDC, is configured separately and provides a full set of technology services.  All of the features you read about below discover complete information about each VDC.


Adding the Admin VDC for a Nexus to monitoring lets NPM map out all of the VDCs, which will then appear on the Node Details screen:


Anytime you go to Node Details for any of the VDCs, you'll get this new resource, so it's easy to navigate between them.  NCM users will also find it easier than ever to make sure all of their VDCs are backed up.  If you're well set up for catastrophic failures, they're less likely to occur, right?  More info on what NCM is doing for VDCs can be found here.


So Many Interfaces


The first big difference between Cisco Nexus and most other devices is simple interface count.  Thanks to the distributed nature of a Nexus deployment, particularly Fabric Extenders, a single Nexus 7k is likely to have hundreds or even thousands of ports.  Dealing with thousands of ports on a single device is different than dealing with the usual couple dozen, and we wanted to make sure this fundamental part of Nexus monitoring was done right.


First, the Node Details page now contains a simple summary of all of the interfaces:


Like Network Insight for ASA, we have a new sub-view for each major technology service provided by the device.  Clicking on Interfaces, in the above resource or on the sub-view tabs on the left, will bring you to the Interfaces sub-view showing all interfaces.  Clicking on any of the status icons or numbers will bring you to a list of only those interfaces.


This is built on the relatively new List View that's part of our Unified Interface Framework.  UIF is an important component for making sure all Orion Platform based tools from SolarWinds have a consistent UI experience, so when you learn how to do something in one tool, you know how to do it in all of them.  The List View is made for managing large lists, including:

  • Multi-level filtering, for example, interfaces with status Up AND (utilization Warning OR Critical).
  • Colored highlighting of values over your thresholds for that specific entity.
  • Sorting
  • Searching
  • Pagination control with up to 100 items per page.
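The multi-level filter in the first bullet composes like ordinary boolean logic. As a rough illustration with hypothetical field names (not the List View's actual data model):

```python
# Interfaces with status Up AND (utilization Warning OR Critical).
interfaces = [
    {"name": "Eth1/1", "status": "Up",   "utilization": "Critical"},
    {"name": "Eth1/2", "status": "Up",   "utilization": "Normal"},
    {"name": "Eth1/3", "status": "Down", "utilization": "Warning"},
    {"name": "Eth1/4", "status": "Up",   "utilization": "Warning"},
]

matches = [
    i["name"] for i in interfaces
    if i["status"] == "Up" and i["utilization"] in ("Warning", "Critical")
]
```

Only Eth1/1 and Eth1/4 satisfy both conditions; Eth1/3 is excluded because it is Down, even though its utilization is Warning.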


I particularly like the search function for looking up ports on a certain module.  Entering a "1/" in the search field will show you all the ports on slot 1.  Easy.


These are straightforward improvements, but I think you'll find dealing with the large interface counts on your Nexus devices much more pleasant.  And good news: we extended this sub-view to all nodes, so you have a super polished interface interaction model on your smaller switches too.


Virtual Port Channels


A big part of why people are willing to shell out the huge cost of a Nexus is more reliable connectivity to endpoints like servers; a Nexus should provide an order of magnitude more reliable connectivity.  Cisco accomplishes this with vPCs, a Multi-Chassis EtherChannel (MCEC) technology that allows a single endpoint to uplink to multiple switches.  Traditional port channels can only connect to a single upstream switch, resulting in a single point of failure.


Believe it or not, vPCs are a serious departure from how networking normally works.  In fact, a pair of Nexus switches have to "conspire" (a fancy word for lie) to present themselves as a single switch to the endpoint they're connected to.  Cisco has a bunch of technology to make it work, and in our research we found this was making it hard for administrators to understand, monitor, and troubleshoot their vPCs.  When we dug into this, we found that expert administrators will spend several minutes understanding the health of a single vPC.  They do things like:

  • Login to Nexus
  • "show vpc"
  • "show interface port-channel..."
  • "show interface...", repeat 2-4 times
  • "show run interface...", repeat 2-4 times
  • Find peer switch, login, and do all the commands again.


When all is said and done, they've mapped 5, 7, 9, or even more different components, each with its own status, performance, and config.  Our goal was to make this expert-level data set available to expert and non-expert users alike in seconds.  The vPC tab accomplishes that:

On the left we see the vPCs.  Each vPC is mapped to the local port-channel.  We find the peer switch and map the vPC to the port-channel on the peer.  Mousing over allows you to see the member ports of each port-channel and navigate to them:

Again we're using the List View, so you have filtering, sorting, searching, pagination, and so forth as expected.  Click to drill into any interface for all the details we have about it.  Of course, all of this can be alerted on and reported on to keep you ahead of problems without staring at monitoring all day.  There's some really cool additional stuff you can do with NCM specific to vPCs.  If you're interested, check out their upcoming post.


During beta and RC, we found environments where folks had spent hundreds of thousands to more than a million dollars, and countless hours, setting up high resiliency.  Once they pointed NPM at their Nexus, they found that resiliency had deteriorated over time.  They had had failures, and the redundancy saved them, but it also meant they didn't know the problem existed, so they never restored redundancy.  That leaves them one failure away from catastrophe in a multi-million dollar high-redundancy environment.


If you're in IT, you're strapped for time.  Our monitoring tools have to help us do better here.  I'm happy that NPM will now help you keep your vPCs running clean!


Access Lists?!


One thing that surprised me is how many of you are running ACLs on your Nexus.  There's a trend of moving security closer to the endpoint, and Nexus devices are the access layer for many data center environments.  This results in lots of Port Access Control Lists (PACLs) and VLAN Access Control Lists (VACLs).  Fortunately, we recently worked on this for Cisco ASA.  The latest NCM release extends and enhances the ACL backup and analysis capability, including new support for MAC ACLs and non-contiguous subnet masks.  All of the Access List functionality is based on pulling and analyzing configs, so you'll need NCM to get this feature.  Check out NCM's post - and also, bonus, my favorite part: Interface Config Snippets!


Traditional Routing and Switching


While working on the enhanced capabilities, we also revisited some core technology of ours to make sure it was covering Nexus well.  Things like routing protocol monitoring and hardware health should work better than ever.  We think we've got everything covered but there's a huge number of combinations of hardware (platform and modules) and software (trains and versions).  If you notice any gaps please shoot me a private message with the data that's not showing up for you and a SNMP walk of your device.




I would have started this guide with setup if not for the fact that setup is so darn easy.  To get this feature working, add a node as usual and you'll notice a new check box on the last step of the Add Node Wizard:



Check that box, enter your CLI credentials (read-only is fine), and you're good to go.  If you have existing Nexus devices under monitoring and you'd like the enhanced monitoring, head over to Manage Nodes.  You can edit an individual node and check this box, or find all of them with Machine Type and/or search and enable them all at once.


There's nothing else you need to configure or define.  Simple right?


Other Deep Dives


We've got a couple other deep dives for new Orion Platform features included in NPM 12.3.  Check 'em out!


Orion Platform 2018.2 Improvements - Chapter One

Orion Platform 2018.2 Improvements - Chapter Two - Intelligent Mapping

Orion Platform 2018.2 Improvements - Chapter Three




That does it for now.  You'll be able to click through the functionality yourself in our online demo starting around June 6th.  If you're on active maintenance for NPM, head over to the Customer Portal to get your upgrade now.  I'd love to hear your feedback once you have it running in your environment!

Starting with VMAN 8.0, and continuing with 8.1, we've streamlined how you deploy and use VMAN. Virtualization Manager 8.2, the latest iteration of these efforts, is now available on your Customer Portal.


One of the biggest pain points surfaced over the last two releases was that the process for adding virtualization nodes for monitoring was not intuitive. This is solved with a new, simplified workflow!


Whether choosing to add a Node or setting up a Discovery job, we've updated those entry points to direct you to the new, separate workflow.


Add a Node - Select VMware vCenter or Hyper-V devices
Network Discovery - Add VMware vCenter or Hyper-V devices
All Settings -  Add VMware vCenter or Hyper-V devices


Once you click on any of those entry points you'll be able to get started monitoring your environment with a few simple clicks.


Add a Virtual Object for Monitoring
See the thresholds that apply to your virtualization manager entities
Click Finish and you're successfully on your way to monitoring your virtualization environment


If you identified any thresholds that you'd like to tweak, simply navigate to All Settings -> Virtualization Settings to update your thresholds. Within a few clicks, you're ready to take advantage of capacity planning, recommendations and much more!


Get Started with Documentation

VMAN 8.2 Release Notes

VMAN 8.2 Getting Started Guide

VMAN 8.2 Administrator Guide

VMAN 8.2 Deployment Sizing Guide

Applications talk to each other, and you should know who they are talking to


Applications constantly rely on communication between different servers to deliver data to end users. The more applications end users require to do their jobs, the greater the complexity of application environments and those communication-based relationships.

With the release of Server & Application Monitor 6.6, we introduced an Orion agent-based feature called Application Dependencies, which enables system administrators to quickly gain an understanding of which application servers are talking to one another, as well as see related metrics, to help with troubleshooting application performance issues.


How do you enable it?

The ability to discover and map Application Dependencies is enabled by default. This allows SAM to actively collect inbound and outbound communication at the application process level. It is paired with the ability to collect connection-related metrics (latency and packet loss), which is disabled by default. You can find all of the configuration options in the Application Connection Settings section of the Main Settings & Administration screen.


What does it show you?

At its core, Application Dependencies helps you understand whether application performance issues are associated with server resource utilization or network communication. For example, Microsoft Exchange is heavily dependent on Active Directory for authentication and other services. Application Dependencies shows you the relationship, and the communication, by adding a few new resources in SAM.


There are two main areas where you can see Application Dependency information. The first is a new widget available on application and node details pages. This widget shows you the discovered application dependencies specific to that monitored application or node. Notice in the screen below that multiple Exchange servers have a dependency on the Active Directory server, ENG-AUS-SAM-62, and more specifically on the Active Directory service running on it.


The second area where you can see Application Dependency information is the connection details page, which is linked from the above-mentioned connections widget. It lets you see all of the application monitors, along with the associated processes, process resource metrics, and ports responsible for the discovered communication between two specific nodes. You will also see latency and packet loss data if you have enabled the Connection Quality Polling component. The screen below shows the relationship between ENG-AUS-SAM-62 (Active Directory) and ENG-AUS-SAM-63 (Exchange) in greater detail.

What’s going on under the covers?

There are two new Orion agent plug-ins that deliver this new functionality: the Application Dependency Mapping plug-in and the Connection Quality Polling plug-in.

The Application Dependency Mapping plug-in is responsible for collecting the active connection data from the server. That information is then sent back to the Orion server, where it is correlated with the component monitor and node data already being collected by SAM. (Note: You must have at least one component monitor, like the process monitor, applied to the server.) As SAM matches the collected data from the different application servers, it creates the connection details pages and populates the connection widget.


The Connection Quality Polling plug-in is responsible for a synthetic probe, which measures latency and packet loss. This is accomplished by sending TCP packets to the destination server on the specific port identified by the active connection information collected by the Application Dependency Mapping plug-in. It is important to note that the Connection Quality Polling plug-in includes the NPCAP driver for use with this synthetic probe.
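The idea behind such a synthetic probe can be sketched in a few lines. The snippet below is an illustrative approximation only, not SAM's actual implementation (which uses the NPCAP driver rather than ordinary sockets): it times repeated TCP connection attempts to estimate latency and treats failed attempts as loss. The host and port in the usage comment are placeholders.

```python
import socket
import time

def tcp_probe(host, port, attempts=10, timeout=2.0):
    """Rough sketch of a synthetic TCP probe: time each connection
    attempt and count timeouts/refusals as lost probes."""
    latencies_ms = []
    failures = 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                latencies_ms.append((time.monotonic() - start) * 1000.0)
        except OSError:
            failures += 1
    avg_ms = sum(latencies_ms) / len(latencies_ms) if latencies_ms else None
    loss_pct = 100.0 * failures / attempts
    return avg_ms, loss_pct

# Hypothetical usage against a server discovered by dependency mapping:
# avg_ms, loss = tcp_probe("10.0.0.5", 389)
```

Because the handshake itself is timed, this style of probe reflects round-trip latency on exactly the port the applications actually use, which is the same reason SAM probes the discovered port rather than pinging the host.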


If you would like to read more about how this feature works, you can find more information in the SAM administrator guide.


Is that it?

Application Dependencies is not the only feature that was released in SAM 6.6. You can read more about the other features in the release notes. You can also check out Application Dependencies, live in action, in the online demo.

I am happy to announce General Availability of Storage Resource Monitor 6.6.   This release continues the momentum of supporting the flash and hybrid arrays that you requested on THWACK!  We've also updated SRM to the latest version of the Orion® Platform and installer, so you'll enjoy the benefits of easier upgrades and participation in all the latest Orion features.  Check out the SRM 6.6 Release Notes for more information about installing, upgrading, and new features and fixes.


New Array Support

Support includes all the standard features you love: capacity utilization and forecast, performance, end-to-end mapping in AppStack, and integrated performance troubleshooting in PerfStack.  We were also able to squeeze in Hardware Health for all these arrays, too!

  • EMC Unity
  • HPE Nimble
  • INFINIDAT InfiniBox
  • IBM V9000

Now for some screenshots for your viewing pleasure!


Monitoring EMC Unity

[Screenshots: EMC Unity Summary, Block Storage, File Storage, and Hardware Health monitoring]


Monitoring HPE Nimble

[Screenshots: HPE Nimble Summary, Block Storage, File Storage, and Hardware Health monitoring]


Monitoring INFINIDAT InfiniBox

[Screenshots: INFINIDAT InfiniBox Summary, Block Storage, File Storage, and Hardware Health monitoring]


Monitoring IBM V9000

You'll have to try this one yourself; it looks the same as our monitoring for IBM SVC.



Don't see what you are looking for here? Check out the What we are working on for SRM after v6.7 -- Updated on Sep 27, 2018 post for what our dedicated team of storage nerds and code jockeys are already looking at.  If you don't see everything you've been wishing for there, add it to the Storage Manager (Storage Profiler) Feature Requests.


Note: The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

No, you haven't entered a multidimensional time warp. Nor are you having a '90s flashback. While the industry hype cycle is primarily focused on hot new trends like hybrid IT, SaaS, and containers, there lurks an unsung hero in the darkest dwellings of many of today's most established organizations. It oftentimes doesn't get the attention or appreciation it deserves, because to most, its existence is completely transparent. It sits there in the corner, just plugging away, day after day, hour after countless hour, without complaint or need for recognition. Yet these systems remain at the very core of the business, handling the most critical transactions. From maintaining patients' medical records, to keeping all your banking transactions in order, to running some of today's largest companies' CRM and ERP applications, AIX is still very much around us every day, touching our lives in ways you probably haven't even considered.


For as important as these systems remain even today, the monitoring of their performance and application health is far too often overlooked or completely forgotten. Perhaps it's because these workhorses were built to last and seldom fail at their important duties, making them fall into the dangerous category of out-of-sight, out-of-mind. More likely, however, is that these systems have traditionally been extremely difficult to gain visibility into using modern multi-vendor monitoring solutions. That may be because, a long time ago, IBM seemingly stole a page out of Sony's playbook of market dominance, which had propelled their proprietary Betamax and MiniDisc formats into the iPhone-like successes of their day. Oh, wait. That's not what happened! That's not what happened at all!!


Unfortunately, despite strict and well-defined industry standards governing how key operating system metrics should be exposed, allowing third-party monitoring solutions to provide necessary insight into their health and performance, IBM decided that standards didn't necessarily apply to them. This decision has historically made monitoring AIX systems challenging both for their customers and for third parties seeking to provide a monitoring solution for those organizations' most critical systems. Compounding this problem is the fact that the few monitoring solutions available to those customers have traditionally been wildly complex, difficult to deploy and configure, and even more challenging to maintain.  A new solution was needed. One which could bring with it unexpected simplicity, where none existed before. Its time has come, and that time is now.



AIX Agent Deployment


As one would expect from SolarWinds, deployment of the AIX agent is a simple turnkey affair, no different than deploying Agents to other operating systems, such as Windows or Linux. That's right, deploying an Agent to AIX is just as simple as it is for Windows and you don't need to be an expert in AIX. In fact, you don't even need any experience using AIX to be successful monitoring these systems with Server & Application Monitor (SAM) 6.6. If you can add a Node in Orion, then you, too, can monitor your AIX 7.1 and 7.2 systems.


Add Node Wizard - Push Deployment


To begin, navigate to [Settings -> All Settings -> Add Node], enter the IP address or fully qualified hostname of the AIX host you'd like managed in the "Polling Hostname or IP Address" field, and select the "Windows & Unix/Linux Servers: Agent" radio button from the available "Polling Method" options. Next, on the 'Unix/Linux' tab, enter the credentials that will be used both to connect to the AIX host and to install the agent software. The credentials provided here should have 'root' or equivalent level permissions. Note that these credentials are used only for the initial deployment of the agent; future password changes to this account will have no impact on the agent once it is deployed. Alternatively, if you authenticate to your AIX host via SSH using a client certificate rather than a username and password, click the 'Certificate Credential' radio button and upload your certificate in PEM format through the Orion web interface. This certificate will then be used to authenticate to the AIX host for the purpose of installing the Agent.


You can also optionally add SNMP credentials to the Agent if SNMP has already been configured properly on the AIX system. Rest assured, though, that this isn't needed and is used only if you want to utilize SAM's SNMP Component Monitors against the AIX system. Configuring this option will also populate the 'Location' and 'Contact' fields on the 'Node Details' resource, provided those values have been properly populated in your SNMP configuration. Everything else will be polled natively through the AIX Agent, with zero reliance upon SNMP.



Once you've entered your AIX credentials, click the 'Next' button at the bottom of the page. The Agent will then be deployed to the AIX host using a combination of SSH and SFTP. This requires TCP port 22 to be open from the Orion server (or additional polling engine) to the AIX endpoint you wish to manage for push deployment to function properly.


[Screenshots: Install Agent prompt, install progress indicator, and List Resources]


Manual - Pull Deployment


In some scenarios, it may not be possible for the Orion server to push the agent to the AIX host over SSH. This is not uncommon when the host you wish to manage resides behind a NAT or access control lists restrict access to the AIX system via SSH from the network segment where the Orion server resides.  While firewall policy changes, port forwarding, or one-to-one address translations could be made to facilitate push deployment of the agent, in many cases, it may be far easier to perform a manual deployment of the agent to those hosts.


The Agent package can be downloaded from the Orion web interface to the AIX host by going to [Settings -> All Settings -> Agent Settings -> Download Agent Software] and selecting "Unix/Linux" from the options provided and clicking "Next".


Download Agent Software - Machine Type
Download Agent Software - Deployment Method


In the following step of the wizard, select "Manual Install" and click "Next". The third and final step of the wizard is where you select 'IBM AIX 7.x' from the 'Distribution' drop-down. Here you can also configure any advanced options the agent will use when it is installed, such as which polling engine the Agent should be associated with in Agent Initiated (Active) mode, or the listening port the Agent will use when running in Server Initiated (Passive) mode. Additionally, you can specify a proxy server the Agent should use to communicate with the Orion server or Additional Polling Engine in Agent Initiated (Active) mode. If you're deploying in an environment where proxy servers are used, fret not. The Agent's proxy configuration fully supports the use of authenticated proxies.



After selecting all the appropriate configuration options, click the "Generate Command" button at the bottom of the page. This will generate a dynamic installation command based upon the settings chosen above, which can then be copied and pasted into an SSH or X-Windows session on the AIX host. The AIX machine will then download and install the appropriate agent software from the Orion server using those pre-configured options.


[Screenshots: copy the generated agent installation command, paste it into an SSH terminal, agent installation success]


As soon as the Agent is registered with the Orion server, select your newly added agent node and click "Choose Resources" from the 'Manage Agents' view to select items on the node you would like to monitor.



Agent Advantages


So what's so great about Agents anyway? What's wrong with using the tried and true agentless methods for monitoring AIX hosts, like SNMP?




Well, as anyone who has had the misfortune of using SNMP to monitor their AIX hosts can tell you, it's not all sunshine and lollipops, starting with configuring SNMP. Most environments today have strict security standards that mandate the use of encryption for virtually all network communication. While configuring SNMP v1/v2 on AIX isn't especially difficult for an experienced AIX administrator, neither of those versions of SNMP utilizes encryption. That necessitates SNMPv3, which, comparatively speaking, practically requires a Ph.D. from Big Blue University to properly enable and configure.  By comparison, the Orion AIX Agent natively utilizes 2048-bit TLS encryption for all network communication.




IBM's proprietary SNMP daemon leaves much to be desired when compared to standards-based SNMP daemons running on other operating systems. Chief among the complaints I hear regularly is that IBM's SNMP daemon doesn't support important standard MIBs, such as the HOST-RESOURCES-MIB, which exposes key information about running processes on the server and their respective resource consumption. This remains the primary reason why so many customers have chosen to replace IBM's proprietary SNMP daemon with NET-SNMP.  NET-SNMP, however, has its own issues accurately reflecting critical metrics, such as memory utilization. It seems odd that something so basic would present so many challenges, and be pervasive across both Linux and AIX, when monitored via SNMP.




Like all Agents in Orion, the AIX Agent runs independently of the Orion server. This means the Agent continues to monitor the host where it's installed, even if the Orion server is down, or otherwise inaccessible due to a network outage. Once connectivity is restored or the Orion server is brought back online, the data collected by the AIX agent is then uploaded to the Orion server, filling gaps in the historical time series charts that would have otherwise existed if that node was being monitored via SNMP. This ensures that availability reporting is accurate, even if the server running Orion experiences a catastrophic failure.




In today's highly secure and heavily firewalled environments which are riddled with network obstacles such as network address translation, port address translation, access control lists, and proxies, it's sometimes amazing that anything works at all. More and more the things that need to be monitored are oftentimes the most difficult to monitor. With the AIX agent, overcoming these obstacles is a snap, allowing users to monitor their AIX systems regardless of where they might be located in the environment. Have your Orion system deployed in the Cloud and running in Amazon's AWS or Microsoft's Azure? Not a problem. Deploy the Agent in Agent Initiated (Active) Mode and forget about VPN tunnels or 1:1 NAT mappings. Does all traffic leaving the network go through a proxy server? No problem. The Agent natively supports the use of authenticated proxies to access the Orion server, while conversely, Agent communication within Orion can be configured to utilize a proxy server to reach an Agent that might not be accessed directly. These are possibilities you previously could only dream about when using SNMP.



AIX Agent Exclusive Features


There have been several Orion features released throughout the years which had previously only been available for nodes running other operating systems, such as Linux or Windows. AIX had largely been left out in the cold. That is, until today.


Network Interface Monitoring


In Server & Application Monitor 6.6, network interfaces on your AIX server can now be monitored without needing Network Performance Monitor installed. This functionality is available exclusively through the AIX Agent and does not count against your SAM component monitor usage or NPM element license count, in the event you also have NPM installed. That means this functionality is provided essentially free and potentially even allows you to free up some of those valuable NPM element licenses for other nodes in your environment.


Volume Performance Monitoring


Today, storage is the leading cause of server and application performance issues. Having visibility into storage volume performance, such as disk reads/writes per second, and queued disk I/O from within the Orion web console alongside other key performance indicators, allows you to isolate where performance bottlenecks are occurring on your server and which applications are affected. With the AIX Agent, you now have visibility into the storage volume performance, similar to those that you've grown accustomed to on your Windows and Linux volumes.

[Screenshots: Total Disk IOPS and Disk Queue Length]



Real-Time Process Explorer


When applications aren't running right, inevitably there's a culprit. It may be one of the processes that make up the application you're already monitoring, or it might be one you aren't. The Real-Time Process Explorer provides visibility into all processes and daemons running on your AIX server, along with their respective resource utilization. It's like a web-based command center where you can quickly determine which processes are running amok. No more firing up your SSH client, logging in, and running 'topas' to troubleshoot application issues. Now you can do it all from the comfort of your web browser. Spot a runaway process, or one that's leaking memory like a sieve? You can also terminate those processes directly within that same web interface. Simply select the process(es) you want to 'kill' and click 'End Process'. Voila! It's just that easy.


You can also now select processes you wish to monitor on your AIX server directly through the Real-Time Process Explorer. To do so, simply select the process you're interested in monitoring and click 'Start Monitoring'. You'll then be walked through SAM's application template wizard, where you can choose to add this process to one of your existing Application Templates or create a new one.




Reboot Management Action


If you find yourself in a situation where terminating processes alone does not resolve your application issue, there's always the tried and true reboot. While usually the option of absolute last resort, it's comforting to have it easily at hand when and if you've exhausted all other options. Simply click the 'Reboot' button in the 'Management' resource on the 'Node Details' view and you'll be prompted to confirm that you really mean business.



Application Component Monitors


Last, and unquestionably most important, is the wide array of SAM Application Component Monitors supported by the AIX Agent. From these components, you can create templates to monitor virtually any application: commercial, open source, or homegrown.


AIX Agent Supported SAM Component Monitor Types

Directory Size Monitor

File Count Monitor

HTTPS Monitor

ODBC User Experience Monitor

DNS User Experience Monitor

File Existence Monitor

JMX Component Monitor

Process Monitor

File Age Monitor

File Size Monitor

Linux/Unix Script Monitor

SOAP Monitor

File Change Monitor

HTTP Monitor

Nagios Script Monitor

SNMP Component Monitor

TCP Port Monitor

Tomcat Server Monitor


SWQL Walkthrough

Posted by animelov Employee Mar 13, 2018

Hi, all!  And welcome back to our discussion on the SDK and all things SWQL. So, at this point, we’ve briefly introduced SWQL, but today we’re going to get down to building queries with SWQL and talk about what you can make with them.


But first, let’s discuss why this is important. The Orion® Platform does an excellent job of giving you a single-pane-of-glass view of all of your products, while giving you the freedom to pick and choose which modules you wish to purchase. Because of this, the back-end database is fairly segregated, save for a handful of canned widgets and reports. For database experts, this would be done by waving a magic wand and using the correct incantations of “Inner join!” or “Outer join!” For the rest of us, SWQL can do the trick!


Now, what this is NOT going to be is a general guide on structured query language (SQL). This does require some level of knowledge of how to construct a basic query, but don’t be scared off just yet. SWQL, let alone SQL, is not something I knew before I started, but I picked it up quite easily.


Also, before we begin, if you haven’t picked up the latest SWQL Studio, I highly recommend you do so; check out our most recent post here. It can be installed anywhere that has a connection to an Orion server, including your laptop.


Now that that is out of the way, let’s get to the meat of the subject. SWQL, in and of itself, is very similar to SQL, but with a few restrictions. For example, SWQL (and its studio) is read-only. There are SWIS calls you can make via the API/PowerShell®, but we’ll get into that in a later post. For a quick rundown, check out this post.


If I wanted to see a list of applications that I’m monitoring on my SAM instance, I could write a query that looks like this:

select Displayname from Orion.APM.Application


And if I go to Query → Execute (or F5),


that will get me an output that looks like this:


Pretty simple, right?  Let’s look and see what this does, though:



“select” and “from” should be self-explanatory if you know a little SQL. “select” means we want to retrieve/read information, and “from” states which table we’re getting that information from. Orion.APM.Application is the table name (so, we’re getting information “from” “Orion.APM.Application”), and “DisplayName” is the title of the column we’re getting that information from. Now, where did we get that column name? If you look in SWQL Studio, find the table name, and expand it, you’ll see this:



Those items with the blue icons next to them? Those are the columns we can pick from. For the other icons, check out our previous post here.


Let’s add some more to that query. If we want to see the status of the applications (up/down/warning/etc.), we can just add the status to the query, like so:

select Displayname, Status from Orion.APM.Application


This will give us:


More info on the status values can be found here, but 1 is up, 2 is down, etc.


Now, what if we wanted to select ALL of the columns in this table to see what we get? Unfortunately, this is one of the first places where SWQL differs from SQL: you cannot wildcard with *.  In other words:


select * from <tablename> does NOT work!!!


If you want all columns, you’ll have to select each one of them, separated by a comma. Luckily, SWQL will do this for you. If you right-click on the table name, you have the option of creating a general select statement:

That will generate a query for you with EVERY column possible for that table.


Pretty neat, right? Now, let’s get to the fun part of SWQL. One of the advantages of SWQL over SQL is its ability to automatically link related tables. If you are familiar with SQL, this is where things would normally get hairy with inner/outer join statements. Here, we make it easier.


Let’s use our application example again. Having the list of applications is great, but to me, it’s nothing unless I know which node each application is tied to. There is a node ID in that application table, but it returns a number, which means nothing to me.  Remember those chain-link icons from earlier?  Those are linked tables, and if you look, there is a link to the Orion.Nodes table:



To get the linkage to work, we first need to give the application table an alias. To do so, I just need to give it a name after the table declaration.  So, let’s call it “OAA,” which is short for Orion APM Application. Note: you can name it anything EXCEPT for a reserved word, like “select” or “from.”

select Displayname, Status from Orion.APM.Application OAA


Now, we need to make sure our columns are referenced to OAA by adding that to the beginning of the column names:

select OAA.Displayname, OAA.Status from Orion.APM.Application OAA


Finally, we add the node name.  We can do this with the linked table from earlier by using dot notation.  In other words, if I write “OAA.Node.”, I’m now allowed to pick any column from the Nodes table, including the name of the node (or “Caption”).  Now my query looks like this:

select OAA.Displayname, OAA.Status, OAA.Node.Caption from Orion.APM.Application OAA


And now, this is my output:


This is where things get interesting. Remember how I said that we can tie multiple products together?  The AppStack dashboard with Virtualization Manager and Storage Resource Monitor is extremely powerful, especially in terms of reporting. SWQL can help us with that.


If we keep going with our application example above, we can continue tying together information from other tables.  So far, we’ve linked the Nodes table, but let’s see which ESX host these belong to. From the Applications table, there isn’t a link to any of the VIM tables:


But it is related to the “Nodes” table, and if we look at the Nodes table:

Then we go to the Virtual Machines table, and from the Virtual Machines table…

… there’s the Hosts table!  So, let’s link that to our query using dot notation:

select OAA.Displayname, OAA.Status, OAA.Node.Caption, OAA.Node.VirtualMachine.Host.HostName from Orion.APM.Application OAA


Note: the host names that are NULL refer to machines that are not monitored via Virtualization Manager; they do not have a host.
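If you'd rather run a query like this programmatically than in SWQL Studio, the sketch below shows one way to assemble an aliased, dot-notation SWQL statement in Python. The `build_swql_select` helper is our own illustration, not part of any SolarWinds tooling, and the commented-out section assumes you have the freely available orionsdk package and a reachable Orion server (the hostname and credentials are placeholders).

```python
def build_swql_select(columns, table, alias):
    """Assemble a SWQL SELECT using a table alias and dot notation,
    mirroring the query developed above."""
    cols = ", ".join(f"{alias}.{c}" for c in columns)
    return f"SELECT {cols} FROM {table} {alias}"

query = build_swql_select(
    ["DisplayName", "Status", "Node.Caption",
     "Node.VirtualMachine.Host.HostName"],
    "Orion.APM.Application",
    "OAA",
)
print(query)
# → SELECT OAA.DisplayName, OAA.Status, OAA.Node.Caption,
#   OAA.Node.VirtualMachine.Host.HostName FROM Orion.APM.Application OAA

# To execute it against SWIS (assumes the orionsdk package and an
# Orion server you can reach; all names below are placeholders):
# from orionsdk import SwisClient
# swis = SwisClient("orion.example.com", "admin", "password")
# for row in swis.query(query)["results"]:
#     print(row)
```

Building the string this way keeps the alias and dot-notation rules from the walkthrough in one place, so adding another linked column is just one more list entry.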


That’s it for now. Later, we’ll learn some more tricks for formatting with SWQL, and then how to apply this to Orion®. Stay tuned for the next one!

We are happy to announce the release of SolarWinds® Traceroute NG, a standalone free tool that finds network paths and measures their performance.

The original traceroute is one of the world’s most popular network troubleshooting tools, but it works poorly in today’s networks. You can read about its shortfalls in this whitepaper.

SolarWinds® fixed these shortfalls with NetPath, a feature of NPM.  People love NetPath, but there are two problems.  First, NetPath takes a couple of minutes to find all possible paths in complex networks, much longer than a quick tool like traditional traceroute. Second, most people don’t own SolarWinds® NPM and so don’t have access to NetPath.

Traceroute is too important a tool to let languish.  That’s why we’ve taken what we’ve learned with NetPath and fixed traceroute.  We call it Traceroute NG.

Traceroute NG is a super-fast way to get accurate performance results for a network path, in a text format that’s easy to share.

Compared to traceroute, Traceroute NG is:

  • Super-fast
  • Rarely blocked by firewalls
  • More accurate, thanks to path control
  • Continuously updated with latency and loss
  • Able to detect path changes
  • TCP- or ICMP-based


You can download Traceroute NG here and launch the tool by double-clicking the traceng.exe.

You’ll be presented with a help screen and the application will wait for your input. Type the domain name to start a trace.


You can also launch the free tool from the Windows command prompt:

traceng www.google.com

Let’s look at some results.


Scenario 1: Endpoint is blocking TCP port

We all know that HTTP uses TCP port 80 by default. What would traceroute show you if someone blocked that port on a firewall or web server?

All good; it’s not the network. You know it’s not your issue. But what is the issue?  That’s where Traceroute NG will help:

Traceroute NG can mimic TCP application traffic, so its packets are treated the same way the application’s traffic is. In this case, it detected that TCP port 80 on the destination web server is closed. You know it’s not the network, but you can be more precise and tell your sysadmin to open this port on the web server.
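The distinction being drawn here can be reproduced with an ordinary TCP connect. The sketch below is our own illustration, not how Traceroute NG itself works (it crafts packets via NPCAP): a refused connection means the host answered with a reset, i.e., the port is closed, while a timeout suggests a firewall silently dropping the traffic. The hostname in the usage comment is a placeholder.

```python
import socket

def classify_tcp_port(host, port, timeout=2.0):
    """Classify a destination TCP port from an ordinary connect attempt:
    'open'     -> three-way handshake completed
    'closed'   -> host replied with RST (connection refused)
    'filtered' -> no reply before the timeout (likely firewalled)"""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "filtered"
    except OSError:
        return "unreachable"
    finally:
        s.close()

# Hypothetical usage: classify_tcp_port("www.example.com", 80)
```

The useful insight is the same one Traceroute NG surfaces: "closed" means the server is reachable but not listening, so the fix belongs to the sysadmin, not the network team.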


Scenario #2: Network path change

To illustrate this scenario, I have created a simple network using GNS3.


I also have a loopback adapter configured to point all IPv6 traffic to this lab:


I’d like to trace from my machine in Cloud1 to the PC (fc90::3). If the OSPF routing works, I should go through routers R1, R7, and R3. Traceroute confirms:


Traceroute NG as well:


What if I do maintenance on router R7? Will traceroute tell me when router R7 becomes unavailable and detect the new path? No. It runs once, and then you need to run it again. Manually.

With Traceroute NG, detecting a change is simple. You can tell Traceroute NG to warn you if the path changes and optionally log the output. An example command would be: -a warn -l -p 23 fc90::3

And this is the result:


So you know your router is down, and once you hit Enter, Traceroute NG will show you the new path. In the GNS3 lab, we expect the new path to go through R1, R5, R2, and R3. Traceroute NG confirms:


And the log file as well, showing you the original path and the new path:

In this use case, we have leveraged several features of Traceroute NG. First, it runs continuously. Second, it detects when a path is no longer available. Third, it can log results in a text format that’s easy to share.

Now, enough boring reading, it’s time to try it out! You can download Traceroute NG here: https://www.solarwinds.com/free-tools/traceroute-ng/


We’re super excited to share this tool with the world and hope you find it useful.  Let us know your thoughts!


February 6, 2018 is a historic day. 

After months of planning, collaboration, and late-night efforts, SolarWinds Backup is ready for primetime!  We are pleased to announce that SolarWinds Backup is officially launching and Generally Available today to our core market of IT professionals.  This launch is one more way we’re driving even more innovation in our Orion portfolio of products, and backup is a natural fit with our existing systems management capabilities.  You rely on SolarWinds for comprehensive monitoring of your servers and applications, and for remote control and administration of these assets; now, we can help you solve your backup and recovery challenges as well.


While most in-house IT departments have some type of backup solution, there is a lot of dissatisfaction with the options they currently use.  In November, we conducted a survey on THWACK, and heard from our customer base that:

  • Backups are too expensive, and managing them is too complex
  • Reliability of backups is an issue, requiring time-consuming manual checking
  • Managing and forecasting local storage requirements for backups is painful


You can read more on the full survey results here: Is Unnecessary Complexity Making Backups a Headache For You?


SolarWinds Backup changes the game by bringing a simple, powerful, and affordable alternative to the market.  It’s a modern, fully managed, cloud-first backup service, already chosen by thousands of managed service providers, now packaged for the enterprise and direct use by IT professionals.


It's tough to consume an entire product in a single blog post, so this inaugural posting will provide the highlights of our new hotness!  In future posts, we'll follow up with more "How-To" guidance on specific topics and get you rapidly backing up your IT environment in minutes.  If you are chomping at the bit, feel free to browse our new SolarWinds Backup forum on THWACK with the latest in release notes, documentation and step-by-step training videos.



There are two primary interfaces you'll find when you start your journey with SolarWinds Backup, and generally you should be able to use them without training or instruction.  As you move into more advanced topics, we will provide you with the resources you need.   On day 1, two parts are all you need to know to get started.

Part 1: The Console: http://backup.management

Deploy, monitor, report, manage users, create profiles, and remotely control all your backups from this unified, self-service console.


Part 2: Backup Manager (the agent)

The Backup Manager agent does all the heavy lifting on protected endpoints (i.e., devices, servers, instances). The key functions of the agent are to scan data, find block-level changes, deduplicate, compress, encrypt, and optimize data file size for speed, and much more, all behind the scenes.




TrueDelta technology

SolarWinds Backup includes a unique TrueDelta technology that tracks block-level changes between backups, so you only back up (or restore) what has changed – not the entire file. This keeps backup windows short, only transmits the minimal amount of changed data to be backed up over the network, and improves performance overall.
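The block-level comparison idea can be sketched in a few lines. This is purely an illustration of the concept, not SolarWinds' actual TrueDelta implementation; the 4 KB block size and SHA-256 fingerprinting here are assumptions for the example:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real product internals differ


def block_hashes(data, block_size=BLOCK_SIZE):
    """Split data into fixed-size blocks and fingerprint each one."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]


def changed_blocks(old, new, block_size=BLOCK_SIZE):
    """Return indices of blocks that differ between two file versions.

    Only these blocks would need to be transmitted and stored,
    rather than the entire file.
    """
    old_hashes = block_hashes(old, block_size)
    new_hashes = block_hashes(new, block_size)
    changed = []
    for i, h in enumerate(new_hashes):
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed.append(i)
    return changed


# A three-block file where only the middle block changes:
v1 = b"a" * 4096 + b"b" * 4096 + b"c" * 4096
v2 = b"a" * 4096 + b"X" * 4096 + b"c" * 4096
print(changed_blocks(v1, v2))  # -> [1]
```

With only block 1 changed, a delta-based backup moves 4 KB instead of 12 KB; at server scale, that difference is what keeps backup windows short.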


Direct-to-cloud backup

The SolarWinds Backup services were designed from the ground up for fast, efficient, remote backups. You can skip the hassles of configuring local and remote backups, storage provisioning, and storage capacity planning. Instead, your backups go safely to our global purpose-built private cloud, with backup windows typically measured in minutes, not hours. Manage your storage pool and capacity intuitively from the Console.


End-to-end security

Encryption is built into our backup process. Backup data is encrypted at the source, stays encrypted while in transit, and while at rest.


Single, unified management console

Protect physical, virtual, and cloud servers, including all major operating systems and hypervisors, with a single product. One unified web-based dashboard shows you backup status at a glance, and frees you to check systems, and even do restores, from any location – even from your mobile device. Need to back up workstations, laptops, or just certain documents? Not a problem; our solution covers these capabilities as well!


Recovery options

Whether you need to recover an entire server or VM, an application, or just a portion of a file, SolarWinds Backup handles it.

  • Recover at WAN speed or LAN speed, using the optional Local SpeedVault™
  • Physical-to-virtual and virtual-to-virtual recovery automates a full system restore to VMware vSphere® or Microsoft® Hyper-V®
  • Bare-metal recovery simplifies and reduces recovery time for Windows® servers, and can be used for migration to new hardware
  • Cloud recovery targets provide even more flexibility by allowing recovery to Microsoft® Azure®, or to any other virtual environment of your choice


Secure remote storage for your backup data, worldwide

SolarWinds Backup provides world-class storage for your backup data with around-the-clock security in our data centers located on four continents. For the Backup Services, our data centers’ certifications meet requirements for HIPAA compliance and similar legal and regulatory standards. The Backup Services are scalable and can grow with your business.



Pricing for SolarWinds Backup is straightforward and based on an annual subscription with tiers based on the number of operating system instances (servers) being protected. Each tier includes a block of cloud storage, so there are no extra charges or hidden costs.

SolarWinds Backup is the simple, powerful, affordable option.  The product's simplicity and ease of management translate to even greater savings, as personnel do not need extensive training or certification. There is no need to buy expensive local storage to support backups, or even to pay for a separate contract for cloud-provided storage.  It's all included.





File System / System State (Microsoft)

Windows Server® 2008/ 2008 R2/ 2012/ 2012 R2/ 2016 & Windows SBS 2011; Windows Vista/ 7/ 8.x/ 10

File System (GNU/Linux)

CentOS® 5/ 6/ 7, Debian® 5/6/ 7, OpenSUSE® 11/ 12

File System (Apple)

Mac® OS X® 10.9 Mavericks/ 10.10 Yosemite/ 10.11 El Capitan

Network Shares

Remotely protect network shares and Network Attached Storage (NAS) devices

Application Protection

Microsoft Exchange 2007/ 2010/ 2013/ 2016, MS SharePoint® 2007/ 2010/ 2013

Database Protection

Microsoft SQL Server® 2005/ 2008/ 2012/ 2014/ 2016, MySQL 5.0/ 5.1/ 5.5/ 5.6, Oracle® Database Standard Edition 11g for Windows

Open File Protection

Leverage Microsoft Volume Shadow Copy Service (VSS) for open-file and application-aware backups

Pre/ Post Backup Scripts

Back up third-party applications and databases through custom scripting

Backup Filters & Exclusions

Exclude specific extensions, files, paths, or volumes from File System and Network Share backups

Backup Scheduling

Automate protection with multiple recurring backup schedules





Search and Restore

Individual files across all recovery points from Backup Manager (excluding VSS plugin)

Self Service Restores

Use Virtual Drive technology to present historic backup sessions as a browseable file system

Application Restores

Full application- and database-level restores

Continuous Restore

Use the Recovery Console to automatically create / update standby images or remote recovery copies of selected data

Physical to Virtual (Source)

Windows Vista®/ 7/ 8.x/ 10, Windows Server 2008/ 2008 R2/ 2012/ 2012 R2/ 2016 & Windows SBS 2011

Physical to Virtual (Virtual Disk Target)

Create (.VHD/X or .VMDK) files for use in the virtual environment of your choice

Physical to Virtual (Hypervisor Target)

Microsoft Hyper-V Server 2008 R2/ 2012/ 2012 R2 (Hyper-V 2.0 and 3.0); VMware vSphere (ESXi) versions 4.1, 5.0, 5.1, 5.5, and 6.0

Physical to Virtual (Cloud Target)

Microsoft Azure is supported

Recovery Testing

Automated recovery with email confirmation and screenshots (VMware and Hyper-V)


Bare Metal Recovery (BMR)

Create bootable CD or USB media to recover systems without a reliable OS. Supports dissimilar hardware, dissimilar drives, single pass application recovery (VSS) and granular file / volume selections





Web-based Management Console

Single view to monitor and manage all resources from anywhere

User Defined Roles

Multiple roles with defined permissions

Reports / Alerts

Custom views, daily dashboards, consolidated backup reports, real-time alerts, and disaster recovery reporting


Audit Logs

User-accessible log of changes made on protected device

Remote Management & Control

Remotely launch the Backup Manager directly from the management console

Automated Deployment

Silent install, command-line options, and/or use your favorite software deployment tool.  A single command works for an unlimited number of installations.

Remote Commands

Run remote operating commands against a device or a group of devices

API / Command Line

Simple Object Access Protocol (SOAP)

Automatic Updates

Monthly scheduled updates with version control

File Versioning & Retention

Three (3) versions minimum; 90-day retention.

Data Archiving

Extend beyond the standard retention model and help ensure compliance with multiple data archiving policies

Password Protection

Password protection for backup data to restrict changes or limit functions to restore only

Industry Compliance

Helps you meet compliance requirements, such as HIPAA, SOX, PCI-DSS, and others, with regard to encryption, through SSL encryption of backup data both in motion and at rest

Encryption Key Length

AES 256-bit encryption 

Data Center Security

Backup data is stored in one of seven SSAE 16 SOC 1 Type II and ISO 27001 certified private data centers worldwide

Data Center Locations

USA, Canada, United Kingdom, Netherlands, Germany, Italy, Switzerland, Norway, France, Spain, Australia, and South Africa.




Don't see what you are looking for here?


Visit the SolarWinds Backup Forum

Check out the What We're Working On for SolarWinds Backup post for what our dedicated team of backup developers and code jockeys are already looking at. 

If you don't see everything you've been wishing for there, add it to the SolarWinds Backup Feature Requests.

Greetings! And welcome to the second in our series of primers on customising the Orion® Platform. Today’s post will focus on first installing and then navigating through SWQL Studio. This free utility is simple to use, but can drastically lower the barrier to creating and editing SWQL queries.

Help Resources

Before we go any further, it’s worth highlighting two key resources for using the SDK.

  • The GitHub® repository for the Orion SDK
  • The Orion SDK forum here on THWACK®


The GitHub site is the main resource, where you can download the installer, browse code samples, and review the schema and wiki for the SDK itself. It’s also where issues are tracked, so if you do find a bug you can flag it as an issue there.

The other main resource is the Orion SDK forum here on THWACK. SolarWinds does not provide pre- or post-sales support on any customisations, including code using the Orion SDK. However, that does not mean you are alone. You can often find code samples that may address your use case by simply searching the forum. And the site is also frequented by not just SolarWinds staff and THWACK MVPs, but other users who can also provide feedback and guidance on code that you may be working on.

Installing SWQL Studio

SWQL Studio is a Windows®-based utility, and is included with the Orion SDK. The installation is simple. Just navigate to the “releases” area of the GitHub site (https://github.com/solarwinds/OrionSDK/releases) and download and install “OrionSDK.msi” on your Windows workstation.

Once installed, connect it to your Orion server (of course, you will be working on your special dev server).


   SWQL Studio


Navigating through SWQL Studio

The connection itself is made to SWIS on the server (the same Information Service covered in the previous post). By connecting via SWIS, instead of to the SQL database itself:

  • Separate logins to the actual SQL database itself are not needed
  • Account limitations are still applied
  • Any changes to the database schema are abstracted from the user

Once connected, you will see the server you connected to, as well as the credentials you used, and if you expand this connection icon, you will see the various data sources represented. Note that the list of objects will vary depending on the modules installed and the versions of those modules.



To the right of the object list you will see an editor screen. From here, you can type in SWQL, and execute the query to see the results. Running the query is very simple:

  • You can click Query > Execute in the menu
  • Or simply hit F5 or Ctrl+E

In the below example, I typed out a simple query. We will talk more in the next post about how to write queries. For now, I just want to draw your attention to the fact that in the latest version of SWQL Studio, autocompletion is also possible.


A lot of the power of SWQL Studio, however, lies in the fact that the necessary data can be viewed graphically.

Let’s expand on the Orion item in the list:



Then scroll down to Orion.Nodes, where we will see the resource referenced in the hand-written SWQL statement I used previously. If I expand this resource, I will then see all of the properties of the Orion.Nodes object, two of which I referenced in the aforementioned SWQL statement (Caption and IP_Address).
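For reference, the kind of hand-written statement described above looks like this: a minimal query pulling just the two properties mentioned from Orion.Nodes.

```sql
SELECT Caption, IP_Address
FROM Orion.Nodes
```

Paste it into the editor window and hit F5 to see the name and IP address of every monitored node.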



A key difference between SQL and SWQL is that there is no equivalent to SELECT * in SWQL. However, just by right-clicking on an entity within SWQL Studio, you can choose "Generate Select Statement," which will populate the editor window with a query listing all of the fields explicitly.



Another useful aspect is the ability to identify the property type by its icon. To explain a bit further, let’s refer to Orion.Syslog as an example.




First up, we have the keys, which are indicated by the golden key icon. These are essentially the unique IDs used to reference each individual record in the data source.



The most common properties are the standard fields, indicated as blue rectangular prisms:




Whereas properties that are inherited are displayed as green cubes:




A very interesting aspect here is that related entities (depicted with the chain icon below) are also shown here. That means you can see which of these can be used to implicitly join data, without having to write an explicit join.
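To illustrate, related entities can be referenced directly in a query through these navigation properties. The sketch below assumes the NPM interfaces entity is present on your server; it pulls interface details alongside each node without any explicit JOIN clause:

```sql
-- Implicit join from Orion.Nodes to its related interface rows
-- via the Interfaces navigation property
SELECT n.Caption, n.Interfaces.Name, n.Interfaces.Status
FROM Orion.Nodes n
```

SWIS resolves the relationship for you, which is why browsing the chain-icon entries in SWQL Studio is such a fast way to discover what can be joined.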



And as a final point, verbs (represented as pink prisms) that can be used in scripts are listed here, as well as the parameters needed for each verb. We will cover this very cool feature in a future post.




Wrap Up

Now that we have taken a solid look at SWQL Studio, you should be in an excellent position to dive in and look at the data available within Orion, for use in reports and scripts. Not only does it allow you to easily create SWQL that you can use for various purposes, it also allows you to browse the entities within the database, including related objects. With the next post, we will actually work with SWQL queries, and some of the constructs that differentiate SWQL from SQL. And in follow-up posts, we will see how the verbs can be used to provide management capabilities from
