
Product Blog


We have reached the Release Candidate (RC) status for Network Configuration Manager (NCM) 7.4. RC is the last step before general availability and is a chance for existing customers to get the newest functionality before it is available to everyone else.


Update: NCM v7.4 RC2 is available on the customer portal. Please note that an NCM server host with SolarWinds NPM installed cannot install or upgrade to NCM 7.4 RC2 until NPM is upgraded to NPM 11.5.2 RC1. More details can be found in NCM 7.4 RC2 is Available on Customer Portal.


  • Device Template Wizard
    Use the web-based wizard to create or edit a device template. NCM stores all device templates in the Orion Platform database instead of the NCM server's local file system.
  • Cisco Vulnerability Reports
    Receive alerts when the latest NIST data indicates a vulnerability in a version of Cisco device software (IOS or non-IOS) running on deployed switches, routers, and security appliances.
  • Compliance Reports
    DISA STIG, NIST FISMA, and PCI policy reports, with all necessary rules and policies already set up, become available to run after installation.
  • Enhanced Change Approval
    Use a one-tier approval workflow to route device configuration changes submitted by any NCM user with the WebUploader role to a single NCM Administrator. Use two-tier approval for non-privileged users to require two NCM Administrators to approve a change submitted by any NCM user with the WebUploader role. Or use two-tier approval for all users to require two NCM Administrators to approve a change submitted by any NCM user.
  • Web-based Alerts and Reports
    The NCM desktop application has been fully eliminated: NCM now uses the Orion Platform alerts and reports systems, enhancing NCM's integration with other Orion Platform products.
  • Remediation of Policy Violation
    Automatically or manually remediate violations from within the Policy Report interface for all relevant blocks in a device configuration.
  • Configuration Change Auditing
    Syslog data forwarded from network devices as part of real-time change detection includes the name of the user associated with the change.
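As a rough illustration of that last item, here is how a username might be pulled out of a forwarded syslog config-change message. The message format below follows Cisco's common config-change notification style, but the regex and function are an assumption for illustration, not NCM's actual parser.

```python
# Hypothetical sketch of extracting the user behind a config change from a
# forwarded syslog line. Illustrative only; not NCM's real implementation.
import re

CONFIG_MSG = re.compile(r"Configured from console by (\w+)")

def changed_by(syslog_line: str):
    """Return the username embedded in a config-change syslog line, if any."""
    match = CONFIG_MSG.search(syslog_line)
    return match.group(1) if match else None
```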


Extend and troubleshoot device templates easily

Device templates have been moved to the database. You can modify the templates in the Web UI and no longer have to replicate them across your polling engines. Smooth thwack integration was a must!

In addition, we have integrated a wizard into NCM that helps you develop new templates or troubleshoot problems with the existing ones.




Identify and manage firmware vulnerabilities

If network security is a concern in your organization, you should definitely use this new capability of NCM -- run a nightly vulnerability assessment based on recent CVE data provided by the National Vulnerability Database (from NIST). NCM downloads and processes the CVE data in a SCAP-compatible way, notifies you of potential vulnerabilities, provides detailed information, and lets you take appropriate action. This security scan works even if your NCM server is not connected to the Internet -- you just have to download the data files manually. Complementary reports are provided out of the box.
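The core of such a scan is matching each device's software version against the affected-version data in CVE records. The sketch below uses a made-up, simplified record layout; the real NVD/SCAP feeds are far richer (CPE match expressions rather than literal version sets), so treat this as an assumption about the idea, not the format.

```python
# Hypothetical CVE records; the real NVD feed uses CPE match criteria.
cve_records = [
    {"id": "CVE-XXXX-0001", "product": "ios", "affected_versions": {"15.2(4)M1", "15.2(4)M2"}},
    {"id": "CVE-XXXX-0002", "product": "asa", "affected_versions": {"9.1(1)"}},
]

def vulnerabilities_for(product, version):
    """Return CVE IDs whose affected-version set contains this version."""
    return [r["id"] for r in cve_records
            if r["product"] == product and version in r["affected_versions"]]
```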

Note: Only Cisco IOS and ASA devices are supported at this time.





Run your favourite regulatory compliance checks out of the box

Tired of importing the obligatory reports from thwack? No longer a problem! NCM now ships the latest DISA STIG, NIST FISMA, and PCI reports out of the box. The updates also include broader vendor coverage and more detailed checks.


Note: If you upgrade from NCM v7.2.x to the v7.4 Release Candidate, these new reports will be missing. The workaround is to upgrade from 7.2.x to 7.3.2 first and then to 7.4 RC. The problem is related to upgrades from NCM 7.2.x only; upgrades from NCM 7.3.x and clean installations are not affected. We apologize for the inconvenience and are working on a solution for the final NCM 7.4 release.




Manage your NCM reports and alerts on the web

Were you looking forward to the day when you could customize your NCM reports in the web and combine them with information from other modules? The day has come! Not only have we migrated all the existing inventory reports to the Orion reporting engine, but we have also added a decent number of new ones. The same applies to alerts.





Automatic Compliance Enforcement

Maintaining configuration compliance has never been so easy. Just configure the remediation script to be executed automatically! Works for interfaces, too.




And that's not all -- check Network Configuration Manager v7.4 Beta2 is Available! for other improvements that cannot fit in this post.


RC builds are made available to existing customers prior to the formal release. These are used to get customer feedback in production environments and are fully supported. If you have any questions, I encourage you to leverage the NCM forum.


You will find the latest version on your customer portal in the Release Candidate section.

It is my pleasure to announce that Web Help Desk 12.3 has reached RC status and is available for download from the SolarWinds Customer Portal for customers on Active Maintenance through June 1st, 2015.


This release includes support for Parent/Child ticketing, allowing you to link multiple tickets to one parent ticket. Using this feature, you can, for example:


  • Model repetitive business processes, such as on-boarding new employees or scheduling maintenance tasks
  • Track your IT projects
  • Group service request tickets together for troubleshooting or create ad-hoc child tickets to fulfil requests


It also addresses various known feature requests.




The Parent/Child relationship logic incorporated within SolarWinds Web Help Desk is modeled after the concept of Problem/Incident management. Problem/Incident management addresses unplanned interruptions or failures (Incidents) and identifies the root cause (Problem) of one or more incidents. This process is based on ITIL best practices, but it has its limitations. Parent/Child relationships are designed to provide general links between tickets and serve many purposes. There are a few other differences, which I highlight later in this article. Fundamentally, to link a problem with an incident, you create a relationship between both entities.


Manually Linking Tickets Together

Using Parent/Child relationships built into SolarWinds Web Help Desk, you can manually link service request tickets together by clicking the Requests tab on the right side of the ticket screen, as shown below.




Here, you can link any existing ticket as a parent to your working ticket. Simply search for the appropriate tickets using the Search Requests function and click Link next to the targeted ticket.

Once you create a relationship between both tickets, click Save. After you save your child ticket, you can:


  • View the linked parent
  • View the parent ticket from the child ticket
  • Send notes to linked child tickets
  • Send notes to the linked parent ticket
  • Add attachments to the notes and propagate to parent and child tickets


Viewing a Linked Parent

Once you link your ticket to the parent ticket, the Linked Parent ticket tab appears within your working ticket, as shown below.




In this embedded tab window, you can view basic ticket information such as the ticket number, status, or subject combined with the request details. You can also unlink the ticket by clicking the Trash icon on the right side of the window. The ticket screen also includes an embedded History tab. SolarWinds Web Help Desk records all ticket actions related to the Parent/Child ticket relationship, providing you with a complete record for auditing or other IT business processes.


Viewing a Parent Ticket from a Child Ticket

When you click on the ticket number or request details of the linked parent ticket, the parent ticket appears on your screen. In the parent ticket is an embedded Linked Child Tickets tab that lists the linked child tickets and ticket summary information, such as the ticket number, status, and request details. You can freely move between the parent and child tickets as needed by clicking each ticket.


You can add a parent ticket to another parent ticket as required, but every ticket can have only one parent ticket. The Parent/Child logic built into SolarWinds Help Desk incorporates a tree structure that will not allow you to create endless loops between tickets.
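The single-parent, no-loop rule can be sketched as follows: before linking, walk the would-be parent's ancestor chain and refuse the link if it ever reaches the child. This is a hypothetical model of the behavior described above, not WHD's actual implementation.

```python
# Hypothetical model of single-parent linking with cycle rejection.
parents = {}  # ticket id -> parent ticket id

def link(child, parent):
    """Set `parent` as the only parent of `child`, rejecting loops."""
    node = parent
    while node is not None:
        if node == child:
            raise ValueError("linking would create a loop")
        node = parents.get(node)
    parents[child] = parent  # replaces any previous parent: one parent max
```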


Sending Notes to Linked Child Tickets

When you are in the parent ticket, you can send a note down to all linked child tickets. Simply create a note in the ticket and select the Show in Linked Child Tickets check box, as shown below.




Once you save the ticket, the note is visible to all linked child tickets.


Sending Notes to the Linked Parent Ticket

You can also send a note up to the parent ticket. When you are working in a linked child ticket, simply click the Show in Linked Parent Ticket check box. Your note is copied to the parent ticket, as shown above.


Adding Attachments to Notes

When you send notes to the child or parent ticket, you can also include an attachment. If you need to edit the note, open the original ticket of the note, edit the note, and then save it. The updated note, including the attachment, is immediately available to the parent ticket or child tickets.


There is no limitation on ticket status for any ticket within the Parent/Child structure. You can open, close, or modify any linked ticket.


Automatically Linking Tickets Together

The Parent/Child ticketing structure built into SolarWinds Web Help Desk can help you automate repeated tasks, such as the New Hire process used to on-board a new employee. In this scenario, you can create a standardized set of tickets that are completed by various people within your organisation.


Some process tasks depend on others and need to be tracked separately. Use an action rule to trigger your New Hire process: simply define the criteria — such as the request type or a keyword that appears in the subject — to run a task. Your defined task creates a set of tickets assigned to the appropriate department or individuals and links them to the parent ticket that initiated the action rule.




When you create a task for your automated process, select the Shared check box so you can use this task in an action rule. For every step of your process, define a task element.


Each task element creates one ticket. In the Task Elements tab window, you can decide if the newly created ticket should be linked to the parent ticket, as shown below.




By linking tickets together, you can:


  • Pass information from parent tickets to child tickets. This process allows you to distribute selective or sensitive information to all child tickets. Typically, you would create a form using custom fields and pass your information through these fields as needed.
  • Combine information between ticket types. You can combine information from a parent ticket with new information in a child ticket. For example, the subject in a child ticket could include a description with the parent subject description appended to it, separated by a space.
  • Copy ticket attachments between ticket types. You can copy ticket attachments (but not note attachments) from parent tickets to child tickets. However, be aware that this process will duplicate the attachments in your database and consume additional storage space.
  • Trigger succeeding task elements when a targeted element is completed. Using the Generate Next Element logic, you can define when the next ticket is created based on the previous ticket element status. For example, you could create tasks that generate service request tickets to create a new employee email address and user account. When those tasks are completed, you can configure these tasks to trigger another step in the new hire process, as shown below. Alternatively, you can perform the same procedure by creating several tasks and action rules that model more complex dependencies and workflows.
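The Generate Next Element behavior in that last item can be sketched as a generator that hands out one ticket at a time and only advances when the caller has closed the previous one. This is purely illustrative; WHD's internal model is not public.

```python
# Hypothetical sketch of "Generate Next Element" chaining: the next child
# ticket is created only after the previous element's ticket is closed.

def run_task(elements):
    """Yield one ticket per task element, advancing only when each is closed."""
    for element in elements:
        ticket = {"subject": element, "status": "Open"}
        yield ticket
        if ticket["status"] != "Closed":
            raise RuntimeError(f"element {element!r} not completed")
```

Each `next()` call produces the ticket for the next element; resuming the generator before marking the current ticket `"Closed"` raises an error, mirroring the "trigger when completed" rule.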



Now that you have seen how Parent/Child relationships can benefit your organization, go to our customer portal and download the new version to get started today.


P.S. One more thing: you often need to reference a ticket from a Note, the Request Details field, or FAQ fields. You can do it now by simply typing "ticket <number>" or "case <number>" (it also works with keywords like request, incident, or problem). WHD will automatically make it a link.
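A pattern like that can be matched with a short regular expression. The sketch below is an assumption about the mechanism (and the link URL is made up), not WHD's actual code.

```python
# Hypothetical ticket-reference linker; keyword list mirrors the post above.
import re

TICKET_REF = re.compile(r"\b(ticket|case|request|incident|problem)\s+(\d+)\b",
                        re.IGNORECASE)

def linkify(text):
    """Wrap ticket references in a hypothetical <a> tag."""
    return TICKET_REF.sub(
        lambda m: f'<a href="/tickets/{m.group(2)}">{m.group(0)}</a>', text)
```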



Following the recent UDT 3.2.2 service release, in which we added web alerting and brought back the ability to manage nodes without a re-discovery, the development team is already working on new features for the next release. Below is a highlight of some of the items in the pipeline.


Improved Juniper support

Web Reporting

Whitelisting improvements

Err-disabled port status in UI

Cross-product Integration

  • NTA endpoint details displayed in UDT and vice versa


Disclaimer: Comments given in this forum should not be interpreted as a commitment that SolarWinds will deliver any specific feature in any particular time frame. All discussions of future plans or product roadmaps are based on the product team's intentions, but those plans can change at any time.

SolarWinds Database Performance Analyzer (DPA) has always been the way to quickly get to the root cause of application performance problems. Now with DPA 9.1, we have extended the same great functionality to databases hosted in the Amazon cloud. From the same screens you use today, you can monitor SQL Server and Oracle RDS databases just like any other database. Even better, you can monitor EC2, RDS, and on-premises databases from the same installation of DPA.


Now you can easily provision a DPA instance from the Amazon Marketplace using the DPA AMI. You won't have to wait for a box to be provisioned; you are the master of your DPA install. Once provisioned, it takes just a couple of clicks and you are collecting data through DPA's agentless architecture.


Often, I have heard that the biggest fear when moving a database into the cloud is that you can’t see what is going on outside the database (server, storage, and infrastructure). Although you can’t monitor the servers that the database is on, you can use DPA to troubleshoot if a performance issue is likely related to a resource, such as Storage, or if queries need to be tuned.


DPA is a tool that pulls back the curtains, at least enough to pinpoint or even eliminate a particular area. One of our customers' favorite features when monitoring a database in the cloud is DPA's 'Storage I/O' feature, introduced in DPA 9.0. In the case of storage, you can see whether there is high latency or prove that the work is being done in memory. DPA can give you the confidence to move forward with troubleshooting performance issues. Without DPA, RDS databases still have a bit of a black-box feel to them.


DPA 9.1 can help ease your mind as you leverage the Amazon Cloud for your databases.


To see all the features of Database Performance Analyzer 9.1, see Database Performance Analyzer 9.1 Release Candidate is available now!

Coming fresh off the Virtualization Manager (VMAN) 6.2 release, in which we added management action functionality (power management and snapshot management) and AppStack support (to name just a few features), the VMAN team is already working on the next set of features. I am excited about what we have planned for VMAN as we transform it into a one-stop monitoring and virtual management tool. Here are the highlights of what we are currently working on:


  • New Migration management actions
    • vMotion/Live Migration - Provide the ability to migrate a VM to a different host within VMAN
    • Storage vMotion/Storage Live Migration - Provide the ability to migrate VM disks to a different datastore or shared cluster volume from within VMAN.
  • Sprawl Management Actions
    • Add/Remove CPU - Grow or shrink the number of virtual machine CPUs from within VMAN
    • Add/Remove RAM - Grow or shrink the amount of virtual machine RAM from within VMAN
    • Delete VMs - Remove/delete unnecessary virtual machines to reclaim resources from within VMAN
    • Delete Orphaned VMDKs - Delete orphaned VMDKs from the Virtualization Sprawl resource “Storage being used by Orphaned VMDKs” so that you can reclaim datastore storage from the alert.
  • The ability to act on virtual sprawl alerts from within the VMAN Sprawl page.
    • Over-allocated & Under-allocated resources - Add/Remove CPU & RAM
    • VMs Powered Off for More than 30 Days - Delete VMs
    • VMs Idle for the Last Week - Shutdown VMs
    • VMs that might benefit from decreasing vCPUs - Decrease vCPU
    • Top 10 VMs by Snapshot Disk Usage - Delete Snapshots
    • Orphaned VMDKs (New alert) - Delete Orphaned VMDK
  • Alert Remediation - Provide the ability to configure an alert to trigger a management action based on a threshold.
  • Red Hat Enterprise Virtualization support
  • Citrix XenServer Support
  • Recommendations - Recommended actions to take to ensure performance and optimal capacity, avoid potential issues, and improve uptime.
  • vCenter and Hyper-V Events
  • Improved scalability and performance of polling


Disclaimer: Comments given in this forum should not be interpreted as a commitment that SolarWinds will deliver any specific feature in any particular time frame. All discussions of future plans or product roadmaps are based on the product team's intentions, but those plans can change at any time.


If you don't see what you are looking for here, you can always add your idea(s) and vote on features in our Virtualization Manager Feature Requests forum.

The Patch Manager team is pleased to announce general availability of Patch Manager version 2.1. All customers under active maintenance can find the download in the SolarWinds Customer Portal, and anyone interested in Patch Manager can download from solarwinds.com.


We talked about what's new in detail back in the beta days (Patch Manager 2.1 Beta 1), but let me quickly refresh your memory.


Automated Third Party Patches


We've added the ability to automatically publish and patch third-party updates, similar to the way WSUS/SCCM handle Windows updates. With this feature, you can automatically download, publish, and patch third-party products from our 3PUP catalog (see Table of third party patches - updated 4/8/2015 for the latest third party patches).


Important note: some providers don't allow us to automatically download and publish their updates without a click-through EULA acceptance. For now, these providers' updates (including Oracle and Adobe patches) won't be supported through this feature. Hopefully we'll find a good solution in the near future - we're working directly with the third-party vendors to find some options.


New Reports


Thanks to thwack feedback and discussions, we've pulled in a lot of common custom report requests and now have them included out of the box! These reports should make it easier to identify problem systems and report on updates.


Included are:

  • Custom Hardware Report
  • Installed Programs and Features Basic
  • Approved Updates Status Count by WSUS Server and Update Source
  • Computer Update Status - Locally Published Updates
  • Computer Update Status with Aggregate Counts of Install State for Approved Updates
  • Computer Update Status Counts by Classification for Approved - Not Installed
  • Computer Update Status - Approved Updates with ID and Revision


New Computer Group Scoping Options


A common request is to make it easier to find and manage computers across subnets, Active Directory OUs, or AD sites. We've added several new grouping options:

  • IP subnet or range
  • Active Directory Organizational Unit (OU)
  • Active Directory Site


...and more!


In addition to the standard bugfixes and improvements, we've also:

  • Made it easier to gather logs for troubleshooting with the support team
  • Added support for Windows 2012 R2 and Windows 8.1 as options when selecting computer properties


As always, come over to the Patch Manager space here on thwack if you've got any questions. Happy patching!

Patch Manager uses Inventory tasks to pull information from your Windows Server Update Services (WSUS) servers and managed computers so you can view and report on that information from the Patch Manager Console. The WSUS Inventory task provides detailed information about WSUS server configuration, WSUS server statistics, basic computer inventory attributes, and so on. Data from the WSUS Inventory task populates reports in two categories:

  • Windows Server Update Services
  • Windows Server Update Services Analytics


In Patch Manager version 2.1, the following reports that are available in earlier versions of Patch Manager have become obsolete:


In Windows Server Update Services:

  • Count of security updates released in 2008
  • Critical Updates released in (2006-2008)
  • Security (bulletins/Updates) released in 2008
  • Updates released in (2006-2009)
  • Windows Server (2000/2003) applicable critical updates released in (2007/2008)
  • Windows (XP/Vista) applicable updates released in 2008
  • Updates superseded by Windows XP SP3


In Windows Server Update Services Analytics:

  • Computer details for MS08-067 or MS08-078
  • Computers with failed MS08-067 or MS08-078 installations
  • Computers in a selected group with approved update status details for (June-August 2008)
  • Computers where MS08-067 MS08-078 MS09-001 (failed to install/is installed/is NOT installed)
  • Computers where MS08-067 or MS08-078 is (not) installed
  • Effective approvals by computer group for MS08-067 or MS08-078
  • Effective approvals for MS08-067 MS08-78 MS09-001


However, you have the option to download these reports and import them in Patch Manager.


To download obsolete reports and attach them to the Patch Manager Console:

  1. Close all sessions of Patch Manager.
  2. Download the attached obsolete reports to your local drive.
  3. Open the downloaded reports.
  4. Copy the reports that you want to activate.
  5. Go to the Patch Manager installation folder. For example, \\Program Files\SolarWinds\Patch Manager\Server\Templates\cmdb
  6. Paste the selected reports in the Patch Manager installation folder.
  7. Open Patch Manager Console.
  8. Expand Administration and Reporting > Reporting > WSUS Reports.
  9. Check if the copied reports are now available in the appropriate reports pane.


Once these reports are visible in the Patch Manager Console, you can view, customize, and create reports for your WSUS servers and managed clients.

Hopefully you saw my Sneak Peek blog post a couple of weeks back on the Active Directory integration coming with 11.2 and are as excited about it as I am. If you haven't read it, I highly recommend checking it out.


I'm also excited that the Release Candidate (barring any last-minute issues) should be available Monday. The Release Candidate is a fully tested and supported release, and you can upgrade to the RC from your previous version of DameWare. Should you need assistance during the installation, feel free to contact SolarWinds support.

If you are interested in the release candidate, either send me a direct message or sign up here.

Looking forward to your feedback!




As always, here comes a list of items we are working on for the future of VNQM. We are listening, and your comments will be more than welcome.


  1. Support for SIP trunk utilization and SIP protocol connectivity troubleshooting. Support for SIP and H.323 trunk utilization monitoring, with a combined gateway/trunk utilization view and alerting.
  2. End-to-end visualization of the network map between two phones, including the connected ports and switches on a call route.
  3. Updated visuals - more attractive charts (with zoom-in), web-based reports for VNQM, and report export from the VoIP search page.
  4. More out-of-the-box reports with capacity trending, plus advanced VNQM reports.
  5. Support for MS SQL 2014.
  6. Performance improvements and a longer CDR data retention period (more than 1 month of data in the DB).
  7. Support for Cisco IOS 15.x and ISR for IP SLA; support for IOS 15.3 on ASR.
  8. Monitor PRI utilization by call count instead of % utilization.


PLEASE NOTE:  We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

I would like to share a list of items we are working on for the future of Network Traffic Analyzer. We are listening, and your comments will be more than welcome.


  1. Improvements in unknown application detection in NetFlow - using Cisco NBAR2 (1,000 apps) to understand what's behind port 80, etc. (Support for Cisco NBAR)
  2. Network security protection using Network Behavior Analysis (using flows, not agents). Detect DDoS attacks, malware communication, or any other suspicious network communication without endpoint agent deployment.
    1. IP address reputation - get a warning if an application/endpoint communicates with a potentially dangerous site; we will take care of updates and real-time IP address evaluation.
    2. SYN flood attack detection, DoS detection, and unexpected high-volume TCP/UDP detection; find the hostname and switch port/SSID information of the source of the traffic.
    3. Port scanning detection
  3. Improved network bandwidth utilization troubleshooting - instead of showing applications from the entire network, we want to point you to the most utilized interfaces (and, even better, network links) and show you the top applications and top talkers responsible for that traffic.
  4. UDT integration: NetFlow and User Device Tracker
  5. Topology information within NTA - understand "what and where". See your WAN links or VPN connections and underlying border interfaces, with all the traffic (applications) and top talkers on a single page.


PLEASE NOTE:  We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

Hello, I'm Brian Flynn, and I'm the SolarWinds Product Manager for Storage Resource Monitor (SRM). I'm excited to announce that we are gearing up for a beta release of SRM 6.1, and I'm inviting customers with Storage Manager (STM) or Storage Resource Monitor (SRM) on active maintenance to test our beta release on new arrays. If you are not an STM or SRM customer but are still interested in participating, go ahead and sign up as well, and I'll see if there is a way I can at least get you on a feedback call to interact with a SolarWinds-local install. Click here for sneak peek screenshots.



In This Release...

We are primarily working on device support:

  • Dell Compellent
  • EMC Symmetrix VMAX
  • Nimble
  • HP P2xxx/MSA
  • Hitachi Data Systems (HDS)
  • HP 3PAR StoreServ
  • EMC Isilon


DISCLAIMER : What we are working on in SRM 6.1 Beta is in no way a promise of what we will deliver in SRM 6.1.


How can I accelerate support for my devices?

The answer is simple:

  1. Check for an existing feature request for your array.
    • If it's there already, vote it up.
    • If it's not there already, create it.
  2. Regardless, you can greatly assist our velocity in delivering more device support by providing recordings of the metadata from your arrays using our Storage Responder Tool (SRT).


SRT is a very simple tool you can use to collect performance data from your arrays. Simply point SRT at your SMI-S Provider, and it will take a snapshot of performance, capacity, and configuration data that you can send us for our engineering team to use in development and testing. The data is safe to share; it does not contain any data from the array's disks, only the metadata we need for monitoring. You can check that for yourself because the data is stored in plaintext XML, so you can inspect it before sending it to us. SRT comes with a PDF document that will guide you through using it, but as I've said, don't hesitate to contact me with issues.
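Since SRT recordings are plaintext XML, inspecting one before sending it is straightforward. The sketch below lists the distinct element names in a recording so you can verify nothing sensitive is present; the file name is an example, not a fixed SRT output name.

```python
# Sketch of inspecting a plaintext XML recording before sharing it.
import xml.etree.ElementTree as ET

def element_names(path):
    """Return the distinct XML tag names found in a recording file."""
    tree = ET.parse(path)
    return {elem.tag for elem in tree.iter()}
```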

Licensing Changes

SRM 6.0 was just released in February 2015 and with that release came some slight licensing changes.  Most notably, if you choose to run both the SRM Orion module and the Profiler module, you will find that you can monitor an array with both modules using the same license.  You do not need a separate license for SRM Orion module and the SRM Profiler module.  This quick video should help clarify that with a couple of use cases.


With SRM 6.0 being so new, we haven't yet written about all of its features. Previous blog posts like Storage Dreaming - The Next Chapter for Storage Monitoring with SolarWinds and Dreams Come True - Storage on Orion is now in Release Candidate tell the outstanding AppStack story, but there's more to SRM than just AppStack.

Here's an introduction to some of the features in the currently supported version of SRM:

  • Performance Dashboard
  • Latency Histogram for LUNs
  • IOPS Performance Per LUN

Performance Dashboard


Who is this for?

  • Anyone! 
  • Don't you get tired of creating performance summaries for your peers?
  • Wouldn't you like more time to work on pressing matters?
  • Wouldn't you like your help desk and support peers to take care of initial triage?
  • Are you a help desk or support professional tired of being the middle man between users and storage admins?

Then the SRM Performance Dashboard is for you!  Take a look at the example to the left. 

Here's what I can see at a glance:

  • We have performance problems on our CX3-10c array.
  • It took me no time to drill through from array through storage pools to find LUNs experiencing performance problems.
  • The Storage Objects by Performance Risk resource tells me the CX3-10c array has LUNs with latency problems.
  • Let's take a look at that array by clicking on the CX3-10c array.

Latency Histogram for LUNs


Who is this for?

  • Storage Admins
  • Users of Storage LUNs
  • Is anyone left?  ;-)

Allow me to demonstrate:

  • Open the Block Storage sub-view
  • Review LUN performance by histogram.
  • Click the 24h zoom.
  • Hover over the histogram bars for more info.
  • I see that 7 LUNs have an average latency between 101 and 500 milliseconds in the past 24 hours.
  • That's clearly a significant concern for a Storage Admin and the users of Storage LUNs.

IOPS Performance Per LUN


Who is this for?

  • Are you frequently in the center of blame games between application owners using LUNs in those pools?
  • Are you frequently fetching LUN performance metrics for non-storage admins?

Then the IOPS Performance Per LUN resource is for you!  Take a look at the example to the left. Here's what I can see at a glance:

  • 12 LUNs in this storage pool.
  • 3 LUNs have significantly higher IOPS than the rest.
  • 2 LUNs have experienced the same spike in IOPS. 
  • This makes me wonder why.  Are they both used by the same application?  See AppStack!
  • 1 LUN has typically had very low IOPS but has spiked up just after 9 AM.
  • 1 of the LUNs that typically has higher IOPS experienced a parallel spike.  Again, I say: see AppStack!

See AppStack



Keep watching for more blog posts outlining new SRM features and don't forget to sign up for the beta!


SRM beta button.png

I told you I was saving the best for last and here it is!  We're standing database performance analysis on its ear by presenting it from the perspective of the applications using your databases, otherwise referred to as Database Application Performance Monitoring & End User Experience.  Many people have database instances shared by several applications, which turns troubleshooting performance into a complicated nightmare.  This release of Database Performance Analyzer (DPA) has features to empower the application owner and liberate DBAs.  I hope you like what I'm about to show you and invite you to consider joining the DPA 9.2 Beta, which is open to customers with both DPA and Server & Application Monitor (SAM) currently on active maintenance.  To try it out, please fill out this short survey.

Oh, and did I mention you could WIN A $50 AMAZON GIFT CARD?



For this 3rd post in my 3-part series, I'm going to tell you about:

  • Part 3 - Database Application Performance Monitoring & End User Experience!
    • See the status of applications querying your databases
    • Application Perspective of End User Experience
    • A new perspective on Blocking

Previous Posts:

NOTICE : This is BETA, so there is no promise that what you see here will be delivered as is.



Applications Using My Databases : After you've mapped applications in SAM to database instances in DPA, you will see a resource on both the Summary View and Instance View that enumerates the applications that depend on your databases.  This can be a handy troubleshooting tool when you suspect a database problem may impact applications.  You'll be able to use this as a quick glance at their status as well as a means to dive into them for a closer look.  When you do, you'll find a new DPA resource was added that helps you understand the database's contribution to Application End User Experience.


The Benefit: Easily keep track of application to database dependencies.


Application End User Experience : Databases don't exist for the sake of giving DBAs something to do.  They exist to serve applications, which are typically created and maintained by someone other than the DBA.  Unfortunately, those somebodies often don't know databases like a DBA does.  Another unfortunate reality is that DBAs are typically outnumbered by the applications they support and the projects developing new applications.  To help with this, we've created a resource you can add to an Application View for applications that use databases.  This resource provides a contextual glimpse into the performance of queries originating from the node your application resides on.  We're not just showing you total resource loads; we're showing you real query execution times for queries from your application server.  It's a beautiful thing, right? Explaining database behavior in multi-tenant environments takes more than a minute, but everyone understands waiting and prefers to minimize how much they wait.  And we're showing them how long THEY are waiting.  In other words, we've filtered out what they would perceive as noise.



The Benefits:

  • Empower application development and support staff.
  • Liberate the DBAs from tedious research tasks.
  • Eliminate unproductive blame games!


query-blocker-lg.jpg

Blocking : Many people I've spoken to have a lot of interest in identifying where blocking occurs.  A quick overview for the non-DBA: blocking is not inherently bad, but it can be relatively bad.  Blocking is by-design behavior that results from locks, which are used to preserve transactional consistency.  In fact, up until recently, locking and blocking were how all of the big-name relational databases maintained transactional consistency.  So let's say that we don't typically care about very brief blocking; we only care when it becomes a significant factor in overall query performance.  To help with this, we've created a resource that helps you understand the situation from the perspective of both the blockers and the blockees.  Sometimes they're one and the same, e.g. the same update query being executed by 2 different sessions at the same time.  There are also different blocking scenarios: you can have one session blocking many sessions, or you can have a cascading tree of sessions blocking other sessions that block other sessions, and so on.  In the first scenario, you'll see blocking time and blocked time being equal.  In the second scenario, you'll see more time blocked than time blocking.  This resource also has a bar graph at the bottom to show you when there has been blocking.

The Benefit: This resource provides a visual that helps the user see how many blockers are impacting how many blockees (waiting queries).
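To see why the two blocking scenarios report different numbers, here's a toy Python aggregation over hypothetical (blocker, blockee, seconds) tuples. The session IDs and durations are invented, not DPA output; the point is only the arithmetic.

```python
from collections import defaultdict

def blocking_summary(edges):
    """Given (blocker, blockee, seconds) tuples, return
    (time charged to root blockers, total time spent blocked).
    A 'root' blocker blocks others without being blocked itself."""
    blocking = defaultdict(float)
    blocked = defaultdict(float)
    for blocker, blockee, seconds in edges:
        blocking[blocker] += seconds
        blocked[blockee] += seconds
    root_blocking = sum(t for s, t in blocking.items() if s not in blocked)
    total_blocked = sum(blocked.values())
    return root_blocking, total_blocked

# Scenario 1: one session (51) blocks three others for 10s each.
one_to_many = [(51, 60, 10), (51, 61, 10), (51, 62, 10)]

# Scenario 2: a cascading chain 51 -> 60 -> 61.
cascade = [(51, 60, 10), (60, 61, 10)]

print(blocking_summary(one_to_many))  # blocking time equals blocked time
print(blocking_summary(cascade))      # more time blocked than blocking
```

In the one-to-many case the root blocker is charged 30 seconds and 30 seconds are spent blocked; in the cascade, only 10 seconds are charged to the root blocker while 20 seconds are spent blocked, which is the asymmetry the resource surfaces.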



$50 Amazon Gift Cards Details
  • Completing requirements earns you opportunities to win 1 of 5 Amazon Gift Cards!
  • 1 opportunity to win per milestone completed.  Complete them all to maximize your chances of winning.
  • Share via screen shots or videos e.g. camera phone.
  • Send to brian.flynn@solarwinds.com.
  • Milestones
    1. Share your integration experience.
      • Show that you successfully connected SAM to DPA.  The screen should show that Orion has successfully tested a connection or is connected.
      • Show us your database instance mapping screen.  The screen should show some database instances are mapped to Orion nodes.
      • Show us your application mapping screen.  The screen should show that you have discovered applications querying your databases.
    2. Show us your Summary View - Click on the Databases tab.  That is the Summary View.
    3. Show us an Instance View - From the Summary View, click on a database instance then click the DB Performance sub-view.
    4. Show us an Application View - From the Summary View or Instance View, find the Applications Using My Databases resource and click into one of those applications.

User Device Tracker (UDT) 3.2.1 Release Candidate is now available in your SolarWinds Customer Portal.


UDT 3.2.1 brings back the ability to monitor new switches and routers without the need to re-run discovery. You can now simply go to the Port Management page =>


... and choose from a new dropdown to see devices not monitored with UDT (e.g. anything you’ve just added to NCM, NPM etc.).


Select the desired nodes and click Monitor with UDT, and all of their ports will be monitored within one polling interval (the default is 30 minutes).


The Release Candidate is a fully tested and supported release and you can upgrade to the RC from your previous version of UDT.

Should you need assistance during the installation, feel free to contact SolarWinds support, or use the dedicated RC Thwack space.

Looking forward to your feedback!

UDT team

Coming off the release in December, in which we integrated DameWare into Web Help Desk, the team rolled straight into working on formal Active Directory integration as discussed here. With the product today, you can export a list of users from Active Directory (AD) and manually do a one-time import of those accounts into DameWare.  While this is helpful, as an Administrator of the product you still have to manage user passwords separately.


Based on feedback from our customers, we are enhancing this integration and I wanted to give everyone a sneak peek into how things are progressing now that we have been in beta for a couple weeks.


First, when you log in to the Central Server, you will notice the top ribbon menu is a bit different: we added a new button and tweaked an existing one, as highlighted below.

New Nav.png

Let’s first walk through defining a new connection to our Active Directory server.  Clicking the AD Import button launches a wizard in which you can select whether this sync/import happens just once or on a regular schedule. Because we are leveraging Active Directory Groups, if you select synchronized, then after the initial synchronization we will check on the back end whether new users have been added to the group since we last synchronized with Active Directory, and automatically import those accounts into DameWare.  I will provide more detail on how this works later in this post.
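Conceptually, each scheduled check boils down to a set difference between the group's membership at the last sync and its membership now. Here's a minimal, stdlib-only sketch of that idea; the account names are made up, and this is not DameWare's actual code.

```python
def plan_sync(previous_members, current_members):
    """Compare AD group membership between syncs and return
    (accounts to import, accounts to flag for review)."""
    prev = set(previous_members)
    curr = set(current_members)
    to_import = sorted(curr - prev)   # joined the group since last sync
    to_review = sorted(prev - curr)   # left the group since last sync
    return to_import, to_review

# Hypothetical membership snapshots in domain\username format.
prev = ["lab.tex\\alice", "lab.tex\\bob"]
curr = ["lab.tex\\alice", "lab.tex\\bob", "lab.tex\\carol"]
print(plan_sync(prev, curr))  # carol is new, nobody left
```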


Next we specify the connection details for the Active Directory Domain. Nothing out of the ordinary here to review and discuss, with the exception that you can use the local domain, or any other domain, for authentication.


Since we are leveraging Active Directory Groups, here is where we get to select one or more groups we want to import users from. For environments that have a large number of groups, we can filter down the list based on text typed into the box below the group picker.


The AD Groups have now been selected, and the final step of the wizard allows you to define which license is associated with each group and the schedule on which we will synchronize with the Active Directory server.


We complete the wizard and an initial synchronization occurs if desired.  After the import is completed, you will now see a list of the users imported from AD.  Note the login name I highlighted is in domain\username format.


If you ever want to go back and edit a synchronization profile to change the schedule, the groups to synchronize, etc., you can click on the AD Manager as highlighted in the first screenshot, and you will be presented with a view of all the profiles that were defined, along with other information and actions, as can be seen below.


Your Active Directory accounts are now synchronized, so what will the experience look like for your technicians using the product? If you are familiar with applications like SQL Studio, DameWare will have a similar experience in that you can choose either Windows or DameWare Authentication.  If you select Windows Authentication and you are logged into a domain, we will use the credentials you are logged into that machine with.  If you select “Remember last connection settings” and “Don’t show this again”, then going forward, launching the application will perform a Single Sign-On (SSO) and automatically log you into DameWare, as long as that account has permissions in DameWare.


Once logged in, if you look at the bottom of the application, as highlighted below, you will notice you are logged in as domain\username, in this case lab.tex\labuser.


That’s it, pretty simple and straightforward.  We are currently running beta with a handful of customers, but if you have active maintenance and are interested in giving us some feedback, then please send me a direct message via thwack and we can set something up.

In my last post, I spoke about different ways to alert in NPM, pairing multiple features together to create granular alerts and really cut down on alerting noise.


Well, that’s all well and good in a perfect world, where all of your devices are reporting the correct data – but what can you do if they aren’t? If your device is providing the wrong data for CPU and Memory for example, it’s no longer possible to alert accurately on that node. Or if we’re showing the vendor as ‘Unknown’ then it’s hard to use a qualifier like ‘Where Vendor = ABCDXYZ’ to define your alert scope.


We poll certain OIDs for different device types with our Native Pollers – OIDs that we’ve carefully chosen for certain vendors or models that work for the vast majority of those devices. But sometimes, those default OIDs aren’t a perfect fit. Sometimes, the device should support an OID, but it doesn’t. Other times, we might not have a Poller created for that particular device model yet. (When we say Poller, we mean gathering specific data from an OID or group of OIDs – for example, CPU & Memory, Hardware health sensors, Topology data and so on)


Luckily, there’s an easy and quick way to swap in new Pollers, or create your own ones, and start polling these devices accurately - Device Studio!


Never heard of it? Check out this video.


So, let’s talk specifics about Device Studio, and show you the exact steps you can take to fix a device that’s providing inaccurate information. On your Orion Settings page, look under Node & Group Management for the ‘Manage Pollers’ option:


Let’s assume you have a problematic device, providing the wrong CPU & Memory details. Fixing this is a two-step process.


#1 – Find the right OIDs to poll to get accurate information

This one will need a little legwork! Check Thwack and the Content Exchange first (or click the Community tab to download directly from the Manage Pollers page) – after all, no point in re-inventing the wheel when the awesome folk on the forums have shared their successes! If that doesn’t work, you’ll often find all the information you need by plugging the device model, what you’re looking for and the word ‘OID’ into Google – chances are if you’re looking for this information, someone else was too. If that fails you, turn to the device documentation.


If you’re brave and curious, you can SNMP Walk the entire device to get a list of every single OID it supports. The best part? We ship an SNMP Walk tool with your Orion install – you can find it here:

[Install Drive]\Program Files(x86)\SolarWinds\Orion\SNMPWalk.exe


I’m an SNMP geek, so if I get into writing about how to read an SNMP Walk, you’ll never hear the end of it – so I’ll leave you with this handy guide on SNMP to get you started on reading the output and choosing the right OID from it to poll your device.
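If you do end up sifting through raw walk output, the key skill is recognizing which rows fall under the MIB subtree you care about. Here's a small, stdlib-only Python sketch of that filtering; the sample walk rows are invented, though 1.3.6.1.2.1.25.3.3.1.2 really is hrProcessorLoad from the standard HOST-RESOURCES-MIB.

```python
def oid_tuple(oid):
    """Parse a dotted OID string into a tuple of integers."""
    return tuple(int(part) for part in oid.strip(".").split("."))

def under_subtree(oid, subtree):
    """True if `oid` falls under the MIB subtree rooted at `subtree`."""
    o, s = oid_tuple(oid), oid_tuple(subtree)
    return o[:len(s)] == s

# A (heavily simplified, invented) slice of walk output: OID -> value.
walk_output = {
    "1.3.6.1.2.1.1.5.0": "core-switch-01",   # sysName
    "1.3.6.1.2.1.25.3.3.1.2.1": 17,          # CPU core 1 load (%)
    "1.3.6.1.2.1.25.3.3.1.2.2": 23,          # CPU core 2 load (%)
}

# Keep only the hrProcessorLoad rows.
cpu_rows = {oid: val for oid, val in walk_output.items()
            if under_subtree(oid, "1.3.6.1.2.1.25.3.3.1.2")}
print(cpu_rows)
```

Numeric, component-wise comparison matters here: naive string prefix matching would happily treat 1.3.6.1.2.1.19 as being under 1.3.6.1.2.1.1.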


#2 – Using the OIDs, set up a new Poller, and assign it to your device


    1. In Device Studio, click ‘Create New Poller’

    2. Fill in the details about this new poller


    3. On the next page, you will see a list of all required information for this Poller. For example, to poll CPU & Memory, Orion needs to know where to get details of the current CPU load, Memory used and Free memory.


    4. For each of these details, you’ll need to define the data source – this means, you’ll need to define what OIDs Orion needs to poll to get accurate information from your device.

    5. You can browse the MIB tree itself, testing OIDs against your chosen device as you go.


Alternatively, if you already know the OID you want to poll, go ahead and enter it under ‘Manually Define OID’

    6. Once you’ve chosen the data source, you’ll be asked to confirm if that data is reasonable and accurate. You’ll have the option here to perform calculations on the polled result – for example, to get an average across CPU cores, or combine multiple pollers together – this is very useful for Memory, as often, the data is stored as the number of blocks used / free – which then must be multiplied by the block size to get an accurate result.
If you’re happy with what you see, click ‘Yes, the data source is reasonable’.


     7. Almost there! Once you complete the wizard, choose your shiny new Poller from the list and select the ‘Assign’ button.

Select the node or nodes you need to assign this poller to, and run a Scan against them – this confirms that they will definitely support those new OIDs. If they pass the test with a Match, you can Enable your new poller, replacing the Native poller.


     8. If you need to swap back again for any reason, just run a List Resources against the Node, and you can toggle back and forth between your pollers.
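The block-size arithmetic mentioned in step 6 can be sketched like this. The block counts and the 4096-byte allocation unit below are hypothetical example values, in the style of hrStorage data; they are not from any particular device.

```python
def memory_from_blocks(used_blocks, free_blocks, block_size_bytes):
    """Combine block-count pollers into absolute memory figures,
    multiplying by the block size as step 6's transform describes."""
    used = used_blocks * block_size_bytes
    total = (used_blocks + free_blocks) * block_size_bytes
    percent_used = 100.0 * used / total
    return used, total, percent_used

# Hypothetical device report: 4096-byte units, 393216 used, 131072 free.
used, total, pct = memory_from_blocks(393216, 131072, 4096)
print(f"{used / 2**20:.0f} MiB used of {total / 2**20:.0f} MiB ({pct:.0f}%)")
```

Without the multiplication you would be alerting on raw block counts, which is exactly the kind of inaccurate data this whole exercise is meant to fix.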


And there you have it!


But wait – I also mentioned that you can use the Device Studio to fix those pesky devices that show as ‘Unknown’. If you do have devices that show up with the Vendor as Unknown, we’d still like to hear about them so that we can match them natively – but if you’d like to fix this yourself without waiting for the next release, you can use Device Studio to do this, and you can even use these steps to correct any devices that respond as ‘NET-SNMP’ instead of the correct Vendor & MachineType.


Many of the steps will be the same as above – you’ll just be creating a ‘Node Details’ Poller instead.


When you define the Data Sources for Node Details pollers, you’ll notice a lot of these are optional – but one is absolutely required: the SysObjectID.


The SysObjectID returns an OID that references another part of the MIB database – usually the Vendor’s MIBs, and can be used to identify both the Vendor and the Model of the device. It’s quite rare that this one isn’t supported by a device, so try to let Orion poll the SysObjectID automatically if at all possible. If the device doesn’t support this OID, you can use a constant value instead, and manually define the OID that should have been returned by the device.
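Under the hood, this kind of identification amounts to matching the SysObjectID against known enterprise OID prefixes under 1.3.6.1.4.1. A simplified Python sketch follows; the lookup table is illustrative and far from exhaustive, though the enterprise numbers shown are the well-known public assignments.

```python
# A few well-known enterprise OID prefixes under 1.3.6.1.4.1
# (illustrative only; the real registry has tens of thousands).
ENTERPRISE_VENDORS = {
    "1.3.6.1.4.1.9": "Cisco",
    "1.3.6.1.4.1.2636": "Juniper",
    "1.3.6.1.4.1.11": "Hewlett-Packard",
    "1.3.6.1.4.1.8072": "Net-SNMP",
}

def vendor_from_sysobjectid(sys_object_id):
    """Map a device's sysObjectID to a vendor by its longest
    matching enterprise prefix; 'Unknown' if nothing matches."""
    best, best_len = "Unknown", 0
    for prefix, vendor in ENTERPRISE_VENDORS.items():
        exact = sys_object_id == prefix
        child = sys_object_id.startswith(prefix + ".")
        if (exact or child) and len(prefix) > best_len:
            best, best_len = vendor, len(prefix)
    return best

print(vendor_from_sysobjectid("1.3.6.1.4.1.9.1.1208"))  # a Cisco product OID
```

Note the `prefix + "."` check: a plain string prefix test would wrongly match 1.3.6.1.4.1.99999 against Cisco's 1.3.6.1.4.1.9.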


Now, with the required OIDs done and out of the way – you can move on to fixing that Vendor = Unknown problem – and that part is quick and simple. Set the Constant Value to the text string you want it to report for both the Vendor, and the MachineType.


So, there you have it – a great way to clean up those ‘Unknown’ devices, and take care of the devices that respond with incorrect information, all in one place.
