Hello, I'm Brian Flynn, the SolarWinds Product Manager for Storage Resource Monitor (SRM).  I'm excited to announce that we are gearing up for a beta release of SRM 6.1, and I'm inviting customers with Storage Manager (STM) or Storage Resource Monitor (SRM) on active maintenance to test our beta release on new arrays.  If you are not an STM or SRM customer but are still interested in participating, go ahead and sign up as well, and I'll see if there is a way I can at least get you on a feedback call to interact with a SolarWinds-local install.  Click here for sneak-peek screenshots.

SRM beta button.png


In This Release...

We are primarily working on device support

  • Dell Compellent
  • EMC Symmetrix VMAX
  • Nimble
  • HP P2xxx/MSA
  • Hitachi Data Systems (HDS)
  • HP 3PAR StoreServ
  • EMC Isilon


DISCLAIMER: What we are working on in the SRM 6.1 beta is in no way a promise of what we will deliver in SRM 6.1.


How can I accelerate support for my devices?

The answer is simple:

  1. Check for an existing feature request for your array.
    • If it's there already, vote it up.
    • If it's not there already, create it.
  2. Regardless, you can greatly assist our velocity in delivering more device support by providing recordings of the metadata from your arrays using our Storage Responder Tool (SRT).


SRT is a very simple tool you can use to collect performance data from your arrays.  Simply point SRT at your SMI-S provider and it will take a snapshot of performance, capacity, and configuration data that you can send to our engineering team for use in development and testing. The data is safe to share; it contains none of the data stored on the array's disks, only the metadata we need for monitoring.  You can verify that yourself: the data is stored as plaintext XML, so you can inspect it before sending it to us.  SRT comes with a PDF document that will guide you through using it, but as I've said, don't hesitate to contact me with issues.

Licensing Changes

SRM 6.0 was just released in February 2015, and with that release came some slight licensing changes.  Most notably, if you choose to run both the SRM Orion module and the Profiler module, you can monitor an array with both modules using the same license.  You do not need separate licenses for the SRM Orion module and the SRM Profiler module.  This quick video should help clarify that with a couple of use cases.


With SRM 6.0 being so new, we haven't yet written about all of its features.  Previous blog posts like Storage Dreaming - The Next Chapter for Storage Monitoring with SolarWinds and Dreams Come True - Storage on Orion is now in Release Candidate tell the outstanding AppStack story, but there's more to SRM than just AppStack.

Here's an introduction to some of the features in the currently supported version of SRM:

  • Performance Dashboard
  • Latency Histogram for LUNs
  • IOPS Performance Per LUN

Performance Dashboard

SRM Performance Dashboard.png

Who is this for?

  • Anyone! 
  • Don't you get tired of creating performance summaries for your peers?
  • Wouldn't you like more time to work on pressing matters?
  • Wouldn't you like your help desk and support peers to take care of initial triage?
  • Are you a help desk or support professional tired of being the middle man between users and storage admins?

Then the SRM Performance Dashboard is for you!  Take a look at the example to the left. 

Here's what I can see at a glance:

  • We have performance problems on our CX3-10c array.
  • It took me no time to drill through from array through storage pools to find LUNs experiencing performance problems.
  • The Storage Objects by Performance Risk resource tells me the CX3-10c array has LUNs with latency problems.
  • Let's take a look at that array by clicking on the CX3-10c array.

Latency Histogram for LUNs

SRM Latency Histogram.png

Who is this for?

  • Storage Admins
  • Users of Storage LUNs
  • Is anyone left?  ;-)

Allow me to demonstrate:

  • Open the Block Storage sub-view
  • Review LUN performance by histogram.
  • Click the 24h zoom.
  • Hover over the histogram bars for more info.
  • I see that 7 LUNs have an average latency between 101-500 milliseconds in the past 24 hours.
  • That's clearly a significant concern for a Storage Admin and the users of Storage LUNs.
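The bucketing behind a latency histogram like this is easy to sketch in Python. The bucket boundaries below are illustrative (modeled on the 101-500 ms range mentioned above), not necessarily SRM's exact ranges, and the LUN latencies are invented:

```python
from collections import Counter

# Illustrative latency buckets in milliseconds (lower, upper bound);
# SRM's actual histogram ranges may differ.
BUCKETS = [(0, 100), (101, 500), (501, 1000), (1001, float("inf"))]

def bucket_label(lo, hi):
    return f"{lo}-{hi} ms" if hi != float("inf") else f"over {lo} ms"

def latency_histogram(avg_latencies_ms):
    """Count how many LUNs fall into each average-latency bucket."""
    counts = Counter()
    for latency in avg_latencies_ms:
        for lo, hi in BUCKETS:
            if lo <= latency <= hi:
                counts[bucket_label(lo, hi)] += 1
                break
    return dict(counts)

# Nine LUNs: seven land in the 101-500 ms bucket, as in the example above.
luns = [120, 150, 210, 250, 320, 410, 480, 30, 45]
print(latency_histogram(luns))  # {'101-500 ms': 7, '0-100 ms': 2}
```

Hovering a bar in the real resource is the equivalent of reading one of these bucket counts.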

IOPS Performance Per LUN


Who is this for?

  • Are you frequently in the center of blame games between application owners using LUNs in a shared storage pool?
  • Are you frequently fetching LUN performance metrics for non-storage admins?

Then the IOPS Performance Per LUN resource is for you!  Take a look at the example to the left. Here's what I can see at a glance:

  • 12 LUNs in this storage pool.
  • 3 LUNs have significantly higher IOPS than the rest.
  • 2 LUNs have experienced the same spike in IOPS. 
  • This makes me wonder why.  Are they both used by the same application?  See AppStack!
  • 1 LUN has typically had very low IOPS but has spiked up just after 9 AM.
  • 1 of the LUNs that typically has higher IOPS experienced a parallel spike.  Again, I say see AppStack!

See AppStack



Keep watching for more blog posts outlining new SRM features and don't forget to sign up for the beta!


SRM beta button.png

I told you I was saving the best for last, and here it is!  We're standing database performance analysis on its ear by presenting it from the perspective of the applications using your databases, otherwise referred to as Database Application Performance Monitoring & End User Experience.  Many people have database instances shared by several applications, which turns performance troubleshooting into a complicated nightmare.  This release of Database Performance Analyzer (DPA) has features to empower the application owner and liberate DBAs.  I hope you like what I'm about to show you, and I invite you to consider joining the DPA 9.2 beta, which is open to customers with both DPA and Server & Application Monitor (SAM) currently on active maintenance.  To try it out, please fill out this short survey.

Oh, and did I mention you could WIN A $50 AMAZON GIFT CARD?



For this third post in my three-part series, I'm going to tell you about:

  • Part 3 - Database Application Performance Monitoring & End User Experience!
    • See the status of applications querying your databases
    • Application Perspective of End User Experience
    • A new perspective on Blocking

Previous Posts:

NOTICE: This is a BETA, so there is no promise that what you see here will be delivered as-is.



Applications Using My Databases: After you've mapped applications in SAM to database instances in DPA, you will see a resource on both the Summary View and the Instance View that enumerates the applications that depend on your databases.  This can be a handy troubleshooting tool when you suspect a database problem may be impacting applications.  You'll be able to use it as a quick glance at their status as well as a means to dive into them for a closer look.  When you do, you'll find a new DPA resource that helps you understand the database's contribution to application end user experience.


The Benefit: Easily keep track of application to database dependencies.


Application End User Experience: Databases don't exist for the sake of giving DBAs something to do.  They exist to serve applications, which are typically created and maintained by someone other than the DBA.  Unfortunately, those somebodies often don't know databases like a DBA does.  Another unfortunate reality is that DBAs are typically outnumbered by the applications they support and the projects developing new ones.  To help with this, we've created a resource you can add to an Application View for applications that use databases.  This resource provides a contextual glimpse into the performance of queries originating from the node your application resides on.  We're not just showing you total resource loads; we're showing you real execution times for queries from your application server.  It's a beautiful thing, right? Explaining database behavior in multi-tenant environments takes more than a minute, but everyone understands waiting and prefers to minimize it.  And we're showing them how long THEY are waiting.  In other words, we've filtered out what they would perceive as noise.



The Benefits:

  • Empower application development and support staff.
  • Liberate the DBAs from tedious research tasks.
  • Eliminate unproductive blame games!


query-blocker-lg.jpg

Blocking: Many people I've spoken to have a lot of interest in identifying where blocking occurs.  A quick overview for the non-DBA: blocking is not inherently bad, but it can be relatively bad.  Blocking is by-design behavior that results from locks, which are used to preserve transactional consistency.  In fact, until recently, locking and blocking is how all of the big-name relational databases maintained transactional consistency.  So let's say that we don't typically care about very brief blocking; we only care when it becomes a significant factor in overall query performance.  To help with this, we've created a resource that helps you understand the situation from the perspective of both the blockers and the blockees.  Sometimes they're one and the same, e.g. the same update query being executed by two different sessions at the same time.  There are also different blocking scenarios: you can have one session blocking many sessions, or a cascading tree of sessions blocking other sessions that block other sessions, and so on.  In the first scenario, you'll see blocking time and blocked time being equal.  In the second, you'll see more time blocked than time blocking.  This resource also has a bar graph at the bottom to show you when blocking has occurred.
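The two scenarios can be modeled with a small Python sketch. The session data is invented, and the accounting is a simplification: each blocked session's effective wait includes the wait of its own blocker, which is why a cascade reports more time blocked than time blocking:

```python
# Hypothetical blocking edges: (blocker_session, blocked_session, seconds).
# Scenario 1: one session directly blocks two others.
flat = [(1, 2, 10), (1, 3, 10)]
# Scenario 2: a cascade -- session 1 blocks 2, and 2 blocks 3.
cascade = [(1, 2, 10), (2, 3, 10)]

def totals(edges):
    """Return (total time blocking, total time blocked) for a set of edges."""
    waits = {blocked: sec for _, blocked, sec in edges}
    blocker_of = {blocked: blocker for blocker, blocked, _ in edges}

    def effective_wait(session):
        # A session deep in a cascade also inherits its blocker's wait.
        direct = waits.get(session, 0)
        parent = blocker_of.get(session)
        return direct + (effective_wait(parent) if parent else 0)

    blocking = sum(sec for _, _, sec in edges)
    blocked = sum(effective_wait(s) for s in waits)
    return blocking, blocked

print(totals(flat))     # (20, 20): blocking time equals blocked time
print(totals(cascade))  # (20, 30): more time blocked than time blocking
```

This matches the observation above: equal totals for the flat scenario, and a blocked total that exceeds the blocking total once the tree cascades.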

The Benefit: This resource provides a visual that helps you see how many blockers are impacting how many blockees (waiting queries).



$50 Amazon Gift Cards Details
  • Completing requirements earns you opportunities to win 1 of 5 Amazon Gift Cards!
  • 1 opportunity to win per milestone completed.  Complete them all to maximize your chances of winning.
  • Share via screenshots or videos (e.g. from a camera phone).
  • Send to brian.flynn@solarwinds.com.
  • Milestones
    1. Share your integration experience.
      • Show that you successfully connected SAM to DPA.  The screen should show that Orion has successfully tested a connection or is connected.
      • Show us your database instance mapping screen.  The screen should show some database instances are mapped to Orion nodes.
      • Show us your application mapping screen.  The screen should show that you have discovered applications querying your databases.
    2. Show us your Summary View - Click on the Databases tab.  That is the Summary View.
    3. Show us an Instance View - From the Summary View, click on a database instance then click the DB Performance sub-view.
    4. Show us an Application View - From the Summary View or Instance View, find the Applications Using My Databases resource and click into one of those applications.

User Device Tracker (UDT) 3.2.1 Release Candidate is now available in your SolarWinds Customer Portal.


UDT 3.2.1 brings back the ability to monitor new switches and routers without the need for re-discovery. You can now simply go to the Port Management page =>


... and choose from a new dropdown to see devices not monitored by UDT (e.g. anything you’ve just added to NCM, NPM, etc.).


Selecting the desired nodes and clicking Monitor with UDT will bring all of their ports under monitoring within one polling interval (the default is 30 minutes).


The Release Candidate is a fully tested and supported release and you can upgrade to the RC from your previous version of UDT.

Should you need assistance during the installation, feel free to contact SolarWinds Support, or you can use the dedicated RC Thwack space.

Looking forward to your feedback!

UDT team

Coming off the December release, in which we integrated DameWare into Web Help Desk, the team rolled straight into working on formal Active Directory integration, as discussed here. With the product today, you can export a list of users from Active Directory (AD) and do a one-time manual import of those accounts into DameWare.  While this is helpful, as an administrator of the product you still have to manage user passwords separately.


Based on feedback from our customers, we are enhancing this integration and I wanted to give everyone a sneak peek into how things are progressing now that we have been in beta for a couple weeks.


First, when you log in to the Central Server, you will notice the top ribbon menu is a bit different: we added a new button and tweaked an existing one, as highlighted below.

New Nav.png

Let’s first walk through defining a new connection to your Active Directory server.  Clicking the AD Import button opens a wizard in which you can select whether this sync/import happens just once or on a regular schedule. Because we are leveraging Active Directory groups, if you select synchronization, then after the initial sync we will check whether new users have been added to the group since we last synchronized with Active Directory and automatically import those accounts into DameWare.  I will provide more detail on how this works later in this post.
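Conceptually, each scheduled sync boils down to a set difference between the group's current membership and the accounts already imported. A minimal sketch of that logic (the account names and function are invented for illustration, not DameWare's actual implementation):

```python
def users_to_import(ad_group_members, imported_accounts):
    """Return AD accounts present in the group but not yet imported.
    Names are compared case-insensitively, as Windows logins are."""
    imported = {name.lower() for name in imported_accounts}
    return sorted(u for u in ad_group_members if u.lower() not in imported)

# Hypothetical group membership at this sync vs. accounts imported last time.
group = ["LAB\\alice", "LAB\\bob", "LAB\\carol"]
already = ["lab\\alice", "lab\\bob"]
print(users_to_import(group, already))  # ['LAB\\carol']
```

Only the newly found account would be imported; existing accounts are left alone.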


Next we specify the connection details for the Active Directory domain. Nothing out of the ordinary here to review and discuss, with the exception that you can use the local domain, or any other domain, for authentication.


Since we are leveraging Active Directory groups, here is where we select one or more groups to import users from. For environments with a large number of groups, the list is automatically filtered based on text typed into the dialogue below the group picker.


The AD groups have now been selected, and the final step of the wizard allows you to define which license is associated with each group and the schedule on which we will synchronize with the Active Directory server.


We complete the wizard, and an initial synchronization occurs if desired.  After the import completes, you will see a list of the users imported from AD.  Note that the login name I highlighted is in domain\username format.


If you ever want to go back and edit a synchronization profile to change the schedule, the groups to synchronize, etc., you can click the AD Manager button (highlighted in the first screenshot) to see all the profiles that were defined, along with other information and actions, as shown below.


Your Active Directory accounts are now synchronized, so what will the experience look like for technicians using the product? If you are familiar with applications like SQL Server Management Studio, DameWare will feel similar in that you can choose either Windows or DameWare authentication.  If you select Windows authentication, we will use the credentials you are logged into the machine with, provided you are logged into a domain.  If you check “Remember last connection settings” and “Don’t show this again”, then going forward the application will perform Single Sign-On (SSO) and automatically log you into DameWare, as long as that account has permissions in DameWare.


Once logged in, if you look at the bottom of the application, as highlighted below, you will notice you are logged in as domain\username, in this case lab.tex\labuser.


That’s it, pretty simple and straightforward.  We are currently running the beta with a handful of customers, but if you have active maintenance and are interested in giving us some feedback, please send me a direct message via Thwack and we can set something up.

In my last post, I spoke about different ways to alert in NPM, pairing multiple features together to create granular alerts and really reduce alerting noise.


Well, that’s all well and good in a perfect world where all of your devices report the correct data – but what can you do if they don’t? If a device provides the wrong data for CPU and memory, for example, it’s no longer possible to alert accurately on that node. Or if we’re showing the vendor as ‘Unknown’, it’s hard to use a qualifier like ‘Where Vendor = ABCDXYZ’ to define your alert scope.


We poll certain OIDs for different device types with our Native Pollers – OIDs that we’ve carefully chosen for certain vendors or models that work for the vast majority of those devices. But sometimes, those default OIDs aren’t a perfect fit. Sometimes, the device should support an OID, but it doesn’t. Other times, we might not have a Poller created for that particular device model yet. (When we say Poller, we mean gathering specific data from an OID or group of OIDs – for example, CPU & Memory, Hardware health sensors, Topology data and so on)


Luckily, there’s an easy and quick way to swap in new Pollers, or create your own ones, and start polling these devices accurately - Device Studio!


Never heard of it? Check out this video.


So, let’s talk specifics about Device Studio, and show you the exact steps you can take to fix a device that’s providing inaccurate information. On your Orion Settings page, look under Node & Group Management for the ‘Manage Pollers’ option:


Let’s assume you have a problematic device, providing the wrong CPU & Memory details. Fixing this is a two-step process.


#1 – Find the right OIDs to poll to get accurate information

This one will need a little legwork! Check Thwack and the Content Exchange first (or click the Community tab to download directly from the Manage Pollers page) – after all, there’s no point re-inventing the wheel when the awesome folks on the forums have shared their successes! If that doesn’t work, you’ll often find all the information you need by plugging the device model, what you’re looking for, and the word ‘OID’ into Google – chances are, if you’re looking for this information, someone else was too. If that fails, turn to the device documentation.


If you’re brave and curious, you can SNMP Walk the entire device to get a list of every single OID it supports. The best part? We ship an SNMP Walk tool with your Orion install – you can find it here:

[Install Drive]\Program Files (x86)\SolarWinds\Orion\SNMPWalk.exe


I’m an SNMP geek, so if I get into writing about how to read an SNMP Walk, you’ll never hear the end of it – so I’ll leave you with this handy guide on SNMP to get you started on reading the output and choosing the right OID from it to poll your device.
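A walk produces a long list of `OID = TYPE: value` lines. A small parser like this makes the output searchable; the line format is assumed to follow the common net-snmp style, and the sample device values are invented:

```python
import re

def parse_walk(text):
    """Parse 'OID = TYPE: value' lines from an SNMP walk into a dict."""
    results = {}
    for line in text.splitlines():
        m = re.match(r"^(\S+)\s*=\s*(?:\w+:\s*)?(.*)$", line.strip())
        if m:
            results[m.group(1)] = m.group(2).strip('"')
    return results

# Invented sample output; real walks run to thousands of lines.
sample = '''\
.1.3.6.1.2.1.1.1.0 = STRING: "Example Router"
.1.3.6.1.2.1.1.5.0 = STRING: "core-sw-01"
.1.3.6.1.2.1.1.3.0 = Timeticks: (12345) 0:02:03.45'''

oids = parse_walk(sample)
print(oids[".1.3.6.1.2.1.1.5.0"])  # core-sw-01
```

Once the output is a dictionary, finding the candidate OID to feed into Device Studio is a simple lookup or substring search.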


#2 – Using the OIDs, set up a new Poller, and assign it to your device


    1. In Device Studio, click ‘Create New Poller’

    2. Fill in the details about this new poller


    3. On the next page, you will see a list of all required information for this Poller. For example, to poll CPU & Memory, Orion needs to know where to get details of the current CPU load, Memory used and Free memory.


    4. For each of these details, you’ll need to define the data source – this means, you’ll need to define what OIDs Orion needs to poll to get accurate information from your device.

    5. You can browse the MIB tree itself, testing OIDs against your chosen device as you go.


Alternatively, if you already know the OID you want to poll, go ahead and enter it under ‘Manually Define OID’

    6. Once you’ve chosen the data source, you’ll be asked to confirm if that data is reasonable and accurate. You’ll have the option here to perform calculations on the polled result – for example, to get an average across CPU cores, or combine multiple pollers together – this is very useful for Memory, as often, the data is stored as the number of blocks used / free – which then must be multiplied by the block size to get an accurate result.
If you’re happy with what you see, click ‘Yes, the data source is reasonable’.


     7. Almost there! Once you complete the wizard, choose your shiny new Poller from the list and select the ‘Assign’ button.

Select the node or nodes you need to assign this poller to, and run a Scan against them – this confirms that they will definitely support those new OIDs. If they pass the test with a Match, you can Enable your new poller, replacing the Native poller.


     8. If you need to swap back again for any reason, just run a List Resources against the Node, and you can toggle back and forth between your pollers.
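The block-size arithmetic mentioned in step 6 looks like this in practice. The numbers are invented, and real devices report block counts via various hrStorage-style OIDs whose names differ by vendor:

```python
def memory_used_percent(blocks_used, blocks_free, block_size_bytes):
    """Convert block counts (as many devices report memory) into a
    used-memory percentage: blocks must be multiplied by block size."""
    used = blocks_used * block_size_bytes
    total = (blocks_used + blocks_free) * block_size_bytes
    return 100.0 * used / total

# e.g. 300,000 used + 100,000 free blocks of 4 KiB each = 75% used
print(round(memory_used_percent(300_000, 100_000, 4096), 1))  # 75.0
```

This is exactly the kind of calculation the "perform calculations on the polled result" option in the wizard applies for you.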


And there you have it!


But wait – I also mentioned that you can use Device Studio to fix those pesky devices that show up as ‘Unknown’. If you do have devices whose vendor shows as Unknown, we’d still like to hear about them so that we can match them natively – but if you’d like to fix this yourself without waiting for the next release, Device Studio can do it. You can even use these steps to correct devices that respond as ‘NET-SNMP’ instead of the correct Vendor and MachineType.


Most of the steps are the same as above – you’ll just be creating a ‘Node Details’ poller instead.


When you define the Data Sources for Node Details pollers, you’ll notice a lot of these are optional – but one is absolutely required: the SysObjectID.


The SysObjectID returns an OID that references another part of the MIB database – usually the vendor’s MIBs – and can be used to identify both the vendor and the model of the device. It’s quite rare for a device not to support this OID, so let Orion poll the SysObjectID automatically if at all possible. If the device doesn’t support it, you can use a constant value instead, manually defining the OID the device should have returned.


Now, with the required OIDs done and out of the way – you can move on to fixing that Vendor = Unknown problem – and that part is quick and simple. Set the Constant Value to the text string you want it to report for both the Vendor, and the MachineType.
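Conceptually, resolving a SysObjectID to a vendor is a prefix match against the well-known enterprise OID arc (1.3.6.1.4.1.<enterprise>). A sketch with a deliberately tiny table (the real lookup uses the full vendor MIB registry, and this is not Orion's actual code):

```python
# A few well-known enterprise OID prefixes; illustrative, not exhaustive.
VENDOR_PREFIXES = {
    "1.3.6.1.4.1.9.": "Cisco",
    "1.3.6.1.4.1.11.": "HP",
    "1.3.6.1.4.1.8072.": "NET-SNMP",
}

def vendor_from_sysobjectid(sys_object_id):
    """Map a polled SysObjectID value to a vendor by OID prefix."""
    for prefix, vendor in VENDOR_PREFIXES.items():
        if sys_object_id.startswith(prefix):
            return vendor
    return "Unknown"

print(vendor_from_sysobjectid("1.3.6.1.4.1.9.1.516"))  # Cisco
```

A device whose agent reports the generic NET-SNMP arc instead of its own vendor's arc is exactly the ‘NET-SNMP’ mislabeling case described above, and the constant-value override is how you correct it.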


So, there you have it – a great way to clean up those ‘Unknown’ devices, and take care of the devices that respond with incorrect information, all in one place.

I’ve been with SolarWinds going on 8 years now, working with the Support Team, so it came as no surprise when I saw how many of you answered our poll last month saying that alerts – specifically, filtering out the noise from the real issues – were your top-priority problem to solve in the next 30 days.


So, with that in mind, I’ve put together three powerful tips that could help you to achieve this goal, and I’ll discuss some of the common alerting questions that we tend to get in Support along the way.


NPM has awesome alerting capability, and when coupled with some of the other Orion features, it’ll let you get really granular – so yes, you really can cut down on a lot of that noise!


Tip #1 – Use custom properties with alerts for a powerful combo

Custom properties allow you to define any property you like that doesn’t already exist in Orion. The possibilities are endless here, and you can use multiple custom properties together to provide a high level of granularity. Not heard of custom properties before? Check out a video tutorial on them here.


I’ll take three case studies as examples that we’re regularly asked about, which could go a long way to reducing that alerting noise.


Let’s start really simple and assume you’ve got a handful of devices that you need to monitor but don’t need alerts for. Create a Boolean (Yes/No) custom property called DoNotAlertOnThis. Boolean custom properties default to False – so all you need to do is set it to True for these ‘noisy’ devices.


Set up your alert like this:



Next, let’s take a look at creating a single alert that can email a different contact for each node. When you’ve got a lot of departments, you don’t want all of your admin teams receiving alerts for devices that others are responsible for, right? Well, what if I told you that you could create a custom property on your nodes and store the email address or distribution-list address for the primary contact(s) in it?


And finally, do you ever find yourself wishing that you could define your alert thresholds per device, per interface, or per volume? Not all volumes are created equal - have you ever wished you could alert on some volumes when they reach 50%, but then others aren’t really a panic until they hit 95%? It’s not only possible, it’s quite easy to set up with custom properties.
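The per-volume threshold trick amounts to comparing each volume against its own custom property instead of one global number. A sketch of that evaluation (the property name and volume data are invented; in Orion this would be a custom property referenced in the alert condition):

```python
def volumes_to_alert(volumes, default_threshold=90):
    """Fire only for volumes over their own per-volume threshold,
    falling back to a global default when none is set."""
    return [
        v["name"]
        for v in volumes
        if v["percent_used"] >= v.get("custom_threshold", default_threshold)
    ]

volumes = [
    {"name": "C:", "percent_used": 55, "custom_threshold": 50},
    {"name": "D:", "percent_used": 80, "custom_threshold": 95},
    {"name": "E:", "percent_used": 92},  # no property set: default of 90
]
print(volumes_to_alert(volumes))  # ['C:', 'E:']
```

Note that the 80%-full volume stays quiet because its own threshold is 95% – exactly the "not all volumes are created equal" behavior described above.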


Tip #2 – Using Dependencies to add intelligence to the Alerts

Dependencies are another awesome way to build alerting intelligence, and they cut down drastically on noise.


Let’s assume, for example, that you’ve got a remote site with a few hundred devices. Using just the default ‘Node Down’ alert, you’re probably getting an email for every single one of those remote devices if the link to that site goes down. It’s not that those devices are actually down, but since Orion can’t contact them, it has no choice but to mark them that way. Right? Thankfully – wrong!


Using Groups and Dependencies, you can tell Orion that those devices are Unreachable when that link goes down – that way, you’ll only receive your interface down alert. Sound good?


Here’s how you do it.


First create a Group for your remote site. Let’s call it FieldOffice. Set a dependency for your FieldOffice group, with the interface of the network device in your HQ as the parent.


That’s it. It’s really that simple – Orion will do all the rest for you automatically.


If that interface goes down, Orion will set all the nodes in your field office to Unreachable – a specific status used for dependencies. As these nodes are in the ‘Unreachable’ status, not ‘Down’, the logic for ‘alert me when a node goes down’ does not apply.
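The dependency behavior is essentially a status override: children behind a dead parent link are reported as Unreachable, and the node-down alert only matches Down. A sketch of that logic (node names and statuses are invented; this is a model of the behavior, not Orion's code):

```python
def effective_status(node_status, parent_interface_up):
    """Status dependency logic would report for a child node."""
    if not parent_interface_up and node_status == "Down":
        # Behind a dead link we can't tell down from unreachable.
        return "Unreachable"
    return node_status

def node_down_alerts(nodes, parent_interface_up):
    """Nodes whose effective status still triggers the node-down alert."""
    return [
        name
        for name, status in nodes.items()
        if effective_status(status, parent_interface_up) == "Down"
    ]

field_office = {"fo-sw-01": "Down", "fo-rtr-01": "Down"}
# Parent interface down: no node-down alerts fire, only the interface alert.
print(node_down_alerts(field_office, parent_interface_up=False))  # []
print(node_down_alerts(field_office, parent_interface_up=True))
```

With the parent up, the same two unresponsive nodes would alert individually, which is exactly the distinction the Unreachable status buys you.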


Speaking of groups – this is a little off topic, but we do get asked about it from time to time: how do you set up an alert to tell you when a group member goes down (or which member caused the group to go down)? You can actually use the out-of-the-box ‘Alert me when a group goes down’ alert. By default, this tells you the group went down (the status bubbles up to the group level), but it doesn’t include which group member keeled over to cause it. Add the following variable to your trigger action to get that information:

Root Cause: ${GroupStatusRootCause}


Tip #3 - Check out the complex condition options in 11.5

With the complex conditions options in 11.5 the possibilities are endless, so rather than talk about specific case studies, let’s talk about some of the awesome logic options you can now apply to your alerts.


First, enable the “Complex Conditions” on the “Trigger Conditions” tab of your new alert:


Now, let’s take a look at a really cool noise-reducing option – only get an alert when a certain number of objects meet the alert criteria.


Yes, you heard that right – instead of receiving a separate alert for each interface with high utilization, you can choose to receive a single alert only when a certain number of interfaces matching your alert criteria go over their thresholds. With Complex Conditions enabled, this option will appear at the end of the Primary Section:

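The count-based trigger amounts to "fire once when at least N objects breach", which a short sketch makes concrete (threshold and utilization figures are invented):

```python
def should_fire(utilizations, per_interface_threshold=90, min_breaching=5):
    """Fire one alert only when at least `min_breaching` interfaces
    exceed the utilization threshold, instead of one alert apiece."""
    breaching = [u for u in utilizations if u >= per_interface_threshold]
    return len(breaching) >= min_breaching

print(should_fire([95, 91, 20, 30, 92]))        # False: only 3 breaching
print(should_fire([95, 91, 92, 97, 99, 40]))    # True: 5 breaching
```

Instead of three separate emails in the first case, you get nothing; in the second, you get exactly one.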

And finally, as my last but definitely not least tip of the day, let’s take a look at stringing conditions together. There will be times when you only want an alert when two (or more) very distinct conditions occur at once, and when you’re fighting fires, it can be hard to correlate a bunch of emails to see whether that scenario actually happened.


The classic example of this would be to check the status of two separate applications on two separate servers – maybe you can hobble along with just one, but you might need a critical alert sent out if you lose both.


Using the standard alert conditions, you can create an alert for just one of those applications. If you were to add ‘Node name is equal to ServerB’ here, the alert could never trigger – no server can be named both ‘ServerA’ and ‘ServerB’ at the same time.


This is where the complex condition is king. You can now alert as:


So now an application would have to fail on both Server A and also fail on Server B in order to generate this alert.
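Logically, the difference is scope: a standard condition evaluates every clause against one object, while a complex condition ANDs together independently scoped sections. A sketch (server and application data are invented):

```python
# Standard (single-scope) logic: both clauses must match ONE object --
# impossible when they name two different servers.
def standard_trigger(obj):
    return obj["node"] == "ServerA" and obj["node"] == "ServerB"  # never True

# Complex-condition logic: each section matches its own object set,
# and the alert fires only when every section has a match.
def complex_trigger(objects):
    section_a = any(o["node"] == "ServerA" and o["app_down"] for o in objects)
    section_b = any(o["node"] == "ServerB" and o["app_down"] for o in objects)
    return section_a and section_b

apps = [
    {"node": "ServerA", "app_down": True},
    {"node": "ServerB", "app_down": True},
]
print(complex_trigger(apps))      # True: both applications are down
print(standard_trigger(apps[0]))  # False: one object can't be both servers
```

This is also why the sections don't have to relate to each other at all: each just has to resolve to true on its own objects.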


Here’s where it gets super interesting – the conditions don’t have to match. In fact, they don’t have to have anything to do with each other at all, other than both resolving to ‘true’ conditions in order for the alert to fire. And you can add as many of these additional conditions as you like to get truly granular and complex alerts.


So – over to you! What tips and tricks have you picked up on Alerts over the years? What excited you about the changes to alerts in 11.5?


Want to learn more about Orion Alerts? Take a look at these resources:

Introduction to Alerts

Level 2 Training – Service Groups, Alerts and Dependencies (53 mins)

Building Complex Alert Conditions

Information about Advanced SQL Alerts

Alert Variables

With the exciting release of Orion NPM 11.5 last week, you might be looking for all the information you need to upgrade, and the steps to do it quickly and easily.

  1. Check the Release Notes for Orion NPM 11.5
  2. Review the system requirements and ensure your server is up to spec.
  3. Back Up your Database (Microsoft KB Links: 2005, 2008, 2008 R2, 2012)
  4. Check your upgrade path
    • If you have only Orion installed, and you’re running NPM version 10.7 or higher, you can go straight to 11.5.
    • If you’re running a version lower than 10.7, you’ll need to do a stepped upgrade. Here’s the overall upgrade path if you only have NPM installed:

               Orion NPM 7.8.5 → 8.5.1 → 9.1 → 9.5.1 → 10.1.3 → 10.3.1 → 10.7 → 11.5.X

    • If you’re running NPM in addition to other modules, make sure you check the Product Upgrade Advisor tool to get an exact upgrade path for both NPM and your other modules to ensure you maintain compatibility.

  5. Download the required versions from the customer portal; skip this step if you've already downloaded the bits.

To download the current version, and any previous versions you require, you’ll need to log into the customer portal. Once you’ve logged in, use the License management menu to select ‘My Downloads’.



Next, select the product you want to download:


And under the Server Downloads section, you’ll see the latest release selected by default. If you need an earlier version for your upgrade path, you can select it from the drop-down menu.


  6. Run the installations, in order, to upgrade your product.  You’ll find the upgrade instructions for Orion NPM here, and a wealth of documentation, including upgrade and installation guides for all our products, here.
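A quick way to read the stepped upgrade path from step 4: find your current version in the chain and install everything after it, in order. As a sketch:

```python
# The documented stepped path for NPM-only installs (from step 4 above).
UPGRADE_PATH = ["7.8.5", "8.5.1", "9.1", "9.5.1",
                "10.1.3", "10.3.1", "10.7", "11.5"]

def remaining_steps(current):
    """Versions still to install, in order, for a stepped NPM upgrade."""
    if current not in UPGRADE_PATH:
        raise ValueError(f"{current} is not on the documented path")
    return UPGRADE_PATH[UPGRADE_PATH.index(current) + 1:]

print(remaining_steps("10.7"))   # ['11.5'] -- straight to 11.5
print(remaining_steps("9.5.1"))  # ['10.1.3', '10.3.1', '10.7', '11.5']
```

Remember that this only covers NPM by itself; with other modules installed, the Product Upgrade Advisor is still the authority on ordering.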


If you run into any trouble during your upgrade, call us in Support on our toll-free numbers or submit a ticket – we’re happy to help!


Got a question that wasn’t covered here? Try these resources for help:

Product Upgrade Advisor

Orion NPM Product Page

Upgrading SolarWinds Orion when FailOver Engine is installed

Migrating Orion to another server

Migrating the Orion Database

Manual License Registration

In talking with some of our more security-focused and, from a compliance perspective, more tightly regulated customers, a common question I get asked is about audit logging with DameWare.  With Mini Remote Control (MRC), there are a couple of different options when it comes to logging.


By default, DameWare Mini Remote Control writes to the Windows Event Log.  The two events MRC audits are attempts to connect to a remote host and disconnects from a remote host.  These Application Event Log entries contain connection information, along with specific information about the system the MRC user connected from and the username used to establish the MRC connection.



The next couple of options are not enabled or configured by default, so for these to work, both the logging server and all remote systems must be running the MRC client agent.

If you already have MRC deployed in your environment and you want to enable this, you can configure the agents either by clicking the highlighted icon within MRC or by right-clicking the tray icon and selecting “Settings”.




In the dialog that appears, as seen below, select the “Additional Settings” tab and click the highlighted “Logging” button.


Once here, you can configure this agent to log locally and/or to a remote destination.  Double-check that the destination folder exists on the file system.  DameWare will automatically create the file, but only if the path exists.


If you have not yet deployed the DameWare agents on your network, you can customize and configure the agents so these settings are applied by default.  To do this, you will need to create a new MSI with our utility, “DameWare Mini Remote Control Client Agent MSI Builder”, which is installed by default.


Once you have this configured and are sending the audit events to a log file, we recommend using a comma-separated file.  An example of what this would look like can be seen below.
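Because the audit log is a plain comma-separated file, it's easy to post-process for compliance reporting. Here's a minimal sketch in Python; note that the column layout and field names used below are my own assumptions for illustration, so adjust them to match what your agents actually write.

```python
import csv
import io

# Hypothetical MRC audit log content; the real column layout may differ,
# so adapt FIELDS to match your deployment's output.
sample_log = """\
2015-04-01 09:12:44,CONNECT,192.168.1.50,WORKSTATION-07,ACME\\jsmith
2015-04-01 09:30:02,DISCONNECT,192.168.1.50,WORKSTATION-07,ACME\\jsmith
"""

FIELDS = ["timestamp", "event", "source_ip", "host", "username"]

def parse_audit_log(text):
    """Parse comma-separated audit entries into a list of dictionaries."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(FIELDS, row)) for row in reader]

entries = parse_audit_log(sample_log)
connects = [e for e in entries if e["event"] == "CONNECT"]
print(len(entries), len(connects))  # → 2 1
```

From here it's a short step to filtering by username or source IP when an auditor asks who connected to a given host and when.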


If you have deployed DameWare Central Server for remote control sessions over the internet or outside the firewall, the Central Server also writes various events to the Windows Event Log, such as licensing information and session connection and disconnection information.  In our upcoming release we will be adding Active Directory synchronization information.  If you need any further information on logging, you can also see a KB article we have here.


I’m interested in hearing what other types of events or actions you would like to see logged going forward, so please post any feedback in the comments section, or you can always direct message me via thwack.

Here on the Security Event Manager product team we often get questions about how we maintain our appliance and ensure the integrity of the data we collect. We previously published this KB article, but this post includes a quick recap of the critical areas relevant to SEM's end-to-end security.


This diagram is a good overview of the specific areas of interest, with even more detail in the sections below.

LEM security diagram2.png


Hardened Operating System


SEM applies security elements at just about every level, including the OS. Since the product is deployed as a virtual appliance, several security measures are applied at the OS level:


  • A Debian Linux core operating system with all unnecessary ports, protocols, processes and services/daemons disabled.
  • Operating system and application maintenance is performed regularly with SEM upgrades, for both security and stability updates.
  • No root level access – in fact, all passwords are randomly generated per-appliance, and even our internal teams don’t have knowledge of them in advance.
  • Minimal access to the appliance - only via virtual appliance console or SSH.
    • For low-level appliance configuration (including things like networking and backup configuration), SEM includes a menu-driven command-line interface that requires a username/password via SSH over a nonstandard port.
  • Remote configuration access can also be disabled or restricted by IP address, and an appropriate usage banner can be displayed.
  • Internal logging and auditing.


Web Console Security


SEM’s web console provides a secure read-only view of data and access to SEM’s configuration. This access can be restricted and further secured.


Security features applied to the web console include:

  • Encrypted Console access that is certificate based over secure HTTP
    • Option to deploy a CA-signed certificate in addition to the included self-signed certificate
    • Option to disable HTTP access in favor of only HTTPS
  • Additional console access limitation applied on a per IP basis.
  • Local or Active Directory users with role-based access. The roles available are:
    • Administrator - Users who have full access to the features and capabilities within the web console.
    • Auditor/Guest - Users who have extensive view rights to the system, but cannot modify anything other than their own filters.
    • Monitor - Users who can access the Console, but cannot view or modify anything, and must be provided a set of filters.
    • Contact - Users who cannot access the Console, but do receive external notification.
    • Reports – Users who can only run Reports, but cannot access the Console for real-time monitoring.
  • Password complexity requirement for local users (AD users inherit AD policy).
  • SEM active responses are pre-configured and not "open" scripted (and don't accept input on the client-side).
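The role model above is easiest to reason about as a mapping from role to permitted actions. The sketch below is my own simplification for illustration; the permission names are hypothetical and do not reflect SEM's internal API.

```python
# Illustrative model of the five SEM console roles described above.
# The permission flags are invented for this sketch, not SEM internals.
ROLE_PERMISSIONS = {
    "administrator": {"console", "view", "modify", "reports", "notify"},
    "auditor":       {"console", "view", "reports", "modify_own_filters"},
    "monitor":       {"console", "view"},   # filters must be provided to them
    "contact":       {"notify"},            # external notification only
    "reports":       {"reports"},           # no real-time console access
}

def can(role: str, action: str) -> bool:
    """Check whether a role grants a given action; unknown roles get nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert can("administrator", "modify")
assert not can("monitor", "modify")      # Monitor can view but not change
assert not can("contact", "console")     # Contact never reaches the console
```

The practical takeaway: pick the least-privileged role that covers what each user actually needs, and reserve Administrator for the few who configure the system.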


Data Storage Security


The crux of securing the SEM appliance is ensuring that the data cannot be altered. To that end, the following measures have been applied:


  • Data storage is encrypted and hashed – should access to the appliance be breached, data can’t be tampered with and served back to SEM as if it were unmodified.
  • All access via SEM’s tools to data storage is read-only, ensuring that data cannot be altered regardless of the role assigned to the user.
  • Data added to the SEM appliance is insert-only, only removed to make room for new data and never “updated” or edited (except accumulated metadata).
  • Access to SEM reports can be further restricted by IP address and leverage certificate-based TLS communication.
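The "encrypted and hashed" point is worth unpacking: hashing stored records in a chain makes any edit detectable, because each record's digest covers the one before it. SEM's actual scheme is internal to the product, so the following is only a generic sketch of the tamper-evidence idea.

```python
import hashlib

# Generic hash-chain sketch (not SEM's actual implementation): each
# record's SHA-256 digest covers the previous digest, so editing any
# stored record changes every digest from that point onward.
def chain(records):
    digests, prev = [], b""
    for rec in records:
        h = hashlib.sha256(prev + rec.encode()).hexdigest()
        digests.append(h)
        prev = h.encode()
    return digests

original = ["login alice", "logout alice", "login bob"]
baseline = chain(original)

tampered = list(original)
tampered[1] = "logout mallory"               # attacker edits a middle record
assert chain(original) == baseline           # untouched data verifies cleanly
assert chain(tampered)[1:] != baseline[1:]   # every later digest now differs
```

This is why "hashed" matters alongside "encrypted": encryption hides the data, but the chained digests are what let the appliance prove the data hasn't been quietly rewritten.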


Log Collection Security


As logs are collected, chain of custody is ensured – again, to prevent as much tampering as possible.


  • Logs are collected on the agent as close to real-time as possible, ensuring data lives on disk a minimum amount of time.
  • Windows Event Log data is collected using the Event Log API, not relying on data to be written to disk to be collected.
  • Where appropriate, LEM tools avoid collecting personally identifiable information. SQL Auditor, for example, does not include full queries and responses, which could have personal data in either the statement or the response.
  • Logs collected using the Security Event Manager agent are protected in transit with FIPS-approved TLS/SSL encryption algorithms.
  • When communication is interrupted, data is buffered to disk in a binary format, not the original modifiable log content.
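For a sense of what "FIPS-approved TLS/SSL encryption algorithms" implies on the wire, here is a generic Python sketch of a strict TLS client configuration. This is purely illustrative; the SEM agent handles its own transport internally and none of this code is part of the product.

```python
import ssl

# Generic hardened TLS client setup (illustrative only, not the SEM agent):
# modern protocol versions only, with mandatory server certificate checks.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy SSL/TLS
context.verify_mode = ssl.CERT_REQUIRED            # require a valid server cert
context.check_hostname = True                      # cert must match the host
```

A socket wrapped with this context refuses downgraded protocols and unverified certificates, which is the same posture the agent's log shipping relies on.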




Security is not limited to the data that sits on the Virtual Appliance. In the event archives are used and need to be restored, their integrity is also maintained.


  • Database and log archives are encrypted and hashed to prevent tampering


Internal Auditing


Detailed auditing of all appliance activity is enabled by default, and can be leveraged in both alerts and reports. This includes:


  • Change Auditing - All changes to rules, reports, filters, searches, widgets and other internal elements.
  • Access Audit - Successful and Failed attempts to access the web and reporting console.
  • Rule Activity – Any rule that fires creates a full audit trail of the actions taken.
  • Report Activity – Any time a report is generated automatically or manually.
  • Search – All search activity is recorded and includes the source IP and username.
  • Built-in audit reports that can be scheduled and reviewed.


SolarWinds Security Processes


Our internal security practices are leveraged with every product release.


  • SEM is Common Criteria certified, which includes both product and processes
  • Vulnerability scans are run during the product development cycle, and reported issues are addressed
  • Time is allotted during each release cycle to include security updates, patches, and fixes
  • Internal response plans to assess critical vulnerabilities (like Shellshock), reported external vulnerabilities, and reported customer vulnerabilities via your own internal scans



If you’ve got any questions about how SEM security is maintained, or any features we might have missed, let us know – another part of security is recognizing that our processes must be constantly improved.

First and foremost, I'm excited to announce the availability of Web Performance Monitor (WPM) 2.2!


With this newest release, the WPM team has introduced several new features, including:


  • AppStack Integration
  • Web-based Alerting
  • and Step Dependencies


To learn more about each of these features, head over to my WPM 2.2 Beta post. Rather than cover these features individually, I want to give a bit of color as to the power of the three in aggregate.



Using WPM to Define a Web App Workflow

When thinking about the AppStack, a WPM transaction recording is best thought of as a way of defining a web-based application workflow. Defining this workflow allows you to add great context to your AppStack environment. As in previous versions, WPM permits you to capture a series of steps you take within a web app. But now, when you complement this with WPM's newly introduced AppStack support, you are able to go one step further - tying the steps in your WPM transaction to the various components in the AppStack on which it depends.


The easiest way to demonstrate the capability of WPM 2.2 in identifying, forecasting, alerting on, and reporting on issues is through an example, and for that, why not use a web app with which we're all familiar: the SolarWinds Orion Console. To make it easy, I'll use an example from our wonderful online demo so you can click through. You'll find the example here.


Creating the WPM Transaction

I'm not going to step through how to record the transaction, but as mentioned previously, we built a transaction in the WPM demo to define a workflow that simulates end users visiting the Orion Console from our various office locations. The screenshot below shows the various steps in the recording as well as their statuses (we'll touch on those later):

orion transaction steps.png

The transaction workflow is as you would expect - log in to Orion from our main Austin office, log out, and do the same from our offices in Cork and Tokyo. Now that you have the transaction and its various steps recorded, you can set your dependencies and really unleash the power of the AppStack.


Setting the AppStack Dependencies

WPM 2.2 allows you to set up both transaction and step dependencies easily. The result is a set of dependencies in your environment that are tied to the successful execution of our Orion Console transaction - and thus our Orion products more broadly. Here is a screenshot of a transaction-level application dependency:

orion transaction dependencies 2.png

The screenshot above is from the main transaction details page. It shows you that the entire transaction has an application dependency on an MSSQL instance that is sitting on our orion-main server. Now we'll set a few of our newly introduced step-level dependencies, which can be applied both to nodes and applications. To do this, go to the Step Details page for any transaction step and click the Edit option, as seen in the screenshot below.

step details.png

From there, scroll to the bottom, expand the "Set individual dependencies for steps" list, and add your step dependencies through the Edit menu option you see below.

setting step dependencies.png

Once you've set the step dependencies you need, go back to the Step Details page and you will see all the dependencies which you've set.

orion step dependencies.png

The screenshot above is from the "Log in to Additional Web1 - Cork" step details page. You see that we have set both node dependencies on this step (a couple of routers and a location specific Orion server, orion-web-cork) as well as one application dependency, our Cork IIS. We could have set any node or application dependency we deemed critical to the execution of this step - the list of the dependencies you can set is only limited by what your SolarWinds products can see.


So we've recorded our transaction, and we've set our dependencies. Time for the AppStack.


WPM + AppStack = Making Sense of it All

We did all the heavy lifting. Now it's time for the AppStack to do the rest and see where it gets us. The good news is that it gets us pretty far. Opening up our AppStack view and turning our Orion Console transaction on in the AppStack Spotlight view shows us this:

appstack orion.png

A few simple steps and we now have a top-to-bottom view of what our Orion Console transaction looks like in the context of our broader environment. The applications that matter are put into focus, as are the various servers and volumes. A more complex transaction would mean that more levels of your AppStack would come into focus.


So, How Does This Help Me?

The value that the previous view in the AppStack provides is really limited only by your imagination. Here are a few straightforward examples.

1) If you get a ticket that says your Orion Console is unreachable (which in our demo it seems to be), you can quickly look at the transaction in the focused view above and follow the statuses down the AppStack to find the root cause - you may have noticed in earlier screenshots that our orion-web-cork server is down, and thus the status of that leg of the transaction is unreachable. This is clearly represented by the red dot at the server level. Now you know your issue lies with that server and you know where to start.

2) If you would prefer using WPM's full power to get ahead of such issues rather than fight the resulting fires, you can set an alert on a transaction step and be notified the instant a step goes down, letting you pinpoint which dependent resource may be at fault. Here's what setting up that alert would look like in our new web-based alerting front end. You'll see that it is set to alert you anytime a transaction step is in any state but up (the unreachable status in this example would have triggered this alert):

tranaction step alert.png

3) You know that you have a non-responsive app (you see it right in the AppStack) and are curious to see which web apps and transaction workflows may be impacted. You click and see that this Orion Console transaction has that application on its path. Immediately you know that users of the Orion Console will be impacted. While you're fixing the root cause of the issue (our orion-web-cork server), you can have WPM's reporting functionality build a report on the Orion Console transaction to show the impact (availability, SLA, etc.).

4) Mapping of these transaction dependencies allows all those using WPM and the AppStack to understand what parts of your environment are interrelated and dependent on one another. Thus, after initial setup, this information is clearly and intuitively presented even to those that are new to your environment. And if someone in your organization is trying to take down a server for maintenance, they can look at these dependencies and know exactly what will and won't be affected.


A Quick Review

So what did we accomplish? In a few easy steps, we enabled the AppStack to provide quick and easily digestible context about our environment that makes identifying, solving, forecasting, alerting on, and reporting on the root cause of an issue simple.


Above all, this is the power of the new WPM 2.2 and its features. And this simple example is really just the tip of the iceberg. We can't wait to see what our users come up with.


Leave some comments on your thoughts about this and where you think the power of WPM + AppStack could help you.


And of course, head over to your Customer Portal to get started with the new WPM 2.2.
