

SolarWinds has a long history of being easy to try and easy to buy. Those of you who own two or more Orion Platform product modules may have realized, usually when planning your next upgrade, that it's not necessarily easy to know which product module versions are compatible with others. While figuring this out may not be terribly difficult when you own only two Orion product modules, the complexity rises significantly with each additional product module you purchase. Imagine needing to figure out which versions of your other 13 Orion Platform product and integration modules are compatible with Server & Application Monitor 6.7. Suddenly, what was previously a rather trivial task has become a daunting, and sometimes overwhelming, challenge.

 

For that reason and many more, we have some significant changes coming your way to end the madness. First though, here’s a brief history of where we've been, how we got here, and where the future will take us.

 

 

 

The Matrix

 

For many years, we attempted to make the process of deciphering compatibility between Orion Platform product modules easier through a compatibility matrix maintained within our documentation. The matrix itself was a fairly complex Excel spreadsheet that oftentimes felt like you needed a secret decoder ring to help interpret the results. For what you might imagine should be a relatively simple task, the compatibility matrix was anything but.

 

 

Upgrade Advisor

 

As the number of available Orion Platform product modules increased, we eventually realized the Compatibility Matrix had become too complex for customers to interpret, and too unwieldy for us to maintain. Thus came our next valiant attempt at improving the situation for determining multi-product compatibility, the Upgrade Advisor. The Upgrade Advisor represented a monumental leap forward compared to the Compatibility Matrix. In fact, many still rely upon it today.

 

 

 

The process is relatively straightforward. Enter in the Orion Platform product modules you currently have installed and their respective version numbers. Next, enter the version number of the product module to which you'd like to upgrade. The Upgrade Advisor will then map out the rest of the product module version numbers compatible with the newer version.

 

Despite its good intentions, the Upgrade Advisor still suffered from the same fundamental flaw that led to the demise of the Compatibility Matrix: it required users to be both aware of its existence and proactive about their upgrade planning. When the recommendations outlined in the Compatibility Matrix or Upgrade Advisor weren't followed, bizarre and unexplainable issues would occur due to incompatible module versions.

 

 

Next Generation Installer

 

The latest attempt at unraveling this quagmire has been to place the information available in the Upgrade Advisor into the installer itself. Anytime before or at the time of upgrade, simply running the installer provides a list of all Orion Platform product modules currently installed and their respective versions. Next to it is the list of versions for other product modules compatible with the module version downloaded.

 


 

This method is vastly superior to both the Compatibility Matrix and Upgrade Advisor, as it requires no prior knowledge of the existence of either, nor does it require any manual steps to determine module compatibility. The installer simply handles it all for you. No muss, no fuss.

 

While the next-generation installer took all the complexity out of the equation, it introduced a fair amount of confusion. For the planners among you, it seemed counterintuitive to run an installer days, weeks, or even months ahead of a scheduled upgrade just to determine the upgrade path. For others, executing the installer on a production environment prior to the scheduled change window sounded like a dangerous proposition, on the assumption that merely running the installer might start the upgrade process or shut down Orion services without consent or confirmation. As a result, some still found greater comfort in the Upgrade Advisor this new installer was intended to replace.

 

Does this really need to be so complicated?

 

A lot of time, effort, and different technologies have been used throughout the years in what seems to have been a vain attempt to reduce confusion and make it easier for users to identify compatibility between different product module versions. The problem, however, was never how we attempted to address the issue (though admittedly, some methods worked better than others). The ultimate solution is to change how we think about the problem in the first place: the version number itself.

 

 

Ushering in a new tomorrow

 

It's rather arbitrary that 6.9 is the Server & Application Monitor (SAM) version compatible with Network Performance Monitor (NPM) 12.5. Rather than require users to have a Ph.D. in SolarWinds Orion Platform product module versioning, wouldn't it be easier if product modules compatible with each other all shared the same version number? Then it would be downright simple to identify that IP Address Manager vX.XX wasn't compatible with User Device Tracker vY.YY or Network Configuration Manager vZ.ZZ.

 

Simplifying and consolidating our product module versioning is precisely what we aim to do in our next Orion Platform module releases. As you can imagine, this might come as a big surprise to many, which is why we've decided to notify the community in advance.

 

New releases for every Orion Platform product module going forward will now use the same versioning as the Orion Platform itself. This means the next release of Network Performance Monitor will not be v12.6 or v13.0, nor will any of the other Orion Platform product modules bear a resemblance to their current versioning. Instead, Orion Platform product module versions will be the four-digit year in which they were released, followed by the quarter of release. If there is a Service Release for a given module, it will appear in the third position following the quarter.

 

 

[YYYY.Q.SR]

 

If this all seems a bit confusing, fret not. You're probably already familiar with this versioning, as it's been the basis of the Orion Platform version for nearly a decade. This is also the same versioning used for Network Automation Manager.

 

 

What does this mean for my product modules?

 

To be completely honest, really nothing at all, aside from a departure from those products’ previous versioning schemes. It also means versioning is much more transparent and easier to relate to. For example, if you needed to know what version of Storage Resource Monitor (SRM) was released in October 2025, it’s now very easy: Storage Resource Monitor v2025.4. If you also needed to know what version of Server Configuration Manager (SCM) was compatible with SRM v2025.4, that too is now easy: SCM v2025.4, of course!

 

 

How will this affect previous releases?

 

In short, it doesn't. Currently released product module versioning will remain unchanged, though you can expect a fairly significant jump in version numbers the next time you upgrade.

 

 

I still have unanswered questions

 

You undoubtedly have a million questions related to this change racing through your brain right now. If not, perhaps a fantastic question will pop to mind later, after you've pondered this post for a while. In either scenario, post your questions related to this change in the comments section below.

When we launched SolarWinds® Service Desk (SWSD), I couldn’t wait to get my hands on it. I was very excited to see a new solution to handle incident management, asset management, an internal knowledge base, problem management, and an employee self-service portal. There’s so much to this new product to unpack, I needed to figure out where to start. Thankfully, there was already an excellent document introducing everyone to the solution I could read.

 

For the past three years, I’ve been getting deeper and deeper into leveraging various APIs to do my bidding. This lets me go nuts on my keyboard and automate out as many repeatable functions as possible. No, I’m not breaking up with my mouse. We had a healthy discussion, my mouse and I, and he’s fine with the situation. Really. What was I talking about? Oh yeah, APIs!

 

One of the things I absolutely love about working with APIs (and scripting languages as well) is there’s no one way to do something. If you can think it, you can probably do it. Most RESTful APIs allow you to work with whatever language you prefer. You can use the curl executable, Perl (tagging Leon here), PowerShell, or nearly anything else. PowerShell is my personal preference, so I’m doing my scripting with it. But more on those details later.

 

You’ve seen me write and talk about using the SolarWinds® Orion® API to help automate your monitoring infrastructure. I’ve even gotten some of my friends in on the trend. But, the launch of SWSD opened a brand-new API for me to explore. I started where I always do with something new: by reading the manual. SolarWinds Service Desk has extensive documentation about using the API. There’s so much there for me to explore, but I had to limit myself. In trying to pick a place to start, I thought about my past.

 

SolarWinds has always been in the business of helping IT professionals do their jobs better. Many of us technology professionals, like me, started our careers working on a help desk. Based on everything SWSD offers, I limited myself to the Incidents Management area. Then I just had to think about how I would leverage this type of solution in some of my previous roles.

 

As a help desk supervisor who went on to be a monitoring engineer, I thought about how great it would be to get tickets automatically created based on an alert. I could talk all day about what qualifies for an alert (I have) and what’s best to include in an alert message (that, too), but the biggest thing to strive towards is some level of tracking. The most common tracking method for alerts has been email notifications. This is the default for most people, and 90% of the time it’s fine. But what about the times when email is the problem? You need another way to get your incidents reported and tracked.

 

Like scripting languages, the Orion alerting engine allows for multiple ways to handle alert logic—not just for the trigger conditions, but also for the actions when the trigger occurs. One of those mechanisms is to execute a program. On the surface, this may sound boring, but not to me and other keyboard junkies. This is a great way to leverage some scripting and the SWSD API to do the work for us.

 

First things first, we need to decide how to handle the calls to the API. The examples provided in the API documentation use the curl program to do the work, but I’m not in love with the insanely long command lines required to get it to work. But since this is a RESTful API, I should be able to use my preferred scripting language, PowerShell. (I told you I’d get back to it, didn’t I?)

 

Let’s assemble what you need to get started. First you need your authentication. If you’re an administrator in SWSD, you can go to Setup, Users & Access, and then select yourself (or a service account you want to use). Inside the profile, you’ll find the JSON web token.

 

 

This is how you authenticate with the SWSD API. The web token is a single line of text. In the web display, it’s been wrapped for visual convenience. Copy that line of text and stash it somewhere safe. This is basically the API version of “you.” Protect it as you would any other credentials. In a production system, I’d have it set up to use the service account for my Orion installation.

 

API Test

For the API call, we need to send over some header information. Specifically, we need to send over the authorization, the version of the API we’ll be using, and the content type we’ll be sending. I found these details in the API documentation for Incidents. To start things off, I did a quick test to see if I could enumerate all the existing incidents.

 

I’m trying to get more comfortable with JSON, so I’m using it instead of XML. In PowerShell, the HTTP header construction looks like this:

$JsonWebToken = "Your token goes here. You don't get to see mine."

 

$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken"
              "Accept"                   = "application/vnd.samanage.v2.1+json"
              "Content-Type"             = "application/json" }

 

Basically, we’re saying (in order): this is me (auth), I’d like to use this version of the API with JSON (accept), and I’m sending over JSON as the request itself (content-type).

 

This block of headers is your pass to speak with the API. I'm testing this from the United States, so I'll use the base URI https://api.samanage.com/. There's a separate one specifically for EU people (https://apieu.samanage.com). If you're in the EU, that's the one you should be using.

To list out the incidents, we make an HTTP GET call to the “incidents” URI as specified in the documentation. I saved this as a variable so I wouldn’t have copy/paste failures later.

 

$URI = "https://api.samanage.com/incidents.json"

 

Then to get the list of all incidents, I can just invoke the REST method.

Invoke-RestMethod -Method Get -Headers $Headers -Uri $URI
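
If you want to poke at what comes back, you can also capture the response into a variable and display a few fields. The property names below (number, name, state) are assumptions based on typical incident records, so adjust them to whatever your tenant actually returns.

$Incidents = Invoke-RestMethod -Method Get -Headers $Headers -Uri $URI
# Display a few fields per incident; check one record with $Incidents[0] if the names differ
$Incidents | Select-Object number, name, state | Format-Table -AutoSize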

 

 

Excellent! I can talk to the API and get some information back. This means I’m authenticating correctly and getting the list of incidents back. Time to move on.

Creating a Test Incident

To create an incident, I only technically need three fields: name (of the incident), the requester, and the title. I’ve seen this called the payload, the body, or the contents. To stay on the same page with the PowerShell parameters, I’ll refer to it as the body. Using it, I built a very small JSON document to see if this would work using the script I’ve started developing. The beauty of it is I can repeatedly use the header I already built. I’ve put the JSON in a string format surrounded by @” and “@. In PowerShell this is called a here-string and there are many things you can do with it.

$TestBody = @"
{
  "incident": {
    "name":      "Testing Incident - Safe to Close with no notes",
    "priority":  "Critical",
    "requester": { "email" : "kevin.sparenberg@kmsigma.com" }
  }
}
"@

 

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $TestBody

 

When I run it, I get back all kinds of information about the incident I just created.

But to be really, doubly sure, we should check the web console.

There it is. I can create an incident with my script.

 

So, let’s build this into an actual alert script to trigger.

 

Side note: When I “resolved” this ticket, I got an email asking if I was happy with my support. Just one more great feature of an incident management solution.

Building the new SolarWinds Service Desk Script

For my alert, I’m going with a scenario where email is probably not the best alert avenue: your email server is having a problem. This is a classic downstream failure. We could create an email alert, but since the email server is the source, the technician would never get the message.

 

 

The above logic looks only for nodes with names containing "EXMBX" (Exchange Mailbox servers) whose status is not Up (like Down, Critical, or Warning).

 

Now that we have the alert trigger, we need to create the action of running a script.

 

For a script to be executed by the Orion alerting engine, it should “live” on the Orion server. Personally, I put them all in a “Scripts” folder in the root of the C: drive. Therefore, the full path to my script is “C:\Scripts\New-SwsdIncident.ps1”

 

I also need to tweak the script slightly to allow for command line parameters (how I send the node and alert details). If I don’t do this, then the exact same payload will be sent every time this alert triggers. For this example, I’m just sticking with four parameters I want to pass. If you want more, feel free to tweak them as you see fit.

 

Within a PowerShell file, you access command line parameters via the $args variable, with the first argument being $args[0], the next being $args[1], and so on. Using those parameters, I know I want the name of the alert, the details on the alert, the IP of the node, and the name of the node. Here’s what my script looks like:
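
Here's a minimal sketch of what such a script might look like, with the argument order matching the trigger action command line shown a little further below. The requester address and the extra description field are placeholders and assumptions to adjust for your environment.

# New-SwsdIncident.ps1 -- illustrative sketch
# Argument order matches the trigger action command line shown below.
$StatusDescription = $args[0]   # ${N=SwisEntity;M=StatusDescription}
$NodeCaption       = $args[1]   # ${N=SwisEntity;M=Caption}
$NodeIPAddress     = $args[2]   # ${N=SwisEntity;M=IP_Address}
$AlertName         = $args[3]   # ${N=Alerting;M=AlertName}

$JsonWebToken = "Your token goes here. You still don't get to see mine."
$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken"
              "Accept"                   = "application/vnd.samanage.v2.1+json"
              "Content-Type"             = "application/json" }
$URI = "https://api.samanage.com/incidents.json"

# Build the incident body from the alert details.
# The requester email is a placeholder; description is one of the extra fields mentioned below.
$Body = @"
{
  "incident": {
    "name":        "$AlertName : $NodeCaption",
    "priority":    "Critical",
    "requester":   { "email": "orion.service@example.com" },
    "description": "Node $NodeCaption ($NodeIPAddress) reported: $StatusDescription"
  }
}
"@

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $Body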

You can see I added a few more fields to my JSON body so a case like this can be routed more easily. What did I forget? Whoops, this should have said it was a test incident. Not quite ready for production, but let's move on.

When we build the alert, we set one of the trigger actions as execution of an external program and give it an easily recognizable name.

 

 

The full command line I put here is:

 

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Scripts\New-SwsdIncident.ps1" "${N=SwisEntity;M=StatusDescription}" "${N=SwisEntity;M=Caption}" "${N=SwisEntity;M=IP_Address}" "${N=Alerting;M=AlertName}"

 

This is the path and executable for PowerShell, the script file we want to execute, and the parameters (order is important) we want to pass to the script. I’ve also surrounded the parameters with double quotes because they *may* contain spaces. In this case, better safe than sorry.

 

Then I just need to sit back and wait for an alert matching my description to trigger. There's one now!

 

 

Just like every alert I write, I’ve already found ways to improve it. Yes, I know this is a very rudimentary example, but it’s a great introduction to the integrations possible. I’ll need to tweak this alert a little bit before I’d consider it ready for prime time, but it’s been a great learning experience. I hope you learned a little bit along with me.

 

So, I ask you all: where should I go next?

Status is arguably one of the most important aspects of any monitoring solution. It's a key component for visually notifying you that something is amiss in your environment, as well as being an important aid in the troubleshooting process. When used properly, status is also the engine that powers alerting, making it an absolutely essential ingredient for both proactive and reactive notifications aimed at ensuring your entire IT environment runs smoothly.

 

Orion® Node Status, in particular, has long been somewhat unique when compared to other entities in the Orion Platform. Most other entities have a fairly simple, straightforward, and easy-to-understand hierarchy of status based upon severity. These include things like Up, Warning, Critical, and Down, but can also include other statuses which denote an absence of a state, such as Unknown, Unmanaged, etc. By comparison, a node managed in the Orion Platform today can have any of twenty-two unique statuses. Some of these statuses can, to the uninitiated, appear at best contradictory, and at worst, downright confusing.

 

This is the result of separating information about the node itself from its associated child objects (like interfaces and applications) into multiple colored balls. The larger colored ball represents the reachability of the node, usually via ICMP, while the much smaller colored ball in the bottom right represents the worst state of any of the node's child objects.

 

 

Primary Node Status

Nodes With Child Status

 

It would be fair to say that this is neither obvious nor intuitive, so in this release, we've sought to radically improve how Node status is calculated and represented within the Orion Platform.

 

 

Node Thresholds

 

The first thing people usually notice after adding a few nodes to the Orion Platform is that node thresholds for things like CPU and memory utilization appear to have no effect on the overall status of the node. They'd be right. Those thresholds can be used to define your alerts, but node status itself has historically only represented the reachability of the node. That, unfortunately, complicates troubleshooting by obfuscating legitimate issues and adding unnecessary confusion. For example, in the image below, I'm often asked why the state of the node is "green" when the CPU Load and Memory utilization are obviously critical. A very fair and legitimate question.

 

 

 

With the release of Orion Platform 2019.2 comes the introduction of Enhanced Node Status. With this new Enhanced Node Status, thresholds defined either globally or on an individual node itself can now impact the overall status of the node. For example, if the memory utilization on a node is at 99% and your “Critical” threshold for that node is “Greater than 90%,” the node status will now reflect the appropriate “Critical” status. This should allow you to spot issues quickly without having to hunt for them in mouse hovers or drilling into Node Details views.

 

CPU Load

Memory Utilization

Response Time

Packet Loss
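
If you'd rather check for these conditions from a console than the web interface, a quick query through the SolarWinds Information Service can list any nodes that are no longer Up along with the metrics behind their status. This is a rough sketch using the SwisPowerShell module from the Orion SDK; the Status <> 1 (Up) filter and the column names are assumptions worth verifying against your own instance.

Import-Module SwisPowerShell
$swis = Connect-Swis -Hostname "your-orion-server" -UserName "admin" -Password "YourPassword"

# List nodes whose status is anything other than Up, with the metrics driving it.
Get-SwisData $swis @"
SELECT Caption, StatusDescription, CpuLoad, PercentMemoryUsed, ResponseTime, PercentLoss
FROM Orion.Nodes
WHERE Status <> 1
ORDER BY Caption
"@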

 

Sustained Thresholds

 

Borrowing heavily from Server & Application Monitor, Orion Platform 2019.2 now includes support for sustained node threshold conditions. Being notified of every little thing that goes bump in the night can desensitize you to your alerts, potentially causing you to miss important service-impacting events. For alerts to be valuable, they should be actionable. A CPU spiking to 100% for a single poll probably doesn't mean you need to jump out of bed in the middle of the night and VPN into the office to fix something. After all, it's not that unusual for a CPU to spike temporarily, or for latency to vary from time to time over a transatlantic site-to-site VPN tunnel.

 

What you probably want to be notified of instead is if that CPU utilization remains higher than 80% for more than five consecutive polls, or if the latency across that site-to-site VPN tunnel remains greater than 300ms for 8 out of 10 polls. Those are likely more indicative of a legitimate issue occurring in the environment that requires some form of intervention to correct.

 

 

Sustained Thresholds can be applied to any node's existing CPU Load, Memory Usage, Response Time, or Percent Packet Loss thresholds. You can also mix and match “single poll,” “X consecutive polls,” and “X out of Y polls” between warning and critical thresholds for the same metric for even greater flexibility. Sustained Thresholds can even be used in combination with Dynamic Baselines to eliminate nuisance alerts and further reduce alert fatigue, allowing you to focus only on those alerts which truly matter.
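
To make the "X consecutive polls" and "X out of Y polls" behavior concrete, here's a small, self-contained sketch of that evaluation logic in PowerShell. It's only an illustration of the concept over a rolling window of poll values, not how the platform implements it internally.

# Illustrative only: does a metric breach a threshold for X of the last Y polls?
function Test-SustainedThreshold {
    param(
        [double[]] $RecentPolls,    # the most recent Y poll values, newest last
        [double]   $Threshold,      # e.g., 80 (% CPU) or 300 (ms latency)
        [int]      $RequiredCount   # X polls in the window that must breach
    )
    $breaches = @($RecentPolls | Where-Object { $_ -gt $Threshold }).Count
    return ($breaches -ge $RequiredCount)
}

# CPU above 80% for 5 consecutive polls (window of 5, all 5 must breach)
Test-SustainedThreshold -RecentPolls 83,91,88,95,86 -Threshold 80 -RequiredCount 5   # True

# Latency above 300ms for 8 of the last 10 polls
Test-SustainedThreshold -RecentPolls 310,295,320,340,305,330,290,315,325,360 -Threshold 300 -RequiredCount 8   # True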

 

Null Thresholds

 

A point of contention for some users has been the requirement that all Node thresholds contain some value. There are nodes where you still want to monitor, report, and trend on those performance metrics but not necessarily be alerted on them, such as staging environments, machines running in a lab, decommissioned servers, etc.

 

Historically, there has been no way to say, "I don't care about thresholds on this node" or "I don't care about this particular metric." At best, you could set the warning and critical thresholds as high as possible in the hopes of getting close to eliminating alerts for metrics on those nodes you don't necessarily care about. Alternatively, some customers update and maintain their alert definitions to exclude metrics on those nodes they don't want to be alerted on. A fairly messy, but effective, solution, and one that is no longer necessary.

 

With the introduction of Enhanced Status in Orion Platform 2019.2, any Node threshold can now be disabled simply by editing the node and unchecking the box next to the warning or critical thresholds of the metric you're not interested in. Don't want a node to ever go into a “Critical” state as a result of high response time to keep the boss off your back, but still want to receive a warning when things are really bad? No worries, just disable the “Critical” threshold, leave the “Warning” threshold enabled and adjust the value to what constitutes “really bad” for your environment.

 

 

If so inclined, you can even disable these individual warning and critical thresholds globally from [Settings > All Settings > Orion Thresholds] for each individual node metric.

 

 

Child Objects

 

In this new world of Enhanced Status, no longer are there confusing multi-status icons, like “up-down” or “up warning.” Child objects can now influence the overall node status itself by rolling up status in a manner similar to Groups or how Server & Application Monitor rolls-up status of the individual component monitors that make up an Application. This provides a simple, consolidated status for the node and its related child entities. Those child objects can be things such as Interfaces, Hardware Health, and Applications monitored on the node, to name only a few.

 

Similar to Groups, we wanted to provide users with the ability to control how node status rollup was calculated on an individual, per-node basis for ultimate flexibility. When editing the properties of a single node or multiple nodes, you’ll now find a new option for “Status roll-up mode” where you can select from Best, Mixed, or Worst.

 

 

 

By altering how node status is calculated, you control how child objects influence the overall status of the node.

 

Best | Mixed | Worst

 

Best status, as one might guess, always reflects the best status across all entities contributing to the calculation. Setting the Node to “Best” status is essentially the equivalent of how status was calculated in previous releases, sans the tiny child status indicator in the bottom right corner of the status icon.

 

Worst status, you guessed it, represents the status of the object in the worst state. This can be especially useful for servers, where application status may be the single most important thing to represent for that node. For example, I'm monitoring my Domain Controller with Server & Application Monitor's new AppInsight for Active Directory. If Active Directory is "Critical," then I want the node status for that Domain Controller to reflect a "Critical" state.

 

Mixed status is essentially a blend of best and worst and is the default node status calculation. The following table provides several examples of how Mixed status is calculated.

 

Polled Status | Child 1 Status | Child 2 Status | Final Node Status
DOWN | ANY | ANY | DOWN
UP | UP | UP | UP
UP or WARNING | UP | WARNING | WARNING
UP or WARNING | UP | CRITICAL | CRITICAL
UP or WARNING | UP | DOWN | WARNING
UP or WARNING | UP | UNREACHABLE | WARNING
UP | UP | UNKNOWN | UP
WARNING | UP | UNKNOWN | WARNING
UP | UP | SHUTDOWN | UP
UP or WARNING | DOWN | WARNING | WARNING
UP or WARNING | DOWN | CRITICAL | CRITICAL
UP or WARNING | DOWN | UNKNOWN | WARNING
UP or WARNING | DOWN | DOWN | WARNING
UP | UNKNOWN | UNKNOWN | UP
WARNING | UNKNOWN | UNKNOWN | WARNING
UNMANAGED | ANY | ANY | UNMANAGED
UNREACHABLE | ANY | ANY | UNREACHABLE
EXTERNAL | ANY | ANY | Group Status
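
For those who prefer reading logic to reading tables, here's one way to express the Mixed roll-up rules above in PowerShell. This is just an interpretation of the table for illustration (External nodes, which take their status from the applications monitored on them, are left out), not the platform's actual implementation.

# Illustrative interpretation of the Mixed roll-up table above.
function Get-MixedNodeStatus {
    param([string] $PolledStatus, [string[]] $ChildStatuses)

    # These polled statuses pass straight through regardless of children.
    if ($PolledStatus -in 'DOWN','UNMANAGED','UNREACHABLE') { return $PolledStatus }

    $severity = @{ 'UP' = 0; 'WARNING' = 1; 'CRITICAL' = 2 }
    $result = $severity[$PolledStatus]

    foreach ($child in $ChildStatuses) {
        # Critical children make the node Critical; Warning, Down, and Unreachable
        # children degrade it to Warning; Up, Unknown, and Shutdown have no effect.
        $contribution = switch ($child) {
            'CRITICAL'    { 2 }
            'WARNING'     { 1 }
            'DOWN'        { 1 }
            'UNREACHABLE' { 1 }
            default       { 0 }
        }
        if ($contribution -gt $result) { $result = $contribution }
    }
    ($severity.GetEnumerator() | Where-Object { $_.Value -eq $result }).Key
}

Get-MixedNodeStatus -PolledStatus 'UP'      -ChildStatuses 'UP','DOWN'       # WARNING
Get-MixedNodeStatus -PolledStatus 'WARNING' -ChildStatuses 'UP','CRITICAL'   # CRITICAL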

 

In case you overlooked it in the table above, yes, External Nodes can now reflect an appropriate status based upon applications monitored on those nodes.

 

Child Object Contributors

 

Located under [Settings > All Settings > Node Child Status Participation], you'll find you now have fine-grained control over up to 27 individual child entity types that can contribute to the overall status of your nodes. Don't want Interfaces contributing to the status of your nodes? No problem! Simply click the slider to the "off" position and Interfaces will no longer influence your nodes' status. It's just that easy.

 

Show me the Money!

 

You might be asking yourself: all these knobs, dials, and switches are great, but how exactly are they going to make my life better or simpler? A fair question, and one that no doubt has countless correct answers, but I'll try to point out a few of the most obvious examples.

 

Maps

 

One of the first places you're likely to notice Enhanced Status is in Orion Maps. The example below shows the exact same environment. The first image shows what this environment looked like in the previous release using Classic Status. Notice the absence of any obvious visual cues denoting issues in the environment. The next image to the right is of the very same environment, taken at the exact same time as the image on the left. The only notable difference is that this image was taken from a system running Orion Platform 2019.2 with Enhanced Node Status.

 

In both examples, there are the exact same issues going on in the environment, but these issues were obfuscated in previous releases. This made the troubleshooting process less intuitive and unnecessarily time-consuming. With Enhanced Status, it's now abundantly clear where the issues lie. And with the topology and relationship information from Orion Maps, it's now easier to assess the potential impact those issues are having on the rest of the environment.

 

Classic Status

Enhanced Status

 

Groups

 

Groups in the Orion Platform are incredibly powerful, but historically, in order for them to reflect an appropriate status or calculate availability accurately, you were required to add all relevant objects to that group. This means you not only needed to add the nodes that make up the group, but also all child objects associated with those nodes, such as interfaces, applications, etc.

 

Even in the smallest of environments, this was a nearly impossible feat to manage manually. Given the nature of all the various entity types that could be associated with those nodes, even Dynamic Groups were of little assistance in this regard. Enhanced Status not only radically simplifies group management, but it also empowers users to more easily utilize Dynamic Groups to make group management a completely hands-off experience.

 

The following demonstrates how Enhanced Node Status simplifies overall Group Management in the Orion Platform, reducing the total number of objects you need to manage inside those groups. The screenshot on the left shows a total of eight nodes using Enhanced Status in a group, causing the group to reflect a Critical status. The image to the right shows all the objects required to reflect the same status using Classic Status. As you can see, you would need to add not only the same eight nodes but also their 43 associated child objects, for a total of 51 objects in the group. Yikes!

 

Enhanced Status (8 Objects)

Classic Status (51 Objects)

 

By comparison, the following demonstrates what that group would look like with just the eight nodes included in the group, using both Classic Status and Enhanced Status. Using Classic Status, the group reflects a status of "Up," denoting no issues at all in the group. With Enhanced Status, it's abundantly clear that there are in fact issues, which nodes have issues, and their respective severity. This significantly reduces time to resolution and aids in root cause analysis.

 

Enhanced Status

Classic Status

 

Alerts

 

Possibly the greatest benefit of Enhanced Status is that you need far fewer alert definitions to be notified of the exact same events. Because node thresholds and child objects now influence the status of the node, you no longer need alert definitions for individual node metrics like "Response Time," or related child entities like "Interfaces." In fact, of the alert definitions included out-of-the-box with the Orion Platform, Enhanced Status eliminates the need for at least five, taking you from seven down to a scant two. That's a 71% reduction in the number of alert definitions that need to be managed and maintained.

 

Out-of-the-box Alerts Using Classic Status - x7

Out-of-the-box Alerts Using Enhanced Status - x2

 

Alert Macros

 

I'm sure at this point many of you are probably shouting at your screen, "But wait! Don't I still need all those alert definitions if I want to know why the node is in whatever given state that it's in when the alert is sent? I mean, getting an alert notification telling me the node is “Critical” is cool and all, but I sorta need to know why."

 

We would be totally remiss if, in improving Node status, we didn't also improve the level of detail we include in alerts for nodes. With the introduction of Enhanced Status come two new alert macros that can be used in your alert actions, such as email notifications, which list all items contributing to the status of that node. Those two alert macros are listed below.

 

The first is intended to be used with simple text-only notification mechanisms, such as SMS, Syslog, or SNMP Traps. The second macro outputs in HTML format with hyperlinks to each child object's respective details page. This macro is ideally suited for email or any other alerting mechanism that can properly interpret HTML.

 

  • ${N=SwisEntity;M=NodeStatusRootCause}
  • ${N=SwisEntity;M=NodeStatusRootCauseWithLinks}

 

The resulting output of the macro provided in the notification includes all relevant information pertaining to the node. This includes any node thresholds that have been crossed as well as a list of all child objects in a degraded state associated with the node, which is all consolidated down into a simple, easily digestible, alert notification that pinpoints exactly where to begin troubleshooting.
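
As a usage example, the HTML variant drops straight into the message body of an email trigger action. Everything around the macros below is just placeholder wording; the root cause macro itself is the one listed above, and the Caption and StatusDescription macros are the same ones used elsewhere in this post.

Node ${N=SwisEntity;M=Caption} is ${N=SwisEntity;M=StatusDescription}.

The following items are contributing to this status:
${N=SwisEntity;M=NodeStatusRootCauseWithLinks}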

 

 

 

 

Enabling Enhanced Status

 

If you're installing any Orion product module for the first time that is running Orion Platform 2019.2 or later, Enhanced Status is already enabled for you by default. No additional steps are required. If you're upgrading from a previous release, however, you will need to enable Enhanced Status manually to appreciate the benefits it provides.

 

Because status is the primary trigger condition for alerts, we did not want customers who are upgrading to be unexpectedly inundated with alert storms because of how they had configured the trigger conditions of their alert definitions. We decided instead to let customers decide for themselves when, or if, to switch over to Enhanced Status.

 

The good news is that this is just a simple radio button located under [Settings > All Settings > Polling Settings].

 

 

Conversely, if you rebuild your Orion server and have a preference for "Classic" status, you can use this same setting to disable "Enhanced" Status mode on new Orion installations and revert to "Classic" status.

 

 

Cautionary Advice

 

If you plan to enable "Enhanced" status in an existing environment after upgrading to Orion Platform 2019.2 or later, it's recommended that you disable alert actions in the Alert Manager before doing so. This should allow you to identify alerts with trigger conditions in their alert definition that may need tweaking, without inadvertently causing a flood of alert notifications or other alert actions to fire. Your coworkers will thank you later.

 

 

Feedback

 

Enhanced status represents a fairly significant, but vitally important, change for the Orion Platform. We sincerely hope you enjoy the additional level of customization and reduced management overhead it provides. As with any new feature, we'd love to get your feedback on these improvements. Will you be switching to Enhanced Status with your next upgrade? If not, why? Be sure to let us know in the comments below!

The Orion® Platform is designed to consolidate monitoring into a single source of truth, taking massive amounts of data and making it easier to identify issues in complex environments. A key component to this is the organization of data. As an example, if I were to present you with the dashboard below, you can see it’s aggregating a ton of information and highlighting issues from multiple modules like Network Performance Monitor (NPM), Server & Application Monitor (SAM), Virtualization Manager (VMAN), and Storage Resource Monitor (SRM). Single pane of glass, right?  However, it’s not interesting, not even a little bit, and most importantly, it’s not easily interpreted. This dashboard doesn't really help me understand the problem or where to focus.

 


 

Simplifying how data is interpreted through better visualizations can provide drastic improvements for understanding problems. Now, if I present you with this view, can you tell me where the problem areas are?

 


 

The Orion Maps team believes visualization of your data can be a powerful tool when put together in a meaningful way. Ensuring critical data is available but presenting it in a clear and concise manner allows you to quickly see the problem and its potential impact. Visualizations help tell the story, and can help members of your organization, or clients, understand the breadth and complexity of what you manage on a day-to-day basis. For those of you unfamiliar with the Orion Maps project to date, you may want to review the following posts. These should help paint the picture, no pun intended, on what we’ve delivered with the previous releases.

 

Orion Platform 2018.2 Improvements - Chapter Two - Intelligent Mapping

Orion Platform 2018.4 Improvements - Intelligent Mapping Enhancements

 

With the release of 2019.2, we’ve incorporated some new enhancements designed to extend the flexibility of the platform and provide some amazing new options for representing your environment and critical services.

 

 

ORION MAPS MENU & MANAGEMENT PAGE

 

As a new entry point to maps, an "Orion Maps" menu is now available under My Dashboards and Home. Selecting this option will transport you to the Map Management page. This will be blank initially, prompting you to create a map.

 

 

It’s important to note here that any user can create a map. If you have access to this menu, you can create maps. However, each of you will only be able to see the maps you created yourself in the list view. The current features on this page will allow you to sort your list by Map Name, Last Updated, and Created Date. There’s also a search bar allowing you to search for maps by name.

 

 

Any Orion Administrator will have an additional function when they access this view. A very helpful tool is available in the upper-right corner allowing you to toggle the view to include all user maps vs. just your own. The main components to this page provide the capabilities to create a new map, edit existing maps, delete maps, or view a map by selecting its name.


 

MAP EDITOR

Let’s begin by creating a new map via the Map Editor. Selecting New Map will open the basic editor for building maps from scratch. You’ll be greeted by an entity library on the left side, which defaults to a paginated list of your nodes. You can click the drop-down to choose from any entity type in Orion Maps. As always, a search bar is also available. The empty canvas will take up most of the view, and a few controls will be noticeable in the bottom-right corner, along with a Save button and More menu in the upper-right side. Building a map from the basic editor is for those of you who know exactly what you want in the map. For now, this is single drag-and-drop functionality, and any relationships or connections identified will automatically be drawn.


 

Like any design tool, built-in functions allow you to manipulate the map. Holding the space bar will allow you to pan the map. Selecting entities will allow you to move objects, and holding the Shift key when moving objects will perform a snap to grid function. Using arrow keys will gently nudge the entity in a desired direction. Holding Shift while using arrows will move the object in larger increments. Holding the Control key or using the + or - buttons will allow you to zoom in or out while working with your map. Probably one of my favorite tools is the Center key in the bottom right. This will not only center your map, but perform a zoom to fit, ensuring the entire map is placed in the viewable area. This is an excellent tool as you expand or condense maps of different scales. Any entity can be removed from a map by selecting it and hitting the Delete key on your keyboard.

 


 

Once we have our map situated how we want it, you’ll notice any change in the canvas enables the "Save" button in the upper-right corner.  Clicking save will generate a dialogue, which will allow you to add a unique name. This will warn you in the event you attempt to name your map with a previously used name.

 

 

Under the MORE menu, a number of options will be presented to you. "New" will allow you to start a new map and a blank canvas, much like the name implies. "Save As" is particularly useful if a map has been shared with you, or as an administrator you’re editing a map you didn’t create. Unless you’re the one who created the original map, you won’t be allowed to "Save" but will have to perform a "Save As" and rename the map. "Delete" needs little explanation, but again, if this isn’t your map, then the delete option will be grayed out. I’ll cover the "View" button a bit later in this post in more detail, and the "Help" button of course links to formal documentation for much of the items discussed in this post.

 

 

LEVERAGING CONTEXTUAL MAPS

We have massive plans to improve upon the function of building maps as we understand one of the biggest needs is expediting map creation and limiting the number of touches to maintain them. Feel free to share what you believe would make a difference in the comment section below. In this release, we’re taking advantage of the framework and functionality delivered previously through the contextual sub-views. If or when viewing an automatically generated map from the Node or Group Details sub-views, you’ll now see a new button added to the menu bar, "Open Map in Editor." Essentially, I can use the existing functionality to take a pre-built map, expand it further, and have what was done within the sub-view persisted and sent to the new map editor with the click of a button. The images below should show a basic demonstration of this workflow. This is a great way to build maps quickly and then make final adjustments in the editor before saving.

 

Navigating to Map sub-view from Node Details page


 

Expanding the map through automatically discovered relationships


 

Open Map in Editor


 

Of course, using the built-in tools to move objects around the canvas, snapping to grid, and taking advantage of the center/auto-fit tool as you make adjustments can help you create a representation that makes the most sense for your organization. Once I've saved the map, what do I do now?

 

ORION MAPS WIDGET

As maps are saved, they’ll be accessible as a Map Project from the list view under the Map Management page. You’ll also find a new widget available in the Widget Drawer, allowing you to add any of your custom maps to a dashboard or view. Click the pencil in the upper-left side marked Customize Page, then click Add Widgets, and the resource will be located under the Network Maps section called Orion Map.

 

 

Drag and drop as many of these widgets out to the page as you wish, and click "Edit" or "Choose Map" to specify a map from your list. A dialogue will contain options to customize a title or subtitle and specify the widget height by pixels. A list of maps will be shown, along with a search option for quickly identifying the map you wish to use. Like the Map Management page, admins will also have the option to see all user-created maps by clicking the toggle on the right side.

 

 

Click "save" and your map will now be available. Another one of my favorite features is that the widget automatically scales the map according to the size you specify. By adjusting the height and the column width, your map will auto-fit the available space, making it fast and easy to get the map exactly where you want it on your dashboard, at just the right size.

 

 


 

With the ability to incorporate these maps alongside other widgets in the dashboard, you have some amazing new ways in which to roll up critical problems within your environment.  Below is a quick example of what one may look like.

 

 


 

 

ENHANCED NODE STATUS

If you haven't yet come across the post Orion Platform 2019.2 - Enhanced Node Status by aLTeReGo, we've included some very significant updates to how we highlight status in the Orion Platform. The desire for improvements in status was a consistent theme we heard during user research with maps as well, and the difference this change makes is awesome. To steal an excerpt from aLTeReGo's post: The example below shows the exact same environment. The first image shows what this environment looked like in the previous release using Classic Status. Notice the absence of any obvious visual cues denoting issues in the environment. The next image to the right is of the very same environment, taken at the exact same time as the image on the left. The only notable difference is this image was taken from a system running Orion Platform 2019.2 with Enhanced Node Status.

 

 

In both examples, there are the exact same issues going on in the environment, but these issues were obfuscated in previous releases, making the troubleshooting process less intuitive and unnecessarily time-consuming. With Enhanced Status, it's now abundantly clear where the issues lie, and with the topology and relationship information from Orion Maps, it's now easier to assess the potential impact those issues are having on the rest of the environment.

 

 

Classic Status | Enhanced Status

 

 

INTERACTING WITH THE MAP WIDGET AND VIEW MODE

Now that you have an amazing visualization of your environment and the issues are clearly identified, a closer look may be in order. There are a couple of different methods for interacting with your maps. The first method takes advantage of the improvements made to the Orion Hovers and is accessible from the Map Widget. By hovering over an entity in your map, performance status will be available and should highlight exactly why your entity is in a degraded state. You will also be able to access the Commands menu, which will allow you to go to Details pages, Edit Node, Mute Alerts, or Unmanage the entity directly from the map! This behavior will be the same if a group is on a map, or if you have nested maps. The commands option for a map includes viewing the map, editing the map, or muting alerts associated with the map. From here, you can choose to use the command options or simply click on the entity in the map, which will take you to its details page automatically, as pictured below. The View Mode, which can also be accessed as a button in the top right of the Map Widget, is a full-screen depiction of that map and all its entities, allowing you to investigate further using the inspector panel to show related entities, alerts, and recommendations (if viewing virtual entities).

 


 

FEEDBACK

This release marks another significant step for the Orion Maps project, and we hope you find these new enhancements valuable and useful in your environment. I plan to write and attach a couple of other posts to this announcement around using Maps in Alerts and Reporting. Of course, with each release, we find your feedback extremely valuable, and much of what has been done to this point centers around your asks. Please be sure to comment below and SHARE YOUR MAPS and DASHBOARDS! Stay tuned as we are already hard at work on the next major release and have some very cool stuff in store.

 

Check out the other posts from serena and aLTeReGo on 2019.2 Platform improvements if you haven't already!

Orion Platform 2019.2 - Install/Upgrade Improvements Part 1

Orion Platform 2019.2 - Install/Upgrade Improvements Part 2

Orion Platform 2019.2 - Enhanced Node Status

Orion Platform 2019.2 - Additional Improvements

In addition to Node status improvements, the Orion® Platform 2019.2 includes a slew of other great new features and enhancements. There’s a tremendous amount of diversity in these improvements, ranging from deployment flexibility to usability all the way to security. So, no matter what your jam, this release for the Orion Platform is sure to have something for you.

 

 

 

Default Admin Password

 

If you're installing an Orion Platform product for the first time, perhaps on a lab system or in a staging environment, the first thing you'll notice when you attempt to log in to the Orion web interface is that you're now required to define a password for the default "Admin" user account. No longer will you be able to log in with the default "admin" account and no password. If you're upgrading from a previous release, however, this change won't affect you. It's only applicable to new installs of the Orion Platform. However, if you're still running your Orion instance with no password defined for the "Admin" account, let this post serve as a reminder to check that off the to-do list.

 

Admin Password Change Prompt | Error Returned When No Password is Entered

Azure SQL DB Support

 

In the earlier Orion Platform 2018.2 release, we added support for using Amazon Relational Database Service (RDS) as a cloud-based alternative to more traditional on-premises Microsoft SQL database servers. This allowed customers deploying Orion instances into the cloud on Amazon Elastic Compute Cloud (EC2) as their infrastructure-as-a-service solution to lower costs and reduce management overhead further by using Amazon's database-as-a-service offering. As more organizations lift and shift workloads into the cloud, it's natural for their monitoring solution to be one of them.

 

Since that release, however, we've received numerous requests to provide similar support for Azure SQL DB, Microsoft's equivalent to Amazon's RDS, and in Orion Platform 2019.2, we've delivered. By adding support for Azure SQL DB to all product modules running atop Orion Platform 2019.2, you're now afforded greater deployment flexibility and choice than ever before, without the worry of being locked in to a single cloud vendor. Best of all, using Azure SQL DB as the SQL database repository for your Orion install is just as easy as using a local on-prem MSSQL database server instance.

 

Whether you're installing the Orion Platform for the first time or migrating your Orion instance to the cloud, the magic begins in the Configuration Wizard. Simply enter the fully qualified domain name (FQDN) of the SQL Server instance as shown in your Azure Portal, along with your credentials. With the introduction of Azure SQL DB, the Orion Platform now also supports the use of Azure Active Directory credentials for authenticating to the Azure SQL DB instance, should you prefer not to use SQL authentication.

 

 

If this is a new Orion Platform installation, you can create an empty database from within your Azure Portal for your Orion instance to use, or the Configuration Wizard can automatically create one for you, no differently than if you were to deploy the Orion Platform on-prem. By default, the Configuration Wizard will create an S3 tier database, the absolute lowest Azure SQL DB tier supported by the Orion Platform and its associated product modules.

 

My favorite thing about Azure SQL DB is how incredibly fast and easy it is to scale your database tier up or down from within the Azure portal as your needs (or budget) dictates.

 

If for any reason you forget which Azure SQL database tier the Orion Platform is using, you can remind yourself from within the comfort of the Orion web interface simply by going to [Settings > All Settings > Database Details].

 

 

Orion Agent Rediscovery

 

Rediscovering things like newly added volumes, AppInsight applications, and interfaces on Agents has historically been a fairly binary operation. Your options were either to run a rediscovery against every Agent-managed node associated with a given Polling Engine, or none. There wasn’t really a way to specify additional criteria to narrow your rediscovery job to a subset of Agent-managed nodes. This was obviously fairly limiting if you wanted to handle some Agent-managed nodes differently than others, such as production vs. staging/lab machines or by office/region. If you wanted these handled differently, your only recourse was to divvy those Agents up across polling engines based on their role or location.

 

Since this was hardly an ideal solution for some customers, or even an option for others, we knew we could do better. In Orion Platform 2019.2, you can now specify rediscovery parameters for Agent-managed nodes based on node properties, such as IP addressing, node caption naming conventions, and even custom properties. These properties can even be combined to target a subset of Agents you want to be rediscovered, either one time or on a recurring basis. You'll even find a convenient "Preview" button so you can validate that the rediscovery parameters you've specified return the expected Agent-managed nodes. Coupled with automatic import, these Agent rediscovery options provide the Ronco Rotisserie equivalent of IT management, allowing you to simply set it and forget it.

 

Linux Agent Metrics

 

More than a few keen-eyed observers have noticed a slight discrepancy when monitoring Linux nodes using the Agent when compared to those same nodes being monitored via SNMP. Namely, the absence of specific volume types, such as Swap Space, Shared Memory, Memory Buffers, and more. Fortunately, in this release, we've corrected this injustice and now provide visibility into the same volume types with the Linux Orion Agent as are available when polling via SNMP. No longer will you need to make difficult compromises or tradeoffs when deciding to switch your node polling method from SNMP to the Linux Agent.

 

Orion Platform 2018.4 | Orion Platform 2019.2

 

Orion Agent SDK

 

Since the first release of the Orion Agent, it's been possible to use the Orion SDK to script the push deployment of new agents to remote machines, no differently than you can through the Orion web interface. While this has been great, those systems have to be accessible via RPC and WMI for Windows or SSH for Linux for the agent to be deployed. Additionally, those machines where the Agent is deployed must be able to communicate back to the Orion server or one of its associated polling engines. For those customers who would prefer to pre-deploy the Agent in a passive mode (server initiated), using Chef, Puppet, SCCM, or even SolarWinds Patch Manager, there hasn't really been any good way to script or automate managing those systems. Instead, users have had to add those passive agents to the Orion Platform manually, one by one. That's perhaps fine if you have the occasional one or two, but not so much fun when you have dozens or even hundreds of newly deployed Agents to manage in your Orion instance.

 

With Orion Platform 2019.2, this is now a problem of the past. You can now fully script and automate adding passive agents to your Orion instance using the Orion SDK. Simply pass all the same parameters you would normally be prompted to enter when adding a passive agent through the Orion web interface as part of your script. For example, the IP address of the machine where the passive agent is already deployed. Within seconds of executing your script, you should see your passive agent appear under [Settings > All Settings > Manage Agents] of the Orion web interface.
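
What that script might look like depends on your tooling, but with the SwisPowerShell module from the Orion SDK it's roughly along these lines. The verb name and its argument list below are assumptions for illustration only, so confirm the exact signature in the Orion SDK documentation for your platform version before relying on it.

Import-Module SwisPowerShell
$swis = Connect-Swis -Hostname "your-orion-server" -UserName "admin" -Password "YourPassword"

# Hypothetical verb and argument list -- check the Orion SDK docs for the real signature.
Invoke-SwisVerb $swis "Orion.AgentManagement.Agent" "AddPassiveAgent" @(
    "LAB-WEB01",      # agent/node name as it should appear in Orion
    "10.199.15.20",   # IP address of the machine where the passive agent is already deployed
    17790,            # port the passive agent is listening on
    1                 # ID of the polling engine that should own the agent
)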

 

 

 

Manually Provision Agent Plugins

 

Some organizations have offices in very remote regions of the world where latency is very high and bandwidth is a sparse, precious commodity. While the Orion Agent is extremely lightweight to deploy and bandwidth-efficient during normal operation, when the Agent is initially provisioned, it downloads any and all dependencies necessary to perform whatever function it has been asked to do, such as functioning as a QoE sensor, NetPath probe, or becoming a managed node, to name only a few uses for the Agent.

 

Depending on which functions are being used, the age of the operating system, and how up-to-date the machine is with Windows Updates, the Agent plugin dependencies can reach up to a couple of hundred megabytes in size. If you need to provision dozens of Agents in one of these remote regions with high latency connections and very little bandwidth, it can take a very long time before all those Agents finish downloading all necessary plugins and dependencies (if they don't give up before then). Worse yet, if you're doing this deployment during working hours, the download of plugins and dependencies for all those Agents can significantly impede other people's ability to function in the office, as all available bandwidth could be consumed by those Agents attempting to download their plugins and plugin dependencies.

 

After upgrading to Orion Platform 2019.2, you’ll be able to pre-provision all Agent plugins and their related dependencies, eliminating both the need to download them from the associated polling engine and the potential to impact end users working in that remote office during the Agent provisioning process.

 

To get started, simply copy the contents of the “C:\Program Files (x86)\SolarWinds\Orion\AgentManagement\Plugins” directory on the main Orion server to the “C:\ProgramData\SolarWinds\Agent\Plugins” directory of the Windows machine where you want to deploy the Agent. How you get those files to their intended destination is entirely up to you. You can use a CD, DVD, USB drive, or even a local file share (or, may I plug the tried-and-true Serv-U® MFT file transfer solution?).
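If you'd rather script the copy than burn a disc, here's a minimal sketch that pushes the plugin repository to a remote Windows machine over its administrative share. The target hostname is a placeholder, and any transport you already trust works just as well.

```python
# Minimal sketch: copy the Orion Agent plugin repository from the main Orion server
# to a remote Windows machine's plugin cache over the C$ administrative share.
# The remote hostname is a placeholder; run this from the Orion server with an
# account that has admin rights on the target machine.
import shutil
from pathlib import Path

SOURCE = Path(r"C:\Program Files (x86)\SolarWinds\Orion\AgentManagement\Plugins")
REMOTE_HOST = "branch-office-pc01"  # hypothetical target machine
DEST = Path(rf"\\{REMOTE_HOST}\C$\ProgramData\SolarWinds\Agent\Plugins")

DEST.mkdir(parents=True, exist_ok=True)

# Copy every plugin package and dependency, preserving the directory layout.
for item in SOURCE.rglob("*"):
    target = DEST / item.relative_to(SOURCE)
    if item.is_dir():
        target.mkdir(parents=True, exist_ok=True)
    else:
        shutil.copy2(item, target)

print(f"Copied plugin repository to {DEST}")
```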

 

Once the agent plugins and their related dependencies have been copied to the appropriate directory on the remote machine where the Agent will be installed, install and configure the Agent as you normally would. The Agent should now use the local plugin repository rather than downloading those plugins across the wire from the polling engine with which it's associated. If you're pre-provisioning Linux or AIX Agents, you can follow the same steps. The only difference is the directory where the agent plugins are stored. For Linux or AIX Agents, be sure to copy them to the “/opt/SolarWinds/Agent/bin/Plugins” directory.

 

This same method can be used when upgrading Agents using a package management or software distribution solution like SolarWinds Patch Manager or Microsoft SCCM. Simply deploy the contents of the “C:\Program Files (x86)\SolarWinds\Orion\AgentManagement\Plugins” directory from the main Orion server to the appropriate directory listed above on the machine where the Agent is installed. Then execute the unattended Agent upgrade process as you normally would.

 

 

 

Continuing on the momentum of the previous release, Orion Platform 2019.2 adds even more direct links to PerfStack, where you can cross-correlate metrics across a variety of different entities and entity types to quickly identify the root cause of issues in your environment. Now, simply click on the numeric value or linear gauge in any of the 30 updated resources and you’ll be launched directly into PerfStack, where metrics are automatically plotted for you over time, ready for you to begin your analysis.

 

 

The following table lists all 30 Orion resources updated in this release to link their respective metrics directly to PerfStack.

 

New Resources Supporting Direct Links to PerfStack
Top 10 Avg. Disk sec/Transfer | Top 25 Avg. Disk sec/Transfer | Top XX Avg. Disk sec/Transfer
Top 10 Nodes by Average Response Time | Top 25 Nodes by Average Response Time | Top XX Nodes by Average Response Time
Top 10 Nodes by Average CPU Load | Top 25 Nodes by Average CPU Load | Top XX Nodes by Average CPU Load
Top 10 Disk Queue Length | Top 25 Disk Queue Length | Top XX Disk Queue Length
Top 10 Volumes by Disk Space Used | Top 25 Volumes by Disk Space Used | Top XX Volumes by Disk Space Used
Top 10 Nodes by Percent Memory Used | Top 25 Nodes by Percent Memory Used | Top XX Nodes by Percent Memory Used
Top 10 Nodes by Percent Packet Loss | Top 25 Nodes by Percent Packet Loss | Top XX Nodes by Percent Packet Loss
Top 10 Nodes by Current Response Time | Top 25 Nodes by Current Response Time | Top XX Nodes by Current Response Time
Top 10 Total IOPS | Top 25 Total IOPS | Top XX Total IOPS
Nodes with High Average CPU Load | Volumes with High Percent Usage | Nodes with High Memory Utilization

 

 

Automatic Removal of Unknown Volumes

 

In today's highly virtualized world, volumes are no longer physical, heavy-metal components of the server that are seldom, if ever, added to or removed from the machine. Instead, volumes are simply additional storage capacity, easily added or removed on a whim with just a few mouse clicks or keystrokes. As such, it's not uncommon these days for volumes to be added or removed as storage capacity needs change over the course of a server's lifecycle. This, however, results in some additional overhead to keep the monitoring server up-to-date with these changes in the environment. While scheduled recurring discoveries with automatic import help automate the monitoring of new volumes as they're added to servers in the environment, removed volumes remain managed in the Orion Platform until they're manually deleted by someone with Node Management rights. Hunting down all these “unknown” volumes can also be a tedious process, which is why it's seldom done. The result is wasted volume licenses and polling engines bogged down by polling cycles spent on volumes that will never return.

 

 

In our never-ending quest to reduce management overhead, we’ve now added the ability to automatically remove these “unknown” volumes after a predetermined period of time, which is, of course, user-configurable.

 

Under [Settings > All Settings > Orion Polling Settings], you’ll find a new option intuitively entitled “Automatically Remove Unknown Volumes,” which, as the name suggests, will remove any volumes from being managed by the Orion Platform if they’ve been “unknown” for longer than the number of days defined in the “Remove Unknown Volumes After” field. To ensure we’re not inadvertently removing “unknown” volumes you may not want deleted immediately upon upgrading to Orion Platform 2019.2, we’ve disabled this option by default. We do, however, recommend enabling this option and removing “unknown” volumes after a reasonable number of days as part of good monitoring hygiene.
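If you'd like a quick inventory of how many "unknown" volumes you're carrying before flipping the switch, a SWQL query through the Python orionsdk client can help. The sketch below is an illustration only; the numeric status code representing "Unknown" is an assumption you should verify against Orion.StatusInfo in your own instance.

```python
# Sketch: list volumes currently in an "Unknown" state via the Orion SDK (SWQL).
# Assumption: the status value used for "Unknown" (0 below) should be verified
# against Orion.StatusInfo in your own instance before relying on it.
import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()
swis = SwisClient("orion.example.local", "admin", "password")  # hypothetical host/creds

results = swis.query(
    "SELECT v.VolumeID, v.Caption, n.Caption AS NodeName, v.Status "
    "FROM Orion.Volumes v JOIN Orion.Nodes n ON n.NodeID = v.NodeID "
    "WHERE v.Status = 0"  # assumed "Unknown" status code
)

for row in results["results"]:
    print(f"{row['NodeName']}: {row['Caption']} (VolumeID {row['VolumeID']})")
```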

 

 

Secure Syslog Alerts

 

For several years it's been possible to send SNMP Traps securely using SNMPv3 as an alert action. There has, however, not been any equivalent for sending Syslog messages as part of an alert trigger action in a similarly secure fashion… until now.

 

With the release of Orion Platform 2019.2, you’ll now find a new option to send Syslog messages via TCP, not just UDP, as in previous releases. There’s also an option for sending those Syslog messages via TCP using TLS encryption, providing secure communications and data privacy for data in motion. With these new capabilities, you can now safely and securely send alerts via Syslog to other Syslog receivers like Kiwi Syslog® or another Orion instance running Log Analyzer via TCP for improved reliability of message delivery and TLS encryption to comply with your latest security policies and regulatory mandates.
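Under the hood, syslog over TLS is just a TCP connection wrapped in encryption, with each message length-framed. The sketch below is not how the Orion alert action is implemented (that's configured entirely in the web console); it's only a small Python illustration of the transport, using RFC 6587 octet-counting framing, so you can see the kind of message a receiver like Kiwi Syslog would accept. Hostname, port, and certificate handling are placeholders.

```python
# Illustrative sketch only: send one syslog message over TCP with TLS,
# using RFC 6587 octet-counting framing. Not the Orion implementation;
# hostname and port are placeholders.
import socket
import ssl

SYSLOG_HOST = "kiwi-syslog.example.local"  # hypothetical TLS syslog receiver
SYSLOG_PORT = 6514                         # IANA-assigned port for syslog over TLS

message = "<134>Orion alert: Node web01 is down"     # <134> = facility local0, severity info
frame = f"{len(message)} {message}".encode("utf-8")  # octet-counting framing: LEN SP MSG

context = ssl.create_default_context()  # verifies the receiver's certificate by default

with socket.create_connection((SYSLOG_HOST, SYSLOG_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=SYSLOG_HOST) as tls_sock:
        tls_sock.sendall(frame)
```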

 

 

HSRP Addresses

 

Odd as it may seem, IP addresses configured on Cisco routers for use with HSRP are not exposed via the traditional industry-standard MIB-2 ipAdEntAddr OID (http://oid-info.com/get/1.3.6.1.2.1.4.20.1.1). This information is instead tucked away in Cisco's private CISCO-HSRP-MIB, out of reach of the Orion Platform's normal mechanisms for gathering the IP addresses assigned to a node. This meant it wasn’t possible to search for a node via the “Search for Nodes” resource using any HSRP IP address configured on a device. It also meant any Orion product module attempting to associate information with a given node via its HSRP address, like NetPath, was unable to, because the Orion Platform was unaware of the node's HSRP addresses.

 

Fortunately for you, this is now a thing of the past. With Orion Platform 2019.2, the platform collects all HSRP addresses assigned to a given node, allowing you to quickly find nodes by their HSRP addresses and properly associate disparate information from Orion product modules with the correct node.
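For the curious, you can see the data the Orion Platform now collects by walking Cisco's enterprise subtree yourself. The sketch below uses pysnmp; the specific column OID for the HSRP virtual IP address is an assumption based on CISCO-HSRP-MIB, so verify it against your device before relying on it.

```python
# Sketch: walk HSRP group virtual IP addresses from CISCO-HSRP-MIB.
# Assumption: 1.3.6.1.4.1.9.9.106.1.2.1.1.11 (cHsrpGrpVirtualIpAddr) is the
# column of interest; confirm against the MIB for your IOS version.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, nextCmd,
)

ROUTER = "10.0.0.1"    # hypothetical HSRP-enabled Cisco router
COMMUNITY = "public"   # read-only community string

for error_indication, error_status, _, var_binds in nextCmd(
    SnmpEngine(),
    CommunityData(COMMUNITY, mpModel=1),   # SNMPv2c
    UdpTransportTarget((ROUTER, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.4.1.9.9.106.1.2.1.1.11")),
    lexicographicMode=False,               # stop at the end of the column
):
    if error_indication or error_status:
        print(error_indication or error_status.prettyPrint())
        break
    for oid, value in var_binds:
        print(f"{oid} = {value.prettyPrint()}")
```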

 

FortiGate CPU & Memory

 

Those of you running FortiGate firewalls in your environment should be pleased to hear Orion Platform 2019.2 now natively supports monitoring of both CPU and memory utilization for these devices out-of-the-box. No longer will you need to fumble with Universal Device Pollers. Best of all, you can even monitor these metrics in real-time via PerfStack Real-Time Polling.

 

If you're already monitoring your FortiGate firewalls with your Orion instance via SNMP, there's nothing additional you need to do. Simply upgrade your Orion product module to the latest version that includes Orion Platform 2019.2, and these metrics will begin being collected. If you were previously using Universal Device Pollers to monitor the CPU and memory utilization on your FortiGate firewalls, you may want to consider removing those pollers after upgrading to reduce polling overhead.

 

 

 

Dynamic External Nodes

 

For years now, the Orion Platform has had the notion of External Nodes, which essentially represent nodes that typically aren’t owned or managed by you and don’t respond to ICMP, SNMP, or WMI. The primary purpose of external nodes is to assign application templates from Server & Application Monitor. Those application templates are commonly HTTP/HTTPS User Experience Monitors or TCP Port Monitors for monitoring external websites and SaaS applications, but these are simply two examples; there are many more uses for External Nodes.

 

 

The trouble with external nodes historically has been that, since they don't poll any information, they also never update their IP address. In previous Orion releases, you couldn't edit the properties of an External Node and select “Dynamic IP”; external nodes with dynamic IP addresses simply weren't possible. So, if you’d assigned an application template to an external node and its IP address ever changed, it would report a “down” status even if the application being monitored was really “up,” because the Orion Platform was still polling the application at the node's original IP address. Your only recourse was to delete the node, re-add it to your Orion instance, and reassign any application templates you had assigned, losing any historical data for the applications monitored on the node.

 

With the release of Orion Platform 2019.2, we have addressed this glaring limitation of external nodes. Now, when the “Dynamic IP Address” box is checked on an “External” node, a DNS lookup against the hostname or fully qualified domain name (FQDN) of the node is performed every two minutes by default, automatically updating the IP address. The frequency with which this query is performed can be adjusted simply by updating the “Node Status Polling” interval for the node.
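Conceptually, the refresh is nothing more than resolving the node's name and comparing the answer to the stored address. A tiny illustration (not the Orion implementation) is below; the hostname and addresses are hypothetical.

```python
# Conceptual illustration only: resolve an external node's FQDN and detect
# whether its IP address has changed since the last check.
import socket

fqdn = "status.example-saas.com"   # hypothetical external node hostname
last_known_ip = "203.0.113.10"     # the address currently stored for the node

current_ip = socket.gethostbyname(fqdn)
if current_ip != last_known_ip:
    print(f"{fqdn} moved from {last_known_ip} to {current_ip}; updating the node record")
else:
    print(f"{fqdn} still resolves to {last_known_ip}; nothing to do")
```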

 

 

Newly Added SysObjectIDs

 

Every release of the Orion Platform includes support for identifying new makes, models, and manufacturers of devices. This comes in large part from customers just like you who help identify these new devices in the wild and report them to us in the Tell Us Your Unknown Devices v2.0 thread.

 

The following is a list of all new devices that will now be properly identified by Orion Platform 2019.2. If you're running the latest release of the Orion Platform and the “Machine Type” for any of your devices is reported as “Unknown,” simply post its SysObjectID to the Tell Us Your Unknown Devices v2.0 thread along with its make, model, and manufacturer, and we’ll ensure it's properly identified as such in the next release of the Orion Platform.

 

Cisco 800M with 8-Port LAN Integrated Services Router | Cisco C1111-8PLTELAWH Router
DELL S5000 | Cisco C1111-8PLTELAWF Router
DELL S4810-ON | Cisco C1111-8PWE Router with WLAN E domain
DELL S6000-ON | Cisco Aironet 1815
DELL S4048-ON | Cisco Aironet 1540
DELL S3048-ON | Cisco Catalyst 2960L-24TQ-LL Switch
DELL S3148P | Cisco Catalyst 2960L-48TQ-LL Switch
DELL S3124P | Cisco Catalyst 2960L-24PQ-LL Switch
DELL S3124F | Cisco Catalyst 2960L-48PQ-LL Switch
DELL S3124 | Cisco Catalyst 9407R Switch
DELL S6100 | Cisco Catalyst 94010R Switch
DELL S6010 | Cisco C1111-4P Router
DELL S4048T | Cisco C1111-4PLTEEA Router with Multimode Europe and North America Advanced LTE
DELL S3148 | Cisco C1111-4PLTELA Router with Latin America Multimode and Asia Pacific Advanced LTE
DELL Z9500 | Cisco C1111-4PWE Router with WLAN E domain
DELL Z9100 | Cisco C1111-4PWB Router with WLAN B domain
DELL S4148F | Cisco C1111-4PWA Router with WLAN A domain
DELL S4148T | Cisco C1111-4PWZ Router with WLAN Z domain
HP 2930F-24G-PoE+-4SFP (JL261A) | Cisco C1111-4PWN Router with WLAN N domain
1920S 24G 2SFP PoE+ (JL385A) | Cisco C1111-4PWQ Router with WLAN Q domain
ForeScout CounterACT Appliance | Cisco C1111-4PWH Router with WLAN C domain
Corvil CNE Appliance | Cisco C1111-4PWR Router with WLAN R domain
Corvil CNE Appliance | Cisco C1111-4PWF Router with WLAN K domain
FortiWeb 1000D | Cisco C1111-4PWD Router with WLAN D domain
Fortinet Fortigate 280D-POE | Cisco C1116-4P Router with VDSL/ADSL
FortiGate 500D | Cisco C1116-4PLTEEA Router with Multimode Europe and North America Advanced LTE
FortiGate 600D | Cisco C1117-4P Router with VDSL/ADSL
FortiWeb 4000D | Cisco C1116-4PWE Router with WLAN E domain
Pulse Secure IC4000 | Cisco C1117-4PLTEEA Router
Pulse Secure MAG-2600 | Cisco C1117-4PLTELA Router
Pulse Secure PSA-3000 | Cisco C1117-4PWE Router with WLAN E domain
Pulse Secure PSA-5000 | Cisco C1117-4PWA Router with WLAN A domain
Pulse Secure PSA-7000c | Cisco C1117-4PWZ Router with WLAN Z domain
Pulse Secure PSA-7000f | Cisco C1117-4PM Router with VDSL/ADSL
9982P2ET | Cisco C1117-4PMLTEEA Router
IAP-325 | Cisco C1117-4PMWE Router with WLAN E domain
IAP-315 | Cisco C1112-8P Router
ClearPass Policy Manager CP-HW-5K | Cisco C1112-8PLTEEA Router with Multimode Europe and North America
6548 Switch | Cisco C1113-8P Router
Internal Management Module Switch | Cisco C1113-8PM Router with VDSL/ADSL
Annuncicom | Cisco C1113-8PLTEEA Router
Instreamer | Cisco C1113-8PLTELA Router
DataDomain 9300 | Cisco C1113-8PMLTEEA Router
S6720-54C-EI-48S-AC | Cisco C1113-8PWE Router with WLAN E domain
Lantronix EDS4100 | Cisco C1113-8PWA Router with WLAN A domain
Xerox DocuColor 242 | Cisco C1113-8PWZ Router with WLAN Z domain
ColorQube 9301 | Cisco C1113-8PMWE Router with WLAN E domain
D110 | Cisco C1113-8PLTEEAWE Router
Palo Alto PA-5200 | Cisco C1113-8PLTELAWE Router
Palo Alto PA-5200 | Cisco C1113-8PLTELAWZ Router
Palo Alto PA-220 | Cisco C1114-8P Router
H3C S5560-54C-EI | Cisco C1114-8PLTEEA Router with Multimode Europe and North America
H3C S12504X-AF | Cisco C1115-8P Router
H3C S6520-48S-EI | Cisco C1115-8PLTEEA Router with Multimode Europe and North America Advanced LTE
LP-1030 | Cisco C1115-8PM Router with VDSL/ADSL
TSM-24-DPS | Cisco C1115-8PMLTEEA Router
VMR-HD4D30 | Cisco C1118-8P Router (ciscoC11188P)
NPS-8-ATS | Cisco C1116-4PLTEEAWE Router
vMX | Cisco C1117-4PLTEEAWE Router
Juniper Virtual Route Reflector (vRR) | Cisco C1117-4PLTEEAWA Router
Juniper ACX2200 | Cisco C1117-4PLTELAWZ Router
Juniper ACX5048 | Cisco C1117-4PMLTEEAWE Router
Juniper ACX5096 | Cisco 807 Industrial Integrated Services Routers
Juniper vSRX | Cisco 807 4G LTE Industrial Integrated Service Router
Juniper SRX345 | Cisco 807 4G LTE Industrial Integrated Service Routers with multi-mode Global (Europe & Australia) LTE/HSPA+
Juniper ACX2100 | Cisco 807 4G LTE Industrial Integrated Service Router
Juniper ACX1100 | Cisco 807 4G LTE Industrial Integrated Service Routers with multi-mode AT&T and Canada LTE/HSPA+
Juniper EX3400-24T | Cisco Catalyst 9500 series with 32 Ports of 100G/32 Ports of 40G
Juniper QFX10002-72Q | Cisco Catalyst 9500 series with 32 Ports of 40G/16 Ports of 100G
Juniper QFX10008 | Cisco Catalyst 9500 series with 48 Ports of 1G/10G/25G + 4 Ports of 40G/100G
WIB 8000 | Cisco Catalyst 9500 Router with 24 Ports of 1G/10G/25G + 4 Ports of 40G/100G
Meraki Dashboard | Cisco Catalyst 9500 Series Switch
Xerox ApeosPort-IV C3375 | C9500-16X
Xerox ApeosPort-V C6675 T2 | IR829M-LTE-LA-ZK9
DCS-7060CX2-32S | Cisco C1109-2PLTEGB 2 ports GE LAN M2M Router with Multimode LTE WWAN Global
SX6036 | Cisco C1109-2PLTEUS 2 ports GE LAN M2M Router with Multimode LTE WWAN US
SX6036 | Cisco C1109-2PLTEVZ 2 ports GE LAN M2M Router with Multimode LTE WWAN Verizon
MSB7800-ES2F | Cisco C1109-2PLTEAU 2 ports GE LAN M2M Router with Multimode LTE WWAN Australia and New Zealand
F5 BIG-IP 10350v | Cisco C1109-2PLTEIN 2 ports GE LAN M2M Router with Multimode LTE WWAN India
BIG-IP i2800 | Cisco C1101-4P 4 Ports GE LAN Router
F5 Networks BIG-IP i4600 | Cisco C1101-4PLTEP 4 Ports GE LAN Router
Delphix DB Engine | Cisco C1101-4PLTEPWE 4 Ports GE LAN Router
TSC ME240 | Cisco C1101-4PLTEPWB 4 Ports GE LAN Router
Dell S4048-ON | Cisco C1101-4PLTEPWD 4 Ports GE LAN Router
Dell S6000-ON | Cisco C1101-4PLTEPWZ 4 Ports GE LAN Router
CX923de | Cisco C1101-4PLTEPWA 4 Ports GE LAN Router
OmniSwitch 6450-48L | Cisco C1101-4PLTEPWH 4 Ports GE LAN Router
OmniSwitch 6450-P10 | Cisco C1101-4PLTEPWQ 4 Ports GE LAN Router
Alcatel OmniSwitch 6450-C48X | Cisco C1101-4PLTEPWR 4 Ports GE LAN Router
Alcatel OmniSwitch 6450-P48X | Cisco C1101-4PLTEPWN 4 Ports GE LAN Router
Alcatel OmniSwitch 6450-U24 | Cisco C1101-4PLTEPWF 4 Ports GE LAN Router
Alcatel OmniSwitch 6350-P48 | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2P)
OmniSwitch 6860E-U28 | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWB)
InfoBlox ND-1400 | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWE)
TelePresence MCU 5320 | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWD)
Cisco IE 2000-16PTC-G-NX Industrial Ethernet Switch | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWZ)
Cisco IE 2000-4S-TS-G-L Industrial Ethernet Switch | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWA)
Cisco IE-2000U-4S-G Industrial Ethernet Switch | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWH)
Cisco C887VAM Integrated Series Routers | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWQ)
Cisco 897 Multi-Mode VDSL2/ADSL2+ POTS Annex M with Multi-Mode 4G LTE Router | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWN)
Cisco C899 Secure Gigabit Ethernet with Multi-mode 4G LTE Router | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWR)
Aironet 1572EC Outdoor Access Point | Cisco C1109-4PLTE2P 4 Ports GE LAN M2M Router (ciscoC11094PLte2PWF)
Cisco Catalyst 6824-X-LE-40G | Cisco C9407R
Cisco Firepower NGFW 4140 | Cisco 1000V
Cisco NCS 5001 | Cisco Nexus 3132Q Switch
Cisco NCS 5002 | Cisco UCS 6332 32-Port Fabric Interconnect
Cisco 897 Multi-mode VDSL2/ADSL2+ POTS with Multi-Mode 4G LTE Router | Cisco Nexus 5672UP Switch
Cisco NCS 1002 | UCS 6332-16UP Fabric Interconnect
Cisco NCS 5508 | Cisco Nexus 31128PQ Switch
Cisco NCS 5502-SE | Cisco Nexus 3132
Cisco 897VAGLTELAK9-4G LTE Latin America router with 1 Giga Ethernet WAN | Cisco Nexus 3172
Cisco 819 Non-Hardened 4G LTE M2M with Dual Radio 802.11n WiFi Router | Cisco Nexus 3172
Cisco 819 Non-Hardened 4G LTE M2M with Dual Radio 802.11n WiFi Router | Cisco Nexus 9236C
Cisco Aironet 1560 | Cisco Nexus 31108PC-V
C899G-LTE-LA-K9 4G router with 1 Giga Ethernet WAN, 1 SFP (Small Form-factor Pluggable) Giga Ethernet WAN | Cisco 3172
C819G-LTE-LA-K9 Router with 1 Gigabit Ethernet WAN, 4 Fast Ethernet LAN | Cisco 9232C
Cisco 4221 ISR | Nexus 93180YC-FX
Cisco 4221 Integrated Services Router | Nexus 9348GC-FXP
Cisco Catalyst CDB-8U Switch | Cisco Nexus 9K C9364C
Cisco Catalyst CDB-8P Switch | Cisco 7600 Series Route Switch Processor 720 with 10 Gigabit Ethernet Uplinks
Cisco NCS 5501 | WS-X45-SUP9-E (Cisco Catalyst 4503-E Switch Module)
Cisco NCS 5502 | Cisco 3172
Cisco 829 4G LTE Industrial Integrated Service Router | Cisco SGE2000 10/100/1000 Ethernet Switch
Cisco 829 4G LTE Industrial Integrated Service Routers with multi-mode LTE/HSPA+ with 802.11n | SF550X-24
Cisco 829 4G LTE Dual-modem Industrial Integrated Service Router | SF550X-24P
Cisco 829 4G LTE Dual-modem Industrial Integrated Service Routers with multi-mode LTE/HSPA+ with 802.11n | SF550X-24MP
Cisco 809 4G LTE Industrial Integrated Service Router | SF550X-48
Cisco 809 4G LTE Industrial Integrated Service Routers with multi-mode LTE/HSPA+ | SF550X-48P
Cisco C1111-8P Router | SF550X-48MP
Cisco C1111-8PLTEEA Router with Multimode Europe and North America Advanced LTE | SG550X-24
Cisco C1111-8PLTELA Router with Latin America Multimode and Asia Pacific Advanced LTE | SG550X-24P
Cisco C1111-8PWE Router with WLAN E domain | SG550X-24MP
Cisco C1111-8PWB Router with WLAN B domain | SG550X-24MPP
Cisco C1111-8PWA Router with WLAN A domain | SG550X-48
Cisco C1111-8PWZ Router with WLAN Z domain | SG550X-48P
Cisco C1111-8PWN Router with WLAN N domain | SG550X-48MP
Cisco C1111-8PWQ Router with WLAN Q domain | SG350X-24
Cisco C1111-8PWH Router with WLAN C domain | SG350X-24PD 24-Port 2.5G PoE Stackable Managed Switch
Cisco C1111-8PWR Router with WLAN R domain | SG350X-24P
Cisco C1111-8PWF Router with WLAN K domain | SG350X-24MP
Cisco C1111-8PLTEEAWE Router | SG350X-48
Cisco C1111-8PLTEEAWB Router | SG350X-48P
Cisco C1111-8PLTEEAWA Router | SG350X-48MP
Cisco C1111-8PLTEEAWR Router | SG350X-8PMD 8-Port 2.5G PoE Stackable Managed Switch
Cisco C1111-8PLTELAWZ Router | SG350-8PD 8-Port 2.5G PoE Managed Switch
Cisco C1111-8PLTELAWN Router | Pravail NSI
Cisco C1111-8PLTELAWQ Router

 

 

But Wait, There's More!

 

The list of improvements above is just a small sampling of everything included in the Orion Platform 2019.2 release. There are still plenty of additional new features and improvements added to this release of the Orion Platform, including Enhanced Node Status, Orion Maps 2.0, and Install/Upgrade Improvements. As always, we appreciate your feedback on all these improvements, so be sure to let us know your thoughts in the comment section below.

In our latest release of User Device Tracker (UDT), you'll discover new port discovery and polling support for Cisco Nexus switching equipment. You'll also see UDT make a cameo appearance in our Network Insight™ for Palo Alto firewalls, with new visibility into devices connected to these firewalls. We'll show you where it integrates into NPM today.

 

Speaking of discovery, we've completely reworked the port discovery process to be very similar to node discovery. We'll show you what it looks like, and how to configure credentials for these new device types.

 

Finally, we'll talk briefly about some Orion® Platform enhancements, and improvements to the SDK we've recently published for working with ports.

 

Discovering and Importing Ports

 

In this release, we're adding significant granularity to the Discovery and Import process for ports. The experience and the workflow are similar to NPM node discovery, with granular selection criteria and port-filtering options:

 

It's simple to exclude operationally or administratively down ports from the import. This flexibility saves overhead and simplifies licensing by offering better, granular control.

 

Configuring Access for UDT

For most devices supported by UDT, all that's necessary are the SNMP credentials. For some devices—the Cisco Nexus 5K, 7K, and 9K series switches, or for the Palo Alto Firewall—a set of command-line interface (CLI) credentials are required.

 

You can configure devices in bulk or individually in the Port Management section of the User Device Tracker settings page. Select "Manage Ports" to see the list of devices that can be configured:

 

Select one or more of these devices, edit their properties, and you'll find a section for configuring SNMP polling:

You'll also find a section for CLI-based polling:

The polling interval is set in its own section of the UDT Settings page, under "Polling Interval." The default polling interval for port information is 30 minutes.

 

Once you’ve enabled UDT Layer-3 polling for a CLI-based device, you can expect to see port information populated in the Port Details resource on the Node Details page.

 

UDT SDK Updates

This release adds some basic create, read, update, and delete operations for UDT ports into the Orion SDK. Refer to the documentation available in GitHub for examples.
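As a quick taste, a read via SWQL using the Python orionsdk client might look like the sketch below. The Orion.UDT.Port entity and the property names shown are assumptions; the Orion SDK documentation on GitHub has the authoritative schema and the create, update, and delete verbs.

```python
# Sketch: read UDT port records through the Orion SDK (SWQL).
# Assumption: the Orion.UDT.Port entity and the selected property names are
# illustrative; confirm them in the SWIS schema before scripting against them.
import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()
swis = SwisClient("orion.example.local", "admin", "password")  # hypothetical host/creds

ports = swis.query(
    "SELECT TOP 20 PortID, Name, NodeID FROM Orion.UDT.Port"  # assumed entity/properties
)

for port in ports["results"]:
    print(f"NodeID {port['NodeID']}: port {port['Name']} (PortID {port['PortID']})")
```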

 

 

Platform Improvements

Along with all of the other modules on the Orion Platform, UDT can now be installed in Azure and can use the native Azure SQL Database service to host the Orion database. This adds deployment flexibility; we already support deployment in AWS using the RDS service.

 

How Do I Get This Goodness?

For UDT, you can find the latest release in your Customer Portal.

 

 

To see all the features of Network Insight for Palo Alto, you’ll want to have several modules installed and working together.

  • Network Performance Monitor discovers and polls your Palo Alto firewall and retrieves and displays your site-to-site VPN and GlobalProtect client VPN connection information.
  • Network Configuration Manager collects your device configuration and provides a list of your security policies for zone-to-zone communication. This module tracks configuration changes over time and provides context for policies spanning multiple devices.
  • NetFlow Traffic Analyzer collects flow data from the firewall and maps the traffic to policies in the Policy Details page. You can also view traffic through the firewall or through specific interfaces.
  • User Device Tracker collects directly connected devices and provides a history of connections to the ports on the device.

 

 

You can demo these products individually or install/upgrade from any installer available in your Customer Portal.

 

We're looking forward to hearing your feedback and questions on the release in the forum below!

VoIP & Network Quality Manager version 4.6 builds on the SIP trunk monitoring work we introduced in the previous release. In the last release, we introduced SIP trunk health and availability metrics monitored from the Cisco Unified Call Manager element. In this release, VNQM delivers SIP trunk call metrics—with on-demand polling—and SIP trunk utilization from the Cisco Unified Border Element. It's comprehensive visibility throughout the CUCM environment.

 

Monitoring the CUBE

 

You'll notice there's a new resource available on the VoIP Summary Page—"VoIP Gateways." These are the border gateway elements where SIP trunks terminate.

 

In this release, we support the Cisco Unified Border Element, or “CUBE” appliance.

 

The Call Manager hosts, visible in the VoIP CallManagers resource, drill down into details pages for each call manager. The VoIP Gateways expand directly into a list of SIP trunks.

 

 

 

At this level, you can see a quick status for each trunk. Drilling into one of the trunks provides this view.

 

 

The SIP Trunk Details view provides resources for monitoring status over time, metrics for inbound and outbound call activity, and SIP trunk utilization. Each of these metrics can be individually opened in a new PerfStack™ project, or the collection of status and call metrics for this trunk can be opened from the summary resource. In PerfStack, the "Performance Analyzer" view looks like this.

 

Pulling these metrics into PerfStack gives you visibility on the same timescale for other related metrics—resources on the CUBE device, for example.

 

 

Or, perhaps the active inbound and outbound calls for several trunks.

 

 

The PerfStack dashboard gives you the flexibility to compose views that you can save and use for monitoring or troubleshooting in the future.

 

Immediate Real-Time Polling

Note that the Inbound and Outbound call metrics from the CUBE have the "rocket ship" icon next to them to denote real-time polling is available. This means we can enable continuous polling and presentation of these metrics from the CUBE when we're troubleshooting issues and need to see the current call metrics from the perspective of the CUBE. This provides valuable insight into the key utilization metrics for each SIP trunk.

 

SIP Trunk Utilization

Tracking SIP trunk utilization is complex; several different factors go into calculating utilization. Utilization depends upon the mix of typical calls, the codecs used in the environment, and the number of active calls. In this release, we're using maximum concurrent calls as the primary indicator of utilization, and calculating and presenting a percentage value useful for capacity planning. You should work with your SIP provider to estimate the number of concurrent calls your trunks can support and configure your thresholds accordingly.
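In other words, the utilization percentage is simply the observed concurrent call count divided by the maximum concurrent calls you've configured. A quick worked example, with a hypothetical warning threshold, is below.

```python
# Worked example of the utilization calculation described above:
# utilization % = (observed concurrent calls) / (configured max concurrent calls).
configured_max_concurrent_calls = 100   # the default out-of-the-box value
observed_concurrent_calls = 37          # hypothetical peak during the polling interval

utilization_pct = 100.0 * observed_concurrent_calls / configured_max_concurrent_calls
print(f"SIP trunk utilization: {utilization_pct:.1f}%")   # -> 37.0%

# Hypothetical warning threshold for capacity planning.
if utilization_pct >= 80:
    print("Approaching trunk capacity; review with your SIP provider.")
```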

 

To configure the maximum concurrent calls we'll use in the utilization calculation, you'll need to visit global settings for "VoIP & Network Quality Manager (VNQM) Settings," and select "Edit VoIP & Network Quality Manager Settings" to see these global values for "Gateway" settings.

 

In addition to the "Maximum Concurrent Calls," you can also set thresholds, polling intervals, and retention period for these metrics here.

 

CLI credentials are configured at the CUBE level by selecting "Manage Gateways" and providing credentials for one or all gateway devices.

 

 

You can also override the maximum concurrent call value in the CUBE properties.

 

The default out-of-the-box value is 100 concurrent calls. You'll want to confirm this value for your environment with your SIP trunk provider.

 

 

Platform Improvements

Along with all the other modules on the Orion® Platform, VNQM can now be installed in Azure and can use the native Azure SQL Database service to host the Orion database. This adds deployment flexibility; we already support deployment in AWS using the RDS service.

 

We're excited to provide comprehensive health, availability, and utilization monitoring for SIP trunks in the Cisco CUCM environment. Visit your Customer Portal to review the Release Notes, verify the System Requirements, and download this release.

 

 

We're looking forward to your experiences and questions in the forum below.

In our latest release of NetFlow Traffic Analyzer (NTA), we’re focusing on features that deliver expanded visibility, and flexible evaluation and deployment options. For the first time, NTA is providing a significant contribution to our Network Insight™ feature for Palo Alto firewalls.

 

Also, in this release, we’re adding support for IPv6 flow records, and enhancing our filtering to display IPv4 only, IPv6, or both types of traffic.

 

For evaluation customers—and for current customers upgrading—we’ll automatically configure a local source of NetFlow data on the local server. This will provide an immediate source of data for evaluation installations and a comprehensive source of information for traffic sourced or destined to the primary poller.

 

Finally, we’re fully supporting the deployment of NTA into Azure, using the native Azure SQL Database service to host the flow database. This builds upon our existing support for deployment in AWS, using the native RDS service.

 

We’ll explain an important upcoming change in the upgrade process, and how to plan for it.

 

Traffic Visibility by Policy

 

In this release, NTA is contributing to our latest Network Insight through an integration with Network Configuration Manager (NCM). Users of SolarWinds NCM with Palo Alto firewalls will see top traffic conversations by security policy on the NCM Policy Details page. Examining traffic by policy helps answer the question, "Who might be affected as I make changes to my security policies?"

 

Let's look at how we find this view.  We'll start at the Node Details page for this firewall:

 

 

We'll use the slide-out menu in this view to select "Policies." This will take us to a list view of all the policies configured for zones on this device.

 

Selecting a policy from this list brings us to the Policy Details page:


 

Policies define security controls between zones configured on the firewall. For a Palo Alto firewall, a zone can include one or more interfaces. So, in this view, we're looking at all the conversations based on applications defined in the policy.

 

It's a very different way of looking at conversations; this isn't a view of all traffic through a node or an interface. Rather, it's a view that relates to the policy definition, so the endpoints in these conversations are running over the applications on which your security rules are based.

 

The mechanism here is filtering; we’re looking at application traffic that references the application IDs in your security policy. So, the endpoints in those conversations may be from any zone where you’re using this policy.

 

For an administrator considering changes at the policy level, this is a valuable tool to understand how those rules apply immediately to production services and what kinds of impacts changes to them will have.

 

For this feature, you'll need both NCM and NTA. NTA, of course, requires Network Performance Monitor (NPM). NCM provides the configuration information, including the policy definition and the application definitions. NTA reads application IDs from the flow records we receive from the Palo Alto firewall and correlates those with the policy configuration to generate this view. With NTA, you can also easily navigate to more conventional node or interface views of the traffic traversing the firewall, and we integrate traffic information seamlessly into the Node Details page in NPM as well.

 

IPv6 Traffic Visibility

 

This release offers comprehensive visibility in mixed IPv4 and IPv6 environments, and the flexibility to isolate TopN views for each of these protocols. While deployment of IPv6 has not been as aggressive as some originally predicted, it's gaining significant traction in the public sector, large-scale distribution operations, universities, and companies working with IoT infrastructures. Our latest release consumes NetFlow v9 and IPFIX flow templates for IPv6 traffic and stores those records along with the IPv4 flow records we support today. Let's see what the NTA summary page looks like.

 

You'll notice some IPv6 conversations, and some IPv6 endpoints in the TopN views. This view gives you complete visibility into the traffic driving your utilization in a mixed IPv4 and IPv6 environment. We've also added new filters, both on the dashboard and in the flow navigator.

 


 

These filters give you the flexibility to examine how traffic running over each version drives utilization, and which conversations are dependent on different configurations within the infrastructure.

 

The Orion® Platform—and NTA—already support installation on dual-stack IPv4 and IPv6 servers. You can receive these flow records on either an IPv4 or IPv6 interface, depending on how your server is connected.

 

IPv6 changes how we think about the security model. This visibility gives us a perspective on how our security policies act on IPv4 and IPv6 traffic to permit or deny conversations. In that sense, it's a valuable tool to confirm your traffic is compliant with your security policies.

 

Local Source of NetFlow

 

This release will automatically add a new source of NetFlow data to your NTA main poller. This new source is a composite of the physical network interfaces on your Orion main poller, represented as a special type of virtual interface: the Local NetFlow Source. This new source of flow information gives you unprecedented visibility into the traffic that originates on or arrives at the Orion server. You can use this to answer questions about your network and system management traffic trends. "How much SNMP traffic does my monitoring generate? What volumes and frequencies of flow traffic do I receive, and from where? How much DNS traffic does my management platform drive, and to where?"

 

Let's see what this looks like.


 

Selecting the "Local NetFlow Source" interface and drilling into it, here's the view.

 


 

You can manage this source of traffic the same way you manage any other source of flow data: by selecting the "Manage Sources" link in the NetFlow Sources resource.

 


You can enable or disable the Local NetFlow Source here to include or exclude traffic from this source.

 

For brand-new installations of NTA, this new source will be created and enabled by default. If you’re working with an evaluation copy of the NTA application, this will give you immediate live data in the product that's personal to your network. It's a great way to introduce your colleagues to new versions or evaluate new releases without having to reconfigure your network devices to send flow records to this instance.

 

If you’re upgrading NTA, this source will be created but will not be enabled by default. We'll respect your existing configuration and give you the flexibility to make the choice about whether you'd like to include this traffic in your current view. Disabling this source completely shuts down capture of traffic on the local interfaces.

 

Creating this interface consumes a single node license for both NPM and NTA. If you would prefer not to use a node license for the Local NetFlow Source, you can delete this interface entirely to release the license. You cannot, however, add this interface back later.

 

Azure Deployment

 

Finally, we've been working to ensure users deploying into Azure can make use of the native Azure SQL Database service for both the common Orion database and the SQL NTA database. You'll be able to specify Azure SQL Database to build both of these databases during installation, in much the same way as you build in existing SQL instances today. We're supporting additional choices to help lower operational costs and expand your deployment flexibility.

 

To take advantage of this option, you’ll enter the connection string for your Azure SQL Database instance much the same way you enter any other connection string in the Configuration Wizard.
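For reference, an Azure SQL Database connection string follows the familiar SQL Server format. The values below are entirely hypothetical, and the exact fields the Configuration Wizard prompts for may differ; treat this only as an illustration of the details you'll want on hand before you start.

```python
# Entirely hypothetical example of the values needed for an Azure SQL Database
# connection; the Configuration Wizard assembles the actual connection string for you.
server = "tcp:my-orion-sql.database.windows.net,1433"  # Azure SQL logical server (example)
database = "SolarWindsNTA"                             # flow database name (example)
user = "orion_admin@my-orion-sql"                      # SQL authentication login (example)
password = "use-a-strong-secret-here"

connection_string = (
    f"Server={server};Database={database};"
    f"User ID={user};Password={password};Encrypt=True;"
)
print(connection_string)
```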

 


 

Changes in the Upgrade Process Are Coming

 

If you’re upgrading to NTA 4.6 from an older version of the product, you’ll once again see a familiar option to defer your NTA upgrade and remain on a version that doesn’t require SQL 2016 or later for the flow database.

 

In the past three releases of NTA (4.4, 4.5, and 4.6), we’ve included a pre-flight check in the upgrade dialog to allow customers to defer the upgrade and retain (or upgrade to) NTA version 4.2.3, the latest version that supports flow storage in the FastBit database. This in turn allowed updates to the Orion Core and other product modules without requiring an NTA upgrade. 

 

In the next release of NTA, this option will no longer be available. An upgrade to the next release of NTA after 4.6 will require a SQL 2016 or later database (or appropriate AWS RDS or Azure SQL option) to complete the upgrade.

 

A modern version of SQL supports columnstore technology, which provides significant performance and scale benefits for NTA. We’re building on this technology in every new release to drive better performance and a better user experience.

 

You should plan now for your next upgrade to deploy a SQL 2016 or later instance for flow storage. Refer to the NTA System Requirements documentation for supported options.

 

How Do I Get This Goodness?

 

For NTA, you can find the latest release in your Customer Portal. Remember, we also have a terrific complementary set of free NetFlow tools in the Flow Tool Bundle, including Flow Replicator, Flow Generator, and Flow Configurator.

 

To see all the features of Network Insight for Palo Alto, you’ll want to have several modules installed and working together.

 

  • Network Performance Monitor discovers and polls your Palo Alto firewall and retrieves and displays your site-to-site VPN and GlobalProtect client VPN connection information.
  • Network Configuration Manager collects your device configuration and provides a list of your security policies for zone-to-zone communication. This module tracks configuration changes over time and provides context for policies spanning multiple devices.
  • NetFlow Traffic Analyzer collects flow data from the firewall and maps the traffic to policies in the Policy Details page. You can also view traffic through the firewall or through specific interfaces.
  • User Device Tracker collects directly connected devices and provides a history of connections to the ports on the device.

 

You can demo these products individually or install/upgrade from any installer available in your Customer Portal.

 

Post your questions and experiences in the NetFlow Traffic Analyzer community forum!

It gives me great pleasure to introduce the newest version of Network Configuration Manager (NCM), v8.0, as generally available!

 

I’m pretty excited about this release, as it’s jam-packed with great features. Per popular request, Network Insight includes awesome capabilities from NCM, Network Performance Monitor (NPM), NetFlow Traffic Analyzer (NTA), and User Device Tracker (UDT). This very special Network Insight for Palo Alto firewalls provides users with insights into their policies, traffic conversations across policies, and VPNs. We have a detailed write-up about all the value we stuffed into the feature here.

 

In addition to Network Insight, NCM is now easier to use when executing config change diffs, adds two new vendors to the Firmware Upgrade feature, and is more performant when executing config backups.

 

Updated Config Diff

 

In an effort to reduce the amount of time spent spotting changes in a config diff (all those lines…), a simpler and easier-to-use Config Diff has been implemented in this version. By focusing the view on the context of the diff (the changes themselves), you’ll now see the changes highlighted, plus five lines above and below them. All unchanged lines beyond the five-line limit are collapsed to remove the endless scroll. This gives you the context of the change and makes it easier to discern what steps need to be taken next.

 

 

Additional Vendor Support for Firmware Upgrade

 

For some time now, you’ve all been asking for additional vendors to be added to Firmware Upgrade, and I’m pleased to say we’ve delivered. Take advantage of the automation to apply firmware to Juniper and Lenovo switches to patch vulnerabilities or ensure your network devices are on the latest release. Have a different switch model? Just use the framework from the out-of-the-box templates to make it work for you.

 

 

Go check out the release notes for the full details or review the admin guide. We’ve been working hard to bring these wonderful new features to you, so be sure to visit your Customer Portal to download this version.

 

If there’s anything you think we should consider in a future release, please be sure to go create a new feature request to let me know about the additional functionality you would like to see.

We’re excited to introduce our Network Insight™ for Palo Alto firewalls! This is the fourth Network Insight feature, and we’re building these in direct response to your feedback about the most popularly deployed devices and the most common operational tasks you manage.

 

Network Insight features are designed to give you tools specific to the more complex and expensive special-purpose devices in your network infrastructure. While the bulk of your network consists of routing and switching devices, the more specialized equipment at the edge requires monitoring and visibility beyond the standard SNMP modeled metrics we’re all familiar with.

 

So, what kinds of visibility are we talking about for Palo Alto firewalls?

 

The Palo Alto firewall is zone-based, with security policies that describe the allowed or denied connectivity between zones. So, we’ll show you how we capture and present those security policies. We’ll show you how we can help you visualize application traffic conversations between zones, to help you understand how policy changes can affect your clients. Another critical feature of the Palo Alto firewall is to secure communications between sites, and to provide secure remote access to clients. We’ll show you how to see your site-to-site VPN tunnels, and to manage GlobalProtect client connections.

 

Managing Security Policies

 

Palo Alto firewalls live and die on the effectiveness of their security policies to control how they handle network traffic. Policies ensure business processes remain unaffected and perform optimally, but unintentional or poorly implemented policies can cause widespread network disruption. It’s critical for administrators to monitor not only the performance of the firewall, but the effect and accuracy of the policy configuration as well. As these policies are living entities, continually being modified and adjusted as network needs evolve, the impact and context of a change may be missed and difficult to recover. This is why in Network Insight for Palo Alto, Network Configuration Manager (NCM) brings some powerful features to overcome these pitfalls.

  • Comprehensive list view of security policies
  • Detailed view into each policy and its change history
  • Usage of a policy across other Palo Alto nodes managed by NCM
  • Policy configuration snippets
  • Interface configuration snippets
  • Information on applications, addresses, and services

 

Once the Palo Alto config is downloaded and parsed, the policy information will populate the Policies List View page. This page is intended to make it easier to search through and identify the right security policy from a potentially long list, using configurable filtering and searching. The list view provides each policy’s name, action, zones, and last change. Once the correct policy is identified, users can drill down into each one to see the composition and performance of each policy.

 

The policy details page summarizes the most critical information and simplifies the workflow to understand if a policy is configured and working as intended. You can review the basic policy details, as well as the policy configuration snippet and review the object groups composed into the policy. Admins will be able to quickly analyze if additional action is required to resolve an issue or optimize the given policy.

 

 

Some policies are meant to extend across multiple firewalls, and without a view to see this, it’s easy to lose context about the effectiveness of your policy. Network Insight for Palo Alto analyzes the configuration of each firewall to identify common security policies and display their status. As an administrator, this lets you confirm whether your policies are being correctly applied across the network and take action if they’re not. If you’d like more continuous monitoring of a policy standard, you can also leverage a policy configuration snippet as a baseline for all Palo Alto nodes.

 

 

With any configuration monitoring and management, it’s critically important to be able to provide some proof of compliance for your firewall’s configuration. With Network Insight, you can track and see the history of changes to a policy and provide tangible evidence of events that have occurred. Of course, this also supports the ability to immediately run a diff of the configs where this change took place, by simply clicking the “View diff” button.

 

 

VPN Tunnel Monitoring, Finally

 

How do you monitor your VPN tunnels today? We asked you guys this question a lot as we started to design this feature. The most common response was that you’d ping something on the other end of the tunnel. That approach has a number of challenges. The device terminating the VPN tunnel rarely has an IP address included in the VPN tunnel’s interesting traffic that you can ping. You have to ping something past the VPN tunnel device, usually some server. Sometimes the company at the other end of the tunnel intentionally has strict security and doesn’t allow ping. If they do allow ping, you have to ask them to tell you what to ping. If that thing goes down, monitoring says the tunnel is down, but the device might be down, not the tunnel. All this adds work. It’s all manual, and companies can have hundreds, thousands, or more VPN tunnels. Worst of all, it doesn’t work very well. It’s just up/down status. When a tunnel is down, why is it down? How do you troubleshoot it? When a tunnel is up, how much traffic is it using? When’s the last time it went down?

 

This is a tough position to be in. VPN tunnels may be virtual, but today they’re used constantly as infrastructure connections and may be more important than some of your physical WAN connections. They’re commonly used to connect branch offices to each other, to HQ, or to data centers. They’re the most popular way to connect one company to another, or from your company to an IaaS provider like Amazon AWS or Microsoft Azure. VPN tunnels are critical and deserve better monitoring.

 

Once you enable Network Insight for Palo Alto, Network Performance Monitor (NPM) will automatically and continually discover VPN tunnels. A site-to-site VPN subview provides details on every tunnel.

 

 

There are a couple of things going on here that may not be immediately obvious but are interesting, at least for network nerds like me.

 

All tunnels display the source and destination IP. If the destination IP is on a device we’re monitoring, like another Palo Alto firewall or an ASA, we’ll link that IP to that node in NPM. That’s why 192.168.100.10 is a blue hyperlink in the screenshot. If you’ve given the tunnel a name on the Palo Alto firewall, we’ll use that name as the primary way we identify the tunnel in the UI.

 

There’s different information for VPN tunnels that are up and VPN tunnels that are down. If the tunnel is down, you’ll see the date and time it went down. You’ll also, in most cases, see whether the VPN tunnel failed negotiation in phase 1 or phase 2. This is the first piece of data you need to start isolating the problem, and it’s displayed right in monitoring. If the tunnel is up, you’ll see the date and time it came up and the algorithms protecting your traffic, including privacy/encryption and hashing/authenticity.

 

The thing I’m most excited about is in the last two columns. BANDWIDTH! Since VPN tunnel traffic is all encrypted, getting bandwidth usage is a pain. Using a flow tool like NTA, you can find the bandwidth if you know both peer IPs and are exporting flow post encryption. It takes some manual work, and you can only see traffic quantities because of the encryption. You can’t tell what endpoints or applications are talking. If you export flow prior to encryption, you can see what endpoints are talking, but you have to construct a big filter to match interesting traffic, and then you have no guarantee that traffic makes it through the VPN tunnel. The traffic has the additional overhead of encapsulation added, so pre-encryption isn’t a good way to understand bandwidth usage on the WAN either. The worst part is that VPN tunnels transit your WAN—one of the most expensive monthly bills IT shops have.

 

Network Insight for Palo Alto monitors bandwidth of each tunnel. All the data is normalized, so you can report on it for capacity, alert on it to react quickly when a tunnel goes down, and inspect it in all the advanced visualization tools of the Orion® Platform–including the PerfStack™ dashboard.

 

 

GlobalProtect Client VPN Monitoring

 

Why does it always have to be the CEO or some other executive who has problems with the VPN client on their laptop? When I was a network engineer, I hated troubleshooting client VPN. You have so little data available to you. It’s very easy to look utterly incompetent when someone comes to you and tells you their VPN service isn’t working, and when it’s the CEO, that’s not good. Network Insight for Palo Alto monitors GlobalProtect client VPN and keeps a record of every user session.

 

 

This makes it easy to spot the most common problems. If you see the same user failing to connect over and over, but other users are successful, you know it’s something on that client’s end and would probably check if login credentials are right. “No, I’m sure you didn’t forget your password. Sometimes the system forgets. Let’s reset your password because that often fixes it.” If lots of people can’t connect, you may check for problems on the Palo Alto firewall and the connection to the authentication resource.

 

Traffic Visibility by Policy

 

In this release, NetFlow Traffic Analyzer (NTA) is contributing to our latest Network Insight through an integration with Network Configuration Manager. NCM users who manage Palo Alto firewalls will see top traffic conversations by security policy on the NCM Policy Details page. Examining traffic by policy helps answer the question, "Who might be affected as I make changes to my security policies?"

 

 

Let's look at how we find this view. We'll start at the Node Details page for this firewall.

 

 

We'll use the slide-out menu in this view to select "Policies." This will take us to a list view of all the policies configured for zones on this device.

 

 

Selecting a policy from this list brings us to the Policy Details page.

 

 

Policies define security controls between zones configured on the firewall. For a Palo Alto firewall, a zone can include one or more interfaces. In this view, we're looking at all the conversations based on applications defined in the policy. It's a very different way of looking at conversations; this isn't a view of all traffic through a node or interface. Rather, it's a view related to the policy definition—so the endpoints in these conversations are running over the applications your security rules are based on. The mechanism here is filtering; we’re looking at application traffic that references the application IDs in your security policy. The endpoints in those conversations may be from any zone where you’re using this policy.

 

For an administrator considering changes at the policy level, this is a valuable tool to understand how those rules apply immediately to production services and what kinds of impacts changes to them will have. For this feature, you'll need both NCM and NTA. NTA, of course, requires NPM. NCM provides the configuration information, including the policy definition and the applications definitions. NTA reads application IDs from the flow records we receive from the Palo Alto Firewall, and correlates those with the policy configuration to generate this view. With NTA, of course, you can also easily navigate to more conventional node or interface views of the traffic traversing the firewall, and we integrate traffic information seamlessly into the Node Details page in NPM as well.

 

User Device Tracker’s Cameo

 

For most devices supported by User Device Tracker (UDT), all that's necessary are the SNMP credentials. We’ll pick up information about devices attached to ports from the information modeled in SNMP. But for some devices—the Cisco Nexus 5K, 7K, and 9K series switches, or the Palo Alto firewall—a set of command-line interface (CLI) credentials are required. We’ll log in to the box periodically to pick up the attached devices.

 

To support device tracking on these devices, you’ll need to supply a command-line login. You can configure devices in bulk or individually in the Port Management section of the User Device Tracker settings page. Select "Manage Ports" to see the list of devices that can be configured.

 

 

Select one or more of these devices, edit their properties, and you'll find a section for configuring SNMP polling.

 

 

You’ll also find a section for configuring command-line polling. For devices requiring CLI access for device tracking—currently the Nexus switches and the Palo Alto firewall—you should enable CLI polling, and configure and test credentials here.

 

 

Be sure to enable Layer 3 polling for this device in the UDT Node Properties section as well.

 

You’ll see attached devices for these ports in the Node Details page, in the Port Details resource.

 

 

How Do I Get This Goodness?

 

To see all the features of Network Insight for Palo Alto, you’ll want to have several modules installed and working together.

  • Network Performance Monitor discovers and polls your Palo Alto firewall and retrieves and displays your site-to-site VPN and GlobalProtect client VPN connection information.
  • Network Configuration Manager collects your device configuration and provides a list of your security policies for zone-to-zone communication. This module tracks configuration changes over time and provides the context for policies spanning multiple devices.
  • NetFlow Traffic Analyzer collects flow data from the firewall and maps the traffic to policies in the Policy Details page. You can also view traffic through the firewall, or through specific interfaces.
  • User Device Tracker collects directly connected devices and provides a history of connections to the ports on the device.

 

You can demo these products individually, or install or upgrade from any installer available in your Customer Portal.

I’m excited to announce the general availability of SolarWinds Service Desk, the newest member in the SolarWinds product family, following the acquisition of Samanage.

 

SolarWinds Service Desk (SWSD) is a cloud-based IT service management solution built to streamline the way IT provides support and delivers services to the rest of the organization. The solution includes an ITIL-certified Service Desk with Incident Management, Problem Management, Change Management, Service Catalog, and Release Management, complemented by an integrated Knowledge Base. It also includes Asset Management, Risk and Compliance modules, open APIs, dashboards, and reporting.

 

Core Service Desk

SWSD includes a configurable Employee Service Portal, allowing employees to make their service requests, open and track their tickets, and find quick solutions through the knowledge base. The portal’s look and feel can be customized to your branding needs, and configurable page layouts support your organization’s unique service management processes.

 

 

 

 

For IT pros working the service desk, we provide an integrated experience that brings together all related records (for example, assets or knowledge base articles related to an incident or change records related to a problem), so that the agent can see all the information available to expedite the resolution.

 


 

 

To help agents prioritize work, Service Level Management (SLM) lets you build and manage SLA policies, including auto-escalation rules, directly within the service desk.

 

 

IT pros often need to be on the go, or need to respond to urgent service requests and incidents after hours. The SWSD mobile app, available on both iOS and Android mobile devices, allows agents to work on records, make approvals, and track the status of their work queue at all times.

 

Process Automation

 

Driving automation throughout all aspects of service delivery helps service desk groups deliver faster, more affordable, and highly consistent services to the rest of the organization. Process automation in SWSD uses custom rules logic to route, assign, prioritize, and categorize inbound tickets, change requests, and releases. The Service Catalog allows you to define and publish IT services (such as VM provisioning or password reset) and non-IT services (such as employee onboarding) through the Employee Service Portal. The catalog forms defining those services are dynamic and can be configured to fit specific use cases, with little to no coding required.

 

 

The other part of defining any Service Catalog item is automated fulfillment workflow.

 

 

IT Asset Management and CMDB

 

SWSD offers full asset lifecycle management, starting with the management of IT and non-IT asset inventories and an audit history of changes. Compliance risk analysis helps expose unmanaged or out-of-support software and devices. Where applicable, asset information incorporates contract, vendor, and procurement data to provide a full view of the assets under management.

 

 

The Configuration Management Database (CMDB), populated with service-supporting configuration items (CIs), plays a critical role in providing better change, problem, and release management services. Knowing which CIs support each service, and the dependencies between them, helps IT pros better assess the risks and impacts related to IT changes, drives better root cause analysis (RCA) in Problem Management, and leaves teams better prepared for new software releases.

 

Integrations

 

Many service desk processes can be integrated into other IT and business processes. SolarWinds Service Desk comes with hundreds of out-of-the-box integrations and an open REST API, allowing you to make it a part of the workflows you need.
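To give a feel for what working with the REST API might look like, here's a minimal Python sketch that creates an incident. The endpoint URL, header names, and payload fields shown are assumptions for illustration; take the exact values from the SolarWinds Service Desk API documentation and your own API token.

```python
# Hedged sketch: creating an incident via the Service Desk REST API.
# The URL, header names, and payload fields are illustrative assumptions;
# check the official API documentation for the exact contract.
import requests

API_URL = "https://api.samanage.com/incidents.json"   # assumed endpoint
TOKEN = "YOUR_API_TOKEN"                               # placeholder

payload = {
    "incident": {
        "name": "Printer on floor 3 is offline",
        "priority": "High",
        "requester": {"email": "jane.doe@example.com"},
    }
}

response = requests.post(
    API_URL,
    json=payload,
    headers={
        "X-Samanage-Authorization": f"Bearer {TOKEN}",   # assumed header name
        "Accept": "application/vnd.samanage.v2.1+json",  # assumed version header
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```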

 

 

We are releasing a brand-new integration today with Dameware Remote Everywhere (DRE). The synergy between SWSD and Dameware’s remote support capabilities allows agents to initiate a DRE session directly from a SWSD incident record.

 

 

Artificial Intelligence (AI)

AI is embedded in several SWSD functions, introducing a new level of automation and improved time to resolution. Our machine learning algorithms analyze large sets of historical data, identify patterns, and accelerate key service management processes. A “smart” pop-up within the employee service portal auto-suggests the knowledge base articles and service catalog items that best match the keyword(s) typed into the search bar.

 

 

For agents, AI helps with automatic routing and classification of incoming incidents, reducing the impact of misclassifications and human error. It also offers “smart suggestions” agents can leverage when working on a ticket. Smart suggestions are based on keyword matching against historical analysis of similar issues; they surface knowledge base articles or similar incidents, advising the agent on the best action to take next.
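Conceptually, keyword-based suggestion works by scoring historical records against the terms in a new ticket. The toy sketch below is only an illustration of that idea (it is not the SWSD algorithm), ranking knowledge base article titles by simple word overlap.

```python
# Toy illustration of keyword-overlap ranking, not the actual SWSD algorithm.
def suggest(ticket_text, articles, top_n=3):
    ticket_words = set(ticket_text.lower().split())
    scored = []
    for title in articles:
        overlap = len(ticket_words & set(title.lower().split()))
        scored.append((overlap, title))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

kb_articles = [
    "How to reset your VPN password",
    "Requesting a new laptop",
    "Troubleshooting VPN connection drops",
]
print(suggest("VPN keeps dropping connection", kb_articles))
```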

 

 

Reports and Dashboards

 

SolarWinds Service Desk comes with dozens of out-of-the-box reports that analyze and visualize the service desk’s KPIs, health, and performance. Those reports help agents, managers, and IT executives make data-driven decisions through insights including trend reports, incident throughput, customer satisfaction (CSAT) scores, and SLA breaches.

 

 

Dashboards provide a real-time, dynamic view of the service desk. They are composed of a set of widgets that can be added, removed, and configured to fit the individual needs of the agent, manager, or organization.

 

 

 

 

This has been a pretty packed inaugural product blog post for us. I hope you found it useful. We’d love to get your feedback and ideas. Feel free to comment below or visit the SolarWinds Service Desk product forum here; we're quickly building it out.

Security Event Manager (SEM) 6.7 is now available on your Customer Portal. You may be wondering what exactly Security Event Manager is. It's the product formerly known as Log and Event Manager (LEM). LEM has always been much more than a tool for basic log collection and analysis; it also detects and responds to cyberattacks and eases the burden of compliance reporting. SEM helps organizations across the globe improve their security posture, and we believe the new name better reflects the capabilities of the tool.

 

FLASH - THE BEGINNING OF THE END

Moving away from Flash has been the top priority for SEM for some time. I'm excited to say that this release introduces a brand-new HTML5 user interface as the default interface for SEM. You can now perform most of your day-to-day tasks within this new interface, including searching, filtering and exporting logs, as well as configuring and managing correlation rules and nodes. The feedback on the new UI has been hugely positive thus far, with many users describing it as clean, modern and incredibly responsive. The Flash interface is still accessible and is required for tasks such as Group/User Management, E-Mail Templates and the Ops Center. However, we're by no means finished with the new user interface and will continue to make improvements and transition away from Flash.

 

 

CORRELATION RULES

Correlation is one of the key components of any effective SIEM tool. As vast amounts of data are fed into Security Event Manager, the correlation engine identifies, alerts on, and responds to potential security weaknesses or cyberattacks by comparing sequences of activity against a set of rules. This release includes a brand-new Rule Builder, which makes it easy to build new rules and adjust existing ones. We've made several improvements, including drop-down menus (in addition to the traditional drag-and-drop) for creating rules, auto-enablement of a rule after saving, easier association of Event Names and Active Response actions, and the removal of the Activate Rules button.
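To illustrate what a correlation rule does conceptually, consider flagging repeated failed logons from the same source within a short window. The Python below is a deliberately simple toy, not how SEM implements its engine; in SEM you configure this kind of rule through the Rule Builder rather than writing code.

```python
# Toy correlation example: alert when one source generates 5 or more
# failed logons within a 60-second window. Purely illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 5
recent = defaultdict(deque)  # source IP -> timestamps of failed logons

def on_event(event):
    if event["name"] != "UserLogonFailure":
        return
    times = recent[event["source_ip"]]
    times.append(event["time"])
    # Drop timestamps that have fallen outside the correlation window.
    while times and event["time"] - times[0] > WINDOW_SECONDS:
        times.popleft()
    if len(times) >= THRESHOLD:
        print(f"ALERT: possible brute force from {event['source_ip']}")

# Feed in a burst of synthetic failed logons to trigger the rule.
for t in range(10):
    on_event({"name": "UserLogonFailure", "source_ip": "192.0.2.7", "time": t * 5})
```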

 

 

 

FILE INTEGRITY MONITORING

FIM was originally introduced way back in LEM 6.0 and has provided users with great insight into access and modifications to files, directories, and registry keys ever since. With users constantly creating, accessing, and modifying files, a huge amount of log data is generated, often with a lot of accompanying noise. To help you separate the signal from the noise, we've introduced File Exclusions within our redesigned FIM interface. If a particular machine is generating excessive noise from particular file types (I'm looking at you, tmp files), you can now easily exclude those file types at the node level.

 

 

LOG EXPORT

When investigating a potential cyberattack or security incident, you'll often need to share important log data with other teams or external vendors, or attach the logs to a ticket/incident report. Exporting results to a CSV is now possible directly from the Events Console.

 

 

AWS DEPLOYMENT

As organizations shift workloads to the cloud to lower costs and reduce management overhead, they require the flexibility to deploy tools in the cloud. In addition to the Azure deployment support included in LEM 6.5, this release adds support for AWS deployment. Deployment is done via a private Amazon Machine Image (AMI), so you'll need to contact SolarWinds Sales (for evaluation users) or Technical Support (for existing users) to gain access to the AMI. Please note that your AWS Account ID will be required in order to grant access.
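Once the AMI has been shared with your AWS account, launching it looks like any other EC2 deployment. Here's a hedged boto3 sketch; the AMI ID, instance type, and network parameters are placeholders, and you should size the instance per the SEM deployment documentation.

```python
# Hedged sketch: launching the shared SEM AMI with boto3.
# All IDs and the instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the private AMI shared with your account
    InstanceType="m5.xlarge",          # placeholder; size per SEM system requirements
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SubnetId="subnet-0123456789abcdef0",
)
print(response["Instances"][0]["InstanceId"])
```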

 

I really hope you like the direction we're going with Security Event Manager, especially the new user interface. We're already hard at work on the next version of SEM, as you can see in the What We're Working On post. As always, your feedback and ideas are greatly appreciated, so please keep them coming in the Feature Requests area.

SolarWinds® Access Rights Manager (ARM) 9.2 is available on the customer portal. Please refer to the release notes for a broad overview of this release.

 

Most of you are using cloud services in your IT environments today, living in and managing a hybrid world.

 

With the release of ARM 9.1, we already took this into consideration by complementing the existing access rights visibility into Active Directory, Exchange, and file servers with Microsoft® OneDrive and Microsoft® SharePoint Online.

Now, with ARM 9.2, we round out that feature set by introducing the ability to collect events from Microsoft® OneDrive and SharePoint Online, allowing you to also gain visibility into activities within these platforms.

 

In addition to the functionality above, a lot of work was done under the hood to lay the foundation for features we will make available in upcoming releases.

 

What’s New in Access Rights Manager 9.2?

  • Microsoft OneDrive and SharePoint Online monitoring - Administrators need to be aware of certain events in their OneDrive and SharePoint Online infrastructure. ARM now enables administrators to retrieve events from the O365 environment and analyze them in reports.
  • UI - Design and layout optimizations to complete the SolarWinds look and feel.
  • Defect fixes - as with any release, we addressed product defects.

 

The SolarWinds product team is excited to make these features available to you.  We hope you enjoy them. 

Of course, please be sure to create new feature requests for any additional functionality you would like to see with ARM in general.

 

To help get you going quickly with this new version, below is a quick walk-through of the new monitoring capabilities for Microsoft® OneDrive and Microsoft® SharePoint Online.

 

Identify ACCESS to shared directories and files on OneDrive

OneDrive is an easy tool to let your employees share resources with each other and/or external users. ARM makes it easy for you to check which files an employee has shared internally or externally, and who actually accessed these.

 

Now let’s take a look at how we can use OneDrive monitoring to answer the question “With whom outside the company do we share documents and files?” ARM allows you to easily generate a report for this.

 

1. Navigate to the Start screen in the ARM rich client and click on “OneDrive Logga Report” in the Security Monitoring section.

 

The configuration for the “OneDrive Logga Report“ opens.

2. Provide a title and comment that will be shown at the beginning of the report (optional). Select the time period analyzed for this report.

3. Click into “OneDrive Resources.”

4. Select the target resources for this report by double-clicking them on the right side.

5. Click into “Operations.”

6. As we are interested in who shared the resources and when, and in whether (and which) external users have accessed them, we select the “AnonymousLinkCreated” and “AnonymousLinkUsed” operations on the right side by double-clicking.

7. Click on “Start” to create this report manually.

8. Click on “Show report” to view the report.

The generated report shows who invited external users to access internal resources and when, and whether any external users have accessed them, including the IP address they accessed them from.

Note: You can schedule this report to be sent periodically to your mailbox to stay on top of what’s happening.

 

In the same way, you can generate reports on the more than 180 other events available in SharePoint Online and OneDrive. Just follow the steps outlined above and, in step 6, adapt the operations to the ones you are interested in.

Other interesting events you might want to look at are file- and folder-related operations like FileDeleted/FolderDeleted or FileMoved/FolderMoved, which help with one of the classic use cases: employees complaining about their disappearing files and folders.

 

On a side note, file/folder events on file servers are also captured in our monitoring and are available through the file server reports.

 

Conclusion

I hope that this quick summary gives you a good understanding of the new features in ARM and how you can utilize ARM to get better visibility and control over your hybrid IT environment. 

 

If you are reading this and not already using SolarWinds Access Rights Manager, we encourage you to check out the free download.  It’s free. It’s easy.  Give it a shot.

We are happy to announce the release of SolarWinds® Access Rights Auditor, a free tool designed to scan your Active Directory and file system and evaluate possible security risks due to existing user access rights.

 

 

Ever hear of risks and threats due to unresolved SIDs, globally accessible directories, directories with direct access, or groups in recursion –  and wondered if you were affected?

 

Access Rights Auditor helps you answer this question by identifying use cases such as these and allows you to export the overall risk summary in an easy-to-understand PDF report to be shared.

 

Don’t know where to start?

 

Let’s walk through a typical use case assuming we want to check the permissions and risks associated with a sensitive folder from the Finance department.

We type the phrase “invoices” in the search box and press enter (1).

 

The “Search Results” view displays the search history and all hits of your current search across the available categories, such as folders, users, and groups.

We select the folder we are interested in by clicking on “Invoices” (2).

 

Now we’re redirected to the “Folder Details” view and immediately get all “Folder Risks” displayed – in this example, three occurrences of “Unresolvable SIDs” and “Changed Access Permissions.”

But it doesn’t end here, because some risks are inherited by directories, for example from inactive user accounts with continued access. These hidden risks are also listed here, in the “Account Risks” section.

 

Now we validate who has access in the “User and groups” section below and realize that in our example the “System” account and the “Domain Admins” group have “full control” access on the folder.

To see the members of the “Domain Admins” group, simply click on the group and you’ll be redirected to the “Group details” view.

 

 

Access Rights Auditor improves your visibility into permissions and risks with just a few clicks.

 

Can’t believe it’s free? Go ahead and give it a try.

 

For more detailed information, check the Quick Reference guide here on THWACK® at https://thwack.solarwinds.com/docs/DOC-204485.

Download SolarWinds Access Rights Auditor at https://www.solarwinds.com/free-tools/access-rights-auditor.

For those of you who didn’t know, Storage Resource Monitor 6.8 is currently available for download! This release continues our momentum of supporting new arrays that you all requested on THWACK® as well as deepening our already existing support for the most popular arrays.

 

Why don’t we go over some of what’s new in SRM with the 6.8 release?

 

NEW ARRAY SUPPORT - KAMINARIO®

We’re all really excited about our newest supported array vendor: Kaminario®. Kaminario® is an enterprise storage vendor with a lot of exciting progress going on, and we now support their arrays, starting with K2 and K2.N devices. We think you will be excited too, if the voting on THWACK has anything to say about it.

 

Best of all, out of the box, this new support includes all the standard features you know and love: capacity utilization and forecasting, performance monitoring, end-to-end mapping in AppStack™, integrated performance troubleshooting in PerfStack™, and Hardware Health.

 

And, as always, we’re excited to share some screenshots.

 

Summary View

 

Hardware Health View

 

NEW HARDWARE HEALTH SUPPORT - DELL® COMPELLENT AND HPE 3PAR

Whether you’re new to SRM or you’ve been a customer for a while, you know there is a lot to be gained when we extend support for an array to hardware health. With SRM 6.8, we focused on adding hardware health support to the arrays most popular with our customers, so we’re excited to announce hardware health support for Dell® Compellent and HPE 3PAR arrays. Starting in SRM 6.8, digging into these array types allows you to see details on fans, power supplies, batteries, and more.

 

A screenshot? Of course.

 

 

WHAT’S NEXT

Add in some bug fixes and smaller changes and you have SRM 6.8. We’re excited for you all to check it out.

 

If there are any other features that didn’t make it into SRM 6.8 but that you would like to see, make sure to add them to our Storage Manager (Storage Profiler) Feature Requests forum. But before you do, head over to the What We’re Working On page to see what the storage team already has in the works for upcoming releases.

 

And as always, comments welcome below.

 

- the SRM Team
