
As a production database administrator for many years, I was often tasked with security requests. These ranged from answering “who changed what” to detecting SQL injection attacks. That role taught me that proper data security is a never-ending job, requiring the right tools and knowledge.

 

This is one reason I advocate the use of SEM to help with database security requirements. With SEM you can use the SQL Audit Events connector to monitor for security events. The previous version of the connector required a server-side trace to capture events related to schema changes, user changes, and failures for any query activity.

 

The latest version of the SQL Audit Events connector lets you use SQL Server Audit instead of a trace. SQL Server Audit is a great feature, but it can be a bit cumbersome to work with if you haven’t used it before.

 

The first step is to create a Server Audit. This is the “kitchen sink” for SQL Server Audit, as it catches everything and determines where to send the event output. The SQL Audit Events connector requires the SQL Server Audit output to be written to the Security or Application event log on the server. One thing to note here is that the Windows event log can fill up and be overwritten, so make certain you have modified the retention policy accordingly before you flood the event logs with audit events from SQL Server. It’s also worth noting that the Windows Application event log is less secure than the Windows Security event log; any authenticated user can read from and write to the Application event log.
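
To make this step concrete, here is a minimal T-SQL sketch, assuming a hypothetical audit name of SEM_Audit and output to the Application event log; adjust the target and options to your own requirements.

USE [master];
GO

-- Create the Server Audit ("kitchen sink") and point it at the Application event log.
-- TO SECURITY_LOG is the more secure alternative, but it requires additional
-- permissions for the SQL Server service account.
CREATE SERVER AUDIT [SEM_Audit]
TO APPLICATION_LOG
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO

-- Audits are created in a disabled state; enable it once configured.
ALTER SERVER AUDIT [SEM_Audit] WITH (STATE = ON);
GO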

 

After you have created the Server Audit, the next step is to create either a Server Audit Specification, or a Database Audit Specification. The Server Audit Specification is for events that affect the instance of SQL Server, and you can only have one Server Audit Specification output to one Server Audit object. The Database Audit Specification is for events that affect a specific database, and you can have multiple Database Audit Specifications output to a Server Audit object. Here’s what it all looks like:

 

 

The full list of SQL Server Audit action groups and actions can be found here. It is difficult to list out the specific groups and actions, as each company will have different requirements. But there are a few I would suggest you consider.

 

First, start by auditing the audit. You will want to know if the audit has been turned on or off, or if it has been altered in any way. You will use the AUDIT_CHANGE_GROUP for this task.

 

Next, you should set up a Server Audit Specification for events that affect the entire instance. I recommend the following:

 

FAILED_DATABASE_AUTHENTICATION_GROUP

LOGIN_CHANGE_PASSWORD_GROUP

SERVER_PRINCIPAL_CHANGE_GROUP

SERVER_ROLE_MEMBER_CHANGE_GROUP

USER_CHANGE_PASSWORD_GROUP
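
As a rough illustration of those recommendations, here is a hedged T-SQL sketch that creates a Server Audit Specification against the hypothetical SEM_Audit object from the earlier example, covering AUDIT_CHANGE_GROUP plus the instance-level groups above. Note that USER_CHANGE_PASSWORD_GROUP is documented as a database-level group, so depending on your SQL Server version you may need to add it through a Database Audit Specification instead.

USE [master];
GO

-- Server Audit Specification: instance-level security events, including
-- AUDIT_CHANGE_GROUP so we know if the audit itself is altered or disabled.
CREATE SERVER AUDIT SPECIFICATION [SEM_ServerAuditSpec]
FOR SERVER AUDIT [SEM_Audit]
    ADD (AUDIT_CHANGE_GROUP),
    ADD (FAILED_DATABASE_AUTHENTICATION_GROUP),
    ADD (LOGIN_CHANGE_PASSWORD_GROUP),
    ADD (SERVER_PRINCIPAL_CHANGE_GROUP),
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP)
WITH (STATE = ON);
GO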

 

Be mindful that a busy server can flood your event log, so be precise about the data you want to collect. While it is possible to collect events at the server instance level for all database activity, doing so will quickly overwhelm the log. That’s why I recommend using Database Audit Specifications inside of the databases you want to audit. These are the groups you should consider at a minimum:

 

DATABASE_OBJECT_CHANGE_GROUP

DATABASE_PERMISSION_CHANGE_GROUP

DATABASE_PRINCIPAL_CHANGE_GROUP

DATABASE_PRINCIPAL_IMPERSONATION_GROUP

DATABASE_ROLE_MEMBER_CHANGE_GROUP
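
And here is a similar sketch for a Database Audit Specification built from the groups above, run inside a hypothetical database you want to audit and tied to the same SEM_Audit object; you can repeat this per database, since multiple Database Audit Specifications can feed one Server Audit.

USE [YourDatabase];  -- hypothetical database name; repeat per database you want audited
GO

-- Database Audit Specification: object, permission, principal, impersonation,
-- and role membership changes within this database.
CREATE DATABASE AUDIT SPECIFICATION [SEM_DatabaseAuditSpec]
FOR SERVER AUDIT [SEM_Audit]
    ADD (DATABASE_OBJECT_CHANGE_GROUP),
    ADD (DATABASE_PERMISSION_CHANGE_GROUP),
    ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
    ADD (DATABASE_PRINCIPAL_IMPERSONATION_GROUP),
    ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP)
WITH (STATE = ON);
GO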

 

You must review the groups and actions to decide if they meet your auditing requirements. The ones I have listed here are meant as a guide, a starting foundation upon which to build.

 

You will notice I didn’t include any groups or actions regarding query activity, such as a SELECT statement. I don’t like the idea of capturing anything that contains query data, especially update or insert data, and allowing that text to be stored in an event log or inside the SEM database.

 

SQL Server Audit is a great tool that doesn’t get enough love and attention, in my opinion. To me, the strength of this feature is how you can extend it to do things like auditing SQL Agent jobs. I’ve written an example here: https://thomaslarock.com/2017/10/audit-sql-server-jobs/

 

The downside to SQL Audit is the reporting and viewing of the audit event data. SQL Server Management Studio has a log viewer, but the user experience can be frustrating at times. By using SEM we create a better user experience, and not just for viewing event data. SEM also allows for the creation of Correlation Rules, letting us automate actions to take when a specific event occurs. Here’s an example:

 

 

I can create a custom rule that triggers an action; in this case, I will have an email sent should a database object change event be found. You can’t do that out of the box with SQL Server Management Studio.

 

If you are using SQL Audit, you should give SEM a trial and discover what is possible. If you are using SEM, you should consider leveraging SQL Audit to enhance your security. Together, SQL Audit and SEM offer you the opportunity to lower your risk of loss due to a data breach.

SolarWinds® Access Rights Manager (ARM) 2019.4 is available on the Customer Portal! Please refer to the release notes for a broad overview of this release.

 

Previous releases of ARM extended the existing access rights permission visibility for Active Directory, Exchange, and file servers to Microsoft OneDrive and Microsoft SharePoint Online, and introduced the ability to collect events from Microsoft OneDrive and SharePoint Online.

With ARM 2019.4, we now add the ability to provision users in managed Azure AD domains and to assign mailboxes and licenses.

 

Supporting hybrid environments also means we continue to improve ARM across the capabilities and platforms you use. ARM 2019.4 introduces improvements in Active Directory monitoring and alerting, as well as official support for Microsoft Windows Server 2019 editions.

 

What’s New in Access Rights Manager 2019.4?

 

  • Installation and configuration: Improved installation and configuration experience for new installation and upgrade scenarios.

 

  • Web Client - web dashboard: Use the new dashboard to get instant insight into what’s most important, or what needs to be addressed right now.

 

  • Active Directory - group policy monitoring: ARM now monitors if a group policy change has occurred and reports the change details.

 

  • Active Directory - alerting on user/group events: ARM now supports creation of alerts for any user/group on AD containers, making the configuration easier and covering more use cases beyond alerting on selected objects.

 

 

  • Azure AD/Office 365: Provision users in managed domains and assign mailboxes and licenses.

 

  • Defect fixes and architecture improvements: As with any release, we addressed product defects and introduced architectural optimizations, laying the foundation for coming features we plan to make available in the next releases.

 

The SolarWinds product team is excited to make these features available to you. We hope you enjoy them. Of course, please be sure to create new feature requests for any additional functionality you would like to see with ARM.

 

To help get you going quickly with this new version, below is a quick walkthrough of the new monitoring capabilities for Microsoft Active Directory, also available in the ARM Audit edition.

 

 

Identify CHANGES to GROUP POLICIES

 

Group policies are an important tool for managing Active Directory environments, and administrators should be aware if these have changed.

 

Now let’s look at how we can use Active Directory monitoring to answer the question, “What group policy has changed, and what are the change details?” ARM allows you to find this information via the Logbook in the thick client.

 

    

 

     1. Navigate to the Logbook view in the ARM thick client by clicking on “Logbook” in the navigation bar.

         The “Logbook” opens.

 

    

     2. Select the time period to be viewed by clicking the highlighted “from” date.

 

     

     3. Select the new date by clicking on the date in the date picker.

 

   

     4. Click “Apply.”

 

     5. Click the cell in the “Group Policy Changes” column of the date you’re interested in.

 

     6. In the upper window on the right side, you’ll see all group policy change events on the selected date, including who made each change and when. The lower window holds the details of each event. In our case, we have the “Maximum system log size” changed from “60096 kilobytes” to “60160 kilobytes” and the “Prevent local guests group from accessing application log” changed from “Not configured” to “Disabled.”

 

You can also get this information as a report via the “AD Logga” report, which can be scheduled to be sent periodically to your mailbox, helping you stay on top of what’s happening with group policy changes in your environment.

 

Conclusion

 

I hope this quick summary gives you a good understanding of some of the new features in ARM and how you can use ARM to get better visibility and control over your hybrid IT environment.

 

If you’re reading this and not already using SolarWinds Access Rights Manager, we encourage you to check out the free download. It’s free. It’s easy. Give it a shot!

Security Event Manager (SEM) 2019.4 is now available on your Customer Portal and solarwinds.com. The Release Notes are available here, and steps to upgrade your existing SEM appliance are here. The SEM online demo has also been updated and can be accessed from here, and you can see the dashboard in action in this video.

 

Firstly, you'll probably notice our new versioning format. New releases for SEM going forward will now use year.quarter, taking a similar approach to Orion® Platform product modules. SEM versions will be named with the four-digit year in which they were released, followed by the quarter of release. If there's a Service Release in between major releases, it will appear in the third position following the quarter, e.g., 2019.4.1.

 

So, what's included in this SEM release? This release mainly focuses on our migration from Flash, with new functionality added to our HTML5 interface including dashboards, user-defined groups, and email templates.

 

DASHBOARD

As the saying goes, a picture paints a thousand words, which is particularly true when it comes to log data. The Events page in SEM allows you to interact with your logs via filtering and keyword searching, but it can be difficult to spot any unusual activity or suspicious trends. That's where a dashboard comes into play: being able to visualize thousands of logs and build a picture of what's happening on your network can be hugely valuable when detecting threats. We've included several out-of-the-box charts based on some of the most common use cases we hear from our customers, including change management, authentication, and network traffic widgets. You can easily create custom widgets based on any filter within the Events page, and chart options include bar, pie, and donut, as well as line graphs for time-series data. Drilling into the log data behind each chart is vitally important when analyzing potential threats. You can easily view the corresponding log data within the Events page by clicking on a segment of a chart. Here's a glimpse of how our new dashboard looks. I hope you like what we've done:

 

 

 

USER-DEFINED GROUPS

You can now build and manage these groups via the HTML5 interface. User-defined groups contain data specific to your environment, such as user and computer names, sensitive files, approved USB devices, and so on. These groups can also act as whitelists and blacklists for use in correlation rules and filters, for example, alerting you to attempted access to a URL you've blacklisted. You can create these groups manually or import elements via a CSV file, and you can just as easily export group elements to a CSV. To ensure our out-of-the-box content remains relevant to an ever-changing threat landscape, we've updated several of our predefined groups, including SQL Injection/XSS vectors, anonymizer websites, and remote desktop websites.

 

 

 

EMAIL TEMPLATES

As part of the SEM 6.7 release, we introduced the ability to manage your correlation rules via the new interface, including the ability to select which email template you'd like to use as part of the alert. However, the creation and customization of those email templates still resided in the Flash console. SEM 2019.4 introduces the ability to build and customize these email templates within the new interface. These templates are incredibly valuable for adding context to email alerts, as well as for including information from log data within those alerts.

 

 

 

FILTER -> RULE

Your network is probably generating hundreds, if not thousands, of events every second, and trying to identify interesting logs from the deluge of log data is challenging. That's where filters come into play. You can rely on the predefined filters or create custom filters within SEM to home in on certain logs. But what if you want to create a correlation rule to alert or respond to those same events being generated on your network? Until now, you had to create a filter and then manually create a corresponding correlation rule. We've simplified this process and you can now send SEM filters to rule creation to quickly create new correlation rules based on a filter.

 


 

 

I really hope you like the direction we're going with Security Event Manager, especially the new user interface. As always, your feedback and ideas are greatly appreciated, so please provide any feedback you may have within the comments section below or within the SEM Release Candidate forum.

The release of Orion® version 2019.4 brings a lot of excitement to the SolarWinds® Service Desk team. It introduces an integration that enables a closed-loop workflow, which converts alerts detected by Orion into a service desk ticket and updates the Orion alert as the ticket is resolved. By streamlining this process, IT pros can react faster when performance issues or outages are detected. This helps expedite the resolution process, helping IT ensure the availability of the service that employees rely on to stay productive.

My good friend, tony.johnson, put together a great article on how to implement the integration, but we wanted to also share how you can maximize the value of this integration. Let’s take a look into how you can configure your alerts and your service desk for optimal results!

The SolarWinds Orion and SolarWinds Service Desk Integration

Before we jump into the configuration options, let’s talk about the value this integration brings to your IT operations. The core capability automatically converts alerts into tickets. This makes things much easier for IT pros, but that is only part of the story. The integration also:

  • Brings together IT operations and service information to improve visibility of employee-impacting issues, helping teams react to and resolve issues faster
  • Improves operational efficiency by automating bi-directional communication between SolarWinds Orion and SolarWinds Service Desk
  • Captures all alert data into your service records, allowing you to report on alert-generated incident trends and your team's efficiency in resolving these types of issues

 

To take full advantage of the integration’s capabilities, you will need to properly configure both systems. Fortunately, this can be accomplished relatively easily. The three-step process below outlines a best practice approach to implementing this integration.

 

Step 1: Game Planning

Although this step may seem like a no-brainer, we cannot stress its importance enough. At many organizations, the teams working in the Orion platform differ from those working in the service desk. They have different roles, responsibilities, priorities, and processes that they follow. By formalizing what you are trying to accomplish with this integration you can drive better alignment and accountability across teams. Keep in mind that this step may not require you to reinvent the wheel. The Orion Platform provides hundreds of pre-configured alerts, many of which you may already have activated. Now it’s just a matter of discussing which alerts you want sent to your service desk and how those tickets should be processed. A great way to accomplish this step is to have a classic whiteboard session. Some key questions to ask in this session are:

 

  • What types of alerts do we want sent to the service desk?
  • How should we categorize them?
  • Who should we assign them to?
  • How do we prioritize individual tickets?
  • Who should we notify when an alert-based ticket is created?
  • Do we want to set individual SLA rules on the alert-based tickets?
  • What information and attributes of the alert should be included in that ticket?
    • The general rule is to include all beneficial attributes. Not only could this information help you diagnose the issue, but it also can be used to automatically route, categorize, and prioritize the ticket.

 

It is important to note that the answers to these questions can vary based on the different types of alerts you are sending to the service desk. For example, the desired outcomes for alerts generated by Network Performance Monitor (NPM) could vary greatly from those for Server and Application Monitor (SAM). Throughout this post, we will focus on a specific scenario, but keep in mind that the flexibility of both Orion and SolarWinds Service Desk allows this integration to support many use cases.

Example Scenario: Active Directory Replication Failure

The Problem: Like many organizations, our company is running several mission-critical applications that our employees rely on to get their work done. We are using Active Directory (AD) to ensure the right users have the proper access levels to the applications essential to their positions. To help us manage AD, we utilize Server and Application Monitor (SAM) coupled with AppInsight for Active Directory for deeper visibility into this critical system. However, we have more than one domain controller, and if replication fails or is delayed, users may not be able to log in to their applications. To help address this, we want to escalate AD-generated alerts for replication failures to our service desk to provide better visibility and quicker resolutions.

 

The Whiteboard Session:

 

  • What types of alerts do we want to be sent to the service desk? -> Active Directory Replication Failure
  • What information and attributes of the alert should be included in that ticket? -> The Domain Controller Name
  • How should we categorize them? -> Category: Application; Subcategory: Active Directory
  • Who should we assign them to? -> Application Support Team
  • How do we prioritize individual tickets? -> Critical
  • Who should we notify when an alert-based ticket is created? -> Tier One Support Team
  • Do we want to set individual SLA rules on the alert-based tickets? -> Yes, we want service restored within 2 hours

Step 2: Configuring Orion Alerts

Now that you have a clear picture of your goals in converting an alert to a ticket, it is time to start configuring the two systems. We are going to start on the Orion Platform side, where you have two key configuration options:

  1. Customizing your alert attributes: Selecting the information you want included when an alert is sent.
  2. Adding the “Create SolarWinds Service Desk Incident” alert trigger: Setting that these specific alerts will be sent to your service desk.

 

Example Scenario: Active Directory Replication Failure

Let’s jump back into our use case from step one to build out our alerts.

  1. In the first step, we decided which attributes are to be included in the alert for “AppInsight for Active Directory: Alert me when replication fails.” We built it out to include these attributes:

  2. Now that you have the alert attributes set, let’s add the action to send these alerts to the service desk. Select the option below to add the action to your alert:

With the above configuration, alerts sent to your service desk will look like this:

Step 3: Configuring Your Service Desk

Now that we have our alerts configured properly, let’s start configuring the service desk. Here we will focus on three main areas:

  1. Building Automation rules
  2. Defining Service Level Agreements (SLAs)
  3. Creating reporting on alert-generated tickets


IT Pro Tip: When you are configuring the integration in your service desk (in the setup options), you have to designate a requester, which will be the user that all alert-generated tickets will be associated with. We recommend creating a “shell” or fake user for this requester to make it easier to configure SLAs and automation rules specific to this integration. This will also make it easier to visualize alert-generated tickets when viewing your Incident queue.


Setting Alert-Generated Incident Automation Rules


In SolarWinds Service Desk, automation rules allow you to define what actions you want to take on a ticket when it is created, commented on, or updated. These automated actions drive consistency in the way you route, prioritize, categorize, and process tickets. Setting automation rules for alert-generated tickets keeps the proper teams aware of performance issues, allowing them to quickly react to and address the situation.


Example Scenario: Automation Rule for Active Directory Replication Failure Alert
Now that we have configured the Orion side in step two, let’s build an automation rule that will triage, prioritize, and categorize the alert-generated ticket. This is a two-part process:

  1. First, set your conditions. When a ticket matches these conditions, the proper automated actions will take place. Here are a couple of key conditions:
  • Origin: You can set conditions based on the origin of the incident, and in our case, incidents coming from “SolarWinds Orion.” This ensures the automation rules will only run for tickets generated by this integration.



  • Keywords: Setting a keyword condition allows you to leverage the alert attributes we established earlier with your automation rule. In our situation, we are going to use keywords from the alert name to build out the rule.


 

IT Pro Tip: Using Multiple Attributes - Depending on your use case, you may want several attributes in your keyword condition when building an automation rule. To do this, you can use regular expressions for your keyword condition. For example, if you had two alert attributes you wanted to use, you could leverage the regular expression: (\s|\S)*. This allows you to search through the entire body of the incident to pinpoint your specified keyword criteria. This would look like:

 

(attribute1)(\s|\S)*(attribute2)

  2. Actions: Now select what you want your automation rule to do. For our example, I want my rule to:

 

    • Reassign the ticket to the Application Support Team
    • Categorize it as an Applications/Active Directory issue
    • Update the priority to Critical
    • Notify the Tier One Team that the issue is happening



Voila! Your automation rule is built.

 

IT Pro Tip: Cloning Automation Rules - You may want to build multiple automation rules for similar types of alerts. For example, you could build two automation rules for our scenario with slightly different actions:

 

  1. When the New York domain controller (NEWYADDS01v) is down, route the alert-generated tickets to the New York support team
  2. When the Los Angeles domain controller (LOSADDS01v) is down, route the alert-generated tickets to the Los Angeles support team


With the help of cloning capabilities, you can easily scale variations of your automation rules. This allows you to clone an existing rule and make your modifications without starting from scratch.

Setting Service Level Agreements (SLAs) for Orion Alert-Generated Incidents

 

You can set up individual SLA rules for the incidents created by this integration to set expectations for response and resolution times associated with alert-generated tickets.
Before we get started, here are a few things to consider:

  • In many cases, your SLA rules will rely on your previously developed automation rules. In the example above, the automation rule set the category and priority of the alert-generated ticket, both of which are criteria you can use for your SLA rule.
  • Earlier, we shared an IT Pro Tip about creating a “shell” user to use as the default requester for this integration. That user can also be used to define the scope of your SLA rule, helping you ensure these rules will only apply to alert-generated incidents.


Example Scenario: SLA Rule for Active Directory Replication Failure Alert

When Active Directory is down, our employees cannot access the applications they need to do their jobs. For this reason, we want to set the expectation that any replication failure alert will be resolved within two hours. Let’s build out this SLA rule:

  1. Set your SLA target: For this example, I am setting a target of “Not resolved” within 2 hours.



  2. Define your scope: We will use the data points we set with our above automation rule in this section.
    1. Category = Application
    2. Subcategory = Active Directory
    3. Priority = Critical
    4. Requester = Orion Alerts



  3. Set your action: This is where you set actions that are triggered when the SLA breaches. For our example, we are:
    1. Assigning to Anthony Campbell (Director of IT)
    2. Escalating the ticket to Tier 3 Application Support



Similar to automation rules, you may want to build specific SLA rules for the different types of alerts that will be sent to your service desk. For example, you may have different expectations for tickets generated by networking alerts versus application alerts. This will help you set performance standards and measurable goals across the various scenarios that can impact your IT services.

Reporting on Orion Alert-Generated Tickets

 

The last thing we want to dive into is how you can leverage the service desk reports to get a different perspective on Orion alerts. tony.johnson said it best, “The Orion Platform gives you great information on when the alert was triggered, and when the alert is re-set, however, it is missing the details on what was done to resolve the alert.”


This is where the service desk can help. Here are a handful of reports available out-of-the-box with SolarWinds Service Desk that provide you a more complete picture on how alerts are processed and resolved by your teams:

  1. Incident Trend Reports - View the days of the week you receive the most alerts and resolve the most alert-based incidents.
  2. Incident Heatmap - See which times of the day you experience the most alert-based incidents.
  3. Incident Throughput Report - Visualize how effective your team is at resolving alert-based incidents.
  4. Service Level Breach Report - Keep track of your agents' overall SLA compliance with alert-based incidents.

 

IT Pro Tip: Similar to automation rules, you can use the “Incident Orion” field in the reports module. This allows you to build reports that only reflect incidents that are created by the integration.

 

Bringing It All Together

 

We’ve walked through configuring both Orion and your service desk to get optimal results with this integration. Let’s tie it all together and talk through a real-world scenario.

 

Your Active Directory is experiencing a replication failure. An alert is generated, which is instantly converted into a service desk ticket. This ticket is prioritized as critical and assigned to the application support team.

 

The Tier One team is also notified that we are experiencing an AD replication issue. They are seeing tickets submitted by end users that seem related—users are unable to sign into Salesforce.

 

 

Per our processes, a problem record is promptly created and associated with the end users and alert-generated tickets. This allows the application support team to consolidate all the tickets associated with this issue, giving them valuable data that could help them quickly diagnose the root cause of the issue and work towards a resolution.

 

 

At the same time, the Help Desk Manager posts an announcement to the employee service portal that we are experiencing an issue when logging into Salesforce and we are actively working on resolving the problem. Now employees are aware of the situation and no longer submitting tickets, saving Tier One from a barrage of inbound tickets in their queues.

 



The Application Support team figures out what the problem is and deploys a fix that resolves the issue. They then resolve the problem record, which resolves all attached tickets, including the one generated by Orion. The team was able to react fast, keep the organization informed on the situation, and quickly diagnose and resolve the issue. IT saves the day again.



Although the above scenario may be a common use case, it is only one of the vast number of use cases this integration can support. As you begin using this integration, we would love to learn more about your use cases and the impact they have on your team and your organization. Share your stories in the comments below!

 

The SolarWinds® product management team is happy to announce the general availability of all 14 products on Orion Platform 2019.4. Every product has new features available in this release. Download now through your Customer Portal and solarwinds.com. By downloading the unified SolarWinds Orion installer from any one of those download sources, you'll be able to install or upgrade your entire Orion environment in a single, streamlined upgrade session.


 

What's New for Orion Platform 2019.4

 

Updates to the Orion® Platform will provide you with:

  • Deployment flexibility - SolarWinds and Microsoft have partnered to enable the Orion Platform and its modules, including Database Performance Analyzer (DPA), to be deployed from the Azure Marketplace, simplifying and accelerating the process to deploy the platform into an Azure subscription.
  • Support for Azure SQL Database Managed Instance - Deploy the Orion Platform database with support for the latest version of Azure SQL Database.
  • Leverage your Azure subscription to:
    • Host the Orion server
    • Host the Orion database using Azure SQL Database
    • Host the Orion database using Azure SQL Database Managed Instance.
    • Host the Orion database as an Azure VM
  • Orion Maps enhancements - A redesigned Entity Library for quickly identifying what you need, enhancements for bulk administration, the ability to add custom images, and the ability to manually define topology relationships without ever leaving the editor.
  • Integration with SolarWinds Service Desk - Improve time-to-resolution via integration with the SolarWinds ITSM solution, enabling service desk tickets to be automatically created from Orion Alerts.
  • Web performance improvements across several Orion Platform modules, including Network Performance Monitor (NPM), NetFlow Traffic Analyzer (NTA), and Network Configuration Manager (NCM).
  • Standardized release numbering for easier compatibility comparison. All products in this release will be versioned 2019.4.

 

What's New for Systems Management Products

 

This release of the systems portfolio expands our capability to monitor additional devices, many of which have been top asks from our customer base. Upgrade to enjoy enhanced Microsoft Active Directory monitoring through domain trust support, simplified REST API monitoring, Hardware Health visibility for Nutanix clusters, support for Dell EMC Data Domain devices, and much more.

 

What's New for Network Management Products

 

This release of the network portfolio adds Device View, Real-Time Charts, Meraki flow support, visibility for Palo Alto policies, Cisco Unified Call Manager support, and more. We've also done a great deal of work to improve overall webpage performance and produce a better user experience.

 

What's Next

 

The SolarWinds product team is constantly looking ahead to build world-class monitoring solutions to solve your monitoring woes. Watch and subscribe to What We Are Working On to get an updated view of what's next for the Orion Platform and its modules. Let us know how we're doing and what we can deliver to keep you ahead of the curve.

I'm excited to announce general availability of SolarWinds Identity Monitor, an easy-to-use cloud-based service specialized in preventing account takeover. Identity Monitor is enabled through a partnership with SpyCloud, experts in recovering data breach information. Since this is the introductory post about Identity Monitor, I wanted to talk about the main problem it solves and then give you a quick overview of the product.

 

What Is Account Takeover?

Account takeover is exactly what it sounds like--when a bad guy obtains your credentials associated with one site, and then tries to use them to take over your accounts on other sites.

As someone in IT, you probably use unique, strong passwords with multi-factor authentication for every site or service you use (right?), and at work, you probably enforce secure policies for the servers and applications you control, but... are your users as careful as you?  Do they ever reuse passwords, mixing them across work with non-work services? They do, and this is why account takeover works--because once the bad guys get one set of credentials, they try them on hundreds of other sites using credential stuffing tools to find out what else they can access... and then the bad stuff starts to happen.

 

How Do You Prevent Account Takeover?

You can take all the preventative steps in the world, but there will continue to be data breaches where your credentials and information are taken, and once your credentials are compromised, the only way to protect yourself is to change them. Seems simple--but first you have to know you've been compromised to take action.

 

Identity Monitor has billions of records from previous data breaches and can tell you if you or your company are compromised right now. Identity Monitor presents this data in a timeline and summarizes it by asset type, allowing you to drill down on specific past breaches and see what credentials were exposed. Data can include usernames, email addresses, passwords (both encrypted and unencrypted), addresses, birthdays, phone numbers--almost anything you've ever entered into a website.

 

Identity Monitor continuously scours the internet for new data breaches, and as this new information is ingested, it will analyze the data and alert you to new compromises. Speed is the key here--you need to know about new compromises of your users as fast as possible.

 

If the hair on the back of your neck is standing up and you're ready to see how deep the rabbit hole goes, go sign up for a free Identity Monitor account.   Otherwise, let's look at how Identity Monitor works, evaluate how compromised your company is right now, and find out what kind of information you might see.

 

Am I Compromised Right Now?

As IT professionals, part of our job is to protect our company's physical and digital assets. Let's log in, look at timelines, and drill into some detail. Here I have one domain registered (example.org) and I can see the timeline of breaches on top, the most recent breaches on the right, and the types of compromised information.

Let's take a closer look at the breached asset types.

You can see how email addresses are compromised, how many passwords are known, and the amount of personally identifiable information available. Let's drill down on the emails and see what's exposed. I'll pick the first one since it was just a few days ago and is marked critical... and I'll click to expose the password (which turns out to be "secret").

I am also interested to see what personal information is available, so I click View Raw Data.

Here you can see the extensive amount of personally identifiable information... and it's scary.

 

Once you get a feel for the scope and type of exposure your company and employees have, you can address the current situation, and then decide how to improve your processes going forward. Each breach has advice on remediation too.

 

Ongoing Protection

Let's say we've addressed all the problems Identity Monitor found, but sadly we know another security breach is around the corner (just look at the history on the timeline). How does Identity Monitor protect us going forward? By continuously scouring the internet for new breaches, digesting the data as quickly as possible, and alerting you. In the Email Assets example above, you can see there were only a few days between the breach date and the date it was published in Identity Monitor. We also get this handy email alert telling us there was a breach and linking us to the details:

And you aren't limited to just your domain. You can extend your protection to any email address as long as the email owner gives permission. This is great for watching personal emails of critical employees (like your executive team), distribution lists used for signing up for external services, or any other email used for company business.

 

Sign up now for free! Pricing is by number of employees and starts at $1795 USD for 100 employees.

 

These are the primary use cases Identity Monitor covers, but there's more--keep watching for more blog posts.

We are delighted to introduce our latest Dameware Remote Everywhere update: Viewer 6.00.07 for Mac.

In January 2019, we introduced our entirely re-styled Windows Viewer – in which we had consolidated all menu and action items into a single, easy to navigate top bar – giving an organized and scalable presentation to the DRE Viewer.

On initial launch, you’ll notice the Viewer changes immediately:

Viewer_Whole

But despite the menu changes, and consolidation of all menu items on the top bar, there’s been no compromise to functionality – all the features and functions have been homogenized and streamlined.

Menu_xplore_1

All your session stats and session telemetry remain wholly accessible:

session_telem

The menus are slick, navigable, and highly responsive, making this a real pleasure to use.

Summary of release:

– SolarWinds Take Control Viewer update: 6.00.07
– No agent update


The Problem…

As the "monitoring person," we often find ourselves dealing with keeping the records in the database correct and current. The problem is, no matter how hard we try, our end users don't always keep us up to date when a device is turned off. Normally, we find out a device was turned off when we see a NODE DOWN alert hit the board. The team responsible will sometimes ignore the notification because to them, the node's no longer in use, so they delete the email and never circle back to ensure the device is removed from all the different databases, including the monitoring database.

 

Well, one day a few weeks ago, coming off a great SolarWinds User Group (SWUG) in New York, my brain was spinning with ideas on how to automate simple tasks when the idea of "Dead Nodes" hit me. I thought about the common problem of having nodes on a report showing as "Down" when they were no longer in use. And knowing the power of Server & Application Monitor (SAM) and some of the things I've already done within that tool, I knew it was possible to address this use case easily within SAM.

 

The Birth of an Idea

So, I turned to THWACK to see if anyone else had the same idea. I found a great post with a great script, and I wanted to take it one step further. The original post I found would deal with the dead nodes, but it wasn't integrated into an alert. Since I wanted notifications sent to the system owners, this wouldn't work for me.  So, I reached out to Kevin M. Sparenberg, told him my idea, and he came back with "Let me try it out!" A few hours later, while at an amusement park, I found myself working with Kevin to perfect the alert. The alert was key for me because, as the monitoring guy, I think it's important to at least share with my end users what I'm doing with their devices. And that was lacking from the original post I found on THWACK. Kevin and I worked together to develop the SWQL query to define the conditions, write the script to run in PowerShell to do the heavy lifting, and craft the email notification.

 

I'm going to walk you through the way I built this alert with some help from the community. I'll cover three basic areas: Frequency of the Alert, Trigger Condition, and Alert Actions.

 

At the very end of this post are some things you may encounter using the examples in your environment. I ran into a few of them, I knew about a few others, and Kevin reminded me about one or two. I highly recommend you review the Some System Requirements section before importing the alert and scripts.

 

I've done my due diligence and provided you the necessary warnings. Now it's off to the races!

 

What's the Frequency?

Since our dead nodes alert isn't exactly mission-critical—it's more like good housekeeping—there's no need to check it every minute (which is the default). After a little discussion, I decided once an hour was enough for our needs. You could scale this back to once a day or even once a week (168 hours) if you like.

 

The Power of SQL/SWQL in an Alert Trigger

Thanks to Kevin's knowledge and understanding of SQL and SWQL, he was able to develop the original SWQL query based on the key points I wanted, which were straightforward. I wanted to find all the nodes in my system reporting as "DOWN" for the past 30 days. He came back with the following based off the original thread:

SELECT Nodes.Uri, Nodes.DisplayName FROM Orion.Nodes AS Nodes
 JOIN Orion.ResponseTime AS RT
 ON Nodes.NodeID = RT.NodeID  
 WHERE RT.DateTime > ADDDAY(-30, GETUTCDATE())
 AND Nodes.UnManaged = False
 GROUP BY Nodes.NodeID, Nodes.Caption, Nodes.Uri, Nodes.UnManaged
 HAVING MAX(RT.Availability) = 0

 

I opened SWQL Studio and ran this query to see if it passed the "sniff" test. The results looked pretty good, so I looped in my manager.

 

After speaking with my manager, I realized we'd cast our net a little too wide. Within my environment, I have some nodes that have been down for over 30 days but shouldn't be considered "Dead." These nodes are normally found within some of our locations and might be offline because of a natural disaster or the stores simply being remodeled. So, I took what Kevin gave me and changed it up to make sure it wasn't pulling in any devices down for a known reason. The result was this:

SELECT Nodes.Uri, Nodes.DisplayName FROM Orion.Nodes AS Nodes
 JOIN Orion.ResponseTime AS RT
 ON Nodes.NodeID = RT.NodeID  
 WHERE RT.DateTime > ADDDAY(-30, GETUTCDATE())
 AND Nodes.UnManaged = False
  AND Nodes.CustomProperties.Store_Known_Down = False
 GROUP BY Nodes.NodeID, Nodes.Caption, Nodes.Uri, Nodes.UnManaged
 HAVING MAX(RT.Availability) = 0  

It should be noted that Store_Known_Down is a Yes/No custom property I've tied to nodes so I can mark them as being down for a known reason. Your alert logic will probably differ, but it's important to think about these edge cases.

Defining the Alert Actions

With the list of devices from the alert trigger in hand, we next had to address the actions when the trigger occurs. For me, it was key to have both an email message to the system owners and the alert add a "Decommissioned Date" to the existing custom property with the same name. We use this custom property within my environment to track when a node is no longer in use, so having this date was critical for both reporting and alerting logic.

 

Kevin again came to the rescue and helped me develop the PowerShell script. We then tested the alert in his test lab and BINGO! The system was unmanaged and the custom property value was updated with the current date/time. But more details on the script later.

 

The Proof Is in the Results

So, after perfecting the query and the script, it was time to test it out. Kevin spun things up in his lab. We started by crafting a new alert and testing the query logic in the alert editor:

The query passed validation, so we've got no syntax errors and are good to move on to the next step.

 

Manually Testing the PowerShell Script

Before we could define the alert actions, we wanted to test all the parts, including the PowerShell script.

 

The complete script is here, and commented thoroughly. There are a few important parts to this script. The only place you will absolutely need to edit is the authentication block at lines 21-23, where you'll need to put in your Orion server and credentials.

<#
Script: Alert_Unmanage-Node.ps1
Arguments: The node ID in question
Authors:    Ben Keen (the_ben_keen) and Kevin M. Sparenberg (KMSigma)
 
Version: 1.0 - initial release
 
 
#>
if ( -not $args[0] )
{
    Write-Error -Message "You must provide the Node ID as a parameter to the script"
}
else
{
 
    # I hate using the "args" nomenclature, so I'm just going to assign it to a better name
    $NodeID = $args[0]
    
    # Authentication
    $SwisHostname = "MyOrionServer.Domain.Local"
    $SwisUsername = "MyAdminAccount"
    $SwisPassword = "MyAdminPassword"
 
    # Build a SWIS Connection
    $SwisConnection = Connect-Swis -Hostname $SwisHostname -UserName $SwisUsername -Password $SwisPassword
 
    # When does the unmanage start?  Right now!
    $CurrentDate = ( Get-Date ).ToUniversalTime()
    
    # Flip the status to Unmanaged with no end date
    # The parameters are:
    # - The Node ID (in N:##) format
    # - The start date of the unmanage time
    # - The end date of the unmanage time (now + 10 years)
    # - false - no clue why this is required, but it is
    $Results = Invoke-SwisVerb -SwisConnection $SwisConnection -EntityName "Orion.Nodes" -Verb "Unmanage" -Arguments @( "N:$( $NodeID )", $CurrentDate, $CurrentDate.AddYears(10), $false )
 
    # We need the full URI to set properties
    $Uri = Get-SwisData -SwisConnection $SwisConnection -Query "SELECT Uri FROM Orion.Nodes WHERE NodeID = $NodeID"
    # Then we need to append it with the CustomProperties identifier
    # The [$Uri += "/CustomProperties"] is the equivalent of [$Uri = $Uri + "/CustomProperties"]
    $Uri += "/CustomProperties"
 
    # Set the Custom Property
    # Parameters are:
    # - The URI of the node in question's custom properties
    # - A hashtable of the properties and the values
    #      Denoted by @{ PropertyName1 = PropertyValue1; PropertyName2 = PropertyValue2; ... }
    $CustomProperty = @{ "Decommissioned_Date" = $CurrentDate }
    Set-SwisObject -SwisConnection $SwisConnection -Uri $Uri -Properties $CustomProperty
}

 

You'll notice on line 18, we make reference to the $args variable. These are the parameters you pass to this script. For this script, it's the Node ID of the device we want to decommission.  This script is only expecting a single node ID to be passed, so we are only looking at $args[0] (the first argument in the variable).

 

On line 37, we set the device to the Unmanage status, and later on lines 50-51, we set the decommission date custom property. In reality, there are only about six lines of this script that do any work. The rest are comments so we can understand what we did even years down the road.

 

To test it, we opened a PowerShell prompt and then typed:

 

D:\Scripts\Alert_Unmanage-Node.ps1 62

 

This is the full path to the script, including the extension, a space, and then the node ID for marking "dead."

 

When executed against a testing node, we got no errors in the PowerShell prompt and the Orion pages showed the results we expected. Nice!

As you can see, the node was switched to Unmanaged and a Decommissioned Date was added.

 

Now that I know the script works, I can add it to an alert action.

 

Add an action for Execute an External Program and then fill in the details.

The full path doesn't show up in a screenshot, so I'll put it all here for you:

 

"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe" -File "D:\Scripts\Alert_Unmanage-Node.ps1" ${N=SwisEntity;M=NodeID}

 

It's a very long line, but simple in execution. Let me break it down:

                                                          

"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

Full path to the PowerShell executable

-File

Parameter telling PowerShell to run the script in the next position

"D:\Scripts\Alert_Unmanage-Node.ps1"

The full path to the script. If you save this elsewhere on your computer, be sure to update the path.

${N=SwisEntity;M=NodeID}

SolarWinds variable containing the NodeID for the alerted node

 

Customizing the Alert Email

The user experience is key in everything, but especially in monitoring. If you're going to use the information in this post, make sure you spend some time crafting the message sent. I wrote it based on how my end users digest their alerts.  Your end users may view their alerts differently. I don't need much more than the basics for this type of alert message. I kept most of the default message and then just added some language about it being a dead node. Below is my example of the alert message.  [Yes, I know I have a typo in the first line]

 

So, I have the Frequency of the Alert, Trigger Condition, and Alert Actions (execute a script and send an email)—everything we need for this alert. When completed, the trigger actions list looked like this:

And that's pretty much it for the alert. There are no reset actions, so we're done. I just clicked through the wizard to save it. In my environment, I didn't enable the alert yet. I needed to make everyone aware of what was happening first.

 

The Results Are In

After clearing it with the necessary teams, I enabled the alert. Within a few minutes, the first system was found, flipped, and timestamped.

The results speak for themselves. My Orion server will no longer waste compute power trying to poll devices that have been offline for 30 days, the associated teams got a message saying I've stopped watching their devices, and I can make a simple custom query resource to show me all unmanaged devices with a decommission date.

 

Edit a dashboard, add a new widget, search for a Custom Query widget, drag it into your dashboard, and then save the layout.

 

Edit the widget. Provide a clear name and enter:

SELECT  N.Caption AS [Node Name]
      , CONCAT('/NetPerfMon/images/Vendors/', N.VendorIcon) AS [_IconFor_Node Name]
      , N.DetailsURL AS [_LinkFor_Node Name]
      , N.CustomProperties.Decommissioned_Date AS [Decommission Date]
FROM Orion.Nodes AS N
WHERE N.Unmanaged = 'TRUE'
   AND N.CustomProperties.Decommissioned_Date IS NOT NULL

For the custom SWQL Query.

 

If you want to enable the search, enter:

SELECT  N.Caption AS [Node Name]
      , CONCAT('/NetPerfMon/images/Vendors/', N.VendorIcon) AS [_IconFor_Node Name]
      , N.DetailsURL AS [_LinkFor_Node Name]
      , N.CustomProperties.Decommissioned_Date AS [Decommission Date]
FROM Orion.Nodes AS N
WHERE N.Unmanaged = 'TRUE'
   AND N.CustomProperties.Decommissioned_Date IS NOT NULL
   AND N.Caption LIKE '%${SEARCH_STRING}%'

For the Search query.

 

When done, it'll look like this:

Save that resource and now you have a quick and easy way to search for unmanaged nodes, with hover-over information to boot.

 

In Summary

After all this was completed, I was very pleased with the results, but began to look around for some other changes. I've already thought of some ways to tweak this logic, improve the alert language, and leverage the SolarWinds Orion API to do more of my work for me.


Some System Requirements

Since this was my first foray into using a script action, I needed to do some additional work. You may not need to do all of these, depending on the way your infrastructure is architected.

 

PowerShell Execution Requirements

Depending on how your Orion server is configured, you may not be able to natively execute PowerShell scripts. This is part of the Execution Policy and it's controlled by several things, including Group Policy. To check the execution policy, open PowerShell as an Administrator and execute:

 

Get-ExecutionPolicy

 

If the results are either RemoteSigned or Unrestricted you can already run PowerShell scripts on this machine. If it's anything else, you'll need to change the policy. This falls outside the scope of this document, but you can find more information about Execution Policies in the Microsoft documentation.

 

SolarWinds Orion PowerShell Module

To connect to the SolarWinds Information Service, you'll need to install the SolarWinds Orion PowerShell Module (SwisPowerShell). This module is freely available and published on the PowerShell Gallery. To install it on your server, open PowerShell as an Administrator and execute:

 

Install-Module -Name SwisPowerShell -Scope AllUsers -Force

 

If this is the first PowerShell module you're installing, you may get prompted to approve the NuGet package provider. This is expected, and you can answer "Yes."  The above line says to install the PowerShell module and make it available for all users on that machine.

 

To validate the module was installed correctly, execute:

 

Get-Module -ListAvailable -Name SwisPowerShell

 

If you get a result showing a version, then it's installed correctly.

 

Custom Properties

For the script to execute correctly, you need to have a custom property called "Decommissioned_Date" with the date/time data type and assigned to nodes. To create this custom property, within your admin pages, navigate to the Manage Custom Properties page and click "Add Custom Property."

 

This custom property will be based on nodes.

 

Provide the name, give it a description, and select the format as Date/Time. Be sure to keep the "required property" checkbox deselected.

 

Lastly, don't manually assign nodes with this custom property. We'll let the script do the work.

 

Note: if you choose to use a different name for your custom property, be sure to update it within the PowerShell script (line 50).

We are very pleased to announce our latest Dameware Remote Everywhere release. This release, which includes an updated Windows Agent and Windows Console & Viewer revision in addition to a variety of customer-driven improvements, also brings our latest feature: in-session video calling!

All the details…

In version 7.00.07 for Windows, after you launch a DRE session, you will now have a new option on the drop-down menu to “Start Video Call”.

New_menu_item1

Selecting this option immediately initiates a video call request to your connected partner. That person can accept or reject your request.

Rejecting simply drops the inbound call, and in no way impacts the session itself. Accepting the call will immediately launch our video conferencing, allowing a two-way exchange of voice and video streams. As with VoIP calls, your primary audio device will be enabled by default – now we’re adding in your primary camera device as well to enable video calling. This is particularly handy if the end user wants to show the technician something that will help troubleshooting on their end – a cabling configuration, port setup, etc.

Also in this release, we’re making a major enhancement to our Admin Area in the form of our new Take Control Widget:

Widget_On

This new Widget allows users to quickly see the support request queue, how licenses are being consumed at that moment, and offers simple ways for techs to both create and transfer sessions – and it’s all in one overlaid dialog box!

Note: Introduced with this new Widget is the ability to administratively disable your local license consumption. It’s as easy as switching the toggle to “Off”; you will terminate your use of a license BUT you will be able to continue to perform other administrative functions, such as running reports, designing surveys, and so on:

 

Widget_Off

Summary of release:

Windows Agent, Viewer & Console 7.00.07
FEATURE: Added video calling to agent-based (unattended) sessions
FEATURE: Added Session Widget to Administration Area
FEATURE: Added ability to disable license consumption on the Admin Area
FEATURE: Ensure TCP 3377 is configurable as a backup connectivity port
FEATURE: Update the PowerShell interaction to advise on <V5 interactions
UPDATE: Revision on the “Blank Screen” option
UPDATE: Added greater detail to Admin area audit notifications
UPDATE: Updated Calendar interactions on Admin area
BUGFIX: Resolve the application size estimation of 0 KB

Mac Agent, Applet and Console 6.00.05
FEATURE: Allows users to see administrative functions
BUGFIX: Resolve issue where session could lose connection following a restart command

Not to be overshadowed by the excitement around the introduction of SolarWinds® Service Desk earlier this summer, we’re excited to introduce you to SolarWinds® Discovery. This technology provides your organization the ability to discover, map, and manage your software and hardware assets directly in your service desk.

 

SolarWinds Discovery utilizes cloud-based technology to make it easier to implement, manage, and scale throughout your organization, helping you discover your IP connected devices with just a small footprint.

 




Now you may be thinking, “Discovery? Don’t I already have this functionality with other SolarWinds products I use?” Depending on the products, the answer is most likely yes. Many SolarWinds solutions have discovery components included, like Network Performance Monitor or Server & Application Monitor on the SolarWinds Orion® Platform. However, they are helping your organization solve a different set of problems.

 

The discovery mechanisms used by Orion help you monitor asset performance, generate system alerts, or pinpoint vulnerabilities in your IT infrastructure.

 

On the other hand, SolarWinds Discovery helps you leverage your asset data to support your IT service management (ITSM) and IT asset management (ITAM) processes.

 

 

Let’s take a deeper look into the benefits SolarWinds Discovery can bring to the ITSM and ITAM capabilities provided by your SolarWinds Service Desk.

 

Improving Service Management Processes

SolarWinds Discovery populates asset information directly into your service desk, giving your technicians visibility into data that can help them diagnose issues quicker. Let’s say you have an employee (end user) who is having an issue accessing a particular software.

 

Because SolarWinds Discovery collects all the software titles installed on your computing devices, you can quickly look up the employee’s devices and see what version of the software they are currently running. Within a matter of seconds, you have the information you need to effectively troubleshoot and quickly resolve the issue.

 

 

The data that SolarWinds Discovery finds can also be used to help your service desk mitigate risks. SolarWinds Service Desk allows you to designate software titles as Greynet, meaning they are either illegal, not approved by your organization, or even a potential virus.

 

When SolarWinds Discovery finds a software title labeled Greynet, a notification is generated to give your agents visibility into the potential issue. Check out how FirstHealth of the Carolinas was able to utilize SolarWinds Discovery to pinpoint devices that were infected with ransomware, which ultimately helped them remove it without paying the demanded ransom.

 

 

Aligning your Assets with your Configuration Management Database (CMDB)

When SolarWinds Discovery finds assets throughout your infrastructure, they are automatically converted to Configuration Items (CIs) and populated into the CMDB that is included with your SolarWinds Service Desk. This allows you to create relationships between CIs, giving you a better picture of how the components of your infrastructure interact with each other and support IT services you deliver.

 

In turn, this can help your agents evaluate the root cause of a larger issue impacting your organization, so they can work on resolving it quickly. Also, by understanding the relationships between your CIs, you can better evaluate the impacts associated with changes you are making to your infrastructure, which helps your team understand and mitigate potential change-related risks.



 

Your CMDB can provide a lot of value to your organization, but it is imperative that it remains complete and up-to-date in order to take advantage of its full capabilities. By combining your CMDB with SolarWinds Discovery, additions and changes to your IT infrastructure will continually be reflected in your service desk.

 

Leveraging Discovery for IT Asset Management Use Cases

SolarWinds Service Desk comes with an IT asset management module, helping you manage the capital expenditures (CAPEX) and lifecycle of the devices in your infrastructure. SolarWinds Discovery is a critical aspect of these capabilities, as it helps you locate all your assets and collect additional information necessary for lifecycle analysis, such as installed software titles and warranty information.

 

SolarWinds Discovery also helps you lower your CAPEX by giving you greater visibility into the assets you own. For example, many organizations spend money on assets they do not need, specifically on assets like computers and printers. This is often a result of a lack of visibility into what assets they already have, so they end up purchasing instead of utilizing what is already in their inventory.




Also, SolarWinds Service Desk comes with software compliance capabilities, which help organizations avoid costly true-up expenses incurred when over-using software titles based on licensing contracts.

 

SolarWinds Discovery finds your installed software titles, giving you a clear picture of what is being utilized. These installs can then be vetted against your software licensing contracts, allowing you to build compliance reports to show both overutilization and underutilization.

How does SolarWinds Discovery work?

SolarWinds Discovery provides a suite of technologies to give you a flexible approach to discovering your IT assets no matter how your IT infrastructure is configured. Let’s take a look at the three discovery options available:

  • Agent-based
  • Agentless
  • Integrations



Agent-based Discovery


The SolarWinds Discovery Agent is a lightweight piece of software that can be installed on your Windows® and Apple® computing devices as well as Android® and iOS® mobile devices. Light and mighty, the agent can collect over 200 data points and the installed software titles from each device.
The agent takes a snapshot of the device every 24 hours of run-time (roughly every three days for standard users or every day and a half for IT pros). Built for easy deployment, organizations can use Group Policy or Domain Logon method to quickly install the agent throughout all their computing devices.
The agent enables software compliance and Greynet notification capabilities discussed above. It also highlights computers that have not reported back in the last seven days, helping you visualize devices that are potentially being misused or underused. This is an ideal discovery option for computing devices issued to remote workers who may not be frequently on company networks where other discovery technologies may be in use.

Agentless Discovery


The SolarWinds Discovery Scanner provides an agentless way to find the IP-connected devices throughout your infrastructure. The Linux-based technology is installed on an individual subnet, and it can be extended to other subnets using multiple methods, for example, giving the scanner visibility to an ARP table located on a router. The system allows you to set the scanning frequency so it is active at optimal times. It also allows you to import SNMP and SSH credentials to collect additional information on each device.
Compared to the agent, the scanner does not collect the same breadth of data points on computing devices. However, the scanner will find all of the non-computing devices an agent cannot be installed on. For many organizations, non-computing assets make up the majority of the total asset inventory. The scanner helps you get a fuller picture of your infrastructure. This is a critical component in keeping the SolarWinds Service Desk CMDB populated so you can map your devices’ relationships and dependencies.

Discovery Integrations

SolarWinds Discovery offers several out-of-the-box integrations with some of the industry leading configuration management tools, helping you bring device information from those systems directly into your service desk.
Available integrations:

  • Microsoft® System Center Configuration Manager (SCCM)
  • VMware vCenter®
  • Google Chrome® OS

 

Implementing Multiple Discovery Methods

By leveraging multiple discovery methods, you will be better equipped to collect the asset data needed to meet your organization's needs.

 

A good principle to follow when implementing multiple discovery methods is to use the scanner to get a broad picture of your IP-connected devices, then add the agent and/or integrations to get deeper information on the applicable devices.

 

For example, you may support Windows, Apple, and Chrome computing devices that you would like to increase your visibility on. You may also have a heavy VMware footprint and hundreds of IP connected devices you would like to track.

 

In this scenario, you can install the agent on your Windows and Apple devices, activate the ChromeOS and vCenter integrations to collect data on those assets, and install the scanner to collect data on everything else.

 

By combining the different discovery technologies, you will get a broad and balanced view of your IT infrastructure.

 

Get more details on the SolarWinds Discovery technical specifications.

 

What’s Next for SolarWinds Discovery

 

We are currently working on deepening the SolarWinds Discovery Scanner capabilities to better support organizations that are predominantly Windows shops. This will include a Windows installer, allowing customers to install the scanner on either Linux or Windows-based servers. Additionally, this will include the ability to add WMI credentials when scanning devices, greatly increasing the number of data points you can discover on Windows devices.


SolarWinds Discovery can help you maximize the value of SolarWinds Service Desk for both your IT pros and your organization. If you have any questions, feedback, or ideas around SolarWinds Discovery, please comment below or visit the SolarWinds Product Blog Forum.


Revisiting AppStack™

Posted by serena Employee Aug 23, 2019

After a long tenure working on the Orion® Platform, I’ve recently shifted my responsibilities to fully focus on Server & Application Monitor (SAM). Features designed on the platform and in SAM have eye-opening similarities due to deep integration between SAM, Virtualization Manager (VMAN), Web Performance Monitor (WPM), and other heavy hitters in our systems portfolio. The same tenets of componentization and shareability demanded by the Orion Platform exist in AppStack the way they do for PerfStack or the newest generation of Orion maps.

 

In honor of this revelation and how far our integration story has come since the first introduction of AppStack in 2014, I’d like to revisit this milestone feature and show those new to the SolarWinds systems portfolio the power of what we provide. For those who enjoy nostalgia, revisit the first AppStack post here https://thwack.solarwinds.com/community/solarwinds-community/product-blog/blog/2014/11/03/appstack. Personally, I was taken aback by the amount of change that’s occurred in the UI itself.

 

Welcome to 2014, amirite? (I stole this screenshot from Jeremy's original 2014 post.)

Fast forward to 2019, and the look and feel is quite different. Navigate to AppStack through the menu bar, or enjoy the contextual AppStack widget on the details page for an entity.

For those who land on the full AppStack view today, you'll notice we have new entities appearing in the stack with the inclusion of container monitoring.

When we as Product Managers introduce the capability to monitor new entities such as containers, we must first ask if it deserves a place in the AppStack. For containers, this is certainly true, due to their ephemeral nature and clear distinction as a generic entity type. The same can be said for the improvements to Cisco UCS monitoring, where SAM added chassis, blade, and rack server statistics into the AppStack view. In the case of VMware vSAN entities, however, you'll notice they're included in AppStack in a more subtle way, aligned with customer expectations for hyperconverged infrastructure.

 

In 2019.2 versions of the platform and later, the spotlight workflow is still an effective tool to quickly analyze where the problem might lie along your infrastructure stack.

The subtle difference lies in the changes to node status in the Orion Platform 2019.2 release. With simplified status calculations, and clear contributors detailed in the popovers, it's easier than ever to navigate to where you need to drill in for detailed troubleshooting. 

With additional changes from VMAN 8.4 to add virtual entities as status contributors and the ability to control the status contributors via the Node status contributors page, the AppStack solution becomes even more powerful. Through continued improvement and integration throughout the Orion Platform and the system portfolio, AppStack in 2019 has aged well and can help you navigate the intricacies and quirks of your environment.

Supplementing AppStack capabilities with the addition of new Orion Maps and PerfStack, you now have a full toolset available to visualize your environment, narrow down the problem, and then troubleshoot it in depth in real time.

 

Now that we've walked through how AppStack has grown over the years, I'd love to hear from you, whether you're new to AppStack or a longtime user. What was your introduction to AppStack? Was it back in 2014 or the newer versions available today? What would you like to see improved in the future, and what would you like to see preserved to keep the heart of AppStack beating strong for the next generation of Systems Management product releases? Put your feature requests into Server & Application Monitor Feature Requests for tracking and community input.

SolarWinds has a long history of being easy to try and easy to buy. Those of you who own two or more Orion Platform product modules may have realized, usually when planning your next upgrade, that it's not necessarily easy to know which product module versions are compatible with others. While figuring this out may not be too terribly difficult when you own only two Orion product modules, the complexity rises significantly with each additional product module you purchase thereafter. Imagine you need to figure out which versions of your other 13 Orion Platform product and integration modules are compatible with Server & Application Monitor 6.7. Suddenly, what was previously a rather trivial task has become a daunting, and sometimes overwhelming, challenge.

 

For that reason and many more, we have some significant changes coming your way to end the madness. First though, here’s a brief history of where we've been, how we got here, and where the future will take us.

 

 

 

The Matrix

 

For many years, we attempted to make the process of deciphering compatibility between Orion Platform product modules easier through a compatibility matrix maintained within our documentation. The matrix itself was a fairly complex Excel spreadsheet that oftentimes felt like you needed a secret decoder ring to help interpret the results. For what you might imagine should be a relatively simple task, the compatibility matrix was anything but.

 

 

Upgrade Advisor

 

As the number of available Orion Platform product modules increased, we eventually realized the Compatibility Matrix had become too complex for customers to interpret, and too unwieldy for us to maintain. Thus came our next valiant attempt at improving the situation for determining multi-product compatibility, the Upgrade Advisor. The Upgrade Advisor represented a monumental leap forward compared to the Compatibility Matrix. In fact, many still rely upon it today.

 

 

 

The process is relatively straightforward. Enter the Orion Platform product modules you currently have installed and their respective version numbers. Next, enter the version number of the product module to which you'd like to upgrade. The Upgrade Advisor will then map out the rest of the product module version numbers compatible with the newer version.

 

While well intentioned, the Upgrade Advisor still suffered from the same fundamental flaw that led to the demise of the Compatibility Matrix. It still required users to be both aware of its existence and proactive about their upgrade planning. When the recommendations outlined in the Compatibility Matrix or Upgrade Advisor weren't followed, bizarre and unexplainable issues would occur due to incompatible module behavior.

 

 

Next Generation Installer

 

The latest attempt at unraveling this quagmire has been to place the information available in the Upgrade Advisor into the installer itself. Anytime before or at the time of upgrade, simply running the installer provides a list of all Orion Platform product modules currently installed and their respective versions. Next to it is the list of versions for other product modules compatible with the module version downloaded.

 


 

This method is vastly superior to both the Compatibility Matrix and Upgrade Advisor, as it requires no prior knowledge of the existence of either, nor does it require any manual steps to determine module compatibility. The installer simply handles it all for you. No muss, no fuss.

 

While the next-generation installer took all the complexity out of the equation, it introduced a fair amount of confusion. For the planners among you, it seemed counterintuitive to run an installer days, weeks, or even months ahead of a scheduled upgrade to determine the upgrade path. For others, executing the installer on a production environment prior to the scheduled change window sounded like a dangerous proposition, under the assumption that merely running the installer might start the upgrade process or shut down Orion services without consent or confirmation. As a result, some still found greater comfort utilizing the Upgrade Advisor this new installer was intent on replacing.

 

Does this really need to be so complicated?

 

A lot of time, effort, and different technologies have been used throughout the years in what seems to have been a vain attempt to reduce confusion and make it easier for users to identify compatibility between different product module versions. The problem, however, was never how we attempted to address the issue (though admittedly, some methods worked better than others). The ultimate solution is to change how we think about the problem in the first place: the version number itself.

 

 

Ushering in a new tomorrow

 

It's rather arbitrary that 6.9 is the Server & Application Monitor (SAM) version compatible with Network Performance Monitor (NPM) 12.5. Rather than require users to have a Ph.D. in SolarWinds Orion Platform product module versioning, wouldn't it be easier if the product modules compatible with each other all shared the same version number? Then it would be downright simple to identify that IP Address Manager vX.XX wasn't compatible with User Device Tracker vY.YY or Network Configuration Manager vZ.ZZ.

 

Simplifying and consolidating our product module versioning is precisely what we aim to do in our next Orion Platform module releases. As you can imagine, this might come as a big surprise to many, which is why we've decided to notify the community in advance.

 

New releases for every Orion Platform product module going forward will now use the same versioning as the Orion Platform itself. This means the next release of Network Performance Monitor will not be v12.6 or v13.0, nor will any of the other Orion Platform product modules bear a resemblance to their current versioning. Instead, Orion Platform product module versions will be the four-digit year in which they were released, followed by the quarter of release. If there is a Service Release for a given module, it will appear in the third position following the quarter.

 

 

[YYYY.Q.SR]
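So, for example, a module released in the third quarter of 2019 would be versioned 2019.3, and its first service release would be 2019.3.1.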

 

If this all seems a bit confusing, fret not. You're probably already familiar with this versioning, as it's been the basis of the Orion Platform version for nearly a decade. This is also the same versioning used for Network Automation Manager.

 

 

What does this mean for my product modules?

 

To be completely honest, really nothing at all, aside from a departure from those products’ previous versioning schemes. It also means versioning is much more transparent and easier to relate to. For example, if you needed to know what version of Storage Resource Monitor (SRM) was released in October 2025, it’s now very easy: Storage Resource Monitor v2025.4. If you also needed to know what version of Server Configuration Manager (SCM) was compatible with SRM v2025.4, that too is now easy: SCM v2025.4, of course!

 

 

How will this affect previous releases?

 

In short, it doesn't. Currently released product module versioning will remain unchanged, though you can expect a fairly significant jump in version numbers the next time you upgrade.

 

 

I still have unanswered questions

 

You undoubtedly have a million questions related to this change racing through your brain right now. If not, perhaps later, after pondering this post for a while, a fantastic question will pop to mind. In either scenario, post your questions related to this change in the comments section below.

As of Orion Core version 2019.4, SolarWinds Service Desk has native integration with the Orion Platform.

When we launched SolarWinds® Service Desk (SWSD), I couldn’t wait to get my hands on it. I was very excited to see a new solution to handle incident management, asset management, an internal knowledge base, problem management, and an employee self-service portal. There’s so much to this new product to unpack, I needed to figure out where to start. Thankfully, there was already an excellent document introducing everyone to the solution I could read.

 

For the past three years, I’ve been getting deeper and deeper into leveraging various APIs to do my bidding. This lets me go nuts on my keyboard and automate out as many repeatable functions as possible. No, I’m not breaking up with my mouse. We had a healthy discussion, my mouse and I, and he’s fine with the situation. Really. What was I talking about? Oh yeah, APIs!

 

One of the things I absolutely love about working with APIs (and scripting languages as well) is there’s no one way to do something. If you can think it, you can probably do it. Most RESTful APIs allow you to work with whatever language you prefer. You can use the curl executable, Perl (tagging Leon here), PowerShell, or nearly anything else. PowerShell is my personal preference, so I’m doing my scripting with it. But more on those details later.

 

You’ve seen me write and talk about using the SolarWinds® Orion® API to help automate your monitoring infrastructure. I’ve even gotten some of my friends in on the trend. But, the launch of SWSD opened a brand-new API for me to explore. I started where I always do with something new: by reading the manual. SolarWinds Service Desk has extensive documentation about using the API. There’s so much there for me to explore, but I had to limit myself. In trying to pick a place to start, I thought about my past.

 

SolarWinds has always been in the business of helping IT professionals do their jobs better. Many of us technology professionals, like me, started our careers working on a help desk. Based on everything SWSD offers, I limited myself to the Incidents Management area. Then I just had to think about how I would leverage this type of solution in some of my previous roles.

 

As a help desk supervisor who went on to be a monitoring engineer, I thought about how great it would be to get tickets automatically created based on an alert. I could talk all day about what qualifies for an alert (I have) and what’s best to include in an alert message (that, too), but the biggest thing to strive towards is some level of tracking. The most common tracking method for alerts has been email notifications. This is the default for most people, and 90% of the time it’s fine. But what about the times when email is the problem? You need another way to get your incidents reported and tracked.

 

Like scripting languages, the Orion alerting engine allows for multiple ways to handle alert logic—not just for the trigger conditions, but also for the actions when the trigger occurs. One of those mechanisms is to execute a program. On the surface, this may sound boring, but not to me and other keyboard junkies. This is a great way to leverage some scripting and the SWSD API to do the work for us.

 

First things first, we need to decide how to handle the calls to the API. The examples provided in the API documentation use the curl program to do the work, but I’m not in love with the insanely long command lines required to get it to work. But since this is a RESTful API, I should be able to use my preferred scripting language, PowerShell. (I told you I’d get back to it, didn’t I?)

 

Let’s assemble what you need to get started. First you need your authentication. If you’re an administrator in SWSD, you can go to Setup, Users & Access, and then select yourself (or a service account you want to use). Inside the profile, you’ll find the JSON web token.

 

 

This is how you authenticate with the SWSD API. The web token is a single line of text. In the web display, it’s been wrapped for visual convenience. Copy that line of text and stash it somewhere safe. This is basically the API version of “you.” Protect it as you would any other credentials. In a production system, I’d have it set up to use the service account for my Orion installation.

 

API Test

For the API call, we need to send over some header information. Specifically, we need to send over the authorization, the version of the API we’ll be using, and the content type we’ll be sending. I found these details in the API documentation for Incidents. To start things off, I did a quick test to see if I could enumerate all the existing incidents.

 

I’m trying to get more comfortable with JSON, so I’m using it instead of XML. In PowerShell, the HTTP header construction looks like this:

$JsonWebToken = "Your token goes here. You don't get to see mine."

 

$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken";

              "Accept"                   = "application/vnd.samanage.v2.1+json"

              "Content-Type"             = "application/json" }

 

Basically, we’re saying (in order): this is me (auth), I’d like to use this version of the API with JSON (accept), and I’m sending over JSON as the request itself (content-type).

 

This block of headers is your pass to speak with the API. I’m testing this from the United States, so I’ll use the base URI via https://api.samanage.com/. There’s a separate one specifically for EU people (https://apieu.samanage.com). If you are in the EU, that’s the one you should be using.

To list out the incidents, we make an HTTP GET call to the “incidents” URI as specified in the documentation. I saved this as a variable so I wouldn’t have copy/paste failures later.

 

$URI = "https://api.samanage.com/incidents.json"

 

Then to get the list of all incidents, I can just invoke the REST method.

Invoke-RestMethod -Method Get -Headers $Headers -Uri $URI
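If your instance already has a pile of incidents, the raw output can be noisy. Assuming the response objects include fields like number, name, and state (adjust the property names to whatever actually comes back in your environment), you can trim the output with standard PowerShell:

Invoke-RestMethod -Method Get -Headers $Headers -Uri $URI |
    Select-Object -Property number, name, state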

 

 

Excellent! I can talk to the API and get some information back. This means I’m authenticating correctly and getting the list of incidents back. Time to move on.

Creating a Test Incident

To create an incident, I only technically need three fields: name (of the incident), the requester, and the title. I’ve seen this called the payload, the body, or the contents. To stay on the same page with the PowerShell parameters, I’ll refer to it as the body. Using it, I built a very small JSON document to see if this would work using the script I’ve started developing. The beauty of it is I can repeatedly use the header I already built. I’ve put the JSON in a string format surrounded by @” and “@. In PowerShell this is called a here-string and there are many things you can do with it.

$TestBody = @"

{

"incident": {

   "name":        "Testing Incident - Safe to Close with no notes",

   "priority":    "Critical",

   "requester":   { "email" : "kevin.sparenberg@kmsigma.com" }

}

}

"@

 

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $TestBody

 

When I run it, I get back all kinds of information about the incident I just created.

But to be really, doubly sure, we should check the web console.

There it is. I can create an incident with my script.

 

So, let’s build this into an actual alert script to trigger.

 

Side note: When I “resolved” this ticket, I got an email asking if I was happy with my support. Just one more great feature of an incident management solution.

Building the new SolarWinds Service Desk Script

For my alert, I’m going with a scenario where email is probably not the best alert avenue: your email server is having a problem. This is a classic downstream failure. We could create an email alert, but since the email server is the source, the technician would never get the message.

 

 

The above logic looks only for nodes with names containing “EXMBX” (Exchange Mailbox servers) whose status is not Up (such as Down, Critical, or Warning).

 

Now that we have the alert trigger, we need to create the action of running a script.

 

For a script to be executed by the Orion alerting engine, it should “live” on the Orion server. Personally, I put them all in a “Scripts” folder in the root of the C: drive. Therefore, the full path to my script is “C:\Scripts\New-SwsdIncident.ps1”

 

I also need to tweak the script slightly to allow for command line parameters (how I send the node and alert details). If I don’t do this, then the exact same payload will be sent every time this alert triggers. For this example, I’m just sticking with four parameters I want to pass. If you want more, feel free to tweak them as you see fit.

 

Within a PowerShell file, you access command line parameters via the $args variable, with the first argument being $args[0], the next being $args[1], and so on. Using those parameters, I know I want the name of the alert, the details on the alert, the IP of the node, and the name of the node. Here’s what my script looks like:
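In sketch form, it goes something like this (the token, requester email, and extra routing fields are placeholders you'd adjust for your environment):

# New-SwsdIncident.ps1 (sketch)
# Arguments, in the order the alert action passes them:
#   $args[0] = node status description, $args[1] = node name,
#   $args[2] = node IP address, $args[3] = alert name
$StatusDescription = $args[0]
$NodeName          = $args[1]
$NodeIP            = $args[2]
$AlertName         = $args[3]

$JsonWebToken = "Your token goes here."
$Headers = @{ "X-Samanage-Authorization" = "Bearer $JsonWebToken";
              "Accept"                   = "application/vnd.samanage.v2.1+json"
              "Content-Type"             = "application/json" }
$URI = "https://api.samanage.com/incidents.json"

# Same shape as the earlier test body, plus whatever extra routing fields you need
$Body = @"
{
"incident": {
   "name":        "$AlertName : $NodeName ($NodeIP) is $StatusDescription",
   "priority":    "Critical",
   "description": "Orion alert '$AlertName' fired for $NodeName ($NodeIP). Status: $StatusDescription.",
   "requester":   { "email" : "kevin.sparenberg@kmsigma.com" }
}
}
"@

Invoke-RestMethod -Method Post -Headers $Headers -Uri $URI -Body $Body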

You can see I added a few more fields to my JSON body so a case like this could be routed more easily. What did I forget? Whoops, this should have said it was a test incident. Not quite ready for production, but let's move on.

When we build the alert, we set one of the trigger actions as execution of an external program and give it an easily recognizable name.

 

 

The full command line I put here is:

 

C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Scripts\New-SwsdIncident.ps1" "${N=SwisEntity;M=StatusDescription}" "${N=SwisEntity;M=Caption}" "${N=SwisEntity;M=IP_Address}" "${N=Alerting;M=AlertName}"

 

This is the path and executable for PowerShell, the script file we want to execute, and the parameters (order is important) we want to pass to the script. I’ve also surrounded the parameters with double quotes because they *may* contain spaces. In this case, better safe than sorry.

 

Then I just need to sit back and wait for an alert matching my trigger condition. There's one now!

 

 

Just like every alert I write, I’ve already found ways to improve it. Yes, I know this is a very rudimentary example, but it’s a great introduction to the integrations possible. I’ll need to tweak this alert a little bit before I’d consider it ready for prime time, but it’s been a great learning experience. I hope you learned a little bit along with me.

 

So, I ask you all: where should I go next?

Status is arguably one of the most important aspects of any monitoring solution. It's a key component for visually notifying you that something is amiss in your environment, as well as being an important aid in the troubleshooting process. When used properly, status is also the engine that powers alerting, making it an absolutely essential ingredient for both proactive and reactive notifications aimed at ensuring your entire IT environment runs smoothly.

 

Orion® Node Status, in particular, has for an extended period of time been somewhat unusual when compared to other entities in the Orion Platform. Most other entities have a fairly simple, straightforward, and easy-to-understand hierarchy of status based upon severity. These include things like Up, Warning, Critical, and Down, but can also include other statuses which denote an absence of a state, such as Unknown, Unmanaged, etc. By comparison, a node managed in the Orion Platform today can have any of twenty-two unique statuses. Some of these statuses can, to the uninitiated, appear at best contradictory, and at worst, just downright confusing.

 

This is the result of separating information about the node itself from its associated child objects (like interfaces and applications) into multiple colored balls. The larger colored ball represents the reachability of the node, usually via ICMP, while the much smaller colored ball in the bottom right represents the worst state of any of the node's child objects.

 

 

Primary Node Status

Nodes With Child Status

 

It would be fair to say this is neither obvious nor intuitive, so in this release we've sought to radically improve how Node status is calculated and represented within the Orion Platform.

 

 

Node Thresholds

 

The first thing people usually notice after adding a few nodes to the Orion Platform is that node thresholds for things like CPU & Memory utilization appear to have no effect on the overall status of the node, and they'd be right. Those thresholds can be used to define your alerts, but node status itself has historically only represented the reachability of the node. That, unfortunately, complicates troubleshooting by obfuscating legitimate issues and adds unnecessary confusion. For example, I'm often asked why the state of the node in the image below is “green” when the CPU Load and Memory utilization are obviously critical. A very fair and legitimate question.

 

 

 

With the release of Orion Platform 2019.2 comes the introduction of Enhanced Node Status. With this new Enhanced Node Status, thresholds defined either globally or on an individual node itself can now impact the overall status of the node. For example, if the memory utilization on a node is at 99% and your “Critical” threshold for that node is “Greater than 90%,” the node status will now reflect the appropriate “Critical” status. This should allow you to spot issues quickly without having to hunt for them in mouse hovers or drilling into Node Details views.

 

  • CPU Load
  • Memory Utilization
  • Response Time
  • Packet Loss

 

Sustained Thresholds

 

Borrowing heavily from Server & Application Monitor, Orion Platform 2019.2 now includes support for sustained node threshold conditions. Being notified of every little thing that goes bump in the night can desensitize you to your alerts, potentially causing you to miss important service-impacting events. For alerts to be valuable, they should be actionable. For example, a CPU spiking to 100% for a single poll probably doesn't mean you need to jump out of bed in the middle of the night and VPN into the office to fix something. After all, it's not that unusual for a CPU to spike temporarily, or latency to vary from time to time over a transatlantic site-to-site VPN tunnel.

 

What you probably want to be notified of instead is if that CPU utilization remains higher than 80% for more than five consecutive polls, or if the latency across that site-to-site VPN tunnel remains greater than 300ms for 8 out of 10 polls. Those are likely more indicative of a legitimate issue occurring in the environment that requires some form of intervention to correct.

 

 

Sustained Thresholds can be applied to any node's existing CPU Load, Memory Usage, Response Time, or Percent Packet Loss thresholds. You can also mix and match “single poll,” “X consecutive polls,” and “X out of Y polls” between warning and critical thresholds for the same metric for even greater flexibility. Sustained Thresholds can even be used in combination with Dynamic Baselines to eliminate nuisance alerts and further reduce alert fatigue, allowing you to focus only on those alerts which truly matter.

 

Null Thresholds

 

A point of contention for some users has been the requirement that all Node thresholds must contain some value. There are nodes you may still want to monitor, report, and trend upon for those performance metrics but not necessarily be alerted on, such as staging environments, machines running in a lab, or decommissioned servers.

 

Historically, there has been no way to say, “I don't care about thresholds on this node” or “I don't care about this particular metric.” At best, you could set the warning and critical thresholds as high as possible in the hopes of getting close to eliminating alerts for metrics on those nodes you don't necessarily care about. Alternatively, some customers update and maintain their alert definitions to exclude metrics on those nodes they don't want to be alerted on. A fairly messy, but effective, solution, and one that is no longer necessary.

 

With the introduction of Enhanced Status in Orion Platform 2019.2, any Node threshold can now be disabled simply by editing the node and unchecking the box next to the warning or critical thresholds of the metric you're not interested in. Don't want a node to ever go into a “Critical” state as a result of high response time to keep the boss off your back, but still want to receive a warning when things are really bad? No worries, just disable the “Critical” threshold, leave the “Warning” threshold enabled and adjust the value to what constitutes “really bad” for your environment.

 

 

If so inclined, you can even disable these individual warning and critical thresholds globally from [Settings > All Settings > Orion Thresholds] for each individual node metric.

 

 

Child Objects

 

In this new world of Enhanced Status, no longer are there confusing multi-status icons, like “up-down” or “up warning.” Child objects can now influence the overall node status itself by rolling up status in a manner similar to Groups or how Server & Application Monitor rolls-up status of the individual component monitors that make up an Application. This provides a simple, consolidated status for the node and its related child entities. Those child objects can be things such as Interfaces, Hardware Health, and Applications monitored on the node, to name only a few.

 

Similar to Groups, we wanted to provide users with the ability to control how node status rollup was calculated on an individual, per-node basis for ultimate flexibility. When editing the properties of a single node or multiple nodes, you’ll now find a new option for “Status roll-up mode” where you can select from Best, Mixed, or Worst.

 

 

 

By altering how node status is calculated, you control how child objects influence the overall status of the node.

 

Best | Mixed | Worst

 

Best status, as one might guess, always reflects the best status across all entities contributing to the calculation. Setting the Node to “Best” status is essentially the equivalent of how status was calculated in previous releases, sans the tiny child status indicator in the bottom right corner of the status icon.

 

Worst status, you guessed it, represents the status of the object in the worst state. This can be especially useful for servers, where application status may be the single most important thing to represent for that node. For example, I'm monitoring my Domain Controller with Server & Application Monitor's new AppInsight for Active Directory. If Active Directory is “Critical,” then I want the node status for that Domain Controller to reflect a “Critical” state.

 

Mixed status is essentially a blend of best and worst and is the default node status calculation. The following table provides several examples of how Mixed status is calculated.

 

Polled Status | Child 1 Status | Child 2 Status | Final Node Status
DOWN | ANY | ANY | DOWN
UP | UP | UP | UP
UP or WARNING | UP | WARNING | WARNING
UP or WARNING | UP | CRITICAL | CRITICAL
UP or WARNING | UP | DOWN | WARNING
UP or WARNING | UP | UNREACHABLE | WARNING
UP | UP | UNKNOWN | UP
WARNING | UP | UNKNOWN | WARNING
UP | UP | SHUTDOWN | UP
UP or WARNING | DOWN | WARNING | WARNING
UP or WARNING | DOWN | CRITICAL | CRITICAL
UP or WARNING | DOWN | UNKNOWN | WARNING
UP or WARNING | DOWN | DOWN | WARNING
UP | UNKNOWN | UNKNOWN | UP
WARNING | UNKNOWN | UNKNOWN | WARNING
UNMANAGED | ANY | ANY | UNMANAGED
UNREACHABLE | ANY | ANY | UNREACHABLE
EXTERNAL | ANY | ANY | Group Status

 

In case you overlooked it in the table above, yes, External Nodes can now reflect an appropriate status based upon applications monitored on those nodes.

 

Child Object Contributors

 

Located under [Settings > All Settings > Node Child Status Participation], you will find you now have fine-grained control over up to 27 individual child entity types that can contribute to the overall status of your nodes. Don't want Interfaces contributing to the status of your nodes? No problem! Simply click the slider to the “off” position and Interfaces will no longer influence your nodes' status. It's just that easy.

 

Show me the Money!

 

You might be asking yourself: all these knobs, dials, and switches are great, but how exactly are they going to make my life better or simpler? A fair question, and one that no doubt has countless correct answers, but I'll try to point out a few of the most obvious examples.

 

Maps

 

One of the first places you're likely to notice Enhanced Status is in Orion Maps. The example below shows the exact same environment. The first image shows what this environment looked like in the previous release using Classic Status. Notice the absence of any obvious visual cues denoting issues in the environment. The next image to the right is of the very same environment, taken at the exact same time as the image on the left. The only notable difference is that this image was taken from a system running Orion Platform 2019.2 with Enhanced Node Status.

 

In both examples, there are the exact same issues going on in the environment, but these issues were obfuscated in previous releases. This made the troubleshooting process less intuitive and unnecessarily time-consuming. With Enhanced Status, it's now abundantly clear where the issues lie. And with the topology and relationship information from Orion Maps, it's now easier to assess the potential impact those issues are having on the rest of the environment.

 

Classic Status

Enhanced Status

 

Groups

 

Groups in the Orion Platform are incredibly powerful, but historically in order for them to accurately reflect an appropriate status or calculate availability accurately, you were required to add all relevant objects to that group. This means you not only needed to add the nodes that make up the group, but also all child objects associated with those nodes, such as interfaces, applications, etc.

 

Even in the smallest of environments, this was all but impossible to manage manually. Given the nature of all the various entity types that could be associated with those nodes, even Dynamic Groups were of little assistance in this regard. Enhanced Status not only radically simplifies group management, but it also empowers users to more easily utilize Dynamic Groups to make group management a completely hands-off experience.

 

The following demonstrates how Enhanced Node Status simplifies overall Group Management in the Orion Platform, reducing the total number of objects you need to manage inside those groups. The screenshot on the left shows a total of eight nodes using Enhanced Status in a group, causing the group to reflect a Critical status. The image to the right shows all the objects that are required to reflect the same status using Classic Status. As you can see, you would need to not only add the same 8 nodes but also their 43 associated child objects for a total of 51 objects in the group. Yikes!

 

Enhanced Status (8 Objects)

Classic Status (51 Objects)

 

By comparison, the following demonstrates what that group would look like with just the eight nodes alone included in the group using both Classic Status and Enhanced Status. Using Classic status, the group reflects a status of “Up,” denoting no issues at all in the group. With Enhanced Status, it's abundantly clear that there are in fact issues, which nodes have issues, and their respective severity. This aids in significantly reducing time to resolution and aids in root cause analysis.

 

Enhanced Status

Classic Status

 

Alerts

 

Possibly the greatest benefit of Enhanced Status is that far fewer alert definitions are required to be notified of the exact same events. Because node thresholds and child objects now influence the status of the node, you no longer need alert definitions for individual node metrics like “Response Time,” or related child entities like “Interfaces.” In fact, of the alert definitions included out-of-the-box with the Orion Platform, Enhanced Status eliminates the need for at least five, taking you from seven down to a scant two. That's a 71% reduction in the number of alert definitions that need to be managed and maintained.

 

Out-of-the-box Alerts Using Classic Status - x7

Out-of-the-box Alerts Using Enhanced Status - x2

 

Alert Macros

 

I'm sure at this point many of you are probably shouting at your screen, "But wait! Don't I still need all those alert definitions if I want to know why the node is in whatever given state that it's in when the alert is sent? I mean, getting an alert notification telling me the node is “Critical” is cool and all, but I sorta need to know why."

 

We would be totally remiss if, in improving Node status, we didn't also improve the level of detail we included in alerts for nodes. With the introduction of Enhanced Status come two new alert macros that can be used in your alert actions, such as email notifications, which list all items contributing to the status of that node. Those two alert macros are listed below.

 

The first is intended to be used with simple text-only notification mechanisms, such as SMS, Syslog, or SNMP Traps. The second macro outputs in HTML format with hyperlinks to each child object's respective details page. This macro is ideally suited for email or any other alerting mechanism that can properly interpret HTML.

 

  • ${N=SwisEntity;M=NodeStatusRootCause}
  • ${N=SwisEntity;M=NodeStatusRootCauseWithLinks}
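For example, the plain-text macro can be dropped into the body of an email or SMS trigger action alongside the usual node macros. The wording around it below is just illustrative:

Node ${N=SwisEntity;M=Caption} is ${N=SwisEntity;M=StatusDescription}.
Contributing factors:
${N=SwisEntity;M=NodeStatusRootCause}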

 

The resulting output of the macro provided in the notification includes all relevant information pertaining to the node. This includes any node thresholds that have been crossed as well as a list of all child objects in a degraded state associated with the node, which is all consolidated down into a simple, easily digestible, alert notification that pinpoints exactly where to begin troubleshooting.

 

 

 

 

Enabling Enhanced Status

 

If you're installing any Orion product module for the first time that is running Orion Platform 2019.2 or later, Enhanced Status is already enabled for you by default. No additional steps are required. If you're upgrading from a previous release, however, you will need to enable Enhanced Status manually to appreciate the benefits it provides.

 

Because status is the primary trigger condition for alerts, we did not want customers who are upgrading to be unexpectedly inundated with alert storms because of how they had configured the trigger conditions of their alert definitions. We decided instead to let customers decide for themselves when/if to switch over to Enhanced Status.

 

The good news is that this is just a simple radio button located under [Settings > All Settings > Polling Settings].

 

 

Conversely, if you decided to rebuild your Orion server and have a preference for “Classic” status, you can use this same setting to disable “Enhanced” Status mode on new Orion installations and revert to “Classic” status.

 

 

Cautionary Advice

 

If you plan to enable “Enhanced” status in an existing environment after upgrading to Orion Platform 2019.2 or later, it’s recommended that you disable alert actions in the Alert Manager before doing so. This should allow you to identify alerts with trigger conditions that may need tweaking, without inadvertently causing a flood of alert notifications or firing other alert actions. Your coworkers will thank you later.

 

 

Feedback

 

Enhanced status represents a fairly significant, but vitally important, change for the Orion Platform. We sincerely hope you enjoy the additional level of customization and reduced management overhead it provides. As with any new feature, we'd love to get your feedback on these improvements. Will you be switching to Enhanced Status with your next upgrade? If not, why? Be sure to let us know in the comments below!

The Orion® Platform is designed to consolidate monitoring into a single source of truth, taking massive amounts of data and making it easier to identify issues in complex environments. A key component to this is the organization of data. As an example, if I were to present you with the dashboard below, you can see it’s aggregating a ton of information and highlighting issues from multiple modules like Network Performance Monitor (NPM), Server & Application Monitor (SAM), Virtualization Manager (VMAN), and Storage Resource Monitor (SRM). Single pane of glass, right?  However, it’s not interesting, not even a little bit, and most importantly, it’s not easily interpreted. This dashboard doesn't really help me understand the problem or where to focus.

 


 

Simplifying how data is interpreted through better visualizations can provide drastic improvements for understanding problems. Now, if I present you with this view, can you tell me where the problem areas are?

 


 

The Orion Maps team believes visualization of your data can be a powerful tool when put together in a meaningful way. Ensuring critical data is available but presenting it in a clear and concise manner allows you to quickly see the problem and its potential impact. Visualizations help tell the story, and can help members of your organization, or clients, understand the breadth and complexity of what you manage on a day-to-day basis. For those of you unfamiliar with the Orion Maps project to date, you may want to review the following posts. These should help paint the picture, no pun intended, on what we’ve delivered with the previous releases.

 

Orion Platform 2018.2 Improvements - Chapter Two - Intelligent Mapping

Orion Platform 2018.4 Improvements - Intelligent Mapping Enhancements

 

With the release of 2019.2, we’ve incorporated some new enhancements designed to extend the flexibility of the platform and provide some amazing new options for representing your environment and critical services.

 

 

ORION MAPS MENU & MANAGEMENT PAGE

 

As a new entry point to maps, an "Orion Maps" menu is now available under My Dashboards and Home. Selecting this option will transport you to the Map Management page. This will be blank initially, prompting you to create a map.

 

 

It’s important to note here that any user can create a map. If you have access to this menu, you can create maps. However, each of you will only be able to see the maps you created yourself in the list view. The current features on this page will allow you to sort your list by Map Name, Last Updated, and Created Date. There’s also a search bar allowing you to search for maps by name.

 

 

Any Orion Administrator will have an additional function when they access this view. A very helpful tool is available in the upper-right corner allowing you to toggle the view to include all user maps vs. just your own. The main components to this page provide the capabilities to create a new map, edit existing maps, delete maps, or view a map by selecting its name.


 

MAP EDITOR

Let’s begin by creating a new map via the Map Editor. Selecting New Map will open the basic editor for building maps from scratch. You’ll be greeted by an entity library on the left side, which defaults to a paginated list of your nodes. You can click the drop-down to choose from any entity type in Orion Maps. As always, a search bar is also available. The empty canvas will take up most of the view, and a few controls will be noticeable in the bottom-right corner, along with a Save button and More menu in the upper-right side. Building a map from the basic editor is for those of you who know exactly what you want in the map. For now, this is single drag-and-drop functionality, and any relationships or connections identified will automatically be drawn.


 

Like any design tool, built-in functions allow you to manipulate the map. Holding the space bar will allow you to pan the map. Selecting entities will allow you to move objects, and holding the Shift key when moving objects will perform a snap to grid function. Using arrow keys will gently nudge the entity in a desired direction. Holding Shift while using arrows will move the object in larger increments. Holding the Control key or using the + or - buttons will allow you to zoom in or out while working with your map. Probably one of my favorite tools is the Center key in the bottom right. This will not only center your map, but perform a zoom to fit, ensuring the entire map is placed in the viewable area. This is an excellent tool as you expand or condense maps of different scales. Any entity can be removed from a map by selecting it and hitting the Delete key on your keyboard.

 


 

Once we have our map situated how we want it, you’ll notice any change in the canvas enables the "Save" button in the upper-right corner. Clicking Save will open a dialogue allowing you to give the map a unique name, and it will warn you in the event you attempt to name your map with a previously used name.

 

 

Under the MORE menu, a number of options will be presented to you. "New" will allow you to start a new map and a blank canvas, much like the name implies. "Save As" is particularly useful if a map has been shared with you, or as an administrator you’re editing a map you didn’t create. Unless you’re the one who created the original map, you won’t be allowed to "Save" but will have to perform a "Save As" and rename the map. "Delete" needs little explanation, but again, if this isn’t your map, then the delete option will be grayed out. I’ll cover the "View" button a bit later in this post in more detail, and the "Help" button of course links to formal documentation for much of the items discussed in this post.

 

 

LEVERAGING CONTEXTUAL MAPS

We have massive plans to improve upon the function of building maps as we understand one of the biggest needs is expediting map creation and limiting the number of touches to maintain them. Feel free to share what you believe would make a difference in the comment section below. In this release, we’re taking advantage of the framework and functionality delivered previously through the contextual sub-views. If or when viewing an automatically generated map from the Node or Group Details sub-views, you’ll now see a new button added to the menu bar, "Open Map in Editor." Essentially, I can use the existing functionality to take a pre-built map, expand it further, and have what was done within the sub-view persisted and sent to the new map editor with the click of a button. The images below should show a basic demonstration of this workflow. This is a great way to build maps quickly and then make final adjustments in the editor before saving.

 

Navigating to Map sub-view from Node Details page


 

Expanding the map through automatically discovered relationships


 

Open Map in Editor


 

Of course, using the built-in tools to move objects around the canvas, snap to grid, and taking advantage of the center/auto-fit tool as you make adjustments can help you properly create a representation that makes the most sense for your organization. Once I’ve saved the map, what do I do now?

 

ORION MAPS WIDGET

As maps are saved, they’ll be accessible as a Map Project from the list view under the Map Management page. You’ll also find a new widget available in the Widget Drawer, allowing you to add any of your custom maps to a dashboard or view. Click the pencil in the upper-left side marked Customize Page, then click Add Widgets, and the resource will be located under the Network Maps section called Orion Map.

 

 

Drag and drop as many of these widgets onto the page as you wish, and click "Edit" or "Choose Map" to specify a map from your list. A dialog contains options to customize a title or subtitle and to specify the widget height in pixels. A list of maps is shown, along with a search option for quickly finding the map you wish to use. As on the Map Management page, admins will also have the option to see all user-created maps by clicking the toggle on the right side.

 

 

Click "save" and your map will now be available. Another one of my favorite features is we managed to build the widget where it‘ll automatically scale the map according to the size you specified. By adjusting the height and the column width, your map will auto-fit the available space, making it fast and easy to get the map exactly where you want on your dashboard, at just the right size.

 

 


 

With the ability to incorporate these maps alongside other widgets in a dashboard, you have some amazing new ways to roll up critical problems within your environment. Below is a quick example of what one may look like.

 

 


 

 

ENHANCED NODE STATUS

If you haven't yet come across the post Orion Platform 2019.2 - Enhanced Node Status by aLTeReGo, we've included some very significant updates in how we highlight status in the Orion Platform. The desire for improvements in status was a consistent theme we heard during user research on maps as well, and the difference this change makes is awesome. To borrow an excerpt from aLTeReGo's post: the example below shows the exact same environment. The first image shows what this environment looked like in the previous release using Classic Status; notice the absence of any obvious visual cues denoting issues in the environment. The image to the right is of the very same environment, taken at the exact same time, the only notable difference being that it was captured from a system running Orion Platform 2019.2 with Enhanced Node Status.

 

 

In both examples, the exact same issues are present in the environment, but they were obfuscated in previous releases, making the troubleshooting process less intuitive and unnecessarily time-consuming. With Enhanced Status, it's now abundantly clear where the issues lie, and with the topology and relationship information from Orion Maps, it's easier to assess the potential impact those issues are having on the rest of the environment.

 

 

Classic Status (left) vs. Enhanced Status (right)
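The status rolled up on these maps comes from the same data you can reach through the Orion SDK. As a quick sanity check outside the UI, here's a minimal sketch, assuming the open-source orionsdk Python package and placeholder server and credential values; the exact fields available on Orion.Nodes can vary by platform version.

import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()  # lab use only: silence self-signed certificate warnings

# Placeholder connection details -- substitute your own Orion server and credentials.
swis = SwisClient("your-orion-server", "your-username", "your-password")

# Status 1 means "Up" in the Orion schema, so anything else deserves a closer look.
swql = """
    SELECT TOP 25 NodeID, Caption, Status, StatusDescription
    FROM Orion.Nodes
    WHERE Status <> 1
    ORDER BY Status DESC
"""
for node in swis.query(swql)["results"]:
    print(f'{node["Caption"]}: {node["StatusDescription"]} (status code {node["Status"]})')

Nothing here changes what the map shows; it's simply another window into the same status values the Enhanced Status rollup is built from.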

 

 

INTERACTING WITH THE MAP WIDGET AND VIEW MODE

Now that you have an amazing visualization of your environment and the issues are clearly identified, a closer look may be in order. There are a couple of different methods for interacting with your maps. The first takes advantage of the improvements made to Orion Hovers and is accessible from the Map Widget. Hovering over an entity in your map surfaces its performance status, which should highlight exactly why the entity is in a degraded state. You'll also have access to the Commands menu, which allows you to Go To Details pages, Edit Node, Mute Alerts, or Unmanage the entity directly from the map. The behavior is the same if a group is on a map or if you have nested maps; the Commands menu for a map includes viewing the map, editing it, or muting the alerts associated with it. From here, you can use the command options or simply click the entity on the map, which takes you to its details page automatically, as pictured below.

The View Mode, which can also be accessed as a button in the top right of the Map Widget, is a full-screen depiction of the map and all its entities, allowing you to investigate further using the inspector panel to show related entities, alerts, and, if you're viewing virtual entities, recommendations.
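As an aside, the actions exposed in the Commands menu generally have programmatic counterparts in the Orion SDK. The sketch below, assuming the orionsdk Python package and a placeholder server, credentials, and node caption, mirrors the map's Unmanage command by invoking the Orion.Nodes Unmanage verb; double-check the verb's behavior against your platform version before using it outside a lab.

from datetime import datetime, timedelta

import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()  # lab use only: silence self-signed certificate warnings

# Placeholder connection details -- substitute your own Orion server and credentials.
swis = SwisClient("your-orion-server", "your-username", "your-password")

# Find the node you spotted on the map; the caption here is purely illustrative.
results = swis.query(
    "SELECT NodeID FROM Orion.Nodes WHERE Caption = @caption",
    caption="example-node-01",
)
node_id = results["results"][0]["NodeID"]

# Unmanage the node for 24 hours, mirroring Commands > Unmanage on the map hover.
start = datetime.utcnow()
end = start + timedelta(hours=24)
swis.invoke(
    "Orion.Nodes",
    "Unmanage",
    f"N:{node_id}",
    start.strftime("%Y-%m-%dT%H:%M:%SZ"),
    end.strftime("%Y-%m-%dT%H:%M:%SZ"),
    False,  # isRelative: pass absolute start/end times rather than a duration
)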

 


 

FEEDBACK

This release marks another significant step for the Orion Maps project, and we hope you find these new enhancements valuable and useful in your environment. I plan to write and attach a couple of other posts to this announcement around using Maps in Alerts and Reporting. Of course, with each release we find your feedback extremely valuable, and much of what has been done to this point centers around your asks. Please be sure to comment below and SHARE YOUR MAPS and DASHBOARDS! Stay tuned as we are already hard at work on the next major release and have some very cool stuff in store.

 

Check out the other posts from serena and aLTeReGo on 2019.2 Platform improvements if you haven't already!

Orion Platform 2019.2 - Install/Upgrade Improvements Part 1

Orion Platform 2019.2 - Install/Upgrade Improvements Part 2

Orion Platform 2019.2 - Enhanced Node Status

Orion Platform 2019.2 - Additional Improvements

ORION PLATFORM 2019.4 - ORION MAPS RC (NPM Forum)
