Product Blog - Page 2


Level 12

It gives me great pleasure to introduce the newest version of Network Configuration Manager (NCM), v8.0, as generally available!

I’m pretty excited about this release, as it’s jam-packed with great features. Per popular request, Network Insight includes capabilities from NCM, Network Performance Monitor (NPM), NetFlow Traffic Analyzer (NTA), and User Device Tracker (UDT). This very special Network Insight for Palo Alto firewalls provides users with insights into their policies, traffic conversations across policies, and VPNs. We have a detailed write-up about all the value we stuffed into the feature here.

In addition to Network Insight, NCM is now easier to use when executing config change diffs, adds two new vendors to the Firmware Upgrade feature, and is more performant when executing config backups.

Updated Config Diff

To reduce the time spent spotting changes in a config diff (all those lines…), this version implements a simpler, easier-to-use Config Diff. The view now focuses on the context of the diff, the changes themselves: each change is highlighted along with five lines above and below it. All unchanged lines beyond that five-line window are collapsed to remove the endless scroll. This gives you the context of the change and makes it easier to discern what steps to take next.

Config Diff.png

Additional Vendor Support for Firmware Upgrade

For some time now, you’ve all been asking for additional vendors to be added to Firmware Upgrade, and I’m pleased to say we’ve delivered. Take advantage of the automation to apply firmware to Juniper and Lenovo switches to patch vulnerabilities or ensure your network devices are on the latest version. Have a different switch model? Just use the framework from the out-of-the-box templates to make it work for you.

Firmware Upgrade Page.png

Go check out the release notes for the full details or review the admin guide. We’ve been working hard to bring these wonderful new features to you, so be sure to visit your Customer Portal to download this version.

If there’s anything you think we should consider in a future release, please be sure to go create a new feature request to let me know about the additional functionality you would like to see.

Level 12

We’re excited to introduce our Network Insight™ for Palo Alto firewalls! This is the fourth Network Insight feature, and we’re building these in direct response to your feedback about the most popularly deployed devices and the most common operational tasks you manage.

Network Insight features are designed to give you tools specific to the more complex and expensive special-purpose devices in your network infrastructure. While the bulk of your network consists of routing and switching devices, the more specialized equipment at the edge requires monitoring and visibility beyond the standard SNMP modeled metrics we’re all familiar with.

So, what kinds of visibility are we talking about for Palo Alto firewalls?

The Palo Alto firewall is zone-based, with security policies that describe the allowed or denied connectivity between zones. So, we’ll show you how we capture and present those security policies. We’ll show you how we can help you visualize application traffic conversations between zones, to help you understand how policy changes can affect your clients. Another critical feature of the Palo Alto firewall is to secure communications between sites, and to provide secure remote access to clients. We’ll show you how to see your site-to-site VPN tunnels, and to manage GlobalProtect client connections.

Managing Security Policies

Palo Alto firewalls live and die on the effectiveness of their security policies to control how they handle network traffic. Policies ensure business processes remain unaffected and perform optimally, but unintentional or poorly implemented policies can cause widespread network disruption. It’s critical for administrators to monitor not only the performance of the firewall, but the effect and accuracy of the policy configuration as well. As these policies are living entities, continually being modified and adjusted as network needs evolve, the impact and context of a change may be missed and difficult to recover. This is why in Network Insight for Palo Alto, Network Configuration Manager (NCM) brings some powerful features to overcome these pitfalls.

  • Comprehensive list view of security policies
  • Detailed view into each policy and its change history
  • Usage of a policy across other Palo Alto nodes managed by NCM
  • Policy configuration snippets
  • Interface configuration snippets
  • Information on applications, addresses, and services

Once the Palo Alto config is downloaded and parsed, the policy information will populate the Policies List View page. This page is intended to make it easier to search through and identify the right security policy from a potentially long list, using configurable filtering and searching. The list view provides each policy’s name, action, zones, and last change. Once the correct policy is identified, users can drill down into each one to see the composition and performance of each policy.

Node Details Policies.png

The policy details page summarizes the most critical information and simplifies the workflow to understand if a policy is configured and working as intended. You can review the basic policy details, as well as the policy configuration snippet and review the object groups composed into the policy. Admins will be able to quickly analyze if additional action is required to resolve an issue or optimize the given policy.

Policies Details.png

Policy Config Snippet.png

Some policies are meant to extend across multiple firewalls, and without a view of this, it’s easy to lose context about the effectiveness of your policy. Network Insight for Palo Alto analyzes the configuration of each firewall to identify common security policies and display their status. As an administrator, this lets you confirm whether your policies are being correctly applied across the network and take action if they’re not. If there’s a desire to provide more continuous monitoring of a policy standard, you can also leverage a policy configuration snippet as a baseline for all Palo Alto nodes.

Other Firewalls Using.png

With any configuration monitoring and management, it’s critically important to be able to provide some proof of compliance for your firewall’s configuration. With Network Insight, you can track and see the history of changes to a policy and provide tangible evidence of events that have occurred. Of course, this also supports the ability to immediately run a diff of the configs where this change took place, by simply clicking the “View diff” button.

Policy Changes.png

VPN Tunnel Monitoring, Finally

How do you monitor your VPN tunnels today? We asked you guys this question a lot as we started to design this feature. The most common response was that you’d ping something on the other end of the tunnel. That approach has a number of challenges. The device terminating the VPN tunnel rarely has an IP address included in the VPN tunnel’s interesting traffic that you can ping. You have to ping something past the VPN tunnel device, usually some server. Sometimes the company at the other end of the tunnel intentionally has strict security and doesn’t allow ping. If they do allow ping, you have to ask them to tell you what to ping. If that thing goes down, monitoring says the tunnel is down, but the device might be down, not the tunnel. All this adds work. It’s all manual, and companies can have hundreds, thousands, or more VPN tunnels. Worst of all, it doesn’t work very well. It’s just up/down status. When a tunnel is down, why is it down? How do you troubleshoot it? When a tunnel is up, how much traffic is it using? When’s the last time it went down?

This is a tough position to be in. VPN tunnels may be virtual, but today they’re used constantly as infrastructure connections and may be more important than some of your physical WAN connections. They’re commonly used to connect branch offices to each other, to HQ, or to data centers. They’re the most popular way to connect one company to another, or from your company to an IaaS provider like Amazon AWS or Microsoft Azure. VPN tunnels are critical and deserve better monitoring.

Once you enable Network Insight for Palo Alto, Network Performance Monitor (NPM) will automatically and continually discover VPN tunnels. A site-to-site VPN subview provides details on every tunnel.

Site to Site VPN.png

There are a couple things going on here that may not be immediately obvious but are interesting—at least for network nerds like me.

All tunnels display the source and destination IP. If the destination IP is on a device we’re monitoring, like another Palo Alto firewall or an ASA, we’ll link that IP to that node in NPM. That’s why 192.168.100.10 is a blue hyperlink in the screenshot. If you’ve given the tunnel a name on the Palo Alto firewall, we’ll use that name as the primary way we identify the tunnel in the UI.

There’s different information for VPN tunnels that are up and VPN tunnels that are down. If the tunnel is down, you’ll see the date and time it went down. You’ll also, in most cases, see whether the VPN tunnel failed negotiation in phase 1 or phase 2. This is the first piece of data you need to start isolating the problem, and it’s displayed right in monitoring. If the tunnel is up, you’ll see the date and time it came up and the algorithms protecting your traffic, including privacy/encryption and hashing/authenticity.

The thing I’m most excited about is in the last two columns. BANDWIDTH! Since VPN tunnel traffic is all encrypted, getting bandwidth usage is a pain. Using a flow tool like NTA, you can find the bandwidth if you know both peer IPs and are exporting flow post encryption. It takes some manual work, and you can only see traffic quantities because of the encryption. You can’t tell what endpoints or applications are talking. If you export flow prior to encryption, you can see what endpoints are talking, but you have to construct a big filter to match interesting traffic, and then you have no guarantee that traffic makes it through the VPN tunnel. The traffic has the additional overhead of encapsulation added, so pre-encryption isn’t a good way to understand bandwidth usage on the WAN either. The worst part is that VPN tunnels transit your WAN—one of the most expensive monthly bills IT shops have.

Network Insight for Palo Alto monitors bandwidth of each tunnel. All the data is normalized, so you can report on it for capacity, alert on it to react quickly when a tunnel goes down, and inspect it in all the advanced visualization tools of the Orion® Platform–including the PerfStack™ dashboard.

Perf Stack VPN.png

GlobalProtect Client VPN Monitoring

Why does it always have to be the CEO or some other executive who has problems with the VPN client on their laptop? When I was a network engineer, I hated troubleshooting client VPN. You have so little data available to you. It’s very easy to look utterly incompetent when someone comes to you and tells you their VPN service isn’t working, and when it’s the CEO, that’s not good. Network Insight for Palo Alto monitors GlobalProtect client VPN and keeps a record of every user session.

Global Protect VPN.png

This makes it easy to spot the most common problems. If you see the same user failing to connect over and over, but other users are successful, you know it’s something on that client’s end and would probably check if login credentials are right. “No, I’m sure you didn’t forget your password. Sometimes the system forgets. Let’s reset your password because that often fixes it.” If lots of people can’t connect, you may check for problems on the Palo Alto firewall and the connection to the authentication resource.

Traffic Visibility by Policy

In this release, NetFlow Traffic Analyzer (NTA) is contributing to our latest Network Insight through an integration with Network Configuration Manager. NCM users who manage Palo Alto firewalls will see top traffic conversations by security policy on the NCM Policy Details page. Examining traffic by policy helps answer the question, "Who might be affected as I make changes to my security policies?"

Let's look at how we find this view. We'll start at the Node Details page for this firewall.

Node Details.png

We'll use the slide-out menu in this view to select "Policies." This will take us to a list view of all the policies configured for zones on this device.

Policies List View.png

Selecting a policy from this list brings us to the Policy Details page.

Policy Details View.png

Policies define security controls between zones configured on the firewall. For a Palo Alto firewall, a zone can include one or more interfaces. In this view, we're looking at all the conversations based on applications defined in the policy. It's a very different way of looking at conversations; this isn't a view of all traffic through a node or interface. Rather, it's a view related to the policy definition—so the endpoints in these conversations are running over the applications your security rules are based on. The mechanism here is filtering; we’re looking at application traffic that references the application IDs in your security policy. The endpoints in those conversations may be from any zone where you’re using this policy.

For an administrator considering changes at the policy level, this is a valuable tool to understand how those rules apply immediately to production services and what kinds of impact changes to them will have. For this feature, you'll need both NCM and NTA. NTA, of course, requires NPM. NCM provides the configuration information, including the policy definition and the application definitions. NTA reads application IDs from the flow records we receive from the Palo Alto firewall and correlates them with the policy configuration to generate this view. With NTA, of course, you can also easily navigate to more conventional node or interface views of the traffic traversing the firewall, and we integrate traffic information seamlessly into the Node Details page in NPM as well.

User Device Tracker’s Cameo

For most devices supported by User Device Tracker (UDT), all that's necessary are the SNMP credentials. We’ll pick up information about devices attached to ports from the information modeled in SNMP. But for some devices—the Cisco Nexus 5K, 7K, and 9K series switches, or the Palo Alto firewall—a set of command-line interface (CLI) credentials are required. We’ll log in to the box periodically to pick up the attached devices.

To support device tracking on these devices, you’ll need to supply a command line login. You can configure devices in bulk or individually in the Port Management section of the User Device Tracker settings page. Select "Manage Ports" to see the list of what devices can be configured.

Port Management View.png

Select one or more of these devices, edit their properties, and you'll find a section for configuring SNMP polling.

Polling Method.png

You’ll also find a section for configuring command-line polling. For devices requiring CLI access for device tracking—currently the Nexus switches and the Palo Alto firewall—you should enable CLI polling, and configure and test credentials here.

CLI Polling Settings.png

Be sure to enable Layer 3 polling for this device in the UDT Node Properties section as well.

UDT Node Properties.png

You’ll see attached devices for these ports in the Node Details page, in the Port Details resource.

Attached Devices.png

How Do I Get This Goodness?

To see all the features of Network Insight for Palo Alto, you’ll want to have several modules installed and working together.

  • Network Performance Monitor discovers and polls your Palo Alto firewall and retrieves and displays your site-to-site VPN and GlobalProtect client VPN connection information.
  • Network Configuration Manager collects your device configuration and provides a list of your security policies for zone-to-zone communication. This module tracks configuration changes over time and provides the context for policies spanning multiple devices.
  • NetFlow Traffic Analyzer collects flow data from the firewall and maps the traffic to policies in the Policy Details page. You can also view traffic through the firewall, or through specific interfaces.
  • User Device Tracker collects directly connected devices and provides a history of connections to the ports on the device.

You can demo these products individually, or install or upgrade from any installer available in your Customer Portal.

Product Manager

I’m excited to announce the general availability of SolarWinds® Service Desk, the newest member in the SolarWinds product family, following the acquisition of Samanage.

pastedImage_0.png

SolarWinds Service Desk (SWSD) is a cloud-based IT service management solution built to streamline the way IT provides support and delivers services to the rest of the organization. The solution includes an ITIL-certified service desk with incident management, problem management, change management, service catalog, and release management, complemented by an integrated knowledge base. It also includes asset management, risk and compliance modules, open APIs, dashboards, and reporting.

Core Service Desk

SWSD includes a configurable Employee Service Portal, allowing employees to make their service requests, open and track their tickets, and find quick solutions through the knowledge base. The portal’s look and feel can be customized to your branding needs, and configurable page layouts support your organization’s unique service management processes.

pastedImage_5.png

For IT pros working the service desk, we provide an integrated experience to bring together all related records (for example, assets or knowledge base articles related to an incident or change records related to a problem), so the agent can see all the information available to expedite the resolution.

pastedImage_10.png

To help agents prioritize work, Service Level Management (SLM) lets you build and manage SLA policies directly within the service desk, including auto-escalation rules.

pastedImage_15.png

IT pros often need to be on the go or need to respond to urgent service requests and incidents after hours. The SWSD mobile app, available on both iOS and Android mobile devices, allows agents to work on records, make approvals, and track the status of their work queue at all times.

Process Automation

Driving automation throughout all aspects of service delivery helps service desk groups deliver fast, affordable, and highly consistent services to the rest of the organization. Process automation in SWSD uses custom rules logic to route, assign, prioritize, and categorize inbound tickets, change requests, and releases.

The Service Catalog allows you to define and publish IT services (VM provisioning or password reset) and non-IT services (employee onboarding) through the Employee Service Portal. The catalog forms defining those services are dynamic and can be configured to fit specific use cases, with little to no coding required.

pastedImage_25.png

The other part of defining any Service Catalog item is the automated fulfillment workflow.

pastedImage_30.png

IT Asset Management and CMDB

SWSD offers full asset lifecycle management, starting with the management of IT and non-IT asset inventories and an audit history of changes. Compliance risk analysis helps expose unmanaged software or out-of-support software and devices. Where applicable, asset information incorporates contract, vendor, and procurement data to provide a full view of the assets under management.

pastedImage_37.png

The Configuration Management Database (CMDB), populated with the configuration items (CIs) supporting each service, plays a critical role in providing better change, problem, and release management services. Knowing which CIs support each service, and the dependencies between them, helps IT pros better assess the risks and impacts related to IT changes, drive better root cause analysis (RCA) in problem management, and be better prepared for new software releases.

pastedImage_47.png

Integrations

Many service desk processes can be integrated into other IT and business processes. SolarWinds Service Desk comes with hundreds of out-of-the-box integrations and an open REST API, allowing you to make it part of the workflows you need.

pastedImage_57.png
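
To give you a feel for what the open API makes possible, here's a rough PowerShell sketch of creating an incident from a script. Treat it as an illustration only: the base URL, header names, and payload fields shown here are assumptions, so check the SWSD API documentation for the actual endpoints and authentication details.

# Hypothetical sketch: create an incident via the Service Desk REST API.
# The URL, headers, and JSON fields below are illustrative assumptions only.
$baseUrl = "https://api.samanage.com"                                  # assumed API endpoint
$headers = @{
    "X-Samanage-Authorization" = "Bearer <your-api-token>"             # assumed auth header
    "Accept"                   = "application/vnd.samanage.v2.1+json"  # assumed version header
}
$body = @{ incident = @{ name = "Printer offline on floor 3"; priority = "Medium" } } |
    ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri "$baseUrl/incidents.json" -Headers $headers -Method Post `
    -Body $body -ContentType "application/json"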

We are releasing a brand-new integration today with Dameware® Remote Everywhere (DRE). The great synergy between SWSD and Dameware’s remote support capabilities allows agents to initiate a DRE session directly from a SWSD incident record.

pastedImage_62.png

Artificial Intelligence (AI)

AI is embedded in a few different SWSD functions, introducing a new level of automation and an improved time to resolution. Our machine learning algorithms analyze large sets of historical data, identify patterns, and accelerate key service management processes. There is a “smart” pop-up within the employee service portal that auto-suggests the best corresponding knowledge base articles and service catalog items related to the keyword(s) typed in the search bar.

pastedImage_68.png

For agents, AI helps with automatic routing and classification of incoming incidents, reducing the impact of misclassifications and human errors. It also offers “smart suggestions” agents can leverage when working on a ticket. Smart suggestions are made based on keyword matching from historical analysis of similar issues—those suggestions offer knowledge base articles or similar incidents, advising the agent on the best actions to take next.

pastedImage_73.png

Reports and Dashboards

SolarWinds Service Desk comes with dozens of out-of-the-box reports to analyze and visualize the service desk’s KPIs, health, and performance. Those reports help agents, managers, and IT executives make data-driven decisions through insights, including trend reports, incident throughput, customer satisfaction (CSAT) scores, and SLA breaches.

pastedImage_79.png

Dashboards provide a real-time, dynamic view of the service desk. Dashboards are composed of a set of widgets that can be added, removed, and configured to adjust to the individual needs of the agent, manager, or organization.

pastedImage_89.png

pastedImage_90.png

This has been a pretty packed inaugural product blog for us, and I hope you found it useful. We’d love to get your feedback and ideas here. Please stay tuned for many more ITSM updates as we quickly build out the new THWACK Service Desk product forum.

Product Manager

Security Event Manager (SEM) 6.7 is now available on your Customer Portal. You're probably wondering what exactly Security Event Manager is. It's the product formerly known as Log and Event Manager (LEM). LEM has always been much more than a tool for basic log collection and analysis: it detects and responds to cyberattacks and eases the burden of compliance reporting. SEM helps organizations across the globe improve their security posture, and we believe the new name better reflects the capabilities of the tool.

FLASH - THE BEGINNING OF THE END

Moving away from Flash has been the top priority for SEM for some time. I'm excited to say that this release introduces a brand-new HTML5 user interface as the default interface for SEM. You can now perform most of your day-to-day tasks within this new interface, including searching, filtering and exporting logs, as well as configuring and managing correlation rules and nodes. The feedback on the new UI has been hugely positive thus far, with many users describing it as clean, modern and incredibly responsive. The Flash interface is still accessible and is required for tasks such as Group/User Management, E-Mail Templates and the Ops Center. However, we're by no means finished with the new user interface and will continue to make improvements and transition away from Flash.

Screenshot 2019-05-13 at 10.58.00.png

CORRELATION RULES

Correlation is one of the key components of any effective SIEM tool. As vast amounts of data are fed into Security Event Manager, the correlation engine identifies, alerts on, and responds to potential security weaknesses or cyberattacks by comparing sequences of activity against a set of rules. This release includes a brand-new Rule Builder, which enables you to easily build new rules and adjust existing rules. We've made several improvements, including drop-down menus (as well as the traditional drag-and-drop) to create rules, auto-enablement of the rule after saving, easier association of Event Names and Active Response actions, and the removal of the Activate Rules button.

Screenshot 2019-05-20 at 09.18.59.png

Screenshot 2019-05-20 at 09.20.15.png

FILE INTEGRITY MONITORING

FIM was originally introduced way back in LEM 6.0 and has provided users with great insight into access and modifications to files, directories, and registry keys ever since. With users constantly creating, accessing, and modifying files, a huge amount of log data is generated, often with excessive noise. To better enable you to split the signal from the noise, we've introduced File Exclusions within our redesigned FIM interface. If a particular machine is generating excessive noise from particular file types (I'm looking at you, tmp files), you can now easily exclude those file types at the node level.

Screenshot 2019-05-20 at 09.47.52.png

LOG EXPORT

When investigating a potential cyberattack or security incident, you'll often need to share important log data with other teams or external vendors, or attach the logs to a ticket/incident report. Exporting results to a CSV is now possible directly from the Events Console.

Screenshot 2019-05-20 at 10.09.02.png

AWS DEPLOYMENT

As organizations shift workloads to the cloud to lower costs and reduce management overhead, they require the flexibility to deploy tools in the cloud. In addition to the Azure deployment support included in LEM 6.5, this release adds support for AWS deployment. Deployment is done via a private Amazon Machine Image, so you need to contact SolarWinds Sales (for evaluation users) or Technical Support (for existing users) to gain access to the AMI. Please note that your AWS Account ID will be required in order to grant access.

I really hope you like the direction we're going with Security Event Manager, especially the new user interface. We're already hard at work on the next version of SEM, as you can see in the What We're Working On post. As always, your feedback and ideas are greatly appreciated, so please keep them coming in the Feature Requests area.

Product Manager

SolarWinds® Access Rights Manager (ARM) 9.2 is available on the Customer Portal. Please refer to the release notes for a broad overview of this release.

Most of you are using cloud services in your IT environments today, living in and managing a hybrid world.

With the release of ARM 9.1, we already took this into consideration by complementing the existing access rights visibility into Active Directory, Exchange, and file servers with visibility into Microsoft® OneDrive and Microsoft® SharePoint Online.

Now, with ARM 9.2, we round off the feature set by introducing the ability to collect events from Microsoft® OneDrive and SharePoint Online, giving you visibility into activity within these platforms as well.

In addition to the functionality above, a lot of work was done under the hood to lay the foundation for features we will make available in coming releases.

What’s New in Access Rights Manager 9.2?

  • Microsoft OneDrive and SharePoint Online monitoring - Administrators need to be aware of certain events in their OneDrive and SharePoint Online infrastructure. ARM now enables administrators to retrieve events from the O365 environment and analyze them in reports.
  • UI - Design and layout optimizations to complete the SolarWinds look and feel.
  • Defect fixes - as with any release, we addressed product defects.

The SolarWinds product team is excited to make these features available to you.  We hope you enjoy them. 

Of course, please be sure to create new feature requests for any additional functionality you would like to see with ARM in general.

To help get you going quickly with this new version, below is a quick walk-through of the new monitoring capabilities for Microsoft® OneDrive and Microsoft® SharePoint Online.

Identify ACCESS to shared directories and files on OneDrive

OneDrive is an easy tool to let your employees share resources with each other and/or external users. ARM makes it easy for you to check which files an employee has shared internally or externally, and who actually accessed them.

Now let’s take a look at how we can use OneDrive monitoring to answer the question “with whom outside the company do we share documents and files?” ARM allows you to easily generate a report for this.

pastedImage_8.png

1. Navigate to the Start screen in the ARM rich client and click on “OneDrive Logga Report” in the Security Monitoring section.

pastedImage_29.png

The configuration for the “OneDrive Logga Report“ opens.

2. Provide a title and comment that will be shown at the beginning of the report (optional). Select the time period analyzed for this report.

pastedImage_20.png

3. Click into “OneDrive Resources”

4. Select the target resources on the right side for this report by double clicking.

pastedImage_32.png

5. Click into “Operations”

6. Since we are interested in who shared the resources, when, and whether external users have accessed them, we select the “AnonymousLinkCreated” and “AnonymousLinkUsed” operations on the right side for this report by double-clicking.

7. Click on “Start” to create this report manually.

pastedImage_36.png

8. Click on “Show report” to view the report.

The generated report shows who invited external users to access internal resources and when, and whether any external users accessed them, including the IP addresses they came from.

Note: You can schedule this report to be sent periodically to your mailbox to stay on top of what’s happening.

In the same way, you can generate reports on the more than 180 other events available in SharePoint Online and OneDrive. Just follow the outlined steps and, in step 6, select the operations you are interested in.

Other interesting events you might want to look at are file- and folder-related operations like FileDeleted/FolderDeleted or FileMoved/FolderMoved, which help with the classic use case of employees complaining about disappearing files and folders.

On a side note, file/folder events on file servers are also captured in our monitoring and are available through the file server reports.

Conclusion

I hope that this quick summary gives you a good understanding of the new features in ARM and how you can utilize ARM to get better visibility and control over your hybrid IT environment. 

If you are reading this and not already using SolarWinds Access Rights Manager, we encourage you to check out the free download.  It’s free. It’s easy.  Give it a shot.

Product Manager

We are happy to announce the release of SolarWinds® Access Rights Auditor, a free tool designed to scan your Active Directory and file system and evaluate possible security risks due to existing user access rights.

pastedImage_1.png

Ever hear of risks and threats due to unresolved SIDs, globally accessible directories, directories with direct access, or groups in recursion –  and wondered if you were affected?

Access Rights Auditor helps you answer this question by identifying use cases such as these, and it allows you to export the overall risk summary as an easy-to-understand PDF report you can share.

Don’t know where to start?

Let’s walk through a typical use case assuming we want to check the permissions and risks associated with a sensitive folder from the Finance department.

pastedImage_3.png

We type the phrase “invoices” in the search box and press enter (1).

pastedImage_5.png

The “Search Results” view displays the search history and all hits of your current search across the available categories, such as folders, users, and groups.

We select the folder we are interested in by clicking on “Invoices” (2).

pastedImage_7.png

Now we’re redirected to the “Folder Details” view and immediately get all “Folder Risks” displayed – in this example, three occurrences of “Unresolvable SIDs” and “Changed Access Permissions.”

But it doesn’t end here, because some risks are inherited by directories, for example from inactive user accounts that still have access. These hidden risks are also listed here, in the “Account Risks” section.

pastedImage_12.png

Now we validate who has access in the “User and groups” section below and realize that in our example the “System” account and the “Domain Admins” group have “full control” access on the folder.

To see the members of the “Domain Admins” group, simply click the group and you’ll be redirected to the “Group details” view.

pastedImage_14.png

Access Rights Auditor improves your visibility into permissions and risks with just a few clicks.

Can’t believe it’s free? Go ahead and give it a try.

For more detailed information, check the Quick Reference guide here on THWACK® at https://thwack.solarwinds.com/docs/DOC-204485.

Download SolarWinds Access Rights Auditor at https://www.solarwinds.com/free-tools/access-rights-auditor.

Product Manager

For those of you who didn’t know, Storage Resource Monitor 6.8 is now available for download! This release continues our momentum of supporting new arrays you requested on THWACK®, as well as deepening our existing support for the most popular arrays.

Why don’t we go over some of what’s new in SRM with the 6.8 release?

NEW ARRAY SUPPORT - KAMINARIO®

We’re all really excited here about our newest supported array vendor: Kaminario®. Kaminario is an enterprise storage vendor with a lot of exciting progress going on, and we now support their arrays, starting with K2 and K2.N devices. And we think you will be excited too, if the voting on THWACK has anything to say about it.

Best of all, out of the box, this new support includes all the standard features you know and love: capacity utilization and forecasting, performance monitoring, end-to-end mapping in AppStack™, integrated performance troubleshooting in PerfStack™, and Hardware Health.

And, as always, we’re excited to share some screenshots.

Summary View

pastedImage_28.png

Hardware Health View

pastedImage_29.png

NEW HARDWARE HEALTH SUPPORT - DELL® COMPELLENT AND HPE 3PAR

Whether you’re a new SRM customer or you’ve been a customer for a while, you know there’s a lot of value in extending an array’s support to hardware health. With SRM 6.8, we focused on adding hardware health support to the arrays most popular with our customers. And so, we’re excited to announce hardware health support for Dell® Compellent and HPE 3PAR arrays. Starting in SRM 6.8, digging into these array types lets you see details on fans, power supplies, batteries, and more.

A screenshot? Of course.

pastedImage_36.png

WHAT’S NEXT

Add in some bug fixes and smaller changes and you have SRM 6.8. We’re excited for you all to check it out.

If there are any other features that didn’t make it into SRM 6.8 but that you would like to see, make sure to add them to our Storage Manager (Storage Profiler) Feature Requests forum. But before you do, head over to the What We’re Working On page to see what the storage team already has in the works for upcoming releases.

And as always, comments welcome below.

- the SRM Team

Level 11

I’m happy to announce the General Availability of Database Performance Analyzer (DPA) 12.1. This release focuses on deeper performance analysis and management of DPA through these cool new features:

  • Anomaly Detection Powered by Machine Learning
  • Management API
  • Upgraded Java
  • New Options Page
  • Alerting Improvements

Anomaly Detection Powered by Machine Learning

Users tend to log help desk tickets when things are running slower than normal, i.e., an anomaly. Those tickets often find their way to the database team’s inbox to check the database. DPA can be used to find issues when you have time to drill into the wait time data, but often, time is of the essence. Everyone wants answers immediately.

Tired of comparing the trends chart with previous days to decide what “normal” looks like? DPA 12.1 now does the work for you, using a machine learning algorithm to identify which hours are abnormal, and displays the information contextually on the trends page. Bonus! If DPA detects an anomaly in the last 60 minutes, it changes the wait time status on the home page, letting you quickly identify the database instances your users are waiting on.
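
For intuition only, here's a toy PowerShell sketch of the general idea behind flagging an abnormal hour. This is not DPA's machine learning model; it simply compares the current hour's wait time against the historical values for that same hour and flags a large deviation.

# Toy illustration of hourly anomaly flagging -- NOT DPA's actual algorithm.
# Compare this hour's wait time against the mean/standard deviation of the
# same hour on previous days and flag it if it sits far above normal.
$history     = 42, 38, 45, 40, 39, 44, 41   # sample wait minutes for this hour on prior days
$currentWait = 95                           # wait minutes observed this hour

$avg = ($history | Measure-Object -Average).Average
$var = ($history | ForEach-Object { [math]::Pow($_ - $avg, 2) } | Measure-Object -Average).Average
$sd  = [math]::Sqrt($var)
$z   = ($currentWait - $avg) / $sd

if ($z -gt 3) { "Abnormal hour: wait time is {0:N1} standard deviations above normal." -f $z }
else          { "Within the normal range for this hour." }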

The DPA wait meter on the home page is now powered by anomaly detection, and new correlation charts appear as you drill into an instance. For example, you may be reviewing the home page and suddenly see the wait meter turn red.

pastedImage_0.png

This is an indication the instance is having higher than normal wait times and may be having issues. Clicking on the wait meter takes you to a view of the last 24 hours, and the status of the last bar will match the wait meter.

pastedImage_1.png

Drilling into the last bar, we can start to unravel the root cause of the anomaly. In this example, we see heavy wait times on RESOURCE_SEMAPHORE_QUERY_COMPILE, usually an indication that one or more queries require more memory than is currently available. In our case, many queries were waiting on this wait type, indicating a potential memory shortfall on the database server, which is what we found to be the case. Without the anomaly detection feature, we may not have known about this problem.

pastedImage_2.png

For more about this story and others, see the DPA 12.1 feature: Anomaly detection post in the DPA Customer Success Center.

Management API

DPA has many customers automating tasks within their database environments, and many of you have scripts that can deploy or destroy a database environment in minutes. The new REST API in DPA 12.1 extends that automation to the management of DPA itself, as well as of the monitored instances. Scripts can securely connect to DPA and issue calls to:

  • Add and remove instances
  • List, allocate, and deallocate licenses
  • Stop, start, and update passwords for monitors
  • Add, retrieve, and delete annotations
  • And more

pastedImage_4.png
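
If you'd like a feel for what a call looks like, here's a minimal PowerShell sketch. The bearer-token pattern is typical for REST APIs, but the server address, port, and resource paths below are placeholder assumptions; the Management API Documentation link on the new Options page lists the exact routes your DPA version exposes.

# Minimal sketch of calling the DPA REST API from PowerShell.
# Server address, port, and resource paths are placeholder assumptions --
# consult the Management API Documentation for the real routes.
$dpaServer = "https://dpa.example.local:8124"   # hypothetical DPA server address
$headers   = @{ Authorization = "Bearer <access-token>"; Accept = "application/json" }

# Example: list the monitored database instances (hypothetical route)
Invoke-RestMethod -Uri "$dpaServer/iwc/api/databases" -Headers $headers -Method Get |
    Format-Table

# Example: add an annotation when a new application build ships (hypothetical route and body)
$body = @{ title = "App build 2.4.1 deployed"; time = (Get-Date).ToString("o") } | ConvertTo-Json
Invoke-RestMethod -Uri "$dpaServer/iwc/api/databases/1/annotations" -Headers $headers `
    -Method Post -Body $body -ContentType "application/json"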

DPA customers are already using the API to:

  • Create annotations when a new build of an application is installed
  • Add monitoring to a newly created database instance and allocate proper licenses
  • Stop and restart monitors before and after O/S patches

If you are using the DPA API to do cool things, reply to this post and let us know about it.

For more information about DPA’s REST API, including an interface to try the calls out before building code around them, use the new Options page and the Management API Documentation link. Here’s a list of other useful pages for when you’re ready to put the API into action:

What Did You Find?

Our QA team uses DPA to help make sure our code performs well. The anomaly detection feature has helped them be more efficient when problems crop up. DPA pings them with anomaly detection alerts, rather than requiring a person to drill into every instance to find issues. They can then use the anomaly detection charts to quickly understand the issues. If you find interesting stories in your environment, let us know by leaving comments on this blog post.

We would love to hear feedback about the following:

  • Does anomaly detection improve your workflow for finding wait time issues?
  • Are there issues in your databases that DPA did not find, or flagged incorrectly?
  • Are you using the REST API? How much time does it save you? What processes are you automating?

What’s Next?

To learn more about the exciting DPA 12.1 new features, see the DPA Documentation library and visit your SolarWinds Customer Portal to get the new software.

If you don't see the features you've been wanting in this release, check out the What We Are Working On for DPA post for what our dedicated team of database nerds are already looking at. If you don't see everything you've been wishing for there, add it to the Database Performance Analyzer Feature Requests.

Product Manager

I'm very excited to announce that SolarWinds Server Configuration Monitor (SCM)​ 1.1 is now available for download! This release expands on SCM 1.0 capabilities, both giving more detail for each change detected, and adding a new data source that can be analyzed for changes:

  • Detect “Who made the change” for files and registry
  • Detect changes in near real-time
  • Deploy PowerShell scripts and track changes in the output (with links to additional example scripts)
  • Set baselines for multiple nodes at once

Who made the change? In near real-time

SCM 1.0 was good at detecting changes in your Windows files and registry, but it didn't tell you who made the change, leaving you to do some additional investigative work. SCM 1.1 adds "who made the change" by leveraging our File Integrity Monitoring (FIM) technology, which also detects changes in near real-time -- a double benefit. Near real-time detection lets us catch changes almost as they happen and gives us a separate record for each change, even if changes happen in rapid succession.

Turning on "Who made the change"

After you install or upgrade to SCM 1.1, you can easily turn on the "Who Made the Change" feature for the servers you want to monitor via a wizard:

  • From the "Server Configuration Summary -> What's New Resource," click the Set Up "Who Made the Change" Detection button
  • From the "All Settings -> Server Configuration Monitor Settings -> Polling Settings Tab," click the Set Up Who Detection button

Either way, it starts the "Who Made the Change" wizard.

The first step tells you about what happens when you turn on "Who Made the Change" detection:

The second step allows you to define the server exclusion list and turn on the feature:

Once you press Enable Who Detection, SCM will push out the FIM driver to the agent(s) and turn it on, so file and registry changes will be monitored in near real-time rather than polled once a minute as in SCM 1.0. You can always come back and change the exclusion list or turn off "Who Made the Change" later.

Where to see "Who made the change"

You can see who made the change (user and domain) in a number of places, represented by the person icon.

  • SCM Summary: Recent Configuration Changes resource
  • Node Summary: Configuration Details and Recent Configuration Changes resources
  • Node: Content comparison. Note that the timestamp I added to the file matches the time SCM shows the file changed.

Alerting

When building an alert, you can filter on "Who made the change" and add it to the text of your alert.

Reporting

The out-of-the-box SCM report includes "Who made the change" data.

Deploy and monitor the output of PowerShell scripts

Everyone's environment is different, and SCM could never monitor everything you want to "out-of-the-box." So, we added the ability to deploy and execute PowerShell scripts and compare the output over time. Now, configuration monitoring is only limited by your imagination and scripting super powers.
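
For example, here's the kind of short script you might drop into a profile as a script element. It's just a sketch (monitor whatever makes sense in your environment); this one reads the standard Windows uninstall registry keys, so any software install or removal shows up as a change from your baseline.

# Sketch of a script element for an SCM profile: list installed software.
# Reads the standard Windows uninstall registry keys (64-bit and 32-bit views),
# so the output changes whenever software is installed or removed.
$paths = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
         'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'

Get-ItemProperty -Path $paths -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object -Property DisplayName, DisplayVersion, Publisher |
    Sort-Object -Property DisplayName |
    Format-Table -AutoSize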

Adding a new script

I created a new Profile for this test, but you can add scripts to your current Profiles too.

First, create a new Profile and click Add to add a new element.

To add a PowerShell script configuration element:

  1. Choose PowerShell script as your Element type.
  2. Paste your script into the box.
  3. Click Add to add the element to the profile, then add again to save the profile.

Deploy and enjoy!

Once your new (or modified) profile is ready, you can deploy it to one or more agents. From Server Configuration Monitor Settings > Manage Profiles, select the profile, click Assign, pick the servers you want, and walk through the wizard. SCM will deploy the scripts and start executing them on schedule.

Comparing the output

Comparing the output of the script over time works like any other source (file, registry, asset info) in SCM. You can set baselines and see changes in the content comparison. As you can see, the entire output of the script is captured and stored.

Mix and match elements in profiles

Don't forget -- one of the great things about SCM is you can mix and match elements in a single profile. Mix and match registry settings, multiple files, and PowerShell scripts in a single profile to monitor the interesting aspects of your configurations.

Check Out Some Cool PowerShell Examples by Kevin

SolarWinds' own Technical Community Manager KMSigma put together some awesome examples of what SCM can do: Manage and Monitor PowerShell Scripts

Keep a lookout in our SCM forums for more PowerShell script examples in the future, and feel free to post your scripts too.

Set/Reset baselines for multiple nodes at once

Our early customers in large environments were limited to setting/resetting baselines one node at a time, which was very painful when dozens or hundreds of servers were updated (like after a Windows update), so we addressed it quickly in this release. Now, from the Server Configuration Monitor Settings screen, you can pick multiple servers, see a quick summary of the number of baselines you'll be updating, and then reset the baselines to the current output -- easy as 1-2-3.

What's next?

Don't forget to read the SCM 1.1 Release Notes to see all the goodness now available.

If you don't see the features you've been waiting for, check out the What We're Working on for SCM post for a list of features our dedicated team of configuration nerds and code jockeys are already researching. If you don't see everything you've been wishing for, add it to the Server Configuration Monitor (SCM) Feature Requests.

Product Manager

I’m pleased to announce the General Availability of Log Analyzer (LA) 2.0 on the Customer Portal. You may be wondering what Log Analyzer is. The artist formerly known as Log Manager for Orion has undergone a transformation. It has evolved past its former life as a 1.0 product and become Log Analyzer 2.0. The name Log Analyzer was selected after extensive research into what our users would call a product with this feature set that solves the problems our tool solves. I hope you like the new name!

This release includes Windows Event Support, Log Export, Log Forwarding and Rule Improvements as well as other items listed in the Release Notes.

Windows Events

As a system administrator, closely monitoring Windows Events is vital to ensuring your servers and applications are running as they should be. These events can also be hugely valuable when troubleshooting all sorts of Windows problems and determining the root cause of an issue or outage. While there is a vast array of Windows Event categories, the three main logs you'll likely focus on when troubleshooting are Application (events from programs installed on the system), System (events relating to Windows components), and Security (security-related events such as authentication attempts and resource access). Trawling through the Windows Event Viewer on individual servers to find the needle in the haystack can be a laborious task. Having a tool such as Log Analyzer can be a real lifesaver when it comes to charting, searching, and aggregating these Windows Events. Thanks to the tight integration with Orion, you can view your Windows Events alongside the performance data collected by other tools such as NPM and SAM. It's worth noting that you can also add VMware events into the mix, thanks to the latest Virtualization Manager (VMAN) release.
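
For a sense of the per-server legwork LA replaces, here's a quick PowerShell sketch of pulling one needle out of one haystack manually: the last 20 failed logons (event ID 4625) from a single server's Security log. You'd have to repeat something like this box by box, which is exactly the trawling described above.

# Pull the last 20 failed logon events (ID 4625) from the local Security log.
# Requires an elevated session to read the Security log; repeat per server.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 20 |
    Select-Object -Property TimeCreated, MachineName, Id, Message |
    Format-List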

In order to start ingesting Windows Events with Log Analyzer, you need to install the Orion Agent on your Windows device. Windows Event Forwarding​ is also supported, so if you prefer to forward events from other nodes to a single node with the Orion agent installed, that's an option too. By default, we collect all Windows Application and System events, along with 70 of the most common Windows Security Events. You can view more information on setting up Windows Event Collection here.

Once you have the agent installed and added the node(s) to Log Analyzer, you'll see the Events within the Log Viewer. Events are automatically tagged with Application, System or Security tags. Predefined rules are also included out of the box which tag events such as Authentication Events, Event Logs Cleared, Account Creation/Lockout/Deletion, Unexpected Shutdowns, Application Crashes and more.

Screenshot 2019-03-12 at 10.18.27.png

Windows Events are also supported in PerfStack, enabling you to correlate performance data with Windows Events. For example, you can see below there are memory spikes on a SQL Server, with some corresponding Windows Events and Orion Alerts. Drilling into the Windows Events you can clearly see there is insufficient system memory which is causing the Node Reboot and SQL Server Insufficient Resources alerts.

Screenshot 2019-03-12 at 10.58.21.png

Log Forwarding

Log Analyzer shouldn't be seen as a dead end for your log data. There may be times when you need to forward incoming syslog/traps to another tool, such as an incident management system or SIEM, for further processing and analysis. This release includes a new 'Forward Entry' rule action, which enables you to forward syslog/traps to another application. You can keep the source IP of the entry intact or replace it with Orion's IP address:

Screenshot 2019-03-12 at 11.21.15.png

Screenshot 2019-03-12 at 11.22.12.png

Log Export

When troubleshooting problems, it's often necessary to share important log data with other team members or external vendors, or attach it to a helpdesk ticket. You can now do so thanks to the new Export option within the Log Viewer.

Screenshot 2019-03-12 at 11.33.28.png

Screenshot 2019-03-12 at 11.47.21.png

Rule Improvements

We've added some pre-populated dropdown menus for fields such as MachineType, EngineID, Severity, Vendor and more to make it even easier to create log rules. It is now also possible to adjust the processing order of the rules.

Screenshot 2019-03-12 at 12.00.34.png

The team is already hard at work on the next version of LA, as you can see covered here in the What We're Working On post. Also, please keep the feedback coming on what you think and what you would like to see in the product in the Feature Requests section of the forum.

Product Manager

Virtualization Manager (VMAN) 8.4 is now available and can be downloaded from your customer portal. In recent releases, we brought you VMware vSAN monitoring, container support, and better centralized upgrades to your deployment overall.

VMware Event Monitoring, Correlation, and Alerting

For a virtualization admin, tracking the many changes that occur in dynamic and often automated virtualization environments is a primary concern. While many virtualization vendors tout that the simplicity of their solution alleviates the need for admins to worry, I err on the side of caution. With VMware event monitoring, you now have real-time access to VMware's alarms, health checks, events, and tasks, so you can alert on them and correlate them to issues in your environment. Ephemeral events such as vMotions are now easily tracked, and if you also have Log Analyzer, you can tag them for future cataloging.

pastedImage_0.png

Looking at my VMware Events summary, there are quite a few warning and critical events in the last hour. Filtering down to the warning events for a deeper inspection, I can see four of them are warning me of a failed migration for virtual machine DENCLIENTAFF01v.

pastedImage_0.png

Clicking on one of these events allows me to drill in to get more context. Clearly, I need to look at the configuration of my vMotion interface.

pastedImage_1.png

Clicking "Analyze Logs" allows me to have better filtering and is also where I would configure processing rules to start configuring real-time alerting on these VMware events. Yes, event collection is real-time, and as a result, your alerts configured on these events are also triggered in real-time. If you want to be alerted to host connection changes, or when vMotions are triggered when they aren't supposed to be, you now can be alerted immediately.

pastedImage_0.png

For those of you who have Log Analyzer, you have even more troubleshooting tools that play very nicely with this VMAN feature. Are you looking to visually see occurrences of this event over time? Easy. Click "Analyze Logs" to navigate to the Log Viewer. Your Log Viewer will differ in that you'll have a visual graph to see how many times this event has occurred over the specified time period. In the example below, I increased the time to two hours, and searched for "vMotion." In addition, I've used the tagging feature to tag all events like this with a "vMotion" tag.

pastedImage_2.png

So how do I correlate this to problems? By using the PerfStack dashboard.

pastedImage_1.png

After troubleshooting your issues, simply save the PerfStack project and put that project on your NOC view for future visibility.

pastedImage_2.png

Deeper Dives and Other Features

For a more in-depth look at the VMware events feature, check out these documents. Let me know if you have use cases that require real-time alerting, monitoring, and reporting so we can consider adding them as out-of-the-box content.

For those of you who are curious about what we have for users who don't need VMware event visibility, check out these documents for more details:

Next on the VMAN Roadmap

Don't see what you're looking for here? Check out the WHAT WE'RE WORKING ON FOR VIRTUALIZATION MANAGER (UPDATED MARCH, 2019) post for what our dedicated team of virtualization nerds and code jockeys are already looking at. If you don't see everything you've been wishing for there, add it to the Virtualization Manager Feature Requests.

This version of VMAN is compatible with the legacy VMAN 8.1 appliance; however, all the newly available features are only on VMAN on the Orion Platform. If you're using the appliance on your production VMAN installation, I recommend that you consider retiring the appliance at your earliest convenience to reap all the benefits of the new features we are developing for VMAN on Orion. If you cannot retire the appliance for any reason, I'm very interested in your feedback and reasons, and would love to see them listed out in the comments below.

Helpful Links

Community Manager

Anyone who knows me knows that I’m a fan of PowerShell. “Fan” is a diminutive version of the word “fanatic,” and in this instance both are true. That’s why I was so excited to see that PowerShell script output is now supported in Server Configuration Monitor (SCM).

Since SCM’s release, I’ve always thought it was a great idea to monitor the directory where you store your scripts, to make sure they don’t vary, to validate changes over time, and even to revert them if a change was made without approval. That part, however, was already available in the initial release of SCM: you can monitor your C:\Scripts\*.ps1 files and get notified when any deviate from their baselines.

Using PowerShell scripts to pull information from systems you’re monitoring is only limited by your scripting prowess. But let me say this plainly: You don’t need to be a scripting genius. The THWACK® members are here to be your resources. If you have something great you wrote, post about it. If you need help formatting output, post about it. If you can’t remember how to get a list of all the software installed on a system, post about it. Someone here has probably already done the work.

Monitoring the Server Roles

Windows now handles many of the "roles" of a machine (Web Server, Active Directory Server, etc.) based on the installed features. There has never been a really nice way to see which roles are installed on a machine outside of Server Manager. This is especially true if you're running Windows Server Core, which has no Server Manager at all.

Now, you can just write yourself a small PowerShell script:

Get-WindowsFeature | Where-Object { $_.Installed } | Select-Object -Property Name, DisplayName | Sort-Object -Property Name

…and get the list of all features displayed for you.

Name                      DisplayName
----                      -----------
FileAndStorage-Services   File and Storage Services
File-Services             File and iSCSI Services
FS-Data-Deduplication     Data Deduplication
FS-FileServer             File Server
MSMQ                      Message Queuing
MSMQ-Server               Message Queuing Server
MSMQ-Services             Message Queuing Services
NET-Framework-45-ASPNET   ASP.NET 4.7
NET-Framework-45-Core     .NET Framework 4.7
NET-Framework-45-Features .NET Framework 4.7 Features
NET-WCF-Services45        WCF Services
NET-WCF-TCP-PortSharing45 TCP Port Sharing
PowerShell                Windows PowerShell 5.1
PowerShell-ISE            Windows PowerShell ISE
PowerShellRoot            Windows PowerShell
Storage-Services          Storage Services
System-DataArchiver       System Data Archiver
Web-App-Dev               Application Development
Web-Asp-Net45             ASP.NET 4.7
Web-Common-Http           Common HTTP Features
Web-Default-Doc           Default Document
Web-Dir-Browsing          Directory Browsing
Web-Dyn-Compression       Dynamic Content Compression
Web-Filtering             Request Filtering
Web-Health                Health and Diagnostics
Web-Http-Errors           HTTP Errors
Web-Http-Logging          HTTP Logging
Web-ISAPI-Ext             ISAPI Extensions
Web-ISAPI-Filter          ISAPI Filters
Web-Log-Libraries         Logging Tools
Web-Metabase              IIS 6 Metabase Compatibility
Web-Mgmt-Compat           IIS 6 Management Compatibility
Web-Mgmt-Console          IIS Management Console
Web-Mgmt-Tools            Management Tools
Web-Net-Ext45             .NET Extensibility 4.7
Web-Performance           Performance
Web-Request-Monitor       Request Monitor
Web-Security              Security
Web-Server                Web Server (IIS)
Web-Stat-Compression      Static Content Compression
Web-Static-Content        Static Content
Web-WebServer             Web Server
Web-Windows-Auth          Windows Authentication
Windows-Defender          Windows Defender Antivirus
WoW64-Support             WoW64 Support
XPS-Viewer                XPS Viewer

This is super simple. If someone adds or removes one of these features, you'll know moments after it's done, because the output will deviate from your baseline.

Monitoring Local Administrators

This got me thinking about all manner of other possible PowerShell script uses. One that came to mind immediately was local security. We all know the local Administrators group is an easy way for people to circumvent security best practices, yet knowing who is in that group has always proven difficult.

Now that we don't have those limitations, let's look at the local users in the local Administrators group.

Get-LocalGroupMember -Group Administrators | Where-Object { $_.PrincipalSource -eq "Local" } | Sort-Object -Property Name

Now you'll get back a list of all the local users in the Administrators group.

ObjectClass Name                         PrincipalSource
----------- ----                         ---------------
User        NOCKMSMPE01V\Administrator   Local
User        NOCKMSMPE01V\Automation-User Local

Now we'll know if someone is added or deleted. You could extend this to know when someone is added to Power Users or any other group. If you really felt like going gangbusters, you could ask for all the groups and then enumerate the members of each.
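If you do want to go all the way, here's a minimal sketch along those lines (nothing SCM-specific, just the local accounts cmdlets) that walks every local group and lists its members:

# Enumerate every local group and its members; skip groups that cannot be resolved.
Get-LocalGroup | ForEach-Object {
    $GroupName = $_.Name
    Get-LocalGroupMember -Group $GroupName -ErrorAction SilentlyContinue |
        Select-Object -Property @{ Name = 'Group'; Expression = { $GroupName } }, ObjectClass, Name, PrincipalSource
} | Sort-Object -Property Group, Name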

Local Certificates

Your checks don't have to be relegated to PowerShell one-liners, either. You can have entire scripts that return a value you can review.

Also, on the security front, it might be nice to know if random certificates start popping up everywhere. Doing this by hand would be excruciatingly slow. Thankfully it’s pretty easy in PowerShell.

$AllCertificates = Get-ChildItem -Path Cert:\LocalMachine\My -Recurse

# Create an empty list to keep the results
$CertificateList = @()

ForEach ( $Certificate in $AllCertificates )
{
    # Check to see if this is a "folder" or a "certificate"
    if ( -not ( $Certificate.PSIsContainer ) )
    {
        # Certificates are *not* containers (folders)
        # Get the important details and add it to the $CertificateList
        $CertificateList += $Certificate | Select-Object -Property FriendlyName, Issuer, Subject, Thumbprint, NotBefore, NotAfter
    }
}

$CertificateList

As you can see, you aren’t required to stick with one-liners. Write whatever you need for your input. As long as there’s output, SCM will capture it and present it in a usable format for parsing.

FriendlyName : SolarWinds-Orion
Issuer       : CN=SolarWinds-Orion
Subject      : CN=SolarWinds-Orion
Thumbprint   : AF2A630F2458E0A3BE8D3EF332621A9DDF817502
NotBefore    : 10/12/2018 5:59:14 PM
NotAfter     : 12/31/2039 11:59:59 PM

FriendlyName :
Issuer       : CN=SolarWinds IPAM Engine
Subject      : CN=SolarWinds IPAM Engine
Thumbprint   : 4527E03262B268D2FCFE4B7B4203EF620B41854F
NotBefore    : 11/5/2018 7:13:34 PM
NotAfter     : 12/31/2039 11:59:59 PM

FriendlyName :
Issuer       : CN=SolarWinds-Orion
Subject      : CN=SolarWinds Agent Provision - cc10929c-47e1-473a-9357-a54052537795
Thumbprint   : 2570C476DF0E8C851DCE9AFC2A37AC4BDDF3BAD6
NotBefore    : 10/11/2018 6:46:29 PM
NotAfter     : 10/12/2048 6:46:28 PM

FriendlyName : SolarWinds-SEUM_PlaybackAgent
Issuer       : CN=SolarWinds-SEUM_PlaybackAgent
Subject      : CN=SolarWinds-SEUM_PlaybackAgent
Thumbprint   : 0603E7052293B77B89A3D545B43FC03287F56889
NotBefore    : 11/4/2018 12:00:00 AM
NotAfter     : 11/5/2048 12:00:00 AM

FriendlyName : SolarWinds-SEUM-AgentProxy
Issuer       : CN=SolarWinds-SEUM-AgentProxy
Subject      : CN=SolarWinds-SEUM-AgentProxy
Thumbprint   : 0488D26FD9576293C30BB5507489D96C3ED829B4
NotBefore    : 11/4/2018 12:00:00 AM
NotAfter     : 11/5/2048 12:00:00 AM

FriendlyName : WildcardCert_Demo.Lab
Issuer       : CN=demo-EASTROOTCA-CA, DC=demo, DC=lab
Subject      : CN=*.demo.lab, OU=Information Technology, O=SolarWinds Demo Lab, L=Austin, S=TX, C=US
Thumbprint   : 039828B433E38117B85E3E9C1FBFD5C1A1189C91
NotBefore    : 3/30/2018 4:37:41 PM
NotAfter     : 3/30/2020 4:47:41 PM
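One small refinement worth considering, offered here as a sketch rather than a requirement: sorting by thumbprint and emitting one line per certificate keeps the output order stable between runs, so your SCM baseline only changes when a certificate actually appears, disappears, or is renewed.

# One stable, sortable line per machine certificate.
Get-ChildItem -Path Cert:\LocalMachine\My |
    Sort-Object -Property Thumbprint |
    ForEach-Object { '{0}  {1}  expires {2:yyyy-MM-dd}' -f $_.Thumbprint, $_.Subject, $_.NotAfter }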

Antivirus Exclusions

How about your antivirus exclusions? I’m sure you really, really want to know if those change.

$WindowsDefenderDetails = Get-MpPreference
$WindowsDefenderExclusions = $WindowsDefenderDetails.ExclusionPath
$WindowsDefenderExclusions | Sort-Object

Now you’ll know if something is added to or removed from the antivirus exclusion list.

C:\inetpub\SolarWinds
C:\Program Files (x86)\Common Files\SolarWinds
C:\Program Files (x86)\SolarWinds
C:\ProgramData\SolarWinds
C:\ProgramData\SolarWindsAgentInstall

Trying to find this out by hand would be tedious, so let’s just have SCM do the work for you.
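If you'd also like to watch excluded extensions and processes, not just paths, here's a minimal sketch (assuming the Windows Defender cmdlets are available on the host) that pulls all three lists in one pass:

# Collect Defender path, extension, and process exclusions as one comparable record.
$Preference = Get-MpPreference
[PSCustomObject]@{
    ExcludedPaths      = ($Preference.ExclusionPath      | Sort-Object) -join '; '
    ExcludedExtensions = ($Preference.ExclusionExtension | Sort-Object) -join '; '
    ExcludedProcesses  = ($Preference.ExclusionProcess   | Sort-Object) -join '; '
} | Format-List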

This is all just a sample of the power of PowerShell and SCM. We’d love to know what you’ve got in mind for your environment. So, download a trial or upgrade to the latest version of SCM. Be sure to share your excellent scripting adventure so the rest of us can join in the fun!

Read more
14 22 4,485
Product Manager
Product Manager

In part 2 of "What's New in SAM 6.8," we're going to discuss the improved Cisco UCS monitoring shipping with SAM 6.8.

(If you were looking for part 1, it's over here: SAM 6.8 What's New Part 1 - AppInsight for Active Directory.)

Those of you who have been using SAM with NPM for a while are probably already aware that some UCS monitoring was possible in Orion. UCS support has been rewritten so it can be used by any combination of SAM, VMAN, or NPM, or by a standalone deployment of any of them. Additionally, we added a new overview resource that lets you visualize your UCS environment. We fleshed out the hardware health support to include all the pieces: fabric interconnects, chassis, blades, and any rack-mount UCS servers you have managed under UCS. Finally, we added a widget that surfaces native errors and failures from UCS via the API. If you are using Cisco UCS in a hyper-converged (HCI) configuration or hosting your critical virtualization resources on UCS, the new monitoring we have added is going to be a big win for you!

Get started by adding your Cisco UCS Manager node. In the Add a node wizard, click 'Poll for UCS' and enter your credentials.

pastedImage_0.png

Once you are successfully polling the UCS Manager, some new widgets become available:

pastedImage_1.png

Overview and UCS Errors and Failures

pastedImage_2.png

Chassis Overview

pastedImage_3.png

Blade hardware health

pastedImage_5.png

New layer added in AppStack!

AppStack lets you see the relationship between your Cisco UCS resources and the VMs and applications running on them.

See end-to-end status from containers and applications all the way down to the storage at the foundation of your UCS stack!

pastedImage_0.png

Out-of-the-box alerts and reports:

Hardware Alerts:

pastedImage_5.png

Cisco UCS Entity Report

pastedImage_4.png

That wraps up our quick tour of this great new feature in SAM 6.8... As always, if you like what you see or have a question or a comment, please feel free to contribute below.

You can also submit a feature request in Server & Application Monitor Feature Requests.

If you are curious about what we are planning for future releases, jump over to the public roadmap: What We're Working On Beyond SAM 6.8 (Updated March 13, 2019).

Here are some additional useful links related to SAM:

Read more
4 16 2,808
Product Manager
Product Manager

SAM 6.8 is now available. Following up on our previously released AppInsight for SQL, Exchange, and IIS, the latest installment of AppInsight is here, and it wants to make your life easier when it comes to monitoring Active Directory. In addition to performance counters and event logs, detailed information about replication, FSMO roles, and sites is provided out of the box.

To get started, there are a couple of ways to get AppInsight for Active Directory applied to your domain controller nodes:

You can either use "List Resources" on a node you know to be a domain controller, or you can run a Network Sonar Discovery and we will find your DCs for you!

pastedImage_5.png

pastedImage_6.png

Performance counters and events are still here, but we took the time to add some new ones and improve the grouping presentation. User and Computer Events, System Events, Replication Events, Policy Events, and Logon Events are all neatly grouped together to make it easy to find what you're looking for.

pastedImage_2.pngpastedImage_3.pngpastedImage_4.png

Replication: If replication isn't working, your Active Directory isn't working. Keep an eye on replication and get alerted if anything goes wrong. In addition to status, we show direction and site location. You can also expand any given DC to see more detail about its configuration.

pastedImage_3.pngpastedImage_2.png

FSMO Roles at a glance: When something is wrong with a particular DC, it can be helpful to know which roles it holds. Hover over the pill to expand the role description. Filters are also available at the top of the resource so you can focus on servers holding a particular role.

pastedImage_4.png

Site Details: This widget provides a detailed overview of your sites including a view into related Links and Subnets. The widget also allows for quick searching to zero in on a specific item.

pastedImage_5.pngpastedImage_6.png

Alert objects specific to AppInsight for AD

pastedImage_2.pngpastedImage_3.png

So that wraps up our quick tour of this great new feature in SAM 6.8... Don't forget to check out part 2 of what's new in SAM 6.8: SAM 6.8 WHAT'S NEW PART 2 - Enhanced Support for UCS Monitoring.

As always, if you like what you see or have a question or a comment, please feel free to contribute below.

You can also submit a feature request in Server & Application Monitor Feature Requests.

If you are curious about what we are planning for future releases, jump over to the public roadmap: What We're Working On Beyond SAM 6.8 (Updated March 13, 2019).

Here are some additional useful links related to SAM:

Thanks for stopping by!

Read more
5 18 3,139
Level 11

Update: A few new screenshots based on the current version. Full release notes available here.

After four months, it is time again to write another article about another product.
As it happens, we’ve added a new toy to our portfolio:

SolarWinds Access Rights Manager (ARM)

Some of you may know it under its former name, 8MAN.

What exactly does ARM do? And who came up with this TLA?

The tool validates permissions within Active Directory®, Exchange™, SharePoint®, and file servers. So who has access to what, and where does the permission come from?

Users, groups, and effective permissions can be created, modified, or even deleted.

Reports and instant analysis complete the package.

Everything works out of an elegant user interface, and you can operate it—even if you aren’t a rocket scientist.

ARM can be installed on any member server and comes with minimal requirements.
The OS can be anything from Windows Server 2008 SP1 up; give it two cores and four gigs of RAM, and you're golden, even for some production environments. The data is stored in SQL Server 2008 or later.

The install process is quick.

01.jpg

02.jpg

03.jpg

Once installed, the first step is to click the configuration icon on the right-hand side. The color is 04C9D7, and according to the internet, it is called “vivid arctic blue,” but let’s call it turquoise.
On that note, let me tell you: I am German and unable to pronounce turquoise, so I am calling it Türkis instead.

04.jpg

The next step is to create an AD and SQL® user and connect to the database:

05.jpg

Don't panic if you see this message; the system is automatically reconnecting:

06.jpg

ARM is now available, but not yet ready to use.

07.jpg

We need to define a data source, so let’s attach AD. The default settings will use the credentials already stored in ARM for directory access.

05.png

In my example, an automated search kicks off in the evening. When you set it up for the first time, I suggest clicking the arrow manually once to get some data to work with.
Attention: Don’t do this with 10,000 users in the early morning.

Alright, that’s it.


Now click the orange—sorry, F99D1C—icon to start the tool.

06.png

Login:

08.jpg

The first thing we see is the dashboard:

09.jpg

Let’s deal with the typical question, “Why was that punk able to access X at all?”
The main reason for this is probably a nested authorization, which isn’t obvious at first glance.
But now ARM comes into play.
Click on Accounts and enter Mr. Punk’s name into the search box above:


09.png

The result is a tree diagram showing the group memberships, and it is easy to see where the permission is coming from.

10.png

If you click on a random icon, you will see more details—give it a try.
You can also export the graphic as a picture.
On the right side, you will find AD attributes:

11.png

Now it gets really convenient: you can edit any record right from here:

12en.png

Oh yes, I don’t trust vegetarians!

By the way, this box is mandatory on any change, as proper change management requires notes to be set.

13.png

And while we’re at it, right-click on an account:

14.png

Let’s walk from AD to file permissions. It’s only a short walk, I promise.
Click Show access rights to resources as seen above.

Now we need to select a file server:

15.png

On the right, we see the permissions in detail:

16.png

We ship ARM with a second GUI in addition to the client—a web interface accessible from anywhere, where you find tools for other tasks.

10.jpg

Typical risks are ready for your review out of the box. Just click on Analyze/Risk Assessment Dashboard. I know you want to do it.

You’ll find some interesting information, like inactive accounts:

18.png

Permanent passwords:

19.png

Or everybody’s darling, the popular “Everyone” permission on folders:

20.png

One does not simply “Minimize Risks,” but give it a try:

21.png

You can initiate changes directly from here, even in bulk.

By the way, any change made via ARM will be automatically logged.
The logbook is at the top of the local client, and we can generate and export reports:

22.png

You may have seen this above already, but you can find more predefined reports directly on the Start dashboard:

23.png

Let’s address one or two specific topics.

Since Server 2016, there is a new feature available called temporary group membership.
It can be quite useful; for example, in the case of an employee working in a project team who requires access to specific elements for the duration of the project. That additional authorization will expire automatically after whatever time has been set.

Practical, isn’t it?

But also consider this: someone might seize the opportunity and grant themselves temporary access to a resource, knowing that the membership change will disappear again on its own, which makes the whole process difficult, if not impossible, to trace after the fact.

But not anymore! Here we go:

24.png

If you hover over this box here…

25.png

…you will find objects on the right side:

26.png

For this scenario, these two guys here might be interesting:

27.png

Unfortunately, in my lab, there’s nothing to see right now, so let’s move on.

ARM allows routine tasks to be performed right from the UI; for example, creating new users or groups, assigning or removing permissions, and much more.
This becomes even more interesting when templates, or profiles, are introduced.

Let’s change into the web client. Click the cogwheel on top, then choose Department Profiles:

28.png

At the right side, click Create New.

29.png

The profile needs a shiny name:

30en.png

Always make sure people who operate microwaves receive proper training. But that’s a different story.

More buttons on the left side; I will save it for now:

31.png

Starting now, you can assign new hires to these profiles, and everything else is taken care of by the tool, like assigning group memberships or setting AD attributes.

Of course, these profiles are also baselines, and there is a predefined report available showing any deviations from the standard. Just click Analysis and User Accounts.

32.png

Select a profile and off you go:

33.png

Elyne is compliant; congratulations. But that’s hardly surprising, as she is the only employee in Marketing:

34.png

These are just a few features of ARM. Other interesting topics would be the integration of different sources, or scripts for more complex automation. This is food for future postings.

Have fun exploring.

Read more
3 1 1,424
Product Manager
Product Manager

Woes of Flow

A poem for Joe

It uncovers source and destination

without hesitation.

Both port and address

to troubleshoot they will clearly assess.

Beware the bytes and packets

bundled in quintuplet jackets,

for they are accompanied by a wild hog

that will drown your network in a bog.

The hero boldly proclaims thrice,

sampling is not sacrifice!

He brings data to fight

but progress is slow in this plight.

Mav Turner

As network operators, one of the most common—and important—troubleshooting tasks revolves around tracking down bandwidth hogs consuming capacity in our network infrastructure. We have a wealth of data at our fingertips to accomplish this, but it’s sometimes challenging to reconcile into a clear picture.

Troubleshooting high utilization usually begins with an alert for exceeding a threshold. In the Orion Platform’s alerting facility, there are several conditions we can set up to identify these thresholds for action. The classic—and simple—approach is to set a threshold for utilization defined as a percentage of the available capacity. The Orion Platform also supports baselining utilization in a trailing window and setting adaptive thresholds. Next, you need to investigate to determine what’s driving utilization and decide what action to take.

Usually, the culprit is a particular application generating an unusual level of traffic. We can get some insights into application traffic volumes from a NetFlow analyzer tool like NetFlow Traffic Analyzer.

So, why don’t the volume measurements match exactly from these two sources of data? Aren’t interface utilization values the same as traffic volume data from NetFlow?

Let’s review the metrics we’re working with, and how this data comes to us.

Interface capacity—the rate at which we can move data through an interface—is modeled as an object in SNMP, and we pick that up from each interface as part of the discovery and import process into Network Performance Monitor network monitoring software. It can be overridden manually; some agents don’t populate that object in SNMP correctly.

Interface utilization is calculated from the difference in total data sent and received between polls, divided by the time interval between polls. The chipset provides a count of octets transmitted or received through the interface, and this value is exposed through SNMP. The Orion Platform polls it, then normalizes it to the rate units in which the interface speed is expressed, usually bits per second.
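To make that arithmetic concrete, here's a small sketch with purely hypothetical numbers (a 1 Gbps interface polled every five minutes); it mirrors the calculation described above but ignores real-world details like counter wraps and separate in/out directions:

# Hypothetical counter readings from two consecutive polls of an octet counter.
$PreviousOctets    = 1250000000
$CurrentOctets     = 1625000000
$IntervalSeconds   = 300          # five-minute polling interval
$InterfaceSpeedBps = 1000000000   # 1 Gbps interface

# Delta octets converted to bits, averaged over the interval, then expressed as a percentage of capacity.
$AverageRateBps = (($CurrentOctets - $PreviousOctets) * 8) / $IntervalSeconds
$UtilizationPct = ($AverageRateBps / $InterfaceSpeedBps) * 100
'{0:N0} bps average, {1:N2}% utilization over the interval' -f $AverageRateBps, $UtilizationPct

In this made-up example, the interface moved 375,000,000 octets in 300 seconds, which works out to an average of 10 Mbps, or 1% utilization.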

Picture1.png

The metrics reported by SNMP about data received or sent through the interface include all traffic—layer 2 traffic that isn't propagated beyond a router, as well as application traffic that is routed. Some of the data that flows through the interface isn't application traffic. Examples include Address Resolution Protocol traffic, some link-layer discovery protocols, some link-layer authentication protocols, some encapsulation protocols, some routing protocols, and some control/signaling protocols.

For a breakdown of application traffic, we look to flow technologies like NetFlow. Flow export and flow sampling technologies are normalized into a common flow record, which is populated with network and transport layer data. Basic NetFlow records include ICMP traffic, as well as TCP and UDP traffic. While it’s possible on some platforms to enable an extended template that includes metrics on layer 2 protocols, this is not the default behavior for NetFlow, or any of the other flow export protocols.

Picture2.png

The sFlow protocol takes samples from layer 2 frames, and forwards those. So, while it’s possible to parse out layer 2 protocols from sFlow sample packets, we generally normalize sFlow along with the flow export protocols to capture ICMP, TCP, and UDP traffic, and discard the layer 2 headers.

When we work with flow data, we’re focusing on the traffic that is generally most variable and represents the applications that most often drive that high utilization that we’re investigating. But you can see that in terms of the volumes represented, flow technologies are examining only a subset of the total utilization we see through SNMP polled values.

Picture3.png

An additional consideration is timing. SNMP polling and NetFlow exports work on independent schedules and are not synchronized by design. Therefore, we may poll using SNMP every five minutes and average the rate of bandwidth utilization over that entire period. In the meantime, we may have NetFlow exports from our devices configured to send every minute, or we may be using sFlow and continuously receiving samples. Looking at the same one-minute period, we may see very different values at a particular interval for interface utilization and for the application traffic that is likely the main driver of our high utilization.

Picture4.png

If we’re using sFlow exclusively, our accuracy can be mathematically quantified. The accuracy of randomly sampled data—sFlow, or sampled NetFlow—depends solely on the number of samples arriving over a specific interval. For example, a sample arrival rate of ~1/sec for a 10G interface running at 35% utilization and sampling at 1:10000 yields an accuracy of +/-3.91% for one minute at a 98% confidence interval. That accuracy increases as utilization grows or over time as we receive a larger volume of samples. You can explore this in more detail using the sFlow Traffic Characterization Worksheet, available here: https://thwack.solarwinds.com/docs/DOC-203350

So, what’s the best way to think about the relationship between utilization and flow-reported application traffic?

  • Utilization is my leading indicator for interface capacity. This is the trigger for investigating bandwidth hogs.
  • Generally, utilization will alert me when there’s sustained traffic over my polling interval.
  • Application traffic volumes are almost always the driver for high utilization.
  • I should expect that the utilization metric and the application flow metrics will never be identical. The longer the time period, the closer they will track.
  • SNMP-based interface utilization provides the tools to answer the questions:
    • What is the capacity of the interface?
    • How much traffic is being sent or received over an interface?
    • How much of the capacity is being used?
  • Flow data provides the tools to answer the questions:
    • What application or applications?
    • How much, over what interval?
    • Where’s it coming from?
    • Where is it going?
    • What’s the trend over time?
    • How does this traffic compare to other applications?
    • How broadly am I seeing this application traffic in my network?

Where can I learn more about flow and utilization?

An Overview of Flow Technologies

https://www.youtube.com/watch?v=HJhQaMN1ddo

Visibility in the Data Center

https://thwack.solarwinds.com/community/thwackcamp-2018/visibility-in-the-data-center

Calculate interface bandwidth utilization

https://support.solarwinds.com/Success_Center/Network_Performance_Monitor_(NPM)/Knowledgebase_Articl...

sFlow Traffic Characterization Worksheet

https://thwack.solarwinds.com/docs/DOC-203350

Read more
4 4 1,426
Product Manager
Product Manager

Choosing the right monitoring tool can be difficult. You have fires to put out, time is limited, and your allocated budget may rival that of a first grader's allowance. When budgets are tight, there's nothing better than free, and many of you may lean on open-source solutions. These tools usually have no price tag and are essentially "free," but we have a saying here at SolarWinds®... "Is it free like a puppy, or free like a beer?"

While there isn't an actual purchase cost with open-source software, the caveat is that you usually need to put extensive work into getting it up and running. What if you had an alternative? A monitoring solution already purpose-built for you, one that is intuitive and helps cover the essentials. I'd like to introduce you to SolarWinds ipMonitor® Free Edition. The free edition of ipMonitor offers the same functionality as the paid software and supports up to 50 monitors.

ipMonitor is a comprehensive monitoring solution for your network devices, servers, and applications in a consolidated view. It is streamlined for simple agentless monitoring of availability, status, and performance metrics in a lightweight package that can be installed almost anywhere.

Perfect for even the smallest satellite office, ipMonitor sets up in minutes, uses minimal resources, and is completely self-contained, so there is no need to install or maintain a separate web front end or database.

Use and customize built-in dashboards to organize the critical data in your environment. Easily track response time, hardware health, or bandwidth of your firewalls, routers, and switches. Monitor servers for CPU, memory, drive space, and even critical services.

Free_Edition_Dashboard.png

(Click image to enlarge)

ipMonitor_Switch_Details.png

(Click image to enlarge)

Server_Details.png

(Click image to enlarge)

Drill down to investigate in more granular detail and view historical statistics. Click a chart to instantly generate an automated report to share, print, or save.

Leverage built-in service monitors or assign port checks.

Pull performance counters or simulate user experience through built-in wizards.

Switch_Interface_Stats.png

(Click image to enlarge)

Interface_Report.png

(Click image to enlarge)

pastedImage_3.png

(Click image to enlarge)

  Take advantage of simplified NOC views to quickly pinpoint areas of concern.

NOC_View.png

(Click image to enlarge)

There is a ton of power packed into such a small package, and best of all, it's FREE! Download it for yourself. Check it out here: ipMonitor Free Edition | SolarWinds

Want to learn more? Check out the upcoming webinar: https://launch.solarwinds.com/essential-monitoring-with-ipmonitor-re-broadcast.html

Share feedback or see how others are leveraging ipMonitor in the ipMonitor forum on THWACK.


Need to expand beyond the free edition? ipMonitor offers the ability to scale to help you stay ahead of the next crisis without emptying the pocketbook. Whether you run a small business or need dedicated monitoring for a particular project fast, ipMonitor is designed to simplify the day-to-day.

Check out the ipMonitor documentation in the SolarWinds Success Center

Read more
2 2 1,006
Product Manager
Product Manager

SolarWinds® Access Rights Manager (ARM) v9.1 is now available on the customer portal!  For a broad overview of this release, the release notes are a great place to start. 

Feature Summary

View and Manage Azure AD Accounts with ARM

Create Azure AD accounts with ARM

Identify shared directories and files on OneDrive

Create a report about directories and files shared on OneDrive

Identify users assigned to a transaction code in SAP R/3

Identify multiple authorizations for transaction codes in SAP R/3

Identify critical basic permissions in SAP R/3

Conclusion

Feature Summary

The primary changes you will see in this new release are designed to extend support for your critical applications and simplify integration with other systems and business processes, with an explicit focus on saving you time on repetitive tasks.

1.    Rebranded interface. The legacy 8MAN branding has been removed, and the UI now looks similar to other SolarWinds products. This is a small change, but it's the first step in making ARM an important part of the SolarWinds security portfolio.

2.    Microsoft Azure Active Directory.  SolarWinds ARM now provides the ability to see and change permissions within Azure Active Directory.  By extending ARM to Azure-based Active Directory deployments, organizations who are directly leveraging Azure or who have hybrid environments can now utilize ARM to get better visibility and control over both. 

3.    Microsoft OneDrive.  SolarWinds ARM has been extended to include permissions visibility and change for Microsoft OneDrive, complementing the existing access rights permission visibility with Active Directory, Exchange, and file servers. Gain visibility into key areas, such as which files an employee has shared externally, and who has shared what files and directories internally with which employees.

4.    SAP R/3.  With this release, SolarWinds ARM introduces support for SAP R/3, allowing you to search for security-critical transaction codes, find authorization paths, and recognize multiple authorizations.  See which Active Directory users are assigned to each SAP account through the Access Rights Manager interface.

5.    UI/UX Improvements. The ARM UI now has a more modern look. The loading indicators have been improved, and we've added user pictures next to the comment boxes. The user experience was also improved by introducing tables with persistence in areas such as the resource view: no more need to re-apply your changes to column order or size; they stay with you after you set them. Also, Analyze & Act scenarios can now be selected much more easily thanks to the new grouping and filtering functionality. We heard you and made these improvements to make your job easier.

6.    Microsoft SQL Server Express Integration. To make installation for smaller environments easier, ARM now supports the automatic installation and configuration of Microsoft SQL Server Express directly from the ARM configuration page. Use this option out of the box, or use full Microsoft SQL Server instead if you need a higher-performance database.

7.    ARM Sync!  Most companies have several systems in place to manage users and their data, including Active Directory, HR systems, and ERP systems. Without proper synchronization processes, the systems may have an inconsistent view of a user's data, leaving administrators and HR employees struggling to identify the correct set of data. ARM Sync! helps automate the data exchange between third-party systems and a system administered with ARM. With ARM Sync!, you can automatically create, deactivate, or delete user accounts.

8.    Recurring Task Scripting. Scripts are often used by administrators to ease the execution of recurring or repetitive tasks. ARM now allows you to safely make a script available to users via the cockpit so they can execute an action immediately, without an approval workflow. These scripts can be executed before or after user provisioning processes, making them flexible and easy to apply.

9.    Create SharePoint Permission Groups. Industry best practice for SharePoint and file servers is not to grant permissions directly to users, but instead via memberships in resource groups. With the Group Wizard for SharePoint, ARM relieves you of the many manual steps needed to do this. ARM now lets you assign authorizations through a simple drag-and-drop procedure and automatically creates authorization groups and group memberships for both SharePoint Online and SharePoint on-premises.

The SolarWinds product team is excited to make this new set of features available to you.  We hope you enjoy them.  Of course, please be sure to create new feature requests for any additional functionality you would like to see with ARM in general.

To help get you going quickly with this new version, below is a quick walk-through of the new Azure Active Directory feature, SharePoint, and OneDrive.

View and Manage Azure AD Accounts with ARM

ARM helps you to view, manage, and get control of your accounts in Azure AD and on-premises AD through a common interface.

1. Use the search box to find an Azure AD (AAD) account.  Use the search configuration (arrow) to ensure that Azure AD accounts are included in your search results.

pastedImage_8.png

2. Click on the desired entry. The icon with the cloud symbolizes an AAD account.

pastedImage_10.png

3. ARM focuses on the account. After right-clicking, select the appropriate action you want to perform.

Create Azure AD accounts with ARM

Create new Azure AD accounts or groups based on templates, and ensure the correct attributes and data are set.

1. On the start page, click "Create new user or group". 

pastedImage_18.png

2. Click on the desired template for a new user or new group in the AAD.

pastedImage_21.png

3. Enter the required information.

pastedImage_24.png

The information requested by the template can be fully customized.

4. Specify the logon information used to create the account in the AAD.

5. Enter a comment.

6. Start the execution.

Identify shared directories and files on OneDrive

OneDrive is an easy tool to let your employees share resources with each other and/or external users. ARM makes it easy for you to check which files an employee has shared externally, and who has shared what files and directories internally with which employees.

Option A: Browse through the OneDrive structure.

pastedImage_25.png

1. Select the resource view.

2. Expand OneDrive.

3. Browse the OneDrive structure.

4. ARM displays the permissions.

5. ARM shows you the authorized users.

"External" is used to identify files or folders shared externally. OneDrive creates a link (hence the symbol used). Anyone who owns the link can read or change it.

"Internal" identifies files or folders that are shared within the organization.

If a file or folder is shared with a specific user (email address) within the organization, this user is given permission (not a link).

Option B: Search for shared resources on OneDrive.

pastedImage_34.png

1. Search for "Internal" or "External" in …

2. OneDrive Accounts. 

3. This opens a scenario displaying all files and folders shared internally or externally via OneDrive.

Create a report about directories and files shared on OneDrive

Sometimes a report is easier to share, or you just want to follow up later on something you found. ARM allows you to easily generate a report about the files and folders your employees share on OneDrive.

pastedImage_38.png

1. Select the resource view.

2. Expand OneDrive and select a resource.

3. Select "Who has access where?".

pastedImage_42.png

4. The previously selected resource is preset.

5. Optional: Delete the preselected resources.

6. Use drag and drop to add resources.

7. Start report creation.

Identify users assigned to a transaction code in SAP R/3

Transaction codes are important entities of SAP permissions. ARM helps you identify which users are assigned to a specific transaction code, either directly or indirectly via membership in roles.

1. Use the search to find the transaction code you are looking for.

pastedImage_52.png

2. Click on the search result.

pastedImage_53.png

3. ARM automatically expands the tree view of the permission structure and focuses on the transaction code you are looking for.

4. ARM displays all permissions.

5. ARM displays all SAP users that have been assigned the transaction code.

Identify multiple authorizations for transaction codes in SAP R/3

As with all permissions, there is often more than one way a transaction code has been assigned to a user. ARM resolves all of these authorization paths and clearly visualizes them, leaving no room for ambiguity.

1. Use the search to find the transaction code you are looking for.

pastedImage_63.png

2. Click on the search result.

pastedImage_64.png

3. ARM automatically expands the tree view of the authorization structure and focuses on the transaction code you are searching for.

4. In the user list, ARM shows you how many authorization paths (arrows) have been set for the transaction code. Click on the user.

5. ARM shows you the authorization paths.

Identify critical basic permissions in SAP R/3

Use ARM to check regularly for critical basic authorizations following the principle of least privilege, and reduce the risk of data leakage.

1. Use the search box to find and select the critical basic authorization you are looking for. ARM opens the SAP authorization structure and focuses on the entry you are looking for.

pastedImage_68.png

2. Browse through the subordinate structure to analyze the use of the critical basic authorization.

pastedImage_69.png

Conclusion

That is all I have for now on this release.  I hope that this summary gives you a good understanding of the new features and how they can help you more effectively manage the permissions of your Azure AD, SharePoint, OneDrive, and SAP R/3 applications. 

I look forward to hearing your feedback once you have this new release up and running in your environment!

If you are reading this and not already using SolarWinds Access Rights Manager, we encourage you to check out the free download.  It’s free. It’s easy.  Give it a shot.

Read more
3 1 2,165
Level 12

The Ghosts of Config Past, Present, and Future (Well, Sort Of)

The scene is set: the curtains open on a person in bed trying to get a good night's sleep during a dark and windy night. The hair on the back of their neck is standing on end, and with one big gust their worst fears come true! In bursts a flurry of emails demanding proof of configs of old.

Okay, okay, while I'm no Hemingway, I can tell you that we've all experienced the nightmare of being visited by configs of old. Being asked to prove an older configuration was in compliance is a real pain, and the thought of doing it manually makes your skin crawl. Enter SolarWinds® Network Configuration Manager (NCM) network configuration management software and the Favorite config.

Being a "favorite" is always a good thing, and the same can be said for Favorite configs inside Network Configuration Manager. Just as any favorite gets special handling, Favorite configs are granted special privileges within compliance policies. Compliance policies always evaluate the most recent version of a configuration file; if you're trying to prove compliance of an older file, you need to tell NCM to use that file instead. You do that by setting the config as a Favorite.

If you set one config from each node as a Favorite, those Favorites will always be the ones evaluated going forward. This means that you, as the user, can prove those configs' compliance at any point in the future without any extraordinary effort. The best part is that getting this set up can be fairly easy if you have established rules and policies.

Simply mark a config as Favorite either through the UI or, for the savvy user, through the SDK. This is done by navigating to the Configuration Management page and expanding the list of configs nested under a node.

Setting config as favorite.png
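For the SDK route, here's a rough sketch using the SwisPowerShell module from the OrionSDK. The cmdlets shown (Connect-Swis, Get-SwisData, Set-SwisObject) are real, but treat the query columns and especially the flag used to mark a config as Favorite as assumptions to verify against your own NCM schema before running anything like this:

# Assumes the SwisPowerShell module (OrionSDK) is installed; hostname, credentials, and node ID are placeholders.
$Swis = Connect-Swis -Hostname 'orion.example.local' -Credential (Get-Credential)
$Query = @"
SELECT TOP 1 Uri, NodeID, ConfigType, DownloadTime
FROM Cirrus.ConfigArchive
WHERE NodeID = @nodeId AND ConfigType = 'Running'
ORDER BY DownloadTime DESC
"@
$Config = Get-SwisData $Swis $Query @{ nodeId = 'your-ncm-node-id-here' }
# Hypothetical: set the property that flags the config as a Favorite (verify the property name in your version first).
Set-SwisObject $Swis -Uri $Config.Uri -Properties @{ Baseline = $true }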

Once this is done, you need to make sure to set up or modify your Policies to use this config type.

Assigning a Policy to Compliance Report.png

After the policies are set, just add these policies to a Compliance Report. 

Assigning a Policy to Compliance Report.png

After the Compliance Report is set up, update the report and click on it to see the output. You can verify that this is evaluating the correct config by drilling into any violation and clicking the “View Config” link.

View Config in Compliance Report.png

If everything is set up correctly, you will see the details for the Favorite config. 

Verify Config.png

And there you have it! You’ll no longer be pressed to manually evaluate older configs for audit review or documentation. If you find this useful, have any comments, or would like to see how this can be done through the SDK, please let me know below!

Read more
1 0 828
Product Manager
Product Manager

The team continues to hammer away on enhanced and new application template content for Server & Application Monitor. The list below adds to what has been discussed in recent posts, which you can find here, here, and here.

In this update, we will walk through the latest updates, including:

  • Version 2 of Office 365 monitoring – We've reorganized the templates a bit but, more importantly, fixed an issue some customers were experiencing where components would randomly go into an unknown status
  • Citrix XenServer – Net-new template support
  • Citrix PVS Accelerator for XenServer – Net-new template support
  • Oracle RAC (Real Application Clusters) – Net-new template support

As always, please let us know if you have any comments about these templates or requests to add to our list for new template creation. 

The info provided in this post is relatively high-level. Click on the links to see the complete detail for each new or updated template.

With that, let’s jump right in!

Office 365 Exchange Enhancements:

As mentioned above, besides reorganizing the templates a bit, the main update here is the fix for the issue some folks had reported where components would randomly go unknown. The issue was due to the fact that Microsoft has a "Global Throttling Policy" for O365, which limits a single client to a maximum of three simultaneous connections.

To overcome this concurrency issue, we implemented a locking mechanism that restricts the scripts establishing a connection with Office 365 to three at a time.
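Purely as an illustration of the general technique (this is not the template's actual code; the semaphore name and timeout are made up), a named, machine-wide semaphore with three slots is one way concurrent scripts can take turns so that no more than three ever hold an Office 365 connection at once:

# Hypothetical illustration only: three slots shared across all scripts on the machine.
$Semaphore = New-Object System.Threading.Semaphore(3, 3, 'Global\O365TemplateConnectionSlots')
if ($Semaphore.WaitOne([TimeSpan]::FromSeconds(60)))
{
    try
    {
        # A connection to Office 365 would be opened and the component checks run here.
        Write-Output 'Slot acquired; safe to connect.'
    }
    finally
    {
        # Always release the slot so the other component scripts can proceed.
        [void]$Semaphore.Release()
    }
}
else
{
    Write-Output 'Timed out waiting for a free connection slot.'
}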

pastedImage_0.png

Oracle RAC:

Next up, following on from the previous Oracle template updates, we are also releasing a new template for Oracle RAC, which you can download and read more about at https://thwack.solarwinds.com/docs/DOC-203744

The list of metrics available for monitoring includes:

  • Average MTS response time
  • Average MTS wait time
  • Sort ratio
  • MTS UGA memory
  • Database file I/O reads
  • User locks
  • Locked users
  • Global cache service utilization
  • Global cache block lost
  • Global cache average block receive time
  • Long queries elapsed time
  • Redo logs contentions
  • Active users
  • Buffer cache hit ratio
  • Dictionary cache hit ratio
  • Average enqueue timeouts
  • Global cache block access latency
  • Nodes down
  • Long queries count
  • Database file I/O write operation
  • Global cache corrupt blocks

The thing to keep in mind about this template is that, just like our other Oracle templates, it requires some prerequisites to be set up on the Orion server and/or poller for it to work.

pastedImage_1.png

Citrix XenServer:

The third template we are releasing is for XenServer, which you can download and read more about here – https://thwack.solarwinds.com/docs/DOC-203745

The template monitors the host as well as the guest VMs running on that host, including the following metrics:

  • Host - Free Memory
  • Host - Average CPU
  • Host - Control Domain Load
  • Host - Reclaimed Memory
  • Host - Potential Reclaimed Memory
  • Host - Total Memory
  • Host - Total NIC Receive
  • Host - Total NIC Send
  • Host - Agent Memory Allocation
  • Host - Agent Memory Usage
  • Host - Agent Memory Free
  • Host - Agent Memory Live
  • Host - Physical Interface Receive
  • Host - Physical Interface Sent
  • Host - Physical Interface Receive Error
  • Host - Physical Interface Send Error
  • Host - Storage Repository Cache Size
  • Host - Storage Repository Cache Hits
  • Host - Storage Repository Cache Misses
  • Host - Storage Repository Inflight Requests
  • Host - Storage Repository Read Throughput
  • Host - Storage Repository Write Throughput
  • Host - Storage Repository Total Throughput
  • Host - Storage Repository Write IOPS
  • Host - Storage Repository Read IOPS
  • Host - Storage Repository Total IOPS
  • Host - Storage Repository I/O Wait
  • Host - Storage Repository Read Latency
  • Host - Storage Repository Write Latency
  • Host - Storage Repository Total Latency
  • Host - CPU C State
  • Host - CPU P State
  • Host - CPU Utilization
  • Host - HA Statefile Latency
  • Host - Tapdisks_in_low_memory_mode
  • Host - Storage Repository Write
  • Host - Storage Repository Read
  • Host - Xapi Open FDS
  • Host - Pool Task Count
  • Host - Pool Session Count
  • VM - CPU Utilization
  • VM - Total Memory
  • VM - Memory Target
  • VM - Free Memory
  • VM - vCPUs Full Run
  • VM - vCPUs Full Contention
  • VM - vCPUs Concurrency Hazard
  • VM - vCPUs Idle
  • VM - vCPUs Partial Run
  • VM - vCPUs Partial Contention
  • VM - Disk Write
  • VM - Disk Read
  • VM - Disk Write Latency
  • VM - Disk Read Latency
  • VM - Disk Read IOPs
  • VM - Disk Write IOPs
  • VM - Disk Total IOPs
  • VM - Disk IO Wait
  • VM - Disk Inflight Requests
  • VM - Disk IO Throughput Total
  • VM - Disk IO Throughput Write
  • VM - Disk IO Throughput Read
  • VM - VIF Receive
  • VM - VIF Send
  • VM - VIF Receive Errors
  • VM - VIF Send Errors

pastedImage_2.png

pastedImage_4.png

pastedImage_6.png

Citrix PVS Accelerator for XenServer

Last but not least, we added a net-new template for Citrix PVS Accelerator for XenServer, which you can read more about and download here - https://thwack.solarwinds.com/docs/DOC-203773

It includes the following metrics available for collection:

  • PVS - Accelerator Eviction Rate
  • PVS - Accelerator Hit Rate
  • PVS - Accelerator Miss Rate
  • PVS - Accelerator Traffic Clients Sent
  • PVS - Accelerator Traffic Servers Sent
  • PVS - Accelerator Read Rate
  • PVS - Accelerator Saved Network Traffic
  • PVS - Accelerator Space Utilization

That’s it for this round of content updates! We have more in process and will post to let you all know as soon as they are ready. As always, you can suggest new templates or features for SAM by creating a Feature Request.

Read more
0 2 1,680
Level 12

Network Configuration Manager (NCM) v7.9 is available today on the customer portal! For a broad overview of this release, the release notes are a great place to start. This is a particularly pleasing release as we are delivering a feature that has received over 470 votes: Multi-Device Baselines.

What are Configuration Baselines?

Baselines are often associated with the act of measuring and rating the performance of a given object (an interface, device, or similar) in real time. In configuration management terms, baselines provide a framework for change control and management: a configuration baseline evaluates the content of a config and indicates whether that content is aligned to the baseline or not.

Because configuration changes over time are more difficult to observe directly and more complex to manage, baselines play a key role in monitoring and preventing unwanted changes. I find this definition of baselines from Techopedia interesting and accurate:

“It is the center of an effective configuration management program whose purpose is to give a definite basis for change control in a project by controlling various configuration items like work, features, product performance and other measurable configuration.”

Manually monitoring configs may be possible for a small number of nodes, but it is neither practical nor reasonable to scale that kind of manual framework. Actively monitoring each device's config against a baseline makes validating consistency and alignment with corporate or regulatory requirements reliable and achievable.

Baselines

The great news is that NCM already helps mitigate the challenges of monitoring configuration drift by providing config change reports, Real Time Change Detection, rules and policies that monitor configurations against a set of user-defined conditions, and one-to-one configuration baselining. What we implemented in the latest version of NCM extends and improves configuration baselines to include:

  1. Creating new baseline(s) through
    1. Promoting an existing config to be a baseline, or
    2. Creating a new baseline by copy/paste or loading a file
  2. Ignoring unnecessary configuration lines (or lines unique to each device)
  3. Applying baseline(s) to a single node or multiple nodes

<New!> Baseline Management

In this release, there is a new list view of all baselines that have been created or migrated from an upgrade. From this new page, users can create new baselines, edit existing ones, apply or remove nodes for a given baseline, enable or disable a baseline, update a baseline's status, or delete a baseline.

Screen Shot 2018-12-12 at 11.01.47 AM.png

<New!> Updated Diff Viewer

A major improvement in this release is the implementation of a new diff viewer for baselines. This new diff viewer will collapse lines that are unchanged, highlight ignored lines as gray, and mark all changes as yellow.

Screen Shot 2018-12-12 at 11.17.56 AM.png

More Ways to Create a Baseline

The process of creating baselines should be easy—take an existing config and simply apply it against a set of nodes, right? In NCM, you can do just that by promoting an existing configuration, loading a config from file, or copying and pasting.

Promoting a config is now nested under the node and in the baseline column:

Screen Shot 2018-12-05 at 4.34.53 PM.png

Creating a new baseline can be done through the new Baseline Management Page:

Screenshot Cropped.png

No matter the steps to create the baseline, each will ultimately lead to applying the baseline to the nodes and configs.

Screen Shot 2018-12-12 at 11.08.40 AM.png

Ignoring Extraneous Config Lines

One of the key challenges with baselines is being able to get an accurate assessment of the config and not having false positives for config lines that are unique to a node or not relevant to the baseline. In NCM v7.9, we have introduced an ignore line capability that allows users to click through lines that are not relevant to the baseline to aid in reducing false positives. To read more on this, check out this link.

Screen Shot 2018-12-12 at 11.16.00 AM.png

Baseline Status Indicators

To monitor whether or not a node (config) is in compliance with a baseline or baselines, there needs to be a visual and written indication. Baseline Management, Configuration Management, and ‘Baseline vs. Config Conflicts’ report all now have visual and written indicators. On the Configuration Management page, there is a new baseline column that contains the visual and written indication of whether or not that node is in alignment with the baselines applied.

Screen Shot 2018-12-12 at 9.52.34 AM.png

For each status, there is a hover that provides a list of all the baselines and their associated status for that node.

Screen Shot 2018-12-12 at 10.07.58 AM.png

The new Baseline Management view provides a complete list view of all baselines that have been created. This view is meant to show the alignment of all the nodes that are applied against a single baseline.

Screen Shot 2018-12-12 at 11.01.47 AM.png

Each baseline can be expanded to show the status for different nodes to which it is applied (similar to the hover for Configuration Management). Each one of the statuses is clickable and will load the diff of that baseline vs. the config selected.

Screen Shot 2018-12-12 at 10.12.21 AM.png

Lastly, the “Baseline vs. Config Conflicts” report also inherits the visual indicators and now shows the status of a node to one or many baselines.

Screen Shot 2018-11-28 at 1.57.48 PM.png

This is a major step forward for baselines and the monitoring of configuration drift within NCM. Of course, please be sure to create new feature requests for any additional functionality you would like to see with baselines or NCM in general.

Helpful Links:

NCM v7.9 Releases Notes

NCM Support Documentation

Network Configuration Management Software

Read more
3 0 755
Product Manager
Product Manager

Network Performance Monitor (NPM) 12.4 and the Orion Platform 2018.4 are now generally available in your customer portal. For those of you subscribing to the updates in What We're Working on for NPM (Updated June 1st, 2018), you may have noticed a line item called "Centralized Upgrades." This update gives you the first chance to experience Centralized Upgrades in your environment.

Great news: this upgrade is going to be easier than ever!
pastedImage_0.png

Planning for Your Upgrade to 2018.4

Read the release notes and minimum system requirements prior to installation, as you may be required to migrate to new server or database infrastructure. For quick reference, I have provided a consolidated list of release notes below.

Note: Customers running Windows Server 2012, 2012 R2, and SQL 2012 will be unable to upgrade to these latest releases prior to migrating to a newer Windows operating system or SQL database version. Check for the recommended Microsoft upgrade path through the upgrade center.

See more information about why these infrastructures are deprecated here: Preparing Your Upgrade to Orion Platform 2018.4 and Beyond - Deprecation & Other Important Items

SolarWinds strongly recommends that you update to Windows Server 2016 or higher and SQL Server 2016 or higher at your earliest convenience. 

Refresh your upgrade knowledge with the following upgrade planning references.

Always back up your database and if possible take a snapshot of your Orion environment.

Start Your Upgrade on the Main Polling Engine

Download any one of the latest release installers to your main polling engine.

pastedImage_12.png

For the screenshots that follow I'm upgrading my Orion deployment with the following setup:

  • Main Polling Engine is installed with Virtualization Manager (VMAN) 8.3 and will be upgraded to VMAN 8.3.1
    • Utilizes a SQL 2016 database
  • Three scalability engines
    • One Free Additional Polling Engine for VMAN on Windows 2012
    • One Free Additional Polling Engine for VMAN on Windows 2016
    • One HA Backup on Windows 2016

My first screen confirms my upgrade path to go from 8.3 to 8.3.1.

  • If I'm out of maintenance for a specific product, I would see indicators on this screen first. Being out of active maintenance will prevent you from upgrading this installation to the latest version, so please pay attention to the messaging here.
  • The SolarWinds installer will upgrade all of the products on this server to the versions of product that are compatible with this version of the Orion Platform for optimal stability. This may mean that you'll be upgrading more than just one product.
  • When in doubt, feel free to run the installer to see the upgrade path provided, so you can plan for your downtime. Cancelling out at the pre-flight check stage will give you all the information needed to plan ahead, without surprises and without changes to your environment.  This information can also be used for your change request before scheduling downtime for your organization.

pastedImage_19.png

The second step will run pre-flight checks to see if anything would prevent my upgrade from being successful on the main polling engine.

  • If there are no blocking, warning, or informational pre-flight checks, the installer proceeds straight to the next step: accepting the EULA.
    • My main polling engine server and DB meet all infrastructure system requirements for the 2018.4 Orion Platform, so I am not shown any blocking pre-flight checks at this stage.
  • Pre-flight checks can block you from moving forward with your installation.
    • You may need to confirm whether you meet new infrastructure requirements (e.g., an NTA 4.2.3 -> 4.4 upgrade) to proceed. Blockers will prevent you from successfully installing or upgrading, so the installer will not allow you to proceed until those issues have been resolved.
    • Warning pre-flight checks give you important information that could affect the functionality of your installation after upgrade, but will not prevent you from successfully installing or upgrading.
    • Informational pre-flight checks give you helpful troubleshooting information for "what if" scenarios, in case we don't have enough information to determine whether this would be an active issue for your installation.

The online installer will start to download all installers needed from the internet.

  • SolarWinds recommends that you use the online installer because it will be able to auto-update and download exactly what's needed for the installation. Not only is it more efficient, but it will save you from downloading unnecessary or outdated bits.

This screen gives you an overview of next steps. The Configuration wizard will launch next, to allow you to configure database settings and website settings.

In this release, all scalability engines, including Additional Polling Engines, Additional Websites, and HA Backup Servers, can still be upgraded manually in parallel using the scalability engine installer. However, if you have scalability engines, please try our centralized upgrade workflow to save yourself time.

pastedImage_0.png

Follow the configuration wizard steps to completion. If you only have a main polling engine to upgrade, your installation is now complete. Log in to your SolarWinds deployment and enjoy the new features that have been built with care for your use cases.

Centralized Upgrades of the Scalability Engines

For those customers who have chosen to scale out their environment using scalability engines, such as Additional Polling Engines, HA Backup Servers, or Additional Websites, this is the section for you.

If you kept the "Launch Orion Web Console" checkbox checked in the final step of the Configuration Wizard, the launched web browser session will navigate you directly to the Updates Available page, where you can continue with the Centralized Upgrade workflow. If you want to open a new web browser session on a different system, you can quickly navigate to where you want to go by following these steps.

Launch the web browser and log in.

pastedImage_3.png

Navigate to 'My Orion Deployment' from the Settings drop-down.

pastedImage_4.png

Click the UPDATES AVAILABLE tab. If this tab is not showing, there are no updates available for you to deploy.

pastedImage_6.png

Click Start to begin the process of connecting to your scalability engines.

pastedImage_7.png

My environment is not experiencing any issues connecting to my scalability engines.

pastedImage_8.png

Bookmark the page Connection problems during an Orion Deployment upgrade for future guidance on common "gotcha" scenarios and how to handle them.

After contact with the scalability engines has been established, pre-flight checks will be run against all of them.

pastedImage_9.png

Looking at my pre-flight checks, you can see that one server, PRODMGMT-49, has a blocker that would prevent it from being upgraded, namely that it does not meet the infrastructure requirements for this version of the Orion Platform.

pastedImage_10.png

However, my "Start Upgrade" button is enabled. That's because if at least one scalability engine is eligible for upgrade, we will allow you to proceed. Only when none of the scalability engines are eligible will this button be disabled. Pay attention to servers that have blocking pre-flight checks, as you will have to upgrade them manually or move the items they monitor to a scalability engine that has been upgraded.

Clicking "Start Upgrade" begins the centralized upgrade process, first by downloading all the necessary bits to all the scalability engines in parallel. Notice how my scalability engine that was on incompatible 2012 infrastructure is not being upgraded.

pastedImage_11.png

Grab a coffee as the rest of your installation and configuration happens silently on each of the servers being centrally upgraded.

pastedImage_12.png

pastedImage_13.png

Oh no, an error occurred. What can you do at this point?

  • Click Retry download after troubleshooting (e.g. did the scalability engine lose connectivity to the main polling engine?)
  • RDP directly into the server using the convenient RDP link that is provided

Common scenarios to investigate:

  • Is this scalability engine set up inconsistently from the other servers? For instance, you may have Engineer's Toolset on the Web installed on this server and not on the others.
  • Do some of the installed products have dependencies on .NET 3.5? Engineer's Toolset on the Web requires .NET 3.5 to be able to upgrade. Ensure that you have enabled .NET 3.5 and try again.
  • Check the Customer Success Center for more scenarios to help while troubleshooting.

pastedImage_14.png

In my case, I clicked Retry and was able to get past the issue.

pastedImage_16.png

My upgrade is complete! Congratulations on an upgrade well done.

pastedImage_20.png

Click Finish to complete your Centralized Upgrade session.

Gotchas - What to do with Unreachable Servers

If a server is unreachable rather than blocked because of incompatible infrastructure, you have an opportunity to upgrade that server manually, in parallel, while the rest of your environment is being centrally upgraded.

In the installation example captured below, if I were to run the installer on the Additional Website that is currently being upgraded by Centralized Upgrades, I would be blocked from running the installer on that server. However, for the listed unreachable Additional Website, I can run that upgrade manually in parallel with no problem.

pastedImage_26.png

If you're blocked from proceeding with a manual upgrade, you will see the following. Only after you have finished (or canceled) the Centralized Upgrade process will you be allowed to proceed with a manual upgrade that is blocked in this fashion. For these scenarios, simply navigate to My Orion Deployment and exit out of the deployment wizard flow to cancel the centralized upgrade session.

pastedImage_28.png

Manual Upgrades

Manual upgrades of your deployment are still supported. If you have only one scalability engine, Centralized Upgrades may not be the fastest way to upgrade; if you have more, it is. This release is still beneficial even if you plan to upgrade manually, because the installation and Configuration Wizard process can now be run in parallel. Existing customers have long known that there were some scenarios where you could run the Configuration Wizard in parallel across servers (e.g., servers of the same type) and some where you could not, and it took time and training to understand which were which. In this release, that limitation is lifted, and all server types can be configured in parallel.

There are times when you may need to consider falling back to manual upgrades in combination with your Centralized Upgrade. As an example, take this installation: two servers have completed, and one still has the Configuration Wizard in progress.

pastedImage_27.png

If the download, installation, or configuration is taking a long time for one of your scalability engines, and you need to see more information that is only available in the client, you may consider canceling out of the Centralized Upgrade session to resume the rest of your upgrade manually. The servers that have been upgraded thus far will remain in a good spot, so you can cancel out with confidence. Proceed with this option carefully, as you will want to ensure that you have upgraded everything by the end of your scheduled downtime.

pastedImage_32.png

Check the My Orion Deployment page to ensure that all the servers in your Orion deployment are upgraded.

pastedImage_31.png
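
If you'd like to double-check from the data side as well, a quick SWQL query in SWQL Studio or a Custom Query widget can list the polling engines your deployment knows about. This is only a minimal sketch, assuming the Orion.Engines entity and columns exposed by your Orion Platform version; verify the names in SWQL Studio before relying on it:

SELECT EngineID, ServerName, IP, ServerType FROM Orion.Engines ORDER BY ServerType, ServerName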

Support

We have all been there: despite the best intentions and all the preparation in the world, something went wrong. No worries! File a support ticket at Submit a Ticket | SolarWinds Customer Portal and start gathering diagnostics via our new web-based, centralized diagnostics.

Click the Diagnostics tab.

Select all the servers in your deployment,

pastedImage_0.png

and click "Collect Diagnostics."

pastedImage_3.png

Sit back and relax as your diagnostics are centrally gathered in preparation for your support call.

Customer Experience

Early adopters and those who have participated in our release candidates have already begun to enjoy the benefits of centralized upgrades. Check out our THWACK forums for testimonials from customers just like you about the new and improved "Easy Button" upgrade experience. Here's a link to one from one of our very own THWACK MVPs: The "Easy Button" has arrived with the December 2018 install of NAM (and other Sol... If you'd like to share your upgrades with me, I'm very interested, and we'd love to see screenshots and your feedback on this new way to upgrade your SolarWinds deployment.

More centralized upgrade success - Success with Centralized Upgrades

Read more
11 29 5,310
Level 12

IPAM 4.8 has arrived and is now generally available! You can find this latest release in your Customer Portal. In recent releases, we’ve brought you integration with VMware vRealize Automation and Orchestrator and monitoring support for Amazon Web Services (AWS) Route 53 and Azure DNS. In this release, we have extended our support (yet again) to additional platforms and bring you these goodies:

Monitoring Support for Infoblox

You asked for it, you got it! This is our #1 integration feature request on THWACK®, and I’ve spoken to many of you at tech conferences about wanting us to monitor your Infoblox DHCP and DNS environments. IPAM provides valuable resources, alerting, and reporting capabilities without having to purchase add-ons, as well as a centralized management console across heterogeneous environments.


pastedImage_5.png

Migration to Core Custom Properties
We have migrated from product-specific custom fields to the unified custom properties designed to be simple and powerful for you to use with other Orion® Platform products. Now you can add new custom properties the same way you would for other modules and use them for IPAM entities in Reports and Alerts.

pastedImage_6.png
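
As a rough illustration of using those custom properties in a SWQL-based report or Custom Query widget, the sketch below makes several assumptions you would want to verify in SWQL Studio for your IPAM version: that the IPAM.IPNode entity is available, that custom properties are reachable through a CustomProperties navigation property, and that you've created a hypothetical property named Owner.

SELECT TOP 20 I.IPAddress, I.Status, I.CustomProperties.Owner FROM IPAM.IPNode I WHERE I.CustomProperties.Owner = 'NetOps'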

Support for More Linux Versions
We have extended DHCP and DNS support to the following Linux distributions:

    • Ubuntu 14.04
    • Ubuntu 16.04
    • Debian 9.5
    • Debian 8.6 (DHCP only)

HELPFUL LINKS:

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

IPAM IP address management software

Read more
0 1 1,052
Product Manager

We’re delighted to announce the release of version 4.5 of NetFlow Traffic Analyzer (NTA)!

The latest release of SolarWinds® NetFlow Traffic Analyzer is designed to help create alerts based on application flows. In past releases, we could alert on the overall utilization of an interface and provide a view of the top talkers when the configured threshold was exceeded. In this release, you can set a threshold on the volume of a specific application in order to trigger an alert. We're making use of the Orion Platform alerting framework, so that flexibility is available to you.

You’ve outlined a small set of critical problems in multiple requests, and in this release, we’re delivering on the five most popular of these.

  • Application traffic exceeds a threshold – Alert triggered when we observe a specific application's traffic rate exceeding a user-defined threshold
  • Application traffic falls below a threshold – Alert that can provide visibility when an application “goes off the air” and stops communicating
  • Application traffic appears in the “TopN” list of applications – This alert triggers when application traffic increases suddenly relative to other applications
  • Application traffic drops from the “TopN” list of applications – Likewise, alert triggers for a sudden reduction relative to other applications
  • Flow data stops from a configured flow source – Alerts on the loss of flow instrumentation, and prompts to take action to help restore visibility

Contextual Alerting

The approach we're using to create alerts is built to guide users into a particular context—a source of flow where we see the application traffic—and then offers a simple user experience to create the alert.

To create an alert based upon any of these triggers, we must first select a source of flow data as a point of reference. We can do this in one of two ways.

We can visit the NTA Summary Page, and navigate to a particular source of flow data:

Navigate_Node.jpg

If the application of interest is in the TopN, we can expand it to see where this application is visible and select that source. That will take us to a detail page, which is already filtered by both application and source of the flow data.

We can also select our source of flow data directly in the Flow Navigator. We can build our alert based upon a node that reports flow, or upon a specific interface:

Screen Shot 2018-10-29 at 2.16.11 PM.png

Once we have a context for an alert, we can select an application. If we use the "TopN Applications" resource, we have already identified both the application and the node or interface where it's visible.

Another way to arrive at this context can make use of the Flow Navigator, where we can explicitly select the application we’re interested in:

Screen Shot 2018-10-29 at 2.33.38 PM.png

We can select either Applications or NBAR2 Applications to help describe the traffic. With the context now fully described, we can open the "Create a Flow Alert" panel and create our first alert:

Screen Shot 2018-10-29 at 2.36.42 PM.png

At the top of the panel, we'll see the source of the flow data that we'll evaluate, and a default alert name prefix. We can customize the alert name to help make searching simpler. The severity of the alert is configurable:

Screen Shot 2018-10-29 at 2.45.03 PM.png
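
Because these flow alerts live in the same Orion Platform alerting framework as everything else, a recognizable name prefix also makes them easy to find later. As a small sketch, assuming the standard Orion.AlertConfigurations entity and a hypothetical prefix of 'NTA Flow Alert' kept in the name, a SWQL query like this would list them:

SELECT AlertID, Name, Enabled FROM Orion.AlertConfigurations WHERE Name LIKE 'NTA Flow Alert%' ORDER BY Name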

For the Trigger Condition, we'll select one of the options described above. In this case, we'll select "Application Traffic exceeds Threshold," and we'll set a threshold of 50 MBps on the ingress. We'll evaluate the last five minutes of traffic; this window is configurable. This threshold will trigger when our traffic rate averages greater than 50 MBps over the five-minute time period.

Finally, we can specify one or several protocols; if we specify more than one, we'll sum the traffic volumes for all the protocols.

To create the alert, there are two options. We can select "Create Alert" immediately, and this will simply log the alert when it triggers. Or, we can check the box to open the alert in the Advanced Alert Editor and then select "Create Alert." Selecting this option will redirect us to the last step in the "Add New Alert" wizard, where we can modify the trigger actions, reset actions, or time-of-day schedule.

Screen Shot 2018-10-29 at 3.56.00 PM.png

The trigger condition is an advanced SWQL query, pre-populated with the contextual information on the source and application.

Before submitting this new alert, we'll see a message indicating whether the alert will trigger immediately.

Practical Alert Scenarios

Use the "exceeds threshold" alert for application traffic levels that average above or below the specified threshold.

Use the ">" (greater than) or "<=" (less than or equal to) operator to determine whether you alert above or below the threshold. For example:

  • To determine when backup application traffic is running out of schedule
  • To identify large file transfers in the middle of the day
  • To identify DDoS attacks, or when Port 0 traffic is present at all

Use the "<=" form of the "exceeds threshold" alert to help detect when an application server process goes offline and stops sending traffic.

  • The application service may have crashed
  • An intermediate connectivity problem (a firewall change or an outage) may have reduced traffic

Alerts for applications appearing in, or dropping out of, the TopN can be useful for detecting sudden changes in traffic volume relative to other applications. Examples include:

  • Detecting streaming or peer-to-peer file sharing applications that are transient
  • Detecting changes in the mix of applications that usually traverse an interface

You can also set up an alert for each of your NetFlow sources to help take action if the configuration is modified, or firewall rules block flow traffic.

User Experience Improvements

This release of NTA also includes a number of small but significant improvements in the user interface to help enhance scalability and improve ease of use. Several long lists are now uniformly ordered, and we’ve changed how we label certain features to be clearer in the navigation.

Additional Resources

Check out the Release Notes, download the new release on the Customer Portal, and get additional help with the upgrade at the Success Center.

You can see these new features in action in the webcast, “Up, Down, and Gone: A Tale of Applications and Flow.”

This is an initial introduction of the traffic alerting feature. Be sure to enter feature requests for any additional or expanded functionality you'd like to see with this capability!

jreves

Read more
2 6 1,218
Level 15

NPM 12.4 is available today, December 4, on the Customer Portal! The release notes are a great place to get a broad overview of everything in the release. Here, I'd like to go into greater depth on the brand-new Cisco ACI support. Let’s talk a bit about how software-defined networks are different than traditional networks, what that means for monitoring, and how to get the most out of the new ACI monitoring feature.

What is SDN?

The first time I heard the term Software Defined Network, I thought it was stupid. All networks are defined by software. Software moves packets and frames, or programs the hardware that does it. Software is used to manually configure networks via CLI. Software is used to automatically configure networks with protocols like OSPF, STP, and LLDP. Networks were already software-defined!

Whether SDN is a good name or not, it is an important concept. There are a lot of people trying to define SDN, usually with some ulterior motive of placing themselves in a favorable position. For a slightly less biased view, check out the Wikipedia definition. The thing that stands out to me is:

SDN suggests to centralize network intelligence in one network component by disassociating the forwarding process of network packets (data plane) from the routing process (control plane).

This is a big change. In an SDN environment, network devices like routers and switches become simple devices that just move traffic at a high rate. All the intelligence is in a separate device called the controller. The controller learns how everything is connected, what connectivity applications need, and writes instructions to all of the network devices so they know how to forward traffic.

There are a ton of SDN solutions available today. The two most popular commercial solutions seem to be Cisco ACI and VMware NSX. Cisco ACI is more commonly requested by our customers, so we’ve built support for it first.

How Do I Monitor SDN?

An SDN fabric consists of a data plane and a control plane. The data plane is made up of physical devices: Nexus switches and, in the case of Cisco ACI, the cabling between them. The control plane is made up of many logical components that fit together to define which endpoints are allowed to send network traffic to each other. The modular nature of the configuration reminds me of Cisco’s MQC. To make sure your SDN environment is running well, you need to monitor both layers.

Data Plane (aka Underlay aka Infrastructure Layer)

AKA the boring stuff. This is not the glamorous part of SDN. It’s the stuff you’ve been doing for years: power supplies, fans, temperatures, CPU, RAM, and interface stats. The fact of the matter is, these things all need to function properly for your SDN environment to be performant and reliable.

The data plane for Cisco ACI environments is made up of the Cisco Nexus model line. Fortunately, NPM 12.3, the release before this one, introduced Network Insight for Nexus. This gave NPM better than ever support for this hardware.

It’s easy to set up. Navigate over to Settings (top menu bar) -> Manage Nodes -> Add Node. Add your spine switches and leaf switches as SNMP nodes. On the last step, make sure to check this box:

Picture1.png

If you already have your switches in NPM, you can find the same checkbox when you edit a node.

You’ll be prompted for your CLI credentials. CLI is the only way some of this very important data is available, so that’s how NPM gets it. This will cover the basics like power supplies, fans, temperature sensors, CPU, RAM, and interface statistics, plus the advanced stuff like vPC. Those of you with NCM can also get access list version control and analysis. Those of you with NTA will get flow analysis. You can check all of that out on our demo site here.
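
If you want a quick inventory of which Nexus switches are already in NPM before enabling the feature on each of them, a simple SWQL query against Orion.Nodes can help. This is just a sketch; matching on MachineType containing "Nexus" is an illustrative filter, not an official one, so adjust it to how your devices report themselves:

SELECT Caption, IPAddress, MachineType FROM Orion.Nodes WHERE MachineType LIKE '%Nexus%' ORDER BY Caption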

Okay, let’s get to the new interesting stuff.

Control Plane (aka Overlay aka Control Layer)

In an SDN environment, the controller has all the intelligence. This has a big impact on monitoring. Instead of polling dozens or hundreds of devices that each have their own very narrow view of the network, we can poll the controller directly. It has to know where everything is or it couldn’t control it. This means we can learn a lot from monitoring it.

This part is also easy to set up. Navigate again to Settings (top menu bar) -> Manage Nodes -> Add Node. In a Cisco ACI environment, the controller is called an APIC. Add your controller as an SNMP node. At the bottom of the first screen, you’ll see this checkbox:

Picture2.png

Check it! If you’ve already got your APIC added, edit the node and you can find the same box to check.

Cisco strongly recommends each ACI fabric have three APICs. Since each APIC must be able to control the entire network if necessary, each APIC has a complete view of the network. Polling them all results in a lot of duplication of work and potentially duplicate alerts. You have a choice in how you approach monitoring of these devices:

  1. Add all three APICs to monitoring, but enable API-based ACI polling (the checkbox) for only one controller.
    • Pros: efficient for the APICs and efficient for NPM.
    • Cons: if the controller you’re doing API-based polling on goes down, you’ll see the APIC is down, but you’ll lose visibility into the control plane until you fix it or enable API-based polling for another controller.
  2. Add all three APICs to monitoring and enable API-based ACI polling for all three controllers.
    • Pros: control plane monitoring keeps working, even if one or two of the three APICs go down.
    • Cons: NPM has to poll the same data three times. APICs have to provide the same data three times. You will get duplicate alerts and reporting data unless you’re careful to write your alerts in consideration of the duplicate data. More on this in a future post.

Our recommendation is to do #1, but either way will work.

The API-based polling runs over TLS. If you have a valid cert on your controllers, everything will add fine and you’ll be good to go. If you have a self-signed cert, you will receive a warning about it and you’ll have to accept the risk or replace it with a properly signed cert before proceeding. You do have a real cert on your APIC, right?

Once you complete the add node wizard, navigate on over to Node Details for one of your APICs with API-based polling enabled. You can click along with me right now on the Online Demo.  On the left side, you’ll see two new views: Members and Map. Let’s look at Members first.

Picture3.png

The Members view shows all of the logical components we have discovered. This includes Tenants, Application Profiles, and EndPoint Groups. It also includes the APIC’s view of the physical components: leaf switches and spine switches.

Picture4.png

This uses the framework’s List View, which is a polished way to deal with large lists. You can do multilevel filtering on the left, as well as sorting and searching. The list contains the name of the component (example: Tenant3), the type of component (example: Tenant), and the distinguished name (example: uni/tn-Tenant3). On the right, we see the health score. Let’s talk about that.

Since the controller has visibility into all components and their relationships, for the first time, part of the network infrastructure is in a position to accurately assess its health. Cisco ACI does this by assigning a health score. The health score is an integer from 1 to 100, where 100 is perfectly healthy and less than 100... isn’t. The health score takes into consideration both parents and descendants in the ACI model. You can check out the exact formula here. Since health scores represent status, they’re polled at the status interval in NPM. As always, you can adjust this interval. All of this data is polled via Cortex, incidentally, our new polling framework that you previously saw powering PerfStack Real-Time Polling.

Health scores will be colored red, yellow, or green according to thresholds. There are thresholds on the APIC already for this that determine what color that score is in the APIC GUI. To stay consistent, NPM learns the thresholds from the APIC and applies those. If you customize the thresholds on the APIC, NPM will learn and apply the new threshold settings.

You can click on a health score to get the history in the PerfStack dashboard:

Picture5.png

Thanks to this being in PerfStack, it’s easy to start correlating other metrics about the APIC, leaf switches, and spine switches. It gets more interesting when you start correlating to end node availability, latency, and other data NPM has. If you own other modules on the Orion Platform, you can correlate that data too; for example, application counters, database wait time, IOPs, logs, and all the rest. Seeing all this data normalized on the same shared timeline is powerful for troubleshooting. If a health score is in bad shape and you think the issue is on the controller, it’s time to log in to the APIC itself. The APIC can tell you what is causing the score to be what it is and has a bunch of additional ways to troubleshoot.

Returning to the sub-view menu on the left, let’s check out the Map tab.

When you first open the map, you’re only going to see the APIC in the center. To get more on the map, select the APIC. On the right side, the inspector panel will open. Here you can check the box next to related entities and press Add at the bottom to add them to the map. You can use this method to continue to spider through your ACI environment. This works well for creating a map of a small ACI environment or of a specific section of a larger ACI environment, like a tenant or an app. Once you’ve got a map you like, you can select to Save as a group in the top right. From that point forward, you can navigate to that group and press the Map tab to see the map again. Here’s an example of one I saved in my lab:

Picture6.png

Pretty slick! One important note: the APIC GUI already has some capability to map an ACI environment. In talking to NPM users who run ACI environments, I frequently heard that they would like to grant read-only access via a common platform for folks who don’t have access to the APIC directly, like NOC engineers. This accomplishes that goal and lets you correlate and visualize with all of the other data currently available in Orion Maps.

Next Steps

To upgrade now, customers with NPM under active maintenance can head over to the Customer Portal and download NPM 12.4. Thanks to the improved Orion Installer, the upgrade is faster than ever, with centralized upgrades of additional polling engines. Once you’re upgraded, add those ACI nodes and reply here to let us know how it’s working for you!

Read more
2 24 3,910
Level 11

Hi, all! Welcome back to the continuation of our Primer posts on SWQL and the Orion® SDK. In the last post, we showed how to create reports using SWQL queries. Now we’re going to take it one step further with some other uses for SWQL:

Dashboards:

As with the reports, you can also add a custom SolarWinds Query Language (SWQL) query to a dashboard. If you aren’t familiar with customizing dashboards and widgets, check out these videos first:
Creating a New View
Adding and Customizing Resources

To get started, make sure you’re logged in to SolarWinds as an admin, or as a user with rights to make updates/changes to views. Once that’s confirmed, navigate to the page you wish to update, open the left-hand drawer, and select “Customize Page.”

pastedImage_0.png

pastedImage_1.png

Search for “Custom Table” and drag and drop the widget onto your dashboard. There’s also “Custom Query,” and we’ll explore the advantages further down:

pastedImage_2.png

Select “Done Adding,” then “Done Editing” when complete. With your newly created widget, go ahead and select either “Edit” in the upper-right, or “Configure this resource” in the middle:

pastedImage_3.png

This should look similar to the report writer’s interface at this point. Give this table a title, then click “Select Datasource.”

pastedImage_4.png

Change the Selection Method to “Advanced Database Query (SQL, SWQL)” and make sure the radio button is set to “SWQL.” Then copy/paste your query and preview results to make sure everything looks okay:

pastedImage_5.png

Select “Update Datasource” when complete. Just like the report writer above, you can select and format your columns. Once you’re finished, click submit, and you now have a custom table on your dashboard!

pastedImage_6.png

Note: The Host Name column being blank isn’t an error; these machines are not associated with a host. We’ll explore formatting in a later post to show these as “N/A” instead.
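
If you don't want to wait, one way to handle it is SWQL's ISNULL function, which substitutes a replacement value when a column comes back empty. A minimal sketch, assuming ISNULL is available in your Orion version, applied to the same query:

select OAA.Displayname, OAA.Status, OAA.Node.Caption, ISNULL(OAA.Node.VirtualMachine.Host.HostName, 'N/A') AS HostName from Orion.APM.Application OAA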

Now, let’s try the custom query instead. With the “Custom Query” widget, you don’t have as many options in formatting, but it gives you two distinct advantages: the ability to paginate, and the ability to add searches. Pagination will be very important for larger lists, not only for cleanliness, but also for load times on the page you’re viewing, by restricting to X number of results at a time.

Again, go to the left-hand drawer and select “Customize Page,” then “Add Widget.” This time, search for “Custom Query” and drag/drop this widget to your dashboard:

pastedImage_7.png

Now, select “Edit” in the upper-right corner of the widget:

pastedImage_8.png

Notice here, you only get a box where “Select Datasource” would normally be. Go ahead and copy/paste your query in here, but since you don’t get the option of selecting the order of the columns, make sure your columns in the select statement are in the order you want them in. For example, with our query:
select OAA.Displayname, OAA.Status, OAA.Node.Caption, OAA.Node.VirtualMachine.Host.HostName from Orion.APM.Application OAA

“Displayname” will be the first column, “Status” will be the second column, and so forth. So now that we have this:

pastedImage_9.png

That will result in a widget that looks like this:

pastedImage_10.png

Notice the “Page 1 of 2” at the bottom? This will help reduce clutter on your dashboards by keeping the list neat and tidy, and at the same time help with page loads, since we’re restricting to only five results. Another cool feature is the “Search” function. Edit the widget again, and this time check the “Enable Search” box:

pastedImage_11.png

Now you have another box to insert your query, and a note about adding a where clause for the search string. When we’re finished, we’ll have a search box on the widget page, and whatever you put in that box will go into the ${SEARCH_STRING} variable. This will change our query to add the where clause. In this case, we’re going to search on the Application name, which is our first column:

select OAA.Displayname, OAA.Status, OAA.Node.Caption, OAA.Node.VirtualMachine.Host.HostName from Orion.APM.Application OAA WHERE OAA.DisplayName like '%${SEARCH_STRING}%'

The keen-eyed individuals will notice we added just a little bit more here. In SWQL, if you want to do a wildcard match instead of an exact match, you use the word “like” instead of “=”. Then, you use the percent character (%) to denote a wildcard, not an asterisk (*). Finally, in SWQL you always use single quotes for strings, never double quotes. Let’s put that in our search box:

pastedImage_12.png

And now we have a search box!

pastedImage_13.png

To test, let’s search on IIS and see what we get:

pastedImage_14.png

There we go! Remember, this is just an application example; you can use this approach for anything else you’re collecting in the product. For more examples, check out my other post on searching on a Port Description in User Device Tracker (UDT): https://thwack.solarwinds.com/docs/DOC-192885
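
The same WHERE clause technique also lets you combine the search box with additional fixed conditions. For instance, here is a minimal sketch that restricts the widget to nodes whose captions start with a hypothetical "PROD" naming prefix (the prefix is purely illustrative) while still honoring the search string:

select OAA.Displayname, OAA.Status, OAA.Node.Caption, OAA.Node.VirtualMachine.Host.HostName from Orion.APM.Application OAA WHERE OAA.DisplayName like '%${SEARCH_STRING}%' AND OAA.Node.Caption like 'PROD%'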

That’s it for now! Stay tuned for future posts on formatting SWQL queries in these reports!

Read more
2 4 1,615
Level 17

SQL Server upgrades are a pain, I know.

And boring, too. It’s not very exciting to watch a progress bar.

Many people put off upgrading SQL Server. They wait for a business reason or an important security patch. Or, as was the case historically, they wait for the first service pack. After all, if it ain’t broke, don’t touch it.

I’m here today to tell you those days are over.

No longer can you sit back and allow systems and applications to lag behind with regards to patches and upgrades. You must stay current. Allowing applications to be more than one major version behind puts you, and your systems, at greater risk for security threats than ever before.

Microsoft has made it easier to upgrade and patch SQL Server. They’ve removed service packs, opting instead for cumulative updates. By shifting to a model that is similar to continuous deployment, Microsoft is able to deliver features, performance improvements, and security enhancements at a faster rate than ever before.

So, if you are waiting for SQL Server 2017 SP1, you’ll be waiting forever.

Don’t wait. Get started on upgrading SQL Server to the latest version today.

Let me help you understand just a few of the reasons why upgrading SQL Server is right for you.

Reasons for Upgrading SQL Server

As I mentioned before, it’s just common sense to stay current with the latest version of SQL Server. Microsoft has built tools like the Database Migration Assistant to help make upgrades easy. Applying cumulative updates has also been simplified. And because Microsoft hosts millions of database workloads inside of Azure SQL Database, you can be assured that these updates have been tested thoroughly.

Here’s a handful of the features available, out of the box, when you upgrade to the latest version of SQL Server.

Automatic database tuning – The ability for the database engine to identify and fix performance problems.

Adaptive query processing – While processing the execution plan, SQL Server will adapt query plans as necessary, essentially tuning itself instead of reusing the same plan.

Data security and privacy features – Always Encrypted, Dynamic Data Masking, Row Level Security, Data Discovery and Classification, and Vulnerability Assessment are all new, and all awesome.

Those are just a handful of the improvements. You will also find things like faster DBCC CHECKDB, improved backup security, and a new cardinality estimator. All those are great features worth your time for upgrading.

But there’s one more thing: the Orion® Platform.

See, we’ve been busy refactoring the Orion Platform to take advantage of newer SQL Server features.

Reasons for Upgrading Your Orion Installation

When I’m at an event performing demos, I am surprised how many customers haven’t upgraded to the latest version of the Orion Platform. Of course, I understand the many reasons why upgrades are put on the back burner.

I’m here today to help you understand that there’s more to the latest Orion version than a few fancy screens.

By using columnstore indexes, we have reduced the size of the Orion database (up to 33% less space), the amount of time it takes to perform maintenance (up to 6x faster on average), and the amount of time to retrieve data (up to 10x faster). That’s a lot of performance gains.

Table partitioning allows Log Manager for Orion to scale to accommodate multiple log sources and to quickly display all logs in time-sequential order. As anyone who has had to analyze logs will tell you, it’s important to be able to quickly see all events in the exact order they occurred.

Also, in-memory OLTP helps products that leverage the Orion Platform achieve a high rate of concurrency, accelerating performance and scalability.

Those features sound great, but don’t just take my word for it. You should read about the SQL Server features being used by NetFlow Traffic Analyzer (NTA) over at this FAQ pag....

Now, at the bottom of that page, I want to call out something else that you will find interesting…

“You can install your NTA Flow Storage database and your Orion database in the same instance of MS SQL, provided that instance is an MS SQL 2016 SP1 or later version.”

That’s right, upgrading to the latest version of NTA allows you to consolidate your SolarWinds footprint. For customers paying by the core for SQL Server licensing, this alone should motivate you to upgrade.

I’ll make it easy for you: here’s a link to help you get started. Also, here’s the official upgrade guide located on our Customer Success Center.

I’ve also written some other in-depth posts about tips and tricks on upgrading SQL Server. Have a look—I believe you’ll find the information useful.

Summary

At the end of the day, we want the same thing that any company would want: happy customers.

By upgrading to the latest version of SQL Server, and then the Orion Platform, you can see benefits immediately. Not just in performance, but in your wallet.

Continuous improvement is the world in which we live now. Stop thinking of upgrades as a chore or a task to get past. Upgrade because you want to, not because you have to.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.

Read more
4 13 2,872
Product Manager

If you have installed or upgraded any Orion® Platform product module over the course of the last six months and were running Orion product modules on either Windows Server 2012, Windows Server 2012 R2, or SQL Server 2012, you probably noticed an ominous warning message notifying you that these operating system and SQL database versions are deprecated and will no longer be supported in a forthcoming release.

Windows Server 2012 / R2 Deprecation Notice
pastedImage_0.png

Microsoft SQL Server 2012 Deprecation Notice
pastedImage_1.png

If you didn’t encounter this message during your latest upgrade or install, not to worry. The above message only appears if Orion product modules are being installed or upgraded on an operating system or SQL database version that has been deprecated. If you're running several versions behind but have been keeping tabs on the release notes, eyeing all the wonderful features that await you in your next upgrade, you will find similar deprecation verbiage there for every Orion module, letting you know that you should upgrade from Windows Server 2012, Server 2012 R2, and SQL 2012 at your earliest convenience to stay current with later releases.

So what exactly is the purpose of these deprecation notices and why should I care?

Deprecation notifications such as these serve as signposts to our customers of important impending changes in the matrix of operating systems and SQL database versions that will no longer be supported in future releases. These types of advance notices were introduced at the request of customers like you. Their intention is to allow you ample opportunity to upgrade your environment prior to the release of newer Orion product module versions where these operating systems and database versions may no longer be supported.

Life before Deprecation Notices

Prior to the inclusion of these deprecation notices, the only real way of knowing if an operating system or database version was no longer supported in the latest release of the Orion Platform was to download and attempt to install it. This was obviously much too late in the process, as by this point you likely only received approval from the change advisory board to upgrade your Orion install, and your window for downtime was narrow enough only to allow for the upgrade of your Orion product modules and not the operating system or database server that your Orion Platform resided upon. As you could imagine, this was a frustrating or even downright infuriating time to find out your upgrade was blocked. To prevent these types of mishaps from occurring, SolarWinds provides in-product deprecation notices one version in advance, warning customers that future releases are unlikely to support these older operating systems or SQL database versions.

My OS or SQL database version has been deprecated. How am I affected?

In short, you're probably not. These deprecation notices apply only to the absolute latest releases and are not applicable to previous versions of the product. There has always been zero requirement that customers upgrade to the latest version to continue receiving support. While we welcome and encourage all our customers to take full advantage of the latest enhancements and improvements included in newer versions of the product, this is not always possible or practical in every customer environment. Some organizations even have firm constraints that require them to stay at least one version behind the latest at all times.

For those reasons and more, we continue to fully support several previously released versions of Orion product modules at any given time. Suffice it to say, if you're currently running NPM 12.3 or any other Orion Platform 2018.2 module release on Windows Server 2012, 2012 R2, or SQL 2012, there is no immediate impending requirement to upgrade. The SolarWinds end-of-life policy helps ensure that these versions will remain fully supported, even when installed on Server 2012, 2012 R2, or SQL 2012.

Why are you deprecating my otherwise perfectly fine operating system or SQL database version?

Going forward, the Orion Platform and its related modules will begin to leverage new technologies only available in newer versions of SQL and Windows. New capabilities such as In-Memory OLTP, columnstore indexes, as well as partitioned tables and indexes aim to improve various aspects of performance and scalability for the entire Orion Platform, as well as the modules installed atop it. This will allow for accelerated website performance, shorter nightly database maintenance routines, reduced database size, and faster report generation, to name only a few areas of noticeable improvement.

Windows Server 2016 and 2019, as well as the version of IIS included with them, provide a host of important new security improvements that are critical to organizations of all sizes. These include things like supporting newer, stronger encryption ciphers, HTTP Strict Transport Security (HSTS) enabled websites, secure cookies, and more. While patches for specific critical security vulnerabilities will still be made available for Windows Server 2012 and 2012 R2, vital new security enhancements, bug fixes, and other notable improvements will only be available to later versions of the Windows operating system still under mainstream support.

How can I better plan for possible future OS and SQL deprecations?

While SolarWinds does everything reasonably possible to help ensure customers stay well informed of impending deprecations, some have asked for a longer-term outlook so they can plan their upgrade and server migration schedules accordingly. First, when selecting which operating system or database version to install Orion product modules on, we always recommend using the latest possible version of both. This decreases the likelihood of that operating system or database version being deprecated anytime in the foreseeable future, while also limiting the number of times you need to migrate your Orion installation to a newer server throughout its lifetime. To stay proactively ahead of any impending deprecation notices, however, you need only look to Microsoft's published product lifecycle for Windows and SQL Server.

Put simply, the Orion Platform will support the Windows operating system and SQL database versions covered under Microsoft's mainstream support that are available at the time of that version's GA release date.

I still have Windows and SQL 2012 in my environment. Can I continue monitoring those systems with Orion?

Absolutely! Monitoring Windows Server 2012 and SQL 2012 systems with the Orion Platform and its related modules remains fully supported, even in the latest releases. This support also extends to those systems monitored using the Orion Agent.

Exactly which Windows and SQL Server versions should I expect to be supported in the release following Orion Platform 2018.2?

The following table outlines those versions of SQL and Windows Server that will be supported in the Orion Platform release following version 2018.2:

Supported Operating System Versions:

  • Windows Server 2016
  • Windows Server 2019

Supported Microsoft SQL Server Versions:

  • SQL 2014
  • SQL 2016
  • SQL 2017
  • Amazon RDS

I'm currently on Windows or SQL Server 2012. How do I upgrade?

In recent years, Microsoft has made the in-place upgrade process easier and more reliable than ever. In-place upgrades are likely also the fastest method for getting your Orion server up to the latest operating system or SQL database version. If an in-place upgrade isn't for you, SolarWinds provides a wealth of documentation on migrating your Orion Platform to a new server.

Also in the Success Center, you will find documentation on migrating your Orion database to a new SQL Server.

I need assistance with my next Orion upgrade. What options do you provide?

If you need a refresher course on the upgrade process, or a confidence boost that you're on the right track, you will find on-demand training videos and instructor-led virtual online classes you can attend for free through the Customer Portal. As always, if at any time you encounter an issue during your upgrade, don't hesitate to contact SolarWinds support for assistance. We are here 24 hours a day, seven days a week, 365 days a year to help ensure you are successful using SolarWinds products.

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Read more
8 0 4,024
Product Manager

If you've been using the SolarWinds® Orion® unified IT monitoring platform for more than a few years, it's likely that you've at least once migrated it to a new server. You may have migrated from a physical server to a virtual one, or perhaps you simply needed to migrate to a server running a more modern operating system. Regardless of the reason, you're likely aware that there's a litany of documentation and training videos on the subject. Below are just a few, in case you’re curious.

Index

Deprecation

Now, as many of you have likely discovered during your last upgrade to SolarWinds® Network Performance Monitor (NPM) 12.3, SolarWinds® Server & Application Monitor (SAM) 6.7, or any other product modules released in 2018, support for Windows Server 2012, Server 2012 R2, and SQL Server 2012 was officially deprecated in those releases. You may have stumbled upon that when reviewing the release notes, or during the pre-flight checklist when running the installer.

pastedImage_7.png

Deprecation does not mean that those versions aren’t supported with Network Performance Monitor 12.3, Server & Application Monitor 6.7, etc. Deprecation simply means that new versions released in the future are unlikely to support those older operating systems. These deprecation notices were added at the request of customers like yourself, who asked to be provided with advance notice when future versions of Orion product modules would no longer run on older operating systems or SQL database versions. Those deprecation notices serve to allow customers an opportunity to budget and plan for these changes accordingly, rather than find out during a 3 a.m. change window that the upgrade you planned doesn't support your current OS or database version.

While future product module versions may no longer support Windows Server 2012, Server 2012 R2, or SQL Server 2012, that doesn't mean all previous versions are no longer supported at all. In fact, the latest, currently shipping versions of NPM, SAM, and other SolarWinds products are planned to continue to support running on Windows Server 2012 and 2012 R2 for many more years to come. So, if you're happy with the versions of product modules you're running today, take your time and don't rush your OS upgrade or server migration. Plan it appropriately. We'll still be here, waiting with a boatload of awesome new features whenever you're ready to upgrade.

Preface

But I digress. Since many of you have planned, or will eventually be planning, to migrate your Orion Platform to Server 2016 or perhaps even Server 2019, this inevitably stirs up painful memories of migrations you have undoubtedly done in the past, whether that was the Orion Platform specifically or some other mission-critical system in your environment. Let's face it, these migrations are typically neither fun nor easy. Luckily, we’ll discuss how you can help change that.

Now, over the years, I have performed countless Orion Platform upgrades and migrations. And I, like many of you, have amassed a tremendous treasure trove of tips and tricks for streamlining the process down to an art form. The name of the game here is downtime: the less of it you incur during your migration, the sooner you get to go home, and the more you look like a rock star to your boss. What I'm about to show you here is how you can migrate your Orion Platform server with zero downtime!

As I stated above, there is a wealth of information on the subject of migrating the Orion Platform to a new server, and perhaps I overstated it a bit when I suggested that you're doing it wrong. There are multiple different (yet still correct) ways of migrating the Orion Platform from one server to another, and some ways may be faster, easier, or less error prone than others. These migrations typically vary depending upon the type of Orion server role the machine is hosting. This blog post focuses on the main Orion server, but the strategy can apply equally to Additional Polling Engines.

Orion Server Migration Made Easy

Going forward I'll assume you have at least one Orion server running NPM 12.2 or later—that one server being the main Orion server itself. If you're still running NPM 11.5.x, then chances are good you're not planning to migrate directly to Server 2016 or Server 2019 anyway, since 11.5.x isn't supported on either of those operating systems. I'm also going to assume that your Orion Platform server is currently running on Server 2012 or 2012 R2, though this process is equally applicable to those still rocking Server 2008 or Server 2008 R2. I'm also going to assume you have another freshly installed server ready for your Orion Platform migration. Lastly, this document won't be covering database server migrations. If that's what you were hoping for, there's an excellent document on the subject here.

First Things First

As with any good do-it-yourself project, the first order of business is to throw out, or otherwise lose, the instructions. I'm going to be walking you through what I’ve found to be the simplest, fastest, least error-prone manner of migrating the Orion Platform to a new server with absolutely zero downtime. None of those other documents or videos referenced above are going to show you how to do that, so let's just pretend they never existed.

Schedule Your Maintenance Window

While the Orion server should not be going down during the migration, it's always best to plan for the worst and hope for the best. I don't want anyone telling their boss that they decided to migrate their Orion server in the middle of the day because some guy on THWACK® named aLTeReGo told them to do it.

Backup Your Orion Server

Conventional wisdom would tell you that if it can go wrong, it probably will—so be prepared. If your Orion server is running on a virtual machine, take a snapshot prior to the migration just in case. While we won't be messing with that server at all during the migration, it's always good to have a safety net just in case.

Backup Your Orion Database

I can't emphasize this enough. BACKUP YOUR DATABASE! Seriously, just do it. Not sure if the backup from last night completed successfully? Do another one. Everything important is in the database, and with a backup, you can restore from virtually any disaster. If the database is corrupted though and you don't have a good backup to restore from, you may be rebuilding your Orion Platform again from scratch. You don't need to shut down the Orion Platform to take a backup, so go ahead and take another just to be on the safe side. We'll wait.

Need a little extra insurance? Why not give SolarWinds cloud-based server backup a try?

Do Not Upgrade (Yet)

If you’re migrating as part of an upgrade, don't upgrade yet unless you’ll be migrating to Windows Server 2019. It's best to leave the original server fully intact/as-is in the event something goes wrong and you need to roll back. There will be plenty of time to upgrade and play with all the cool new features later. For now, just focus.

Have Faith and Take a Deep Breath

This is going to start off a bit odd, but stick with me and we'll all come out of this together. Start by going to [Settings > All Settings > High Availability Deployment Summary] in the Orion web interface from a web browser on the new machine where you plan to migrate the Orion Platform. Next, click [Setup a New HA Server > Get Started Setting Up a Server > Download Installer Now].

Download the High Availability Secondary Server Installer
pastedImage_1.png
pastedImage_2.png
pastedImage_4.png
pastedImage_5.png
pastedImage_9.png

That's right, we'll be using the power of Orion High Availability (HA) to perform this Orion server migration. If at this point you're worried that you can't take advantage of this awesome migration method because you don't own an Orion High Availability license, fret not. Every Orion Platform installation comes with a full 30-day evaluation of High Availability for use on an unlimited number of servers. That's more than enough time for us to complete this migration! If you have no need for Orion High Availability, don't worry. The final steps in this migration process include disabling High Availability, so there's no requirement to purchase anything. However, you might find yourself so smitten with Orion High Availability by the end of the migration that you may wonder how you ever managed to live without it. You've been warned!

Begin Installation On Your New Orion Server

Once downloaded, double-click the Scalability Engines Installer. Depending on which version of the Orion Platform you're running, the Scalability Engines Installer may look significantly different, so I've included screenshots from both versions: the top row shows the Scalability Engines Installer version 1.x, and the row below shows version 2.x. Regardless of which version you're running, the end result should be identical.

Connect to Existing Orion Server on Original Server

Select Server Role to Install


Once the installation is complete, the installer will walk you through the Configuration Wizard process. Ensure that all settings entered in the Configuration Wizard are identical to those used by your existing Orion server.

Let The Fun Begin

Now that we've installed a secondary Orion server, it's time to join the two together into a pool (aka cluster). To do this, log into the Orion web interface on your original Orion server and go to [Settings > All Settings > High Availability Deployment Summary]. There, you should see both your original Orion server (the one you're logged into now) and the new Orion server you just installed in the steps above. Click the “Set up High Availability Pool” button next to the name of your new Orion server.

Setup High Availability Pool.png

Now, depending on whether your existing Orion server and the new server you'll be migrating to are located on the same subnet, the HA Pool creation wizard will prompt you for slightly different information. It's also important to note that if you're currently running Orion Platform 2017.1 or earlier, this zero-downtime server migration is only possible when both your existing and new Orion servers are located on the same subnet.

Same Subnet Migration

If both your existing and new Orion servers reside on the same subnet, you’ll be prompted to provide a new, unused IP address on the same subnet as your existing Orion server. This virtual IP (VIP) will be shared between the two Orion servers for as long as they remain in the same HA pool, and its purpose is to route traffic to whichever member of the pool is active. If you don't intend to keep HA running after the migration, this IP address will only be used briefly and can be reclaimed at the conclusion of the migration. When you're done entering the IP address, click “Next”.

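Before you commit to an address, it's worth confirming nothing is already answering on the IP you plan to hand over as the VIP. Here's a rough sketch of that check; the candidate address is made up, the ping flags assume a Windows workstation, and keep in mind a firewalled host can sit silently on an address without answering ping.

# Minimal sketch: sanity-check that the candidate VIP isn't already in use.
# The address is a placeholder, and the ping flags (-n, -w) assume Windows.
import subprocess

candidate_vip = "10.0.0.50"
result = subprocess.run(
    ["ping", "-n", "2", "-w", "1000", candidate_vip],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print(f"{candidate_vip} replied to ping; pick a different, unused address.")
else:
    print(f"No reply from {candidate_vip}; it's probably free (though ping alone isn't proof).")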

Migrating to Server in Different Subnet

If you're migrating to an Orion server on a different subnet than your existing one, the HA Pool creation wizard will prompt you for a virtual hostname rather than a virtual IP address. This name ensures users are directed to the “active” member of the HA pool when they access the Orion web interface, even after a failover. If you don't intend to keep HA running after the migration, you can enter anything you like in this field. Once you've populated the “Virtual Host Name” field, click “Next.”

Pool Properties.png

On the “DNS Settings” step of the HA Pool creation wizard, select your DNS server type, or choose “other” from the “DNS Type” drop-down menu if you don't intend to keep HA running after the migration. If you choose “other,” you can enter any IP address and any DNS zone (even one that doesn't exist) into the fields provided; these values will not be used unless you later integrate Orion High Availability with a non-Microsoft, non-BIND DNS server.


When complete, click “Next” to proceed, review the “Summary,” and click the “Create Pool” button to complete the HA Pool creation process.

Ready, Set, Cut-over!

From the Orion Deployment Summary [Settings > All Settings > High Availability Deployment Summary], select the HA pool you just created. On the right side, click the “Commands” drop-down menu and select “Force Failover.” This should initiate an immediate failover from your old Orion server to your new one. Note that while the cutover time for polling and alerting is typically just a couple of seconds, it may take a minute or so before the Orion web interface is fully accessible. Once you've initiated the failover, unless you were already accessing the Orion Web Console via the VIP you assigned earlier, you’ll need to change the URL in your browser to point to the IP address of the new Orion server (or to the VIP) to regain access to the Orion web interface.

Force Failover.png
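If you'd like a scriptable sanity check to go along with eyeballing the UI, something like the following can confirm the web console is answering on the new server (or on the VIP). The hostname, the use of HTTPS, and the self-signed-certificate workaround are all assumptions; adjust them to match your deployment.

# Minimal sketch: confirm the Orion web console responds after forcing the failover.
# The hostname is a placeholder; point it at your new Orion server or the VIP.
import requests

url = "https://orion-new.example.com/Orion/Login.aspx"
response = requests.get(url, timeout=15, verify=False)  # verify=False only if you use a self-signed certificate
print(url, "->", response.status_code)
if response.status_code == 200:
    print("The web console is up on the new server.")
else:
    print("Unexpected response; give it a minute and try again.")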

Verify you're cut over to the new server by looking at the pool members listed on the Orion Deployment Summary, specifically the state or role shown just below their names. Your old Orion server should be listed as “Standby,” and your new Orion server should display as “Active.” Congratulations! You've just completed a successful Orion server migration with zero downtime!

Clean-up

The following steps should be completed within 30 days if you don't own, or plan to purchase, Orion High Availability to provide continuous monitoring, redundancy, and near-instantaneous failover of your Orion server in the event of a failure. Don't forget that Orion High Availability also helps you maintain your Orion Platform’s 100% uptime every month when Patch Tuesday rolls around. (Patch Tuesday… or, you know, when Microsoft releases its latest round of operating system hotfixes, all of which inevitably require a reboot.)

Shut Down Old Server

You should start the cleanup process by shutting down your original Orion server. It served you well, and we all know how hard it is to bid a final farewell to such a loyal friend, but its time has come. If you're not immediately planning to destroy the virtual machine or de-rack your old Orion server, you may first want to change its IP address if you plan to reuse that address on your new Orion server. This ensures that if the old Orion server is ever powered back on, it won't cause an IP address conflict and wreak havoc on your network monitoring. Once you've changed the IP address, shut down the server before proceeding with the next steps.


Remove The Pool

From within the Orion web interface, navigate back to the High Availability Deployment Summary by going to [Settings > All Settings > High Availability Deployment Summary]. Click on the name of the Pool you created earlier in the steps above. From the “Commands” drop-down menu on the right select “Remove Pool.”


Reclaim The Original Orion Server's IP Address

Last, but certainly not least, you may want to reclaim the IP address of your original Orion server by assigning it to your new Orion server. This is simply a matter of logging into the Windows server via RDP (or your remote access method of choice) and opening the Network Control Panel. I prefer to open the “Run” dialog, type “ncpa.cpl,” and press Enter to open the Network Control Panel without needing to navigate around Windows. Once it's open, right-click on your network interface and select “Properties.” Within the interface properties, select “Internet Protocol Version 4 (TCP/IPv4)” and click “Properties.”

Network Control Panel / Interface Properties

Update the “IP Address” field by entering the IP of your original Orion server, then click “Advanced.” In the “Advanced TCP/IP Settings,” you’ll find the virtual IP address you configured earlier in the steps above; you should be able to safely remove it now. To do so, select the IP address and click the “Remove” button. Then, click “OK” on each of the three windows to save your changes.

TCP/IP Properties / Advanced TCP/IP Settings
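If you'd rather script this step than click through the GUI, the same change can be made with netsh, as in the rough sketch below. The interface name, addresses, mask, and gateway are all placeholders, it needs to run from an elevated prompt on the new server, and running it over an RDP session to the very address you're changing will naturally drop that session.

# Minimal sketch: reclaim the old Orion server's IP on the new server and remove the VIP.
# Interface name and all addresses are placeholders; run from an elevated prompt on the new server.
import subprocess

IFACE = "Ethernet0"
RECLAIMED_IP = "10.0.0.10"   # the original Orion server's address
NETMASK = "255.255.255.0"
GATEWAY = "10.0.0.1"
VIP = "10.0.0.50"            # the virtual IP used by the HA pool during the migration

# Set the interface's primary address to the reclaimed IP.
subprocess.run(
    ["netsh", "interface", "ipv4", "set", "address",
     f"name={IFACE}", "static", RECLAIMED_IP, NETMASK, GATEWAY],
    check=True,
)

# Remove the now-unneeded virtual IP from the same interface.
subprocess.run(
    ["netsh", "interface", "ipv4", "delete", "address",
     f"name={IFACE}", f"address={VIP}"],
    check=True,
)
print("IP reconfiguration commands completed.")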

Update DNS/Machine Name

Do not rename the server itself in Windows. If you have users accessing the Orion Web Console via the original server’s name, the best and easiest way to ensure they can still reach the new server is to create a DNS CNAME that points to it. It's always a good idea to have a layer of abstraction between the name end users type into their browser and the name of the server itself; this makes it easy to redirect those users later, should you want to add an Additional Web Server or re-enable High Availability. To accomplish this, we're going to create a DNS CNAME record for your original Orion server's name that points to the new Orion server. In this example, I'm using Windows DNS, but the same principle applies to virtually any DNS server.

From the DNS Control Panel on your DNS server, expand “Forward Lookup Zones,” right-click on your domain name, and select “New Alias (CNAME).” In my example below, my previous server's FQDN (fully qualified domain name) was “solarwinds.sw.local” and my new Orion server's name is “pm-aus-jmor-04.sw.local.” In the “Alias name” field, enter “solarwinds.” The “Fully qualified domain name” field will automatically populate with the alias and domain name. In the “Fully qualified domain name (FQDN) for target host” field, enter the FQDN of your new Orion server and click “OK” to save your changes. Lastly, find and delete the “Host (A)” record for your old Orion server.

DNS Control Panel / Add New Alias (CNAME)
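Once the CNAME is in place and any cached records have expired, a quick resolution check confirms the old name now lands on the new server. The two names below are just the ones from my example; substitute your own, and note that results can lag behind the change depending on your DNS TTLs and client caching.

# Minimal sketch: confirm the old Orion name now resolves through the new CNAME.
# The names match the example above; substitute your own.
import socket

old_name = "solarwinds.sw.local"
expected_target = "pm-aus-jmor-04.sw.local"

canonical, aliases, addresses = socket.gethostbyname_ex(old_name)
print(f"{old_name} -> canonical name: {canonical}, addresses: {addresses}")

if canonical.lower() == expected_target.lower():
    print("The CNAME is resolving to the new Orion server.")
else:
    print(f"Resolved to {canonical} instead; double-check the record and your DNS cache/TTL.")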

While this undoubtedly looks like a lot of steps, the process is actually fairly straightforward, and I completed it in less than an hour. Obviously, your mileage may vary, but regardless of how long it takes, there's no simpler way that I'm aware of to migrate your Orion Platform to a new server with anywhere close to zero downtime. Hopefully, this process will save you a fair bit of time and frustration over the previous methods referenced above. If you have any tips and tricks of your own that have simplified your Orion server migrations, feel free to post them in the comments section below; we'd love to hear them!

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates.  All other trademarks are the property of their respective owners.

Read more
55 199 22K
Product Manager

As awesome as SolarWinds applications can be at helping avoid a network crisis, sometimes disaster strikes closer to home: in the Orion® Platform itself. You come in to work one morning and click on the web page, only to see an unhandled exception, or you realize you haven’t seen an alert in a suspiciously long time, or you plan an upgrade expecting everything to go smoothly and find yourself stumped by a configuration wizard error. While our awesome support folks are here to help you get past any trouble you might run into, wouldn’t it be great if you had a way to check the health of your SolarWinds environment to prevent a catastrophe?

Well, good news: now you do! Introducing the Orion Health page, designed to help you keep your SolarWinds environment humming along trouble-free. You can go to Settings -> All Settings -> High Availability Deployment Summary (found in the Product Specific Settings section). Once there, you will see a tab called Deployment Health (and don’t worry, even if you don’t use our High Availability product, you still have these options).


This page has over 70 checks created from analyzing the most common reasons why customers call us. We cover everything from database maintenance to disk space. The best part is, you don’t have to remember to come to this page and run the server health checks regularly; they run every night along with our regularly scheduled database maintenance. We run specific tests against each server in your deployment. Got an additional web server? A scalability engine? A standby server? We've got you covered!

Shhh…

Of course, every environment is different, and we know that some tests might not apply to everyone. For example, we’ll warn you if your SQL Server is running on a VM, since that might not be the best setup. Resource-intensive applications like SQL Server don’t always play well with shared resources. But if you know your VM is a beast and can handle whatever we throw at it, you can simply silence that particular check:


You won’t see it again unless you un-silence it, so your daily view will be only the tests you care about.

There’s a KB for that!

So now you’re using the health page to keep everything copacetic, and you find a failure you’re not quite sure what to do with. Well, we’ve got a whole library of Knowledge Base articles right at your fingertips! Just click the arrow at the end of the test’s row, and a panel will pop up with a description of the test and a link to an article with more info.


Now that you have this tool at your fingertips, remember that we always welcome your feedback. Think there’s something we could test for that we don’t? Something that needs to be tweaked? Let us know and we’ll be happy to take a look!

Read more
15 62 5,272