
Product Blog


We are excited to share that we've reached GA for Web Help Desk v12.5.2.

 

This service release includes:

 

Clickjacking protection

This release prevents malicious code from redirecting a hyperlink in the Web Help Desk user interface to an unauthorized third-party website or resource.
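Clickjacking protection like this is typically delivered by sending frame-busting response headers to the browser. As a hedged illustration only (this is not Web Help Desk's actual implementation), a minimal Python WSGI middleware could add the standard headers:

```python
def anti_clickjacking(app):
    """WSGI middleware that appends frame-busting headers to every response.

    Illustrative sketch only: the header names are the standard browser
    mechanisms, not Web Help Desk's actual implementation.
    """
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            headers = list(headers) + [
                ("X-Frame-Options", "SAMEORIGIN"),  # legacy framing control
                ("Content-Security-Policy", "frame-ancestors 'self'"),  # modern equivalent
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_headers)
    return wrapped
```

With both headers present, browsers refuse to render the page inside a third-party frame, which is the redirect/overlay trick clickjacking relies on.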

 

Secure password reset logic

After you click Forgot Password on the Log In screen, Web Help Desk verifies your current email address and redirects you back to the application using a secure connection to reset your password.

 

Improved LDAP security

Web Help Desk now prevents unauthorized LDAP client account users from logging in to an LDAP tech account with an identical user name. In v12.5.1 and earlier, WHD had two ways to handle LDAP authentication: one for techs and one for clients. After you install this release, the tech LDAP authentication functionality is removed. Every tech who used this functionality will have their WHD password reset and will receive an email with steps to log in to WHD.

See Unauthorized clients can log in to a Tech account using LDAP authentication for details.

Before you install this upgrade, ensure that all techs have client accounts (authenticated through LDAP) linked to their tech accounts. Also ensure that no tech username matches any client username. After the upgrade, all techs must access their tech account through their client account, or by using their WHD tech username and WHD password (which can be reset using the secure password reset logic).
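The post-upgrade login order can be sketched in Python. Everything here, from the function name to the account model, is hypothetical and exists only to illustrate the sequence of checks described above:

```python
def resolve_tech_login(username, password, client_accounts, whd_passwords,
                       ldap_authenticate):
    """Illustrative sketch of the post-12.5.2 tech login order.

    Hypothetical model: tech-side LDAP no longer exists as a separate path.
    1. Try the linked client account via LDAP.
    2. Fall back to the WHD-local tech password.
    """
    client = client_accounts.get(username)  # client account linked to this tech
    if client is not None and ldap_authenticate(client, password):
        return "authenticated-via-client-ldap"
    if whd_passwords.get(username) == password:  # WHD username + WHD password
        return "authenticated-via-whd-password"
    return "denied"
```

The key point the sketch captures: a client whose LDAP username happens to match a tech username can no longer land in the tech account, because the tech path only accepts the WHD-local credential.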

 

Updated Apache Tomcat

This release supports Apache® Tomcat® 7.0.82 for improved security. See the Apache Tomcat website for details.

 

Notable fixed issues

Tickets linked to a survey now close properly after you change the status to Resolved.

The Office 365 connector now supports subfolders.

Tickets restricted to a location group can no longer be accessed by users in another location group.

 

We encourage all customers to upgrade to this latest release, which is available in your Customer Portal.

Thank you!

SolarWinds Team

To kick off the Q4 releases, I am happy to announce General Availability of Database Performance Analyzer 11.1. This release continues to build momentum from previous releases by extending our support for Availability Groups, supporting the latest databases, and improving the DPA interface. We've also added a subscription option when you deploy DPA in the Amazon cloud.

 

Support for New Database Versions

We'd like to announce official support for the following databases:

  • SQL Server 2017 on Windows
  • SQL Server 2017 on Linux
  • Oracle 12.2
  • MariaDB 10.0, 10.1 and 10.2
  • IBM DB2 11.1

When integrated with Orion, these databases will appear in DPAIM.

 

Availability Groups:  Status, Alerts and Annotations!

New in version 11.1, DPA regularly polls the status of all SQL Server Availability Groups (AGs) contained in the monitored instance. The DPA Home page displays a new status icon which tells you that AGs are present in a monitored instance.

 

DPA’s new AG monitoring gives you the ability to:

  • See the status of all your AGs on the home page, including a new filter widget.
  • When you drill down, see the status of AGs, databases, and replicas. This includes synchronization and failover status information.
  • See annotations on trend charts that show when AG failovers have occurred. The annotations show you the previous and current replica (from/to), and allow you to correlate failovers to changes in load.
  • Send an alert email when:
    • An AG failover occurs.
    • An AG status becomes Partially Healthy or Not Healthy.
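The alert conditions above imply a health rollup across replica states. A minimal sketch, assuming a simple all/some/none rollup rule (the state names mirror SQL Server synchronization states, but the rollup rule itself is an assumption, not DPA's documented logic):

```python
def ag_health(replica_states):
    """Roll up per-replica synchronization states into an AG health status.

    Illustrative assumption: all replicas healthy -> "Healthy",
    some healthy -> "Partially Healthy", none healthy -> "Not Healthy".
    """
    healthy_states = {"SYNCHRONIZED", "SYNCHRONIZING"}
    ok = [state in healthy_states for state in replica_states]
    if all(ok):
        return "Healthy"
    if any(ok):
        return "Partially Healthy"
    return "Not Healthy"
```

An alert rule would then fire whenever this function returns anything other than "Healthy", which matches the two alert conditions listed above.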

For a detailed view of the new AG features, see this feature post: DPA 11.1: Improved monitoring of SQL Server Availability Groups

HomePageAGInfo.png

AG_summary.png

 

Amazon Subscription

Have a lot of databases in the Amazon cloud? You can now monitor them via your Amazon subscription: simply start up DPA from the AWS Marketplace, connect a repository, and start monitoring your databases. All currently supported databases can be monitored, and you can integrate DPA with your Orion server and see the data in Orion.

 

Improved Wait Time Status Indicator

The wait time indicator on the home page has been improved in two ways.

  • There are now three statuses (Green, Yellow, Red) instead of two (Blue, Red). Default thresholds are now 1.3x and 1.6x the historical wait time for Yellow and Red, respectively.
  • We evaluate every 10 minutes, instead of once an hour.

This increased frequency and the new thresholds allow DPA to show you wait time pressure much more quickly than previous versions.
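Because the default thresholds are simple multiples of the historical baseline, the classification is easy to express. The factors below match the defaults described in the post; the function itself is an illustrative sketch, not DPA's internal code:

```python
def wait_time_status(current_wait, historical_wait,
                     yellow_factor=1.3, red_factor=1.6):
    """Classify current wait time against a historical baseline.

    Default factors (1.3x Yellow, 1.6x Red) match the post; the
    implementation is a sketch for illustration only.
    """
    if current_wait >= red_factor * historical_wait:
        return "Red"
    if current_wait >= yellow_factor * historical_wait:
        return "Yellow"
    return "Green"
```

Evaluated every 10 minutes against the rolling baseline, a database whose current wait is 40% above its historical norm would now show Yellow within one polling cycle rather than waiting up to an hour.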

This new status is propagated to Orion via the integration module.

 

And a whole lot more!

  • When you search for a SQL statement while creating an alert or a report, the search results include each SQL statement's total wait time for the last 7 days.
    SQLSearchResults.png
  • The improved instance filter now indicates whether the instance monitor is on or off.
    status indicator.png
  • New icons and images that align better with Orion
  • See the DPA 11.1 Release Notes for the rest!

 

What's Next?

Don't see what you are looking for here? Check out the What We Are Working On for DPA (Updated Nov 6, 2017) post to see what our dedicated team of database nerds and code jockeys is already looking at. If you don't see everything you've been wishing for there, add it to the Database Performance Analyzer Feature Requests.

 

 

Syslog

When PerfStack was initially released, a few key metrics were noticeably absent. Chief among them was Syslog data. In the Network Performance Monitor 12.2 release, and in all Orion product modules that include Orion Platform 2017.3, we rectified this injustice by bringing Syslog into the PerfStack fold. For any node sending Syslog data to Orion, you will find a new 'Syslog' metric tile under the 'Status, Events, Alerts' category. Simply drag that tile to the chart area, and voila! Syslog data charted over time, broken down by severity level. As with all metric tiles within PerfStack, this tile is dynamic and will only appear for nodes that have been configured to send Syslog data to the Orion server. If this tile does not appear for nodes you believe it should, verify that Orion has received Syslog data from the device using the web-based Syslog viewer within the Orion web interface.

 

 

Hovering your mouse over the charted area will show the total number of syslog messages received at the top of the legend for the time shown below the vertical marker, as well as a breakdown of each severity type contributing to that total beneath it.
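Conceptually, the chart behind this tooltip is just message counts bucketed by time slice and severity. A minimal sketch, assuming a hypothetical (timestamp, severity) message format rather than Orion's actual schema:

```python
from collections import Counter

def bucket_by_severity(messages, bucket_seconds=300):
    """Group (timestamp, severity) pairs into time buckets for charting.

    Returns {bucket_start: Counter({severity: count})}. The tuple-based
    message format is hypothetical, for illustration only.
    """
    buckets = {}
    for ts, severity in messages:
        start = ts - (ts % bucket_seconds)  # floor timestamp to bucket boundary
        buckets.setdefault(start, Counter())[severity] += 1
    return buckets
```

The tooltip total for a given bucket is then just the sum of that bucket's counter, with the per-severity breakdown coming from its individual entries.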

 

 

SNMP Traps

Not to be outdone, SNMP Trap data has also been added. Much like Syslog, SNMP Trap data is broken out by severity and is available as a dynamic metric tile for any node that is configured to send SNMP Trap data to Orion. The SNMP Trap metric tile can be found under the same 'Status, Events, Alerts' category as Syslog when a node is selected from within the metric palette. If this tile does not appear for nodes you have configured to send SNMP Trap data to Orion, verify that Orion is receiving those SNMP Traps from the device using the web-based SNMP Trap viewer, accessible from within the Orion web interface.

 

 

 

Zoom

When hovering your mouse over the charted area, you may notice your cursor has learned a new trick, changing to crosshairs. Holding down your mouse button while dragging across the charted area allows you to lasso a specific time period of interest, such as a sudden spike in resource usage. The selected area then becomes the focal point of your project: the chart area surrounding this time period is visually de-emphasized through color desaturation, while the colors remain bright and vibrant within the selected area. This allows you to focus your attention on the selection without visual distraction from its surroundings. The selection is applied across all charts within the PerfStack, making it easier to visually correlate what else occurred during the same time period.

 

 

Once a time period has been selected, new options appear next to the selected area. Clicking the [+] icon highlighted above, zooms into the selected time range where you can view high fidelity detailed data collected during that time period. Similarly, clicking the [-] icon allows you to zoom further out to spot trends, or return to your previous perspective after having zoomed in. Any time after having made a selection, clicking the [X] icon will cancel the selection and return focus to the entire viewable chart area.

 

 

Data Explorer

Great! So now you know how many Syslog messages and SNMP Traps were received from any of your devices, along with their respective severity. You can even zoom into a specific time frame and cross correlate this data against other metrics from the same or different entities monitored by Orion. That's some super powerful stuff! But you know what would make this data even more powerful?

 

What if you could actually view the full details of those Syslog messages or SNMP Traps from within PerfStack itself? That would assuredly accelerate the troubleshooting process and aid in reducing time-to-resolution, both of which are key tenets of PerfStack. Well, that's exactly what we've done!

 

To get started, select a time period in the chart area, the same as described above to zoom in or out of the chart. Next, click the top paper & magnifying glass icon from the options displayed to the right of the selected area. This action will cause the Data Explorer tab to be shown in the left pane, where the Metric Palette normally appears. Switching between the Data Explorer and the Metric Palette is simply a matter of clicking on the appropriate tab.

 

Within the Data Explorer, all Syslog or SNMP Trap data is listed in the chronological order it was received. Each line represents a single message, beginning with its severity and ending with the date and time the message was received. In between is a brief preview of the message body. To view the full message text, simply expand the row.

 

 

If your devices are sending a lot of Syslog or SNMP Traps to Orion, it may be difficult to sift through all the noise and focus on what's truly important, even if you select the tiniest window of time. Since this just so happens to be another key tenet of PerfStack, we added filtering and search to the Data Explorer. This allows you to do things such as show only Warning or Critical messages received during the selected time period, while filtering out the excessive noise generated by Informational and Debug messages.
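Filtering in the Data Explorer amounts to restricting messages by time window and selected severities. A sketch, using a hypothetical message model rather than Orion's actual schema:

```python
def filter_messages(messages, start, end, severities):
    """Return messages inside [start, end] whose severity is selected.

    `messages` is a list of dicts with "time", "severity", and "text"
    keys -- a hypothetical model for illustration only.
    """
    wanted = set(severities)
    return [m for m in messages
            if start <= m["time"] <= end and m["severity"] in wanted]
```

Passing `severities=["Warning", "Critical"]` reproduces the example above: only the actionable messages within the lassoed time window survive the filter.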

 

And BOOM! Just like that, the problem is found. If SolarWinds were a leading US-based office supply company, right now you'd probably feel compelled to blurt out 'That was easy.' The magic doesn't end there, either. The same functionality that's available for both Syslog and SNMP Traps can also be used to view the details of Orion Events. So if you aren't afforded the opportunity to configure devices to send Syslog or SNMP Traps to your Orion server, you can still give this new feature a whirl by adding Orion Events to your PerfStack and following the instructions outlined above.

 

 

Additional Metrics Support

In addition to Syslog and Traps, PerfStack in Orion Platform 2017.3 includes support for a variety of additional Orion product module metrics. Given the feedback we have received since the initial release of PerfStack, we know these will all be extremely welcome additions.

 

Universal Device Pollers

That's right, I said Universal Device Pollers! Unquestionably the single most requested PerfStack feature has been support for Universal Device Pollers. With this release, I'm proud to announce you can now add Universal Device Pollers to your PerfStack projects no differently than any other metric. Universal Device Pollers for both Nodes and Interfaces are supported, and they appear under their respective entity types. When you select an interface, for example, that has Universal Device Pollers assigned, the 'Group' names as defined within the Universal Device Poller Windows application will appear within PerfStack as new metric categories. In the example below, I have a Universal Device Poller Group I've defined in the Win32 application called 'Cisco Switch', which I've assigned to interface 'Fa0/1' on 'lab-transi-sw1'. Similarly, I have a Node-based Universal Device Poller called 'sonic Current CPU Util' which is assigned to 'stp-nsa2400'. Please note that for Universal Device Pollers to appear in PerfStack, they must be of type 'Rate' or 'Counter', and 'Keep Historical Data' must be enabled.

 

PerfStack Universal Device Pollers.png

Network Insight for F5 & ASA

This release of PerfStack also includes support for NPM's Network Insight for F5, and all new Network Insight for ASA. These metrics appear under their own distinct categories or node entities and are treated no differently than any other metric within PerfStack.

 

F5 ASA.png

 

Voice & Network Quality Manager

A party just isn't a party unless you invite your friends, so in this release we invited VNQM to join the fun by bringing metrics for IPSLA operations into PerfStack. IPSLA operations appear as their own separate entity type within PerfStack. If, however, you find it easier to search by source router instead, you can select that node entity and click the add related items button, and all IPSLA operations running on that router will appear in the entities list. Select the IPSLA operation you'd like to see more information about, and the list of all available metrics for that operation appears in the metric palette. From there you know the drill; just drag and drop them onto the chart area and voila!

 

Network Configuration Manager

Also joining the party is NCM, allowing Orion users for the first time ever to visually cross correlate configuration changes to the impact they have on the network. When combined with NTA you can easily see the effect your recent CBQoS policy change is having on the flow of traffic from the moment the change was made. PerfStack also makes it easy for you to not only determine exactly when a change was made, but by whom. Similar to Syslog, Traps, and Events, selecting a time period from the charted area and clicking the Data Explorer button reveals detailed information about the configuration change that occurred, such as the username of the individual who logged into the device and their IP address.

 

 

 

 

Alert Visualization

Visualization of alerts in the premiere release of PerfStack displayed only the total aggregate of all alerts against a given entity, as well as how long all alerts had been active against that object. While useful, you were unable to determine which alerts had triggered or how long each of them remained active. A lot of that important detail was obscured through aggregation, making this particular area of PerfStack ripe for improvement.

 

In this release of PerfStack we preserved the same total alert aggregate that was available in the first release, but extended the chart to show the names of the alerts that triggered, along with their individual durations. No longer are you left wondering which alert was triggered against the object, or fumbling through other areas of the Orion web interface to track down that information. You'll also know at a glance when the alert triggered and for how long, in a manner that allows you to visually recognize patterns of recurrence and correlate specific individual alerts against other metrics collected by Orion.

 

Export

Occasionally it's necessary to share your PerfStack findings with others who may not have access to Orion, or to archive those findings in a ticketing system for historical purposes beyond your defined retention period. For some, this information needs to be imported into other systems for chargeback, showback, billing, forensic analysis, or correlation with other non-Orion tools in the environment. Previously, your only recourse was to export data for each metric individually via the Custom Chart View, or write your own series of custom queries against the SQL database backend to obtain the raw values behind these charts. In this PerfStack release, however, those days are behind us.

 

After adding all metrics of interest to your PerfStack project, simply click the new 'Export' button located in the top menu. This exports your project's content to a Microsoft® Excel-friendly comma-separated value (CSV) file, which is then downloaded to your local machine via your browser.

 

Double click the file to open in Excel, or upload the file to Google™ Docs to open in Sheets. Inside you will find all the raw data which made up the charts in PerfStack. Each series of the same chart is represented as its own set of columns, complete with the entity and metric name, date and time the data was collected, and its value. If your PerfStack project contained multiple charts, the values for each chart are grouped together in the CSV file the same as they were visually laid out within PerfStack itself. Simply put, the detailed raw data for the second PerfStack chart is directly beneath the end of the series for the first chart data contained within the CSV file.
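The stacked layout described above (side-by-side columns per series, charts stacked vertically) can be sketched as follows. The exact column headings here are assumptions based on the description, not the exporter's real output:

```python
import csv
import io

def export_charts(charts):
    """Write a PerfStack-style CSV: one column pair per series, charts
    stacked vertically. Layout assumed from the post's description.

    `charts` is a list of dicts: {"series": {name: [(timestamp, value), ...]}}.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    for chart in charts:
        names = list(chart["series"])
        header = []
        for name in names:
            header += [f"{name} time", f"{name} value"]  # entity+metric per series
        writer.writerow(header)
        for row in zip(*(chart["series"][n] for n in names)):  # align series rows
            writer.writerow([x for pair in row for x in pair])
        writer.writerow([])  # blank row separates stacked charts
    return out.getvalue()
```

Reading such a file back for a third-party tool is the mirror image: split on blank rows to recover the per-chart blocks, then pair up the time/value columns.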

 

 

The logical layout of the CSV file makes it easy to visually recreate charts similar to what you saw in PerfStack, using the native graphing tools included in Excel or Sheets. The format of the raw data is also well suited for import into third-party reporting solutions like Domo or eazyBI. With the data in a logical layout and an open format, the possibilities are limitless.

 

 

 

Usability Improvements

 

Share

Not everyone who used PerfStack initially realized that whatever they created could be easily shared with others simply by copying the URL and sending it to them. Since this is one of PerfStack's most powerful capabilities, we felt it needed to be promoted to the top menu, alongside other important functions like saving and loading a PerfStack.

 

Just click this 'Share' button and the dynamic PerfStack URL is automatically copied to your clipboard and ready to be pasted into an email, instant message, helpdesk ticket, etc. and shared with others.

 

Getting to PerfStack in previous releases meant leaving what you were looking at and starting the troubleshooting process over from the beginning if you needed to correlate symptoms with their root cause. For example, if users are complaining about the performance of SharePoint, a logical starting point might be to go to the Node Details view of the SharePoint server in Orion. But what if you then wanted information from the hypervisor this virtual machine is running on, the storage array, or the SharePoint application being monitored by SAM? Well, you'd probably navigate to PerfStack, add the problem node, and then its relationships, before eventually plotting metrics in the chart area for correlation. This sounds tedious just explaining it, and we knew we could do better.

 

Within the 'Management' or 'Details' resource located on the details view of virtually any entity type supported by PerfStack, you will find a new 'Performance Analysis' link. When clicked, you will be taken directly to PerfStack, and the entity you came from will be pre-populated in the entity selector, along with any of its related items. In addition, the chart area will pre-populate with relevant metrics associated with that entity.

 

 

For example, if you were to click the 'Performance Analysis' link from the 'Node Details' view of your SharePoint server, you would be taken to PerfStack where the node, its volumes, interfaces, etc. are already listed in the entity list, and metrics such as Status, CPU/memory utilization, response time, alerts, and events are all pre-populated for you. These metrics are dynamic based upon the entity type you enter from, ensuring they are always relevant to what you're investigating.

 

Links To PerfStack.png

We didn't stop simply at having links into PerfStack from other views in Orion. We knew that there were occasions when users viewing data in PerfStack needed additional information found only on entity details views, such as MAC addresses, serial/model numbers, etc. With that in mind, we included direct links to the Details view of any entity shown within PerfStack's entity list. Simply hover your mouse over the entity name, the same as you would to add related items or to remove the entity from the list. There you will notice a new link icon, which when clicked opens a new browser tab that takes you to the details view for that entity.

 

Link to Details from PerfStack.png

 

Drag & Drop Entities

We all know you can drag and drop metrics onto PerfStack's chart area to visualize virtually any combination of metrics desirable, but what if I told you that you could drag and drop entire entities into the chart area? Would that blow your mind?  Well that's exactly what you can do now with PerfStack!

 

Utilizing the same logic derived from 'Links to PerfStack' referenced above, relevant and dynamic metrics associated with a given entity are pre-populated and charted when the entity is dragged into PerfStack's chart area. Simply click and hold your mouse on the drag handle that appears to the left of the entity name when hovering, then drag the entity into the chart area. This populates the chart area with the same metrics that would appear if you had entered PerfStack through the 'Performance Analysis' link on that entity's Details view, saving you precious time in the throes of troubleshooting.

Drag and Drop Entities.png

Full Screen Mode

With the initial release of PerfStack, we received a lot of feedback from customers wanting to add PerfStack to a wallboard in their NOC. The first issue these customers ran into was a bug where PerfStack did not respect the session timeout values defined for the user account. I'm happy to announce that this issue has been resolved, allowing you to have PerfStack running and updating indefinitely if so desired, without ever timing out your session.

 

Next on their wish list was the ability to declutter the UI of extraneous elements that were unnecessary for a non-interactive display, maximizing the viewable area for the data displayed in the charts and legend. Being the accommodating bunch that we are, we added a new button to the top right of PerfStack, just above the chart legend. When clicked, all UI elements except the chart and legend are removed, placing PerfStack into full screen mode. To exit full screen mode and return to normal mode, just click the button again or remove all metrics from the chart. Note that this button only appears when one or more metrics are plotted in the chart area.

 

Maximize.png

 

Sharing is great when you're collaborating on a specific issue with a group of individuals, but what if you, as the Orion administrator, want to create custom PerfStack dashboards to share with your users? This can be accomplished fairly easily with other custom Orion views, and PerfStack functions no differently.

 

To begin, create your custom PerfStack and include all metrics you'd like represented in the chart. When complete, save your changes and click the 'Share' button. You should now have the URL to your saved PerfStack in your copy/paste buffer. Next, go to [Settings -> All Settings -> Customize Menu Bars] and edit the menu bar for the user(s) you'd like to have access to your saved PerfStack. At the bottom of the page, click the 'Add' button, give your link a name, and paste the saved PerfStack URL into the URL field of the 'Edit Custom Menu Item' dialog window that appears. From there, click 'OK' to save your changes, then drag your newly created menu item from the 'Available Items' column to the 'Selected Items' column to add it to the menu bar.

 

When users who are assigned that menu bar log in, they will see a link to your custom saved PerfStack in their navigation. This allows Orion administrators to quickly share custom saved PerfStacks without emailing or instant messaging links to users. Similarly, you can also use the 'User Links' resource to provide your users with links to custom saved PerfStack dashboards.

 

 

This represents an incredible amount of awesome jam-packed into a single release, and I haven't even mentioned real-time polling yet! Let us know your thoughts on these PerfStack improvements in the comments section below. We'd love to hear your feedback!

SRM 6.5 was made available in the Customer Portal on September 13th! The release notes are a great place to get a broad overview of everything in this release. Exciting times, since SRM also joined the other Q3 releases from SolarWinds (NPM, NTA, NCM, UDT, IPAM, DPA, & VMAN). We've been attending shows and monitoring customer requests, and our theme for Q3 is expanding our already comprehensive list of device support! In fact, I would say that FLASH was our primary focus, sprinkled with a bit of HYBRID array models. We constantly measure various storage market sectors, speak with analysts, and rely on customer feedback with regard to storage architecture and growth. This research has led us to recognize that flash-type array solutions are a rapidly growing market sector, and we want to be there when you are looking to renew your storage environment, providing a path for you to continue using SolarWinds Storage Resource Monitor! Of course, there are some other worthy reasons to upgrade to SRM 6.5, like UI, installer, and PerfStack enhancements, along with continued additions to monitoring array hardware health!

 

Here's a summary of what was accomplished in 6.5:

Support for arrays:

  • IBM FlashSystem A9000/A9000R
  • IBM DS 8xxx
  • NetApp EF
  • NetApp AFF
  • EMC VMAX Flash Family

Hardware health monitoring for:

  • EMC VNX CLARiiON
  • EMC VNX Celerra/VNX Gateway
  • IBM SVC/V7000/V3700

 

Installer Enhancements:

  • Use the new SolarWinds Orion Installer to install and upgrade one or more Orion Platform products simultaneously in your environment. When installing new products into an existing Orion environment, the Orion Installer verifies compatibility between the product versions and notifies you if additional steps are needed.
  • Bottom Line – The installer installs or upgrades all products from a single screen. You do not need to download files for each product.

Read More about the Orion Installer

 

UI Enhancements:

  • Start searching for widgets in two clicks from any customizable page
  • Mark favorite widgets so you can quickly add them to new dashboards or pages
  • Drag and drop widgets directly onto pages or move widgets to new locations

Read more about Dashboards

SolarWinds NCM 7.7 became available for download in the Customer Portal on September 13th. As always, the release notes contain plenty of great information on the new features.  I’d like to dive deeper into our Network Insight for Cisco ASA, building on the great post by Chris O'Brien.

 

Network Insight for Cisco ASA

As Chris pointed out, this is our second installment in the Network Insight series, and the first that I’ve had the pleasure of being involved in. The initial Network Insight release brought together NPM and NCM to deeply manage and monitor F5 BIG-IP devices. NCM 7.5 delivered valuable capabilities including binary configuration support, F5 LTM and GTM configuration support, and new inventory support.

 

For this release, we focused on delivering a set of capabilities around monitoring and management of Cisco’s Adaptive Security Appliance, or Cisco ASA. For SolarWinds NPM, this includes specific features around:

  • Site to Site VPN
  • Remote Access VPN
  • Interfaces

 

For SolarWinds NCM, we focused on the following three areas:

  • Firmware Upgrade
  • Multi-contexts
  • And the most exciting, Access Control List Management

 

Firmware Upgrade Support

With SolarWinds NCM 7.7, we’ve continued to improve the Firmware Upgrade feature, adding support for upgrading the firmware for Cisco ASAs, both in single- and multi-context mode.

  • Multi-context – must be used if the device is in multi-context mode, even if there is only one context. Must be run from the admin context.
  • Single-context – must be used if the device is in single-context mode, or if the device doesn’t support contexts at all.

firmware upgrade for Cisco ASA

 

Multi-Context

In NCM 7.7, we can discover security contexts for Cisco ASAs and easily bring them under management. To take advantage of this, first discover the ASA admin context. NCM will automatically discover additional contexts and list them in the Contexts resource. To manage each context, simply click the “+” icon. Each additional context counts as a node; its configuration will be stored and managed separately.

Cisco ASA multi-context discovery and management

 

Access Control Lists (ACLs) Management

Saving the best for last, NCM 7.7 automatically discovers ACLs, which zones they are assigned to, and which interfaces are assigned to those zones. Using NCM, you can now ensure that your ACLs are doing what you expect them to do. Gone are the days of laboriously poring over each rule in an ACL in turn, hunting down object and object group definitions, and wondering whether a particular rule is being hit (and if not, why not?), whether something changed recently, and if so, what changed.

 

To see the list of ACLs for a particular ASA, mouse over the subviews panel and select “Access Lists.”

List of ACLs for a particular Cisco ASA

 

 

NCM tracks the history of ACLs on a particular ASA, including the date of the most recent version. If there are prior versions, they can be viewed via the expand caret.

 

History of a particular ACL on a Cisco ASA

 

And you can easily compare ACLs to prior versions, other ACLs on the same ASA, or even across ASAs.

Change which ACLs are compared for a Cisco ASA

 

Navigating into a particular ACL, you can see the rules of the ACL, with syntax highlighting.  You can filter rules by type, source, and destination. Each rule shows a count of hits, that is, how many times the firewall has seen traffic that matches a particular rule.

You can also drill into object groups, to see their definition including history.

ACL Detail view, including syntax highlighting, filtering, object group hierarchy traversal for Cisco ASA

 

Finally, with NCM 7.7, we’ve added Overlapping Rule Detection. Overlapping rules are classified along two dimensions: completeness of overlap and type of overlap.

With regards to completeness, we can have either partial or complete overlap, which should be self-explanatory. And with regards to type, we have the following:

  • Redundant: a rule earlier in the list overlaps this rule, and does the same action to the matched traffic.
  • Shadowed: a rule earlier in the list overlaps this rule, and does the opposite action.
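The redundant/shadowed classification can be sketched over a deliberately simplified rule model: single source and destination sets, with complete-overlap detection only. Real ASA ACL semantics also involve protocols, ports, and object groups, so this is an illustration of the idea, not NCM's algorithm:

```python
def classify_overlaps(rules):
    """Classify each rule against the rules earlier in the list.

    A rule is a dict: {"action": "permit" or "deny", "src": set, "dst": set}.
    An earlier rule completely overlaps a later one if its src/dst sets are
    supersets. Same action -> "redundant"; opposite action -> "shadowed".
    Simplified sketch: ignores ports, protocols, and partial overlap.
    """
    results = []
    for i, rule in enumerate(rules):
        verdict = None
        for earlier in rules[:i]:
            if earlier["src"] >= rule["src"] and earlier["dst"] >= rule["dst"]:
                verdict = ("redundant" if earlier["action"] == rule["action"]
                           else "shadowed")
                break  # first covering rule decides; later rules never match first
        results.append(verdict)
    return results
```

The "earlier in the list" ordering matters because firewalls evaluate ACLs top-down: a shadowed rule can never fire, and a redundant rule fires only in place of one that would have done the same thing anyway.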

 

Mousing over the Overlap indicator in the Access List view, you can see a summary of the issues with a particular ACL.

Cisco ASA ACL rule overlap, summary

Drilling into a specific ACL, you can see which rules are overlapping, and clicking on the "Show the details" link will provide even more detail.

Cisco ASA ACL rule overlap, detailed view

 

 

Conclusion

Why are you still reading this? Go get the latest version from the Customer Portal and install it today! And, while you’re waiting for the new installer to work its magic, feel free to click through the new functionality in our online demo. We’d love to hear your feedback, post away below!

A long time ago, in a network far, far away, a post was written called Why Should I Care About Release Candidates?  I say a long time ago because it was 2009, shortly before I joined SolarWinds and while feature phones still ruled the earth.   Since that post, SolarWinds has added thousands of customers, but as Product Managers, we work hard to stay close to you so we deliver real value in every release.  And we are very thankful for your willingness to share your time and thoughts about our products and company.

 

However, we wanted to reach out again, to customers both new and old, to tell you (or remind you) the secret of how to get what you really want in future releases of your favorite products. Ready for the secret?  The main secret to getting what you really want?  It's easy - PARTICIPATE!  Participate as much as you can!  Since we operate on the YAWL principle (You Asked, We Listened), the more we hear from you, the better.  We have a variety of ways for you to participate.

  • Review What We Are Working On: Our What We Are Working On posts detail the features we are currently working on across all our products.  It's a great place to start and see what is on the roadmap.  If you don't see what you want, check the feature request forum for your products and vote.
  • Vote on Feature Requests: Each product has a Feature Request forum (examples: NPM, SAM, DPA) where you can add and vote on your favorite features.  While product managers review all feature requests, voting helps us prioritize... and when we start working on a feature, we can reach out directly to everyone who voted.
  • Walk Through UX Mockups:  Early in the release process, our excellent UX team puts together functional mockups for you to review.  Feedback here has a big influence on what we implement and how it works. At times it may feel like a therapy session, but we really want to understand what you see, how you process the information and decide what to do next.
  • Install Beta Releases: Betas are early versions of the next release with some features complete and ready to try in your environment.  This is where the rubber meets the road and features come to life - and your feedback is crucial.  Remember - betas are fresh installs only, and not suitable for your production server.
  • Upgrade to Release Candidates:  As emphasized in the old post, Release Candidates are fully supported releases, meaning you can upgrade your production servers and start using new features right away. Our Support and Sales Engineering teams are fully trained in the new version.  Don't hesitate - our new upgrade process for Orion products is amazing.
  • Show Us Your Environment:  Our UX team also loves to do "Show Me" sessions, where we watch you use our products in your environment, and see how you solve problems.
  • Answer Surveys:  From time to time, you will receive requests to fill out surveys, here on Thwack and via email.  These help us understand broad trends of your business, environment, and feature needs.

 

Hopefully you are in an inspired mood, because here is your opportunity to participate: Take this one-page survey so we can reach you next time we are looking for volunteers.

REALITY CHECK: Will I really get EVERYTHING I want?

You may be thinking to yourself, "Will I really get everything I want?"  Sadly, no... we can't make all your dreams come true, because we have limited resources and no shortage of good ideas from you.  But this is why your participation is so important: it helps get the most valuable features to the top of the list.  And patience often pays off - check out this feature we implemented earlier this year from a request in 2012 - Silence Alerts While Still Monitoring.

 

Thank you for taking the time to read this, and hopefully we will be hearing from you!

 

The PM Team.

In continuation of the great Q3 release from SolarWinds (NPM, NTA, NCM, UDT, IPAM, DPA, & SRM), I am excited to announce the General Availability of Virtualization Manager (VMAN) 8.0.  Over time, Virtualization Manager has incrementally evolved from a stand-alone virtual appliance to providing advanced integration with the SolarWinds monitoring stack. VMAN on Orion allows for a single pane of glass for troubleshooting (AppStack), root cause analysis (PerfStack), and VMAN features found only on Orion (predictive recommendations).  In VMAN 8.0 we have taken this a step further by providing native VMAN functionality in Orion while removing the virtual appliance requirement.  Our goal with the release was to improve ease of use while strengthening VMAN's ability to find and resolve problems in the virtual infrastructure.  The VMAN 8.0 download can be found in the SolarWinds customer portal.

 

Native VMAN Polling on Orion

It is now possible to download and install VMAN much as you would SAM & NPM, without upgrading and maintaining the VMAN appliance.  If you are currently integrated, it is extremely simple to migrate your VMAN polling to Orion by going to Virtualization Settings > VMware Settings (or Hyper-V Settings).  Just select the vCenter or Hyper-V host to switch polling over and choose VMAN Orion Polling from the Polling Method drop-down menu.  In VMAN 8.0, you will notice three different polling methods.

  • Basic - This is the general polling performed by SAM & NPM and does not require a VMAN license, but it will require a SAM or NPM node license. ***Note - you can still use the basic polling method if you only own VMAN, since the VMAN license is allocated node licenses equal to the socket count in your VMAN license (e.g., a 64 socket license = 64 node licenses).
  • VMAN Appliance - This is the traditional VMAN polling which occurs from the virtual appliance and requires that the appliance is integrated with Orion to get full VMAN functionality in Orion.
  • VMAN Orion - Full VMAN polling that occurs from Orion and does not require the VMAN appliance but does require a VMAN license

 

 

In addition to the new polling method in VMAN 8.0 we have added the ability to scale out your VMAN polling using the Additional Polling Engine (APE) framework for free.  No additional polling engine license is required to run with VMAN infrastructure monitoring.

 

Batch Execution of Recommendation Actions

The time it takes to optimize your environment has been drastically reduced: with just a few clicks, you can select multiple recommendations and batch them to execute immediately or schedule them for a later time.

 

VMAN will automatically configure the correct order of operations in which to perform the actions based on your selections.

 

 

Management action based recommendation policies

 

New policies in Virtualization Manager allow for greater control and precision over the recommendations which optimize your environment.  Create granular policy exclusions based on recommended actions for CPU, Memory, VM & Storage Migrations.

The new policy becomes available by selecting Disallow Actions for Recommendations, then selecting the scope (VM, host, cluster, or datastore) to apply the policy against, and finally selecting the actions to exclude from recommendations for that scope.

  • Move VM - Actions based on VM movement/placement to a different host or datastore.
    • Move VM to a different host
    • Move VM to a different Datastore
  • Configuration - Modification of the VM memory or vCPU configuration
    • Change CPU Resources
    • Change Memory resources

 

 

PerfStack 2.0 - New Features & Improvements

With VMAN 8.0 we also get a new and improved version of PerfStack that continues to build on the tenets of SolarWinds to help you dig into problems faster and identify true root cause across different IT disciplines.

    • Links from VM & Host Details that populate the PerfStack palette
    • Zoom into PerfStack charts to view more detail for the selected time period
    • Data Explorer improvements
    • Alert visualization improvements
      • Each individual alert start/end time is now visualized separately in PerfStack
      • Existing aggregate alert visualization against an object is retained
    • Export PerfStack data to Excel
    • Easier sharing of PerfStack dashboards with the new ‘Share’ button
    • Real-time polling

You can get to PerfStack from within the VM details page in three different spots (we wanted to make sure you didn't miss it.)

  • Virtualization Manager Tools
  • Virtual Machine Details
  • Resource Utilization

 

 

View of the PerfStack palette pre-populated with metrics of the VM.

 

 

WEB INTERFACE IMPROVEMENTS

 

 

NEW INSTALLER UPGRADE EXPERIENCE

    • Install and upgrade one or more Orion Platform products simultaneously
    • Modern interface with a simplified design and intuitive workflow
    • Downloads and installs only what is needed
      • Reduces download size & accelerates installation

 

 

Windows Authentication & SSL Encryption for Orion Microsoft SQL Database connectivity

 

 

Documentation

Virtualization Manager (VMAN) - SolarWinds Worldwide, LLC. Help and Support

 

VMAN 8.0 Release Notes - SolarWinds Worldwide, LLC. Help and Support

NPM 12.2 was made available in the Customer Portal on September 13th!  The release notes are a great place to get a broad overview of everything in the release.  Here, I'd like to go into greater depth on  Network Insight for ASA including why we built it and how it works.  Knowing that should help you get the most out of the new tech!

 

Network Insight

We live in amazing times.  Every day new technologies are invented that change how we interact, how we build things, how we learn, how we live.  Many (most?) of these technologies are only possible because of the relatively new ability for endpoints to talk to each other over a network.  Networking is a key enabling technology today, like electricity was in the 1800s and 1900s, paving the way for a whole wave of new technologies to be built.  The better we build the networks, the more we enable this technological evolution.  That's why I believe in building great networks.

 

A great network does exactly one thing well: it connects endpoints.  The definition of "well" has evolved through the years, but essentially it means enabling two endpoints to talk in a way that is high performance, reliable, and secure.  It turns out this is not an easy thing to do, particularly at scale.  When I first started maintaining, and later building, networks, I discovered that monitoring was one of the most effective tools I could use to build better networks.  Monitoring tells you how the network is performing so you can improve it.  Monitoring tells you when things are heading south so you can get ahead of the problem.  Monitoring tells you if there is an outage so you can fix it, sometimes even before users notice.  Monitoring reassures you when there is not an outage so you can sleep at night.

 

Over the past two decades, I believe as a company and as an industry we have done a good job of building monitoring to cover routers, switches, and wireless gear.  That's great, but virtually every network today includes a sprinkling of firewalls, load balancers, and maybe some web proxies or WAN optimizers.  These devices are few in number, but absolutely critical.  They're not simple devices either.  Monitoring tools have not done a great job with these other devices.  The problem is that we mostly treat them like just another router or switch.  Sure, there are often a few token extra metrics like connection counts, but that doesn't really represent the device properly, does it?  The data that you need to understand the health and performance of a firewall or a load balancer is just not the same as the data you need for a switch.  This is a huge visibility gap.

 

Network Insight is designed to fill that gap by finally treating these other devices as first class citizens; acquiring and displaying exactly the right data set to understand the health and performance of these critical devices.

 

Network Insight for Cisco ASA

Network Insight for Cisco ASA is our second installment in the Network Insight story, following Network Insight for F5.  As you saw with F5, Network Insight for ASA takes a clean slate approach.  We asked ourselves (and many of you) questions like:

 

  • What role does this device play in connecting endpoints?
  • How can you measure the quality with which the device is performing that role?
  • What is the right way to visualize that data to make it easiest to understand?
  • What are the most common and severe problems that occur with this device?
  • Can we detect those problems?  Can we predict them?

 

With these learnings in hand, we built the best monitoring we could from the ground up.  Let's take a look at what we came up with.

 

Access Lists

 

ACLs define what traffic is allowed or blocked.  This is the most essential task of the firewall, yet monitoring tools generally don't provide any visibility into it.

 

The first thing we found here is there's no good way to get all of this data via SNMP.  We have to pull the config and analyze it.  For that reason, we handed this piece off to the NCM team to work on.  Check out more here: Network Configuration Manager 7.7 is now generally available!

 

 

Site to Site VPN

 

Site to site VPN tunnels are the next most important service that ASAs provide.  They are often used to connect offices to data centers, data centers to cloud providers, or one organization to a partner.

 

Yesterday, you could monitor these tunnels by testing connectivity to the other side of the tunnel, for example an ICMP monitor to a node that can only be reached through the tunnel.  Today, we poll the ASA itself via SNMP and API to show a complete picture including:

  • What tunnels are configured?
  • Are my tunnels up or down?
  • If a tunnel is up:
    • How long has the tunnel been up?
    • How much bandwidth is being used by the tunnel?
    • What protocols are securing the traffic transiting the tunnel?
  • If a tunnel is down:
    • How long has the tunnel been down?
    • What phase did the tunnel negotiation fail at?

 

 

 

This means we automatically detect and add VPN tunnels as they're configured or removed and constantly keep an eye on these very important logical connections.  I'll highlight a couple interesting things.

 

Favorites

 

We're introducing a simple new concept called favorites.  Marking a tunnel as a favorite by clicking the star on the right does two things.  First, you can filter and sort based on this attribute.  The page shows favorite tunnels first by default, until you change the sorting method.  Second, it promotes that tunnel's status to the summary screen.  We found that for most ASAs there were a couple of VPN tunnels that were wildly more important than all of the others.  Here at SolarWinds HQ for example, it's the tunnel to the primary data center.  At the primary data center, it's the tunnel to the secondary data center.  Favorites provide a super easy way to add extra focus to the tunnels that are so important that a big part of the story of the health and performance of the ASA is the health of the tunnels themselves.

 

 

Tunnel Status

What is the status of the tunnel?

 

Turns out this is a harder question to answer than it looks.  Tunnels are established on-demand.  If you just configured a tunnel, but have not sent any interesting traffic so the tunnel is not up, should we show it as down (red)?  That doesn't seem right.  What if the tunnel was up for 3 months, but interesting traffic stopped coming so the tunnel timed out and went back down, but is prepared to come back up as soon as interesting traffic is seen?  The tunnel is definitely "down", but should it be red?  Probably not!  We spent a lot of time thinking about this and talking to you guys to determine the logic that decides if an administrator considers a tunnel down, up, or something in between.  All of that logic is built into the statuses you see presented on this page.
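The up/dormant/down reasoning described above can be sketched roughly like this. The state names and the one-hour threshold are hypothetical illustrations, not NPM's actual logic:

```python
# Hypothetical sketch of on-demand tunnel status reasoning; NPM's real
# rules are more nuanced than this.
def tunnel_status(is_up, seconds_since_last_up):
    """Map raw tunnel state to an administrator-facing status."""
    if is_up:
        return "up"          # green: tunnel is established and passing traffic
    if seconds_since_last_up is None:
        return "unknown"     # never established; may simply be awaiting
                             # interesting traffic, so red seems wrong
    if seconds_since_last_up < 3600:
        return "dormant"     # recently timed out; likely to return on demand
    return "down"            # red: long outage, worth investigating

print(tunnel_status(False, 120))  # dormant
```

The point is that "not up" fans out into several administrator-meaningful states rather than a single red light.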

 

Phases

For years, my first troubleshooting step on a tunnel that was down was to review logs and find out what phase negotiation failed at.  This tells you what set of variables you need to review for matching against your peer.  I'm very pleased that this first data point is now right in the monitoring tool that identified the tunnel as down to start with.  I hope it helps you guys get your tunnels back up faster.

 

 

Remote Access VPN

 

When users connect to the office using a software VPN client on their laptop, Cisco calls that Remote Access VPN.  As with Network Insight for F5, we are careful here to use the same terms as the manufacturer so it's easy to understand what we're talking about.

 

Again, we have to use both SNMP and API to get all the data we need to answer the following questions:

  • Who's connected?
  • Who tried to connect in the past, and what was the result?
  • How long have they been connected?
  • How much data have they uploaded and downloaded?
  • What is their session history?

 

 

Again, I'll highlight a few things.

 

List View

One of the challenges is the sheer number of remote access connections.  We know we have not done a good enough job of dealing with very large lists, and our UI Framework team has been working on solving that.  This page is one of the first implementations of the new List View they created.  This list view gives you the tools to easily deal with very large lists.  The left side of the screen lets you filter on anything shown on the right.  The filters available are considerate of the data and values seen on the right, so we don't have useless filters.  You can stack several filters and remove them individually.  Finally, after filtering your list you can still sort and search through those filtered results to further hone your list.

 

 

You'll see this list view a lot more as time passes.

 

Interfaces

 

Whereas interfaces are the main story on a switch or router, they're an important secondary story on an ASA.  We rebuilt the interfaces view from the ground up based on the List View.  Along the way, we made sure we were building it for a firewall.

 

 

NAMEIF

As my fellow ASA administrators know, nameif is not a typo.  Nameif is the command you use to specify the name of an interface on an ASA.  A nameif must be configured for an interface to function, and from the moment you specify the nameif onward, every other part of the configuration references the interface by its nameif.  ACLs, NAT, you name it.  In other words, the identity of an interface on an ASA is its nameif (like CPLANE or OUTSIDE), not its physical name (like GigabitEthernet0/2).  Accordingly, that is the primary name shown here, with the physical interface name shown only if the interface isn't in use and doesn't have a nameif.
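For readers who don't live in ASA configs, a minimal interface stanza looks like this (the interface, name, and address here are examples, using documentation-range addressing):

```
interface GigabitEthernet0/2
 nameif OUTSIDE
 security-level 0
 ip address 203.0.113.2 255.255.255.252
```

From this point on, other configuration (for example, `access-group OUTSIDE_IN in interface OUTSIDE`) refers to the interface as OUTSIDE, not GigabitEthernet0/2.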

 

Access Lists

If you have NCM to pull access lists from configs, we will identify which access list is applied to each interface and provide a link to review the access list.  This is super convenient in practice.

 

Security Level

Security levels have some control over what traffic the ASA allows.  They also provide a quick indicator of how much the administrator trusts the network connected to a specific interface.  Kind of important things for a firewall.

 

Favorites

Again, we're using the simple favorites concept.  I expect a lot of ASAs to have the interface connected to the Internet favorited!

 

 

Platform

 

All of the things described above are technology services that are built on a platform.  The platform must be healthy for the services to have any chance of being healthy.  The platform sub-view helps you understand the health of the platform.

 

 

High Availability

While high availability is a feature of many platforms, it seems to be particularly popular on ASAs.  Additionally, it seems Administrators have to fiddle with it a lot.  Administrators have to failover to perform software upgrades, some choose to failover to change circuits, failover to upgrade hardware, failover for all sorts of reasons.  While I'm concerned we are all using failover so often, it is clear that NPM has to provide great coverage for H/A.

 

In speaking with lots of ASA administrators we found several different behaviors.  Some administrators were unaware of whether their ASAs were really ready for failover or not.  Some check manually every once in a while, but have had an active ASA go down only to discover failover could not occur.  Some expert administrators were checking failover status, but were also checking the quality of failover that would occur by verifying configuration synchronization and state synchronization.

 

Our H/A resources take the best practices we found expert administrators using manually, automate the monitoring of them, and present simple conclusions in the UI.  If everything is green, you get simple checks and a phrase explaining what is healthy.  If something goes wrong, you get a red X and a more verbose explanation.  For example, if the standby is ready but the config is not in sync, failover can occur but the behavior of the firewall may change.  Maybe your last ACL change was not copied to the standby, so it doesn't apply if there is a failover.  If the standby is ready but connection state information is not synced, failover can occur but all of your users will have to re-establish their connections.  Not good!
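A condensed sketch of that readiness reasoning, with hypothetical field names (this is not NPM's data model, just an illustration of how the three checks combine into conclusions):

```python
# Illustrative condensation of the H/A failover checks described above.
# Field names and message strings are made up for the example.
def failover_summary(standby_ready, config_synced, state_synced):
    """Turn three raw checks into administrator-facing conclusions."""
    findings = []
    if not standby_ready:
        findings.append("standby not ready: failover cannot occur")
    else:
        if not config_synced:
            findings.append("config out of sync: behavior may change after failover")
        if not state_synced:
            findings.append("connection state not synced: users must re-establish sessions")
    return findings or ["healthy: failover ready with config and state in sync"]

print(failover_summary(True, False, True))
```

Each finding maps naturally to an alert condition, which is why all of these states are alertable.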

 

Of course you can alert on all of these things.

 

Connection Counts

Firewalls store information about each connection that is actively flowing through them at a given moment.  Because of that, there is a limit to how many concurrent connections they can handle, and this is one of the primary values used to determine what size firewall you need to buy.  It's obvious then that it should also be a crucial part of how we understand the load of the device in addition to RAM and CPU, so we've included it here.

 

Aggregating connection failure rates is an interesting way to get an indicator that something is amiss.  Perhaps your firewall is blocking a DDOS or maybe a firewall rule change went awry.  Watching this one value can be a leading indicator of all sorts of specific problems.
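As a toy illustration of treating aggregate failure rate as a leading indicator, here is a sketch; the baseline window and threshold factor are invented for the example, not tuned values from the product:

```python
# Illustrative spike detector for connection failure rate, as described
# above.  Window size and threshold factor are arbitrary examples.
def failure_rate_alarm(samples, window=5, factor=3.0):
    """Flag the latest sample if its failure rate exceeds factor x baseline.

    `samples` is a list of (attempted, failed) counts per polling cycle;
    the first `window` samples establish the baseline rate.
    """
    rates = [failed / attempted if attempted else 0.0
             for attempted, failed in samples]
    baseline = sum(rates[:window]) / window
    current = rates[-1]
    return current > factor * max(baseline, 0.001)  # floor avoids a zero baseline

history = [(1000, 5), (1200, 6), (900, 4), (1100, 5), (1000, 6), (1050, 200)]
print(failure_rate_alarm(history))  # True: sudden spike in failures
```

A DDoS being blocked or a botched rule change would both show up as this kind of spike before the specific cause is known.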

 

Summary: Putting it all Together

If we've done our job, we're providing comprehensive coverage of the health and performance of an ASA on all of the sub-views.  Now, we pull all the information together and summarize it on the Summary page.

 

 

Details Widget

One of the things that really weighed down the Node Details page for most nodes was the Details resource.  This resource has historically been a catch all for lots of little bits of largely static data users have asked us to show on this page.  The problem is that it kept growing and eventually took up nearly half the page with data that actually wasn't that commonly needed.  Here we have rebuilt the resource to focus on the most important data, but with the additional data available within the "other details" drop down.  This also allowed us to move away from the archaic pattern of Name:value pairs in our UI.  Instead, we describe the device as your peer would.  You can see how the resource reads more like "this is <hostname>, the <context name> context on a <hardware model> running <software version>".

 

Also, did you know that what we called "resources" in the previous UI framework are called "widgets" in the new UI Framework?  There's your daily dose of useless trivia!

 

PerfStack Widget

Did you notice it?  The Load Summary and Bandwidth widgets on this page are powered by PerfStack charting.  Try clicking around on them.  It's oh so pleasant.  More to come on this later.

 

Favorites

The Bandwidth and Favorite Site-to-Site VPN widgets display information about the components you identified as your favorites on the other pages.  I think it's about time we recognized that all VPN tunnels and all interfaces are not equally important.  Some are so critical that their status alone is a big part of the answer to the question: how is the firewall running?  Favorites makes it easy to give them the attention they deserve.

 

Setup Network Insight for ASA

 

To get this visibility in your environment, jump on over to the customer portal to download the new version.  After upgrading your NPM instance, the new ASA monitoring should "just work," but here are the specifics just in case.

 

Already monitoring ASAs?

The new monitoring will start up automatically.  Give the new version a couple minutes to poll and jump over to Node Details for one of your ASAs.  You'll get a bunch of new information out of the box.  For complete coverage as seen in the screenshots above, you'll be prompted to edit the node, check the "Advanced ASA" monitoring check box, and enter CLI credentials.  Make sure to look at the sub-views (mouse over to the left)!

 

There is one caveat.  If you've assigned a custom view to your ASAs, we will not overwrite it!  Instead, you will have to manually change the view for your ASAs to our new view.

 

Adding a new ASA?

 

Simply "Add Node" and select the "Advanced ASA" monitoring check box on the last step to enter CLI credentials.  That's it.  Give it a few minutes and check out the Node Details page for that ASA.

 

Conclusion

 

That does it for now.  You can click through the functionality yourself in our online demo.  I'd love to hear your feedback once you have it running in your environment!


 

 

No need for lederhosen or giant mugs of beer to have a good time.  Just sign up for the SAM 6.5 beta.

 

Of course if you want an idea of what's in the beta, just refer to WHAT WE'RE WORKING ON BEYOND SAM 6.4

 

The beta is now open, so turn on some oompah music and get started here.

For those of you familiar with the Enterprise Operations Console, you may remember bshopp's previously written, similarly titled article, which explains the function and purpose of EOC.  That article was a big hit among users, so I have decided to write up something similar for our latest release, EOC 2.0, which is now available.

 

When discussing the Orion Suite of products with customers, we often get the question, “how does Orion scale?”  Customers these days may have very different reasons for asking this question or different ideas of what scale means to them.

 

The answer is that SolarWinds provides multiple methods for customers to choose from:

  1. Scaling horizontally or scaling up - If customers are scaling the number of elements within a single instance of Orion, SolarWinds provides Additional Polling Engines to quickly expand the overall capacity.
  2. Scaling out - Whether because of the way the client runs their business (such as an MSP), additional SolarWinds instances acquired through mergers or acquisitions, or any number of other reasons, customers may have what we refer to as a distributed model of deployment.  This means the client's environment consists of multiple separate Orion instances running throughout their infrastructure.  The Enterprise Operations Console is the perfect tool to roll up data from each distributed instance.

In this post, we are going to discuss the Enterprise Operations Console 2.0, or EOC for short.  Take the first graphic below: you have a worldwide network with teams responsible for managing their respective regions, so an Orion installation resides in each of North America, EMEA, and APAC. EOC's main functionality is to aggregate data from multiple Orion server installations and display that data in a similar fashion to the Orion Web Console. As an example, the global NOC/management team may require a comprehensive view of all entities in a single dashboard for a representation of status and current alerts, and need the ability to run reports across the worldwide environment.  The Enterprise Operations Console provides this visibility for you. Administrators even have the ability to create customized accounts and restrict what data each Orion EOC user is permitted to see. These restrictions can be set on an individual basis by customizing user settings and on a group basis by defining roles.

The EOC 2.0 web console has changed slightly from the prior version, as you can see in the screenshots below.  The previous version of EOC is on the left, and the latest version of EOC is on the right.


Below is a quick touch on some of the new features and improvements.  Stay tuned for more posts with additional details around these features.

  • Improved Data Collection: Live on-demand polling and immediate display of data from distributed Orion Instances. (Discussed more below)
  • Improved & Updated UI:  EOC 2.0 is now built on the same framework as other members of the Orion product suite.  This provides:
    • An improved UI, with familiar navigation, icons, and controls.
    • Access to Orion features such as High Availability including multi-subnet deployments (WAN/DR), and the ability to implement Additional Web Servers with EOC.
  • New Status Rollup: View status of any Orion entity from distributed Orion sites with the unique Enterprise Summary Tiles.
  • Improved Search Functions: Utilize Search integrated into the Navigation or utilize the Asset Explorer for pre-filtered searches within Enterprise Summary Tiles.
  • New Alert Options: Input notes and acknowledge alerts from EOC.
  • Global Reporting: Create and customize global reports with easy selection of desired sites.
  • New Map Options: Import Maps from Orion Instances to display status in EOC NOC view.  Use the WorldMap resource to display consolidated view across the distributed environment.
  • PerfStack™ in EOC: Perform advanced analysis of issues by pulling entity data from multiple sites into a single PerfStack™! 

Note:  There is no upgrade path from EOC 1.6.x to EOC 2.0.  You may run both versions in tandem if desired; however, EOC 2.0 will require a fresh install on a new server and a new database.

These changes were made based on feedback from you, our users, who were looking for an updated UI, more flexibility with the tool, and something up to the standards the rest of the Orion Platform has set with recent enhancements.

One of the common misconceptions about EOC is that it pulls all the data from each of your Orion servers into the EOC database.  In actuality, EOC pulls high-level information such as current status, alerts, events, and more.  Investigating further or drilling into elements within the EOC web console seamlessly redirects users behind the scenes to the local Orion instance where that entity is actually monitored.

Let's walk through an example.  Below you can see that I have a number of issues in my lab environment, and I will select a node entity that is in warning.  This will populate the Asset Explorer with a pre-filtered list, and drilling into the device will take me to the device details page on the local instance where that node is being monitored.  I can then continue my investigation or go right back to my EOC global view.

Now that we have gone through a high-level overview of what EOC is, let's dig deeper into how it works.  In versions of the Enterprise Operations Console prior to 2.0, EOC contacted remote Orion instances at regular polling intervals and collected all current statistics based on the most recent polls.  By default, every 5 minutes, large subsets of data were retrieved from the remote sites and stored in the EOC database.
The overview of the previous version can be reviewed here:  What is the Enterprise Operations Console (EOC) & how does it work?

The problem was that even with decreased polling intervals, we were never able to achieve real-time polling or real-time display of that data within EOC.  Very often, the data presented was well behind what was actually being detected at the local instance/site.  Delays in status and alert notification presented a serious problem for EOC users, so resolving this issue was critical.

EOC 2.0 now leverages a function called SWIS federation, which executes only the queries necessary as determined by the content of the page being viewed.  In essence, federated SWIS allows us to pull only the data we need, when we need it.  Depicted below is a more technical breakout of the first image above.  Each instance may contain different products, may be a different size, and may be geographically dispersed.

With EOC 2.0, remote Orion SWIS services register themselves with EOC's federated SWIS service, thereby exposing their data through a single SWIS interface.  When a user/client accesses the website and performs a function, a SWQL query is sent to SWIS on EOC.  There it is analyzed to determine which servers can satisfy the query.  SWIS then excludes any servers that are not necessary, forwards the query to the appropriate Orion SWIS instances, combines the results from each, and sends the aggregated results back to the client.  This process is completely transparent to the user and runs much more efficiently than constantly pulling copies of data from each instance into a database and then running additional queries to provide the results.  Federated SWIS performs all the heavy lifting, aggregating the data, and allows us to display it dramatically faster than before.

EOC 2.0 will be able to highlight alerts, events, and status from any of the following entities:

  • Applications
  • Components
  • Groups
  • Interfaces
  • IP SLA Operations
  • LUNs
  • NAS Volumes
  • Nodes
  • WPM Player locations
  • Pools
  • Ports
  • Providers
  • Storage Arrays
  • Transactions
  • Virtual Clusters
  • Virtual Datacenters
  • Virtual Hosts
  • Virtual Machines
  • vCenter Servers
  • Volumes
  • And more...
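If you want to poke at federated SWIS yourself, SWIS exposes a REST endpoint that accepts SWQL over HTTPS, and a query sent to EOC's federated endpoint is fanned out to the registered Orion instances and aggregated transparently. Below is a minimal sketch using curl; the host name and credentials are placeholders, and the port and URL path assume a default SWIS REST configuration:

```shell
#!/bin/bash
# Sketch: query a (hypothetical) EOC federated SWIS endpoint with SWQL.
# In Orion status codes, Status = 3 corresponds to "Warning".
SWQL="SELECT TOP 10 Caption, Status FROM Orion.Nodes WHERE Status = 3"
# Crude URL encoding: replace spaces with '+'
QUERY="$(printf '%s' "$SWQL" | sed 's/ /+/g')"
URL="https://eoc.example.local:17778/SolarWinds/InformationService/v3/Json/Query?query=${QUERY}"
echo "$URL"
# Uncomment to run against a real server (self-signed certs need -k):
# curl -k -u admin:password "$URL"
```

The same query works unchanged against a single Orion instance; federation simply means EOC answers it with the combined result set from every instance registered with it.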

 

Stay tuned for more posts highlighting the new features, functionality, and enhancements made to EOC in the Enterprise Operations Console forum.

We’re happy to announce the release of SolarWinds® Exchange Monitor, a standalone free tool that provides a view of your Microsoft® Exchange Server performance.

Exchange Monitor gives your team insight into metrics, services, and Database Availability Group (DAG) status. The tool supports Exchange Server 2013 and 2016.

 

Metrics monitored by Exchange Monitor:

  • Total Service IOPS (count)
  • Host Used CPU (%)
  • Host Used Disk (%)
  • Host Used Memory (%)
  • I/O Log Writes Average Latency (ms)
  • I/O Log Reads Average Latency (ms)
  • I/O Database Writes Average Latency (ms)
  • I/O Database Reads Average Latency (ms)
  • Outstanding RPC Requests (count)
  • RPC Requests Failed (%)

Services monitored by Exchange Monitor:

  • Active Directory Topology
  • DAG Management
  • Exchange Replication
  • Information Store
  • Mailbox Assistants
  • Mailbox Replication
  • Mailbox Transport Delivery
  • Mailbox Transport Submission
  • Search
  • Service Host
  • Transport
  • Unified Messaging

DAG info:

  • Databases (count)
  • Current Size (GB)
  • Total DB Copies (count)
  • Disk Free Space (GB)
  • List of servers belonging to the DAG and their health status

 

 

Exchange Monitor gives you the ability to add as many Exchange servers as you wish. Simply click the “Add Server” button and fill in the IP address or domain name and credentials. The only limitation when adding a new server is that you cannot add two servers from the same DAG. Monitoring multiple servers from the same DAG can be achieved only with Server & Application Monitor (SAM).

Once you add a server to Exchange Monitor, it loads the polling interval, metrics with thresholds, and services to monitor from the global settings. You can edit these settings locally, per server. On the server details page, click the COMMANDS button in the top-right corner and select “Manage Metrics”.

 

You can then fine-tune your thresholds for this server only. Here you can also turn off a metric if you do not wish to monitor it; when a metric is not monitored, you will not receive any notifications about it.

 

If an Exchange server belongs to a DAG, you’ll see a Database Availability Group (DAG) resource on the server details screen. The bottom part of this resource contains information about all servers belonging to the DAG and their health status. You can inspect the status of these servers by clicking the “Show All” button.

 

Hovering over these servers gives you additional information about the Content Index State and Copy Status of the databases assigned to each server.

 

 

 

What else does Exchange Monitor do?

  • Allows you to add as many Exchange servers as you wish (limited to one monitored server per DAG)
  • Creates a notification whenever a metric exceeds its threshold, a service enters a non-running state, or a DAG is unhealthy

       

 

  • Can write all notifications to the Event Log.
  • Allows you to set the polling interval (how frequently you want to receive data from the Exchange server) per server
  • Allows you to set thresholds, metrics to monitor, and services to monitor per server

    

 

 

For more detailed information about Exchange Monitor, please see the SolarWinds Exchange Monitor Quick Reference guide here on THWACK®: https://thwack.solarwinds.com/docs/DOC-191309

Download SolarWinds Exchange Monitor: http://www.solarwinds.com/exchange-monitor

Don't have time to read this article and just want to try out the example template?  You can download it here: Docker Basics for Linux

 

Overview

As I discuss cloud utilization with customers, one subject that often comes up is Docker.  Many customers are using cloud servers as Docker farms, or to supplement their on-premises Docker footprint.  OH!  There's that hybrid IT thing again.

 

Since some of our readers may be devops people without experience in SAM, I'm going to explain every step in detail.

 

We monitor applications in SAM with templates.  An application template is simply a collection of one or more component monitors. Today we are going to create a simple Docker template as an example of what you can do to monitor Docker.

 

To monitor Docker we are going to:

1.  Build a Docker Server

2. Use the Component Monitor Wizard to create component monitors for critical Docker processes and generate the base template.

3. Create a script Component Monitor to query Docker for critical metrics and insert that Component Monitor into the template from step 2.

 

Build a Docker Server (step 1 of 3)

First we need to load Docker on a target server.  I won't bore you with instructions for that.  If you need help installing, refer to Docker's own documentation at Docker Documentation.

 

The next step is to manage the node where we installed Docker with SAM (if it wasn't already managed).  If you're using Linux, I'd recommend installing the Linux agent when you set up the node.  Again, this is well documented in the SolarWinds documentation, starting here: Agent deployment.

 

Now that we've got a Linux node with the SolarWinds agent and Docker installed, we are ready to build a template to monitor it.

 

Start with a Process Monitor type component monitor (step 2 of 3)

In order to monitor an application, we have to decide what to look at on the target node.  I'm talking about basic availability monitoring: ensuring that the service/process is up.  We accomplish this with either a service or a process component monitor.  Using a service or process monitor is a best practice when creating a new template.

 

SAM offers Windows Service monitors along with Windows or Linux Process monitors.  In this example I'm targeting an Ubuntu server running Docker, so I'll use a Linux Process Monitor.

 

The easiest way to create a component monitor is using the Component Monitor Wizard.  Sometimes new users miss this option because it's located at Settings->All Settings->SAM Settings->Getting Started with SAM. 

 

 

In the wizard, I will choose Linux Process Monitor and click "Next"

 

Now we get to the reason it was so important to have an application node set up and defined in SAM before we started.  The next screen allows us to enter the IP address of the Linux server I set up earlier.  In this case, it's a 64-bit operating system, so be sure to change the 32-bit setting to 64-bit.  Once again, click "Next".

 

This is where the power of the wizard becomes evident.  You will see a list of the processes on the target server with empty checkboxes next to them.  Just check the processes associated with Docker (in this case dockerd and docker-containerd), then click "Next".

(For more about why Docker split the docker engine into two pieces, take a look at Docker containerd integration - Docker Blog .)
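If you aren't sure what the Docker processes are called on your distribution (the names have shifted between Docker releases; newer versions run "containerd" rather than "docker-containerd"), you can check from the shell first. A quick sketch:

```shell
# List the names of running processes and keep only the Docker-related ones.
# Depending on your Docker version you may see "dockerd" plus either
# "docker-containerd" or plain "containerd".
ps -e -o comm= | grep -E '^(dockerd|docker-containerd|containerd)$'
```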

 

Now our component monitors have been created.  You can optionally check the boxes to the left and test them now.  Since the wizard found these processes on the target node, the test should pass.  Just click "Next" again.

You can either create a new application template with these component monitors or add them to an existing one.  In this case we are going to create a new template called "Docker Basics for Linux".  It's a good idea to include the OS type in the name, since we might run Docker on Windows in our environment later and we will want to know that this particular template is for Linux.  Also, I included the term "Basics" in case I get ambitious and write a more complex template later.

 

Click "Next" to continue to a screen where we can assign the template to SAM nodes.  The node we used for developing the template should be checked by default.  Just click "Next" again if that's the only node we are monitoring for now.

 

The final step is to confirm the creation of the component monitors.  Click "OK, Create" and you're done.

If you go to the SAM Summary page, you'll see your template has been applied.  Don't worry if it shows an "unknown" state at first.  It can take a few minutes to do its initial polling.  It should turn green once discovered.

That's it.  You're now monitoring Docker availability.

 

If you wanted to monitor Docker on Windows, you could have used a Windows Process Monitor or a Windows Service Monitor, which both work similarly.  This provides the most basic information about whether Docker is even running on a target system.

 

Monitor Docker from the inside with a Script Component Monitor (Step 3 of 3)

For the next step, let's add some visibility into Docker itself.

Go to Settings->All Settings->SAM Settings->Manage Templates

You should see a list of all templates on your SAM system.  In the search box at the upper right corner, type "docker" and click "Search".  This should filter the list to only include the Docker template we created above.

Check the checkbox to the left of our template and click "Edit"

 

You should see the details of the template with the two component monitors we created earlier.

At this point, you can add a description and tags, or change the polling settings.  You can also change component monitors in the template.

Let's add a component monitor to show basic stats about Docker.  The easiest way to get those stats is to run the "docker info" command on the Linux server.

Wow!  That's a lot of information.  But how do we get it into SAM?

Not to worry, SAM lets us run scripts and can collect up to 10 statistic/message pairs from the script.

Just select "Add Component Monitor" on the screen above and click "Submit".

 

Then select "Linux/Unix Script Monitor" (you may have to advance to the 2nd page of results) and click "Add".

Now you should see a new component monitor on the list that is opened for editing.

Enter a description and choose the authentication method and credentials. In this case I chose to inherit credentials from the node.  The script is run over SSH, so port 22 must be open in any firewall configuration.

 

I entered a working directory of "/tmp" for any disk I/O required by SAM.

 

I also will rename the component monitor to something more meaningful.

 

The "command line" field will be set up for a perl script by default.  We are using a shell script so we will delete the "perl" part and replace it with "bash".

Next, click "Edit Script" to enter the actual script commands.

 

Insert a statistic into SAM with a bash script

In order to use a script with SAM, we need to format the data in a very specific way.  SAM expects the only output from a script to consist of key/value pairs, separated by a ":", each of which is either a statistic or a message.  The key for a statistic consists of the keyword "Statistic", followed by a period and a variable name, then the ":" separator.

For example, if we were trying to capture the number of containers, we would use Statistic.container:1, and for a description we would use Message.container:Containers.  This allows SAM to display a value of 1 with a heading of "Containers"; the variable "container" ties the two together.  That's a brief explanation of getting statistics into SAM.  I could write an entire article on script component monitors alone.  Oh, look!  Someone already did: SAM Script Component Monitors - Everything you need to know

 

This would be pretty simple if we only wanted the single "Containers" line from the "docker info" command.  We could just pipe the output of "docker info" through "grep" to find the line that contains "Containers", and then pipe that line through "sed" to substitute our special identifying text for the heading "Containers: ".  To see how this works, type this on the Linux command line:

 

 

docker info | grep "Containers: "| sed s/"Containers: "/"Message.container:Containers \nStatistic.container:"/

 

 

Which would output:

 

Message.container:Containers

Statistic.container:1

 

Show more than one statistic

However, if we want to grab more than one line from "docker info", it gets a bit more complicated.

First, to avoid having to grab multiple lines, I decided to have Docker send the output in JSON format.  This can be done with the format option:

 

docker info --format '{{json .}}'

 

which returned:


 

Now we have the output in a single, JSON-formatted line. 
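For reference, the start of that line looks roughly like the following (an abbreviated, illustrative sketch with made-up values; real output carries many more fields):

```
{"ID":"XXXX:XXXX:XXXX","Containers":1,"ContainersRunning":1,"ContainersPaused":0,"ContainersStopped":0,"Images":3,"Driver":"overlay2", ...}
```

The comma-separated positions matter here: the running/paused/stopped counts and the image count land in the 3rd through 6th comma-delimited fields.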

 

Now to create a poor man's JSON parser.  I simply used awk to grab the data I was interested in.  I didn't use a real JSON parser like "jq" because I didn't want the burden of ensuring that the tool was installed on the target system. 

 

Here is the script that I used:

 

#!/bin/bash
# Capture the whole "docker info" output as a single JSON-formatted line
JSON="$(/usr/bin/docker info --format '{{json .}}')"
# Split on commas to isolate each object, then split that object on ":"
# to print the key as a Message and the value as a Statistic
echo $JSON| awk -F ','  '{print $3}' | awk -F ':'  '{print "Message.running:"$1,"\nStatistic.running: "$2}'
echo $JSON| awk -F ','  '{print $4}' | awk -F ':'  '{print "Message.paused:"$1,"\nStatistic.paused: "$2}'
echo $JSON| awk -F ','  '{print $5}' | awk -F ':'  '{print "Message.stopped:"$1,"\nStatistic.stopped: "$2}'
echo $JSON| awk -F ','  '{print $6}' | awk -F ':'  '{print "Message.images:"$1,"\nStatistic.images: "$2}'

This grabs the 3rd, 4th, 5th, and 6th objects in the JSON and plugs them into SAM as a message and statistic, respectively.  At this point I have to admit that there is some risk that Docker could change the order of this output and break my positional parsing, while not affecting someone using a real JSON parser to search for the proper key/value pair.  You can test the script from the Linux command line to be sure that it works.
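If you would rather not depend on field positions, here is an alternative sketch (still avoiding jq) that looks each numeric value up by its JSON key name. The key names below (ContainersRunning, ContainersPaused, ContainersStopped, Images) are what I'd expect from recent Docker releases; verify them against your own "docker info" output:

```shell
#!/bin/bash
# Key-based variant: find each field by name rather than by comma
# position, so a reshuffled "docker info" JSON doesn't break the parsing.
JSON="$(/usr/bin/docker info --format '{{json .}}')"
for key in ContainersRunning ContainersPaused ContainersStopped Images; do
  # Match "<key>":<digits>, then keep everything after the ":"
  value="$(printf '%s' "$JSON" | grep -o "\"$key\":[0-9]*" | head -1 | cut -d: -f2)"
  echo "Message.$key:$key"
  echo "Statistic.$key:$value"
done
```

This keeps the same Statistic./Message. output contract SAM expects, just with the JSON key reused as the variable name.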

 

Now that we've got a working script on the Linux server, we can insert it into the SAM Linux script component monitor.

Here's a screenshot of the script being put into SAM.

 

A couple of things to notice in this screenshot:

  • I couldn't run the "docker info" command as a normal user; I had to use "sudo" to run it as the super-user.  This required me to modify my "sudoers" file to add the "NOPASSWD:ALL" option for the user whose credentials I'm using.  This change allows SAM to run a privileged command without being prompted again for the password.  More on that here: How To Monitor Linux With SSH And Sudo
  • I used a fully qualified path for the docker executable.  This is to ensure that I don't have issues due to the user's PATH statement.

 

Here are some screenshots of the monitor as I spun up some images and containers on the target system.


That's it!  You're monitoring Docker.

 

For those of you running Docker on Windows: most of the "docker info" output is similar, if not the same, and the script component monitor could be written in PowerShell.

The Center for Internet Security (CIS) provides a comprehensive security framework, The CIS Critical Security Controls (CSC) for Effective Cyber Defense, which gives organizations of any size a set of clearly defined controls to reduce their risk of cyberattack and improve their IT security posture. The framework consists of 20 controls; however, according to CIS, implementing just the first five provides an effective defense against 85% of the most common cyberattacks. CIS provides guidance on how to implement the controls and which tools to use, reducing the burden on security teams, who would otherwise have to spend time deciphering the meaning and objective of each critical security control.

 

SolarWinds offers several tools that provide the capabilities to implement many of the CIS Controls. In this post, I'm going to break down each Critical Security Control and discuss how SolarWinds® products can assist.

 

Critical Security Control 1: Inventory of Authorized and Unauthorized Devices

Actively manage (inventory, track, and correct) all hardware devices on the network so that only authorized devices are given access, and unauthorized and unmanaged devices are found and prevented from gaining access.

Asset Discovery is an important step in identifying any unauthorized and unprotected hardware being attached to your network. Unauthorized devices undoubtedly pose risks and must be identified and removed as quickly as possible.

 

    

SolarWinds User Device Tracker (UDT) enables you to detect unauthorized devices on both your wired and wireless networks. Information such as MAC address, IP address, and host name can be used to create blacklists and watch lists. UDT also provides the ability to disable the switch port used by a device, helping to ensure that access is removed.

 

Critical Security Control 2: Inventory of Authorized and Unauthorized Software

Actively manage (inventory, track, and correct) all software on the network so that only authorized software is installed and can execute, and that unauthorized and unmanaged software is found and prevented from installation or execution.

As the saying goes, you don’t know what you don’t know. Making sure that software on your network is up to date is essential to preventing attacks on known vulnerabilities, and it’s very difficult to keep software up to date if you don’t know what software is running out there. 

 

 

SolarWinds Patch Manager can create an inventory of all software installed across your Microsoft® Windows® servers and workstations. Inventory scans can be run ad hoc or on a scheduled basis, with software inventory reports scheduled accordingly. Patch Manager can also go a step further and uninstall unauthorized software remotely. CSC2 also mentions preventing the execution of unauthorized software. SolarWinds Log and Event Manager (LEM) can be leveraged to monitor for any unauthorized processes and services launching, and then block them in real time.

 

 

Critical Security Control 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers

Establish, implement, and actively manage (track, report on, and correct) the security configuration of laptops, servers, and workstations using a rigorous configuration management and change control process to prevent attackers from exploiting vulnerable services and settings.

Attackers prey on vulnerable configurations. Identifying vulnerabilities and making the necessary adjustments helps prevent attackers from successfully exploiting them. Change management is critical to helping ensure that any configuration changes made to devices do not negatively impact their security.

 

 

A lack of awareness of access and changes to important system files, folders, and registry keys can threaten device security. SolarWinds LEM includes File Integrity Monitoring, which watches for any alterations to critical system files and registry keys that may result in insecure configurations. LEM will notify you immediately of any changes, including permission changes.

 

CSC 4: Continuous Vulnerability Assessment and Remediation

Continuously acquire, assess, and take action on new information in order to identify vulnerabilities, remediate, and minimize the window of opportunity for attackers.

 

 

 

To help remediate vulnerabilities, we first need to identify those that already exist on your network. SolarWinds Risk Intelligence includes host-based vulnerability scanning capabilities and leverages the CVSS database to uncover the latest threats. If vulnerabilities are identified as a result of outdated software and missing OS updates, SolarWinds Patch Manager can be used to apply those updates and remediate the vulnerabilities. If you have a vulnerability scanner such as Nessus®, Rapid7®, or Qualys®, LEM can parse event logs from these sources to alert on detected vulnerabilities and correlate activity. SolarWinds Network Configuration Manager can help identify risks to network security and reliability by detecting potential vulnerabilities in Cisco ASA® and IOS®-based devices via integration with the National Vulnerability Database. You can even update the firmware on IOS-based devices to remediate known vulnerabilities.

 

CSC 5: Controlled Use of Administrative Privileges

The processes and tools used to track/control/prevent/correct the use, assignment, and configuration of administrative privileges on computers, networks, and applications.

Administrative privileges are very powerful and can cause grave damage in the wrong hands; administrative access is the Holy Grail for any attacker. As the control states, the use of administrative privileges needs to be tracked, controlled, and prevented. A SIEM tool such as SolarWinds LEM can and should be used to monitor privileged account usage. This can include monitoring authentication attempts, account lockouts, password changes, file access/changes, and any other actions performed by administrative accounts. SIEM tools can also be used to monitor for new administrative account creation and for existing accounts being granted escalated privileges. LEM includes real-time filters, correlation rules, and reports to assist with the monitoring of administrative privileges.

 

 

CSC 6: Maintenance, Monitoring, and Analysis of Audit Logs

Collect, manage, and analyze audit logs of events that could help detect, understand, or recover from an attack.

As you've probably guessed by the title, this one has Security Information and Event Management (SIEM) written all over it. Collecting and analyzing your audit logs from all the devices on your network can greatly reduce your MTTD (mean time to detection) when an internal or external attack is taking place. Collecting logs is only one part of the equation. Analyzing and correlating event logs can help to identify any suspicious patterns of behavior and alert/respond accordingly. If an attack takes place, your audit logs are like an evidence room. They allow you to put the pieces of the puzzle together, understand how the attack took place, and remediate appropriately. SolarWinds LEM is a powerful SIEM tool that includes such features as log normalization, correlation, active response, reporting, and more.

 

 

CSC 7: Email and Web Browser Protections

Minimize the attack surface and the opportunities for attackers to manipulate human behavior through their interaction with web browsers and email systems.

According to a recent study from Barracuda®, 76% of ransomware is distributed via email. Web browsers are also an extremely popular attack vector, with threats ranging from scripting languages such as ActiveX® and JavaScript®, to unauthorized plug-ins, vulnerable out-of-date browsers, and malicious URL requests. CSC7 focuses on limiting the use of unauthorized browsers, email clients, plugins, and scripting languages, and on monitoring URL requests.

SolarWinds Patch Manager can identify and uninstall any unauthorized browsers or email clients installed on servers and workstations. For authorized browsers and email clients such as Google® Chrome®, Mozilla® Firefox®, Internet Explorer®, Microsoft® Outlook®, and Mozilla® Thunderbird®, Patch Manager can help ensure that they are up to date. LEM can take it a step further and block any unauthorized browsers and email clients from launching, thanks to its "kill process" active response. LEM can also collect logs from various proxy and content-filtering appliances to monitor URL requests, which also helps validate any blocked URL requests.

 

CSC 8: Malware Defenses

Control the installation, spread, and execution of malicious code at multiple points in the enterprise, while optimizing the use of automation to enable rapid updating of defense, data gathering, and corrective action.

LEM can integrate with a wide range of antivirus and UTM appliances to monitor for malware detection and respond accordingly. LEM also provides threat feed integration to monitor for communication with known bad actors associated with malware and other malicious activity. Control 8.3 involves limiting the use of external devices such as USB thumb drives and hard drives. LEM includes USB Defender® technology, which monitors USB storage device usage and can detach any unauthorized device.

 

 

 

CSC 9: Limitation and Control of Network Ports, Protocols, and Services

Manage (track/control/correct) the ongoing operational use of ports, protocols, and services on networked devices in order to minimize windows of vulnerability available to attackers.

Attackers are constantly scanning for open ports, vulnerable services, and protocols in use. The principle of least privilege should be applied to ports, protocols, and services: if there isn't a business need for them, they should be disabled. When people talk about ports, they generally think of checking perimeter devices such as firewalls, but internal devices such as servers should also be taken into consideration when tracking open ports, enabled protocols, and so on.

 

 

SolarWinds provides a free tool that can scan available IP addresses and their corresponding TCP and UDP ports in order to identify any potential vulnerabilities. The tool is aptly named SolarWinds Port Scanner. Network Configuration Manager can be used to report on network device configuration to identify any vulnerable or unused ports, protocols and services running on your network devices. Netflow Traffic Analyzer (NTA) can also be used to monitor flow data in order to identify traffic flowing across an individual port or a range of ports. NTA also identifies any unusual protocols and the volume of traffic utilizing those protocols. Finally, LEM can monitor for any unauthorized services launching on your servers/workstations, as well as monitoring for traffic flowing on specific ports based on syslog from your firewalls.

 

CSC 10: Data Recovery Capability

The processes and tools used to properly back up critical information with a proven methodology for timely recovery of it.

Currently, ransomware attacks take place every 40 seconds, which makes data backup and recovery capabilities incredibly critical. CSC10 involves ensuring backups take place on at least a weekly basis, and more frequently for sensitive data. Some of the controls in this category also include testing backup media and restoration processes on a regular basis, as well as ensuring backups are protected via physical security or encryption. SolarWinds MSP Backup & Recovery can assist with this control. Server & Application Monitor can validate that backup jobs are successful, thanks to application monitors for solutions such as Veeam® Backup and Symantec® Backup Exec®.

 

CSC 11: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches

Establish, implement, and actively manage (track, report on, correct) the security configuration of network infrastructure devices using a rigorous configuration management and change control process in order to prevent attackers from exploiting vulnerable services and settings.

This critical control is similar to CSC3, which focuses on secure configurations for servers, workstations, laptops, and applications; CSC11 focuses on the configuration of network devices, such as firewalls, routers, and switches. Network devices typically ship with default configurations, including default usernames, passwords, SNMP strings, and open ports. All of these should be amended to help ensure that attackers cannot take advantage of default accounts and configurations. Device configuration should also be compared against secure baselines for each device type. CSC11 also recommends that an automated network configuration management and change control system be in place. Enter NCM.

 

 

NCM is packed with features to assist with CSC11, including real-time change detection, a configuration change approval system, Cisco IOS® firmware updates, configuration baseline comparisons, bulk configuration changes, DISA STIG reports, and more.

 

CSC 12: Boundary Defense

Detect/prevent/correct the flow of information transferring across networks of different trust levels with a focus on security-damaging data.

There is no silver bullet when it comes to boundary defense to detect and prevent attacks and malicious behavior. Aside from firewalls, technologies such as IDS/IPS, SIEM, Netflow and web content filtering can be used to monitor traffic at the boundary to identify any suspicious behavior. SolarWinds LEM can ingest log data from sources such as IDS/IPS, firewalls, proxies, and routers to identify any unusual patterns, including port scans, ping sweeps, and more. NetFlow Traffic Analyzer can also be used to monitor both ingress and egress traffic to identify anomalous activity.

 

CSC 13: Data Protection

The processes and tools used to prevent data exfiltration, mitigate the effects of exfiltrated data, and ensure the privacy and integrity of sensitive information.

 

Data is one of every organization's most critical assets and needs to be protected accordingly. Data exfiltration is one of the most common objectives of attackers, so controls need to be in place to prevent and detect it. Data is everywhere in organizations, so one of the first steps in protecting sensitive data is identifying the data that needs to be protected and where it resides.

 

 

SolarWinds Risk Intelligence (RI) is a product that performs a scan to discover personally identifiable information and other sensitive data across your systems and points out potential vulnerabilities that could lead to a data breach. The reports from RI can be helpful in providing evidence of due diligence when it comes to the storage and security of PII. Data Loss Prevention and SIEM tools can also assist with CSC13. LEM includes File Integrity Monitoring and USB Defender, which can monitor for data exfiltration via file copies to a USB drive. LEM can automatically detach the USB device if file copies are detected, or even detach it as soon as it's inserted into the machine. LEM can also audit URL requests to known file-hosting/transfer and webmail sites, which may be used to exfiltrate sensitive data.

 

CSC 14: Controlled Access Based on the Need to Know

The processes and tools used to track/control/prevent/correct secure access to critical assets (e.g., information, resources, systems) according to the formal determination of which persons, computers, and applications have a need and right to access these critical assets based on an approved classification.

As per the control description, 'job requirements should be created for each user group to determine what information the group needs access to in order to perform its jobs. Based on the requirements, access should only be given to the segments or servers that are needed for each job function.' Basically, give users the level of access their role requires, but no more. Some of the controls in this section involve network segmentation and encrypting data in transit over less-trusted networks and at rest. CIS also recommends enforcing detailed logging for access to data. LEM can ingest these logs to monitor for authentication events and access to sensitive information. File Integrity Monitoring includes the ability to monitor for inappropriate file access, including modifications to permissions. LEM can also monitor Active Directory® logs for any privileged escalations to groups such as Domain Admins.

 

 

CSC 15: Wireless Access Control

The processes and tools used to track/control/prevent/correct the secure use of wireless local area networks (WLANs), access points, and wireless client systems.

Wireless connectivity has become the norm for many organizations, and just like on wired networks, access needs to be controlled. Some of the sub-controls in this section involve creating VLANs for BYOD/untrusted wireless devices, helping to ensure that wireless traffic leverages WPA2 and AES, and identifying rogue wireless devices and access points.

 

 

Network Performance Monitor and User Device Tracker can be used to identify rogue access points and unauthorized wireless devices connected to your WLAN. brad.hale has a great blog post on the topic of monitoring rogue access points.

 

CSC 16: Account Monitoring and Control

Actively manage the life cycle of system and application accounts (their creation, use, dormancy, and deletion) in order to minimize opportunities for attackers to leverage them.

Account management, monitoring, and control is vital to making sure that accounts are being used for their intended purposes and not for malicious intent. Attackers tend to prefer leveraging existing, legitimate accounts rather than trying to discover vulnerabilities to exploit; it saves them a lot of time and effort. Beyond having clearly defined account management policies and procedures, having a SIEM in place, like LEM, can go a long way toward detecting potentially compromised or abused accounts.

 

LEM includes a wide range of out-of-the-box content to assist you with Account Monitoring and Control, including filters, rules and reports. You can easily monitor for events such as:

 

  • Account creation
  • Account lockout
  • Account expiration (especially important when an employee leaves the company)
  • Escalated privileges
  • Password changes
  • Successful and failed authentication

 

Active Response is also included, which can respond to these events with actions such as automatically disabling an account, removing it from a group, or logging the user off.
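To make the above a little more concrete, here's a rough sketch of what a few of these account events look like in the raw Windows Security log, which is one of the sources a SIEM like LEM collects and correlates centrally. This assumes you're on the monitored Windows box itself, in an elevated PowerShell prompt, with Security log auditing enabled; the event IDs are the standard Windows ones.

```powershell
# Sketch only: spot-check local account events that a SIEM such as LEM would
# collect and correlate centrally. Requires an elevated prompt.
# 4720 = account created, 4740 = account lockout, 4728 = member added to a
# security-enabled global group (a common privilege-escalation indicator)
Get-WinEvent -FilterHashtable @{
        LogName   = 'Security'
        Id        = 4720, 4740, 4728
        StartTime = (Get-Date).AddDays(-1)
    } -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, @{ Name = 'Summary'; Expression = { ($_.Message -split "`r`n")[0] } } |
    Format-Table -AutoSize
```

Of course, this only shows you one machine after the fact; the point of a SIEM is to gather these events from everywhere, in near real time, and alert or respond on them.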

 

CSC 17: Security Skills Assessment and Appropriate Training to Fill Gaps

For all functional roles in the organization (prioritizing those mission-critical to the business and its security), identify the specific knowledge, skills, and abilities needed to support defense of the enterprise; develop and execute an integrated plan to assess, identify gaps, and remediate through policy, organizational planning, training, and awareness programs.

You can have all the technology, processes, procedures, and governance in the world, but your IT security is only as good as its weakest link - and that is people. As Dez always says, "Security is not an IT problem, it's everyone's problem." A security awareness program should be in place in every organization, regardless of its size. Users need to be educated on the threats they face every day, for example social engineering, phishing attacks, and malicious attachments. If users are equipped with this knowledge and are aware of threats and risks, they are far more likely to identify, prevent, and report attacks. Some of the controls included in CSC 17 include performing a gap analysis of users' IT security awareness, delivering training (preferably from senior staff), implementing a security awareness program, and validating and improving awareness levels via periodic tests. Unfortunately, SolarWinds doesn't provide any solutions that can train your users for you, but know that we would if we could!

 

CSC 18: Application Software Security

Manage the security life cycle of all in-house developed and acquired software in order to prevent, detect, and correct security weaknesses.

Attackers are constantly on the lookout for vulnerabilities to exploit. Security practices and processes must be in place to identify and remediate vulnerabilities in your environment. The list of possible attacks that can capitalize on vulnerabilities is endless, including buffer overflows, SQL injection, cross-site scripting, and many more. For in-house developed applications, security shouldn't be an afterthought that is simply bolted on at the end; it needs to be considered at every stage of the SDLC. Some of the sub-controls within CSC 18 address this, including error checking for in-house apps, testing for weaknesses, and ensuring that development artifacts are not included in production code.

 

CSC 19: Incident Response and Management

Protect the organization’s information, as well as its reputation, by developing and implementing an incident response infrastructure (e.g., plans, defined roles, training, communications, management oversight) for quickly discovering an attack and then effectively containing the damage, eradicating the attacker’s presence, and restoring the integrity of the network and systems.

An incident has been identified. Now what? CSC 19 focuses on people and process rather than technical controls. This critical control involves ensuring that written incident response procedures are in place and that IT staff are aware of their duties and responsibilities when an incident is detected. It's all well and good to have technical controls such as SIEM, IDS/IPS, and NetFlow in place, but they need to be backed up with an incident response plan for when an incident is detected.

 

CSC 20: Penetration Tests and Red Team Exercises

Test the overall strength of an organization’s defenses (the technology, the processes, and the people) by simulating the objectives and actions of an attacker.

Now that you've implemented the previous 19 Critical Security Controls, it's time to test them. Testing should only take place once your defensive mechanisms are in place, and it needs to be an ongoing effort, not just a one-off, because environments and the threat landscape are constantly changing. Some of the controls within CSC 20 include vulnerability scanning as the starting point to guide and focus penetration testing, conducting both internal and external penetration tests, and documenting results.

 

I hope at this point you have an understanding of each of the Critical Security Controls and some of the ways in which SolarWinds tools can assist. While it may seem like a daunting exercise to implement all 20 controls, it's worth casting your mind back to the start of this post, where I mentioned that implementing even the first five critical controls provides an effective defense against 85% of cyberattacks.

 

I hope that you've found this post helpful. I look forward to hearing about your experiences and thoughts on the CIS CSCs in the comments.

AppStack is a very useful feature for troubleshooting application issues, which you can find by clicking Environment under the Home menu in the Orion® Web Console. The feature helps you understand your application environment by exposing the relationships between your applications and the related infrastructure that supports them. The more SolarWinds System Management products you have installed, the more information you are provided in the AppStack UI. From applications to databases, virtual hosts to storage arrays, you can get a complete picture of the entire “application stack”. You can learn more about the basics of using AppStack by watching this video.

A common question I hear is, “How does AppStack know what is related to what?” Well, AppStack uses concepts like automatic end-to-end mapping to automatically determine relationships between things like datastores and LUNs. But, AppStack also utilizes user-defined dependencies to allow you to specify relationships between particular monitored entities. For example, you can create a user-defined dependency between the Microsoft IIS application you are monitoring, and the Microsoft SQL Server that it relies on for business critical functionality. To configure the user-defined dependencies, click on Manage Dependencies on the Main Settings & Administration page.

When you select the Microsoft IIS application you created the dependency for in the AppStack UI, you can see not only the infrastructure stack that supports that particular web server, but also the other applications or servers that it depends on.

You can use the Spotlight function in AppStack to quickly narrow the view down to only the object you select and the objects related to it, including the user-defined dependencies. In this case, both the IIS application and the SQL Server application are shown, along with the infrastructure stack that supports them.

Once you build out your dependencies, you will be able to quickly traverse from one monitored application to another, and gain a complete understanding of your complex application environment. In the example of the IIS application and SQL server, you can select the SQL server and see what other applications are dependent on it.

As you continue to build out your user-defined dependencies, you will be able to quickly traverse all of the relationships between the applications you monitor and the other monitored objects in your environment, like web transactions. This will, in turn, allow you to determine the root cause of application issues faster by giving you better visibility into the entire application and infrastructure landscape.


In my previous posts, I talked about building the virtual machine and then about prepping the disks.  That's all done for this particular step.

 

This is a long set of scripts.  Here's the list of what we'll be doing:

  1. Variable Declaration
  2. Installing Windows Features
  3. Enabling Disk Performance Metrics
  4. Installing some Utilities
  5. Copying the IIS Folders to a new Location
  6. Enabling Deduplication (optional)
  7. Removing unnecessary IIS Websites and Application Pools
  8. Tweaking the IIS Settings
  9. Tweaking the ASP.NET Settings
  10. Creating a location for the TFTP and SFTP Roots (for NCM)
  11. Configuring Folder Redirection
  12. Pre-installing ODBC Drivers (for SAM Templates)

 

Stage 1: Variable Declaration

This is super simple (as variable declarations should be)

#region Variable Declaration
$PageFileDrive = "D:\"
$ProgramsDrive = "E:\"
$WebDrive      = "F:\"
$LogDrive      = "G:\"
#endregion

 

Stage 2: Installing Windows Features

This is the longest part of the process, and it can't be helped.  The Orion installer will do this for you automatically, but if I do it in advance, I can play with some of the settings before I actually perform the installation.

#region Add Necessary Windows Features
# this is a list of the Windows Features that we'll need
# it's being filtered for those which are not already installed
$Features = Get-WindowsFeature -Name FileAndStorage-Services, File-Services, FS-FileServer, Storage-Services, Web-Server, Web-WebServer, Web-Common-Http, Web-Default-Doc, Web-Dir-Browsing, Web-Http-Errors, Web-Static-Content, Web-Health, Web-Http-Logging, Web-Log-Libraries, Web-Request-Monitor, Web-Performance, Web-Stat-Compression, Web-Dyn-Compression, Web-Security, Web-Filtering, Web-Windows-Auth, Web-App-Dev, Web-Net-Ext, Web-Net-Ext45, Web-Asp-Net, Web-Asp-Net45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Mgmt-Tools, Web-Mgmt-Console, Web-Mgmt-Compat, Web-Metabase, NET-Framework-Features, NET-Framework-Core, NET-Framework-45-Features, NET-Framework-45-Core, NET-Framework-45-ASPNET, NET-WCF-Services45, NET-WCF-HTTP-Activation45, NET-WCF-MSMQ-Activation45, NET-WCF-Pipe-Activation45, NET-WCF-TCP-Activation45, NET-WCF-TCP-PortSharing45, MSMQ, MSMQ-Services, MSMQ-Server, FS-SMB1, User-Interfaces-Infra, Server-Gui-Mgmt-Infra, Server-Gui-Shell, PowerShellRoot, PowerShell, PowerShell-V2, PowerShell-ISE, WAS, WAS-Process-Model, WAS-Config-APIs, WoW64-Support, FS-Data-Deduplication | Where-Object { -not $_.Installed }
$Features | Add-WindowsFeature
#endregion

 

Without the comments, this is 2 lines.  Yes, only 2 lines, but very important ones.  The very last Windows Feature that I install is Data Deduplication (FS-Data-Deduplication).  If you don't want this, you are free to remove this from the list and skip Stage 6.
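If you want a quick sanity check before moving on, you can re-query a handful of the features and warn on anything that failed to install. A minimal sketch (the feature names below are just a subset of the list above):

```powershell
# Sketch: verify that the features we care about most actually installed
$MustHave = "Web-Server", "Web-Asp-Net45", "MSMQ-Server", "FS-Data-Deduplication"
Get-WindowsFeature -Name $MustHave |
    Where-Object { -not $_.Installed } |
    ForEach-Object { Write-Warning "Feature [$( $_.Name )] is not installed" }
```

If this prints nothing, everything in the subset landed correctly.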

 

Stage 3: Enabling Disk Performance Metrics

Disk performance counters are disabled in Windows Server by default, but I like to see them, so I re-enable them.  It's super simple.

#region Enable Disk Performance Counters in Task Manager
Start-Process -FilePath "C:\Windows\System32\diskperf.exe" -ArgumentList "-Y" -Wait
#endregion

 

Stage 4: Installing some Utilities

This is entirely for me.  There are a few utilities that I like to have on every server, regardless of version.  You can configure this step in whatever way you like.  Note that I no longer install 7-Zip as part of this script because I'm deploying it via Group Policy.

#region Install 7Zip
# This can now be skipped because I'm deploying this via Group Policy
# Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "\\Path\To\Installer\7z1604-x64.msi", "/passive" -Wait
#endregion
#region Install Notepad++
# Install NotePad++ (current version)
# Still need to install the Plugins manually at this point, but this is a start
Start-Process -FilePath "\\Path\To\Installer\npp.latest.Installer.exe" -ArgumentList "/S" -Wait
#endregion
#region Setup UTILS Folder
# This contains the SysInternals and Unix Utils that I love so much.
$RemotePath = "\\Path\To\UTILS\"
$LocalPath  = "C:\UTILS\"
Start-Process -FilePath "C:\Windows\System32\robocopy.exe" -ArgumentList $RemotePath, $LocalPath, "/E", "/R:3", "/W:5", "/MT:16" -Wait
$MachinePathVariable = [Environment]::GetEnvironmentVariable("Path", "Machine")
if ( -not ( $MachinePathVariable -like "*$( $LocalPath )*" ) )
{
    $MachinePathVariable += ";$LocalPath;"
    $MachinePathVariable = $MachinePathVariable.Replace(";;", ";")
    Write-Host "Adding C:\UTILS to the Machine Path Variable" -ForegroundColor Yellow
    Write-Host "You must close and reopen any command prompt windows to have access to the new path"
    [Environment]::SetEnvironmentVariable("Path", $MachinePathVariable, "Machine")
}
else
{
    Write-Host "[$( $LocalPath )] already contained in machine environment variable 'Path'"
}
#endregion

 

Stage 5: Copying the IIS folders to a New Location

I don't want my web files on the C:\ drive.  It's just a habit I've gotten into over years in IT, so I move them using robocopy.  Then I need to re-apply some permissions that are stripped during the copy.

#region Copy the IIS Root to the Web Drive
# I can do this with Copy-Item, but I find that robocopy works better at keeping permissions
Start-Process -FilePath "robocopy.exe" -ArgumentList "C:\inetpub", ( Join-Path -Path $WebDrive -ChildPath "inetpub" ), "/E", "/R:3", "/W:5" -Wait
#endregion
#region Fix IIS temp permissions
$FolderPath = Join-Path -Path $WebDrive -ChildPath "inetpub\temp"
$CurrentACL = Get-Acl -Path $FolderPath
$AccessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList "NT AUTHORITY\NETWORK SERVICE", "FullControl", ( "ContainerInherit", "ObjectInherit" ), "None", "Allow"
$CurrentACL.SetAccessRule($AccessRule)
$CurrentACL | Set-Acl -Path $FolderPath
#endregion

 

Stage 6: Enable Deduplication (Optional)

I only want to deduplicate the log drive - I do this via this script.

#region Enable Deduplication on the Log Drive
Enable-DedupVolume -Volume ( $LogDrive.Replace("\", "") )
Set-DedupVolume -Volume ( $LogDrive.Replace("\", "") ) -MinimumFileAgeDays 0 -OptimizeInUseFiles -OptimizePartialFiles
#endregion

 

Stage 7: Remove Unnecessary IIS Websites and Application Pools

Orion will create its own website and application pool, so I don't need the default ones.  I destroy them with PowerShell.

#region Delete Unnecessary Web Stuff
Get-WebSite -Name "Default Web Site" | Remove-WebSite -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0" -Confirm:$false
Remove-WebAppPool -Name ".NET v2.0 Classic" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5" -Confirm:$false
Remove-WebAppPool -Name ".NET v4.5 Classic" -Confirm:$false
Remove-WebAppPool -Name "Classic .NET AppPool" -Confirm:$false
Remove-WebAppPool -Name "DefaultAppPool" -Confirm:$false
#endregion

 

Stage 8: Tweaking the IIS Settings

This step is dangerous.  There's no other way to say this.  If you get the syntax wrong, you can really screw up your system... this is also why I save a backup of the file before I make any changes.

#region Change IIS Application Host Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Application Host settings
$ConfigFilePath = "C:\Windows\System32\inetsrv\config\applicationHost.config"
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.applicationHost'.log.centralBinaryLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.log.centralW3CLogFile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\LogFiles" ) )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logFormat", "W3C" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("logExtFileFlags", "Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus" )
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.logfile.SetAttribute("period", "Hourly")
$ConfigFile.configuration.'system.applicationHost'.sites.siteDefaults.traceFailedRequestsLogging.SetAttribute("directory", [string]( Join-Path -Path $LogDrive -ChildPath "inetpub\logs\FailedReqLogFiles" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("directory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp\IIS Temporary Compressed File" ) )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("maxDiskSpaceUsage", "2048" )
$ConfigFile.configuration.'system.webServer'.httpCompression.SetAttribute("minFileSizeForComp", "5120" )
# Save the file
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

There's a lot going on here, so let me see if I can't explain it a little.

I'm accessing the IIS Application Host configuration file and making changes.  This file governs the entire IIS install, which is why I make a backup.

The changes are:

  • Change the log file locations (the centralBinaryLogFile, centralW3CLogFile, siteDefaults logFile, and traceFailedRequestsLogging directories)
  • Define the log type (logFormat set to W3C)
  • Set the elements that I want in the logs (logExtFileFlags)
  • Set the log roll-over period to hourly (period)
  • Set the location for temporary compressed files (the httpCompression directory)
  • Set my compression settings (maxDiskSpaceUsage and minFileSizeForComp)

 

Stage 9: Tweaking the ASP.NET Configuration Settings

We're working with XML again, but this time it's for the ASP.NET configuration.  I use the same process as Stage 8, but the changes are different.  I take a backup again.

#region Change the ASP.NET Compilation Settings
# XML Object that will be used for processing
$ConfigFile = New-Object -TypeName System.Xml.XmlDocument
# Change the Compilation settings in the ASP.NET Web Config
$ConfigFilePath = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\web.config"
Write-Host "Editing [$ConfigFilePath]" -ForegroundColor Yellow
# Load the Configuration File
$ConfigFile.Load($ConfigFilePath)
# Save a backup if one doesn't already exist
if ( -not ( Test-Path -Path "$ConfigFilePath.orig" -ErrorAction SilentlyContinue ) )
{
    Write-Host "Making Backup of $ConfigFilePath with '.orig' extension added" -ForegroundColor Yellow
    $ConfigFile.Save("$ConfigFilePath.orig")
}
# change the settings (create if missing, update if existing)
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("tempDirectory", [string]( Join-Path -Path $WebDrive -ChildPath "inetpub\temp") )
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("maxConcurrentCompilations", "16")
$ConfigFile.configuration.'system.web'.compilation.SetAttribute("optimizeCompilations", "true")
# Save the file
Write-Host "Saving [$ConfigFilePath]" -ForegroundColor Yellow
$ConfigFile.Save($ConfigFilePath)
Remove-Variable -Name ConfigFile -ErrorAction SilentlyContinue
#endregion

 

Again, there's a bunch going on here, but the big takeaway is that I'm changing the temporary location for ASP.NET compilations (tempDirectory) to the drive where the rest of my web content lives, along with the number of simultaneous compilations (maxConcurrentCompilations).

 

Stage 10: Create NCM Roots

I hate having uploaded configuration files (from network devices) saved to the root drive.  This short script creates folders for them.

#region Create SFTP and TFTP Roots on the Web Drive
# Check for & Configure SFTP and TFTP Roots
$Roots = "SFTP_Root", "TFTP_Root"
ForEach ( $Root in $Roots )
{
    if ( -not ( Test-Path -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) ) )
    {
        New-Item -Path ( Join-Path -Path $WebDrive -ChildPath $Root ) -ItemType Directory
    }
}
#endregion

 

Stage 11: Configure Folder Redirection

This is the weirdest thing that I do.  Let me see if I can explain.

 

My ultimate goal is to automate the installation of the software itself.  The default directory for installing the software is C:\Program Files (x86)\SolarWinds\Orion (and a few others).  Since I don't really like installing any program (SolarWinds stuff included) on the O/S drive, this leaves me in a quandary.  I thought to myself, "Self, if this was running on *NIX, you could just do a symbolic link and be good."  Then I reminded myself, "Self, Windows has symbolic links available."  Then I just needed to tinker until I got things right.  After much annoyance, and rolling back to snapshots, this is what I got.

#region Folder Redirection
$Redirections = @()
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]1; SourcePath = "C:\ProgramData\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]2; SourcePath = "C:\ProgramData\SolarWindsAgentInstall"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]3; SourcePath = "C:\Program Files (x86)\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]4; SourcePath = "C:\Program Files (x86)\Common Files\SolarWinds"; TargetDrive = $ProgramsDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]5; SourcePath = "C:\ProgramData\SolarWinds\Logs"; TargetDrive = $LogDrive } )
$Redirections += New-Object -TypeName PSObject -Property ( [ordered]@{ Order = [int]6; SourcePath = "C:\inetpub\SolarWinds"; TargetDrive = $WebDrive } )
$Redirections | Add-Member -MemberType ScriptProperty -Name TargetPath -Value { $this.SourcePath.Replace("C:\", $this.TargetDrive ) } -Force
ForEach ( $Redirection in $Redirections | Sort-Object -Property Order )
{
    # Check to see if the target path exists - if not, create the target path
    if ( -not ( Test-Path -Path $Redirection.TargetPath -ErrorAction SilentlyContinue ) )
    {
        Write-Host "Creating Path for Redirection [$( $Redirection.TargetPath )]" -ForegroundColor Yellow
        New-Item -ItemType Directory -Path $Redirection.TargetPath | Out-Null
    }
    # Build the string to send to the command prompt
    $CommandString = "mklink /J `"$( $Redirection.SourcePath )`" `"$( $Redirection.TargetPath )`""
    Write-Host "Executing [$CommandString]... " -ForegroundColor Yellow -NoNewline
    # Execute it
    Start-Process -FilePath "cmd.exe" -ArgumentList "/C", $CommandString -Wait
    Write-Host "[COMPLETED]" -ForegroundColor Green
}
#endregion

The reason for the "Order" member in the Redirections object is that certain folders have to be built before others; i.e., I can't build X:\ProgramData\SolarWinds\Logs before I build X:\ProgramData\SolarWinds.
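As an aside: if you're running PowerShell 5.0 or later, New-Item can create junctions natively, which would let you drop the cmd.exe/mklink call entirely. A minimal sketch, using one of the redirections above as an example:

```powershell
# Sketch (PowerShell 5.0+): create the junction natively, no cmd.exe round-trip.
# Paths are illustrative; inside the loop above you would use
# $Redirection.SourcePath and $Redirection.TargetPath instead.
New-Item -ItemType Junction -Path "C:\Program Files (x86)\SolarWinds" -Target "E:\Program Files (x86)\SolarWinds" | Out-Null
```

Same result on disk either way; I went with mklink at the time, and it still works fine on older hosts.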

 

When complete, the folders look like this:

mklinks.png

Nice, right?

 

Stage 12: Pre-installing ODBC Drivers

I monitor many database server types with SolarWinds Server & Application Monitor.  They each require drivers, so I install them in advance (because I can).

#region Pre-Orion Install ODBC Drivers
#
# This is for any ODBC Drivers that I want to install to use with SAM
# You don't need to include any driver for Microsoft SQL Server - it will be done by the installer
# I have the drivers for MySQL and PostgreSQL in this share
#
# There is also a Post- share which includes the files that I want to install AFTER I install Orion.
$Drivers = Get-ChildItem -Path "\\Path\To\ODBC\Drivers\Pre\" -File
ForEach ( $Driver in $Drivers )
{
    if ( $Driver.Extension -eq ".exe" )
    {
        Write-Host "Executing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath $Driver.FullName -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    elseif ( $Driver.Extension -eq ".msi" )
    {
        # Install it using msiexec.exe
        Write-Host "Installing $( $Driver.FullName )... " -ForegroundColor Yellow -NoNewline
        Start-Process -FilePath "C:\Windows\System32\msiexec.exe" -ArgumentList "/i", "`"$( $Driver.FullName )`"", "/passive" -Wait
        Write-Host "[COMPLETED]" -ForegroundColor Green
    }
    else
    {
        Write-Host "Bork-Bork-Bork on $( $Driver.FullName )"
    }
}
#endregion

 

Running all of these with administrator privileges cuts this process down to 2 minutes and 13 seconds.  And over 77% of that is installing the Windows Features.

 

Execution time: 2:13

Time saved: over 45 minutes

 

This was originally published on my personal blog as Building my Orion Server [Scripting Edition] – Step 3 – Kevin's Ramblings
