In NCM 6.0 the development team produced a very powerful (and cool) new feature – Config Change Templates. On the back end, these are scripts that run on a SolarWinds scripting engine. On the front end, they are simple, menu-driven templates that let you make bulk changes to devices across your network from the web interface. The other great thing about these templates is that they are easily shareable directly within the product!


For example, say I need to change the enable password on all of the Cisco devices in my network. On the NCM website, hover on the Configs tab, then click the “Config Change Templates” link. Now click “Shared Config Change Templates on thwack” and find the template you’re interested in. In this example, click Cisco (conveniently, there just happens to be a script that does exactly what I am trying to do).






After importing the template, I can quickly run it by selecting the template, clicking “Define Variables & Run”, and providing the relevant variables (in this case, the nodes and new enable password).
















After you click Next, NCM generates the full command output. You can preview what will be sent by expanding one of the nodes. For example, if I expand Bas-2621.lab.tex.local, I can see exactly what will be run on that router. You can also specify whether the commands should be written to NVRAM or just to the running config.








Simply click Execute and you’re done. Easy enough, right? Now let’s really dig in and look at the actual script. Click the “Config Change Templates” link again. Select the script you want to modify and, instead of clicking “Define Variables & Run”, click “Advanced Modify”. The interesting section is “Config Change Template”; this is where the code lives.








The scripting logic should appear familiar to most people with scripting or programming experience in other languages (except LISP, that’s just different((())) ).


The first section is commented out with /* */. This is metadata that tells the script engine which variables it should prompt for. To define variables, use .PARAMETER_LABEL and .PARAMETER_DESCRIPTION pairs. Both are required for every variable you want the user to supply when they execute the script. The Context Node is the node the script is being run against and is required for all scripts. In the enable-password script we ran earlier, the only other information we needed was the new password.
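For reference, here is a sketch of what that commented metadata header might look like for our enable-password example (the label and description text here is illustrative, not copied verbatim from the shared template):

  /*
  .PARAMETER_LABEL @ContextNode
  Node
  .PARAMETER_DESCRIPTION @ContextNode
  The node the script will run against.
  .PARAMETER_LABEL @NewPassword
  New Enable Password
  .PARAMETER_DESCRIPTION @NewPassword
  The new enable secret to set on the device.
  */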


After you have created all of your parameters, you are ready to begin the script body itself. The next statement you should notice is “script ChangePassword”. ‘script’ should be followed by the name of the script, then the parameter types and names. In our example, the new password is a string. The valid data types are int, string, and “swis.entity” (more on that in another blog).
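Putting that together for our example, the declaration line would look something like this (the parameter names are illustrative; the NCM.Nodes entity used for the Context Node is an example of the “swis.entity” data type mentioned above):

  script ChangePassword (NCM.Nodes @ContextNode, string @NewPassword)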


After that, it is mainly a matter of defining the control flow, or logic, of your script. The standard statements are supported: If, If Else, and Foreach (see the documentation for full usage details). When you are ready to have your script write to the command line of the node, use the following structure:


  CLI
  {
    make a cup of coffee for me
  }


For example, in the Change Cisco Enable Password script, here is the CLI section:



  CLI
  {
    configure terminal
    enable secret @NewPassword
    write mem
  }














A great resource is the NCM documentation. Specifically, the Understanding Config Change Template Semantics help document goes into great detail about creating and modifying these scripts.


Whenever you use scripts, especially ones you download from thwack, make sure you review the script so you understand what it is doing. Also, before you click the final Execute button when running scripts, examine the output to make sure it is doing exactly what you intended.


Remember, “With great power comes great responsibility”.

If you’d like to get a detailed walkthrough of the new 10.1 features, register for this free training session on Thursday, December 9, 2010 11:00 AM - 12:00 PM CST.


During this session we’ll discuss and demonstrate several of the most requested new features incorporated in the 10.1 release of Orion NPM. This is a major upgrade of Orion; don’t let the .1 release fool you. We have incorporated some of the most frequently requested features and functionality, including:


• Complete Active Directory (AD) Authentication integration
• Dynamic Service Groups and Dependencies
• Multi-Object Stacked Charts
• PDF Export of Reports and Views
• Mobile-Device-Aware Alert Management


Please register:

We’ve posted a new customer training video that covers topics such as tuning Orion's website, SQL database, and data collection services.  The Head Geek is joined by Dan Wendeln from Support and Chris LaPoint from Product Management.  Check it out.  It’s handy stuff for any customer, whether you’re new to Orion or have been around for a while.


It’s been a long time coming (for us as well as for you), but it’s finally here!  Orion NPM 10.1 is officially GA!  Hats off to all the customers who participated in the prototype reviews, betas, and release candidates. We really couldn’t have done it without you.  And, last but certainly not least, a really big shout out to all our friends in dev who worked tirelessly to deliver the very aggressive feature content and schedule we PMs set for them ;-)


So (drum roll), for those that missed our “Orion NPM 10.1…you asked, we listened!” blog post, here’s a complete run down of all things new in 10.1:


Data Center Networking Enhancements

  • Dynamic Service Groups    
    • Ability to group multiple Orion objects (e.g., nodes, interfaces, applications, etc.) into a container that can be used to visualize status.   For example, you can create a “WAN Links” Group, add several interfaces, and then set Group rollup status mode to “best” to account for link redundancy. 
    • See Meet the Features – Dynamic Service Groups for a detailed walk-through with screen shots. 
  • Dependencies 2.0 / Basic Root Cause Analysis    
    • Ability to define dependency relationships within the Orion web console.  For example, if this parent node or interface goes down, then mark all items defined as children as “unreachable” instead of “down”.   This effectively suppresses alerts on all children objects while still allowing you to use your existing alerts configured on “down” status.
    • See Meet the Features – Orion NPM 10.1 - Dependencies 2.0 / Basic Root Cause Analysis for more details.
  • VMware infrastructure monitoring    
    • Visibility into the vCenter, DataCenter, and Cluster levels (in addition to ESX host and VM guest visibility already provided)
  • Cisco UCS Support (Unified Computing System)    
    • Dynamically maintain the logical relationships from operating system through virtual guest and host to the Cisco UCS blade and chassis
    • View the status of the UCS chassis hardware, such as fans and PSUs
  • Message Center    
    • Displays Syslog, Traps, Events, and Alerts in a single view where you can filter to a specific Orion object to see all pertinent messages

Festivus for the Rest of Us

  • Complete Active Directory (AD) Authentication Support       
  • PDF Views and Reports    
    • Send Orion Web Console Views or Report Writer Reports as a PDF attachment to Orion users via email
    • See Hey Chart, get in my Report! (Part 2) for details on how to leverage PDF exports along with several other new in 10.1 features to get charts in your reports.
  • Mobile Device and Alert Management Enhancements    
    • A dedicated alert view made for mobile web browsers that allows you to view and acknowledge alerts from your phone
    • Receive an email alert notification and acknowledge the alert directly from your email, whether on your PC or mobile device, by clicking a link
    • Enable, disable, and delete Advanced Alerts in the web console
    • Ability for users to add notes to alerts in the web console
    • Just like in Report Writer, you can use SQL to create advanced alerts you cannot create through the built-in trigger creation interface
  • Node and Interface details on Summary Views    
    • Ability to quickly and easily add any resource from node details and interface details to Summary views
  • Multiple Interfaces or UnDPs on a single chart    
    • Ability to place multiple interfaces from a single node or multiple nodes on a single chart
    • Ability to place a UnDP assigned to one node or multiple nodes to a single chart
  • Meru Wireless Controller Support    
    • First class support for the Meru Wireless Controllers
    • See Controller and Thin AP stats along with connected client information
  • SQL 2008 R2 Support
  • Some other hidden tidbits and Easter eggs I will let you find on your own :)

So is that it, are we done?  For 10.1 we are, but rest assured we are not resting on our laurels.  We are already off and running, actively planning the next release, and once I have more details I will post a new “What we are working on” blog post to make sure everyone knows where we are heading next.  For those of you thwackers, keep on thwacking your ideas, gripes, wish lists, etc., because, as all of us Product Managers have said before and as indicated above, you help shape what we focus on next.  So until then, I hope you enjoy 10.1 as much as we are excited to deliver it to you.


- If you have IPSLA Manager, you will need to download IPSLA 3.5.1 to upgrade   
- If you have EOC 1.2, you will need to download the EOC-Orion 10.1 Compatibility Pack   
Both should be available in the customer portal.

A number of you have asked how to build custom reports in Profiler and then email or publish them on a schedule.  This week we will cover building a custom report, and will cover scheduling in a future post.

Like Orion, Profiler reports can be customized to meet your needs.  For Profiler, all reports are based on report templates, which describe the fields and query necessary to build a report.  When you build a custom report, you are simply selecting the fields you want to see, in the order you want to see them, and adding filters to narrow the results.   At report execution (either on-demand or scheduled), you will be able to select the set of devices, the time period, and other variables, in order to narrow the results further. 

For example, let’s say I want to find all the servers in my environment where the C:\ drive is greater than 90% full, and I want to understand how fast they are growing. We have the template "Volume Usage Forecast" we can leverage as a starting point.  Go to Reports > My Reports and press the New Report button; then we will select the template we want to use.  Note the template list is long, and we are working to make it shorter.

Select Enterprise > Storage > Volume Usage Forecast and press Continue:

Next, we can select the columns we want in the report and the order of those columns.  Note each template has defaults for the columns selected, the column order and the sort order.  In this case, there are 10 columns selected by default, but I am going to remove the Eighty and Ninety percent columns by highlighting them and pressing the left arrow.

Next we will select the sorts and filters for the report, allowing us to narrow the output to exactly what we want.  Since this configuration screen is long, we will tackle it in parts. 

First we will enter the name of the report, "C Drives Over 90 Percent", and in this case, we will leave the default sort (server, volume) in place.  If we wanted to change the sort order, we could remove current sorts by highlighting them in the text box and pressing "Delete Sort", and add new sorts by selecting the Sort By field and order and then pressing "Add Sort".  Note the order of the fields in the text box is the order the sorts will be applied to the report.

Next, we will pick the filters.  In general, you can filter by any column in the report, and the options of the filter will match the data type:

  • Text Fields: like, not like, <, >, =, <=, >=, <>, in, not in
  • Numeric Fields: like, not like, <, >, =, <=, >=, <>, in, not in
  • Date Fields: <, >, =, <=, >=, <>, before, within

The basic query we will build is "Volume Name like C: AND % Used > 90".  So for the first part, select "Volume Name" and "like", enter "%C:%", and then press "Add Filter" - you should see the filter appear in the text box.   For "like" filters, make sure you add the "%" before and after the text.  Note that since this is the first filter, we can ignore the AND/OR drop-down.

For the second filter, choose "AND", "% Used", " > ", enter "90", and press "Add Filter".
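Conceptually, the two filters combine into a single AND'ed predicate applied to each row of the report. Here is a rough sketch of that logic (plain Python with made-up sample rows, not Profiler's actual query engine):

```python
# Hypothetical volume records; Profiler builds the real rows from its database.
volumes = [
    {"server": "web01", "volume": "C:\\", "pct_used": 95.2},
    {"server": "web01", "volume": "D:\\", "pct_used": 97.1},
    {"server": "db01",  "volume": "C:\\", "pct_used": 42.0},
]

def matches(row):
    # Mirrors the report filters: Volume Name like %C:% AND % Used > 90
    return "C:" in row["volume"] and row["pct_used"] > 90

report = [r for r in volumes if matches(r)]
# Only web01's C: drive survives: the D: drive fails the name filter,
# and db01's C: drive fails the usage filter.
```

The same pattern extends to any number of filters; each "Add Filter" just tightens the predicate.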

Finally, there are few additional items to select:

  • Time Zone:  The default time zone of the report, which can be changed at run time.
  • Rows per Page:  The default number of rows per page.
  • Permissions:  What users can see this report:  Myself, Group (All users in my groups), All (everyone).

You are ready to save the report; your page should look like this:

Available actions:

  • Save Report - saves the report and returns to My Reports.
  • Save and Run - saves the report and takes you to the Run Report page
  • Cancel - abort changes to the report and return to the My Reports list.

For this report, if you select Save and Run and then run the report, the output looks like this (assuming you have some C:\ drives over 90% full).

So that is how you build a custom report - it is quite flexible and lets you get to your data exactly how you want it.  

So how do users leverage the reporting functionality?

  • Identify problem areas before they become "problems" - busiest LUNs, fullest drives, busiest VM or ESX, etc.
  • Chargeback - charge users, departments, customers, etc. for the storage, backups or servers they are using.
  • Backup Compliance - identify full backups that have not been successful for more than 7 days.
  • User Quotas - identify users exceeding their storage quota

How are you using reports in Profiler?  We would really like to hear.

PS: Cool new feature alert! In later versions of Profiler, we added the ability to group filters, so you can do "Find all (C: OR D:) drives that are over 90%".   Lots of people have asked for this feature, and now you can do it!

Before I dig into this post, I want to give a big shout out to all the customers who’ve participated in the NPM 10.1 Beta and RC programs.  In short, you guys rock!  Thank you!!! Your feedback has been invaluable and has greatly contributed to the quality of this feature-packed release.   We’re wrapping up the RC now, so it will be generally available very soon.

So that said, on to the meat of this post.   While it’s impossible to get every feature every customer wants in every release, we believe NPM 10.1 is a huge step forward in addressing many of your outstanding requests.

Here are 5 key examples:

1. Complete Active Directory (AD) Authentication Support

You asked:

We listened:

  • With NPM 10.1, you can now authenticate to the Orion Web Console using native AD Users and Groups.  Prior to 10.1, NPM only supported authentication via an Orion account or Windows pass-through.  Yes, it’s been a long time coming, but we wanted to make sure we got it right.   After all, if we didn’t offer AD Group support, what’s the point, right?
  • Now, you can add an AD Group to Orion and then easily control Orion web access by adding or removing group members through AD Users and Computers.  No more “can you give Orion access to Bob and Sue” requests in your inbox.  If/when users leave your organization, their Orion access is disabled with their Windows account.   See this post for more details.

2. Dynamic Service Groups and Dependencies

You asked:

We listened:

  • With the introduction of Dynamic Service Groups in NPM 10.1, you can group multiple Orion objects (e.g., nodes, interfaces, applications, etc.) into a container that can be used to visualize status.   For example, you can create a “WAN Links” Group, add several interfaces, and then set Group rollup status mode to “best” to account for link redundancy.  See this post for a detailed walk-through with screen shots.
  • Once you’ve grouped your objects, you can configure a dependency for the group to suppress alerts based on a particular interface, node, or group being down.  The best part is you don’t need to change any of your existing alert configurations!  See this post for more details.

3. Multiple Interfaces or UnDPs on a single chart

You asked a lot! ;-)

We listened:

  • With NPM 10.1, you can now graph multiple interfaces or UnDPs on a single chart, as well as graph their sum
  • More importantly for some, you can also place the new resource on a summary view and graph multiple interfaces from different nodes on the same chart.

4. PDF Views and Reports

You asked:

We listened:

  • With NPM 10.1, you can now schedule the export of any page or report in Orion as a PDF.  This eliminates the problems with sending HTML pages and the images getting gobbled by your email servers.   You’ll also notice an Export to PDF button in the upper right hand corner of NPM 10.1 web pages.  Click it and you can create ad hoc PDF exports of whatever you see on your screen.
  • See this post for details on how to leverage PDF exports along with several other new in 10.1 features to get charts in your reports.

5. Mobile Device and Alert Management Enhancements

You asked:

We listened:

  • With NPM 10.1, you can access a dedicated Orion alert view for mobile web browsers that allows you to view and acknowledge alerts from your mobile device.
  • What about email?  You can also receive an email notification and acknowledge alerts by clicking a link from your PC or mobile device.
  • What if you need to remotely disable an alert?  With NPM 10.1, you can view, enable, disable, and delete alerts directly from the website.
  • Finally, as an added bonus, just like in Report Writer, you can use custom SQL to create advanced alerts you cannot create through the built-in trigger creation interface

Please note that we’re using your thwack posts as examples because they’re publicly available for us to point to, but they’re not the sole determiner of release content prioritization.  We know you can’t access our minds or our internal feature tracking systems, but if you could, you’d see we also take into account all the great feedback you’ve given us in 1:1 interviews, feature requests submissions, beta posts, through our support organization, etc. to help us decide what goes into each release.

So, regardless of which method you prefer to provide us your feedback, please keep it coming!  We’re absolutely listening.

We often get asked by users if there is a quick and easy way to enable SNMP on a lot of servers. It’s easy enough to add the server manually if you are only doing this for a handful of machines. But if you have a large server environment you are responsible for, this isn’t manageable. Using a combination of freely available tools, you can greatly simplify this process.

1. Download and install PsTools from Microsoft (Windows Sysinternals), specifically, you will need PsExec. PsExec allows you to execute commands against remote machines. Install this on your workstation, then you can run commands against the remote servers without needing to log in to them directly. You can click here for more information about PsExec and here for more information about Windows Sysinternals. These are essential tools for any systems administrator.




2. Depending on the OS, you will also be using one of Microsoft’s built-in tools, sysocmgr or ocsetup. These tools let you install Windows components unattended directly from a command line and are part of the Windows distribution. You can find more information about sysocmgr here. For enabling SNMP on Windows Server 2008, use Microsoft’s ocsetup tool. You can find more information about that tool here.


If you want to enable SNMP on a Windows 2008 Server, simply follow these steps:

We will need to build the argument string you will pass to psexec. Here is an example string. You will need to replace all UPPERCASE words with the values relevant for your environment.

psexec \\COMPUTER -u USER -p PASSWORD "start ocsetup snmp /quiet /norestart"

If you have a list of computers, instead of specifying the individual computer name, simply put in a file name. For example, if I had a file called “servers.txt” which contained a list of servers (one per line), my command would look like this:

psexec @servers.txt -u USER -p PASSWORD "start ocsetup snmp /quiet /norestart"

Notice the @ before the filename; make sure you add it. See the psexec documentation for more information. The username and password should be an account that has access to the machine you are attempting to install this on.


To install on an older OS (Server 2000 and 2003, XP), you will need to use sysocmgr.

psexec \\COMPUTER -u USER -p PASSWORD "sysocmgr /i:%SystemRoot%\inf\sysoc.inf /u:\\NETWORKSHARE\InstallSNMP.txt /x /q /r"

Sysocmgr requires a configuration file. In my example, I put the InstallSNMP.txt file on a network share so I didn’t need to copy the file to the local server each time. This should be a simple text file with the following commands:

netoc = on
SNMP = 1

The /r switch means do not reboot automatically. You can find the full command options by simply opening a prompt and typing sysocmgr. This also assumes the i386 folder is available in the locally defined default path.


As always, you should test this on a small inconspicuous area of your network before applying to the whole surface :-)

Oh, and if you are interested in monitoring servers, you should probably check out our Application Performance Monitor product.

Now that the new Orion Failover Engine has been out for a couple of months, I wanted to discuss some of the common conversations I have had with folks as well as some of the common questions.

When I talk to customers, the first thing I do is discuss the two methods you can use to deploy the Orion Failover Engine -- high availability and disaster recovery. So, what the heck is the difference between the two?

1. High Availability – failover of the Orion applications within the same site, data center, or LAN. For example, the primary Orion server has a hard drive failure and switches over to another server within the same rack.

2. Disaster Recovery – failover of the Orion applications across the WAN to another site or Data Center. For example, the Primary Data Center in United States gets completely knocked offline with a flood and power outage and switches over to another Data Center in Europe.

Once I establish which method a customer prefers, I discuss that option in further detail. Since I don’t know which you prefer, let’s walk though each in more detail and discuss some considerations for you to take into account when choosing a deployment option.

High Availability:

This deployment option requires two NICs on each of the primary and secondary servers. The first NIC on both servers will be connected to the network and will share the same IP Address. The second NIC on each server is the failover channel, in which the heartbeat occurs, as well as real-time file and registry replication.

I know, I know… two servers cannot be on the network at the same time with the same IP Address. As part of the Orion Failover Engine package, we install a packet filter on the NICs, so only the active server is actively connected to the network, broadcasting and receiving traffic.

One benefit of the packet filtering approach is that when a failover occurs, your users browse to the same IP Address, and Orion can use the same database backend. As a Network Engineer, the larger benefit to you is any Syslog, SNMP Trap or Netflow device traffic you have been sending to Orion won’t have to be reconfigured to send to a new IP Address.


Disaster Recovery:

First, please do not make fun of my Visio skills in the below diagram :).

The primary differences in a Disaster Recovery deployment versus the High Availability setup I outlined above are the following.

  • Primary IP Address for Primary and Secondary server will not be the same as they are on different subnets
  • If you have configured your devices to send Syslog, SNMP Traps, and NetFlow by IP Address, this will need to be reconfigured to send to the secondary IP Address; alternatively, when you initially set this up, you can configure devices to send to both IP Addresses.
  • You will need to work with your SQL DBA on how to handle SQL failover. We have provided some initial options, which you can find here.



Question: My Orion is deployed on a physical server, do I have to have a physical server for my secondary/backup machine?  
Answer: No, the Orion Failover Engine supports multiple hardware deployment options which include:

  • Physical to Physical
  • Physical to Virtual
  • Virtual to Virtual

You can read more about this here.

Question: How good is this stuff? It’s a brand new product. Is it a 1.0?  
Answer: No, the Orion Failover Engine is based on technology from the NeverFail Group. VMware has also licensed the NeverFail technology for their High Availability and Disaster Recovery offering, and the shipping version is 6.4, so this is proven technology.

Question: Where can I learn more about the details of the Orion Failover Engine?  
Answer: Review this document here

Question: So how will this work when I need to apply Microsoft patches or other software that will require me to reboot the machine?  
Answer: The Orion Failover Engine allows you to manually fail over the secondary server and suspend any replication of data between the primary and secondary until your updates are complete. Then you can fail back over to the primary and re-enable replication.  See here for more details.

Question: I own Orion NPM, APM and IPAM, but I only want to protect NPM & APM. Can I do this?  
Answer: No, you cannot choose which Orion products on a server are protected and which aren’t. This is because many of the Orion products share Windows services, so you can’t protect one without protecting the other.

Question: How does licensing work for my Orion server?  
Answer: We license by what we call a primary product per server. Currently classified as primary products are Orion NPM, APM, and NCM. The other Orion products (IPSLA Manager, IPAM, and NTA) are covered for free once you have Orion FoE for that server.

For example, if you have Orion NPM, NCM, and NTA on a server, you will need an Orion Failover Engine for Two Primary Products license to cover NPM and NCM; NTA coverage is free.

Let’s look at a more complex example.

Server 1: Orion NPM, NTA & IPAM  
Server 2: Orion Additional Poller   
Server 3: Orion NPM   
Server 4: Orion Enterprise Operations Console or EOC

In this example you will need the following:

Server 1: Orion Failover Engine for One Primary Product   
Server 2: Orion Failover Engine for Additional Poller    
Server 3: Orion Failover Engine for One Primary Product    
Server 4: Orion Failover Engine for EOC

If you have any other questions, please feel free to PM me or contact your Sales Engineer.  We also provide many Knowledge Base articles which walk through additional topics in detail, which I highly recommend you reference.


In my Meet the Features – Orion NPM 10.1 - Dependencies 2.0 / Basic Root Cause Analysis post, in which I reviewed Dependencies 2.0, I referenced Groups in the creation workflow and told you I would tell you more about them later.  I figured I made you wait long enough, so let’s walk through Groups.


So why would you need a Dynamic Service Group?  Let’s walk through an example use case.  You currently own NPM and APM to monitor your network and applications.  You provide a service to your end users; in this example, Exchange email.  A user calls in complaining that their email is not working right. How do you go about troubleshooting this?  Is it the app, the server, the connected switch port, or the switch itself?  With Dynamic Service Groups, you can quickly identify what is causing the problem.


Let’s walk through how you would go about setting this up in Orion.


On the Orion Settings page, under Node & Groups Management, select Manage Groups.  One key thing to note here is that you can nest groups within each other.




Create a new group, name it, and set the description.  Under the Advanced option there is one key setting to check out: Status Rollup Mode.  In essence, this allows you to define how status rolls up, and the three choices are Best, Mixed, or Worst.




When you click on Add & Remove Objects, there are two primary methods to specify which objects are contained within the group. 

  1. Manual selection, where you can select nodes, interfaces, and volumes and, if you own other modules, APM Component Monitors, Applications, and IPSLA operations.    
  2. Dynamic Query, which allows you to specify object criteria, such as node name, interface description, or application monitor name.  This way, if you add or remove anything in Orion, the Group is automatically updated to reflect the change.



Now that we have created our group, there is a new sub-menu link underneath the Home tab for Groups.  Within this new Summary view, you can view all of the Groups, their statuses, alerts, etc.




When you drill down into one of the Groups, you can quickly view important details and statistics.  The really important one to pay attention to here is the Group Root Cause resource.  This allows you to quickly pinpoint what is causing the issue within the group.




As you can see, Dynamic Service Groups provide a tremendous set of functionality including, but not limited to, grouping (no more using custom properties), basic root cause analysis, and viewing your network infrastructure not just at a network-element level but in a higher-level business service context.  Groups will help you leverage Orion and all its existing capabilities more efficiently and be more diligent in your troubleshooting.
