
A customer asked us the other day if it was possible to put their own logo at the top of their NCM reports. Not only is it possible, it's really, really easy. So, this is what you get in your email now: 

 

 

And as much as you like us over here at SolarWinds, you'd like to have your own company logo across the top. All you have to do is go to: 

 

C:\Program Files\SolarWinds\Configuration Management\WebResources\Header.jpg – this is the logo itself

C:\Program Files\SolarWinds\Configuration Management\WebResources\Spacer.jpg – this is the spacer image that repeats to fill in the free space to the right of the logo

 

You'll find everything works best if you resize your logo file to be the same height as the file you are replacing (45 pixels). The width will take care of itself - or you might want to get creative and add additional content to the right of your logo.
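If you'd rather script the resize than open an image editor, here's a minimal Python sketch using the Pillow imaging library. The destination path comes from the default install location above; treat the source logo path as a placeholder, and back up the original Header.jpg first.

```python
from PIL import Image  # Pillow: pip install pillow

TARGET_HEIGHT = 45  # Header.jpg in a default install is 45 pixels tall
SRC = r"C:\temp\my_logo.png"  # your logo file (placeholder path)
DST = r"C:\Program Files\SolarWinds\Configuration Management\WebResources\Header.jpg"

logo = Image.open(SRC)
# Scale the width proportionally so the logo keeps its aspect ratio at 45px tall.
new_width = round(logo.width * TARGET_HEIGHT / logo.height)
resized = logo.resize((new_width, TARGET_HEIGHT), Image.LANCZOS)
# Header.jpg is a JPEG, so drop any alpha channel before saving over it.
resized.convert("RGB").save(DST, "JPEG", quality=95)
```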

Once you do this - your report will look like this. You'll see I just replaced the logo with the thwack logo. 

Easy and definitely nice to have. 

In this blog post we highlight some more of the goods in Orion NPM 10.1: Virtual Infrastructure Monitoring.  "Wait a second," you say, "doesn’t NPM already include VMware monitoring?"  Yes, it does, but we just made it a lot better!  Previously, Orion NPM would give you performance data on your ESX hosts and any virtual machines on those hosts; however, we didn’t tell you anything about the rest of your virtual infrastructure.  That is no longer the case.  Now, Orion NPM 10.1 gives you visibility into your vCenters, Datacenters, and ESX Clusters.  Why should you care?  By providing performance monitoring information for your entire virtual infrastructure, we hope we can save you a few visits to vCenter by giving you a single pane of glass from which to monitor both your physical and virtual environments.  Let’s take a look.

VIM - tab.

The first thing you’ll notice is the new VIRTUALIZATION tab which will take you to the view in the above screenshot.  This tab is where you’ll find information about your virtualized VMware infrastructure.  Let’s take a closer look at our VMware assets.

VIM - Assets.

In this resource you'll see your entire VMware virtual infrastructure.  Orion is able to display this data via the VMware API, which you can connect to through your vCenter server(s) or standalone ESX hosts.  With 10.1, we've moved polling for everything but interfaces, volumes, and individual VMs to the API (we're still polling these objects via SNMP).  Another thing you may notice is the new status icons.  These are actually VMware statuses (not to be confused with Orion node statuses), which Orion also gets via the VMware API.  Each of the managed assets listed in the resource above is clickable and will take you to a more detailed view for that particular asset.  On this view you'll also notice a new VMware Asset Summary resource.
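If you're curious what a query against the VMware API looks like, here's a rough, standalone Python sketch using the open-source pyVmomi library. This is purely illustrative (it is not how Orion polls internally), and the vCenter host name and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for a vCenter (or a standalone ESX host).
ctx = ssl._create_unverified_context()  # lab-only: skips certificate validation
si = SmartConnect(host="vcenter.example.local",
                  user="readonly@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every ESX host the vCenter knows about and list its VMs.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.overallStatus)       # VMware status, not Orion status
        for vm in host.vm:
            print("   ", vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)
```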

VIM - Assets Summary.

This resource gives us summary information about all of the VMware assets we’re monitoring in Orion.  Next, let’s take a more detailed look at our vCenter server.

VIM - vCenter Details.

Here you see statistics on the physical server itself and node details, as well as a new resource that gives you VMware-specific information about the vCenter, including its VMware status and the number of each type of VMware asset (Datacenters, clusters, hosts, and VMs) managed by the vCenter.  Let’s take a closer look at one of our clusters.

VIM - Cluster Details.

On this view you’ll notice several new resources that give you more detailed visibility into your ESX clusters.  Here we see the number of ESX hosts that are part of the cluster, as well as a new Cluster details resource that gives us summary information about the cluster.  Last, let’s take a look at the new VMware Settings page.

VIM - VMware Settings.

 

On this page you’ll find all of your VMware managed nodes, and several options for managing those nodes.  From this page you can assign or edit credentials, enable or disable polling, or specify whether you want to poll through vCenter or a specific ESX host.

As you can see, we’ve added quite a few new features that give you much deeper visibility into your VMware virtual infrastructure.  These features are included in the NPM 10.1 Release Candidate.  If you’re currently participating in the Release Candidate, please let us know what you think in the NPM RC forums here!

For our Profiler customers, the "Profiler - What we are working on..." post is in progress, but many of you have asked about the backup applications supported in Profiler.

Here is a quick summary of where we are headed:

  • We are actively working on support for the latest versions of the following backup applications:
    • NetBackup 7
    • BackupExec 2010
    • EMC Networker 7.6
    • TSM 6.2
    • Commvault 9

PLEASE NOTE:  We are working on these items, but this is NOT a commitment that all of these enhancements will make the next release.  If you have comments or questions on any of these items (e.g. how would it work?), please let us know!

I’ve gotten a lot of requests for part 2 of my Hey Chart, get in my Report! (Part 1), so this follow-up is certainly long past due.  Just as a recap for new readers, we were discussing two highly-requested use-cases:

  1. Getting the pretty charts in the Orion website into a report that you can send to your boss on a regular basis
  2. Getting the pretty charts in the Orion website + the detailed data (which Report Writer provides) and sending that to your boss on a regular basis

For those who have been following along, you’ll remember that the first use-case was covered in my original post back in September.  

So, was there a method to my madness in waiting so long to do part 2?  I certainly think so.  In case you haven’t heard, Orion NPM v10.1 is currently in the Release Candidate phase (see Why Should I Care About Release Candidates?) and provides a number of cool new features that will make creating specialized reporting views much simpler and faster (and, not coincidentally, a much easier blog post ;-).

  • Custom Object Resource  - this resource allows you to select any object in Orion (e.g. node, interface, volume) and choose an associated resource to display.  This means you can add resources for different nodes and interfaces to the same page.  For example, you may want a page that shows bandwidth utilization charts for all of your WAN interfaces.   Now you can do this with just a few clicks.   This should eliminate the need to use the custom HTML resource for this purpose (hopefully, some of you are smiling already).
  • Multi-Interface and Multi-UnDP Chart Resources - these highly requested resources give you the ability to chart multiple interfaces or multiple UnDPs respectively in a single chart resource, including the option to display the sum/aggregate.  
  • Scheduled PDF Reports – this new capability allows you to schedule the export of any page or report in Orion as a PDF.  This eliminates the problems with sending HTML pages and the images getting gobbled by your email servers.

So, in this final post in the series, I’ll walk through how you use these new 10.1 features to address the final graphical reporting use-case (#2 above).

1. First, you’ll need to create a new “report” view:

Go to Admin > Manage Views and create a new view.  Let’s call this one “Critical Network Links Management View”.

image 

2. Next, you’ll want to add and configure resources on the view to represent the required charts and data:

For this Critical Network Link Management View, I’m going to add several individual interface charts, a multiple interface chart, and a data table report.    This will require the resources shown checked below.   

image

As you can see below, I’ve added enough Custom Object Resources to cover my 4 critical WAN links in addition to the Multiple Interfaces Chart and Report from Report Writer resources.  

image

Now, you’ll want to click Preview so you can see what the view will look like and edit the resources.   If you don’t like the layout, you can always click Customize Page again and change the column width.  

image

Next, you’ll want to edit each resource to select the appropriate interface or interfaces.   I’m not going to walk through this step by step because the resources are very straightforward to configure.   If you’re interested in seeing what this looks like for the Multiple Interfaces Chart, check out Request for feedback – multiple interfaces and UnDP on charts.   As you can see below, I’ve configured all the chart resources.   Now, all that’s left is the report resource.

image

For the Report resource, I’ll select the Top 25 Interfaces by Utilization report.   This way, in addition to my 4 critical WAN links, I can see details regarding the health of other interfaces with high bandwidth utilization in my environment.   You can always use Report Writer to easily filter this report to specific interfaces, show other columnar data, or create a custom report specific to your environment.

image 

3. Finally, you’ll want to schedule this page to be sent as a PDF report via email to your boss.  

To do this, you’ll need to copy the URL from the browser.

image

Then, open the Report Scheduler app on your Orion Server (Start > All Programs > SolarWinds Orion > Alerting, Reporting, and Mapping > Orion Report Scheduler).   Click on the Add+ button to create a new report job.   Fill out the job details and paste this URL into the required field when prompted as shown below.   

image

Finally, you’ll want to enter the SMTP server info, your boss’s email address of course, and the appropriate scheduling details.  At the end, you’ll see the new option in 10.1 to schedule the page to be emailed as a PDF.    Select that, and you’re done!!

image

Example PDF report below:

image

We hope you find the new 10.1 features helpful not only for this use-case, but also for creating custom NOC and troubleshooting dashboards to share with all your networking friends.   As always, we welcome any feedback you have around the post or the new features, so please comment away!

P.S. While this is the last post in this series, this is only the first step towards our long-term vision for graphical reporting and we’re already exploring ways to make this process even more wizard-like and streamlined post 10.1.  We’re going to need your feedback soon, so please stay tuned!  

 

From time to time, we try to highlight tips and tricks from community members.  These tips and tricks are usually posted on thwack.  In this post, we want to highlight something that was posted on a personal blog.  In his extremely detailed and helpful blog post entitled “Automated SpiceWorks Tickets using Orion NPM and Perl”, Aaron Hebert explains how to make Orion automatically open a ticket in the SpiceWorks help desk.

We encourage you to read his post.  Here’s a quote from his intro:

“Customers appreciate a fast response to a network outage. After working in a NOC setting and learning many things about customer service, I’ve decided to take advantage of a few very good ideas implemented by others that could decrease response times to outages. One of these ideas includes creating tickets when a “node down” event is detected by an NNM. I’ve seen “auto-tickets” generated by integrating HP OpenView and ConnectWise, but I’ve never come across a solution for less expensive products such as SpiceWorks and Orion NPM. I hope that this post will be of some use to anyone who wishes to implement this.”

Many thanks to Aaron for his contribution to the community.  Please take a look.  He includes detailed requirements and the original script he wrote.  Enjoy.
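If you'd like the gist of the approach before diving into Aaron's write-up, here's a rough Python sketch of the same general idea (Aaron's original uses Perl and his own approach). It assumes your SpiceWorks help desk is configured to create tickets from incoming email and that an Orion alert action runs the script with the node name and IP as arguments; the mail server and addresses below are placeholders.

```python
"""Sketch: open a SpiceWorks ticket when an Orion 'node down' alert fires.

Assumes SpiceWorks creates tickets from mail sent to its help desk mailbox
and that the Orion alert action calls this script with the node name and IP
(for example: open_ticket.py ${NodeName} ${IP_Address}).
"""
import sys
import smtplib
from email.mime.text import MIMEText

SMTP_SERVER = "mail.example.local"       # placeholder SMTP relay
HELPDESK = "helpdesk@example.local"      # SpiceWorks-monitored mailbox
SENDER = "orion-alerts@example.local"

def open_ticket(node_name, node_ip):
    body = ("Orion NPM reports node %s (%s) is DOWN.\n"
            "This ticket was opened automatically by the node-down alert."
            % (node_name, node_ip))
    msg = MIMEText(body)
    msg["Subject"] = "Node down: %s" % node_name  # becomes the ticket title
    msg["From"] = SENDER
    msg["To"] = HELPDESK
    with smtplib.SMTP(SMTP_SERVER) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    open_ticket(sys.argv[1], sys.argv[2])
```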

A few posts ago in Files, Files, Everywhere... we reviewed how to turn on file analysis for drives on your servers in Profiler, but many of you have centralized file shares on NAS devices (especially NetApps and Celerras) and may want to do file analysis on Virtual Machines.  In this post, we will explore how to set up file analysis for these devices.

Before we start, why would you want to do this?  Simply to know who has files, what kind of files, and how old they are - on each share you select.  This allows you to get to the size and details on each share, which can be used for quota enforcement, compliance reporting (no MP3s!!), and storage recovery.  Also, you can group shares into logical collections for chargeback - one of our largest customers processes 50,000 shares and some 500+ million files a week, which are grouped into departments and used for chargeback reports.

Let's chat about the prerequisites before going through the steps:

  • First, we need a list of shares from the device. For NetApp, this is done automatically through the API.  For Celerra, Virtual Machines, and other NAS devices, you assign this work to an agent. If you are not seeing a list of shares, then most likely you have a permissions issue. See the FAQ at the bottom for more details.  
  • An agent that is assigned the work must be able to access the share.  For Windows, this means changing the Service Account to a domain user that has access to those CIFS shares.  For Unix/Linux, it means the root user should have access to the NFS shares.
  • You must turn file analysis to "On" on the agent doing the work.  The previous post  Files, Files, Everywhere... shows you how to do that.
  • All CIFS scanning is done by agents on Windows servers, and all NFS scanning by agents on Unix/Linux servers.

Ok, so let's focus on assigning out shares for a NetApp.

  1. Go to Settings > Assign Remote Shares and choose the following (changes in selection will refresh the screen):



    1. Assign By: NAS or VM - lets you choose whether you want to see NAS devices or Virtual Machines
    2. [NAS/VM] Device: Select one device or all devices.
    3. Share Type: CIFS or NFS.  This will automatically change the available agents to match the protocol.

  2. Once you have set the above criteria, you can then select the shares you want:



    1. If you have lots of shares, you can narrow it down by using the regular expression filter.
    2. Select the shares you want.  You can select multiple shares by using the CTRL or SHIFT key. 
    3. Now you will select which agent is going to do the work
      1. Resource: select which agent you want to do the work
      2. Share Depth: Leave this at zero for now - more on depth later
      3. Move: Press the down arrow to assign the share, up arrow to unassign shares.

  3. Once you have assigned the shares to the agents you want, press Save at the bottom.  This will immediately assign out the shares to that agent, and file analysis will be performed at the next scheduled file analysis start time for that agent.  If you want to change the start time, the previous post Files, Files, Everywhere... shows you how to do that. 

So what happens from here?  If file analysis is scheduled to run at 1:00am, then at that time the agent will connect to the first share, perform file analysis, then connect to the next share, and so on, until there are no more shares to do. As it completes each share, the data will appear in Profiler.

FAQ:

  • Where will I see reports?
    • NetApp: From the NetApp console, under the NAS Shares tab, and the vFiler Files tab. 
    • Celerra: From a Datamover console, under the NAS Shares tab.
    • Windows VM: From the VM console, under the Local Shares tab.
    • Reports: Dozens of predefined reports for shares, users, file types, file age, etc. (Reports > Quick Reports)
  • What kind of reports will I see?
    • Shares - how much space is used by shares and how fast they are growing
    • Users - how much space is used by users
    • File Type Groups - how much space is used by different file types
    • File Age Categories - how old are my files (ex: 30, 60, 90, 365 days)
    • File Rules - find specific files, like the 100 biggest files, or orphaned files, or files created in the last 24 hours
  • How fast will it do file analysis? 
    • This varies from environment to environment, ranging from 10K-90K files per minute, with most environments falling in the 30-50K files per minute range.
  • What is Share Depth? 
    • Say you had a share called "Users" with 100 user directories underneath it (sound familiar?) and you wanted to assign out all 100 shares so you could see how much space each person is using.  Assigning out 100 things (not to mention having to assign out the 101st in the future) is a real pain. Depth allows you to pick a share and tell Profiler to subdivide it at directory level X (the depth).  So in our example, if I chose depth "1" and the "Users" share, the data generated would not be for "Users" but for "Users\Brian", "Users\Craig", "Users\Denny" and so on.  It does this dynamically each time file analysis executes, so it automatically picks up when a new user is added or removed.  (There is a short sketch of this subdivision logic just after this FAQ.)
  • Where are my Linux VMs?
    • File analysis is for Windows VMs at this time.
  • What else can I do with File Analysis?
    • Rules (Settings > File Analysis Rules) - find specific files you are interested in (Ex: Find all MP3)
    • File Type Groups (Settings > File Type Groups) - Group file types for reporting (default grouping of 700+ file types)
    • Share Groups (Settings > Resource Groups) - Group shares into logical collections for chargeback.
    • Reporting (Reports > Quick Reports) - Dozens of predefined reports for shares, users, file types, file age, etc.
  • How do I get a list of shares?
    • NetApp - automatically gets shares from the API
    • Celerra and other NAS devices - when you are configuring Profiler to monitor the device, you can select a Windows and Unix/Linux agent to get the list of shares (CIFS/NFS respectively).
    • VMs - go to Settings > Discover VM Targets and select which Windows agents will get the list of shares. 
  • If it isn't working, what should I look for?
    • If you don't have a list of shares, make sure your account permissions have access to the shares.
    • If you assigned out shares but see no data, you may need to turn on file analysis for that agent.
    • If you are missing some shares, you may have a permissions issue or the share may be empty.
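
To make the Share Depth answer above a bit more concrete, here is a tiny Python sketch of the subdivision idea (purely illustrative, not Profiler's actual code): given a share's root path and a depth, it returns the directories that would each be treated as their own file-analysis target.

```python
import os

def subdivide_share(share_root, depth):
    """Return the directories 'depth' levels below share_root.

    depth=0 is the share itself; depth=1 on a "Users" share yields one
    entry per user folder (Brian, Craig, ...), recomputed on every run so
    new folders are picked up automatically.
    """
    current = [share_root]
    for _ in range(depth):
        next_level = []
        for parent in current:
            for entry in os.listdir(parent):
                path = os.path.join(parent, entry)
                if os.path.isdir(path):
                    next_level.append(path)
        current = next_level
    return current

# Example: treat each user's folder under the Users share as its own target.
# print(subdivide_share(r"\\filer\Users", 1))
```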

Thanks for taking the time to read the blog, and as always, please let us know what you think!

WARNING:  You will find files you did not know about, and you will find files you don't want found!

As I mentioned in Server Change Auditing, we’ve been thinking about new product areas, and we want your input.  We have two surveys today.  You can take both, one, or (obviously) neither.  If you give us your contact info, we’ll follow up with some of you for in-person calls, which will earn you a SolarWinds shirt or other goodies.

 

 

 

Take the survey on Call Quality Management.  We'd really like to better understand what your needs are as they relate to IP telephony call quality monitoring.

 

Call quality means being able to understand what an end user's experience was like when they made a call. In our specific case, we want to know what the quality of the call was over an IP network.

 

Some examples of poor quality include:

 
      
  • Echo on the call
  • Conversation seems out of synch
  • Gaps in speech
  • Parts of the conversation are repeated
  • Improper volume
  • Unable to connect
  • Unexpected disconnect
 

 

 

Take the survey on Device/Port Tracking.

 

Device/Network Port Tracking solutions provide a complete, historical view of your entire switch-port inventory (wired and wireless) and connected devices to help eliminate tedious and time consuming manual processes.

 

To help clarify, here are some example use-cases:

 

  • Need to trace the location of a specific IP or MAC address and see where it is now and where it's been on the network
  • Need to identify the impact of a switch being taken down for maintenance
  • Tired of tracing cables
  • Need to ensure switch capacity is really used before acquiring new switches
  • Need to verify availability of switch ports for upcoming project

 

 

 

If you think there are other areas where we should consider new products, please post comments, or feel free to email me directly.

 

In this next installment of introducing the new features of the upcoming Orion NPM 10.1 release, I wanted to cover Dependencies.  For those of you who have been using Orion NPM for some time, you know that in order to set up dependencies for alert suppression you had to use the Advanced Alert Manager.  This approach had various issues and complications, which was one of the drivers behind us adding this feature.

 

For most folks, the primary use case for dependencies is alert suppression; however, there are many more.  For example, you can group elements into logical groupings, such as by location or service. 

 

Scenario:  
My Orion server resides in my Corporate Data Center in San Francisco, CA and polls sites located all around the world.  I want to be notified if any of my devices go down, so I have a general Orion alert set up to page me if a node goes down.  So what happens if the core routing device for one of those remote locations goes down?  If you do not have dependencies set up, then you will get a flood of pages/emails, one for each node you are monitoring at that site.  Not very helpful when you are trying to find the root cause!

 

If you had a Dependency setup, which established this parent-child relationship, then you would only get one alert for that single routing node at that location, pinpointing the problem immediately.

 

Let’s walk through setting this scenario up.

 
      
  1. Within the Settings page there is a new option in Node & Group Management called “Manage Dependencies”.
    image
  2. Select “Add new dependency” and choose the parent element, which can be a node, interface, or group.  I will cover Groups in more detail in another post, but in essence that feature allows you to statically or dynamically group elements together into a container, with status rolling up based on the elements in that group.  From our example above, this parent node is the core router for one of my remote sites.
    image
  3. Select the children or downstream devices from the parent element; these can be nodes or groups.
    image
  4. That’s it. You have created your first dependency.  If that parent element now goes down, the child elements will go into a new state, “Unreachable”.
    image

From a UI standpoint, that is how you go about setting up a Dependency.  Let’s walk through some of the details under the covers on how things work.

As I mentioned above, we're introducing a new status called "unreachable" in this version.  This means if you've configured your alerts to trigger when things go "down", they won't trigger when objects are set to the new "unreachable" status.

There are two types of dependencies:

  • Implicit dependency:  e.g., when a server is "down", don't alert on the applications on that node.  Mark them "unreachable" to avoid redundant alerts.  This happens automatically without any configuration or work on your part.
  • Explicit dependency:  e.g., when a WAN link is "down", don't alert on all the nodes behind that WAN link.  Mark them "unreachable" to avoid redundant alerts.  This happens by manually configuring a dependency relationship.

More details on how this will work:

  • Implicit dependencies
    • NPM (volumes, interfaces)
      • When setting volume status to unknown, node status will be checked.  If the node status is down or unreachable, the volume status will be changed to "unreachable".
      • When setting interface status to unknown, node status will be checked.  If the node status is down or unreachable, the interface status will be changed to "unreachable".
    • APM (applications, monitors)
      • When setting an application/monitor to down or unknown, node status will be checked.  If the node status is down or unreachable, the application and monitor status will be changed to "unreachable".
    • IPSLA (operations)
      • [need to wait for SP1 availability] When setting operation status to unknown, node status will be checked.  If the node status at either end of the operation is down or unreachable, the operation status will be changed to "unreachable".  NOTE: We will not make any changes to operation status when setting it to down, since for IPSLA this means we CAN poll the results, but the operation itself could not collect data.
  • Explicit dependencies
    • NPM (nodes)
      • When setting a node's status to down or unknown, check for membership in a dependency relationship.  If the dependency indicates the node should be "unreachable", set the status to "unreachable".
    • APM (applications, monitors)
      • When setting status on an application/monitor to down or unknown, check for membership in a dependency relationship (it would be part of a Group that is a child).  If the dependency indicates the app or monitor should be "unreachable", set the status to "unreachable".
    • IPSLA (operations)
      • [need to wait for SP1 availability] When setting an operation's status to unknown, check for membership in a dependency relationship (it would be part of a Group that is a child).  If the dependency indicates the operation should be unreachable, set the operation status to unreachable.  NOTE: We will not make any changes to operation status when setting it to down, since for IPSLA this means we CAN poll the results, but the operation itself could not collect data.

So, what about automatic dependencies?   While this isn’t in 10.1, the use of topology to provide automatic dependencies (or at the very least dependency recommendations) is absolutely the next logical step for us and something we’re exploring.   As we get further along with our research in this area, we’d love to talk to you about what you’d like to see.   Please post a comment if you’d be interested in being involved in the feedback process!
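In the meantime, if it helps to see the explicit-dependency behavior described above boiled down to its logic, here's a small Python sketch (purely illustrative; the object names are made up, and this is not Orion's actual implementation):

```python
# Illustrative only: the explicit parent/child propagation described above.
PARENT_OF = {
    # child object     -> the parent it is reachable through (node or group)
    "branch-switch-01":  "wan-router-sfo",
    "branch-server-01":  "wan-router-sfo",
}
CURRENT_STATUS = {"wan-router-sfo": "down"}  # what the poller currently knows

def record_status(obj, polled_status):
    """Return the status that would be recorded for obj, per the rules above."""
    parent = PARENT_OF.get(obj)
    if (polled_status in ("down", "unknown")
            and parent is not None
            and CURRENT_STATUS.get(parent) in ("down", "unreachable")):
        return "unreachable"  # suppress the redundant 'down' alert
    return polled_status

def should_page(status):
    # A "node down" alert fires on 'down', not on the new 'unreachable' state.
    return status == "down"

# The WAN router is down: it pages, while the nodes behind it are recorded as
# 'unreachable' and stay quiet.
print(record_status("wan-router-sfo", "down"))    # -> down (pages)
print(record_status("branch-switch-01", "down"))  # -> unreachable (no page)
```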

Here in the SolarWinds labs, we’ve been thinking about new product areas.  We are already building new products and plan to build even more.  As part of that larger effort, we’re trying to get some input from Orion customers about existing IT Management problems.  One of the areas we’re considering is server change auditing: how do you know who made what changes (and when) on your servers? 

 

For this particular survey, if you provide your contact info, we’ll enter you in a drawing to win an Apple® iPad.*  The survey should take less than 10 minutes. 

 

 

 

Server Change Auditing survey

 

 

 

*Apple and iPad are trademarks of Apple Inc., registered in the U.S. and other countries

 

As I am sure many of you have seen, we have been working on a bunch of new features for 10.1 for a while now.  We have successfully gone through three beta rounds with customers, and you can view all the comments and feedback here.

 

Before you ask: no, this release is not Generally Available yet; however, we do anticipate going into the Release Candidate phase soon (see Why Should I Care About Release Candidates?).  If you are an Orion NPM customer with active maintenance and wish to participate in the RC, please fill out this survey here.

 

One of the larger features we added to 10.1 was native Active Directory authentication (with group support).  Prior to this upcoming release, Orion supported local authentication against the SQL database or Windows pass-through.  For those of you who have Orion in production, when you upgrade to 10.1 you will not need to change anything; everything will continue to work as it always has.  Going forward, you just have additional options to choose from for authentication into Orion. 

 

We chose to expand authentication support since many of our customers use Active Directory as a standard way of distributing permissions.  This eases some of the administrative overhead of managing user accounts, allowing you to manage them in one single place instead of having unique accounts scattered across products.

 

Let’s walk through how you would go about setting up and using this feature.

 

1. In the Settings section of the web console, under Manage Accounts, you will see we rearranged things a bit.  When I click on Add New Account, I enter the add user wizard.

 

image 

 

2. I can now select which type of account I want to add: a local Orion account, an individual Active Directory user account, or an Active Directory group account.  For this walkthrough, I’ll select an individual Active Directory user account.

 

image

 

3. Next, I’ll specify the account I want to use to search the Active Directory domain SWDEV, enter the domain I wish to search, and execute the search.  I can now browse to my name and select it.  If I wanted, I could also have searched for swdev\brandon.*, for example.

 

image

 

4. Just like you have always done when adding local Orion accounts, you specify permissions, view limitations, etc. for this account and submit.

 

image

 

Now when I log in, if you look at the upper right hand corner of the browser, which I have circled in yellow, you can see I am logged in with my SWDEV/brandon.shopp account.

 

image

 

The user flow for adding an Active Directory group is exactly the same as the one we just walked through, except that instead of adding an individual user account, you add the Active Directory group, as seen below.

 

image

 

My SWDEV/brandon.shopp account belongs to this group and I deleted the individual account I created above.  Now when I log in, as seen circled in yellow below, it will show me my user account as well as the group I belong to.

 

image

 

A good question I heard in beta was “Brandon, what if an Active Directory user account belongs to multiple groups?” 

 

One of the challenges we faced with supporting AD groups is how to handle the fact that most user accounts are members of multiple groups.  With multiple memberships, it’s unclear what to do with the user’s permissions.  We could give them the sum of all permissions, but how do you handle direct conflicts (e.g., Group 1 says “allow” and Group 2 says “deny”)?  There are multiple perfectly fine ways to solve the problem, depending on what you want to optimize.  The solution we chose emphasizes transparency, because Orion’s permissions are fairly straightforward and we want to keep it that way.  Consequently, Orion will only consider the first group membership it encounters, and the administrator determines the order in which it encounters groups.  So if SWDEV/brandon.shopp were a member of two Active Directory groups, we would go down the list as defined within the Groups tab in the Orion UI to the first group to which this account belongs and grant access based on that group.
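If it helps to picture the "first matching group wins" behavior, here is a tiny Python sketch (illustrative only; the group names and permission fields are made up):

```python
# Illustrative sketch of "first matching group wins" (names/fields are made up).
ORION_GROUP_ORDER = [
    # (AD group account,   permissions assigned to that Orion group account)
    ("SWDEV\\NetAdmins",   {"admin": True,  "allow_node_management": True}),
    ("SWDEV\\Helpdesk",    {"admin": False, "allow_node_management": False}),
]

def permissions_for(user_ad_groups):
    """Walk the admin-defined group order and return the first match."""
    for group, perms in ORION_GROUP_ORDER:
        if group in user_ad_groups:
            return perms   # later memberships are ignored entirely
    return None            # no matching group -> no access granted this way

# A user in both groups gets the NetAdmins permissions, because that group
# is listed first in the ordering the administrator defined.
print(permissions_for({"SWDEV\\NetAdmins", "SWDEV\\Helpdesk"}))
```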

 

The next question I heard a couple of times was “If I’m logged into my laptop, which is in the domain, do I need to log into the Orion web console again?”  As long as that account belongs to a group or has been individually defined, the answer is no.  If you don’t like this behavior, it is controlled by a global setting which you can turn off.

 

One additional item worth noting regarding this new feature: even though I focused mainly on Active Directory above, we will also support authentication for Windows users and groups on the local Orion server.

 

That’s it for Active Directory with Groups.  Please let me know if you have any questions or comments.
