
38 Posts authored by: bmrad Employee

A few posts ago in Files, Files, Everywhere... we reviewed how to turn on file analysis for drives on your servers in Profiler, but many of you have centralized file shares on NAS devices (especially NetApps and Celerras) and may want to do file analysis on Virtual Machines.  In this post, we will explore how to set up file analysis for these devices.

Before we start, you ask why would you want to do this?  Simply to know who has files, what kinds of files they are, and how old they are - on each share you select.  This gives you the size and details of each share, which can be used for quota enforcement, compliance reporting (no MP3s!!), and storage recovery.  You can also group shares into logical collections for chargeback - one of our largest customers processes 50,000 shares and some 500+ million files a week, which are grouped into departments and used for chargeback reports.

Let's chat about the prerequisites before going through the steps:

  • First, we need a list of shares from the device. For NetApp, this is done automatically through the API.  For Celerra, Virtual Machines, and other NAS devices, you assign this work to an agent. If you are not seeing a list of shares, then most likely you have a permissions issue. See the FAQ at the bottom for more details.  
  • An agent that is assigned the work must be able to access the share.  For Windows, this means changing the Service Account to a domain user that has access to those CIFS shares.  For Unix/Linux, it means the root user should have access to the NFS shares.
  • You must turn file analysis to "On" on the agent doing the work.  The previous post  Files, Files, Everywhere... shows you how to do that.
  • All CIFS scanning is done by agents on Windows servers, and all NFS scanning by agents on Unix/Linux servers.

OK, so let's focus on assigning out shares for a NetApp.

  1. Go to Settings > Assign Remote Shares and choose the following (changes in selection will refresh the screen):

    1. Assign By: NAS or VM - lets you choose if you want to see NAS devices or Virtual Machines
    2. [NAS/VM] Device: Select one device or all devices.
    3. Share Type: CIFS or NFS.  This will automatically change available agent to match the protocol.

  2. Once you have set the above criteria, you can then select the shares you want:

    1. If you have lots of shares, you can narrow it down by using the regular expression filter.
    2. Select the shares you want.  You can select multiple shares by using the CTRL or SHIFT key. 
    3. Now select which agent is going to do the work:
      1. Resource: select the agent you want to do the work.
      2. Share Depth: Leave this at zero for now - more on depth later.
      3. Move: Press the down arrow to assign shares, the up arrow to unassign them.

  3. Once you have assigned the shares to the agents you want, press Save at the bottom.  This will immediately assign out the shares to that agent, and file analysis will be performed at the next scheduled file analysis start time for that agent.  If you want to change the start time, the previous post Files, Files, Everywhere... shows you how to do that. 

So what happens from here?  If file analysis is scheduled to run at 1:00am, then at that time the agent will connect to the first share, perform file analysis, then connect to the next share, and so on, until there are no more shares to do. As it completes each share, the data will appear in Profiler.


  • Where will I see reports?
    • NetApp: From the NetApp console, under the NAS Shares tab, and the vFiler Files tab. 
    • Celerra: From a Datamover console, under the NAS Shares tab.
    • Windows VM: From the VM console, under the Local Shares tab.
    • Reports: Dozens of predefined reports for shares, users, file types, file age, etc. (Reports > Quick Reports)
  • What kind of reports will I see?
    • Shares - how much space is used by shares and how fast they are growing
    • User - how much space used by Users
    • File Type Groups - how much space is used by different file types
    • File Age Categories - how old are my files (ex: 30, 60, 90, 365 days)
    • File Rules - find specific files, like the 100 biggest files, or orphaned files, or files created in the last 24 hours
  • How fast will it do file analysis? 
    • This varies from environment to environment, ranging from 10K-90K files per minute, with most environments falling in the 30-50K files per minute range.
  • What is Share Depth? 
    • Say you had a share called "Users" with 100 user directories underneath it (sound familiar?) and you wanted to assign out all 100 shares so you could see how much space each person is using.  Assigning out 100 things (not to mention having to assign out the 101st in the future) is a real pain. Depth allows you to pick a share and tell Profiler to subdivide it at directory level X (the depth).  So in our example, if I chose depth "1" and the "Users" share, the data generated would not be for "Users" but for "Users\Brian", "Users\Craig", "Users\Denny" and so on.  It does this dynamically each time file analysis executes, so it automatically picks up when a user is added or removed.
  • Where are my Linux VMs?
    • File analysis is available for Windows VMs only at this time.
  • What else can I do with File Analysis?
    • Rules (Settings > File Analysis Rules) - find specific files you are interested in (Ex: Find all MP3)
    • File Type Groups (Settings > File Type Groups) - Group file types for reporting (default grouping of 700+ file types)
    • Share Groups (Settings > Resource Groups) - Group shares into logical collections for chargeback.
    • Reporting (Reports > Quick Reports) - Dozens of predefined reports for shares, users, file types, file age, etc.
  • How do I get a list of shares?
    • NetApp - automatically gets shares from the API
    • Celerra and other NAS devices - when you are configuring Profiler to monitor the device, you can select a Windows and Unix/Linux agent to get the list of shares (CIFS/NFS respectively).
    • VMs - go to Settings > Discover VM Targets and select which Windows agents to get the list of shares. 
  • If it isn't working, what should I look for?
    • If you don't have a list of shares, make sure your account permissions have access to the shares.
    • If you assigned out shares but see no data, you may need to turn on file analysis for that agent.
    • If you are missing some shares, you may have a permissions issue or the share may be empty.
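To make the Share Depth mechanic described above concrete, here is a minimal sketch in Python. The function name and behavior are illustrative only - Profiler's internal implementation is not shown in this post:

```python
import os

def subdivide_share(share_root, depth):
    """Return the directories that would each be treated as a separate
    share when the given depth is applied.  Depth 0 means the share
    itself is scanned as a single unit."""
    targets = [share_root]
    for _ in range(depth):
        next_level = []
        for parent in targets:
            # Each subdirectory at this level becomes its own scan target.
            next_level.extend(e.path for e in os.scandir(parent) if e.is_dir())
        targets = next_level
    return targets
```

With depth 1 on a "Users" share, this yields one entry per user directory, and re-running it after a new user is added picks up the new directory automatically - the same dynamic behavior described above.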

Thanks for taking the time to read the blog, and as always, please let us know what you think!

WARNING:  You will find files you did not know about, and you will find files you don't want found!

Brandon's A little bit of new and old about the history of computing brought back fond memories of my Apple II+ with 2 floppy drives and the 8088 I have that still boots and runs WordPerfect 4.2.  But it got me thinking about the evolution of storage arrays and how we monitor them.  A lot of you are trying Storage Manager, and you are running into SMI-S Providers for the first time.  SMI-S?  Provider? What is all this nonsense?  Back in the early 1990s, the DMTF (Distributed Management Task Force) put together the Common Information Model (CIM) to describe managed elements in a computing environment.  Around 2000, SNIA (Storage Networking Industry Association) began work on a CIM-based specification for storage devices, and SMI-S (Storage Management Initiative - Specification) was born.

Why am I beating you with acronyms?  Is this important? Yes, because most array vendors prefer to provide information about their arrays via this method, as it allows them to hide most changes to the underlying architecture from reporting and monitoring tools, which is a good thing for all of us.  The idea was to provide end-users with a unified way of reporting and managing all storage devices, like SNMP for networking... but more on that later.


A "provider" is simply the software that provides the information about the array to other applications. Each provider is built by the OEM of the Array, and they determine what data it will provide (asset, storage, allocation, mapping and performance data and more).  The OEM vendor generally can deliver the provider to the user in three ways:

  • Separate software installation
  • Integrated into the vendor management software
  • Embedded in the array (the provider is part of the array software)


So depending on your vendor, you may have some work to do before Storage Manager can discover and monitor arrays.  Our awesome SMI-S configuration document covers the installation, configuration and troubleshooting of the SMI-S providers that we support - including download links, but here is a quick summary of the steps you need to take to configure the provider and array  in Storage Manager.

Separate software installation:

  • Download the software and install it on a server (generally does not need connectivity to the SAN).
  • Configure the provider to connect to the array; generally you will need the IP addresses of the array controllers.

Integrated into the vendor management software:

  • Generally, the provider is installed by default and will already be configured.
  • Follow the instructions in the SMI-S configuration document for help with your management software.

Embedded into the array:

  • The provider is generally installed and running by default.
  • Follow the instructions in the SMI-S configuration document for help with your management software.

Next, add the array in Storage Manager

Configure Storage Manager to monitor the array in one of two ways:

  1. Use the Discovery engine to discover the array; make sure the IP address of the Provider (not the array) is in the IP range of the discovery configuration.  Once discovery is complete, you can choose which Storage Manager agent you want to use, and you are done.
  2. Go to the Getting Started page, pick your array type and fill in the required fields (generally the IP of the provider, credentials and the array identifier).  You can press the Test button to check connectivity.  Once complete, press Save and Storage Manager will start monitoring the array.

This looks like a lot of work, but it is really easy - it should take about 5 minutes the first time you configure a provider and Storage Manager.  Next time we will take a deeper look at file analysis.

Note: Here is a quick list of arrays we support and the kind of provider they have:

Separate software installation:

  • Dell MD3K Family - use the LSI Provider.
  • EMC VNX/Clariion and VMAX/VMAXe/DMX/Symmetrix - use the EMC Provider.
  • IBM DS 3K, 4K, 5K- use the LSI Provider.
  • Oracle Sun StorageTek Family (2K, 6K) - use the LSI Provider.
  • SGI - use the LSI Provider

Integrated into the vendor management software:

  • Dell Compellent - provider part of Enterprise Manager
  • HP EVA - provider part of HP Command View
  • HP XP - provider part of HDS HiCommand
  • HDS USP, USPV - provider part of HDS HiCommand
  • Oracle Sun StorageTek 99xx Series - provider part of HDS HiCommand

Embedded in the array (the provider is part of the array software):

  • 3par
  • IBM DS 6K, 8K
  • IBM SVC/V7000 (embedded on the cluster and the main console)
  • XIV
  • Pillar

No Provider, Storage Manager uses a different methodology:

  • Dell Equallogic - uses SNMP
  • EMC Celerra - uses Telnet/SSH
  • HP P4000 Series (Lefthand) - uses SNMP
  • IBM N-Series - uses NetApp API
  • NetApp - uses NetApp API
  • Xiotech Magnitude and Emprise - uses API


PS: SMI-S is alive and well, with new extensions to the specification coming out every couple of years to keep pace with new technology.  However, since this is only a specification on how to describe the storage device, vendors have been free to embrace and extend the specification, leading to some fragmentation in the availability of data across arrays.  Is this a problem?  For Storage Manager, no - it handles the differences, normalizing the data where it can, and extending to support the important data the vendor has included.

Update: In version 5.0 and later of Storage Manager and Storage Profiler, there is an Orion Integration Module that presents data in Orion automatically.  We definitely suggest you move to the new version and try out the integration instead of following the method below.

A lot of you are asking when Profiler and Orion are going to be integrated, and while I don’t have a date, you can see Profiler - What we are working on....  In the meantime, if you just have to have some Profiler goodness inside Orion, here is a way to do it.   This is several steps long, and we are working to improve this process.  I am assuming you are somewhat familiar with Orion navigation.
NOTE:  While this gets Profiler views into Orion, it is far from a perfect user experience.  A few caveats:

  • You have to log into Profiler in the same browser as Orion before these views will work
  • Occasionally you will break out of the Orion view into a new window or tab
  • There is no integrated navigation or breadcrumbs; drill downs in Profiler are generally one way.

Build a View:
In general, any Profiler view (summary, monitor, device) could be imported into Orion.  However,  we would recommend summary or monitor views, as they allow you to drill down to device views.

  1. Go to Settings > Views > Manage Views
  2. Click Add, then enter a name of the view (ex: Profiler Summary), choose Summary for Type Of View and click Submit.

  3. Change the number of columns to 1, and width to 1000, click Submit. 
  4. Next to the Resources box, click the plus sign to add a new resource.

  5. In the tree, open Miscellaneous folder and check Custom HTML, press Submit.
  6. Press the Preview button at the bottom, this will take you to the view with one resource with no data.

  7. Press Edit and change the title name to Profiler Main Console. 
  8. We will be using an iframe to show Profiler within a container on the screen.  In this example, the link is to the Profiler Main Console, but at the end of the post I will give you some more sample URLs from Profiler.
    Enter the following into the text Box:
    <iframe src="http://<ProfilerServerNameOrIP>:<ProfilerServerPort>/MainConsole.do?actionName=showConsole" height="400" width="950" name="text" id="contentFrame"></iframe>
       src - the URL to Profiler
       ProfilerServerNameOrIP - the Profiler server's DNS name or IP address
       ProfilerServerPort - the Profiler server's port
       height - the height of the iframe
       width - the width of the iframe; generally you want this to be less than the width of the resource defined in Step 3
       name and id - very important, make sure you copy them exactly as shown
    1. When you are done, it should look like the image below.  Press Submit.

    If everything is correct, and you have already logged into Profiler in another tab, you should see something like this.  Otherwise, you will see a Profiler login screen.
    NOTE: Once you are successful, copy the URL of this page - you need it for the next step.

    Add the new View to the list of Available items:
    The view you built above is not available to be added to the menu bar until you manually add it.

    1. Settings > Customize > Customize Menu Bars
    2. Edit any menu bar (ex: Default Menu Bar).
    3. Scroll all the way to the bottom of the list of Available items, click Add.
    4. Enter a name, paste in the URL from the last step above, uncheck "Open a New Window", and press OK.
    5. You should see Profiler Summary in the Available Items.

    Add the new View to a Menu Bar:
    You can now add any of the Profiler view you created to any existing tabs, just like any other view...

    1. Settings > Customize > Customize Menu Bars
    2. Edit any menu bar (ex: Default Menu Bar).
    3. Drag and drop the new View to the menu bar, reorder as you see fit.
    4. Press Submit under the selected items.
    5. Next time the page refreshes, you should see the new View (Profiler Server) in the Tab.

    Phew… I worked up a sweat, did you?  When integration comes, all the heavy lifting will be done, so you won't have to do it. 

    As always, if you have any thoughts or suggestions, please let us know.

    Use Cases and sample URL:

    To build more, right click any link in profiler and copy it to a text file for review.

    We are currently working on version 5.0 and (in parallel) 5.1 and beyond.  Some of the items we hope to deliver:

    • Integration with Orion NPM or APM.  Users who already own these products will be able to view Profiler data from the Orion console.
    • Integration with SolarWinds' latest acquisition, Hyper9 (to be renamed)
    • Enhanced Thin Provisioning Support - adding more data, reports and alerts for more storage arrays
    • End-to-End Visualization - show a visualization of each data path, with drill downs to associated paths.  
    • Support for NetApp's ONTAP 8
    • Support for IBM XIV
    • Support for Dell Compellent

    PLEASE NOTE:  We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!


    Last week at Cisco Live was quite an experience!  It was great to meet Orion and Storage Profiler customers face to face.  The two items users discussed most were End-to-End mapping (mapping storage back to its source) and file analysis.  Since we recently did a post on End-to-End mapping (Oh Storage, Where Art Thou?), I thought I would tackle the basics of file analysis.


    In Profiler, file analysis allows you to look at all of your formatted space (Server, NAS and VM), summarize usage by file age, file type, and ownership, and build rules to find specific files ("files of interest").  In this post, we will turn on file analysis for a physical server and show the reports and rules you get by default.  We will add NAS and VM shares in future posts.


    To turn on file analysis for a specific server, navigate to Configuration > Devices from the Getting Started page. On the Devices page, locate the server with the agent on which you wish to enable file analysis and click the wrench icon for that server.  In the configuration page that pops up, go to the file analysis section.


    Set the Status to “On” and set the Start Time (defaults to midnight).  File analysis will run every day by default; you can change this by changing the frequency.

    Once file analysis runs, when you review the “Files” tab for that server, you will have the summary file analysis data appear:

    By default, you will get summaries by file type, file age and user (ownership).

    Also by default, you will get three rules:

    • Orphan Files: Files that have no owner
    • New Files: Files created in the last 24 hours (Windows Only)
    • Largest Files: The largest files on the drive

    If you click the link under files reported, you will get the details on each file, including location, size, last accessed and ownership.

    So armed with this information, you can summarize your files and find specific ones you might want to take action on.  Some common use cases:

    • Find all MP3 (or other unwanted files)
    • Find all files over 1MB and not accessed in 1 year
    • Find all PST files

    In future posts, we will review adding NAS and VM shares, and step through creating rules.


    • All of the above reports can be run across multiple servers through the reporting engine (Reports > Quick Reports).
    • If you want to create a rule, go to Administration > Rules > Add New Rule > File Analysis Rule.
    • If you want to change file analysis settings for multiple servers, you can go through Policies (Administration > Policies > Default OS Policy > File Analysis).
    • File analysis walks through the entire file system every time it runs, looking at the meta information of each file (it does not open the file or review its contents).  It does put a load on the system while it runs (generally 15-20% of CPU), so users generally run file analysis at night, outside of the backup window.
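As a rough illustration of the metadata-only walk described in the last bullet, here is a sketch in Python. The thresholds and rule names are illustrative stand-ins, not Profiler's actual values:

```python
import os
import time

def scan_metadata(root, large_threshold=1_000_000, new_within_secs=86_400):
    """Walk a file system collecting per-file metadata (size, mtime)
    without ever opening file contents, and flag 'files of interest'."""
    large, new = [], []
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # unreadable entries are skipped, never opened
            if st.st_size >= large_threshold:
                large.append((path, st.st_size))
            if now - st.st_mtime <= new_within_secs:
                new.append(path)
    return large, new
```

Because only `stat` metadata is read, the cost is dominated by directory traversal, which is why the load profile is CPU/IO on metadata rather than file content reads.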

    Oh Storage, Where Art Thou?

    Posted by bmrad Employee Jun 11, 2010

    I've talked to a lot of customers lately who are wrestling with their virtual infrastructure, especially with storage.  Trying to chase performance issues is like sorting out a bowl of spaghetti, because it's hard to know where your storage is coming from or who you are sharing it with.

    For example, what if you have a performance issue on a VM and suspect the disk?  How would you diagnose the issue and decide how to fix it?  First, you need to know if you have a contention issue by looking at what VMs are on the same datastore, then drilling down to the LUN, and then finding the RAID group and seeing what other LUNs are on it.

    In the VMware Logical Mapping report, Profiler can do the leg work for you, with drill downs to the details. You can access the Logical Mapping report at any level of the virtual infrastructure (VC, DC, Cluster, ESX, VM) by clicking the Storage tab, then clicking the Logical Mapping link.  The report correlates VMware data (ESX, Datastore, VM) to the array data (LUN, Array) for Fibre Channel, iSCSI or NFS connectivity.

    The VM level Logical Mapping report for VM "D3TMPVT4" tells us it lives on Datastore LSI04_FIB_DS22_Migrate_Dev (VM > Storage > Logical Mapping).

    The Cluster level Logical Mapping report (below) tells us there are 10 other VMs on that datastore.

    You can look at the Datastore and drill down to the individual VMs to review contention, or jump across to the Array and LUN, with further drill downs to the RAID group to review LUN performance.

    In this case, if we run a ranking report for the VMs on this datastore (Quick Reports > VMware VM > Disk Performance), you can see one VM is much busier than the rest.

    At this point, your options are to diagnose the problem on a QA/test box, or move the VM to another datastore.  The logical mapping report shows the other datastores available in the infrastructure (and their available space), and you can review the performance of these both on the virtual side and the array side. Looking at the performance of the LUNs on the array:

    We can see that LUN "Migrate1" is a lot less volatile from a performance perspective and has the required 10GB of space available.

    So you now decide which way to solve your disk issue, either further diagnosis of the offending VM or migrate your VM to another datastore.

    So you have probably heard of Profiler, Solarwinds' storage and virtual infrastructure product, but as an Orion customer, you might ask what can I do with Profiler?
    Let's drill down on a specific common storage use case - monitoring and reporting on VMware DataStores.  DataStores are the hub of storage, providing capacity to the virtual infrastructure (VMs, snapshots, logs, etc.).  Running out of space on your DataStore is a bad thing, and there are two ways Profiler can help you avoid it.

    First, good old fashioned alerting.  Just navigate to the Rules Page, press Add New Rule and choose Threshold Rules. From there, change your drop down to match these below, then you can select a condition to alert on (% Usage, Free GB or Used GB).  Note you can select "Any" DataStore, or pick specific ones of interest.

    When the threshold is crossed, Profiler can send a trap, execute a script, or send you an email… we can even alert you through Orion, but that is another post. 
    Second, you can use the reporting engine to forecast when you are going to run out of storage in the future.  Profiler calculates the growth rate of each DataStore and forecasts when the next thresholds will be passed, allowing you to address problems before they happen.
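Profiler's internal forecasting method isn't documented in this post, but the idea - fit a growth rate to usage samples and project when a threshold will be crossed - can be sketched with a simple least-squares line. The helper name and sample numbers below are hypothetical:

```python
def days_until_full(samples, capacity_gb):
    """Estimate days until a datastore reaches capacity, given
    (day, used_gb) samples (at least two, on distinct days), using a
    least-squares linear growth rate.  Returns None if usage is flat
    or shrinking."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    num = sum((d - mean_x) * (u - mean_y) for d, u in samples)
    den = sum((d - mean_x) ** 2 for d, _ in samples)
    slope = num / den  # growth rate in GB per day
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (capacity_gb - intercept) / slope
```

For example, a datastore at 10 GB used growing 1 GB/day against a 20 GB capacity projects full in about 10 days - the same kind of "threshold will be passed on day X" answer the report gives you.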

    We can see both these DataStores will be out of storage soon... better go talk to our admins before bad things happen :).

    One more thing you can do with DataStores is End-to-End mapping, where Profiler maps the DataStore to its source LUN on the array.  More on how to use End-to-End reports next time.

    You probably have heard by now, but SolarWinds has a new product that gives you insight into your storage, virtualization infrastructure and backups, all from a single pane of glass – Profiler. But as an Orion customer, you might ask what can I do with Profiler? 

    Focusing on storage, here are a number of ways Profiler can help you understand and optimize your environment:

    • One view for all your storage:  Look across all your storage and know how it is configured, where it is allocated and how much is used.

    • End-to-End Mapping:  Tired of keeping your LUN to server spreadsheet up to date in a virtual world?  We do that automatically for iSCSI, FC and NFS connectivity – all in a single report! 

    • Reclaim storage: Find unallocated LUNs on your arrays and unformatted or underused storage on your servers.  

    • Find files:  Profiler can categorize your storage by type, age and ownership and then show you the details of the files - find all those MP3 stashes eating up your space!

    • Forecast the future: Look at growth rates at all levels of your storage (Array, Datastore, File System) so you can understand how much storage you are going to need in the next year (or two or three).
    • Tier your storage: Review the performance of your arrays and LUNs, so you know what kind of storage you need to meet your needs.  iSCSI vs. FC vs. NFS? Fast Disk vs. slow disk?  All the data you need is in Profiler.

      So now you know some of the things you can do with Profiler right out of the box.  Future posts will drill down into grouping, reporting and alerting, and give you a preview of things to come.
