
Product Blog

38 Posts authored by: bmrad Employee

Virtualization Manager is built on search - all the widgets, alerts, charts, etc. are simply queries that bubble up the most important data to your fingertips.  This allows us to quickly alert you about potential problems, including configuration recommendations for over- and under-allocation of resources (like CPU, memory, storage, etc.).  Many users have asked how these recommendations are calculated, and the answer is - it's all in the queries!  Even better - you can easily copy and edit the alert queries to build your own.

I was recently working with a customer on the alert recommendations for the default alert "VM Memory Overallocated".  The queries for this alert are based on average memory usage over a week, and the customer was concerned (and rightfully so) that if he followed our recommendations, he might induce swapping during memory peaks. So I took a deeper look to see what we could do.  

 

Make a copy of the current alert

First, I took the current alert and made a copy to work with:

  • From the Administrator Dashboard, scroll down in the All Alerts widget and click "VM Memory Overallocated".
  • At the bottom right of the screen, click "Configure".
  • At the bottom right of the screen, click "Save As", enter a new name and click "Save".  I changed mine to "VM Memory Peak Overallocated".
  • You should see the new name at the top and now click "Configure" at the bottom right again.

VManAlertSaveAs.png

 

Change the scope

I decided to change the scope, editing the last item to evaluate the weekly peak instead of the average and increasing the threshold from 30% to 50%.  I wanted to make sure I captured any VMs whose peak memory utilization was under 50% of the allocated memory.  Here is the query in full (the bolded part is what I changed):

     vm.powerstate:poweredOn AND vm.memory:[2048 TO *]  AND vm.memloadPeak.week:([* TO 50])

 

So this establishes the scope as all VMs with at least 2GB of RAM whose peak memory usage did not go above 50% for the entire week.
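To make that scope concrete, here is a minimal Python sketch of the same filter, using hypothetical VM records in place of Virtualization Manager's search index (the field names here are illustrative, not the product's API):

```python
# Hypothetical VM records standing in for the search index.
vms = [
    {"name": "vm-a", "powerstate": "poweredOn",  "memory_mb": 4096, "mem_peak_week_pct": 19.99},
    {"name": "vm-b", "powerstate": "poweredOn",  "memory_mb": 1024, "mem_peak_week_pct": 30.0},
    {"name": "vm-c", "powerstate": "poweredOn",  "memory_mb": 8192, "mem_peak_week_pct": 75.0},
    {"name": "vm-d", "powerstate": "poweredOff", "memory_mb": 4096, "mem_peak_week_pct": 10.0},
]

# vm.powerstate:poweredOn AND vm.memory:[2048 TO *] AND vm.memloadPeak.week:([* TO 50])
in_scope = [
    vm for vm in vms
    if vm["powerstate"] == "poweredOn"
    and vm["memory_mb"] >= 2048
    and vm["mem_peak_week_pct"] <= 50
]
# Only vm-a qualifies: powered on, at least 2 GB allocated, weekly peak under 50%.
```

Each clause of the search query maps to one condition in the filter; the powered-off VM and the small VM drop out just as they would in the product.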

VManScope.png

Change the recommendation

Now that we have the scope of the query set to the VMs we are interested in, we next tackle the recommendation.  In this case, I am going to take the peak utilization and multiply it by the VM memory to get the peak usage in MB.  I will multiply that by 1.25 to add some buffer, and then use some fancy XPath functions to round the recommendation to a multiple of 256 MB.

     concat ('Reduce ', /virtualMachine/memory, ' MB to ', (ceiling(/virtualMachine/memoryPeakUtilization/week * /virtualMachine/memory * 1.25 div 100 div 256))*256, ' MB')

 

Just to break that down:

  • Multiply weekly memory peak * VM memory * 1.25 (buffer) div 100 (because the peak is given as a percent)
    /virtualMachine/memoryPeakUtilization/week * /virtualMachine/memory * 1.25 div 100
  • Build a multiple of 256 MB by dividing by 256, rounding up to the nearest integer, and then multiplying by 256.
    (ceiling(... div 256)) * 256
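The whole formula can be sketched as a small Python helper (hypothetical, not part of the product) to verify the arithmetic:

```python
import math

def recommend_memory_mb(allocated_mb, peak_pct, buffer=1.25):
    """Peak usage in MB, plus a 25% buffer, rounded up to the next multiple of 256 MB."""
    peak_mb = allocated_mb * peak_pct / 100    # peak is given as a percent
    buffered_mb = peak_mb * buffer             # add headroom over the historical peak
    return math.ceil(buffered_mb / 256) * 256  # round up to a multiple of 256 MB

# The validation example later in the post: 2048 MB allocated, 19.99% weekly peak.
print(recommend_memory_mb(2048, 19.99))  # 512
```

This mirrors the XPath term by term: the multiply and `div 100`, the 1.25 buffer, and the `ceiling(... div 256) * 256` rounding.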

 

VManAlertNotification.png

 

Save and view the results

Once you click "Save", it will return you to the alerts view and immediately show you all VMs that match this condition. In this case, we now have 31 VMs that meet the criteria, each with a memory recommendation.

VManAlertSummary.png

 

Validation

To validate the recommendation, click the scroll icon for one of the VMs in the list.  This will take you to the detail view of the item. Switch to the XML view by clicking the very small icon next to the name; this will allow you to see the latest instances of all the data.

VManVMxmlSwitch.png

Once you have the XML View open, you can look for the corresponding metrics (via the search bar at the top) to validate the suggestion:

  • memory: 2048
  • memoryPeakUtilization/week: 19.99

Calculation: 2048 * 19.99 * 1.25 / 100 ≈ 511.7, and rounding up to the nearest multiple of 256 gives 512 MB, which matches the recommendation in the image above.

So in this case, we could reduce memory on this VM down to 512MB and have a 25% buffer over the historical peak value.

 

Additional alert features

Some of the features of alerts we skipped over:

  • Condition must be sustained for a certain amount of time before the alert fires
  • Notification via email - we can send these alerts to multiple users
  • Execute an external action - for example, capture the currently monitored processes on the VM
  • SNMP traps and overrides

For more information on alerting, please see our help documentation.

 

Alert Ideas (and more)

The default alerts packaged into Virtualization Manager are based on lots of customer feedback, but they are by no means one-size-fits-all!  We highly recommend you customize the alerts (and dashboards and widgets) to fit your environment.  For example, the choices I made in building the above alert (ex: a weekly peak under 50% of total, or the 1.25 buffer) should be customized to fit how you want to run your environment. Some more ways you can leverage the search:

  • You could also take the searches and functions and rework the "Top-N: VM Memory Overallocated" widget on the Sprawl Dashboard.
  • If you wanted to alert when a drive goes under 700MB, you can take "Guest Storage Space Alert", copy and change it to:
    • Scope: vm.powerstate:poweredOn AND vm.vol.freeSpace:[0 TO 734003200]
    • Alert: string-join(for $fullVolume in /virtualMachine/diskVolume[freeSpace<=(700*1024*1024)] return concat($fullVolume/mountPoint, ' (' , floor($fullVolume/freeSpace div 1024 div 1024), 'MB)'), ', ')
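The string-join expression above can be hard to parse at first glance; here is the same logic sketched in Python with hypothetical volume data (freeSpace in bytes, as in the XML view):

```python
# Hypothetical volume data for one VM, mirroring /virtualMachine/diskVolume.
volumes = [
    {"mountPoint": "C:\\", "freeSpace": 500 * 1024 * 1024},        # 500 MB free
    {"mountPoint": "D:\\", "freeSpace": 10 * 1024 * 1024 * 1024},  # 10 GB free
]

threshold = 700 * 1024 * 1024  # 700 MB, the same 734003200 bytes as in the scope query

# Equivalent of the string-join(...) XPath: list each low volume as "mount (NNNMB)".
alert_text = ", ".join(
    f"{v['mountPoint']} ({v['freeSpace'] // (1024 * 1024)}MB)"
    for v in volumes
    if v["freeSpace"] <= threshold
)
# Only C:\ is under the threshold, so the alert text names just that volume.
```

The XPath `for ... return concat(...)` plays the role of the generator expression, `floor(... div 1024 div 1024)` is the integer division to MB, and `string-join(..., ', ')` is the final join.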

The possibilities are truly limitless with the search and xpath capabilities.

 

Other thoughts and references

Here are a few other thoughts to consider when building alerts, as well as some additional references:

  • Check out our help for Search in our documentation.
  • Here is a list of all properties and metrics that Virtualization Manager collects.
  • Performance and configuration data are collected on different intervals, so if you are mixing them in formulas, just be aware that the times will not necessarily align.
  • Rollup for peaks and averages are calculated a couple of times a day and restart at the beginning of their time period (day, week, month).  See this KB article for more information on data collection and rollup.
  • xPath functions are fun - see this excellent general xPath tutorial.

Just in time for the holidays is more goodness for Virtualization Manager.  The new version is available to all customers under maintenance in the Customer Portal, but if you are not a customer, you can always download an eval or go to our Live Demo to try it out!

 

What's New in Virtualization Manager

 

  • Virtual Desktop Dashboard - a dashboard dedicated to your desktops, allowing you to quickly see top consumers in your environment.  Combined with our new per-Socket licensing, this makes Virtualization Manager a perfect fit for your VDI environments.
  • More Hyper-V data - we've added more storage data in our Host and VM views.
  • Improved speed and scalability - numerous collection improvements have increased the speed and scalability of both collection and the GUI.
  • Many minor improvements (see the Release Notes for more info) - one example is sorting objects in map views and widgets by alert severity.

 

We will expand on a couple features below.

 

Virtual Desktop Dashboard

If you have a virtual desktop infrastructure (VDI) and need insight into your performance and capacity, our new dashboard is for you.  With it, you can:

  • Find out how many and what kinds of desktops are running in your VDI.

Desktop OS Breakdown.png  Desktop VM Count.png

 

  • Identify and alert on which desktops are consuming the most resources (CPU, Memory, Network, Disk)

Desktop VMs CPU Ready Latest.png  Desktop VM Memory Ballooning Latest.png

Desktop VM Network IO.png  Desktop VM IOPs.png

    

  • Identify which desktops are about to run out of space

Desktop VM Disk 95pct Full.png Desktop Datastore Low Free Space.png

 

  • Find capacity and performance issues at the datastore and cluster levels.

Desktop Cluster CPU Util Latest.png  Desktop Cluster Memory Util Latest.png

Desktop Datastore IO Latency Latest.png

 

Severity Ranking in Maps

Previously, our Map View and Map Widget would sort objects (datastore, VM, etc.) in a way that could hide those with issues.  In this version, sorting is by severity, so all your problems bubble up to the top.

  • Map Widget: Now your more severe issues appear at the top:

     Environment Map.png

 

  • Map:  Each widget in the map will bubble up the objects with the most severe alerts, so you can quickly find where your problems are, and who and what are being affected by them.  We also changed the object popup (see the VM in the image) to show the most severe alerts first.

MapView1.png

 

That's all the goodness we have to show today, so give it a try and let us know what you think!

Customers can download Virtualization Manager from the Customer Portal, but if you want to try it, you can download an eval.

We are currently working on Virtualization Manager 5.1 and beyond.  Some of the items we hope to deliver:

  •   Expanded support for Hyper-V, including support for Hyper-V v3
  •   Support for VMware 5.1
  •   Enhanced support for VDI infrastructures
  •   Simplified configuration for Hyper-V

 

PLEASE NOTE: We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

 

If you don't see what you are looking for here, you can always add your idea(s) and vote on features in our forum.

Please join us for a free webinar with Scott Lowe, Founder and Managing Consultant at The 1610 Group, and SolarWinds virtualization expert Brian Radovich. We’ll be discussing “Performance Management and Capacity Planning in VMware® and Hyper-V® environments.”
Implementing a management solution that is designed specifically for VMware and Hyper-V environments is critical to ensure that you are monitoring the metrics and issues unique to virtual environments. In this presentation, we will discuss:

 

  • How virtualization affects each of the 4 traditional hardware resource areas
  • The top areas you should be looking at when choosing a management solution for your virtual environment
  • The reasons why you need to use management solutions that are designed specifically for virtualization.

 

Register now for this special event!

We are currently working on STM version 5.7 and beyond (in parallel).  Some of the items we hope to deliver:

  • Storage Manager Server and Agent health and status overview
  • Product stability improvements
  • Preservation of Agent and Server settings on upgrade
  • Improved graphs
  • User-defined LUN Grouping
  • EMC PowerPath Support
  • Better "Storage Group" Dashboards


Disclaimer:  Comments given in this forum should not be interpreted as a commitment that SolarWinds will deliver any specific feature in any particular time frame. All discussions of future plans or product roadmaps are based on the product team's intentions, but those plans can change at any time.

 

Dan

This week Storage Manager, powered by Profiler, version 5.2 was released.  For Storage Manager and Storage Profiler customers, this release includes many of the features you have clamored for, and we think you will be pleased.  The release focused on supporting Dell Compellent, improving support for other arrays, improving the Target View and charting, and deeper integration with Virtualization Manager 5.0.

 

Support for Dell Compellent

With the addition of the Compellent arrays, Storage Manager is the only storage product that supports all of Dell's arrays - Compellent, EqualLogic and the PowerVault MD30/32/36 series.  Features of the Compellent support include:

  • Array discovery and configuration
  • Storage capacity, usage and allocation (including thin provisioning)
  • Asset information
  • Performance data (Array, Disk, LUN, Port)
  • Top Ten LUNs
  • Alerting
  • End-to-End Correlation and Mapping
  • Target View

Here is a fully populated Target View for a Compellent Array:

Improved Support of Arrays

In every release, we generally include improvements to array support that users have been asking for, and Storage Manager 5.2 is no exception.  Here are the changes in this release:

  • HP EVA - RAID group performance reports
  • IBM SVC/V7000 - Performance metrics are now per object, rather than split across nodes.
  • IBM DS 6/8K - Array level metrics

 

Better Charting

The layout of the charts has been improved in several ways to make the data easier to see and interpret.  These changes affect the charts generated throughout the product - from the tab menu, from the report chart icon, and for charts emailed or published by report schedules.  Improvements include:

  • Charts are a uniform width, which is especially handy when they are stacked in emails.
  • The number of items charted is defaulted to 5 instead of 10 (user definable)
  • Changed the orientation of the x-axis to be horizontal
  • Moved the legend to the bottom, the filters to the top, and the create report to the top right.

Here is a sample of the new chart, with the changes highlighted:

Improvements to the Target View

The Target View provides a single view of contention across both the array and virtualization layers.  In Storage Manager 5.2, we made a couple of improvements:

  • Support for NetApp NFS targets, giving you the same great end-to-end view for NFS that we have for iSCSI and Fibre Channel.
  • Populated performance charts for all arrays.

 

Integration with Virtualization Manager

Storage is an integral part of your virtual infrastructure, and SolarWinds' award-winning Virtualization Manager covers virtualization, but what happens when you need to look beyond the virtual layer and take a deep dive into storage?  In STM 5.1, we added the ability to do this with a single click, drilling from a VMware object in Virtualization Manager to the associated storage target (LUN) in Storage Manager.  In STM 5.2, this capability is extended to the new links in Virtualization Manager 5.0 (not yet released).  Stay tuned for more information once VMan 5.0 is out.

 

Odds and Ends

Additional features and capabilities that made it into this release:

  • Extended alerting to more performance metrics
  • Links from Virtualization Manager 4.1.3/5.0 also work with Storage Profiler 5.2
  • Storage Profiler 5.2 and Backup Profiler 5.2 were also released

 

All in all, this is a great step forward for Storage Manager that improves not only the data available, but also how it is displayed and used. Let us know what you think.

We are currently working on STM version 5.2 and beyond (in parallel).  Some of the items we hope to deliver:

  • Support for Dell Compellent
  • Adding more metrics to the rules engine
  • Improved Integration with other SolarWinds Products
  • Support for NetApp's ONTAP 8
  • Additional features for EMC VNX arrays
  • Support for HP MSA
  • Improvements to the GUI, Reporter and Charting.
  • End-to-End Visualization - show a visualization of each data path, with drill downs to associated paths. 

PLEASE NOTE:  We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

Brian

Last week Storage Manager, powered by Profiler, version 5.1 was released.  For STM and Profiler customers, this release has many of the features you have clamored for, and we think you will be pleased. The release focused on improving usability, supporting new devices, and integrating with Virtualization Manager (also released last week).

 

Integration with Virtualization Manager

Storage is an integral part of your virtual infrastructure, and SolarWinds' award-winning Virtualization Manager covers virtualization, but what happens when you need to look beyond the virtual layer and take a deep dive into storage?  You can now do this with a single click, drilling from an object in Virtualization Manager to the associated storage target (LUN) in Storage Manager, where you are able to see and diagnose storage bottlenecks in your virtual infrastructure in a single view.  See the related post for more details on the integration.

 

Target View (aka LUN View)

The Target View unifies all the information about a LUN into a single view so you can immediately see the cause of performance issues and drill down for further investigation.  Highlights of this view are:

  • Target Details - the array information about the target
  • Target Mapping - information about the datastore and what it is connected to
  • Targets on this Group - a ranking of LUNs on this RAID Group, allowing you to quickly see if there is cross-LUN contention
  • Top 10 VM - ranks the VMs on this target by IO

The Target View allows you to quickly determine whether your IO bottleneck is being driven by a VM on that target, or caused by another target that shares the same physical disks - all in a single view.

Extensive Linking to and from the Target View

The new Target View would be useless if it were hard to get to, but we have made it extremely easy by adding links to it in every report that has a LUN identifier.  This means you can be almost anywhere in the product - on an array, VM or ESX server - and get to the Target View in a single click.  Even better, the Target View lets you drill back to other objects with a single click as well.  All of these links simplify navigation through the product, turning 5-10 clicks into 1 or 2.

Top Ten LUNs

On the main console and on each array, you now have reports that rank LUNs by Total IOPS, Latency, Reads and Writes, so you can quickly identify which LUNs are busiest. And, you have a link right to the Target View, so you can quickly determine the cause of the issue.

Support for IBM XIV and HDS VSP

One of Storage Manager's benefits has always been array support, and we continue with that tradition with full support for XIV and VSP.  The addition of XIV is especially significant for IBM customers, because this allows you to monitor the full line of IBM arrays (DS, SVC, V7000, XIV, N-series and ESS) from a single product.

Odds and Ends

Additional features and capabilities that made it into this release:

  • Support for VMware to SVC End to End Mapping
  • Improvements to Speed and Scalability
  • Storage Profiler 5.1 and Backup Profiler 5.1 were also released
  • Backup Profiler adds support for NetBackup 7.0 and Backup Exec 2010

All in all, this is the most significant release for Storage Manager since the acquisition, adding support for two new devices and taking a tremendous step forward in usability by leveraging the integration of SolarWinds technology into the core of Storage Manager - and this is only the beginning.

In a previous post we discussed Building Reports in Profiler in Storage Manager, but I only gave one example (volume usage on hosts or VM).  A great way to leverage the reporting is to build ranking reports to identify hot spots in your environment - be it array, LUN, switch, host or VM, you can find the potential trouble points in your environment before they find you.

For example, say you wanted to identify LUNs based on different ranking criteria:  

  • Busiest LUNs on average in the last 24 hours
  • Busiest LUNs for one hour in the last 7 days
  • Least busy LUNs on average in the last 7 days

All can be done quickly with ranking reports, which can be combined into a single report in an email via a schedule (but more on that later).

To build a ranking report, first we need to pick a report template, so go to My Reports and press the New Report button.  In the drop downs, for OS select "Storage Array", then for Category select "Performance", then for Template select "LUN Performance". 

In the first screen, you can choose the columns for the report.  Move everything to the left by pressing the "<<" button, then select the fields "LUN ID", "Host Name" and "Total IOs/sec" from the available fields and press the ">" button.  Note I am leaving the "Time" field out for now; more on why later.

Next, give the report a name and description, then we will edit the "Sort By".   First, clear out the Sort By text box by selecting each line and pressing the "Delete Sort" button.  Once all the default sorts are removed, then select "Total IOs/sec" in the first drop down and "Descending" in the second drop down, then click the "Add Sort" button.  When you are done with the Sort By section, it should look like this:

You can skip the Filters for now, as we want to rank all LUNs - but in the future, if you wanted to limit the list to a specific set of LUNs by name or pattern, you could do that in the filter section.  At the bottom of the page, press the "Save and Run" button  to execute your report.

When executing the report, you have the following options to choose from:

  • Groups: Which group of arrays for the report (default All)
  • Resources: Which arrays for the report (default All)
  • Data: The data rate (ex: Raw, Hourly, Daily, Weekly, Monthly)
  • Range: The time range, with both default and customizable ranges
  • Format: The output format of the report
  • Time Zone: What time zone the report times should be shown
  • Rows Per Page: How many rows to show in the report

Depending on what you want to rank, you will need to choose the appropriate data rate and time range.  Here are a couple of examples:

  • If you want the ranking to show only one data point per LUN, then you must synchronize the data rate and time range.  For example, if you want to rank the LUNs for the last 24 hours, choose a Data of "Daily" and a Range of "Last 24 hours".  For this report, I don't need the time field since each LUN has only one row.
  • If you want a ranking report to show multiple data points per LUN over a time range, then you don't need to synchronize the data rate and time range.  For example, if you wanted to rank the LUNs by the busiest hour in the last week, you would choose a Data of "Hourly" and a Range of "Last 7 Days".  For this report, I want the time field so I can identify when each LUN was busy.
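The difference between these ranking styles can be sketched in Python with hypothetical hourly IOPS samples (this mimics the report logic, not Storage Manager's internals):

```python
# Hypothetical hourly total IOPS samples per LUN over the last week.
hourly = {
    "lun-01": [120, 300, 90, 4500, 150],   # mostly quiet, one very busy hour
    "lun-02": [800, 850, 900, 780, 820],   # consistently busy
    "lun-03": [10, 5, 0, 2, 1],            # nearly idle
}

# "Busiest hour in the last week": rank by each LUN's single highest hourly sample.
busiest_hour = sorted(hourly, key=lambda lun: max(hourly[lun]), reverse=True)

# "Least busy on average": rank by the weekly average, ascending -
# candidates for reclamation, like the no/low-activity report below.
least_busy = sorted(hourly, key=lambda lun: sum(hourly[lun]) / len(hourly[lun]))
```

Note how the two rankings disagree: lun-01 tops the busiest-hour list because of its one spike, while lun-02 would top an average-based ranking - which is exactly why the data rate and time range you pick matter.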

Here is an example of a ranking report identifying the busiest LUNs in the last 24 hours (no time column):

Here is an example of a ranking report identifying the busiest hour for a LUN in the last 7 days (with time column):

Here is a report that looks at the weekly average for each LUN to identify LUNs with no or low activity (again, no time column) - these could be LUNs you could reclaim.

Note this technique can be applied to any performance data in Storage Manager (RAID group, switches, hosts, servers, VMs, etc.), including all the arrays we support (EMC CLARiiON, NetApp, HP EVA, 3PAR, etc.), so don't hesitate to try them as well.

Thanks for taking the time to read our blogs, and as always, we would love to hear your thoughts and ideas on Storage Manager.

Extra Curricular Activities:

As many of you know, we have combined Storage Profiler with Virtual and Server Profiler to create a new product called Storage Manager, giving you the abilities and coverage of both products with simple "per disk" licensing. 

However, for current Storage Profiler customers, we are also releasing a new version of Storage Profiler (v5.0.1) that is based on the same platform as the Storage Manager product.  This new release continues the current licensing model (per disk), but now includes:

  • Monitoring Fibre Channel Switches
  • Integration with the Orion platform
  • Support for the EMC VNX and IBM V7000 arrays

Also, since the products share a platform, future development to support new arrays (as well as improvements to currently supported arrays) will be available for both Storage Manager and Storage Profiler.

We will notify you once the release is available. Storage Profiler customers under current maintenance will be able to download Storage Profiler 5.0.1 in the customer portal. If you wish to move to Storage Manager, please contact your sales representative at virtualization@solarwinds.com for pricing.

As always, please let us know if you have any questions or concerns.

Brian

bmrad

Rule Your Files

Posted by bmrad Employee Mar 16, 2011

Managing files has always been a challenge, and even in today's world of virtualization, automatic tiering and migration, finding out what files are really out there is still difficult - and even more important. In Profiler, file analysis rules allow you to find specific files across all your storage, local, SAN and NAS.  If you need a refresher before we get started, please read Files, Files, Everywhere... and File Analysis on NAS (NetApp, Celerra, etc.) and Virtual Machines.

Once you have file analysis turned on, Profiler will start showing you interesting summaries like "You have 24.5 GB of mp3s" or "17% of your disk space hasn't been accessed in a year".  If you are curious like me, those files immediately become "files of interest" that I want to track down.  Alas, Profiler only has the summary data by default; it does not store information about every file it encounters in the database.

That's where file analysis rules come in: they let you get all the details of those "files of interest" into the database so you can easily view them from the comfort of your browser.  Find that stash of MP3s that you can quickly delete to reclaim valuable storage for your virtualization environment. Identify old files that can be deleted or moved to another tier of storage.

Let's get started finding some files.   First, go to Settings > File Analysis Rules to see the list of current rules.  Click Add New Rule, then click File Analysis Rules.

The page for defining rules is very long, so we will take it in sections.

  • Rule Name:  Simply the name of the rule.  Rule names need to be unique.

This next set of parameters allows you to set criteria for file size, file age and the number of files to return.

  • Find: This allows you to filter how many files each rule will return (ex: 500) and how to rank those results (by age or size).
  • Size: Define the minimum, maximum or range of the size of the files.
  • Accessed Age: Define the minimum, maximum or range of the last accessed time.
  • Modified Age: Define the minimum, maximum or range of the last modified time.
  • Created Age: Define the minimum, maximum or range of the creation time (Windows only).

Profiler allows you to define the file path using a regular expression, in case you want to limit your results to certain directories (ex: .*[Uu]sers?.*, which matches "User", "user", "Users" and "users").

  • File Path Regular Expression: Enter a regular expression to filter the path of the file.
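If you want to sanity-check an expression before pushing the rule, the same pattern can be tried in Python against a few hypothetical paths:

```python
import re

# The file-path filter from the rule above; matches any path containing
# "User", "user", "Users" or "users".
pattern = re.compile(r".*[Uu]sers?.*")

# Hypothetical example paths - one local, one on a share, one that should not match.
paths = [
    r"C:\Users\Brian\music\song.mp3",
    "sharename/users/Matt/old_report.doc",
    r"C:\Windows\System32\kernel32.dll",
]

matches = [p for p in paths if pattern.match(p)]
# The two home-directory paths match; the system file does not.
```

Profiler's regex engine may differ in small details from Python's, so treat this only as a quick check of the pattern's intent.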

You can also select file types as criteria.  The list of file types is generated from files previously encountered during file analysis in the environment.  We highly recommend using the file type regex filter; there can be tens of thousands of types here.

  • File Type: This filter allows you to pick the file types (file extensions).  Select one or more file types on the left and click the right arrow.

You can also select file owners as criteria. This list is generated from the owners previously encountered during file analysis in your environment.

  • File Owners:  This filter allows you to pick the owners you are interested in.   Select one or more owners on the left and click the right arrow.

Ok, you created your rule - what's next?  First, you have to apply the rule to a policy: go to Settings > Policies.  You have to apply your rule to each policy where you want it used during file analysis.  If you want it applied to your servers and VMs, use the OS policy.  Also, the agent doing the work should have file analysis turned on and scheduled; see the links at the beginning of the post for more details.

So select the rules and press the down arrow, then press Save.  When you get back to the Policy page, press the Push button - that pushes the configuration to the agent doing the work.

So let's look at a specific example of a file analysis rule.  Let's say I want to find the largest files each user owns in their home directory (ex: C:\Users\Brian or sharename/users/Matt).  I can build the following rule and apply it to the desired policies.

Once I apply this to the OS policy, I get the following report:

And voilà - I found a bunch of files that I can now go delete to reclaim space.

File analysis is a really powerful feature of Profiler.  It is harder to use than we would like (and we will make that better), but the results are worth it.

Notes:

  • File Analysis works on local file systems on all agents, CIFS and NFS shares on NAS devices, and CIFS shares on Virtual Machines.
  • The number of files is limited in the rule to keep Profiler working well.  That being said, if you build a rule that is limited to 100 files, that is 100 files per target.  A target is a file system (C:\, D:\) or a share on a VM or NAS (C:\users$, /user/brian).  If you have 1000 targets, that means the rule would return information on up to 100,000 files. 
  • File analysis is driven by the schedule on the agent that the file analysis is assigned to.
  • If file analysis has previously been run, when you push out a new rule, it will be evaluated immediately on the historical data stored on the agent.
  • There are a few default file rules - Biggest files, oldest files, and new files.

As always, let us know your thoughts, suggestions and experiences with File Analysis.

As many of you know, Storage Manager (Profiler) collects a plethora of data - maybe even too much - but many of you ask how to set thresholds and alerts so you can be notified when something is amiss.  In Profiler, getting alerts involves three steps:

  1. Build a rule, which includes a threshold on the metric of interest
  2. Assign it to a policy (i.e., the set of resources you want to monitor) and push it out
  3. Set up a notification to alert you via email when the trap is received

For the threshold, let's focus on performance metrics for now - although you can set storage and asset change thresholds as well.

Go to Settings > All Rules > Add New Rule.  From the list of  choices, choose Threshold Rule.  You should see the following screen:

Some quick definitions:

  • Section - basically the scope of resources this rule applies to (ex: NetApp)
  • Category - the types of metrics applicable to that section (ex: LUN Performance)
  • Instances (if applicable) - the instances of the metric we are monitoring (ex: All Instances)
  • Condition - the threshold on the metric (ex: Average Latency (ms) > 20)
  • Duration - how long the condition has to be met before the threshold is triggered (ex: 0 Min)
  • Choose Action - choose one action (ex: Send Trap)

 

So what we are telling Profiler in this example is to send us a trap whenever any instance of a NetApp LUN has average latency greater than 20ms. Before moving to the next step, a couple of cool things:

  • When you set Profiler to All Instances, new objects are covered automatically.  If you create a new LUN, Profiler will automatically apply the rule to that instance.
  • You can pick one or more instances - so you can get very particular if you need to.
  • The duration allows you to filter out noise, so you don't get alerted on every little spike.
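To illustrate how a duration filters out noise, here is a hypothetical sketch (not Profiler's implementation): the condition has to hold for consecutive samples spanning the duration before a trap would fire.

```python
# Hypothetical latency samples (ms), one per minute, for a NetApp LUN.
samples = [5, 22, 25, 21, 30, 8, 40]

THRESHOLD_MS = 20   # Condition: Average Latency (ms) > 20
DURATION = 3        # minutes the condition must be sustained

def breaches(samples, threshold, duration):
    """Yield indices where the threshold has been exceeded for `duration` consecutive samples."""
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run >= duration:
            yield i

alerts = list(breaches(samples, THRESHOLD_MS, DURATION))
# Fires at indices 3 and 4 (the sustained 22, 25, 21, 30 stretch);
# the lone spike of 40 at the end is filtered out by the duration.
```

With a duration of 0, every sample over 20ms would fire; raising the duration trades alert latency for fewer false alarms.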

So, you have your rule; now you have to apply it.  In Profiler, you do that via policies - which are just collections of resources of the same type that you configure at the same time.  Every resource type has a Default Policy, and that is the one we will use today.

Go to Settings > Policies and click the edit icon for Default NetApp Filer Policy (let us stick with NetApp for this example)

Click Rules and you will see a list of rules that are available to be  assigned, or already assigned to the policy.  Note there are default  rules already assigned to identify problems for you.  To assign a rule  to the policy, click the rule and press the down arrow, and then press  the Save button.

Now the rules is assigned to the policy - but - make sure you press  the Push button to update the configuration on the agents monitoring the  NetApp Filers.

So now, if the condition is met, the agent sends a trap to the Profiler server and you will see the trap in the Event Monitor, where you can then manage the event.

However, if you want to receive an email for that event, you need to turn on notifications.  Notifications are enabled per user, so go to Settings > Users and click the edit icon for your login.  If you have defined an email address, you will see a "Notifications" section.  Click the Add button in this section.

Now you can add a notification for one resource or a group of resources.  Choose "Groups" and then choose "All Devices".  You can then pick the trap severities you want to be notified about, and one or more email addresses to send the email to.

Whew! That was a few too many steps (hint, we will make this better in the future) - but now I can safely sleep knowing that I will be notified if I have a problem.

As a bonus, I'll throw in a few notes about managing the events on the Event Monitor:

  • Events that occur over and over again for the same object only notify you on the first occurrence, but maintain a count thereafter (hence the Count column in the Event Monitor)
  • You can acknowledge and clear traps through the Event Monitor
  • In Settings > Server Setup > Server, you can turn on automatic clearing of events after a certain amount of time.

Thanks for listening - and as always, if you have thoughts or feedback, we would love to hear it.

bmrad

Holiday Schedules

Posted by bmrad Employee Dec 19, 2010

As you wind down for the holidays, you want to get away from it all - as long as you feel secure that nothing bad will happen while you are wrapping presents and drinking eggnog!  Orion does quite the trick with the new features in NPM 10.1 (see Meet the Features of NPM 10.1 - Mobile Device Views and Alert Management Enhancements), but what about your storage? You want to make sure there are no Grinches stealing all your space on Christmas Eve.

Although you could view Profiler on your phone while dashing through the stores, scheduling reports to come to you is often a better way to go.  Any Profiler report (pre-defined or custom) that can be run on demand can also be scheduled with all the same options - and setting them up is easier than finding a red-nosed reindeer in winter fog.

To start, go to Quick Reports > Report Schedules.  If you have created schedules before, they will be listed here.  Click the New Schedule button.

The schedule screen allows you to define:

  • When the schedule should run.
  • Who should receive a copy of the scheduled reports.
  • Which reports should be included in the schedule.
  • How the emails should be delivered.

Note you can deliver the reports to multiple recipients, and you can embed the reports in the email, attach them as a file, or publish them and just include a link.  Once you have defined the schedule, press the Save button.  This will take you back to the schedule list, so click the edit icon for the schedule you just created, and now you can add reports to the schedule.  Yes, the flow is a little clunky, but it's still better than sticking your tongue to a frozen pole in the schoolyard.

Next you can choose the reports you want to add by clicking the Add button in the reports section.  A report list will appear, allowing you to select any reports you want; then click the Add button to add them to the schedule.

The options that come up for that report are the same as when you execute the report on demand. 

Once you have added all the desired reports, press the Save button and you will be returned to the Schedule list. You can test the schedule by clicking the Run icon (this will send the scheduled report to all recipients).

So now you have your reports emailed to you just when you need them, so you can sleep more soundly during the Holidays.

How do users leverage Profiler schedules?

  • Identify full or fast-growing storage
  • Track storage usage per department by leveraging Profiler grouping
  • Identify where storage can be reclaimed
  • Show the busiest LUNs, VMs, vDisks, etc.
  • Publish data that internal applications can consume

So before the in-laws arrive and park their RV in your front yard, make sure you schedule all the Profiler reports you think you'll need to survive the Holidays.

And remember, even if it gets stressful at times, it's a wonderful life.

Merry Christmas!

A number of you have asked how to build custom reports in Profiler and then email or publish them on a schedule.  This week we will cover building a custom report, and will cover scheduling in a future post.

Like Orion, Profiler reports can be customized to meet your needs.  In Profiler, all reports are based on report templates, which describe the fields and query necessary to build a report.  When you build a custom report, you are simply selecting the fields you want to see, in the order you want to see them, and adding filters to narrow the results.  At report execution (either on-demand or scheduled), you can select the set of devices, the time period, and other variables to narrow the results further.

For example, let's say I want to find all the servers in my environment where the C:\ drive is greater than 90% full, and I want to understand how fast they are growing.  We can leverage the template "Volume Usage Forecast" as a starting point.  Go to Reports > My Reports, press the New Report button, and then select the template to use.  Note the template list is long, and we are working to make it shorter.

Select Enterprise > Storage > Volume Usage Forecast and press Continue:

Next, we can select the columns we want in the report and the order of those columns.  Note each template has defaults for the columns selected, the column order, and the sort order.  In this case, there are 10 columns selected by default, but I am going to remove the Eighty and Ninety percent columns by highlighting them and pressing the left arrow.

Next we will select the sorts and filters for the report, allowing us to narrow the output to exactly what we want.  Since this configuration screen is long, we will tackle it in parts.

First we will enter the name of the report, "C Drives Over 90 Percent", and in this case we will leave the default sort (server, volume) in place.  If we wanted to change the sort order, we could remove current sorts by highlighting them in the text box and pressing "Delete Sort", and add new sorts by selecting the Sort By field and order and then pressing "Add Sort".  Note the order of the fields in the text box is the order in which the sorts will be applied to the report.

Next, we will pick the filters.  In general, you can filter by any column in the report, and the filter options will match the data type:

  • Text Fields: like, not like, <, >, =, <=, >=, <>, in, not in
  • Numeric Fields: like, not like, <, >, =, <=, >=, <>, in, not in
  • Date Fields: <, >, =, <=, >=, <>, before, within

The basic query we will build is "Volume Name like C: AND % Used > 90".  For the first part, select "Volume Name" and "like", enter "%C:%", and then press "Add Filter" - you should see the filter appear in the text box.  For "like" filters, make sure you add the "%" before and after the text.  Since this is the first filter, we can ignore the AND/OR drop-down.

For the second filter, choose "AND", "% Used", ">", enter "90", and press "Add Filter".
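The two filters above behave like a SQL WHERE clause, with "%" as the wildcard.  As a rough illustration, here is the same logic applied to some made-up report rows in Python (the field names and data are hypothetical, not Profiler's actual schema):

```python
# Sketch of "Volume Name like %C:% AND % Used > 90" applied to sample rows.
# fnmatch uses shell-style wildcards, so SQL's % maps to *.
import fnmatch

rows = [
    {"server": "web01", "volume": "C:", "pct_used": 94.2},
    {"server": "web01", "volume": "D:", "pct_used": 97.0},
    {"server": "db01",  "volume": "C:", "pct_used": 45.1},
    {"server": "app03", "volume": "C:", "pct_used": 91.8},
]

# Both conditions must hold: the volume name matches AND usage exceeds 90%.
matches = [
    r for r in rows
    if fnmatch.fnmatch(r["volume"], "*C:*") and r["pct_used"] > 90
]

for m in matches:
    print(m["server"], m["volume"], m["pct_used"])
```

Note how the nearly full D: drive is excluded by the name filter, and the half-empty C: drive is excluded by the usage filter - only rows satisfying both conditions make the report.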

Finally, there are a few additional items to select:

  • Time Zone: the default time zone of the report, which can be changed at run time.
  • Rows per Page: the default number of rows per page.
  • Permissions: which users can see this report - Myself, Group (all users in my groups), or All (everyone).

You are ready to save the report; your page should look like this:

Available actions:

  • Save Report - saves the report and returns to My Reports.
  • Save and Run - saves the report and takes you to the Run Report page.
  • Cancel - aborts changes to the report and returns to the My Reports list.

For this report, if you select Save and Run and then run the report, the output should look like this (assuming you have some C:\ drives over 90%):

So that is how you build a custom report - it is quite flexible and lets you get to your data exactly how you want it.

So how do users leverage the reporting functionality?

  • Identify problem areas before they become "problems" - busiest LUNs, fullest drives, busiest VMs or ESX hosts, etc.
  • Chargeback - charge users, departments, customers, etc. for the storage, backups, or servers they are using.
  • Backup compliance - identify full backups that have not been successful for more than 7 days.
  • User quotas - identify users exceeding their storage quota.

How are you using reports in Profiler?  We would really like to hear.

PS: Cool new feature alert! In later versions of Profiler, we added the ability to group filters, so you can do "Find all (C: OR D:) drives that are over 90%".  Lots of people have asked for this feature, and now you can do it!
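Grouping matters because it changes how the boolean expression is evaluated.  A quick sketch with made-up rows shows the difference between the grouped and ungrouped forms of that query:

```python
# "(Volume Name like C: OR Volume Name like D:) AND % Used > 90"
# Field names and rows are illustrative only.
rows = [
    {"volume": "C:", "pct_used": 50},   # C: drive, but half empty
    {"volume": "D:", "pct_used": 92},   # D: drive, nearly full
    {"volume": "E:", "pct_used": 99},   # full, but not C: or D:
]

# With grouping: (C: OR D:) AND > 90  ->  only the full D: drive qualifies.
grouped = [
    r for r in rows
    if (r["volume"] == "C:" or r["volume"] == "D:") and r["pct_used"] > 90
]

# Without grouping, AND binds tighter than OR: C: OR (D: AND > 90)
# -> the half-empty C: drive sneaks into the results, which is not what we meant.
ungrouped = [
    r for r in rows
    if r["volume"] == "C:" or (r["volume"] == "D:" and r["pct_used"] > 90)
]

print(len(grouped), len(ungrouped))  # 1 2
```

That stray half-empty C: drive in the ungrouped result is exactly the kind of surprise the new filter grouping lets you avoid.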

For our Profiler customers, the Profiler - What we are working on... post is in progress, but many of you have asked which backup applications Profiler supports.

Here is a quick summary of where we are headed:

  • We are actively working on support for the latest versions of the following backup products:
    • NetBackup 7
    • BackupExec 2010
    • EMC Networker 7.6
    • TSM 6.2
    • Commvault 9

PLEASE NOTE:  We are working on these items, but this is NOT a commitment that all of these enhancements will make the next release.  If you have comments or questions on any of these items (e.g., how would it work?), please let us know!
