We are looking to provide performance data and graphs for certain metrics to our business groups. If done weekly, the graphs stay reasonably accurate, since the data is only normalized into 1-hour increments - still allowing both us and the group to see most of the usage peaks.
But if we try to use monthly reporting, the data becomes almost useless in graph form, since it uses 12-hour increments. That loses 90% of the peaks and valleys, dropping the graph's value to nil. The monthly graphs for the same metrics within vCenter still show all the peaks and valleys - some become almost impossible to discern, but at least you can still see that there are changes in activity/load.
Is there any way to force a graph to use a different number of data points, normalization ranges, etc.? When zooming in to specific periods, the normalized data changes from 12 hours to 1 hour to 5 minutes. So the more granular data is obviously still there in the system (which is good).
The goal is to use this product for all of our VMware reporting. Having to take screenshots of individual virtual machine performance pages from within vCenter because I can't report on the monthly metrics from VMAN accurately is a giant pain in the.... well you get the picture.
In order to make the charts responsive, they are limited to 250 data points at a time. As you've found, the data will be loaded as folks zoom in to a tighter time range. There is no current workaround in VMAN proper (barring the integration with Orion).
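VMAN's internal aggregation isn't documented here, but the flattening effect being described can be sketched generically: when a month of 5-minute samples is squeezed into ~250 points by *averaging* each bucket, short spikes nearly vanish, while a *max*-per-bucket roll-up would preserve them. The sample data below (baseline load with occasional evening spikes) is entirely hypothetical.

```python
import random

def downsample(series, buckets, agg):
    """Reduce a series to `buckets` points using the given aggregate function."""
    size = len(series) / buckets
    return [agg(series[int(i * size):int((i + 1) * size)]) for i in range(buckets)]

# Hypothetical month of 5-minute CPU samples: ~20-25% baseline,
# with a one-hour spike to 95% every third evening.
random.seed(1)
samples = [20 + random.random() * 5 for _ in range(30 * 24 * 12)]
for day in range(0, 30, 3):
    for m in range(12):                      # one hour = twelve 5-minute samples
        samples[day * 288 + 240 + m] = 95

avg_points = downsample(samples, 250, lambda chunk: sum(chunk) / len(chunk))
max_points = downsample(samples, 250, max)

# Averaged roll-up dilutes the one-hour spikes across ~3-hour buckets;
# a max-preserving roll-up keeps the true peak visible.
print(round(max(avg_points), 1))
print(round(max(max_points), 1))
```

This is why zooming in "recovers" the peaks: the chart reloads a narrower time range at a finer roll-up, so each of the 250 buckets covers less time and averages away less detail.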
What is the use case for users to get that granular of information in a monthly report? Other than "they just want it"? I'm digging to understand, because this is the first time I've heard this requested. As you mentioned, the graphs become almost unreadable at that level with the raw data. We did add a "force Raw" reporting option to the latest version of Storage Manager, but the use case there was to troubleshoot a specific issue that occurred. So theoretically, you would be interacting with the charts as you are now by zooming in to a level. You could also report on the monthly value peak, or weekly peaks, if you wanted to see very high usage periods.
Finally, there are no 12-hour roll-ups - the roll-ups are hourly and 24-hour.
You are 100% correct on the 12-hour part - I was bouncing between a lot of things and my brain pulled the 12-12 markers and ran with a 12-hour increment.
The use case for this is that we have a group that is very intent on keeping an eye on their peaks (mostly CPU related) on a monthly reporting basis. They have high workloads every few evenings that we know cause peaks, but they have no desire (currently) for weekly reporting round-ups. Just monthly. With the 24-hour normalization in VMAN graphing, we aren't able to reliably show the workload their systems are under.
We have tried using the peak-usage metrics, but went through a long support back-and-forth because the values were not trustworthy at all (i.e., reporting >100% CPU usage).
For the time being, and for the foreseeable future, we are using the monthly performance reports directly within vCenter for the specific systems.
But my $0.02 is that providing reporting out to groups/customers/etc. on a monthly (as opposed to weekly) basis would not be that uncommon. The amount of normalization occurring in VMAN makes it more difficult to provide graphs that closely portray a specific server's load once you push past 1 week. Not to say that I don't understand where it is coming from in terms of readability, but sometimes you don't need to read a graph in fine detail to see the trends you need to see.
Ok - so the CPU Used % counter that had problems should be replaced by either CPU Utilization of Host % or CPU Load. Unfortunately, it doesn't appear that either of those have a "Peak" attribute equivalent, so that will need to get addressed on our end. Once that's done, you should be able to generate these same reports for those teams and see the peaks for the time period in question.