Like most of you, we have a dynamic and widespread environment. For some time I have been able to get away with four disk volume utilization alerts:
1. Warning at 80%
2. Critical at 90%
3. Warning at 85%
4. Critical at 95%
(Each of the alerts has a corresponding custom property that must be set to 'Yes' on the target node; the sketch just below mirrors that logic.)
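
For context, this is roughly what those percent-only alerts boil down to, written as Python pseudologic rather than the actual Orion trigger conditions. The 85%/95% pair is used as an example tier, and the `alert_enabled` flag is just a stand-in for the node custom property:

```python
from typing import Optional

# Example tier only; we run a second pair at 80%/90%.
WARN_PCT = 85.0
CRIT_PCT = 95.0

def current_severity(percent_used: float, alert_enabled: bool) -> Optional[str]:
    """Return 'warning'/'critical' the way the existing percent-only alerts would."""
    if not alert_enabled:          # node custom property not set to 'Yes' -> never alert
        return None
    if percent_used >= CRIT_PCT:
        return "critical"
    if percent_used >= WARN_PCT:
        return "warning"
    return None
```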

- With more apps (servers) coming into our Orion instance, I am finding that large disk drives do not necessarily benefit from the percent-utilization conditions. As an example, a 2 TB drive at 99% utilization has used roughly 1,980 GB but still has about 20 GB free; in our environment that would not constitute a critical-type ticket.
- I know I am able to define disk utilization in alerts by bytes; however, this does not scale well. In theory, each volume on each server could have a different utilization threshold.
- I was trying to come up with some other conditions, such as: disk drive 99% used AND less than XX bytes free (roughly the sketch at the end of this post). The problem with this approach is that I still need to define a byte value that may or may not be applicable to the target device.
- We obviously don't have this issue with CPU and memory, as we can define those thresholds at the node level.
- Does anyone have a method of disk utilization alerting that scales in large environments? I cannot have hundreds of disk utilization alerts, as I fear it would negatively impact performance.
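
For what it's worth, here is the kind of size-aware rule I have been toying with on paper: keep a percent condition, but make the absolute free-space floor a function of the volume size instead of one fixed byte value. This is only a Python sketch of the idea (the tier boundaries are made-up placeholders, not values from our environment), and presumably the same logic could be expressed as one custom SQL/SWQL trigger condition rather than hundreds of per-volume alerts:

```python
from typing import Optional

GB = 1024 ** 3  # bytes per GB

def disk_severity(size_bytes: int, free_bytes: int) -> Optional[str]:
    """
    Combine percent used with an absolute free-space floor that scales with
    the size of the volume, so one rule covers both small and very large drives.
    All tier boundaries below are placeholder examples.
    """
    pct_used = 100.0 * (size_bytes - free_bytes) / size_bytes

    # Pick the free-space floors based on how big the volume is.
    if size_bytes < 100 * GB:            # small volumes
        crit_floor, warn_floor = 2 * GB, 5 * GB
    elif size_bytes < 1024 * GB:         # mid-size volumes
        crit_floor, warn_floor = 5 * GB, 20 * GB
    else:                                # 1 TB and larger
        crit_floor, warn_floor = 10 * GB, 50 * GB

    if pct_used >= 95 and free_bytes < crit_floor:
        return "critical"
    if pct_used >= 85 and free_bytes < warn_floor:
        return "warning"
    return None

# The 2 TB example from above: 99% used but ~20 GB free.
# Under these placeholder tiers it raises a warning, not a critical ticket.
print(disk_severity(2048 * GB, 20 * GB))   # -> 'warning'
```

The appeal is that the tier table lives in one place instead of a byte value hand-picked per volume, but I would still be curious how others handle this without a pile of per-volume alert definitions.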