Each month I combine two different reports for a specific reporting requirement. The first report shows the average, minimum and peak response times. I select the following items:
Average Response Time sort * function AVERAGE
Peak Response Time sort * function MAX
Minimum Response Time sort * function MIN
The second report I run is a query against all of the events for response time violations. The response time alert is configured as an advanced alert: it is checked every minute and triggers when my Average Response Time limit has been violated for at least 4 minutes.
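To make the alert behavior concrete, here's a minimal sketch of how I understand the trigger logic (the function name and sample numbers are mine; the 1-minute check interval, 4-minute sustain window and 250 ms limit are from my configuration):

```python
# Sketch of the advanced alert's trigger condition: the limit must be
# exceeded on enough consecutive 1-minute checks to cover 4 minutes.

def alert_fires(samples_ms, limit_ms=250, sustain_checks=4):
    """Return True if the limit is violated for `sustain_checks`
    consecutive checks (one check per minute -> at least 4 minutes)."""
    consecutive = 0
    for value in samples_ms:
        if value > limit_ms:
            consecutive += 1
            if consecutive >= sustain_checks:
                return True
        else:
            consecutive = 0
    return False

# A 5-minute spike above 250 ms fires the alert; a broken-up spike does not.
print(alert_fires([120, 300, 310, 305, 320, 330, 140]))  # True
print(alert_fires([120, 300, 310, 140, 320, 330, 140]))  # False
```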
Response times are polled every 120 seconds and statistics are polled every 10 minutes.
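One scenario I can imagine (and this is speculation on my part, with made-up poll values) is that the monthly report's MAX runs over rolled-up 10-minute *averages* of the 2-minute polls rather than the raw polls, which would smooth a short spike below my 250 ms threshold:

```python
# Hypothetical illustration: a 465 ms spike in the raw 2-minute polls
# disappears once five polls are averaged into a 10-minute statistic.

raw_polls_ms = [30, 30, 465, 410, 30,    # 10-minute window containing a spike
                25, 30, 28, 26, 27]      # a quiet 10-minute window

def rollup(polls, per_bucket=5):
    """Average every `per_bucket` polls (2-minute polls -> 10-minute stats)."""
    return [sum(polls[i:i + per_bucket]) / per_bucket
            for i in range(0, len(polls), per_bucket)]

stats = rollup(raw_polls_ms)
print(max(raw_polls_ms))  # 465   -> the peak the alert engine could see
print(max(stats))         # 193.0 -> the "peak" a report on averages shows
```

If something like this is happening, a MAX over the rolled-up rows would sit under 250 ms even though the alert (which evaluates more frequently) fired repeatedly.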
Here's my problem:
When I pull those statistics together at the end of the month, my report shows sites where the peak response time is less than my alert threshold (250 ms), even though response time events triggered for those sites during the month. Here's a snapshot of my report.
Node     Avg Response   Peak Response   Min Response   Percent       Average        Response Time Events
         Time (ms)      Time (ms)       Time (ms)      Packet Loss   Availability   Total #    Avg / Day
Site 1   31             161             26             0%            100.00%        6          0.3
Site 2   31             685             26             0%            100.00%        14         0.7
Site 3   19             405             16             0%            100.00%        12         0.6
Note that Site 1 has a peak response time for the month of 161 ms, yet it logged a total of 6 response time events during the month. Each event means the average response time stayed *higher* than 250 ms for at least 4 minutes (with polls every 2 minutes), so at least 6 such periods occurred during the month.
Any ideas why the peak response time in my monthly report doesn't reflect what the response time events show? For interest's sake, the peak response time according to my event report was 465 ms for the month. (Gotta love rural broadband, eh?)