I have AppInsight for SQL monitoring our databases.
I have the monitoring set to poll every 180 seconds, with the exception of indexes, which poll only once an hour.
I have an alert that evaluates the trigger every two minutes; the condition must exist for more than an hour before the alert fires, and the alert resets when the condition no longer exists.
I think the polling interval and the condition delay are conflicting somewhere in this configuration. This particular alert is configured to send me an email containing the database name, instance name, host server name, and the component that triggered the alert.
Here is the problem I have noticed and it's only occurred on this one alert:
An index reached its threshold and triggered the alert. I received the email, but the component that triggered the alert was missing from it. Further investigation showed the index condition had cleared, yet the alert remained active for hours afterwards.

I tried creating a new alert today. When I tested it against a database without any problems, the email contained no component data, which seems logical since that database technically has no problem. When I created an alert and selected a database that does have an issue, the component appeared in the email.
So in this case it seems the delay on the alert trigger is so long that the condition has already cleared well before the alert has a chance to reset. Is the combination of the index poll frequency (once an hour) and the alert delay (condition must exist for over an hour) jacking up the alert? Can someone sanity check that?
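To sanity-check my own reasoning, here is a rough sketch of the timing I suspect (plain Python with invented numbers; this is not how SolarWinds evaluates alerts internally, just an illustration of how an hourly poll makes the alert engine act on stale state):

```python
POLL_EVERY = 60   # minutes: index component polled once an hour (assumed)
EVAL_EVERY = 2    # minutes: alert trigger evaluated every two minutes
SUSTAIN = 60      # minutes: condition must exist this long before firing

# Suppose the index is actually over threshold for minutes 0-89, then clears.
REAL_PROBLEM = range(0, 90)

def polled_state(minute):
    """What the alert engine sees: the real state at the most recent poll,
    not the current real state."""
    last_poll = (minute // POLL_EVERY) * POLL_EVERY
    return last_poll in REAL_PROBLEM

alert_active = False
condition_since = None
events = []
for minute in range(0, 240, EVAL_EVERY):
    if polled_state(minute):
        if condition_since is None:
            condition_since = minute
        if not alert_active and minute - condition_since >= SUSTAIN:
            alert_active = True
            events.append(("fired", minute))
    else:
        condition_since = None
        if alert_active:
            alert_active = False
            events.append(("reset", minute))

print(events)  # the alert fires at minute 60 and resets at minute 120
```

In this toy run the real problem ends at minute 90, but the alert does not reset until the next index poll at minute 120, so it lingers for up to a full poll interval after the condition clears; with the hourly poll and the one-hour sustain window stacked, the alert can fire just as (or after) the underlying condition is gone, which would explain the missing component data in the email.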
Thank you,
Jan