We have some events that automatically log tickets; otherwise we actively monitor the alerts screen.
We keep a custom dashboard up on a HUGE 83" screen hanging on the wall that shows all the critical issues going on in our environment. By feeding critical alerts into Splunk and correlating the data between Orion, help desk tickets, and log events, we can highlight where lights are flashing RED in more than one place. Alerts from the same CMDB device showing up in Orion, Splunk logs, and service desk tickets mean something may be seriously amiss, and we try to highlight those. Orion (now HCO) is really good at displaying this on modern dashboards. This goes beyond service desk tickets... it's correlated data. The goal is to see the problems long before users start reporting them.

Bill
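A minimal sketch of the kind of cross-source correlation described above, assuming each source (Orion alerts, Splunk events, service desk tickets) has already been reduced to a set of device/CI names with something active against them. The source names, device names, and the "two or more sources" threshold are illustrative, not any SolarWinds or Splunk API.

```python
from collections import defaultdict

def correlate(sources: dict[str, set[str]], threshold: int = 2) -> dict[str, list[str]]:
    """Return devices flagged by at least `threshold` sources."""
    hits: dict[str, list[str]] = defaultdict(list)
    for source_name, devices in sources.items():
        for device in devices:
            hits[device].append(source_name)
    return {dev: srcs for dev, srcs in hits.items() if len(srcs) >= threshold}

# Example: a device that shows up in Orion alerts, Splunk logs, AND
# service desk tickets gets highlighted on the dashboard.
flagged = correlate({
    "orion_alerts": {"core-sw-01", "edge-rtr-02"},
    "splunk_events": {"core-sw-01"},
    "service_desk": {"core-sw-01", "db-srv-07"},
})
print(flagged)  # {'core-sw-01': ['orion_alerts', 'splunk_events', 'service_desk']}
```

The design choice is simply to count independent sources per device rather than try to dedupe alert text, which is what makes the "flashing RED in more than one place" highlight cheap to compute.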
Alerts go to the relevant people's / teams' mailboxes. It's up to them how they deal with them. We also have a large telly on the wall with rotating dashboards showing the current issues, key systems status, a network map showing critical infrastructure status, etc. We were going to integrate with ServiceNow, but the client hasn't the money to do that right now.
We have about 15 different monitoring tools / app triggers with email alerts, all sending alerts to ServiceNow to create tickets, and we use the dashboards in ServiceNow for ticket tracking and SLAs. We use SolarWinds to clear tickets in ServiceNow if the condition clears in SolarWinds.
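A hedged sketch of that "clear the ticket when the condition clears" flow, assuming the alert's reset action runs a script (or webhook) carrying the alert ID, and that the original ticket stored that ID in the incident's correlation_id field. The instance URL, credentials, and the out-of-box resolution values (state 6, close code) are placeholders to adapt to your own ServiceNow instance; only the Table API endpoints themselves are standard.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"   # placeholder instance URL
AUTH = ("api_user", "api_password")                   # placeholder credentials

def resolve_incident_for_alert(alert_id: str, note: str) -> None:
    # Find the open incident that was created for this alert.
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={"sysparm_query": f"correlation_id={alert_id}^active=true",
                "sysparm_limit": 1},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("result", [])
    if not results:
        return  # nothing to clear

    sys_id = results[0]["sys_id"]
    # Mark it resolved with a note explaining that the condition cleared.
    requests.patch(
        f"{INSTANCE}/api/now/table/incident/{sys_id}",
        json={"state": "6",                             # 6 = Resolved in an out-of-box instance
              "close_code": "Solved (Permanently)",     # placeholder close code
              "close_notes": note},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    ).raise_for_status()

# Example: called from the alert's reset action.
# resolve_incident_for_alert("12345", "Condition cleared in SolarWinds.")
```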
Throw the ticket 'over the wall' and hope someone fixes it. No tickets are followed up on by any audit process.
We use both ticketing and emails, depending on the urgency of the alert.
Going with the flow - usually email to specific teams, but the NOC monitors production/high-priority alerts and generates tickets on those.
Multiple, unaligned, unlinked ticketing solutions. Sigh.
Once again I participated in the poll but this month's mission does not register it.
\m/