Can someone tell me what the PendingNotifications table is in the SolarWinds database? The table is currently at 167+ GB and is growing by 20-30 GB every single day. No one seems to be able to tell me what this table is or how to reduce its size.
Could you please check which tables are the biggest? You can run this query against the Orion database; it lists all tables with their sizes:
SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    CAST(ROUND(((SUM(a.total_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00), 2) AS NUMERIC(36, 2)) AS UsedSpaceMB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB,
    CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    TotalSpaceMB DESC;
Based on AppInsight for SQL, these are the top 10 tables by size:

| Table | Size | Fragmentation | Rows |
| --- | --- | --- | --- |
| PendingNotifications | 196.23 GB | 1.7 % | 74.51 M |
| NCM_ConfigParsedArchive | 9.74 GB | 0.0 % | 27.66 K |
| AlertHistory | 8.02 GB | 3.9 % | 4.95 M |
| APM_HardwareItem_Detail | 5.96 GB | 24.3 % | 49.26 M |
| NCM_ConfigArchive | 4.17 GB | 0.0 % | 28.59 K |
| APM_ComponentStatus_Detail | 2.50 GB | 51.6 % | 19.14 M |
| Events | 2.17 GB | 17.1 % | 4.38 M |
| NCM_Audit | 1.84 GB | 24.9 % | 4.50 M |
| SysLog | 1.83 GB | 10.8 % | 3.87 M |
| APM_PortEvidence_Detail | 1.73 GB | 20.2 % | 14.38 M |
Something is obviously broken, as the PendingNotifications table has increased by approximately 20% in less than 24 hours.
I have/had a support case for this and was told to do the following from this link:
Clear the Information Services Subscription - link here -> Success Center
I have done this in the past, and usually within a day or so I see subscription errors in my log and the PendingNotifications table begins to grow again.
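For anyone following along, the cleanup that KB describes boils down to a couple of statements. This is only a hedged sketch of the general approach (the exact steps in the Success Center article take precedence), and the assumption is that every Orion service on the main server and all polling engines is stopped first, since the services recreate their subscriptions on startup:

```sql
-- ASSUMPTION: all Orion services are stopped on every server before running this.
-- Run against the Orion database.
DELETE FROM PendingNotifications;  -- drop the queued, undeliverable messages
DELETE FROM Subscriptions;         -- stale subscriptions; rebuilt when services restart
```

If the table immediately starts growing again after the services come back up, that points at a subscriber (an APE, AWE, or similar) that still cannot receive its messages rather than at the table itself.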
It's used by the Information Service. I've never seen it get anywhere near that large; I would be really surprised if other things are not basically broken in your environment. Do you run HA, or many APEs, AWEs, or EOC? Some or all of those features are likely broken right now if the pending notifications are stacking up at that rate.
We are not using HA with this environment. We have 10 APEs and 1 AWE. I guess my next logical question would be: what should I look for on my primary server, on the APEs, and on the AWE that would be "broken"? I do know that I have a bunch of DCOM errors for cloud nodes that are down, which I am attempting to remediate by unmanaging them when they are down and re-managing them if and when they come back up. Outside of that, what should I be looking for?
In terms of knowing what is not working, I'd run this query in SSMS. With how big your tables are, I would be seriously worried that it might not complete in a reasonable amount of time, but you can try.
It gives you the names of the subscriptions that are not getting their messages; the query column should tell you the kind of info they are not getting, when they were last successful, and how long they have been failing to deliver.
select s.endpointaddress, s.query, s.lastsuccessfuldelivery, s.faileddeliveryattempts, s.binding, count(*) as notificationCount, min(p.created) as oldestUndeliveredNotification
from subscriptions s
join pendingnotifications p on p.subscription_id=s.id
group by s.endpointaddress, s.query, s.lastsuccessfuldelivery, s.faileddeliveryattempts, s.binding
For reference, my environment with 10 polling engines and 40,000 sam components had 4 rows in it this afternoon when I wrote that query.
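While working through the cleanup, it can also help to measure how fast the table is actually growing rather than eyeballing it from AppInsight. A minimal way to do that with a built-in procedure (assuming the table lives in the default `dbo` schema of the Orion database):

```sql
-- Snapshot the table's size; run this periodically (e.g. hourly) and compare
-- the 'reserved' and 'rows' values between runs to get a growth rate.
EXEC sp_spaceused 'dbo.PendingNotifications';
```

Comparing two snapshots an hour apart tells you whether a fix (clearing subscriptions, repairing a broken APE) has actually stopped the growth, or merely slowed it.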