How to delete (from the DB) a discovery profile created from API?

I already know about the IsHidden=true flag, but that only hides the profile from the list of discovery jobs in the SolarWinds web console.
I'm looking for a way to actually delete from the database all references to a created discovery job. Is this possible?

I know that discovery jobs get auto-deleted based on the Discovery Retention setting, but I don't want to have to wait the (default) 60 days before they get auto-deleted.

We have almost 3,000 servers around the world, many of which regularly get new filesystems/volumes, and the only way for SolarWinds to see and monitor those new filesystems/volumes is to re-discover the nodes.

So today I have a nightly re-discover job with IsHidden=true, but that still means about 3,000 new discovery profiles get added to the database every day. If I wait for the default 60-day cleanup, 180,000 jobs will accumulate in the database, and I've found that the Configuration Wizard has to process each of those profiles, so with that many in the database it takes a VERY long time to run; an unacceptably long time.

It would be nice if, in addition to the

  IsHidden=true

flag, there was also a flag like:

  autodelete=true

which would auto-delete the profile from the database once the discovery job completes.

Without this, I've been forced to run a nightly SQL job that DELETEs the auto-created discovery jobs (I use a unique naming convention for all automated rediscovery jobs, so my DELETE can have a WHERE clause that matches only the profiles used in the nightly re-discover).
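
A rough sketch of that kind of nightly cleanup (PowerShell driving SQL). The dbo.DiscoveryProfiles table name and the 'NightlyRediscover-' name prefix are illustrative assumptions, not confirmed names, so verify them against your own Orion database before running anything like this:

  # Illustrative only: table name and profile-name prefix are assumptions
  $connectionString = 'Server=YOUR-SQL-SERVER;Database=SolarWindsOrion;Integrated Security=True'
  $sql = "DELETE FROM dbo.DiscoveryProfiles WHERE Name LIKE 'NightlyRediscover-%'"

  $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
  try {
      $connection.Open()
      $command = $connection.CreateCommand()
      $command.CommandText = $sql
      $deleted = $command.ExecuteNonQuery()   # number of profile rows removed
      Write-Host "Deleted $deleted nightly discovery profiles"
  }
  finally {
      $connection.Dispose()
  }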

7 Replies

Why do you create a job for each server each night? A single job could just contain the IP addresses of all the servers, and then you'd only have one job. You could get fancier than that, but I expect 60 jobs in 60 days is probably pretty livable.

As far as removing jobs goes, there is a verb for that; you just need to pass it the ProfileID:

Orion.Discovery | Orion SDK Schemas
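
For instance, a minimal sketch using the SwisPowerShell module from the SDK. I'm assuming the verb is DeleteProfile taking the ProfileID as its one argument (double-check the schema page above); the hostname, credentials, and name filter are placeholders:

  Import-Module SwisPowerShell
  $swis = Connect-Swis -Hostname 'orion.example.com' -UserName 'admin' -Password ''

  # Find the hidden nightly profiles by their naming convention
  $profileIds = Get-SwisData $swis "SELECT ProfileID FROM Orion.DiscoveryProfiles WHERE Name LIKE 'NightlyRediscover-%'"

  # Delete each profile through the API instead of raw SQL
  foreach ($id in $profileIds) {
      Invoke-SwisVerb $swis Orion.Discovery DeleteProfile @($id) | Out-Null
  }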

- Marc Netterfield, Github

Initially I did try doing servers in batches, but I found that SolarWinds processes a batch serially, which caused issues when it encountered a down node: it had to time out before continuing, hanging the re-scan, and that would have made re-scanning all of our servers take more than 24 hours. Using a PowerShell script to run discovery jobs in parallel let me re-scan all servers in a much more timely fashion; even when a down node was encountered, only that single re-scan would hang, allowing all the others to continue.

Our environment has 7 pollers worldwide managing close to 1,100 Unix, 1,900 Windows, and 300 ESX hosts.

I can again try scripting things to break the servers up by credential type and poller to generate the batches, but only if there is no limit on the maximum number of IPs that can be specified in a discovery job.


The serial-versus-parallel thing is a real concern, but if you are doing 1-to-1 jobs you don't have to create a new discovery job each night. Include the node IP in the profile name and just re-execute the existing job, having your script create new jobs only for nodes that don't already have one. That leaves you with 3k or so hidden jobs, but that's still a far cry better than 180k.
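
Something along these lines (a sketch, assuming the 'NightlyRediscover-<ip>' naming convention from earlier in the thread; the actual profile creation is left as a comment since it depends on your credentials and discovery options):

  Import-Module SwisPowerShell
  $swis = Connect-Swis -Hostname 'orion.example.com' -UserName 'admin' -Password ''

  # IPs that already have a per-node nightly profile, per the naming convention
  $existing = Get-SwisData $swis "SELECT Name FROM Orion.DiscoveryProfiles WHERE Name LIKE 'NightlyRediscover-%'"
  $covered  = $existing | ForEach-Object { $_ -replace '^NightlyRediscover-', '' }

  # Every managed node's IP address
  $allIps = Get-SwisData $swis "SELECT IPAddress FROM Orion.Nodes"

  foreach ($ip in ($allIps | Where-Object { $covered -notcontains $_ })) {
      Write-Host "No existing profile for $ip - create one"
      # ...build and start a one-node discovery for $ip via Orion.Discovery...
  }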

- Marc Netterfield, Github

It took me several weeks just to figure out how to get a rescan to work with all of the right options. Figuring out how to re-use an existing profile sounds like another several weeks of experimentation that I don't have time for.

Regardless, it would still be nice to have a 'delete' option.


See my first reply; the verb is there.

- Marc Netterfield, Github

Sorry, missed seeing that.


Realistically, from my past experience you want to keep it under 1,000 nodes per discovery. They'll let you stick as many as you want into it, but beyond 1,000-ish I have seen things tend to get a lot more buggy. When I've done things like that in the past, I've just batched things up in my scripts. Given that you have 7 polling engines, the easiest mechanism is probably just to split the jobs based on the EngineID the nodes are already on, since discovery gets weird when you try to import from a discovery on the wrong EngineID.
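
The batching could be as simple as this sketch (SwisPowerShell again; the 1,000 cap is just the rule of thumb above, and what you feed each batch into is left as a comment):

  Import-Module SwisPowerShell
  $swis = Connect-Swis -Hostname 'orion.example.com' -UserName 'admin' -Password ''

  # Group managed nodes by the polling engine they already live on
  $nodes = Get-SwisData $swis "SELECT IPAddress, EngineID FROM Orion.Nodes"
  $batchSize = 1000

  foreach ($group in ($nodes | Group-Object EngineID)) {
      $ips = @($group.Group | ForEach-Object { $_.IPAddress })
      for ($i = 0; $i -lt $ips.Count; $i += $batchSize) {
          $last  = [Math]::Min($i + $batchSize, $ips.Count) - 1
          $batch = $ips[$i..$last]
          Write-Host ("EngineID {0}: discovery batch of {1} IPs" -f $group.Name, $batch.Count)
          # ...create one discovery job for $batch, targeted at EngineID $group.Name...
      }
  }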

- Marc Netterfield, Github