
Patch Manager Deployment Questions and other Patch Manager Questions

Hello Everyone -

I am looking for information and guidance from others who have deployed Patch Manager. We deployed it over the past year and are running into a few issues that I am hoping someone here can help with.

WSUS Setup - We currently have a dedicated WSUS server in each of our two data centers, using a SQL back end. One is the main (upstream) WSUS server and the other is the downstream server.

Patch Manager - We have one server dedicated to Patch Manager right now.

This past weekend we added 250-300 pre-production nodes for patching, and the Patch Manager server slowed down quite a bit. The patching team normally watches the patching tasks go by in one of the windows, and during this push the window did not update, or updated very slowly. We only patch Windows servers with Patch Manager right now; we use SCCM for patching workstations.

Questions - looking for your input:

1. If you patch a large number of nodes, what does your Patch Manager deployment look like? Do you use one main server and then configure downstream servers from there?

2. Do you patch anything hosted in Azure, Rackspace, or AWS? If so, did you set up downstream servers there, or do they all connect to your main patch server?

3. How do you have all of your servers sized for this?

If you have any other information that we could use, that would be great. We have done a call or two with SolarWinds to go over Patch Manager, but we are looking for how this works out in the real world.

Thanks everyone!


  • Our setup sounds quite similar. We have a primary WSUS server and a downstream server (the downstream hosted in Asia, the primary in the UK), a single dedicated Patch Manager server, and just under 400 nodes, about 30 of which are on the downstream server in Asia. We are entirely on-prem, so nothing in the cloud is being patched, but I do have 2 off-domain devices in a DMZ for which I had to add local credentials to the credential ring.

    I haven't noticed any more slowness than if I'd installed manually, although I have over 100 schedules that break everything up over the month and run 3 passes for each collection, each with a specific task:
    1. Pass 1 installs only security patches and criticals, with no reboot.
    2. Pass 2 gives priority to exclusive updates, but if there are none it will install anything that has been approved.
    3. Pass 3 mops up anything not yet taken, reruns failures, and then forces a reboot.

    This 3-pass system took some testing over the past few months to get right, and it has made an improvement in reducing reboots (which on 2016/19 can, for some reason, take up to 2 hours on our VMs, so rebooting multiple times caused updates to extend into production hours) and in mopping up failures.
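    As an aside, the pass rules above can be pictured as a small selection function. This is an illustrative Python sketch of the logic only, not Patch Manager's actual API or rule engine; the update fields ("classification", "exclusive") and the function name are assumptions made for the example.

```python
# Illustrative model of the 3-pass selection rules described above.
# NOT Patch Manager's API - field names and structure are assumed.

def select_updates(pass_number, approved_updates, previous_failures=()):
    """Return (updates_to_install, force_reboot) for one pass.

    Each update is a dict such as:
      {"title": "KB...", "classification": "Security", "exclusive": False}
    """
    if pass_number == 1:
        # Pass 1: only security and critical patches, no reboot.
        picks = [u for u in approved_updates
                 if u["classification"] in ("Security", "Critical")]
        return picks, False
    if pass_number == 2:
        # Pass 2: exclusive updates take priority; if none exist,
        # install anything that has been approved. Still no forced reboot.
        exclusives = [u for u in approved_updates if u["exclusive"]]
        return (exclusives if exclusives else list(approved_updates)), False
    # Pass 3: mop up anything not yet taken, rerun failures, force a reboot.
    return list(approved_updates) + list(previous_failures), True
```

    The point of the structure is that only the final pass reboots, which is what keeps the slow 2016/19 reboots out of the middle of the window.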

    I will say, though, that after the last Patch Manager update, ours is no longer pushing new data to its DB - the DB I use to produce all our stats through SWQL queries has gone blank, although patching still works. It's obviously pushing the data somewhere, but it's not the local SQL Express instance, and the remote DB is registering hits from the account; all the tables just remain blank.

  • Thank you. I shared this with our patching team and let them know to hop on into THWACK. 

  •   Can you go into a little more detail about your schedules with the multiple passes?

    Do you have 3 scheduled tasks per group of servers being patched?
    1. install security and critical, ignore exclusive updates - no reboot
    2. install exclusive updates 
    3. install security and critical updates, ignore exclusive updates - force reboot

    Are these all set up as scheduled tasks? What gap do you leave between the schedules running?

    We are in the process of testing Patch Manager to automate updates, and I'm a little unsure about how it would handle servicing stack updates. It would be a shame if it is not able to do everything in one go.

  • Yes, 3 passes per WSUS container/group, and 1-2 containers per night (this is soon to change, as we will be reducing our patching cycle from 1 month to 14 days; so far, tests have gone fine doing 7 containers in one night).

    Passes 2 and 3 can install anything, not just criticals; the only exclusions are titles containing "service pack" (due to historical issues with old SQL service pack updates, and we still have some legacy instances out there) and drivers.

    If any exclusives are found, pass 2 installs those and everything else is done in pass 3. If no exclusives are found, pass 2 goes ahead with everything else, with pass 3 just mopping up failures.

    We only use PM for the server estate, which is around 500 devices in 4 locations, and have a scheduled nightly window:
    Pass1: midnight  (no reboots)
    Pass2: 1:30am   (pre-reboot if required, no post)
    Pass3: 5:30am   (pre-reboot if required and forced post reboot)

    Generally pass3 is complete just short of 8am. 
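    For anyone planning similar windows, the start times above imply rough per-pass time budgets. A quick Python sketch of that arithmetic (the 8am cut-off is from the schedule above; the date is arbitrary):

```python
# Rough time budget per pass, derived from the schedule above.
from datetime import datetime, timedelta

day = datetime(2024, 1, 1)  # date is arbitrary; only times of day matter
starts = {
    "pass1": day.replace(hour=0, minute=0),   # midnight, no reboots
    "pass2": day.replace(hour=1, minute=30),  # 1:30am
    "pass3": day.replace(hour=5, minute=30),  # 5:30am
}
cutoff = day.replace(hour=8)  # pass 3 is usually done just short of 8am

budgets = {
    "pass1": starts["pass2"] - starts["pass1"],  # 1h30m before pass 2 begins
    "pass2": starts["pass3"] - starts["pass2"],  # 4h before pass 3 begins
    "pass3": cutoff - starts["pass3"],           # 2h30m to the 8am cut-off
}
```

    So pass 2 gets the longest budget, which fits it being the pass that may install exclusive updates or the bulk of the approved set.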

    My biggest problem with the schedules is the amount of time 2016 servers take to install CUs. 2012 and 2019 are not so bad and tend to finish well within their schedule windows.

  • Thanks for the extra information. It sounds like we are going to need to expand our patching windows if we are to automate patching using Patch Manager, as we just don't have the time to do 2 passes, never mind 3, in an evening 😅.

    It's a shame, as I have used Azure's patch management, which installs, reboots, checks for more, reboots again if needed, and so on until complete, so it only needs one scheduled task per group of servers.

    Since none of our servers have SSUs this month, I think I'm going to have to extend the Patch Manager testing period into next month, when there will no doubt be some.