
Some downstream server synchronizations failing with LocalDBOtherError

The majority of my downstream servers have been failing to sync completely since I ran a server cleanup on all of my WSUS servers (we have one upstream and several downstream servers). I've attached a screenshot of the failure below. As you can see, the server is still getting the new updates, but the sync appears to be failing on a SQL timeout. Any ideas or thoughts as to why this might be happening would be appreciated. If you need additional information, let me know.

[Screenshot attachment: pastedImage_0.png]

Thanks in advance.

  • If you double-click on that message, is there any other detail? It looks like this could be attributed to timeout errors in WSUS, and there are a few links I found from a general search on "LocalDBOtherError":

    How can I increase the timeout period for WSUS sync between secondary sites and the primary?

    Software Update on a Secondary site failing with - Sync failed: LocalDBOtherError: SqlException: Timeout expired. - .:…
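
    If double-clicking doesn't surface anything useful, the WSUS administration API on a failing downstream server will report the result and error of the last synchronization. This is just a minimal sketch using the Microsoft.UpdateServices.Administration assembly; it assumes you run it locally on the downstream server with default connection settings (pass a server name, SSL flag, and port to GetUpdateServer() otherwise):

      # Load the WSUS administration assembly (installed with the WSUS console/role)
      [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")

      # Connect to the local WSUS instance with default connection settings
      $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

      # Dump every property of the most recent catalog synchronization attempt,
      # including its result and any error detail
      $wsus.GetSubscription().GetLastSynchronizationInfo() | Format-List *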

  • I'll see if I can find any more detail. It's such a vague message. I'll follow up if I find anything. I just don't know why it's only happening on certain downstream servers and not all of them.

  • So after checking the event log on one of the servers where the sync isn't working, it's showing the error "The last catalog synchronization attempt was unsuccessful" with error code 10022. It looks like it stopped working on 2/5, which seems to be when the errors started. That is the day after I ran my WSUS cleanup jobs on all the servers. I've been running that cleanup for at least a year with no issues, so I'm still not sure what's going on. If you have any ideas, please let me know.
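
    In case it helps anyone else check the same thing, the sketch below is roughly how those synchronization events can be pulled out of the Application log on an affected server. It assumes the usual "Windows Server Update Services" provider name, which may differ by WSUS version:

      # List recent WSUS catalog synchronization failures (event ID 10022) from the Application log
      Get-WinEvent -FilterHashtable @{
          LogName      = 'Application'
          ProviderName = 'Windows Server Update Services'
          Id           = 10022
      } -MaxEvents 20 | Select-Object TimeCreated, Id, Message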

    Thanks.

  • The error is kind of generic, so there are a few things you'll want to check. The cleanup wizard is the only other clue provided so far, so this might be an issue at the database level. I'm going to give that solution last, as it's usually the nuclear option (at least short of fully re-deploying). However, first confirm that there weren't any other changes and that the basics are covered:

    1. WSUS Downstream Server Synchronization Failed - UssCommunicationError - SolarWinds Worldwide, LLC. Help and Support
      Make sure the ports and addresses are all still correct.
    2. Restart the WSUS services on the upstream server and then on the downstream servers (there's a command sketch at the end of this reply).
    3. wsusutil reset - This is the nuclear option, in my opinion, in the sense that there are several caveats you want to pay attention to (it's also included in the sketch below):
      • First, the article on the command:
        Managing WSUS from the Command Line | Microsoft Docs
      • Take a snapshot/backup/etc. of the test server if you're worried about anything going sideways.
      • Make sure you test this on one server before running it on all of your downstream servers.
      • It is notoriously slow.
      • The general idea is that it consolidates the downstream and upstream databases and verifies that all of the necessary content is in place.

    If you think there might have been other configuration changes at the same time, I would strongly suggest investigating those first. Other than it being slow, I haven't run into any issues with the WSUS reset myself, but since you're updating the database on a downstream server you want to make sure you don't cause any additional issues before starting. I think the reset is the most likely fix, since only some of your downstream servers are having the problem, but I still tend to use it as a last resort.
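
    To make items 2 and 3 concrete, the commands below are a rough sketch of the service restart and the reset, assuming a default WSUS installation (service names WsusService/W3SVC and the default wsusutil.exe path); test on a single downstream server first:

      # Item 2: restart the WSUS service and IIS
      Restart-Service -Name WsusService, W3SVC -Force

      # Sanity check before and after the reset
      & "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" checkhealth

      # Item 3: wsusutil reset - verifies every update in the database against the
      # content store and re-downloads anything missing; expect it to run for hours
      & "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" reset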

  • Looks like I found the culprit. I was going through some of the earlier links you provided and found this comment on one of the solutions:

    First of all, thanks for this post, it helped me initially with my WSUS replication issue.

    I made the mistake of selecting Drivers as one of the product categories that WSUS pulls from Microsoft. As a result, my WSUS database quickly bloated to 27,000 patches, 20,000 of which were driver definitions for various hardware IDs.

    The bloated database started causing timeouts during replication runs. For a week or so, the WSUS Cleanup Wizard and index defrags were helping. After a week or two, replication broke down completely.

    The solution is to delete driver update definitions from the database - not just decline them. If you decline driver updates (or any other large collection of superseded or unneeded patches), you will start getting TdsParser timeouts.

    To delete driver definitions, run SQL scripts as described here: http://www.flexecom.com/how-to-delete-driver-updates-from-wsus-3-0/

    If you already hid a ton of updates and are getting TdsParser timeout, run SQL scripts described here: http://www.flexecom.com/wsus-replica-server-fails-to-synchronize/

    This issue was driving me crazy and these two articles fixed it for me; but needless to say, use at your own risk.

    I remember that in January I was adjusting which updates to include, since there are so many different versions of Win10, and we ended up getting almost 2,000 new updates in one day. I had declined them, but yesterday I went back and deleted them on my upstream server, and as of last night all downstream servers show 100% sync completion. I don't know why the cleanup didn't get rid of these, but I'm glad it's working again.
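
    For anyone else who ends up needing to actually delete updates rather than just decline them, and who would rather not run raw SQL against SUSDB, the sketch below shows the same idea through the WSUS API on the upstream server. It's only an illustration, not the scripts from the articles above - test it somewhere safe first and, as the comment says, use at your own risk:

      [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
      $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

      # Find updates that are already declined and remove them from the database entirely.
      # GetUpdates() is slow on a bloated database; narrow the filter (e.g. to driver
      # updates by title or classification) if you only want to purge a subset.
      $declined = $wsus.GetUpdates() | Where-Object { $_.IsDeclined }

      foreach ($update in $declined) {
          Write-Host "Deleting $($update.Title)"
          $wsus.DeleteUpdate($update.Id.UpdateId)
      }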

    Thanks for all the ideas and suggestions, they really helped out.