
Product Blog


Patch Manager 2.1 Beta 1

Posted by chrispaap Jan 21, 2015

Patch Manager 2.1 is now generally available for download in the customer portal for all customers on active maintenance! All features described in the beta post below are included in this release, along with multiple bug fixes.



The team at SolarWinds has been hard at work producing the next release of Patch Manager, with some great new features. It is my pleasure to announce the Patch Manager 2.1 Beta, which is packed full of features and open to Patch Manager customers currently on active maintenance. To try it out, please fill out this short survey and provide your feedback in the Patch Manager Beta forum.




Automated 3rd Party Patching and Publishing

Patch Manager currently offers 3rd-party patches and updates through our 3PUP catalog, so administrators can update those products using Patch Manager just as they would for Windows patches.  In Patch Manager 2.1 we have introduced Automated Third-Party Patching and Publishing, which saves administrators time by automating the publishing of 3rd-party updates to WSUS, much as Microsoft updates its own distribution points.  This reduces the number of touches and the time required to publish 3rd-party updates by allowing the local Microsoft WSUS server to automatically download 3rd-party patch content on a schedule and publish it automatically.  An additional option automatically approves certain patches based on criteria (typically severity) and lets you set a scope for automatic approval. For a complete list of supported 3rd-party updates, check out Table of third party patches - updated 1/21/2015. Follow the steps below to enable and configure Auto-Publish of 3rd Party Updates to WSUS.
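As a rough sketch of the auto-approval criterion described above (the field names and the severity threshold here are hypothetical illustrations, not Patch Manager's actual configuration, which is set in the wizard):

```python
# Hypothetical auto-approval check: approve a 3rd-party update when its
# severity meets or exceeds a configured threshold.
SEVERITY_RANK = {"Low": 0, "Moderate": 1, "Important": 2, "Critical": 3}

def auto_approve(update, threshold="Important"):
    """Return True when the update's severity rank reaches the threshold."""
    return SEVERITY_RANK[update["severity"]] >= SEVERITY_RANK[threshold]

print(auto_approve({"name": "Adobe Reader 11.0.10", "severity": "Critical"}))  # True
print(auto_approve({"name": "7-Zip 9.20", "severity": "Moderate"}))            # False
```

In Patch Manager itself this decision is configured graphically, with an approval scope attached; the sketch only shows the severity comparison at its core.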


Opening the Auto-Publish of 3rd Party Update Wizard

In the Patch Manager console, select Administration and Reporting > Software Publishing, right-click, and choose Auto-Publish of 3rd Party Updates to WSUS.  Alternatively, you can select Auto-Publish of 3rd Party Updates to WSUS in the Actions panel after highlighting Software Publishing.



Selecting Products and Specifying the WSUS Server

Select the WSUS server that will publish the 3rd-party updates and select the software to publish.



The Schedule screen allows you to select your auto-publishing schedule. You may publish at a set interval as a scheduled task, or auto-publish after every synchronization. Email notifications can also be configured on this screen.






Scheduled Tasks

Once done, you can review and edit your auto-publish task by selecting Scheduled Tasks under Software Publishing.





Report Enhancements

Patch Manager contains volumes of data and environment metrics that can be reported on. An important aspect of patch management is the ability to perform essential reporting on your environment for compliance and remediation purposes.  We have improved the out-of-the-box reporting by replacing dated reports with more relevant and intuitive options. In addition, the following reports have been added or modified based on demand from the Thwack user community.



Custom hardware report

Configuration Management Reports - Computer (System Information) - Custom hardware report

You no longer have to build a hardware report from scratch. The Custom hardware report can be configured using any of the existing 19 available data sources with each source having multiple fields.


Installed programs and feature basic

Configuration Management Reports - Installed Programs and Feature Basic (MS Products Omitted)

The default Installed Programs and Feature Basic (MS Products Omitted) report can easily be modified to omit MS products specific to your environment.


Approved updates status counts by WSUS server and update source

WSUS Reports - Windows Server Update Services Analytics - Approved updates status counts by WSUS server and update source

Provides the WSUS server name, update source and total sum of approved updates in the following categorizations:

  • Approved not installed
  • Approved unknown
  • Approved downloaded
  • Approved failed
  • Approved installed
  • Approved pending reboot
  • Approved total
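The aggregation behind this report can be sketched as follows; the record layout and values here are hypothetical stand-ins for the report's actual data sources:

```python
from collections import Counter

# Hypothetical per-update records: each row names the WSUS server,
# the update source, and the install state of one approved update.
rows = [
    {"server": "WSUS01", "source": "Microsoft Update", "state": "installed"},
    {"server": "WSUS01", "source": "Microsoft Update", "state": "not installed"},
    {"server": "WSUS01", "source": "Locally published", "state": "failed"},
    {"server": "WSUS02", "source": "Microsoft Update", "state": "installed"},
]

# Count approved updates per (server, source, state), plus a per-server total.
counts = Counter((r["server"], r["source"], r["state"]) for r in rows)
totals = Counter(r["server"] for r in rows)

print(counts[("WSUS01", "Microsoft Update", "installed")])  # 1
print(totals["WSUS01"])  # 3
```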

Computer update status - locally published updates

WSUS Reports - Windows Server Update Services Analytics - Computer update status - locally published updates

This out-of-the-box report identifies all updates that do not have Microsoft Update as a source.

Computer update status with aggregate counts of install state for approved updates

WSUS Reports - Windows Server Update Services Analytics - Computer update status with aggregate counts of install state for approved updates

This report provides an aggregate count of all approved updates that are applicable to a particular computer/server, and the install phase each is in.

Computer Update status counts by classification for approved - not installed

WSUS Reports - Windows Server Update Services Analytics - Computer Update status counts by classification for approved - not installed

Provides a list of all approved updates that have not been installed on a computer.

Computer Update status - Approved updates with ID and Revision

WSUS Reports - Windows Server Update Services Analytics - Computer Update status - Approved updates with ID and Revision

Provides a comprehensive report of all approved updates and their status as it pertains to each computer/server.

It is important to note that all out-of-the-box reports are configurable, with hundreds of data sources and fields. Report customization has been greatly enhanced in Patch Manager 2.1 with the addition of Cross-Datasource Reporting, which lets users write reports that combine data from multiple sources without the boundaries between data sources hampering creativity within the reporting engine.

Logging Enhancements

We have enhanced the log collection functionality in Patch Manager by centralizing and bundling logs, which assists in troubleshooting Patch Manager.  By extending the existing Orion diagnostic tool, an administrator can now collect the relevant logs and review them for diagnostic evaluation or submit them to support.  This eliminates having to browse to individual directories to gather separate logs.



Windows Server 2012 R2 and Windows 8.1 Support

Windows Server 2012 R2 and Windows 8.1 are now OS options when selecting computer properties.


Notification Bar Enhancements

A banner notification has been added to the web console to notify the administrator that new patches are available.


Additional Computer Group Scoping Options

You now have three additional grouping options, allowing management and task targeting at the subnet, AD organizational unit, and/or site level. This provides granular device grouping options, depending on the task at hand, when creating computer groups.

Scoping options
IP Subnet - You now have the option to target a subset of machines based on a specific subnet.



Active Directory Organizational Unit (OU) - Patch Manager administrators are able to target all computers within an Active Directory organizational unit.
Active Directory Site - You can now group computers by Active Directory site, which can be used as an element in routing requests to automation servers. Larger environments with multiple Patch Manager servers require automation routing rules to assign tasks to specific Patch Manager servers based on the IP of the machine in question.




That's it for now.  Don't forget to sign up for the beta and provide your feedback in the Patch Manager Beta forum.

The Log & Event Manager (LEM) team has been hard at work on a release intended to make your lives easier. We know you're swamped and decided to take a little time to make it faster to get LEM up and running and configure rules related to problems you're interested in solving without looking through a big list and clicking, clicking, clicking.


Getting Started with LEM: New & Improved!



We had a handy dandy getting started widget in LEM before, but we've taken it one step further and glued together those steps into one location rather than sending you on a bit of a wild goose chase. Now, from the Ops Center Getting Started widget you can:

  • Quickly configure Basic Settings needed for LEM to be up and running - email server settings and directory service server (for groups and/or authentication) configuration
    • As you click "Next" to move through the wizard, each step will be tested and verified. If you see a pause, that's what's going on. If there's a problem, LEM will let you know.


  • Quickly access the Add Node Wizard including links to the agent installers and the full syslog scan to configure connectors automatically


  • Use the NEW Add Rules Wizard to add rules for different areas of interest (more on that later)
  • And, view all of the quick training videos from within the LEM console!



New Feature: Add Rules Quickly by Category


Our next big addition is the new Add Rules wizard. From Build > Rules, or from the Ops Center Getting Started widget/wizard flow, you can launch the fancy new wizard. This wizard configures for you ALL rules in a given category that can be configured easily - no active responses, just email and "infer alert" or "create incident" actions (we'll look at adding more active response choices in the future). This should be MUCH, MUCH faster than all that cloning you had to do before. As part of this, we've also created a new "General Best Practices" subcategory in each parent category - if you're not sure where to get started, these categories will get you a wide swath of the most common rules enabled.


  • Select each category of interest
  • View and select subcategories, including the new General Best Practice subcategory
  • Specify email server settings (if not already configured), and email recipients (even add contacts from within the wizard if they don't exist)
  • Clicking "Finish" will clone, select the right users, and enable the selected rules, all in one step!


We've also revamped our quick rules training video to include how to use the new wizard and a fast example of building a rule by hand:




Fixes, Fixes, Fixes


As always, we've fixed a bunch of customer issues and included notable minor improvements. Included:

  • Improvements to our IIS coverage - if you've struggled with which fields to configure, or IIS not working right after you configure it, this is for you!
  • Support for Windows 8.1 with Workstation Edition - if your 8.1 workstations are coming up as servers and pulling Universal instead of Workstation nodes, this will help
  • Lots of new connectors! As always you can download connectors out of band to releases, but you'll get them automatically with the upgrade, too.


Sounds Great - Where do I Download? Where do I Ask Questions?


Easy: from the Customer Portal!


If you've got any questions, head on over to the Log & Event Manager Release Candidate thwack forum and let us know.


As always, feel free to post here and/or contact me directly with any feedback.

Here at SolarWinds we've been working overtime on another update to Web Performance Monitor (WPM), only a few short months after our release of WPM 2.1. I am pleased to announce that the result of our efforts, WPM 2.2, has reached beta status. Now is your chance to install the latest version and provide us your feedback on the newest features. We remain committed to constantly improving WPM and your beta participation and feedback guarantees that we are making improvements that are beneficial to our users.


To participate in the beta, simply fill out this survey and we will send you the download links as soon as they’re ready.



Below is an overview of the newest features you’ll see in WPM 2.2.


1. New AppStack Integration

The AppStack environment is a new feature that we’re very excited about here at SolarWinds. It provides users an overview of their entire environment in a single, easily digestible view – from storage arrays all the way up to the applications consuming the data they house. As a part of the AppStack formula, transactions managed by WPM as well as their dependencies can easily be viewed and any issues diagnosed.


To give an example of where the magic of AppStack + WPM would shine, let's say you get an alert that your WPM transaction has failed and users are beginning to call with complaints. From the AppStack view below, you can quickly glance to determine if the issue lies in your storage, server, web application, virtualization, or elsewhere. Knowing this allows you to focus your remediation efforts right away, decreasing the time to resolution.


Check out a recent blog post for more information about our new AppStack and the screenshot below of what you can expect to see.


2. New Web-Based Alerting

By popular demand, web-based alerting is now ready to go with WPM. To be clear, for us that doesn't mean simply moving alerts to the web. With WPM 2.2, you'll see a whole new alerting engine built from the ground up. The update won't affect the alerting functionality you're used to; instead, it will give you increased functionality and control when creating transaction alerts. All said, that means you'll be able to get more out of your existing WPM transactions by making the data they generate more actionable. There's a lot of functionality here, so you'll have to dig in to see more, but a sneak peek of the UI is below; also check out a recent blog post that gives an overview of the new features.


3. Step Dependencies

WPM 2.1 brought you dependencies at the transaction level. That meant that you were able to set resource dependencies to transactions as a whole. However, we started to see that users had different dependencies on the various, individual steps of a transaction. With WPM 2.2, we’re letting you take that one step farther (pun intended) - you’ll now be able to drill down even deeper by setting dependencies on the transaction step level.


A quick example. Let's say you have a web transaction that includes various dependencies across the different steps of the transaction - e.g. you're an e-commerce site with a dependency in the first step on checking inventory from an inventory system, a dependency in the second step on routing shipping through a shipping system, and a dependency in the final step on posting a sale to a sales system. Before step dependencies, all three of these systems would have been dependent at the transaction level. That meant if any single step of the transaction were to fail, you would get a warning that any one of those systems might be a potential cause for the transaction failure - you wouldn't know where to start. Now, with step dependencies, WPM 2.2 can tell you based on which step of the transaction failed the exact dependency that is the root cause. The result is more focused alerting, thus a more focused response and a quicker time to resolution.


This granularity of control is why we're excited to provide step dependencies and why we think they'll be valuable to our users. Check out a screenshot below where you can see individual transaction steps on the left and a VM node step dependency for the first step.


So there you have it – WPM 2.2. Be sure to fill out the beta survey to be included and if you have any feedback, leave it here. We’re working hard on making WPM better and your beta participation and feedback goes a long way in our being able to do that.



- the WPM product team

To receive updates on the WHD roadmap, JOIN thwack and BOOKMARK this page


We are currently in the planning phase for the next release. Based on user feedback, we are investigating the following areas. As we move forward, we will update this post to reflect what we are working on short-term.  If there is anything missing, or you have an argument for moving something up, vote on product ideas or post a comment.


After the release of Web Help Desk v12.2, we are now busily working on some new features and enhancements to the product. Here is a preview of some of the features:


  • Parent-Child relationship
  • Change Management, for example
    • "Change request" ticket type to better distinguish between service requests and changes
    • Ability to report on changes
    • Automation of change requests
    • Note: We are still in the planning phase, and you can influence priorities and the change management feature in WHD now! Please vote for more change management features in this survey and comment in this forum.
  • Log export for easier troubleshooting
  • Asset Reporting
    • Purchase Order reporting
    • Parts reporting
  • NPM integration improvements
  • Ability to export reports to Excel
  • Ticket Approvals from Tech interface
  • iOS Application improvements


PLEASE NOTE:  This is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel.   If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

I'm very happy to share with you that we have reached the GA milestone for Web Help Desk (WHD) 12.2 and Dameware (DW) 11.1, with many exciting new features! The focus of these releases was mostly new reporting for assets and integration, but we were also working on other features, namely:


  • Dameware Integration
    • Seamless start of remote session from WHD
    • Recording of session data (including chat and screenshots) back to WHD ticket
  • Asset Reporting in WHD
    • Asset statistics reporting
    • Asset reservations reporting
  • New WHD FAQ UI, rewritten on a new framework with various improvements
  • Many documentation improvements, new APIs, many bug fixes, and more


If you want to learn more about these releases, check out the following links:



What does the integration look like? It's slick and seamless, as you can see in this video:



Web Help Desk 12.2 and Dameware 11.1 are now available on your customer portal.


If you are a new user Download Web Help Desk or Dameware now!

To receive updates on the Patch Manager roadmap, JOIN thwack and BOOKMARK this page.


We recently released Patch Manager 2.0, which included support for Windows Server 2012 R2 and Windows 8.1. We're now working on simplifying the interface and making the Patch Administrator's time more efficient. We're also working on other ease of use enhancements such as:

PLEASE NOTE: We are working on these items based on this priority order, but this is NOT a commitment that all of these enhancements will make the next release.  We are working on a number of other smaller features in parallel. If you have comments or questions on any of these items (e.g. how would it work?) or would like to be included in a preview demo, please let us know!

IPAM and UDT have combined like Voltron to form the new IP Control Bundle, and as an IT pro, you may finally say goodbye to painful, drawn-out IP address conflict troubleshooting. Say goodbye to long downtime of your servers or key network infrastructure, and to disconnected end users, caused by duplicate IP addresses on your network. For new users, IPAM 4.3 and UDT 3.2 are now also available as a combo of the two products, with in-line integration focused on IP address conflict troubleshooting. For our existing customers, both new versions act as regular product upgrades. There is no difference in functionality between the bundle and standalone versions.


What IP address conflicts does it detect?


1) Static IP Address assignment on two end-points within the same network


A typical situation where this may happen is cloning virtual machines. People simply download or copy a virtual image and run it on the network, not knowing whether a static IP address is pre-set.




2) Static IP in DHCP environment

There are many situations in which a static IP address may cause a conflict on a dynamic (DHCP-driven) network. It can be somebody's laptop with a static IP from a hotel, a cloned virtual machine with a static IP, or an attacker trying to connect to the local network using a static IP address. It can also be a newly connected network printer, which usually has a default static IP from the private 192.168.x.x address range.


3) DHCP scope overlap (split-scope)

DHCP scope overlap problems occur much more frequently these days because local networks are usually controlled through DHCP servers, and if a DHCP server goes down, nobody can get a new IP address and connect to the network. For that reason, IT builds backup or failover solutions consisting of two or more DHCP servers. It's usually a DHCP misconfiguration or a DHCP failure that causes both DHCP servers to offer the same IP address to two totally different devices.

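The overlap condition itself is easy to illustrate in code; the server names and scopes below are hypothetical examples, not anything IPAM exposes:

```python
import ipaddress
from itertools import combinations

# Hypothetical DHCP scopes configured across several servers; an overlap
# means two servers could lease the same address to different devices.
scopes = {
    "DHCP-A": ipaddress.ip_network("192.168.10.0/24"),
    "DHCP-B": ipaddress.ip_network("192.168.10.128/25"),  # inside DHCP-A's range
    "DHCP-C": ipaddress.ip_network("192.168.20.0/24"),
}

# Check every pair of scopes for overlapping address ranges.
overlaps = [
    (a, b) for a, b in combinations(sorted(scopes), 2)
    if scopes[a].overlaps(scopes[b])
]
print(overlaps)  # [('DHCP-A', 'DHCP-B')]
```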

4) IPAM end-point reservation vs. actually connected device

If an IT admin has reserved an IP address for a specific endpoint (MAC address) and a different device is currently connected with that IP, it doesn't necessarily mean there is a real IP address conflict. This may be due to obsolete IPAM information, or an upgrade of the originally assigned device. However, it could also be a time bomb waiting for the reserved device to be connected back to the network. In all cases, the IPAM administrator should be notified of the situation and either update the IPAM reservation or change the IP address of the unexpected device.
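A minimal sketch of this reservation-vs-observed comparison (the IPs and MACs are made up; IPAM and UDT perform this matching internally):

```python
# Hypothetical IPAM reservations (IP -> reserved MAC) versus what was
# actually observed on the network (IP -> connected MAC).
reservations = {"10.0.0.50": "aa:bb:cc:00:00:01"}
observed     = {"10.0.0.50": "aa:bb:cc:00:00:99"}

# Flag every IP whose connected MAC differs from its reservation,
# so an administrator can update the reservation or move the device.
mismatches = [
    (ip, reservations[ip], observed[ip])
    for ip in reservations
    if ip in observed and observed[ip].lower() != reservations[ip].lower()
]
print(mismatches)
```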


How can the IP Control Bundle help you solve IP address conflicts?


In two clicks you get the most important information about the conflict:


1) Who is in the conflict? MAC addresses, Vendor, Active Directory Information and connected port or Wireless SSID (UDT functionality)


2) History of endpoint connections - who is assigned to the IP address? Shows the recent users of a given IP address so the IPAM admin can understand who should be disconnected first.


3) "How-to" steps to help you take action


Here is an example of a DHCP scope overlap and how IPAM and UDT can help you troubleshoot the problem:



Both IPAM 4.3 and UDT 3.2 are now available on the customer portal, waiting for your download and upgrade. If you don't have one of these products, or if you would like to evaluate the entire IP Control Bundle, you may download it for evaluation.


IPAM and UDT also bring other improvements and bug fixes; for more details, see our release notes for IPAM and UDT.


I recommend watching the following webcast we made recently, which not only summarizes IP address conflict issues but also compares multiple IP address management solutions and demonstrates the latest IPAM & UDT (IP Control Bundle).

For just over a year now, the SolarWinds customer training program has been offering classes on the Orion Core platform and NPM. During that time, we’ve delivered over 150 classes to nearly 2500 students. And now, it’s time to grow!


New courses coming soon:

We have a number of new courses in development, including SAM courses, additional NPM classes on new features like DPI, and we’ll be working our way through the rest of the SolarWinds product line.


New training delivery options:

In addition to the live classes, we’re also developing recorded sessions. But we’re not just talking about an online video. The recorded classes will include a student guide, lab guide, and even access to a live lab environment for all the features of a live class, but on your schedule.


New training products:

We realize one of the biggest challenges to attending training is trying to find the time to squeeze it in. And sometimes you may not need a full course to get just the bits of information you're looking for. With that in mind, we'll begin rolling out training quick hits: shorter training videos focused on a specific topic. We'll be starting with a series of quick start guide videos for our extended product line, designed to get you started on the right foot.


All new student lab:

We’re currently rebuilding our student lab environment to accommodate more users and a faster turnover time between training sessions. The new lab will let us host simultaneous classes for more students, provide the lab environment for video training users, and ensure a faster, easier user experience.


2015 is shaping up to be a big year for customer training! We’re excited about the offerings and hope to see you in class soon.

For current course schedules and registration, please visit http://customerportal.solarwinds.com/virtualclassrooms.

It is our extreme pleasure to announce that Database Performance Analyzer (DPA) 9.0 is generally available.


Below is a quick summary of the features, but check out these two blog posts for all the details:


Top features in this release include:

  • Storage I/O Analysis
    • Determine if your slow queries are related to storage
    • See latency and throughput of specific files and correlate these to SQL statements
    • Clear visibility of storage performance and its effect on SQL response time
  • Resource Metrics Baselines
    • Determine if your resources are behaving abnormally and correlate them to SQL statements
  • SQL Statement Analysis and Advice
    • Get expert advice on specific SQL statements
  • Database File/Drive tab for SQL Server (with continued support for Oracle)
    • Use the File and Drive dimensions to isolate poor performance
  • Alerting Improvements
    • Resource Metric Alerts
    • Alerting Blackouts
  • New Version Support
    • MS SQL Server 2014
    • Oracle 12c (single-tenant)
    • DB2 10.5



Want more information on DPA and this release?

On November 11th, Microsoft released a total of 16 security updates for November's Patch Tuesday, which mitigate potential security threats in Office, Windows, SharePoint, and Internet Explorer. If you are a patch administrator for your company, it's worth your time to read the Microsoft article (Microsoft Security Bulletin Summary for November 2014).


In theory, there are several issues in the article that could cause concern, but the one that seems to be generating the most buzz is MS14-066: Vulnerability in Schannel Could Allow Remote Code Execution (2992611).  This vulnerability was additionally reported as CVE-2014-6321.  Although there are no known exploits of this vulnerability, it is quite serious and you should take note.  The vulnerability is in Microsoft Secure Channel (Schannel), a set of security protocols used to encrypt traffic between endpoints, primarily for Internet communications over HTTPS.  This particular patch is applicable to every Windows operating system under active maintenance, ranging from Windows Server 2003 SP2 and Windows Vista SP2 through Windows Server 2012 R2 and Windows 8.1.


Although the media is touting both the scope and the number of updates as the craziest thing we've ever seen in patching, this isn't even the largest bundle of patches Microsoft has released for a single Patch Tuesday. The current record belongs to April 2011, with a total of 29. But fear not, patch administrators: although the quantity seems daunting, the process is still the same. This is most definitely not a "sky is falling" moment - we're here to help.


One thing people seem to forget is that patch administration is the same on Patch Tuesday whether there are 2 patches or 100. If you follow the same procedure from start to finish, you can be confident that your environment is up to date and secure.  The best practice for any software (not just patches) is to test it on a small segment of your infrastructure before sending it everywhere. Thankfully, you can do this easily with SolarWinds Patch Manager.



Download the Updates from Microsoft to your WSUS Server

I've found that the easiest way to see these updates is within a Custom Updates View. I have one called "Microsoft Updates - This Week" which is defined as "Updates were released within a specific time period: Last Week" and "Updates source is Microsoft Update." If you need to create one, you can navigate to "Patch Manager\Enterprise\Update Services\WSUS Server\Updates" and then right-click on the Updates node and select "New Update View." Feel free to use this screenshot as a reference.


On that view, I tweak a few of the settings so that I can get a direct look at the updates that are concerned with this particular Patch Tuesday. I start by flipping the "Approval Settings" to "All" and the "Status" to "Any" and let the list refresh. Then I group it by the MSRC Number, which I do by dragging the "MSRC Number" header to the gray area just above the headings.


Now I have a list of all the items released by Microsoft within the last week, grouped by MSRC number. After that, it's as easy as expanding each group, scanning through the list, and seeing whether all the updates applicable to my environment are approved. (You can also flip the Approval filter at the top to "Unapproved", but I like seeing all the information.) It's also good to check the "State" field to make sure the updates are "Ready for Installation."
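The grouping and approval check the console performs can be sketched in code; the record fields and update titles below are hypothetical stand-ins for what the view displays:

```python
from collections import defaultdict

# Hypothetical update records as they might appear in the custom view.
updates = [
    {"msrc": "MS14-066", "title": "Security Update for Windows (KB2992611)", "approved": True},
    {"msrc": "MS14-066", "title": "Security Update for Windows Server (KB2992611)", "approved": False},
    {"msrc": "MS14-065", "title": "Cumulative Update for IE (KB3003057)", "approved": True},
]

# Group updates by MSRC number, as dragging the header into the grouping area does.
by_msrc = defaultdict(list)
for u in updates:
    by_msrc[u["msrc"]].append(u)

# Scan each bulletin for updates still awaiting approval.
for msrc in sorted(by_msrc):
    pending = [u["title"] for u in by_msrc[msrc] if not u["approved"]]
    if pending:
        print(msrc, "unapproved:", pending)
```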


If you don't see any of this information, it means that the updates haven't yet been synchronized to your WSUS Server. Running a manual Synchronization with Patch Manager is simple - highlight your WSUS server in the left pane and click on the "Synchronize Server" in the Action Pane (right-side of the screen). Click Finish and it's kicked off.


After the synchronization is completed, you can go back and verify that the updates are available using the Update View that we just finished building. Or you can use a report. I've crafted one especially for this month's updates.


To use it, download (Software Updates for MS14-NOV) and import it into the Windows Server Update Services folder. This report has a prerequisite that the WSUS Inventory job has completed after the updates have been synchronized to WSUS. This is normally a scheduled job that runs daily, so you can either kick it off yourself, or just wait for tomorrow to run the report.


This report tells you the WSUS Servers in the environment, the Security Bulletin updates, the Approval Status, the Product Titles, and the Update Title, filtered to only those updates in the MS14-NOV Bulletin.


Test Group

You should run updates against a test group whenever possible.  For me, I've got a test group with a few different versions of Windows in it, so I'll use that.


Approve the Updates for a Test Group

If you need to approve the updates, just right-click on them and select Approve and then "Approve for Install" in the Approve Updates window and (recommended) scope it only to your testing computers. They may already be updated based on your automatic approval rules. If that's the case and you are trusting, then you are good to go!  If not, send the updates to a test group first.


Theoretically, you can stop here and the updates will apply based on your defined policies (either via GPO or Local Policies), but where's the fun in that?

Run a Simulation of the Patch Installation on the Test Group

For any Patch Tuesday, I'm a fan of creating a set of Update Management Rules and saving it as a template. That way I can refer to it for pre-testing, deployment to the test group, and then deployment to the rest of the organization. You can either create your own MS14-NOV template (within the Update Management Wizard) or download and use mine. It's defined to include all security bulletins from MS14-064 through MS14-079.
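As a quick illustration of the range the template's dynamic rule covers (purely illustrative; Patch Manager defines the rule in its wizard, not in code), the bulletin IDs can be enumerated like this:

```python
# Enumerate the November 2014 bulletin IDs, MS14-064 through MS14-079.
bulletins = [f"MS14-{n:03d}" for n in range(64, 80)]
print(bulletins[0], bulletins[-1], len(bulletins))  # MS14-064 MS14-079 16
```

Note the count matches the 16 security updates Microsoft released that month.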


Now it's time to pre-test these patches. I right-click on my Test Group and then select "Update Management Wizard."


Select "Load existing update management rules" and select the MS14-NOV entry from the drop-down. (If you need to build your own, you can select "Create custom dynamic update management rules"). Click Next.


Verify that the Dynamic Rule shows Security Bulletins from MS14-064 through MS14-079 and click Next.


You can leave most of the defaults on the Options page, but be sure to check the "Run in planning mode" checkbox in the Advanced Options. Click Finish.


Either change your scope to include a few other computers or add additional computers for the testing and then click Next.


Select the schedule (I am a fan of "now") and Export or Email the results as you like and click Finish.


Planning mode is an oft-overlooked feature that you should definitely use for large patch deployments.


This gives you, in three quick tabs, the overall status summary of the job, the per-patch and per-computer details, and the distribution of the job (if you have multiple Patch Manager servers in your environment).




Pre-Stage the Patch Files on a Test-Group (optional)

If you have a large or highly distributed environment, you can use the Update Management Wizard to deploy the patches to the endpoint, but hold off on installing them. This can be staged to run over several hours or multiple days. This is as simple as running through the same steps as the previous wizard and then checking a different box in the "Advanced Options." Leave the Planning Mode checkbox unchecked and check the box for "Only download the updates, do not install the updates."


That's it. Just use the rest of the wizard as before and check this one box to pre-stage the updates to your environment.

Install the Patches on your Test Group

Same rules apply here. Just make sure that you leave the Planning Mode and Download Only checkboxes empty.  Yeah - it's really just that simple.


Reporting on the Results

To report on the status of these patches within your environment, you can use any number of our pre-built reports or customize your own for your needs.  Likewise, you should take advantage of all the Shared Reports in the Content Exchange here on Thwack and download this one, kindly donated by LGarvin and repurposed (very) slightly by yours truly: Report: Computer Update Status for MS14-NOV. This report shows the status of the MS14-NOV patches in your environment.


Plan & Test, Stage, Install: Three steps, one wizard, and you have successfully deployed those pesky patches to your test environment.  So what's up next?  Moving to production...

Patching Production Endpoints

Ever read the instructions on the back of a bottle of shampoo?  They say "Lather, rinse, repeat."  This is no different.

The previous process (hopefully) has been run against a group of test computers in your environment. If that goes well, then it's time to schedule the updates for the rest of your environment.  Just use the same few steps (Approve, Test in Planning, Deploy, and Install) to deploy the patches all at once, in waves, or staggered based on your needs.
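Patch Manager handles this scheduling through the wizard, but the "waves" idea is easy to illustrate. A minimal Python sketch, with hypothetical computer names and wave sizes (this is not a Patch Manager API):

```python
from datetime import datetime, timedelta

def plan_waves(computers, wave_size, first_start, hours_between):
    """Split a computer list into staggered deployment waves."""
    waves = []
    for i in range(0, len(computers), wave_size):
        waves.append({
            # each wave starts a fixed number of hours after the previous one
            "start": first_start + timedelta(hours=hours_between * len(waves)),
            "computers": computers[i:i + wave_size],
        })
    return waves

fleet = ["web01", "web02", "db01", "db02", "app01"]
for wave in plan_waves(fleet, 2, datetime(2014, 11, 12, 20, 0), 24):
    print(wave["start"].isoformat(), wave["computers"])
```

Three waves of at most two machines each, 24 hours apart, mirrors the "staggered based on your needs" option.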


Next Steps...

Hopefully, this "day-in-the-life" snapshot for a Microsoft Patch Tuesday has been helpful, but this just scratches the surface of what SolarWinds Patch Manager can do to help you keep your environment running smoothly.  Now that you've got Operating System patching under control, extend your knowledge of Patch Manager by keeping on top of patching Third Party Updates from Adobe, Google, Mozilla, Sun, and more!


If you need more help, like everything SolarWinds, start on Thwack.  We've got the best user community bar none.  Just ask a question in the forums and watch the people come out of the woodwork to help.  Start with the Patch Manager Forum and you can go from there.

If you frequent the various product and geek speak blogs here on Thwack, you've likely at one time or another stumbled upon references to, and possibly sneak-peek glimpses of, what has affectionately been referred to as the "AppStack". The Application Stack, or "AppStack" for short, is a term used to describe all the various moving parts that make up today's complex application delivery infrastructure. This begins at the bottom with the backend storage arrays where data is housed, moves through the various virtualization layers, up to the server that hosts the application, until finally we reach the application itself. The AppStack Environment View shown below accompanies the SAM 6.2, VMAN 6.2, and SRM 6.0 beta releases.


SAM Button.pngVMAN Button.pngSRM Button.png


The image below isn't the latest incarnation of the Candy Crush sensation, but rather a visual representation of all the various infrastructure components that make up my lab environment. Historically, all of this status goodness was tucked away under each respective category's product tab. Correlation and association of these various objects was a mental exercise based on a working knowledge of the infrastructure's initial deployment, or legacy information passed down from systems administrator to systems administrator. To make matters worse, today's infrastructure is dynamic and ever-changing, so what you thought you knew about the organizational layout of your supporting infrastructure may have changed dozens, or even hundreds, of times since it was initially deployed.




The AppStack Environment provides a 10,000-foot overview of your entire environment. At first, this may seem a bit overwhelming, as there's a lot of status information contained within a single view. To that end, our UX research team toiled tirelessly to ensure that the AppStack provides as much useful information as possible, while reducing or completely eliminating any extraneous noise that could distract from identifying problems in the environment and their likely cause. Perhaps most unnerving when first presented with the AppStack Environment View is the lack of object names displayed. Well, fret not, my dear friend, because our UX masterminds thought of everything!


In most cases, when trying to get a high-level overview of the health of the environment, you want as much information represented in as little real estate as possible. If the status is good (green), then it's likely not of any concern. For those objects that are in distress, however, you may want to glean additional information about what they are before digging deeper. As with most things in Orion, mouse hovers are available extensively throughout the AppStack Environment view to expose information such as the object's name and other important details. However, if you're trying to determine the names of multiple objects in distress, a mouse hover is a fairly inefficient means of identifying what those objects represent. To address this need, you will find a "Show Names" link in the top left of the AppStack. Clicking this link exposes the name of any object currently in distress. Object names for items that are up (green) remain hidden from view to reduce visual clutter, so it's easier to focus on the items currently in distress.


Show Names.png



The AppStack Environment functions as a status overview page that is valuable when visually attempting to identify issues occurring within the environment, such as in the case of a NOC view, or perhaps the view you keep on your monitor while you go about your usual daily routine.  Any changes in status are updated in real-time for all objects represented in the AppStack. No browser refresh necessary. What you see in the AppStack Environment view is a real-time representation of the status of your environment.


Leveraging application topology information gathered across various products, from Server & Application Monitor (SAM), Virtualization Manager (VMAN), and the all-new Storage Resource Monitor (SRM), Orion is now capable of automatically compiling relationships between all the various infrastructure components that make up the application stack and displaying them in a meaningful fashion. Object associations are updated and maintained automatically, in real time, as the environment changes. This is important in today's dynamic environments, where VMs are vMotioned between hosts, and storage is so easily reprovisioned, reallocated, and presented to servers. If someone had to maintain these relationships manually, it would likely be a full-time job that could drive a person insane!


Clicking on any object within the AppStack selects that item and displays all relevant relationships the object has to any other object represented within the AppStack Environment. In the case below, I've selected an Application that's currently in a "Critical" state. I can see in the selection bar that this is a Microsoft SQL Server; one I've been having a bit of trouble with lately, so I'm not at all surprised to see it in a critical state. From here I can see which server SQL is running on, and the fact that it's obviously virtualized, because I can also see the host, virtual cluster, datacenter, and datastore the VM resides upon. None of those appear to be my issue, however, as their status is "green". Moving further down the AppStack I can see the volumes that the VM is using, the LUN backing the datastore this VM resides upon, and the storage pool itself. So far, so good. Wait a minute, what's that I see? The backend array this VM is running on appears to be having some trouble.


AppStack Selected.png

Double-clicking the Array object, or selecting the array and clicking on the object name within the selection bar (for those of you rocking a tablet or other touch-enabled device), takes me to the Array Details view. There I can see the I/O on this array is exceptionally high. I should probably vMotion a few VMs off the hosts that are utilizing this array, or provision more storage to those hosts from different arrays and balance the VMs across them.


Selection Bar.png


In this example, the AppStack Environment view was able to cut through all the information Orion is capable of collecting across multiple products to expose a problem, provide context for that problem, and identify the root cause. This can be done across the entire environment, or custom AppStack Environment views can be created to focus on specific areas of responsibility. Perhaps you're the Exchange administrator, and all you care about is the Exchange environment. No problem! Creating a custom AppStack Environment view is simple. You can filter the AppStack by virtually any property available in Orion, such as Application Name or a Custom Property, by adding filter properties.
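The filtering itself happens in the Orion web UI, but the underlying idea is just property matching. A hedged Python sketch with made-up object records and property names (not the Orion data model):

```python
def filter_appstack(objects, **criteria):
    """Keep only objects whose properties match every filter criterion."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in criteria.items())]

# hypothetical inventory records for illustration
inventory = [
    {"name": "EXCH01", "type": "Application", "app": "Microsoft Exchange"},
    {"name": "SQL01",  "type": "Application", "app": "Microsoft SQL Server"},
    {"name": "ESX-02", "type": "Host",        "app": None},
]

print([o["name"] for o in filter_appstack(inventory, app="Microsoft Exchange")])
# ['EXCH01']
```

An Exchange-only layout is exactly this kind of filter, saved so it can be reused from the layout selector.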


Narrow your Envionment.pngSave as New Layout.pngSave as new Layout Dialog.png


Once you've filtered the AppStack to your liking and saved it as a new layout, you can reference it at any time from the layout selector in the top right of the AppStack Environment view. These custom AppStacks can also be used in rotating NOC views for heads-up displays, or added to any existing summary view.


In addition to this new AppStack Environment View, you will also find a contextual AppStack resource on the Details view of every object type that participates within the AppStack. This includes, but is not limited to...


  • Node Details (For Servers)
  • Application Details
  • Array Details
  • Group Details
  • Storage Pool Details
  • LUN Details
  • Hyper-V Host Details
  • ESX Host Details
  • Cluster Details
  • Virtual Center Details
  • Datastore Details
  • Volume Details
  • vServer Details
New Layout - Microsoft Exchange.png


AppStack MiniStack.png

The AppStack Environment resource provides relevant contextual status information specific to the object details being viewed. The image on the right, taken from the Node Details view of the same SQL server shown in the example above, displays the relationships of all other monitored objects to this node. This context aware application mapping dramatically reduces time to resolution by consolidating all related status information to the object being viewed into a single, easily digestible resource.


Relationships aren't limited exclusively to the automatic physical associations understood by Orion however. These relationships can also be extended using manual dependencies to relate logical objects that are associated in other ways, such as SharePoint's dependency on the backend SQL server. This capability further extends the AppStack to represent all objects as they relate to the business service, and significantly aids in root-cause analysis of today's complex distributed application architectures.


Earn $100.00 & 2000 Thwack Points!


SAM 6.2, VMAN 6.2, and SRM 6.0 beta participants have been given a special opportunity to earn $100.00 and 2000 Thwack points simply for taking part in these betas and providing feedback. To participate, you will need to sign up and install at least two beta products that integrate within the new AppStack Environment view.


If you already own Virtualization Manager, you can sign-up here to participate in the Virtualization Manager 6.2 beta, get it installed and integrated with SAM or SRM and you're well on your way to earning $100.00 and 2000 Thwack points. Please note though, that you MUST already own Virtualization Manager and be under active maintenance to participate in the Virtualization Manager 6.2 beta.


If you currently own Storage Manager, and are under active maintenance, you can sign-up here to participate in the SRM 6.0 beta. Install SRM alongside SAM or VMAN to earn yourself some quick holiday cash!


If you're not yet participating in the Server & Application Monitor 6.2 beta, then what are you waiting for? All SAM product owners under active maintenance are welcome and strongly encouraged to join. Simply sign-up here. If you also happen to own Virtualization Manager or Storage Manager, then now is a great time to try out these awesome combined beta releases, and earn a little cash in the process.


Space is limited, so please reply to this post letting us know that you're interested in participating. You'll not only fatten your wallet in the process, but you'll also be helping us to build better products! Here’s how it’s going down.


  1. When you receive the beta bits for two or more products: have a 15-minute phone call with our UX team to review what to expect and tell us about your job and technical environment. What you receive: my eternal gratitude!
  2. 2 to 4 days after installing two or more participating beta products: send us a 1 to 2 minute video telling us about your initial experiences with AppStack. You can make the video with whatever is easiest for you; most people use their phone. What you receive: a $50 Amazon gift card.
  3. 7 to 10 days after installing two or more participating beta products: send us another 1 to 2 minute video telling us about how you've used AppStack. What are your favorite things? What drives you crazy? What you receive: a $50 Amazon gift card.
  4. 12 to 14 days after installing two or more participating beta products: spend one hour with us showing us how you use AppStack. We'll meet using GoToMeeting so that you can share your screen and walk us through some typical use cases. What you receive: 2,000 thwack points.

The team at SolarWinds has been hard at work producing the next release of Virtualization Manager with some great new features. It is my pleasure to announce the Virtualization Manager 6.2 Beta, which is packed full of features and continues VMAN's journey of integration with Orion. The Beta is open to VMAN customers currently on active maintenance; to try it out, please fill out this short survey.


VMAN Beta button.png



Beta Requirements

Orion integration with VMAN Beta 1 requires an install of the SAM 6.2 Beta.  Check out the 3-part blog post (links to the blog posts can be found here) which details the new features of SAM, including AppStack support for end-to-end visualization of the environment: App > Server/VM/Host > Storage (datastore).  As a reminder, beta software is for testing only and should not be used in a production environment.



Essential to diagnosing an issue (or preventing one) is understanding dependencies and relationships in a virtual environment.  Most administrators start troubleshooting an issue armed only with the knowledge that an application is slow or a server process is functioning sub-optimally.  AppStack provides an at-a-glance view of environmental relationships by mapping dependencies end to end and providing easily identifiable statuses, which can quickly be used to identify root cause.  If a VM is reporting high latency, an administrator can quickly identify the datastore, host, and related VMs to determine where the issue may be.  In cases where NPM, SAM, and VMAN are integrated, AppStack can quickly identify whether a SAM threshold alert is attributable to network latency or virtual host contention.


AppStack can provide a view of your entire environment with relational dependencies visible end-to-end.  Status from additional integrated SolarWinds products such as SAM, SRM, and WPM are provided and the relationship mapped to the virtual environment.



AppStack is also visible from an individual Virtual Machine's Details View and displays the relational context of that virtual machine only.



Management Actions

A great new feature is the ability to perform virtual management actions without leaving Virtualization Manager.  With all your virtualization monitoring information available, you can make the decision to stop, start, or suspend a VM from within VMAN.  In addition to power management options, a virtual admin can also create and delete snapshots without ever leaving the VMAN console or logging into vCenter. An administrator can now act on an alert they see in the dashboard, such as VMs with Old Snapshots, and then delete those snapshots from within Virtualization Manager.  The addition of management actions also allows a virtual administrator to delegate operational tasks to non-virtual admins or teams (such as the helpdesk) without providing access to vCenter.
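The age check behind an alert like "VMs with Old Snapshots" can be sketched in a few lines of Python; the snapshot inventory and field names here are hypothetical, not the VMAN data model:

```python
from datetime import datetime, timedelta

def old_snapshots(snapshots, max_age_days=30, now=None):
    """Return snapshots older than max_age_days -- deletion candidates."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [s for s in snapshots if s["created"] < cutoff]

# made-up inventory for illustration
inventory = [
    {"vm": "sql01", "name": "pre-upgrade", "created": datetime(2014, 9, 1)},
    {"vm": "sql01", "name": "pre-patch",   "created": datetime(2014, 12, 1)},
]
stale = old_snapshots(inventory, 30, now=datetime(2014, 12, 15))
print([s["name"] for s in stale])  # ['pre-upgrade']
```

VMAN surfaces this as a dashboard alert; the new management actions then let you delete the flagged snapshots without opening vCenter.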

Mgmt actions 1.JPG


Management Actions, and what you will see:

Power Management Actions

  • Power Off VM
  • Suspend VM
  • Reboot VM

Selecting a power management action will present a pop up confirming your action.


Snapshot Management Actions

  • Take Snapshot of VM
  • Delete Snapshots

Creating a snapshot will present a pop-up to confirm the snapshot and create a custom name.

Create Snapshot.JPG

Deleting a snapshot will present a pop up to select a snapshot to delete.


Delete snapshot.JPG

Management Action Permissions

To execute management actions, the account SolarWinds uses to connect to your vCenter Server must be given the necessary vCenter Server permissions. The minimum vCenter Server permissions required are:


  • Virtual machine > State > Power Off
  • Virtual machine > State > Power On
  • Virtual machine > State > Reset
  • Virtual machine > State > Suspend
  • Virtual machine > Snapshot management > Create snapshot
  • Virtual Machine > Snapshot management > Remove snapshot

Note: In vCenter Server 5.0, snapshot privileges are located under:

  • Virtual machine > State > Create snapshot
  • Virtual Machine > State > Remove snapshot


Managing access to the Management Actions within SolarWinds

Once permissions are set on the account used to connect to vCenter, you can delegate access within SolarWinds to execute the management actions. The default permission for management actions is Disallow, which disables the virtual machine power management and snapshot management options.  To enable these features for a user or group, select Settings in the upper right-hand corner of Orion.  Then select Manage Accounts, found under User Accounts, and select the account or group to edit. Expand Integrated Virtual Infrastructure Monitor Settings and select Allow for the management action you wish to enable.


Mgmt Actions Permissions.JPG

Virtual Machine Power Management

     Allow - Enable the options to start, stop, or restart a virtual machine.

     Disallow - Do not enable the options to start, stop, or restart a virtual machine.


Snapshot Management

    Allow - Enable the options to take snapshots of a virtual machine, or to delete snapshots.

    Disallow - Do not enable the options to take snapshots of a virtual machine, or to delete snapshots.


If an admin attempts to perform a management action without being granted access within SolarWinds, they will receive an error similar to the one below.

failed snapshot.JPG



Co-stop % Counter

A new counter has been added to the VM Sprawl dashboard that is useful for detecting when too many vCPUs have been assigned to a VM, resulting in poor performance.  Co-Stop (%CSTP, as it is represented in esxtop) identifies the delay incurred by the VM as a result of too many vCPUs deployed in an environment.  Any co-stop value above 3 indicates that virtual machines configured with multiple vCPUs are experiencing performance slowdowns due to vCPU scheduling. The expected action is to reduce the number of vCPUs, or vMotion the VM to a host with less vCPU contention.
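The arithmetic behind the counter is straightforward; a small Python sketch using the 3 threshold suggested above (the sample numbers are made up):

```python
def co_stop_percent(costop_ms, sample_ms):
    """%CSTP: time the VM spent co-stopped as a share of the sample window."""
    return 100.0 * costop_ms / sample_ms

def needs_action(cstp_percent, threshold=3.0):
    """True when co-stop indicates vCPU scheduling contention."""
    return cstp_percent > threshold

# 150 ms co-stopped during a 2,000 ms sample window:
cstp = co_stop_percent(150, 2000)
print(cstp, needs_action(cstp))  # 7.5 True
```

At 7.5 the VM is well past the threshold, so the guidance above applies: shed vCPUs or move the VM.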



CoStop - VIM.JPG


Web Based Reports

Web-based reporting, introduced in VMAN 6.1, made all data in the VMAN integration available to the web-based report writer, providing an easy way to create powerful, customizable reports in an easy-to-use web interface.  We have extended this functionality to include out-of-the-box web-based reports previously found only in the VMAN console. These new web-based reports include:


    • Disconnected Hosts
    • High Host CPU Utilization
    • Hosts with Recent Reboots
    • Newly Added Hosts
    • Datastores in Multiple Clusters

    • All VMs
    • Existing VMs with Recent Reboot
    • High VM CPU Utilization - Latest
    • High VM Memory Utilization - Latest
    • Linux VMs
    • Low VM CPU Utilization – Day
    • Low VM Memory Utilization – Day
    • Newly Added VMs
    • Stale VMs – Not Recently Powered On
    • VMs configured for Windows
    • VMs that are not Running
    • VMs Using over 50GB in storage
    • VMs Using Thin Provisioned Virtual Disks
    • VMs With a Bad Tools Status
    • VMs With a Specific Tools Version
    • VMs With Less Than 10% Free Space
    • VMs With Less Than 1GB Free Space
    • VMs With Local Datastore
    • VMs With More Than 20GB Free Space
    • VMs With More Than 2 snapshots
    • VMs With No VMware Tools
    • VMs With old snapshots (older than 30 days)
    • VMs With One Disk Volume
    • VMs With One Virtual CPU
    • VMs With One Virtual Disk
    • VMs With over 2GB in snapshots
    • VMs With RDM Disks
    • VMs With Snapshots
    • Windows VMs
    • High Host Disk Utilization
    • VMs on Full Datastores (Less than 20% Free)
    • VMs Using More Than One Datastore

Web Based Alerts

As discussed in the Virtualization Manager 6.1 blog post, web-based alerting was introduced, allowing virtual administrators to take advantage of baselines and dynamic thresholds. Originally we included only a small subset of the standard VMAN alerts out-of-the-box, but we have now extended these out-of-the-box alerts to include:


The new alerts fall into four categories: Cluster, Host, VM, and Datastore/Cluster Shared Volume.


    • Cluster low VM capacity
    • Cluster predicted disk depletion
    • Cluster predicted CPU depletion
    • Cluster predicted memory depletion

    • Host CPU utilization
    • Host Command Aborts
    • Host Bus Reset
    • Host console memory swap
    • Host memory utilization
    • Hosts - No heartbeats
    • Hosts - No BIOS ID
    • Host to Datastore Latency

    • Guest storage space utilization
    • Inactive VMs - disk
    • Stale VMs
    • VM - No heartbeat
    • VM Memory Limit Configuration
    • VM disk near full
    • VMs with Bad Tools
    • VMs with Connected Media
    • VMs with Large Snapshots
    • VMs with Old Snapshots
    • VMs With More Allocated Space than Used
    • Disk 100% within the week
    • VM Phantom Snapshot Files
    • Datastore Excessive VM Log Files


Appliance Password Improvements

With the ease of setup provided by the virtual appliance, it is easy to overlook changing the appliance admin account (Linux) password or the Virtualization Manager admin account password.  In Virtualization Manager 6.2, the initial login will display a password change reminder with links that make it easy to change both passwords.

Password change.JPG

Linux Password.JPG 

That's it for now.  Don't forget to sign up for the Beta and provide your feedback in the Virtualization Manager Beta forum.

VMAN Beta Button 2.png

In my many chats with customers, I’ve found they didn’t know we have tools that allow them to synchronize files across servers automatically – and we give them away for FREE!

Some history here: when we acquired RhinoSoft many moons ago, they sold a crazy-popular FTP client called FTP Voyager.  As a part of the acquisition, we decided to make it free to the world.  A hidden gem within this product is its Scheduler functionality, a service that allows you to automate file transfer operations on a scheduled basis.

Once you have installed FTP Voyager, you can open it by right-clicking on the task bar and selecting “Start Management Console”.



Once you launch it, you get a dialog with some options to start from.



Backup & Synchronize are purpose-built wizards that ultimately create the same thing as the first option: a task.

Within a task, you can configure it to perform many operations and behave in different ways, for example:

  • Tasks starting additional tasks
  • Email notifications on success, failure & task completion
  • Tray icon balloons for custom alerts
  • Multiple transfer sessions
  • Conditional event handling
  • Optional trace logs (with log aging) for each task
  • Custom icons & colors for each task


The Backup wizard backs up files or folders and allows you to restore data lost to hardware failure, software glitches, or other causes. Many organizations perform backups of day-to-day activities to ensure minimal downtime in case a disaster like a corrupt hard drive occurs.


Synchronization focuses on keeping folders and files on your local machine and a remote server in sync. For example, web developers may use synchronization to maintain a local copy of an entire website. This allows the web developer to make changes to the website offline, and then upload those changes when they are complete.
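FTP Voyager makes these decisions internally, but the upload-side comparison is easy to illustrate. A minimal Python sketch using made-up path-to-mtime indexes (not the FTP Voyager implementation):

```python
def files_to_upload(local_index, remote_index):
    """Return local paths that are missing remotely or newer than the remote
    copy. Both arguments map a relative path to a modification time."""
    return sorted(
        path for path, mtime in local_index.items()
        if path not in remote_index or mtime > remote_index[path]
    )

# hypothetical website tree: index.html was edited locally, logo.png is new
local  = {"index.html": 200, "css/site.css": 150, "img/logo.png": 100}
remote = {"index.html": 180, "css/site.css": 150}
print(files_to_upload(local, remote))  # ['img/logo.png', 'index.html']
```

A two-way sync runs the same comparison in the other direction as well, downloading whatever is newer on the server.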


I can already read your mind and predict your next question via the power of thwack.

               “This is awesome Brandon, but how do I do this in bulk or push out to many machines?”

Here’s the bare minimum you would need for a fully configured FTP Voyager Scheduler deployment with no end-user interaction:

  1. Deploy FTP Voyager via your favorite app deployment application or leveraging GPO
  2. The registration ID: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerID.txt
  3. Pre-configured Scheduled tasks: C:\ProgramData\RhinoSoft\FTP Voyager\Scheduler.Archive
  4. Pre-configured site profiles for Scheduler: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyager.Archive
  5. Pre-configured transfer settings for Scheduler:  C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerSettings.Archive


This assumes that all machines receiving these files use the same file system structure defined in the tasks, that all required folders for those tasks already exist, etc.

    NOTE: When doing #3 and #4, you must copy them from the same machine, as the items configured in step #3 use values present in #4 to link the two together. If different machines are used to generate the data, the values will not link properly.
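For illustration, items 2 through 5 amount to copying four files from one reference machine. This helper and its arguments are hypothetical; the file names are the ones listed above:

```python
import shutil
from pathlib import Path

# the four files listed in steps 2 through 5 above
CONFIG_FILES = [
    "FTPVoyagerID.txt",
    "Scheduler.Archive",
    "FTPVoyager.Archive",
    "FTPVoyagerSettings.Archive",
]

def push_config(source_dir, dest_dir):
    """Copy all four files from a single reference machine, so the Scheduler
    tasks and site profiles stay linked (see the note above)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for name in CONFIG_FILES:
        # copy2 preserves timestamps along with the file contents
        shutil.copy2(Path(source_dir) / name, dest / name)
```

In practice, `source_dir` would be a share exposing the reference machine's `C:\ProgramData\RhinoSoft\FTP Voyager` folder and `dest_dir` the same path on each target.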

That’s it, simple as that.  Whether you want to deploy this on one of your machines or many in your environment, you can now have your files automatically backed up or synchronized to your Serv-U server for safekeeping.

It's 2014 and simple bandwidth saturation continues to be one of the most common network problems.  Every time we make advances in data transit rates, we seem to find new ways to use more bandwidth.  Or is it the other way around?  QoS helps us use the bandwidth we have more effectively, but adds a layer of complexity.  Another interesting side effect of QoS in many organizations is that, by successfully protecting critical application traffic, QoS allows you to defer bandwidth upgrades.  By the numbers, keeping your circuits saturated absolutely makes business sense.  Who wants to pay for bandwidth you don't use?  The strategy is so effective that saturated circuits are becoming the new normal.


When bandwidth saturation is the normal state and QoS is responsible for protecting our critical traffic, how do we monitor and troubleshoot slowness?  We no longer have an available bandwidth buffer that we can monitor as an indication of the health of the circuit and the quality of the service we're providing to all of the transit traffic.  Saturated circuits require a different strategy.


1-minute max to tell us about your IT troubleshooting pain points


Rummaging Through the Tool Box


Let's take a look in our tool box and see what makes sense to use with saturated circuits:

Ping - Ping falls in only one QoS category and it generally isn't the same category as your important traffic.  Accordingly, ping is not a very good indicator of health.

Bandwidth Utilization - High bandwidth utilization no longer means bad service.  In fact, arguably, high bandwidth utilization is exactly what you're trying to achieve.  This is also not a good indicator of health.

NTA - Knowing what different types of traffic make up your total traffic load continues to be important.  Knowing how much traffic is being sent or received within each QoS class and what types of traffic make up the contents of that QoS class are more important than ever.  Additionally, QoS queue monitoring provides a window into when QoS is making the decision to drop your traffic.

VNQM - VNQM uses IP SLA operations that can masquerade as most other types of traffic.  You can monitor how VoIP, SQL, HTTP, and other traffic are each uniquely serviced.

QoE - Where VNQM uses synthetic transactions (read: monitoring system generated) to constantly measure the service you're providing to a particular type of traffic, QoE watches your real transit traffic to see what kind of service you're actually providing your users.


Some of our older tools no longer have the fidelity and granularity we need to help.  Ping and simple bandwidth utilization are no longer good indicators of health.  We have to rely on more advanced testing tools like VNQM and QoE.  NTA remains our detailed view into what our traffic actually consists of, with a shift of importance to the QoS metrics.

It's interesting to me that QoE overlaps in some ways with NTA and in other ways with VNQM.  Like NTA, QoE can tell you what traffic is traversing a specific link (using NPAS) or a specific device (SPAS).  NTA is much better at it than QoE, though.  Like VNQM, QoE allows you to continually monitor the quality of service your network is providing, but in a different way.  Where VNQM sends synthetic traffic and analyzes the results, QoE analyzes real end-user traffic.  QoE's strong suit is providing visibility into the service level you're really providing to your end users and then breaking that down into network service time and application service time.

Now that we've reviewed our tools in this new light, let's look at how our monitoring and troubleshooting processes have changed.


Monitoring Saturated Circuits


Here's an example of how we traditionally monitored for slowness:

1) Ping across the circuit and watch for high latency.

2) Monitor interfaces and watch for high bandwidth utilization.

3) Set up flow exports so that when an issue occurs, you can understand the traffic.
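Step 2's utilization figure is simple counter math over two SNMP interface samples; a minimal sketch with made-up numbers:

```python
def utilization_percent(octets_start, octets_end, interval_s, if_speed_bps):
    """Percent utilization from two ifInOctets/ifOutOctets counter samples."""
    bits = (octets_end - octets_start) * 8  # octets are bytes on the wire
    return 100.0 * bits / (interval_s * if_speed_bps)

# a 10 Mb/s circuit that moved 337,500,000 octets in a 5-minute poll cycle:
print(round(utilization_percent(0, 337_500_000, 300, 10_000_000), 1))  # 90.0
```

A real poller also has to handle 32-bit counter wrap (or use the 64-bit HC counters), which this sketch omits.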


When circuit saturation is expected or desirable, the steps change:

1a) Set up IP SLA operations to test network service at all times, and/or

1b) Set up QoE to monitor real user traffic whenever it is occurring.

2) Set up flow exports so that when an issue occurs, you can understand the traffic.
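
Step 1a might look like the following on a Cisco router. This is only a sketch: the responder address, port, and codec are placeholders for whatever best represents the traffic class you want to test.

```
! Define a UDP jitter operation that emulates a G.711 voice stream.
! ToS 184 = DSCP EF, so the probe is serviced by the voice QoS class.
ip sla 10
 udp-jitter 10.1.1.2 16384 codec g711ulaw
 tos 184
 frequency 60
ip sla schedule 10 life forever start-time now
!
! On the device at the far end of the circuit:
ip sla responder
```

Once operations like this are running, VNQM simply collects and trends the results, so you get a continuous measure of how each class of service is actually being treated.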


One thing that is very important when all your circuits are full is the ability to ignore things that don't matter.  If BitTorrent is running slow for your end users, do you care?  When all of your circuits are running hot, you need to be able to detect when critical applications are suffering and impacting your users' ability to do their jobs, and ignore the rest.


Troubleshooting Saturated Circuits


Here's an example of how we traditionally troubleshot network slowness:

1) Receive high latency alert, high bandwidth utilization alert, or user complaint.

2) Isolate slowness to a specific circuit by finding the high bandwidth interface in the transit path.

3) Analyze flow data to determine what new traffic is causing the congestion.

4) Find some way (nice or not nice) to stop the offending traffic.


When circuit saturation is the norm, troubleshooting looks more like this:

1) Receive an alert for poor service from VNQM or QoE, or receive a user complaint.

2) Isolate slowness to a specific circuit by finding high bandwidth interfaces in the transit path and comparing which endpoints or IP SLA operations are experiencing the slowness.

3) Determine which QoS class to focus on by mapping the type of traffic affected to a specific QoS class.

4) Determine if the QoS class is being affected by competition from other traffic in the same class or in a higher priority class.

5) Analyze flow data within the offending QoS class to find the traffic causing the congestion.

6) Find some way (nice or not nice) to stop the offending traffic.
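
Steps 3 through 5 above boil down to rolling flow data up by QoS class and then finding the top talkers inside the congested class. A minimal sketch of that logic (the flow records and the DSCP-to-class mapping are invented for the example; use your own QoS policy):

```python
from collections import defaultdict

# Hypothetical flow records: (source, destination, DSCP value, byte count)
flows = [
    ("10.0.0.5",  "10.1.0.9",  46, 1_200_000),   # EF: voice
    ("10.0.0.7",  "10.1.0.9",  46,   300_000),
    ("10.0.0.8",  "10.2.0.4",  26, 9_500_000),   # AF31: video
    ("10.0.0.12", "10.3.0.2",   0, 4_000_000),   # best effort
]

# Illustrative DSCP-to-class mapping; substitute your QoS policy's markings
dscp_to_class = {46: "voice", 26: "video", 0: "best-effort"}

def bytes_by_class(flows):
    """Step 4: see which QoS class is carrying the load."""
    totals = defaultdict(int)
    for _src, _dst, dscp, nbytes in flows:
        totals[dscp_to_class.get(dscp, "other")] += nbytes
    return dict(totals)

def top_talkers(flows, qos_class, n=3):
    """Step 5: largest conversations inside the offending class."""
    in_class = [f for f in flows if dscp_to_class.get(f[2], "other") == qos_class]
    return sorted(in_class, key=lambda f: f[3], reverse=True)[:n]
```

In practice NTA does this aggregation for you, but the sketch shows why per-class visibility matters: the "video" class here is congested by a single conversation that a raw interface counter would never single out.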


Your Thoughts?


Everyone has different tools in their toolbox and different environments necessitate different approaches.  What tools do you use to monitor and troubleshoot your saturated circuits?  What process works for your environment?



It is always exciting to present what is new in our products, and this time I have something I'm particularly happy about: Web Help Desk (WHD) 12.2 and Dameware (DW) 11.1 integration. This Beta starts today, and it also includes the features mentioned in Web Help Desk 12.2.0: Looking behind the curtain. If you are interested in participating, please go ahead and sign up here:




Note: You have to own at least one of these products to be able to participate in this Beta. Also, this Beta is not suitable for production environments; you will need a separate test system.


Before we talk about the details, let's quickly introduce both products.


Web Help Desk is a powerful and easy-to-use help desk solution. It does not overload you with tons of configuration settings, but it's still enormously flexible, and for many use cases it works out of the box. You can integrate it with Active Directory, turn Orion alerts into tickets automatically, use it for Asset Management, or use the powerful Knowledge Base to divert new service requests by proactively displaying possible service fulfillment articles.


Dameware, on the other hand, is an efficient remote support and administration tool, with support for multiple protocols (native MRC, RDP, or VNC) and the ability to connect to remote machines on the LAN as well as over the Internet.


Help desk and remote support tools are natural complements and part of every technician's workflow. You have to track incidents and service requests, and in many cases you need to connect remotely to fix an issue or to collect more information as part of the investigation. It is a no-brainer that this workflow should be smooth, without unnecessary steps and clumsy switching between systems. That is why we have worked hard to integrate WHD and DW, and I want to share some initial results with you.


Seamless Start of a Remote Session

It was possible to integrate WHD and DW before this release by using the "Other Tool Links" feature of WHD (read more in the Dameware / Web Help Desk Integration article). That integration lets you start remote sessions directly from WHD and requires little configuration. However, it also requires the installation of a 3rd party tool, CustomURL, on every technician's computer. To avoid this hassle, in this Beta version Dameware can register itself as the protocol handler for custom Dameware links. This works in all major browsers (specifically IE, Chrome, Firefox, Safari, and Opera).
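
For context, registering a custom URL scheme on Windows works roughly like this. This is only a sketch of the standard mechanism: the install path, and the exact keys the Dameware installer writes, are assumptions.

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\dwrcc]
@="URL:DameWare Remote Connect Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\dwrcc\shell\open\command]
@="\"C:\\Program Files\\SolarWinds\\DameWare Remote Support\\DWRCC.exe\" \"%1\""
```

The browser sees the dwrcc scheme, looks up the registered handler under HKEY_CLASSES_ROOT, and launches that executable with the full URL as its argument, which is why no per-machine helper tool is needed anymore.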


The links have the format of a normal URL, with the scheme part changed to "dwrcc". The browser will recognize such a URL and pass it to Dameware. Here is an example of the URL:




Dameware takes this URL, extracts information such as the IP address or the host name, the WHD URL, and so on, and tries to locate the remote machine in the Saved Host list. If the IP address or the host name is found, Dameware will use the saved credentials and other information, such as the protocol, to start a remote session. If no information is present in the Saved Host list, you will see a connection dialog where you can enter the credentials. Yes, they will be saved for later re-use. :) In the following screenshot you can see that Dameware runs in Web Help Desk integration mode. This is indicated by the yellow bar at the top of the window.
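
To illustrate the idea, here is how such a link can be picked apart. The URL and its query parameter names are invented for this example; the real links are generated by WHD and may be shaped differently.

```python
from urllib.parse import urlparse, parse_qs

# Invented example link; real parameter names may differ
link = "dwrcc://192.168.1.50:6129?whdurl=https%3A%2F%2Fwhd.example.com&ticket=1234"

parsed = urlparse(link)
params = {k: v[0] for k, v in parse_qs(parsed.query).items()}

host = parsed.hostname        # looked up in the Saved Host list
port = parsed.port            # 6129 is the default MRC port
whd_url = params.get("whdurl")  # tells DW which WHD instance to report back to
ticket = params.get("ticket")   # tells DW which ticket the session belongs to
```

The key point is that everything Dameware needs (target machine, originating WHD server, ticket context) rides inside the one link the browser hands over.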




The MRC application will exit after the session ends, which is different from the normal behavior when you start a session manually.


We also added a new configuration option to WHD to display Dameware links. You can enable the links by selecting the check box under Setup -> Assets -> Options -> DameWare Integration Links Enabled (this is actually everything needed to configure the integration):




Notice that you can also define the Request Type. This relates to the second part of the integration available in this Beta: the ability to save session data to WHD.


Saving Session Data into Tickets

When the remote session ends, you have the opportunity to save some session data to WHD. Specifically, this means session meta-data, chat history, and any screenshots taken in Dameware. The meta-data that will be saved includes the following:


  • Technician's host name or IP address
  • End user's host name or IP address
  • Session start time
  • Session end time
  • Duration
  • Session termination reason (e.g. "The session was closed by technician")


Apart from meta-data, Dameware will also pass the chat history to WHD in the form of an RTF document. This document preserves all formatting and is easy to transfer for later use. The RTF document will be attached to a newly created ticket note, which will also include the aforementioned meta-data. Any screenshots you take will also be saved to WHD as attachments. (Just be aware of the size limit for WHD attachments and make sure it is sufficient for screenshots.)

DW uses the WHD API to pass the information and to inform you about the progress. If anything goes wrong (for example, an attachment is too big), it will be displayed in a dialog, and you will have the opportunity to save the session data locally, so no information is lost.

All data is saved into a Note of the given Ticket. Any attachment is attached to the Note, rather than to the Ticket, to make it easier to identify attachments related to a given session. This information not only provides evidence of remote sessions for your customers, it also preserves important information for later troubleshooting and makes it possible to track work time and report on it.
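
As a rough illustration of what DW hands to WHD for the note, the session meta-data listed above could be assembled like this. The field names and structure here are invented for the sketch; the actual WHD API schema differs.

```python
from datetime import datetime, timezone

def build_session_note(tech_host, user_host, start, end, reason, client_visible=False):
    """Assemble session meta-data for a WHD ticket note (illustrative only)."""
    duration = end - start
    body = "\n".join([
        f"Technician host: {tech_host}",
        f"End user host:   {user_host}",
        f"Session start:   {start.isoformat()}",
        f"Session end:     {end.isoformat()}",
        f"Duration:        {duration}",
        f"Ended because:   {reason}",
    ])
    # Duration doubles as the work time recorded on the same note
    return {"noteText": body,
            "workTimeSeconds": int(duration.total_seconds()),
            "visibleToClient": client_visible}
```

The point of the structure is that one note carries everything: the meta-data in the body, the work time as a number you can report on, and (in the real integration) the chat RTF and screenshots as attachments of that note.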


Connection from existing Ticket

A common scenario is that a user creates a new ticket reporting an issue on their desktop. You open the ticket details, find that information is missing, and decide to connect remotely. You open the Asset Info tab and click the Dameware connection icon.




If you have previously connected to this machine and the credentials are saved in the Saved Host list, you are seamlessly connected to the user's desktop. Again, notice the yellow WHD integration banner.




While you troubleshoot, you take several screenshots, ask the user for additional details over chat, and investigate. When you are finished, you close the session, and since the remote session was created from a WHD ticket, the integration dialog is displayed immediately (in some cases the workflow is different, but we will discuss that in a minute).




In the window title you can see the ticket number. The session meta-data section is displayed, and you can describe what happened during the session in the text field below. This text will be saved into the ticket note, and the duration of the session will be saved as work time in the same note. You can also decide whether to attach the chat history or any screenshots you made, and whether to make this note visible to the client. Then you click "Save".


In the next dialog window you can observe the progress of the communication between Dameware and WHD. If anything goes wrong, you will see it here. If the session data was saved successfully, you will see a confirmation dialog.




The following video demonstrates the workflow:



Incident reported over phone

There are cases when the user calls you directly. In such a case, there is no ticket. If the user is reporting an issue on their laptop, however, you can still connect remotely and troubleshoot the issue.


You can first search for the user's assets and connect directly from the resulting list.




In this case the session is initiated from the asset, not from a ticket. You connect remotely and do whatever you need to do on the remote machine. When you are done, you close the session normally, but this time there is no ticket to update. Dameware will therefore display the list of existing tickets linked to the given asset. Only tickets that are not closed will be displayed, and you can update one of them if the remote session was a follow-up on an existing incident.




If this is a completely new incident, you have the opportunity to create a new ticket. Click "Create new ticket" and a simplified new ticket form is displayed. Only the subject and the request details are available; in the next step you can define more details of the newly created ticket (again, notice the ticket number in the window title).




The rest of the workflow is the same as in the previous case. You can add a note, save any screenshots or chat into the ticket, and the newly created ticket is updated.


Over the Internet session

Sometimes you have mobile users and you need to connect to them over the Internet. The workflow is similar: you click the Dameware connection icon, and Dameware tries to connect. If that doesn't work, you are taken back to the connection dialog, where you can click "Internet connection" and establish an OTI session. Once you connect successfully, the workflow is the same as in the previous cases: you are asked to update a ticket, or you are offered a list of existing tickets and can update one as before.


This integration works with all supported protocols: MRC, RDP, and VNC. The only difference is that with RDP we do not support screenshots or chat, and with VNC only screenshots are supported.



This integration aims to streamline incident resolution and make the workflow smoother. I'm looking forward to hearing from you. If you have any feedback, questions, comments, or ideas, please let us know in the comments.


Also, do not forget to sign up for the Beta here!
