I'm delighted to share more details on what we have been working on recently in Web Help Desk (WHD). Many of you are excited about what's coming, and we are listening on all channels and working hard to incorporate your input in the best possible way. Some of the features include:


These are all great, but I'd like to also show you what we are working on, so let me elaborate on Reporting and UI changes in this post and give you a small teaser.



You can create two types of reports in WHD now – Ticketing and Billing. One of the larger features we are working on is Asset reporting. If you want to see simple statistics of the devices you have deployed in different locations, or if you need to know how many new laptops you have bought since January, you need an asset report.


Let's say your manager asks, "How many new types of devices are in use across the offices since the beginning of the year?"


You can easily answer this in the new Web Help Desk.

  • Open the Reports page in WHD.
  • Click the "New" report button.
  • Select "Asset Report".
  • Define a name and, optionally, a group for your new report.


New Asset Report.png


  • Next, move to the Report Details tab.
  • Choose "Bar chart" to visualize your results as a bar chart.
  • For the Bar Category, choose "Asset Type".
  • For the Bar Stack Category, choose "Location".


Asset Report - Details.png


We are almost there. Now you want only assets purchased since January, so you

  • move to Report Filters,
  • click "New", and
  • choose "Purchase Date".
  • Next, set the purchase date to on or after January 1, 2014.
  • Now click "Save".


Asset Report - Filters - Purchase Date.png


The last filter we need is "by Status", since your manager asked for only devices in use.


  • Click on "New" again in the Report Filters tab.
  • Choose "Asset Status", and check the values which represent devices in use in your environment (most likely these are "Deployed", "Used", etc.).
  • Click "Save".


Asset Report - Filters - Asset Status.png
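The report we just built boils down to a filter plus a group-by. Here is a minimal Python sketch of that logic; the field names and sample records are my own illustration, not WHD's actual data model:

```python
from collections import Counter
from datetime import date

# Hypothetical asset records; field names are illustrative, not WHD's schema.
assets = [
    {"type": "Laptop",  "location": "Austin", "purchased": date(2014, 3, 1),  "status": "Deployed"},
    {"type": "Laptop",  "location": "Brno",   "purchased": date(2014, 2, 10), "status": "Used"},
    {"type": "Printer", "location": "Austin", "purchased": date(2013, 11, 5), "status": "Deployed"},
    {"type": "Desktop", "location": "Brno",   "purchased": date(2014, 5, 20), "status": "Retired"},
]

IN_USE = {"Deployed", "Used"}   # the "Asset Status" filter
SINCE = date(2014, 1, 1)        # the "Purchase Date" filter

# Bar Category = Asset Type, Bar Stack Category = Location.
counts = Counter(
    (a["type"], a["location"])
    for a in assets
    if a["purchased"] >= SINCE and a["status"] in IN_USE
)

print(counts)
```

Each (asset type, location) count corresponds to one stacked segment in the resulting bar chart.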


Now we are ready to run the report, and there it is: the answer for your boss, visualized in a nice bar chart.


Asset Report - Output.png


Under the hood of new Asset reports

Asset reports are not just another report type; there is much more to them. They are built on a new reporting engine that will bring a plethora of improvements. With the new engine we get a more feature-rich, performance-optimized reporting infrastructure, which gives us the long-term flexibility to create new reports more quickly going forward. A more flexible engine lets you render reports into many different formats, and generate them faster with lower memory consumption. Another benefit is easier migration of reports between systems, as well as easier troubleshooting when problems arise, thanks to the textual configuration of reports.

The new engine can also separate report generation from the main WHD application, which makes WHD much more robust: if you create a report with millions of lines by mistake, it will not affect the availability of WHD for your users, even if generating that report takes hours. We are working on Asset reports now with the goal of getting them to you as soon as possible; ultimately, however, we want to bring all this goodness to the existing report areas (Ticketing and Billing) and to new reports such as Purchase Orders or Reservations. There is a lot to look forward to!
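The process isolation described above can be sketched in a few lines. This is just an illustration of the principle, not WHD's implementation: the expensive "report" runs in a completely separate process, so the main application stays responsive no matter how long the job takes.

```python
import subprocess
import sys

# Run the "report job" in a separate OS process. Even if it takes hours,
# the main application's process is unaffected; here we just sum a large
# range as a stand-in for generating millions of report rows.
job = subprocess.run(
    [sys.executable, "-c", "print(sum(range(1_000_000)))"],
    capture_output=True, text=True, timeout=60,
)
total = int(job.stdout)
print(total)
```

A crash or runaway memory use in the worker process cannot take down the parent, which is the robustness benefit the post describes.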


Knowledge Base UI

Another area we are working on is the UI and the WHD front-end. We started with FAQs, and are improving usability, adding a lot of on-the-spot help in the form of tooltips, and making it much easier to work with large numbers of objects. As you grow and have more Clients, Techs, or Offices (read: Locations), we want to make it even easier for you to scale, so we are looking for ways to support your growth. Let me show you some of the improvements we are working on.


Sometimes you might find the Question/Answer column in the FAQ list too narrow.

Old FAQ UI - fixed width of columns.png


We are working on improving the UI so that columns are both easily resizable and movable, making the UI far more flexible.


New FAQ UI - resizable columns.png


If you have a large system with hundreds of Locations or Techs, the new UI can support such deployments and includes elements that make working with large volumes of objects much easier. Instead of checkboxes, you can add objects to a list by browsing or by typing names with auto-complete, for example when defining Locations for an FAQ.


New Locations selection.png


Another improvement is a more detailed help system, with new tooltips and details on individual settings. Our goal is to provide on-the-spot help when and where it's needed.


New FAQ UI - tooltips.png


As part of the various improvements mentioned, you may notice that the UI is slightly different from what you are used to. I'm not talking about new UI elements; some familiar parts are just a bit different. The reason is another big under-the-hood change we are working on.


What's new with the front-end, anyhow?

Another exciting improvement we are working on is adopting a new framework for our front-end. It will bring great new features to WHD, some of which you have seen in the previous sections. There are not only huge performance benefits, but, more importantly, various usability improvements such as auto-complete, the aforementioned support for very large systems (with vast numbers of objects like Locations, Companies, Techs, or Models), better browser compatibility, and easier maintenance. While not all of this is visible right away, it will bring a lot of enhancements to future releases and lays the groundwork for other new features down the road.

We will keep you updated on our progress, so keep your eyes out for the beta signup. If you are interested, simply let me know in the comments or contact me directly!

You can find the installation packages in your customer portal. This service release:


  • Upgrades Orion Platform:
    This version includes the latest Orion Platform features and versions of existing components.
  • Enables Storing of Config Archives on Network Share:
    This version more effectively enables storing your growing archive of device configs on a permissioned network share.
  • Maintains Job Logs:
    The database maintenance job now removes job logs that exceed a specified size and age.
  • Removes Unnecessary Node Information:
    Orion Platform nodes no longer have a Configs subview under Node Details.


Further details and list of fixed issues can be found in the release notes.

Welcome to NPM v11!


It's finally here! Throughout the beta and RC process, there has been a ton of excitement around the new Quality of Experience (QoE) dashboard in NPM 11, and today the wait is over!

For those of you that may have missed it, the QoE dashboard captures packet-level data from sensors deployed in your environment, analyzes it, and aggregates the information in an easy-to-read dashboard.

These statistics make it easy to tell at a glance not only whether you have a user-experience issue, but whether the issue is a problem with the network or the application. In addition to response-time statistics, we capture aggregate volume metrics and can classify application traffic on your network by risk level and relation to typical business functions.


QoE_Dashboard (1).png


Out of the box, NPM 11 can categorize ~1,200 applications. Here we can see a sample of the Citrix applications we support (including the ICA protocol):




Sensor Topology


So how are we able to gather this data? In NPM 11.0, we have the ability to deploy Packet Analysis Sensors (PAS) to Windows-based (64-bit / 2008+) systems. Sensors can be deployed in two configurations: "Server" or "Network."

With a "Server" PAS, the sensor is deployed directly on a server you would like to monitor. So for example, if you wish to monitor your Exchange response time / traffic, you would push the sensor directly to your Exchange server:


SPAS (1).png

For a "Network" PAS, the sensor is deployed to a dedicated Windows-based system with a dedicated interface attached to a tap / SPAN / mirror interface.


Once sensors are deployed, you can choose from our library of ~1,200 application signatures to monitor, or create your own custom HTTP applications.

Deployment Considerations

So now that we understand how we are gathering the data, where should we gather it from? Like many deep questions, it depends.

Are you a server admin looking for aggregate response time for your servers? A network admin looking for client response time broken out by site? Both, you say?

It's important to note that the PAS will report data aggregated at the server level unless the individual client endpoints are also managed nodes in NPM. For example, to see web server statistics from multiple remote sites, the endpoints at those sites would need to be managed nodes in NPM (even as ICMP nodes). From there, we could either deploy a SPAS to those endpoints, or use an NPAS anywhere in the traffic path to break out the individual response times.

If the application server is virtualized, you have even more deployment choices for the NPAS. Do you create a virtualized NPAS and sniff the VDS? Create a promiscuous port group? VMware has some good documentation on the hypervisor side of things: VMware KB: How promiscuous mode works at the virtual switch and portgroup levels  :: How to use Port-Mirroring feature of VDS for monitoring virtual machine traffic?

For more information, please check out our QoE deployment guide.


How are sensors licensed?

Out of the box, a licensed version of NPM comes with 1 NPAS and 10 SPAS licenses. For most small-to-mid-sized deployments, this should be sufficient to monitor critical infrastructure. Beyond the built-in licenses, NPAS are licensed individually and SPAS in packs of 10.

How is NRT / ART calculated?

Short answer: magic. The long answer is a bit more complicated. Check out this video presentation by certified Wireshark® network analyst Jim Baxter: Deep Packet Analysis & Quality of Experience Monitoring - YouTube


How is an application classified?

Short answer again: magic. The long answer: it depends on the application. Out of the box, NPM ships with ~1,200 application signatures that use a variety of techniques beyond the typical port and protocol matching you may find in other solutions. Some examples of the techniques used:

  • Statistical Pattern Matching
  • Conversation Semantics
  • Deep Protocol Dissection
  • Behavior Analysis
  • Future Flow Awareness
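To make the contrast concrete, here is the naive port-and-protocol baseline that the techniques above go beyond. The port-to-application map is invented for illustration and is not NPM's signature database:

```python
# Naive baseline: classify traffic purely by transport protocol and port.
# This mapping is illustrative only.
PORT_MAP = {
    ("tcp", 80): "HTTP",
    ("tcp", 443): "HTTPS",
    ("tcp", 1494): "Citrix ICA",
    ("udp", 53): "DNS",
}

def classify(protocol: str, dst_port: int) -> str:
    """Return the application name for a flow, or 'Unknown' if unmapped."""
    return PORT_MAP.get((protocol, dst_port), "Unknown")

print(classify("tcp", 1494))  # a well-known port classifies cleanly
print(classify("tcp", 8080))  # non-standard ports defeat this approach;
                              # this is where signature-based techniques
                              # (pattern matching, protocol dissection) help
```

Applications tunneled over non-standard or shared ports are exactly the cases where the deeper techniques listed above are needed.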


Can I create my own applications?

In NPM 11, we have the ability to create custom HTTP applications. In the future, we may look at allowing the definition of custom applications via port and protocol.

7-28-2014 5-31-03 PM.png

How much traffic can a sensor handle?

Both NPAS and SPAS are rated for a sustained 1 Gb/s given sufficient resources. (Check out the deployment guide for more detailed information, but a good rule of thumb is 1 CPU core per 100 Mb/s of sustained traffic.)

Without using a specialty hardware appliance, this is a pretty impressive number.
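The rule of thumb above translates into a quick sizing calculation. A small sketch (my own helper, not a SolarWinds tool):

```python
import math

def cores_needed(sustained_mbps: float, mbps_per_core: float = 100.0) -> int:
    """Rule-of-thumb sensor sizing: ~1 CPU core per 100 Mb/s of sustained traffic."""
    return max(1, math.ceil(sustained_mbps / mbps_per_core))

print(cores_needed(250))    # 250 Mb/s sustained -> 3 cores
print(cores_needed(1000))   # the rated 1 Gb/s ceiling -> 10 cores
```

Always cross-check against the deployment guide; actual requirements depend on traffic mix and the number of monitored applications.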

How many applications can I monitor per sensor?

The recommended maximum is 50 applications per sensor. Beyond this, performance results may vary.

How many sensors can an Orion instance handle?

The current maximum number of sensors an instance can handle is 1,000.

Learn more
For customers under active maintenance, NPM 11 may be found in your SolarWinds Customer Portal.

Release notes for NPM 11.0 may be found here: SolarWinds Network Performance Monitor Release Notes

Sign up for SolarWinds lab, where the topic of discussion will be the new QoE functionality: July 30th, 1PM CT - LAB #16: Deep Packet Inspection Comes To NPM

VMAN is now generally available in the Customer Portal for all customers on Active Maintenance. Several important issues were addressed in this Service Release including:


  • Significant security updates to the CentOS version shipped in the virtual appliance and its third-party software
  • Fixed an issue where the Virtualization Manager tabs did not show up in Orion
  • Fixed a discrepancy between the CPU utilization value shown in vCenter and in IVIM
  • Fixed a performance issue in the Orion Web Console
  • Fixed an issue where Virtualization Manager started up slowly after upgrading to version 6.1
  • Improved the auto upgrade procedure
  • Fixed an issue with displaying status tool data
  • Improved the efficiency of midnight hourly roll-up operations
  • Improved the usability of the Synchronization Wizard
  • Fixed an issue where account limitation strings were displayed incorrectly
  • Fixed an issue with charts not displaying all data for the selected time period
  • Fixed an issue with inconsistent polling source settings
  • Fixed an issue with creating List resources
  • Many other bug fixes and stability improvements!



The update to the Orion Integration Module can also be found under the server downloads in your customer portal.

Customer Portal - IVIM.jpg

Integration with SolarWinds SAM or NPM provides visibility into the application stack from VM to virtual storage performance.

  • VMware® & Hyper-V® performance metrics
  • Capacity, utilization & performance of datastores & Hyper-V storage
  • Storage sub-views show a slice of the VM environment (vCenter, data center, cluster, ESX® hosts & VMs)
  • Metrics to monitor VM sprawl & reclaim CPU, memory & storage resources
  • Virtualization Manager reports & dashboards from Orion console


6.1.1 is available now in your Customer Portal.

Release notes may be found here

There's been quite a bit of chatter recently surrounding the hotly anticipated release of Network Performance Monitor v11, featuring the entirely new Quality of Experience (QoE) dashboard. At the center of what makes all of this amazing QoE information possible are Packet Analysis Sensors, which can be deployed either to the servers running the business critical applications themselves, or to a dedicated machine connected to a SPAN port which collects the same information completely out-of-band for multiple servers simultaneously. For all intents and purposes, these Packet Analysis Sensors could be considered specialized agents, solely dedicated to the purpose of collecting packet data from the network. But what if these "agents" could be used to monitor other aspects of the servers they were installed on, or leveraged to address many of the complicating factors and limitations associated with agentless monitoring? These were precisely the kind of questions we asked ourselves as we were developing the Packet Analysis Sensors for NPM.

What are these "complicating factors," you might ask? It depends on your environment's architecture. It's quite possible you have numerous uses for an agent today that you're not even aware of yet. Whether due to network design obstacles or security requirements and concerns, many organizations have had to make compromises regarding what they monitor, how, and to what extent. This has left blind spots on the network, where some servers or applications simply cannot be monitored to the full extent desired, or in some cases not at all. With the upcoming beta of Server & Application Monitor (SAM) 6.2, we take Orion into a brave new world without compromise.



So what exactly are some of the challenges many of us face when attempting to monitor our server infrastructure and the applications that reside upon them? 

You can't get there from here

Agent Initiated Communication.png

This is a colloquialism you might hear when visiting the great state of Maine, but it has been widely adopted by the IT community to refer to situations where there's no route between two network segments. In most cases this is because one or both networks are behind a NAT device such as a firewall, and there's simply no way to reach the private address space behind the NAT without creating port forwards, 1:1 address translations, or a site-to-site VPN between the two networks.


With the new Agent included in SAM 6.2, these problems are a thing of the past. This new agent supports two different modes of operation. In the scenario on the left, the Agent is functioning in "Agent Initiated" mode, which means all communication is initiated from the server where the agent is installed. No direct route from the Orion server or an additional poller to the monitored host is required. No port forwarding needs to be configured at the remote site, nor do you need a pool of publicly routable IP addresses at each remote site for 1:1 address translation to each device you wish to monitor behind the NAT.


With the agent installed on the remote Windows host, you can perform essentially all of the same node and application monitoring that you normally would for agentless hosts within your network, across what would otherwise be disconnected networks.

You want me to open what?

Agent Server Initiated.png

Such is the reaction you're likely to receive when you ask the network/firewall admin which ports you need opened to monitor the servers located in the DMZ. As I discussed in one of my earlier blog posts, "Portocalypse - WMI Demystified", the port range WMI uses for agentless communication can be enormous. While this range can be significantly reduced, doing so requires either manual registry modifications or the creation of a custom group policy, and each server whose WMI port range is modified must be rebooted before the changes take effect. As if that weren't bad enough, WMI won't cross most NAT devices: if your internal network goes through a NAT to reach the DMZ, you're very likely unable to use WMI to monitor any Windows hosts there.


To eliminate these issues, the new agent included in this SAM 6.2 beta can operate in a "Server Initiated" mode. In this mode the agent still uses a single port (TCP 17790), just as in "Agent Initiated" mode. The difference is that in "Server Initiated" mode, TCP port 17790 listens on the host where the agent is installed and the Orion server polls for information, similar to SNMP or RPC, instead of the agent pushing it to the Orion server. Zero ports need to be opened inbound from the DMZ to the internal network, and all communication happens over a single NAT-friendly port.

Peekaboo - I see you!


Whether it's the NSA, those willing to perform corporate espionage, or the black-hat hacker who hangs out at your local Starbucks, it's important to keep prying eyes from peering into your organization's packets. While SNMPv3 has existed for quite a long time, all Windows versions up to and including Windows 2012 R2 still rely on the older and less secure SNMPv2, a protocol that provides no encryption or authentication. Microsoft's WMI protocol addresses the authentication shortcomings of SNMPv2, but encryption is a different matter altogether. While it's possible to force the use of encryption in WMI, this is not the default behavior and is seldom done: it requires modifications to WMI namespaces on each host you wish to manage, and your monitoring solution must also support WMI encryption, something very few solutions on the market do today.

The Agent included in the SAM 6.2 beta has been designed from the ground up with security first and foremost in mind. To that end, the agent uses FIPS-compatible, 2048-bit TLS encryption to ensure all communication between the Agent and the Orion poller is fully encrypted and safe from would-be cybercriminals.
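As an illustration of the kind of TLS channel described, here is how a hardened TLS client context looks with Python's standard ssl module. This is a generic sketch of TLS client configuration, not the agent's actual implementation:

```python
import ssl

# Build a TLS context comparable in spirit to an encrypted agent channel:
# certificate verification on, legacy protocol versions refused.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3 / early TLS

# With verification required, a peer presenting an invalid certificate
# (a would-be man in the middle) causes the handshake to fail.
print(ctx.verify_mode == ssl.CERT_REQUIRED)
```

Wrapping a socket with this context (`ctx.wrap_socket(..., server_hostname=...)`) gives an encrypted, authenticated channel over a single TCP port.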


How slow can you go?


Not all protocols are created equal. WMI and RPC may be right at home on today's gigabit Ethernet networks, but that is because these protocols were designed almost two decades ago as LAN protocols. They were never designed to traverse bandwidth-constrained WAN links, function in high-latency environments, or cross the internet. Attempting to use either of these agentless protocols in those scenarios is very likely to result in frequent polling timeouts. Roughly translated, that means you are completely blind to what's going on.


The Agent in SAM 6.2 eliminates these issues by using the standards-based HTTPS protocol, which is both bandwidth-efficient and latency-friendly. This means the agent could monitor such extreme scenarios as servers running on a cruise ship or an oil platform in the middle of the South Pacific from a datacenter in Illinois, via a satellite internet link, without issue, something that would otherwise be impossible using traditional agentless protocols such as WMI or RPC.


What does this mean for Agentless Monitoring in Orion?


There are still plenty more challenges this new Agent is aimed at addressing that I will cover in a follow-up post. In the meantime, however, you might be wondering what this means for the future of agentless monitoring capabilities that Orion was built upon.


Absolutely nothing! SolarWinds pioneered the industry in agentless monitoring, and remains 100% committed to our "agentless first" approach in everything that we do. SolarWinds will continue to push the boundaries of agentless technologies to the very limit of their capabilities and beyond. We will continue to lead the industry by being at the forefront of new agentless technologies as they emerge, now or at any time in the future.


Agent vs Agentless - The war rages on


The war between agent-based and agentless IT monitoring solutions has gone on for as long as there have been things in the environment that needed monitoring. Agentless solutions have always had the advantage of not requiring additional software to be deployed, managed, and maintained throughout a device's lifecycle. There is typically little concern over resource contention on the monitored host, because an agentless configuration has essentially zero footprint on the machine. As a result, agentless solutions can be deployed and providing value within a couple of hours in most environments. Agent-based solutions, by contrast, typically require rigorous testing and a tedious internal change-approval process before any agent software can be deployed into production. Agent deployment is commonly a manual process that requires running the installation locally on each server before it can be monitored. Then there are the security concerns associated with having any piece of software running on a server that could potentially be exploited by a hacker as a means of entry into the system.


If Agentless is so great why did SolarWinds build an Agent?


If the agent vs agentless war has taught us anything, it is that each approach has its own unique advantages and disadvantages. There is no single method that suits all scenarios best or equally. This is why we fundamentally believe that for full coverage, any monitoring solution you choose must provide excellent agentless monitoring capabilities, as well as provide an optional agent for those scenarios where agentless monitoring simply isn't feasible or prudent.


We here at SolarWinds believe that, given our agentless heritage, we are uniquely qualified to understand and address many of the problems that have plagued agent-based monitoring solutions of the past. It is our intent to make agent-based monitoring as simple and painless as agentless monitoring is today.


Ok, so what exactly does this agent monitor anyway?


The agent included in SAM 6.2 will be capable of monitoring virtually everything you can monitor today on a WMI-managed node in SAM. This includes, but is not limited to, node status (up/down), response time, latency (all with no reliance on ICMP), CPU, Memory, Virtual Memory, Interfaces, Volumes, Hardware Health, Asset Inventory, Hyper-V virtualization, and application monitoring. This very same agent can also be utilized as a Packet Analysis Sensor for deep packet inspection, if so desired and appropriately licensed. The agent is officially supported on the following Windows operating systems.


  • Windows 2008
  • Windows 2008 R2
  • Windows 2012
  • Windows 2012 R2


While the agent should also work on Windows 2003 and 2003 R2 hosts, these operating systems are not officially supported. Non-Windows based operating systems such as Linux/Unix are also not supported by the agent at this time. If you are at all interested in a Linux/Unix agent for SAM that provides monitoring of Linux/Unix systems and applications, you can vote for this idea here.


Sounds good, but how much is this going to cost me?


The agent software is essentially free: you remain bound by the limits of the license you own regardless of how you poll that information, whether via an agent or agentlessly. For example, if I own a SAM AL150 license, I can monitor 150 nodes, volumes, and components, and that remains true whether I'm monitoring those servers with an agent installed or agentlessly.


Sign me up already


There's still plenty more agent stuff to talk about, including additional scenarios where the agent could be used to overcome common obstacles you might encounter with agentless monitoring. In my follow-up post I will discuss some of those, as well as the various agent deployment options and agent management, so stay tuned for more information.


If you're anything like me, you'd much rather try something out yourself than read about it. Fortunately, this new Agent is included as part of the SAM 6.2 beta, which will be available soon. If you currently own Server & Application Monitor and it's under active maintenance, you can sign up here. You will then be notified via email when the SAM 6.2 beta is available for download.



New in NPM 11.0 is the exciting Quality of Experience (QoE) dashboard. The QoE dashboard uses deep packet inspection to let you easily determine whether a performance issue stems from application server slowness or from an issue with the underlying network. The two key metrics we surface in QoE to help determine this are Application Response Time (ART) and Network Response Time (NRT).

NRT (or TCP three-way handshake time) provides a relative metric for the quality of the network connection at a given point in time, while ART (or time to first byte) reflects how long the application server took to respond to the request. Individually the statistics are interesting, but together they allow you to say fairly definitively whether an issue is a network problem, or whether it's time to call the application/systems team.
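Given per-packet timestamps, both metrics reduce to simple subtraction. A minimal sketch with invented timestamps (NPM's real computation aggregates many conversations):

```python
# Timestamps (seconds) for one TCP conversation -- illustrative values only.
t_syn        = 0.000   # client SYN
t_synack     = 0.045   # server SYN-ACK
t_ack        = 0.090   # client ACK completes the three-way handshake
t_request    = 0.091   # client sends the application request
t_first_byte = 0.340   # server's first byte of response data

nrt = t_ack - t_syn             # Network Response Time: handshake duration
art = t_first_byte - t_request  # Application Response Time: time to first byte

print(f"NRT = {nrt * 1000:.0f} ms, ART = {art * 1000:.0f} ms")
```

A high NRT with a normal ART points at the network; a normal NRT with a high ART points at the application server.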

7-15-2014 3-53-55 PM.png
7-16-2014 1-43-18 PM.png


So how do we collect data?


NPM 11 collects packet data via new software-based sensors, the Network Packet Analysis Sensor and Server Packet Analysis Sensor.

Out of the box, NPM comes licensed for (1) Network Packet Analysis Sensor (NPAS) and (10) Server Packet Analysis Sensors (SPAS).




Network Packet Analysis Sensors are installed on a dedicated 64-bit Windows server (physical or VM) with a dedicated interface attached to a SPAN, mirror, tap, etc.

For example:

7-15-2014 4-14-13 PM.png

NPAS can be deployed at any layer of your network to provide visibility into response time across it. Any aggregation point for which you would want to collect data is a good candidate for an NPAS, from a remote access-layer switch to a core datacenter chassis. For example, collecting access-layer data lets you focus on monitoring remote-site client endpoints rather than centralized servers at your core.


To deploy a sensor to a dedicated host, simply go to Settings > QoE Settings > Manage QoE Packet Analysis Sensors.

From there, click Add Packet Analysis Sensor and select the dedicated host to deploy to:

7-15-2014 4-19-58 PM.png


7-15-2014 4-29-24 PM.png

Once the sensor is successfully deployed, you can edit it to choose the assigned interface, or add the managed nodes whose traffic the sensor will see:

7-15-2014 4-37-31 PM.png

Applications to filter for can then be added on a per-node basis via the "pencil" icon:

7-15-2014 4-44-02 PM.png



Server Packet Analysis Sensors (SPAS) function almost identically to NPAS, with the exception that a SPAS is designed to collect information only for the system it is installed on. For example, if you were a server administrator interested in collecting Exchange response time data, you would deploy a SPAS to each of your Exchange servers:


7-15-2014 5-51-28 PM.png

7-15-2014 5-50-34 PM.png

By default, the sensor is limited to a single core and 1 GB of memory. It may be desirable to adjust these settings based on the amount of traffic and the resources available to the system:

7-15-2014 5-57-07 PM.png


7-15-2014 5-54-53 PM.png


If the nodes are assigned to an additional poller in the environment, the sensor data will report back in through the assigned poller.


Remote Site Considerations


Your level of data granularity will vary depending on where the sensors are deployed. For example:


Sensors deployed in a central location - Provides aggregate data for servers in the central site

NPAS deployed to remote sites - Remote client endpoints can be added to the NPAS to provide a per-client view

SPAS deployed to remote endpoints - SPAS can be deployed to some or all remote endpoints to provide comprehensive or sampled endpoint data


The table below should help decide where to deploy:

8-20-2014 6-25-54 PM.png





Whichever deployment method you choose, the sensor installation process should take only a matter of minutes, and the sensor will begin reporting data immediately.

NPAS and SPAS can be mixed in the same environment; for example, monitoring your core datacenter switch with an NPAS while deploying several SPAS to remote-site servers is a valid configuration.

With both a dedicated NPAS and SPAS option, you have the flexibility to quickly and easily gather data in a way that best suits your environment. Licenses are included out-of-the-box with your NPM installation, so give it a try today!


For more information and deployment considerations, please check out the deployment guide.

To receive updates on the NCM roadmap, JOIN thwack and BOOKMARK this page.


After the release of NCM v7.3 and the service releases NCM v7.3.1 and v7.3.2, we have been working on another batch of features. Here they are:


Disclaimer: Comments given in this forum should not be interpreted as a commitment that SolarWinds will deliver any specific feature in any particular time frame. All discussions of future plans or product roadmaps are based on the product team's intentions, but those plans can change at any time.

New Free Tool You Say?


It's our pleasure to announce the availability of SolarWinds' latest free tool: Response Time Viewer for Wireshark®.

Response Time Viewer reads common packet capture file formats and automatically identifies signatures for over 1,200 applications.

With applications identified, we'll then break out a host of metrics:


  • Application Response Time (ART) - Time to first byte
  • Network Response Time (NRT) - Three-way handshake
  • Data Volume (Total and bps)
  • Transaction Volume (Total and Avg per minute)
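The volume metrics above are straightforward arithmetic over the capture window. A small sketch with invented numbers (not output from the tool itself):

```python
# Deriving volume metrics from capture totals -- example figures only.
capture_seconds = 120          # length of the capture window
total_bytes = 45_000_000       # data volume observed for one application
transactions = 600             # request/response pairs observed

bps = total_bytes * 8 / capture_seconds             # data volume in bits/second
tx_per_min = transactions / (capture_seconds / 60)  # average transactions/minute

print(f"{bps:,.0f} bps, {tx_per_min:.0f} transactions/min")
```

ART and NRT, by contrast, come from per-conversation packet timing (handshake duration and time to first byte) rather than simple totals.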


You can then filter by application and export the pre-filtered capture back out to a PCAP for detailed analysis in Wireshark. No more digging around in Wireshark, filtering by port and protocol, trying to find your interesting traffic!

With Response Time Viewer, we make it easy to find the traffic in question, and start to understand whether there may be an issue with the network, or if you may be having a server problem.

Next time you're hunting for that needle in the haystack, give Response Time Viewer a try: SolarWinds Response Time Viewer for Wireshark®

If you're looking for an excuse to check it out, it's also the subject of this month's Thwack mission: thwack Monthly Mission - July 2014


