
Product Blog


In the previous post we saw how to handle the perils of JavaScript, and we also touched on the topic of clean and maintainable transactions. Today we will continue with a few more tips on how to make transactions more maintainable, and we will discuss common issues with playbacks.

 

Add some pauses

There are many variables which influence the performance of monitored web applications. Responses are sometimes quick and your page is displayed instantly, but if IT decides it's a good time to back up the main servers, you might experience delays and variations in response times. There may be many other causes: Internet connection problems, performance issues in your browser or operating system, or simply a web application that is still under development and has not yet been optimized for performance. Java applets are a good example where you might need to watch the load time more carefully. Slow load times can not only cause variations in response times, but can even cause the transaction to fail.

 

Variations in response times and false alerts about failed transactions make it difficult to fine-tune your web application performance monitoring. The thresholds you have defined can be reached, and false alarms triggered, simply because of this variance in the network.

 

If you have these kinds of problems, consider adding a few 'waits' to the transaction. If you see the response time of an action vary between 1 and 4 seconds, simply add a 5-second wait after that step to accommodate the variation. Wait times won't count toward the overall timing of the transaction, but will help absorb the changes in response times. Click the Add Wait Time button (in yellow on the screenshot below) and define how long to wait (in blue).

 

adding_wait.png

 

Alternatively, you can use an image match. An image match will wait up to a defined number of seconds (30 by default) for the image to load. The waiting time is counted into the duration of the transaction; however, if the image match expires before the image is loaded, the whole transaction will fail. Click the Image Match button (in yellow), mark the area to match (in red), and define how long to wait (in blue).

 

image_match.png

 

Playback

 

Browser Versions

Web Performance Monitor uses the Internet Explorer browser to play back your transactions. One of the great features of WPM is the ability to use remote players and thus provide performance data from various locations (offices, branches, customer regions, and so on). However, each remote player might use a different version of Internet Explorer, and this introduces variation into your playbacks. Different versions of Internet Explorer can interpret code differently and display the page with small variations, which may result in different response times or failures during transaction playback.

 

When you are recording a transaction, double-check the version of the browser on the machine where you make the recording and the versions on all your remote players, and make sure they are the same. Even a small difference can make actions like 'Image match' fail.

 

IE8_9_difference.png

 

Optimize the load

A WPM recording is a copy of what a user does in Internet Explorer, and playing it back has certain memory and CPU requirements. That is also the reason why you can't have hundreds of transactions on one player. It would be like having hundreds of users on the same computer (just recall how slow your browser can be when you have too many tabs open). If you assign too many transactions to one player, it might affect response times and performance data and cause false alarms for failed transactions.

 

To help you optimize utilization of the players, we provide a load indicator for each player.

 

load_indicator.png

 

The value of the indicator can range from 0 to several hundred (%). What does this value mean? Here is the simplified formula used to calculate the load:

 

player_load = (number_of_running_playbacks / total_number_of_playback_workers) * 100 + transactions_waiting_for_playback

 

The transactions_waiting_for_playback value is based on the sum of the wait times of transactions queued on the player before they are played back. The longer transactions wait for playback on the player, the higher this value gets.
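To make the formula concrete, here is a small worked example as a Python sketch (the numbers are hypothetical):

------------------------------------------

# Simplified player load, per the formula above.
def player_load(running_playbacks, total_workers, waiting_for_playback):
    return (running_playbacks / total_workers) * 100 + waiting_for_playback

# A player with 7 workers, 6 playbacks currently running, and a modest
# queue of waiting transactions (hypothetical numbers):
print(player_load(6, 7, 20))  # ~105.7 -- slightly over 100%, i.e. fully utilized

------------------------------------------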

 

Basically, the indicator tells you how well you utilize your player for transaction playback. Most of the time you want it to be around 100% (or even slightly above). There will be ups and downs, and you might experience short spikes, so what you need to look at is the long-term trend. If the load is consistently below 100%, there is still capacity to handle more transactions. If the load is consistently above 100%, your player is too busy, and that might have an impact on performance data.

 

player_load_history.png

 

What are the options?

 

  1. Simplify transactions
  2. Reduce the frequency of playbacks
  3. Move transactions to another player
  4. Add resources to the player

 

Simplify the transactions as we described in the previous post. Make sure each transaction is simple and minimalistic, containing only the actions needed to verify application functionality or to collect the performance data you want.

 

Reduce the frequency of playbacks so that it still gives you the information you need, but balances the number of transactions on a given player. Simply edit the transaction in the UI and make it run, for example, once per hour instead of every 5 minutes.

 

transaction_playback_frequency.png

 

Consider moving the transaction to another player and check the impact on the load indicator of the original player. You might want to group similar transactions on one player, or dedicate a player to the transactions which exercise some of the more complex business logic of your application and consume more resources and time.

 

Add resources to the player. More RAM and CPU can help, but not always, because there are also internal limitations of Internet Explorer. WPM therefore limits the number of workers on external players to 7 (workers on the main poller are limited to 2). Generally, horizontal scaling works better than vertical scaling.

 

In this post we have learned how adding waits to a transaction can help absorb variations in response times. We also learned that different versions of browsers interpret the page differently, sometimes causing false alerts, and lastly we discussed how to optimize the load on your players.

 

In the next post we will look at how to use and troubleshoot transactions that monitor desktop applications via Citrix XenApp.

 

Get the most out of your Web Performance Monitor

If you own SolarWinds Engineer's Toolset (ETS) and any SolarWinds product shipped with the Orion platform, you can leverage the functionality of Toolset directly from Orion platform based products. For more on Orion integration with Engineer's Toolset, refer to Craig's blog.

 

You may have ETS installed on a laptop, a helpdesk agent system, or any other system on your network, and happen to use the integration functionality with Orion. If so, you are familiar with the tools you call by right-clicking on a node. The tool runs on the local machine, therefore the traffic is sourced from your local machine and not the server. The diagram below gives a visual representation of a deployment where Engineer's Toolset resides outside of the Orion server. Using the Engineer's Toolset - Orion integration function by right-clicking a node (switch-32-01), the source traffic originates from the system running Engineer's Toolset (10.140.26.198) and the reply is sent to the same system, while the local machine displays the output.

 

ddd.png

Why would I need to source the tool from my Orion server, and how do I achieve it?

 

You may have restrictions within your organization that do not allow your ETS machine to access some parts of your network, and you may want to call simple tools from your Orion server without buying another ETS license. In this case, you need some mechanism to perform this functionality. Let's see how to call the PING tool from an Orion server.

  1. Download and install PsTools from Microsoft on your Orion server; specifically, you will need PsExec, which allows you to run commands on a remote machine.
  2. Create a batch script (*.bat) and save it into the same local folder where PsTools was installed. Batch files are useful for storing a set of DOS commands that are executed by calling the file's name.

Here's a sample batch script.

------------------------------------------

@echo off

cmd /K <path>\psexec.exe \\%1 <remote cmd> %2
------------------------------------------

To break down the batch script parameters:

- @echo off: prevents the commands in the batch file from being displayed as they run,

- /K: tells cmd.exe to carry out the command specified by the string that follows and then remain open (in my case, it runs psexec.exe and keeps the command window open),

- <path>: is the local (Orion server) folder where psexec.exe is installed,

- %1: is replaced by the first command-line argument (the IP address of the Orion server),

- <remote cmd>: is any DOS command to be executed from the source machine/Orion server, and

- %2: is replaced by the second command-line argument (the IP address of the remote device you PING).

Example:

This script executes PING on the Orion server and displays the resulting output locally (in this case, sourced from the Orion server):

-------------------------------------------

@echo off
cmd /K C:\Tools\psexec.exe \\%1 ping %2

-------------------------------------------

and the following does the same for TRACEROUTE:

-------------------------------------------

@echo off
cmd /K C:\Tools\psexec.exe \\%1 tracert %2

--------------------------------------------
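Purely for illustration, here is a rough Python equivalent of the PING batch file above, showing how the two command-line arguments map onto the PsExec call. The path and addresses are hypothetical; adjust them to your environment.

--------------------------------------------

import subprocess
import sys

PSEXEC = r"C:\Tools\psexec.exe"  # local folder where psexec.exe is installed

def remote_ping(orion_ip, target_ip):
    # Equivalent of: cmd /K C:\Tools\psexec.exe \\%1 ping %2
    return subprocess.call([PSEXEC, r"\\" + orion_ip, "ping", target_ip])

if __name__ == "__main__":
    # e.g. python remote_ping.py 10.140.26.198 192.168.1.1
    sys.exit(remote_ping(sys.argv[1], sys.argv[2]))

--------------------------------------------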

  3. Next, on your ETS system, go to the Engineer's Toolset Integration tray icon and update the right-click menu to include your remotely executed command.

a1.png

 

   4. Go to the Menu Items tab and click "Create, delete and edit menu items".

a13.png

   5. Input the Target path and Command-Line Arguments on the Item Details page (as shown below).

a14.png

 

   6. Transfer the new menu items from the Available field box into the Selected Menu field.

a15.png

   7. Open the Orion Web Console and use the Engineer's Toolset Integration menu. The new menu items should appear and can be used to execute the remote command against targeted nodes.

a16.png

If you're an Orion customer and you haven't tried the Engineer's Toolset, you can learn more about it here.

SolarWinds NTA v3.11 is now available for download in your customer portal.


NTA v3.11 brings these notable improvements:

 

  • Support for sampled NetFlow
  • QoS hierarchy and performance updates
  • Support for query of IP ranges/CIDR within Flow Navigator

 

You can view the full set of release notes, including problems fixed, here.


Here is a preview of how nested CBQoS policies are displayed in v3.11:


integration10.PNG

SolarWinds FSM v6.5 is now available for download in your customer portal.


FSM v6.5 brings these notable improvements:

 

  • Juniper SRX support
  • Increased support for managing, tracking, searching and documenting business justification rules in IOS
  • Extended rule/object change analytics, now including IOS
  • Enhanced change modeling capabilities for predicting the impact of rule/object changes on security and traffic flow, now including IOS

 

You can view the full set of release notes, including problems fixed, here.


Note about upgrading to FSM 6.5: The Release Notes referenced above contain important information about upgrades from your current version of FirePAC or FSM to FSM v6.5. We recommend that users spend a few minutes reviewing them, especially if you plan to upgrade from FirePAC deployed in standalone mode to FSM v6.5.

We have recently officially reached the Release Candidate (RC) phase for our next release, Web Help Desk 12.0. RC is the last step before general availability and is a chance for existing customers on active maintenance to get the newest functionality before it is available to everyone else. You can find the RC binaries for download in the SolarWinds customer portal.

 

If you have any questions I encourage you to leverage the WHD RC group on thwack: http://thwack.solarwinds.com/groups/solarwinds-web-help-desk.

 

This release contains the following product improvements and new features:

 

  • Improved LDAP Directory and Active Directory auto-discovery
  • Native Asset Discovery (using WMI)
  • Native integration of the Asset Database with the SolarWinds Network Performance Monitor (NPM), Network Configuration Manager (NCM), and Server & Application Monitor (SAM) asset databases
  • Native integration with Alerts from SolarWinds NPM, NCM and SAM
  • New Getting Started wizard
  • Automatic setup of WHD database in the wizard
  • Support for Windows 2012, Windows 8 (only for evaluation version of WHD)
  • Support for MS SQL 2012, support for embedded PostgreSQL database
  • Migration utility from Frontbase to PostgreSQL

 

Let's look at some of these features in more detail.

 

Native WMI Asset Discovery

WHD is now capable of discovering assets in your network using WMI, so it doesn't have to rely on external asset databases. We gather information like hostname, IP, domain, MAC address, OS, SP level, user profiles, CPU and memory information, installed printers, HD size and model, and installed software and patches. By simply defining an IP range, credentials for WMI, and a schedule for discovery and synchronization, WHD will keep your asset database synced with what's actually on your network. You can find the configuration details on the following screenshot.

 

discovery_conf.png
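To give a feel for the kind of data WMI exposes, here is a minimal sketch of an equivalent inventory pull using the third-party Python "wmi" package. This is not how WHD is implemented; the host name and credentials are hypothetical.

------------------------------------------

import wmi

c = wmi.WMI(computer="APP-SERVER-01",
            user=r"CORP\svc_inventory", password="********")

for cs in c.Win32_ComputerSystem():
    print("Host:", cs.Name, "Domain:", cs.Domain, "RAM:", cs.TotalPhysicalMemory)
for osys in c.Win32_OperatingSystem():
    print("OS:", osys.Caption, "SP:", osys.ServicePackMajorVersion)
for nic in c.Win32_NetworkAdapterConfiguration(IPEnabled=True):
    print("MAC:", nic.MACAddress, "IP:", nic.IPAddress)

------------------------------------------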

Once you have scheduled discovery and it has successfully completed, you can find the discovered assets in the database. Please note the Discovery Connection column, which shows where each asset is coming from.

discovered_asset.png

 

SolarWinds Asset Database Integration

To make it easier to integrate WHD with other SolarWinds products, we introduced native integration for NPM, NCM, and SAM. WHD now natively provides a mapping from each product's database to the WHD asset database, so you only need to provide connection details and credentials. On the screenshot below you can see the configuration options for integration with NPM.

 

npm_asset_import_conf.png

Once the discovery is finished, you can see the imported data in WHD. Please note the Discovery Connection column, which indicates where the nodes are coming from.


discovered_assets.png

 

SolarWinds Alerts Integration

Another great feature we introduced is native support for SolarWinds alerts from NPM, NCM, and SAM. Create a connection and define a few simple rules. Each rule tells WHD when to accept an alert and create a ticket. When an alert is triggered and it matches the filters, a ticket will be created or updated (if it already exists). On the following screenshot you can see the configuration for an NPM connection.

alerts_conf.png

We will define a filter to accept all alerts which have severity other than "Low". This is what it looks like in WHD:

filter_definition.png

If you are familiar with Alert Central, you should be very comfortable here, but filter definitions are intuitive even if you have not tried Alert Central yet. Now let's say you have an alert defined in NPM which is triggered when a node is unmanaged. If we unmanage node NPM_SG9323P038, we will see the following alert:

alert_triggered.png

Immediately we will notice there is a new ticket in WHD.

ticket_created.png

The connection between WHD and NPM is bi-directional. As soon as you add the first note to a ticket, WHD will notify NPM and acknowledge your original alert. Any text you put into the notes in WHD is also added to the alert notes in NPM. On the next two screenshots you can see the same note added in WHD and how it is displayed in NPM.

note_in_WHD.png

The same note is also visible from the NPM web console.

note_in_NPM.png

 

Active Directory and LDAP Directory Auto-Discovery

Active Directory is one of the most common user databases, and we wanted to make it as easy as possible to configure. With the new auto-discovery feature, you only need to define connection details like the hostname and click the Detect Settings button; WHD will not only find out whether there is an Active Directory or generic LDAP directory server, but also pre-configure basic settings like the user DN or search filter. Then you only need to define the connection account details and you can start to use your directory server. On the following screenshot you can see the much simplified configuration form; please also note the new Test Settings button, which will help you make sure the settings are set up correctly.

conf.png

Once you have successfully configured the connection, you can search for users (just don't forget to tick the Search LDAP checkbox):

users_from_ldap_in_search.png

We know that IP address management is more and more important because networks have become much more flexible and dynamic, and people are used to bringing their own devices into corporate networks. If you've seen our What we are working on after 3.1 post, you won't be surprised that the new IPAM 4.0 beta contains support for BIND DNS, active detection of IP address conflicts, and better integration with User Device Tracker.

 

Active IP Address conflict detection

An IP address conflict is a typical admin nightmare. A conflict occurs when two or more devices in your network are configured with the same IP address.

IPConflict1.png

Conflict.png

This is an issue that can arise on any device that connects to a local area network (LAN), across any operating system, wired or wireless.

 

What problems can an IP address conflict cause?

Primarily network connectivity issues. Impacted machines lose internet access or general network connectivity until the conflict is resolved. It can affect a laptop, a VoIP phone, or an application server.

 

What causes IP Address conflicts?

There are three typical scenarios:

 

  • Bring Your Own Device phenomenon

    Typically this happens when you bring your laptop or tablet from home to work and it still has the "home" IP address assigned, which can cause collisions within the corporate network. It is also typical after business trips, when you were assigned a static IP in the hotel and then come back to work. It may also occur in virtualized environments, for example when you spin up a VM clone in the same subnet while the virtual machine has a statically assigned IP.


  • DHCP servers & their configuration

    Two DHCP servers manage the same IP subnet/segment with overlapping IP addresses, and the DHCP servers don't check the network IP status (whether an IP is in use or not). An IP address conflict happens when one machine already has a DHCP address assigned from the first DHCP server and another device is given the same IP address by the secondary DHCP server. This can be a typical problem in "load balancing" DHCP configurations.

 

  • Human mistakes during IP address assignments

    When admins do not use any IP address management tool, it is all too easy to assign an already-used IP address to a new device on the network.

 

How to manually solve an IP address conflict?

First, you need to know who caused the conflict and find the MAC addresses that are in conflict. If you have the option, unplug the device which should have the correct IP address. Then use a third machine within the subnet to PING the IP address in conflict. Use "ping -a x.x.x.x" in order to get two important values: first, the DNS name of the machine causing the conflict; second, the TTL value, which may help you identify the operating system. For example, Windows typically has a TTL of 128, while Linux may have a TTL of 64. You can find the whole list here.

ping1.png

It may happen that no device name is provided, or that the ICMP protocol is blocked by a firewall. In this case, you can use the "arp -a" command and list the MAC address assignment for the IP address:

arpa.png

The MAC address is useful information because it lets you identify the vendor of the device. MAC addresses are unique, and the first three octets of each MAC are reserved for vendor identification. You can find the list of MAC vendor prefixes here.
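If you find yourself repeating these steps, the manual procedure can be scripted. Here is a rough Python sketch of the same ping/arp checks (Windows commands and English-locale output assumed; the IP address is hypothetical):

------------------------------------------

import re
import subprocess

def probe_conflict(ip):
    # "ping -a" resolves the DNS name and reports the TTL
    # (TTL 128 suggests Windows, TTL 64 suggests Linux)
    out = subprocess.run(["ping", "-a", "-n", "1", ip],
                         capture_output=True, text=True).stdout
    ttl = re.search(r"TTL=(\d+)", out)
    if ttl:
        print("TTL:", ttl.group(1))
    # "arp -a" lists the MAC address currently answering for that IP
    out = subprocess.run(["arp", "-a", ip], capture_output=True, text=True).stdout
    mac = re.search(r"([0-9a-f]{2}(?:-[0-9a-f]{2}){5})", out, re.I)
    if mac:
        print("MAC:", mac.group(1))  # first three octets identify the vendor

probe_conflict("192.168.1.42")

------------------------------------------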

 

With the MAC address information, you can go to the switch and block the related port, or block that MAC on your wireless router/AP, and let the original device use its IP address.


How to solve an IP address conflict with IPAM 4.0 Beta?

 

As I stated above, IPAM 4.0 can now actively detect IP address conflicts. We primarily focused on alerting and on information about MAC addresses, which is key information for conflict troubleshooting. IPAM actively scans the network, and if it detects a duplicate static IP assignment or duplicate IP provisioning from a DHCP server, it will trigger an alert with the conflict information:

IPConflict.png

Once you see an IP address in conflict, simply click on the IP or MAC address info in the alert message and it will take you to the IP address detail page, where you can see the MAC address assignment history. Another IPAM 4.0 improvement is better integration with the UDT product, so you can directly see the device and port where the machines are connected.

 

You can also use the IPAM Message Center to get the whole history of IP address conflicts:

messageCenterAlert.png

UDTSubviews.png

As you can see, you no longer need to run multiple CLI commands or use a third machine to ping whatever IP address is in collision. More than that, you can see the connectivity details, including port and user information, on one screen. Now you can, for example, use NPM to remotely shut down the interface and disconnect the device from the network, or simply connect to the switch and block that port via the CLI. Also, because IPAM uses the alerting engine, you should get the IP address information before the impacted person creates an IT ticket (which will take some time while they are disconnected from the network).

 

BIND DNS Monitoring & Management

 

BIND DNS is one of the most widely used DNS solutions. IPAM 4.0 now adds support for monitoring and management of BIND DNS services on Linux, so you can manage your Microsoft DNS and BIND DNS via one web console. IPAM supports all the important DNS tasks, like automatic monitoring and management of DNS zones, two-way synchronization of DNS records, and DNS server availability.

If you want to add your BIND DNS server into IPAM, use the short wizard that will lead you through the process. Once the server is added, IPAM will sync and import the actual BIND DNS configuration, and then you can monitor or manage zones & DNS records:

addBind.png Bind DNS.png

BIND Zones.png

addNewDNS.png

IPAM 4.0 Beta is ready to be tested. All users under active IPAM maintenance can get the beta build and try it for free in a non-production environment. If you would like to try it, simply fill out this IPAM Beta Agreement.

As always, we also have the Thwack IPAM Beta forum, and it would be great to get your feedback there.

 

Thanks!

We have officially reached Release Candidate (RC) status for NTM 1.0.10. RC is the last step before general availability and is a chance for existing customers to get the newest functionality for NTM before it is available to everyone else.

NTM 1.0.10 includes some customer requests as well as several bug fixes. Here is a list of what is included in 1.0.10:

 

  • Fixed: NTM is unresponsive during discovery (cases 225201, 218320)
  • Fixed: NTM returns an internal error when a seed device is entered for scanning (case 215087)
  • SNMP Credentials test now checks for v1 and v2.
  • NTM now exports to Microsoft Visio 2013
  • Displays & searches all IP Addresses of nodes.
  • NTM now displays 10Gb Link Speed connections.
  • Improvements in reporting: the order of tabular report columns is adjustable & enumerated values are now shown in the Switch Port Report.
  • It is now possible to cancel a WMI credentials test.
  • Evaluation Map can be exported to Microsoft Visio with restrictions.

 

RC builds are made available to existing customers prior to the formal release. These are used to get customer feedback in production environments and are fully supported.

Below are some screenshots of the new features of Network Topology Mapper. If you really want to see more, download this latest version from your SolarWinds Customer Portal.

 

Connection Display Options – Showing 1-10 Gbps & >10 Gbps link Speed lines

 

A new 10Gbps connection line is now available on NTM maps.

new1.png

 

Display & search alternative IP addresses of nodes.

 

Nodes with multiple IP addresses can now have those addresses displayed on the map by hovering over the node or right-clicking for node details.

One can also search multiple IP addresses using the Search bar (make sure IP address is ticked under Node Display options).

Capture9999.JPG

Capture00000.JPG

 

Improvements in Reporting.

 

The order of the columns in the reports can now be adjusted as needed. In the STP report, the Spanning Tree State column now shows the enumerated STP state, and two new columns have been added to the report, namely Designated Root & Bridge.

CaptureSTP.JPG

...and avoid mistakes when updating your configurations.


When performing an address change on your servers (e.g. due to reorganization of your data centers, virtualization, consolidation...), you will have to reflect these changes in your firewall configurations. Depending on how large and numerous your firewall configurations are, locating the firewall(s) to change and finding the places in their configuration that need updating can be tedious and error prone.

 

This blog reflects a fairly frequent real-life use case and explains how FSM - Firewall Security Manager - can help you make these changes quickly and safely by double-checking them before they are put into production.

The concrete example will be to detect where to make changes so that the IPs of two servers, 192.168.253.34-35, can be changed to the new addresses 192.168.253.160-161, while keeping the protection of your firewalls in place.

 

Finding the firewalls impacted by the change

The Query/Search Rules function searches the configurations of your firewalls, even within subnet masks. Note that a simple text search in your configs will usually not be effective because a) it would not find the address you are looking for within a subnet definition, and b) searching on the beginning of the address, e.g. 192.168.253, could return hundreds of hits in a real-life config.

FSM Rule Query.PNG

The blue window shows how to define the query.

Note that in order to avoid finding all the rules that have ANY in the searched parameters, check the box at the bottom of the window.

Then select all firewalls in your inventory (left portion of the main window) and run the query.

We recommend saving the query before the run (notice the Saved Queries tab at the bottom left corner).

 

FSM finds the impacted firewalls, and the objects to modify in their configurations

FSM Rule Query finds impacted firewalls and objects.PNG

The results will show up in a new tab on the right and will contain one sub-tab per firewall vendor whose configs contain the searched address. In this example, we will focus on the Cisco Firewalls tab, but in reality you would have to check all vendor tabs.

In this example, the Cisco firewall HV-FW-01 contains a destination network object called Servers_in_Public_DMZ (used in a rule that appears on config line #570) that refers to the two addresses we are trying to change: .34 and .35.

The statement to change is 192.168.253.32 255.255.255.248, which "contains" .34 and .35.

 

Preparing the change by using the Change Modeling session

A change modeling session is a very convenient feature that creates a safe environment (basically a copy of your production configurations) that you can use for all your tests and changes without affecting your production configs.

Select the impacted firewall(s) that you identified with the above steps, click Analyze Change / New Change Modeling Session, and name your session (e.g. Server IP Change session).

FSM Change modeling session.PNG

Notice the new tab on the right (this is your testing environment); it contains a copy of the config file (config.txt), routing tables, and the tools available to test your changes (icons on the right).

 

Performing the changes

Double click on the config.txt to open a text editor that you will use to modify the configuration.

Search (Ctrl+F) for the impacted network object "Servers_in_Public_DMZ" and modify this section:

 

object-group network Servers_in_Public_DMZ

network-object host 192.168.253.15

network-object 192.168.253.32    255.255.255.248

network-object 192.168.253.64    255.255.255.248

 

as follows:

object-group network Servers_in_Public_DMZ

network-object host 192.168.253.15

network-object 192.168.253.32    255.255.255.254

network-object 192.168.253.36    255.255.255.252

network-object 192.168.253.160   255.255.255.254

network-object 192.168.253.64    255.255.255.248


Then save your new config.
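As a sanity check of the subnet arithmetic (removing .34-.35 from the original /29 and adding the new .160-.161 pair), here is a small Python sketch using the standard ipaddress module:

------------------------------------------

import ipaddress

old = ipaddress.ip_network("192.168.253.32/29")     # covers .32 - .39
moving = ipaddress.ip_network("192.168.253.34/31")  # .34 and .35
new_home = ipaddress.summarize_address_range(
    ipaddress.ip_address("192.168.253.160"),
    ipaddress.ip_address("192.168.253.161"))

for net in sorted(old.address_exclude(moving)) + list(new_home):
    print(net.network_address, net.netmask)
# -> 192.168.253.32 255.255.255.254
#    192.168.253.36 255.255.255.252
#    192.168.253.160 255.255.255.254

------------------------------------------

The printed networks match the three new network-object statements above (the untouched 192.168.253.64 255.255.255.248 line stays as it was).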

At this point you have impacted the copy of the config that sits in the Change Modeling Session, but not the main version of the config within FSM.

 

Verifying that the changes are valid

You have several tools available, which you can launch via the icons on the right, to test the impact of the changes you have made.

In essence, all these tools highlight the differences between the main and the copied/modified versions of the configs.

 

We will focus here on the Compare Rule/Object tool.

The result is an Excel report that is generated within the Change modeling session:

FSM Change modeling session details.PNG

Open the report and look at the different tabs:

  • Rule Compare reminds you of the change that has been performed at the rule level (on line 570, an ACL statement is impacted because the destination Servers_in_Public_DMZ has changed)

FSM Change modeling session details - Rulle diff.PNG

  • Network Object Compare highlights the changes that happened to the modified Network Object: Servers_In_Public_DMZ in this example. Red denotes removed lines and green is used for added ones.

FSM Change modeling session details - Object diff.PNG

  • Traffic Flow Compare confirms that you have removed and added the expected traffic, so the traffic to the new server, after the IP address change, remains unchanged

FSM Change modeling session details - Flow diff.PNG

You can find more about this report here.


Putting these changes in production

Now that you are comfortable with these changes, click on the Generate Change Scripts tool on the right side of the Change Modeling Session window.

FSM Change modeling session details - Script.PNG

Here is what the generated script looks like, in this example:

 

!-- Object <Servers_in_Public_DMZ> Change summary

!-- Modified object

!-- Deleted member 192.168.253.32/29

!-- Added member 192.168.253.32/31

!-- Added member 192.168.253.36/30

!-- Added member 192.168.253.160/31

object-group network Servers_in_Public_DMZ

network-object 192.168.253.32 255.255.255.254

network-object 192.168.253.36 255.255.255.252

network-object 192.168.253.160 255.255.255.254

no network-object 192.168.253.32 255.255.255.248


!-- ACL Rule Change summary

!-- Modified rule

!-- Modified destination Servers_in_Public_DMZ

!-- No change command needed as this is an indirect change caused by modifications to object/object groups used in the ACE.

!-- access-list outbound extended permit tcp object-group DM_INLINE_NETWORK_14 object-group Servers_in_Public_DMZ object-group DM_INLINE_TCP_3

 

After you review the proposed changes, you are ready to put them into production by executing these commands in the firewall console.

If the device is managed by NCM (not the case in this example), you can actually execute the script via NCM (right-click menu, Execute Script), so you don't even have to connect to the device console (NCM does it for you).

FSM Change modeling session details - Script exec.PNG

 

You can then use this button to update the configuration into FSM, from whatever source it was taken from - NCM or the device itself - and make sure that your changes were taken into account.

FSM Update configs.PNG

 

I hope that was helpful, and I would really welcome reading about your FSM use cases - they don't have to be this detailed :-).

[thanks to Lan Li, FSM dev, for his contribution]

As anyone who's been regularly following the thwack blogosphere might tell you, SolarWinds Server & Application Monitor (SAM) has an impressive track record of packing a ton of really great features into each and every release; and SAM 6.0 is no exception. In fact, some might argue that we're upping the ante with this release to a whole new level. And to any naysayers amongst you I say, you ain't seen nothing yet.

 

No different than in previous releases, SAM's elite army of robot zombie developers has been diligently and tirelessly working on a lengthy list of new features and improvements for the SAM 6.0 release, some of which are now ready for beta testing. If you would like to participate in the SAM 6.0 beta and begin playing with some of the new features, the process is very simple: sign up here. The only requirement is that you must be an existing Server & Application Monitor product owner under active maintenance.

 

Real-Time Windows Event Log Viewer

 

Regardless of whether you're new to systems management, or you're a war torn veteran with the battle scars to prove it, most likely the first place you turn whenever trouble starts brewing in your Windows environment is the Windows Event Log. Whether it's application exceptions, user lockouts, failed backups, services stopping, or anything else that happens to go bump in the night, your first clue as to what went wrong, and when, can almost always be found in your tried and true Windows Event Log.

 

However, until now, your only recourse for poring through the mountain of events in the Windows Event Log has been to either Remote Desktop (RDP) into your server to open the Windows Event Log Viewer locally, or launch the Windows Event Log Viewer on your workstation and connect to the server's Event Log remotely. Both of these options introduce needless additional steps and waste precious time when you are trying to quickly troubleshoot the root cause of an issue. And neither option allows you to proactively monitor for these kinds of events in the future, should they occur again.

 

In SAM 6.0 we wanted to bring the same power that we provide with the Real-Time Process Explorer and Windows Service Control Manager to the Windows Event Log. To that end, the Real-Time Windows Event Log Viewer was born. Found in the Management resource on the Node Details view, the Real-Time Windows Event Log Viewer allows you to browse through all the events in your Event Log, as well as filter for specific events in a particular log. You can filter based on event severity, such as Error, Warning, Information, etc. You can also filter on Event Source to quickly isolate the specific event(s) you're looking for.

Real-Time Windows Event Log Forwarder.png
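For comparison, this is roughly what the "remote viewing" alternative looks like when scripted by hand with the pywin32 win32evtlog module. This is a sketch only; the server name is hypothetical, and you need appropriate rights on the target machine.

------------------------------------------

import win32evtlog

hand = win32evtlog.OpenEventLog(r"\\APP-SERVER-01", "Application")
flags = (win32evtlog.EVENTLOG_BACKWARDS_READ |
         win32evtlog.EVENTLOG_SEQUENTIAL_READ)

for ev in win32evtlog.ReadEventLog(hand, flags, 0):
    # EventType 1 = Error, 2 = Warning -- the equivalent of a severity filter
    if ev.EventType in (1, 2):
        print(ev.TimeGenerated, ev.SourceName, ev.EventID & 0xFFFF)
win32evtlog.CloseEventLog(hand)

------------------------------------------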

Once you've zeroed in on the event(s) you're interested in, you can click on an event itself to see its full details, including the full message. Want to be alerted if this event occurs again in the future? No problem. Simply click Start Monitoring and you will be walked through a wizard for creating a Windows Event Log Monitor and assigning it to a new or existing Application Template. It's just that easy.

 

Asset Inventory

Ok, I know what you're going to say. Inventory collection and reporting is neither cool nor exciting. I get it. However, regardless of your environment size, keeping an up-to-date inventory is a necessary evil normally required by the accounting department or upper management for asset depreciation tracking, insurance purposes, and/or lifecycle management and renewal. It's normally an arduous annual or quarterly task that involves a tremendous amount of manual labor to compile, and has questionable value beyond satisfying those higher up the ladder.

 

In SAM 6.0 we strived to make inventory data not only valuable to upper management, but also those responsible for managing and maintaining the infrastructure. Expanding beyond the typical fixed asset information required by the business, SAM 6.0 allows you to answer key IT questions that help drive your business decisions. Whether you're constantly running low on memory in your server and need to know if you have any free slots for additional RAM, or need to report on software installed for license compliance or security purposes, SAM has you covered. Has your vendor notified you of a major issue identified in a specific driver or version of firmware? With SAM 6.0 you can easily see which machines are affected.

Inventory List Resource.png

 

You can enable Asset Inventory collection for a node by clicking List Resources in the Management resource of the Node Details view. It will also appear listed when adding the node through the Add Node Wizard. To enable collection, simply check the box next to Asset Inventory as depicted in the image on the left. Click Submit to save your changes, and inventory data for that node will be collected.

Asset Inventory collection can be enabled for both physical and virtual assets alike, and functions independently of Hardware Health monitoring, or any other status monitoring for that matter. That means you don't need to monitor your volumes or interfaces for them to be included as part of an inventory report.

 

Asset Inventory collection can be enabled for any Windows host managed in SAM, including Windows servers and workstations. Stay tuned for more information on inventory collection for Linux/Unix and VMware hosts.


Once Asset Inventory has been enabled for the node, you will find a new Asset Inventory sub-view that appears on the Node Details view for that node. Clicking on the Asset Inventory sub-view tab takes you to a dedicated asset inventory view which includes all relevant inventory data collected for that node.

When you access the Asset Inventory sub-view you may at first be overwhelmed by the sheer volume of information being collected. Fortunately all this information is logically categorized and grouped together into independent resources that can be moved or copied to the Node Details view, or any other node based sub-view if you desire.

 

Anyone whose polling engine capacity is teetering dangerously on the edge might be worried about what kind of load all this newly collected inventory information is going to place on their poller. Fortunately, inventory data doesn't need to be collected with the same degree of regularity as status information. As such, asset inventory collection by default occurs only once a day, but it can be configured to occur weekly, bi-weekly, or even monthly, depending on your needs.

 

Concerned about licensing? Don't be. Asset Inventory Collection is not licensed separately, and does not count against your SAM component monitor licenses. It's included as part of the Node License. 

Graphics and Audio.png

Asset Inventory Subview.png

 

Below are just a few examples of some of the new Asset inventory resources contained in this view. If you'd like to check them out for yourself in your own environment I encourage you to sign-up here to download the SAM 6.0 beta. We'd love your feedback!

New Asset Inventory Resources

 

Memory

Memory Modules.png
Processors
Processors.png
Software Inventory
Software Inventory.png
Drivers
Drivers.png
Logical Volumes
Logical Volumes.png
Network Interfaces
Network interfaces.png
Removable Media
Removeable Media.png

Recently, we've heard from SolarWinds Network Topology Mapper (NTM) users with large networks that performance during the discovery process is a bit sluggish. How can we improve it? To answer that question, one must understand how the NTM discovery engine works and what goes on behind the scenes before the network diagram is displayed.

 

NTM includes a sophisticated multi-level discovery engine, which uses a combination of protocols and goes through 3 distinct phases in order to discover every node on a network and to produce a highly accurate map.

 

1. The Node Detection Phase: Uses a combination of the ICMP, SNMP, and WMI protocols to detect devices.

2. The Polling Phase: Uses a poller to collect connection data via CDP, LLDP, Bridge tables, and STP to detect the role of each node.

3. The Calculation Phase: Uses a complex algorithm to calculate connectivity between nodes.

 

The completion of these 3 phases of the discovery process can be slowed down by a number of factors, primarily these:

 

  • The size of the network being scanned
  • The choice of node detection protocols used for the scan
  • The number of hops specified for the scan
  • Where the scan is being performed from

 

With all of that in mind, we have compiled the following recommendations to help you improve the performance of NTM as it discovers devices on your network.

1. Think small – The tendency of most NTM users is to try discovering their entire network on the first scan. The drawback is that if you have a network with 1200+ nodes, you are in for a long wait, and at the end of it the map might look very cluttered. The smaller the range you define to scan, the faster you'll receive the results and the easier it will be to interpret them. We recommend discovering Class C networks in at least 2 separate scans.

 

2. Credentials priority - The credentials most commonly used should be at the top of the list, be it SNMP, WMI, or VMware. Performing this simple activity will help speed up the discovery. Also, note that the more credentials there are, the longer discovery will take, so don't include any unnecessary credentials.

add this 1 2 AM.png

3. Zero Hop Discovery - As you work your way through the discovery wizard, you will notice that you are prompted to provide the number of hops you wish to use during the scan. We recommend using 0 hops unless you are not finding all of your expected devices. The more hops the scan goes through, the longer it takes, and 0 usually finds them all.

add this 1 AM.png

4. Scanning Location – Scanning with NTM from within the same Management VLAN of your network can improve the speed of the discovery process significantly. This will reduce polling timeouts compared to scanning over VPN, dial-up or routing through another network.

 

5. Evening/Weekend Scanning - If you manage a very large network, or you see your network growing in the coming months, it is best to run the discovery process during evening hours or on weekends and have your map ready for Monday morning. The key to this is making sure you have entered the proper credentials into the discovery wizard. Make sure to test your credentials before beginning the discovery process and leaving the building.

 

And that pretty much sums it up. I hope it makes the discovery & mapping of your network a little bit easier.

We have officially reached Release Candidate (RC) status for User Device Tracker 3.0. RC is the last step before general availability and is a chance for existing customers to get the newest functionality for user device tracking and capacity planning before it is available to everyone else.

 

Here is the content of this RC version:

  • Whitelisting – Set up a white list of devices based on MAC address, IP address, or hostname.
  • Trap notifications - Get connectivity information in "real time"; receive an alert when a device not on whitelist connects to the network.
  • Watch List - Add users to the Watch List.
  • Domain Controller Wizard - Facilitate collection of user login information by configuring appropriate logging level on Windows® servers.
  • Virtual Route and Forwarding (VRF) - Polls devices for VRF data.
  • Alerts - Get an alert when an endpoint port changes.
  • Reports - See a report on Wireless Endpoints.
  • Groups - Add UDT ports to groups.
  • Port Shutdown - Remotely shut down a compromised device port.


More details and screenshots can be found in the UDT 3.0 beta blog post.

 

RC builds are made available to existing customers prior to the formal release. These are used to get customer feedback in production environments and are fully supported. If you have any questions, I encourage you to leverage the UDT RC forum on thwack.

 

You will find the latest version on your customer portal in the Release Candidate section. Please note that you will need to activate this RC build using a temporary RC key that you can also find on your customer portal (Licensing and Maintenance – License Management). This temporary key will be replaced with a regular license key after official release of UDT v3.0.



We have just released UDT v3.0 RC2. You can find it on your customer portal. (Note: The downloaded package may say RC1 but the content is actually RC2.)

Enhancements:

  • A confirmation dialog pops up when a user tries to shut down a port.
  • Only users with node management rights can shut down ports.
  • A few bugs fixed.

Over the coming weeks I will be posting a series of blog posts on common misconceptions, questions, issues, etc. that I have run into over the years that we have been offering the Failover Engine. The most common question I get asked is: "What exactly is the difference between high availability and disaster recovery?"

 

I will provide a more in-depth explanation below, but the best and quickest way to remember this is:

  • High Availability = LAN
  • Disaster Recovery = WAN


Some groundwork before I jump into the more in-depth explanation: the Failover Engine works in an Active-Passive setup, meaning only one server has the SolarWinds services started and running. The Failover Engine is not an Active-Active solution, where both servers would have the SolarWinds services started and running. With that in mind, for this post I will refer to each server as follows:

  • Primary or Active server = SolarWinds services are active and running
  • Secondary or Passive server = SolarWinds services are not running

 

High Availability

As illustrated in the first image below, the High Availability (HA) role is normally deployed in a LAN where communications are configured with the Public IP address shared by both the active/primary and passive/secondary servers. The active/primary server makes the Public IP visible and available to the network, while the passive/secondary server is hidden by a packet filter installed by the Failover Engine, preventing network access, since two machines with the same IP cannot be on the network at the same time.


In the event of a failure on the active/primary server, the packet filter is removed from the passive/secondary server, which then assumes the role of active/primary server, while the packet filter is simultaneously added to the server that was originally active/primary, making it the passive/secondary server. Since both servers share the Public IP address, no DNS update is required.

 

SWBlogLAN.png

Disaster Recovery

When deployed in a Disaster Recovery role, the active/primary server and the passive/secondary server operate over a Wide Area Network (WAN) in different subnets. As a result, the active/primary and passive/secondary servers are configured with different Public IP addresses. In the event of a failover, the Failover Engine automatically updates DNS with the IP address of the passive/secondary server, so end users just continue to access the SolarWinds server with the same DNS name they have always used.

SWBlogWAN-1.png
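The Failover Engine performs this DNS update automatically; purely for illustration, the dynamic DNS (RFC 2136) record swap it relies on looks something like this dnspython sketch (zone name, record, and addresses are all hypothetical):

------------------------------------------

import dns.query
import dns.update

update = dns.update.Update("corp.example.com")
# repoint the SolarWinds server's name at the secondary site's IP
update.replace("orion", 300, "A", "10.20.0.15")
response = dns.query.tcp(update, "10.10.0.2")  # the zone's DNS server
print(response.rcode())  # 0 (NOERROR) on success

------------------------------------------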

Any questions or comments, please ask in the comment section.

SolarWinds recently acquired a set of products that provide a self-hosted solution for securely transferring files both within and outside the corporate firewall. These products provide a secure alternative to cloud-based solutions like Dropbox. "But Dropbox is so convenient and easy to use," you say. Read on.

 

Dropbox has had its fair share of issues over the past couple of years, shining a big, ugly spotlight on security vulnerabilities with respect to sensitive customer data. First of all, the exposure to potential security risks and service disruption from Dropbox is enormous. According to a recent survey of 1300 business users, one in five is using Dropbox to transfer corporate files, effectively circumventing any safeguards their IT departments have put in place with respect to file transfers. In August of last year, usernames and passwords of Dropbox accounts were compromised, resulting in a spamming campaign against a number of Dropbox users. Unfortunately for Dropbox, this wasn't the first time something like this had happened. Another breach occurred in June of 2011 as the result of a breakdown in the service's authentication software, exposing accounts without requiring proper authentication for a period of time. And if the security issues aren't scary enough, the service was completely unavailable for a period of time in January of this year.

 

These breaches raise a fundamental question to be answered when assessing a cloud-based versus a self-hosted solution for securely transferring files: is the cloud secure enough for the needs of my business? The cloud certainly provides a valuable level of convenience and simplicity that's just fine for most individual consumer users, but it's evident that this convenience has a cost in terms of security. Businesses, both large and small, often have stricter security requirements around file transfers, and around the users participating in those transfers, than a cloud-based solution will be able to provide. When it comes to sensitive and confidential files, convenience is nice, but security is a must-have.

 

 

There is a Better Way

 

FTP Voyager is a free FTP client that supports a number of different protocols for secure file transfer. Serv-U MFT Server is a managed file transfer server that provides a secure alternative to cloud-based solutions for transferring files inside and outside the enterprise. Let's take a look at some of the security-based features and protocols that these products provide.

 

In addition to FTP, FTP Voyager supports both the FTPS and SFTP protocols. This includes strong authentication with both X.509 client certificates and public key authentication. FTP Voyager uses cryptography that has been FIPS 140-2 validated by NIST, and Voyager has been granted the Certificate of Networthiness by the US Army.

FTP_Voyager.png

 

Like FTP Voyager, Serv-U MFT Server supports the FTPS and SFTP protocols.  It also supports secure file transfers through a web browser or from a mobile device (iPad, iPhone, Android, Kindle Fire) via HTTPS.  Serv-U MFT Server also provides a number of different user management options, including the ability to authenticate against Active Directory.

Serv-U_Administration.png
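As a quick illustration of the kind of encrypted session such a server accepts, here is a minimal FTPS upload using Python's standard ftplib (host, credentials, and file name are hypothetical):

------------------------------------------

from ftplib import FTP_TLS

ftps = FTP_TLS("files.example.com")
ftps.login("alice", "s3cret")
ftps.prot_p()  # switch the data channel to TLS as well
with open("report.pdf", "rb") as f:
    ftps.storbinary("STOR report.pdf", f)
ftps.quit()

------------------------------------------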

 

Serv-U also provides a number of encryption options for transferring files.  Individual ciphers and MACs can be enabled or disabled based on your specific security requirements.  Serv-U also provides the ability to run in FIPS-140-2 mode.

Serv-U Encryption Settings.png

 

A separate module called the Serv-U Gateway provides reverse proxy capabilities, preventing data from ever being at rest in your DMZ or opening connections directly from the DMZ to your internal network.  Using Serv-U MFT Server in conjunction with the Serv-U Gateway provides an architecture that is PCI DSS 2.0 compliant as well as satisfying other high security requirements.  See reference architecture below for an example.

serv-U+gateway_architecture.png

 

You don't have to be a conspiracy theorist or even a security expert to have legitimate concerns about your data in the cloud.  Sometimes the nature of the data being transferred warrants consideration of a level of security that cloud-based solutions simply can't provide.  While Dropbox has made managed file transfer more accessible, it can introduce unnecessary risks to your organization.  FTP Voyager and Serv-U MFT Server provide secure alternatives to cloud-based solutions, giving you the best of both worlds.  For more information on Serv-U you can check out some of our videos here.  You can also find a number of security-focused knowledge base articles here.

This is the first post in a series on how to get the most out of Web Performance Monitor. WPM is a powerful tool with some very useful features that might not be so obvious.


THE PERILS OF JAVASCRIPT

When you are making a recording in WPM, you may notice that everything was recorded without issue, but when played back the transaction always fails. Why is that? Most often the culprit is a JavaScript action which did not fire when played back. Everything works just fine in your browser, but for some reason the transaction always fails in the WPM player.


Imagine the simple case of a menu dropdown which is effectively a hidden element on a website. There are web pages which load such menus dynamically, so it’s impossible to locate that element on the page before you trigger the appropriate JavaScript handler. Another example is a dynamically generated ID for an HTML element like menu items. It’s always different, making it impossible to reference.


How can we go about recording edge cases like these in Web Performance Monitor? Let’s take a look.


One way to address this problem is to use your keyboard for navigation instead of the mouse. Try using the tab or arrow keys to move around the page and use the space bar to select and de-select checkboxes. This way, instead of WPM referencing static element IDs, you can navigate to the correct element with a series of key strokes.


In the case where you are testing the general availability of the website or application versus specific features or links on a page, consider using alternative navigation paths to reach the page you want. It is often the case that certain pages are navigable from multiple points on the website. Use the simplest way to reach a page or do an action, especially if it’s not necessary to test a navigation option that requires JavaScript.


On the following screenshot you can see examples of three different links to the Solution Finder on www.solarwinds.com. There are two static links directly on the web page (marked in yellow) and one through a dynamic JavaScript menu (marked in blue).


Pasted_Image_28.3.2013_10_55-2.png


Another option is to leverage native keyboard commands supported by some applications. For example, Gmail supports keyboard shortcuts like "c" to compose a new message, "r" to reply to a message, or "e" to archive it. WPM normally records only keyboard actions related to forms and navigation around forms, so you might need to enable XY mode to record all of your keyboard actions and thus these keyboard commands. We will talk more about XY mode later.


Another option which might make your life easier is a neat trick with using “ctrl-shift” during your recording. If you hold these keys during your recording, WPM will record extra steps like mouse-over or mouse-down, helping you to record the display and subsequent click of menu items. WPM doesn’t normally record all possible events on the webpage. This is to prevent unnecessarily long recordings in most cases; however, sometimes you’ll actually want all steps recorded and using this keyboard combination allows for that.


On the screenshot below you can see actions that are recorded using ctrl-shift while randomly wandering around the main Google page with the cursor.


Pasted_Image_28.3.2013_13_32.png


If all else fails, use XY mode. XY mode simulates mouse actions at the operating system level (as mentioned, another characteristic of XY mode is that it also records all keyboard actions). Because of this, there is no feedback to WPM that the mouse action actually worked. Therefore, always combine XY mode with some other validation (textual or image validation) on the page. XY mode will send mouse click actions to the defined absolute coordinates and thus helps you if, for whatever reason, JavaScript actions are not firing. Because XY mode uses absolute coordinates and webpages are constantly changing, elements on the page can often move, so it's safer to avoid XY mode until you've exhausted other options. Your transactions will be more stable and easier to maintain.


KEEP IT CLEAN

WPM records all of the actions you make in the browser, intentional or not. In addition, user navigation on the webpage isn’t mechanically precise. Users can wander around the page with a cursor, or unintentionally point to various objects, all of which can trigger actions that you may not want to be recorded by WPM. All of that noise is later played back every time WPM interacts with the application.


This can of course cause prolonged response times. These types of recordings are also more prone to failures. In addition, webpages change over time, making it harder to maintain your recordings.


On the next screenshot you can see that I only moved the cursor over a menu dropdown on www.solarwinds.com, and it generated several other mouse-over events.


Pasted_Image_28.3.2013_13_35-2.png

 

For these reasons it is best to keep your transactions minimalistic and clean. Use only actions which are necessary to verify that the application is up and that the monitored functionality works. To keep your recording clean, either re-record it a few times until you find an ideal set of user actions, or manually remove and modify any recorded steps that need it.


In the next post we will look at how to master using WPM remote players and share some more tips and tricks on how to create more stable recordings.

NETWORK TOPOLOGY MAPPER INTEGRATION WITH NETWORK ATLAS

 

I have had a number of customer interviews lately, and a question which arises quite often is: I have Orion Network Atlas, why do I need another network mapping application?

To answer this question, we have to understand what use cases these two applications solve.


Orion Network Atlas

Is a powerful tool built for creating custom network maps and diagrams. It is semi-manual network diagramming software which allows users to design and draw physical & logical topology diagrams by manually placing discovered nodes onto the map canvas. Using the Connect Now button, the application automatically draws connections between the nodes.


SolarWinds Network Topology Mapper

Is automated network mapping software, built to create dynamic topology diagrams/maps using its powerful auto-discovery engine. It can automatically discover, map, inventory, and document your specific network environment. Network Topology Mapper is the replacement for the earlier network mapping application, LANSurveyor.

 

With Network Performance Monitor (NPM), one monitors key devices, interfaces, etc., and not every single object in the environment. But maybe you want a complete topology map of your network environment (for example: workstations, printers, hubs, etc.) for documentation purposes, or to know what's on your network and represent those devices on your Orion Web Console without actively monitoring them. With the combination of SolarWinds Network Topology Mapper (NTM) and Orion Network Performance Monitor (NPM), you can now have your cake and eat it too!

 

Network Topology Mapper has the capability of exporting the network diagram it has discovered and rendered to Network Atlas, and vice versa. Once a map is imported, the user has the choice of adding the nodes identified by NTM into the Orion database or simply displaying them. The application provides two ways to export your maps/diagrams into Network Atlas:

  • One-Time Export: You can easily export your discovered map to Network Atlas using the menu flow File – Export – Network Atlas. Choose from:
    • Open directly in Network Atlas – this option triggers the launch of Network Atlas in the background.
    • Export as a Map – choose this option if you want to save the map and bring it up in Network Atlas at a later stage.

        Note: For both of these choices you need to save the map first, and you have the option of securing your map with a password.

map1.JPG

  • Scheduled Export: Users can also schedule the export of NTM maps into Network Atlas, thereby staying ahead of network changes in the environment. In order to enable the scheduling function, set up the scheduling frequency in the Network Discovery Scan Wizard.


Capture5.JPG

Choose how you want to save the discovered results to bring up the map in Network Atlas. The available choices are:

 

  • Automatically merge results with a map – this option will always merge the newly discovered data with the old.
  • Manually select results to merge with a map – users will need to choose the required map(s) to be displayed in Network Atlas from the available completed scans.
  • Save results as a new map – a new map will always be created and saved with a time-stamp.

 

Next, make sure Keep Network Atlas updated with these discovery results is ticked ON. Enter your Orion server log-in details (where your active Network Atlas resides), make sure to test the credentials, and hit the Set button. Then hit the Discover or Update button to complete the wizard.

 

Capture4.JPG

Open or connect to Orion Network Atlas. An automatic map import window opens; choose the desired map file(s) to import from the list. Once the import completes, Network Atlas will pop up a notification message informing you how many objects are new to the Orion database (if any) and asking whether you want to discover them.

NTM56.JPG

Capture22.JPG

If you click Yes, Orion discovery will run. Upon completion, the gathered data will be stored in the Orion database and these newly identified objects will be actively monitored. If you choose No, the nodes will be represented as unknown nodes in the Orion database.


NETWORK TOPOLOGY MAPPER INTEGRATION WITH ENGINEERS TOOLSET & OTHER 3RD PARTY DIAGNOSTICS TOOLS

I have written a blog post on how NTM integrates with Engineer's Toolset; you can find more here. One can also call 3rd-party tools from the application. This integration brings with it the advantage of troubleshooting and diagnosing problems on your network using network diagrams.

 

NETWORK TOPOLOGY MAPPER INTEGRATION WITH MICROSOFT VISIO

With just one click, users can export their currently displayed network diagram to Microsoft Visio. From there, the Visio file can be exported as a publication on the Web.

The following Microsoft Visio versions are supported: 2003, 2007, 2010, and 2013.
