
Product Blog


IPAM and UDT have combined like Voltron to form the new IP Control Bundle, and as an IT pro, you can finally say goodbye to long, painful IP address conflict troubleshooting. Say goodbye to extended downtime of your servers and key network infrastructure, and to end users disconnected because of a duplicate IP address on your network. For new users, IPAM 4.3 and UDT 3.2 are now also available as a bundle of the two products with in-line integration focused on IP address conflict troubleshooting. For our existing customers, both new versions act as a regular product upgrade. There is no difference in functionality between the bundle and standalone versions.

 

What IP address conflicts does it detect?

 

1) Static IP Address assignment on two end-points within the same network

 

A typical situation where this may happen is cloning virtual machines. People simply download or copy a virtual image and run it on the network without knowing whether a static IP address is pre-set.

 

Conflict statiac.png

 

2) Static IP in DHCP environment


There are many situations in which a static IP address may cause a conflict on a dynamic (DHCP-driven) network. It can be somebody's laptop still holding a static IP from a hotel, a cloned virtual machine with a static IP, or an attacker trying to connect to the local network using a static IP address. It can also be a newly connected network printer, which usually ships with a default static IP from the private 192.168.x.x address range.

Static DHCP conflict.png
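If you want to check for this condition yourself before assigning a static address, an ARP probe is the classic technique: ask the broadcast domain who currently holds the IP. Below is a minimal sketch using the third-party scapy library (this illustrates the general idea, not IPAM's implementation); it requires root/administrator privileges, and the address is a placeholder.

    # Minimal ARP-probe sketch (pip install scapy; run with root/admin rights)
    from scapy.all import ARP, Ether, srp

    def current_holders(ip, timeout=2):
        """Broadcast an ARP who-has probe; any answer means the IP is in use."""
        probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip)
        answered, _ = srp(probe, timeout=timeout, verbose=False)
        return [(r.psrc, r.hwsrc) for _, r in answered]

    holders = current_holders("192.168.1.50")   # placeholder address
    if holders:
        print("Address already in use by:", holders)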

3) DHCP scope overlap (split-scope)


DHCP scope overlap occurs much more frequently these days because local networks are usually controlled through DHCP servers, and if the DHCP server goes down, nobody can get a new IP address and connect to the network. For that reason, IT teams build backup or failover solutions consisting of two or more DHCP servers. It's usually a DHCP misconfiguration or a DHCP failure that causes both DHCP servers to hand out the same IP address to two totally different devices.

scope overlap.png
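You can sanity-check a split-scope design offline before it bites you. Here is a minimal, generic sketch (the lease ranges are made up) that flags overlapping ranges using Python's standard ipaddress module:

    import ipaddress

    # Hypothetical lease ranges exported from two DHCP servers
    scope_a = (ipaddress.ip_address("10.0.1.10"),  ipaddress.ip_address("10.0.1.150"))
    scope_b = (ipaddress.ip_address("10.0.1.100"), ipaddress.ip_address("10.0.1.200"))

    def scopes_overlap(a, b):
        # Two ranges overlap if each one starts before the other one ends.
        return a[0] <= b[1] and b[0] <= a[1]

    if scopes_overlap(scope_a, scope_b):
        print("Warning: both servers can lease the same addresses")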


4) IPAM end-point reservation vs. actually connected device


When an IT admin has reserved an IP address for a specific endpoint (MAC address) and a different device is currently connected with that IP, it doesn't necessarily mean there is a real IP address conflict. This may be due to obsolete IPAM information or an upgrade of the originally assigned device. However, it could also be a time bomb waiting for the reserved device to reconnect to the network. In all cases, the IPAM administrator should be notified of such a situation and either update the IPAM reservation or change the IP address of the unexpected device.
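The detection logic amounts to comparing the reservation with what is actually observed on the wire. Here is a toy illustration with made-up data (not IPAM's implementation):

    # Hypothetical data: IPAM reservations vs. what is actually seen on the wire
    reservations = {"10.0.2.25": "00:1B:44:11:3A:B7"}   # IP -> reserved MAC
    observed     = {"10.0.2.25": "00:50:56:8A:12:34"}   # IP -> MAC currently using it

    for ip, reserved_mac in reservations.items():
        current = observed.get(ip)
        if current and current.lower() != reserved_mac.lower():
            print(f"{ip}: reserved for {reserved_mac} but used by {current} - "
                  "update the reservation or re-address the unexpected device")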

 

How can the IP Control Bundle help you solve IP address conflicts?

 

With two clicks you get the most important information about the conflict:

 

1) Who is in the conflict? MAC addresses, vendor, Active Directory information, and the connected port or wireless SSID (UDT functionality)

 

2) History of endpoint connections - who is assigned to the IP address? Shows the recent users of a given IP address so the IPAM admin can understand which device should be disconnected first.

 

3) "How-To" steps helping you do an action

 

Here is an example of a DHCP scope overlap and how IPAM and UDT can help you troubleshoot the problem:

conflict story.png

 

Both IPAM 4.3 and UDT 3.2 are now available on the customer portal and waiting for your download and upgrade. If you don't have one of these products, or if you would like to evaluate the entire IP Control Bundle, you can download it for evaluation.

 

IPAM and UDT also bring other improvements and bug fixes; for more details, see our release notes for IPAM and UDT.

 

I recommend watching the following webcast we made recently, which not only summarizes IP address conflict issues but also compares multiple IP address management solutions and demonstrates the latest IPAM & UDT (IP Control Bundle).

For just over a year now, the SolarWinds customer training program has been offering classes on the Orion Core platform and NPM. During that time, we’ve delivered over 150 classes to nearly 2500 students. And now, it’s time to grow!

 

New courses coming soon:

We have a number of new courses in development, including SAM courses and additional NPM classes on new features like DPI, and we'll be working our way through the rest of the SolarWinds product line.

 

New training delivery options:

In addition to the live classes, we’re also developing recorded sessions. But we’re not just talking about an online video. The recorded classes will include a student guide, lab guide, and even access to a live lab environment for all the features of a live class, but on your schedule.

 

New training products:

We realize one of the biggest challenges to attending training is trying to find the time to squeeze it in. And sometimes you may not need to attend a full course to get just the bits of information you're looking for. With that in mind, we'll begin rolling out training quick hits: shorter training videos focused on a specific topic. We'll be starting with a series of quick-start guide videos for our extended product line, designed to get you started on the right foot.

 

All new student lab:

We’re currently rebuilding our student lab environment to accommodate more users and a faster turnover time between training sessions. The new lab will let us host simultaneous classes for more students, provide the lab environment for video training users, and ensure a faster, easier user experience.

 

2015 is shaping up to be a big year for customer training! We’re excited about the offerings and hope to see you in class soon.


For current course schedules and registration, please visit http://customerportal.solarwinds.com/virtualclassrooms.

It is our extreme pleasure to announce that Database Performance Analyzer (DPA) 9.0 is generally available.

 

Below is a quick summary of the features, but check out these two blog posts for all the details:

 

Top features in this release include:

  • Storage I/O Analysis
    • Determine if your slow queries are related to storage
    • See latency and throughput of specific files and correlate these to SQL statements
    • Clear visibility of storage performance and its effect on SQL response time
  • Resource Metrics Baselines
    • Determine if your resources are behaving abnormally and correlate them to SQL statements
  • SQL Statement Analysis and Advice
    • Get expert advice on specific SQL statements
  • Database File/Drive tab for SQL Server (with continued support for Oracle)
    • Use the File and Drive dimensions to isolate poor performance
  • Alerting Improvements
    • Resource Metric Alerts
    • Alerting Blackouts
  • New Version Support
    • MS SQL Server 2014
    • Oracle 12c (single-tenant)
    • DB2 10.5

 

 

Want more information on DPA and this release?


On November 11th, Microsoft released a total of 16 security updates for November's Patch Tuesday which mitigate potential security threats in Office, Windows, SharePoint, and Internet Explorer. If you are a patch administrator for your company, then it's worth your time to read the Microsoft Article (Microsoft Security Bulletin Summary for November 2014).

 

In theory, there are several issues in the article that could cause concern, but the one that seems to be generating the most buzz is MS14-066: Vulnerability in Schannel Could Allow Remote Code Execution (2992611), additionally reported as CVE-2014-6321. Although there are no known exploits of this vulnerability, it is quite serious and you should take note. Microsoft Secure Channel, or Schannel, is a set of security protocols used to encrypt traffic between endpoints, primarily for Internet communications over HTTPS. This particular patch is applicable to every Windows operating system under active maintenance, ranging from Windows Server 2003 SP2 and Windows Vista SP2 through Windows Server 2012 R2 and Windows 8.1.

 

Although the media is touting both the scope and the number of updates as the craziest thing we've ever seen in patching, this isn't even the largest bundle of patches that Microsoft has released for a single Patch Tuesday. The current record belongs to April 2011, with a total of 29. But fear not, Patch Administrators: although the quantity seems daunting, the process is still the same. This is most definitely not a "sky is falling" moment - we're here to help.

 

One thing that people seem to forget is that patch administration is the same on Patch Tuesday whether there are 2 patches or 100 patches. If you follow the same procedure from start to finish, you can be confident that your environment is up to date and secure. The best practice for any software (not just patches) is to test it on a small segment of your infrastructure before sending it everywhere. Thankfully, you can do this easily with SolarWinds Patch Manager.

 

 

Download the Updates from Microsoft to your WSUS Server

I've found that the easiest way to see these updates is within a Custom Updates View. I have one called "Microsoft Updates - This Week" which is defined as "Updates were released within a specific time period: Last Week" and "Updates source is Microsoft Update." If you need to create one, you can navigate to "Patch Manager\Enterprise\Update Services\WSUS Server\Updates" and then right-click on the Updates node and select "New Update View." Feel free to use this screenshot as a reference.

CustomViewCreation.png

On that view, I tweak a few of the settings so that I can get a direct look at the updates that are concerned with this particular Patch Tuesday. I start by flipping the "Approval Settings" to "All" and the "Status" to "Any" and let the list refresh. Then I group it by the MSRC Number, which I do by dragging the "MSRC Number" header to the gray area just above the headings.

CustomViewFiltered.png

Now I have a list of all the items released by Microsoft within the last week, grouped by MSRC Number. After that, it's as easy as expanding each group, scanning through the list, and seeing whether all the updates applicable to my environment are approved. (You can also flip the Approval filter at the top to "Unapproved", but I like seeing all the information.) It's also good to check the "State" field to make sure that the updates are "Ready for Installation."

 

If you don't see any of this information, it means that the updates haven't yet been synchronized to your WSUS server. Running a manual synchronization with Patch Manager is simple - highlight your WSUS server in the left pane and click "Synchronize Server" in the Action Pane (right side of the screen). Click Finish and it's kicked off.

SynchronizeServer.png

After the synchronization is completed, you can go back and verify that the updates are available using the Update View that we just finished building. Or you can use a report. I've crafted one especially for this month's updates.

 

To use it, download (Software Updates for MS14-NOV) and import it into the Windows Server Update Services folder. This report has a prerequisite that the WSUS Inventory job has completed after the updates have been synchronized to WSUS. This is normally a scheduled job that runs daily, so you can either kick it off yourself, or just wait for tomorrow to run the report.

 

This report tells you the WSUS Servers in the environment, the Security Bulletin updates, the Approval Status, the Product Titles, and the Update Title, filtered to only those updates in the MS14-NOV Bulletin.

SampleReportSoftwareUpdates.PNG

Test Group

You should run updates against a test group whenever possible.  For me, I've got a test group with a few different versions of Windows in it, so I'll use that.

 

Approve the Updates for a Test Group

If you need to approve the updates, just right-click on them and select Approve and then "Approve for Install" in the Approve Updates window and (recommended) scope it only to your testing computers. They may already be updated based on your automatic approval rules. If that's the case and you are trusting, then you are good to go!  If not, send the updates to a test group first.

ApprovedForInstall.PNG

Theoretically, you can stop here and the updates will apply based on your defined policies (either via GPO or Local Policies), but where's the fun in that?

Run a Simulation of the Patch Installation on the Test Group

For any Patch Tuesday, I'm a fan of creating a set of Update Management Rules and saving it as a template. That way I can refer to it for pre-testing, deployment to the test group, and then deployment to the rest of the organization. You can either create your own MS14-NOV template (within the Update Management Wizard) or download and use mine. It's defined to include all Security Bulletins from MS14-064 through MS14-079.

UpdateManagementWizardTemplateRule.png

Now it's time to pre-test these patches. I right-click on my Test Group and then select "Update Management Wizard."

 

Select "Load existing update management rules" and select the MS14-NOV entry from the drop-down. (If you need to build your own, you can select "Create custom dynamic update management rules"). Click Next.

UpdateManagementWizard_1.png

Verify that the Dynamic Rule shows Security Bulletins from MS14-064 through MS14-079 and click Next.

 

You can leave most of the defaults on the Options page, but be sure to check the "Run in planning mode" checkbox in the Advanced Options. Click Finish.

UpdateManagementWizardAdvancedOptions.png

Either change your scope to include a few other computers or add additional computers for the testing and then click Next.

 

Select the schedule (I am a fan of "now") and Export or Email the results as you like and click Finish.

 

Planning mode is an oft-overlooked feature that you should definitely use for large patch deployments.

 

This gives you, in three quick tabs, the overall status summary of the job, the per-patch and per-computer details, and the distribution of the job (if you have multiple Patch Manager servers in your environment).

StatusSummary.png

StatusDetails.PNG

 

Pre-Stage the Patch Files on a Test-Group (optional)

If you have a large or highly distributed environment, you can use the Update Management Wizard to deploy the patches to the endpoint, but hold off on installing them. This can be staged to run over several hours or multiple days. This is as simple as running through the same steps as the previous wizard and then checking a different box in the "Advanced Options." Leave the Planning Mode checkbox unchecked and check the box for "Only download the updates, do not install the updates."

AdvancedOptions_DownloadOnly.PNG

That's it. Just use the rest of the wizard as before and check this one box to pre-stage the updates to your environment.

Install the Patches on your Test Group

Same rules apply here. Just make sure that you leave the Planning Mode and Download Only checkboxes empty.  Yeah - it's really just that simple.

 

Reporting on the Results

To report on the status of these patches within your environment, you can use any number of our pre-built reports or customize your own for your needs. Likewise, you should take advantage of all the Shared Reports in the Content Exchange here on Thwack and download this one, kindly donated by LGarvin and repurposed (very) slightly by yours truly: Report: Computer Update Status for MS14-NOV. This report shows the status of the MS14-NOV patches in your environment.


Summary

Plan & Test, Stage, Install: Three steps, one wizard, and you have successfully deployed those pesky patches to your test environment.  So what's up next?  Moving to production...

Patching Production Endpoints

Ever read the instructions on the back of a bottle of shampoo? They say "Lather, rinse, repeat." This is no different.

The previous process (hopefully) has been run against a group of test computers in your environment. If that goes well, then it's time to schedule the updates for the rest of your environment. Just use the same few steps (Approve, Test in Planning, Deploy, and Install) to deploy the patches all at once, in waves, or staggered based on your needs.

 

Next Steps...

Hopefully, this "day-in-the-life" snapshot for a Microsoft Patch Tuesday has been helpful, but this just scratches the surface of what SolarWinds Patch Manager can do to help you keep your environment running smoothly.  Now that you've got Operating System patching under control, extend your knowledge of Patch Manager by keeping on top of patching Third Party Updates from Adobe, Google, Mozilla, Sun, and more!

 

If you need more help, like everything SolarWinds, start on Thwack.  We've got the best user community bar none.  Just ask a question in the forums and watch the people come out of the woodwork to help.  Start with the Patch Manager Forum and you can go from there.

If you frequent the various product and Geek Speak blogs here on Thwack, you've likely at one time or another stumbled upon some reference to, and possibly sneak-peek glimpses of, what has affectionately been referred to as the "AppStack". The Application Stack, or "AppStack" for short, is a term used to describe all the various moving parts that make up today's complex application delivery infrastructure. This begins at the bottom with the backend storage arrays where data is housed, moves through the various virtualization layers, up to the server that hosts the application, until finally we reach the application itself. The AppStack Environment view shown below accompanies the SAM 6.2, VMAN 6.2, and SRM 6.0 beta releases.

 

SAM Button.pngVMAN Button.pngSRM Button.png

 

The image below isn't the latest incarnation of the Candy Crush sensation, but rather a visual representation of all the various infrastructure components that make up my lab environment. Historically, all of this status goodness was tucked away under each respective category's product tab. Correlation and association of these various objects was a mental exercise based on a working knowledge of the infrastructure's initial deployment, or legacy information passed down from one systems administrator to the next. To make matters worse, today's infrastructure is dynamic and ever changing, so what you thought you knew about the organizational layout of your supporting infrastructure may have changed dozens, or even hundreds, of times since it was initially deployed.

 

AppStack.png

 

The AppStack Environment provides a 10,000-foot overview of your entire environment. At first, this may seem a bit overwhelming, as there's a lot of status information contained within a single view. To that end, our UX research team toiled tirelessly to ensure that the AppStack provides as much useful information as possible, while reducing or completely eliminating any extraneous noise that could distract from identifying problems in the environment and their likely cause. Perhaps most unnerving when first presented with the AppStack Environment view is the lack of object names displayed. Well, fret not, my dear friend, because our UX masterminds thought of everything!

 

In most cases, when trying to get a high-level overview of the health status of the environment, you're trying to get as much information represented in as little real estate as possible. If an object's status is good (green), it's likely not of any concern. For objects that are in distress, however, you may want to glean additional information about what they are before digging deeper. As with most things in Orion, mouse hovers are available extensively throughout the AppStack Environment view to expose information such as the object's name and other important details. However, if you're trying to determine the names of multiple objects in distress, a mouse hover is a fairly inefficient means of determining what those objects represent. To address this need, you will find a "Show Names" link in the top left of the AppStack. Clicking this link exposes the name of any object currently in distress. Object names for items that are up (green) remain hidden from view to reduce visual clutter, so it's easier to focus on the items currently in distress.

 

Show Names.png

 

 

The AppStack Environment functions as a status overview page that is valuable when visually attempting to identify issues occurring within the environment, such as in the case of a NOC view, or perhaps the view you keep on your monitor while you go about your usual daily routine.  Any changes in status are updated in real-time for all objects represented in the AppStack. No browser refresh necessary. What you see in the AppStack Environment view is a real-time representation of the status of your environment.

 

Leveraging application topology information gathered across various products, from Server & Application Monitor (SAM) and Virtualization Manager (VMAN) to the all-new Storage Resource Monitor (SRM), Orion is now capable of automatically compiling relationships between all the various infrastructure components that make up the application stack and displaying them in a meaningful fashion. Object associations are updated and maintained in real time, and automatically, as the environment changes. This is important in today's dynamic environments, where VMs are vMotioned between hosts, and storage is so easily reprovisioned, reallocated, and presented to servers. If someone had to maintain these relationships manually, it would likely be a full-time job that could drive a person insane!

 

Clicking on any object within the AppStack selects that item and displays all relevant relationships the object has to any other object represented within the AppStack Environment. In the case below, I've selected an application that's currently in a "Critical" state. I can see in the selection bar that this is a Microsoft SQL Server; one I've been having a bit of trouble with lately, so I'm not at all surprised to see it in a critical state. From here I can see which server SQL is running on, and the fact that it's obviously virtualized, because I can also see the host, virtual cluster, datacenter, and datastore the VM resides upon. None of those appear to be my issue, however, as their status is green. Moving further down the AppStack, I can see the volumes that the VM is using, the LUN backing the datastore this VM resides upon, and the storage pool itself. So far, so good. Wait a minute, what's that I see? The backend array this VM is running on appears to be having some trouble.

 

AppStack Selected.png

Double-clicking on the Array object, or selecting the array and clicking the object name from within the selection bar (for those of you rocking a tablet or other touch-enabled device), takes me to the Array Details view. There I can see the I/O on this array is exceptionally high. I should probably vMotion a few VMs off the hosts that are utilizing this array, or provision more storage to those hosts from different arrays and balance the VMs across them.

 

Selection Bar.png

 

In this example, the AppStack Environment view was able to cut through all the various information Orion is capable of collecting across multiple products to expose a problem, provide context to that problem, and identify the root cause. This can be done across the entire environment, or custom AppStack Environment views can be created to focus on specific areas of responsibility. Perhaps you're the Exchange administrator, and all you care about is the Exchange environment. No problem! Creating a custom AppStack Environment view is simple. You can filter the AppStack by virtually any property available in Orion, such as Application Name or a Custom Property, by adding filter properties.

 

Narrow your Envionment.png

Save as New Layout.png

Save as new Layout Dialog.png

 

Once you've filtered the AppStack to your liking and saved it as a new layout, you can reference it at any time from the layout selector in the top right of the AppStack Environment view. These custom AppStacks can also be used in rotating NOC views for heads-up displays, or added to any existing summary view.

 

In addition to this new AppStack Environment View, you will also find a contextual AppStack resource on the Details view of every object type that participates within the AppStack. This includes, but is not limited to...

 

  • Node Details (For Servers)
  • Application Details
  • Array Details
  • Group Details
  • Storage Pool Details
  • LUN Details
  • Hyper-V Host Details
  • ESX Host Details
  • Cluster Details
  • Virtual Center Details
  • Datastore Details
  • Volume Details
  • vServer Details
New Layout - Microsoft Exchange.png

 

AppStack MiniStack.png

The AppStack Environment resource provides relevant contextual status information specific to the object details being viewed. The image on the right, taken from the Node Details view of the same SQL server shown in the example above, displays the relationships of all other monitored objects to this node. This context aware application mapping dramatically reduces time to resolution by consolidating all related status information to the object being viewed into a single, easily digestible resource.

 

Relationships aren't limited exclusively to the automatic physical associations understood by Orion however. These relationships can also be extended using manual dependencies to relate logical objects that are associated in other ways, such as SharePoint's dependency on the backend SQL server. This capability further extends the AppStack to represent all objects as they relate to the business service, and significantly aids in root-cause analysis of today's complex distributed application architectures.

 

Earn $100.00 & 2000 Thwack Points!

 

SAM 6.2, VMAN 6.2, and SRM 6.0 beta participants have been given a special opportunity to earn $100.00 and 2000 Thwack points simply for taking part in these betas and providing feedback. To participate, you will need to sign up and install at least two beta products that integrate within the new AppStack Environment view.

 

If you already own Virtualization Manager, you can sign-up here to participate in the Virtualization Manager 6.2 beta, get it installed and integrated with SAM or SRM and you're well on your way to earning $100.00 and 2000 Thwack points. Please note though, that you MUST already own Virtualization Manager and be under active maintenance to participate in the Virtualization Manager 6.2 beta.

 

If you currently own Storage Manager, and are under active maintenance, you can sign-up here to participate in the SRM 6.0 beta. Install SRM alongside SAM or VMAN to earn yourself some quick holiday cash!

 

If you're not yet participating in the Server & Application Monitor 6.2 beta, then what are you waiting for? All SAM product owners under active maintenance are welcome and strongly encouraged to join. Simply sign-up here. If you also happen to own Virtualization Manager or Storage Manager, then now is a great time to try out these awesome combined beta releases, and earn a little cash in the process.

 

Space is limited, so please reply to this post letting us know that you're interested in participating. You'll not only fatten your wallet in the process, but you'll also be helping us to build better products! Here’s how it’s going down.

 

When, what you'll do, and what you receive:

  • When you receive the beta bits for 2 or more products - Have a 15-minute phone call with our UX team to review what to expect and tell us about your job and technical environment. You receive: my eternal gratitude!
  • 2 to 4 days after installing two or more participating beta products - Send us a 1 to 2 minute video telling us about your initial experiences with AppStack. You can make the video with whatever is easiest for you; most people use their phone. You receive: a $50 Amazon gift card.
  • 7 to 10 days after installing two or more participating beta products - Send us another 1 to 2 minute video telling us how you've used AppStack. What are your favorite things? What drives you crazy? You receive: a $50 Amazon gift card.
  • 12 to 14 days after installing 2 or more participating beta products - Spend one hour with us showing us how you use AppStack. We'll meet using GoToMeeting so that you can share your screen and walk us through some typical use cases. You receive: 2,000 thwack points.

The team at SolarWinds has been hard at work producing the next release of Virtualization Manager, with some great new features. It is my pleasure to announce the Virtualization Manager 6.2 Beta, which is packed full of features and continues VMAN's journey of integration with Orion. The Beta is open to VMAN customers currently on active maintenance; to try it out, please fill out this short survey.

 

VMAN Beta button.png

 

 

Beta Requirements

Orion integration with VMAN Beta 1 requires an install of the SAM 6.2 Beta. Check out the three-part blog post (links to the posts can be found here), which details the new features of SAM, including AppStack support for end-to-end visualization of the environment: App > Server/VM/Host > Storage (datastore). As a reminder, beta software is for testing only and should not be used in a production environment.

 

AppStack

Essential to diagnosing an issue (or preventing one) is understanding dependencies and relationships in a virtual environment. Most administrators start troubleshooting an issue armed only with the knowledge that an application is slow or a server process is functioning sub-optimally. AppStack provides an at-a-glance view of environmental relationships by mapping dependencies end to end and providing easily identifiable statuses, which can quickly be used to identify root cause. If a VM is reporting high latency, an administrator can quickly identify the datastore, host, and related VMs to determine where the issue may be. In cases where NPM, SAM, and VMAN are integrated, AppStack can quickly identify whether a SAM threshold alert is attributable to network latency or virtual host contention.

 

AppStack can provide a view of your entire environment with relational dependencies visible end-to-end.  Status from additional integrated SolarWinds products such as SAM, SRM, and WPM are provided and the relationship mapped to the virtual environment.

AppStack.png

 

AppStack is also visible from an individual Virtual Machine's Details View and displays the relational context of that virtual machine only.

Ministack.png


 

Management Actions

A great new feature is the ability to perform virtual management actions without leaving Virtualization Manager. With all your virtualization monitoring information available, you can make a decision to stop, start, or pause a VM from within VMAN. In addition to power management options, a virtual admin can also create and delete snapshots without ever leaving the VMAN console or logging into vCenter. An administrator can now act on an alert they see in the dashboard, such as VMs with Old Snapshots, and then delete those snapshots from within Virtualization Manager. The addition of management actions also allows a virtual administrator to delegate operational tasks to non-virtual admins or teams (such as the helpdesk) without providing access to vCenter.


Mgmt actions 1.JPG


 

Here are the management actions and what you will see:

Power Management Actions

  • Power Off VM
  • Suspend VM
  • Reboot VM

Selecting a power management action will present a pop up confirming your action.

reboot.JPG

Snapshot Management Actions

  • Take Snapshot of VM
  • Delete Snapshots

Creating a snapshot will present a pop-up to confirm the snapshot and create a custom name.

Create Snapshot.JPG

Deleting a snapshot will present a pop up to select a snapshot to delete.

 

Delete snapshot.JPG


Management Action Permissions

To execute management actions, the account that SolarWinds uses to connect to your vCenter (the one specified when configuring the credentials) must be given the necessary vCenter Server permissions. The minimum vCenter Server permissions required are:

 

  • Virtual machine > State > Power Off
  • Virtual machine > State > Power On
  • Virtual machine > State > Reset
  • Virtual machine > State > Suspend
  • Virtual machine > Snapshot management > Create snapshot
  • Virtual Machine > Snapshot management > Remove snapshot


Note: In vCenter Server 5.0, the snapshot privileges are located as follows:


  • Virtual machine > State > Create snapshot
  • Virtual Machine > State > Remove snapshot
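For context, these privileges govern plain vSphere API calls. The sketch below uses the third-party pyvmomi library to show roughly what a power-off and snapshot request looks like at the API level; it is illustrative only (the host, credentials, and VM name are placeholders), not how VMAN itself is implemented.

    # Illustrative pyvmomi sketch (pip install pyvmomi); placeholders throughout
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="svc-monitor", pwd="********", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name via a container view
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "SQL-01")

    # These calls require the Create snapshot / Power Off privileges listed above
    vm.CreateSnapshot_Task(name="pre-change", description="before maintenance",
                           memory=False, quiesce=True)
    vm.PowerOffVM_Task()

    Disconnect(si)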

 

Managing access to the Management Actions within SolarWinds

Once permissions are set on the account used to connect to vCenter, you can delegate access within SolarWinds to execute the management actions. The default permission for management actions is Disallow, which disables the virtual machine power management and snapshot management options. To enable these features for a user or group, select Settings in the upper right-hand corner of Orion. Then select Manage Accounts, found under User Accounts, and select the account or group to edit. Expand Integrated Virtual Infrastructure Monitor Settings and select Allow for the management action you wish to enable.

 

Mgmt Actions Permissions.JPG

Virtual Machine Power Management

     Allow - Enable the options to start, stop, or restart a virtual machine.

     Disallow - Do not enable the options to start, stop, or restart a virtual machine.

 

Snapshot Management

    Allow - Enable the options to take snapshots of a virtual machine, or to delete snapshots.

    Disallow - Do not enable the options to take snapshots of a virtual machine, or to delete snapshots.

 



If an admin attempts to perform a management action without having been granted access within SolarWinds, they will receive an error similar to the one below.

failed snapshot.JPG

 


 

Co-stop % Counter

A new counter has been added to the VM Sprawl dashboard that is useful for detecting when too many vCPUs have been allocated to a VM, resulting in poor performance. Co-Stop (%CSTP, as it is represented in esxtop) identifies the delay incurred by a VM as the result of too many vCPUs deployed in an environment. Any co-stop value above 3 indicates that virtual machines configured with multiple vCPUs are experiencing performance slowdowns due to vCPU scheduling. The expected action is to reduce the number of vCPUs or vMotion the VM to a host with less vCPU contention.

 

Co-Stop.JPG

CoStop - VIM.JPG
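As a trivial illustration of the rule of thumb above, flagging offenders from exported data is a short check (the sample values here are invented):

    # Hypothetical per-VM co-stop readings, in percent
    costop_pct = {"SQL-01": 1.2, "APP-07": 4.8, "WEB-03": 0.4}
    THRESHOLD = 3.0   # the guideline cited above

    for vm, pct in sorted(costop_pct.items(), key=lambda kv: kv[1], reverse=True):
        if pct > THRESHOLD:
            print(f"{vm}: co-stop {pct}% - reduce vCPUs or vMotion to a "
                  "host with less vCPU contention")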

 


Web Based Reports

Web-based reporting, introduced in VMAN 6.1, allowed us to represent all data in the VMAN integration in the web-based report writer, providing an easy way to create powerful, customizable reports in an easy-to-use web interface. We have extended this functionality to include out-of-the-box web-based reports previously found only in the VMAN console. These new web-based reports include:

 

Host reports:
    • Disconnected Hosts
    • High Host CPU Utilization
    • Hosts with Recent Reboots
    • Newly Added Hosts
    • Datastores in Multiple Clusters

VM reports:
    • All VMs
    • Existing VMs with Recent Reboot
    • High VM CPU Utilization - Latest
    • High VM Memory Utilization - Latest
    • Linux VMs
    • Low VM CPU Utilization – Day
    • Low VM Memory Utilization – Day
    • Newly Added VMs
    • Stale VMs – Not Recently Powered On
    • VMs configured for Windows
    • VMs that are not Running
    • VMs Using over 50GB in storage
    • VMs Using Thin Provisioned Virtual Disks
    • VMs With a Bad Tools Status
    • VMs With a Specific Tools Version
    • VMs With Less Than 10% Free Space
    • VMs With Less Than 1GB Free Space
    • VMs With Local Datastore
    • VMs With More Than 20GB Free Space
    • VMs With More Than 2 snapshots
    • VMs With No VMware Tools
    • VMs With old snapshots (older than 30 days)
    • VMs With One Disk Volume
    • VMs With One Virtual CPU
    • VMs With One Virtual Disk
    • VMs With over 2GB in snapshots
    • VMs With RDM Disks
    • VMs With Snapshots
    • Windows VMs
    • High Host Disk Utilization
    • VMs on Full Datastores (Less than 20% Free)
    • VMs Using More Than One Datastore



Web Based Alerts

As discussed in the Virtualization Manager 6.1 blog post, web-based alerting was introduced, which allowed virtual administrators to take advantage of baselines and dynamic thresholds. Originally we included only a small subset of the standard VMAN alerts out of the box, but we have now extended the out-of-the-box alerts to include:

 

Cluster alerts:
    • Cluster low VM capacity
    • Cluster predicted disk depletion
    • Cluster predicted CPU depletion
    • Cluster predicted memory depletion

Host alerts:
    • Host CPU utilization
    • Host Command Aborts
    • Host Bus Reset
    • Host console memory swap
    • Host memory utilization
    • Hosts - No heartbeats
    • Hosts - No BIOS ID
    • Host to Datastore Latency

VM and Datastore/Cluster Shared Volume alerts:
    • Guest storage space utilization
    • Inactive VMs - disk
    • Stale VMs
    • VM - No heartbeat
    • VM Memory Limit Configuration
    • VM disk near full
    • VMs with Bad Tools
    • VMs with Connected Media
    • VMs with Large Snapshots
    • VMs with Old Snapshots
    • VMs With More Allocated Space than Used
    • Disk 100% within the week
    • VM Phantom Snapshot Files
    • Datastore Excessive VM Log Files



 


Appliance Password Improvements

With the ease of setup provided by the virtual appliance, it is easy to overlook changing the appliance admin account (Linux) password or the Virtualization Manager admin account password. In Virtualization Manager 6.2, the initial login will now display a password change reminder with links that make it easy to change both passwords.


Password change.JPG


Linux Password.JPG 


That's it for now. Don't forget to sign up for the Beta and provide your feedback in the Virtualization Manager Beta forum.


VMAN Beta Button 2.png


In my many chats with customers, I’ve found they didn’t know we have tools that allow them to synchronize files across servers automatically – and we give it away for FREE!

Some history here: when we acquired RhinoSoft many moons ago, they sold a crazily popular FTP client called FTP Voyager. As part of the acquisition, we decided to make it free to the world. A hidden gem within this product is its scheduler functionality, a service that allows you to automate file transfer operations on a scheduled basis.


Once you have installed FTP Voyager, you can open the scheduler by right-clicking its tray icon in the task bar and selecting "Start Management Console".

TrayMenu_On.png

 

Once you launch it you get a dialogue with some options to start from.

Main.png

 

Backup & Synchronize are purpose built wizards which ultimately create the same thing as the first option, which is create a task.

Within a task, you can configure many operations and behaviors, for example:

  • Tasks starting additional tasks
  • Email notifications on success, failure & task completion
  • Tray icon balloons for custom alerts
  • Multiple transfer sessions
  • Conditional event handling
  • Optional trace logs (with log aging) for each task
  • Custom icons & colors for each task

Task-Properties.png

The Backup wizard backs up files or folders and allows you to restore data lost to hardware failure, software glitches, or some other cause. Many organizations back up their day-to-day work to ensure minimal downtime in case a disaster, like a corrupted hard drive, occurs.

Backup.png

Synchronization focuses on keeping folders and files from/to your local machine to/from a remote server in sync. For example, web developers may use synchronization to maintain a local copy of an entire website. This allows the web developer to make changes to the website offline, and then upload those changes when they are complete.

Sync.png

I can already read your mind and predict your next question via the power of thwack.

               “This is awesome Brandon, but how do I do this in bulk or push out to many machines?”


Here's the bare minimum you would need for a fully configured FTP Voyager Scheduler with no end-user interaction (a deployment sketch follows the note below).

  1. Deploy FTP Voyager via your favorite app deployment application or leveraging GPO
  2. The registration ID: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerID.txt
  3. Pre-configured Scheduled tasks: C:\ProgramData\RhinoSoft\FTP Voyager\Scheduler.Archive
  4. Pre-configured site profiles for Scheduler: C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyager.Archive
  5. Pre-configured transfer settings for Scheduler:  C:\ProgramData\RhinoSoft\FTP Voyager\FTPVoyagerSettings.Archive

 

This assumes that all machines receiving these files use the same file system structure defined in the tasks, that all required folders for those tasks already exist, etc.

    NOTE: When doing #3 and #4, you must copy them from the same machine, as the items configured in step #3 use values present in #4 to link the two together. If different machines are used to generate the data, the values will not link properly.
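To make the bulk push concrete, here is a minimal sketch that copies the four files from a reference machine to the administrative C$ share of each target. It assumes you run it on Windows from the reference machine (per the note above) with rights to write to the targets; the host names are placeholders.

    import shutil
    from pathlib import Path

    FILES = ["FTPVoyagerID.txt", "Scheduler.Archive",
             "FTPVoyager.Archive", "FTPVoyagerSettings.Archive"]
    SRC = Path(r"C:\ProgramData\RhinoSoft\FTP Voyager")   # the reference machine
    TARGETS = ["WS-001", "WS-002"]                        # placeholder host names

    for host in TARGETS:
        dest = Path(rf"\\{host}\C$\ProgramData\RhinoSoft\FTP Voyager")
        dest.mkdir(parents=True, exist_ok=True)
        for name in FILES:
            shutil.copy2(SRC / name, dest / name)   # copies data and timestamps
            print(f"copied {name} -> {host}")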


That's it, simple as that. Whether you want to deploy this on one of your machines or many in your environment, you can now have your files automatically backed up or synchronized to your Serv-U server for safekeeping.

It's 2014 and simple bandwidth saturation continues to be one of the most common network problems.  Every time we make advances in data transit rates, we seem to find new ways to use more bandwidth.  Or is it the other way around?  QoS helps us use the bandwidth we have more effectively, but adds a layer of complexity.  Another interesting side effect of QoS in many organizations is that, by successfully protecting critical application traffic, QoS allows you to defer bandwidth upgrades.  By the numbers, keeping your circuits saturated absolutely makes business sense.  Who wants to pay for bandwidth you don't use?  The strategy is so effective that saturated circuits are becoming the new normal.

 

When bandwidth saturation is the normal state and QoS is responsible for protecting our critical traffic, how do we monitor and troubleshoot slowness?  We no longer have an available bandwidth buffer that we can monitor as an indication of the health of the circuit and the quality of the service we're providing to all of the transit traffic.  Saturated circuits require a different strategy.

 

1-minute max to tell us about your IT troubleshooting pain points

 

Rummaging Through the Tool Box

 

Let's take a look in our tool box and see what makes sense to use with saturated circuits:

Ping - Ping falls into only one QoS category, and it generally isn't the same category as your important traffic. Accordingly, ping is not a very good indicator of health.

Bandwidth Utilization - High bandwidth utilization no longer means bad service.  In fact, arguably, high bandwidth utilization is exactly what you're trying to achieve.  This is also not a good indicator of health.

NTA - Knowing what different types of traffic make up your total traffic load continues to be important.  Knowing how much traffic is being sent or received within each QoS class and what types of traffic make up the contents of that QoS class are more important than ever.  Additionally, QoS queue monitoring provides a window into when QoS is making the decision to drop your traffic.

VNQM - VNQM uses IP SLA operations that can masquerade as most other types of traffic. You can monitor how VoIP, SQL, HTTP, and other traffic are each uniquely serviced.

QoE - Where VNQM uses synthetic transactions (read: monitoring system generated) to constantly measure the service you're providing to a particular type of traffic, QoE watches your real transit traffic to see what kind of service you're actually providing your users.

 

Some of our older tools no longer have the fidelity and granularity we need to help.  Ping and simple bandwidth utilization are no longer good indicators of health.  We have to rely on more advanced testing tools like VNQM and QoE.  NTA remains our detailed view into what our traffic actually consists of, with a shift of importance to the QoS metrics.
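To see why utilization alone stops being informative, consider how it is derived: two samples of an interface's octet counter, divided by the poll interval and the link speed. A toy calculation (the counter values are invented):

    # Two samples of ifHCInOctets, taken poll_seconds apart (made-up values)
    sample1, sample2 = 10_000_000_000, 13_600_000_000
    poll_seconds = 300
    if_speed_bps = 100_000_000            # a 100 Mbps circuit

    delta_bits = (sample2 - sample1) * 8  # octets -> bits
    utilization = delta_bits / (poll_seconds * if_speed_bps) * 100
    print(f"average utilization: {utilization:.1f}%")   # -> 96.0%
    # On a deliberately saturated circuit this number sits near 100% even when
    # service is healthy, which is why per-class and per-application measures matter.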


It's interesting to me that QoE overlaps in some ways with NTA and in other ways with VNQM. Like NTA, QoE can tell you what traffic is traversing a specific link (using NPAS) or a specific device (SPAS), though NTA is much better at it than QoE. Like VNQM, QoE allows you to continually monitor the quality of service your network is providing, but in a different way: where VNQM sends synthetic traffic and analyzes the results, QoE analyzes real end-user traffic. QoE's strong suit is providing visibility into the service level you're really providing to your end users, and then breaking that down into network service time and application service time.


Now that we've reviewed our tools in this new light, let's look at how our monitoring and troubleshooting processes have changed.

 

Monitoring Saturated Circuits

 

Here's an example of how we traditionally monitored for slowness:

1) Ping across the circuit and watch for high latency.

2) Monitor interfaces and watch for high bandwidth utilization.

3) Setup flow exports so that when an issue occurs, you can understand the traffic.

 

When circuit saturation is expected or desirable, the steps change:

1a) Setup IP SLA operations to test network service at all times and/or

1b) Setup QoE to monitor real user traffic whenever it is occurring.

2) Setup flow exports so that when an issue occurs, you can understand the traffic.

 

One thing that is very important when all your circuits are full is being able to ignore things that don't matter. If BitTorrent is running slow for your end users, do you care? When all of your circuits are running hot, you need to be able to detect when critical applications are impacting your users' ability to do their jobs, and ignore the rest.

 

Troubleshooting Saturated Circuits

 

Here's an example of how we traditionally troubleshot network slowness:

1) Receive high latency alert, high bandwidth utilization alert, or user complaint.

2) Isolate slowness to a specific circuit by finding the high bandwidth interface in the transit path.

3) Analyze flow data to determine what new traffic is causing the congestion.

4) Find some way (nice or not nice) to stop the offending traffic.

 

When circuit saturation is the norm, troubleshooting looks more like this:

1) Receive alert for poor service from VNQM or QOE, or receive a user complaint.

2) Isolate slowness to a specific circuit by finding high bandwidth interfaces in the transit path and comparing which endpoints or IP SLA operations are experiencing the slowness.

3) Determine which QoS class to focus on by mapping the type of traffic affected to a specific QoS class.

4) Determine if the QoS class is being affected by competition from other traffic in the same class or in a higher priority class.

5) Analyze flow data within the offending QoS class to find the traffic causing the congestion.

6) Find some way (nice or not nice) to stop the offending traffic.

 

Your Thoughts?

 

Everyone has different tools in their toolbox and different environments necessitate different approaches.  What tools do you use to monitor and troubleshoot your saturated circuits?  What process works for your environment?

 

1-minute max to tell us about your IT troubleshooting pain points

It is always exciting to present what is new in our products, and this time I have something I'm particularly happy about: Web Help Desk (WHD) 12.2 and Dameware (DW) 11.1 integration. This Beta starts today and it also includes features mentioned in Web Help Desk 12.2.0: Looking behind the curtain. If you are interested in participation, please go ahead and sign up here:

 

button-2.png

 

Note: You have to own at least one of these products to be able to participate in this Beta. Also, this Beta is not suitable for production environments; you need a separate test system.

 

Before we talk about the details, let's quickly introduce both products.

 

Web Help Desk is a powerful and easy-to-use help desk solution. It does not overload you with tons of configuration settings, but it's still enormously flexible, and for many use-cases it works out of the box. You can integrate it with Active Directory, turn Orion alerts into tickets automatically, use it for Asset Management, or use the powerful Knowledge Base to divert new service requests by proactively displaying possible service fulfillment articles.

 

Dameware on the other hand is an efficient remote support and administration tool, with support for multiple platforms (native MRC, RDP or VNC protocols) and the ability to connect to remote machines on LAN as well as over the Internet.

 

Help desk and remote support tools are natural complements and part of every technician's workflow. You have to track incidents and service requests, and in many cases you need to connect remotely to fix an issue or to collect more information as part of the investigation. It is a no-brainer that this workflow should be smooth, without unnecessary steps or clumsy jumping between systems. Therefore we are working hard to integrate WHD and DW to create such workflows, and I want to share some initial results with you.

 

Seamless Start of a Remote Session

It was possible to integrate WHD and DW even before, by using the "Other Tool Links" feature of WHD (read more in the Dameware / Web Help Desk Integration article). This integration helps you start remote sessions directly from WHD and requires little configuration. However, it also requires the installation of a third-party tool, CustomURL, on every technician's computer. To avoid this hassle, in this Beta version Dameware can register itself as a protocol handler for custom Dameware links. This works for all major browsers (specifically IE, Chrome, Firefox, Safari, and Opera).

 

The links have the format of a normal URL, with the scheme part of the URL changed to "dwrcc". The browser recognizes such a URL and passes it to Dameware. Here is an example of the URL:

 

dwrcc://10.140.47.98?assetID=14&whdEndPoint=https://10.140.2.79/helpdesk/WebObjects/Helpdesk.woa/ra

 

Dameware takes this URL, extracts information such as the IP address (10.140.47.98 in this example) or the host name if used, the WHD URL (https://10.140.2.79/helpdesk/WebObjects/Helpdesk.woa/ra in the example above), and so on, and tries to locate the remote machine in the Saved Host list. If the IP address or the host name is found, Dameware will use the saved credentials and other information, such as the protocol, to start a remote session. If no information is present in the Saved Host list, you will see a connection dialog where you can enter the credentials. Yes, they will be saved for later re-use. :) In the following screenshot you can see that Dameware runs in Web Help Desk integration mode, indicated by the yellow bar at the top of the window.

 

Yellow_Banner_annotated.png
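For the curious, extracting those pieces from the link is ordinary URL parsing. A minimal sketch with Python's standard library (not Dameware's actual implementation):

    from urllib.parse import urlsplit, parse_qs

    link = ("dwrcc://10.140.47.98?assetID=14&whdEndPoint="
            "https://10.140.2.79/helpdesk/WebObjects/Helpdesk.woa/ra")

    parts  = urlsplit(link)
    params = parse_qs(parts.query)

    host     = parts.hostname              # "10.140.47.98" - machine to reach
    asset_id = params["assetID"][0]        # "14"
    whd_url  = params["whdEndPoint"][0]    # WHD endpoint for writing session data
    print(host, asset_id, whd_url)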

 

The MRC application will exit after the session ends, which differs from the normal behavior when you start a session manually.

 

We also added a new configuration option to WHD to display Dameware links. You can enable displaying links by selecting the check box under Setup -> Assets -> Options -> DameWare Integration Links Enabled (this is actually everything needed to configure the integration):

 

WHD_DW_Configuration_annotated.png

 

Notice that there is also a possibility to define the Request Type. This is related to the second part of integration which is available in this Beta: the ability to save session data to WHD.

 

Saving Session Data into Tickets

When the remote session ends, you have the opportunity to save some session data to WHD. Specifically this means session meta-data, chat history and any screenshot made from Dameware. Meta-data which will be saved includes the following:

 

  • Technician's host name or IP address
  • End user's host name or IP address
  • Session start time
  • Session end time
  • Duration
  • Session termination reason (e.g. "The session was closed by technician")

 

Apart from meta-data, Dameware will also pass the chat history to WHD in the form of an RTF document. This document includes all formatting and is easy to transfer for later use. The RTF document will be attached to a newly created ticket note, which will also include the aforementioned meta-data. Any screenshot you make will also be saved to WHD as an attachment. (Just be aware of the size limit of WHD attachments and make sure it is sufficient for screenshots.) DW uses the WHD API to pass the information and inform you about the progress. If anything goes wrong (for example, an attachment is too big), it will be displayed in a dialog, and you will have the opportunity to save the session data locally, so no information is lost. All data is saved into a Note of the given Ticket. Any attachment will be an attachment of the Note, rather than a Ticket attachment, to make it easier to identify attachments related to a given session. This information not only provides customer evidence for remote sessions, but also saves important information for troubleshooting during an investigation and makes it possible to track work time and report on it.
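Under the hood this is a REST call against the WHD endpoint carried in the dwrcc link. The sketch below is only a hedged illustration of the shape of such a call; the resource and field names are hypothetical, so consult the WHD API documentation for the real ones.

    import requests

    whd = "https://10.140.2.79/helpdesk/WebObjects/Helpdesk.woa/ra"  # from the link
    note = {                                   # hypothetical field names
        "ticketId": 4321,
        "noteText": "Remote session 14:02-14:31; closed by technician.",
        "isHidden": False,                     # visible to the client
    }
    resp = requests.post(f"{whd}/TicketNotes", json=note,
                         params={"apiKey": "<technician-api-key>"},
                         verify=False)         # lab only; validate certs in production
    resp.raise_for_status()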

 

Connection from existing Ticket

The common scenario is that a user creates a new ticket reporting an issue on their desktop. You open the ticket details and find information missing, so you decide to connect remotely. You open the Asset Info tab and click the Dameware connection icon.

 

DW_link_annotated.png

 

If you have previously connected to this machine and the credentials are saved in the Save Host list, you are seamlessly connected to the desktop of the user. Again, notice the yellow WHD integration banner.

 

Yellow Banner.png

 

While you troubleshoot, you take several screenshots, ask the user for additional details over chat, and investigate. When you are finished, you close the session, and since the remote session was created from a WHD ticket, the integration dialog is displayed immediately (in some cases the workflow is different, but we will discuss that in a minute).

 

Ticket update dialog - CHAT.png

 

In the window title you can see the ticket number. The session meta-data section is displayed and you can describe what happened during the session in the text field below. This text will be saved into the ticket note. The duration of the session will be saved as work time in the same note. You can also decide to attach the chat history or any screenshot you made. Finally you can also decide whether to make this note visible to the client or not. Then you click on "Save".

 

In the next dialog window you can observe the progress of communication between Dameware and WHD. If anything goes wrong, you can see it here. If the session data was saved successfully, you will see a confirmation dialog.

 

Upload progress.png

 

The following video demonstrates the workflow.

 

 

Incident reported over phone

There are cases when the user will call you directly. In such a case, there is no ticket. If the user is reporting an issue on their laptop, however, you can still connect remotely and troubleshoot the issue.

 

You can first search for the user's assets and connect directly from the resulting list.

 

Connection_from_Asset_annotated.png

 

In this case the session is initiated from the asset, not from the ticket. You are remotely connected and do whatever you need to do on the remote machine. When you are done, you close the session normally, but this time there is no ticket to be updated. Dameware will therefore display the list of existing tickets linked to the given asset. Only tickets which are not closed will be displayed, and you can update the ticket if the remote session was a follow-up on an existing incident.

 

Existing tickets list.png

 

If this is a completely new incident, you have the opportunity to create a new ticket. Click "Create new ticket" and the simplified new ticket form is displayed. Only the subject and the request details are available, and in the next step you can define more details of the newly created ticket (again, notice the ticket number in the window title).

 

New ticket.png

 

The rest of the workflow is the same as in the previous case. You can add a note, save any screenshots or chat into the ticket, and the newly created ticket is updated.

 

Over the Internet session

Sometimes you have mobile users and you need to connect to them over the internet. The workflow is similar: you click the Dameware connection icon, and Dameware tries to connect. If that doesn't work, you are returned to the connection dialog, where you can click "Internet connection" and establish an OTI session. Once you connect successfully, the workflow is the same as in the previous cases. You are asked to update a ticket, or you are offered a list of existing tickets and can update a ticket as before.

 

This integration works with all supported protocols: MRC, RDP and VNC. The only difference is that with RDP we do not support screenshots and chat, and with VNC only screenshots are supported.

 

Conclusion

This integration aims to streamline incident resolution and make the workflow smoother. I'm looking forward to hearing from you: if you have any feedback, questions, comments, or ideas, please let us know in the comments.

 

Also, don't forget to sign up for the beta here!

One evening this week, I was reading the latest in tech news on Engadget and Re/code about yet another organization whose network and data had been compromised. With businesses like Target, Home Depot, and even JP Morgan Chase falling victim to Advanced Persistent Threats, I wondered what controls, processes, and procedures these organizations had in place to monitor suspicious activity and the sharing and storing of sensitive files. Add concerns with compliance requirements like those mandated by PCI and HIPAA, and you end up with a severe migraine.

 

There are logs everywhere, holding tons of data, and there are solutions in the SIEM space that analyze all of these logs from a security perspective, but this approach is typically reactive in nature. Organizations need proactive protection of data while it resides on corporate networks: they need encryption of data at rest.

 

The reality is, you need protection both in transit and at rest. Serv-U MFT Server protects data while it is in transit using SSL and SSH. Serv-U Gateway, the reverse proxy add-on that prevents the storage of data in the DMZ, further reduces risk. However, data-at-rest encryption is another important part of the picture, protecting data while it resides on network storage or on a server.
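To make the at-rest half of the picture concrete, here is a minimal sketch of application-level encryption at rest in Python, using the third-party cryptography package. Real deployments hinge on key management, which this toy example sidesteps entirely.

```python
from cryptography.fernet import Fernet

# Minimal illustration of data-at-rest encryption at the application layer.
# Requires: pip install cryptography. In practice the key must be stored
# and protected separately from the data; here it only lives in memory.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"quarterly-payroll.csv contents..."
ciphertext = f.encrypt(plaintext)      # this is what would land on disk

# Only a holder of the key can recover the original bytes.
assert f.decrypt(ciphertext) == plaintext
```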

 

Image 1.png

Serv-U.png

There are several options available to customers who are seeking to provide this additional layer of security on their network. Typically, encrypted file systems are the optimal choice as they are usually easiest to deploy. Depending on the platform you want to secure, there are a couple different options.

 

Image 3.png

You can leverage EFS (Encrypting File System), a feature already built into many Windows versions, including the newest releases of Windows and Windows Server. Windows also includes another file encryption feature called BitLocker, but don't confuse it with Cryptolocker. You can read more about BitLocker vs. EFS here.

EFS.jpg
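If you want to script EFS rather than drive it through Explorer, Windows exposes it through the built-in cipher.exe utility. A small sketch follows; the folder path is just an example.

```python
import subprocess

# Encrypt a folder (and files later added to it) with EFS using the
# built-in Windows cipher.exe utility. The path is an example.
folder = r"C:\SensitiveData"

# /e encrypts; /d would decrypt.
subprocess.run(["cipher", "/e", folder], check=True)

# Verify: running cipher with no flags reports status; entries flagged
# "E" are encrypted, "U" are not.
subprocess.run(["cipher", folder], check=True)
```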

If you are looking for non-Windows options, or even Windows options not created by Microsoft, historically many folks used an open-source program called TrueCrypt, but active development recently ended. You can still use the product, but know that any new issues will not be fixed. With that said, the code base has been forked and is in the process of being turned into a free product called CipherShed, which will work on Windows, Mac OS, and GNU/Linux.

 

If none of the above fit the bill for you, here are some other options to look at and consider.

 

Combining Serv-U with one of the options listed above helps ensure your data is protected both in transit and at rest.

Hello fellow network diagram geeks!  I'm excited to announce that NTM v2.2 beta1 is now available.  It has been quite some time since we've had a beta for NTM, but we're making some big changes and we need your input.

 

Let's take a look at what's new in this beta.

button.png


Modular Scanning

 

This is the big one!  We've fundamentally changed how scanning works in NTM.

 

What is Modular Scanning?

Since NTM's inception, one scan has resulted in one map.  To make multiple maps, you perform multiple scans.  Each map may contain any or all nodes found during the scan.  If you want to map a different part of your network, start over with a scan.  If you want to create a different view of the same network (L2 vs L3 diagrams anyone?), start over with a new scan.  Map getting too big and unwieldy?  Start over with a smaller scan.  This worked, but we think there's a better way.

 

In beta1, we introduce the concept of topology databases (shout out to you OSPF engineers out there!).  Now, when you perform a network scan, what you discover with NTM is saved in what we call a topology database.  You then build maps based upon the data in the topology database.  One map, 20 maps; however many maps you want.  All maps are stored in the topology database they're built from.  A single file on your hard drive contains a topology database and all maps built upon it.
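Conceptually, the relationship looks something like the toy sketch below. This is purely an illustration of the one-database-to-many-maps model, not NTM's actual file format or internals.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    ip: str

@dataclass
class Map:
    """A view over the shared topology database: a node subset plus layout."""
    name: str
    nodes: list = field(default_factory=list)       # references, not copies
    positions: dict = field(default_factory=dict)   # node name -> (x, y)

@dataclass
class TopologyDatabase:
    """One scan populates this; any number of maps are built from it."""
    nodes: list = field(default_factory=list)
    maps: list = field(default_factory=list)

db = TopologyDatabase(nodes=[Node("core-sw1", "10.0.0.1"),
                             Node("edge-rtr1", "10.0.1.1")])
l2_map = Map("L2 view", nodes=db.nodes)        # all discovered nodes
l3_map = Map("L3 view", nodes=[db.nodes[1]])   # routers only
db.maps += [l2_map, l3_map]                    # both live in the one file
```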

 

This realignment of core functionality is simple to explain, but has had a big impact on how NTM works and how you interact with it.

 

How Does Modular Scanning Help Me?

 

Modular scanning lets you:

  • Scan once for your whole environment.
  • Build maps on the fly without performing new scans.
  • Flip between maps quickly because the maps are based on the same topology database.
  • Copy nodes (including their layout) between maps because the maps are based on the same topology database.
  • Keep your NTM diagrams updated with a single manual rescan or rescan schedule.
  • Spend more of your time making useful maps and less of your time configuring, scheduling, and waiting on scans.

 

In addition, the rendering changes (described in the next section) that were required to deliver modular scanning properly provide the following additional benefits:

  • NTM is faster and scales better in medium and large environments. Try it.
  • Auto Arrange functions tend to produce more appropriate icon layouts without so much extra space.

 

Rendering Change


When we started down this path, we knew we would also have to significantly change how we render nodes. In previous versions of NTM, filters were used to control whether nodes would display or not. Enabling or disabling filters did not give NTM any indication of where you would actually like to place those nodes. As a result, whether you were displaying all of your discovered nodes or 5% of them, NTM always kept track of every node and where it was on the screen to prevent the nodes from overlapping. Although there were some benefits to this, there were two big negative effects:

  • Displaying maps where many nodes were discovered took more compute resources than was necessary.  Maps with many discovered nodes could be slow or laggy feeling, even if only a handful of nodes were actually visible.
  • When many nodes were hidden, the Auto Arrangement functions would often separate node icons by huge amounts of distance to make room for nodes that were hidden in between the visible nodes.

 

We knew modular maps would naturally result in users doing bigger scans. The performance issues and layout issues noted above would cause severe problems at that scale.

 

So we fixed them. NTM now only renders nodes that are actually on the map. All nodes exist in the topology database. You place nodes onto your map by dragging and dropping from the left-hand pane (à la Network Atlas), and the nodes that you don't want to see don't bog you down or cause rendering oddities.


 

While NTM's scalability and performance used to revolve around the total number of discovered nodes regardless of how many nodes were displayed, it now revolves around how many nodes are displayed on the map you're looking at.

Filters.png

Drag and Drop.png

Icons, Icons, Icons!

 

You want better icons, bigger icons, smaller icons, custom icons, icon alignment, icon spacing, and anything else icon-related we're willing to build. We've heard you. Let's see how much of this functionality we can show in one screenshot:

 

2014-10-08_18-45-25.png

 


  1. All new out of the box icons in a scalable format
  2. Re-sizable icons
  3. Custom icon import
  4. Icon alignment tools
  5. Even icon spacing tools
  6. Text alignment
  7. Arbitrary text placement

Best of all, you can do all of these with bulk operations.

 

 

Enhanced Etherchannel Support

 

This wouldn't be an NTM release if it didn't improve NTM's ability to map your network.  This beta now does a much better job of discovering and mapping your Etherchannels, port-channels, or whatever you choose to call your L2 aggregated links.  Take a look:


Etherchannel.png

 

 

  1. Accurately detecting Etherchannel
  2. Rendering member links of Etherchannel as parallel
  3. Excluding links that are not part of Etherchannel, even if they are parallel
  4. Identifying port-channel name
  5. Protocol name
  6. Protocol mode
  7. Aggregate bandwidth

 

So Much More

 

In addition to the big items above, we've made a truckload of other smaller improvements. Here are some of them:

 

  • Not sure what connects to the nodes on your map?  Try right clicking and selecting "Add Neighbors".
  • How do you interact with multiple topology databases at the same time?  A drop down in the application?  Multiple levels of tabs?  Yuck.  In this beta, for the first time, you can open multiple instances of NTM.
    • Each NTM instance holds all the maps for one topology database.
    • If you have two topology databases open, you can scan in one window and continue working in the other.
    • If you have three topology databases open, you can scan in two windows (parallel scanning) and continue working in the remaining one.
  • Shortcuts allow you to add intelligent groupings of nodes to your map all at once.
  • Select nodes in the map you're looking at intelligently by clicking on individual nodes, groups of nodes, or node shortcuts in the left hand pane.  For example, selecting the "Router" group in the left hand pane selects all routers on your current map.
  • Connection jumps.
  • Copy paste nodes or groups of nodes between your maps.  Layout is preserved.
  • Improved Credential Management UI.
  • Extensible regex interface name handler to intelligently shorten interface names so they fit well on the map (see the sketch below).
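
Here's a rough idea of what such a handler does; the patterns below are invented examples, not NTM's actual rules.

```python
import re

# Illustrative abbreviation rules only; NTM's handler is configurable and
# these specific patterns are assumptions for the sketch.
ABBREVIATIONS = [
    (re.compile(r"^TenGigabitEthernet"), "Te"),
    (re.compile(r"^GigabitEthernet"), "Gi"),
    (re.compile(r"^FastEthernet"), "Fa"),
    (re.compile(r"^Port-channel", re.IGNORECASE), "Po"),
]

def shorten(interface_name: str) -> str:
    """Shorten a long interface name so it fits on a map label."""
    for pattern, abbrev in ABBREVIATIONS:
        if pattern.search(interface_name):
            return pattern.sub(abbrev, interface_name)
    return interface_name

print(shorten("GigabitEthernet0/1"))   # -> Gi0/1
print(shorten("Port-channel10"))       # -> Po10
```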

 

 

Known Issues in This Beta

 

  • Undo/Redo buttons sometimes do not work.
  • Etherchannels that are configured statically (mode "ON") are not currently detected as Etherchannels.

 

Conclusion

 

We're very excited about this release and can't wait to see the maps you guys create.  Feel free to post your feedback or your new maps in the NTM Beta Forum or email me directly at chris.obrien{at}solarwinds.com

 

button.png

 

As a reminder, beta software is for testing only and should not be used in a production environment.

Up until this point, much of the noise surrounding the Server & Application Monitor 6.2 beta has focused exclusively on a new optional agent that allows for (among many other things) polling servers that reside in the cloud or DMZ. More information regarding this new optional agent can be found in my three-part series entitled "Because Sometimes You Feel Like A Nut" linked below.

 

 

If you can believe it, this release is positively dripping with other incredibly awesome new features and it's time now to turn the spotlight onto one that's sure to be a welcome addition to the product.

 

Since the advent of AppInsight for SQL in SAM 6.0, and AppInsight for Exchange in 6.1, you may have grown accustomed to the inclusion of a new AppInsight application with each new release. Well, I'm happy to announce that this beta release of SAM 6.2 is no different, and includes the oft-requested AppInsight for Microsoft's Internet Information Services (IIS).

 

 


 

DISCOVERY


As with all previous AppInsight applications, monitoring your IIS servers with SAM is fairly straightforward. For existing nodes currently managed via WMI, simply click List Resources from the Node Details view and select Microsoft IIS directly beneath AppInsight Applications. You will also see this option listed for any new IIS servers added individually to SAM and managed via WMI using the Add Node Wizard.

 

You can add AppInsight for IIS applications individually using the methods above, or en masse using the Network Sonar Discovery Wizard.  One-time Discovery and scheduled recurring discovery of IIS servers in the environment using Network Sonar are fully supported. Either method will allow you to begin monitoring all IIS servers running in the environment quickly and easily.

 

AppInsight for IIS has been designed for use with IIS v7.0 and greater running on Windows Server 2008 and later operating systems. To monitor earlier versions, such as IIS 6.0 running on Windows 2003 or 2003 R2, you should use the Internet Information Services (IIS) 6 template available in the Content Exchange.

 

AppInsight for IIS discovery is currently limited to WMI managed nodes. For nodes managed via SNMP, ICMP, etc. you can manually assign AppInsight for IIS to the node no differently than any other application template in SAM.

List Resources.png

 

Network Sonar One Time Discovery.png

Network Sonar Discovery Results.png

 

CONFIGURATION

AppInsight for IIS Configure Server.png

AppInsight for IIS leverages PowerShell to collect much of its information about the IIS server. As such, PowerShell 2.0 must be installed on the local Orion server or on the Additional Poller to which the node is assigned. PowerShell 2.0 must also be installed on the IIS server being monitored. Windows 2008 R2 and later operating systems include PowerShell 2.0 by default, so you only need to worry about the PowerShell 2.0 requirement if you are running Orion on Windows 2008 (non-R2) or are planning to monitor servers running IIS 7.0.

 

Beyond simply having PowerShell installed, Windows Remote Management (WinRM) must also be configured. This is true both locally on the Orion server, as well as on the remotely monitored IIS host. If you're not at all familiar with how to configure WinRM, don't worry. We've made this process as simple as clicking a button.
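For context, this is roughly what that button automates: a hedged sketch of manual WinRM enablement using the standard winrm command-line tool, driven from Python. You would run the equivalent on both the Orion server and the monitored IIS host.

```python
import subprocess

# Manual equivalent (approximately) of what the configuration wizard
# automates: enable the WinRM service and an HTTP listener on a Windows
# host, using the built-in winrm command-line tool.
subprocess.run(["winrm", "quickconfig", "-q"], check=True)

# Confirm the listener is up (prints transport, port, and address bindings).
subprocess.run(["winrm", "enumerate", "winrm/config/listener"], check=True)
```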

 

After discovering your IIS servers and choosing which of them you wish to monitor, whether through the Add Node Wizard, List Resources, or Network Sonar Discovery, you will likely find them listed in the All Applications tree resource on the SAM Summary view in an "Unknown" state. This is because WinRM has not yet been configured on either the local Orion server or the remotely monitored IIS host. Clicking on any AppInsight for IIS application in an "Unknown" state from the All Applications resource launches the AppInsight for IIS configuration wizard.


When the AppInsight for IIS configuration wizard is launched, you will be asked to enter credentials that will be used to configure WinRM. These credentials will also be used for ongoing monitoring of the IIS application once configuration has completed successfully. By default, the same credentials used to manage the node via WMI are selected. Under some circumstances, however, the permissions associated with that account may not be sufficient for configuring WinRM. If that is the case, you can select from the list of existing credentials available in your Credential Library, or enter new credentials for use with AppInsight for IIS.

 

Once you've selected the appropriate existing or newly defined credential for use with AppInsight for IIS, simply click "Configure Server" and the configuration wizard will do the rest. It should only take a minute or two, and then you're up and monitoring your IIS server.

 

If configuring WinRM to remotely monitor your IIS server isn't your jam, or if perhaps you'd simply prefer not to use any credentials at all to monitor your IIS servers, AppInsight for IIS can be used in conjunction with the new optional agent, also included as part of this SAM 6.2 beta. When AppInsight for IIS is used in conjunction with the agent, you can monitor IIS servers running in your DMZ, remote sites, or even the cloud, over a single encrypted port that's NAT-friendly and resilient enough to monitor across high-latency, low-bandwidth links.

 

MONITORING

 

Sites and Pools.png

As with any AppInsight application, AppInsight for IIS is designed to provide a nearly complete, hands-off monitoring experience. All Sites and Application Pools configured on the IIS server appear in their respective resources. As Sites or Application Pools are added or removed through the Windows IIS Manager, they are automatically added to or removed from monitoring by AppInsight for IIS. This low-touch approach allows you to spend more time designing and building your IT infrastructure, rather than managing and maintaining the monitoring of it.

 

Each website listed in the Sites resource displays the current status of the site, its state (started/stopped/etc.), the current number of connections to the site, the average response time, and whether the site is configured to automatically start when the server is booted.

 

It is all too common for people to simply stop or disable the Default Web Site or other unused sites in IIS rather than delete them entirely. To reduce or eliminate false positive "Down" alert notifications in these scenarios, any sites that are in a Stopped state when AppInsight for IIS is first assigned to a node are placed into an unmanaged state automatically. These sites can, of course, easily be re-managed from the Site Details view at any time should you wish to monitor them.

 

The Application Pools resource also displays a variety of useful information, such as the overall status of the Application Pool, its current state (stopped/started/etc.), the current number of worker processes associated with the pool, as well as the total CPU, memory, and virtual memory consumed by those worker processes. It's the perfect at-a-glance view for helping identify runaway worker processes, or noisy-neighbor conditions that can occur as a result of resource contention when multiple Application Pools are competing for the same limited share of resources.

 

As one might expect, clicking on a Site or Application Pool listed in either of these resources will direct you to the respective Site or Application Pool details view where you will find a treasure trove of valuable performance, health, and availability information.

 

SERVER EXECUTION TIME

Response Time.png

 

The release of Network Performance Monitor v11 included an entirely new method of monitoring end-user application performance in relation to network latency: Deep Packet Inspection, from which the Quality of Experience (QoE) dashboard was born. The TopXX Page Requests by Average Server Execution Time resource is powered by the very same agent as the Server Packet Analysis Sensor included in NPM v11. AppInsight for IIS complements the network and application response time information provided by QoE by showing you exactly which pages are taking the longest to be served up by the IIS server.

 

This resource associates user-requested URLs with their respective IIS "site" and shows each URL's average execution time. Expanding any object in the list displays the HTTP verb associated with that request, along with the date and time of the web request, the total elapsed time, the IP address of the client who made the request, and any URL query parameters passed as part of the request.

 

New linear gauges found in this resource now make it easy to understand how a value relates to the warning and critical thresholds defined. Did I just barely cross over into the "warning" threshold? Am I teetering on the brink of crossing into the "critical" threshold? These are important factors that weigh heavily into the decision-making process of what to do next.

 

Perhaps you've just barely crossed over into the "warning" threshold and no corrective action is really required. Or maybe you've blown so far past the "critical" threshold that you can barely even see the "good" or "warning" thresholds anymore, and a full-scale investigation into what's going on is in order. In these cases, understanding where you are in relation to the defined thresholds is critical to determining both the severity of the incident and an adequate response.

 

Server execution time is the time spent by the server processing the user's request. This includes the web server's CPU processing time, backend database query time, and everything in between. The values shown in this resource are irrespective of network latency; in other words, page load times will never be better than what's shown here without backend server improvements, regardless of what network performance looks like. Those improvements could be as simple as rebuilding database indexes, defragmenting the hard drive, or adding RAM to the server. Either way, high server execution time means users are waiting on the web server or backend database queries to complete before the page can be fully rendered.
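As a quick back-of-the-envelope illustration (the numbers are invented), server execution time sets a hard floor on user-perceived page time:

```python
# Hypothetical numbers, purely for illustration.
server_execution_ms = 450   # CPU time + backend DB queries + everything between
network_latency_ms = 80     # transit time to and from the client

# The user-perceived page time can never drop below the server execution
# time, no matter how fast the network gets.
page_time_ms = server_execution_ms + network_latency_ms
print(f"Perceived page time: {page_time_ms} ms "
      f"(floor set by the server: {server_execution_ms} ms)")
```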

 

AppInsight for IIS - Management.png

REMEDIATION

 

What good is the entire wealth of information that AppInsight for IIS provides if there is no way to remediate issues when they occur?

In addition to a powerful assortment of key performance indicators, you will notice new remediation options available within the Management resource of the AppInsight for IIS "Site Details" and "Application Pool Details" views. Within each respective resource is the ability to stop, start, and even restart the selected site or application pool, as well as unmanage it. Remediation actions executed through the web interface are fully audited, no differently than terminating processes through the Real-Time Process Explorer, or stopping/starting/restarting services via the Windows Service Control Manager.

IIS Alert Trigger Action.png
In conjunction with one of the many other improvements also included in the SAM 6.2 beta, it is now possible to automate these IIS remediation actions from within the all-new web-based Alert Manager. Not only can you, say, restart a site that has failed, or stop an Application Pool that's running amok, but you can perform those remediation actions against any IIS site or Application Pool in your environment, regardless of whether it's the site or application pool in distress. This is important for complex distributed applications, where issues with backend application servers or database servers may require a restart of the frontend IIS site or application to resolve. Now this can be completely automated through the Advanced Alert Manager, and you can get some much-needed shuteye.


These are just a few of the capabilities of AppInsight for IIS. If you'd like to see more, you can try it out for yourself by participating in the SAM 6.2 beta. To do so, simply sign up here. We thrive on feedback, both positive and negative, so kick the tires on the new SAM 6.2 beta and let us know what you think in the SAM Beta Forum.

 

Please note that you must currently own Server & Application Monitor and be under active maintenance to participate in this beta. Betas must be installed on a machine separate from your production installation and used solely for testing purposes. Also, if you're already participating in the NPM v12 beta, the SAM 6.2 beta can be run on the same server alongside the NPM v12 beta.

Last week, we released version 6.0.1 of Log & Event Manager. Normally we don't make too much noise about service releases (minor dot releases) 'round these parts, but this time we decided to make an exception. We packed a lot of security enhancements and customer requests into this release that you should definitely be aware of.

 

Enhanced Security Features

  • Removal of several flagged "vulnerabilities" on the LEM appliance: We continuously monitor security scans, and while rarely (dare I say, almost never?) has there been an issue without a mitigation (like ShellShock, for example), we still try to reduce and ideally eliminate any vulnerability scans flagging the LEM appliance in any way. With this release, we've cleaned up all known security scan flags except the visibility of the Tomcat error page, which we're looking into for a future release, and a couple of certificate triggers which are expected and would be resolved by using a CA-signed cert.
  • Better support for using signed certs: We had some customers use the ability to sign and re-upload certs for the LEM console, but there were some cases where it didn't work quite right. We've shored up our support for certs, and things should be much improved.
  • Improved enforcement for password storage for connectors: some connectors require storing username/password credentials (connectors that use a database to retrieve log data, for example) so we've beefed up encryption and version enforcement for storage of those passwords. Once you upgrade your 6.0.1 appliance and console, you'll need to also upgrade any agents that might have one of these connectors configured (or where you'll configure one going forward).

And the big one...

  • Named user access and TLS support for reports: our database has been encrypted since version 5.6 (and even before that, access has always been limited), but using JDBC access and a fixed username was a cause for concern for some folks. We've migrated to using LEM users (including AD users) for reports instead, and optionally allow you to enable TLS connectivity.
    • There's a new "reports" role in the LEM console that you can assign if you have users that shouldn't have access to real-time data, but do need access to historical data. In addition, admin and auditor roles also have reporting access (but not monitor users).
    • When you install v6.0.1, be sure to install v6.0.1 reports, launch it, and specify your access credentials, especially if you have scheduled reports. Your reports won't run until you do.
    • If you're interested in using TLS, use the CMC's "enabletls" command to toggle TLS support for reports (you'll have to export the cert using "exportcert" and then import into the reports console as well).

 

FIM (File Integrity Monitoring) Updates

  • Fix for the "NT AUTHORITY\SYSTEM" username when a file is accessed over a file share: we put this out in a hotfix, but now it's incorporated into the agent install and the agent automatic upgrade. When someone accesses a file remotely, that user's name is shown instead of the stock NT AUTHORITY\SYSTEM user.
  • Fixes for several configuration issues with FIM: we've had a few issues reported with FIM from customers - directories not displaying, for example - that we've resolved.

 

But Wait, There's More!

  • Support for SQL Auditor with SQL 2012: we're working on SQL 2014, too, but for now we officially support SQL 2012 with SQL Auditor, along with the previous support for earlier versions (2008, 2005, 2000).
  • Better support for large memory configurations: customers with high throughput have assigned extra RAM and CPU to the LEM appliance, but support often had to remote in and tweak some settings. We've improved our auto-tuning on startup to detect and support these configurations.
  • Several additional utilities and smaller fixes, including: improvements to our internal logging, a utility to rebuild indexes more easily, and as always more officially supported connectors (remember, you can download these at any time - see SolarWinds Knowledge Base :: How to apply a LEM connector update package)

 

Customers on maintenance can download the LEM v6.0.1 upgrade on the Customer Portal immediately. For everyone else, the download on the LEM Product Page is now v6.0.1, too.

 

Be sure to check out What We're Working on - Log & Event Manager Edition for some ideas on where we're going next. If you've got any questions about v6.0.1 or all things LEM, post them here or over in the Log & Event Manager forum.

We've been getting an increasing number of questions about the recently announced ShellShock vulnerability, so this post collects the status of our different products into one place to make it easy for you to determine whether your product is affected.

 

What is ShellShock? How does it work?

 

ShellShock is a vulnerability in a command shell (bash) commonly used on Linux and some other Unix flavors; basically, every Linux system out there that hasn't yet been patched is vulnerable in some fashion. The vulnerability allows someone who can log in to a Linux system locally, OR remotely pass unchecked input to it (via the web, for example), to run arbitrary commands in the context of the process they exploited (the service or user account) and potentially elevate their privileges as far as root-level access. At that point, changes could be made to the system, additional services could be run (to do things like serve exploit or phishing sites), and further exploits could be attempted to gain root-level access and full control.
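The widely published one-line test makes the mechanism concrete: bash evaluates commands smuggled in after a function definition stored in an environment variable. Here it is wrapped in Python; run it only against systems you own.

```python
import subprocess

# Classic, widely published ShellShock check: an environment variable
# containing a function definition followed by trailing commands. A patched
# bash prints only "test"; a vulnerable bash also prints "vulnerable".
result = subprocess.run(
    ["/bin/bash", "-c", "echo test"],
    env={"x": "() { :;}; echo vulnerable",
         "PATH": "/usr/bin:/bin"},   # keep a sane PATH in the stripped env
    capture_output=True,
    text=True,
)
print(result.stdout, end="")
```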

 

There is not YET a massive-scale exploit of this vulnerability, but it's entirely possible that before the day is done, one will be in the wild (we're already seeing smaller-scale exploits ramping up). With so many web applications that control and access Linux systems (doing things as simple as image manipulation or as complex as system management control panels), and the common usage of Linux for web servers and application platforms in general, it probably won't be very long before something is written and scaled to take advantage of this exploit and create a ton of zombie systems out there.

 

For more reading, there's a great summary post here: Troy Hunt: Everything you need to know about the Shellshock Bash bug

 

What SolarWinds Products are Affected?

 

First, anything that exists exclusively on Windows is not affected. This is the majority of SolarWinds products - including NPM, SAM, NCM, Patch Manager, and more.

 

Products installed on a Linux OS or used to manage a Linux OS are not vulnerable, but their underlying system may be. Storage Manager or Serv-U on Linux isn't affected, but if your Linux OS is, you should consider that system at-risk. Similarly, if you are using LEM agents or monitoring Linux systems, those software bits are okay, but the underlying OS probably isn't.

 

The only affected products are our virtual appliance-based products, which run limited versions of Linux.

 

Below is the status of each product, whether it is affected, and mitigation or resolution steps you can take if necessary.

Alert Central: Partially affected, see notes

Alert Central is running a vulnerable version of bash; however, at this point none of the exploit vectors apply:

  • Access to the virtual appliance shell requires authentication and the exploit does not elevate privileges.
  • It is not possible to exploit the vulnerability remotely.

 

To mitigate the threat, limit access to the virtual appliance management console and the VAMI configuration interfaces where commands can be run and instantiated. Ensure the appliance "admin" password used for VAMI access is set and secure.


To be safe, we will include the updated bash software in an upcoming Alert Central release. If it becomes necessary to issue a patch, we will do so. More info will be posted here.

Log & Event Manager (LEM): Partially affected, see notes

LEM is running a vulnerable version of bash; however, at this point none of the exploit vectors apply:

  • It is not possible to exploit the vulnerability interactively (customers do not have access to an authenticated bash prompt).
  • No LEM management commands allow setting environment variables or are used in a vulnerable way.

If you are still concerned, you should limit access to the virtual appliance management console and restrict SSH access to LEM using the LEM advanced configuration console (CMC).

 

To be safe, we will include the updated bash software in an upcoming LEM release. If it becomes necessary to issue a patch, we will do so. More info will be posted here.

Virtualization Manager: Partially affected, see notes

Virtualization Manager is running a vulnerable version of bash; however, at this point none of the exploit vectors apply:

  • Access to the virtual appliance shell requires authentication and the exploit does not elevate privileges.
  • It is not possible to exploit the vulnerability remotely.

 

To mitigate the threat, limit access to the virtual appliance management console and the VAMI configuration interfaces where commands can be run and instantiated. Ensure the appliance "admin" password used for VAMI access is set and secure.


To be safe, we will include the updated bash software in an upcoming Virtualization Manager release. If it becomes necessary to issue a patch, we will do so. More info will be posted here.

Web Helpdesk (WHD): Partially affected, see notes

Web Helpdesk is running a vulnerable version of bash; however, at this point none of the exploit vectors apply:

  • Access to the virtual appliance shell requires authentication and the exploit does not elevate privileges.
  • It is not possible to exploit the vulnerability remotely.

 

To mitigate the threat, limit access to the virtual appliance management console and the VAMI configuration interfaces where commands can be run and instantiated. Ensure the appliance "admin" password used for VAMI access is set and secure. Please also see this KB article for the WHD Virtual Appliance patch: SolarWinds Knowledge Base :: Bash Code Injection Vulnerability - Shellshock.


To be safe, we will include the updated bash software in an upcoming WHD release. If it becomes necessary to issue a patch, we will do so. More info will be posted here.

Patch Manager: No
DameWare: No
Firewall Security Manager (FSM): No
Storage Manager: No. When the bash patch is installed on the same system, Storage Manager will continue to function normally.
Serv-U Products: No. Serv-U does not run any shell scripts except at install time, and it sets no environment variables. The only other use of sub-process spawning is direct shell commands to manipulate files, and no environment variables are set.
Network Configuration Manager (NCM): No
Kiwi Products: No
Enterprise Operations Console (EOC): No
Web Performance Monitor (WPM): No
Server & Application Monitor (SAM): No
Network Performance Monitor (NPM): No
User Device Tracker (UDT): No
Network Topology Mapper (NTM): No
Netflow Traffic Analyzer (NTA): No
Failover Engine (FoE): No
Mobile Admin: No
ipMonitor: No
IP Address Manager (IPAM): No
VoIP and Network Quality Manager (VNQM): No
Free Tools: No
Database Performance Analyzer (DPA): No
Engineers Toolset: No

 

Can any SolarWinds products help determine if I have other systems affected?

If you've got Server & Application Monitor, user mcam posted a template you can use here in our Content Exchange: Bash Vulnerability Test. You can use this to check and change the status of a monitored Linux node if it comes up vulnerable.

 

How do I fix them or prevent them from being attacked?

 

The fix is pretty straightforward - check your Linux distribution maintainer for an update to bash, or as Troy Hunt suggested in his article, compile and deploy your own.

 

You can prioritize what to fix based on how the attack works (a shell must be run to instantiate the attack):

  1. Systems with web or remote control applications that run local commands on the appliance after taking input from users
    1. If you can identify a known vulnerable application (e.g. cPanel), patch the application AND the system - there may be future attacks that the application will now also protect you from
  2. Systems with accounts where you allow people to log in and run commands arbitrarily
  3. Systems with sensitive data or access to sensitive networks
  4. Anything else!

 

If you can't fix something right away, here are some suggested mitigation steps:

  1. Disable or limit access to web or remote control applications that run local commands - ESPECIALLY from the public-facing internet.
    1. NOTE: If you're using SSH, it can't be exploited EXCEPT by an authenticated user, so you don't necessarily need to limit visibility entirely (though it may still be a good idea!)
  2. Disable or limit access (local or remote) from any unnecessary accounts.
    1. For accounts that run services, prevent them from logging in and spawning a shell.
      1. NOTE: Make sure any services you're running, especially accessible from the internet, use service accounts, not root.
    2. For accounts that only need access to something like FTP, prevent them from logging in and spawning a shell.
    3. Audit for dead accounts - users that may not exist any longer.
  3. Consider restricting login access to systems that have access to sensitive data or networks to only critical users while you deploy the fix.
  4. Monitor for common post-attack signs:
    1. Usage of the root account
    2. Services restarting (could be a sign of configuration changes)
    3. Accounts being created and/or sudo access being granted
    4. Monitoring systems being disabled/shut off

 

We'll update this post if anything changes or as more information becomes available on the updates for our virtual appliances.

Over the last 10 years, enterprise storage has been one of the most active areas of innovation and change in the IT environment. Storage forms the foundation of critical IT server and application infrastructure, and IT pros responsible for managing enterprise storage have had to learn and adapt as the demands of ever-increasing business-critical applications push existing SAN and NAS architectures to their limits. Most critically, virtualization of the server infrastructure has turned the end-to-end support of these applications into a rat's nest. This means that as end users clamor for better application performance and storage capacity, the storage pro is often left troubleshooting the environment with a collection of isolated tools for monitoring, troubleshooting, and maintaining peak performance and availability. We've heard this loud and clear from the IT community, and we believe there is a better way.

With that, I introduce you to SolarWinds Storage Resource Monitor (SRM), our new native Orion module to monitor and manage your storage environment. Best of all, we already have a Beta available, and we are excited to get your direct feedback so we can make even more improvements before release. Since we are delivering this functionality as part of Storage Manager (STM) and Storage Profiler (PL) (more details below), the Beta is open to STM and PL customers currently on Active Maintenance. If you are not a Storage Manager customer but are still interested in participating, go ahead and sign up as well, and I'll see if there is a way I can at least get you on a feedback call to interact with a SolarWinds-local install.

 

Picture1.png

 

Why SRM?

 

In 2010, SolarWinds acquired TekTools, which became our venerated Storage Manager product. Storage Manager is a very broad and deep product and has saved the hides of many a storage admin from over-provisioned storage pools and LUNs behaving badly. However, we have heard consistently from you that when it comes to IT operations monitoring, you want a single-pane-of-glass view across the entire application stack (internally and affectionately called "the AppStack"). Well, you'll be happy to know that we agree with that end-to-end vision of your IT environment, and we simply can't deliver on it by extending Storage Manager functionality on its current platform. SRM is the piece of the puzzle that, along with our networking tools, Server & Application Monitor (SAM), Web Performance Monitor (WPM), and the recently integrated Virtualization Manager (VMAN), allows us to deliver a truly application-centric view of your IT systems infrastructure performance and availability.

 

The AppStack

e1.png

 

Given your direct feedback, it's become clear that the performance of an individual piece of the IT infrastructure - a LUN, an array, a VM, an ESXi host - is only as important as how it serves the overall health and availability of the business-critical application it supports. To that end, SRM is built to finally give you a NOC view of your storage environment side by side with its related physical and virtual servers and the applications running on them. Furthermore, the Orion platform was built from the ground up to be a NOC platform, and trying to make the current STM platform fit that model would be an extremely difficult feat at best. What do I mean by a NOC platform? Very simply, all of the goodness you've come to expect from Orion: status, customizable UI dashboards, custom properties, a rotating NOC view, extremely flexible alerting and reporting, customizable UI widgets (e.g. custom Top 10s), the Network Atlas... the list goes on. By bringing your storage data onto the Orion platform, we can enable you to see and interact with your data the way you want, and we're very excited to make this happen for you and your organizations.

 

Your New Storage Home

9-18-2014 9-51-47 PM.png

 

Wait, but I own Storage Manager! What happens to me?

 

You'll be happy to know that you're coming along with us on this ride. Hopefully, you're as excited as we are. SRM will be delivered very similarly to how we've delivered the Virtualization Manager integration with NPM and SAM. Storage Manager and Storage Profiler customers on Active Maintenance at the time of GA will be entitled to SRM as part of their Maintenance.

 

Does this mean that I need to own another Orion module product (e.g. NPM, SAM) in order to deploy SRM?

 

Absolutely not. SRM can be stood up on its own, the same as any stand-alone Orion module. However, the team is working extremely hard to ensure there is seamless integration across our primary "Systems Management" products, so you get full end-to-end visibility when you deploy SRM with SAM or VMAN. As I stated above, the end goal is to give you full application-to-array visibility of your IT environment, whether your applications are running on physical or virtual servers, and integration with SAM and VMAN is key to fulfilling that vision. Keep watching the Product Forums and Product Blog for updates about upcoming SAM and VMAN Betas to test this out in your own environment.

 

Furthermore, you can continue to run Storage Manager in your environment and we're working hard on making sure our licensing allows you the flexibility to deploy the way you need to in order to meet your business requirements.

 

Do I need to be running STM in order to run SRM?

 

No. Unlike the current VMAN integration where the VMAN Virtual Appliance is feeding data collected from your VMware and Hyper-V environment to the Orion database, SRM has been developed to collect data directly from your arrays (or their data providers like SMI-S). SRM is a native Orion module and does not collect or share data with Storage Manager beyond that required to enforce licensing.

 

OK, but will you support MY device?

 

The enterprise storage market could be an academic case study in long-tail markets. Furthermore, each of the vendors has made data collection and device modeling completely different across every array family - even array families ostensibly under the same brand name or corporate umbrella. Long story short, we had to make choices on where to start, and matching all of STM's incredibly deep array support in v1 simply wasn't feasible. So where did we start? Our current Beta has support for:

 

  • The EMC VNX Family - This includes EMC's VNX (formerly CLARiiON) and VNX2 architectures, as well as the VNX NAS Gateway devices (formerly Celerra). VNX enthusiasts, I have good news: we will now have full block and file (Unified) support under a single device type.
  • The Dell EqualLogic PS Series (SAN)

 

In short order, we should have a Beta available for:

 

 

I promise you that's not where our vision ends. Stay tuned for more info. Either way, I encourage everyone interested to sign up for the Beta - the survey will ask what devices you have in your environment, and I'll be proactive in reaching out when your device is ready for Beta testing.

 

So What's Next?

 

You can be assured that this will not be the last post on this topic before the product is released. I'm sure you have outstanding questions I have not addressed, so please don't hesitate to ask, and I'll answer them directly and/or roll them up into follow-on posts. Furthermore, go sign up for the Beta and try SRM today!
