
VoIP has been widely adopted by enterprises for the cost savings it provides, but it is also one of the most challenging applications for a network administrator to support. Some enterprises choose to run VoIP on their existing IP infrastructure with no additional investment, bandwidth upgrades, or preferential marking for voice packets. But because VoIP is a delay-sensitive application, the slightest increase in latency, jitter, or packet loss affects the quality of a VoIP call.

 

The Story:

A medium-sized business with its HQ in Austin, US, and a branch office in Chennai, India, used VoIP for sales and customer support as well as for communication between offices. IP phones and VoIP gateways were deployed at both Austin and Chennai, and the call manager and the PSTN trunk for external calls were at Austin. Austin and Chennai were connected over the WAN, and voice calls from Chennai took the same path as data.

 

network dgm.png

The Problem:

Tickets were raised by users in Chennai about VoIP issues such as poor call quality and even call drops when calling Austin and customers around the globe.

 

The network admin had the NOC team check the health and performance of the network. The network devices in the path of the call were analyzed for health issues, route flaps, and so on with the help of an SNMP-based monitoring tool. After confirming that the network health was fine, the team leveraged a few free Cisco technologies for VoIP troubleshooting.

 

The Solution:

  1. Analysis with Call Detail Records (CDR) and Cisco VoIP IPSLA
  2. Root cause with Cisco NetFlow
  3. Resolution with Cisco QoS

 

Analysis with Call Detail Records (CDR) and Cisco VoIP IPSLA

When call drops were first reported, the NOC team quickly set up a tool with which they could analyze both Call Detail Records (CDRs) and Cisco IPSLA operations. The Cisco call manager was configured to export CDR data to the tool, and the edge Cisco routers at both locations were added to the tool for IPSLA monitoring. CDR data was analyzed to find details about all failed calls, and IPSLA was used to measure MOS, jitter, and latency for VoIP traffic between the locations. IPSLA reports were correlated with CDR information to confirm the affected location, subnet, and set of users.
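To illustrate how the metrics IPSLA collects (latency, jitter, packet loss) translate into a MOS value, here is a minimal Python sketch using the commonly cited simplified E-model approximation. It is only an illustration: the exact calculation a Cisco IP SLA probe performs may differ, and the sample inputs are made up.

    def estimate_mos(latency_ms, jitter_ms, loss_pct):
        """Rough MOS estimate from one-way latency, jitter, and loss (simplified E-model)."""
        effective_latency = latency_ms + 2 * jitter_ms + 10
        if effective_latency < 160:
            r = 93.2 - effective_latency / 40
        else:
            r = 93.2 - (effective_latency - 120) / 10
        r -= 2.5 * loss_pct          # penalize packet loss
        if r <= 0:
            return 1.0
        mos = 1 + 0.035 * r + 7.0e-6 * r * (r - 60) * (100 - r)
        return round(min(mos, 5.0), 2)

    print(estimate_mos(latency_ms=40, jitter_ms=5, loss_pct=0.1))    # a healthy call
    print(estimate_mos(latency_ms=250, jitter_ms=40, loss_pct=3.0))  # a degraded call

A MOS of roughly 4.0 or better is generally considered good for VoIP; the second call scores noticeably lower, which is the kind of drop the team saw in the IPSLA reports.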

failed calls.png

mos score.png

Root cause with Cisco NetFlow

IPSLA confirmed high packet loss, jitter, and latency for VoIP conversations originating from Chennai, which cast suspicion on the available WAN bandwidth. The network admin verified the link utilization using SNMP. Though the WAN link was heavily utilized, it was not saturated to the point where packets should have been dropped or latency should have been that high.

 

The second free technology to be used was NetFlow. Most routing and switching devices from major vendors support NetFlow or similar flow formats such as J-Flow, sFlow, IPFIX, and NetStream. NetFlow was enabled on the WAN interfaces at both Austin and Chennai and exported every minute to a centralized flow analysis tool that provided real-time bandwidth analysis.

 

The network admin checked the top applications in use and, contrary to expectations, did not find VoIP in the top applications list. ToS analysis of the NetFlow data showed that VoIP conversations from India did not have the preferred QoS priority. A configuration change on the router had given backup traffic a higher priority than VoIP traffic, so backup traffic was being delivered while VoIP packets were dropped or buffered whenever WAN link utilization was high. The admin also found that a few scavenger applications had been given high priority.
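The DSCP value NetFlow reports is carried in the top six bits of the ToS byte, so checking whether voice flows carry EF (DSCP 46) is straightforward. Here is a minimal Python sketch of that check; the flow records, port range, and ToS values are made-up examples, not output from any particular tool.

    EF = 46  # Expedited Forwarding, the DSCP value expected for voice

    # Hypothetical flow records: (src, dst, dst_port, tos_byte, bytes)
    flows = [
        ("10.20.1.15", "10.10.5.2", 16384, 0xB8, 1_200_000),   # RTP stream, ToS 0xB8 -> DSCP 46
        ("10.20.1.15", "10.10.5.2", 16390, 0x00, 1_100_000),   # RTP stream left unmarked
        ("10.20.1.40", "10.10.9.9",   445, 0xB8, 9_800_000),   # backup job marked EF by mistake
    ]

    for src, dst, dport, tos, octets in flows:
        dscp = tos >> 2                          # DSCP = top six bits of the ToS byte
        is_voice = 16384 <= dport <= 32767       # assumed RTP port range for this deployment
        if is_voice and dscp != EF:
            print(f"voice flow {src} -> {dst} marked DSCP {dscp}, expected EF ({EF})")
        elif not is_voice and dscp >= EF:
            print(f"non-voice flow {src} -> {dst} marked DSCP {dscp}; check the QoS policy")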

 

top apps.png       EF-top apps.png

Resolution with Cisco QoS

With reports from the flow analyzer tool, the network admin identified the applications and IP addresses hogging the WAN bandwidth and redesigned the QoS policies to give preferential marking to VoIP and mission-critical applications, putting everything else under "Best Effort." Bandwidth-hogging applications were either policed or dropped. Traffic analysis with NetFlow confirmed that VoIP now carried the required DSCP priority (EF) and that other applications were no longer hogging the WAN bandwidth. Because Cisco devices support QoS reporting over SNMP, the QoS policies on the edge Cisco devices were monitored to confirm that QoS drops and queuing behaved as designed.

 

EF priority for VoIP.png  CbQoS drops.png

 

Cisco IPSLA and CDR analysis confirmed that VoIP call performance was back to normal: no more VoIP calls had a poor MOS score or were being dropped. We had a smart network admin, and that was the day we were taught to be proactive rather than reactive.

 

The question I now have is:

Have you been in a similar soup?

 

Are there alternative methods we could have used, and how would you have gone about it?

With the recently launched SolarWinds Network Performance Monitor (NPM), you can now get a view into the application's quality of experience using Deep Packet Inspection (DPI) analysis. Most often, we tend to blame the network when an application issue occurs – whether it’s related to availability or overall performance.

 

To really know whether your application is the culprit, you have to monitor essential metrics such as network response time and application response time (time to first byte). These metrics are broken out by application and give you an at-a-glance ability to correlate and identify the source of application issues. If it is indeed an application issue, you can drill down further and look at the health of the server hardware where the application is running. You can monitor the server response time and other metrics to confirm that it is in fact the application that is having issues.
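If you want to see the distinction for yourself without any tooling, a quick way is to time the TCP/TLS connect separately from the wait for the server's first response bytes. The sketch below does this with Python's standard library against a hypothetical endpoint; it is only a rough approximation of the idea, not the product's actual DPI-based measurement.

    import http.client
    import time

    host = "www.example.com"      # hypothetical application endpoint

    start = time.monotonic()
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.connect()                                   # TCP + TLS handshake
    connected = time.monotonic()

    conn.request("GET", "/")
    resp = conn.getresponse()                        # blocks until status line and headers arrive
    first_byte = time.monotonic()
    resp.read()
    conn.close()

    print(f"network response time (connect): {(connected - start) * 1000:.1f} ms")
    print(f"application response time (TTFB): {(first_byte - connected) * 1000:.1f} ms")

A high connect time with a normal TTFB points at the network; a normal connect time with a high TTFB points at the application or the server behind it.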

sql serv 0.jpg

You may be using NPM to monitor the quality of service for a critical application like SQL Server. Network Performance Monitor can give you insight into a few server resource issues, like CPU and memory performance. If all looks good, there may still be an issue with your SQL Server. Here is an example of how you could use Server & Application Monitor (SAM) to drill down further and identify database performance issues.

  1. After discovering your application environment, you will see the monitored applications on your node details page. Here you can see that SQL Server is having a problem.

sql serv 1.jpg

  2. Drilling further into the SQL server with SAM will show that the lock requests/second is higher than it should be.

sql serv 2.jpg

  3. Clicking the metric or the component will tell you that the value is high and that the high wait time could be the reason for the poor quality of service. Expert knowledge for this metric in SAM will help you with remediation guidance on how to fix the problem.

sql serv 3.jpg
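Under the hood, this kind of check is essentially a performance counter compared against a baseline. Here is a minimal sketch of the idea, with entirely made-up numbers and an assumed threshold (SAM's actual thresholds and expert knowledge are more sophisticated than this):

    # Hypothetical samples of the SQL Server "Lock Requests/sec" counter, one per minute.
    samples = [620, 640, 615, 3400, 3900, 3650]
    baseline = 700            # assumed normal ceiling for this instance
    window = samples[-3:]     # look at the last three samples to avoid alerting on a single spike

    average = sum(window) / len(window)
    if average > baseline:
        print(f"Lock Requests/sec averaging {average:.0f} over the last {len(window)} samples "
              f"(baseline {baseline}); look for blocking or long-running transactions.")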

Whether you’re monitoring file transfer apps, web services, social networking apps, messaging, email, databases, or other applications using NPM, you can leverage out-of-the-box templates in SAM or templates on thwack to monitor complete application performance. This document has a list of applications that you can monitor using both Server & Application Monitor and the QoE feature of Network Performance Monitor.

It’s a known fact that organizations are turning toward virtualization for various reasons. An IT organization can add several business applications, databases, and so on without adding new hardware to its environment, thus saving money and optimizing existing hardware. Often, development teams rely heavily on taking snapshots of their dev or test environments. In the event of hardware failure, or if difficulties arise with restoring changes made to apps and databases, you can quickly lose this vital information. One of the most crucial values that virtualization offers companies is that it can help save on technology maintenance costs. Imagine having to run several physical servers without a virtual environment: you would end up supporting end-users or customers at a very high cost.

      

Despite the various short-term and long-term benefits virtualization offers, managing this complex infrastructure is a mammoth task for any IT pro. Most organizations would prefer to rely on a proactive virtualization management tool that offers deep, end-to-end visibility into their virtual and storage environments. SolarWinds® Virtualization Manager, or VMAN as we fondly call it, gives you unified performance, capacity planning, configuration, VM sprawl control, VDI, and showback management for VMware® and Hyper-V®. In addition, VMAN integrates with SolarWinds products such as Server & Application Monitor and Storage Manager, providing you with contextual awareness of performance issues across all layers, including applications, databases, virtual infrastructure, server hardware, and storage systems.

            

Just this month, SolarWinds conducted a survey in which 136 VMAN customers participated. The pool of respondents included VirtAdmins, SysAdmins, capacity planners, IT generalists, etc. from North America. The objective of this survey was to find out how our customers are using VMAN to troubleshoot performance challenges in their virtual environment and what their ROI was after deploying VMAN.

             

VMAN Survey Aug 2014.jpg

      

After deploying VMAN:

  • 63% of respondents spent an average of only $6,000 per year on software costs to monitor around 250 VMs
  • Respondents decreased downtime from 7-15 hours to less than 3 hours per month
  • Respondents who spent 11-20 hours manually searching for dormant VMs now spend only 1-5 hours per month
  • Respondents who spent around 9 hours to detect VM sprawl every month now only spend a little over 2 hours

               

What’s also interesting is that a large chunk of our customers say they leverage the product integration between VMAN and Server & Application Monitor, Storage Manager, and Network Performance Monitor. 47% of customers go on to say their previous virtualization management tool didn’t offer such integration capabilities for end-to-end visibility (app to storage), and that’s why they made the switch to SolarWinds.

         

You can view the complete survey findings in the following presentation.

         

Virtualization Manager Survey: Features, Competitive, and ROI

              

If you’re an existing VMAN customer and haven’t had a chance to participate in the survey, tell us what value you see in using VMAN. Take this time to also let us know what you feel we could do better in the coming releases of Virtualization Manager.

If you are using Network Performance Monitor to look at the performance of your Windows servers, you'll get an indication that there is an issue with your server when you receive an alert that CPU or memory is starting to max out.

blog - cpu mem.JPG

Now what? Server & Application Monitor is the perfect tool to help you troubleshoot the root cause of server performance issues more quickly. After installation, the node details view shows additional management tools: the Real-Time Process Explorer, the Real-Time Event Log Viewer, the Service Control Manager, and a Reboot button.

mgmt tools.JPG

When you launch the Real-Time Process Explorer, you can see the processes that are consuming the most resources. Right from this view, you can kill processes or start monitoring a process to get alerted when it uses too many resources.
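To get a feel for what a process-level view adds, the sketch below does something similar from the command line with the third-party psutil library: sample CPU and memory per process and list the top consumers. It is a rough stand-in for illustration only, not how the Real-Time Process Explorer is implemented.

    import time
    import psutil   # third-party library: pip install psutil

    # Prime the per-process CPU counters, then sample again after a short interval.
    for proc in psutil.process_iter():
        try:
            proc.cpu_percent(interval=None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    time.sleep(1)

    snapshot = []
    for proc in psutil.process_iter(attrs=["pid", "name", "memory_percent"]):
        try:
            cpu = proc.cpu_percent(interval=None)
            snapshot.append((cpu, proc.info["memory_percent"] or 0.0,
                             proc.info["pid"], proc.info["name"] or "?"))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass

    print(f"{'PID':>7}  {'NAME':<30} {'CPU%':>6}  {'MEM%':>6}")
    for cpu, mem, pid, name in sorted(snapshot, reverse=True)[:10]:
        print(f"{pid:>7}  {name:<30} {cpu:>6.1f}  {mem:>6.1f}")

From there, psutil.Process(pid).terminate() is the programmatic equivalent of the "kill process" button.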

RTPE WMI.jpg

As a network admin, how many times do you hear that the app is down? Server & Application Monitor has other troubleshooting tools to help you determine the cause of the problem and fix it. Using the Service Control Manager, you can get a quick view of services that have stopped. From this view you can restart a service, or stop it if it is hanging. Many times, when system performance changes for the worse, you can dig into the log files to determine whether there was a recent change, whether there was a security event, and so on. Again, it's as easy as launching the Real-Time Event Log Viewer and sorting by log type and severity.

service control.JPG

Earlier this year we conducted a survey and asked you what it’s like managing DHCP, DNS, and IP addresses. We received over 200 responses from the thwack community. Among the many findings, we observed that on average you’re spending about 40 hours each month using rudimentary tools to manage roughly 2,000 IPs.

 

Let’s put this into perspective. The largest hospital in the US has beds to accommodate 2,272 patients. Does it seem rational for a hospital to use something as rudimentary as a spreadsheet to track its patients, with all the details involved (name, room, physician, illness, etc.)? Of course not; it’s unthinkable! The chaos that would result could easily be avoided. So why do we manage thousands of IP addresses and their details with something as limited as a spreadsheet?

 

Coming back to our survey, our respondents told us they spend most of their time managing DHCP and DNS configurations, troubleshooting IP-related problems, and maintaining documentation of IP addresses. Not surprisingly, they said they wanted these tasks to be less complex. This is where SolarWinds IP Address Manager (IPAM) can help because it’s designed to let you easily manage DHCP, DNS, and IP subnets and addresses by offering three primary functions:

 


Manage IP Address Blocks

IP address management begins with knowing what address blocks and addresses are available to use. SolarWinds IPAM makes this effortless by automatically discovering your subnets. SolarWinds IPAM will then look within each subnet to identify which IPv4 and IPv6 addresses are in use. It’s all automatic and very accurate. No more spreadsheets to maintain! With an accurate inventory, it’s trivial to find open addresses or reclaim unused ones. Say goodbye to IP conflicts and perpetually low address pools.
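The core of the "find an open address" problem is simple set arithmetic over a subnet, which Python's standard ipaddress module handles directly. Here is a minimal sketch with made-up data (an IPAM tool's discovery fills in the "in use" list for you):

    import ipaddress

    subnet = ipaddress.ip_network("10.10.20.0/24")                        # a discovered subnet (example)
    in_use = {"10.10.20.1", "10.10.20.10", "10.10.20.25", "10.10.20.26"}  # addresses seen in use

    free = [str(host) for host in subnet.hosts() if str(host) not in in_use]
    print(f"{len(free)} of {subnet.num_addresses - 2} usable addresses are free")
    print("next available:", free[0])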

 


Manage DHCP and DNS

SolarWinds IPAM overlays your existing Microsoft, Cisco, and Internet Systems Consortium (ISC) DHCP and DNS servers and presents one consistent management interface. Plus, SolarWinds IPAM ties many DHCP, DNS, and IP address management tasks together. Now it’s trivial to find and assign a reserved DHCP IP address to a new server and create the necessary DNS host records. Additionally, you have maximum flexibility to add, remove, replace, or even consolidate DHCP and DNS services with minimal disruption and retraining.

 


Monitoring and Troubleshooting

SolarWinds IPAM actively monitors critical IP resources and events so you don’t have to, and it alerts you only when potential problems occur. SolarWinds IPAM provides a customizable dashboard that summarizes information such as IP conflicts, high-utilization subnets and scopes, mismatched DNS entries, and other DHCP, DNS, and IP configuration changes. It's a great way to view your most significant static and dynamic subnets at a glance. Plus, SolarWinds IPAM helps you troubleshoot and resolve problems quickly.
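The utilization alerting described above boils down to comparing active leases against scope size and flagging anything near exhaustion. A minimal sketch with hypothetical scope data and an assumed threshold:

    # Hypothetical DHCP scope statistics: (scope, usable addresses, active leases)
    scopes = [
        ("10.10.20.0/24", 254, 241),
        ("10.10.30.0/24", 254, 95),
    ]
    HIGH_UTILIZATION_PCT = 85   # assumed alert threshold

    for name, size, leased in scopes:
        utilization = 100 * leased / size
        if utilization >= HIGH_UTILIZATION_PCT:
            print(f"{name}: {utilization:.0f}% of addresses leased; scope is close to exhaustion")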

 


Summary

Not surprisingly, our survey takers reported that a DDI solution is essential because it helps increase mean time between failures (MTBF) and improve mean time to recovery (MTTR). Purpose-built tools like SolarWinds IPAM can help you reduce the time and complexity required to manage DHCP, DNS, and IP addresses.



Want to learn more?


 

Customer service, more than ever, has become increasingly important to all types of businesses. Any IT-enabled business model depends on support—whether it’s field service, managed service, or internal IT support. With the increasing demand for tech support comes an increase in the cost of support as well, in the form of additional staff, more man-hours spent on issues, expenditure on more tools and technology platforms to render support, and so on.

Take for example the incident costs of assisted support in enterprises. According to TSIA Support Services benchmark data, incidents resolved via phone now average $510. Email incidents, with their back-and-forth conversations to gather additional data stretching out resolution times, now average close to $700.

 

In an effort to optimize costs, support organizations are looking keenly at customer self-service options which, in addition to reducing costs and service labor, also empower end-users to handle basic and common support requests and issues on their own. In a social media survey by TSIA, more respondents favored Web self-service than any other channel of assisted support, including community, phone, email, and chat.

  

The question that every support department is focused on is: “What is the most cost-effective means of achieving self-service that is not resource-intensive for the support staff?”

 

KNOWLEDGE BASE, A COST-EFFECTIVE ONLINE SUPPORT CENTER FOR END-USERS

A popular channel for offering Web-based self-service to your customers is a centralized knowledge base, which is essentially a repository of common and recurring support incidents in Q&A form. Every knowledge base article is contributed by the support team. Their knowledge of handling incidents and resolving and troubleshooting problems is captured and documented so that customers can gain online access to these FAQs and tackle basic issues and service requests by themselves.

 

The only investment that goes into creating a rich knowledge base is the knowledge contributed by the support staff, plus some additional content curation for consistency and ease of use for end-users. But the ROI is immense in terms of cost and time savings for the support team.

Knowledge Base.png

BENEFITS OF EMPLOYING A KNOWLEDGE BASE:

  • Reduction in the number of service tickets created
  • End-users educated and empowered to handle common and recurring support requests by themselves
  • Cost and time savings for the support team

 

KNOWLEDGE BASE, BEST SERVED WHEN INTEGRATED WITH THE HELP DESK SYSTEM

The help desk is already the central point for logging and tracking customer service requests and trouble tickets. It only makes sense to attach a knowledge base repository to the help desk, accessible by IT technicians for capturing knowledge articles.

Once captured, these articles can be served to end-users on the help desk Web portal. Articles can be searched by keyword, or can be made to populate dynamically as users fill out the service request form. This makes things easy for your help desk customers: they can take the first step in handling their own needs before the IT admin steps in and spends considerable time on common and recurring requests.

 

MEASURING THE EFFECTIVENESS OF SELF-SERVICE

  • Measure how many people are using the knowledge base Web self-service.
  • Conduct surveys and polls to find out whether users found it useful in terms of content and user experience.
  • Track how many self-service attempts lead back to assisted support; if that number is high, your self-service options are not very effective (a simple way to compute this is sketched below).
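For that last point, the numbers reduce to a simple deflection-rate calculation. A minimal sketch with entirely hypothetical monthly figures:

    # Hypothetical monthly numbers pulled from the KB portal and the help desk.
    kb_sessions = 1200      # self-service sessions that ended at a knowledge base article
    escalated = 180         # of those, sessions where the user still opened a ticket
    assisted_tickets = 950  # tickets handled entirely by the support team

    deflected = kb_sessions - escalated
    deflection_rate = 100 * deflected / kb_sessions
    self_service_share = 100 * deflected / (deflected + assisted_tickets)

    print(f"deflection rate: {deflection_rate:.0f}% of self-service sessions avoided a ticket")
    print(f"self-service share: {self_service_share:.0f}% of all issues resolved without assistance")

If the deflection rate trends down, or escalations trend up, the knowledge base content is probably not answering what users are actually searching for.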

  

BEST PRACTICE: Review knowledge base content periodically and make sure the content is up to date and useful to customers.

 

Help your customers help themselves!

There is a credit card commercial that asks, "What's in your wallet?" I'm going to ask, "What's in your network?" Sure, you might be able to tell me what's in your network right now, but can you still tell me about a device when it's down? Its model and serial number? The modules or line cards installed? Which interfaces are in use and how much bandwidth they use?

 

Maybe you have all that, so let's kick it up a notch. Can you tell me what the configuration of the device was last night? What about last week or last month? Some of these bits of information can be important when troubleshooting or when you have to replace a failed piece of equipment. If you are new at this, you may not realize that some changes can take long periods of time to impact your network. Sometimes they don't actually kick in until a device is rebooted or when a failover takes place. This can lead to misdiagnosing the cause of a failure.


I actually had something like this happen last week. I did a failover to a secondary load balancer so I could install a new license on the primary. While I was working on this, we started getting reports of an encryption certificate problem. It turned out the certificate configuration on the secondary unit hadn't been completed correctly months ago. However, from my immediate perspective, no configuration had changed...


On a related note, are you using centralized logging, or are all your logs sitting on your devices? If you aren't using centralized logging, you are taking away an important troubleshooting tool. Don't turn off local logging (it's really inconvenient when it's not there), but supplement it with centralized logs that you keep longer and that will survive a reboot. Centralized logs also let you see all the events happening in your environment at the same time, which makes it much easier to correlate events when tracking down a root cause.
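For applications and scripts you run yourself, adding a central destination alongside local logging takes only a few lines. Here is a minimal Python sketch; the collector hostname and port are assumptions, and network gear would instead be pointed at the same collector through its own syslog configuration.

    import logging
    import logging.handlers

    logger = logging.getLogger("netops")
    logger.setLevel(logging.INFO)

    # Keep logging locally...
    logger.addHandler(logging.StreamHandler())

    # ...and also ship every event to a central syslog collector (hypothetical address).
    logger.addHandler(logging.handlers.SysLogHandler(address=("logs.example.com", 514)))

    logger.info("config change applied to secondary load balancer")

With every device and script pointed at the same collector, the timeline of "what changed when" lives in one place and survives reboots.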


So I ask, do you know what's in your network? What other ideas and tools do you have for helping know your network?

 

Welcome to my fourth and last installment in the discussion about the expectations of user and device tracking. I would like to take a little time up front to thank you all for taking the time to read my posts, and to give a real big shout-out to those of you who have commented on them. With that said, I want to make this post a little different: I want to review the different themes of the discussions and incorporate some of your comments as I try to present what might be a consensus on those themes.

 

In my first post, I attempted to keep the focus specifically on corporate-owned assets that are distributed to the end users in a company. Michael Stump's opinion is that, “If it's company owned, go ahead and track the hell out of it. I know my work laptop is for work only. But I bring my personal laptop with me to the office, so I can connect to the guest wireless and clearly separate work from non-work things. Plus, here in the public sector, everyone lives in fear of front-page coverage in the Washington Post in the event that a laptop or smartphone or whatever gets lost.” That makes total sense and seems completely reasonable. After all, if it is a company asset, it is not something that you own, and you therefore do not get to call the shots on proper usage. However, have you ever noticed that there is no consistency between different companies? Mikegrocket has clearly pointed out one extreme: “I work for the government, so I have sold my soul to them. Everything is monitored and tracked.” One other point that Mikegrocket made that is really worth mentioning is, “The idea of data loss protection/prevention comes into play as well. I need to know who is attempting to remove data and what that data is. We can't have important information going out the door. I know it happens, can you say Snowden, but I do what I can to prevent it happening on my network.” That is one point that I believe is left out of the discussion until an article in the Washington Post brings it to light. As cahunt pointed out, “that agreement you half read and fully signed when starting should encompass the use of that company laptop - on or off site.”

 

In my next post, I changed the focus from corporate-owned assets to the personal computers and devices that we all incorporate into our professional lives. Here is where the general consensus started to break down, with some people saying, “I dont like the idea of personal devices accessing sensitive corporate information.  To me, that is a good dividing line.  In this way, the corp doesnt need to install anything on my phone, laptop, etc and they have measures in place to keep me out of their data.” I really have to agree and draw the line in the sand that Jim Couch has specified on data access, but can’t that really be sidestepped when people can email whatever they need to their device, so the data ends up who knows where, or in what cloud? Time seems to be an issue that stands out: “My only issue is the time frame, if the device is truly lost 3 days later it would be out of battery. If you are SIM capable, you can pull the SIM and still may have access to some data on the device and without your SIM you lose the WIPE ability since it powered up and did not connect.” Kevin Crouch took a little detour in the conversation to switch the focus from the device to the person: “Being completely honest here, 50% of phones I see have no password probably 40% have a pattern lock that’s 4 nodes long, and the remaining is split between face, fingerprint, password and Pin (often just four digits long)” He went even further on how careless users can be with their credentials: “The worst part is that those people who just spout it out often want the consultant to enter it too! If you say It once I probably won’t remember it. If you make me type it, unless it’s chicken scratch (#@DFks@dsk1&4) I’m going to remember it easily (Wh0llyM0ly1991).” Once some unsavory person has your logon and password, the concept of identity theft can take on a whole new dimension.

 

Now, speaking of users: outside of our corporate oasis, we all become the users that are tracked by websites, stores, and tolls, and of course everyone’s favorite...Big Brother. It seems that the twenty-first century is the century of big data, in both gathering and mining. When it comes to the web, “practically every site on the web does this. Google especially watches everywhere you visit, everything you buy, every forum you post on, and tries to customize their ads to appeal to you on a deeper and deeper level.” How much longer will it be before we all see specialized billboards and signs on the street that present individualized ads as they recognize you approaching? Most of you reading this are the admins who perform some of this tracking for your company (and for those of you who mentioned you work for the government, which agency did you say you worked with again?). Zackm makes a point that is worth mentioning when he says, “I tend to 'assimilate' under the pretense that the admins should be taking a dose of their own medicine.” That seems to be the way forward for things to stay “real” for all of us. As I mentioned in my last post, these are just the trackers we know about. Tcbene summed that up when he said, “Many people have no idea how much data is being collected on them daily.  I just finished reading an article on the data Google maps is collecting on an individual’s whereabouts when using Google maps.  Like free Wi-Fi people generally don't think about what they are giving up when they use the service someone is offering.  With Google maps they're not just helping you find your way to a location, they are keeping the history of everywhere you go anytime the location services are activated.  That aspect must be in the fine print people never read, I know, I didn't ask for that feature.” It really seems that the people and governments doing this tracking hope for, or count on, the idea that what you don’t know can’t hurt you. Before the Snowden leak, we all knew the government had tools and was using them, but were you, like me, really kind of surprised at the scope and depth of those tools and capabilities? I wonder if I liked it better when I did not really know.

 

No matter your thoughts or concerns on this matter, the simple fact is that this is now just a fact of life that we all have to deal with. What makes me worry even more is: who is watching the watchers? Could you imagine an unsavory individual using this technology to further their own goals? How much dirt could be gathered on, say, Congressmen, Senators, Supreme Court Justices, or even the President? We all have dirt, no one is perfect, and the ability to gather data could be used to influence the outcome of things that will undoubtedly affect us all.

 

A recap of the previous month’s notable tech news items, blog posts, white papers, videos, events, etc., for the week ending Thursday, July 31st, 2014.


News

 

Net neutrality supporters: Deep packet inspection is a dangerous weapon

Network access providers should be disallowed from using DPI, and should provide regular reports to demonstrate they're not, suggests yet another group of Internet technology leaders.

 

73 Percent of IT Staff Currently Have Unresolved Network Events

Forty-five percent of IT staff say they monitor network and application performance manually instead of using network monitoring tools.

 

Emulex: study shows network visibility can help avoid the IT blame game

Results of a study of 547 U.S. and European-based network and security operations (NetOps and SecOps) professionals, which found that 45% of IT staffs monitor network and application performance manually, instead of implementing network monitoring tools.

 

VoIP grows quicker, even as improvements may not be immediate

With its noted improvements in the business world, there's no question that VoIP is a technology that continues to grow in popularity. Its adoption is consistent throughout companies worldwide, and as such many are looking for new and better ways to monitor their resulting savings.

 

The BYOD horse is out of the barn: Implementing the right mobile policy for your organization

Striking the proper balance between security, productivity and privacy is the key to establishing a successful mobile device policy.

 

Network Stories

 

Cisco describes its SDN vision: A Q&A with Soni Jiandani

Network World Editor in Chief John Dix caught up with Jiandani to get her take on how SDN plays out.

 

Blogger Exploits

 

Application intelligence: THE driving force in network visibility

Business networks continue to respond to user and business demands, such as access to more data, bring your own device (BYOD), virtualization, and the continued growth of IoT. Historically, much of the traffic that runs through these networks has been known to network administrators, but access to application and user data remains lacking.

 

The benefits of converged network and application performance management

A converged Application Performance Management (APM) and Network Performance Management (NPM) solution gives organizations actionable information to resolve the most challenging performance concerns in minutes.

 

Why Network Monitoring Is Changing

IT needs end-to-end visibility, meaning tool access to any point in the physical and virtual network, and it has to be scalable and easy to use.

 

OpenFlow Supports IPv6 Flows

Software Defined Networking systems are gaining IPv6 capabilities

 

Why Mid-Tier Companies Need to Start Monitoring Their Networks Like Big Companies

All the largest enterprises truly understand that, and most have a network monitoring strategy. On the other hand, there is another group of business owners not monitoring their network and not considering it for the future. Well, they’re in for a rude awakening if they don’t take a cue from those big companies and invest in network monitoring.

 

Do We Need 25 GbE & 50 GbE?

Efforts to bring 25 GbE and 50 GbE to market are underway. Is there a strong case for these non-IEEE solutions? For cloud service providers, there is.

 

Understanding IPv6: Link-Local 'Magic'

Denise Fishburne performs a little IPv6 sleight of hand in the second post in her series on IPv6.

 

Food for Thought

 

A help desk is generally used as a management tool that simplifies ticketing activities for IT teams—allowing IT technicians to automate workflows and save time on manual and repetitive tasks. Acting as a centralized dashboard and management console, a help desk can simplify various ITSM tasks, including IT asset management, change management, and knowledge management. While this is all truly beneficial from a management standpoint, a help desk can also serve as a platform to support troubleshooting of servers and computer assets in your IT infrastructure.

 

What if you could initiate a remote desktop session to connect to your end-user’s computer from the help desk interface?

 

Yes, it is possible. Consider this scenario: an employee using a Windows® computer creates a trouble ticket because his workstation is having a memory issue. And you, being on the IT team, log the ticket in your help desk system. Now you have a two-step procedure. First, you must assign the ticket to the technician who will perform the troubleshooting. Second, the technician has to resolve the issue, either remotely or by visiting the end-user’s desk in person and fixing the computer.

 

Help desk integration with remote support software simplifies this process and allows IT admins to initiate a remote session to the computer directly from the help desk IT asset inventory. This saves a ton of time: you already have the ticket details in the help desk, and now you have a handy utility to connect to the remote computer and address the issue immediately. Of course, you can use remote support software to troubleshoot the computer without involving the help desk at all. But IT teams facing staffing and time constraints, with a lean IT staff wearing multiple hats, can tighten their support process by combining the power of both the help desk and the remote support tool, making remote desktop connectivity just a click away from the help desk console.

 

SolarWinds® introduces Help Desk Essentials, a powerful combo of Web Help Desk® and DameWare® Remote Support software, which allows you to initiate a remote control session from the Web Help Desk asset inventory.

  • Discover computer assets with Web Help Desk
  • Associate computer assets with problem tickets (this helps track the history of service requests for each IT asset)
  • Assign a technician to the ticket with Web Help Desk’s routing automation
  • The technician can open the IT asset inventory in Web Help Desk, click the remote control button next to the asset entry, and commence a remote session via DameWare Remote Support
  • Using DameWare Remote Support, the technician can remotely monitor system performance, view event logs, check network connections, start/stop processes and services, and more

IT Service REquest Fulfilment Process.PNG

Check out this video on Help Desk Essentials – peanut butter and jelly for IT pros.

 

 

Do IT remotely!

In previous weeks, I have talked about running a well-managed network and about monitoring services beyond simple up/down reachability states. Now it's time to talk about coupling alerting with your detailed monitoring.


You may need an alert sent if an interface goes down in the data center, but you almost certainly don't want an alert if an interface goes down on a user's desktop. You don't need (or want) an alert for every event in the network. If you receive alerts for everything, it becomes difficult to find the ones that really matter in the noise. Unnecessary alerts train people to ignore all alerts, since those that represent real issues are (hopefully) few. Remember the story of the boy who cried wolf? Keep your alerts useful.


Useful alerts help you be proactive and leverage your detailed monitoring. Alerts can let you know that a circuit is over some percentage of utilization, that a backup interface has gone down, or that a device is running out of disk space. These help you better manage your network by resolving problems before they become an outage, or at least by allowing you to react more quickly to one. It's always nice to know when something is broken before your users do, especially if they call and you can tell them you are already working on it.
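The proactive checks above are easy to reason about as explicit rules. Here is a minimal sketch, with thresholds that are assumptions rather than recommendations:

    # Assumed thresholds for a handful of proactive checks.
    THRESHOLDS = {"wan_utilization_pct": 80, "disk_free_pct": 10}

    def evaluate(sample):
        """Return the list of alerts a single polling sample should raise."""
        alerts = []
        if sample["wan_utilization_pct"] >= THRESHOLDS["wan_utilization_pct"]:
            alerts.append(f"WAN circuit at {sample['wan_utilization_pct']}% utilization")
        if not sample["backup_interface_up"]:
            alerts.append("backup interface is down (no outage yet, but no redundancy either)")
        if sample["disk_free_pct"] <= THRESHOLDS["disk_free_pct"]:
            alerts.append(f"only {sample['disk_free_pct']}% disk space remaining")
        return alerts

    print(evaluate({"wan_utilization_pct": 86, "backup_interface_up": False, "disk_free_pct": 42}))

Each rule fires on a condition that precedes an outage rather than on the outage itself, which is what keeps the alert volume low and the alerts worth reading.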


What's your philosophy on alerts? What proactive alerts have helped you head off a problem before it became an outage?


If you have not picked up on this yet, my topic of conversation in my last couple of posts has been the expectations of user device tracking. Based on the comments, I believe it is safe to assume that you, the readers, work in I.T. and are responsible in one way or another for the tracking of corporate assets and any device that connects to the corporate infrastructure, with tools like SolarWinds User Device Tracker software. The reason I am pointing this out is to highlight the type of I.T. professionals who make up this audience: the people who help mold and shape the policies and implement the tools needed to enforce them. When you say it that way, it almost sounds like you’re talking about lawmakers in the government, but I digress.

 

From the administrative point of view in the comments, I believe we all seem to have basically the same expectations of what kind of tracking should be in place and why. In my next post, I presented the same concept but focused on the latest trend in I.T.: we have moved on from bring your child to work to bring your own device to work. The comments there were clearly not as unanimous as they were for tracking corporate devices. I was actually hoping this would be the case, as it leads into what I really want to have a conversation about today.

 

Now I get to present this thought to the people who are part of the I.T. tracking administrative teams themselves. Have you ever thought about the tracking that is going on that we don’t know about? For me, the NSA is the first to come to mind. We had all heard stories about their capabilities, and it really woke us up to a nightmare when Snowden spilled the beans on those capabilities. But no, that would be just way too easy and way too obvious. When I am out with the family, either around town or on a trip, there are four words that I know are coming, and they have to make you think about what "free" really is. Those four words are “Hey look, free Wi-Fi!” I guess "free" means the "price" is that we can then be tracked through a store or at an airport, and those are just a couple of examples.

 

How many of you live or work in a city that has automated toll passes? You know, the ones that are attached to the car and automatically pay the tolls when we go through. Notice how the government added all those high-speed toll pass lanes and maybe one or possibly two cash lanes? If you are going to use those roads, it just makes sense in time and sanity to use the automated system, right? Next time you’re riding around town as a passenger, take a good look around and you will find the devices that trigger a signal from the toll pass all over town. Maybe the government just wants to monitor traffic flow, or maybe it is something more.

 

How many of us have membership cards for different stores to get better shopping deals? If you are like me, you might use a couple for your grocery shopping or the pharmacy; the list is endless, and now the store has our shopping habits available to the highest bidder. The latest trend of the twenty-first century is how willing we have been to give up privacy for convenience. I bet we are all guilty in one way or another. As you think of more examples, please post them in the comments.

 

So what do you, the admin, think of becoming the end user? Does your perspective change at all when you become the device that is tracked? That is one question I would really like to read your responses to. What hidden devices and perspectives do you have to share?

 

Cloud technology has transformed the way we conduct business. Since its inception, Cloud technology has systematically dismantled the traditional methods of storing data (tape, hard disks, RAM, USB devices, zip drives, etc.) and replaced them with a more boundless storage environment. Now, the thought of storing proprietary data on a hard drive or local storage device seems “so last year.” The Cloud might be the latest trend in file transfer and storage, but in terms of security, it’s not exactly a vault-like storage receptacle.

 

For example, a phishing scam that targeted Dropbox resulted in a 350K ransomware infection and illegal earnings of nearly $70,000. In a similar incident involving the note-taking and archiving software company Evernote, hackers gained access to confidential information, email addresses, and encrypted passwords of millions of users at the California-based company. Evernote also offers file transfer and storage services.

 

A recent survey conducted by F-Secure indicated that 6 out of 10 consumers were concerned about storing their confidential data with Cloud storage services. The survey also found that, across the varying levels of technology users, the younger, tech-savvy generation was the most wary of Cloud storage. It revealed that 59% of consumers were concerned that a third party could access their data from the Cloud, and 60% felt that Cloud storage providers might even be selling their data to third parties for some quick bucks. In addition, other apprehensions were raised about the quality of the technology used by these Cloud providers. Some recent security-breach incidents lead me to conclude that these concerns might have some merit.

 

Automated file transfer software not only simplifies and speeds up file transfers, it also enhances the security of all file-transfer operations. This is important, as data security is a high priority for all users. Unlike SaaS-based FTP services, self-hosted FTP server solutions do not compromise data security and integrity by exposing your transferred and stored data to the Cloud.

 

A self-hosted FTP solution is a safer option for transferring, storing, and accessing your confidential files and data. The following are some of the benefits of a self-hosted FTP solution:

  • Hosted on your premises, enabling you to maintain the integrity of shared data.
  • Offers security for data both at rest and in motion.
  • Offers internal resource protection (DMZ resident), enabling it to conceal internal IP addresses.
  • Provides granular access control.
  • Secures data transmissions with encryption and authentication features (illustrated in the sketch below).
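As a small illustration of the "encryption in motion" point, here is a minimal sketch of an FTPS upload using Python's standard ftplib. The hostname and credentials are placeholders; the server side would be your self-hosted FTP solution configured for explicit TLS.

    from ftplib import FTP_TLS

    ftps = FTP_TLS("ftp.example.com")        # hypothetical self-hosted server
    ftps.login("transfer_user", "password")  # placeholder credentials
    ftps.prot_p()                            # switch the data channel to TLS as well

    with open("report.csv", "rb") as handle:         # placeholder local file
        ftps.storbinary("STOR report.csv", handle)

    ftps.quit()

Without the prot_p() call only the control channel is encrypted, so it is worth verifying that both channels are protected on whatever server you deploy.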

 

For an organization, Cloud-based storage services may be convenient, but the question is: should you compromise the integrity of your data? Data is precious. You need to ensure that your data is under the care of someone who is serious about its security and safety.

John TSIA.PNG

Let me have the privilege of welcoming to thwack John Ragsdale, VP of Technology Research for TSIA, an industry association for the service divisions of enterprise hardware and software firms. In preparation for an upcoming joint webinar on August 21, at 10am PT, “Managing the Complexities of Technology Support,” we recently had a conversation with John about industry trends he is tracking that may be of interest to the SolarWinds community.

  

Here’s a sneak peek at that conversation:


 

SolarWinds: TSIA has published a lot of research about the rise of technical complexity and the impact it is having on enterprise support organizations. Could you talk about this complexity and some of the related drivers/trends?

John Ragsdale: In our support services benchmark survey, we asked the question: “How complex are the products your company supports?” The range of options include standard complexity, moderately complex, and highly complex. In 2003, less than half of tech firms defined the technology they sell and support as "highly complex." Today, more than two-thirds of companies say their products are "highly complex." This rising complexity has big impacts on support organizations. When I first started as a technical support rep back in the 90s, by the time I finished training and went live on the phone, I knew how to handle more than half of the problems I would encounter. After about 6 months on the job, I'd seen just about every problem customers could potentially have. Today, because products are loaded with features and customization options and run on a myriad of platforms, it takes years to really become an expert on a product. This makes the learning curve extremely steep for “newbies” just out of training. In addition to product complexity, technology environments are contributing to the complexity. Today's hardware and software tools are so tightly integrated and interconnected that it is hard to identify a single failing component. As a result, we see average talk times and resolution times stretching out and first-contact resolution rates trending downward.

 

SW: Each year, you do a survey that tracks technology adoption and spending plans for tools used by service organizations. I know that remote and proactive support technology is one of the categories you track. Could you share some of your data around this technology category?

John: Currently, just over one-third of enterprise technology firms (37%) have remote/proactive support tools in place. However, just over half (51%) have budget for these tools in 2014-2015. Any time you see half or more of tech companies investing heavily in an application category, you know that an industry shift is occurring. In this case, service organizations are looking for opportunities to dramatically improve service levels without hiring more employees, and remote and proactive support tools can help achieve this. I expect to see the adoption numbers rising each year until the technology is as common as CRM, incident management, and knowledge bases. I really can't imagine a technology support organization in 2014 functioning without remote diagnostics and monitoring.

 

SW: TSIA launched a new association discipline on managed services last year. Could you tell us TSIA’s definition of managed services and why you see this as a hot investment area for technology firms?

John: The easiest definition of managed services is "paid to operate." Clearly, IT departments don't have the staff they did a decade ago, but their hardware and software environments are bigger and more complex than ever. When a company is struggling to manage all the equipment and software in their environment, they often look to the vendor of that technology to assume responsibility for maintaining it, customizing it, upgrading it, and supporting it. In other words, the vendor's managed services team becomes an extension of the customer's IT team. TSIA sees this as one of the fastest growth segments for services, and more tech firms are looking to managed services to generate additional services revenue. While many companies are investing in cloud solutions to minimize their IT footprint, the truth is that many cloud solutions are best suited for smaller firms. Large companies need highly sophisticated enterprise tools, which usually means owning the implementation. Managed services allows a company to have a "best of breed," on-premise implementation, but with none of the ownership headaches.

 

SW: Earlier this year you conducted a survey of your managed services members specifically around the technology they use to remotely support and manage customer hardware and software. What were some of your findings from that survey?

John: George Humphrey, head of our managed services research, will be publishing the findings from that survey in time for our October conference. But a sneak preview of the data shows that the majority of managed service operations have made heavy investments in technology to support remote customer equipment, including proactive monitoring of application and network performance, ITIL-compliant help desk tools for incident and problem management, configuration management databases (CMDB), and release and capacity management. This survey was what first brought SolarWinds to my attention, because half of the companies surveyed were using SolarWinds for application and network monitoring.

 

SW: We see great interest from our customers around improving the efficiency of support and ITSM when there is always a growing volume of tickets and a lean support staff. How big of a role do you think technology like SolarWinds plays in efficiency improvements?

John:  I've been involved in the support industry for more than 25 years, and I can tell you that more than any other department in the company, support and help desk operations are experts on "doing more with less." We are often the first hit with budget cuts and downsizing, and unfortunately, many service teams are still being operated as a cost center. We hire smart people, train them well, and our processes are fine-tuned and compliant with industry best practices. In my opinion, it’s up to technology to "take us to the next level" for efficiency and productivity improvements. Over the last decade, tools proven to work within IT help desks are finding larger adoption among external customer support teams, and definitely within managed service operations. Flexible and customizable help desk software for ticketing automation can easily reduce the time to open and manage incidents. IT asset management is critical to know what equipment is where and who is using it. Change management can automate common, repetitive processes—from adding new users to upgrading systems—making sure every process is complete and accurate. Underlying all of this is knowledge management—capturing new information in a searchable repository so no one ever has to "reinvent the wheel" to solve a problem.

 

SW: You have conversations with companies about selecting and implementing remote support technology. Where does the ROI for this investment come from?

John: Remote and proactive monitoring technology has huge potential for lowering support costs, increasing service levels, and ultimately improving customer satisfaction, loyalty, and repurchase. By identifying problems at a customer’s site quickly, support can fix the problem before it impacts end-users. This increases uptime and lowers the cost of supporting customers. In fact, we've had members present case studies at our conferences showing that customers who encounter problems and have them fixed rapidly have higher satisfaction and tend to buy additional products—knowing that you will take care of them no matter what happens. A surprising fact one large hardware company uncovered is that customers who experienced a fast resolution to a problem are actually more satisfied than a customer who never encountered a problem. And, we are seeing more companies leveraging remote support to generate revenue by offering this capability as part of a premiere support package.

 

SW: John, thank you for taking the time to speak with me today!

John: It has been my pleasure. Thanks for having me!


You can follow John Ragsdale on Twitter!

 

I hope everyone tunes in to our live webinar, "Managing the Complexities of Technology Support," on August 21st, at 10am PT. If you aren't able to attend, register anyway. We'll send you a link to watch an OnDemand version of the event, as well as a link to download all the presentation materials.

Network monitoring tracks the state of the network, looking primarily for faults. At the most basic level, we want to know whether devices and interfaces are "up." This is a simple binary reachability test: your device is either reachable or not, either "up" or "down." However, just because a device is reachable does not mean there are no faults in the network. If a circuit is dropping packets, performance may be impacted, which can make the circuit unusable even though it is "up." It's time to stop thinking in terms of reachability and start thinking in terms of availability.


Availability is a service-oriented concept that asks, "Is the service this widget provides available to its users?" Is the service 100% available, or is it degraded in some way? Here are some examples of situations that simple reachability monitoring has difficulty detecting:


  • A circuit is dropping packets somewhere in your WAN provider's network. It is "up," but throughput is reduced.
  • A circuit is congested and latency has shot through the roof. The circuit is still "up." There may not be anything technically wrong with the circuit, but it isn't really usable to the end users.
  • A router is using 100% of its memory. It is processing packets slowly or perhaps it is not able to add new routes. It may still be "up."
  • An Ethernet interface in a port aggregation group is down, or perhaps it is blocked by Spanning Tree. While that individual interface is down, from the packet perspective everything is still "up."


In the first two cases, you will probably hear about it from the end users. In the last two cases, you might not know about them until something else changes in the network that causes a (possibly confusing) outage. And probably a bunch of trouble tickets.
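The difference is easy to see in a probe: a reachability check stops at "did it answer?", while an availability check also asks "how well?". Here is a minimal sketch of the latter, with an assumed target, port, and latency objective:

    import socket
    import statistics
    import time

    def availability_probe(host, port, attempts=5, timeout=2.0, latency_slo_ms=150.0):
        """Measure loss and latency against a TCP service and flag degradation, not just 'down'."""
        latencies, failures = [], 0
        for _ in range(attempts):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    latencies.append((time.monotonic() - start) * 1000)
            except OSError:
                failures += 1
        reachable = bool(latencies)
        avg_ms = statistics.mean(latencies) if latencies else None
        degraded = reachable and (failures > 0 or avg_ms > latency_slo_ms)
        return {"reachable": reachable, "degraded": degraded,
                "loss_pct": 100 * failures / attempts, "avg_ms": avg_ms}

    print(availability_probe("www.example.com", 443))   # hypothetical service to watch

A plain up/down monitor would report only the first field; the other three are where degraded-but-up conditions show themselves.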


Are you thinking in terms of availability or reachability? Is your NMS configured to match your mindset?

What are your expectations or your thoughts when it comes to having a discussion about user device tracking technology? This was the opening question I presented to spark a dialog about the topic of user device tracking. In my first post, I wanted to center the conversation on what the expectations should be for corporate-owned technology. I shared my view that any computer, laptop, phone, or other company device belongs solely to the company, and as such the company has the right to total control of those devices as well as the final say on how they are used. For the sake of this discussion, let's call this the old-school expectation; in today's world, technology and the way we do business have changed completely in the twenty-first century.

 

No longer is the laptop the only device used to access company resources in our day-to-day operations. What is new in the twenty-first century is the concept of Bring Your Own Device (BYOD), also called Bring Your Own Technology (BYOT), Bring Your Own Phone (BYOP), or Bring Your Own PC (BYOPC). This concept refers to the policy of permitting employees to bring personally owned mobile devices (laptops, tablets, and smart phones) to their workplace, and to use those devices to access privileged company information and applications. The term is also used to describe the same practice applied to students using personally owned devices in education settings.

 

The foundation of my argument has been that corporate- or academic-owned assets are the property of the institution, and as the owner of these devices, the institution gets to call the shots. Companies and institutions have no ownership claim over personally owned devices, and that, I believe, changes the dynamics of the conversation. The concept of BYOD did not come about because a company thought it would be a good idea to give its employees this kind of freedom; in reality, it was quite the opposite: companies could not stop employees from using their own devices and needed to figure out some way to handle, control, and track company data in this wild-west, free-for-all device world.

 

For all practical purposes, the technology is already available to track these personal devices using the same tools that are used to track corporate laptops and other devices. Some of the most common methods use certificates to establish the identity of the device when it connects to corporate or academic resources; from there, the MAC address of the device or the username can be used to track its connectivity inside the corporate network. The same rules apply and the technology is there, but is this type of tracking what we should really be focusing on? I believe it should be one part of the process, with an even greater focus on data tracking. That will be the most challenging, but also one of the most important, tasks for companies that adopt BYOD. The company must develop a policy that defines exactly what sensitive company information needs to be protected and which employees should have access to it, and then educate all employees on that policy. However, are education and policy going to be enough? How much corporate control of the personal devices needs to be incorporated into these BYOD policies?
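The MAC-based tracking mentioned above is mechanically simple: normalize whatever format the switch reports and look the address up in a device registry. A minimal sketch with a hypothetical registry:

    import re

    def normalize_mac(mac):
        """Reduce any common MAC notation (colons, dashes, dots) to aa:bb:cc:dd:ee:ff."""
        digits = re.sub(r"[^0-9a-fA-F]", "", mac).lower()
        if len(digits) != 12:
            raise ValueError(f"not a MAC address: {mac!r}")
        return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

    registered = {                         # hypothetical BYOD registry: MAC -> owner
        "3c:22:fb:10:aa:01": "jdoe",
        "a4:5e:60:b2:33:9f": "asmith",
    }

    seen_on_switch = ["3C-22-FB-10-AA-01", "0011.2233.4455"]
    for mac in seen_on_switch:
        owner = registered.get(normalize_mac(mac), "UNREGISTERED")
        print(f"{mac} -> {owner}")

The hard part, as the paragraph above argues, is not this lookup; it is deciding what the policy does with an UNREGISTERED hit.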

 

That idea of corporate control of personal devices is where things can really get out of hand, in my personal opinion. I have seen corporations present policies to their employees that welcome employees bringing their own devices as long as the company can install a company-approved image on the device, giving the corporation complete and total control of it. In my view, that changes the device from a personally owned and operated device into nothing different from any other corporate-owned and maintained asset, except that the company never had to purchase it. Should this be the way of the future, where when you work for a company you are expected to supply your own computers, phones, and tablets, loaded with the company-approved operating systems and applications, and to adhere to a corporate usage policy? Where is the middle ground with BYOD before it turns into supply-your-own corporate device? That is the question I would really like to open for discussion. What are your expectations when it comes to the personal devices that you own but use in both your personal and professional worlds?

 

There are always questions about which is better: Synthetic End-User Monitoring (SeUM) or Real End-User Monitoring (ReUM). Whether you use one of the two or both, depending on your business scenario, the ultimate goal is to improve the end-user experience. You achieve this by monitoring the performance and continuous availability of websites and Web applications. Let's take a closer look at both.

    

Organizations consider synthetic monitoring an invaluable tool, as it helps detect issues in websites and applications at an early stage, allowing you to address and fix them prior to deployment. Further, it provides the ability to drill down into individual Web components and diagnose performance problems affecting websites and Web apps. Synthetic monitoring offers other benefits, such as the following (a minimal scripted example appears after the list):

  • Record any number of transactions and test Web app performance for deviations
  • Easily locate front-end issues in your websites, whether in HTML, CSS, JavaScript, etc.
  • Proactively monitor response times from different locations and compare them against baselines
  • Get notified when a transaction fails or when a Web component has issues
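
As a rough illustration of that kind of scripted check, here is a minimal sketch in Python, assuming the third-party requests library and a hypothetical URL, baseline, and tolerance. A real SeUM tool replays full multi-step recorded transactions from multiple locations rather than issuing a single request.

import time
import requests

# Hypothetical target and baseline; a real synthetic monitor would replay
# a recorded multi-step transaction from several locations.
URL = "https://www.example.com/login"
BASELINE_SECONDS = 1.5   # expected response time for this step
TOLERANCE = 0.5          # allowed deviation before we alert

def run_check(url):
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    return response.status_code, time.monotonic() - start

def main():
    try:
        status, elapsed = run_check(URL)
    except requests.RequestException as exc:
        print(f"ALERT: transaction failed entirely: {exc}")
        return
    if status >= 400:
        print(f"ALERT: step returned HTTP {status}")
    elif elapsed > BASELINE_SECONDS + TOLERANCE:
        print(f"ALERT: response took {elapsed:.2f}s, baseline is {BASELINE_SECONDS}s")
    else:
        print(f"OK: HTTP {status} in {elapsed:.2f}s")

if __name__ == "__main__":
    main()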

   

Another key area where synthetic monitoring is beneficial is websites that host live information, for example game scores, stock prices, ads, videos, and so on. Continuously monitoring these websites helps proactively identify issues related to third-party content. With SeUM, the results can be shown interactively in the form of waterfall charts, transaction health, steps with issues, and so on. Additionally, this method allows you to easily pinpoint the components with performance issues.

          

On the other hand, you have real-time monitoring tools that present a different angle on monitoring end-user experience. When you see the world through the users' eyes, you gain insight into their behavior and can assess the overall user experience in real time. Since real-time monitoring doesn't follow pre-defined steps or measure preset Web transactions, you have access to all the data your real visitors generate. Moreover, you gain visibility into the following (a small data-crunching sketch follows the list):

  • Application usage and performance, tracked down to individual users
  • Location-specific usage and performance
  • The impact of application changes, monitored dynamically as you make them
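
For instance, if your ReUM tool can export the raw page-view measurements it collects, summarizing them per location and user is straightforward; the CSV layout and file name below are purely assumptions for illustration.

import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export from a real-user monitoring tool:
# one row per page view, with the user, their location, and the page load time.
def summarize(path):
    times_by_location = defaultdict(list)
    users_by_location = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times_by_location[row["location"]].append(float(row["load_time_ms"]))
            users_by_location[row["location"]].add(row["username"])
    for location, times in sorted(times_by_location.items()):
        print(f"{location}: {len(users_by_location[location])} users, "
              f"{len(times)} views, avg {mean(times):.0f} ms, "
              f"worst {max(times):.0f} ms")

if __name__ == "__main__":
    summarize("page_views.csv")   # hypothetical export file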

     

In addition, you will always be aware of your website's status and can confirm it's up and running from the various locations your visitors come from, because you're observing real traffic and user interaction. Without traffic, however, real-time monitoring is meaningless: if no visitors arrive, there are no measurements to look at and no performance metrics to guide changes to navigation or to the look and feel of your Web pages. Here's where synthetic monitoring has a slight edge, since you don't need real traffic to measure website performance; it can monitor website performance from any location using pre-defined steps and transactions.

       

While SeUM and ReUM each have their own benefits, it really boils down to what your business model is and how your business is aligned with end-users. IT pros can certainly leverage both within the same environment. However, since the two tools are built for different purposes, you will have to use them independently to monitor and measure user experience.

        

Tell us your stories. How does your IT organization monitor end-user experience today?

 

What are your expectations, or your thoughts, when it comes to having a discussion about user device tracking technology?  Have you really given it much thought, or had a good conversation about it recently?  For as long as I have been working in IT and have been given a company computer or other technology asset to use, I have pretty much operated under the expectation that the computer, laptop, phone, or any other company device belongs solely to the company, and as such the company has the right to total control of those devices as well as the final say on how they are used. Would that be a fair assessment? Isn't some similar verbiage found in most companies' onboarding paperwork?

 

I am pretty sure that most of you reading this post are involved with the administration of user devices, and I am also willing to bet that a good portion of you handle that task using SolarWinds' own User Device Tracker (UDT) software. Before I express some of my thoughts in an attempt to provoke a riveting conversation, I want to be clear that I believe software like this is a must for corporations to manage and protect their technology assets and company infrastructure from unauthorized access by a stolen device or a disgruntled person.

 

But hold on for a second: what about outside the corporate network? Isn't there just as much of a need to maintain, control, and track those corporate devices once they leave the safety and comfort of the company infrastructure? While giving that some thought, one of the first scenarios that flashed into my mind is the type of scenario that has already played out, and received great press coverage, multiple times.  I know you have all heard or read these kinds of stories where someone's device is lost or stolen along the way.  In most cases like this, it is not the loss of the device that matters most, but rather the data that was stored on it: private company documents on future products, company business strategies, or, one of my personal favorites, a database of customer account information, or better yet, private patient medical information.  Security protocols should be in place to encrypt this data, along with multi-factor authentication mechanisms to secure it, but I digress and will leave that topic for another discussion. Let's be honest: no matter how much security is preached, not all companies go to such lengths to protect their devices and data. The ability to have some kind of nuclear option to wipe these devices when needed can be a life or job saver. Can you really put a price on avoiding bad press and a loss of customer confidence?

 

Those are some pretty profound use cases that can easily justify the need for tracking and control of devices both inside and outside the datacenter, but an old saying immediately comes to mind: even the most honest and justified intentions tend to come with unforeseen, unintended consequences.  Case in point: using SolarWinds User Device Tracker as an example, the software has the ability to track down devices not only by IP or MAC address, but also by searching on a user logon account itself.  There are multiple scenarios where the need to search for a specific user can easily be argued, but I present the thought that this function quietly changes the topic of the conversation from user device tracking to simply user tracking. Circling back to a point I made earlier, when you are at the company's place of business and using company resources, this idea of device tracking and monitoring should be fully understood and expected.

 

Now, what about outside the company's place of business? Do you feel that the justification to manage and protect a company's physical resources extends both inside and outside the corporate offices? Is there a line that should be drawn in regard to tracking devices, and for all practical purposes tracking the users or employees, if you will, once they leave the office? Therein lies an interesting question, and as you develop your answer, let's expand the parameters to include not only corporate devices but also non-corporate assets, commonly known as Bring Your Own Device (BYOD). Does that change your answer at all? Hold on to that thought and join me next time to contemplate tracking BYOD.

 

What is network management, and what constitutes a well-managed network? Is it monitoring devices and links to ensure they are "up"? Is it backing up your device configurations? Is it tracking bandwidth utilization? Network management is all this and more. We often seem to confuse network monitoring with network management, but monitoring is really just the start.


Network management is about being proactive. It's about finding problems before the users do. It's being able to see what changes have taken place recently. It's about monitoring and analyzing trends. It's tracking software updates for your devices and deciding if and when they should be installed. It's creating procedures before a change, not during the maintenance window. Oh, and creating procedures to roll back the change if it all goes horribly wrong.
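
As one small, concrete piece of that proactivity, here is a minimal sketch of a nightly configuration backup, assuming the third-party netmiko library and a hypothetical device inventory. Yesterday's copy is what you diff against to see recent changes, and what you fall back to if a change goes horribly wrong.

from datetime import date
from pathlib import Path

from netmiko import ConnectHandler  # third-party library for CLI access to network devices

# Hypothetical inventory; in practice this would come from your management tool.
DEVICES = [
    {"device_type": "cisco_ios", "host": "10.0.0.1",
     "username": "backup", "password": "secret"},
]

def backup_configs(devices, out_dir="config_backups"):
    # One dated folder per run, so day-over-day diffs are easy.
    folder = Path(out_dir) / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    for device in devices:
        conn = ConnectHandler(**device)
        try:
            running_config = conn.send_command("show running-config")
        finally:
            conn.disconnect()
        (folder / f"{device['host']}.cfg").write_text(running_config)
        print(f"Saved config for {device['host']}")

if __name__ == "__main__":
    backup_configs(DEVICES)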


Planning and budgeting come into play as well. I remember being surprised at how much time I spent "pushing paper" when I started as a sysadmin. In order to push those proverbial papers, you need data to analyze, and you should be collecting that data with your network management tools. After all, how can you plan for the future with no information about what has happened before? That's just shooting from the hip and making a wild guess...
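
As a tiny example of turning that collected data into a planning input, here is a sketch that fits a straight-line trend to exported utilization history and estimates when a link will cross a capacity threshold. The CSV layout, file name, and 80% threshold are all assumptions for illustration.

import csv
from datetime import date

def days_until_threshold(path, threshold_pct=80.0):
    """Fit a simple linear trend to utilization samples and estimate
    how many days remain until the threshold is crossed."""
    days, util = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            days.append(date.fromisoformat(row["date"]).toordinal())
            util.append(float(row["avg_utilization_pct"]))
    n = len(days)
    mean_x, mean_y = sum(days) / n, sum(util) / n
    denom = sum((x - mean_x) ** 2 for x in days)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, util)) / denom
    if slope <= 0:
        return None  # utilization is flat or falling; no upgrade driver yet
    intercept = mean_y - slope * mean_x
    crossing_day = (threshold_pct - intercept) / slope
    return int(crossing_day) - date.today().toordinal()

if __name__ == "__main__":
    remaining = days_until_threshold("wan_utilization.csv")  # hypothetical export
    if remaining is None:
        print("Utilization trend is flat or declining")
    else:
        print(f"About {remaining} days until the WAN link averages 80% utilization")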


This is a starting point for thinking about what goes into a well-managed network. What else does a well-managed network need? What challenges are you running into in trying to manage your network well?
