
Geek Speak


We all have them lurking around in our data centers. Some virtual, some physical. Some small, some large. At times we find them consolidated onto one server; other times we see many of them scattered across multiple servers. There’s no getting away from it – I’m talking about the database. Whether it’s relational or somewhat flattened, the database is perhaps one of the oldest and most misunderstood technologies we find inside businesses’ IT infrastructure today. When applications slow down, it’s usually the database at which fingers get pointed – and we as IT professionals need to know how to pinpoint and solve issues inside these mystical structures of data, as missing just a few transactions could potentially result in a lot of lost revenue for our company.


I work for an SMB, where we don’t have teams of specialists or DBAs to look after these things. This normally results in my time and effort focusing on being proactive by automating things such as index rebuilds and database defragmentation. That said, we still experience issues, and when we do – seeing as I have a million other things to take care of – I don’t have the luxury of taking my time when troubleshooting database issues. So, my questions for everyone in my first week as a thwack ambassador are:


  1. Which database application is most used within your environment (SQL Server, Oracle, MySQL, DB2, etc.)?
  2. Do you have a team and/or a person dedicated solely as a DBA, monitoring performance and analyzing databases? Or is this left to the infrastructure teams to take care of?
  3. What applications/tools/scripts do you use to monitor your database performance and overall health?
  4. What types of automation and orchestration do you put in place to proactively tune your databases (things such as re-indexing, re-organizing, defragmentation, etc.)? And how do you know when the right time is to kick these off?
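On question 4, a common SQL Server rule of thumb is to reorganize an index at moderate fragmentation and rebuild it above roughly 30%. Here is a minimal sketch of that decision logic; the function name, thresholds, and page-count cutoff are illustrative assumptions to tune for your own workload, not taken from any particular tool:

```python
def index_action(frag_percent, page_count,
                 reorg_threshold=5.0, rebuild_threshold=30.0):
    """Suggest a maintenance action for an index based on fragmentation.

    Follows the common rule of thumb: reorganize between roughly 5% and
    30% fragmentation, rebuild above that. Small indexes are skipped
    because fragmentation barely affects them.
    """
    if page_count < 1000:          # tiny index: not worth touching
        return "skip"
    if frag_percent >= rebuild_threshold:
        return "rebuild"
    if frag_percent >= reorg_threshold:
        return "reorganize"
    return "skip"
```

In practice you would feed this from fragmentation statistics gathered during a maintenance window and only kick off the rebuilds the logic calls for, rather than rebuilding everything on a schedule.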


Thanks for your replies and I can’t wait to see what answers come in.


Related resources:


Article: Hardware or code? SQL Server Performance Examined — Most database performance issues result not from hardware constraints, but rather from poorly written queries and inefficiently designed indexes. In this article, database experts share their thoughts on the true cause of most database performance issues.


Whitepaper: Stop Throwing Hardware at SQL Server Performance — In this paper, Microsoft MVP Jason Strate and colleagues from Pragmatic Works discuss some ways to identify and improve performance problems without adding new CPUs, memory or storage.


Infographic: 8 Tips for Faster SQL Server Performance — Learn 8 things you can do to speed SQL Server performance without provisioning new hardware.


Michael Stump of #eager0 was recently one of the distinguished delegates we had the pleasure of presenting to at Tech Field Day Extra at VMworld. Following the event, I got to thinking we should get to know him a little better. And what better way to do that than through our IT Blogger Spotlight series?


SW: What got you started in the blogosphere?


MS: I started my blog in January 2013 because I had some downtime between projects, and I wanted to share some things with the Internet. I’d had the name of the blog in my mind for a few months, so one day I just registered the name and ran with it. I wish it was a more exciting story, but it isn't.


SW: Speaking of the name…


MS: So, the virt and storage geeks will recognize the name as a reference to thick provisioning strategies within vSphere. But it’s also a nod to my storied career as an ambitious underachiever.


SW: Oh, come on! From what I've seen, you’re anything but an “ambitious underachiever.” Tell us more about the blog.


MS: Well, as for what I enjoy writing about: anything. I've learned that I’m much more interested in writing than in writing about any particular topic. I initially tagged my blog as “Data Center Virtualization with VMware.” But that was back when I was hell-bent on breaking into the VMware community online. Over time, I realized that I had inadvertently limited the types of posts I could write. So, I dropped the tagline. That’s why you’ll see me post all kinds of stuff now: vSphere vSS options, Volvo s40 repair, various Minecraft-related posts and yes, technology in general.


Far and away, my most popular post is a summary of a VMware event in DC last year. Scott Lowe was talking about NSX. But I really think that a web crawler just gamed the page views on that post. Second on the list is a how-to I wrote last year about installing VMTools on a Fedora 17 VM (not as easy as it sounds!). And third is a post I wrote about moving Exchange 2010 transaction logs, but please don’t tell anyone I’m an Exchange engineer; I’d like to never deal with that application again. Ever.


SW: Got it! From now on, in my book you shall be known as Michael Stump: the anti-Exchange engineer. So, that’s #eager0 in a nutshell. Where else can people follow your musings?


MS: I’m a regular on Thwack, where you can find me as Michael Stump. I’m also on Twitter as @_stump.


SW: Are there any other blogs you follow?


MS: Like most virt people, I read Duncan Epping’s blog and Frank Denneman’s blog frequently. Not just because they’re well respected, but because the writing and overall presentation of information is so clean. Chris Wahl’s blog is great, too, because he’s got a style that is immediately identifiable. I’m a fan of good writing, and of bloggers who convey technical information in a concise, original manner. Finally, my good friend Oliver Gray’s blog has nothing to do with technology, but it’s so satisfying to read, and the photography is so good, that it’s worth the mental break. He’s another writer whose style I admire as much as his content.


SW: What keeps you busy during the work week?

MS: I own an IT consulting business, and I’m currently working at the National Institutes of Health (NIH) in Bethesda, Maryland.


SW: OK. So, how’d you get your start in IT?


MS: Same old story: I had a Radio Shack Tandy as a kid, then an Apple Performa 6300 and then connected to the Internet to join The Well with the rest of the dorks in the early 1990s. I don’t want to know what kind of phone bill my high school had to pay as the result of me spending my lunch breaks on the modem in their library! But professionally, I got my first job in IT when I was a technical writer for a software company in Falls Church, Virginia. The network admin quit to spend more time surfing. So, I moved into his office, called myself the network admin and no one ever kicked me out. Fourteen years later, here I am.


SW: After 14 years in IT, what tools can’t you live without?


MS: I can’t function without Nmap. And give me an Ubuntu VM so I can use grep and vi and I can do pretty much anything. Admittedly, I've been a fan and user of SolarWinds tools my entire career, but it seems a bit silly to gush about that here. Suffice to say that I've left many SolarWinds deployments in the places I've worked. I might even get the chance to build another one at my current site. Stay tuned.


SW: Very nice! And what are the most significant trends you’re seeing in the IT industry?


MS: Hyper-convergence has been simmering for years, and VMware’s EVO:Rail announcement at VMworld this year validated what companies like Nutanix and SimpliVity have been doing for a while now: collapsing technology silos for the purposes of simplifying and improving IT. I even wrote a post for thwack earlier this year about the rise of the hybrid engineer, which complements this shift in infrastructure from discrete resources to hyper-converged systems.


SW: Interesting. So, last question for you: What do you do when you’re not working or blogging?


MS: Lots of video games with my kids, lots of bad piano and guitar playing and lots of Netflix with my wife. Oh, and I often wander around the yard taking photos of bugs: bugsimby.blogspot.com. Yeah, I’m that guy.


SW: Awesome! Well, thanks for taking the time to answer our questions. It’s been fun.


VMWorld 2014 Recap

Posted by Lawrence Garvin Sep 2, 2014
We’ve just returned from this year’s VMWorld conference and it was a busy one! With a dozen staff members and four demo stations we were well prepared to talk to customers and not-yet-customers (yes, there are still a few out there) non-stop, and that’s pretty much what happened.

The Expo Hall

Normally at a trade show there’s an ebb and flow of traffic during the day as most participants are in sessions, and then the breaks between sessions are like a breakfast rush at your local coffee shop. This year was noticeably different, however, as we experienced a non-stop line of visitors to the booth throughout the entire show. This is a Good Thing™. :-)


I’m not sure if there were just that many more attendees and the sessions were full, or attendees just weren’t going to sessions, but we certainly appreciated the interaction with everybody. The official report is that there were 22,000+ attendees, which I’m told is actually a bit lower than 2013.

You’d think at VMWorld the primary interest would be virtualization software, and yet we talked about every product in the catalog, some of them more than Virtualization Manager!



Experts & Espresso

We did something different this year. We hosted a series of breakfast sessions with free coffee. The sessions were livestreamed, live-tweeted, and live-attended too!


You can watch the video recordings of the presentations on YouTube, or just go to http://go.solarwinds.com/vmworld14.


Tech Field Day Extra!

Joel Dolisy (pictured left) and Suaad Sait, EVP Products and Markets, SolarWinds (pictured right), also presented at the Tech Field Day Extra! held in conjunction with VMWorld. They talked about our perspective on the performance of hybrid IT and the importance of integration. You can view that presentation online as well.



As expected, VMWare announced some new products, although the eagerly anticipated vSphere 6.0 was announced only as a forthcoming beta. The big announcement, I guess you could call it, was VMWare EVO, a family of hyper-converged infrastructure services.


  • EVO:Rail – a base level of services designed to ramp up a hundred VMs in 15 minutes. Real products are being delivered by several vendors.
  • EVO:Rack – building on EVO:Rail, it produces an entire cloud environment in a couple of hours. This is still a technical preview, but look for those same vendors to expand into this realm as well.

Also announced was an OpenStack distribution that will include vSphere, VSAN, and NSX… but I’m not sure how “OpenStack” you can call that, since it’s mostly based on proprietary products.

VMWare is also making a big play in the end-user space with Desktop-as-a-Service (DaaS) – my jury is still out as to whether I want my *desktop* to be dependent on my Internet connection! – enterprise mobility management, and content collaboration.

You can view all of the VMWorld sessions online.

Did you attend VMWorld? What were your thoughts and experiences?



Customer service is a key revenue stream for companies that offer end-user support as a business offering. Especially in IT support, when your end-users are not technologically savvy, there is a high likelihood of communication gaps in understanding the user’s issue and determining its cause – which may result in IT teams spending more time identifying and solving issues. Here are some useful tips for IT teams (both MSPs serving clients and internal IT support teams) that will help simplify the support process and enhance communication with end-users.


#1 Be Responsive & Communicate Effectively

This is a very common requirement in customer service. When end-users create trouble tickets, make sure they get a response from the IT team acknowledging receipt of the ticket. Also, institute a process to keep end-users updated about the progress and status of their tickets as you work on them. Your end-users may know you are doing your job, but keeping them updated shows that you care about their problem and are attending to their issue, rather than leaving them wondering what IT is doing and whether their ticket is being processed at all.


#2 Show Patience & Positivity During On-Call Support

When you are handling on-call support, communication can go awry when customers feel their concern is not understood or they are not getting a convincing response. Be patient in addressing customer concerns, and listen to the entire request before drawing your conclusion – don’t cut customers off before they have expressed their problem fully. State your responses positively and share timelines for support action.


#3 Help Your Customers Help Themselves

By providing self-service and self-resolution options to your customers, you will make them feel empowered to resolve some basic and recurring IT issues, such as resetting passwords or unlocking accounts. From the IT team’s angle, this will reduce the common and recurring service requests that customers can now handle themselves. Especially for smaller support teams with a lean workforce, end-user self-service is a cost- and time-saving option.


#4 Organize & Streamline Your Support Process

Help desk tools are an effective way to organize and work with customer service requests. When an end-user creates a trouble ticket (via email or from a service request portal), having a help desk configured to assign and route the ticket to the right technician for the job – based on technician availability, skill, location, department, and any other custom logic – will save you a ton of back-end time in sorting and manually routing tickets. This will help you improve time to resolution and positively influence customer satisfaction.
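Routing on availability, skill, and load can be sketched in a few lines. This is a hypothetical illustration – the field names and the skill/queue model are assumptions, not any particular help desk product’s API:

```python
def route_ticket(ticket, technicians):
    """Pick the available technician whose skills match the ticket's
    category and who has the lightest current queue."""
    candidates = [t for t in technicians
                  if t["available"] and ticket["category"] in t["skills"]]
    if not candidates:
        return None  # no match: fall back to manual triage
    return min(candidates, key=lambda t: t["open_tickets"])["name"]
```

A real help desk would layer location, department, and custom rules on top, but the core is the same filter-then-pick step.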


#5 Invite Customer Feedback

Customer feedback is paramount when it comes to measuring support performance and improving quality of service. IT teams should plan on conducting periodic customer surveys to understand what customers think of the service provided and to help the support team measure their success. Feedback need not always be a full-fledged survey; it can also be built into a help desk framework that allows the customer to select a satisfaction rating when a ticket is closed.


Build a customer-friendly and effective support framework that improves the efficiency of your support process while also boosting customer satisfaction.

VoIP has been widely adopted by enterprises for the cost savings it provides, but it is also one of the most challenging applications for a network administrator to manage. Some enterprises choose to run VoIP on their existing IP infrastructure with no additional investment, bandwidth upgrades, or preferential marking for voice packets. But because VoIP is a delay-sensitive application, the slightest increase in latency, jitter, or packet loss affects the quality of a VoIP call.


The Story:

A medium-sized business with its HQ in Austin, US and a branch office in Chennai, India used VoIP for sales and customer support as well as for communication between offices. IP phones and VoIP gateways were deployed at both Austin and Chennai, and the call manager and the trunk to the PSTN for external calls were in Austin. Austin and Chennai were connected over the WAN, and the voice calls from Chennai used the same path as data.


[Image: network diagram]

The Problem:

Tickets were raised by users in Chennai about VoIP issues such as poor call quality and even call drops when calling Austin and customers around the globe.


The network admin had the NOC team check the health and performance of the network. The network devices in the path of the call were analyzed for health issues, route flaps, etc., with the help of an SNMP-based monitoring tool. After confirming that the network health was fine, the team leveraged a few free Cisco technologies for VoIP troubleshooting.


The Solution:

  1. Analysis with Call Detail Records (CDR) and Cisco VoIP IPSLA
  2. Root cause with Cisco NetFlow
  3. Resolution with Cisco QoS


Analysis with Call Detail Records (CDR) and Cisco VoIP IPSLA

When call drops were first reported, the NOC team quickly set up a tool with which they could analyze both Call Detail Records (CDRs) and Cisco IPSLA operations. The Cisco call manager was configured to export CDR data to the tool and the edge Cisco routers at both locations were added to the tool for IPSLA monitoring. CDR data was analyzed to find details about all failed calls and IPSLA was used to measure MOS, jitter and latency for VoIP traffic between the locations. IPSLA reports were correlated with CDR information to confirm the affected location, subnet and set of users.
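For context on what a MOS figure represents: call quality can be roughly estimated from latency, jitter, and loss using a simplified E-model approximation. The sketch below is a common textbook approximation of that math, not Cisco IPSLA’s exact computation:

```python
def estimate_mos(latency_ms, jitter_ms, loss_percent):
    """Rough MOS estimate (1-5 scale) from one-way latency, jitter,
    and packet loss, via a simplified E-model R-factor approximation."""
    # Jitter hurts roughly twice as much as fixed latency; add ~10 ms
    # for codec and processing delay.
    effective = latency_ms + 2 * jitter_ms + 10
    if effective < 160:
        r = 93.2 - effective / 40
    else:
        r = 93.2 - (effective - 120) / 10
    r -= 2.5 * loss_percent          # each 1% loss costs ~2.5 R points
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
```

A clean LAN-quality path scores well above 4.0, while the kind of loss and jitter seen on a congested WAN link drags the score toward “poor.”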

[Image: failed calls report]

[Image: MOS score report]

Root cause with Cisco NetFlow

IPSLA confirmed high packet loss, jitter, and latency for VoIP conversations originating from Chennai, which cast suspicion on the available WAN bandwidth. The network admin verified the link utilization using SNMP. Though WAN bandwidth was being utilized to the max, it was not to an extent where packets should be dropped or latency should be high.


The second free technology to be used was NetFlow. Most routing and switching devices from major vendors support NetFlow or similar flow formats, such as J-Flow, sFlow, IPFIX, and NetStream. NetFlow was enabled on the WAN interfaces at both Austin and Chennai and set to export every minute to a centralized flow analysis tool that provided real-time bandwidth analysis.


The network admin checked the top applications in use and, contrary to his expectation, did not find VoIP in the top applications list. ToS analysis of the NetFlow data showed that VoIP conversations from India did not have the preferred QoS priority. A configuration change on the router had given backup traffic a higher priority than VoIP traffic, so backup traffic was being delivered while VoIP traffic was being dropped or buffered when the WAN link utilization was high. The admin also found that a few scavenger applications had high priority.
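The ToS analysis described above boils down to grouping flow bytes by DSCP value (the top six bits of the ToS byte). A minimal sketch, with flow records represented as plain dicts for illustration:

```python
from collections import defaultdict

def bytes_by_dscp(flows):
    """Aggregate flow bytes per DSCP value to check which traffic
    actually carries priority marking (EF = 46 is the usual VoIP class)."""
    totals = defaultdict(int)
    for flow in flows:
        # NetFlow exports the ToS byte; the DSCP value is its top six bits.
        totals[flow["tos"] >> 2] += flow["bytes"]
    return dict(totals)
```

If VoIP bytes show up under DSCP 0 instead of EF (46), the marking is wrong somewhere upstream – exactly the situation the admin found.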


[Image: top applications and EF ToS reports]

Resolution with Cisco QoS

With reports from the flow analyzer tool, the network admin identified applications and IP addresses hogging the WAN bandwidth and redesigned the QoS policies to give preferential marking to VoIP and mission-critical applications, putting everything else under “Best Effort”. Bandwidth-hogging applications were either policed or set to be dropped. Traffic analysis with NetFlow confirmed that VoIP now had the required DSCP priority (EF) and that other applications were not hogging the WAN bandwidth. Because Cisco devices support QoS reporting over SNMP, the QoS policies on the edge Cisco devices were monitored to confirm that QoS drops and queuing were as desired.


[Image: EF priority for VoIP and CBQoS drops reports]


Cisco IPSLA and CDR analysis confirmed that VoIP call performance was back to normal: no more VoIP calls had a poor MOS score or were being dropped. We had a smart network admin, and that was the day we were taught to be proactive rather than reactive.


The question I now have is:

Have you been in a similar soup?


Are there alternative methods we could use, and how would you have gone about it?

With the recently launched SolarWinds Network Performance Monitor (NPM), you can now get a view into the application's quality of experience using Deep Packet Inspection (DPI) analysis. Most often, we tend to blame the network when an application issue occurs – whether it’s related to availability or overall performance.


To know whether your application is really the culprit, you will have to monitor essential metrics such as network response time and application response time (time to first byte). These metrics are broken out by application and provide an at-a-glance ability to correlate and identify the source of application issues. If it is indeed an application issue, you can drill down further and look at the health of the server hardware where the application is running. You can monitor the server response time and other metrics to confirm that it is in fact the application that is having issues.
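The split between those two metrics comes from packet timing: the TCP handshake round trip measures the network, and request-to-first-byte measures the application. A rough sketch of that arithmetic (the function and its timestamp inputs, in seconds, are illustrative):

```python
def response_times(syn_t, synack_t, request_t, first_byte_t):
    """Split a transaction into network response time (TCP handshake
    round trip) and application response time (request to first byte).
    A high ART with a normal NRT points at the application, not the network."""
    nrt = synack_t - syn_t
    art = first_byte_t - request_t
    return {"network_ms": nrt * 1000, "application_ms": art * 1000}
```

For example, a 20 ms handshake alongside a 400 ms wait for the first byte of the reply says the packets are moving fine and the server-side application is the bottleneck.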

[Image: SQL Server quality of experience dashboard]

You may be using NPM to monitor the quality of service for a critical application like SQL Server. Network Performance Monitor can give you insight into a few server resource issues, like CPU and memory performance. If all looks good, there may still be an issue with your SQL Server. Here is an example of how you could use Server & Application Monitor (SAM) to drill down further and identify database performance issues.

  1. After discovering your application environment, the node details page will show the applications being monitored. Here you can see that SQL Server is having a problem.

[Image: monitored applications on the node details page]

2. Drilling further into the SQL Server with SAM will show that lock requests/second is higher than it should be.

[Image: lock requests/second metric]

3. Clicking the metric or the component will tell you that the value is high and that the high wait time could be the reason for the poor quality of service. Expert knowledge for this metric in SAM provides remediation guidance on how to fix the problem.

[Image: expert knowledge and remediation guidance]
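A side note on counters like lock requests/second: many SQL Server performance counters are cumulative totals, so a monitor derives the per-second figure from two samples. A minimal sketch of that conversion (not SAM’s internal implementation):

```python
def counter_rate(prev_value, curr_value, interval_s):
    """Convert two samples of a cumulative performance counter into a
    per-second rate. A negative delta means the counter reset (e.g. a
    service restart), so report zero rather than a bogus rate."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    delta = curr_value - prev_value
    return delta / interval_s if delta >= 0 else 0.0
```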

Whether you’re monitoring file transfer apps, web services, social networking apps, messaging, email, database, or other applications using NPM, you can leverage out-of-the-box templates in SAM or templates on thwack to monitor complete application performance. This document has a list of applications that you can monitor using both Server & Application Monitor and the QoE feature of Network Performance Monitor.

It’s a known fact that organizations are turning toward virtualization for various reasons. An IT organization can add several business applications, databases, etc. without adding new hardware to its IT environment, thus saving hundreds of dollars and optimizing existing hardware. Often, development teams rely heavily on taking snapshots of their dev or test environments. In the event of hardware failure, or if difficulties arise with restoring changes made to apps and databases, you can quickly lose this vital information. One of the most crucial values that virtualization offers companies is that it can help save on technology maintenance costs. Imagine having to run several physical servers without a virtual environment: you would end up supporting end-users or customers at a very high cost.


Despite the various short-term and long-term benefits virtualization offers, managing this complex infrastructure is a mammoth task for any IT pro. Most organizations would prefer to rely on a proactive virtualization management tool that offers deep, end-to-end visibility into their virtual and storage environments. SolarWinds® Virtualization Manager – or VMAN, as we fondly call it – gives you unified performance, capacity planning, configuration, VM sprawl control, VDI, and showback management for VMware® and Hyper-V®. In addition, VMAN integrates with SolarWinds products such as Server & Application Monitor and Storage Manager, providing contextual awareness of performance issues across all layers, including applications, database, virtual infrastructure, server hardware, and storage systems.


Just this month, SolarWinds conducted a survey in which 136 VMAN customers participated. The pool of respondents included VirtAdmins, SysAdmins, capacity planners, IT generalists, etc. from North America. The objective of this survey was to find out how our customers are using VMAN to troubleshoot performance challenges in their virtual environment and what their ROI was after deploying VMAN.


[Image: VMAN survey results, August 2014]


After deploying VMAN:

  • 63% of respondents spent an average of only $6,000 per year on software costs to monitor around 250 VMs
  • Respondents decreased downtime from 7-15 hours to less than 3 hours per month
  • Respondents who spent 11-20 hours manually searching for dormant VMs now spend only 1-5 hours per month
  • Respondents who spent around 9 hours to detect VM sprawl every month now only spend a little over 2 hours


What’s also interesting is that a large chunk of our customers say they leverage product integration between VMAN and Server & Application Monitor, Storage Manager, and Network Performance Monitor. 47% of customers go on to say their previous virtualization management tool didn’t offer such integration capabilities for end-to-end visibility (app to storage), and that’s why they made the switch to SolarWinds.


You can view the complete survey findings in the following presentation.


Virtualization Manager Survey: Features, Competitive, and ROI


If you’re an existing VMAN customer and haven’t had a chance to participate in the survey, tell us what value you see in using VMAN. Take this time to also let us know what you feel we could do better in the coming releases of Virtualization Manager.


Faster root cause analysis

Posted by jkuvlesk Aug 26, 2014

If you are using Network Performance Monitor to look at the performance of your Windows servers, you’ll get an indication that there is an issue with a server when you receive an alert that CPU or memory is starting to max out.

[Image: CPU and memory utilization alert]

Now what? Server & Application Monitor is the perfect tool to help you troubleshoot the root cause of server performance issues more quickly. After installation, the node details view will show additional management tools: the Real-Time Process Explorer, the Real-Time Event Log Viewer, the Service Control Manager, and a Reboot button.

[Image: management tools on the node details view]

In launching the Real-Time Process Explorer, you can visualize the processes that are consuming the most resources. Right from this view, you can kill processes, or start monitoring a process to get alerted when it uses too many resources.
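The triage a real-time process view supports can be sketched as a simple filter-and-sort over process samples. The field names and thresholds below are illustrative assumptions, not the Real-Time Process Explorer’s actual logic:

```python
def top_offenders(processes, cpu_limit=80.0, mem_limit_mb=2048):
    """Flag processes exceeding CPU or memory limits, sorted worst-first
    by CPU, so the likely culprit is at the top of the list."""
    hogs = [p for p in processes
            if p["cpu"] > cpu_limit or p["mem_mb"] > mem_limit_mb]
    return sorted(hogs, key=lambda p: p["cpu"], reverse=True)
```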


As a network admin, how many times do you hear that the app is down? Server & Application Monitor has other troubleshooting tools to help you determine the cause of the problem and fix it. Using the Service Control Manager, you can get a quick view of services that have stopped. From this view you can restart a service, or stop it if it is hanging. Many times when system performance changes for the worse, you can dig into the log files to determine whether there was a recent change, a security event, and so on. Again, it’s as easy as launching the Real-Time Event Log Viewer and sorting by log type and severity.

[Image: Service Control Manager view]

Earlier this year we conducted a survey and asked you what it’s like managing DHCP, DNS, and IP addresses. We received over 200 responses from the thwack community. Among the many findings, we observed that on average you’re spending about 40 hours each month using rudimentary tools to manage roughly 2,000 IPs.


Let’s put this into perspective. The largest hospital in the US has beds to accommodate 2,272 patients. Does it seem rational that a hospital should use something as rudimentary as a spreadsheet to track its patients?  Imagine the many details—name, room, physician, illness, etc. Of course not, it’s unthinkable! The chaos resulting from this could be easily avoided. Well, why do we manage thousands of IP addresses and details with something as limited as a spreadsheet? 


Coming back to our survey, our respondents told us they spend most of their time managing DHCP and DNS configurations, troubleshooting IP-related problems, and maintaining documentation of IP addresses. Not surprisingly, they said they wanted these tasks to be less complex. This is where SolarWinds IP Address Manager (IPAM) can help because it’s designed to let you easily manage DHCP, DNS, and IP subnets and addresses by offering three primary functions:


Manage IP Address Blocks

IP address management begins with knowing what address blocks and addresses are available to use. SolarWinds IPAM makes this effortless by automatically discovering your subnets. SolarWinds IPAM will then look within each subnet to identify what IPv4 and IPv6 addresses are in use. It’s all automatic and very accurate. No more spreadsheets to maintain! With an accurate inventory, it’s now trivial to find open addresses or reclaim unused addresses. Say goodbye to IP conflicts or perpetually low address pools.
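Finding open addresses in an inventoried subnet is straightforward once the used addresses are known. A minimal sketch using Python’s standard ipaddress module (an illustration of the idea, not IPAM’s implementation):

```python
import ipaddress

def free_addresses(subnet_cidr, used):
    """List unused host addresses in a subnet: the core of finding open
    addresses or reclaiming unused ones from an accurate inventory."""
    network = ipaddress.ip_network(subnet_cidr)
    used_set = {ipaddress.ip_address(a) for a in used}
    return [str(h) for h in network.hosts() if h not in used_set]
```

For example, a /29 with two addresses in use has four free host addresses left.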


Manage DHCP and DNS

SolarWinds IPAM overlays your existing Microsoft, Cisco, and Internet Systems Consortium (ISC) DHCP and DNS servers and presents you with one consistent management interface. Plus, SolarWinds IPAM integrates many DHCP, DNS and IP address management tasks together. Now it’s trivial to find and assign a new server a reserved DHCP IP address and create the necessary DNS host records. Additionally, you have maximum flexibility to add, remove, replace or even consolidate DHCP and DNS services with minimal disruption and re-training. 


Monitoring and Troubleshooting

SolarWinds IPAM actively monitors critical IP resources and events so you don’t have to, and alerts you only when potential problems occur. SolarWinds IPAM provides a customizable dashboard that summarizes information like IP conflicts, highly utilized subnets and scopes, mismatched DNS entries, and other DHCP, DNS, and IP configuration changes. It’s a great way to view your most significant static and dynamic subnets at a glance. Plus, SolarWinds IPAM helps you troubleshoot and resolve problems quickly.
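A high-utilization check like the one on that dashboard reduces to comparing leased addresses against scope size. A hypothetical sketch (the field names and the 90% threshold are assumptions for illustration):

```python
def busy_scopes(scopes, threshold=0.9):
    """Return the names of DHCP scopes whose utilization meets or
    exceeds the threshold - the kind of check behind a dashboard alert."""
    return [s["name"] for s in scopes
            if s["leased"] / s["size"] >= threshold]
```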



Not surprisingly, our survey takers reported that a DDI solution is essential because it helps increase mean time between failures (MTBF) and improve mean time to recovery (MTTR). Purpose-built tools like SolarWinds IPAM can help you reduce the time and complexity required to manage DHCP, DNS, and IP addresses.

Want to learn more?


Customer service, more than ever, has become increasingly important to all types of businesses. Any IT-enabled business model depends on support – whether it’s field service, managed service, or internal IT support. With the increasing demand for tech support comes an increase in the cost of support, too. This is accounted for in terms of additional staff, more man-hours spent on issues, expenditure on more tools and technology platforms to render support, etc.

Take for example the incident costs of assisted support in enterprises. According to TSIA Support Services benchmark data, incidents resolved via phone now average $510. Email incidents, with their back-and-forth conversations to gather additional data stretching out resolution times, now average close to $700.


In an effort to optimize costs, support organizations are looking keenly at customer self-service options, which, in addition to reducing costs and service labor, also empower end-users to handle basic and common support requests and issues on their own. In a social media survey by TSIA, more respondents favored Web self-service than any other channel of assisted support, including community, phone, email, and chat.


The question that every support department is focused on is: “What is the most cost-effective means of achieving self-service that is not resource-intensive for the support staff?”



A popular channel for offering Web-based self-service to your customers is a centralized knowledge base – essentially a repository of common and recurring support incidents in Q&A form. Every knowledge base article is contributed by the support team. Their knowledge of handling incidents and of resolving and troubleshooting problems is captured and documented so that customers can gain online access to these FAQs and tackle some basic issues and service requests by themselves.


The only investment that goes into creating a rich knowledge base is the knowledge contributed by the support staff, plus some additional content curation for consistency and ease of use for end-users. But the ROI is immense in terms of cost and time savings for the support team.



  • Reduction in number of service tickets created
  • End-users educated and empowered to handle common and recurring support requests by themselves
  • Cost and time savings for the support team



The help desk is already the central point for logging and tracking customer service requests and trouble tickets, so it makes logical sense to attach a knowledge base repository to the help desk, accessible by IT technicians for capturing knowledge articles.

Once captured, these articles can be served to end-users on the help desk Web portal. They can be searched by keyword, or made to populate dynamically as users fill out the service request form. This makes it easy for your help desk customers to take the first step in handling their own needs before the IT admin steps in and spends considerable time on common and recurring requests.



  • Measure how many people are using the knowledge base Web self-service.
  • Conduct surveys and polls to find out whether they found it useful in terms of content and user experience.
  • If many self-service attempts by customers lead back to assisted support, your self-service options are not very effective.
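If your help desk already tracks these counts, the first and third measurements reduce to simple arithmetic. Here is a minimal sketch; the function name and all numbers are hypothetical, for illustration only:

```python
def self_service_metrics(kb_sessions, resolved_unassisted, escalated_to_assisted):
    """Summarize knowledge base effectiveness for a reporting period.

    kb_sessions: total self-service sessions
    resolved_unassisted: sessions resolved without a ticket (deflected)
    escalated_to_assisted: sessions that still ended in an assisted ticket
    """
    if not kb_sessions:
        return 0.0, 0.0
    deflection_rate = resolved_unassisted / kb_sessions
    escalation_rate = escalated_to_assisted / kb_sessions
    return deflection_rate, escalation_rate

# Hypothetical month: 800 KB sessions, 600 resolved on their own, 120 escalated
deflection, escalation = self_service_metrics(800, 600, 120)
print(f"Deflection: {deflection:.0%}, Escalation: {escalation:.0%}")
# Deflection: 75%, Escalation: 15%
```

A high escalation rate is the signal called out above: users are trying self-service but still ending up in assisted support, so the content needs work.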


BEST PRACTICE: Review knowledge base content periodically and make sure the content is up to date and useful to customers.


Help your customers help themselves!

There is a credit card commercial that asks, "What's in your wallet?" I'm going to ask, "What's in your network?" Sure, you might be able to tell me what's in your network right now, but can you still tell me about a device when it's down? Its model and serial number? The modules or line cards installed? Which interfaces are in use and how much bandwidth they use?


Maybe you have all that, so let's kick it up a notch. Can you tell me what the configuration of the device was last night? What about last week or last month? Some of these bits of information can be important when troubleshooting or when you have to replace a failed piece of equipment. If you are new at this, you may not realize that some changes can take long periods of time to impact your network. Sometimes they don't actually kick in until a device is rebooted or when a failover takes place. This can lead to misdiagnosing the cause of a failure.
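If you keep nightly configuration snapshots, finding what changed is a simple text diff. Here is a minimal sketch using Python's standard `difflib`; the two configs are inline strings for illustration, where in practice they would come from your nightly backup tool (RANCID, Oxidized, or an NCM product):

```python
import difflib

# Hypothetical snapshots: last night's saved config vs. today's running config.
last_night = """\
interface GigabitEthernet0/1
 description uplink
 ip address 10.0.0.1 255.255.255.0
"""
running = """\
interface GigabitEthernet0/1
 description uplink
 ip address 10.0.0.2 255.255.255.0
"""

# unified_diff emits only the changed lines with -/+ markers and context.
diff = list(difflib.unified_diff(
    last_night.splitlines(), running.splitlines(),
    fromfile="last_night", tofile="running", lineterm=""))
for line in diff:
    print(line)
```

Run nightly against every device, this kind of diff gives you the "what was the config last week?" answer, and flags the dormant changes that only bite after a reboot or failover.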

I actually had something like this happen last week. I did a failover to a secondary load balancer so I could install a new license on the primary. While I was working on this, we started getting reports of encryption certificate errors. It turned out the certificate configuration on the secondary unit hadn't been completed correctly months earlier. From my immediate perspective, though, no configuration had changed...

On a related note, are you using centralized logging, or are all your logs sitting on your devices? If you aren't using centralized logging, you are taking away an important troubleshooting tool. Don't turn off local logging (it's really inconvenient when it's not there), but supplement it with centralized logs that you keep longer and that will survive a reboot. Centralized logs also let you see all the events happening across your environment at the same time, which makes it much easier to correlate events when tracking down a root cause.
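The "local plus central" pattern applies to your own scripts and tools as well as network gear. As a rough sketch, Python's standard logging module can attach two handlers to one logger; the loopback address below stands in for your central syslog server, and the logger name and message are made up:

```python
import logging
import logging.handlers

log = logging.getLogger("netops")
log.setLevel(logging.INFO)

# Keep local logging (here, the console; a file handler works the same way)...
log.addHandler(logging.StreamHandler())

# ...and supplement it with a central syslog server. 127.0.0.1 is a
# placeholder for your log host; UDP 514 is the traditional syslog port.
central = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
log.addHandler(central)

# One call, two destinations: local for convenience, central for retention.
log.info("interface Gi0/1 down on switch-core-1")
```

Network devices do the same thing with a `logging host` style command pointed at the collector, while still logging to their local buffer.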

So I ask, do you know what's in your network? What other ideas and tools do you have for helping know your network?


Welcome to my fourth and last installment in the discussion about the expectations of user and device tracking. Before diving in, I would like to thank you all for taking the time to read these posts, and give a big shout-out to those of you who commented on them. With that said, I want to make this post a little different: I want to review the themes of the discussion and incorporate some of your comments as I present what might be a consensus on each theme.


In my first post, I kept the focus specifically on corporate-owned assets that are distributed to the end users of a company. Michael Stump's opinion is that, “If it's company owned, go ahead and track the hell out of it. I know my work laptop is for work only. But I bring my personal laptop with me to the office, so I can connect to the guest wireless and clearly separate work from non-work things. Plus, here in the public sector, everyone lives in fear of front-page coverage in the Washington Post in the event that a laptop or smartphone or whatever gets lost.” That makes total sense and seems completely reasonable: if it is a company asset, it is not something you own, and you therefore do not get to call the shots on proper usage. However, have you ever noticed that there is no consistency between companies? Mikegrocket clearly pointed out one extreme: “I work for the government, so I have sold my soul to them. Everything is monitored and tracked.” Another point Mikegrocket made is really worth mentioning: “The idea of data loss protection/prevention comes into play as well. I need to know who is attempting to remove data and what that data is. We can't have important information going out the door. I know it happens, can you say Snowden, but I do what I can to prevent it happening on my network.” That is one point that I believe is left out of the discussion until an article in the Washington Post brings it to light. As cahunt pointed out, “that agreement you half read and fully signed when starting should encompass the use of that company laptop - on or off site."


In my next post, I changed the focus from corporate-owned assets to the personal computers and devices that we all incorporate into our professional lives. Here is where the general consensus started to break down. Jim Couch wrote, “I dont like the idea of personal devices accessing sensitive corporate information.  To me, that is a good dividing line.  In this way, the corp doesnt need to install anything on my phone, laptop, etc and they have measures in place to keep me out of their data.” I really have to agree with him and the line in the sand he draws on data access, but can’t that be sidestepped when people simply email what they need to their device, so the data ends up who knows where, in who knows what cloud? Time seems to be an issue that stands out: “My only issue is the time frame, if the device is truly lost 3 days later it would be out of battery. If you are SIM capable, you can pull the SIM and still may have access to some data on the device and without your SIM you lose the WIPE ability since it powered up and did not connect.”  Kevin Crouch took a little detour in the conversation, switching the focus from the device to the person: “Being completely honest here, 50% of phones I see have no password probably 40% have a pattern lock that’s 4 nodes long, and the remaining is split between face, fingerprint, password and Pin (often just four digits long)”  He went even further on how careless users can be with their credentials: “The worst part is that those people who just spout it out often want the consultant to enter it too! If you say It once I probably won’t remember it. If you make me type it, unless it’s chicken scratch (#@DFks@dsk1&4) I’m going to remember it easily (Wh0llyM0ly1991).”  Once some unsavory person has your logon and password, identity theft can take on a whole new dimension.


Now, moving outside our corporate oasis, we all become users who are tracked by websites, stores, tolls, and of course everyone’s favorite... Big Brother. The twenty-first century appears to be the century of big data, in both gathering and mining. When it comes to the web, “practically every site on the web does this. Google especially watches everywhere you visit, everything you buy, every forum you post on, and tries to customize their ads to appeal to you on a deeper and deeper level” How much longer will it be before billboards and signs on the street present individualized ads as they recognize you approaching? Most of you reading this are the admins who perform some of this tracking for your company (and for those of you who mentioned you work for the government, which agency did you say you worked for again?). Zackm makes a point worth mentioning: “I tend to 'assimilate' under the pretense that the admins should be taking a dose of their own medicine.” That seems to be the way forward for things to stay “real” for all of us. As I mentioned in my last post, these are just the trackers we know about. Tcbene summed that up when he said, “Many people have no idea how much data is being collected on them daily.  I just finished reading an article on the data Google maps is collecting on an individual’s whereabouts when using Google maps.  Like free Wi-Fi people generally don't think about what they are giving up when they use the service someone is offering.  With Google maps they're not just helping you find your way to a location, they are keeping the history of everywhere you go anytime the location services are activated.
That aspect must be in the fine print people never read, I know, I didn't ask for that feature.” The people and governments that perform this tracking seem to count on the idea that what you don’t know can’t hurt you. Before the Snowden leak, we all knew the government had tools and was using them, but were you, like me, really kind of surprised at the scope and depth of those tools and capabilities? I wonder if I liked it better when I did not really know.


No matter your thoughts or concerns on this matter, it is now simply a fact of life that we all have to deal with. What worries me even more is: who is watching the watchers? Can you imagine an unsavory individual using this technology to further their own goals? How much dirt could be gathered on, say, Congressmen, Senators, Supreme Court Justices, or even the President? We all have dirt; no one is perfect, and the ability to gather it could be used to influence outcomes that will undoubtedly affect us all.


A recap of the previous month’s notable tech news items, blog posts, white papers, video, events, etc. - For the week ending Thursday, July 31st, 2014.



Net neutrality supporters: Deep packet inspection is a dangerous weapon

Network access providers should be disallowed from using DPI, and should provide regular reports to demonstrate they're not, suggests yet another group of Internet technology leaders.


73 Percent of IT Staff Currently Have Unresolved Network Events

Forty-five percent of IT staff say they monitor network and application performance manually instead of using network monitoring tools.


Emulex: study shows network visibility can help avoid the IT blame game

Results of a study of 547 U.S. and European-based network and security operations (NetOps and SecOps) professionals, which found that 45% of IT staffs monitor network and application performance manually, instead of implementing network monitoring tools.


VoIP grows quicker, even as improvements may not be immediate

With its noted improvements in the business world, there's no question that VoIP is a technology that continues to grow in popularity. Its adoption is consistent throughout companies worldwide, and many are looking for new and better ways to monitor their resulting savings.


The BYOD horse is out of the barn: Implementing the right mobile policy for your organization

Striking the proper balance between security, productivity and privacy is the key to establishing a successful mobile device policy.


Network Stories


Cisco describes its SDN vision: A Q&A with Soni Jiandani

Network World Editor in Chief John Dix caught up with Jiandani to get her take on how SDN plays out.


Blogger Exploits


Application intelligence: THE driving force in network visibility

Business networks continue to respond to user and business demands such as access to more data, bring your own device (BYOD), virtualization, and the continued growth of IoT. Historically, much of the traffic that runs through these networks has been known to network administrators, but access to application and user data remains lacking.


The benefits of converged network and application performance management

A converged Application Performance Management (APM) and Network Performance Management (NPM) solution gives organizations actionable information to resolve the most challenging performance concerns in minutes.


Why Network Monitoring Is Changing

IT needs end-to-end visibility, meaning tool access to any point in the physical and virtual network, and it has to be scalable and easy to use.


OpenFlow Supports IPv6 Flows

Software Defined Networking systems are gaining IPv6 capabilities


Why Mid-Tier Companies Need to Start Monitoring Their Networks Like Big Companies

All the largest enterprises truly understand that, and most have a network monitoring strategy. On the other hand, there is another group of business owners not monitoring their network and not considering it for the future. Well, they’re in for a rude awakening if they don’t take a cue from those big companies and invest in network monitoring.


Do We Need 25 GbE & 50 GbE?

Efforts to bring 25 GbE and 50 GbE to market are underway. Is there a strong case for these non-IEEE solutions? For cloud service providers, there is.


Understanding IPv6: Link-Local 'Magic'

Denise Fishburne performs a little IPv6 sleight of hand in the second post in her series on IPv6.


Food for Thought


A help desk is generally used as a management tool that simplifies ticketing activities for IT teams, allowing IT technicians to automate workflows and save time on manual and repetitive tasks. Acting as a centralized dashboard and management console, a help desk can simplify various ITSM tasks, including IT asset management, change management, and knowledge management. While this is all truly beneficial from a management standpoint, a help desk can also serve as a platform to support troubleshooting of servers and computer assets in your IT infrastructure.


What if you could initiate a remote desktop session to your end-user’s computer directly from the help desk interface?


Yes, it is possible. Consider this scenario: an employee using a Windows® computer creates a trouble ticket because his workstation has a memory issue. You, on the IT team, log the ticket in your help desk system. Now you have a two-step procedure: first, assign the ticket to the technician who will perform the troubleshooting; second, the technician resolves the issue, either remotely or by visiting the end-user’s desk in person to fix the computer.


Help desk integration with remote support software simplifies this process and allows IT admins to initiate a remote session directly from the help desk IT asset inventory. This saves a ton of time: you already have the ticket details in the help desk, and now you have a handy utility to connect to the remote computer and address the issue immediately. Of course, you can use remote support software to troubleshoot the computer without involving the help desk at all. But IT teams facing staffing and time constraints, with a lean staff wearing multiple hats, can tighten their support process by combining the power of the help desk and the remote support tool, making remote desktop connectivity just a click away from the help desk console.


SolarWinds® introduces Help Desk Essentials, a powerful combo of Web Help Desk® and DameWare® Remote Support software that allows you to initiate a remote control session from the Web Help Desk asset inventory.

  • Discover computer assets with Web Help Desk
  • Associate computer assets to problem tickets (This will help to track the history of service requests for each IT asset.)
  • Assign a technician to the ticket with Web Help Desk’s routing automation
  • The technician can open the IT asset inventory in Web Help Desk, click the remote control button next to the asset entry, and start a remote session via DameWare Remote Support
  • Using DameWare Remote Support, the technician can remotely monitor system performance, view event logs, check network connections, start/stop processes and services, and more.


Check out this video on Help Desk Essentials – peanut butter and jelly for IT pros.



Do IT remotely!

In previous weeks, I have talked about running a well-managed network and about monitoring services beyond simple up/down reachability. Now it's time to talk about coupling alerting with that detailed monitoring.

You may need to have an alert sent if an interface goes down in the data center, but you almost certainly don't want an alert if an interface goes down for a user's desktop. You don't need (or want) an alert for every event in the network. If you receive alerts for everything, it becomes difficult to find the ones that really matter in the noise. Unnecessary alerts train people to ignore all alerts, since those that represent real issues are (hopefully) few. Remember the story of the boy who cried wolf? Keep your alerts useful.

Useful alerts leverage your detailed monitoring to make you proactive: they can let you know that a circuit is over some percentage of utilization, a backup interface has gone down, or a device is running out of disk space. That lets you resolve problems before they become outages, or at least react to an outage more quickly. It's always nice to know something is broken before your users do, especially if they call and you can tell them you're already working on it.
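The utilization example above boils down to a threshold check that stays silent below the line. Here is a minimal sketch; the interface names, utilization numbers, and threshold are made up, and a real poller would pull these values via SNMP or your monitoring tool's API:

```python
UTILIZATION_THRESHOLD = 80  # alert only when a circuit exceeds 80% utilization

# Hypothetical poll results: interface name -> percent utilization
interfaces = {
    "wan-circuit-1": 91,
    "wan-circuit-2": 42,
    "dc-uplink-1": 85,
}

def check_utilization(stats, threshold=UTILIZATION_THRESHOLD):
    """Return alerts only for links over the threshold, keeping the noise down."""
    return [f"ALERT: {name} at {pct}% utilization (threshold {threshold}%)"
            for name, pct in stats.items() if pct > threshold]

for alert in check_utilization(interfaces):
    print(alert)
```

The design point is the filter itself: everything is measured, but only the exceptions generate an alert, which is what keeps people from learning to ignore the alert stream.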

What's your philosophy on alerts? What proactive alerts have helped you head off a problem before it became an outage?
