
Geek Speak


A help desk is generally used as a management tool that simplifies ticketing activities for IT teams—allowing IT technicians to automate workflows and save time on manual, repetitive tasks. Acting as a centralized dashboard and management console, a help desk can simplify various ITSM tasks, including IT asset management, change management, and knowledge management. While this is all truly beneficial from a management standpoint, a help desk can also serve as a platform to support troubleshooting of servers and computer assets in your IT infrastructure.


What if you could initiate a remote desktop session to connect to your end user's computer right from the help desk interface?


Yes, it is possible. Consider this scenario: an employee using a Windows® computer creates a trouble ticket because their workstation is experiencing a memory issue. And you, being on the IT team, log the ticket in your help desk system. Now you have a two-step procedure. First, you must assign the ticket to the technician who will perform the troubleshooting. Second, the technician has to resolve the issue, either remotely or by visiting the end user's desk in person and fixing the computer.


Help desk integration with remote support software simplifies this process and allows IT admins to initiate a remote session with the computer directly from the help desk IT asset inventory. This saves a ton of time, as you already have the ticket details in the help desk, and now you have a handy utility to connect to the remote computer and address the issue immediately. Of course, you can use remote support software to troubleshoot the computer without involving the help desk at all. But IT teams facing staffing and time constraints, with a lean IT staff wearing multiple hats, can tighten their support process by combining the power of both the help desk and the remote support tool, making remote desktop connectivity just a click away from the help desk console.


SolarWinds® introduces Help Desk Essentials, a powerful combo of Web Help Desk® and DameWare® Remote Support software, which allows you to initiate a remote control session from the Web Help Desk asset inventory.

  • Discover computer assets with Web Help Desk
  • Associate computer assets with problem tickets (this helps track the history of service requests for each IT asset)
  • Assign a technician to the ticket with Web Help Desk’s routing automation
  • The technician can open the IT asset inventory in Web Help Desk, click the remote control button next to the asset entry, and commence a remote session via DameWare Remote Support
  • Using DameWare Remote Support, the technician can remotely monitor system performance, view event logs, check network connections, start/stop processes and services, and more


Check out this video on Help Desk Essentials – peanut butter and jelly for IT pros.



Do IT remotely!

In previous weeks, I have talked about running a well managed network and about monitoring services beyond simple up/down reachability states. Now it's time to talk about coupling alerting with your detailed monitoring.

You may need to have an alert sent if an interface goes down in the data center, but you almost certainly don't want an alert if an interface goes down for a user's desktop. You don't need (or want) an alert for every event in the network. If you receive alerts for everything, it becomes difficult to find the ones that really matter in the noise. Unnecessary alerts train people to ignore all alerts, since those that represent real issues are (hopefully) few. Remember the story of the boy who cried wolf? Keep your alerts useful.

Useful alerts help you to be proactive and leverage your detailed monitoring. Alerts can help you be proactive by letting you know that a circuit is over some percentage of utilization, a backup interface has gone down, or a device is running out of disk space. These help you to better manage your network by being proactive and resolving problems before they become an outage, or at least allowing you to react more quickly to an outage. It's always nice to know when something is broken before your users, especially if they call and you can tell them you are already working on it.
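The proactive, threshold-based alerting described above can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical metric names and threshold values rather than any specific NMS configuration:

```python
# A minimal sketch of threshold-based proactive alerting. Metric names
# and thresholds are hypothetical, not tied to any particular NMS.

THRESHOLDS = {
    "circuit_utilization_pct": 80,  # alert when a circuit passes 80% utilization
    "disk_used_pct": 90,            # alert when a device is running out of disk
}

def evaluate_alerts(metrics):
    """Return only the alerts that matter, instead of one per event."""
    alerts = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value >= limit:
            alerts.append(f"{name} at {value}% (threshold {limit}%)")
    return alerts

# Only the over-threshold circuit fires; the healthy disk stays quiet.
alerts = evaluate_alerts({"circuit_utilization_pct": 85, "disk_used_pct": 40})
```

The point is selectivity: by filtering events against thresholds you care about, the alerts that reach a human are the ones worth acting on.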

What's your philosophy on alerts? What proactive alerts have helped you head off a problem before it became an outage?

  If you have not picked up on this yet, my last couple of posts have been about the expectations of user device tracking.  Based on the comments, I believe it is safe to assume that you, the readers, work in IT and are responsible in one way or another for tracking corporate assets and any device that connects to the corporate infrastructure, with tools like SolarWinds User Device Tracker software. The reason I point this out is to highlight the type of IT professionals who make up this audience: the people who help mold and shape the policies and implement the tools needed to enforce them. When you say it that way, it almost sounds like you're talking about lawmakers in the government, but I digress.


From the administrative point of view in the comments, I believe we all have basically the same expectations of what kind of tracking should be in place and why.  In my next post, I presented the same concept by focusing on the latest trend in IT: we have moved on from bring your child to work to bring your own device to work.  The comments were clearly not in unison, unlike the near-consensus on tracking corporate devices. I was actually hoping this would be the case, as it leads into what I really want to have a conversation about today.


Now I get to present this thought to the people who are part of the IT tracking administrative teams themselves.  Have you ever thought about the tracking that goes on that we don't know about? For me, the NSA is the first to come to mind.  We had all heard stories about their capabilities, and it was a rude awakening when Snowden spilled the beans on what they could really do. But no, that would be just way too easy and way too obvious. When I am out with the family, either around town or on a trip, there are four words I know are coming that make you think about what "free" really means. Those four words are “Hey look, free Wi-Fi!" I guess "free" means the "price" is that we can then be tracked through a store or at an airport, to name just a couple of examples.


How many of you live or work in a city with automated toll passes?  You know, the ones that attach to the car and automatically pay the tolls as we drive through.  Notice how the government added all those high-speed toll pass lanes and maybe one or possibly two cash lanes? If you are going to use those roads, it just makes sense, for time and sanity, to use the automated system, right? Next time you’re riding around town as the passenger, take a good look around and you will find these readers that trigger a signal from the toll pass device all over town. Maybe the government just wants to monitor traffic flow, or maybe it is something more.


How many of us have membership cards for different stores to get better shopping deals? If you are like me, you might use a couple for grocery shopping or the pharmacy; the list is endless, and now the store has our shopping habits available to the highest bidder.  One defining trend of the twenty-first century is how willing we have been to give up privacy for convenience.  I bet we are all guilty in one way or another. As you think of more examples, please post them in the comments.


So what do you, the admin, think of becoming the end user?  Does your perspective change at all when you become the one being tracked? That is one question I would really like to read your responses to. What hidden tracking devices and perspectives do you have to share?


Cloud technology has transformed the way we conduct business. Since its inception, Cloud technology has systematically dismantled the traditional methods of storing data (tape, hard disks, RAM, USB devices, zip drives, etc.) and replaced them with a more boundless storage environment. Now, the thought of storing proprietary data on a hard drive or local storage device seems “so last year.” The Cloud might be the latest trend in file transfer and storage, but in terms of security, it’s not exactly a vault-like storage receptacle.


For example, a phishing scam targeting Dropbox users resulted in roughly 350,000 ransomware infections and illegal earnings of nearly $70,000. In a similar incident involving Evernote, the California-based note-taking and archiving software company, hackers gained access to confidential information, email addresses, and encrypted passwords of millions of users. Evernote also offers file transfer and storage services.


A recent survey conducted by F-Secure indicated that 6 out of 10 consumers were concerned about storing their confidential data with Cloud storage services. The survey also found that, across varying levels of technology users, the younger, tech-savvy generation was the most wary of Cloud storage. It revealed that 59% of consumers were concerned that a third party could access their data from the Cloud, and 60% felt that Cloud storage providers might even be selling their data to third parties for some quick bucks. In addition, other apprehensions were raised about the quality of the technology used by these Cloud providers. Some recent security-breach incidents lead me to conclude that these concerns have merit.


Automated file transfer software not only simplifies and speeds up file transfers, it also enhances the security of all file transfer operations. This is important, as data security is a high priority for all users. Unlike SaaS-based FTP services, self-hosted FTP server solutions do not compromise data security and integrity by exposing your transferred and stored data to the Cloud.


A self-hosted FTP solution is a safer option for transferring, storing, and accessing your confidential files and data. The following are some of the benefits of a self-hosted FTP solution:

  • Hosted on your premises and enables you to maintain the integrity of shared data.
  • Offers security for data that’s both at rest and in motion.
  • Offers internal resource protection (DMZ resident) enabling it to conceal internal IP addresses.
  • Provides granular access control.
  • Secures data transmissions with encryption and authentication features.
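To illustrate the last point, here is a minimal sketch of an encrypted transfer to a self-hosted FTP server using Python's standard `ftplib` over explicit TLS (FTPS). The host, credentials, and file names below are placeholders for illustration, not a real server or product API:

```python
# A minimal FTPS sketch using Python's standard library. Host, user,
# password, and file names are hypothetical placeholders.
from ftplib import FTP_TLS

def upload_securely(host, user, password, local_path, remote_name):
    """Encrypt both the control and data channels before transferring."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)  # authenticate over the TLS control channel
    ftps.prot_p()               # switch the data channel to encrypted mode
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

The key detail is `prot_p()`: logging in over TLS protects credentials, but the data channel stays in cleartext until it is explicitly switched to protected mode.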


For an organization, Cloud-based storage services may be convenient, but the question is whether you should compromise the integrity of your data. Data is precious. You need to ensure that your data is under the care of someone who is serious about its security and safety.


Let me have the privilege of welcoming to thwack John Ragsdale, VP of Technology Research for TSIA, an industry association for the service divisions of enterprise hardware and software firms. In preparation for an upcoming joint webinar on August 21, at 10am PT, “Managing the Complexities of Technology Support,” we recently had a conversation with John about industry trends he is tracking that may be of interest to the SolarWinds community.


Here’s a sneak peek at that conversation:


SolarWinds: TSIA has published a lot of research about the rise of technical complexity and the impact it is having on enterprise support organizations. Could you talk about this complexity and some of the related drivers/trends?

John Ragsdale: In our support services benchmark survey, we asked the question: “How complex are the products your company supports?” The range of options includes standard complexity, moderately complex, and highly complex. In 2003, less than half of tech firms defined the technology they sell and support as "highly complex." Today, more than two-thirds of companies say their products are "highly complex." This rising complexity has big impacts on support organizations. When I first started as a technical support rep back in the 90s, by the time I finished training and went live on the phone, I knew how to handle more than half of the problems I would encounter. After about 6 months on the job, I'd seen just about every problem customers could potentially have. Today, because products are loaded with features and customization options and run on a myriad of platforms, it takes years to really become an expert on a product. This makes the learning curve extremely steep for “newbies” just out of training. In addition to product complexity, technology environments are contributing to the complexity. Today's hardware and software tools are so tightly integrated and interconnected that it is hard to identify a single failing component. As a result, we see average talk times and resolution times stretching out and first-contact resolution rates trending downward.


SW: Each year, you do a survey that tracks technology adoption and spending plans for tools used by service organizations. I know that remote and proactive support technology is one of the categories you track. Could you share some of your data around this technology category?

John: Currently, just over one-third of enterprise technology firms (37%) have remote/proactive support tools in place. However, just over half (51%) have budget for these tools in 2014-2015. Any time you see half or more of tech companies investing heavily in an application category, you know that an industry shift is occurring. In this case, service organizations are looking for opportunities to dramatically improve service levels without hiring more employees, and remote and proactive support tools can help achieve this. I expect to see the adoption numbers rising each year until the technology is as common as CRM, incident management, and knowledge bases. I really can't imagine a technology support organization in 2014 functioning without remote diagnostics and monitoring.


SW: TSIA launched a new association discipline on managed services last year. Could you tell us TSIA’s definition of managed services and why you see this as a hot investment area for technology firms?

John: The easiest definition of managed services is "paid to operate." Clearly, IT departments don't have the staff they did a decade ago, but their hardware and software environments are bigger and more complex than ever. When a company is struggling to manage all the equipment and software in their environment, they often look to the vendor of that technology to assume responsibility for maintaining it, customizing it, upgrading it, and supporting it. In other words, the vendor's managed services team becomes an extension of the customer's IT team. TSIA sees this as one of the fastest growth segments for services, and more tech firms are looking to managed services to generate additional services revenue. While many companies are investing in cloud solutions to minimize their IT footprint, the truth is that many cloud solutions are best suited for smaller firms. Large companies need highly sophisticated enterprise tools, which usually means owning the implementation. Managed services allows a company to have a "best of breed," on-premise implementation, but with none of the ownership headaches.


SW: Earlier this year you conducted a survey of your managed services members specifically around the technology they use to remotely support and manage customer hardware and software. What were some of your findings from that survey?

John: George Humphrey, head of our managed services research, will be publishing the findings from that survey in time for our October conference. But a sneak preview of the data shows that the majority of managed service operations have made heavy investments in technology to support remote customer equipment, including proactive monitoring of application and network performance, ITIL-compliant help desk tools for incident and problem management, configuration management databases (CMDB), and release and capacity management. This survey was what first brought SolarWinds to my attention, because half of the companies surveyed were using SolarWinds for application and network monitoring.


SW: We see great interest from our customers around improving the efficiency of support and ITSM when there is always a growing volume of tickets and a lean support staff. How big of a role do you think technology like SolarWinds plays in efficiency improvements?

John:  I've been involved in the support industry for more than 25 years, and I can tell you that more than any other department in the company, support and help desk operations are experts on "doing more with less." We are often the first hit with budget cuts and downsizing, and unfortunately, many service teams are still being operated as a cost center. We hire smart people, train them well, and our processes are fine-tuned and compliant with industry best practices. In my opinion, it’s up to technology to "take us to the next level" for efficiency and productivity improvements. Over the last decade, tools proven to work within IT help desks are finding larger adoption among external customer support teams, and definitely within managed service operations. Flexible and customizable help desk software for ticketing automation can easily reduce the time to open and manage incidents. IT asset management is critical to know what equipment is where and who is using it. Change management can automate common, repetitive processes—from adding new users to upgrading systems—making sure every process is complete and accurate. Underlying all of this is knowledge management—capturing new information in a searchable repository so no one ever has to "reinvent the wheel" to solve a problem.


SW: You have conversations with companies about selecting and implementing remote support technology. Where does the ROI for this investment come from?

John: Remote and proactive monitoring technology has huge potential for lowering support costs, increasing service levels, and ultimately improving customer satisfaction, loyalty, and repurchase. By identifying problems at a customer’s site quickly, support can fix the problem before it impacts end-users. This increases uptime and lowers the cost of supporting customers. In fact, we've had members present case studies at our conferences showing that customers who encounter problems and have them fixed rapidly have higher satisfaction and tend to buy additional products—knowing that you will take care of them no matter what happens. A surprising fact one large hardware company uncovered is that customers who experienced a fast resolution to a problem are actually more satisfied than a customer who never encountered a problem. And, we are seeing more companies leveraging remote support to generate revenue by offering this capability as part of a premiere support package.


SW: John, thank you for taking the time to speak with me today!

John: It has been my pleasure. Thanks for having me!

You can follow John Ragsdale on Twitter!


I hope everyone tunes in to our live webinar, "Managing the Complexities of Technology Support," on August 21st, at 10am PT. If you aren't able to attend, register anyway. We'll send you a link to watch an OnDemand version of the event, as well as a link to download all the presentation materials.

Network monitoring tracks the state of the network and is primarily looking for faults. At the most basic level, we want to know if devices and interfaces are "up." This is a simple binary reachability test. Your device is either reachable or not, it's either "up" or "down." However, just because a device is reachable does not mean there are no faults in the network. If a circuit is dropping packets, performance may be impacted and can make the circuit unusable even though it is "up." Time to stop thinking in terms of reachability and start thinking in terms of availability.

Availability is a service-oriented concept that asks, "is the service this widget provides available to its users?" Is the service 100% available or is it degraded in some way? Here are some examples of situations that simple reachability monitoring has difficulty detecting:

  • A circuit is dropping packets somewhere in your WAN provider's network. It is "up," but throughput is reduced.
  • A circuit is congested and latency has shot through the roof. The circuit is still "up." There may not be anything technically wrong with the circuit, but it isn't really usable to the end users.
  • A router is using 100% of its memory. It is processing packets slowly or perhaps it is not able to add new routes. It may still be "up."
  • An Ethernet interface in a port aggregation group is down, or perhaps it was blocked by Spanning Tree. While one interface is down, from the packet-forwarding perspective everything is still "up."

In the first two cases, you will probably hear about it from the end users. In the last two cases, you might not know about them until something else changes in the network that causes a (possibly confusing) outage. And probably a bunch of trouble tickets.
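The shift from reachability to availability can be sketched as a simple classification: a probe that only checks "up/down" misses the degraded middle state. This is an illustrative sketch, assuming hypothetical probe results (latency in milliseconds, packet loss in percent) and arbitrary thresholds rather than a real poller:

```python
# A minimal sketch contrasting reachability with availability.
# Probe values and thresholds are hypothetical.

def classify(reachable, latency_ms, loss_pct,
             max_latency_ms=150, max_loss_pct=2.0):
    """Reachability alone says 'up'; availability also asks 'usable?'."""
    if not reachable:
        return "down"
    if latency_ms > max_latency_ms or loss_pct > max_loss_pct:
        return "degraded"  # still 'up', but not really available to users
    return "available"

# A congested circuit: reachable, but latency has shot through the roof.
status = classify(reachable=True, latency_ms=480, loss_pct=0.5)
```

A reachability-only check would report this circuit as "up"; an availability check flags it as degraded before the trouble tickets arrive.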

Are you thinking in terms of availability or reachability? Is your NMS configured to match your mindset?


Expectations of BYOD Tracking

Posted by sbeaver Aug 11, 2014

What are your expectations or your thoughts when it comes to having a discussion about user device tracking technology?  This was the opening question that I presented to spark a dialog about the topic of user device tracking. In my first post, I wanted to center the conversation on what the expectations should be with corporate-owned technology.  I shared my thoughts that any computer, laptop, phone, or other company device belongs solely to the company, and as such the company has the right to total control of those devices as well as the final say on how they are used.  For the sake of this discussion, let's refer to these as old-school expectations; technology and the way we do business have completely changed in the twenty-first century.


No longer is the laptop the only device used to access company resources in our day-to-day operations. What is new in the twenty-first century is the concept of Bring Your Own Device (BYOD), also called Bring Your Own Technology (BYOT), Bring Your Own Phone (BYOP), or Bring Your Own PC (BYOPC). This concept refers to the policy of permitting employees to bring personally owned mobile devices (laptops, tablets, and smartphones) to their workplace, and to use those devices to access privileged company information and applications. The term is also used to describe the same practice applied to students using personally owned devices in educational settings.


The foundation of my argument has been that corporate- or academically-owned assets are the property of those institutions, and as the owner of these devices, they get to call the shots.  Companies and institutions have no ownership claim on personally owned devices, and that, I believe, changes the dynamics of the conversation.  The concept of BYOD did not come about because a company thought it would be a good idea to give employees this kind of freedom; in reality, it was quite the opposite. They could not stop employees from using their own devices and needed to figure out some way to handle, control, and track company data in this wild-west, free-for-all device world.


For all practical purposes, the technology is already available to track these personal devices using the same tools used to track corporate laptops and other devices.  Some of the most common methods use certificates to establish the identity of the device when it connects to corporate or academic resources; from there, the MAC address of the device or the username can be used to track connectivity inside the corporate network. The same rules apply and the technology is there, but is this type of tracking what we should really be focusing on? I believe it should be one part of the process, with an even greater focus on data tracking. This will be the most challenging, but also one of the most important, tasks for companies that utilize BYOD. Companies must develop a policy that defines exactly what sensitive company information needs to be protected and which employees should have access to it, and then educate all employees on that policy. However, will education and policy be enough? How much corporate control of personal devices needs to be incorporated into these BYOD policies?
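The identity-to-connectivity mapping described above, where a MAC address or username is tied to where a device connects, amounts to a simple lookup over connection records. This is an illustrative sketch over hypothetical records, not the data model of any real tracking product:

```python
# A minimal sketch of device-to-user lookups over connection records.
# The records, MACs, usernames, and port names are hypothetical.

records = [
    {"mac": "00:1a:2b:3c:4d:5e", "user": "alice", "port": "Gi0/12"},
    {"mac": "00:1a:2b:3c:4d:5f", "user": "bob",   "port": "Gi0/13"},
]

def find_by_mac(mac):
    """Locate where a specific device last connected."""
    return [r for r in records if r["mac"] == mac.lower()]

def find_by_user(user):
    """Locate every port a specific person's devices have used."""
    return [r for r in records if r["user"] == user]
```

Note how little separates the two functions: once records carry a username, device tracking and user tracking are the same query, which is exactly the tension raised above.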


That idea of corporate control of personal devices is where things can really get out of hand, in my personal opinion.  I have seen corporations present policies to their employees welcoming employees' own devices as long as the company can install a company-approved image that gives the corporation complete and total control of the device.  In my view, that morphs the device from a personally owned and operated device into nothing different from any other corporate-owned and maintained asset, one the company gets to use without the financial responsibility of purchasing it.  Should this be the way of the future, where working for a company means you are expected to supply your own computers, phones, and/or tablets, loaded with company-approved operating systems and applications, and adhering to corporate usage policy? Where is the middle ground before BYOD turns into supply your own corporate device? That is the question I would really like to open for discussion. What are your expectations when it comes to the personal devices that you own but utilize in both your personal and professional worlds?


There are always questions about what’s better: Synthetic End-User Monitoring (SeUM) or Real End-User Monitoring (ReUM)? Whether you use one of the two or both, depending on your business scenario, the ultimate goal is to improve end-user experience. You achieve this by monitoring the performance and continuous availability of websites and Web applications. Let’s take a closer look at both.


Organizations consider synthetic monitoring an invaluable tool, as it helps detect issues in websites and applications at an early stage, allowing you to address and fix them prior to deployment. Further, it provides the ability to drill down into individual Web components and diagnose performance problems affecting websites and Web apps. Synthetic monitoring offers other benefits, such as:

  • Record any number of transactions and test Web app performance for deviations
  • Easily locate front-end issues in your websites, whether it’s HTML, CSS, JavaScript, etc.
  • Proactively monitor response time for different locations and compare against baselines
  • Get notified when a transaction fails or when a Web component has issues


Another key area where synthetic monitoring is beneficial is websites that host live information, for example, game scores, stock trades, ads, and videos. Continuously monitoring these websites helps proactively identify issues related to third-party content. With SeUM, information can be shown interactively in the form of waterfall charts, transaction health, steps with issues, and so on. Additionally, this method allows you to easily pinpoint components with performance issues.
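The core of a synthetic check is replaying a scripted transaction step on a schedule, timing it, and comparing the result against a baseline. Here is a minimal sketch using only the Python standard library; the URL, baseline, and tolerance are assumptions for illustration, not part of any monitoring product:

```python
# A minimal synthetic-check sketch: run one scripted step, time it,
# and compare against a baseline. URL and thresholds are hypothetical.
import time
import urllib.request

def check_step(url, baseline_s, tolerance=0.5):
    """Flag the step if it fails or deviates too far from its baseline."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        return "failed"  # DNS, connection, or timeout error
    elapsed = time.monotonic() - start
    if not ok:
        return "failed"
    # "slow" means the step deviated beyond tolerance from its baseline.
    return "slow" if elapsed > baseline_s * (1 + tolerance) else "healthy"
```

A real SeUM tool would chain many such steps into a transaction and run them from multiple locations, but the fail/slow/healthy classification against a baseline is the same idea.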


On the other hand, real-time monitoring tools present a different angle on monitoring end-user experience. When you see the world through your users’ eyes, you gain insight into their behavior and can assess overall user experience in real time. Since real-time monitoring doesn’t follow pre-defined steps or measure preset Web transactions, you have access to all the data you need. Moreover, you gain visibility into:

  • Application usage and performance, tracked by individual user
  • Location-specific usage and performance
  • The impact of application changes, monitored dynamically


In addition, you will always be aware of your website’s status and can confirm it’s up and running from various locations, because you can observe real traffic and monitor user interaction. Without traffic, however, real-time monitoring is meaningless: with no visitors, there are no performance metrics to guide adjustments to navigation or to the look and feel of Web pages. Here’s where synthetic monitoring has a slight edge, since you don’t need real traffic to measure website performance. Synthetic monitoring can measure website performance from any location using pre-defined steps and transactions.


While SeUM and ReUM each have their own benefits, it really boils down to your business model and how your business is aligned with end-users. IT pros can certainly leverage both within the same environment. However, since the two approaches are built for completely different uses, you will have to use them independently to monitor and measure user experience.


Tell us your stories. How does your IT organization monitor end-user experience today?


What are your expectations or your thoughts when it comes to having a discussion about user device tracking technology?  Have you really given it much thought, or had a good conversation about it recently?  For as long as I have worked in IT and been given a company computer or other technology asset to use, I have pretty much accepted the expectation that the computer, laptop, phone, or any other company device belongs solely to the company, and as such the company has the right to total control of those devices as well as the final say on how they are used. Would that be a fair assessment? Don’t you see similar verbiage in most companies’ onboarding paperwork?


I am pretty sure that most of you reading this post are involved with the administration of user devices, and I am also willing to bet that a good portion of you handle that task using SolarWinds’ own User Device Tracker (UDT) software. Before I express some of my thoughts in an attempt to provoke a riveting conversation, I want to be clear that I believe software like this is a must for corporations to manage and protect their technology assets and company infrastructure from unauthorized access by a stolen device or a disgruntled person.


But hold on for a second: what about outside the corporate network? Isn’t there just as much need to maintain, control, and track those corporate devices once they leave the safety and comfort of the company infrastructure? While giving that some thought, one of the first scenarios that flashed into my mind is the kind that has already played out, with great press coverage, multiple times.  I know you have all heard or read these stories where someone loses a device or has it stolen along the way.  In most cases like this, it is not the loss of the device that matters most, but rather the data stored on it: private company documents on future products, company business strategies, or, one of my personal favorites, a database of customer account information, or better yet, private patient medical information.  Security protocols should be in place to encrypt this data, along with multiple authentication mechanisms to secure it, but I digress and will leave that topic for another discussion. Let’s be honest: no matter how much security is preached, not all companies go to such lengths to protect their devices and data. The ability to have some kind of nuclear option to wipe these devices when needed can be a life (or job) saver. Can you really put a price on avoiding bad press and a loss in customer confidence?


Those are some pretty compelling use case scenarios that can easily justify the need to track and control devices both in and out of the datacenter, but an old saying immediately comes to mind: even the most honest and justified intentions tend to carry unforeseen, unintended consequences. Case in point, using SolarWinds User Device Tracker as an example: this software can not only track down devices by IP or MAC address, but can also search on a user logon account itself. There are multiple scenarios where the need to search for a specific user can easily be argued, but I present the thought that this function easily shifts the topic of this conversation from user device tracking to simply user tracking. Circling back to a point I made earlier: when at the company's place of business and utilizing company resources, this idea of device tracking and monitoring should be fully understood and expected.


Now, what about outside of the company's place of business? Do you feel the justification to manage and protect the company's physical resources extends beyond the corporate offices? Is there a line that should be drawn in regards to tracking devices, and in all practical terms, tracking the users, or employees if you will, once they leave the office? That is an interesting question, and as you develop your answer, let's expand the parameters to include not only corporate devices but also non-corporate assets, commonly known as Bring Your Own Device (BYOD). Does that change your answer at all? Hold on to that thought and join me next time to contemplate tracking BYOD.


What is network management, and what constitutes a well-managed network? Is it monitoring devices and links to ensure they are "up"? Is it backing up your device configurations? Is it tracking bandwidth utilization? Network management is all this and more. We often confuse network monitoring with network management, but monitoring is really just the start.

Network management is about being proactive. It's about finding problems before the users do. It's being able to see what changes have taken place recently. It's about monitoring and analyzing trends. It's tracking software updates for your devices and deciding if and when they should be installed. It's creating procedures before a change, not during the maintenance window. Oh, and creating procedures to roll back the change if it all goes horribly wrong.

Planning and budgeting come into play. I remember being surprised at how much time I spent "pushing paper" when I started as a sysadmin. In order to push these proverbial papers, you need data to analyze, and you should be collecting that data with your network management tools. After all, how can you plan for the future with no information about what has happened before? That's just shooting from the hip and making a wild guess...

This is a starting point for thinking about what goes into a well-managed network. What else does a well-managed network need? What challenges are you running into in trying to manage your network well?

This month, we’ve shined our IT Blogger Spotlight on Larry Smith, who runs the Everything Should Be Virtual blog and tweets as @mrlesmithjr. As usual, we’ve asked some deeply philosophical questions about everything from the nature of truth to the meaning of life. OK, maybe not, but we still had fun chatting with Larry. Check it out below!


SW: Let’s mix things up a bit this month and first talk about you before we get to Everything Should Be Virtual. Who is Larry Smith?


LS: Well, I’m currently a senior virtualization engineer for a major antivirus company, but prior to that I worked for a major retail company that specialized in children’s clothing. Overall, I’ve been in IT 19 plus years, though. I’ve done everything from network administration to systems engineering. And when I’m not working or blogging—which is extremely rare, it seems—I most enjoy spending time with my family.


SW: Wow, 19 years in IT! That’s some staying power. How did it all begin?


LS: I started programming when I was about 12 years old on a TRS-80 back in the early 1980s. I always knew I wanted to be in the computer field because it came very naturally to me. However, I decided after attending college for a while that programming was not for me! So, I started getting more involved with networking, servers and storage. And then I got into x86 virtualization in the late 1990s and early 2000s.


SW: With such an illustrious career, surely you’ve come to hold some tools of the trade in higher esteem than others. Any favorites?


LS: Really, my favorite tools are anything automation-related. I enjoy writing scripts to automate repeatable tasks, so I am learning PowerCLI, along with anything Linux-based as far as shell scripting goes. I also really enjoy finding new open source projects that I feel I can leverage in the virtualization arena.


SW: OK, switching gears now, tell me about Everything Should Be Virtual. Judging by the name, I’m guessing it kind of sometimes touches ever-so-lightly on virtualization.


LS: You guessed it. Everythingshouldbevirtual.com is, of course, focused on why everything should be virtual! This could be about an actual hypervisor, storage, networking or any type of application stack that can leverage a virtual infrastructure. I enjoy learning and then writing about new virtualization technologies, but also I really enjoy metrics. So, I spend a lot of time writing on performance data and logging data. And again, with the main focus in all this being around virtualization. I spend a great deal of time using Linux—Windows, too—but what I find is that it is extremely difficult to find a good Linux post that is complete from beginning to end. So, my goal when writing about Linux is to provide a good article from beginning to end, but also to create shell scripts that others can utilize to get a solution up and running with very minimal effort. I do this because I want something that is repeatable and consistent while also understanding that others may not necessarily want to go through some of the pain points on getting a Linux solution up and running.


SW: How long has it been around now?


LS: I started it in 2012, so a couple of years now. I got started with blogging as a way to keep notes and brain-dump day-to-day activities that I encountered, especially if they revolved around something that was out of the norm. The more I blogged, the more I realized how beneficial some of the posts were to others as well. This, of course, inspired me to write even more. I’ve always had a passion for learning at least one new technology per week, and the blog allows me to share with others what I’m learning in hopes of helping someone else.


SW: Any specific posts stick out as ones that proved most helpful or popular?


LS: Yeah, some of my most popular posts are around metrics and logging—Cacti, Graylog2 and the ELK (Elasticsearch Logstash Kibana) stack. While these are typically the most popular, there are probably as many hypervisor-based articles that are really popular as well. I think this shows the value you can provide to the community as a blogger.


SW: As per the norm, let’s finish things off with your perspective on the most significant trend or trends in IT that have been on your mind lately.


LS: One of the major trends that’s still fairly new and will be a real game changer is software defined networking (SDN). I have the luxury right now of learning VMware NSX in a production environment from the ground up, so I am extremely excited about this development. This area is really going to set the stage for so much more to come in the future. Obviously, another area that I have enjoyed watching take shape is storage. The idea of getting away from expensive shared SAN arrays makes a lot of sense in so many ways. Being able to scale compute and storage as your requirements change is huge. Instead of just rolling in an expensive SAN array and then having to pay very expensive scaling costs in the future, you can scale in smaller chunks at a more reasonable cost, which also provides more compute resources. Here’s a link I wrote up a few months back explaining a bit more about using VSAs or VSAN.


Announcing NPM 11

Posted by brad.hale Jul 30, 2014

Stop the finger pointing with NEW Deep Packet Inspection & Analysis

We know you've been there.  The network is always the first to be blamed when application performance is poor.  Then the finger pointing begins.  It's the network!  No, it's the application!  Now you can tell whether application performance problems are the result of the network or the application with Network Performance Monitor version 11's deep packet inspection and analysis.


NPM's new Quality of Experience dashboard aggregates packet traffic captured by network or server packet analysis sensors and presents the information in an easy-to-read view.

These statistics make it easy to tell at a glance not only if you have a user experience issue, but whether the issue is a problem with the network or the application.  In addition to response time statistics, we are capturing aggregate volume metrics, and have the ability to classify application traffic on your network by risk-level and relation to typical business functions.


[Screenshot: Quality of Experience dashboard]


Learn more

For customers under active maintenance, NPM 11 may be found in your SolarWinds Customer Portal.

You can learn more about NPM 11 and deep packet inspection and analysis here.

Sign up for SolarWinds lab, where the topic of discussion will be the new QoE functionality: July 30th, 1PM CT - LAB #16: Deep Packet Inspection Comes To NPM

Register for our upcoming Quality of Experience Monitoring with Deep Packet Inspection and Analysis from SolarWinds webcast:  Thursday, August 7, 2014 11:00 AM - 12:00 PM CDT

Organizations implement or plan to implement virtualization for benefits such as cost savings, improved server utilization, and reduced hardware. Many of these benefits can be undermined by the additional management, configuration, monitoring, and reporting complexity introduced by the new layer of abstraction.


Given the ease of creating new VMs and how dynamic most virtual environments are, they can be very difficult to manage manually using the raw information provided by the hypervisor. As a result, one simple way to increase the return on investment is to adopt an automated virtualization management tool such as SolarWinds Virtualization Manager. Such a tool aids in configuring, monitoring, and managing virtual servers. Moreover, it automates data collection and report generation (CPU usage, storage, datastores, memory usage, network consumption, etc.), reducing the time and effort invested in manual monitoring and administration.


Time and money are valuable to every organization, and a virtualization management tool saves both. Among the ways it does so:

  • Fewer admins needed to manage VMs
  • Faster problem resolution, reducing downtime
  • Foreseeing upcoming issues and assisting in proactive troubleshooting
  • Avoiding VM sprawl and improving resource utilization
  • Easier, faster reporting


Now, let’s take a look at some return on investment figures with examples that explain how a virtualization management tool reduces both time and money spent.

Consider an organization with 500 VMs running on about 50 hosts with 100 sockets (i.e., 2 sockets per host). At an average of only 5 VMs per socket (it can easily be higher), that yields about 500 VMs to manage. Below we look at some of the key factors driving the cost and efficiency of the system, and the potential cost implications of not having a management system.


Uptime:

Adopting a virtualization management tool definitely increases uptime, as the tool aids in faster problem resolution. Virtualization management tools can also help foresee or predict the most pressing issues and, in turn, increase the uptime of VM servers.

According to this EMA whitepaper, the average uptime for a virtual environment is 99.5%. Utilizing a virtualization management tool can increase uptime to 99.95%, with best-in-class operations achieving 99.999%.
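Those uptime percentages map directly to hours of downtime. A quick back-of-the-envelope check (a sketch, assuming an 8,760-hour year):

```python
# Convert an uptime percentage into hours of downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(uptime_pct):
    """Hours of downtime implied by a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.5, 99.95, 99.999):
    print(f"{pct}% uptime -> {annual_downtime_hours(pct):.1f} hours of downtime/year")
```

This reproduces the figures used below: 99.5% uptime means 43.8 hours of downtime a year, while 99.95% means only 4.4.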

VM sprawl:

VM sprawl occurs when VMs are created and left without proper management. This situation causes bottlenecks in system resources and can lead to performance problems as well as the economic impact of wasted resources. Keeping track of IOPS, network throughput, memory, and CPU of the VM helps find VM sprawl. A virtualization management tool helps identify idle or zombie VMs and can eliminate resource waste in the organization.

An article by Denise Dubie, using data from an Embotics study, estimates that an organization with 150 VMs has between $50,000 and $150,000 locked up in redundant VMs. This includes costs from infrastructure, management systems, server software, and administration resources and expenses.

Admin headcount:

A VM admin must configure, monitor, and manage VM servers. To get the best VM performance, admins should configure each VM with only the resources (processors, memory, disks, network adapters, etc.) required by the user. A virtualization management tool helps monitor the performance (CPU utilization, VM memory ballooning, memory utilization, etc.) of the configured VM servers.

Without a VM management tool in place, admins face constant challenges with monitoring VM environments. Some of these challenges include:

  • Identifying bottlenecks, especially ones related to storage I/O. Identifying CPU and memory issues is fairly simple, but a storage bottleneck can be difficult to pin down, because it can stem from several causes (a slow disk, heavy network traffic, too many VMs on a SAN volume, etc.).
  • Monitoring usage trends, system availability, and resource bottlenecks, which requires correlating data from multiple places.
  • Storage allocation, an ongoing challenge in VM management.


Finding the true problem without a VM management tool can be a nightmare. A virtualization management tool provides insight into VM performance, delivers key metrics, gives visibility into the root causes of resource contention, and helps optimize virtual environments. A study by EMA indicates that, on average, a virtual admin can manage 77 VMs without a virtualization management tool, while experts say admins can manage up to 150 VMs with one in place. This means the business can scale more easily with existing staff.

Also, do not forget the expense of downtime, represented by MTTR (Mean Time to Repair). Extra admins are required when an issue occurs. According to VMware®, it takes 79 minutes for an admin to fix an issue without a virtualization management tool and 10 minutes with one, thus reducing the downtime in a VM environment. A tool also reduces the labor dollars paid, since issues are resolved sooner. Uptime of 99.5% means 43.8 hours of downtime per year; with a virtualization management tool, uptime increases to 99.95% (4.4 hours of downtime per year). Assuming an average cost of $50/hour for each of the two IT administrators needed to identify and solve each problem, an organization will pay $4,380 in direct labor for virtual troubleshooting without a virtualization management tool (2 admins x 43.8 hours x $50), compared to $440 with one (2 x 4.4 x $50), for an annual savings of $3,940.
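The labor arithmetic above can be verified in a few lines (values taken from the text: two admins at $50/hour each, 43.8 vs. 4.4 hours of downtime):

```python
# Direct troubleshooting labor: number of admins x downtime hours x hourly rate.
ADMINS = 2
RATE = 50  # dollars per hour, per admin

def troubleshooting_labor(downtime_hours):
    return ADMINS * RATE * downtime_hours

without_tool = troubleshooting_labor(43.8)  # 99.5% uptime
with_tool = troubleshooting_labor(4.4)      # 99.95% uptime
print(round(without_tool), round(with_tool), round(without_tool - with_tool))
```

This prints 4380, 440, and 3940, matching the dollar figures quoted above.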


It's also important to consider how an outage impacts revenue, which differs from one organization to another. Based on a report from the National Center for the Middle Market at Ohio State University, most midsize organizations earn between $10M and $1 billion per year, and organizations these days are heavily dependent on their IT environment. Let's assume that during VM downtime the organization loses half of the revenue it would have earned in that period. For an organization with revenue of $10M/year, that is a loss of roughly $571 per hour when the VM environment fails.
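The $571 figure follows directly from those assumptions ($10M/year, half the revenue in a downtime hour lost); a quick check:

```python
# Revenue at risk per hour of VM downtime, assuming half of the revenue
# normally earned in that hour is lost (figures from the text).
ANNUAL_REVENUE = 10_000_000  # dollars per year
HOURS_PER_YEAR = 24 * 365    # 8,760

loss_per_hour = ANNUAL_REVENUE / HOURS_PER_YEAR / 2
print(round(loss_per_hour))  # 571
```

Scaling the annual revenue scales the hourly loss linearly, so a $100M/year organization would be looking at roughly $5,700 per hour under the same assumption.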

Using publicly available data sources, we compiled the data to estimate the type of savings that could be possible using an automated virtualization management tool. We then compared it to manually managing the virtual environment using virtual infrastructure/hypervisor information. The virtualization management software licensing costs are based on SolarWinds Virtualization Manager published prices and may be higher for other management products.

Table 1 -Estimated Cost Savings with Virtualization Management (based on a 500 VM environment):


No virtualization management tool:

| Category | Quantity | Cost per unit | Cost ($/yr) | Comments |
| Downtime | 43.8 hr/year | $571 per hour | $24,999 | |
| VM sprawl | 500 VMs | $50K per 150 VMs | $166,665 | 150 VMs will have $50,000 to $150,000 lost in VM sprawl; taking the minimum $50,000 gives $333 per VM |
| Admins' daily routine | 6 persons | $50 per hour | $360,000 | 50 weeks x 5 days x 8 hrs x $50/hr x number of admins |
| Admins working on issues | 43.8 hrs | $100 per hour (2 admins) | $4,380 | hours of downtime x number of admins x cost per hour |

1st year with a virtualization management tool:

| Category | Quantity | Cost per unit | Cost ($/yr) | Comments |
| Downtime | 4.4 hr/year | $571 per hour | $2,511 | |
| VM sprawl | 500 VMs | | $41,666 | with a VM management tool, assume VM sprawl decreases to 25% |
| Admins' daily routine | 3 persons | $50 per hour | $180,000 | VMs per admin roughly doubles, from 77 to 150 |
| Admins working on issues | 4.4 hrs | $100 per hour (2 admins) | $440 | hours of downtime x number of admins x cost per hour |
| Virtualization management tool (SolarWinds) | | | $23,995 | list price of a VM112 license (up to 112 sockets) for SolarWinds Virtualization Manager |

2nd year onwards with a virtualization management tool:

| Category | Quantity | Cost per unit | Cost ($/yr) | Comments |
| Downtime | 4.4 hr/year | $571 per hour | $2,511 | |
| VM sprawl | 500 VMs | | $41,666 | with a VM management tool, assume VM sprawl decreases to 25% |
| Admins' daily routine | 3 persons | $50 per hour | $180,000 | |
| Admins working on issues | 4.4 hrs | $100 per hour (2 admins) | $440 | hours of downtime x number of admins x cost per hour |
| Maintenance for the virtualization management tool (SolarWinds) | | | $4,799 | estimated annual maintenance charge |


Using this data, Table 2 summarizes the estimated cost savings: roughly $307K for the first year, and slightly more for the second, with a very strong ROI.

Table 2 - Virtualization Management Estimated Savings

| | Amount |
| Costs with no VM mgmt (baseline) | $556,044 |
| Cost with VM mgmt - year 1 | $248,612 |
| Year 1 savings with VM mgmt | $307,432 |
| Year 1 ROI | 12.8 |
| Cost with VM mgmt - year 2+ | $229,416 |
| Year 2 savings with VM mgmt | $326,628 |
| Year 2 ROI | 68.1 |
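The Table 2 totals can be recomputed from the Table 1 line items, which also makes it easy to plug in your own values, as suggested below:

```python
# Recompute Table 2 from the Table 1 line items.
baseline = 24_999 + 166_665 + 360_000 + 4_380    # no management tool
year1 = 2_511 + 41_666 + 180_000 + 440 + 23_995  # includes $23,995 license
year2 = 2_511 + 41_666 + 180_000 + 440 + 4_799   # includes $4,799 maintenance

savings_y1 = baseline - year1
savings_y2 = baseline - year2
roi_y1 = savings_y1 / 23_995  # savings relative to the license cost
roi_y2 = savings_y2 / 4_799   # savings relative to the maintenance cost

print(baseline, year1, savings_y1, round(roi_y1, 1))  # 556044 248612 307432 12.8
```

Substituting your own downtime hours, admin headcount, and sprawl estimates into these few sums gives a first-pass ROI estimate for your environment.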


While these costs will vary from company to company, they illustrate how quickly an automated virtualization management tool can pay for itself. Using a similar methodology, a company could plug in its own values to customize the estimate. These ranges may be high, but the opportunity for meaningful savings is certainly there, so it is probably worth the effort to evaluate the potential savings in your environment.


To conclude, new VMs are created constantly and virtual environments only grow more dynamic, making them very difficult to manage manually. The facts and figures indicate a substantial upside to utilizing a virtualization management tool. You can find out more about SolarWinds Virtualization Manager on our product page.

Stephen Covey, creator of the “7 Habits of Highly Effective People,” often said, “As long as you think the problem is out there, that very thought is the problem.”

There’s truth to that statement. But all motivational phrases and buzzwords aside, you can expend a lot of mental energy dwelling on the fact that there are problems in your day-to-day endeavors. Of course, when your role in life is a system administrator, you are no stranger to a plethora of problems. In fact, the notion that there are problems out there in your IT world is what keeps you moving each day.

The Information Technology Infrastructure Library (ITIL) tells us that a problem is the unknown root cause of one or more existing or potential incidents, and it defines an incident as an event that is not part of the normal operation of an IT service. For example, if the employees within your corporate IT network are unable to access network services like email, printing, and the Internet, it may be a problem to them, but according to ITIL, that's an incident. If the network access is hampered by an issue with the core router, that's the root cause of the incident and hence, the actual problem.

Enter the help desk. Given that problems are your help desk’s reason for living, proactive problem and incident management is necessary for effective IT support and problem resolution.

Your help desk software can do a lot toward proactively managing your IT administration issues. Here are some useful features for an effective help desk solution:

  • Integration with network performance and monitoring solutions.
  • The ability to receive and assimilate alert data and automatically assign tickets to specific technicians.
  • Automatic triggering of new alerts and note updates on existing tickets as network parameters change.
  • Configurable variables to filter alerts based on severity and location (these can carry information such as operating system, machine type, IP address, DNS, total memory, system name, location, and alert type).
  • Simplified ticketing, such as linking an unlimited number of incident tickets to a single problem.
  • ITSM incident diagnostics and ticket routing features that tie tickets to knowledge base articles, CMDB asset associations, service requests, known problems, change requests, and SLAs.
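The incident-to-problem relationship in that last pair of points is just a one-to-many link. A minimal sketch, using the core-router example from above (the class and field names here are illustrative, not any particular help desk's API):

```python
# Minimal sketch: many incident tickets linked to one underlying problem record.
# Class and field names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Incident:
    summary: str

@dataclass
class Problem:
    root_cause: str
    incidents: list = field(default_factory=list)

    def link(self, incident):
        """Attach an incident ticket to this problem record."""
        self.incidents.append(incident)

core_router = Problem("Core router failure")
for symptom in ("email unavailable", "printing down", "no internet access"):
    core_router.link(Incident(symptom))

print(len(core_router.incidents))  # 3
```

Resolving the one problem record then closes the loop on every linked incident, which is exactly why the root-cause framing pays off for the help desk.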

Getting to the root cause of incidents and resolving problems fast creates a good-to-great situation for your help desk department and staff. When customer satisfaction is key, allying yourself with an efficient help desk solution is how you achieve that.

Here's the deal. I have a Windows 7 laptop. Updates are pending, they get installed, machine reboots, updates fail. Rinse, repeat daily. Every day I go through this and I'm LOSING MY MIND! The updates keep accumulating and nothing ever gets installed. I just get failure, after failure, after failure! @#$%^ Microsoft!


Here's what I have done so far:

  • Reboot - nothin
  • System restore - nothin
  • Download each update individually - still fails
  • MS Fix It tool - didn't help
  • A/V off
  • Admin rights
  • Uninstall previous updates - still nothing
  • Clear all the logs and .dat files - nothing helped
  • No viruses or any other bad thingys. All clear.
  • Will attempt to uninstall the Windows Update Service - that might help, but I doubt it.
  • My last resort is the battery re-seating, which actually WORKED in the following scenario: Troubleshooting. (The Hard Way.)


Anyone have any other ideas, short of formatting? I'm losing it here. Coming close to a "Naked Bronx in a clock tower" moment. Thanks.
