
Geek Speak

41 Posts authored by: karthik

To quickly diagnose and resolve performance bottlenecks in Exchange Server, you need to understand every component within the application and how each one affects performance. Continuously monitoring Exchange will familiarize you with each metric and, in turn, make you more aware of the kinds of thresholds you can set to optimize alerts. Exchange performance bottlenecks can occur for various reasons, such as heavy server load, a high rate of incoming mail, MAPI operations, POP3 requests, and so on.


You can start by looking at RPC load – determine whether the RPC operations per second are high. You should then drill down further and see whether the RPC load is coming from a single user or from many users. Mailbox size is an area you should constantly monitor because it can affect server performance. Whenever folder or mailbox size increases, CPU and disk usage increase as well, which brings down server performance. Typically, a heavy load on the server or applications leads to a bottleneck in one of the following areas:

  • Database Capacity: Exchange bottlenecks can occur with email database capacity. Even if you set a limit on database size, the database can grow faster than you expect. Therefore, you should regularly check the email database, because if it reaches capacity, all of the mailboxes in that database will have issues.
  • Disk: A disk bottleneck occurs when there are high read and write latencies on the database drives and log storage. The best way to diagnose this is to look at a list of all applications using the same disk. Because users are constantly performing a variety of operations in Exchange, it consumes its allocated disk resources, and when other applications hog those same resources, a bottleneck results. To catch this early, monitor the I/O database read/write latency and I/O log read/write latency counters.
  • CPU: CPU performance can suffer when heavy, critical applications run side by side. If CPU usage stays over 80% or 90%, there’s definitely a bottleneck. Look at which processes are running and how much CPU they’re consuming. One fix is simply to add CPU capacity, or consider moving Exchange to a dedicated server with adequate resources. In addition, you should monitor the % processor time, % user time, and processor queue length counters.
  • Memory: Sufficient memory must be available to support many simultaneous user logons. If memory is insufficient when many users log on, a bottleneck will occur. Reducing the number of client applications per user and turning off third-party plugins will free up memory resources. Monitoring total memory, memory cache, and memory usage % will be helpful.
  • Configuration: Setting up Exchange Server with the right configuration is key. For example, ensure that database files, temp files, transaction logs, etc. are not shared with other resources; these are critical to Exchange performance and availability. Another tip to keep in mind is to avoid scheduled maintenance during business hours. When any of these is misconfigured, end users will be affected.
  • Hardware: Memory, storage disks, and CPUs should be monitored regularly and upgraded if they are insufficient or underperforming. A hardware failure will affect the Exchange server and the applications running on it, and other applications that depend on a shared resource will also be impacted when that hardware has contention issues.

Checking these performance counters for bottlenecks is recommended because it lets you evaluate performance at all times. Your monitoring software will provide a visual representation of application performance in the form of graphs, charts, or tables for specific time intervals, and alerts will tell you when there’s a performance issue. Once notified, you can immediately start troubleshooting instead of waiting for a user to raise a help desk ticket.
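As a rough illustration, the counter-to-threshold mapping described above can be sketched in a few lines of Python. The counter names and threshold values here are made up for the example, not official Microsoft guidance; a real monitoring tool would read live values from Performance Monitor.

```python
# Sketch: evaluate Exchange performance counters against illustrative
# warning/critical thresholds. Names and limits are assumptions.

THRESHOLDS = {
    "io_db_read_latency_ms":   (10, 20),  # (warning, critical)
    "io_log_write_latency_ms": (5, 10),
    "processor_time_pct":      (80, 90),
    "rpc_requests_per_sec":    (40, 70),
}

def evaluate(counters):
    """Return (counter, level) pairs for every breached threshold."""
    alerts = []
    for name, value in counters.items():
        if name not in THRESHOLDS:
            continue
        warn, crit = THRESHOLDS[name]
        if value >= crit:
            alerts.append((name, "critical"))
        elif value >= warn:
            alerts.append((name, "warning"))
    return alerts

sample = {"io_db_read_latency_ms": 25, "processor_time_pct": 85,
          "rpc_requests_per_sec": 12}
print(evaluate(sample))
```

Running this against the sample flags the database read latency as critical and CPU as a warning, while the modest RPC rate passes quietly.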

Often, virtual admins have a pressing need to move virtual machines from one host to another without causing application downtime. This scenario arises because a host may run out of resources due to contention among the virtual machines it runs. vMotion® from VMware® allows seamless virtual machine migration without any interruption. Similarly, with the latest generation of Hyper-V®, Microsoft® now offers Live Migration, which can move virtual machines between hosts without disruption or application downtime.


Live Migration offers virtual admins plenty of flexibility and several added advantages:

  • Enables proactive maintenance. Live Migration offers the ability to perform migration and host maintenance simultaneously. This drastically reduces migration time, giving you more room to plan and simplifying Hyper-V mobility.
  • Improved hardware utilization. When you have many virtual machines that are only occasionally used, you can move them to a standalone server or a cluster with minimal capacity, and the source host can be powered down.
  • Performance and flexibility when working in a dynamic environment.
  • Rich in features. With the latest Hyper-V, Microsoft offers new APIs to migrate virtual machines across cluster nodes along with functionalities that enable migration of virtual machines in and out of failover clusters without downtime.


In addition to Live Migration, Hyper-V offers other migration mechanisms that virtual admins use to improve virtual machine availability and performance.

  • Live Storage Migration: With Live Storage Migration, IT admins can move a virtual machine’s files and data to another storage location without any service interruption or downtime. For example, if several virtual machines on a host depend on the same volume and require high disk I/O, storage performance will suffer, and in turn all the virtual machines on that volume will have performance issues. Live Storage Migration is useful for moving a virtual machine’s storage to a location with more capacity.
  • Shared Nothing Live Migration: In Shared Nothing Live Migration, IT admins can move a running or a live virtual machine from one host to another. For example, admins can move the virtual machine between hosts even if the virtual machine’s file system resides on a storage device that is not shared by both hosts. This kind of migration is very useful for admins in small and medium organizations who want to perform periodic maintenance on a host during a planned outage.
  • Live Migration Queuing: A host can queue up several Live Migrations so virtual machines are lined up one after another to move to the destination host. This is useful when performing several Live Migrations.
  • Live Virtual Machine and Storage Migration (VSM): This process migrates a virtual machine and its storage together from one host to another. VSM is supported between two standalone hosts or Hyper-V host clusters. With VSM, IT admins can transfer virtual machines to either a cluster shared volume (CSV) or an SMB 3.0 file share on the destination host cluster.


Whichever migration method you choose to deploy, the ultimate goal is to ensure the end user and the business aren’t impacted by virtual machine performance issues. One other thing to keep in mind when you’re migrating virtual machines is to look out for performance bottlenecks across your virtual infrastructure and ensure application performance stays well below your set thresholds.

In today’s business world, email is considered one of many mission-critical applications, if not the most important one. At the end of the day, email downtime affects business operations; sales teams, for example, may have a tough time reaching customers. Microsoft Exchange is a great tool for setting up email infrastructure and accessing email. However, the biggest challenge for an Exchange admin is keeping a fully functioning system running during peak business hours. Below is a list of essential tips an Exchange admin can leverage to optimize and improve Exchange Server performance.


1. Storage Allocation and Performance


When Exchange admins don’t correctly configure or partition the disks that store Exchange data, email failures can result. When a storage volume runs out of available disk space, Exchange can’t store additional email. As a best practice, even before assigning mailbox space to users, Exchange admins should monitor available disk space and head off the issue in advance. Planning your disk storage for Exchange Server is critical because high disk latency will lead to performance issues. For optimum storage performance, admins should consider:

  • Deploying high performance disks and spindles
  • Selecting the right RAID configuration
  • Improving performance by aligning your disks


2. Mailbox Quota

Often, Exchange admins get trouble calls from users who can no longer send or receive email. This happens when users have exceeded their allocated mailbox quota. Instead of removing mailbox restrictions, Exchange admins can encourage users to move attachments, set up an archive, or adjust their archiving policy to reduce mailbox size. It’s important to monitor mailbox sizes to ensure each user group or individual employee stays within the original allocation. Larger organizations with many users have to be smart about configuring mailbox sizes, since oversized mailboxes generate more trouble tickets. Admins should proactively configure alerts to warn users when they’re about to reach their quota.
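The quota alerting described above boils down to a simple percentage check. This is a minimal sketch: the mailbox sizes are hypothetical, and the 80%/95% warning/critical split is an assumption you would tune to your own policy.

```python
def quota_status(used_mb, quota_mb, warn_pct=80, crit_pct=95):
    """Classify a mailbox by how much of its quota is consumed."""
    pct = 100.0 * used_mb / quota_mb
    if pct >= crit_pct:
        return "critical"
    if pct >= warn_pct:
        return "warning"
    return "ok"

# Hypothetical (used MB, quota MB) pairs for three users
mailboxes = {"alice": (1900, 2048), "bob": (900, 2048), "carol": (2040, 2048)}
for user, (used, quota) in mailboxes.items():
    print(user, quota_status(used, quota))
```

Here carol would be flagged critical and alice would get a warning, so both can be nudged to archive before they start generating trouble tickets.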

3. Mailbox Database Capacity

Admins have to understand mailbox database capacity in order to get more visibility into:

  • Mailbox database size: You can determine how many mailboxes can be deployed in a single database. For example, development teams may have to share emails with heavy attachments with customers for feedback; such user groups or departments with large mailboxes can be moved to another database to balance load and capacity.
  • Storage usage: In order to estimate database capacity, you should map the number of user mailboxes per disk or array.
  • Transaction log files: Looking at transaction logs for user mailboxes will give you more information about message size, attachment size, and amount of data sent and received, etc.
  • Database growth and size: The database size gives you a rough estimate of the number of mailboxes you can deploy. This depends on the availability factor (if you have a database copy, you have something to fall back on during failover) and the storage configuration.
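To make the capacity planning above concrete, here is a back-of-the-envelope sketch. The headroom figure, sizes, and growth rate are assumptions for illustration, not sizing guidance.

```python
def max_mailboxes(db_capacity_gb, avg_mailbox_gb, headroom=0.2):
    """Rough mailbox count for one database, reserving 20% for growth."""
    usable = db_capacity_gb * (1 - headroom)
    return int(usable // avg_mailbox_gb)

def months_until_full(current_gb, capacity_gb, growth_gb_per_month):
    """Project when the database hits capacity at the current growth rate."""
    if growth_gb_per_month <= 0:
        return None
    return (capacity_gb - current_gb) / growth_gb_per_month

# A 1 TB database with 2 GB average mailboxes, growing 50 GB/month
print(max_mailboxes(1000, 2))            # mailboxes that fit
print(months_until_full(600, 1000, 50))  # months of runway left
```

Numbers like these are only a starting point; transaction log volume and database copies change the picture, but even a crude projection tells you when to plan the next database.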

4. Indexing Public Folders

Even though indexing is useful, it consumes a lot of resources and can cause performance issues on the Exchange server. Indexing public folders in Exchange consumes significant CPU and disk. As your mailbox content changes, for example as you receive more and more email, the index grows larger and larger. If you have a mailbox that is about 5GB in size, its index will also take up a fairly large amount of space, because each message gets indexed separately for each public folder. Admins should recommend indexing only for specific teams or departments; otherwise, various resource bottlenecks will appear.

5. Managing Unused Mailboxes

Businesses typically hire temp staff, interns, and contractors for a limited period, and admins have a tendency to leave those dormant mailboxes untouched for a long time. Removing them will free up storage and database capacity. Admins can reassign the space to users who are short on capacity or keep it handy for future hires.

6. Bonus Tip: Automated Alerting

Admins can set up mechanisms to alert end users when their quota is nearly reached. Additionally, these alerts can provide information about the mailbox, like the number and size of attachments, and tips on how to reduce mailbox size. This automation can save the admin several hours and puts the responsibility for size reduction on the user.

Exchange Server can be a complex application to deal with, especially when you have very little time to diagnose and troubleshoot issues. To be proactive, optimize performance for your environment well in advance. Attending to other areas, such as controlling spam, creating proper backups, and cleaning up the Exchange server, will help ensure consistent performance.

System administrators face many day-to-day challenges, particularly when troubleshooting and resolving user issues. The last thing on their mind is probably looking at performance data and making meaningful sense of it. While this may be a daunting task, data analysis is key to solving issues surfaced by monitoring your servers and applications, especially when mission-critical applications run on those servers. SysAdmins must therefore be proactive to ensure there’s no server downtime.


Data analytics will only be as good as your application monitoring solution. It’s key to have an effective monitoring solution so your analysis and predictions are accurate. Major considerations to keep in mind when you’re shopping for an application monitoring solution include:


Application Health and Availability

You can analyze how many applications are up or down in a given week or month, then drill down to see critical application status and component issues. Data analytics will show you the performance of business applications and whether there are deviations in performance. In addition, you can analyze patterns, see how each application responds to user requests, and compare against your baseline. With analytics, you can also look at application storage and estimate and predict storage growth for critical applications.

Real-time Alerts

When your applications reach their thresholds, you should instantly receive an alert. Analyzing the number of alerts you receive per week for a given application helps you decide whether to adjust your threshold settings.

Server Performance

Look at server performance and analyze a range of metrics, such as: response time, CPU load for any given time, memory utilization, node details, number of applications in a server, real-time processes and services, storage volumes for a given server, etc.

Hardware Health

To effectively evaluate the server environment and its performance, you need an integrated console that shows a range of hardware components. You can analyze the performance of each component and its metrics to ensure a hardware failure doesn’t negatively impact application performance. To do this, analyze the performance of hard drives, arrays, array controllers, power supplies, fans, CPU temperature, CPU fan speed, memory, voltage regulator status, etc.

Database Optimization

To optimize SQL Server performance, look at various critical metrics and analyze the performance of each. You can identify expensive queries and see how long each takes to run; badly written queries will affect database performance. Similarly, look at other components of SQL Server, such as index fragmentation, so data searches run faster. Storage utilization, transactions, average response time, etc. should be analyzed proactively. This keeps you aware of database performance and availability, helps you avoid performance bottlenecks, gives you the scalability to monitor more databases and instances, and helps keep the server hardware healthy.
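The expensive-query analysis above can be illustrated with a small ranking sketch. The query labels and timing figures here are invented; in practice these numbers would come from your monitoring tool or SQL Server’s execution statistics.

```python
# Hypothetical rows: (query label, total elapsed ms, execution count)
stats = [
    ("SELECT ... FROM Orders JOIN ...", 90_000, 30),
    ("UPDATE Inventory SET ...",        12_000, 1200),
    ("SELECT ... FROM AuditLog ...",    45_000, 9),
]

def rank_by_avg_time(rows):
    """Sort queries by average elapsed time per execution, worst first."""
    return sorted(rows, key=lambda r: r[1] / r[2], reverse=True)

for label, total, count in rank_by_avg_time(stats):
    print(f"{total / count:8.1f} ms avg  {label}")
```

Ranking by average rather than total time surfaces the slow, infrequent reports; ranking by total time instead would surface the cheap query that runs thousands of times. Both views are worth checking.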

Operating System

Depending on your environment, you may be using different operating systems, and it’s essential to analyze the performance of each. Drill down into the server operating system and analyze CPU utilization, physical memory, virtual memory, and disk performance. This will help show what’s putting a strain on the server, operating system, and applications.

Benefits of Analytics


Receiving answers to what-if situations and determining where problems may occur becomes much easier. You can also build a meaningful picture of the relationships among your IT components, application performance metrics, customer experience monitoring, and end-user monitoring. To keep application performance optimal, today’s SysAdmins in both small and large enterprises are expected to analyze these patterns and produce substantial information that helps guide further IT investments and determine ROI.


To conclude, SysAdmins can certainly take a load off their shoulders by utilizing the proper monitoring solution. In turn, analysis and predictions will be more accurate. That being said, the aforementioned considerations are important to keep in mind when searching for an application monitoring solution.

It’s normal for IT admins to run several operating systems to support different applications and user groups, particularly because certain operating systems are better suited to a given user load, easier to troubleshoot, faster to update, easier to diagnose, more cost effective, and so on. At the same time, supporting multi-vendor operating systems has its drawbacks compared with a strictly Windows® or Linux® environment. Let’s say your end users are having an issue with an application running on a Windows server and it’s taking a significant amount of time to diagnose and resolve. The lack of visibility and the downtime will put a dent in your business. Similarly, imagine critical applications running across a multi-vendor environment: you can’t troubleshoot all the issues at the same time. It’s impossible, and your senior IT staff and other executives are going to come after you with serious questions.


Let’s assume you require a multi-vendor operating system because of its capabilities and flexibility. To start, the question you need to ask yourself is, “do I have the right tool to help me get through issues if they arise during peak business hours?” Having the right server management solution to monitor multiple operating systems will provide IT personnel with the specific operating system expertise they need to dig deeper and correct relevant issues quickly. For example:

  • Get complete visibility into each operating system’s performance, metrics, and key statistics
  • Ensure continuous application performance and availability
  • Leverage diagnostic capabilities to get insights into the health of your commonly used environments such as Windows, Linux, and UNIX
  • Get notified through advanced alerting mechanisms so you can learn about and fix an issue before your users start raising trouble tickets
  • Quickly identify key performance bottlenecks by looking at rogue processes and services
  • View error logs to find the reasons for operating system issues and crashes


In addition, a server management solution offers a range of capabilities that will help you monitor your server operating systems. Look at real-time processes and services that are hogging resources, performance counters, real-time event logs, a range of business applications, and top components with performance issues.


Drilling down into server operating systems will show you the critical metrics that are putting a strain on the server, operating systems, and applications.

  • CPU Utilization: You can see how much load is being placed on the server’s processor at any given time. A consistently high CPU value may point to underperforming hardware that is affecting operating system performance.
  • Physical Memory: RAM is where the operating system stores the information it needs to support applications. A server, regardless of its operating system and the applications running on it, will face issues when there’s inadequate memory.
  • Virtual Memory: When virtual memory usage increases, data moves from RAM to disk and back to RAM, which puts the physical disks under tremendous pressure.
  • Disk Performance: Disk performance is also a leading cause of server and application issues, partly because a large virtual environment puts a strain on the servers’ disk I/O subsystems.
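The four drill-down metrics above can be combined into a simple strain check. The limits used here (85% CPU, 90% physical memory, 20% swap usage, 20 ms disk latency) are illustrative assumptions, and the input values would come from whatever monitoring agent you run.

```python
def strain_report(cpu_pct, phys_mem_pct, swap_pct, disk_latency_ms):
    """Flag which of the four OS metrics look strained.
    Limits are illustrative, not vendor guidance."""
    report = []
    if cpu_pct > 85:
        report.append("cpu")
    if phys_mem_pct > 90:
        report.append("physical_memory")
    if swap_pct > 20:          # heavy swapping pressures the disks
        report.append("virtual_memory")
    if disk_latency_ms > 20:
        report.append("disk")
    return report

print(strain_report(cpu_pct=92, phys_mem_pct=70, swap_pct=35, disk_latency_ms=8))
```

In this sample the CPU and virtual memory are flagged, which matches the pattern described above: swap pressure and CPU strain often show up together before the disks do.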

Microsoft Lync is a collaboration application commonly used by organizations that run a Windows environment. Employees in large organizations often use Lync to send chat messages, set up internal and external conference calls, share screens with remote users, and record calls. All this makes Lync a critical application that system administrators need to monitor continuously to keep it from failing.


When Microsoft released Lync Server 2013, it shipped with built-in monitoring functionality; all you have to do is enable it on the Front-End Server. If you’re using a previous version of Lync, you probably have a dedicated monitoring server running separately, but you don’t need one with Lync Server 2013. The built-in monitoring in Lync 2013 gives you access to a variety of reports based on Lync’s call detail recording. Some of the key reports you can access are:

  • User activity report: Get a summary of what users are sharing via the instant messaging window in Lync, including multimedia files and other file transfer sessions.
  • Conference summary report: Get a summary of conference calls involving more than 3 people. Look at conference call times and other conference activities.
  • IP phone inventory report: Generate a report that shows user logins and which IP phones they’re mapped to. Get a complete list of which users are active on Lync and which haven’t logged on. Keep an up-to-date phone inventory based on user login patterns to determine active and inactive users.
  • Call admission control report: This report shows all conference and user activities. Call admission control tells the leader whether video is permitted on a particular call; in some instances, if many users join a video call, a network bottleneck can result. Depending on the type of session, you can also estimate your network speed and availability.


These reports give you the overall system usage summary for all your users. As IT admins, you can also answer key questions that are raised by your senior IT managers on Lync performance – whether it’s related to the amount of calls made in a given hour, if call recordings are taking a long time to process, how many users are connecting from outside the office with or without a mobile device, or if latency issues are present, etc.


The built-in monitoring functionality only tells you about the performance of the Lync server itself. To drill deeper when issues arise, you need application performance monitoring software that shows you where the bottlenecks and connection delays in Lync are, along with incoming and outgoing requests, messages, responses, and much more beyond basic Lync performance. All of these Lync Server 2013 components are critical to monitor; with them, you can assess the status and overall health of services as well as the performance of the front-end Lync Server 2013.

Virtual storage I/O latency impacts VM performance because high read or write latency can cause performance issues for all the resources in the datastore. To resolve such an issue, you need a common view into both the virtual and storage environments to help you identify the root cause.


To effortlessly troubleshoot I/O latency issues in your virtual environment, your virtualization management software should have a datastore I/O latency widget which lists all the top datastores with high read or write values. When you drill down to a latency issue within the datastore, you will be able to see several key indicators that show the current performance of the datastore. For example, IOPS, IO latency, disk usage, capacity, disk provisioning information, alerts that have been raised for that datastore, etc.


Monitoring the I/O latency metric is crucial. Generally, when you see a read or write value greater than 20 milliseconds, your datastores will more than likely experience latency issues. Drilling down into the datastore will show you a relationship or environment map indicating whether the datastore is affecting another resource’s performance. It’s possible that too many VMs are hogging the storage resource, causing latency issues: VMs contend with each other for storage, causing performance problems for other VMs and the storage system. When VMs have performance issues, the critical applications running inside them also end up bottlenecked, and all of this ultimately affects end users.
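Using the 20 ms rule of thumb above, a datastore latency widget essentially does something like the following. The datastore names and latency figures are invented for the example.

```python
LATENCY_LIMIT_MS = 20   # rule of thumb from the text

# Hypothetical read/write latency samples per datastore
datastores = {
    "ds-prod-01": {"read_ms": 4,  "write_ms": 7},
    "ds-prod-02": {"read_ms": 31, "write_ms": 12},
    "ds-dev-01":  {"read_ms": 9,  "write_ms": 26},
}

def latency_offenders(stores, limit=LATENCY_LIMIT_MS):
    """Datastores whose read or write latency exceeds the limit, worst first."""
    bad = {name: max(m["read_ms"], m["write_ms"])
           for name, m in stores.items()
           if m["read_ms"] > limit or m["write_ms"] > limit}
    return sorted(bad, key=bad.get, reverse=True)

print(latency_offenders(datastores))
```

The worst-first ordering matters in practice: when several datastores breach the limit at once, you start with the one most likely to be dragging down the shared storage system.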


You can take the following measures to regulate I/O latency issues so your VM performance is not affected:

  • Allocate I/O resources to VMs based on their workload.
  • If you have a critical application or a latency heavy application, ensure that’s the only application running in the VM. Associate a VM that has a critical application running in it to a single datastore to prevent high I/O latency.
  • If the VM is not utilizing the allocated storage resource, consider routing unused resources to other VMs which have I/O contention issues.


It should be noted that not all storage systems perform the same, so set appropriate threshold values based on the performance of each storage device to avoid latency issues. In the video provided at the end of this blog, vExpert Scott Lowe shows you how to drill down from a datastore. You’ll see how the datastore is mapped to hosts, clusters, and VMs in your environment and look at their dependencies. Once you have this mapped, it’s easy to find related resources that are either causing the problem or affected by it. Leverage a virtualization management tool to avoid storage I/O latency issues.


If you monitor a large virtual infrastructure, you’ve probably come across memory ballooning and swapping issues at some point. Since memory ballooning can lead to swapping issues in a virtual environment, let’s look at what each is and how it affects individual VM performance.


When you set up a VM, you allocate resources for it based on usage, user access, and so on. Once resources are assigned, the VM is not aware of how many resources other VMs on the host are consuming. For example, when you have three VMs on a host and two of them consume the assigned memory, the third VM will have performance issues. This usually leads to various bottlenecks, and you’ll often find yourself scrambling to troubleshoot so that VM performance or availability issues don’t affect end users.


Memory ballooning & swapping: How it works and where it fits in your virtual environment


Memory ballooning is a memory reclamation technique, and heavy ballooning is generally not a good sign because it coincides with performance issues in your VMs. The hypervisor assigns memory resources to all the VMs in your environment, but a VM is usually unaware when other VMs have memory contention issues. To deal with this, the hypervisor uses a balloon driver installed in the guest operating system. The balloon driver identifies whether a VM has excess memory, then inflates and pins that memory down so the guest does not consume it, and communicates back to the hypervisor so the hypervisor can reclaim the pinned memory from the VM.


It is then up to the hypervisor to allocate more memory to other VMs that have memory contention issues. One thing to keep in mind: when there is heavy memory ballooning, the guest OS is forced to read from disk, which causes high disk I/O and can bring down VM performance. When you think the host is under memory pressure, it’s critical to monitor the memory swap metric for performance issues.

Memory ballooning can lead to memory swapping. Memory swapping is also not a good thing, because the system swaps to disk, and disk is much slower than RAM; all of this causes a performance bottleneck. Excessive memory swapping for a VM usually indicates memory contention on the host.
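A monitoring tool’s ballooning and swapping check can be sketched roughly as below. The 10% ballooned-memory limit and the VM figures are assumptions for illustration; real values would come from the hypervisor’s performance counters.

```python
def memory_pressure(vms):
    """Flag VMs whose ballooned or swapped memory suggests host
    memory contention. The 10% balloon limit is illustrative."""
    flagged = []
    for name, m in vms.items():
        balloon_pct = 100.0 * m["ballooned_mb"] / m["granted_mb"]
        if balloon_pct > 10 or m["swapped_mb"] > 0:
            flagged.append(name)
    return flagged

# Hypothetical per-VM memory stats in MB
vms = {
    "web-01": {"granted_mb": 4096, "ballooned_mb": 900, "swapped_mb": 128},
    "db-01":  {"granted_mb": 8192, "ballooned_mb": 0,   "swapped_mb": 0},
}
print(memory_pressure(vms))
```

Treating any nonzero swap as a flag mirrors the point above: swapping is slow enough that even a small amount is worth investigating, while a little ballooning can be normal.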

To proactively find and address memory issues, you can set up alert criteria: set thresholds and get notified when there are ballooning or swapping issues, then drill down to see which VMs have performance problems and fix the bottleneck. Watch vExpert Scott Lowe in action as he shows you how to monitor and troubleshoot memory ballooning and swapping issues using virtualization management software.



You can read this whitepaper from VMware® to learn more about memory resource management and memory ballooning.

The holiday season is an ideal time for stores because that’s when they get the most foot traffic, and it’s a great time to cash in on sales. It’s also an ideal time for shoppers to cash in on deals and discounts. After all, who doesn’t love to shop when there’s a steep sale? Shoppers who avoid the holiday rush of going to a store may end up visiting the online store instead. To give those shoppers that convenience, it’s essential to have a website that offers a great user experience, from the time the buyer enters the site until the sale is complete.


If the user experience is bad, it’s going to cost you the sale, and the customer may never come back. That’s a huge risk, especially during the holidays. We recently spoke about application downtime; similarly, if your website is unavailable, it’s going to cost your business plenty. It’s vital to monitor website availability, performance, and responsiveness, and when customers access your site from multiple locations, there’s all the more reason to ensure continuous uptime.


Having a website performance monitoring tool will help you find the root cause of a slow Web transaction or determine whether the website itself is down.

  1. Monitor page load speeds. Monitor critical Web page elements such as images, JavaScript, CSS, third-party content, etc., and check whether these elements are impacting page load speed. If one of the page elements fails to load, it spoils the whole user experience and hurts the page load time.
  2. Set appropriate baseline metrics. You can set baseline values once you record the Web transaction, then keep adjusting them based on the transaction’s performance. For example, once you’ve set 80% as a warning and 90% as a critical threshold, revisit those values from time to time and modify them based on website performance.
  3. Monitor page element behavior. Once you’ve found what the issue is, drill deeper to see the root cause. For example, if your site is not loading the way you want because of a style sheet, go deeper to look at the different components, their load times, and whether another issue is preventing the style sheet from loading.
  4. User experience monitoring. Just because you’ve identified and fixed a website issue doesn’t mean all is well; you have to keep testing your fixes from the end user’s perspective. Since your user base accesses your website from multiple locations, measure site performance from those locations and compare the results with your baseline values.
  5. Get notified when there’s a problem. You can proactively receive alerts in real-time if your website is not loading, or if a Web transaction has failed, or if the website performance is trending very poorly.
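As a rough illustration, the baseline logic from steps 2 and 5 can be sketched in a few lines of Python. The element names, timings, and the baseline ceiling are hypothetical; only the 80%/90% warning and critical thresholds come from the example above:

```python
# Baseline check for page-element load times. The 80%/90% thresholds
# mirror the warning/critical example above; element names, timings,
# and the 2000 ms baseline ceiling are illustrative, not from any product.

WARNING_PCT = 0.80   # warn when load time reaches 80% of the baseline
CRITICAL_PCT = 0.90  # critical at 90%

def classify(load_time_ms, baseline_ms):
    """Compare a measured element load time against its baseline ceiling."""
    ratio = load_time_ms / baseline_ms
    if ratio >= CRITICAL_PCT:
        return "critical"
    if ratio >= WARNING_PCT:
        return "warning"
    return "ok"

# Hypothetical element timings for one Web transaction, in milliseconds.
elements = {"images": 1400, "javascript": 1700, "css": 150, "third-party": 2100}
BASELINE_MS = 2000

alerts = {name: classify(t, BASELINE_MS) for name, t in elements.items()}
```

Revisiting the baseline, as step 2 suggests, is then just a matter of adjusting `BASELINE_MS` or the two percentages as real traffic data comes in.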


Be Proactive this Holiday Season


Rely on Web performance monitoring tools that will keep an eye on your websites along with other front-end applications. Proactively monitor your webpages, receive alerts if your website is having periodic issues, and fix problems before your business performance is affected.


Happy holidays everybody!

The VMkernel is a critical component of any virtual environment: it is the reason your VMs are allocated memory, CPU, and other resources from the physical hardware. What makes the VMkernel special is that it runs directly on the ESX/ESXi host. However, this doesn't mean that performance and latency issues are confined to your VMs. They can also be found in the host itself, which can cause sudden I/O spikes that affect VM performance.


If you think your VM performance is poor due to I/O spikes, take the additional step of drilling into the host to look at the kernel I/O latency metric. This metric shows whether or not the host has I/O spikes. Experts recommend that this value stay around 0 to 1 millisecond. If you see values over 1 millisecond, your VMs may not perform the way you want them to.


In addition to monitoring kernel I/O latency, you should also monitor the queue latency counter, which measures the average amount of time data sits in the queue. Ideally there should be no data in the queue at all, so the value of this counter should be 0. If it is greater than 0, the workload in the VM is too high for the data to be processed efficiently, which leads to I/O spikes and performance issues.
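To make the two thresholds concrete, here is a minimal Python sketch of the checks, assuming you have already collected per-host kernel latency (KAVG) and queue latency (QAVG) samples in milliseconds; the host names and sample values are hypothetical:

```python
# Evaluate the two latency checks described above: kernel I/O latency
# should stay in the 0-1 ms range, and queue latency should stay at 0.

def host_latency_status(kavg_ms, qavg_ms):
    """Return a list of issues for one host, or ["ok"] if healthy."""
    issues = []
    if kavg_ms > 1.0:
        issues.append("kernel I/O latency over 1 ms - host may be a bottleneck")
    if qavg_ms > 0.0:
        issues.append("data queuing in the kernel - workload exceeds device capacity")
    return issues or ["ok"]

# Hypothetical (KAVG, QAVG) samples per host, in milliseconds.
samples = {
    "esxi-01": (0.4, 0.0),  # healthy
    "esxi-02": (2.3, 1.1),  # both thresholds breached
}
report = {host: host_latency_status(k, q) for host, (k, q) in samples.items()}
```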


In a virtual environment, problems start small and work their way up, affecting other hosts in a group, VMs in a host, operating systems, and resources. Other measures can be taken to keep host I/O latency to a minimum. For example:

  • One way to reduce I/O spikes is to increase virtual memory. For this to work, consider increasing the host memory; that way, system memory is used to store data, avoiding disk access.
  • Determine how your storage arrays are performing. When too many VMs try to access the storage system at once, a bottleneck occurs.
  • Look at ways to balance disk loads across your disk drives. This improves efficiency and keeps data from getting stuck in the queue.


In order to proactively troubleshoot virtual host I/O latency issues, you need virtualization management software that monitors all your virtual hosts and gives you key insights into critical performance metrics. You can never go wrong with virtualization management software because it allows you to do the following in real time:

  • Identify and troubleshoot the root cause of kernel I/O latency in the VM host. See how each of your ESX/ESXi hosts is performing, and drill down to see any abnormal behavior in each host.
  • Ensure the VM has enough resources to perform smoothly.
  • Get a wealth of critical information about your storage performance through an intuitive and customizable dashboard.


Virtualization bottlenecks can be tedious to troubleshoot if you don't know where to start looking. Watch this video where vExpert Scott Lowe shows you how to monitor kernel I/O latency using virtualization management software.


We've spoken in the past about how system administrators have no fixed working hours. You are usually at the mercy of your employees, continuously troubleshooting and fixing their issues. It doesn't matter whether you're at lunch, out of the office, or even on vacation: you get those "desperate situation" calls to fix users' issues immediately. Even if you have all the tools in the world to monitor and manage your IT infrastructure, sometimes you don't have access to those tools when you need them the most. This is where a mobile solution can come to your rescue, letting you resolve issues in your datacenter using your Android™ or Apple® devices.



A mobile IT management tool helps you troubleshoot and fix users' issues even when you're away from your desk. If you have proactive alerts in place, you're immediately notified of pressing issues in your environment. You can then log on to your monitoring tool to assess the issue and determine whether you can resolve it right away or should refer it to another IT pro. A mobile IT admin tool eliminates the wait until you can get back to your desk to look at issues. If you have a smartphone or a tablet, you can do the following from anywhere:

  • Look at real-time issues from your smartphone or other mobile device and drill down to the issue, whether it's a server, network, or application issue.
  • Look at key indicators in your monitoring tool that show you critical information such as active alerts, the top 10 databases with problems, applications running in different groups, and more.
  • Look at under-performing application components and drill down further to see the type of alert, its current stage, and whether it is affecting other applications.
  • Identify issues in your server and see if they are affecting other hardware components or application performance.


Your mobile IT admin tool should enable you to fix issues, not just show that an issue exists in your environment. On scanning a node, you can use a range of APIs that list the troubleshooting features for specific applications to help you resolve the issue.


Mobile IT Management Made Easy

Manage your datacenter from anywhere using SolarWinds® Mobile Admin™. Monitor and manage various Microsoft applications like Active Directory®, System Center Operations Manager, ActiveSync, and Exchange. You can also manage virtual environments like VMware® and Hyper-V®; open source applications like Nagios®, and many more—all using your mobile device.


TechRepublic recently published an article about how easy it is for IT admins to use Mobile Admin to remotely monitor and troubleshoot issues in your datacenter. Read the article to find out what Mobile Admin is, how you can use it to monitor and manage user issues, see what range of applications it supports, and obtain licensing information. Try Mobile Admin free for two weeks!

You have no control over when you may experience downtime. If you're thinking, "what's an hour of downtime going to cost?", you may want to re-think that. What you thought was just an hour of downtime can cost your organization millions, if you're a large organization. Organizations whose business is driven by online traffic and sales experience massive losses, especially during peak business hours. For such organizations, application downtime is almost never acceptable.


According to this report from Aberdeen Group, only 7% of organizations claimed to have less than 5 minutes of downtime in an entire year; these organizations achieved almost 100% uptime. Another staggering stat from the report: downtime costs large organizations a little over $1.1 million per hour. Organizations of all sizes get affected by downtime, and Aberdeen's research shows a 19% year-over-year increase in the cost of downtime per hour.


As sysadmins, you have a lot of responsibility in ensuring critical applications do not fail and end users don't experience downtime. You need consistent application performance and continuous application availability. The only way to know that performance and availability are not affected is to monitor key performance counters: set baseline values and get proactively alerted when a counter or metric crosses its threshold. All of this is possible when you have the right monitoring solution in place.


Monitoring tools allow you to do the following proactively:

  • Check the status of the application
  • See application performance – group applications based on type, location, user base, etc.
  • Get notified whenever an issue comes up
  • Seamlessly integrate with other monitoring solutions in your IT infrastructure. That way, you can address other issues in your IT environment
  • Resolve server hardware issues by stopping services and killing processes


Simplify your IT infrastructure by using a monitoring solution. You don't have to hunt for reasons to convince your boss why you need an ideal tool to monitor critical business applications. Instead, use the ROI calculator from SolarWinds and see how Server & Application Monitor (SAM) can proactively monitor applications.


How the ROI Calculator Adds Value to your Decision Making


Make informed decisions: calculate short-term and long-term ROI, and compare costs and maintenance options from one vendor to another.

  • Look at how your current solution functions and determine the ROI that you get after you deploy SAM.
  • Determine the challenges you face with your existing system and benchmark how SAM directly addresses those challenges.
  • Look at your current spend on your tool. Compare how much downtime you had in the past year and see whether your current application monitoring tool is delivering the ROI you deserve.
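As a back-of-the-envelope illustration of the math behind such a comparison, here is a small Python sketch using the Aberdeen figures quoted earlier ($1.1 million per hour for large organizations, roughly 19% year-over-year growth); your own downtime hours and hourly cost are the inputs:

```python
# Simple downtime cost model. The default hourly cost and growth rate
# are the Aberdeen figures cited above; substitute your own numbers.

def downtime_cost(hours_down, cost_per_hour=1_100_000):
    """Cost of a given amount of downtime at a fixed hourly cost."""
    return hours_down * cost_per_hour

def projected_cost(hours_down, years, yoy_growth=0.19, cost_per_hour=1_100_000):
    """Project future downtime cost if nothing changes, compounding YoY growth."""
    return downtime_cost(hours_down, cost_per_hour) * (1 + yoy_growth) ** years

this_year = downtime_cost(4)            # 4 hours of downtime this year
next_year = projected_cost(4, years=1)  # same downtime, 19% higher hourly cost
```

Even a modest reduction in downtime hours, fed through a model like this, is the kind of number an ROI calculator puts in front of your boss.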


You get more for less with SAM. Whether it's in-depth monitoring of your SQL Server or managing all your IT assets, it's better to rely on a tool that proactively shows you issues as they occur, giving you a chance to quickly diagnose and fix them before more users raise helpdesk tickets. You don't have to take our word for it: have a look at a recent survey of IT pros conducted by IT Brand Pulse. SAM came out a clear leader in all areas: market leadership, performance, reliability, service and support, and innovation.


If you fall outside the 7% of organizations with less than five minutes of downtime a year, it's time to take another look at your current ROI and make some hard decisions to avoid downtime.



In a virtual environment, if a VM has performance issues, it can affect the performance of other components within the virtual infrastructure, and the converse is true as well. For example, you can collectively map all your VMs and, depending on the location of your bottleneck, find that the issue is related to storage performance. When you want to view storage performance, you typically look at storage IOPS: IOPS measures the number of operations per second a storage system can process, while throughput measures the amount of data transferred per second.


Whether you're using VMware or Hyper-V storage, you should look at storage performance across hosts and clusters and periodically monitor for issues. One way to assess VM performance is by looking at critical performance metrics. To do this, drill down into the VMs and check which ones are using the most storage resources. For example, if you have three VMs with 30 GB allocated and two of them are hogging resources, the third will have performance issues.


As virtual and storage admins, you should determine the number of IOPS your applications use on a daily basis. The best way to do this is to monitor current application performance to see how your servers are performing and to ensure they are healthy. The next thing to look at is usage. Knowing how your applications perform at any given time is key; it tells you the average IOPS value an application uses. That value varies across applications. For example, a heavily used Exchange Server will drive very different I/O than a database server or a multimedia application.


When you’re looking at IOPS and storage performance, you should keep the following in mind:

  • Map all the VMs against how many IOPS each one is consuming and determine if the values are higher than normal
  • Look at IOPS alongside latency, I/O size, read/write values, etc. to draw meaningful insights into why storage performance issues occur
  • Proactively monitor applications and VMs to determine if they are consuming too many storage resources
  • Have proper baselines in place to ensure high IOPS values aren't affecting your storage performance
  • Treat a sustained spike in I/O as a sign of potential storage issues
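The first two checks above can be sketched in Python: compute per-VM IOPS from raw counters, then pair IOPS with latency to flag suspects. The VM names, counter values, and thresholds are made up for illustration:

```python
# Map per-VM IOPS against a baseline and pair high IOPS with high
# latency to find likely drivers of storage performance issues.

def iops(read_ops, write_ops, interval_s):
    """Operations per second over a sampling interval."""
    return (read_ops + write_ops) / interval_s

def throughput_mbps(bytes_transferred, interval_s):
    """Data transferred per second, in MB/s."""
    return bytes_transferred / interval_s / 1_000_000

# Hypothetical (read ops, write ops, avg latency ms) sampled over 60 seconds.
vms = {
    "vm-web": (90_000, 30_000, 4),
    "vm-db":  (540_000, 360_000, 28),  # heavy consumer with high latency
}
BASELINE_IOPS, LATENCY_MS = 5_000, 20

# A VM is a suspect when it is both above its IOPS baseline and slow.
suspects = [
    name for name, (r, w, lat) in vms.items()
    if iops(r, w, 60) > BASELINE_IOPS and lat > LATENCY_MS
]
```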


Watch this short video where virtualization vExpert Scott Lowe shows you the steps involved in identifying the drivers of storage I/O for a datastore. Learn how to drill down from a datastore to see which VMs are tied to it, which VMs are driving storage I/O, spikes in I/O, and more using virtualization management software.


The need for a well-rounded reporting tool is not often spoken about. If you're wondering how much of a difference your reports really make to your organization, think again. A great reporting system makes a huge difference to senior IT staff, since many of the critical decisions they make, whether related to performance monitoring or to server and application availability, are based on the information presented to them.


There are a few challenges you can run into when trying to put together a weekly or a quarterly performance report.

  • Incomplete Reports: Sometimes your reports are not 100% complete. This can happen if you've gone over the specified page limit or if your report doesn't support graphics-heavy images.
  • Customization: You may not have rights to create or customize a chart or a table. For example, if you want to show top 10 expensive database queries or % memory used by database, the reporting tool may not allow you to show this information the way you want to.
  • Report Size: If your reports have a limitation on the size, then you’re forced to have a basic report with limited information.
  • Difficulty with Sorting Data: Tweaking the data can be challenging. For example, you may not be able to sort columns, group data, or limit results to the top 10, etc.


The reason it's extremely important to have a built-in reporting system is to ensure you have the flexibility to create reports in a dynamic environment. An ideal reporting tool should offer the following capabilities, whether you're looking at nodes, databases, memory, queries, disk usage, or CPU load:

  1. Adding Content: You should be able to add database tables, down applications, monitored processes, and anything else you see in your server monitoring software's web console.
  2. Edit Resources: You should have the option to select and edit a resource, for example, selecting a column or a group and sorting it by query type, memory load, etc.
  3. Customize Report Layout: Customizing your report is what matters. You should be able to show critical information that is happening in your servers and applications in real-time in the form of charts, tables, and gauges.
    1. Modify the number of columns in your report. Display charts and tables for a specific time period and view them side by side.
  4. Preview and Check the Report: Your reporting tool should let you double-check the report and ensure it only displays the data you require. If you need to modify it, the tool should let you go back to the layout builder to make any additional changes.
  5. Custom Properties: Custom properties help you organize your reports and find them later. You can mark reports as favorites so they appear at the top of your report list.
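To illustrate the kind of sorting and top-N grouping described in point 2, here is a small Python sketch that ranks query statistics by total time spent; the query names and timings are invented for the example:

```python
# Rank raw per-query stats into a "top N most expensive queries" table,
# the same shape of result a reporting tool's sort/limit options produce.

queries = [
    {"name": "SELECT orders", "avg_ms": 120, "calls": 4_000},
    {"name": "SELECT users",  "avg_ms": 15,  "calls": 90_000},
    {"name": "UPDATE stock",  "avg_ms": 450, "calls": 800},
]

def top_expensive(rows, n=10):
    """Rank by total time spent (average duration x call count), keep top n."""
    return sorted(rows, key=lambda q: q["avg_ms"] * q["calls"], reverse=True)[:n]

top = top_expensive(queries, n=2)
```

Note that ranking by total time, rather than by average duration alone, surfaces cheap queries that run very often, which is usually what a "top 10 expensive queries" report is after.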


In addition to the capabilities mentioned above, your reporting tool should allow you to generate out-of-the-box reports dynamically:

  • CPU
  • Memory & Virtual Memory
  • Response times
  • Statistic data
  • Resource usage
  • Hardware monitoring
    • Server warranties (Due to expire, set to expire, and expired)
  • Server and application health (Up, running, critical, down)
  • Group based reports
    • Servers, application specific groups, critical alerts, etc.


The reason out-of-the-box reports are beneficial is that an IT environment keeps growing every day. Requirements keep changing as your users access various kinds of information in your servers and applications. As a sysadmin, it's imperative to know what's going on in your environment. To enable proactive monitoring of your servers and applications, a built-in web-based reporting tool helps you analyze and drill into details, providing real-time insights in an organized manner.

Whenever there is an issue in your Windows® environment, one of the first places to troubleshoot is the Windows event logs. Issues are prone to arise, and they get logged whether they are caused by an internal or an external event. On the bright side, today you have server monitoring software that lets you browse through all the events in your event logs, as well as filter for specific events in a particular log.


It’s important to monitor Windows Event Logs because they indicate where the real issue lies. Here are a few examples of potential issues that get logged as events in your Windows environment.

  • Application Exceptions: Application exceptions are logged in the Windows application logs. They occur when an alert or issue is raised while an application is running.
  • User Lockouts: When users enter an incorrect password multiple times, they get locked out of their system. Both the failed password attempts and the lockouts get logged as individual events.
  • Failed Backups: This usually happens when the server has reached its maximum capacity or when you don't have folder rights to back up your information. All of these failures get logged as events.
  • Abruptly Failed Processes/Services: A process or service under heavy load can abruptly stop working because the system can't handle the volume, causing the system to freeze. These failures also get logged as events.

Real-Time Event Log Viewer: Monitor Event Logs from One Place

Windows generates hundreds or even thousands of events for one server in a day. It's not practical to go over each server and check logs to determine what's causing an application to fail over and over again. A busy sysadmin needs something that not only monitors event logs proactively but also presents them in an organized manner. Server monitoring software comes with a real-time event log viewer that gives sysadmins the flexibility to monitor any system's event logs remotely from any location. In addition, a real-time event log viewer offers the following benefits:

  • Lets you choose the type of log files you want to monitor (security logs, system logs, etc.). After selecting the log files, you can filter based on sources and look at logs by event log message, event log ID, severity, date and time of occurrence, the computer or user that generated the event, and so on.
  • Allows you to troubleshoot problems as they occur in your environment.
  • Connects to the server and starts collecting Windows event logs from a specific host—both current and historical logs.
  • Lets you create custom monitors for events that occur periodically.
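Conceptually, the filtering such a viewer performs looks like the following Python sketch. The event records are shown as plain dicts, where a real collector would pull them from the Windows Event Log API; the sample events and IDs are hypothetical:

```python
# Filter event records by log type, minimum severity, or event ID,
# mirroring the source/severity/ID filters described above.

def filter_events(events, log_type=None, min_severity=None, event_id=None):
    """Return events matching all of the supplied criteria."""
    sev_rank = {"Information": 0, "Warning": 1, "Error": 2, "Critical": 3}
    out = []
    for e in events:
        if log_type and e["log"] != log_type:
            continue
        if min_severity and sev_rank[e["severity"]] < sev_rank[min_severity]:
            continue
        if event_id and e["id"] != event_id:
            continue
        out.append(e)
    return out

# Hypothetical sample: a user lockout, an application crash, a service event.
events = [
    {"log": "Security",    "id": 4740, "severity": "Warning",     "msg": "Account locked out"},
    {"log": "Application", "id": 1000, "severity": "Error",       "msg": "Application crash"},
    {"log": "System",      "id": 7036, "severity": "Information", "msg": "Service state change"},
]
lockouts = filter_events(events, log_type="Security", event_id=4740)
```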


Finally, you have the added advantage of monitoring Windows event logs through server monitoring software rather than the event log viewer that comes with the Windows operating system. The built-in event viewer differs from one version of Windows to another, since it logs events based on the operating system version.
