
Geek Speak

41 Posts authored by: karthik

The month of July is always special for various reasons. We officially welcome the summer heatwave, and it’s also one of those times when we look forward to taking a break and spending time with family. Another reason July is special is that it’s time to give thanks to your friendly systems administrator: yes, the person you always call for help when you get locked out of your computer, when your email stops working, when the internet is down, when your mouse stops responding, or when you need just about anything.


To all the SysAdmins and IT superheroes out there, SysAdmin Day is fast approaching. And this year, just like every year, we at SolarWinds dedicate the entire month of July to SysAdmins across the world and we invite you to join the festivities as we celebrate SysAdmin Day in the biggest possible way.


SysAdmin Day can mean a lot of things to IT pros and organizations. But, what I think makes SysAdmin Day a day to remember is being able to share your journey with fellow SysAdmins and IT pros. So, in the comment section, share why you chose this career path, the meaning behind those defining moments, or remind us about the day you knew you were going to be a SysAdmin. Take this time to also narrate funny instances or end-user stories that made you laugh or challenging situations you successfully dealt with in your very own server rooms.


We’re super thrilled about this year’s SysAdmin Day because we will have fun weekly blogs on Geek Speak to celebrate, plus a thwack monthly mission that offers weekly contests, exciting giveaways, and some sweet prizes.


Now it’s time to get the party started. Visit the July thwack monthly mission page today!


IT pros now need to know how to troubleshoot performance issues in remotely hosted apps and servers, in addition to monitoring and managing the servers and apps hosted locally. This is where a tool like Windows Remote Management (WinRM) comes in handy, because it allows you to remotely manage, monitor, and troubleshoot applications and Windows Server performance.


WinRM is based on Web Services-Management (WS-Management), which uses Simple Object Access Protocol (SOAP) requests to communicate with remote and local hosts, multi-vendor server hardware, operating systems, and applications. If you are predominantly running a Windows environment, WinRM provides remote management capabilities that let you do the following:

  • Communicate with remote hosts using a port that is typically left open by firewalls and client machines on a network.
  • Quickly start working in a cloud environment and remotely configure WinRM on EC2, Azure, etc. and monitor the performance of apps in such environments.
  • Ensure smoother execution and configuration for monitoring and managing apps and servers hosted remotely.


Configuring WinRM

For those who rely on PowerShell scripts to monitor applications running on remote hosts, you will first need to configure WinRM. But this isn’t as easy as it sounds: the process is error prone, tedious, and time consuming, especially when you have a really large environment. To get started, you will need to allow WinRM through the Windows Firewall on each server you want to configure. Here is a link to a blog that explains step-by-step how to configure WinRM on every computer or server. Key steps include:

  • Verify the Windows Remote Management (WinRM) service is running on the target server
  • Run winrm quickconfig (or Enable-PSRemoting) to configure the service and create a listener
  • Add a Windows Firewall exception for the WinRM listener port
  • On workgroup machines, add the management host to the TrustedHosts list

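As a rough sketch of those steps, run from an elevated PowerShell prompt on the target server (the hostnames are hypothetical; Enable-PSRemoting, Test-WSMan, and the WSMan: drive are standard built-ins, but verify the details against your Windows version):

```powershell
# Configure the WinRM service, create an HTTP listener,
# and add the Windows Firewall exception in one step
Enable-PSRemoting -Force

# Verify the listener configuration
winrm enumerate winrm/config/listener

# Workgroup machines only: trust the management host (hypothetical name)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "admin-host.example.com" -Force

# From the management machine, confirm WinRM connectivity
Test-WSMan -ComputerName "target-server"
```

On domain-joined machines the TrustedHosts step is unnecessary, since Kerberos handles mutual authentication.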
Alternative: Automate WinRM Configuration

Unfortunately, manual methods can eat up too much of your time, especially if you have multiple apps and servers. With automated WinRM configuration, you can be remotely executing PowerShell scripts within minutes. SolarWinds’ free tool, Remote Execution Enabler for PowerShell, helps you configure WinRM on all your servers quickly:

  • Configure WinRM on local and remote servers.
  • Bulk configuration across multiple hosts.
  • Automatically generate and distribute certificates for encrypted remote PowerShell execution.


Download the free tool here.


How do you manage your servers and apps that are hosted remotely? Is it using an automated platform, PowerShell scripts, or manual processes? Whatever the case, drop a line in the comments section.

Last week, Omri posted a blog titled, What Does APM Mean to You? Personally, I think it means several things, but it really got me thinking about security issues related to APM, and how high a concern they are in today’s IT world. Systems and application environments are especially prone to denial-of-service attacks, malware, and resource contention issues, whether caused by remote attacks or other miscellaneous security problems.


I've always looked at continuous application or systems monitoring as something that goes hand-in-hand with security monitoring. If SysAdmins are able to provide security insights, along with systems and application performance, it will only benefit the security and operations team.  After all, IT as a whole works best when teams interface and collaborate with each other.


It’s not ideal to rely on application performance monitoring software for IT security, but such tools are certainly designed with some basic features that deliver security-related capabilities to complement your existing IT security software.


Here are some key security-related use cases you can gain visibility into using application and systems monitoring software.


Check for important updates that should be applied

Forgetting to install an OS or hardware update may put your servers and apps at risk, leaving them prone to attacks from malicious software and other vulnerabilities. OS updates ensure such vulnerabilities are corrected soon after they are discovered. In addition, you should report on the number of critical, important, and optional updates that have not yet been applied to the server. Remember, you can also view when updates were last installed and correlate that time period with performance issues; sometimes these updates cause unexpected performance impacts.



Keep an eye on your antivirus program

Monitor the status of your antivirus software: whether it is installed, whether it is running, and whether key definition files are out of date. If you fail to check that your antivirus software is up to date and running, you increase your chances of security issues.


Ensure your patch updates are installed

Collect information related to patch updates and answer questions like: are they installed, what is their severity, and by whom and when were they installed? You install patches so that security issues can be fixed and programs and system functionality improved. If you fail to apply a patch once an issue has been detected and fixed, hackers can leverage this publicly available information and create malware for an attack.



View event logs for unusual changes

Monitor event logs, looking for and alerting on potential security events of interest. For example, you can look for account lockouts, logon failures, or other unusual changes. If you don’t have other mechanisms for collecting log data, you can leverage some basic log collection, such as event logs, syslog, and SNMP traps. You can also use these for troubleshooting.
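As a toy sketch of that kind of filtering over exported Security-log records (the record shape here is an assumption; event IDs 4625 and 4740 are the standard Windows logon-failure and account-lockout IDs):

```python
# Security-log event IDs worth alerting on:
# 4625 = logon failure, 4740 = account lockout.
SECURITY_EVENTS = {4625: "logon failure", 4740: "account lockout"}

def flag_security_events(records):
    """Return (event_id, description, account) for each record of interest."""
    alerts = []
    for rec in records:
        event_id = rec.get("EventID")
        if event_id in SECURITY_EVENTS:
            alerts.append((event_id, SECURITY_EVENTS[event_id],
                           rec.get("Account", "?")))
    return alerts
```

In practice you would feed this from Get-WinEvent exports or your log collector rather than hand-built records, and route the results to your alerting pipeline.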



Diagnose security issues across your IT infrastructure

Troubleshoot security issues by identifying other systems that have common applications, services, or operating systems installed. Say a security issue with an application or website occurs: you can identify which systems were in fact affected by quickly searching for all servers related to that website or application.



While these are just a few use cases, tell us how you use your APM software: do you use it to monitor key system and app logs, do you signal your IT security teams when you see something abnormal, or do you rely on an APM tool for basic security monitoring? Whatever the case, we’re curious to learn from you.

A system administrator’s roles and responsibilities span various dimensions of an IT organization. As a result, keeping tabs on what’s going on in the world of technology, including vendors and their products, the latest product releases, end-user experiences, and troubleshooting performance issues, is just one area of focus. Over time, system administrators turn into thought leaders thanks to the technology, industry, and domain experience they gain and use. They pass on their knowledge to colleagues and technology aficionados, and even organizations turn to such experts to hear what they have to say about where IT is headed.


On that note, we at SolarWinds® are glad to have brought together IT gurus, geeks, and fellow system administrators to share their thoughts on system and application performance management. The event took place recently in the form of a #syschat on Twitter. For those who didn’t get a chance to tune in, here are some highlights:


Application monitoring: Generally, there is a consensus that application downtime affects business performance. Given that businesses are paranoid about this, why hasn’t the adoption of application monitoring in some organizations taken off like it should? Experts like @standaloneSA and @LawrenceGarvin feel that, “Some of it has to do with need.” Or, as @patcable points out, “Admins don’t know what to monitor, and apps don’t provide the right data.” This is true for various reasons. Often, IT pros are given a mandate by business groups saying that all apps are critical. Therefore, they have to watch apps closely for performance issues. Before answering the “what to monitor” question, IT pros need to ask, “Why should I monitor these apps, are they really that critical?” Knowing the answer to this question eliminates the additional noise, and you can focus only on what to do with the really critical apps and ensure that you’re monitoring the right metrics.


Apps in the cloud: Monitoring the performance of apps in the cloud is, again, not a direct solution to solving a performance problem that can arise from your apps running in the cloud. As more applications are being deployed in the cloud, the level of difficulty in monitoring those apps gets higher. IT pros have to really get down to understanding the “how,” which takes time. For example, @vitroth said, “Ops finds it hard to monitor what engineering doesn't instrument. Give me counters, categories and severities!” When IT pros have difficulties managing apps running on a physical server, the cloud layer is certainly going to be an unfamiliar place, and new complications will arise.


Skill sets for SysAdmins: There is a lot of buzz about whether SysAdmins will one day need coding skills. It may not be mandatory for IT pros to have programming skills, but they might want to develop them so they can create and automate tools. While this was only one opinion, others like @patcable suggested that “sysadmins are going to have to become more comfortable writing stuff in some language other than shell.” Learning and understanding your IT infrastructure and environment are essential, and IT pros should be willing to learn quickly because ‘things aren’t slowing down.’ Where gaining technical knowledge and skills is concerned, it always helps to “learn a programming language, version control w/git, config management, and keep an eye on Hadoop,” as recommended by @standaloneSA.


What are your thoughts on these topics? Where are you with application monitoring in your organization? What difficulties do you see with monitoring apps in the cloud? Do you see DevOps improving the adoption of application monitoring? We’re happy to hear your views and opinions. Follow us on @SWI_Systems to learn more.

We know there are many organizations out there that do no asset inventory at all, aside from slapping an organizational serial number tag on the notebook and noting where it went.


But think about the next time you have to replace legacy systems. For example, say you need to replace highly inefficient power supplies that are drawing hundreds of kilowatts of power across your datacenter. With new systems, you could achieve 10x the capacity while drawing only a quarter of the power load. This could potentially be a self-funding project in the electrical and cooling savings alone! But not so fast, says your budget approver. Without a proper inventory of those legacy systems, alongside your hardware warranty data, it’s nearly impossible to make a tangible case to your budget approver.


OK, so hopefully your systems have energy-efficient power supplies but you get the point: server and IT asset management is a must. It provides you the means to achieve complete visibility into your infrastructure inventory, helping you gain an in-depth understanding of:

  • Where servers and other hardware exist
  • Where components reside
  • How they are used
  • What they cost
  • When they were added to the inventory
  • When warranties expire and upgrades are due
  • How they impact IT and business services


Having this level of visibility into server inventory and performance helps SysAdmins improve infrastructure efficiency and performance, handle year-end budgeting, show how existing server hardware assets yielded a strong ROI, and plan and forecast. That conversation about budgeting for the next hardware upgrade just got a lot easier.


But let’s say you’re one of the few organizations with an extensive Excel worksheet that captures all relevant information as each system is first unboxed. You’re not encountering any pain points so far but, no doubt, they’re surely on the way.


In the beginning, it’s a cheap and straightforward answer to gaining visibility into your inventory. But suddenly, more people are hired, or you’re implementing cloud-based services, or department ‘X’ wants to upgrade or replace its legacy business application; all of this requires additional equipment or the upgrade/replacement of existing equipment. As your IT organization grows, your spreadsheet grows exponentially into a big, hairy mess of manual tracking, taking more and more effort to maintain and falling further and further down your list of priorities.


So we want to know: where do you fall on the spectrum of asset management? At what point is something like an automated asset management tool warranted? When does a simple spreadsheet do the trick? And at what point is that spreadsheet doomed to inefficiency?

It’s a known fact that organizations are turning toward virtualization for various reasons. An IT organization can add several business applications, databases, and so on without adding new hardware to its IT environment, thus saving hundreds of dollars and optimizing existing hardware. Often, development teams rely heavily on taking snapshots of their dev or test environments; in the event of hardware failure, or if difficulties arise with restoring changes made to apps and databases, you can quickly lose this vital information. One of the most crucial values virtualization offers companies is that it can help save on technology maintenance costs. Imagine having to run every workload on its own physical server: you would end up supporting end-users or customers at a very high cost.


Despite the various short-term and long-term benefits virtualization offers, managing this complex infrastructure is a mammoth task for any IT pro. Most organizations would prefer to rely on a proactive virtualization management tool that offers deep, end-to-end visibility into your virtual and storage environments. SolarWinds® Virtualization Manager, or VMAN as we fondly call it, gives you unified performance, capacity planning, configuration, VM sprawl control, VDI, and showback management for VMware® and Hyper-V®. In addition, VMAN integrates with SolarWinds products such as Server & Application Monitor and Storage Manager, providing contextual awareness of performance issues across all layers, including applications, databases, virtual infrastructure, server hardware, and storage systems.


Just this month, SolarWinds conducted a survey in which 136 VMAN customers participated. The pool of respondents included VirtAdmins, SysAdmins, capacity planners, IT generalists, etc. from North America. The objective of this survey was to find out how our customers are using VMAN to troubleshoot performance challenges in their virtual environment and what their ROI was after deploying VMAN.




After deploying VMAN:

  • 63% of respondents spent an average of only $6,000 per year on software costs to monitor around 250 VMs
  • Respondents decreased downtime from 7-15 hours to less than 3 hours per month
  • Respondents who spent 11-20 hours manually searching for dormant VMs now spend only 1-5 hours per month
  • Respondents who spent around 9 hours to detect VM sprawl every month now only spend a little over 2 hours


What’s also interesting is that a large chunk of our customers say they leverage the product integration between VMAN and Server & Application Monitor, Storage Manager, and Network Performance Monitor. 47% of customers go on to say their previous virtualization management tool didn’t offer such integration capabilities for end-to-end (app-to-storage) visibility, and that’s why they made the switch to SolarWinds.


You can view the complete survey findings in the following presentation.


Virtualization Manager Survey: Features, Competitive, and ROI


If you’re an existing VMAN customer and haven’t had a chance to participate in the survey, tell us what value you see in using VMAN. Take this time to also let us know what you feel we could do better in the coming releases of Virtualization Manager.

There is a perennial question of which is better: synthetic end-user monitoring (SeUM) or real end-user monitoring (ReUM)? Whether you use one of the two or both, depending on your business scenario, the ultimate goal is to improve end-user experience. You achieve this by monitoring the performance and continuous availability of websites and Web applications. Let’s take a closer look at both.


Organizations consider synthetic monitoring an invaluable tool, as it helps detect issues in websites and applications at an earlier stage, allowing you to address and fix issues prior to deployment. Further, it provides the ability to drill down into individual Web components and diagnose performance problems affecting websites and Web apps. Synthetic monitoring offers other benefits, such as the ability to:

  • Record any number of transactions and test Web app performance for deviations
  • Easily locate front-end issues in your websites, whether in HTML, CSS, JavaScript, etc.
  • Proactively monitor response time for different locations and compare against baselines
  • Get notified when a transaction fails or when a Web component has issues
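The baseline comparison in the list above can be sketched simply; in this illustration the locations, baseline values, and 25% tolerance are all made-up numbers, and the samples would come from your synthetic probes:

```python
# Hypothetical per-location response-time baselines, in seconds.
BASELINES_S = {"us-east": 1.2, "eu-west": 1.8}

def deviations(samples, tolerance=0.25):
    """Return {location: (measured, baseline)} for sampled response
    times exceeding baseline by more than the fractional tolerance."""
    alerts = {}
    for location, measured in samples.items():
        baseline = BASELINES_S.get(location)
        if baseline is not None and measured > baseline * (1 + tolerance):
            alerts[location] = (measured, baseline)
    return alerts
```

A real synthetic monitor would run the recorded transaction from each location on a schedule, feed the timings into a check like this, and fire a notification for each deviation.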


Another key area where synthetic monitoring is beneficial is when websites host live information. For example, game scores, trading stocks, ads, videos, etc. Continuously monitoring these websites will help to proactively identify issues related to third-party content. With SeUM, information can be interactively shown in the form of waterfall charts, transaction health, steps with issues, and so on. Additionally, this method allows you to easily pinpoint components with performance issues.


On the other hand, real-time monitoring tools present a different angle on monitoring end-user experience. When you see the world through users’ eyes, you gain insight into their behavior and can assess overall user experience in real time. Since real-time monitoring doesn’t follow pre-defined steps or measure preset Web transactions, you have access to all the data you need. Moreover, you gain visibility into:

  • Application usage and performance, tracked per individual user
  • Location-specific usage and performance
  • The dynamic impact of changes you make to your applications


In addition, you will always be aware of your website’s status from various locations, because you’re able to observe traffic and monitor user interaction. Without traffic, though, real-time monitoring is meaningless: with no visitors, there are no performance metrics to tell you whether to adjust navigation or change the look and feel of Web pages. Here’s where synthetic monitoring has a slight edge, since you don’t need real traffic to measure website performance; it can monitor website performance from any location using pre-defined steps and transactions.


While SeUM and ReUM each have their own benefits, it really boils down to what your business model is and how your business is aligned with end-users. IT pros can certainly leverage both within the same environment, but since the two approaches are built for completely different use cases, you will have to use them independently to monitor and measure user experience.


Tell us your stories. How does your IT organization monitor end-user experience today?

End-users rely on technology providers to offer simple solutions to complex problems, something that helps both business groups and their customers. Specifically, Web services expedite application-to-application delivery for building integrated systems out of Web application components. Without proper knowledge of performance monitoring, such an application could run into a number of issues. Therefore, it’s important to understand the specifics of measuring the performance of Web services.


Measuring the Performance of Web Services

When you use various Web applications, you’ll want to make sure the Web services connected to the application are thoroughly measured. Some reasons for measuring Web services: to determine whether services have enough storage capacity, are secure and free from vulnerabilities, operate well within SLAs, and so on. Web components provide seamless data exchange and transfer; a prime example is accessing your bank account online to check the account balance, make a transaction, or pay a credit card bill.


By keeping track of Web services, you will be able to better monitor application performance. In turn, issues can be identified and fixed ahead of time, ensuring that applications remain free of errors and glitches.

Organizations use Web services to improve business standards and processes. However, there’s still a possibility that you may run into issues. Some of these issues include:

  • Web service availability: No matter how well your website or Web application runs, it’s still prone to availability and performance issues. When your website is down, Web services will automatically experience availability issues.
  • HTTP performance: Web pages and websites are built on the HTTP communication protocol, the same protocol Web services use. The challenge with HTTP is that it establishes a connection with a server only for a specific time, until the data is transferred, so a lot of time is spent creating and terminating connections to the server. In addition, Web services mostly exchange XML messages, and converting data to and from XML can be time consuming during the communication phase.
  • Reliability: Web services mostly rely on HTTP, which doesn’t assure data delivery or guarantee a response. These reliability limitations are inherent to HTTP itself; you can choose other protocols to try to avoid them.
  • Authorization: Web services that ignore authentication standards can end up transferring data directly to an unauthorized source.


There are several other challenges organizations face, such as scalability, testing of Web services, Web service communication, etc., that can degrade the performance of your websites or applications.


Monitoring Web Services

Despite all these challenges, utilizing Web services has its own benefits, from both a business and a technology standpoint. Before realizing those benefits, it’s crucial to figure out how to avoid the issues that degrade performance. Before services affect the overall performance of the application and the end-user experience, it’s important to monitor them from the beginning: not just the Web services, but the application, any processes that may slow the service, and the server itself. Monitoring Web services continuously gives you insight into application health, so you can start helping end-users and customers manage their SaaS and other internal applications. Here are some of the benefits of monitoring Web services:

  • Monitor internal and SaaS-based apps running across all servers in the environment
  • Determine key Web service availability and latency, and validate the content returned by each query
  • Save organizations time and money, since Web services utilize protocols that are already in wide use and therefore require very little investment
  • Access Web services easily when you want to track or modify data
  • Pinpoint the root cause of problems in minutes with built-in alerting and reporting capabilities
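A minimal sketch of the availability, latency, and content checks listed above; the URL, the expected key, and the injectable `fetch` parameter are illustrative choices, not a prescribed API:

```python
import json
import time
from urllib.request import urlopen

def check_json_service(url, expected_key, timeout=5, fetch=urlopen):
    """Probe a JSON Web service: availability, latency, content validation.

    `fetch` is injectable so the check can be exercised without a live
    endpoint; by default it is urllib's urlopen.
    """
    start = time.monotonic()
    try:
        with fetch(url, timeout=timeout) as resp:
            body = resp.read()
    except OSError as exc:  # covers URLError, connection failures, timeouts
        return {"available": False, "latency_s": None,
                "valid": False, "error": str(exc)}
    latency = time.monotonic() - start
    try:
        payload = json.loads(body)
        valid = expected_key in payload
    except ValueError:  # response body was not valid JSON
        valid = False
    return {"available": True, "latency_s": round(latency, 3),
            "valid": valid, "error": None}
```

A real monitor would run such a probe on a schedule from several locations and alert when availability drops, latency crosses a threshold, or the returned content fails validation.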


Even if precautionary measures are taken, sometimes issues with applications just can’t be avoided. However, with proper knowledge of measuring and monitoring Web services, application downtime can be identified and remediated in a timely fashion. Another great step is to implement an application performance monitoring tool: it will make measuring the performance of Web services substantially easier, and it will provide the insight you need to optimize overall performance.


Identify issues and start monitoring your JSON & SOAP Web services.

System Center is a great platform for application monitoring and management, but it contains a few gaps where native functionality isn’t provided. Fortunately, System Center is an extensible platform, and SysAdmins can leverage third-party tools that quickly fill those technology gaps and often plug into the System Center infrastructure to provide seamless visibility. SolarWinds®, a partner in the System Center Alliance program, offers multiple capabilities to help SysAdmins fill functional gaps in System Center, including:

  • Non-Microsoft® application monitoring & custom application monitoring
  • Capacity planning for Hyper-V® & VMware® virtual environments
  • Third-party patch management
  • Multi-vendor database performance monitoring
  • Storage & network management


An application performance management tool such as Server & Application Monitor (SAM) offers additional visibility beyond System Center. Moreover, it can be easily deployed alongside your System Center environment so you can seamlessly manage other aspects of your IT environment.


  • Monitor the Monitor: SolarWinds has worked with Cameron Fuller, Operations Manager MVP, to build application monitoring templates to monitor the health of System Center Operations Manager (services, Agent & Management Server) and System Center Configuration Manager 2012.  To get a detailed list of the services, apps, and TCP ports monitored, please see details in the Template Reference Document
  • Deep Microsoft application monitoring: Go deeper than just looking at availability and performance of critical Microsoft applications using System Center. Microsoft application monitoring tools like Server & Application Monitor (SAM) from SolarWinds provide a level of depth to manage issues in applications like Exchange and SQL Server® that System Center doesn’t provide. For Exchange environments, Server & Application Monitor provides details on individual mailbox performance (attachments, synced mailboxes, etc.). In addition, you can monitor Exchange Server performance metrics like RPC slow requests, replication status checks, and mailbox database capacity.   SAM also provides detailed dashboards for finding SQL server issues like most expensive queries, active sessions, and database & transaction log size. 
  • Non-Microsoft application monitoring: An IT infrastructure isn’t only limited to Microsoft and because System Center has limited capabilities to manage non-Microsoft applications, IT pros require management packs for non-Microsoft application monitoring.  SolarWinds provides the ability to quickly add application monitoring for non-Microsoft apps. Additionally, SolarWinds supports over 150 apps out-of-the-box.  With SolarWinds’ Management Pack, you can integrate application performance metrics right into System Center Operations Manager. 
  • Capacity planning for Hybrid Virtual Environments: In addition to having visibility into application performance, SolarWinds provides capacity planning for organizations with larger virtual server farms or a private cloud infrastructure. Whether it’s Microsoft Hyper-V or VMware, you get instant insights into capacity planning, performance, configuration, and usage of your environment with the built-in virtualization dashboard.
  • Contextual visibility from the app to the LUN: For applications that leverage shared compute and storage resources, Admins are able to contextually navigate from the application to the datastore and related LUN performance across multiple vendors in each layer of the infrastructure. SolarWinds supports over 150 apps, Hyper-V & VMware, and all the major storage vendors.  Integrated visibility is provided natively with SolarWinds Server & Application Monitor, Virtualization Manager, and Storage Manager.
  • Third-party patch management: Expand the scope of WSUS by leveraging pre-packaged third-party updates for common apps, such as Java®, Adobe®, etc. A patch management tool such as Patch Manager, which natively integrates with System Center, manages patch updates for Microsoft as well as third-party applications. With this integration, you can deploy, manage, and report on patches for third-party applications, get notified when new patches are available for deployment, and synchronize patches automatically on a fixed schedule.
  • Database performance management: System Center offers limited visibility into performance of non-Microsoft databases from Oracle®, IBM®, Sybase®, etc.  Database Performance Analyzer is an agentless, multi-vendor database performance monitoring tool that goes deeper into monitoring the performance of databases. For example, it allows Admins to look at the performance problems most impacting end-user response time and provides historical trends over days, months, and years.
  • Network management: In the previous release of System Center, Microsoft added limited network management capabilities. SolarWinds provides advanced network management capabilities, including network top talkers, multi-vendor device details, network traffic summaries, NetFlow analysis, and more. This information can be integrated into System Center Operations Manager with the free Management Pack.


In conclusion, Admins should utilize a single, comprehensive view of servers, applications, and the virtual & network infrastructure, especially when all are integrated into System Center. In turn, you can look at the exact performance issue, easily detect hardware and application failures, and proactively alert on slow applications.


Learn how SolarWinds can help you by allowing you to gain additional visibility beyond System Center for comprehensive application, virtualization, and systems management by checking out this interactive online demo.

A database admin has responsibilities to fulfill around the clock: ensuring databases are backed up, attending to application breakdowns that affect database performance, verifying the accuracy of information within the organization’s database, and constantly monitoring the entire database server. Fulfilling all these responsibilities is what makes a DBA one of the most valuable players in an organization. On any given day, database admins have a set of routine tasks to attend to, including:


SQL Server® Logs

DBAs view SQL logs to see whether SQL Agent jobs have completed all required operations. If a job status is incomplete, this can lead to errors within the database. Looking at SQL logs regularly ensures an issue or database error doesn't go unnoticed for an extended period. Login failures, failed backups, database recovery time, etc. are key fields a DBA looks for in SQL logs. Reviewing SQL logs is especially beneficial when you have critical databases in your environment.
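The kind of scan a DBA runs over the error log can be sketched in a few lines. This is a minimal illustration with made-up log lines and an abbreviated keyword list, not the full set of events worth watching:

```python
# Minimal sketch: scan SQL Server error-log lines for events a DBA cares about.
# The sample lines and KEY_EVENTS list are illustrative assumptions.
KEY_EVENTS = ("Login failed", "BACKUP failed", "Recovery of database")

def flag_log_lines(lines):
    """Return only the log lines that mention a key event."""
    return [line for line in lines if any(evt in line for evt in KEY_EVENTS)]

sample_log = [
    "2015-06-01 02:00:13 Backup   Database backed up. Database: Sales",
    "2015-06-01 03:12:45 Logon    Login failed for user 'appsvc'.",
    "2015-06-01 03:15:02 Backup   BACKUP failed to complete the command BACKUP DATABASE HR",
    "2015-06-01 04:00:00 spid7s   Recovery of database 'HR' (6) is 25% complete.",
]

for line in flag_log_lines(sample_log):
    print(line)
```

In practice the lines would come from the actual error log rather than a hard-coded list, and the keyword list would be tuned to the events your environment considers critical.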


Performance Tuning

To fully maximize the potential of the database server and ensure applications don't suffer downtime due to a SQL issue, it has become a best practice for DBAs to monitor SQL Server performance metrics. Whether an issue is due to an expensive query, fragmented indexes, or database capacity, DBAs can set up baseline thresholds so they're notified whenever a metric approaches its threshold. It also helps to glance through these metrics to see workloads and throughput so you can adjust your database accordingly.
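The baseline-and-threshold idea can be sketched as follows; the sample metric values and the 90%-of-peak rule are illustrative assumptions, not recommendations:

```python
# Minimal sketch of baseline-driven alerting: derive a warning threshold from
# historical samples and flag new readings that approach it.
def baseline_threshold(history, factor=0.9):
    """Warning threshold: a fraction of the peak value seen in the baseline window."""
    return max(history) * factor

history = [310, 295, 340, 325, 330]   # e.g., batch requests/sec over past days
threshold = baseline_threshold(history)

for reading in (300, 320, 335):
    if reading >= threshold:
        print(f"WARN: {reading} is at or above baseline threshold {threshold:.0f}")
```

A real tool would compute baselines per metric and per time window, but the principle is the same: the alert level follows observed behavior rather than a guessed constant.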


Database Backup

DBAs have to regularly test backups and make sure they can be restored. This protects them from issues pertaining to applications and user backups if they've been deployed to a different host, server, or data center. Regularly testing backups also helps DBAs verify they're staying within their SLAs.
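The SLA check behind restore testing can be sketched like this; the timestamps and the 24-hour recovery point objective are illustrative assumptions:

```python
# Minimal sketch: check the most recent restore-tested backup against a
# recovery point objective (RPO). All figures here are illustrative.
from datetime import datetime, timedelta

RPO = timedelta(hours=24)

def backup_within_rpo(last_verified_backup, now):
    """True if the last successfully restored backup is recent enough."""
    return (now - last_verified_backup) <= RPO

now = datetime(2015, 6, 2, 8, 0)
for when in (datetime(2015, 6, 1, 23, 0), datetime(2015, 5, 31, 6, 0)):
    ok = backup_within_rpo(when, now)
    print(f"backup from {when}: {'within' if ok else 'OUTSIDE'} RPO")
```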


Reporting & Dashboard

As the size of a database grows, so does the complexity of maintaining and monitoring it. Database issues have to be addressed as soon as possible, so DBAs need real-time data on SQL performance before a disaster occurs. For this reason, up-to-date reports and dashboards provide visibility into the server, hardware resources, SQL queries, etc. DBAs need access to reports on database size, availability and performance, expensive queries, transaction logs, database tables by size, and so on.


Activities such as database maintenance, data import/export, and running troubleshooting scripts are other areas DBAs spend their time on. To manage and optimize SQL Server, it's essential to consider a performance monitoring tool that comprehensively monitors vital metrics within your SQL Server and simplifies your "to do" list of activities.


Read this whitepaper to learn more about how to monitor your SQL Server.

I had an opportunity recently to interview a long time SolarWinds Server & Application Monitor (SAM) customer, Prashant Sharma, IT Manager, Fresenius Kabi (India).


KR: As an IT manager, what are your primary roles and responsibilities?

PS: I’m in charge of the whole of IT where I manage the IT environment, look at IT security, data center performance, monitoring, and also application development.


KR: What other SolarWinds products do you currently own other than SAM and how are you using SAM?

PS: Other than SAM, we currently have NPM and NTA. For monitoring IT security, we use RSA, but I'm not happy with RSA as it's too expensive to maintain, and we're looking at replacing it with LEM. We have around 98 nodes, and we use SAM to monitor server performance and critical applications like SQL, SharePoint, IIS, and AD. We also monitor custom and out-of-the-box applications for R&D using SAM. Since we have a huge R&D center, we monitor applications in both development and production environments. We use the built-in virtualization module within Orion to monitor our virtual environment, as we are a VMware shop.


KR: Why did you choose SAM and what other products did you look at before narrowing down on SAM?

PS: We chose SolarWinds products because they are easy to implement and troubleshoot. We were actually able to set up in about an hour, and we have never had to reach out to SolarWinds for any issues in the last four years of owning the product. SAM is cost-effective software, and it has all the features that we want. We also evaluated other products from CA and WhatsUp Gold, but ended up going with SolarWinds as it was fairly simple and straightforward. We also own NPM and NTA, and are able to monitor for issues easily with those as well.


KR: How was it before using SAM and how have things changed right now?

PS: SolarWinds is the only company we use for monitoring. Before SAM, we didn't know how to troubleshoot issues or where to go to look for them and solve them. Now we can identify performance issues more easily with SAM and save a lot of time and money. I also leverage thwack to find answers to anything I need from other IT pros.


Learn more about Server & Application Monitor

Application performance monitoring (APM) is a broad subject that tends to look at how businesses use enterprise-level applications. These applications help meet end-user requirements and must continuously maintain high availability to ensure optimal performance. Furthermore, organizations that depend on APM technology to scale certain areas of their business must understand that innovation plays a vital role. After all, CIOs and other decision makers will want to look at the ROI over a period of time.


The Role of APM Software

Organizations with a sizeable IT infrastructure need APM software to effectively manage IT assets, ensuring they last as long as expected and deliver from the moment they're set up. For example, say you're an enterprise with 1,000+ physical and virtual servers. These servers host mission-critical applications that support a certain business group. As an IT pro, it's your duty to ensure server hardware stays healthy and applications are always available without downtime. Managing this manually isn't an option, since there are several nuts and bolts you'll have to look at while also supporting and managing other areas of the environment.


Having an APM tool means you can automate availability and performance management for your servers and applications. In addition, APM tools offer various benefits: for example, getting automatically notified when something goes wrong with servers and apps. Within minutes, APM tools help pinpoint where an issue originates, and they can monitor application performance in phases before apps go live in a production environment. In turn, you can fix minor issues before the end-user starts pointing them out, and much more.


Where APM Fits

APM as a technology has evolved from primarily monitoring only the set of applications that IT uses. Today, it has a significant impact on various groups in an organization: for example, industry-specific users leveraging APM to manage their business needs and address everyday challenges, and IT pros looking for tools that go deeper to manage the performance of critical business applications, like Exchange and SQL Server®.


Manage Critical Applications: Several times a week, IT pros are asked to help users by unlocking their accounts. APM tools these days not only monitor Active Directory® metrics, but also have built-in functionality to manage logs generated by critical applications.


Manage Custom Applications: Industries like healthcare and financial services are largely dependent on APM tools to help improve customer support, help streamline auditing and compliance processes, manage large amounts of customer data, and so on. For example, the banking industry may have poor customer satisfaction when sites are slow to respond to user requests. Monitoring online transactions based on response time, traffic, etc. will help business groups streamline their system.


Manage Overall IT Infrastructure: It's not enough for IT personnel to know only the performance of the network. To really identify the root of an issue, IT needs APM to figure out whether the network is at fault, whether the problem has to do with inadequate hardware resources, or whether an app failure has caused end-user issues.


Mobile IT Management: Beyond accessing email on a smartphone, IT organizations and business groups feel the need to have critical applications at their fingertips. Using an APM solution on the go means instant notifications about components with issues can be routed to the right teams, so problems can be fixed in real time, in a matter of minutes.


Role of APM in Analytics: An APM tool gives you different types of information on the performance of your servers and hardware, operating systems, critical applications, databases, etc. Making sense of this data is essential to determine problems that may arise.


Manage Web Applications: Getting visibility into the performance of your websites and Web applications can help you quickly pinpoint and resolve the root cause of issues. APM helps you determine if there are constraints on resources, such as Web server, application server, or databases.


Manage Virtual Environments: Organizations may have virtual admins managing the health of virtual appliances. Virtual admins also need visibility into how applications running in VMs are performing. APM also allows you to plan capacity management for a given application and its underlying resources.


Whether it’s analytics, cloud-based apps, or managing various assets within your IT infrastructure, APM fits well and more often than not, provides assistance in managing your IT environment.

According to Forrester, the SaaS application and software market is expected to reach $75 billion in 2014. Forrester goes on to state that the "browser-based access model for SaaS products works better for collaboration among internal and external participants than behind-the-firewall deployments." When you think about it, today's users at organizations spend most of their time accessing various "smart applications." Whether it's Office 365 or Salesforce, the user base accessing and using these applications is increasing tremendously.


Monitoring the performance of these applications makes a huge difference, considering more and more users are adopting SaaS and cloud-based applications. Monitoring server load, user experience, and bottlenecks is crucial to optimizing overall performance, whether the application is hosted on-premises, in a public cloud, or using a hybrid approach. If your organization uses several SaaS-based applications, consider the following when monitoring their performance and availability.


Monitor User Experience: Since users are going to be accessing the application extensively, you should monitor overall user experience and users' interaction with the application. This allows you to analyze performance from the end-users' perspective. Slow page load times or image matching issues can be a first indication that there's an issue with the application. By drilling in deeper, you can determine if the problem is related to a specific page and location. Ultimately, monitoring user experience allows you to improve and optimize application performance, which results in improved conversion rates.
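As a rough sketch of how page-load samples translate into a "slow page" alert (the sample timings and the two-second budget are illustrative assumptions):

```python
# Minimal sketch: aggregate synthetic-transaction timings per page and flag
# pages whose average load time exceeds a budget.
from collections import defaultdict

BUDGET_SECONDS = 2.0   # illustrative per-page load-time budget

def slow_pages(samples):
    """samples: iterable of (page, seconds). Return pages whose average is over budget."""
    totals = defaultdict(list)
    for page, seconds in samples:
        totals[page].append(seconds)
    return sorted(
        page for page, times in totals.items()
        if sum(times) / len(times) > BUDGET_SECONDS
    )

samples = [
    ("/login", 0.8), ("/login", 1.1),
    ("/reports", 2.6), ("/reports", 3.1),
    ("/home", 0.5),
]

print(slow_pages(samples))   # only /reports averages over the 2-second budget
```

Grouping by location as well as page, as the paragraph above suggests, would just mean keying the dictionary on a (page, location) pair.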


You could also look at this in two ways: from the perspective of the service provider, and from the perspective of the service consumer.


Service providers need to focus on:

  1. User experience: It’s likely service providers have SLAs with end users and they need to demonstrate they are meeting uptime and other SLA considerations.
  2. Infrastructure: There are many factors that can cause a service failure, therefore all aspects of the infrastructure must be monitored. These aspects include applications, servers, virtual servers, storage, network performance, etc.
  3. Integration services (web services): Services provided may depend on other SaaS providers or internal apps, so these integrations must be monitored as well.
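The uptime math behind the SLA side of this is simple enough to sketch; the 99.9% target and the downtime figures here are illustrative assumptions:

```python
# Minimal sketch: check measured uptime against an SLA target.
def uptime_percent(total_minutes, downtime_minutes):
    """Uptime as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

SLA_TARGET = 99.9                      # "three nines" monthly uptime, illustrative
month_minutes = 30 * 24 * 60           # 43,200 minutes in a 30-day month

for downtime in (10, 60):              # minutes of downtime this month
    pct = uptime_percent(month_minutes, downtime)
    status = "meets" if pct >= SLA_TARGET else "misses"
    print(f"{downtime} min down -> {pct:.3f}% uptime, {status} {SLA_TARGET}% SLA")
```

This is why a 99.9% monthly target leaves a provider only about 43 minutes of allowable downtime: one bad hour is already an SLA miss.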


Service consumers need to focus on: 

  1. User experience: If part of your web application consumes web services, user experience can be the first indication of a problem.
  2. Web service failures: This can help identify a failure in communication.


Focusing on these aspects is essential when you're monitoring SaaS applications. These key considerations help IT admins take proactive measures to ensure applications don't suffer downtime during crucial business hours. At the same time, continuous monitoring optimizes each application, improving overall efficiency.


Check out the online demo of Web Performance Monitor!

The mailbox server role in Exchange 2013 is actually quite simple: all you have is mailbox and public folder databases, and email storage. The mailbox server role consists of the mail database, replication, storage, the information store, RPC requests, and calendaring and resource booking. Although it's crucial to monitor the mailbox server role, it's equally important to look at other areas within Exchange.

Monitor ActiveSync Connectivity: Microsoft's Exchange sync protocol is ActiveSync, which is optimized to work when networks have latency and bandwidth issues. You can monitor ActiveSync over the HTTP protocol to find out whether mobile devices are having issues connecting to the Exchange server, and whether users can still access email, folders, contacts, and calendar information offline.

Active Directory Driver: The Active Directory (AD) driver in Exchange Server allows Exchange services to create, modify, delete, and query AD domain service data. The AD driver uses Exchange AD topology information to access directory services. If there is an issue with this service, it affects Exchange services, causing bottlenecks.

Client Access Role: To keep your email up and running, you should monitor the client access server role. The client access server accepts client requests and routes them to the appropriate mailbox database. Monitoring client access components such as Exchange POP3, IMAP, unified messaging, etc. will tell you whether end-user performance is being impacted.


Server & Application Monitor has various component monitors covering your entire Exchange environment. All you have to do is select the monitor that best addresses your pain area, assign it, and get notified when there is a warning or critical alert.


Check out the SAM templates & AppInsight for Exchange here.

If you’re running a Windows environment, chances are you’re using Microsoft IIS as your Web server to run your Web applications. Your IIS Web server is a critical component of your IT infrastructure. Services such as SharePoint, Outlook, and your Web presence are dependent on its availability. If it’s going to take a long time to access a Web application or website, you will likely end up leaving the site or raise a help desk ticket.

When you have different Web applications for different user groups, you should monitor the performance of the Web server, because a Web server issue can cause application downtime that impacts business services. To stay ahead of such issues, it's a good idea to tune your Web servers from time to time and verify that the tuning translates into improved application performance and availability.

Like any other Web server on the network, IIS is prone to performance issues. Monitoring the IIS server helps improve server performance, identify performance bottlenecks, increase throughput, and identify connectivity issues. To achieve optimum server performance, consider the following best practices:

  • Application pools provide a convenient way to manage a set of websites, applications, and their parallel worker processes. You'll want to start by monitoring your app pool's memory usage. If you find that the app pool is utilizing high memory, recycle it for optimum performance.
  • Monitor the number of client connections to the World Wide Web (WWW) service. This will help you have a better understanding of the load on the server. As the number of client connections increases, you should consider load balancing across multiple Web servers.
  • Monitoring data downloaded or uploaded to the WWW service helps you have a better understanding of the data traffic on your server, which allows you to make informed decisions on bandwidth allocation.
  • If you detect that the IIS server is too busy when monitoring incoming sessions, start by increasing the connection count; if the server is overloaded, load balance across multiple Web servers.
  • As with any application, you will need to monitor basic metrics, such as CPU usage, physical memory, virtual memory, I/O read/write and so on.
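The rules of thumb above can be sketched as a simple decision helper; the field names and threshold values are illustrative assumptions, not actual IIS counter names:

```python
# Minimal sketch: turn sampled IIS metrics into the recommended actions from
# the list above (recycle a hungry app pool, load balance under heavy load).
def recommend(metrics, mem_limit_mb=1024, conn_limit=5000):
    """metrics: dict of sampled values. Limits are illustrative defaults."""
    actions = []
    if metrics["app_pool_memory_mb"] > mem_limit_mb:
        actions.append("recycle app pool")
    if metrics["client_connections"] > conn_limit:
        actions.append("load balance across multiple web servers")
    return actions or ["no action needed"]

sample = {"app_pool_memory_mb": 1400, "client_connections": 6200}
print(recommend(sample))
```

A monitoring tool applies the same logic continuously, with thresholds tuned to the server's observed baseline rather than fixed constants.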


Watch this video and find out how you can improve the health of your IIS Web server.

