When application performance issues arise, most IT teams focus on the hardware, after first blaming and ruling out the network, of course. If an application is slow, the instinct is to add hardware to combat the problem. Agencies have spent millions throwing hardware at performance issues without a clear understanding of the true bottlenecks slowing an application down.
But a recent survey on application performance management by research firm Gleanster LLC reveals that the database is the No. 1 source of performance problems: 88 percent of respondents cited the database as the most common challenge or issue with application performance.
Understanding that the database is often the cause of application performance issues is just the beginning; knowing where to look, and what to look for, is the next step. There are two main challenges in identifying database performance issues:
1. Only a limited number of tools assess database performance at all. Most assess the health of a database (is it working, or is it broken?) but don't identify, much less help remediate, specific database performance issues.
2. The monitoring tools that do provide more information still don't go much deeper. Most send requests to and collect information from the database, with little to no insight into what happens inside the database that can affect performance.
To successfully assess database performance and uncover the root cause of application performance issues, IT pros must look at the database from an end-to-end perspective.
The application performance team should perform wait-time analysis as part of regular application and database maintenance. Wait-time analysis measures how long the database engine takes to receive, process, fulfill and return a request for information. A thorough wait-time analysis looks at every level of the database and breaks each step down to the millisecond.
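
As a concrete illustration, here is a minimal sketch of the data-gathering step, assuming a SQL Server back end queried from Python via pyodbc. The connection string and the set of ignorable idle waits are placeholders to adapt; other engines expose equivalent views (Oracle's V$SYSTEM_EVENT, MySQL's performance_schema wait tables).

    import pyodbc

    # Placeholder connection string; replace server, database and auth for your site.
    CONN_STR = (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=dbhost;DATABASE=master;Trusted_Connection=yes"
    )

    # Wait types that reflect normal idle housekeeping rather than real delay.
    # Which waits to ignore is an assumption to tune for your environment.
    IGNORED_WAITS = {"SLEEP_TASK", "LAZYWRITER_SLEEP", "XE_TIMER_EVENT", "BROKER_TASK_STOP"}

    def snapshot_waits():
        """Return {wait_type: (wait_time_ms, signal_wait_time_ms, waiting_tasks_count)}."""
        with pyodbc.connect(CONN_STR) as conn:
            rows = conn.execute(
                "SELECT wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count "
                "FROM sys.dm_os_wait_stats"
            ).fetchall()
        return {
            row.wait_type: (row.wait_time_ms, row.signal_wait_time_ms, row.waiting_tasks_count)
            for row in rows
            if row.wait_type not in IGNORED_WAITS
        }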
The next step is to look at the results, correlate the information and compare. Maybe the database spends the most time writing to disk; maybe it spends more time reading from memory. Understanding the breakdown of each step helps determine where a slowdown may be occurring and, more important, where to look to identify and fix the problem.
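
Building on the snapshot_waits() helper sketched above, the correlate-and-compare step could look like the following: take two snapshots separated by a measurement window, subtract them, and rank where the engine actually spent its time. The mapping from wait types to resource categories is an assumption to tune for your engine and version.

    import time

    # Assumed mapping from common SQL Server wait types to resource categories.
    CATEGORIES = {
        "PAGEIOLATCH_SH": "disk read",
        "PAGEIOLATCH_EX": "disk read",
        "WRITELOG": "log write",
        "IO_COMPLETION": "disk I/O",
        "CXPACKET": "parallelism",
        "LCK_M_S": "locking",
        "LCK_M_X": "locking",
        "RESOURCE_SEMAPHORE": "memory",
    }

    def top_waits(window_seconds=60, limit=5):
        """Diff two snapshots and rank wait types by time accumulated in the window."""
        before = snapshot_waits()
        time.sleep(window_seconds)
        after = snapshot_waits()
        deltas = []
        for wait_type, (wait_ms, _signal_ms, _tasks) in after.items():
            delta_ms = wait_ms - before.get(wait_type, (0, 0, 0))[0]
            if delta_ms > 0:
                deltas.append((delta_ms, wait_type, CATEGORIES.get(wait_type, "other")))
        for delta_ms, wait_type, category in sorted(deltas, reverse=True)[:limit]:
            print(f"{wait_type:<25} {category:<12} {delta_ms:>10,} ms")

A ranking dominated by PAGEIOLATCH waits, for example, points at disk reads, while WRITELOG at the top points at log writes; either result narrows down where to look next.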
We suggest that federal IT shops implement regular wait-time analysis and treat the results as a baseline of optimized performance. That baseline can help with change management: if a change has been implemented and there is a sudden slowdown in the application or in the database itself, a fresh analysis compared against the baseline can quickly pinpoint where performance changed, leading to a much faster fix.
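
One way to make that baseline operational, sketched under the same assumptions as above: persist a known-good wait profile, then flag wait types whose share of total wait time has drifted after a change. The file path and the 5 percent threshold are illustrative placeholders.

    import json

    BASELINE_FILE = "wait_baseline.json"  # illustrative path

    def save_baseline():
        """Persist the current cumulative wait profile as the known-good baseline."""
        with open(BASELINE_FILE, "w") as f:
            json.dump({w: t[0] for w, t in snapshot_waits().items()}, f)

    def check_against_baseline(threshold=0.05):
        """After a change, flag wait types whose share of total wait time drifted."""
        # Note: sys.dm_os_wait_stats accumulates since the last server restart, so
        # comparing shares of total wait time is a rough proxy; for a tighter
        # baseline, store windowed deltas (as in top_waits above) instead.
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        current = {w: t[0] for w, t in snapshot_waits().items()}
        base_total = sum(baseline.values()) or 1
        cur_total = sum(current.values()) or 1
        for wait_type, wait_ms in sorted(current.items(), key=lambda kv: -kv[1]):
            drift = wait_ms / cur_total - baseline.get(wait_type, 0) / base_total
            if drift > threshold:
                print(f"{wait_type}: share up {drift:.1%} versus baseline")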
Our nearly insatiable need for faster performance may seem like a double-edged sword. On one hand, optimized application performance means greater efficiency; on the other hand, getting to that optimized state can seem like an expensive, unattainable goal.
Knowing how to optimize performance is a great first step toward staying ahead of the growing need for instantaneous access to information.
Find the full article on Government Computer News.