Remember, back in the day, when you’d go to a website and it was down? Yes, down. We’ve come a long way in a short time. Today it’s not just downtime that’s unacceptable: users get frustrated if they have to wait more than three seconds for a website to load.

In today’s computing environments, slow is the new down. A slow application in a civilian agency means lost productivity, but a slow military application in theater can mean the difference between life and death. With a constantly increasing reliance on mission-critical applications, the government must now meet, and in most cases surpass, the high performance standards set by commercial industry, and the stakes continue to rise.

Most IT teams focus on the hardware, after blaming and ruling out the network, of course. If an application is slow, the first instinct is to throw hardware at the problem: more memory, faster processors, an upgrade to SSD storage. Agencies have spent millions of dollars on hardware for application performance issues without a clear understanding of the bottleneck actually slowing the application down.

However, according to a recent survey on application performance management by the research firm Gleanster, LLC, the database is the number one source of application performance issues: 88 percent of respondents cited the database as the most common challenge.

Trying to identify database performance issues poses several unique challenges:

  • Databases are complex. Many people treat the database as a mysterious black box of secret information and are wary of digging too deep.
  • There are few tools that truly assess database performance. Most tools assess the health of a database (is it working, or is it broken?) rather than identifying and helping remediate specific performance issues.
  • The database monitoring tools that do provide more information still don’t go much deeper. Most send queries to the database and collect results from it, with little to no insight into what happens inside the database that can affect performance.

To successfully assess database performance and uncover the root cause of application performance issues, IT pros must look at database performance from an end-to-end perspective.

In a best-practices scenario, the application performance team should be performing wait-time analysis as part of regular application and database maintenance. A thorough wait-time analysis looks at every level of the database, from individual SQL statements to overall database capacity, and breaks down each step to the millisecond. A minimal sketch of what such a query can look like follows.
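To make this concrete, here is a minimal sketch of a wait-time query, assuming Microsoft SQL Server, which exposes cumulative wait statistics through the sys.dm_os_wait_stats dynamic management view (other engines offer similar views, such as Oracle’s v$system_event or PostgreSQL’s pg_stat_activity). The view and its columns are real SQL Server objects; the particular list of benign idle waits filtered out below is an illustrative assumption, not a complete one.

```sql
-- Top wait types by cumulative wait time since the last restart (SQL Server).
-- A few common benign/idle waits are filtered out so genuine bottlenecks surface.
SELECT TOP (10)
    wait_type,                                      -- what sessions waited on
    wait_time_ms / 1000.0 AS total_wait_s,          -- total time spent waiting
    waiting_tasks_count   AS waits,                 -- how often the wait occurred
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TASK_STOP', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```

Ordering by cumulative wait time rather than by counts is the point of the exercise: it ranks wait types by the time they actually cost the application.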


The next step is to look at the results, correlate the information, and compare. Maybe the database spends the most time writing to disk; maybe it spends more time reading from memory. A rough way to make that comparison is sketched below.
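Continuing the SQL Server sketch above, one hypothetical way to make that comparison is to roll raw wait types up into coarse resource categories. The CASE mapping below covers only a few well-known wait types (PAGEIOLATCH_* for data-file reads, WRITELOG for transaction-log writes, SOS_SCHEDULER_YIELD for CPU pressure) and is illustrative rather than exhaustive.

```sql
-- Group raw wait types into coarse categories so disk, CPU, and memory
-- waits can be compared at a glance (SQL Server; the mapping is illustrative).
SELECT category, SUM(wait_time_ms) / 1000.0 AS total_wait_s
FROM (
    SELECT wait_time_ms,
           CASE
               WHEN wait_type LIKE 'PAGEIOLATCH%'     THEN 'Disk read (data pages)'
               WHEN wait_type = 'WRITELOG'            THEN 'Disk write (transaction log)'
               WHEN wait_type = 'SOS_SCHEDULER_YIELD' THEN 'CPU pressure'
               WHEN wait_type LIKE 'PAGELATCH%'       THEN 'In-memory page contention'
               ELSE 'Other'
           END AS category
    FROM sys.dm_os_wait_stats
) AS waits
GROUP BY category
ORDER BY total_wait_s DESC;
```

If the disk-read bucket dominates, more memory or faster storage may genuinely help; if CPU or in-memory contention dominates, new hardware of the wrong kind will not.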


Ideally, all federal IT shops should implement regular wait-time analysis to establish a baseline of optimized performance. Knowing how to optimize performance, and understanding that the fix may have nothing to do with hardware, is a great first step toward staying ahead of the growing need for instantaneous access to information. One way to capture such a baseline is sketched below.
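Because sys.dm_os_wait_stats is cumulative since the last restart, a baseline is usually built by snapshotting it on a schedule and comparing deltas between snapshots. The sketch below assumes a hypothetical monitoring table, dbo.wait_stats_baseline, created for this purpose; the table name and schedule are assumptions, not part of any product.

```sql
-- Hypothetical baseline table: snapshot cumulative wait stats on a schedule
-- (for example, hourly via SQL Server Agent) and diff successive snapshots
-- to see how each wait type trends over time.
CREATE TABLE dbo.wait_stats_baseline (
    captured_at         datetime2    NOT NULL DEFAULT SYSUTCDATETIME(),
    wait_type           nvarchar(60) NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    waiting_tasks_count bigint       NOT NULL
);

-- Run on a schedule to record a snapshot.
INSERT INTO dbo.wait_stats_baseline (wait_type, wait_time_ms, waiting_tasks_count)
SELECT wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats;
```

Deviations from the baseline, rather than raw numbers, are what flag a developing problem.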


Read an extended version of this article on GCN