High CPU on a Database Server—Should You Be Concerned?

Back when I was an on-call DBA, I got paged one morning for a database server with high CPU utilization. After I punched the guy who set up that alert, I brought it up in a team meeting: is this something we should even be reporting on, much less alerting on? Queries and other processes in our databases use CPU cycles, but as a production DBA you are frequently at the mercy of some third-party application's “interesting” coding decisions that burn more CPU cycles than necessary.

Some things in queries that can really hammer CPUs are:

  • Data type conversions (see the sketch just after this list)
  • Overuse of functions, or using them in a row-by-row fashion
  • Fragmented indexes or file systems
  • Out of date database statistics
  • Poor use of parallelism
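
As a concrete example of the first item: an implicit data type conversion can quietly add a per-row conversion to a query and, depending on collation, prevent a straightforward index seek. This is a minimal sketch; the table, index, and parameter names are made up for illustration, but the pattern of an application sending Unicode (NVARCHAR) parameters against VARCHAR columns is a common third-party culprit.

    -- Hypothetical table and index, purely for illustration.
    CREATE TABLE dbo.Customers (
        CustomerID     INT IDENTITY(1, 1) PRIMARY KEY,
        CustomerNumber VARCHAR(20) NOT NULL
    );
    CREATE NONCLUSTERED INDEX IX_Customers_CustomerNumber
        ON dbo.Customers (CustomerNumber);

    -- An application passing the value as NVARCHAR forces an implicit conversion:
    -- data type precedence makes SQL Server convert the VARCHAR column to NVARCHAR,
    -- which burns CPU on every row it touches and can keep the optimizer from
    -- doing a plain index seek (depending on collation).
    DECLARE @CustomerNumber NVARCHAR(20) = N'AB-1001';
    SELECT CustomerID
    FROM dbo.Customers
    WHERE CustomerNumber = @CustomerNumber;      -- CONVERT_IMPLICIT on the column

    -- Matching the parameter type to the column avoids the per-row conversion
    -- and lets the optimizer use a simple index seek.
    DECLARE @CustomerNumberTyped VARCHAR(20) = 'AB-1001';
    SELECT CustomerID
    FROM dbo.Customers
    WHERE CustomerNumber = @CustomerNumberTyped;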

Most commercial databases are licensed by the core, so we are talking about real money here. Virtualization also gives us more flexibility to change CPU configurations easily, but remember that over-allocating vCPUs on a virtual machine can lead to worse performance, not better. At the same time, CPUs are a server's ultimate limit on throughput: if your CPUs are pegged, you are not going to get any more work done.

The other angle to this is that since you are paying for your databases by the CPU, you want to actually use them. So there is a happy medium to find through adjusting and tuning.

Do you capture CPU usage over time? What have you done to tune queries for CPU use?
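
In a SQL Server shop, one lightweight way to look back at recent CPU history without any extra tooling is the scheduler-monitor ring buffer, which keeps roughly the last four hours of per-minute samples. A rough sketch of the standard pattern (the DMVs and XML element names are real; the column aliases and the TOP count are just presentation choices):

    -- Per-minute CPU samples from the scheduler-monitor ring buffer
    -- (SQL Server process vs. everything else on the box).
    DECLARE @ms_now BIGINT = (SELECT ms_ticks FROM sys.dm_os_sys_info);

    SELECT TOP (256)
        DATEADD(ms, -1 * (@ms_now - y.[timestamp]), GETDATE())  AS sample_time,
        y.SQLProcessUtilization                                 AS sql_cpu_pct,
        y.SystemIdle                                            AS idle_pct,
        100 - y.SystemIdle - y.SQLProcessUtilization            AS other_cpu_pct
    FROM (
        SELECT
            record.value('(./Record/@id)[1]', 'int') AS record_id,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization,
            [timestamp]
        FROM (
            SELECT [timestamp], CONVERT(XML, record) AS record
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
              AND record LIKE N'%<SystemHealth>%'
        ) AS x
    ) AS y
    ORDER BY y.record_id DESC;

Persisting those samples to a table on a schedule is enough to build the kind of capacity trend line the replies below talk about.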

  • My answer: This is a work harder vs. work smarter question. Nobody cares how hard you work as long as the job gets done satisfactorily.


    In the decade I managed SQL Servers, I found that CPU utilization is not immediately relevant to performance; it is immediately relevant to capacity planning. My users never cared what percentage of CPU the database was using as long as it was performing well for them. For that, you need something that monitors the end-user experience measured in wall-clock time. If that shows trouble, CPU utilization can be one cause, but it is certainly not the only cause, and sometimes users are perfectly satisfied even when CPU utilization is high.

    So what did I do if I saw high CPU utilization? I first checked whether anyone was experiencing performance degradation. If they were, I urgently sought a solution that improved performance regardless of what it did to CPU utilization (one way to surface the queries responsible is sketched after the replies). If they were not, I took a more casual approach, analyzing my capacity trend line and laying out long-term options.

  • I think you really need to do some data collection over an extended period of time. Then you may be able to determine what is really going on, or at least move in the right direction.

  • For me it all depends. If your database server is virtual, on VMware let's say, high CPU doesn't bother me as much unless it is affecting performance. In our environment we have around 250 database servers. I have an alert set up to send me an email when the CPU has been pegged for over an hour, and I rarely see one for high CPU, maybe one to three a week.

    To go back to my virtualization point: if the server is virtualized, Windows and the monitoring tool may show it pegged at 100% while it is actually running efficiently. You may only give a virtualized server four vCPUs, but when it makes CPU requests the hypervisor schedules them onto the least busy cores of the host it sits on, which can give the guest a faster CPU response time because the hypervisor determines the best place to run each cycle. If the server is under-allocated that can cause problems as well, so there is a fine balance to strike; but in general I don't care that much about high CPU unless whatever the server is servicing is affected.

  • I'm thankful our DBAs have their own monitoring to do what they will. 
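
Picking up on the first reply: when users are hurting and CPU is the suspect, the plan cache is the quickest place to find the statements doing the work. A rough sketch against the standard DMVs (the TOP count and the millisecond conversion are arbitrary presentation choices):

    -- Top 10 cached statements by accumulated CPU (worker) time.
    -- The DMV reports worker time in microseconds; divided here to milliseconds.
    SELECT TOP (10)
        qs.total_worker_time / 1000                        AS total_cpu_ms,
        qs.execution_count,
        (qs.total_worker_time / qs.execution_count) / 1000 AS avg_cpu_ms,
        SUBSTRING(st.[text],
                  (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.[text])
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

Whether any of those statements is worth tuning still comes back to the point above: fix what the users feel first, and treat the rest as capacity planning.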
