How many CPU cores does your instance have? And the server it's on? I've seen near-idle SQL Servers show a lot of signal wait under various circumstances, but the common thread is... pun intended... threads: how many are available for SQL Server to use versus how many are needed to keep up with runnable queries. It's difficult to answer that scientifically; in my experience it's more trial and error. The fewer threads there are, the easier it is to hit signal wait, and more CPU cores means more threads. So let's say you have a one-core Windows server running SQL Server. I'd expect a lot of signal wait because there's barely enough CPU to run the OS, let alone a SQL query. We may need to get on a call and look at your server to provide better guidance.
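To see how many cores and worker threads your instance actually has to work with, you can query the sys.dm_os_sys_info DMV (a sketch; run it on the instance in question):

```sql
-- Cores SQL Server can see, schedulers created for them, and the
-- resolved worker-thread cap (the "max worker threads" setting)
SELECT cpu_count,          -- logical CPUs visible to SQL Server
       scheduler_count,    -- schedulers servicing user requests
       max_workers_count   -- total worker threads available
FROM sys.dm_os_sys_info;
```

If cpu_count is small (1-2) and runnable queries regularly outnumber schedulers, signal wait is the expected symptom.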
I am having this issue as well and have been digging into it for a few weeks. My main question is - what stats is DPA pulling to determine the signal wait percent? Can I get the specific query it is executing? Here is the background in my situation:
Virtualized 2008 R2 SQL Server
4 vCPUs, 26 Gigs of dedicated RAM
Near 0% ready time reported by VMware - i.e. there is no waiting on CPU from the host
Spikes occur with minimal load (2-3 active sessions). Under minimal load I am seeing no CPU pressure from traces - specifically looking for long-running queries (>200 ms) in a trace, and nothing registers except an occasional entry from the Ignite monitor.
Available worker threads - 512
Typical amount of active worker threads when spikes occur: 50
No memory or disk(i/o) pressures are reported during the spikes
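One thing worth capturing at the moment of a spike is a per-scheduler view from sys.dm_os_schedulers (a sketch using the standard DMV columns):

```sql
-- Per-scheduler pressure snapshot: runnable_tasks_count > 0 means
-- tasks are sitting in the runnable queue waiting for CPU (signal
-- wait), even when overall CPU utilization looks low.
SELECT scheduler_id,
       current_tasks_count,    -- tasks assigned to this scheduler
       runnable_tasks_count,   -- tasks waiting for their turn on CPU
       current_workers_count,  -- workers associated with the scheduler
       work_queue_count        -- tasks waiting for a worker
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

With 512 available workers and only ~50 active, a nonzero runnable_tasks_count during a spike would point at scheduling rather than worker-thread exhaustion.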
DPA is querying the sys.dm_os_wait_stats DMV and doing some math, something similar to this query:
SELECT SUM(signal_wait_time_ms) AS [SignalWaitTime], SUM(wait_time_ms) AS [TotalWaitTime]
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT', 'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'BROKER_EVENTHANDLER',
'TRACEWRITE', 'FT_IFTSHC_MUTEX', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
'BROKER_RECEIVE_WAITFOR', 'ONDEMAND_TASK_QUEUE', 'DBMIRROR_EVENTS_QUEUE',
'DBMIRRORING_CMD', 'BROKER_TRANSMITTER', 'SQLTRACE_WAIT_ENTRIES',
'SLEEP_BPOOL_FLUSH', 'SQLTRACE_LOCK', 'SP_SERVER_DIAGNOSTICS_SLEEP',
'DIRTY_PAGE_POLL')
AND wait_time_ms <> 0;
If you have that light a load, then I suspect what you are seeing is SQL having to wait for a vCPU to change from being idle to being busy, which is different from VM ready time. Another thing to check is the co-stop time, but with such a light load I doubt you are going to see an issue with the vCPUs bumping into each other.
Thanks, Brian, for replying. We are all virtualized, and for this instance's server we have only 2 CPU cores. I am just confused, because the server's CPU usage is almost nothing, around 1%.
Signal waits are an indication of possible internal CPU pressure. The CPU Signal Waits metric is a rate of change metric. That means you can see a spike in signal waits from one minute to the next without the server itself showing a high CPU utilization for the server.
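The rate-of-change idea can be sketched as follows (a hypothetical illustration, not DPA's actual code): take two snapshots of the summed signal_wait_time_ms and wait_time_ms counters and compute the percentage over just that interval's deltas, so a one-minute spike shows up even when cumulative and average numbers stay flat.

```python
def signal_wait_percent(prev, curr):
    """Signal-wait percentage for the interval between two snapshots.

    Each snapshot is a (signal_wait_time_ms, total_wait_time_ms) tuple,
    e.g. from summing sys.dm_os_wait_stats. The DMV counters are
    cumulative since startup, so we diff consecutive samples to get a
    per-interval (rate-of-change) value.
    """
    d_signal = curr[0] - prev[0]
    d_total = curr[1] - prev[1]
    if d_total <= 0:          # no new waits this interval
        return 0.0
    return 100.0 * d_signal / d_total

# A quiet minute followed by a spike: overall load stays light, but in
# the second interval most of the waiting was time spent runnable.
t0 = (1_000, 50_000)
t1 = (1_200, 51_000)   # interval 1: 200 / 1_000 = 20%
t2 = (2_100, 52_000)   # interval 2: 900 / 1_000 = 90%
print(signal_wait_percent(t0, t1))  # 20.0
print(signal_wait_percent(t1, t2))  # 90.0
```

This is why the metric can jump from one minute to the next while server-wide CPU utilization barely moves: the denominator is wait time in the interval, not elapsed CPU time.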
With a virtual database server this could also mean that your guest O/S is waiting for an available vCPU to be assigned, so you should check the VM Ready Time (assuming VMware here) metric as well to see if that is the case.
Another metric to look at within vSphere would be the co-stop time: the time the host spends scheduling a guest's vCPUs together. We don't have this metric in DPA yet (but we should, IMO), so you will need to go to vSphere to see it (again, assuming VMware here).