When a user application submits a request for data to SQL Server, the biggest element of SQL Server’s total response time would, ideally, be the CPU processing time: the time it takes the CPU to pull together the required data, slice, dice, and sort it according to the query specifications, and send it back. However, in a busy database system, with hundreds or thousands of user requests competing for the limited resources of the database server, there will be times when a request is waiting to proceed, rather than actively processing. For example, Session A’s request may be waiting for Session B to release a lock on a resource to which it needs access.
Every time a request is forced to wait, SQL Server records the length of the wait and its cause, a.k.a. the wait type, which generally indicates the resource on which the request was waiting. These records are the wait statistics.
In order to understand wait statistics in SQL Server, and how SQL Server measures wait times, we need to delve a little into SQL Server’s scheduling model.
A scheduler can be described as a piece of software that coordinates the execution of different processes and administers the available resources. SQL Server has its own scheduling mechanism, implemented in what we know as the SQLOS. The reason is that the Windows scheduler cannot satisfy the needs of a relational database engine. Windows uses a preemptive scheduling mechanism and assigns a quantum of CPU time to every thread; when a thread consumes its quantum, it is sent to a queue and other threads are granted execution time.
By contrast, SQL Server uses a cooperative scheduling mechanism, in which threads can voluntarily yield their quantum of time. This allows SQL Server to optimize CPU utilization, because when a thread is signaled for execution but is not ready to run, it can yield its quantum in favor of other threads.
SQL Server has one scheduler per CPU core, regardless of whether it is a physical or hyper-threaded core. The schedulers aren’t bound to a particular CPU unless you define an affinity mask, which means that a scheduler running on CPU 1 can, after a context switch, end up running on CPU 2.
A thread can be in one of three states, as exposed by the status column of the sys.dm_exec_requests DMV: RUNNING (currently executing on the CPU), RUNNABLE (ready to run, waiting in the runnable queue for CPU time), or SUSPENDED (on the waiter list, waiting for a resource to become available).
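To see which state each active request is in right now, we can query that DMV directly. A minimal sketch (all columns shown are standard columns of sys.dm_exec_requests; the session_id filter is a common heuristic for excluding most system sessions):

SELECT session_id ,
       status ,           -- running, runnable, suspended (also sleeping/background)
       command ,
       wait_type ,        -- populated while the request is suspended
       wait_time ,        -- duration of the current wait, in milliseconds
       last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;    -- filter out most system sessions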
The SQLOS allows any given thread a small quantum of processing time (4 ms), before it must yield the CPU to another session’s thread. If, before its allotted quantum is complete, the running thread needs to request a resource, it moves to the waiter list. Once the resource is available, the thread moves to the back of the runnable queue and then, when its time comes, back onto the CPU. It then gets another 4 ms quantum of processing time. If it doesn’t have to wait for any resource in this time, but fails to complete its work, it yields the CPU anyway, moves to the back of the runnable queue, and awaits its next turn.
Analysis of wait statistics represents a very effective means to diagnose response times in our databases. In very simple terms, our requests either work, or they wait. However, two caveats apply when analyzing waits:
First, it’s perfectly normal for threads to wait during task execution. In fact, in busy databases supporting a high degree of concurrent access, it’s almost inevitable that a thread for any particular task will have to wait for a resource at some point in its execution cycle, however briefly. If not, it may simply exceed its 4 ms quantum and wait in the runnable queue.
Second, wait statistics are not really a tool to use in isolation in order to diagnose a performance issue. They are best used in conjunction with other diagnostic tools, as a way to direct our tuning efforts and avoid wasting time.
We can investigate the current, “real time” wait profile of a SQL Server instance, in terms of the requests and sessions associated with suspended threads, by querying the sys.dm_os_waiting_tasks DMV and joining it to various other useful views and functions. Amongst other columns, the sys.dm_os_waiting_tasks DMV reveals the session_id of the waiting task, the wait_type, the wait_duration_ms, the blocking_session_id (where one session is blocking another), and a resource_description of the resource on which the task is waiting.
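Before joining in the other views, a basic sketch of such a query looks like this (all columns shown are standard columns of the DMV):

SELECT session_id ,
       wait_duration_ms ,
       wait_type ,
       blocking_session_id ,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE session_id IS NOT NULL      -- ignore system tasks with no session
ORDER BY wait_duration_ms DESC;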
When blocking occurs, we will likely observe significant locking waits (waits of type LCK_M_XX), indicating that a session is unable to proceed with its work until it can acquire a lock on the resource on which another session’s thread holds a conflicting lock. The most common causes of persistent blocking are poor code, which causes SQL Server to hold locks for far longer than necessary, lack of indexing, or an IO-intensive process that is blocking many other sessions for long periods.
SELECT blocking.session_id AS blocking_session_id ,
       blocked.session_id AS blocked_session_id ,
       waitstats.wait_type AS blocking_resource ,
       waitstats.wait_duration_ms ,
       waitstats.resource_description ,
       blocked_cache.text AS blocked_text ,
       blocking_cache.text AS blocking_text
FROM sys.dm_exec_connections AS blocking
INNER JOIN sys.dm_exec_requests blocked
    ON blocking.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_cache
CROSS APPLY sys.dm_exec_sql_text(blocking.most_recent_sql_handle) blocking_cache
INNER JOIN sys.dm_os_waiting_tasks waitstats
    ON waitstats.session_id = blocked.session_id
We can also perform historical wait statistics analysis, using the data provided in sys.dm_os_wait_stats.
If a SQL Server instance has been running for quite a while and then is subjected to a significant change, such as adding an important new index, it’s worth considering clearing the old wait stats in order to prevent the old cumulative wait stats masking the impact of the change on the wait times.
The sys.dm_os_wait_stats DMV gives us a list of all the different types of waits that threads have encountered, the number of times they have waited on a resource to be available, and the amount of time waited. The following columns are available in the view (times are all in milliseconds): wait_type, waiting_tasks_count (the number of waits on this wait type), wait_time_ms (total wait time for this wait type, including signal wait time), max_wait_time_ms, and signal_wait_time_ms (the time between the waiting thread being signaled that its resource is available and it getting back onto the CPU).
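For a quick, unfiltered look at the biggest cumulative waits, simply ordering the view is enough; a sketch (the more thorough, filtered query appears later in this section):

SELECT TOP (10)
       wait_type ,
       waiting_tasks_count ,
       wait_time_ms ,
       signal_wait_time_ms ,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;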
There are many reasons why a certain task within SQL Server may need to wait, which means there are many possible values for the wait_type column.
If the signal wait time is a significant portion of the total wait time then it means that tasks are waiting a relatively long time to resume execution after the resources that they were waiting for became available. This can indicate either that there are lots of CPU-intensive queries, which may need optimizing, or that the server needs more CPU. The query below will provide a measure of how much of the total wait time is signal wait time.
SELECT SUM(signal_wait_time_ms) AS TotalSignalWaitTime ,
       ( SUM(CAST(signal_wait_time_ms AS NUMERIC(20, 2)))
         / SUM(CAST(wait_time_ms AS NUMERIC(20, 2))) * 100 )
         AS PercentageSignalWaitsOfTotalTime
FROM sys.dm_os_wait_stats
If signal waits analysis indicates CPU pressure, then the sys.dm_os_schedulers DMV can help verify whether a SQL Server instance is currently CPU-bound. This DMV returns one row for each of the SQL Server schedulers and lists the total number of tasks that are assigned to each scheduler, as well as the number that are runnable. Tasks on a scheduler that are counted in current_tasks_count but not in runnable_tasks_count are either sleeping or waiting for a resource.
SELECT scheduler_id ,
       current_tasks_count ,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255
The filter for schedulers below 255 removes from the result set the numerous hidden schedulers in SQL Server, which are used for backups, the Dedicated Administrator Connection (DAC), and so on, and are not of interest when investigating general CPU load.
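As a rough aggregate check, we can average these counts across the visible schedulers; a sketch (the thresholds are workload-dependent, so treat the output only as a signal to corroborate with other evidence):

SELECT COUNT(*) AS scheduler_count ,
       AVG(CAST(current_tasks_count AS DECIMAL(10, 2))) AS avg_current_tasks ,
       AVG(CAST(runnable_tasks_count AS DECIMAL(10, 2))) AS avg_runnable_tasks  -- persistently > 0 suggests CPU pressure
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;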
The primary use of the sys.dm_os_wait_stats DMV is to help us determine on which wait types and which resources SQL Server has spent the most time waiting, historically, while executing that server’s workload. As noted earlier, it’s important to focus this analysis on the most prevalent waits, and also to filter out ‘benign’ waits that occur frequently on any database system and are not, except in extreme edge cases, the cause of performance issues.
The query below provides total wait times, as well as resource and signal wait times, filters out many benign waits, and performs some math to present clearly the most significant waits, historically, on the instance.
WITH [Waits]
AS ( SELECT [wait_type] ,
            [wait_time_ms] / 1000.0 AS [WaitS] ,
            ( [wait_time_ms] - [signal_wait_time_ms] ) / 1000.0 AS [ResourceS] ,
            [signal_wait_time_ms] / 1000.0 AS [SignalS] ,
            [waiting_tasks_count] AS [WaitCount] ,
            100.0 * [wait_time_ms] / SUM([wait_time_ms]) OVER ( ) AS [Percentage] ,
            ROW_NUMBER() OVER ( ORDER BY [wait_time_ms] DESC ) AS [RowNum]
     FROM sys.dm_os_wait_stats
     WHERE [wait_type] NOT IN ( N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR',
                                N'BROKER_TASK_STOP', N'BROKER_TO_FLUSH',
                                N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE', N'CHKPT',
                                N'CLR_AUTO_EVENT', N'CLR_MANUAL_EVENT' ) )
SELECT [W1].[wait_type] AS [WaitType] ,
       CAST([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S] ,
       CAST([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S] ,
       CAST([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S] ,
       [W1].[WaitCount] AS [WaitCount] ,
       CAST([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage] ,
       CAST(( [W1].[WaitS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgWait_S] ,
       CAST(( [W1].[ResourceS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgRes_S] ,
       CAST(( [W1].[SignalS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2]
    ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum] ,
         [W1].[wait_type] ,
         [W1].[WaitS] ,
         [W1].[ResourceS] ,
         [W1].[SignalS] ,
         [W1].[WaitCount] ,
         [W1].[Percentage]
HAVING SUM([W2].[Percentage]) - [W1].[Percentage] < 95;
GO
In general, when examining wait statistics, either historically or for currently active requests, it’s wise to focus on the top waits, according to wait time, and look out for high wait times associated with the following specific wait types (some of which we’ll cover in more detail in subsequent sections).
CXPACKET: Often indicates nothing more than that certain queries are executing with parallelism; CXPACKET waits are not, by themselves, an immediate sign of problems, although they may be a symptom of another problem, associated with one of the other high-value wait types on the instance.
SOS_SCHEDULER_YIELD: The tasks executing in the system are yielding the scheduler, having exceeded their quantum, and are having to wait in the runnable queue for other tasks to execute. As discussed earlier, this wait type is most often associated with high signal waits rather than waits on a specific resource. If you observe this wait type persistently, investigate for other evidence that may confirm the server is under CPU pressure.
THREADPOOL: A task had to wait for a worker thread to be bound to it before it could execute. This could be a sign of worker thread starvation, requiring an increase in the number of CPUs in the server to handle a highly concurrent workload, or it can be a sign of blocking, where a large number of parallel tasks consume the worker threads for long periods.
LCK_M_*: These wait types signify that blocking is occurring in the system and that sessions have had to wait to acquire a lock, of a specific type, which was being held by another database session.
PAGEIOLATCH_*: These waits are commonly associated with disk I/O bottlenecks, though the root cause of the problem may be, and commonly is, a poorly performing query that is consuming excessive amounts of memory in the server, or simply insufficient memory for the buffer pool.
PAGELATCH_*: Non-I/O waits for latches on data pages in the buffer pool. PAGELATCH_* waits are often associated with allocation contention. One of the best-known allocation issues associated with PAGELATCH_* waits occurs when a large number of objects are created and destroyed in tempdb and the system experiences contention on the Shared Global Allocation Map (SGAM), Global Allocation Map (GAM), and Page Free Space (PFS) pages in the tempdb database.
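A sketch of how to spot this pattern while it is happening, using sys.dm_os_waiting_tasks. For page latch waits, resource_description takes the form dbid:fileid:pageid, and tempdb is always database_id 2; in each data file, page 1 is the PFS, page 2 the GAM, and page 3 the SGAM:

SELECT session_id ,
       wait_type ,
       wait_duration_ms ,
       resource_description      -- e.g. 2:1:1 (PFS), 2:1:2 (GAM), 2:1:3 (SGAM)
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
      AND resource_description LIKE N'2:%';   -- tempdb pages only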
LATCH_*: These waits are associated with lightweight short-term synchronization objects that are used to protect access to internal caches, but not the buffer cache. These waits can indicate a range of problems, depending on the latch type. The specific latch class with the most accumulated wait time can be determined by querying the sys.dm_os_latch_stats DMV.
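For example, a minimal sketch (latch_class, waiting_requests_count, wait_time_ms, and max_wait_time_ms are all standard columns of this DMV):

SELECT TOP (10)
       latch_class ,
       waiting_requests_count ,
       wait_time_ms ,
       max_wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;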
ASYNC_NETWORK_IO: This wait is often incorrectly attributed to a network bottleneck. In fact, the most common cause is a client application performing row-by-row processing of the data being streamed from SQL Server as a result set (the client accepts one row, processes it, accepts the next row, and so on). Correcting this wait type generally requires changing the client-side code so that it reads the result set as fast as possible, and only then performs its processing.
OLEDB: Occurs when a thread has made a call through the OLE DB provider and is waiting for it to return data. A common cause of such waits is linked server queries. However, many DMVs use OLEDB internally, so frequent calls to these DMVs can cause these waits, and DBCC checks also use OLEDB, so it’s common to see this wait type while such checks run.
Once we know what’s normal, we can focus our tuning efforts, and we have a reference in the event that performance suddenly degrades. We can see trends over time and look out for worrying changes in the pattern of waits. We can also map sudden changes to, for example, the deployment of new code, or a change in hardware configuration, and so on.
In order to establish what is normal, and then track changes over time, we need to establish baselines for our historical wait statistics data.
By default, SQL Server clears the cumulative wait statistics for an instance from the sys.dm_os_wait_stats DMV upon instance restart. In addition, a DBA can clear the statistics manually using DBCC SQLPERF.
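The manual clear is a one-liner; note that it resets the counters for the whole instance, so coordinate with anyone else relying on them:

-- Reset the cumulative wait statistics for the instance
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);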
The queries below will reveal when wait statistics were last cleared by an instance restart, as well as if, and when, someone last cleared them manually; simply compare the two values to see whether the statistics have been manually cleared since the last restart. The first query relies on the fact that the SQLTRACE_INCREMENTAL_FLUSH_SLEEP wait accumulates almost constantly on any instance, so its total wait time closely tracks the time since the statistics were last cleared.
SELECT [wait_type] ,
       [wait_time_ms] ,
       DATEADD(SS, -[wait_time_ms] / 1000, GETDATE()) AS "Date/TimeCleared" ,
       CASE WHEN [wait_time_ms] < 1000
            THEN CAST([wait_time_ms] AS VARCHAR(15)) + ' ms'
            WHEN [wait_time_ms] BETWEEN 1000 AND 60000
            THEN CAST(( [wait_time_ms] / 1000 ) AS VARCHAR(15)) + ' seconds'
            WHEN [wait_time_ms] BETWEEN 60001 AND 3600000
            THEN CAST(( [wait_time_ms] / 60000 ) AS VARCHAR(15)) + ' minutes'
            WHEN [wait_time_ms] BETWEEN 3600001 AND 86400000
            THEN CAST(( [wait_time_ms] / 3600000 ) AS VARCHAR(15)) + ' hours'
            WHEN [wait_time_ms] > 86400000
            THEN CAST(( [wait_time_ms] / 86400000 ) AS VARCHAR(15)) + ' days'
       END AS "TimeSinceCleared"
FROM [sys].[dm_os_wait_stats]
WHERE [wait_type] = 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP';
To find the time of the last instance restart, either of the following will serve (tempdb is recreated at every restart, so its creation date equals the last instance start time):

SELECT [sqlserver_start_time]
FROM [sys].[dm_os_sys_info];

SELECT [create_date]
FROM [sys].[databases]
WHERE [database_id] = 2;   -- tempdb
In order to collect wait statistics, on a regular schedule, the first step is to create a table to hold the information, as shown below.
USE [BaselineData];
GO

IF NOT EXISTS ( SELECT *
                FROM [sys].[tables]
                WHERE [name] = N'WaitStats'
                      AND [type] = N'U' )
    CREATE TABLE [dbo].[WaitStats]
        ( [RowNum] [BIGINT] IDENTITY(1, 1) ,
          [CaptureDate] [DATETIME] ,
          [WaitType] [NVARCHAR](120) ,
          [Wait_S] [DECIMAL](14, 2) ,
          [Resource_S] [DECIMAL](14, 2) ,
          [Signal_S] [DECIMAL](14, 2) ,
          [WaitCount] [BIGINT] ,
          [Percentage] [DECIMAL](4, 2) ,
          [AvgWait_S] [DECIMAL](14, 2) ,
          [AvgRes_S] [DECIMAL](14, 2) ,
          [AvgSig_S] [DECIMAL](14, 2) );
GO
CREATE CLUSTERED INDEX CI_WaitStats
ON [dbo].[WaitStats] ([RowNum], [CaptureDate]);
The second step is simply to schedule a query to run on a regular basis, which captures the wait information to this table.
USE [BaselineData];
GO

INSERT INTO dbo.WaitStats
       ( [WaitType] )
VALUES ( 'Wait Statistics for ' + CAST(GETDATE() AS NVARCHAR(19)) );

INSERT INTO dbo.WaitStats
       ( [CaptureDate] ,
         [WaitType] ,
         [Wait_S] ,
         [Resource_S] ,
         [Signal_S] ,
         [WaitCount] ,
         [Percentage] ,
         [AvgWait_S] ,
         [AvgRes_S] ,
         [AvgSig_S]
       )
       EXEC ( 'WITH [Waits]
               AS ( SELECT [wait_type] ,
                           [wait_time_ms] / 1000.0 AS [WaitS] ,
                           ( [wait_time_ms] - [signal_wait_time_ms] ) / 1000.0 AS [ResourceS] ,
                           [signal_wait_time_ms] / 1000.0 AS [SignalS] ,
                           [waiting_tasks_count] AS [WaitCount] ,
                           100.0 * [wait_time_ms] / SUM([wait_time_ms]) OVER ( ) AS [Percentage] ,
                           ROW_NUMBER() OVER ( ORDER BY [wait_time_ms] DESC ) AS [RowNum]
                    FROM sys.dm_os_wait_stats
                    WHERE [wait_type] NOT IN ( N''BROKER_EVENTHANDLER'', N''BROKER_RECEIVE_WAITFOR'',
                                               N''BROKER_TASK_STOP'', N''BROKER_TO_FLUSH'',
                                               N''BROKER_TRANSMITTER'', N''CHECKPOINT_QUEUE'', N''CHKPT'',
                                               N''CLR_AUTO_EVENT'', N''CLR_MANUAL_EVENT'' ) )
               SELECT GETDATE() ,
                      [W1].[wait_type] AS [WaitType] ,
                      CAST([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S] ,
                      CAST([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S] ,
                      CAST([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S] ,
                      [W1].[WaitCount] AS [WaitCount] ,
                      CAST([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage] ,
                      CAST(( [W1].[WaitS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgWait_S] ,
                      CAST(( [W1].[ResourceS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgRes_S] ,
                      CAST(( [W1].[SignalS] / [W1].[WaitCount] ) AS DECIMAL(14, 4)) AS [AvgSig_S]
               FROM [Waits] AS [W1]
               INNER JOIN [Waits] AS [W2]
                   ON [W2].[RowNum] <= [W1].[RowNum]
               GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS], [W1].[ResourceS],
                        [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
               HAVING SUM([W2].[Percentage]) - [W1].[Percentage] < 95;' );
GO
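One way to automate this is a SQL Server Agent job. The sketch below is illustrative only: the job, step, and schedule names are invented, and the capture script from above should be pasted in as the step command:

USE [msdb];
GO
EXEC dbo.sp_add_job @job_name = N'Capture Wait Statistics';
EXEC dbo.sp_add_jobstep @job_name = N'Capture Wait Statistics' ,
     @step_name = N'Capture snapshot' ,
     @subsystem = N'TSQL' ,
     @database_name = N'BaselineData' ,
     @command = N'/* paste the capture script from above here */';
EXEC dbo.sp_add_jobschedule @job_name = N'Capture Wait Statistics' ,
     @name = N'Weekly capture' ,
     @freq_type = 8 ,                 -- weekly
     @freq_interval = 1 ,             -- on Sundays
     @freq_recurrence_factor = 1 ,    -- every week
     @active_start_time = 230000;     -- 23:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Capture Wait Statistics';
GO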
We should capture wait statistics regularly, at least once a week or once a month. We could do so more frequently, perhaps daily, but remember that unless we are clearing the data regularly, they represent an aggregation of waits since the last restart. The longer the waits have been accumulating, the harder it may be to spot smaller changes in wait percentages.
We’ll want to review the waits for each SQL Server instance regularly. If you capture them once a week, then check the trends once a week. The simple SELECT below retrieves from the dbo.WaitStats table all the data captured in the last 30 days.
SELECT *
FROM [dbo].[WaitStats]
WHERE [CaptureDate] > GETDATE() - 30
ORDER BY [RowNum];
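Beyond listing the raw captures, we can set snapshots side by side. The sketch below (an illustration, assuming the dbo.WaitStats table populated above) compares the two most recent captures to highlight shifts in the wait profile:

WITH [Captures]
AS ( SELECT [CaptureDate] ,
            [WaitType] ,
            [Percentage] ,
            DENSE_RANK() OVER ( ORDER BY [CaptureDate] DESC ) AS [CaptureRank]
     FROM [dbo].[WaitStats]
     WHERE [CaptureDate] IS NOT NULL )   -- skip the header rows
SELECT [new].[WaitType] ,
       [new].[Percentage] AS [LatestPct] ,
       [old].[Percentage] AS [PreviousPct] ,
       [new].[Percentage] - [old].[Percentage] AS [Change]   -- NULL if the wait is new
FROM [Captures] AS [new]
LEFT JOIN [Captures] AS [old]
    ON [old].[WaitType] = [new].[WaitType]
       AND [old].[CaptureRank] = 2
WHERE [new].[CaptureRank] = 1
ORDER BY [new].[Percentage] DESC;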