
Custom Report - Top SQL by Any Metric

Description

This procedure returns all SQL statements for the given date range, ordered by the metric of your choice. See the comments inside the procedure for details on the parameters and how to call it. Install the stored procedure in the Ignite repository database.

Stored Procedure

IF (SELECT COUNT(*) FROM SYS.OBJECTS WHERE NAME LIKE 'TopSQLByMetric') = 1
  DROP PROC TopSQLByMetric
GO

-- TopSQLByMetric PROC
--
-- PARMS '@DBNAME', '@METRIC', '@FROM_DATE', '@TO_DATE', '@SQL_HASH'
-- @DBNAME - name as shown in the Ignite GUI
-- @METRIC - Executions, Logical Reads, Logical Writes, Physical Reads, LIO / Exec, LIO Writes / Exec, PIO / Exec
-- @FROM_DATE - start date for the report in SQL Server format, typically YYYY-MM-DD HH24:MI
-- @TO_DATE - end date for the report in SQL Server format, typically YYYY-MM-DD HH24:MI
-- @SQL_HASH (optional) - if provided, limits the data to a list of SQL statements by Ignite hash value
--
-- EXAMPLES:
--
-- EXEC TopSQLByMetric 'SEBOX1\SE12005', 'LIO / Exec', '2011-1-1 00:00', '2011-1-10 23:59'
-- EXEC TopSQLByMetric 'SEBOX1\SE12005', 'LIO / Exec', '2011-1-1 00:00', '2011-1-10 23:59', '746746777, 78676876'
CREATE PROC TopSQLByMetric (
  @DBNAME VARCHAR(50),
  @METRIC VARCHAR(20),
  @FROM_DATE VARCHAR(16),
  @TO_DATE VARCHAR(16),
  @SQL_HASH VARCHAR(100) = NULL)
AS
DECLARE
  @SQL VARCHAR(4000),
  @DBID VARCHAR(3),
  @MetricColumnName VARCHAR(30)
BEGIN
  SELECT @DBID = ID FROM COND WHERE NAME = @DBNAME

  SELECT @MetricColumnName = CASE @METRIC
    WHEN 'Executions' THEN 'SUM(EXECS)'
    WHEN 'Logical Reads' THEN 'SUM(BGETS)'
    WHEN 'Logical Writes' THEN 'SUM(PARSES)'
    WHEN 'Physical Reads' THEN 'SUM(DREADS)'
    WHEN 'LIO / Exec' THEN 'SUM(BGETS)/SUM(EXECS)'
    WHEN 'LIO Writes / Exec' THEN 'SUM(PARSES)/SUM(EXECS)'
    WHEN 'PIO / Exec' THEN 'SUM(DREADS)/SUM(EXECS)'
    ELSE 'Invalid Metric' END

  IF @MetricColumnName = 'Invalid Metric'
    PRINT 'Invalid Metric Name Provided. Valid values are Executions, Logical Reads, Logical Writes, Physical Reads, LIO / Exec, LIO Writes / Exec, PIO / Exec'
  ELSE
  BEGIN
    SELECT @SQL = 'SELECT ISNULL(N.NAME, SS.SQLHASH) "NameOrHash", ' +
      'SUM(EXECS) as "Executions", ' +
      'SUM(BGETS) as "Logical Reads", ' +
      'SUM(BGETS)/SUM(EXECS) as "LIO / Exec", ' +
      'SUM(DREADS) as "Physical Reads", ' +
      'SUM(DREADS)/SUM(EXECS) as "PIO / Exec", ' +
      'SUM(PARSES) as "Logical Writes", ' +
      'SUM(PARSES)/SUM(EXECS) as "LIO Writes / Exec" ' +
      'FROM CON_STATS_SUM_' + @DBID + ' SS ' +
      'LEFT OUTER JOIN CON_SQL_NAME N ON SS.SQLHASH = N.HASH ' +
      'WHERE DATEHOUR BETWEEN ' +
      'CONVERT(DATETIME,''' + @FROM_DATE + ''', 101) AND ' +
      'CONVERT(DATETIME,''' + @TO_DATE + ''', 101) '

    -- if a hash value was passed in, only do this for that SQL
    IF ISNULL(@SQL_HASH, '*') <> '*'
    BEGIN
      SELECT @SQL = @SQL + ' AND SS.SQLHASH IN (' + @SQL_HASH + ') '
    END

    SELECT @SQL = @SQL + 'GROUP BY ISNULL(N.NAME, SS.SQLHASH) HAVING SUM(EXECS)<>0 ' +
      'ORDER BY ' + @MetricColumnName + ' DESC'

    EXEC (@SQL)
  END
END
GO
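
To find the exact value to pass as @DBNAME, the COND table that the procedure reads can be listed directly in the repository. This is a minimal sketch using only the ID and NAME columns referenced by the procedure above:

-- Monitored instances as registered in the Ignite repository;
-- NAME is the value to pass as @DBNAME, and ID is the suffix of the CON_STATS_SUM_<ID> table the report queries.
SELECT ID, NAME FROM COND ORDER BY NAME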

  • I see this data in my database always aggregated to the hour. For example, if I see the PLE graph going down from 9:15 AM to 9:30 AM, we would not be able to tell which SQL contributed to the loss of PLE, because the query is aggregated by the hour. At least, that is how I am seeing it in our setup.

    I don't see any significance for the "mi" (minute) field in the to-date and from-date.

    Do you have the same problem? If so, did you overcome it by any means?
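
    One way to check the grain directly is to look for any DATEHOUR values that carry a minute component in the summary table the report reads. This is a quick sketch, assuming DATEHOUR is a datetime column (as the procedure's BETWEEN filter suggests); replace the _1 suffix with the ID returned from COND for your instance:

      -- Returns rows only if the summary data is stored at a finer-than-hourly grain
      SELECT TOP 10 DATEHOUR
      FROM CON_STATS_SUM_1
      WHERE DATEPART(MINUTE, DATEHOUR) <> 0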

  • Have you tried using Timeslice to go back to the moment PLE plunged and see what was running? I've attached a screenshot showing how I can use Timeslice to get down to what was running in the couple of minutes where I know PLE dropped. There are three queries that had significant wait time, and I'd suspect them first. From there, I could evaluate which queries may have called for a lot of data, which could send PLE plummeting.

    Capture.JPG

  • Thanks, Bryan, for the response.

    As you can observe from the trend in the graphical image above, those queries were there before the PLE drop as well, so I may not be able to conclude that they contributed to the PLE drop.

    No?

  • Well yes, they are there before the PLE drop, but what's important is what they were doing when PLE dropped. For PLE to drop, there must have been a request for a lot of pages not currently in memory. Unfortunately, we're not capturing any parameter values that may have been used in those SQLs, so we can't know exactly which rows were requested, but if we select one of those bars and drill in, we can find out what kind of waits they were incurring, which will help us judge whether they were the likely cause. Alas, the answer is seldom clear-cut. Using my example, I'll click into the 9:30 AM bar and it will show me the queries running in that minute and what kind of waits they had. All three of those queries had PAGEIOLATCH_EX, so there is a good chance all three contributed to my dip in PLE, but I can at least say which ones contributed more by how much PAGEIOLATCH_EX they had. Then I can step forward one minute at a time and see whether it is consistently like this, or whether the wait types change from one moment to the next.

    Capture.JPG
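
    For anyone who wants to corroborate this outside the tool, the same signals can be spot-checked from the standard SQL Server DMVs. This is a minimal sketch, independent of the DPA/Ignite repository:

      -- Current Page Life Expectancy as reported by the buffer manager counters
      SELECT counter_name, cntr_value AS ple_seconds
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Page life expectancy'
        AND object_name LIKE '%Buffer Manager%';

      -- Cumulative PAGEIOLATCH waits since the last restart (or wait-stats clear)
      SELECT wait_type, waiting_tasks_count, wait_time_ms
      FROM sys.dm_os_wait_stats
      WHERE wait_type LIKE 'PAGEIOLATCH%'
      ORDER BY wait_time_ms DESC;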

  • Thanks for the update, Brian. That was useful.

    My question is specific to the Custom Report. My point is that there is no need for the "mi" (minute) field, as the stored data is always aggregated to the hour. So one cannot pass minute-level range parameters to the query, since the data is only available at the hour level.

    Thanks

    Adithya

  • I checked with support, and they say you are correct: that dangling "mi" field isn't providing value here. It is possible to build an alert off the tables that contain the high-detail data, but we haven't mapped it out. It may be complex, and because the detail tables see more frequent activity, you'll have to balance the query load carefully to avoid blocking or slowing DPA. You could work with support to get the information you need to build such a report, and we hope you would share it back to Thwack if you do.