Finding Low-Use Indexes via SQL Sentry
Do you know if the nonclustered indexes in your SQL Server databases are being used effectively? You may think they are, but do you really know?
If you're lucky enough to be attending SQLBits in a few weeks, you can come to my Uncovering Duplicate, Redundant, and Missing Indexes session to learn more! I'll be covering solutions using native T-SQL. On the other hand, if you happen to use SentryOne as your enterprise monitoring solution, I have another solution for you now.
Indexes in SentryOne
In SentryOne, we store a wide variety of data points about your monitored SQL Servers in our Repository database. In Performance Analysis, the Indexes tab has a graph that shows index usage history over time.

Knowing that we store this information in our Repository, I decided to leverage it to come up with a new solution to tie into my SQLBits presentation. I should point out that we encourage customers to data mine the repository to meet their own needs. Jason Hall wrote a fantastic series, Mining Performance Data from SQL Sentry (Part 1), to get people started. As a recent newcomer to SentryOne, I used his series to reverse engineer how we store index usage data. The result is a new stored procedure I've created called sp_FindLowUseIndexesS1.
Introducing sp_FindLowUseIndexesS1
The purpose of sp_FindLowUseIndexesS1 is to help you identify nonclustered indexes that are underused or not used at all. How do we determine that in the first place? This procedure aggregates index seeks and scans, and compares that total against the total number of update operations that have occurred. If the ratio of total read operations to total update operations is 5% or below, the index will appear in the output.
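If you'd like to see the underlying idea outside of SentryOne, here is a minimal sketch of the same read-versus-update ratio using the native sys.dm_db_index_usage_stats DMV. To be clear, this is not the repository-based logic the procedure uses; it's just an approximation of the concept, and the 5% threshold mirrors the default described above.

```sql
-- Illustrative sketch only: approximates the read-vs-update ratio idea using
-- the native sys.dm_db_index_usage_stats DMV instead of the SentryOne repository.
-- Run in the context of the database you want to inspect. Note that indexes
-- with no row in the DMV (no activity since the last restart) won't appear.
DECLARE @ThresholdPct decimal(5,2) = 5.0;   -- mirrors the 5% default described above

SELECT
    OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
    OBJECT_NAME(i.object_id)        AS TableName,
    i.name                          AS IndexName,
    us.user_seeks + us.user_scans   AS TotalReads,
    us.user_updates                 AS TotalUpdates,
    CAST(100.0 * (us.user_seeks + us.user_scans)
         / NULLIF(us.user_updates, 0) AS decimal(10,2)) AS ReadToUpdatePct
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
    ON  us.object_id  = i.object_id
    AND us.index_id   = i.index_id
    AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  -- indexes with zero updates are skipped (NULLIF), since there is no write cost to weigh
  AND 100.0 * (us.user_seeks + us.user_scans)
      / NULLIF(us.user_updates, 0) <= @ThresholdPct
ORDER BY ReadToUpdatePct;
```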
Let's see this in action!
First, you'll need to deploy this stored procedure to your SentryOne repository. You'll notice that it's a temporary stored procedure. This is intentional, because I created and provided this code external to the SentryOne codebase. Once it is deployed, pick a SQL Server, a database, and a date range to analyze.
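As an illustration, a call might look something like the following. Apart from @ThresholdPct (shown in the examples below), the parameter names and values here are placeholders of my own; check the downloaded script for the actual signature.

```sql
-- Hypothetical call; parameter names other than @ThresholdPct are assumptions,
-- so verify them against the downloaded script. Because this is a temporary
-- stored procedure, run the EXEC in the same session (query window) where you
-- deployed it if it was created as a local temp procedure (the deployed name
-- may carry a # prefix).
EXEC sp_FindLowUseIndexesS1
    @ServerName   = N'PROD-SQL01',          -- monitored SQL Server to analyze
    @DatabaseName = N'AdventureWorks2019',  -- database on that server
    @StartDate    = '2019-01-01',           -- beginning of the date range
    @EndDate      = '2019-03-31';           -- end of the date range
```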
Example #1: Defaults
Below is a screenshot showing the procedure run with the required parameters. Notice that there are a few indexes that have had some activity. This helps you also identify indexes that are rarely used, as opposed to not used at all.

Example #2: @ThresholdPct = 0
In this next example, I show the behavior when @ThresholdPct is set to 0. This only shows indexes that have had zero read activity.

Example #3: @ThresholdPct = 999999
Alternatively, you could set @ThresholdPct to an arbitrarily high value like 999999. What does this accomplish? It allows you to see a summary of all index usage data.
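Putting Examples #2 and #3 side by side, the calls differ only in the threshold value (again, using the same placeholder parameter names as in the earlier sketch):

```sql
-- Example #2: only indexes with zero read activity in the chosen date range
EXEC sp_FindLowUseIndexesS1
    @ServerName = N'PROD-SQL01', @DatabaseName = N'AdventureWorks2019',
    @StartDate = '2019-01-01', @EndDate = '2019-03-31',
    @ThresholdPct = 0;

-- Example #3: an arbitrarily high threshold returns a summary of all index usage
EXEC sp_FindLowUseIndexesS1
    @ServerName = N'PROD-SQL01', @DatabaseName = N'AdventureWorks2019',
    @StartDate = '2019-01-01', @EndDate = '2019-03-31',
    @ThresholdPct = 999999;
```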

How to Utilize this Data
I'd like to share some of my thoughts on how to make use of this data. We all know that nonclustered indexes incur overhead whenever we run DML operations on our SQL Servers. So if an index is rarely or never used by a production workload, what's the point? We're paying the cost of DML overhead but not getting any benefits!
This raises the question: should you immediately remove any indexes that come back from this report? I say absolutely not! Why? Because an index may have been put in place for a process that runs only periodically, like a month-end job, and it may be critical for that process. This is why I suggest that indexes identified by this report be flagged and watched for an additional period of time, to make sure that no periodic processes rely on them. Think of it like cleaning house: are you sure you can throw away that piece of mail? Are you sure it won't be important in a week or two?
Call to Action
I encourage everyone to take the time to find and clean up their low-use indexes. If you're a SentryOne customer, try out my code. If you have any feedback, I'd love to hear it - I already have some improvements in mind for the next release. Enjoy!
DOWNLOAD sp_FindLowUseNCIndexes_S1