Today’s episode of SQL Server A to Z is brought to you by the letter D. D is for Dynamic Management View (DMV). DMVs were introduced in SQL Server 2005 as a way to provide administrators with a window into what’s going on in SQL Server at any given moment. This can range from OS statistics, to information about replication, to what execution plans are currently in cache. Today I’ll give you a brief look at a few of the DMVs I use.
The first is sys.dm_os_performance_counters. This view is like having Perfmon in T-SQL: you’ll see the same SQL Server performance metrics in either. I happen to like Perfmon for gathering performance stats over a period of time, like when I’m getting a baseline for a server. But sometimes you just want a glimpse of how things are running right now, at this moment. Or maybe you want to see those stats next to a list of what processes are currently running in the database. That’s difficult, if not impossible, to do with Perfmon, and that’s when you’d use this DMV. Let’s take a look at the counters it exposes.
SELECT DISTINCT object_name FROM sys.dm_os_performance_counters;
And if we drill down a little further:
SELECT * FROM sys.dm_os_performance_counters WHERE object_name LIKE '%Buffer Manager%';
And there you see the same counters for Buffer Manager that you’d see if you ran Perfmon. This is one of the DMVs I use in my script to see how SQL Server is using its memory.
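As an example of that kind of memory check, here’s a quick sketch that pulls Total and Target Server Memory straight from the DMV. (The counter names are as they appear in Perfmon under the Memory Manager object; the exact object_name prefix varies with your instance name, hence the LIKE.)

```sql
-- How much memory SQL Server is using vs. how much it wants.
-- These counters report in KB, so divide for MB.
SELECT counter_name,
       cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```

If Total is pinned at Target, the instance is using everything it’s been allowed.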
I’ll admit this next DMV doesn’t get used a lot. But when it is used, it’s very handy. The sys.dm_exec_query_memory_grants DMV returns information about current sessions that have been granted memory to execute or are waiting on memory grants. If I start seeing RESOURCE_SEMAPHORE waits in my database, this is where I go first. It will tell me how much memory each session has been granted, how much it requested, how much it would have requested given unlimited resources (useful for identifying very bad queries), and the SQL and plan handles (so you can go retrieve the exact query). Let’s take a look.
SELECT * FROM sys.dm_exec_query_memory_grants;
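When I’m actually chasing RESOURCE_SEMAPHORE waits, I’d slice it something like this (a sketch, not gospel): show only the sessions still waiting on a grant, biggest requests first.

```sql
-- Sessions waiting on a memory grant (grant_time is NULL until granted),
-- largest requests first. ideal_memory_kb is what the query would ask
-- for given unlimited resources.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       wait_time_ms,
       sql_handle,
       plan_handle
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL
ORDER BY requested_memory_kb DESC;
```

From there you can feed sql_handle into sys.dm_exec_sql_text to see exactly which query is hogging the grants.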
The last DMV I’m going to cover is one I use quite regularly to determine what indexes are being used, how much they’re used, and when they’re used. The sys.dm_db_index_usage_stats view contains a record for every index that’s been used since the instance last started, along with counts for the various read and write operations that have been executed against it. Why is this helpful? If you monitor this view over time and find an index that has high user updates but no user seeks, scans, or lookups, in most cases that index isn’t being used by the application, and you might consider removing it to eliminate the overhead of maintaining it. Now, before you go out and start dropping indexes, there are some caveats to keep in mind. Those are outside the scope of this post, but Google “SQL Server drop unused indexes” before you do anything else.
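Here’s a rough sketch of how you might spot candidates in the current database: indexes that have taken writes but no reads since the last restart. (Remember, this only reflects activity since the instance started.)

```sql
-- Indexes in the current database with updates but no seeks, scans,
-- or lookups since the last instance restart.
SELECT o.name AS table_name,
       i.name AS index_name,
       s.user_updates,
       s.user_seeks,
       s.user_scans,
       s.user_lookups
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
INNER JOIN sys.objects o
        ON o.object_id = s.object_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
  AND s.user_updates > 0
ORDER BY s.user_updates DESC;
```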
I know I said that would be the last one, but I feel I should mention a few DMVs that complement sys.dm_db_index_usage_stats nicely. The sys.dm_db_missing_index_groups, sys.dm_db_missing_index_group_stats, and sys.dm_db_missing_index_details views are commonly joined together to provide a list of indexes that SQL Server recommends based on usage. Like so:
SELECT o.name AS TableName,
       s.user_seeks * s.avg_total_user_cost * (s.avg_user_impact * 0.01) AS index_advantage,
       s.avg_user_impact,
       s.avg_total_user_cost,
       s.last_user_seek,
       s.unique_compiles,
       d.index_handle,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       d.[statement]
FROM sys.dm_db_missing_index_group_stats s
INNER JOIN sys.dm_db_missing_index_groups g
        ON g.index_group_handle = s.group_handle
INNER JOIN sys.dm_db_missing_index_details d
        ON d.index_handle = g.index_handle
INNER JOIN sys.objects o
        ON o.object_id = d.object_id
ORDER BY index_advantage DESC, s.avg_user_impact DESC;
Keep in mind that all of these stats are reset at instance startup, so you’ll want to develop a procedure to retain this data over time. Personally, I have a script that captures the information once a week, and I didn’t start acting on it until I had 4 months of data. Why so long? There are some processes that only run once a month, and a few that only run quarterly. If I based my actions on only a few weeks’ worth of statistics, I might end up dropping an index that’s critical to the performance of one of our quarterly reports. So be careful!
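A minimal sketch of that weekly snapshot idea, assuming a history table you create yourself (dbo.IndexUsageHistory is my own made-up name; adjust to taste) and a weekly SQL Agent job to run it:

```sql
-- Snapshot index usage stats with a capture date, creating the
-- history table on first run and appending thereafter.
IF OBJECT_ID('dbo.IndexUsageHistory') IS NULL
    SELECT GETDATE() AS capture_date, *
    INTO dbo.IndexUsageHistory
    FROM sys.dm_db_index_usage_stats;
ELSE
    INSERT INTO dbo.IndexUsageHistory
    SELECT GETDATE(), *
    FROM sys.dm_db_index_usage_stats;
```

With a few months of these snapshots you can tell the difference between an index that’s truly unused and one that only earns its keep at quarter-end.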
So there’s a taste of what DMVs I use. Which ones are your personal favorites?