I thought I knew how SQL Server manages statistics and started writing some maintenance procedures to augment the built-in stats maintenance. All was well until I sat down to test my code and applied the principles of black box testing. This made me think about what I needed to happen rather than what I was expecting to write in the implementation of the code. I wrote the following acceptance criterion:
Table statistics are updated whenever the number of records in a table increases by a percentage defined at the instance level.
Simple enough, so I started by investigating how SQL does things by default and I came across this excellent SQLBits presentation by Maciej Pilecki: https://sqlbits.com/Sessions/Event7/Lies_Damned_Lies_And_Statistics_Making_The_Most_Out_of_SQL_Server_Statistics
After watching this I realised that what my code needed to do was provide a mechanism for overriding the default threshold for updating stats, not simply schedule routine stats update statements. Maciej also makes it clear that the rate and volume of change in a table is a far better indicator of stale stats than STATS_DATE, which is what I have always used, and that the rate of change can be calculated from the [modification_counter] column of the sys.dm_db_stats_properties function.
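To illustrate what I mean (this is just a quick sketch put together from the DMV documentation, not code from the presentation), a query along these lines shows the percentage of rows modified since each statistic was last updated:

-- Rough sketch: percentage of rows modified per statistic since its last update
SELECT
    OBJECT_NAME(s.object_id) AS table_name,
    s.name AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    CAST(100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS decimal(5, 2)) AS pct_modified
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 0
ORDER BY pct_modified DESC;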
So the upshot is that my code now simply checks, every hour, for any tables where the percentage of rows changed has exceeded a threshold stored as an extended property, rather than waiting for the default of 500 rows plus 20% of the table's rows (which for very large tables means that your stats won't get updated very often).
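To give a rough idea of the shape of that check (the extended property name and the 10% fallback below are placeholders, not the actual values my procedure uses):

-- Sketch only: build UPDATE STATISTICS statements for every stat whose modification
-- percentage has passed the per-table threshold stored as an extended property.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE STATISTICS '
    + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + N'.'
    + QUOTENAME(OBJECT_NAME(s.object_id))
    + N' ' + QUOTENAME(s.name) + N';' + CHAR(13)
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
OUTER APPLY (
    SELECT TRY_CONVERT(decimal(5, 2), CONVERT(nvarchar(20), ep.value)) AS threshold_pct
    FROM sys.extended_properties AS ep
    WHERE ep.major_id = s.object_id
      AND ep.minor_id = 0
      AND ep.name = N'StatsUpdatePercentThreshold'  -- placeholder property name
) AS t
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.rows > 0
  AND 100.0 * sp.modification_counter / sp.rows >= COALESCE(t.threshold_pct, 10);  -- 10% fallback, illustrative only

EXEC sys.sp_executesql @sql;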
I don't think I have bottomed this one out yet but the SQLBits presentation really helped so I thought I'd share it.
Later Dudes and Dudettes!