Where’s that Baseline?

Everybody needs one, but every business is different, so what are you going to do? If you're like me and the first thing you thought of was to call the Ghostbusters, I'd advise against it (although reliving the Bill Murray glory days might not be a bad idea). In reality, as DBAs, we need to trend patterns over time. It is inevitable, and in most instances it should be a way of life for us. How do we do it? Or shall I say, how am I going to do it? I can't tell you specifically how to meet your business needs; however, after attending the PASS Summit this past year and getting some great ideas, I think I've come up with something that works for me as a DBA.

I definitely recommend checking out Chris Shaw's Utility Databases presentation, which he has made available on his blog. This idea spawned from attending his session at PASS. I am part of a DBA team in the tax/banking industry. We heavily support the tax side of things, so for six months out of the year it is a slamming great time! Needless to say, those months are a small slice of the 52 weeks in a year, so capturing data is important for trending it out and seeing what is happening in the system.

I have chosen to capture a few things that will help me troubleshoot and examine problem areas in the future. Due to regulations, I'm not going to dump my findings into my prod environment; I will instead dump them into my local environment (I can do this only because I do not have live production data in my local environment, just statistics, etc.).

I have my tables and schema set up to accommodate the following criteria (a date/time stamp is recorded with each entry):

Server level:

  • Average stalls per read, per write, and per total I/O for each database file
  • CPU utilization by database
  • Total buffer usage by database for the current instance
  • Top waits for the server instance since last restart or statistics clear
  • Signal waits for the instance
  • CPU utilization history for the last 256 minutes
  • Basic information about memory amounts and state
  • SQL Server process address space info
  • Memory clerk usage for the instance
  • Single-use, ad-hoc queries that are bloating the plan cache
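As a sketch of what one of these server-level captures might look like, here is a wait-stats snapshot query. The Utility schema, the WaitStats table, and its column names are my own invention for illustration; the DMV itself (sys.dm_os_wait_stats) and its columns are standard, and the list of benign wait types to exclude is just a small sample.

```sql
-- Hypothetical capture table; the Utility schema and names are illustrative only
CREATE TABLE Utility.WaitStats
(
    CaptureDate      DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
    WaitType         NVARCHAR(60) NOT NULL,
    WaitingTasks     BIGINT       NOT NULL,
    WaitTimeMs       BIGINT       NOT NULL,
    SignalWaitTimeMs BIGINT       NOT NULL
);

-- Snapshot the cumulative wait stats, stamped with the capture time
INSERT INTO Utility.WaitStats (WaitType, WaitingTasks, WaitTimeMs, SignalWaitTimeMs)
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
  AND wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'SQLTRACE_BUFFER_FLUSH');
```

Keep in mind the DMV is cumulative since the last restart or statistics clear, so trending means diffing two snapshots rather than reading a single one.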

Database level:

  • Individual file sizes and space available for the current database
  • I/O statistics by file for the current database
  • Top cached stored procedures by execution count
  • Top cached stored procedures by average elapsed time
  • Top cached stored procedures by total worker time
  • Top cached stored procedures by total logical reads
  • Top cached stored procedures by total physical reads
  • Top cached stored procedures by total logical writes
  • Top statements by average I/O usage for the current database
  • Possible bad nonclustered indexes (writes > reads)
  • Missing indexes for the current database by index advantage
  • Missing index warnings for cached plans in the current database
  • When statistics were last updated on all indexes
  • Fragmentation info for all indexes above a certain size in the current database (our maintenance job takes care of fragmentation, but I wanted to see what it looked like after our jobs ran and before the next run)
  • Index read/write stats
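For the database-level captures, here is a sketch of snapshotting the top cached stored procedures by execution count. Again, the Utility.ProcStats table and its columns are hypothetical names of my own; sys.dm_exec_procedure_stats and its columns are the real DMV.

```sql
-- Hypothetical capture table for procedure-level stats; names are illustrative only
CREATE TABLE Utility.ProcStats
(
    CaptureDate       DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    ProcName          SYSNAME   NOT NULL,
    ExecutionCount    BIGINT    NOT NULL,
    TotalWorkerTime   BIGINT    NOT NULL,
    TotalElapsedTime  BIGINT    NOT NULL,
    TotalLogicalReads BIGINT    NOT NULL
);

-- Top 25 cached stored procedures by execution count for the current database
INSERT INTO Utility.ProcStats
    (ProcName, ExecutionCount, TotalWorkerTime, TotalElapsedTime, TotalLogicalReads)
SELECT TOP (25)
       OBJECT_NAME(ps.object_id, ps.database_id),
       ps.execution_count,
       ps.total_worker_time,
       ps.total_elapsed_time,
       ps.total_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;
```

Swapping the ORDER BY column gets you the other "top cached SPs by..." variants from the list above, so one capture table can serve several of those criteria.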

In the end, I’ll have a plethora of information to review after the “season” ends. If you don’t have a “utility” database, I suggest you get one.

