Author Archives: Chris Yates


About Chris Yates

Senior Vice President | Managing Director of Data and Architecture | 7-time Microsoft Data Platform MVP | Microsoft Regional Director | RedGate Ambassador | Friend of Redgate Program

A Typical Day That Wasn’t So Typical

As DBAs we have our own patterns and schedules. The way I set my day will not be the same way you set yours, nor will the events that occur throughout the day affect me the same as they would you. Last week turned out to be a not-so-typical day. Every day I come into the office I have a set of things I’d like to get done in my head; whether that happens remains to be seen, but nevertheless some form of schedule is in my mind. As the work day progressed last Friday we were told of storms rolling in. To be honest, when I first heard this I didn’t think much of it; I knew I had a lot of things I needed to get done and accomplished before I left.

We began to notice, and got calls from family members, that businesses were going to be shut down and people were heading out of town to get back to their homes. I cut my day short by about an hour and a half, not expecting what was about to happen.

I’m going to tell my age somewhat, but the last time any tornado came through was 1974; I was not born yet at that time, but I have heard stories of it. As my family and I sat and watched the storms unfold we started to hear some hail, then some rain. That was all that unfolded where I was, but 20 minutes away there was devastation. I know this post is a little bit different, but we get so wrapped up in our routines and what we have to do that sometimes we forget what really matters most. I look at the devastation below and it could just as easily have been me and my family. Be thankful for every day, and next time when you think that the problem at work is going to do you in, remember things like this. In the grand scheme of things, the things we worry about might not be so important. Please remember the people in Indiana during this time as they try to pick up and rebuild, and remember the many lives that were lost. Out of a tragedy you will see communities bond together and build each other back up.


Dynamic Management View Extravaganza

It was at the PASS Summit 2011 this year, as I sat in a session being presented by Jason Strate on extended events, that I made a mental note: why the heck am I not utilizing the DMVs as much as I could be? Then last week I read an article by Steve Jones on his blog about two DMVs that were released with Microsoft’s SQL Server 2008 R2 Service Pack 1 release.

Dynamic management views and functions return information on the state of the server. They can be used to monitor the health of a server, assist in diagnosing problems, or just help in everyday performance tuning. It is important to note that with each release or version the views could be removed or altered.

Since the dynamic management views and functions are broken out into server-scoped and database-scoped, you will need to take into account two permissions when viewing or running queries against these views and functions:

  • VIEW SERVER STATE
  • VIEW DATABASE STATE
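Granting those two permissions is a one-liner each; this is a sketch, and the login and user names are placeholders:

```sql
-- Server-scoped DMVs require VIEW SERVER STATE (granted to a login)
GRANT VIEW SERVER STATE TO [DOMAIN\MonitoringLogin];

-- Database-scoped DMVs require VIEW DATABASE STATE (granted to a user,
-- in each database you want to query)
GRANT VIEW DATABASE STATE TO [MonitoringUser];
```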

Querying a view or function is pretty straightforward; an example would be:

SELECT *
FROM sys.dm_server_services

I think Microsoft has done a pretty good job of breaking these out into groups, and I have noted them below. The two that were released with Service Pack 1 are for checking the service account.
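For example, the two Service Pack 1 additions can be queried directly; sys.dm_server_services reports the service accounts and startup types, while sys.dm_server_registry exposes the SQL Server-related registry settings:

```sql
-- Service account, startup type, and status for each SQL Server service
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;

-- SQL Server-related registry keys and values
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry;
```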

  • Change Data Capture Related Dynamic Management Views
  • Change Tracking Related Dynamic Management Views
  • Common Language Runtime Related Dynamic Management Views
  • Database Mirroring Related Dynamic Management Views
  • Database Related Dynamic Management Views
  • Execution Related Dynamic Management Views and Functions
  • Extended Events Dynamic Management Views
  • Filestream-Related Dynamic Management Views (Transact-SQL)
  • Full-Text Search Related Dynamic Management Views
  • Index Related Dynamic Management Views and Functions
  • I/O Related Dynamic Management Views and Functions
  • Object Related Dynamic Management Views and Functions
  • Query Notifications Related Dynamic Management Views
  • Replication Related Dynamic Management Views
  • Resource Governor Dynamic Management Views
  • Security Related Dynamic Management Views
  • Service Broker Related Dynamic Management Views
  • SQL Server Operating System Related Dynamic Management Views
  • Transaction Related Dynamic Management Views and Functions

What Kind of a DBA Are You?

I get a chuckle sometimes at the off-the-wall answers I get when an end user approaches us as DBAs and says, “The whole system is down, what is going on, fix it now.” I think that for most of us as DBAs this has happened at some point in our careers. I’ve learned over my 12 years of experience that the DBA has to dive into a problem and become an investigator. We should already have a pretty good working knowledge of how the front end and back end interact, along with the structure of each. What’s the first thing you check when there is a problem?

One of the main things I have done for myself is to keep a checklist I run through when something within the system goes awry. You may ask me, “Why do you have this if you consider yourself a strong DBA?” While I am confident in my abilities, we are all human and we make mistakes. A checklist rules out the most common mishaps before I have to dive into a bigger investigation, such as going through the plan cache or error logs. I learned early on to “keep it simple, stupid” ~ just start going through your checklist and rule out the obvious first and foremost.
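Some of those checklist items can even be scripted. As one sketch (not a full diagnostic), a quick first pass for blocking and what the blocked sessions are running:

```sql
-- Quick first pass: is anything currently blocked, and by whom?
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```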

Another thing I have learned over the years is to treat people with true professionalism. There is an age-old adage about the DBA and a DBA team: developers will always blame the DBAs, and the DBAs will always blame the developers. One of my main goals is to bridge that gap at my current employer. I recently took a poll among my peers – developers, business analysts, and QA personnel – asking: when you hear the name DBA Team, what do you think of? Some of the responses were surprising and some were not, but I left with the takeaway that the team I’m on has some work to do in bridging gaps. How you treat people when a problem occurs goes a long way, and not everyone has the ability to communicate.

I hope that if you don’t have a checklist you’ll get one, and I hope that when situations do arise you, as a DBA, treat everyone with true professionalism. In the end we all want the same goal: a good system that performs to the best of its ability.

Think Outside the Box

SQL Multi Script

I enjoy reading what other people in the industry have to say. I might do something one way whereas someone else might do the same thing totally differently, but equally well if not better. I try to read some of the top DBAs in the industry ~ I have picked up so much from them, along with different techniques that have enhanced my capabilities in my own work situations.

In saying that, I happened to be reading Steve Jones’ blog the other night and he laid down a challenge. Well… maybe not a challenge per se, but I think he hit the nail on the head and it challenged me. Many of us have many types of tools at our disposal; I just so happen to have some that Steve mentioned in his blog from Red Gate. Basically, it is easy for us to get stuck in a rut and rely on the same tools that we’ve been using for months, years, or even decades.

I personally have the Red Gate Tool Belt at my disposal. Have I used all of the tools ~ nope. Have I even opened all of them ~ nope. So in reading Steve’s blog I felt challenged to test-drive each product in the tool belt. Well, you know what, IT PAID OFF.

I will try to lay out my scenario and how this has helped me. I have many .SQL files stored in TFS in various directories for replication, along with corresponding index files. When having to blow away replication, or if updates have been made to the files and a deployment is needed, my normal routine would be to go to each directory, pull what I need, place it into a query analyzer, and execute my code.

This was a painstakingly slow process. So instead I completed the following:

  • Created two SSIS packages – I was dealing with .SQL files for publications and then files for indexes. Now I get the latest from TFS and execute the first package to grab all my publication files and dump them into one directory (I will explain why in a second). The second package grabs the index files and pushes them out to the server in any of my environments.
  • I go back to my directory with all my publications and, because I use SQLCMD variables in them, I open each one in sequential order and uncomment the variables I need. Then I execute each one in the analyzer.
  • I then use the SQL Multi Script utility from Red Gate, which allows me to select the .SQL files I created to start each agent by generating a snapshot automatically, instead of having to start each one manually from the replication monitor.
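The steps above lean on SQLCMD variables; a publication script might be laid out something like this sketch (the server and database names are made up for illustration), with the :setvar lines commented or uncommented per environment:

```sql
-- Hypothetical publication script; uncomment the :setvar line
-- for the environment you are deploying to.

-- :setvar PublisherServer "PRODSQL01"
-- :setvar PublisherServer "TESTSQL01"
:setvar PublicationDB "SalesDB"

USE [$(PublicationDB)];
GO

-- publication creation / refresh statements follow here...
```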

Needless to say, I took about a 45-minute process and turned it into a 2-minute process. Using the tools you have available pays off; I’m making a point to go through anything I have but have never used, and see if I can shore up any other processes.

SQLSaturday #122

SQL Saturday #122

It’s that time again; SQL Saturday is coming to the Louisville, KY area July 21, 2012. The event will be held at:

The University of Louisville School of Business

2301 3rd Street

Louisville, KY 40202

If you have never been to a SQL Saturday, please check it out. This is a free one-day SQL Server training event for the SQL community that focuses on local speakers, providing a variety of high-quality technical sessions, and making it all happen through the efforts of volunteers. Check out the link below for further details and, if you are feeling giddy enough, ways for you to volunteer. Let’s get the word out and make this one of the best SQL Saturdays that has come to Louisville.

http://www.sqlsaturday.com/122/eventhome.aspx

Is Your Data Being Collected?

For those of you who have read my posts, you will know I am very big on utility databases. I like them because there is a plethora of information that can be gathered that you can trend, baseline, troubleshoot, etc.

In browsing some forums and whatnot I came across a good article regarding the Data Collector within SQL Server. I decided in some late-night reading to study up on it, and found it to be something that in our line of business is pretty beneficial. This component of SQL Server 2008 collects different data sets. I like it for many reasons; one perk that stood out to me was configuring the different types of monitors on a scheduled basis. You may ask what the difference is between this and the utility databases I always speak of. I am glad you asked: my utility databases are custom-fitted to meet my exact needs, while the Data Collector component comes out of the box and you just have to configure it.

I’ve started utilizing both; configuring the Data Collector takes a matter of minutes, and you can set schedules and retention periods for how long you want to keep your data. I basically set up an empty database shell; the configuration then takes care of creating the necessary stored procedures and schema to support it. Another feature that I liked is the three reports that come with it out of the box – disk usage, query stats history, and server stats history. Granted, all of these can be gathered via scripts, but why not take advantage of something already built for you?
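Once it is configured, you can check what collection sets exist and whether they are running by querying msdb; a read-only sketch:

```sql
-- Collection sets defined on the instance, with their collection mode
SELECT name,
       is_running,
       collection_mode,        -- 0 = cached, 1 = non-cached
       days_until_expiration   -- retention period for collected data
FROM msdb.dbo.syscollector_collection_sets;
```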

This link by Brad McGehee provides information on how to set up and configure the Data Collector.

This link on the MSDN site will provide a basic understanding of what the Data Collector does.

After research I found that this resides best on a CMS server, if you have one. If you are unfamiliar with the CMS server, check out John Sterret’s blog. I sat in on one of his sessions at PASS and really enjoyed it.

If you aren’t collecting data, you’re missing out; if you are collecting data and aren’t using these two methodologies, take some time to review them; they might just meet your needs.

Sniff that Parameter

How many times does it happen: you have this masterful plan of what you are going to do as soon as you get into the office and, bam ~ you haven’t even made it out of the car and emails start exploding in.

I would gather that for most of us this occurrence is not out of the norm. In fact, it probably happens more times than not. Today such an issue came up ~ I was anticipating working with some VLFs when a process decided to time out, a process that, we shall say, is of the utmost importance. I will do my best to lay out the scenario.

The issue was narrowed down to one procedure. That procedure had one parameter being passed in. Here are the initial steps that were thought of:

  • Check indexes (found one to be missing, so it was added; I will say that Aaron Bertrand’s blog, where I found the SQL Sentry utility a few months ago, was very helpful in looking at the estimated and actual execution plans).

This did not improve performance. The timeout threshold for the app is 30 seconds; we were consistently hitting over 2 minutes.

  • Check the stored procedure – nothing really stood out. It selected from some views, but they were pretty straightforward.
  • Check for any blocking (this was done multiple times throughout the whole assessment process)
  • Check for any other system abnormalities

So, with the procedure still timing out and not many other options remaining, one of the team members recalled a session they sat in on ~ parameter sniffing. The parameter, after being read in, was assigned to a local variable internal to the stored procedure. That newly created variable was then passed down through the procedure where needed.

Upon executing the proc it took a mere second!
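The local-variable fix we applied follows this general pattern (the procedure, table, and column names here are hypothetical, not our actual code):

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Copy the sniffed parameter into a local variable; the optimizer
    -- then estimates against average density instead of compiling a
    -- plan for whichever value it happened to see first.
    DECLARE @LocalCustomerId INT = @CustomerId;

    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END;
```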

I’ve done a little bit of reading on parameter sniffing and find it very interesting. I would suggest reading Greg Larson’s article ~ he has it laid out very nicely.

This is one aspect I enjoy of being part of a core DBA team. All backgrounds come together and bring different aspects to solutions. I look forward to the next problem to find that unique solution.

SSIS Configurations From a DBA Perspective

The title might be a little misleading. “DBA perspective” is solely my perspective in dealing with SSIS up to this point.

The list that I have tallied of ways to handle configurations is as follows:

  • XML Configurations – XML file that contains the SSIS configurations (can include multiples)
  • Environment Variable – environment variable that contains configurations
  • Registry Entry
  • Parent Package Variable – This config type is more often used to update child packages
  • SQL Server Table – a table inside SQL Server contains the configurations (can include multiples)

Now, the irony, if you will, is that in my specific case we currently have a number of variations of packages that use many different methods – this, to me, equals a monstrosity.

The two styles that seem easiest to me are the XML configuration and the SQL Server table configuration. My preferred method is to store them all in a SQL table.

I have three tables that I use:

  • SSIS_Configurations
  • SSIS_File_Spec_Configuration
  • SSIS_Log_Details

I’ll try to break down the schema of each table:

SSIS_Configurations

  • ConfigurationFilter – Name of process
  • ConfiguredValue – Server Instance
  • PackagePath – package connections
  • ConfiguredValueType – self-explanatory (string, etc.)
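A table matching those columns can be created ahead of time, or the SSIS package configuration wizard will generate one for you; this sketch mirrors the standard layout the wizard produces, just with my table name:

```sql
CREATE TABLE dbo.SSIS_Configurations
(
    ConfigurationFilter  NVARCHAR(255) NOT NULL,  -- name of the process
    ConfiguredValue      NVARCHAR(255) NULL,      -- e.g. the server instance
    PackagePath          NVARCHAR(255) NOT NULL,  -- package connection/property path
    ConfiguredValueType  NVARCHAR(20)  NOT NULL   -- String, Int32, etc.
);
```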

SSIS_File_Spec_Configuration

  • Pkey – identifies multiple entries for one package
  • JobIdentifier – name of package
  • Apptype – what application is it a part of
  • InputFileType – .txt, .csv, etc.
  • InputFilePathType – is it a share, sql server, etc.
  • InputFilePath – location of where the input file path is coming from
  • SProcName – name of procedure utilized
  • OutputFileType – .xlsx, .csv, etc.
  • OutputFilePath – location of where the output file will be sent
  • InputFileArchivePath – Archive location for the input file

SSIS_Log_Details

  • ExecStartTime – start time of the SSIS run
  • ExecEndTime – end time of the execution
  • TaskName – name of Task being executed
  • Status – what is the status of the package
  • ErrorDescription – what was the error the package encountered
  • Duration – duration of the package
  • PkgName – package name
  • SubPkgName – child package
  • BatchId – self-explanatory

This process has proved helpful for me; each DBA is different though and I can only say to find what works best for you in your particular situation and environment.

Where’s that Baseline?

Everybody needs one; each business is different, so what are you going to do? No, if you’re like me and the first thing you thought of was to call the Ghostbusters, I’d advise against it (although reliving the Bill Murray glory days might not be a bad idea). In reality, as DBAs, we will have a need to trend patterns over time. It is inevitable and should be a way of life for us in most instances. How do we do it; or shall I say, how am I going to do it? I don’t believe that I can specifically tell you how to meet your business needs; however, after attending the PASS Summit this past year and getting some great ideas, I think I’ve come up with something that works for me as a DBA.

I definitely want to recommend checking out Chris Shaw’s utility databases presentation, which he has made available on his blog. It was in attending his session at PASS that this idea spawned. I am part of a DBA team in the tax / banking industry. We heavily support the tax side of things, so for 6 months out of the year it is a slamming great time! Needless to say, 6 weeks is small compared to 52 weeks out of the year, so capturing data is important to trend it out and see what is happening in the system.

I have chosen to capture a few things that will help me troubleshoot and look at problem areas in the future. Due to regulations I’m not going to dump my findings into my prod environment, but will instead opt to dump them into my local environment (I can do this only because I do not have live production data in my local environment; just statistics, etc.).

Needless to say, I have my tables and schema set up to accommodate the following criteria (date-time stamps of entries are recorded):

Server Level:

  • Calculates average stalls per read, per write, and per total input/output for each database file
  • CPU utilization by database
  • Total buffer usage by database for current instance
  • Isolate top waits for server instance since last restart or statistics clear
  • Signal Waits for instance
  • Get CPU Utilization History for last 256 minutes
  • Basic information about memory amounts and state
  • SQL Server Process Address space info
  • Memory Clerk Usage for instance
  • Single-use, ad-hoc queries that are bloating the plan cache
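The “top waits” item on that list, for example, can be captured with a query along these lines (the excluded benign waits are trimmed to a short illustrative set; a real version filters many more):

```sql
-- Top waits since the last restart or statistics clear
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0        AS wait_time_s,
       signal_wait_time_ms / 1000.0 AS signal_wait_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;
```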

Database Level:

  • Individual File Sizes and space available for current database
  • I/O Statistics by file for the current database
  • Top Cached SPs By Execution Count
  • Top Cached SPs By Avg Elapsed Time
  • Top Cached SPs By Total Worker time
  • Top Cached SPs By Total Logical Reads
  • Top Cached SPs By Total Physical Reads
  • Top Cached SPs By Total Logical Writes
  • Lists the top statements by average input/output usage for the current database
  • Possible Bad NC Indexes (writes > reads)
  • Missing Indexes current database by Index Advantage
  • Find missing index warnings for cached plans in the current database
  • When were Statistics last updated on all indexes
  • Get fragmentation info for all indexes above a certain size in the current database (the maintenance job we have will take care of fragmentation but I wanted to see what it was like after our jobs ran and before the next run)
  • Index Read/Write stats
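Most of the “Top Cached SPs” items above come out of sys.dm_exec_procedure_stats; by execution count, for instance, it is roughly:

```sql
-- Top cached stored procedures by execution count, current database
SELECT TOP (10)
       OBJECT_NAME(ps.object_id) AS proc_name,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;
```

Swapping the ORDER BY column (total_worker_time, total_logical_reads, and so on) covers the other variations in the list.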

In the end, I’ll have a plethora of information to go back to and review after the “season” ends. If you don’t have a “utility” database, I suggest you get one.


Pro-Activeness vs. Passiveness

I was posed this question the other day by a fellow DBA of mine. We tend to bounce ideas off each other randomly, and the question in simple terms was this: “Do you consider yourself a proactive DBA or a passive DBA?” First I want to explain the term passive so it is not taken out of context of the conversation I had. Passive is not meant to be degrading in any manner. The question simply meant: do you wait for things to happen and then attack the situation?

In my career I have seen various DBA methods utilized by many different people. I myself tend to fall into the proactive category. I want to head off as many items as possible before they happen rather than react to them. This doesn’t mean some won’t slip through the cracks; of course they will, we are human. It simply means that every day I come into the office I give 110% in every aspect to make my environment as secure and efficient as possible.

So, I pose this question out to my fellow DBAs, and also a challenge: which category do you fall into, and do you think it is the right category to be in?

This simple question turned into an hour-long conversation over lunch on the whys and why-nots. Challenge yourself in 2012; motivate yourself to become a better person and a better DBA.