SQLSaturday #122

It’s that time again; SQL Saturday is coming to the Louisville, KY area July 21, 2012. The event will be held at:

The University of Louisville School of Business

2301 3rd Street

Louisville, KY 40202

If you have never been to a SQL Saturday, please check it out. This is a free one-day SQL Server training event for the SQL community that focuses on local speakers, provides a variety of high-quality technical sessions, and makes it all happen through the efforts of volunteers. Check out the link below for further details and, if you are feeling giddy enough, ways for you to volunteer. Let’s get the word out and make this one of the best SQL Saturdays that has come to Louisville.

http://www.sqlsaturday.com/122/eventhome.aspx

Is Your Data Being Collected?

If you have read my posts, you will know I am very big on utility databases. I like them because there is a plethora of information that can be gathered that you can trend, baseline, troubleshoot, and more.

While browsing some forums I came across a good article regarding the Data Collector within SQL Server. In some late-night reading I decided to study up on it, and I found it to be something pretty beneficial in our line of business. This component of SQL Server 2008 collects different sets of data. I like it for many reasons; one perk that stood out to me was being able to configure the different types of monitors on a scheduled basis. You may ask what the difference is between this and the utility databases I always speak of. I am glad you asked: my utility databases are custom fitted to meet my exact needs, while the Data Collector comes out of the box; you just have to configure it.

I’ve started utilizing both. Configuring the Data Collector takes a matter of minutes, and you can set schedules and retention periods for how long you want to keep your data. I basically set up an empty database shell; the configuration then takes care of creating the necessary stored procedures and schema to support it. Another feature I liked is the three reports that come with it out of the box – disk usage, query statistics history, and server activity history. Granted, all of this can be gathered via scripts, but why not take advantage of something already built for you?
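If you want to see what the Data Collector is doing on an instance, the system collection sets are exposed through catalog views in msdb. A quick sketch (the id of the Disk Usage set is typically 1, but confirm it from the first query’s output before starting anything):

```sql
-- List the collection sets on this instance and whether they are running
SELECT collection_set_id, name, collection_mode, is_running
FROM msdb.dbo.syscollector_collection_sets;

-- Start a collection set by id (e.g. the Disk Usage set)
EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 1;
```

Note that the Data Collector must already be enabled and pointed at a management data warehouse before a collection set can be started.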

This link by Brad McGehee provides information on how to set up and configure the Data Collector.

This link on the MSDN site will provide a basic understanding of what the Data Collector does.

After some research I found that this resides best on a CMS server if you have one. If you are unfamiliar with the CMS server, check out John Sterrett’s blog. I sat in on one of his sessions at PASS and really enjoyed it.

If you aren’t collecting data, you’re missing out; if you are collecting data and aren’t using these two methodologies, take some time to review them; one might just meet your needs.

Sniff that Parameter

How many times does it happen: you have this masterful plan of what you are going to do as soon as you get into the office and bam ~ you haven’t even made it out of the car and the emails start exploding in.

I would gather that for most of us this occurrence is not out of the norm. In fact, it probably happens more often than not. Today such an issue came up ~ I was anticipating working with some VLFs when a process decided to time out, a process that, we shall say, is of the utmost importance. I will do my best to lay out the scenario.

The issue was narrowed down to one procedure. That procedure had one parameter being passed in. Here are the initial steps we thought of:

  • Check indexes (found one to be missing, so it was added; I will say that Aaron Bertrand’s blog, where I found the SQL Sentry utility a few months ago, was very helpful in comparing the estimated and actual execution plans).

This did not improve performance. The app’s timeout threshold is 30 seconds, and we were consistently hitting over two minutes.

  • Check the SP – nothing really stood out. It selected from some views, but they were pretty straightforward.
  • Check for any blocking (this was done multiple times throughout the whole assessment process).
  • Check for any other system abnormalities.
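For the blocking step above, a quick check against sys.dm_exec_requests is enough to spot any session currently waiting on another:

```sql
-- Any request currently blocked by another session?
SELECT r.session_id,
       r.blocking_session_id,   -- the session holding the resource
       r.wait_type,
       r.wait_time,             -- ms spent waiting so far
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```

An empty result set here (run several times, as we did) rules blocking out as the cause.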

So, with the procedure still timing out and not many other options remaining, one of the team members recalled a session they had sat in on ~ parameter sniffing. The parameter, after being read in, was assigned to a local variable internal to the stored procedure. That local variable was then passed down through the procedure wherever it was needed.

Upon executing the proc it took a mere second!
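A minimal sketch of that workaround, using hypothetical table and procedure names (not the actual production objects):

```sql
-- Copy the sniffed parameter into a local variable so the optimizer
-- compiles against the variable (an unknown value at compile time)
-- instead of the first caller's specific parameter value.
CREATE PROCEDURE dbo.GetOrdersForCustomer
    @CustomerId INT
AS
BEGIN
    DECLARE @LocalCustomerId INT;
    SET @LocalCustomerId = @CustomerId;   -- the "internal parameter"

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;  -- uses the local copy
END;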

I’ve done a little bit of reading on parameter sniffing and find it very interesting. I would suggest reading Greg Larson’s article ~ he has it laid out very nicely.

This is one aspect I enjoy of being part of a core DBA team. All backgrounds come together and bring different aspects to solutions. I look forward to the next problem to find that unique solution.

SSIS Configurations From a DBA Perspective

The title might be a little misleading. “DBA perspective” is solely my perspective in dealing with SSIS up to this point.

The list I have tallied of ways to proceed with configurations is as follows:

  • XML Configurations – XML file that contains the SSIS configurations (can include multiples)
  • Environment Variable – environment variable that contains configurations
  • Registry Entry
  • Parent Package Variable – This config type is more often used to update child packages
  • SQL Server Table – a table inside SQL Server contains the configurations (can include multiples)

Now, the irony, if you will, is that in my specific case we currently have a number of variations of packages that use many different methods – this to me = monstrosity.

The two styles that seem easiest to me are XML configurations and the SQL Server table configuration. My preferred method is to store them all in a SQL Server table.

I have three tables that I use:

  • SSIS_Configurations
  • SSIS_File_Spec_Configuration
  • SSIS_Log_Details

I’ll try to break down the schema of each table:

SSIS_Configurations

  • ConfigurationFilter – Name of process
  • ConfiguredValue – Server Instance
  • PackagePath – package connections
  • ConfiguredValueType – self-explanatory (string, etc.)
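For reference, here is a sketch of SSIS_Configurations. It mirrors the shape SSIS itself generates (as [SSIS Configurations]) when you add a SQL Server configuration, so the column types below are the standard ones:

```sql
CREATE TABLE dbo.SSIS_Configurations (
    ConfigurationFilter NVARCHAR(255) NOT NULL,  -- name of process
    ConfiguredValue     NVARCHAR(255) NULL,      -- e.g. server instance
    PackagePath         NVARCHAR(255) NOT NULL,  -- package property path
    ConfiguredValueType NVARCHAR(20)  NOT NULL   -- String, Int32, etc.
);
```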

SSIS_File_Spec_Configuration

  • Pkey – identifies multiple entries for one package
  • JobIdentifier – name of package
  • Apptype – what application is it a part of
  • InputFileType – .txt, .csv, etc.
  • InputFilePathType – is it a share, sql server, etc.
  • InputFilePath – location of where the input file path is coming from
  • SProcName – name of procedure utilized
  • OutputFileType – .xlsx, .csv, etc.
  • OutputFilePath – location of where the output file will be sent
  • InputFileArchivePath – Archive location for the input file

SSIS_Log_Details

  • ExecStartTime – start time of the SSIS run
  • ExecEndTime – end time of the execution
  • TaskName – name of Task being executed
  • Status – what is the status of the package
  • ErrorDescription – what was the error the package encountered
  • Duration – duration of the package
  • PkgName – package name
  • SubPkgName – child package
  • BatchId – self-explanatory
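A possible shape for SSIS_Log_Details; the column types and the computed Duration here are my assumptions for illustration, not the exact production schema:

```sql
CREATE TABLE dbo.SSIS_Log_Details (
    BatchId          INT           NOT NULL,
    PkgName          NVARCHAR(255) NOT NULL,                  -- package name
    SubPkgName       NVARCHAR(255) NULL,                      -- child package
    TaskName         NVARCHAR(255) NULL,                      -- task being executed
    ExecStartTime    DATETIME      NOT NULL,
    ExecEndTime      DATETIME      NULL,
    Duration AS DATEDIFF(SECOND, ExecStartTime, ExecEndTime), -- derived, seconds
    Status           NVARCHAR(50)  NULL,
    ErrorDescription NVARCHAR(MAX) NULL
);
```

Each task in the package then inserts or updates its own row, which gives you run history and error detail in one place.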

This process has proved helpful for me; each DBA is different though and I can only say to find what works best for you in your particular situation and environment.

Where’s that Baseline?

Everybody needs one, and each business is different, so what are you going to do? If you’re like me and the first thing you thought of was to call the Ghostbusters, I’d advise against it (although reliving the Bill Murray glory days might not be a bad idea). In reality, as DBAs, we will have a need to trend patterns over time. It is inevitable and should be a way of life for us in most instances. How do we do it; or shall I say, how am I going to do it? I don’t believe I can tell you specifically how to meet your business needs; however, after attending the PASS Summit this past year and getting some great ideas, I think I’ve come up with something that works for me as a DBA.

I definitely want to recommend checking out Chris Shaw’s Utility Databases presentation, which he has made available on his blog. It was in attending his session at PASS that this idea spawned. I am part of a DBA team in the tax/banking industry. We heavily support the tax side of things, so for six months out of the year it is a slamming great time!! Needless to say, that season is small compared to the whole year, so capturing data is important to trend it out and see what is happening in the system.

I have chosen to capture a few things that will help me troubleshoot and look at problem areas in the future. Due to regulations I’m not going to dump my findings into my prod environment; I will instead opt to dump them into my local environment (only because I do not have live production data there; just statistics, etc.).

Needless to say, I have my tables and schema set up to accommodate the following criteria (a date-time stamp is recorded for each entry):

Server Level:

  • Calculates average stalls per read, per write, and per total input/output for each database file
  • CPU utilization by database
  • total buffer usage by database for current instance
  • Isolate top waits for server instance since last restart or statistics clear
  • Signal Waits for instance
  • Get CPU Utilization History for last 256 minutes
  • Basic information about memory amounts and state
  • SQL Server Process Address space info
  • Memory Clerk Usage for instance
  • Single-use, ad-hoc queries that are bloating the plan cache
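As an example of the server-level captures, here is the shape of a top-waits query against sys.dm_os_wait_stats (the list of benign wait types to exclude is abbreviated here for illustration; production versions filter out many more):

```sql
-- Top waits for the instance since the last restart or statistics clear
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0                            AS wait_time_s,
    signal_wait_time_ms / 1000.0                     AS signal_wait_s,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
                        N'BROKER_TASK_STOP', N'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;
```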

Database Level:

  • Individual File Sizes and space available for current database
  • I/O Statistics by file for the current database
  • Top Cached SPs By Execution Count
  • Top Cached SPs By Avg Elapsed Time
  • Top Cached SPs By Total Worker time
  • Top Cached SPs By Total Logical Reads
  • Top Cached SPs By Total Physical Reads
  • Top Cached SPs By Total Logical Writes
  • Lists the top statements by average input/output usage for the current database
  • Possible Bad NC Indexes (writes > reads)
  • Missing Indexes current database by Index Advantage
  • Find missing index warnings for cached plans in the current database
  • When were Statistics last updated on all indexes
  • Get fragmentation info for all indexes above a certain size in the current database (the maintenance job we have will take care of fragmentation but I wanted to see what it was like after our jobs ran and before the next run)
  • Index Read/Write stats
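And as an example of the database-level captures, the "Top Cached SPs By Execution Count" item can come straight from sys.dm_exec_procedure_stats (available in SQL Server 2008 and later):

```sql
-- Top cached stored procedures by execution count, current database
SELECT TOP (10)
    OBJECT_NAME(ps.object_id)                   AS proc_name,
    ps.execution_count,
    ps.total_worker_time  / ps.execution_count  AS avg_worker_time_us,
    ps.total_elapsed_time / ps.execution_count  AS avg_elapsed_time_us,
    ps.total_logical_reads / ps.execution_count AS avg_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;
```

Swap the ORDER BY column to get the elapsed-time, worker-time, and reads/writes variants in the list above.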

In the end, I’ll have a plethora of information to go back to after the “season” ends to review. If you don’t have a “utility” database I suggest you get one.

 

Pro-Activeness vs. Passiveness

I was posed this question the other day by a fellow DBA of mine. We tend to bounce ideas off each other randomly, and the question in simple terms was this: “Do you consider yourself a proactive DBA or a passive DBA?” First I want to explain the term passive so it is not taken out of the context of the conversation I had. Passive is not meant to be degrading in any manner. The question simply asked whether you wait for things to happen and then attack the situation.

In my career I have seen various DBA methods utilized by many different people. I myself tend to fall into the proactive category. I want to head off as many items as I can before they happen rather than react to them. This doesn’t mean some won’t slip through the cracks; of course they will, we are human. It simply means that every day I come into the office I give 110% in every aspect to make my environment as secure and efficient as possible.

So, I pose this question out to my fellow DBAs, and also a challenge: which category do you fall into, and do you think it is the right category to be in?

This simple question turned into an hour-long conversation over lunch on the whys and why-nots. Challenge yourself in 2012; motivate yourself to become a better person and a better DBA.

What Is In Your Tool Belt

Every DBA I have ever known has his or her own tool belt. Usually a tool belt consists of the tools you need for the project you are currently working on. I take this in a literal sense: if I am working on framing, I’ll have my hammer, nails, screwdriver, drill, etc. Taking that concept into the DBA realm, when working on a project I still like to go in with my tool belt. I work in an environment that is generous enough to afford our team the Red Gate tool belt, in which we have such things as:

  • SQL Backup
  • SQL Compare
  • SQL Comparison SDK
  • SQL Data Compare
  • SQL Data Generator
  • SQL Dependency Tracker
  • SQL Doc
  • SQL Monitor Installer
  • SQL Multi Script
  • SQL Object Level Recovery Native
  • SQL Packager
  • SQL Prompt
  • SQL Search
  • SQL Source Control
  • SQL Tab Magic

While all these tools are each effective in their own right, I went one step further and grabbed the freebie from Red Gate called SQL Scripts Manager. I won’t go into much detail here on what SQL Scripts Manager is, but I would highly recommend you check it out; in it you will find a plethora of scripts that you can tailor to meet the needs of your respective business. Authors such as Tim Ford, Gail Shaw, and Phil Factor have contributed many scripts for tasks such as backups, diagnostics, DMVs, and indexes.

Take time to check out this great tool; another instance of how great the SQL Community really is in sharing ideas and thoughts.

Policy Based Management With a CMS Twist

One of the many perks I have of being on a DBA team is the number of ideas that are bounced around to find different solutions. One project we have run across, and are working on implementing, is enforcing policies based on our newly stood-up CMS server. The CMS server is nice simply because we can do rapid deployment across some 90 servers.

Not all of these servers will have all of the policies pushed out to them; some might only have 3, some might have 23 ~ who knows. To alleviate some of that pressure we split our servers into four groups within our CMS server, basically segregating out test boxes, prod boxes, some straggling SQL Server 2000 boxes (due to third-party apps), and boxes the group doesn’t maintain but that are still nice to keep visible.

I’m big on having a plan and road map in place and then executing that plan. Our team is deciding which group we’d like to tackle first, then choosing which standard out-of-the-box policies to push first. We can come back in a separate phase and tackle custom policies.

Another great aspect I like about the CMS server side of things is that we now have a single repository with all the information on the servers we are responsible for. In the repository table we capture the following pieces of information:

  • Server Name
  • Instance Name
  • Server Name \ Instance Name (reporting purposes)
  • Domain
  • IP Address
  • Port
  • Memory Size
  • CPU Count
  • Current Version of Windows
  • Windows Name
  • SQL Server Product Name
  • SQL Server Version
  • SQL Server Edition
  • SQL Server Service Pack
  • SQL Server Language (eh why not throw it in)
  • SQL Server Root Directory
  • SQL Server Collation
  • Methods
  • Business Owner
  • DBA Notes (in case we need to add special info on the servers)
  • Maintained (Do we maintain this server or not; in our industry some servers we do not maintain but we are aware that they are there and in place)
  • Category (prod, test, dev, etc)

Having one central location with all this information has helped tremendously. In the next few posts I’ll go over the policies of what are considered some best practices. Each business is different so tailoring them to meet your needs is the fun part.

Let the games begin!!

It’s a New Year

I haven’t been able to post as much as I would like to, and I am working on changing that this upcoming year by building it more into my schedule. As this year begins I want to take a moment and implore my readers to learn something new this year. As a SQL DBA, I am always trying to evolve my techniques, learn new ways of doing things, enhance productivity, streamline processes, etc. In order to do this I have to stay on top of my game – always reading articles and blogs, doing extra work on my own time, and attending conferences and SQL Saturdays.

So I ask the question: what are you going to do in 2012? Will we stagnate? I hope not…. Take this not as a rallying cry but more as an encouragement; there are so many opportunities out there to get involved in the community, help others, and just learn in general.

As a SQL DBA my door is always open to help, as are so many others’. Good luck this year, and I wish everyone the best in their endeavors.

What’s in Your SQL Server Checklist?

It’s late in the day, 10 minutes before you are to leave to start your weekend, and what happens… a SQL Server failure has occurred. Your phone begins to ring off the hook, your boss and his boss are now at your desk standing over your shoulder inquiring as to what happened, and the business unit is emailing 100 times over saying they need into the system now. Has this situation ever happened to you? Well, I hope nothing like it ever does, but if it has, join the club, and if it hasn’t, get ready because it will.

I am very big on being proactive instead of reactive to issues. I’d rather have in my hand a legitimate, well-thought-through checklist to run down in critical situations so I’m not running around like a basket case, as I’ve seen so many people do. One of the first things I ask when I get to a new place of employment is, “How is the team’s documentation?” Nine times out of ten the response is, “What documentation?”

If you don’t currently have a checklist, I strongly recommend one; if you do have a checklist then look for ways to improve the process.

Every DBA should have a mentor, so to speak, someone who has been through the trenches before. I’ll list several of the ones I look up to in a minute, but you can get so many good ideas from people in the SQL community, and I strongly suggest taking advantage of every tool out there. I’m a huge Brent Ozar fan, so I want to share his idea of a checklist with you. I modeled mine after his but took some stuff out and added some stuff in that I believed fit my business need. If I’m not mistaken, the article is actually presented by Kendra Little. You can access the article here.

Some DBAs that I really enjoy following are: