Tag Archives: SQL Server

Giving Credit Where Credit Is Due


Have you ever stopped and looked at the SQL Community as a whole and all it has accomplished? Better yet, have you ever researched a problem and found that someone else has already experienced it and provided a solution? If you do any blogging or social media and present that solution as if it were your own, is that right? The answer to the last question is no.

The Community

I have never been associated with a community like the SQL Community, where everyone is eager to share knowledge or advice in pursuit of an answer or solution to an issue. More often than not, solutions are provided on a blog similar to this one, in a newsletter, or on Twitter; it is very easy to take the research that was found, use it, and pass it off as one's own work. I've seen some do it and not think twice about it.

The Cost

The SQL Community has a plethora of great minds, some of which you will find over to the right in the DBA Blog section. Think about the countless hours everyone puts into figuring out solutions to issues and then sharing them. If someone's work is taken and used for another's gain time and time again, eventually the well might dry up.

The Call to Give Credit Where Credit Is Due

The solution is basic and simple: if you use something that someone else has written, give credit by referencing it. Below are some ways to do this:

  • Script – think of the countless scripts that have been provided over the years; some that come to mind offhand are Brent Ozar's sp_Blitz, Adam Machanic's sp_whoisactive, Kendra Little's sp_BlitzIndex, and Glenn Berry's awesome diagnostic queries. It doesn't have to be the ones I've mentioned here; it can be any script someone has provided, however big or small it may be. The point is to reference their work, because they are the ones who provided it.
  • Blog Information – a vast majority of my DBA colleagues have blogs they update daily, weekly, or monthly with a ton of information on them. If you are passing this information along or using it, just note where you got it from.
  • Email the Author – email the author and ask whether it is okay to use their work, for example on a site that references their name. A couple of reasons I mention this: 1) it is a display of respect, and 2) it shows the author that there is appreciation for their efforts in sharing their knowledge.

The Awareness

Not everyone is perfect, I understand that, but over the course of the last few months I have seen many occurrences where situations could have been avoided, and hard-working data professionals have been bitten by their work being taken and used for someone else's personal gain. Think about this question: we are data professionals in some form or fashion; how is using someone else's work without referencing it professional? Let's keep our community strong and thriving.

Closing with the Thanks

A big thanks to the whole community for the relentless time and effort, along with the countless hours, put into making solutions for those of us who seek them for everyday issues. If we make a mistake along the way, because we are human, may we own up to it, learn from it, and move on with integrity and character.

How’s Your Database Mail?

From time to time I field questions regarding Database Mail usage within SQL Server. Questions come from all over, discussing how to identify what the mail system is doing or what it has done. Sure, you can send some test mails through the nice GUI, but that is not what this post is about. I enjoy T-SQL and looking inside SQL Server the old-fashioned way, so to speak, so I use some simple queries that a colleague of mine recommended.

Database Mail in and of itself is a useful tool; it allows for notifications of failed SQL Agent jobs, for instance. The messages themselves can contain a plethora of information that can assist in troubleshooting a variety of issues. Microsoft describes it this way: "Database Mail is designed for reliability, scalability, security, and supportability."

**NOTE** Database Mail is not active by default; it has to be configured and turned on. The information below assumes that Database Mail is already set up. For information on how to set up Database Mail you can go here.
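Once it is configured, a quick way to verify the plumbing from T-SQL is a test send with sp_send_dbmail. This is only a sketch; the profile name and recipient below are placeholders you would swap for your own.

USE msdb;
GO

-- Send a test message through an existing Database Mail profile.
-- 'DBMailProfile' and the recipient address are assumptions; substitute your own.
EXECUTE sp_send_dbmail
    @profile_name = N'DBMailProfile',
    @recipients   = N'dba.team@yourcompany.com',
    @subject      = N'Database Mail test',
    @body         = N'If you can read this, Database Mail is sending.';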

To give a brief overview, the script below is broken out into seven mini scripts; these consist of checks against Database Mail along with the process of stopping and restarting it. Please note the disclaimer, and I hope this helps with some of the questions I've received thus far regarding Database Mail.

/**************************************************************************************************************

Disclaimer: Do not execute code found on the internet without testing on your local or testing environment. Running any code in a production environment that you find on the internet is not an acceptable practice and this site is not responsible for any repercussions that may follow if you choose to do so.

Scripts below are numbered; the corresponding numbers will give you a description of what they are utilized for.

1. Shows the status of Database Mail. Possible values are Started and Stopped (MSDN article on sysmail_help_status_sp)

2. Stops the Database Mail queue that holds outgoing message requests (MSDN article on sysmail_stop_sp)

3. Starts the Database Mail queue that holds outgoing message requests (MSDN article on sysmail_start_sp)

4. Shows all the mail items

5. Shows all the unsent mail items

6. Shows all the sent mail items

7. Shows all the failed mail items

**************************************************************************************************************/

USE msdb;

GO

/*1.*/ EXECUTE sysmail_help_status_sp;

/*2.*/ EXECUTE sysmail_stop_sp;

/*3.*/ EXECUTE sysmail_start_sp;

/*4.*/ SELECT * FROM dbo.sysmail_mailitems WITH (NOLOCK);

/*5.*/ SELECT * FROM dbo.sysmail_unsentitems WITH (NOLOCK);

/*6.*/ SELECT * FROM dbo.sysmail_sentitems WITH (NOLOCK);

/*7.*/ SELECT * FROM dbo.sysmail_faileditems WITH (NOLOCK);
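When script 7 turns up failures, the error text usually lives in sysmail_event_log rather than in the failed item itself. Here is a small follow-up query I find handy; it is only a sketch, and the 24-hour window is an arbitrary choice:

USE msdb;
GO

-- Pair each failed mail item from the last day with its logged error description.
SELECT  fi.mailitem_id,
        fi.recipients,
        fi.subject,
        fi.send_request_date,
        el.description        -- the actual error text for the failure
FROM dbo.sysmail_faileditems AS fi
LEFT JOIN dbo.sysmail_event_log AS el
    ON el.mailitem_id = fi.mailitem_id
WHERE fi.send_request_date > DATEADD(DAY, -1, GETDATE())
ORDER BY fi.send_request_date DESC;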

Why Is COUNT(*) Taking So Long?

Phone rings……..I answer…….a DBA from a third-party vendor has supplied someone with two scripts. The first script is a simple insert with a where clause:

INSERT INTO [databasename].[dbo].[tablename]
( column1, column2, column3, column4 )

SELECT column1, column2, column3, column4
FROM [databasename].[dbo].[tablename]

WHERE column1 < [integeramount]
AND column1 > [integeramount]

First question I ask: how much data is loading into the table? Answer: millions of records, and the vendor gave us a script to see if the number is increasing. What is the script; do you have it? Sure…..script below:

SELECT COUNT(*) FROM tablename

I was reminded of something I came across several years ago about this very scenario, so I figured why not put it to the test. I will try to explain to the best of my ability why the second query was taking an hour to run.

The first problem I see right offhand is that the COUNT(*) statement has to do a full scan; it must touch every row to calculate the result. Run that against a table with several million rows and a ton of reads on it, and you have a recipe for sitting, watching, and waiting for the result set to return.

How Do I Get Around This?

It's not that difficult, and here is a nice trick that provides a quick solution. I'm a huge fan of DMVs, and it just so happens you can use one to return row counts for all tables in a database, or for a specific table:

SELECT o.name,
       ddps.row_count
FROM sys.indexes AS i
INNER JOIN sys.objects AS o ON i.object_id = o.object_id
INNER JOIN sys.dm_db_partition_stats AS ddps ON i.object_id = ddps.object_id
    AND i.index_id = ddps.index_id
WHERE i.index_id < 2
AND o.is_ms_shipped = 0
--AND o.name = [table name] /*UNCOMMENT AND PLUG IN TABLE NAME FOR SPECIFIC TABLE INFO*/
ORDER BY o.name

The result will give you the specific table name with row count


Don't be alarmed by using the system objects. Unlike the old sysindexes row counts, the row count in sys.dm_db_partition_stats does not depend on statistics being up to date, so the number is reliable. On this 132-million-row table I get the result set back immediately.

Next time you get stuck waiting on a COUNT(*) statement to run, think about using a DMV; for a full listing, check out the categories Microsoft has grouped them into.
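Another quick option, if you just need one table's count and don't want to remember the join above, is the old standby sp_spaceused; a sketch below, with dbo.BigTable standing in for whatever table you are checking:

-- Returns the row count (plus space usage) for a single table.
-- dbo.BigTable is a placeholder name.
EXECUTE sp_spaceused N'dbo.BigTable';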

It is always nice to have some tricks up your sleeve; especially when dealing with outside vendors.

T-SQL Tuesday #040: File and Filegroup Wisdom


It's that time again for the T-SQL Tuesday party! This party was created by none other than Adam Machanic (Twitter). If you are interested in hosting a party at some point this year, give him a shout; you need to have participated in two T-SQL Tuesdays along the way and to have maintained your own blog for at least six months.

Now that we have covered what the party is all about, let's get into what this month's party is centered around: filegroups. It is hosted by Jen McCown / MidnightDBA.

My focus today is geared toward indexes on filegroups and what they can do for your index strategy. I'm a big fan of having strategies when tackling issues, problems, or, believe it or not, even from the beginning of a project. Placing indexes on filegroups carefully can improve query performance (at the same time, I want to note that indexes can also hurt performance in some situations, so thorough testing needs to be taken into consideration).

Back from my 2008 R2 studies, if memory serves me correctly, indexes are stored in the same filegroup as their underlying table by default; a non-partitioned clustered index and the associated table always reside in the same filegroup. However, you can do one of three things (see the sketch after this list):

  1. Partition both clustered and nonclustered indexes to span multiple filegroups
  2. Create nonclustered indexes on a separate filegroup
  3. Move a table from one filegroup to another
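A minimal sketch of option 2, assuming a database named SalesDB, a table dbo.Orders, and a second physical drive at E:\ (all hypothetical names and paths):

-- Add a filegroup and give it a data file on a separate physical drive.
ALTER DATABASE SalesDB ADD FILEGROUP FG_Indexes;

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = N'SalesDB_Indexes1',
    FILENAME = N'E:\SQLData\SalesDB_Indexes1.ndf',
    SIZE = 512MB,
    FILEGROWTH = 256MB
)
TO FILEGROUP FG_Indexes;

-- Place the nonclustered index on the new filegroup while the
-- table (and its clustered index) stay on PRIMARY.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
ON FG_Indexes;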

You can achieve performance gains by creating nonclustered indexes on a different filegroup if the filegroups are on different physical drives. The data and index information can then be read in parallel by multiple disk heads when the physical drives are on their own controllers.

If you cannot foresee what access will transpire or when it will happen, the better decision might be to spread your tables and indexes across all filegroups. This would guarantee all disks are used and accessed, because all data and indexes would be spread evenly across them.

To bring this all back together, you can think of a filegroup in its simplest form. Every database you create has at least a data file and a log file, and every database has a primary filegroup. The primary filegroup contains the primary data file (.mdf) and any secondary files (.ndf) associated with it, and a single filegroup can contain multiple data files.

In the end, I have seen significant gains from indexes being placed on specific filegroups, but as I stated before, it is good to test all of this out. Set up some scenarios on your test server and run some test cases to prove different theories and ideologies. One thing to remember as well: not every case is the same. Ensure the decisions you are making are good for what you are working on; never take a suggestion and drop it into a production environment. Prove the statement true or false, no matter who it comes from.

Well, that’s a wrap for today’s party. Until next month…….

Select * Syndrome

Something that I have seen lately over and over again, and even ran into this morning, is a practice that I would call a pretty bad habit in SQL….the dreaded SELECT * syndrome.

This method is heavily used for ad hoc querying, and I've seen it used in some troubleshooting scenarios, but in my case I don't have room for it in a production environment embedded in functions, procedures, or views.

To me it is a wasteful tactic for bringing back what is needed; it can produce unwanted scans or lookups when in some cases all that is needed is index tuning. I'm a big fan of bringing back only what you need instead of a tremendous amount of data. One can also make an argument about all the unused overhead it can produce.

I cannot begin to tell you how many times I have deployed something only to find that the schema has changed and a SELECT * in a view, left in place years ago by past coding, is my culprit.

For example, here is one that I have seen within the past couple of months, a view:

SELECT *
FROM table1

UNION ALL

SELECT *
FROM table2

This was being used quite frequently and is just asking for trouble and poor performance. There will always be excuses as to why it wasn't done differently, but in the end you will go back and clean it up, so it is best to think the process through at the beginning instead of the end. A safer shape is sketched below.
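For illustration, here is that view with an explicit column list; the view name and the OrderID/OrderDate columns are hypothetical stand-ins. Columns added to either table later won't silently change what the view returns:

CREATE VIEW dbo.vw_AllRows
AS
    SELECT OrderID, OrderDate
    FROM dbo.table1

    UNION ALL

    SELECT OrderID, OrderDate
    FROM dbo.table2;

If you inherit a SELECT * view after a schema change, EXECUTE sp_refreshview N'dbo.vw_AllRows' will rebind it, but naming the columns is the lasting fix.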

Can I ever use it?

Sure….I've seen it used in IF EXISTS checks many times over, and from my research and what I know, SQL Server doesn't penalize this in the execution plan, since EXISTS only tests for the presence of rows; if you leverage SQL Server the correct way, it is more than powerful enough to handle what you need. A quick example follows.
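A minimal example of the acceptable case (dbo.Orders and the predicate are hypothetical); EXISTS stops at the first matching row and never materializes the columns, so the * is harmless here:

-- The column list inside EXISTS is ignored; only row existence matters.
IF EXISTS (SELECT * FROM dbo.Orders WHERE CustomerID = 42)
    PRINT 'Customer 42 has orders';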

Tools to fight the good fight………

My source control is TFS, and I like the newest version: you can set up controls so that if a SELECT * is found it will break the build in dev, forcing it to be resolved.

If you haven't already downloaded the free version, check out SQL Plan Explorer provided by SQL Sentry. Execute the query both ways, with SELECT * and with designated columns, and review the execution plans; you will be surprised at the outcome. And if you are old school, that is fine too: analyze it in SSMS and see what you find.

Dashboard Time

I was fortunate enough to attend the PASS 2011 Summit in Seattle. If you do not know what I am speaking of when I say PASS, I encourage you to check it out. PASS stands for the Professional Association for SQL Server. The event that is put on yearly speaks for itself, and I could dedicate a whole blog to just that, but no; I'm going to speak of something I picked up while at the conference.

SQL Server MVP – Deep Dives Vol 2

This book has a plethora of valuable information and golden nuggets, so much so that I figured I'd implement something from it that I can use every day. There are countless good authors in this book.

The Dashboard

I'm on a team that runs a full range of SQL Servers, from 2000 to 2012, on physical machines and VMs, and chapter 12 stood out to me the other day, so I decided to try it out. I've built reports and metrics in the Utility Database (an idea spawned in my head after attending a session by Chris Shaw (B|T)), and I started thinking of building a dashboard off that information.

Pawel Potasinski (B|T) wrote a chapter in this book called “Build your own SQL Server 2008 performance dashboard” – as I read through the chapter ideas started to spin in my head and before I knew it I was giving it a try.

I combined some of his ideas with the metrics I pull back using Glenn Berry's (B|T) Diagnostic Queries and built a standard dashboard for myself that gets generated every morning when I walk in the door. In it I include some of the basics such as CPU, PLE, and % Log Used. Pawel uses DMVs and SQLCLR to get the performance counters; I've started to incorporate some Extended Events results in there as well. One of the counter queries is sketched below.
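As an example of the DMV route, here is a sketch of the kind of query that can feed the PLE number on a dashboard (the LIKE filter on the Buffer Manager object is there because the object name carries an instance-specific prefix):

-- Page Life Expectancy straight from the performance counter DMV.
SELECT [object_name],
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
AND [object_name] LIKE N'%Buffer Manager%';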

Some additional items I'll be incorporating in the near future are further drill-downs into the details of the counters themselves and sharing the report with my team as a custom report. Once I have everything completed, my plan is to make another post with the screenshots, code, etc.

In the end, I would say I was not fully taking advantage of what SQL Server has to offer me….are you? I've enjoyed digging further into Reporting Services and what I can leverage from it in administering the databases I'm responsible for. Take a look at your processes: if something isn't automated, can it be, and how could that better leverage your time?

Who Do I Follow? Where Do I Go?

There are many helpful sites within the SQL Community, and several more blog sites that I follow. My favorites are noted on this site; however, the one that keeps drawing me back time and time again is from Brent Ozar's group. For those of you who have not had the opportunity to check the site out, I'll lay out some real-world specifics on what has helped me and how I have benefited from sites like this one.

The Webcasts

Every Tuesday I usually find my way to their 30-minute webcast on treating pain points within SQL Server (among other topics). At the end of each webcast, if time permits, they host a quick question-and-answer session on the topic. Check out future webcasts here.

Two Important Free Tools

There are two scripts that have helped me tremendously over the course of the year. One is sp_Blitz (which comes with an SSMS custom report), and a new version just came out; the other is sp_BlitzIndex. I only recently started using sp_BlitzIndex, but I am liking this little utility, while sp_Blitz I use when hitting new or old servers, you know, the ones you stumble upon that no one knows about and no one has a clue what they are doing. Two great free utilities that may just save your hide one day.

Popular Topics

I like the fact that the site keeps a section for Popular Topics happening within the industry; it keeps me up to date and provides insight on some of the issues I experience on a daily basis.

The Team

The team makeup of Brent, Kendra, Jeremiah, and Jes makes it an easy choice for my arsenal of people to follow. I try to find people whom I consider tops in the industry and learn from them and their techniques, to better myself and gain more knowledge.

Check it out

If you haven't already done so, go check out their site and what they're about. Real people providing real solutions, with some fun along the way.

The Microsoft SQL Server MVP

What is an MVP?

For me, growing up in the realm of sports through high school and college, an MVP is the most valuable player. In general, an MVP is recognized in his or her area or field, an honor bestowed that distinguishes the recipient as being recognized by their peers.

What is a SQL MVP?

This carries over from my statement above on what an MVP is. I have friends who are SQL MVPs and some friends who aren't. Microsoft's SQL MVP program recognizes individuals who make exceptional contributions to technical communities, sharing their passion, knowledge, etc.

Am I a SQL MVP?

No, I am not currently a SQL MVP, and this is where my thought and blog really come to life, and the purpose of this post. As I stated before, I have several friends who are SQL MVPs and a lot who aren't. One who is not approached me the other day by phone, and I could tell something was bothering them. After some inquiring, I discovered that the person was clearly upset that they did not have the MVP title next to their name, so much so that they disclosed they were going to stop writing, stop being involved in the SQL Community, etc.

The Outlook

I have mad respect for all of the current SQL MVPs who make themselves available to the community and the efforts they put forth day in and day out; they are examples to me of what hard work and diligence can achieve in this profession, and I hope one day I can become one. But I also want to share a different point of view with fellow SQL Server professionals. The SQL Community is just that: a community of individual professionals that provides a knowledge base like no other. I implore the individuals who, like my friend, are about to throw in the towel, to keep working hard.

I was once told by my coach, "Attitude: what you or I feel or think about something or somebody." What's your attitude today? Are you making a difference? Are you helping your co-workers? Are you continually learning to make yourself better? Do you want to be a game changer?

Somewhere, somebody will always be practicing, learning, and fine-tuning their skills. What will you be doing? Let's get in the game, stay in the game, and while we are at it, we might as well have some fun with it. All the other stuff will fall into place in due time; give 110% every time out.