
DBA Deployments

There are many aspects a DBA must handle on a daily, monthly, and yearly basis. One that has stood out to me in recent weeks is deployment methodology and the various processes other people might have in place.

For example:

My current process utilizes TFS and RedGate tools. Below is a representation of what a deployment of mine might look like.

  • Code is checked in to source control by a developer
  • A release manager creates a release in TFS consisting of bugs and tasks
  • Once the release is in place in TFS, a comparison within TFS is completed on the branch (or by a label) to see what was deployed to our first landing zone (a QA environment) since the last deployment
  • The latest code is retrieved from Red Gate SQL Source Control
  • Using Red Gate SQL Compare, the differences are compared and pushed based on what was scheduled on the release ticket

Now this is just a very high-level view without all the intricate workings, which are not defined in this post. I am always looking for ways to improve processes and be as productive as possible. I am curious what others might be using ~ the different methodologies and processes that others in the community use to deploy to their respective environments.

Feel free to drop me a line or comment.

Job History Is Missing

I ran into an interesting issue this morning after upgrading to SQL 2012 that I did not know existed. I had a job fail for various reasons and was alerted of the failure like normal. Upon going to the SQL agent to view the history of the job I found nothing in the history – which I thought was odd. I decided to hop over to the sysjobhistory table in msdb, and sure enough my history was all there as expected. So what is the correlation? Why was my job history in the table but not being shown in View History on the agent?

I noticed a message on the bottom left, where the status and progress sections are in the properties section of the Log File Viewer for the job – the message read “The given key was not present in the dictionary”. Upon doing some research, it turns out this is a known bug involving special characters in job names. My job, for whatever reason, had a “,” in its name. In digging some more I could only find the term “special characters” being thrown about, but I could not find a specific listing of which characters are affected…was it one, two, three, or all of them?

Once I removed the “,” from the name and re-ran the job, the history was re-populated in the agent upon completion with no issues.
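If you want to see whether any other jobs on an instance might be affected, a quick check against msdb can list jobs whose names contain a comma along with their history rows straight from the table (a simple sketch; adjust the LIKE pattern for other characters):

```sql
-- List agent jobs whose names contain a comma, plus their raw history
-- rows from msdb (the rows that View History fails to display)
SELECT  j.name ,
        h.run_date ,
        h.run_time ,
        h.run_status
FROM    msdb.dbo.sysjobs j
LEFT JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
WHERE   j.name LIKE '%,%'
ORDER BY j.name, h.run_date, h.run_time
```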

To test this out even further, I put the comma back in the job name and then scripted the job out to see if it would error; to my surprise, it did not. I would have expected it to error on script creation because of the comma, so I guess the bug runs a bit deeper than I initially thought.

Microsoft has responded that this will be resolved in the next service pack release for 2012. I figured if anyone else runs across this, the information might help.

Which Publication Are My Tables In

Ever been in a deployment situation where the deployment script breaks due to a table being replicated? I have – many times. Hopefully this tidbit of information will help you in your future deployments.

When I receive a scheduled or even a non-scheduled deployment, I like to prep for it so I don’t have any surprises at the actual time of deployment. I use Red Gate’s SQL Compare tool for my deployments; whatever your mechanism is, the methodology is similar.

Say Table A is being replicated from the publisher down to a reporting environment, and Table A has a modification to some part of its schema. If Table A is being replicated, you will receive an error on deployment stating that the table is being “published” for replication.

What I like to do is review the schema beforehand. I have a pretty good working knowledge of what I am replicating, but I also deal with 17 publications that contain varying numbers of tables, and I might not be able to recall which tables are in which publications.

In order to figure this out quickly, I use the following script. I do not include a USE statement at the beginning; I simply run it in the specific database I am deploying to:

SELECT sps.name
FROM syspublications sps WITH ( NOLOCK )
JOIN sysarticles sas WITH ( NOLOCK ) ON sps.pubid = sas.pubid
WHERE sas.name = 'tablename' -- change to your table name

Running the above script will tell you which publication the table is being replicated in. From there you can script out the drop and create statements for the article and continue on with your deployment.
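Once you know the publication, the drop and re-add can be scripted with the standard replication procedures. A rough sketch (the publication, article, and table names below are placeholders ~ substitute your own, and be aware a snapshot may be needed for the re-added article):

```sql
-- Drop the subscription rows for the article, then the article itself,
-- before deploying the schema change (placeholder names throughout)
EXEC sp_dropsubscription @publication = N'MyPublication',
    @article = N'tablename', @subscriber = N'all'

EXEC sp_droparticle @publication = N'MyPublication',
    @article = N'tablename'

-- ...deploy the schema change here, then add the article back...
EXEC sp_addarticle @publication = N'MyPublication',
    @article = N'tablename', @source_object = N'tablename'
```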

Recompile Option – Yes, No, Maybe?

This seems to be a hot topic right now among my peers, both locally and abroad ~ to recompile or not to recompile, that is the question at hand.

I’ve been associated with companies in the past where utilizing this option was almost a standard; then I’ve been with companies where it was never used at all. What is my personal preference? I don’t use the option all of the time, but I do have some use for it in my daily checks of the systems I monitor.

The two real-world examples I ran into not long ago are simple ones, but I thought I would share.

  • I kept having issues with a stored proc, and what I was seeing in some of the results wasn’t adding up in my head. Diving into the query, of course, I saw the RECOMPILE option ~ which told me that anything in my plan cache I should just throw out the window. I love using my DMVs; however, a proc that recompiles on every execution puts a damper on my assessments at times. There is a trade-off when using the recompile option to improve performance ~ just something to keep in mind.
  • One place I do use the recompile option is in my diagnostic queries of my environments. Prime example: I’m a fan of Glenn Berry’s diagnostic scripts, which he’s put together and which I run alongside some customized ones of my own. When running these, I personally don’t care how they are stored in the plan cache.
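For diagnostic queries like those, tacking the hint onto the statement keeps the query itself out of the plan-cache picture. A minimal example (an ad hoc DMV check of the kind described above):

```sql
-- Ad hoc diagnostic query; OPTION (RECOMPILE) keeps its plan from
-- lingering in the cache and skewing what I'm trying to measure
SELECT TOP ( 10 )
        qs.total_worker_time ,
        qs.execution_count ,
        st.text
FROM    sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC
OPTION  ( RECOMPILE )
```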

I recently had the opportunity to sit in on a good session about this by Kendra Little over at Brent Ozar PLF (a good group of folks ~ if you haven’t ever checked them out, I implore you to do so; they discuss some great items and offer free weekly training). Kendra goes through some real-world examples in her demo, and it would behoove you to give it a look.

I’d love to hear some of your experiences with the option and the pros and cons you see in it. If you have time, drop me a line.

Site Maintenance ~ Excuse The Mess

Over the course of the next week I plan on giving the site a face lift. Hopefully this will help streamline the content a bit and provide a cleaner look and feel.

If at any time anyone experiences a weird look or something not lining up correctly, bear with me. It will return to normal soon!

Static Data

Static Data Scripts


Ever had to wipe out tables only to put the data back in ~ such as lookup tables, or tables that never really change except maybe on a yearly basis?

To me, static data scripts are integral in that they keep you from re-inventing the wheel over and over again. In thinking on this topic I realized I had some gaps myself I could shore up, and got some quick scripts created so that when the time comes I have them at my disposal.

A basic template to go by can be something like the one below:

/***************************************
***  Static data management script  ***
***************************************/

PRINT 'Updating static data table [dbo].[tabl1]'

-- Turn off affected-rows messages
SET NOCOUNT ON

-- Change this to 1 to delete missing records in the target
-- WARNING: Setting this to 1 can cause damage to your database
-- and cause a failed deployment if there are any rows referencing
-- a record which has been deleted.
DECLARE @DeleteMissingRecords BIT
SET @DeleteMissingRecords = 0

-- 1: Define table variable
DECLARE @tblTempTable TABLE (
    [Id] INT,
    [Info1] BIT
)

-- 2: Populate the table variable with data
INSERT INTO @tblTempTable ([Id], [Info1])
VALUES (1, 1)

-- 3: Delete any missing records from the target
IF @DeleteMissingRecords = 1
BEGIN
    DELETE FROM [dbo].[tabl1]
    FROM [dbo].[tabl1] LiveTable
    LEFT JOIN @tblTempTable tmp ON LiveTable.[Id] = tmp.[Id]
    WHERE tmp.[Id] IS NULL
END

-- 4: Update any modified values with the values from the table variable
UPDATE LiveTable SET
    LiveTable.[Info1] = tmp.[Info1]
FROM [dbo].[tabl1] LiveTable
INNER JOIN @tblTempTable tmp ON LiveTable.[Id] = tmp.[Id]

-- 5: Insert any new items into the table from the table variable
INSERT INTO [dbo].[tabl1] ([Id], [Info1])
SELECT tmp.[Id], tmp.[Info1]
FROM @tblTempTable tmp
LEFT JOIN [dbo].[tabl1] tbl ON tbl.[Id] = tmp.[Id]
WHERE tbl.[Id] IS NULL

PRINT 'Finished updating static data table [dbo].[tabl1]'

GO

This type of methodology has saved me some hours here and there in times past, and if you have data that you know will not change ~ especially when pushing from one environment to another ~ it makes life a little easier.
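On SQL Server 2008 and later, steps 3 through 5 of the template can also be collapsed into a single MERGE statement. A sketch against the same sample table (same caveat about deleting missing rows applies):

```sql
-- Same static data sync as the template, expressed as one MERGE
DECLARE @tblTempTable TABLE ( [Id] INT, [Info1] BIT )

INSERT INTO @tblTempTable ([Id], [Info1])
VALUES (1, 1)

MERGE [dbo].[tabl1] AS LiveTable
USING @tblTempTable AS tmp ON LiveTable.[Id] = tmp.[Id]
WHEN MATCHED THEN
    UPDATE SET LiveTable.[Info1] = tmp.[Info1]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([Id], [Info1]) VALUES (tmp.[Id], tmp.[Info1])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE; -- equivalent of @DeleteMissingRecords = 1; use with care
```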


Sessions For PASS 2012 Released

I was excited to see that the sessions have been released for this year’s PASS 2012 Summit.

If you’ve been involved in the SQL community for any length of time, then hopefully PASS has come up here or there. I can say from personally attending in the past that the knowledge gained and the networking you will do are invaluable.

Some of my peers have inquired what the pros are and why they should attend such a Summit. Again, I can only say that you get out of it what you put into it. Go into the Summit with the mindset of learning and gaining knowledge from the best speakers the SQL community offers ~ and you’ll walk away amazed.

I think PASS does a great job of detailing what they are about, the whys, and who should attend. You can check all that out here.

Everything you will need to know about the Summit, from lodging to pricing, is available on their site. If you are attending ~ GREAT; if you are on the borderline, then I implore you to go; and if you don’t think you are going, I ask you to take a hard look at it in hopes of changing your mind.

Let’s make this PASS Summit the best yet. Hope to see you there!

Granting View Definition to All Procedures

The other day I came across a situation where I needed to grant view definition rights to a specific user for one single DB. There are several different ways to achieve this, but I decided to dabble in writing some code to take care of it for me. Others might have something similar or different mechanisms, but here is what I did.

The basic gist is as follows:

  1. Create a holding table – the script uses a table variable, but you can use a # table instead; just drop it when done
  2. Declaration of Variables
  3. Insert data into the holding table – only grab the procedures, and nothing with a dt_ prefix
  4. Grab all of the procedures found in the DB and dynamically set the grant permissions to a SQL string
  5. Execute the SQL string
  6. Restart the counter and reset the name variable just for good measure
  7. Select the temp table to view what was just granted view definition to

Code in doing this:

DECLARE @PermTable TABLE
    (
      PK_ID INT IDENTITY(1, 1)
             NOT NULL
             PRIMARY KEY CLUSTERED ,
      [name] SYSNAME
    )

-- declare variables
DECLARE @name SYSNAME ,
    @RowCount INT ,
    @RecordCount INT ,
    @strSQL VARCHAR(2000)

INSERT INTO @PermTable
    (
      [name]
    )
SELECT ROUTINE_SCHEMA + '.' + ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'
AND ROUTINE_NAME NOT LIKE 'dt_%'

-- counters for while
SET @RecordCount = ( SELECT COUNT(*)
                     FROM @PermTable
                   )

SET @RowCount = 1
WHILE ( @RowCount < @RecordCount + 1 )
BEGIN
    SELECT @name = [name]
    FROM @PermTable
    WHERE PK_ID = @RowCount

    SET @strSQL = 'GRANT VIEW DEFINITION ON '
        + RTRIM(CAST(@name AS VARCHAR(257))) + ' TO [User]'

    -- Execute the SQL
    EXEC(@strSQL)

    -- Increment the counter
    SET @RowCount = @RowCount + 1

    -- Never know, so why not reset
    SET @name = NULL
END

SELECT *
FROM @PermTable
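For completeness, one of the “different ways” mentioned above is a single database-scoped grant. The trade-off is that it covers every object in the database, not just the stored procedures:

```sql
-- Database-scoped alternative: one statement, but applies to
-- ALL objects in the current database, not just procedures
GRANT VIEW DEFINITION TO [User]
```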

Hope this helps; cheers.


“Make It Right”

Over the years I have grown accustomed to watching a show called Holmes Inspection. I enjoy working on projects at the house ~ am I good at it? Eh, some things yes, some things no, but nevertheless I do enjoy it.

The premise of the show is that Mike Holmes goes into homes in Canada where contractors have come in and basically not done the job they should have done, or where families have bought homes and the inspector hadn’t caught obvious issues with the home to report back to the family. The quote he is known for is “Make It Right”, and he is also a big believer in “Do it right the first time and you won’t have to come back to it”. He has built his show and business around three things:

  1. Quality
  2. Integrity
  3. Trust

I’ve started to sit and dwell on these things and how they apply to the DBA work that I enjoy doing.

Quality – without a doubt, the quality of a DBA’s work is very important. In some instances it can make or break a company, from security down to the backup of the tiniest of databases. If a DBA is not on top of his/her game daily, potentially catastrophic events can occur. This in no way means that the DBA will be perfect; we hold ourselves to very high standards, but at the end of the day we are human and mistakes will be made. When those mistakes do occur, how do we handle them?

Integrity – one definition found in the dictionary says this about integrity: “possession of firm principles: the quality of possessing and steadfastly adhering to high moral principles or professional standards”. I can’t think of a better synopsis of a DBA.

Trust – as a DBA, do people trust you? To me this is an obvious question; if you are a DBA, you have all the ins and outs of the database where all the data is stored. A whole topic could be delved into on trust alone.

I’ve done things in the past, and have viewed other DBAs’ work that was done before me at jobs. At some things I sit back in amazement and think, “Why in the world is it done this way?” and “What was I thinking?”

I find it very prudent to “Do It Right” the first time, even if it means taking a bit longer to accomplish the task and ensuring all aspects (if applicable) have been thought out. Along those same lines, if I come upon something that I know isn’t right, then “Make It Right” ~ quality, integrity, and trust.


T-SQL Tuesday #31 – Logging

Looking at my schedule, I realized after discussing with fellow DBAs that I was missing out on something ~ T-SQL Tuesday. I think this is a fantastic idea, and to host a T-SQL Tuesday you can contact Adam Machanic.

Today is T-SQL Tuesday #31, hosted by Aaron Nelson on the topic of Logging.

Logging

In general, logging is an essential part of my everyday activities as a DBA. It helps me in many ways, from tracking changes from deployments by fellow DBAs and myself, to bringing back statistical information for us to determine growth.

I recently came across two real-world examples where logging has helped me.

  1. A DBA friend was in the middle of a deployment; after several minutes the deployment was labeled complete and everyone was on their merry way. A few days later some catastrophic events unfolded, and everyone wanted to know why. Because we were logging the who, what, and when, along with the commands given, we quickly traced back the change and what was changed by which user.
  2. I’m a huge fan of the Utility Database idea; I have been ever since I sat in on it at PASS (Chris Shaw presented it). After getting back to the office, it was my mission to get the idea implemented. Currently we are building out an enterprise version that encapsulates a plethora of information: statistics, heartbeats, file sizes, space on drives, and the list goes on. Logging this information will help guide and plan for sizing and space.
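The “who, what, when” capture in the first example can be handled with a simple database-level DDL trigger. A sketch, assuming a dbo.DDLChangeLog table you have created for the purpose (the table and trigger names here are placeholders):

```sql
-- Assumes a logging table along these lines already exists:
-- CREATE TABLE dbo.DDLChangeLog (EventTime DATETIME, LoginName SYSNAME,
--     EventType NVARCHAR(100), TSQLCommand NVARCHAR(MAX));
CREATE TRIGGER trgLogDDL ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @ed XML = EVENTDATA()

    -- Log who made the change, what kind of change, and the command run
    INSERT INTO dbo.DDLChangeLog (EventTime, LoginName, EventType, TSQLCommand)
    VALUES (GETDATE(),
            ORIGINAL_LOGIN(),
            @ed.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
            @ed.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'))
END
```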

I’ve only touched on a couple of topics regarding logging; there is so much more out there.