Utilizing SQL CMD Variables in Transactional Replication

In some situations it may be necessary to use SQLCMD variables while executing scripts; that is the case in a replication process I maintain. To start off, I must note that to use SQLCMD variables you have to have SQLCMD mode actually turned on in SSMS. Enabling it is pretty simple, and the steps can be viewed on MSDN's site at the following location.


If you start to use SQLCMD variables while this setting is turned off, then a syntax error will occur when attempting to check the syntax or run the script.

In my situation I have several environments that I can deploy to; each environment has its own set of SQLCMD variables, such as:

:setvar Pub_Server "[(local)]"
:setvar Sub_Server "[(local)\Reporting]"
:setvar PostSnapshotScript "index file location"

I have four more blocks like this one for my other environments, commented out; I can easily flip the switch at any time.

With the statements above I am setting my SQLCMD variables to point from my publisher (the local server) to my subscriber (the local\Reporting server), utilizing some index files I keep at a location on my C:\ drive. As I move through my environments these settings will, of course, change.

Once I have my environment variables in place, I use three more variables to set the publication name, the database the replication is occurring in, and the subscriber's database:

:setvar Pub_Name "Name of Publication"
:setvar Repl_TRS "Database Name"
:setvar SubscriberDB "Subscribers Database (reporting side)"

I will not go into the specifics of the internals of the script that deploys out the replication; that in and of itself will need to be a different post, and I plan on dissecting the internals in the near future. However, to utilize these variables throughout the script, you reference them using the following syntax:

EXEC sp_addarticle @publication = N'$(Pub_Name)', and so on…
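Put together, a minimal sketch of how the pieces fit might look like the following. The publication, database, and article names here are hypothetical placeholders, and I have trimmed sp_addarticle down to a few common parameters:

```sql
-- SQLCMD mode must be enabled (Query > SQLCMD Mode in SSMS)
:setvar Pub_Name      "MyPublication"
:setvar Repl_TRS      "MyPublishedDB"

USE [$(Repl_TRS)]
GO

-- The $(...) tokens are substituted as text before the batch is parsed,
-- so they can appear anywhere a literal or identifier could.
EXEC sp_addarticle
    @publication   = N'$(Pub_Name)',
    @article       = N'MyTable',   -- hypothetical article name
    @source_object = N'MyTable',
    @source_owner  = N'dbo';
GO
```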

I found this approach very fast and simple.

SQL Saturday #122

SQL Saturday #122 is fast approaching! Don’t forget it is set for July 21, 2012 in Louisville, KY. The event is being held at the University of Louisville School of Business. The formal address is:

University of Louisville School of Business

2301 3rd Street

Louisville, KY 40202

As of the 24th they still had seating available. As the event approaches, more information will be available regarding the speakers, tracks, and session titles. For more information you can check out SQLSaturday.com, or for specific inquiries email the SQL Saturday team at sqlsaturday122@sqlsaturday.com.

To take a quote directly from the SQLSaturday site:

“PASS SQLSaturday’s are free 1-day training events for SQL Server professionals that focus on local speakers, providing a variety of high-quality technical sessions, and making it all happen through the efforts of volunteers. Whether you’re attending a SQLSaturday or thinking about hosting your own, we think you’ll find it’s a great way to spend a Saturday – or any day.”

Take advantage of these free training sessions; come learn from some of the top-notch speakers in the area. I hope to see you there.

For MapQuest directions, you may go here.

Automating Startup of Replication Agents

In my business world we utilize transactional replication to copy data from our OLTP system to another box for reporting purposes. For a little background, I will quote directly from MSDN on what transactional replication is (http://msdn.microsoft.com/en-us/library/ms151198.aspx):

Replication At a Glance

“Replication is a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency. Using replication, you can distribute data to different locations and to remote or mobile users over local and wide area networks, dial-up connections, wireless connections, and the Internet.

Transactional replication is typically used in server-to-server scenarios that require high throughput, including: improving scalability and availability; data warehousing and reporting; integrating data from multiple sites; integrating heterogeneous data; and offloading batch processing. Merge replication is primarily designed for mobile applications or distributed server applications that have possible data conflicts. Common scenarios include: exchanging data with mobile users; consumer point of sale (POS) applications; and integration of data from multiple sites. Snapshot replication is used to provide the initial data set for transactional and merge replication; it can also be used when complete refreshes of data are appropriate. With these three types of replication, SQL Server provides a powerful and flexible system for synchronizing data across your enterprise.”

Now that replication has been explained, I can dive further into the steps I've taken to automate my own. Within replication there are publications, which reside on the publisher's side (where the data is being copied from), and subscriptions on the receiving side (where the data is being copied to). In my current setup I have 18 publications. Since this post is regarding automation, I will not go into how to manually start the agent via the Replication Monitor.

I created 18 SQL files that correspond to my 18 publications. Within my SQL files I have the following:

USE [DBName]
GO

DECLARE @publication VARCHAR(50)
SET @publication = 'PublicationName'

EXEC sp_startpublication_snapshot @publication = @publication
This allows me to have 18 separate startup files. Now, could I have one file instead of 18? Sure I could; however, in some instances I only want to run maybe one or two, so to keep this automated I have separated mine out. To execute these I have multiple options. I can use the Red Gate SQL Multi Script utility that I have at my disposal (which I have done in the past); however, I have also created an SSIS package. The SSIS package calls a .BAT file with the following contents:

for %%G in (*.sql) do sqlcmd /S (local) -E -i "%%G"

Within my SSIS package I point the running of this command at the directory where I store all my publication scripts; the command then executes all the SQL files (*.sql) against, in this case, my local environment.

Once the process completes all my agents are started and the snapshot generation is underway.
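If you ever did want the single consolidated script instead of 18 separate files, one possible sketch would be to loop over the publication names and start each snapshot agent in turn. The table variable and publication names below are hypothetical; adjust them for your own publications:

```sql
USE [DBName]
GO

-- Hypothetical list of publications to start; add or remove rows as needed
DECLARE @pubs TABLE (name SYSNAME);
INSERT INTO @pubs (name)
VALUES (N'Publication01'), (N'Publication02'); -- ...and so on

DECLARE @publication SYSNAME;

DECLARE pub_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM @pubs;

OPEN pub_cursor;
FETCH NEXT FROM pub_cursor INTO @publication;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Start the snapshot agent for this publication
    EXEC sp_startpublication_snapshot @publication = @publication;
    FETCH NEXT FROM pub_cursor INTO @publication;
END

CLOSE pub_cursor;
DEALLOCATE pub_cursor;
```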

Database Backups ~ Good or Bad?

I recently responded to a post on SQL Server Central regarding upgrading SQL Server 2005 to SQL Server 2008 R2. The suggestions laid out before my post were spot on; it was the one that came after my response of "agreeing" that backups were important… There is an age-old saying I've always heard: "You're only as good as your last backup." The response I pondered this weekend was from a user who said, "Backups are worthless; restores are priceless." Being the analytical person that I am, I decided to dive into this statement for myself. I can only conclude that in order to do the priceless restore, you first must have the worthless backup to restore. So, for me personally, I cannot justify backups as being worthless.

I hope DR plans never have to be used, but we live in a world that has errors, so using one may be inevitable. No one knows when disaster will strike; heck, we just experienced one of the worst tornadoes around my area since 1974. Stuff happens and the world doesn't stop turning, so as a DBA I had better be making sure that our systems have solid backups.

Backups should be pretty obvious: having a backup is essential in order to recover (to even be able to do the restore) from most disasters. Backups aren't just needed for disasters, either; think about upgrades gone bad, system issues, or even table corruption.

I know I probably shouldn't have read too much into the statement, but to me it is a pretty big ordeal to hear that backups are worthless. I'd be interested to hear from others what their thoughts are on backups and strategies; do you think they are worth it?

I won't dive into the different methodologies of backing up databases; that would need to be an entirely different post. I personally like a good backup strategy, and for any DR plan in place you need to have a good one. I also like the idea of routinely verifying backups by restoring them to a DBA sandbox, so to speak.
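As one possible sketch of that kind of routine verification, a quick media check plus a test restore to a sandbox might look like the following. The file paths, database names, and logical file names here are all hypothetical; substitute your own:

```sql
-- Quick integrity check of the backup media (does not restore any data)
RESTORE VERIFYONLY
FROM DISK = N'C:\Backups\MyDatabase.bak';

-- Full test restore to a hypothetical sandbox database, moving the
-- files so they do not collide with the original database's files
RESTORE DATABASE [MyDatabase_Sandbox]
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase'     TO N'C:\Sandbox\MyDatabase_Sandbox.mdf',
     MOVE N'MyDatabase_log' TO N'C:\Sandbox\MyDatabase_Sandbox.ldf',
     RECOVERY;
```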

In any sense have a good one guys, thanks for listening to my ramblings on the matter!

Are Developers Data Scripts Standard?

In all my years of experience with SQL, and having been a developer before transitioning to become a DBA, I will answer for myself personally: NO. Each developer has their own niche or style that they use when writing code. I've seen some pretty wild and insane ways people develop. I worked with one guy several years ago who would name his counters after TV shows, so every now and then you might have a Skipper or an S.S. Minnow thrown in. I hope and trust that development teams have set their own standards better than the ones we had years ago… However, even with standards, I think as DBAs we need to communicate and bridge the gap: it just makes sense that when you receive a data script or migration script, it is in an expected format that everyone adheres and agrees to, with no gotchas.

Recently the standard below was presented; given the business needs, the script restricts itself to the servers it is allowed to run on. This is particularly handy, since someone may accidentally run a script in an environment it shouldn't be run on. It is also helpful when dealing with a CMS server.

USE [DBName]
GO

IF @@SERVERNAME IN ('ServerName','ServerName') /*Enter Valid Server Name/Instance*/
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION

        /*Enter Code*/

        COMMIT TRANSACTION
        PRINT 'Successfully committed!'
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
        PRINT 'An error has occurred.  Rolled back!'
    END CATCH
END
ELSE
    PRINT 'PLEASE EXECUTE ON ServerName ONLY!' /*change server name*/


Whatever your choice of standard is, work toward getting things streamlined and laid out so there are no variations of the same kind of step. When it is up front and in your face as a standard, it is easier to follow and maintain.

Work Encouragement a DBA Perspective

Motivation… what is it? Webster's Dictionary defines it as a motivating force, stimulus, or influence. I'm often asked, "What is your motivation?" or "Why should I be motivated?" Each person is different; I cannot tell you why you should or shouldn't be motivated. Work-wise, I think we should take pride in what we do. At the end of the day, I do not send something out with my name on it that doesn't meet my standards (which I set pretty high for myself). What do you think is the reason the guy in the picture is motivated? For me as a DBA, I'm motivated to help our team keep every server up and running with the best security, the least amount of downtime, always accessible, fine-tuning processes like the exercise of rinse, wash, and repeat; and the list can go on and on.

I have a sheet of paper on my office wall; on the paper I have the following:

  • Believe while others doubt
  • Plan while others are playing
  • Study while others are sleeping
  • Decide while others are delaying
  • Prepare while others are daydreaming
  • Begin while others are procrastinating
  • Work while others are wishing
  • Save while others are wasting
  • Listen while others are talking
  • Smile while others are frowning
  • Commend while others are criticizing
  • Persist while others are quitting

So I ask this question; what motivates you? Do you take pride in your work?

Removing Read Only Attribute From Files Using SSIS

Recently, I built an SSIS package that searches my TFS structure after getting the latest version. I had many files I needed to bring over to one location, so to do this I built out a simple SSIS package. I failed to realize that when I bring my files down from TFS to my local environment, they are protected as read-only. This causes me a problem, as I am not able to clear out my repository directory ~ you will inevitably get a "permission denied" error.

To avoid manual intervention every time I perform this action, I tacked a Script Task onto my package. The basic flow of the package is as follows:

  • Check if my landing zone (repository) is available. If it is, then use the existing directory; if it is not, then create it.
  • If there are any files in the directory prior to bringing over my new files, then obliterate them from the directory.
  • Grab 16 various files from different locations and drop them into the landing zone.
  • Turn off the read only attribute of the files brought over.
  • Rinse and repeat as needed.

It is in step 4 where I turn off the read-only attribute. As you can see, the file system task in my package is below:

Read Only Attribute (Script Task)

By double clicking the Script Task you will be taken to a Script Task Editor screen. Simply click on “Edit Script”:

Edit Script Task

After clicking the "Edit Script" button you will be taken to an SSIS script editor. The code below provides a simple way of removing the read-only attribute; in my beta version I have left the value hard-coded, but eventually I would like to pass this in as a parameter so that I won't have to go inside the code to change it. Also note that you can swap the false to true and turn the read-only attribute back on.

public void Main()
{
    // Path is hard-coded for now; eventually it will be passed in as a parameter
    System.IO.DirectoryInfo directory = new System.IO.DirectoryInfo(@"C:\INRepository");

    foreach (System.IO.FileInfo fi in directory.GetFiles())
    {
        fi.IsReadOnly = false; // swap to true to turn read-only back on
    }
}