Archive for January, 2012

I always prefer to use the C drive only for the OS and the D drive for other installations. And I did the same thing when I installed Visual Studio on my machine. I installed it on the D drive, and when I tried to install the newly downloaded LightSwitch from DreamSpark, I found that even though the install directory is on D, it uses loads of space on the C drive as well !!

I just don’t understand the point of having things on C. But I guess it is typical MSFT, because I found the same behavior when I installed SQL Server and Visual Studio in the past. The only difference is that this time I remembered to take a screenshot of the installation.

A few days back I was working on a task, and as part of it I needed to run a command line utility. The condition was that this utility had to keep running all the time, and if it crashed for whatever reason, it should restart by itself.

Now, I could have done this by creating another application or service that monitors the utility all the time and restarts it if it crashes. But that felt like too much work for a simple task, so instead I chose a different and easier approach: I created a batch file that runs in an infinite loop and made the utility run inside that loop. For the sake of some history, I also added a log entry for each restart event. And that’s it !!

I am posting my script below (of course it is not the original script, but it is nearly identical).
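In spirit, the batch file boils down to a label, the utility call, and a `goto` back to the label. This is a reconstructed sketch, not the original script, and `myutility.exe` is a placeholder for the real command line tool:

```bat
@echo off
rem Keep the utility running forever; log every start and crash/exit.
rem "myutility.exe" and "restart.log" are placeholder names.
:loop
echo %date% %time% - starting utility >> restart.log
myutility.exe
echo %date% %time% - utility exited, restarting >> restart.log
goto loop
```

Because the `goto loop` runs unconditionally after the utility returns, the loop never ends on its own; stopping it means closing the console window or killing the batch process.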

Logging is an essential part of any application. It gives the user insight into the application’s operation, and it proves valuable in the event of an issue. The same holds true for SSIS packages. Since an SSIS package is usually executed by SQL Server Agent or some Windows scheduled task, and in both cases runs unattended, having a logger is really helpful when all of a sudden you start receiving alerts that some files are not being processed.

For this post, I have created a very simple demo to show how logging works in SSIS. I am using a File System Task to transfer a file from one folder to another, and I have created a log file to record each step of the process.

First I added a File System Task. Then, when you right click on the Control Flow tab, you get a few options, and the first one is Logging. Selecting it opens another window, which is basically a logging configuration wizard. Select the objects or containers for which you want to enable logging; in this case I wanted to log everything for the whole package, so I made the selection at the top (package) level. If you click on “Provider type” you will see the different logging options available for SSIS events. And the good thing is we can select multiple logging providers, for example file based logging for simple operational steps and Windows event logging for critical errors. For the sake of simplicity (and because I am feeling too lazy to go too deep) I selected “SSIS log provider for Text files”.

Next to the “Providers and Logs” tab there is a “Details” tab, which lets you select which events you want to log. If you click on “Advanced” it will expand the current window and give more fine grained choices for logging.

Just like many people, the internet is my other playground (after our backyard ). I use Google for many things: work related stuff, searching for cheap deals on computer hardware, or just finding some crazy video. And every time I go to some personal blog or site, it always amazes me that people like to write about themselves in the third person ?? I mean come on, if it is my own blog, then why the heck would I write about myself in the third person ?? Unless I am just one of many posters on that blog and every author has their own bio written by some professional dude (which would be weird because … you blog because you want to write something, not to let others write for you, don’t you ??) … I think it is totally OK to write a bio in the third person, but only when it is done by a real third person on his/her own blog, not yours.

Usually when I find a blog, I try to find the “About Me” page. It is really entertaining to read those self posted third person bios. Some classic examples: “S/He has XX years of experience in the field of XYZ. And s/he is an expert in XWYX technology.” Hell, I have even seen some LinkedIn profiles like that !!

In the previous post I explained the basics of isolation levels in general. This post is a sort of demo of isolation levels in SQL Server. In SQL Server, the isolation property can be configured using the “SET TRANSACTION ISOLATION LEVEL” command, which sets the isolation level used by subsequent transactions on that connection.

For this post, I will be setting the isolation level to Read Uncommitted. As I explained in the previous post, this is the lowest isolation level in the group. It gives users the highest concurrent access, because it takes the least amount of locks on resources, and users will even see uncommitted transactional data from another transaction.

The first step for this test is to create a transaction that updates some field in a table. I will be using my TestDB for this demo, which can be created using the following script,
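As a sketch of what the demo looks like end to end (the table name, column, and values here are illustrative placeholders, not necessarily the ones from the original script), the dirty read shows up across two connections:

```sql
-- Illustrative setup in TestDB (hypothetical table).
USE TestDB;
GO
CREATE TABLE dbo.Accounts (AccountId INT PRIMARY KEY, Balance MONEY);
INSERT INTO dbo.Accounts (AccountId, Balance) VALUES (1, 500);
GO

-- Connection 1: open a transaction and update, but do NOT commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

-- Connection 2: under READ UNCOMMITTED, the in-flight value is visible (a dirty read).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- returns 400, still uncommitted

-- Connection 1: roll back; the 400 that connection 2 saw never officially existed.
ROLLBACK TRANSACTION;
```

The same `SELECT` on connection 2 under the default READ COMMITTED level would instead block until connection 1 finished, which is exactly the concurrency-versus-consistency trade the post is about.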

Just like any reliable transaction system, SQL Server supports the ACID properties. If you have never heard of ACID before, it is a set of four properties: Atomicity, Consistency, Isolation, and Durability. Together, these properties make sure that whatever transactions we run in SQL Server (or any DBMS), the database behaves consistently every time.

One of the ACID properties is Isolation, which basically makes sure that one transaction is independent of another, so that one transaction can’t access resources being used by another transaction. This controls concurrent access to any given resource in the database, and it is implemented by locking database resources.
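To make the locking point concrete, here is a minimal two-connection sketch (the table is hypothetical, and this assumes SQL Server’s default READ COMMITTED isolation level):

```sql
-- Connection 1: the UPDATE takes an exclusive lock on the row and holds it
-- for as long as the transaction stays open.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = 0 WHERE AccountId = 1;

-- Connection 2: at the default READ COMMITTED level, this SELECT blocks,
-- waiting on connection 1's lock, until that transaction commits or rolls back.
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Connection 1: committing releases the lock, and connection 2's SELECT
-- then returns the committed value only.
COMMIT TRANSACTION;
```

That blocking is isolation in action: connection 2 never observes the half-finished work of connection 1, at the cost of having to wait for the lock.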