It’s been a long time since I posted on my blog. A question I’m often asked by colleagues and friends is, “Is there a way to automate shipping the backups taken on a SQL Server database server running on an EC2 instance to Amazon S3 storage?” Today, I would like to show how we can achieve this.

Requirement: I would like to keep one day of backups on disk, retain the remaining backups on Amazon S3 storage for 30 days, and archive them after that.

I’m assuming that you already have a mechanism in place to take backups to disk.

Attach an S3 policy to the IAM role of the database server running on the EC2 instance. The instance needs read and write permissions on the bucket. You may use the policy below.
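A minimal policy of the following shape should work; the bucket name my-sql-backups is a placeholder for your own bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-sql-backups"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-sql-backups/*"
    }
  ]
}
```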

Once you attach the above policy to the IAM role on the EC2 instance, the instance gets access to the Amazon S3 bucket.

Next, install the AWS CLI command line tool on the EC2 instance. You may download it here.

Now, let’s begin the actual process. I’m assuming that you already have backup jobs running on the database server on your schedule, taking the backups to disk. Include the PowerShell script below as a second job step, after the step that takes the backups to disk.
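A sketch of such a script is below. It assumes the AWS CLI is on the PATH, and the backup folder and bucket paths are placeholders you would replace with your own:

```powershell
# Local backup folder and S3 destination (placeholders - change for your setup)
$backupPath = "D:\SQLBackups"
$bucket     = "s3://my-sql-backups/sqlbackups"

# Copy any new backup files up to S3
aws s3 sync $backupPath $bucket

# Keep only one day of backups on disk, per the requirement above
Get-ChildItem -Path $backupPath -Recurse -Include *.bak, *.trn |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-1) } |
    Remove-Item -Force
```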

There we go: you now have backups syncing to S3. The advantage of this approach is that you can reduce the cost of disk storage while keeping backups highly available and reliable. You don’t have to worry about disk capacity or maintaining redundancy at the disk level. Amazon S3 storage offers 99.999999999% durability and 99.99% availability.

Now, you can create a lifecycle policy to keep the backups on S3 storage for 30 days and then archive them to Glacier storage (a much cheaper storage class). You may follow the link here.
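As an example, a lifecycle configuration along these lines (applied with `aws s3api put-bucket-lifecycle-configuration`; the prefix is a placeholder) transitions objects to Glacier after 30 days:

```json
{
  "Rules": [
    {
      "ID": "ArchiveSqlBackups",
      "Filter": { "Prefix": "sqlbackups/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```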

If you are looking for a specific process id, you can provide the pid value in the WHERE clause as below:

SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'dbname' -- database name
AND pid = <processid>; -- provide the process id you would like to kill

Recently I worked on migrating SQL Server databases running on Amazon EC2 instances to Amazon RDS instances. On July 27, 2016, Amazon announced that you can ship your databases from EC2 to RDS using the native backup and restore method. Let’s see how it works.

Steps:

Assume you have an RDS instance ready with default settings.

Create a custom option group named “NativeBackup” and grant it access to the S3 bucket. You may use the IAM role below.
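The option group references an IAM role that RDS assumes to reach the bucket. A permissions policy for that role might look like this (the bucket name is a placeholder; the multipart-upload actions are needed for large backup files):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-sql-backups"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::my-sql-backups/*"
    }
  ]
}
```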

Right-click the custom option group, click Add option, select the SQLSERVER_BACKUP_RESTORE option, and choose Apply Immediately.

Now you have a custom option group set up with the native backup and restore feature and the correct permissions on the S3 bucket. Next, add this group to the RDS instance: right-click the RDS instance, click Modify, navigate to Option group, assign the NativeBackup group created in step 2, and click Modify instance.

Now, copy the backups from the EC2 instance to the S3 bucket. You may use the AWS CLI command line tool to copy from the EC2 instance to the S3 bucket.
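For example, a single backup file can be copied like this (file and bucket names are placeholders):

```shell
aws s3 cp D:\SQLBackups\TestDB.bak s3://my-sql-backups/TestDB.bak
```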

Once you kick off the restore, it will begin on the RDS instance; you may use the stored procedure below to get the status of the restore process:

exec msdb.dbo.rds_task_status
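For reference, the restore task that this status procedure tracks is started with the rds_restore_database stored procedure; a sketch, with placeholder database and bucket names:

```sql
-- Start a native restore from S3 (database and ARN are placeholders)
exec msdb.dbo.rds_restore_database
    @restore_db_name = 'TestDB',
    @s3_arn_to_restore_from = 'arn:aws:s3:::my-sql-backups/TestDB.bak';
```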

That’s all about the restore! Below are possible errors you may encounter while migrating to an RDS instance.

Error1:

Aborted the task because of a task failure or a concurrent RESTORE_DB request. Database TestDB cannot be restored because there is already an existing database with the same family_guid on the instance.

Reason: if a database with the same file GUID for a data or log file already exists on the instance, you cannot restore to the RDS instance. File GUIDs are unique to each database, but restoring a copy of an existing database carries the same GUIDs with it.

You may use the sys.master_files catalog view to get the file GUID details.
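For example, the following query lists every database file on the instance along with its file GUID:

```sql
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       file_guid
FROM sys.master_files;
```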

Another reason for this error is that you have already restored the same database and are trying to restore it again. RDS has a limitation that you cannot restore the same database files twice on the same instance.

Resolution: do not restore the same database name more than once on the same RDS instance.

Also, make sure the login you use has execute permission on the rds_restore_database stored procedure. If you are using the master credentials and still get the same error, then you have not granted full permission on the S3 bucket or are missing the IAM role for the S3 bucket. Grant the access and retry.

You, your team, or someone else wrote an SSIS package that has been running flawlessly for days, months, or years. One fine day the SSIS package starts failing continuously, and you need to troubleshoot and fix the issue.

The error message does not show much information about why the package failed, or at which step the package succeeded or failed.

What will you do in this scenario?

Debug! Debug! Debug! Let’s see how this works.

You can debug this using breakpoints and data viewer in SSIS. Let’s explore this.

Data Viewer:

You can view the data at each step while the SSIS package is running. This will help you know what data is being processed at each step in the data flow task. You can see this in the screenshot below.

After you execute the package, the result can be seen below.

Break Points:

You can also debug a package by setting breakpoints on a package task or container, as shown below.

As you can see in the above screenshot, you can enable any of these options to debug the SSIS package. Each option is described in the table below.

Break condition: When the task or container receives the OnPreExecute event.
Description: Called when a task is about to execute. This event is raised by a task or a container immediately before it runs.

Break condition: When the task or container receives the OnPostExecute event.
Description: Called immediately after the execution logic of the task finishes. This event is raised by a task or container immediately after it runs.

Break condition: When the task or container receives the OnError event.
Description: Called by a task or container when an error occurs.

Break condition: When the task or container receives the OnWarning event.
Description: Called when the task is in a state that does not justify an error, but does warrant a warning.

Break condition: When the task or container receives the OnInformation event.
Description: Called when the task is required to provide information.

Break condition: When the task or container receives the OnTaskFailed event.
Description: Called by the task host when it fails.

Break condition: When the task or container receives the OnProgress event.
Description: Called to update progress about task execution.

Break condition: When the task or container receives the OnQueryCancel event.
Description: Called at any time in task processing when you can cancel execution.

Break condition: When the task or container receives the OnVariableValueChanged event.
Description: Called by the Integration Services runtime when the value of a variable changes. The RaiseChangeEvent property of the variable must be set to true to raise this event.
** Warning ** The variable associated with this breakpoint must be defined at the container scope. If the variable is defined at the package scope, the breakpoint does not get hit.

Break condition: When the task or container receives the OnCustomEvent event.
Description: Called by tasks to raise custom task-defined events.

Both breakpoints and data viewers are very useful when you troubleshoot, debug, or develop complex SSIS packages.