A few extra points: there are some issues with using database snapshots for reporting purposes - you can't back them up, and you can't use full-text indexing (FTI) against them. Conversely, FTI is supported out of the box for replication in SQL 2005. Also, concurrency issues are largely mitigated by the new snapshot isolation levels in SQL 2005.

Great article. We are currently using a home-grown log shipping process to copy logs from one location to 4 other locations, with compression/decompression. We also use the standby databases for user reporting and for programmers to research data problems. The problem with reporting, as the article points out, is that it kicks users out during log restoration; because of this, we increased the restoration interval to every 2 hours. What we are looking for is a combination of log shipping for warm standby and replication for reporting. So far, everything I've read says this is possible but tricky to set up. I would love to see detailed step-by-step instructions for this solution; I am sure other people have similar needs.

The biggest problem I had with replication was changing my db schema. Adding a new column became a pain in the ass, as you had to use the replication manager GUI. This was my main reason for choosing log shipping.
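For what it's worth, SQL Server 2000 does let you add a column to a published table from script rather than through the GUI, via `sp_repladdcolumn`. A minimal sketch - the table and column names here are made up for illustration:

```sql
-- Hypothetical example: add a column to a replicated table without
-- touching the replication GUI (SQL Server 2000).
-- dbo.Orders and DiscountPct are illustrative names, not from the article.
EXEC sp_repladdcolumn
    @source_object = N'dbo.Orders',
    @column        = N'DiscountPct',
    @typetext      = N'decimal(5,2) NULL';
```

The column is added to the publication and propagated to subscribers, which avoids the GUI round-trip the comment above complains about.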

As someone else noted, the article is a bit old! Replication is always difficult for GUI users, but if you script it then making changes to objects is a doddle. I'm not sure the article really covers the issues you get when making changes to a replicated database - e.g. in production systems with controlled software releases, you'll find index changes and the like can be problematic. I've log shipped since 6.0 and it's a cool way to provide recovery. I've found 5 minutes is generally about the minimum log-ship interval, especially if you have many databases on a server - which of course is the problem with all such solutions: ideal for one or two databases, but when you have 20 servers and a couple of hundred databases, that pipe can get mighty narrow, and the distance to your DR site can increase latency.

That said, SQL 2005 has some really cool slants you can apply, so a follow-up for 2005 might be good.

The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/

Just thought I'd add my ten cents' worth: a case for log shipping as a reporting solution.

Replication is obviously the preferred method of keeping data in sync, but here is an example of where log shipping works well for providing a reporting server. We are currently using a particular MS application - a supposed enterprise financial application whose name would seem to engender a sage-like quality. Unfortunately it runs like a 'hairy goat'. Normal operations such as posting invoices and entering timesheets would grind to a halt whenever someone ran a report; user satisfaction was zip. I had to separate reporting from the running system. As the application has never been 'certified for replication', management were wary of me offering replication as a solution. Hence this approach.

I set up a log-ship sequence to another server that runs every six hours, which the business currently deems an acceptable lag in the data. If anybody is running a report, the restore job would normally fail. This is where a little magic comes in. I set the retry to once every 1 minute for 1 hour. In the restore job I check whether any connections have a last batch time more recent than the average duration of the longest-running reports. If there are any, I force the restore job to fail (select 1 from XXX), wait a minute and try again. As soon as all connections are older than that threshold, the SPIDs are killed and the restore runs. Log shipping complete. If after one hour there are still active connections, all are killed and the restore is done anyway.

I know this is not optimal, but the refresh times are gazetted and I have found most people have accepted the process. No, not everyone is satisfied - those are managed by exception; the rest are happy campers.
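A rough sketch of that pre-restore guard, as I might write it for SQL 2000. The database name `ReportDB` and the 30-minute "longest report" threshold are made-up values; the actual job would use whatever threshold the averaging logic produces:

```sql
-- Hypothetical pre-restore guard for a log-shipping standby.
-- ReportDB and the 30-minute threshold are illustrative assumptions.
DECLARE @recent int;

SELECT @recent = COUNT(*)
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('ReportDB')
  AND last_batch > DATEADD(minute, -30, GETDATE());

IF @recent > 0
BEGIN
    -- Someone issued a batch recently: fail the job so the
    -- 1-minute retry schedule kicks in.
    RAISERROR('ReportDB still in use - deferring restore.', 16, 1);
    RETURN;
END

-- Quiet enough: kill any remaining idle connections so the
-- RESTORE LOG step that follows can take the database offline.
DECLARE @spid int, @sql nvarchar(50);
DECLARE c CURSOR FOR
    SELECT spid FROM master.dbo.sysprocesses
    WHERE dbid = DB_ID('ReportDB');
OPEN c;
FETCH NEXT FROM c INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'KILL ' + CAST(@spid AS nvarchar(10));
    EXEC (@sql);
    FETCH NEXT FROM c INTO @spid;
END
CLOSE c; DEALLOCATE c;
```

`KILL` won't accept a variable directly, hence the dynamic SQL; the `RAISERROR` at severity 16 is what makes the Agent job step register as failed and retry.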

Great article. If schema changes are not copied during log shipping, what is the best method to maintain updates to scripts and tables for large databases, where full backups may take hours to copy to a DRP site even with a log-compression solution?


Thanks,
Andrew

wavesmash, schema changes are copied over by log shipping; it's one of its big advantages.

Paul, good article. As a firm believer in log shipping, I am glad to see you recommend it over replication for failover. I would have been even more forceful in this recommendation: I see replication as more of a scale-out technology, and log shipping as much the better failover/DR solution. The only circumstance in which I would personally recommend replication for failover is if absolutely no data can afford to be lost - and even then, if you can back up the tail of the log, log shipping won't lose you any data, and a delay before the restore on the standby can give you a way of backing out incorrect amendments to the data.
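The tail-of-log step mentioned above can be sketched roughly like this; the database name and UNC path are illustrative, not from the article:

```sql
-- Hypothetical tail-of-log backup on the failed primary (assuming the
-- log file is still accessible), followed by the final restore on the
-- standby. MyDatabase and the share path are made-up names.
BACKUP LOG MyDatabase
    TO DISK = N'\\standby\share\MyDatabase_tail.trn'
    WITH NO_TRUNCATE;   -- capture the tail even if the data files are damaged

-- On the standby, after the regular log-shipping restores:
RESTORE LOG MyDatabase
    FROM DISK = N'\\standby\share\MyDatabase_tail.trn'
    WITH RECOVERY;      -- apply the tail and bring the database online
```

`NO_TRUNCATE` is what allows the log backup to succeed when the primary's data files are lost; it's the piece that makes "log shipping won't lose you any data" achievable.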

Is it possible to ship logs from a database that is a replication subscriber? I have tried this and although the log shipping jobs complete successfully, I am unable to update a record on the subscriber and see the changes on the standby server after a restore of the transaction log.