Data generation is best done with tools. Since Visual Studio Data Tools dropped support for it, there is, for instance, Red Gate's DBA bundle, but my current license does not work with Microsoft SQL Server 2014. I needed a quick way to reproduce a scenario with *some more data* in just one table, which means I already have some data in that table. So I wrote the following T-SQL script to help me out. Just clone the table schema, define the upper limit, and multiply the data by running it.
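A minimal sketch of the multiplication step; the table and column names (dbo.MyTable, Col1, Col2) and the limit are placeholders, not part of the original script:

DECLARE @UpperLimit INT = 100000; -- placeholder upper limit

-- Each pass doubles the existing rows until the limit is reached;
-- the final count can therefore overshoot by up to a factor of two.
WHILE (SELECT COUNT(*) FROM dbo.MyTable) < @UpperLimit
BEGIN
    INSERT INTO dbo.MyTable (Col1, Col2)
    SELECT Col1, Col2
    FROM dbo.MyTable;
END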

I have a test environment for an application. From time to time I take the database to my local development environment. A smaller database log means less network latency and a faster local reproduction scenario. After a while of pumping data into an MSSQL Server instance the log can grow quite a bit, even if there is a lot of free space left. Shrinking is NOT GOOD at all, but I don't have 50 GB of space left to pull up that database on my local machine. So I will shrink to gain space and reorganize the indexes to fight fragmentation:

DECLARE @DbName VARCHAR(128);
SET @DbName = DB_NAME();
DECLARE @LogName VARCHAR(128);
SET @LogName = @DbName + '_Log';
DECLARE @AlterSql NVARCHAR(512);
SET @AlterSql = 'ALTER DATABASE ' + @DbName + ' SET RECOVERY SIMPLE;';
EXEC sys.sp_executesql @AlterSql;
DBCC SHRINKFILE(@LogName, 1, TRUNCATEONLY);
GO
EXEC sp_msforeachtable 'ALTER INDEX ALL ON ? REORGANIZE;';
GO

The result is notable and should be easier to transmit to my local development machine. Another option is to leave the recovery model set to simple. But that cuts a few features:

Log shipping
AlwaysOn or database mirroring
Media recovery without data loss
Point-in-time restores

SELECT
name,
recovery_model_desc
FROM
sys.databases
WHERE
name = 'mydatabasename';
GO
USE master;
ALTER DATABASE [mydatabasename] SET RECOVERY SIMPLE;
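To go back to full logging later, switch the recovery model back. Note that the switch to SIMPLE breaks the log backup chain, so a full (or differential) backup is needed before transaction log backups work again. A sketch, with a placeholder backup path:

USE master;
ALTER DATABASE [mydatabasename] SET RECOVERY FULL;
GO
-- The switch to SIMPLE broke the log backup chain; take a full backup
-- first, only then can transaction log backups resume.
-- The path below is a placeholder.
BACKUP DATABASE [mydatabasename]
TO DISK = N'D:\DATA\SQL\BACKUP\AfterRecoverySwitch.bak';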

I have a test environment for an application. From time to time I take the database to my local development environment. A smaller backup means less network latency and a faster local reproduction scenario. Since the 2008 release, Microsoft SQL Server comes with a feature called backup compression. I gave it a try and ran the two following statements in competition:

BACKUP DATABASE [XXX] TO DISK = N'D:\DATA\SQL\BACKUP\Uncompressed.bak'
WITH COPY_ONLY, NOFORMAT, NOINIT, NAME = N'Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

vs.

BACKUP DATABASE [XXX] TO DISK = N'D:\DATA\SQL\BACKUP\Compressed.bak'
WITH COPY_ONLY, FORMAT, INIT, NAME = N'Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10
GO

Here are the results:
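The achieved sizes can also be read from msdb after the fact: backup_size and compressed_backup_size in msdb.dbo.backupset give the ratio directly ([XXX] stands in for the database name, as above):

SELECT TOP (2)
    database_name,
    backup_size,                -- bytes before compression
    compressed_backup_size,     -- bytes actually written to disk
    backup_size / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
WHERE database_name = 'XXX'
ORDER BY backup_finish_date DESC;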

Monday: I met Andreas Hoffmann (2nd UG Lead of the VfL Usergroup) and Peter Nowak (Head of FIAEon.net, a community for .NET-related vocational education) at Starbucks in Düsseldorf.
Tuesday: Benjamin Mitchell notified me that one of my sessions was voted for by the British community, and I'll have a session at the Developer Developer Developer Day.
Wednesday: I'm in contact with the Student Partners in Wuppertal now (more precisely, with Anselm Haselhoff, because Marcel Wiktorin is moving and has not replied yet :-)).
Thursday: Usergroup meeting in Düsseldorf: Sebastian Weber (Developer Evangelist at Microsoft Germany and member of the VfL-UG) answered all our members' questions about SQL Server 2005, and Tuan Nguyen (Lead of annos.de and VfL member) talked about the Annos project. Great, thanks guys.
Friday: I updated the VfL-Site and fixed a few bugs.

Ok, here is another one:
1. I add a login to my database server:
EXEC sp_addlogin @Username, @Password, @Database;
This works fine!
2. I add a user to a database by using the stored procedure sp_adduser:
Use [MyDB]; EXEC sp_adduser @Username;
This also works fine!
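At this point a look into sys.schemas shows what sp_adduser created besides the user: a schema named after the user and owned by it. The literal 'MyUser' below is a placeholder for the actual user name:

USE [MyDB];
SELECT s.name AS schema_name,
       dp.name AS owner_name
FROM sys.schemas AS s
JOIN sys.database_principals AS dp
    ON s.principal_id = dp.principal_id
WHERE s.name = 'MyUser'; -- placeholder for the actual user name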
3. I want to remove the user from the database. Therefore I use the stored procedure sp_dropuser:
EXEC sp_dropuser @Username;
This removes the user, BUT what you'll see when digging deeper is that sp_adduser has created a SCHEMA and sp_dropuser doesn't care a s%#t about that - it's still there after calling sp_dropuser :-(
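So the leftover schema has to be dropped by hand. A sketch, assuming the schema carries the user's name and is empty (DROP SCHEMA fails if the schema still contains objects):

USE [MyDB];
-- 'MyUser' is a placeholder for the actual user name;
-- DROP SCHEMA only succeeds on an empty schema.
DROP SCHEMA [MyUser];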