Databases are a critical component used by many (if not most) software applications. And yet, they are traditionally underserved by tools that can facilitate development, testing, and deployment. Database developers are usually left to cobble together disconnected tools for developing and deploying database changes. Furthermore, database developers usually follow a process that is disconnected from the process followed by the application development team, resulting in an error-prone and labor-intensive system of collaboration. With VS2010, you can finally bring the process of managing changes to your database into the same toolset and process framework used by the other members of your software development team.

For those new to Visual Studio Database Projects, Database Projects provide three main pillars of functionality:

1.) Managing Change
- Versioning of your database schema: full-fidelity, offline schema development of your database within the project system.
- Incremental deployment: deploy your database schema against multiple versions of a database schema.
- Schema Compare: database tooling to compare schemas between databases, database projects, and .dbschema files (the primary output artifact of building a database project).
- Schema Refactoring: database tooling to support common iterative changes to your database schema within the development cycle. Refactoring operations include Expand Wildcards, Rename, Move to Schema, and Fully Qualify Names.

2.) Managing Quality
- Dependency validation and visualization: validation during design and build, ensuring the consistency and integrity of your database schema.
- Static Code Analysis: analogous to FxCop, but for SQL. The product ships with 14 rules in the box, covering the naming, design, and performance issue categories.
- Unit Testing and Data Generation: a unit testing framework similar to what is available to the app developer, but with T-SQL, plus data generation capabilities to test your instantiated schema with data (a minimal sketch follows below).
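As a hedged illustration of the unit testing pillar, here is a minimal sketch of the T-SQL body a database unit test might execute; the object names (dbo.Customer, dbo.uspGetCustomerById) are hypothetical, not from this deck, and the assertions come from the test conditions attached in the designer:

    -- Hypothetical unit test body: seed a known row, call the procedure
    -- under test, and let a test condition (e.g., Row Count = 1) assert
    -- against the result set it returns.
    DECLARE @CustomerId int = 1;

    INSERT INTO dbo.Customer (CustomerId, Name)
    VALUES (@CustomerId, N'Test Customer');

    EXEC dbo.uspGetCustomerById @CustomerId = @CustomerId;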

3.) DDLC Integration into ALM
Team Developer brings the database development lifecycle into the Visual Studio developer experience in a consistent and familiar manner. Many of the same idioms and metaphors that C#/VB.NET developers are accustomed to now exist for database development. Database projects bring the data-tier developer into the team-based development lifecycle (define, design, develop, test, deploy, manage; lather, rinse, repeat), with build integration via MSBuild and Team Build (see the sketch after this list).

4.) Database Development Platform Extensibility
Database projects offer extensibility to add features to the project system or to expand features that ship with the product. Feature extensibility includes custom Static Code Analysis rules, custom unit test conditions, custom data generators and data distributions, and custom refactoring types and targets.
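As a hedged sketch of the MSBuild integration, a database project can be built and deployed from the command line roughly as follows; the project name, database name, and connection string are hypothetical placeholders:

    msbuild MyDatabase.dbproj /t:Build
    msbuild MyDatabase.dbproj /t:Deploy /p:TargetDatabase=MyDatabase /p:TargetConnectionString="Data Source=.;Integrated Security=True"

The same targets run unattended under Team Build, which is what lets database changes ride the same continuous build as the rest of the team's code.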

Can you use the diff utility of any version control system to get the changes between V2 and V3 of the database? No.

Why not? Hand-written change scripts have several problems:
- They are not performant.
- They are error-prone, because it is difficult to get the dependency tracking correct.
- They are labor-intensive.
- They require extensive additional logic to account for customizations or versioning differences between different instances of production databases.
- Database changes must be made in a precise order to preserve the integrity of the schema and prevent data loss (see the sketch after this list).
- They can introduce additional bugs.
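To make the ordering and data-loss point concrete, here is a hedged, hypothetical sketch (the table, column, and constraint names are invented). A naive hand-written script that drops and recreates a column silently destroys its data, and even the in-place version fails unless dependent objects are sequenced first:

    -- Naive: every value in the column is lost.
    ALTER TABLE dbo.Customer DROP COLUMN Phone;
    ALTER TABLE dbo.Customer ADD Phone nvarchar(25) NULL;

    -- In-place: preserves data, but a constraint on the column must be
    -- dropped first, in exactly this order, or the ALTER fails.
    ALTER TABLE dbo.Customer DROP CONSTRAINT DF_Customer_Phone;
    ALTER TABLE dbo.Customer ALTER COLUMN Phone nvarchar(25) NULL;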

Schema Model = the in-memory model of the database. Creating database objects and editing T-SQL cause Visual Studio to perform background analysis of your changes and make the appropriate changes to the schema model; this analysis also detects syntax errors and dependency errors, and thus protects the integrity of your database schema.

As mentioned, we persist the database schema as separate .sql scripts within a Visual Studio project system, which is a physical container for grouping all of the .sql files that make up a database schema. Because we are completely offline and there is no live database behind the database project system, we also need to understand the database from a logical perspective, namely how the objects relate to and depend on each other. We do that via a schema model, which is essentially an in-memory representation of the database schema as it would exist in a live database.

To populate the schema model, we can take an existing database, reverse-engineer its definition (no data), and create a .sql file for each schema object: a .sql file for each table, a .sql file for each primary key, one for each stored procedure, and so on (see the sketch below). We can then version-control those .sql files just as you version-control any of your .NET code files.

With the .sql files in hand, we load the content of each file through the interpret component to interpret the schema definition (is this a stored procedure, or is this a table?), then analyze the content (do we already have an object defined with this name? does this object depend on another object? for a primary key, which table does it belong to?), and then validate it (is the object defined correctly? for a primary key, is its table defined and in a non-error state?). Once an object passes this process, it is populated into the schema model, which provides a true picture of the database: fully validated, and deployable to a live database without any semantic errors. The persisted form of the schema model is a .dbschema file.
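As a hedged illustration of the one-file-per-object layout (the object names are hypothetical), a reverse-engineered table and its primary key would land in two separate, individually versionable scripts, e.g.:

    -- Customer.table.sql
    CREATE TABLE dbo.Customer (
        CustomerId int           NOT NULL,
        Name       nvarchar(100) NOT NULL
    );

    -- PK_Customer.pkey.sql
    ALTER TABLE dbo.Customer
        ADD CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId);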

This is a simplified diagram of how the incremental deployment engine works. In a nutshell, it uses in-memory models of the source and target databases to compare schemas and create a deployment plan. The deployment plan produced is dependency- and data-aware. Examples: trying to DROP objects that have dependencies not comprehended in the project will often fail the deployment; modifying the schema of the database in a way that may introduce data loss is optionally blocked in deployment (in test, it may be OK to lose data; in production, not so much). The deployment engine produces a list of differences, and a deployment contributor acts on those differences to produce the least invasive approach to modifying the target database to match the source. The deployment contributor combines this deployment plan with additional project artifacts and generates a SQL deployment script. Additional deployment contributors can also act on the database target by emitting additional SQL or supporting artifacts, like a change list or data dictionary. We will see more of this later.
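As a hedged sketch (not the engine's literal output), the generated incremental script for a safe change plus a guarded, potentially lossy change might look roughly like this:

    -- Non-lossy difference: emitted directly.
    ALTER TABLE dbo.Customer ADD Phone nvarchar(25) NULL;
    GO

    -- Potentially lossy difference: guarded so the deployment stops when
    -- the "block on possible data loss" option is enabled and rows exist.
    IF EXISTS (SELECT TOP 1 1 FROM dbo.Customer)
        RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127);
    GO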

Now you can defer the comparison of the source and target database so that an appropriate script is generated based on the state of the target database.
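This deploy-time comparison is what makes it practical to deploy one .dbschema model against many differently versioned targets. As an assumption on my part, the VSDBCMD command-line utility is the usual vehicle for this; a hedged sketch of the invocation follows, with flags quoted from memory and a hypothetical model and connection string, so verify against the tool's help:

    vsdbcmd /a:Deploy /dd:+ /model:MyDatabase.dbschema /cs:"Data Source=ProdServer;Integrated Security=True" /p:TargetDatabase=MyDatabase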