Ashraful Alam is a Software Architect with 8 years of professional experience in the software development industry. This Bangladeshi national is involved in the project management and development of several US-based software projects from his country. He has already managed and developed several software projects that are used in different countries, such as the USA, Canada, Australia, and Bangladesh. While developing and managing a team, he maintains a set of well-defined engineering practices developed by him and by online developer communities.

Due to his willingness to put effort into improving and sharing better software development practices, Ashraf has been awarded the Microsoft “Most Valuable Professional” (MVP) award in the ASP.NET category multiple times since 2007, which is a rare honor and a prestigious recognition among developers around the world.

My Works

After a long wait, the next version of Employee Info Starter Kit has been released! This starter kit is basically a project template containing code samples that target a specific technology, such as ASP.NET Web Forms or ASP.NET MVC.

Since its first release, this open source project has gained huge popularity in the developer community, with 250K+ combined downloads. This starter kit is honored to be listed on the official ASP.NET site, along with the other ASP.NET starter kits, all of which are considered examples of ASP.NET coding standards recommended by Microsoft. EISK has been showcased on Microsoft’s Channel 9 Weekly Show as well.

The ASP.NET MVC Edition of the new version 6.0 bundles many of the most successful platforms, frameworks and technologies together, to enable web developers to learn and build manageable, high-performance web applications with a rich user experience, effectively and quickly.

User End Specifications

Create a new employee record

Read existing employee records

Update an existing employee record

Delete existing employee records

Role-based security model

Key Technology Areas

ASP.NET MVC 4

Entity Framework 4.3.1

SQL Server Compact Edition 4

Visual Studio 2012

QuickStart Guide

Getting started with EISK 6.0 ASP.NET is pretty easy. Once you have Visual Studio 2012 installed, just follow the steps below:

Before considering whether comments are really needed, let’s consider a few other things that have already become their “rivals”.

Unit Tests: Well-written unit tests are a smarter and more useful solution than detailed comments. Unit tests not only check quality (and encourage better architectural design, in the case of TDD), but also serve as documentation of how the API should be used.
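As a sketch of the “tests as documentation” idea, consider the hypothetical helper below (the function, its name and its behavior are illustrative, not from any real codebase): each test case states a usage contract explicitly, so a reader can learn how the API behaves without reading any prose comments.

```typescript
// A hypothetical helper (illustrative only): combines first and last name,
// trimming stray whitespace.
function formatFullName(firstName: string, lastName: string): string {
  return `${firstName.trim()} ${lastName.trim()}`.trim();
}

// "Tests as documentation": each case states a usage contract explicitly.
function testFormatFullName(): void {
  const cases: Array<[string, string, string]> = [
    ["John", "Doe", "John Doe"],      // basic usage
    ["  John ", " Doe ", "John Doe"], // surrounding whitespace is trimmed
    ["John", "", "John"],             // a missing last name is tolerated
  ];
  for (const [first, last, expected] of cases) {
    const actual = formatFullName(first, last);
    if (actual !== expected) {
      throw new Error(
        `formatFullName("${first}", "${last}") returned "${actual}", expected "${expected}"`,
      );
    }
  }
}

testFormatFullName(); // throws if any documented contract is violated
```

Notice that the three cases read like a specification: a new team member can learn the edge-case behavior from the test alone.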

Advanced IDE Features: With advanced features such as conditional breakpoints, call stacks, and dependency graphs (and a lot more) in modern IDEs, it is relatively easy to understand code without manually reading the comments alongside the code.

Architectural Documentation: Well-written architectural documentation really helps people who intend to start working on an existing codebase.

Now the first question: are comments needed? The short answer is yes. However, in the real world comments may lead to confusion if they are not maintained through the code’s change cycle. Thus they should be written only when really needed, and then kept up to date.

And the next question: why don’t good developers want to write comments? I’d say people who build software that meets quality, time and budget goals are good developers, and almost all of them use the “rivals” of comments along with the bare minimum of comments that are truly required. Some of them focus more on unit tests, since tests are automated and easier to maintain than comments.

What is Model?

A model can be considered a container that facilitates presentation, behavior and/or persisting data to/from a data source (e.g. a database). Besides the data container elements, a model may or may not contain behavior (i.e. logic), depending on the design context of the corresponding architecture. While the term “Model” is most frequently discussed in the context of the Model-View-Controller pattern, it is one of the most important considerations in today’s world of software architecture.

Download the EISK 6.0 MVC Edition release to see a few of the patterns mentioned in this post in action!

Common Container Models

Entity

Entities can be considered the “heart” of a data-driven application, and they play a primary role in all Model-related patterns. By definition, each data container designed as an Entity contains an identity (i.e. a primary key) and is typically used to store data in a structured storage system.

Value Object Pattern

Value Objects are simple data containers that don’t have any identity. A value object may participate in an entity object as a member to provide an object-oriented design, or may itself be used as a Data Transfer Object (discussed later) to transfer data.

Figure: the Person class represents an Entity, whereas the Address class represents a Value Object
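The figure’s idea can be sketched as follows (a minimal, hypothetical TypeScript rendering; class and member names are illustrative): Person carries an identity and is compared by it, while Address has no identity and is compared by its values.

```typescript
// Value Object: no identity; two addresses with the same values are equal.
class Address {
  constructor(
    public readonly street: string,
    public readonly city: string,
  ) {}

  equals(other: Address): boolean {
    return this.street === other.street && this.city === other.city;
  }
}

// Entity: has an identity (primary key); equality is based on the id,
// not on the current attribute values.
class Person {
  constructor(
    public readonly id: number, // identity, e.g. the primary key
    public name: string,
    public address: Address,    // a Value Object embedded as a member
  ) {}

  equals(other: Person): boolean {
    return this.id === other.id;
  }
}

const p1 = new Person(1, "Rahim", new Address("10 Main St", "Dhaka"));
const p2 = new Person(1, "Rahim K.", new Address("10 Main St", "Dhaka"));

console.log(p1.equals(p2));                 // true: same identity
console.log(p1.address.equals(p2.address)); // true: same values
```

Note that p1 and p2 are “the same” person despite the different name, because entity equality follows the identity, not the attributes.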

Model Set Pattern

As the name implies, a Model Set encapsulates a set of model objects in a single class. While containing objects of different types in a single class is very common in software architecture and implementation, defining the term “Model Set” in a model-related architectural context makes it easier to understand different boundaries in different scenarios.

Data Transfer Object Pattern

As the name implies, a Data Transfer Object (DTO) is used to transfer data from one boundary to another. According to Fowler, it is “an object that carries data between processes in order to reduce the number of method calls”.

Data Transfer Objects can be considered a form of the Model Set pattern, with the additional mandatory responsibility of participating in transferring data.

Relevant patterns:

Model Set Pattern
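A minimal sketch of a DTO under these definitions (all names are illustrative assumptions): the flat, behavior-free transfer object carries everything a remote caller needs in one shape, reducing the number of fine-grained calls across the boundary.

```typescript
// A hypothetical domain entity with several fine-grained fields.
class Employee {
  constructor(
    public readonly id: number,
    public firstName: string,
    public lastName: string,
    public department: string,
  ) {}
}

// The DTO: a flat container with no behavior, shaped for the wire.
interface EmployeeDto {
  id: number;
  fullName: string;
  department: string;
}

// Assembler: maps the entity to its transfer representation at the boundary.
function toDto(e: Employee): EmployeeDto {
  return {
    id: e.id,
    fullName: `${e.firstName} ${e.lastName}`,
    department: e.department,
  };
}

const dto = toDto(new Employee(7, "Ayesha", "Karim", "Engineering"));
console.log(JSON.stringify(dto));
```

The design choice here is that the DTO is serialization-friendly (a plain object), while the entity keeps its identity and richer shape on the server side.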

Business Models

Domain Model Pattern

By definition, a Domain Model is “an object model of the domain that incorporates both behavior and data”.
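A minimal sketch of that definition (names and logic are illustrative assumptions): the object below carries both the data (salary) and the behavior that governs it (applyRaise), rather than delegating all logic to external service classes.

```typescript
// A Domain Model object: data and the behavior that operates on it
// live together in one class.
class EmployeeAccount {
  constructor(
    public readonly id: number,
    private salary: number,
  ) {}

  // Behavior lives with the data it governs.
  applyRaise(percent: number): void {
    if (percent < 0) throw new Error("raise percent must be non-negative");
    this.salary += (this.salary * percent) / 100;
  }

  getSalary(): number {
    return this.salary;
  }
}

const acct = new EmployeeAccount(1, 50000);
acct.applyRaise(10);
console.log(acct.getSalary()); // 55000
```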

Presentation Data Model

It is often beneficial to create a presentation data model that contains a decorated version of the domain data model and is completely presentation friendly.

For instance, if a domain object has a “Notes” property for a given employee, we can create a view-friendly model that encapsulates the necessary display logic in a property of the presentation data model. In that way we remove untestable and unmaintainable code from the view. Such an example is shown below:
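A sketch of such a presentation data model (class names and the particular display rules are illustrative assumptions, not from EISK): the view model wraps the domain object and exposes a display-ready Notes property, keeping the display logic testable and out of the view.

```typescript
// Domain object: raw data, no display logic.
class Employee {
  constructor(
    public name: string,
    public notes: string | null,
  ) {}
}

// Presentation data model: wraps the domain object and exposes a
// display-ready property, so the view stays free of logic.
class EmployeeViewModel {
  constructor(private employee: Employee) {}

  get name(): string {
    return this.employee.name;
  }

  // Display logic: fall back to a friendly placeholder and truncate
  // long notes so the grid column stays readable.
  get notesDisplay(): string {
    const raw = this.employee.notes?.trim();
    if (!raw) return "(no notes)";
    return raw.length > 20 ? raw.slice(0, 20) + "…" : raw;
  }
}

const vm = new EmployeeViewModel(new Employee("Rahim", "  "));
console.log(vm.notesDisplay); // "(no notes)"
```

Because the rule lives in a plain property, it can be unit tested directly instead of being buried in view markup.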

Based on their characteristics, Presentation Data Models can be categorized in 2 primary ways:

View Model Pattern

As mentioned earlier, instead of embedding display rendering logic in the view, it is often beneficial for testing and maintainability to place the view rendering logic in a separate class before sending the model to the view. View models are typically not designed to be persisted to the database.

Some people consider a “View Model” to contain editor-related data as well, which I find very confusing, as the term “View” gives a “read-only” impression. Patterns specific to editor-related data are discussed later.

Relevant patterns:

Façade Pattern

Data Transfer Object

Presentation Model

Editor Model Pattern

Editor model is a special model that facilitates rendering the appropriate data in an editor user interface: the entity and other editor-related objects are encapsulated in a single view-friendly class, enabling the entity to be persisted to the database when the data is submitted to the service.

Relevant patterns:

Façade Pattern

Data Transfer Object

Presentation Model

The Editor Model pattern can be categorized in the following 2 ways:

Member Editor Model Pattern

Member Editor Model is a simple form of the Editor Model pattern, where the persistence entity object is placed in a Model Set as a class member. Other class members of the Model Set include related presentation-friendly objects. Since the Model Set contains the database persistence entity, very little coding effort is required to perform CRUD operations.

Relevant patterns:

Model Set Pattern

Façade Pattern
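A minimal sketch of a Member Editor Model (all names are illustrative assumptions): the persistence entity sits inside the Model Set as a plain member, next to the presentation-friendly objects the editor screen needs, so saving the entity back requires almost no extra code.

```typescript
// Persistence entity (illustrative).
class Employee {
  constructor(
    public id: number,
    public name: string,
    public departmentId: number,
  ) {}
}

// Member Editor Model: a Model Set whose members are the persistence
// entity itself plus presentation-friendly data (e.g. dropdown items).
class EmployeeEditorModel {
  constructor(
    public employee: Employee, // the entity, editable directly
    public departments: Array<{ id: number; name: string }>, // dropdown data
  ) {}
}

const model = new EmployeeEditorModel(
  new Employee(3, "Karim", 2),
  [
    { id: 1, name: "HR" },
    { id: 2, name: "Engineering" },
  ],
);

// The editor view binds to model.employee; on submit, that same object
// is handed straight to the data access layer for persistence.
model.employee.departmentId = 1;
console.log(model.employee.departmentId); // 1
```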

Adapter Editor Model Pattern

The Adapter Editor Model Pattern combines the benefits of the View Model and Editor Model patterns, with the additional overhead of an adapter component that converts an editor model object into an entity object, to facilitate working with the data access layer.

Relevant patterns:

View Model Pattern

Façade Pattern

Adapter Design Pattern
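A minimal sketch of the adapter part of this pattern (names and the fullName-splitting rule are illustrative assumptions): the adapter converts the view-friendly editor model back into the persistence entity, so the data access layer never sees the presentation shape.

```typescript
// Persistence entity (illustrative).
class Employee {
  constructor(
    public id: number,
    public firstName: string,
    public lastName: string,
  ) {}
}

// Editor model: a view-friendly shape (a single fullName field here).
class EmployeeEditorModel {
  constructor(
    public id: number,
    public fullName: string,
  ) {}
}

// The adapter: converts the editor model back to an entity, so the
// data access layer only ever deals with Employee objects.
function toEntity(model: EmployeeEditorModel): Employee {
  const [firstName, ...rest] = model.fullName.trim().split(/\s+/);
  return new Employee(model.id, firstName, rest.join(" "));
}

const entity = toEntity(new EmployeeEditorModel(5, "  Ayesha   Karim "));
console.log(entity.firstName, entity.lastName); // Ayesha Karim
```

The cost of the extra adapter buys a clean separation: the editor model can evolve with the UI while the entity stays aligned with the database schema.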

Hope you found it useful. Cheers!

Subscribe to me on Facebook if you like the post and are interested in my latest thoughts!

Every year Microsoft organizes the software contest Imagine Cup, where young technologists around the world participate based on a theme: to help resolve some of the world’s toughest challenges.

This year, a team from Bangladesh is also going to participate in the Imagine Cup final round, selected from among hundreds of teams in Bangladesh through different phases.

As one of the judges of Imagine Cup 2011 selecting finalists from Bangladesh, I was very excited to see the effort put in by all the participants from Bangladesh.

One of the most exciting things I enjoyed: besides excellence in software engineering, all participants put effort into innovating to do some good for humankind, encouraged by the theme of this competition.

On 10 May 2011, the results were announced. “Team Rapture” won first place and will participate in the final round of Imagine Cup 2011, to be held in the USA in July. Their project focused on mobile phone client software that makes life easier for visually impaired people.

I believe this is just a good start to show the brightest light from Bangladesh to the world.

Thanks to all who participated and congratulations to the winning team!

Ajax-enabled, data-centric applications are getting more popular day by day in the web development space. While these types of web applications provide a rich user experience, building a robust and powerful application quickly is a great challenge for developers. Fortunately, Microsoft has started providing great frameworks, plug-ins and APIs to facilitate this process.

Last week Microsoft announced a new version of the JavaScript API “datajs”, which is intended to help web developers build data-centric AJAX applications quickly, utilizing modern browser features (such as HTML5 local storage and IndexedDB) and protocols (such as OData). Datajs is designed to be small, fast and easy to use.

While datajs includes lots of cool features, in this post I will give a very basic introductory quick start, so that any developer can get an idea of how datajs can be used in different application architectures. Check out last week’s MIX session video to learn about a few of the cool features available in datajs.

Invoking Cross-Domain Service

Netflix has provided an excellent OData API to enable developers to experiment with the data hosted in their infrastructure. The code below uses the Netflix service and the datajs API to show movie data. The cool thing is that you really don’t need to build any web or data service to test your app.

<html>
<head>
    <script type="text/javascript" src="datajs-0.0.3.min.js"></script>
    <script type="text/javascript">
        var url = "http://odata.netflix.com/Catalog/Titles";

        // Allow cross-domain access to the Netflix OData feed via JSONP.
        OData.defaultHttpClient.enableJsonpCallback = true;

        OData.read(
            url,
            function (data, request) {
                // Success: render one <div> per movie title.
                var html = "";
                for (var i = 0; i < data.results.length; i++) {
                    html += "<div>" + data.results[i].Name + "</div>";
                }
                document.getElementById("Movies").innerHTML = html;
            },
            function (err) {
                // Failure: report the error.
                alert("Error occurred");
            });
    </script>
    <title>Movies - dataJS + Netflix</title>
</head>
<body>
    <div id="Movies">
    </div>
</body>
</html>

Invoking WCF Data Service

WCF Data Services is a part of Windows Communication Foundation that exposes data in a REST style. You can use Entity Framework to build a WCF Data Service in literally a few seconds.

Invoking WCF Service

While WCF Data Services enables you to build REST-style services pretty quickly, one problem with this approach is that it is very hard to implement business logic during CRUD operations. One solution is to create an Ajax-enabled WCF service, where you invoke a web method that contains your business logic.

Employee Info Starter Kit is an open source ASP.NET project template intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table, ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to enable a web developer to gain 80% productivity with 20% of the effort, with respect to both learning curve and production.

User Stories

The user end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records, as described below.

Create a new employee record

Read existing employee records

Update an existing employee record

Delete existing employee records

Key Technology Areas

ASP.NET 4.0

Entity Framework 4.0

T-4 Template

Visual Studio 2010

Architectural Objective

There is no universal architecture that can be considered the best for all sorts of applications. Based on requirements, constraints and environment, application architecture differs from one application to another. Trade-off factors are an important consideration when deciding on a particular architectural solution.

Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to enable a web developer to gain 80% productivity with 20% of the effort, with respect to both learning curve and production.

“Productivity” as the architectural objective typically also encompasses other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been used carefully in this project to keep these trade-offs to a minimum.

Why Employee Info Starter Kit is Not a Framework?

Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks may be overkill for a project, as frameworks are typically designed to serve a wide range of uses and are less customizable or editable. On the other hand, implementation patterns can be useful for developers, as they enable adjusting the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate solutions to problems in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds, fully mock-testable thanks to language support for partial methods and the latest mock testing support in Entity Framework.

Why Employee Info Starter Kit is Different than Other Open-source Web Applications?

Software development is one of the most rapidly growing industries around the globe, and its technology is updated very frequently to meet greater challenges over time. There are literally thousands of community web sites, blogs and forums dedicated to supporting the adoption of new technologies. While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real-world scenarios, or too “complex and detailed”, typically focused on achieving a product goal (such as a CMS or e-commerce site) from the "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid approach, “simple and detailed”: a simple domain is used to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can quickly dive deep into the corresponding new technology or concept.

Roadmap

Since its first release in 2008 in the MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since then. Encouraged by this great response, we are strongly committed to continuing to support it for the community with respect to the latest technologies.

Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each focused on a selected application architecture, framework or platform, such as ASP.NET Web Forms, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Services Platform (cloud) and Visual Studio automated tests. See here for the full list of current and future editions.

Ever wanted to let Visual Studio generate logical layers for you that can be easily tested, customized and bound to ASP.NET data controls?

If your answer to the above question is ‘yes’, then you will probably be happy to try out the latest release (v5.0) of Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table, ‘Employee’, the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Forms data controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the ‘Pareto Principle’, or 80-20 rule: it aims to enable a web developer to gain 80% productivity with 20% of the effort, with respect to both learning curve and production.

This project template is titled “Employee Info Starter Kit”. It was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of this starter kit is hosted on CodePlex.

Release Highlights

User End Functional Specification

The user end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records, as described below.

A VSIX file enables us to install Visual Studio extensions (tools, controls, templates, etc.) with a few clicks. I have created a simple example of creating a multi-project template with a wizard using Visual Studio 2010, which generates a VSIX file. Check out the sample on CodePlex here.

I have just released an open source project on CodePlex, which includes a set of T-4 templates that enable you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are ASP.NET Web Forms data-bound control friendly, and are fully unit testable.

In this open source project you will get Entity Framework 4.0-based T-4 templates for the following types of logical layers:

Data Access Layer: Entity Framework 4.0 provides an excellent ORM-based data access layer. It also supports T-4 templates as the built-in code generation strategy in Visual Studio 2010, so we can customize the default structure of the Entity Framework-based data access layer. Here, that default structure has been enhanced to support mock testing of the Entity Framework 4.0 object model.

Business Logic Layer: an ASP.NET Web Forms data-bound control friendly business logic layer, which enables you to build data-bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 quickly, in a few clicks, with great support for mock testing.

.NETTER Code Starter Pack contains a gallery of Visual Studio 2010 solutions leveraging the latest technologies released by Microsoft. Each Visual Studio solution included here is focused on providing a very simple starting point for a cutting-edge development technology or framework, using the well-known Northwind database. The current release of this project includes starter samples for the following technologies:

Employee Info Starter Kit is an ASP.NET-based web application with very simple user requirements: create, read, update and delete (CRUD) the employee info of a company. Based on just one database table, it explores and solves most of the major problems in the web development architectural space.

This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and SQL Server to build robust, scalable, secure and maintainable web applications quickly and easily.

Since its first release, this starter kit has achieved huge popularity in the web developer community, with 150,000+ downloads from the project web site.

Visual Studio 2010 and .NET 4.0 come with lots of exciting features to make software developers’ lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available in both MSDN Code Gallery and CodePlex. Check out the latest version of this starter kit to enjoy the cool features available in Visual Studio 2010 and .NET 4.0.

Running the Starter Kit for the First Time

1. Download the latest release package and extract it to a folder

2. Go to <extraction folder>\Source\Eisk.Solution and open the solution file

3. From the solution explorer, right-click the “Eisk.Web” web site project node, select “Set as Startup Project” and hit Ctrl + F5

4. You will be prompted to install the database; just follow the instructions.

That’s it! You are ready to use this starter kit.

Running the Tests

Employee Info Starter Kit contains an infrastructure for integration and unit testing, utilizing the cool test tools in Visual Studio 2010. Once you complete the steps mentioned above, take a minute to run the test cases.

1. From the solution explorer, go to “Solution Items\e-i-s-k-2010.vsmdi” and open it. You will see the available tests in the Visual Studio test lists. Select all except the “Load Tests” node (since load tests take a while).

2. Click the “Run Checked Tests” control in the upper left corner.

You will see the tests running and, finally, the status of the tests, which indicates the current health of your application across different scenarios.

Yesterday I was informed that I have received the Most Valuable Professional award again for the next year, in the ASP.NET category. This is the third time I have received this award, which is pretty exciting.

Special thanks to a few Microsoft employees, including Technical Fellow Brian Harry, Sr. Program Manager Joe Stagner, Lead Product Manager Dan Fernandez and South Asia MVP Lead Abhishek Kant, who encouraged and supported me in several ways last year.

Thanks to Microsoft for this recognition, which will encourage me to keep up my passion for MS products with even greater effort.

Thanks to everybody who participated in the event Microsoft Day @ Dhaka, held on 20 June 2009 at the IDB Auditorium, Dhaka. It was an excellent gathering of 250+ professionals, especially developers, in Bangladesh.

Besides the knowledge sharing, the event was very successful in creating a social gathering of technical professionals. I found it really nice that I met at least 20 people there whom I had previously known and met only virtually.

The good news for the community is that we will be organizing similar events in the future, and organizing them in a better way based on the community feedback we received.

Microsoft Community in Bangladesh proudly presents Microsoft Day @ Dhaka. This is a special day dedicated to all Microsoft technology professionals and students in Bangladesh. We will be having the best Microsoft community technologists from Bangladesh - Microsoft Most Valuable Professionals (MVPs) delivering sessions at the event.

This technology marathon is a great opportunity to learn from the best and network with each other. Both Microsoft developers and networking professionals would find the event worth attending.

I am really very excited to be a part of this event, both as an organizer and as a speaker. I’ll be delivering a speech there on Visual Studio Team System 2010.

If you have not already registered but don’t want to miss this cool event, register now at the MSDN Bangladesh site.

Last month (May 2009), Microsoft released the first betas of Visual Studio Team System 2010 and Team Foundation Server 2010, two of the most awaited and wanted tools in the developer community. From my point of view these are going to be two of the most historic releases, as lots of really cool features have been added since the last version.

However, as the Beta 1 releases are very young, there are limited resources available on the web and in the community, so I wanted to gather all of the useful resources for these two tools in one place, so that anyone can move from installation to the first “Hello VSTS/TFS” excitement smoothly!

Step 1: What’s New in VSTS 2010 and TFS 2010

Well, you really like the tools you are using now, but you are curious about the cool features that Microsoft brings with VSTS 2010 and TFS 2010. Here we go:

Brian Harry also explains the cool new features of TFS 2010 in this Channel 9 video.

Step 2: Installation Planning

Well, you are convinced that the new features of VSTS 2010 and TFS 2010 are really cool. Now you need to check whether your existing infrastructure is supported.

While the VSTS 2010 installation is pretty simple, the TFS 2010 installation is a much bigger deal. Team System MVP Mike has provided an excellent diagram based on Microsoft Technical Fellow Brian Harry’s post, which shows at a glance which software is required, recommended or not supported when installing TFS 2010.

Step 3: Installer Download

Step 4: Installation Walkthrough

As soon as the required files are downloaded, you are ready to start the installation.

Brian Keller provides an excellent walkthrough explaining the installation process of TFS 2010 Beta 1 in this Channel 9 video. It also covers the installation process (along with all relevant download links and instructions) of the other prerequisites of TFS 2010, including SQL Server 2008 and supporting software such as VSTS 2010.

Step 5: First Walkthrough with VSTS 2010 and TFS 2010

And finally you are done with the installation! Great, and congratulations! Now what? Take a breath and move forward into the exciting world of VSTS 2010 and TFS, to see with your own eyes what has really been implemented by the folks at Microsoft.

Jason Zander, General Manager of Visual Studio in the Developer Division, provides a quick walkthrough, from creating a simple WPF application to testing it using the latest cool features of Visual Studio 2010, in this two-part blog post (part 1 and part 2).

Brian Keller’s video, mentioned in the earlier section, also includes a quick walkthrough of TFS 2010 Beta 1. Really useful for beginners.

Although it covers the earlier version of TFS (2008), I really liked this walkthrough written by Mitch Denny in a two-part article (part 1 and part 2). It is an extremely helpful and quick resource for starting to work with a developer platform as big as Team Foundation Server.

If you wish to know more about TFS and need a single resource that explores most of its powerful features, have a look at this book, hosted at CodePlex and published by the team. It covers TFS 2008, but hopefully an updated version will be published for the latest version of TFS.

The Aggregator Provider Pattern is an extension of the Provider Pattern that enables us to create and utilize multiple instances of classes implementing the same provider interface. In this pattern, an Aggregator class implements the provider interface and contains a collection of instances of classes implementing that same interface.

The caller of the aggregator is simply unaware of how many provider instances the aggregator contains, but all of the provider instances are utilized by a single invocation from the caller.

Comparison with Provider Pattern

The Aggregator Provider Pattern is fully compatible with the existing Provider Pattern, and the power of the Provider Pattern can easily be extended to use multiple providers concurrently, without any modification to the caller classes that were already using a provider.

In short, the Provider Pattern is concerned with utilizing one of the available providers, whereas the Aggregator Provider Pattern is concerned with utilizing all of the available providers at the same time.

Example Demonstration

The Aggregator Provider Pattern is useful when we need a configurable framework to add or remove multiple services used by one caller. For instance, we might have a logger provider framework where log info needs to be saved to text files, saved to a database, sent to email addresses, and so on. An easily configurable framework combined with the Aggregator Provider Pattern enables us to add or remove services without modifying the code that uses the provider.

The example case just described can utilize the Aggregator Provider Pattern by creating the classes as illustrated above. The code snippet below shows a basic usage of this pattern, where the last line performs the log operation based on the list of log providers loaded into the aggregator class dynamically.
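A minimal sketch of this pattern (interface and class names are illustrative assumptions): the aggregator implements the same provider interface as the concrete log providers, so the caller performs a single log call and every registered provider receives it.

```typescript
// The provider interface shared by all loggers.
interface LogProvider {
  log(message: string): void;
}

// Concrete providers (illustrative: in a real framework these might write
// to a text file, a database, or send an email).
class TextFileLogProvider implements LogProvider {
  public entries: string[] = [];
  log(message: string): void {
    this.entries.push(`file: ${message}`);
  }
}

class DatabaseLogProvider implements LogProvider {
  public entries: string[] = [];
  log(message: string): void {
    this.entries.push(`db: ${message}`);
  }
}

// The aggregator implements the SAME interface, so the caller cannot tell
// it apart from a single provider; one call fans out to every registered one.
class AggregateLogProvider implements LogProvider {
  constructor(private providers: LogProvider[]) {}
  log(message: string): void {
    for (const p of this.providers) p.log(message);
  }
}

const file = new TextFileLogProvider();
const db = new DatabaseLogProvider();
const logger: LogProvider = new AggregateLogProvider([file, db]);

// The last line performs the log operation across all loaded providers.
logger.log("application started");
console.log(file.entries.length, db.entries.length); // 1 1
```

Because the aggregator satisfies the provider interface itself, adding or removing providers is purely a configuration concern; the calling code never changes.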

Microsoft’s Visual Studio Team System Test Edition provides a powerful platform for high-volume load testing. It also provides great flexibility to write and utilize external plug-ins for extended functionality.

Email Reporter: VSTS 2008 Load Test Plug-in enables users to send load test reports to one or more pre-configured email addresses automatically, once a VSTS load test is completed. This open-source load test plug-in also supports customization, so you can customize the reported performance data.

Roy Osherove has written an excellent ‘Restaurant’ analogy to explain the difference between unit tests and integration tests. This kind of analogy is very helpful for understanding concepts that are similar to each other but have significant differences.

In the world of testing, Smoke Testing, Sanity Testing and Regression Testing are very similar to each other: each ensures quality by running the test cases of an existing application when a new feature is added, dropped or modified. They are targeted at finding bugs at both the UI and the code level.

We can use a river analogy to better understand the difference between Smoke Testing, Sanity Testing and Regression Testing. Before moving to the analogy, let’s consider the very basic definitions of these three types of testing:

Smoke Testing: testing all (wide) areas related to the new feature, but not deeply. It determines whether we should go for further testing.

Sanity Testing: testing narrow areas related to the new feature, deeply.

Regression Testing: testing all areas related to the new feature, deeply.

If we consider a river, say 1000 feet wide, that contains “dust” in its water (the “bugs” of software), the goals of the corresponding three types of tests would be as follows:

For Smoke Testing: to find out the dusts in all over the surface of the river, which not includes the dusts under water.

For Sanity Testing: to find out the dusts in a specific width (for instance left side 200 feet), which not only includes the dusts on surface, but also includes the dusts under water, till the last depth of the river.

For Regression Testing: to find all the dust on the surface and under the water across the entire river.
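The three scopes can also be sketched as a simple test-selection rule. The suite below is entirely hypothetical (the test names and areas are assumptions for illustration); each test case is tagged with the feature area it covers and whether it is a deep test:

```python
# Hypothetical test-selection sketch for smoke, sanity and regression scopes.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    name: str
    area: str    # feature area the test covers
    deep: bool   # True = deep/thorough test, False = shallow surface check

# Assumed example suite, for illustration only.
SUITE = [
    TestCase("login_loads", "auth", deep=False),
    TestCase("login_edge_cases", "auth", deep=True),
    TestCase("report_loads", "reports", deep=False),
    TestCase("report_totals", "reports", deep=True),
]

def smoke(suite):
    # All areas, shallow only: the whole river surface.
    return [t for t in suite if not t.deep]

def sanity(suite, area):
    # One narrow area, shallow and deep: one section, down to the riverbed.
    return [t for t in suite if t.area == area]

def regression(suite):
    # All areas, all depths: the whole river, surface and below.
    return list(suite)
```

The selection functions mirror the river analogy directly: smoke is wide and shallow, sanity is narrow and deep, regression is wide and deep.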

The previous group site address has changed, as it was hosted at MSN Groups, which was closed in March 2009. All previous group members, and anyone who is not yet a member but is interested in sharing and learning cutting-edge technologies, are requested to join the new site:

Software designers often face a common decision: whether to build automated test (unit test, integration test etc.) infrastructure for data access layer code, especially when that code is written with a code generator tool. A straight ‘yes’ or ‘no’ answer depends on several factors, such as the size and budget of the project. Here are my top 5 reasons to write automated tests (unit tests, integration tests etc.) for generated data access layer code.

1. When the code generation template/logic itself contains bugs

While using code generators, it’s possible that the underlying code/logic of the code generator itself contains bugs! Having automated tests for the generated data access layer code greatly helps to identify these ‘generated’ bugs!

2. When the code is not re-generated after a change to the underlying database object

Well, I swear my code generation engine is perfect (i.e. there is no bug in my generator engine/template logic)! Cool, I have generated the code perfectly. However, I have changed the underlying database and simply forgot to re-generate the code using the code generator! That can easily happen, and it can be detected if we have the corresponding automated tests!

3. When the re-generated code is not correct for the new changes in the underlying database object

Well, I swear my code has been re-generated after I changed the database. Cool. But while the initial version of the generated code may work perfectly with the initial version of the database objects, new changes to the underlying database objects can introduce new bugs into the generated code, which requires updating the code generation logic. So you still need automated tests for your data access layer code!

4. When custom code is added in the generated code

Sometimes custom code needs to be placed in the generated code. In those cases automated tests really help to identify bugs in the custom code in the data access layer and/or in database stored procedures, functions etc.

5. To check correct integration with database objects

Even if the data access layer code is generated/written perfectly, it can still fail at run time when the data access layer code and the database objects are integrated, for instance due to a wrong connection string that points to the wrong version of the database. Having a well-designed automated testing infrastructure really helps in this regard!
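As a minimal sketch of such an integration-style test, the code below creates a known schema, calls the data access code against a real database and asserts on the results. Python and SQLite are used here purely for illustration (a real project in this stack would typically test C# code against SQL Server), and get_employee_by_id is a hypothetical stand-in for a generated data access method:

```python
import sqlite3
import unittest

def get_employee_by_id(conn, employee_id):
    # Hypothetical stand-in for a generated data access layer method.
    row = conn.execute(
        "SELECT Id, Name FROM Employee WHERE Id = ?", (employee_id,)
    ).fetchone()
    return {"Id": row[0], "Name": row[1]} if row else None

class EmployeeDalIntegrationTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test keeps the tests isolated.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE Employee (Id INTEGER PRIMARY KEY, Name TEXT)")
        self.conn.execute("INSERT INTO Employee VALUES (1, 'Ashraf')")

    def test_returns_existing_employee(self):
        self.assertEqual(get_employee_by_id(self.conn, 1)["Name"], "Ashraf")

    def test_returns_none_for_missing_employee(self):
        self.assertIsNone(get_employee_by_id(self.conn, 999))
```

A test like this surfaces all five reasons above as failing assertions, whether the cause is a generator bug, stale generated code, a schema mismatch or a broken connection.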

Microsoft Visual Studio Team System/Test Edition provides an excellent tool for web site load testing. Using this load testing tool, you can monitor and measure site performance, along with system status, under a given load/stress.

Fortunately, VSTS supports a wide range of performance counters, from web page requests per second to the condition of physical disks and memory. Unfortunately, there are so many of them that testers/designers initially get overwhelmed trying to form a clear picture of the performance of the site they built.

The number of counters to be considered by the load tester/designer varies greatly with the type and size of the web application under test. Here are my favorite top 10 performance counters, which I use in every load test regardless of project size. They fall into two primary categories: web site counters and hardware counters.

Web Site Related Performance Counters

Web site related performance counters provide valuable information about the health of the web site under test. These parameters are categorized as Requests, Pages, Tests and Errors.

1. Request - Avg Req/Sec

Desired value range: High

This is the average number of requests per second, which includes both failed and passed requests but not cached requests, because cached requests are not issued to the web server. Please note that each HTTP request, such as for an image, JavaScript, aspx or html file, is counted as a separate request.

2. Request - Avg Req Passed/Sec

Desired value range: High

While “Request - Avg Req/Sec” averages over all passed and failed requests, “Request - Avg Req Passed/Sec” averages over passed requests only. Together, these two also let you determine the average number of failed requests per second.

3. Page - Avg Page Time (Sec)

Desired value range: Low

While a single request refers to a single HTTP element (such as a css file, a JavaScript file, an image, or an aspx or html page), a page is the container of all the requests generated when a web page is requested (for instance via the browser address bar). The “Page - Avg Page Time (Sec)” counter gives the average total time taken to load a page with all of its HTTP elements.

4. Test - Total Test

Desired value range: High

For instance, suppose we have created a web test containing two web pages, where pushing a button on the first page redirects the user to the second page. Although multiple entries will be recorded in the Requests and Pages counters, the whole process is considered a single Test.

This counter gives the total number of tests (including passed and failed tests) during the test period.

5. Scenario - User Load

Desired value range: High

This counter gives the maximum user load applied during the test run. Please note that for a Step Load pattern, where more users are added step by step, this counter records the maximum user load reached.

6. Errors - Errors/Sec

Desired value range: Low

The average number of errors per second, including all types of errors.

Hardware Related Performance Counters

7. Processor - % Processor Time

Desired value range: Low

This is the percentage of processor time being utilized.

8. Memory - Available MBytes

Desired value range: High

This is the amount of memory available, in megabytes.

9. Physical Disk - Current Disk Queue Length

Desired value range: Low

It shows how many read or write requests are waiting to be executed against the disk. For a single disk, it should idle at 2-3 or lower.

10. Network Interface - Output Queue Length

Desired value range: Low

This is the number of packets queued waiting to be sent. A bottleneck needs to be resolved if there is a sustained average of more than two packets in the queue.
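As a rough sketch of how these counters can be post-processed after a run (the counter names and threshold values below are illustrative assumptions, not actual VSTS identifiers), the failed request rate can be derived from the two request counters, and each sampled value can be checked against its desired direction:

```python
# Illustrative post-processing of load test counter samples.
# Counter names and threshold values are assumptions for this sketch.

# "low" means lower is better; "high" means higher is better.
DESIRED = {
    "Avg Req/Sec": "high",
    "Avg Req Passed/Sec": "high",
    "Avg Page Time (Sec)": "low",
    "% Processor Time": "low",
    "Available MBytes": "high",
}

def failed_req_per_sec(avg_req_per_sec, avg_passed_per_sec):
    # Failed requests/sec = all requests/sec minus passed requests/sec.
    return avg_req_per_sec - avg_passed_per_sec

def flag_counters(samples, thresholds):
    """Return the counters whose sampled value violates its threshold."""
    flagged = []
    for name, value in samples.items():
        limit = thresholds[name]
        if DESIRED[name] == "low" and value > limit:
            flagged.append(name)
        elif DESIRED[name] == "high" and value < limit:
            flagged.append(name)
    return flagged
```

For example, 120 total requests/sec against 115 passed requests/sec implies 5 failed requests/sec, and a sampled average page time above the chosen limit would be flagged for investigation.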

Download the SQL script, which selects all of the parameters mentioned above for the latest load test, from here:


As noted earlier, VSTS supports a wide range of performance counters, so many that testers/designers initially get overwhelmed trying to form a clear picture of the performance of the site they built. Today we’ll discuss three counter sets that are closely related to each other, yet each provides meaningful information about the health of the web site under load test.

Definitions

Requests: Requests are the smallest load testing unit with respect to a web page.

It contains the details of individual requests issued during a load test. This includes all HTTP requests, including dependent requests such as images, css and JavaScript files.

Meaning, each referenced content item in a web page, such as images, css files and JavaScript files, generates a separate request, along with the request for the web page itself, which contains the actual textual content (text, tags etc.). A postback generates the same number of additional entries.

Pages: Right after “Requests”, we can consider “Pages” as the next level of load testing counter set, which is defined in MSDN as “Displays a list of pages accessed during a load test run. Some data in this table is available only after a load test has completed.”

For instance, suppose we have a web page with a button, and clicking the button shows a message on that page via a postback; this generates two entries in the Page counter.

One important note: a redirection to a separate page will not add an additional Page entry; however, the corresponding Requests for the redirected page will be counted.

Tests: Contains the details for individual tests run during a load test.

For instance, suppose we have created a web test containing two web pages, where pushing a button on the first page redirects the user to the second page. Although multiple entries will be recorded in the Requests and Pages counters, the whole process is considered a single Test.

Case Study

To get a better idea, let’s consider five different cases and the corresponding entries for the counters we are discussing.

Text only:

We have a web page that contains only textual content; no external content such as images, css or JavaScript files is referenced. It may contain html controls, but no postback (button click etc.) is considered here.

Counter Status: number of requests = number of pages = number of tests

For a single hit by a single user:

- 1 entry in “request” counter will be added
- 1 entry in “page” counter will be added
- 1 entry in “test” counter will be added

Text and Image:

We have a web page, which contains an image, besides the textual contents.

Counter Status: number of pages = number of tests

For a single hit by a single user:

- 2 entries in “request” counter will be added
- 1 entry in “page” counter will be added
- 1 entry in “test” counter will be added

Text with Postback:

We have a web page with only a button control, besides the textual contents. Clicking the button will show the message “Hello World” on the page via a postback.

Counter Status: number of pages = 2 * number of tests

For a single hit by a single user:

- 2 entries in “request” counter will be added
- 2 entries in “page” counter will be added
- 1 entry in “test” counter will be added

Text and Image with Postback:

We have a web page with a button and an image, besides the textual contents. Clicking the button will show the message “Hello World” on the page via a postback.

Counter Status: number of pages = 2 * number of tests

For a single hit by a single user:

- 4 entries in “request” counter will be added
- 2 entries in “page” counter will be added
- 1 entry in “test” counter will be added

From Text only page to another text only page using postback:

We have a page with a button, besides the textual contents. Clicking the button redirects to another “Text only” page.

Counter Status: number of failed requests = number of failed tests (i.e. if one of the requests in a test fails, the entire test is considered a failed test)

For a single hit by a single user:

- 3 entries in “request” counter will be added
- 2 entries in “page” counter will be added
- 1 entry in “test” counter will be added
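The case study above can be summarized in a small model that computes the expected counter entries for a single hit by a single user. This is only an illustrative restatement of the cases above (it assumes every page load fetches the same number of external elements), not VSTS output:

```python
def expected_counters(external_elements=0, postbacks=0, redirect_requests=0):
    """Expected Request/Page/Test counter entries for one hit by one user.

    external_elements: images, css and JavaScript files referenced per page
    postbacks: button clicks posting back (each adds a Page entry)
    redirect_requests: extra requests issued when a postback redirects
    """
    pages = 1 + postbacks  # initial page load plus one Page entry per postback
    # Each page load fetches the page itself plus its external elements;
    # a redirect adds requests for the target page but no new Page entry.
    requests = pages * (1 + external_elements) + redirect_requests
    return {"requests": requests, "pages": pages, "tests": 1}
```

Plugging in the five cases reproduces the counts above: for example, text and image with postback gives 2 pages and 2 * (1 + 1) = 4 requests, while the redirect case gives 2 pages and 2 + 1 = 3 requests.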

Microsoft Visual Studio Team System 2008 provides an excellent synchronization tool to synchronize data and schema between two databases. It saves a lot of developer time spent syncing database objects. Of the two possible synchronization techniques, VSTS 2008 uses the unidirectional technique. Given that, a safety assessment is helpful for developers before starting a synchronization.

The safety assessment considers whether the synchronization would cause any loss of data. Here, a few useful cases with respect to database table schema synchronization are discussed one by one.

New Table in Source Database: Safe. The new table will be added to the destination database.

Removed Table in Source Database: Safe. A table removed from the source database will also be removed from the destination database.

Modify Table in Source Database, Add New Field: Safe. However, the new field needs to have its ‘allow null’ property set to true.

Modify Table in Source Database, Remove Old Field: Safe. The old column, as well as the corresponding data in the destination database, will be removed. However, as the old column has been removed deliberately, the removal of the corresponding data is expected.

Modify Table in Source Database, Modify Old Field: Not safe. Data loss occurs in VSTS, as it performs the change as an alter table add + drop of the field. To retain existing data through the sync process, an external script is useful. Below are some sample SQL statements in this regard:

-- sql script to change the data type of a table field, without data loss
alter table Contact
alter column Comment nvarchar(10) null

-- sql script to rename the field 'Address' of table 'Contact' to 'FullAddress',
-- without data loss
exec sp_rename 'Contact.Address', 'FullAddress', 'COLUMN'

Data or content synchronization is one of the classic problems in the software world. It becomes a critical concern when working on software in production, where production data and schema need to be synchronized with live data and schema. Because of the underlying conceptual complexity, developers are often afraid to use an automated tool, considering the risk of losing data or content. Mostly, in these cases, a manual process has to be involved to ensure safe content synchronization. However, as humans are also error prone, the risk of losing content still exists, and the manual process adds a huge amount of human time and effort. A clear, specific understanding of content synchronization greatly helps to reduce such overheads. Although the synchronization concept exists in disk storage, networking, databases and other areas, today we’ll focus on the database synchronization concept, which will also help in understanding synchronization from a generic point of view.

What is synchronization?

So, what is synchronization? It is a process that ensures the same content exists in two participating entities, each of which may initially hold a different set of content.

For instance, consider a database table named Employee that has two instances in two different databases, with exactly the same schema definition; after a synchronization process, both tables will contain the identical number of data rows and the same column values.

A synchronization process involves two participants, generally termed source and destination, where content is placed from the source entity into the destination entity.

Based on requirements and the characteristics of the data, the synchronization process can be categorized in two ways:

Unidirectional synchronization: replacing the destination entity’s content with that of the source entity.

Bidirectional synchronization: merging the content of both participating entities.

Before looking at both synchronization processes in detail, let’s consider three sample states of the data entities:

a) Initial state: both the source and destination entities contain exactly the same records and column values.

b) Data change state: the state in which data has changed in both the source and destination entities.

c) Synchronized state: the state in which data has been synchronized between the source and destination entities.

In a unidirectional synchronization, all of the content from the source entity is placed into the destination entity, which also implies that any content in the destination entity that doesn’t exist in the source entity will be deleted.

There is a high risk of data loss in unidirectional synchronization, as all content in the destination entity that doesn’t exist in the source entity will be deleted. In the sample above, row items #2 and #5 were deleted by the synchronization process. So database administrators need to be cautious and confirm that this data loss is expected.

In a bidirectional synchronization, the source and destination entities end up with the rows and column values merged from both participating entities.

Thus, in bidirectional synchronization, no data is deleted in either the source or the destination entity during the synchronization process. However, the one data loss risk in bidirectional synchronization arises when the same data row (identified by primary key), having been modified, gets replaced in the destination entity. In the sample above, row #1 was updated in the destination entity from ‘Ashraf’ to ‘Ashraful’. So database administrators need to be cautious and confirm that this data replacement is expected.
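The two strategies can be sketched over simple maps from primary key to row value, purely as an illustration of the row-level behavior described above. The conflict policy assumed here is “source wins”, which is only one possible choice:

```python
def unidirectional_sync(source, destination):
    """Destination becomes an exact copy of the source.

    Rows present only in the destination are dropped, which is the
    data loss risk of unidirectional synchronization.
    """
    return dict(source)

def bidirectional_sync(source, destination):
    """Merge rows from both entities, keyed by primary key.

    No row is deleted; on a key conflict the source value wins,
    which is the replacement risk of bidirectional synchronization.
    """
    merged = dict(destination)
    merged.update(source)  # source values override destination on conflicts
    return merged
```

For example, with source {1: 'Ashraf', 3: 'Karim'} and destination {1: 'Ashraful', 2: 'Rahim'}, unidirectional sync drops destination row #2 entirely, while bidirectional sync keeps row #2 but replaces the destination’s ‘Ashraful’ with the source’s ‘Ashraf’ for row #1.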