In the previous articles (data access approaches, the code-first approach, and Web API) we learned a lot about Entity Framework and its practical implementations. The intent of this article is to explain the concepts of Entity Framework Core. We'll go step by step to explore the topic: the code-first approach using EF Core, data annotations, code-first migrations, and how to seed the database. I'll use Visual Studio 2017 for the tutorial and SQL Server for the database. You can use LocalDB if you do not have SQL Server installed.

Series Info

We'll follow a five-article series to learn the topic of Entity Framework in detail. All the articles will be in tutorial form except the last, where I'll cover the theory, history, and use of Entity Framework. Following are the topics of the series.

Entity Framework Core

EF Core is also an ORM, but it's not an upgrade to Entity Framework 6 and shouldn't be regarded as one. Instead, Entity Framework Core is a lightweight, extensible, and cross-platform version of Entity Framework. It comes with a bunch of new features and improvements over Entity Framework 6, and it's currently at version 2: for ASP.NET Core 1, version 1 should be used, and for ASP.NET Core 2, version 2 is advised. Entity Framework Core does not support all the features of Entity Framework 6. It is recommended for new apps that don't require the heavy feature set Entity Framework 6 offers, or for apps that target .NET Core. It comes with a set of providers that can be used with a variety of databases: Microsoft SQL Server, of course, but also SQLite, PostgreSQL, SQL Server Compact Edition, MySQL, and IBM DB2. There's also an in-memory provider for testing purposes. EF Core can be used both for a code-first approach, which creates the database from the code, and a database-first approach, which is convenient if the database already exists. The following sections show a basic implementation of Entity Framework Core in a console application; of course, it could be used in any type of .NET application that needs to interact with a database, such as MVC, Web API, or Windows. We'll start by adding EF Core to our project. We'll also investigate migrations, a way to migrate between different versions of our underlying data store, and check out how we can seed the database with data from code. Let's dive in by introducing Entity Framework Core.

Where Can EF Core Be Used?

Entity Framework Core runs on .NET Core, and .NET Core runs in a lot of places. It runs inside the full .NET Framework (any version that is 4.5.1 or newer), and .NET Core itself runs on the CoreCLR runtime, which runs natively not only on Windows but also on Mac and Linux. You can also use EF Core on the Universal Windows Platform (UWP) for Windows 10, so it runs on any device or PC that can run Windows 10. That doesn't necessarily mean you should use Entity Framework Core in all of these scenarios, though, and that's a really important point to keep in mind. Entity Framework Core is a brand new set of APIs, so it doesn't have all of the features you might be used to with Entity Framework 6. While some of those features will come in future versions of EF Core, a few will never be part of it. So you may not want to start every single new project with Entity Framework Core; be sure that it has the features you need. If you want to target cross-platform or UWP, you have to use Entity Framework Core, but for full .NET apps you can still use Entity Framework 6. In fact, for ASP.NET Core apps that will definitely stay on Windows (in other words, on a Windows server), you can still build a separate library using full .NET with Entity Framework 6 and have your ASP.NET Core app talk to that Entity Framework 6-based library.

Code First Approach using Entity Framework Core

Data access approaches are the same in Entity Framework 6 and Entity Framework Core, apart from some new features that EF Core provides. There are minor differences in implementation techniques and in the related packages. Let's see EF Core in action, step by step, using the code-first approach. We'll cover more topics like data annotations and migration techniques while walking through the practical implementation.

Adding Entities

As explained, for the code-first approach the application should have entities that will eventually result in the database tables. So, create a console application targeting .NET Framework 4.6.2. Name the application EmployeeManagement and the solution EFCore.

Add two entity classes, one named Department and the other Employee. There will be a one-to-many relationship between department and employee, i.e., a department can have multiple employees and an employee is associated with exactly one department.

Department

Code

namespace EmployeeManagement
{
    public class Department
    {
        public int DepartmentId { get; set; }
        public string DepartmentName { get; set; }
        public string DepartmentDescription { get; set; }
    }
}

Employee

Code

namespace EmployeeManagement
{
    public class Employee
    {
        public int EmployeeId { get; set; }
        public string EmployeeName { get; set; }
        public int DepartmentId { get; set; }
        public virtual Department Department { get; set; }
    }
}

The Department entity has a department name and description property, and the Employee entity has EmployeeId, Name, and DepartmentId properties, plus a navigation property for the department. Add a property Employees to the Department entity denoting the collection of employees, as a department can have many employees. Similarly, a property of type Department is added to the Employee class that returns a single department, not a list.

public virtual ICollection<Employee> Employees { get; set; }

Data Annotations

By giving the class an ID property named Id, this field is automatically regarded as the primary key. DepartmentId, i.e., the class name followed by Id, works as well. If we named it differently, the convention wouldn't apply, but we could apply the [Key] data annotation from System.ComponentModel.DataAnnotations. Personally, I like to apply the [Key] annotation anyway; even if convention would ensure the property is regarded as the primary key, I feel it makes entity classes much more understandable at first glance. But, well, I've got the same gripe with a lot of convention-based approaches, so this is totally up to you. Following is the way in which you can add [Key] for the property you want to make a primary key.
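Applied to the Department entity, that looks roughly like this (a sketch; the original article showed this as a screenshot):

using System.ComponentModel.DataAnnotations;

namespace EmployeeManagement
{
    public class Department
    {
        // Explicit [Key] even though the name DepartmentId would be
        // picked up by convention anyway.
        [Key]
        public int DepartmentId { get; set; }

        public string DepartmentName { get; set; }
        public string DepartmentDescription { get; set; }
    }
}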

Similarly, add the key for EmployeeId i.e. a primary key for Employee entity.

Another thing of importance is the generation of ID primary keys. By convention, primary keys of integer or GUID type are set up to have their values generated on add. In other words, our ID will be an identity column. To explicitly state this, we can use another annotation, [DatabaseGenerated], from the System.ComponentModel.DataAnnotations.Schema namespace. It has three possible values:

None for no generation,

Identity for generation on add,

Computed for generation on add or update.

We need the identity option. A new key will be generated when an Employee is added. How this value is generated depends on the database provider being used. Database providers may automatically set up value generation for some property types, while others will require you to manually set up how the value is generated. In our case, we’ll be using SQL Server. So, we’re good to go. A new integer primary key will be automatically generated without further setup.

We want to signify the relationship between Department and Employee. Looking back at the Department entity, we already defined a collection of Employee, but we also want to navigate through our object graph from an employee to its parent department. So, we need a property to refer to that parent department, and we need to state what the foreign key property will be. Again, there's a convention-based and an explicit approach. By convention, a relationship is created when a navigation property is discovered on a type, and a property is considered a navigation property if the type it points to cannot be mapped as a scalar type by the current database provider. So, if we add a property Department of type Department, it is considered the navigation property, and a relationship is created. Relationships discovered by convention always target the primary key of the principal, and in this case that's the ID of the Department. That will be our foreign key. It's not required to explicitly define this foreign key property on the dependent class (the dependent class being our Employee class), but it is recommended, so we'll add one. That's the convention-based approach. If we do not want to follow the convention, which states that a foreign key is named after the navigation property's class name followed by Id (so DepartmentId in our case), we can again use an annotation: [ForeignKey], from the System.ComponentModel.DataAnnotations.Schema namespace.

The entity classes' properties do not have any data annotations regarding mandatory fields or maximum lengths. If we leave our entity classes like this, our database columns will allow null for fields that should not be null, and we'll get nvarchar(max) columns instead of a specific maximum size. It's best practice to ensure these field restrictions are applied at the lowest possible level, which in our case is the database itself. This ensures the best possible integrity. So, let's apply these attributes. For Employee, EmployeeName should be required with a maximum length of 50. And for the Department entity, we want the same: let's make DepartmentName required with a maximum length of 50.

Code

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EmployeeManagement
{
    public class Employee
    {
        [Key]
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int EmployeeId { get; set; }

        [Required]
        [MaxLength(50)]
        public string EmployeeName { get; set; }

        public int DepartmentId { get; set; }

        [ForeignKey("DepartmentId")]
        public virtual Department Department { get; set; }
    }
}
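The annotated Department entity is not listed in the original article; with the same attributes applied, it would look roughly like this (a sketch):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

namespace EmployeeManagement
{
    public class Department
    {
        [Key]
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int DepartmentId { get; set; }

        [Required]
        [MaxLength(50)]
        public string DepartmentName { get; set; }

        public string DepartmentDescription { get; set; }

        // One-to-many: a department holds a collection of employees.
        public virtual ICollection<Employee> Employees { get; set; }
    }
}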

Adding DB Context

In this section, we'll create a context to interact with our database. That context represents a session with the database, and it can be used to query and save instances of our entities. Our entity classes are just classes; we didn't need any extra dependencies to create them. But DbContext is part of Entity Framework Core, and we'll also need a provider. In our case, we'll use the SQL Server provider, so we can connect to a LocalDB instance.

Let's open NuGet: right-click the project and select the "Manage NuGet Packages…" option.

We want to look for the Microsoft.EntityFrameworkCore.SqlServer package. If we install that, the Entity Framework Core dependencies will be added as well, so we'll have all we need for now.

Select the latest stable version and click install.

Accept the license agreement.

As you can guess by now, no need to do this when you’re on ASP.NET Core 2 and you’ve referenced the Microsoft.AspNetCore.All package. That includes the necessary references for Entity Framework Core.

Now let’s add a new class, EmployeeManagementContext.

Have the class inherit from DbContext, which can be found in the Microsoft.EntityFrameworkCore namespace. Bigger applications often use multiple contexts; for example, were we to add some sort of reporting module to our application, that would fit in a separate context. There's no need for all the entities that map to tables in a database to be in the same context, and multiple contexts can work on the same database. In our case, we only have two entities, so one context is sufficient.

In this context, we now want to define DbSets for our entities. Such a DbSet can be used to query and save instances of its entity type; LINQ queries against a DbSet are translated into queries against the database. Add two properties, one for each entity class, each returning a DbSet of its entity type.

Code

using Microsoft.EntityFrameworkCore;

namespace EmployeeManagement
{
    public class EmployeeManagementContext : DbContext
    {
        public DbSet<Employee> Employees { get; set; }
        public DbSet<Department> Departments { get; set; }
    }
}

How do we tell the context which database to connect to? Through a connection string, which we need to provide to our DbContext. In other words, we need to configure the DbContext, and there are essentially two ways of doing this. Let's open our EmployeeManagementContext again. The first way is overriding the OnConfiguring method of DbContext. It has an optionsBuilder parameter, and that optionsBuilder provides us with a method, UseSqlServer. This tells the DbContext it's being used to connect to a SQL Server database, and it's here that we can provide a connection string. So that's one way.
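A minimal sketch of that first approach (the connection string is an assumption, based on the LocalDB default instance used later in the article):

using Microsoft.EntityFrameworkCore;

namespace EmployeeManagement
{
    public class EmployeeManagementContext : DbContext
    {
        public DbSet<Employee> Employees { get; set; }
        public DbSet<Department> Departments { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            // Hard-codes the connection string inside the context;
            // fine for a demo, less flexible for real applications.
            optionsBuilder.UseSqlServer(
                @"Server=(localdb)\MSSQLLocalDB;Database=EmployeeManagementDB;Trusted_Connection=True;");
        }
    }
}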

But let's look at the other way: via the constructor. So, let's comment OnConfiguring out and have a look at DbContext. DbContext exposes a constructor that accepts DbContextOptions.

So, let's add a constructor that calls this constructor overload. What this allows us to do, and what isn't possible when overriding the OnConfiguring method, is providing options at the moment we create our DbContext. And that's a more logical approach.

To get an instance of this context class, let's add a new class named Initialize with a static method responsible for returning a context instance. The GetContext() method builds the options for our context and calls UseSqlServer on them; UseSqlServer comes from the Microsoft.EntityFrameworkCore namespace, so let's add that using statement. In this method, we pass in the connection string, so let's add a variable to hold it for now. The next logical question is: what does that connection string look like? We're going to use LocalDB, as it is installed automatically together with Visual Studio, but if you have a full SQL Server installation on your network, that will work as well; just make sure you change the connection string accordingly. The name (localdb)\MSSQLLocalDB is the default instance name, but it can be different on your machine depending on what you entered during installation. If you're not sure, have a look at the SQL Server Object Explorer window; if you don't see it on your machine, you can find it underneath the View menu.
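The Initialize class is not listed in the original article; a sketch of what the text describes (the class and method names follow the text, the connection string is the LocalDB default and an assumption):

using Microsoft.EntityFrameworkCore;

namespace EmployeeManagement
{
    public static class Initialize
    {
        // Assumed connection string; adjust it for your SQL Server instance.
        private const string ConnectionString =
            @"Server=(localdb)\MSSQLLocalDB;Database=EmployeeManagementDB;Trusted_Connection=True;";

        public static EmployeeManagementContext GetContext()
        {
            // Build options for the context and hand them to the
            // constructor overload that accepts DbContextOptions.
            var optionsBuilder = new DbContextOptionsBuilder<EmployeeManagementContext>();
            optionsBuilder.UseSqlServer(ConnectionString);
            return new EmployeeManagementContext(optionsBuilder.Options);
        }
    }
}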

Call the GetContext() method to create the instance of the context class in Program.cs. Ideally, when we run the application and an instance of the context class gets created, the database should be ready in LocalDB.

Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EmployeeManagement
{
    class Program
    {
        static void Main(string[] args)
        {
            var context = Initialize.GetContext();
        }
    }
}

This is a code-first approach for a new database, so the database should be generated if it doesn't exist yet. Let's open our EmployeeManagementContext again to make sure that happens. In the constructor, we call EnsureCreated() on the Database object, which is defined on DbContext. If the database already exists, nothing happens, but if it doesn't, this call ensures it is effectively created.
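Putting the constructor and the EnsureCreated() call together, the context now looks roughly like this (a sketch):

using Microsoft.EntityFrameworkCore;

namespace EmployeeManagement
{
    public class EmployeeManagementContext : DbContext
    {
        public EmployeeManagementContext(DbContextOptions<EmployeeManagementContext> options)
            : base(options)
        {
            // Creates the database on first use if it doesn't exist yet;
            // does nothing if it already exists.
            Database.EnsureCreated();
        }

        public DbSet<Employee> Employees { get; set; }
        public DbSet<Department> Departments { get; set; }
    }
}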

Run the application. Once the application runs and Program.cs's Main method executes the GetContext() method, let's open that SQL Server Object Explorer window again and refresh the database list of our MSSQLLocalDB instance. It looks like our EmployeeManagementDB database is there. Let's have a look at the tables: two tables have been created, Departments and Employees, the pluralized names of our entities.

The Departments table has a primary key, DepartmentId, and a DepartmentName with a maximum length of 50 which cannot be null. If we look at the Department entity, we see that the column definitions match the definitions of the fields on our Department entity. Let's have a look at the Employees table. It has a DepartmentId, which is a foreign key, and an EmployeeName field, which is required (thus cannot be null) and has a maximum length of 50. So that matches our Employee entity. The attributes we applied to the properties on our entity classes were thus taken into account. So far, so good. But this is only one way of doing this.

If we work like this, we ensure the database is created by calling Database.EnsureCreated(). But if we do that, we're forgetting something: just as code evolves, a database evolves as well. Let's look into migrations to see how we can improve on what we've done up until now and how we can handle an evolving database.

Code First Migrations in EF Core

Just as our code evolves, so does the database. New tables might be added after a while; existing tables might be dropped or altered. Migrations allow us to provide code to change the database from one version to another. They're an important part of almost all applications, so let's look into them. We are going to use migrations to create the initial database version, version 1, replacing what we did in the previous section with this new and better approach. By doing that, we'll have code in place to start from no database at all, rather than having to provide an already existing one. Then, we'll add another migration to migrate to a new version, version 2. To allow for something like this, we first need to create an initial snapshot of our database. In the Entity Framework Core world, this is achieved with tooling, so we'll have to add these tools first. These tools are essentially just another set of dependencies that add commands we can execute.

Let’s add the package Microsoft.EntityFrameworkCore.Tools.

So, then we’ll have to create that initial snapshot or migration of our database and schema. For that, we have to be able to execute one of the commands we just enabled. And executing those commands, well, you can do that in the package manager console. If you don’t currently see that, you can get it via Tools, NuGet Package Manager, Package Manager Console.

The command we're looking for is Add-Migration. It expects a name for the migration we're going to add; let's name it EmployeeManagementInitialMigration.

It gives us an error that it cannot create an object of type EmployeeManagementContext and asks us to add an implementation of IDesignTimeDbContextFactory. So, let's create a new class named DesignTimeContextFactory implementing IDesignTimeDbContextFactory of our context class, and add the CreateDbContext method, which creates an optionsBuilder and returns a context instance with those options as a parameter.
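A sketch of that factory, following the description in the text (the connection string is assumed to match the one used earlier):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace EmployeeManagement
{
    // Used by the EF Core tools (e.g. Add-Migration) at design time
    // to construct a context instance.
    public class DesignTimeContextFactory
        : IDesignTimeDbContextFactory<EmployeeManagementContext>
    {
        public EmployeeManagementContext CreateDbContext(string[] args)
        {
            var optionsBuilder = new DbContextOptionsBuilder<EmployeeManagementContext>();
            optionsBuilder.UseSqlServer(
                @"Server=(localdb)\MSSQLLocalDB;Database=EmployeeManagementDB;Trusted_Connection=True;");
            return new EmployeeManagementContext(optionsBuilder.Options);
        }
    }
}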

If we look at our solution now, we see there’s a new Migrations folder. And it contains two files. One, a snapshot of our current context model.

Let’s have a look at that.

This contains the current model as we defined through our entities, including the annotations we provided. We can find our Department entity and our Employee entity. And at the end of the file, the relation between Department and Employee.

The second file we see is EmployeeManagementInitialMigration, the name we just gave our migration. It contains the code needed by the migration builder to build this version of the database, both Up (from the current to the new version) and Down (from this version back to the previous one). If we look at Up, we see two CreateTable statements and a CreateIndex statement. It starts from no database at all; this migration contains the code to build the initial database. And if we look at Down, we see what should happen to end up with an empty database: two DropTable statements.
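Heavily abbreviated, the generated migration looks roughly like this (a sketch, not the literal tool output; the exact code depends on the EF Core version):

using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Migrations;

namespace EmployeeManagement.Migrations
{
    public partial class EmployeeManagementInitialMigration : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // Builds the initial schema, starting from no database at all.
            migrationBuilder.CreateTable(
                name: "Departments",
                columns: table => new
                {
                    DepartmentId = table.Column<int>(nullable: false)
                        .Annotation("SqlServer:ValueGenerationStrategy",
                                    SqlServerValueGenerationStrategy.IdentityColumn),
                    DepartmentName = table.Column<string>(maxLength: 50, nullable: false),
                    DepartmentDescription = table.Column<string>(nullable: true)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Departments", x => x.DepartmentId);
                });

            // A similar CreateTable for "Employees" follows, including a
            // foreign key to Departments, plus a CreateIndex on
            // Employees.DepartmentId.
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // Returns to an empty database.
            migrationBuilder.DropTable(name: "Employees");
            migrationBuilder.DropTable(name: "Departments");
        }
    }
}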

If new migrations are added, new files like this will be created, and by executing them in order our database can evolve together with our code. By the way, you don’t need to run the Add-Migration command to generate these files. We could’ve written them by hand. And that might still be feasible for one or two or three tables maybe. But it’s definitely not something you want to do for a larger database. So, these tools are quite helpful. So far, so good.

There's one more thing we have to do: ensure that the migration is effectively applied to our database. There's another command for that, Update-Database. If we execute it, the migrations will be applied to our current database. Rather than doing it from the command line, I'll show you how we can do this from code.

Let's open the context again. What we can do is replace Database.EnsureCreated() with Database.Migrate(). This executes the migrations and, if there's no database yet, creates it. And that's really all we have to do. As said, we're replacing what we did in the previous section because most applications do require migrations, and for those it's a good idea to start from no database at all if you have the chance.
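In the context constructor, the change is a one-liner (a sketch of the constructor body only):

public EmployeeManagementContext(DbContextOptions<EmployeeManagementContext> options)
    : base(options)
{
    // Applies any pending migrations, creating the database first
    // if it doesn't exist yet. Replaces Database.EnsureCreated().
    Database.Migrate();
}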

So, what we want to do is remove the current database first. If we don’t do that, this call will try and apply the migrations, i.e., create the Departments and Employees tables, and that will fail because they already exist. In the SQL Server Object Explorer, right-click the existing database and delete it.

If you do want to start from an existing database, you can follow the same flow we just did but delete the first migration file. Generally speaking, though, that's not a good place to be unless your application must start from an existing database. Let's give this a try.

Run the application.

Let's have a look at our LocalDB and refresh the databases list. Our database was created again, but by working like this instead of how we did it previously, we've ensured our database can migrate from not existing at all to its initial version, and to upcoming versions after that. This is a better approach than what we did in the previous sections. Let's have a look at the database itself.

It now contains an additional table, __EFMigrationsHistory. Let's have a look at what's in there. Entity Framework Core uses this table to keep track of which migrations have already been applied to the database.

This ensures that the Database.Migrate() call, or alternatively the Update-Database command, doesn't try to execute the same migrations over and over again.

Let's continue by adding a new migration. An Employee doesn't seem to have a salary. We missed that on purpose, because it allows us to look into an additional migration. So, let's add that Salary property.

Code

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EmployeeManagement
{
    public class Employee
    {
        [Key]
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int EmployeeId { get; set; }

        [Required]
        [MaxLength(50)]
        public string EmployeeName { get; set; }

        public int DepartmentId { get; set; }

        public int Salary { get; set; }

        [ForeignKey("DepartmentId")]
        public virtual Department Department { get; set; }
    }
}

Then, let's execute the Add-Migration command again so the file gets generated for us. Let's name this migration EmployeeManagementAddSalaryToEmployee.

Our Migrations folder now includes a new file.

And looking at this file, we see that the Up method contains the code to add the Salary column, and the Down method contains the code to drop the column again.
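The Up and Down methods of this second migration are small; roughly (a sketch of the generated shape):

protected override void Up(MigrationBuilder migrationBuilder)
{
    // Adds the new Salary column to the existing Employees table.
    migrationBuilder.AddColumn<int>(
        name: "Salary",
        table: "Employees",
        nullable: false,
        defaultValue: 0);
}

protected override void Down(MigrationBuilder migrationBuilder)
{
    // Reverts to the previous version by dropping the column again.
    migrationBuilder.DropColumn(name: "Salary", table: "Employees");
}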

Let’s run the application again.

Let’s have a quick look at the database. Employees now indeed contain a Salary column.

Let's have a look at that __EFMigrationsHistory table. And, indeed, it also contains the new migration. That's how we can work with migrations to migrate our database from one version to another. But if we look at the data in these tables, there's nothing there yet: no employees, no departments. To have data to start with, we should seed the database. Let's see in the next section how we can do that.

Seeding the Database

We still haven't got data in our database, and it would be nice to have some to test with. That principle, providing your database with data to start with, is called seeding the database. It's often used to provide master data.

We saw how to do that in EF 6. Here we'll discuss another approach to seed the database.

We'll write an extension method for our context. So, let's start with that: add a new class, EmployeeManagementContextExtensions, and make it static.

Let's add one static method to it, EnsureSeedDataForContext. The method has one parameter of type EmployeeManagementContext named context, and it's decorated with the this keyword, which tells the compiler it extends EmployeeManagementContext. The first thing we want to do is check whether the database already contains our sample data. We want to insert departments and their employees, so let's check whether the Departments table is empty; an employee can't exist without a department, so that's sufficient. If it's not empty, we already have data in there, and we don't want to insert additional data. Otherwise, we can start adding data. We first create a Department named "Technology" and give it three employees (Jack, Kim, and Shen). We do not provide IDs, as these are now auto-generated by the database. Then we want to add these to the context. For that, we can use the Add method on the Departments DbSet of our context (or AddRange if there are multiple departments). From this moment on, the entities are tracked by the context, but they aren't inserted yet. For that, we must call SaveChanges on the context, which effectively executes the statements against our database.
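A sketch of the extension method as described (class, method, and employee names come from the text; the description and salary values are illustrative assumptions):

using System.Collections.Generic;
using System.Linq;

namespace EmployeeManagement
{
    public static class EmployeeManagementContextExtensions
    {
        public static void EnsureSeedDataForContext(this EmployeeManagementContext context)
        {
            // Don't seed twice: a non-empty Departments table means
            // sample data is already in place.
            if (context.Departments.Any())
            {
                return;
            }

            // No IDs provided; they are generated by the database on insert.
            var department = new Department
            {
                DepartmentName = "Technology",
                DepartmentDescription = "Technology department",
                Employees = new List<Employee>
                {
                    new Employee { EmployeeName = "Jack", Salary = 1000 },
                    new Employee { EmployeeName = "Kim", Salary = 1200 },
                    new Employee { EmployeeName = "Shen", Salary = 1100 }
                }
            };

            // Tracked by the context from here on, but not yet inserted.
            context.Departments.Add(department);

            // Executes the INSERT statements against the database.
            context.SaveChanges();
        }
    }
}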

And that’s already it for the extension method. Then we need to execute this extension method.

Call the extension method EnsureSeedDataForContext() after you create the instance of the context in Program.cs class. Then run the application.

Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EmployeeManagement
{
    class Program
    {
        static void Main(string[] args)
        {
            var context = Initialize.GetContext();
            context.EnsureSeedDataForContext();
        }
    }
}

Let's have a look at our database. The Departments table contains sample data, and so does the Employees table. And with that, we now know what Entity Framework Core is, its most important concepts, and how to use them.
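To verify from code rather than from SQL Server Object Explorer, a quick LINQ query against the context will do (a sketch; the output depends on the data you seeded):

using System;
using System.Linq;

// Somewhere after seeding, e.g. at the end of Main:
using (var context = Initialize.GetContext())
{
    // Translated by EF Core into a SQL query against the database.
    var employees = context.Employees
        .Where(e => e.Department.DepartmentName == "Technology")
        .OrderBy(e => e.EmployeeName)
        .ToList();

    foreach (var employee in employees)
    {
        Console.WriteLine($"{employee.EmployeeName} (dept {employee.DepartmentId})");
    }
}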

EF Core Summary

Both .NET Core and Entity Framework Core are truly cross-platform, and on two levels: you can run .NET Core apps using EF Core on any of these platforms, but you can also create, debug, and build them on any of them. With the cross-platform tool Visual Studio Code and all of its rich features, plus the fact that it is open source, you've got the ability to do that coding and debugging on any platform. Visual Studio Code only enhances the flexibility we have for working with .NET Core and Entity Framework Core, but EF Core itself is also flexible: you can deploy these apps to Docker and run them anywhere Docker runs. Entity Framework Core is a lightweight, extensible, and cross-platform version of Entity Framework. It's recommended for new applications that don't need the full Entity Framework 6 feature set and for .NET Core applications. We created entity classes first, and used annotations on them to define things like primary and foreign keys, required fields, and so on. Those were then registered as DbSets on the DbContext. That context represents a session with the database and can be used to query and save instances of our entities; from that moment on, we could access our entities through LINQ. Another important concept we looked into was migrations. Just as our code evolves, so does the database: new tables might be added after a while, and existing tables might be dropped or altered. Migrations allow us to provide code to change the database from one version to another. And, lastly, we investigated an option to seed the database, providing it with data to start with. Download the complete free eBook (Diving into Microsoft .NET Entity Framework) on Entity Framework here.

In the last article of this Entity Framework series, we learned about the code-first approach and code-first migrations. In this article, we'll learn how to perform CRUD operations with ASP.NET Web API 2 and Entity Framework. We'll go step by step, in tutorial form, to set up a basic Web API project, and we'll use the code-first approach of Entity Framework to generate the database and perform CRUD operations. If you are new to Entity Framework, follow my previous articles explaining data access approaches with Entity Framework. This article will be less theory and more practice, so that we get to know how to set up a Web API project using Entity Framework and perform CRUD operations. We'll not create a client for this application but rather use Postman, a tool to test REST endpoints.

Roadmap

We'll follow a five-article series to learn the topic of Entity Framework in detail. All the articles are in tutorial form except the last, where I'll cover the theory, history, and use of Entity Framework. Following are the topics of the series.

“HTTP is not just for serving up web pages. HTTP is also a powerful platform for building APIs that expose services and data. HTTP is simple, flexible, and ubiquitous. Almost any platform that you can think of has an HTTP library, so HTTP services can reach a broad range of clients, including browsers, mobile devices, and traditional desktop applications. ASP.NET Web API is a framework for building web APIs on top of the .NET Framework.”

And there is a lot of theory you can read about Web API on MSDN.

Entity Framework

Microsoft Entity Framework is an ORM (object-relational mapper). The definition of ORM from Wikipedia is very straightforward and pretty much self-explanatory:

“Object-relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between incompatible type systems using object-oriented programming languages. This creates, in effect, a ‘virtual object database’ that can be used from within the programming language.”

Being an ORM, Entity Framework is a data access framework provided by Microsoft that helps establish a relationship between the objects in the application and the data structures in the database. It is built on top of traditional ADO.NET, acting as a wrapper and an enhancement over it that provides data access in a more automated way, thereby reducing the developer's effort to struggle with connections, data readers, or data sets. It is an abstraction over all those and is more powerful in what it offers: a developer has more control over what data is needed, in which form, and how much. A developer with no database development background can leverage Entity Framework along with LINQ to write optimized queries for DB operations. The SQL query execution is handled by Entity Framework in the background, which also takes care of the transactions and concurrency issues that may occur. Entity Framework offers three approaches for database access, and we'll use the code-first approach in this tutorial.

I am using Visual Studio 2017 for this tutorial.

1. Open Visual Studio and add a new project.

2. Choose the “Web” option in installed templates and choose “ASP.NET Web Application (.NET Framework)”. Change the name of the solution and project; e.g., the project name could be “StudentManagement” and the solution name could be “WebAPI2WithEF”. Choose .NET Framework 4.6 as the framework. Click OK.

3. When you click OK, you’ll be prompted to choose the type of ASP.NET Web Application. Choose Web API and click OK.

Once you click OK, you’ll have a default basic Web API project with the required NuGet packages, files, and folders, including Views and Controllers, ready to run.

Creating the model

We’ll create a model class that will act as an entity for Student, on which we need to perform database operations. We’ll keep it simple just for the sake of understanding how it works. You could create multiple model classes and can even have relationships between them.

Right-click the Models folder and add a new class. Name the class “Student”.

Make the class public and add two properties to the class, i.e., Id and Name. Id will serve as a primary key to this entity.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace StudentManagement.Models
{
    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }
}

Rebuild the solution.

Adding the API Controller

Let’s add a controller that will contain the database operations to create, read, update, and delete entities of our model class.

Right-click the Controllers folder and choose the option to add a new controller.

In the next prompt, choose the option to create a Web API 2 Controller with actions, using Entity Framework. Click on Add button.

Next, in the Model class option, choose the model we created, i.e., the Student model.

Since we do not have a data context for our application, click the + button next to the Data context class dropdown, provide the name “StudentManagementContext” in the text box shown, and click Add.

The name of the controller should be “StudentsController”. Click Add to finish.

Once you click “Add” to finish, it will try to create a scaffolding template of the controller with all read/write actions using Entity Framework and our model class. This will also add references to Entity Framework and the related NuGet packages, because it is smart enough to understand that we want our controller to perform database operations using Entity Framework, as we specified in the second step of adding the controller. Creating the scaffolding template may take a while.

Once the template is generated, you can see the controller class added to the Controllers folder in the Web API project. This controller class derives from the ApiController class and has all the methods that may be needed for performing database operations on the Student entity. If we check the method names, they are prefixed with the name of the HTTP verb for which the method is intended to perform an action. That is how the incoming request is mapped to the actions. If you do not want your actions to be prefixed with the HTTP verbs, you can decorate your methods with HTTP verb attributes, placing the attribute over the method, or apply attribute routing over the actions. We’ll not discuss those in detail and will stick to this implementation.
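For illustration only, an action whose name does not follow the verb-prefix convention could be mapped explicitly like this. This is not part of the scaffolded code; the action name and route here are made up, and the [Route] attribute requires attribute routing to be enabled via config.MapHttpAttributeRoutes() in WebApiConfig.cs.

```csharp
public class StudentsController : ApiController
{
    // Without a "Get" prefix in the name, the verb must be stated explicitly.
    [HttpGet]
    [Route("api/students/active")]   // hypothetical route for illustration
    public IHttpActionResult ActiveStudents()
    {
        // ... query and return the relevant students ...
        return Ok();
    }
}
```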

Imagine a scenario where you want to add a new model/entity and you do not want the existing database to get deleted or changed when you update the database with the newly added model class. Code first migrations here help you to update the existing database with your newly added model classes and your existing database remains intact with the existing data. So, the data and the schema won’t be created again. It is a code first approach and we’ll see how we can enable this in our application step by step.

Open Package Manager Console and select the default project as your WebAPI project. Type the command Enable-Migrations and press enter.

Once the command is executed, it makes some changes to our solution. As part of adding migrations, it creates a Migrations folder and adds a class file named “Configuration.cs”. This class derives from the DbMigrationsConfiguration class and contains a Seed method whose parameter is the context class that was generated in the Models folder. Seed is an overridden method: the base class declares it as virtual, and a class derived from DbMigrationsConfiguration can override it and add custom functionality. We can utilize the Seed method to provide seed data or master data to the database if we want a few tables to contain some data when the database is created.

DbMigrationsConfiguration class,

Let’s utilize this Seed method and add a few students in the Students model. I am adding three students named Allen, Kim, and Jane.
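Based on the description above, the Seed method in Configuration.cs may look similar to the following sketch; the exact generated file can differ. The AddOrUpdate call keyed on Name keeps the seeding idempotent, so repeated Update-Database runs don’t duplicate rows.

```csharp
using System.Data.Entity.Migrations;
using StudentManagement.Models;

namespace StudentManagement.Migrations
{
    internal sealed class Configuration : DbMigrationsConfiguration<StudentManagementContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(StudentManagementContext context)
        {
            // AddOrUpdate prevents duplicate seed rows on repeated Update-Database runs.
            context.Students.AddOrUpdate(
                s => s.Name,
                new Student { Name = "Allen" },
                new Student { Name = "Kim" },
                new Student { Name = "Jane" });
        }
    }
}
```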

The context parameter is the instance of our context class that was generated while we were adding the controller. We provided the name StudentManagementContext. This class derives from the DbContext class. The context class takes care of the DB schema, and its DbSet properties are essentially the tables that we’ll have when our database is created. It added a Students DbSet property that returns our Student model/entity and maps directly to the table that will be generated in the database.
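For reference, the generated context class is essentially this (a sketch; the scaffolded file may differ slightly, e.g. in the constructor):

```csharp
using System.Data.Entity;

namespace StudentManagement.Models
{
    public class StudentManagementContext : DbContext
    {
        // "name=..." ties this context to the connection string of the same name.
        public StudentManagementContext() : base("name=StudentManagementContext")
        {
        }

        // Each DbSet becomes a table when the database is created.
        public DbSet<Student> Students { get; set; }
    }
}
```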

The next step is to execute the “Add-Migration” command. In the Package Manager Console, execute this command with a parameter of your choice that will be the name of our first migration. I call it “Initial”, so the command is Add-Migration Initial.

Once the command is executed, it adds a new file named “Initial” prefixed with a date-time stamp. It prefixes the date-time stamp so that it can track the various migrations added during development and segregate between them. Open the file, and we see the class named “Initial” deriving from the DbMigration class. This class contains two methods overridden from the DbMigration base class, named Up() and Down(). The Up method is executed to apply the initial configuration to the database and contains the create commands in the migrations fluent API form. This generates the tables and applies all the modifications made to the model. The Down method is the opposite of the Up method. The code in the file is self-explanatory. The Up method here contains the code that creates the Students table and sets Id as its primary key. All this information is derived from the model and its changes.

Initial Migration,

namespace StudentManagement.Migrations
{
    using System;
    using System.Data.Entity.Migrations;

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
            CreateTable(
                "dbo.Students",
                c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(),
                })
                .PrimaryKey(t => t.Id);
        }

        public override void Down()
        {
            DropTable("dbo.Students");
        }
    }
}

Again in the package manager console, run the command “Update-Database”.

This is the final command, and it creates the database and the respective tables from our context and model. It executes the Initial migration that we added and then runs the Seed method from the Configuration class. This command is smart enough to detect which migrations to run; for example, it will not re-run previously executed migrations, and each time all the newly added migrations are taken into account and executed to update the database. It can keep track because the database, when first created, contains an additional table named __MigrationHistory that records all the migrations applied.

Once the command is successfully executed, it creates the database on your local database server and adds the corresponding connection string to the Web.config file. The name of the connection string is the same as the name of our context class, and that’s how the context class and connection string are related.

Exploring the Generated Database

Let’s see what we got in our database when the earlier command got successfully executed.

Since we used the local database, we can open it by opening Server Explorer from the View tab in Visual Studio itself.

Once the Server Explorer is shown, we can find the generated StudentManagementContext database, and it has two tables named Students and __MigrationHistory. The Students table corresponds to our Student model in the code base, and the __MigrationHistory table, as I mentioned earlier, is the auto-generated table that keeps track of the executed migrations.

Open the Students table and see the initial data added to the table with three student names that we provided in the Seed method.

Open the __MigrationHistory table to see the row added for the executed migration, with the ContextKey and MigrationId. The MigrationId is the same as the name of the Initial class file that was generated when we added the migration through the Package Manager Console.

Running the application and setting up Postman

We got our database ready and our application ready. It’s time to run the application. Press F5 to run the application from Visual Studio. Once the application is up, you’ll see the default home page view launched by the HomeController that was automatically present when we created the WebAPI project.

Set up Postman. If you already have the Postman application, launch it directly; if not, search for it and install it. Postman will act as a client to our Web API endpoints and will help us test them.

Once Postman is opened, you can choose various options from it. I choose the first option to create a basic request and save the name of the request as TestAPI. We’ll do all the tests with this environment.

Endpoints and Database operations

We’ll test our endpoints of the API. All the action methods of the StudentsController act as an endpoint thereby following the architectural style of REST.

While consuming an API, an HTTP request is sent and, in return, a response is received along with return data and an HTTP status code. The HTTP status codes are important because they tell the consumer what exactly happened to their request; a wrong HTTP code can confuse the consumer. A consumer should know (via the response) whether its request was taken care of or not, and if the response is not as expected, the status code should tell the consumer where the problem is, whether at the consumer level or at the API level.

GET

While the application is running, our service is up. In Postman, make a GET request for students by invoking the URL http://localhost:58278/api/students. When we click the Send button, we see the data returned from the database for all the students added.

This URL points to the GetStudents() action of our controller; the URL is the outcome of the routing mechanism defined in the WebApiConfig.cs file. In the GetStudents() method, db.Students is returned, which means all the students from the database are returned as an IQueryable.

private StudentManagementContext db = new StudentManagementContext();

// GET: api/Students
public IQueryable<Student> GetStudents()
{
    return db.Students;
}

One can invoke the endpoint to get the details of a single student from the database by passing the student’s ID.

The GetStudent(int id) method takes a student id as a parameter and returns the student from the database with status code 200 along with the student entity. If the student is not found, the method returns a “Not Found” response, i.e., 404.

// GET: api/Students/5
[ResponseType(typeof(Student))]
public IHttpActionResult GetStudent(int id)
{
    Student student = db.Students.Find(id);
    if (student == null)
    {
        return NotFound();
    }

    return Ok(student);
}

POST

We can perform a POST operation to add a new student to the database. To do that, in Postman, select the HTTP verb POST and the URL http://localhost:58278/api/students. When POSTing to create a student, we need to provide the details of the student we want to add. So, provide the details in JSON form; since we only have the Id and Name of the student in the Student entity, we’ll provide those. Providing the Id is not mandatory here: the Id for the new student is generated at the time the student is created in the database, and it doesn’t matter what Id you supply via the request, because Id is an identity column in the database and is incremented by 1 whenever a new entity is added. Provide the JSON for the new student under the Body section of the request.
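For example, a request body for creating a student named John may look like this (the Id is omitted since it is generated by the database):

```json
{
  "Name": "John"
}
```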

Before sending the request, we also need to set the header information for the content type. So, add a new key in the Headers section of the request with the name “Content-Type” and the value “application/json”. There are more keys that you can set in the Headers section based on need. For example, if we were using a secured API, we would need to pass the Authorization header information, like the type of authorization and the token. We are not using a secured API here, so providing the content type information will suffice. Set the header information and click Send to invoke the request.

Once the request is made, it is routed to the PostStudent(Student student) method in the controller, which expects the Student entity as a parameter. It receives the entity that we passed in the Body section of the request. The property names in the JSON of the request should be the same as the property names in our entity. Once the POST method is executed, it creates the student in the database and sends back the id of the newly created student along with the route information to access that student’s details.
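The scaffolded PostStudent action looks similar to the following sketch (your generated code may differ slightly):

```csharp
// POST: api/Students
[ResponseType(typeof(Student))]
public IHttpActionResult PostStudent(Student student)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    db.Students.Add(student);
    db.SaveChanges();

    // 201 Created, with a Location header pointing at GET api/Students/{id}.
    return CreatedAtRoute("DefaultApi", new { id = student.Id }, student);
}
```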

After the POST method is executed, check the Students table in the database and we see a new Student with the name John got created.

PUT

The PUT HTTP verb is basically used to update an existing record in the database, or for any update operation that you need to perform. For example, if we need to update a record in the database, say change the student name “Akhil” to “Akhil Mittal”, we can perform a PUT operation.

Select the HTTP verb as PUT in the request. In the URL, provide the Id of the student that you want to update and now in the body section, provide the details, such as the updated name of the student. In our case “Akhil Mittal”.

Set the Content-type header and send the request.

Once the request is sent, it is routed to the mapped PutStudent() action method of the API controller, which takes an id and a student entity as parameters. The method first checks whether the model passed is valid; if not, it returns HTTP code 400, i.e., Bad Request. If the model is valid, it matches the id passed in the URL with the student id in the model, and if they do not match, it again sends a Bad Request. If the model and id are fine, it changes the state of the entity to Modified so that Entity Framework knows this entity needs to be updated, and then saves changes to commit them to the database.

// PUT: api/Students/5
[ResponseType(typeof(void))]
public IHttpActionResult PutStudent(int id, Student student)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    if (id != student.Id)
    {
        return BadRequest();
    }

    db.Entry(student).State = EntityState.Modified;

    try
    {
        db.SaveChanges();
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!StudentExists(id))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }

    return StatusCode(HttpStatusCode.NoContent);
}

Check the database and the student name with id 4 is now updated to “Akhil Mittal”. Earlier it was “Akhil”.

DELETE

The DELETE verb, as the name suggests, is used to perform delete operations on the database. For example, if we need to delete a record, like the student “John”, we can make use of this HTTP verb.

Set the HTTP verb as DELETE in the request and pass the student id that needs to be deleted in the URL for e.g. 5 to delete “John”. Set the content type and send the request.

The request is automatically routed to the DeleteStudent() action method of the API controller due to the name of the action. The method takes an id parameter for the student to delete. It first performs a get operation for the student with the id passed. If the student is not found, it sends back the NotFound() error, i.e., 404. If the student is found, it removes the student from the Students set and then saves changes to commit the deletion to the database. It returns OK, i.e., a 200 status code, in the response after a successful transaction.

// DELETE: api/Students/5
[ResponseType(typeof(Student))]
public IHttpActionResult DeleteStudent(int id)
{
    Student student = db.Students.Find(id);
    if (student == null)
    {
        return NotFound();
    }

    db.Students.Remove(student);
    db.SaveChanges();

    return Ok(student);
}

Check the database and we see the student with id 5 is deleted.

So, our delete operation also worked fine as expected.

Conclusion

In this article, we learned how to create a basic Web API project in Visual Studio and how to write basic CRUD operations with the help of Entity Framework. The concepts could be utilized in large enterprise-level applications, where you can make use of other Web API features like content negotiation, filtering, attribute routing, exception handling, security, and logging. On the other hand, one can leverage Entity Framework’s features like its other data access approaches, loading strategies, etc. Download the complete free eBook (Diving into Microsoft .NET Entity Framework) on Entity Framework here.

The intent of this article is to explain the code first approach and the code first migrations that Microsoft’s Entity Framework provides. In my last article, I explained the theory behind Entity Framework and the other two approaches, i.e., the database first and model first approaches. We’ll go step by step to explore the code first approach, via which we can access the database and data using Entity Framework in our application. I’ll use Entity Framework version 6.2, .NET Framework 4.6, and Visual Studio 2017 for the tutorial. For the database, we will be using SQL Server. You can make use of a local database if you do not have SQL Server installed.

Series Info

We’ll follow a five-article series to learn the topic of Entity Framework in detail. All the articles will be in tutorial form except the last where I’ll cover the theory, history, and use of EF. Following are the topics of the series.

The code first approach is the recommended approach with EF, especially when you are starting development of an application from scratch. You can define the POCO classes and their relationships in advance and envision how your database structure and data model may look just by defining the structure in code. Entity Framework, in the end, takes all the responsibility for generating a database from your POCO classes and data model and takes care of transactions, history, and migrations.

With all the three approaches you have full control over updating the database and code as per need at any point in time.

Using the code first approach, a developer’s focus is only on the code and not on the database or the data model. The developer can define the classes and their mappings in the code itself, and since Entity Framework now supports inheritance, it is easier to define relationships. Entity Framework takes care of creating or re-creating the database for you, and not only that: while creating the database, you can also provide seed data, i.e., master data that you want your tables to have when the database is created. Using code first, you do not have an .edmx file with relationships and schema, as this approach does not depend upon the Entity Framework designer and its tools, and you have more control over the database since you are the one creating and managing the classes and relationships. A newer concept of code first migrations has come up that makes the code first approach easier to use and follow, but in this section I’ll not use migrations; instead, I’ll use the older method of creating DB context and DB set classes so that you understand what is under the hood. The code first approach can also be used to generate code from an existing database, so basically it offers two methods in which it can be used.

Code First Approach in Action

Create a new console application named EF_CF. This will give you Program.cs and a Main() method inside that.

We’ll create our model classes now, i.e., POCO (Plain Old CLR Object) classes. Let’s say we have to create an application with database operations for employees, where an employee is allocated to some department. So, a department can have multiple employees, and an employee has only one department. We’ll create the first two entities, Employee and Department. Add a new class to the project named Employee and add two simple properties to it, i.e., EmployeeId and EmployeeName.

Similarly, add a new class named Department and add properties DepartmentId, DepartmentName, and DepartmentDescription as shown below.

Since an employee belongs to one department, each employee would have a related department to it, so add a new property named DepartmentId to the Employee class.

Now, it is time to add Entity Framework to our project. Open the Package Manager Console, select the default project as your current console application, and install Entity Framework. We already did this a couple of times before, so installing it won’t be a problem now.

Since we are doing everything from scratch, we need our DbContext class as well. In the model first and database first approaches, the DbContext class was generated for us, but in this case, we need to create it manually. Add a new class named CodeFirstContext to the project, which inherits from the DbContext class of the System.Data.Entity namespace, as shown in the following image. Now add two DbSet properties named Employees and Departments as shown in the following.

The final code may look like this.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_CF
{
    public class CodeFirstContext : DbContext
    {
        public DbSet<Employee> Employees { get; set; }
        public DbSet<Department> Departments { get; set; }
    }
}

Both DbContext and DbSet are our superheroes in creating and dealing with database operations; they keep us well abstracted and provide ease of use.

When we work with DbContext, we are really working with entity sets. DbSet represents a typed entity set that is used to perform create, read, update, and delete operations. We do not create DbSet objects and use them independently; DbSet can only be used with DbContext.
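A quick sketch of how the context and its entity sets are typically used together (the employee name here is made up; disposing the context via using is good practice):

```csharp
using (var context = new CodeFirstContext())
{
    // Query the Departments entity set with LINQ; EF translates this to SQL.
    var techDepartments = context.Departments
        .Where(d => d.DepartmentName == "Technology")
        .ToList();

    // Add a new employee through the Employees entity set.
    context.Employees.Add(new Employee { EmployeeName = "Sam", DepartmentId = 1 });
    context.SaveChanges();
}
```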

Let’s make our implementation a bit more abstract: instead of accessing the DB context directly, let’s abstract it in a class named DataAccessHelper. This class will act as a helper class for all our database operations. So, add a new class named DataAccessHelper to the project.

Create a read-only instance of the DB context class and add a few methods, like FetchEmployees() to get employee details and FetchDepartments() to fetch department details, plus one method each to add an employee and add a department. You can add more methods at will, like update and delete operations. For now, we’ll stick to these four methods.

The code may look like as shown below,

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_CF
{
    public class DataAccessHelper
    {
        readonly CodeFirstContext _dbContext = new CodeFirstContext();

        public List<Employee> FetchEmployees()
        {
            return _dbContext.Employees.ToList();
        }

        public List<Department> FetchDepartments()
        {
            return _dbContext.Departments.ToList();
        }

        public int AddEmployee(Employee employee)
        {
            _dbContext.Employees.Add(employee);
            _dbContext.SaveChanges();
            return employee.EmployeeId;
        }

        public int AddDepartment(Department department)
        {
            _dbContext.Departments.Add(department);
            _dbContext.SaveChanges();
            return department.DepartmentId;
        }
    }
}

Let’s add the concept of navigation properties now. Navigation properties are those properties of a class through which one can access related entities via Entity Framework while fetching data. While fetching Employee data, we may need the details of the related Department, and while fetching Department data, we may need the details of the employees associated with it. Navigation properties are added as virtual properties in the entity. So, in the Employee class, add a virtual property named Departments returning a single Department entity. Similarly, in the Department class, add a virtual property named Employees returning a collection of Employee entities.

Following is the code for the Employee and the Department model,

Employee

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_CF
{
    public class Employee
    {
        public int EmployeeId { get; set; }
        public string EmployeeName { get; set; }
        public int DepartmentId { get; set; }
        public virtual Department Departments { get; set; }
    }
}

Department

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_CF
{
    public class Department
    {
        public int DepartmentId { get; set; }
        public string DepartmentName { get; set; }
        public string DepartmentDescription { get; set; }
        public virtual ICollection<Employee> Employees { get; set; }
    }
}

Let’s write some code to perform database operations with our code. So, in the Main() method of Program.cs class add the following sample test code,
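Based on the description that follows, the sample code may look similar to this sketch; the department name “Technology” matches the database output shown later, while the employee names are assumptions, since the original code screenshot is not reproduced here.

```csharp
static void Main(string[] args)
{
    var department = new Department
    {
        DepartmentName = "Technology",
        Employees = new List<Employee>
        {
            // Employee names are illustrative only.
            new Employee { EmployeeName = "Employee One" },
            new Employee { EmployeeName = "Employee Two" },
            new Employee { EmployeeName = "Employee Three" }
        }
    };

    var dbHelper = new DataAccessHelper();
    dbHelper.AddDepartment(department);

    // Fetch back and print each department with its related employees.
    foreach (var dept in dbHelper.FetchDepartments())
    {
        Console.WriteLine(dept.DepartmentName);
        foreach (var employee in dept.Employees)
        {
            Console.WriteLine("\t" + employee.EmployeeName);
        }
    }

    Console.ReadKey();
}
```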

In the above code of the Main() method, we create an object of the Department class and add a list of employees to its Employees property. We then create an instance of the DataAccessHelper class and invoke the AddDepartment method, passing the department entity object to add the new department.

Just after adding the department, we fetch the newly added department, just to make sure that the department and its related employees were added successfully to the database. So, we fetch the departments and print each department name and its related employees on the console. But how will all this be done? We do not have a database yet.

Not to worry; let’s see how we can get the DB created from our code. First, as we saw earlier, our context class name should be the same as our connection string name, or vice versa. So, add a connection string having the same name as the DB context class in the App.config file as shown below.
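A connection string of this shape would do; the server name and catalog here are assumptions, so adapt them to your environment:

```xml
<connectionStrings>
  <!-- The name must match the context class: CodeFirstContext -->
  <add name="CodeFirstContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=CodeFirstContext;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```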

Job done! Entity Framework will take care of the rest of the work of creating the database. We just run the application, and when the DB context class is first used to perform a DB operation, our database gets created.

Put a breakpoint on the main method and run the application.

As soon as the line containing the AddDepartment call is executed, our database is created.

Go to the database server and see that we got the database created with the same name that we supplied in the connection string. We have the Departments and Employees tables and a table named __MigrationHistory to track the history of code first migrations performed on this database.

We can also see that one department named “Technology”, which we used in the code, was added to the database.

And our Employees table is filled with three rows, i.e., three employees with department id 1, the id of the newly added department. So our code first approach worked as well.

You can proceed to press F5 to run the application, and when the console window appears, we see the details of the department and the added employees in that window; so our fetch operations also work fine.

Though we have covered all the approaches of Entity Framework, I would now like to show code first migrations as well, to help you understand how code first migrations work with Entity Framework. Before that, we need to know why migrations are required and what the benefit of having migrations is while working with the code first approach.

Code First Options

The Entity Framework code first approach provides us with three database initializer options when creating the database.

CreateDatabaseIfNotExists

It is the default option provided as an initializer class for code first approach. This option helps us create a database only if there is no existing database and so any accidental dropping of the database could be avoided via this option.

DropCreateDatabaseIfModelChanges

This initializer class keeps an eye on the underlying model and if the model changes, it drops the existing database and re-creates a new one. It is useful when the application is not live and the development and testing phase is going on.

DropCreateDatabaseAlways

This option as the name says always drops and creates a database whenever you run the application. It is most useful in testing when you are testing with the new set of data every time.
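Setting an initializer is typically done once, before the context is first used. A minimal sketch, with the initializer choice here being illustrative:

```csharp
using System.Data.Entity;

// Drop and re-create the database whenever the model changes —
// convenient during development, dangerous in production.
Database.SetInitializer(new DropCreateDatabaseIfModelChanges<CodeFirstContext>());
```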

Code First Migrations

Imagine a scenario where you want to add a new model/entity and you do not want the existing database to get deleted or changed when you update the database with the newly added model class. Code first migrations here help you to update the existing database with your newly added model classes and your existing database remains intact with the existing data. So, the data and schema won’t be created again.

Code First Migrations in Action

Let’s see how we can work with code first migrations step by step like we did for other approaches.

Add a new console application named EF_CF_Migrations.

Add the Department model with properties DepartmentId, DepartmentName and DepartmentDescription. Add a virtual property as a navigation property called Employees because a department can have multiple employees.

Similarly, add a model class named Employee and add the properties EmployeeId, EmployeeName, and DepartmentId, plus Departments as a navigation property, as an employee may be associated with a department.

Install Entity Framework from the package manager console as shown in the following,

Add a context class deriving from DbContext class and add Employee and Department class as a DbSet property in the class.

Now, execute the command “Enable-Migrations”, but before that, select the default project as your newly added project. The command has to be executed in the Package Manager Console.

Once the command is executed, you’ll get a folder in your application named “Migrations”, and by default a class named Configuration will be added that holds your initial configuration and any other configuration you want to have with the code first approach. You can configure the settings in the constructor of this class. This class derives from DbMigrationsConfiguration, which has a virtual Seed method in the base class. We can override that method in our derived class to add seed data to our database when it gets created.

The Seed method takes the context as a parameter. The context is the instance of our CodeFirstContext class. Now add sample data to the context; for example, I am adding one department named Technology with three sample employees, and one additional employee separately. The class will look similar to the code below.
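A sketch of that Configuration class; the employee names are illustrative, since the original screenshot is not reproduced here, and AddOrUpdate is keyed on name so repeated Update-Database runs don’t duplicate rows.

```csharp
using System.Collections.Generic;
using System.Data.Entity.Migrations;

namespace EF_CF_Migrations
{
    internal sealed class Configuration : DbMigrationsConfiguration<CodeFirstContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(CodeFirstContext context)
        {
            // One department named Technology with three sample employees.
            context.Departments.AddOrUpdate(
                d => d.DepartmentName,
                new Department
                {
                    DepartmentName = "Technology",
                    Employees = new List<Employee>
                    {
                        new Employee { EmployeeName = "Alice" },   // illustrative names
                        new Employee { EmployeeName = "Bob" },
                        new Employee { EmployeeName = "Carol" }
                    }
                });

            // One additional employee added separately to the context.
            context.Employees.AddOrUpdate(
                e => e.EmployeeName,
                new Employee { EmployeeName = "Dave", DepartmentId = 1 });
        }
    }
}
```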

Now execute one more command, “Add-Migration Initial”, in the Package Manager Console. When executed, this command creates one more file under the Migrations folder.

The name of the file comprises a date-time stamp appended with the suffix “_Initial”. This class derives from the DbMigration class, which has a virtual Up() method. The command overrides this method in the generated class and adds statements that create the database tables when our code executes. Similarly, the Down() method is the opposite of the Up() method.

Following is the code that was generated for us when we added the initial migration. The Up method holds the database statements and takes care of foreign key constraints as well while creating the tables.
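The generated code appeared as an image in the original post; the sketch below is representative of what the EF6 scaffolder produces for this model (your timestamped file name and column details may differ slightly):

```csharp
namespace EF_CF_Migrations.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class Initial : DbMigration
    {
        public override void Up()
        {
            CreateTable(
                "dbo.Departments",
                c => new
                    {
                        DepartmentId = c.Int(nullable: false, identity: true),
                        DepartmentName = c.String(),
                    })
                .PrimaryKey(t => t.DepartmentId);

            CreateTable(
                "dbo.Employees",
                c => new
                    {
                        EmployeeId = c.Int(nullable: false, identity: true),
                        EmployeeName = c.String(),
                        DepartmentId = c.Int(nullable: false),
                    })
                .PrimaryKey(t => t.EmployeeId)
                .ForeignKey("dbo.Departments", t => t.DepartmentId, cascadeDelete: true)
                .Index(t => t.DepartmentId);
        }

        public override void Down()
        {
            DropForeignKey("dbo.Employees", "DepartmentId", "dbo.Departments");
            DropIndex("dbo.Employees", new[] { "DepartmentId" });
            DropTable("dbo.Employees");
            DropTable("dbo.Departments");
        }
    }
}
```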

There is still a gap that needs to be bridged before we proceed. We need a connection string with the same name as our context class in our App.config. So open the App.config file of the project and add the connection string with your server and database name details.
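The entry could look like the fragment below. The server and database names here (`.\SQLEXPRESS` and `EmployeeDB`) are placeholders; substitute your own details:

```xml
<connectionStrings>
  <!-- The name must match the context class: CodeFirstContext.
       Server and database names below are placeholders. -->
  <add name="CodeFirstContext"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=EmployeeDB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```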

The last step of migrations is to execute a command that says “Update-Database”.

This command, when executed in the Package Manager Console, applies all the migrations under the Migrations folder and runs the Seed method of the Configuration class.

Now, go to the database to check whether our tables got created with the sample data we provided in the Seed method. In the image below, we see the Departments table holding the sample department that we added to the context in the Seed method.

In the Employees table, we have all the employees associated with that department and one additional employee as well that we added via the seed method.

Let’s add some code to our Program.cs class to check whether the database operations are working. Create an instance of CodeFirstContext, add one more sample department with sample employees, and save the changes.

Following is the code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_CF_Migrations
{
    class Program
    {
        static void Main(string[] args)
        {
            CodeFirstContext context = new CodeFirstContext();
            Department department = new Department
            {
                DepartmentName = "Management",
                Employees = new List<Employee>
                {
                    new Employee() { EmployeeName = "Hui" },
                    new Employee() { EmployeeName = "Dui" },
                    new Employee() { EmployeeName = "Lui" }
                }
            };
            context.Departments.Add(department);
            context.SaveChanges();
        }
    }
}

Run the code by pressing F5, then go to the database to check whether the records for the department and its associated employees got inserted. As the following image shows, selecting the top records from the Departments table returns one additional department, the one we just created.

Similarly, we see the added employees for the newly added department, as shown in the following image.

MigrationHistory Table

This is the most important part of code first migrations. Along with our entity tables, we got an additional table named __MigrationHistory. This table holds the history of all the migrations we add from the code. For example, check the row that was inserted initially: the MigrationId column of the first row contains the same value as the name of the file that was created when we added the migration. The table also stores a hash, and every time we add or modify something in the model and run the Update-Database command, EF checks the history in the database and compares it with the migration files in our Migrations folder. If a file is new, it executes that file only, not the old ones. This helps us track database changes in a more organized way. One can also revert to a particular migration from code by supplying its migration id, which is simply the name of the migration file, the same value stored in the __MigrationHistory table. The following two images show that the column value in the __MigrationHistory table and the migration file name in the code are the same.
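If you want to inspect the history yourself, a quick query (column names per the EF6 history table schema) is:

```sql
-- Lists each applied migration and the EF version that applied it.
SELECT MigrationId, ContextKey, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY MigrationId;
```

To revert to a particular migration from the Package Manager Console, EF6 supports `Update-Database -TargetMigration "Initial"`, where "Initial" is the name of the migration to roll back to.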

If you add a new migration, a new file is created with a unique date-time-stamped name, and when you run Update-Database, a new row is inserted into the __MigrationHistory table with the same value as the name of the newly added file.

Conclusion

In this article, we closely looked at how we can leverage Entity Framework’s code first approach and use it as per our needs. I took a basic console application to explain the concept, but the same technique can be used in any enterprise-level application, whether it uses Web APIs, ASP.NET projects, or MVC projects. We also looked closely into code first migrations and the importance of the migrations history table.

Code First Approach And Migrations In Microsoft .NET Entity Framework

Learning Entity Framework (Day 1): Data Access Approaches of Entity Framework in .NET

https://codeteddy.com/2018/10/03/learning-entity-framework-day-1-data-access-approaches-of-entity-framework-in-net/

Wed, 03 Oct 2018 13:06:27 +0000

Introduction

The intent of this article is to explain the three data access approaches that Microsoft’s Entity Framework provides. There are several good articles on the internet on this topic, but I would like to cover it in a more detailed way, in the form of a tutorial that serves as a primer for someone starting to learn Entity Framework and its approaches. We’ll go step by step to explore each approach via which we can access the database and data using EF in our application. I’ll use Entity Framework version 6.2, .NET 4.6, and Visual Studio 2017 for the tutorial. For the database, we will use SQL Server; you can make use of a local DB if you do not have SQL Server installed. I’ll explain the database first and model first approaches in this article, while the code first approach and code first migrations will be covered in the following article.

Series Info

We’ll follow a five-article series to learn the topic of Entity Framework in detail. All the articles will be in tutorial form except the last one, where I’ll cover the theory, history, and use of Entity Framework. Following are the topics of the series.

Entity Framework

Microsoft Entity Framework is an ORM (object-relational mapper). The Wikipedia definition of ORM is straightforward and pretty much self-explanatory.

“Object-relational mapping (ORM, O/RM, and O/R mapping tool) in computer science is a programming technique for converting data between incompatible type systems using object-oriented programming languages. This creates, in effect, a “virtual object database” that can be used from within the programming language. ”

Being an ORM, Entity Framework is a data access framework provided by Microsoft that helps establish a relationship between the objects in an application and the data structures in the database. It is built on top of traditional ADO.NET: it acts as a wrapper and an enhancement over ADO.NET that provides data access in a more automated way, reducing a developer’s effort to struggle with connections, data readers, or datasets. It is an abstraction over all of those and is more powerful in what it offers. A developer has more control over what data is needed, in which form, and how much. Even a developer with no database development background can leverage Entity Framework along with LINQ to write optimized queries for DB operations. The SQL or DB query execution is handled by Entity Framework in the background, and it takes care of the transactions and concurrency issues that may occur.
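To give a flavor of this, here is a small sketch of what data access looks like with EF plus LINQ, using the StudentDBEntities context and Students entity that we build later in this article:

```csharp
using System;
using System.Linq;

class QueryDemo
{
    static void PrintStudents()
    {
        using (var context = new StudentDBEntities())
        {
            // EF translates this LINQ query into parameterized SQL,
            // opens the connection, executes the query, and materializes
            // the results as objects; no readers or datasets to manage by hand.
            var students = context.Students
                .Where(s => s.Name.StartsWith("M"))
                .OrderBy(s => s.Name)
                .ToList();

            foreach (var s in students)
            {
                Console.WriteLine(s.Name);
            }
        }
    }
}
```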

Entity Framework Approaches

The three approaches that Microsoft Entity Framework provides are as follows:

Model First: generate the database from the data model classes.

Database First: generate data model classes from an existing database.

Code First: define the model classes in code and let EF generate the database from them.

The Model-First approach says that we have a model with all kinds of entities and relations/associations, from which we can generate a database: entities and properties are converted into database tables and columns, and associations/relations are converted into foreign keys.

The Database-First approach says that we already have an existing database and need to access it in our application. We can create an entity data model along with its relationships directly from the database with just a few clicks and start accessing the database from our code. All the entities, i.e., classes, are generated by EF and can be used in the application’s data access layer to participate in DB operation queries.

The Code-First approach is the recommended approach with EF, especially when you are starting the development of an application from scratch. You can define the POCO classes and their relationships in advance and envision how your database structure and data model will look just by defining the structure in code. Entity Framework will then take responsibility for generating a database from your POCO classes and data model, and will take care of transactions, history, and migrations.

With all three approaches, you have full control over updating the database and code as per the need at any point in time.

Model First

Using the Model-First approach, a developer may not need to write any code to generate a database. Entity Framework provides designer tools that help you build a model and then generate a database from it. The tools are drag-and-drop controls that just need inputs such as the entity name, its properties, and how it relates to other entities. The user interface is very easy to use.

When the model is ready, the designer helps you generate DDL commands that can be executed directly from Visual Studio or on your database server to create a database from the model. This creates an EDMX file that stores your conceptual model, storage model, and the mapping between the two. The only drawback I can see is that completely dropping and recreating the database is a challenge with this approach.

Database First

We use the Database-First approach when we already have an existing database and need to access it in our application. Establishing data access for an existing database with Entity Framework generates the context and classes in our solution through which we can access the database. It is the opposite of the Model-First approach: here, a model is created from a database, and we have full control over which tables, stored procedures, functions, or views to include in the model. Your application may be a sub-application that does not need all the tables or objects of a big database, so you have the liberty to control what goes into your application and what does not. Whenever the database schema changes, you can update the entity data model with just one click in the designer, and EF will take care of the mapping and create the necessary classes in your application.

Code First

Using the Code-First approach, a developer’s focus is only on the code, not on the database or data model. The developer defines classes and their mappings in the code itself, and since Entity Framework supports inheritance, it is easy to define relationships. EF takes care of creating or re-creating the database for you; moreover, while creating the database, you can provide seed data, i.e., the master data you want your tables to have when the database is created. Using code first, you do not have an EDMX file with relationships and schema, as the approach does not depend on the Entity Framework designer and its tools, and you have more control over the database since you are the one creating and managing the classes and relationships.

There is a newer concept of code-first migrations which makes the code-first approach easier to use and follow; however, in this article, I’ll not use migrations but the older method of creating DbContext and DbSet classes, so that you understand what is under the hood. The code-first approach can also be used to generate code from an existing database, so it essentially offers two ways in which it can be used.

Entity Framework Approaches in Action

Enough theory; let’s start with the implementation, one approach at a time and step by step. I’ll use a sample project and a console application to connect with the database using EF for all three approaches, with basic sample tables to explain the concepts. The intent here is to learn the concepts and implement them, not to create a large application. Once you learn them, you can use the concepts in any large enterprise-level application or with any big database server that may have thousands of tables. So, we’ll follow the KISS strategy and keep it simple here.

Model First

Create a simple .NET Framework console application by opening Visual Studio and choosing the console application template. We could choose any application type, e.g., an ASP.NET Web Forms, MVC, or Web API web application, or a Windows/WPF application. Give the project and solution any name of your choice.

We’ll have Program.cs (the only class) and App.config in our project.

Code

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_MF {
    class Program {
        static void Main(string[] args) {}
    }
}

Right-click the project and click on “Add a new item”. This opens the window to add a new item; go to Data as shown in the image below and choose ADO.NET Entity Data Model. Give it a name, e.g., EFModel, and click Add.

Once you click Add, you’ll be asked to choose the Model Contents, and this is where you choose which of the three EF approaches you want to use for data access. Choose Empty EF Designer model, because we will use the model first approach and create a model from scratch.

Once you click “Finish”, you see the empty designer window, which is the .edmx file. The name of the .edmx file in the solution is the name we provided while adding the EF designer model. In the toolbox, you see the tools available for creating entities and associations between them.

Drag and drop the Entity tool from the toolbox onto the designer. It creates an empty entity, as shown below, with one property named Id marked as the primary key. Here you can rename the entity and add more scalar properties.

Right-click on the created entity and add a new scalar property as shown in the following. Rename the entity from Entity1 to Student; you can do this by double-clicking the entity name, or by right-clicking and choosing Rename.

Name the scalar property as “Name”.

In an analogous way, add a new entity named Class with a property named ClassName. We are trying to create a student-class relationship where a class can have multiple students. So, choose the Association tool from the toolbox as shown below and drag it from Class to Student; it creates a 1-to-many relationship.

We won’t add more entities; let’s understand the basic functionality with these two. Right-click on the designer and click the “Generate Database from Model…” option to generate the scripts.

Once you click the “Generate Database from Model…” option, you’ll be asked to choose a data connection as shown in the following. You can choose a new connection or an existing one. I’ll choose a new connection, but before that, I’ll create an empty database on my SQL Server so that I do not have to modify the scripts to provide a database name. By default, the generated scripts create tables in the master database if no DB name is specified.

Open your SQL Server and create a new database and name it as per your choice. I am naming it StudentDB as shown in the following,

Coming back to the window where we need to provide the connection details: choose your data source and server name as shown in the following. The server name should be the server where you created the empty database. In the select-database option, expand the dropdown and you should see your database name; select it.

Once you select the database name, a connection string is generated as shown below, and it will be saved in the App.config file with the name EFModelContainer. Since it is an EF-generated connection string, it also carries information about the EF CSDL, MSL, and SSDL that will be present in our application. Click Next to proceed.

The next step is to choose your Entity Framework version. We’ll use 6.x, i.e., it will automatically pick the latest stable EF6 version. Click Next.

As the last step of the wizard, you’ll see the SQL scripts created for us. You can rename the script, but by default it takes the name <model name>.edmx.sql. I’ll leave it as it is and click Finish to proceed.

You’ll see the script located in solution explorer now. Double click to open it and it opens in a window where you have an option to directly execute it.

Before executing the scripts, let’s first install the latest stable version of Entity Framework from the NuGet Package Manager. It is very simple: go to Tools in Visual Studio, then choose NuGet Package Manager -> Package Manager Console as shown in the following,

The NuGet Package Manager Console window opens at the bottom of Visual Studio by default. Choose the project for which the Entity Framework package needs to be installed. At the PM> prompt, type Install-Package EntityFramework and press Enter. We do not specify a version because we want the latest stable package to be downloaded and added to our project as a DLL.

Once done with installing Entity Framework, go back to the script window and on the top left, you see the button to execute the scripts as shown below. Press the button to execute the scripts.

Once you click on Execute, a new window will show up asking for server and database details. Fill in the details specific to your server and database as shown below and click Connect.

Once done, go to your database server and you’ll see the tables created for our StudentDB database. The names of the tables are pluralized, and the Students table has an automatically created foreign key named Class_Id referencing the Classes table. It is magical, isn’t it?

In our solution explorer, we see the .edmx file, the context classes, and the model classes for the Student and Class entities. This was all done in the background by the EF designer. So far we have not written a single line of code, yet got all this code generated by EF.

Open the EFModel.Context.cs class file and note that the generated DbContext class is named EFModelContainer, which is the name of our connection string stored in App.config. The name of the context class has to match the connection string name for EF to know the relationship. You can therefore have multiple DbContext classes in the same solution with different names, pointing to different connection strings. You can explore other ways to relate a DbContext to a connection string in the config file; one is to call the base constructor, passing the name of the connection string as a parameter. But for the sake of understanding, we’ll stick to this implementation.
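As a sketch of that alternative (MyContext and MyConnectionString are hypothetical names, not part of this tutorial’s generated code):

```csharp
using System.Data.Entity;

public class MyContext : DbContext
{
    // "name=MyConnectionString" tells EF to look up a connection string
    // with that name in the config file, regardless of the class name.
    public MyContext() : base("name=MyConnectionString")
    {
    }
}
```

With this in place, the context class name no longer has to match the connection string name.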

Now it’s time to test our implementation and check whether Entity Framework is actually working and helping us with database operations. In Program.cs’s Main method, we’ll write some code that saves a new class to the database. Create a new EFModelContainer object; through it we get the entity collections coming from DbContext. Add a new Class (Class is the entity class the designer generated for us) and name it “Nursery”. We do not have to specify the Id attribute, as EF automatically assigns an Id to the newly added record. The code is shown in the following image; the container.SaveChanges() statement, when executed, adds the new record to the database.

Code

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_MF {
    class Program {
        static void Main(string[] args) {
            EFModelContainer container = new EFModelContainer();
            container.Classes.Add(new Class() {
                ClassName = "Nursery"
            });
            container.SaveChanges();
        }
    }
}

Run the application and let the Main method execute. Once done, go to your database and check the Classes table: you’ll see a new record with the class name “Nursery”, exactly what we provided in code. So, it works. Notice the Id that was auto-generated by Entity Framework.

Now, let’s try something new: add a new class, but this time with students. A class can have many students, and a student belongs to one class; check the generated model classes for Student and Class if you want to explore how the relationship is maintained. So, this time we’ll add a new class and some students to it. Entity Framework should automatically add these students to the Students table and establish the relationship with the class. Following is the simple, self-explanatory code for doing this.

static void Main(string[] args) {
    EFModelContainer container = new EFModelContainer();
    ICollection<Student> students = new List<Student> {
        new Student() { Name = "Mark" },
        new Student() { Name = "Joe" },
        new Student() { Name = "Allen" }
    };
    container.Classes.Add(new Class() {
        ClassName = "KG", Students = students
    });
    container.SaveChanges();
}

In the above code, we create an EFModelContainer object and a list of three students. We then add a new class to the container, just like in the last example, assigning the list to the Students property of the Class object. Last but not least, container.SaveChanges().

Run the code and go to the database. Check the Classes table and see a newly created class row with name “KG” that we supplied from the code.

Now, go to the Students table: the three students we supplied from code got created there, and the Class_Id column has the foreign key reference to the newly created class with Id 2. Amazing!

Like this, you can perform complex queries and other CRUD operations on your database by writing simple code. Try more operations, like editing, deleting, and fetching records, to understand more. Let’s move to our next topic, the database first approach with Entity Framework.
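Before moving on, here is a short sketch of the remaining operations against the same model, using the entity and property names generated above:

```csharp
using System.Linq;

class CrudDemo
{
    static void EditAndDelete()
    {
        using (var container = new EFModelContainer())
        {
            // Read: fetch the class we inserted earlier by name.
            var nursery = container.Classes
                .FirstOrDefault(c => c.ClassName == "Nursery");

            if (nursery != null)
            {
                // Update: change a property; EF tracks the modification.
                nursery.ClassName = "Pre-Nursery";
                container.SaveChanges();

                // Delete: remove the entity and persist the removal.
                container.Classes.Remove(nursery);
                container.SaveChanges();
            }
        }
    }
}
```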

Database First

As we did in the model first approach, create a new console application and name it EF_DBF.

Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace EF_DBF {
    class Program {
        static void Main(string[] args) {}
    }
}

The second step is to add a new ADO.NET Entity Data Model to this project. Name it as per your choice; I named it ModelDBF.

Now, from the choose-model window, choose the “EF Designer from database” option; this helps us create an Entity Framework designer model from the existing database.

Next, choose the connection for the database, i.e., provide the details for your existing database in the wizard. I’ll take the database we created with the model first approach, StudentDB. Once we choose the database, we see the Entity Framework connection string and the name it will be saved under in App.config, StudentDBEntities. You can change it if you want. Click Next.

Choose the EF version. I already explained the meaning of 6.x. We’ll choose the same and click Next.

In this step, you are shown all the database objects (tables, views, or stored procedures) related to the database you selected, and it is your choice to include or exclude the ones you need. Since we do not have views or stored procedures, we’ll only choose our two tables as shown in the following. Since the table names are already pluralized, I do not want to complicate things by appending one more ‘s’ to my entity classes, so I unchecked the option to pluralize entity names. Provide a model namespace, or leave the default, and click Finish.

Once you click Finish, you see the entities created in the EF designer for the database objects we selected. Notice that it is just like what we had when we manually created the entities and generated a database from them. The EF designer also takes care of the foreign key relationship and shows the one-to-many association between the Class and Student entities.

Time to add the Entity Framework package, as we did in the first approach. Make sure you choose the right project, i.e., the current project where you need to add EF. Type the command in the Package Manager Console and press Enter.

Now, when we open the generated ModelDBF.Context.cs, we see the partial class is named StudentDBEntities, i.e., the name of the connection string stored in App.config. I already explained the logic behind this in the previous section.

Time to see some action now. Add the code below to the Main() method of Program.cs.

static void Main(string[] args) {
    StudentDBEntities container = new StudentDBEntities();
    ICollection<Students> students = new List<Students> {
        new Students() { Name = "Harry" },
        new Students() { Name = "Jane" },
        new Students() { Name = "Nick" }
    };
    container.Classes.Add(new Classes() {
        ClassName = "Class 1", Students = students
    });
    container.SaveChanges();
    container.Students.Add(new Students() {
        Class_Id = 1, Name = "Ben"
    });
    container.SaveChanges();
}

In the above code, we create an object of the StudentDBEntities class, i.e., our context class, and a collection of students to be added to our database. To check whether the relationship works, we add a new class named “Class 1”, assign our students collection to its Students property, and call SaveChanges(). Then, to check whether individual student insertion works, we add a new student named “Ben” to the Students model with a class id of 1, i.e., an existing class in the database, and call SaveChanges() again. Put a breakpoint in the Main method and press F5.

When the application runs, it hits the breakpoint. Step through the statements by pressing F10 and stop just before the statement that adds the new student. Since we have already executed the code that saves the newly added class, let’s go to the database and check.

In the database, we see the newly added class has a new row in Classes table with ID 3.

And in the Students table, we see the three students we added from code inserted with class id 3, i.e., the newly created class.

Now get back to Visual Studio and execute the line for adding a new student.

Code

container.Students.Add(new Students() { Class_Id = 1, Name = "Ben" });
container.SaveChanges();

Once done, check the database and we see a new student having the name “Ben” added to our Students table having Class_Id 1 that we assigned in code.

We see our database first approach also works fine. Again, you can try other DB operations and play with the code to explore more. Let’s move on to the code first approach.

Conclusion

In this article, we closely looked at how we can leverage the Entity Framework approaches and use them as per our needs. I took a basic console application to explain the concepts, but they can be used in any enterprise-level application using Web APIs, ASP.NET projects, or MVC projects as well. We briefly discussed the pros and cons of each approach and created small sample applications to see them working. There is a lot more in Entity Framework to explore, e.g., the underlying architecture, how it works, transaction management, loading strategies, etc. I personally find EF one of the best and most powerful ORMs, seamlessly integrating with any .NET application. I purposely skipped the code first approach and code first migrations in this article as they would make it lengthy; in my next article, I’ll explain the code first approach and code first migrations in Entity Framework. Download the complete free eBook (Diving into Microsoft .NET Entity Framework) on Entity Framework here.

In this book, you will learn the basics of Entity Framework and the three data access approaches that Microsoft’s Entity Framework provides. The book covers an introduction to Entity Framework, how its capabilities can be leveraged in .NET development irrespective of the application type, and the key features of Entity Framework.

5 Possibilities For Blockchain In Education Technology
Imagine an infrastructure that is available to everyone, where anyone can securely process transactional code and access data that can never be tampered with. All transactions are stored in the form of a block, which is very hard to manipulate or tamper with once stored on a blockchain. This is the behavior of blockchain: you can store data in a trustworthy way in scenarios where there is no trust. Blockchain, obviously, is not a place where you can store a large amount of data for every transaction. For example, you cannot store a lot of images or documents in bulk, but you can store information that validates whether your documents or images have been tampered with. Most data stored on a blockchain is focused on transactions and states of objects, rather than the actual objects themselves.

The Legacy and Drawbacks

Napster, a file-sharing network launched in 1999, made it very easy to transfer and share audio files. Since it used a centralized directory network, it was called a mixed peer-to-peer network. Users on this infrastructure could retain a copy of any shared file, so a single digital asset could spawn a practically infinite number of copies stored on a global network. The technology hit music retail so hard that Tower Records eventually shut down all of its roughly 89 U.S. stores by 2006. There was a pressing need for a digital infrastructure that was mediator-free and could transfer digital assets freely and reliably. The infrastructure needed to be secure and trustworthy, transfers had to be peer to peer (transferred rather than shared or copied), and there could be no central governing authority.

Blockchain in Education Industry

Blockchain's capabilities are not limited to Bitcoin and financial transactions; it has a wide scope that can be leveraged in the education industry as well. Blockchain can be implemented in educational institutions like universities, in the publishing industry, or across a group of educational institutions. It could be used to make education data, qualifications, and credits available in a more secure and transparent manner. Following are five areas where blockchain technology would be an excellent choice.

1. Education Institutions

Universities and other educational institutions that offer project-based education or training can leverage blockchain technology to generate tamper-proof certificates for their students. An encrypted certificate with two-factor authentication could be kept in the blockchain database, generating a unique decentralized identifier that authorities could use to verify authenticity. This prevents anyone from producing a fake or non-authentic certificate. A few international education institutions have already adopted this methodology to ensure certificate authenticity and security.

2. A global database for qualifications

Beyond just storing a certificate securely, blockchain technology could be used to create a global (international) database, so that individuals no longer have to keep paper degrees and qualification certificates, which are prone to loss or tampering. A blockchain-based platform could store all qualification information for use by various authorities: visa officers checking an individual's credentials for cross-border travel or migration, a company's management verifying the qualifications of current or prospective employees, and education institutions checking the background and authenticity of qualifications presented by an applicant.

3. Learning Platforms

Individuals in corporate offices or students in educational institutions could use a platform where they seek online training or sessions with their peers, bosses, or teachers. An independent learning platform could be established between a trainer and a trainee, where the terms and conditions of the training (projects, tasks, fees, duration) are agreed upon by both parties and stored as smart contracts for transparent and secure execution, thereby eliminating the middleman who would otherwise make money from the arrangement.

4. Corporate Learning

Employees and the companies they work for need a more secure and transparent system for corporate training. Tracking an employee's achievements, or a company's overall capability to deliver large-scale training, is hard to measure. Legacy learning management systems and technologies are outdated now. Blockchain technology can play a key role here, keeping track of all completed training and employees' achievements in a secure and transparent manner; an employee can also leverage this record to showcase it to a new employer.

5. Secure Payments

Students can use this platform to pay their tuition or course fees to educational institutions via cryptocurrency. The Cumbria Institute for Leadership and Sustainability announced an option in 2014 for students to pay their fees in bitcoins. Accepting payments via bitcoins does not require a big infrastructure, and it is a more secure mode of payment. Moreover, international students, or students located globally, can find this an effortless way to pay their fees without depending on third parties who charge fees or conversion rates to students or educational institutions.

Conclusion

In short, blockchain is a kind of tamper-proof data structure that tracks digital assets as they pass from owner to owner. The digital asset could be a digital coin like Bitcoin or any document, and the transaction could be a monetary transaction between anonymous strangers on the internet. It can, for example, give you the ability to store your medical information such that only you, or someone you allow, has access. An asset with a digital fingerprint can also be tracked on a blockchain. Blockchain technology is positively influencing the education system and education technology in an effective way. Blockchain accomplishes this outstanding coup rapidly and internationally with no central authority to govern it.

Imagine an infrastructure that is available to everyone, where anyone can securely process transactional code and access data that can never be tampered with. All transactions are stored in the form of a block, which is very hard to manipulate or tamper with once stored on a blockchain. This is the behavior of blockchain: you can store data in a trustworthy way in scenarios where there is no trust. Blockchain, obviously, is not a place where you can store a large amount of data for every transaction. For example, you cannot store a lot of images or documents in bulk, but you can store information that validates whether your documents or images have been tampered with.

Most data stored on a blockchain is focused on transactions and states of objects, rather than the actual objects themselves.

The Legacy and Drawbacks

Napster, a file-sharing network launched in 1999, made it very easy to transfer and share audio files. Since it used a centralized directory network, it was called a mixed peer-to-peer network. Users on this infrastructure could retain a copy of any shared file, so a single digital asset could spawn a practically infinite number of copies stored on a global network. The technology hit music retail so hard that Tower Records eventually shut down all of its roughly 89 U.S. stores by 2006.

In early 2008, millions of credit card numbers were exposed by a well-known payment system due to a data leak, resulting in fraudulent transactions. These episodes highlight the immediate danger of living in a digital world that relies on middlemen who generate money from transactions and expose people to digital exploitation, fraud, and greed.

There was a pressing need for a digital infrastructure that was mediator-free and could transfer digital assets freely and reliably. The infrastructure needed to be secure and trustworthy, transfers had to be peer to peer (transferred rather than shared or copied), and there could be no central governing authority.

The Bitcoin Blockchain: Genesis

On 3rd January 2009, about 50 digital coins were mined via a new kind of infrastructure that not only mined the coins but also recorded them on a public ledger that was impossible to tamper with, replicated on a peer-to-peer decentralized network on the internet. Those 50 digital coins were named bitcoins and were registered as the genesis block, i.e. the first block of what came to be called the Bitcoin blockchain. One important thing that makes this infrastructure unique is that a cryptocurrency powered by blockchain has no centralized governance and no trust authority, such as banks or other financial institutions, to validate transactions. Because blockchains do not depend on a mediator, digital assets can be transferred globally on the internet with no involvement of a middleman. The terminology "blockchain" is not limited to Bitcoin; in general, it applies to cryptocurrencies. The data on the blockchain is encrypted with modern cryptographic techniques, which makes it more reliable and tamper-proof. It is not prone to any single point of failure, as it is replicated on each node of the peer-to-peer network, which makes the technology more trustworthy and available.

Bitcoin technologies have kept maturing rapidly since their launch, and there is drastic variation in their implementation details, making the study of blockchains vast, complex, and dynamic. Since the genesis, blockchains have become smarter and faster, and they keep being perfected and refined as the technology expands. Some blockchains, like Ethereum, also support smart contracts, i.e. scripting on the blockchain, so you can apply your own customized constraints over the blockchain nodes. Beyond its limitations, blockchain technology has proved to be a new kind of hack-proof, programmable storage technology.

How does it work

So how does it all work? It begins with someone performing a single transaction or a group of transactions. A transaction typically sends data in the form of a contract. Depending on the blockchain implementation you are using, it can also involve cryptocurrency being sent from one account to another. Transactions are sent to a large peer-to-peer network of computers distributed throughout the world. Each computer is called a node, and all nodes have a copy of the existing data. The transaction is first validated and then executed on the basis of pre-shared contracts and scripts; this ensures that all nodes execute using the same set of rules. Once the transaction has executed, the result is added to the blockchain. Since this is done at each node, you would have to compromise every node in the chain in order to compromise the transaction.

When doing transactions on the blockchain, some properties are absolutely necessary. First, all transactions are atomic: the full operation runs, or nothing at all. Say you have a monetary transaction; you want to ensure that both the function that credits one account and the function that debits another execute successfully. If one of them fails, the entire transaction should fail; if not, you might end up either destroying or creating money. Second, transactions run independently of each other, so no two operations can interact or interfere with each other. Third, the blockchain is inspectable: every single method call that comes to the blockchain comes with the actual address of the caller, which opens a unique possibility for securing and auditing solutions on a very wide scale. Finally, blockchain objects are immortal, meaning that all data from an object is permanent.
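The atomicity requirement can be illustrated with a small "all or nothing" transfer. The sketch below simulates a ledger with a plain object and rolls back on any failure; the names (transfer, ledger, snapshot) and accounts are hypothetical.

```javascript
// Sketch: an atomic transfer. If any step fails, the whole operation
// is rolled back, so money is neither created nor destroyed.
function transfer(ledger, from, to, amount) {
  const snapshot = { ...ledger }; // remember the state before the transaction
  try {
    if (!(from in ledger) || ledger[from] < amount) throw new Error("insufficient funds");
    ledger[from] -= amount; // debit leg
    if (!(to in ledger)) throw new Error("unknown account");
    ledger[to] += amount; // credit leg
    return true;
  } catch (e) {
    Object.assign(ledger, snapshot); // roll back to the pre-transaction state
    return false;
  }
}

const ledger = { alice: 100, bob: 50 };
transfer(ledger, "alice", "bob", 30); // both legs succeed
console.log(ledger); // { alice: 70, bob: 80 }

transfer(ledger, "alice", "carol", 10); // credit leg cannot complete
console.log(ledger); // { alice: 70, bob: 80 }, unchanged: the debit was rolled back
```

Without the rollback, the failed second call would have debited alice while crediting no one, which is exactly the "destroying money" scenario described above.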

Since the Bitcoin blockchain was the world's first well-known blockchain technology, "blockchain" is often understood as a synonym of Bitcoin. Modern blockchain technology not only tracks digital currency but also offers the ability to track digital assets, and the working model of these blockchains differs considerably from the operating strategy of Bitcoin's blockchain.

Conclusion

In short, blockchain is a kind of tamper-proof data structure that tracks digital assets as they pass from owner to owner. The digital asset could be a digital coin like Bitcoin or any document, and the transaction could be a monetary transaction between anonymous strangers on the internet. It can, for example, give you the ability to store your medical information such that only you, or someone you allow, has access. An asset with a digital fingerprint can also be tracked on a blockchain. Blockchain ensures that ownership of digital assets is transferred rather than shared or copied, and so it solves the "double-spend" problem. Blockchain accomplishes this outstanding coup rapidly and internationally with no central authority to govern it. This makes commerce more advanced for business, eliminating the middleman and transactional fees.

In my last article on Blockchain Development, we learned about setting up the development environment before we start coding our first smart contract. We installed the necessary packages and tools that will be needed for development. In this article, we'll explore Solidity and develop our first "Hello World" smart contract.

Smart Contracts and Solidity

Developing smart contracts is just writing code in the supported language of the blockchain implementation for which the contract is intended. For example, Ethereum supports a language named "Solidity". The code, once written, needs to be compiled to bytecode. Many compilers are available for this, and a few are available online as well. In this article, we will use Solidity to write the code and the "Truffle" framework that we installed in my last article. Truffle provides an inbuilt compiler to compile the smart contract. After successful compilation, the smart contract needs to be uploaded/deployed and mined. Once it is successfully mined, one can start interacting with it. The interaction can be done via user interfaces built for the contracts or straightaway via HTTP POST operations.

Solidity gives us the flexibility to write code following object-oriented principles. Its style is very much like that of JavaScript, and a developer coming from an object-oriented programming background can quickly grasp Solidity and its syntax. Solidity supports both single and multiple inheritance, and its data types and structs are very much like those of any other object-oriented language. For example, "bool" is the keyword for the Boolean data type, which can hold either "true" or "false". Strings, as usual, are used with double quotes, but have very limited string manipulation capability compared to languages like C#, JavaScript, or Java. Solidity has both signed and unsigned integers ranging from 8 bits to 256 bits. One important type Solidity adds is the address type, used to store the Ethereum address of an account or a smart contract.

Solidity supports the access modifiers public, private, internal, and external. Access modifiers provide abstraction and control over who can access the code and from where. If something needs to be accessible from everywhere, public is used. If it should be accessible only within the contract itself, we use private. Internal means that a contract and its deriving contracts (i.e. child types) can use the methods and properties. External allows methods and properties to be called only from outside the contract; child types cannot access them the way they can with internal.

A contract in Solidity is defined with the contract keyword followed by a name of our choice, very much like writing a class in any programming language. Once defined, a contract can have methods and variables. One important thing to remember is that methods in Solidity can return multiple values.

Truffle and Test RPC

Truffle helps us with the development and testing of contracts. The Truffle toolset also acts as an Ethereum development pipeline. Truffle has an inbuilt compiler to build our solution, supports automated testing, and makes it very easy to deploy the contract via deployment targets that are configurable in the solution. Truffle can also be used in console mode, where we can directly interact with our deployed contracts.

Developing a Smart Contract

Let's start developing our first "hello world" smart contract. We'll start by executing Test RPC, create a project with the help of Truffle, and then create our hello world contract.

Create a new folder in Windows with the name helloworld.

Once done with folder creation, open a new instance of PowerShell in administrator mode as shown below.

Now, using the cd command, go to the folder we just created, i.e. helloworld, as shown in the following image.

Now, we'll use Truffle to kick-start our solution. This will get the framework and a simple project created for us in the helloworld directory, helping us start quickly. So, type the command "truffle init" and press Enter in the PowerShell window.

As you see in the above image, the command runs, downloads the necessary packages, and sets up our development environment. You can go back to the helloworld folder to check the folders and files created for our development environment.

Now you can launch Visual Studio Code and open the folder containing our source code, or you can follow a simpler method by just typing "code ." in the PowerShell console window. This will open the folder in our IDE (Integrated Development Environment), i.e. Visual Studio Code.

Once VS Code is launched, you can see the files and folders loaded in the solution. We see that it has automatically created some test contracts for us. Let's leave them as they are for now and move to the next step.

Time to create a new contract. Right-click on contracts and add a new file.

Add a new file with the name of your choice. In my case, I have given it a name helloworld.sol as shown below.

Let's write some code now. Before we write our method, we set the pragma (the first line of code in the following image) to 0.4.22, which means the code works with any version of Solidity above 0.4.22. This way, we can be confident that our code works exactly as expected with the set version.

Next, define a contract with the name helloWorld followed by opening and closing curly braces, just like we do when defining a class; inside those braces go our methods, properties, and variables. Define a function called PrintHelloWorld() that returns a string, as shown in the following image, and return a hard-coded string saying "Hello World !".

Our code looks like as follows,

pragma solidity ^0.4.22;

contract helloWorld {
    function PrintHelloWorld() public pure returns (string) {
        return "Hello World !";
    }
}

Now, save the file using Ctrl+S. The next step is to deploy the contract.

Deploying Smart Contract

Under the migrations folder, we find the file that tells Truffle which files need to be deployed to the blockchain. In that file, create a deployer for helloworld as well, by creating a variable named HelloWorld that requires the file helloworld.sol, and in module.exports call deployer.deploy(HelloWorld) as shown in the image below.

The code looks like as follows,

var Migrations = artifacts.require("./Migrations.sol");
var HelloWorld = artifacts.require("./helloworld.sol");

module.exports = function(deployer) {
    deployer.deploy(Migrations);
    deployer.deploy(HelloWorld);
};

Now, we need to run the Test RPC server. To do that, open a fresh new instance of Windows PowerShell and leave the already opened instance as it is.

Type the command testrpc as shown in the following image to start the server. As described earlier, as soon as it starts, it creates test accounts and private keys for us, and the very first account is used as the default account. Some features may need admin access to get started, so if a window pops up asking for access, please allow it.

We can see that by default the server runs on port 8545. Now that our server is up and running, leave this window open and go back to the prior PowerShell window we were working in.

It's time to compile our solution. Use the truffle compile command as shown in the following image; it will write the artifacts to the build\contracts folder.

Now we are good to deploy the contract as it is successfully compiled. We can do the deployment via the truffle migrate command as shown below. Type the command and press enter.

The error says that it could not determine the current network, so there is some preparation we need to do to get that working. You can skip steps 5 to 7 if you do not get this error, as the configuration may already be in place in your version. If you do get the error, please follow steps 5 to 7 as described below.

Go back to VS Code where the solution is opened and we see a file there named truffle.js that should contain the server configurations.

By default, everything in that file is commented out and we can define our custom configurations. So, replace the complete text in that file with the code below,

module.exports = {
    networks: {
        development: {
            host: "localhost",
            port: 8545,
            network_id: "*" // Match any network id
        }
    }
};

Now, add a new migrations file under the migrations node as shown in the following image, call it 2_deploy_contracts.js, and move the HelloWorld deploy code from 1_initial_migration.js to this newly created file.

So, the file contains the code as below.

var HelloWorld = artifacts.require("./helloworld.sol");

module.exports = function(deployer) {
    deployer.deploy(HelloWorld);
};

Now go back to the PowerShell window where we got the error and compile the code again by typing truffle compile and pressing Enter. The code will be recompiled with the latest changes we made in VS Code. Then run the truffle migrate command again for deployment.

This time our migrations run fine, as they find the currently running server. See the image above: it successfully deploys our contract, and we can also see the address it was assigned during deployment.

Now, you can go ahead and create a user interface to test the contract we just deployed; alternatively, Truffle also provides a mechanism to interact with contracts directly via the Truffle console. To start Truffle in console mode, run the command truffle console as shown below.

This opens the console listener for Truffle. In this mode, we can directly write JavaScript code against our contract. Let's do it step by step.

Test Smart Contract

In the Truffle console, define a variable named helloW (or any name of your choice) with the var keyword and press Enter. We'll use this variable to access our contract. After pressing Enter we get undefined, which is expected, because the variable has not yet been assigned any content. Communication with contracts is asynchronous, so we write async code to access the contract. We access the deployed contract via helloWorld.deployed(), and then use then to register a callback that maps the deployed contract to the variable we created earlier, i.e. helloW. The command is helloWorld.deployed().then(function(deployed){helloW=deployed}); See the image below with all the commands to understand in detail.
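The console commands above follow the promise-based pattern that Truffle's contract wrappers use. The sketch below reproduces that pattern in plain Node.js with a stubbed contract object, so the shape of the code can be tried outside the console; the stub (helloWorld, its deployed() and call() members) is an assumption standing in for the real generated wrapper.

```javascript
// Sketch of Truffle's promise pattern with a stubbed contract object.
// In the real console, helloWorld.deployed() resolves to the deployed instance.
const helloWorld = {
  deployed: () =>
    Promise.resolve({
      // Stub of the generated wrapper: call() invokes the method read-only
      PrintHelloWorld: { call: () => Promise.resolve("Hello World !") },
    }),
};

let helloW; // same role as the variable defined in the console

helloWorld
  .deployed()
  .then(function (deployed) { helloW = deployed; }) // map the instance to our variable
  .then(() => helloW.PrintHelloWorld.call())        // invoke the contract method
  .then((result) => console.log(result));           // prints "Hello World !"
```

The key point is that every interaction returns a promise, which is why the console session chains then callbacks instead of reading values directly.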

Now, access and test the contract through the variable we mapped it to in the last step. Use the call() method on the contract method to invoke it, i.e. PrintHelloWorld.call(), as shown in the following image.

Here we get the string that we were returning from that method, and so our method and contract are tested, running on an internal private blockchain. It was really fun to implement and test our smart contract.

Conclusion

In this article, we learned about smart contracts, the language used to develop smart contracts on Ethereum, Truffle, and Test RPC. This article serves as a primer for those who are new to blockchain development, Ethereum, and smart contracts, and gives a foundation for how to start and proceed with smart contract development. I hope you enjoyed creating, deploying, and testing your own smart contract.

Here is a list of articles that will help you get good insights into Azure, Blockchain, Ethereum, and smart contract development using Solidity.

In my last article on Blockchain, we learned about setting up an Ethereum blockchain on Microsoft Azure using a consortium leader. It is time for some development now. Before we move on to smart contracts and their development, it's important to set up a development environment as a prerequisite. This article focuses solely on setting up the development environment for smart contract development. In the next article, we'll see what smart contracts are and how we can develop them.

We'll use a list of tools to set up our development environment before we proceed with actual development. I'll use a fresh Windows installation on Microsoft Azure. You can follow the steps below to set up a VM on Azure, or refer to my article on setting up a VM on Azure, before we actually start. We'll install the Chrome browser on the fresh machine, followed by MetaMask, a Chrome plugin that will help us with authentication. Microsoft provides Visual Studio Code, which is absolutely free, and we can use it as the development IDE for our smart contracts. We'll install the Node Package Manager (NPM) and Chocolatey (also a package manager) to get the other tools and packages needed for development. We'll use Git along with NPM and the Windows build tools for building/compiling the code. We'll use an in-memory test server known as Test RPC to test the application, and lastly Truffle. We'll explore more about Truffle when we start development.

Creating VM on Microsoft Azure

Azure

Azure is Microsoft's cloud platform and provides numerous resources for cloud computing. One of those resources is the virtual machine: a fully functional machine with the configuration and operating system of your choice can be created within seconds with just a few clicks, and you can access it remotely from anywhere with your secure credentials and do whatever you want, e.g. host your website, develop applications, or create production or test environments for your software. Let's see step by step how we can achieve that.

Azure Account Setup

If you do not have a paid Azure account, you can leverage the $200 credit Azure gives to new accounts. That means if you are new to Azure and want to play around with its free trial, you'll get $200 in credits that you can use to explore Azure. If you are new to Azure and do not have an account, follow the process below; otherwise, log in directly to your portal.

Open the Azure web site i.e. azure.microsoft.com

Click on Start free to create your free Azure account and get $200 as credits.

Creating an account and claiming the $200 requires your credit/debit card for verification purposes only; no amount will be deducted from your card. You can play around with this credit and account for 30 days. You'll see the signup page, where you fill in all your information and sign up step by step. Once signed up successfully, you'll see the link to the portal as shown below.

Click on the portal link and you will land on the dashboard, ready to use and play around with Azure.

Virtual Machine Setup on Azure

Once on the dashboard, click on the "Virtual machines" link; a right panel will open where you see all your VMs. Since we are creating a new one and have no existing ones, it will be blank. Click on "Create virtual machine".

Once you click on “Create virtual machine”, you’ll get to see all the operating systems and solution templates that Azure provides to create a machine. You can choose to have Windows or Linux operating system based on requirements, but be careful about costs involved.

Since this article is about learning how to create a virtual machine, I'll choose a Windows client machine with minimal configuration; you can choose based on your requirements and needs. So, choose "Windows Client" as shown in the following image.

You'll get a window with the license agreement and legal terms. Read it carefully and press the "Create" button.

After clicking Create, you'll be asked to fill out some basic details as shown below. Give the machine a name of your choice (e.g. I gave it "AKHILPC"). Leave the VM disk type as SSD, or choose as per your need. Provide the username and password you will need when connecting remotely to the machine, and keep them safe and secure. Choose your subscription: if you have a paid one, choose that; otherwise choose the trial subscription you got. You must provide a resource group; you can create a new one or use an existing one. A resource group gives you a logical separation of all your Azure resources. Since I have an existing resource group, I am using that. Choose a location, click the confirmation checkbox, and click OK.

Once you click OK, you'll see the second section, where you choose the size of the machine from a list of RAM sizes, hard disk sizes, SKUs, and zones. Each configuration has a cost associated with it, so choose as per your need and budget. For training/tutorial purposes I am choosing the first one, which has the minimum cost, as shown below.

In the third step, you need to choose certain settings related to availability, storage, and network. Choose the settings at your discretion.

Once you click OK, you'll be shown a summary page with all the configurations you chose, the cost per hour, and the OS. If everything looks good, confirm by clicking the confirmation checkbox as shown in the image below and click the Create button.

Once you click Create, it may take a while to create your VM. It will say "Submitting deployment for…". Wait till the deployment is complete; for me, it took 5 to 9 minutes.

Once deployment is done, you'll see the section for your deployed VM, where you can choose to start, stop, restart, move, or delete it. Clicking on Connect shows two options in the right panel: RDP and SSH. We'll connect via RDP, so click the blue "Download RDP file" button in the right panel to get the file. Alternatively, you can open an RDP connection directly via the mstsc command on your local machine; the IP address is shown in the RDP section.

The downloaded RDP file will be in your local download location. Click on it to configure the RDP connection.

The IP will be filled in automatically; just enter the username and password to connect.

Once the connection is successful, you’ll see the Welcome message while Windows loads and configures itself for first-time use. Please wait for a while.

Once Windows has loaded, you’ll see the desktop, as shown below. Now you can do whatever you want with this machine.

Note that you’ll be charged hourly for the time the machine is running. If you don’t want to use the machine for a while, or want to stop it daily at a defined time, you can do that manually by clicking the Virtual Machines option on your Azure dashboard. You’ll see your VM; select it and click Stop. You can start it again whenever you want. This can save you a lot of money.
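The stop/start routine can also be scripted from the Azure CLI. A sketch, with hypothetical resource names: note that `az vm deallocate` (which is what the portal’s Stop button does) releases the compute and stops the hourly charge, while `az vm stop` only shuts down the OS and keeps the compute allocated.

```shell
# Sketch: stop and start the VM from the CLI. Names are assumptions.
RESOURCE_GROUP="my-resource-group"
VM_NAME="AKHILPC"

# 'deallocate' releases the compute resources, so hourly billing stops.
STOP_CMD="az vm deallocate --resource-group ${RESOURCE_GROUP} --name ${VM_NAME}"
START_CMD="az vm start --resource-group ${RESOURCE_GROUP} --name ${VM_NAME}"
echo "${STOP_CMD}"
echo "${START_CMD}"
```

A pair of commands like these can be dropped into a scheduled task so the machine stops itself at the end of each day.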

Moreover, by clicking on your selected VM, you can monitor its hourly/daily usage statistics, as shown below.

See how easy it was to set up a VM on Azure with just a few simple clicks? Now, you do not have to depend on any physical machine to do your job.

Step by Step Installation of Tools and Packages

As described in the earlier section, let’s go step by step through the installation of the other tools and packages.

Install the Chrome browser – The very first step is to install the Chrome browser, so download it first.

Read Google’s terms of service and install it on the freshly created machine.

Now we have to install MetaMask, a Chrome extension that will help us with authentication and testing. Download MetaMask from GitHub at the following URL: https://github.com/MetaMask. As shown in the image below, click the metamask-extension link shown in the pinned repositories.

I chose version 4.7.4; you may see a newer stable version when you download. Download the Chrome extension by clicking the link metamask-chrome-4.7.4.zip, as shown in the following image.

Unzip the downloaded zip file to any location on windows.

Now, open the Chrome extension settings by typing the following URL in Chrome: chrome://extensions. Switch on developer mode, as shown in the following image, and click LOAD UNPACKED to load the unzipped extension.

Now navigate to the folder where the MetaMask Chrome extension was unzipped and choose that root folder, as shown in the following image.

As soon as you click OK, the extension will be loaded into Chrome and you’ll see the MetaMask home page, which confirms the extension is now part of the Chrome browser. You’ll also see a fox icon at the top right of Chrome for the extension. Now you can go back to the Chrome extensions page we opened earlier and disable the developer mode we enabled while loading the unpacked extension.

Time to download Visual Studio Code, the IDE that will help us with development. VS Code is a widely accepted, lightweight, free IDE provided by Microsoft. Navigate to the URL https://code.visualstudio.com/ and download the latest stable build, as shown in the following image.

Once the executable is downloaded, click it to start the installation. Read and accept the license agreement, then follow the setup instructions to get Visual Studio Code installed on your machine.

Now it’s time to download Node.js, which ships with NPM, the Node Package Manager. Navigate to the URL https://nodejs.org/en and download the latest stable release shown on the page. I am downloading version 8.11.2, as shown in the following image.

Click on the setup once it gets downloaded and follow the steps.

Keep all the settings as default and click on Next buttons until the setup is installed successfully.

Click on Finish once the setup is installed.
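A quick way to confirm the installation worked is to check the versions from a terminal. A minimal sketch; your version numbers will differ from the 8.11.2 used in this article, and the check is guarded in case the terminal was opened before the install finished.

```shell
# Verify node and npm are on PATH after the installer finishes.
# Guarded so it degrades gracefully if node is not (yet) installed.
if command -v node >/dev/null 2>&1; then
  NODE_CHECK="node $(node --version 2>/dev/null), npm $(npm --version 2>/dev/null)"
else
  NODE_CHECK="node not found -- reopen your terminal after installing"
fi
echo "${NODE_CHECK}"
```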

Now, as discussed earlier in this article, we need one more package manager: Chocolatey. Chocolatey can be installed using PowerShell commands, which are available at the URL https://chocolatey.org/. Go to the website and click the “Install Chocolatey Now” button, as shown in the following image.

On the page where you land, scroll down to the subheading “Install with PowerShell.exe” and copy the script shown in the red box in the following image. The script is: Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
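For copy-paste convenience, here is that one-liner with plain quotes (the version on some pages gets mangled by smart quotes). It is stored and printed as a string below so the quoting survives; run the printed line in an elevated PowerShell window.

```shell
# The Chocolatey install one-liner (a PowerShell command). Stored as a
# string so the quotes survive copy-paste; run it in elevated PowerShell.
CHOCO_INSTALL="Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
echo "${CHOCO_INSTALL}"
```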

Now open PowerShell from your Windows Start menu, as shown below, choosing “Run as administrator” to launch an elevated PowerShell prompt.

The prompt will look as shown below.

Now paste the copied script into the prompt and press Enter.

Once the Chocolatey commands and NuGet packages are in place, we’ll install the GitHub client using Chocolatey’s install command; this also pulls in other useful tools that we’ll use during development.
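The install itself is a single Chocolatey command. A sketch, assuming the `git` package is what’s meant by “GitHub client” (the article doesn’t name the exact package); run the printed command in an elevated prompt.

```shell
# Sketch: install Git via Chocolatey. The package name 'git' is an
# assumption; run the printed command in an elevated Windows prompt.
GIT_INSTALL_CMD="choco install git -y"
echo "${GIT_INSTALL_CMD}"
```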

Now we are ready to start development: our tools and software are in place, and our development machine on Azure is up and running. Yes, it was a lot of work to set up the environment and tools, but don’t worry, it will be fun to develop with and learn these new tools and technologies.

In this article, we learned how to set up the development environment. We came across package managers like NPM and Chocolatey that helped us install more tools, and we installed VS Code and MetaMask, which provide our coding and testing environment. I have deliberately provided all the scripts in this article as text, not only as images, so that it is easy for you as a reader/developer to copy and execute them. In my next article, we’ll focus on Smart Contracts and development. Cheers!

This article focuses on creating a VM, i.e., a Virtual Machine, on Microsoft Azure. Microsoft Azure provides many cloud services, and virtual machines are one of them. One can create a virtual machine, i.e., a remote desktop machine, on the cloud and access it with the provided credentials. Azure gives us the flexibility to choose the type of machine (client or server), the operating system, and the machine configuration of one’s choice. So, you can create a small machine or a heavily configured one based on your requirements. Each configuration and component chosen has a price that depends on how long and how much the VM is used. In this article, I’ll set up a VM on Microsoft Azure step by step, in tutorial form.

Azure

Azure is a cloud platform from Microsoft that provides numerous resources for cloud computing. One of these resources is the virtual machine: a fully functional machine with the operating system and configuration of your choice. It can be created within seconds with just a few clicks, and you can access it remotely from anywhere with your secure credentials and do whatever you want with it, e.g., hosting your website, developing applications, or creating production or test environments for your software. Let’s see step by step how we can achieve that.

Azure Account Setup

If you do not have a paid Azure account, you can leverage Azure’s new-account benefit of $200 in credits. That means if you are new to Azure and want to play around with its free trial, you’ll get $200 of credits to explore Azure. If you do not have an account, follow the process below; otherwise, log in to your portal directly.

Open the Azure web site, i.e., azure.microsoft.com.

Click on “Start free” to create your free Azure account and get $200 worth of credits.

Creating an account and claiming the $200 requires your credit/debit card for verification purposes only; no amount will be deducted from your card. You can play around with this credit and account for 30 days. You’ll see the signup page where you fill in all your information and sign up step by step. Once signed up successfully, you’ll see the link to the portal, as shown below.

Click on the portal link and you’ll land on the dashboard, ready to use and play around with Azure.

Virtual Machine Setup on Azure

Once on the dashboard, click the “Virtual Machines” link, and a right panel will open where you see all your VMs. Since we are creating a new one and have no existing ones, it will be blank. Click “Create Virtual Machine”.

Once you click “Create virtual machine”, you’ll see all the operating systems and solution templates that Azure provides for creating a machine. You can choose a Windows or Linux operating system based on your requirements, but be careful about the costs involved.

Since this article is about learning how to create a virtual machine, I’ll choose a Windows client machine with minimal configuration; you can choose based on your requirements and needs. So, choose “Windows Client”, as shown in the following image.

You’ll get the window with the license agreement and legal terms. Read it carefully and press the “Create” button.


Conclusion

In this article, we learned how to set up a Virtual Machine (VM) on Microsoft Azure. With a few simple steps, we can set up a machine of our choice based on our needs and requirements and, moreover, have full control over that machine and its cost. So, no more dependency on your IT or networking teams. Cheers!