
If you are familiar with ASP.NET Core middleware[1], you may have noticed that in our previous post we already had a middleware. In the initial blank app, that middleware was responsible for returning a Hello World response. Later we replaced it with our custom code so that it could respond with the result of a static GraphQL query.

Middleware is software that's assembled into an application pipeline to handle requests and responses. Each component:

Chooses whether to pass the request to the next component in the pipeline.

Can perform work before and after the next component in the pipeline is invoked.

Practically, a middleware is a delegate, or more precisely, a request delegate. As the name suggests, it handles an incoming request and decides whether or not to delegate it to the next middleware in the pipeline. In our case, we configured a request delegate using the Run() extension method of IApplicationBuilder. Among the three extension methods (Use, Run, Map), Run() terminates the pipeline, so the request is not passed on for further processing.
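As a reminder of what such a terminal middleware looks like, here is a minimal sketch of a Run() registration inside Startup.Configure (the response text is illustrative):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.Run(async context =>
    {
        // Run() registers a terminal delegate: nothing after it executes.
        await context.Response.WriteAsync("Hello World!");
    });
}
```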

The code inside our middleware was very simple and could only respond with the result of a hardcoded static query. However, in a real-world scenario the query should be dynamic, hence we must read it from the incoming request.

Every request delegate accepts an HttpContext. If the query is posted over an HTTP request, you can easily read the request body using the following code,
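A minimal sketch of reading the body inside the request delegate (assuming `context` is the HttpContext the delegate receives):

```csharp
// Inside the request delegate: read the raw body of the incoming POST request.
string body;
using (var reader = new StreamReader(context.Request.Body))
{
    body = await reader.ReadToEndAsync();
}
```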

A request body can contain a whole lot of fields, but let's say the passed-in query comes within a field named query. We can then parse the JSON string content of the body into a complex type that contains a Query property,

The complex type looks as follows,

public class GraphQLRequest
{
    public string Query { get; set; }
}

The next thing to do is deserialize the body into an instance of the GraphQLRequest type using Json.NET's JsonConvert.DeserializeObject, and replace the previous hardcoded query with request.Query,
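Put together, the middleware body could look roughly like this sketch (assuming `body` holds the raw request body read earlier and `schema` is the Schema instance built around HelloWorldQuery; the exact graphql-dotnet execution call may vary between package versions):

```csharp
// Deserialize the posted JSON into our GraphQLRequest type.
var request = JsonConvert.DeserializeObject<GraphQLRequest>(body);

var result = await new DocumentExecuter().ExecuteAsync(options =>
{
    options.Schema = schema;
    options.Query = request.Query; // dynamic query instead of the hardcoded one
});
```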

Now make a POST request containing the query field using any REST client (Postman/Insomnia),

We are pretty much done with this post. But you can see that we are newing up a lot of objects, like new DocumentExecuter(), new Schema(), new DocumentWriter(), etc. In the next post, we will see how we can use the built-in dependency injection system of ASP.NET Core and make them injectable.

Repository Link (Branch)

Important Links


Tired of REST? Let's talk about GraphQL. GraphQL provides a declarative[1] way in which you can fetch data from the server. You can read about every bit of goodness that is baked into GraphQL in the official site. However, in this series of blog posts, I'm going to deal with ASP.NET Core and will show how you can integrate GraphQL with it and use it as a query language for your API.

Meaning that you only declare the properties you need (in contrast to a RESTful API, where you call a specific endpoint for a fixed set of data and then dig out the properties you are actually looking for).

To work with C#, the community has provided an open-source port of GraphQL called graphql-dotnet, and we are going to use that. So, let's get started, shall we?

Start by creating a blank ASP.NET Core app.

dotnet new web

We are going to build the GraphQL middleware later (next post). But first, let's get our basics right, assuming you already know a bit about GraphQL. Consider a simple hello world app: we will query the server for a 'hello' property and it will return a 'world' string. So, the property hello will definitely be a string type.

This series of articles will be updated with each version update of the package, hence I'm using the most recent alpha release. If I forget to update the article and the example code doesn't work, leave a comment below saying, Update Needed.

Let's create a query and call it HelloWorldQuery. It's a simple class used to define the fields of the query. Any query class should extend ObjectGraphType. So, the HelloWorldQuery class with a string field should look as follows,
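A minimal sketch of such a class with graphql-dotnet (the exact Field overload may differ slightly between package versions):

```csharp
using GraphQL.Types;

public class HelloWorldQuery : ObjectGraphType
{
    public HelloWorldQuery()
    {
        // Exposes a 'hello' field of GraphQL String type that resolves to "world".
        Field<StringGraphType>(
            "hello",
            resolve: context => "world"
        );
    }
}
```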

Now go back to Startup.cs, and this time only ask for the howdy field ({ howdy }) in the query. You will have the following response,

And you can also ask for both fields by passing a query like the following,

{
hello
howdy
}

And you will have the following response,

So, you get the point. You are asking for the things you are interested in, in a declarative manner, and GraphQL is intelligent enough to understand your needs and give you back the appropriate response.

We only scratched the surface here. This hello world example was needed so that in the next post, when we start building the middleware, we won't have any problems understanding each and every line of the code.

If you are using the ASP.NET Core Angular SPA template that has been shipped with the .NET Core 2.0 release of the framework, you will have the following project structure.

Assuming that you put your application scripts under the ClientApp/app folder, take a backup of that folder before migrating.

Delete the ClientApp folder altogether. We will regenerate it with the Angular CLI. In the project root directory, open up a command prompt window and generate an Angular app by running the following CLI command,

ng new ClientApp

If you don't have the Angular CLI installed already, install it with the following npm command,

npm install -g @angular/cli

Once the CLI scaffolds an Angular application in the ClientApp folder, you should have the following application structure.

Now replace the current app folder with your previously backed-up app folder. You can delete the app.module.browser.ts and app.module.server.ts files. Rename app.module.shared.ts to app.module.ts. Open the renamed file and make the following changes:

Import BrowserModule from Angular and add it to the imports array:

import { BrowserModule } from '@angular/platform-browser';

Add and configure a bootstrap property under @NgModule. Add the AppComponent in the array,

bootstrap: [AppComponent]

Rename AppSharedModule to AppModule

Depending on the selector name of your bootstrap component, change the selector name in index.html if you need to, e.g. <app-root> to <app>.

Configure your application base URL in the main.ts file. Add the following scripts after the imports in the main.ts file,
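A hedged sketch of what that main.ts setup could look like; the 'BASE_URL' token and getBaseUrl() helper follow the old SPA template's convention and are assumptions if your app resolves the base URL differently:

```typescript
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// Reads the <base href="..."> element from index.html.
export function getBaseUrl() {
  return document.getElementsByTagName('base')[0].href;
}

const providers = [
  { provide: 'BASE_URL', useFactory: getBaseUrl, deps: [] }
];

platformBrowserDynamic(providers).bootstrapModule(AppModule);
```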

Client-side npm dependencies should be installed in the ClientApp root. So, extract the packages from the package.json in your project root directory and add them to the package.json file under the ClientApp folder. Once done, run npm install to restore the packages.

Now you should be able to run just the Angular application by executing the following CLI command in the ClientApp folder,

ng serve

Server Side Changes

Remove the old Microsoft.AspNetCore.SpaServices references from Startup.cs,

Remove the using statement: using Microsoft.AspNetCore.SpaServices.Webpack;

The .csproj file needs some modifications. Basically, you just have to keep the <PackageReference> elements and change everything else. However, I'm providing the whole new .csproj file; you can extract the information you need and add it to your own .csproj file,

Remove the webpack.vendor.config.js and webpack.config.js from the client root.

At this point you are done if you don't have/want SSR in your application.

Remove Unnecessary files (Be careful!)

Remove the Views folder from the project root (if you want), because we no longer need asp-prerender-module for SSR, and the index.html file under ClientApp/src is the new entry point where the Angular application is rendered.

Enabling SSR

In the previous templates we used to have a TagHelper, asp-prerender-module, to enable Server Side Rendering support in our Angular apps. We no longer need this TagHelper, and we don't have any specific placeholder in some Razor view where the whole client application is rendered. Instead, the index.html under ClientApp/src is the main entry point of our application. The official documentation talks at length about how to enable SSR at the following URL,

More or less everyone working with ASP.NET Core has tried out the following SPA templates shipped with the framework,

These templates are configured to use Angular/React/React-Redux (based on your choice) on the client side and ASP.NET Core as the back-end. Behind the scenes, a package called Microsoft.AspNetCore.SpaServices is used as a middleware to provide different configurable options for your application, such as HMR (Hot Module Replacement), routing helpers, SSR (Server Side Rendering), etc. The main problem with this approach is that the client frameworks are tightly coupled with the back-end. So even if you want to, you can't actually run them in a decoupled way.

However, if you have worked with @angular/cli and create-react-app, you already know how to create apps with these command-line interfaces, and you are familiar with their semantics and idioms. Coming from that realm to this feels like a whole new learning curve all over again.

Moreover, these SPA templates are updated and managed by the MSFT team and community members. So, if you think about it, the organizations that push updates to their respective client-side frameworks don't really care whether those changes break the SPA templates, because the SPA templates aren't their own products.

That's why the existing configuration from these templates may or may not keep your application working when you try to update your client libraries. For example, the existing Angular SPA template won't work if you update your Angular version to the most recent one, i.e. Angular 5.

On the other hand, when the framework people update their frameworks, they also update the CLIs, so developers can start working with the newly updated framework instantly. But in the case of these templates, you have to wait for configuration changes, if needed, before you can start working with the updated clients.

In simple words, the CLIs make it easy to start working in a hassle-free development environment. With that in mind, the MSFT team came up with the idea that, from now on, they will use the CLIs for the front-end and make appropriate changes on the back-end only to make them work with ASP.NET Core.

So, a new extension to Microsoft.AspNetCore.SpaServices has been introduced: Microsoft.AspNetCore.SpaServices.Extensions. The extension lets you spin up any framework's respective CLI-based development server from an ASP.NET Core back-end. The following code snippet shows how to configure the app.UseSpa() middleware to spin up an Angular CLI based development server while in development mode,
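A sketch of that configuration inside Startup.Configure (assuming `env` is the injected IHostingEnvironment):

```csharp
app.UseSpa(spa =>
{
    // All CLI-generated client files live here.
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        // Runs the "start" npm script (ng serve) and proxies requests to it.
        spa.UseAngularCliServer(npmScript: "start");
    }
});
```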

Similarly, if you are working with React, you will use the development server created by create-react-app, and the configuration change needed to run that development server is the following,

spa.UseReactDevelopmentServer(npmScript: "start");

Of course, the ClientApp folder contains all the files generated by the respective CLIs, hence spa.Options.SourcePath = "ClientApp";

In production mode, you no longer need the development server; rather, you need the vendor and application scripts/files/images bundled and minified for easy deployment. That's why we need the following configuration,
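A minimal sketch of the production-side configuration split across Startup's two methods:

```csharp
// In ConfigureServices: register where the pre-built client bundle lives.
services.AddSpaStaticFiles(configuration =>
{
    configuration.RootPath = "ClientApp/dist";
});

// In Configure: serve those static files (skipped during development).
app.UseSpaStaticFiles();
```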

The published scripts/files/images will be in the dist folder under ClientApp, hence configuration.RootPath = "ClientApp/dist";. The folder will be generated automatically in production mode.

The newly updated preview templates can be installed using the dotnet CLI. Run the following command and you are good to go,

Once installed, use dotnet new angular/dotnet new react/dotnet new reactredux to create an Angular/React/React-Redux app using the new templates.

In the new templates the package.json file is now in the ClientApp folder. So any third party package installation requires running npm command in the ClientApp folder.

cd ClientApp
npm install --save @angular/material

Remember, the development servers are not strictly coupled with the back-end. So if you want, you can run ng serve (for Angular) or npm start (for React) directly in the ClientApp folder, and the applications will run without an ASP.NET Core back-end.

By default, both the development server and your ASP.NET Core back-end will run on a single configured port. But if you do want to run them separately, you can configure a proxy address for your development server. For that, you have to replace the call to spa.UseAngularCliServer() or spa.UseReactDevelopmentServer() with something like the following,

spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");

After you set up a proxy address for your development server, you can use the npm start command to spin up the development server separately from your back-end server.

In the previous templates we used to have a TagHelper, asp-prerender-module, to enable Server Side Rendering support in our Angular apps. We no longer need this TagHelper, and we don't have any specific placeholder in some Razor view where the whole client application is rendered. Instead, the index.html under ClientApp/src is the main entry point of our application. The official documentation talks at length about how to enable SSR at the following URL,

And that is pretty much all you need to know about the updated SPA templates for ASP.NET Core. The templates are still in preview, and as the team mentioned, an RTM release is expected in January 2018.

We should avoid saving files in byte[] format unless it is absolutely necessary, because over time it can affect the performance of any data storage system. With that being said, streaming can be a good idea in this scenario.

We can configure a static-files folder for storing the uploaded files. However, this part is optional; you could simply use the web root folder wwwroot instead. But let's see how to create a separate static-files folder in the content root and use it to serve the uploaded files.

The following configuration code is placed in the Configure method of the Startup.cs file,

Notice that, along with the default app.UseStaticFiles(); call, we have an additional app.UseStaticFiles() call with options that configure a folder named Uploads as a custom static-files folder. The Uploads folder lives in the content root directory of the application,
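A sketch of that pair of calls (the Uploads path resolution is one common way to point at the content root):

```csharp
using Microsoft.Extensions.FileProviders;
using System.IO;

app.UseStaticFiles(); // default: serves files from wwwroot

// Additional call: serve the Uploads folder in the content root at /Uploads.
app.UseStaticFiles(new StaticFileOptions
{
    FileProvider = new PhysicalFileProvider(
        Path.Combine(Directory.GetCurrentDirectory(), "Uploads")),
    RequestPath = "/Uploads"
});
```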

Files under the Uploads folder can be accessed at will via a URL like www.<app-url>.com/Uploads/<filename> (example: localhost:5000/Uploads/Avatar.png), hence the RequestPath configuration.

Next, we have to modify the Create action of UserMvcController and the PostUser action of UserController. But first, change the type of the Avatar property of the User entity class from byte[] to string. From now on, we will use the property to store the file name of the uploaded file.

Notice that we have constructed a file path for the newly uploaded file. Then, using a FileStream, we create the file, i.e. File.Create. Last of all, we copy the content of the uploaded file into the opened file stream.
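A minimal sketch of that save logic; variable names (userVm, user) are assumptions based on the earlier posts in this series:

```csharp
// Build a destination path from the uploaded file's name.
var fileName = Path.GetFileName(userVm.Avatar.FileName);
var filePath = Path.Combine("Uploads", fileName);

// File.Create opens a FileStream for the new file on disk.
using (var fileStream = File.Create(filePath))
{
    await userVm.Avatar.CopyToAsync(fileStream);
}

user.Avatar = fileName; // store only the file name in the database
```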

And that's it. The project repository contains all the code shown above. If you clone and run the project, you can upload files using one of the controllers' create-user actions. The following shows the Angular way of uploading files,

Note: To keep things simple, I didn't talk about file validation; that's a topic for the next post.

Repository Link


Uploading files from an Angular front-end to an ASP.NET Core Web API can be done using the same IFormFile interface introduced in the previous post. To keep things separated, a new API controller (UserController) has been created with the following POST action,

The only difference between the Create action introduced in the previous post and the newly created PostUser is: upon saving a user, the MVC action returns the Index view, whereas the Web API returns a 201 Created status containing a location link to the newly created resource in the response header.

The UserVm view model and User entity classes are also the same as before.

A regular model-driven (reactive) form for saving a user name with an avatar can look as follows,

Note: Using a reactive form is not mandatory here. You can use a template-driven form if you prefer, or you can invent a new form-creation technique and use that. 😊

Notice that we don't have a formControlName attribute on the file input element, because we can't bind a file input element to a FormControl in our form model. If you try to do that, your application will run without problems and the form will be rendered, but you will get the following error when you try to select a file,

ERROR DOMException: Failed to set the 'value' property on 'HTMLInputElement': This input element accepts a filename, which may only be programmatically set to the empty string.

Instead, the value of avatar is filled via a patch on the form model, i.e. userForm. The patching is done when a file is selected, hence the (change)="fileChange(uploader.files)". In the markup above, #uploader is a template reference variable, and it sets the file input as the event raiser, i.e. the target (this technique strips out other DOM event information and only dispatches the event information generated by the target element). Without the template reference variable, you would have to write the change event handler like the following,

The method takes the files array of the file input control and checks whether it is non-null and the file at the first index has a size greater than zero. If so, it takes the file at the first index (we are uploading a single file here) and sets the File object directly on the avatar FormControl via a patch.
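A sketch of that handler; the form and control names (userForm, avatar) follow the post's example:

```typescript
fileChange(files: FileList) {
  // Guard: a file must be selected and non-empty.
  if (files && files.length > 0 && files[0].size > 0) {
    // Patch the File object straight into the form model.
    this.userForm.patchValue({ avatar: files[0] });
  }
}
```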

Notice that in the prepareSaveUser method, we have instantiated an instance of FormData. We then use the instance (formData) to append each individual FormControl's value from the form model. formData.append() accepts key-value pairs, where each key represents the name of a server-side view model property and the value is its raw value.
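A minimal sketch of such a method; the key names are assumptions (ASP.NET Core model binding matches them to view model properties case-insensitively):

```typescript
private prepareSaveUser(): FormData {
  const formData = new FormData();
  // Keys correspond to the server-side view model's property names.
  formData.append('name', this.userForm.get('name').value);
  formData.append('avatar', this.userForm.get('avatar').value);
  return formData;
}
```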

Why FormData? Remember, when you put an Angular form directive on a native HTML form control, it no longer behaves like a real HTML form. For example, a native form submission will reload the page and post the form control values to the server. But in the SPA (Single Page Application) world, we use AJAX to do the work without the cost of a page reload. That's why we don't have an enctype='multipart/form-data' attribute like we had for the MVC/Razor Pages forms, and we have to do the encoding programmatically, hence the FormData.

It's a simple HTTP POST request that calls the PostUser action of the user API controller.

A Different Approach:

We already talked about why an Angular form doesn't act like a real native HTML form element. Also, we are not encoding the form data on submit; rather, we are programmatically using FormData to do that.

So we could go with something like querySelector('#id') to get the file input control and read its value (the file) in the submit method if we wanted. In Angular, we can use @ViewChild for that. The following will get the file input element that has the template reference variable #uploader attached to it and set it in the uploader variable.

@ViewChild("uploader") uploader: any;

Now, in the submit method, we can directly use the uploader variable to get the selected file, available via the nativeElement property.

So, we do have multiple options; it's up to you which one you go for. I tend to follow the first approach, since it gives the impression that everything inside the form element is part of the form model (userForm).

In the next post we will upload the file in a folder rather than storing it as a byte[] array in the database.

The built-in IFormFile interface can be used to represent a file sent via an HTTP request on the ASP.NET Core server side. The following is the skeleton of IFormFile:

Notice the enctype="multipart/form-data" attribute. Submitted form data is encoded before being sent to the server side. According to the W3C specs, there are three encoding types available for form submission. They are the following:

application/x-www-form-urlencoded

multipart/form-data

text/plain

For a form control, application/x-www-form-urlencoded is always the default encoding used for form data. But if a file input control resides in your form, you have to use the multipart/form-data encoding. It lets you submit forms that contain files, non-ASCII data, and binary data.

The form above can be used to create a user with his/her name and an avatar selected using the file input. The following is the view model (DTO) class representing the properties that the individual form controls are bound to:

Note: MVC/Razor Pages forms (TagHelper, HtmlHelper) are not real native HTML form elements. Instead, they are forms on steroids: they give you the impression of a real form control, but you can do something special with them. For example, if you want to fire a server-side action (method) on form submission, use the asp-action attribute. Similarly, to explicitly define the controller the action method belongs to, use the asp-controller attribute.

In the form example above, the action (method) fired on form submission is Create, hence the asp-action="Create". The following is the method body,

Notice that we are copying the content of the uploaded file into a memory stream, and then we store it in the Avatar field of the entity class. Since the Avatar field is of type byte[], we had to convert the memory stream to an array, hence the memoryStream.ToArray().
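A sketch of that copy step; `userVm.Avatar` is assumed to be the IFormFile bound from the posted form and `user` the entity being saved:

```csharp
using (var memoryStream = new MemoryStream())
{
    // Copy the uploaded file's content into memory.
    await userVm.Avatar.CopyToAsync(memoryStream);
    user.Avatar = memoryStream.ToArray(); // byte[] column on the entity
}
```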

Note: Don't get mixed up between the view model (DTO) and the entity class. Entity classes are solely used to represent database tables in ORM (Object-Relational Mapping) frameworks, whereas view models are used as data transfer objects between the View and the Controller.

We are using Entity Framework as the ORM here. The _context is an instance of a class (ApplicationDbContext) that inherits from Entity Framework's DbContext.

The form submission will create a request payload that looks pretty much like the following:

Notice how multipart/form-data encodes a form element's name and its value. These are simple key-value pairs delimited by the boundary value available in the Content-Type header. (We will talk about the boundary value later in the series.)

Ignore the __RequestVerificationToken for now. It's a hidden form element that is automatically injected by the framework. We will talk about it later in the blog series.

We are storing the file as a byte array in our database, but in real life that can cause a huge performance problem in the application. That's why it's better to save the uploaded file in a separate folder and save a reference (URL, filename) to that file in the database. We will see how to do that in the next post. Before finishing, for those of you who are using MVC's HtmlHelper form, here is how your form should look,

Notice that, in this case, we have used the [BindProperty] attribute to bind an instance of UserVm to the Razor page. That's why we don't need any binding parameter as an argument of the OnPostAsync() method.

Repository Link:

In my previous post, I wrote about Asynchronous Validation in Angular's Reactive Forms Control. But that was when we were in the age of Angular 4; now a new age is upon us: the age of Angular 5. With the new version, we have some new ways of doing validation in Angular forms. Since I won't talk about everything from the beginning (how to define a form model, how to add validators, etc.), consider this post a continuation of my previous one. I'll just show you the two new ways of doing FormControl validation.

Notice that the changes are immediately reflected in the form model and the validations are instant. That's cool, but for performance reasons it's not the behavior you want when you have several FormControls in your form model. You can bypass this behavior using the updateOn: 'blur' and updateOn: 'submit' options.

So, the first one in our backpack is the updateOn: 'blur' option. Applying it makes the value of a FormControl change only on the blur event of the native input element associated with it.

There's a lot going on there. First, notice that we don't have this.fb.group(...) anymore (fb is a private instance of FormBuilder), because there is an issue regarding it; we have to go with new FormGroup(...) instead. Also notice that we now have an options object as the second parameter of the FormControl, where we can configure the synchronous and asynchronous validators and, of course, the updateOn: 'blur' option.

Rather than setting updateOn: 'blur' individually, it can be set on the parent FormGroup to activate the option for every FormControl.
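A sketch of the group-level form; the control names here are illustrative:

```typescript
this.userForm = new FormGroup({
  name: new FormControl('', Validators.required),
  email: new FormControl('')
}, { updateOn: 'blur' }); // applies to every child control
```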

If you run the application with the changes, you will get the following behavior. Notice that the value of a FormControl is only updated in the form model when we blur out of the input element.

Similarly, the updateOn: 'submit' option makes the value(s) of the FormControl(s) subject to change on a submit event fired on the native form element. The following code shows how to add the behavior for all the child FormControls.

And that's it! updateOn: 'blur' and updateOn: 'submit' are the options we were really craving while working with forms. Now they are available out of the box, and you don't need any custom implementations. But remember, these options won't work while you are using FormBuilder to build your form. As the Angular team mentioned, that will be available down the road.

Repository Link:

I've been playing around with Angular for some time now. When it comes to building forms, I tend to go with reactive forms. In short, while using reactive forms, you create a form model consisting of different form controls directly in your component class. Once you are done, you bind them (the form controls) to the native form control elements defined in the component's view. But that is all about reactive forms, and the official documentation talks at length about it here,

I'm assuming that you already have a good idea of reactive forms. In this post, I'm going to talk about adding a custom asynchronous validator to a form control. The example I'm using for this demonstration is pretty simple and is kind of a rip-off of the Tour of Heroes tutorial sample available on the official site. Instead of managing heroes, here we are managing weather forecasts for different hours of specific dates. The application doesn't let you add a new forecast, but it lets you modify an existing forecast's data (i.e. date, temperature, summary, etc.). Here is the final look of the application; once a date-time is selected, you can start modifying it through the form:

The following is the structure of the form model specified in the component's code:

If you looked carefully, we already have a validation constraint on dateFormatted, i.e. the field can't be null or empty (Validators.required). The other validation constraint we want on that form control is that we should have a unique date-time for every forecast. In the edit view, the date-time can be changed, but it can't be changed to a date-time that is already in the list. Validators.required is a built-in synchronous validator provided by Angular. You specify the synchronous validators in an array and pass it as the second parameter to the FormControl. Similarly, you specify your asynchronous validators in another array and pass it as the third parameter. The reason behind specifying two different arrays instead of one is that Angular doesn't fire the asynchronous validators until every synchronous validation is satisfied. Since asynchronous validation is resource-intensive (i.e. you would typically call some kind of server to do the validation), you shouldn't initiate the validation process on every change made to the form control.

If you don't know how to make a custom synchronous validator, go through this post:

The idea is very similar to the type of function we define for a custom synchronous validator. In the case of asynchronous validators, we define a function which returns a function of type AsyncValidatorFn. The following is the structure of the AsyncValidatorFn interface:

The AsyncValidatorFn returns a promise or an observable. We process the validation result and decide whether it satisfies a specific validation constraint by resolving the promise or the observable. The following is the validator function that checks the uniqueness of the date-time value available in control.value:
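One possible shape of such a validator, as a sketch: the method name shouldBeUnique is an assumption, while getForecastByDate and the existingDate error key come from the post itself. This sketch uses modern pipeable RxJS operators; the post's original (RxJS 5 era) code may use the older dot-chained syntax.

```typescript
import { AbstractControl, AsyncValidatorFn, ValidationErrors } from '@angular/forms';
import { Observable, of } from 'rxjs';
import { map } from 'rxjs/operators';

export class ForcastValidators {
  static shouldBeUnique(service: ForecastService): AsyncValidatorFn {
    return (control: AbstractControl): Observable<ValidationErrors | null> => {
      if (!control.value) {
        // Nothing to validate; synchronous required-check handles empties.
        return of(null);
      } else {
        // Ask the server for a forecast with this date; a hit means a duplicate.
        return service.getForecastByDate(control.value).pipe(
          map(forecast =>
            forecast ? { existingDate: { value: control.value } } : null)
        );
      }
    };
  }
}
```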

Here, AbstractControl is the base class from which every FormControl inherits. Also, I'm going with an observable here, but you can return a promise if you want.

Notice the piece of code written in the else statement. Once we subscribe to the observable returned from the getForecastByDate(dateFormatted: string) function, we map the result and check whether we have an existing forecast. If yes, we return a validation key (existingDate) with the value set to the form control's value, { value: control.value }. If not, we return null. The getForecastByDate() function is available in the forecast.service.ts file:

Remember that the validation result is returned with a key named existingDate. So, we can check whether the value of that key is null in a conditional *ngIf directive and show a relevant message if it isn't. We check for the existingDate key in the errors object of the form control, i.e. dateFormatted.errors.existingDate. The following is the component's markup as plain HTML with some basic native form control elements:

And that's about it. One thing you can do to further optimize your validation function: at the moment the validation logic fires every time the control's value changes, but you can cut down on the server calls by checking whether the current control value is the same as the initial value set to it, i.e. the value of this.forcast.dateFormatted.

Notice that instead of newing up an instance of the ForecastService directly in the validator function, we are now using the constructor of ForcastValidators to do that for us.

Once you are done, don't forget to add the ForcastValidators service to the providers list of your application's main module:

app.module.shared.ts

@NgModule({
providers: [ForecastService, ForcastValidators]
})

Run the application now and go to the edit mode of a forecast. Try changing the date time value to a value that is already present in the forecast list. Tada! You will get an error message, saying that your validation has failed miserably:

That is how easy it is to make a custom asynchronous validator. I hope you like the post. Share it and have a nice day ☺.

The sample project is available in this repository:

Originally published at http://fiyazhasan.me/how-tree-shaking-works-in-asp-net-core-angular-spa/ on Fri, 13 Oct 2017 09:01:34 GMT.

If you've been using ES2015 modules throughout your JavaScript codebase, in other words import(ing) and export(ing) modules, there is good news for you: you can eliminate unused modules (dead code) from your published script using the various module bundlers available online. This process of dead code elimination is known as Tree Shaking. While rollup (a trending module bundler) was the first to introduce the concept and implement it, other module bundlers came to realize that this is a must-have feature and soon started implementing it in their own ways. webpack is a popular module bundler that is extensively used in Angular application development. In this post, I'll show you how tree shaking is done using webpack in the ASP.NET Core Angular SPA template.

First, let's see how tree shaking actually works with a simple demo. Fire up an ASP.NET Core Angular SPA application. The following commands will install all the available SPA templates from dotnet and initialize an Angular SPA application in the directory you are currently in:

The reason I'm using this specific template is that everything needed to apply tree shaking has already been set up and installed.

Once initialized, open the project in Visual Studio or VS Code; I'm using VS Code here. Open the webpack.config.js file, comment out everything (we will come back to it later), and paste in the following lines of code:

Not too shabby! The piece of code above reads the main-module.ts file under the treeshaking folder, transpiles the content into JavaScript and then pushes the script into the main-module.js file under the wwwroot folder. Other than that, nothing special is going on here. For transpiling TypeScript into JavaScript, we have used a webpack plugin called awesome-typescript-loader (npm install --save awesome-typescript-loader). We are telling webpack to resolve TypeScript files by specifying .ts in the extensions array. The CheckerPlugin in the plugins array is optional; it is only added to do some async error reporting.
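Based on that description, the pasted configuration would look roughly like the following sketch; the exact paths and loader options are assumptions:

```javascript
// webpack.config.js — a minimal sketch of the setup described above.
const path = require('path');
const { CheckerPlugin } = require('awesome-typescript-loader');

module.exports = {
  // Read main-module.ts under the treeshaking folder...
  entry: { 'main-module': './treeshaking/main-module.ts' },
  // ...and push the transpiled script into wwwroot/main-module.js.
  output: { path: path.join(__dirname, 'wwwroot'), filename: '[name].js' },
  // Tell webpack to resolve TypeScript files as well.
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [{ test: /\.ts$/, use: 'awesome-typescript-loader' }]
  },
  // Optional: async error reporting from the TypeScript checker.
  plugins: [new CheckerPlugin()]
};
```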

Now, open your command prompt, go to your project directory and run the following command:

webpack

Similarly, you can use the VS Code integrated terminal for that:

As you can see, webpack has done all the heavy lifting behind the scenes and given you the transpiled script as main-module.js. If you open the file, you will see something like the following:

reference-modules.ts
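Going by the description, reference-modules.ts is a file with two named ES2015 exports along these lines (the function bodies are assumptions for illustration):

```typescript
// reference-modules.ts (sketch) — two named ES2015 exports.
export function SayHi(): string {
  const message = 'Hi there!';
  console.log(message);
  return message;
}

export function SayBye(): string {
  const message = 'Bye now!';
  console.log(message);
  return message;
}
```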

Notice that they are both import(ed) in the main-module.ts file like the following:

main-module.ts

import { SayHi, SayBye } from './reference-modules';
SayHi();

That is why transpiling the main-module.ts file has also transpiled the imported functions into main-module.js.

Okay! That was expected. But let's revisit main-module.js and see what's so special about it. In the earlier code block (main-module.js), you may have noticed that we have two lines of comment on top of the functions:

As you can see, webpack is intelligent enough to tell that although you have imported both of the functions in main-module.ts, you have only used the SayHi method in it, i.e. SayHi();. It has also flagged an unused harmony export SayBye. Harmony is the code name for the ES2015/ES6 module system.

That's cool. Now that we know which modules are used and which are unused, we can remove the unused modules (dead code) from our published script if we want. webpack has another plugin called UglifyJsPlugin which is built in (no need for a separate npm install). Just specify the plugin in the plugins array and you are done. Your plugins array should now look like the following:
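As a sketch, the updated plugins section might look like this; the rest of the configuration stays as before, and UglifyJsPlugin is accessed through webpack's own package in the webpack 2/3 era this post targets:

```javascript
// webpack.config.js — enabling minification, which drops the dead exports.
const webpack = require('webpack');
const { CheckerPlugin } = require('awesome-typescript-loader');

module.exports = {
  /* ...entry, output, resolve and module sections as before... */
  plugins: [
    new CheckerPlugin(),
    // Minifies the bundle and removes unused (dead) exports along the way.
    new webpack.optimize.UglifyJsPlugin()
  ]
};
```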

Now, if you run the webpack command again, you will see that while doing the uglification, it has removed the exported SayBye function from your published main-module.js file.

You can search for it, but you will not find it anywhere in main-module.js. And that's the magic of tree shaking.

Now that we have the basics down, let's see how tree shaking is done in the actual Angular SPA template. If you uncomment the previous code in the webpack.config.js file and fiddle around a bit, you can see that tree shaking is done only when you are in production mode.

Here, isDevBuild decides whether you are in production or development mode. Tree shaking is done only in production mode because eliminating dead code in development mode wouldn't do any good and is simply unnecessary.

Now, if you run the plain webpack command, you will see that the size of your main-client.js is initially around 76 KB, while running the same command with the production flag (i.e. --env.prod) gives you a file size of around 36 KB (of course, these statistics will vary depending on your application code):

webpack --env.prod

dev mode:

prod mode:

So, that is tree shaking: minimizing the size of your published script by eliminating unused (dead) code.

You can further reduce the size of your script by doing AOT (Ahead-of-Time compilation) in your Angular SPA application. But that only works if you are in an Angular context. That is a future blog topic.

Tree shaking, on the other hand, can be applied anywhere, whether you are building apps with Angular, React, Vue, or just plain JavaScript using the ES2015 module syntax, i.e. import and export.

Originally published at http://fiyazhasan.me/its-my-domain-and-i-can-say-whatever-i-like-five-year-blog-anniversary/ on Tue, 19 Sep 2017 17:57:34 GMT.

Five years back, on this very same day, I started my journey in the world of blogging. I know it's nothing to brag about. However, since it's my own domain, I can say anything I want. But seriously, there have been many ups and downs. There were times when I literally wanted to stop blogging, although the reasons were silly enough to make anyone say, "This guy is stupid." But again, aren't we all stupid? Ask yourself why you read blogs. Isn't it the very same reason for all of us: to know something and become less stupid than we were the day before? I think you would agree with that.

Don't get me wrong, but like any other job, I consider blogging itself a job. And it's a tough job to do, especially if you are dealing with technology, because technology is always evolving, and to keep up with it you have to have that kind of mindset.

It's a job from which you don't expect anything. Something that you do for fun. Something that you put on the World Wide Web to let people know what you just learned.

Some people may throw a punch at you, saying, "Why did you say it's a tough job? All you are doing is hitting some keys. What else?" Well, I would have agreed if it only involved copying stuff to earn money from page views.
A simple blog post involves a lot of background study and experiments. One of my favorite quotes from the great Stephen King is,

"If you don't have time to read, you don't have the time (or the tools) to write. Simple as that."

With that being said, it's also about having interest. You can't push yourself to learn something unless you have an interest in it. Interest is something that grows by itself. Again, that's the best part about this job: if you don't have the interest right now, there is no one to give you a headache about it. You can always start and stop depending on your mood.

Long posts, short posts, a blog that doesn't look beautiful, what my fellow bloggers will think about it, etc. Those things don't actually matter. The only thing that matters is starting something great. Something that's your own. All you need is another quote from King (yes sir, I'm a King fan),

"The scariest moment is always just before you start."

Once you start, don't search for motivation in places where you won't find an inch of it. It is a tough world, I must say. But once in a while you will find people who really appreciate what you do. In my case that was Shahriar Hyder, the guy who, after interviewing me, told me he was definitely going to make his HR hire me.

Be optimistic! Doing something that you love will bring you a lot of opportunities; you just have to give it some time. Who would have thought that a guy like me could contribute to the official docs of Microsoft? If I had given up back then, maybe Rick Anderson would have asked someone else on Twitter whether they were interested in contributing.

The best thing my blog ever brought me is that it opened up so many paths for me. Now I can frankly talk with other community members knowing that they consider me one of their own.

I remember the time when I often suffered from back and neck pain and had to take physiotherapy every now and then. The reason was that I was trying too hard; doing my day job and then preparing materials for my next blog once I got back home was too much. But that didn't stop me from doing what I love. Instead, it taught me how to work under pressure and tight schedules. It also gave me the courage to start a startup of my own. Now that I have my own startup, I have the freedom to do pretty much anything, so I'm spending more of my time reading and doing interesting things nowadays.

Every now and then I laugh at myself, thinking that in the beginning I was too afraid to put anything on my blog before my sister had reviewed it first. So, thank you, my big sis (Humaira Binte Hasan), for not getting irritated and for all your support.

Last but not least, I thank all the readers out there who like reading my blog. It's been a pleasure writing for you. I also thank the other awesome tech bloggers from whom I get to know so many things.

To the people who want to start blogging but can't find a proper way: my suggestion would be to start small. I started with Blogger at Classroom of Fizz; now I'm on Ghost with my own domain. So it's not like you can't start unless you have your own space or a credit card. It's about having a passion for it.

If you have further queries, feel free to ask me anything. I will be glad to help you out.

Live unit testing is only available in the Enterprise editions of Visual Studio 2017 Version 15.3.0 or later

I used to hate writing tests for my projects. But not so long ago I was forced to change my perspective once I started managing big, enterprise-level applications. As a newbie in the world of TDD (Test-Driven Development), what I expect from any IDE is:

Quickly show which parts of my application code are covered by unit tests and which are not.

Yesterday, I watched the .NET Core 2.0 Released! video and noticed that Microsoft has shipped Visual Studio 2017 Enterprise (Version 15.3.0) with a newly added feature called Live Unit Testing. As the name suggests, with this feature you can see whether a specific unit of your code is covered by any unit test, directly in your code editor.

As shown above, with live unit testing enabled, I can now see that only the GetExpertGeeks method of my repository is covered by my unit tests. So, how do you enable this cool feature? Simply click on Test in the top menu bar and, from the context menu, go to Live Unit Testing > Start to enable live unit testing in your codebase.

You have to have a unit test project set up first; otherwise starting live unit testing may feel like nothing is happening.

All of the code snippets shown in this post come from a project available in the following git repository:

You can also see which test method(s) cover a code block from a popup dialog, which can be accessed by clicking on a green tick. Similarly, the blue bar indicates that some code blocks are not yet covered by any unit test.

So, let's write a unit test for the Get method of Repository.cs and see live unit testing in action. To accommodate the new test method, I've modified my GeekRepositoryTest.cs a little bit; it now looks like the following:

Once you add a new unit test, Live Unit Testing intelligently analyzes your codebase and decorates the code blocks that fall under the unit test with a passing or failing flag (a green tick for passing tests and a red cross for failing tests).

Again, you can navigate to the failing test (if any) directly from the popup dialog.

The live unit testing feature only works with projects that use one of the following test frameworks:

xUnit.Net

NUnit

MSTest

Live unit testing falls under the static analysis tool category, since it gives you a sense of passing/failing tests without you having to build or explicitly run the tests yourself first.

Static analysis is a way of analyzing your codebase without executing it.

Live unit testing is integrated with the test explorer to give you a WYSIWYG (what you see is what you get) feeling.

And that's it. To learn more about Live Unit Testing, you can read this MSDN blog post. Let me know your experiences with this new feature.

Originally published at http://fiyazhasan.me/faking-with-in-memory-database-in-asp-net-core-2-0/ on Wed, 26 Jul 2017 09:16:17 GMT.

Alright, I mixed up the terms faking and mocking when one of my friends asked me, "How are you faking in .NET Core?" With my little to no knowledge about faking, I told him, "Use the Moq library." But I later found out that I was totally wrong: mocking is not actually faking. Then I read a lot of articles and blog posts on the topic and came across this beautiful post by Martin Fowler, Mocks Aren't Stubs. Take your time and read it to get your facts straight, like I did.

Quoting from Martin Fowler's blog post:

Fake: objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).

Mock: objects pre-programmed with expectations which form a specification of the calls they are expected to receive.

Fakes and mocks are both test doubles. While fakes lend themselves to state verification, mocks are for behavior verification.

All the terms like test double, state verification, behavior verification are well described in Martin Fowler's blog post.
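The post's samples are in C#, but the fake-versus-mock distinction is language-agnostic. Here is a TypeScript sketch of a fake, built around a hypothetical GeekRepository contract: a genuinely working implementation whose shortcut is an in-memory map instead of a real database:

```typescript
interface Geek { id: number; name: string; }

// Hypothetical contract the production code depends on.
interface GeekRepository {
  add(geek: Geek): void;
  get(id: number): Geek | undefined;
}

// A fake: a working implementation that takes a shortcut (in-memory storage).
class InMemoryGeekRepository implements GeekRepository {
  private store = new Map<number, Geek>();
  add(geek: Geek): void { this.store.set(geek.id, geek); }
  get(id: number): Geek | undefined { return this.store.get(id); }
}
```

A test using this fake does state verification: act on the repository, then assert on the state it holds. A mock, by contrast, would be pre-programmed with the calls it expects to receive and would verify that exactly those calls happened.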

So, how do you do fake testing in .NET Core then? One of the nicest features available in Entity Framework Core (and also in EF 6.1 and later) is its support for an in-memory database. Before this feature, you would have had to fake the database/tables by faking the DbSet(s) of your entities. It is still possible to fake the DbSet if you want, but unlike the earlier versions of Entity Framework (maybe before EF 6) there is no IDbSet<> interface in the newer versions. So, if you really want to fake a DbSet, use the abstract class (DbSet<>) itself and override the methods of your choice. I'm not going down that path here, by the way.

In my demo project, I have this ApplicationDbContext which basically contains a single DbSet called Geeks:

Startup.cs

Basically, the code wires up the ApplicationDbContext with a SQL Server instance. Following is the appsettings.json file, where a production-level connection string is provided as the value for the DefaultConnection key.

You can download the source code to see how the IRepository is laid out and configured to serve a new instance of Repository in the ConfigureServices() method of Startup.cs.

Now, you don't want to unit test against the actual SQL database; rather, you would use an in-memory database. For testing purposes, I've created an xUnit project and added the following package to enable in-memory database support in EF Core:

Notice that I'm now creating a new instance of the DbContextOptionsBuilder. And while using the builder pattern to build the options for the ApplicationDbContext, I'm telling it to use an in-memory database instance. Now I'm good to go with faking a database instead of using a real one. That is how simple it is.

On a side note: the in-memory database uses LINQ to Objects instead of LINQ to Entities. Following are some links if you want to know which technique provides which facilities:

Angular has this cool feature of loading a module lazily. Modules that are set up to load lazily can significantly reduce application startup time. Lazy-loaded module setup is done in the application's routing configuration section.

As the title suggests, we will be using the Angular SPA template shipped with Visual Studio 2017 Preview (2) for the demonstration.

A route that is configured to lazy-load a module sends an HTTP GET request to the server, which in turn returns the module in a chunk of code. This only happens the first time the route is activated in the application's lifecycle.

Here's how the AppModuleShared (app.module.shared.ts) is set up in the Angular SPA starter project:

Now we no longer have the component property; instead, we have replaced it with loadChildren. The loadChildren property takes a relative path to the module that should be lazy-loaded. Notice that the module name itself is appended to the end of the path string (#CounterModule). That is because the CounterModule class is not the default export of the file.
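A sketch of such a route entry follows; the module path is a hypothetical one, and a minimal stand-in Route type is used so the snippet stands alone (in the real app the type comes from @angular/router):

```typescript
// Minimal stand-in for Angular's Route type (illustration only).
interface Route { path: string; component?: unknown; loadChildren?: string; }

const routes: Route[] = [
  // No `component` here: the string tells the router where the module file
  // lives, and #CounterModule names the (non-default) export inside it.
  { path: 'counter', loadChildren: './components/counter/counter.module#CounterModule' }
];
```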

The last piece of configuration required is in the webpack.config.js file. But first we need to install the angular2-router-loader package.

angular2-router-loader is a Webpack loader for Angular that enables string-based module loading with the Angular Router. Use the following npm install command to install the package:

Moving on to configuring the angular2-router-loader package in webpack.config.js: in the use array, add an entry for angular2-router-loader alongside awesome-typescript-loader?silent=true and angular2-template-loader. The module section should now look like the following:

When finished, build and run the application. To make sure the CounterComponent comes from a lazy-loaded module, open your browser's developer console and go to the network tab. Navigating to the counter route (using the side menu) will now load a chunk of new code via an HTTP GET request.

You typically work with one of the following action results when you want to write a file to the response.

FileContentResult (writes a binary file to response)

FileStreamResult (writes a file from a stream to the response)

VirtualFileResult (writes a file specified using a virtual path to the response)

These all come from ASP.NET Core's MVC package. In earlier versions of ASP.NET Core (1.*), you could serve files using these action results, but there wasn't any way of configuring cache headers for the responses.

I'm pretty sure you are familiar with the StaticFiles middleware of the framework. All it does is serve static files (CSS, JavaScript, images, etc.) from a predefined/configurable file location (typically from inside the web root, i.e. the wwwroot folder). But along the way it also does some cache header configuration. The reasoning behind this is that you don't want your server to be called every time for a file that doesn't change frequently. Let's see what a request for a static file looks like in the browser's network tab.

So, when the middleware processes the request, it also adds an ETag and a Last-Modified header before sending the response to the browser. Now, if we request the same file again, the browser will serve the file directly from its cache rather than making a full request to the server.

On a subsequent request such as the one above, the server validates the integrity of the cache by comparing the If-Modified-Since and If-None-Match header values with the previously sent Last-Modified and ETag header values. If they match, it means our cache is still valid, and the server sends a 304 Not Modified status to the browser. On its end, the browser handles the status by serving up the static file from its cache.
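That validation step can be sketched as a small pure function; the header names are the real HTTP ones, while the surrounding types are simplified assumptions:

```typescript
interface RequestCacheHeaders {
  ifNoneMatch?: string;     // value of the If-None-Match request header
  ifModifiedSince?: string; // value of the If-Modified-Since request header
}

interface ResourceInfo {
  etag: string;         // ETag previously sent to the client
  lastModified: string; // Last-Modified previously sent to the client
}

// Returns 304 when the client's cached copy is still valid, otherwise 200.
function validateCache(req: RequestCacheHeaders, res: ResourceInfo): number {
  if (req.ifNoneMatch !== undefined) {
    // The ETag comparison takes precedence when both validators are present.
    return req.ifNoneMatch === res.etag ? 304 : 200;
  }
  if (req.ifModifiedSince !== undefined) {
    return new Date(req.ifModifiedSince) >= new Date(res.lastModified) ? 304 : 200;
  }
  return 200; // no validators sent: serve the full response
}
```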

You may be wondering: to make sure whether the response is cached or not, I'm still calling the server to give me a 304 Not Modified status, so how does it benefit me? Well, although it's a real HTTP request to the server, the payload is much smaller than a full-body response. It means that once the request passes the cache validation checks, it is opted out of any further processing (the processing pipeline may include access to the database or whatever else).

You can add additional headers when serving up static files by tweaking the middleware configuration. For example, the following setting adds a Cache-Control header to the response, which notifies the browser that it should store the cached response for 10 minutes (max-age=600) only. The public directive means that the response can be stored in any private (user-specific client/browser) cache, in an intermediate cache server, or on the server itself (memory cache). The max-age directive also specifies that once the cache has expired, a full-body request to the server should be made. Now your concern might be: what about the ETag and Last-Modified header validation? Well, when you specify Cache-Control with a max-age, the ETag and Last-Modified headers simply drop down in priority. By default, the StaticFiles middleware is configured with a max-age of a lifetime.

But enough about static files that are publicly accessible. What about a file we intend to serve from an MVC action? How do we add cache headers to those?

Well, it wasn't possible until the recent release of ASP.NET Core 2.0 (Preview 2). The action results I pointed out at the very beginning of the post are now overloaded with new parameters, such as:

DateTimeOffset? lastModified

EntityTagHeaderValue entityTag

With the help of these parameters, you can now add cache headers to any file you send with the response. Here is an example of how you would do it:

This is just a simple example. In production, you would replace the values of the entityTag and lastModified parameters with real, calculated values. For the static files discussed earlier, the framework calculates the values using the following code snippet, and you can use the same logic if you want:

However, once it is set up, you can make an HTTP request to your file-serving API like the following:

One thing I should mention: although we didn't have to explicitly set the If-Modified-Since and If-None-Match headers while making subsequent requests from within the browser (because the browser is intelligent enough to set those for us), in this case we have to add them to the request headers ourselves.

Anyway, you can also request a partial resource by adding a Range header to the request. The value of the header is a from-to byte range. Following is an example of the Range header in action:
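For the single-range case, the server-side handling can be sketched like this; multi-range and open-ended forms (e.g. bytes=100-) are left out to keep it short:

```typescript
// Parses a single-range header like "bytes=0-99" and slices the resource.
function servePartial(
  rangeHeader: string,
  resource: Uint8Array
): { status: number; body: Uint8Array } {
  const match = /^bytes=(\d+)-(\d+)$/.exec(rangeHeader);
  if (!match) {
    return { status: 200, body: resource }; // unrecognized range: full response
  }
  const from = Number(match[1]);
  const to = Number(match[2]);
  if (from > to || from >= resource.length) {
    return { status: 416, body: new Uint8Array(0) }; // 416 Range Not Satisfiable
  }
  // HTTP byte ranges are inclusive, hence the +1 on the slice end.
  const end = Math.min(to, resource.length - 1) + 1;
  return { status: 206, body: resource.slice(from, end) }; // 206 Partial Content
}
```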

Not every resource can be delivered partially. Some partial requests can make the resource unreadable or corrupted. So, use the Range header wisely.

Last of all, how do you add other cache headers, like Cache-Control, in this scenario? Well, you can use the Response Caching middleware and decorate your action with the ResponseCache attribute, like the following:

Of course, that is one option; you can also modify the response with your own custom code. A response with a Cache-Control header added would look like the following:

And that's it! I hope you enjoyed reading the post. But that's not all there is to caching; beyond files, you can add caching to any entity you want. That's for later, though. If you want a blog post on that topic, let me know in the comments.