chsakell's Blog – https://chsakell.com
Anything around ASP.NET MVC, WEB API, WCF, Entity Framework & AngularJS
Azure Cosmos DB: DocumentDB API in Action
https://chsakell.com/2017/08/13/azure-cosmos-db-documentdb-api-in-action/
Sun, 13 Aug 2017

In recent years there has been a significant change in the amount of data being produced and consumed by applications, while data models and schemas evolve more frequently than they used to. If a traditional application (one that uses a relational database) goes viral, those two factors combined could easily bring it to an unfortunate state due to a lack of scalability. This is where Azure Cosmos DB, a planet-scale NoSQL database as a service, comes into the scene. In a nutshell, Azure Cosmos DB, or DocumentDB if you prefer, is a fully managed (by Microsoft Azure), incredibly scalable, queryable and, last but not least, schema-free JSON document database. Here are the most important features that Azure Cosmos DB offers via the DocumentDB API:

Elastically scalable throughput and storage

Multi-region replication

Ad hoc queries with familiar SQL syntax

JavaScript execution within the database

Tunable consistency levels

Fully managed

Automatic indexing

Azure Cosmos DB is document based, and when we refer to documents we mean JSON objects that can be managed through the DocumentDB API. If you are not familiar with Azure Cosmos DB and its resources, here is the relationship between them.
This post will show you how to use the DocumentDB API to manipulate JSON documents in NoSQL DocumentDB collections. Here is what we will cover in detail:

Create DocumentDB database and collections

CRUD operations: we will see in detail several ways to query, create, update and delete JSON documents

Create Generic Data DocumentDB repositories: ideally, we would like to have a single repository that could target a specific DocumentDB database with multiple collections

Use the famous AutoMapper to map DocumentDB documents to application domain models

What do you need to follow along with this tutorial? In case you don’t have an Azure Subscription, simply install the Azure Cosmos DB Emulator with which you can develop and test your DocumentDB based application locally. Are you ready? Let’s start!

Clone and explore the GitHub repository

I have already built an application that makes use of the DocumentDB API and the Azure Cosmos DB Emulator. Clone the repository from here and either open the solution in Visual Studio 2017 or run the following command to restore the required packages.

dotnet restore

The project is built with ASP.NET Core. Build the solution, but before firing it up, make sure the Azure Cosmos DB Emulator is up and running. In case you don’t know how to do this, search your apps for Azure Cosmos DB Emulator and open it. It will ask you to grant administrator permissions in order to start.
When the emulator starts, it automatically opens its Data Explorer in the browser at https://localhost:8081/_explorer/index.html. If it doesn’t, right-click the tray icon and select Open Data Explorer. Now you can run the app and initiate a DocumentDB database named Gallery and two collections, Pictures and Categories. The initializer class, which we’ll examine in a moment, will also populate some mock data for you. At this point, what matters is to understand, through the emulator’s interface, what exactly a collection and a document are. Before examining what really happened in the emulator’s database, notice that the app is a Photo Gallery app.
Each picture has a title and belongs to a category. Now let’s take a look at the emulator’s data explorer.
You can see what a collection and a JSON document look like. A collection may have Stored Procedures, User Defined Functions and Triggers. A JSON document is of type Document and can be converted to an application domain model quite easily. Now let’s switch to code and see how to connect to a DocumentDB account and initiate the database and collections.

Create Database and Collections

The first thing we need to do before creating anything in a DocumentDB account is to connect to it. The appsettings.json file contains the default DocumentDB Endpoint and Key to connect to the Azure DocumentDB Emulator. If you had a Microsoft Azure DocumentDB account, you would place its endpoint and key here instead. Now open the DocumentDBInitializer class inside the Data folder. First of all, you need to install the Microsoft.Azure.DocumentDB.Core NuGet package. You create a DocumentClient instance using the endpoint and key of the DocumentDB account:
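The original snippet is not included in this feed excerpt; here is a minimal sketch of such an initializer, assuming the Endpoint, Key and DatabaseId values come from appsettings.json (the member names are illustrative):

```csharp
using System;
using Microsoft.Azure.Documents.Client;

public static class DocumentDBInitializer
{
    // Values normally read from appsettings.json; the emulator ships a well-known key.
    private static readonly string Endpoint = "https://localhost:8081/";
    private static readonly string Key = "<your-documentdb-account-key>";
    public static string DatabaseId { get; } = "Gallery";

    public static DocumentClient Client { get; private set; }

    public static void Initialize()
    {
        // A single DocumentClient can be reused for all requests to the account.
        Client = new DocumentClient(new Uri(Endpoint), Key);
    }
}
```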

The DatabaseId parameter is the database’s name and will be used for all queries against the database. When creating a database collection you may or may not provide a partition key. Partition keys are specified in the form of a JSON path; for example, in our case and for the Pictures collection, we specified the partition key /category, which represents the Category property of the PictureItem class.

Partitioning in DocumentDB is an instrument for making a collection massively scalable in terms of storage and throughput. Documents with the same partition key value are stored in the same physical partition (grouped together), and this is managed automatically for you by calculating a hash of the partition key and assigning it to the respective physical location. To understand the relationship between a partition and a collection, think of it this way: while a partition hosts one or more partition keys, a collection acts as the logical container of these physical partitions.
Documents with the same partition key are always grouped together in the same physical partition, and if that group needs to grow, Azure will automatically make any required transformations for this to happen (e.g. shrink another group or move it to a different partition). Always pick a partition key that leverages the maximum throughput of your DocumentDB account. In our case, assuming thousands of people upload pictures with different categories at the same rate, we would leverage the maximum throughput. On the other hand, if you picked a partition key such as DateCreated, all pictures uploaded on the same date would end up in the same partition. Here is how you create a collection.
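The code itself is missing from this excerpt; a sketch of creating the database and the partitioned Pictures collection might look like this (the throughput value is an assumption, the names are from the post):

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Create the database if it doesn't exist yet.
await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "Gallery" });

// Define the collection with /category as its partition key path.
var pictures = new DocumentCollection { Id = "Pictures" };
pictures.PartitionKey.Paths.Add("/category");

await client.CreateDocumentCollectionIfNotExistsAsync(
    UriFactory.CreateDatabaseUri("Gallery"),
    pictures,
    new RequestOptions { OfferThroughput = 400 }); // minimum throughput tier
```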

Don’t worry about the implementation, we’ll check it later in the CRUD section. Last but not least, there are the concrete classes that finally target a specific DocumentDB database. In our case, we want a repository targeting the Gallery database and the collections we created in the first step.

When we want to run CRUD operations against a specific collection, we call the InitAsync method passing the collection id as a parameter. Make sure you register your repositories in the dependency injection container in the Startup class.
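The registration is not shown in this excerpt; a sketch of what it might look like in Startup follows, where the IDocumentDBRepository/DocumentDBRepository type names are assumptions based on the post’s description of a generic repository per database:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // One repository targeting the Gallery database; the collection to work
    // against is chosen later by calling InitAsync(collectionId).
    services.AddScoped<IDocumentDBRepository<GalleryDb>, DocumentDBRepository<GalleryDb>>();
}
```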

It is likely that you won’t need more than two or three DocumentDB databases, so a single repository should be more than enough.

CRUD operations using the DocumentDB API

The Index action of the PicturesController reads all the pictures inside the Pictures collection. First of all we get an instance of the repository. As we previously saw, its constructor will also initiate the credentials to connect to the Gallery DocumentDB database.

The Index action may or may not receive a parameter to filter the picture results by title. This means we want to be able either to query all the items of a collection or to pass a predicate and filter them. Here are both implementations.
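The implementations are not included in the excerpt; a sketch of the two query methods on a generic repository could look like this, assuming the repository holds a DocumentClient plus DatabaseId/CollectionId fields:

```csharp
using System.Linq.Expressions;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

// Query all items of the collection.
public async Task<IEnumerable<T>> GetItemsAsync<T>() where T : class
{
    var query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1, EnableCrossPartitionQuery = true })
        .AsDocumentQuery();

    var results = new List<T>();
    while (query.HasMoreResults)
        results.AddRange(await query.ExecuteNextAsync<T>());
    return results;
}

// Query items matching a predicate, e.g. p => p.Title.Contains(filter).
public async Task<IEnumerable<T>> GetItemsAsync<T>(Expression<Func<T, bool>> predicate) where T : class
{
    var query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1, EnableCrossPartitionQuery = true })
        .Where(predicate)
        .AsDocumentQuery();

    var results = new List<T>();
    while (query.HasMoreResults)
        results.AddRange(await query.ExecuteNextAsync<T>());
    return results;
}
```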

Here you can see for the first time how we convert a Document item to a domain model class. Using the same repository but targeting the Categories collection, we are able to query CategoryItem items.

Create a Trigger

Let’s switch gears for a moment and see how to create a JavaScript trigger. We want our picture documents to get a DateCreated value when added to the collection. For this we create a function that reads the document object from the request. This is the Triggers/createDate.js file.

One important thing to notice here is that a trigger is registered at the collection level (collection.TriggersLink). When we create a document and also require a trigger to run, we need to pass it in the RequestOptions. Here is how you create a document with or without request options.
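The snippets are stripped from this excerpt; a sketch of registering the trigger and then creating a document with and without request options could look like this (URIs and variable names are illustrative):

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var collectionUri = UriFactory.CreateDocumentCollectionUri("Gallery", "Pictures");

// Register the JavaScript trigger on the collection (pre-trigger on Create).
var trigger = new Trigger
{
    Id = "createDate",
    Body = File.ReadAllText(@"Triggers/createDate.js"),
    TriggerOperation = TriggerOperation.Create,
    TriggerType = TriggerType.Pre
};
await client.CreateTriggerAsync(collectionUri, trigger);

// Create a document and ask the trigger to run for this request.
await client.CreateDocumentAsync(collectionUri, pictureItem,
    new RequestOptions { PreTriggerInclude = new List<string> { "createDate" } });

// Create a document without any request options.
await client.CreateDocumentAsync(collectionUri, pictureItem);
```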

The picture item instance parameter has a Category value which will be used as the partition key value. You can confirm this in the DocumentDB emulator interface.

Create attachments

Each document has an AttachmentsLink where you can store attachments; in our case we’ll store a file attachment. Mind that you should avoid storing image attachments directly and store links to them instead, otherwise you’ll probably face performance issues. In our application we store the images only because we want to see how to store files. In a production application we would store the images as blobs in an Azure Blob Storage account and store the blob’s link as an attachment to the document. Here is how we create, read and update a document attachment.
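A sketch of the three attachment operations follows; variable names (fileStream, contentType, fileName, pictureItem) are assumptions for illustration:

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var options = new RequestOptions { PartitionKey = new PartitionKey(pictureItem.Category) };

// Create: attach the raw file bytes to the document.
Attachment created = await client.CreateAttachmentAsync(
    document.AttachmentsLink,
    fileStream,
    new MediaOptions { ContentType = contentType, Slug = fileName },
    options);

// Read: fetch the document's first attachment.
Attachment attachment = client
    .CreateAttachmentQuery(document.SelfLink,
        new FeedOptions { PartitionKey = new PartitionKey(pictureItem.Category) })
    .AsEnumerable()
    .FirstOrDefault();

// Update: replace the attachment's media with a new stream.
await client.UpsertAttachmentAsync(document.SelfLink, newFileStream,
    new MediaOptions { ContentType = contentType, Slug = fileName },
    options);
```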

When you need the domain model’s properties, the first approach is preferred, while when you need to access document-level properties such as the document’s attachments link, the second fits best. But what if you need both? Should you query twice? Fortunately not. The generic Document instance has all the properties you need to build a domain model instance. All you have to do is use GetPropertyValue for each of the domain model’s properties. However, instead of doing this every time you want to create a domain model, you can use AutoMapper as follows:
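A sketch of such an AutoMapper profile might look like this; the JSON property names are assumptions that follow the post’s PictureItem model:

```csharp
using AutoMapper;
using Microsoft.Azure.Documents;

// One-time configuration, e.g. at application startup.
Mapper.Initialize(cfg =>
{
    cfg.CreateMap<Document, PictureItem>()
       .ForMember(d => d.Id,          o => o.MapFrom(s => s.GetPropertyValue<string>("id")))
       .ForMember(d => d.Title,       o => o.MapFrom(s => s.GetPropertyValue<string>("title")))
       .ForMember(d => d.Category,    o => o.MapFrom(s => s.GetPropertyValue<string>("category")))
       .ForMember(d => d.DateCreated, o => o.MapFrom(s => s.GetPropertyValue<DateTime>("dateCreated")));
});

// Usage: keep the Document for its AttachmentsLink, map once for the domain view.
PictureItem item = Mapper.Map<PictureItem>(document);
```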

Mind though that if you change the value of the partition key, you cannot simply update the document, since the new partition key may map to a different physical partition. In this case, as you can see in the EditAsyncPOST method, you need to delete and re-create the item from scratch using the new partition key value.

If you try to query a collection that requires a partition key and you don’t provide one, you’ll get an exception. On the other hand, if your query really must search across all partitions, all you have to do is set EnableCrossPartitionQuery = true in the FeedOptions.
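To illustrate the two cases, here is a sketch of the same query scoped to one partition and fanned out over all of them (the category value is a made-up example):

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Scoped to a single partition: cheap and fast.
var singlePartition = client.CreateDocumentQuery<PictureItem>(
    collectionUri,
    new FeedOptions { PartitionKey = new PartitionKey("Nature") });

// Fan-out across partitions: without EnableCrossPartitionQuery this throws.
var allPartitions = client.CreateDocumentQuery<PictureItem>(
    collectionUri,
    new FeedOptions { EnableCrossPartitionQuery = true, MaxItemCount = -1 });
```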

Stored Procedures

A collection may have stored procedures as well. Our application uses the sample bulkDelete stored procedure from the official Azure DocumentDB repository, to remove pictures from the Pictures collection. The SP accepts an SQL query as a parameter. First, let’s register the stored procedure on the collection.
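The registration code is not in this excerpt; a minimal sketch, assuming the bulkDelete.js script has been copied into a StoredProcedures folder, could be:

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Register the stored procedure on the Pictures collection.
var sp = new StoredProcedure
{
    Id = "bulkDelete",
    Body = File.ReadAllText(@"StoredProcedures/bulkDelete.js")
};
await client.CreateStoredProcedureAsync(
    UriFactory.CreateDocumentCollectionUri("Gallery", "Pictures"), sp);
```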

The DeleteAll action method deletes either all pictures of a selected category or all the pictures in the collection. As you’ll see, the query passed to the bulkDelete stored procedure is the same; what changes is the partition key, which can target pictures in an individual category.
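A sketch of invoking the stored procedure follows; since the procedure runs per partition, the PartitionKey in the RequestOptions is what scopes it to a single category (the category variable is illustrative):

```csharp
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Same query every time; the partition key selects which category's
// documents the stored procedure will see and delete.
var response = await client.ExecuteStoredProcedureAsync<dynamic>(
    UriFactory.CreateStoredProcedureUri("Gallery", "Pictures", "bulkDelete"),
    new RequestOptions { PartitionKey = new PartitionKey(category) },
    "SELECT * FROM Pictures p");
```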

Here we can see an alternative and powerful way of querying JSON documents using a combination of SQL and JavaScript syntax. You can read more about the Azure Cosmos DB query syntax here.

That’s it, we’re finished! We have seen many things related to the DocumentDB API, from installing the Azure Cosmos DB Emulator and creating a database with some collections to CRUD operations and generic data repositories. You can download the project for this post here.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.

Facebook

Twitter

.NET Web Application Development by Chris S.

Continuous Integration & Delivery with Microsoft Azure & GitHub – Best Practices
https://chsakell.com/2017/06/18/continuous-integration-delivery-with-microsoft-azure-github-best-practices/
Sun, 18 Jun 2017

Continuous Integration and Delivery (CI/CD) automation practices are a one-way street when you want to continuously produce and deliver software in short iterations: software that is guaranteed, when deployed, to have passed successful reviews, builds and tests through an automated and strict process. Automating the software release process is not an easy task and usually requires a set of tools, patterns and platforms. All these parameters depend on the company’s culture (e.g. open source or not), the employees’ know-how and the nature or variety of the software. This post will describe not only how to configure a CI/CD environment but also provide instructions for the Git operations involved. We will use Microsoft Azure and GitHub as the base for setting up our CI/CD environment. I will break the post into the following sections:

Requirements from developer, tester perspective

Continuous Integration & Delivery Architecture

Setup the Continuous Integration / Delivery environment

Example: Run a full release cycle (new feature / release / hot-fix)

Are you ready?

The developer’s perspective

Each developer should be able to work in an isolated way without affecting other’s work.

If needed, more than one developer should be able to collaborate and work on the same feature in a seamless way.

New features should always pass through a review process before being merged (pull request). Pull requests should always have build and test status indicators.

The development should take place in a separate and safe environment, rather than staging or production, where all developers can check the current development status (develop branch). Each time the develop branch changes (push), the corresponding Azure deployment slot is updated.

New features are created from the develop branch, which means that each time a developer needs to add a new feature, he/she creates a new feature branch from the develop one. At the same time, there may be many new features in development by different developers.

When a developer feels that the new feature is ready, he/she pushes (publishes) the new feature branch and opens a pull request to be merged into the develop branch. The pull request has build and test status and, after being successfully reviewed, is merged into the develop branch. If the feature branch fails the review, developers can continue to commit and push changes on the feature branch till the pull request is ready for merge.

After the successful merge into the develop branch, the feature branch may be deleted from both local and origin.

When we need to ship a new release, we create and push a new release branch from develop. The release branch is tested on a staging environment (we’ll talk more about this later). If the release branch is ready for production, we merge the branch into both the develop and master branches. At this stage we have a specific tag/version as well. If not, we continue to commit and push changes to the release branch till it is ready for production.

When we need to apply a fix to production (ASAP), we create a hotfix branch from master. This hotfix branch should be deployed and tested on the staging environment the same way a release branch is tested. When the hotfix is ready for production, we merge the branch into both develop and master and finally delete it.

The tester’s perspective

Release candidate branches should be tested on a separate staging environment without affecting production

The staging environment should be able to simulate how the production environment would behave if the release were applied

Testers should be able to dynamically deploy release candidate branches to the staging environment with a single command

When a release candidate is successfully tested and ready for production, it should be deployed to production with a single command as well

This is where Microsoft Azure deployment slots and Azure CLI 2.0 come into the scene. Testers don’t have to be aware of the configuration behind all this; all they need to know in order to deploy release candidates to the staging or production environment is the name of the release or hotfix branch. Deployment and slot swapping between the staging and production environments happen using two Azure CLI 2.0 commands.

Continuous Integration & Delivery Architecture

All requirements that we have listed before can be achieved using the architecture shown in the above image. There are three platforms used, each one for a different reason. Microsoft Azure is where we have the deployment slots for our application. Assuming we have a .NET Core Web application that also uses a SQL Azure database, that would result in an Azure App Service with two deployment slots, dev and staging. The default website is considered the production slot. Each of these slots has its own application settings and, of course, a separate database connection string. The connection string setting will be a per-slot setting so the strings don’t swap when we swap the staging and production slots (more on this later). As far as source control is concerned, there is a develop branch from which all new feature branches are created. The develop branch has a fixed Webhook hooked to the App Service dev slot. This means that each time we push to the develop branch, changes are reflected on the dev slot (build/re-deploy).

When we wish to ship a new software release, we create a new release branch with the next version (tag) number. We deploy the new release candidate branch to the staging slot using Azure CLI 2.0. What this does is delete any Webhook that existed on the staging slot and dynamically create, on demand, a new one hooked to the new release branch. Testers can either test the new features using the staging slot’s settings or swap with preview the staging and production slots in order to test the release candidate using the production settings. Till the release candidate branch passes all the tests, any push to that branch results in a new build/deploy cycle on the staging slot. When all the tests pass, testers finalize the swap to production. The release candidate branch can then be merged into develop and master and finally deleted. The same applies to any hotfix branch created from the master branch.

Any push to develop, release-version, hotfix-version or feature branches, or any pull request, should trigger an AppVeyor build task that detects whether the build and any tests succeeded. This is very important, because it can detect bugs before they reach the production environment and the end users.

Setup the CI/CD environment

This section will describe exactly the steps I took in order to implement the requirements we have set for Continuous Integration & Delivery using Microsoft Azure and Git. Let’s begin. I started by creating an empty ASP.NET Core Web Application (no database yet) and initialized a Git repository so that I could push it up to GitHub. I made sure to create a develop branch from master before pushing it. The repository I used for the tutorial is this one. As shown in the architecture diagram, there is a build/test task running on AppVeyor each time we push code to certain branches. So this is what I needed to set up next. The steps to make this work are quite easy. First, I signed in to AppVeyor with my GitHub account, pressed the NEW PROJECT button and selected the azure-github-ci-cd repository.
AppVeyor requires you to add an appveyor.yml file at the root of your repository, a file that defines what tasks you want it to run when something is pushed to your repository. There are a lot of things you can configure in AppVeyor, but let’s stick with the basics, which are the build and test tasks. Here is the appveyor.yml file I created.
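The file itself is not reproduced in this feed excerpt; a minimal sketch of such an appveyor.yml follows, where the image, test project path and per-branch Debug/Release split are assumptions (the description below explains the intended branch behavior):

```yaml
# Illustrative appveyor.yml sketch; build image and paths are assumptions.
version: 1.0.{build}
image: Visual Studio 2017
configuration: Release

branches:
  only:
    - master
    - develop
    - /release\/.*/
    - /hotfix\/.*/
    - /bugfix\/.*/
    - /feature\/.*/

before_build:
  - dotnet restore

build_script:
  - dotnet build

test_script:
  - dotnet test .\tests\MyApp.Tests\MyApp.Tests.csproj
```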

For the master, release/.*/, hotfix/.*/ and bugfix/.*/ branches I want AppVeyor to build and test the solution in Release mode, while the develop and feature/.*/ branches may run in Debug. Moreover, I made sure the tasks run on pull requests as well. With this configuration file, each time we push to those branches a build/test task runs on AppVeyor. The final result can be shown using badges like this:

The next step is to create the App Service up on the Microsoft Azure Portal, along with the required slots. I named the App Service ms-azure-github and added two slots, dev and staging. The default instance will be used as the production slot. For all these slots, I added the following App setting, to make sure that Git operations work without problems.

SCM_USE_LIBGIT2SHARP_REPOSITORY 0

What this setting does is ensure git.exe is used instead of libgit2sharp for Git operations; otherwise you may get errors such as this one. I didn’t add any database yet; I did that later, when I wanted to create a new feature for my app (more on this in the example section). At that moment, the only thing left was to add a Webhook between the develop branch and the dev slot. This is a very important step, because you are going to authorize Azure to access your GitHub account. Doing this directly from the Azure Portal will help you run related commands from Azure CLI 2.0 without any authentication issues. To do this, I selected the dev slot, clicked the deployment options, connected to my GitHub account, selected the ms-azure-git-ci-cd repository and finally the develop branch. Azure instantly fired up the first build and I could see the dev slot online.

Example: Run a full release cycle (new feature / release / hotfix)

This is the section where we will see a full release cycle in action. A full release cycle will show us how to use and mix together all the tools, platforms and Git operations in order to automate the release process. We’ll start by developing a new feature, publishing it and opening a pull request, merging the feature and creating a release branch for testing. We will deploy the release candidate to the staging slot using Azure CLI 2.0 and apply a swap with preview with the production slot. After finishing and shipping the feature to production (which, by the way, will be related to adding database features), we will run another full cycle in which we will have to make a hotfix directly from the master branch. Before starting, though, I would like to introduce some Git extensions that simplify the process of applying the git-flow branching model we mentioned at the start of the post. The extension is named git-flow cheatsheet and I encourage you to spend 5 minutes reading it to see how easy it is to use. As a Windows user, I installed wget and cygwin before installing the git-flow-cheatsheet extension. I made sure to add the relevant paths to the system PATH so it works from the command line.
You certainly don’t have to work with git-flow-cheatsheet; you can just use the usual Git commands the way you are used to, and that’s fine. I used the utility for this tutorial to emphasize the git-flow branching model, that’s all. Having the extension installed, I ran the following git-flow command at the root of my Git repository.

git flow init

I left the default options but you could pick your own, for example you can set the name of the releases or hotfix branches etc..

Add a new Feature

Let’s say we have a new task where we need to add Entity Framework and also show a page with users. The users’ data will come from the database. The developer assigned to complete the task (always happy to..) starts by creating a new feature branch directly from the develop one. With git-flow-cheatsheet this is done by typing:

git flow feature start add-entity-framework

SQL Azure Database

Since this task is database related, we need to prepare the database environment first, that is, create three databases, one for each environment: dev, staging, production. We also need to add the connection string settings per slot, in each App Service slot, pointing to the respective database. Creating a database on Microsoft Azure is quite easy. You select SQL Databases, click Add and fill in the required fields.
As you can see, I have named the databases the same way I named the slots. If you click on a database and select Properties, you can find the connection string.
Make sure to set this connection string in the respective App Service slot’s Application Settings connection strings.
It’s very important to check the Slot setting option. Now that you have the SQL databases set up, you need to create the schema, of course. To start, I added the Entity Framework model and created the connection string pointing to the dev database in the appsettings.json file.

I ran Entity Framework Core migrations from Visual Studio 2017, and when it was time to update the Azure database, Visual Studio asked me to sign in with my Microsoft Azure account and also add my local IP to the relevant firewall rules. I ran the same Update-Database command three times, for the three different connection strings. I also added some mock users to all databases to ensure that the connections work as intended. When the add-entity-framework feature was ready for review, I published (pushed) it up to GitHub. Other developers could also contribute to that feature.
As soon as I pushed the branch, AppVeyor ran its tasks and sent me the result to my email address. An error had occurred..
I signed in to AppVeyor and checked the build for details..
After fixing the error, while still working on the add-entity-framework branch, the new push triggered a new successful build.
Now I was ready to open a pull request to merge the new feature branch into the develop one.
Since the feature branch passed the review and all the build/test tasks, I merged it into develop and then deleted it by running the git-flow command:

git flow feature finish add-entity-framework

When I pushed the develop branch to origin, the related slot picked up all the changes that had been made on the add-entity-framework feature.
By the way, at the same time there was another (fictional) developer working on another feature following the same process, and when he opened a pull request, the AppVeyor build/test tasks failed.
He fixed the issue, merged the new feature into develop and deleted that branch.

Ship a new Release

It is about time to ship a new release, where we can see all the new features that have been added to the develop branch. We’ll deploy the new features through a new branch named release/{next-version}, where {next-version} is the next tag/release of your software, such as 1.0.0, 1.2.3 etc. Of course you can make your own conventions for naming your tags. To get the latest version simply run git tag. I ran the command and, since the latest tag was 1.0.1, I named the next release release/1.0.2. I did that using the git-flow-cheatsheet command:

git flow release start 1.0.2

Based on the conventions I accepted during git flow init, this created a new branch named release/1.0.2 directly from the develop branch. I also published that branch up to origin using the command:

git flow release publish 1.0.2

It is very important to push the release branch up to GitHub because the next step is to sync the staging slot with that branch. At this point the developer may inform the tester that a new release candidate with version 1.0.2 is ready for testing. The tester can sync the release/1.0.2 branch with a single Azure CLI 2.0 command, but first he/she has to have it installed. You can install Azure CLI 2.0 from here. The first time you use the Azure CLI you have to log in with your Microsoft Azure credentials. To do this, open a PowerShell and run the az login command. After a successful login you will be able to access your Azure resources from the command line. As we described before, you could add a Webhook to the staging slot through the Azure portal the same way we did for the develop branch and the dev slot. But since the staging slot is going to continuously “host” different release candidate branches, we need a more robust way, and this is why we use the Azure CLI. We want a PowerShell script that accepts the release candidate branch name (e.g. 1.0.2) as a parameter and, when run, removes any previous branch/Webhook binding and adds the new one. Doing this results in a new build and deployment on the staging slot, containing all the new release features we want to test. Awesome, right? The script is this one:
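The script body is not preserved in this excerpt; the az commands such a script might wrap are sketched below. App name, resource group and repository URL are placeholders, and the exact original script may differ:

```shell
#!/bin/sh
# Usage (illustrative): ./sync-staging.sh 1.0.2
BRANCH="release/$1"

# Remove the previous source-control binding (and its Webhook) from staging.
az webapp deployment source delete \
  --name ms-azure-github --slot staging --resource-group <your-resource-group>

# Re-point the staging slot to the new release branch; this triggers a build/deploy.
az webapp deployment source config \
  --name ms-azure-github --slot staging --resource-group <your-resource-group> \
  --repo-url https://github.com/<your-account>/azure-github-ci-cd \
  --branch "$BRANCH"
```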

You should replace your repository URL and the Azure resource group under which the App Service is bound. Let’s see what I ran.
As you can see, Azure instantly fired up a new build on the staging slot, and here is the result after the successful build..
You can find the script here. Testers at this point can test the new release features in a staging environment (staging database, application settings, etc.). If they find a bug, the developer can commit and push the changes on the release branch as is. Since there is a Webhook to that branch, the staging slot triggers a new deployment each time you push changes to the branch.
A very good practice when testing new features is to test what impact the new features would have in the production environment. But can we do this without affecting the actual production environment? Yes we can, thanks to the Swap with Preview feature. What this does, in a nutshell, is apply the production application settings to the staging slot. The very first time, you may not have deployed anything to production yet, but remember that we have already set all the application settings, such as connection strings. So we run a swap with preview, the production settings are applied to the staging slot and we test the release with those settings. When we ensure that everything is OK, we complete the swap. If it isn’t, we “reset” the swap, that is, apply the default staging settings to the staging slot, in other words revert the settings. We want to do all this in a robust way as well, and this is why we will use an Azure CLI 2.0 command again.

You can find the script here. The script has three options to pass: preview, to apply the production settings to the staging slot; swap, to complete the swap, which results in deploying exactly what the staging slot has to production; and reset, which resets the staging slot’s settings. You can read more about swapping here.
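The three options map onto the --action values of the real `az webapp deployment slot swap` command; a sketch, with app and resource group names as placeholders:

```shell
#!/bin/sh
# preview: apply production settings to the staging slot (multi-phase swap, phase 1)
az webapp deployment slot swap --resource-group <your-resource-group> \
  --name ms-azure-github --slot staging --action preview

# swap: complete the swap; the staging slot's content goes to production
az webapp deployment slot swap --resource-group <your-resource-group> \
  --name ms-azure-github --slot staging --action swap

# reset: cancel the swap and restore the staging slot's own settings
az webapp deployment slot swap --resource-group <your-resource-group> \
  --name ms-azure-github --slot staging --action reset
```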
After completing the swap to production, we can finally see the release version deployed on the production slot.
A full release cycle has been completed and we can now merge the release branch into both develop and master. Then we can safely delete it from both local and origin. The command to do this using git-flow-cheatsheet is:

git flow release finish 1.0.2

Hotfix

There are times (like, a lot..) when you have to push a hotfix to production immediately. The way we do this using the git-flow branching model is to create a new hotfix branch from master and follow the same process we did for a release candidate branch. The naming, though, may vary; for example, if you want to make a hotfix to release 1.0.2, the hotfix branch may be named hotfix/1.0.2-1 and so on. Using git-flow-cheatsheet, simply run this command:

git flow hotfix start <release-tag-version-#>

Push/publish the branch up to origin and sync it with the staging slot as we did with a release branch.
Hotfix branches trigger AppVeyor build tasks as well..
Test the hotfix in staging, apply the swap with preview with production and complete the swap when ready. When you are done, finish/delete the hotfix branch, that is, merge it into both master and develop and then delete it.

git flow hotfix finish VERSION

Discussion

Let's review what we have done in this post. We saw how to set up Microsoft Azure, AppVeyor and the Azure CLI to continuously work with GitHub, and more specifically with the Git-Flow branching model. The most important point to remember is that a single push to the develop branch, where all the features are merged, automatically re-deploys the dev slot up on Azure so that developers have a first view of what is currently happening. We also saw how easy and quick it is to dynamically deploy a new release candidate or a hotfix to the staging slot and swap it with production. As for the status of your builds, AppVeyor with its badges and email notifications will capture any failed builds or tests during push or pull requests.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.

Facebook

Twitter

.NET Web Application Development by Chris S.

ReactiveX operators – Angular playground

https://chsakell.com/2017/05/28/reactivex-operators-angular-playground/
Sun, 28 May 2017 15:43:27 +0000

The reactive programming pattern seems to be getting more and more trusted by developers for building large-scale Web applications. Applications built with this pattern make use of frameworks, libraries or architecture styles that will eventually force you to use RxJS and its operators intensively. It's kind of difficult to start using ngrx/store if you aren't already familiar with RxJS operators. This is why I thought it would be nice to create a playground project where we could gather as many RxJS operator examples as possible, using Angular. This will help you visually understand the exact behavior of each RxJS operator.

The Playground

The previous gif image is actually the home screen of the project, making use of RxJS operators in order to flip individual div elements.

The project is built with Angular 4 and Angular Material 2 and currently has examples for the most commonly used RxJS operators, such as merge, scan, reduce or combineLatest. I will be adding more in the future and you are welcome to contribute as well. You will find that each example has 3 tabs: one showing what the operator can do, another with an iframe containing the operator's documentation, and a third showing the most important lines of code used in the example.
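To make the behavior of two of these operators concrete, here is a minimal sketch of the difference between scan and reduce. This is plain TypeScript mimicking an observable's emission sequence with an array (the playground itself uses the real RxJS operators); the function names `scanEmissions` and `reduceEmissions` are mine, for illustration only.

```typescript
// scan emits every intermediate accumulation as the source emits values;
// reduce emits only the final accumulation, once the source completes.
function scanEmissions<T, A>(emissions: T[], acc: (a: A, v: T) => A, seed: A): A[] {
  const out: A[] = [];
  let state = seed;
  for (const v of emissions) {
    state = acc(state, v);
    out.push(state); // one output per input, like scan
  }
  return out;
}

function reduceEmissions<T, A>(emissions: T[], acc: (a: A, v: T) => A, seed: A): A {
  // a single output, like reduce
  return emissions.reduce(acc, seed);
}

console.log(scanEmissions([1, 2, 3], (a, v) => a + v, 0)); // [1, 3, 6]
console.log(reduceEmissions([1, 2, 3], (a, v) => a + v, 0)); // 6
```

The same source and accumulator produce three emissions with scan but only one with reduce, which is exactly what the flipping divs on the home screen visualize.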

Make sure you clone or fork the repository to get the latest changes as they are committed.


Master Microsoft Azure Web application deployment

https://chsakell.com/2016/10/29/master-microsoft-azure-web-application-deployment/
Sat, 29 Oct 2016 12:47:45 +0000

During this year we had the chance to build several Web applications using ASP.NET Core combined with frameworks and libraries such as Angular 2 or SignalR. Since then I have been receiving requests to post about how to deploy this kind of application on Microsoft Azure, so this is what we are going to do in this post. The truth is that despite the fact that those apps were built with the same technologies, they were created, and can be run, using different tools. For example, the PhotoGallery application was built entirely in Visual Studio, which means it can be opened, run and deployed to Azure using the Azure Tools for Visual Studio. On the other hand, the Scheduler app has two parts: the server part with the API, which can be opened, run and deployed through Visual Studio as well, and the client-side one, which was built outside Visual Studio using NodeJS packages and other client-side libraries. The LiveGameFeed app is an ASP.NET Core – Angular 2 – SignalR app built entirely outside of Visual Studio. These kinds of apps will be deployed using different techniques that are supported by Azure. Moreover, we are going to see how to handle the NodeJS dependencies/packages, in other words the node_modules folder in our app. This folder usually contains a large number of files and it would be a pain to get them all into Azure. The interesting thing is that the apps we have built handle NodeJS dependencies in different ways and hence will be deployed accordingly. Let's see the contents of the post in detail. Each section denotes the basic app features that will affect the way it is deployed to Azure.

The app is built entirely in a text-editor such as Visual Studio Code (no .sln file)

Firstly, the app will be published locally and then deployed to Azure using Git tools and a Local Git repository

Additional configuration needed up on Microsoft Azure in order to enable Web-Sockets and leverage SignalR features

In this post we will deploy the apps mentioned before, but in case you want to deploy your own app, just follow the instructions that best suit it.

Prerequisites

If you are a Windows user, make sure to install the Azure Tools for Visual Studio. Also, you must have an Azure subscription, which is a plan that gives you access to a variety of Azure services. You can get started with Azure with a free account. Once you decide to purchase a subscription plan, you can choose from a variety of purchase options. If you're an MSDN subscriber, you get free monthly credits that you can use with Azure services, including Azure Storage. See Azure Storage Pricing for information on volume pricing.

The PhotoGallery app is an app built using Visual Studio 2015; it uses a SQL database as its data store and Angular 2 on the client side. To start, clone the repository and follow the instructions in the README.md file. Even if you don't have a SQL Server instance installed on your machine (maybe you are a Linux or Mac user), make sure to run at least the command that initializes the migrations.

dotnet ef migrations add initial

This is important for initializing the respective SQL Server database on Microsoft Azure. When you finish setting up the application, right click the PhotoGallery app and select Publish….
This will open the Publish wizard, which requires that you are a signed-in user. You can sign in to Visual Studio in the upper right with the Microsoft Account for which you have an Azure subscription, or add that account in the next step.
We are going to use the Platform as a Service (PaaS) deployment environment to host our application. Click the Microsoft Azure App Service button and then the New… button in order to declare a Resource Group, an App Service Plan and any additional Azure services that our app requires. In case you are unfamiliar with those terms, that's all right; all you need to know is that all the resources, such as Web Applications and SQL Servers, are contained in a Resource Group. Deleting that Resource Group will also delete the contained services. Typically, all resources in a Resource Group share the same lifecycle. The App Service Plan declares the region where your resources are going to be deployed and the type of Virtual Machines to be used (how many instances, cores, how much memory etc.). Name the Resource Group PhotoGalleryRG and click New… to configure an App Service Plan. Leave the App Service Plan name as it is, set the Location that you are closest to and select any size you wish. I chose West Europe and S1 (1 core, 1.75 GB RAM) as the size.
Click OK and then click Explore additional Azure services in order to create an SQL Server and a database.
Click the green plus (+) button to add a SQL Database. Click New… next to the SQL Server textbox to create a SQL Server and enter an administrator's username and password. Click OK and your credentials will fill the textboxes as shown below. Leave the connection string name DefaultConnection and click OK. Attention: it's important that the connection string name matches the one in the appsettings.json file.

Mind that you don't need to change the connection string in appsettings.json to point to the new SQL Database on Microsoft Azure. Azure will be responsible for injecting the right connection string when required.
At this point you should have a window such as the following..
Click the Create button and Visual Studio will deploy the configured services up in Azure. Before proceeding, let's take a look at what happened in your Microsoft Azure subscription. Navigate to and sign in to the Azure Portal. Then click the All resources button..
The App Service resource is the actual web application where the PhotoGallery app will be deployed. At the moment, it is just an empty web site. You can click it and you will find various properties. Find the URL and navigate to it.
Back in Visual Studio and with the Connection tab textboxes all filled, click the Settings tab.
Make sure to check the checkboxes related to your database configuration. The first one will make sure that Azure will inject the right connection string when required and the second one is required in order to initialize the database.
Click next and finally Publish!!! Visual Studio will build the app in Release mode and deploy the PhotoGallery app up in Azure. After it finishes, it will probably open the deployed web app in your default browser.

Notes

You may wonder what happened with the NodeJS dependencies. First of all, if you check the project.json file you will notice that we certainly didn't deploy that folder.

What happened is that we deployed only the packages, using the setup-ventors gulp task, which copies all required packages into the www folder. This means that when you publish your app those packages will also be deployed, at least the first time. Of course, you need to run the build-spa task, which runs all the necessary gulp tasks, before publishing the app.

Now let's move to the Scheduler app, which consists of two different projects. The first one is the server side, which contains the MVC API controllers and a SQL Server database, and can be deployed in exactly the same way as the PhotoGallery app. For this reason, I'll assume you can deploy this app on your own; just follow the previous steps we saw before. Clone the repository and follow the instructions in the README.md file. The project you need to deploy is Scheduler.API. As you can see, I have deployed the API in a separate Resource Group and Plan..
And here's what the Azure Portal resources look like. I have filtered them by typing Scheduler in the filter box.

Deploy an Angular 2 application

The client side of the Scheduler app is built outside of Visual Studio (no .sln file); in fact I used my favorite text editor, Visual Studio Code. This is a classic Angular 2 application and certainly cannot be deployed in the same way as the previous two. First of all, go ahead and fork the repo and follow the instructions to install the app. I said fork because later on we will authorize Azure to access our GitHub projects so we can set the deployment source. This repo has two branches, master and production. I have created the production branch in order to integrate it with the Azure deployment services (we'll see this later on). I assume you have already hosted the Scheduler.API project by now, so in order to test the Angular 2 app, switch to the production branch and make sure to alter the API URL in utils/config.service.ts to point to the previously deployed Scheduler.API. Next run npm start.

Publishing a pure Angular 2 app on Azure is another story. First of all we will create the App Service in the portal, instead of letting Visual Studio create it for us as we did in the previous examples. Log in to the Azure Portal and create a new Web App.
Give it a name (mind that all App Services must have unique names) and assign it to a resource group. I assigned the app to the same Resource Group that the Scheduler API service belongs to.
Click create to provision the App Service. Switch back to the Angular app, open a terminal and run the following gulp task. Make sure you are on the production branch and you have changed the API URL to point to the Azure API.

gulp

This command will run the default gulp task in gulpfile.js and create a production build inside a build folder. This folder is the one that hosts our application and the one that we want Azure to run. If you take a good look at the generated build folder, you will find an app folder that contains the actual SPA and a lib folder that has only the required NPM packages. The task also changes folder references in the index.html and systemjs.config.js files. The most important files, though, are index.js and package.json, which are copied from the src/server folder. The package.json contains only an express server dependency to be installed up on Azure and a post-install event for installing the bower packages. Microsoft Azure will notice the package.json file and will assume that this is a node.js application. After installing the dependencies it will run the node index.js command, which in turn starts the express server. If you want to test the production build before committing any changes to the production branch, navigate to the build folder, run npm install and then node index.js. This will emulate what Azure does in the cloud.
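As a hedged sketch of what that index.js is doing (the real file in the repo uses express; this illustrative version, with a `resolveFile` helper I made up, sticks to Node's built-in modules so it has no dependencies):

```typescript
import * as http from "http";
import * as fs from "fs";
import * as path from "path";

// Map a request URL to a file under the build root. Unknown paths fall
// back to index.html so deep links into the Angular SPA still work.
function resolveFile(root: string, url: string): string {
  const clean = decodeURIComponent(url.split("?")[0]);
  const candidate = path.join(root, clean === "/" ? "index.html" : clean);
  return fs.existsSync(candidate) && fs.statSync(candidate).isFile()
    ? candidate
    : path.join(root, "index.html");
}

// Only start listening when explicitly asked to run as a server. On Azure
// App Service, node apps are given the port to bind to via process.env.PORT.
if (process.env.RUN_SERVER) {
  http.createServer((req, res) => {
    fs.readFile(resolveFile(process.cwd(), req.url || "/"), (err, data) => {
      if (err) { res.statusCode = 500; res.end(); return; }
      res.end(data);
    });
  }).listen(process.env.PORT || 3000);
}
```

The SPA fallback is the important design point: without it, refreshing the browser on a client-side route would return a 404 from the static server.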
Now that we have our production branch with our latest build, we need to configure the App Service on Azure to hook into that branch and use it for continuous integration and deployment. Click the Web app, then select Deployment options and click the GitHub option.
In order to associate a GitHub project, first you need to authorize Azure to access your GitHub account. This means that you will not be able to use my GitHub project through my account, so it is better to simply fork it to yours and use that instead. After authorizing Azure, click Choose project, find the angular2-features project in your GitHub account and finally select the production branch. Click OK.
Azure will set the deployment source and will try to sync and deploy the app.
When deployment finished, I got an error (awesome).
From the logs you can understand that Azure tried to run the root's package.json and the npm start command, which means that something is missing here. Azure needs to be aware that our project lives inside the build folder, not the root. To do this, go to Application settings and add an App setting with the key-value pair Project – build. Click Save.
Now you need to trigger a deployment, and the easiest way to do this is to push a change to the production branch. This time the deployment succeeded and we are ready to run the app!

The moment you start thinking "OK, I believe I can deploy any app I want up on Azure now", the LiveGameFeed app comes into the scene. This app is an ASP.NET Core – Angular 2 – SignalR Web Application and certainly cannot be deployed using Visual Studio (maybe Visual Studio 15 Preview can, though). It was created in Visual Studio Code, leveraging all the cross-platform features of .NET Core. This means that we need to deal with both the Angular 2 features and .NET Core at the same time, but without the Visual Studio Azure tools. Clone the repo, follow the instructions to install it and make sure you can run it locally. Switch to the Azure Portal, create a new Web App and give it a unique name.
Switch to Visual Studio Code or your editor where you opened the source code and make sure to change the apiURL in the appsettings.json file to match your newly created Web app url.

Make sure you run the tsc command in order to compile the app. The idea is to produce a build of our app in a folder, inside or outside the app. Then we will configure a Local Git repository up on Azure and set it as a remote for our local one. Finally, we will push the build to the Azure Git repository. I will publish the app by running the following command.

dotnet publish

This publishes the app in Release mode to the bin/Release folder. Switch to Azure and from the Deployment options select Local Git repository. Click OK.
Next click Deployment credentials and set username and password for this repository. Click Save.
Open the Overview Blade and you will find a Git clone url for your project. Copy it.
All we need to do now is push the published folder up to the remote repository. I will do something dirty in this example by simply copying the contents of bin/Release/netcoreapp1.0/publish to another directory.
Then I will open that folder in a terminal, init a local repository and commit all files on the master branch.

git init
git add .
git commit -m "init repo"

Then add the remote repository on Azure.

git remote add azure your_clone_url_git_here

Push the changes and enter the credentials you configured previously if asked.

git push azure master

In the Azure Portal, go to Application settings and enable Web sockets. Otherwise you won't be able to leverage the SignalR features which are needed by our app.
.. and voila!!

Conclusion

We've seen several ways to deploy a Web App up on Azure, but this doesn't mean they are the only ones. There are a few more deployment options, such as classic FTP or Visual Studio Online integration. Microsoft Azure gives you the options to set up the deployment plan that best fits your application and your organization's source control tools. Let me dwell for a moment on the way we deployed the Angular 2 Scheduler UI app. You can have a single GitHub repository for your app and create, for example, 3 branches: dev, stage and production. Up on Azure you can create the respective slots and map each one of them to the respective GitHub branch. When your stage branch reaches a stable, ready-to-deploy state, all you have to do is merge it into the production one. The Azure App Service production slot will be synced and redeployed automatically. Amazing, isn't it? Or you could set up only the stage slot to work this way and, when it's time to deploy to production, swap the stage and production slots.


Real-time applications using ASP.NET Core, SignalR & Angular

https://chsakell.com/2016/10/10/real-time-applications-using-asp-net-core-signalr-angular/
Mon, 10 Oct 2016 09:10:36 +0000
The source code for this post has been updated to the latest ASP.NET Core version (.NET Core SDK 1.0 project – can be opened using VS 2017) and angular 4 as well (Repository).

Real-time web applications are apps that push user experience to the limits while trying to immediately reflect data changes to a great number of connected clients. You make use of such applications on a daily basis; Facebook and Twitter are some of them. There are several ways to design and implement real-time web applications, and of course Microsoft made sure to provide you with a remarkable library named SignalR. The idea behind SignalR is to let the server push changes automatically to connected clients, instead of having each client poll the server at time intervals. And what does connected clients mean, anyway? The answer is hidden behind the concept of HTTP persistent connections, which are connections that may remain open for a long time, in contrast with traditional HTTP connections that can be disconnected. The persistent connection remains open due to a certain type of packet exchange between a client and the server. When a client calls a SignalR method on the server, the server is able to uniquely identify the connection ID of the caller.

What this post is all about

SignalR has been out for a long time, but ASP.NET Core and Angular 2 haven't. In this post we'll see what it takes to bind all those frameworks and libraries together and build a real-time application. This is not an Angular tutorial, nor a SignalR one. Because the final project associated with this post contains code that we have already seen in previous posts, I will only explain the parts that you actually need to know in order to build a real-time application. This is why I strongly recommend that you download the Live-Game-Feed app and study the code along with me without typing it. Here's what we'll see in more detail..

About the LiveGameFeed app

The app simulates a web application that users may visit to watch matches live. I am sure you are aware of plenty of such websites; most of them are related to betting. The idea is that there will be two matches running and, every time a score is updated, all connected clients will receive the update. On the other hand, if a user also wants to get a live feed for a specific match, then he/she has to be subscribed to that match. Moreover, if subscribed, the user will be able to post messages related to that match, while those messages will be pushed to and read only by users also subscribed to that match. Why don't we take a look at the LiveGameFeed app (zoom out a little bit if needed so that you can see both clients)..
Are you ready? Let’s start!

Fire up an empty ASP.NET Core web application using yeoman

I assume you have already installed .NET Core on your platform and opened the Live-Game-Feed app in your favorite text editor. You can start a .NET Core application either using the dotnet new CLI command or using the open-source yeoman tool. I picked the latter choice because it offers some great options for firing up an ASP.NET Core application. In order to use yeoman you need to run the following commands.

npm install -g yo bower

npm install -g generator-aspnet

Next, open a console and navigate where you want to fire up the project and run the following command:

yo aspnet

The tool will give you some options to start with.
Select Empty Web Application and give a name for your app.
Open the created folder in your editor (mine is Visual Studio Code) and check the files created. Those are the minimum files required for an empty Web Application. Navigate inside the app’s root folder and restore .NET packages by running the following command.

dotnet restore

As you can see, Visual Studio Code has also an integrated terminal which certainly makes your life easier.
Then make sure that all have been set properly by running the app..

dotnet run

Of course you will only get the famous Hello world! response but it’s more than enough at the moment.

Configure and install MVC and SignalR Server dependencies

The next step is to install the ASP.NET Core MVC and SignalR packages and add them to the pipeline as well. Your project.json file should look like this:

This error occurred because the NuGet package configuration, which is needed in order to install the SignalR and WebSockets packages, is missing. Add a NuGet.config file at the root of your app and set it as follows:

Now the dotnet restore command will not fail. You add MVC and SignalR to the pipeline the same way you add any other middleware. In the Startup.cs file you will find the following commands in the ConfigureServices method..

You will find that in the finished Startup.cs file I have also set up dependency injection for the data repositories, the Entity Framework InMemoryDatabase provider and some recurrent tasks that run using the RecurrentTasks package. We'll talk about the latter a little bit before firing up the final app.

Install SignalR Client-Typescript dependencies

The client side will be written entirely in TypeScript, which is something new, since in most SignalR tutorials the client side was written in pure JavaScript and jQuery. In case you are familiar with Angular 2, then you already know how to install npm packages. You need to create a package.json file under the root and make sure you add signalr as a dependency.

At this point you can run the npm install command to install all the NPM packages and typings as well.

Create a SignalR hub

A Hub is nothing but a C# class derived from Microsoft.AspNetCore.SignalR.Hub. The idea is that clients may connect to a certain Hub, so it makes sense that this class implements methods such as OnConnected or OnDisconnected. Let's view the abstract class in more detail.

A Hub can implement methods that the client may call and vice versa, the SignalR client may implement methods that the Hub may invoke. That’s the power of SignalR. Our app has a simple Hub named Broadcaster under the Hubs folder.

Broadcaster implements the OnConnected method by calling a client-side SignalR method named setConnectionId. The OnConnected event fires when the client calls the start method on the associated hub connection. It's going to look like this:

Before invoking a client method, you can target specific clients. In the above example we targeted only the caller, using Clients.Client(Context.ConnectionId). There are other options though, as you can see.

SignalR lets you group clients using the Groups property.

public IGroupManager Groups { get; set; }

The Broadcaster Hub has two server methods that clients may call in order to subscribe/unsubscribe to/from certain chat groups. In SignalR, all you have to do is add/remove the respective client connection id to/from the respective group. Here we set the group name equal to the matchId that the client wants to listen for messages on. Later on, when the server needs to send a message to a certain group, all it takes is the following..

Clients.Group(message.MatchId.ToString()).AddChatMessage(message);

What the previous line of code does is invoke the addChatMessage(message) client-side method only on those clients that have been subscribed to the group named message.MatchId.ToString().

Subscribe and Unsubscribe are the only methods that our hub implements that can be called from the client. The client, though, will implement many more methods, and most of them will be invoked through the MVC Controllers. As you noticed, in order to call a client-side method you need a reference to the IHubCallerConnectionContext Clients property, and for this we need to integrate MVC with SignalR.

We have also used an interface so that we have typed support for calling client-side methods. You can omit this behavior and simply derive the class from Hub.

Integrate MVC Controllers (API) with SignalR

This is the most important part of the post: making the Hub's functionality available to MVC Controllers. The reason this is so important lies in web application architectural patterns, where clients usually make HTTP calls to REST APIs, with the only difference being that this time the API is also responsible for sending notifications to a batch of other connected clients as well. For example, in the context of a chat conversation, if a user posts a new message to a MessagesController API Controller and that message needs to be delivered to all participants, the API Controller should be able to immediately push and deliver the message to all of them.
The image shows that the SignalR server can communicate with SignalR clients either via a direct "channel" between the Hub and the client, or through an integrated MVC Controller which does nothing but access and use the Hub's properties. To achieve our goal, we'll make any MVC Controller that we want to use SignalR derive from the following abstract ApiHubController class. You will find that class inside the Controllers folder.

The where T : Hub constraint means that you can create as many Hub classes as you want and make them available to any MVC Controller on demand. Now let's see an example where we actually use this class. The LiveGameFeed app has a MatchesController MVC Controller which is basically used for two reasons: first, to retrieve the available matches that our app serves, and second, when the score of a match is updated, to push the change to all connected clients.

When a match score is updated we want to notify all connected clients, regardless of whether they are subscribed to the related feed or not. The client is going to implement an updateMatch function that can be called from the Hub.

await Clients.All.updateMatch(_matchVM);

In a similar way, you will find a FeedsController MVC Controller where, when a new Feed is added to a match, the API notifies those clients that are not only connected but also subscribed to that match feed. Since we want to target only the clients subscribed to the group whose name equals the matchId, we use the Group property as follows.

Create the Angular-SignalR service to communicate with SignalR hubs

Well, here's the tricky part. First of all, you should know that the server will generate a client hubs proxy for you at the signalr/js location, and this is why you will find a reference to this file in the Views/Index.cshtml view. This script contains a jQuery.connection object that allows you to reference any hub you have defined on the server side. In many tutorials where the client side is implemented purely in jQuery, you would probably find code similar to the following:

The code references a hub named Broadcaster and defines a client-side method on the broadcaster.client object. Notice the lowercase .broadcaster declaration that connects to the Hub class named Broadcaster. You can customize both the Hub name and the path where the server renders the proxy library. We need, though, to switch to TypeScript, so let's define interfaces for the SignalR-related objects. You will find them in the interfaces.ts file.

The SignalR interface is defined in typings/globals/signalr/index.d.ts and we installed it via typings. The FeedProxy will contain references to the client and server hub connection objects respectively. Any client-side method that we want to be invoked from the server must be implemented on the client object, and any server-side method implemented on the server (e.g. Subscribe, Unsubscribe) will be called through the server object. The FeedClient is where you define any client-side methods you are going to implement, and the FeedServer contains the server methods you are going to invoke. Again, the method names are in lowercase and match the respective uppercase method on the server. If you don't use this convention you will not be able to call the server methods. The feed.service.ts file is an @Injectable Angular service where we implement our interfaces.

Implement client-side methods

The pattern is simple, and we will examine the case of the addChatMessageSubject client-side method. First you define an Observable property of type ChatMessage, because when called from the server it will accept a parameter of type ChatMessage.

addChatMessage: Observable<ChatMessage>;

The ChatMessage interface looks like this, and of course there is a corresponding ViewModel on the server.

And of course you would have to declare any methods to be called from that hub on the otherHub.client object, and so on. We followed the observable pattern, which means that any client component that wants to react when a client method is invoked from the server needs to be subscribed. The chat.component.ts listens for chat messages:
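The flow described above (the SignalR callback feeds a subject, components subscribe to the observable) can be sketched without any dependencies. A hand-rolled SimpleSubject stands in for the RxJS Subject used in the real service; all names besides ChatMessage are illustrative assumptions.

```typescript
interface ChatMessage { matchId: number; text: string; }

// Dependency-free stand-in for an RxJS Subject.
class SimpleSubject<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(listener: (value: T) => void): () => void {
    this.listeners.push(listener);
    // Return an unsubscribe function, mirroring Subscription.unsubscribe().
    return () => { this.listeners = this.listeners.filter(l => l !== listener); };
  }
  next(value: T): void { this.listeners.forEach(l => l(value)); }
}

// The service exposes the stream; the SignalR client callback pushes into it.
class FeedService {
  addChatMessage = new SimpleSubject<ChatMessage>();
  // Invoked when the server calls the client-side addChatMessage method.
  onAddChatMessage(message: ChatMessage): void {
    this.addChatMessage.next(message);
  }
}

// A component subscribes, the way chat.component.ts subscribes to the Observable.
const service = new FeedService();
const messages: string[] = [];
service.addChatMessage.subscribe(m => messages.push(m.text));
service.onAddChatMessage({ matchId: 5, text: "Nice goal!" });
```

The design point is decoupling: the SignalR plumbing lives in one injectable service, and any number of components can react to server pushes without touching the hub proxy.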

But remember: in the LiveGameFeed app, this method will be called only on those clients that are subscribed to the relevant match. This is defined in the MessagesController MVC controller, when a chat message is posted.

Add Recurrent Tasks to an ASP.NET Core application

You may have noticed that in the project.json there is a RecurrentTasks package reference. I used that package in order to simulate live updates and make it easier for you to see SignalR in action. In the Core folder you will find a FeedEngine class that triggers updates at specific time intervals.

There are two types of updates: a match score update, which is pushed to all connected clients through the MatchesController MVC controller, and feed updates, pushed through the FeedsController. In the Startup class you will also find how we configure this IRunnable task class to be triggered at time intervals.

Have fun with the app!

I guess you have already downloaded or cloned the repository related to this post, as I mentioned at the start. In order to fire up the app you need to run the following commands (open two terminals and navigate to the project). The first three will download the NPM and Bower packages and compile the Angular app; they will also watch for TypeScript changes during development.

npm install
bower install
npm start

and the .NET Core related that will restore the packages and run the server.

dotnet restore
dotnet run

Open as many browser tabs or windows as you wish and start playing with the app. Every 15 seconds the app will trigger updates and all clients will receive at least the score update. If subscribed, they will receive the feed and any messages related to the match as well. Mind that two tabs in the same browser window are two different clients for SignalR, which means they have different Connection Ids. The connection id for each client is displayed on the chat component. When a new feed is received, the new row to be displayed is highlighted for a while. Here is the Angular directive responsible for this functionality.

Conclusion

The SignalR library is awesome, but you need to make sure it is the right choice before using it. If you have multiple clients to which it is important to push updates in real time, then you are good to go. That's it, we finally finished! We have seen how to set up an ASP.NET Core project that leverages the SignalR library through MVC controllers. Moreover, we used SignalR typings in order to create and use the SignalR client library with Angular and TypeScript.

Source Code: You can find the source code for this project here where you will also find instructions on how to run the application.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.

Facebook

Twitter

.NET Web Application Development by Chris S.

Building hybrid mobile apps using Ionic 2 and Firebase
https://chsakell.com/2016/08/27/building-hybrid-mobile-apps-using-ionic-2-and-firebase/
Published: Sat, 27 Aug 2016

Mobile application development has changed dramatically in recent years, with more and more frameworks trying to stand on the front line and convince developers that they are the best option for building hybrid mobile applications with minimal effort. One of the top frameworks on the market right now is Ionic, and more specifically Ionic 2, which is a complete rewrite and redesign of the first version. If you are an Angular developer you will find the Ionic framework exciting, since you can leverage all your knowledge and easily build mobile apps using Ionic's components, which are nothing more than Angular components.

What is this post all about

We have seen several Angular and .NET related posts and apps on this blog, and it's time for us to see a mobile app as well. The purpose of this post is to build a hybrid mobile app using Ionic 2, Angular 2 and Firebase. Yes, you've just read Firebase. You don't have to know anything about Firebase to follow along, because I'll guide you through it step by step. The reason I chose Firebase is that I wanted you to be able to build and run the app on your mobile phone immediately, by the end of this post. Right now there are some tutorials regarding Ionic 2, but most of them describe the basics of building mobile apps, such as how to set up the app or use a specific component. The app we are going to build here will use features that you see in famous apps such as LinkedIn or Facebook. Which features? Let's enumerate some of them.

Network availability detection

Offline application operation

SQLite database support

Event notifications

Camera features

File uploading

Open browsers

.. and much more. Before we start setting up the required environment for the app, let us see a preview.
I hope you enjoy this journey as much as I did. Go grab some coffee and let's start!

Firebase setup

So what exactly is Firebase? In a nutshell, Firebase is a platform that gives us, out of the box, a database to store our data, a storage location to store blobs or files such as user profile pictures and, last but not least, the authentication infrastructure for users to sign in to the app. In other words, Firebase has everything our app needs, and it's free! All you need is a Google account in order to log in. One of the most important reasons Firebase is so popular is its event-based mechanism, which is crucial for mobile apps. Consider the example of the Facebook app: you create a post and some of your friends start posting comments on it. All of your friends receive the updates instantly in their app. Yes, Firebase can do that too. Using its API, each of your Angular components can subscribe to a specific database location (we'll explain a little later) and every time an update happens at that location, such as a comment being added, all subscribers get the update instantly. The first thing we need to do in order to start using Firebase is to create a project. Go ahead and sign in to Firebase using your Google account.
After signing in, click the Go to console button and press CREATE NEW PROJECT.
Name the project ForumApp and choose your country.
Firebase will create the project and redirect you into the console where you can see all the available options in Firebase. We will be using mostly the Auth, Database and Storage services.
The USERS tab on the Auth page displays all users that have been registered in the project. You can create a Firebase user either through the ADD USER button or using the API, as we are going to see later on. For the moment don't do anything, just take a look.
In my case there’s only one user registered. I have registered this user through the mobile app and not from the website. Firebase allow you to authenticate application users using several providers such as Github, Facebook or Twitter. To view all available providers click the SIGN-IN METHOD tab. Our application will use the Email/Password provider so we need to enable it. Click on that provider, enable it and save.
Click Database in the left menu. This is where our data will be stored, in JSON format. Yes, I've just said JSON. Your data in Firebase is going to be one large JSON object, which means that if you only have a relational database background this is going to be a really strange experience. Forget about foreign keys or complicated queries. It's just a JavaScript object, and Firebase's API will help you run queries against it. Here's what my database looks like.
Each node roughly corresponds to a table in a relational database, but this time, since it's a JavaScript object, it can also contain other nested JavaScript objects. Notice, for example, the way we are going to store the voting information on a comment entity. A Comment has a unique identifier such as -KPhOmvtsJ6qTcIszuUE and a key named votes, which in turn is a JavaScript object recording which user voted Up (true) or Down (false). Here the user with uid YohF9NsbfLTcezZDdTEa7BiEFui1 has voted Up for the specific comment. With this design you know how many and which users have voted for a specific comment, and moreover you prevent a user from voting more than once. Each node or key in the database is a Firebase location that can be referenced. It's very important to understand this concept, because queries and event listeners require Firebase locations, the so-called references. You can read more about references here. Before switching to the Storage page we need to set the access level in our database. Press the RULES tab on the Database page. By default only authenticated users may read or write in our database. Change the Rules object as follows:
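The rules snippet itself did not survive formatting; based on the description that follows (statistics/threads and threads open to unauthenticated users, comments restricted), it likely resembled this sketch, which is a reconstruction rather than the exact original:

```json
{
  "rules": {
    "statistics": {
      "threads": { ".read": true, ".write": true }
    },
    "threads": { ".read": true, ".write": true },
    "comments": {
      ".read": "auth != null",
      ".write": "auth != null"
    }
  }
}
```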

What the above rule means is that the statistics/threads and threads locations are readable and writable by unauthenticated users, but comments aren't. The application's users will be able to upload pictures, but we need to set this up in Firebase first. Click the Storage menu button and set the Rules as follows:
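The storage rules snippet was also lost in formatting. Given the description below (anyone may read images, only the authenticated owner may write under his/her own uid folder), a Firebase Storage rules sketch along these lines would fit; treat it as a reconstruction under those assumptions, not the post's exact code:

```
service firebase.storage {
  match /b/your_id.appspot.com/o {
    match /images/{userId}/{imageName} {
      // Anyone may view profile pictures.
      allow read;
      // Only the authenticated owner may upload into his/her own folder.
      allow write: if request.auth != null && request.auth.uid == userId;
    }
  }
}
```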

Make sure to replace your_id with yours. Each user will upload his/her profile picture under an images folder, in a sub-folder named after the user's uid.
With these rules all users may view other users' images, but only authenticated users can upload. We are done setting up Firebase; time for the good stuff.

Ionic 2 – The Forum App

In order to start developing Ionic mobile apps, you need to install Ionic first. After installing NodeJS (in case you haven't already), run the following command.

npm install -g ionic@beta

Later in the post we will be adding Cordova plugins to our app for accessing native mobile features, so go ahead and run the following command as well.

npm install -g cordova

We'll start a brand new Ionic 2 project using the Ionic CLI's start command with a blank template parameter. Go to your working directory and run the command.

ionic start forum-app blank --v2

Ionic will create a blank project in a folder named forum-app, which you may open with the IDE of your preference. I personally use Visual Studio Code, which I find great for both client-side and mobile development. Your starting project should look like this. The app folder is the one we will focus on mostly. The plugins folder is where Cordova plugins are installed. One thing I want you to do immediately is update the ionic-native package version inside the package.json file, because ionic-cli may not use the latest version by default, which could result in some modules not being found. Update it as follows.

I changed mine from 1.3.10 to 1.3.17. Make sure you re-run npm install to update the package. In case you wonder, Ionic Native is a set of ES5/ES6/TypeScript wrappers for Cordova/PhoneGap plugins, which will help us a lot in accessing native features on the device.
Now let's start talking about our app. The Forum mobile app is an app where users can create Threads and then add Comments. A thread belongs to a specific category, which you may change as you wish. Comments may also have Up and Down votes. A user may add a thread to his/her favorites collection. We want users to be able to upload profile pictures using either their mobile camera or their photo albums. We also want to add a specific view that displays info about the Forum app. Only authenticated users can view or create Threads and Comments; in other words, only authenticated users may use the Forum app. With that said, we should already start thinking about the views we need to create in the app. I can tell that we need at least three tabs: one to display all threads, another for the user's profile info and a last one for the app's info. Each tab in Ionic can have nested views, and hence the first one, which initially renders the threads, will allow the user to navigate and view a thread's comments or create a new Thread or Comment.
We mentioned that only authenticated users may use the app, so we need to provide a way for them to register and log in as well. There will be two pages for this purpose, a Login and a Register page. Those pages will not be sub-views of a specific tab, but injected by a root component under certain circumstances. Moreover, we'll use a Menu for signing out of the app.
Add an app.html page under the app folder and paste the following code.

The idea here is to make the menu component accessible from all tabs. The ion-nav's rootPage will be either the TabsPage component or the LoginPage. I won't show you the entire app.ts code yet, because it contains native-related code and you would get confused. The app.ts file is the one that bootstraps the Ionic app. Here's part of it.

We have three tabs in our app: one to display threads, one to display the user's info and another for the application's info. The Threads tab has a tabBadge in order to inform the user, in real time, that new threads have been added in Firebase. When this tab displays the badge, meaning that new threads have been added, clicking it should publish a threads:add event so that any subscribers (ThreadsPage) do what they have to do.
Add the tabs.ts file under the tabs folder as well.

Services

Our app is not only an Ionic app but an Angular one as well. It will make use of some shared @Injectable() services and component directives. We will create them first, so we can start getting familiar with the Firebase API. Add a folder named shared under app and create the interfaces.ts file.

Take a look at the models that we are going to use in the Forum app; they are pretty self-explanatory. Here's how a Thread object is represented in Firebase.
For communicating with Firebase we will be using References to specific locations or keys in our database object. The API calls almost always return a Promise with an object called DataSnapshot, which in turn we need to map to one of the model entities we created before. For this reason, add a folder named services under shared and add the mappings.service.ts file.

Run npm install and typings install to install the new packages. Time for the most important service in our Forum app, the one responsible for retrieving data from Firebase. Add the data.service.ts inside the services folder. Instead of pasting all the code here, I will explain the important functions one by one. You can copy the entire data.service.ts contents from the repository. At this point I strongly recommend you study the firebase.database.Reference API. First, we declare the Firebase references we will use in the app.

Self-explanatory, I believe. The connectionRef is how Firebase lets us detect the client's connection state. We will use this in the ThreadsPage initialization logic, in order to check whether the user can communicate with Firebase or not. If not, we'll try to fetch SQLite data from the app's database and keep working in offline mode until the network connected event fires. But something is missing here: the firebase object needs to know where your project is, in other words your project's settings, in order to resolve the previous references. Log in to Firebase and go to your project's console. There you will find an Add Firebase to your web app button.
Click the button and copy its contents.
Now open www/index.html and change the body contents as follows. Make sure you paste in the settings you copied in the previous step.

Now back to data.service.ts. The InitData function initializes the first Thread for you, just for demonstration purposes. The transaction method will check if there is any value set at the statistics/threads location. If not, it will set the statistics/threads value equal to 1 (return 1) and, when the transaction is successfully committed, it will push the new thread. The push method generates a unique key which will be used later as the key property of an IThread. We commit the new thread using the setWithPriority method so that each thread has a priority depending on the order it was added.

The reason we used a transaction here is that, in case you deploy the Forum app in your browser using the ionic serve --lab command, three different instances will be initialized, one for each platform. If we removed the transaction, there is a possibility that all of them would try to push the new thread, which means you would end up having three threads and an invalid statistics/threads value equal to 1, because when all three of them checked the location, the value was null.
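The race described above is easier to see with a minimal in-memory model of the transaction semantics: the update function receives the current value, and only the first racer finds it null and seeds the counter. The store and all names here are invented for illustration, not Firebase's actual client code.

```typescript
// Tiny in-memory model of Firebase's transaction behavior: the update
// function sees the current value at the location and returns the new one.
class FakeStore {
  private data = new Map<string, number>();

  get(key: string): number | null {
    const value = this.data.get(key);
    return value === undefined ? null : value;
  }

  transaction(key: string, update: (current: number | null) => number): void {
    // A real client retries on contention; a single pass suffices in-process.
    this.data.set(key, update(this.get(key)));
  }
}

const store = new FakeStore();
// Three app instances race to initialize statistics/threads, as when
// ionic serve --lab spins up one instance per platform.
let seeded = 0;
for (let instance = 0; instance < 3; instance++) {
  store.transaction("statistics/threads", current => {
    if (current === null) {
      seeded++;       // only the first racer sees null
      return 1;       // seed the counter, mirroring InitData
    }
    return current;   // already initialized: leave the counter alone
  });
}
```

Without the transaction, all three instances would read null before any write landed, and each would both push a thread and reset the counter to 1.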

Disclaimer: I have used priorities in order to sort and support pagination when retrieving Threads later, in a simple way. This is not the best approach, because if you break the statistics/threads value or remove a thread from Firebase you are going to get strange results. But let's keep some things simple in this app and focus mostly on the features rather than the implementation.

CheckFirebaseConnection is the function that listens at a specific Firebase location and checks the client's connection status.

The submitThread function is simple to understand. It creates a new reference in Firebase and commits the new thread in the same way we saw before. It also updates the current number of threads at the statistics/threads location, which means that before invoking this method we need to check the current number of threads and increase it by one. You may wonder why we have to keep a location such as statistics/threads anyway. The thing is that this is how you work in a NoSQL environment. You may have to keep copies of your values in multiple places so you don't have to retrieve all the data each time. If we didn't have statistics/threads, we would have to get the entire threads dataSnapshot and enumerate it to get its length. Another example we are going to see later on is the way we know who created a comment. A comment has a user object with the user's unique identifier plus his/her username. If that user changes the username, you will have to update all those references.

We call the set method to store the user's favorite threads in the addThreadToFavorites method. The method will create a key-value pair under the user's unique key. This is how we know the favorite threads for a specific user: if a thread belongs to his/her favorites, then a threadKey: true pair exists under that user's object.
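The favorites shape can be sketched as plain data: under each user's uid we keep a map of threadKey to true, so membership checks are constant-time lookups instead of array scans. The helper names and sample keys are illustrative assumptions.

```typescript
// Favorites as stored in Firebase: { uid: { threadKey: true, ... }, ... }
type Favorites = { [uid: string]: { [threadKey: string]: true } };

function addThreadToFavorites(store: Favorites, uid: string, threadKey: string): void {
  // Create the user's node on first use, then set the threadKey: true pair.
  (store[uid] = store[uid] || {})[threadKey] = true;
}

function isFavorite(store: Favorites, uid: string, threadKey: string): boolean {
  return !!store[uid] && store[uid][threadKey] === true;
}

const favorites: Favorites = {};
addThreadToFavorites(favorites, "user-1", "-KPhThreadA");
```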

Committing a new comment works in a similar way. The submitComment method accepts the thread's key, under which the comment was created, and the comment itself. Mind that before calling this method we have already called the push method on the commentsRef, so that we have the newly generated key available. We make sure to update the number of comments existing under the specific thread.

Let's see how a user can submit a vote for a comment. There are two options, Up or Down, and the value is stored under the respective comment. We have the voteComment function that accepts the comment's unique key, the user's uid, and true or false for Up and Down votes respectively.

This way, if a user presses the same value again (Up or Down), nothing changes.
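A minimal model of that vote storage makes the idempotency visible: the votes object maps uid to true (Up) or false (Down), and writing the same value again is a no-op. Function names and return values are assumptions for this sketch, not the post's exact code.

```typescript
// votes as stored under a comment: { uid: true | false, ... }
interface CommentVotes { [uid: string]: boolean; }

// Returns true if the vote changed anything, false for a repeated press.
function voteComment(votes: CommentVotes, uid: string, up: boolean): boolean {
  if (votes[uid] === up) return false; // same vote again: nothing changes
  votes[uid] = up;
  return true;
}

function tally(votes: CommentVotes): { up: number; down: number } {
  let up = 0, down = 0;
  for (const uid of Object.keys(votes)) {
    if (votes[uid]) up++; else down++;
  }
  return { up, down };
}

const votes: CommentVotes = {};
voteComment(votes, "user-1", true);   // Up
voteComment(votes, "user-1", true);   // repeated Up: ignored
voteComment(votes, "user-2", false);  // Down
```

Because the uid is the key, a user can never hold two votes at once; switching from Up to Down simply overwrites the single entry.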
There are two more important functions in the DataService that I would like to explain. The first one is getUserThreads, which fetches threads created by a specific user. It uses the orderByChild method to locate the threads/user/uid key, in combination with the equalTo method to match only a specific key.
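In-memory, what orderByChild("user/uid") plus equalTo(uid) selects is simply the threads whose nested user.uid matches. The record shape below follows the post's description of a thread's user object; the helper itself is an illustrative stand-in for the Firebase query, not a replacement for it.

```typescript
interface ThreadRecord {
  key: string;
  title: string;
  user: { uid: string; username: string };
}

// Equivalent of ordering by the nested child "user/uid" and keeping
// only the entries equal to the given uid.
function getUserThreads(threads: ThreadRecord[], uid: string): ThreadRecord[] {
  return threads.filter(t => t.user.uid === uid);
}

const all: ThreadRecord[] = [
  { key: "t1", title: "Ionic 2 tips", user: { uid: "u1", username: "chris" } },
  { key: "t2", title: "Firebase rules", user: { uid: "u2", username: "maria" } },
  { key: "t3", title: "Angular forms", user: { uid: "u1", username: "chris" } },
];
const mine = getUserThreads(all, "u1");
```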

Add the auth.service.ts file under the services folder. The AuthService uses the firebase.auth.Auth Firebase interface for authenticating users in Firebase. Mind that there are several providers you can sign in with, such as GitHub or Google, but we will use the signInWithEmailAndPassword method.

There's one more service we need to create, the SqliteService, which is responsible for manipulating local data on the mobile device when working in offline mode. But let's ignore native components for the moment and keep setting up core components. Add the app.providers.ts file under the app folder. This file exports all services, making them available in our Angular app.

Component Directives

We will create a ThreadComponent to display threads in the ThreadsPage list. Each thread will be responsible for listening for events that happen only on that thread, which in our case is the number of comments added. Add a new folder named directives under shared and create the thread.component.ts.

The on and off functions start and stop listening for data changes at a particular location. This is how each thread automatically updates the number of comments posted on it in real time. Firebase sends the update to all connected users immediately.
Another important function is viewComments, which informs the parent component (ThreadsPage) to open the CommentsPage for the specific thread. Add the thread.component.html template for this component in the same folder.

You may have noticed that this component uses a forum-user-avatar element. It's another component we are going to create, responsible for rendering the user's profile picture uploaded to Firebase's storage. Add the user-avatar.component.ts under the directives folder.

This component accepts an @Input() parameter and sets the imageUrl property. We would like this image to be zoomed when clicked, though. It is high time for us to see the first native feature in the Forum app. We are going to use the Photo Viewer Ionic Native plugin to accomplish our goal. The first thing we need to do is run the following command and install the Cordova plugin.

ionic plugin add com-sarriaroman-photoviewer

Inside the component we import the PhotoViewer TypeScript wrapper from ionic-native and we bind the click event to call the static show method. That's all that's needed!

Login & Register on Firebase

Users should be authenticated in order to view or add threads and comments, so let's proceed with those views first. Add a folder named signup under pages. In Ionic, it's common to create three files for each page: one .ts Angular component which holds the logic, one .html to hold the template, and a .scss file for the stylesheets. Go ahead and create the signup.ts, signup.html and signup.scss files under the signup folder. The SignupPage requires basic information from the user: a unique email address and a password, which are required by Firebase itself to create the account, and some other data we would like to keep, such as a username and date of birth. We would also like to add validation logic to the signup page, and for this we'll use Angular Forms. Let's have a preview of this page first.
Set the signup.ts contents as follow:

We need to create the EmailValidator and CheckedValidator validators. Add a folder named validators under the shared folder and create the following two files: email.validator.ts and checked.validator.ts.
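The two validators boil down to simple predicates. In the app they receive an Angular FormControl, modeled below as a plain { value } object so the sketch is self-contained; the regex and error keys are assumptions, not the post's exact code.

```typescript
// Stand-in for Angular's FormControl in this self-contained sketch.
interface ControlLike { value: any; }

// Returns null when valid, or an error object when not,
// matching the Angular validator contract.
function emailValidator(control: ControlLike): { [key: string]: boolean } | null {
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return pattern.test(String(control.value)) ? null : { invalidEmail: true };
}

// Valid only when a checkbox-like control is actually checked.
function checkedValidator(control: ControlLike): { [key: string]: boolean } | null {
  return control.value === true ? null : { notChecked: true };
}
```

Wired into a FormGroup, a non-null return marks the control invalid and drives the template's validation messages.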

The SignupPage component makes use of two Ionic components to notify the user that something is happening or has happened. The first one is the Toast, which displays a message when the registration process is completed.

There are two more important functions in signup.ts, CreateAndUploadDefaultImage() and startUploading. The first one reads a local file named avatar.png which exists under the www/images folder. Copy the image file from here and paste it inside the www/images folder (or another one of your choice; just make sure the file name matches the one used in the code). The startUploading method uses the method described here and uploads a default image to the Firebase storage that we set up at the start of this post. We will use the same method to upload files captured by the mobile's camera or picked from the mobile's photo album later on the Profile page.
The Login page is much simpler than the signup one. Add the login.ts and login.html files under a new folder named login in pages.

Nothing we haven't seen already. Just simple validation logic and a call to the AuthService signInUser method. Notice, however, that on a successful login we make sure to set the root of the NavController to the TabsPage. I recommend you spend some time reading the basics of the Nav API as well.

Threads Page

This page is responsible for displaying all threads existing in Firebase, ordered by priority. The thread with the largest priority is displayed first. Add a threads folder under pages and create the threads.html template first.

There are four basic parts in the template. The first one is the ion-segment, which is just a container for buttons. The segment allows the user to switch between all threads and his/her favorites. They are just buttons, nothing more. The second important component in the template is the ion-toolbar, which allows the user to search in public threads only (not favorites).
We also use an ion-refresher element for refreshing the entire list. The truth is that we don't need this functionality that much, because we will bind events on Firebase which will notify the app each time a new thread is added. Then we have an ion-list that renders the currently loaded threads and, last but not least, an ion-infinite-scroll element. This component will allow us to support pagination: every time the user scrolls and reaches the bottom of the page, the next batch of threads will be loaded from Firebase. For this to work we need to keep track of the priority of the last thread loaded in the application (and that's why we used priorities). For simplicity, the refresher and the infinite scroll components will be enabled only when the 'All' segment button is pressed and the user is connected to the network. That's why you see some *ngIf conditions in the template. Once again, get the entire source code of the threads.ts file here. I will explain the most important methods of the ThreadsPage component. We need ViewChild from @angular/core and the Ionic Content so we can scroll the ion-content up and down. We import the NavController, the ThreadCreatePage and the ThreadCommentsPage so we can push those pages onto the stack while always remaining on the Threads tab. We also import all our custom services for both online (Firebase) and offline (SQLite) CRUD operations, and Events from Ionic for sending and responding to application-level events across the Forum app. One case where we are going to use Events is getting notified of network disconnection or re-connection.

The first thing we need to do is decide whether we are connected to Firebase or not, and fetch the data from the internet or the SQLite database respectively. This is what ngOnInit() and checkFirebase() are for.

checkFirebase waits for five seconds before deciding whether to load data from the local database. The DataService listens at a specific location in Firebase that reflects the client's connection status, which is returned by the isFirebaseConnected() function.
There are three key variables on this component:

The threads variable holds the items displayed in the ion-list. Whether the 'All' segment button is selected or 'Favorites', this variable should hold the right data. The newThreads variable holds new items added by other users, and is populated instantly because of the following listening event:

What this line of code does is start listening for changes at the statistics/threads Firebase location, which we update only when we add a new thread. And because we set it equal to the new thread's priority, here is the onThreadAdded function as well.

This function retrieves the newly created thread, adds it to newThreads and publishes a thread:created event. The TabsPage component, which holds the tabs, is subscribed to this event in order to display a badge on the Threads tab. Here's how it looks. On the right you can see that I changed the statistics/threads value on purpose, so that the app thinks someone has created a new thread.
We also subscribe to a threads:add event in order to add all the new threads that have been created, mostly by other users.

self.events.subscribe('threads:add', self.addNewThreads);
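The cross-component flow behind that line can be seen in isolation with a minimal re-implementation of the subscribe/publish surface used here; the class below mimics the Ionic Events API shape for the sketch and is not the Ionic implementation itself.

```typescript
// Minimal publish/subscribe hub with the subscribe/publish surface
// used in the post (topic string plus handler arguments).
class Events {
  private channels = new Map<string, Array<(...args: any[]) => void>>();

  subscribe(topic: string, handler: (...args: any[]) => void): void {
    const list = this.channels.get(topic) || [];
    list.push(handler);
    this.channels.set(topic, list);
  }

  publish(topic: string, ...args: any[]): void {
    (this.channels.get(topic) || []).forEach(h => h(...args));
  }
}

// ThreadsPage subscribes to threads:add; TabsPage publishes when the
// badge is tapped, handing over the batch of new threads.
const events = new Events();
const shown: string[] = [];
events.subscribe("threads:add", (titles: string[]) => shown.push(...titles));
events.publish("threads:add", ["New thread from another user"]);
```

The pattern keeps TabsPage and ThreadsPage ignorant of each other: both depend only on the event topic, which is what makes application-level events like network:connected practical.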

This event will fire from the TabsPage component when the Threads tab has a badge containing the number of new threads that have been added in Firebase.

The TabsPage component will receive the threads:viewed event and will remove the badge from the tab. The ngOnInit() function also subscribes to the network:connected event in order to get notified when the client reconnects.

self.events.subscribe('network:connected', self.networkConnected);

When this event fires, if a connection exists we reload threads from Firebase; otherwise we make sure to reset the mobile's local SQLite database and save the currently loaded threads. This is just a choice we made to keep things simple and always have SQLite contain the latest threads loaded in the app.

The getThreads() function is quite important, since it is the one that loads threads from Firebase. When the 'All' segment button is pressed, we retrieve the threads ordered by priority, keeping track of the priorities loaded using the self.start variable. If the 'Favorites' button is pressed, we enumerate the user's favorite threads and, for each key retrieved, we download the respective thread and add it to the array.
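The priority-based paging can be sketched as a pure function: threads are ordered by priority descending and each call returns the next batch below the last priority already loaded (the role of self.start). Field and function names here are assumptions for illustration.

```typescript
interface PagedThread { key: string; title: string; priority: number; }

// Return the next page: the highest-priority threads strictly below
// the last priority already loaded.
function getThreadsPage(all: PagedThread[], startBelow: number, pageSize: number): PagedThread[] {
  return all
    .filter(t => t.priority < startBelow)
    .sort((a, b) => b.priority - a.priority)
    .slice(0, pageSize);
}

// Five threads with priorities 1..5; priority 5 was added last.
const data: PagedThread[] = [1, 2, 3, 4, 5].map(p => ({
  key: `t${p}`, title: `Thread ${p}`, priority: p,
}));

const firstPage = getThreadsPage(data, Number.MAX_SAFE_INTEGER, 2); // t5, t4
const lastLoaded = firstPage[firstPage.length - 1].priority;        // remember, like self.start
const secondPage = getThreadsPage(data, lastLoaded, 2);             // t3, t2
```

This also shows the disclaimer's caveat: if a priority is skipped or a thread is deleted, a page can come back short even though older threads still exist.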

The searchThreads() function searches Firebase only when the 'All' segment button is pressed. It's a very simple implementation that checks whether the title of a thread contains the query text entered by the user.

The last two functions, createThread and viewComments, are responsible for pushing new pages onto the stack. The first one renders the ThreadCreatePage (we'll create it shortly) using a Modal, while the latter simply pushes the ThreadCommentsPage with the thread's key passed as a parameter. The pushed page will read the parameter in order to load the comments posted on that thread.

Add a new folder named thread-comments and create a thread-comments.ts file. Copy the contents from the repository. Let me explain the core parts of this component. On init, we get the thread's key passed from the previous page using NavParams. Then we load that thread's comments on the page. The structure in Firebase looks like this.
Above you can see two comments for two different threads.

This page allows the user to mark the thread as a favorite. It does so using an Ionic ActionSheet component. If the user adds the thread to his/her favorites collection, a key-value pair is added under the currently logged-in user object in Firebase.
Here are the thread-comments.html template and the thread-comments.scss custom stylesheets as well.

There's a Fab button in the template that opens the CommentCreatePage. The logic is much the same, so just create a folder named comment-create under pages and add the following comment-create.ts and comment-create.html files.

Profile Page

This page displays some basic info about the user, such as username or date of birth, fields that were created during registration, plus some statistics, such as how many threads and comments the user has created. Moreover, it will allow the user to upload a new image from his/her mobile camera or album folder. For this we will need to import a Cordova plugin. Add a folder named profile under pages and create a profile.ts file. Copy the contents from here. Let's explain the most important parts of this component. The import statements should be familiar to you by now, except for a new one, the Camera ionic-native plugin. Run the following command to install this plugin.

The loadUserProfile is the core function that gets all the user's data. It calls getUserData(), which fills in the Firebase account data, then loads the user's image from storage using the getDownloadURL function. It also calls the getUserThreads() and getUserComments() functions to count the number of threads and comments submitted by this user.
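As an illustration of the counting step (the record shape and function name here are hypothetical; the real service reads the data from Firebase), counting a user's threads from a snapshot-style object map boils down to:

```typescript
// Firebase returns sibling records as an object map keyed by push keys
interface ThreadRecord {
  userKey: string;
  title: string;
}

// Count how many threads in the snapshot belong to the given user
function countUserThreads(
  threads: { [key: string]: ThreadRecord },
  userKey: string
): number {
  return Object.keys(threads).filter(k => threads[k].userKey === userKey).length;
}

const snapshot: { [key: string]: ThreadRecord } = {
  '-K1': { userKey: 'u1', title: 'Welcome' },
  '-K2': { userKey: 'u2', title: 'Ionic tips' },
  '-K3': { userKey: 'u1', title: 'Firebase rules' }
};
console.log(countUserThreads(snapshot, 'u1')); // 2
```

The same pattern applies to comments: filter the snapshot by the user's key and take the length.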

Depending on what the user selects, the openCamera() function will be called with the respective source parameter. Of course, all Cordova plugins are only available while running the app on your mobile, not in the browser. The openCamera() function will open either the mobile's Camera or the Photo album gallery and, when done, will capture and convert the data into a Blob, which is required by Firebase for uploading files. The startUploadingImage function is quite similar to the one described in the signup page.

The About tab page displays some info about the app. It is the simplest page, and the only notable thing to explain is the InAppBrowser plugin it uses. We want to be able to open links in the browser through this page, so go ahead and install the plugin.

ionic plugin add cordova-plugin-inappbrowser

Opening URLs in the in-app browser couldn't be easier. Add a new folder named about in pages and create the about.ts file.

SQLite Service

We've said that we want our app to be able to display content (at least some threads) while in offline mode. For this we need to have our data stored locally on the device. We will use the SQLite Cordova plugin to accomplish our goal, and we'll make sure that every time the user disconnects, the currently loaded threads are saved in a database on the mobile device. You can store any data you wish, but for simplicity we will only store threads and users. In case you are unfamiliar with SQLite, here is a good tutorial to start with. First of all, install the SQLite plugin by running the following command.

ionic plugin add cordova-sqlite-storage

Add an sqlite.service.ts file under the shared/services folder and paste the contents from here. First we import all the modules needed.

OK, we save data, but how do we read it back? There is a getThreads() function called from the ThreadsPage component which not only selects threads from the Threads table but also joins the records with the Users table. I have also created a printThreads method so you can see how easy reading data with SQLite is.
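To give a flavor of that join (the table and column names here are assumptions, not necessarily the repository's exact schema), the SELECT behind a getThreads()-style query could look like this:

```typescript
// Hypothetical SQL for a getThreads()-style function: pull each thread and
// join it with the user who created it
const getThreadsQuery: string = `
  SELECT t.key, t.title, t.dateCreated, u.username
  FROM Threads t
  INNER JOIN Users u ON t.userKey = u.key
  ORDER BY t.dateCreated DESC`;

// With the SQLite plugin the query would run roughly as:
// this.db.executeSql(getThreadsQuery, []).then(result => { /* map rows */ });
```

Because SQLite is a relational store, a single join replaces the multiple round trips you would otherwise need to stitch threads and users together by hand.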

Bootstrap the Ionic Forum app

The last component left to add is the first one called when the app fires. Copy the contents of the ForumApp component into app.ts from here. Let's take it step by step. The ngOnInit() function ensures that when the user is unauthenticated, the LoginPage becomes the root page. Don't use nav.push here, because pressing the hardware back button would render the previous page on the stack.
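The root-page decision can be sketched as a pure function (the page names follow the app; the helper itself is illustrative):

```typescript
type RootPage = 'LoginPage' | 'TabsPage';

// When the user is unauthenticated the LoginPage becomes the root page.
// Using a setRoot-style replacement (not push) keeps the hardware back
// button from navigating back to a protected page.
function resolveRootPage(isAuthenticated: boolean): RootPage {
  return isAuthenticated ? 'TabsPage' : 'LoginPage';
}

console.log(resolveRootPage(false)); // 'LoginPage'
```

Calling this from an auth-state subscription keeps the navigation logic in one testable place.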

We import the Network Ionic Native plugin for detecting network changes (connect-reconnect). Install the plugin by running the following command.

ionic plugin add cordova-plugin-network-information

Any plugin initialization code should be placed inside the platform.ready() event, which ensures that all Cordova plugins are available. We also use window.cordova to make sure we are not running the app in our local browser. This will prevent console errors when deploying your app in your local browser using the command ionic serve --lab.
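The guard boils down to a small helper; the function name is an assumption of mine, but the cordova check mirrors the one described above:

```typescript
// Resolve the global object without assuming a browser environment
const globalObj: any = new Function('return this')();

// Run plugin initialization only when Cordova is actually available,
// i.e. inside a device WebView, not under `ionic serve --lab`
function initNativePlugins(register: () => void): boolean {
  if (globalObj && globalObj.cordova) {
    register();
    return true;
  }
  return false; // plain browser: skip plugin setup to avoid console errors
}
```

In the real app the register callback would wire up the Network plugin's connect/disconnect handlers.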

On connect or disconnect we publish the network-connected event so that subscribers do what they have to do, for example save the currently loaded threads in the device database. We also reset the SQLite database in order to store only the currently loaded threads. This is probably not what you would do in a production app, but we'll do it to keep things simple. We want the SQLite database to always hold the last loaded threads and nothing else. Another plugin we use is the SplashScreen. Install it by running the following command.

ionic plugin add cordova-plugin-splashscreen

We call the Splashscreen.hide() method in order to hide the splash screen when the app starts, otherwise you may wait a few seconds due to the default timeouts.

Theming

Theming your Ionic app is crucial, and the app/theme folder contains SASS files, either platform-specific or generic. In case you used custom SASS stylesheets in your pages, like we did before, you need to import those files in the app.core.scss file, otherwise you will not see the changes.

When you deploy your app for the first time (we will talk about this soon), you'll see a default splash screen, which apparently isn't what you really want. You probably want to customise this image to reflect your company's brand. Ionic-CLI can do that for you with a single command, but you need to make some preparations first. There is a resources folder in your application with two important files in there, the icon.png and splash.png images. All you need to do is replace those files with your own .png files. You need to make sure, though, that the files have proper sizes, such as 1024×1024 for icon.png and 2208×2208 for splash.png. Moreover, verify that your images are really .png files. Check why here. The Ionic command you need to run next in order to generate all the required files is the following.

ionic resources

Before running that command, though, you need to add at least one platform module to your app. Run one of the following commands depending on which platform you wish to build for.

ionic platform add android
ionic platform add ios

The ionic resources command will place the newly generated files inside resources/platform/icon and resources/platform/splash respectively.

Running the Forum app

If you want to run the Forum app in your browser, all you have to do is type the following command.

ionic serve --lab

This command will open the app in your default browser and display it in three different modes: iOS, Android and Windows. This mode is more than enough during development, but remember, you cannot test native features such as the Camera or Network plugins we added before. When you decide to run the app on your device, whether it is an iOS, Android or Windows one, you need to install some prerequisites first. Here are the steps I followed for my Android device.

I installed Android Studio. Next I opened it and navigated to Tools/Android/SDK Manager.

Installed the Android SDK packages I was interested in building my app for.

Set up my device properly. Mind that I followed only the Run on a Real Device steps.

Ran the command ionic platform add android.

Connected my device to my computer and ran the command ionic run android.

In case you have trouble deploying the app on your phone, check your environment variables. Here’s what I have.

Debugging in Chrome

You may ask yourself, how do I know if my app crashes or throws an exception while running on the device? Fortunately, Chrome gives you the ability to check what is going on in your app while it runs on the device. All you have to do is connect your device to your computer, open the developer tools (or press F12) and select More tools -> Inspect devices.
Open the Forum app and Chrome will detect your device and WebView running the app.
Click Inspect and a new window will open, displaying the contents of your device in real time. You can even control your app running in the WebView from the browser. Mind that it is very possible for the app to get slow when debugging in Chrome, but the important thing is that you can see all your logs in the console.

Discussion – Architecture

What we created is a mobile app running on client devices and a backend infrastructure hosted on Firebase that not only stores and serves all data but also handles all the authentication logic itself. Moreover, it syncs all data instantly to all connected clients.
Is this schema sufficient? Maybe for small apps, apps that handle notes or todo items, but certainly not for complicated ones. The latter require business logic, which in turn may require complex operations that are difficult to execute on Firebase. Even if you could execute complex queries on Firebase, it is unacceptable to keep the business logic on the client's device. The missing part in the previous architecture is a web server, an API that could execute server-side code and also communicate with Firebase. In many cases, a relational database is required too. Both the API and the app clients may communicate directly with Firebase, but for different reasons. Let's take a look at how that architecture would look and then give an example.
Consider the scenario where a user decides to post a comment on a thread. The app doesn't submit the comment directly to Firebase but instead sends an HTTP POST request to the API, containing all the comment's data (content, user, thread key, etc.). The API runs validation logic, such as ensuring that the comment doesn't contain offensive words, which in turn are stored in an SQL Server database. Or it could check that the user who posted the comment is eligible / allowed to post comments on that thread. On successful validation, the API submits only the amount of data needed to the corresponding location in Firebase, which finally makes sure to sync the comment to all connected clients.

Conclusion

That's it, we have finished! We have seen how to build a full-featured Ionic 2 application using the Firebase infrastructure. We started from scratch, setting up the Firebase environment and installing the Ionic 2 CLI. We described how to use native device features by installing Cordova plugins and how to build for a specific platform. I hope you enjoyed this post as much as I did.

Source Code: You can find the source code for this project here where you will also find instructions on how to run the application.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.

Angular 2 and TypeScript have taken client-side development to the next level, but until recently most web developers hesitated to start a production SPA with those two. The first reason was that Angular was still in development, and the second was that components commonly required in a production SPA were not yet available. Which are those components? DateTime pickers, custom Modal popups, animations and much more: components and plugins that make websites fluid and user-friendly. But this is old news now; Angular is very close to releasing its final version, and the community, having familiarized itself with the new framework, has produced a great amount of such components.

What this post is all about

This post is another step-by-step walkthrough to build Angular 2 SPAs using TypeScript. I said another one, because we have already seen such a post before. The difference, though, is that now we have all the knowledge and tools to create more structured, feature-enhanced and production-level SPAs, and this is what we will do in this post. The Schedule.SPA application that we are going to build will make use of all the previously mentioned components, following the recommended Angular style guide as much as possible. As for the back-end infrastructure (REST API) that our application will make use of, we have already built it in the previous post Building REST APIs using ASP.NET Core and Entity Framework Core. The source code for the API, which was built using .NET Core, can be found here, where you will also find instructions on how to run it. The SPA will display schedules and their related information (who created each one, attendees, etc.). It will allow the user to manipulate many aspects of each schedule, which means that we are going to see CRUD operations in action. Let's see in detail all the features that this SPA will incorporate.

Not bad, right? Before we start building it, let us see the final product in a .gif (click to view in better quality).

Start coding

One decision I've made for this app is to use the Visual Studio Code editor for development. While I used VS 2015 for developing the API, I still find it useless when it comes to TypeScript development; lots of compile and build errors may make your life miserable. On the other hand, VS Code has great IntelliSense features and an awesome integrated command line which allows you to run commands directly from the IDE. You can, though, use the text editor of your preference. The first thing we need to do is configure the Angular – TypeScript application. Create a folder named Scheduler.SPA and open it in your favorite editor. Add the package.json file where we define all the packages we are going to use in our application.

We declared all the required Angular packages (the GitHub repo will always be updated when a new version releases) and some others, such as ng2-bootstrap, which will help us incorporate some cool features in our SPA. When some part of our application makes use of such packages I will let you know. Next add the systemjs.config.js SystemJS configuration file.

I'll pause here just to ensure that you understand how SystemJS and the previous two files work together. Suppose that you want to use a DateTime picker in your app. Searching the internet you find an NPM package saying that you need to run the following command to install it.

npm install ng2-bootstrap --save

What this command will do is download the package inside the node_modules folder and add it as a dependency in the package.json. To use that package in your application you need to import the respective module in the app.module.ts (as of Angular RC.6 and later) or in the component that needs its functionality, like this.

In most cases you will find the import statement in the package documentation. Is that all you need to use the package? No, because SystemJS will make a request to http://localhost:your_port/ng2-bootstrap which of course doesn't exist.
Modules are dynamically loaded using SystemJS, and the first thing to do is to inform SystemJS where to look when a request for ng2-bootstrap is dispatched to the server. This is done through the map object in the systemjs.config.js as follows.

From now on, each time a request for ng2-bootstrap reaches the server, SystemJS will map the request to node_modules/ng2-bootstrap, which actually exists since we have installed the package. Are we ready yet? No, we still need to inform SystemJS what file name to load and the default extension. This is done using the packages object in the systemjs.config.js.
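Putting the two pieces together, the relevant systemjs.config.js entries for ng2-bootstrap would look roughly like this (the main file name may differ between package versions):

```javascript
// systemjs.config.js (fragment): map the bare 'ng2-bootstrap' request to
// node_modules and tell SystemJS which file and extension to load
System.config({
  map: {
    'ng2-bootstrap': 'node_modules/ng2-bootstrap'
  },
  packages: {
    'ng2-bootstrap': { main: 'ng2-bootstrap.js', defaultExtension: 'js' }
  }
});
```

With both entries in place, `import { ModalModule } from 'ng2-bootstrap'`-style statements resolve without any 404s.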

Those are mostly Angular dependencies, plus jquery and lodash, which our SPA will make use of. We are also going to use some client-side external libraries such as alertify.js and font-awesome. Add a bower.json file and set its contents as follows.

Angular & TypeScript in action

Add a folder named app at the root of the application and create four subfolders named home, schedules, users and shared. The home folder is responsible for displaying a landing page, the schedules and users folders hold the basic features of the SPA, and the last one, shared, will contain any component that will be used across the entire app, such as the data service or utility services. I will start pasting the code from bottom to top, in other words from the files that bootstrap the application to those that implement certain features. Don't worry if we haven't implemented all the required components while showing the code; we will during the process. I will, however, be giving you information regarding any component that hasn't been implemented yet.

As of Angular RC.6 and later it is recommended to create at least three basic files to init an Angular 2 app: an app.component.ts to hold the root container of the app, an app.module.ts to hold the app's NgModule, and a main.ts to bootstrap the app. Go ahead and create the app.component.ts under the app folder.

We need the viewContainerRef mostly for interacting with ng2-bootstrap modal windows. The app.module.ts is one of the most important files in your app. It declares the NgModules the app has access to, and any component, directive, pipe or service you need to use across your app. Add it under the app folder as well.

We imported Angular's modules, ng2-bootstrap modules, and the custom components, directives, pipes and services that we are going to create later on. The DataService holds the CRUD operations for sending HTTP requests to the API, the ItemsService defines custom methods for manipulating mostly arrays using the lodash library and, last but not least, the NotificationService has methods to display notifications to the user. Now let us see the routing in our app. Add the app.routes.ts file as follows.

This is how we use the new Component Router. At the moment you can see that http://localhost:your_port/ will activate the HomeComponent and http://localhost:your_port/users the UserListComponent, which displays all users. RxJS is a huge library and it's good practice to import only those modules that you actually need, not the whole library, because otherwise you will pay a slow application-startup penalty. We will define any operators that we need in an rxjs-operators.ts file under the app folder.

Shared services & interfaces

Before implementing the Users and Schedules features we'll create any service or interface that is going to be used across the app. Create a folder named shared under app and add the interfaces.ts TypeScript file.

In case you have read Building REST APIs using .NET and Entity Framework Core, you will be familiar with most of the classes defined in the previous file. They are the TypeScript models that match the API's ViewModels. The last interface that I defined is my favorite one. The Predicate interface allows us to pass generic predicates to TypeScript functions. For example, we'll see later on the following function.

This is extremely powerful. What can this function do? It can remove any item from an array that fulfills a certain predicate. Assuming that you have an array of type IUser and you want to remove any user item that has id<0, you would write..
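A minimal sketch of such a function, assuming a removeItems helper with this shape (the name and in-place splice approach are illustrative, not the service's exact code):

```typescript
interface Predicate<T> {
  (item: T): boolean;
}

interface IUser {
  id: number;
  name: string;
}

// Remove, in place, every item that fulfills the predicate
function removeItems<T>(array: Array<T>, predicate: Predicate<T>): void {
  for (let i = array.length - 1; i >= 0; i--) {
    if (predicate(array[i])) {
      array.splice(i, 1);
    }
  }
}

const users: IUser[] = [
  { id: -1, name: 'unsaved user' },
  { id: 7, name: 'Chris' }
];
removeItems(users, (u: IUser) => u.id < 0);
console.log(users.length); // 1
```

The loop walks backwards so that splicing an element never shifts an index we have yet to visit.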

We proceed with the directives. Add a folder named directives under shared. The first one is a simple directive that toggles the background color of an element when the mouse enters or leaves. It's very similar to the one described on Angular's official website.

The second one, though, is an exciting one. The home page has a carousel, with each slide having a font-awesome icon on its left.
The thing is that when you reduce the width of the browser, the font-awesome image moves on top, giving a bad user experience.
What I want is for the font-awesome icon to hide when the browser reaches a certain width, and moreover I want this width to be customizable. I believe I have just opened the gates for responsive web design using Angular 2. Add the following MobileHide directive in a mobile-hide.directive.ts file under the shared/directives folder.

What this directive does is bind to the window.resize event and, when triggered, check the browser's width: if the width is less than the one defined (or the default one), it hides the element; otherwise it shows it. You can apply this directive in the DOM like this.

The div element will be hidden when the browser's width is less than 772px.
You can extend this directive by creating a new Input parameter which represents a class and, instead of hiding the element, applying a different class!

Shared services

@Injectable() services that are going to be used across many components in our application will also be placed inside the shared folder. We will separate them, though, into two different types: core and utilities. Add two folders named services and utils under the shared folder. We will place all core services under services and the utilities under utils. The most important core service in our SPA is the one responsible for sending HTTP requests to the API, the DataService. Add the data.service.ts under the services folder.

The service implements several CRUD operations targeting the API we built in a previous post. It uses the ConfigService in order to get the API's URI and the ItemsService to parse JSON objects into typed ones (we'll see it later). Another important function that this service provides is handleError, which can read response errors either from the ModelState or from the Application-Error header. The simplest util service is the ConfigService, which has only one method, to get the API's URI. Add it under the utils folder.

Make sure to change this URI to reflect your back-end API's URI. It's going to be different when you host the API from the console using the dotnet run command and different when you run the application through Visual Studio. The most interesting util service is the ItemsService. I don't know any client-side application that doesn't have to deal with arrays of items, and that's why we need this service. Let's view the code first. Add it under the utils folder.

We can see extensive use of TypeScript in combination with the lodash library. All those functions are used inside the app, so you will be able to see how they actually work. Let's view some examples right now, though. The setItem<T>(array: Array<T>, predicate: Predicate<T>, item: T) method can replace a certain item in a typed array of T. For example, if there is an array of type IUser that has a user item with id=-1 and you need to replace it with a new IUser, you can simply write..
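Spelling that call out as a self-contained sketch (a plain loop stands in for lodash's findIndex, and the sample values are illustrative):

```typescript
interface Predicate<T> {
  (item: T): boolean;
}

interface IUser {
  id: number;
  name: string;
}

// Sketch of ItemsService.setItem: replace the first item matching the predicate
function setItem<T>(array: Array<T>, predicate: Predicate<T>, item: T): void {
  for (let i = 0; i < array.length; i++) {
    if (predicate(array[i])) {
      array[i] = item;
      return; // replace only the first match
    }
  }
}

const users: IUser[] = [
  { id: -1, name: 'placeholder' },
  { id: 7, name: 'Chris' }
];

// Replace the user that was created client-side with id = -1
setItem(users, (u: IUser) => u.id === -1, { id: 42, name: 'saved user' });
console.log(users[0].id); // 42
```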

Here we passed the array of IUser, the predicate which determines which item is to be replaced, and the replacement item value. Continue by adding the NotificationService and the MappingService, which are pretty much self-explanatory, under the utils folder.

Features

Time to implement the SPA's features, starting from the simplest one, the HomeComponent, which is responsible for rendering a landing page. Add a folder named home under app and create the HomeComponent in a home.component.ts file.

Although this is the simplest component in our SPA, it still makes use of some interesting Angular features. The first one is Angular animations and the second is the MobileHideDirective directive we created before in order to hide the font-awesome icons when the browser's width is less than 772px. The animation will make the template appear from left to right. Let's view the template's code and a preview of what the animation looks like.

Add a folder named schedules. As we declared in the app.routes.ts file, schedules have two distinct routes, one to display all the schedules in a table and another one to edit a specific schedule. The ScheduleListComponent is a quite complex one. Add the schedule-list.component.ts under schedules as well.

Firstly, the component loads the schedules, passing the current page and the number of items per page on the service call. The PaginatedResult response contains the items plus the pagination information. The component uses the PAGINATION_DIRECTIVES and PaginationComponent modules from ng2-bootstrap to render a pagination bar under the schedules table.

The next important feature of this component is the custom modal popup it uses to display a schedule's details. It makes use of the ModalDirective from ng2-bootstrap. This plugin requires that you place a bsModal directive in your template and bind the model properties you wish to display in its template body. You also need to use @ViewChild('childModal') for this to work. Let's view the entire schedule-list.component.html template and a small preview.

The ScheduleEditComponent is responsible for editing the details of a single Schedule. The interface used for this component is IScheduleDetails, which encapsulates all of a schedule's details (creator, attendees, etc.). Add the schedule-edit.component.ts file under the schedules folder.

Don't forget that we have also set server-side validations, so if you try to edit a schedule and set the start time to be greater than the end time, you should receive an error that was encapsulated by the server in the response message, either in the header or the body.
The Users feature is an interesting one as well. I have decided this time to display each user as a card element instead of using a table. This required creating a user-card custom element which encapsulates all the logic not only for rendering but also for manipulating the user's data (CRUD ops). Add a folder named users under app and create the UserCardComponent.

The logic about the modal and the animations should be familiar to you at this point. The new features to notice in this component are the @Input() and @Output() properties. The first one is used so that the host component, which is the UserListComponent, can pass a user item for each user in an array of IUser items. The two @Output() properties are required so that a user-card can inform the host component that something happened, in our case that a user was created or removed. Why? It's a matter of Separation of Concerns. The list of users is maintained by the UserListComponent, and a single UserCardComponent knows nothing about it. That's why, when something happens, the UserListComponent needs to be informed so it can update the user list respectively. Here's the user-card.component.html.
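The child-to-parent flow can be illustrated without Angular at all; FakeEventEmitter below is a tiny stand-in for Angular's EventEmitter, and the classes are simplified sketches:

```typescript
// Tiny stand-in for Angular's EventEmitter
class FakeEventEmitter<T> {
  private handlers: Array<(value: T) => void> = [];
  subscribe(handler: (value: T) => void): void {
    this.handlers.push(handler);
  }
  emit(value: T): void {
    this.handlers.forEach(h => h(value));
  }
}

// Child: knows nothing about the list, only announces what happened
class UserCardSketch {
  removeUser = new FakeEventEmitter<number>(); // @Output() in the real component
  remove(userId: number): void {
    this.removeUser.emit(userId);
  }
}

// Host (the UserListComponent's role): owns the list, reacts to child events
const userIds: number[] = [1, 2, 3];
const card = new UserCardSketch();
card.removeUser.subscribe(id => {
  const i = userIds.indexOf(id);
  if (i > -1) {
    userIds.splice(i, 1);
  }
});
card.remove(2);
console.log(userIds); // [1, 3]
```

The child emits an identifier, the host updates the list: exactly the division of responsibility described above.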

The removeUser and userCreated events are triggered from the child UserCardComponent components. When those events are triggered, the action has already finished at the API/database level, and what remains is to update the client-side list. Here's the template for the UserListComponent.

The SPA uses a custom stylesheet, styles.css, which you can find here. Add it in a new folder named assets/styles under the root of the application. At this point you should be able to run the SPA. Make sure you have set up the API first and configured the API's endpoint in the ConfigService to point to it properly. Fire up the app by running the following command.

npm start

Conclusion

That's it, we have finished! We have seen many Angular 2 features in this SPA, but I believe the most exciting one was how TypeScript can ease client-side development. We saw typed predicates, array manipulation using lodash and, last but not least, how to install and use 3rd-party libraries in our app using SystemJS.

Source Code: You can find the source code for this project here where you will also find instructions on how to run the application.


Building REST APIs using ASP.NET Core and Entity Framework Core
https://chsakell.com/2016/06/23/rest-apis-using-asp-net-core-and-entity-framework-core/ (Thu, 23 Jun 2016)
The source code for this post has been updated to VS 2017 (master branch). There is also a VS2015 branch for Visual Studio 2015.

ASP.NET Core and Entity Framework Core are getting more and more attractive nowadays, and this post will show you how to get the most out of them in order to get started with building scalable and robust APIs. We have seen them in action in a previous post, but now we have all the required tools and knowledge to explain things in more detail. One of the key points we are going to show in this post is how to structure a cross-platform API solution properly. In the previous post we used a single ASP.NET Core Web Application project to host all the different components of our application (Models, Data Repositories, API, Front-end) since cross-platform .NET Core libraries weren't supported yet. This time, though, we will follow the Separation of Concerns design principle by splitting the application into different layers.

What this post is all about

The purpose of this post is to build the API infrastructure for an SPA Angular application that holds and manipulates schedule information. We will configure the database using Entity Framework Core (Code First – Migrations) and create the Models, Repositories and the REST – MVC API as well. Despite the fact that we'll build the application using VS 2015, the project will be able to run both inside and outside of it. Let's denote the most important sections of this post.

Create a cross platform solution using the Separation of Concerns principle

Create the Models and Data Repositories

Apply EF Core migrations from a different assembly than the one the DbContext belongs to

Build the API using REST architecture principles

Apply ViewModel validations using the FluentValidation Nuget Package

Apply a global Exception Handler for the API controllers

In the next post we will build the associated Angular SPA that will make use of the API. The SPA will use the latest version of Angular, TypeScript and much more. Moreover, it's going to apply several interesting features such as custom Modal popups, DateTime pickers, Form validations and animations. Just to whet your appetite, let me show you some screenshots of the final SPA.
Are you ready? Let’s start!

Create a cross platform solution

Assuming you already have .NET Core installed on your machine, open VS 2015 and create a blank solution named Scheduler. Right click the solution and add two new projects of type Class Library (.NET Core). Name the first one Scheduler.Model and the second one Scheduler.Data.
You can remove the default Class1 classes; you won't need them. Continue by adding a new ASP.NET Core Web Application (.NET Core) project named Scheduler.API, selecting the Empty template.

Create the Models and Data Repositories

The Scheduler.Model and Scheduler.Data libraries are cross-platform projects and could be created outside VS as well. The most important file that this type of project has is the project.json. Let's create our models first. Switch to Scheduler.Model and change the project.json file as follows:

As you can see there are only three basic classes: Schedule, User and Attendee. Our SPA will display schedule information, where a user may create many schedules (a one-to-many relationship) and attend many others (a many-to-many relationship). We will bootstrap the database later on using EF migrations, but here's the schema for your reference.
Switch to the Scheduler.Data project and change the project.json file as follows:

Before moving to the Scheduler.API and creating the API Controllers, let's add a Database Initializer class that will seed some mock data when the application fires for the first time. You can find the SchedulerDbInitializer class here.

Build the API using REST architecture principles

Switch to the Scheduler.API ASP.NET Core Web Application project and modify the project.json file as follows:

We referenced the previous two projects and some tools related to Entity Framework, because we are going to use EF migrations to create the database. Of course we also referenced the MVC Nuget packages in order to incorporate the MVC services into the pipeline. Modify the Startup class.

We may not have created all the required classes yet (don't worry, we will) for this to compile, but let's point out the most important parts. There is a mismatch between the project where the configuration file (appsettings.json), which holds the database connection string, lives and the project where the respective SchedulerDbContext class lives. The appsettings.json file, which we will create a little bit later, is inside the API project, while the DbContext class belongs to Scheduler.Data. If we were to init EF migrations using the following command, we would fail because of this mismatch.

dotnet ef migrations add "initial"

What we need to do is inform EF of the assembly to be used for migrations.

We have added CORS services allowing all headers for all origins, just for simplicity. Normally you would allow only a few origins, and only specific headers as well. We need this because the SPA we are going to create is an entirely different web application, built in Visual Studio Code.

When posting or updating ViewModels through HTTP POST / PUT requests to our API, we want the posted ViewModel data to pass through validation first. For this reason we will configure custom validations using FluentValidation. Add a folder named Validations inside the ViewModels folder and create the following two validators.

We will set up client-side validation using Angular, but you should always run validations on the server as well. The ScheduleViewModelValidator ensures that a schedule's end time is always later than its start time. The custom errors will be returned through the ModelState like this:

if (!ModelState.IsValid)
{
return BadRequest(ModelState);
}
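As a sketch of what such a validator might look like with FluentValidation (the property names `Title`, `TimeStart` and `TimeEnd` are assumptions based on the Schedule model described above):

```csharp
using FluentValidation;

public class ScheduleViewModelValidator : AbstractValidator<ScheduleViewModel>
{
    public ScheduleViewModelValidator()
    {
        RuleFor(s => s.Title).NotEmpty().WithMessage("Title is required");

        // Ensure the schedule's end time is always later than its start time.
        RuleFor(s => s.TimeEnd)
            .GreaterThan(s => s.TimeStart)
            .WithMessage("End time must be later than start time");
    }
}
```

Any rule that fails surfaces through ModelState, which is exactly what the BadRequest snippet above returns to the client.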

Add a new folder named Mappings inside the ViewModels and set the Domain to ViewModel mappings.

I decided to encapsulate pagination information in request/response headers only. If the client wants to retrieve the second page of 5 schedules, the request must have a "Pagination" header equal to "2,5". All the information the client needs to build a pagination bar will be contained inside a corresponding response header. The same applies to custom error messages that the server returns to the client, e.g. when an exception is caught by the global exception handler. Add an Extensions class inside the Core folder to support this functionality.
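As a rough illustration of the idea (a hypothetical sketch, not the actual Extensions class; only the "page,pageSize" header convention comes from the description above):

```csharp
using Microsoft.AspNetCore.Http;

public static class PaginationExtensions
{
    // Reads a "Pagination" request header of the form "page,pageSize",
    // e.g. "2,5" means the second page with 5 items per page.
    public static void ParsePagination(this HttpRequest request,
        out int page, out int pageSize, int defaultPageSize = 10)
    {
        page = 1;
        pageSize = defaultPageSize;

        if (request.Headers.ContainsKey("Pagination"))
        {
            var values = request.Headers["Pagination"].ToString().Split(',');
            if (values.Length == 2)
            {
                int.TryParse(values[0], out page);
                int.TryParse(values[1], out pageSize);
            }
        }
    }
}
```

A controller action would call this first, then write the total count and page count back into a corresponding response header for the client's pagination bar.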

The SPA that we'll build in the next post will render images too, so if you want to follow along, add an images folder inside the wwwroot folder and copy the images from here. The only thing remaining is to create the API MVC controller classes. Add them inside a new folder named Controllers.

At this point your application should compile without any errors. Before testing the API with HTTP requests, we need to initialize the database. To accomplish this, add migrations with the following command.

dotnet ef migrations add "initial"

For this command to run successfully you have two options: either open a terminal/cmd and navigate to the root of the Scheduler.API project, or open the Package Manager Console in Visual Studio. In case you choose the latter, you still need to navigate to the root of the API project by typing cd path_to_scheduler_api first.
Next run the command that creates the database.

dotnet ef database update

Testing the API

Fire up the web application either through Visual Studio or by running the dotnet run command from a command line. The database initializer we wrote before will seed some mock data into the SchedulerDb database. Sending a simple GET request to http://localhost:your_port/api/users will fetch the first 6 users (if no pagination header is present, the default page size is 10). The response will also contain information for pagination.
You can request the first two schedules by sending a request to http://localhost:your_port/api/schedules with a “Pagination” header equal to 1,2.
Two of the most important features our API has are the validation and error messages it returns. This way, the client can display related messages to the user. Let's try to create a user with an empty name by sending a POST request to api/users.
As you can see, the controller returned the ModelState errors in the body of the response. I will cause an exception intentionally in order to check the error returned from the API in the response header. The global exception handler will catch the exception and add the error message to the configured header.

Conclusion

We have finally finished building an API using ASP.NET Core and Entity Framework Core. We separated the models, data repositories and API into different .NET Core projects that are able to run outside of IIS and on different platforms. Keep in mind that this project will be used as the backend infrastructure of an interesting SPA built with the latest Angular version. We will build the SPA in the next post, so stay tuned!

Source Code: You can find the source code for this project here where you will also find instructions on how to run the application in or outside Visual Studio.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.

Facebook

Twitter

.NET Web Application Development by Chris S.

Migrating ASP.NET 5 RC1 apps to ASP.NET Core
https://chsakell.com/2016/05/21/migrating-asp-net-5-rc1-apps-to-asp-net-core/
Sat, 21 May 2016 17:50:33 +0000
.NET Core RC2 is a major update from the November RC1 release, and since its announcement, everyone who developed apps using the RC1 version is required to upgrade them. The new version of .NET Core includes new APIs, performance and reliability improvements, and a new set of tools as well. This means that not migrating your RC1 apps to it is not even an option. On the very first day of this year I released the Cross-platform SPAs with ASP.NET Core 1.0, Angular 2 & TypeScript post, where we saw how to get started developing Single Page Applications using ASP.NET 5, Angular 2 and TypeScript. The post started with the ASP.NET 5 beta and Angular 2 beta versions, but I promised that I would upgrade the app as soon as new releases were out. Angular 2 was upgraded to RC.1, and now it's time for the big change we have all been waiting for: upgrading from ASP.NET 5 RC1 to ASP.NET Core.

What is this post all about

This post will describe the changes needed in order to upgrade the PhotoGallery SPA application we built together from ASP.NET 5 RC.1 to ASP.NET Core. Whether you know the PhotoGallery app or not really doesn't matter; in case you have your own ASP.NET 5 application and you are interested in migrating it, then you are in the right place. The interesting part is that the PhotoGallery app incorporated many important features, such as Entity Framework Core (formerly EF7) with migrations and MVC services, so you will have the chance to see not only the changes required to get the upgrade right but also some problems I encountered during the process. Before starting, let me inform you that I have moved the ASP.NET 5 version of the project to its own GitHub branch named RC_1, so it is always available as a reference. The master branch will always contain the latest version of the app. You can view the RC_1 branch here.

Starting migration…

The first thing you have to do is remove all previous versions of .NET Core from your system which obviously is different for different operating systems. On Windows you can do this through the control panel using Add/Remove programs. In my case I had two versions installed.
Believe it or not, this is where I got the first issue: uninstallation failed. I got a setup blocked message for some reason and was also asked for a specific .exe file in order for the process to continue. It turned out that the web installer file required was this file, so in case you get the same error, download it and select it if asked. At the end of this step you shouldn't have any version of ASP.NET 5 in the Add/Remove programs panel.

In a nutshell, the .NET Core CLI replaces the old DNX tooling. This means no more dnx, dnu or dnvm commands, only dotnet. Find more about their differences here.
By the way, in case you want to remove DNVM you have two options: either run dnvm list to get all installed versions and then run dnvm uninstall version_to_delete, or simply remove the runtime folders from the user's profile folder.

dnvm list
dnvm uninstall version_to_delete

After doing this, if you open a new terminal, dnvm will not be recognized by the system. To delete DNU and DNX, delete the %USERPROFILE%\.dnx folder and any reference that exists in the PATH environment variable.

Project Configuration

It's time to open the PhotoGallery ASP.NET 5 application and convert it to an ASP.NET Core one. Open the solution (or your own ASP.NET 5 project) and make sure you have the GitHub RC_1 branch version. I must say that at this point, with ASP.NET 5 uninstalled, the project still worked like a charm. The first thing you need to change is the SDK version that the application is going to use. This is set in the global.json file under the Solution Items folder. Change it as follows:

Notice that this is the exact version that the previous command printed. We continue with the project.json. Before showing you the entire file, let's point out some important changes. The compilationOptions section changes to buildOptions as follows:
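For reference, the renamed section in an RC2-era project.json looks roughly like this (a sketch; the exact options in your project may differ):

```json
{
  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  }
}
```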

Feel free to compare it with the ASP.NET 5 version. I have highlighted some important dependencies because you certainly cannot ignore them. For example, you need to declare Microsoft.EntityFrameworkCore.Tools if you want to work with EF migrations. At this point I noticed that Visual Studio was complaining that the NPM packages weren't successfully installed. Moreover, it seemed that it was trying to download extra packages not defined in the package.json as well.
What I did to resolve this was make Visual Studio use my own Node.js version. Right-click on the npm folder and select Configure External Tools.
Add the path to your Node.js installation folder and make sure to set it to the top. VS will use this from now on.

Code refactoring

After all those changes I believe the solution had at least 200 compilation errors, so my reaction was like this:
The first thing I did was fix all the namespaces. If you remember, we renamed all the Microsoft.AspNet.* dependencies to Microsoft.AspNetCore.* ones, so you have to replace any old reference with the new one. Another important naming change is the one related to Entity Framework. The core dependency in project.json is "Microsoft.EntityFrameworkCore": "1.0.0-rc2-final", which means there is no Microsoft.Data.Entity any more. Let's compare the namespaces in the PhotoGalleryContext class, which happens to inherit from DbContext:

using Microsoft.EntityFrameworkCore;
using PhotoGallery.Entities;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.Internal;
using Microsoft.EntityFrameworkCore.Metadata.Internal;
namespace PhotoGallery.Infrastructure
{
public class PhotoGalleryContext : DbContext
{

Compare it with the old version. You can find more info about upgrading to Entity Framework RC2 here.
Here is an example of the namespace changes all MVC controller classes needed:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using PhotoGallery.Entities;
using PhotoGallery.ViewModels;
using AutoMapper;
using PhotoGallery.Infrastructure.Repositories;
using PhotoGallery.Infrastructure.Core;
using Microsoft.AspNetCore.Authorization;
namespace PhotoGallery.Controllers
{
[Route("api/[controller]")]
public class AlbumsController : Controller
{
// code omitted

One of the key changes in ASP.NET Core is how the application starts. You need to define a Main method in the same way you would if it were a console application. Why? Because, believe it or not, ASP.NET Core applications are just console applications. This means that you need to define an entry point for your application. You have two choices: either create a new Program.cs file and define it there, or use the existing Main method in the Startup.cs file as follows:
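A minimal sketch of the Program.cs variant, assuming the RC2-era WebHostBuilder API and a Startup class in the same project:

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    // ASP.NET Core apps are console apps: Main builds and runs the web host.
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()                                  // cross-platform web server
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()                           // allow hosting behind IIS
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```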

IApplicationEnvironment changed to IHostingEnvironment, and so did the way you add the Entity Framework services to the application service provider. You may ask yourself what happens now that Entity Framework has migrated to RC2: do EF migrations work as they used to? The answer is yes, there aren't huge changes in the way you use EF migrations. I did encounter an issue while trying to add migrations though, so let me point it out. First of all, make sure you have all the required dependencies and tools defined in order to use EF migrations. Then open the Package Manager Console and, instead of running the old dnx ef migrations add command, run the following:

dotnet ef migrations add initial

When I run the command I got the following error:
It turns out that if you run the command from PowerShell 5 you won't get the error. If you still want to run commands from the Package Manager Console as I did, the only thing to do is navigate to the project's root first using a cd path_to_root command and then run the command. Here's what I did.
Then I ran the database update command and the database was successfully created.

Launching

There are two more changes I made before firing up the application on IIS. First, I changed the launchSettings.json file under Properties as follows:

Having made all these changes, I was able to launch the app both from IIS and from the console. In order to run the app from the console, type the dotnet run command.

Conclusion

That's it, we have finally finished!
Migrating an ASP.NET 5 application to ASP.NET Core is kind of tricky, but certainly not impossible. Now you can also create a brand new ASP.NET Core web application through Visual Studio 2015 by selecting the respective template.
As for the PhotoGallery SPA I used for this post, as I mentioned, the master branch will always have the latest updates while the RC_1 branch keeps the ASP.NET 5 RC1 version. You can check the upgrade from ASP.NET 5 to ASP.NET Core commit here. The full source code is available here, with instructions to run the app in and outside of Visual Studio.


Dynamic templates in AngularJS
https://chsakell.com/2016/04/13/dynamic-templates-in-angularjs/
Wed, 13 Apr 2016 13:29:20 +0000
Sometimes you may find that the common patterns in web development cannot support your UI requirements in terms of how dynamic a specific view can be. Let me explain what I mean with a simple example: assume that you have a route in your Single Page Application that displays all the Azure services a user is registered to. Each registered service has its own section in the template and, of course, specific functionality such as unregistering, or adding and removing features. The question is: what is the proper way to implement such a view? You could create a template such as the following:

Indeed, with this template you could display or hide specific sections depending on whether the user is registered or not, but there are some drawbacks:

The servicesCtrl controller needs to implement the functionality for all the services, even if the user isn't registered to all of them.

The template isn't sufficient if we want to display the registered services in a specific order (dynamic behavior).

The solution: Dynamically compiled directives

The solution to the problem is to use a custom directive for each service and add it to the template dynamically, only if the user is registered. With this pattern you can add each service's section in any order you want, assuming this info comes from an API. Moreover, each directive encapsulates its own functionality in its own controller, template and CSS file.

Show me some code

I have created an SPA that solves the Azure services scenario we presented before. Each post is an opportunity to learn new things, so I took this one as a chance to also show you a proper way to create your own AngularJS 3rd-party library. By 3rd-party library I mean that you can encapsulate any custom AngularJS elements or API you want in a single my-library.js file and use it in any AngularJS application. The source code of the application is available for download on GitHub. Let's take a look at the architecture first:
We can see that for each service we have a respective directive along with all the required files. More specifically, we have directives for 4 Azure services: Active Directory, BizTalk, RedisCache and Scheduler. Here is the code for the RedisCache directive, which is responsible for displaying and handling RedisCache service related functionality.

Notice the bindings in redisCache.html through a service object. But where does this $scope object come from? The answer lies in the azurePanel directive, which works as a container to display the Azure services. Let's examine its code:

As you can see, each service definitely has a type, which is mapped to a certain azure-* directive. Depending on the service type, you may add any custom property that the azure-service directive may need (dynamic behavior).
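The type-to-directive mapping can be sketched as a plain helper like the one below (a hypothetical illustration, not code from the library; the directive names and the `service` binding are assumptions based on the architecture described above):

```javascript
// Hypothetical helper: maps a service type coming from the API to the
// markup for the matching azure-* directive.
function toDirectiveTag(serviceType) {
  // "RedisCache" -> "redis-cache", "ActiveDirectory" -> "active-directory"
  var dashed = serviceType.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase();
  return '<azure-' + dashed + ' service="service"></azure-' + dashed + '>';
}

// Inside the azurePanel link function one would then compile the generated
// markup against the current scope, e.g.:
//   element.append($compile(toDirectiveTag(service.type))(scope));
```

This is the essence of the dynamic behavior: the panel never hard-codes which sections exist, it just compiles whatever directives the API-provided list dictates.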

How to consume this external lib?

The final product of such a library should be a single JavaScript file plus a CSS file. In order to achieve this, you need to write Grunt or Gulp tasks that concatenate, minify and generally optimize and package your library. I have done this using Gulp, and you can see those tasks here.
On the SPA app's side, the only thing you need to declare in order to view a user's services is the following:

The selectedUser is just an id which will be used by the lib's API to fetch the user's services. In the app I have declared two users who have registered to Azure services in a different order. You can switch the selected user and check how the library works.
That's it, we have finished! We saw how to create a reusable AngularJS library and how to create dynamic views using AngularJS. You can download the project I created from here, where you will also find instructions on how to run it.
