The Sheep-Pen of the Shaun

Shaun, the author of this blog, is a semi-geek: a clumsy developer, passionate speaker and incapable architect with about 10 years' experience in .NET and JavaScript. He hopes to prove that software development is art rather than manufacturing. He's into cloud computing platforms and technologies (Windows Azure, Amazon and Aliyun), and right now he is drawn to JavaScript (Angular.js and Node.js) and likes it.

Shaun works at Worktile Inc. as the chief architect, responsible for the overall design and development of Worktile, a web-based collaboration and task management tool, and Lesschat, a real-time communication aggregation tool.

When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In essence, WAWS is also built on top of WACS worker roles with some additional features. Hence in this post I will only demonstrate hosting in a WACS worker role.

The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node.js application.

We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always".

In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark its files as "Copy always".

You can use my "Copy all always" tool mentioned in my last post to update the worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, the service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
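The last post's listing isn't reproduced here, but it was roughly shaped like this (a sketch assuming the "express" and "node-sqlserver" modules of that era; the connection string is a placeholder):

```javascript
var express = require("express");
var sql = require("node-sqlserver");
var app = express();

// Placeholder ODBC connection string -- substitute your own WASD credentials.
var connectionString = "Driver={SQL Server Native Client 10.0};...";

app.get("/", function (req, res) {
    // Load all records from the "Resource" table and return them as JSON.
    sql.query(connectionString, "SELECT * FROM [Resource]", function (error, results) {
        if (error) {
            res.send(500, error);
        } else {
            res.json(results);
        }
    });
});

app.listen(12345);
```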

Now let's create a new function that copies the records from WASD to the table service.

1. Delete the table named "resource".

2. Create a new table named "resource". These two steps ensure that we have an empty table.

3. Load all records from the "resource" table in WASD.

4. Insert each record loaded from WASD into the table one by one.

5. Notify the user when finished.

In order to use the table service we need the storage account name and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button.

Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we also need to import the azure module. I also created another variable to store the table name.

In order to work with the table service I need to create a storage client for it. This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" through "createTableService", specifying my storage account name and key.
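A sketch of those variables and the client creation, assuming the "azure" module's API of that time (replace the account name and key with your own values from the Manage Keys dialog):

```javascript
var azure = require("azure");

// Values from the storage account's Manage Keys dialog in the developer portal.
var storageAccountName = "{storage account name}";
var storageAccountKey = "{storage account key}";
var tableName = "resource";

// Create the storage client for the table service.
var client = azure.createTableService(storageAccountName, storageAccountKey);
```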

Once we have successfully loaded all the records we can start to transfer them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created.

As you can see, the azure SDK exposes its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern.

For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be invoked when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously on the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two would be invoked in parallel.
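The nesting looks roughly like this (a sketch against the early "azure" module API):

```javascript
// Delete the table first; only create it once the deletion callback fires.
client.deleteTable(tableName, function (error) {
    // Note: deleteTable reports an error if the table does not exist;
    // a real application should tolerate that case.
    client.createTableIfNotExists(tableName, function (error) {
        if (error) {
            console.log("failed to create the table: " + error);
        }
        // ... continue with loading from WASD and inserting entities here ...
    });
});
```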

Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser.

Can you find a bug in the code below? I will describe it later in this post.
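A sketch of that section, with the bug the post goes on to describe left in place (`records` holds the rows loaded from WASD; the entity property names are illustrative):

```javascript
records.forEach(function (record) {
    // Each table entity needs a PartitionKey and a RowKey.
    var entity = {
        PartitionKey: record.Culture,
        RowKey: record.Key,
        Value: record.Value
    };
    client.insertEntity(tableName, entity, function (error) {
        if (error) {
            console.log("failed to insert: " + error);
        }
    });
});

// Send the response to the browser.
res.send(200, "Done!");
```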

Now we can publish it to the cloud and have a try. But normally we had better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address of the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to use my local database. The code changes as below.
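Pointing the client at the storage emulator looks roughly like this (constant names as in the early "azure" module; verify them against your SDK version):

```javascript
var azure = require("azure");

// Built-in development storage account, key and table host shipped with the SDK.
var client = azure.createTableService(
    azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT,
    azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY,
    azure.ServiceClient.DEVSTORE_TABLE_HOST);
```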

Now let's run the application and navigate to "localhost:12345/was/init", since I hosted it on port 12345. We can see it transferred the data from my local database to the local table service.

Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that it sent the response before all the records had been inserted, which is not what I expected.

The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records, the table service insert operations executed in parallel, and the operation of sending the response executed in parallel too, even though I wrote it at the end of my logic.
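The ordering can be reproduced with plain Node.js, using setImmediate to stand in for an asynchronous table service insert:

```javascript
var order = [];

[1, 2, 3].forEach(function (n) {
    // setImmediate plays the role of an asynchronous insert: its callback
    // runs on a later tick, after the current synchronous code finishes.
    setImmediate(function () {
        order.push("insert " + n);
    });
});

// This line was written last, but it runs first.
order.push("response sent");

setImmediate(function () {
    console.log(order.join(", "));
    // prints: response sent, insert 1, insert 2, insert 3
});
```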

The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code.

Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform on each item in the array. The third argument is invoked once all items have been processed or any error has occurred; here we can send our response to the browser.
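With the "async" module the loop becomes something like this (a sketch; `records`, `client` and the entity shape follow the earlier code):

```javascript
var async = require("async");

async.forEach(records, function (record, callback) {
    var entity = {
        PartitionKey: record.Culture,
        RowKey: record.Key,
        Value: record.Value
    };
    // Pass the error (or null) through to "callback" so async can track completion.
    client.insertEntity(tableName, entity, callback);
}, function (error) {
    // Invoked once every insert has finished, or as soon as one fails.
    if (error) {
        res.send(500, error);
    } else {
        res.send(200, "All records have been copied.");
    }
});
```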

Run it locally, and now we can see the response is sent after all the entities have been inserted.

Querying entities against the table service is simple as well. Just use the "queryEntity" method on the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code here.

In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.
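A sketch of such a query, assuming entities keyed by culture (partition key) and resource key (row key) as in the copy code above; the route is illustrative:

```javascript
app.get("/was/:key/:culture", function (req, res) {
    // queryEntity takes the table name, partition key and row key.
    client.queryEntity(tableName, req.params.culture, req.params.key,
        function (error, entity) {
            if (error) {
                res.send(404, "Not found");
            } else {
                res.send(entity.Value);
            }
        });
});
```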

Finally, if we want to publish this application to the cloud we should change the database connection string and storage account back.

For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code. But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and the runtime will retrieve the proper values for us.

For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for the cloud and local environments.

And we can also use the endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.
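A sketch of reading the settings through the service runtime; the setting names must match whatever you defined in the role's property window, so the ones below are illustrative:

```javascript
var azure = require("azure");

// Setting names are placeholders -- they must match the CSCFG definitions.
azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("failed to read the role configuration: " + error);
        return;
    }
    var sqlConnectionString = settings["SqlConnectionString"];
    var client = azure.createTableService(
        settings["StorageAccountName"],
        settings["StorageAccountKey"]);
    // ... start the web server here, once the configuration is available ...
});
```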

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it.

To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. With that, the Node.js process itself holds the connection to the service runtime and is able to read the configuration.
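The relevant CSDEF fragment looks roughly like this (the role name and attribute values are illustrative):

```xml
<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <ProgramEntryPoint commandLine="node.exe index.js"
                         setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>
```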

Starting Node.js in the local emulator, we can see it retrieved the local connection string and storage account.

And if we publish our application to Azure, it works with WASD and the storage service through the cloud configuration.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information. In order to make the service runtime available to my Node.js application I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point.

I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

Node.js is a very young network application platform. But since it's simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database access and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

Microsoft has just announced an update to Windows Azure Web Site (a.k.a. WAWS). There are four major features added in WAWS: free scaling mode, GitHub integration, custom domains and multiple branches.

Since I'm working in Node.js, and I would like to have my code on GitHub and deployed automatically to my Windows Azure Web Site whenever I sync, this feature is great news for me.

It's very simple to set up GitHub integration in WAWS. First we need a clean WAWS. On its dashboard page click "Set up Git publishing".

Currently WAWS doesn't support changing the publishing setting. So if you have an existing WAWS that publishes via TFS or local Git, you have to create a new WAWS and set up Git publishing.

Then on the deployment page we can see that WAWS now supports three Git publishing modes:

- Push my local files to Windows Azure: In this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through the Git command line or a GUI.

- Deploy from my GitHub project: In this mode we have a Git repository on GitHub. Once we push our code to GitHub, Windows Azure downloads the code and triggers a new deployment.

- Deploy from my CodePlex project: Similar to the previous mode, but with the code in a CodePlex repository.

Now let's go back to GitHub and create a new repository to publish from.

Currently WAWS GitHub integration only supports public repositories. Private repository support will be available in several weeks.

We can manage our repositories on the GitHub website. But as a Windows geek I prefer a GUI tool, so I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub.

I specified the repository name, description and local repository path, and left "Keep this code private" unchecked. After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine.

We can find this new repository on the GitHub website. And in GitHub for Windows we can also find the local repository by selecting the "local" category.

Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" on the deployment page and click the "Authorize Windows Azure" link. It will bring up a new window on GitHub asking you to allow the Windows Azure application to access your repositories.

After we click "Allow", Windows Azure retrieves all my public GitHub repositories and lets me select which one to integrate with this WAWS. I selected the one I had just created in GitHub for Windows.

So that's all; we have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right-click on this local repository and click "open in explorer". Then I added a simple HTML file.

```html
<html>
<head>
</head>
<body>
<h1>
    I came from GitHub, WOW!
</h1>
</body>
</html>
```

Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure will detect the update and trigger a new deployment.

If we go back to the Azure developer portal we can find the new deployment, with our commit message shown as the deployment description.

In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP or WebMatrix without any configuration or component installation.

But sometimes we need to use Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application in a worker role. Below are some benefits of using a worker role.

- WAWS leverages IIS and IISNode to host Node.js applications, which run in x86 WOW mode. This reduces performance compared with x64 in some cases.

- A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent request limit.

- WACS provides more flexibility and control to developers. For example, we can RDP into the virtual machines of our worker role instances.

- WACS provides service configuration features whose values can be changed while the role is running.

- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances per subscription.

- Since with a WACS worker role we start Node.js ourselves in a process, we can control its input, output and error streams. We can also control the version of Node.js.

Run Node.js in Worker Role

Node.js can be started from nothing more than its executable file. This means that in Windows Azure we can have a worker role containing "node.exe" and the Node.js source files, and start it in the Run method of the worker role entry class.

Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe".

Then we need to create the entry code for Node.js. In WAWS the entry file must be named "server.js", because the application is hosted by IIS and IISNode, and IISNode only accepts "server.js". But here, since we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root.

Since we created a C# Windows Azure project we cannot create a JavaScript file from the "Add new item" context menu. We have to create a text file and then rename it with the JavaScript extension.

After adding these two files we should set their "Copy to Output Directory" property to "Copy always" or "Copy if newer". Otherwise they will not be included in the package when deployed.

Let's paste a very simple Node.js snippet into "index.js", as below. As you can see, I created a web server listening on port 12345.

```javascript
var http = require("http");
var port = 12345;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);
```

Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I located the Node.js executable and the entry JavaScript file, then created a new process to run it. Our worker role will wait for the process to exit. If everything is OK, once our web server is up the process will stay alive listening for incoming requests and should not terminate. The code in the worker role would be like this.
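A sketch of that Run method, assuming "node.exe" and "index.js" are deployed to the role's application root (requires `System.Diagnostics` and `System.IO`; the path handling is illustrative):

```csharp
public override void Run()
{
    // The role's files are deployed under %RoleRoot%\approot.
    var root = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";

    var process = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = Path.Combine(root, "node.exe"),
            Arguments = "index.js",
            WorkingDirectory = root,
            UseShellExecute = false
        }
    };
    process.Start();

    // Block here; if the Node.js process exits, the role instance recycles.
    process.WaitForExit();
}
```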

Then we can run it locally. In the compute emulator UI the worker role started and executed Node.js, and the Node.js window appeared.

Open the browser to verify the website hosted by our worker role.

Next let's deploy it to Azure, which needs some additional steps. First, we need to create an input endpoint. By default there's no endpoint defined on a worker role, so we open the role property window in Visual Studio and create a new TCP input endpoint on the port we want our website to use. In this case I will use 80.

Even though we created a web server, we add a TCP endpoint on the worker role, since Node.js always listens on TCP rather than HTTP.

Then change "index.js" to let our web server listen on 80.

```javascript
var http = require("http");
var port = 80;

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello World\n");
}).listen(port);

console.log("Server running at port %d", port);
```

Then publish it to Windows Azure.

And then in the browser we can see our Node.js website running on the WACS worker role.

We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator has already registered port 80 and maps the 80 endpoint to 81, but our Node.js process cannot detect this remapping. So when it tries to listen on 80 it fails, since 80 is already in use.

Use NPM Modules

When we use WAWS to host Node.js, we can simply install the modules we need and then publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work.

Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are only on disk; they are not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project: in the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and choose "Include In Project" from the context menu.

But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer" so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step since there might be many files in a module. So I created a small tool which updates a C# project file, marking all of its items as "Copy always". The code is very simple.
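The original tool isn't reproduced here, but the idea can be sketched like this (assuming the MSBuild project file schema of that era; the item handling is simplified):

```csharp
using System;
using System.Xml.Linq;

class Program
{
    static void Main(string[] args)
    {
        var ns = (XNamespace)"http://schemas.microsoft.com/developer/msbuild/2003";
        var project = XDocument.Load(args[0]);

        // For every <Content> item, force <CopyToOutputDirectory> to "Always".
        foreach (var item in project.Descendants(ns + "Content"))
        {
            var copy = item.Element(ns + "CopyToOutputDirectory");
            if (copy == null)
            {
                copy = new XElement(ns + "CopyToOutputDirectory");
                item.Add(copy);
            }
            copy.Value = "Always";
        }

        project.Save(args[0]);
    }
}
```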

Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment.

Unload the worker role project, run this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project.

Now let’s change the “index.js” to use express.

```javascript
var express = require("express");
var app = express();

var port = 80;

app.configure(function () {
});

app.get("/", function (req, res) {
    res.send("Hello Node.js!");
});

app.get("/User/:id", function (req, res) {
    var id = req.params.id;
    res.json({
        "id": id,
        "name": "user " + id,
        "company": "IGT"
    });
});

app.listen(port);
```

Finally, let's publish it and have a look in the browser.

Use Windows Azure SQL Database

We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js on worker role hosting as well. Since we can control the version of Node.js, we can now use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which only supports x86.

Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD.

Publish to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to do some preparatory work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting.

But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that this way we cannot retrieve the cloud service configuration information. For example, we defined the endpoint in the worker role properties but also hardcoded the listening port in Node.js. Ideally Node.js should retrieve the endpoint from the configuration, but I can tell you it won't work with this approach.

In the next post I will describe another way to execute "node.exe" and the Node.js application so that we can access the cloud service configuration from Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js through the Windows Azure Node.js SDK.

In the first post of this series I introduced how to run a Node.js application on Windows. In the second I demonstrated how to use SQL Server from Node.js. Those were my steps while learning Node.js: first make it work on Windows, then try to use SQL Server. But returning to my original goal, I need to implement a synchronization service for Wang Tao's Worktile, and I planned to have a prototype hosted on Windows Azure. So the previous two posts were preparation. In this post I am going to describe how to host a Node.js website on Windows Azure Web Site.

Node.js on Windows Azure Web Site via Git Deployment

Windows Azure Web Site (a.k.a. WAWS) is a new service introduced at the Microsoft Meet Windows Azure event in June. It provides a low-cost, simple way to host websites in the cloud, with built-in templates. For more information about WAWS, please refer to my previous posts here, here and here.

WAWS is, I think, the simplest way for us to deploy a Node.js website.

- IIS and IISNode are installed and configured in the WAWS environment, so we don't need to install anything when deploying.

- WAWS supports multiple deployment approaches such as TFS, FTP and Git. For Node.js, FTP and Git deployment are very easy and quick.

- WAWS provides three scale modes: free, shared and reserved. The cost is lower than Windows Azure Cloud Service (a.k.a. WACS) if we only need a website with a database.

Let's have a look at how simple and easy it is to get a Node.js website running on WAWS. First of all, we need to create a new WAWS. Since I will demonstrate how to use WASD from Node.js later in this post, we will create a WASD here as well. Go to the Windows Azure developer portal and from the NEW button select COMPUTE, WEB SITE, CREATE WITH DATABASE.

Once the website is created, go to its dashboard and click the "Set up Git publishing" link on the right side.

After a few seconds Windows Azure will finish the Git deployment configuration. If this is the first time you have configured Git or FTP deployment on Windows Azure, you need to click the "Reset deployment credentials" link on the dashboard to set the deployment user name and password.

Next, let's clone the Git repository from Windows Azure to our local disk. If you don't have Git installed on your machine you can download it here. After installing, open Git Bash from the Git folder in the start menu. In the command window, navigate to the parent folder for the clone. For example, in my case I'm going to use the folder "D:\Research\nodejs\apps\nodejssample", so I need to go to "D:\Research\nodejs\apps" and make sure the folder "nodejssample" does NOT exist, since Git will create it when cloning.

Back on the developer portal deployment page, copy the Git URL and then execute the git clone command as shown below. When prompted for a password, enter the one we set in the previous step.
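The command has this shape; the actual URL comes from your own site's deployment page:

```shell
cd /d/Research/nodejs/apps
git clone <the Git URL copied from the deployment page>
```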

Now the repository has been cloned from WAWS to our local disk. Next we will use a GUI tool named GitHub for Windows to commit and push our changes.

GitHub for Windows is a GUI tool for easily managing repositories on GitHub. We can use it to manage our repository cloned from WAWS as well. You can download it here.

Open GitHub for Windows, open the folder of the repository we just cloned in the file explorer, and then drag this folder into GitHub for Windows.

When we click this repository in the GitHub window we need to input the credentials we specified in the developer portal. Currently there are no changes in the local folder.

Since WAWS hosts Node.js applications through IIS and IISNode, it will start the JavaScript file named "server.js". So we must create a source file named "server.js"; this is the entry point of our website. Create a web server and let it listen on the port that comes from "process.env.PORT".

"process.env.PORT" retrieves the port number from the environment variable named "PORT". Since WAWS hosts all websites behind the Windows Azure router and firewall, this environment variable represents the correct internal port our WAWS should listen on.

```javascript
var http = require("http");

http.createServer(function (req, res) {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello Node.js and Windows Azure Website!\n");
}).listen(process.env.PORT);

console.log("Server started.");
```

Once we save the file and go back to the GitHub window, we will see it has detected this change, and we can commit it.

Then click the "publish" button at the top of the window. It will push our changes to the WAWS remote repository, and WAWS will begin to deploy.

Now back in the Windows Azure developer portal, there will be a new deployment on the deployment page under this website. Have a try.

NPM Modules in Windows Azure Web Site

Using NPM modules and deploying them along with our Node.js website is pretty simple, too. All the modules we add are located in the "node_modules" subfolder, so no extra effort is needed.

For example, I installed the "express" module in my website through the NPM command. It downloaded all the necessary files into the "node_modules" subfolder. Then I changed my code to use "express".

```javascript
var express = require("express");
var app = express();

app.get("/", function (req, res) {
    res.send("Hello Node.js, Express and Windows Azure Web Site.");
});

app.get("/Echo/:value", function (req, res) {
    var value = req.params.value;
    res.json({
        "Value": value,
        "Time": new Date()
    });
});

console.log("Web application opened.");
app.listen(process.env.PORT);
```

Then go to GitHub for Windows, commit and sync to the remote repository.

Then in Windows Azure we will find the new deployment.

And if we refresh the website home page we will find the new content. We can also test the new function added in this change.

Work with Windows Azure SQL Database

Let's continue to use the SQL schema and data I mentioned in my previous post; execute the script below against the SQL Database we created at the beginning.

```sql
-- Excerpt from the full script.
INSERT [dbo].[Resource] ([Key], [Culture], [Value]) VALUES (N'View_HomIndex_Featured', N'en-US', N'To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. The page features <mark>videos, tutorials, and samples</mark> to help you get the most from ASP.NET MVC. If you have any questions about ASP.NET MVC visit <a href="http://forums.asp.net/1146.aspx/1?MVC" title="ASP.NET MVC Forum">our forums</a>.')

INSERT [dbo].[Resource] ([Key], [Culture], [Value]) VALUES (N'View_HomIndex_Suggest_1_Val', N'en-US', N'ASP.NET MVC gives you a powerful, patterns-based way to build dynamic websites that enables a clean separation of concerns and that gives you full control over markup for enjoyable, agile development. ASP.NET MVC includes many features that enable fast, TDD-friendly development for creating sophisticated applications that use the latest web standards. <a href="http://go.microsoft.com/fwlink/?LinkId=245151">Learn more...</a>')
```

Then we need to add the node-sqlserver module in our local repository. As I mentioned in the previous post the node-sqlserver works with SQL Server and WASD. But there’s a problem if we are using NPM to install it. If you remembered, I specified that when NPM installed node-sqlserver it will build some C++ code by Python. The output binary depends on the local machine and Python whether it’s x86 or x64. If our local machine was x86 then this is no problem. But if our local machine is x64, the node-sqlserver module cannot be used on WAWS, since in WAWS all website are hosted in IIS in x86 WOW mode.

One solution is to change our development machine to x86, which may not be acceptable since x64 is preferred for other Windows Azure services development such as WASD and WACS.

Or maybe we can find an x86 machine to download and compile an x86 version of node-sqlserver for us, then copy it to our working PC.

Or we can download the x86 version of node-sqlserver from Microsoft, which is named “Microsoft Driver for Node.JS for SQL Server Preview” and can be found here. Following its instructions, we can install it and copy the “node-sqlserver” folder under the “node_modules” folder in our repository.

Now we can use source code similar to the previous post’s to work with WASD; we just need to change the connection string. The code would be like this.

The connection string can be found on our WASD page in the developer portal. In Node.js we need to use the ODBC connection string, and make sure to change the password before committing and syncing.
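The exact string should be copied from the portal, but its general shape is roughly like the sketch below. The server, database, user and password here are all hypothetical placeholders, not real values:

```javascript
// Hypothetical WASD ODBC connection string pieces -- every value below is a
// placeholder; copy the real string from the developer portal instead.
var server = "ac23fe.database.windows.net";
var database = "caspardb";
var user = "shaun";
var password = "{your_password_here}";

var connectionString =
    "Driver={SQL Server Native Client 10.0};" +
    "Server=tcp:" + server + ",1433;" +
    "Database=" + database + ";" +
    "Uid=" + user + "@" + server.split(".")[0] + ";" +
    "Pwd=" + password + ";" +
    "Encrypt=yes;Connection Timeout=30;";

console.log(connectionString);
```

The only part we normally need to touch is the `Pwd` placeholder; everything else can be pasted in as-is from the portal.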

Save the file, then commit and sync it in the GitHub window. Our website will be deployed automatically.

If we change our C# console application a little bit we can test the POST function of our Node.js application hosted on WAWS. The updated C# console code would look like this; I just changed the remote URL.

Summary

In this post I demonstrated how to host a Node.js website on Windows Azure Web Site. It’s very simple and easy to deploy by using the Git command-line client or the new GitHub GUI.

I also described how to use Windows Azure SQL Database from Node.js on Windows Azure Web Site. Make sure you import the right version of sqlserver.node. For a smoother development and cloud workflow we can keep two node-sqlserver modules, the x86 version in a folder named “node-sqlserver-x86” and the x64 version in “node-sqlserver-x64”. Then we can import the x64 one during local development and change it to “node-sqlserver-x86” before committing and syncing to Azure.

In Windows Azure there are some other services besides SQL Database, such as Storage, Service Bus, etc. Almost all Windows Azure services can be used from Node.js through another module named “azure”, which is the Windows Azure Node.js SDK. In the next post I will demonstrate how to host our Node.js application on a Windows Azure Cloud Service Worker Role, how to consume the Storage Service and how to retrieve the Cloud Service configuration.

In my previous post I demonstrated how to install, configure and run a Node.js application on Windows. In this post I will cover a specific and short topic: how to use SQL Server in Node.js.

Install the SQL Server Module

In NPM there are many SQL Server modules. From the command window we can use “npm find” to search the modules for a specified term. For example, below is the result when I searched for “sqlserver”.

In this post I will describe one of them, the “node-sqlserver” module. This module was built by Microsoft. It utilizes a library built in C++ as the bridge between the JavaScript API and the backend SQL Server Native Client. When we download and install this module, NPM retrieves the source code and compiles the C++ code with a script written in Python. Hence there are some required components.

- Node.js: version 0.6.10 or higher. But per Microsoft, it’s not fully tested against Node.js 0.8.x.

If we have those installed then we can use NPM to download and install this module.

After it is installed there is still one step that needs to be done manually. It seems to be a bug in the module’s installation package: by default the binary of the SQL client (compiled from the C++ source code) is not placed in the module’s library folder, so we need to copy it ourselves.

For example, assuming we installed node-sqlserver under the folder “D:\Research\nodejs\apps”, then we need to go to “D:\Research\nodejs\apps\node_modules\node-sqlserver\build\Release”, find a file named “sqlserver.node”, copy it to “D:\Research\nodejs\apps\node_modules\node-sqlserver\lib”.

The sqlserver.node binary is compiled based on whether we installed x86 or x64 Python. Basically, if we are using x64 Node.js then we should use x64 Python to get an x64 sqlserver.node. Otherwise Node.js will raise an exception saying it cannot find “sqlserver.node”.

Querying in Node.js

Here I would like to use the database I introduced in my Caspar project. It only contains one table with 3 columns which is very simple. The full schema and data creation script is listed below.

INSERT [dbo].[Resource] ([Key], [Culture], [Value]) VALUES (N'View_HomIndex_Featured', N'en-US', N'To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. The page features <mark>videos, tutorials, and samples</mark> to help you get the most from ASP.NET MVC. If you have any questions about ASP.NET MVC visit <a href="http://forums.asp.net/1146.aspx/1?MVC" title="ASP.NET MVC Forum">our forums</a>.')

INSERT [dbo].[Resource] ([Key], [Culture], [Value]) VALUES (N'View_HomIndex_Suggest_1_Val', N'en-US', N'ASP.NET MVC gives you a powerful, patterns-based way to build dynamic websites that enables a clean separation of concerns and that gives you full control over markup for enjoyable, agile development. ASP.NET MVC includes many features that enable fast, TDD-friendly development for creating sophisticated applications that use the latest web standards. <a href="http://go.microsoft.com/fwlink/?LinkId=245151">Learn more...</a>')

It’s very simple to connect to SQL Server through node-sqlserver. First of all, we need to import this module. Then we invoke its “open” function to open the connection. The first parameter is the connection string while the second parameter is a callback, which will be invoked once the connection is opened or has failed.

Once we have the connection opened, we can execute any T-SQL command through the “queryRaw” function. This function has two parameters. The first one is the T-SQL command while the second one is a callback that will be invoked once the query completes or fails.

var sql = require("node-sqlserver");

sql.open(connectionString, function (err, conn) {
    if (err) {
        // error handler here
    }
    else {
        var command = "{YOUR_COMMAND_HERE}";
        conn.queryRaw(command, function (err, results) {
            if (err) {
                // error handler here
            }
            else {
                // results in JSON format here
            }
        });
    }
});

The parameter “results” represents all records in JSON format. It contains two parts. The first part, named “meta”, is the metadata of each column of the result. The second part, named “rows”, is an array containing all records.

For example, below is the query result against the table I had just created. The meta part says there are three columns in the result: Key, Culture and Value. And the rows part is an array of all records.

Hence if we need to retrieve data from the result we have to use the rows array. Currently the module doesn’t support getting a result value by column name.

conn.queryRaw(command, function (err, results) {
    if (err) {
        // error handler here
    }
    else {
        // each row is an array of column values, accessed by position
        for (var i = 0; i < results.rows.length; i++) {
            console.log(i + ": Key = " + results.rows[i][0]);
            console.log(i + ": Culture = " + results.rows[i][1]);
            console.log(i + ": Value = " + results.rows[i][2]);
        }
    }
});
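Until column-name access is supported by the module, a small helper can rebuild each row as an object keyed by column name from the meta part. This is only a sketch, assuming the meta/rows result shape described above:

```javascript
// Turn a queryRaw result ({ meta: [...], rows: [...] }) into an array of
// plain objects keyed by column name -- node-sqlserver itself only offers
// positional access at the moment.
function rowsToObjects(results) {
    return results.rows.map(function (row) {
        var record = {};
        results.meta.forEach(function (column, i) {
            record[column.name] = row[i];
        });
        return record;
    });
}

// Example shape of a queryRaw result for our Resource table.
var results = {
    meta: [{ name: "Key" }, { name: "Culture" }, { name: "Value" }],
    rows: [["View_HomeIndex_Title", "en-US", "Home Page"]]
};

console.log(JSON.stringify(rowsToObjects(results)));
```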

You can find the source code and the wiki of node-sqlserver on its GitHub page. For more information about this module, please refer here.

Let’s create a simple web server to demonstrate how to use the node-sqlserver module. I created a new Node.js application named “hellosql.js” and installed two modules: express and node-sqlserver. Then when the user navigates to the home page it connects to the database and returns all the records as JSON.

And since node-sqlserver currently doesn’t support SQL parameters, we have to combine the command text ourselves if we need query criteria. In the code below I added another path where the user can specify the resource key and culture in the URL, and the application will search for the value.
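A minimal sketch of such text combination is below. The table and column names mirror the Resource table from the script above, and the quote-doubling is only a naive guard against broken statements, not a full injection defense:

```javascript
// Double any single quotes so user input can't break out of the literal
// (naive sketch only -- real code should validate input far more strictly).
function escapeSql(value) {
    return String(value).replace(/'/g, "''");
}

// Build the SELECT by string concatenation, since node-sqlserver has no
// parameter support yet.
function buildSelect(key, culture) {
    return "SELECT [Key], [Culture], [Value] FROM [Resource]" +
           " WHERE [Key] = N'" + escapeSql(key) + "'" +
           " AND [Culture] = N'" + escapeSql(culture) + "'";
}

console.log(buildSelect("View_HomeIndex_Title", "en-US"));
```

The resulting string is what we would pass to conn.queryRaw in the route handler.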

Insert, Update and Delete

As I mentioned, the “queryRaw” function can also be used to insert, update and delete rows in the database. In fact we can specify any T-SQL command in this function. For example we can pass “EXEC sp_GetItem ‘MY_KEY’, ‘en-US’” to invoke a stored procedure.

In the code below I added a route for the user to insert a new record into the database. The user must send the request with the HTTP POST method and specify the values in the request body. The application retrieves the values from the body, combines the SQL command and then invokes the node-sqlserver module to insert the data.
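The route itself needs express and a live database to run, but the command-building step can be sketched on its own. The field names below are hypothetical, mirroring the Resource table:

```javascript
// Build the INSERT command from values taken from the POST request body.
// "key", "culture" and "value" are hypothetical body field names.
function buildInsertCommand(body) {
    var escape = function (v) { return String(v).replace(/'/g, "''"); };
    return "INSERT INTO [Resource] ([Key], [Culture], [Value]) VALUES (" +
           "N'" + escape(body.key) + "', " +
           "N'" + escape(body.culture) + "', " +
           "N'" + escape(body.value) + "')";
}

console.log(buildInsertCommand({ key: "View_About_Title", culture: "en-US", value: "About" }));
```

In the express route the returned string would simply be handed to conn.queryRaw, with the response sent from its callback.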

Let’s run this console application and specify the values we want to insert into the database.

And then back in the database we can see the record has been inserted.

Summary

In this post I introduced a SQL Server Node.js module named “node-sqlserver”. This module was created by Microsoft and works with SQL Server as well as Windows Azure SQL Database. It’s still in an early preview phase so the functionality is not yet complete; for example there is no support for SQL parameters, no dedicated stored-procedure API and no way to get a result value by column name. But Microsoft is working to add more features.

In the next post, I will describe how to run a Node.js application on Windows Azure. I will demonstrate how to use Windows Azure Web Site to host our Node.js application and how to connect to Windows Azure SQL Database as well.

Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup and its product named worktile. He asked me to figure out a synchronization solution to help his product in the future. And he preferred that I implement the service in Node.js, since worktile is written in Node.js.

Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t consider myself an expert in JavaScript. In fact I’m very new to it. So it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I have come to love Node.js.

Hence I decided to have a series named “Node.js Adventure”, where I will demonstrate my story of learning and using Node.js in Windows and Windows Azure. And this is the first one.

(Brief) Introduction of Node.js

I don’t want to give a fully detailed introduction of Node.js; there are many resources we can find on the internet, and the best one is its homepage.

Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It utilizes CommonJS as the module system, which we will explain later. The official definition of Node.js is

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

First of all, Node.js utilizes JavaScript as its development language and runs on top of the V8 engine, which is also used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say, even though it’s not strictly accurate, that “Node.js is server-side JavaScript”.

Additionally, Node.js uses an event-driven, non-blocking IO model. This means in Node.js there’s no way to block the working thread; every IO operation in Node.js executes asynchronously. This is a huge benefit especially when our code needs IO operations such as reading from disk, connecting to a database or consuming a web service.

Unlike IIS or Apache, Node.js doesn’t utilize a multi-threaded model. In Node.js there’s only one working thread serving all user requests and responses, shown as the ST star in the figure below. And there is a POSIX async thread pool in Node.js which contains many async threads (the AT stars) for IO operations.

When a user sends an IO request, the ST serves it but does not perform the IO operation itself. Instead, the ST goes to the POSIX async thread pool, picks up an AT, passes the operation to it, and then goes back to serve other requests. The AT performs the actual IO operation asynchronously.

Suppose another user arrives before the AT completes the IO operation. The ST serves this new request, picks up another AT from the pool, and then goes back.

When the previous AT finishes the IO operation, it takes the result back and waits for the ST. The ST takes the response, returns the AT to the pool, and then responds to the user.

And when the second AT finishes its job, the ST responds to the second user in the same way.

As you can see, in Node.js there’s only one thread serving clients’ requests and POSIX results. This thread loops between the users and the POSIX pool, passing the data back and forth; the async jobs are handled by the POSIX threads. This is the event-driven, non-blocking IO model.

The performance of this model is much better than the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model. Below is a performance comparison between them.

Node.js on Windows

Executing a Node.js application on Windows is very simple. First of all we need to download the latest Node.js platform from its website. After installation, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the installation, just open a command window and type “node”; it will show the Node.js console.

As you can see this is a JavaScript interactive console. We can type some simple JavaScript code and commands here. To run a Node.js application, just specify the source code file name as the argument of the “node” command.

For example, let’s create a Node.js source code file named “helloworld.js”. Then copy a sample code from Node.js website.

var http = require("http");

http.createServer(function (req, res) {
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end("Hello World\n");
}).listen(1337, "127.0.0.1");

console.log("Server running at http://127.0.0.1:1337/");

This code creates a web server listening on port 1337 which returns “Hello World” for any request. Run it in the command window.

Then open a browser and navigate to http://localhost:1337/.

As you can see, when using Node.js we are not so much creating a web application as creating a web server. We need to deal with the request, the response and the related headers, status codes, etc. And this is one of the benefits of using Node.js: it’s lightweight and straightforward.

But creating a web server from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage existing modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

NPM and Node.js Modules

Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named “index.js”, then all modules it needs will be located in the “node_modules” folder, and in “index.js” we can import modules by specifying the module name.

For example, in the code we’ve just created, we imported a module named “http”, which is a built-in module installed along with Node.js, so we can use the code in this “http” module.

Besides the built-in modules there are many modules available at the NPM website. Thousands of developers are contributing and downloading modules there. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules is growing very fast, and we can publish our own modules to the community. When I wrote this post there were 14,608 modules on NPM and about 10 thousand downloads per day.

Installing a module is very simple. Let’s go back to our command window and input the command “npm install express”. This command will install a module named “express”, which is an MVC framework on top of Node.js.

And let’s create another JavaScript file named “helloweb.js” and copy the code below into it. I imported the “express” module. When the user browses the home page it responds with a text. If the incoming URL matches “/Echo/:value”, where “value” is what the user specified, it passes the value back along with the current date and time in JSON format. And finally the website listens on port 12345.

var express = require("express");
var app = express();

app.get("/", function (req, res) {
    res.send("Hello Node.js and Express.");
});

app.get("/Echo/:value", function (req, res) {
    var value = req.params.value;
    res.json({
        "Value": value,
        "Time": new Date()
    });
});

console.log("Web application opened.");
app.listen(12345);

For more information and API about the “express”, please have a look here.

Start our application from the command window with the command “node helloweb.js”, then navigate to the home page and we can see the response in the browser.

And if we go to, for example http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

The “express” module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other very useful modules on NPM.

- underscore: A utility module covers many common functionalities such as for each, map, reduce, select, etc..

- wind: Library which enable us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

Node.js and IIS

I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: “can I host Node.js in IIS?” The answer is “Yes”.

Tomasz Janczuk created a project named IISNode at his GitHub space, which we can find here. And Scott Hanselman has published a blog post introducing it.

Summary

In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven, non-blocking model. Then I described how to install and run a Node.js application from the Windows console. I also described the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS.

Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features it’s very useful for building highly scalable, asynchronous services. I think Node.js will be widely used in cloud application development in the near future.

I also tried to provide some solutions that I mentioned in the original post but didn’t implement.

Thanks and History

Since I posted the original post two years ago, its view count has passed 3,000 and there are about 55 comments so far. I would like to take this opportunity to say thank you to all the people who read and commented on that post. Your feedback gave me the passion to continue blogging, and it’s my great honor.

I wrote that post when I was working at Ethos Technologies on a web application project. At that moment I needed to add a localization feature to an ASP.NET MVC 2 web application.

Since I changed my career to IGT I’ve been focusing more on cloud computing, backend service development and architecture, especially Windows Azure and WCF. Even though I keep an eye on the ASP.NET platform, I haven’t spent much time using it. This is the reason I didn’t provide enough answers to the questions in that post.

This week my team successfully released a new version of our product, so I would like to go back to the ASP.NET world and use a separate post to upgrade my localization solution. Before reading this post, I suggest you take several minutes to quickly go through the original one, which should provide some background knowledge.

- Moved the localization code into a separate project so that it can be reused more easily.

- Provided a solution to abstract the localization resource provider, so that we can put the localization items in any data source such as a database, a configuration file, and of course a resource file.

- Updated the language selector helper class so that we can use texts, images, scripts, etc.

Principle of Localization

The basic principle of localization is to retrieve the localized value from the central localization framework by specifying the localization key. Never put any final localized string directly in the application. For example, when we need to print “Hello World”, we should ask the localization framework for the localized “Hello World” by providing the culture and the key.

In ASP.NET MVC, all strings in the application, not only those in view pages but also those in controllers, models and backend services, should be specified by localization key, and that key is then used to get the final value from the localization framework.

The localization framework is responsible for:

- Helping the ASP.NET MVC application get the culture from the URL and set the UI culture.

- Figuring out the current culture from the URL and rendering the proper URLs based on the current culture.

- Providing an HTML helper to render the language selection section in views.

- Retrieving the localized value based on the localization key.

Upgraded to ASP.NET MVC 4 and Separated Project

We still need to use the routing feature of ASP.NET to help us implement the localization feature. When the application starts we need to register new routes carrying the language information, so that we know which culture the user currently selected.

In ASP.NET MVC 4, all configuration procedures are split into the App_Start folder. So we will add a new static class in our localization project, which should be invoked from the Application_Start function in Global.asax.

In this post there will be a project containing all the localization code, and an ASP.NET MVC 4 web application that uses this project.

Another class contains all the constants used by the localization framework.

internal static class Constants
{
    internal static string ROUTE_NAME = "Localization";
    internal static string ROUTE_PARAMNAME_LANG = "lang";
}

Then we can add the route in Global.asax. One thing to notice is that our localization routes must come before the default route. So we must invoke LocalizationConfig.RegisterRoutes(RouteTable.Routes) before the default route registration, RouteConfig.RegisterRoutes(RouteTable.Routes).

I really don’t know why Microsoft doesn’t provide insert or reorder functions in its RouteCollection class. It uses a Collection&lt;RouteBase&gt; to store all routes, but a private dictionary to store the relationship between route names and route objects. And there’s no way for us to touch that private dictionary and set the order of the routes.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);

    // register the localization routes
    // note: this must be invoked before the RouteConfig.RegisterRoutes
    LocalizationConfig.RegisterRoutes(RouteTable.Routes);

    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
    AuthConfig.RegisterAuth();
}

The next step is to implement the localization procedure in all controllers. In the previous post I created a base controller which overrides the ExecuteCore method and puts the localization-related logic there. All controllers in my project then inherit from this base controller to have the localization feature enabled.

But there are some problems with this solution. The first one is that in ASP.NET MVC 4 the ExecuteCore method will NOT be invoked by default, so I needed to move the localization logic to BeginExecuteCore.

In order to support asynchronous controllers, in ASP.NET MVC 4 ExecuteCore will not be invoked unless you override DisableAsyncSupport to return TRUE. For more information about this breaking change please refer here and here.

The second problem is related to architecture. Putting the localization logic in a base controller is OK, but it means all your controllers must inherit from this base controller. This could be a problem if you have an existing base controller which cannot be modified. For example, suppose we are going to create an ASP.NET MVC website where all controllers must inherit from Contoso.BaseController, which contains some common business logic of my company, and this base controller is defined in an assembly that cannot be changed. Now if I need localization, all controllers must inherit from the localization base controller as well as the company base controller. How can we do that?

Hence, I decided to move my localization logic into a standalone class. Then we can create a base controller class derived from Contoso.BaseController, invoke our localization logic in its BeginExecuteCore, and have all my controllers inherit from this base one so that all of them have both the common business logic and the localization feature.

I moved the code from the base controller’s ExecuteCore to this new helper class.

Oops! Bugs in System.ComponentModel.DataAnnotations

All migration work is done. Let’s have a try. First of all we need two resource files, one for English and the other for Chinese, and put some values in them.

In order to make this sample simple I just demonstrate the localization feature in the three major places in ASP.NET MVC: view pages, controllers and model attributes. Now let’s have a very quick test. In the home index view I specified the title through the resource.

@{
    ViewBag.Title = Resources.Global.View_HomeIndex_Title;
}
@section featured {
    <section class="featured">
        <div class="content-wrapper">
            <hgroup class="title">
                <h1>@ViewBag.Title.</h1>
                <h2>@ViewBag.Message</h2>
            </hgroup>
            <p>
                ... ...
            </p>
        </div>
    </section>
}
... ...
... ...

In the home controller I set the message through the resource.

public class HomeController : DefaultController
{
    public ActionResult Index()
    {
        ViewBag.Message = Resources.Global.Controller_HomeIndex_Message;

        return View();
    }
}

And in the account login model we specified the user name field’s display and validation message through the resource on the attributes.

Let’s start our MVC application and have a look. First, the proper strings are shown on the home page; one comes from the view and the other from the controller.

And it worked well once we clicked on the language select bar.

Then click login and verify the localization in System.ComponentModel.DataAnnotations attributes. Oops!

This error message means that there is no public, static string property in the class generated from our resource file, so ASP.NET MVC cannot retrieve the localized string through the DisplayAttribute.

If we dig into the source code of the DisplayAttribute in System.ComponentModel.DataAnnotations we will find that it retrieves the localized value through an internal class named LocalizableString. And the LocalizableString class tries to find a visible property with the name of DisplayAttribute.Name on the type defined in DisplayAttribute.ResourceType.

But if we open the Resources.Global class we will find that the class and all its properties were generated as “internal”. This means it cannot be retrieved by the DisplayAttribute, since its IsVisible is FALSE. Some believe this is a bug in the DataAnnotations assembly, since another attribute, ValidationAttribute, retrieves the resource from a property as long as it is static.

I’m not sure whether this was implemented “by design” or is a mistake in the .NET BCL. I hope it will be fixed in “.NET 5.0”, but for now we need a workaround. One solution is to move the resources into another project so that we can change their access mode from “internal” to “public”.

Alternatively we can force Visual Studio to generate the resource class as “public”. Select the resource files, open the property window, change the Build Action to Embedded Resource and change the Custom Tool to PublicResXFileCodeGenerator.

We also need to modify the code that uses the resource class, since this generator puts the output class under the App_GlobalResources namespace. Now let’s retry our web application, and we will see the localized strings in the view page, the controller and the model attributes all working well.

Beyond the Resources: Localization Source Provider

In my previous post some commenters wondered how to use a database to store the localization strings instead of the resource files. At that time I replied that we can implement our own resource manager, but this might not be a simple task.

In ASP.NET MVC there are three places where we may need localization: views, controllers and models. Retrieving the localized string from a database in views and controllers should not be a big problem, since we have full control of the code there. For example, assume we created an interface named ILocalizationSource which contains one method that returns the localized strings. Then in views and controllers we can simply implement it like the pseudo-code below.

As I described in the previous post, when ASP.NET MVC receives an HTTP request it creates the related controller, uses the action invoker and builds the model binder. By default, the model binder validates the input model. In ASP.NET MVC 4 almost all parts can be extended. So when the default model binder tries to validate a property of the model, it retrieves the appropriate ModelValidator and calls its Validate method. The validation result is returned and added to BindingContext.ModelState.

ASP.NET MVC 4 utilizes DataAnnotationsModelMetadataProvider to retrieve the metadata for each model, and DataAnnotationsModelValidatorProvider for model validation. Both of these providers leverage the DisplayAttribute and ValidationAttribute in DataAnnotations to retrieve the localized display name and error message. For example, ValidationAttribute invokes a private method named SetResourceAccessorByPropertyLookup to get a delegate for retrieving the localized string, then invokes this delegate (_errorMessageResourceAccessor), adds the display name, and returns the error message along with the ModelValidationResult.

The problem is that Microsoft doesn’t provide any extension points in the DataAnnotations assembly. When it needs the localized validation message it looks for a static string property defined on the ResourceType; if one is found it uses reflection to invoke this property and retrieve the localized string, otherwise it throws an exception. All of this logic is hard-coded in the DataAnnotations assembly.

In order to use other kinds of localization sources there are two solutions. The first one is very straightforward: since the default behavior of DataAnnotations is to find a static string property for each localization item, we can create our own class which has such a property for each item.

For example, I created a table in SQL Server which contains all localization strings. The structure of this table would be like this. The primary key consists of Key and Culture.

Then copy the resource values into this table.

Now let’s create a class which contains a static property for each localization item. It will connect to the database and use the current UI culture and the key to find the proper localized string.