Live DevOps in Japan!
https://blogs.technet.microsoft.com/livedevopsinjapan

Simulating 1000 agents using ACS, ACI, and Durable Functions – (1) ACI
Tue, 23 Jan 2018 14:57:24 +0000

I ran an experiment to simulate 1,000 agents using several approaches. I'd like to share what I learned, along with some code, in this post.

Problem Overview

I wanted to simulate 1,000 agents in a cost-effective way with the simplest possible solution, using .NET-based technology as much as possible. How can we do this with Azure technologies?
In this experiment, I use ACI, ACS (Kubernetes with Go), and Durable Functions. The system itself is simple: it just prints a message when a queue item arrives. The customer has 1,000 clients on-premises. If possible, we want less than 1 second of latency from sending a message to the queue to storing the print state in the Storage Table.

The architecture is also simple. When a customer wants to print, the web server sends a message to a Storage Queue. An Azure Function then creates a record in Table storage. The print agents continuously poll the table; once an agent gets a record, it prints the data and sends a message to another Storage Queue, and a second Azure Function stores the print status back in the table.

Now you are ready to use this agent simulator with Docker. 'tsuyoshiushio' is my Docker Hub repository name.

Azure Container Instances

This is a serverless Docker hosting service; if it works, it could be a very good solution. Since each client container requires different environment variables, I need some code to control the containers. You can find the whole source code on my GitHub. Creating containers is quite easy.

Then create an instance. In the actual code, I needed to deploy one more client, called spammer, which sends a lot of queue messages. I wanted it to terminate once finished, which is why I specify RestartPolicy = "Never".

This seemed like the best solution for this task. Very easy! However, it wasn't: I couldn't get enough capacity. Since ACI is in preview, there are some limitations.
By default, the limit is 20 container groups per subscription, with 60 containers per container group, so we can deploy up to 1,200 containers. The biggest problem is the creation rate limit of 60 containers per hour; obviously, I can't create 1,000 containers that way. I contacted the support service to raise the quotas, but I'm not sure 1,000 containers is possible while the service is in preview. ACI is totally great as a "serverless" solution; however, I had to give it up this time.

Still, operating ACI from C# is very easy, and we can use it for a lot of purposes!

Conclusion

The ACI interface is very good and easy to operate using the Azure SDK. However, it has some quota limitations, which is why I gave up this time. Even so, ACI is a very good tool for creating containers with a serverless architecture!

I have asked support to raise the quota; if they do, I'll update this blog.

In the next post, I'll talk about the Kubernetes-with-Go strategy for solving this problem.

Azure Functions dynamic queue message routing for Storage Queue and Service Bus

I want to send messages to the agent_1 queue for Agent One, and to the agent_2 queue for Agent Two. Azure Functions already has this functionality, called Binder.
This feature comes from WebJobs. Until recently it was only documented on the WebJobs GitHub; however, now it is covered in the Azure Functions documentation as well!

Let's try it. We can use the IBinder interface and the BindAsync&lt;T&gt;() method. The problem is that we might not know which class to choose for type T, so I'd like to share which classes work for this purpose. One of the biggest pains here is that there is no error if you use a wrong type for T: it looks like a success, but no queue message is emitted.

I'd like to share a successful scenario. This function is triggered by the "acceptprint" queue, then sends a queue message to the "agent_1" or "agent_2" queue according to the AgentId value.

However, we can't use string, byte[], or BrokeredMessage as type T in this case; BrokeredMessage doesn't have a setter for the message body. I saw a sample for Blob storage that uses TextWriter or Stream; it runs without errors but doesn't actually work here. I recommend using ICollector&lt;T&gt; or IAsyncCollector&lt;T&gt; in this case.
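The successful samples are C# code, but the routing rule itself is simple. As a language-neutral illustration only (hypothetical names, not the post's code), here is the same per-agent routing sketched in JavaScript, with an in-memory map standing in for the Storage Queues:

```javascript
// Hypothetical sketch: choose the output queue from the message's AgentId,
// mirroring what the C# Binder sample does. The `queues` map stands in for
// real Storage Queues; pushing onto it roughly corresponds to ICollector.Add.
const queues = { agent_1: [], agent_2: [] };

function route(message) {
    const queueName = 'agent_' + message.AgentId;   // "agent_1" or "agent_2"
    if (!queues[queueName]) {
        throw new Error('unknown agent: ' + message.AgentId);
    }
    queues[queueName].push(message);
    return queueName;
}

console.log(route({ AgentId: 1, text: 'print me' })); // agent_1
console.log(queues.agent_1.length);                   // 1
```

The real version differs only in where the message goes: instead of an in-memory array, the binder gives you a collector bound to the dynamically named queue.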

Storage Queue

I also tried Storage Queue. It was easy. I found a sample on this page, although it is for WebJobs.

Azure Functions CI/CD pipeline for Node.js using VSTS
Wed, 13 Dec 2017 11:59:35 +0000

In this blog post, I'd like to share how to create a CI/CD pipeline for Azure Functions (Node.js). You can find several posts like this for C#, but not for Node.js, so I'd like to walk through building the whole pipeline in tutorial style.

Sample Project

You can find a sample project on GitHub. It includes source code, test code, package.json, and files exported from my build/release definitions, so you can import the same CI/CD pipeline as mine.

Prerequisite

Configure Build definition

Project structure

If you don't need extra packages for your Azure Functions, you don't need a package.json. However, if you want to add extra packages, run unit tests, or do static analysis, I recommend having a package.json in your FunctionApp directory. The sample package structure on GitHub looks like this.

The sample project has two functions, Scheduler and Worker. Scheduler receives a request via HttpTrigger and passes the value to the queue bindings; Worker receives the queue message and emits a log message. Simple functions. The Tests directory includes unit tests for the functions.

Import build definition

For your convenience, I exported my build/release definitions and put them on my GitHub, so you can import them into your VSTS account.

1. npm install

On VSTS, we have the npm task with the install command; it restores all the npm packages, creating a node_modules directory in the package root.

2. Unit Testing

Writing unit tests for Azure Functions (Node.js) is not so difficult. An Azure Function has two parameters: one is context and the other is req. You just mock them using JSON objects and check the values of the output bindings. You can use Sinon.JS to verify whether a particular function has been called.
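As a minimal sketch of that mocking approach (the function body here is hypothetical, not the sample project's code), you can exercise a function with plain objects and assert on the output bindings:

```javascript
// Hypothetical HTTP-triggered function: copies a value from the request
// into a queue output binding, then signals completion.
const scheduler = function (context, req) {
    context.bindings.outputQueueItem = { value: req.body.value };
    context.res = { status: 202 };
    context.done();
};

// Mock `context` and `req` as plain JSON-like objects.
let completed = false;
const context = {
    bindings: {},
    log: function () {},
    done: function () { completed = true; }
};
const req = { body: { value: 42 } };

scheduler(context, req);

console.log(context.bindings.outputQueueItem.value); // 42
console.log(completed);                              // true
```

In real tests you would do the same inside mocha's it() blocks, and use Sinon.JS spies instead of the hand-rolled `completed` flag when you need call verification.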

In the pipeline, I simply call this script using an npm task. It creates test-results.xml in the FunctionApp root directory; then, using the Publish Test Results task, you can push the results to VSTS and see them there.
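For reference, a package.json script that produces test-results.xml could look something like this. The mocha-junit-reporter package and the version numbers are my assumptions (that reporter writes test-results.xml by default, in the JUnit format the publish task expects), not necessarily the sample project's exact setup:

```json
{
  "devDependencies": {
    "mocha": "^3.5.0",
    "mocha-junit-reporter": "^1.13.0"
  },
  "scripts": {
    "report": "mocha Tests --reporter mocha-junit-reporter"
  }
}
```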

3. Static Analysis

This step is optional. I use nsp for security checking: if any npm package has known vulnerabilities, it stops the build. I use ESLint for linting. I just call these scripts from npm tasks.

"security-check": "nsp check",
"lint": "eslint ."

4. Bundle functions

This section is very important for performance. If you use npm packages with a node_modules directory, it can cause a performance problem: a long cold-start time for your function. You can avoid this using the azure-functions-pack package, which turns a function with many files and many dependencies into a single file with just this command.

"pack": "funcpack pack .",

It creates a .funcpack directory containing a single file that bundles all the dependencies. It improves performance a lot.

On the pipeline,

Install Azure Functions Pack (npm task)

Exec funcpack pack . (npm task)

Delete node_modules ( Delete Files task)

Delete Tests directory ( Delete Files task)

This pipeline removes the node_modules and Tests directories, since they are not needed for deployment to Azure once you use funcpack.

5. Archive and publish

If you want to publish your functions to Azure, you need to archive them as a zip. In this pipeline, I copy all the files to the $(Build.ArtifactStagingDirectory)/functionapp directory (Copy Files task), then archive them (Archive Files task) into a FunctionApp.zip file. To deploy a Function App, you need to uncheck the "Prefix root folder name to archive paths" checkbox; if it is selected, the root folder name will be prefixed to the file paths within the archive.

Then publish the zip file to the drop directory using the Publish Artifact task, so you can refer to it from the release definition. If you enable Continuous Integration on the Triggers tab, VSTS automatically starts this pipeline when a new change is pushed to your repo.

Configure Release definition

Create a slot on your FunctionApp

Before starting the configuration, let's create a deployment slot. If you don't need one, you can skip this step; however, a slot is a very good way to deploy functions. You deploy to the slot, test it without affecting production, and once everything looks OK, swap it; the slot application then becomes production with zero downtime. Go to your Function App in Azure, click Slots, then create a slot.

Deploy to the slot

Using the Azure App Service Deploy task: 1. select your subscription, the app type (Function App), and the App Service name (your Function App's name); 2. select the resource group and slot name; 3. select the zip file you created in the build definition.

Then I add a Manual Intervention step to review the deployment; it is optional, though.

Swap Slot

The Swap Slots step is configured almost the same way as the Deploy task; it swaps the slot into production.

Resource

Azure Functions v2 C# Script async sample with SendGrid
Mon, 04 Dec 2017 21:15:03 +0000

I wanted to use SendGrid with an async method. However, all of the examples on the Internet target sync methods, so I'd like to show how to write an async one.

Note, this example is for Azure Functions V2.

Prerequisite

You need a Function App on the V2 runtime and the SendGrid extension. After creating a Function App, go to the portal, where you can change the runtime version. For the SendGrid extension, click "Integrate", then choose SendGrid as an output binding; it will ask whether to install the SendGrid extension, so just install it.

Code

Azure Functions V2 brings a lot of changes for C# Script compared with V1. SendGrid's Mail class is now the SendGridMessage class; HttpRequestMessage turns into Microsoft.AspNetCore.Http.HttpRequest; and the return value changes from HttpResponseMessage to IActionResult.

I couldn't find any async C# Script example for V2; however, I just made the function async and returned Task&lt;IActionResult&gt;.

Resource

CI/CD pipeline for a VSTS Marketplace extension for k8s
Mon, 04 Dec 2017 01:04:27 +0000

I have a project called Kubernetes extension for VSTS. You can use it from the VSTS Marketplace.

Recently, the official Kubernetes task was released, so I added advanced features:

Separate download and execution of the task

Helm support

Istio support

k8s deployment (not only for ACS and AKS but others)

The implementation is very simple; however, releasing it needs my full attention. It started as a hobby project, but now it has 320 users, and some may use it in production. I did releases manually; as a DevOps guy, I wanted to automate them. However, I had some obstacles:

Integration testing needs a k8s cluster; however, I develop on a Windows machine.

How can I test the extension installation in a VSTS account?

Long build time (13 min) for CI; release takes even longer.

Oversized vsix file

In this blog post, I'd like to share how I solved these problems. If you have a better solution, please let me know.

Strategy

I started by coming up with a strategy for testing. I can publish the VSTS task automatically; however, I'd like to test before publishing, and after installing the extension, I need to test it on a k8s cluster. My ideas:

1. Create a VSTS account

When I created the pipeline the first time, I created a project for this extension in an existing VSTS account. However, that causes a problem: the account has other projects that use VSTS tasks, and extension/task deployment applies to the whole VSTS account, not to a single project. That means deploying a broken version would affect those projects, so I decided to create a new VSTS account just for this purpose.

This is not good practice, lol. When I wrote this I was a newbie with Node and TypeScript; however, these configurations work, so I decided to use them for now and refactor in the future. I'll execute the commands below. The point is that a VSTS task is a Node application, but a lot of people use TypeScript (I love TypeScript as well), and there are a few things you need to take care of if you use TypeScript in a CI/CD pipeline.

2.1. Preparation tasks

Before starting the unit testing, we need to install several tools.

npm install (npm)

Display name: npm install

Command: install

install typescript (npm)

Display name: install typescript

Command: custom

Command and arguments: install typescript@2.1.5 --global-style

NOTE: I tried to use --global; however, it didn't work: with it, tsc 1.4.0 was the version enabled. I also tried to set the PATH environment variable, but it has to be set through the task's settings.

install tfx-cli command (npm)

Display name: install tfx-cli command

Command: custom

Command and arguments: install tfx-cli@v0.4.11 --global

2.2. Unit Test and Refresh node_modules

When I created the CI pipeline for the first time, one of the biggest issues was the size of the artifact. It was very big, and when I released it, I got this error.

We need to remove the typings and typescript packages for production, but we still need to compile the ts files to js files. Ideally, I would install the tsc command with the --global option, but that doesn't work: tsc 1.4.0 is already installed and takes precedence. Since I also want to remove development dependencies like mocha and chai, I decided to remove node_modules after the tests and then re-install only the production npm packages. It might not be the best practice; if you have a better idea, please let me know.

FYI: if I don't install TypeScript and the development dependencies, the effect is obvious.

$ du -h node_modules/
:
1.1M node_modules/

npm report (npm)

Display name: npm report

Command: custom

Command and arguments: run report

NOTE: this command compiles the ts files, runs the unit tests, and produces the report.

Delete files (Delete Files)

Display name: Delete files from

Contents: node_modules

npm custom (npm)

Display name: npm custom

Command: custom

Command and arguments: install --production

NOTE: Install only production dependencies.

Publish Test Results and test-results.xml (Publish Test Results)

Display name: Publish Test Results and test-results.xml

Test result format: JUnit

Test results files: test-results.xml

Search folder: $(System.DefaultWorkingDirectory)

npm run deploy (npm)

Display name: npm run deploy

Command: custom

Command and arguments: run deploy

NOTE: This extension has 6 tasks, so I need to copy the js files and node_modules into each task's directory.

2.3. Archive it!

The first CI build took 13+ minutes because of the large size and the number of files. I now archive everything before the upload, and the build time dropped to 3 minutes, a massive improvement. The release pipeline also used to take a long time; now it is 2.3 minutes even with the debugging option enabled.

NOTE: If you create a personal access token, please choose "All accessible accounts" for Account. The default value is only valid for your own VSTS account, which means you would not have the right to deploy to the Marketplace.

3.2. Test

I use the "Hosted Linux Preview" agent; these tasks only run on Linux.

Then you configure the connection to your AKS cluster, and you can run any integration tests you want using the tasks.
When you need files for testing, you can include them in a private VSTS repo.

Conclusion

Resource

Migration tips from IaaS to Web App for Containers on Azure
Fri, 13 Oct 2017 02:17:36 +0000

Web App for Containers is a very good service for migrating from the IaaS world to the fully managed container world. Without your having to think about infrastructure, the PaaS platform provides scaling and DevOps features such as continuous deployment, blue-green deployment, and monitoring. This service is a very good fit for users running an OSS stack on IaaS, and it is very simple. If you need to handle complex situations, you might need an orchestrator like Azure Container Service; but if you don't, you can simply use Docker to pack up your IaaS workload.

I noticed that this use case sits at a crossroads: someone familiar with Web Apps but not with containers might use it, and someone good with containers but new to Web Apps might be interested in this solution too. If you are one of them, you might like this post.

Tips 1. Make sure your container doesn't exit.

My colleague said, "Hey, I created a docker image for my local development environment. It works on my Mac; however, once it is deployed to Web App for Containers, it doesn't seem to work. Why?" The first thing to check is whether your container exits right after docker run. For example, if you start Nginx as a background process in the image, the container starts the nginx daemon internally and then exits: you might specify a startup.sh script in the Dockerfile, and when that script finishes, docker run finishes as well. The same thing happens on Web App for Containers, and you won't find a helpful log in the Web App logs. You can test it on your PC like this:

docker run -d -p 80:80 docker_image_name
docker ps

If you can't find your docker process, your container also exits on Web App for Containers. The solution is to stop the shell script from finishing; you can add this line at the end of your startup.sh:

tail -f /dev/null

The shell then blocks on that line, so the docker container stays alive. But why did it work locally? In this case, they started /bin/bash at the end of startup.sh and ran the docker image like this:

docker run -itd -p 80:80 docker_image_name

-it means interactive mode. The /bin/bash exec'd at the end of startup.sh starts a shell, which means the container keeps running until you exit from it. However, Web App for Containers runs the image without the -itd options. This is a very common gotcha for container users, but someone new to containers can be trapped by it.

Tips 2. How to specify the port number?

When you open the Azure portal, you can't find anywhere to specify the port; the docker run command has a -p option, so how does Web App for Containers decide the port number? The answer is that the port number is automatically detected for you. If that doesn't work, you can specify the port number in App Settings on the portal.

For more detail, please refer to https://docs.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq

The picture is like this: the port number you configure is the port inside the container. Web App for Containers generates a random port number for the outside of the container and exposes it on the server. The load balancer dispatches requests (http/https) to that random port, which routes to the nginx residing in the container. You can see the actual docker run command on the portal under Advanced Tools (Kudu), where you can also find the log output from your bash script.

Tips 3. Chrome for Mac users

If you use a Mac, I recommend Chrome, not Safari. If you open Kudu in Safari, you might not be able to type any letters; Chrome works fine.


Tips 4. When does a container get deployed to Web App for Containers?

We can configure which docker image to deploy; but when does deployment happen? The first time you configure it, the image is automatically deployed to the Web App. If you later update the image in the registry, when will the app pick it up?

I have two answers.

When you stop and then start the server, the web app fetches the new image from the registry; if you just restart the server, it won't.
However, once you configure Continuous Deployment, any change in the registry is deployed automatically.

But what if you want to control the timing of deployment? I'll answer that in the CI/CD section.

Tips 5. Debugging the container

At first, an old-school container user like me might feel that Web App for Containers is very hard to debug, because we can't run docker exec or docker logs against it. So how do we debug the container?

As I said, you can use the Advanced Tools (Kudu). However, you might want to see the logs on your local PC; for that, you can use the Azure CLI. The tool is written in Python; you can install it on your PC, use its docker image, or use it from the console in the Azure portal.

If you want to keep watching the Web App for Containers log, you can use this command.

Tips 6. Use SSH for debugging

For someone coming from the Docker world, enabling sshd in a container is considered bad practice, because it can become a security hole. However, in the Web App for Containers world, it is a best practice; that might surprise you. The reason it is safe here is that you can only ssh into the container from the Web App server, and only one port is exposed. You can ssh from the portal, and you can even execute sudo inside the container, which is very helpful for debugging. To enable it, refer to this document.

Tips 7. Keep Docker image small

People coming from the IaaS world to containers might choose a base docker image like CentOS or Ubuntu. I don't recommend it, because the size is too big.

My customer's image was 1.8 GB with a CentOS base image. Using an Alpine-based image instead brought it to 279 MB; after decomposing the parts and simplifying, it came to around 108 MB. Image size matters: a big image consumes a lot of time to build and deploy. So I recommend building simple, small images on a container-optimized base image like Alpine. Official Docker images such as nginx and php are already optimized; you can refer to their Dockerfiles and learn from them if you want to customize. Also, a docker image should be a small service with a single responsibility; don't run a lot of daemons in a single docker container. Decompose them: it simplifies your Dockerfile and makes it easier to maintain.

If you build and push docker images locally, you might find they consume a lot of disk space. You can clean up with this command, which removes all unused containers, volumes, and images.

$ docker system prune -a

Tips 8. 403 Forbidden on Nginx

You might encounter a 403 Forbidden if you just move your config files over from your current environment. For example, you might see this log.

Web App for Containers' domain is your_web_app_name.azurewebsites.net. If you leave a configuration like the one below in your nginx.conf, the requested host does not match the server_name, which causes the 403.

server {
listen 80;
server_name *.xxxxx.jp;

Tips 9. SSL/TLS is already configured

If you configured SSL in nginx inside your container, just remove it; Web App for Containers takes care of it. A server-side SSL certificate is already installed on Web App for Containers, so you don't need to buy one.

Also, even if you specify WEBSITES_PORT = 443, you can't route SSL directly to your container: the load balancer does the SSL offloading and passes your request to the container as plain http. If you want to use your own server-side certificate, you can; just don't forget to configure the custom domain. And if you want to use client certificates, you can refer to this article.

Tips 10. Blue-green deployment with VSTS

If you use Azure, I recommend Visual Studio Team Services (VSTS) for continuous integration and continuous delivery (CI/CD). Especially if you have a lot of manual build/deploy steps, this service helps a great deal; I really love it. The name sounds like a Visual Studio-specific tool, but it is a great tool for open source docker folks as well. A customer of mine running Ruby in containers deploys their apps to the GCP platform with VSTS, because it is very easy to configure and very fast without having to manage a build agent. With Jenkins, you need to create a build machine somewhere; with VSTS, you don't. VSTS also includes a Git repository and Kanban boards with a great release management feature, and one of its best points is the integration with Azure. If you pack your app into a container, you can create a CI/CD pipeline very easily with decent control.

Once you create a Dockerfile and push it to a Git repo in VSTS, you can configure a Docker build definition very easily: a pipeline that builds the Dockerfile and pushes the image to your registry takes only a few minutes to set up. All you need to do is choose the subscription and registry. It is dead easy! You can use "Hosted Linux" as the agent; VSTS automatically launches a Linux VM with the VSTS agent and docker.

Build Pipeline

You can choose a lot of build pipeline templates

If you try docker, please choose "Hosted Linux Preview" for the agent.

Then choose the environment of yours.

Adding Subscription (Optional)

If you need to access other subscriptions, you can configure a new endpoint: just click the "Manage" link next to the Azure subscription field. All you need is a service principal. If you deploy to your own subscription, you don't need this step.

On the target subscription, you can try this command.

az ad sp create-for-rbac --name {appId} --password "{strong password}"

Then you'll get the service principal: the app (client) ID, password, and tenant ID, plus the subscription ID (you can see it on your Azure portal). Fill in the form with these values; you can test the configuration with the "Verify connection" link.

Release Pipeline

I also came up with a way to handle blue-green deployment using Web App for Containers and VSTS. One difficulty is controlling the timing of the deployment: if we use a deployment slot for blue-green deployment (Web App supports this via deployment slots), when does the Web App pull the image?

I want to control it, but with continuous deployment configured, the app always updates to the latest image. One simple idea: in the release, create a pipeline of Stop WebApp -&gt; Start WebApp -&gt; Manual Intervention -&gt; Swap Slots. Stopping and starting the Web App causes it to pull the new image from your registry, so you don't need to enable the continuous deployment option. With that, you can enable blue-green deployment quite easily.

Conclusion

I have shared ten tips for Web App for Containers. I hope this post helps.

Resource

Giving minimum access privilege using a Service Principal
Mon, 09 Oct 2017 08:28:19 +0000

Sometimes you may want to grant the minimum privilege for some Azure resource. I'd like to explain how to do it, using minimum access for Log Analytics search as the example.

Create a Service Principal

The easiest way is to use Azure CLI 2.0. The following command creates a service principal in Azure; you can choose any name and password for it. Note that a service principal created this way can access almost any resource in your subscription.

Resource

Image uploading with Azure Functions Node.js and Angular 4
Sat, 07 Oct 2017 10:39:23 +0000

In this blog post, I'd like to explain how to upload an image to Azure Blob storage from an Angular 4 SPA via an Azure Functions Node.js HttpTrigger. In this experiment, I use a MacBook Pro with the Azure Functions CLI from the core branch, which means Azure Functions 2.0 with local debugging.

Binary Uploading Strategy

You can choose between two strategies for uploading an image: multipart/form-data or base64 encoding. In this use case, I recommend base64 encoding. If you choose multipart/form-data, you need to parse the multipart body; however, every multipart parser out there is written for Express, not for Azure Functions (e.g. busboy). Even with azure-function-express, you can't do it at the moment: the Azure Functions req object doesn't have the methods the multipart parsers need. If you want to use multipart/form-data with Azure Functions Node.js, you'd have to write the parser yourself.

NOTE (2017/10/9): I tried to write a multipart parser myself; however, I eventually gave up. The current version of Azure Functions (JavaScript) forcefully converts binary into a string. I filed an issue and discussed it, and learned that it is a known issue that they plan to solve in a future version.

Azure Functions Settings for SPA

If you access Azure Functions from an SPA, you might encounter this error:

Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:4200' is therefore not allowed access. The response had HTTP status code 404.

This is a CORS problem: JavaScript running in the browser can't access resources in another domain. To avoid this issue in your local debugging environment, you need to add a CORS setting to your local.settings.json; it is a server-side configuration.
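As an illustration, a minimal local.settings.json with a CORS entry might look like this; the origin below is an example value for the Angular dev server, and the storage connection string is left empty:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": ""
  },
  "Host": {
    "CORS": "http://localhost:4200"
  }
}
```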

Image Upload using Angular 4

The code is very easy. 1. Base64-encode the file with the FileReader readAsDataURL() method once a file is chosen in the input element. 2. Angular 4 then invokes the upload() method; when the "load" event fires, it creates a JSON object as the body and sends it to the server.

After sending the image data to the server, we need to update the image on the screen. Although I have a binding to a Car instance, I didn't map the blob URL directly to the image; instead, I use an image property and 3. pass the base64 image data to it. We could use the URL in blob storage, but since the upload is an asynchronous operation, we would have to wait until it finishes.

Decode base64 image with Azure Functions

Now you can receive an image encoded in base64 via the HttpTrigger of Azure Functions. Let's write the decoding code. A base64-encoded image looks like this, according to RFC 2397; we just need to decode it.
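As a minimal sketch of that decoding step (hypothetical helper name, not the post's exact code), a Node.js function can split the RFC 2397 data URL into its media type and payload, then base64-decode the payload into a Buffer:

```javascript
// Decode an RFC 2397 data URL like "data:image/png;base64,....".
function decodeDataUrl(dataUrl) {
    // Split "data:<mediatype>;base64,<data>" into its two parts.
    const match = /^data:(.+?);base64,(.+)$/.exec(dataUrl);
    if (!match) {
        throw new Error('not a base64 data URL');
    }
    return { mimeType: match[1], data: Buffer.from(match[2], 'base64') };
}

// Round-trip example: encode "hello" as a data URL, then decode it back.
const url = 'data:text/plain;base64,' + Buffer.from('hello').toString('base64');
const decoded = decodeDataUrl(url);
console.log(decoded.mimeType);        // text/plain
console.log(decoded.data.toString()); // hello
```

For an image, you would write the resulting Buffer to the Blob output binding instead of converting it back to a string.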

However, we have a problem: the uploaded image gets a random-number name, and I'd like to specify the filename.

Specify the name binding for images

We can configure binding data at runtime; however, that is only possible in C#, so we need to come up with another strategy. We can't set the filename from our code; instead, we can use the Azure Functions binding feature. Look at the route property: once the function accepts a FileUpload/xxxxx URL, Azure Functions passes the xxxxx to (unknown), and you can use (unknown) in the output bindings as well.

OpenFaaS on Docker Swarm using Docker for Azure

I previously explained how to deploy OpenFaaS on Kubernetes on Azure; however, OpenFaaS supports Swarm as well. What is the easiest way to deploy that? Since OpenFaaS requires Docker CE 17.05+, we can't use Azure Container Service. Instead, we can use Docker for Azure, which is currently the easiest way to deploy a 17.06+ swarm-mode cluster on Azure.

Let's deploy it.

Deploy Docker for Azure

Go to this site; you can deploy the cluster directly. Just click "Deploy Docker Community Edition for Azure (Stable)".

Open the OpenFaaS portal

As you can see, all you need to do is find the externalLoadBalancer in the resource group, then access its URL on port 8080. You can refer to OpenFaaS on ACS (Kubernetes) for the faas-cli setup.