
TL;DR – In my last article, I wrote about the steps it takes to build a Docker image with cross-compiled native libraries for armhf/ARM32/Raspbian/Raspberry Pi and .NET Core 2.0.0 DLLs compiled for linux-arm. However, it involves too many manual steps every time you run your own container. A better practice is to capture those steps in a Dockerfile which you can download.

How to get started?

Run this on your dev machine, not your target Raspberry Pi. This is because the Docker image is based upon Debian 8 (jessie) x86-64 GNU/Linux. This is the environment needed to run the RPiToolchain as well as the .NET Core 2.0.0 SDK.

As a result of the cross-compilation and the .NET Core 2.0.0 build, you get a tarball which you can copy onto your target Raspberry Pi. The cleanest way to do this is to mount a host directory into your container when running it, then copy iot-edge-rpi.tar.gz to /mnt.

docker run -v /home/username:/mnt --name iot-edge-rpi -it iot-edge-rpi

Copy iot-edge-rpi.tar.gz to your target Raspberry Pi. The easiest way is to use scp, e.g. scp /home/username/iot-edge-rpi.tar.gz pi@<raspberry-pi-ip>:~

I have an Intel® NUC Kit DE3815TYKHE and I finally got some time to reinstall the OS on it. I'd also installed an SSD drive which I had lying on the top shelf of my study room. My intention is to install Ubuntu Core on the eMMC and Ubuntu 16.04 on the SSD. Ubuntu Core to try out the snap packages, including the snap package for Azure IoT Edge. Ubuntu 16.04 so that I have a local dev environment, instead of having to constantly spin up my Linux VM in Azure just to try things out.

Upgrade BIOS. At the time of writing this post, the latest BIOS update for this NUC is version 0060.

Open up a shell window. Credit goes to the person who posted on this forum thread; it works like a charm. [Note: anywhere you see "XY" or "X", change that to the correct drive letter ("X") and partition number ("Y") for your Linux root partition. To list your partitions, just run the lsblk command.]
sudo mount /dev/sdXY /mnt
for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt$i; done
sudo chroot /mnt
grub-install /dev/sdX
update-grub
exit
for i in /sys /proc /dev/pts /dev; do sudo umount /mnt$i; done
sudo umount /mnt

I turned off UEFI boot, and just stuck to Legacy boot in the BIOS. Works for me.

From now on, I'll always try to start my technical articles with a tl;dr to summarise a lengthy post.

TL;DR – Azure IoT Edge is a project which enables edge processing and analytics in IoT solutions. The modules within the IoT Edge gateway can be written in different programming languages (native C, as well as different module language bindings available such as Node.js, .NET, .NET Core and Java) and can run on platforms such as Windows and various Linux distros. As part of what I do in my day job, I work with customers and partners as they build their edge modules. One of the key asks is to be able to write modules in .NET Core and deploy them on a Raspberry Pi, due to its ease of use and popularity for PoC and prototyping purposes. This article explains how to run modules written for .NET Core within the same Azure IoT Edge framework.

This post is timely, as the .NET Core engineering team announced less than a week ago that .NET Core Runtime ARM32 builds are now available. More details about this announcement are available here. Please do take note that these builds are community-supported, not yet supported by Microsoft, and have a preview status. For prototyping purposes, I wouldn't complain too much about this. My plan was to get the existing modules for .NET Core cross-compiled on my dev machine for linux-arm against the .NET Core 2.0.0 runtime. This cannot be done on Raspbian itself because only the .NET Core runtime is available there, not the SDK, and there are native runtime shared libraries for Raspbian which must be cross-compiled. I will demonstrate how you can pull a Docker container with the right cmake toolchain to cross-compile for Raspbian.

According to the documentation in the .NET Core Sample for Azure IoT Edge, the current version of the .NET Core binding and sample modules was written and tested against .NET Core v1.1.1. However, I have also verified that this works with .NET Core 2.0.0.

Cross-compiling Azure IoT Edge native runtime for ARM32

If you do a search on NuGet, you will find a number of NuGet packages for Azure IoT Edge which contain native runtime libraries for several platforms, namely Windows x64, Ubuntu 16.04 LTS x64, Debian 8 x64 and .NET Standard. How about Raspbian, which is a flavour of Debian 8 on ARM32? Instead of waiting for this to be released, you can cross-compile the runtime libraries yourself. After all, this is one of the benefits of the open-source nature of the Azure IoT Edge project.

Within the jenkins folder of the Azure IoT Edge repo, there is a build script just for cross-compiling for Raspberry Pi. This script is called raspberrypi_c.sh, and it emits a cmake toolchain file called toolchain-rpi.cmake. To make things simpler, the Azure IoT engineering team has created a Docker image. There is no guarantee that this image will be kept on Docker Hub at all times, but for now it is, and you can find it here. There are heaps of other Docker images available as well.
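For reference, a cross-compile toolchain file for Raspbian typically looks something like the sketch below. The compiler and sysroot paths here are illustrative and depend on where the Raspberry Pi tools are extracted; the generated toolchain-rpi.cmake may differ in detail.

```cmake
SET(CMAKE_SYSTEM_NAME Linux)
SET(CMAKE_SYSTEM_VERSION 1)
# Paths below assume the Raspberry Pi tools sit under /root/RPiTools (illustrative)
SET(CMAKE_C_COMPILER /root/RPiTools/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-gcc)
SET(CMAKE_CXX_COMPILER /root/RPiTools/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf-g++)
SET(CMAKE_FIND_ROOT_PATH /root/RPiTools/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/arm-linux-gnueabihf/sysroot)
# Search for programs on the host, but libraries and headers only in the target sysroot
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

This is what cmake consumes via -DCMAKE_TOOLCHAIN_FILE=toolchain-rpi.cmake so that the native libraries come out as ARM32 binaries.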

Note: .NET Standard is a specification for the .NET APIs which form the base class libraries (BCL). In the original project, .NET Standard 1.3 is specified: .NET Core 1.0 implements .NET Standard 1.3, which means it exposes all APIs defined in .NET Standard versions 1.0 through 1.3. More information about this here.

ii. Comment out <NetStandardImplicitPackageVersion> like the line listed here.

You can now publish specifically for the linux-arm runtime. In the shell script tools/build_dotnet_core.sh, add a -r flag to the dotnet commands, e.g. change dotnet publish to dotnet publish -r linux-arm.

7. Now it is time to cross-compile the native runtime library for Raspbian 8 ARM32. Create a symbolic link to the RPiTools in /home/jenkins, because the raspberrypi_c.sh script expects RPiTools in the home directory, which for the root user is /root:

ln -sd /home/jenkins/RPiTools/ /root/RPiTools

8. Run

chmod +x ./jenkins/raspberrypi_c.sh
./jenkins/raspberrypi_c.sh

Note: If you encounter cmake errors, just delete the install-dep directory and re-run the shell script above.

This creates a toolchain file at ./toolchain-rpi.cmake

9. Now it is time to build Azure IoT Edge with the .NET Core binding targeting 2.0.0, using the cmake toolchain file.

11. Now it is worthwhile to commit the changes you have made in the Docker container to a new image which is properly tagged/labelled. You should also copy the entire iot-edge folder out of this Docker container as a tarball, and move it to your Raspberry Pi device. The steps required to do this are outside the scope of this tutorial.

12. You can check out the Azure IoT Edge samples from its GitHub repo.

13. I tried out the simulated_ble sample within the dotnetcore folder. Please make sure that you have the .NET Core 2.0.0 runtime installed on your Raspberry Pi device. I added this within the loaders section in my gw-config.json file. Actually I'm not too sure if this is how it works in Linux, but I was paranoid anyhow. The right way is to export the directory in the LD_LIBRARY_PATH environment variable, I think. 🙂

14. To be even more paranoid, I copied all the native runtime libraries (*.so) and DLLs for the .NET Core binding modules into my execution folder. I also copied and renamed a native gateway host as gw and placed it within the same folder.

TL;DR – Ingesting telemetry data is nothing new in the industrial IoT world. Typically, captured data is stored in a historian, though not all "historised tags" make it into the historian, and that data infrastructure lives on-premises. In order to do advanced analytics, leading to machine learning and on to AI, the first step is to ingest telemetry from a larger variety of data sources into the cloud, which opens up interesting stream processing and analytics in the cloud. This post talks about the guts of a connected factory, and how to bridge with existing components and systems in a connected factory.

I posted this on LinkedIn 3 months ago. The setup was for the Azure IoT Suite Connected Factory pre-configured solution. I have been meaning to publish this, and now's the time. The integral parts of a connected factory are connected telemetry stations, for which OPC UA is the standard for industrial IoT machines and systems on your plant floor.

This post is about streaming the telemetry data from SCADA systems or MES to the cloud, with Azure IoT Hub being the cloud gateway for ingestion and for maintaining the digital twins of these physical systems. The component which allows this integration is the OPC UA Publisher for Azure IoT Edge.

This reference implementation demonstrates how Azure IoT Edge can be used to connect to existing OPC UA servers and publish JSON-encoded telemetry data from these servers in OPC UA "Pub/Sub" format (using a JSON payload) to Azure IoT Hub. All transport protocols supported by Azure IoT Edge can be used, i.e. HTTPS, AMQP and MQTT (the default).

The target deployment environment is a Process Control Network (PCN) in which the target OPC UA Server lives. My target environment is made up of Windows Server 2016 virtual machines in an on-premises data centre. Azure IoT Edge modules are packaged into a Docker container, and the current requirement for Docker images on Windows is that they have to be Windows-based images. Prior to this, the Dockerfile recipe for building a Docker image and running the container was Linux-only, which is great for many purposes, except for my target PCN environment. I made a pull request in the GitHub repo for the OPC UA Publisher with my contribution of a Dockerfile.Windows, which uses a Windows NanoServer image with the right version of .NET Core upon which this project depends. The pull request was accepted and merged by the engineering team behind this project, and they improved the recipe based on the new Azure IoT Edge architecture. All within the spirit of open source and making contributions back to the community.
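The shape of such a Dockerfile.Windows is roughly the following sketch; the base image tag, paths and assembly name here are illustrative, not the merged recipe.

```dockerfile
# Windows NanoServer base image carrying the .NET Core runtime the project depends on (tag illustrative)
FROM microsoft/dotnet:2.0-runtime-nanoserver-1709
WORKDIR /app
# Copy the published OPC UA Publisher output into the image (path and DLL name illustrative)
COPY ./publish .
ENTRYPOINT ["dotnet", "Opc.Ua.Publisher.dll"]
```

Built with docker build -f Dockerfile.Windows on a Windows host, this yields a Windows container you can run inside the PCN.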

To test this out, I

Created a Windows Server 2016 VM in Azure with Docker extension enabled

Note: It is not a requirement to run the OPC UA Publisher for Azure IoT Edge within a Docker container. However, doing so makes it easier to deploy your IoT Edge modules on your field gateway, and it allows you to perform orchestration of your containers thereafter. I do know of certain industrial IoT vendors who deploy the bits directly onto their specialised hardware without the need for a Docker container.

Note: If you are using Zscaler and you encounter issues with dotnet restore while building your Docker image, this is likely due to a certificate trust issue between the Docker container and Zscaler. Just alter the Dockerfile to add the Zscaler certificate to the Docker container's Trusted Root certificates, and that will fix the error.
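For a Linux-based image, that Dockerfile change amounts to copying the certificate in and refreshing the trusted store before any network-bound build step; the certificate file name here is illustrative.

```dockerfile
# Bake the corporate (e.g. Zscaler) root certificate into the image before dotnet restore runs
COPY zscaler-root.crt /usr/local/share/ca-certificates/zscaler-root.crt
RUN update-ca-certificates
```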

Once you have the Docker container running with the OPC UA Publisher for Azure IoT Edge, what next? The next logical step is to publish your OPC UA Server nodes. You can add more nodes to publishednodes.json after running the OPC UA Publisher module. To do so, you need an OPC UA client to connect to the Azure IoT Edge OPC UA Publisher module on its exposed endpoint on port 62222, and publish a node.

You can expose this port when you run the Docker container by adding a port mapping such as -p 62222:62222 to your docker run command.

If you have a simple OPC UA client, you can use that to connect to this endpoint. I used the sample .NET Core Client, and I could see that the exposed OPC UA TCP endpoint allows you to add more nodes, create a subscription, etc. However, it did not allow me to invoke methods on the nodes; I reckon I need a full .NET application client to do so.

C:\UA-.NETStandardLibrary\SampleApplications\Samples\NetCoreConsoleClient>dotnet run opc.tcp://winozfactory:62222/UA/Publisher

Using the UA Sample Client, I am able to connect to the Azure IoT Edge OPC UA Publisher endpoint of opc.tcp://winozfactory:62222/UA/Publisher.

Then go to Objects->PublisherInstance, and call the PublishNode item. You need to provide the NodeID and the ServerEndpointURL as arguments. In my case, I want to subscribe to the simulated value in my UA Sample Server, so I provided a node ID of ns=5;i=40, and server endpoint URL of opc.tcp://winozfactory:51210/UA/SampleServer

Voila! The OPC UA Publisher updated publishednodes.json without a restart.
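Using the node from my example, the entry added to publishednodes.json looks roughly like this; the schema has evolved across versions of the OPC UA Publisher, so treat the field names as indicative.

```json
[
  {
    "EndpointUrl": "opc.tcp://winozfactory:51210/UA/SampleServer",
    "NodeId": {
      "Identifier": "ns=5;i=40"
    }
  }
]
```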

To prove that telemetry is being ingested into Azure IoT Hub, use Device Explorer. Monitor the device whose connection string you used when you started the OPC UA Publisher, and you will see the telemetry serialised into JSON.

Once the telemetry stream lands in Azure IoT Hub, the sky's the limit, as both your data-in-motion and data-at-rest are in the cloud. The next step is to hook this up to Azure Time Series Insights, among the many other options you have as part of implementing a Lambda architecture using Azure first-party or third-party services. We continue to add more features to Time Series Insights, such as root cause analysis and time exploration updates. Read the article here, but here's an excerpt:

“We’ve heard a lot of feedback from our manufacturing, and oil and gas customers that they are using Time Series Insights to help them conduct root cause analysis and investigations, but it’s been difficult for them to quickly pinpoint statistically significant patterns in their data. To make this process more efficient, we’ve added a feature that proactively surfaces the most statistically significant patterns in a selected data region. This relieves users from having to look at thousands of events to understand what patterns most warrant their time and energy. Further, we have made it easy to then jump directly into these statistically significant patterns to continue conducting an analysis.”

In my previous post, I shared a workaround for sharing an Internet connection via ICS when the option is disabled by domain group policy. I have since learned that there is an easier option to share the Internet connection of your Wi-Fi adapter with devices connected to your Ethernet adapter, like a Raspberry Pi running Windows 10 IoT Core. Here are the steps:

Open Network and Sharing Center.

Change adapter settings.

Select both your Wi-Fi and Ethernet adapter.

Right-click and select the option to bridge the connections.

Make sure that the Internet Protocol Version 4 (TCP/IPv4) properties are set to “Obtain an IP address automatically”.

You just got yourself a Raspberry Pi 2 (RPi 2). You could be running Raspbian or Windows 10 IoT Core. You don’t have access to a hub/switch/router to connect the RPi 2 for Internet connection. The next best solution is by connecting the RPi 2 to your PC via Ethernet and sharing your Wi-Fi’s internet connection via Internet Connection Sharing (ICS). When you go to the Wi-Fi adapter properties, you got some bad news:

What do you do? Here's a workaround which is definitely NOT endorsed by your friendly network administrator, but it works. NOTE: This workaround is NOT permanent, and it is not meant to flout your network administrator's group policy; those rules exist for good reasons, security among them.

To enable sharing on the WiFi adapter, run the following command in a Command Prompt run as Administrator.

Go back to the Wi-Fi adapter properties, and you will now see the Sharing tab. In case you don't see the Sharing tab, this could be because you have not connected your Ethernet adapter (for those that come as a USB dongle). You need at least two network adapters present in order to do ICS. Check the box that says "Allow other network users to connect through this computer's Internet connection".

Go to your Ethernet adapter properties. Check out the Internet Protocol Version 4 (TCP/IPv4) Properties. You will see the following preconfigured for you. Do not change these settings.

When you start up your Windows 10 IoT Core on your Raspberry Pi 2, you will see that the IP address is dynamically set to an IP address like 192.168.137.2. Voila, this means that you have Internet connection shared with your RPi 2.

7. Follow the PowerShell documentation here to use PowerShell to connect to your running device. You can also follow the instructions here to use SSH to connect to your device.

Configure remote machine IP as 192.168.137.2, or any other IP address which you got from Step 6. Run your project.

Check Device Explorer to confirm the event has been received at the IoT Hub.

Finally, a word of caution. If you don’t see ICS sharing available in your Wi-Fi adapter settings anymore, this is because the group policy has been re-applied to your machine. That’s ok, it’s meant to protect your machine after all. When you need to enable ICS for another instance, just re-do the steps above.

I really like Azure Logic Apps. It reminds me of the good old days of workflows in WF, except that this is meant for simple workflow logic; still, it does the trick. I particularly like the FTP Connector and the Azure BLOB connector. Because the trigger function is not yet implemented in the Azure BLOB connector, I found a workaround, which was to use the FTP Connector as the trigger, then use the Blob Connector as an action. But in this particular IoT scenario, it is hardly a workaround; it's a necessity, because the "thing" can only upload my payload to an FTP server or send an email with the payload as an attachment. More about the IoT scenario I am working on in a later post.

For the past few days, I’d been stuck looking at this one error. When I clicked on the “Output Links” in my trigger history, this highly elusive message was shown:

“message”: “Unable to connect to the Server. The remote server returned an error: 227 Entering Passive Mode (104,43,19,174,193,60).\r\n.”

This is super weird because "227 Entering Passive Mode" is hardly an error; it's a valid FTP status message for passive mode. So why is this an error? Before jumping to the conclusion that this is a bug in the FTP connector, I tried 3 different options for running an FTP server in Azure:
1. Azure websites
2. A Linux VM running vsftpd.
3. A Windows Server 2012 R2 Datacenter running FTP Server

I tried all of the above in that order. Only (1) worked, but my “thing” could not upload the event data into an Azure website FTP server. I wasn’t even going to try to fix what’s in my “thing” because this “thing” is pretty much a blackbox, if you will. It’s white actually but you get what I mean. So I tried options (2) and (3).

After trying out all kinds of configurations and creating incoming rules and Azure endpoints to allow traffic, it finally dawned on me. If I want to make a VM instance reachable across a range of ports, I have to specify the ports by adding them as endpoints in the Azure management portal. The easier solution, however, is to enable an instance-level public IP address for that VM instance. With this configuration I can communicate directly with my VM instance using the public IP address. The advantage of doing so is that I can immediately enable a passive FTP server, because "passive" essentially means that the server can choose data ports dynamically.

These port ranges can be huge, so I don't think you would want to specify/add 1001 port numbers as endpoints on your VM instance. While it costs to have an instance-level public IP address, it's well worth the money. Trust me.
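With vsftpd (option 2 above), for example, pinning down the passive behaviour comes down to a few lines in vsftpd.conf. The range below is exactly those 1001 ports, and the advertised address would be the instance-level public IP; treat the values as illustrative.

```
pasv_enable=YES
pasv_min_port=10000
pasv_max_port=11000
# advertise the instance-level public IP in the 227 response
pasv_address=104.43.19.174
```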

It's been a while since I blogged, and the embarrassing part is that my blog has been down and I haven't really had the time to troubleshoot it. I did what anyone would do: searched online, which pointed me to a few posts on MSDN and Stack Overflow, but nothing really did it for me. So in case you found this post through the same searches, this could be the solution for you.

If you get an HTTP 500 error, you should FTP into your Azure website deployment to figure out the exact error. Here's where you can find your HTTP detailed errors:

Just click on the file with the latest timestamp.

If your error looks like the following:

That means you have enabled Python in your Azure website, which you do not need because WordPress runs on PHP.

Go to your Azure website configuration and disable it like shown below:

As I pump telemetry data from my Raspberry Pi, I could see my CSV file created/updated. Just go to the container view in your BLOB storage, and download the CSV file.

Below is what my event data stream looks like. It shows event data points captured from two Raspberry Pis, one using the MPL3115 temperature sensor (part of the Xtrinsic sensor board), and another using the MCP9808 temperature sensor. The fun begins as I can write some funky transformation logic in the query and do some real-time complex event processing.
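As a taste of that, a Stream Analytics query that averages temperature per device over tumbling windows could look like the following; the input/output names and the timestamp field are whatever you configured in your own job, not fixed names.

```
SELECT
    deviceid,
    AVG(temperature) AS avgtemperature,
    System.Timestamp AS windowend
INTO
    csvoutput
FROM
    eventhubinput TIMESTAMP BY eventtime
GROUP BY
    deviceid,
    TumblingWindow(second, 30)
```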

This is a follow-up to my previous post about sending accurate temperature data from the MCP9808 temperature sensor board to an Azure Event Hub. This is done through the MCP9808 Python library provided by Adafruit, alongside the one I repurposed from the Xtrinsic sensor board; this is the updated version. Inside the ~/Adafruit_Python_MCP9808/examples directory, I made a copy of the simpletest.py script as send2eventhub.py.
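For context, the interesting part of reading the MCP9808 is converting the raw 16-bit ambient temperature register into degrees Celsius. The Adafruit library does this for you, but a minimal sketch of the conversion (per the MCP9808 datasheet: 0.0625 °C per LSB, two's-complement sign bit) looks like this:

```python
def mcp9808_raw_to_celsius(raw):
    """Convert the MCP9808 16-bit ambient temperature register to Celsius.

    raw is the big-endian register value; the low 12 bits are the magnitude
    at 0.0625 C per LSB, bit 12 is the sign, and the top bits are alert flags.
    """
    temp = (raw & 0x0FFF) / 16.0   # 12-bit magnitude, 0.0625 C per LSB
    if raw & 0x1000:               # sign bit set: reading is below 0 C
        temp -= 256.0
    return temp
```

In the Adafruit library this logic lives inside readTempC(); the sketch above is only to show what the sensor actually hands you over I2C.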

The event hub client is no different from the one I described in my previous post for sending data from the Xtrinsic sensor board. The event data is sent to the event hub via its REST API. It's pretty simple.
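For anyone curious what "sent via its REST API" amounts to, here is a minimal stdlib-only sketch: build a SAS token for the Event Hub endpoint, then POST the JSON event with it. The namespace, hub and key names are placeholders, not my actual setup.

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.parse
import urllib.request

def make_sas_token(uri, key_name, key, ttl_seconds=3600):
    """Build a Service Bus / Event Hubs SAS token for the given resource URI."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(uri)
    # The string to sign is the URL-encoded URI, a newline, and the expiry
    sign_data = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), sign_data, hashlib.sha256).digest()
    )
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )

def send_event(namespace, hub, key_name, key, payload):
    """POST one JSON event to the Event Hub REST endpoint; returns the HTTP status."""
    uri = "https://{}.servicebus.windows.net/{}/messages".format(namespace, hub)
    req = urllib.request.Request(
        uri,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": make_sas_token(uri, key_name, key),
            "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 201 Created on success
```

A call like send_event("mynamespace", "myhub", "send", "<key>", {"temperature": 25.0}) is all send2eventhub.py needs per reading.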

Then, to make sure that the event data is sent correctly and can be consumed from the event hub, I made use of Azure Stream Analytics, for which setting the event hub as an input is the simplest verification. Otherwise you would have to write code to either receive events directly from the event hub or use the EventProcessorHost, as I did in my scalable event hub processor in an Azure worker role.

Since this is a separate topic by itself, I will write another post about how to create a new Stream Analytics job which adds an event hub as the data stream from which event data will be consumed and transformed.