To Use or Not Use Blockchain

Just say the word “blockchain” and you’re likely to get mixed reactions from all kinds of people. Understanding blockchain and how it works at a conceptual level is key to understanding what uses it has and why one would want to use it.

A blockchain is fundamentally a ledger – that is, a list of records arranged in chronological order. Data in the list is stored in “blocks” of data. Each block in the list is inextricably linked, or “chained,” to the previous records, such that modifying any record in the list will corrupt the entire list. Naturally, the chained blocks are where the name blockchain comes from. This has a few implications. Data on the blockchain can only be created and read; it cannot be deleted or updated as with other forms of data storage. Technically, data can be “changed” by creating new data that supersedes the old data, but the old data itself is never modified or removed.
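
To make the chaining idea concrete, here is a small, illustrative C# sketch (not part of the original post) showing how each block can embed the hash of the previous block, so tampering with any earlier record breaks everything after it:

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal illustration of hash-chained blocks; real blockchains add
// consensus, signatures, Merkle trees, and much more.
class Block
{
    public string Data { get; }
    public string PreviousHash { get; }
    public string Hash { get; }

    public Block(string data, string previousHash)
    {
        Data = data;
        PreviousHash = previousHash;
        // The block's hash covers its own data AND the previous block's hash,
        // which is what "chains" the records together.
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(data + previousHash));
            Hash = BitConverter.ToString(bytes).Replace("-", "");
        }
    }
}

class Program
{
    static void Main()
    {
        var genesis = new Block("genesis", "0");
        var second = new Block("record 2", genesis.Hash);
        var third = new Block("record 3", second.Hash);
        // If "record 2" were recomputed with different data, its hash would change,
        // so third.PreviousHash would no longer match and the chain would be corrupt.
        Console.WriteLine(third.PreviousHash == second.Hash); // True
    }
}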

Blockchains are also distributed, meaning that the data is not centrally stored. In a manner of speaking, all the data on the blockchain is communally owned by the participants of the blockchain. Each participant in the system is a node on the network. When new data is added to the blockchain, it goes through a consensus algorithm that is used to achieve agreement among the nodes on the network about the new addition. There are numerous consensus algorithms with different tradeoffs, but they all fundamentally have the same responsibility. Blockchains are sometimes referred to as “distributed ledgers,” a term that captures both how the storage is shared and the storage mechanism itself.

Immutable, distributed data is what makes perhaps the biggest motivator for using blockchains possible: eliminating the middle man. Blockchains enable two parties to exchange data directly rather than involving a trusted third party to broker the relationship. If data can be shared in a way that doesn’t require that third party, then a blockchain might work. The beauty of blockchains, too, is that by design they ensure all involved parties reach agreement about what new data is added to the blockchain.

Blockchains are being used for all kinds of applications, from the original use case of cryptocurrencies to more niche use cases like real estate and supply chain management. These solutions all take advantage of the immutable, distributed ledger to make things happen. For a given use case, choosing to use or not use a blockchain is not always clear. Moreover, a blockchain is rarely, if ever, the only data store used in a given application. Rather, it is often used alongside other data stores like traditional RDBMSs.

Choosing whether to use a blockchain, though, is not as hard as it may seem. Fundamentally, it comes down to whether you need to share data with another party without the need or desire for a trusted third party, and then working through a few possible show stoppers, as in the flow chart below.

Suppose a real estate consortium composed of brokers, agents, lenders, lawyers, and government agencies wanted to create a blockchain to handle real estate transactions. Such a blockchain would provide a convenient way to share data among all the involved parties without the need for a centralized system to manage it all. Moreover, the nature of the blockchain means it can store contract states and document signings as a transaction progresses from listing, to negotiations, contracts, deed transfers, and finally to the final sale.

According to the flow chart, the real estate consortium would satisfy the first two decisions: there is a need to share data and no need for a centralized third party. In fact, the consortium would probably prefer not to have such a controlling faction, yet still be able to validate transactions at every point. Per the third decision, real estate transactions are subject to compliance requirements; however, the kind of data that would be stored on the blockchain is the same sort of data that would be part of public records. Other, more private data should be tokenized and stored in private data systems. As for the last decision, the data does not need constant updates, but it does constantly need new data added. This makes real estate an ideal use case.

Another common use case would be electronic voting. The idea of using a blockchain lends itself well to voting: no one party controls the data, it’s immutable, it’s verifiable, and it’s shared. Voting quickly satisfies the first two decisions. Compliance is an issue, since voting must remain anonymous. Blockchain does not provide a good solution for identity, so this would have to be handled externally. However, because the data on the blockchain is anonymous, it still makes sense to use a blockchain in this case. Lastly, once a vote is cast, it doesn’t change, so voting becomes another real-world application for blockchain.

To that end, blockchain makes a lot of sense in some cases, but it is not the silver bullet it is often sold as. Before diving in head first, one would do well to understand the problem he or she is trying to solve, how blockchain works, and whether blockchain is the right technology choice for the job.

Securing Docker Containers with a Web Application Firewall (WAF) Built on ModSecurity and NGINX

One can never be too paranoid about online security, for a number of reasons. Containers are generally considered more secure by default than virtual machines because they substantially reduce the attack surface for a given application and its supporting infrastructure. This does not mean, however, that one should not be vigilant about securing containers. In addition to following secure practices for mitigating security risks within containers, those who use them should also apply edge security to protect them. Most applications deployed into containers are in some way connected to the internet, with ports exposed and so on. Traditionally, applications are secured with edge devices such as Unified Threat Management (UTM) appliances, which provide a suite of protection services including application protection. The nature of containers, though, makes using a UTM harder, because container loads are portable and elastic. Likewise, container loads are increasingly being shifted to the cloud.

A Web Application Firewall (WAF) is a purpose-built firewall designed to protect against attacks common to web apps; it does not provide the lower-level network security found in traditional firewalls. One of the most widely used WAFs is ModSecurity. Originally written as a module for the Apache web server, it has since been ported to NGINX and IIS. ModSecurity protects against attacks with features such as:

SQL Injection protection

Ensuring the content type matches the body data.

Protection against malformed POST requests.

HTTP Protocol Protection

Real-time Blacklist Lookups

HTTP Denial of Service Protections

Generic Web Attack Protection

Error Detection and Hiding

NGINX, though, is more than merely a web server. It can also act as a load balancer and reverse proxy, and perform SSL offloading. Combined with ModSecurity, it has all the features of a full-blown WAF. The NGINX/ModSecurity WAF has traditionally been deployed on VMs and bare-metal servers; however, it can also be containerized. Using NGINX/ModSecurity in a container means that a container itself can be a WAF and carry with it all the advantages of containers. Likewise, it can scale and deploy alongside container loads in both on-premises and cloud-based solutions in a way that VMs and physical firewalls cannot.

The container for the WAF will usually have ports exposed to an external network, such as the container host’s network, and will then reverse proxy web services running in containers on an isolated, private container network. This gives a great deal of flexibility and security to the environment the WAF is protecting, given that isolated networks can span cluster environments like those on Docker Swarm.

WAF Container Network

This WAF container can be run as a service to create redundancy on the latest Docker Engine with multiple nodes. Docker’s internal load balancer allows multiple containers to be exposed on the same external port when they are defined as a service.
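
For illustration, assuming the mywaf image built later in this post, such a replicated service could be created with something like:

docker service create --name waf --replicas 3 --publish 80:80 mywaf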

The Dockerfile and script herein build NGINX and ModSecurity from their sources inside a container, then upload three config files. These files are configured with the default settings.

nginx.conf – This is the NGINX configuration file that contains the directives for load balancing and reverse proxying.

Line 44 starts the section about enabling and disabling ModSecurity

Line 52 starts the section that configures the reverse proxy. For Docker, this will usually reference the name of the application container that the WAF is fronting.

Line 53 contains the internal URL that nginx is proxying.
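
For orientation, the relevant nginx.conf directives look roughly like the following sketch; the backend name app, port 3000, and the file paths are assumptions for illustration rather than the original file:

# Illustrative excerpt from the server block of nginx.conf
modsecurity on;                                         # enable ModSecurity for this server
modsecurity_rules_file /usr/local/nginx/conf/modsecurity.conf;

location / {
    proxy_pass http://app:3000;                         # internal URL of the proxied container
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}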

modsecurity.conf – this contains the configuration for ModSecurity, along with some defaults and exclusions for the rules ModSecurity uses. Most everything in the modsecurity.conf file can be left as is.

Line 230 starts the configuration of the rules.

The rules are downloaded and installed (/usr/local/nginx/conf/rules) when the container is built. Individual rules can be disabled or enabled, or they can all be enabled.

crs-setup.conf – this configures the rules used by ModSecurity. The file has integrated documentation. Reading through this file explains what the settings are for. For more information about crs-setup.conf, visit OWASP’s website.

Using the Dockerfile is simple. Change directories to the location of the Dockerfile and build the image.

docker build --tag mywaf .

Then run it.

docker run --name my-container-name -p 80:80 mywaf

This creates and starts the container.

The image can also be used with Docker Compose. The docker-compose.yml is a simple example that deploys a simple Node application along with the WAF. Change directories to the location of the compose file, then run:

docker-compose up
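
A compose file of that shape might look roughly like this sketch (the Node image, internal network name, and the assumption that the compose file sits next to the Dockerfile are illustrative, not the original file):

version: "3"
services:
  waf:
    build: .                # the NGINX/ModSecurity WAF image from the Dockerfile above
    ports:
      - "80:80"             # only the WAF is exposed externally
    networks:
      - appnet
  app:
    image: node:10          # hypothetical backend; the WAF proxies to it by service name
    networks:
      - appnet              # private network; no ports published to the host
networks:
  appnet: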

Use with Kubernetes

It is possible to use the WAF with Kubernetes too. In short, you create a Deployment and a LoadBalancer Service for the WAF, then use the WAF to connect to your application, which runs in its own Deployment behind a ClusterIP Service. Reference the kube.yml file in the code for specifics.
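
A rough sketch of that shape might look like the following (names, labels, and replica counts are illustrative, and the application's own Deployment and ClusterIP Service are omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: waf
spec:
  replicas: 2
  selector:
    matchLabels:
      app: waf
  template:
    metadata:
      labels:
        app: waf
    spec:
      containers:
      - name: waf
        image: mywaf          # illustrative; push the image to a registry your cluster can reach
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: waf
spec:
  type: LoadBalancer          # exposes the WAF externally
  selector:
    app: waf
  ports:
  - port: 80
    targetPort: 80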

Then use kubectl to deploy the kube.yml file to your Kubernetes environment.

kubectl create -f kube.yml

ESXi Server Build from an EliteBook 8570w

I recently got the itch to build a new machine – not for gaming, rather for virtualization. I already have a gaming rig: an ASUS ROG GL551 (i7 6700HQ, 32GB RAM, and 2x SSD) that runs all the titles I like to play. My goal was to build an ESXi hypervisor that wouldn’t break the bank. I was aiming for a system with a hyperthreaded quad-core CPU, 32GB of RAM, two storage drives with decent capacity, and multiple NICs.

My first hunch was to go out on eBay, buy an old workstation or server, and use that. I found an old server that might have done the trick, an HP ProLiant DL380 G6, and even ordered it. But the server arrived without some of the advertised components, so that guy went back. After sending the server back, I began to ponder: would it be possible to build one out of a laptop and supplement it with some new or used parts? I sold off my last desktop years ago and have been using only laptops for some time now, so I have a bin full of laptop parts.

I made an inventory of what I did have on hand that might be useful.

HP EliteBook 8570w Chassis

HP EliteBook 8570w Motherboard with 2 Memory Slots

NVIDIA Quadro K2000M

Intel i7 3630QM

16GB (2×8) PC3-12800 DDR3 RAM

2TB 5400RPM 2.5” HDD

3TB 7200RPM 3.5” HDD

128GB M.2 SSD

2x USB3 1Gbps Ethernet Adapters

1 USB/eSATA Drive Enclosure

After some digging, I learned that the i7 3630QM could support a maximum of 32GB of RAM. I also learned that HP made a trim of the 8570w that could support 32GB of RAM, so long as the CPU supported it and the motherboard had 4 RAM slots.

The first order of business was to verify whether ESXi would even install on an 8570w. ESXi is not designed to run on laptops or even most consumer-grade components — but that doesn’t mean it won’t run, only that it’s not supported. In any case, I pieced together a system with the parts and, sure enough, ESXi installed without a hitch and I was able to create a couple of VMs without any problems.

Next, I found a used motherboard with 4 RAM slots on eBay for $45 and bought it. Unfortunately, it was broken when I got it, so again, that went back to the seller. I found another one for about $50 and bought it, and it was in prime condition. While I was at it, I bought 2 more 8GB SODIMMS too. I installed the motherboard with the 4 RAM slots and 32 GB of RAM and it worked perfectly.

With this now installed, I needed storage. I already had a couple of HDDs that would do the trick, but I needed one more to hit the storage threshold I wanted. I bought a 4TB internal drive and an optical-drive-to-HDD caddy, then installed the 2TB HDD, 4TB HDD, and 128GB M.2 SSD inside the box. So now I had a laptop with a quad-core i7, 32GB of RAM, an NVIDIA Quadro K2000M, and 3 internal drives for a total of 6.1TB of storage… not too shabby by any estimation. 😊

With the base system built, I added some external devices to round it out. I bought an eSATA cable to go with my drive enclosure, installed my 3TB HDD in the enclosure, and plugged it into the machine’s eSATA port, bringing the total storage to 9.1TB. Using the eSATA port freed up the USB3 ports so I could use the USB NICs I had. Combined with the onboard 1Gbps port, the machine now had 3x 1Gbps ports and was ready to rock. After installing ESXi 6.7 on the box, I added drivers for the USB NICs, which ESXi doesn’t support out of the box, thanks to VirtualGhetto. Now I had what I set out to build: a machine I could use for virtualization.

So the final build looks like this:

HP EliteBook 8570w Chassis

HP EliteBook 8570w Motherboard with 4 Memory Slots

Intel i7 3630QM

NVIDIA Quadro K2000M

32GB (4×8) PC3-12800 DDR3 RAM

2TB 5400RPM 2.5” HDD

3TB 7200RPM 3.5” HDD (eSATA)

4TB 7200RPM 2.5” HDD

128GB M.2 SSD

3x 1Gbps Ethernet ports

VMWare

How To Remove All Resources in a Resource Group Without Removing the Group On Azure

Sometimes, I’ve needed to remove all the resources in a resource group without actually removing the resource group itself. In the Azure Portal, it’s not possible to do this other than by selecting each resource individually and deleting it. But now that Cloud Shell has reached general availability, you can easily use it to accomplish the same thing. It’s really quite simple.

Create a file on your computer called “removeall.json” and paste in the following contents.
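
A minimal empty ARM template for this purpose looks like the following:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}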

What this does is deploy an empty ARM template to the resource group in Complete mode. In Complete mode, Resource Manager deletes any existing resources in the resource group that are not defined in the template and deploys only what the template defines. Since the ARM template you deployed is empty, it simply removes the existing resources without deploying anything new, and voila, you have an empty resource group without having to recreate it.
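
From Cloud Shell, the deployment can be run with the Azure CLI; for example, with a recent CLI (the resource group name is a placeholder):

az deployment group create --resource-group myResourceGroup --template-file removeall.json --mode Complete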

A Raspberry Pi Motion Detector with Azure Integration

After my webinar last week, lots of people have been asking how I did the motion detection demo, in which I used a Raspberry Pi based motion detector built on Azure IoT Hub and, more broadly, other Azure resources. The demo was a little cheeky because I used a princess castle, a toy bear, and a toy police officer as models to demo the app. In reality, the triviality of this means the device itself could probably send the messages, but where’s the fun in that? Azure is awesome, and Azure IoT is designed for scale… imagine thousands of devices doing this!

The Raspberry Pi setup was pretty simple, as it required no soldering, breadboards, capacitors, resistors, or anything of that nature – just an old USB webcam that I had lying around. The Pi itself was connected to an Ethernet hub and powered by a USB phone charger. The principal purpose of this demo was not to build a complex IoT device; rather, it was intended to highlight the capabilities of Azure IoT and how to integrate it with other Azure resources.

The Raspberry Pi runs a NodeJS script that acts as a wrapper around a little Linux utility called Motion, which can use all kinds of devices and streams for motion detection. It does this by looking for differences between frames taken from a video stream. The script watches for the output from Motion, which in this case is the JPG images that are saved whenever motion is detected. Below are some sample images.

When images are found, these are uploaded via IoT Hub to an Azure Storage Account. Once a fair number of images are sent, the script then sends a message to IoT Hub. IoT Hub is wired up to an Azure Service Bus Message Queue which receives the message publications. An Azure Function subscribes to the queue to handle the messages. The function simply creates an email with references to the images embedded, and then sends that email message out by way of SendGrid. All in all, this simple demo shows end-to-end what an Azure IoT app might look like.

IoT App Architecture

A few disclaimers: this is a demo and is by no means intended to run as a production-grade app. The code and setup are actually part of a larger project I am working on that will involve AI and image recognition; I borrowed pieces from it to do an IoT demo. Also, this is not a trivial setup, so it can take a while to complete. Many of the details are glossed over in favor of brevity. If you have questions, please post in the comments or contact me. So with no further ado, here’s how to set this up.

Create Resources

To create the resources, follow the links below to Azure, which have detailed instructions for creating the necessary resources in the Azure Portal. Along with these links are notes about what needs to be done for each to ensure you get the right settings. Before creating anything, remember to put all of these resources in the same subscription, the same resource group, and the same region for best performance.

Create a Function App. Choose Windows for the Type and select Consumption Plan for the Billing model. You can take the defaults here, but for demo purposes you can turn off App Insights. Make a note of the Region, Subscription, and Resource group you are using.

Create an IoT Hub in the same subscription, resource group, and region as you used for the Function App. Set the Pricing and scale to S1: Standard tier.

Create a Service Bus in the same subscription, resource group, and region as you used for the Function App and use the Basic tier.

Configure Resources

Once all of the resources are deployed…

Configure SendGrid by opening the SendGrid resource in your resource group. Click on Manage, which will take you to the SendGrid website. Logon to SendGrid with your SendGrid username and password. Click Settings -> API Key -> Create API Key. Give the Key a name and then take the defaults. After this, copy the generated key and paste it somewhere. You’ll need it later.

Configure the Service Bus by opening the Service Bus resource in your Resource group, then select + Queues. Create a queue from the Overview blade and make note of the queue name because you’ll need this later. The size and other settings can be left on their defaults.

The IoT Hub has three things to configure.

Configure a device by clicking on IoT devices -> Add. Give the device a Device ID and click Save. Make note of the Device ID because you’ll need it later. Click on the device to pull up its details, then copy the Connection string (primary key) and paste it into a text editor. You’ll also need this later.

Configure the Service Bus connection to IoT Hub by selecting Endpoints -> Add endpoint. Choose Service Bus Queue for the Endpoint type and give the endpoint a name. Choose the queue you created when you configured the Service Bus.

Click File upload, then choose the Storage account created by the Function App and create a new storage container. Make a note of the container name because you’ll need it later.

Code the Function App

Now that the resources are configured, create a function in your function app to bring some of the resources together.

Open the Function App you created.

Click on the + icon next to Functions, then choose Create your own custom function.

Choose JavaScript under Service Bus Queue Trigger.

You can use whatever you want for the Name. For the Service Bus connection, click New, then choose the Service Bus you created. Type the name of the queue you created into Queue Name, and then click Create.

After the function creates, select Integrate under the function, select + New Output, then select SendGrid.

Supply an email address for the To address and the From address. For SendGrid API Key App Setting, click New, then use APIKey for the Key and the API Key you copied from SendGrid for the Value. Finally, click Save.

Click on the name of the function to bring up the code editor and paste in the following code. Change the value of container to be the same as the container you created when you configured the File upload on the IoT Hub, then Save the code.
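
The function body might look roughly like the following JavaScript sketch; it lists the uploaded blobs and hands an HTML email to the SendGrid output binding. The container name, the binding name message, and the use of the AzureWebJobsStorage connection string are assumptions for illustration.

// Illustrative sketch only - assumes the SendGrid output binding is named "message"
// and that images land in the container configured for IoT Hub file upload.
const azure = require('azure-storage');

const container = 'motionimages'; // change to the container you created earlier

module.exports = function (context, mySbMsg) {
    // Uses the Function App's own storage connection string
    const blobService = azure.createBlobService(process.env.AzureWebJobsStorage);

    blobService.listBlobsSegmented(container, null, (err, result) => {
        if (err) {
            context.done(err);
            return;
        }

        // Build an HTML body that references each uploaded image
        const html = result.entries
            .map(blob => '<p><img src="' + blobService.getUrl(container, blob.name) + '"/></p>')
            .join('\n');

        context.bindings.message = {
            subject: 'Motion detected',
            content: [{ type: 'text/html', value: html }]
        };
        context.done();
    });
};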

Click on the Function App name (not the function itself), then click Platform Features, then Advanced Tools (Kudu). This will launch a new tab in your browser.

In Kudu, select Debug console, then CMD.

Browse to site -> wwwroot -> then the name of your function.

Install the script dependencies for the function. You may see some red text with “WARN” messages. You can ignore these.

npm install azure-storage

That’s all for configuring the Azure Function; now you can proceed to setting up the Raspberry Pi.

Configure the Raspberry Pi

This project used a Raspberry Pi 2 running Ubuntu Core. Installing Ubuntu Core on a Raspberry Pi is pretty straight forward. Once Ubuntu Core is installed, connect to the Pi with an SSH client and run the following commands.

Get root access

sudo -i

Update your Pi.

apt-get update && apt-get upgrade

Get NodeJS.

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -

Install NodeJS and Motion

apt-get install -y nodejs motion

Configure Motion. First, edit the motion.conf file.

nano /etc/motion/motion.conf

Then, search (Ctrl + W) for target_dir and set the value to /home/ubuntu/iot/motion/. Save the file (Ctrl + O).

Create some folders

mkdir /home/ubuntu/iot /home/ubuntu/iot/motion

Change directories to the iot directory

cd /home/ubuntu/iot

Create a script called iot.js

nano iot.js

Paste in the following code. Change <YOUR DEVICE CONNECTION STRING> to the connection string you copied when you configured the IoT Device and <YOUR DEVICE ID> to the Device ID. Then, save the File (Ctrl + O).

Start Motion Detection

Back in the Azure Portal, select your Resource group, then your IoT Hub, select your Device, then Message to Device.

Type in ON in the Message Body, then click Send Message. (Conversely, you can turn off motion detection by sending an OFF message.)

Back in the console running the IoT Script, you should see a message indicating that motion detection is turned on.

Motion Detection is ON

Place an object in front of the connected camera. You should see activity in the console with images and messages being sent to the IoT Hub.

If everything is working right, you should start getting messages in your inbox with images taken by the camera!

Creating a Daemon with .NET Core (Part 2)

This little project is a practical implementation of a blog post I wrote about implementing daemons in .NET Core. This daemon is a .NET Core console app that uses a Generic Host to host an MQTT server based on MQTTnet, which has become my go-to MQTT library for .NET apps of all kinds. In reality, this isn’t really creating the server; MQTTnet already supplies much of the plumbing needed to do that. This is putting a wrapper around MQTTnet’s server API, which by itself can act as a service because it implements the IHostedService interface we talked about in the last post.

To make this work, I started with the same basic structure from the previous project: a service class, a config class, and the Program.cs that acts as the entry point for the daemon. I renamed the project to MQTTDaemon, the service to MQTTService, and the config to MQTTConfig. Likewise, I added the needed dependencies from NuGet, including MQTTnet. Here are the files respectively:

Program.cs

This simply wires up the renamed services with their new names. Notice the config still gets its configuration from the CLI or environment. In this case, it’s looking for a port.
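
A minimal sketch of the wrapper this describes, an IHostedService that starts and stops an MQTTnet server on the configured port, might look like the following. Class and property names follow the renames above, and the MQTTnet calls reflect the older 2.x/3.x server API, so newer releases may differ.

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using MQTTnet;
using MQTTnet.Server;

public class MQTTConfig
{
    public int Port { get; set; }
}

public class MQTTService : IHostedService
{
    private readonly ILogger<MQTTService> _logger;
    private readonly MQTTConfig _config;
    private IMqttServer _server;

    public MQTTService(ILogger<MQTTService> logger, IOptions<MQTTConfig> config)
    {
        _logger = logger;
        _config = config.Value;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // Build the MQTTnet server options and listen on the configured port
        var options = new MqttServerOptionsBuilder()
            .WithDefaultEndpointPort(_config.Port)
            .Build();

        _server = new MqttFactory().CreateMqttServer();
        await _server.StartAsync(options);
        _logger.LogInformation("MQTT server started on port {Port}", _config.Port);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        await _server.StopAsync();
        _logger.LogInformation("MQTT server stopped");
    }
}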

Using the Daemon with a Client

Now, you can build and run the app. To do so, first run dotnet build in the root directory of the app. Next, run dotnet run --MQTT:Port=1883. This will start the server on port 1883, the standard port for a MQTT Server.

With that running you can test it. There are dozens of tools for testing MQTT Servers and one such tool is MQTTLens, which is a Chrome plugin. The app is pretty straightforward. Click the + button next to Connections to add a connection, then set the Hostname to localhost and the Port to 1883. Click Create Connection at the bottom of the config to connect to the server.

MQTTLens Config

You’ll notice in the shell where your server is running that a new connection was added. Now you can subscribe to a topic: enter the name of a topic and click Subscribe. This creates a new topic on the server, and MQTTLens is now listening on it.

Conclusion

This little demo shows how to wire up a daemon and do something practical with it. As mentioned, MQTTnet itself can act as a service too. You don’t need the wrapper, as the library has its own configuration classes, logger interfaces, service classes, etc., so you can wire it up right in Program.cs and literally have a one-class MQTT daemon at your disposal. In the next installment in this series, we’ll look at how to use a .NET Core daemon in Docker containers.

Creating a Daemon with .NET Core (Part 1)

Daemons are as essential to servers as caffeine and pizza are to developers. These executables run as services on a server, waiting to receive input, usually from a network call, and respond by providing something back to the user who called them. .NET Core is the venerable cross-platform development stack for Linux, Mac, and Windows. Up to now, .NET Core really hasn’t had a good story for writing daemons, but with the introduction of asynchronous Main methods, Generic Hosts, and RunConsoleAsync this is not only possible, but incredibly elegant in its implementation. It follows the same patterns that ASP.NET Core developers have come to love. The main difference, though, is that in ASP.NET Core the Web Host is provided to you as a service and you write controllers to handle requests. With a Generic Host, it falls to you to implement the hosted service.

In this little tutorial, we’ll go over the basics of writing a daemon using .NET Core and a Generic Host.

Create an app. The basic console app from the .NET Core template is little more than a Hello World app. However, it doesn’t take much to transform it into a daemon-style application.

dotnet new console --name mydaemon

To make this app run like a daemon, there are a few things we need to change in the .csproj file, so open the mydaemon.csproj file in a text editor.

This project file is an XML file that contains a basic set of sections. We need to add a section to tell the compiler to use C# 7.1. This allows your Main method to be asynchronous which is not allowed in the default version of C#, 7.0. Add the following code snippet before the closing </Project> tag.
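
The snippet in question is a LangVersion property group along these lines:

<PropertyGroup>
  <LangVersion>7.1</LangVersion>
</PropertyGroup>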

Now, we need to add some dependencies to make the app work. These are all extensions that can be used in console apps but can also be used in web apps as well. Add the following code snippet before the closing </Project> tag. Here, you’re adding dependencies for command line and Environment configuration, the ability to log to the console, and adding dependencies to do dependency injection for the Generic Host.
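
An ItemGroup along these lines covers the pieces described (command line and environment configuration, console logging, and the Generic Host with dependency injection); the package versions shown are illustrative of the .NET Core 2.1 era:

<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration.CommandLine" Version="2.1.1" />
  <PackageReference Include="Microsoft.Extensions.Configuration.EnvironmentVariables" Version="2.1.1" />
  <PackageReference Include="Microsoft.Extensions.Hosting" Version="2.1.1" />
  <PackageReference Include="Microsoft.Extensions.Logging.Console" Version="2.1.1" />
  <PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="2.1.1" />
</ItemGroup>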

Now, edit the Program.cs file. Add the following name spaces to the top of the file:

using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

Change the Main method’s signature to be asynchronous.

public static async Task Main(string[] args)

Replace the code in the Main method with the following code snippet. This code wires up the configuration for the daemon using the HostBuilder class. ConfigureAppConfiguration tells the builder where to get configuration information from; in this case we’re using the command line and the environment. ConfigureServices tells the builder what services to use and how to instantiate them. In the case of a daemon, your service will most likely be a singleton, meaning there is only one instance of the service for the entire duration of the app. It also adds the configuration POCO object to the services for dependency injection. This uses the IOptions interface in .NET Core, which takes a type that it will attempt to bind CLI parameters and environment variables to based on the field names. ConfigureLogging wires up console logging using the ILogger interface.
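
Pulled together, a Main method along these lines might look like the following sketch, relying on the namespaces added above (the DaemonConfig type and the "Daemon" configuration section name are assumptions; DaemonService is the class created in the next step):

public static async Task Main(string[] args)
{
    var builder = new HostBuilder()
        .ConfigureAppConfiguration((hostContext, config) =>
        {
            // Pull configuration from environment variables and the command line
            config.AddEnvironmentVariables();
            if (args != null)
            {
                config.AddCommandLine(args);
            }
        })
        .ConfigureServices((hostContext, services) =>
        {
            services.AddOptions();
            // Bind a configuration POCO for injection via IOptions<T>
            services.Configure<DaemonConfig>(hostContext.Configuration.GetSection("Daemon"));
            // The daemon itself lives for the lifetime of the app
            services.AddSingleton<IHostedService, DaemonService>();
        })
        .ConfigureLogging((hostContext, logging) =>
        {
            logging.AddConsole();
        });

    // RunConsoleAsync blocks until Ctrl+C / SIGTERM, keeping the hosted service running
    await builder.RunConsoleAsync();
}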

The secret sauce for making the program run as a daemon is the RunConsoleAsync method. This method puts the app in a wait state that is looking for Ctrl + C or Ctrl + Break without consuming CPU. While the app is up and running, so is your service as defined by the ConfigureServices method.

Create a new file called DaemonService.cs. This is the class file that defines your service.

Paste the following code into the file. This class implements IHostedService and IDisposable. The IHostedService interface is used by the builder in Program.cs to create a service that is “hosted” in the console app. It has two basic methods, StartAsync and StopAsync, that get called when the service is started and stopped respectively. These methods allow for a graceful startup and shutdown of the service. If a service implements IDisposable, the Dispose method will also be called; this is a nicety for any final cleanup needed after StopAsync. The constructor accepts a number of interfaces as parameters, which are resolved by the dependency injection built into the builder. The logger and config are the standard kinds of dependencies that many apps have.
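
A minimal sketch of such a class, matching that description (the DaemonConfig POCO and its DaemonName property are assumptions for illustration):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

public class DaemonConfig
{
    public string DaemonName { get; set; }
}

public class DaemonService : IHostedService, IDisposable
{
    private readonly ILogger<DaemonService> _logger;
    private readonly IOptions<DaemonConfig> _config;

    public DaemonService(ILogger<DaemonService> logger, IOptions<DaemonConfig> config)
    {
        _logger = logger;
        _config = config;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Called when the host starts: begin listening, open connections, etc.
        _logger.LogInformation("Starting daemon: " + _config.Value.DaemonName);
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Called on Ctrl+C or shutdown: stop work gracefully
        _logger.LogInformation("Stopping daemon.");
        return Task.CompletedTask;
    }

    public void Dispose()
    {
        // Final cleanup after StopAsync has run
        _logger.LogInformation("Disposing daemon.");
    }
}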

Conclusion

This little tutorial demonstrates how to frame up a .NET Core daemon using some of the newer features available in .NET, such as the Generic Host, asynchronous Main methods, and RunConsoleAsync to keep the console app running. This structure also makes the code testable and maintainable, using patterns that are familiar to those who work with ASP.NET Core.

In Part 2, we’ll use this to actually make a useful daemon: a .NET Core MQTT Broker.

Using Rclone with Azure for a Low-cost Backup

While backups are often one of the most overlooked planks in a comprehensive data security plan, they are probably among the most important things one can do for data security. A backup works as an insurance policy against data loss, which can be caused by a myriad of things ranging from accidental deletion, to drive failure, to ransomware attacks.

A good backup strategy usually will not co-locate the backup data with the original, for a number of reasons; fires and theft are a couple of them. In the past, data was backed up to removable media and stored offsite, in a place such as a safe deposit box or the like. Nowadays, with high-speed internet and readily available cloud-based storage, backing up over the internet to the cloud is a real possibility.

One such cloud storage service is Azure Blob Storage. Originally, Azure had only one general-purpose tier for blob storage. Recently, though, Azure introduced storage tiers for Azure Storage accounts, which opened up blob storage to a whole new set of use cases. The three storage tiers are hot, cool, and archive. Hot storage is intended for applications that need data readily available and that read and write it fairly often. Archive storage is intended for long-term archival of data; the data is not stored in a readily available state, so recovering it requires a “rehydration” process that can take a long time. Cool storage sits between hot and archive, offering a lower-cost option that is available for use but not intended for frequent access. Cool storage in most regions is $0.01 per GB per month, which means one terabyte costs roughly $10 a month. Azure does not charge for writing to cool storage, but it does charge for reading from it. Given that the intent here is a backup, you need only read from it in the event of data loss. A few other benefits of Azure include the fact that every byte written is stored three times for redundancy, and data is encrypted at rest.

Azure Storage is only half the equation. To get data into Azure Storage, you need a utility/agent that moves data from your local computer to the storage account, and this is where Rclone comes in. Rclone is a command line utility that performs one-way syncs between your local data and the cloud. When it runs, it looks for changes on the local file system, then uploads those changes to the storage account. Anything unchanged is left alone. The initial upload will obviously take some time, but once it’s finished only changes are sent up.

To be clear, Azure does have a backup-as-a-service offering, which can be used for more robust backup schemes. However, if you’re looking for a simple solution, this little “hack” might just be for you.

Use Rclone to create a container on Azure. The syntax is rclone mkdir remote:containername, where remote is the name of the remote you created with rclone config and containername is the name of the blob container you’ll create on Azure.

rclone mkdir azure:backup

Now, Rclone is configured to talk to Azure and use it for backups.

Syncing a Directory

Rclone will sync a local directory with the remote container, storing all the files in the local directory in the container. Rclone uses the syntax, rclone sync source destination, where source is the local folder and destination is the container on Azure you just created.

rclone sync /path/to/my/backup/directory azure:backup

Scheduling a Job

Scheduling is important to automating backups. How you do this depends on your platform: Windows can use Task Scheduler, while macOS and Linux can use crontabs.

Before scheduling a job, make sure you have done your initial upload and it has completed.

Windows

Create a text file called backup.bat somewhere on your computer and paste in the command you used in the section on Syncing a Directory. It will look something like the following. Specify the full path to the rclone.exe and don’t forget to save the file.
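
For example (the rclone path and source directory here are placeholders for your own):

"C:\Program Files\rclone\rclone.exe" sync "C:\Users\you\Documents" azure:backup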

If you want to back up multiple directories, simply add multiple containers using rclone mkdir and add a new line for each directory in the batch file for the source and corresponding destination container.

Mac and Linux

Create a text file called backup.sh somewhere on your computer, and paste the command you used in the section on Syncing a Directory. It will look something like the following. Specify the full path to the rclone executable and don’t forget to save the file.
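
For example (again, adjust the paths to your own system):

#!/bin/bash
/usr/local/bin/rclone sync /path/to/my/backup/directory azure:backup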

Add an entry to the bottom of the crontab file. Crontabs are straightforward: the first 5 fields represent, in order, minutes, hours, days, months, and weekdays, and * denotes all. To make backup.sh run daily at 1:05 AM, use something like this:

5 1 * * * /full/path/to/backup.sh

Save the crontab and you’re ready to go.

If you want to back up multiple directories, simply add multiple containers using rclone mkdir and add a new line for each directory in the script for the source and corresponding destination container.

Conclusion

This simple utility offers a nice way to back up local data to Azure and will work for a lot of simple and even some more complex use cases. Here are a few dos and don’ts.

Dos

Back up documents, pictures, videos, content, and other sorts of files you can’t stand to lose.

Schedule a daily backup to make sure stuff does get backed up regularly.

Do check to make sure things are backing up occasionally.

Don’ts

Don’t back up programs and program directories.

Don’t use this for source control.

Don’t assume that you’ll never need a backup.

Happy Backing Up!

Using OpenVPN on Azure For a Low Cost, Private VPN

A personal VPN is a nice way of securing traffic between your device and the Internet. Securing your traffic is good for several reasons, including safe browsing when one is away from a trusted network like one’s home or office. Untrusted networks would be those at coffee shops, airports, hotels, public libraries, and other places where you do not know who or what is on the network or might be sniffing network traffic. Moreover, it establishes a point of presence on the internet at some place other than where you are physically located, because the VPN server is where the traffic enters the public Internet.

OpenVPN is an open source VPN solution that provides both the client and server components for creating a VPN. Because it is open source, it has been ported to virtually every platform, so there are clients for iOS, Android, macOS, Windows, and virtually every other operating system known to man. Running OpenVPN on Azure is a great solution for a low-cost, private VPN. With Azure, you can use a small B-Series VM that will cost less than $10 a month if you leave it on all the time, and even less if you shut it down when you’re not using the VPN. The only variable cost is bandwidth, which will depend on what you use the VPN for.

Deploying to Azure

Deploying OpenVPN to Azure is a cinch. All you need is an active Azure subscription; then click the button below.

This will take you to a form to fill out.

Azure Form

For the Resource Group, supply a name.

For Location, select an Azure Data Center you want to be your point of presence. This is the location you will appear to be located at when using your VPN. For instance, choosing East US will make it appear you are accessing the Internet near Washington DC.

For Admin Username, type in a user name that can be used to access the virtual machine through SSH.

For Password, enter a strong password. This will be the same password used to access the virtual machine through SSH and the admin site.

For DNS Name for Public IP, choose just the first part of the hostname, and the rest will be generated. It is a combination of what is entered, the region, and cloudapp.azure.com. For instance, myvpn and East US would be myvpn.eastus.cloudapp.azure.com.

Lastly, agree to the Terms and Conditions and click Purchase. Wait until the deployment finishes.

What this button does behind the scenes is create a B-Series VM on Azure and install OpenVPN on the machine for you. This is performed by Azure Resource Manager (ARM) automation scripts and shell scripts on the VPN server. If you’re interested in these, check them out here.

Connecting to the VPN

In the Azure Portal, you’ll need to locate the virtual machine that Azure created in order to get its host name. It will be in the Resource Group you created when you created the VPN. In the resource group is a virtual machine called openvpnVM. Click on it and you will see the name next to DNS Name. Copy this to the clipboard.

DNS Name

In the browser address bar, type in https://, then paste in the name you copied, and this will bring you to the admin site. You may get an SSL error. Simply ignore it and proceed to the site.

You’ll be prompted to logon. Use admin for the user name and the password you supplied when you created the VPN.

Logon

Managing clients is simple. Type in the name of the client and click Add to add a client. To remove a client, click Revoke next to the client’s name.

Add Client

Once you’ve added a client, download the client’s profile, which is an .ovpn file.

Download or Revoke Clients

Connect your client:

Windows: use OpenVPN GUI. After installing the app, copy the .ovpn file to the C:\Program Files\OpenVPN\config folder. Launch the GUI from your Start menu, right click the icon in the tool tray, then click Connect. Disconnect by right clicking and selecting Disconnect.

MacOS (OS X): use Tunnelblick. Download and install Tunnelblick. After downloading, double-click on the downloaded .ovpn file and import the configuration either for yourself or for all users. Once imported, click the Tunnelblick icon on the menu bar and click Connect. Disconnect by clicking the Tunnelblick icon and selecting Disconnect.

Android: use OpenVPN Connect for Android. Download and install the app. Next, go to the admin site and create and/or download a profile. In the app, select Import from the menu, then select Import Profile from SD card. Find the profile in your Downloads folder and import the profile. Once downloaded, click Connect. To disconnect, open the app again and select Disconnect.

iOS: use OpenVPN Connect for iOS. Install the app, then browse to the admin site in Safari. Create and/or download a profile. After the profile is downloaded, select Open in Open VPN. Install the profile, then select Connect to connect to the VPN. To disconnect, open the app again and select Disconnect.

Stopping and Starting the VPN

For the ultimate cost savings, shut down the VPN VM when you’re not using the VPN. Azure only bills for storage when the VM is not running.

You can start and stop the VPN easily in the Azure portal.

Start/Stop Azure Portal

For convenience, though, the Azure Mobile App for iOS or Android is simple to use. Install the app and log in. In the list of resources, find openvpnVM and tap on it. On this panel, you can stop and start the VM. You can start the VPN from the app before you connect to it, and once you’re done using the VPN, shut it down.

Azure Mobile App

Also, star the VM so it will appear on your list of Favorites for quick access.

Conclusion

That’s it! Your private VPN is ready to go. Happy safe browsing!

Interfacing .NET and Ethereum Smart Contracts with Nethereum

.NET is the venerable framework that indie and enterprise developers alike have come to love. The ability to choose from a variety of languages as well as deploy to a variety of platforms, ranging from mobile to servers, makes .NET a great choice for all kinds of applications. While .NET does a lot, it doesn’t do everything. For instance, one cannot write client-side code to run in a browser in .NET (unless you use Silverlight…), nor is there currently a language supported by .NET that enables developers to write smart contracts.

The language of choice for smart contracts, though, is Solidity. Solidity is a purpose-built language that assumes many things about the environment it is running in, which pretty much marries it to blockchain technologies. It’s these baked-in assumptions that conversely preclude other languages.

Smart contracts by themselves, though, are only half the story. For apps to be complete, smart contracts need something to call them by way of RPCs. This is where Web3 clients come in. A Web3 client is simply the client-side interface that enables client apps to interface with smart contracts running on Ethereum, so as long as one exists for a given language, that language can interface with Ethereum.

For .NET, really the only game in town for Ethereum is a project called Nethereum (a portmanteau of .NET and Ethereum). This library attempts to replicate the same functionality provided by Web3.js, which is available for JavaScript apps like those that run in a browser or with NodeJS.

In this tutorial, we’ll deploy a smart contract to Ganache and then create a simple .NET app using .NET core to interface with the smart contract.

Create and Deploy a Smart Contract

In a terminal, command prompt, or PowerShell session, install Truffle. Truffle is a framework and set of utilities that helps facilitate Solidity development of smart contracts. NPM is needed to complete these commands.

npm install -g truffle

Create a folder and run truffle init in the folder.

truffle init

Truffle will create a few new folders: contracts, test, and migrations. Create a new file called Vote.sol in the contracts folder.

Paste the following code into the newly created Vote.sol and save the file. This contract simply tracks the vote count for two candidates. It uses the message sender (that is, an account address) as the voter and only allows one vote per account.
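
A minimal sketch of a contract matching that description might look like the following (the names and the 0.4.x pragma are illustrative, not the original code):

pragma solidity ^0.4.24;

contract Vote {
    // Vote tallies for the two candidates
    uint public candidate1Votes;
    uint public candidate2Votes;

    // Tracks which account addresses have already voted
    mapping(address => bool) public hasVoted;

    function vote(uint candidate) public {
        // msg.sender is the voting account; only one vote per account
        require(!hasVoted[msg.sender]);
        require(candidate == 1 || candidate == 2);

        hasVoted[msg.sender] = true;

        if (candidate == 1) {
            candidate1Votes++;
        } else {
            candidate2Votes++;
        }
    }
}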

With Nethereum, it’s easy to wire up a smart contract to any .NET app. Because Nethereum is .NET, it can be used in .NET Core apps, .NET Standard libraries, Xamarin, and all sorts of Windows apps. The power of Ethereum and .NET is now at your disposal!