Introduction

Recently I was confronted with a very small use case for IoT temperature gathering: an ideal case to see what we can achieve with some out-of-the-box open-source software and a few Raspberry Pis.

The test case

So the test case is pretty simple:
– I have a freezer positioned in a potentially hot area. I suspect it may come close to defrosting during a hot summer, and I want to measure this.
– The thermostat inside the house is always off: it adds a few degrees, and I suspect a rather structural problem. I want to map and measure this.
– The garage, which is outside the reach of my WiFi, is a potential new site for my mini data center. I don’t know whether my hardware would survive this damp place.
– The missus always opens Google to check the weather forecast, so I think I can build a mini weather station GUI. I’d also like to use it for the temperature readouts of all my servers.

VM 2 – MQTT

A message queue for transmitting data: http://mqtt.org/. MQTT is a machine-to-machine (M2M)/”Internet of Things” connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. It is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
This VM doesn’t consume a lot of memory.
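A minimal sketch of the publish/subscribe flow using the Mosquitto command-line clients (this assumes the mosquitto-clients package; the broker address and topic name below are my own placeholders):

```shell
# Sensor side: publish one compact reading to the broker VM.
TOPIC="sensors/freezer/temperature"
PAYLOAD="-18.5"
if command -v mosquitto_pub >/dev/null 2>&1; then
    mosquitto_pub -h 192.168.1.20 -t "$TOPIC" -m "$PAYLOAD"
    # Control-center side (run elsewhere): watch everything under sensors/.
    # mosquitto_sub -h 192.168.1.20 -t 'sensors/#' -v
else
    echo "mosquitto clients not installed (sudo apt-get install mosquitto-clients)"
fi
```

The payload is just the raw value; topic structure carries the rest of the meaning, which keeps every message tiny.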

Specifications:
CPU: 1vCPU
RAM: 256MB
Disk: 6GB
OS: Ubuntu 16.04 LTS

Control Center

This small control center will show the TFT screen with all temperature sensors. It will also handle the data transfer of the MQTT bridge and process the temperatures sent by sensor 1. I try to keep the bandwidth as low as possible, so I only send the bare minimum over MQTT and let this server handle the ELK communication. 1MB of traffic costs me 0.10 cents and any superfluous ASCII character costs money; by using MQTT I can keep my data traffic cost below 0.30 cents each month.
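As a sanity check on that claim, a quick back-of-the-envelope calculation (the 20-byte payload and one-minute publish interval are my own assumed numbers):

```shell
# Assumed: a 20-byte MQTT payload published once a minute for a 30-day
# month, at the quoted 0.10 cents per MB.
awk 'BEGIN {
    bytes = 20 * 60 * 24 * 30;        # payload size * publishes per month
    mb    = bytes / (1024 * 1024);
    printf "%.2f MB/month -> %.3f cents/month\n", mb, mb * 0.10;
}'
```

This prints 0.82 MB/month -> 0.082 cents/month, comfortably under the 0.30 cent budget.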

What is noticeable is that there isn’t any data on this disk once we add it. Do note that in BTRFS you are responsible for rebalancing your RAID5 once you start adding disks. Before we balance, let’s record the md5sums so we can check afterwards whether the balance did any harm.
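A sketch of that routine (the /media/btrfs-raid5 mount point is my assumption; adapt it to your own):

```shell
# Record checksums, balance onto the new disk layout, then verify.
MNT=/media/btrfs-raid5
if [ -d "$MNT" ] && command -v btrfs >/dev/null 2>&1; then
    find "$MNT" -type f -exec md5sum {} + | sort > /tmp/pre-balance.md5
    sudo btrfs balance start "$MNT"
    md5sum -c --quiet /tmp/pre-balance.md5 && echo "balance did no harm"
else
    echo "no BTRFS mount at $MNT, skipping"
fi
```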

Introduction

This post covers the creation of a RAID0 and its conversion to RAID5, a use case for the creation of my NAS. As I don’t have spare disks lying around, I will migrate the data from my current RAID5 to a BTRFS RAID0, then decommission the old RAID5 and add a disk to the RAID0 to create a new BTRFS RAID5 system.

Test One: creating a RAID0

Our first test is to create a RAID0 on our BTRFS file system. I am currently using three 5GB disks; for the RAID0 I will be using /dev/sdb and /dev/sdc.
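The commands can be sketched as follows (guarded so they only run when both test disks are actually present; the mount point name is my own choice):

```shell
# -d sets the data profile, -m the metadata profile; -f wipes any
# previous filesystem signature on the test disks.
if [ -b /dev/sdb ] && [ -b /dev/sdc ]; then
    sudo mkfs.btrfs -f -d raid0 -m raid0 /dev/sdb /dev/sdc
    sudo mkdir -p /media/btrfs-raid0
    # Mounting any member device mounts the whole multi-device filesystem.
    sudo mount /dev/sdb /media/btrfs-raid0
    sudo btrfs filesystem show /media/btrfs-raid0
else
    echo "test disks /dev/sdb and /dev/sdc not present, skipping"
fi
```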

Setting up BTRFS and upgrading the kernel

My test setup is based on Ubuntu 14.04.3 LTS, which still uses a 3.19 kernel. For BTRFS it’s better to use a newer stable version, so we will update the kernel; in my case to 4.1.13, the latest stable at the moment of testing (https://www.kernel.org/).
Download the header files from Ubuntu
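Something like the following (the exact .deb file names carry a build timestamp, so copy them from the mainline directory listing first; they are left as placeholders here):

```shell
# Ubuntu's mainline kernel builds live under this archive.
BASE=http://kernel.ubuntu.com/~kernel-ppa/mainline
echo "browse $BASE for the v4.1.13 directory, then:"
# wget $BASE/v4.1.13-<series>/linux-headers-4.1.13-<build>_all.deb
# wget $BASE/v4.1.13-<series>/linux-headers-4.1.13-<build>-generic_amd64.deb
# wget $BASE/v4.1.13-<series>/linux-image-4.1.13-<build>-generic_amd64.deb
# sudo dpkg -i linux-*.deb && sudo reboot
```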

Test 3: Killing one off

In this test we physically (or virtually) disconnect a disk, see what happens, and repair it.
When booting you will see ‘An error occurred while mounting /media/btrfs-raid1’. Press S to skip.
The program ‘lsblk’ will list sdd as missing. This is normal.
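From there the repair can be sketched as follows (the mount point, surviving member /dev/sdb and replacement disk /dev/sde are my assumptions):

```shell
MNT=/media/btrfs-raid1
if command -v btrfs >/dev/null 2>&1 && [ -d "$MNT" ]; then
    # A RAID1 missing a member will only mount with -o degraded.
    sudo mount -o degraded /dev/sdb "$MNT"
    # Add the replacement disk, then drop the vanished one.
    sudo btrfs device add /dev/sde "$MNT"
    sudo btrfs device delete missing "$MNT"
    # Rebalance so both mirror copies exist again.
    sudo btrfs balance start "$MNT"
else
    echo "btrfs tools or $MNT not available, skipping"
fi
```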

Planning a NAS upgrade: Ubuntu 16.04 LTS

For Ubuntu 16.04 LTS I intend to upgrade and move away from my trusted mdadm RAID. For this use case I will be testing whether any newer file systems show some versatility in RAID creation, expansion and maintenance.
I will be looking at BTRFS for my RAID1 and RAID5 needs, and MHDDFS for JBOD.

Introduction

In this little post I’ll be comparing VMware ESXi and Citrix XenServer.

The base image consists of a Windows 10 32-bit installation. On this image I installed the PassMark CPU benchmark. Each VM has 2048MB RAM available, and only one instance runs on the hypervisor host at a time.

The base score for the i3-5010U is 3054, with a single-threaded score of 1178.

Testing

For each test I will gradually increase the CPU power. P stands for the Passmark score, S for the Single Thread score.

Test 1: 1 vCPU

You can see that in the single-threaded score there is a penalty of around 15-18% for Citrix XenServer. VMware does a better job, with only around a 3% loss.
As for the Passmark test, VMware again does better than XenServer, though the difference is quite small: around 5% on a single-core VM.

Test 2: 2 vCPUs

In this test VMware keeps a rock-steady score for single-threaded performance, and Citrix XenServer stays the same too: adding more virtual CPUs doesn’t add any extra advantage to single-threaded performance.
However, adding an extra CPU to both virtual machines roughly doubles the Passmark score. Good evolution, we are getting there.

Test 3: 4 vCPUs

This test is a little bit odd: XenServer allows a setup of 4 virtual cores on one CPU, and these results are in the same range as the 2×1 test, as the i3-5010U has 2 cores with 2 threads each. This shows that XenServer offers more configuration options than VMware, though it’s worth checking your hardware layout before throwing around cores.

This test gives the final score. Noticeably, the single-thread score doesn’t improve much, and Citrix XenServer stays the lowest of the two.

All results

Conclusion

You can see Citrix XenServer stays below VMware in raw performance. Is Citrix XenServer useless? No: despite the lower CPU performance, each XenServer hypervisor does more out of the box than its VMware counterpart. In VMware you need a running vCenter to use some of the advanced features; in a Citrix XenServer environment, for example, the high-availability configuration is found in each hypervisor.
This could explain some of the differences found in performance.

Implementation

In the past I mounted the network location directly to ‘/var/www/owncloud/’, but this isn’t possible anymore, as I want that folder to be used for the RAM disk.
Let’s create a new mount point named ‘/media/network’ and change our fstab to reflect this change.

sudo mkdir -p /media/network
sudo nano /etc/fstab
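For reference, the fstab entry could look something like this (the NAS address, share name and credentials file are placeholders for my setup; yours will differ):

```
//192.168.1.10/owncloud-data  /media/network  cifs  credentials=/root/.smbcredentials,_netdev  0  0
```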

Unmount and remount everything again and verify that it is mounted.

sudo umount /var/www/owncloud
sudo mount -a
df -h

Now we shall create the RAM disk. Verify that your installation is less than 192MB in size (hint: ‘du’).
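Creating the RAM disk itself is a one-liner (the 192MB size matches the check above):

```shell
if [ -d /var/www/owncloud ]; then
    sudo mount -t tmpfs -o size=192m tmpfs /var/www/owncloud
else
    echo "/var/www/owncloud does not exist, skipping"
fi
# fstab equivalent, to recreate the (empty) RAM disk at boot:
#   tmpfs  /var/www/owncloud  tmpfs  size=192m  0  0
```

Remember the tmpfs starts empty after every boot, which is exactly why the load script below is needed.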

The first script will load the files from the network source. It stops Apache2 while synchronizing the data to the ‘/var/www/owncloud’ folder. There is also a force option to explicitly force the download from the ‘/media/network’ location: if we don’t force this, unison will detect the newly created RAM disk as a newer version and will start deleting the files we need to run ownCloud!
When everything is done, a tmp file will be written to flag that the unison cron job may synchronize files.
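A sketch of that first script (the paths and flag-file location are my assumptions; the real script would carry more error handling):

```shell
#!/bin/sh
# load-owncloud.sh [--force]: fill the RAM disk from the network copy.
command -v unison >/dev/null 2>&1 || { echo "unison not installed"; exit 0; }

FORCE=""
# --force makes /media/network win every conflict, so the freshly
# created (empty) RAM disk can never be treated as the newer replica.
[ "$1" = "--force" ] && FORCE="-force /media/network"

sudo service apache2 stop
unison /media/network /var/www/owncloud -batch $FORCE
sudo service apache2 start

# Flag file: signals the cron job that it may start synchronizing.
touch /tmp/owncloud-ramdisk.ready
```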

Today (24/07/2014) I installed the Google App Engine components from the Google Cloud SDK installer. However, when I try to run my App Engine application with the command ‘goapp serve myapp/’

I receive an error: ‘C:\Program’ is not recognized as an internal or external command. The problem here is that the ‘goapp.bat’ file tries to access an executable in the ‘C:\Program Files\Google\Cloud SDK\…’ folder. Because Windows is (still) super terrible at handling spaces in folder names in scripts, it fails.

The solution is to go to the ‘C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine’ folder and edit the ‘goapp.bat’ file.
At the bottom of the file you will see:

:: Note that %* can not be used with shift.
%GOROOT%\bin\%EXENAME% %1 %2 %3 %4 %5 %6 %7 %8 %9

Now add some quotes to this last line and your problem should be fixed.

:: Note that %* can not be used with shift.
"%GOROOT%\bin\%EXENAME%" %1 %2 %3 %4 %5 %6 %7 %8 %9

Once these changes are saved, go to the ‘C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\’ folder. There’s a ‘goapp.cmd’ file there that gets added to the Windows path. Rename this file to ‘goapp.bck’ and copy in your ‘goapp.bat’ file.
In this last file, change the last line again to:

:: Note that %* can not be used with shift.
"%GOROOT%\..\..\platform\google_appengine\goapp" %1 %2 %3 %4 %5 %6 %7 %8 %9

Introduction

Just an introduction to one of my side projects.

One late evening I decided to get creative for a while. So I came up with the design for a semi-PaaS Windows 3.11 system.
Why?
– Because it’s fun. I’ve always loved legacy systems because of their simplicity. Simplicity which allows me to grasp the history of complex current generation systems. The main purpose would be to see if I can meld old technology together with new technology.
– It hasn’t been done before. At least not that I know of. And if I wanted to create an up-to-date system/design which would serve a business purpose, I would prefer to get paid for doing this. This is my spare time.
– Gaming. You have to admit it, old-school games are fun. Anyone can download DOSBox and play Warcraft II offline, but netplay on a server would be awesome.

Design

This is the initial design I have in mind; it lacks quite a lot of advanced features. The goal is to use as many out-of-the-box components as possible. I don’t want to write my own servers or other components, as that would take a huge amount of time and would likely not scale at all.

RDP gateway

This gateway is an Amazon EC2 instance (t1.micro) configured with HAProxy to proxy RDP connections to each instance and to shield the node servers from other external traffic. Each instance will be reachable on RDP port 3500 + n.
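The HAProxy side could be sketched per instance like this (the backend address and ports are placeholders; n = 1 shown):

```
listen rdp-instance-1
    bind *:3501
    mode tcp
    server win311-node1 10.0.0.11:3389
```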

A reset of an instance wipes drive C: (and repairs it from the template) but should keep all data on the D: drive.

Node Manager

Installed on each node, this manager translates JSON calls between the front-end component and the physical state of the system. It will allow the GUI to send messages concerning:
a) System utilization
b) Instance management

Communication between node and front-end should be done over HTTPS and will use Apache2 to serve the HTTPS traffic.

Feasibility study

Study 1: RDP connection

Goal: Complete an RDP connection through the internet and see if the performance of the RDP connection is sufficient for a Windows 3.11 instance running at 800×640. This RDP connection should use the VirtualBox RDP capabilities (found in the Extension Pack).
Level: Critical
Status: Completed
Results: All objectives have been met.

Study 2: Clone template with VirtualBox

Goal: This test should create and maintain a new instance created from a previous Windows 3.11 instance (called template).
Level: Critical
Status: Ongoing

Study 3: Separate hosts on virtual LAN segment

Goal: This feasibility study should verify that no traffic is possible between hosts configured in internal networking mode, preferably using iptables and/or Coyote Linux for routing network traffic.
Level: High
Status: Ongoing

Final notes

This system is far from perfect and a lot of work still needs to be done. I still need to confirm two feasibility statuses; if study 2 fails, this project will be scrapped.
This is a project done entirely in my spare time, so the release date will be ‘when it’s done’.