Introduction

For my little Windows 3.11 PaaS project I hit a dead end with VirtualBox. So I went researching another way to virtualize Windows 3.11 and found qemu. Below is my little take on emulating Windows 3.11.

Installing qemu-kvm

Installing is pretty easy: just grab all the needed packages. I am using the ‘virt-manager’ package as a GUI frontend.

Leave both types as ‘Generic’. Also select the install image; my Windows 3.11 source is an ISO file.

Select the amount of memory and the number of CPUs. The virtual machine manager has a little bug that won’t allow you to assign less than 50MB, but that’s fine; we’ll fix it later. As for CPUs, use one.

Press the ‘Select managed…’ option here and navigate to the disks you’ve made with the ‘qemu-img’ command. The type will be wrong (raw) but we will fix this later too.
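The text above assumes the disks already exist. If you haven’t made them yet, a minimal sketch with ‘qemu-img’ looks like this (the file names and sizes here are just examples, pick your own):

```shell
# Create two qcow2 disk images for the VM (names and sizes are examples).
qemu-img create -f qcow2 ~/qemu/template/disk-c.qcow2 500M
qemu-img create -f qcow2 ~/qemu/template/disk-d.qcow2 1G
```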

Last step of the wizard. By default the hypervisor will be ‘kvm’; in my experience this causes some stability issues with Windows 3.11, so select qemu instead. As architecture select i686, your default 32-bit architecture.

So that’s it. Create the image and let’s continue. Once your virtual machine is created select the blue ‘i’ button to edit the machine a little bit further.

Press the ‘Memory’ tab and assign 32MB. 32 should be enough for Windows 3.11.

Next go to ‘Boot options’ and enable both the floppy and the hard drive. The floppy should come first in the boot order, before the hard drive.

Once this is done, fix your first disk. Select ‘qcow2’ as type and make sure the disk bus is ‘IDE’.

After this, assign the second hard drive. Press the ‘Add hardware’ button below and select ‘Storage’. From this menu assign the existing image as disk two.

Last step is the floppy drive. Add a new storage drive and select floppy from the dropdown list and press Finish.

That’s it: your virtual machine is now configured to run.

Installing Windows 3.11 / MS-DOS

Next step would be to install the operating system. From the settings page you can connect and disconnect floppies to install your operating system. Press the ‘Disconnect’ button to disconnect the floppy image and press ‘Connect’ to reconnect an image.

Here we go, one fresh MS-DOS 6.22 install.

I won’t explain the other details of installing Windows 3.11, as this post will only cover qemu-kvm. However a little hint: you will need the tools listed on http://www.scampers.org/steve/vmware/

Managing with virsh

Managing a running virtual machine is very easy. The tool to use for this is called ‘virsh’.

To suspend a machine use ‘virsh suspend’ followed by your virtual machine name (in my case ‘TEMPLATE’). A suspend keeps your machine in RAM, but it won’t use any other system resources (except disk space).

virsh suspend TEMPLATE

To resume a suspended state, use ‘resume’.

virsh resume TEMPLATE

To fully dump your running virtual machine, use ‘save’. This creates an image file of your running state and frees the RAM assigned to the machine.

virsh save TEMPLATE ~/qemu/template/suspend

The first time, you will need to change the ownership of your suspend image, as by default it will be owned by ‘root’. If you try to restore a saved machine owned by root, you will get a permission denied error.

sudo chown `id -un` ~/qemu/template/suspend

To resume a saved virtual machine you can use the ‘restore’ command followed by your image file.

virsh restore ~/qemu/template/suspend

To view the state of your virtual machines you can use the following command:

virsh -c qemu:///system list

It will show the state of your machines. A machine that has been saved to disk won’t show up in this list, though.

Disable Intel Speedstep

Disable Intel SpeedStep and the C-bit in the BIOS. The manual states that Intel SpeedStep could ‘make your system unstable’. On this board, it certainly does.

SATA cables + Boot disk to Intel controller

The manual recommended using the Intel RAID controller for OS disks (which I didn’t). So I swapped the SATA cable for a more expensive one (I found some postings of people reporting better stability with better SATA cables) and moved the boot disk to the Intel SATA controller.

These steps solved my instability with this board. On paper this board is the most awesome buy you could make (passively cooled, 12 SATA ports, quad-core Atom, 20 Watt); in reality it’s as picky as a spoiled toddler. Definitely not a buy. At ~€350 it is quite an expensive pain in the ass.

Introduction

Owncloud is pretty awesome: it gives me my files everywhere I want in the world. However, sometimes accessing my files is not so trivial. Think hotel lobbies and public access points; sometimes there are real restrictions on which ports can be used. By default my ISP blocks all server traffic below port 1024, which is in my opinion rather rude. I want my files! Luckily we can use the Amazon t1.micro (free tier) to solve this.

Preparing the Amazon image

So select a free tier Amazon t1.micro; this should be free for the first year, so no worries. As for configuration, open the SSH and HTTPS ports. Once the instance is running, log in as ‘ec2-user’ with your certificate file.
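Logging in with the key pair then looks roughly like this (the key file name and hostname below are placeholders; use the ones your EC2 console gives you):

```shell
# Restrict the key permissions (ssh refuses world-readable keys),
# then connect as ec2-user. Names are placeholders.
chmod 400 my-ec2-key.pem
ssh -i my-ec2-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```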

Introduction

Owncloud is simply amazing. It’s like a Dropbox at home.
For my NAS I will be running this program in a virtual machine instance, because I’ll be opening this machine up to the outside world. It’s also much easier to back up and dispose of.

The VMWare instance

Let’s start with configuring the VMWare instance. I’ll be using the Ubuntu LTS server edition for this instance, as it uses less system resources than a full desktop environment.

That’s about it. Now you can follow the http:///owncloud link and configure your Owncloud. You will need a MySQL database for this application.

Optional: Moving Owncloud to RAID1 share

I prefer to move my data and Owncloud to a network share backed by a RAID1 configuration, in case one of my automatic updates breaks the server.

Create a mount point for your data. I’ll be using \\192.168.1.10\owncloud as the share, with ‘www-data’ as the username, since Apache2 uses this user to read and write.
Create the account on the host system and create the share directory.
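A sketch of how the share could then be mounted on the Owncloud machine, assuming the \\192.168.1.10\owncloud share and the ‘www-data’ account from above (the mount point and options are examples; adapt them to your setup):

```shell
# Install CIFS support and mount the Samba share at an example mount point,
# mapping the files to the www-data user so Apache2 can read and write them.
sudo apt-get install cifs-utils
sudo mkdir -p /mnt/owncloud
sudo mount -t cifs //192.168.1.10/owncloud /mnt/owncloud \
    -o username=www-data,uid=www-data,gid=www-data
```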

Introduction

Just an introduction to one of my side projects.

One late evening I decided to get creative for a while. So I came up with the design for a semi-PaaS Windows 3.11 system.
Why?
– Because it’s fun. I’ve always loved legacy systems because of their simplicity. Simplicity which allows me to grasp the history of complex current generation systems. The main purpose would be to see if I can meld old technology together with new technology.
– It hasn’t been done before. At least not that I know of. And if I wanted to create an up-to-date system/design which would serve a business purpose, I would prefer to get paid for doing this. This is my spare time.
– Gaming. You have to admit it, old-school games are fun. Anyone can download and install DosBox and play Warcraft 2 offline. However, netplay on a server would be awesome.

Design

This is the initial design I’ve had in mind; it lacks quite a lot of advanced features. The goal is to use as many out-of-the-box components as possible. I don’t want to write my own servers or other components, as this would take a huge amount of time and would likely not scale at all.

RDP gateway

This gateway is an Amazon EC2 instance (t1.micro) configured with HAProxy to proxy RDP connections to each instance and to shield the node server from other external traffic. Each instance will receive RDP port 3500 + n to connect to.
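As a sketch, the HAProxy side of this could look like the snippet below. The instance names, internal IPs and ports are made up for illustration; this is not the actual config:

```
# haproxy.cfg sketch: forward external port 3500+n to instance n's RDP port.
listen rdp-instance-1
    bind *:3501
    mode tcp
    server instance1 10.0.0.10:3389

listen rdp-instance-2
    bind *:3502
    mode tcp
    server instance2 10.0.0.11:3389
```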

A reset of an instance wipes drive C: (and repairs it from the template) but should keep all data on the D: drive.

Node Manager

Installed on each node, this manager exposes JSON calls between the front-end component and the physical state of the system. It allows the GUI to send messages concerning:
a) System utilization
b) Instance management

Communication between node and front-end should be done over HTTPS and will use Apache2 to serve the HTTPS traffic.

Feasibility study

Study 1: RDP connection

Goal: Complete an RDP connection through the internet and see whether the performance of the RDP connection is good enough for a Windows 3.11 instance running at 800×600. This RDP connection should use the VirtualBox RDP capabilities (found in the extension pack).
Level: Critical
Status: Completed
Results: All objectives have been met.

Study 2: Clone template with VirtualBox

Goal: This test should create and maintain a new instance created from a previous Windows 3.11 instance (called template).
Level: Critical
Status: Ongoing

Study 3: Separate hosts on virtual LAN segment

Goal: This feasibility study should test that no traffic is possible between hosts configured in internal networking mode, preferably by using iptables and/or Coyote Linux to route network traffic.
Level: High
Status: Ongoing

Final notes

This system is far from perfect, and a lot of work needs to be done. I still need to confirm two feasibility statuses; if study 2 fails, this project will be scrapped.
This is a project done entirely in my spare time, so the release date will be when it’s done.

That’s it! Now log in to your Plex environment at http://:32400/manage. You also need a Plex account, but once you have one you can add libraries to your Plex server. I recommend the Ouya Plex client or RasPlex to connect.

Sickbeard

Almost finished now. For my series I like to use Sickbeard: an awesome tool that fetches metadata for your series and shows the quality and completeness of the series on your home NAS.

Before we can start with Sickbeard, you need the ‘python-cheetah’ module, which Sickbeard depends on.

sudo apt-get install python-cheetah

Let’s download the tarball (yet again, I don’t like Git for installations).

Sickbeard runs at http://:8080; from there you can configure your Sickbeard installation.

Transmission

Last service up is Transmission. Any good home NAS must have it: it’s the most awesome tool for scheduling torrents remotely.

By default it should be installed. For those who don’t have it:

sudo apt-get install transmission

To start Transmission I created a startup script that makes sure the service only runs once. As with the RDP environment, there is a chance that Transmission gets started twice due to session creation in RDP; a simple hack is a script that avoids this.
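The script itself isn’t shown above, so here is a minimal sketch of the idea: a small wrapper that checks with pgrep whether the program is already running before starting it (run_once is a hypothetical name; the actual script may differ):

```shell
#!/bin/sh
# Start a program only if no process with that name is running yet.
# run_once is a hypothetical helper name; adapt the program to taste.
run_once() {
    if pgrep -x "$1" > /dev/null; then
        echo "$1 is already running"
        return 0
    fi
    "$@" &
}

# Launch the Transmission GUI client at most once per session.
run_once transmission-gtk
```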

Introduction

Requirements: for the backup data I will be using a partimage file.

The OS of my NAS will be Xubuntu 14.04. This distro is fairly lightweight for a NAS system and gives me a sleek GUI. I could do without a GUI, but that makes some of the services quite ‘Spartan’ to handle. A NAS is not a production environment; I want to handle sudden events light, swift and simple. There’s no real point in debugging your NAS at 11:00 PM on a command line when you need to go to work at 5:00 AM.

For detailed instructions on how to install Xubuntu I’d like to refer you to Google.
The only thing that needs to change in the installation is logging in by default. This is a must: if you want to configure services that run at boot time with a GUI, you’ll need an active session to start these programs.

Let’s start with the basics (in case you didn’t download the latest updates while installing):

sudo apt-get update
sudo apt-get upgrade

Installing remote access (OpenSSH, XRDP)

Start by installing OpenSSH; this will be the backbone of our communication with the NAS server.

sudo apt-get install ssh

By default idle OpenSSH sessions time out. I don’t really like this, so I’ll be adding a ServerAliveInterval to the client configuration.

sudo nano /etc/ssh/ssh_config

And add the following line to it:

ServerAliveInterval 60

Next: I chose Xubuntu for a reason, because I want XRDP installed. Scarygliders has a neat install tool that works for all *buntu distros. I really recommend you use it. It takes quite some time and is as slow as a snail, but it works flawlessly.
Note: it should work for all Ubuntu-based distributions; however, for Lubuntu and Bodhi it doesn’t seem to work very well. Xubuntu gave me a near perfect XRDP session.

I don’t like having git installed on this system, so I’ll just grab the master.zip and unzip it.
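Grabbing the zip looks something like this (this assumes the Scarygliders X11RDP-o-Matic repository on GitHub; check the project page for the current URL and directory name):

```shell
# Download and unpack the master branch as a zip archive, no git needed.
wget https://github.com/scarygliders/X11RDP-o-Matic/archive/master.zip
unzip master.zip
cd X11RDP-o-Matic-master
```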

Migrating data from old drive

This part assumes you took a backup of the old drive with partimage. If you don’t have any data to migrate, you can skip this step.

First install partimage to be able to restore the data.

sudo apt-get install partimage

Next find the block count of the partition you wish to restore. You can get it by using fdisk and dividing the reported size by 1024. Add some extra blocks, as this result isn’t 100% exact. In my case the old backup disk was /dev/sdh1.

sudo fdisk -l /dev/sdh1

Next create an empty image file with the disk size found in the previous step.

dd if=/dev/zero of=restore.img bs=1024 count=31719727

Associate this empty disk image with a loopback device (loop0).

sudo losetup /dev/loop0 restore.img

Now you can restore the image with partimage. In my case my backup image is called ‘image.000’ and resides on a disk mounted on: ‘/media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4’
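The restore itself is then something along these lines, using the paths from my setup above (yours will differ):

```shell
# Restore the partimage backup onto the loopback device in batch mode.
sudo partimage -b restore /dev/loop0 \
    /media/nas/05885c86-ae41-4839-b0dc-f1282c59dea4/image.000
```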

Migrating MySQL data (optional)

Sometimes a MySQL dump isn’t available. Luckily, all data can be migrated from an old installation. In this example the disk is mounted on ‘/media/nas/backup/’. If you don’t have any old MySQL data to migrate, skip this step.

During this install you will be asked for a root password for the MySQL server.
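A sketch of what the raw-file migration might look like, assuming the old data files live under /media/nas/backup/var/lib/mysql (that subpath is my guess; adapt it to where your old root filesystem is mounted):

```shell
# Install MySQL, stop it, copy the old data files over, and fix ownership.
sudo apt-get install mysql-server
sudo service mysql stop
sudo cp -a /media/nas/backup/var/lib/mysql/. /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start
```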

The problem with VMWare 10.0.0 and Linux kernel 3.13 is that it just won’t work. As Xubuntu 14.04 uses this kernel, this system suffers from the same error. A patch can be found at:
Below is the content of the page (in case it vanishes):

# Change directory into the vmware module source directory
cd /usr/lib/vmware/modules/source
# untar the vmnet modules
tar -xvf vmnet.tar
# run the patch you should have just saved earlier
patch vmnet-only/filter.c < ~/vmnet313.patch
# re-tar the modules
tar -uvf vmnet.tar vmnet-only
# delete the previous working directory
rm -r vmnet-only
# run the vmware module build program (alternatively just run the GUI app)
/usr/lib/vmware/bin/vmware-modconfig --console --install-all

Tip: when running VMWare images on a CPU that scales its frequency, your clock might drift a little if you don’t install the VMWare tools.
A little workaround is to add an ntpdate call to your cron jobs (my example uses the Belgian Telenet NTP server in Brussels).

sudo crontab -e

00 1 * * * ntpdate ntp.telenet.be

There you go, one fresh NAS server ready to serve your content and configured to add more scalable services.

Introduction

So in the past I’ve always assembled NAS devices from old PCs. My previous NAS was built around an old Intel Q6600 processor, which has a thermal design power of 105 Watt. Ouch! Time to upgrade to something much more power efficient: http://ark.intel.com/products/77987 is a little octa-core Atom with a 20 Watt TDP.

Hardware

Setup

My data disks will be set up according to the following specification:
– one RAID5 array (three 3TB drives) for semi-critical data;
– one RAID1 array (two 1TB drives) for critical files that I never want to lose (backups);
– one 3TB disk for files that may be lost (virtual machines and such).

The OS will be installed on the SSD to provide a fast and stable system. Right now I’ve also added an old 1.5TB disk drive to the system; in the future I want to replace it with another 3TB drive when the need for extra RAID5 space arises.

All RAID configurations will be handled by software RAID. There is no sane reason to spend a lot of money on hardware RAID for a home NAS system. My system with software RAID performs at at least 83MB/s (picture taken when the system was done, copying from an external disk to the NAS).
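For completeness, here is a sketch of how the arrays described above could be created with mdadm. The device names below are placeholders; check yours with lsblk or fdisk first:

```shell
# Example only: device names are placeholders, verify before running.
sudo apt-get install mdadm
# RAID5 across the three 3TB data disks
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# RAID1 across the two 1TB backup disks
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf
```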

Building the system

I don’t really want to explain this part; it’s pretty straightforward. If you can’t read a manual or build your own system, then I recommend closing this page and buying a Synology instead. But for those interested, here are some juicy pictures of me building my NAS.

Unpacking the little motherboard (yes, a passively cooled octa-core).

Clearing out my old NAS system (the Q6600 PC).

The final result before closing the case. I simply love the five hot-swappable bays.

Booting the NAS with my little 10″ debug screen. One of the best purchases I’ve ever made.