Currently, my blog is served through WordPress. Visitors either enter the site through the redirect at http://www.antonyjepson.com or by web searches. While WordPress is a simple way to maintain a blog, at times I would like to have a bit more control over my content in the event WordPress disappears or my credentials are hacked.

What follows is an investigation into self-hosting alternatives.

My baseline requirements were as follows:

Hosted in the EU

Easy off-site back up

Embedding of images permitted

At least 32GB of space for content

Thorough documentation of the service used

Basic analytics

Light-weight and fast loading pages

Reduced susceptibility to DDoS attacks

$10.00 / mo maximum base price

Supported in mobile / desktop web

My stretch requirements were:

Replicated across the 7 continents (in the event a post becomes popular).

Moderated comments

A/B testing on post content

Encrypted access via SSL

Starting with the hosting infrastructure, I considered various options.

Hosting

First was Amazon Elastic Compute Cloud (EC2), which I am very familiar with and have used for years. Referencing the EC2 instance comparison chart for the Ireland region, a small T2 instance came to $166.44 annually when reserved upfront for the year. It didn’t come with storage, so a 32GB general purpose EBS volume for one year adds $0.11 / GB-month * 12 months * 32GB = $42.24 annually. Additionally, I would need to create a snapshot every two weeks, keeping two months of cumulative backups (eight snapshots retained), which would cost at worst $0.05 / GB-month * 8 snapshots * 32GB * 12 months = $153.60 annually. Network I/O on Amazon is sufficiently cheap (metered at the 10 TB / month scale) that it can be left out of the calculation. The final cost comes to roughly $362 annually, or about $30 / mo; halving the snapshot retention would only shave off a few dollars a month. Clearly, this is quite a bit above my target of $10.00 / mo.
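As a sanity check, the estimate can be recomputed strictly from the quoted unit prices and the 32GB figure from my requirements:

```shell
# Recomputing the EC2 estimate from the quoted prices (all figures taken from above)
instance=166.44                                                  # 1-year reserved T2, upfront
ebs=$(awk 'BEGIN { printf "%.2f", 0.11 * 32 * 12 }')             # $0.11/GB-month * 32 GB * 12 months
snapshots=$(awk 'BEGIN { printf "%.2f", 0.05 * 32 * 8 * 12 }')   # 8 retained 32 GB snapshots
total=$(awk -v i="$instance" -v e="$ebs" -v s="$snapshots" 'BEGIN { printf "%.2f", i + e + s }')
echo "EBS: \$$ebs  Snapshots: \$$snapshots  Total: \$$total / year"
```

That works out to roughly $30 a month, which is what rules EC2 out here.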

Next, I considered hosting it on Windows Azure. Looking at the virtual machine categories, the A1 instance seemed sufficient, costing $20.83 monthly. This tier comes with two disks: a 20GB operating system disk and a 70GB temporary storage disk. Evidently, the temporary storage disk would not be a safe location for the blog, as its contents can be lost on termination or other failure.

While Windows Azure seemed tempting, I wanted to drop the price even further. The next choice was Digital Ocean – well known for their ‘droplets’. Unfortunately, the full pricing scale was behind a sign-in page. On the public-facing side, $10 monthly would secure a droplet (instance) with 1GB memory, 1 core processor (of unspecified type), 30GB SSD, and 1TB transfer. While this certainly seemed like the best option so far, I wanted to make sure I evaluated five options.

A great alternative is actually using a static site generator and hosting the website on Amazon S3. This means there are no security updates to worry about. Unfortunately, it would require me to run the site generator locally on my computer and back up my blog manually. The cost for Amazon S3 in Europe is $0.03 / GB-month, so 32GB would run me $0.96 / mo. GET requests are charged at $0.005 per 1,000. A scenario I used to evaluate the price: if one of my posts went viral and got 30,000 viewers in one day, with 3 objects per page view, that is 90,000 GET requests * $0.005 / 1,000 = $0.45, and at 28kB per page the data transfer (under 1GB) adds only a few cents. Not bad!
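The same back-of-the-envelope sums for the viral-day scenario, in shell:

```shell
# S3 costs for the viral-day scenario described above (3 objects per view, 30,000 views)
storage=$(awk 'BEGIN { printf "%.2f", 32 * 0.03 }')              # 32 GB at $0.03/GB-month
gets=$(awk 'BEGIN { printf "%.2f", 3 * 30000 * 0.005 / 1000 }')  # GET request charges
echo "Storage: \$$storage / mo  GET requests: \$$gets"
```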

The final option investigated was GitHub Pages. This lets you host a website directly from your public GitHub repository. I am also familiar with GitHub. While it is free, this does not let me select the region for hosting the page. Therefore, this was not a valid option.

After considering all the options, I decided to move forward with static hosting on Amazon S3, with a backup at https://antonyjepson.wordpress.com in the event it went down or, worst case, I could no longer pay.

Now, let us look at the blogging platform choices.

Blogging platform

As much as WordPress gets a bad rep for not being a light-weight place to host content, it has millions of monthly active users. For the new hosting engine, I wanted something simple, light-weight (so it could run on a cheap albeit underpowered virtual machine), and relatively secure. Furthermore, I wanted the resulting content to be performant on both mobile and desktop, with mobile being the primary form factor. Finally, as a stretch goal, commenting would be great and would attract recurring viewers to my site.

I first considered Jekyll as the site generator. At a high level, it takes a text file and processes it into a webpage. While I have a more than adequate understanding of HTML and CSS, dealing with their finer points would definitely detract from writing quality content. Jekyll lets me write posts in a blog-oriented markup language like Textile, and cross-references are regenerated each time the content is converted and published to the web.
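A minimal sketch of that workflow, assuming a standard Jekyll directory layout (the filename, date, and front matter here are illustrative):

```shell
# Sketch: draft a Textile post in the _posts directory Jekyll expects
mkdir -p _posts
cat > _posts/2015-09-01-hello.textile <<'EOF'
---
layout: post
title: "Hello"
---
h1. First post

Written in Textile; Jekyll converts this to HTML on each build.
EOF
# jekyll build   # would compile _posts/ into the static _site/ directory
ls _posts
```

The generated _site/ directory is what would be synced up to S3.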

The second alternative was Hugo. Hugo specialises in partial ‘compilation’, compared to the monolithic compilation offered by Jekyll. While this might be great if I had thousands of pages in my blog, I anticipate it growing to the low hundreds, so I don’t think it makes sense to deviate from the best-supported option.

Based on the above, I opted to go for Jekyll.

Moving forward

This is not a simple process and some additional voodoo will likely be required to enable SSL support and commenting support (likely with Disqus). Expect changes on http://www.antonyjepson.com over the coming weeks.

With 1080p (and in some cases 2K) cameras now being standard on mobile phones, it’s easier than ever to create high quality video. Granted, the lack of quality free video editors on Windows / Linux leaves something to be desired.

I played with Blender VSE (Video Sequence Editor) to try and create a montage of my most recent motorcycle rides but the interface was non-intuitive and had a rather high learning curve.

So, I turned to the venerable ffmpeg to create my video montage.

Selecting the source content
Before jumping to the command line, you will need to gather the list of clips you want to join and have a basic idea of what you want to achieve. Using your favourite video player (VideoLAN Player, in my case), play through your captured videos and find the timeframe for trimming.

Applying effects
Note: If you want to speed up processing time while you get the hang of this, you can scale the videos down and then apply the effects to the full size video once you’re satisfied with the output.

$ ffmpeg -i ./output-1.mov -vf scale=320:-1 ./output-1s.mov

The -1 in the scale filter means the height is determined from the aspect ratio of the input file.

First, I’ll show how to add the effects individually (at a potential loss of quality). Then we will follow up by chaining the filters together.

Unfortunately, as H.264 does not support alpha transparency, we will need to use the filtergraph to let us apply alpha (for the fading) to the stream before outputting to the final video. First, let’s rebuild the above command as a filter graph.
$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -filter_complex '[0:v]fade=t=out:st=45.0:d=2.0[out1];[1:v]fade=in:st=0.0:d=2.0, fade=t=out:st=37.0:d=2.0[out2];[2:v]fade=in:st=0.0:d=2.0[out3]' -map '[out1]' ./output-1sf.mov -map '[out2]' ./output-2sf.mov -map '[out3]' ./output-3sf.mov

This uses the filter_complex option to enable a filtergraph. First, we list the inputs. Each input is handled in order and can be accessed with the [n:v] operator, where ‘n’ is the input number (starting from 0) and ‘v’ means access the video stream. As you can tell, the audio was not copied from the input streams in this command. A semicolon separates parallel operations and a comma separates linear operations (operating upon the same stream).
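What remains is joining the three faded clips into the montage. A sketch using ffmpeg’s concat demuxer (assumes the intermediate files produced above and that all clips share identical codec parameters; re-encode instead of using -c copy if they do not):

```shell
# Sketch: list the faded clips, then join them without re-encoding
printf "file '%s'\n" ./output-1sf.mov ./output-2sf.mov ./output-3sf.mov > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy ./montage.mov
```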

I recently damaged my Windows 7 installation while upgrading to Windows 10. The root cause was my dual-boot configuration with Gentoo Linux. Because the EFI partition was on a drive separate from the drive containing Windows (C:\), the installation failed. The error message was entitled “Something Happened” with the contents “Windows 10 installation has failed.” This took a lot of time to debug, and unfortunately, using Bootrec and the other tools provided on the Windows 10 installation medium did not resolve the issue.

Here are the steps I followed to back up my Linux data.

Created an LVM (Logical Volume Management) snapshot volume, providing a stable image for the backup. The -L parameter specifies how much space to set aside for filesystem writes that happen during the backup.

# lvcreate -L512M -s -n bk_home /dev/mapper/st-home

Mounted the snapshot volume. As I was using XFS, I needed to specify the nouuid option or the mount would fail with a bad superblock.

# mount /dev/st/bk_home /mnt -onouuid,ro

Used tar to back up the directory and piped the output to GPG to encrypt the contents (as this was going to an external HDD not covered by my LUKS encrypted volume). Because this backup was only stored temporarily, I opted for symmetric encryption to simplify the process.

# tar -cv /mnt | gpg -c -o /media/HDD/st-home.tar.gpg
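To restore later, the process reverses; the snapshot should also be removed once the backup is verified, since it consumes pool space while it exists (a sketch; the target path is illustrative):

```shell
# Sketch: decrypt and unpack the backup, then drop the snapshot
gpg -d /media/HDD/st-home.tar.gpg | tar -xv -C /restore/target
umount /mnt
lvremove /dev/st/bk_home   # snapshots consume pool space until removed
```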

Short post: when you agree to the terms and conditions of Google-sponsored WiFi (e.g. at Starbucks), your DNS resolution settings are updated to point to Google’s DNS servers. While this does result in hands-off protection from malicious websites, it also enables Google to track your browsing habits and gather a large representative sample of the habits of people who use that particular WiFi network.

In Linux, look at your /etc/resolv.conf to determine if your DNS server has changed. Google’s servers are: 8.8.8.8 and 8.8.4.4.
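The check can be scripted with a small helper (the check_dns name is my own):

```shell
# Flag Google DNS in a resolv.conf-style file
check_dns() {
  grep -Eq '^nameserver (8\.8\.8\.8|8\.8\.4\.4)' "$1" 2>/dev/null \
    && echo "Google DNS in use" || echo "Other DNS in use"
}
check_dns /etc/resolv.conf
```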

I recommend checking this file each time you connect to a public WiFi network.

I have not upgraded my computer systems for a while and have been using a combination of Windows 7 and Mac OS X as my main OSes. Recently, I had the opportunity to purchase a Dell M6800. Below is a walkthrough of how I got Arch Linux configured on this monster of a machine.

Partitioning

This computer was configured with two hard discs: a 512GB SSD and a 512GB HDD. The first was used as the boot drive. It had been a long time since I configured a hardware (not VM) Linux machine from scratch, so I spent several hours selecting the optimal partition layout to set me up for the next 5 years. I wanted a partition layout that was easy to reconfigure, hard to configure improperly, and encryptable.

Before even considering the layout, I had to choose which partition table format I would go for. In the past, I would opt for the Master Boot Record (MBR) format which gave me interoperability with Windows. Now, with Windows 8 and beyond requiring UEFI support for Windows 8 certified PCs, there is no real reason to stick with MBR. For this machine, I selected the GUID partition table (GPT).

With that in mind, I considered the following scenarios:

(1) Simple scheme using GPT
In the past, due to the requirement of dual-booting for university, I had opted for an MBR configuration. For this machine, I considered the following layout.

The benefit of this layout was its simplicity. The drawbacks: it could not easily be modified in the future without copying data to an external disc and back. Also, it would be a waste of space if I did not use Windows.

Furthermore, with /home and / separated, I would have to set up encryption twice so I could make mistakes that would render encryption useless or open up attack opportunities.

(2) Linux Unified Key Setup upon Logical Volume Management (LUKS upon LVM)
This setup would give me all the flexibility of LVM with the added benefit of encryption. This means I could extend a logical volume within Linux across multiple drives in the event my SSD ran out of space or, say, I wanted to implement a RAID configuration. Unfortunately, this would again require multiple partitions and key configurations, which would be cumbersome to manage.

(3) LVM upon LUKS
This setup would prohibit me from doing the above in (2), namely, spreading out partitions across physical media. However, it would be the easiest to configure and would give me encryption across my entire drive. Because I had an SSD, I was not too concerned about any r/w performance penalty that I would likely encounter having all these abstractions in place. Here is the partition scheme I opted for, using the GPT.
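The scheme can be sketched with the standard tools (the device name and volume sizes here are illustrative; the volume group name matches the vg0 used in the mkfs step below):

```shell
# Sketch of LVM upon LUKS: one encrypted container holding all logical volumes
cryptsetup luksFormat /dev/sda2      # encrypt the main partition
cryptsetup open /dev/sda2 cryptlvm   # unlock it as /dev/mapper/cryptlvm
pvcreate /dev/mapper/cryptlvm        # LVM physical volume inside the container
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -L 64G -n root vg0
lvcreate -L 16G -n tmp vg0
lvcreate -l 100%FREE -n home vg0     # home takes the remaining space
```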

Create the file systems
In the past, I would have gone for ext4, but I wanted to make sure I was taking advantage of my SSD (despite the performance penalty from LVM and encryption), so I went with XFS.
#> mkfs.xfs /dev/vg0/root
#> mkfs.xfs /dev/vg0/home
#> mkfs.xfs /dev/vg0/tmp

So, now I have achieved my initial scenario. The Arch Linux Installation guide shows how to mount the file system and install packages as appropriate.

Other Callouts on the Dell M6800

– At times, I encountered some rather scary looking write errors (likely due to the write scheduler being used). I prevented further occurrences by adding libata.force=noncq to my GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerating the configuration.
– I did not remember to install wpa_supplicant to connect to WiFi so I had to procure an Ethernet cable to download it before configuring netcfg to connect to my home network.
– Audio was enabled on Firefox by installing its optional dependencies (use pacman -Qi firefox to list them) and installing pulseaudio and pulseaudio-alsa. Remember to turn off the suspend-on-idle module to prevent pops when playing videos.
– The included Nvidia graphics card is meant for an Optimus configuration, so use the Intel supplied graphics card as the default. You can use bumblebee to offload 3D rendering applications to the Nvidia graphics card.
– Not Dell specific but I needed to remember to update my resolv.conf when connecting to OpenVPN servers. This fixed my DNS resolution issues.
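For reference, the NCQ workaround from the first callout can be applied like so (a sketch; paths as on Arch with GRUB 2):

```shell
# Sketch: append the parameter to the default kernel command line,
# then regenerate the GRUB configuration
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 libata.force=noncq"/' /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg
```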


Apple has introduced a number of positive changes with iOS 9. This post details the items I have found through ad-hoc usage.

– Mail app shows icons for swipe actions
– Selecting and deselecting text now has animations
– Passbook has been renamed to Wallet
– Swiping down from the home screen has more searchable features; equivalent to swiping left on the home screen
– Low battery mode
– Double-tap home button shows new switching mode
– Settings app now has search
– Apple Music is now default on US builds
– Maps has transit directions for some markets
– Context menus are now more Palm-esque, as in, do not extend across the screen
– More sharing options
– Notes supports additional media
– Recommended apps to use now shows up reliably on the lock screen

I needed a simple calendar to track some projects and could not find one online that matched my requirements without a watermark. I spent a few minutes creating a minimalist calendar for 2015 (including the months already passed).

Feel free to use it to track your projects and let me know if it was useful.

For small personal projects I often use git to track my work. Sometimes, I’ll work from a different computer and wish I could clone the repository and continue where I left off. I recently set up a git server that allowed me to do this and all I needed was ssh access.

Provided you have a remote server located at gitserver with user admin, you can set one up by doing the following.
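Given that server, a sketch of the setup (the repository path is illustrative):

```shell
# On the server: create a bare repository to push to
ssh admin@gitserver 'git init --bare ~/projects/myproject.git'

# Locally: add it as a remote and push the existing history
git remote add origin admin@gitserver:projects/myproject.git
git push -u origin master

# From any other machine with ssh access:
git clone admin@gitserver:projects/myproject.git
```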