Two-factor authentication is a requirement of PCI-DSS (the Payment Card Industry Data Security Standard), and an SSH key with a password is not always deemed an acceptable form of two-factor authentication, so there is now a surge in different forms of two-factor auth, each with its own pros and cons.

For a small business or ‘prosumer’ (professional consumer) the market incumbent (RSA) is not a viable option, due to the price of the tokens and the software/appliance that is required. There are cheaper (or free!) alternatives, of which two that I’ve used are Google Authenticator and Yubikey.

Google Authenticator is an OATH-TOTP system that, much like RSA, generates a one-time password every 30 seconds. It’s available as an app for the big three mobile platforms (iOS, Android and BlackBerry).

Yubikey is a hardware token that emulates a USB keyboard and, when its button is pressed, generates a one-time password. It is supported by services such as LastPass.

Both solutions ship with their own PAM modules. Installing either is simple, but what happens if you want to use both, yet only require one of them?
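Both modules can be stacked in PAM to get exactly that behaviour. A sketch of the stack for sshd follows; the module arguments (such as the Yubico client id and the module paths) are illustrative and will vary with your installation:

```
# /etc/pam.d/sshd (sketch)
auth required   pam_unix.so
auth sufficient pam_yubico.so id=16
auth required   pam_google_authenticator.so
```

Because pam_unix is `required` rather than `requisite`, a wrong password still falls through to the OTP prompts; pam_yubico is `sufficient`, so a valid Yubikey OTP short-circuits the Google Authenticator prompt.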

In the above example the user must enter a password and then provide either their Yubikey or their Google Authenticator code.

Should the password be incorrect, the user will still be prompted for their Yubikey or Google Authenticator code, but will then fail. Should they provide a password and then their Yubikey, they will not be asked for their Google Authenticator code. Should they provide a password and no Yubikey, they will be prompted for their Google Authenticator code!

(Disclaimer – I’m a Rackspace employee; the postings on this site are my own, may be biased, and don’t necessarily represent Rackspace’s positions, strategies or opinions. These tests were performed independently of my employer, by myself.)

As Rackspace have recently launched a ‘beta’ Cloud Files service within the UK, I thought I would run a few tests to compare it to Amazon’s S3 service running from Ireland.

I took a set of files totalling 18.7GB, with sizes ranging between 1KB and 25MB: mainly photos (both JPEG and RAW, from Canon and Nikon), plain text files, gzipped tarballs and a few Microsoft Word documents for good measure.

The following python scripts were used:

Cloud Files
Upload

```python
import cloudfiles
import sys, os

api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

# Read the list of local files to upload from stdin
local_file_list = sys.stdin.readlines()

cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)
containers = cf.get_all_containers()

# Find the destination container
for container in containers:
    if container.name == dest_container:
        backup_container = container

def upload_cf(local_file):
    u = backup_container.create_object(local_file)
    u.load_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_cf(local_file)
```

Download

```python
import cloudfiles
import sys, os

api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

# Set up the connection
cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)

# Get a list of containers
containers = cf.get_all_containers()

# Find the container
for container in containers:
    if container.name == dest_container:
        backup_container = container

# Create the container if it does not exist
try:
    backup_container
except NameError:
    backup_container = cf.create_container(dest_container)

# We've now got our container; get a file list and download each object
def build_remote_file_list(container):
    remote_file_list = container.list_objects_info()
    for remote_file in remote_file_list:
        f = open(remote_file['name'], 'w')
        rf = container.get_object(remote_file['name'])
        print remote_file['name']
        for chunk in rf.stream():
            f.write(chunk)
        f.close()

remote_file_list = build_remote_file_list(backup_container)
```

S3
Upload

```python
from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()

# Find the destination bucket
for container in buckets:
    if container.name == dest_container:
        backup_container = container

# Read the list of local files to upload from stdin
local_file_list = sys.stdin.readlines()

def upload_s3(local_file):
    k = Key(backup_container)
    k.key = local_file
    k.set_contents_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_s3(local_file)
```

Download

```python
from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()

# Find the bucket
for container in buckets:
    if container.name == dest_container:
        backup_container = container

# Download every key in the bucket to the current directory
def build_remote_file_list(container):
    remote_file_list = container.list()
    for remote_file in remote_file_list:
        print remote_file.name
        f = open(remote_file.name, 'w')
        rf = container.get_key(remote_file.name)
        rf.get_file(f)
        f.close()

remote_file_list = build_remote_file_list(backup_container)
```

The test was performed from a Linux host in London with a 100Mbit connection (uncapped/unthrottled); it was also performed, with almost identical results, from a machine in Paris (also 100Mbit). Tests were additionally run from other locations (Dallas Fort Worth, Texas, and my home ISP, bethere.co.uk), however these were limited to 25Mbit and 24Mbit respectively, and both reached their maximum speeds. The tests were as follows:

Download files from Rackspace Cloudfiles UK (these had been uploaded previously) – This is downloaded directly via the API, NOT via a CDN

Upload the same files to S3 Ireland

Upload the same files to a new “container” at Rackspace Cloudfiles UK

Download the files from S3 Ireland – This is downloaded directly via the API, NOT via a CDN

The average speeds for the tests are as follows:

Cloud Files
Download: 90Mbit/s
Upload: 85Mbit/s

S3 Ireland
Download: ~40Mbit/s
Upload: 13Mbit/s
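For reference, these Mbit/s figures are just bytes moved over wall-clock time; a small helper (my own, not part of the test scripts above) shows the conversion:

```python
def throughput_mbit(total_bytes, seconds):
    # bits transferred divided by seconds, scaled to megabits per second
    return (total_bytes * 8) / (seconds * 1e6)

# At 90Mbit/s, the 18.7GB test set takes roughly half an hour:
# (18.7 * 1024**3 * 8) / (90 * 1e6) is about 1785 seconds, i.e. ~30 minutes
```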

Observations

Cloud Files seems to be able to max out a 100Mbit connection for both uploads and downloads

S3 seems to have a cap of 13Mbit for inbound file transfers?

For downloads via the API, S3 seems to be either extremely unpredictable on transfer speeds, capped after a certain amount of data has been transferred, or subject to congestion on the AWS network

Below is a graph showing the different connection speeds achieved using CF & S3

As mentioned before, this is a very unscientific test (and these results have not been replicated from as many locations, or as many times, as I’d like, so take them with a pinch of salt), but it does appear that Rackspace Cloud Files UK is noticeably faster than S3 Ireland.

At the new year I decided that I was fed up with my main Unix server acting as a router (amongst other things) and decided to bite the bullet and get a full-blown router. Herein lay a dilemma: being a geek, I couldn’t settle for an unhackable “home” router. This instantly ruled out most of the commercially available routers, barring those that run OpenWRT. Now don’t get me wrong, OpenWRT is more than capable, but I just didn’t feel like worrying about hardware support, fighting with iptables, and buying hardware that probably wouldn’t scale. Before anyone starts thinking “Scaling? But this is for a home connection!”, that is true. However, I sync my DSL at the full 24244kbps downstream and 2550kbps upstream (I live under 200m from the exchange according to my line attenuation; my ISP also doesn’t cap bandwidth, and allows FastPath and similar to be enabled. Go BeThere!). Also, at the time I was seriously considering investing in a secondary connection for additional bandwidth. This meant that I was left with a few choices:

Build my own. Use something like an ALIX or Soekris board running FreeBSD (or something with a web GUI for when I feel rather lazy, such as m0n0wall or pfSense, both of which I’ve used previously with great success).

Cisco. Yes, the 800-pound gorilla. A ‘cheap’ 1800 or similar was going to set me back about £400, however it would have provided most of what I needed.

RouterBoard. These were, to me at least, relatively unknown. I originally looked at them for building my own system, and then discovered that RouterOS came with the boards. This was an instant sale.

After my first look at RouterOS I was basically sold. The main reasoning was that it is a commercial Linux distribution that actually works well as a router, and ships with both a CLI (Nortel-esque in this case) and a *shock* GUI application. It also met my main criteria:

Support for 802.1Q. I have multiple VLANs at home, so support for dot1q was a necessity.

Support for 802.3ad. As I have a few machines connecting via the router I needed the throughput, and since I don’t have gigabit switching, LACP support was a necessity.

Support for wireless. All good home routers (even geeky ones) need support for 802.11a/b/g.

Support for sub-SSIDs. Related to the above, I didn’t want to need seven wireless cards for my various networks.

Support for WPA2-PSK and WPA2-EAP. I use RADIUS to authenticate all my personal stations against a central authentication system, but I don’t want to have to add guests to this, so PSK should also be supported.

Support for OpenVPN. I don’t like having my traffic to/from home going in the clear at all, so I needed to be able to connect via a VPN of some sort. My preference is OpenVPN for client-to-site VPNs (site-to-site is still IPSec, which leads on to the next point).

Support for IPSec. I connect to various friends’ networks and, yet again, don’t want this sort of traffic in the clear; we settled on standard IPSec (3DES/MD5) a while back.

Support for “unlimited” firewall rules. This may sound silly, but anyone who has worked with the low-end SonicWALLs will know what I mean: only being able to add 20 rules is EXTREMELY restrictive, especially with multiple VLANs! (I’ve got roughly 300 rules.)

Support for setting DHCP options. I use VMware ESX at home for my test lab, so I need to be able to configure the DHCP server to send the correct options for PXE (or gPXE).

Quick booting. As silly as this may sound, I don’t want boot times upwards of 30 seconds for my router.

Support for bridging of interfaces with firewall rules. This one is rather self-explanatory!

Support for UPnP. Let’s face it, UPnP is required for any form of voice/video chat these days over the main IM networks (YIM/AIM/MSNIM).

Support for NetFlow or similar. This one is a nice-to-have, as I like to use flow-tools to get a rough idea of what type of traffic is flowing through my network.

Support for traffic shaping. Ah yes, the holy grail of routers. Unfortunately the likes of tc on Linux requires a degree in astrophysics to get working how you’d like!

Easy configuration.

After discovering (via the x86 installable version and the demo units) that RouterOS would let me do all of the above, I decided to give it a whirl.
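As a flavour of the configuration, the dot1q and bridging requirements above map onto RouterOS CLI commands along these lines (the interface names, VLAN ID and rule are made up for illustration):

```
/interface vlan add name=vlan10 vlan-id=10 interface=ether2
/interface bridge add name=br-lan
/interface bridge port add bridge=br-lan interface=vlan10
/ip firewall filter add chain=forward in-interface=vlan10 action=accept
```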

Recently I’ve become more and more annoyed with my SKY-HD box’s disk spinning up and down, followed by the power apparently being cut to the drive, which produces a rather loud click. Not a problem if you’re watching TV, as this only occurs when the box is in standby; very annoying if you’re having problems sleeping and the thing is going clunk every 30 minutes or so. I’ve been told there is a disk spin-down setting I can change somewhere on the box, however this doesn’t appear to have made any difference. Another issue compounding the annoyance is that the SKY-HD box is almost impossible to use with a single tuner.

I decided to resurrect my HTPC and attempt to get SKY going into that. There were four major requirements for this:

1. Has to be able to play content – I pay a silly amount a month just for 3 HD channels (BBC, Discovery and History) :: This meant that a DVB-S2 receiver was required

2. Has to be able to decode pay-for channels – I pay a subscription to them, and I’ll be damned if I don’t get my channels! :: This meant that either a SoftCAM, or a CI slot and CAM, were required

3. Has to be local to the machine – I want a raw MPEG2/H.264 stream going to the media PC without any additional transcoding; also, one less set of CPUs is a Good Thing™ (this isn’t a poke at any specific Linux-based satellite receiver) :: This meant internal cards or locally attached devices (USB2/FireWire)

4. The HTPC must be running software that can play my videos – I don’t want to have my Popcorn Hour AND an HTPC to do my video :: This meant using a media-centre type application, which excludes Microsoft’s Windows Media Centre, as it doesn’t play MKVs/OGMs etc.

Relatively small requests one would think, but apparently not! I was left with a few choices for the card, however the one that came out top was the Digital Everywhere FloppyDTV/S2. This meets requirements 1 & 3 by decoding DVB-S2 signals and sending data over the FireWire bus.

In order to meet requirement 2 I opted for the “Dragon CAM” (specifically the T-Rex 4.1). This is a Conditional Access Module which, along with a valid SKY viewing card, performs the VideoGuard (NDS) decryption. This does have one annoying caveat: the smartcard must go into a SKY box every 4 to 6 weeks to have a “new installation” done, as the CAM will not rewrite the new decryption codes to the card.

Infinity Unlimited USB card programmer @ £60 – this was required to do the initial loading of the T-Rex CAM, however it can be returned/resold/similar, as it’s a one-off requirement

So all in all, £220 to view/record on a media PC. This is for a single tuner only, as I don’t have access to multiple drops from the building’s satellite distribution system (which is rather amusing, as these are “executive” flats built in the last 3 years, and yet each flat only has a single drop for satellite). Multiple drops can be done using a SoftCAM, where the CAM is replaced with a USB smartcard programmer, of which only one is required: the first channel would be £220, but each after that would be £160 (or £130 if an internal card were used). Of course, the legality of using a SoftCAM is extremely questionable, whereas a non-official SKY receiver is only marginally so.