Meta

Category: Software

A while back I mentioned a follow-up to an old blog post about the RouterBoard that I'd recently purchased and set up for home use. This is a very belated update on that board.

My requirements have changed since the original post, but not dramatically: the requirement for LACP has disappeared, IPsec is no longer used, but a requirement for dynamic routing has appeared.

All in all, I have to say that I still cannot recommend RouterOS enough. I've been using it for the past 4 1/2 years, and have recommended it to a large number of people.

The main reason is that it just works: if you're running the stable release, there's not really any faffing about to be done.

Feature-wise, it's right up there with some of the big brands (Cisco, Juniper et al.), though it's fair to say without the same price tag.

(Disclaimer – I'm a Rackspace employee. The postings on this site are my own, may be biased, and don't necessarily represent Rackspace's positions, strategies or opinions. These tests have been performed independently of my employer, by myself.)

As Rackspace have recently launched a 'beta' Cloud Files service within the UK, I thought I would run a few tests to compare it to Amazon's S3 service running from Ireland.

I took a set of files totalling 18.7GB, with sizes ranging between 1KB and 25MB. The contents were mainly photos (both JPEG and RAW, from Canon and Nikon cameras), plain text files, gzipped tarballs and a few Microsoft Word documents for good measure.

The following python scripts were used:

Cloud Files
Upload

import cloudfiles
import sys, os

api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

local_file_list = sys.stdin.readlines()

cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)
containers = cf.get_all_containers()

for container in containers:
    if container.name == dest_container:
        backup_container = container

def upload_cf(local_file):
    u = backup_container.create_object(local_file)
    u.load_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_cf(local_file)
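The upload script expects the list of local files on stdin, one path per line (for example, piped in from `find /path -type f`). As a sketch, an equivalent list could also be built directly in Python with os.walk; `build_local_file_list` is a hypothetical helper for illustration, not part of the original script:

```python
import os

def build_local_file_list(root):
    # Collect every file under root, mirroring what
    # `find root -type f` would pipe into the script.
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    return paths
```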

Download

import cloudfiles
import sys, os

api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

# Set up the connection
cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)

# Get a list of containers
containers = cf.get_all_containers()

# Let's set up the container
for container in containers:
    if container.name == dest_container:
        backup_container = container

# Create the container if it does not exist
try:
    backup_container
except NameError:
    backup_container = cf.create_container(dest_container)

# We've now got our container, let's get a file list
def build_remote_file_list(container):
    remote_file_list = container.list_objects_info()
    for remote_file in remote_file_list:
        f = open(remote_file['name'], 'w')
        rf = container.get_object(remote_file['name'])
        print remote_file['name']
        for chunk in rf.stream():
            f.write(chunk)
        f.close()

remote_file_list = build_remote_file_list(backup_container)
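Note that the download loop writes each object out via rf.stream() rather than reading the whole object into memory first, which keeps memory usage flat even on the larger files. Stripped of the Cloud Files specifics, the pattern is just (a generic sketch, not part of the original script):

```python
def copy_stream(chunks, out_file):
    # Write an iterable of byte chunks to out_file without
    # buffering the whole object in memory; returns bytes written.
    total = 0
    for chunk in chunks:
        out_file.write(chunk)
        total += len(chunk)
    return total
```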

S3
Upload

from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()

for container in buckets:
    if container.name == dest_container:
        backup_container = container

local_file_list = sys.stdin.readlines()

def upload_s3(local_file):
    k = Key(backup_container)
    k.key = local_file
    k.set_contents_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_s3(local_file)

Download

from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()

for container in buckets:
    if container.name == dest_container:
        backup_container = container

def build_remote_file_list(container):
    remote_file_list = container.list()
    for remote_file in remote_file_list:
        print remote_file.name
        f = open(remote_file.name, 'w')
        rf = container.get_key(remote_file.name)
        rf.get_file(f)
        f.close()

remote_file_list = build_remote_file_list(backup_container)

The test was performed from a Linux host in London with a 100Mbit connection (uncapped/unthrottled); it was also performed, with almost identical results, from a machine in Paris (also 100Mbit). Tests were also run from other locations (Dallas–Fort Worth, Texas, and my home ISP, bethere.co.uk), however these locations were limited to 25Mbit and 24Mbit respectively, and both reached their maximum speeds. The tests were as follows:

Download files from Rackspace Cloudfiles UK (these had been uploaded previously) – This is downloaded directly via the API, NOT via a CDN

Upload the same files to S3 Ireland

Upload the same files to a new “container” at Rackspace Cloudfiles UK

Download the files from S3 Ireland – This is downloaded directly via the API, NOT via a CDN

The average speeds for the tests are as follows:

Cloudfiles
Download: 90Mbit/s
Upload: 85Mbit/s

S3 Ireland
Download: ~40Mbit/s
Upload: 13Mbit/s
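For reference, those figures are decimal megabits per second. Converting a transfer size and elapsed time into that unit is straightforward; the helper below is just a sketch (the actual elapsed times aren't recorded in this post, so the example values are illustrative only):

```python
def mbit_per_s(num_bytes, seconds):
    # bytes -> bits, then divide by elapsed seconds;
    # 1 Mbit = 1,000,000 bits, as network speeds are quoted.
    return (num_bytes * 8) / (seconds * 1000000.0)

# e.g. 12.5MB moved in one second is exactly 100Mbit/s:
# mbit_per_s(12500000, 1) -> 100.0
```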

Observations

Cloud Files seems to be able to max out a 100Mbit connection for both file uploads and downloads

S3 seems to cap inbound file transfers at around 13Mbit/s?

S3's download speeds via the API seem either extremely unpredictable, or capped after a certain amount of data has been transferred, or there was congestion on the AWS network at the time

Below is a graph showing the different connection speeds achieved using Cloud Files and S3.

As mentioned before, this is a very unscientific test (these results have not been replicated from as many locations, or as many times, as I'd like, so take them with a pinch of salt), but it does appear that Rackspace Cloud Files UK is noticeably faster than S3 Ireland.

I'm in the process of migrating from my iPhone 3G to an HTC Desire. So far I'm really impressed with the Desire, but a full review is reserved for a month or so from now, after I've used it day in, day out!

One thing I did quite want was to have my SMS messages migrated from the iPhone to the Desire. As the iPhone keeps its SMSes in a SQLite DB, this wasn't too hard. I'm going to post the procedure and the script I used later!
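For what it's worth, reading messages back out of such a database takes only a few lines of Python. This is a hypothetical sketch, not the script from this post: the table and column names below (`message`, `address`, `date`, `text`) are assumed from the commonly reported iPhone-era sms.db layout and may differ between firmware versions.

```python
import sqlite3

def dump_messages(db_path):
    # Assumed schema: a `message` table with `address`,
    # `date` and `text` columns (iPhone-era sms.db layout).
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT address, date, text FROM message ORDER BY date"
    ).fetchall()
    conn.close()
    return rows
```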

I’ve recently upgraded my iTunes installation on my MacBookPro to 8.1.1 and to my horror found that I’m no longer able to connect to my DAAP library on my NAS.

This is rather strange, as the issue has only just appeared in 8.1.1 and does not appear on my Windows machines, which reside on a different network and have Bonjour/Rendezvous mDNS traffic broadcast locally by RendezvousProxy. After much annoyance, I decided to do a quick check of what an older iTunes library was sending out and compare that to Avahi. It turns out that my Avahi configuration was missing some vital TXT records. This wasn't an issue with previous revisions of the iTunes client, but appears to be an issue in 8.1.1.

I updated my daap.service file in /etc/avahi/services/ to the following:

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_daap._tcp</type>
    <port>3689</port>
    <txt-record>txtvers=1</txt-record>
    <txt-record>iTSh Version=131073</txt-record>
    <txt-record>Version=196610</txt-record>
  </service>
</service-group>

I restarted Avahi for good measure, and can now connect to my mt-daapd library again!