
Author: Welby

Welby McRoberts is Lead Infrastructure Engineer at a large managed hosting and cloud computing company, working on large-scale deployments and solutions. Follow Welby on Twitter @welbymcroberts or via his personal site.

(Disclaimer: I’m a Rackspace employee. The postings on this site are my own, may be biased, and don’t necessarily represent Rackspace’s positions, strategies or opinions. These tests were performed independently of my employer, by myself.)

As Rackspace have recently launched a ‘beta’ Cloud Files service in the UK, I thought I would run a few tests to compare it to Amazon’s S3 service running from Ireland.

I took a set of files totalling 18.7GB, with sizes ranging from 1KB to 25MB. The contents were mainly photos (both JPEG and RAW, from Canon and Nikon cameras), plain text files, gzipped tarballs and a few Microsoft Word documents for good measure.

The following python scripts were used:

Cloud Files
Upload


import cloudfiles
import sys, os

api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

# File names to upload are read from stdin, one per line
local_file_list = sys.stdin.readlines()

cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)

containers = cf.get_all_containers()
for container in containers:
    if container.name == dest_container:
        backup_container = container

def upload_cf(local_file):
    u = backup_container.create_object(local_file)
    u.load_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_cf(local_file)
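The post reports average speeds rather than per-file timings; a small helper along the following lines (my own addition, not part of the original scripts) could wrap either upload function to measure throughput. The callable passed in is arbitrary; any function that takes a path will do.

```python
import time

def mbit_per_s(size_bytes, elapsed_s):
    """Convert a byte count and duration into megabits per second."""
    # bytes -> bits, then divide by seconds and 1e6 for Mbit/s
    return (size_bytes * 8) / (elapsed_s * 1e6)

def timed_transfer(transfer_func, path, size_bytes):
    """Run transfer_func(path) (e.g. upload_cf) and return Mbit/s achieved."""
    start = time.time()
    transfer_func(path)
    return mbit_per_s(size_bytes, time.time() - start)
```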

Download


api_username = "USERNAME"
api_key = "KEY"
auth_url = "https://lon.auth.api.rackspacecloud.com/v1.0"
dest_container = "CONTAINER"

import cloudfiles
import sys, os

# Set up the connection
cf = cloudfiles.get_connection(api_username, api_key, authurl=auth_url)

# Get a list of containers
containers = cf.get_all_containers()

# Let's set up the container
for container in containers:
    if container.name == dest_container:
        backup_container = container

# Create the container if it does not exist
try:
    backup_container
except NameError:
    backup_container = cf.create_container(dest_container)

# We've now got our container, let's get a file list and download each object
def build_remote_file_list(container):
    remote_file_list = container.list_objects_info()
    for remote_file in remote_file_list:
        f = open(remote_file['name'], 'w')
        rf = container.get_object(remote_file['name'])
        print remote_file['name']
        for chunk in rf.stream():
            f.write(chunk)
        f.close()

remote_file_list = build_remote_file_list(backup_container)
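Cloud Files returns an MD5 checksum as each object’s ETag, so after a bulk download like the one above it is worth verifying the local copies. A standalone sketch (my addition, not part of the original test; the expected digest would come from the object’s `etag` attribute):

```python
import hashlib

def file_md5(path, chunk_size=65536):
    """MD5 a file in chunks so large downloads don't need to fit in RAM."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

def verify_download(path, expected_md5):
    """Compare a downloaded file against the MD5 the server reported."""
    return file_md5(path) == expected_md5
```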

S3
Upload


from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()
for container in buckets:
    if container.name == dest_container:
        backup_container = container

# File names to upload are read from stdin, one per line
local_file_list = sys.stdin.readlines()

def upload_s3(local_file):
    k = Key(backup_container)
    k.key = local_file
    k.set_contents_from_filename(local_file)

for local_file in local_file_list:
    local_file = local_file.rstrip()
    local_file_size = os.stat(local_file).st_size / 1024
    print "uploading %s (%dK)" % (local_file, local_file_size)
    upload_s3(local_file)
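One hedged guess at working around the apparent ~13Mbit/s inbound speed seen below would be to run several uploads in parallel, on the assumption (mine, untested here) that any cap is per connection rather than per customer. A sketch using a thread pool; `upload_func` stands in for `upload_s3` above, and the worker count is arbitrary.

```python
from multiprocessing.dummy import Pool  # thread pool; uploads are I/O-bound

def parallel_upload(paths, upload_func, workers=4):
    """Upload each path concurrently using `workers` threads.

    upload_func is any single-argument upload callable (e.g. upload_s3).
    Note: boto connections are not thread-safe, so in practice each
    worker should create its own S3Connection.
    """
    pool = Pool(workers)
    try:
        pool.map(upload_func, paths)
    finally:
        pool.close()
        pool.join()
```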

Download


from boto.s3.connection import S3Connection
from boto.s3.key import Key
import sys, os

dest_container = "CONTAINER"

s3 = S3Connection('api', 'api_secret')
buckets = s3.get_all_buckets()
for container in buckets:
    if container.name == dest_container:
        backup_container = container

def build_remote_file_list(container):
    remote_file_list = container.list()
    for remote_file in remote_file_list:
        print remote_file.name
        f = open(remote_file.name, 'w')
        rf = container.get_key(remote_file.name)
        rf.get_file(f)
        f.close()

remote_file_list = build_remote_file_list(backup_container)

The test was performed from a Linux host in London with an uncapped, unthrottled 100Mbit connection; it was also performed, with almost identical results, from a machine in Paris (also 100Mbit). Tests were additionally run from other locations (Dallas-Fort Worth, Texas, and my home ISP, bethere.co.uk), however these were limited to 25Mbit and 24Mbit respectively, and both reached their maximum speeds. The tests were as follows:

Download files from Rackspace Cloudfiles UK (these had been uploaded previously) – This is downloaded directly via the API, NOT via a CDN

Upload the same files to S3 Ireland

Upload the same files to a new “container” at Rackspace Cloudfiles UK

Download the files from S3 Ireland – This is downloaded directly via the API, NOT via a CDN

The average speeds for the tests are as follows:

Cloud Files
Download: 90Mbit/s
Upload: 85Mbit/s

S3 Ireland
Download: ~40Mbit/s
Upload: 13Mbit/s
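To put those averages in context, here is some back-of-the-envelope arithmetic for moving the 18.7GB test set at each measured rate (my own addition; it assumes the 18.7GB figure is binary gibibytes and the rates are decimal megabits, which the post does not state).

```python
def transfer_hours(size_gib, mbit_per_s):
    """Hours to move size_gib (GiB) at mbit_per_s (decimal Mbit/s)."""
    bits = size_gib * 1024**3 * 8        # GiB -> bits
    return bits / (mbit_per_s * 1e6) / 3600.0

# Cloud Files download at 90Mbit/s: roughly half an hour for the set.
cf_download = transfer_hours(18.7, 90)
# S3 upload at 13Mbit/s: well over three hours for the same data.
s3_upload = transfer_hours(18.7, 13)
```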

Observations

Cloud Files seems to be able to max out a 100Mbit connection for both file uploads and downloads

S3 seems to have a cap of around 13Mbit/s for inbound file transfers

S3 is either extremely unpredictable in transfer speed when downloading files via the API, or there is some form of cap after a certain amount of data transferred, or there was congestion on the AWS network

Below is a graph showing the different connection speeds achieved using CF & S3

As mentioned before, this is a very unscientific test (these results have not been replicated from as many locations, or as many times, as I’d like, so take them with a pinch of salt), but it does appear that Rackspace Cloud Files UK is noticeably faster than S3 Ireland.

As promised, here’s a copy of my iPhone-to-Android script: a quick and dirty Python script that reads an SMS backup made by iTunes and converts it to XML that can be read by SMS Backup and Restore on the Android platform.


from sqlite3 import *
from xml.sax.saxutils import escape
import codecs
import re

f = codecs.open('sms.xml', 'w', 'utf-8')
f.write('''
''')

# This is 31bb7ba8914766d4ba40d6dfb6113c8b614be442.mddata or 31bb7ba8914766d4ba40d6dfb6113c8b614be442.mdbackup usually
c = connect('sms.db')
curs = c.cursor()
curs.execute('''SELECT address,date,text,flags FROM message WHERE flags <5 ORDER BY date asc''')
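The listing above stops at the query; a hedged sketch of how the remaining loop might turn each row into an element. The flags-to-type mapping (2 = received, 3 = sent in the old iPhone DB) and the millisecond dates expected by SMS Backup and Restore are my assumptions from memory, not taken from the original post.

```python
from xml.sax.saxutils import escape

def row_to_sms_xml(address, date, text, flags):
    """Render one sms.db row as an SMS Backup and Restore <sms/> element.

    Assumptions: flags == 2 means received (type 1), anything else sent
    (type 2); the app wants dates in milliseconds since the Unix epoch.
    """
    msg_type = 1 if flags == 2 else 2
    # Escape XML metacharacters, including double quotes for attributes
    body = escape(text or '', {'"': '&quot;'})
    addr = escape(address or '', {'"': '&quot;'})
    return ('<sms protocol="0" address="%s" date="%d" type="%d" '
            'body="%s" read="1" status="-1" />' %
            (addr, date * 1000, msg_type, body))
```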

I’m in the process of migrating from my iPhone 3G to an HTC Desire. So far I’m really impressed with the Desire, but a full review is reserved for a month or so from now, after I’ve used it day in, day out!

One thing I did quite want was to have my SMS messages migrated from my iPhone to the Desire. As the iPhone keeps its SMSes in an SQLite DB this wasn’t too hard. I’m going to post the procedure and the script I used later!

I’ve moved this site (and a few others) from its semi-temporary home on one of my NAS boxes (proper x86 machines, remember, not any of these silly 200MHz MIPS devices!) to a slice at the wonderful Slicehost. I’d highly recommend them.

Tagging along at lunch with a few colleagues from work today to the local Subway, I noticed a new set of adverts on the window.
It appears that Subway is imitating Apple’s “there’s an app for that”. It turns out this campaign was done by McCann Erickson and is a ‘light-hearted’ one, complete with UK TV adverts. The phrase “imitation is the sincerest form of flattery” comes to mind.

In most large enterprises there is a requirement to comply with various standards. The hot potato in the Ecommerce space at the moment (and has been for a few years!) is PCI-DSS.

At $WORK we have to comply with PCI-DSS with the full audit and similar occurring due to the number of transactions we perform. Recently we’ve deployed lighttpd for one of our platforms, which has caused an issue for our Information Security Officers and Compliance staff.

PCI-DSS 6.6 requires either a code review to be performed or the use of a WAF (Web Application Firewall). Whilst a code review may seem an easy task, when you’re talking about complex enterprise applications following a very……… agile development process it’s not always an option. As for WAFs, there are multiple products available that sit upstream and perform this task. There is, however, an issue if you use SSL for your traffic: most WAFs will not do the SSL decryption/re-encryption between the client and server (effectively becoming a man in the middle). There are a few products which do, F5 Networks’ ASM being one that springs to mind, but this isn’t always an option due to licensing fees and similar. An alternative is to run a WAF on the server itself. A common module for this is mod_security for Apache. Unfortunately, a similar module does not exist for lighttpd.
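The core idea of a host-based WAF of this sort can be illustrated outside lighttpd: a rule set is essentially a list of patterns run against parts of the request, and the first match blocks it. The sketch below is only a Python illustration of that concept (the actual implementation described here is a Lua script for mod_magnet); the rules shown are generic textbook examples, not the $WORK rule set.

```python
import re

# Each rule: (name, compiled pattern applied to the URI + query string).
# These three are illustrative only; real mod_security rule sets are
# far larger and more carefully tuned.
RULES = [
    ('sql-injection', re.compile(r"union\s+select|(--|#)\s*$", re.I)),
    ('path-traversal', re.compile(r"\.\./")),
    ('xss', re.compile(r"<script\b", re.I)),
]

def check_request(uri, query_string=''):
    """Return the name of the first matching rule, or None if clean."""
    target = uri + '?' + query_string
    for name, pattern in RULES:
        if pattern.search(target):
            return name
    return None
```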

In response to $WORK’s requirement I’ve used mod_magnet to run a small Lua script that emulates the functionality of mod_security (to an extent at least!). Please note that mod_magnet is blocking, so every request is held until the script has completed; be very careful with the script, and ensure it isn’t introducing latency in a test environment prior to deploying to live!

Below is a copy of an early version of the script (most of the mod_security rules that we have are specific to work, so are not being included for various reasons), however I’ll post updates to this soon.

Those who know me will know that I go into London on occasion, and did not like the fact that Woking has no fast ticket system and hence no cheaper fares are available. You’ll also know that me and the ticket machine don’t always see eye to eye!
Imagine my glee in finding out that Oyster is now available on the overground network. Now correct me if I’m wrong, but Woking is a feeder town, nothing more, so it would be safe to assume that feeder towns would get Oyster. How wrong I was. It turns out that South West Trains can’t be bothered to accept Oyster. This wouldn’t be a major issue if the automatic ticket machines worked: it takes a good 5 minutes to get a ticket by card from them. To add to the irony, TFL have their backup Oyster systems running out of a computer building not even 2 miles away. All I’m asking for is the ability to pre-pay for, say, 5 trips into London plus the single fare on the tube or bus, as the only +tube option we have on the automatic system is zones 1 to 6! Why can’t Oyster just be added? It would make things simpler and all in all a lot more 21st century! Also, think of the trees!

At the new year I decided that I was fed up with having my main Unix server acting as a router (amongst other things) and decided to bite the bullet and get a full-blown router. Herein lay a dilemma. Being a geek, I couldn’t settle for an unhackable “home” router. This instantly ruled out most commercially available routers, barring those that run OpenWRT. Now don’t get me wrong, OpenWRT is more than capable, but I just didn’t feel like having to worry about hardware support, fighting with iptables, and getting hardware that probably wouldn’t scale. Before anyone starts thinking “Scaling? But this is for a home connection!”: this is true. However, I sync my DSL at the full 24244kbps downstream and 2550kbps upstream (I live under 200m from the exchange according to my line attenuation; my ISP doesn’t cap bandwidth, and allows FastPath and similar to be enabled. Go BeThere!). At the time I was also seriously considering investing in a secondary connection for additional bandwidth. This meant I was left with a few choices:

Build my own, using something like an ALIX or Soekris board running FreeBSD (or something with a web GUI for when I’m feeling rather lazy, such as m0n0wall or pfSense, both of which I’ve used previously with great success).

Cisco. Yes, the 800-pound gorilla. A ‘cheap’ 1800 or similar was going to set me back about £400, though it would have provided most of what I needed.

RouterBoard. These were, to me at least, relatively unknown. I originally looked at them for building my own system, and then discovered RouterOS came with the boards. This was an instant sale.

After my first look at RouterOS I was basically sold. The main reasoning was that it is a commercial Linux distribution that actually works well as a router, and ships with both a CLI (Nortel-esque in this case) and, *shock*, a GUI application. It also met my main criteria:

Support for 802.1Q. I have multiple VLANs at home, so dot1q support was a necessity.

Support for 802.3ad. With a few machines connecting via the router I needed the throughput, and as I don’t have gigabit switching, LACP support was a necessity.

Support for wireless. All good routers for the home (even a geeky one) need support for 802.11a/b/g.

Support for sub-SSIDs. Related to the above, I didn’t want to have 7 wireless cards for my various networks.

Support for WPA2-PSK and WPA2-EAP. I use RADIUS to authenticate all my personal stations against a central authentication system, but I don’t want to have to add guests to this, so PSK should also be supported.

Support for OpenVPN. I don’t like my traffic to/from home going in the clear at all, so I needed to be able to connect via a VPN of some sort. My preference is OpenVPN for client-to-site VPNs (site-to-site is still IPsec, which leads onto the next point).

Support for IPsec. I connect to various friends’ networks and, yet again, don’t want this sort of traffic in the clear; we standardised on IPsec (3DES/MD5) a while back.

Support for “unlimited” firewall rules. This may sound silly, but anyone who has worked with the low-end SonicWALLs will know what I mean; only being able to add 20 rules is EXTREMELY restrictive, especially with multiple VLANs! (I’ve got roughly 300 rules.)

Support for setting DHCP options. I use VMware ESX at home for my test lab, so I need the DHCP server to be able to send the correct options for PXE (or gPXE).

Quick booting. As silly as this may sound, I don’t want boot times of upwards of 30 seconds for my router.

Support for bridging of interfaces with firewall rules. This one is rather self-explanatory really!

Support for UPnP. Let’s face it, UPnP is required for any form of voice/video chat these days over the main IM networks (YIM/AIM/MSNIM).

Support for NetFlow or similar. This one is a nice-to-have, as I like to use flow-tools to get a rough idea of what type of traffic is flowing through my network.

Support for traffic shaping. Ah yes, the holy grail of routers. Unfortunately the likes of tc on Linux requires a degree in astrophysics to get working how you’d like!

Easy configuration.

After discovering (via the x86 installable and the demo units) that RouterOS would let me do all of the above, I decided to give it a whirl.

I’ve recently upgraded the iTunes installation on my MacBook Pro to 8.1.1 and, to my horror, found that I’m no longer able to connect to the DAAP library on my NAS.

This is rather strange, as the issue has only just appeared in 8.1.1 and does not appear on my Windows machines, which reside on a different network and have Bonjour/Rendezvous mDNS traffic broadcast locally by RendezvousProxy. After much annoyance, I did a quick check of what an older iTunes library was sending out and compared it to Avahi. It turns out my Avahi configuration was missing some vital TXT records. This wasn’t an issue with previous revisions of the iTunes client, but appears to be with 8.1.1.

I updated my daap.service file in /etc/avahi/services/ to the following:


<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_daap._tcp</type>
    <port>3689</port>
    <txt-record>txtvers=1</txt-record>
    <txt-record>iTSh Version=131073</txt-record>
    <txt-record>Version=196610</txt-record>
  </service>
</service-group>

I restarted Avahi for good measure, and can now connect to my mt-daapd library again!