One of the nice things about the Raspberry Pi 2 is that it has a Cortex-A7-based ARMv7 CPU, as opposed to the original Pi's ARMv6 CPU. This not only allows many more distributions to run on it (as most armhf distributions are compiled for ARMv7 as a minimum), but also brings the performance benefits associated with userland ARMv7 code. After releasing an Ubuntu 14.04 (trusty) image for the Raspberry Pi 2, I decided to pit Raspbian (which uses an ARMv6 userland for compatibility between the original Pi and the Pi 2) against Ubuntu (which is only compiled for ARMv7). I also benchmarked a Utilite Pro, an ARM system with a faster CPU and built-in SSD, and a modern Intel server.

Raspbian wheezy was tested on both Raspberry Pi models; Ubuntu trusty was tested on the Raspberry Pi 2, the Utilite Pro, and the Intel system. All installations were current as of today. The systems were tested with nbench (BYTEmark), OpenSSL's built-in speed tests, and Bonnie++.
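For reference, the three suites were invoked roughly like this (the exact flags from the original runs aren't recorded, so treat this as a representative sketch):

```shell
# Representative invocations, not the recorded command lines.
run_benchmarks() {
    ./nbench                  # BYTEmark: single-threaded CPU index scores
    openssl speed md5 sha512 whirlpool aes-256-cbc rsa1024 ecdsap256
    bonnie++ -d "$HOME"       # sizes its test file to 2x RAM by default
}
# Run each suite on an otherwise idle system for comparable numbers.
```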

Results

This is a hand-picked assortment of test results; for the full raw results, see below.

| Test | RPi B (Raspbian) | RPi 2 (Raspbian) | RPi 2 (Ubuntu) | Utilite (Ubuntu) | i5-4690K (Ubuntu) |
| --- | ---: | ---: | ---: | ---: | ---: |
| Numeric sort (iter/s) | 217.2 | 450.72 | 421.55 | 334.63 | 2,385.1 |
| FP emulation (iter/s) | 41.334 | 70.276 | 55.108 | 52.454 | 795.9 |
| IDEA (iter/s) | 694.72 | 1,308.5 | 1,573.3 | 1,315 | 15,059 |
| md5, 1024-byte blocks (kB/s) | 37,008.46 | 62,628.86 | 69,563.39 | 80,632.53 | 670,637.40 |
| aes-256-cbc, 1024-byte blocks (kB/s) | 11,969.50 | 18,445.31 | 17,295.36 | 20,986.47 | 124,509.53 |
| sha512, 1024-byte blocks (kB/s) | 8,491.32 | 11,838.81 | 20,718.25 | 25,803.70 | 431,647.74 |
| whirlpool, 1024-byte blocks (kB/s) | 1,584.61 | 2,949.80 | 2,747.05 | 2,687.46 | 135,009.28 |
| rsa 1024 verify (ops/s) | 1,540.3 | 2,649.6 | 2,630.5 | 2,890.8 | 114,074.5 |
| ecdsa 256 verify (ops/s) | 73.2 | 126.3 | 138.0 | 161.1 | 4,329.6 |
| Block output (K/s) | 7,520 | 11,028 | 11,299 | 48,214 | 62,762 |
| Block input (K/s) | 13,233 | 23,015 | 22,997 | 125,954 | 284,914 |
| Random seeks (/s) | 524.7 | 1,054 | 874.6 | 3,218 | 444.5 |

Notes

Interestingly, many of the BYTEmark tests on the Pi 2 were faster under Raspbian than under Ubuntu. Keep in mind, though, that these tests date from the 1990s and do not take advantage of modern optimizations (the floating point emulation test, for example). Many OpenSSL tests performed better on Ubuntu, but not all.

Edit: The slower nbench results in Ubuntu appear to be due to a running LSM (Linux Security Module). When Ubuntu is running with AppArmor (default) or SELinux enabled, it's marginally slower than Raspbian, but with LSMs disabled, it's marginally faster than Raspbian. (The Raspbian kernel has no LSM modules compiled in.) I'm keeping these test results as they are because AppArmor is enabled by default, but keep that in mind.
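To see whether AppArmor is in play on a given system, a quick check along these lines works (the sysfs path is AppArmor's standard module parameter; the helper name is mine):

```shell
# Report whether AppArmor is active; degrades gracefully on kernels
# with no AppArmor support compiled in (like the Raspbian kernel).
lsm_check() {
    if [ -r /sys/module/apparmor/parameters/enabled ]; then
        printf 'AppArmor enabled: %s\n' \
            "$(cat /sys/module/apparmor/parameters/enabled)"   # Y or N
    else
        echo "AppArmor not present in this kernel"
    fi
}
# For a test run, AppArmor can be disabled by booting with apparmor=0
# on the kernel command line (/boot/cmdline.txt on the Pi).
```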

Raspbian/Ubuntu aside, virtually all of the tests were faster on the Pi 2 than the original Pi.

Bonnie++ tests were roughly the same between Raspbian and Ubuntu on Pi 2, and were decently faster than the original Pi (though in this test an older SDHC card was used for the original Pi, so it's not apples to apples). The SSD on the Utilite blows them away though.

All of the CPU tests are single-threaded, and do not take multi-core performance into consideration.

This was not a controlled scientific test. I did not run multiple tests on each system and average them together, and in the Intel system's case, it was an active (but low volume) server.

All Bonnie++ tests were run with swap disabled and on the boot drive, except for the Intel system, where the boot drive (an SSD) did not have enough space for a full test. (Bonnie++ requires free disk space equal to twice the amount of RAM to run. On 512 MiB / 1 GiB / 2 GiB systems that's fine, but I didn't have 64 GiB free on the Intel system's boot drive.)

I've closed comments on this blog post. If you are looking for help, please see this post on the raspberrypi.org forums. If you post there, you'll be reaching a wider audience of people (including myself) who can help you. Thanks for all of your comments!

After my last post, I went and ported Sjoerd's Raspberry Pi 2 Debian kernel patchset to Ubuntu's kernel package base (specifically 3.18.0-14.15). The result is an RPi2-compatible 3.18.7-based kernel which not only installs in Ubuntu, but has all the Ubuntu bells and whistles. I also re-ported flash-kernel based on Ubuntu's package, recompiled raspberrypi-firmware-nokernel, created a linux-meta-rpi2 package, and put it all in a PPA.

With that all done, I decided to go ahead and produce a base Ubuntu trusty image. It's 1.75 GB uncompressed, so you can put it on a 2 GB or larger MicroSD card, and it includes a full ubuntu-standard setup. Also included in the zip is a .bmap file; if you are writing the image in Linux, you can use the bmap-tools package to write only the non-zero bytes, saving some time. Otherwise it's the same procedure as other Raspberry Pi images.
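Writing the image could look something like this (the `write_image` helper and the device path are my illustration, not part of the release; always verify the device with `lsblk` first):

```shell
# Hypothetical helper: prefer bmaptool, which uses the .bmap file to
# skip unmapped (all-zero) blocks; fall back to plain dd, which
# writes every byte.
write_image() {
    img=$1; dev=$2
    if command -v bmaptool >/dev/null 2>&1 && [ -f "${img%.img}.bmap" ]; then
        bmaptool copy --bmap "${img%.img}.bmap" "$img" "$dev"
    else
        dd if="$img" of="$dev" bs=4M conv=fsync
    fi
}
# Real usage (device path is an example -- check lsblk first!):
#   sudo write_image ubuntu-trusty-rpi2.img /dev/mmcblk0
```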

(PS: If this image becomes popular, I should point out ahead of time: This is an unofficial image and is in no way endorsed by my employer, who happens to be the company who produces Ubuntu. This is a purely personal undertaking.)

My Raspberry Pi 2 arrived yesterday, and I started playing with it today. Unlike the original Raspberry Pi, which had an ARMv6 CPU, the Raspberry Pi 2 uses a Broadcom BCM2836 (ARMv7) CPU, which allows for binary compatibility with many distributions' armhf ports. However, it's still early in the game, and since ARM systems have little standardization, there isn't much available yet. Raspbian works, but its userland still uses ARMv6-optimized binaries. Ubuntu has an early beta of Ubuntu Snappy, but Snappy is a much different environment than "regular" Ubuntu.

I found this post by Sjoerd Simons detailing getting Debian testing (jessie) on the Pi 2, and he did a good job of putting together the needed software, which I used to get a clean working install of Ubuntu trusty on my Pi 2. This is meant as a rough guide, mostly from memory -- I'll let better people eventually take care of producing a user-friendly system image. This procedure should work for trusty, utopic, and vivid, and might work for earlier distributions.

A year ago, I launched M29, a URL shortener with a twist. Apparently I forgot to announce it here. Whoops.

Normal URL shorteners are fairly simple. You submit a long URL. The service generates a short URL. The long and short URL are placed in a backend database. If you go to the short URL, it redirects to the long URL.

This means that the URL shortener service has a large database of URLs available to it. While 99% of the contents of this database may be mundane, it's still a large, centralized source of information. Very relevant to the recent NSA news, for example.

M29's twist is, except when serving the redirect, it does not know anything about the contents of the long URLs. This is accomplished by generating an AES-128 key, using it to encrypt the long URL, and then splitting the key in two. One half of the key is stored in the backend service, and the other half is encoded as part of the short URL itself. This means the only time the two parts of the key come together is when the short URL is requested for the redirect.
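The scheme can be sketched at the shell with the openssl CLI (this is my illustration, not M29's actual code, and the fixed all-zero IV is for brevity only):

```shell
# Illustrative only: M29-style key splitting with the openssl CLI.
key=$(openssl rand -hex 16)                      # AES-128 key as 32 hex chars
server_half=$(printf '%s' "$key" | cut -c1-16)   # would live in the backend DB
url_half=$(printf '%s' "$key" | cut -c17-32)     # would ride in the short URL

# The long URL is encrypted with the full key before being stored.
# (Fixed zero IV for brevity; a real service needs a random IV per URL.)
long_url="https://example.com/a/very/long/url"
ciphertext=$(printf '%s' "$long_url" |
    openssl enc -aes-128-cbc -a -A -K "$key" \
        -iv 00000000000000000000000000000000)

# Only at redirect time do the two halves meet again:
full_key="$server_half$url_half"
plaintext=$(printf '%s' "$ciphertext" |
    openssl enc -d -aes-128-cbc -a -A -K "$full_key" \
        -iv 00000000000000000000000000000000)
```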

Getting from a long URL to a short URL can be done one of several ways. If you go to m29.us and have Javascript enabled, the client side loads an AES library, builds the key, encrypts the URL, and sends the encrypted URL and half of the key to the server, all processed on the client side. If you don't have Javascript enabled, this task is farmed out to the server side, which generates a random key, encrypts, makes the database insert, returns the short URL, then throws away half of the key. M29 also has a featureful API which lets you do these tasks yourself. (It is also compatible with the goo.gl API, which is easy to work with and has several tools available.)

The net effect is, while I currently have a database of about 10,000 entries, I cannot read them. Source IP and URI logging are not done on the server, so the only way I can find a long URL is if I load a full short URL, which is not possible given just the backend database.

Anyway, this weekend I did some work on M29, including adding a bit.ly-style preview option (append a "+" to the short URL to get its info), among other small feature additions and fixes. It was then I realized, by going to that above short URL (the first URL generated and used in documentation) that the one-year anniversary of the service is today.

It turns out there is a bug in Steam for Linux and OS X, where it tries to launch the Windows version of Half-Life for GoldSrc mods from within Steam. However, Half-Life can be manually launched and pointed at the mod.

I have released a new version of SteamLink as a zip file. If you would like to run Half-Life: Uplink on Linux or OS X, simply download and extract the zip, and run the installer shell script. It will determine the Half-Life installation directory, install the mod, and give you a symlink to a script to launch it.
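The directory detection could look roughly like this (the candidate paths are my guesses at common Steam locations, not necessarily what the installer actually checks):

```shell
# Hypothetical sketch of locating Half-Life; the real installer may
# check different paths or prompt the user.
find_hl_dir() {
    for d in "$HOME/.local/share/Steam/SteamApps/common/Half-Life" \
             "$HOME/.steam/steam/SteamApps/common/Half-Life" \
             "$HOME/Library/Application Support/Steam/SteamApps/common/Half-Life"
    do
        if [ -d "$d" ]; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    return 1
}
# The installer would then copy the mod into that directory and create
# a launcher script, e.g. one that runs Half-Life with -game uplink.
```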