If net neutrality ends, then broadcasters and news and entertainment corporations with deep pockets can pay to put their content in front of ISP users. There’s no incentive for the ISPs to prioritise, or perhaps even carry, packets to the second-tier domains. Thus we end up with lowest-common-denominator content being the norm. Not that different from what we currently receive from our broadcasters. Obviously, this is a comfortable place for both the media industry and government to be in. It’s the status quo. Channel 5, ITV 2, 3 & 4, 24/7. No Wikipedia, no Vimeo, no blogs, and so on.

If a government can lean on hosting and other internet companies to stop providing their services to sites that the government would rather not see, and at the same time exercise the ability to remove a company from the internet by confiscating its domain, at what point does one threat follow the other? Is it feasible that a US ISP that provides hosting to Wikileaks, and refuses to stop serving them, will then have its own domain confiscated?

So it seems to me that some heavy-handed governmental censorship is not far off. Wikileaks is just the start, and so I’m siding with the Internet on this one.

“The free distribution of data, and resistance to top-down evaluation of the merit of that data, is what the web excels at. It is more important now than ever before that individuals are allowed to publish and consume information as they see fit, within the bounds of the law. The world wide web must be allowed to operate neutrally and independently of governments and corporations, including domain name registrars, ISPs, data carriers and other infrastructure providers. Everyone who uses the web benefits from such independence, and should promote and support it wherever possible.”

which uses the // protocol-relative URL to download the external jQuery file. Because it’s now served over HTTPS, you won’t benefit from the browser’s cached version, but it is one less file for you to serve.
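For reference, the script tag looks something like this (a sketch assuming the Google-hosted copy of jQuery; substitute whichever version you’re actually using):

<script type='text/javascript' src='//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js'></script>

Because the src has no scheme, the browser requests it over whatever scheme the page itself was loaded with, so an HTTPS page pulls the HTTPS copy and avoids the mixed content warning.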

Don’t forget your favicon

So now I was at the point of being certain that every resource on the page was being served over HTTPS, yet I was still getting the dreaded mixed content warning. Then I realised I hadn’t explicitly put a favicon link in the html. A quick check in the logs seemed to confirm this. The implicit favicon.ico request was being made by Chrome, but using HTTP.

Adding the icon link seemed to do the trick.

<link href='/images/favicon.ico' rel='shortcut icon' />

Proxying Google maps

One final problem was that, as part of the registration process, I was showing a Google Map iframe.

I didn’t want this page served over HTTP just to avoid the mixed content warning, especially as the map page contains personal details.

As I’m using nginx to serve the site, and it’s relatively easy to proxy content served from local application servers like mongrel and unicorn, I wondered if we could do something similar with the requests to http://maps.google.com.
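Something along these lines looked plausible (a sketch rather than a final config; the /maps/ prefix is an assumption, and the iframe would point at it instead of at Google directly):

location /maps/ {
    # fetch the map content from Google over plain HTTP on the server side
    proxy_pass http://maps.google.com/;
    proxy_set_header Host maps.google.com;
}

That way the browser only ever sees HTTPS requests to my own domain, and it’s nginx doing the HTTP fetch behind the scenes.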

Max was writing some rake tasks today and it reminded me to finish off this post which has sat unfinished for months.

Bob’s been porting Charanga’s music-teaching desktop software from PC to Macs. The port is based on the work we’ve done in the past couple of years for our online products and means that we can now have an online offering as well as PC and Mac desktop products, all built from the same codebase.

Each of these 11 products will come as either a hybrid DVD or CDROM, with both the PC and Mac versions on it, but only the relevant platform’s version visible. Lots of CD-burning products for the Mac make it easy to burn these kinds of ROMs, but the big problem is that they all have to be made manually. And with 11 different products, that’s 11 different manual processes, and a single mistake in any of them could ruin the master that we’re sending off to the publisher.

So why not do the same with the burning of the CDROMs? Entirely automate the process so there’s no room for manual error…

Rake

Rake – Ruby Make – operates on a rakefile which defines lists of tasks, with optional prerequisite tasks that must be completed first. Given that building and burning the ROMs consists of a bunch of identical steps, differentiated only by the files that need to go on the relevant product’s CDROM or DVD, it sounds like an ideal tool, so let’s go ahead and build a skeleton rakefile…

There are a bunch of files that are common to all the products, plus product-specific files. These get pulled out of subversion (yes, yes, we’re only just migrating to git), copied into the product file structure, the PC content gets added, the hybrid ISO image gets created, and then we use this to physically burn the ROM.

If we make each preceding step a prerequisite of the parent task, we can break the steps down into nice self-contained pieces and have a single task invoke all the others below it.

Ultimately, I wanted to be able to stick a DVD or CDROM into the drive and then call:
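something like this for each product (the product name here is just a placeholder):

rake burn_<product name>_dvd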

The end result is 11 burn_<a product name>_dvd tasks, each of which invokes, in turn:

update_repository

build_<a product name>_dmg

build_<a product name>_dvd

and ends up with a burnt hybrid DVD being made for you.
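A minimal sketch of how the rakefile hangs together (the product list and repository paths here are illustrative, not the real ones):

PRODUCTS = %w[product_one product_two]   # in reality there are 11 of these

desc 'Pull the latest common and product-specific files out of subversion'
task :update_repository do
  sh 'svn update products'
end

PRODUCTS.each do |product|
  desc "Build the Mac .dmg for #{product}"
  task "build_#{product}_dmg" => :update_repository do
    # copy the common files and the product-specific files into place,
    # then build the Mac side of the image as a writeable .dmg
  end

  desc "Build the hybrid ISO for #{product}"
  task "build_#{product}_dvd" => "build_#{product}_dmg" do
    # mount the .dmg, add the PC content and create the hybrid ISO
  end

  desc "Burn the hybrid DVD for #{product}"
  task "burn_#{product}_dvd" => "build_#{product}_dvd" do
    # burn the finished ISO to the disc sitting in the drive
  end
end

Because each task names the previous one as a prerequisite, invoking the burn task pulls in the whole chain automatically.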

The critical part happens at line 163, where the Mac .dmg is mounted under /Volumes as writeable and has the PC content written to it. It seems that hdiutil only likes mounted images when creating hybrid images. I experimented with various other options (using directories in /tmp, etc.) but for the hybrid image to be built correctly, it seems this is your only choice. The -hide switches list, via globs, which files to hide from each filesystem.
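In outline, the dance looks something like this (a shell sketch; the paths, image names and globs are placeholders, and the real rakefile shells out to the equivalent commands):

# mount the writeable Mac .dmg so the PC content can be copied onto it
hdiutil attach product.dmg -mountpoint /Volumes/product
cp -R pc_content/ /Volumes/product/

# build the hybrid ISO from the mounted volume, hiding the
# irrelevant files from the Mac (HFS) and PC (ISO/Joliet) sides
hdiutil makehybrid -o product.iso /Volumes/product \
  -hfs -iso -joliet \
  -hide-hfs 'pc_*' -hide-iso '*.app' -hide-joliet '*.app'

hdiutil detach /Volumes/product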

Here’s the script to save as /etc/init.d/unicorn (in case the gist doesn’t embed below) – don’t forget to run sudo /usr/sbin/update-rc.d -f unicorn defaults to link it up to your rc.d scripts for running at boot time.

Inspired by Makato’s X-Factor real-time Twitter experiments and with just a couple of days to go until the country goes to the polls, I wondered if we could glean an outcome from the Twittersphere, especially as we get nearer to the actual count and results.

Implementation

My real-time election site monitors the Twitter streaming API for mentions of the three main British political parties, plus the Greens (hey, I live in Brighton Pavilion and they look like they might get their first seat), and their leaders, and tallies a total score and current ratio of tweets.

I’ve since updated it so that it monitors all the parties, plus various independents and niche (read: 0 current seats in parliament) parties, and also tallies phrases like “I voted for”, “I’m voting”, etc., to count actual votes as well as mentions.
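The tallying itself is simple enough. A simplified sketch of the idea (the real site feeds tweets in from the streaming API and pushes the running totals out over the websocket; the keyword lists here are abbreviated):

PARTIES = {
  'labour'       => ['labour', 'gordon brown'],
  'conservative' => ['conservative', 'tory', 'david cameron'],
  'libdem'       => ['lib dem', 'liberal democrat', 'nick clegg'],
  'green'        => ['green party', 'caroline lucas']
}

# bump the count for every party a tweet mentions
def tally(tweet_text, tallies)
  text = tweet_text.downcase
  PARTIES.each do |party, keywords|
    tallies[party] += 1 if keywords.any? { |k| text.include?(k) }
  end
end

# each party's current share of the total mentions
def ratios(tallies)
  total = tallies.values.inject(0) { |sum, count| sum + count }
  return {} if total.zero?
  tallies.inject({}) { |acc, (party, count)| acc.merge(party => count.to_f / total) }
end

tallies = Hash.new(0)
tally("I'm voting Green in Brighton Pavilion", tallies)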

Server set-up

The static portions of the site are served by nginx (which is itself an asynchronous evented server like node.js). Initially I’d tried to serve the whole thing via node.js using the paperboy.js static file serving module, but I’d need both paperboy and the node.ws.js websocket server to share port 80, which would mean some re-engineering, and given that the election is in two days, I wanted to get something up quickly!

So my architecture is static files served out of the docroot by nginx, and the websocket server running on a high port for use by the browser clients.

This high-port usage is itself a problem, as I’d imagine that many corporate firewalls block all bar 80 and maybe 8080, which is why the combined server running entirely out of node would be a good eventual goal.

To solve this, I thought I’d try proxying the high port via nginx. You can turn proxy buffering off in nginx, making it well suited to this, but I hit another hurdle with the Flash implementation of the websocket that will be used by most browsers (only Chrome is currently able to use native HTML5 websockets).
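The attempted proxying was essentially this (a sketch; the /socket path and upstream port are assumptions for illustration):

# hand websocket traffic straight through to the node.js process,
# with buffering switched off so messages aren't held back
location /socket {
    proxy_pass http://127.0.0.1:8000;
    proxy_buffering off;
}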

I’m sold hook, line and sinker on the AWS platform. I’m especially impressed at the product innovation and ever-reducing prices.

A few days ago Amazon announced versioning for S3. This means that with the versioning flag for a bucket switched on, you can retrieve earlier versions of your files. Sweet.

Now, because I’m lazy, I tend to use S3Fox or Cyberduck for setting ACLs, creating European buckets and so on.

Neither of these has been updated yet to support the versioning flag, and the AWS Console doesn’t have an S3 interface, so I thought I’d get my hands dirty and find out how to do it with the REST interface.

You issue a PUT to your bucket with the versioning querystring and the relevant XML:
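The request is roughly this (you also need the usual Date and Authorization headers, signed with your AWS credentials; the bucket name is a placeholder):

PUT /?versioning HTTP/1.1
Host: yourbucket.s3.amazonaws.com

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</VersioningConfiguration>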

The March 2010 internship positions should go live on Monday 7th December. Currently the placements page is only showing 40 positions, but the additional 60 places will be there from Monday. Then you’ll be able to search for our role.

Our instrumental, vocal and curriculum music e-learning system is used by 55 local authorities and thousands of teachers and students. You’ll be undertaking user testing, making recommendations and working with our developers to implement some of these, gaining invaluable experience in the process.

A group of Brighton & Hove Montessori parents are campaigning for a state-funded Montessori primary school to be opened in the city. This builds on the precedent of five other state-funded primaries that have opened in the UK.

At the moment, I’m only peripherally involved, but I did make this poster. The last time I did any print work was using a prehistoric version of QuarkXPress, so this was an interesting challenge.

If you’ve got a child aged 3-11 and you’re concerned about or dissatisfied with the increasingly restricted range of schooling choices for your child in Brighton & Hove, then we’d welcome your support.