Archive

If you want to send confidential content via email you have to think about quite a few things. It's not enough to secure the channel; you also have to secure the content. And when I say secure the channel, you have to hope it is, because while a lot of email endpoints and MTAs are now set up with opportunistic TLS, some are not, and you won't really know until you've been compromised.

And of course your metadata isn't encrypted, and in some cases it can be as revealing as your content. DMARC, SPF, DKIM and so on will help, or you could run a scan with a tool like SSL Labs against the domain, or point one of the available mail-test services at a known MTA, but that might be beyond some people. As for mobile, things vary from OS to OS, version to version and so on.
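If you want to check those records yourself rather than use a scanning service, the DNS locations involved follow a fixed convention. A small sketch (the domain and DKIM selector below are placeholders; the selector varies per sender, so "default" is just a guess) that builds the TXT-record names to query:

```javascript
// Where SPF, DKIM and DMARC records conventionally live in DNS (all TXT records).
function mailAuthQueries(domain, dkimSelector) {
  return {
    spf: domain,                                   // TXT containing "v=spf1 ..."
    dkim: dkimSelector + "._domainkey." + domain,  // TXT containing "v=DKIM1; ..."
    dmarc: "_dmarc." + domain                      // TXT containing "v=DMARC1; ..."
  };
}

// Usage: look the names up with dig, or node's dns.resolveTxt()
var q = mailAuthQueries("example.org", "default");
```

Whether the records are *sensible* is another matter, but their mere presence or absence already tells you something about a correspondent's domain.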

PGP is still your friend here, but if you have your tinfoil hat on as well you might want to get away from some of the global multi-function services that have interests in aspects of your behaviour, your traffic patterns and your email relationships, and their own attitudes to law-enforcement requests. So, here is a quick, off-the-cuff comparison of a couple of services: MailFence and ProtonMail. Tell me where I have my facts wrong if you spot something!

An announcement today [1] that Google will fund a team led by Dr Alexandre Passant [2] to work on mobile social network applications. I wonder how much it will be like my own SkyTwenty platform [3] – not so much a social networking or blogging system as a mobile location service, with anonymity, access control and data shrouding. The DERI work is apparently going to build on some Google tech – PubSubHubbub [4] – and DERI's own hub-based mobile blogging service, SMOB [5], v2 of which was released in January 2010 and integrates with Sindice. Distributed storage, Git-like? What about anonymity, provenance, trust, security, and scalability?

I was just about to start writing a CORS [1] servlet filter, so that I could move one of my apps onto an independent EC2 host and give it more memory, when I came across the CometD project [2] (a DOJO event bus in Ajax, interesting in itself), which makes use of Jetty7's CrossOriginFilter [3].

This seems to do all you need to allow your servlet to interact with cross-domain requests and build JavaScript RIAs that mash up and link data, semantic or not. The filter lets you set a list of allowable domains, among other things, so you can add it to any of your servers, map it to any of your servlets, and let the different clients you trust and want to access your data get to it.
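The core of what such a filter does can be sketched in a few lines (names here are mine, not the filter's): check the request's Origin header against the configured allowlist and, if it matches, echo it back in an Access-Control-Allow-Origin response header:

```javascript
// Sketch of the allowlist check a CORS filter performs.
// An allowedOrigins list of ["*"] means any origin is acceptable.
function corsResponseHeaders(requestOrigin, allowedOrigins) {
  if (!requestOrigin) return {};           // no Origin header: not a cross-origin request
  var allowed = allowedOrigins.indexOf("*") !== -1 ||
                allowedOrigins.indexOf(requestOrigin) !== -1;
  if (!allowed) return {};                 // no header back -> the browser blocks the response
  return { "Access-Control-Allow-Origin": requestOrigin };
}
```

Note that the server still processes the request either way; the headers only tell the browser whether to hand the response over to the page.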

Saves me having to write it, and it looked like it was going to be painful to do fully and correctly, so it's a real relief to see it in Jetty7. All credit to the developers there.

Not sure about the licensing aspects (EPL1 + Apache2), but you can lift the source, remove the Eclipse logging dependency, and alter it as you see fit for your version of servlet engine. I'm trying this now with Tomcat6 and another Jetty6 instance, just as soon as I can get my apps separated onto different domains (without the filter, a request from localhost to a remote domain using jQuery seems to get through just fine, for some reason).

Separating services onto different hosts on EC2

I wanted to move one of my HTTP services (Joseki) onto a new host so as to be able to give the JVM more memory and avoid EC2 unceremoniously killing it when it asked for too much. The Tomcat service with the webapps would stay put. So I:

created an AMI from my running instance

created a new instance from it

reinstalled ddclient because it didn't seem to work

created a new DynDNS account, thinking it was tied to my account rather than the host, but that didn't make any difference

checked the ddclient cache file – it seemed to have the right IP addresses, i.e. one for the Tomcat services and another for the Joseki host. However, DynDNS showed that all hostnames were backed by the same IP address. I suspected the cache, so I did a 'sudo ddclient -force' and this seems to have updated DynDNS correctly

changed my JS files so that all SPARQL queries would be directed to the new Joseki host, and started testing

Getting JSONP where there is only JSON

Now I expected that things wouldn't work – Joseki is on a different host from the one the JS files were loaded from, so making an Ajax call there shouldn't work, should it – unless I was using JSONP – but Joseki doesn't do JSONP!

So, I checked my code, and it is making JSONP calls – I'm doing a jQuery $.ajax call like this:
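The original snippet hasn't survived here, but a reconstruction of the shape of the call described below (the hostname and query are placeholders, not the real ones; building the options separately just makes the moving parts visible):

```javascript
// Hypothetical reconstruction of the $.ajax call in question.
var sparql = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
var remoteurl = "http://joseki.example.com/sparql" +
                "?query=" + encodeURIComponent(sparql) +
                "&output=json";            // ask Joseki for JSON results

var options = {
  url: remoteurl,
  dataType: "jsonp",                       // this is what triggers the <script>-tag trick
  success: function (data) {               // handed the SPARQL JSON results
    console.log(data);
  }
};

// In the page: $.ajax(options);
```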

So, what's going on? Have I not understood this whole cross-domain thing [4], or is jQuery doing something strange?

Well, it turns out that I've forgotten a trick I used to get this to work before: what's actually going on is this.

jQuery, rather than using XMLHttpRequest, is making a DOM call to insert a <script> tag (which can make cross-domain calls), because I've specified dataType:"jsonp".

The URL for the script (specified in url:remoteurl) uses my new hostname – but it happens to include "&output=json" and the necessary SPARQL, of course. [5]

The script tag gets processed, making the GET call to the remote URL; the SPARQL runs on the remote/cross-domain server, and the JSON response is processed by the callback specified in the success:callback option.
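For reference, the standard JSONP handshake that dataType:"jsonp" drives can be simulated without a browser at all. A sketch (all names are mine; a cooperating server wraps its JSON in a call to the named callback, which is the part Joseki doesn't do):

```javascript
// Simulate the JSONP round trip: register a global callback, build the
// script URL, then "run" the script a cooperating server would send back.
var pending = {};
var counter = 0;

function jsonpRequest(baseUrl, cb) {
  var name = "jsonp_cb_" + (counter++);
  pending[name] = cb;
  // A browser would now insert <script src="...&callback=name"> into the DOM.
  return baseUrl + "&callback=" + name;
}

// A JSONP-aware server replies with "name({...json...})"; executing that
// script is what invokes the callback:
function fakeServerScript(url, json) {
  var name = url.split("&callback=")[1];
  pending[name](json);
  delete pending[name];
}

var result;
var url = jsonpRequest("http://joseki.example.com/sparql?query=...&output=json",
                       function (data) { result = data; });
fakeServerScript(url, { ok: true });
```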

If you change it to a JSON call – dataType:"json" – the request is made (and visible in the remote access logs) but the response is aborted; I'm not sure if it's the browser doing this, or jQuery.

So, hey presto: JSONP where the server does not explicitly support it. However, it won't work with POST, of course (script tags only make GETs), so for SPARQL update or insert it will be an issue. CORS really should be used for that.

Back to the real topic though: CORS. I installed the code [3], modified with some more debug logging so I could see what was going on. Having changed the client JavaScript to make a jQuery $.ajax call with dataType:"json" rather than "jsonp", I expected this to work straight away – after all, Gecko in Firefox 3.6 does the hard work with the headers [6] for "simple" requests (no credentials, no custom headers, and a GET, HEAD, or form-style POST), so jQuery using XMLHttpRequest should be fine – but it was not. This turned out to be a false negative though, as my broadband provider was being rubbish that day, and when I switched to my rubbish 3G dongle it timed out so often that it looked like failure.
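"Simple" has a specific meaning here. A rough classifier, my own sketch of the rules as I understand them from the Mozilla docs (anything non-simple triggers an OPTIONS preflight first):

```javascript
// Rough sketch of the CORS "simple request" rules.
var SIMPLE_METHODS = ["GET", "HEAD", "POST"];
var SIMPLE_CONTENT_TYPES = ["application/x-www-form-urlencoded",
                            "multipart/form-data", "text/plain"];

function isSimpleRequest(method, headers) {
  if (SIMPLE_METHODS.indexOf(method) === -1) return false;
  for (var name in headers) {
    var lower = name.toLowerCase();
    if (lower === "content-type") {
      // POSTs only stay simple with a form-style content type
      if (SIMPLE_CONTENT_TYPES.indexOf(headers[name]) === -1) return false;
    } else if (["accept", "accept-language", "content-language"].indexOf(lower) === -1) {
      return false;                        // custom header -> preflight needed
    }
  }
  return true;
}
```

A plain jQuery GET with dataType:"json" falls squarely in the simple category, which is why the browser sends it without ceremony and only gates the *response* on the server's headers.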

Now the strange thing is that when I remove the CORSFilter servlet mapping from web.xml, jQuery still sends the dataType:"json" request, Joseki receives it, but the response is never processed by the callback. A Mozilla Hacks post [7] says this:

In Firefox 3.5 and Safari 4, a cross-site XMLHttpRequest will not successfully obtain the resource if the server doesn’t provide the appropriate CORS headers (notably the Access-Control-Allow-Origin header) back with the resource, although the request will go through. And in older browsers, an attempt to make a cross-site XMLHttpRequest will simply fail (a request won’t be sent at all).

Reverting to having the filter in place seems to fix things, but I'm still not 100% convinced that everything is correct. I suppose "will not successfully obtain the resource" is vague enough to be an acceptable explanation for what happens when I don't have CORSFilter in place, but I would have thought that sending the request and putting load on the network and the target server wasn't something Mozilla really wanted to happen.

But when it is in place, the CORSFilter is getting the Origin header and setting the Access-Control-Allow-Origin response header, so it's behaving. I expect it will be different in a range of other browsers (i.e. Internet Exploder). So for now: it's not broken, and I'm not going to fix it any more. YMMV 🙂

And by the way, a t1.micro on EC2 isn't really up to it, even for a smallish dataset of 340k triples. It copes, just, but you get what you pay for here.