MathML browser extensions

It's really annoying that there still isn't universal support for MathML. Things have got better recently, with Safari finally supporting it. Of the major browsers, though, two still don't.

First, IE. It doesn't matter. It is only used in three cases: to download a browser that doesn't suck; by someone who is stupid, ignorant or malicious; and by people who need to compare browsers.

Second, Chrome. Despite its lack of MathML, Chrome is IMO the best browser out there. It works on multiple platforms, has lots of extensions available, and has very good support for synchronising things like bookmarks and open pages across devices.

And now there's a plugin for it to support MathML! And of course Chrome supports it without a plugin on iOS, because it uses the Safari rendering engine there.

Update: Safari on the desktop still doesn't render it particularly well, despite it working just fine on my phone.

New journals

My main journal has been mostly filling up with book reviews, which tends to hide all the other content, so I have decided to split things up a bit.

All my reviews and cooking posts will now appear in separate dedicated journals, and will shortly disappear from the default view of my main journal. However, they will still be available at the old URLs if you link to any particular post or keyword, including if you link to keyword-specific RSS feeds.

Star ratings re-revisited

I did, very briefly, consider a completely different rating system for my reviews, instead of just awarding 0 to 5 shiny gold stars.

I considered rating books out of ten on several axes - for example, entertainment, literary merit, imagination, consistency. I would then combine them by treating those scores as the co-ordinates of a point in an N-dimensional space, the overall rating being the distance of that point from the origin, or equivalently, they are components of a velocity vector in an N-dimensional space. Let me give a couple of examples:

The Quantum Thief might score 8/10 for entertainment, 10/10 for literary merit, 9/10 for imagination, and 10/10 for consistency. The score, then, is sqrt(8^2 + 10^2 + 9^2 + 10^2) = 18.6. A perfect score on those axes would be sqrt(4 * 10^2) = 20. So to normalise to a score out of ten we divide by 2, giving 9.3/10. I actually gave it 5/5.

A Mighty Fortress, on the other hand, might get 5/10 for entertainment, 2/10 for literary merit, 2/10 for imagination, and 8/10 for consistency, for a score of 9.8, which normalises to 4.9/10. I actually gave it 2/5.
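For what it's worth, the arithmetic is easy to sketch in Python (the function name is my own invention, not anything this site actually runs):

```python
import math

def combined_rating(scores, axis_max=10):
    """Combine per-axis scores into one rating by treating them as
    coordinates of a point and taking its distance from the origin,
    normalised back to a score out of axis_max."""
    distance = math.sqrt(sum(s * s for s in scores))
    perfect = math.sqrt(len(scores) * axis_max ** 2)
    return axis_max * distance / perfect

# The Quantum Thief: 8, 10, 9, 10 -> 9.3/10
print(round(combined_rating([8, 10, 9, 10]), 1))
# A Mighty Fortress: 5, 2, 2, 8 -> 4.9/10
print(round(combined_rating([5, 2, 2, 8]), 1))
```

The normalisation means a perfect score on every axis still comes out at exactly the maximum.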

There are at least three obvious reasons why I didn't go with this.

Maximum marks on one axis get you halfway to perfection with four axes, even closer with fewer. I don't want to give undue weight to good marks on any one axis. We could perhaps solve this by making it harder to attain maximum velocity in any direction the closer you get to the maximum. The physicists in the audience may now run away screaming;

different types of book require different axes, e.g. fiction vs textbook vs biography;

it over-complicates things, and is just a poor attempt to hide how subjective reviews are. Note that in the numbers above, I fudged the individual axis scores for both books so they'd mostly agree with the scores I actually gave :-)
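For any physicists still in the audience: the "harder to attain maximum velocity" fudge could borrow the special-relativistic velocity-addition formula, with the maximum score playing the part of c. A toy illustration only, not a scheme I ever used:

```python
def relativistic_add(u, v, c=10.0):
    """Special-relativity velocity addition: the combined value
    approaches c (here, the maximum score) but never exceeds it."""
    return (u + v) / (1 + u * v / c**2)

print(relativistic_add(5, 5))    # 8.0, not 10
print(relativistic_add(9, 9))    # ~9.94: near-maximal scores saturate
print(relativistic_add(10, 10))  # exactly 10.0: the cap is unbeatable
```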

Star ratings revisited

Just over a year ago I started awarding books and things that I reviewed shiny gold stars. I also retrospectively scattered stars on some of my older reviews.

I thought it would be a good idea to see how many of each I'm awarding, and so how well I'm sticking to my rating system. I'm expecting a normal distribution, with the mean somewhat above 3 stars to reflect the fact that I deliberately don't read shite, and that lots of what I read is because other people have raved about it. Well, the results are in ...

5 stars: 17
4 stars: 24
3 stars: 24
2 stars: 19
1 star: 1
0 stars: 0

I think this is good. It's roughly what I'd expect given my reviewing criteria and the small number of options available. If I had a larger scale to work with - if, say, I was awarding marks out of 20 - I'd expect a smoother drop-off, and at both ends instead of just at the bottom end.
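Taking those counts as running from 5 stars down to 0, the mean comes out a little over 3, just as predicted (a quick check, not part of the journal software):

```python
counts = {5: 17, 4: 24, 3: 24, 2: 19, 1: 1, 0: 0}  # stars -> number of reviews
total = sum(counts.values())
mean = sum(stars * n for stars, n in counts.items()) / total
print(total, round(mean, 2))  # 85 reviews, mean 3.44
```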

Meet Bramble

Until recently, this 'ere blog was run using the excellent Bryar software, which I maintained after its original author stopped supporting it when he stopped being a programmer and went off to Save The Heathen. Bryar fitted my needs and worked well, but it had several features that I just never used, and others that were never fully implemented. I added a few features that I wanted, but it was getting harder and harder to hack on as my needs diverged more and more from Simon's.

So a few weeks ago, I wrote my own replacement from scratch. It's called Bramble, because areas of bramble bushes are sometimes called a briar patch. You probably didn't notice when I deployed it, because it supports all the old URLs so nothing broke*. And by leaving out support for stuff that I never used anyway, it's become easier to add shiny new features. For example, it was about 30 seconds work to make it support wildcards in IDs, so if you want to see all my book reviews you can click here. Naturally, these views of the data are also available in RSS.

* you may have wondered why it re-published everything one night in the RSS feeds - that's because the canonical URLs of all my posts changed. They now look REST-ish, like http://www.cantrell.org.uk/david/journal/id/remixing-insanity instead of http://www.cantrell.org.uk/david/journal/index.pl?id=remixing-insanity. The old URLs will still work though.

Webkit and ASCIIMathML

As you will no doubt know by now, I occasionally perpetrate mathematics. But it sucks to have to say in something like this "take the product of p(i)^int(n/p(i)) for i=1 to i=Φ(n)". It would be much better if I could embed a proper formula.

There's a standard way of doing this, called MathML, and it's fucking horrible. And in any case, browser support for MathML is piss-poor. However, it's getting better - Firefox now supports it fairly well, and Webkit does too, although not quite as well as Firefox. There are also a few useful tools for making MathML suck less, in particular ASCIIMathML, which I am now using.

Using that, I type the above equation thus:

\prod_(i=1)^(\phi(n))p(i)^(\lfloorn/(p(i))\rfloor)

and it renders thus in your browser:

`\prod_(i=1)^(\phi(n))p(i)^(\lfloorn/(p(i))\rfloor)`

It should render something like this: [image of the correctly rendered formula]

Note that at the time of writing, Webkit doesn't properly render the floor(n/p(i)) as a superscript to p(i).
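For reference, the same formula written in LaTeX (which this journal doesn't use, but which may be more familiar) would be:

```latex
\prod_{i=1}^{\phi(n)} p(i)^{\left\lfloor n / p(i) \right\rfloor}
```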

Even with the flaws in common browsers (and the complete lack of support in many, including Safari) I'm going to start using it, because it's just so damned useful. Webkit and Firefox support it, which is good enough for me, and because Webkit is really just the nightly builds of Safari, we can expect a near-future release of Safari to support it too.

Incidentally, this journal entry exposes a bug in ASCIIMathML as well as in the current Webkit - can you spot it?

Update: it took the author of ASCIIMathML mere hours to respond to my bug report with a fix. Makes me rather ashamed of some of the bugs that have been mouldering in my RT queue for over a year :-)

Buy my stuff!

Negative keywords

Nearly three years ago I added keyword support to this 'ere journal. Well, now it supports negative keyword filtering. So if you want to see posts that are not tagged "geeky", for example, here's the linky.

Ill

I am ill. I've been ill since Thursday, with a cold. You're meant to be able to cure a cold with [insert old wives' tale remedy here] in 5 days, or if you don't, it'll clear itself up in just under a week. So hopefully today is the last day.

So what have I done while ill?

On Friday I became old (see previous post), and went to the Byzantium exhibition at the Royal Academy. It was good. You should go.

Saturday was the London Perl Workshop. My talk on closures went down well, and people seemed to understand what I was talking about. Hurrah! I decided that rather than hang around nattering and going to a few talks, I'd rather hide under my duvet for the rest of the day.

I mostly hid on Sunday too, and spent most of the day asleep. In a brief moment of productivity, I got my laptop and my phone to talk to each other using magic interwebnet bluetooth stuff. I'd tried previously without success, but that was with the previous release of OS X. With version X.5 it seems to Just Work, so no Evil Hacks were necessary.

The cold means that I can't taste a damned thing, not even bacon. So now I know what it's like to be Jewish. Being Jewish sucks.

And today, I am still coughing up occasional lumps of lung and making odd bubbling noises in my chest, although my nasal demons seem to be Snotting less than they were, so hopefully I'll be back to normal tomorrow.

35

Today I am 35, and, having attained half of my allotted three score and ten in this vale of tears, am officially Over The Hill.

While I have noticed that suddenly all the Yoof are "having it large" with their ghetto blasters and hard core pornography (it's amazing how much I just didn't notice yesterday when I was a 34 year old youngster), I am pleased to report that I have not yet shit myself.

Thanks, Yahoo!

[originally posted on Apr 3 2008]

I'd like to express my warm thanks to the lovely people at Yahoo and in particular to their bot-herders. Until quite recently, their web-crawling bots had most irritatingly obeyed robot exclusion rules in the robots.txt file that I have on CPANdeps. But in the last couple of weeks they've got rid of that niggling little exclusion so now they're indexing all of the CPAN's dependencies through my site! And for the benefit of their important customers, they're doing it nice and quickly - a request every few seconds instead of the pedestrian once every few minutes that gentler bots use.

Unfortunately, because generating a dependency tree takes more time than they were allowing between requests, they were filling up my process table, and all my memory, and eating all the CPU, and the only way to get back into the machine was by power-cycling it. So it is with the deepest of regrets that I have had to exclude them.
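For reference, the exclusion rules that a well-behaved crawler would have honoured look something like this (Yahoo's crawler identified itself as Slurp; my actual robots.txt may have differed):

```
User-agent: Slurp
Disallow: /
```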

I recognise all the words, even the egregiously mis-typed ones, but I have very little idea what on earth she's talking about. And then in response to this much older post she wrote the utterly incomprehensible:

thanksANot skynet - who has time to RTFM - nerds. Well fine - who wants to read "novels" anyhow; give me a dictionary any day, a back-page, the www is FullOfReference material. any speech-reader will tell you so. .. HeY /usr or $ sudo/bla/bla - makes my synapses bleed. keep your FM .. thanks for developing the thingie ... U godU-for-knowing-where-to-bloody-well start typing /usr or /root or even for knowing what root the user is pub-blishing and selling their crappy t-shirts and mugs and spanky code. gimme a horse and cart <=;P

Mmmm, spanky code.

These two comments were posted about 50 minutes apart from a Virgin Broadband account in Australia. I can only assume that Kelly has drunk a few too many fermented koalas.

Compact tag clouds

I've decided that using different sized text for the tag cloud works better than using different colours, but it has the drawback of eating a lot of screen space. I need to find an algorithm to pack the text in more efficiently so that it doesn't waste so much vertical space.

Actually, I already have an algorithm to do it in my head. Unfortunately it's, umm, rather inefficient. In fact I think it's O(N!) which would be fine if I only had 10 tags, but I have 52 so far, and am still occasionally adding tags.

52! is roughly 8e67. That's 8 followed by 67 zeroes.

Of course, this is a variant on the rectangular packing problem, which is itself a variant of the knapsack problem, which is NP-complete, so I'm going to have to come up with a heuristic that will return a reasonable (but not optimal) solution quickly.

I've decided that the best heuristic is to ask for pointers to code that other people have written that will do the job for me :-)

My constraints are that I need to fit an arbitrary number of rectangles of arbitrary size into a rectangle of fixed width but whose height can vary as necessary, with minimum wasted space. And I'd prefer a perl or javascript solution.
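In the meantime, the sort of quick heuristic I have in mind is a greedy "shelf" packer: sort the rectangles by decreasing height, then fill fixed-width rows left to right. This Python sketch is illustrative only, not code from this site:

```python
def shelf_pack(rects, max_width):
    """Greedy 'shelf' heuristic for fixed-width rectangle packing.
    rects is a list of (width, height) pairs. Returns (placements,
    total_height), where placements are (x, y, w, h) tuples."""
    placements = []
    x = y = shelf_h = 0
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if x + w > max_width:      # row full: start a new shelf below
            y += shelf_h
            x, shelf_h = 0, 0
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)  # tallest item sets the shelf height
    return placements, y + shelf_h

tags = [(40, 12), (90, 30), (60, 20), (120, 30), (30, 12), (70, 20)]
layout, height = shelf_pack(tags, max_width=200)
print(height)  # 80
```

It won't find the optimal packing, but it runs in O(n log n) rather than O(n!), which for 52 tags is rather the point.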

To my anonymous benefactor ...

Today I got a copy of "Haskell: the Craft of Functional Programming" in the post, which, while it's been on my wish list for ages, I'd not got round to actually buying. So - many thanks to whoever got it for me, it is most appreciated.

Face Crack

Oops. I eventually gave in and got a Face Crack account. In my defence, I dunnit so I can play Go online. I find the interface much better than KGS, and it's nice to be able to play a few moves and then leave the game for a bit and come back later, which isn't practical on KGS.

New Bryar feature

I just added support for specifying keywords (which the cool kids call "tags") in your Bryar postings. Please see if you can break things while I test them on this journal.

Please note that I've not yet added keywords to my old postings. That will take quite a while to do, so please be patient. At the time of writing, keywords 'bryar' and 'whisky' should do vaguely useful things.

Journal now with 30% more shiny!

Look on the right! The list of all my archived posts has been broken down by year. By default you'll see archived posts from the current year, clicky-clicky on the numbers at the top to get previous years.

It's all done with CSS and Javascript, so I only needed to fiddle with the templates and not with the application that drives the journal. And of course it degrades gracefully and everything is still available in non-CSS non-Javascript browsers. I even tested it in lynx. Hooray!

I've also slimmed the page down considerably by using a named style for journal entry links instead of embedding style info in the page for every link, and for archived entries have reduced the length of their links considerably using the <BASE HREF> tag. That gave a weight reduction of something like 40%.
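The <BASE HREF> trick looks like this (hypothetical values, not my actual markup):

```html
<base href="http://www.cantrell.org.uk/david/journal/">
<!-- relative links now resolve against the base, so this short href
     expands to http://www.cantrell.org.uk/david/journal/?id=some-post -->
<a href="?id=some-post">some post</a>
```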

Buh-bye Google

In a month and a half, Google's ads netted me a grand total of US$5.87. Translated into English, that's just £3.20. No matter how amusing I may find some of the ads, it's not worth bothering with. So they're gone.

Stupid spammers

In the last month, there have been well over 400 attempts at spamming this journal. All have failed. And yet the spammers still try. And I get email notifying me each time, because there's always a possibility that a legitimate comment might get classified as spam and need to be manually approved.

Ah well, I have the IP for each of those 400-odd spams, and using routeviews.org I can easily turn them into a considerably shorter list of netblocks. And then auto-create a shitload of Deny from rules. 104 of them, to be precise. It will be interesting to see if the spammers notice their lack of access and keep trying.
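The aggregation step can be approximated with Python's ipaddress module. This is a toy stand-in for the routeviews.org lookup: it uses documentation addresses rather than real spammers, and lumps addresses into their enclosing /24s rather than true BGP netblocks:

```python
import ipaddress

spam_ips = ["203.0.113.5", "203.0.113.77", "203.0.114.9", "198.51.100.20"]

# Enclosing /24 for each address, then merge any mergeable blocks.
nets = {ipaddress.ip_network(ip + "/24", strict=False) for ip in spam_ips}
blocks = list(ipaddress.collapse_addresses(sorted(nets)))

for block in blocks:
    print(f"Deny from {block}")
```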

When ads go wrong

I've been keeping an eye on things, and most of the time, Google puts pretty good well-targeted ads on these pages. The only real exception was on my page about spam, which kept getting ads for dodgy anti-spam products, which was clearly silly, so I've removed 'em from that page.

However, on occasion it goes amusingly wrong. Not Google's fault, but some idiot has obviously bought an ad for thousands of keywords without thinking about it, and so this 'ere journal is currently advertising ...

Now with ads

I just added Google ads to the rest of this site, mostly as an experiment to see how it worked. If I make any pennies from it that'll be nice too, but I'm not really expecting to; my traffic's too low.

Adding them across the whole site was REALLY easy because everything's templated, and I like the control over colour that Google give. The only reason they're not here in the journal too is cos that's a different template. Anyway, the constantly changing subject matter would only confuse the poor dears.

I, pornographer

A company called Sonicwall, who provide dodgy internet filtering disservices, have determined that this site is pornographic. So that's DHA's next job sorted for when the bottom falls out of the programming market.

I have emailed them asking for an explanation. I don't expect to get one.

Incidentally, this is the second time I've been incorrectly accused of being a pornographer. A pint for whoever can tell the amusing story of the first time in a comment here!

Welcome to the new Bryar thing

I'll now be writing my journal here instead of at Livejournal. All my old Livejournal posts are archived here for convenience, but with commenting disabled. You can, of course, still comment at the old site.