It was initially surprising how the battle lines were drawn in the debate around harmonisation. On the one hand there were those who felt governments should step in and regulate which technologies were allowed in different spectrum bands to achieve harmonisation across Europe; on the other hand there were those who thought the market should be left unfettered to decide for itself. That divide isn’t surprising. What floored me was who was on which side. The regulator (Ofcom) wanted to keep as light a touch as possible so that the market was given free rein. The industrialists wanted heavy-handed intervention. This was strange to the point that the (excellent) speaker from Ofcom (William Webb, whom Richard once had as a visitor to the lab) put up a slide as a homage to Machiavelli! Luckily I lived through the Thatcher years so nothing fazes me 😉 As the discussion progressed I did start to work out why the protagonists had chosen the sides they had. William paraphrased this Machiavelli quote:

There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. For the reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries … and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it.

So, the current market incumbents want increased regulation because it secures their markets and lets them more easily build pan-European services. But Ofcom want the market itself to decide, since they feel new entrants are more likely to innovate new services under a lighter regime: new entrants who currently have no voice, either because they are too small to dedicate time to lobbying, too cautious to reveal their game-plan to their competitors, or simply unaware that regulators are looking for consultation.

Another interesting point William made was around licence-exempt spectrum and innovation. We’ve argued that leaving portions of the radio frequency spectrum unlicensed encourages innovation and the emergence of new markets, as happened with Wi-Fi. William agreed that there was innovation in unlicensed spectrum, but argued that there was no evidence of *more* innovation there than in licensed spectrum. Just because an industry is old, he argued, doesn’t mean it isn’t innovative, and so licensed spectrum is littered with innovative services. William went further and argued that the value generated by services in licensed spectrum is actually greater than that generated in unlicensed spectrum. I can’t wait to sit down with colleagues more economically minded than I am and work these issues through.

The event closed with John Burns from Aegis Systems talking about spectrum use in the public sector. Spectrum ownership by the military is seen as an under-utilised national resource, which is true, but John did a great job of showing us which parts of the spectrum are in use, and he explained why technologies like radar need seemingly extravagant swathes of spectrum in order to work properly.

There were other issues that struck me during the day. One was the mismatch between my expectation of what would be discussed and what actually was. Looking at the subtitle for the event, “what users want, what’s becoming available and technologies that will make a difference”, I’d expected lots of time to be spent discussing users, by which I meant consumers of spectrum-enabled services. But that’s not what “users” meant. For this audience the users were service providers, broadcasters and cellular operators who ‘use’ spectrum to provide services to consumers. I guess I should have spotted something was awry when, in an event of over 60 delegates, I was the only one wearing jeans! As Dorothy might say, “Toto, I’ve a feeling we’re not in Redmond anymore”.

This morning was our monthly HCI Reading Group that Alan Blackwell runs under the Crucible umbrella. Simon and Luke Church presented Jeannette Wing’s “Computational Thinking” – definitely the shortest paper we’ve had so far.
Simon’s approach to the paper was educational, since he’s been thinking about how we teach ‘computing’ to young kids. There must be something in the ether about this as I had a similar conversation with Steve Drucker on his last visit to Cambridge. Simon was particularly impressed with Computer Science Unplugged.
Luke’s approach to the paper was more political. We talked about the potential negative impact on society of uncritical utopian views of computational thinking.
One of the most fascinating critiques of computational thinking came up when Alan was talking about ambiguity. We discussed whether (and why) Computer Science was particularly bad at dealing with ambiguity, and used poetry as an example. Poetry cannot be studied as if it were objective; the ambiguity of a poem can only be studied in relation to the subjectivity of the reader.

We have Tuck Leong interning with Richard at the moment continuing his research into the ways in which randomness may be used to help people experience their own content afresh. For his doctorate he’s mainly looked at music (i.e. shuffle listening experiences). By a weird piece of synchronicity (what an apposite concept here) I’d also been thinking about randomness recently:
i) I had a long and interesting talk with Margaret Pearson about the I Ching recently, after we bumped into each other at the Oast House Quaker Meeting in Cambridge. (Which reminds me: I promised her to type out the bits of His Dark Materials where Pullman references the I Ching).
ii) I was reading Charles Petzold’s coding blog a few weeks ago and came across a post he wrote on randomness, in which Petzold says:
“Wouldn’t it have been interesting for Steve Wozniak or Don Estridge to have also decided that every computer needed a hardware random number generator, and for that feature to have become a standard part of the machines we use today?”
Though sadly he doesn’t go on to speculate on what applications that might enable.
iii) The recent John Cage piece I saw Stephen Gutman perform at Kettle’s Yard reminded me of Cage’s use of randomness, from performers throwing dice to decide what to do next, through to the I Ching computer program on which the lecture on anarchy I saw Cage give in Islington relied.
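On Petzold’s what-if in (ii): it has, in a quieter way, largely come true. Modern CPUs do ship hardware random number generators (Intel’s RDRAND instruction, for example), and operating systems fold that hardware entropy into the random pool they expose to programs. A minimal Python sketch of tapping that source (the track names are just illustrative):

```python
import os
import secrets

# os.urandom draws from the operating system's entropy pool, which on
# modern machines is seeded in part by on-chip hardware generators
# such as Intel's RDRAND instruction.
raw = os.urandom(8)   # eight cryptographically strong random bytes

# The secrets module wraps the same source for everyday use,
# e.g. picking an unpredictable next item from a playlist.
track = secrets.choice(["track-a", "track-b", "track-c"])
```

So the applications Petzold didn’t speculate about range from the mundane (session keys, lottery draws) to exactly the kind of serendipitous shuffle experiences Tuck studies.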
Tuck has similar inspirations, in fact some of Tuck’s fieldwork-probes sound like wonderful performances of randomness in their own right. So we are having great fun planning what to build and test around photo browsing.
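The mechanics behind these shuffle experiences are pleasingly simple. As a sketch, assuming a hypothetical list of photo filenames standing in for an album, Python’s standard library will do the randomising for us:

```python
import random

# Hypothetical filenames standing in for a photo album.
photos = ["beach.jpg", "party.jpg", "hike.jpg", "wedding.jpg"]

# random.shuffle implements a Fisher-Yates shuffle in place, giving
# every ordering of the list equal probability.
order = photos[:]        # copy, so the album's original order survives
random.shuffle(order)

# A shuffled "slideshow" is then just iteration over the new order.
for photo in order:
    print(photo)
```

The interesting design questions, of course, are not in the shuffle itself but in how the resulting juxtapositions make people re-see their own photos.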

I’d assumed that cassette recorders were something I wouldn’t see again. But they came up yesterday in Will’s guitar lesson. Will has electric guitar lessons after school with an amazing teacher called Dan Collins. Yesterday he was teaching Will a Latin chord sequence to practise his rhythm guitar skills, and a Phrygian-shaped scale to improvise over the chord sequence. Obviously Will cannot play the chord sequence while improvising from the scale, so Dan recommended an old-fashioned shoe-box speaker-on-the-top cassette recorder. Apparently you can pick them up cheaply on eBay, or Argos even still have a Sony one. Dan favours them over the digital equivalent because of ease of use – something about the physicality of the tape and the playback and record buttons. It put me in mind of our recent reading group choice, Philip Faulkner and Jochen Runde’s “Getting to Grips with Technology”, in which they cover the use of familiar form and interaction techniques in the design of digital equipment for DJs.