30 November 2010

It will not have escaped your notice that the patent system has been the subject of several posts on this blog, or that the general tenor is pretty simple: it's broken, and nowhere more evidently so than for software. Anyone can see that, but what is much harder is seeing how to fix it given the huge vested interests at work here.

29 November 2010

A couple of days ago I wrote about the deal between the regional government of Puglia and Microsoft, noting that it was frustrating that we couldn't even see the terms of that deal. Well, now we can, in all its glorious officialese, and it rather confirms my worst fears.

Not, I hasten to add, because of the overall framing, which speaks of many worthy aims such as fighting social exclusion and improving the quality of life, and emphasises the importance of "technology neutrality" and "technological pluralism". It is because of how this deal will play out in practice.

That is, we need to read between the lines to find out what the fairly general statements in the agreement will actually mean. For example, when we read:

[joint analysis of the technological discontinuities underway and of the state of the art in research materials and IT development, both on the desktop and in the data centre (for example, cloud computing and mobile)]

will Microsoft and representatives of the Puglia administration work together to discuss the latest developments in mobile, on the desktop, or data centres, and come to the conclusion: "you know, what would really be best for Puglia would be replacing all these expensive Microsoft Office systems by free LibreOffice; replacing handsets with low-cost Android smartphones; and adopting open stack solutions in the cloud"? Or might they just possibly decide: "let's just keep Microsoft Office on the desktop, buy a few thousand Windows Mobile 7 phones (they're so pretty!), and use Windows Azure, and Microsoft'll look after all the details"?

[To encourage the educational and teaching world to access and use the most up-to-date IT systems]

will this mean that teachers will explain how they need low-cost solutions that students can copy and take home so as not to disadvantage those unable to pay hundreds of Euros for desktop software, and also software that can be modified, ideally by the students themselves? And will they then realise that the only option that lets them do that is free software, which can be copied freely and examined and modified?

Or will Microsoft magnanimously "donate" hundreds of zero price-tag copies of its software to schools around the province, as it has in many other countries, to ensure that students are brought up to believe that word processing is the same as Word, and spreadsheets are always Excel? But no copying, of course ("free as in beer" doesn't mean "free as in freedom", does it?), and no peeking inside the magic black box - but then nobody really needs to do that stuff, do they?

[specify and communicate to the Region the initiatives and resources (for example: technical personnel and specialists, software necessary for the joint activities) which it intends to make available for the creation of the joint Microsoft-Regional centre of competence]

are we to imagine that Microsoft will diligently provide a nicely balanced selection of PCs running Windows, some Apple Macintoshes, and PCs running GNU/Linux? Will it send along specialists in open source? Will it provide examples of all the leading free software packages to be used in the joint competence centre? Or will it simply fill the place to the gunwales with Windows-based, proprietary software, and staff it with Windows engineers?

The point is that the "deal" with Microsoft is simply an invitation for Microsoft to colonise everywhere it can. And to be fair, there's not much else it can do: it has little deep knowledge of free software, so it would be unreasonable to expect it to explore or promote it. But it is precisely for that reason that this agreement is completely useless; it can produce one result, and one result only: recommendations to use Microsoft products at every level, either explicitly or implicitly.

And that is not an acceptable solution because it locks out competitors like free software - despite the following protestations of support for "interoperability":

[Microsoft shares the approach adopted by the Puglia Region, and is an active part of initiatives at an international level to promote the interoperability of systems, independently of the technology used.]

In fact, Microsoft is completely interoperable only when it is forced to be, as was the case with the European Commission:

In 2004, Neelie Kroes was appointed the European Commissioner for Competition; one of her first tasks was to oversee the fine imposed on Microsoft by the European Commission, known as the European Union Microsoft competition case. This case resulted in the requirement to release documents to aid commercial interoperability and included a €497 million fine for Microsoft.

That's clearly not an approach that will be available in all cases. The best way to guarantee full interoperability is to mandate true open standards - ones made freely available with no restrictions, just as the World Wide Web Consortium insists on for Web standards. On the desktop, for example, the only way to create a level playing-field for all is to use products based entirely on true open standards like Open Document Format (ODF).
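The openness of ODF is easy to see in practice: an OpenDocument text file is just a zip container holding publicly specified XML, so anyone can generate or inspect one with ordinary tools, no permission or licence required. Here is a rough sketch in Python (simplified for illustration: a complete .odt also needs a META-INF/manifest.xml entry and style information):

```python
import zipfile

# Minimal content.xml using the documented OASIS OpenDocument namespaces.
# Real documents carry much more, but the skeleton is plain, open XML.
CONTENT_XML = """<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
    office:version="1.2">
  <office:body><office:text>
    <text:p>Hello from an open standard.</text:p>
  </office:text></office:body>
</office:document-content>"""

def write_minimal_odt(path: str) -> None:
    """Write a skeletal .odt: a zip whose first, uncompressed entry
    declares the mimetype, followed by the document content."""
    with zipfile.ZipFile(path, "w") as z:
        # ZipInfo defaults to ZIP_STORED, which the spec requires
        # for the mimetype entry.
        z.writestr(zipfile.ZipInfo("mimetype"),
                   "application/vnd.oasis.opendocument.text")
        z.writestr("content.xml", CONTENT_XML)

write_minimal_odt("hello.odt")
with zipfile.ZipFile("hello.odt") as z:
    print(z.read("mimetype").decode())
    # application/vnd.oasis.opendocument.text
```

Because the format is specified openly and royalty-free, any vendor, or any citizen of Puglia, can implement exactly this without negotiating with anyone.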

If the Puglia region wants to realise its worthy aims, it must set up a much broader collaboration with a range of companies and non-commercial groups that represent the full spectrum of computing approaches - including Microsoft, of course. And at the heart of this strategy it must place true open standards.

Update: some good news about supporting open source and open standards has now been announced.

The announcement that Attachmate would acquire Novell for $2.2 billion has naturally provoked a flurry of comments and analyses in the free software world. But it's important to pick apart the news to find out what is truly new – and to distinguish between what this changes, and what remains the same.

27 November 2010

There's a nice little argument bubbling away over in the south of Italy. It concerns the decision of Nichi Vendola, the president of Puglia, to sign a deal with Microsoft. The motivation, according to Signor Vendola, in the translation of Marco Fioretti, who has been tracking this episode, is that it:

“represents the beginning of an important collaboration partnership, whose goal is to promote innovation and excellence in creation, development and usage of ICT technologies and solutions, adding value to the role of the Region in direct relationships with the biggest international groups of that sector”.

So far, so depressingly normal you might think. Well, it would be, were it not for the fact that the party that Signor Vendola leads in the region, Sinistra Ecologia Libertà (SEL), has the following to say on the subject of technology:

[We believe that speaking about copyleft, free software and Net neutrality is a necessity for a modern party, just as it is to speak about work, the environment, the economy and civil rights.

...

For this reason, we have adopted the expression "the Ecology of Knowledge", because we believe that all those movements which are opposed to the privatisation of knowledge must unite. Those who are opposed to gene or software patents, or ask for a radical reform of copyright law, and those who support free software share a common idea: that culture must be free.]

Fine words, but hard to square with a deal that places an entire region in the very unfree grip of Microsoft, hardly a great supporter of free software or opponent of software patents.

Understandably, then, Italian free software activists have been questioning this very inconsistent move, and now Signor Vendola has responded to the barrage of criticism:

[Signing the protocol of understanding with Microsoft has given rise to some perplexity in those who believe that this initiative could put in question free software and the free circulation of knowledge. The temptation to call for a referendum is very strong. Is Microsoft the enemy of Puglia, or of Italy? Or are the other giants of IT? In my opinion, we must look at this dispute with the same courage that helps us decipher the politics of these dark days.]

Whoa, hang on a minute: "courage"? We're talking about making a decision based on the technical facts. But anyway, let's go on and hear what the man has to say in explanation:

[We must admit that in a new century that is opening up to cloud computing, open data government, Net neutrality, and to the collapse in the price of apps, the job of politics is no longer to choose among competitors, but to broaden the motorway of the information society.

The true enemies of 2010 (and perhaps it will be more clear in 2015) are not Windows, Google, Leopard or the iPad. The true enemy is the digital divide in which this country is imprisoned. Less copper and more fibre.]

What on earth is he talking about? After having made an unjustified choice to sign a deal with Microsoft (one whose terms hadn't even been revealed at the time - they are now available: see my detailed comments on the text), he tries simply to avoid the central question "Why?" by saying in true Tony Blair fashion that it is time to move on, and that it's not about competitors, but about the iPad and fibre optic cables, the price of apps and Net neutrality. He then changes subject yet again by bringing in the topic of Italy's digital divide.

Now, closing the digital divide is certainly a hugely important undertaking, but if anything can do that it is *free* software, which can be distributed to everyone in Puglia - to every school, and to every business. Microsoft's offerings are precisely the last thing that will close that digital divide.

Indeed, the divide is there largely *because* of Microsoft. By virtue of its monopolistic hold on the desktop market it has been able to impose artificially high prices on a sector whose marginal costs of production are zero. This implies that the natural price of software is also zero - as is exactly the case for free software. Anything higher than zero makes the digital divide deeper - which means that Microsoft's inflated prices have helped excavate not so much a digital divide as a digital chasm.

So Signor Vendola's bizarre "explanation" of his move - which, of course, is a non-explanation, and the Italian equivalent of saying: "ooh, look, a squirrel" - is in fact a superb reason why he should be supporting open source, just as his party professes to do on its Web site.

However, there is some good news here. And that is the fact that Signor Vendola felt impelled to offer some kind of explanation, however unsatisfactory. This means that he is feeling the effects of the outcry, and knows that he cannot simply ignore it.

The message is clear: Italian free software activists must (a) continue to pile on the pressure until he cancels this deal with Microsoft, and (b) non guardare lo scoiattolo - not look at the squirrel.

26 November 2010

It is a truism that slow-moving law cannot keep up with fleet-of-foot digital technology, so that makes the rare court decision dealing with the details of how people use the Web of particular importance. Here's an interesting case that has just been handed down.

Wikipedia is often regarded as little more than a poor person's encyclopedia, providing a handy reference collection of basic facts. But there's another side that I predict will be recognised increasingly: as a key corpus of texts in languages that lack traditional large-scale publishing to preserve their cultures.

"Some Indian-language Wikipedias are already the largest online repositories of information in their respective languages," Bhati said. "Regular community meetings such as the one we had today in Ahmedabad can help spread the word about our mission."

This facet is even more important for languages with a relatively small number of speakers, or perhaps threatened with outright extinction. Wikipedia acts as a natural focus for the creation of texts in these languages that might otherwise be missing - a repository of linguistic wisdom that can be shared and built on. In this way, it plays an important role not just in spreading knowledge about the world, but also about the languages that people use to talk about that world. (via @klang67)

As I wrote a couple of days ago, the current flood of open data announcements, notably by the UK government, is something of a two-edged sword. It's great to have, but it also imposes a correspondingly great responsibility to do something useful with it.

25 November 2010

A couple of days ago I wrote that ACTA was doomed because its attempts to enforce copyright through even more punitive measures will simply alienate people, and cause more, not less, copyright infringement. Here's indirect support for that view from a rather surprising source: a paper [.pdf] published by WIPO (although it does emphasise "The views expressed in this document are those of the author and not necessarily those of the Secretariat or of the Member States of WIPO").

In the context of enforcement it has the following to say about the continued failure to "educate" (= indoctrinate) people about the sanctity of copyright, noting that it is a lost cause because piracy is so widely accepted today:

The most comprehensive comparative analysis of these issues to date is a 2009 Strategy One study commissioned by the International Chamber of Commerce. Strategy One examined some 176 consumer surveys and conducted new ones in Russia, India, Mexico, South Korea, and the UK. Like nearly all other surveys, Strategy One’s work showed high levels of acceptance of physical and digital piracy, with digital media practices among young adults always at the top of the distribution. The group concluded that “‘hear no evil, see no evil, speak no evil’ has become the norm” (ICC/BASCAP 2009). At this point, such findings should come as no surprise. In the contexts in which we worked, we can say with some confidence that efforts to stigmatize piracy have failed.

There is little room to maneuver here, we would argue, because consumer attitudes are, for the most part, not unformed — not awaiting definition by a clear antipiracy message. On the contrary, we consistently found strong views. The consumer surplus generated by piracy in middle-income countries is not just popular but also widely understood in economic justice terms, mapped to perceptions of greedy US and multinational corporations and to the broader structural inequalities of globalization in which most developing-world consumers live. Enforcement efforts, in turn, are widely associated with US pressure on national governments, and are met with indifference or hostility by large majorities of respondents.

It also makes this rather interesting point about the changing nature of people's music collections:

The collector, our work suggests, is giving ground at both the high end and low end of the consumer income spectrum. Among privileged, technically-proficient consumers, the issue is one of manageable scale: the growing size of personal media libraries is disconnecting recorded media from traditional notions of the collection — and even from strong assumptions of intentionality in its acquisition. A 2009 survey of 1800 young people in the UK found that the average digital library contained 8000 songs, with 1800 on the average iPod (Bahanovich and Collopy 2009). Most of these songs — up to 2/3 in another recent study — have never been listened to (Lamer 2006). If IFPI’s figures are to be trusted, up to 95% are pirated (IFPI 2006).

Such numbers describe music and, increasingly, video communities that share content by the tens or hundreds of gigabytes — sizes that diminish consumers’ abilities to organize or even grasp the full extent of their collections. Community-based libraries, such as those constituted through invitation-only P2P sites, carry this reformulation of norms further, structured around still more diffuse principles of ownership and organization.

What's really fascinating for me here is that it clearly describes the trend towards owning *every* piece of music and *every* film ever recorded. The concept of owning a few songs or films will become meaningless as people have routine access to everything. Against that background, the idea of "stopping" filesharing just misses the point completely: few will be swapping files - they will be swapping an entire corpus.

The whole report is truly exciting, because it dares to say all those things that everyone knew but refused to admit. Here are a few samples of its brutal honesty:

To be more explicit about these limitations, we have seen no evidence — and indeed no claims — that enforcement efforts to date have had any impact on the overall supply of pirated goods. Our work suggests, rather, that piracy has grown dramatically by most measures in the past decade, driven by the exogenous factors described above — high media prices, low local incomes, technological diffusion, and fast-changing consumer and cultural practices.

...

we see little connection between these efforts and the larger problem of how to foster rich, accessible, legal cultural markets in developing countries — the problem that motivates much of our work. The key question for media access and the legalization of media markets, in our view, has less to do with enforcement than with fostering competition at the low end of media markets — in the mass market that has been created through and largely left to piracy. We take it as self-evident, at this point, that US$15 DVDs, US$12 CDs, and US$150 copies of MS Office are not going to be part of broad-based legal solutions.

Fab stuff - even if it is not quite official WIPO policy (yet....) (Via P2Pnet.)

23 November 2010

Last Friday, I went along to what I thought would be a pretty routine press conference about open data - just the latest in a continuing drip-feed of announcements in this area from the UK government. I was soon disabused.

22 November 2010

There is a great new paper out with the title "ACTA as a New Kind of International IP Law-Making":

The ACTA negotiations are important not only for the potential impact of the treaty itself, but for what they can teach us about the dynamics of intellectual property law-making and the structure of the IP treaty framework. This paper draws two broad lessons from the progress of the ACTA to date which, while not entirely new, can be understood in a new light by looking at the detailed development of the ACTA text: (1) that the global IP 'ratchet' is not inexorable; and (2) that the international IP treaty framework is very poorly adapted to developing exceptions. The relevance of these lessons for negotiators, scholars and advocates is also discussed.

It's very thorough, and well-worth reading all the way through. But I'd like to single out the following section as particularly worthy of attention:

there is the question of public perceptions as to the value and fairness of the agreement. A perception that it is fair as between stakeholders is important to IP law, which is not readily ‘self-enforcing’. By this I mean that IP law requires people to self-consciously refrain from behaviours that are common, easy, and enjoyable: infringement is so easy to do and observing IP rights, particularly copyright, involves, particularly these days, some self-denial. IP law therefore needs support from the public in order to be effective, and in order to receive any such support IP law needs to address the needs of all stakeholders. Treaties that strengthen enforcement without addressing the needs of users look unfair and will bring IP law further into disrepute.

I think this is a profound point. As we know, copyright infringement is taking place on a massive scale, especially among younger members of society. It's clear that this is largely because they do not perceive present copyright law as either reasonable or fair, and so they simply ignore it.

ACTA will make copyright law less fair and even more unreasonable. The inevitable consequence will be that people will respect its laws even less, and feel even more justified in doing so. And so we have a paradox: the more that ACTA is put into practice, the more it will weaken the edifice it was supposed to buttress. (Via @StopActaNow @FelixTreguer.)

Openness is inherently political, because it dares to assert that we little people have a right to see what the powerful would hide. There's no clearer proof of that point than the MPs' expenses scandal last year. You might think that was a battle where openness prevailed; sadly you would be wrong, as a recent press release from the Independent Parliamentary Standards Authority (IPSA) reveals.

Musopen is a non-profit dedicated to providing copyright free music content: music recordings, sheet music and a music textbook. This project will use your donations to purchase and release music to the public domain. Right now, if you were to buy a CD of Beethoven's 9th symphony, you would not be legally allowed to do anything but listen to it. You wouldn't be able to share it, upload it, or use it as a soundtrack to your indie film - yet Beethoven has been dead for 183 years and his music is no longer copyrighted. There is a lifetime of music out there, legally in the public domain, but it has yet to be recorded and released to the public.

This is such an eminently sensible idea: releasing music into the public domain that all can use as they wish. I don't think this other project is public domain (anyone know?), but it's still a nice gesture:

Free downloads of the complete organ works of Johann Sebastian Bach, recorded by Dr. James Kibbie on original baroque organs in Germany, are offered on this site.

Here's where the money's coming from:

This project is sponsored by the University of Michigan School of Music, Theatre & Dance with generous support from Dr. Barbara Furin Sloat in honor of J. Barry Sloat. Additional support has been provided by the Office of Vice-President for Research, the University of Michigan.

It's another model that would be good to see utilised elsewhere, ideally with the results being put into the public domain. (Via @ulyssestone, @alexrossmusic)

21 November 2010

A cellist was held at Heathrow Airport and questioned for 8 hours this week. A terrorist suspect? False passport? Drug smuggling? If only it were so dramatic and spectacular. Her crime was coming to the UK with her cello, to participate in a musicology conference organised by the School of Music at the University of Leeds, and it was for this reason that Kristin Ostling was deported back to Chicago. What was the UK Border Agency (UKBA) thinking? That she would sell her cello to earn some cash, or do a spot of moonlighting at some secretive classical music gig, while she was here?

The Conference organiser, Professor Derek Scott informed the Manifesto Club that “She was not being paid a penny for this, but these zealous officers decided that playing a cello is work and, paid or unpaid, she could not be allowed in.”

Lovely logic here: if you are a professional cellist it follows that putting bow to string is work, and therefore not permitted according to the terms of your visa. And as the article explains, it's the same for painters and photographers: if you dare to create a masterpiece here in the UK, you might end up being deported, and blacklisted.

Now, call me old fashioned, but it seems to me that we should actually be *begging* artists to come here and create: it not only enriches the UK's cultural ecosystem, it makes it more likely that other artists and non-artists will want to come to the country to see where these works were spawned, bringing with them all that valuable touristic dosh that everyone seems to be scrabbling after these days.

But the problem is really deeper than this simple loss of earnings. What is really disturbing is the crass way the UK Border Agency equates artistic creation with work: if you act as an artist - even if you are not paid - you are theoretically doing something that should have a price on it. This is really part and parcel of the thinking that everything should be copyrighted and patented - that you can't do stuff for free, or simply give away your intellectual creations.

It's a sick viewpoint that leads to the kind of shaming situations described above. And of course, in the usual way, the people imposing these absurd practices haven't thought things through. After all, if musicians can't play, or artists paint, when they come to the UK, surely that must mean by the same token that visiting foreign mathematicians can't manipulate formulae, and philosophers are forbidden to think here...?

Britons will be forced to apply online for government services such as student loans, driving licences, passports and benefits under cost-cutting plans to be unveiled this week.

Officials say getting rid of all paper applications could save billions of pounds. They insist that vulnerable groups will be able to fill in forms digitally at their local post offices.

The plans are likely to infuriate millions of people. Around 27% of households still have no internet connection at home and six million people aged over 65 have never used the web.

Lord Oakeshott, a Liberal Democrat Treasury spokesman, said: "We must cut costs and boost post offices as much as we possibly can, but many millions of people – not just pensioners – are not online and never will be. They must never be made to feel the state treats them as second-class citizens."

As an out-and-out technophile, I have a lot of sympathy with this move. After all, it's really akin to moving everyone to electricity. But it does mean that strenuous efforts must be made to ensure that everyone really has ready access to the Internet.

And that, of course, is a bit of a problem when the ultimate sanction of the Digital Economy Act is to block people's access (even if the government tries to deny that it will "disconnect" people - it amounts to the same thing, whatever the words.) If, as this suggests and I think is right, the Internet becomes an absolutely indispensable means of exercising key rights (like being able to communicate with the government) then it inevitably makes taking those rights away even more problematic.

So I predict that the more the present coalition pushes in this direction, the more difficulties it will have down the line with courts unimpressed by people being disadvantaged so seriously for allegedly infringing a government-granted monopoly: this makes a response that was never proportionate to begin with even more disproportionate.

20 November 2010

Yesterday I went along to the launch of the next stage of the UK government's open data initiative, which involved releasing information about all expenditure greater than £25,000 (I'll be writing more about this next week). I realised that this was a rather more important event than I had initially thought when I found myself sitting one seat away from Sir Tim Berners-Lee (and the intervening seat was occupied by Francis Maude, Minister for the Cabinet Office and Paymaster General.)

Sir Tim came across as a rather archetypal professor in his short presentation: knowledgeable and passionate, but slightly unworldly. I get the impression that even after 20 years he's still not really reconciled to his fame, or to the routine expectation that he will stand up and talk in front of big crowds of people.

He seems much happier with the written word, as evidenced by his excellent recent essay in Scientific American, called "Long Live the Web". It's a powerful defence of the centrality of the Web to our modern way of life, and of the key elements that make it work so well. Indeed, I think it rates as one of the best such pieces I've read, written by someone uniquely well-placed to make the case.

But I want to focus on just one aspect here, because I think it's significant that Berners-Lee spends so much time on it. It's also timely, because it concerns an area that is under great pressure currently: truly open standards. Here's what Berners-Lee writes on the subject:

The basic Web technologies that individuals and companies need to develop powerful services must be available for free, with no royalties. Amazon.com, for example, grew into a huge online bookstore, then music store, then store for all kinds of goods because it had open, free access to the technical standards on which the Web operates. Amazon, like any other Web user, could use HTML, URI and HTTP without asking anyone’s permission and without having to pay. It could also use improvements to those standards developed by the World Wide Web Consortium, allowing customers to fill out a virtual order form, pay online, rate the goods they had purchased, and so on.

By “open standards” I mean standards that can have any committed expert involved in the design, that have been widely reviewed as acceptable, that are available for free on the Web, and that are royalty-free (no need to pay) for developers and users. Open, royalty-free standards that are easy to use create the diverse richness of Web sites, from the big names such as Amazon, Craigslist and Wikipedia to obscure blogs written by adult hobbyists and to homegrown videos posted by teenagers.

Openness also means you can build your own Web site or company without anyone’s approval. When the Web began, I did not have to obtain permission or pay royalties to use the Internet’s own open standards, such as the well-known transmission control protocol (TCP) and Internet protocol (IP). Similarly, the Web Consortium’s royalty-free patent policy says that the companies, universities and individuals who contribute to the development of a standard must agree they will not charge royalties to anyone who may use the standard.

There's nothing radical or new there: after all, as he says, the W3C specifies that all its standards must be royalty-free. But it's a useful re-statement of that policy - and especially important at a time when many are trying to paint royalty-free licensing as hopelessly unrealistic for open standards. The Web's continuing success is the best counter-example we have to that view, and Berners-Lee's essay is a splendid reminder of that fact. Do read it.
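Berners-Lee's point about royalty-free protocols is easy to make concrete: HTTP's message format is published openly, so anyone can construct and parse its messages from the specification alone, without asking or paying anyone. A minimal sketch in Python (the helper names are my own, and the response here is canned rather than fetched over a real network):

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """Assemble an HTTP/1.1 GET request exactly as the public
    specification describes: a request line, headers, then a blank line."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def parse_status_line(response: bytes) -> int:
    """Read the numeric status code from the first line of an HTTP
    response, e.g. b'HTTP/1.1 200 OK' -> 200."""
    status_line = response.split(b"\r\n", 1)[0]
    return int(status_line.split(b" ")[1].decode("ascii"))

request = build_get_request("example.org")
canned_response = b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
print(parse_status_line(canned_response))  # 200
```

Nothing here required a licence, a fee, or anyone's permission - which is precisely why, as the essay notes, Amazon and everyone else could simply build on HTTP, HTML and URIs from day one.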

18 November 2010

Regular readers of this blog will know that I've tracked the rather painful history of attempts to increase the deployment of free software in Russia, notably in its schools. Well, that saga continues, it seems, with doubts being expressed about the creation of a Russian national operating system based on GNU/Linux:

[Google Translate: Sometimes we hear that the idea of a national software platform contains a logical contradiction. After all, if this platform really is created on the basis of free software, then more than 90% of that software will have been produced not in Russia but abroad. Accordingly, the national software platform will more likely be some kind of US-German-Indian one, not a Russian one.]

That story will doubtless run and run. But what interested me was the accompanying quote from Nikolai Pryanishnikov, president of Microsoft in Russia; it's a corker:

[Google Translate: "Microsoft supports technological neutrality and considers that the choice of OS should be determined solely by the merits of the operating system - its economic efficiency, the practical problems it must solve, and its security - rather than by ideological considerations.

From our point of view, the most effective approach for developing an innovative economy in the country is not to create an analogue of existing operating systems, which would take huge amounts of money and time, but to take the most popular operating systems, vetted by the Russian security services, as a basis for creating custom applications and solutions, investing the funds saved in promising Russian scientific developments. We must bear in mind that Linux is not a Russian OS and, moreover, is at the end of its life cycle."]

The idea that "Linux is at the end of its life cycle" is rather rich coming from the vendor of a platform that is increasingly losing market share, both at the top and bottom end of the market, while Linux just gets stronger. I'd wager that variants of Linux will be around rather longer than Windows.

Update: the Russian publication CNews Open, from which the story above was taken, points out that Russia is aiming to create a national software platform, not a national operating system. Quite what this means seems to be somewhat unclear:

The European Commission looms large in these pages. But despite that importance, it remains - to me, at least - an opaque beast. Hugely important decisions are emitted by it, as the result of some long and deeply complex process, but the details remain pretty mysterious.

17 November 2010

There's an important conference taking place in Brussels next week: "Tensions between Intellectual Property Rights and the ICT standardisation process: reasons and remedies - 22 November 2010". It's important because it has a clear bearing on key documents like the forthcoming European Interoperability Framework v2.

It all sounds jolly reasonable:

Key ICT standards are perceived by many as critical technology platforms with a strong public interest dimension. However, concerns are voiced that Intellectual Property Rights (IPRs) and their exclusivity potential, may hinder or prevent standardisation.

The European Commission and the European Patent Office (EPO) are organising a conference to address some specific issues on patents and ICT standards: are today’s IPR features still compatible with fast moving markets and the very complex requirements of ICT standardisation in a global knowledge economy environment? Where are the problems that we can fix?

Unfortunately, I can't go - actually, better make that *fortunately* I can't go, because upon closer inspection the agenda [.pdf] shows that this is a conference with a clear, er, agenda: that is, the outcome has already been decided.

You can tell just by its framing: this is "a conference to address some specific issues on patents and ICT standards". ICT is mostly about software, and yet software cannot be patented "as such". So, in a sense, this ought to be a trivial conference lasting about five minutes. The fact that it isn't shows where things are going to head: towards accepting and promoting patents in European standards, including those for software.

That's not really surprising, given who are organising it - the European Commission and the European Patent Office (EPO). The European Commission has always been a big fan of software patents; and the EPO is hardly likely to be involved with a conference that says: "you know, we *really* don't need all these patents in our standards."

Of course, the opposite result - that patents are so indescribably yummy that we need to have as many as possible in our European ICT standards - must emerge naturally and organically. And so to ensure that natural and organic result, we have a few randomly-selected companies taking part.

For example, there's a trio of well-known European companies: Nokia, Ericsson and Microsoft. By an amazing coincidence - as an old BBC story reminds us - all of them were fervent supporters of the European legislation to make software patentable:

Big technology firms, such as Philips, Nokia, Microsoft, Siemens, and telecoms firm Ericsson, continued to voice their support for the original bill.

So, no possible bias there, then.

Then there are a couple of outfits you may have heard of - IBM and Oracle, both noted for loving software patents slightly more than life itself. So maybe a teensy bit of bias there.

But wait, you will say: you are being totally unfair. After all, is there not an *entire* massive one-hour session entitled "Open source, freely available software and standardisation"? (although I do wonder what on earth this "freely available software" could be - obviously nothing so subversive as free-as-in-freedom software.)

And it's true, that session does indeed exist; here's part of the description:

This session will explore potential issues around standardisation and the topic of open source software and free licences. We will look at examples of how standards are successfully implemented in open source. We will also consider licensing issues that may exist regarding the requirement to pay royalties for patents present in standards, as well as other licensing terms and conditions in relation to the community approach common in open source and free software technology development.

But what's the betting that those "examples of how standards are successfully implemented in open source" will include rare and atypical cases where FRAND licences have been crafted into a free software compatible form, and which will then be used to demonstrate that FRAND is the perfect solution for ICT licensing in Europe?

Luckily, we have Karsten Gerloff from the FSFE to fight against the software patent fan club, and tell it as it is. Pity he's on his own on this though - and no, poor Erwin Tenhumberg does not count. He may be "Open Source Programme Manager, SAP", but SAP is one of the fiercest proponents of patenting software in Europe, as I've discussed a couple of times.

So this leaves us with Karsten against the collective might of the European Commission, EPO, Microsoft, Nokia, Ericsson, IBM, Oracle and SAP: clearly there'll be some of that "tension", as the conference title promises, but a fair fight conducted on a level playing-field? Not so much....

16 November 2010

Although I don't use it much myself, I've heard that Facebook is quite popular in some quarters. This makes its technological moves important, especially when they impact free software. Yesterday, we had what most have seen as a pretty big announcement from the company that does precisely that:

15 November 2010

As you may have noticed, I've been writing quite a lot about the imminent European Interoperability Framework (EIF), and the extent to which it supports true open standards that can be implemented by all. Of course, that's not just a European question: many governments around the world are grappling with exactly the same issue. Here's a fascinating result from India that has important lessons for the European Commission as they finalise EIF v2.

On 17 October, Wang Yi retweeted a post by Hua Chunhui, who satirically challenged the anti-Japanese angry youths in China by inviting them to destroy the Japan pavilion at the Shanghai Expo. She added a comment, “Angry youth, come forward and break the pavilion!” in her retweet.

The police interpreted her satire as a public order disturbance and asked the Labour Re-education committee to sentence her to one year labour camp, from November 15 2010 to November 9 2011 in Zhenzhou Shibali river labour re-education camp.

People will point out one year in a labour camp is very different from the few thousand quid fine meted out to Paul Chambers, and I of course would agree: the UK is not China.

But the *attitude* - that humour or satire is a "threat" of some kind, and must be punished in the courts - is shockingly similar. And that is what is most disturbing for me here in the UK about the #twitterjoketrial case: the authorities here are now *thinking* like their Chinese counterparts (who must be delighted to have this high-profile backing for their approach from those hypocritical Westerners). We are on the road to China.

Is this really the journey we want to take? Weren't we trying to get China to come to us?

[Google Translate: Just in time for one of the many 20th birthdays of the World Wide Web, the Federal Court of Justice has now published a ruling [.pdf] finding that a link can constitute copyright infringement. In the case at issue, the plaintiff had operated a web site with maps, which was designed so that the desired map could only be reached via a search form on the home page.]

I mean, come on: this isn't about copyright - the content is freely available; it's about how you get to that copyright material.

Thus the real issue here seems to be that a site owner is worried about losing advertising revenue if people can skip over the home page. But the solution is simple: just put ads on the inner pages of the site, too. That way, you get the best of both worlds: directly-addressable content that also generates revenue. Is that so hard?

Once upon a time, the Netcraft Web server market share survey was reported upon eagerly every month, because it showed open source soundly trouncing its proprietary rivals. We don't hear much about that survey these days - not because things have changed, but because they haven't: it's now just become a boring fact of life that Apache has always been the top Web server, still is, and probably will be for the foreseeable future. I think we're fast approaching that situation with the top500 supercomputing table.

12 November 2010

I know you probably didn't notice, but I posted very little on this blog last week - nothing, in fact. This was not down to me going “meh” for a few days, but rather because I was over-eager in accepting offers to talk at conferences that were almost back to back, with the result that I had little time for much else during that period.

Alan Turing’s paper entitled "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared on November 12, 1937, somewhat contemporaneously with Konrad Zuse’s work on the first of the Z machines in Germany, John Vincent Atanasoff’s work on the ABC, George Stibitz’s work on the Bell Telephone relay machine, and Howard Aiken’s work on the Automatic Sequence Controlled Calculator.

Later renamed the Turing Machine, this abstract engine provided the fundamental concepts of computers that the other inventors would realise independently. So Turing provided the abstraction that would form the basic theory of computability for several decades, while others provided the pragmatic means of computation.
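The abstraction described above can be sketched in a few lines of code. The following minimal simulator is purely my own illustration, not anything drawn from Turing's paper: a tape, a read/write head, and a finite rule table mapping (state, symbol) to (symbol to write, direction, next state). The example rule set, which increments a binary number, is likewise hypothetical.

```python
# A minimal Turing machine simulator: the whole "computer" is just a
# sparse tape plus a finite transition table.

def run_turing_machine(tape, rules, state="start", blank="_", steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        # Look up what to write, which way to move, and the next state.
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example rules: add 1 to a binary number. First scan right past the
# last digit, then carry leftwards until a 0 (or the tape edge) absorbs it.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine("1011", rules))  # 1100  (11 + 1 = 12)
print(run_turing_machine("111", rules))   # 1000  (7 + 1 = 8)
```

Everything the later, physical machines did can in principle be reduced to tables like this - which is precisely why the abstraction underpinned the theory of computability.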

HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will. It provides a single user-interface to large classes of information (reports, notes, data-bases, computer documentation and on-line help). We propose a simple scheme incorporating servers already available at CERN.

One of the most shocking aspects of Oracle's lawsuit against Google alleging patent and copyright infringement was its unexpected nature. The assumption had been that Google was a big company with lots of lawyers and engineers, and had presumably checked out everything before proceeding with the Android project. And then suddenly it looked as if it had made the kind of elementary mistakes a newbie startup might commit.

11 November 2010

Eric Whitacre is that remarkable thing: a composer able to write classical music that is at once completely contemporary and totally approachable even at the first hearing.

Just as, er, noteworthy is his total ease with modern technology. His website is undoubtedly one of the most attractive ever created for a composer, and uses the full panoply of the latest Internet technologies to support his music and to interact with his audience, including a blog with embedded YouTube videos, and links to Twitter and Facebook accounts.

Perhaps the best place to get a feel for his music and his amazing facility with technology is the performance of his piece "Lux Aurumque" by a "virtual choir" that he put together on YouTube (there's another video where the composer explains some of the details and how this came about.)

Against that background, it should perhaps be no surprise that on his website he has links to pages about most (maybe all?) of his compositions that include not only fascinating background material but complete embedded recordings of the pieces.

Clearly, Whitacre has no qualms about people being able to hear his music for free, since he knows that this is by far the best way to get the message out about it and to encourage people to perform it for themselves. The countless comments on these pages are testimony to the success of that approach: time and again people speak of being entranced when they heard the music on his web site - and then badgering local choirs to sing the pieces themselves.

It's really good to see a contemporary composer that really gets what digital music is about - seeding live performances - and understands that making it available online can only increase his audience, not diminish it. And so against that background, the story behind one of his very best pieces, and probably my current favourite, "Sleep", is truly dispiriting.

Originally, it was to have been a setting of Robert Frost’s "Stopping By Woods on a Snowy Evening". The composition went well:

I took my time with the piece, crafting it note by note until I felt that it was exactly the way I wanted it. The poem is perfect, truly a gem, and my general approach was to try to get out of the way of the words and let them work their magic.

But then something terrible happened:

And here was my tragic mistake: I never secured permission to use the poem. Robert Frost’s poetry has been under tight control from his estate since his death, and until a few years ago only Randall Thompson (Frostiana) had been given permission to set his poetry. In 1997, out of the blue, the estate released a number of titles, and at least twenty composers set and published Stopping By Woods on a Snowy Evening for chorus. When I looked online and saw all of these new and different settings, I naturally (and naively) assumed that it was open to anyone. Little did I know that the Robert Frost Estate had shut down ANY use of the poem just months before, ostensibly because of this plethora of new settings.

Thanks to copyright law, this is the prospect that Whitacre faced:

the estate of Robert Frost and their publisher, Henry Holt Inc., sternly and formally forbid me from using the poem for publication or performance until the poem became public domain in 2038.

I was crushed. The piece was dead, and would sit under my bed for the next 37 years because of some ridiculous ruling by heirs and lawyers.

Fortunately for him - and for us - he came up with an ingenious way of rescuing his work:

After many discussions with my wife, I decided that I would ask my friend and brilliant poet Charles Anthony Silvestri (Leonardo Dreams of His Flying Machine, Lux Aurumque, Nox Aurumque, Her Sacred Spirit Soars) to set new words to the music I had already written. This was an enormous task, because I was asking him to not only write a poem that had the exact structure of the Frost, but that would even incorporate key words from “Stopping”, like ‘sleep’. Tony wrote an absolutely exquisite poem, finding a completely different (but equally beautiful) message in the music I had already written. I actually prefer Tony’s poem now…

Not only that:

My setting of Robert Frost’s Stopping By Woods on a Snowy Evening no longer exists. And I won’t use that poem ever again, not even when it becomes public domain in 2038.

So, thanks to a disproportionate copyright term, a fine poem will never be married with sublime music that was originally written specially for it. This is the modern-day reality of copyright, originally devised for "the encouragement of learning", but now a real obstacle to the creation of new masterpieces.

10 November 2010

I consider myself fortunate to have been around at the time of the birth of the Internet as a mass medium, which I date to the appearance of version 0.9 of Netscape Navigator in October 1994.

This gives me a certain perspective on things that happen online, since I can often find parallels from earlier times, but there are obviously many people who have been following things even longer, and whose perspective is even deeper. One such is Mark Pesce, who also happens to be an extremely good writer, which makes his recent blog posting about the "early days" even more worth reading:

Back in the 1980s, when personal computers mostly meant IBM PCs running Lotus 1-2-3 and, perhaps, if you were a bit off-center, an Apple Macintosh running Aldus Pagemaker, the idea of a coherent and interconnected set of documents spanning the known human universe seemed fanciful. But there have always been dreamers, among them such luminaries as Douglas Engelbart, who gave us the computer mouse, and Ted Nelson, who coined the word ‘hypertext’. Engelbart demonstrated a fully-functional hypertext system in December 1968, the famous ‘Mother of all Demos’, which framed computing for the rest of the 20th century. Before man had walked on the Moon, before there was an Internet, we had a prototype for the World Wide Web. Nelson took this idea and ran with it, envisaging a globally interconnected hypertext system, which he named ‘Xanadu’ – after the poem by Coleridge – and which attracted a crowd of enthusiasts intent on making it real. I was one of them. From my garret in Providence, Rhode Island, I wrote a front end – a ‘browser’ if you will – to the soon-to-be-released Xanadu. This was back in 1986, nearly five years before Tim Berners-Lee wrote a short paper outlining a universal protocol for hypermedia, the basis for the World Wide Web.

Fascinating stuff, but it was the next paragraph that really made me stop and think:

Xanadu was never released, but we got the Web. It wasn’t as functional as Xanadu – copyright management was a solved problem with Xanadu, whereas on the Web it continues to bedevil us – and links were two-way affairs; you could follow the destination of a link back to its source. But the Web was out there and working for thousands of people by the middle of 1993, while Xanadu, shuffled from benefactor to benefactor, faded and finally died. The Web was good enough to get out there, to play with, to begin improving, while Xanadu – which had been in beta since the late 1980s – was never quite good enough to be released. ‘The Perfect is the Enemy of the Good’, and nowhere is it clearer than in the sad story of Xanadu.

Copyright management was a "solved problem with Xanadu" because of something called "transclusion": when you quoted or copied a piece of text from elsewhere, it wasn't actually a copy, but the real thing *embedded* in your Xanadu document. This made it easy to track who was doing what with your work - which is why Pesce can call copyright management a "solved problem".
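The difference is easiest to see in code. The following toy model is purely illustrative - the names and data model are mine, not Xanadu's actual design - but it captures the essential property: a quotation is stored as a reference into the source document, never as a copied string, so every use of a text remains discoverable from the source's side.

```python
# A toy model of transclusion: documents never copy each other's words,
# they hold live pointers into them.

REGISTRY = {}  # document id -> Document

class Document:
    def __init__(self, doc_id, text=""):
        self.doc_id = doc_id
        # Each part is either original text (str) or a transclusion
        # reference (source_id, start, end).
        self.parts = [text] if text else []
        REGISTRY[doc_id] = self

    def transclude(self, source_id, start, end):
        # Store a pointer into the source, not a copy of its words.
        self.parts.append((source_id, start, end))

    def render(self):
        # Resolve references at read time: the source is always embedded,
        # never duplicated, so the link back to it is never lost.
        out = []
        for part in self.parts:
            if isinstance(part, tuple):
                src, start, end = part
                out.append(REGISTRY[src].render()[start:end])
            else:
                out.append(part)
        return "".join(out)

def uses_of(source_id):
    # The "solved problem": every quotation of a document is discoverable
    # simply by scanning for references that point at it.
    return [d.doc_id for d in REGISTRY.values()
            if any(isinstance(p, tuple) and p[0] == source_id
                   for p in d.parts)]

poem = Document("poem", "Whose woods these are I think I know.")
essay = Document("essay", "Frost opens: ")
essay.transclude("poem", 0, 21)

print(essay.render())   # Frost opens: Whose woods these are
print(uses_of("poem"))  # ['essay']
```

On the Web, by contrast, a quotation is just pasted bytes: once copied, the source has no way to enumerate its uses - which is exactly the "sloppiness" that let sharing flourish.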

I already knew this, but Pesce's juxtaposition with the sloppy Web made me realise what a narrow escape we had. If Xanadu had been good enough to release, and if it had caught on sufficiently to establish itself before the Web had arrived, we would probably be living in a very different world.

There would be little of the creative sharing that underlies so much of the Internet - in blogs, Twitter, Facebook, YouTube. Instead, Xanadu's all-knowing transclusion would allow copyright holders to track down every single use of their content - and to block it just as easily.

I've always regarded Xanadu's failure as something of a pity - a brilliant idea before its time. But I realise now that in fact it was actually a bad idea precisely of its time - and as such, completely inappropriate for the amazing future that the Web has created for us instead. If we remember Xanadu, it must be as a warning of how we nearly lost the stately pleasure-dome of digital sharing before it even began.

A little while back I was pointing out how free software licences aren't generally compatible with Fair, Reasonable and Non-Discriminatory (FRAND) licensing, and why it would be hugely discriminatory if the imminent European Interoperability Framework v 2 were to opt for FRAND when it came to open standards, rather than insisting on royalty-free (RF) licensing.

I noted how FRAND conditions are impossible for licences like the GNU GPL, since the latter cannot support per-copy licensing fees on software that may be copied freely. As I commented there, some have suggested that there are ways around this - for example, if a big open source company like Red Hat pays a one-off charge. But that pre-supposes that licence holders would want to accommodate free software in this way: if they simply refuse to make this option available, then once again licences like the GNU GPL are simply locked out from using that technology - something that would be ridiculous for a European open standard.

Now, some may say: “ah well, this won't happen, because the licensing must be fair and reasonable”: but that then begs the question of what is fair and reasonable. It also assumes that licensors will always want to act fairly and reasonably themselves - that they won't simply ignore that condition. As it happens, we now have some pretty stunning evidence that this can't be taken for granted.

One of the frustrating things about being on the side of right, justice, logic and the rest is that all of these are trumped by naked insider power - just look at ACTA, which is a monument to closed-door deals that include rich and powerful industry groups, but expressly exclude the little people like you and me.

Against that background, it becomes easy to understand why Larry Lessig decided to move on from promoting copyright reform to tackling corruption in the US political machine. The rise of great sites like the Sunlight Foundation, whose tagline is "Making Government Transparent and Accountable", is further evidence of how much effort is going into this in the US.

The UK is lagging somewhat, despite the fact that in terms of pure open data from government sources, we're probably "ahead". But it's clear that more and more people are turning their thoughts to this area - not least because they have made the same mental journey as Lessig: we've got to do this if we are to counter the efforts of big business to get what they want regardless of whether it's right, fair or even sensible.

We are excited to announce the Who’s Lobbying site launches today! The site opens with an analysis of ministerial meetings with outside interests, based on the reports released by UK government departments in October.

That analysis consists of treemaps - zoomable representations of how much time is spent with various organisations and their lobbyists:

For example, the treemap shows about a quarter of the Department of Energy and Climate Change meetings are with power companies. Only a small fraction are with environmental or climate change organisations.

It's still a little clunky at the moment, but it gives a glimpse of what might be possible: a quick and effortless consolidated picture of who's getting chummy with whom. As the cliché has it, knowledge is power, and that's certainly the case here: the more we can point to facts about disproportionate time spent with those backing one side of arguments, the easier it will be to insist on an equal hearing. And once that happens, we will be halfway there; after all, we *do* have right on our side...

A remarkable continuity underlies free software, going all the way back to Richard Stallman's first programs for his new GNU project. And yet within that continuity, there have been major shifts: are we due for another such leap?

08 November 2010

I was invited to give a talk at two recent conferences, the Berlin Commons Conference, and FSCONS 2010. It's generally a pleasure to accept these invitations, although I must confess that I found two major conferences with only two days between them a trifle demanding in terms of mental and physical stamina.

Indeed, both conferences were extremely stimulating, and I met many interesting people at both. However, more than anything, I was struck by huge and fundamental differences between them.

The Berlin Commons Conference was essentially the first of its kind, and a bold attempt to put the concept of the commons on the map. Of course, readers of this blog will already know exactly where to locate it, but even for many specialists whose disciplines include commons, the idea is still strange. The conference wisely sought to propel the commons into the foreground by finding, er, common ground between the various kinds of commons, and using that joint strength to muscle into the debate.

That sounded eminently sensible to me, and is something I have been advocating in my own small way (not least on this blog) for some time. But on the ground, achieving this common purpose proved much harder than expected.

In my view, at least, this was down largely to the gulf of incomprehension that we discovered between those working with traditional commons - forests, water, fish etc. - and the digital commons - free software, open content, etc. Basically it seemed to come down to this: some of the former viewed the latter as part of the problem. That is, they were pretty hostile to technology, and saw their traditional commons as antithetical to that.

By contrast, I and others working in the area of the digital commons offered this as a way to preserve the traditional, analogue commons. In particular, as I mentioned after my talk at the conference (embedded below), the Internet offers one of the most powerful tools for fighting against those - typically big, rich global corporations - that seek to enclose physical commons.

I must say I came away from the Berlin conference a little despondent, because it was evident that forming a commons coalition would be much harder than I had expected. This contrasted completely with the energising effect of attending FSCONS 2010 in Gothenburg.

It's not hard to see why. At the Swedish conference, which has been running successfully for some years, and now attracts hundreds of participants, I was surrounded by extremely positive, energetic and like-minded people. When I gave my talk (posted below), I was conscious that, intentionally provocative as I was, my argument was essentially pushing against an open door: the audience, though highly critical in the best sense, were in broad agreement with my general logic.

Of course, that can make things too easy, which is dangerous if it becomes routine; but the major benefit of being confirmed in your prejudices in this way is that it encourages you to continue, and perhaps even to explore yet further. It has even inspired me to start posting a little more prolifically. You have been warned....

About Me

I have been a technology journalist and consultant for 30 years, covering
the Internet since March 1994, and the free software world since 1995.

One early feature I wrote was for Wired in 1997:
The Greatest OS that (N)ever Was.
My most recent books are Rebel Code: Linux and the Open Source Revolution, and Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine and Business.