31 August 2007

There seems to be some magic about free software: whenever a certain class of (intelligent) lawyer comes into contact with it, it redeems them, and turns them into enlightened benefactors. Eben Moglen is the paradigmatic case, but here's another: Mark Radcliffe. You don't have to take my word - this is what Matt has to say:

If it has to do with open source and it affects your rights therein, Mark was probably at the fulcrum.

The Swedish Standards Institute (SIS) has invalidated the vote that controversially approved the OOXML standard at a meeting this week.

The organisation issued a statement saying that it had seen evidence suggesting one of the participants in the workgroup had broken the rules and voted with more than one vote. This procedural irregularity, and not any concerns over the merit of the OOXML proposal, is the only reason behind the decision, the group said.

I've written a lot here and elsewhere about Microsoft's faux-open file format OOXML. I've also noted that there is an unhealthily close relationship between the BBC and Microsoft over the former's iPlayer and its chosen file formats. Now it seems that this kind of chummy lock-in is happening elsewhere, at the UK's National Archives and beyond.

The National Archives story is not new, but dates back to July of this year, when I first noted it. Here's what a Microsoftie said:

The announcement we just made with the National Archives is trying to address the issue of digital conservation head-on. With billions of documents in the world wrapped up in proprietary document formats (from Microsoft and many many other vendors) we felt it was important to focus on how we can help the body in the UK which has the biggest headache and do what we can to assist them in:

* Migrating documents to the latest Office format (Open XML) via our document conversion tools to ensure they can be accessed by the public in the future

Since then, people have started taking notice, to the extent that there is now an e-petition all us Brits can sign asking that nice Mr Brown to use ODF instead of OOXML for the National Archives (but don't hold your breath.)

However, I've just noticed that the Microsoftie quoted above mentions this little factette:

Well, we've actually been working with The British Library and The National Archive for about 18 months now on digital preservation with some other European organisations as members of an EU project called Planets.

The Planets consortium brings together the unique experience required to research, develop, deliver and productise practical digital preservation solutions. Coordinated by the British Library

The British Library, you may recall, is also in cahoots with Microsoft when it comes to locking up our digital heritage. So now we have the prospect of the OOXML cancer spreading to other institutions, and large chunks of European culture being locked up in proprietary formats.

30 August 2007

The new NWZ-A810 and NWZ-S610 series will have a QVGA screen for video playback...more importantly, the players support secure Windows Media Audio (WMA), as well as non-secure AAC and MP3 music formats

What? You mean MP3 isn't secure - that somebody might break into my PC if I foolishly adopt that format? Who would have guessed?

Oh, I see: that's "secure" as in "securely manacled to the wall"; this is not "secure" WMA, it's DRM'd WMA. C'mon Rafat, you can do better than this: PaidContent readers look to you to be told things as they really are in the content world, not to be fed marketing disinformation like this.

A proposal to set up a Pan-African Intellectual Property Organisation (PAIPO) though still in its infancy already faces opposition and concern, including from those who fear that Africa is signing up to stricter IP protection levels than the continent is ready for, sources say.

Africa needs more brutal intellectual monopolies like it needs more malaria - in fact the former would probably increase the latter, as generic drugs become more expensive or are eliminated etc.

One of the things I admire about Richard Stallman is the clarity of his thinking. So I was interested to come across these thoughts on art/non-functional works, and why the imperatives for freedom are different here compared to software, say:

If you use something to do jobs in your life, you must be free to change it today, and then distribute your changed version today in case others need what you need.

Art contributes something different to society. You appreciate it. Modifying art can be a further contribution to art, but it is not crucial to be able to do that today. If you had to wait 10 years for the copyright to expire, that would be ok.

Interesting, too, the emphasis on sharing:

I don't think that non-functional works must be free. It is enough for them to be sharable.

It must be a bit irksome being an antitrust regulator in the United States when your European counterparts are (a) more likely to interfere with the private sector and (b) apt to look disdainfully at federal agencies as wishy-washy.

Which is probably why William Kovacic, one of the Federal Trade Commission's five members, spent nearly an hour on Monday defending the American approach as reasoned and no less thorough than that of its cross-Atlantic counterparts. There is a "tendency on the part of our European colleagues to dismiss the U.S. experience," he said.

I've covered the dispute between the US and Antigua over online gambling before, but it looks like there's a chance the perfect endgame is actually going to play out:

Mr. Mendel, who is claiming $3.4 billion in damages on behalf of Antigua, has asked the trade organization to grant a rare form of compensation if the American government refuses to accept the ruling: permission for Antiguans to violate intellectual property laws by allowing them to distribute copies of American music, movie and software products, among others.

That is, either the US is forced to admit to the global community its hypocritical attitude to online gambling, and allow foreign companies to operate sites accessible by Americans; or the entire edifice of intellectual property in the US is rogered; or the WTO implodes.

In one of history's more absurd acts of totalitarianism, China has banned Buddhist monks in Tibet from reincarnating without government permission. According to a statement issued by the State Administration for Religious Affairs, the law, which goes into effect next month and strictly stipulates the procedures by which one is to reincarnate, is "an important move to institutionalize management of reincarnation."

The Partnership for Research Integrity in Science & Medicine (PRISM) was established to protect the quality of scientific research, an issue of vital concern to:

* scientific, medical and other scholarly researchers who advance the cause of knowledge;

* the institutions that encourage and support them;

* the publishers who disseminate, archive and ensure the quality control of this research; and

* the physicians, clinicians, engineers and other intellectual pioneers who put knowledge into action.

Implying, of course, that open access has no research integrity, does not advance the cause of knowledge, and is somehow opposed to intellectual pioneers.

This is a really important development, because it is the clearest demonstration yet that the traditional publishers see open access as a real threat, and that it is succeeding - you don't take these kinds of measures against something that is flailing. For the best rebuttal of the misleading issues and non-issues raised and obfuscated on this site, see Peter Suber's comments.

If you allow people to read and listen to your ideas for free, you just might actually benefit the kingdom of God. At the same time you increase your own audience, and your potential impact. Who knows — if you are actually good at what you do, someone might hear you speak and think that you have something important to say, or that the way you are saying things resonates with people better than the same message packaged differently has done so in the past. This could even result in an invitation to speak at a conference or colloquium, or to write an article for publication.

What I discovered was that - with the caveat of a necessary network connection - life is just fine without a disk. Between the Firefox Web browser, Google’s Gmail and the search engine company’s Docs Web-based word processor, it was possible to carry on quite nicely without local data.

Interestingly, I had not one but three of my computers die within the space of a few weeks. Like Markoff, I am obsessive about backing up data, so I didn't lose anything important - other than the ability to access local files.

So I'm now largely living La Vida Online: Firefox as my computing environment, accessing Gmail, Writely (as I still prefer to call it), plus a few other online sites for storing various kinds of files and links. It works pretty well, and even when I get some new systems up and running, I'm aiming at prolonging my stay in Cloud Cuckoo Land.

I've not written about the Rambus case before, because it seemed frankly rather dull. But I was wrong: there is an important principle at its heart:

European Union regulators have charged Rambus Inc. with antitrust abuse, alleging the memory chip designer demanded ''unreasonable'' royalties for its patents that were fraudulently set as industry standards.

The EU's preliminary charges, announced Thursday, come weeks after the U.S. Federal Trade Commission ruled the company deceived a standards-setting committee by failing to disclose that its patented technology would be needed to comply with the standard.

As a result, every manufacturer that wanted to make synchronous dynamic access memory chips had to negotiate a license with Rambus.

Both EU and U.S. antitrust officials allege that this allowed Rambus to gain an illegal monopoly in the 1990s for DRAM chips used in personal computers, servers, printers, personal digital assistants and other electronics.

Clearly these kinds of patent ambushes are potentially a general problem, and indicate why real standards must only allow completely patent-free technologies. If a company wants its patented technology to become a standard, it must give up its patents.

In Linux circles, Microsoft's anti-Linux site, Get the Facts, was better known as Get the FUD, and was seen as more of a joke than a convincing argument in favor of Microsoft products over Linux. Microsoft may have come to agree that the site was not serving any useful purpose, as the company closed it down on Aug. 23.

Upholding licences is crucial to the success of free software, so, potentially, this looks like really bad news:

Open-source software and the licenses that govern it suffered a serious setback in a San Francisco District Court earlier this month, following a preliminary decision that could effectively deprive open source licensors of the ability to get a court injunction to stop the violation of the terms of their license going forward.

Although the judge's analysis is superficially worrying for the way he interprets the licence, there is an important fact in this particular situation, which has already involved tussles over software patents:

At issue was model train software code that Jacobsen and some other open source developers wrote, called the Java Model Railroad Interface, or JMRI, which is licensed under the Open Source Initiative approved Artistic License.

Now the Artistic Licence, originally drawn up by Larry Wall for Perl - and whose name was chosen purely for the pun it allowed - is a notoriously loose licence. IANAL, but it seems to me that the problem the judge has with granting an injunction against the model train software company is that the Artistic Licence simply gives, well, too much licence.

I may be wrong, but I think the far more demanding GNU GPL would avoid this problem - another good reason for choosing a more rigorous licence. We shall see whether I am right....

Those open and sharing memes are spreading like wildfire in the most surprising places - like law, for example. In the US, there are two new important projects to place legal decisions online, freely available to all. There's Carl Malamud's database of legal opinions, and Tim Wu's AltLaw project.

But the thing that interested me most was a comment on the announcement of the latter, which pointed out that these US-based efforts are actually trailing equivalent moves elsewhere. The excellent site World Legal Information Institute has links to over a dozen of them. Shame on me for not discovering them sooner.

Nothing new here for readers of this blog, but good to see others moving in the same direction:

I believe we should consider anything we publish on the web as an advertisement: promotion material, and only that. We can use this to sell the following:

* pretty or convenient copies (maybe we will see the reappearance of the artful music album!)

* signed copies

* limited edition high quality copies (things one can proudly display on a wall at home)

* time: live performances!

One of the great but rather submerged stories in the open source world is stack integration. With the exception of the LAMP stack, free software solutions have been rather fragmented, with little inter-project coordination. One important development in this space is the creation of the Open Solutions Alliance, whose main task is ensuring better cooperation between disparate products.

I wrote about this recently, and I notice that the OSA blog is quite active at the moment. It's a good place to find out what exactly is happening in this important but neglected area.

The ODF vs. OOXML battle is really hotting up - a sure sign that this is important. One of the key issues is whether OOXML can ever be fully implemented by anyone other than Microsoft: if it can't, then it can hardly be called a true open standard. Here's some analysis that suggests it can't. Not that that will stop it becoming one....

17 August 2007

The open source anti-virus software project ClamAV is one of my favourite pieces of free code. I've used it for years now, and recommended it to dozens of people. But I've always been a bit worried about its business model: could it continue to grow?

Well, now it looks like it can, since Sourcefire, creator of SNORT, has acquired the project:

With nearly 1 million unique IP addresses downloading ClamAV malware updates daily across more than 120 mirrors in 38 countries, ClamAV is one of the most broadly adopted open source security projects worldwide. ClamAV has also been recognized as comparable in quality and coverage to leading commercial anti-virus solutions. Most recently, at LinuxWorld this year, ClamAV was one of only three anti-virus technologies to provide a 100% detection rate in their live 'Fight Club' test featuring live submissions from the show audience.

Under terms of the transaction, Sourcefire has acquired the ClamAV project and related trademarks, as well as the copyrights held by the five principal members of the ClamAV team including project founder Tomasz Kojm. Sourcefire will also assume control of the open source ClamAV project including the ClamAV.org domain, web site and web site content and the ClamAV Sourceforge project page. In addition, the ClamAV team will remain dedicated to the project as Sourcefire employees, continuing their management of the project on a day-to-day basis.

As the above points out, ClamAV was one of only three anti-virus technologies to provide a 100% detection rate, and this only reinforces my confidence in using it day-in, day-out. If you don't know it, do take a look. (Via Matthew Aslett.)

16 August 2007

An unexpected implication in the legislating procedure of the proposed EU Directive on criminal measures aimed at ensuring the enforcement of intellectual property rights (IPRED2) puts legitimate businesses under clear threat of criminal sanctions.

With The New York Times and The Wall Street Journal said to be looking at removing the “pay wall” around their online content, and others – including CNN, Google and AOL – having already done so, one question springs to mind: Are we seeing the death of paid content online, and the return of free as a business model?

Yup - at least, free as in beer: now we need to work on the free as in freedom part.

This piece from The Courant is like the coelacanth: not very pretty, but fascinating for its atavistic traits:

Unlike copyright-protected software, such as Microsoft's Windows, open source software is available either as a free public-domain offering or under a nominal licensing fee.

Well, no. To be strictly open source, software must have an OSI-approved licence. Such licences generally (always?) depend on copyright law for their enforcement. So, by definition, open source software uses copyright as much as Microsoft's Windows, just for different ends.

This was a common confusion when free software started appearing in the mainstream, but it's quite surprising to see it popping up nowadays.

The United States has benefitted in many ways from having public data sets that are freely used by scholars, commercial firms, consultants, and the public. An example of this is the TIGER system (Topologically Integrated Geographic Encoding and Referencing system, http://www.census.gov/geo/www/tiger/). Many countries do not, and one British geospatial expert estimated that the closed nature of their system has cost them one billion pounds in lost business.

After a year of negotiations, academic geographers have conceded defeat in their attempt to find a way to make a pioneering 3D representation of the capital, Virtual London, available to all comers via the Google Earth online map.

Followers of Technology Guardian's Free Our Data campaign will have guessed the reason: Virtual London is partly derived from proprietary data owned by the government through its state-owned mapping agency, Ordnance Survey (OS). What makes the situation bizarre is that Virtual London's development was funded by another arm of the government, the office of the mayor of London.

In other words, I helped pay for this information, twice - as taxpayer, and as London ratepayer - and yet I am not allowed to access it.

The Ordnance Survey's excuse is pathetic:

OS said granting Google special terms for Virtual London would be unfair on other licensees. "We provide an open, fair and transparent set of terms for providers seeking to operate in the same commercial space as each other. We cannot therefore license Google in a different way to other providers. We are completely supportive of anyone putting our data on the web as long as they have a licence to do so." Google would not comment.

Commercial space - and what about the public space, you know those tiresome little people that pay for your salary?

About 18 months ago, I wrote a post called "Open Source's Best-Kept Secret" about Eclipse, how wonderful it was, and yet how few knew about it. Now what do I find?

Eclipse may be the most important open-source "project" that people outside the industry, and even some within it, have never heard of.

Yup, Matt and I agree again. His piece is an excellent interview with the head of Eclipse, Mike Milinkovich. I also interviewed him recently, for my feature about the open source ecosystem in Redmond Magazine. Matt's ranges more widely, and is probably the best intro to what Eclipse is up to, how it functions, and why it is so important.

Indeed, I wonder whether it will actually prove to be the most important open source project of all in the long term. As Matt points out:

In late June, Eclipse made available the largest-ever simultaneous release of open-source software, called Europa: 17 million lines of code, representing the contributions of 310 open-source developers in 19 countries. Twenty-one new tools were included in the "Europa" release, all free to download.

Think about that. The Linux kernel has around 6 million lines of code.... The Java Development Kit that Sun open sourced has 6.5 million.... Sun's StarOffice release in 2000 (which was believed to be the largest open-source release to that point) had 9 million.... Firefox has 2.5 million.

15 August 2007

But now an international scientific counterculture is emerging. Often referred to as “open science,” this growing movement proposes that we err on the side of collaboration and sharing. That’s especially true when it comes to creating and using the basic scientific tools needed both for downstream innovation and for solving broader human problems.

Open science proposes changing the culture without destroying the creative tension between the two ends of the science-for-innovation rope. And it predicts that the payoff – to human knowledge and to the economies of knowledge-intensive countries like Canada – will be much greater than any loss, by leveraging knowledge to everyone’s benefit.

"Sharing the fruits of science", it's called. Nothing new, but interesting for the outsider's viewpoint. (Via Open Access News.)

I've been wittering on about personal genomics for some time: well, it's here, people. If you don't believe me, take a look at this site (note, it's one of those old-fashioned FTP thingies, but Firefox should cope just fine).

Not much to see, you say? Just a couple of boring old directories - one called "Venter", the other "Watson". And inside those directories, lots of pretty massive files - some 35 Mbytes, some double that. And inside those files? Oh, just some boring letters; you know the kind of thing - AAGTGGTACCATTGACGCACAGGACACAGTG etc.
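Those "boring letters" are raw DNA sequence data, which means even a few lines of code can start poking at them. Here's a minimal sketch (the sample string is the fragment quoted above; a real run would read a chunk from one of the Venter or Watson files instead):

```python
# Tally base frequencies in a raw DNA sequence string - the kind of
# first-look statistic you might compute on a freshly downloaded genome file.
from collections import Counter

def base_counts(seq):
    """Return counts of A, C, G and T, ignoring any other characters."""
    counts = Counter(c for c in seq.upper() if c in "ACGT")
    return {base: counts[base] for base in "ACGT"}

def gc_content(seq):
    """Fraction of bases that are G or C - a standard summary of a sequence."""
    counts = base_counts(seq)
    total = sum(counts.values())
    return (counts["G"] + counts["C"]) / total if total else 0.0

sample = "AAGTGGTACCATTGACGCACAGGACACAGTG"
print(base_counts(sample))
print(round(gc_content(sample), 3))
```

Trivial, of course - but that triviality is rather the point: once the data is just files of letters on an FTP server, anyone can analyse it.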

"I will predict that virtually every open source company (including Red Hat) will eventually be acquired by a big proprietary software company."

Thus spake Tim O'Reilly in the comments to one of his other posts. Tim believes that open source, at least as defined by open-source licensing, has a short shelf-life that will be consumed by Web 2.0 (i.e., web companies hijacking open-source software to deliver proprietary web services) or by traditional proprietary software vendors.

In other words, why don't I just give up, sell out, and go home? I guess I would if I thought that Tim were right. He's not, not in this instance.

There's something more fundamental going on here than "Proprietary software meets open source. Proprietary software decides to commandeer open source. Open source proves to be a nice lapdog to proprietary software." I actually believe that open source, not proprietary software, is the natural state of the industry, and that Tim's proprietary world is anomalous.

I particularly liked this distinction between the service aspects of software, and the attempts to view it as an instantiation of various intellectual monopolies:

Suddenly, the license matters more, not less, because it is the license that ensures the conversation focuses on the right topic - service - rather than on inane jabberings that only vendors care about. You know, like intellectual property.

And there's another crucial reason why proprietary software companies can't just open their chequebooks and acquire those pesky open source upstarts. Unlike companies who seem to think that they are co-extensive with the intellectual monopolies they foist on customers, open source outfits know they are defined by the high-quality people - both employees and those out in the community - that code for the customers.

For example, one reason people take out subscriptions to Red Hat's offerings is that they get to stand in line for the use of Alan Cox's brain. Imagine, now, that proprietary company X "buys" Red Hat: well, what exactly does it buy? Certainly not Alan Cox's brain, which will leave with him (one hopes) when he moves immediately to another open source company (or just hacks away in Wales for pleasure). Sure, the purchaser will have all kinds of impressive legal documents spelling out what it "owns" - but precious little to offer customers anymore, who are likely to follow wherever Alan Cox and his ilk go.

Software developers who work on the cutting edge of open source Web and enterprise content technologies to ensure that collaboratively created knowledge is available now and in the future.

Fedora Commons is the home of the unique Fedora open source software, a robust integrated repository-centered platform that enables the storage, access and management of virtually any kind of digital content.

So not only is Fedora an organisation - recently funded to the tune of $4.9 million by the Gordon and Betty Moore Foundation - aiming to create a commons of "intellectual, organizational, scientific and cultural heritage", but it is also a piece of code:

Institutions and organizations face increasing demands to deliver rich digital content. A scan of the web reveals complex multi-media content that combines text, images, audio, and video. Much of this content is produced dynamically through the use of servlet technology and distributed web services.

Delivery of rich content is possible through a variety of technologies. But, delivery is only one aspect of a suite of content management tasks. Content needs to be created, ingested, and stored. It needs to be aggregated and organized in collections. It must be described with metadata. It must be available for reuse and refactoring. And, finally, it must be preserved.

Without some form of standardization, the costs of such management tasks become prohibitive. Content managers find themselves jury-rigging tasks onto each new form of digital content. In the end, they are faced with a maze of specialized tools, repositories, formats, and services that must be upgraded and integrated over time.

Content managers need a flexible content repository system that allows them to uniformly store, manage, and deliver all their existing content and that will accommodate new forms that will inevitably arise in the future.

Fedora is an open source digital repository system that meets these challenges.

In fact, Fedora is nothing less than "Flexible Extensible Digital Object Repository Architecture". So the name is logical - pity it's so confusing in the context of open source.

The need for a Linux Weather Forecast arises out of Linux’s unique development model. With proprietary software, product managers define a “roadmap” they deliver to engineers to implement, based on their assessments of what users want, generally gleaned from interactions with a few customers. While these roadmaps are publicly available, they are frequently not what actually gets technically implemented and are often delivered far later than the optimistic timeframes promised by proprietary companies.

Conversely, in Linux and open source software, users contribute directly to the software, setting the direction with their contributions. These changes can quickly get added to the mainline kernel and other critical packages, depending on quality and usefulness. This quick feedback and development cycle results in fast software iterations and rapid feature innovation. A new kernel version is generally released every three months, new desktop distributions every six months, and new enterprise distributions every 18 months.

While the forecast is not a roadmap or centralized planning tool, the Linux Weather Forecast gives users, ISVs, partners and developers a chance to track major developments in Linux and adjust their business accordingly, without having to comb through mailing lists of the thousands of developers currently contributing to Linux. Through the Linux Weather Forecast, users and ecosystem members can track the amazing innovation occurring in the Linux community. This pace of software innovation is unmatched in the history of operating systems. The Linux Weather Forecast will help disseminate the right information to the ever growing audience of Linux developers and users in the server, desktop and mobile areas of computing, and will complement existing information available from distributions in those areas.

Good to see Jonathan Corbet, editor of LWN.net, for whom I write occasionally, spreading some of his deep kernelly knowledge in this way.

I wrote recently about the plight of the Tibetan people. One of the problems is that it is hard for an average non-Tibetan to do much to help the situation. So I was pleased that Boing Boing pointed me to what sounded a worthy cause that might, even if in a small way, help preserve Tibetan culture:

The Tibetan Endangered Music Project has so far recorded about 400 endangered traditional Tibetan songs. We now have the opportunity to make these songs available online, at a leading Tibetan language website (www.tibettl.com). However, this volunteer-run website is unable to fund hosting for our material. The cost of hosting space is 1.5 RMB (less than 20 US cents) for every MB. One song in mp3 format is approximately 1.5 MB. 1900 USD would allow us to buy 10 GB of hosting space, which will take care of all our needs for the foreseeable future (allowing 6700 1.5 MB songs to be uploaded). It would also allow us to expand to video hosting in the future, or to provide high quality (.wav) formats instead of only compressed mp3 format.
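The appeal's numbers check out, as a quick back-of-the-envelope calculation shows (taking 1 GB as 1000 MB, and treating the exchange rate as whatever the $1900 figure implies):

```python
# Sanity-check the hosting arithmetic in the quoted appeal.
cost_per_mb_rmb = 1.5   # hosting cost per MB, as stated
song_size_mb = 1.5      # one mp3, as stated
hosting_gb = 10

mb = hosting_gb * 1000            # assuming decimal gigabytes
total_rmb = mb * cost_per_mb_rmb  # 15,000 RMB for the full 10 GB
songs = mb / song_size_mb         # capacity in songs

implied_rate = total_rmb / 1900   # RMB per USD implied by the $1900 figure

print(int(songs))                 # about the 6700 songs the appeal claims
print(round(implied_rate, 2))     # close to the RMB/dollar rate of the time
```

So the sums are in the right ballpark - which makes the impossibility of actually donating (see below) all the more frustrating.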

Despite being signed up and logged in, I could not - can not - find anywhere to give money to this lot. Now, naively, I would have thought that a site called GiveMeaning, expressly designed to help people give money to worthy causes, would, er, you know, help people give money, maybe with a big button saying "GIVE NOW". But what do I know? I've only been using the Web for about 14 years, so maybe I'm still a little wet behind the ears.

On the other hand, it could just be that this is one of the most stupid sites in the known universe, designed to drive altruists mad as a punishment for wanting to help others. Either way, it looks like the Tibetan musical commons is going to have to do without my support, which is a pity.

Wikipedia is famously open, so in general anyone can edit stuff. But this editing is also done in the open, in that all changes are tracked. Now, some people edit anonymously, but their IP addresses are logged. This information too is freely available, so here's an idea that some bright chap had:

Griffith thus downloaded the entire encyclopedia, isolating the XML-based records of anonymous changes and IP addresses. He then correlated those IP addresses with public net-address lookup services such as ARIN, as well as private domain-name data provided by IP2Location.com.

The result: A database of 5.3 million edits, performed by 2.6 million organizations or individuals ranging from the CIA to Microsoft to Congressional offices, now linked to the edits they or someone at their organization's net address has made.
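Griffith's pipeline is simple enough to sketch: pull the anonymous edits (whose "usernames" are IP addresses) out of the dump, then map each address to the organisation that owns its netblock. In the sketch below the edit records and the address-to-organisation table are made-up stand-ins for the real XML dump and the ARIN/IP2Location data (the example networks are the reserved documentation ranges, not real organisations):

```python
import ipaddress

# Hypothetical stand-in for WHOIS/IP2Location data: netblock -> organisation.
ORG_RANGES = {
    ipaddress.ip_network("198.51.100.0/24"): "Example Corp",
    ipaddress.ip_network("203.0.113.0/24"): "Example Agency",
}

def org_for_ip(ip):
    """Map an anonymous editor's IP address to an organisation, if known."""
    addr = ipaddress.ip_address(ip)
    for net, org in ORG_RANGES.items():
        if addr in net:
            return org
    return None

def attribute_edits(edits):
    """Tag each (ip, page) edit record with the organisation behind the IP."""
    return [(ip, page, org_for_ip(ip)) for ip, page in edits]

edits = [
    ("198.51.100.7", "Digital rights management"),
    ("203.0.113.42", "ECHELON"),
    ("192.0.2.1", "Model railroading"),  # outside any known range
]
for ip, page, org in attribute_edits(edits):
    print(ip, page, org)
```

At Wikipedia scale that loop becomes 5.3 million lookups against millions of netblocks, but the principle is exactly this: the edit history and the address registries are both public, so joining them takes no special access at all.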

As a result, dedicated crowd-sourcers are poring over Wikipedia, digging out those embarrassing self-edits. For example:

On Christmas Eve 2004, a Disney user deleted a citation on the "digital rights management" page to DRM critic Cory Doctorow along with a link to a speech he gave to Microsoft's Research Group on the subject. Later, a Disney user altered the "opponents" discussion of the entry, arguing that consumers embrace DRM: "In general, consumers knowingly enter into the arrangement where they are granted limited use of the content."

or:

"Removed ECHELON link, irrelevant to article," reads the comment explaining this cut. The contributor's IP address belongs to the National Security Agency.

or even:

Microsoft's MSN Search is now "a major competitor to Google". Take it from this anonymous contributor, whose IP address belongs to Waggener Edstrom, Microsoft's PR firm.

I'm a big fan of Lulu.com, the self-publishing company, not least because the man behind it, Bob Young, also co-founded Red Hat, and is one of the most passionate defenders of the open source way I have come across.

So the news that CreateSpace is going into the on-demand publishing business is interesting - especially since the company is a subsidiary of Amazon, which means that self-published authors will be able to hitch a ride on the Amazon behemoth. But as far as I can tell, Lulu still offers a more thorough vision, with its global reach and finer-grained publishing options. But if nothing else, Amazon's entry into this space will serve to validate the whole idea in the eyes of doubters.

the Google Project has, however unintentionally, made not only conventional libraries themselves, but other projects digitizing cultural artifacts appear inept or inadequate. Project Gutenberg and its 17,000 books in ascii appear insignificant and superfluous beside the millions of books that Google is contemplating. So do most scanning projects by conventional libraries. As a consequence of the assumed superiority of Google’s approach, therefore, it is highly unlikely that either the funds or the energies for an alternative project of similar magnitude will become available, nor are the libraries who are lending their books (at significant costs to their funds, their books, and their users) likely to undertake such an effort a second time. With each scanned page, Google Books’ Library Project, by its quantity if not necessarily by its quality, makes the possibility of a better alternative unlikely. The Project may then become the library of the future, whatever its quality, by default. So it does seem important to probe what kind of quality the Google Book Project might present to an ordinary user that Google envisages wanting to find a book.

But also unsatisfactory:

The Google Books Project is no doubt an important, in many ways invaluable, project. It is also, on the brief evidence given here, a highly problematic one. Relying on the power of its search tools, Google has ignored elemental metadata, such as volume numbers. The quality of its scanning (and so we may presume its searching) is at times completely inadequate. The editions offered (by search or by sale) are, at best, regrettable.

The public domain is a vastly underappreciated resource - which probably explains why there have been so many successful assaults on it in recent years through copyright, patent and trademark extensions. But now, it seems, people are starting to wake up to its central importance for the digital world:

The new tools of the information society mean that public domain material has considerable potential for re-use - by citizens or for new creative expressions (e.g. documentaries, services for tourism, learning material). It contains published works, such as literary or artistic works, music and audiovisual material for which copyright has expired, material that has been assigned to the public domain by the right holders or by law, mathematical methods, algorithms, methods of presenting information and raw data, such as facts and numbers. A rich public domain has, logically, the potential to stimulate the further development of the information society. It would provide creators – e.g. documentary makers, musicians, multimedia producers, but also schoolchildren doing a Web project – with raw material that they can build on and experiment with, without high transaction or other costs. This is particularly important in the digital context, where the integration of existing material has become much easier.

Although there is some evidence of its importance, there has been no systematic attempt to map or measure its social and economic impact. This is a problem when addressing policy issues that build on public domain material (e.g. digital libraries) or that have an impact on the public domain (e.g. discussions on intellectual property instruments) in the digital age.

Call for tender: "Assessment of the Economic and Social impact of the Public Domain in the Information Society" was published today in the Supplement to the Official Journal of the European Union 2007/S 151-187363. The envisaged purpose of the assessment is to analyse the economic and social impact of the public domain and to gauge its potential to contribute for the benefit of the citizens and the economy.

The Portuguese Ministry of Education is doing the sensible thing and giving away a CD full of free (Windows) software to 1.6 million students, saving itself (and the taxpayers) around 300 million Euros. Nothing amazing about that, perhaps, since it's a sensible thing to do (not that everyone does it).

What's more interesting, for me, at least, is the set of software included on the CD:

* OpenOffice.org
* Firefox
* Thunderbird
* NVU
* Inkscape
* GIMP

These are pretty much the cream of the free software world, and show the increasing depth of the free software desktop. Also interesting are the specifically educational programs included:

Modellus enables students and teachers (high school and college) to use mathematics to create or explore models interactively.

It's always surprised me that more use isn't made of free software in education, since the benefits are obvious: by pooling efforts, duplication is eliminated, and the quality of tools improved. (Via Erwin Tenhumberg.)

13 August 2007

Here's an interesting example of major open source projects meeting to produce a highly-targeted commercial product:

Red Hat Developer Studio is a set of eclipse-based development tools that are pre-configured for JBoss Enterprise Middleware Platforms and Red Hat Enterprise Linux. Developers are not required to use Red Hat Developer Studio to develop on JBoss Enterprise Middleware and/or Red Hat Linux. But, many find these pre-configured tools offer significant time-savings and value, making them more productive and speeding time to deployment.

It's not often that Google kills off one of its services, especially one which was announced with much fanfare at a big mainstream event like CES 2006. Yet Google Video's commercial aspirations have indeed been terminated: the company has announced that it will no longer be selling video content on the site. The news isn't all that surprising, given that Google's commercial video efforts were launched in rather poor shape and never managed to take off. The service seemed to only make the news when embarrassing things happened.

Yet now Google Video has given us a gift—a "proof of concept" in the form of yet another argument against DRM—and an argument for more reasonable laws governing copyright controls. How could Google's failure be our gain? Simple. By picking up its marbles and going home, Google just demonstrated how completely bizarre and anti-consumer DRM technology can be. Most importantly, by pulling the plug on the service, Google proved why consumers have to be allowed to circumvent copy controls.

12 August 2007

I have referred to radio spectrum as a commons several times in this blog. But there's a problem: since spectrum seems to be rivalrous - if I have it, you can't - this means that the threat of a tragedy of the commons has to be met by regulation. And that, as we see, is often unsatisfactory, not least because powerful companies usually get the lion's share.

But it seems - luckily - I was wrong about spectrum necessarily being rivalrous:

Software defined radio that is beginning to emerge from the labs into actual tests has the ability to render all spectrum management moot. Small wonder that the legal mandarins there have begun to sneer that open source SDR cannot be trusted.

In other words, when you make radio truly digital, it can be intelligent, and simply avoid the problem of commons over-use.
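The "intelligent radio" idea boils down to dynamic spectrum access: sense each candidate channel before transmitting, and use only one that is currently quiet. A toy sketch, with invented channel readings and an invented threshold:

```python
# Toy model of "listen before talk" spectrum sharing. The per-channel
# power readings (in dBm) and the busy threshold are invented for
# illustration; a real SDR would measure these continuously.
READINGS = {1: -60.0, 2: -90.0, 3: -72.0, 4: -95.0}
BUSY_THRESHOLD_DBM = -85.0

def pick_idle_channel(readings, threshold=BUSY_THRESHOLD_DBM):
    """Return the quietest channel whose measured power is below the
    busy threshold, or None if every channel is occupied."""
    idle = {ch: power for ch, power in readings.items() if power < threshold}
    if not idle:
        return None  # all channels busy: back off and sense again later
    return min(idle, key=idle.get)

print(pick_idle_channel(READINGS))  # channel 4: quietest of the idle channels
```

Because every radio defers to channels already in use, no central allocation of spectrum is needed, which is exactly why static spectrum management starts to look moot.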

China has seen a sharp increase in requests for patents, according to the UN's intellectual property agency.

The number of requests for patents in China grew by 33% in 2005 compared with the previous year.

That gives it the world's third highest number behind Japan and the United States, the agency said.

Why is this important? Well, currently, patents are being pushed largely by the US as a way of asserting itself economically, notably against that naughty China, which, it is frequently claimed, just rips off the West's ideas. But as China becomes one of the world's leading holders of patents, we can expect to see it start asserting those against everyone else - including the US. Which might suddenly find that it is not quite so keen on those unfair intellectual monopolies after all....

Judge Dale Kimball has issued a 102-page ruling [PDF] on the numerous summary judgment motions in SCO v. Novell. Here it is as text. Here is what matters most:

[T]he court concludes that Novell is the owner of the UNIX and UnixWare Copyrights.

That's Aaaaall, Folks! The court also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent". That's the ball game. There are a couple of loose ends, but the big picture is, SCO lost. Oh, and it owes Novell a lot of money from the Microsoft and Sun licenses.

But there's another interesting aspect to this: SCO lost, and Novell won:

But we must say thank you to Novell and especially to its legal team for the incredible work they have done. I know it's not technically over and there will be more to slog through, but they won what matters most, and it's been a plum pleasin' pleasure watching you work. The entire FOSS community thanks you for your skill and all the hard work and thanks go to Novell for being willing to see this through.

As I've written elsewhere, we really can't let Novell fail, whatever silliness it gets up to with Microsoft: it is simply too important for these kinds of historical reasons.

10 August 2007

It's a pity that reports from the House of Lords Science and Technology Committee are so long, because they contain buckets of good stuff - not least because they draw on top experts. A case in point is the most recent, looking at personal Internet security, which includes luminaries such as Bruce Schneier and Alan Cox.

The recommendations are a bit of a mixed bag, but one thing that caught my eye was in the context of making suppliers liable for their software. As Bruce puts it:

“We are paying, as individuals, as corporations, for bad security of products”—by which payment he meant not only the cost of losing data, but the costs of additional security products such as firewalls, anti-virus software and so on, which have to be purchased because of the likely insecurity of the original product. For the vendors, he said, software insecurity was an “externality … the cost is borne by us users.” Only if liability were to be placed upon vendors would they have “a bigger impetus to fix their products”

Of course, product liability might be a bit problematic for free software, but again Schneier has a solution:

Any imposition of liability upon vendors would also have to take account of the diversity of the market for software, in particular of the importance of the open source community. As open source software is both supplied free to customers, and can be analysed and tested for flaws by the entire IT community, it is both difficult and, arguably, inappropriate, to establish contractual obligations or to identify a single “vendor”. Bruce Schneier drew an analogy with “Good Samaritan” laws, which, in the United States and Canada, protect those attempting to help people who are sick or injured from possible litigation. On the other hand, he saw no reason why companies which took open source software, aggregated it and sold it along with support packages—he gave the example of Red Hat, which markets a version of the open source Linux operating system—should not be liable like other vendors.

UK users will have to pay a premium for Dell's Linux PCs, despite Dell's claim to the contrary.

Customers who live in the UK will have to pay over one-third more than customers in the US for exactly the same machine, according to detailed analysis by ZDNet.co.uk.

The Linux PCs — the Inspiron 530n desktop and the Inspiron 6400n notebook — were launched on Wednesday. The 530n is available in both the UK and the US, but the price differs considerably.

Comparing identical specifications, US customers pay $619 (£305.10) for the 530n, while UK customers are forced to pay £416.61 — a premium of £111, or 36 percent. The comparison is based on a machine with a dual-core processor, 19" monitor, 1GB of RAM and a 160GB hard drive. The same options for peripherals were chosen.
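The arithmetic behind that premium is easy to check from the article's own figures:

```python
# Figures from the ZDNet comparison: the US price of $619 is quoted as
# £305.10; the UK price is £416.61 for the same specification.
us_price_gbp = 305.10
uk_price_gbp = 416.61

premium = uk_price_gbp - us_price_gbp       # absolute premium in pounds
percent = premium / us_price_gbp * 100      # premium relative to the US price

print(f"premium: £{premium:.2f} ({percent:.1f}%)")
# premium: £111.51 (36.5%), roughly the 36 percent the article reports
```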

I have been a mathematician since the age of eight. As such, I tend to look at the world through the optics of mathematics. For this reason, I have never understood why people believe that they can model financial markets: they're clearly far too complex/chaotic to be reduced to any equation, and trying to extrapolate with computers - no matter how powerful - is just doomed to failure.

I hear many Risk Arb players at big shops are getting creamed. It seemed like you make money for 3 years, then give it all back in a couple weeks. Classic mode-mean trade: mode is positive, mean is zero.

In fact, what is most surprising - nay, shocking - is that this apparently unshakeable belief in the existence of some formula/method that will one day allow such markets to be tracked accurately enough to make dosh consistently is equivalent to a belief in horoscopes. After all, horoscopes are all about "deep" correlations - between the stars and your life. Maybe financial markets should try casting a few - they'd be just as likely to succeed as the current methods. (Via TechDirt.)

When Dale Lee Underdahl was arrested on February 18, 2006, on suspicion of drunk driving, he submitted to a breath test that was conducted using a product called the Intoxilyzer 5000EN.

During a subsequent court hearing on charges of third-degree DUI, Underdahl asked for a copy of the "complete computer source code for the (Intoxilyzer) currently in use in the State of Minnesota."

An article in the Pioneer Press quoted his attorney, Jeffrey Sheridan, as saying the source code was necessary because otherwise "for all we know, it's a random number generator."

What's significant is that this shows a growing awareness that if you don't have the source code, you don't really have any idea how something works. And if you don't know that, you can hardly use it to make important decisions - or even unimportant ones, come to that. Obviously, this has clear implications for e-voting, and the need for complete source code transparency.

Interesting post here from Mozilla's Mitchell Baker, which shows that she's beginning to regard Firefox as a commons:

Firefox generates an emotional response that is hard to imagine until you experience it. People trust Firefox. They love it. Many feel -- and rightly so -- that Firefox is part "theirs." That they are involved in creating Firefox and the Firefox phenomena, and in creating a better Internet. People who don't know that Firefox is open source love the results of open source -- the multiple languages, the extensions, the many ways people use the openness to enhance Firefox. People who don't know that Firefox is a public asset feel the results through the excitement of those who do know.

Firefox is created by a public process as a public asset. Participants are correct to feel that Firefox belongs to them.

Absolutely spot-on. But I had to smile at the following:

To start with, we want to create a part of online life that is explicitly NOT about someone getting rich. We want to promote all the other things in life that matter -- personal, social, educational and civic enrichment for massive numbers of people. Individual ability to participate and to control our own lives whether or not someone else gets rich through what we do. We all need a voice for this part of the Internet experience. The people involved with Mozilla are choosing to be this voice rather than to try to get rich.

I know that this may sound naive. But neither I nor the Mozilla project is that naive, and we are not stupid. We recognize that many of us are setting aside chances to make as much money as possible. We are choosing to do this because we want the Internet to be robust and useful even for activities that aren't making us rich.

Only in America do you need to explain why you prefer to make the world a better place rather than making yourself rich....

Younger readers of this blog probably don't remember the golden cyber-age known as Dotcom 1.0, but one of its characteristics was the constant upgrading of the basic HTML specification. And then, in 1999, with HTML 4, it stopped, as everyone got excited about XML (remember XML?).

It's been a long time coming, but at last we have HTML5, AKA Web Applications 1.0. Here's a good intro to the subject:

Development of Hypertext Markup Language (HTML) stopped in 1999 with HTML 4. The World Wide Web Consortium (W3C) focused its efforts on changing the underlying syntax of HTML from Standard Generalized Markup Language (SGML) to Extensible Markup Language (XML), as well as completely new markup languages like Scalable Vector Graphics (SVG), XForms, and MathML. Browser vendors focused on browser features like tabs and Rich Site Summary (RSS) readers. Web designers started learning Cascading Style Sheets (CSS) and the JavaScript™ language to build their own applications on top of the existing frameworks using Asynchronous JavaScript + XML (Ajax). But HTML itself grew hardly at all in the next eight years.

Recently, the beast came back to life. Three major browser vendors—Apple, Opera, and the Mozilla Foundation—came together as the Web Hypertext Application Technology Working Group (WhatWG) to develop an updated and upgraded version of classic HTML. More recently, the W3C took note of these developments and started its own next-generation HTML effort with many of the same members. Eventually, the two efforts will likely be merged. Although many details remain to be argued over, the outlines of the next version of HTML are becoming clear.

This new version of HTML—usually called HTML 5, although it also goes under the name Web Applications 1.0—would be instantly recognizable to a Web designer frozen in ice in 1999 and thawed today.

Many people have a strangely ambivalent attitude to Wikipedia. On the one hand, they recognise that it's a tremendous resource; but on the other, they point out it's uneven and flawed in places. Academics in particular seem afflicted with this ambivalence.

So I think that this move by a group of academics to roll up their digital sleeves and get stuck into Wikipedia is important:

Some of our colleagues have determined to improve it with their own contributions. Here are some instances in which they have assumed significant responsibility for their fields:

# History of Science: Sage Ross and 80 other specialists in the field are contributing.
# Military History: Over 600 amateur and professional specialists in many sub-fields are contributing.
# Russian History: Marshall Poe and over 50 other specialists in the field are contributing.

Clearly, the more people that take part in such schemes, the better Wikipedia will get - and the more people will improve it further. (Via Open Access News.)

08 August 2007

There's a new Firefox support site around that's aimed at absolute beginners. Smart move, now that Firefox is beginning to bleed beyond the world of geeks and their immediate family.... (Via Linux.com.)

As a big fan of both freedom and Tibet, it seems only right that I should point to the Students for a Free Tibet site. Against a background of increasing repression and cultural genocide by the Chinese authorities in Tibet, it will be interesting to see what happens during the run-up to the 2008 Olympics and the games themselves. On the one hand, China would clearly love to portray itself as one big happy multi-ethnic family; on the other, it is unlikely to brook public reminders about its shameful invasion and occupation of Tibet.

I can only admire those Tibetans who speak up about this, even daring to challenge the Chinese authorities publicly, within China itself. One of the highest-profile - and hence most courageous - of these is Lhadon Tethong:

A Tibetan woman born and raised in Canada, Lhadon Tethong has traveled the world, working to build a powerful youth movement for Tibetan independence. She has spoken to countless groups about the situation in Tibet, most notably to a crowd of 66,000 at the 1998 Tibetan Freedom Concert in Washington, D.C. She first became involved with Students for a Free Tibet (SFT) in 1996, when she founded a chapter at University of King’s College in Halifax, Nova Scotia. Since then, Lhadon has been a leading force in many strategic campaigns, including the unprecedented victory against China’s World Bank project in 2000.

Lhadon is a frequent spokesperson for the Tibetan independence movement, and serves as co-chair of the Olympics Campaign Working Group of the International Tibet Support Network. She has worked for SFT since March 1999 and currently serves as the Executive Director of Students for a Free Tibet International.

One of the great things about open source is its transparency: you can't easily hide viruses or trojans, nor can you simply filch code from other people, as you can with closed source. Indeed, the accusations made from time to time that open source contains "stolen" code from other programs are deeply ironic, since it's almost certainly proprietary, closed software that has bits of thievery hidden deep within its digital bowels.

The same is true of open access and open data: when everything is out in the open, it is much easier to detect plagiarism or outright fraud. Equally, making it hard for people to access online, searchable text, or the underlying data by placing restrictions on its distribution reduces the number of people checking it and hence the likelihood that anyone will notice if something is amiss.

A nicely-researched piece on Ars Technica provides a clear demonstration of this:

Despite the danger represented by research fraud, instances of manufactured data and other unethical behavior have produced a steady stream of scandal and retractions within the scientific community. This point has been driven home by the recent retraction of a paper published in the journal Science and the recognition of a few individuals engaged in dozens of acts of plagiarism in physics journals.

By contrast, in the case of arXiv's preprint holdings, catching this stuff is relatively easy thanks to its open, online nature:

Computer algorithms to detect duplications of text have already proven successful at detecting plagiarism in papers in the physical sciences. The arXiv now uses similar software to scan all submissions for signs of plagiarized text. As this report was being prepared, the publishing service Crossref announced that it would begin a pilot program to index the contents of the journals produced by a number of academic publishers in order to expose them for the verification of originality. Thus, catching plagiarism early should be getting increasingly easy for the academic world.

Note, though, that open access allows *anyone* to check for plagiarism, not just the "authorised" keepers of the copyrighted academic flame.
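Duplication detectors of the kind arXiv uses generally work by comparing sets of overlapping word n-grams ("shingles") between documents. A minimal sketch, with made-up sentences, assuming nothing about arXiv's actual implementation:

```python
def ngrams(text, n=5):
    """The set of word n-grams ('shingles') in a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=5):
    """Jaccard similarity of two documents' shingle sets (0.0 to 1.0)."""
    sa, sb = ngrams(a, n), ngrams(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original  = "we study the thermal conductivity of carbon nanotubes under strain"
copied    = ("we study the thermal conductivity of carbon nanotubes "
             "under strain at low temperature")
unrelated = "the referee report raised several questions about the methodology"

print(round(overlap(original, copied), 2))     # 0.67: flagged as suspicious
print(round(overlap(original, unrelated), 2))  # 0.0: no shared shingles
```

Anyone with access to the full text can run exactly this kind of check, which is the point: the openness, not any proprietary tool, is what makes plagiarism hard to hide.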

Similarly, open data means anyone can take a peek, poke around and pick out problems:

How did Dr. Deb manage to create the impression that he had generated a solid data set? Roberts suggests that a number of factors were at play. Several aspects of the experiments allowed Deb to work largely alone. The mouse facility was in a separate building, and "catching a mouse embryo at the three-cell stage had him in from midnight until dawn," Dr. Roberts noted. Deb was also on his second post-doc position, a time where it was essential for him to develop the ability to work independently. The nature of the data itself lent it to manipulation. The raw data for these experiments consisted of a number of independent grayscale images that are normally assigned colors and merged (typically in Photoshop) prior to analysis.

Again, if the "raw data" were available to all, as good open notebook science dictates that they should be, any manipulation could be detected more readily.
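The merging step the quote describes is easy to picture in code. A toy sketch, with invented 2x2 "images" standing in for the real micrographs, shows why the composite alone proves nothing about its inputs:

```python
# Three independent grayscale images (pixel values 0-255), each assigned
# to one colour channel before merging. The stain labels are invented
# placeholders, not the actual experimental channels.
red_channel   = [[10, 200], [30, 40]]    # e.g. first antibody stain
green_channel = [[0, 0], [250, 0]]       # e.g. second antibody stain
blue_channel  = [[5, 5], [5, 255]]       # e.g. nuclear counterstain

def merge(r, g, b):
    """Combine three grayscale images into one RGB image
    (a grid of (r, g, b) pixel tuples)."""
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(r[0]))]
            for y in range(len(r))]

composite = merge(red_channel, green_channel, blue_channel)
print(composite[0][0])  # (10, 0, 5)
# Once merged, the composite cannot reveal whether any single input
# channel was edited beforehand; only the raw grayscale files can.
```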

Interestingly, this is not something that traditional "closed source" publishing can ever match using half-hearted fudges or temporary fixes, just as closed source programs can never match open ones for transparency. There is simply no substitute for openness.

For many years, the only decent free end-user app was GIMP, and the history of open source on the desktop has been one of gradually filling major holes - office suite, browser, email etc. - to bring it up to the level of proprietary offerings.

Happily, things have moved on, and it's now possible to use free software for practically any desktop activity. One major lack has been project planning, traditionally the (expensive) realm of Microsoft Project. No longer, it seems. With the launch of OpenProj, the open source world now has a free alternative, for a variety of platforms.

It's still too early to say how capable the program is, but it's certainly a welcome addition. The only other concern is the licence, which seems not to have been chosen yet, although an OSI-approved variant is promised.

07 August 2007

It is, of course, hard to choose from the rather crowded field of contenders, but this one certainly takes the biscuit:

An Information and Application Distribution System (IADS) is disclosed. The IADS operates, in one embodiment, to distribute, initiate and allow interaction and communication within like-minded communities. Application distribution occurs through the transmission and receipt of an "invitation application" which contains both a message component and an executable component to enable multiple users to connect within a specific community. The application object includes functionality which allows the user's local computer to automatically set up a user interface to connect with a central controller which facilitates interaction and introduction between and among users.

A system to create an online community - including, of course, that brilliant stroke of utterly unique genius, the "invitation application": why couldn't I have thought of that? (Via TechCrunch.)

This is an important story - not so much for what it says, but for the fact that it is being said by a major US title like Newsweek:

Since the late 1980s, this well-coordinated, well-funded campaign by contrarian scientists, free-market think tanks and industry has created a paralyzing fog of doubt around climate change. Through advertisements, op-eds, lobbying and media attention, greenhouse doubters (they hate being called deniers) argued first that the world is not warming; measurements indicating otherwise are flawed, they said. Then they claimed that any warming is natural, not caused by human activities. Now they contend that the looming warming will be minuscule and harmless. "They patterned what they did after the tobacco industry," says former senator Tim Wirth, who spearheaded environmental issues as an under secretary of State in the Clinton administration. "Both figured, sow enough doubt, call the science uncertain and in dispute. That's had a huge impact on both the public and Congress."

Even though the feature has little that's new, the detail in which it reports the cynical efforts of powerful industries to stymie attempts to mitigate the damage that climate change will cause is truly sickening. It is cold (sic) comfort that the people behind this intellectual travesty will rightly be judged extremely harshly by future generations - assuming we're lucky enough to have a future. (Via Open the Future.)

I've been tracking the goings-on at ICANN, which oversees domain names and many other crucial aspects of the Internet, for many years now, and I've yet to see anything good come out of the organisation. Here's someone else who has problems with them:

In this Article, I challenge the prevailing idea that ICANN's governance of the Internet's infrastructure does not threaten free speech and that ICANN's governance of the Internet therefore need not embody special protections for free speech. I argue that ICANN's authority over the Internet's infrastructure empowers it to enact regulations affecting speech within the most powerful forum for expression ever developed. ICANN cannot remain true to the democratic norms it was designed to embody unless it adopts policies to protect freedom of expression. While ICANN's recent self-evaluation and proposed reforms are intended to ensure compliance with its obligations under its governance agreement, these proposed reforms will render it less able to embody the norms of liberal democracy and less capable of protecting individuals' fundamental rights. Unless ICANN reforms its governance structure to render it consistent with the procedural and substantive norms of democracy articulated herein, ICANN should be stripped of its decision-making authority over the Internet's infrastructure.

06 August 2007

A small step, but one of an increasing number towards wider availability of open source on the desktop/laptop:

Lenovo and Novell today announced an agreement to provide preloaded Linux* on Lenovo ThinkPad notebook PCs and to provide support from Lenovo for the operating system. The companies will offer SUSE Linux Enterprise Desktop 10 from Novell to commercial customers on Lenovo notebooks including those in the popular ThinkPad T Series, a class of notebooks aimed at typical business users, beginning in the fourth quarter of 2007. The ThinkPad notebooks with the Linux-preload will also be available for purchase by individual customers.

LiveContent is an umbrella idea which aims to connect and expand Creative Commons and open source communities. LiveContent works to identify creators and content providers working to share their creations more easily with others. LiveContent works to support developers and others who build better technology to distribute these works. LiveContent is up-to-the-minute creativity, "alive" by being licensed Creative Commons, which allows others to better interact with the content.

LiveContent can be delivered in a variety of ways. The first incarnation of LiveContent will deliver content as a LiveCD. LiveCDs are equivalent to what is called a LiveDistro. LiveCDs have traditionally been a vehicle to test an operating system or applications live. Operating systems and/or applications are directly booted from a CD or other type of media without needing to install the actual software on a machine. LiveContent aims to add value to LiveDistros by providing dynamically-generated content within the distribution.

Let's hope this catches on - we need more synergy in the world of openness.

05 August 2007

I'm rather slow in this one, but it's such a good example of how everyone gains from public collaboration - including Google, whose CTO of Google Earth, Michael Jones, is speaking here:

This is Hyderabad, and if you see the dark areas, those correspond to roads in low detail. If you zoom in, you'll see the roads, and if you expand a little bit, you'll see both roads and labelled places... there's graveyards, and some roads and so forth.

Now, everything you see here was created by people in Hyderabad. We have a pilot program running in India. We've done about 50 cities now, in their completeness, with driving directions and everything - completely done by having locals use some software we haven't released publicly to draw their city on top of our photo imagery.

This is the future, people - your future (though I do wonder about the map data copyright in these situations).

After careful consideration, the Cushing/Whitney Medical and Kline Science Libraries have decided to end their support for BioMed Central's Open Access publishing effort. The libraries previously covered 100% of the author page charges which allowed these papers to be made freely available worldwide via the Internet at time of publication. This experiment in Open Access publishing has proved unsustainable. The libraries' support will continue for all Yale-authored articles currently in submission to BioMed Central as of July 27, 2007.

The libraries’ BioMedCentral membership represented an opportunity to test the technical feasibility and the business model of this OA publisher. While the technology proved acceptable, the business model failed to provide a viable long-term revenue base built upon logical and scalable options. Instead, BioMedCentral has asked libraries for larger and larger contributions to subsidize their activities. Starting with 2005, BioMed Central page charges cost the libraries $4,658, comparable to a single biomedicine journal subscription. The cost of page charges for 2006 then jumped to $31,625. The page charges have continued to soar in 2007 with the libraries charged $29,635 through June 2007, with $34,965 in potential additional page charges in submission.

Eeek: I wonder what the backstory to all this is?

Update 1: Matthew Cockerill, Publisher, BioMed Central, has put together a reply to Yale's points. But I can't help feeling that this one will run for a while yet.

Wikis were born under the Hawaiian sun (well, the name was), so perhaps it's appropriate that Sun should have set up its own wikis, in a further sign that Sun gets it, and that wikis are almost mainstream now. (Via Simon Phipps.)

03 August 2007

Free and open source software (FOSS) has roots in the ideals of academic freedom and the unimpeded exchange of information. In the last five years, the concepts have come full circle, with FOSS serving as a model for Open Access (OA), a movement within academia to promote unrestricted access to scholarly material for both researchers and the general public.

"The philosophy is so similar that when we saw the success that open source was having, it served as a guiding light to us," says Melissa Hagemann, program manager for Open Access initiatives at the Open Society Institute, a private foundation for promoting democratic and accessible reform at all levels of society. Not only the philosophy, but also the history, the need to generate new business models, the potential empowerment of users, the impact on developing nations, and resistance to the movement make OA a near twin of FOSS.

The parallels between this movement - what has come to be known as “open access” – and open source are striking. For both, the ultimate wellspring is the Internet, and the new economics of sharing that it enabled. Just as the early code for the Internet was a kind of proto-open source, so the early documentation – the RFCs – offered an example of proto-open access. And for both their practitioners, it is recognition – not recompense – that drives them to participate.

Great minds obviously think alike - and Bruce does have some nice new quotations. Read both; contrast and compare.

MercExchange has utilized its patents as a sword to extract money rather than as a shield to protect its right to exclude or its market share, reputation, good will, or name recognition, as MercExchange appears to possess none of these.

One of the criticisms commonly levelled at free content is that it cannibalises existing paid-for content in a way that is economically unsustainable. So it's good to see this kind of development as a counter-example:

The founders of Wikitravel (www.wikitravel.org), the Webby Award-winning online travel guide, today announced the launch of Wikitravel Press (www.wikitravelpress.com), a company for publishing Wikitravel content in book form.

Wikitravel uses the wiki-based collaborative editing technology made popular by Wikipedia. Wikitravel guides are built on the principle that travelers often get their best information from other travelers. The website offers over 30,000 travel guides in seventeen languages, with over 10,000 editorial contributions per week. Wikitravel won the Best Travel Website category in the 2007 Webby Awards.

Wikitravel Press builds upon this extraordinary community participation to create continually updated, reliable guidebooks, combined, abridged or changed by paid editors, published on demand and shipped anywhere in the world. Wikitravel Press will hire book editors to assemble relevant destination guides, abridge or expand them, and do final copy-editing and fact-checking.

One of the central themes of this blog is that the ideas behind free software can be applied much more widely - indeed, that open source is really just the beginning of something much bigger. I've written about many of the experiments in applying open source ideas outside software, but there are now so many of them that it's hard keeping up.

So I was particularly pleased to find out about this extensive listing of such activities, put together by the Open TTT consortium, itself an interesting project in openness:

OPEN TTT is an EU-funded project (SSA-030595 INN7) that aims at bridging the separate worlds of technology transfer and open source software (OSS), by introducing novel methodologies for helping companies in the take-up of technology and innovation and leveraging the peculiarities of the open access model. The approach is based on the creation of mini-clusters, interest-driven groups of SMEs, and the matching of suitable open source software adapted to the cluster needs. The project covers four thematic areas: Logistics & Transport, Industrial production, Energy & environment and Public Administrations. In these areas, suitable open source software will be examined and assessed, and a mediation will be created between companies interested in its use and software developers or commercial entities that provide suitable support.

02 August 2007

Here's a useful voice to have in the debate about copyright reform, Pamela Samuelson:

The Copyright Act of 1976 is far too long, complex, and largely incomprehensible to non-copyright professionals. It is also the work product of a pre-computer technology era. This law also lacks normative heft. That is, it does not embody a clear vision about what its normative purposes are.

This article offers the author's preliminary thoughts about why copyright reform is needed, why it will be difficult to undertake, and why notwithstanding these difficulties, it may nonetheless be worth doing. It offers suggestions about how one might go about trimming the statute to a more manageable length, articulating more simply its core normative purposes, and spinning certain situation-specific provisions off into a rulemaking process.

Thirty years after enactment of the '76 Act, with the benefit of considerable experience with computer and other advanced technologies and the rise of amateur creators, it may finally be possible to think through in a more comprehensive way how to adapt copyright to digital networked environments as well as how to maintain its integrity as to existing industry products and services that do not exist outside of the digital realm.

Pity she's so defeatist:

The prospects of copyright reform are perhaps so dim that a reasonable person might well think it a fool’s errand to contemplate a reform project of any sort. It is, however, worth considering whether it would be a valuable project to draft a model copyright law, along the lines of model law projects that the American Law Institute has frequently promulgated, with interpretive comments and citations to relevant caselaw, or a set of copyright principles that would provide a shorter, simpler, more comprehensible, and more normatively appealing framework for copyright law.

Call me an incurable optimist, but I think we might aim a little higher....

Magnatune, a record label that uses a CC BY-NC-SA license for all releases (Magnatune founder John Buckman is also on the CC board), has just hired free software developer Nikolaj Hald Nielsen to work on Amarok, a free software media player.

While software and services companies for years have hired many free software developers to continue to work on their free software projects and employees of open content companies have contributed to free software projects, this may be the first time an open content company has hired a free software developer to work on the developer’s free software project.

I suspect this will be the first of many such hires. Open content companies are growing and often are highly dependent on free software for infrastructure and end user services.

I agree: as open content becomes more of an economic force we can expect the synergy between it and open source to become more explicit.

Further to yesterday's post about a call to respect free use of copyrighted material, here's an interesting point about Google's participation:

it certainly seems ironic that Google is being associated with this complaint, at the same time as they are putting highly misleading notices on scanned public domain works:

The Google notice, found as page 1 on downloadable PDFs of public domain works available via Google Book Search, "asks" users to:

Make non-commercial use of the files. We designed Google Book Search for use by individuals, and we request that you use these files for personal, non-commercial purposes...

Maintain attribution The Google “watermark” you see on each file is essential for informing people about this project and helping them find additional materials through Google Book Search. Please do not remove it.

There is clear U.S. precedent that scanning a public domain work does not create a new copyright, so there seems to be absolutely zero legal basis for restricting use or forcing users to preserve inserted per-page watermarks-cum-advertisements.

About Me

I have been a technology journalist and consultant for 30 years, covering
the Internet since March 1994, and the free software world since 1995.

One early feature I wrote was for Wired in 1997:
The Greatest OS that (N)ever Was.
My most recent books are Rebel Code: Linux and the Open Source Revolution, and Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine and Business.