30 March 2007

A rather fine little speech about ODF and the virtues of openness, made by IBM's Bob Sutor as part of his testimony to the Texas House and Senate regarding the open document format legislation. Here's the nub:

to be clear, EVERYONE can implement a true open standard. This bill is about choice. ODF and open standards for file formats will drive choice of applications, innovative use of information, increased competition, and lower prices. Personally, I think these are good things.

In closing, the world is shifting to non-proprietary open standards based on the amazing success of the World Wide Web, a success that was far more important than any single vendor’s market position or ideas for what was right for the world.

I've written about one open source car, OSCar, before, and now here's another, with a rather stranger name: C,mm,n. The idea, of course, is intriguing, though the Flash-infested Web site - literally the most sickening I have ever seen in terms of all the whooshing and sloshing of images - is rather thin on info:

Soon to be found here: detailed information on everything that is c,mm,n. Background stories, links to in-depth articles, blueprints, design schematics and much more. All you'll need to participate in the c,mm,n community and help develop the first real open source car in the world.

It will be interesting to see how exactly all those blueprints and design schematics actually feed into the open design process: applying openness to this kind of project is a real challenge, and it's not clear yet how easily complex objects of this kind can, in fact, be designed in this way. (Via Techmeme.)

Last night, I read the last draft of GPLv3 on my cell phone during dinner in Orlando. I went looking for the provision they had in the last draft, the one that closes the GPLv2 ASP loophole that forced me to create HPL. In a nutshell, it is the ability to run GPLv2 software as a service (SaaS) without returning any changes to the community, because distribution of software as a service might not technically be considered distribution of software (therefore circumventing the copyleft clause that made open source what it is today). That is what Google does, making gazillions of dollars thanks to Linux and open source but keeping its secret sauce concealed from the rest of the world (but contributing in many other ways, therefore cleaning its conscience, I guess).

The provision is not there. Gone. They dropped the ball. Actually, it has been made very clear that the ASP loophole is not a loophole anymore. It is perfectly fine to change GPLv3 software and offer it to the public as a service, without returning the changes to the community.

This is an interesting point, although I tend to view SaaS as yesterday's big idea, so it may not be a major problem. See also the comments on the above posting for more (and more coherent) thoughts on this.

Update: More negative vibes here. It will be interesting to see how this develops. I've not read the latest draft yet, so don't really have a strong view either way.

29 March 2007

If, like me, you somehow didn't make it to the Virtual Worlds 2007 conference, fear not: two reporters with, er, inimitable styles did attend, and have filed virtuoso reports on Philip Rosedale's speech. Read them both, and feel virtually there/hair.

He talks about how the Mandelbrot program on his computer blew his mind. He and a friend follow the replication of a starfish in a diagram as they zoom in on regions of it. Did I imagine this or did he say *chocolate* starfish. “The area of diagram was the same as the surface of the earth” – the earth tiled with chocolate starfish. Imagine.

So I walk into the 55th floor of the Millenium Hotel and I see it...The Hair. Our Hero's Hair is Holding Up. Relieved, I shake Philip Rosedale's hand and ask him how he's holding up, but the message has already been telegraphed to me: gelled, sturdy, stellar, architectural -- thank you very much. Philip's hair, if it could talk, would describe what it's like being the Cat in the Hat holding up all those sims, a rake, a plate, a cake...So...how many sims is it now? He gives me a figure..it's different than the figure Joe Miller gives later, you know, I don't think they really know, it's *almost organic* this stuff and out of control. 7800?

If you can imagine it's possible -- Philip's hair is *even more amazing* than it was at SOP II and SLCC I, which is when I first was exposed to the construction. People in New York don't do that kind of thing to their hair. I mean, you just never see it. Walk around, look. So this is So California. And...it's like...so cool and perfectly constructed, with just the right amount of mix of "bedhead" and "tousled bad boy" and "mad scientist". Gazing out over the sterilized wound of downtown, I couldn't help thinking of that time Nikola Tesla shorted out lower Manhattan with some experiment on Houston St...Philip looks more than ever like he stuck his hand in the socket and still finds it interesting...

I had to smile when I saw this piece from the ever-perceptive Andrew Leonard at Salon about English as a global language:

This isn't just about encouraging youngsters with an eye to getting ahead in the 21st century to study Mandarin. It's also about coming to terms with other members of the English family -- the Chinglishes and Hinglishes and Spanglishes spoken by hundreds of millions of non-native English speakers across the globe. Too often, English-language instruction is contemplated only in a framework in which teaching the "correct" English according to some foundational British or American standard is the only choice. But today, there are many correct Englishes, and flourishing in a globalized world will require that those brought up in Oxford or New York understand those reared in Mumbai or Shanghai.

I had to smile because it reminded me of a little number I wrote nearly 20 years ago, as part of a long-forgotten book of essays called Glanglish (although amazingly Amazon.co.uk seems to have a copy for sale):

Glanglish

English has never existed as a unitary language. For the Angles and the Saxons it was a family of siblings; today it is a vast clan in diaspora. At the head of that clan is the grand old matriarch, British English. Rather quaint now, like all aristocrats left behind by a confusing modern world, she nonetheless has many points of historical interest. Indeed, thousands come to Britain to admire her venerable and famous monuments, preserved in the verbal museums of language schools. Unlike other parts of our national heritage, British English is a treasure we may sell again and again; already the invisible earnings from this industry are substantial, and they are likely to grow as more and more foreigners wish at least to brush their lips across the Grande Dame's ring.

One group unlikely to do so are the natural speakers of the tongue from other continents. Led by the Americans, and followed by the Australians, the New Zealanders and the rest, these republicans are quite content to speak English - provided it is their English. In fact it is likely to be the American's English, since this particular branch of the family tree is proving to be the most feisty in its extension and transformation of the language. Even British English is falling in behind - belatedly, and with a rueful air; but compared to its own slim list of neologisms - mostly upper-class twittish words like 'yomping' - Americanese has proved so fecund in devising new concepts, that its sway over English-thinking minds is assured.

An interesting sub-species of non-English English is provided by one of the dialects of modern India. Indian English is not a truly native tongue, if only for historical reasons; and yet it is no makeshift second language. Reading the 'Hindu Times', it is hard to pin down the provenance of the style: with its orotundities and its 'chaps' it is part London 'Times' circa 1930; with its 'lakhs' it is part pure India.

Whatever it is, it is not to be compared with the halting attempts at English made by millions - perhaps billions soon - whose main interest is communication. Although a disheartening experience to hear for the true-blue Britisher, this mangled, garbled and bungled English is perhaps the most exciting. For from its bleeding hunks and quivering gobbets will be constructed the first and probably last world language. Chinese may have more natural speakers, and Spanish may be gaining both stature and influence, but neither will supersede this mighty mongrel in the making.

English is so universally used as the medium of international linguistic exchange, so embedded in supranational activities like travel - all pilots use English - and, even more crucially, so integral to the world of business, science and technology - money may talk, but it does so in English, and all computer programs are written in that language - that no amount of political or economic change or pressure will prise it loose. Perhaps not even nuclear Armageddon: Latin survived the barbarians. So important is this latest scion of the English stock, that it deserves its own name; and if the bastard brew of Anglicised French is Franglais, what better word to celebrate the marriage of all humanity and English to produce tomorrow's global language than the rich mouthful of 'Glanglish'?

The prose and examples may be rather dated now, but as the Salon piece shows, its basic idea is alive and well.

In 1980, Classical music represented 20% of global music sales. In 2000, Classical had plummeted to just 2% of global music sales. What happened? Did all those people suddenly lose their taste for classical music? Or is something else going on?

At Magnatune.com, an online record label I run, we sell six different genres of music, ranging from Ambient to Classical to Death Metal and World Music. Yet Classical represents a whopping 42% of our sales. Even more intriguingly, only 9% of the visitors to our music site click on “classical” as the genre they’re interested in, yet almost half of them end up buying classical music.

Do read the rest - it's fascinating.

Looks like innovative digital music business models can be even more disruptive than you might think.

Dell has heard you and we will expand our Linux support beyond our existing servers and Precision workstation line. Our first step in this effort is offering Linux pre-installed on select desktop and notebook systems. We will provide an update in the coming weeks that includes detailed information on which systems we will offer, our testing and certification efforts, and the Linux distribution(s) that will be available. The countdown begins today.

Interesting fact from this announcement:

On March 13, we responded by launching a Linux survey asking for your feedback on what you need for a better Linux experience. Thank you to the more than 100,000 people who took the survey. Here are some of the highlights from the survey:

...

* Majority of survey respondents said that existing community-based support forums would meet their technical support needs for a tested and validated Linux operating system on a Dell system.

which is what I wrote, too, in my answer to the survey. It will be interesting to see what happens and how it works out in practice. I will certainly be interested in buying a system or two if they make something decent available.

First it was patents, then copyright, and now it seems the IP mob are trying to pervert trademarks too:

it is insane to try and claim a general trademark over the phrase itself when it is divorced from a pre-existing good or service. At that point, it is no longer a tool to identify a commercial good, it then becomes a naked and virulent attempt to try and privatize language itself through a government enforced monopoly. Anyone claiming to be an attorney who endorses such nonsense ought to be shamed out of the profession.

28 March 2007

Regular readers of this blog will know that I have an instinctive suspicion of organisations that try to co-opt weasel words. And now we have not one such group, but three of them:

Coalition for Patent Fairness

Innovation Alliance

Coalition for 21st Century Patent Reform

I couldn't even begin to parse all the subtle biases and hidden agendas going on here (this post takes a stab).

But what's most interesting, of course, is that whatever the position, we're talking about patents here. Suddenly, patent reform is hot in the US, which means there's a hope - just a glimmer - of some sense being brought to the seriously broken PTO there (and if you want further proof of why it isn't working, try this excellent piece about patent thickets.)

SAP AG will not be impacted by open source ERP software, chief executive Henning Kagermann is adamant.

Despite evidence of open source creep, Kagermann thinks it is still a database and OS-level model. He tackled the rise of open source in a recent interview with ComputerWire.

“It is an option for operating systems and databases but not at the business application level,” he said. “There are no open source ERP products that are any good for the high end, although it could be argued that they could be developed for the low end.”

So writes Angela Eager - who, parenthetically, used to work for me: wotcha, Angela.

Poor old SAP: the fact that proprietary software vendors in every market - including operating systems and databases - have said precisely the same thing when challenged by open source from below seems not to have penetrated the poor chap's skull. ERP is not special (and open source ERP is flourishing.)

So let's say it in easy-to-understand terms: you cannot defend yourself from low-end creep by pinning your hopes on up-market products. Try reading The Innovator's Dilemma to find out why.

Previous posts have noted that there is an inherent tension between openness and privacy. That tension is even more acute in the case of surveillance, which goes beyond consensual openness. Despite this, there is relatively little public debate around these issues; instead, as has been remarked, the UK is effectively sleepwalking into a surveillance society.

Against this depressing background, the new report from the Royal Academy of Engineering, entitled Dilemmas of Privacy and Surveillance: Challenges of Technological Change, is particularly welcome.

This is not least because it offers a depth of knowledge about the technological issues involved that is rarely encountered (these are engineers, remember). But it is also notable for its even-handedness and sensible suggestions. For example:

In this scenario, disconnection technologies are widely used in a co-ordinated manner: personal data is routinely encrypted and managed in a secure fashion, so co-ordinated connectivity does not threaten it and even substantial processing resources are not a day-to-day threat. This leads to Little Sisters who, by themselves, watch over only a fragment of a person's identity, but when co-ordinated can reveal all.

It would be possible to devise a store loyalty card which incorporated a computer chip that could perform the same functions as an ID card, but without giving away the real name of its owner. Someone might choose a loyalty card in the name of their favourite celebrity, even with the celebrity's picture on the front. If they were to use that card to log on to Internet sites, the fact that they are not really the film star whose name they have used would be irrelevant for most applications, and the privacy of the consumer would be maintained. However, if they did something they should not, such as posting abusive messages in a chat room, law enforcement agencies might then ask Little Sister (ie, the company that runs the loyalty card scheme, in this case) who the person really is, and Little Sister will tell them. In this scenario, government departments are just more Little Sisters, sharing parts of the picture without immediate access to the whole.

This approach exploits both mathematics and economics. If it is technically possible to find out who has done what - for example when a crime has been committed - but cryptography makes it economically prohibitive to monitor people continuously on a large scale, then a reasonable privacy settlement can be achieved.

This approach suggests an interesting way of balancing the opposing requirements for privacy and accountability.
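The "Little Sisters" idea - fragments of identity that individually reveal little, but when co-ordinated reveal all - maps neatly onto threshold secret sharing, of which Shamir's 1979 scheme is the classic example. Here is a minimal, illustrative sketch (toy parameters, not production cryptography): a secret is split into shares such that any three Little Sisters together can reconstruct it, while fewer learn essentially nothing.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # A random degree-(k-1) polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

identity = 123456789          # the "real name" behind the loyalty card
shares = split(identity, n=5, k=3)   # five Little Sisters, threshold three
assert combine(shares[:3]) == identity
assert combine(shares[2:]) == identity  # any three shares will do
```

Two shares alone interpolate to a random-looking value, which is exactly the report's point: the economics of reconstruction can be tuned so that targeted, warranted co-ordination is feasible but continuous mass surveillance is not.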

Laura Breeden bought a new Compaq Presario C304NR notebook in January. She bought it because she wanted to get rid of Windows and all the malware that surrounds it and move to Linux, and her old laptop lacked the memory and power to run Ubuntu Edgy. The salespeople assured her that the C304NR was "Linux ready." But they didn't tell her that running Linux would void her warranty.

Until recently, she's been happy with it, and with Ubuntu Edgy. But a couple of weeks ago she began having keyboard problems. The keyboard is misbehaving when she begins to type quickly: keys are sticking and the space bar does not always respond when pressed.

When she called Compaq -- the unit comes with a one-year warranty on the hardware -- they asked what operating system she was running. When she told them Linux, they said, "Sorry, we do not honor our hardware warranty when you run Linux."

For the first time fans and Artists can be in business together. Therefore each Artist issues 5,000 so called Parts. Parts cost $10 (plus transaction costs) each. Together Believers have to raise $50,000 to get their Artist of choice in the studio. At any point before your Artist has reached the Goal of $50,000, you can withdraw your Parts and pick a different Artist. You can even get your money back. It's your music. It's your choice.

Once your Artist has raised $50,000 SellaBand will assign an experienced A&R-person to this project. Together with a top Producer, your Artist will record a CD in a state-of-the-art Studio. During the process you will get an exclusive sneak preview of this exciting process.

...

The music on the CD will be given away as free downloads on our download portal. All advertising revenues generated on SellaBand will be shared equally between you, the other Believers, the Artist and SellaBand. The amount of money you and the band will get paid depends on the advertising revenues and the market-share your band gains on our download portal.

In some ways this is like vanity publishing: people pay to be published. The differences are that here the fans pay for publication - micro-patronage - that the published items are given away (because content has zero marginal cost), and that the money is made from advertising (the Web 2.0 way). I can see this working, provided the main company SellaBand isn't taking such a big cut of the ad revenue that it is perceived as a free rider on the work and money of others.

At least its founders seem to have the right background, as well as an interesting idea. Here's hoping. (Via OpenBusiness.)

A little while back I wrote about the Qwaq virtual world system. This is based on the open source Croquet code, which has just released version 1.0 of its SDK. Qwaq and HP have also helped set up the Croquet Consortium to support the development of the software.

The governing board of the Smithsonian Institution announced Monday that it had accepted the resignation of its top official, Lawrence M. Small, after an internal audit showing that the museum complex had paid for his routine use of lavish perks like chauffeured cars, private jets, top-rated hotels and catered meals.

But aside from claiming interesting expenses like “chandelier cleaning and pool heaters” at his home, Mr Small will be of most interest to readers of this blog for

a recent deal with Showtime, the cable channel. In that deal, the Smithsonian agreed to restrict access to its archives and scientists, which critics said violated its public status.

In other words, Mr Small was enclosing a commons. Nice to see that he's received his comeuppance, however, er, small it might be. (Via Boing Boing.)

Zimbra is part of a new generation of open source enterprise apps that are really starting to be taken seriously by companies. The original Zimbra is basically an Ajax-based Web client, but now Zimbra has come out with Zimbra Desktop, which lets you work collaboratively even offline.

I predict this is going to become the next big thing with the current collection of web apps. The only problem is that there's going to be lots of duplication, as each desktop app sets up its own offline Web server on the user's computer. So how about if all the open source companies got together and standardised on a single piece of code that all their apps could use?

26 March 2007

Most of this is just legal posturing, but the following paragraph is noteworthy:

Intellectual property is worth $650 billion a year to the U.S. economy. Not only does intellectual property drive our exports, it's a key part of what distinguishes developed economies from developing ones. Protecting intellectual property spurs investment and thereby the creation of new technologies and creative entertainment. This creates jobs and benefits consumers. Google and YouTube wouldn't be here if not for investment in software and technologies spurred by patent and copyright laws.

This equation of intellectual monopolies with civilisation is insulting in the extreme. As this blog has noted, IP maximalists - mostly in the US, but from Europe, too - are trying to stuff their monopolies down the throats of many developing nations, with disastrous effects on national and local economies, on people's lives and on entire cultures. Civilisation, my foot, this is pure neo-colonialism.

But of course the real scream is the last statement: "Google and YouTube wouldn't be here if not for investment in software and technologies spurred by patent and copyright laws". What, like the free software both use, which employs copyright to subvert traditional intellectual monopolies, or like the millions of user-created videos that are added to the content commons for the sheer joy of creating and sharing?

25 March 2007

look at the difference in how each industry has reacted. The music industry continued to try and sue everyone it can in order to enforce a status quo that no longer exists. The news industry has perhaps resigned itself to the fact that they will have to operate with less revenues for the foreseeable future. But they are at least slowly coming to grips with that future and are still struggling to find sensible solutions. Imagine the cultural impact if media corporations started suing Internet users for reading news off of "unauthorized" websites.

Here's an interesting perspective from India on intellectual monopolies:

The term "intellectual property" reduces knowledge into a tangible product. In international trade negotiations, when India negotiates on the basis of the term "intellectual property," we implicitly accept that intellect can be reduced to property and all that remains is to dot the i's and cross the t's. We buy into the rhetoric that without the "propertization" of knowledge, there will be no innovation. And in doing so, we ignore our own history where astonishing innovations flourished over thousands of years. In accepting the term "intellectual property," we implicitly accept a playing field that is dominated by the commercial traditions of the West, rather than the spiritual traditions of the East.

Another storied print magazine is coming to an end in print, and the focus is shifting to online and events: InfoWorld, the weekly magazine owned by IDG, is closing down, and the announcement will come Monday morning, paidContent.org has confirmed.

When I was a cub computing reporter (or thereabouts) on a long-forgotten title called Practical Computing in the 1980s, reading the thick pages of InfoWorld was a weekly ritual for me. And now it's been blown to bits - literally.

23 March 2007

General ideas and structures behind computer games and programs can be copied as long as the source code and graphics are not, the UK's Court of Appeal has ruled.

The judgment upholds an earlier High Court ruling in a case involving three computer games simulating pool. Under UK copyright law and EU Directives, the court ruled that the ideas behind the games cannot be protected by copyright, because copyright does not protect general ideas.

"Merely making a program which will emulate another but which in no way involves copying the program code or any of the program's graphics is legitimate," said Lord Justice Jacob, who gave the Court's ruling.

This really gets to the heart of what a program is - code - and where the originality lies - in the details of its coding, which is protected by copyright, not patent law.

A time-honored Washington practice of trying to extinguish, pre-empt, or redirect news coverage by dumping stacks of previously secret government documents on the press may be in for some changes after a headlong collision with hundreds of liberal Web loggers in the wee hours of yesterday morning.

On Monday night, the Justice Department delivered to Congress more than 3,000 pages of e-mails, memos, and other records about the firing of eight U.S. attorneys. The handover came so late that many news organizations had to scramble to try to skim a few headlines from the files before latenight deadlines.

Despite the late hour, readers of a liberal Web site, tpmmuckraker.com, tackled the task with gusto. They quickly began grabbing 50-page chunks of the scanned documents from a House of Representatives Internet server, analyzing them and excerpting them. The first post about the Department of Justice records hit the left-leaning news and commentary site at 1:04 a.m. Within half an hour, there were 50 summaries posted by readers gleaning the documents. By 4:30 a.m., more than 220 postings were up detailing various aspects of the files.

Ah, there's nothing like a bit of distributed activity early in the morning - open politics at its finest. (Via Boing Boing.)

I wrote about this case before. It seems that the forces of light have prevailed:

Last June we sued the Estate of James Joyce to establish the right of Stanford Professor Carol Shloss to use copyrighted materials in connection with her scholarly biography of Lucia Joyce. Shloss suffered more than ten years of threats and intimidation by Stephen James Joyce, who purported to prohibit her from quoting from anything that James or Lucia Joyce ever wrote for any purpose. As a result of these threats, significant portions of source material were deleted from Shloss's book, Lucia Joyce: To Dance In The Wake.

In the lawsuits we filed against the Estate and against Stephen Joyce individually, we asked the Court to remove the threat of liability by declaring Shloss's right to publish those deleted materials on a website designed to supplement the book. After trying to have the case dismissed for lack of subject matter jurisdiction, the Estate gave up the fight. Joyce and the Estate have now entered into a settlement agreement enforceable by the Court that prohibits them from enforcing any of their copyrights against Shloss in connection with the publication of the supplement, whether in electronic or printed form.

That is, Firefox now has nearly 25% of the European browser market according to XiTiMonitor. The figures for the rest of the world are not quite so impressive, but it seems clear that Europe is leading the way here. And as it does so, the global importance of serving Firefox users will rise, and so will the tendency to use it elsewhere.

XiTiMonitor also has some other interesting graphs showing the rate of uptake of Firefox 2.0 and Internet Explorer 7. From this, it seems clear that Firefox users are upgrading faster than Internet Explorer users, as you might expect, since the former are probably more tech-savvy than the latter. (Via Quoi9.)

Although I am not a great user of YouTube, I know a significant cultural/market shift when I see one. NBC Universal CEO Jeff Zucker and News Corp. COO Peter Chernin clearly do not. Try these choice quotes from a media call about their rival to YouTube as reported by Michael Arrington:

Zucker is now on. Talking about importance of “significant IP protection” as a primary goal.

...

Chernin: this will be the largest advertising platform on earth.

So let me get this right. The primary goal of what Google has dubbed "Clown Co." is not serving customers or anything rash like that, it's "significant IP protection"; and what those lucky customers are going to get as a result of that primary goal is "the largest advertising platform on earth".

How's this for proof that virtual worlds can have real-world consequences?

An online game operator has demanded that banned players donate blood to be allowed back into the game. Moliyo, which runs a 3D massively multiplayer online game in China, made the demand after banning 120,000 players who attempted to hack the game.

More than 100 players had already signed up to exchange half a litre (1 pint) of blood for game accounts. The company has also offered free accounts to ordinary players who give blood.

22 March 2007

Representatives of the US government have demanded that the Internet Engineering Task Force (IETF) come up with a solution for prioritizing certain data within government networks and at the interfaces to other networks. Representatives of the US Department of Defense and of the National Communications System (NCS), which is part of the Department of Homeland Security, are seeking to ensure that certain items of information can even in an emergency be guaranteed to arrive. This presupposes appropriate identification mechanisms in the servers. At the IETF meeting in Prague Antonio Desimone of the US Department of Defense said that the switch to a "global grid" raised a number of issues, such as how delivery of a specific e-mail could be ensured within a defined period of time. What was needed was a prioritizing of data, one that also took in emergency and catastrophe scenarios.

"Some calls are more important than other calls, some chats more important than others or a certain content within a chat session may have priority," Mr. Desimone explained.

Why's it stupid? Well, it essentially kills net neutrality, and at the behest of the soldiers. If they want their own super-duper networks, let them build them, rather than attempt to steal the toys everyone else is sharing. And another reason this is asking for trouble is the following:

He said he was especially worried that prioritization might in reality not be confined to authorized persons. Should confinement fail, script kids and hackers might find ways to use "priority bits" for their purposes, he observed.
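For context, the "priority bits" being fought over already exist at the IP layer: the Differentiated Services Code Point (DSCP) occupies the top six bits of the old TOS byte in every packet header, and any application can ask for it. A minimal illustrative sketch (the EF code point and the `IP_TOS` socket option are standard; whether any network en route honours the marking is, of course, the whole argument):

```python
import socket

# DSCP "Expedited Forwarding" (EF) is code point 46. The six DSCP bits sit
# at the top of the IP TOS byte, so the TOS value to request is 46 << 2.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 184

# Mark an (unconnected, purely illustrative) UDP socket so that its
# outgoing packets carry the EF code point.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Which is precisely Desimone's worry in miniature: nothing stops a script kiddie from setting the same bits, so real prioritization needs authenticated policing at every hop - and that is where net neutrality goes out of the window.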

One of the great things about all things open is that their documentation is nearly always freely available. A case in point is this monograph on ODF, which can be downloaded in its entirety, or chapter-by-chapter. It's all about choice.... (Via GotzeBlogged.)

21 March 2007

Here's the latest contribution to the (academic) debate about whether and in what circumstances virtual dosh should be taxed:

Although it seems intuitively the case that the person who auctions virtual property online for a living should be taxed on his or her earnings, or even that the player who occasionally sells a valuable item for real money should be taxed on the profits of those sales, what of the player who only accumulates items or virtual currency within a virtual world? Should the person whose avatar discovers or wins an item of value be taxed on the value of that item? And should a person who trades an item in-game with another player (for an item or virtual currency) be taxed on any increase in value of the item relinquished?

Malaysia's traditional media has been ordered not to mention, quote or pursue stories exposed by bloggers and online news sites, which are emerging as a powerful new media force.

A security ministry circular dated March 13 told top editors of a dozen mainstream newspapers and five television stations that they must not "give any consideration whatsoever" to anti-government material posted online.

Ironically the circular, issued by the ministry's secretary general, was first exposed by the independent online magazine Malaysiakini.com on Saturday.

Further proof of the power - and importance - of blogs, especially in countries with a supine press. Come to think of it, they're also pretty important in countries with even a mostly-supine press - as in, everywhere. (Via Smart Mobs.)

One of the problems with the DRM battle is that it tends to get into a rut: the same old arguments for and against are trotted out. For those of us who care, it's a necessary price to pay for telling it as it is, but for onlookers, it's just plain boring.

That's what makes this piece, which reports on the recent conference "Copyright, DRM Technologies, and Consumer Protection", at UC Berkeley, quite simply the most interesting writing on DRM that I've come across for ages: as well as explaining the old arguments well, it includes a couple of new thoughts:

One good point a few panelists made is that successful DRM is likely to weaken the user's privacy. All DRM prevents computers and media devices from sharing files freely with each other. But in order to merely curb freedom, rather than end it entirely, DRM must identify which files can be shared and which can't, and which methods of sharing are permissible. The more sophisticated this process of determination becomes, the more it is necessary for devices to analyze information about the files in complex ways. The burden of this analysis will often be too great to implement in typical consumer electronics — so instead the data will be sent to an online server, which will figure out your rights and tell the client device what to do. But step back and consider where this is going: devices all over your house, sending information about your viewing and listening habits to a central server. Is this data certain to be subpoena-able someday? You bet. It probably already is.

Another point (made by Peter Swire among others) was the computer security implications of running DRM. The code in a DRM system must be a black box: it cannot be open source, because if the user could understand and change it, she could disable it and copy her files without restriction. But if the code is opaque, it cannot be examined for security flaws — and in fact, the Digital Millennium Copyright Act makes it illegal to even attempt such an examination in most circumstances. Basically, you have to run this code, for even if you are technically capable of modifying it, doing so would be illegal. (In response to this situation, Jim Blandy proposed a new slogan: "It's my computer, damn it!")

I believe that now is a critical moment in the fight against DRM: if we don't scotch the snake soon, it will turn into a hydra. To win, we need to convince "ordinary" people that DRM is mad, bad and dangerous to use; the points raised above could well prove important additions to the anti-DRM armoury.

Major European studies on open source are two a penny these days (and that's good), but some of the other opens have yet to achieve this level of recognition. So the appearance of a major EU report on Open Educational Resources from the Open e-Learning Content Observatory Services (OLCOS) project is particularly welcome.

At present a world-wide movement is developing which promotes unencumbered open access to digital resources such as content and software-based tools to be used as a means of promoting education and lifelong learning. This movement forms part of a broader wave of initiatives that actively promote the “Commons” such as natural resources, public spaces, cultural heritage and access to knowledge that are understood to be part of, and to be preserved for, the common good of society. (cf. Tomales Bay Institute, 2006)

With reference to the Open Educational Resources (OER) movement, the William and Flora Hewlett Foundation justifies their investment in OER as follows: “At the heart of the movement toward Open Educational Resources is the simple and powerful idea that the world’s knowledge is a public good and that technology in general and the Worldwide Web in particular provide an extraordinary opportunity for everyone to share, use, and re-use knowledge. OER are the parts of that knowledge that comprise the fundamental components of education – content and tools for teaching, learning and research.”

Since the beginning of 2006, the Open e-Learning Content Observatory Services (OLCOS) project has explored how Open Educational Resources (OER) can make a difference in teaching and learning. Our initial findings show that OER do play an important role in teaching and learning, but that it is crucial to also promote innovation and change in educational practices. The resources we are talking about are seen only as a means to an end, and are utilised to help people acquire the competences, knowledge and skills needed to participate successfully within the political, economic, social and cultural realms of society.

Despite its title, it covers a very wide area, including open courseware, open access and even open source. It's probably the best single introduction to open educational resources around today - and it's free, as it should be. (Via Open Access News.)

Well, it does if you are a Brit caricaturist, parodist, pasticheur or general masher-upper:

The Patent Office is charged with implementing the exciting recommendations suggested in the recent Gowers Review of IP. But they are yet to be convinced of the crucial need for some of these recommendations, mainly because they’re finding it hard to get in touch with the relevant practitioners. They are looking for concrete examples of creative practices inhibited by the law, to back up proposed exceptions for the purposes of “creative, transformative or derivative works” and “caricature, parody or pastiche”.

Would you, your colleagues, students or collaborators benefit from these exceptions? Are you working or have you worked on a project outlawed by the overly-protectionist copyright regime, which would have benefited from these kinds of exceptions? If so, please get in touch - info[at]openrightsgroup.org - and share your experience.

Here's an interesting tale that highlights the absurdity of DRM in the context of scientific publishing - not a sphere where you normally expect to encounter it:

The MIT Libraries have canceled access to the Society of Automotive Engineers’ web-based database of technical papers, rejecting the SAE’s requirement that MIT accept the imposition of Digital Rights Management (DRM) technology.

...

When informed that the SAE feels the need to impose DRM to protect their intellectual property, Professor John Heywood, the Director of MIT’s Sloan Automotive Lab, who publishes his own work with the SAE, responded with a question: “Their intellectual property?” He commented that increasingly strict and limiting restrictions on use of papers that are offered to publishers for free is causing faculty to become less willing to “give it all away” when they publish.

Echoing Professor Heywood, Alan Epstein, Professor of Aeronautics and Astronautics, believes that “If SAE limits exposure to their material and makes it difficult for people to get it, faculty will choose to publish elsewhere.” He noted that “SAE is a not-for-profit organization and should be in this for the long term,” rather than imposing high prices and heavy restrictions to maximize short-term profit.

As this makes clear, the SAE is attempting to protect an intellectual monopoly it has on other people's work by imposing DRM, which adds insult to injury. Let's hope more institutions can follow MIT's fine example, and nip this DRM madness in the bud. (Via Open Access News.)

20 March 2007

Nicola Zingaretti, the EU parliament's rapporteur for the EU directive on the planned penal regulations for the enforcement of intellectual property rights, has proposed that the mere "acceptance" of such violations be made a crime. The Italian Social-Democrat proposed that this vague term be included as part of amendments orally proposed as a "compromise" at the last minute to the Committee on Legal Affairs, which will be voting on the matter today. The FFII, a German organization for free information infrastructure, has called this proposed amendment a "broad concept of secondary liability" for "intentional" violations of copyright, patent, and trademarks. The FFII says that the proposal goes far beyond the much criticized original proposal made by the EU Commission to criminalize "inciting, aiding and abetting" legal violations.

To see why this legislation should be dropped completely, try replacing "enforcement of intellectual property rights" by "enforcement of intellectual monopolies": doesn't sound so good, eh?

A superb example of how cavalier proponents of intellectual monopolies can be with figures:

Leaving aside the rhetoric, what is particularly remarkable about these comments is the claim that Canadian copyright law is costing the economy between $10 to $30 billion per year. Obviously any estimate that varies by up to $20 billion is not particularly credible. Further, even the low end figure looks ridiculous as it is four times the losses claimed by the MPAA in China and is more than three times the total amount of cultural goods that Canada imports from the U.S. every year. Or considered another way, the $10 billion figure is more than the Finance Minister committed yesterday to new health care initiatives, the environment, education, and special services for armed forces veterans combined. And that is the low end - the $30 billion figure represents nearly 13 percent of total government revenues and nearly equals the total amount of provincial transfers and subsidies. All of this from "a lot of counterfeiting of movies and songs and whatnot?"

There is a certain irony in the fact that the OpenDocument Format, that essence of office suite freedom, has been locked up as a proprietary document costing the princely sum of 342 Swiss Francs.

Well, it's now been liberated - at least in the beery sense. You still can't do anything daring with it, like change it; but since it is meant to be a standard, I suppose that's not totally unreasonable.

Be warned, though: its 728 pages are not for those of a delicate disposition (but are, at least, better than the 6,000 pages of Microsoft's rival OOXML offering). (Via Rob Weir.)

It's always a good idea to try to understand how Microsoft regards the world of free software, and there's no better way of doing that than reading its own materials aimed at beating open source. Here's a good example, called Linux Personas, which presents various kinds of GNU/Linux users and how to win them back to Windows.

Perhaps the most interesting category is the Linux Aficionado - hard-core open source geek, in other words. The two key approaches are the usual tired TCO studies - a pretty forlorn hope given the extent to which they have been debunked - and an argument based on the strength of Windows' integrated platform.

The latter has always struck me as one of the better points, since it is (currently) a key differentiator for Microsoft. I still don't see geeks going for it (their senior managers might, though). What's more important in this context, perhaps, is the rise of the open source stack, which is effectively building a counter-argument to this. (Via Slashdot.)

19 March 2007

I've noted in a number of places the impressive and continuing rise of Sun to become pretty much the leading defender of the GNU GPL faith. Anyone who had any doubts about its ultimate intentions might like to read this post from Ian Murdock, the -ian in Debian, and one of the senior figures in the GNU/Linux world:

I’m excited to announce that, as of today, I’m joining Sun to head up operating system platform strategy. I’m not saying much about what I’ll be doing yet, but you can probably guess from my background and earlier writings that I’ll be advocating that Solaris needs to close the usability gap with Linux to be competitive; that while I believe Solaris needs to change in some ways, I also believe deeply in the importance of backward compatibility; and that even with Solaris front and center, I’m pretty strongly of the opinion that Linux needs to play a clearer role in the platform strategy.

On Saturday I attended the Open Knowledge 1.0 meeting, which was highly enjoyable from many points of view. The location was atmospheric: next to Hawksmoor's amazing St Anne's church, which somehow manages the trick of looking bigger than its physical size, inside the old Limehouse Town Hall.

The latter had a wonderfully run-down, almost Dickensian feel to it; it seemed rather appropriate as a gathering place for a ragtag bunch of ne'er-do-wells: geeks, wonks, journos, activists and academics, all with dangerously powerful ideas on their minds, and all more dangerously powerful for coming together in this way.

The organiser, Rufus Pollock, rightly placed open source squarely at the heart of all this, and pretty much rehearsed all the standard stuff this blog has been wittering on about for ages: the importance of Darwinian processes acting on modular elements (although he called the latter atomisation, which seems less precise, since atoms, by definition, cannot be broken up, but modules can, and often need to be for the sake of increased efficiency.)

One of the highlights of the day for me was a talk by Tim Hubbard, leader of the Human Genome Analysis Group at the Sanger Institute. I'd read a lot of his papers when writing Digital Code of Life, and it was good to hear him run through pretty much the same parallels between open genomics and the other opens that I've made and make. But he added a nice twist towards the end of his presentation, where he suggested that things like the doomed NHS IT programme might be saved by the use of Darwinian competition between rival approaches, each created by local NHS groups.

The importance of the ability to plug into Darwinian dynamics also struck me when I read this piece by Jamais Cascio about carbon labelling:

In order for any carbon labeling endeavor to work -- in order for it to usefully make the invisible visible -- it needs to offer a way for people to understand the impact of their choices. This could be as simple as a "recommended daily allowance" of food-related carbon, a target amount that a good green consumer should try to treat as a ceiling. This daily allowance doesn't need to be a mandatory quota, just a point of comparison, making individual food choices more meaningful.

...

This is a pattern we're likely to see again and again as we move into the new world of carbon footprint awareness. We'll need to know the granular results of actions, in as immediate a form as possible, as well as our own broader, longer-term targets and averages.

Another way of putting this is that for these kinds of ecological projects to work, there needs to be a feedback mechanism so that people can see the results of their actions, and then change their behaviour as a result. This is exactly like open source: the reason the open methodology works so well is that a Darwinian winnowing can be applied to select the best code/content/ideas/whatever. But that is only possible when there are appropriate metrics that allow you to judge which actions are better, a reference point of the kind Cascio is writing about.
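The feedback loop Cascio describes can be sketched in a few lines; the per-item footprints and the daily allowance below are invented numbers purely for illustration, not a real dataset:

```python
# Invented figures: a daily carbon "allowance" used as a point of comparison,
# as the article suggests, rather than a mandatory quota.
DAILY_ALLOWANCE_KG = 8.0
FOOTPRINTS_KG = {"beef burger": 3.0, "cheese sandwich": 1.2, "apple": 0.05}

def feedback(choices):
    """Sum the footprint of a day's choices and compare to the allowance."""
    total = sum(FOOTPRINTS_KG[c] for c in choices)
    status = "under" if total <= DAILY_ALLOWANCE_KG else "over"
    return total, status

total, status = feedback(["beef burger", "cheese sandwich", "apple"])
print(f"{total:.2f} kg: {status} the {DAILY_ALLOWANCE_KG} kg allowance")
```

The point is not the arithmetic but the granularity: each choice produces an immediate, comparable result, which is what lets the Darwinian winnowing operate.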

By analogy, we might call this particular kind of environmental action open greenery. It's interesting to see that here, too, the basic requirement of modularity turns out to be crucially important. In this case, the modularity is at the level of the individual's actions. This means that we can learn from other people's individual success, and improve the overall efficacy of the actions we undertake.

Without that modularity - call it closed-source greenery - everything is imposed from above, without explanation or the possibility of local, personal, incremental improvement. That may have worked in the 20th century, but given the lessons we have learned from open source, it's clearly not the best way.

16 March 2007

On this day, we learn from IBM's attorney, David Marriott, that the "mountain of code" SCO's CEO Darl McBride told the world about from 2003 onward ends up being a measly 326 lines of noncopyrightable code that IBM didn't put in Linux anyway.

On the other hand, SCO has infringed all 700,000 lines of IBM's GPL'd code in the Linux kernel.

Goodbye, Darl, it was vaguely fun while it lasted - well, not much actually - but now, it's over. So long, and thanks for all the fish.

This raises some interesting issues about what exactly copyright covers:

A cricketing website has found what it hopes is an inventive way to bypass copyright laws to show users action from the Cricket World Cup.

Despite the fact that Sky Television has the exclusive rights to broadcast the live action from the West Indies, Cricinfo.com is using computer animation to provide ball-by-ball coverage to non-Sky viewers.

...

Wisden said it had carefully consulted lawyers before going ahead with the simulations in this week's World Cup. "Cricinfo 3D is based on public domain information gathered by our scorers who record a number of factors such as where the ball pitched, the type of shot played and where the ball goes in the field," said a Wisden statement. "That data is then fed as an xml to anyone who has Cricinfo 3D running on their desktops and the software generates an animation based on this data."
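The factual, ball-by-ball record Wisden describes might look something like the sketch below. To be clear, the XML schema and the helper function are entirely invented for illustration; the article does not publish Cricinfo's actual feed format:

```python
import xml.etree.ElementTree as ET

# Hypothetical ball-by-ball record: pure facts about the delivery, with no
# broadcast footage involved. The element and attribute names are invented.
BALL_XML = """
<ball over="12" number="3">
  <pitch x="0.4" y="6.2"/>
  <shot type="cover drive"/>
  <result runs="4" region="extra cover"/>
</ball>
"""

def describe(ball_xml: str) -> str:
    """Turn one ball's factual record into a line of text commentary."""
    ball = ET.fromstring(ball_xml)
    shot = ball.find("shot").get("type")
    result = ball.find("result")
    return (f"Over {ball.get('over')}.{ball.get('number')}: "
            f"{shot} for {result.get('runs')} through {result.get('region')}")

print(describe(BALL_XML))
# prints: Over 12.3: cover drive for 4 through extra cover
```

An animation engine would consume the same record to position a virtual batsman and ball; the underlying facts are the input, and the rendering is Cricinfo's own expression of them.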

The issue is whether the information about the match is in the public domain, and can thus be fed into a simulation, or whether the rights that Sky has bought cover that information in some way.

I'd say not, because you generally can't copyright (or patent) pure information: for intellectual monopolies to be granted, you need to go beyond the facts to add artistic expression in the case of copyright, or non-obvious inventive steps in the case of patents. Cricinfo 3D seems to be a new artistic interpretation of pure data, independent of Sky's own "artistic" images of the game (i.e., the camera shots they take).

Not that intellectual monopolies are known for their strict adherence to the laws of logic....

Social networks lie at the heart of Web 2.0 - and of the opens. So it is surprising that more hasn't been done to analyse and map the ebb and flow of ideas and influence across these networks.

Here's an interesting solution for enterprises, called Trampoline. There are clear financial benefits for companies if they can understand better how the social networks work within (and without) their walls, so it's a good fit there too.

We humans spent 200,000 years evolving all kinds of social behaviour for accumulating, filtering and passing on information. We're really good at it. So good we don't even think about it most of the time. However the way we use email, instant messaging, file sharing and so on disrupts these instincts and stops them doing their job. This is why we waste so much time scanning through emails we're not interested in and searching for documents we need.

Trampoline's approach is so refreshingly obvious it seems radical. We've gone right back to the underlying social behaviour and created innovative software that harnesses human instincts instead of disabling them. We describe this process of mirroring social behaviour in software as "sociomimetics".

Trampoline's products leverage the combined intelligence of the whole network to manage and distribute information more efficiently. Individuals get the information they need, unrecognised expertise becomes visible, the enterprise increases the reuse and value of its knowledge assets.

Given the simplicity of the idea, it should be straightforward to come up with open source implementations. And there would be a double hit: a project that was interesting in itself, and also directly applicable to open source collaboration. (Via Vecosys.)

Leading European, Brazilian and Chinese information and communications technology (ICT) players announced today that they have joined forces to launch QualiPSo, a quality platform to foster the development and use of open source software to help their industries in the global race for growth.

The aim of QualiPSo is to help industries and governments fuel innovation and competitiveness in today’s and tomorrow’s global environment by providing the way to use trusted low-cost, flexible open source software to develop innovative and reliable information systems. To meet that goal, QualiPSo will define and implement the technologies, processes and policies to facilitate the development and use of open source software components, with the same level of trust traditionally offered by proprietary software.

Er, yes, and how will it do that?

Developing a long-lasting network of professionals caring for the quality of open source software for enterprise computing. Six Competence Centres – running the collaborative platforms, tools and process developed in this project – will be set up to support the development, deployment and adoption of OSS by private and public Information Systems Departments, large companies, SMEs, end users and ISVs.

Yes, yes, yes, and that will be done how?

Defining methods, development processes, and business models to facilitate the use of open source Software (OSS) by the industry.

Can't they just get stuck in and try it - you know, download, install, give it a go? Anything else?

Developing a new Capability Maturity Model-like approach to assessing the quality of OSS. This model will be discussed with CMM’s originators, the Software Engineering Institute (SEI), with a view to formalising it as an official extension of CMMI.

What? Maturity? What's this got to do with getting people to use the ruddy stuff?

QualiPSo is launched in synergy with Europe’s technology initiatives such as NESSI and Artemis, and will leverage Europe’s existing OSS initiatives such as EDOS, FLOSSWorld (http://flossworld.org/), tOSSad (http://www.tossad.org/) and others. The project will also leverage large OSS communities such as OW2 and Morfeo.

Oh, now I see: all this is just an excuse for more acronym madness. So it's basically just a waste of money, and a missed opportunity to do something practical.

But wait:

QualiPSo is the ever largest Open Source initiative funded by the EC.

OK, make that the biggest waste of money, and biggest missed opportunity yet.

Why couldn't they invest in a few hundred open source start-ups across Europe instead? Or, easier still, simply mandate ODF for all EU government documents? That single act alone would jump-start an entire open source economy in Europe. (Via Open Source Weblog.)

When, seventeen years ago, I designed the Web, I did not have to ask anyone's permission. The Web, as a new application, rolled out over the existing Internet without any changes to the Internet itself. This is the genius of the design of the Internet, for which I take no credit. Applying the age-old wisdom of design with interchangeable parts and separation of concerns, each component of the Internet and the applications that run on top of it are able to develop and improve independently. This separation of layers allows simultaneous but autonomous innovation to occur at many levels all at once. One team of engineers can concentrate on developing the best possible wireless data service, while another can learn how to squeeze more and more bits through fiber optic cable. At the same time, application developers such as myself can develop new protocols and services such as voice over IP, instant messaging, and peer-to-peer networks. Because of the open nature of the Internet's design, all of these continue to work well together even as each one is improving itself.

I've written several times on this blog and elsewhere about the rise of the open source enterprise stack. Its appearance signals both the increasing acceptance of a wide range of open source solutions in business, as well as the growing maturity of those different parts. Essentially, the rise of the stack represents part of a broader move to create an interdependent free software ecosystem.

Red Hat has been active in this area, notably through the acquisition of JBoss, but now it has gone even further with the announcement of its Red Hat Exchange:

Red Hat has worked with customers and partners to develop Red Hat Exchange (RHX), which provides pre-integrated business application software stacks including infrastructure software from Red Hat and business application software from Red Hat partners.

RHX is a single source for research, purchase, online fulfillment and support of open source and other commercial software business application stacks. Through RHX, customers will be able to acquire pre-integrated open source software solutions incorporating infrastructure software from Red Hat and business application software from Red Hat partners. Red Hat will provide a single point of delivery and support for all elements of the software stacks.

Through RHX, Red Hat seeks to reduce the complexity of deploying business applications and support the development of an active ecosystem of commercial open source business application partners. RHX will be available later this year.

It's obviously too early to tell how exactly this will work, and how much success it will have. But it's nonetheless an important signal that the open source enterprise stack and the associated ecosystem that feeds it are rapidly becoming two of the most vibrant ideas in the free software world.

Nice story in the Guardian today about a local UK health system that works - unlike the massive, doomed, centralised NHS system currently being half-built at vast cost. It makes some important points:

Next week the annual Healthcare Computing conference in Harrogate will buzz with accusations that the national programme has held back progress. There are two reasons behind this charge. First, under the £1bn contracts signed early in the programme, hospitals have to replace their administrative systems which record patients' details with systems from centrally chosen suppliers. As this involves considerable local effort for little benefit, progress is painfully slow. The second problem is the potential threat to confidentiality arising from making records available on a national scale.

Quite: if there is no local benefit, there will be no buy-in, and little progress. Think local, act local, and you get local achievement. The other side is that if you impose a central system, security is correspondingly weaker. Hello, ID card....

Of course, there are many areas where you want to be able to bring together information from local stores for particular purposes. That's still possible - provided you adopt open standards everywhere. Hello, ODF....

14 March 2007

The Infoethics Survey of Emerging Technologies prepared by the NGO Geneva Net Dialogue at the request of UNESCO aims at providing an outlook on the ethical implications of future communication and information technologies. The report further aims at alerting UNESCO’s Member States and partners to the increasing power and presence of emerging technologies and draws attention to their potential to affect the exercise of basic human rights. Perhaps as its most salient deduction, the study signals that these days all decision makers, developers, the corporate sector, scholars and users are entrusted with a profound responsibility with respect to technological developments and their impact on the future orientation of knowledge societies.

It touches on a rather motley bunch of subjects, including the semantic Web, RFID, biometrics and mesh networking. But along the way it says some sensible things:

One primary goal of infoethics is to extend the public domain of information; that is, to define an illustrative set of knowledge, information, cultural and creative works that should be made available to every person.

Even more surprising, to me at least, was this suggestion:

UNESCO should meanwhile support open standards and protocols that are generated through democratic processes not dominated by large corporations.

The use of OpenDocument Format and other open formats should also be encouraged as they help mitigate lock-in to certain technologies. Other initiatives to consider include pursuing free and open software, as well as the “Roadmap for Open ICT Ecosystems” developed last year.

One of the central ideas behind openness is re-use - the ability to build on what has gone before, rather than re-inventing the wheel. And yet, as this fascinating article demonstrates, there is sometimes surprisingly little sharing and re-use between the various opens:

This study demonstrates that, among a sample of 100 Wikipedia entries, which included 168 sources or references, only two percent of the entries provided links to open access research and scholarship. However, it proved possible to locate, using Google Scholar and other search engines, relevant examples of open access work for 60 percent of a sub-set of 20 Wikipedia entries. The results suggest that much more can be done to enrich and enhance this encyclopedia’s representation of the current state of knowledge. To assist in this process, the study provides a guide to help Wikipedia contributors locate and utilize open access research and scholarship in creating and editing encyclopedia entries.

I can't help feeling that there is a larger lesson here, and that all the various opens should be doing more to build on each other's strengths as well as their own. After all, it's partly what all this openness is about. Perhaps we need a meta-open movement?

Although PicksPal, a fantasy sports betting site, interests me not a jot, this is truly fascinating:

It’s all for fun, but the company started selling the top picks of its best users in October. For $10, you can get the collective picks of the top 30 users on five games. The idea was that people could use these for-fun picks to win bets in Vegas. The question was, would PicksPal be able to consistently beat Vegas odds, and the spread, with these picks?

So far, yes. By a lot. PicksPal’s overall record, against the spread, has been 562-338, or a 63% win rate. In college basketball, the win rate is 66%. In pro football, 62%. They are even getting a 52% win rate in pro hockey, their worst sport.

These look like incredibly strong results; it would be interesting to see how well they hold up in the other areas where this idea is starting to be applied.
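Checking the quoted arithmetic is straightforward, and worth doing, since the numbers are what make the claim interesting:

```python
# Figures from the quoted piece: 562 wins, 338 losses against the spread.
wins, losses = 562, 338
rate = wins / (wins + losses)
print(f"{rate:.1%}")  # prints: 62.4% (the article rounds this up to 63%)
```

A sports book typically needs bettors to win less than about 52.4% of the time (to cover the standard 11-to-10 vigorish) to stay profitable, so even the hockey figure quoted above would be notable if it held up over a large sample.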

Alas, not many people care enough about the threat posed by DRM. But I suspect that quite a few care about their TV viewing, and the traditional freedoms they enjoy in that sphere. So maybe this chilling news will wake up a few people from their digital slumbers:

Today, consumers can digitally record their favorite television shows, move recordings to portable video players, excerpt a small clip to include in a home video, and much more. The digital television transition promises innovation and competition in even more great gadgets that will give consumers unparalleled control over their media.

But an inter-industry organization that creates television and video specifications used in Europe, Australia, and much of Africa and Asia is laying the foundation for a far different future -- one in which major content providers get a veto over innovation and consumers face draconian digital rights management (DRM) restrictions on the use of TV content. At the behest of American movie and television studios, the Digital Video Broadcasting Project (DVB) is devising standards to ensure that digital television devices obey content providers' commands rather than consumers' desires. These restrictions will take away consumers' rights and abilities to use lawfully-acquired content so that each use can be sold back to them piecemeal.

13 March 2007

Your feedback on Dell IdeaStorm has been astounding. Thank you! We hear your requests for desktops and notebooks with Linux. We’re crafting product offerings in response, but we’d like a little more direct feedback from you: your preferences, your desires. We recognize some people prefer notebooks over desktops, high-end models over value models, your favorite Linux distribution, telephone-based support over community-based support, and so on. We can’t offer everything (all systems, all distributions, all support options), so we’ve crafted a survey (www.dell.com/linuxsurvey) to let you help us prioritize what we should deliver for you.

It is a truth universally acknowledged that there is only one thing more stupid than content producers suing little people with nothing in their piggy bank for alleged copyright infringement, and that is content producers suing someone with billions of dollars in their piggy bank for alleged copyright infringement:

Viacom Inc. today announced that it has sued YouTube and Google in U.S. District Court for the Southern District of New York for massive intentional copyright infringement of Viacom’s entertainment properties. The suit seeks more than $1 billion in damages, as well as an injunction prohibiting Google and YouTube from further copyright infringement. The complaint contends that almost 160,000 unauthorized clips of Viacom’s programming have been available on YouTube and that these clips had been viewed more than 1.5 billion times.

Sign-ons can be a real pain, as you are forced to create ever more accounts at sites. A single sign-on is the obvious solution, but getting everyone to agree on a standard is hard. So it's particularly good to see that OpenID is not only taking off, but is an open standard to boot.

As the most basic level, your OpenID identity is a unique URL. It can be a URL that you directly control (such as that of your personal Web page or blog) or one provided to you by a third-party service, such as an OpenID provider. In that sense, a site's use of OpenID identities is no different than using email addresses as identifiers: they are unique to each user and are verifiable. But you can publicly display an OpenID identity without attracting spam.
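The "identity is a URL" idea is concrete enough to sketch. In OpenID 1.x, a relying party discovers who can vouch for an identity by fetching the identity URL and looking for a `link` tag with `rel="openid.server"` (and optionally `openid.delegate`). A minimal sketch of that discovery step, with a made-up identity page and made-up URLs:

```python
# Minimal sketch of OpenID 1.x discovery: given an identity URL's
# HTML, find the provider endpoint declared via <link> tags.
# (Not a full consumer: no fetching, no signature verification.)
from html.parser import HTMLParser

class OpenIDLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.server = None    # the OpenID provider endpoint
        self.delegate = None  # optional delegated identity

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rels = a.get("rel", "").split()
        if "openid.server" in rels:
            self.server = a.get("href")
        if "openid.delegate" in rels:
            self.delegate = a.get("href")

# Hypothetical identity page (URLs are illustrative only):
page = """<html><head>
<link rel="openid.server" href="https://id.example.com/auth">
<link rel="openid.delegate" href="https://id.example.com/user/alice">
</head></html>"""

finder = OpenIDLinkFinder()
finder.feed(page)
print(finder.server)    # https://id.example.com/auth
print(finder.delegate)  # https://id.example.com/user/alice
```

This is why any Web page you control can serve as an identity: you just add the link tags pointing at a provider willing to verify you.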

Even though Second Life gets the lion's share of the attention, there are several other virtual world systems out there, including some that are fully open source. One such is Croquet:

Croquet is a powerful open source software development environment for the creation and large-scale distributed deployment of multi-user virtual 3D applications and metaverses that are (1) persistent (2) deeply collaborative, (3) interconnected and (4) interoperable. The Croquet architecture supports synchronous communication, collaboration, resource sharing and computation among large numbers of users on multiple platforms and multiple devices.
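The "synchronous computation" at the heart of that architecture can be sketched as replicated deterministic simulation: every participant runs an identical copy of the world, and a router delivers the same ordered message stream to each copy, so the replicas stay in lockstep without ever shipping world state around. A toy sketch (a heavily simplified stand-in for Croquet's actual model, with invented names):

```python
# Toy sketch of replicated deterministic computation, the idea
# behind Croquet-style synchronization: identical replicas plus
# an identical ordered message stream yield identical state.

class Replica:
    def __init__(self):
        self.objects = {}  # the replicated world state

    def apply(self, msg):
        # All mutation flows through ordered messages; the handler
        # must be deterministic for replicas to stay in sync.
        op, name, value = msg
        if op == "move":
            self.objects[name] = value

# The router's ordered log, delivered identically to every replica:
router_log = [("move", "avatar", (1, 2)),
              ("move", "chair", (5, 5)),
              ("move", "avatar", (1, 3))]

a, b = Replica(), Replica()
for msg in router_log:
    a.apply(msg)
    b.apply(msg)

assert a.objects == b.objects  # in sync by construction
```

Only the small messages cross the network, which is what makes the approach attractive for large, persistent shared worlds.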

The ideas behind Croquet are undeniably powerful, but it's always looked a little clunky when I've investigated it, more like a research project than anything that you might use. In other words, a solution in search of a problem.

Well, the problem has just turned up, and involves creating a secure virtual workspace for distributed teams. In the corporate context, the Second Life gew-gaws are less important than functionality like security and the ability to collaborate on any application. A new company called Qwaq, which includes many of the key people from the Croquet project, has been set up to meet that need.

It adopts a hybrid approach for its licensing: the core code is Croquet, and hence open source, but Qwaq adds proprietary elements on top. Obviously, I'd prefer it if everything were free code from the start, but it's understandable if new companies are cautious when dabbling with this tricky open source stuff. The existence of Qwaq, which obviously has a vested interest in the survival and development of Croquet, is already good news for the latter, but I predict that in time the company will gradually open up more of its code in order to tap into the community that will grow around it.

Its business model could certainly cope with that: it offers two versions of its product - one as a hosted service, the other run on an intranet. Although it is true that other companies could also host and support the product in this case, Qwaq has a unique strength that comes from the people working for it (rather like the advantage that Red Hat's roster of kernel hackers confers).

One of the benefits of using Croquet as the basis of its products is that the protocols are open, which allows Croquet-compatible products to interoperate with Qwaq's. This means that the dynamics of the Croquet ecosystem are similar to those of the Web, which is never a bad thing.

At the time of writing, there's not much to see on Qwaq's site, but I imagine that will change soon, and I'll update this post to reflect that (and also be writing elsewhere about the technology and its applications). In the meantime, Qwaq's arrival is certainly welcome, since it signals a new phase in the roll-out and commercialisation of standards-based virtual spaces. I'm sure we'll see many more in the future.

Update: The Qwaq site has now gone live, with some info and a screenshot of the Qwaq Forums product, as well as a link to a datasheet. There is also a short press release available.

12 March 2007

Openness and governments go together like horses and horseless carriages, so I was heartened to come across what sounds like a major victory for open access to key information in the shape of FarmSubsidy.org:

Farmsubsidy.org uses freedom of information laws to force European governments to release detailed data on who gets what from Europe's €48.5 billion annual farm subsidy payments. We then make this data available online.

There's a good history of how this happened, which also provides something of a blueprint for further openness. (Via WorldChanging.)

Internal memorandums circulated in the Alaskan division of the Federal Fish and Wildlife Service appear to require government biologists or other employees traveling in countries around the Arctic not to discuss climate change, polar bears or sea ice if they are not designated to do so.

Like many, I've been following Twitter with interest, if a certain bemusement: just what is the attraction of knowing that your mates are drinking a cup of coffee or taking the dog for a walk?

This post provides perhaps the best explanations so far as to why Twitter is important:

You use your social network as a filter, which helps both in scoping participation within a pull model of attention management, but also to Liz’s point that my friends are digesting the web for me and perhaps reducing my discovery costs. But the affordance within Twitter of both mobile and web, that not only lets Anil use it (he is Web-only) is what helps me manage attention overload. I can throttle back to web-only and curb interruptions, simply by texting off.

But will I use it? Not yet, although at some point I may dip a cyber-toe, as I have with LinkedIn: not because I need it, but because these social networks are indisputably an interesting trend.

09 March 2007

Microsoft Corp's director of corporate standards has conceded that 'legitimate concerns' have been raised in response to its attempt to fast-track the approval of its Open XML format by ISO.

The level of criticism targeted at Microsoft's XML-based office productivity file formats is significant, raising the potential that Open XML might not gain ISO approval, but Microsoft's Jason Matusow insisted there is still a long way to go.

This is interesting: it's the first time I've come across Microsoft expressing any kind of doubt about OOXML, its rival to ODF, romping home to become an ISO standard. I can only assume there was a presumption on the company's part that, for all the free software world's whingeing, the national bodies who have the right to object wouldn't.

Once that open access, open data meme starts spreading, there's just no stopping it....

British scientists are leading an international effort to bring together all the known geological information about every country in the world. By making the data freely available and allowing researchers to track geological features across national boundaries, the project will make it easier to plan international projects, predict earthquakes and locate natural resources such as oil and gas.

Once the project, called OneGeology, is up and running the data will be searchable via the internet. "Geology has no respect for national frontiers," said Ian Jackson, who is coordinating the project for the British Geological Survey (BGS). "The data exists, but accessibility is the key."

I've written before about how digital technology can be applied by the oppressed and disenfranchised to help preserve their identity. It's good therefore to see new-ish technologies like YouTube being pressed into similar service for a mass online protest focussing on March 10:

On March 10, 1959, Tibetans took to the streets of Lhasa to actively resist the Chinese invasion of Tibet. Tens of thousands of Tibetans risked their lives to protect their nation and their beloved leader, His Holiness the Dalai Lama. They gave of themselves so that future generations could live to continue the fight and regain the freedom of Tibet.

A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

...

The idea of a centralized database storing all of the world’s digital information is a fundamental shift away from today’s World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers where search engines serve as the equivalent of a card catalog.

A single database for all the world's digital information? Since when did massive, centralised, single point-of-failure systems come back into vogue? Google's holdings are bad enough.

Thanks, but no thanks.

Update: To be fair, it seems to be adopting a sensible licensing policy, so maybe there's hope yet:

We want to make it possible for you to add high quality structured information to your websites, mashups and applications without worrying about restrictive corporate licenses. All data is licensed Creative Commons Attribution. We only ask that you link back to us.

In addition, Tim O'Reilly has a more upbeat (perhaps because better-informed) assessment here. I can see a little better what they're trying to do, but I'm still not convinced by the centralised nature of it. Opinions?

As readers of this blog may recall, in general I'm not a big fan of analysts, since they seem to offer very little other than a re-statement of what was blindingly obvious six months ago. But there are honourable exceptions.

Take, for example, this insightful presentation by Brent Williams, a self-styled "(temporarily) Independent Equity Research Analyst". It's unusual because it manages to combine a good understanding of the open source model and world with some grown-up economics. The result is well worth reading.

Traditionally, open source has been most successful when applied to generic, mainstream software categories - operating systems, Web servers, browsers, etc. Specialised, vertical applications have not generally been thought suitable, because the pool of interested people who can contribute bug reports and fixes is small.

But the appearance of this open source urban forest tracking system suggests we may be entering a new phase:

In urban San Francisco, the public works department and nonprofit organizations work together to preserve and expand tree life as part of that city's efforts to create sustainability. The city today unveiled a new Web portal and open source application that will help those agencies, and the general public, keep tabs on a growing urban forest.

This new project will probably work not so much because there is a huge untapped group of urban forest tracking system hackers just waiting to hit some code, but because there are plenty of tree-huggers who will help debug the system and input data. In other words, these new kinds of open source projects - call them open source 2.0 - only require a small core of coders to maintain, but survive and thrive thanks to the larger group of suppliers of open data.

About Me

I have been a technology journalist and consultant for 30 years, covering the Internet since March 1994, and the free software world since 1995.

One early feature I wrote was for Wired in 1997: The Greatest OS that (N)ever Was.
My most recent books are Rebel Code: Linux and the Open Source Revolution, and Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine and Business.