If everything you hear is always recorded, if your phone can be active with no external indication, and if your main lines of communication can be tapped or hacked, the potential for Big Brother-style abuse grows exponentially. Privacy concerns loom, piracy is facilitated, and safety issues escalate (hopefully, by the time earpods replace cell phones, cars will be driving themselves!). New forms of public and private behaviour will develop; work and personal relationships will evolve based on previously nonexistent modes of communication; and new digital divides (those that can iHear vs those that can't) will deepen.

Imagine if this stuff is only closed source: let's get some open source hackers working on it fast. (Via O'Reilly Radar.)

Developers are puzzling over recent clues blogged by a few Microsoft employees regarding a new “Emacs.Net” tool the company is building.

Microsoft’s Connected Systems Division (the folks who developed the Windows Communication Foundation, a.k.a. “Indigo”) is hiring developers to build a product that team member Doug Purdy described as “Emacs.Net.” Purdy hinted that Microsoft will divulge its Emacs.Net product/strategy plans at the company’s Professional Developers Conference in late October 2008.

Emacs is a text editor used primarily by the Unix community (though versions of Emacs that work on Windows systems already exist). Richard Stallman is credited as the father of Emacs, the name of which was derived from “Editor MACroS.”

Here's a tired old meme that I've dealt with before, but, zombie-like, it keeps on coming back:

The open-source software community is simply too turbulent to focus its tests and maintain its criteria over an extended duration, and that is a prerequisite to evolving highly original things. There is only one iPhone, but there are hundreds of Linux releases. A closed-software team is a human construction that can tie down enough variables so that software becomes just a little more like a hardware chip—and note that chips, the most encapsulated objects made by humans, get better and better following an exponential pattern of improvement known as Moore’s law.

So let's just look at those statements for a start, shall we?

There is only one iPhone, but there are hundreds of Linux releases.

There's only one iPhone because the business of negotiating with the oligopolistic wireless companies is something that requires huge resources and deep, feral cunning possessed only by unpleasantly aggressive business executives. It has nothing to do with being closed. There are hundreds of GNU/Linux distributions because there are even more different kinds of individuals, who want to do things their way, not Steve's way. But the main, highly-focussed development takes place in the one kernel, with two desktop environments - the rest is just presentation, and has nothing to do with dissipation of effort, as implied by the above juxtaposition.

chips, the most encapsulated objects made by humans, get better and better following an exponential pattern of improvement known as Moore’s law

Chips do not get better because they are closed, they get better because the basic manufacturing processes get better, and those could just as easily be applied to open source chips - the design is irrelevant.
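To put rough numbers on that exponential pattern, here's a minimal sketch. It assumes the commonly cited two-year doubling period and the Intel 4004's circa-2,300 transistors as a 1971 starting point - both simplifications, chosen purely for illustration:

```python
def transistor_count(start_count, years, doubling_period=2):
    """Project a transistor count forward under a simple Moore's-law model."""
    return start_count * 2 ** (years / doubling_period)

# ~2,300 transistors in 1971, projected 36 years forward to 2007:
projected = transistor_count(2_300, 36)  # hundreds of millions of transistors
```

The curve comes out of the exponent, not out of anything in the chip's design being secret - which is exactly the point: the process improvements driving it would lift an open design just as readily as a closed one.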

The iPhone is just one of three exhibits that are meant to demonstrate the clear superiority of the closed-source approach. Another is Adobe Flash - no, seriously: what most sensible people would regard as a virus is cited as one of "the more sophisticated examples of code". And what does Flash do for us? Correct: it destroys the very fabric of the Web by turning everything into opaque, URL-less streams of pixels.

The other example is "the page-rank algorithms in the top search engines", which presumably means Google, since it now has nearly two-thirds of the search market, and the page-rank algorithms of Microsoft's search engine are hardly being praised to the sky.

But what do we notice about Google? That it is built almost entirely on the foundation of open source; that its business model - its innovative business model - would not work without open source; that it simply would not exist without open source. And yes, Yahoo also uses huge amounts of open source. No, Microsoft doesn't, but maybe it's not exactly disinterested in its choice of software infrastructure.

Moreover, practically every single, innovative, Web 2.0-y start-up depends on open source. Open source - the LAMP stack, principally - is innovating by virtue of its economics, which make all these new applications possible.

And even if you argue that this is not "real" innovation - whatever that means - could I direct your attention to a certain technology known colloquially as the Internet? The basic TCP/IP protocols? All open. The Web's HTTP and HTML? All open. BIND? Open source. Sendmail? Open source. Apache? Open source. Firefox, initiated in part because Microsoft had not done anything innovative with Internet Explorer 6 for half a decade? Open source.

But there again, for some people maybe the Internet isn't innovative enough compared to Adobe's Flash technology.

As I've noted before, one of the key features of free software is its modularity. From this and the underlying licence flows the ability to mix and match different elements to produce new applications.

Synovel, a startup based in Hyderabad, India, founded by a group of International Institute of Information Technology (IIIT) graduates, has released a preview of Spicebird, a Mozilla-based collaboration suite.

Spicebird is built on Thunderbird and Lightning, the powerful extension that adds calendaring functions to Thunderbird. Additionally, it seems to integrate SamePlace, a Firefox extension that provides instant messaging capabilities based on the Jabber protocol.

Interesting to see that this is coming out of India - not currently a hotbed of such open source startups, but an area I'm sure we'll be hearing more from in the future.

There are lots of young companies in the same space, each promoting their own angle on solving the problem that they’ve identified. There are companies playing within the Outlook/Exchange framework. There are companies coming at it with Exchange replacements. There are companies focusing on collaboration rather than communication. There are companies with a web focus, others with a mobile focus, others with a social network focus.

But as he also notes, there are very particular advantages to working in the open source space:

From the project health point of view, I think it’s good to have various companies building products off of the Mozilla codebase in general. At the very least, it means that the platform won’t get too tied to any one product’s requirements. I don’t think there’s a huge risk of that happening, because Mozilla already supports several active products (Firefox, Thunderbird, Seamonkey, Komodo, Songbird, Miro, Joost, etc.). But having more people care about the mail/news bits should at least help with the engineering work we need to do there which is product-independent. There are long-standing architectural problems with the system which haven’t been fixed because of a lack of resources. With several companies betting on this platform, as long as the discussions happen in public and in good faith, we should be able to work together to improve things for all.

30 December 2007

One of the great things about open standards is that anyone can implement them - including those in the free software world. An obvious candidate for this treatment is the new OpenSocial set of APIs from Google, and here's an Apache project doing just that:

Shindig will provide implementations of an emerging set of APIs for client-side composited web applications. The Apache Software Foundation has proven to have developed a strong system and set of mores for building community-centric, open standards based systems with a wide variety of participants.

A robust, community-developed implementation of these APIs will encourage compatibility between service providers, ensure an excellent implementation is available to everyone, and enable faster and easier application development for users.

The Apache Software Foundation has proven it is the best place for this type of open development.

The Shindig OpenSocial implementation will be able to serve as a reference implementation of the standard.

29 December 2007

The recording industry is an extraordinary example of not learning from experience. You would have thought that the backlash against its heavy-handed response to people downloading music would have been enough to teach it a lesson, given the negative image it earned as a result. Apparently not:

In legal documents in its federal case against Jeffrey Howell, a Scottsdale, Ariz., man who kept a collection of about 2,000 music recordings on his personal computer, the industry maintains that it is illegal for someone who has legally purchased a CD to transfer that music into his computer.

The industry's lawyer in the case, Ira Schwartz, argues in a brief filed earlier this month that the MP3 files Howell made on his computer from legally bought CDs are "unauthorized copies" of copyrighted recordings.

"I couldn't believe it when I read that," says Ray Beckerman, a New York lawyer who represents six clients who have been sued by the RIAA. "The basic principle in the law is that you have to distribute actual physical copies to be guilty of violating copyright. But recently, the industry has been going around saying that even a personal copy on your computer is a violation."

28 December 2007

It's been a great year for free software, which just keeps on getting better and more widely adopted. And if you can't quite remember who, what, when, why or how, try these excellent listings from Matt Asay and Tristan Nitot for open source and Mozilla respectively.

27 December 2007

One of the recurrent themes on this blog is the transition from a world of analogue content to one that is purely digital - and hence trivially copiable. The refusal of the media producers to recognise this shift is at the root of most of the problems they face in terms of declining sales and increasing unauthorised copying. Another recurrent idea has been the solution to this problem: to give away the digital but make money from the analogue.

Here's someone else with a nice observation that meshes with this perfectly:

Last Friday I was at a movie preview for a concert movie called U23D, which, as you will correctly surmise, was a U2 concert filmed in digital 3D.

A few weeks ago I saw the new film Beowulf, also in 3D.

As I look out the office window to the AMC Loews on 84th St, I see that the marquee is already pitching Hannah Montana 3d, not due out until February.

And outside that same theater is a 3d movie poster for the upcoming Speed Racer movie.

Suddenly everything is floating in space, after decades of flatness. What gives?

The answer?

Could it have something to do with the fact that a 3d movie cannot be pirated?

According to IMDB, the LA premiere of Beowulf was on November 3, 2007 and the film was officially released in the US on November 16. On the other hand, according to vcdquality (a news site that announces the “releases” of films into various darknets) it was already available for file sharing by November 15.

Isn’t it just possible that the studios were thinking: Hey guys, I know you could just download this fantasy flick and see it on your widescreen monitor. But unless you give us $11 and sit in a dark theater with the polarized glasses, you won’t be seeing the half-naked Angelina Jolie literally popping off the screen!

The Director of the National Institutes of Health shall require that all investigators funded by the NIH submit or have submitted for them to the National Library of Medicine's PubMed Central an electronic version of their final, peer-reviewed manuscripts upon acceptance for publication to be made publicly available no later than 12 months after the official date of publication: Provided, That the NIH shall implement the public access policy in a manner consistent with copyright law.

That's just been signed into law in the US, and even though the choice of 12 rather than six months is slightly pusillanimous, it's still a huge win for open access in the US. It will also have a knock-on effect around the world, as open access to publicly-funded research starts to become the norm.

For more details see Peter Suber's post, with links to more details and background.

24 December 2007

23 December 2007

News that IBM was buying Solid Information Technology, a company with close ties to MySQL, set off a distant bell ringing in my head in connection with something I'd written a while back, but I didn't have the time to pursue it.

When [Monty Widenius] started MySQL, I worked for this other small database company, Solid Information Technology. I told Monty that his project was just going to fail, and that it was a stupid thing to do, and that he didn't have a chance because we had a chance.

GM: What was your view of the Free Software world when you were at Solid--were you even aware of it?

MM: I was getting more aware of it, and I was getting excited about it. At Solid, I drove an initiative of not open-sourcing the product, but making it very popular on the Linux platform--and that was why I was an advertiser in Linux Journal, because we were the leading Linux database in the world in 1996. We gave it away free of charge, so we had taken a step in that direction.

Then Solid decided to cancel the project and just focus on high-end customers, and that's when I left the company. So in that sense, when I got to MySQL, I had some unfinished business. By that time, I had completely bought into the notion of code being open.

22 December 2007

The battle for the soul of the document is usually presented as a two-horse race between ODF and OOXML. But that's a very parochially Western view of things - there is, after all, a third format available: UOF, China's "Uniform Office Document Format", which I've written about several times before. If, like me, you were wondering what's happening in that world, here's a short update from Andy Updegrove.

In a much-awaited move, the non-profit Citizendium (http://www.citizendium.org/) encyclopedia project announced that it has adopted the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-by-sa) as the license for its own original collaborative content. The license permits anyone to copy and redevelop the thousands of articles that the Citizendium has created within its successful first year.

The license allows the Citizendium to join the large informal club of free resources associated especially with Creative Commons and the Free Software Foundation. Wikipedia uses the FSF’s GNU Free Documentation License (GFDL), which is expected to be made fully compatible with CC-by-sa in coming months. Therefore, Wikipedia and the Citizendium will be able to exchange content easily. A minority of Citizendium articles started life on Wikipedia and so have been available under the GFDL.

Avoiding a Balkanisation of the digital content commons through incompatible licences is critically important.

21 December 2007

I just could not find a spot on the spectrum that would trigger these kids' morality alarm. They listened to each example, looking at me like I was nuts.

Finally, with mock exasperation, I said, "O.K., let's try one that's a little less complicated: You want a movie or an album. You don't want to pay for it. So you download it."

There it was: the bald-faced, worst-case example, without any nuance or mitigating factors whatsoever.

"Who thinks that might be wrong?"

Two hands out of 500.

Now, maybe there was some peer pressure involved; nobody wants to look like a goody-goody.

Maybe all this is obvious to you, and maybe you could have predicted it. But to see this vivid demonstration of the generational divide, in person, blew me away.

I don't pretend to know what the solution to the file-sharing issue is. (Although I'm increasingly convinced that copy protection isn't it.)

Er, David, it's called changing the business model. It is just not sustainable to try to enforce analogue-type laws on digital content, and ultimately it's counterproductive - as the music industry is finding to its cost.

John Naughton points us to a nicely-written piece by John Lanchester about the way the City - and its global mates - work using derivatives to the tune of $85,000,000,000,000 (sorry, no mistake in that number of zeroes.)

It's a long piece because it's describing something that's complicated - sometimes made intentionally more complicated by the banking industry for the purposes of obfuscation - but at its heart it amounts to a very simple thing: gambling. As Lanchester writes:

The list of individual traders who have lost more than a billion dollars at a time betting on derivatives is not short: Robert Citron of Orange County, Toshihide Iguchi at Daiwa, Yasuo Hamanaka at Sumitomo and Nick Leeson at Barings, just to take examples from the early 1990s. In Leeson’s case in 1995, it was a huge unauthorised position in futures on the Nikkei 225, the main Japanese stock exchange. Leeson had been doubling and redoubling his bets in the belief/hope that the index would rise, and hiding the resulting open position – a gigantic open-ended bet – in a secret account. (Incidentally, Leeson’s big bet was on the Nikkei holding its level above 18,000. At the time of writing, 12½ years later, the index sits at 15,454 – proof, if it were needed, that when prices go down they can stay that way for a long time.) The loss eventually amounted to £827 million, and destroyed Barings, Britain’s oldest merchant bank.

Got that? These are bets, pure and simple, on the way that things will work out. You can dress them up as you will, you can complexify them as you will, but at bottom they are simply gambles.

Now, add that fact to the distasteful sight of the US - a country that probably uses derivatives more than any other, and also probably makes more money from derivatives than any other, trying to stop online gambling with non-US companies - for example by buying off pathetically greedy entities like the EU:

The United States has reached a deal with the European Union, Japan and Canada to keep its Internet gambling market closed to foreign companies, but is continuing talks with India, Antigua and Barbuda, Macau and Costa Rica, U.S. trade officials said on Monday.

Since I'm no expert on derivatives, I don't know the extent to which you can buy them online from anyone anywhere, but I would be utterly astonished if you couldn't (and this suggests you can.) So you have a fundamental cognitive dissonance between the extraordinary use of derivatives worldwide, and the US attempt to ban online gambling through non-US companies.

Maybe the idea is that only the ultra-rich should be allowed to gamble wherever they want.

Folksonomies - the ad hoc tagging by anyone of anything - sound terribly democratic compared to your top-down authoritarian imposition of taxonomies, but it's easy to see why people are sceptical about them: how can anything useful arise out of something so chaotic?

Del.icio.us is one example of how such folksonomies can be really useful, and here's another (and note the groovy .museum domain - the first time I've seen this):

"Steve” is a collaborative research project exploring the potential for user-generated descriptions of the subjects of works of art to improve access to museum collections and encourage engagement with cultural content. We are a group of volunteers, primarily from art museums, who share a common interest in improving access to our collections. We are concerned about barriers to public access to online museum information. Participation in steve is open to anyone with a contribution to make to developing our collective knowledge, whether they formally represent a museum or not.

Very cool - both in terms of adding metadata to objects, and as far as getting the public involved with art. Indeed, this idea should really be extended to everything - imagine a database of public places that people could tag.
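As a minimal sketch of how order emerges from that chaos (all the users, items and tags below are hypothetical, del.icio.us-style data): each person tags independently, but simply counting votes across users surfaces a consensus description without any top-down taxonomy.

```python
from collections import Counter

# Hypothetical tagging events: (user, item, tag)
tag_events = [
    ("alice", "mona-lisa", "portrait"),
    ("bob",   "mona-lisa", "renaissance"),
    ("carol", "mona-lisa", "portrait"),
    ("dave",  "mona-lisa", "smile"),
]

def consensus_tags(events, item, min_votes=2):
    """Return tags that at least `min_votes` independent users applied to an item."""
    counts = Counter(tag for _, tagged_item, tag in events if tagged_item == item)
    return [tag for tag, votes in counts.items() if votes >= min_votes]

consensus_tags(tag_events, "mona-lisa")  # → ["portrait"]
```

No individual tagger is authoritative; the aggregate is what becomes useful - which is precisely the del.icio.us effect.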

The announcement that Red Hat's CEO and President, Matthew Szulik, is moving on (back?) to become its Chairman, is obviously pretty big news, since Szulik has led the company for nearly a decade, a long time in the still-young open source world. His valedictory message is well worth a read; I particularly liked the following section:

My early days at Red Hat were sitting in a small office with no door in Durham, NC, across from the free soda machine. People by the hour would stop and punch their selection for Mountain Dew or Coke. My challenge was that I was tasked to go and raise venture money for this free software company. And over the phone, in the middle of my sales pitch, corporate types at Dell, IBM and HP and others would hear the constant banging of soda cans dropping in the soda machine and would ask if there were fights going on outside my office. So, after a while, I told the prospective investors that YES there were fights going on. And yes, these fights happened frequently. It’s how people at Red Hat settled technical issues like software bugs and features in new releases. Red Hat was a real tough place to work. Dell, HP and IBM became investors because they liked the fighting spirit of Red Hat.

20 December 2007

A patent is an artificial government-imposed monopoly on implementing a certain method or technique. If the method or technique can be implemented by software, so that the patent prohibits the distribution and use of certain programs, we call it a software patent.

The third stage, planned for mid-2008, will be the addition of the OpenDocument format for word processors to the list of export formats. "Imagine that you want to use a set of wiki articles in the classroom. By supporting the OpenDocument format, we will make it easy for educators to customize and remix content before printing and distributing it from any desktop computer," Sue Gardner explained.

The first stage, in case you were wondering,

is a public beta test running on WikiEducator.org of functionality for remixing collections of wiki pages and downloading them in the PDF format.

while the second stage is

the deployment of the technology on the projects hosted by the Wikimedia Foundation, including Wikipedia. At this point, users will also be given the option to order printed copies of wiki content directly from PediaPress.com. "The integration into Wikipedia will be a milestone for print-on-demand technology. Users will literally be empowered to print their own encyclopedias", according to Heiko Hees, product manager at PediaPress.com.

Hmm, well, maybe: I think the amount of work involved might make buying an encyclopaedia rather more attractive.... (Via Open Access News.)

I mentioned en passant the new CCZero licence, but here's news of yet another:

CC+ is a protocol to enable a simple way for users to get rights beyond the rights granted by a CC license. For example, a Creative Commons license might offer noncommercial rights. With CC+, the license can also provide a link to enter into transactions beyond access to noncommercial rights — most obviously commercial rights, but also services of use such as warranty and ability to use without attribution, or even access to physical media.

Dopplr can show me when a distant friend will be near and vice versa. Twitter can show me what my friends are doing right now. Wesabe can show me what others have learned about saving money at the places where I spend my money. Among many other things Flickr can show me how to look differently at the things I see when I take photos. And del.icio.us can show me things that my friends are reading every day.

It's all about making connections, creating a community and finding a commonality. The post calls this "surfacing coincidences" but I think that "coincidence" is the wrong word, since it suggests something random and casual; what we're talking about is an action that is much more directed: people looking for like-minded, like-thinking, like-doing people. (Via John Battelle.)
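A toy version of that directedness (Dopplr-style, with made-up cities and day numbers) might look like this: the service isn't waiting for chance encounters, it's computing the overlap.

```python
# Hypothetical travel plans: (city, arrival_day, departure_day)
my_trips     = [("London", 1, 5), ("Berlin", 10, 14)]
friend_trips = [("Berlin", 12, 16), ("Tokyo", 20, 25)]

def coincidences(trips_a, trips_b):
    """Return (city, start, end) windows where two travellers overlap."""
    overlaps = []
    for city_a, start_a, end_a in trips_a:
        for city_b, start_b, end_b in trips_b:
            # Same city, and the date ranges intersect
            if city_a == city_b and start_a <= end_b and start_b <= end_a:
                overlaps.append((city_a, max(start_a, start_b), min(end_a, end_b)))
    return overlaps

coincidences(my_trips, friend_trips)  # → [("Berlin", 12, 14)]
```

The "coincidence" is nothing of the sort: it's a deterministic intersection of data that people have chosen to share.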

The Mindquarry GO and Mindquarry PRO products will be discontinued as of today. Our Open Source product will remain publicly available (see below for more information). To those with a Mindquarry GO Beta account, we now offer the possibility to migrate their data to the Open Source version of Mindquarry. This means that they can install Mindquarry themselves and use existing data from their Mindquarry GO Beta instance. Please write to support@mindquarry.com if you want us to extract your data from Mindquarry GO Beta to send it to you.

Keeping our Open Source software alive

Our developers team is currently working on finishing the Mindquarry 1.2beta release, which will be available around end of October. Beginning with 1.2beta, Mindquarry source code will be hosted on Sourceforge as well as the mindquarry.com Web site. Hence, our software as well as all necessary information such as installation documentation and forum discussions will still be available. Further details and links will be available in the next and probably final Mindquarry community newsletter.

This is an object lesson in one of free software's great virtues: whatever happens, the code lives on. This means that even commercial customers can migrate to free versions where they have been paying for other varieties. (Via NetworkWorld.)

Something calling itself a “Protocol for Implementing Open Access Data” sounds about as exciting as a list of ingredients for paint. But this memo from the Science Commons is one of the most important documents in this field to date. Its scope is explained in the opening paragraph:

This memo provides information for the Internet community interested in distributing data or databases under an “open access” structure. There are several definitions of “open” and “open access” on the Internet, including the Open Knowledge Definition and the Budapest Declaration on Open Access; the protocol laid out herein is intended to conform to the Open Knowledge Definition and extend the ideas of the Budapest Declaration to data and databases.

Again, that may not sound very exciting, but coming up with definitions of “open data” or “open access data” has proved extraordinarily hard, and in the course of the memo we learn why:

3. Principles of open access data

Legal tools for an open access data sharing protocol must be developed with three key principles in mind:

3.1 The protocol must promote legal predictability and certainty.
3.2 The protocol must be easy to use and understand.
3.3 The protocol must impose the lowest possible transaction costs on users.

These principles are motivated by Science Commons’ experience in distributing a database licensing Frequently Asked Questions (FAQ) file. Scientists are uncomfortable applying the FAQ because they find it hard to apply the distinction between what is copyrightable and what is not copyrightable, among other elements. A lack of simplicity restricts usage and as such restricts the open access flow of data. Thus any usage system must both be legally accurate while simultaneously very simple for scientists, reducing or eliminating the need to make the distinction between copyrightable and non-copyrightable elements.

The terms also need to satisfy the norms and expectations of the disciplines providing the database. This makes a single license approach difficult – archaeology data norms for citation will differ from those in physics, and yet again from those in biology, and yet again from those in the cultural or educational spaces. But those norms must be attached in a form that imposes the lowest possible costs on users (now and in the future).

The solution is at once obvious and radical:

4. Implementing the Science Commons Database Protocol for open access data

4.1 Converge on the public domain by waiving all rights based on intellectual property

The conflict between simplicity and legal certainty can be best resolved by a twofold measure: 1) a reconstruction of the public domain and 2) the use of scientific norms to express the wishes of the data provider.

Reconstructing the public domain can be achieved through the use of a legal tool (waiving the relevant rights on data and asserting that the provider makes no claims on the data).

Requesting behavior, such as citation, through norms and terms of use rather than as a legal requirement based on copyright or contracts, allows for different scientific disciplines to develop different norms for citation. This allows for legal certainty without constraining one community to the norms of another.

Thus, to facilitate data integration and open access data sharing, any implementation of this protocol MUST waive all rights necessary for data extraction and re-use (including copyright, sui generis database rights, claims of unfair competition, implied contracts, and other legal rights), and MUST NOT apply any obligations on the user of the data or database such as “copyleft” or “share alike”, or even the legal requirement to provide attribution. Any implementation SHOULD define a non-legally binding set of citation norms in clear, lay-readable language.

The solution is obvious because the public domain is the zero state of copyright (in fact, the new Creative Commons public domain licence is called simply CCZero.) It is radical because previous attempts have tried to build on the evident success of the GNU GPL by taking a kind of copyleft approach: using copyright to limit copyright. But the new protocol explicitly negates the use of both GPL's copyleft and the Creative Commons Sharealike licences because, minimal as they are, they are still too restrictive – even though they are both predicated on maximising sharing.

One knock-on consequence of this is that attribution requirements are out. This is not just a matter of belief or principle, but of practicality:

In a world of database integration and federation, attribution can easily cascade into a burden for scientists if a category error is made. Would a scientist need to attribute 40,000 data depositors in the event of a query across 40,000 data sets? How does this relate to the evolved norms of citation within a discipline, and does the attribution requirement indeed conflict with accepted norms in some disciplines? Indeed, failing to give attribution to all 40,000 sources could be the basis for a copyright infringement suit at worst, and at best, imposes a significant transaction cost on the scientist using the data.

It is this pragmatism, rooted in how science actually works, that makes the current protocol particularly important: it might actually be useful. It's also significant that it plugs in to previously existing work in related fields. For example, as the accompanying blog post explains:

We are also pleased to announce that the Open Knowledge Foundation has certified the Protocol as conforming to the Open Knowledge Definition. We think it’s important to avoid legal fragmentation at the early stages, and that one way to avoid that fragmentation is to work with the existing thought leaders like the OKF.

The Open Data Commons Public Domain Dedication & Licence is a document intended to allow you to freely share, modify, and use this work for any purpose and without any restrictions. This licence is intended for use on databases or their contents (“data”), either together or individually.

Many databases are covered by copyright. Some jurisdictions, mainly in Europe, have specific special rights that cover databases called the “sui generis” database right. Both of these sets of rights, as well as other legal rights used to protect databases and data, can create uncertainty or practical difficulty for those wishing to share databases and their underlying data but retain a limited amount of rights under a “some rights reserved” approach to licensing. As a result, this waiver and licence tries to the fullest extent possible to eliminate or fully license any rights that cover this database and data.

Again, however dry and legalistic this stuff may seem, it's not: we're talking about the rigorous foundations of new kinds of sharing - and we all know how important and powerful that can be.

Update: John Wilbanks has pointed me to his post about the winnowing process that led to this protocol - fascinating stuff.

As a sad sack who has been writing about computers for too long - well over a quarter of a century - I'm all in favour of facts and getting them checked. But it's a little hard to tell whether this site is going to be doing that out of the goodness of its journalistic heart or not:

This blog has a single purpose: to analyze blog postings about open source, and to do some basic fact-checking where necessary.

I was slightly worried by the following:

This has become more important because there is an increasing number of blogs which have a bias and political view-point they are trying to promote, and that are not being counter-balanced.

This suggests it is more interested in politics than technology. One of the striking aspects of political blogs is how bloody tiresome they are, since they seem to descend into mindless ad hominem/ad feminam name-calling within about two comments to any post. At least technical corrections can be kept objective and civil (well, mostly).

Nonetheless, I welcome critical and objective coverage of writing about open source, particularly if it is applied even-handedly to *all* the players. After all, inspecting the source code is what it's all about.... (Via Luis Villa.)

Simon Willison has picked up a nice quotation from a post Linus made a few years back, but what really interests me are some other things he said in the same post:

think about how you and me actually came about - not through any complex design.

Right. "sheer luck".

Well, sheer luck, AND:

- free availability and _crosspollination_ through sharing of "source code", although biologists call it DNA.
- a rather unforgiving user environment, that happily replaces bad versions of us with better working versions and thus culls the herd (biologists often call this "survival of the fittest")
- massive undirected parallel development ("trial and error")

In other words, the open source methodology is hard-wired into us - right down at the level of DNA.

A Google Profile is simply how you represent yourself on Google products — it lets you tell others a bit more about who you are and what you're all about. You control what goes into your Google Profile, sharing as much (or as little) as you'd like.

And here's the sting in the tail:

Use multiple Google products? Soon your Google Profile will link up with these as well.

In other words, despite its ultra low-profile launch, Google Profile will be the nexus of everything you do on Google.

The Open Source Consortium has prodded the BBC Trust into words, if not action:

The BBC Trust and the Open Source Consortium (OSC) have agreed the promotion of Microsoft by the BBC should end. After a meeting with the OSC, the BBC Trust restated its commitment to a platform agnostic solution for the iPlayer's catch-up service and agreed that the recently launched streaming service was only an interim solution.

The main credit for this should go to the OSC's indefatigable boss, who explained what remains to be done:

Mark Taylor, President of the Open Source Consortium, said: “We are pleased that the BBC Trust continues to engage with us and take our concerns seriously. The seven-day streaming service is elegant and attractive, and most importantly, can be used on any computer and most mobile devices without unnecessary concern with technology. Instead consumers can choose on the more important criteria of price and performance.

“However we remain concerned that the 30 day catch-up service is exclusively provided only for newer versions of Microsoft operating systems and are pleased that the BBC Trust continues to share our concern that iPlayer be made technology agnostic at the earliest opportunity.

“Thanks to the BBC Trust's intervention we met BBC management to outline how they could deliver an open iPlayer that would meet all rights holders concerns. We think it would be easily possible to use the BBC's existing, world leading Free Software solutions in an open iPlayer. We sincerely hope that the BBC will take this further."

This does matter, because if the catch-up service remains Windows only, it turns the BBC into a vector of Microsoft's DRM and products - hardly what the public broadcaster should be doing.

Moreover, fine words butter no parsnips: can we trust the BBC Trust to follow through on this? If they don't, at least we can be sure that the OSC will be there with a sharp stick goading them to do so.

As I've lamented before, open source usage in China is hard for us outside to gauge. Even the open source structures there are difficult to discern. So news that the Linux Foundation is linking up with something called the Chinese OSS Promotion Union is interesting:

COPU now has over 300 members, covering nearly all the domestic enterprises and public institution units in the field of open source, including all the Linux distributions including Red Flag, Co-Create, China Standard Soft, TurboLinux, and Sun Wah, universities (over 200), and institutes for scientific research, standard, law and industry. COPU also has over 20 multinational companies as its members who have their representative offices or branches in China including IBM, Intel, HP, Sun, Oracle, SAP, NEC, CA, BEA, Hitachi, Sybase, France Telecom, MontaVista, and Google.

16 December 2007

The United Nations Intergovernmental Panel on Climate Change (IPCC) has issued increasingly alarming conclusions about the climatic influences of human-produced carbon dioxide (CO2), a non-polluting gas that is essential to plant photosynthesis.

didn't sound suspiciously close to this one:

As for carbon dioxide, it isn't smog or smoke, it's what we breathe out, and plants breathe in. Carbon dioxide: they call it pollution, we call it life.

Because that, as I wrote some time ago, was an egregious bunch of propaganda for the joys of pollution provided by the Competitive Enterprise Institute (CEI), "advancing liberty - from the economy to ecology".

And oh, look: by an amazing coincidence the Open Letter talks about - guess what? - yes, that precious economy:

While we understand the evidence that has led them to view CO2 emissions as harmful, the IPCC’s conclusions are quite inadequate as justification for implementing policies that will markedly diminish future prosperity.

Ah, yes, prosperity - so much more important than little things like trees, a healthy, sustainable environmental commons, or survival. No, let's get our priorities right:

Attempts to prevent global climate change from occurring are ultimately futile, and constitute a tragic misallocation of resources that would be better spent on humanity’s real and pressing problems

which are, of course, how to make the rich even richer by exploiting the environmental commons as quickly as possible, before the world is burnt to a crisp.

15 December 2007

Read/WriteWeb is one of the more perceptive blogs - and I thought that even before they wrote this:

In this post we'll give you our pick for Most Promising for Web for 2008.

Originally we planned to pick the most promising Web company for 2008. But in the end the ReadWriteWeb team decided to follow the example set by Time magazine last year, when it named "You" as its 'Person of the Year'. Likewise we think there is no single Web company that is more promising than... the open source movement. It's a loose-knit group that aims to make a huge impact by tying all Web companies together.

Most people - myself included - take fonts for granted. But we shouldn't, because, just like software, fonts can be free and non-free. If you want to find out everything there is to know about the subject of free fonts, try this excellent short article.

Interesting piece in Forbes about CEOs learning to eat crow and enjoy it. Take Facebook's Zuckerberg, for example:

When Zuckerberg's apology surfaced, the protest's 70,000 or so privacy advocates still represented a relatively small seed of revolt--less than .2% of Facebook's 50 million plus members. Facebook's apology and changes to Beacon seem to have appeased that angry minority before it could swallow up the site.

That such a small group could pull a contrite message out of a chief executive also shows just how the Web can channel consumers' anger. And tech companies may be especially prone to those backlashes: Not only are tech customers particularly Web savvy, but the tech industry itself frequently sails into uncharted and--from a PR perspective--dangerous waters, says Waggener Edstrom's Neptune.

I think that much of this is due to the Internet culture, which is pretty much the same as that of the free software world. It's one that requires transparency and accountability; and when either of those is missing, it also requires apologies. Remember:

I've been extolling the virtues of the Asus EEE PC and its ilk as exemplars of an important new class of computers; but Jono Bacon has spotted a problem:

One of the distinctive traits of EEE PC, and many other sub-notebook, MID and smaller computing devices, is that they run with a smaller screen resolution than typical desktop machines. I am pretty sure that most desktop machines that people are running Linux on will be running on a minimum of 1024×768, and likely a higher resolution. One of the things that I have noticed in recent years is that an increasing number of Open Source applications look terrible on lower resolutions.

Fortunately, it's readily solvable:

We need better testing, bug-reports being filed, and users actively checking and ensuring that software works well in lower resolutions. I also believe it forces us all into a world of more intelligent, usable design - hugely tall windows crammed with a million preferences or super-thick toolbars are not usable interfaces. One could infer that having to be conscious of lower resolutions will make us think more about the usability of our applications and ensure we don’t cram a million-and-one buttons into a window.

A knol on a particular topic is meant to be the first thing someone who searches for this topic for the first time will want to read. The goal is for knols to cover all topics, from scientific concepts, to medical information, from geographical and historical, to entertainment, from product information, to how-to-fix-it instructions. Google will not serve as an editor in any way, and will not bless any content. All editorial responsibilities and control will rest with the authors. We hope that knols will include the opinions and points of view of the authors who will put their reputation on the line. Anyone will be free to write. For many topics, there will likely be competing knols on the same subject. Competition of ideas is a good thing.

Knols will include strong community tools. People will be able to submit comments, questions, edits, additional content, and so on. Anyone will be able to rate a knol or write a review of it. Knols will also include references and links to additional information. At the discretion of the author, a knol may include ads. If an author chooses to include ads, Google will provide the author with substantial revenue share from the proceeds of those ads.

Once testing is completed, participation in knols will be completely open, and we cannot expect that all of them will be of high quality. Our job in Search Quality will be to rank the knols appropriately when they appear in Google search results. We are quite experienced with ranking web pages, and we feel confident that we will be up to the challenge. We are very excited by the potential to substantially increase the dissemination of knowledge.

Numerous commentators, including myself, have decried the growth of copyright holder rights in recent decades. Copyright’s expansion is widely said to be inimical to copyright’s core goals and economic rationale. If so, why has that expansion occurred? Without question, there are multiple causes. This essay surveys and critiques a number of them, beginning with the copyright industries’ raw political muscle and moving to the rhetorical and theoretical frameworks for expansion.

One of the many insights that have come out of open source is what might be called the "pebble on the cairn" effect - the idea that by combining the small, even negligible, individual efforts we can create something large and durable.

Here's a perfect example that builds on the fact that scholars very often scan books in the public domain during the course of their research, but then don't do anything with those scans. What if they were all brought together, and then fed into an OCR system?

If many researchers have had to scan rare documents or books for their own perusal, there’s a potential treasure trove of material that exists among their combined efforts. Rather than let all that scholarship rot, or waste away in data files, the university’s Center for History and New Media sees an opportunity to create an open archive of scholarly resources in the public domain.

...

In partnership with the Internet Archive, and with funding from the Andrew W. Mellon Foundation, the center is creating a way for scholars to upload existing data files to be optically scanned (to make them text-searchable) and stored in a database available to the public.

Even better is the fact that open source software can be used to realise this idea:

The vehicle for the new environment will be the Zotero plug-in for the Firefox browser, also developed by the center. The software stores Web pages, collects citations and lets scholars annotate and organize online documents. A new feature of the plug-in will allow people to collaborate and share materials through a dedicated server. Building on that functionality, according to Cohen, the system will allow scholars to drag and drop documents onto an icon in Zotero that essentially sends it to the Internet Archive for storage and free optical character recognition.

The eventual result of the project, called Zotero Commons, could be a reduced need for research trips, Cohen suggested.

Imagine a boot stamping on your face when you cry with hopes for a better World. That's what it felt like when I went about trying to actually use this pen. It mocked me. It shouted at me. It told me I was not worthy. In short, it jilted me.

...

I will still be ordering a new quill next week from a custom manufactory in Belgium, but this has been a fascinating jaunt into the future. Perhaps the pen might be more practical if some fins were attached to the sides.

...

I glanced down to my hand and there, humbly, sat the pen. I cannot recommend this highly enough; indeed, I would say that it is the ultimate catalyst to enlightenment. Since I have owned this pen every word I have written has been like pure gold; my business ventures have prospered, my home life excelled and my pot plants flourished. Where, oh where, I hear you ask, can I purchase such a pen-sized piece of wonder? It is here, my friend, it is here.

and the coup de grâce:

This Pen has been amazing, although not for its intended use, this pen has still lived up to its reputation as a solid performer. I've used this pen to stab and kill 3 neighborhood dogs this week alone. The pen retains its grip even when submerged in dog blood. Thank You Amazon and Bic for creating such a useful pen!

The "brains" of the Ares I rocket that will send four astronauts back to the moon sometime in the next 12 years will be built by Boeing, NASA announced today—but the specifications will be open-source and non-proprietary.

Readers of this blog will know that open source is by definition intelligent, but what we're talking about here is something else: a report for congress about the use of information that is openly available by intelligence services:

Open source information (OSINT) is derived from newspapers, journals, radio and television, and the Internet. Intelligence analysts have long used such information to supplement classified data, but systematically collecting open source information has not been a priority of the U.S. Intelligence Community (IC). In recent years, given changes in the international environment, there have been calls, from Congress and the 9/11 Commission among others, for a more intense and focused investment in open source collection and analysis. However, some still emphasize that the primary business of intelligence continues to be obtaining and analyzing secrets.

A consensus now exists that OSINT must be systematically collected and should constitute an essential component of analytical products. This has been recognized by various commissions and in statutes. Responding to legislative direction, the Intelligence Community has established the position of Assistant Director of National Intelligence for Open Source and created the National Open Source Center. The goal is to perform specialized OSINT acquisition and analysis functions and create a center of excellence that will support and encourage all intelligence agencies.

Got that? There is now an official Assistant Director of National Intelligence for Open Source, and even a National Open Source Center. (Via Cryptome.)

Opera Software has filed an antitrust suit against Microsoft in the European Union, accusing it of stifling competition by tying its Internet Explorer web browser to Windows.

The complaint, which was filed by the Norwegian firm with the European Commission yesterday, says Microsoft is abusing its dominant position in the desktop PC market by offering only Internet Explorer as a standard part of Windows, and hindering interoperability by not following accepted standards with IE.

Opera is asking the Commission, the executive branch of the European Union, to force Microsoft to unbundle IE from Windows, or include other browsers as a standard part of its operating system. It also wants it to require Microsoft to adhere to industry standards with its Web browser.

It didn't do any good last time, but this is the EU rather than the US, so it will be interesting to compare and contrast the outcomes. Still, I have to say that the real solution is not to file this kind of anti-trust suit, but to deploy Firefox. Obviously, that's not an option for Opera, which may explain why they've taken this route.

Open Yale Courses provides free and open access to seven introductory courses taught by distinguished teachers and scholars at Yale University. The aim of the project is to expand access to educational materials for all who wish to learn.

I'm pretty much the world's biggest fan of opening things up, but sometimes you do have to ask: what's the point?

Sun Microsystems on Tuesday followed through on a promise to release the designs of a second server processor as open-source software.

The design for Niagara 2, formally called the UltraSparc T2 and currently shipping in servers, now is governed by the General Public License (GPL)--though as with Niagara 1, Sun is using the earlier version 2 of the seminal license.

I applaud the sentiment behind this move, but wonder whether anyone will benefit. How many people are actually hacking on the design of chips?

The Eee PC has attracted so much attention worldwide that other vendors, including China's Hasee Computer, want to grab a share of the market, Gartner says in its Semiconductor DQ Monday Report this week. The difference is that these companies plan to make low-cost laptops at standard sizes and with better functionality, so they're easier to use.

Hasee plans to launch a low-cost laptop soon, but with a bigger display than the Eee PC, a more powerful processor and much more storage, Gartner says. The Q540X laptop will carry an Intel Celeron 540 processor, an 80G byte hard drive, a 13.3-inch display, weigh 2.19 kilograms and cost just 2,999 Chinese renminbi (US$405), Gartner says.

In 1978, I.B.M. was beginning to design its PC, which was a radical break for a company that had until then resisted open architectures and industry standards. Mr. Lowe invited Mr. Nelson to the company’s offices in Atlanta for a 90-minute presentation.

The resulting slide show, in which Mr. Nelson sketched out a world in which computer users would be able to retrieve information wherever they were, came as a shock to the blue-suited I.B.M. executives, Mr. Lowe said. It gave a hint of the world that the PC would bring, and even though the I.B.M.-ers were getting ready to transform a hobbyist business into one of the world’s major industries, they had no clue of the broader social implications.

The statement, issued on behalf of Rosedale, read: "I can confirm that Cory Ondrejka, CTO, will be leaving Linden Lab at the end of this year, in order to pursue new professional challenges outside the company. I wanted to take this opportunity to publicly thank Cory for his tremendous contribution to the company and to Second Life, in terms of its original vision and ongoing progress.

Eeek: this is not good. I interviewed Cory earlier this year, and found him both an extremely pleasant chap and very switched-on. Obviously, I don't know the background to this latest news, but it bodes ill to lose your CTO in this way....

William Patry is Senior Copyright Counsel, Google; he's also author of a seven-volume, 6,000 page treatise on copyright, which suggests he knows a thing or two about the subject. In one of the longest blog posts I've seen in a while (and not exactly light reading, either), he wrote about "the legendary UK intellectual property authority Sir Hugh Laddie" and his inaugural lecture as a Professor at the University College London.

The title of his lecture was "The Insatiable Appetite for Intellectual Property Rights" - interesting in itself. But what is really remarkable is that Patry agrees:

Regrettably, both Sir Hugh and I have been led in recent years to speak out in protest over the unslakable lust for more and more rights, longer terms of protection, draconian criminal provisions, and civil damages that bear no resemblance to the damages suffered. As Sir Hugh noted in his speech, “A calm look at the way IP rights are obtained and enforced in practice suggests that something is wrong. The drive for more IP rights has produced startling results.” He then gives page after page of examples, drawn from copyright law, trademark law, and patent law.

When two of the top copyright experts in the Anglophone world both speak out in no uncertain terms against the current intellectual property maximalism, you know that there's something seriously rotten in the state of Denmark.

I have to declare an interest in the new blog Open Teaching, Learning & Certification, since I was probably partly instrumental in its creation. Its author, Leo Max Pollak, chatted to me about his interesting ideas, the principal ones being:

* An instituted centralised hub of British open courseware from Britain's Russell Group (and contributing) research universities, at www.ocw.ac.uk. Open courseware consists of a freely-accessible, IP-cleared, online publication of a university's full catalogue of under- and post-graduate course materials - syllabi, reading lists with links to open access papers, course notes, video/audio lecture notes, slideshows, past exam papers, assignments etc. The pioneering university in the provision of open courseware is MIT, whose entire course materials can be found at ocw.mit.edu

* Complementing a high-quality and pluralistic British open courseware offering, I will be advocating a new kind of university qualification - an Open degree - whereby citizens can pay a premium fee and take the same exams as do existing students in enrolled face-to-face learning, with certificates signifying information about the specific courses examined on. This would be targeted, via a high-profile public information campaign, at adult learners, excluded minorities, and students at pre-university age.

Recognising that such bold moves might prove hard to get implemented, I suggested blogging to get them out in the open, so to speak. The result is the new blog.

11 December 2007

It's still very hard to read what is happening in the Chinese GNU/Linux market:

Although China's Linux market as a whole doubled from 2003 to 2006 to $20 million per year, sales of Linux desktop software grew more slowly. In fact, the market share of Linux desktop software in China dropped from 16% to 12% in the same period. But according to CCID Consulting, sales of Linux desktop software increased 25.1% in the third quarter of this year, catching up with the quick growth of China's Linux industry as a whole. Several new developments have added fuel to the growth.

And this is very worrying:

Additionally, the low-cost advantage of Linux desktop software is diminishing. Microsoft has taken a more flexible pricing tack in the Chinese market, offering increasingly better discounts for Chinese computer producers. An anonymous executive of a Chinese computer producer says that his company considered using the Linux desktop OS at the beginning of this year, but eventually went with Windows because Microsoft didn't charge much more than the service fee of Linux companies. He suggested this could be looked at as a victory for Linux, as it had forced Microsoft to lower its price.

I just learned that in a Scale Out File Services (SOFS) solution a customer can implement a global filesystem (with clustering/replication) that has a maximum filesystem capacity of 33554432 Yobibytes.

I can't find the original reference to those yummy yobibytes (1024 zebibytes, in case you were wondering), but I have no reason to think they're not there, somewhere, it's just a question of searching....
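Those units are easy to get lost in, so here's a quick back-of-the-envelope check in Python - the 33554432 figure comes from the quote above; everything else is just IEC binary-prefix arithmetic (each prefix step, Ki through Yi, is a factor of 2**10):

```python
# IEC binary prefixes: 1 YiB (yobibyte) = 2**80 bytes, 1 ZiB (zebibyte) = 2**70 bytes
YiB = 2**80
ZiB = 2**70

capacity_yib = 33_554_432          # the SOFS figure quoted above, which is 2**25
capacity_bytes = capacity_yib * YiB  # total capacity in bytes: 2**25 * 2**80 = 2**105

# Sanity checks on the arithmetic
assert capacity_yib == 2**25
assert capacity_bytes == 2**105
assert YiB // ZiB == 1024          # 1 yobibyte is indeed 1024 zebibytes

print(f"{capacity_bytes:.3e} bytes")
```

So the headline number works out to 2**105 bytes - a figure so far beyond any conceivable storage array that it's clearly just the addressing limit of the filesystem, not anything anyone will ever provision.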

Independently of the fact that it's probably the best single intro to open content currently available, how could anyone resist a publication that has a gnu and penguin sitting together on its front cover?

If you *do* need more, try this:

This e-Primer introduces the idea of Open Content by locating it within the larger historical context of copyright’s relation to the public domain. It examines the foundational premises of copyright and argues that a number of these premises have to be tested on the basis of the public interest that they purport to serve. It then looks at the ways in which content owners are increasingly using copyright as a tool to create monopolies, and how an alternative paradigm like Open Content can facilitate a democratization of knowledge and culture. This e-Primer focuses on some of the implications for policy makers thinking about information policies, and the advantages that the Open Content model may offer, especially for developing countries.

The music industry has finally found an online music model it can live with:

Imeem, a social networking site that was in the recording industry's crosshairs earlier this year for allowing file-sharing on its network, has pulled off an impressive feat. This summer it settled its lawsuit with Warner Music by promising to give Warner a cut of advertising revenues from the site. Now the Wall Street Journal is reporting that it's signed similar deals with all four major labels, meaning that Imeem is now the first website whose users have the music industry's blessing to share music for free.

But wait, even though it's a streaming site, it's not actually much different from all the download sites the music industry professes to hate:

it's quite easy to download music files from Imeem using third-party tools. And because Imeem's site doesn't use DRM, Imeem downloading tools are probably legal under the DMCA. So what we have here is the de facto legalization of Napster-like sites, as long as the record labels get a cut of the advertising revenue. It's an exciting development, albeit one that should have happened seven years ago.

More evidence that GNU/Linux is carving out a new ultra-portable market sector:

Everex has confirmed plans to ship a UMPC (ultra-mobile PC) with a 7-inch screen, similar to competitor Asus's EEE PC. A source close to the company revealed that the device -- codenamed "Cloudbook" -- will ship with the Google Apps-oriented "gOS" Linux distribution early next year.

Projects make this list "because there is no adequate free replacement," the list's home page explains, which means that "users are continually being seduced into using non-free software."

He concludes with the just observation:

Personally, I find the current list both encouraging and depressing. On the one hand, it is encouraging in that relatively few items affect daily computing for the average user. Moreover, the fact that free software is in reasonable enough shape that it can start thinking beyond immediate needs and worry about such things as the BIOS is a sign of progress.

On the other hand, it is discouraging because progress sometimes seems slow. Video drivers have been a problem for years, and the improvements, while real, are also painfully slow. Similarly, Gnash has not yet developed to the stage where it can rival Adobe's Flash reader, despite several years of work.

Still, over time, the list reflects progress. For instance, since Sun announced last year that it was releasing the Java code, you will no longer find support for free Java implementations listed. By comparing the current list with previous ones, you can get a sense of the gradual evolution of free software, seeing where it's been and where it is heading. For a GNU/Linux watcher, it remains an invaluable resource.

Red Hat Enterprise MRG enables customers to leverage the full power of distributed computing with commercial-strength grid capabilities, based on the University of Wisconsin's highly respected Condor high-throughput computing project. These capabilities provide customers with a practical means of using their total compute capacity with maximum efficiency and flexibility, while improving the speed and availability of any application. Additionally, Red Hat and the University of Wisconsin have signed a strategic agreement to make Condor's source code available under an OSI-approved license and jointly fund ongoing co-development at the University of Wisconsin.

As grid guru Ian Foster notes, that last point is particularly good news regarding

the supposedly open source, but never really accessible Condor software

As the Condor site itself explains:

At this time we do not distribute source code publicly, but instead consider requests on a case-by-case basis. If you need the source code, please e-mail us at condor-admin@cs.wisc.edu explaining why, and we'll get back to you.

Free software is already very strong in this sector; open sourcing Condor will only add to its lead there over proprietary solutions.

10 December 2007

One of the premises of this blog is that openness - radiating out from open source through open content, open access, open data and the rest - is more than a technical issue. Ultimately it is something that will touch every aspect of our lives.

One manifestation of this is the move to obtain free access to government data. In the UK, the Guardian has been a leading campaigner, and here's some news of what's going on on the other side of the pond:

I got a sense for the importance of the task talking with Dan O’Neil, who is “people person” for Everyblock.com, a remarkable project headed by Adrian Holovaty designed to be a “one-stop shop” for information about urban neighborhoods, including building permits, crime reports, planned improvements, school information, etc. Dan’s job is to negotiate with government officials in the twenty cities Everyblock seeks to map, and gain access to vast geocoded data sets. Armed with a set of principles and best practices that government geeks can show to their bosses, his job would be a lot easier than it is right now.

Most significant, perhaps, is the definition of what constitutes open government data:

Government data shall be considered open if it is made public in a way that complies with the principles below:

1. Complete: All public data is made available. Public data is data that is not subject to valid privacy, security or privilege limitations.

2. Primary: Data is as collected at the source, with the highest possible level of granularity, not in aggregate or modified forms.

3. Timely: Data is made available as quickly as necessary to preserve the value of the data.

4. Accessible: Data is available to the widest range of users for the widest range of purposes.

Somewhat naively I thought that Nokia was a savvy company on the side of light - maybe because it's Finnish; but I was wrong, it seems:

Nokia has filed a submission with the World Wide Web Consortium (W3C) objecting to the use of Ogg Theora as the baseline video standard for the Web. Ogg is an open encoding scheme (On2, the company that developed it, gave it and a free, perpetual unlimited license to its patents to the nonprofit Xiph foundation), but Nokia called it "proprietary" and argued for the inclusion of standards that can be used in conjunction with DRM, because "from our viewpoint, any DRM-incompatible video related mechanism is a non-starter with the content industry (Hollywood). There is in our opinion no need to make DRM support mandatory, though."

...

Nokia's intervention here is nothing short of bizarre. Ogg is not proprietary, DRM is, and DRM-free may be a "non-starter" for Hollywood today, but that was true of music two years ago and today, most of the labels are lining up to release their catalogs without DRM. The Web, and Web-based video, are bigger than Hollywood. The Web is not a place for proprietary technology or systems that take over your computer. For Nokia (and Apple, who also lobbied hard for DRM inclusion) to get the Web this badly wrong, this many years into the game, is really sad: if you haven't figured out that the Web is open by 2007, you just haven't been paying attention.

One of the themes of this blog is how the principles behind open source can be applied to other domains. Here's someone with the same idea:

Just say the words quietly to yourself: open…source….society…. A society where the inner workings of the government, the economy, every aspect of everyday life, are placed under the spotlight for every citizen to see, examine, and have an impact on.

These are my goals: to present ways you can improve your own life and the lives of your friends and family through the benefits of the open source movement; to present ways you can give back to the movement through your own ideas, labor, or financial support; to present ways you can have a positive impact on other aspects of your life completely unrelated to technology through the use of an open source philosophy; to create a movement hellbent on remaking the world into a more cooperative, friendly, honest, and above all equitable place to live, work, and play.

One of the besetting faults of the online world is a certain anglocentricity in its reporting: we tend not to hear much about the goings-on in other parts of the world - even other parts of Europe. So for all those of you who were wondering, here's a list of the top 100 Web 2.0 sites in Germany, complete with quick notes explaining what they do.

New research suggests that the presence of other people may enhance our movie-watching experiences. Over the course of the film, movie-watchers influence one another and gradually synchronize their emotional responses. This mutual mimicry also affects each participant's evaluation of the overall experience -- the more in sync we are with the people around us, the more we like the movie.

08 December 2007

Keelan says, “YouTube is increasingly a resource people consult for health information, including vaccination. Our study shows that a significant amount of immunization content on YouTube contradicts the best scientific evidence at large. From a public health perspective, this is very concerning.”

Clearly, we need to start seeing YouTube for what it is: a communications medium that governments should be employing routinely to get messages - about health, for example - across:

According to Wilson, “The findings also indicate that public health officials should consider how to effectively communicate their viewpoints through Internet video portals.”

With one important caveat: that governments must learn to use YouTube on its own terms - not trying to impose traditional formats, which will simply be ignored. That's going to be hard...

One of the persistent myths about free software is that successes like Linux are one-offs, and that the open source methodology can't be applied easily to tackle complex software challenges. In the early days of free software, the relative paucity of end-user apps was trotted out as proof of this idea - The GIMP stood in splendid isolation back then.

Things have changed, though; today, there is a wide range of high-quality open source apps, and the list keeps on growing. Here's the latest, and it's a biggie:

Until recently, a student solving a calculus problem, a physicist modeling a galaxy or a mathematician studying a complex equation had to use powerful computer programs that cost hundreds or thousands of dollars. But an open-source tool based at the University of Washington won first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.

The tool, called Sage, faced initial skepticism from the mathematics and education communities.

"I've had a surprisingly large number of people tell me that something like Sage couldn't be done -- that it just wasn't possible," said William Stein, associate professor of mathematics and lead developer of the tool. "I'm hearing that less now."

Open-source software, which distributes programs and all their underlying code for free, is increasingly used in everyday applications. Firefox, Linux and Open Office are well-known examples.

But until recently, nobody had done the same for the everyday tools used in mathematics. Over the past three years, more than a hundred mathematicians from around the world have worked with Stein to build a user-friendly tool that combines powerful number-crunching with new features, such as collaborative online worksheets.

"A lot of people said: 'Wow, I've been waiting forever for something like this,'" Stein said. "People are excited about it."

Sage can take the place of commercial software commonly used in mathematics education, in large government laboratories and in math-intensive research. The program can do anything from mapping a 12-dimensional object to calculating rainfall patterns under global warming.
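Sage is built as a Python-based front end over dozens of existing open source mathematics libraries (SymPy among them). As a rough illustration of the kind of symbolic work described above, here is a minimal sketch using SymPy directly; this is not Sage's own interface, just a taste of the same style of computation that users once needed commercial packages for:

```python
# Symbolic calculus with SymPy, one of the open source libraries
# that Sage bundles: the computer manipulates formulas, not just
# floating-point numbers.
from sympy import symbols, diff, integrate, sin, exp

x = symbols("x")

# Symbolic differentiation: d/dx [x^2 * sin(x)]
# mathematical result: x^2*cos(x) + 2*x*sin(x)
derivative = diff(x**2 * sin(x), x)

# Symbolic integration: ∫ x * e^(-x) dx
# mathematical result: -(x + 1)*e^(-x)
antiderivative = integrate(x * exp(-x), x)

print(derivative)
print(antiderivative)
```

Because the library is open source, anyone can inspect exactly how these results are derived, which bears directly on Stein's point below about checkable calculations.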

The benefits of using free software for maths extend far beyond the usual ones:

The frustrations weren't only financial. Commercial programs don't always reveal how the calculations are performed. This means that other mathematicians can't scrutinize the code to see how a computer-based calculation arrived at a result.

"Not being able to check the code of a computer-based calculation is like not publishing proofs for a mathematical theorem," Stein said. "It's ludicrous."

About Me

I have been a technology journalist and consultant for 30 years, covering
the Internet since March 1994, and the free software world since 1995.

One early feature I wrote was for Wired in 1997:
The Greatest OS that (N)ever Was.
My most recent books are Rebel Code: Linux and the Open Source Revolution, and Digital Code of Life: How Bioinformatics is Revolutionizing Science, Medicine and Business.