The main difference between the free software and open source software concepts is the motivation of the people identifying with each (which is why I tend to use the term FLOSS when I do not want to be specific about either group). From time to time the question arises whether some software is open source or rather free software. For example, Linus Torvalds said that the Linux kernel:

… has never been an FSF project, and in fact has never even been a “Free Software” project.

Whether the kernel is or is not a Free Software project is arguable, because it depends on how the developers feel about it or what their intentions are. But what can we say about the set of software grouped under the label of “Free Software” and the set of software grouped under the label of “Open Source Software”? This is far more objective, although not absolutely so.

We can certainly say that Free Software is software available under a license that fits the Free Software Definition, and that Open Source software is likewise the software available under a license that fits the Open Source Definition. By looking at both definitions, it seems hard to find a license that would fit one definition and not the other, making both sets roughly equivalent.

But let us also compare the list of software licenses that the FSF blesses as Free Software licenses with the list of licenses blessed by the OSI as Open Source licenses. The FSF also publishes a list of licenses that it considers non-Free Software licenses, whereas the OSI only publishes accepted licenses.
It turns out that there are around 30 licenses (depending on how you count the different versions of the same license) that are accepted by both groups, 32 licenses accepted by the FSF that are not mentioned by the OSI, and 22 licenses accepted by the OSI that are not mentioned by the FSF. Then we have some special cases:

Python: the FSF separates the Python license versions into three groups, of which two are compatible and one is incompatible with the GPL, though all are classified as Free Software.

Perl: the “Perl License” is not evaluated by the OSI. However, since this “disjunctive license” specifies that the licensee may choose either the Artistic License or the GPL, and both are accepted by the OSI, we can deduce that the Perl license should without a doubt be considered Open Source.

Eiffel Forum v1: does not appear as explicitly accepted by the FSF; the document only states that this license is incompatible with the GPL. The conclusion is that it is considered a GPL-incompatible free software license, while version 2 is a GPL-compatible free software license.

Academic Free License: the FSF accepts versions 1.1 and 2.1, while the OSI only mentions acceptance of version 3.0.

And finally we have three licenses that are accepted by the Open Source Initiative and explicitly rejected by the Free Software Foundation: the “Artistic License”, the “NASA Open Source Agreement” and the “Reciprocal Public License”. However, in all three cases the reasons for rejecting the licenses are not based on failing to fulfill some requirement of the Free Software Definition.

Artistic License: the FSF does not accept the original Artistic License because of its vagueness, but accepts a modified version. However, according to the FSF, “the problems are matters of wording, not substance”.

NASA Open Source Agreement: the FSF objects to the requirement that the changes made to the software be the contributor’s “original creation”. However, the Free Software Definition does not require the right to include any third-party code. There is no substantial difference between freedom 3 of the FSD, which requires “the freedom to improve the program, and release your improvements to the public”, and the third condition of the OSD, which states that “the license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software”.

Reciprocal Public License: there are two criticisms from the FSF:
1. the licensee has to notify the licensor when she publishes a modified version of the code. This is not in contradiction with the Free Software Definition, so there is no reason to deny this license. Other much stronger restrictions (such as copyleft or the prohibition of modifying files in the LaTeX license) are accepted by the FSF.
2. there is a limit on how much anyone can charge for the source code. However, this does not limit the amount of money you may charge for the distribution of both the binary and the source code, or for the binary on its own. And it does not contradict any of the freedoms in the Free Software Definition either.

As a conclusion, we can say that the set of software that fits the Free Software Definition and the set of software that fits the Open Source Definition are essentially equivalent. Some exceptions may exist depending on the interpretation, but there would be no point in assuming that one is a subset of the other.
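The license-list comparison above is just set arithmetic. A minimal sketch in Python, using made-up placeholder names rather than the real FSF and OSI lists:

```python
# Placeholder license names for illustration; these are NOT the real FSF/OSI lists.
fsf_approved = {"GPLv2", "LGPLv2.1", "X11", "Sleepycat", "Eiffel Forum v2"}
osi_approved = {"GPLv2", "LGPLv2.1", "X11", "AFL 3.0", "RPL"}

both = fsf_approved & osi_approved       # accepted by both groups
fsf_only = fsf_approved - osi_approved   # FSF-approved, not mentioned by the OSI
osi_only = osi_approved - fsf_approved   # OSI-approved, not mentioned by the FSF

print(len(both), len(fsf_only), len(osi_only))  # → 3 2 2
```

With the actual lists plugged in, the same three expressions would yield the roughly 30/32/22 split mentioned above.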

I am sitting in a seminar dealing with the promotion of access to knowledge, mainly through the use of open licenses that provide much more access than the minimum required by law. It has become clearer than ever to me that the term “intellectual property” is very misleading.
Several arguments hold that the term is basically propaganda, just as the term “piracy” is when applied to infringement of copyright and similar laws. But seeing how misleading it is should be reason enough to change it.
Several critics of the term suggest no alternative name for “Intellectual Property”, arguing that there is no reason to group the several unconnected legal constructs (copyrights, patents, trademarks, industrial secrets) that form it. I disagree. If you want to refer to all those legal constructs with a single name, you should just figure out what they have in common. It turns out all of them share something that is also very characteristic: they provide exclusion rights. One single person gets the right to exclude everybody else from doing or using something under certain circumstances.
Sure, talking about limitations on intellectual property does not sound the same as talking about limitations on exclusion rights. In the first case, we would naturally feel compelled to reject any limitation on a protection and instead increase the protection as far as we can, right? If there is something in need of protection, why would we take it away? In the second case, we would feel inclined to limit the exclusion rights, because excluding everybody from using or doing something requires strong and persuasive arguments. This is also the reason why many people related to WIPO have forgotten about the original goals justifying these legal constructs, going from incentives for creation to a blind “protection”, no matter whether the consequences align with the original goals.
For my part, I will avoid using the term “Intellectual Property” whenever possible, substituting it with the term “Exclusion Rights”.

Ed Felten asks interesting questions regarding the Spamhaus.org case. You can read his post for details, but basically, Spamhaus.org has ignored a ruling ordering it to pay a plaintiff for damages and to publish a declaration stating that it erroneously included the plaintiff in its “spammers” lists. It is possible that a court will rule that Tucows (as the registrar managing the domain registration) and/or ICANN has to withdraw the name assignment of Spamhaus.org, so that the domain ceases to exist. Felten’s questions are:

Is it appropriate under U.S. law for the judge to do this?

If the spamhaus.org registration is revoked, how will Spamhaus and its users respond?

If U.S. judges can revoke domain name registrations, what are the international implications?

As for question 1, I reserve my opinion. The important fact is that, however small, there is a chance that a judge will eventually rule that some domain name has to disappear. So let us just assume that it might happen and continue with the other two. My bet is: there would be a proliferation in the usage of alternative root servers for the Domain Name System (DNS), probably led by Spamhaus and its users. ICANN would lose its position, which is fragile at best. This would have a high cost for all Internet users (see RFC 2826 for details), as it would make the ‘Net a less reliable place.

You might think of it as a social or political revolution, but in this case, as in many others, there would be no winners; we would all lose to some extent. So, while accepting a loss just to make someone else lose more is not a recommendable practice, my intuition says that it’s precisely what would happen in this case. I assume that is what Felten is thinking when he writes:

The result wouldn’t be pretty. As I’ve written before, ICANN is far from perfect but the alternatives could be a lot worse.

OpenBusiness runs a very interesting interview with Last.FM on their project, website or service, whatever you may call it. This is an interesting initiative that offers what we could call an “open service”, although we still do not have a sound definition of what an open service should entail; both Tim O’Reilly and Tim Bray have made interesting points. Anthony Coates follows up by concluding:

Data matters. It shouldn’t be an afterthought. It will outlive your applications.

The differences between FLOSS, Open Standards, Open Services and Open Infrastructure are very interesting, since each of these has its particularities. You would not want to make an open standard free for everyone to change at will as many times as they want, since one of the values of a standard is that software implementing it can interoperate, and implementers should not be chasing a moving target. On the other hand, anyone should be able to participate in the definition of a standard, but without the design-by-committee effect of creating a bloated and far from ideal result by including everyone’s opinion. Bob Sutor has given it a thought, as has Bruce Perens, who has even come up with a proposed definition of the open standards concept, on which I have commented previously in Spanish.

Similar differences apply to both Open Services and Open Infrastructure. On the latter, I personally think that FON comes close to a model of what this concept should look like, specifically when considering the Linus way of using it. The basis here is: I give you mine so you let me use yours. This has been the basis of several widely used initiatives, ranging from subscription libraries to public goods and infrastructure managed by governments. So why should we not apply these principles to our IT infrastructures, with the benefit that this does not depend on a government making decisions for all of a country’s citizens, and is not bound to any geographic region? This topic has been addressed by Jon Udell and Tim O’Reilly, and we can look at projects like BOINC that take a different path than FON.
To conclude: FLOSS, Open Standards, Open Services and Open Infrastructure do have some relations but also meaningful differences. Their use and development in the future is something to keep an eye on (and actively work on).

Update: there is an interesting discussion about what a specific kind of “open service” (they talk about web 2.0 sites that enable people to share content) should look like, triggered by Lessig’s post “The Ethics of Web 2.0” and a nice followup by Tim O’Reilly, “Real Sharing vs. Fake Sharing“.

This article in Wired by Bruce Schneier gives another hint at what some people have been arguing for a long time: liability for software vendors. It describes how fast an organization reacts when there is money at risk. In spite of the promise to focus on security after several worm outbreaks with huge financial consequences for customers, security has not been one of the most outstanding features of Microsoft’s products lately. In Bruce’s words:

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

We have become used to thinking of DRM-related laws in terms of one-sided issues that consider only the publishers and completely ignore the general public as well as the potential authors of new material. Examples are the EUCD, the DMCA and other implementations of the WIPO Performances and Phonograms Treaty.

Reading Ed Felten’s articles on PRM as the next step, about how the reasons put forward to justify DRM-related laws have shifted, I started thinking about what such a law should look like. So, here I present some thoughts on what a law regarding DRM that really considers the general public (society) and potential new authors should look like.

Basically, people are used to doing things in a certain way. The problem DRM poses is that it has the potential to force a change in the way people can do things, without ever telling anybody about it until it is too late. This is why “trusted computing” has been rephrased as “treacherous computing”: it effectively deceives the general public into believing that what it is told (better quality of some “content”) covers the only consequences of the new technologies. But the most important characteristics are kept quiet and do not surface until the users have already made their choices, the market has accepted some technology under false premises and there is no turning back.

In order to avoid this treacherous method of forcing certain technologies on unsuspecting users, the users should have all of the information before choosing, which is a very basic requirement by the way. The steps to enforce this could be the following. In order to distribute a device that enforces DRM, it would be necessary for the vendor to:

inform exactly and in detail how the DRM solution will work and what the consequences are for end users. (specification)

provide ways to verify reliably that the devices effectively work exactly as described in the specification. The best way to do so would be to make the source code available and provide a way to compile the source into the binary that is effectively distributed along with the devices.

allow the user to keep the old specification or override (circumvent) the DRM when a change is made to the original specification.
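The second requirement listed above (verifying that the distributed binary really corresponds to the published source) boils down to a reproducible-build check: recompile the published source and compare the result byte for byte against the shipped binary. A minimal sketch of the comparison step, with hypothetical file paths:

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_is_verifiable(rebuilt_binary, shipped_binary):
    """True when compiling the published source yields exactly the
    binary that ships on the device (identical digests)."""
    return sha256_of(rebuilt_binary) == sha256_of(shipped_binary)

# Hypothetical paths: rebuild the published source, then compare.
# build_is_verifiable("./my_rebuild/firmware.bin", "./shipped/firmware.bin")
```

In practice this also requires the vendor to document the exact toolchain and build flags, since any difference there would change the digest even when the source is identical.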

Drivers for graphics cards have been a pain in the ass for open source communities. Since the market is still evolving very fast, the vendors are reluctant to give any information to their competitors. The problem is that they consider open source drivers to be one way of giving away information. I will not comment on that one right here, just mention it as a fact: vendors have been very reluctant to deliver open source drivers, or even information to others who would be able to create those drivers. Hence, the end user in most cases has the choice of using a less featureful, lower-performance but open source driver, or using a binary-only driver provided by the vendor.

The open source driver generally has lower performance because of the lack of information, which makes it difficult for the programmers to make use of the hardware capabilities. They need to go through a long and difficult process of reverse engineering in order to guess how the hardware works.

However, the binary drivers have had their own troubles. These problems have practical, moral and legal roots. The first issue is that a binary driver has to be maintained separately from the kernel. Any change to the internal kernel structures can render the driver worthless, because it cannot be used in the next kernel release until the vendor releases a new version. Also, the driver needs some “glue” that has to be created for each specific version of a kernel, even when there are no changes to any internal kernel structures. As the programmers who create these binary drivers are not Linux kernel experts and work separated from that community, several problems caused by incorrect programming of the binary modules have caused more than a headache to this community, to the point that binary modules now “taint” the kernel and are no longer supported by the community (nor by vendors). This is the right thing to do in my opinion: since the community cannot fix the binary module, it refuses to look at any problem with kernels running that code. Since the Linux kernel architecture is monolithic, a programming error in the driver can cause problems in supposedly unrelated parts (file system corruption, for example). Vendors have improved their code and the drivers have gotten better, but they are nonetheless a potential problem.
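The “taint” mentioned above is simply a bitmask the kernel exposes in /proc/sys/kernel/tainted; bit 0 is set as soon as a proprietary (binary-only) module is loaded. A small sketch of how one could decode it (only the first two bits are shown; the kernel defines several more):

```python
# Taint reasons by bit position; the kernel defines additional bits not listed here.
TAINT_FLAGS = {
    0: "proprietary (binary-only) module loaded",
    1: "module was force-loaded",
}

def decode_taint(value):
    """Return the human-readable reasons encoded in a taint bitmask."""
    return [reason for bit, reason in TAINT_FLAGS.items() if value & (1 << bit)]

# On a real system one would read the current value with:
#   value = int(open("/proc/sys/kernel/tainted").read())
print(decode_taint(1))  # → ['proprietary (binary-only) module loaded']
```

A value of 0 means the kernel is untainted, which is exactly the state community developers require before looking at a bug report.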

Second, the moral issue: people wanting to run a completely free operating system on their boxes will not want to be forced to install a binary-only driver in order to make use of their hardware. Not much to discuss there.

Third, the legal issue: the Linux kernel is released under the GPLv2 license, which does not allow the creation of binary-only derivative works. So the question is: are binary-only modules derivative works? The answer seems to be that if the modules are created to be used inside the Linux kernel, they make use of the internal structures and thus are effectively derivative works. There are some cases where the answer is less clear, but the general idea is that binary-only drivers should not be allowed.
So it comes as good news that at least one vendor (Intel) is announcing full support for open source drivers for its cards. And Intel is the biggest player in that market.

The 2006 OSS Watch Survey is available (you may also take a look at the executive summary). This survey studies the usage of Open Source Software (FLOSS) in Higher Education (HE) and Further Education (FE) institutions in the UK. The previous survey was from 2003 and some improvements have been made. This time, 23 institutions answered the questions.

The study not only looks at the usage but also at the reasons behind it, contributions to the OSS community and other aspects. Contrary to the 2003 version, this time vendor lock-in was said to be an issue among the institutions. The study is definitely worth a look.
One of the results states that 56% of the FE institutions use Moodle. This is consistent with the feeling you get about the issue here in Chile, but I would not be surprised if the usage percentage were higher here (mostly because of the lack of legacy systems and because licensing costs tend to have a greater impact).

Thiru Balasubramaniam writes an interesting entry about Public Domain and Open Standards. The position of Chile is particularly interesting, since our delegate gave an impassioned defence of why WIPO should engage in further examination of proposals to “consider the protection of the public domain within WIPO’s normative processes” and to draw “up proposals and models for the protection and identification of, and access to, the contents of the public domain”. However, the laws governing the country do not consider such an agenda. The following are the main points:

There are no exceptions related to disability of certain users

There is no right to private copies

There is no specific exception for libraries

Exceptions for educational development are excessively restrictive

Right of illustration has been derogated

Right of quotation has been excessively restricted through a regulation

The details of this schizophrenia are explained in a public letter (a Spanish version is also available). It is to be hoped that the face shown to the outside world will have an impact on how the law regulates life inside the country. It is a step in the right direction to have this new discourse, so at least somebody has the right intentions. Let us hope that that somebody will prevail. At least that somebody has a lot of support from civil society.

So, the second draft of the GPLv3 is out. Changes include a rephrasing of the anti-DRM aspects of the license. In fact, the term DRM is not there anymore. As Richard Stallman made clear in his presentation in Barcelona, the purpose of these clauses is to avoid the “tivoisation” of programs. That is, even if the source code of the GPL software is available, you cannot change some bit, install the result on the same hardware it was distributed with, and trust it to work. This is because you need a special key to do so, or the hardware will refuse to run the modified code.

If we assume as a fact that software enforcing DRM will exist in the future, I would rather have the code available, and be able to reproduce the compilation exactly, so as to generate the same binary that has been signed as “trusted”. That way, at least I would have enough information to choose whether I could trust the system or not, and this would keep abuses on the part of publishers to a minimum. This does not mean that the code should be under the GPL, though. So up to this point there is really no problem.

There are some issues, though, that I’m not so sure about. One phrase in particular states:

“However, the fact that a key is generated based on the object code of the work or is present in hardware that limits its use does not alter the requirement to include it in the Corresponding Source.”

I wonder what this implies. Let’s take The GIMP as an example, as it is a useful program that does not implement any DRM schemes and works on the Microsoft Windows platform. Suppose that The GIMP is available under the GPLv3 and remains compatible with the MS-Windows platform. What if the next version of MS-Windows implements a DRM scheme in which an application has to be signed before it may access a certain file? This would imply a key generated based on the object code, and thus somebody should make the key available. The question is: who? Apparently, the people distributing (conveying, according to the new wording in GPLv3 draft 2) the code. So, as these people have nothing to do with the release of a new MS-Windows version, how on earth are they supposed to distribute the key? Until the release of that new version, everything worked smoothly. Now, because of some action by a third party, The GIMP can no longer be redistributed, because the Corresponding Source cannot be made available without the collaboration of somebody not related at all (and probably interested in preventing the availability of The GIMP)? And by way of the “Freedom or death” clause, if one cannot guarantee every right stated in the GPL, the software cannot be conveyed anymore, to anybody. That just doesn’t make sense to me. It seems to me like the perfect denial-of-service activity in the software development field: create a new feature on a system, and magically some competing software becomes illegal.

If, on the other hand, the case mentioned above resolves to The GIMP being available to anybody, including those who use that new DRM-implementing MS-Windows version, then the fight against DRM is lost, because The GIMP will then be subject to the DRM rules and the GPLv3 would have nothing to say against it. In the case of the TiVo, it would not be possible to distribute something which is GPLv3’d while prohibiting the system from running it, but only because the system is conveyed as a unit. If the hardware and the DRM-enabling components (hardware plus software) are distributed by different parties, either we have the case in the previous paragraph, or the GPLv3 would not avoid tivoization. Both outcomes are unacceptable for a free software license that intends to avoid tivoization.

I sympathize with the opponents of DRM technologies. The motivation behind these technologies is mainly to extend the established (and in many ways already too wide) rights by adding a technological measure. It doesn’t seem right to use technology to narrow even more the usage you can make of information, and the result is unfair, benefiting the powerful over the weak and/or scattered ones. However, it also doesn’t seem right to impose an ethical view through the licensing of software. It would be OK to keep the freedom to use a certain piece of software (avoiding tivoization, as Stallman puts it) and thus avoid DRM for that case. But are we certain we aren’t shooting ourselves in the foot?

The problem is as follows: apparently to make the system faster for visitors (requiring only one click), many banks make their login form available on a non-secured page. When everything works as intended, the form directs the request to an SSL-enabled page, so the transmission is effectively encrypted before your browser begins to send any data. But what happens when you get to a web page that looks exactly like the original but does not direct you to that SSL-enabled page? Your data goes unencrypted, probably right into the hands of someone you should not trust. You might notice this if you pay attention, but by then it would probably be too late, and there are many ways to make it look as if you really did go to the bank’s site after giving away your login credentials to some unknown server on the internet.
So, what are the odds of landing on a fake bank page? Not very high, unless you get some phishing e-mail or somebody tampers with the DNS resolver you use, both very simple and common activities nowadays.
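A sketch of the check a cautious user (or an auditing script) could perform: parse the page and flag any form whose action does not resolve to an https URL. The bank URL below is made up, and note that this only detects the honest-but-careless case; a man-in-the-middle serving the page can rewrite the form action anyway.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormActionFinder(HTMLParser):
    """Collect the action attribute of every <form> tag on a page."""
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.actions.append(dict(attrs).get("action", ""))

def insecure_form_actions(page_url, html_text):
    """Return the form actions that would submit data without SSL."""
    finder = FormActionFinder()
    finder.feed(html_text)
    return [action for action in finder.actions
            if urlparse(urljoin(page_url, action)).scheme != "https"]

# Hypothetical page: the form posts back over plain http, so it gets flagged.
page = '<form action="/login" method="post"><input name="pw"></form>'
print(insecure_form_actions("http://www.bank.example/", page))  # → ['/login']
```

A form whose action resolves to an https URL would pass the check, which is exactly the "works as intended" case described above.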

Many banks are using marketing strategies to show how secure they are by giving (actually, selling) tokens that enable two-factor authentication. This is good for preventing your fixed password from being captured by some keylogger (either a software or a hardware keylogger). But it does not protect you from a man-in-the-middle attack like the one published in the Washington Post.
Ironically, to get to the login page of my bank (Banco de Chile), you have to make an extra click. I can understand that: since not everybody who connects to the main page of the bank needs authentication or encryption, the main page is not secured. Probably most customers would not care about making one extra click (or storing a bookmark so they won’t have to) in order to get to the secure login page. But in my case, even when I click on the link to get to a second page, that second page is also not secured by SSL, even though the most common use for that page is logging into the customer’s bank account. There is not even the alternative for a security-aware customer to go to a maybe slower but secure login page. The only way to log in securely is to first enter a valid ID and a fake password, verify the authenticity of the server, go back to the non-secure page and assume that the second time you will also connect to the right server (which is not necessarily true). Or you may have come across some dark and unpublished way to access the server and obtain the login form.

The Consumer Electronics Association (CEA) published an interesting ad in a Capitol Hill newspaper this week. It contains a few quotes of arguments that have been repeated over time to oppose different technologies, and that are basically the same ones we are hearing these days:

“I foresee a marked deterioration in American music…and a host of other injuries to music in its artistic manifestations, by virtue—or rather by vice—of the multiplication of the various music-reproducing machines…” -John Philip Sousa on the Player Piano (1906)

“The public will not buy songs that it can hear almost at will by a brief manipulation of the radio dials.” -Record Label Executive on FM Radio (1925)

“But now we are faced with a new and very troubling assault on our fiscal security, on our very economic life and we are facing it from a thing called the videocassette recorder.” -MPAA on the VCR (1982)

“When the manufacturers hand the public a license to record at home…not only will the songwriter tie a noose around his neck, not only will there be no more records to tape [but] the innocent public will be made an accessory to the destruction of four industries.” -ASCAP on the Cassette Tape (1982)

Should I be glad that I am not the only one who cares about the reckless attitude of banks regarding the usage of SSL? It would be preferable for the problem to be solved (since the solution is pretty much straightforward). I first wrote about the situation of Chilean banks back in December 2003, and it hasn’t improved. Now I see that the same is happening in the USA, with more or less the same answers.

This comes as a surprise. In the last few years, the debate in Europe around the patentability of software had the European Commission arguing that it was necessary to legislate according to the current practice of the European Patent Office (EPO), in a “harmonisation of the status quo”. The EPO should not be granting patents on software; however, there are over 30,000 such patents already granted.

Finally the European Commission has ruled that the European Court of Justice (ECJ) can and should question the validity of patents that the EPO granted without following the rules set by the European Patent Convention. While this is not the final word on forbidding the granting of software patents in Europe, nor a solution to the problem, it is a first step in the right direction.
Read the press release of the FFII for more information.

According to Yahoo! News (via BoingBoing), Sony is being sued for treating income from internet downloads of songs as normal record sales rather than song licensing. This means that artists receive 4.25 cents per song instead of 30 cents. However, when a user downloads a song, the indications are the opposite: you are buying a license, and thus not getting the same rights as in a normal record sale.

Sounds a lot like the Mexican saying “Jalisco nunca pierde, y cuando pierde, arrebata” (“Jalisco never loses, and when it loses, it seizes”). Some decision has to be made on which way it’s gonna be. Probably the contracts will be “corrected” to give 4.25 cents to artists on song licensing, or else Jalisco would lose.

Some time ago I wrote about the beginning of the “Pirate Party” in Sweden (the article is in Spanish). In that entry I really thought the whole story was more of a hoax than a real intention of creating a political party, mostly because of the irony inherent in the whole thing. The objectives were:

abolition of all intellectual property rights

Sweden must secede from international IP treaties

abolition of laws that forbid or limit distribution of information

the right to privacy must be defined in the constitution and be protected more strongly.

But it seems that I was wrong. Or perhaps the huge response to the announcement moved the people behind this idea to really go for it and try to get 4% in the upcoming elections. Whatever the case, the objectives are now much more conservative, keeping only the “Right to Privacy” and a much more moderate stance on copyrights and patents, aiming at returning the “protections” to a fair and balanced level. Details in English are at http://www.piratpartiet.se/English.aspx

I have been reading Extreme Democracy, a book edited by Jon Lebkowsky and Mitch Ratcliffe. It is a collection of papers on how the usage of information technology has influenced, and will influence, the way the world is governed. Highly recommended reading, and I will probably have more than one comment on this 371-page work.

One of the papers included in the book is “6.4 Billion Points of Light Lighting the Tapers of Democracy“, by Roger Wood. The points Roger makes are very clear and I concur with most of them. However, there is one point regarding the equivalence of money and speech where I disagree strongly with his arguments. Roger writes (emphasis added):

Buckley v. Valeo (1976) gives us the theory that money is equivalent to speech. The issue is ripe for debate, if for no other reason than money is not equally available to all citizens in society, while we are all equally endowed with one mind. […]

Money is not speech, it merely creates (or pays for) a platform for an individual to speak.

Living in a developing country, I can see a clear relation between money and speech. Not in the way Roger means it, but rather that people lacking resources generally do not have the skills, means or opportunities to even develop their speech. So the problem is not only to make the (presumably existing) speech available for others to see, but to enable poor people to reflect upon their situation, have access to information, be able to process it and finally create their own speech.

Consider a child born into a low-income group in Chile. The child generally does not get much stimulation at home, since both parents probably have to work in order to feed the family, covering only the basic needs (which already is an improvement relative to other families). When the child goes to school, it is necessary to choose between a private (high-quality) and a state (low-quality) school. When money is scarce, the child will drop out of school and work to support the family.

In these circumstances, having money guarantees access to basic needs (food, housing, health, living in a stimulating rather than a depressing environment) that are taken for granted in many countries. A person who has not covered their basic needs will have no opportunity for reflection on their situation, hence no possibility of arguing or making a point. Day-to-day survival is all that matters in their case, which does not help them avoid the same situation for their own children. When the title cites 6.4 billion people (the whole earth’s population), we should consider that not all of them have access to the same basic services, so an argument that makes sense in an industrialized country cannot be blindly applied to the whole world. Zuckerman makes that point in his article “Making Room for the Third World in the Second Superpower“, also included in the book Extreme Democracy.