In the process of preparing the presentation, I returned to my thesis and delved into the material in a way that I hadn’t done since I wrote it. It was interesting to see how my own ideas have developed in light of what I have learned and worked with since finishing two years ago. So I’ve continued working on the presentation even after the conference, annotating and adding to it, making a more visual, more up-to-date and – hopefully – more approachable version of my thesis than the raw PDF of the whole thing that I’ve shown so far:

There was an interesting attempt at a discussion on the Anthrodesign mailing list recently as to what online ethnography actually entails. But the discussion never really seemed to get off the ground, and had effectively died by the time I posted my comment. So I thought I’d put it up here with a few adjustments:

Online ethnography is a very interesting research practice. In part because you are completely dependent on what your informants are willing to show you. You can only learn as much as they put online, and you have no way of verifying that what they say is true. As the classic saying goes, “On the Internet, nobody knows you’re a dog.”

When you initiate ethnographic research online, you are acutely aware of this fact. The absence of physical context and cues makes it difficult to interpret the actions and motivations of people. The short film “The Parlor” gives a great impression of these issues.

One ethnography that does well to explore these issues of representation and anonymity is Annette Markham’s “Life Online“. Markham’s central point is that the net-savvy people she interviews do not see the Internet as a separate place that they enter when they go online. Rather, “going online means turning on the computer, just as one would pick up the phone.”

Online and in-person are parts of the same domain of social experience. I find that a lot of talk about “virtual ethnography” misses this and instead attempts to explore Internet relationships and behaviour as if they were completely different from and unrelated to their informants’ in-person lives.

What I found in my fieldwork is that doing online ethnography is little different from other flavours of ethnography: you have to examine more than a single aspect of your informants’ lives in order to appreciate their practices and motivations online. This is equally true of everybody else online: Social ties are immensely strengthened by in-person meetings. As Gabriella Coleman has argued, online sociality augments offline sociality, rather than the other way around. In a similar vein, Brigitte Jordan labels this mixing of physical and digital fieldwork “hybrid ethnography” and argues that “the blurring of boundaries and the fusion of the real and the virtual in hybrid settings may require rethinking conventional ethnographic methods in the future.”

I don’t know exactly how ethnographic methods may require rethinking; I can only point to a description of how I combined different research methods, online and in person, in my fieldwork. If you’re curious, you can read my reflections on being in a digital field and my experiences there in my field report [pdf], which I’ve just uploaded for the first time (shame on me for putting it off for so long).

Having said that, I don’t think that online ethnography on its own is without merit. There is plenty of potential to learn from people online from behind the computer screen. But there is one other central issue here: It is incredibly easy to just observe others and not participate online. They can’t see you so there’s no social awkwardness associated with lurking. Not only is it unethical to some extent (just because it’s public doesn’t mean you shouldn’t let people be aware of your presence), but it is also a bad way to do research.

Actively sharing yourself and participating on equal terms is the cornerstone of participant observation, giving you the best possible opportunity to experience what your informants are experiencing. And it is the central way to build trust with people online. Actions do speak louder than words online. Much louder. And it is perfectly possible to do online participant observation. A great example of this is Michael Wesch‘s fascinating study of the YouTube community. Both Wesch and his students shared themselves through videos of their own in a way that garnered both respect and interest in their project. Video is a much more personal and credible way to interact than text online, and it is well worth the time to check out Wesch’s presentation of their study (available on YouTube, of course).

As Kelty argues, we’ve been drowning in explanations of why Free Software has come about, while starving for explanations of how it works. Thus, Kelty’s focus is on the actual practices of Free Software and the cultural significance of these practices in relation to other aspects of our lives.

Kelty’s main argument is that Free Software communities are a recursive public. He defines a recursive public as a public “whose existence (which consists solely in address through discourse) is possible only through discursive and technical reference to the means of creating this public.”

It is recursive in that it contains not only a discourse about technology, but that this discourse is made possible through and with the technology discussed. And that this technology consists of many recursively dependent layers of technical infrastructure: The entire free software stack, operating systems, Internet protocols. As Kelty concludes:

The depth of recursion is determined by the openness necessary for the project itself.

This is a brilliant observation, and I agree that the notion of a recursive public goes far to explain how the everyday practices and the dogmatic concern for software freedom are so closely intertwined in this public.

The book is divided into three parts, each part using a different methodological perspective to examine the cultural significance of Free Software.

The first part is based on Kelty’s ethnographic fieldwork among geeks and their shared interest in the Internet. I found this to be the weakest part of the book. His ethnography does not cover the actual practices of Free Software hackers, but rather the common traits among Internet geeks, which certainly supports his argument (that they’re all part of a shared recursive public) but doesn’t give a lot of depth to understanding their motives.

The second part is based on archival research of the many available sources within the various open source communities. In my opinion, this is the best part of the book, with both deep and thorough analyses of the actual practices within free software communities, as well as a vivid telling of the pivotal stories of “figuring out” the practices of Free Software.

The final part is based on Kelty’s own participation (anthropologist as collaborator) in two modulations of the practices of Free Software in other fields, the Duke University Connexions project, and the Creative Commons. These are stories of his own work “figuring out” how to adapt Free Software practices in other realms. These practices are still in the process of being developed, experimented with, and re-shaped – like all Free Software practices. And this part gives a good idea of what it feels like to be in the middle of such a process, though it offers few answers.

Being a completely biased reviewer, I’ll stop pretending to do a proper review now, and instead focus on how Kelty’s analysis fits with my own study on the Ubuntu Linux community. Kelty argues that there are five core practices, which define the recursive public of Free Software. Kelty traces the histories of “figuring out” these practices very well, and I’ll go through each in turn:

Fomenting Movements
This is the fuzziest of Kelty’s five core practices. I understand it as placing the software developed within a greater narrative that offers a sense of purpose and direction within the community – “fomenting a movement” as it were. Kelty has this delicious notion of “usable pasts” – the narratives that hackers build to make sense of these acts of “figuring out” after the fact.

In my research, I found it very difficult to separate these usable pasts from the actual history of the Free Software movement, and my thesis chapter on the cultural history of Ubuntu bears witness to that. So I am very happy to see that Chris Kelty has gone through the monumental task of examining these stories in detail. I find this detective work in the archives to be among the most important contributions of the book.

Sharing Source Code
A basic premise of collaboration is shared and open access to the work done – the source code itself. The crux of the matter is giving access to the software that actually works. Kelty tells the story of Netscape’s failure following its going open source with a telling quote from project lead Jamie Zawinski:

We never distributed the source code to a working web browser, more importantly, to the web browser that people were actually using.

People could contribute, but they couldn’t see the immediate result of their contribution in the browser that they used. The closer the shared source code is tied to the everyday computing practices of the developers, the better. As Ken Thompson describes in his reflections on UNIX development at AT&T:

The first thing to realize is that the outside world ran on releases of UNIX (V4, V5, V6, V7) but we did not. Our view was a continuum. V5 was simply what we had at some point in time and was probably put out of date simply by the activity required to put it in shape to export.

They were continually developing the system for their own use, trying out new programs on the system as they went along. Back then, they distributed their work through diff tapes. Now, the Internet allows for that continuum to be shared by all developers involved with the diffs being easily downloaded and installed from online repositories.
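That workflow can be sketched in miniature: a change is expressed as a diff, and anyone holding the old version plus the diff can reconstruct the new one. Here is a small illustration – the file contents are hypothetical, and I’m using Python’s standard-library difflib in place of the actual Unix diff and patch tools:

```python
import difflib

# Two versions of a (hypothetical) source file, as lists of lines.
old = ["int main() {\n",
       "    return 0;\n",
       "}\n"]
new = ["int main() {\n",
       "    puts(\"hello\");\n",
       "    return 0;\n",
       "}\n"]

# The diff records only what changed – it is the unit of communication.
delta = list(difflib.ndiff(old, new))

# Anyone with the old version and the delta can rebuild the new version,
# which is essentially what a diff tape (or an online repository) carries.
rebuilt = list(difflib.restore(delta, 2))
assert rebuilt == new
```

The point is that the delta, not the whole system, is what travels between developers – exactly what the move from diff tapes to online repositories made continuous.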

As I point out in my thesis, this is exactly the case with the development of the Ubuntu system, which can be described as a sort of stigmergy where each change to the system is also a way of communicating activity and interest to the other developers.

Writing Licenses
Kelty argues that the way in which a given software license is written and framed shapes the contributions, collaboration and the structure of distribution of that software, and is thus a core practice of Free Software. Kelty illustrates this by telling the intriguing story of the initial “figuring out” of the GPL, and how Richard Stallman slowly codified his attitude towards sharing source code. This “figuring out” is not some platonic reflection of ethics. Rather, it is the codifying of everyday practice:

The hacker ethic does not descend from the heights of philosophy like the categorical imperative – hackers have no Kant, nor do they want one. Rather, as Manuel Delanda has suggested, the philosophy of Free Software is the fact of Free Software itself, its practices and its things. If there is a hacker ethic, it is Free Software itself, it is the recursive public itself, which is much more than a list of norms.

Again, almost too smartly, the hackers’ work of “figuring out” their practices refers back to the core of those practices – the software itself. But the main point, that the licenses shape the collaboration, is still very salient. As I witnessed in the Ubuntu community, when hackers chose a license for their own projects, it invariably reflected their own practices and preferred form of collaboration.

Coordinating Collaborations
The final core practice within Free Software is collaboration – the tying together of the open code directly with the software that people are actually using. Kelty writes:

Coordination in Free Software privileges adaptability over planning. This involves more than simply allowing any kind of modification; the structure of Free Software coordination actually gives precedence to a generalized openness to change, rather than to the following of shared plans, goals, or ideals dictated or controlled by a hierarchy of individuals.

I love this notion of “adaptability over planning”. It describes quite precisely something that I’ve been trying to describe in my work on Ubuntu. I used Levi-Strauss’ rather worn duality between the engineer and the bricoleur to describe part of this, but I find Kelty’s terms to better describe the practice of collaboration on a higher level:

Linux and Apache should be understood as the results of this kind of coordination: experiments with adaptability that have worked, to the surprise of many who have insisted that complexity requires planning and hierarchy. Goals and planning are the province of governance – the practice of goal-setting, orientation, and definition of control – but adaptability is the province of critique, and this is why Free Software is a recursive public: It stands outside power and offers a powerful criticism in the form of working alternatives.

As Kelty points out, the initial goal of these experiments wasn’t to offer up powerful criticism. Rather, the initial goal is just to learn and adapt software to their own needs:

What drove his [Torvalds’] progress was a commitment to fun and a largely inarticulate notion of what interested him and others, defined at the outset almost entirely against Minix.

What Linus Torvalds and his fellow hackers sought to do was not to produce “a powerful criticism” – those almost always come after the fact, in the form of usable pasts to rally around – rather, their goal was to build something that would work for their needs and allow them to have fun doing so.

I find that this corresponds very well to the conclusion of my thesis: that the driving goal of the Ubuntu hackers continues to be to build “a system that works for me” – a system that matches their personal practices with the computer. A system that is continually and cumulatively improved through the shared effort of the Ubuntu hackers, each adapting the default system to his or her own needs, extending and developing it as needed along the way. As Kelty writes in his conclusion:

The ability to see development of software as a spectrum implies more than just continuous work on a product; it means seeing the product itself as something fluid, built out of previous ideas and products and transforming, differentiating into new ones. Debugging, in this perspective is not separate from design. Both are part of a spectrum of changes and improvements whose goals and direction are governed by the users and the developers themselves, and the patterns of coordination they adopt. It is in the space between debugging and design that Free Software finds its niche.
(…)
Free software is an experimental system, a practice that changes with the results of new experiments. The privileging of adaptability makes it a peculiar kind of experiment, however, one not directed by goals, plans, or hierarchical control, but more like what John Dewey suggested throughout his work: the experimental praxis of science extended to the social organization of governance in the service of improving the conditions of freedom.

In this way, Free Software is a continuing praxis of “figuring out” – giving up an understanding of finality in order to continually adapt and redesign the system. It is this practice of figuring out that is the core of cultural significance of Free Software, as we continue to figure out how to apply these learnings to other aspects of life. Kelty does well to describe his own efforts “figuring out” in relation to non-software projects inspired by Free Software practices in the final part of the book. Though these reflections do not come across as entirely figured out yet.

All in all, it is a brilliant book. But given its Creative Commons license, it poses an interesting challenge to me: Remixing – or modulating, as Kelty calls it – the book with my own work (and that of others – like Biella) to create a new hybrid, less tied up in the academic prestige game.

(Maybe then I can change the title, because that continues to annoy me: Why is it called Two Bits? Apart from the obvious reference to computing in general, it doesn’t seem to have any other relevance particular to Free Software?)

I gave the online presentation this evening, and if I hadn’t been so darned busy lately with work and moving to a different commune (more on that in a separate blog post), I would have blogged about the presentation earlier so that you all could have had the opportunity to listen in.

Online in this case means via Skype teleconference and a community chat channel, which meant visualizing my audience while talking, and linking to images related to the presentation in the online chat (NB: they’re not sorted. It’s a mess. I’ll add my notes to the images soon to give some sense of sequence). It’s not the easiest of formats – a lot of energy and rapport gets lost in the ether. But I thought it worked out well. The participants were attentive and inquisitive while remaining constructive and supportive – a real treat.

Actually, I was surprised to get the invitation. But I’ve really relished the chance to revisit my thesis work. As I reread it, I realised that writing the thesis is only the beginning.

Since joining Socialsquare, I’ve been working with all sorts of aspects of communities online, and it’s been great to return to my work on the Ubuntu Community and see new ways to extend my old analyses and apply them in new contexts. But most of all, I’ve come back and found just what a good framing the Community of Practice is for understanding online communities, and I hope to learn a lot more about how to apply it from the CP Square community.

Recently, I’ve come across several blog posts using the metaphor of a good party to describe well-functioning online communities. Paraphrasing Matt Mullenweg, founder of the WordPress project, Service Untitled sums up the metaphor thus:

Parties that are successful bring the right number of people together. Those people end up having a good time and having fun. They will hopefully come for whatever their purpose is and achieve that sort of goal (having fun, learning, meeting people, etc.). When people achieve their particular goals and have fun, they leave feeling happy.

Good parties almost always have good hosts. It is their job to keep the size of the space appropriate for the number of guests, plan the party, get people involved, and keep things rolling. The host not only needs to be the organizer of many things, but sometimes the life of the party and cheerleader. Sometimes this is necessary, but not always.

One or two bad guests can ruin a party and make it miserable for almost everyone. A space that is too large or too small for the number of guests can make for a bad party. A party with a terrible host will likely be bad. Sometimes parties are really great or really bad for no apparent reason.

Now replace every use of the word party with community, every use of the word guest with member, and host with community leader.

Lee LeFever, who probably first made up the metaphor, lists all the ingredients that a good party and an active online community have in common. Unsurprisingly, his conclusion is simple:

In the end, if you’re truly interested in online communities, the most important ingredient is you. Without people who care about the community and are willing and excited about making it work, it will not succeed.

Writing a thesis is a difficult undertaking. Before I started writing mine, I hadn’t written any assignment longer than 30 pages (my Bachelor’s essay), and it was quite a step up from that to having to structure a huge complex of data that I’d gathered on my own, analyze it and bring it together in a coherent academic argument.

Luckily, I was well helped along the way by my supervisor, Morten, who really reeled me in from time to time when I was going off in weird and unsustainable directions, which happened fairly regularly. He gave me a lot of pointers, which I have summed up here for anybody about to write a major piece of academic argumentation. It may seem simple enough, but trust me: Once you get involved in it, you lose yourself to the writing, and it is difficult to avoid being overly esoteric with regards to your special niche of interest.

Be overly pedagogical! Keep a continuous meta-discourse going to explain to the reader why this bit of information is relevant in the grand scheme of things. It may seem obvious to you, since you know what is coming. But the unaccustomed reader won’t.

Use lots of part conclusions! Sum up again and again how each bit of analysis is relevant and necessary to make sense of your overall argument.

Focus on readability! Don’t use more than a handful of abbreviations, and only those that you can reasonably expect the reader to know in advance. Use clear examples to explain difficult terms and processes!

Be very careful with descriptive passages. They can easily become either dry and boring or lightweight and irrelevant. Keep your focus on the relevant scientific observations. Those are the ones that you are meant to pursue!

Make it perfectly clear to yourself which academic or scientific tradition you are aiming to be part of. Are you going for the anthropological insights, the psychological qualities, or perhaps the computer science bits? There’s no way you can appeal to all of them, and your thesis will otherwise suffer from lack of focus.

Be analytical: Use quotes or specific data to underline your analyses and conclusions. Hack the data! Fashion surprising and worthwhile points from your empirical descriptions.

Write descriptively in order to support your analysis – but don’t write naïvely. The description can be an analysis in its own right if used to expose analytically interesting situations and issues.

Use and express clear levels within the text: Who is saying what? When are you being analytical and when are you being descriptive? Use meta-commentary to separate the two, but don’t be judgmental. Try colouring the text so that you can see where you are analyzing and when you are describing. Keep these in separate sections! Otherwise it will confuse the reader!

Make clear distinctions between what your informants are saying, and what you are saying: Are you using their metaphors and terms? When are you speaking and when are they speaking? You cannot be reflective and critical when using their terms. Use italics and quotes to signify that you are aware of the difference!

Be reflective all the time: Ask: Which implicit assumptions do your informants have that shape their demeanor and convictions? For example: What assumptions are inherent in the idea of the transparency of a computer program? How does this assumption shape relations between people?

Focus on the relations between your informants! What does it mean to be part of this group? Is it a group? Where do their shared bonds lie?

Pick a theoretical perspective and give it more depth! Illuminate it from different angles through various analytical means. Dig deeper!

Use diagrams to illustrate and explain tricky analytical points that you find central. Often, a good diagram will express a thousand words of analysis.

Each chapter of the thesis should be a paper in its own right – containing its own analytical focus and conclusion. But at the same time, it should lead on to the next chapter. Ask yourself: How does this chapter lead on to what I discuss in the next chapter? Is there a feeling of natural flow between the analyses?

Lay out the text as it has to be in the final version. It will make it a lot easier for you to see if you are within the formal word and page limit. Writing too much will require rewriting and cutting, which is arduous and difficult! Better to write it right the first time.

Have a draft chapter ready for review for every meeting with your supervisor. Write a letter along with the draft: Describe how the draft fits with the greater whole of the thesis – what function it fulfills. Make it easy for the supervisor to comment on it in a way that can help you!

Well, I’m sure this seems like pretty self-evident advice, but it is still hard to remember when you’re getting carried away writing about your very favourite obscure detail of the history of the Unix operating system. And you know it has to go the moment you’ve finished it…

It’s been a long time underway, first through fieldwork, writing, submitting, defending, editing, and polishing. But now, finally, my anthropological thesis on the social dynamics of the Ubuntu community is available for everybody to read.

And thus came the day when I ended my association with the University of Copenhagen, after almost 7 years to the day.

I defended my thesis this morning with some success, with fun props and pictures to explain my theoretical perspective. And I passed comfortably, though not without being told that there was a distinct lack of methodological discussion, only barely an academic argument, and that it lacked a proper critical approach to the theories I used. Indeed, I was told that I didn’t “unfold” my material properly, as there were too many theories in play – several of which were contrary to one another.

All valid criticism, I suppose. In the end, I’m quite happy with the decisions I made, since I emphasized not only making the field interesting for anthropologists to read about, but also making it readable and interesting for other people who might be curious about the social dynamics of a free software community. I could have added more reflection on my methods, or focused even more on the analytical crisis cases – but as I had already reached the maximum length allowed for the thesis, I could only have done so by cutting something else.

I’d rather describe the many aspects of the Ubuntu community as they are than focus on crisis cases and dilemmas, which are rare and much less typical of the community as a whole. I’ll digest these comments, clarify a few elements in the thesis and rewrite the conclusion – and ever so soon, I’ll put the “director’s cut” of the thesis up here for all to see.

“Free Software and Open Source are the same. What difference there used to be between the two is now deprecated. When we first began working on the term ‘Open Source’, Eric Raymond was afraid that IT companies couldn’t deal with Richard Stallman, and thus it would be necessary to distance ‘Open Source’ from Richard and Free Software. But it turned out that the companies have no trouble relating to Richard. They do indeed take him seriously, which became quite apparent during the drafting of the GPL version 3 where several big companies took part in the process. So there is no longer any need to differentiate between the two.”

Bruce Perens has launched into his talk at the Danish Unix User Group, and he immediately touches upon the issue for which he is best known: the definition of Open Source, and his role as a mediator between two of the other big men within the free software community, Richard Stallman and Eric Raymond whose eccentricities and disagreements have become the stuff of legend.

As he notes, Stallman often refuses to give talks where people are using proprietary software, instead offering to “defenestrate the computer” – that is: remove Windows from it (and, presumably, install a suitable free replacement).

Compared to those two, Perens appears as one of the more sensible, pragmatic free software advocates. Usually, he does not attract as much attention as the others, and no more than 30 people have shown up this Monday night to hear the Copenhagen stop of his whirlwind tour of talks around Europe.

Perens obviously gets a lot of practice doing public speaking and he sprinkles the relevant technical and legal expositions with some interesting anecdotes while remaining patient and interested in the questions that appear along the way. He comes across as a sage passing on the growing tradition of free software activism, telling us how to learn from past mistakes and seeking to amend previous wrongs – such as the divide between open source and free software.

His talk is called “Innovation goes public” and it is basically Perens’ interpretation of how the free software advocates should go about introducing the core ideas of free software to policy makers, corporations and other non-technical parties:

“You know how newscasters used to report from a crime scene, saying “The police suspect it is the work of a lone isolated nut”? Well, with the Internet, there are no more isolated nuts! The nuts can easily go on-line and find 50 people who share the same obscure interest. That’s basically how Open Source development came about!”

Perens compares this with the way that old people have embraced the Wikipedia. He has found that there are many retired professionals with plenty of time on their hands and no one to receive their no-longer-active knowledge, so they begin adding that knowledge to the Wikipedia, finding new communities of shared interest in that way.

But the main point of his talk is how free software compares favourably to proprietary software in that it makes economic sense. He argues that companies should examine their use of software and find out which software they use is the differentiating factor setting them apart from their competitors. For the Amazon bookstore, that factor is the recommendation system which boosts their sales remarkably.

Perens notes that, generally, a company’s differentiating software is less than 5% of all the software they use. Thus it makes sense for them to keep that 5% proprietary and develop it as they have so far, but to use Open Source for everything else: the other 95% is infrastructure such as web server and operating system software, which isn’t giving them any vital advantage over the competition, and so it can be developed in a free software fashion, giving all of the companies greater rewards relative to the amount of development they invest.
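The economics here is simple back-of-the-envelope arithmetic, which can be put in a few lines of Python. The numbers below are entirely made up for illustration – the only figure taken from the talk is the 5%/95% split:

```python
# Toy model of Perens’ argument: each company develops its differentiating
# 5% in-house, while the 95% infrastructure share is developed jointly and
# its cost is split among all the companies that need it.
def cost_per_company(total_dev_cost, differentiating_pct=5, collaborators=1):
    proprietary = total_dev_cost * differentiating_pct // 100  # kept in-house
    shared = total_dev_cost - proprietary                      # infrastructure
    return proprietary + shared // collaborators

solo = cost_per_company(1_000_000)                      # no one to share with
pooled = cost_per_company(1_000_000, collaborators=10)  # ten companies pool

assert solo == 1_000_000
assert pooled == 145_000  # 5% proprietary plus a tenth of the shared 95%
```

With made-up figures like these, each of the ten companies gets the full software stack for roughly a seventh of the solo cost – which is the shape of the argument Perens was making.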

Thus, Perens explains, the Amazon bookstore uses free software such as the Apache web server, the PHP scripting language and the MySQL database, which costs them nothing and enables them to get custom-made development done in-house or by others as they may need it, allowing them to reduce cost and focus on what they actually sell. That is: books.

I quite enjoyed the talk, which lasted almost 3 hours as Perens expanded his scope to touch upon more and more topics. I wonder how he fares with a less geeky crowd, but he certainly was a hit here. Make sure to go see him if he talks at a LUG near you.

Since coming back from my holidays, I have been in something of a limbo state while waiting to find out the date for my thesis defense. I couldn’t quite tell if I should be panicky with last-minute preparations or summerly relaxed with plenty of time to spare.

Well, now I know. I got the letter this morning:

My thesis defense will take place at 10.00 am on Monday the 27th of August, at the University of Copenhagen. Please do let me know if you plan to attend.

As to making the thesis available on-line: I’m still tweaking a few bits, and haven’t heard back from some of my informants, whom I’ll have to hunt down. Hopefully, it’ll be up for download by the end of next week.