Stefano delivered an excellent address to the Debian project. As Project Leader, he offered a perspective on how far Debian has come, raised some of the key questions facing Debian today, and challenged the project to move forward and improve in several important ways.

He asked the audience: Is Debian better than other distributions? Is Debian still relevant? Why/how?

Having recently asked these questions on identi.ca and Twitter, he presented a summary of the responses. There was a fairly standard list of technical concerns, but also:

A focus on quality, as defined by Debian’s highly modular approach. Each package maintainer is an expert on the software they package, and Debian as a whole offers a superior repository of packages.

The principles of software freedom, as embodied in Debian’s Social Contract. The Debian community’s current interpretation is a purist one, and Stefano cited the elimination of non-free firmware as a milestone in the upcoming Squeeze release. I wonder, though, how many of the audience, tapping away on WiFi-connected laptops, were able to do so without such firmware.

The project’s independent status, supported by donations and volunteers, which empowers it to make its own decisions, free of external impositions.

Debian’s ability to make decisions, as embodied in the constitution. This happens mostly through do-ocracy (individuals are empowered to decide questions concerning their own work), though larger scope issues are decided democratically. This one evoked a bit of a chuckle, as decision making in Debian is not always perceived as fully effective.

He pointed out some areas which he would like to see improve, including:

Developers accepting shared responsibility for the release as a whole. Making one’s own packages ready for release is necessary, but not sufficient. He cited evidence that the culture around NMUs is changing: historically, due to the do-ocratic system mentioned above, Debian developers have been somewhat territorial about their packages, and non-maintainer uploads were seen as stepping on their toes. However, recent experiments have indicated that this may no longer be the case, and Stefano encouraged more developers to help each other through NMUs.

When making decisions, we should seek consensus, not unanimity. In a project with thousands of contributors, whose operations are open to the public, there will never be unanimous support for a proposal, and seeking unanimity leads to stalled decisions.

In order to gain more contributors, Debian needs to welcome new and inexperienced contributors, as well as users (who can grow into contributors). He suggested reaching out to derivatives to find more of both. He decried the conventional wisdom that a “thick skin” should be a prerequisite for joining the project, pointing out that this attitude simply leads to fewer contributors. This point was met with applause by the DebConf audience.

All in all, I thought this was an accurate, timely and inspirational message for the project, and the talk is worth watching for any current or prospective contributor to Debian.

Russ facilitated a discussion about the Debian policy document itself and the process for managing it. He has recently put in a lot of time working on the backlog (down from 160+ to 120), but this is not sustainable for him, and help is needed.

There was a wide-ranging discussion of possible improvements including:

Editing the policy manual so that it is more readable start to finish as a document, rather than a reference

Creating a closer linkage between lintian and the policy manual, so that best practices from lintian get documented, and policy changes are accompanied by new checks

Separating the normative and informative parts of the policy manual

There was also some discussion in passing of the long-standing confusion (presumably among people new to the project) with regard to how policy is established. In Debian, best practices are first implemented in packages, then documented in policy (not the reverse). Sometimes, improvements are suggested at the policy level, when they need to start elsewhere. I’m not very familiar with how the policy manual is maintained at present, but listening to the discussion, it sounded like it might help to extend the process to include the implementation stage. This would allow standards improvements to be tracked all the way through from concept, to implementation, to documentation.

Torsten described the current state of Java packaging in Debian and the general problems involved, including licensing issues, build system challenges (e.g. maven) and dependency management. His slides were information-dense, so I didn’t take a lot of notes.

His presentation inspired a lively discussion about why upstream developers of Java applications and libraries often do not engage with Debian. Suggested reasons included:

They are not interested in Linux as a target platform

Although their code is released under a free license, they are not interested in meeting Debian standards for freedom and license correctness

They use Java because it is cross-platform, and so do not want to concern themselves with platform-specific issues

Because Java applications are easy to download and run manually, they perceive relatively little value in the Debian packaging system

Jorge talked about the connections between Debian and Ubuntu, how people in the projects perceive each other, and how to foster good relationships between developers.

He talked about past efforts to quantify collaboration between the projects, but the focus is now on building personal relationships. There were many good questions and comments afterward, and I’m looking forward to the Debian derivatives BoF session tomorrow to get into more detail.

Tonight is the traditional wine and cheese party. When this tradition started, I was one of just a handful of people in a room with some cheese and paper plates, but it’s now a large social gathering with contributions of cheese and wine from around the world. I’m looking forward to it.

This week, I am attending DebConf 10 at Columbia University in New York.

The first day of DebConf is known as Debian Day. While most of DebConf is for the benefit of people involved in Debian itself, Debian Day is aimed at a wider audience, and invites the public to learn about, and interact with, the Debian project.

Andy discussed FLOSS adoption in governments, drawing on examples from Peru, the city of Munich, and the state of Massachusetts. He covered the reasons why this is valuable, the relationship between government transparency and software freedom, and practical advice for successful adoption and deployment.

The panelists discussed the use of technology in education, especially free software, some of the parallels between free software and education, and what these communities could learn from each other. This is a promising topic, though the perspectives seemed to be mostly from the education realm. There is much to be learned on both sides.

This talk covered the student projects for this year’s Summer of Code. Most of the students were in attendance, and presented their own work. They ranged from more specialized projects like the Hurd installer, to core infrastructure improvements like multi-arch in APT.

Mushon gave an excellent talk on open design. This is a subject I’ve thought quite a bit about, and he validated many of my conclusions from a different angle. I’ve added a new post to my todo list to go into more detail on this subject.

Some points from his talk which resonated with me:

When collaborating on code, everyone must reason with one collaborator: the computer. This forces a level playing field and a common encoding.

Collaborating on other types of creative work is more difficult, in part because of differences in how different individuals encode and decode information

Making this easier for design work requires improving motivational factors and language as well as tools and processes

Many design decisions are actually rational, and are compatible with a group consensus process. Too often, I hear that design can’t be done collaboratively, citing “too many cooks in the kitchen” analogies, but I have never believed it.

Mushon’s own project, shiftspace.org, seems to be a browser-plugin-based system for collaboratively remixing web applications. I haven’t looked at it yet.

Leadership and openness are not mutually exclusive. This is another pet peeve of mine, and there are so many examples of open leadership in the free software community that I don’t see how anyone can think otherwise.

Mushon’s presentation is available in revision control so that it can be freely used and improved

Councillor Brewer paid a visit to DebConf to tell us about the work she is doing on the city council to promote better government through technology.

Brewer seems to be a strong advocate of open data, saying essentially that all government data should be public. She summarized a bill to mandate that New York City government data be public, shared in raw form using open standards, and kept up to date. It sounded like a very strong move which would encourage third party innovation around the data.

She also discussed the need for greater access to computers and Internet connectivity, particularly in educational settings, and a desire to have all public hearings and meetings shared online.

Jon is a very engaging speaker. He drew parallels between the development of player pianos, reproducing pianos, reed organs, pipe organs…and free software. He even tied in Hedy Lamarr’s work which led to spread spectrum wireless technology. To be quite honest, I did not find that these analogies taught me much about either free software or player pianos, but nonetheless, I couldn’t help but take an interest in what he was saying and how he presented it.

Biella and company explained all the ins and outs of the event: where to go, what to do (and not do), and most importantly, whom to thank for all of it. Now in its 11th year, DebConf is an impressively well-run conference.

The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?

Web architecture

The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it’s relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends.
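As a rough illustration of how little code such a backend can take, here is a minimal sketch using only the Python standard library; the handler, port and message are arbitrary choices, not tied to any particular framework:

    # Minimal sketch of an HTTP-accessible backend for a browser frontend.
    # Names and port are arbitrary; real applications would use a framework.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve a trivial response; a real frontend would be HTML/JS.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Hello, web!\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Hello).serve_forever()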

This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability, a complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves.

However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies users’ lives by giving them immediate access to their digital lives from anywhere.

Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web because it will be more successful. It’s no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.

So what?

This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software.

Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it’s much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link?

Many web applications—perhaps even a majority—are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make.

Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do.

What would that look like?

In my view, a FLOSS client platform which fully embraced the web would:

treat web applications as first-class citizens. The web would not be just another application, represented by a browser, but more like a native application runtime. Web applications could feel much more “native” while still preserving the advantages of a web-style user experience. There would be no web browser: that’s a tool for legacy systems to run web applications within a compatibility environment.

provide a seamless experience for developers to build web applications. It would be as fast and easy to develop a trivial client/server web application as it is to write “Hello, world!” in PyGTK using Quickly. For bonus points, it would be easy to develop and run web applications locally, and then deploy directly to a PaaS or IaaS cloud.

empower the user to manage their applications and data regardless of where they are hosted. Traditional operating systems act as a connecting fabric for local applications, providing a shared namespace, file store and IPC mechanisms, but web applications are lacking this. The web’s security model requires that applications are thoroughly sandboxed from each other, but a mediating operating system could connect them in meaningful ways, just as web browsers store cookies and passwords for various websites while protecting them from each other.

Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.

What about Chrome OS?

Chrome OS is a step in the right direction, but doesn’t yet realize this vision. It’s a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well. In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser.

It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so.

It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs.

Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.

How?

Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source.

This is a huge head start toward a free web. I think what’s missing is a client platform which catalyzes the development and use of FLOSS web applications.

I have noticed that when I am reading, I cannot simultaneously understand spoken words. If someone speaks to me while I am reading, I can pay attention to their voice, or to the text, but not both. It’s as if these two functions share the same cognitive facility, and this facility can only handle one task at a time. If someone is talking on the phone nearby, I find it very difficult to focus on reading (or writing). If I’m having a conversation with someone about a document, I sometimes have to ask them to pause the conversation for a moment while I read.

This phenomenon isn’t unique to me. In Richard Feynman’s What Do You Care What Other People Think?, there is a chapter entitled “It’s as Simple as One, Two, Three…” where he describes his experiments with keeping time in his head. He practiced counting at a steady rate while simultaneously performing various actions, such as running up and down the stairs, reading, writing, even counting objects. He discovered that he “could do anything while counting to [himself]—except talk out loud”.

What’s interesting is that the pattern varies from person to person. Feynman shared his discovery with a group of people, one of whom (John Tukey) had a curiously different experience: while counting steadily, he could easily speak aloud, but could not read. Through experimenting and comparing their experiences, it seemed to them that they were using different cognitive processes to accomplish the task of counting time. Feynman was “hearing” the numbers in his head, while Tukey was “seeing” the numbers go by.

Analogously, I’ve met people who seem to be able to read and listen to speech at the same time. I attributed this to a similar cognitive effect: presumably some people “speak” the words to themselves, while others “watch” them. Feynman found that, although he could write and count at the same time, his counting would be interrupted when he had to stop and search for the right word. Perhaps he used a different mental faculty for that. Some people seem to be able to listen to more than one person talking at the same time, and I wonder if that’s related.

I was reminded of this years later, when I came across this video on speed reading. In it, the speaker explains that most people read by silently voicing words, which they can do at a rate of only 120-250 words per minute. However, people can learn to read visually instead, and thereby read much more quickly. He describes a training technique which involves reading while continuously voicing arbitrary sounds, like the vowels A-E-I-O-U.

The interesting part, for me, was the possibility of learning. I realized that different people read in different ways, but hadn’t thought much about whether one could change this. Having learned a cognitive skill, like reading or counting time, apparently one can re-learn it a different way. Visual reading would seem, at first glance, to be superior: not only is it faster, but I have to use my eyes to read anyway, so why tie up my listening facility as well? Perhaps I could use it for something else at the same time.

So, I tried the simple technique in the video, and it had a definite effect. I could “feel” that I wasn’t reading in the same way that I had been before. I didn’t measure whether I was going any faster or slower, because I quickly noticed something more significant: my reading comprehension was completely shot. I couldn’t remember what I had read, as the memory of it faded within seconds. Before reaching the end of a paragraph, I would forget the beginning. It was as if my ability to comprehend the meaning of the text was linked to my reading technique. I found this very unsettling, and it ruined my enjoyment of the book I was reading.

I’ll probably need to separate this practice from my pleasure reading in order to stick with it. Presumably, over time, my comprehension will improve. I’m curious about what net effect this will have, though. Will I still comprehend it in “the same” way? Will it mean the same thing to me? Will I still feel the same way about it? The many levels of meaning are connected to our senses as well, and “the same” idea, depending on whether it was read or heard, may not have “the same” meaning to an individual. Even our tactile senses can influence our judgments and decisions.

I also wonder whether, if I learn to read visually, I’ll lose the ability to read any other way. When I retrained myself to type using a Dvorak keyboard layout, rather than QWERTY, I lost the ability to type on QWERTY at high speed. I think this has been a good tradeoff for me, but raises interesting questions about how my mind works: Why did this happen? What else changed in the process that might have been less obvious?

Have you tried re-training yourself in this way? What kind of cognitive side effects did you notice, if any? If you lost something, do you still miss it?

(As a sidenote, I am impressed by Feynman’s exuberance and persistence in his personal experiments, as described in his books for laypeople. Although I consider myself a very curious person, I rarely invest that kind of physical and intellectual energy in first-hand experiments. I’m much more likely to research what other people have done, and skim the surface of the subject.)

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision of a free software operating system which is highly modular and customizable has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, to varying degrees of success. Meanwhile, smart phones will soon become the most common type of computer globally.

Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.

Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.

Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of these libraries instead. Application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension as they expect their configuration to be the canonical one, and want it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.

Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems as they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud to change that.

Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, to varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format.

Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.

Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version.

Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers operating at various levels of the stack, each solving a different set of problems.

Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.

Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.

Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain, and system management tools will need to be able to interrogate applications. We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe it would easily outweigh the shortcomings of our current model.

But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward, and I’m sure there are more and better ideas brewing in distribution communities; I’m certainly not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.

I’ve written a simple application which will automatically extract media from CDs and DVDs when they are inserted into the drive attached to my server. This makes it easy for me to compile all of my media in one place and access it anytime I like. The application uses the modern udisks API, formerly known as DeviceKit-disks, and I wrote it in part to get some experience working with udisks (which, it turns out, is rather nice indeed).
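This is not my actual daemon, but a rough sketch of the shape of such a program against the udisks 1.x D-Bus API; the interface, method and property names below are from that version and worth double-checking against your system:

    # Rough sketch only: watch for new devices via udisks 1.x over D-Bus
    # and mount optical discs as they appear. Not the actual daemon.
    import dbus
    import dbus.mainloop.glib
    from gi.repository import GLib  # the 2010-era stack used gobject instead

    UDISKS = "org.freedesktop.UDisks"
    DEVICE = "org.freedesktop.UDisks.Device"

    def on_device_added(object_path):
        bus = dbus.SystemBus()
        obj = bus.get_object(UDISKS, object_path)
        props = dbus.Interface(obj, dbus.PROPERTIES_IFACE)
        if props.Get(DEVICE, "DeviceIsOpticalDisc"):
            device = dbus.Interface(obj, DEVICE)
            # Autodetect the filesystem type, no extra mount options.
            mount_point = device.FilesystemMount("", dbus.Array([], signature="s"))
            print("New disc mounted at", mount_point)
            # ...extract the media here, then unmount and eject...

    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    dbus.SystemBus().add_signal_receiver(
        on_device_added, signal_name="DeviceAdded",
        dbus_interface=UDISKS, path="/org/freedesktop/UDisks")
    GLib.MainLoop().run()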

Naturally, I wanted to grant this application the privileges necessary to mount, unmount and eject removable media. The server is headless, and the application runs as a daemon, so this would require explicit configuration. udisks uses PolicyKit for authorization, so I expected this to be very simple to do. In fact, it is very simple, but finding out exactly how to do it wasn’t quite so easy.

The Internet is full of web pages which recommend editing /etc/PolicyKit/PolicyKit.conf. As far as I can tell, nothing pays attention to this file anymore, and all of these instructions have been rendered meaningless. My system was also full of tools like polkit-auth, from the apparently-obsolete policykit package, which kept their configuration in some other ignored place, i.e. /var/lib/PolicyKit. It seems the configuration system has been through a revolution or two recently.

In Ubuntu 10.04, the right place to configure these things seems to be /var/lib/polkit-1/localauthority, and this is documented in pklocalauthority(8). Authorization can be tested using pkcheck(1), and the default policy can be examined using pkaction(1).
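For example, something along these lines can be used to explore the udisks actions; the specific action ID shown is just one example, and pkaction with no arguments lists them all:

    # List registered actions, inspect one, then test whether the current
    # process is authorized for it (action ID shown as an example).
    pkaction | grep udisks
    pkaction --action-id org.freedesktop.udisks.filesystem-mount --verbose
    pkcheck --action-id org.freedesktop.udisks.filesystem-mount --process $$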

I solved my problem by creating a file with a .pkla extension in /var/lib/polkit-1/localauthority/50-local.d, granting my daemon’s user the udisks privileges it needs.
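A sketch of what such a file can look like follows; the file name, the user mediad and the specific action IDs are placeholders for illustration, not necessarily what I used:

    # /var/lib/polkit-1/localauthority/50-local.d/10-mediad.pkla (illustrative)
    [Allow the media daemon to mount and eject removable media]
    Identity=unix-user:mediad
    Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.drive-eject
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes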

This took effect immediately and did exactly what I needed. I lost quite some time trying to figure out why the other methods weren’t working, so perhaps this post will save the next person a bit of time. It may also inspire some gratitude for the infrastructure which makes all of this work automatically for more typical usage scenarios, so that most people don’t need to worry about any of this.

Along the way, I whipped up a patch to add a --eject option to the handy udisks(1) tool, which made it easier for me to test.

I find that habits are best made and broken in sets. If I want to form a new habit, I’ll try to get rid of an old one at the same time. I don’t know why this works, but it seems to. Perhaps I only have room in my head for a certain number of habits, so if I want a new one, then an old one has to go. I’m sure some combinations are better than others.

I’m currently working on changing some habits, including:

Start exercising, swimming three times per week

Stop drinking alcohol entirely

Start a consistent flossing routine

I’m thinking of adding a reading habit to the set, but it’s going well so far and I don’t want to overdo it. I feel good, and am forming a new routine.

The flossing is definitely the hardest of the three. I hate pretty much everything about flossing. It also unbalances the set, so that I have a net gain of one habit. Maybe that’s the real reason, and if I broke another habit, it would get easier.

Does anyone else have this experience? What sort of tricks do you employ to help you change your behavior?

Having invested in some introspection into my reading habits, I made up my mind to dial down my consumption of bite-sized nuggets of online information, and finish a few books. That’s where my bottleneck has been for the past year or so. Not in selecting books, not in acquiring books, and not in starting books either. I identify promising books, I buy them, I start reading them, and at some point, I put them down and never pick them back up again.

Until now. Over the weekend, I finished two books. I started reading both in 2009, and they each required my sustained attention for a period measured in hours in order to finish them.

Taking a tip from Dustin, I decided to try alternating between fiction and non-fiction.

Jitterbug Perfume by Tom Robbins

This was the first book I had read by Tom Robbins, and I am in no hurry to read any more. It certainly wasn’t without merit: its themes were clever and artfully interwoven, and the prose elicited a silent chuckle now and again. It was mainly the characters which failed to earn my devotion. They spoke and behaved in ways I found awkward at best, and problematic at worst. Race, gender, sexuality and culture each endured some abuse on the wrong end of a pervasive white male heteronormative American gaze.

I really wanted to like Priscilla, who showed early promise as a smart, self-reliant individual, whose haplessness was balanced by a strong will and sense of adventure. Unfortunately, by the later chapters, she was revealed as yet another vacant vessel yearning to be filled by a man. She’s even the steward of a symbolic, nearly empty perfume bottle throughout the book. Yes, really.

Managing Humans by Michael Lopp

Of the books I’ve read on management, this one is perhaps the most outrageously reductionist. Many management books are like this, to a degree. They take the impossibly complex problem domain of getting people to work together, break it down into manageable problems with tidy labels, and prescribe methods for solving them (which are hopefully appropriate for at least some of the reader’s circumstances).

Managing Humans takes this approach to a new level, drawing neat boxes around such gestalts as companies, roles, teams and people, and assigning them Proper Nouns. Many of these bear a similarity to concepts which have been defined, used and tested elsewhere, such as psychological types, but the text makes no effort to link them to his own. Despite being a self-described collection of “tales”, it’s structured like a textbook, ostensibly imparting nuggets of managerial wisdom acquired through lessons learned in the Real World (so pay attention!). However, as far as I can tell, the author’s experience is limited to a string of companies of a very specific type: Silicon Valley software startups in the “dot com” era.

Lopp (also known as Rands) does have substantial insight into this problem domain, though, and does an entertaining job of illustrating the patterns which have worked for him. If you can disregard the oracular tone, grit your teeth through the gender stereotyping, and add an implicit preface that this is (sometimes highly) context-sensitive advice, this book can be appreciated for what it actually is: a coherent, witty and thorough exposition of how one particular manager does their job.

I got some good ideas out of this book, and would recommend it to someone working in certain circumstances, but as with Robbins, I’m not planning to track down further work by the same author.

Like you, dear Internet readers, I have no shortage of reading material. I have ready access to more engaging, high quality, informative and relevant information than I can possibly digest. Every day, I have to choose what to read, and what to pass by. This seems like an important thing to do well, and I wonder if I do a good enough job of it. This is just one example of a larger breadth/depth problem, but I’m finding the general problem difficult to stomach, so I’m focusing on reading for the moment.

These are my primary sources of reading material on a day-to-day basis:

Email – I read everything which is addressed to me personally. I don’t reply to all of it, and my reply time can vary greatly, but I am able to keep up with reading it, and I consider it important to do so. I am still subscribed to a selection of mailing lists, but I find them increasingly awkward to manage. There are a few which I scan on a daily basis, but most of them I process in batches when I’m offline and traveling. I’m subscribed to far fewer mailing lists than I was five years ago, though I feel they are still the most effective online discussion facility available. I find myself doing more and more discussing in real-time on IRC and by phone rather than by email.

Blogs – I subscribe to a few big aggregators and a random sampling of individual blogs. Most of them I scan rather than read. I do most of this offline, while in transit, and so I don’t tend to follow links unless they’re promising enough to save for later. I’ve recently stopped trying to “keep up” (scan every post) on most of them, and instead just “sample” whatever is current at the time. It feels like turning on a television, flipping through all of the channels, and turning it off again. Even when I do find something which I feel is worth reading, it’s hard for me to focus my attention after a long session of scanning. I do find a lot of good stuff this way, but I’m pretty dissatisfied with the overall experience. I never feel like I’m looking in the right places.

Shared links – I share my own links publicly, and follow those shared by friends and acquaintances. I do this with multiple groups of people who don’t connect directly, and pass items back and forth between those groups. I place an increasingly high priority on reading items which are shared by people I know, more than on trying to follow the original sources, because the signal-to-noise ratio is so good: my personal network acts as a pretty good filter for what will interest me. I have the nagging feeling that I need to maintain a balance here, though. If I read mostly what other people are sending me, I feel like I’m living in a bubble of like-minded people and fear that I’ll lose perspective.

News – I read hardly any “proper” news. I don’t subscribe to any newspapers, and generally don’t read the online versions either. I do read articles which are shared by people in my network. Traditional media never seems to have the right scope for me. There may be particular journalists, or particular topics I’d like to follow, but news outlets simply don’t group their content in a way which fits my mind.

Books – Remember these? My diet of books has shrunk drastically since I started reading more online media. Devoting my full attention to a book just doesn’t feel as energizing as it used to. I hesitate at the prospect of sinking so many hours into a book, only to decide that it wasn’t worthwhile, or worse, to forget what I learned as I’m bombarded by bite-sized, digestible tidbits from the Internet. I feel sad about losing the joy of reading I once had, and want to find a way to reintegrate books into my regular diet.

How do you decide what to read, and what not to read? How does your experience differ between your primary information sources? How have you tried to improve?

DevOps

I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I’ve been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:

                             Development                     Operations
  is responsible for…        creating products               offering services
  is measured on…            delivery of new features        high reliability
  optimizes by…              increasing velocity             controlling change
  and so is perceived as…    reckless and irresponsible      obstructing progress

Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.

So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist “development” or “operations” roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work “over the wall”, they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a “devops” team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there’s a problem, it’s the team’s problem.

Some take it a step further, and feel that what’s needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as “developers” or “sysadmins”, these folks consider themselves “devops”. They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.

DevOps meets Cloud

Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I’ve written previously). It’s about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn’t correspond piece-for-piece to the old one.

DevOps drives cloud adoption because cloud offers a richer toolkit for the way these teams work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud also drives DevOps because it calls into question the traditional way of organizing software teams. A development/operations division just doesn’t “fit” cloud as well as a DevOps model.

Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the “fence” between development and operations, and can accelerate development as well as promote stability in production.
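To make “deployment requires development” concrete, here is a minimal sketch assuming the boto library; the AMI ID, key name and security group are placeholders, not a recommendation of any particular setup:

    # Minimal sketch of deploying through an IaaS API with boto (assumed
    # library); the AMI ID, key name and security group are placeholders.
    import boto

    conn = boto.connect_ec2()  # credentials come from the environment
    reservation = conn.run_instances(
        "ami-12345678",              # placeholder image ID
        instance_type="m1.small",
        key_name="deploy-key",
        security_groups=["web"],
    )
    instance = reservation.instances[0]
    print("Launched", instance.id)
    # A real deployment would go on to configure the instance (e.g. with
    # Puppet or Chef) and register it with monitoring and load balancing.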

This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for “black box” access to computing resources. How those resources are provisioned and maintained is entirely Amazon’s problem, while its customers must decide how to deploy and manage their applications within Amazon’s IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply “switch” to an IaaS model and expect these needs to be taken care of by their service provider.

With platform service providers, the boundaries are different. Developers, if they build their application on the appropriate platform, can effectively outsource (mostly) the management of the entire production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else’s problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach.

Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture.
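A hypothetical layout for such a unified tree, purely illustrative, might look like this:

    myapp/
      app/              # application code
      infrastructure/
        servers.yaml    # how many instances, their sizes and roles
        puppet/         # configuration applied to each role
      tests/            # exercised against both the code and the infrastructure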

Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.

So what?

DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they’re complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.