Blog entries for July, 2007

There's some talk on desktop-devel-list
about exactly what "online
desktop" means, and in private mail I got a good suggestion to
frame it in terms end users would understand.

"Online Desktop" is an abstraction. First, let me try to convince you
that it's more specific than what GNOME purports to be about right
now. Then I'll suggest a way to avoid architecture
astronauting the Online Desktop abstraction.

GNOME offers an easy to understand desktop for your Linux or UNIX computer.

This is aimless, in my opinion. As Alex Graveley said in his keynote,
the "easy to understand" (i.e. usability) point is a feature, like the
color of a car. It doesn't give any direction (sports car or
truck?). "Runs on Linux" is a feature too, like "car that drives on
roads."

Chop out the two features, and the only direction here is "desktop" -
that word by itself just means "a thing like Windows or OS X" -
it's a category name, and thus defined by what's already in the
category. It dooms us to cloning, in other words.

Here's what I offered at GUADEC as an alternative:

GNOME Online Desktop: The perfect window to the Internet: integrated with all your favorite
online apps, secure and virus-free, simple to set up and
zero-maintenance thereafter.

This is still a conceptual mission statement (or something), not a
product. I went on to show a series of slides about possible
products that fit into the above mission - the idea being that
end users would get, and be marketed, a specific product.
Here are the products I mentioned:

An amped-up mobile device (internet tablet or phone) designed to
work with popular web sites and seamlessly share your stuff with your
desktop (including your proprietary desktop, if you have one!). The
closest current products to this might be the Helio Ocean, the iPhone, and of
course GNOME's own Nokia N800.

An Internet appliance. To date, the only one of these I was ever
impressed with was the Netpliance - as best
I can tell, the company cratered because it sold a $300 device for
$99, hoping to make it up on service contracts cell-phone-style, but
did not lock people in to a 2-year contract. So people bought the
device, canceled the contract, and the company went bankrupt. Anyway,
my grandfather had one of these, and it was perfect for him. I think
it was a good idea and might have gone somewhere if they hadn't dug
themselves into a big old hole by losing $200 per device sold.
The idea is also more timely today than in 1999, because you can do a
lot more with a simple device as long as it has a web browser.

PCs for students. College students love the cuteness of the One Laptop
Per Child. Imagine a slightly scaled-up version: a cheap, well-designed,
durable laptop with an OS designed for online use rather than
peer-to-peer.

Managed clients for small/medium businesses. If Google is successful
with Google Apps for Your
Domain, or someone else is successful with something similar
(already there are other examples - QuickBooks Online Edition,
salesforce.com), then
companies can outsource their server-side IT at low cost.
But they'll still be maintaining Windows, with its downsides.
GNOME could offer an alternative client, perhaps managed by the
service provider just as the online services themselves are,
but in any case better optimized for a "window to the Internet" role
than Windows and lower maintenance to boot.

An awesomer version of developer distributions like Fedora and Ubuntu,
with neat features for services Linux lovers tend to use, such as
Flickr, and nice support for stuff Linux lovers care about such as
ssh.

You can probably imagine how to improve the above products, or even
come up with a few more.

In deciding what to hack on next, we should probably always be
thinking of one of the specific products, rather than the Online Desktop
abstract mission statement concept.

If you were selling GNOME to someone, you'd want to tell them about
one of these products, not the "window to the Internet"
blurb.

I proposed the Online Desktop abstraction because 1) a high-concept
mission sounds more exciting to many people and 2) the specific
products each exclude some of the primary GNOME constituents. The
GNOME project can support several of these products. The Online
Desktop abstraction is meant to be something a large part of the GNOME
community can have in common, even though we're working on a variety
of different products. But we should keep working on products, not
abstract missions.

Even though Online Desktop is an abstraction, I think
it's both more specific and a better direction than
the current abstraction on www.gnome.org - "a desktop." "Perfect
window to the Internet" is still vague, and I'm sure can be improved
on, but at least it isn't a pre-existing product category that's
already been defined by proprietary competitors.

You may notice that I tacked a bunch of features onto the Online
Desktop definition: "integrated with all your favorite online apps,
secure and virus-free, simple to set up and zero-maintenance
thereafter." I guess these are more illustrations than anything
else. The point is to capture what the various products built around
GNOME could have in common.

Talking to lots of developers at GUADEC about their designs, I'm
reminded of the hardest thing to get right in software engineering:
when are you doing too much of it?

The "agile development" model is to always do as little as possible,
adding code and design complexity only as needed. I'm a big fan of
this, especially for apps. It breaks down a bit for libraries and
APIs, though; it's too hard to get anybody to try the API until you
have it fairly far along, and then it becomes too hard to change the
API. A good approach that helps a bit is to always develop an API as
part of writing an app - for example we've developed HippoCanvas as
needed while writing Mugshot, Big Board, and Sugar. Compared to a
written-in-a-vacuum API the core turned out very nicely IMO, but one
consequence of as-needed development is that the API has a lot of
gaps. Still, an API founded in as-needed development would often be a
better start for a productized API than a from-scratch design.

Another guideline I use is that the last 5% of your use cases
or corner cases should be addressed with hacks and
workarounds. Otherwise you will double your complexity to cover that
last 5% and make the first 95% suck. The classic Worse is Better
paper says about the same thing, without the made-up percentages.
Typical hacks in this context might include:

"Just don't do that then" - declare that while the API could be
misused or something bad could happen in a particular case, the case
is avoidable and people should just avoid it.

"Convention rather than enforcement" - all of Ruby on Rails is
based on this one - rather than jumping through hoops to "enforce"
something that's hard to enforce, just don't.

"Slippery slope avoidance" - pick some bright line for what to add
vs. what not to add in a particular category and stick to it, even
though each individual addition seems sensible in isolation.

I bet someone could write (or has written) a whole book on
"complexity avoidance patterns."

I've tried different complexity levels in different projects. For
example, libdbus is flexible in various ways and even adds probably
30% more code just to handle rare out-of-memory situations. (The GLib
and GTK+ stack reduce API and code complexity that other C libraries
have by punting the out-of-memory issue.) Metacity, meanwhile, doesn't even
use GObject to speak of, just structs in a loose object-oriented style.
(I frequently prefer a struct with new/ref/unref methods to GObject in
application code, though not in APIs.)

Some commenters thought the talk wasn't alarmist enough and
should have more strongly stressed the urgency of the situation. I
don't think we need to panic, but I do think a lot of hard work is in
order, in either the online desktop direction or some other focused
direction others may propose.

"Aren't many web services proprietary?" was a good question
raised during the talk Q&A. My short answer to this is yes, but
ignoring the real user benefits of these services will result in
everyone using them anyway, while not using GNOME or any other open
source software. We have to engage with where the world is going and
what users want to have. That will put us in a position to affect
openness. Taking a "stop progress!" attitude won't help.

Even among GNOME developers, almost everyone uses
Google or Flickr or something. Expecting a wider
non-geek audience to forgo these services on ideological grounds
while we aren't even doing it ourselves doesn't seem very reasonable
to me.

Also, web services may well not be proprietary in the sense of the
Open Source Definition, but are proprietary in effect. See
below, on the need for an Open Service Definition.

"I don't want to put my data on someone else's server" and other
security issues are a common concern. Let's be clear that of course it
will always be possible to keep your data locally, or run your own
server.

But I think the privacy issues are very solvable, even for
people who care deeply about them. An Open Service Definition or the
like might address these. And of course you can use strong cryptography
- though for the average consumer, the prospect of losing 5 years of
data along with a lost private key is not acceptable. Mozy is an example
of a service that gives you the option to strongly encrypt with your own
private key; it doesn't default to that choice since it's too risky for
an average person.

As with the issue of proprietary web services, though, a "stop
progress!" attitude won't put us in a position to affect security or
privacy. If we want to affect these things, we first have to offer the
user benefits and be a project people really care about. And then we
can affect what other participants in the industry do.

Several people suggested that a good reply to security concerns is
"do you use online banking?" - which seems like a fair point, since
most people do use it.

We need a Free Services License, Open Service Definition, Free
Terms of Service, or whatever we want to call it. I see more and
more people talking about this, even aside from the GNOME Online
Desktop conversation. Topics to cover in an Open Service Definition
might include ability to export your personal data, your right to own
your data's copyright, etc. There may also be a requirement to use an
Affero GPL type of license. This is very open-ended and unclear at the
moment.

To me the reason open source works is that multiple parties with
competing interests can collaborate on the software. What would make
multiple parties interested in collaborating on a service? Probably a
fairly radical-sounding set of requirements. But the GPL was pretty
radical-sounding too, many years ago.

"Running servers that require real bandwidth, hardware, and
administration will be hard for the open source community." This is
absolutely true. On the other hand, I can imagine a lot of ways we can
approach this, and we don't need very much in the way
of servers to get started. As I said in the talk, if we produce
something compelling people will be excited about it and we'll have a
number of opportunities to work with for-profit and nonprofit funding
sources to get the server problem solved. If we don't produce
something compelling, then there won't be a scalability issue.

There are some precedents, the main one being Wikipedia, but
I'm also thinking of the Internet Archive, iBiblio, and ourmedia.org
as examples of nonprofit services.

"What about just using a WebDAV home directory or syncing files
around?" If you start to prototype this, I would bet it produces a
distinctly different and probably worse user experience than building
around domain-specific services like calendar, photos, etc., for a
variety of reasons. But it could be part of the answer and is
certainly worth prototyping.

Since it's come up at GUADEC, I wanted to post a bit about D-Bus
licensing. D-Bus is dual licensed under your choice of the GPL or the
Academic Free License 2.1. The AFL is essentially an MIT/X11 style license,
i.e. "you can do almost anything" - however, it has the following
patent clause:

This License shall terminate automatically and You may no longer
exercise any of the rights granted to You by this License as of the
date You commence an action, including a cross-claim or counterclaim,
against Licensor or any licensee alleging that the Original Work
infringes a patent. This termination provision shall not apply for an
action alleging patent infringement by combinations of the Original
Work with other software or hardware.

In other words, if you sue claiming that D-Bus infringes your patent,
you have to discontinue distributing D-Bus. The patent clause does not
affect anything "outside" of D-Bus, i.e. it does not affect patents on
stuff other than D-Bus, or your right to distribute stuff other than
D-Bus.

Versions of the AFL prior to 2.1 had a scarier patent
clause; however, I have not heard any objections to the more
limited one in 2.1.

Let's compare the situation here to the LGPL. The LGPL is effectively a
dual license: the LGPL terms plus the GPL. Quoting from the LGPL:

You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library.

As I understand it, this is why the LGPL is GPL-compatible. If you
link your GPL app to an LGPL library, you are using the library under
GPL.

I believe if you distributed D-Bus under GPL or LGPL, you would be
making a patent grant of any patents affecting D-Bus. The AFL
patent clause does not require you to make a patent grant; it still
allows you to sue. You just have to stop distributing D-Bus while you
do it. With the GPL or LGPL, you can never distribute in the first
place, without giving up the right to sue at all. Unless I'm missing
something, there's no way the AFL patent clause can be a problem
unless LGPL or GPL would be a problem in the same context.

That said, there may be some advantages to relicensing D-Bus. Some of
the options would be:

Add LGPL as a choice (so LGPL + GPL + AFL)

Add GPLv3 as a choice

Switch the whole thing to MIT/X11

Some combination of the above

For the record, I'm not against any of these in principle. I would
just say, 1) it's a lot of work due to all the authors,
so someone would have to volunteer to sort this out (and figure out
what to switch to), and 2) I think some people are not understanding
the current licensing - in particular, at the moment it isn't clear to
me at least what LGPL would allow you to do that the current licensing
does not. AFL is much less restrictive than the LGPL, and the
GPL is not compatible with the LGPL either - the GPL is only
LGPL-compatible because LGPL programs are dual-licensed under GPL,
just as D-Bus is.

I may be confused on point 2): it would seem the implication is that
if your app is "GPL + exception" you can't use an LGPL library such as
GTK+, except by adding another exception to allow using GTK+ under
LGPL rather than GPL. This is the same with GPL+AFL. But people don't
worry about linking their GPL+exception apps to GTK+, and they do
worry about linking them to D-Bus. What am I missing?

Historically, the intent of using AFL rather than LGPL was to be less
restrictive (and vague) than the LGPL, and the intent of AFL rather
than MIT/X11 was to retain some very minimal patent protection for
patents that affect D-Bus itself while keeping the MIT/X11 concept
otherwise. Also, AFL is a slightly more "legally complete and correct"
way to write the MIT/X11 type of license.

There isn't any ideology here, just an attempt to pick the best
license, and we can always revise the decision.

My first attempt at the Mugshot client build had one big Makefile.am
with every target inline in it; that got unwieldy. Owen nicely
fixed it with a convention: for each build subcomponent "libfoo", put
"include Makefile-libfoo.am" in the Makefile.am, then put the build
rules for everything related to "libfoo" in "Makefile-libfoo.am".
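A minimal sketch of the convention might look like this (the component names "libfoo" and "myapp" are invented for illustration):

```makefile
# Top-level Makefile.am: nonrecursive, so it just initializes the
# target variables and pulls in one fragment per subcomponent.
noinst_LTLIBRARIES =
bin_PROGRAMS =

include Makefile-libfoo.am
include Makefile-myapp.am
```

```makefile
# Makefile-libfoo.am: everything related to the "libfoo" component.
# Source paths are relative to the top-level directory, since the
# whole build runs from a single make invocation there.
noinst_LTLIBRARIES += libfoo.la
libfoo_la_SOURCES =	\
	libfoo/foo.c	\
	libfoo/foo.h
```

With this layout a separate build directory works with no extra effort: mkdir build && cd build && ../configure && make -j4.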

I recommend doing your project this way. I've spent a lot less
time messing with weird build issues; nonrecursive make "just works"
as long as you get the dependencies right, while recursive make
involves various hacks and workarounds for the fact that make can't
see the whole dependency graph. In particular, nonrecursive make
supports "make -jN" without extra effort, a big win since
most new computers have multiple cores these days.

Nonrecursive make has the aesthetic benefit that it keeps all
your build stuff separate from your source code. On top of
that, since srcdir != builddir will work more easily, you can make a
habit of building in a separate directory. Result: your source tree
contains source and nothing else.

GNOME unfortunately makes nonrecursive automake painful. Two issues
I've encountered are that "gtkdocize" generates a Makefile that is
broken in a nonrecursive setup, and that jhbuild always wants to do
srcdir=builddir even though my project is nice and clean and doesn't
require that.

I'm not sure why GNOME started with recursive make and everyone has
cut-and-pasted it ever since; it's possible automake didn't support
nonrecursive make in older versions, or maybe it was just dumb luck.

With "weird build bugs due to recursive make" knocked off the
list, my current top automake feature requests:

A way to echo only the filename "foo.c" instead of a 10-line
list of compiler flags, so I can see all warnings and errors at once without
scrolling through pages of junk.

In a nonrecursive make setup, a way to set a base directory for
SOURCES, so instead of "foo_SOURCES=src/foo.c src/bar.c" I can have
"foo_SOURCE_BASEDIR=src foo_SOURCES=foo.c bar.c" or something along
those lines.

Ability to include in the build a relative directory outside the
source tree, like "../common" or "../some-not-installed-dependency" -
this almost works but breaks in "make dist" because it tries
to copy "../common" to "distdir/../common" - we fix that in the
Mugshot client build with a little hack, but I can imagine an
automake-level convention for how to handle it.
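For the first wish, there is at least a well-known workaround in hand-written GNU makefiles (this is the kernel-style "quiet rule" hack, sketched here as plain GNU make - it doesn't plug directly into automake's generated compile rules):

```makefile
# Print just "  CC  foo.c" per file instead of the full command line;
# run "make V=1" to get the full commands back for debugging.
V ?= 0
ifeq ($(V),0)
QUIET_CC = @echo "  CC  $<";
endif

%.o: %.c
	$(QUIET_CC)$(CC) $(CFLAGS) -c -o $@ $<
```

Something along these lines, built into automake itself, is what the feature request amounts to.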

gnomedesktop.org linked to a VentureCake article on ways to improve
GNOME. While the article
covers a lot of ground, as a kind of aside it brings up the age-old "X
would be faster if it knew about widgets and not just drawing"
theory.

I wonder why this theory comes up repeatedly - it is one hundred
percent bogus. In fact the direction over the last few years is the
opposite. With Cairo, the X server is responsible for hardware
acceleration and the client side implements operations like drawing a
rectangle. Moving more logic client-side was one of the reasons for
the switch to Cairo.

Must be a lesson here somewhere about the value of applying intuition
to performance.

(And yes, some things may be slower and others faster in recent
Cairo-using apps vs. older apps, but I assure you it has nothing to do
with whatever you think it has to do with - unless your thinking is
based on profile results from solid benchmarks.)