The teams that built the service were roughly split in two: the foundations team, which was responsible for the lowest levels of the service (storage and retrieval of files, the data model, and the client/server syncing protocol), and the web team, focused on user-visible services (the website to manage files, photos, music streaming, contacts, and the equivalent Android/iOS clients).
I joined the web team early on and stayed with it until we shut the service down, so that's where most of my stories come from.

Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. By the time we launched them we had a few years of experience serving files at scale, so the hard part turned out to be presenting and manipulating metadata quickly for each user, and showing that data in appealing ways (browsing music by artist or genre, searching, and so on). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract the metadata, parse it, store it, and then present it back to users quickly in different ways. Easy, right? It is, until you reach a certain scale.
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially, your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!). However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain point we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time-frame (less than 5 seconds). As the service grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much-feared major architectural change. I say much feared because major architectural changes carry a lot of risk for running services that have low tolerance for outages or data loss; whenever you significantly change something that's already running, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast and take it for granted. A 5-person team spending 6 months to make things work the way you expect isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
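For the curious, the routing itself was just a function of the user id. Here's a minimal sketch of the idea (my own illustration, not the production code):

NUM_SHARDS = 8

# Illustrative DSNs; each user's metadata consistently lands on one of
# the 8 master databases.
SHARD_DSNS = [
    "host=shard%d.internal dbname=metadata" % i for i in range(NUM_SHARDS)
]

def shard_for_user(user_id):
    """Return the DSN of the shard that holds this user's metadata."""
    return SHARD_DSNS[user_id % NUM_SHARDS]
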
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and to do that by extending our existing architecture instead of revamping it completely. Since our code was built into proper layers of responsibility, we hooked an async process into the lowest layer of our code (database transactions) that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computing is usually a good trade-off to make, both in cost and performance. So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose only focus would be to provide per-user metadata *fast*, for any type of file, so we could build interesting and flexible user interfaces for people to consume their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update, and 3) complete isolation from the existing infrastructure.
Here's a rough diagram of how the information flowed through the system:

It's a little bit scary when you look at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG in the same transaction that it's written to the permanent table. That way you get transactional guarantees, for free, that you won't lose any data on that layer, and you use PG's built-in cache, which keeps recently added records cheaply accessible.
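A minimal sketch of that write path, with made-up table and column names (the real code lives under src/backends/txlog/, mentioned below):

import json
import psycopg2

def save_file_metadata(conn, user_id, file_id, metadata):
    # One transaction: both writes commit together or roll back together.
    with conn:
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE files SET metadata = %s"
                " WHERE id = %s AND user_id = %s",
                (json.dumps(metadata), file_id, user_id))
            # Same transaction, so the txlog row can never exist without
            # the permanent write, and vice versa.
            cur.execute(
                "INSERT INTO txlog (user_id, file_id, payload)"
                " VALUES (%s, %s, %s)",
                (user_id, file_id, json.dumps(metadata)))
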
Then we built a bunch of workers that read through those rows, parsed them, sent them to a persistent queue in RabbitMQ, and deleted each row from the temporary PG table once the broker confirmed it was queued.
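A sketch of what such a relay worker could look like, using pika for RabbitMQ (all names are illustrative and error handling is elided):

import pika
import psycopg2

def relay_once(pg_dsn, amqp_url):
    conn = psycopg2.connect(pg_dsn)
    channel = pika.BlockingConnection(pika.URLParameters(amqp_url)).channel()
    channel.confirm_delivery()  # enable publisher confirms
    channel.queue_declare(queue="metadata-changes", durable=True)
    with conn:
        with conn.cursor() as cur:
            cur.execute("SELECT id, payload FROM txlog ORDER BY id LIMIT 100")
            for row_id, payload in cur.fetchall():
                # With confirms on, basic_publish raises if the broker
                # doesn't accept the message, so reaching the DELETE means
                # the change is safely queued.
                channel.basic_publish(
                    exchange="",
                    routing_key="metadata-changes",
                    body=payload,
                    properties=pika.BasicProperties(delivery_mode=2))
                cur.execute("DELETE FROM txlog WHERE id = %s", (row_id,))
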
Following that, we took advantage of Rabbit's exchange features to build different types of workers that processed the data differently depending on what it was (music was stored differently than photos, for example).
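For example, a direct exchange can fan the firehose out by content type, with each queue consumed by workers that know how to parse that kind of metadata (again a hedged sketch with made-up names):

import pika

channel = pika.BlockingConnection(
    pika.ConnectionParameters("localhost")).channel()
channel.exchange_declare(exchange="changes", exchange_type="direct",
                         durable=True)
for kind in ("photo", "music"):
    channel.queue_declare(queue="%s-changes" % kind, durable=True)
    channel.queue_bind(queue="%s-changes" % kind, exchange="changes",
                       routing_key=kind)

# Publishers tag each message with its kind and the exchange routes it:
channel.basic_publish(exchange="changes", routing_key="music",
                      body='{"user_id": 42, "song_id": "abc"}')
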
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that fit in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views, and the result was a significant reduction in load on the main DB servers, while getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and Cassandra nodes).
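Reading a pre-computed view then boils down to fetching a single user's partition. Using today's CQL driver purely for illustration (keyspace and table names are made up; at the time we talked to Cassandra through older interfaces):

from cassandra.cluster import Cluster

session = Cluster(["cassandra1.internal"]).connect("views")
rows = session.execute(
    "SELECT file_id, metadata FROM photos_by_user WHERE user_id = %s",
    (42,))
for row in rows:
    print(row.file_id, row.metadata)
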

All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.

You can see the code that wrote to and read from the temporary PG table if you branch the code and look under src/backends/txlog/.
The worker code, as well as the web UI, is not available yet, but it will be once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself.

After a few weeks of being coffee-deprived, I decided to disassemble my espresso machine and see if I could figure out why it leaked water while on, and didn't have enough pressure to produce drinkable coffee.
I live a bit on the edge of where other people do, so my water supply comes from my own pump, 40 meters into the ground. It's as hard as water gets. That was my main suspect. I read a bit about it on the interwebz and learned about descaling, which I'd never heard of. I tried some home-made potions but nothing seemed to work.
Long story short, I'm enjoying a perfect espresso as I write this.

I wanted to share with the internet people the bits that were hard to solve and that I couldn't find any instructions for. All I really did was disassemble the whole thing completely, part by part, clean everything, and put it back together, tightening everything that seemed to need pressure.
I don't have the time and energy to put together a step-by-step walk-through, so here are the two tips I can give you:

1) Remove ALL the screws. That'll get you 95% of the way there. You'll need a Phillips head, a Torx head, a flat head and some small-ish pliers.
2) The knob that releases the steam looks unremovable and blocks you from getting the top lid off. It doesn't screw off; you just need to pull upwards with some strength and care. It comes off cleanly and will go back on easily. Here's a picture to prove it:

As the pieces come together and we get closer to converging mobile and desktop in Ubuntu, Click packages running on the desktop are starting to feel like they will soon be a reality (Unity 8 brings us Click packages). I think it's actually very exciting, and I thought I'd talk a bit about why that is.

First off: security. The Ubuntu Security team have done some pretty mind-blowing work to ensure Click packages are confined in a safe, reliable but still flexible manner. Jamie has explained how and why very eloquently. This will further strengthen an OS that is already well known and respected as a safe place to do computing for all levels of computer skills.
My second favorite thing: simplification for app developers. When we started sketching out how Clicks would work, there was a very sharp focus on giving app developers more freedom to build and maintain their apps, while still making it very easy to build a package. Clicks, by design, can't express any external dependencies other than a base system (called a "framework"). That means that if your app depends on a fancy library that isn't shipped by default, you just bundle it into the Click package and you're set. You get to update it whenever it suits you as a developer, and you get predictability over how it will run on a user's computer (or device!). That opens up the possibility of shipping newer versions of a library, or just sticking with one that works for you. We exchange that freedom for some minor theoretical memory-usage increases and extra disk space (if 2 apps end up including the same library), but with today's computing power and disk space cost, it seems like a small price to pay to empower application developers.
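To make that concrete, a Click package's manifest declares a framework and some metadata, but no dependency list. It's along these lines (a rough illustration with made-up values, not a definitive reference):

{
    "name": "com.example.fancyapp",
    "version": "0.1",
    "title": "Fancy App",
    "description": "An app that bundles its own fancy library",
    "framework": "ubuntu-sdk-13.10",
    "maintainer": "Jane Dev <jane@example.com>",
    "hooks": {
        "fancyapp": {
            "apparmor": "fancyapp.apparmor",
            "desktop": "fancyapp.desktop"
        }
    }
}
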
Building on top of my first 2 favorite things comes the third: updating apps outside of the Ubuntu release cycle and gaining control as an app developer. Because Click packages are safer than traditional packaging systems, and dependencies are more self-contained, app developers can ship their apps directly to Ubuntu users via the software store without needing specialized reviewers to review them first. It's also simpler to carry support for previous base systems (frameworks) in newer versions of Ubuntu, allowing app developers to ship the same version of their app both to Ubuntu users on the cutting edge of an Ubuntu development release and to those on the previous LTS from a year ago. There have been many cases over the years where this was an obvious problem; ownCloud is the latest example of the tension that arises from the current approach, where app developers don't have control over what gets shipped.
I have many more favorite things about Clicks, some more are:
- You can create "fat" packages where the same binary supports multiple architectures
- Updates between versions are transactional, so you never end up with a botched app update. No more holding your breath while an update installs, hoping your power doesn't drop mid-way
- Multi-user environments can have different versions of the same app without any problems
- Because Clicks are so easy to introspect, and their proper confinement easy to verify, the review process has been easy to automate, enabling the store to process new applications within minutes (if not seconds!) and make them available to users immediately

The future of Ubuntu is exciting and it has a scent of a new revolution.

I'm a few days away from hitting 6 years at Canonical, and in that time I've ended up doing a lot more management than anything else. Before that I did a solid 8 years at my own company, doing everything from development, project management and product management to engineering management, sales and accounting.
This time of the year is performance review time at Canonical, so it's gotten me thinking a lot about my role and how my view on engineering management has evolved over the years.

A key insight I got from a former boss, Elliot Murphy, was to view management as a support role, there to help others do their jobs, rather than a follow-the-leader arrangement. I had heard the phrase "As a manager, I work for you" a few times over the years, but it rarely seemed true; it felt mostly like a nice concept to make people happy, not something applied in practice in any meaningful way.

Of all the approaches I've taken or seen, I believe the best one is a role where you're there to unblock developers more than anything else. And unless you're a bit power-hungry on some level, it's probably the most enjoyable way of being a manager.

It's not to be applied blindly, though; I think a few conditions have to be met:
1) The team has to be fairly experienced/senior/smart; if it isn't, I think the approach breaks down too often
2) You need to understand very clearly what needs doing and why, and invest heavily and frequently in communicating it to the team, both the global context and how it applies to each person individually
3) You need to build a relationship of trust with each person, and you need to trust them, because trust is always a two-way street
4) You need to be enough of an engineer to understand problems in depth when they're explained, to know when to defer to others' judgment (which should be the common case when the team is generally smart and experienced), and to be capable of tie-breaking in a technically savvy way
5) Anyone whose ego doesn't fit in a small, 100ml container should leave it at home

There are many more things to do, but I think if you don't have those five, everything else is hard to hold together. In general, if the team is smart and experienced, understands what needs doing and why, and people like their jobs, almost everything else self-organizes.
If it isn't self-organizing well enough, walk through those 5 points; one or several must be misaligned. More often than not, it's 2). Communication is hard, expensive and more of an art than a science. Most of the times things have seemed to stumble a bit, it's been a failure in how I understood what we should be doing as a team, or a failure in how I communicated it to everyone else as it evolved over time.
The second most frequent is 1), but that may vary depending on your team, company and project.

Oh, and actually caring about people and about what you do helps a lot. Then again, that helps a lot in life in general, so do it anyway regardless of your role.

Now that all the responsible disclosure processes have been followed through, I’d like to tell everyone a story of my very bad week last week. Don’t worry, it has a happy ending.

Part 1: Exposition

On May 5th we got a support request from a user who observed confusing behaviour in one of our systems. Our support staff immediately escalated it to me, and my team sprang into action for what ended up being a 48-hour rollercoaster ride that ended with us reporting a security bug upstream to Django.

The bug, in a nutshell, is that when the following conditions line up, a system could end up serving a request to one user that was meant for another (there's a small sketch of the correct behaviour right after the list):

- You are authenticating requests with cookies, OAuth or other authentication mechanisms
- The user is using any version of Internet Explorer or Chromeframe (to be more precise, anything with “MSIE” in the request user agent)
- You (or an ISP in the middle) are caching requests between Django and the internet (except Varnish’s default configuration, for reasons we’ll get to)
- You are serving the same URL with different content to different users
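For context, the to-spec behaviour on the application side looks something like this minimal sketch (render_files_for is a hypothetical helper; the point is the "Vary: Cookie" header, which tells caches to store a separate copy per session):

from django.http import HttpResponse
from django.utils.cache import patch_vary_headers

def my_files(request):
    body = render_files_for(request.user)  # hypothetical helper
    response = HttpResponse(body, content_type="application/json")
    # Caches in the middle must key on the cookie, not just the URL.
    patch_vary_headers(response, ["Cookie"])
    return response
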

We rarely saw this combination of conditions because users of services provided by Canonical generally have a bias towards not using Internet Explorer, as you'd expect from the company that develops the world's most used Linux distribution.

Part 2: Rising Action

Now, one may think that the bug is obvious, and wonder how it went unnoticed since 2008, but this really was one of those elusive "ninja bugs" you hear about on the Internet, and it took us quite a bit of effort to track it down.

In debugging situations such as this, the first step is generally to figure out how to reproduce the bug. In fact, figuring out how to reproduce it is often the lion’s share of the effort of fixing it. However, no matter how much we tried we could not reproduce it. No matter what we changed, we always got back the right request. This was good, because it ruled out a widespread problem in our systems, but did not get us closer to figuring out the problem.

Putting aside reproducing it for a while, we then moved on to combing very carefully through our code, trying to find any hints of what could be causing this. Several of us looked at it with fresh eyes so we wouldn’t be tainted by having developed or reviewed the code, but we all still came up empty each and every time. Our code seemed perfectly correct.

We then went on to a close examination of all related requests to find new clues about where the problem was hiding. But we had a big challenge with this: as developers, we don't get access to any production information that could identify people. This is good for user privacy, of course, but it made it hard to produce useful logs. We invested some effort in working around this while maintaining user privacy, by creating a way to anonymise the logs that would still let us find patterns in them. This effort turned up the first real clue.

We use Squid to cache data for each user, so that when they re-request the same data, it's right there in memory and can be quickly served to them without having to recreate the data from the databases and other services. In those anonymized Squid logs, we saw cookie-authenticated requests that didn't contain an HTTP Vary header at all, where we expected them to have at the very least "Vary: Cookie" to ensure Squid would only ever serve the correct content to each user. So we then knew what was happening, but not why. We immediately pulled Squid out of the middle to stop it from happening.

Why was Squid not logging Vary headers? There were many possible culprits, so a *lot* of people got involved searching for the problem. We combed through everything in our frontend stack (Apache, Haproxy and Squid) that could sometimes remove Vary headers.

This was made all the harder because we had not yet Juju-charmed every service, so we could not easily access all the configurations and test theories locally. Sometimes technical debt gets really expensive!

After this exhaustive search, we determined that nothing in our code removed headers. So we started following the code up through Django's middlewares, and went as far as logging the exact headers Django was sending out at the last middleware layer. Still nothing.

Part 3: The Climax

Then we got a break. Logs were still being generated, and eventually a pattern emerged: all the initial requests that had no Vary headers seemed, for the most part, to be from Internet Explorer. It didn't make sense that a browser could remove headers returned from a server, but knowing this took us to the right place in the Django code, and because Django is open source, there was no friction in inspecting it deeply. That's when we saw it.

In a function called fix_IE_for_vary, we saw the offending line of code.

del response['Vary']

We finally found the cause.

It turns out IE 6 and 7 didn't implement the HTTP Vary header fully, so there's a workaround in Django that removes it for any content that isn't HTML or plain text. In hindsight, if Django had implemented this as a middleware instead, even one enabled by default, it's more likely it would have been revised earlier. Hindsight is always 20/20 though, and it's easy to sit back and theorise about how things should have been done.
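Paraphrasing from memory, the workaround was shaped roughly like this (a sketch, not the verbatim upstream code):

# Rough paraphrase of Django's old IE workaround, not the exact code:
# for MSIE user agents, strip the Vary header from any response that
# isn't HTML or plain text.
_SAFE_TYPES = ("text/html", "text/plain")

def fix_IE_for_vary(request, response):
    if "MSIE" not in request.META.get("HTTP_USER_AGENT", ""):
        return response
    content_type = response.get("Content-Type", "").split(";")[0].strip()
    if content_type not in _SAFE_TYPES and response.has_header("Vary"):
        del response["Vary"]  # the offending line
    return response
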

So if you’ve been serving any data that isn’t HTML or plain text, with a caching layer in the middle that implements Vary header management to spec (Varnish doesn’t trust the header by default, and checks the cookie in the request anyway), you may have served a response to the wrong user.

Newer versions of Internet Explorer have since fixed this, but who knew in 2008 that IE 9 would only come 3 years later?

Part 4: Falling Action

We immediately applied a temporary fix to all our running Django instances in Canonical and involved our security team to follow standard responsible disclosure processes. The Canonical security team was now in the driving seat and worked to assign a CVE number and email the Django security contact with details on the bug, how to reproduce it and links to the specific code in the Django tree.

The Django team immediately and professionally acknowledged the bug and began researching possible solutions, as well as any other parts of the code where this scenario could occur. There was continuous communication among our teams for the next few days while we agreed on lead times for distributions to receive and prepare the security fix.

Part 5: Resolution

I can’t highlight enough how important it is to follow these well-established processes to make sure we keep the Internet at large a generally safe place.
To summarise, if you’re running Django, please update to the latest security release as quickly as possible, and disable any internal caching until then to minimise the chances of hitting this bug.

If you're running Squid and want to check whether you could be affected, we put together a small Python script to run against your logs that you can use as a base; you may need to tweak it based on your log format. Be sure to run it only against cookie-authenticated URLs, otherwise you will hit a lot of false positives.
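That script isn't reproduced here, but a starting point in the same spirit could look like this (a hedged sketch: the field positions, and whether a quoted user agent is logged at all, depend entirely on your configured log format):

#!/usr/bin/env python
# Flag Squid cache hits served to MSIE user agents: the combination at
# risk if Vary was stripped. Adjust the parsing to your log format.
import sys

for line in sys.stdin:
    fields = line.split()
    if len(fields) < 4:
        continue
    # In the default format, field 4 is the result code, e.g. TCP_HIT/200.
    if not fields[3].startswith("TCP_HIT"):
        continue
    parts = line.split('"')
    user_agent = parts[-2] if len(parts) >= 3 else ""
    if "MSIE" in user_agent:
        sys.stdout.write(line)

Something like zcat access.log.*.gz | python check_vary.py > suspects.log gives you a list of suspect requests to inspect by hand.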

This week has been bitter-sweet. On the one hand, we announced that a project many of us had poured our hearts and minds into was going to be shut down. It’s made many of us sad, and some of us haven’t even figured out what to do with our files yet.

On the other hand, we’ve been laser-focused on making Ubuntu on phones and tablets a success; our attention has moved to making sure we have a rock-solid, scalable, secure platform that is pleasant to use for developers and users alike. We just didn’t have the time to continue racing against other companies whose only focus is file syncing, which was very frustrating as we watched a project we were proud of fall behind. It was hard to keep feeling proud of the service, so shutting it down felt like the right thing to do.

I am, however, very excited about open sourcing the server-side of the file syncing infrastructure. It’s a huge beast that contains many services and has scaled well into the millions of users.

We are proud of the code that is being released and in many ways we feel that the code itself was successful despite the business side of things not turning out the way we hoped for.

This will be a great opportunity for those of you who’ve been itching to have an open source service for personal cloud syncing at scale; the code comes battle-tested and with a wide array of features.

As usual, some people have taken this generous gesture “as an attempt to gain interest in a failing codebase”, which couldn’t be more wrong. The agenda here is to make Ubuntu for phones a runaway success, and in order to do that we need to double down on our efforts and focus on what matters right now.

Instead of storing away those tens of thousands of expensive man-hours of work in an internal repository somewhere, we’ve decided to share that work with the world, allowing others to build on top of it and benefit from it.

It’s hard sometimes to see some people trying to make a career out of painting everything Canonical does as inherently evil, although at the end of the day what matters is making open source available to the masses. That’s what we’ve been doing for a long time, and that’s the only thing that will count in the end.

So in the coming months we’re going to be cleaning things up a bit, trying to release the code in the best shape possible, and working out the details of how best to release it so it’s useful to others.

All of us who worked on this project for so many years are looking forward to sharing it and look forward to seeing many open source personal cloud syncing services blossoming from it.

Following up on the discussion opened up by Colin Watson on ubuntu-devel and further discussions at vUDS, we’ve created a public mailing list to continue exploring and coordinating all the work around the new packaging format and changes needed to the surrounding systems.

Since we didn’t want to block on having everything cleaned up, some documents referenced in the mailing list may not be publicly visible yet. Apologies in advance while we slowly move them over to be accessible by everyone. We decided to take a pragmatic approach here instead of blocking until everything was perfect, so the discussions could all happen in public.

So, I've been around the Ubuntu community for a while. I installed 4.10 (Warty Warthog) as soon as it came out; I was fighting to keep my Debian installation usable at the time. I instantly fell in love and dove into the community; I wanted to do whatever I could to make the project succeed. It was exactly what I was looking for. At the time, Canonical was also shipping CDs to anyone who wanted them, which gave the project a much more professional feel.
And the focus Mark set for the project turned out to be the right one: it very quickly converted thousands of open source enthusiasts, and a solid, technically capable community started to be built around it. Soon enough, with the focus laser-sharp on making Ubuntu as usable as possible, non-technical folks started to show up: people who were Windows users but were tired of it and looking for something better. These people gave our project an awesome foundation for support (once they figured out how to make certain things work, they'd immediately help the next person who came along with the same problem). Translations grew, since it was a great way for a non-technical person to help. Documentation grew. Advocacy, communication, marketing, you name it, it was all growing.
As things moved forward, there were some tough decisions to be made. I remember when Compiz came around: it was very immature and almost guaranteed to break your system; just have a quick read through the Slashdot comments! You could very easily replace the word "compiz" with "unity" when it was first introduced and you'd have most of the same comments that went around back then.
But it was the right choice. The hard and unpopular choice. We, the community at large, mostly wanted a stable system. Mark and Canonical were pushing to mature the technology so awesome things could be built on top of it. It was the same story for Pulseaudio, the same for binary drivers; we've been here before, over and over. Change is very hard, and a lot of it feels wasteful. Nobody wants to waste their free time; you want to make it count.

As for where we stand today, I first want to be clear that my initial reaction to the flood of changes being proposed upset me as well. A lot. I laid low for a while so I could clear my head and understand what was going on before reacting. When the Rolling Releases proposal came out, I read the email on ubuntu-devel (which, by the way, is where I read about it; there was no internal Canonical "announcement") and I was frustrated with how it was being presented. It felt like Canonical imposing whatever they wanted, bulldozing over the community. How could Rick do something like that? He's a smart and well-intentioned person; this isn't the smart thing to do. I started writing an angry email to the Community Council, and as I did, I stopped to re-read the original email so I could rant with specific references. When I did, I couldn't believe my eyes. The email was clearly stated as a proposal, open to discussion, with quite a bit of work done beforehand, and it ended with:

"Such a change needs to be discussed in the Ubuntu community. Therefore, I asked my team to put together a strawman proposal for how such moving to a monthly cadence with rolling release might work."

Go ahead, read it yourself. As a long-time member, my gut feeling is that in the past this would have been presented to the Technical Board first to be discussed, and then a wider conversation would be had. But the reverse actually makes more sense to me: have a wider conversation first, then bring it to the Technical Board.
So I deleted my email and started all over again. I explained how I was feeling rather than ranting about things that apparently didn't happen as I had imagined them, and just admitted that I no longer knew where we were as a project and needed to talk it out a bit.
So we did. We talked, vented, ranted, looked at the positive side of things, the negative, remembered the past, imagined the future.

The way I see things now is that the project has changed. But this was the path all along; it should have been more obvious. First we won the Linux distro user base, gained support, a community, and a clearer focus on what less technical people wanted, and it felt great. People were moving to Ubuntu left and right, first on the desktop, then the server migrations came along with it. But that was not the goal. The goal was (and I quote from bug #1): "Our work is driven by a belief that software should be free and accessible to all." The "all" part of that is the key. That's why we made the desktop slow and buggy for a while to introduce compiz, even though it didn't really fill any need for technical users. Same with Unity, same with Pulseaudio, same with the Ubuntu font, same with shipping free CDs to anywhere in the world.
So as we progressed towards our goal, technical users felt more and more distant from what was changing, because they were no longer the primary user. It makes the "scratch your own itch" part of free software a bit harder. In exchange, I started to meet taxi drivers who were Ubuntu users, musicians, graphic designers, writers. I'd see Ubuntu out in the wild in the strangest places.

And now the world has changed. It no longer seems like making computing available as free software to everyone can be accomplished with just a great desktop. Mobile phones and tablets are where most people's time seems to be shifting. It's a multi-device world, and it's here to stay. If we want to fix bug #1, we now need to change tactics and tackle the full story. There seems to be a window of opportunity for us as a project right now, and I don't think we'll get many more of these. It feels like a now-or-never kind of moment, and I can't imagine having invested most of my energy of the last 8 years in something that fades away into a niche market. That's not what I set out to help do.
It's going to be a bumpy ride for a while; we need to move fast, and speed is not one of the easiest things to achieve when you need to find consensus across many different people, timezones, interests, goals, agendas and languages. I don't see what other choice we have than to rise to the challenge and find a way to make it work.

Speaking from a purely personal point of view, I think Canonical will need to push harder for changes in processes, tools, libraries and focus. I also happen to think Canonical has done a poor job of presenting and driving these changes. Not for lack of trying to do the right thing; it's just really hard to do. Stress, pressure, deadlines, partners, confidentiality agreements, private negotiations, business deals to ship Ubuntu on millions of devices: it all sets you up to rush and get things done as quickly as possible. That's how the market works. But when you're not immersed in all of that, from the outside, it just looks slightly evil and a bit like bullying.
I think Canonical can and will do better. It has to; I feel the survival of the company partially depends on it.

One thing to remember, though, is that free software is very much like evolution: survival of the fittest. This means trying out many different things, and the best ones overall survive and thrive. Competition is essential. The fact that Canonical is putting more free software projects out there is the best thing that can happen to the movement, no matter how many times you yell that you know for a fact that if the same effort were spent on an existing project everything would be better. If that were true, there would be one Linux distro, period.
As long as it's free software and Canonical is shoveling code into it, that's what counts at the end of the day: working, maintained code. Don't forget that. If Canonical is wrong about, let's say, investing in Mir being a better bet than investing in Wayland, ultimately it's Canonical's money. If it's done in a way that draws developers to help, it'll be cheaper and happen faster. It's a win-win. The fact that they are betting on free software no matter what is what counts.

So I think it's time. In many ways this feels like the last big battle. We fought and won a lot to get here; it's now time to win or lose the war.

There seems to be quite a bit of buzz around Yahoo! effectively laying off remote workers (making them choose to start going to an office or resign), and I've read different perspectives on the subject, for and against remote working.
Having worked at Canonical for over 4 years, and in open source projects for quite a bit longer than that, my knee-jerk reaction is that the folks crying out that remote working just isn't as productive as working in an office are being pretty short-sighted.
Canonical has hundreds of employees working remotely, far more than working in an office, and it seems like we're generally a very productive company. We take on huge competitors who have ten times the number of people working on any given project, and we put up a pretty good fight. So I can tell you remote working is full of awesome, both for the company (productivity, a huge pool of talent to choose from) and for the employee (no commute, fewer distractions).
I also think that the fact that open source projects are taking over the world at an incredible pace is a pretty huge testament to just how great remote working can be. They're even an extreme case, where people aren't necessarily available on a regular schedule, held together by much tighter and clearer shared goals.

All that said, there are several ways things can go wrong with remote working.

Thoughtlessly mixing remote and co-located teams. All-remote and all-co-located tend to work out more easily. Mixing the two without a clear plan for how communication is going to work is most likely going to end badly. The co-located team will tend to talk to each other in the hallways and not bring the remote people into the loop, mostly because of the extra cost of communication. If making decisions in person is accepted, and there are no guidelines in place for documenting them and opening up the discussion to the full audience, then it's going to fail. Remote or not, documenting these things is good practice anyway: it provides traceability and leaves less room for people to walk away with different interpretations.

Hiring remote workers who are not generally self-directed. I can't stress this point enough. Remote working isn't for everybody. You have to make sure the people who are working remotely are generally happy making decisions on their own on a daily basis, can push through problems without a lot of hand-holding, and are good at flagging problems when they see them. These kinds of people are great to have on site as well, but in a remote situation this is a non-negotiable skill.

Unclear goals as a team or company. If what people are supposed to be doing isn't crystal clear to everybody involved, remote working is going to be very messy. Strongly self-directed people are going to push forward with what they think is the right thing to do (based on incomplete information), and less independent people are going to be reading a lot of RSS feeds.

I also think there are some common sense arguments against remote working that are actually an argument in favor of it.

Slackers will slack harder when at home. So, if you're at home, who's going to know whether you spent your morning watching TV or thinking about a really hard problem? When you're at the office, it's much easier to check up on how people are using their time. But I think that if you have an employee whose use of time you need to check up on, you have a problem. The answer is not to put them in an office and have them learn to alt-tab very quickly to an IDE when you walk by. You should be working with them to make sure their performance is adequate. If it's not, and you can't seem to find a way around it, fire them. Keeping someone around and force-feeding them work is a huge waste of time and money. Slackers are going to slack harder at home; use that to your advantage to more quickly weed out people who aren't up to the task or don't care anymore.

Communication is more expensive. It is. It also forces people to learn to communicate better, more concisely, and in a way that's generally documented. While you can easily have calls, in the end you need to email a list or use some form of communication that reaches everybody. So there's a short-term cost for a long-term benefit. You may need that short-term benefit right now, in which case you bring people together for a week or two, spend some of the money you've saved on infrastructure, and push things forward.

So, in general, I think having remote workers forces a company to have clearer, well-communicated goals and better documentation of decisions; hiring driven, self-directed people makes you think long and hard about your processes; and it opens you up to hiring from a much larger pool of people (all over the world!). I think those are great things to have pressuring you consistently, and they will make you a better company.
Like everything else, if you have remote workers and pretend they are the same as co-located ones, it's going to fail.

12.10 is out; how awesome is that? Go ahead and get it if you haven't yet. I upgraded all my computers months ago and they've been stable, receiving polish and new features almost every day since. How awesome is that? It has tons of new features that put closed-source competitors to shame; how incredibly awesome is that!? It looks nicer, it works faster on my slower machines, and a lot of the small bugs in 12.04 have magically gone away. Awe-some.

Then, as if things couldn't seem better in a project nearing its 10th year of attempting to take over the world in a lot of very literal ways, Mark spontaneously decides to take on more financial risk by further opening up the current skunkworks projects Canonical works on, and what happens? A lot of crap gets thrown his way. How insane is that?

I can understand competitors taking the opportunity to spin this as a bad thing, highlighting the fact that there are such projects at all, and how project X or Y is 100% open and pure (although maybe not as successful). Then there are the usual Ubuntu trolls, folks who are bitter about Ubuntu being successful in the format it adopted, blending commercial and community development in a unique way that requires a constant balancing act. They were betting on Ubuntu failing and they hate that it hasn't. They hate that for a huge number of people "Linux" actually means "Ubuntu". They also hate that there are millions of people who don't even know (or care) what Linux is, and happily use Ubuntu. That's fine, this is how life works; let them be bitter.
But I cannot understand strong, long-time Ubuntu members and contributors bashing Mark, Canonical or Ubuntu. It feels very disconnected from reality.
I can understand that Unity sucked, everybody hated it and it made everything slow. It doesn't any more. In fact, it's crazy fast, crazy stable, and it sets us apart from everybody else by a very long stretch. In some areas we leap-frogged a worthy competitor like Apple, and in many cases we even forgot about Windows, our bug #1. This happened with many things: compiz, pulseaudio, empathy, you name it. Those sucked too, but ultimately rocked. For us, and for the rest of the open source ecosystem.

And yes, now you can purchase things from the Dash. It'll offer up items even though you maybe weren't looking to buy something, just opening your email. But it helps the project; it helps fund the very things that make Ubuntu different from everyone else, because we get to invest an enormous amount of money in user testing, design, custom engineering and closing deals with OEMs, so Ubuntu ends up in the hands of millions of new users every year. I have an unfair advantage over most of you, since I've worked at Canonical for over 4 years now and have seen a lot of what it costs in terms of actual dollars. It's not that hard to imagine, though: flying hundreds of people across the globe every 6 months to get together, work, and make it feel more like a community costs, by any simple math, hundreds of thousands of dollars. That is a lot of money. And when you complain that a feature you can ultimately disable bothers you and should be removed (or disabled by default, cutting off any real chance that it'll generate significant revenue), take a minute to consider that you're telling Mark he should take that money out of his own pocket instead, just so you can feel more comfortable with yourself. I can empathise with people immediately thinking of all the terrible examples of OEMs bundling adware with their computers, annoying people to no end just to squeeze every single penny out of each user and bump up their stock. But this is not the same: Mark has been crystal clear that a lot is being developed to make this a fantastic experience, and I have inside knowledge to vouch for that. It is also all free software, it has been for almost 10 years, consistently, and there are no signs of that changing. In fact, I started writing this because Canonical is trying to make the few bits that aren't fully permeable to the community more open. How fucking awesome is that?

Last week we organized a local Ubuntu conference in Buenos Aires, Argentina, which we plan on making a regional conference from now on thanks to the help of our friends in the Uruguay LoCo. The conference was great, but by far what stayed with me was a talk, and some subsequent conversations, with Guillermo Espertino about how a new-ish and small group of designers who use open source software to design professionally had gotten together and started a community called Gráfica Libre. Individually, these designers do some very amazing things. As a group, they've blown my mind.

These are designers who are using 100% no-excuses free software on a daily basis to design and ship professional designs to customers.

These are some of the things they've designed as a group for the conference:

The video was edited by Guillermo Espertino and the 3D animation was done by Martin Eschoyez. The Blender source files are available on his website.

There's a presentation given by Guillermo Espertino (you can see the work his company does with open source on their website, http://ohweb.com.ar/) which you can download (it's in Spanish, though); it highlights the challenges they've faced so far in putting together designs in the open and collaboratively. They still feel they have a few iterations to go until they have a settled process, but it certainly looks to me like they've cracked the hardest part.

While a lot of you are at UDS, several Latin American LoCos are working hard to organize a local Ubuntu conference.
Things are going really well; we're 4 weeks away, but we're a little short on funds. Every year the same people who organize the conference end up having to pay for many things themselves despite having a few generous sponsors, so this year I'd like to change that. I've set up a small but valuable fundraising campaign, and we could really use your help.
The site is in Spanish, so it may take a bit of blind surfing to get around, but it should be fairly easy once you've been sent to PayPal.

This June 1st and 2nd we will be holding an all-Ubuntu conference in Argentina for the second time, with plans to make it regional from now on (the next one is in Uruguay!).
Even though it's in Spanish, I'd like to open up the Call for Papers here on Planet Ubuntu as well, in case anyone reading is close by.

0 A.D. is an awesome cross-platform game that is fun, has stunning graphics and is completely open source.
There's even a PPA for Ubuntu.
It works wonderfully on both my laptops.

They are looking for a round of donations to pay for some more development work, and as of this moment they're $634 USD short. I've just sent $50 their way.
If you've got a few bucks to spare, please send some money their way. Or maybe you want to get into some development work, they have detailed instructions on how to do just that!

This release is probably the most important of them all. We're releasing an LTS that will be supported for 5 years; that means it'll be around until 2017!
Different people will help make it awesome in different ways, but one thing we can all help with is upgrading to Precise today. And I do mean today. I've upgraded all my computers, including my work laptop, and it's all generally running smoother than 11.10. And if it isn't, file a bug with the relevant information; that's what you upgraded for.

So if you've been unsure about upgrading, please take the plunge and help out in making 12.04 a rock-solid release.

During the Community Council meeting yesterday we were talking about the general health and excitement levels of the community, and whether we were losing a lot of members. I had a vague memory of us (Canonical) having an internal graph of the number of members in the ~ubuntumember team, and I dug it up to see what story it told. As it turns out, it's a very positive and healthy one \o/
Here's the graph of number of Ubuntu members over time (there's no data prior to Sept 2007):

Note that the curve really starts to go up around May 2008; that's when the membership boards took over member approvals from the Community Council.

So, it seems my nomination to the Community Council has been accepted \o/ It caught me a bit by surprise, so I'm struggling to add information to my wiki page again (it's been 4 years since I last touched it!).
The current list of nominees is awesome, so I'm very happy that no matter what the results are it's going to be a great board.

I wanted to share with everybody why I'd like to fill this position at this point in time, so you know what you're voting for.

My main concern right now is the decrease in motivation I've seen in some parts of the community, which is counter-intuitive because there's more to do today than ever.
I'd like to get to the bottom of why this is happening and turn it around. I want to find new, exciting and clearly articulated goals for us to achieve and continue working on all the delicate balances we have between upstreams, Canonical, and Ubuntu.

I'd also like to find ways to more clearly document the different uses people have for Ubuntu, and make sure either that the default install addresses them or, when that's impossible, that communities form around spinning off the needed changes into their own thing to keep people productive and happy.

These are ambitious and hard things to do, but that's the case for most things worth doing.

P.S. I'm going to be on a plane from London to Buenos Aires when the results get announced!

Besides letting you access all your files stored in Ubuntu One, it has a very cool feature to auto-sync all the pictures on your phone, giving you an instant backup of them and a convenient place to share them!

A very healthy and civilised session about switching to Thunderbird by default just ended here at the Ubuntu Developer Summit, and the outcome was that if the Thunderbird developers manage to do some needed work (to be defined) by a certain time in our cycle (to be defined), we will ship Oneiric, and more importantly the 12.04 LTS, with Thunderbird by default.

The bits I can remember that need to be done are:
- Evolution data server integration
- Tighter integration with Unity
- Shrink the overall size of the application so it fits on the CD
- A good upgrade story
- Migration plan for Evolution users

We will also make sure it ships with integration with contacts in Ubuntu One, thanks to James Tait's head start with the Hedera project.

I'm a big fan of Thunderbird, so I'll be doing my best to help them achieve their goals.

In the last few months, I've been lucky enough to hire some exceptional people who were contributing to Ubuntu One in their free time. Every time someone comes in from the community, filled with excitement about being able to work on their pet project full time, my job gets that much better.
So, everyone say hello to James Tait and Michał Karnicki!

Now we're looking for a new team member to help us make the Ubuntu One website awesome. Someone who knows CSS and HTML inside out, cares deeply about doing things the best way possible and is passionate about their work.

If you're interested or know anyone who may, the job posting is up on Canonical's website.