Back when everyone had an xorg.conf and a lot of the X bugs were due to
malformed config files, we would collect the files and run them through
checkers. Brian Murray ran a bot that would test the xorg.confs posted
in Launchpad bugs, for instance, and found quite a few issues that way.
These days xorg.confs are rare and hardly anyone hand-edits them, so we
don't do this anymore (although maybe we should...)
But the concept could be applied more generally for config files of all
sorts.
The most effective thing you can do (IMHO) is create an apport hook for
every application that installs a config file; the hook looks for and
attaches that file when a user files a bug via 'ubuntu-bug <package>'.
The hooks are trivial Python scripts, basically just:

from apport.hookutils import *
from os import path

def add_info(report, ui):
    if ui.yesno("Would you like to include your ~/.myconf?"):
        attach_file_if_exists(report, path.expanduser('~/.myconf'), 'MyConf')

(Note the two-argument form of add_info: apport only passes the ui
object, which the yesno prompt needs, to hooks that accept it.)
[You can omit the yesno prompt if you're absolutely certain there's no
chance of sensitive info in the config file, or if you programmatically
strip out or hide that info.]
Copy your hook to /usr/share/apport/package-hooks locally, and test it
out yourself. Once you have it working the way you think it should,
post it to the package's bug tracker (ubuntu-bug <package>), and make
sure to mark it as a patch. A package maintainer will review and
incorporate it into the package.
Next, wait, and let a nice database of config files accumulate in
Launchpad.
Then, run a launchpadlib script to download all the configs. Weed out
all the dupes. Keep track of the package version.
Now make a test that iterates through all the config files, launching
the program, and testing if it successfully loads or exits/crashes.
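The dedupe-and-launch steps could be sketched like this (the command
template and helper names are illustrative, not an existing tool):

```python
import hashlib
import subprocess
from pathlib import Path

def dedupe_configs(paths):
    """Keep one representative path per unique file content."""
    seen = {}
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        seen.setdefault(digest, p)
    return list(seen.values())

def loads_cleanly(cmd_template, config_path, timeout=30):
    """Launch the program against one config; True if it exits 0, or is
    still running (e.g. a daemon or GUI) when the timeout expires."""
    cmd = [arg.replace("{config}", str(config_path)) for arg in cmd_template]
    try:
        return subprocess.run(cmd, capture_output=True,
                              timeout=timeout).returncode == 0
    except subprocess.TimeoutExpired:
        return True  # still alive after the timeout: assume it loaded fine
```

A run over the collected corpus is then just a loop over
dedupe_configs(...) calling loads_cleanly(...) and recording the
failures, together with the package version each config came from.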
If the app has a test suite, bonus, run that too.
Finally, go file bug reports (both in Launchpad and upstream) for the
failures that occur.

For 12.10 we will switch the ARM images from preinstalled images to normal live images.
This session is not for discussing the technical bits of this change, but to get all the
stakeholders using the preinstalled images together to discuss the possible impact of this move.

Hey,
Not sure how much we need to discuss but it's always good to have a GNOME checkpoint session.
It's likely that this cycle we will not "hold back" the things we have kept behind until now, which means we need to bring Clutter onto the CD and work out how to do that and what it means (do we need extra testing on some platforms during the cycle? how will it work for people without working 3D? etc.).
Some other desktopish topics I would like to discuss; not sure if that's the right session, but since we will probably have time in this one:
- our delta with upstream and Debian, and how we could lower it. mpt suggested that the "launchpad-integration" items are quite "geeky"; they also create most of our diff over Debian and extra work, and don't really "scale" since they require patching sources. Maybe it's time to discuss dropping them?
- tools, though UDD didn't change a lot, so I don't think the consensus will be any different from what it was in other cycles
- whatever other topics you guys come up with ;-)

OpenLDAP's new MDB library provides basically unlimited scaling for reads, high-speed writes, and extremely efficient memory use. It has already been ported into a full OpenLDAP backend, a Cyrus SASL sasldb backend, a Heimdal hdb backend, and an SQLite backend, with dramatic resource savings and performance gains yielded in each case. Work is also underway to provide a Perl DB module, and other projects such as OpenDKIM are now adopting it. With the prevalence of apps dependent on SQLite in Android and other mobile platforms, and the order-of-magnitude efficiency gains from MDB, the potential for battery savings and extended runtimes on mobile devices is significant. What other apps/tools should we explore for MDB adoption?

Upstart currently considers a service "ready" (fully initialised) once:
- [Services] The process has forked the expected number of times (0-2)
- [Tasks] The process has been exec'd successfully
For daemons therefore, "service readiness" is inextricably linked to the
overloaded 'expect' stanza which is also used for PID tracking.
The problem is that some services (such as cups) are _not_ ready once they have forked 'n' times.
The proposal is to introduce a new 'ready on' stanza coupled with a 'ready' event that would allow explicit control over when Upstart deems a service to be in a usable state:
http://people.canonical.com/~jhunt/blueprints/upstart-service-readiness-table.html
Summary:
- No change to existing 'expect' behaviour.
- If no 'ready on' condition specified, 'ready' event emitted immediately
after 'started'.
- If 'ready on' condition specified, 'ready' event emitted if and when
condition becomes true.
- 'ready' event can optionally be used by other services as a more
reliable way to know when a service is fully initialized and thus usable.
Observations:
- possible to specify multiple values in 'ready on' condition such as:
"ready on (dbus NAME=org.bar.foo and file FPATH=/var/log/myapp.log and socket PROTO=inet PORT=80"
"ready on stopped myjob and started myjob2"
- upstart-socket-bridge will be retained, but with the advent of (C) it
will no longer be necessary to modify any daemons, as is required by
systemd for "socket activation".
Advantages:
- No change to existing 'expect' behaviour.
- Solves the readiness problem since .conf files would have a rich
palette of sources of readiness to choose from which should cover 99%
of all cases (udev, dbus, file, socket).
- More reliable behaviour.
- Would allow for simplification of jobs that currently fail to work
solely via ptrace (for example, see the gross hacks in /etc/init/cups.conf).
Work required:
- Finish (C).
- Implement (D) and (E).
- Modify upstart-udev-bridge to look at "ready on" job stanzas to allow
"ready on <udev-event>".
Concerns:
- (D) would need to be accepted into the upstream kernel.
- (D) would not currently work in LXC containers since netlink is effectively disabled (as it is not namespace-aware). Correct fix would presumably be to make netlink ns-aware?
- (D) ties this feature to Linux rather heavily
(*could* provide a very crude /proc/net/{tcp,udp} implementation but
performance would be poor as file must be continually re-read!)
- (C) would need to use inotify (or fsnotify, to avoid the complexity of
overcoming inotify's racy recursive watches), but could be ported to other
architectures (such as FreeBSD, using kqueue).
----------------------------------
Alternative idea (from apw): put the onus on the daemons to inform Upstart when they are ready.
This is in fact already possible using 'expect stop', where Upstart waits for the application to send itself SIGSTOP before considering it ready. It could be extended to obtain the PID directly via sigaction(2), avoiding the need to obtain it via ptrace(2). We could go a stage further and provide some sort of formal API, rather than a signal, to allow a daemon to indicate readiness (coupled with a utility command to do the same).
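The 'expect stop' handshake described above can be demonstrated in
miniature with a plain POSIX sketch (the helper names are illustrative;
this is not Upstart code):

```python
import os
import signal

def spawn_and_wait_ready(child_main):
    """Fork a 'daemon' and block until it raises SIGSTOP -- the same
    handshake Upstart's 'expect stop' uses -- then resume it."""
    pid = os.fork()
    if pid == 0:
        child_main()      # initialise, signal readiness, run the main loop
        os._exit(0)
    # WUNTRACED makes waitpid() also return when the child merely *stops*.
    _, status = os.waitpid(pid, os.WUNTRACED)
    if not os.WIFSTOPPED(status):
        raise RuntimeError("child exited before signalling readiness")
    os.kill(pid, signal.SIGCONT)   # the daemon is ready; let it carry on
    return pid

def fake_daemon():
    # ...expensive initialisation would happen here...
    os.kill(os.getpid(), signal.SIGSTOP)   # "I am fully initialised"
    # ...the real service loop would start here...
```

Note that the parent learns the child's PID from waitpid() here; the
sigaction(2) extension mentioned above would deliver it via siginfo
instead.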
Advantages:
+ simple.
+ puts onus on daemons rather than Upstart.
+ potentially removes the need to use ptrace for PID tracking.
+ if the API idea were selected, this could be used with SysV jobs too (by providing a NOP implementation for the traditional SysV init).
+ no kernel support required (so would map across to other systems such as BSD/Hurd, if desirable).
+ could be standardized as part of the LSB since it would be init-system-agnostic.
Disadvantages:
- daemons may ignore the standard behaviour.
- we would need to modify every daemon in the archive to work with this model.
- highly unlikely that commercial vendors would modify their products unless it were an approved standard.
- putting control in the hands of the daemons is not necessarily desirable: consider if they go haywire - Upstart would not be able to control the problem as it may not yet know the PID.

Application startup time is unnecessarily slow in a large number of
instances. Can we see some improvement in that area in the Q cycle? The
price of RAM has dropped dramatically, and usage has not increased all
that much. Can't we use it for something when it's available?
We now have Zeitgeist. This means we can know what users will do after
login. It's possible to tell not only what applications will be started,
but also what files will be used. In many cases, there's only a single
human user in the system. I would really like it if I could set my work
desktop to boot automatically in the morning, and it'd load my stuff
into RAM while waiting for me to log in. There are also a few websites I
always check first thing while I have my first cup of coffee. Load them
too so I don't have to wait for them. I'm the only human user on my
desktop, so why not log me in automatically, but in the background,
keeping the login screen as it is?
To my mind, these are all attainable goals:
* Sub-second login
* Instant loading of frequently used applications
* Zero-delay access to most frequently used websites.
Everyone is telling me to go buy a fast SSD. But that's expensive and in
my case, it doesn't provide any benefits that can't be achieved by
software. RAM is extremely cheap, and much faster than any SSD on the
market. What currently happens is that the login screen sits there
idling, waiting for me to pay attention to the computer before it starts
doing work it knows I'm going to want it to do. That's rude, isn't it?
In networked environments of diskless desktops, such as schools and
offices, the effects can be even greater. It might not be possible to do
background logins for the user, but a lot of things can still be loaded
in advance, providing a significantly improved experience. And of
course, the older the computers are, the greater the effect will be.

Hda-emu is a way to test kernel code for Intel HDA sound cards without having the hardware at hand. Discuss how to evolve this code into a regression test suite that we could run before, e.g., releasing proposed kernels, and how to integrate it into existing QA efforts (Jenkins etc.).

This blueprint is about analysing the damage done by the splitting of LibreOffice into separate packages, which was never a goal of the upstream project. Because of that, in some corner cases LibreOffice crashes or fails to perform because of missing parts when not installed completely. This is more severe than in other distros because we ship only a partial installation in the default install.
Examples include:
- document wizards not working in the default install (need java components)
- mailmerge not working in writer in the default install (needs libreoffice-base)
- some HTML-imports not working in writer in the default install (needs libreoffice-base)
- formfields/checkboxes not working in writer in the default install (needs libreoffice-base)
Fixing all of these upstream is an uphill battle, as new dependencies might be added under the radar by new features. So while a full LibreOffice install on Ubuntu is doing fine, the default installation on Ubuntu is casting a shadow on both Ubuntu and LibreOffice.

Discussion of improvements and enhancements to the Cloud Images and cloud-init.
* Addition of an SSH recovery shell: Depending on the virtualization solution (i.e. EC2 EBS versus EC2 instance-store versus OpenStack), the ability to recover from file system corruption is limited or non-existent. In order to support users across different virtualization solutions, it is proposed to introduce an SSH recovery method to assist users in recovering from file-system corruption or missing disks.
Proposal:
1. On failure of mount-all, or on cloud-init failing to mount all disks, an SSH
   recovery shell would be launched.
2. Users would be forced into a screen session with an error message.
3. Users would need to reboot.
* Dynamic multiple LOCALE support: While English is the language of Ubuntu development, the use of Ubuntu is global. Further, many Ubuntu users have default locale settings that are different. When SSH'ing into an Ubuntu Cloud Image, some software may fail to work properly with invalid locale settings set by SSH.
Proposal: Develop a method of compiling new locales based on the LC_* and LANG
settings sent by the SSH client.
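A minimal sketch of the locale-compilation idea, assuming some hook runs
at SSH login with the client's forwarded environment (the function names
are illustrative, not an existing tool):

```python
import re
import subprocess

LOCALE_RE = re.compile(r"^[A-Za-z]+_[A-Za-z]+\.[A-Za-z0-9-]+$")  # e.g. de_DE.UTF-8

def wanted_locales(environ):
    """Collect plausible locale names from the LANG/LC_* variables an SSH
    client forwards; C/POSIX and malformed values are ignored."""
    vals = {v for k, v in environ.items() if k == "LANG" or k.startswith("LC_")}
    return {v for v in vals if LOCALE_RE.match(v)}

def ensure_locales(environ):
    """Compile any requested locales that are missing (needs root).
    Caveat: 'locale -a' normalises names (UTF-8 -> utf8), so real code
    would have to normalise both sides before comparing."""
    have = set(subprocess.run(["locale", "-a"], capture_output=True,
                              text=True).stdout.split())
    for loc in wanted_locales(environ) - have:
        subprocess.run(["locale-gen", loc], check=False)
```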
* Improving methods for users to find official EC2 AMI ids: Over the last couple of months we have made significant progress in developing new ways for users to discover the official EC2 AMI ids. Between the AWS Quickstart, AWS Marketplace (free tier and paid support), cloud-utils (which provides ubuntu-cloudimg-query), cloud-images.ubuntu.com (/query and /query2) and cloud.ubuntu.com/ami, there are several official ways to find the images.
Discussion: What are the deficiencies in the current methods of finding images and
how could we make finding the official AMIs easier?

As we move towards new markets and challenges to satisfy those markets, it's time to re-examine how we've been doing things, and to start planning the longer-term infrastructure goals we want to have in place for the next LTS and beyond. The build infrastructure has evolved since it was started 8 years ago; several things work very well, while others could benefit from some brainstorming about what we'd do if we had a clean slate.

Recently it was announced (at http://wiki.ubuntu.com/ServerTeam/CloudArchive) that we will be backporting newer releases of OpenStack to precise in order to offer users a chance to use newer OpenStack features as they become available. The purpose of this blueprint is to figure out the process and track the work that needs to be done.

The XCP Toolstack is an open source server and cloud virtualization platform which provides a rich management API on top of the Xen hypervisor. The purpose of this blueprint is to discuss improvements to the Ubuntu XCP Toolstack that we wish to make during the Q-series development cycle. We would also like to discuss ideas for improving the interaction between the XCP Toolstack and other cloud and server management interfaces, such as OpenStack, CloudStack, and Juju.

Every architecture we ship has instruction supersets that cause incompatibilities with our baseline targets, for example:
- NEON on armhf
- AltiVec on powerpc
- cmov on i386
- SSE, 3DNow!, etc. on amd64
We need both to define a base ISA for each architecture and to sort out ways to continually scan for violations of it.
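One way to scan for violations is to disassemble built binaries and look
for mnemonics outside the baseline; a rough sketch (the blocklist
contents are illustrative, and real per-port lists would need careful
curation):

```python
import re
import subprocess

# Hypothetical per-architecture blocklists of mnemonics that lie outside
# the baseline ISA for that port.
BLOCKLIST = {
    "i386": re.compile(r"\bcmov\w*\b"),   # cmov/cmove/cmovne/... need i686+
}

def scan_disassembly(asm_text, pattern):
    """Return the disassembly lines whose mnemonic violates the baseline."""
    return [line for line in asm_text.splitlines() if pattern.search(line)]

def scan_binary(path, arch):
    """Disassemble a built object with objdump and scan the output."""
    asm = subprocess.run(["objdump", "-d", path],
                         capture_output=True, text=True).stdout
    return scan_disassembly(asm, BLOCKLIST[arch])
```

Running something like this over every binary in the archive after each
rebuild would catch supersets that creep in via compiler flags or
hand-written assembly.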

Java 7 was released last year and is now the primary development/support focus for both OpenJDK and Oracle.
Java 6 has only a limited support lifetime left.
We should transition all Java packages in the archive to OpenJDK 7 and endeavour to drop OpenJDK 6 from the archive.

In the interest of better parallelization, as well as better use of idle machine time, we'd like to move livefs building from an out-of-band affair to a launchpad-buildd-driven build job type. This has been architected a couple of times in the past and repeatedly not made it to implementation due to lack of time, but it really should be done soon, even if the work spans a couple of cycles.

Building upon the foundations of the initial Ubuntu App Developer site, we'd like to expand it in a second design phase.
Depending on the resource allocation from design, the scope might be reduced, so this blueprint is also for discussing the incremental improvements that can make a substantial impact and that we can work on next cycle.