ITSM stands for IT service management. ITSM is a discipline for managing information technology (IT) systems, philosophically centered on the customer's perspective of IT's contribution to the business. ITSM stands in deliberate contrast to technology-centered approaches to IT management and business interaction.
In order to differentiate ourselves from other orchestration products, we should offer at least an inventory management system (ocsinventory) and a request tracker (rt).

The Hardware Certification team routinely performs certification testing of Kernel SRUs for recent releases. The objective is to run a small test suite on every system that has been certified with stock Ubuntu and ensure no regressions have occurred. This complements the testing done by QA through the sheer range of hardware components and drivers exercised in the testing. The goal is to protect the Ubuntu user with a certified system from suddenly finding that something important doesn't work.
Currently the test suite consists of just over 20 tests, as defined at https://wiki.ubuntu.com/Kernel/kernel-sru-workflow/CertificationTestSuite. These tests are unlikely to fail unless there has been a major regression, and because the testing must be automated (essential given the volume of systems to be covered in a short period of time) there are no manual tests. As a result, even though regressions have been encountered in SRU updates, the certification testing process has not picked them up: some were bugs that could only be discovered through manual testing, and some were fairly obscure corner cases that the test suite simply didn't address.
The goal of this blueprint is to significantly increase the coverage of the SRU test suite within a few limits: first, all tests must be automated; second, there needs to be a sensible limit on the execution time of the test suite (to be discussed, but a guideline would be 30 minutes to 1 hour) so that SRUs can still be released in a timely manner. Each test should have as high a value as possible, so we must focus on the most important hardware functionality first. Wireless testing is non-existent, and testing of graphics and audio is not as good as it should be for such important pieces of functionality, so we intend to increase coverage in these areas at least. When considering new tests to add, we will look to keep the SRU test suite synchronized with the test suite used to certify systems. This means that new SRU tests should in the first instance be sourced from the certification test suite, and if no suitable test is found, any new test created should also be included in the certification test suite.
Initial thoughts on what needs to be tested are here: https://docs.google.com/document/d/1fHfOnnnVCXsSayz3XlaL9hgrl-uTBXHx7Ijbkc6aKTo/edit?hl=en_US
== Agenda ==
* Introduction and overview
* Wireless testing
* Video testing
* Audio testing
* Q&A/Comments

When preparing milestone releases, we often need to turn around fixes quickly. The pipeline from a developer source upload to a full set of updated image builds on all architectures is currently somewhere in the region of nine hours. We would like to make this much quicker.

The Ubuntu Engineering team in Canonical is committing a rotation of developers to work solely on development release maintenance for a month or two at a time, with the goal of keeping the development release buildable, installable, and upgradeable at all times in order to allow other developers to work with fewer interruptions. What should our priorities be? How should we organise ourselves? Given the alignment between this and e.g. the historical activities of MOTU, how can we build community participation or link into existing activities?

= Problem Statement =
At present, there are 101 SystemV services in the Precise main archive that have not yet been converted to Upstart jobs(*). This needs to be rectified.
= Important services that need conversion =
This is a selected list, but the following are important services that we could start with:
- rabbitmq
- open-iscsi
- bind9
- apache
- postfix
- puppet
- postgresql
- tomcat6
- memcached
= Rationale for Change =
- Upstart is our init system of choice (SystemV is considered legacy).
- Although Upstart does handle SystemV jobs, undesirable behaviour can and does result when dependencies exist between SystemV and Upstart jobs.
- Upstart jobs are easier to maintain than SystemV jobs.
- Upstart jobs are simpler than SystemV jobs.
- Upstart jobs place the burden of managing certain repeated tasks on Upstart, rather than requiring each SystemV service to re-invent the wheel (often badly).
- We wish to segregate SystemV jobs from Upstart jobs to optimize system shutdown.
- See http://upstart.ubuntu.com/cookbook/#critique-of-the-system-v-init-system
- To allow Upstart to be fully integrated into Debian, SysV services scripts *and* Upstart job files need to exist for a package.
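To illustrate the simplicity argument above, a minimal Upstart job for a hypothetical daemon (the service and binary names here are made up for illustration) replaces dozens of lines of SysV init script boilerplate:

```
# /etc/init/exampled.conf -- hypothetical service, for illustration only
description "example daemon"

start on runlevel [2345]
stop on runlevel [!2345]

# Upstart restarts the daemon if it dies; no hand-rolled pidfile logic needed
respawn

# Run in the foreground so Upstart can track the process itself
exec /usr/sbin/exampled --foreground
```

Note that a real conversion must preserve the service's start/stop ordering relationships, which is where most of the porting effort typically goes.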
= Proposal =
- Muster community interest in an effort to port the remaining SystemV jobs to Upstart: it's a great way to learn Upstart!
- Consider having an online sprint to concentrate on this activity for a few days(?)
- Identify individuals who can help out when questions arise (jamesodhunt, spamaps, vorlon, etc?)
- Leverage the work done in Fedora to migrate away from SystemV.
- Concentrate on the most popular services first.
- Review all Upstart jobs.
- Thorough testing required.
= Questions =
- Aside from time, what is slowing down the conversion activity?
- lack of examples? (We can blog and provide wiki examples)
- lack of familiarity with upstart job syntax? (We can hold education sessions)
- lack of ability to test the Upstart job versus the SysV service? (QA may be able to help here?)
- concerns over migration to "alternative init systems"? (there are no plans to switch)
- other?
= See Also =
https://blueprints.launchpad.net/ubuntu/+spec/foundations-o-upstart-convert-main-initd-to-jobs
(*) - down from the 122 SysV services in natty, so the number is falling slowly :)

We have many reports that help us keep track of automatically-detectable problems in the development release (FTBFS, NBS, component-mismatches, the conflict checker, the transition tracker, etc.). These are all well and good, but they are rather disconnected from each other and in many cases do not provide very good facilities for distributing work among developers. If we want to drive these reports consistently to zero, some time spent on infrastructure would be worthwhile. What can we do to improve matters?

Project Unify is a new tool designed to help integrate the Unity design bug workflows with internal and external upstreams and downstreams. If you are interested in solving Unity user interface bugs, or just in finding out more about how the different teams involved in developing the Unity user interface work together, come to this session.

Having playlists shared between music players and between machines would be a great idea: make a playlist in Banshee on one Ubuntu machine and it also shows up in Rhythmbox, in Banshee on your netbook, and in Ubuntu One music streaming. A standard format for playlists should be defined (m3u? pls? Something else?), along with details of where those playlists should be stored, how they should be synced, and how music players can automatically read and write them seamlessly.
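For reference, the extended m3u format mentioned above is just a plain-text track list, which makes it easy to sync and to parse; a shared playlist might look like this (paths and titles are illustrative):

```
#EXTM3U
#EXTINF:210,Some Artist - Some Song
/home/user/Music/some-artist/some-song.mp3
#EXTINF:184,Another Artist - Another Song
/home/user/Music/another-artist/another-song.ogg
```

The open question is less the format itself than how relative vs. absolute paths are resolved when the same playlist is read on a different machine.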

Some features were rolled out during Oneiric to make the weekly release meetings more efficient for tracking features and bugs of interest to the release team, but more improvement is needed.
Features via Topics in: http://status.ubuntu.com/ubuntu-oneiric/
Bugs of interest to release team in: http://reports.qa.ubuntu.com/reports/kernel-bugs/reports/rls-mgr-o-tracking-bugs.html
We would like to discuss which elements are useful for each team to share, and how the round table can be made more effective for the participants.

Apache Hadoop has gained widespread adoption; the various flavours of Hadoop appear to be consolidating and Cloudera have transferred a number of their Hadoop related projects to Apache including Bigtop (the Cloudera packaging for Redhat, Debian and SuSE).
Packaging Hadoop for Ubuntu would help support developing a set of rock solid Juju charms for Hadoop by providing a well integrated version of the packaging for Ubuntu.
Collaboration with Apache Bigtop would also potentially help support packaging the wider family of Hadoop related projects.

Community testing of accessibility during the development cycle. In most cycles, testing cannot start until the last month of the cycle or later. Testing needs to start as close to the beginning of the cycle as possible, with Alpha2 as the latest point at which testing can begin.

Today you can buy a computer preloaded with Ubuntu from many different vendors. The OS image used at the factory is often modified to provide better hardware support. It may also contain customizations such as additional bookmarks or applications. As a result of these customizations, the upgrade path to the next version of Ubuntu is not as clean as it could be.

Goals for the UDS-P cycle to help the Ubuntu Leadership Team establish relationships with formal Ubuntu Leadership Councils, Boards, and leaders throughout the community, in an effort to provide resources for mentoring new leadership and to encourage and motivate current leadership to become even more effective and efficient.

The current install experience for Ubuntu Cloud Infrastructure is less than optimal:
- lack of discoverability
- many steps
- lack of clean documentation
We need to identify what, when and how to improve this, specifically for Ubuntu Cloud Infrastructure and in general for products delivered via orchestra + juju.

We need to identify areas where the Ubuntu documentation is lacking, improve our own documentation of how to get involved, and advertise to recruit contributors to help fill in the missing documentation.

During 12.04 development we will strive to ensure that Ubuntu Desktop works each day so that everyone can reasonably make progress with their development goals, rather than being blocked by poor quality in different areas of the product.
The flow as I envision it would go:
1. The ISO is tested in the morning, European time
2. If the ISO is found to be acceptable, the QA team reports as such
3. If the ISO is found to be hard to use or test, the QA team reports as such
4. Ubuntu Engineering then investigates which package caused the breakage
5. The package that caused the breakage is reverted
6. The ISO is rebuilt and step 1 starts again

Charms need to be tested by an automated test runner on each commit to the charm store. There is a need to test each charm's use of each interface with any charms which implement the other side of the interface.

Discuss the current HA cluster stack and its usage in OpenStack, and finish by merging the newest upstream changes.
Additionally, discuss the adoption of Pacemaker Cloud [1].
"The Pacemaker Cloud project provides high levels of service availability for high scale cloud deployments. Our approach to high availability is to detect failures, isolate failures, followed by restart of the failed components. When repeated component failures occur the software escalates those failures into failures of higher level components."
[1]: http://pacemaker-cloud.org/

A bit of work was done late in the Oneiric cycle to make friendly recovery work better.
Now with the LTS coming up, it's time to fix some of the bigger issues for good and make sure the recovery mode will be useful and working for everyone.
Things to discuss include:
- Dealing with udev so important devices are initialized in recovery mode
- Properly initializing the network, either by using ifupdown or Network Manager
- Updating the plugins to work properly when the network isn't available, and giving the user a clue that they need to enable the network

Since Diablo, OpenStack has continued to grow at a rapid pace. As it will likely keep up this pace throughout the Essex & P cycles, we need to determine which components, functionality and configurations matter most to us, and how to test them. With Juju driving automated complex deployment testing, we should drill down on the specific areas of the stack that are critical to the success of Ubuntu Cloud LTS. We should also consider how our testing can make use of upstream's existing QA infrastructure and how our efforts can benefit the OpenStack community as a whole.

The QA tracker at http://iso.qa.ubuntu.com has been around for quite a while now.
It's in desperate need of some small changes to better work with the amount of testing Ubuntu requires nowadays.
This session is meant to discuss the most important changes needed to make the tracker work better for the LTS.
These changes should ideally be implementable very quickly, since the tracker starts being used with the first alpha.

Similar to the IPv6 session we had in Budapest.
Discuss what changed in Oneiric and what we want to focus on for Precise.
Things to discuss at least include:
- New ifupdown supporting dhcpv6
- Testing our most important server and client packages for IPv6 support
- Status of IPv6 support for Ubuntu core services like archive.ubuntu.com, archive.canonical.com, ntp.ubuntu.com, geoip.ubuntu.com, ... so we can have a perfectly working install in an IPv6 only environment
- Privacy extensions
- Dual-stack DHCP server support
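As a sketch of the first item above, the new ifupdown would let /etc/network/interfaces request a DHCPv6 lease alongside IPv4 with stanzas along these lines (the interface name is illustrative):

```
# /etc/network/interfaces -- hypothetical dual-stack configuration
auto eth0

# IPv4 via DHCP, as today
iface eth0 inet dhcp

# IPv6 via DHCPv6, relying on the new ifupdown's inet6 dhcp method
iface eth0 inet6 dhcp
```

For an IPv6-only install, the inet stanza would simply be dropped, which is exactly the scenario the core-services item above needs to support.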