Planet Ubuntu

Well you better listen my sisters and brothers,
‘Cause if you do you can hear
There are voices still calling across the years.
And they’re all crying across the ocean,
And they’re cryin’ across the land,
And they will till we all come to understand.

None of us are free
None of us are free
None of us are free, if one of us is chained
None of us are free

And there are people still in darkness,
And they just can’t see the light.
If you don’t say it’s wrong then that says it’s right.
We got to try to feel for each other, let our brothers know that we care.
Got to get the message, send it out loud and clear

None of us are free
None of us are free
None of us are free, if one of us is chained
None of us are free

Now I swear your salvation isn’t too hard to find,
None of us can find it on our own
We’ve got to join together in spirit, heart and mind
So that every soul who’s suffering will know that they’re not alone

None of us are free
None of us are free
None of us are free, if one of us is chained
None of us are free

If you just look around you,
You’re gonna see what I say.
‘Cause the world is getting smaller each passing day
Now it’s time to start making changes,
And it’s time for us all to realize,
That the truth is shining real bright right before our eyes

None of us are free
None of us are free
None of us are free, if one of us is chained
None of us are free.

Application of this to the year 2016, how you should deal with everything that’s happened this year, and how you should stand with your friends and the people around you… is left as an exercise for the reader.

I’d like to first thank the amazing Ubuntu community for funding this trip to promote all the *buntus and provide a face-to-face opportunity for Ubuntu, Kubuntu, and Lubuntu contributors to plan the next cycle, with more to come!

Once the booth was up and running, it was staffed by people from Ubuntu, Kubuntu, and Lubuntu. We had machines running Kubuntu and Lubuntu, and a few devices with Ubuntu Touch. We were also showing off both gaming on *buntu and how well the Steam Controller works with it.

Currently Kubuntu does not have a Release Manager and as such we have started talking to Lubuntu’s Walter (wxl) as he has been working as Lubuntu’s RM for a few releases. We quickly realized that we don’t have any documentation on how that is handled. Thanks to Walter we now have a great starting point on how releases are normally worked out, we’ll have to see if Kubuntu needs any extra steps but it’s a step in the right direction.

What does this mean? Just like with our OpenStack offering you can now have Kubernetes deployed and running all on a single machine. All moving parts are confined inside their own LXD containers and managed by Juju.

It can be surprisingly time-consuming to get Kubernetes from zero to fully deployed. However, with conjure-up and the driving technology underneath, you can get straight Kubernetes on a single system with LXD, or a public cloud such as AWS, GCE, or Azure all in about 20 minutes.
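As a rough sketch of what that looks like in practice (assuming conjure-up is installed as a snap; spell names may vary by release):

```shell
# Install conjure-up (assumes snapd is available on the machine)
sudo snap install conjure-up --classic

# Launch the guided deployment; choose "localhost" for LXD,
# or an AWS/GCE/Azure credential for a public cloud
conjure-up kubernetes
```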

As an example for users unfamiliar with Kubernetes, we packaged an action that both deploys an example application and cleans up after itself.

To deploy 5 replicas of the microbot web application inside the Kubernetes cluster run the following command:

$ juju run-action kubernetes-worker/0 microbot replicas=5

This action performs the following steps:

It creates a deployment titled 'microbots', composed of the 5 replicas defined when running the action. It also creates a service named 'microbots' which binds an 'endpoint', using all 5 of the 'microbots' pods.

Finally, it will create an ingress resource, which points at a xip.io domain to simulate a proper DNS service.
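Once the action completes, the result can be inspected with kubectl. This is a sketch: it assumes kubectl is configured against the cluster, and it follows the 'microbots' naming described above:

```shell
# Confirm the deployment reports 5 available replicas
kubectl get deployment microbots

# List the service and ingress resources the action created
kubectl get service microbots
kubectl get ingress
```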

Our congratulations to Magnus, who won signed copies of “What If?” and “Thing Explainer” by Randall Munroe (creator of xkcd) with this entry:

The police and others send people who have done bad things to me. I give them a room in my ‘house’ and they are not allowed to leave until I say so. I tell these people not to do bad things again and I also help them with how not to do bad things. I write papers when they do bad things while they are living in my ‘house’ and give them warning. I have some rooms that have little in them where they get to stay if they have been doing really bad things to others or to themselves. Some like to live there because they get food, clean clothes, something to do and help. Some hate it and try to leave even though they are not allowed. If these people have done many bad things in my ‘house’ or tried to leave without checking with me, I can tell them that they have to stay for a longer time.

The runners up

Listen to this episode to hear us read out the runner up entries from:

Dave Hingley

Ivan Pejić

Tai Kedzierski

John Garner

Jordan [redacted]

Surma Saif

Katherine Hill

Iain Forbes

John Garner’s Thing Explainer Picture

This picture accompanies John Garner’s competition entry. Listen to this episode to hear how he explains what he does.

Earlier this week the Ubuntu community was busy with the Ubuntu Online Summit. If you head to the schedule page, you can watch all the sessions which happened.

As I’m very interested in snaps, I’d like to highlight some of the sessions which happened there, so if you missed them, you can go back and watch:

Intro and keynote by Gustavo Niemeyer
Gustavo (amongst the other projects he is involved with) is one of the lead developers of snapd. During his keynote he gives an overview of what the team has been working on lately and explains which features have landed in the snap world recently. It quickly gives you an idea of the pace of development and the breadth of new features.

Creating your first snap
This is a session I gave. Unfortunately Didier couldn’t make it as he had lost his voice in the days before. We both worked together on the content for this. Basically, if you’re new to using and creating snaps, watch this. It’s a series of very simple steps you can follow along and gives you enough background to see the bigger picture.

Snap roadmap and Q&A
This was a fun session with Michael Vogt and Zygmunt Krynicki. They are also both lead developers of snapd and they share a lot of their thoughts in their own very fun and very interesting way. After some discussion of the roadmap, they dived right into the almost endless stream of questions. If you want to get an idea of what’s coming up and some of the more general decisions behind snaps, watch this one.

Building snaps in Launchpad
Colin Watson gave this demo of a beautiful new feature in Launchpad. Starting from a GitHub repo (the source could live elsewhere too), the source is pulled into Launchpad, snaps are built for selected releases of Ubuntu and selected architectures, and the results are pushed directly to the store. It’s incredibly easy to set up, complements your CI process, and makes building on various architectures and publishing the snaps trivial. Thanks a lot for everybody’s work on this!

The other sessions were great too, this is just what I picked up from the world of snaps.

Everyone who has followed Ubuntu lately has surely stumbled across the snappy technology, which not only brings the new cross-distro packaging format “snap”, but also a sandboxing technology for apps, transactional updates that can be rolled back if an update goes wrong, and a new way of installing and upgrading Ubuntu called “Ubuntu Core”.

Together with all those new technologies came new tools that make it possible for app developers to build and pack their applications to target Snappy and Core. The central tool for that is snapcraft, and it aims to unite a lot of tasks that were separate before. It can set up your build environment, build your project, and even package it with a single call in the project directory: “snapcraft”.

We took the last few weeks to start the work on supporting those new tools and now we have the first release of the IDE with direct support for building with snapcraft, as well as a basic template to get you started.

New technologies usually come with certain limitations. This one is no exception, and we hope these issues will be eliminated in the near future:

Snapcraft uses sudo when it needs to install build packages; however, that does not work when run from Qt Creator, simply because sudo has no console to ask for the password on. So make sure build dependencies are installed before building.

“Out of source” builds are not yet implemented in snapcraft, but since Qt Creator always uses an extra build directory, we had to work around that problem. For now, we rsync the full project into a build directory and run the build there.

Incremental builds are not yet supported either, so every build is a complete rebuild.

Snapcraft projects are described in a snapcraft.yaml file, so it made sense for us to use it as the project file in the IDE as well: instead of opening a .pro or CMakeLists.txt file, the snapcraft.yaml is opened directly. Naturally, implementing a completely new project type manager is not a trivial task, so many key features are still missing.

Code model support: while completion does work in the file scope, it does not for the full project.

Debugging mode: currently the profiling and debugging run modes do not work, so snap projects can only be executed normally.

Those limitations aside, it can already be used to create snap-packaged applications.

With this new release we consider the IDE feature complete for the time being. Since snapcraft is developing at a very fast pace, we need to let it evolve to a certain degree, to be sure that new features added to the IDE represent the future way of building with snapcraft.

If you’re interested in running Kubernetes you’ve probably heard of Kelsey Hightower’s Kubernetes the Hard Way. Exercises like these are important: they highlight the coordination needed between components in modern stacks, and they show how far the world has come when it comes to software automation. Could you imagine if you had to set everything up the hard way every time?

Learning is fun

Doing things the hard way is fun, once. After that, I’ve got work to do, and soon after I am looking around to see who else has worked on this problem and how I can leverage the best that open source has to offer.

It reminds me of the 1990s, when I was learning Linux. Sure, as a professional you need to know systems and how they work, down to the kernel level if needed. But having to do those things without a working keyboard or network makes the process much harder. Give me a working computer, and then I can begin. There’s value in learning how the components work together and understanding the architecture of Kubernetes, so I encourage everyone to try the hard way at least once; if anything, it’ll make you appreciate the work people are putting into automating all of this for you in a composable and reusable way.

The easy way

I am starting a new series of videos on how we’re making the Canonical Distribution of Kubernetes easy for anyone to deploy on any cloud. All our code is open source and we love pull requests. Our goal is to help people get Kubernetes in as many places as quickly and easily as possible. We’ve incorporated lots of the things people tell us they’re looking for in a production-grade Kubernetes, and we’re always looking to codify those best practices.

Enjoy:

Following these steps will get you a working cluster; in this example I’m deploying to us-east-2, the shiny new AWS region. Subsequent videos will cover how to interact with the cluster and do more things with it.
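For reference, the equivalent Juju commands look roughly like this (a sketch; the bundle name and region are assumptions based on the description above):

```shell
# Bootstrap a Juju controller in the new AWS region
juju bootstrap aws/us-east-2

# Deploy the Canonical Distribution of Kubernetes bundle
juju deploy canonical-kubernetes

# Watch the units come up
juju status
```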

One of the advantages of snap packages is that they are self-contained. When you install a snap, you know that you don’t need to install additional dependencies (besides the automatically-installed core snap that provides the basic operating system layer), and it will simply work on every Linux distribution that supports snaps.

Here, we show how to create self-contained snap packages for Qt-based applications, and we show an additional approach where some of the app dependencies are provided by a separate snap: the Ubuntu app platform snap. The platform snap provides an (optional) approach for the software provider, and can save disk space in some cases. Below we will explain the two approaches to building a snap for Qt-based software: a snap that is self-contained and includes Qt, and one that uses the platform snap, and we show the advantages of each approach. However, before showing these two approaches that you can apply to your own QML code, we demonstrate how to create a snap from deb packages in the Ubuntu archive so that you can get started right away even before you write any code.

We assume that you are already familiar with how to use Snapcraft. If you are not, we recommend reading the documentation on snapcraft.io and the snap-codelabs tutorials first.

All the commands that are listed below are executed on an Ubuntu 16.04 LTS machine with the stable-phone-overlay PPA enabled. Some of the snapcraft commands may run on other configurations, but for the “Ubuntu App Platform Snap” section it is a hard requirement, because the version of Qt - upstream 5.6 long term support version - and other libraries used to build the snap need to match the versions in the ubuntu-app-platform snap. Installing the snap packages works on different versions of Ubuntu and even different Linux distributions. The examples were tested on the amd64 architecture with Intel graphics. If you are running this on a different CPU architecture, naturally the architecture in the directory and snap file names listed below must be modified. If you have an Nvidia GPU and use the Nvidia proprietary drivers, there can be problems when running some snapped applications, so in that case we currently recommend using the open source Nouveau drivers.

The examples are also available in a repository linked to from the Evaluation section.

Qt cloud parts - a simple use case

We will demonstrate how to build a simple app snap that includes the Qt release and Ubuntu UI Toolkit (UITK) from the Ubuntu archives. For this example, we use the UITK gallery which is part of the ubuntu-ui-toolkit-examples deb package on classic Ubuntu systems, so no additional code is needed. Because of that, we can simply use the nil plugin and pull in the examples as stage-packages. We create a directory called uitk-gallery which contains only a snapcraft.yaml file with the following contents:
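The YAML itself did not survive the formatting here, but based on the description it would look roughly like the following sketch (the exact command path, plugs, and metadata fields are assumptions):

```yaml
name: uitk-gallery
version: '0.1'
summary: Ubuntu UI Toolkit gallery
description: The UITK gallery, packaged from the Ubuntu archive.
confinement: strict

apps:
  uitk-gallery:
    command: desktop-launch qmlscene $SNAP/usr/lib/ubuntu-ui-toolkit/examples/ubuntu-ui-toolkit-gallery/ubuntu-ui-toolkit-gallery.qml
    plugs: [x11, opengl, home]

parts:
  uitk-gallery:
    plugin: nil
    stage-packages:
      - ubuntu-ui-toolkit-examples
      - qml-module-qtqml-models2
    after: [desktop-qt5]
```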

(Note: the command line assumes you are on, and targeting, an amd64 system. The plugs line is needed so that your confined app has access to the graphical subsystem.)

Under stage-packages we listed all the packages that need to be pulled from the Ubuntu archive, including their dependencies. ubuntu-ui-toolkit-examples contains all the QML code for the UITK gallery that we want to run using qmlscene. We also included qml-module-qtqml-models2 because some pages of the UITK gallery import QtQml.Models. The line after: [desktop-qt5] fetches the desktop-qt5 part from the remote parts repository. It will automatically pull in Qt 5 from the Ubuntu archive, set up environment variables, and provide the desktop-launch script that is called to start the app. The snap file can be created simply by going to the uitk-gallery directory which contains the snapcraft.yaml file, and running:

snapcraft

Note that Snapcraft will ask for the sudo password to install the Qt5 dev packages required to compile Qt apps; this step is skipped if all the dependencies are already present. Running snapcraft will create (on an amd64 machine) the file uitk-gallery_0.1_amd64.snap which can then be installed by:

snap install --dangerous uitk-gallery_0.1_amd64.snap

where the --dangerous flag is required because we are installing an unsigned snap that does not come from the Ubuntu store. Note that you do not need to use sudo if you have logged in with snap login. The UITK gallery can now be launched using:

uitk-gallery

The desktop-qt5 cloud part pulls in the current stable version of Qt of the Ubuntu 16.04 LTS release: 5.5.1 normally, or 5.6.1 in the case of the stable overlay PPA. To uninstall the UITK gallery snap before going to the next section, run:

snap remove uitk-gallery
QML project using parts from the cloud

If your existing QML code is not available as a deb package, then obviously you cannot pull it in from the archive when creating the snap using stage-packages. To show how to include your own QML code, we copy the UITK gallery code to the ubuntu-ui-toolkit-gallery directory inside the snapcraft (uitk-gallery) directory. Go to the parent directory of the uitk-gallery of the previous section, and run:

You should now have both the snapcraft.yaml and the copied ubuntu-ui-toolkit-gallery directory that contains the source code of the UITK gallery under the uitk-gallery directory. We can now remove ubuntu-ui-toolkit-examples from the stage-packages in the snapcraft.yaml file. Because that line is removed, the dependencies of the UITK gallery are no longer pulled in automatically, and we must add them to the YAML file, which then becomes:
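A sketch of what the updated part definition would look like (the dump source and the package names for the UITK dependencies are assumptions):

```yaml
parts:
  uitk-gallery:
    plugin: dump
    source: ubuntu-ui-toolkit-gallery
    stage-packages:
      - qml-module-ubuntu-components
      - qml-module-qtqml-models2
    after: [desktop-qt5]
```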

Note that besides the changes in stage-packages, the location of ubuntu-ui-toolkit-gallery.qml was also updated in the uitk-gallery command, because the QML files are no longer installed in usr/lib inside the snap, but copied into the root of the snap filesystem. As before, the snap package can be created by executing:

snapcraft

inside the uitk-gallery directory. The UITK gallery can then be installed and started using:

snap install --dangerous uitk-gallery_0.2_amd64.snap
uitk-gallery

and uninstalled by:

snap remove uitk-gallery

Now that you have seen how to package the UITK gallery from source into a snap, you can do the same for your own QML application by using the dump plugin with the dependencies as stage-packages. If your application includes C++ code as well, you need to use another plugin, for example the qmake plugin. For that we refer to the Snapcraft tutorials mentioned in the introduction.

For those who like to experiment with newer versions of upstream Qt, we provide qt57 and qt58 cloud parts in the parts repository for Qt 5.7.1 and 5.8 (in development). However, the qt57 and qt58 cloud parts do not yet include a wrapper script similar to desktop-launch, so one must be included with the snap configuration; see for example timostestapp2. When using these cloud parts, you should usually omit any Qt/QML package from stage-packages, as the ones compiled from the newer Qt are used directly, and you should also omit the after: [desktop-qt5] line.

Ubuntu app platform snap

The snap files we created in the previous sections contain everything that is needed in order to run the UITK gallery application, resulting in a snap file of 86MB. Here we will explain how to use the Ubuntu app platform snap when you have multiple app snaps that all use the same Qt version.

Benefits of this approach include disk space savings and, if your connection is metered, reduced download time and bandwidth usage.

When your snap uses the ubuntu-app-platform snap for Qt and other platform libraries, we can remove the stage-packages from the snapcraft.yaml file because (in this case) all the needed libraries are included in ubuntu-app-platform. We must also replace after: [desktop-qt5] with after: [desktop-ubuntu-app-platform]. This will set up your snap to use the global desktop theme, icon theme, gsettings integration, etc. A more elaborate description of desktop-ubuntu-app-platform is given in the parts list on the Ubuntu wiki. In the uitk-gallery directory we must currently create a directory where the files from the platform snap can be mounted using the content interface:

mkdir ubuntu-app-platform

and this empty directory (mount point) must be added in the YAML file as well. At this point the directory structure is as follows:

Note that the snaps must be connected before running uitk-gallery for the first time. If uitk-gallery has been executed before the snap connect, you will see an error message. To fix the problem, uninstall the uitk-gallery snap, then re-install it and run the snap connect command before executing uitk-gallery. This is a known limitation in snapd which will be resolved soon.
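Put concretely, the safe order of operations looks something like this (the plug and slot names are assumptions):

```shell
# Install both snaps first
snap install --dangerous uitk-gallery_0.3_amd64.snap
snap install ubuntu-app-platform

# Connect the content interface BEFORE the first run
snap connect uitk-gallery:platform ubuntu-app-platform:platform

# Only now launch the app
uitk-gallery
```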

Another note: once support for default-provider (already defined above) is correctly implemented in snapd, there will no longer be a need to install the platform snap separately; it will be pulled from the store and the interface connected automatically.
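The default-provider mentioned here lives in the content plug definition of the app's snapcraft.yaml; a sketch of what that section looks like (the attribute values are assumptions):

```yaml
plugs:
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform
```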

Evaluation

We demonstrated three different approaches to creating a UITK gallery snap, which we gave the version numbers 0.1, 0.2 and 0.3. For each of the approaches, the table below lists the time needed for the different stages of a snapcraft run. The pull and build stages have been combined because, when doing a pull, some of the prerequisites already need to be built. The “all stages” row shows the total time when running the snapcraft command in a clean directory, so that all stages are executed sequentially; the value is less than the sum of the previous rows because each stage then does not need to check completion of the previous stages.

Version (bzr revision)   0.1 (r1)   0.2 (r2)   0.3 (r3)
build (includes pull)    1m49s      1m48s      3.6s
stage                    7s         7s         1.5s
prime                    33s        34s        1.8s
snap                     1m11s      1m13s      1.7s
all stages               3m32s      3m20s      4.0s
install                  2.2s       2.4s       1.2s
snap file size           86 MB      86 MB      1.3 MB

The measurements were done on a laptop with an Intel Core i5-6200U CPU, 8 GB RAM, and a solid-state drive, by running each command three times and listing the average execution time. All build-dependencies were pre-installed, so their installation time is not included in the measurements. Note that this table only serves as an illustration, and execution times will vary greatly depending on your system configuration and internet connection, but it can easily be tested on your own hardware by bzr branching revisions r1, r2 and r3 of lp:~tpeeters/+junk/blog-snapping-qt-apps.

The times and file size listed in the last column (version 0.3) do not include downloading and installing the ubuntu-app-platform snap, which is 135 MB (it includes more than just the minimal Qt and UITK dependencies of the UITK gallery). It can be seen that (depending on the internet connection speed, and which files were downloaded already), using the ubuntu-app-platform can speed up the pull and build time when creating a new snap file. However, the most important advantage is the reduction of the sum of the file sizes when installing multiple app snaps that all depend on Qt or other libraries that are part of the platform snap, because the libraries need to be installed only once. The disadvantage of this approach is that the app snap must be built using the exact same Qt (and other libraries) version as the ubuntu-app-platform snap, so the choice whether the snap should be fully self-contained or depend on the platform snap must be made individually for each use case.

Future work

The UITK gallery snap is not yet available in the Ubuntu store, so our next step will be to publish a UITK examples snap that includes the UITK gallery, and to set up automatic publishing of that snap to different channels when we update the UITK or the examples. We will also evaluate the best way to make newer versions of Qt available, and determine whether we need to provide prebuilt binaries to replace the qt57 and qt58 cloud parts.

Finally, we will determine which libraries need to be included in the ubuntu-app-platform snap. The plan is to include all APIs that are listed on https://developer.ubuntu.com/api/qml/ and if APIs are missing we will add them to that webpage as well as to the platform snap. Of course, if you think we are forgetting a library that is useful and used in many different applications, please leave a comment below.

Launchpad has had Git-to-Bazaar code imports since 2009, along with imports from a few other systems. These form part of Launchpad’s original mission to keep track of free software, regardless of where it’s hosted. They’re also very useful for automatically building other artifacts, such as source package recipes or snap packages, from code hosted elsewhere. Unfortunately they’re quite complicated: they need to be able to do a full round-trip conversion of every revision from the other version control system, which has made it difficult to add support for Git features such as signed commits or submodules. Once one of these features is present anywhere in the history of a branch, importing it to Bazaar becomes impossible. This has been a headache for many users.

We can do better nowadays. As of last year, we have direct Git hosting support in Launchpad, and we can already build snaps and recipes straight from Git, so we can fulfil our basic goal more robustly now with a lot less code. So, Launchpad now supports Git-to-Git code imports, also known as Git mirroring. You can use this to replace many uses of Git-to-Bazaar imports (note that there’s no translations integration yet, and of course you won’t be able to branch the resulting import using bzr).

On Ubuntu, many of the default boot loaders support booting kernels located on LVM volumes. This includes the following platforms:

i686, x86_64 bios grub2

arm64, armhf, i686, x86_64 UEFI grub2

PReP partitions on IBM PowerPC

zipl on IBM zSystems

For all of the above, the d-i has been modified in Zesty to create LVM-based installations without a dedicated /boot partition. We shall celebrate this achievement. Hopefully this means one doesn't need to remove kernels as often, or care about sizing the /boot volume appropriately any more. If there are more bootloaders in Ubuntu that support booting off LVM, please do get in touch with me. I'm interested in whether I can safely enable the following platforms as well:

Last month we moved the Neon archive to a new server, so packages got built on our existing server and then uploaded to the new server. Checking the config, it seemed I’d made the nasty error of leaving it open to the world rather than requiring an SSH gateway to access the apt repository, so anyone scanning around could have uploaded packages. There’s no reason to think that happened, but the default in security is to be paranoid about any possibility. The security advisory is out, the archives have been wiped, and all packages in User Edition have been rebuilt, so upgrade now to get the new package builds, or for extra security do a reinstall. The new User Edition ISO is out and I’ll update the website once it gets mirrored enough. Developer Edition packages are being rebuilt now and go directly into the archives, so you should start seeing those appear shortly as they are built. Sorry for the hassle folks; you wouldn’t want us to just hide it, I’m sure.

Not much has changed in the design because I had been using Hugo
before. However, Hugo is now automatically run inside of an
AWS Lambda function triggered by updates to a CodeCommit Git
repository.

It has been a pleasure writing with transparent review and publication
processes enabled by Hugo and AWS:

When I save a blog post change in my editor (written using
Markdown), a local Hugo process on my laptop automatically detects
the file change, regenerates static pages, and refreshes the view
in my browser.

When I commit and push blog post changes to my CodeCommit Git
repository, the Git-backed Static Website stack automatically
regenerates the static blog site using Hugo and deploys to the live
website served by AWS.

Blog posts I don’t want to go public yet can be marked as “draft”
using Hugo’s content metadata format.
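For example, a post can be held back with a single field in its front matter (YAML front matter shown with hypothetical values; Hugo also supports TOML):

```yaml
---
title: "Not ready yet"
date: 2016-11-20
draft: true
---
```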

Bigger site changes can be developed and reviewed in a Git feature
branch and merged to “master” when completed, automatically triggering
publication.

I love it when technology gets out of my way and lets me focus on
being productive.

I often have folks asking how the text & video consoles work on
OpenPOWER machines. Here's a bit of a rundown on how it's implemented, and
what may seem a little different from x86 platforms that you may already
be used to.

On POWER machines, we get the console up and working super
early in the boot process. This means that we can get debug, error and
state information out using text console with very little hardware
initialisation, and in a human-readable format. So, we tend to use
simpler devices for the console output - typically a serial UART -
rather than graphical-type consoles, which require a GPU to be up and
running. This keeps the initialisation code clean and simple.

However, we still want a facility for admins who are more used to
a keyboard & monitor directly plugged-in to have a console facility too.
More about that later though.

The majority of OpenPOWER platforms will rely on the attached
management controller (BMC) to provide the UART console (as of November
2016: unless you've designed your own OpenPOWER hardware, this will be
the case for you). This will be based on ASpeed's AST2400 or AST25xx
system-on-chip devices, which provide a few methods of getting console
data from the host to the BMC.

Between the host and the BMC, there's a LPC bus. The host
is the master of the LPC bus, and the BMC the slave. One of the
facilities that the BMC exposes over this bus is a set of UART devices.
Each of these UARTs appear as a standard 16550A register set, so having
the host interface to a UART is very simple.

As the host is booting, the host firmware will initialise the UART
console, and start outputting boot progress data. First, you'll see the
ISTEP messages from hostboot, then skiboot's "msglog"
output, then the kernel output from the petitboot bootloader.

Because the UART is implemented by the BMC (rather than a real
hardware UART), we have a bit of flexibility about what happens to the
console data. On a typical machine, there are four ways of getting
access to the console:

Direct physical connection: using the DB-9 RS232 port on the back
of the machine;

Over network: using the BMC's serial-over-LAN interface, using
something like ipmitool [...] sol activate;

Local keyboard/video/mouse: connected to the VGA & USB ports on the
back of the machine, or

Remote keyboard/video/mouse: using "remote display" functionality
provided by the BMC, over the network.

The first option is fairly simple: the RS232 port on the machine is
actually controlled by the BMC, and not the host. Typically, the BMC
firmware will just transfer data between this port and the LPC UART
(which the host is interacting with). Figure 1 shows the path of the
console data.

Figure 1: Local UART console.

The second is similar, but instead of the BMC transferring data
between the RS232 port and the host UART, it transfers data between a
UDP serial-over-LAN session and the host UART. Figure 2 shows the
redirection of the console data from the host over LAN.

Figure 2: Remote UART console.

The third and fourth options are a little more complex, but basically
involve the BMC rendering the UART data into a graphical format, and
displaying that on the VGA port, or sending over the network. However,
there are some tricky details involved...

UART-to-VGA mirroring

Earlier, I mentioned that we start the console super-early. This
happens way before any VGA devices can be initialised (in fact, we don't
have PCI running; we don't even have memory running!). This
means that it's not possible to get these super-early console messages
out through the VGA device.

In order to be useful in deployments that use VGA-based management
though, most OpenPOWER machines have functionality to mirror the
super-early UART data out to the VGA port. During this process, it's the
BMC that drives the VGA output, and renders the incoming UART text data
to the VGA device. Figure 3 shows the flow for this, with the GPU
rendering text console to the graphical output.

Figure 3: Local graphical console during early
boot.

In the case of remote access to the VGA device, the BMC takes the
contents of this rendered graphic and sends it over the network, to a
BMC-provided web application. Figure 4 illustrates the redirection to
the network.

Figure 4: Remote graphical console during early
boot, with graphics sent over the network

This means we have console output, but no console input. That's okay
though, as this is purely to report early boot messages, rather than
provide any interaction from the user.

Once the host has booted to the point where it can initialise the VGA
device itself, it takes ownership of the VGA device (and the BMC
relinquishes it). The first software on the host to start interacting
with the video device is the Linux driver in petitboot. From there on,
video output is coming from the host, rather than the BMC. Because we
may have user interaction now, we use the standard host-controlled
USB stack for keyboard & mouse control.

Figure 5: Local graphical console later in boot,
once the host video driver has started.

Remote VGA console follows the same pattern - the BMC captures the
video data that has been rendered by the GPU, and sends it over the
network. In this case, the console input is implemented by virtual USB
devices on the BMC, which appear as a USB keyboard and mouse to the
operating system running on the host.

Figure 6: Remote graphical console later in boot,
once the host video driver has started.

We then get the output from the zImage wrapper, which expands the
actual kernel code and executes it. In recent firmware builds, the
petitboot kernel will suppress most of the Linux boot messages, so we
should only see high-priority warnings or error messages.

We tend to prefer the text-based consoles for managing OpenPOWER
machines - either the RS232 port on the machines for local access, or
IPMI Serial over LAN (SOL) for remote access. This means that there's
much less bandwidth and latency for console connections, and there is a
simpler path for the console data. It's also more reliable during
low-level debugging, as serial access involves fewer components of the
hardware, software and firmware stacks.

That said, the VGA mirroring implementation should still work well,
and is also accessible remotely by the current BMC firmware
implementations. If your datacenter is not set up for local RS232
connections, you may want to use VGA for local access, and SoL for
remote - or whatever works best in your situation.

On the path to Xfce 4.14, many components have been ported to GTK+ 3 while many others are in progress. This is the first milestone in the Xfce Settings port. What’s New? This is a one-to-one port from GTK+ 2; no new features or fixes have been implemented at this stage. Translation updates: Basque, Bulgarian, Chinese …

Today, I received my Steam Controller in the mail (very quick shipping!) and found that it doesn’t work out of the box. I found that you need the steam-devices package (via sudo apt install) from the repos, and also Steam Beta updated to the latest version. Then you need to get into Big Picture mode to update the firmware for the controller.

And here I am, packing to go on yet another adventure. If you are near the Seattle area, I encourage you to go to SeaGL, a volunteer-run conference. I will be speaking about Juju, and we’ll also have an Ubuntu table!

On the other hand, I will also be at UbuCon Europe. If you are in Germany, make sure to attend!

If you are going to be at any of those conferences, make sure to come by and say hi – I’d love to see you.