Canonical Voices

What I me mine talks about

These are my notes about UbuconLA: a bit about the social activities, and thoughts on the talks related to the snappy ecosystem.

Arriving

I left late on Thursday for a planned night arrival into Lima; Leo was going to arrive around 2 hours earlier than me, flying in from Costa Rica.
Once the plane I was on from Cordoba, Argentina landed in Lima, I got onto an ad-hoc Telegram group we had created, mentioned that I had landed, and discovered Leo was still trying to Uber out of there.

Delivery and Store Concepts

So let’s start with a refresher on what we have available on the Store
side to manage your snaps.

Every time you push a snap to the store, the store will assign it
a revision; this revision is unique in the store for this particular
snap.

However, to be able to push a snap for the first time, the name
needs to be registered, which is pretty easy to do provided the name is
not already taken.

Any revision on the store can be released to a number of channels, which
are defined conceptually to give your users an idea of the stability
or risk level. These channel names are:

stable

candidate

beta

edge

Ideally anyone with a CI/CD process would push daily or on every
source update to the edge channel. During this process there are two
things to take into account.

The first thing to take into account is that at the beginning of the
snapping process you will likely get started with a non-confined snap,
as this is where the bulk of the work needs to happen to adapt to
this new paradigm.
With that in mind, your project gets started with its confinement set to
devmode. This makes it possible to get going in the early phases
of development and still get your snap into the store. Once everything
is fully supported by the security model snaps work in, this
confinement entry can be switched to strict. Given a confinement
level of devmode, the snap is only releasable on the edge and beta
channels, which hints to your users how much risk they are taking by
going there.

So let’s say you are good to go on the confinement side and you start
a CI/CD process against edge, but you also want to make sure that
early releases of a new iteration against master never make it to
stable or candidate. For this we have a grade entry: if the grade of
the snap is set to devel, the store will never allow you to release
to the most stable channels (stable and candidate).
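As a sketch, the two entries discussed above sit at the top level of snapcraft.yaml (the name and surrounding metadata are hypothetical, reused from the example later in this post):

```yaml
name: awesome-database
version: '0.1'
summary: a fantasy database snap
description: a hypothetical snap used to illustrate confinement and grade.
confinement: devmode  # switch to strict once the security model is fully supported
grade: devel          # devel blocks releases to the stable and candidate channels
```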

Somewhere along the way we might want to release a revision into beta,
which some users are more likely to want to track on their side (given
a good release management process, it should be somewhat more usable
than a random daily build). When that stage in the process is over
but we want people to keep getting updates, we can choose to close the
beta channel, as we only plan to release to candidate and stable
from a certain point in time. Closing the beta channel will make it
track the next open channel in the stability list, in this case
candidate; if candidate is in turn tracking stable, whatever is in
stable is what we will get.

Enter Snapcraft

So given all these concepts, how do we get going with snapcraft? First of
all, we need to log in:
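The login step looks roughly like this (the prompts may differ between snapcraft versions, and the email is a placeholder):

```shell
$ snapcraft login
Enter your Ubuntu One SSO credentials.
Email: me@example.com
Password:
Login successful.
```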

After logging in we are ready to get our snap registered. For example’s
sake, let’s say we want to register awesome-database, a fantasy snap we
want to get started with:

$ snapcraft register awesome-database
We always want to ensure that users get the software they expect
for a particular name.
If needed, we will rename snaps to ensure that a particular name
reflects the software most widely expected by our community.
For example, most people would expect ‘thunderbird’ to be published by
Mozilla. They would also expect to be able to get other snaps of
Thunderbird as 'thunderbird-sergiusens'.
Would you say that MOST users will expect 'awesome-database' to come from
you, and be the software you intend to publish there? [y/N]: y
You are now the publisher for 'awesome-database'.

So assuming we have the snap built already, all we have to do is
push it to the store. Let’s take advantage of a shortcut and --release
in the same command:
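A sketch of that push-and-release step (the file name and the output shown are illustrative, not verbatim snapcraft output):

```shell
$ snapcraft push awesome-database_0.1_amd64.snap --release=edge
Uploading awesome-database_0.1_amd64.snap ...
Revision 10 of 'awesome-database' created.
```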

In this last channel map view for the architecture we are working with,
we can see that edge is going to be stuck on revision 10, and that
beta and candidate will be following stable which is on revision 10.
For some reason we decide that we will focus on stability and make our
CI/CD push to beta instead. This means that our edge channel will
slowly fall out of date; to avoid things like this, we can
decide to close the channel:
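Closing a channel is a single command; a sketch, assuming the snap and channel from the running example:

```shell
$ snapcraft close awesome-database edge
```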

In this current state, all channels are following the stable channel,
so people subscribed to candidate, beta and edge will be tracking
changes to that channel. If revision 11 is ever pushed to stable only,
people on the other channels will also see it.

This listing also provides us with a full architecture view, in this
case we have only been working with amd64.

Getting more information

So some time has passed and we want to know the history and status
of our snap in the store. There are two commands for this; the
straightforward one is to run status, which will give us a familiar
result:
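A sketch of what status prints (the revisions and versions here are from the hypothetical example above; the '^' marks a channel following the one above it):

```shell
$ snapcraft status awesome-database
Arch    Channel    Version    Revision
amd64   stable     0.1        10
        candidate  ^          ^
        beta       ^          ^
        edge       ^          ^
```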

Today I am going to be discussing parts. This is one of the pillars of
snapcraft (together with plugins and the lifecycle).

For those not familiar, snapcraft’s general purpose landing page is
http://snapcraft.io/, but if you are a developer and have already been
introduced to this new world of snaps, you probably want to just hop
on to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts
and start to wonder how to get started, or think that maybe you are duplicating
work done by others, or even better, by an upstream. This is where we start to
think about the idea of sharing parts, and this is exactly what we are going
to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the
part definition from scratch and be on with my own business, but I might
be missing out on the optimal switches used to configure the
package or even build it. I would also need to research how to use the specific
plugin required. So instead, I’ll see if someone has already
done the work for me; hence I will:

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

An example

There are two ways to use these parts in your snapcraft.yaml. Say this is your
parts section:

parts:
  client:
    plugin: autotools
    source: .

My client part, which is using sources that sit alongside this snapcraft.yaml,
will hypothetically fail to build as it depends on the curl library
I don’t yet have. There are some options here to get this going: one using after
in the part definition implicitly, another involving composing, and last but not least
just copy/pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look
like:

parts:
  client:
    plugin: autotools
    source: .
    after: [curl]

This will use the cached definition of the part, which can be refreshed by
running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source
release? Well we can override pieces of the part; so for the case of wanting to
change the source:
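A sketch of such a composed part (the git URL and tag here are illustrative, not the real curl part definition):

```yaml
parts:
  client:
    plugin: autotools
    source: .
    after: [curl]
  curl:
    # no 'plugin' entry: snapcraft fills in the rest from the cached part
    source: https://github.com/curl/curl.git
    source-tag: curl-7_50_1
```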

And we will get to build curl, but using a newer version. The trick
is that the part definition here is missing the plugin entry, thereby
instructing snapcraft to look up the full part definition from the cache.

Copy/Pasting

This is the path one would take to have full control over the part.
It is as simple as copying the part definition we got from running
snapcraft define curl into your own. For the sake of completeness, here’s how it
would look:
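A hypothetical rendering of what snapcraft define curl might return, pasted into your own parts section (the plugin, source and flags are assumptions, not the real wiki definition):

```yaml
parts:
  client:
    plugin: autotools
    source: .
    after: [curl]
  curl:
    plugin: autotools
    source: https://github.com/curl/curl.git
    source-tag: curl-7_50_1
    configflags:
      - --enable-static
      - --enable-shared
      - --disable-manual
```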

What does this mean? Well, the part itself is not defined on the wiki, just
a pointer to it with some metadata; the part is really defined inside a
snapcraft.yaml living in the origin we just told it to use.

The extent of the keywords is explained in the documentation, that is an
upstream link to it.

The core idea is that a maintainer decides they want to share a part. Such a
maintainer would add a description that provides an idea of what that part
(or collection of parts) is doing. Then, last but not least, the maintainer declares
which parts to expose to the world, as maybe not all of them should be. The main part
is exposed in project-part and will carry a top-level name; the maintainer can
expose more parts from snapcraft.yaml using the general parts keyword. These
parts will be namespaced with the project-part.

Introduction

With snapcraft 2.5, which can
be installed on the upcoming 16.04 Xenial Xerus with apt or consumed from
the 2.5 tag on
github, we have included two interesting plugins: kbuild and kernel.

The kbuild plugin is interesting in itself, but here we will be discussing
the kernel plugin which is based out of the kbuild one.

A note of caution though, this kernel plugin is still not considered
production ready. This doesn’t mean you will build kernels that don’t work on
today’s version of Ubuntu Core; but caution is required as the nature of
rolling, which is what this kernel plugin targets, can still change.
Additionally we may still modify the plugin’s options for the part setup
itself.

Last but not least we are introducing, given the nature of kernel building,
some experimental cross building support. The reason for this is that cross
compiling a kernel is well understood and straightforward.

Walkthrough

Objective

The final objective is to obtain a kernel snap; we will want to create a kernel
that would work on the 410c DragonBoard from Arrow which features Qualcomm’s
Snapdragon 410. To do so we will take a look at the
96boards wiki
and the 96boards published kernel.

Setup

You must be running a Xenial Xerus system and have at least snapcraft 2.5
installed; make sure by running:

$ snapcraft -v
2.5

If not, then:

$ apt update
$ apt install snapcraft

Cloning the kernel

Since the kernel is the main project, and to iterate quickly, it makes sense
to clone it and start snapcrafting from there, so let’s clone:
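A sketch of the clone step (the repository URL is an assumption; use whichever tree the 96boards wiki points at):

```shell
$ git clone https://github.com/96boards/linux.git
$ cd linux
```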

Creating the base snapcraft.yaml

Go into the recently cloned kernel directory and let’s get started with a yaml
that has the standard entries for someone familiar with snapcraft.yaml:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.

Now this is a kernel snap, so let’s add that information in; this is rather
important since, if not done, the resulting snap might as well be some sort
of asset holder. By adding the type of snap, snappy Ubuntu Core will know
what to do:

name: 96boards-kernel
version: 4.4.0
summary: 96boards reference kernel with qualcomm firmware
description: this is an example on how to build a kernel snap.
type: kernel

That’s all we need with regards to headers.

Adding parts

kernel

So let’s add some parts. The first part will use the new kernel plugin,
whose help can be seen by running:

snapcraft help kernel

The kernel plugin is based out of the kbuild one, so there are some extra
parameters we can use from that plugin which can be seen by running:

snapcraft help kbuild

And finally these plugins make use of snapcraft’s source helpers, which can be
discovered by running:

snapcraft help sources

So when we look at the wiki again we will notice there are 2 defconfigs,
defconfig and distro.config.
Even though distro.config defines squashfs support to be
built as a module, let’s make use of kconfigs and explicitly set it (we
also set a couple of other kernel configurations). We will build 2 device trees
making use of kernel-device-trees. In kernel-initrd-modules we will
mention squashfs, as we need support for it to boot.

Given that particular piece of information let’s work on adding this part:
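Putting that together, the part could look something like this (kdefconfig, kconfigs, kernel-initrd-modules and kernel-device-trees are the plugin options mentioned above; the device tree names and the extra kconfig are assumptions for the DragonBoard, so check the 96boards wiki for the real ones):

```yaml
parts:
  kernel:
    plugin: kernel
    source: .
    # apply both defconfigs from the wiki
    kdefconfig: [defconfig, distro.config]
    # explicitly set squashfs as a module, plus an illustrative extra option
    kconfigs:
      - CONFIG_SQUASHFS=m
      - CONFIG_LOCALVERSION="-96boards"
    # squashfs support is needed in the initrd to boot
    kernel-initrd-modules:
      - squashfs
    # hypothetical device tree names for the Snapdragon 410 board
    kernel-device-trees:
      - qcom/apq8016-sbc
      - qcom/msm8916-mtp
```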

Building

Now that we have a complete snapcraft.yaml, we will proceed to build. If you
do this on a 64bit system, you will be able to cross compile this snap; just
run:

$ snapcraft --target-arch arm64

This build will take a while, around 30 minutes give or take. You will
eventually see a message that says Snapped 96boards-kernel_4.4.0_arm64.snap.
That means you are done and have successfully created a kernel snap.

Just a week ago I made my way back from Linaro Connect. It was my first time at
a Linaro Connect that was not jointly held with a UDS, and even then I had not
participated in the event. It was also my first time in Thailand, and to a
greater extent Asia, so I was very interested in going.

The main purpose for attending was to show snappy and in particular, snapcraft.
The part of snapcraft that I was going to show was related to building kernels,
I must say I am quite pleased with the results.

My first two days at the event mostly involved attending the keynote
and then going to the quiet (not so quiet) hacking room to work on
supporting whatever my colleagues Ricardo and Alexander wanted to demo and
present in their accepted talk.

In one of those two keynotes, or in another random presentation I attended, I
discovered (personally; others may have known already) that 96boards had
released a working 4.4 kernel to support the 96boards initiative. So those two
days I spent polishing the kernel snapcraft plugin we had, to be able
to build a nice kernel snap for Ubuntu Core using the 96boards kernel. That
was a success and is part of what is being released today with snapcraft (a follow
up post will describe how to make use of this plugin).

On day three, jetlag really kicked in, so I zombie-participated with some
comments when relevant in some of the business and/or planning meetings
related to Canonical. This was really fun; during introductions it
seems it was a nice ice breaker (even though not my intention) to say I’m just
an engineer working on …. I felt I had to put emphasis on mentioning just,
after all the other folk exchanged business cards (note to self: maybe get some
business cards) and mentioned all these fancy titles :-)

On day four, the main highlight from an Ubuntu point of view was Alexander and
Ricardo’s presentation.

I also briefly met some of the fine Qualcomm folk, and it turns out they are
running a maker’s contest.

The last day was mostly demo day, so Ricardo and I set up a couple of
snappy related demos; there was a lot of interest all around, which left us
pleased.

There was time to socialize as well and I did get to see some former Canonical
faces and catch up a bit; there was also some spare time to get to know new
people and that is always fun as well.

Given that I’ve never been to Bangkok before, the Saturday that followed was
dedicated to some sightseeing.

I’m finally taking some time to write about things that happened during
UbuCon and SCaLE. I am really grateful to Canonical, as without the
sponsorship I wouldn’t have been there at all.

UbuCon

UbuCon is where I was mostly involved: I had a scheduled lightning talk,
a proper talk that I gave with Manik, and I was also actively involved in 3
unconference sessions.

Presenting

Plenary talk

The day started with an intro to the UbuCon Summit and what it was all
about, followed by a keynote from Mark Shuttleworth. Once Mark left the stage,
the UbuCon Plenary Day 1
sessions started. Mine was the first, and so it went… no task is without
issues: I started out lacking an hdmi cable to hook up my laptop;
apparently when I mentioned I needed one, the organizers took it as me
saying I wouldn’t need one. In the end, 10 minutes later, the problem was
solved.

There were also problems with the Ubuntu archives at the event, a transparent
proxy mirroring issue of some sort, making installing and updating packages a
not so happy experience. Luckily I focused on live-snapping shout,
which does not require any Ubuntu packages. It seemed to go rather well for a
lightning talk.

My quick talk was followed by Jorge Castro talking about gaming, Didier Roche
about Ubuntu as a development platform, and Scarlett Clark about Kubuntu
development.

Talking IoT getting snappy

Manik Taneja
and I did a joint talk just to spice things up a bit; he talked mostly
from a PM point of view and I from the point of view of someone down in the
trenches. It seemed to provide good balance.

We presented some slides
and also got a demo going with ROS and opencv, going through the new features
in the soon to be released Ubuntu Core 16.04, like the classic dimension.

There was a lot of interest in the audience and many questions were asked.
People liked the fact that we were focusing on ROS as well. Personally, I felt
the whole thing went down rather well.

Unconference

On the second day, following a keynote from Cory Doctorow, we had the Ubuntu
unconference part of the summit. As a snappy person I proposed 2 things:

creating a snap that uses SDL.

snapping your project.

Additionally, I attended another session snappy for sysadmins.

These sessions were basically round tables; small groups were formed,
probably due to the focus required and the fact that the larger SCaLE event
had started.

The admin questions and discussion were pretty good and a lot of doubts were
aired out. The SDL session I consider a failure; as mentioned, the archive
was broken so we had to juggle around that, and we also spent a lot of time
setting up http://tmate.io/ so everyone could follow. In the end, I still
have to work on that SDL based snap.

The snapping session got meshed into the SDL one, as we ended up doing a lot
of snapcrafting there. Nothing working or final came out, but we got to
walk through many scenarios, most of which translated into a bug and a fix
in snapcraft, so I do feel there was good value in this session overall, even
if during the session it felt as if we weren’t moving forward.

Most of my time took place in the famous (or infamous) hallway track, talking
snappy, Ubuntu Core and snapcraft with people, and sitting down and getting
things done with these fine folk :-)

SCaLE

What can I say, I liked the exhibit hall; it was massive compared to the events
I generally go to. Lots of fun walking around collecting swag and getting
the marketing speech from some vendors ;-) Microsoft even had representation;
they had run out of T-shirts by the time we arrived but offered to send one
over, which was kind of cool. Kudos to the new Microsoft as well; I guess
10 years ago no one would have seen this change coming!

As it goes with hallway tracks, I didn’t have enough time to see much of the
event’s presentations. On Saturday I got to see Mark’s SCaLE oriented keynote.

On day 2 I went and saw Sarah Sharp talk about
diversity, but to be fair, those problems are rather far away
from where I live, where we have a whole different set of problems, so I
couldn’t identify much with the discussion.

Closing thoughts

About UbuConLA

Last week I attended the 2015 edition of UbuConLA, a successor of sorts to what
once was UDS, the Ubuntu Developer Summit, which later transformed into vUDS
(the v standing for virtual) and was eventually renamed the Ubuntu Online
Summit. UbuConLA, fully organized by the community, tries to relive the days of
UDS: a chance to meet face to face with fellow Ubuntu contributors,
contributors-to-be, or just people generally interested in Ubuntu. In other
words, the social and human aspect of it.

My first UbuConLA was in 2013 and took place in Montevideo; we had recently
announced and released the Ubuntu Phone (dubbed Ubuntu Touch) and I spoke
about it then.

This year, the event took place in Lima and was organized by a very avid Ubuntu
Member, Jose Antonio Rey; he did an excellent job overall with the
organization.

For this event I took my Ubuntu powered phones and tablets to be able to
display and show around. People seemed to like them and the general comment was
“I expected this to crash more”.

Walk the talk with Snappy

The thing I wanted to talk about this year wasn’t specifically phones
though, as it was in 2013 when the phone was fresh; this year I talked about
Snappy, Ubuntu Core and, to some extent, Ubuntu Personal. Everyone seemed
receptive to the idea and the roadmap. I tried to go through the history
and lead the way to the logical conclusion of why a snappier
architecture was needed, instead of just laying it out, which seemed to
hit the nail on the head. I must add that the audience was a mix of
users and developers.

Listening in

I had the pleasure of listening to some great talks here and there; all were
good to some degree, but these are the ones that kept resonating after a while:

Software Libre en las Nubes, by Juanjo Ciarlante

Led with grace and ease; when he talked, it seemed so straightforward.
What was complicated felt simple and elegant. He rambled over the hot cloud
topic, going over a cube and triangle…

Juju, by Sebastián Ferrari

It was great; I liked how the presenter delivered it. So far my exposure
to how juju works and is used had been limited (I had the basics, but that
was it).

Ubuntu in Schools, by Neyder Achahuanco

This guy came from Puno, an engineer turned school teacher for the love of the
art. He went through how he failed at teaching kids to develop software
by jumping straight into it, and instead how he approached it with simpler
things, like programming without a computer, only using your mouth and ears,
later on diving into other things like codely, blocky and MIT’s Android
development kit. It seemed pretty effective, as he says the acceptance and joy
in his class is pretty good.

He did not stop there; with a sprinkle of Peruvian politics and comments on
how One Laptop per Child failed miserably in Peru, he told us his anecdote of
how he repurposes unopened OLPCs, boxed in a closet, with Ubuntu to be able to
teach kids.

My personal take on this is that if you want something like this to succeed, it
needs to be bottom up instead of top down, which he alluded to when telling the
story of how to get teachers out of their dogma and buy into change. It is
not that teachers don’t like change, more so that they can’t cope with things
just being dropped on them (like OLPCs sent to schools without electricity).

Closing remarks

I mostly liked the whole event; the organization was great. Everything was
streamed live through ubuntuonair.com and is available for offline consumption
through the Ubuntu on Air Youtube channel.

On Saturday, we had a group photo taken outside on campus just like what was
done during the UDS’s of the past.

While I’m not the best at socializing, I did have a good chance at it.
My socializing was mostly with other speakers, for some odd reason.

My only critique here is that it is hard to make this event known when the
host city moves around the whole Latin American continent. Well, it is two-
fold: on the one hand it’s good to spread the knowledge; on the other it
becomes harder to grow a base and dig deeper into the nitty gritty details.

Introduction

A note of caution: most of this is an experiment and lacks finesse.

Ubuntu Core was released over half a year ago using this nice thing called
snappy. The design allows for transactional updates, among other things; these
updates keep rolling through their streams and can be kicked out (rolled
back) if something is fishy, guaranteeing a certain level of confidence that
the system should not break.

Now introduce the concept of storage, that thing that will always limit you
no matter the amount; with this, consider that a popular method to avoid
running out is to garbage collect: old things you forgot about will just go away.

To add a twist to the story, imagine you want to wipe your system. Given the
fact that we have a clear separation of what is writable and what is intrinsic
to what makes up the core, this is rather trivial. This would indeed reset any
customization done to the system, but…

The OS parts that compose Ubuntu Core are also garbage collected; better said,
it is like a round robin of size 2. These parts of the system are implicitly
garbage collected, so if I want to do a real factory reset, there is no way
to do that today, because you no longer have the core part of the system that
the device came with.

There are a couple more questions:

do we want to update the boot loader of the running system?

can we recover from a completely broken system in an autonomous way?

how do we make this generic?

There are more…

Booting Ubuntu Core

We use two boot loaders to power this snappy system: one is grub, the other
u-boot. We default to the former for x86 based systems, while the latter we
use for arm.

Both are similar, using an A/B model to boot, with some try variables that the
boot loader in question reads to determine which system to boot and where to
roll back to in case of an issue.

The OS part of the system lives in either a partition labeled system-a or
system-b; this is similar to the Ubuntu rootfs, with the booting parts
stripped out into a platform specific part that lives in the system-boot
partition, also with an A/B scheme. Take note that the platform name and full
functionality is under (re)design, and it is also currently known as kernel or
device.

All snappy packages are intended to be real snaps; today these are the snappy
package types:

app

framework

oem (to be repurposed as gadget)

As mentioned, two more package types are arriving, platform and os which
today are driven by system image
pending a migration to actual snappy packages.

Bootloader

There is only one boot loader core that takes care of booting into the right
system; updating this boot loader logic adds risk, as breaking it would render
a system useless. As long as it’s not updated, everything should be fine.

Regular booting

When business is as usual, the boot loader will load its environment and read
the snappy_ab variable, which contains a value of either a or b, together
with the snappy_mode variable, which contains the value regular.

If snappy_ab were to have the value a, the kernel cmdline would contain an
entry with root=LABEL=system-a (it’s root=LABEL=system-$snappy_ab), whilst
the kernel line (for grub) would start with something like
linux /a/vmlinuz…; the initrd line for grub would be rather similar:
initrd /a/initrd.img

Booting into an upgrade

When the system updates the os and platform parts, if the
system was currently running from system-a it would drop the update onto
system-b, and onto the b part of the system-boot partition for the kernel,
initrd and related files. The snappy_mode variable would be set to try,
and after the system finished booting it would set the mode back to regular
and life would move on.

The experiment

In this experiment we want to have a recovery partition with its own boot
loader and the original image that was put on the system by the manufacturer.
This would allow:

for potential updates to the running systems.

a mechanism for a real factory wipe.

a tentative installer.

For this, two new components are needed:

a better ubuntu-device-flash (call it uflash).

a recovery component.

Additionally, and this is not final nor has it been discussed, we created some
stub platform and os snappy packages. The os snap is an ubuntu rootfs put into
a squashfs, while the platform snap provides the kernel, a modified initrd
that knows how to go into recovery or into a running system, and some boot
loader assets (an initial grub.cfg).

uflash would then populate a partition labeled recovery with all those snappy
packages passed in on the command line and grub’s core.img.

I want to point out again that this uflash thing is just for play
and its cli will likely be different if it comes to fruition.
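Purely for illustration, an invocation might look something like this (only --snap is mentioned in the text; every other flag is hypothetical, and as noted above the CLI will likely change):

```shell
$ uflash --os os.snap --platform platform.snap \
         --snap hello-world.snap -o ubuntu-core-recovery.img
```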

Recovering

The grub.cfg put into recovery would boot into recovery using the
platform and os snaps to drive it. The recovery logic would create two
new partitions:

boot

writable

It will then set up boot to have an A/B scheme and insert the platform and
os snappy packages used in the recovery partition.

All the snappy packages passed in with --snap in uflash will be installed
onto the system (depending on constraints defined in the gadget snap, which
are ignored here as they’re not part of the current experiment).

It will also install a core.img, which the boot loader in recovery would
jump to, providing boot loader independence for the running system.

This week, the snappy powered Ubuntu Core landed some interesting changes with
regards to how it handles grub based systems.

History

The original implementation was based on traditional Ubuntu systems, where a
bunch of files that any debian package could set up influenced the grub.cfg
that resulted after running update-grub. This produced a grub.cfg that was
really hard to manage or debug and, most importantly, out of our control.
This also differed greatly from our u-boot story, where any gadget could
override the boot setup so it bootstraps as needed.

We don’t want to own the logic for this, but only provide the guidelines for
the necessary primitives for proper rollback at that level to work.

These steps also make our boot loader stories look and feel more similar to
each other; soon we may be able to just not care about it, as the logic will
be driven as if it were a black box.

Rolling it out

Even though this work targeted the development release, also known as the
rolling one, we tried to make sure all systems would transition to this model
transparently. Given the model though, it isn’t a one step solution, as we need
to be able to update to systems which do not run update-grub and roll back
to systems that do. We also need to update from a system that has this new
snappy logic to one that doesn’t. This was solved with a very slick
grub.cfg delivered through the gadget packages (still named oem in
current implementations), in a manner similar to the u-boot and uEnv.txt
mechanics.

On a running system, these would be the steps taken:

Device is running a version that runs update-grub.

oem package update is delivered through autopilot containing the new
grub.cfg.

The os is updated bringing in some new snappy logic.

The next os update would be driven by the new snappy logic, which would
sync grub.cfg from the oem package into the boot loader area. This
new snappy would not run update-grub. The system would boot from the
legacy kernel paths; as if it were a delta update, no new kernel would be
delivered.

Updates would rinse and repeat; when there’s a new kernel provided in an
update, the boot loader a and b locations would be used to store that
kernel. grub.cfg already has logic to boot from those locations, so the
change is transparent.

On the next update, kernel asset file syncing would take place and populate
the other label (a or b).

This is the development release, so we shouldn’t worry too much about breaking
things, but why do so if it can be avoided ;-)

Outcome

The resulting code base is much simpler; there are fewer headaches and we don’t
need to maintain or understand huge grub script snippets. Just for kicks, this
is the grub.cfg we use:
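The original post embedded the file here; as a hedged reconstruction, the logic would be along these lines (snappy_ab and snappy_mode are the variables described earlier; the exact script is an assumption, not the shipped file):

```
set default=0
load_env

# in try mode, record that a trial boot happened; snappy flips the
# mode back to regular on a successful boot, otherwise we roll back
if [ "$snappy_mode" = "try" ]; then
    set snappy_trial_boot=1
    save_env snappy_trial_boot
fi

menuentry "Ubuntu Core" {
    linux /$snappy_ab/vmlinuz root=LABEL=system-$snappy_ab
    initrd /$snappy_ab/initrd.img
}
```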

We wanted to start a migration path from bazaar to git, given how ubiquitous it
is and due to the fact that most in our team prefer it. A few months ago the
decision would have been easy: since launchpad did not support git, we would
just switch to github given its popularity. That’s not true anymore…

Today launchpad supports git, so our comparison becomes finer grained and we
have to break it down a bit more.

So here are the things I like about github:

Code is presented first.

Documentation is easy to write and very nice to read.

Non technical people can make edits and propose pull requests all from the
web.

It’s a bit more social (e.g. you have mentions).

Web hooks and many things embracing them.

A big user base; almost everyone is already on github.

The code review interface.

The UI layout in general.

The API.

The things I like about launchpad:

Direct link between the source and ubuntu.

A very nice bug tracking system.

Given we work with Ubuntu, a very big existing database. Every other team
working on Ubuntu uses launchpad already.

Very product oriented.

A nice language translation system.

Most of the things I like about one are probably things that I don’t like or
find missing in the other.

snappy

Given we work on lp:snappy most of the time now, I want to have a look at the
would-be workflow on launchpad and on github.

The launchpad workflow

First of all, if the codebase were moved to launchpad’s git support we’d be
missing proper support to query merge proposal status and linking bug reports
to commits.

The flow with git would be as follows:

cd $GOPATH/src/launchpad.net/snappy.

git checkout -b <feature>

edit/create/fix

git commit -s -m '...'

git push git+ssh://USER@git.launchpad.net/~USER/snappy

Create merge proposal.

Manually merge.

Manually invoke test run.

git push git+ssh://USER@git.launchpad.net/snappy

It is an improvement over bzr (especially since branches are colocated and go
likes that), but we miss:

unit test runs.

unit test coverage tracking.

automatic merging, launchpad support required and a new tarmac
implementation.

translation support, only supported for bazaar.

package recipe to push latest trunk to a PPA, also requires launchpad support.

That said, things are coming along and most of this would be solved by either launchpad API enhancements to understand git or webhooks.

The github workflow

Given github’s popularity, mostly everything is already done for you, and since they have webhooks, a chain of events following an action on github gives us a very neat experience.

This is what would happen:

cd $GOPATH/src/launchpad.net/snappy.

git checkout -b <feature>

edit/create/fix

git commit -s -m '...'; if the issue number is part of the commit
message it gets linked by github.

git push git@github.com:/snappy.git

Create pull request.

travis is triggered by the event and runs everything we
tell it to:

Runs a test build.

Runs unit tests.

Runs sanity checks (go vet, lint, …).

Pushes the unit test coverage to coveralls.io.

Builds the deb.

The reviewer uses this data, updated in real time, alongside their own human
judgment to determine if the PR should be merged. This data includes the
travis unit test results and the coverage increase or decrease, among others,
with nice badges.

Click on Merge PR.

The master branch has its status/sanity presented with badges as well.
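As an illustration of wiring the steps above together, a Travis configuration could look roughly like this (a hypothetical sketch, not our actual .travis.yml; goveralls is one assumed way of pushing coverage to coveralls.io):

```yaml
# Hypothetical .travis.yml sketch for the flow described above
language: go
go:
  - 1.4
install:
  - go get -d -t ./...
  - go get github.com/golang/lint/golint
  - go get github.com/mattn/goveralls
script:
  - go vet ./...
  - golint ./...
  - go test ./...
after_success:
  - goveralls -service=travis-ci   # pushes coverage to coveralls.io
```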

Closing thoughts

It is no secret that I’ve been wanting to move to github for a while: it solves
many problems we have that we don’t want to go around and solve ourselves. It
is not a panacea, but it does seem fit for most of the things we need.

Given that now both launchpad and github support git we can ping pong between
them as seen fit (not out of spite though).

The biggest hurdle we’d be facing on every change is our go import paths, which
are absolute to make go get straightforward (even if we don’t take much other
advantage of it), for which one solution I’ve been wanting to try
is http://getgb.io.

In some sense I sometimes get the feeling that github is like vim and launchpad
is like emacs, and I am a vim person.

Imagine you get an update and the kernel panics with that update; what are you
to do? Suppose now that you have a snappy based system: this is automatically
solved for you.

Here’s a short video showing this on a BeagleBone Black. The concept is quite
simple: if there is a panic, we revert to the last known state. In this video I
inject an initrd that panics on boot after issuing a snappy update and before
rebooting into the update.

In this video you observe the following:

Manually checking for updates.

Manually applying the updates.

How the a/b boot selection is done.

Implicitly observe the internal (subject to change) layout of snappy-boot
and system-a or system-b selection.

Rebooting into the new kernel.

Observing a panic and rebooting back into the working system.

In the normal case this would seldom happen (the broken boot aside), as the
autopiloting feature is enabled by default today, which you can check by
running snappy config ubuntu-core.

The past few weeks in the snappy world have been a revolt, or better said, a rapid
evolution to get it closer to what we wanted it to be.

Some things have changed; if you are tracking the bleeding edge you will notice a
couple of changes: the store, for example, now exposes packages depending on the OS
release, and system images are now built against an OS release as well. For core
images we have two choices:

15.04

rolling

15.04 will be nicely locked down and guarantee stability, while rolling will just
roll on and you will see it stumble now and then (although it shouldn’t break badly;
APIs are what we will try and aspire to keep out of the breaking zone). Try is a
strong word, which is why channels are being used; the core images have the concept
of a channel, which can be:

stable

rc

beta

alpha

edge

Today, as of this writing, we are supporting edge and alpha for each OS release and as soon
as we release we will have a stable channel enabled. Store support for channels is coming to
a future near you which means that eventually packages can track different channels.

Another addition is a new snap type called oem, this snappy package allows OEMs to enable
devices and boards with a degree of customization such as:

preinstalled unremovable or removable packages

default configurations for preinstalled packages and ubuntu-core

lock down configurations

custom DTBs

boot files (e.g., u-boot, uEnv.txt)

This package, uploaded to the store, allows people to create custom enablements to
support their product stories. This package’s capabilities can grow in the future
to support some other niceties.

If you happen to use the development ppa for snappy, ppa:snappy-dev/tools, you
should be seeing a new ubuntu-device-flash in the updates which supports most of
this syntax and retires early enablement work.

So in order to create a default image for the BeagleBone Black you would do:
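Something along these lines (a sketch; the exact flag spelling may differ between ubuntu-device-flash versions, and --developer-mode is only needed for unsigned local oem snaps, as noted below):

```shell
# Sketch: create a 15.04 core image for the BeagleBone Black
# (flag spelling may vary between ubuntu-device-flash versions)
sudo ubuntu-device-flash core 15.04 --channel edge \
    --oem beagleboneblack.element14_1.0_all.snap \
    --developer-mode --output bbb.img
```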

15.04 could be replaced with rolling; today the default channel is edge, but it
will be stable as soon as we have something in there :-)

Keep in mind now that 15.04 and rolling will return different store search results
depending on what the developer has targeted.

Installing local oem snaps by passing in --oem forces you to set up
--developer-mode if the package is not signed by the store.

Last but not least, the flashassets entry from the device tarballs used to enable
new devices is now ignored in favor of using the information from the oem snappy
package; this means that if you have a port you will need to move it over to the
oem packaging.

Today the always in motion ppa ppa:snappy-dev/tools has landed support for
overriding the dtb provided by the platform in the device part with one provided
by the oem snap.

The package.yaml for the oem snap has been extended a bit to support this; an
example follows for extending the am335x-boneblack platform:

name: mydevice.sergiusens
vendor: sergiusens
icon: meta/icon.png
version: 1.0
type: oem
branding:
  name: My device
  subname: Sergiusens Inc.
store:
  oem-key: 123456
hardware:
  dtb: mydtb

A while back, Snappy was introduced
and it was great, while that was happening we were already working on the next
great thing, Snappy for devices, or as everyone calls it, things.

Today that was finally announced. It’s been
lots of fun working on this. Enablement aside, we also created a very minimal
webdm; it is a Web Device Management snap framework provided in the store,
which can be easily installed on existing devices by calling

sudo snappy install webdm

On networks where it is allowed, it can be accessed by going to
http://webdm.local:4200. Here’s a quick demo of it running on a BeagleBone Black.

So to get this going, all you need is to follow what is mentioned on the main
site and pop that sdcard into the device.

The install option is basically an option to install snaps during
provisioning; you may notice this weird one,
beagleboneblack.element14_1.0_all.snap; that is an oem snap and, in summary,
it’s similar to the customization framework in Ubuntu Touch, but different.
Today it’s pretty minimal and only allows setting the branding text, either as
can be seen in the video or at the login prompt, where you would see something
like this:

Ubuntu Core is what we’ve been working on this past time, it has been an
interesting ride. It was developed completely in the open, there was just no
real promotion about it until we were ready.

As you may have noticed, we use ubuntu-device-flash to create this core image,
and for development we used it across the board with the core subcommand. We did
learn a couple of things from the phone and decided to just provide a static
image that we could make sure would work for everyone giving it a try (aka more
QA). In essence you can still upgrade, and if something is not to your liking,
just roll back; it’s that neat. So in summary, ubuntu-device-flash today is
just a step in the release process to get to the final image.

Yesterday I played around with creating a snap for
camlistore and it was a breeze. To get it, just
snappy install camlistore; all the command line tools are in there, provided by
the binary stanza from
package.yaml.
The camlistored daemon is created in the services list, where I just needed
to provide a start entry, which in the background creates a systemd unit.
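For illustration, the relevant package.yaml stanzas could look roughly like this (a sketch; the binary names are camlistore’s own tools, but the exact entries in the actual package.yaml may differ):

```yaml
# Hypothetical package.yaml sketch for the camlistore snap
name: camlistore
version: 0.8
vendor: Sergio Schvezov
binaries:
  - name: bin/camput
  - name: bin/camget
  - name: bin/camtool
services:
  - name: camlistored
    start: bin/camlistored    # snappy turns this into a systemd unit
```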

The beauty here is that I don’t really need to know much of the underlying
technology, and that is awesome for just quickly creating a snap.

What is missing here, though, is an easy way to configure the package that was
just installed; for now, it should be as easy as looking at the
file system layout and
going to /var/lib/apps/<app-name>/<version>/, which would be
/var/lib/apps/camlistore/0.8, and within we’d have
.config/camlistore/server-config.json; in most cases you’d want to set up your
authentication in there.
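Putting those paths together, configuring it boils down to editing something like:

```shell
# Edit the server config inside the snap's writable data area
sudo vi /var/lib/apps/camlistore/0.8/.config/camlistore/server-config.json
```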

And here’s the mandatory screenshot of this running on my kvm instance:
