Now, if you want a detailed, developer's look at building a Snappy Ubuntu image and running it on a BeagleBone, you're in luck! I shot this little instructional video (using Cheese, GTK-RecordMyDesktop, and OpenShot). Enjoy!

A transcript of the video follows...

What is Snappy Ubuntu?

A few weeks ago, we introduced a new flavor of Ubuntu that we call “Snappy” -- an atomically, transactionally updated Operating System -- and showed how to launch, update, rollback, and install apps in cloud instances of Snappy Ubuntu in Amazon EC2, Microsoft Azure, and Google Compute Engine public clouds.

And now we’re showing how that same Snappy Ubuntu experience is the perfect operating system for today’s Cambrian Explosion of smart devices that some people are calling “the Internet of Things”!

Snappy Ubuntu Core bundles only the essentials of a modern, app-store-powered Linux OS stack, leaving room in both size and flexibility to build, maintain, and monetize your very own device solution without the overhead of inventing and maintaining your own OS and tools from scratch. Snappy Ubuntu Core comes right in time for you to put your very own stake into the still unconquered world of things.

We think you’ll love Snappy on your smart devices for many of the same reasons that there are already millions of Ubuntu machine instances in hundreds of public and private clouds, as well as the millions of your own Ubuntu desktops, tablets, and phones!

Unboxing the BeagleBone

Our target hardware for this Snappy Ubuntu demo is the BeagleBone Black -- an inexpensive, open platform for hardware and software developers.

I paid $55 for the board, and $8 for a USB-to-TTL serial cable.

The board is about the size of a credit card, has a 1GHz ARM Cortex-A8 processor, 512MB RAM, and on-board Ethernet.

While Snappy Ubuntu will run on most any armhf or amd64 hardware (including the Intel NUC), the BeagleBone is perhaps the most developer friendly solution.

The easiest way to get Snappy Ubuntu running on your BeagleBone

The world of devices offers so many opportunities that it won’t be possible to give everyone the perfect vertical stack centrally. Hence Canonical is trying to enable all of you, providing the elements that get you started on your innovation as quickly as possible. Since many devices won’t need a screen and input devices, we have developed “webdm”. webdm gives you the ability to manage your snappy device and consume apps without any development effort.

To install, you simply download our prebuilt webdm .img and dd it to your SD card.

After that, all you have to do is connect your BeagleBone to a DHCP-enabled local network and power it on.

After 1-2 minutes, go to http://webdm.local:8080 and you can get on with installing apps from the snappy app store without any further effort.
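The whole flow can be sketched in a couple of commands. Note that the image filename here is an assumption (use whatever name you downloaded), and /dev/sdX is a placeholder for your SD card device:

```shell
# Write the prebuilt webdm image to the SD card.
# /dev/sdX is a placeholder -- verify your card's device node with `lsblk` first,
# as dd will happily overwrite the wrong disk.
sudo dd if=webdm.img of=/dev/sdX bs=1M oflag=dsync
sync
# Insert the card into the BeagleBone, connect Ethernet, power on,
# wait a minute or two, then browse to http://webdm.local:8080
```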

Of course, we are still in beta and will continue to give you more features and a better experience over time; we will not only improve the UI, but also work on various customization options that allow you to deliver your own app-store-powered product without investing your development resources in something that has already been solved.

Downloading Snappy and writing it to an SD card

Now we’re going to build a Snappy Ubuntu image to run on our device.

Soon, we’ll publish a library of Snappy Ubuntu images for many popular devices, but for this demo, we’re going to roll our own using the tool, ubuntu-device-flash.
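The build command itself is elided in this transcript; a plausible invocation of ubuntu-device-flash from that era looked roughly like the following. The release number, channel, and oem names here are assumptions, not taken from the video:

```shell
# Hypothetical flags -- check `ubuntu-device-flash --help` on your version,
# as the tool's interface changed frequently during the Snappy beta.
sudo ubuntu-device-flash core 15.04 \
    --channel stable \
    --oem beagleblack \
    --developer-mode \
    --output mysnappy.img
```

After this finishes, the `ls` and `dd` steps below inspect the resulting image and write it to the SD card.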

ls -halF mysnappy.img

sudo dd if=mysnappy.img of=/dev/mmcblk0 bs=1M oflag=dsync

Hooking up the BeagleBone

Insert the microSD card

Network cable

USB debug

Power/USB

Booting Snappy and command line experience

Okay, so we’re ready for our first boot of Snappy!

Let’s attach to the USB serial console using screen.
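On a typical Linux host the serial cable shows up as /dev/ttyUSB0 (an assumption; check `dmesg` after plugging it in), and the BeagleBone console runs at 115200 baud:

```shell
# Attach to the USB serial console at 115200 baud.
# Detach with Ctrl-A then D; kill the session with Ctrl-A then K.
sudo screen /dev/ttyUSB0 115200
```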

Now, I’ll attach the power, and if you watch very carefully, you might get to see a few boot messages.

In the Ubuntu Download Manager we are using the new connection-style syntax so that if there are errors in the signal connections we will be notified at compile time. However, in recent versions of udm we have noticed that the udm tests that ensure that the Qt signals are emitted correctly have started failing randomly on the build servers.

As can be seen in the following build logs, the compilation finishes with no errors, but the tests raise errors at runtime (an assert was added for each of the connect calls in the project):

I am not the only one who has encountered this bug within Canonical (check out this bug). Apparently -Bsymbolic breaks PMF (pointer-to-member-function) comparison under ARM, as reported in Linaro. As explained on the Linaro mailing list, a workaround (since the correct fix would be in the linker) is to build with PIE support. The Qt developers have decided to drop -Bsymbolic* on anything but x86 and x86-64. I hope all this info helps others who hit the same problem.

Today we’re introducing some new features into Ubuntu’s systems management and monitoring tool, Landscape. Organisations will now be able to use Landscape to manage Hyperscale environments ranging from ARM to x86 low-power designs, adding to Landscape’s existing coverage of Ubuntu in the cloud, data centre server, and desktop environments. There’s an update to the Dedicated Server too, bringing SAAS and Dedicated Server versions in alignment.

Hyperscale is set to address today’s infrastructure challenges by providing compute capacity with less power for lower cost. Canonical is at the forefront of the trend. Ubuntu already powers scale-out workloads on a new wave of low-cost ultradense hardware based on x86 and ARM processors including Calxeda EnergyCore and Intel Atom designs. Ubuntu is also the default OS for HP’s Project Moonshot servers.

This update includes support for ARM processors and allows organisations to manage thousands of Hyperscale machines as easily as one, making it more cost-effective to run growing networks spanning tens of thousands of devices. The same patch management and compliance features are available for ARM as they are for x86 environments, making Landscape the first systems management tool of a leading Linux vendor to introduce ARM support – and we are doing so on a level of feature parity across architectures.

Calxeda is the leading innovator engaged in bringing ARM chips to servers and partnered with us early on to bring Ubuntu to their new platform. “Landscape system management support for ARM is a huge step forward”, said Larry Wikelius, co-founder and Vice President at Calxeda. “Adding datacenter-class management to the Ubuntu platform for ARM demonstrates Canonical’s commitment to innovation for Hyperscale customers, who are looking to Calxeda to help improve their power efficiency.”

“Landscape’s support for the ARM architecture extends to all ARM SoCs supported by Ubuntu, but we adopted the Calxeda EnergyCore systems in our labs as the reference design in light of both their early arrival to market and their maturity”, said Federico Lucifredi, Product Manager for Landscape and Ubuntu Server at Canonical, adding “we are excited to be bringing Landscape to Hyperscale systems on both ARM and x86 Atom architectures.” CIOs and System Administrators considering implementing Hyperscale environments on Ubuntu will now have access to the same enterprise-grade systems management and monitoring capabilities they enjoy in their data centres today with Landscape.

Kurt Keville, HPC Researcher at Massachusetts Institute of Technology (MIT) commented: “MIT’s interest in low power computing designs aims to achieve the execution of production HPC codes at the same level of numerical performance, yet within a smaller power envelope.” He added: “With Landscape, we can manage our ARM development clusters with the same kind of granularity we are accustomed to on x86 systems. We are able to manage ARM compute clusters without affecting our production network bandwidth in any way”.

The Parallella Board project aims to make parallel computing ubiquitous through an affordable Open Hardware platform equipped with Open Source tools. Andreas Olofsson, CEO, Adapteva said: “We selected Ubuntu as our default platform because of its popularity with the developer Community and relentless pace of updating, regularly providing our users with the newest builds for any project.” He added: “ The availability of a management and monitoring platform like Landscape is essential to managing complexity as the scale of Parallella clusters rapidly reaches into the hundreds or even thousands of nodes.”

As we talk to customers building cloud infrastructure or big data computing environments, it’s clear that power consumption and efficient scaling are key drivers to their architectural decisions. When these considerations are coupled with Landscape’s efficiency and scalable management characteristics, we believe enterprises will be able to achieve a significant shift in both scalability and manageability in their data centre through Hyperscale architecture.

Ubuntu is the default OS for HP’s project Moonshot cartridges, ships or is available for download to every Moonshot customer, with direct support from HP backed by Canonical’s worldwide support organization. The Landscape update today also means that the full bundle of Ubuntu Advantage support and services becomes available to Moonshot customers.

“Canonical continues to lead the way in the Hyperscale OS arena introducing full enterprise-grade support services for Ubuntu on Hyperscale hardware”, remarked Martin Stadtler, Director of Support Services at Canonical.

Landscape’s Dedicated Server edition has also been refreshed in this update. This means that those businesses choosing to keep the service onsite (rather than hosted) will benefit from the same functionality and a series of updates already available to SAAS customers, including the new audit log facility and performance enhancements, while retaining full local control of their management infrastructure.

Samsung announced an eight-core (yes, an eight-core!) ARM processor, which may power the Samsung Galaxy S4.

Rather than a single eight-core chip, it has two quad-cores inside – one being a quad-core ARM Cortex A15 and the other a quad-core Cortex A7. The Cortex A15 deals with the tough stuff but passes off the easy tasks to the Cortex A7, or they can both be fired up to really show off. This means it’s strong enough to provide all the power you may need, while at the same time being smart enough to conserve energy when it can. If you’re wondering just how much difference the Exynos 5 Octa and other big.LITTLE chips will make when used in a device, ARM’s CEO Warren East said he expects “twice the performance and half the power consumption” compared to today’s best offerings.

While ARM is gaining a lot of momentum, the challenge with ARM until now was that each vendor’s architecture is very different and requires a separate kernel and entire OS stack.

With Linux Kernel 3.7, this has changed for the better.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use one single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.

Just as good for those ARM architects and programmers who are working on high-end, 64-bit ARM systems, Linux now supports 64-bit ARM processors. 64-bit ARM CPUs won’t ship in commercial quantities until 2013. When they do arrive, though, programmers eager to try 64-bit ARM processors on servers will have Linux ready for them.

The main new feature is support for foreign architectures in apport-retrace. If apport-retrace runs in sandbox mode on a crash that was not produced on the architecture apport-retrace itself is running on, it will now build a sandbox for the report’s architecture and invoke gdb with the necessary magic options to produce a proper stack trace (and the other gdb information).
Right now this works for i386, x86_64, and ARMv7, but if someone is interested in making this work for other architectures, please ping me.
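As a sketch, a sandboxed retrace might be invoked like this. The crash file path is a made-up example, and you should check apport-retrace(1) for the exact option names in your version:

```shell
# --sandbox builds a temporary package sandbox instead of installing debug
# symbols system-wide; with this release it works even when the report's
# architecture differs from the host's. The report path is hypothetical.
apport-retrace --sandbox system \
    --cache ~/.cache/apport-retrace \
    --output retraced.crash \
    /var/crash/_usr_bin_myapp.1000.crash
```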

This is rolled out to the Launchpad retracers, see for example Bug #1088428. So from now on you can report your armhf crashes to Launchpad and they ought to be processed. Note that I did a mass-cleanup of old armhf crash bugs this morning, as the existing ones were way too old to be retraced.

For those who are running their own retracers for their project: you need to add an armhf-specific apt sources list to your per-release configuration directory, e.g. Ubuntu 12.04/armhf/sources.list, as armhf is on ports.ubuntu.com instead of archive.ubuntu.com. Also, you need to add an armhf crash database to your crashdb.conf and add a cron job for the new architecture. You can see what all this looks like in the configuration files for the Launchpad retracers.
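For illustration, the armhf sources list for a 12.04 retracer might look like this (the exact pocket and component list is an assumption; the key difference from x86 is the ports.ubuntu.com mirror):

```
# Ubuntu 12.04/armhf/sources.list -- armhf lives on ports.ubuntu.com,
# not archive.ubuntu.com
deb http://ports.ubuntu.com/ubuntu-ports precise main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports precise-updates main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports precise-security main restricted universe multiverse
deb-src http://ports.ubuntu.com/ubuntu-ports precise main restricted universe multiverse
```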

The other improvement concerns package hooks. So far, when a package hook crashed, the exception was only printed to stderr, where most people would never see it when using the GTK or KDE frontend. With 2.7, these exceptions are also added to the report itself (HookError_filename), so that they appear in the bug reports.

The release also fixes a couple of bugs, see the release notes for details.

The Ubuntu Developer Summit was held in Copenhagen last week, to discuss plans for the next six-month cycle of Ubuntu. This
was the most productive UDS that I've been to — maybe it was the shorter four-day schedule, or the overlap with Linaro Connect, but it sure felt like a whirlwind of activity.

I thought I'd share some details about some of the sessions that cover areas I'm working
on at the moment. In no particular order:

Improving cross-compilation

This plan is a part of a multi-cycle effort to improve cross-compilation
support in Ubuntu. Progress is generally going well — the consensus from the
session was that the components are fairly close to complete, but we still
need some work to pull those parts together into something usable.

So, this cycle we'll be working on getting that done. While we have a few
bugfixes and infrastructure updates to do, one significant part of this cycle’s
work will be to document the “best-practices” for cross builds in Ubuntu, on
wiki.ubuntu.com. This process will be
heavily based on existing pages on the Linaro wiki. Because most of the
support for cross-building is already done, the actual process for
cross-building should be fairly straightforward, but needs to be
defined somewhere.

I'll post an update when we have a working draft on the Ubuntu wiki,
stay tuned for details.

Rapid archive bringup for new hardware

I'd really like for there to be a way to get an Ubuntu archive built
“from scratch”, to enable custom toolchain/libc/other system components
to be built and tested. This is typically useful when bringing up new hardware,
or testing rebuilds with new compiler settings. Because we may be dealing
with new hardware, doing this bootstrap through cross-compilation is
something we'd like too.

Eventually, it would be great to have something as straightforward as the
OpenEmbedded or OpenWRT build process to construct a repository with a core set
of Ubuntu packages (say, minbase), for previously-unsupported hardware.

The archive bootstrap process isn't done often, and can require a large
amount of manual intervention. At present, there's only a couple of
folks who know how to get it working. The plan here is to document the
bootstrap process in this cycle, so that others can replicate the process,
and possibly improve the bits that are a little too janky for general
consumption.

ARM64 / ARMv8 / aarch64 support

This session is an update for progress on the support for ARMv8 processors
in Ubuntu. While no general-purpose hardware exists at the moment, we want
to have all the pieces ready for when we start seeing initial
implementations. Because we don't have hardware yet, this work has to be
done in a cross-build environment; another reason to keep on with the
foundations-r-improve-cross-compilation plan!

Although kernel support isn’t urgent at the moment, we’ll be building an
initial kernel-headers package for aarch64. There's also a plan to get a page
listing the aarch64-cross build status of core packages, so we'll know what
is blocked for 64-bit ARM enablement.

We’ve also got a bunch of workitems for volunteers to fix cross-build issues
as they arise. If you're interested, add a workitem in the blueprint, and keep an eye on it for updates.

Secure boot support in Ubuntu

This session covered progress of secure boot support as at the 12.10
Quantal Quetzal release,
items that are planned for 13.04, and backports for 12.04.2.

As for 12.10, we’ve got the significant components of secure boot
support into the release — the signed boot chain. The one part that hasn't
hit 12.10 yet is the certificate management & update infrastructure, but that
is planned to reach 12.10 by way of a not-too-distant-future update.

The foundations team also mentioned that they were starting the 12.04.2
backport right after UDS, which will bring secure boot support to our
current “Long Term Support” (LTS) release. Since the LTS release is often
preferred in Ubuntu preinstall situations, this may be used as a base for
hardware enablement on secure boot machines. Combined with the certificate
management tools (described at sbkeysync & maintaining uefi key databases), and the requirement for
“custom mode” in general-purpose hardware, this will allow for user-defined
trust configuration in an LTS release.

As for 13.04, we're planning to update the shim package to a more recent
version, which will have Matthew Garrett's work on the Machine Owner Key
plan from SuSE.

We're also planning to figure out support for signed kernel modules, for
users who wish to verify all kernel-level code. Of course, this will mean
some changes to things like DKMS, which run custom module builds outside
of the normal Ubuntu packages.

Netboot with secure boot is still in progress, and will require some
fixes to GRUB2.

And finally, the sbsigntools codebase could do with some
new testcases, particularly for the PE/COFF parsing code. If you're interested
in contributing, please contact me at jeremy.kerr@canonical.com.

The repositories here contain sources and binaries for the arm64 bootstrap in Debian (unstable) and Ubuntu (quantal). There are both toolchain and tools packages for amd64 build machines and arm64 binaries built with them. And corresponding sources.

Most of the components of the 64-bit ARM toolchain have been released, so I've put together some details on building a cross compiler for aarch64. At present, this is only binutils & compiler (ie, no libc), so is probably not useful for applications. However, I have a 64-bit ARM kernel building without any trouble.

pre-built toolchain

If you're simply looking to download a cross compiler, here's one I've built earlier: aarch64-cross.tar.gz (.tar.gz, 85MB). It's built for an x86_64 build machine, using Ubuntu 12.04 LTS, but should work with other distributions too.

The toolchain is configured for paths in /opt/cross/. To install it, do a:
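The command itself is missing from this excerpt; given the /opt/cross/ prefix, the install step is presumably something like the following (the bin/ subdirectory is an assumption about the tarball layout):

```shell
# Unpack the prebuilt cross toolchain into the prefix it was configured for.
sudo mkdir -p /opt/cross
sudo tar -C /opt/cross -xzf aarch64-cross.tar.gz
# Then put the cross tools on your PATH (directory name is an assumption;
# check the tarball contents with `tar -tzf aarch64-cross.tar.gz | head`).
export PATH=/opt/cross/bin:$PATH
```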

This week I’m proudly participating in the Ubuntu Developer Summit to help plan and define what the Quantal Quetzal (12.10) release will be over the following months.

As usual I’m wearing not only my Linaro hat, but also my Ubuntu and Canonical ones, interested and actively participating in most topics related to ARM in general.

And what can I say after the first three days at UDS-Q? Well, busier than ever before, and with great opportunities to help Ubuntu rock even more on ARM, both with current devices/platforms and with the exciting new ones coming in the next few months.

Great start as usual by Mark, showing the great opportunities for both Canonical and Ubuntu, describing the new targets and use cases, and also showing how important the cloud now is for Ubuntu. After that we finally had the announcement of real hardware availability from Calxeda, proving that ARM servers are indeed real! (which is quite an important accomplishment)

This was the first time that all the schedule displays at UDS were driven by ARM boards provided by Linaro. This time we had Pandaboard, Origen and Snowball boards constantly showing the schedule throughout the day. Low-power yet powerful devices all around :-)

A discussion covering all the possible embedded use cases for Ubuntu, trying to understand the real requirements for a minimal filesystem (rootfs) for those devices. While we decided not to generate the smallest-still-apt/dpkg-compatible rootfs for our users (as ubuntu-core already covers most of the cases), we’ll provide enough tools and documentation on how to easily generate one. On the Linaro side, the Ubuntu Nano image should probably reflect these suggestions.

Here the focus was basically to review whether we would really continue providing pre-installed images instead of just supporting live-based ones. Shipping the images only on SD cards makes bootstrap and installation quite easy, but it badly hurts performance. As we’re now getting ARM boards that are very powerful in many ways, the I/O bottleneck shouldn’t limit what users can get from them. The decision for Quantal is to drop support for the pre-installed images and provide live-based ones on SD cards (think of a live SD image, like the live CDs on other architectures), where the user installs Ubuntu the same way as on x86, using USB/SATA-based devices as the rootfs by default.

The focus of this session was to better understand the missing pieces for proper OpenStack support on ARM. Quite a few questions are still open, but enabling the missing packages, LXC testing and support, and KVM for a few platforms will help make sure the support is correctly in place. After initial support, continuous testing and validation should happen to make sure the ARM platforms stay well supported over time (which will be better stressed and tested once MAAS/Juju is properly supported on ARM).

Clearly the most important session of the day for ARM. Great discussion on how to prepare and start the ARMv8 port of Ubuntu and Debian, beginning with cross-build support via multiarch and later with Fast Models and QEMU. A lot is still to be covered once ARM is able to publish ARMv8 support for the toolchain and kernel, and the session will be reviewed again at Linaro Connect at the end of this month.

The usual review of the patches the Ubuntu Kernel team is maintaining in the Ubuntu kernel tree. At Linaro this is important as we also enable the Ubuntu-specific patch set in the packages provided by the LEB, for proper kernel and user-space support. Luckily this time the delta seems really minimal, and it should probably also become part of Linux Linaro in the following month.

The usual discussion about avoiding duplicated work that is strictly related to each ARM board we support at both Ubuntu and Linaro. The decision is to finally sync with the latest flash-kernel available in Debian and to create a common project/package with the hardware-specific bits in place, so it can be used by linaro-image-tools, flash-kernel and debian-cd.

A session to review and plan the next steps for the MAAS project, which is also missing proper ARM support for now. Great discussions on understanding all the requirements, as they won’t necessarily match the usual ARM devices we have at the moment. The goal for ARM is to continue improving PXE support in U-Boot (even with UEFI chainloading later), and to understand what might be missing to also have IPMI support (even if not entirely provided by the hardware).

A great session covering the improvements and development planned on the graphics side for the next release. The goal is to use a system compositor started right at the beginning of boot, which is then controlled and used properly once lightdm is up (with X11). This will greatly improve the user experience on normal x86-based desktops, and luckily on ARM we’re also in quite a nice situation, with Linaro’s work helping to get proper DRM/KMS support for the boards we support, so I hope ARM will be in great shape here :-)

In this session we covered what seems to be the most recurrent and problematic issue in supporting ARM servers: the lack of a single, supported boot method and boot loader. UEFI should be able to help on this front soon, but until then the focus will be to keep checking and making sure the current PXE implementation in U-Boot works as expected (chainloading UEFI from U-Boot is another possibility Linaro is investigating). There is also the request for IPMI support, and it is still unclear how that will be done generically.

As Ubuntu is also moving toward continuously validating and testing all important components, there’s a need for proper validation of the bootloader and of its effect on the user experience while booting the system. ARM is a special case here, as U-Boot is still the main bootloader used across the boards. Test case descriptions are in place, and discussion will probably continue at Linaro Connect, as this is also an area where we want to help with validation and testing.

Here the Ubuntu Server team presented how they benchmark and check performance at the server level on x86, and covered what is still needed to run and validate the ARM boards the same way. For ARM the plan is to run the same test cases in the available scenarios, and to get Linaro involved by making this part of the continuous validation and testing done with LAVA. Another important topic that will probably be extended at Linaro Connect is finding a way to collect power consumption data while running the test cases and benchmarks, so they can be further optimised later on.

The last session of the day, trying to close the remaining gaps to finally get OpenGL ES 2.0 support merged into the upstream Compiz and Unity branches used by the entire Ubuntu desktop (across all architectures). The follow-up work will basically be to fix the remaining important plugins after merging the changes, and to add a few test cases to properly validate the support in Ubuntu. Once that’s all done, it should be merged ASAP.

These are just a few of the topics I was able to participate in. There is a lot more exciting work coming, which can all be found at http://summit.ubuntu.com/uds-q/. Remember that you can still participate in a few sessions tomorrow and Friday, as remote access is provided for all of them.

I’m sure a lot more exciting stuff will be discussed for ARM support until the end of this week, and at Linaro Connect, at the end of the month, we’ll be able to review it and get our hands dirty as well :-)

On Monday, Calxeda, one of the leading innovators bringing revolutionary efficiency to the datacenter, unveiled their new EnergyCore reference server live onstage with Mark Shuttleworth at the Ubuntu Developer Summit (UDS) in Oakland California.

Calxeda CTO and Founder Larry Wikelius with Mark Shuttleworth at UDS

The choice of UDS as the venue to unveil the new hardware to the world was flattering and underlines how the innovators in next-generation computing are building out a compelling platform together. Ubuntu and Calxeda have been working together for several years to bring Ubuntu on Calxeda to market in the form now being shown at UDS. The collaboration of Canonical and the Ubuntu community with Calxeda has been vital to delivering a solution that can very easily deploy an OpenStack-based cloud using MAAS and Juju on such innovative hardware.

The EnergyCore reference server unveiled at UDS can house up to 48 Quadcore nodes at under 300 Watts with up to 24 SATA drives. In this configuration it is possible to house 1000 server instances in a single rack and other server form factors being developed by OEMs may enable several times this volume. It is precisely this type of power efficient technology that will accelerate the adoption of next generation hyperscale services such as cloud and we are proud to be at the very core of it.

So congratulations to Calxeda on the arrival of the EnergyCore and congratulations to Canonical and the Ubuntu Community for providing the platform that will power it.

For those following the development of the next Ubuntu release (12.04 – Precise Pangolin), you all know that we’re quite close to the release date already, and to make sure Precise rocks from day 0, we all need to work hard to get most of the bugs sorted out over the next few weeks.

At Linaro, the Linaro Developer Platform team will be organizing an ARM porting jam this Friday, with the goal of getting all interested developers fixing and working on bugs and portability issues related to the Ubuntu ARM port (mostly issues with ARMHF at the moment).

The idea of holding the porting jam on Friday is to make it a joint effort with Ubuntu’s Fix Friday and Ubuntu Global Jam, so expect quite a few other developers helping to improve Ubuntu as well!

Remember that this release will be quite a huge milestone for ARM, as it’ll be the first LTS release supporting ARM, besides delivering support for ARM servers and ARMHF as the default, so let’s make sure it rocks!

Yesterday Canonical announced the first UI concept for Ubuntu TV. Together with the announcement, the first code drop was released, so we could read and better understand the technologies used, and how this will behave in an ARM environment, mostly on a Pandaboard (where we already have OpenGL ES 2 and video decode working).

As it’s quite close to Unity 2D (similar code base), and also based on Qt, I decided to follow the steps described on the wiki page and see if it would work correctly.

The first issue we found with Qt was that it wasn’t rendering full screen when used with the latest PowerVR SGX drivers, so any application using Qt OpenGL would show up on only a small part of the screen. Luckily TI (Nicolas Dechesne and Xavier Boudet) quickly provided me with a new release of the driver fixing this issue (a version that should land later today in the Linaro overlay), so I could continue my journey :-)

The next problem was that Qt was enabling brokenTexSubImage and brokenFBOReadBack for the SGX drivers, based on the old driver versions available for the Beagle; it seems this is no longer needed with the current version available on the Pandaboard (still to be reviewed with TI, so a proper solution can be forwarded to Qt).

Code removed, patch applied and package built (after many hours), and I was finally able to open the Ubuntu TV interface on my Panda :-)

UI Navigation on a Pandaboard, with Qt and OpenGL ES2.0

Running Ubuntu TV is quite simple if you’re already running the Unity 2D interface. All you need to do is make sure you kill all unity-2d components and that you’re running metacity without compositing enabled. Other than that, you just run ”unity-2d-shell -opengl” and voilà ;-)
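Spelled out, the steps above look something like this. The process names come from the unity-2d packaging of the day, and the metacity flag is an assumption (some builds instead control compositing via a gconf key), so treat this as a sketch:

```shell
# Stop the running unity-2d components (names per the unity-2d packages;
# adjust if your installed set differs).
killall unity-2d-shell unity-2d-panel unity-2d-spread 2>/dev/null

# Restart metacity with compositing disabled (flag availability varies by
# metacity version; compositing can also be toggled via gconf).
metacity --replace --no-composite &

# Launch the Ubuntu TV shell with the OpenGL backend.
unity-2d-shell -opengl
```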

Here’s a video of the current interface running on my Panda:

As you can see from the video, I didn’t actually play any video, and that’s because we currently lack a generic texture handler for OpenGL ES with GStreamer in Qtmobility (there is one available, but it’s specific to MeeGo). Once that’s fixed, video playback should behave similarly to XBMC (but with fewer hacks, as it’s a native GST backend).

Next steps: enabling proper video decode

Looking at what would be needed to finally play videos, and to make this something useful on your Pandaboard, the first thing is that we need to improve Qtmobility to have a more generic (but unfortunately still OMAP-specific) way to handle texture streaming with GStreamer and OpenGL ES. Rob Clark added similar functionality to XBMC, creating support for ”eglImage”, so we just need to port that work and make sure it works properly with Qtmobility.

Once that’s ported, the video should be streamed as a texture to the video surface, making it also work transparently with QML (the way it’s done with Ubuntu TV).

If you know Qt and GStreamer, and also want to help getting this to work properly on your Panda, here are a few resources:

As described in my previous post about Ubuntu TV support on a Pandaboard, we were still missing proper texture-streaming support on the Pandaboard, needed to get video playback working and fully accelerated.

This weekend Rob Clark managed to create the first version of TI-specific eglImage support in Qtmobility, posting the code at his gitorious account, and for the first time we’re fully able to use Ubuntu TV on an ARM device, using a Pandaboard.

Demo video with the Ubuntu TV UI (accelerated with Qt and OpenGL ES 2.0) and with video decode support for 720p and 1080p:

The TI eglImage support code still needs a few clean-ups, but we hope to be able to push it into Ubuntu in the following weeks (at least good enough to land as a package patch).

For people wanting to try it out, a few packages are already available in Linaro’s Overlay PPA, and the remaining ones (Qt and Qtmobility) should be available later today, so people can easily run it with our images.

Hope you enjoy it, and we’ll keep working on maintaining and improving the current support, so Ubuntu TV also rocks on ARM :-)

During the end of October and the beginning of November we had the last Linaro Connect of the year. This time it was held together with the Ubuntu Developer Summit, giving us the opportunity to better discuss the roadmap with both the Linaro and Ubuntu teams.

From the Developer Platform team’s perspective, we had quite a nice week, with demos happening on Monday and Friday (showing people what we’ve been working on), and also some great news shared with the Ubuntu team, now that Mark Shuttleworth has announced that Ubuntu will go to tablets, TVs and phones (and ARM will surely be a huge part of that).

Today HP announced Project Moonshot – a programme to accelerate the use of low-power processors in the data centre.

The three elements of the announcement are the launch of Redstone – a development platform that harnesses low-power processors (both ARM & x86), the opening of the HP Discovery lab in Houston and the Pathfinder partnership programme.

Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.

The HP Redstone platform unveiled in Palo Alto showcases HP’s thinking around highly federated environments and Calxeda’s EnergyCore ARM processors. The Calxeda system-on-chip (SoC) design is powered by Calxeda’s own ARM-based processor and combines mobile-phone-like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.

The promise of server-grade SoCs running at less than 5W and achieving a per-rack density of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data – when will they be ready for this new architecture?

Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.

The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM. This work, by us and by upstream open source projects, is going to be accelerated by today’s announcement and by access to hardware in the HP Discovery lab.

As HP stated today, this is the beginning of a journey to re-invent a power-efficient and less complex data centre. We look forward to working with HP and Calxeda on that journey.

I've long had a personal interest in the energy efficiency of the Ubuntu Server. This interest has manifested in several ways. From founding the PowerNap Project to using tiny netbooks and notebooks as servers, I'm just fascinated with the idea of making computing more energy efficient.

It wasn't so long ago, in December 2008 at UDS-Jaunty in Mountain View that I proposed just such a concept, and was nearly ridiculed out of the room. (Surely no admin in his right mind would want enterprise infrastructure running on ARM processors!!! ... Um, well, yeah, I do, actually....) Just a little over two years ago, in July 2009, I longed for the day when Ubuntu ARM Servers might actually be a reality...

My friends, that day is here at last! Ubuntu ARM Servers are now quite real!

A huge round of kudos goes to the team of outstanding engineers at Canonical (and elsewhere) doing this work. I'm sure I'm leaving off a ton of people (feel free to leave comments about who I've missed), but the work that's been most visible to me has been by:

Over the past month I’ve been working with John Rigby to integrate the SMSC95XX and OMAP4 EHCI patches into Linaro U-Boot, so we could deliver the network-booting feature for people using Pandaboards.

Those patches were published on the U-Boot mailing list, but are still a work in progress. While we help the original developers get the patches accepted upstream, we also want to deliver the functionality to our users, so all those patches are now integrated into the Linaro U-Boot tree.

This should be enough to get your Pandaboard booting with PXE. You can also script these commands in your boot.scr file, which U-Boot loads automatically from your SD card, so you don’t have to type them by hand every time you reboot your board.
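For reference, a generic network boot from the U-Boot prompt looks something like the following (the load address, file name and server IP are placeholders, and the exact command set depends on how your U-Boot was built):

```
U-Boot# usb start                     # probe USB so the SMSC95XX NIC appears
U-Boot# setenv autoload no
U-Boot# dhcp                          # obtain an IP address
U-Boot# setenv serverip 192.168.1.10  # your TFTP server (placeholder address)
U-Boot# tftpboot 0x80200000 uImage    # load the kernel into RAM over TFTP
U-Boot# bootm 0x80200000
```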

In case it doesn’t work for you, just ping me (rsalveti) at #linaro on freenode :-)

As discussed at last month’s Ubuntu Developer Summit in the session ‘ARM and other architectures certification program‘, there’s a plan to start certifying ARM hardware, or at least to start investigating how we’ll do it. To this end I’ve received on loan a TI OMAP4 Pandaboard from Canonical’s ARM QA team. I’ve actually had it here in the office for quite a few weeks now, but for some reason or another I haven’t got around to blogging about it until now!

So, without further ado – here are a couple of shots of my setup:

I like it because it’s really compact and smacks of geekiness, with all the exposed circuits, yet it’s really quite easy to use in a lot of ways. The monitor is plugged in via the HDMI port on the right-hand side (because of an issue with my monitor I can only get 640×480 out of it, so everything is very squeezed on the screen), and the wireless desktop receiver which handles my mouse and keyboard plugs right into one of the two full-sized USB 2.0 ports. The whole thing is powered by my laptop (even when it’s suspended) via the 5V USB connector, also on the right-hand side.

It’s running Natty/Unity 2D installed on the 8GB SDHC card on the left of the board. This means that the whole setup cost (if I had paid for it rather than borrowed it) just under $200. The white-labeled chip on the top left-hand side of the board is the WiFi/Bluetooth chip, and that works *perfectly* out of the box – often picking up a better signal than the laptop sitting right next to it. I also have the option of plugging my USB headset into the same USB hub as the wireless receiver (it’s a tight squeeze but it just about fits) and that too works perfectly.

The cons are that I don’t have a USB HDD, so Ubuntu is running on flash memory (notoriously bad performance), and that if I power down my laptop but forget the Pandaboard has some task running on it, then all is lost. Overall though it’s a really nice piece of equipment, and because of all the good work that has been done around it, I could recommend one to anyone with a bit of technical know-how (no ARM experience required!)

For Maverick Meerkat we’re continuing to improve the ARM support in Ubuntu. With Lucid we got the first release optimized for ARMv7 (Thumb-2 and SoftFP, but not NEON), and for Maverick the plan is to keep the same ARM optimizations as the base, while improving board support and the user experience.

The main reasons for supporting these boards are basically upstream support, the solid community around them, easy hardware access and CPU power (standard Ubuntu is quite heavy, so we need a good, powerful machine).

At the moment we already have good support for them, and the Beta release is quite usable already! There is some development ongoing to get a fully working 3D interface (Unity) on OpenGL ES, much the same way we have it for normal OpenGL devices. The only bad thing is that currently most of the 3D drivers for ARM (if not all) are closed source, so development is a little harder than usual.

If you just got your BeagleBoard xM, or want to try Ubuntu on your C4, please give it a try. For Maverick the idea is to give users a pre-installed image that you just need to ‘dd’ to your SD card, then boot and adjust the environment.
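The ‘dd’ step looks like the sketch below. To keep it safe to copy-paste, it writes to scratch files in /tmp; the image name is hypothetical, and /tmp/sdX stands in for your real SD card device (e.g. /dev/sdX):

```shell
# Safe-to-run sketch of flashing a pre-installed image with dd.
IMG=/tmp/maverick-preinstalled.img   # hypothetical image name
DEV=/tmp/sdX                         # replace with /dev/sdX for a real card
# Stand-in for the downloaded pre-installed image (1 MiB of random data):
dd if=/dev/urandom of="$IMG" bs=1M count=1 2>/dev/null
# The actual flashing step: a raw byte-for-byte copy, large blocks for speed:
dd if="$IMG" of="$DEV" bs=4M conv=fsync 2>/dev/null
# Verify the card matches the image before booting from it:
cmp "$IMG" "$DEV" && echo "image written successfully"
```

With a real card, double-check the device name with `lsblk` or `dmesg` first – dd will happily overwrite the wrong disk.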

In case you don’t have any of these boards, but want to use Ubuntu on different devices, remember you can always try to build a ‘rootfs’ with RootStock. You’ll only need a working and compatible kernel and boot-loader.
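From memory, a RootStock invocation looks something like this (the flags changed between releases, so treat every option below as an example and double-check with `rootstock --help`):

```
sudo rootstock --fqdn maverick-arm --login ubuntu --password temppwd \
    --imagesize 2G --seed ubuntu-minimal --dist maverick
```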

And please, in case you find any bug, or want to help testing and making Ubuntu better on your ARM device, just poke us at #ubuntu-arm (freenode). We’ll be happy to assist you with any problems you may find.

Note for Beagle xM users: in case your Maverick Beta image doesn’t boot on your board, please check bug https://bugs.launchpad.net/bugs/628243. This means that you have a Numonyx memory chip, and unfortunately the fix didn’t make Beta. To work around it, just mount the first partition of your SD card (after running ‘dd’) and replace your MLO with http://people.canonical.com/~rsalveti/maverick/boot/xM/MLO. After this, unmount the partition, put the card in your board and boot it.
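Spelled out as commands, the workaround looks like this (/dev/sdX1 is a placeholder for the first partition of your card):

```shell
sudo mount /dev/sdX1 /mnt      # boot partition of the freshly dd'ed card
sudo wget -O /mnt/MLO http://people.canonical.com/~rsalveti/maverick/boot/xM/MLO
sync                           # flush writes before removing the card
sudo umount /mnt
```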