Mark Shuttleworth
Planetary perspectives
http://www.markshuttleworth.com

The mouse that jumped
http://www.markshuttleworth.com/archives/1512
Mon, 17 Oct 2016

The naming of Ubuntu releases is, of course, purely metaphorical. We are a diverse community of communities – we are an assembly of people interested in widely different things (desktops, devices, clouds and servers) from widely different backgrounds (hello, world) and with widely different skills (from docs to design to development, and those are just the d’s).

As we come to the end of the alphabet, I want to thank everyone who makes this fun. Your passion and focus and intellect, and occasionally your sharp differences, all make it a privilege to be part of this body incorporate.

Right now, Ubuntu is moving even faster to the centre of the cloud and edge operations. From AWS to the zaniest new devices, Ubuntu helps people get things done faster, cleaner, and more efficiently, thanks to you. From the launch of our kubernetes charms which make it very easy to operate k8s everywhere, to the fun people seem to be having with snaps at snapcraft.io for shipping bits from cloud to top of rack to distant devices, we love the pace of change and we change the face of love.

We are a tiny band in a market of giants, but our focus on delivering free software freely, together with enterprise support, services and solutions, appears to be opening doors, and minds, everywhere. So, in honour of the valiant, tiny, long-tailed leaper over the obstacles of life, our next release, Ubuntu 17.04, is hereby code-named the ‘Zesty Zapus’.

Thank you CC
http://www.markshuttleworth.com/archives/1498
Tue, 17 May 2016

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.

Y is for…
http://www.markshuttleworth.com/archives/1496
Thu, 21 Apr 2016

Yakkety yakkety yakkety yakkety yakkety yakkety yakkety yakkety yak. Naturally.
Nova-LXD delivers bare-metal performance on OpenStack, while Ironic delivers NSA-as-a-Service
http://www.markshuttleworth.com/archives/1493
Wed, 13 Apr 2016

With the release of LXC 2.0 and LXD, we now have a pure-container hypervisor that delivers bare-metal performance with a standard Linux guest OS experience. Very low latency, very high density, and very fine-grained control of specific in-guest application processes compared to KVM and ESX make it worth checking out for large-scale Linux virtualisation operations.

Even better, the drivers to enable LXD as a hypervisor in OpenStack are maturing upstream.

That means you get bare-metal performance on OpenStack for Linux workloads, without actually giving people the whole physical server. LXD supports live migration, so you can migrate those users to a different physical server with no downtime, which is great for maintenance. And you can have all the nice OpenStack semantics for virtual networks and the like without having to try very hard.

By contrast, Ironic has the problem that the user can now modify any aspect of the machine as if you gave them physical access to it. In most cases, that’s not desirable, and in public clouds it’s a fun way to let the NSA (and other agencies) install firmware for your users to enjoy later.

NSA-as-a-Service does have a certain ring to it though.

Nominations to the 2015 Ubuntu Community Council
http://www.markshuttleworth.com/archives/1488
Wed, 11 Nov 2015

I am delighted to nominate these long-standing members of the Ubuntu community for your consideration in the upcoming Community Council election.

The Community Council is our most thoughtful body, who carry the responsibility of finding common ground between our widely diverse interests. They oversee all membership in the project, recognising those who make substantial and sustained contributions through any number of forums and mechanisms with membership and a voice in the governance of Ubuntu. They delegate in many cases responsibility for governance of pieces of the project to teams who are best qualified to lead in those areas, but they maintain overall responsibility for our discourse and our standards of behaviour.

We have been the great beneficiaries of the work of the outgoing CC, who I would like to thank once again for their tasteful leadership. I was often reminded of the importance of having a team which continues to inspire and lead and build bridges, even under great pressure, and the CC team who conclude their term shortly have set the highest bar for that in my experience. I’m immensely grateful to them and excited to continue working with whomever the community chooses from this list of nominations.

I would encourage you to meet and chat with all of the candidates and choose those who you think are best able to bring teams together; Ubuntu is a locus of collaboration between groups with intensely different opinions, and it is our ability to find a way to share and collaborate with one another that sets us apart. When it gets particularly tricky, the CC are at their most valuable to the project.

Voting details have gone out to all voting members of Ubuntu. Thank you for participating in the election!

What a great Wily it’s been! For those of you who live on the latest release and haven’t already updated, the bits are baked and looking great. You can jump the queue, if you know where to look, while we spin up the extra servers needed for IMG and ISO downloads.

Utopic, Vivid and Wily have been three intense releases, packed with innovation, and now we intend to bring all of those threads together for our Long Term Support release due out in April 2016.

LXD, led by Canonical, is the world’s fastest hypervisor: a pure-container way to run Linux guests on Linux hosts. If you haven’t yet played with LXD (a.k.a. LXC 2.0-b1) it will blow you away. It will certainly transform your expectations of virtualisation, from slow-and-hard to amazingly light and fast. Imagine getting a full machine running any Linux you like, as a container on your laptop, in less than a second. For me, personally, it has become a fun way to clean up my build processes, spinning up a container on demand to make sure I always build in a fresh filesystem.

Snappy Packaging System

Snappy is the world’s most secure packaging system, delivering crisp, transactional updates with rollback for both applications and the system, from phone to appliance. We’re using snappy on high-end switches and flying wonder-machines, on Raspberry Pis and massive clouds. Ubuntu Core is the all-snappy minimal server, and Ubuntu Personal will be the all-snappy phone / tablet / PC. With a snap you get to publish exactly the software you want to your device, and update it instantly over the air, just like we do with the Ubuntu Phone. Snappy packages are automatically confined to ensure that a bug in one app doesn’t put your data elsewhere at risk. Amazing work, amazing team, amazing community!

Metal as a Service

MAAS is your physical cloud, with bare-metal machines on demand, supporting Ubuntu, CentOS and Windows. Drive your data centre from a single dashboard, bond network interfaces, RAID your disks and rock the cloud generation. Led by Canonical, loved by the world leaders of big, and really big, deployments. MAAS gives you highly available DNS, DHCP, PXE and other critical infrastructure for huge and dynamic data centres. Also pretty fun to run at home.

Juju is… model-driven application orchestration that lets communities define how big topological apps like Hadoop and OpenStack map onto the cloud of your choice. The fastest way to find the fastest way to spin those applications into the cloud you prefer. With traditional configuration managers like Puppet now also saying that model-driven approaches are the way of the future, I’m very excited to see the kinds of problems that huge enterprises are starting to solve with Juju, and equally excited to see start-ups using Juju to speed their path to adoption. Here’s the Hadoop, Spark and IPython Notebook coolness I deployed live on stage at Apache Hadoopcon this month:

Apache Hadoop, Spark, IPython modelled with Juju

All of these are coming together beautifully, making Ubuntu the fastest path to magic of all sorts. And that magic will go by the codename… xenial xerus!

What fortunate timing that our next LTS should be X, because “xenial” means “friendly relations between hosts and guests”, and given all the amazing work going into LXD and KVM for Ubuntu OpenStack, and beyond that the interoperability of Ubuntu OpenStack with hypervisors of all sorts, it seems like a perfect fit.

And Xerus, the African ground squirrels, are among the most social animals in my home country. They thrive in the desert, they live in small, agile, social groups that get along unusually well with their neighbours (for most mammals, neighbours are a source of bloody competition, for Xerus, hey, collaboration is cool). They are fast, feisty, friendly and known for their enormous… courage. That sounds just about right. With great… courage… comes great opportunity!

Introducing the Fan – simpler container networking
http://www.markshuttleworth.com/archives/1471
Mon, 22 Jun 2015

Canonical just announced a new, free, and very cool way to provide thousands of IP addresses to each of your VMs on AWS. Check out the fan networking on Ubuntu wiki page to get started, or read Dustin’s excellent fan walkthrough. Carry on here for a simple description of this happy little dose of awesome.

Containers are transforming the way people think about virtual machines (LXD) and apps (Docker). They give us much better performance and much better density for virtualisation in LXD, and with Docker, they enable new ways to move applications between dev, test and production. These two aspects of containers – the whole-machine container and the process container – are perfectly complementary. You can launch Docker process containers inside LXD machine containers very easily. LXD feels like KVM, only faster; Docker feels like the core unit of a PaaS.

The density numbers are pretty staggering. It’s *normal* to run hundreds of containers on a laptop.

And that is what creates one of the real frustrations of the container generation, which is a shortage of easily accessible IP addresses.

It seems weird that in this era of virtual everything, a number is hard to come by. The restrictions are real, however, because AWS artificially restricts the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don’t need the extra compute. Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.

So the key problem is that you want to find a way to get tens or hundreds of IP addresses allocated to each VM.

Most workarounds to date have involved “overlay networking”. You make a database in the cloud to track which IP address is attached to which container on each host VM. You then create tunnels between all the hosts so that everything can talk to everything. This works, kinda. It results in a mess of tunnels and much more complex routing than you would otherwise need. It also ruins performance for things like multicast and broadcast, because those are now exploding off through a myriad twisty tunnels, all looking the same.

The Fan is Canonical’s answer to the container networking challenge.

We recognised that container networking is unusual, and quite unlike true software-defined networking, in that the number of containers you want on each host is probably roughly the same from host to host. You want to run a couple of hundred containers on each VM. You also don’t (in the Docker case) want to live-migrate them around; you just kill them and start them again elsewhere. Essentially, what you need is an address multiplier – anywhere you have one interface, it would be handy to have 250 of them instead.

So we came up with the “fan”. It’s called that because you can picture it as a fan behind each of your existing IP addresses, with another 250 IP addresses available. Anywhere you have an IP you can make a fan, and every fan gives you 250x the IP addresses. More than that, you can run multiple fans, so each IP address could stand in front of thousands of container IP addresses.

We use standard IPv4 addresses, just like overlays. What we do that’s new is allocate those addresses mathematically, with an algorithmic projection from your existing subnet / network range to the expanded range. That results in a very flat address structure – you get exactly the same number of overlay addresses for each IP address on your network, perfect for a dense container setup.

Because we’re mapping addresses algorithmically, we avoid any need for a database of overlay addresses per host. We can calculate instantly, with no database lookup, the host address for any given container address.
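As a sketch of that arithmetic (the function name and parameters below are illustrative, not the real fan implementation): given a fan that projects 172.16.0.0/16 onto 241.0.0.0/8, a host 172.16.y.z owns the overlay slice 241.y.z.0/24, so the host for any container address falls out of pure bit-shifting.

```python
import ipaddress

def fan_host_for(container_ip: str,
                 overlay: str = "241.0.0.0/8",
                 underlay: str = "172.16.0.0/16") -> str:
    """Compute the underlay (host) address that owns a fan container address.

    The host for container 241.y.z.c is recovered arithmetically from the
    two network ranges -- no per-address database lookup is needed.
    """
    overlay_net = ipaddress.ip_network(overlay)
    underlay_net = ipaddress.ip_network(underlay)
    host_bits = 32 - underlay_net.prefixlen              # bits naming the host (16 here)
    slot_bits = 32 - overlay_net.prefixlen - host_bits   # bits per host's slice (8 here)
    cip = int(ipaddress.ip_address(container_ip))
    # Shift away the per-container bits, keep the bits that name the host,
    # and graft them onto the underlay network address.
    host_part = (cip >> slot_bits) & ((1 << host_bits) - 1)
    return str(ipaddress.ip_address(int(underlay_net.network_address) | host_part))

print(fan_host_for("241.3.4.17"))   # prints 172.16.3.4
```

Because every host applies the same projection, any host can compute the destination for any container address locally, which is what makes the single-route-per-host design possible.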

More importantly, we can route to these addresses much more simply, with a single route to the “fan” network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.

You can expand any network range with any other network range. The main idea, though, is that people will expand a class B range in their VPC with a class A range. Who has a class A range lying about? You do! It turns out that there are a couple of class A networks that are allocated and which publish no routes on the Internet.

We also plan to submit an IETF RFC for the fan, for address expansion. It turns out that “Class E” networking was reserved but never defined, and we’d like to think of that as a new “Expansion” class. There are several class A network addresses reserved for Class E, which won’t work on the Internet itself. While you can use the fan with unused class A addresses (and there are several good candidates for use!) it would be much nicer to do this as part of a standard.

The fan is available on Ubuntu on AWS and soon on other clouds, for your testing and container experiments! Feedback is most welcome while we refine the user experience.

For example, a fan will map 250 addresses on 241.0.0.0/8 to each of your 172.16.0.0/16 hosts.

Docker, LXD and Juju integration is just as easy. For docker, edit /etc/default/docker.io, adding:

DOCKER_OPTS="-d -b fan-10-3-4 --mtu=1480 --iptables=false"

You must then restart docker.io:

sudo service docker.io restart

At this point, a Docker instance started via, e.g.,

docker run -it ubuntu:latest

will be run within the specified fan overlay network.

Enjoy!

Announcing the “wily werewolf”
http://www.markshuttleworth.com/archives/1468
Mon, 04 May 2015

Watchful observers will have wondered why “W” is yet unnamed! Without wallowing in the wizzo details, let’s just say it’s been a wild and worthy week, and as it happens I had the well-timed opportunity of a widely watched keynote today and thought, perhaps wonkily, that it would be fun to announce it there.

But first, thank you to all who have made such witty suggestions in webby forums. Alas, the “wacky wabbit” and “watery walrus”, while weird enough and wisely whimsical, won’t win the race. The “warty wombat”, while wistfully wonderful, will break all sorts of systems with its wepetition. And the “witchy whippet”, in all its wiry weeness, didn’t make the cut.

Instead, my waggish friends, the winsome W on which we wish will be… the “wily werewolf”.

Enjoy!

W is for…
http://www.markshuttleworth.com/archives/1466
Mon, 04 May 2015

… waiting till the Ubuntu Summit online opening keynote today, at 1400 UTC. See you there!
Smart things powered by snappy Ubuntu Core on ARM and x86
http://www.markshuttleworth.com/archives/1445
Tue, 20 Jan 2015

“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently or more insightful security systems. Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS, drivers and device-specific software, and that firmware is almost never updated. So let’s fix that!

Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also thoughtful about the responsibility that entails. So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices.

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive BeagleBone Black to the $35 Odroid-C1 (1GHz processor, 1GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in, and access to the amazing work of thousands of communities collaborating on GitHub and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!