Hands-on with Canonical’s Orange Box and a peek into cloud nirvana

Ten Intel NUCs in a custom enclosure make for a compelling cloud-in-a-box.

Looking down into the Orange Box. The ten naked NUCs are vertically mounted to the walls, while the central cavity includes a power supply, gigabit Ethernet switch, and shared storage.

Lee Hutchinson

Take ten high-end Intel NUCs, a gigabit Ethernet switch, a couple of terabytes of storage, and cram it all into a fancy custom enclosure. What does that spell? Orange Box.

Not the famous gaming bundle from Valve, though—this Orange Box is a sales demo tool built by Canonical. There are more than a dozen Orange Boxes in the wild right now being used as the hook to get potential Canonical users interested in trying out Metal-as-a-Service (MAAS), Juju, and other Canonical technologies. We got the chance to sit down with Canonical’s Dustin Kirkland and Ameet Paranjape for an afternoon and talk about the Orange Box: what it is, what it does, and more importantly, what it is not.

The rear of the Orange Box, showing Ethernet connections (they're attached to the internal switch and are used to expand the Orange Box—like if you wanted to cluster it with a twin), power, USB, and HDMI. The USB and HDMI connect to the control node.

Lee Hutchinson

First off, Canonical emphasized to Ars multiple times that it is not getting into the hardware business. If you really want to buy one of these things, you can have Tranquil PC build one for you (for £7,575, or about $12,700), but Canonical won’t sell you an Orange Box for your lab—there are too many partner relationships it could jeopardize by wading into the hardware game. But what Canonical does want to do is let you fiddle with an Orange Box. It makes for an amazing demo platform—a cloud-in-a-box that Canonical can use to show off the fancy services and tools it offers.

Inside the custom orange chassis are ten stripped Intel Ivy Bridge D53427RKE NUCs. Each comes with 16GB of RAM and a 120GB SSD, and they're all connected to a gigabit Ethernet switch. One of the NUCs is the control node; its USB and HDMI ports are wired to the Orange Box's rear panel, and that particular node also runs Canonical's MAAS software. A single unified internal 320W power supply runs the whole box from one 110V outlet—even when all ten nodes are going flat-out, it doesn't require a second power plug.

The initial view of the Metal-as-a-Service (MAAS) console running on the first node. MAAS is an off-the-shelf Canonical tool, but here it's preconfigured to work with the Orange Box's NUCs.

The MAAS console showing the status of all the NUC nodes. None have been assigned any roles, so they show state "ready."

Details on one of the physical nodes. Information displayed here (and in detail below in the "raw discovery data" section) is from a "lshw" run in an ephemeral PXE-booted Linux environment.

Some of the nodes' properties can be edited, including the management protocol used (the NUCs use Intel AMT, though MAAS supports a number of other options).

Nodes can be started, stopped, and deployed singly or in groups.

These are the different boot images that can be deployed to this particular Orange Box's nodes.

For companies that are interested, Canonical is using the Orange Box with what it’s calling Jumpstart Training. For $10,000, Canonical will show up at your business with an Orange Box, provide two days of deep-dive training, and will then leave the box with you for two weeks. There are few enough actual Orange Boxes in existence that they weren’t able to give one to us to beat on, but Kirkland and Paranjape drove out from Canonical’s Austin office to Houston to give me an abbreviated demo and let me test drive the thing.

And here’s the first thing you have to realize about the Orange Box: it’s cool, but the hardware isn’t the real story. It's a neat concept and it's very useful, but the capabilities that it demonstrates aren't unique to the form factor—Canonical is quick to point out that it's merely a convenient demo and training tool. The default image loaded onto node 0 gives you a MAAS console preconfigured to control the nine other NUC nodes in the Orange Box using Intel AMT, but this isn’t a special build of Canonical’s MAAS—it’s an off-the-shelf application being used here to demonstrate an integrated use case.

MAAS can be used to deploy a number of different operating system images to the Orange Box nodes, which happens via PXE. Node 0 also comes with Juju, Canonical's service deployment tool, which we’ll get into in a moment. By bringing together Juju and MAAS, Canonical can quickly show off some deeply complex deployments with actual hardware rather than relying on virtual machines or quickly spun-up EC2 demo instances.

Piercing the buzzword bingo

I know that no small number of Ars readers want to hear about the cool hardware and don’t care a whole lot about the software. That reaction—"Oh, cool, check this box out!"—is precisely what the Orange Box is for: the hardware is the hook Canonical is hoping to use to get people interested in seeing more (and it definitely worked on us). The hardware is striking and attractive (and orange!), and it’s a hell of a lab box, but it lacks essential features that it would need in order to be data center-ready: it has only a single internal power supply, its networking is non-redundant, and there's no built-in concept of hardware failover. That's OK, though—it's not supposed to be a production box.

We walked through a bunch of different installations and deployments, but before we dig into that, we need to define a few terms and describe why those terms are a big deal. Those of you with IT experience (and I’m sure that’s most of the audience!) can probably skip ahead a teeny bit, but taking time to make sure we’re all on the same page will be helpful once we really get going.

The Orange Box more than anything else shows potential Canonical customers how the Canonical way of managing servers and services works. There are two big "wow" moments you’re supposed to have while using the thing: the first comes when you see how all of Canonical’s tools work—and to the company's credit, the demos we ran through were slick and everything worked well. The second "wow," though, is when you realize that everything you do on the Orange Box demo unit using its built-in nodes can also be done at a much larger scale on real hardware or big virtual machines or on a public or private cloud provider’s gear—and, if everything works right, just as easily.

The Orange Box uses two key Canonical technologies: MAAS and Juju. MAAS, as we’ve described above, stands for Metal-as-a-Service. That name is a play on all the various thing-as-a-service names that cloud providers use: whenever you see "thing-as-a-service," the "thing" is typically being marketed as a demand-based service or product that runs "in the cloud." Amazon’s EC2 service, for example, is an "infrastructure-as-a-service" cloud offering. You can activate as many EC2 virtual computers (infrastructure) as you need for a task, be it one or a hundred, and you pay for what you use.

There’s also storage-as-a-service (like Amazon S3, Rackspace, or OpenStack), software-as-a-service (like Salesforce.com), and many other things-as-a-service. The commonality between all of them is that you pay for what you use without worrying about the hardware underneath—it’s all in "the cloud."

Of course, "the cloud" is another misunderstood, horribly abused computing term. "The cloud" means different things to different people; it most often simply means "someone else’s servers," though a cloud can be "public" or "private"—it all depends on whose servers and where the line of abstraction is drawn. A company might store its data in a "private cloud," which could mean a big OpenStack deployment in its data centers on hardware that it owns; another company might use a public cloud or hybrid approach, keeping some data and apps internal and others running on Amazon EC2 or another provider.

It can be complicated. There’s no real magic in "the cloud," nor is it a particularly revolutionary concept, but it’s an easy word to say, and it crystallizes a bunch of different concepts in ways that "grid computing" and "time-sharing" failed to do.

Juju charms

There’s more, though—beyond MAAS, Canonical has Juju. Juju is a complex tool that can do a whole lot of stuff, but the simplest way to think of it is as "apt-get but for services." Put another way, if you wanted to install a Web server on Ubuntu, you could use apt-get; if you wanted to deploy an entire Web application stack, you could do it with Juju.
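To make the analogy concrete, here's roughly how the two compare at the command line. This is a sketch based on Juju 1.x-era syntax (the era of this demo); exact charm names and options may vary:

```shell
# The traditional way: install one package on one machine.
sudo apt-get install mysql-server

# The Juju way: each "deploy" pulls a charm from the charm store
# and stands the service up on a machine of its own.
juju deploy mysql
juju deploy mediawiki

# Relate the two services; the charms exchange credentials and
# configure themselves to talk to each other.
juju add-relation mediawiki:db mysql

# Open the firewall so the wiki is reachable from outside.
juju expose mediawiki
```

The point of the comparison is that apt-get stops at "package installed," while Juju carries on through configuration and inter-service wiring.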

Juju uses "charms," which are scripted recipes that can install one or more packages and also link those packages together. A "MediaWiki" charm, for example, might install the Apache Web server package, then install the MySQL database package, then install the MediaWiki package from a third-party PPA, then configure Apache to properly serve PHP, and finally configure Apache and MySQL for MediaWiki, leaving you with a functional MediaWiki instance. Juju charms can also be linked together in "bundles," enabling you to deploy complex services consisting of many meshed and interacting applications.

The Juju console, also running on node 0. This drag-and-drop tool (which sits atop a much richer set of command line tools) lets you run Juju charms and bundles. Here, we have a MediaWiki charm and a MySQL charm with a relationship automatically established between them via their bundle.

Details on the MediaWiki instance we've just deployed to one of the Orange Box nodes.

And here's MediaWiki, up and running with basically zero effort.

Juju is a cloud deployment tool, too. You could use Juju to deploy applications locally, but the tool is most properly used in conjunction with some kind of cloud layer—for example, you could tell Juju to deploy that MediaWiki charm to Amazon EC2, and after providing your EC2 credentials, you’d have a fully functional MediaWiki server on EC2 a few minutes later. Juju can deploy services to anything it has an API for—and that, of course, includes MAAS.
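In the Juju 1.x era, that backend choice lived in a `~/.juju/environments.yaml` file; pointing the same charms at EC2 or at MAAS was a matter of selecting a different environment. A sketch (keys abbreviated, and all credentials and the MAAS server address are placeholders):

```yaml
environments:
  amazon:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
  orange-box:
    type: maas
    maas-server: 'http://10.x.x.1/MAAS/'
    maas-oauth: 'YOUR-MAAS-API-KEY'
```

With that in place, `juju bootstrap -e orange-box` targets the box's bare metal while `juju deploy -e amazon mediawiki` sends the identical charm to EC2—which is the "same workflow at any scale" pitch in miniature.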

Servers are cattle, not pets

Which brings us back to the Orange Box and the demo. Kirkland noted that often, IT departments tend to treat servers as special pampered pets—you might buy four servers to function as a Hadoop cluster and then spend time polishing and tuning Hadoop for those four servers. And that’s all those servers are good for, too—they were bought to a certain spec, and they’re Hadoop servers until you’re done with your Hadoop project.

Servers, Kirkland explained, shouldn’t be special pets—servers should be cattle. In the Canonical universe, if you have MAAS in your data center, you should be able to deploy a Hadoop Juju charm out to four MAAS servers that fit your desired performance criteria and start work rather than requiring bespoke hardware. Further, you should be able to scale up and down as needed: if your Hadoop workload is low, you can destroy two of the four boxes and retask them to something else by deploying a different charm to them; if your Hadoop workload goes up dramatically, you can reclaim them or even spin up additional nodes.
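The "cattle" workflow boils down to a couple of commands. Sketch in Juju 1.x syntax; the `hadoop-slavecluster` service name here is hypothetical:

```shell
# Scale the Hadoop worker service up by four units: MAAS picks four
# machines from the "ready" pool, PXE-installs Ubuntu on them, and the
# charm wires them into the existing cluster.
juju add-unit -n 4 hadoop-slavecluster

# Workload dropped off? Hand two machines back to the pool so another
# charm can claim them.
juju destroy-unit hadoop-slavecluster/4 hadoop-slavecluster/5
```

No machine in this model is "the Hadoop server"; a machine is just whatever its current charm says it is.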

MAAS is aware of each node’s capabilities and specs—it uses an ephemeral PXE image to quickly boot and assay nodes before assigning them to the "ready" pool. Once inventoried, you can tag machines in the pool and manage them as groups if you need to. For our Orange Box demo, all nine of the non-management nodes were visible and usable (as was a KVM virtual machine running on node 0).
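Tagging is exposed through the MAAS CLI as well as the web UI. A sketch based on the MAAS 1.x-era CLI—treat the subcommand names as approximate, and note that `mymaas` is an assumed profile name and `node-1a2b3c4d` a made-up system ID:

```shell
# Log in once with your API key; subsequent commands go through the profile.
maas login mymaas http://10.x.x.1/MAAS/api/1.0 $API_KEY

# Create a tag and attach it to a node by its system ID.
maas mymaas tags new name=ssd comment='Nodes with SSD storage'
maas mymaas tag update-nodes ssd add=node-1a2b3c4d

# Juju can then constrain deployments to tagged machines.
juju deploy mysql --constraints tags=ssd
```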

As with any vendor’s picture of how the data center should work, it makes for a compelling story—as long as everything is in the same world running the same management layer. To this end, Canonical has made sure that its MAAS tool can deploy not just Ubuntu images but other Linux and Windows images as well.

With very little effort, Kirkland and Paranjape quickly set up a small Hadoop cluster using Juju charms. Juju on the Orange Box is preconfigured to work with MAAS as its backend, so it passes instructions via MAAS’s RESTful API. MAAS actually does initial operating system installations and then executes the specific Juju scripts. We ended up with a four-node Hadoop environment within two minutes—one master node, two worker nodes, and one MySQL database node.

Kirkland kicked off a quick MapReduce job on our new Hadoop cluster, which took about seven minutes to run; after it completed, Kirkland quickly requisitioned the remaining five unused nodes in the Orange Box and transformed them into Hadoop compute nodes simply by dialing up the number of compute nodes in the Juju console. The process required no reconfiguration of any of the existing nodes—or rather, that reconfiguration was done transparently by the Juju charm. When it was done, we re-ran the same MapReduce job and it completed far faster (not at all surprising, since we’d almost tripled the amount of horsepower being thrown at the job).

After nuking our MediaWiki bundle, we built a small Hadoop cluster with another Juju bundle. The bundle also included MySQL and Hive to help tie Hadoop to MySQL.

Here are the details of the Hadoop namenode, including the Juju relations between it and the other active charms.

Our MapReduce run took almost nine minutes with two nodes doing the computing. That's too slow. Let's crank it UP.

Adjusting the number of datanodes from two to six with a keystroke. Juju handles the relationships, reconfiguring Hadoop and MySQL on the fly, without you having to actually do anything.

MapReduce goes faster now!

Here we're dropping a Ganglia charm into the mix. Ganglia is a monitoring tool, like Nagios or Munin. Once installed, you create a relationship between the new charm and the Hadoop cluster, and...

...Ganglia goes to work monitoring the nodes. Juju handles all of the underlying configuration.

I also got to watch Kirkland demonstrate a complex OpenStack deployment via MAAS and Juju to the Orange Box components. This is one of the same demos that was shown off last month at the OpenStack Summit; there were more than a dozen separate Juju charms executed as a bundle to install and configure OpenStack with a whole array of production capabilities. The removal of the admin from the setup process was a bit shocking—I’ve been through complex enterprise VMware deployments before, and watching OpenStack gamely set itself up before my eyes was amazing. We blasted through what would probably have been a two-day traditional deployment in minutes.

Even crazier, Kirkland informed me that if we wanted to, we could switch Juju’s backend away from the preconfigured MAAS setting and use our newly deployed OpenStack cluster as the basis for further Juju deployments. After all, at least in this instance, it’s all hardware rather than virtual machines. Cue the "Inception" music.

Here we have an OpenStack bundle. It's not an overly complex OpenStack setup, but it's still made up of a whole lot of charms and complex relationships. This kind of deployment might take days to roll out manually using a runbook; we did it in minutes.

Logged into the OpenStack dashboard, running on the metal.

This is a much larger OpenStack deployment bundle from Jujucharms.com, a Canonical site running the same Juju admin console as the Orange Box. Here you look through bundles and their relationships without actually deploying them to anything.

Another complex Web application stack from Jujucharms.com, this one duplicating the production Web stack of a real customer.

A gateway drug

I only spent a few hours playing with the Orange Box, but it still told a pretty compelling story—life in Canonical cloud land looks pretty sweet. I found myself asking halfway through the demo why I didn’t just ditch the four servers in my closet and replace them with four NUCs running MAAS—surely down that path would lie computing nirvana.

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it's making them to use as revenue-driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud. And it does make for a hell of an impressive demo environment—the slickly preconfigured MAAS + Juju setup lets Canonical throw down and show off dozens of different services and application configurations in a very short amount of time.

My very own cloud demo station! The Orange Box as it was shown to me in my living room, with the Juju console in the background showing an operational OpenStack deployment.

Lee Hutchinson

You certainly don’t need an Orange Box to start fiddling with MAAS or Juju—in fact, with Juju, you don’t really even need hardware at all. You can start deploying charms and bundles to Amazon EC2 or any other big cloud provider that Juju supports—or even make your own. If the demo showed me anything, it’s that Canonical is sitting on some attractive technology—and in keeping with the company’s roots, it’s all open source. Canonical would certainly love to sell you a support agreement—that’s how it gets revenue—but you don’t need to pay to play.

Still, all that being said, I wish my closet had an Orange Box in it. That thing is hella cool.

Edit: Not sure why the down votes. The regular NUCs have a heatsink/fan assembly. This looks like there is nothing on the CPU's, all in the same airspace, with a single cooling fan. I don't mind being wrong but what am I not seeing then?

For a "home cloud" (multiple actual machines in a single box as opposed to those damnable WD "Cloud" NAS disks), this isn't bad. I wouldn't run a data center or anything seriously permanent on them, but for development and testing this is a really cool way to get everything in one box. This is the next step up from a couple home servers (like Lee mentioned), and is more power-friendly to boot.

I realize it's explained in the linked page at the top of the article, but you've got a whole section that seems like it is supposed to define what Metal-as-a-Service is, and it doesn't actually. It just talks about what the word "cloud" means. Is there a paragraph missing?

Edit: I see that's the point of the rest of the article! Never mind me, I'm just reading this in bits and pieces over here...

Small correction: OpenStack is not just storage-as-a-service. OpenStack is an open-source project that replicates most of Amazon's AWS components (EC2, S3, etc) It includes Infrastructure-as-a-Service (the Nova component), Networking as a Service (Neutron) and Storage-as-a-Service (both Block and Object Services).

Companies like Rackspace (and others) sell managed implementations of OpenStack in much the same way as Amazon sells its home-grown AWS services.

I last played with MAAS and JuJu about 8 months ago. I found it interesting and attractive but buggy and unfinished. There was a crippling bug that broke deploying any kind of KVM virtual machine. Deploying an openstack setup was right out.

Those were all fixable though. The real issue was that to deploy what they designed as a usable and fault-tolerant openstack platform (on real hardware) required something in the range of 32 physical host servers. OK, if you want to run a data center of potentially thousands of servers that might be acceptable, but it basically puts it well outside the realm of practical for a huge number of organizations. I wonder if that has changed?

I last played with MAAS and JuJu about 8 months ago. I found it interesting and attractive but buggy and unfinished. There was a crippling bug that broke deploying any kind of KVM virtual machine. Deploying an openstack setup was right out.

Were you just messing around for fun or did you end up using a competing product instead? Have you compared it to openshift? It's hard to find people with actual experience using these things instead of experience reading the spec sheets and marketing docs (people outside the companies that create and sell these packages).

For this price there are better options that don't slowly eat your data. For home use, I've heard HP has a nice line of microservers that support ECC.

But sure, as a flashy toy for demo'ing their software that never sees any important data it's great. Which is why they don't want to sell it.

Wow, did you even read the next sentence? I immediately followed that up with a disclaimer about this not being datacenter-grade, nor something I'd use for permanent or even long-term storage. It's perfect for the home enthusiast who wants to experience running a full cloud system without being in debt to Amazon or Rackspace every month. It's also good for a development "box" where you can test out various configurations in short order. And that's why they built it - to show off the capabilities of their cloud solution, not to make a reference build for others to emulate exactly.

As for price, I'm sure you can buy ten systems, a full gigabit switch, and all the rest of the gear cheaper on eBay and frankenmonster it together. I'd pay $12K not to have to do that. And since they're custom built anyway, why not just add all that fancy ECC stuff and call it a day. If you're saying that it's too much for a consumer to spend on tech equipment, look at how much the average sport car costs. Or a mid-grade home theater system.

Oh, and your points about degrading bits is valid. But if you're that picky about your home data, I suggest that you put everything you have into a chemically neutral and non-reactive storage solution, make fifty copies of it, and place at least half of them into orbit, while burying the other half in multiple geographically stable regions around the globe. Of course, you could just nuke it from orbit. It's the only way to be sure.

Methinks you missed the point. Canonical does NOT want to sell you an orange box, this isn't about the hardware. It's about MAAS and JuJu.

Yeah, the whole box has been designed to be "a convenient demo and training tool", so no one's trying especially hard to sell them for production. In a useful real-world hardware platform you'd want a) more CPU power per node, b) more memory per node, c) more IO per node, d) enterprise features, or e) some combination of the above.

The subtitle isn't particularly apt, so someone who just reads titles might be confused...

Edit: Not sure why the down votes. The regular NUCs have a heatsink/fan assembly. This looks like there is nothing on the CPU's, all in the same airspace, with a single cooling fan. I don't mind being wrong but what am I not seeing then?

Look at what they are mounted to. The whole assembly is basically mounted to a big heatsink in the back.

Cool stuff, but Unity doesn't look professional or attractive to businesses in my opinion. They could sell it better and shorten the training a bit if they focused on something like XFCE or any other desktop environment that is productivity oriented. They are fighting an uphill battle vs Red Hat and Windows 7. They should display something familiar. Then they can show Unity and say that this option is available.

Cool stuff, but Unity doesn't look professional or attractive to businesses in my opinion. They could sell it better and shorten the training a bit if they focused on something like XFCE or any other desktop environment that is productivity oriented. They are fighting an uphill battle vs Red Hat and Windows 7. They should display something familiar. Then they can show Unity and say that this option is available.

This software is the opposite end of the spectrum that you're commenting on. Unity has nothing (yet) to do with this article.

I immediately followed that up with a disclaimer about this not being datacenter-grade, nor something I'd use for permanent or even long-term storage.

That's a false dichotomy. At 160GiB, non-ECC isn't anything-grade.

Quote:

It's perfect for the home enthusiast who wants to experience running a full cloud system without being in debt to Amazon or Rackspace every month.

It's perfect for spending nearly $13,000 on special-purpose hardware so you can experiment with running a cloud, because it saves you money over Amazon and Rackspace's month-to-month on-demand fees?

How many years is this "experience" going to go on? And how does that square with your claim that you won't use it for long-term storage? Keep in mind that at 21 RAM errors a day, "long-term" is about two hours.

Quote:

I'd pay $12K not to have to do that.

No you wouldn't. You might post on a comment thread that you would, but we all know that nobody is going to buy this for "home use."

Quote:

And since they're custom built anyway, why not just add all that fancy ECC stuff and call it a day.

ECC isn't available on the NUC at all. I don't think the CPUs it takes support it, but the chipset definitely doesn't.

Quote:

If you're saying that it's too much for a consumer to spend on tech equipment, look at how much the average sport car costs. Or a mid-grade home theater system.

The market for ultra-rich people who spend money on 10 special-purpose server PCs for the sole reason that they have a cool-looking case, even though they are pretty much guaranteed to crash or lose data on a regular basis, is vanishingly small.

Quote:

Oh, and your points about degrading bits is valid. But if you're that picky about your home data, I suggest that you put everything you have into a chemically neutral and non-reactive storage solution, make fifty copies of it, and place at least half of them into orbit, while burying the other half in multiple geographically stable regions around the globe.

Or I could use ECC, which is substantially cheaper than the above. That is, after all, what it's for. It's easy to obtain and affordable. Its sole drawback for the purposes of this discussion is the equipment that uses it isn't available in bright orange packaging.

I don't know why you think "home data" isn't "important data." Uncaught RAM errors aren't like bad disk blocks that corrupt one file in an obvious way—they can screw up everything read or written through them, crash the system, or corrupt the results of whatever you're doing that needs 10 servers. But I forget, you're buying 10 servers for the sole purpose of doing nothing with them.

At the end of the day, if you say both "Oh I want to learn about cloud computing" and "ECC isn't important" then you've already failed at learning about cloud computing. Data integrity matters.

And that's what your position comes down to. You're basically saying, "sure, I could do this cheaper and better, but then it wouldn't be orange."

At which point I'd suggest calling SuperMicro because if the stylishness of the enclosure is what's holding you back, with your lavish budget they'll probably do a custom color for you.

But this device is good for one thing and one thing only: demoing your cloud stack at trade shows and the like, where being portable and cool looking are both worth the huge price premium. Which is why Canonical commissioned it. The company they commissioned it from may have productized it, but they're looking to sell it to other companies with a similar need, not to "home users."

I am struggling to understand what this software actually does. Can someone help explain?

MAAS ("Metal-as-a-Service") lets you install preconfigured operating system images onto lots of different physical computers without having to go sit down at each of them. It uses a variety of different methods to do the remote installations, including PXE, Intel AMT, and a bunch of other tools. It is not the first remote deployment tool, and it won't be the last.

Juju lets you take complicated sets of applications (like a functional MediaWiki installation, composed of Apache with a custom config and MySQL and the MediaWiki PHP application) and install them to computers (either physical ones or virtual ones hosted locally or on a cloud service provider), again without having to do detailed configuration. Juju also lets you change complex sets of apps without having to do reconfiguration—like when Kirkland added Hadoop compute nodes to a Hadoop cluster by clicking a button. Juju automates all the things you'd have to do to install interlinked sets of applications.

With the two of them together, you can do stuff like "deploy 50 Ubuntu server images and turn them into a big OpenStack cluster made up of a dozen different types of nodes, each with different roles and different programs installed."

This ties in with the Orange Box because having ten computers in a single box provides a convenient platform to show off how MAAS and Juju work, without having to use VMware or otherwise fake having lots of computers to deploy to.

Again, this isn't the first or last set of remote deployment and management tools—there have been many before and there will be many others to come. These are the tools Canonical provides. Ultimately, they'd like your business to deploy a bunch of Ubuntu servers with their tools and then purchase a support contract so they can make some money.

Look over the BMC logs of the average 24x7 server and you'll see about one uncorrectable memory error per 6-12 months. Then consider instead how many memory errors must have been corrected by ECC and the superior shielding quality of a server (in a rack—more shielding). Cosmic rays and inductive EMF are all over the place, and as we keep shrinking semiconductor processes and counting values in just dozens of electrons, we're going to see these kinds of errors increase.

The US and EU need to seriously reconsider their restrictions on lead in electronics. For one, we're still dealing with the increased failure rates of components subject to motion and vibration, which fail due to brittle lead-free solder. Now we're getting to semiconductor process sizes that require better shielding than sheet steel will provide.

Why all the downvotes? Comments seem to be right on topic, yet I see plenty of -20 or even worse. Downvote is for trollish or OT comments, not for things you personally disagree with (that's what they are being given for I assume). This is spoiling my reading of the comments.

I can't say that Cloud == someone else's server is a very compelling definition. In my mind "cloud" includes the cattle vs pet aspect. Cloud needs to have standardized units of compute/storage/network and an API to manage those units in order to scale an application on demand, otherwise it is just a bunch of servers.

Wow, I don't think I've seen so much missing the point in one place before.

Exactly. The article isn't about the actual HARDWARE, it's a SOFTWARE article. The hardware is used to just DEMO the software. I will say this though, the hardware is pretty cool... And so is the software - it's quite tempting since I have a rack of servers in my basement, but the configuration doesn't really change so I would find it of limited value (main webserver, couple of miners, backup servers, media server etc). For anyone who needs to allocate resources on the fly though, it would be extremely useful.

Lee Hutchinson / Lee is the Senior Reviews Editor at Ars and is responsible for the product news and reviews section. He also knows stuff about enterprise storage, security, and manned space flight. Lee is based in Houston, TX.