CET introduces a shadow stack for return addresses only, and raises a control-protection exception when the return address on the normal stack and the one on the shadow stack disagree. Trying to touch or manipulate the shadow stack directly also raises an exception. That is, CET makes tampering with a return address on the stack toxic by keeping, in effect, separate argument and return address stacks, and your code explodes every time you try to do something funny with return addresses.
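The mechanism can be sketched as a toy model: every call pushes the return address onto both stacks, every return pops both and compares. This is a conceptual Python sketch only; the real shadow stack lives in hardware and is not writable by normal code.

```python
# Toy model of a CET shadow stack. Conceptual only: the real check
# happens in hardware on RET, and the shadow stack is protected memory.

class ControlProtectionFault(Exception):
    """Stands in for the #CP exception CET raises on a mismatch."""

def call(normal_stack, shadow_stack, return_address):
    # CALL pushes the return address onto both stacks.
    normal_stack.append(return_address)
    shadow_stack.append(return_address)

def ret(normal_stack, shadow_stack):
    # RET pops both stacks and compares; a mismatch means someone
    # tampered with the return address on the normal stack.
    addr = normal_stack.pop()
    if addr != shadow_stack.pop():
        raise ControlProtectionFault("return address mismatch")
    return addr

normal, shadow = [], []
call(normal, shadow, 0x401000)
assert ret(normal, shadow) == 0x401000   # a benign return works

call(normal, shadow, 0x401000)
normal[-1] = 0xDEADBEEF                  # a ROP-style overwrite...
try:
    ret(normal, shadow)
except ControlProtectionFault:
    print("caught tampered return address")  # ...faults, as promised
```

The point of the split is that the attacker's usual write primitive (a stack buffer overflow) reaches only the normal stack, never the shadow copy.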

There is a very useful and interesting article by Chris Down: “In defence of swap: common misconceptions”. Chris explains what swap is, and how it provides the backing store for anonymous pages, just as files on disk provide the backing store for file-backed pages.
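On Linux you can watch that split yourself in /proc/meminfo: AnonPages is memory whose only backing store is swap (if any), while Cached is mostly page cache backed by files. A minimal parser, run here against a sample snippet rather than a live system:

```python
# Inspect the anonymous vs. file-backed split from /proc/meminfo.

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    out = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            out[key.strip()] = int(rest.split()[0])  # values are in kB
    return out

# Sample data; on a real system, use open("/proc/meminfo").read().
sample = """\
MemTotal:       16384000 kB
AnonPages:       4096000 kB
Cached:          8192000 kB
SwapTotal:       2097152 kB
SwapFree:        1048576 kB
"""

info = parse_meminfo(sample)
print("anonymous:  ", info["AnonPages"], "kB (backed only by swap)")
print("file-backed:", info["Cached"], "kB (backed by files on disk)")
```

Without swap, the anonymous portion is simply unevictable, which is the core of Chris's argument.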

I have no problem with the information and background knowledge he provides. This is correct and useful stuff, and I even learned a thing about what cgroups can do for me.

I do have a problem with some attitudes here. They come from a developer's or desktop perspective, and they are not useful in a data center. At least not in mine. :-)

A few days ago, a few of “us” went to France. On a cold November morning in a brand new data center hall, we had a look at some Version 1 OCP racks, and a very nice conversation with a bunch of friendly people interested in getting the foundation going.

An OCP Version 1 rack, with three power zones. You can see the centralized power supplies at the bottom of each of the zones.

See Open Rack Specs and Designs, the Open Rack Standard 1.2 Spec and Facebook Open Rack V1 Specification. There is also the Facebook V1 Power Shelf Specification.

The Intel RSD Platform Guide (PDF) is the one document you should skim front to back; it’s really useful.

Back in the bad old days, server computers had a proprietary management controller (BMC), for example HP iLO or Dell iDRAC. These varied widely in capabilities, and worse, in the data structures they presented to the management software controlling the data center.

A lot of standards came and failed until, under pressure from certain customers with a lot of machines, everybody more or less centered on Redfish. All modern servers, no matter who makes them, understand Redfish.

But Redfish does not stop at the server, nor is it the whole story. It is cross-linked to Rack Scale Design (RSD), an initiative led by Intel and joined by many vendors to build composable hardware.
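Redfish itself is just JSON over HTTPS: a tree of resources you walk from the service root at /redfish/v1/, following @odata.id links. A sketch of that walk, with payloads abbreviated to the relevant fields (a real client fetches them from the BMC over HTTPS with authentication):

```python
import json

# Abbreviated Redfish payloads as a BMC would serve them; a real
# client GETs these from https://<bmc>/redfish/v1/ with credentials.
service_root = json.loads("""{
    "@odata.id": "/redfish/v1/",
    "Systems": {"@odata.id": "/redfish/v1/Systems"}
}""")

systems_collection = json.loads("""{
    "@odata.id": "/redfish/v1/Systems",
    "Members": [{"@odata.id": "/redfish/v1/Systems/1"}]
}""")

system = json.loads("""{
    "@odata.id": "/redfish/v1/Systems/1",
    "PowerState": "On",
    "MemorySummary": {"TotalSystemMemoryGiB": 256}
}""")

# Walk the tree by following @odata.id links. The point: this same
# walk works against any vendor's BMC, which the old proprietary
# controllers never allowed.
systems_url = service_root["Systems"]["@odata.id"]
member_url = systems_collection["Members"][0]["@odata.id"]
print(systems_url, "->", member_url, "->", system["PowerState"])
```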

A cellphone or tablet is a fanless device. So is the 12″ MacBook. That means it can do, at any point in time, only whatever fits within a TDP of approximately 5 W.

Here is the power consumption of my cellphone over a 12h period. The scale on the left is mW, down is discharge, up is recharge (plugged in). It’s basically limited to 5W, and that only for short periods of time.
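Some back-of-envelope arithmetic shows why those 5 W bursts have to stay short. The battery capacity below is an assumption (a typical ~3000 mAh cell at 3.8 V), not a value from the chart:

```python
# Back-of-envelope: how long would the battery survive at the 5 W cap?
# Battery size is an assumed typical value, not taken from the chart.
capacity_wh = 3.0 * 3.8          # 3000 mAh * 3.8 V = 11.4 Wh
full_tilt_w = 5.0                # the ~5 W ceiling visible in the chart

hours_at_full_tilt = capacity_wh / full_tilt_w
print(f"{hours_at_full_tilt:.1f} h at a sustained 5 W")  # ~2.3 h

# Which is why the *average* draw must be far lower if the phone
# is supposed to survive a 12 h stretch unplugged:
average_w = capacity_wh / 12
print(f"{average_w * 1000:.0f} mW average budget over 12 h")  # ~950 mW
```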

These devices also have batteries, and when they are running on batteries, they need to be sleeping most of the time and have their display off. Whenever they are not dark and/or sleeping, they drain the battery, fast.

So, let’s do this again, but this time cleanly. In a Facebook post, Michael Seemann explains why the Facebook app does not listen to every word you ever say, all of the time.

He is right. A telephone is a device with limited power supply, limited cooling and limited, metered connectivity. It has an operating system that monitors and manages these critical resources, hard. You can’t listen to things all of the time and expect not to be noticed. Like, “the battery is empty and my LTE budget is gone” noticed.

Other devices, an Alexa, a Sonos One or a Google Home, are on cabled power and unmetered Wifi. They could theoretically get away with listening all of the time.

There is a very nice talk by John Laban on the accumulation of cruft and old-style features in how we are currently building data centers. As an advocate for the Open Compute Project Foundation, Laban promotes OCP, which at its core has several ideas.

One of them is the vision of the data center room, rack and machine as a single system whose parts depend on each other by design.

Yet another data center, west of Houston, was so well prepared for the storm — with backup generators, bunks and showers — that employees’ displaced family members took up residence and United States marshals used it as a headquarters until the weather passed.

“It wasn’t Noah’s ark, but it was darn close,” said Rob Morris, managing partner and co-founder of Skybox, the company that runs the center.

So at work we discussed Data Center Design at scale, and then things got out of hand. We ended up discussing Computronium, a hypothetical substance that basically is thinking matter, performing computation — the ultimate composable piece of hardware.

Computronium is a problem, though. You can’t just cover the planet in a crunchy Computronium crust — not only because the Hotels have to go somewhere, but also because whatever thickness of Computronium you propose, it has to be powered somehow.

Ultimately, it has to be powered by the amount of energy hitting us from the sun. So there is likely a Dyson sphere behind the earth or elsewhere, collecting even more energy from the sun and sending it to the Computronium.
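How tight is that budget? A quick calculation with two well-known figures, the solar constant (~1361 W/m² at the top of the atmosphere) and the Earth's radius, gives the ceiling for any Earth-bound Computronium crust:

```python
import math

# Back-of-envelope: total solar power intercepted by the Earth.
SOLAR_CONSTANT = 1361.0      # W/m^2, top of atmosphere
EARTH_RADIUS = 6.371e6       # m

# The Earth presents a disc of area pi * R^2 to the sun.
intercepted_w = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2
print(f"intercepted: {intercepted_w:.2e} W")   # ~1.7e17 W

# For comparison, the sun's total output -- what a full Dyson
# sphere/swarm could collect:
SUN_LUMINOSITY = 3.8e26      # W
print(f"Dyson gain factor: {SUN_LUMINOSITY / intercepted_w:.1e}")
```

Roughly 170 petawatts is all the sunlight Earth ever sees; a Dyson construction collecting the rest buys you about nine more orders of magnitude, which is why the thought experiment ends up there.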