More than a blog entry, this is a “let’s store what I did somewhere, so I don’t forget”.

Even better would be to put some automation together, I know that very well. 🙂
I plan to do that, actually, but I’m not there yet.

So, basically, I want compile-time warnings and errors to be clearly visible and easy to spot (while logs are flowing in a terminal), and I want to speed up the builds themselves. The tool for the job, at least for the colored-output part, is apparently:

colorgcc

And, in colorgcc's config file (which is $HOME/.colorgccrc), do what the comment says, i.e., uncomment the lines following it: 🙂

# Uncomment this if you want set up default path to gcc
#g++: /usr/bin/g++
#gcc: /usr/bin/gcc
#c++: /usr/bin/c++
#cc: /usr/bin/cc
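
By the way, if a per-user config file does not exist yet, the system-wide one can be copied over as a starting point (just a sketch: /etc/colorgcc/colorgccrc is where Debian and derivatives put it, other distros may differ):

cp /etc/colorgcc/colorgccrc $HOME/.colorgccrc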

And, finally, symlinks. Basically, we want the colorgcc wrapper script to be invoked, instead of one (any) of the GCC compilers. I’ve done it by creating these links in $HOME/bin/, and making sure $HOME/bin is in $PATH (and comes early enough):
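
Something like this (again a sketch: I'm assuming the colorgcc script lives at /usr/bin/colorgcc, which is where distro packages usually install it):

# make sure $HOME/bin is searched before /usr/bin and friends
export PATH="$HOME/bin:$PATH"

# one link per compiler, all pointing at the wrapper
ln -s /usr/bin/colorgcc $HOME/bin/gcc
ln -s /usr/bin/colorgcc $HOME/bin/g++
ln -s /usr/bin/colorgcc $HOME/bin/cc
ln -s /usr/bin/colorgcc $HOME/bin/c++

colorgcc figures out which real compiler to run from the name it has been invoked with (that's what the entries we uncommented above are for), so the names of the links do matter.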

This Monday and Tuesday (3rd and 4th of December) were the days of LinuxLab2018. It's only the second year that Develer have been running this conference, but it's quite a good one already, at least in my opinion.

SUSE @ LinuxLab2018, in Florence

For sure, it stands out in the Italian landscape of Linux technical conferences… assuming there even are others worth mentioning. I was there last year, went again this year, and this year it was even more fun!

In particular, it was really cool to meet again and catch up with some of my friends and mates, mostly from the Ph.D. times. Interestingly enough, most of us have managed to continue "doing Linux stuff", in one way or another, over these years. 😀
I'm glad they've noticed this conference as well, and recognized it as a good opportunity to see what's around (in Linux, here in Italy), as well as to share what they're currently doing, as I did myself.

Having skipped more than one LinuxCon (now Open Source Summit), it had been a while since I last saw one of Jonathan Corbet's famous "Kernel Weather Report" talks live. And, yes, it's still as cool as I remembered it.

My talk, this time, was about "Virtualization in the Ages of Speculative Execution Hardware Bugs". I think it went well… I had too much material (and I knew it!), so I had to rush a little bit. I did a so-so job at covering everything that I wanted to (and I sort of have the impression I was given a couple of minutes less than promised! :-P), but I'm happy with the overall outcome. And in fact, I received good feedback. 🙂

The slides, for the interested, are below (or at this link). Please notice that, although I did my best, this stuff drives me crazy every time I try to (re-)figure out exactly how it works! So, if anyone spots mistakes, do not hesitate to point them out to me.


Ever since it was introduced in GNOME / Nautilus, I've always been an assiduous, diligent, consistent and zealous user of the 'Connect to Server' functionality. It's just really, really, really convenient, that's all there is to it.

So, as soon as I finished installing something that came with Nautilus 3.6 (we're now at 3.8, and it's pretty much the same), which I think at the time was Fedora 18, I started looking for it, and was quite upset when I did not actually find it!

I looked for it by clicking on the ‘gear’ icon… and failed! Then I looked for it under the ‘arrow down’ icon… and failed!

At this point I was really mad, especially as I started to think the GNOME guys could have removed it, which would have been very, VEry, VERY bad for me! Then, the light: just by chance, I clicked on the window name, on the left of the top panel (usually called 'the App menu' in GNOME 3 jargon, see below), and found it… Phew!

Oh, BTW, the same applies to 'Enter Location', which I also find very useful.

Of course, this post comes quite a while after things became like this, so I guess everyone is used to the new interface by now… But, I mean, one never knows! 😛

“The NUMA-aware scheduler is an important component in Xen on machines with multiple processor sockets […]”

“As the number of VMs climbs in the machine, the effect of NUMA-aware scheduling increases, as you can see in these preliminary benchmark test results, and presumably this will also be the case as the number of sockets increases. It gets a bit dicey when a machine becomes overloaded with work, but even then the tweaks to make Xen appreciate the eccentricities of NUMA systems seems to help some.”

Also, 'the tweaks to make Xen appreciate the eccentricities of NUMA systems' is, I think, the best description of what I do during most of my working hours… Thanks for that too, Register!

So, hacking the Xen Open Source hypervisor is what I do for a living (and these are the guys providing me with my monthly paycheck for that: http://www.citrix.com). During the last months, I've been concentrating on improving the NUMA awareness of the Xen scheduler, and this is an attempt to describe what that is all about…

Background and Motivation

The official Xen blog already hosted a couple of stories about what is going on, in the Xen development community, regarding improving Xen NUMA support. Therefore, if you really are interested in some background and motivation, feel free to check them out:

Long story short, they say how NUMA machines are becoming more and more common and that, therefore, it is very important to: (1) achieve a good initial placement when creating a new VM; (2) have a solution that is both flexible and effective enough to take advantage of that placement during the whole VM lifetime. The former, basically, means: <<When starting a new Virtual Machine, with which NUMA node should I "associate" it?>>. The latter is more about: <<How strongly should the VM be associated with that NUMA node? Could it, perhaps temporarily, run elsewhere?>>.

NUMA Placement and Scheduling

So, here's the situation: automatic initial placement has been included in Xen 4.2, inside libxl. This means that, when a VM is created (of course, if that happens through libxl), a set of heuristics decides on which NUMA node its memory has to be allocated, and the vCPUs of the VM are statically pinned to the pCPUs of that node.
On the other hand, NUMA-aware scheduling has been under development during the last months, and is going to be included in Xen 4.3. This means that, instead of being statically pinned, the vCPUs of the VM will strongly prefer to run on the pCPUs of its NUMA node, but they can run somewhere else as well… And this is what this status report is all about.
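
Just to make the distinction concrete, here is, roughly, how things look from the toolstack side (a sketch: the domain ID and the pCPU range are made-up examples, and the exact output format varies between Xen versions):

# show the host NUMA topology (which pCPUs and how much memory belong to each node)
xl info -n

# show where the vCPUs of domain 1 are allowed to run
xl vcpu-list 1

# the Xen 4.2 way: statically pin all the vCPUs of domain 1 to,
# say, the pCPUs of node 0 (0-7 here)
xl vcpu-pin 1 all 0-7

With NUMA-aware scheduling, that last step is no longer necessary: it is the scheduler itself that makes the vCPUs prefer the pCPUs of their own node, while still allowing them to run elsewhere when that helps.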

NUMA Aware Scheduling Development

The development of this new feature started pretty early in the Xen 4.3 development cycle, and has undergone a couple of major reworks along the way. The very first RFC for it dates back to the Xen 4.2 development cycle, and it showed interesting performance already. However, what was decided at the time was to concentrate only on placement, and leave scheduling for the future. After that, v1, v2 and v3 of a patch series entirely focused on NUMA-aware scheduling followed. It has been discussed during XenSummit NA 2012, in a talk about NUMA future development in Xen in general (slides here). While at it, a couple of existing scheduling anomalies of the stock credit scheduler were found and fixed (for instance, the one described here).

Right now, we can say we are almost done. In fact, v3 received positive feedback and is basically what is going to be merged, and thus what Xen 4.3 will ship. Actually, there is going to be a v4 (being released on xen-devel right at the same time as this blog post), but it only accommodates very minor changes, and it is 100% functionally identical to v3.

Any Performance Numbers?

Sure thing! Benchmarks similar to the ones already described in the previous blog posts have been performed. More specifically, directly from the cover letter of v3 of the patch series, here's what has been done:

I ran the following benchmarks (again):
* SpecJBB is all about throughput, so pinning is likely the ideal solution.
* Sysbench-memory is the time it takes for writing a fixed amount of memory (and then it is the throughput that is measured). What we expect is locality to be important, but at the same time the potential imbalances due to pinning could have a say in it.
* LMBench-proc is the time it takes for a process to fork a fixed number of children. This is much more about latency than throughput, with locality of memory accesses playing a smaller role and, again, imbalances due to pinning being a potential issue.

This all happened on a 2-node host, where 2 to 10 VMs (with 2 vCPUs and 960 MB of RAM each) were executing the various benchmarks concurrently. Here are the results:

The tables show how, when not in overload (where overload = 'more vCPUs than pCPUs'), NUMA scheduling is the absolute best. In fact, not only does it do a lot better than no-pinning on the throughput-biased benchmarks, and a lot better than pinning on the latency-biased ones (especially with 6 VMs), it also equals or beats both under adverse circumstances (adverse to NUMA scheduling, i.e., it beats/equals pinning in the throughput benchmarks, and beats/equals no-pinning in the latency benchmark).

When the system is overloaded, NUMA scheduling scores in the middle, as could have been expected. It must also be noticed that, when it brings benefits, they are not as huge as in the non-overloaded case. However, this only means that there is still room for more optimization, right? In some more detail, the current way a pCPU is selected for a vCPU that is waking up couples particularly badly with the new concept of NUMA node affinity. Changing this is not trivial, because it involves rearranging some locks inside the scheduler code, but it is already being worked on.
Anyway, even with what we have right now, we are overloading the test box by 20% here (without counting Dom0 vCPUs!) and still seeing improvements, which is definitely not bad!

What Else Is Going On?

Well, a lot… To the point that it is probably pointless to try to make a list here! I maintain a NUMA roadmap on our Wiki, which I'm trying to keep updated and, more importantly, to honor and fulfill. So, if you're interested in knowing what will come next, go check it out!

From Paul McCartney to Lord Stern, more people are promoting the benefits of a meatless society.

Meat production not only contributes to climate change and land degradation but is also a cause of air and water pollution and biodiversity loss. The farming industry accounts for nine per cent of UK total greenhouse gases, half of which come from sheep, cows and goats. Is the meat on our plate really worth the impact on the planet?

Deforestation, manure and livestock flatulence all contribute to global warming and are associated with excessive meat consumption.

As nations become richer, they tend to eat more meat, and more livestock has to be raised to keep up with the demand.

In turn, more grazing land is required and more forests are cut down to expand farmland. As trees get the chop, the carbon dioxide that they have absorbed over their lifetime is eventually released back…