Posted
by
EditorDavid
on Sunday November 12, 2017 @05:18PM
from the fearless-coyotes dept.

diegocg quotes Kernel Newbies: Linux 4.14 has been released. This release adds support for bigger memory limits in x86 hardware (128PiB of virtual address space, 4PiB of physical address space); support for AMD Secure Memory Encryption; a new unwinder that provides better kernel traces and a smaller kernel size; support for the zstd compression algorithm in Btrfs and Squashfs; support for zero-copy of data from user memory to sockets; support for Heterogeneous Memory Management, which will be needed by future GPUs; better cpufreq behaviour in some corner cases; faster TLB flushing by using the PCID instruction; asynchronous non-blocking buffered reads; and many new drivers and other improvements.
Phoronix has more on the changes in Linux 4.14 -- and notes that its codename is still "Fearless Coyote."

It must be noted that I am running the stable version of Fedora 26, not the developer's version; however, I do have a tendency to get a new incremental release of the kernel once a week as part of the normal update process.

Of course, like most Linux distribution updates, I have the choice of a graphical update, a command-line update, or a combination, and except for initializing the update process (I co

Windows 7 did. BSODs were pretty annoying when using Prolific USB serial adapters -- the dodgy third-party code that Windows Update automatically installs and runs in the kernel (like every third-party driver) when you plug the USB device in.

I got into a situation last week doing a fresh install where the chipset's USB host support was built as a module but not included in initramfs. A startup problem (fumbled fstab) left it prompting for the root password without a working keyboard. Well, at least now the blasted driver's compiled in.
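For what it's worth, the two usual fixes are to build the host-controller driver into the kernel, or to keep it as a module but force it into the initramfs. A sketch of both, assuming an xHCI controller and a dracut-based distro like Fedora (the config symbols are real; which ones you actually need depends on your chipset):

```
# Option 1: build USB host controller + keyboard support into the kernel
# (.config fragment -- no module, no initramfs dependency)
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_HID=y

# Option 2: keep them as modules, but make dracut include them.
# In a drop-in file such as /etc/dracut.conf.d/usb.conf (name is illustrative):
#   add_drivers+=" xhci_pci xhci_hcd usbhid "
# then regenerate the initramfs:
#   dracut --force
```

Option 1 trades a slightly bigger kernel image for never having to think about it again, which sounds like the conclusion reached above.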

Amusingly, NT4 is where they merged the Kernel and GDI memory spaces in pursuit of graphics performance. Well, they got it, but they also absolutely destroyed NT's reliability. 3.51 was a rock. Granted, a rock with a 2GB filesystem limit...

You can write a .ko that will be loaded by the kernel to handle your device (used on most Linux systems for the few things where speed matters, like mass storage and networking, or for booting simplicity, like mouse/keyboard/Bluetooth).

Or you can write a user-space driver that communicates with the raw USB device using libusb (used for the huge zoo of non-critical USB devices, like scanners, firmware upgraders, etc.).

Technically, if you have your kernel offer PCI bus access to userspace, you could drive the USB host controller completely from there. Not that it would necessarily be a good idea, but it would reduce the kernel's attack surface to the PCI driver/bus logic (while introducing a new potential security problem from userspace).

DMA makes that approach a nonstarter unless you have a working and properly configured IOMMU between the controller and main memory. Even then, the most common use case is to give a virtual machine direct access to a device rather than to put an ordinary driver in user space.

... and PCID is not an instruction. The feature means that there is a "process ID" tag on each entry in the TLB, to avoid having to flush them unnecessarily. The intended benefit is that not all entries would have to be reloaded from the page tables in RAM (or cache) whenever there is a context switch.

"Tagged TLB"s have been available on other CPU architectures for decades -- and have been used by the Linux kernels for those architectures. The feature is pretty recent on Intel x86 CPUs though.Correct me if I'm mistaken but I think AMD's x86 CPUs do not have PCID specifically but has support for "virtual machine ID" tags on the hypervisor's second-level TLB.

"Original x86-64 was limited by 4-level paging to 256 TiB of virtual address space and 64 TiB of physical address space. People are already bumping into this limit: some vendors offers servers with 64 TiB of memory today. "

The thing is, when you've got that much addressable space you should probably be doing paging with an LRU cache flush to an intermediate level of memory, which would save itself to disk in idle moments. This would take about 64 bits per block: one bit for "changed since read from/written to disk" and a bunch for "time of last access".

OTOH, I have my doubts that they actually have 64TB of RAM. I expect they just have a memory-mapped disk wi

I don't. It seems that you can fit 12TB of RAM (128GB*96) into this fairly standard high-end server: http://www.dell.com/en-us/work/shop/povw/poweredge-r930 . I expect that there are niche vendors that sell absolutely _massive_ machines for people who absolutely _must_ work with huge datasets in memory.

Perhaps I misunderstand the assertion. Perhaps you are measuring total RAM rather than directly addressable fast RAM. If that's so, then the rest of this response is inappropriate.

But my first reaction was as follows:

Well, if you amend that to "people who feel they must", I'll agree, but virtual memory means that this is a silly attitude. I don't believe that anybody is actively working with 64TB of data at once (i.e., within, say, the same second). If they think you need that then either they're wrong o