Intel's Clear Linux distribution has been getting a lot of attention lately, due to its incongruously high benchmark performance. Although the distribution was created and is managed by Intel, even AMD recommends running benchmarks of its new CPUs under Clear Linux in order to get the highest scores.

Recently at Phoronix, Michael Larabel tested a Threadripper 3990X system using nine different Linux distros, one of which was Clear Linux—and Intel's distribution got three times as many first-place results as any other distro tested. When aggregating all test results into a single geometric mean, Larabel found that the distribution's results were, on average, 14% faster than the slowest distributions tested (CentOS 8 and Ubuntu 18.04.3).

There's not much question that Clear Linux is your best bet if you want to turn in the best possible benchmark numbers. The question not addressed here is, what's it like to run Clear Linux as a daily driver? We were curious, so we took it for a spin.

Installing Clear Linux

This is the only text mode you'll see if you download the Desktop ISO for Clear Linux.

Jim Salter

Clear Linux's desktop installer drops you straight into a fully featured live desktop environment. The installer is the top button on the left-hand launch bar.


We tested Clear on both live hardware and VMs. The VM install threw the first hurdle—UEFI is required, and in our host environment (Linux KVM) you must specify UEFI architecture at VM creation time.


Installation is much the same for Clear Linux as for any other operating system—download the ISO, dump it to a thumb drive, boot, and go. Two installer versions are available: a "server" that's text-mode only and a "desktop" that uses a fully featured live desktop environment. We chose the desktop. On real hardware, Clear gave us no trouble and installed immediately—but in a KVM environment, it initially refused to install, with a less-than-helpful "failed to pass pre-install checks" error message.

If you're using Linux KVM and virt-manager to test out Clear, you'll need to tick 'custom configuration' in the final step.


Firmware selection is in the Overview tab of the custom configuration screen. It's immutable, once the VM has been created.


A little sleuthing online uncovered the fact that while Clear Linux's live desktop environment will boot in BIOS mode, the actual OS requires UEFI. In our virtualization environment—Linux KVM, under Ubuntu 19.10—new VMs default to BIOS mode unless you check "Customize configuration before install" on the final step, and then in the Overview tab, change from BIOS to UEFI. So we blew away the VM, recreated it with the appropriate UEFI firmware, and then we were off to the races.

Once we'd straightened out our VM's firmware architecture, installing Clear Linux in a VM was as straightforward as on real hardware—real hardware with UEFI firmware, that is. If you were hoping to install Clear Linux on legacy hardware that only supports BIOS mode, you're out of luck.
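For those creating the VM from the command line rather than clicking through virt-manager, the same firmware choice can be made up front with virt-install. This is a hedged sketch: the VM name, sizing, and ISO path are placeholder examples, and the `--boot uefi` flag is the only part that actually matters here.

```shell
#!/bin/sh
# Sketch: create a Clear Linux guest with UEFI firmware from the start,
# so the installer's pre-install checks pass. Name, sizing, and ISO path
# are placeholders; --boot uefi is the part that matters.
make_vm_cmd() {
    echo "virt-install --name clear-test --memory 4096 --vcpus 4" \
         "--disk size=40 --cdrom ./clear-desktop.iso --boot uefi"
}
# Dry run: print the command rather than running it. Drop the echo (and
# run as a user with libvirt access) to actually create the VM.
make_vm_cmd
```

As with the virt-manager checkbox, the firmware type can't be changed after the VM exists, so it has to be right at creation time.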

You can currently install Clear in (American) English, (Mexican) Spanish, or (Simplified, Mainland) Chinese.


Clear will default the time zone to UTC unless instructed otherwise, but you'll need to manually configure the three options with yellow bangs on them.


Clear has a "safe" installation option for dual-booting. We did not test safe mode, opting to install it like we meant it instead.


Hunter2.


We like that Clear Linux asks you about telemetry, rather than simply defaulting either way.


The installer is clear and straightforward. You must choose a language (currently from a very limited list), an installation target, and feed the installer a username and password for the new OS. You also need to let it know whether you're opting in or out of phone-home telemetry used for QA and dev purposes.

When setting an installation target, Clear Linux offers either a "safe" installation or a "destructive" one. We did not test the safe installer, instead choosing to install Clear Linux as the only operating system available.

Intrepid users who click over to the Advanced options can choose a kernel version, preinstall additional software, and so forth.


If you're going to run Clear Linux in the first place, you probably want the Native kernel, which is stupendously bleeding-edge and newer than you'll find by default in pretty much any other distro.


You can preinstall additional software here, or you can wait until after the system is up and install from the command line or Gnome software center.


Clear wants to give you one last chance to rethink before it formats the drive.


No turning back now! This last step took around 10 minutes.


Be warned, if you wander away while the system's installing, after a few minutes the installer's screen will lock. Click and drag up to unlock it again.


Success! When we restart, we'll be restarting into a native installation of Clear Linux.


Once you've selected your options, Clear shouldn't take more than a few minutes total to actually install—but if you walk away and come back, it's worth realizing that the screen saver lock screen may kick in on you. (If you're not used to Gnome3, click and drag up to dismiss the lock screen.)

Post-installation: The GIMP race

For the most part, there didn't seem like a lot of point in doing traditional performance benchmarks on Clear Linux. Phoronix has already done plenty of those—and yes, without a doubt, Clear Linux is faster on average than most distros. But winning benchmarks isn't necessarily the same thing as feeling fast.

Without a point of reference for comparison—a watched and ticking timer or a head-to-head race—most people won't notice a difference of less than 33 percent in the time it takes to complete a familiar task. A typical observer—one not actually timing things—faced with an hour-long task that completed in 40 minutes will think "hey, that seemed fast." The same observer, waiting for a one-second task to complete, will generally start frowning around 1,300ms.

We should also point out that the majority of Phoronix's benchmarks focus on long-running computational or storage tasks. This type of benchmark correlates better to changes in hardware than to changes in software at the distribution level. That is to say, even if Clear Linux benchmarks faster at a task relevant to desktop performance, the difference may be easily overwhelmed by differences in the desktop—or the specific application package—itself.

When I installed and opened GIMP in a Clear Linux virtual machine, I thought, "that feels fast"—but I was expecting it to feel fast. To test my initial perception, I also opened GIMP on my Ubuntu 19.04 workstation itself and counted Mississippi—turns out, the Ubuntu desktop was actually twice as fast as the Clear desktop. So much for human perception? Perhaps not—I work within VMs a lot, so maybe I had been subconsciously comparing the Clear VM to an Ubuntu VM, not to Ubuntu on the host workstation.

To test that theory, I brought an Ubuntu 18.04.4 VM and a Clear Linux VM up side by side, each with four vCPUs and 4GB of RAM allocated. Then I installed and configured the NTP daemon on both VMs to bring their clocks to within a millisecond of one another and installed my own whenits scheduling utility. With all that done, the results of a side-by-side "GIMP race" were no different—despite having the same resources allocated to each, the Ubuntu 18.04 VM still "won" handily.
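For reference, here is a minimal stand-in sketch of the kind of synchronized start such a race needs (the real test used my whenits utility, which handles this more flexibly). With both VMs' clocks NTP-disciplined, each one sleeps until the next whole minute and then launches GIMP, so the two launches land within clock skew of each other:

```shell
#!/bin/sh
# Stand-in sketch for a synchronized "GIMP race" start. Assumes both VMs'
# clocks are already NTP-synced to within a millisecond of one another.
seconds_until_next_minute() {
    # $1: current time as a Unix epoch; prints seconds left in this minute
    echo $(( 60 - $1 % 60 ))
}
# Run the same line on both VMs; each launches at the next whole minute:
#   sleep "$(seconds_until_next_minute "$(date +%s)")" && gimp
```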

Investigating further, I noticed that Ubuntu 18.04 uses an older version of GIMP than Clear does. So I uninstalled the system-provided GIMP 2.8 from the Ubuntu VM and installed the latest 2.10.14—the same version Clear uses—from a PPA. The outcome didn't change significantly—GIMP still opened faster in the Ubuntu VM, and you can see the side-by-side results of that final "race" in the short video clip above.

None of this should be taken as a definitive benchmark making Clear Linux out to be "slow." But it does demonstrate the fallibility of human perception and the limits of how much impact a "fast" distro can really have on normal, day-to-day operation of a desktop system. Aside from booting, Clear Linux didn't feel noticeably faster than Ubuntu in general use—either in VMs hosted on my Ryzen 3700X workstation or on an i7-6500U-powered Dell Latitude I installed it on directly.

If you're the sort of person who gets really enthusiastic about compiler optimizations in Gentoo or Arch packaging—or if you've got a very specific task that you're eager to potentially accelerate by 15% or so—Clear might very well be for you. But if you expect the kind of kick-in-the-pants speedup that your friends will immediately notice and drool over, you'll probably be disappointed.

Installing software

Although each distro uses Gnome's Software Center, Ubuntu has put a lot more effort into its own, with recommendations, better-organized categories, and far more apps.


Each category in Ubuntu's Software Center has featured apps, and Ubuntu offers a lot more apps in general than Clear Linux does. (Ubuntu also doesn't miscategorize the Blender 3D creation and rendering suite as a "game.")


The game Frozen Bubble is available on both distros—but it's a third-party Flatpak on Clear, where it's a native .deb package from the universe repo on Ubuntu.


On Ubuntu, Frozen Bubble installed in seconds. On Clear—likely due to bandwidth constraints at flathub.org—the process dragged on for ten minutes or more.


Ubuntu 19.10 and Clear Linux both use the Gnome Software Center as a GUI for software installation and removal. The most immediately obvious difference here is Canonical's effort to make the repositories in its version of Software Center feel more curated and cared for—Ubuntu's Software Center prominently features Editor's Picks and featured applications, where Clear Linux's doesn't.

Somewhat more importantly, Canonical has much deeper repositories underneath than Clear does—and that can make an impact even when both distributions offer a particular application. For example, the game Frozen Bubble is available in Software Center on either distribution—but on Clear, it's sourced as a flatpak, coming from third-party source dl.flathub.org.

On Ubuntu, Frozen Bubble comes from Canonical's own Universe repository instead of a third-party source. That might not sound like it matters—but installing the game on Ubuntu from Canonical's own repository only took a few seconds, while it took nearly ten minutes to install on Clear.

Will it Chrome?

If you want Chrome on Ubuntu, you go to google.com/chrome and download and install it automagically. The process is a bit more convoluted on Clear Linux.


The Chrome download page offers you a .deb and an .rpm, but neither works natively with Clear. You can use some trickery and shell-fu to install it anyway—but it won't auto-update afterward; you'll need to do this manually every time you update Chrome.


After a little shell-fu, Chrome is installed and works perfectly on Clear—except for automatic updates. You'll need to remember to update it and perform the same bit of bash-fu every time you do.


Neither Clear Linux nor Ubuntu bundles the Google Chrome browser—but on Ubuntu, installation is as straightforward as it would be on Windows: a search, a download, a click, and you're done. The actual download you get is an Ubuntu native .deb file, and besides installing the browser itself, it automatically updates your repository list—so from then on, Chrome will be automatically updated by Ubuntu, the same way and using the same tools as the standard system updates.

Browsing to the Chrome download page in Clear Linux's natively installed Firefox presents you with the same choice of a .deb or .rpm download—but neither one will "just work." There is a bit of trickery you can do on Clear Linux's command line to download the .rpm file, extract and install it, and then do some manual reconfiguration to keep the fonts from looking weird.

Unfortunately, Chrome won't be automatically updated as it would on Ubuntu or most other desktop distributions—you'll instead have to remember to update it yourself and go through the same few steps on the command line (including reconfiguration of the fonts) each time you do.
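The rough shape of that trickery looks like the following sketch. The extraction location is an arbitrary choice of mine, and the font reconfiguration step is elided here; treat this as the pattern rather than a verified recipe. The key idea is that rpm2cpio can unpack the .rpm payload without needing an RPM database on the system.

```shell
#!/bin/sh
# Hedged sketch of a manual Chrome install on Clear Linux. The install
# location is arbitrary and the font workaround is elided; rpm2cpio
# unpacks the .rpm payload without any RPM database involvement.
RPM="$HOME/Downloads/google-chrome-stable_current_x86_64.rpm"
DEST="$HOME/.local/chrome"

extract_chrome() {
    mkdir -p "$DEST" &&
    ( cd "$DEST" && rpm2cpio "$RPM" | cpio -idm )
}
# Usage, after downloading the .rpm from google.com/chrome:
#   extract_chrome
#   "$DEST/opt/google/chrome/chrome" &
# Repeat the extraction by hand for every Chrome update.
```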

Package management

Note the distinct lack of returns for anything like "libuuid" or "uuid" or "uuid-dev" or "devel/uuid". This becomes very important when and if you want to compile your own packages.


Here's an example of swupd search-file being useful. Unfortunately, it frequently isn't.


Of course, more advanced users will likely never bother with the Software Center in the first place, on either distribution. Ubuntu, as a Debian-based distribution, uses .deb packages under the hood, which can be installed, updated, removed, and searched using the apt command line tool. Clear Linux doesn't use apt—or yum, zypper, pacman, pkg, or anything else you've likely heard of. Instead, it uses its own command-line package management tool called swupd.

For the most part, swupd works like any other package manager—there's an argument to install packages, another couple to search them either by package name/description or by included files, and so forth. Unfortunately, I must admit I found swupd consistently frustrating—in particular, the arguments are verbose and oddly worded.

In Debian, Ubuntu, Fedora, OpenSUSE, CentOS, or FreeBSD you'd <packagemanager> install <package> to install a new app from repositories—for example, apt install gimp. But in swupd, you swupd bundle-add <package> instead. You similarly bundle-remove, bundle-list, bundle-info and so on.

This might sound like a minor, petty distinction, but I found it to be pretty obnoxious. I fumbled the syntax—for example, mistakenly typing add-bundle instead of bundle-add—far more frequently than I normally do when using an unfamiliar package manager.

The bundles themselves also flout relatively standard naming conventions pretty frequently. For example, when I found myself needing a particular set of headers that Ubuntu has in uuid-dev, and Fedora has in libuuid-devel, Clear Linux instead had them in os-core-dev—and figuring that out was an enormous nuisance. Trying swupd search uuid didn't list the os-core-dev bundle at all—and neither did searching for the actual file I needed, with swupd search-file uuid.h. (More on this topic later.)

Although swupd works, it feels an awful lot like the result of NIH Syndrome. Intel claims that a lot of Clear Linux's secret sauce is in the packaging, and perhaps it genuinely needed to build its own management tool from the ground up. But from this sysadmin's perspective it's difficult to see the benefits and easy to see the warts—a little more effort devoted to swupd's polish and usability would go a long way.

Will it ZFS?

I had a sneaking suspicion getting ZFS running on Clear wasn't going to be simple.


This part worked fine. But actually switching to the LTS kernel instead of the native one—as simple as that sounds—turned out to be a bit of a nightmare.


OpenZFS on Clear Linux! I got there in the end—but it required requests for help on both the Clear and the OpenZFS sides of the puzzle to get there, largely because of Clear's obstinately weird package naming schema.



Not everybody is going to care whether you can get OpenZFS working on Clear Linux. But I certainly cared, and I spent a ridiculous amount of time chasing this particular dragon. I was seriously considering paving my main laptop and reinstalling it with Clear Linux for a long-term test drive—but even on "just a laptop" I didn't want to do without ZFS' ability to rapidly asynchronously replicate, cryptographically detect and repair bitrot, use inline compression, and so forth and so on.

The OpenZFS project itself doesn't have any installation notes for Clear Linux, and a swupd search zfs came up empty, so I hit the Internet. Searching "Clear Linux ZFS" brings you rapidly to Clear's FAQ, which states "ZFS is not available with Clear Linux OS" and offers btrfs as an alternative.

Btrfs offers most of the same features that ZFS does—but unfortunately, if you actually use the most interesting of those features, such as redundant arrays with data healing, rapid replication, or inline compression, it rapidly becomes unreliable. (Yes, really—commercial NAS devices such as Synology and Netgear's ReadyNAS use btrfs, but they layer it on top of LVM and mdraid, and they do so for good reason. See Debian's wiki for more and note Red Hat's decision to deprecate btrfs entirely in RHEL 7.4.)

The Clear Linux FAQ also points us to an elderly Github issue in which a user requests a ZFS bundle and is shot down. Another user asks for help getting unsigned kernel modules to work and gets a pointer to some documentation via a now-dead link. I found a copy of the deadlinked doc on web.archive.org (and later, a Clear Linux project member provided an updated link to the current version), but that didn't get me where I needed to go, either.

Installing the linux-lts-dev bundle was straightforward, as was creating a kernel configuration file that would allow unsigned modules to load. But switching back to the LTS kernel—necessary, since the native kernel was a bit too bleeding-edge for official support from OpenZFS—proved trickier. Installing the kernel was simple—swupd bundle-add kernel-lts2018—but getting Clear Linux to actually boot from it was a bit of a nightmare.
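The unsigned-module piece boils down to one small configuration file. Here's a sketch, with the path and kernel parameter taken from Clear Linux's documentation as I found it at the time, so treat both as a snapshot rather than gospel. The DESTDIR indirection is my own testing convenience and defaults to the current directory; on a real system you'd write under / with sudo.

```shell
#!/bin/sh
# Allow unsigned kernel modules to load on Clear Linux: a kernel
# command-line fragment, then a boot-entry refresh. DESTDIR defaults to
# the current directory so this is safe to dry-run; use DESTDIR=/ (sudo)
# on a real system. Path and parameter per Clear's docs at the time.
DESTDIR="${DESTDIR:-.}"
mkdir -p "$DESTDIR/etc/kernel/cmdline.d"
echo "module.sig_unenforce" > "$DESTDIR/etc/kernel/cmdline.d/allow-unsigned-modules.conf"
# Then, on the real system:
#   sudo clr-boot-manager update
```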

The clr-boot-manager tool looks pretty straightforward—but not all of the options worked.


The distribution doesn't keep its boot management configuration in any of the places an experienced *nix user might look for them—/boot, /etc/default, anything to do with grub, etc. I never did find the actual configuration data location but eventually discovered that a Clear Linux user is expected to manipulate the boot environment with the tool clr-boot-manager. Unfortunately, clr-boot-manager set-kernel org.clearlinux.lts2018.4.19.103-113 followed by clr-boot-manager update—which should have selected that kernel for use at next boot—did absolutely nothing, and I spun my wheels poking at things, rebooting, running uname -a, and still seeing a 5.5 kernel running for quite some time.

Finally, I gave up on clr-boot-manager set-kernel and instead tried clr-boot-manager set-timeout 10. That actually worked—after rebooting this time, I was presented with a kernel list and manually selected the 4.19 LTS kernel. Now, uname -a showed me that I was running on the 4.19 kernel, and I was ready to compile ZFS!
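Collected in one place, the sequence that eventually worked looks like this. It's printed as a dry run here; on a real system you'd run each line as root, and the final kernel selection still happens interactively at the boot menu.

```shell
#!/bin/sh
# The LTS-kernel boot sequence that eventually worked, as a dry run.
# Pipe the output to sh (as root) to execute the steps for real.
lts_boot_steps() {
    echo "swupd bundle-add kernel-lts2018"
    echo "clr-boot-manager set-timeout 10"  # set-kernel had no effect for us
    echo "clr-boot-manager update"
    echo "reboot"
}
lts_boot_steps
# After rebooting, pick the 4.19 LTS kernel at the boot menu, then
# confirm the running kernel with: uname -r
```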

The problems were far from over, unfortunately. Downloading and extracting the OpenZFS source tarball, chdiring into it and running ./configure, I was presented with an error: uuid/uuid.h missing, libuuid-devel package required. Unfortunately, there is no libuuid-devel bundle in swupd—nor is there libuuid, uuid, uuid-dev, uuid-devel, or anything else along those lines. Neither swupd search uuid nor swupd search-file uuid.h came up with any useful results, either—even though they should have.

Finally, I opened up a new issue in the ZFS on Linux tracker, hoping either that someone else had gotten ZFS running on Clear or that I could get enough information about the configure script to try to monkey-patch it myself. Brian Behlendorf—founding developer of the Linux port of OpenZFS and all-around nice guy—didn't have the answer either.

But Brian did give me the hint that finally solved the puzzle—although swupd search-file uuid.h didn't find the package I needed, swupd search-file libuuid.so.1 did. So one swupd bundle-add os-core-dev later, ./configure and make install both completed successfully!

The remaining issue I faced is that the simple loadable kernel module (LKM) manipulation command insmod—which allows you to specify a path to the module to be inserted into the kernel—does not resolve dependencies, and so insmod /path/to/zfs.ko failed with the error unknown symbol. The much smarter tool modprobe will detect and resolve dependency issues—but it won't let you specify the path to the kernel modules, and the installer had dumped them into places where modprobe didn't know to look.

After a bit of flailing, I eventually just dumped a symlink to each of ZFS's packaged .ko files—which were in individual directories under /lib/modules/extra—directly into /lib/modules itself. With that, modprobe zfs worked, and I actually had ZFS running on Clear Linux. Huzzah!
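In script form, that workaround amounts to the loop below. The directory is parameterized here so the sketch can be exercised safely against a scratch tree; on the real system it was /lib/modules. (The more orthodox fix would be getting the modules under /lib/modules/$(uname -r) and running depmod -a, but the symlinks are what worked here.)

```shell
#!/bin/sh
# The symlink hack from above, as a function. Pass the module directory
# (here, /lib/modules on the real system); each .ko found in per-package
# subdirectories of extra/ gets a symlink at the top level, where
# modprobe's dependency resolution can find it.
link_extra_modules() {
    MODDIR="$1"
    for ko in "$MODDIR"/extra/*/*.ko; do
        [ -e "$ko" ] || continue      # glob matched nothing; skip literal
        ln -sf "$ko" "$MODDIR/$(basename "$ko")"
    done
}
# On the real system:  link_extra_modules /lib/modules && modprobe zfs
```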

Although ZFS was functional now, there were still papercuts to deal with. The zpool and zfs commands were in /usr/local/sbin, which isn't part of the default PATH in Clear Linux. Also, the ZFS module wasn't set to load automatically on boot. Those remaining problems are fortunately pretty trivial to solve. To fix the path issue, either update your PATH to include /usr/local/sbin, or symlink the utilities there into /usr/local/bin. To get ZFS to autoload on boot, create a directory /etc/modules-load.d, then create a file /etc/modules-load.d/zfs.conf and populate it with a single line just saying zfs.
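Scripted, those two fixes look like this. The DESTDIR indirection is my own testing convenience, not part of the procedure; it defaults to the current directory for safe dry runs.

```shell
#!/bin/sh
# The two papercut fixes: autoload the zfs module at boot, and note the
# PATH fix for the zpool/zfs utilities. DESTDIR defaults to the current
# directory for safe dry runs; use DESTDIR=/ with sudo for real.
DESTDIR="${DESTDIR:-.}"
mkdir -p "$DESTDIR/etc/modules-load.d"
printf 'zfs\n' > "$DESTDIR/etc/modules-load.d/zfs.conf"
# PATH fix for interactive shells (append once to your shell profile):
#   export PATH="$PATH:/usr/local/sbin"
```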

This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. These types of issues are all solvable, of course—but if you aren't willing and excited to be a part of the effort to solve them yourself or for those who come after you, you should probably steer clear of Clear as a daily driver.

The good

Clear Linux is backed by Intel, one of the world's largest and foremost technology companies

Clear Linux has a concise, clear mandate: be secure, be fast, do things right

Most things work with little or no tweaking

If you're bound and determined to have The Fastest Linux In The West, this is the distro for it—sorry, Arch and Gentoo users

"This is Linux! I know this!"

The bad

Although most things work without tweaking, most users will quickly want something that doesn't


A lot of that stuff will probably be worked out in the next year or so.

At one time, daily use of any Linux distro looked like that. Getting anything going, ever, was a constant struggle. Once you had it working, it tended to be extraordinarily stable and trustworthy, at least compared to NT 3.5 and 4.0, but getting to that point was often excruciating.

It's easy to forget just how far it's all come, and just how much has improved. Routine use of a Ubuntu desktop really isn't that difficult, anymore. The overall knowledge set is different, so you have to learn a bunch of new stuff, but if you sat two total neophytes down in front of both, they'd probably have about the same difficulty in figuring out how to use the machine in a routine way.

However, Linux gives you a much bigger shovel to dig with, and when attempting to fix things, you can make very deep holes, very quickly. It's harder to accidentally mess up a Windows install.

"This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. "

Those of us who understand that the authors of ZFS chose to license it in such a way that it is incompatible with Linux know that ZFS on Linux will never be "relatively simple".

Failing to acknowledge this to your readers does them a great disservice.

The kernel devs seem to be going out of their way to make things difficult, when the ZFS maintainers have their hands tied by Sun (now Oracle). They have made things as easy as they possibly can, given their licensing restrictions, but the kernel devs seem to be actively trying to make it hard again.

The actual loser of the kernel devs' pissing match with ZFS is us, not them.

"This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. "[snipe about ZFS on Linux]

I believe you've missed the point. Clear Linux apparently doesn't follow conventional Linux file path customs so doing work in the commandline requires understanding what it's doing different from other Linux distros under the hood. That has nothing to do with ZFS, as pointed out in TFA.

Is Clear Linux expecting users to get most of their software as flatpaks?

Good question. I don't know whether "expecting" is correct... But they're certainly not against it, given that they have their version of software center populating partially from flathub.

I don't know if that was a consciously made decision, or that's just the bone stock condition of software center without any customization, though--I've never tried building a completely vanilla software center.

The author's experience with Clear Linux has been my personal experience with it. Technically interesting, but it's not useful for my personal needs. The benchmark wins don't tell the whole story. I really don't care if GIMP takes 6 seconds or 5 seconds to start up. Likewise, I don't care if a website takes 1 second to load, or 1.4. So long as the programs I use run with reasonable performance and don't crash (which is not a given among the various disparate distributions), I'm fine and dandy. The Clear Linux user experience was way too rough and ill-defined. I also hate Gnome with a passion. Even when it's not crashing every half hour or so and taking everything with it, it's constantly getting in my way, or I'm having to install third-party programs just so I can tweak some annoying UI choice that the devs force down your throat without any built-in way to alter it.


I remember the pain of checking the distro's repository, finding their version of Software X was horribly outdated or partially broken. Then going on to run make, find there are some typos in the make file, fix them, get it going but finding you have to copy over the default configs and make a couple changes... It just went on and on, to the point where there was no impulse usage of software. You really had to want or need a piece of software to endure it. Windows wasn't that much better back then, InstallShield was very new and lacked the ability to bundle dependencies, often times it was assumed you would be able to figure those out or a copy was provided in a folder somewhere that may or may not be mentioned at the end of the InstallShield wizard.

It's easy to forget just how far it's all come, and just how much has improved. Routine use of a Ubuntu desktop really isn't that difficult, anymore.

As a recent full-time Linux user who's dabbled over the years, I strongly agree. My hardware worked out of the box, and once I'd made the choice of distribution and desktop environment, settling in was pretty painless.

Now if only the same had been true of Linux on my Pixelbook. Shame on you, Google.

This kind of reminds me of those AMD Bulldozers years back, where they had impressive benchmarks for the price, but in practical settings were found wanting compared to Intel CPUs. Still, assuming that Intel actually invests some time into smoothing out the issues, I could very easily see myself using this distro.

"Clear Linux has a concise, clear mandate: be secure, be fast, do things right"

"do things right" sounds good at first, but like Google's "don't be evil", if you don't define what "evil" is "don't be evil" doesn't mean anything.

Not just "right", but "secure" needs to be defined as well. "Secure" against a script kiddie or my neighbor, is not the same as "secure" against a highly motivated attacker or nation state. The latter have the resources to go after the hardware flaws even if the software on top of the hardware were perfect (it's not).

I really find it strange that Intel decided to make their own Linux distribution. But then again, there are so many distributions.

Marketing. Clear Linux is a research project to show what's possible with a modern compiler stack, performance friendly settings and configurations, and modern hardware - namely Intel CPU platforms. Yes, it makes AMD look good, too, but that's just a side effect. It's a distribution that compares Intel with its biggest competition, so from that point of view, even if they lose a few benchmarks on certain programs, it's a net marketing win for Intel.

The idea is similar to the years when B. Dalton and Waldenbooks were usually in the same malls together, and sometimes right across the hall. It made sense to be next door to your primary competition. Keep your friends close, but your enemies closer: the better to know what they're doing and how they're doing on the same playing field. Macy's and Gimbels used to do something similar as well.


Intel's Clear Linux distribution has been getting a lot of attention lately, due to its incongruously high benchmark performance. Although the distribution was created and is managed by Intel, even AMD recommends running benchmarks of its new CPUs under Clear Linux in order to get the highest scores.

So the interesting question to me is, is all this optimization effort by Intel completely useless except as marketing? Clearly the distro isn't going to take over. Either Clear is full of bullshit optimizations that trade off speed in one area for speed in another to make a benchmark look good without improving overall performance, or they are real optimizations, in which case they really, really ought to be folded into, you know, the production software that people actually use.

I suppose a lot of this might be whether this is all compiler optimization, or whether there are a lot of changes to actual system code too. I expect it will be both, since otherwise they would simply release a recompiled form of another distro. In which case, the compiler optimizations might be a little sketchy to put into production, but the other code modifications, if they are not cheating rubbish ought to be pushed back out.

So is that happening, or not, and if not, why?

Uh... you do realize they're legally required to release their changes to any GPL-licensed project whose binaries they distribute, right? They also release the code for the non-GPL projects they alter. That said, just because they release their code changes doesn't mean the upstream projects have to integrate them. The reasons project maintainers may or may not integrate those changes are as varied as the individuals and projects themselves. The Clear Linux team appears to be forthright and playing fair with what they're doing.

So the interesting question to me is, is all this optimization effort by Intel completely useless except as marketing?

I don't track Linux distros anymore, so I'm curious what Clear is doing that Gentoo or Arch aren't (other than providing pre-compiled binaries). If they're modifying the programs to get more performance out of them, hopefully they're trying to get their changes merged upstream so everyone can benefit more readily. If they're just using icc and letting the compiler wring more performance out of the existing code, then this is probably a lot less interesting to Gentoo or Arch users, beyond maybe looking at switching compilers. If they want to pay for it, that is. (icc is still commercial, right?)

I also hate Gnome with a passion. Even when it's not crashing every half hour or so and taking everything down with it, it's constantly getting in my way, or I'm having to install third-party programs just to tweak some annoying UI choice the devs force down your throat without any built-in way to alter it.

I agree. How on Earth did Gnome 3 end up as the default instead of KDE Plasma on most Linux distros? Where did the universe go wrong?

I'm not able to install in VirtualBox (Linux Mint as host). The install doc says to set the chipset to ICH9 and enable EFI and PAE/NX. It won't boot past the initial boot screen. If I turn off EFI, it boots but says it can't install the OS due to incompatibilities. Any ideas?

So is that happening, or not, and if not, why?

Intel are showing off performance optimizations. They're also removing legacy compatibility, and because they started fresh, they could.

Most Linux software is compiled to use the minimum set of capabilities available on all amd64 CPUs, but some of Intel's biggest advantages, such as AVX-512 instructions, are not in that set. Therefore, to show that these instructions provide tangible benefits and aren't just gimmicks, Intel created a way of shipping two versions of critical system libraries in a single file: one that runs on all computers, and a second that runs faster on computers with AVX-512.

However, many distributions didn't want to make the effort of compiling software twice. So Intel came up with the idea of shaming other distributions into optimizing the software they ship, and it has been working quite well. By telling reviewers which distro to install, Intel keeps full control of the software stack while helping the entire Linux ecosystem improve its performance. Maintainers of other distributions can see both the results and the secret sauce for getting there as Intel blazes the trail.

I'm not able to install in VirtualBox (Linux Mint as host). The install doc says to set the chipset to ICH9 and enable EFI and PAE/NX. It won't boot past the initial boot screen. If I turn off EFI, it boots but says it can't install the OS due to incompatibilities. Any ideas?

I'd advise installing virt-manager and using that instead. But then, I'm not a VirtualBox fan.

Agree about both the amazing performance and the "why would you do this?" package manager. I use R, and CL's R is built against the Intel Math Kernel Library and, with other optimizations, easily outclasses anything else on the same hardware.

And yet my R server still runs Ubuntu, because packages don't live in a vacuum. They need an ecosystem, and swupd is a pain to use compared to apt/yum, and there are too many packages missing. I'm not against compiling the odd package by hand, but if I wanted to "build it myself" I would just use ArchLinux instead.

It's a compelling VM OS/server OS for users who can handle those limitations, but it wasn't a good desktop/workstation OS for me. I'm waiting for Ubuntu/Fedora to start importing CL's optimizations so I get the pleasure without the pain.

Uh... you do realize they're legally required to release their changes to any GPL-licensed project whose binaries they distribute, right? They also release the code for the non-GPL projects they alter. That said, just because they release their code changes doesn't mean the upstream projects have to integrate them. The reasons project maintainers may or may not integrate those changes are as varied as the individuals and projects themselves. The Clear Linux team appears to be forthright and playing fair with what they're doing.

Sure, I never said they weren't open source, or that they weren't acting in good faith. I'm pointing out that the acid test of both the optimizations and the open source process is whether the optimizations are in fact integrated upstream. If not, either they aren't really acceptable optimizations for mainstream use, or they aren't really worth the effort... or the Linux development community is leaving non-trivial performance on the table.

I want to know IF it is being done, and why or why not, not who is to blame.

Phew! Does anyone remember Moblin? I can see where the custom file paths for OS/app/user file segregation, and the stateless installation where you can safely reset the machine by deleting everything in /etc and /var, would be very useful for a mobile phone or embedded distribution...

I want to take a moment to note how PTS (the Phoronix Test Suite) works. I'm sure a large part of the audience reading this article has never encountered or used it, and may be making incorrect assumptions about it.

PTS does not use the packages that ship with any particular distribution. It uses the distro's build environment, basic development libraries, and PHP for its own functionality. It then downloads whatever it needs that isn't in the distro's basic package repositories, along with the program to be benchmarked, from upstream project repositories without any distro-specific patches. Those source trees may be, but usually are not, the same as what's in the distribution itself. They are then compiled with whatever configuration, compiler, and linker flags Michael Larabel believes will get maximum performance from the resulting binaries. These may not be good, or even valid, flags; he's been known to include mutually contradictory compiler flags, or nonequivalent flags when comparing GCC and LLVM.

The resulting binaries are usually run several times to reduce run-to-run jitter in the timed results. An average of these times, frame rates, or whatever the metric is, is then given as the result.

These results may not (and probably rarely do) reflect the real-life performance of a distribution's own packages, because distributions use default configurations that try to maximize compatibility with hardware and various user needs (basically everything and the kitchen sink is often compiled in). Sometimes the distribution's own packages may perform better than the benchmarked version; sometimes they may not. Regardless, the distribution packages are usually more suitable for everyday use.

This has been the primary criticism of PTS over the years: it may or may not reflect any real-world use. That's especially true of the game benchmarks, because most people playing games aren't interested in whether a game can peak at 200 FPS, only in whether it can hold a steady frame rate at their monitor's refresh rate and whether input latency is a problem.

What PTS is actually good at and for, however, is pointing out huge jumps in performance or huge losses between different version releases. This generally suggests Something Bad Happened and the cause can be bisected down into the problem commit(s).

Although most things work without tweaking, most users will quickly want something that does

What does this sentence mean?

Eventually you'll want a something, and it'll need tweaking to work.

I worked it out too, but I can understand metalliqaz's confusion: the first clause is positive ("works"), and so is the second ("does"). Mr. Salter may have originally written "most things don't require tweaking to work," or similar, and the second part wasn't changed to match.