"This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. "

Those of us who understand that the authors of ZFS chose to license it in such a way that it is incompatible with Linux know that ZFS on Linux will never be "relatively simple".

Failing to acknowledge this to your readers does them a great disservice.

A lot of that stuff will probably be worked out in the next year or so.

At one time, daily use of any Linux distro looked like that. Getting anything going, ever, was a constant struggle. Once you had it working, it tended to be extraordinarily stable and trustworthy, at least compared to NT 3.5 and 4.0, but getting to that point was often excruciating.

It's easy to forget just how far it's all come, and just how much has improved. Routine use of a Ubuntu desktop really isn't that difficult, anymore. The overall knowledge set is different, so you have learn a bunch of new stuff, but if you sat two total neophytes down in front of both, they'd probably have about the same difficulty in figuring out how to use the machine in a routine way.

However, Linux gives you a much bigger shovel to dig with, and when attempting to fix things, you can make very deep holes, very quickly. It's harder to accidentally mess up a Windows install.

"This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. "

Those of us who understand that the authors of ZFS chose to license it in such a way that it is incompatible with Linux know that ZFS on Linux will never be "relatively simple".

Failing to acknowledge this to your readers does them a great disservice.

The kernel devs seem to be going out of their way to make things difficult, while the ZFS maintainers have their hands tied by Sun (now Oracle). They have made things as easy as they possibly can, given their licensing restrictions, but the kernel devs seem to be actively trying to make it hard again.

The actual losers of the kernel devs' pissing match with ZFS are us, not them.

"This shaggy dog story isn't really about ZFS itself—it's about the fact that issues that are relatively simple under more well-traveled distributions can be a giant pain in the rear under Clear Linux. "[snipe about ZFS on Linux]

I believe you've missed the point. Clear Linux apparently doesn't follow conventional Linux file path customs, so doing work on the command line requires understanding what it's doing differently from other Linux distros under the hood. That has nothing to do with ZFS, as pointed out in TFA.

Is Clear Linux expecting users to get most of their software as flatpaks?

Good question. I don't know whether "expecting" is correct... But they're certainly not against it, given that their version of the software center populates partially from Flathub.

I don't know if that was a consciously made decision, or if that's just the bone-stock condition of the software center without any customization, though--I've never tried building a completely vanilla software center.

The author's experience with Clear Linux has been my personal experience with it. Technically interesting, but it's not useful for my personal needs. The benchmark wins don't tell the whole story. I really don't care if GIMP takes 6 seconds or 5 seconds to start up. Likewise, I don't care if a website takes 1 second to load, or 1.4. So long as the programs I use run with reasonable performance and don't crash (which is not a given among the various disparate distributions), I'm fine and dandy. The Clear Linux user experience was way too rough and ill-defined.

I also hate Gnome with a passion. Even if it's not crashing every half hour or so and taking everything with it, it's constantly getting in my way, or I'm having to install third-party programs just so I can tweak some annoying UI choice that the devs force down your throat without any built-in way to alter it.

A lot of that stuff will probably be worked out in the next year or so.

At one time, daily use of any Linux distro looked like that. Getting anything going, ever, was a constant struggle. Once you had it working, it tended to be extraordinarily stable and trustworthy, at least compared to NT 3.5 and 4.0, but getting to that point was often excruciating.

It's easy to forget just how far it's all come, and just how much has improved. Routine use of an Ubuntu desktop really isn't that difficult anymore. The overall knowledge set is different, so you have to learn a bunch of new stuff, but if you sat two total neophytes down in front of both systems, they'd probably have about the same difficulty in figuring out how to use the machine in a routine way.

However, Linux gives you a much bigger shovel to dig with, and when attempting to fix things, you can make very deep holes, very quickly. It's harder to accidentally mess up a Windows install.

I remember the pain of checking the distro's repository and finding their version of Software X was horribly outdated or partially broken. Then going on to run make, finding there were typos in the makefile, fixing them, getting it going, but then finding you had to copy over the default configs and make a couple of changes... It just went on and on, to the point where there was no impulse usage of software. You really had to want or need a piece of software to endure it. Windows wasn't that much better back then: InstallShield was very new and lacked the ability to bundle dependencies, and oftentimes it was assumed you would be able to figure those out, or a copy was provided in a folder somewhere that may or may not be mentioned at the end of the InstallShield wizard.

It's easy to forget just how far it's all come, and just how much has improved. Routine use of an Ubuntu desktop really isn't that difficult anymore.

As a recent full-time Linux user who's dabbled over the years, I strongly agree. My hardware worked out of the box, and once I'd made the choice of distribution and desktop environment, settling in was pretty painless.

Now if only the same had been true of Linux on my Pixelbook. Shame on you, Google.

This kind of reminds me of those AMD Bulldozers years back, where they had impressive benchmarks for the price, but in practical settings were found wanting compared to Intel CPUs. Still, assuming that Intel actually invests some time into smoothing out the issues, I could very easily see myself using this distro.

"Clear Linux has a concise, clear mandate: be secure, be fast, do things right"

"do things right" sounds good at first, but like Google's "don't be evil", if you don't define what "evil" is "don't be evil" doesn't mean anything.

Not just "right", but "secure" needs to be defined as well. "Secure" against a script kiddie or my neighbor, is not the same as "secure" against a highly motivated attacker or nation state. The latter have the resources to go after the hardware flaws even if the software on top of the hardware were perfect (it's not).

I really find it strange that Intel decided to make their own Linux distribution. But then again, there are so many distributions.

Marketing. Clear Linux is a research project to show what's possible with a modern compiler stack, performance-friendly settings and configurations, and modern hardware - namely Intel CPU platforms. Yes, it makes AMD look good too, but that's just a side effect. It's a distribution that compares Intel with its biggest competition, so from that point of view, even if they lose a few benchmarks on certain programs, it's a net marketing win for Intel.

The idea is similar to the years when B. Dalton and Waldenbooks were usually in the same malls together, and sometimes right across the hall. It made sense to be next door to your primary competition. Keep your friends close, but your enemies closer: the better to know what they're doing and how they're doing on the same playing field. Macy's and Gimbels used to do something similar as well.

Intel's Clear Linux distribution has been getting a lot of attention lately, due to its incongruously high benchmark performance. Although the distribution was created and is managed by Intel, even AMD recommends running benchmarks of its new CPUs under Clear Linux in order to get the highest scores.

So the interesting question to me is, is all this optimization effort by Intel completely useless except as marketing? Clearly the distro isn't going to take over. Either Clear is full of bullshit optimizations that trade off speed in one area for speed in another to make a benchmark look good without improving overall performance, or they are real optimizations, in which case they really, really ought to be folded into, you know, the production software that people actually use.

I suppose a lot of this hinges on whether this is all compiler optimization, or whether there are a lot of changes to actual system code too. I expect it will be both, since otherwise they would simply release a recompiled form of another distro. In which case, the compiler optimizations might be a little sketchy to put into production, but the other code modifications, if they are not cheating rubbish, ought to be pushed back out.

So is that happening, or not, and if not, why?

Uh... you do realize that they are legally required to release any changes to any GPL code projects if they push out binary releases, right? They also release the code for non-GPL projects that they alter. That said, just because they do release any code changes doesn't mean the upstream projects have to integrate any changes. The reasons project managers may or may not integrate those changes are as varied as the individuals and projects themselves. The Clear Linux team appears to be forthright and playing fair with what they're doing.

So the interesting question to me is, is all this optimization effort by Intel completely useless except as marketing?

I don't track Linux distros anymore, so I'm curious what Clear is doing that Gentoo or Arch aren't (other than providing pre-compiled binaries). If they're modifying the programs to get more performance out of them, hopefully they're trying to get their changes merged upstream so everyone can benefit more readily. If they're just using icc and letting the compiler get more performance out of the existing code, then this is probably a lot less interesting to Gentoo or Arch users, beyond maybe looking at switching compilers... if they want to pay for it. (icc is still commercial, right?)

I also hate Gnome with a passion. Even if it's not crashing every half hour or so and taking everything with it, it's constantly getting in my way, or I'm having to install third-party programs just so I can tweak some annoying UI choice that the devs force down your throat without any built-in way to alter it.

I agree. How on Earth did Gnome 3 end up as the default instead of KDE Plasma on most Linux distros? Where did the universe go wrong?

So is that happening, or not, and if not, why?

Intel are showing off performance optimizations. They're also removing legacy compatibility, and because they started fresh, they could.

Most Linux software is compiled to use the minimum set of capabilities available on all amd64 CPUs, but some of Intel's biggest advantages, such as AVX-512 instructions, are not in that set. Therefore, to show that these instructions provide tangible benefits and aren't just gimmicks, they created a way of shipping two versions of critical system libraries in a single file. One of the versions will run on all computers; the second will run faster on computers with AVX-512.
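To make that concrete, here's a minimal sketch of one mechanism for this, GCC's function multi-versioning (assuming GCC 6 or later on x86-64 with glibc; the toy function and values are illustrative, not Clear Linux's actual build setup):

    #include <stdio.h>

    /* GCC emits one binary containing several clones of this function
     * and picks the best one for the running CPU at load time via an
     * ifunc resolver. Clear Linux applies the same idea to whole
     * system libraries; this dot product is only an illustration. */
    __attribute__((target_clones("default", "avx2", "avx512f")))
    double dot(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];  /* auto-vectorized per target clone */
        return sum;
    }

    int main(void)
    {
        double a[1024], b[1024];
        for (int i = 0; i < 1024; i++) {
            a[i] = i * 0.5;
            b[i] = i * 0.25;
        }
        /* The same call site everywhere; dispatch to the AVX-512
         * clone happens transparently on CPUs that support it. */
        printf("%f\n", dot(a, b, 1024));
        return 0;
    }

Built with something like "gcc -O3", the single resulting binary still runs on any amd64 CPU, which is the whole point.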

However, many distributions didn't want to make the effort of compiling software twice. Therefore, Intel came up with the idea of shaming other distributions into optimizing the software they ship, which has been working quite well. By telling reviewers what distro to install, they have full control of the software stack, and they are helping the entire Linux ecosystem improve its performance. Maintainers of other distributions can see the results, and the secret sauce for how to get there, as Intel blazes the trail.

I'm not able to install in VirtualBox (Linux Mint as host). The install doc says to set the chipset to ICH9 and enable EFI and PAE/NX. It won't boot past the initial boot screen. If I turn off EFI, it boots but says it can't install the OS due to incompatibilities. Any ideas?

I'd advise installing virt-manager and using that instead. But then, I'm not a virtualbox fan.

Agree about both the amazing performance and the "why would you do this?" package manager. I use R, and CL's R is built against the Intel Math Kernel Library and, with other optimizations, easily outclasses anything else on the same hardware.
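For anyone wondering how that works: R's matrix math bottoms out in BLAS calls, so pointing the same code at an MKL-provided BLAS speeds it up without touching R itself. A minimal sketch of that interface level in C (assuming a CBLAS header is available; MKL, OpenBLAS, and the reference BLAS all expose this same interface):

    #include <stdio.h>
    #include <cblas.h>  /* same header whether the BLAS behind it is
                           the reference BLAS, OpenBLAS, or Intel MKL */

    int main(void)
    {
        /* C = A * B for two 2x2 matrices, row-major. R's %*% ends up
         * in a dgemm call like this, so swapping in a faster BLAS
         * library speeds up R without changing R itself. */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0, 0, 0, 0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,    /* M, N, K       */
                    1.0, A, 2,  /* alpha, A, lda */
                    B, 2,       /* B, ldb        */
                    0.0, C, 2); /* beta, C, ldc  */

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

Linked against whichever BLAS implementation is installed, the exact same source just runs faster when a faster BLAS sits behind it.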

And yet my R server still runs Ubuntu, because packages don't live in a vacuum. They need an ecosystem, and swupd is a pain to use compared to apt/yum, and there are too many packages missing. I'm not against compiling the odd package by hand, but if I wanted to "build it myself" I would just use ArchLinux instead.

It's a compelling VM OS/server OS for users who can handle those limitations, but it wasn't a good desktop/workstation OS for me. I'm waiting for Ubuntu/Fedora to start importing CL's optimizations so I get the pleasure without the pain.

Uh... you do realize that they are legally required to release any changes to any GPL code projects if they push out binary releases, right?

Sure, I never said they weren't open source, or that they weren't acting in good faith. I'm pointing out that the acid test of both the optimizations and the open source process is whether the optimizations are in fact integrated upstream. If not, either they aren't really acceptable optimizations for mainstream use, or they aren't really worth the effort... or the Linux development community is leaving non-trivial performance on the table.

I want to know IF it is being done, and why or why not, not who is to blame.

Phew! Does anyone remember Moblin? I can see where the custom file paths for OS/app/user file segregation, and a stateless installation where you can safely reset the machine by deleting everything in /etc and /var, would be very useful for a mobile phone/embedded distribution...

I want to take a moment to note how PTS (Phoronix Test Suite) works. I'm sure there's a large audience reading this article that may be making incorrect assumptions, having never encountered or used it before.

PTS does not use the packages that come with any particular distribution. It uses the build environment, basic development libraries, and the distro's PHP for functionality. It then downloads whatever it needs that's not in the distro's basic packaging repositories, plus the program to be benchmarked, from upstream project repositories without any distro-specific patches. Those source trees may be, but usually are not, the same as what's in the distribution itself. These source trees are then compiled with whatever Michael Larabel believes the configuration, compiler, and linker flags should be to get maximum performance from the resulting binaries. These may not be good, or even valid, flags; he's been known to include mutually contradictory compiler flags from time to time, or nonequivalent flags when comparing GCC and LLVM.

The resulting binaries are usually run several times to reduce multiprocess execution jitter in the timed results. An average of these times, FPS figures, or whatever the metric is, is then given as the result.
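In spirit, that repeat-and-average step is nothing more than this (a hypothetical sketch in C; PTS actually launches the benchmarked program as a subprocess rather than calling a function, and the workload here is just a stand-in):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the benchmarked program. */
    static double workload(void)
    {
        double x = 0.0;
        for (long i = 1; i < 50000000L; i++)
            x += 1.0 / (double)i;
        return x;
    }

    int main(void)
    {
        const int runs = 5;  /* several runs, as described above */
        double total = 0.0;

        for (int r = 0; r < runs; r++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            volatile double sink = workload();  /* keep the work live */
            (void)sink;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (double)(t1.tv_sec - t0.tv_sec)
                          + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
            total += secs;
            printf("run %d: %.3f s\n", r + 1, secs);
        }

        /* The averaged figure is what gets reported as the result. */
        printf("average of %d runs: %.3f s\n", runs, total / runs);
        return 0;
    }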

These results may not (and probably rarely do) reflect the real-life performance of a distribution's own packages, as distributions use default configurations that try to maximize compatibility with hardware and various user needs (basically everything and the kitchen sink is often compiled in). Sometimes the distribution's own packages may perform better than the benchmarked version, sometimes they may not. Regardless, the distribution packages usually are more suitable for everyday use.

This has been the primary criticism of PTS over the years: it may or may not reflect any real-world use, especially when you get to the game benchmarks, because most people playing games aren't interested in whether a game can hit 200 FPS at peak performance, only in whether it can hold a steady frame rate at their monitor's refresh rate and whether or not input latency is a problem.

What PTS is actually good at and for, however, is pointing out huge jumps or losses in performance between different version releases. This generally suggests Something Bad Happened, and the cause can be bisected down to the problem commit(s).

Although most things work without tweaking, most users will quickly want something that does

What does this sentence mean?

Eventually you'll want something, and it'll need tweaking to work.

I worked it out too, but I can understand metalliqaz's confusion: the first clause is positive ("works"), and so is the second ("does"). Mr. Salter may have originally written "most things don't require tweaking to work", or similar, and the second part was not changed accordingly.

I criticised Jim, who, as far as I can see, repeatedly insists on disingenuously ignoring the licensing conditions set by those who authored ZFS.

Those conditions are the reason that using ZFS with software that is licensed under incompatible terms will *never* be simple. The kernel devs did not write ZFS, and they did not choose its license. Some of them may not even be that fond of the license which Linus adopted for the kernel.

If Jim were more honest about the situation, we would see fewer posts like yours, which display staggering ignorance. Sadly, though, it appears that his emotional attachment to ZFS and his desire to use it with Linux mean that he is unable to do so.