Interestingly, the numbers here are very different than they are on birdsite. Many more people here seem willing to take a speed hit for security than over there. Not surprising, I suppose: selection bias.

(Yes, I realize that’s not bulletproof, because how much can I *really* trust a site, but I expect actual wide-ranging in-the-wild side-channel attacks to get on the public radar before they can reach me.)

@cwebber The notebook computer I still use regularly is over six years old and was a budget model. I think it is already 3x slower than a modern machine, so I would certainly appreciate a security upgrade with similar performance.

@aran Indeed. In this case I'm suggesting security against things such as Rowhammer, Spectre, etc. (BTW, my understanding is that the slowdown would be more like 1.5x-2x, but I decided to give a significantly more conservative estimate.)

@cwebber It depends how you define modern machine. If it were 3x slower than my current computer, no. It might technically be possible, but it'd be very unpleasant. If it were 3x slower than, say, a brand new, top of the line, ridiculously overpowered machine, then maybe.

@shadowfacts It's interesting that people feel this way, and I wonder how much is "anchoring"... I guess having run computers that were 300x slower than modern computers and getting a lot done on them makes me less bothered by it.

By "anchoring", I mean: if Moore's Law hadn't died out and I were asking you this after computers had become 3x faster than today, asking if you were willing to drop from that to "today's speeds", would you object?

@cwebber If Moore's Law still held, it would be a much different question. A 3x decrease would only be a few years old, so despite being objectively much slower, I'd be much more used to it because it would be so recently in the past.

A great deal of the reluctance to change, I think, is not just that the hardware's gotten better, but the software that we use _expects_ that modern, fast hardware. My (and I suspect other people's) answer was based on the assumption of continuing to use the software we already do as normal. But, if something like your hypothetical actually happened, and lots of people started using computers that were several times slower, I think we'd see the software adapt to that.

If the software adapted to work about equally well on 3x slower hardware as current software does on current hardware, I'd be much more willing to take the slower hardware, knowing that I'd still be able to do everything I currently do and be similarly productive. But, current software designed for modern computers, running on a 3x slower device, would for a great many tasks be unbearable.

@cwebber but I would say that part of that is probably due to perceived speed being very, very different than just raw processor speed. It's dependent on a lot of things: OS programming, cache size, workflow, amount of RAM, etc.

when you said "3x slower" my gut reaction was "you know the number of times your MacBook completely freezes up when trying to do some trivial task like alt-tab or open a new text file? what if those freezes took three times as long or happened three times as frequently?"

@nightpool I'm also aware that many people in this generation have been using computers *primarily since* Moore's Law leveled off, so unlike people from my generation, they have gotten used to an approximate baseline of speed.

@cwebber sure. I'm probably willing to give up "having 100 chrome tabs open" but not willing to give up "having a super high DPI laptop display". trade-offs are going to be different for different people

@cwebber and of course that's complete nonsense; it's all based on stuff like "I always forget to close programs I'm not using" and "retina displays with high scaling are hard for even top-of-the-line laptop graphics cards with current implementations"

@cwebber basically the only computationally expensive task I do is software development (okay, and Guix). If people realised that their phones are sometimes 1.5-2x slower than their desktops, more would answer positively, I believe. If not for development, I would say "yes", probably. Not sure.

@bob Generally #GUI makes use of it. But the demands there aren't very high either; an embedded graphics core can cope with it.

I'd say the only place where performance is needed for s/w development and maintenance is reproducible builds. And the applications for content creators, of course: CAD, graphic design, music creation, etc. @cwebber

@Azure With explicit parallelism (#EPIC), you can sometimes reach the same performance at a slower clock speed, but it costs way more than conventional hardware, because it requires more circuitry, and because such machines aren't mass-produced yet. @cwebber

@amiloradovsky @cwebber The 'more circuitry' surprises me, only because I recall that one of the motivations for Merced was to simplify the on-die apparatus associated with out-of-order and speculative execution. (Of course, this could very well have just been bad information.)

@Azure No, that's correct, but you still need more computing units/devices to process information as fast as those working at higher clock rates. Also, there is such a thing as "intrinsic parallelism" in programs, which limits the minimum clock speed needed for a given level of performance. @cwebber

@cwebber I've been buying used business systems and installing Linux for personal use for a decade and a half, so use... sure. Buy? Hmm...

I'm good with the performance of an i5-6600 (Skylake, 2015) for a lot of demanding applications. I couldn't afford a current generation i9 to get that, but make that a business requirement today and we'll see what's on the online auction sites in 3 years.

@cwebber Beyond side channels, I'm even more worried about the ever-present issues of poor security design/architecture and of security-critical components being written in unsafe languages. The fact that there is always another buffer overflow waiting in the kernel, in the browser, etc. is nonsense. Who knows when someone will find a critical vulnerability in libjpeg and start manipulating images to take over the browser, then call a vulnerable syscall to install a rootkit.

@cwebber I really want to run a microkernel (so poorly written driver code doesn't compromise the whole system), written in a safe language, with arbitrarily nestable security contexts (e.g. beyond users having different privileges, I want any program to be able to spawn processes, threads, etc. in more restricted contexts, which can also spawn more restricted children, and so on).
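(A rough sketch of the attenuation rule such nestable contexts would need; all names here are hypothetical, and real capability systems like seL4 are far more involved. The idea: a child's rights are always masked by its parent's, so nesting can only shrink authority, never regain it.)

```c
#include <stdlib.h>

/* Hypothetical rights bits and context type (invented for illustration). */
enum { RIGHT_READ = 1, RIGHT_WRITE = 2, RIGHT_NET = 4, RIGHT_SPAWN = 8 };

struct context {
    unsigned rights;                /* bitmask of what this context may do   */
    const struct context *parent;   /* chain back to the root context        */
};

/* Spawn a child context.  Requested rights are intersected with the
 * parent's, so no descendant can ever exceed any of its ancestors. */
struct context *context_spawn(const struct context *parent, unsigned requested) {
    if (!(parent->rights & RIGHT_SPAWN))
        return NULL;                       /* parent may not create children */
    struct context *child = malloc(sizeof *child);
    if (!child)
        return NULL;
    child->rights = requested & parent->rights;  /* attenuation: subset only */
    child->parent = parent;
    return child;
}
```

With that one rule enforced by the kernel, "arbitrarily nestable" falls out for free: each level simply applies the mask again.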

@Tryphon I don't know. For desktop use I would want good performance per thread. I don't know that I would care too much about massively parallel workloads. But I'm certainly open to the idea of massively parallel Lisp machines. @cwebber @alcinnz

@willghatch @cwebber @alcinnz I wasn’t thinking threads, actually. More like actors or coroutines, the kind Erlang/Elixir uses. See if there is another programming system that would benefit from parallel hardware, not just a faster sequential Lisp machine. Not sure I am making sense.