The 286 didn't have 32-bit registers. The 386SX had narrower external buses, but was otherwise identical to the full 386.

32-bit registers were almost as universal in 16-bit architectures as 16-bit registers were in 8-bit ones. Back then, bitness generally referred to memory bus width (and secondarily to architecture generation); it wasn't until 64-bit that it referred to register size.

It has been a while since I touched that old x86 assembler, but the other quintessential 16-bit architecture I know, the 68000, had 32-bit registers, and that never made it a 32-bit architecture.

Not correct.

In general, the "bitness" of a CPU corresponds to the width of the general purpose registers, and of the arithmetic/logic operations on them, exposed through the instruction set. This usually matches the hardware implementation: 16-bit CPUs can perform 16-bit operations in "one go", 32-bit CPUs can perform 32-bit operations in "one go".
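
To make the "one go" point concrete, here is a minimal C sketch; the targets named in the comment are only illustrative examples:

    #include <stdint.h>

    /* A plain 32-bit addition. On a 32-bit CPU (386, 68020) this is one
       ADD on a pair of 32-bit registers: "one go". A compiler for a
       16-bit CPU (8086, 286) has to split it into an ADD on the low 16
       bits followed by an add-with-carry on the high 16 bits: two
       operations, two register halves per operand. */
    uint32_t add32(uint32_t a, uint32_t b)
    {
        return a + b;
    }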

The 68000 family was an odd, if very wise, case: the instruction set defined 32-bit general purpose registers and operations, but the first implementations (68000, 68008, 68010) internally used 16-bit units and data paths, which meant that the 32-bit operations took longer than the 16-bit ones. Later implementations (68020 onwards) internally used 32-bit units and data paths. The first members of the 68000 family are sometimes described as 16-bit, sometimes as 16/32-bit; the later members are 32-bit.

The width of the external bus stopped being a good metric for CPU width when manufacturers like Intel introduced chips like the 8086 and 8088: otherwise identical 16-bit CPUs which ran the exact same software, and whose only difference was that they had 16-bit and 8-bit external buses, respectively.

Or chips like the 80386DX and 80386SX, which were otherwise identical but had 32-bit and 16-bit external buses, respectively.

Or chips like the Pentium, which still used 32-bit instructions, operations and internal data paths but had a 64-bit external bus.

Doesn't the Spectre/Meltdown thing affecting AMD, ARM, and Intel raise some serious questions?

I have wondered for months and seen very little on it, but I cannot see how it affects all three unless there is espionage involved.

Two of those companies stole from the third, or one influential group helped design all three. If it is the former, all three companies should be shut down while investigations are completed. They are all national security threats at this point. If the latter, then we can be essentially certain that Spectre and Meltdown are the result of an NSA op.

...No?

If you're gonna have prefetching (and if you want a CPU that isn't glacially slow you'll want to have it) then...you have to load stuff into the cache before it's needed. If you load stuff into the cache before it's needed then you're gonna change the data that's in the cache. If you change the data that's in the cache then a malicious program can keep asking questions and waiting to see how long it takes to be told it's not allowed to know the answer and use that to tell what data is in the cache. There is no possible way to load data into the cache without loading data into the cache.*
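
A minimal sketch of the measurement half of that, assuming an x86 compiler with <x86intrin.h>; the 100-cycle threshold is an arbitrary illustrative number, real attacks calibrate it:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Time a single load and guess from the latency whether that cache
       line was already cached. This is the "ask and see how long the
       answer takes" step; it never reads the victim's data directly. */
    static int probe_is_cached(volatile const uint8_t *line)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*line;                    /* the access being timed */
        uint64_t t1 = __rdtscp(&aux);
        return (t1 - t0) < 100;         /* fast => cached, slow => not */
    }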

Notably, Intel's implementation was significantly more vulnerable than AMD's, so it's not as if it was identical across the board anyway.

But basically, Spectre and Meltdown rely on the fact that you need to be able to quickly access data that you need to access quickly. There's no conspiracy there, just a tautology.

It's kinda like wondering if espionage is the reason why cars on wet roads crash more often than those on dry ones. After all, why would every car manufacturer build the same "lower traction" flaw into every car unless they were all stealing from each other?! Well, because that's just how the physics works out, no conspiracy required.

*There are ways to block the exploits, but they involve manipulating how the cache works in ways that slow it down when applied globally, so instead they are only applied at certain points by programmers. These mitigations weren't in place by default because, until the exploits were found, they would have had a large performance impact for no apparent benefit.
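
As one concrete example of what "applied at certain points by programmers" means, here is a sketch of the widely described Spectre-v1 pattern, a speculation barrier after a bounds check; the function and parameter names are made up for illustration:

    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>   /* _mm_lfence() */

    uint8_t read_checked(const uint8_t *array, size_t len, size_t index)
    {
        if (index < len) {
            /* LFENCE keeps the load below from executing speculatively
               before the bounds check resolves; this is the stall you
               pay, which is why it is not sprinkled everywhere. */
            _mm_lfence();
            return array[index];
        }
        return 0;
    }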

But have any consumer exploits actually been documented 'in the wild'? They exist, to be sure, but have they been implemented at any level yet? Just wondering...

I would think you can get the compiler to schedule operations on idle ports, with no committed results, over sections of code where you want to obscure side-channel communication. You could use a library solution, I suppose, but a compiler solution with specific pragma directives to "flood the ports" might help developers. You can also insert uncommitted operations into ports that are in use but still chugging through their pipelines. I.e., for certain sections of code, make SMT unavailable, create a white noise of port usage, and disable any kind of turbo boost or dynamic frequency effects the hardware might deploy. The exploit code would not get much fetch-decode time on the SMT sibling that way, and when it did, the ports would be a uniform haze of full utilization. You might then know where the crypto is running, but you would need a tremendously long time to get the signal, if it is even possible. I think it is more realistic to harden specific code fragments than to abandon entire categories of parallelism.
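
There is no such pragma that I know of, but as a very rough sketch of the "white noise of port usage" idea (everything here, including the loop count and constants, is made up for illustration):

    #include <stdint.h>

    static volatile uint64_t noise_sink;

    /* Burn a burst of multiply/add/shift work so that execution-port
       utilization looks roughly the same whether or not the nearby
       secret-dependent code is running. */
    static inline void port_noise(void)
    {
        /* Seed from the volatile so the compiler cannot constant-fold
           the whole loop away at build time. */
        uint64_t a = noise_sink | 0x9e3779b97f4a7c15ULL;
        uint64_t b = 0xbf58476d1ce4e5b9ULL;
        for (int i = 0; i < 64; i++) {
            a = a * b + (uint64_t)i;   /* keep the multiply/add ports busy */
            b ^= a >> 17;              /* and the shift/logic ports */
        }
        noise_sink = a ^ b;            /* volatile store keeps the work "live" */
    }

Whether a helper like this actually produces uniform port pressure on a given out-of-order core is another question, which is presumably why the suggestion is a compiler-level switch rather than a hand-written library call.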

Disable SMT? This is stupid. Most of your apps have multiple threads that share an address space. A small scheduler change to limit cohabiting thread pairs to a common address space would sort this out.

All 8-bit processors had 16-bit addressing, hence the common 64 KB of memory. So you argue the external bus stopped being the key factor one generation earlier? Well, okay. It was never that strictly defined in the first place, which was my point.

All 8-bit processors had 16-bit addressing, but they had 8-bit general purpose registers and logic/arithmetic operations. Manipulation of 16-bit addresses required multiple operations.
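
To illustrate the "multiple operations" point in C (mirroring what the 8-bit CPU has to do with its 8-bit registers; the helper is hypothetical):

    #include <stdint.h>

    /* Increment a 16-bit address stored as two 8-bit halves, the way an
       8-bit CPU does it: bump the low byte, and touch the high byte only
       when the low byte wraps around (the carry). */
    static void inc_addr16(uint8_t *lo, uint8_t *hi)
    {
        *lo = (uint8_t)(*lo + 1);
        if (*lo == 0)               /* carry out of the low byte */
            *hi = (uint8_t)(*hi + 1);
    }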

I am arguing that if anyone characterized 8-bit CPUs by their having an 8-bit external bus, that did not carry over well to CPUs like the 8086 and 8088. And it just got worse from there on.

We are not calling modern CPUs 128-bit, even though they have a 128-bit memory bus.

But the 8086 was a 16-bit processor, and the 8088 was an 8086 where only 8 bits were connected to the memory, making it an 8-bit processor with 16-bit registers, just like the Z80 it was directly competing against.

Again, the whole "X-bit CPU" label didn't refer to the same thing all the time but to a generation, in a time when CPUs were single-step CISC and limited by memory bandwidth. But yeah, I got the original comment wrong: the 286 was 16-bit only. I confused it because OS/2 was written for the 286 and could run Win32 apps, but not Win32 on the 286.

Quote:

But the 8086 was a 16-bit processor, and the 8088 was an 8086 where only 8 bits were connected to the memory, making it an 8-bit processor with 16-bit registers, just like the Z80 it was directly competing against.

The 8088 was hardly comparable to the Z80. The Z80's instruction set had limited 16-bit capabilities: addition and subtraction only, IIRC. Other operations were only on 8-bit quantities. This meant that the 8088 could perform some 16-bit data manipulations in fewer instructions. Internally, the 8088 was based on a 16-bit ALU compared to the 4-bit ALU in the Z80, which meant that the instructions themselves took fewer cycles. (And the 8088 also had multiplication and division.)

In short: the 8088 could do many manipulations of 16-bit values in fewer instructions and far fewer clock cycles than the Z80.

The 8088 was internally identical to and software compatible with its 8086 brother. The 8086 was faster, but only to the extent that its 16-bit external bus allowed it to fetch instructions and load/store 16-bit data in half of the bus cycles (ergo, twice the bandwidth). To attempt to qualify the 8088 as 8-bit and the 8086 as 16-bit is a pointless distinction.

Might as well attempt to qualify some Athlon 64 chips as 64-bit processors and others as 128-bit processors depending on whether they used Socket 754 or Socket 939.

Quote:

Again, the whole "X-bit CPU" label didn't refer to the same thing all the time but to a generation, in a time when CPUs were single-step CISC and limited by memory bandwidth.

You can't assign it to generations either. Note that the 8088 was introduced ~1 year _after_ the 8086. The 68008 and the 80386SX were introduced ~3 years after the 68000 and 80386DX, respectively. These were variants of existing chips, aimed at lower system cost.