Linus Torvalds believes ARM has little chance of usurping x86, because x86 has built an open hardware ecosystem that ARM looks unlikely to replicate.
Torvalds voiced his opinions in a fireside chat with David Rusling, chief technology officer of ARM tools vendor Linaro, at the end of September.
Rusling …

Re: Almost bought a QL

I learned to program MC68K assembler on a Sinclair QL (and BCPL, then C). Then I went into my first job, where I programmed VAXes in Macro-32, and it turned out the VAX had a very similar instruction set to the MC68K - both are/were wonderful (for the time, circa 1988) modern 32-bit orthogonal instruction sets. Presumably Motorola based their 68K instruction set on the VAX, as they were that similar. Things would have been a lot harder for me if I'd learned the horrors of the 6502 or, god forbid, x86, so in that sense I don't regret buying the QL at all!

On the other hand, that keyboard, those Microdrives... eugh! The fully pre-emptive multitasking QDOS operating system with SuperBASIC (all in 48K of ROM, in 1984) was quite an impressive achievement though. I'd love to see a proper write-up on QDOS with input from the original author, Tony Tebby.

Re: Almost bought a QL

> Don't think so. There's lots of differences between 68K and VAX at a fundamental level, eg. on VAX most registers can be used for most purposes, not so easy on the 68K.

That's exactly what orthogonal means. You could use the 68K and VAX data or address registers as sources or destinations (in any combination) with pretty much any relevant instruction. Try doing that with the 6502 or x86.

Re: Almost bought a QL

Yes, the very basic windowed GUI was certainly lacking, but the underpinnings (IO channels, job control, pre-emptive multitasking) were all certainly far more impressive, particularly when it was done in 48K.

Re: Almost bought a QL

> Don't think so. There's lots of differences between 68K and VAX at a fundamental level, eg. on VAX most registers can be used for most purposes, not so easy on the 68K.

I'm talking about orthogonality at the assembler level, though - the 68K had many different opcodes implementing the various instructions (unlike the VAX), so yes, at a fundamental level the hardware implemented very different approaches. But as far as the assembler programmer was concerned, the 68K assembly language (ie. what I mean by the instruction set in this context) is very similar to that of the VAX.

Re: Almost bought a QL

"32 bit CPU... with an 8 bit data bus?!"

This was 1983! Producing any kind of affordable machine with a 32-bit CPU was a pretty impressive feat back then. Also, IIRC, Sinclair used the 8-bit bus not only because of price but because their engineers were familiar with the 8-bit support components, which were available in bulk. If you wanted a true 32-bit machine you would be looking at paying a five- or six-figure sum for a Unix workstation.

Re: Almost bought a QL

Was the M68k really 32 bit internally? I never used it in anger (I went 6502 - ARM) but my recollection is that it was essentially 16 bits with some instructions capable of operating on pairs of registers, somewhat like the Z80. Would you call the Z80 a 16 bit chip?

The reduced-width data bus was common back then. Every pin added cost, not only in traces on the motherboard but the CPU's packaging - DIL packages become very cumbersome when you try to make them with enough legs to support 32 bit addresses and 32 bit data as well as all the control signals, power etc.

In the early 1990s I worked with Intel's MCS-96 family, a "16 bit" family, at least one variant of which had an 8-bit memory bus multiplexed with one half of the 16 memory address lines. Retrieving a 16 bit value from memory involved four steps - latching the address, an 8-bit read, a second latch, a second read. All to save perhaps six or seven pins on the package - although you saved 8 data lines, you had to have an additional line (or two?) to signal whether it was address or data on the multiplexed lines. Oh, and you also had to fit an external data latch.
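
The four-step fetch described above can be sketched in C. This is a toy model, not real MCS-96 code: the `memory` array, `addr_latch`, `latch_address` and `bus_read` are all invented stand-ins for the external RAM, the latch IC and a bus read cycle, purely to illustrate the sequence.

```c
#include <stdint.h>

/* Toy model of a multiplexed address/data bus: the low address lines
 * share pins with the data lines, so every byte read is "latch the
 * address, then read".  A 16-bit fetch therefore takes four phases. */

uint8_t memory[256];     /* stand-in for external RAM */
uint8_t addr_latch;      /* stand-in for the external latch IC */

void latch_address(uint8_t addr) { addr_latch = addr; }  /* ALE phase */
uint8_t bus_read(void)           { return memory[addr_latch]; }

/* Four phases: latch/read for the low byte, latch/read for the high byte. */
uint16_t read16(uint8_t addr)
{
    latch_address(addr);           /* phase 1: put the low address on the bus */
    uint8_t lo = bus_read();       /* phase 2: first 8-bit read */
    latch_address(addr + 1);       /* phase 3: latch the next address */
    uint8_t hi = bus_read();       /* phase 4: second 8-bit read */
    return (uint16_t)(lo | (hi << 8));  /* assemble little-endian */
}
```

Every 16-bit access pays the latch overhead twice, which is exactly the pin-count-versus-speed trade-off described above.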

Bearing in mind that I was a student on placement, and had to self-teach pretty much everything on this project (did all the digital hardware as well as 90% of the software in assembler), I remember one "lightbulb" moment very well. I couldn't work out why my code wasn't saving values properly to EEPROM. It took a couple of days of poring over the code and probing signal lines before I realised that the EEPROM had a 1ms write cycle time. The RAM had no problem with 16-bit accesses, but the EEPROM couldn't keep up with the double-write required. Solved it by writing as discrete 8-bit writes with a few NOPs in between.
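
The fix can be sketched in C as a toy model (all names invented; a counter of "ticks" stands in for the real 1ms write cycle, and `tick()` plays the role of the NOPs):

```c
#include <stdint.h>

/* Toy model of the EEPROM problem: the part accepts a byte, then is
 * busy for its internal write cycle.  Writes during the busy window
 * are silently lost - exactly what broke the 16-bit stores. */

#define EEPROM_SIZE 64
uint8_t eeprom[EEPROM_SIZE];
int busy_ticks;                         /* cycles until the part is ready */

void tick(void) { if (busy_ticks > 0) busy_ticks--; }   /* one "NOP" */

void eeprom_write8(uint8_t addr, uint8_t val)
{
    if (busy_ticks == 0) {              /* write accepted only when ready */
        eeprom[addr] = val;
        busy_ticks = 3;                 /* stand-in for the 1 ms write cycle */
    }                                   /* otherwise the byte is lost */
}

/* The fix: split the 16-bit store into two 8-bit writes, padding with
 * NOPs until the part has finished its write cycle. */
void eeprom_write16(uint8_t addr, uint16_t val)
{
    eeprom_write8(addr, val & 0xFF);
    while (busy_ticks > 0) tick();      /* the NOP padding */
    eeprom_write8(addr + 1, val >> 8);
    while (busy_ticks > 0) tick();
}
```

Without the padding, the second byte of a back-to-back 16-bit store lands inside the busy window and vanishes - the bug described above.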

I really pity the poor person who had to take that code on when my placement year was over...

Re: Almost bought a QL

Your recollection is both right and wrong. The 68000 was 32-bit as far as the programmer was concerned - so no combining 16-bit registers or other such stupidity - but the implementation was a bit less so, meaning that 32-bit instructions might have been slower to execute than 16-bit ones (even internally, not just because of the narrow external data bus). But the important thing was the programming model. The 68020 was the first "full" 32-bit implementation, but the nice thing was that all your 32-bit code from the 68000 would be ready to run and take full advantage of it.

Re: Almost bought a QL

"Yes, the very basic windowed GUI was certainly lacking, but the underpinnings (IO channels, job control, pre-emptive multitasking) were all certainly far more impressive, particularly when it was done in 48K."

"over the two decades MS and Intel kept us in 64k blocks"

Two decades? The 80386 was released in 1985, just four years after the original PC. Software took longer to become fully 32-bit, but remember many PCs back then had only a few MB of RAM. Many register-usage restrictions were lifted after the 8086.

I was reading an "Old New Thing" blog post a few days ago, which recalled that in *1995* most PCs had only 4MB of RAM. Nevertheless, many DOS extenders allowed for 32-bit programs even before Windows switched to 32-bit executables.


Re: Almost bought a QL

'Your recollection is both right and wrong....The 68020 was the first "full" 32-bit implementation but the nice thing was all your 32-bit code from the 68000 would be ready to run and take full advantage of it.'

Yes, the 68k family became very popular for Unix boxes by being 32-bit but it was probably 68020s and later that were used.

Re: Almost bought a QL

I doubt it, what with all the bits added by the ODMs that work alongside ARM but weren't really part of the spec. Add to that the fact that these ODMs, not least among them Samsung, have had no interest in actually releasing any info or source code for such SoCs, and those unlucky enough to run an Exynos device have about as much chance as anyone running some MediaTek part, i.e. none at all!

Linux has facilitated the situation he is lamenting

Well, one of the reasons for the ARM hardware wild west is exactly that you can adapt a Linux tree to handle a particular hardware flavour and live with it from there onwards. The manufacturers do not care; they keep their 3.4 (on a good day) tree and stay with it. Examples: Samsung until recently on the Exynos, LeMaker on the Banana Pi series, etc. Windows did not allow any such liberties with the PC, which is why it is so uniform.

This is also why Arm has practically won everywhere where the hardware specialization is the key - such as mobile. This is why Windows got pushed out of the way there in favor of Linux too.

I agree about the Razzie being a toy - I have started moving all the stuff I had built on Razzies to Bananas. 10+ times more reliable, significantly lower latency and all of the bundled hardware just works (tm).

Re: Linux has facilitated the situation he is lamenting

Yep, open source types shouldn't be surprised if not everyone keeps up!

To me it sounds a little bit like he's thanking Microsoft, albeit indirectly. Having to be compatible with DOS and then Windows was what drove the PC clone ecosystem to standardise. MS even weighed in with the PC System Design Guides (PC'97, PC'98, etc). That is also what made it practicable for Linux to thrive too - it was easier to get Linux to the point where you didn't have to compile it to use it.

In my opinion MS missed an opportunity about 9 years ago to do the same with ARMs. As an experiment they showed Windows 7 and Office running on an ARM board, printing to an Epson printer. But instead of defining an ARM based PC or server architecture, they went off and did Windows RT, tablets, etc. We all know how well that went.

They kinda did it with mobiles, defining a hardware spec that would give binary compatibility with Windows mobile. Trouble was it wasn't open; not many bothered to follow it. Now had it been open, that hardware spec would have been ideal for all sorts of interesting things. Just as you can run Linux, Solaris, Windows, FreeBSD, etc on a PC, an open mobile spec would allow the same diversity to exist on handsets.

Instead we have proprietary mobile hardware that no-one can keep Android up to date on, punters are continually exposed to security risks, and manufacturers can gouge the market simply by not supporting their current product line.


Re: Linux has facilitated the situation he is lamenting

I'm really not sure why Microsoft sometimes only go half the way.

Windows RT did work really well, and the experience of Office on a Surface 2 is pretty impressive for an ARM-based 2 GB RAM tablet.

If Microsoft hadn't locked RT down and had enabled devs to recompile their apps easily, or had offered some kind of x86 emulation or on-the-fly translation (like the Windows Ubuntu subsystem), maybe Windows on ARM would be a reality.

The same happened with Windows NT, which used to be truly multi-platform. After having invested so much in hardware abstraction, why throw it away so fast? I understand that the user base was much smaller than x86, but sometimes you just need to give it a little time. And Microsoft has/had the financial means to be patient.

Re: Linux has facilitated the situation he is lamenting

1) The 80/20 rule: They got RT to 80% of the ecosystem that they wanted, but that last 20% was going to take "another" 80% effort. Basically, they didn't think the juice was worth the squeeze.

2) The RT system with .NET was how .NET was originally intended: .NET IL code interpreted and running on both x86 and ARM devices. It worked well, except there was no COM. To implement that last 20% (COM) would have required massive effort (another 80%).

3) Having apps run without being retargeted for different CPUs conflicted with MS desire of a walled garden. They wouldn't be able to have a controlled App store under the "RT" model.

4) It's Microsoft after all. We need to only look at Zune, (insert favorite half efforted technology here) for why..

Re: "It's about time governments got involved and forced the market open."

@Pascal Monett,

But . . . but . . the market auto-corrects itself !

Doesn't it ?

There are a lot of people who say as much without stopping to wonder why the SEC and other regulatory bodies exist. Where there's a dysfunctional market you need a government regulator to clean it up.

A lot of the problems in the US were caused by things like sub-prime mortgages, a good example of how inattentive oversight by regulators allowed awful practices to flourish to the point of bringing down the whole economy.

There's no really meaningful competition left in the online world. Google, and that's about it. Why they've not been broken up Bell style is simply because the politicians have no idea that there's a monopoly.

Re: "It's about time governments got involved and forced the market open."

"There's no really meaningful competition left in the online world. Google, and that's about it. Why they've not been broken up Bell style is simply because the politicians have no idea that there's a monopoly."

When it comes to science, engineering and tech, politicians have no idea. Full stop.

"But instead of defining an ARM based PC"

The reason is that the walled garden which worked so well for iOS made MS greedy and hungry. They exactly didn't want to replicate the PC business, where first IBM, and later almost MS too, became irrelevant. Really, MS doesn't care about hardware you can run Linux, Solaris, FreeBSD, etc. on. And why should it? No revenues from it. MS is a business, not a charity. Does Google help to build alternative search engines?

The PC was an exception, not the rule. IBM didn't really believe in it, and so didn't protect it. It was a lucky accident. If it could go back in time, IBM would surely fix such a mistake.

Yet, sorry, governments - barring authoritarian ones - can't intervene and "force the market open". It would be a violation of basic democratic rights.

Re: Linux has facilitated the situation he is lamenting

The Windows-Ubuntu subsystem is much "easier" to implement - it translates Ubuntu kernel calls into Windows ones, but everything is still x86 (and, AFAIK, command-line only).

Emulating a CPU is a heavy task (look at IA-64 when it had to run x86 code...), and Windows RT is not a full ARM implementation of the "NT" kernel and "Win32" subsystem, so neither does simple recompilation work, nor is emulation an easy task.

Windows NT was truly multi-platform because it had everything - HAL, kernel and user-space subsystems - running natively on each supported CPU. The HAL never abstracted the CPU code - it just abstracted the hardware architecture interaction (i.e. hardware I/O, physical memory management, etc.). Note that the HAL also allowed systems which used an x86 processor but didn't follow the IBM PC architecture to run Windows.

It could have been done, of course - but it depends on the ROI. Back then it was clear that Alpha and MIPS CPUs would not have brought in enough money.

Re: Linux has facilitated the situation he is lamenting

> Windows did not allow any such liberties with the PC that is why it is so uniform.

Ah, those who don't remember history.

I can just about remember the early days of "the PC", though I was only involved as a user back then. You could buy loads of "PC"s from other vendors that came with PC-DOS from MS and would run "PC" software. But they really didn't have the uniformity of hardware that people think - there was a lot of variation, and MS would provide each manufacturer with a PC-DOS tweaked to suit.

As I recall (rather vaguely, through the mists of time), it was some game reviewer in a magazine who coined the "PC Compatible" label that people take for granted these days. She worked on the basis that if you could take ${random_game} off the shelf, unwrap it, and boot the PC with that disk and be able to play the game - only then was it "PC Compatible".

And so all the manufacturers fairly quickly learned that they had to mimic the IBM PC closely (eg putting the serial ports at the same I/O addresses etc.) or they'd be labelled "not compatible" and would lose sales. Thus the "PC Compatible" standard "happened"!

I deliberately say "happened" because it wasn't really designed, it sort of came into being in a very accidental way.

AIUI, the original IBM PC wasn't actually an IBM project. Some small electronics company took a National Semiconductor data sheet/design notes for their 8080 family of processors, and with very little of their own design, made an implementation of a suggested system design. IBM were geared up to "big stuff" (where productivity is measured in how many lines of code you make, not how small you make it !) and as they could see the likes of Commodore and Apple eating their lunch in the small office - bought the company and stuck an IBM badge on it.

Thus the original PC was born, and more or less by accident, the design "choices" made by Nat Semi and some never heard of hardware house became the de-facto industry standard.


But Linus is right about ARM. It's not the processor, it's the way every system manufacturer does their own thing - in the same way that the original desktop PC makers did. The difference is that there is no process these days that would pressure any of them into following any standard - eg each phone is made to run a specific OS provided by the device manufacturers, and thrown away before it needs too many upgrades. So the manufacturers really don't give a sh*t how hard it is for anyone else to put a different OS on it - long gone are the days when the games came with their own OS to boot the system into.

In the server world that may change eventually, but not for anything else.

And the modern user (in general) doesn't give a sh*t either as long as they can get their cat videos on FarceBork.

"Ah, those who don't remember history."

It wasn't so simple... while some established brands avoided cloning the IBM PC exactly and made their own x86 computers with their own versions of MS-DOS - probably to avoid litigation with IBM and in the hope of some form of lock-in - others, mostly "startups" in modern terms, cloned the PC completely very early, reverse-engineering and/or re-implementing the BIOS as well. No need then to ask for specific versions of MS-DOS and its applications, something very important for small, young companies.

Software began very early to bypass MS-DOS and even the BIOS for performance reasons (especially for video, MS-DOS being really too slow, cumbersome and feature-limited) - and not only games. While MS Flight Simulator was often used as a compatibility test due to its complexity, what mattered most was compatibility with software like Lotus 1-2-3, WordStar or SideKick. For most of these companies, still in their infancy, coding, testing and supporting different systems was a real burden.

And probably also the diffusion of pirated software boosted the market for PC clones....

Re: Linux has facilitated the situation he is lamenting

"80/20 rule" (etc.)

well, SOME of what you're saying is correct, but I think you got it backwards for other stuff...

The biggest problem with Windows has been TOO MUCH on the "20%". Except NOW they're removing that 20% and saying "do it OUR way" and taking away choices, legacy hardware support, etc.. (Win-10-nic and 2D FLATSO being the 2 worst outcomes from this).

On a related note, Linux has focused more on the 80% until recently, with a huge push to support every possible bit of hardware that exists, and NOT remove legacy hardware support while doing it.

The RT system with ".Not" - just "ew". It was a SNAFU out of the concept box. It should have NEVER been done, like disco "music" and Obaka-care. Developers took ONE look and went "W.T.F.?" and didn't play in Micro-shaft's sandbox under Micro-shaft's ridiculous rules. And, with lack of "Developers, developers, developers, developers" even the CUSTOMERS said "W.T.F." and now it's *HISTORY*.

The ".Not" initiative was the _WORST_ thing (next to Win-10-nic and Windows "Ape") that Micro-shaft EVER rectally extrapolated out of the bowels of HELL. Ballmer did it the moment he took the reins from Bill. But I'll reserve THAT discussion for another forum...

However, your #3 comment is RIGHT ON. "Having apps run without being retargeted for different CPUs conflicted with MS desire of a walled garden. They wouldn't be able to have a controlled App store under the "RT" model." THAT it DOES! Though, of course, using the ".Not" run-time might as well cripple your 'app' into a 'CRapp', so 'go figure' on THAT one.

Developer note: If I MUST need a VM to run my application, I'll write it in JAVA. THEN, it should run EVERYWHERE, literally, and NOT just on stupid ".Not" capable winders boxen. The fact that I don't really like Java is the only reason I haven't done this... [ok I've been forced into it for Android development, but still...]

Re: "probably to avoid litigation with IBM"

Um, if I remember correctly, there was no chance of litigation since IBM did not bother to protect anything via copyright. It was a truly open market.

That's why IBM attempted a market takeover with the PS/2 when it realized how the market was shifting - except that it didn't work for various reasons, but mainly because there was no point, technically speaking.

Avoiding litigation with IBM concerning the PS/2 was definitely a concern, though, which is why the PS/2 is dead and the PC lives on.

The only safe PC is a SPARC

Both Intel and [most of the] ARM [community] are guilty of bundling opaque processor controls, and the i386/ARM architectures cannot be trusted as the opaque components have unrestricted access to networking, memory, and i/o.

Re: Linux has facilitated the situation he is lamenting

> I can just about remember the early days of "the PC",

Unfortunately you seem to have misremembered most of it.

> bought the company and stuck an IBM badge on it.

No. The IBM 5150 PC was an internal project based on their previous System/23 Datamaster and somewhat on their Displaywriter. While the System/23 was Z80-based, Intel persuaded them to use the new 8088, which had an 8-bit data bus and would not need much rejigging of the planar (motherboard), though it was redesigned somewhat for the 'Model B'.

> You could buy loads of "PC"s from other vendors, that came with PC-Dos from MS, and would run "PC" software. But they really didn't have the uniformity of hardware that people think - there was a lot of variation and MS would provide each manufacturer with a PC-Dos tweaked to suit.

'PC-DOS' was strictly for IBM and only available from IBM (though it could later be used on clones). Many other 8088/8086 (or compatible such as V20/V30) machines could run MS-DOS and they did not need to be anything like an IBM-PC. They could be S100 based, or Wang or DEC Rainbows. But, no, MS would not provide a 'tweaked' version. Exactly like DRI's CP/M and CP/M-86, MS-DOS (which was actually written by SCP) was structured as a BDOS, a CCP (Command.COM), a BIOS and utilities. The BDOS and CCP were invariant; the manufacturer needed to write a BIOS to suit their hardware. It happened that the IBM-PC had a ROM BIOS which only required a small stub software BIOS to translate the BDOS calls to the ROM BIOS**.

> She worked on the basis that if you could take ${random_game} off the shelf, unwrap it, and boot the PC with that disk and be able to play the game - only then was it "PC Compatible".

Much software in the early days could run on any hardware that was running MS-DOS. Some came in 'PC-DOS' or 'MS-DOS' versions, where the former required an IBM-PC or clone and the latter had a configuration program that could choose the appropriate way of using the screen or terminal. For example, WordStar and Borland Pascal 3 came in several versions (also for CP/M and CP/M-86). Because MS-DOS terminal handling was so poor, and so slow with ANSI.SYS, many software writers included the option of using BIOS terminal handling. This was also poor and, on the IBM PC, they started doing direct screen writes to the CGA or MDA, or Hercules. _This_ is what changed users into needing clones.
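
For anyone who never saw it: "ANSI terminal handling" meant emitting escape sequences and letting ANSI.SYS (or a real terminal) interpret them, instead of poking video memory. A minimal C sketch - the two escape sequences shown are standard ANSI/VT100, but the helper functions are my own invention:

```c
#include <stdio.h>
#include <string.h>

/* ESC [ row ; col H  -- position the cursor (1-based coordinates).
 * Writes the sequence into buf; returns the number of characters. */
int ansi_goto(char *buf, size_t n, int row, int col)
{
    return snprintf(buf, n, "\x1b[%d;%dH", row, col);
}

/* ESC [ 2 J  -- erase the whole screen. */
const char *ansi_clear(void)
{
    return "\x1b[2J";
}
```

Sending these byte sequences through the DOS output routines worked on any machine with ANSI.SYS loaded, which is exactly why it was slow - every byte went through the DOS and driver layers rather than straight to the screen.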

> IBM were geared up to "big stuff" (where productivity is measured in how many lines of code you make, not how small you make it !)

Many mainframes of the time had quite small memories. While LOC was one metric used to measure programmers productivity that does not imply that the programs were huge, nor that RAM was freely available.

> and as they could see the likes of Commodore and Apple eating their lunch in the small office - bought the company and stuck an IBM badge on it.

The small office was not the primary aim of the IBM 5150 PC. Apple IIs and Commodore Pets were appearing in the IBM sites running Visicalc, Wordstar and dBaseII. IBM wanted a machine that would keep these sites 'pure'. The IBM PC was designed (by IBM - NOT a bought-in company) to be just a bit better than the Apple II and to run the same software. It was also intended to be a terminal (which is why its serial ports were DTE when most other micro computers were DCE*). There were also versions of the IBM-PC that were 3740 terminals and 360 Emulators (with additional 680x0 boards).

IBM were also already in the small business market with the 5100, 5120, 5130 and small System 3s.

> As so, fairly quickly, all the manufacturers quickly learned that they had to mimic the IBM PC fairly closely (eg putting the serial ports at the same I/O addresses etc) or they'd be labelled as "not compatible" and would lose sales. Thus the "PC Compatible" standard "happened" !

The 'PC Compatible' label was not just a few port addresses; it relied on having a compatible ROM BIOS and an internal address mapping of hardware, such as the video adaptor. Manufacturers could have licensed this from IBM (some 'stole' a copy), but a deliberate clean-room implementation by Phoenix provided a cheaper way of making clones.

> I deliberately say "happened" because it wasn't really designed, it sort of came into being in a very accidental way.

Re: "probably to avoid litigation with IBM"

> Um, if I remember correctly, there was no chance of litigation since IBM did not bother to protect anything via copyright. It was a truly open market.

Your recollection is incorrect. While the ROM BIOS source was available, it was fully protected by copyright (as is all FOSS software). It wasn't until Phoenix did a clean-room reimplementation that there was a cheap enough way to implement clones. IBM did protect its copyrights aggressively.

Re: The only safe PC is a SPARC

@Chasil,

It appears that the best "open" CPU architecture is the decade-old SPARC T2 - the full Verilog source for the CPU is provided, and there is no "management engine."

Not so. The OpenPOWER bunch have done some interesting things, and you can buy an ATX motherboard with a POWER CPU that is completely open. That is, the CPU design, board schematic, BIOS source code and much else besides is freely available. They use (I think) the words 'blobless computing', referring to the fact that the source code for every bit of software and firmware is available.

The best bit is that it offers competitive performance, and is a lowish price too.

Re: "It's about time governments got involved and forced the market open."

The free market is a powerful force, but must be kept in check over the long haul with regulation. Unregulated, market forces inevitably lead to negative long-term consequences - the natural drive toward cost savings and profit goes unnoticed until things are in a bad state. I'm thinking of the mortgage crisis here. Industries that generate pollution dump it because it's cheaper, so regulation and laws are required; likewise FCC requirements for RF interference. We need similar regulation of security and updatability for anything put on the internet. It will cost us more, however. If things got bad enough, capitalism would eventually get labelled "evil", the root of all our problems, and some commie dictator would ride in, like Cuba or Venezuela, and then you get to see what real bad is. I'll take capitalism and a good dose of regulation, thanks.

Re: Linux has facilitated the situation he is lamenting

One of the differences between clones and PCs was that the PCs had BASIC in ROM. (Int 18h) Clones relied on GwBasic.exe

And nobody, but nobody, used the DOS or BIOS video routines for anything worthwhile. Maybe you used the BIOS for mode switch or cursor control. After that it was direct memory IO, even for text mode. But I never saw an MDA adaptor.
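
The "direct memory IO, even for text mode" approach can be sketched in C. Here a plain array stands in for the real video memory (segment B800 for CGA colour text, B000 for MDA) so the logic can actually run; `put_char_at` is an invented helper, not a DOS or BIOS call:

```c
#include <stdint.h>

/* 80x25 text mode: each cell is a character byte followed by an
 * attribute byte.  On a real PC this buffer lived at 0xB8000 (CGA)
 * or 0xB0000 (MDA); an array stands in for it here. */

#define COLS 80
#define ROWS 25
uint8_t video[ROWS * COLS * 2];   /* stand-in for the video buffer */

void put_char_at(int row, int col, char ch, uint8_t attr)
{
    int off = (row * COLS + col) * 2;
    video[off]     = (uint8_t)ch;  /* character byte */
    video[off + 1] = attr;         /* attribute byte, e.g. 0x07 = grey on black */
}
```

Two stores per character, no interrupt overhead - which is why this was so much faster than Int 10h or DOS output, and why software that did it only ran on true clones.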

Re: Linux has facilitated the situation he is lamenting

> One of the differences between clones and PCs was that the PCs had BASIC in ROM. (Int 18h) Clones relied on GwBasic.exe

The IBM 5150 PC did have IBM Cassette BASIC in ROM. This could access the cassette tape port (fitted to the original 5150 PC - I have one here) and the machine could boot up into BASIC without needing a diskette or even a drive. This was similar to many machines of the time, such as Apple II or Commodore Pet. To use BASIC from MS-DOS there was a BASICA.COM program that used some of the ROM and provided disc access routines.

Clones didn't bother with this as they didn't have cassette ports (and always had diskette or disk drives), which was the only point of having ROM Cassette BASIC. GWBASIC was a far better version; ROM BASIC was very poor - eg it only allowed variable names of 1 letter plus 1 digit.

> And nobody, but nobody, used the DOS or BIOS video routines for anything worthwhile.

I ran quite a bit of MS-DOS software on non-IBM Clones. Much of this had configuration software that required the terminal type to be set (eg ANSI or Wyse-60) as well as the printer and other items.

For example: WordStar 3.3, Borland Pascal 3, SuperCalc 2. Note that many of these had versions for PC-DOS that used direct screen writes as well as versions for MS-DOS that could be configured to use various terminal controls when outputting via the DOS routines (they also had CP/M and CP/M-86 versions which could likewise output to various terminals). Later editions of these provided only PC-clone versions.

Note that even Microsoft Software, such as MultiPlan and earlier versions of Word and all their languages could run on non-clone machines - using DOS and ANSI.SYS (or equivalent).

> But I never saw an MDA adaptor.

My IBM 5150 PC Model B has an MDA and an IBM Mono monitor. I got it from a business throwing out old machines in the mid-late 80s.

Re: "It's about time governments got involved and forced the market open."

" If things got bad enough, eventually capitalism gets labelled "evil", the root of all our problems [...]. I'll take capitalism and a good dose of regulation, thanks."

That's excellent. So when are the UK and the US and others going to try this capitalism thing, rather than the corporate kleptocracy they currently have? (As evidenced by the taxpayer-funded bailout of the banksters (UK+US), the bailout of the auto industry (US), and various others.)

And where are the regulatory authorities that haven't been got at by 'regulatory capture', and where are the politicians (even just a few of them) whose strings aren't pulled by corporate lobbyists and special interest groups?

Is Linus's vision really this narrow?

It's not the x86 architecture (or even the AMD64 architecture that replaced Intel's failed attempt at an "industry standard 64-bit") that means there's a lot of x86 about, it's the Wintel monopoly and the resulting vendor-defined architectural specs, going all the way back to (e.g.) the Lotus-Intel-Microsoft specification for "extended memory" (or was it "expanded memory"?) in the world above 640kB. Which led in due course to things like the PC98 spec and so on.

Outside the world of the Wintel-centric IT department and its consumer equivalent (and outside the world of Apple too) there's a lot of ARM and not a lot of x86. But the ARMs are often invisible to Joe Public, and apparently invisible to Linus too, if the reporting here is to be believed.

Imagine a world without ARM. It'd be a bit different. Nokia-style mobile phones, except with a battery life of two hours. Portable 2.5" disk drives with a capacity of hundreds of MB, not hundreds of GB, because they haven't the space or power for an x86 version of the ARM-centric embedded processors disk drives have all been using for many years - how many of them are using x86?

Re: Is Linus's vision really this narrow?

Possibly - it's a fucking hugely wide field and he works in one apex of it, and I doubt he spends his evenings playing with Raspberry Pi Zeros at £4 a pop going 'fucking ada, I can do that on it!' as I do.

I've just spent a couple of weeks pissing about with a Pi-Top and, barring the slightly shit keyboard and the Pi-Top OS wanting to be Pi-Top and not Raspbian, I could type and code and play with shit quite happily for 7 or 8 hours of battery life at a go - and it will only cost me around £30 to upgrade it when the Pi 4 comes out.

My children's (10-15) friends are not in the least interested in PCs - everything they own is ARM. Among my friends 40 years older I have seen one Surface, a couple of laptops (one of mine included) and some PCs that are Intel - all the rest are ARM devices.

The Pi is open architecture, I believe, and if someone can take that and find a matching SoC that does 4GB of RAM then Intel are fucked.

Re: Is Linus's vision really this narrow?

Maybe he is too close to the coalface? Not seeing Raspberry PI in actual use by today's generation. Looks like someone from Intel has tried to pull him onside.

I learnt bare metal on the 6502/ARM on an Acorn Archimedes. I learnt ARM chip fundamentals transitioning from the 6502. I was certain ARM would win out over Intel in mobile early on, and in IoT, before it was taken over by SoftBank. Now I'm less certain, but not by much (yet). SoftBank is still an unknown to me.

Power and efficiency are key, and there (at a fundamental level) ARM wins hands down over Intel. Intel can never catch up, even if ARM and Intel parts are fabbed at the same node, because you aren't talking about one ARM processor in a device. With self-driving vehicles, it could be 400.