If you want facts and details, they're in Motorola's own M68000 Programmer's Reference Manual and M68000 User Manual, currently hosted at nxp.com. Whatever you find elsewhere on the web is either already covered in the official documents, or is just opinions and anecdotes.

Hello again litwr!
Your distaste for the 68k and strange love for x86 never fail to amaze me!

Want me to find the wrong data in your articles?

Let's start with the 68k article (there are wrong things in the x86 one as well).

First, code density isn't worse on the 68k than it is on x86. While x86 is marginally better for very small code, its code density collapses for larger programs. You might well find a 40-byte routine that's smaller on x86, but (as an example) HOMM2 is a 1.5 MB program on x86 and 0.9 MB on 68k (even though the 68k code is still largely suboptimal).

Having two stacks is nothing like "clumsiness and contrived oddities". But this has already been explained in the past, apparently with no effect. I may try again if you ask.

I agree that having two carry flags is a little poor; Motorola wanted to separate testing from multi-precision arithmetic. Not much to worry about, as it has no dramatic consequences for coding. And x86 also has two carries anyway (you know, that little useless AF flag).

The CLR problem is an implementation bug and has nothing to do with the ISA's quality. On the other hand, x86 lacks the very useful CLR, TST, and MOVEM.

Arithmetic and logical shifts are indeed a little redundant, but nothing like the x86's way of handling it (two strictly identical instructions). ASL might be slightly slower than LSL but has the full functionality.

The opcodes aren't cumbersome and clumsy in comparison to x86 and arm. Actually, quite the opposite. I challenge you to give me a complete opcode map for those two!

The 68010 doesn't lose in comparison to the 80286 because the 68000 already doesn't - yes, a bare 68000 is still faster than an 80286!

Bitfield operations may look bulky, but they're very far from useless. While they take some time to master, once that's done they're a pleasure to use.

Yes, the 68020 does not have built-in memory management, but a 68851 PMMU can be added to fix this - using that very ability to connect coprocessors that you seem to dislike.

Of course the 68040 is still faster than the 80486 - but it indeed runs too hot and has shortcomings, so it started to kill the 68k family.

With all that said, comparing processors is a lot easier if you write code (a routine doing the same thing) for each of them. Something around 100-200 instructions, without OS dependencies that may falsify the result.

It would be fun to see code (and its actual encoding) compared for 68k, x86, and arm (and whatever else). But who will write the x86/arm code?

Overall the tone says it clearly: on the 68k page it was a "bad" cpu and here it's a "good" cpu. *Cough*. But let's go into the details (there is less to say here).

The 8086 isn't "one of the best processors made in the 70's". Actually, it's one of the worst. It would have disappeared long ago if IBM hadn't used it in the PC (for economic reasons and nothing else).

While the architecture indeed doesn't adhere to abstract theories, it has nothing to do with balance or steadiness either (nor even with further development).
It has shortcuts for relatively rare operations, some common things are just missing, and so on. Yuck.

And, oh, yes. The segmentation stuff is far from "brilliant". Actually, it's one of the worst things ever invented.
While it may look good on paper, it's the typical thing that only works in theory - in practice, segments were too short, which led to the near/far pointer horror.

I can just laugh when reading:
"It's hard to say that in the 8086 command system, something is clearly missing. Quite the contrary."
Well, the most problematic omissions have already been mentioned above. But it's true that the main issues are with the addressing modes and the restricted use of registers.

CMPXCHG and CMPXCHG8B aren't specific to x86; the 68020 has CAS/CAS2, which do the same thing. XADD, of course, is the perfect example of a totally useless instruction.

Talking about "ease of programming" in the case of x86 makes it look like the author never attempted to write any code.

It's true that many instructions have several opcodes. You may also mention the "de facto standard" TEST instruction duplicated at F6-F7 /1.

If you still think x86 is a good cpu overall, i suggest you try to write a disassembler in asm for it (I did that for the 68k long ago). There you will see all the quirks more clearly, the first being that it's impossible to know about all the extensions that have been added in "modern" iterations. Say, for example, what's encoded at 0F 38 xx exactly? I can't tell.

Yeah, i've maybe been hard with you litwr, but, hey, you nearly asked for it

Quote:

Originally Posted by meynaf

Having two stacks is nothing like "clumsiness and contrived oddities".

Indeed. Have fun implementing a multitasking OS with only one stack, for example.

Quote:

Originally Posted by meynaf

On the other hand, x86 lacks the very useful CLR, TST, and MOVEM.

Can't you do tst rx on x86 with just mov rx,rx or something like that?

Quote:

Originally Posted by meynaf

Arithmetic and logical shifts are indeed a little redundant

Only arithmetic left shift is redundant, though.

Quote:

Originally Posted by meynaf

And, oh, yes. The segmentation stuff is far from "brilliant". Actually, it's one of the worst things ever invented.

Couldn't agree more. Having a 16 MB address space on the 68000 is fantastic. Sure, in 1979 that was a bit much, but Motorola was clearly thinking of the future here. Same with the brilliant idea of making the instruction set 32-bit.

Quote:

Originally Posted by meynaf

Talking about "ease of programming" in the case of x86 makes it look like the author never attempted to write any code.

One thing i've found about people defending whatever cpu family against the 68k is that when asked to show significantly sized code for comparison (i.e. to prove their claims), they just won't do it.

- 3 8-bit registers
- 1 megahertz
- no multiplication/division (except by powers of 2 through shifting)
- additions and subtractions always take the Carry flag into account, so that the carry propagates when a result exceeds 255. You must clear it before starting an addition and set it before starting a subtraction...
- indirection only through zero page indexed addressing (thanks to 8-bit) + one register
- some instructions just froze the machine (e.g. opcode $02); after that, just turn it off and on again

I'll stick to 68000 family asm-wise!!

When Motorola started with the overhyped PPC, things started to go downhill, and it became impossible to code properly in asm. You'd have to trust the compilers (which were crap at the time; they've since improved).

I wonder (but maybe I'm wrong) if it wouldn't have been wiser to remove the "long" 10-byte instructions and create a RISC processor from a reduced 680x0 instruction set. After all, who does "MOVE.L #$12445,$4434" nowadays? It's always possible to load a source or target register and perform the operation. At least the existing instructions would have been preserved. Intel has been doing this for decades...

Quote:

Originally Posted by jotd

- some instructions just froze the machine (ex: 2), after that just turn it off and on again

IIRC an NMI (that hidden button behind the Oric) could get rid of this too.
Anyway, put a 65C02 in your Oric and the problem is gone.

Quote:

Originally Posted by jotd

I wonder (but maybe I'm wrong) if it wouldn't have been wiser to remove the "long" 10-byte instructions and create a RISC processor from a reduced 680x0 instruction set. After all, who does "MOVE.L #$12445,$4434" nowadays? It's always possible to load a source or target register and perform the operation. At least the existing instructions would have been preserved. Intel has been doing this for decades...

It exists and is called the ColdFire. Something tells me it was quite a failure...

Anyway, i kinda like the idea of doing move.l #my_cop_list,$dff080. Furthermore, in very large programs it's quite common to do move.l reloc1,reloc2 (linear access to variables).

Quote:

Originally Posted by Thorham

Not really, because asr shifts in the sign bit and lsr doesn't.

Weren't we speaking about left shifts, and aren't these right shifts instead?
Anyway, shifting in the rightmost bit for the extraneous left shift would have been possible too, and, even though not exactly useful, it would have looked more balanced.

Hi, friends! Sorry for the slightly late reply - I have been busy this week.
Sorry that some of my points look a bit suboptimal to some of you. However, they are all based on my experience and on common, unbiased facts. It is a well-known fact that x86 family processors still have the best code density (x86-64 is not so good at this). I don't know why the HOMM2 code for x86 would be larger than the code for the 68000. I can only speculate that the x86 code was written to support several graphics and sound cards. It is also quite possible that the x86 version has more maps, etc. I have only played it on an x86 PC. IMHO HOMM3 is much better and was worth a release in 2015! We can play it via Steam on a modern x86 OS.
A lot of x86 instructions have an odd length in bytes, while the 68000 can only use even lengths for all of its instructions - this fact hurts its code density very much.
"The CLR problem is implementation bug and has nothing to do with the ISA's quality." Users of 68000 were forced to use CPU with this bug and they didn't have an alternative to use a corrected CPU.
"Address registers don't need to load 4 bytes (not always)." I don't understand this. If we want to set up an address register we have to load 4 bytes into it - and there is no alternative.
"bare 68000 is still faster than 80286" It sounds as a complete oddity for me. Try to find out benchmarks of the 80s, for example, https://archive.org/stream/byte-maga.../n247/mode/2up - it shows that PC XT with 8088 @4.77 MHGz is faster than Mac with 68000, and even 6502 @2MHz could beat 68000 at effective 6 MHz. 80286 is 3-4 times faster than 8088.
"Actually, one of the worse." My story is an emotional one so I have just expressed my feelings. I like 68000 too but I like truth and facts much more. Indeed 68000 is in some way more aesthetic.
XADD can be very useful as an atomic operation - https://en.wikipedia.org/wiki/Fetch-and-add
You can use OR R,R instead of TST R.
Anyway, thanks for your words in this discussion, which is very interesting to me.
EDIT. "There are relative modes, especially 16-bit PC-relative mode which x86 lacks." Those modes can help make object file code more compact and fast, but they don't mean much for the executing code. x86 has relative jumps...

@meynaf: the reset button could get rid of some lockups (where the cursor was still blinking), but not the ones where the CPU was latched, like instruction $02. I didn't think of changing my 6502 to a 65C02 either.

Try looking up benchmarks from the 80s, for example https://archive.org/stream/byte-maga.../n247/mode/2up - it shows that a PC XT with an 8088 @4.77 MHz is faster than a Mac with a 68000, and that even a 6502 @2 MHz could beat a 68000 at an effective 6 MHz. The 80286 is 3-4 times faster than the 8088.

I see the problem of getting reliable benchmarks for such old hardware, but the ones given in the BYTE article are just nonsense for your comparison: they measure BASIC and disk/file access across three very different operating systems and BASIC interpreters - so they have almost zero meaning for determining CPU performance.

Quote:

Originally Posted by Leffmann

There was at least one more manufacturer of the M68000: ST Microelectronics in Europe.

Yep, and Hitachi in Japan, and Rockwell, and... it's actually in the Wikipedia article:

Quote:

Several other companies were second-source manufacturers of the HMOS 68000. These included Hitachi (HD68000), who shrank the feature size to 2.7 µm for their 12.5 MHz version,[4] Mostek (MK68000), Rockwell (R68000), Signetics (SCN68000), Thomson/SGS-Thomson (originally EF68000 and later TS68000), and Toshiba (TMP68000). Toshiba was also a second-source maker of the CMOS 68HC000 (TMP68HC000).