Hello, I discussed this with some of Tony's contacts in the 90s. Tony Tebby disliked the semaphore approach, and somewhat rejected certain approaches of the Linux kernel. In fact, his approach in SMSQ/E was to use the TAS instruction to implement an exclusion zone. In OS theory, this is a mutex.
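To illustrate the idea, here is a minimal sketch of a TAS-style exclusion zone, written with C11 atomics rather than 68000 assembly (the names `tas_lock`, `tas_acquire`, and `tas_release` are mine, not QDOS calls):

```c
#include <stdatomic.h>

/* A minimal test-and-set spinlock. On the 68000, TAS atomically reads
 * a byte, sets its top bit, and reports the old value in the condition
 * codes; C11's atomic_flag_test_and_set is the portable equivalent. */
typedef struct { atomic_flag held; } tas_lock;

#define TAS_LOCK_INIT { ATOMIC_FLAG_INIT }

static void tas_acquire(tas_lock *l) {
    /* Retry TAS until the old value was "clear": the first thread to
     * see it clear owns the exclusion zone. No wait queue, no
     * scheduler involvement -- just a busy wait. */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;
}

static void tas_release(tas_lock *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```

The whole mechanism is one atomic instruction plus a loop, which is why the kernel-side code stays so small.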

The difference between a mutex and a semaphore is slight. Some people say that a mutex is a binary semaphore. A true semaphore implements a queue of waiting processes, signalled in turn, and a process must wait its turn before it can execute. With a mutex, the first one to execute the TAS is the one free to enter the "sensitive execution zone".
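A counting semaphore with its wait queue can be sketched on top of POSIX primitives; this is my illustration of the distinction, not how any particular kernel implements it. Unlike the bare TAS lock, waiters here sleep on a queue managed by the OS and are woken in turn:

```c
#include <pthread.h>

/* A counting semaphore sketched with a mutex and condition variable. */
typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int count;              /* number of available permits */
} csem_t;

void csem_init(csem_t *s, int initial) {
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
    s->count = initial;
}

void csem_wait(csem_t *s) {                    /* the P operation */
    pthread_mutex_lock(&s->m);
    while (s->count == 0)
        pthread_cond_wait(&s->cv, &s->m);      /* sleep on the wait queue */
    s->count--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem_t *s) {                  /* the V operation */
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->cv);               /* wake one queued waiter */
    pthread_mutex_unlock(&s->m);
}
```

Note the extra machinery compared to the TAS lock: a count, a queue, and two scheduler interactions per contended access.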

In any case, if you are not careful, you can create a "deadlock". The problem exists everywhere (even in database mechanisms). Solutions exist, automatic or manual (not the purpose here).
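The classic case is two threads taking two locks in opposite order, each waiting forever for the other. One common manual cure, sketched below with hypothetical helpers of my own naming, is to impose a global lock ordering (here, by address) so both threads always acquire in the same order:

```c
#include <pthread.h>
#include <stdint.h>

/* Always lock the pair in a canonical (address) order, whatever order
 * the caller passes them in. This makes the opposite-order deadlock
 * impossible, because no thread can hold the "second" lock while
 * waiting for the "first". */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Calling `lock_pair(&m1, &m2)` in one thread and `lock_pair(&m2, &m1)` in another is now safe, since both end up locking in the same order.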

The advantage of the TAS (and of binary semaphores) is that the OS code is simpler, hence faster and more efficient. For real time (with hard constraints), this is (or was, when we analysed it during the 90s) the better choice.

I have somewhat lost touch with this part of the technology over the years, so I am not sure the performance problem still exists today. But if I had to program real-time embedded systems, I would look carefully at TAS before semaphores.

OK, a few words about this analogy. The semaphore/mutex/lock concept is associated with waiting lists of threads/processes that form when another one has already "taken" the permission to access the shared data structures, to read or possibly modify them.

This is like traffic lights protecting a crossroad. The drivers have to stop when the light is red, this creates waiting lists of cars. Even if there is no accident, that can create lots of perturbations when traffic increases. I live near Paris. At 5am when the streets are empty (system not loaded) you can reasonably foresee how long a trip will take. Not so at 5pm when the streets are full (system loaded).

But it is worse than that. In the real world, when a car is blocked in the middle of a crossroads, it is called an accident. In computer systems, when a thread/process becomes owner of a semaphore/mutex/lock, rescheduling is still authorised, so the owner can be preempted while holding it. Accidents are normal inside computers. This of course makes the situation inside computers orders of magnitude worse than in the real world.

In QDOS/SMSQ/E/Minerva, when a job needs access to a shared data structure, it enters atomic mode, i.e. it finishes what it has to do with the data structure, either reading or writing, before any rescheduling can occur. Thus in QDOS, FROM THE POINT OF VIEW OF THE RUNNING JOB (in capitals because this is fundamental), ALL THE SHARED DATA STRUCTURES ARE ALWAYS AVAILABLE. Another way to look at it: in QDOS each #TRAP behaves like a single-processor super-instruction. And Tony left the possibility to define new #TRAPs if necessary, because he could not foresee all the situations of the future. Of course the code inside the TRAPs must be perfectly clean, developed and tested by extremely talented programmers, with no bugs, or this may definitely crash the system. But in the end, with QDOS the programmer can develop software nearly as easily as on a single-tasking system. Many, many problems disappear.
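The shape of that "super-instruction" idea can be sketched as follows. This is my illustration, not QDOS code: `atomic_mode_enter`/`atomic_mode_leave` are hypothetical primitives standing in for whatever the kernel does to suppress rescheduling (in a real kernel, the timer interrupt handler would skip rescheduling while the count is non-zero):

```c
#include <stdatomic.h>

static atomic_int preempt_disable_count;   /* checked by the scheduler */

/* Hypothetical primitives: while the count is non-zero, the scheduler
 * refuses to switch jobs, so the running job cannot be interrupted
 * halfway through a shared data structure. */
static void atomic_mode_enter(void) { atomic_fetch_add(&preempt_disable_count, 1); }
static void atomic_mode_leave(void) { atomic_fetch_sub(&preempt_disable_count, 1); }

/* A shared structure accessed like a single super-instruction: no lock
 * on the data itself is needed, because nothing else can run. */
static int shared_counter;

int super_increment(void) {
    atomic_mode_enter();          /* no rescheduling from here... */
    int v = ++shared_counter;
    atomic_mode_leave();          /* ...to here */
    return v;
}
```

From the caller's point of view, `super_increment` is indivisible, which is exactly the property claimed for a QDOS #TRAP.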

This is what made QDOS special.

Tony told me that in the beginning this was not foreseen. As you may know, he developed QDOS alone, in a hurry, without telling anyone. The problems with semaphores/mutexes/locks were well known; you can verify this in the literature of the 1970s. And anyway, semaphores also contain a small portion of atomic code (I let you think about this: you obviously cannot protect a semaphore and its associated list of waiting processes/threads, which is itself a shared data structure, with another semaphore...). So he decided to protect all the shared data structures with atomic accesses. It was only when he moved to France and redeveloped QDOS into SMSQ/E that he made a theory out of this concept. As of today, a lost theory, I am afraid.

I had an argument about this years ago with a programmer who did not know QDOS. In the end he was nearly convinced. But according to him, that would have meant a lot less work for them (the programmers), so he felt OK... It is possible that people have died in transportation systems because of problems created by semaphores/mutexes that should not have occurred. However, there is no way to prove this, because we, simple people, are not allowed to look at so-called industrial secrets after catastrophes happen, so...

Two things bothered me with this approach, though, until Tony one day explained the cure... or until I was ready to understand it. If interested, I can also write about this.

My take on "What makes the OS for QL any better, different, unique?" is this: when it first arrived, the promise, at least, of the QL was of something far in advance of anything else available for home use. A "complete" SoHo system for less than the price of the software alone for an "IBM".

Of course, in reality, it took a while before that promise was realised. You needed extra RAM and real disk drives, a monitor and printer, an RTC and a mouse, plus some additional software, before it finally became that multitasking wonder we were led to believe it was.

Yet by that time, one had sunk so much Time, Effort, and Treasure into putting it together, and getting to grips with it all, that it would have been foolish to switch to anything lesser, however popular. And yes, ultimately, it did deliver the goods: move the best bits over to some decent hardware like the Atari ST and you really had the machine we were promised!

While I certainly did not spend as much TET on any other system, I did have experience of the struggles of some of my acquaintances with their brain-dead systems; IBMs being perhaps the dumbest of the pile. What rational human being could possibly have imagined that they would eventually win the struggle for survival?! Well, they did. Not by being the technically best, but by being totally ruthless, business-savvy, and by throwing tons of money at every little problem until the problem was so deeply buried that for all practical purposes it had gone away. (Once they got rich enough, it is true, they rewrote a lot of their stuff and sorted out some of those problems.)

I worked for some years for a vast, famous IT firm, of some equally vast and famous country, and got to see how that sort of thing was done in practice: at work in Amsterdamned, our main database was located in said vast country, some thousands of miles away. Each time we needed to look something up in that database, and that was pretty often, it took an absolute age. The reason was that instead of just downloading the (mainly text) data, together with a few instructions on how to prettily reformat that data on the local machine, which even in 1995-6 would have been a matter of a split second, the whole interface, the menus and the lot, were painstakingly transferred for each and every query! I complained up the chain, and even suggested a fix, but to no avail. The solution these clever chaps hit upon was to throw together ten more dedicated transatlantic lines, and ten more servers with two CPUs each. That solved the problem! From painstakingly slow it became an almost bearable five times faster. I say almost bearable, because I was well aware of the stupid work going on in the background to make the "magic" happen.

As far as I'm concerned, the (true and typical) story above illustrates my answer to the question.

Peter wrote: Although the QL came earlier than ST and Mac, the multitasking scheduler was already preemptive - technologically superior to the cooperative multitasking of the other machines, where every program could block the whole system.

One moment...

Neither the original Macintosh nor the Atari ST had any multitasking at all, not even cooperative multitasking. Windows since 3.0 had cooperative multitasking for Windows programs AND even pre-emptive multitasking for the DOS boxes (but only on 80386 machines, not on 80286 machines).

Not quite true for the Atari ST (GEM) and Macintosh. Desk accessories for both of those could do things in the background during the desktop interrupt cycle, but it was highly restricted.

On the Mac, System 6 had the co-operative multitasking version of the Finder, called MultiFinder. I can't remember when System 6 was released.

On the Atari there was nothing until Eric Smith created MiNT and MagiC came along. Atari bought MiNT from Eric and hired him in 1992. They then used it to create MultiTOS.

From 1976 to 1995, only single-tasking microcomputers were accepted by the public, simply because multitasking systems based on academic research were not usable on the microprocessors then available. The exception was the QL, with completely different system software. Indeed, even with access to the hardware, GST could not fit Sir Clive's specifications for the QL into 68K/OS: it did not work, and then none of us bought it anyway...

Remember the Lisa, based on the 68k, and all the good theories about multitasking, as per the dining philosophers eating spaghetti, which nearly killed Apple. Fortunately Steve Jobs had not yet been fired, and he had developed the single-tasking Mac with another team. However, he did not understand the technology, or he would not have done the NeXT. Steve had good memories though: he forbade multitasking in iOS, and that killed the multitasking, client/server-based Symbian. From the day he allowed multitasking in iOS, Android became stronger. I agree, there are other reasons here, as iOS and Android are equal(ly bad?) at the system software level.

Also look at MS Xenix (was that the name? I do not remember exactly): a total failure. More interesting: if you search long enough, you will find that around 1985 MS made an attempt to develop a multitasking version of MSDOS. I strongly believe that this was because Bill Gates knew the QL (TT had a visit from BG, who came to sell BASIC to Sinclair). And I also believe that it never happened because at that time MS software developers had no access to a commented disassembly of the QL ROM. Even if they had, TT thought that they could not have understood a simpler concept than the one they had learnt in the best universities. I wonder what happened inside MS when BG understood that his software developers could not even copy a toy British home computer. Notice that he also had good memories: one of the main successes of MS is the Xbox game machine, with a single-tasking system, at least when it came out in 2001 (now I don't know).

Of course there were UNIX workstations, and they did more or less work; my then employer had bought an Apollo based on the 68k to do some scientific calculations. But at what price! And, worse than MSDOS, not user-friendly at all. Unthinkable for the public.

IIRC the first relatively low-priced usable multitasking systems were the Amiga and the Archimedes. But with hardware TT could only dream of when he worked at Sinclair. I know, he told me.