Retrocomputing Stack Exchange is a question and answer site for vintage-computer hobbyists interested in restoring, preserving, and using the classic computer and gaming systems of yesteryear.

What is odd in my opinion is the use of x86 assembly language for everything. Assembly language would not be my first choice for implementing an operating system. At the time MS-DOS was created, the C programming language had already been invented at Bell Labs, offering a good compromise between low-level and high-level programming.

Was this assembly language approach used also in the newest versions of MS-DOS in the 1990s?

The MS-DOS ABI (if we can call it that) is an assembly register interface. It would be quite inconvenient to implement that in C.
– tofro Oct 7 '18 at 13:21
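The register-based convention tofro describes can be sketched in a few lines of Python. This is a hypothetical simulation, not real DOS code: the function numbers 30h (get version) and 3Dh (open file) are genuine INT 21h services, but the behaviour modelled here is simplified for illustration.

```python
# Hypothetical sketch: the INT 21h register interface. DOS selects a
# service by the value in AH, takes arguments in other registers, and
# reports errors by setting the carry flag with an error code in AX.

def int21h(regs):
    """Dispatch a DOS-style system call.
    `regs` maps register names to 16-bit values; returns (regs, carry)."""
    ah = regs["AX"] >> 8
    if ah == 0x30:                  # Get DOS version: AL=major, AH=minor
        regs["AX"] = (0 << 8) | 5   # pretend we are version 5.0
        return regs, False          # carry clear = success
    if ah == 0x3D:                  # Open file (DS:DX -> name, not modelled)
        regs["AX"] = 0x0002         # error code 2: file not found
        return regs, True           # carry set = error
    regs["AX"] = 0x0001             # error code 1: invalid function
    return regs, True

regs, carry = int21h({"AX": 0x3000})
assert not carry and (regs["AX"] & 0xFF) == 5   # major version 5
```

A C implementation would have to marshal every argument into the right register and pull the carry flag and AX back out, which is exactly the inconvenience the comment points at.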


"The assembly language would not be my first choice for implementing an operating system." -- But someone did anyway: a fully graphical, multitasking OS that can run from a floppy: menuetos.net. While it's not exactly the same thing as Windows/Linux/MacOS (more primitive, flickers, etc.), it's a cool proof of concept of how much space can be saved when you use only assembler.
– phyrfox Oct 7 '18 at 15:28


"The assembly language would not be my first choice for implementing an operating system." Because you're young, and have no appreciation for #1 what it means to have to run in 16KB of RAM, and #2 how hard it is to write a compiler that optimizes so well that it's better than hand-coded assembler.
– RonJohn Oct 8 '18 at 5:22


RAM is the issue, entirely. You don't say why assembly language wouldn't be your first choice, but my guess is that you feel it would be easier to code in C or some other higher-level language, and it would be. However, when programming resource-limited computers, ease is not the primary goal; space efficiency, speed of execution, or some combination of these two factors will be the main considerations. Consider that some computers had one kilobyte or less of RAM versus gigabytes now, and even the very last version of MS-DOS had to run nicely on 640kB systems (the first, 16 kB!).
– Jim MacKenzie Oct 8 '18 at 14:40


@LightnessRacesinOrbit reverse-ageism is a time-honored tradition. I can't wait until I need a cane and can start smacking deserving idiots with it!
– RonJohn Oct 9 '18 at 16:57

13 Answers

C did exist when DOS was developed, but it wasn’t used much outside the Unix world, and as mentioned by JdeBP, wouldn’t necessarily have been considered a good language for systems programming on micros anyway — more likely candidates in the late seventies would include Forth and Pascal. SCP developed DOS in assembly for a few very pragmatic reasons:

The last design requirement was that MS-DOS be written in assembly language. While this characteristic does help meet the need for speed and efficiency, the reason for including it is much more basic. The only 8086 software-development tools available to Seattle Computer at that time were an assembler that ran on the Z80 under CP/M and a monitor/debugger that fit into a 2K-byte EPROM (erasable programmable read-only memory). Both of these tools had been developed in house.

As you’ve seen from the code available for MS-DOS 1.25 and 2.11, these versions were also written in assembly language. That code was never entirely rewritten, so there never was the opportunity to rewrite it in a higher-level language; nor was the need ever felt, I suspect — assembly was the language of choice for system tools on PCs for a long time, and system developers were as familiar with assembly as with any other language.

Other languages were used in MS-DOS releases. BASIC was used for a number of demos of various kinds over the years; but they hardly count as “core” utilities. Microsoft’s Pascal compiler (sold as IBM Pascal) was available early on, and could have been used — but it produces binaries with tell-tale memory problems which none of the MS-DOS tools exhibit, as far as I’m aware.

Some tools added in later versions were developed in C; a quick look through MS-DOS 6.22 shows that, for example, DEFRAG, FASTHELP, MEMMAKER and SCANDISK were written in C (some of these tools were licensed from other companies such as Symantec). FDISK is also a C program in 6.22; I haven’t checked its history to see if it started out in assembly and was rewritten (in early versions of DOS, it wasn’t provided by Microsoft but by OEMs). As No'am Newman mentions, the OS/2 Museum page on DOS 3 lists ATTRIB.EXE as the first program provided with MS-DOS to have been written in C.

It gets even more interesting when one adds DR-DOS to the picture, but the question did not ask about that. It's worth disputing the question's implied premise that at the time people generally accepted that the C language was the language to use for implementing operating systems. That was by no means a given. There were people who implemented operating systems in Pascal, for example. (-:
– JdeBP Oct 8 '18 at 13:41

@JdeBP do you know what languages were used in DR DOS? OpenDOS’ kernel was all assembly, but its COMMAND.COM was partly written in C. I haven’t looked into the rest of the system...
– Stephen Kitt Oct 8 '18 at 13:58


Let's not forget that MS-DOS included a complete IDE for working in assembly on the computer, one that allowed editing of even the currently executing program without the need to re-compile, re-link, and re-execute to see the results: DEBUG.
– Gypsy Spellweaver Oct 9 '18 at 4:07


@Gypsy believe it or not, that’s how I learned to code assembly on PCs, along with EDLIN! Before that, I hand-assembled 6502 assembly for use in DATA statements in BASIC on 8-bit Ataris... So DEBUG was a huge improvement.
– Stephen Kitt Oct 9 '18 at 19:26

The DEFRAG utility you mention was originally part of Norton Utilities (called SPEEDISK) and licensed by Microsoft. SCANDISK was similarly based on Norton Disk Doctor.
– Ed Avis Oct 11 '18 at 10:10

From The OS/2 Museum page about DOS 3: "The new ATTRIB.EXE utility allowed the user to manipulate file attributes (Read-only, Hidden, System, etc.). It is notable for being the first DOS utility written in C (up to that point, all components in DOS were written in assembly language) and contains the string “Zbikowski C startup Copyright 1983 (C) Microsoft Corp”, clearly identifying Mark Zbikowski’s handiwork."

I realise that this doesn't really answer the question, but gives an idea of how DOS was implemented. Also remember that the operating system had to be as small as possible so as to leave room for user programs, so the resident part of COMMAND.COM would have stayed written in Assembly.

MS-DOS (by which I mean the underlying IO.SYS and MSDOS.SYS files) was written in assembly through the first half of the 1990s.

In 1995 for Windows 95, which was bootstrapped by what you would call MS-DOS 7.0 (although nobody ran DOS 7.0 as a stand-alone OS), I did write a small piece of code in C and included it in the project. As far as I know, it was the first time C code appeared in either of those .SYS files (yes I know one of those SYS files became a text file and all the OS code ended up in the other one).

I remember sneaking a look at the Windows NT source code at the time to see how they had solved some issue, and I was impressed at how even their low level drivers were all written in C. For instance they used the _inp() function to read I/O ports on the ISA bus.

In addition to the Basic Input/Output System and the Basic Disk Operating System, MS-DOS also comprises the command processor and the housekeeping utilities (superuser.com/questions/329442); and those are also in the Microsoft publication referred to in the question. As such, the use of C code in one of the housekeeping utilities in MS-DOS in 1984, per another answer here, means that MS-DOS (the whole actual operating system, not a partial subset of it) had already not been wholly written in assembly for more than 10 years by that point.
– JdeBP Oct 8 '18 at 13:14


Do you know any more about that mysterious problem, why MS-DOS couldn't read DR-DOS floppies?
– peterh Oct 8 '18 at 15:33


I tended to run MS-DOS 7.0 as a standalone OS by installing Windows 95 and modifying the boot configuration so it left you at a DOS prompt. You could still launch Windows by saying 'win'. You could also make bootable floppies with MS-DOS 7.0, I think.
– Ed Avis Oct 11 '18 at 10:12

@EdAvis - I've used a standalone MS-DOS 7 installation via boot floppies made with format a: /s from a Win95 machine (and probably then copy c:\windows\command\*.* a: to get the utilities), so I agree that that worked too.
– Jules Oct 12 '18 at 0:25

Low Memory ==> Assembly Language

In the early days every byte mattered. MS-DOS was, in many ways, an outgrowth of CP/M. CP/M had a fairly hard limit of 64K. Yes, there was some bank switching in later versions, but for practical purposes, for most of its popular lifetime, it was a 64K O/S. That included O/S resident portion + Application + User Data.

MS-DOS quickly increased that to 1 Meg. (but 640K in practical terms due to IBM's design decisions) and it was relatively easy to use the 8086 segmented architecture to make use of more than 64K memory, as long as you worked in 64K chunks.
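The segmented addressing the answer refers to can be illustrated with a short sketch (this models the 8086's address calculation itself, not any DOS code): a physical address is the 16-bit segment shifted left four bits plus the 16-bit offset.

```python
# Illustrative sketch: 8086 real-mode addressing. Physical address =
# segment * 16 + offset, giving a 20-bit (1 MiB) address space even though
# every register is only 16 bits wide. Many segment:offset pairs alias
# the same physical byte, and you work naturally in 64 KiB chunks.

def phys(segment, offset):
    """Compute the 20-bit physical address for a segment:offset pair."""
    return ((segment << 4) + offset) & 0xFFFFF  # wraps at 1 MiB

assert phys(0x0000, 0x0000) == 0x00000
assert phys(0xFFFF, 0x000F) == 0xFFFFF               # top of the 1 MiB space
assert phys(0x1000, 0x0000) == phys(0x0FFF, 0x0010)  # aliasing
```

Within one segment the offset gives you a 64 KiB window for free, which is why code that stayed inside 64 KiB chunks was easy to write.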

Despite the 1 Meg./640K limit, plenty of machines started out with a lot less RAM. 256K was typical. The original IBM PC motherboard could hold from 16K to 64K, though most (and all the ones I ever worked with myself) could hold from 64K - 256K. Any more RAM went on expansion cards. RAM was still rather expensive in the early days - plenty of machines did plenty of useful work with 256K (or less!), so keeping the resident O/S components to a minimum was very important to allow for larger applications and more user data.

Can an optimizing C compiler get really close to hand-coded assembly language in memory usage? Absolutely. But they weren't there in the early days. Plus, compilers (in my mind, until Turbo Pascal came along) were big & clunky - i.e., needed plenty of RAM and disk space and took a long time to compile/link/etc. which would make developing the core of an O/S even harder to do. MS-DOS wasn't like a strip of paper tape loaded in via a TTY to an Altair (the first Microsoft Basic) but it was small and efficient for what was needed at the time, leaving room for applications on a bootable floppy and in RAM.

COMMAND.COM (the command-line interpreter) loaded in the top 32K of RAM and could be overwritten by a large application if necessary. In the twin-floppy days, it was a PITA if that was the case - one finished up putting copies of COMMAND.COM on the data disks. DOS was so small, it wasn't worth writing in C. Even large applications like Lotus 1-2-3 were written in assembler. 1-2-3 version 3 was the first C version and it was slower and had more bugs than version 2.
– grahamj42 Oct 7 '18 at 20:27


@grahamj42 "DOS was so small, it wasn't worth writing in C" - actually I would argue the opposite: because it was so small, it had to be assembler to keep it as absolutely small as possible in the early days. Large applications were initially in assembler too - every cycle counts on a 4.77 MHz 8088, and every byte counts in 256K (or often less). As you move on to a 6 MHz 80286, 640K, etc., the overhead of a high-level language (both CPU cycles & bytes) becomes more acceptable. Bugs - any major rewrite has 'em :-(
– manassehkatz Oct 7 '18 at 20:41


"the original IBM PC motherboard could hold from 16K to 256K" -- a quick correction: the original motherboard held 16K-64K (1-4 banks of 16K chips). It was quickly replaced by a version that held 64K-256K (1-4 banks of 64K chips) after IBM realised that the 16K configuration wasn't selling well. See minuszerodegrees.net/5150/early/5150_early.htm for more details.
– Jules Oct 8 '18 at 7:43

C would have been really inefficient to write the operating system for a number of reasons:

First, initial compilers for high-level languages used 16-bit pointers on MS-DOS, only later adding support for 32-bit pointers. Since much of the work done in the operating system involved managing a 1MB address space, this would have been impractical without larger pointers. And programs that took advantage of compiler support for larger pointers suffered significantly on the 8086, since the hardware doesn't actually support 32-bit pointers nicely!
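A sketch of why 32-bit ("far"/"huge") pointers were expensive can make this concrete. This is a hypothetical model, not compiler output: a far pointer is a (segment, offset) pair, plain arithmetic only changes the offset (which silently wraps at 64 KiB), so "huge"-model compilers had to emit extra normalisation code on pointer operations.

```python
# Hypothetical sketch: "huge" pointer arithmetic on the 8086. To let a
# pointer walk past a 64 KiB boundary, the compiler must renormalise the
# (segment, offset) pair after every addition - extra instructions that
# hand-written assembly simply avoids by design.

def huge_add(segment, offset, n):
    """Advance a far pointer by n bytes, renormalising so offset < 16."""
    linear = (segment << 4) + offset + n
    return (linear >> 4) & 0xFFFF, linear & 0xF

seg, off = huge_add(0x2000, 0xFFFF, 1)   # crossing a 64 KiB boundary
assert ((seg << 4) + off) == (0x2000 << 4) + 0xFFFF + 1
```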

Second, code generation quality was poor on the 8086, in part because compilers were less mature back then, in part because of the irregular instruction set of the 8086 (regarding many things, including the above-mentioned 32-bit pointer handling), and in part because in assembly language a human programmer can simply use all the features of the processor (e.g. returning an error indication in the carry flag along with a return value in AX, as is done with the int 21h system calls). The small register set also makes the compiler's job harder, which means it would tend to use stack memory for local variables where a programmer would have used registers.

Compilers only use a subset of a processor's instruction set, and accessing those other features would have required an extensive library or compiler options and language extensions that were yet to come (e.g. __stdcall, others).

As hardware has evolved it has become more friendly to compilers; while also, compilers have improved dramatically.

Also, I just want to add that even if the code generation quality was good, assembler is sometimes helpful just because it's easier to debug when there are problems. Compilers can and do do all sorts of weird things in the interest of speed; sometimes you want to err on the side of readability/understandability later on.
– Kevin McKenzie Oct 10 '18 at 1:33

Aside from the historical answer, which is just "yes", you also have to keep in mind that DOS is magnitudes smaller than what we'd call an "OS" today.

What is odd in my opinion is the use of x86 assembly language for everything.

On the contrary; using anything else would have been odd back then.

DOS had very few responsibilities - it handled several low level components in a more or less static way. There was no multi-user/-tasking/-processing. No scheduler. No forking, no subprocesses, no "exec"; no virtualization of memory or processes; no concept of drivers; no modules, no extensibility. No USB, no PCI, no video functionality to speak of, no networking, no audio. Really, there was very little going on.

See the source code - the whole thing (including command line tools, "kernel"...) fits into a handful of assembler files; they aren't even sorted into subdirectories (as Michael Kjörling pointed out, DOS 1.0 didn't have subdirectories, but they didn't bother adding a hierarchy in later versions either).

If you count the DOS API calls, you end up at roughly 100 services behind the int 0x21 call, which is... not much, compared to today.

Finally, the CPUs were much simpler; there was only one mode (at least DOS ignored the rest, if we ignore EMM386 and such).

Suffice it to say, the programmers back then were quite used to assembler; more complex software was written in assembler on a regular basis. It probably did not even occur to them to rewrite DOS in C. There simply would have been little benefit.

"they didn't even need to be sorted into subdirectories" MS-DOS 1.x didn't even support subdirectories. That was only added in 2.0. So development of 2.0 would pretty naturally not have used subdirectories for source code organization. It's like how, these days, when building a compiler for an updated version of a programming language, any newly introduced language constructs likely don't get used in the compiler source code until the new version of the compiler is quite stable.
– a CVn Oct 8 '18 at 11:24


@MichaelKjörling, phew, thanks for that addition. My first contact with DOS was on an Schneider Amstrad PC, I think (no HDD but two floppies, though I cannot recall if they were 5 1/4" or already 3 1/2"; and I certainly do not recall the version of DOS). I do recall vividly how I once tried out all the DOS commands... up to and including RECOVER.COM on the boot disk. The disk certainly had no subdirectories after THAT one, and I learned the importance of having backups. :-) en.wikipedia.org/wiki/Recover_(command) Good old times.
– AnoE Oct 8 '18 at 13:33

@MichaelKjörling Were not directories only added in 2.20 or something like that?
– Rui F Ribeiro Oct 9 '18 at 19:12


@RuiFRibeiro I don't think so; my understanding is that directories were introduced together with hard disk support in 2.0. Either way, though, the point remains valid: (sub)directories were not supported in the 1.x releases, so would likely not have been available during significant portions of the development that led up to 2.0. The only way I can see that subdirectories would have been available is if development was done on some other platform and the resulting source code only cross-compiled.
– a CVn Oct 10 '18 at 8:02

You need to understand that C wasn't a good compromise between low-level and "high-level". The abstractions it offered were tiny, and the cost of them was more important on the PC than on machines where Unix originated (even the original PDP-11/20 had more memory and faster storage than the original IBM PC). The main reason why you'd choose C wasn't to get useful abstraction, but rather to improve portability (this in a time where differences between CPUs and memory models were still huge). Since the IBM PC didn't need portability, there was little benefit to using C.

Today, people tend to look at assembly programming as some stone-age level technology (especially if you've never worked with modern assembly). But keep in mind that the high-level alternatives to assembly were languages like LISP - languages that didn't even pretend to have any relation to the hardware. C and assembly were extremely close in their capabilities, and the benefits C gave you were often outweighed by the costs. People had large amounts of experience and knowledge of the hardware and assembly, and lots of experience designing software in assembly. Moving to C didn't save as much effort as you'd think.

Additionally, when MS-DOS was being developed, there was no C compiler for the PC (or the x86 CPUs). Writing a compiler wasn't easy (and I'm not even talking about optimizing compilers). Most of the people involved didn't have great insight into state-of-the-art computer science (which, while of great theoretical value, was pretty academic in relation to desktop computers at the time; CS tended to shun the "just get it working, somehow" mentality of commercial software). On the other hand, creating an assembler is pretty trivial - and already gives you many of the capabilities that early compilers for languages like C offered. Do you really want to spend the effort to make a high-level compiler when what you're actually trying to do is write an OS? By the time tools like Turbo Pascal came to be, they might have been a good choice - but that was much later, and there'd be little point in rewriting the code already written.

Even with a compiler, don't forget how crappy those computers were. Compilers were bulky and slow, and using a compiler involved flipping floppies all the time. That's one of the reasons why at the time, languages like C usually didn't improve productivity unless your software got really big - you needed just as much careful design as with assembly, and you relied on your own verification of the code long before it got compiled and executed. The first compiler to really break that trend was Turbo Pascal, which took very little memory and was blazing fast (while including a full-blown IDE with a debugger!) - in 1983. That's about in time for MS-DOS 3 - and indeed, around that time, some of the new tools were already written in C; but that's still no reason to rewrite the whole OS. If it works, why break it? And worse, why risk breaking all of the applications that already run just fine on MS-DOS?

The API of DOS was mostly about invoking interrupts and passing arguments (or pointers) through registers. That's an extremely simple interface that's pretty much just as easy to implement in assembly as in C (or depending on your C compiler, much easier). Developing applications for MS DOS required pretty much no investment beyond the computer itself, and a lot of development tools sprung up pretty quickly from other vendors (though on launch, Microsoft was still the only company that provided an OS, a programming language and applications for the PC). All the way through the MS-DOS era, people used assembly whenever small or fast code was required - compilers only slowly caught up with what assembly was capable of, though it usually meant you used something like C or Pascal for most of the application, with custom assembly for the performance critical bits.

OSes for desktop computers had one main requirement - be small. They didn't have to do much stuff, but whatever they had to keep in memory was memory that couldn't be used by applications. MS-DOS targeted machines with 16 kiB RAM - that didn't leave a lot of room for the OS. Diminishing that further by using code that wasn't hand optimized would have been a pointless waste. Even later, as memory started expanding towards the 640 kiB barrier, every kiB still counted - I remember tweaking memory for days to get to run Doom with a mouse, network and sound at the same time (this was already with a 16 MiB PC, but lots of things still had to fit in those 640 kiB - including device drivers). This got even worse with CD-ROM games; one more driver to fit in. And throughout all this time, you wanted to avoid the OS as much as possible - direct memory access was the king if you could afford it. So there wasn't really much of a demand for complicated OS features - you mostly wanted the OS to stand aside while your applications were running (on the PC, the major exception would only come with Windows 3.0).

But programming in 100% assembly was nowhere near as tedious as people imagine today (one notable example being Roller Coaster Tycoon, a huge '99 game, 100% written in assembly by one guy). Most importantly, C wasn't significantly better, especially on the PC and with one-person "teams", and introduced a lot of design conflicts that people had to learn to deal with. People already had plenty of experience developing in assembly, and were very aware of the potential pitfalls and design challenges.

Yes - C has been around since 1972, but there were no MS-DOS C compilers until the late 80s. To convert an entire OS from assembler to C would be a mammoth task. Even though it might be easier to maintain, it could be a lot slower.

You can see the result of conversion when you compare Visual Studio 2008 to VS2010. This was a full blown conversion from C to C#. OK - it is easier to maintain from the Vendor's point of view but the new product is 24 times slower: on a netbook, 2008 loads in 5s, 2010 takes almost 2 minutes.

Also, DOS was a 16-bit OS in a 20 bit address space. This meant that there was a lot of segmented addressing and a few memory models to choose from (Tiny, Compact, Medium, Large, Huge): not the flat addressing that you get in the 32-bit/64-bit compilers nowadays. The compilers didn't hide this from you: you had to make a conscious decision as to which memory model to use since changing from one model to another wasn't a trivial exercise (I've done this in a past life).
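The memory models mentioned above can be summarised in a small table (a rough sketch from the classic compiler documentation; the "Small" model is added here for completeness, and the point is simply that each model fixes whether code and data pointers are near, 16-bit, or far, 32-bit):

```python
# Sketch: the classic DOS memory models and the pointer sizes each implies.
# Changing model changes the size and meaning of every pointer in the
# program, which is why switching models was not a trivial exercise.

MEMORY_MODELS = {
    #  name       code ptr  data ptr
    "tiny":    ("near",   "near"),   # CS = DS = SS: one 64 KiB segment total
    "small":   ("near",   "near"),   # one code + one data/stack segment
    "medium":  ("far",    "near"),   # many code segments, one data segment
    "compact": ("near",   "far"),    # one code segment, many data segments
    "large":   ("far",    "far"),    # many of both
    "huge":    ("far",    "far"),    # like large, plus normalised pointers
}

assert MEMORY_MODELS["large"] == ("far", "far")
assert MEMORY_MODELS["medium"] == ("far", "near")
```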

Assembly to C, and C to C# cannot really be compared, as C# is a JIT'd GC'd memory-safe language, whereas C is very close to Assembly in the features it provides, being just easier to write and maintain. Anyway, "entire OS" for MS-DOS is quite little code, so converting it to C, either completely or partially, wouldn't be such a large task.
– juhist Oct 7 '18 at 10:58


@juhist Try actually doing some of the conversion yourself, then tell us whether it was really a large task or not. (Oh, and make sure you regression test every edge and corner case, just to be sure your C version doesn't change the behaviour of anything, not just what the (buggy and incomplete) user documentation says is valid!)
– alephzero Oct 7 '18 at 11:42

June 1982 is post-launch by a bit over a year (and so well after key development work) but it is not by any stretch of imagination the late 80's
– Chris Stratton Oct 7 '18 at 17:41


Chris Stratton points out one problem with this answer, that its dates for availability of C compilers are quite wrong. (By the late 1980s, not only were there C compilers for MS-DOS, there were even ones for both MS-DOS and OS/2 1.x.) Another is the idea that conversion is "a mammoth task" that needs to happen in one fell swoop. The reality was that contemporary C (and indeed BASIC and Pascal) compilers supported both linking to assembly language modules and in-line assembly language, and one did not have to make an either/or exclusive choice. Or indeed do the whole operating system at once.
– JdeBP Oct 8 '18 at 13:26

For the rationale: there's very little gain in rewriting functioning code (unless for example you have portability specifically in mind).

The "newest version" of any major program generally contains much of the code of the previous version, so again, why spend programmer time on rewriting existing features instead of adding new features?

Welcome to Retrocomputing! This answer could be improved by supporting evidence. For example, specifying which version your answer refers to, and giving a link to source code. Although your rationale is valid, it doesn't count as evidence.
– Dr Sheldon Oct 7 '18 at 15:35

High level languages are generally easier to work with. However, depending on what one is programming, and one's experience, programming in assembler is not necessarily all that complex. Remove the hurdles of graphics, sound, inter-process communication, and just a keyboard and text shell for user interaction -- it's pretty straight-forward. Especially with a well-documented BIOS to handle the low level text in / text out / disk stuff, building a well-functioning program in assembler was straight-forward and not all that slow to accomplish.

Looking backward with a 2018 mindset, yeah it might seem strange to stick with assembler even in the later versions of DOS. It was not, though. Others have mentioned that some tools were eventually written in C. Still most of it was already written and known to operate well. Why bother rewriting everything? Do you think any users would have cared if a box containing the newest DOS had a blurb stating, "Now fully implemented in the C language!"?

It's interesting to note that the direct inspiration for the initial DOS API, namely CP/M, was mostly written in PL/M rather than assembly language.

PL/M being a somewhat obscure language and the original Digital Research source code being unavailable due to copyright and licensing reasons anyway, writing in assembly language was the most straightforward course for direct binary compatibility. In particular since the machine-dependent part of the operating system, the BIOS, was already provided by IBM (it was very common to write the BIOS in assembly language anyway, even for CP/M, for similar reasons).

The original CP/M structure consisted of BIOS, BDOS, and CCP (basically what COMMAND.COM does) with BIOS implementing the system-dependent parts, BDOS implementing the available system calls on top of that, and the CCP (typically reloaded after each program run) providing a basic command line interface.

Much of the BDOS layer was just glue code, with the most important part that was more complex being the file system code and implementation. There were no file ids indexing kernel internal structures: instead the application program had to provide the room for the respective data structures. Consequently there was no limitation on concurrently open files. Also there was no file abstraction across devices: disk files used different system calls than console I/O or printers.

Since the core of MS-DOS corresponds just to what was the BDOS in CP/M, reimplementing it in assembly language was not that much of a chore. Later versions of MS-DOS tried adding a file id layer and directories and pipes to the mix to look more like Unix, but partly due to the unwieldy implementation language and partly due to a lack of technical excellence, the results were far from convincing. Other things that were a mess were end-of-file handling (since CP/M only had file lengths in multiples of 128) and text line separators vs. terminal handling (CR/LF is around even to these days).

So doing the original implementation in assembly language was reasonable given the system call history of CP/M that DOS initially tried to emulate. However, it contributed to drawing the wrong project members for moving to a Unix-like approach to system responsibility and mechanisms. Microsoft never managed to utilize the 286's 16-bit protected mode for creating a more modern Windows variant; instead both Windows 95 and Windows NT worked with the 386's 32-bit protected mode, Windows 95 with DOS underpinnings and Windows NT with a newly developed kernel. Eventually the NT approach replaced the old DOS-based one.

NT was renowned for being "enterprise-level" and resource-consuming. Part of the reason certainly was that it had bulkier and slower code due to not being coded principally in assembly language like the DOS-based OS cores were. That led to a rather long parallel history of DOS- and NT-based Windows systems.

So to answer your question: later "versions of DOS" were written in higher languages, but it took them a long time to actually replace the assembly language based ones.

Re open files: at some stage, a structure called (IIRC) the 'system file table' was added, which allowed a program - or rather, the computer as a whole - to have 20 files open. TSRs which used files would have to copy this table, enter their own files into the table, then restore the original entries.
– No'am Newman Oct 9 '18 at 8:13

@No'am yes, that is the SFT; see here (and the links at the bottom) or here for details. You could have more than 20 entries.
– Stephen Kitt Oct 9 '18 at 19:34

I see many factors that make C programming uncomfortable for MS-DOS development:

Much of the interface MS-DOS offered was targeted at assembly language anyway, such as the direct use of the AX-DX registers to pass arguments and replies to system calls

direct use of the (software) interrupts to call system services

use of the 'carry flag' to report errors during system calls, which can be efficiently set/cleared/tested at assembly level and is missing from the C programming model altogether.

all of the above also applies to the BIOS calls that MS-DOS has to make to use low-level "drivers".

it has to run real mode programs, which had to split code among multiple segments. The C execution environment model assumes one segment for the code and one for stack+data. C compilers of the era had complex workarounds to emulate that, sometimes by restricting the program to 64K. As a service to third-party software, MS-DOS must be able to work with buffers/strings located anywhere, forcing it to use exotic FAR pointers and the like all over the code.
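The carry-flag mismatch in the list above can be sketched as follows. This is a hypothetical wrapper, not any real compiler library: DOS reports errors in the carry flag, which C cannot see, so a C-callable wrapper of the era had to fold the (carry, AX) pair into a single return value, much the way Unix-style libraries fold errors into a -1/errno convention.

```python
# Hypothetical sketch: folding the DOS (carry flag, AX) result into a
# single C-style return value. A non-negative result is the file handle;
# a negative result is the negated DOS error code.

def dos_open(exists):
    """Stand-in for INT 21h AH=3Dh: returns (carry_flag, AX)."""
    return (False, 5) if exists else (True, 2)   # handle 5, or error 2

def c_open(exists):
    """C-callable wrapper: >= 0 is a file handle, < 0 is -error_code."""
    carry, ax = dos_open(exists)
    return -ax if carry else ax

assert c_open(True) == 5      # success: got handle 5
assert c_open(False) == -2    # failure: DOS error 2, file not found
```

In assembly none of this translation is needed: the caller just tests the carry flag with a single `jc` after the `int 21h`.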

Later versions of MS-DOS were still facing the same constraints about the BIOS, established interfaces and real mode addressing, so assembly might still have been preferable to C for them.

While this may be true, I don't see how it answers the question of whether versions of MS-DOS were implemented in Assembler.
– Chenmunka ♦ Oct 10 '18 at 10:40

@chenmunka: edited. Hopefully, that should make it clearer.
– PypeBros Oct 10 '18 at 14:04

"The C execution environment model assumes one segment for the code and one for stack+data." Possibly (I haven't checked), but real-world DOS C compilers certainly weren't limited to one code segment and one combined stack/data segment. Both the Large and Huge memory models gave you multiple code and data segments; also Compact gave you a single code segment but multiple data segments, while Medium gave you a single data/stack segment with multiple code segments. Only Small and Tiny were limited to one of each type; Tiny being a single segment overall (CS=DS=SS), Small being CS != DS=SS.
– a CVnOct 19 '18 at 18:04

Indeed (congrats for remembering all those model names). They likely weren't available when the earliest DOS came out, but I guess Borland C (in 1987) had them. Whether it would be comfortable to code the terminate-and-stay-resident parts of MS-DOS or the additional tools with those models ... I don't know.
– PypeBrosOct 24 '18 at 14:40

In response to a few comments, I revised my answer to contain a short version without my personal interests included.

Question: Was this assembly language approach used also in the newest versions of MS-DOS in the 1990s?

TL;DR version: (I also give my 'rant' below for anyone interested. I thought it might spur some good discussion.)

No ... but they kept it as long as they could.

This is akin to "Would you cut off your legs just because you bought a new car?"

Assembly was preferred for optimizing efficiency and would not have been discarded without good reason. Nearly every line of code mapped directly to a machine code instruction. This was a marvel and allowed intimate control over the computer's resources.

It was 'normal' for programmers to use assembly back then. Assembly closely mirrored machine code and was as close as most humans could get to communicating directly with the computer. For anything that constantly made calls directly to hardware, this was the 'go to' language.

Higher-level languages with convenient syntax and built-in help for common functions all had to be interpreted or compiled. This step could introduce errors, or at the very least would use general algorithms or methods that were unlikely to be the most efficient in every situation.

Constraints were different then - memory and storage were very scarce. Processing power and bus speeds were never enough. Optimized, bug-free code was the main goal.

Because assembly was common, it was easy. People thought in terms of interrupts, memory addresses, and flags. Still time consuming, but not particularly difficult.

This was cutting-edge stuff. It was the best that could be done within the constraints of cost and materials, in service of the ambitious goal of an actual working computer that people could have in their home or office.

Programs were small enough that one person could write, debug, and optimize them as needed. There were no huge communities of coders, no internet, and only a few minimal BBS forums. It was a frustratingly solitary activity. Portability and reusability were not quite as important.

Things have changed dramatically since then ... it has been exciting to be a part of it so far. After all these years, questions like this bring up 'big picture' thoughts for me. Maybe it's just me ... if watching the cultural shifts that mirror the computer revolution is too dull, I'll delete this and move on.

I suppose it is hard to imagine nowadays, but programmers were used to writing in assembly language, even straight machine code sometimes. For many tasks, there just wasn't anything else. It was the way things happened and it was miles better than punchcards or hand wiring circuits.

You have to imagine the context. It is something like people today trying to understand just how important the library was before there was internet or even modems. You could not just "google" something. If you wanted to know, you had to a) figure it out, b) find a book, or c) ask an expert. If you happened to be in a small town, you were stuck with (a) and "reinventing the wheel."

What is the cost of this convenience? People back then were much better at figuring out how to learn something. This is something I know for sure. I've been a teacher for nearly 20 years. I've seen it happen. I even have data, but any glance at the forum topics will give you the answer. Because it was a challenge to find information or knowledge about a subject, it took a lot more thinking and planning. People gained some wicked high-level thinking skills just learning how to change spark plugs, test a capacitor, or even, through trial and error, find the best way to fix a flat on a bicycle.

The easy availability of information has led to less developed information gathering and information assessment skills. People 'just google it' but have no idea how to judge which information is true or reliable. Without frustration, failure, and practice, they don't know what to value and many end up valuing nothing at all.

We have a tragic subset of depressing people now who are so unwilling to put forth even the smallest amount of effort that they won't even use Google to look something up. They answer, "I don't know!" This answer isn't allowed anymore. "I don't know" just doesn't make sense when anything that is known or has ever been known is at our fingertips. Willful ignorance must be the insidious disease of the people whose lives are just too easy.

I'm suggesting that coding has become something similar. There are tons of front end tools, font services, tag managers, and huge libraries available ... all just to make a website. In back end services, there are tons of resources to manage databases, do scientific calculations, and manage systems. Cars can drive themselves. Social media accounts can be completely automated. Soon, computers will be coding themselves ... and they will do it better than us almost immediately.

In many modern situations, programmers don't have to really concern themselves with memory management, device access, reading ports, managing interrupts, optimizing hard drive access times, or any of the lower level things that actually make a computer work. It has become more and more 'symbolic' and 'virtual.' Programmers are nestled cozily in a soft bed of libraries and environments that take a lot of the critical thought and creativity away from the activity. I suppose soon we will just sit in front of a computer, or have a conversation while walking in the mall with one, and just ramble on about our creative ideas while the computer implements them in real time somewhere around the world.

I'm not sure whether it is better or worse, but it is definitely different. Would I prefer to code a modern application in assembly language? Nope. Not even a little bit. Do I find it handy that there are software libraries and wonderful communities of open source developers working together? Yes! What used to be a frustratingly solitary activity is now something that is shared.

In the sharing, something beneficial has been added. Extra value has been created that wasn't there before. Just like in the Youtuber community, the rising tide lifts all boats.

BUT, my anecdotal observations say that the people who never had to do all that thinking and make it fit in 4K of RAM have lost something valuable, and I don't know if anything has been put back into the system to replace those lost critical thinking skills.

We have gained a wonderful social commodity, but at the cost of collective creativity and problem-solving skills.

How it will work out ... I certainly do not know, but it is a fun ride.