Here goes. Remember, what I write here may not be 100% true, but it is "true enough." (In other words, it gets the point across without getting bogged down in nitpicky details.)

MS-DOS served two purposes in Windows 95:

1. It served as the boot loader.
2. It acted as the 16-bit legacy device driver layer.

When Windows 95 started up, a customized version of MS-DOS was loaded, and it's that customized version that processed your CONFIG.SYS file, launched COMMAND.COM, which ran your AUTOEXEC.BAT and which eventually ran WIN.COM, which began the process of booting up the VMM, or the 32-bit virtual machine manager.

The customized version of MS-DOS was fully functional as far as the phrase "fully functional" can be applied to MS-DOS in the first place. It had to be, since it was all that was running when you ran Windows 95 in "single MS-DOS application mode."

The WIN.COM program started booting what most people think of as "Windows" proper. It used the copy of MS-DOS to load the virtual machine manager, read the SYSTEM.INI file, load the virtual device drivers, and then it turned off any running copy of EMM386 and switched into protected mode. It's protected mode that is what most people think of as "the real Windows."

Once in protected mode, the virtual device drivers did their magic. Among other things those drivers did was "suck the brains out of MS-DOS," transfer all that state to the 32-bit file system manager, and then shut off MS-DOS. All future file system operations would get routed to the 32-bit file system manager. If a program issued an int 21h, the 32-bit file system manager would be responsible for handling it.

And that's where the second role of MS-DOS comes into play. For you see, MS-DOS programs and device drivers loved to mess with the operating system itself. They would replace the int 21h service vector, they would patch the operating system, they would patch the low-level disk I/O services int 25h and int 26h. They would also do crazy things to the BIOS interrupts such as int 13h, the low-level disk I/O interrupt.

When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, who would do some preliminary munging and then, if it detected that somebody had hooked the int 21h vector, it would jump back into the 16-bit code to let the hook run. Replacing the int 21h service vector is logically analogous to subclassing a window. You get the old vector and set your new vector. When your replacement handler is called, you do some stuff, and then call the original vector to do "whatever would normally happen." After the original vector returned, you might do some more work before returning to the original caller.
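The hook-chaining pattern described here (get the old vector, set your new one, chain to the old) can be sketched as ordinary code. Below is a hypothetical Python simulation, not real 16-bit code: the vector table is just a dictionary and handlers are plain functions, but the subclassing-style save-and-chain discipline is the same.

```python
# Hypothetical simulation of interrupt-vector chaining (like window subclassing).
# The "vector table" maps interrupt numbers to handler functions.

vectors = {}

def dos_int21(request):
    # Stand-in for the original int 21h service routine.
    return f"DOS handled {request}"

vectors[0x21] = dos_int21

def install_hook(intno, make_handler):
    """Get the old vector, set the new one. The new handler keeps a
    reference to the old one so it can chain to it later."""
    old = vectors[intno]
    vectors[intno] = make_handler(old)

def make_logging_hook(old_vector):
    def hook(request):
        # Do some stuff before...
        result = old_vector(request)   # ..."whatever would normally happen"...
        # ...then maybe do more work before returning to the caller.
        return result + " (logged)"
    return hook

install_hook(0x21, make_logging_hook)
print(vectors[0x21]("open file"))   # DOS handled open file (logged)
```

This save-and-chain move is what a TSR, a network driver, and the IFSMGR hook each performed on int 21h, producing a chain that unwinds in reverse order on the way back out.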

One of the 16-bit drivers loaded by CONFIG.SYS was called IFSMGR.SYS. The job of this 16-bit driver was to hook MS-DOS first before the other drivers and programs got a chance! This driver was in cahoots with the 32-bit file system manager, for its job was to jump from 16-bit code back into 32-bit code to let the 32-bit file system manager continue its work.

In other words, MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack.

Let's start with a system that didn't contain any "evil" drivers or programs that patched or hooked MS-DOS.

Program calls int 21h

32-bit file system manager
  - checks that nobody has patched or hooked MS-DOS
  - performs the requested operation
  - updates the state variables inside MS-DOS
  - returns to caller

Program gets result

This was paradise. The 32-bit file system manager was able to do all the work without having to deal with pesky drivers that did bizarro things. Note the extra step of updating the state variables inside MS-DOS. Even though we extracted the state variables from MS-DOS during the boot process, we kept those state variables in sync, because drivers and programs frequently "knew" how those state variables worked, bypassed the operating system, and accessed them directly. Therefore, the file system manager had to maintain the charade that MS-DOS was running the show (even though it wasn't) so that those drivers and programs saw what they wanted to see.

Note also that those state variables were per-VM. (I.e., each MS-DOS "box" you opened got its own copy of those state variables.) After all, each MS-DOS box had its own idea of what the current directory was, what was in the file tables, that sort of thing. This was all an act, however, because the real list of open files was kept by the 32-bit file system manager. It had to be, because disk caches had to be kept coherent, and file sharing needed to be enforced globally. If one MS-DOS box opened a file for exclusive access, then an attempt by a program running in another MS-DOS box to open the file should fail with a sharing violation.
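The per-VM versus global split can be modeled in a few lines. The following is a hypothetical Python sketch (the class and variable names are invented, not Windows 95 internals): each box carries its own current directory, while a single global table enforces sharing across all boxes.

```python
# Hypothetical model of per-VM state vs. the global open-file table.
# Each MS-DOS box keeps its own current directory, but open-file state
# is tracked globally so sharing violations span all boxes.

class SharingViolation(Exception):
    pass

global_open_files = {}   # path -> "exclusive" or "shared" (one global table)

class DosBox:
    def __init__(self):
        self.current_dir = "C:\\"   # per-VM state, private to this box

    def open_file(self, path, exclusive=False):
        mode = global_open_files.get(path)
        # Fail if someone holds it exclusively, or if we want exclusivity
        # and someone else already has it open at all.
        if mode == "exclusive" or (mode and exclusive):
            raise SharingViolation(path)
        global_open_files[path] = "exclusive" if exclusive else "shared"

box1, box2 = DosBox(), DosBox()
box1.open_file("C:\\DATA.TXT", exclusive=True)
try:
    box2.open_file("C:\\DATA.TXT")   # another box holds it exclusively
except SharingViolation:
    print("sharing violation")
```

The point of the sketch is only the placement of the data: the open-file table lives outside any one box, which is why it had to migrate into the 32-bit file system manager.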

Okay, that was the easy case. The hard case is if you had a driver that hooked int 21h. It doesn't matter exactly what the driver does; let's say it's a network driver that intercepts I/O to network drives and handles them in some special way. Let's suppose also that there's some TSR running in the MS-DOS box which has hooked int 21h so it can print a 1 to the screen when an int 21h call is active and a 2 when the int 21h completes. Let's follow a call to a local device (not a network device, so the network driver doesn't do anything):

Program calls int 21h

32-bit file system manager
  - notices that somebody has patched or hooked MS-DOS
  - jumps to the hook (which is the 16-bit TSR)

16-bit TSR (front end)
  - prints a 1 to the screen
  - calls the previous handler (which is the 16-bit network driver)

16-bit network driver (front end)
  - sees that this isn't a network request
  - calls the previous handler (which is the 16-bit IFSMGR hook)

16-bit IFSMGR hook
  - jumps back into the 32-bit file system manager

32-bit file system manager
  - regains control
  - performs the requested operation
  - updates the state variables inside MS-DOS
  - returns to caller

16-bit network driver (back end)
  - returns to caller

16-bit TSR (back end)
  - prints a 2 to the screen
  - returns to caller

Program gets result

Notice that all the work is still being done by the 32-bit file system manager. It's just that the call gets routed through all the 16-bit stuff to maintain the charade that 16-bit MS-DOS is still running the show. The only 16-bit code that actually ran was the stuff that the TSR and network driver installed, plus a tiny bit of glue in the 16-bit IFSMGR hook. Notice that no 16-bit MS-DOS code ran. The 32-bit file system manager took over for MS-DOS.
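As a toy model of the round trip above, here is a hypothetical Python simulation (all names are invented; plain functions stand in for the 16-bit hooks and the 32-bit file system manager). The log shows the TSR's 1 and 2 bracketing the call while the actual work happens in 32-bit code.

```python
# Toy simulation of the hooked int 21h round trip. Invented names;
# Python functions stand in for 16-bit hooks and the 32-bit manager.

log = []

def fs32_do_work(request):
    # The 32-bit file system manager does the real work (and would
    # also update MS-DOS's state variables to keep the charade going).
    log.append(f"32-bit FSM: {request}")
    return "result"

def ifsmgr_hook(request):
    # Bottom of the 16-bit chain: jump back into 32-bit code.
    return fs32_do_work(request)

def network_driver(request, chain):
    # Not a network drive, so just pass the call along.
    return chain(request)

def tsr(request, chain):
    log.append("1")                 # front end
    result = chain(request)
    log.append("2")                 # back end
    return result

def int21(request):
    # The 32-bit FSM sees that int 21h is hooked, so it routes the call
    # through the 16-bit chain: TSR -> network driver -> IFSMGR hook.
    return tsr(request, lambda r: network_driver(r, ifsmgr_hook))

int21("read sector")
print(log)   # ['1', '32-bit FSM: read sector', '2']
```

The log entries come out in exactly the order of the diagram: the 16-bit hooks fire on the way in and on the way out, but the only entry representing real work is the 32-bit one in the middle.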

A similar sort of "take over but let the crazy stuff happen if somebody is doing crazy stuff" dance took place when the I/O subsystem took over control of your hard drive from 16-bit device drivers. If it recognized the drivers, it would "suck their brains out" and take over all the operations, in the same way that the 32-bit file system manager took over operations from 16-bit MS-DOS. On the other hand, if the driver wasn't one that the I/O subsystem recognized, it let the driver be the one in charge of the drive. If this happened, you were said to be going through the "real-mode mapper," since "real mode" was the name for the CPU mode when protected mode was not running; in other words, the mapper was letting the 16-bit drivers do the work.

Now, if you were unlucky enough to be using the real-mode mapper, you probably noticed that system performance to that drive was pretty awful. That's because you were using the old clunky single-threaded 16-bit drivers instead of the faster, multithread-enabled 32-bit drivers. (When a 16-bit driver was running, no other I/O could happen because 16-bit drivers were not designed for multi-threading.)

This awfulness of the real-mode mapper actually came in handy in a backwards way, because it was an early indication that your computer had become infected with an MS-DOS virus. After all, MS-DOS viruses did what TSRs and drivers did: They hooked interrupt vectors and took over control of your hard drive. From the I/O subsystem's point of view, they looked just like a 16-bit hard disk device driver! When people complained, "Windows suddenly started running really slow," we asked them to look at the system performance page in the control panel and see if it said "Some drives are using MS-DOS compatibility." If so, then it meant that the real-mode mapper was in charge, and if you hadn't changed hardware, it probably meant a virus.

Now, there are parts of MS-DOS that are unrelated to file I/O. For example, there are functions for allocating memory, parsing a string containing potential wildcards into FCB format, that sort of thing. Those functions were still handled by MS-DOS since they were just "helper library" type functions and there was no benefit to reimplementing them in 32-bit code aside from just being able to say that you did it. The old 16-bit code worked just fine, and if you let it do the work, you preserved compatibility with programs that patched MS-DOS in order to alter the behavior of those functions.

I wanted to ask: why this whole mess, when most stuff needed to be ported to the new Win95 anyway? Because, like you said, the performance was horrible.

But of course, the real mystery is: why Win95 at all? Why wasn’t Win95 just a ‘home edition’ of Win NT 4.0? What justified all the effort spent on the dead-end 9x OSes when you already had the NT kernel, which is way less horrible than the whole Win 9x business?

[The fact that Windows 95 was such a success demonstrated that the “Hey everybody, abandon all your old hardware that doesn’t have 32-bit drivers and switch to Windows NT” strategy wasn’t working. -Raymond]

As far as the real-mode mapper being handy for detecting viruses, it definitely was. I found one on my parents’ system that way. They weren’t able to access their CD drive, and it wasn’t set up with a driver and MSCDEX in config.sys, so compatibility mode meant the drive went away.

Finally, an explanation as to what the real deal was with MS-DOS under Windows 95. For all these years so many people have stated that Windows 95 was just a fancy screen saver for DOS or was totally based on DOS.

Thank you, Lord Chen, for clearing up years upon years of false speculation and rantings.

We now return you to your regularly scheduled Christmas Special of Slashdot rantings from people who claim to know more about how this stuff works than the person who actually worked on Windows 95 or sits everyday beside someone who did.

“The fact that Windows 95 was such a success demonstrated that the ‘Hey everybody, abandon all your old hardware that doesn’t have 32-bit drivers and switch to Windows NT’ strategy wasn’t working.”

No, it demonstrated that Win95 had a much better marketing campaign than WinNT. An NT4 ‘home edition’ would have worked just as well. Most consumers had never heard of such a beast as Win NT.

And what “old hardware that doesn’t have 32-bit drivers” are you talking about? The only thing this fixed is some filesystem access support, and that sucked so much that it wasn’t usable anyway. If you wanted windows apps to be able to use your hardware, you needed windows drivers.

[“If you wanted windows apps to be able to use your hardware, you needed windows drivers.” Not true. Windows 3.1 worked just fine with 16-bit MS-DOS drivers. -Raymond]

"But of course, the real mystery is: why Win95 at all? Why wasn’t Win95 just a ‘home edition’ of Win NT 4.0? What justified all the effort spent on the dead-end 9x OSes when you already had the NT kernel, which is way less horrible than the whole Win 9x business?"

No mystery. The hardware requirements for Windows NT at the time were just too steep (and Windows NT 4.0 wasn’t around when Windows 95 development started in about 1993). The major leap forward for Windows 95 was the new shell; under the hood it was a logical continuation of the VMM technology that debuted in Windows 3.x. The whole thing was a glorious, ingenious and incredibly commercially successful hack!

Of course, Andrew Schulman’s "Unauthorized Windows 95" goes into this in a great deal more detail.

The problem with the book, however, is that it’s misleading. He spends 90% of the book pretending that Windows 95 is a shell on DOS, and then in fine print basically admits that no, it’s not, and there’s a lot more to it than that.

Being sensational sells books, unfortunately, but the information is there in that book, if you’re willing to do more than just skim it.

"The hardware requirements for Windows NT at the time were just too steep."

Funnily enough, I always felt that NT ran rather better than Win95 on realistic hardware. I think the "minimum hardware requirement" has always been a figment of some marketeer’s imagination. Every version of Windows has been almost unusable unless you had at least double the "minimum" memory.

This is what I love about Old New Thing: I can be secure in the knowledge that I KNOW how things work (and I KNOW that Microsoft programmers are all slack buggers who’ve never done a day’s work in their lives) and one of your articles forces me to realise that I didn’t really know — and that some MS programmers, at least, are actually so clever it’s scary.

Keeping stuff backwards-compatible (whether in MS software or anywhere else) reminds me of all the work the ancients put in to predict the movements of the planets in a geocentric universe. To get the numbers right, given that they were assuming circular orbits around the earth rather than elliptical ones around the sun, they had to work out a complex system of cycles and epicycles, a marvel of pre-computer mathematical empiricism. In a sense, having to support old versions of popular software is like having to obey the religious strictures of the geocentric universe: you can be pretty damn clever with your creations even when your foundations are less than perfect.

Also, what if I had that CD-ROM drive, sound card or scanner for which there were only real mode or Windows 3.x drivers? They wouldn’t work in NT 4 no matter what I did. And these were fairly common back in 1995. Windows 95 gets another point.

Result: Windows 95, two points. Windows NT 4, zero points. NT may be clean, elegant solid and fast, but if it doesn’t work on your hardware, it’s no option. It’s no surprise ’95 was such a success, no matter how many gazillions Microsoft spent in marketing.

Windows 3.1 was not an OS, but a graphical shell on top of MS-DOS. It would have been quite strange had it not been happy with 16-bit MS-DOS drivers.

@Aaargh!:

“”The fact that Windows 95 was such a success demonstrated that the ‘Hey everybody, abandon all your old hardware that doesn’t have 32-bit drivers and switch to Windows NT’ strategy wasn’t working.””

“No it demonstrated that Win95 had a much better marketing campaign than WinNT.”

No, Raymond’s right here. You probably have not run NT at home in those years. Or if you did, you have not tried to run some of the games and/or applications that weren’t “behaving” in 32-bit environment and the VM. I, for one, remember how your brains could be sucked totally out trying to get a Clipper app to run reliably in NT’s NTVDM…

[“Windows 3.1 was not OS, but a graphic shell on top of MS-DOS.” Whether it was or not is beside the point. People ran Windows 3.1 with 16-bit MS-DOS drivers. If you want them to upgrade from Windows 3.1 to something else, you need to support those 16-bit MS-DOS drivers or convince them to abandon their old hardware. In 1995, a CD-ROM drive cost $200. You make the call. -Raymond]

A Pentium Pro cost over $1000 at the time of Windows 95’s release – it really targeted the 486 and Pentium. And it really did target its minimum of 4–8 MB of memory; plenty of cheapskate OEMs installed it on junkers, where NT 3.5 was almost unusable even with 16. There are also aspects of Windows 95, particularly in the GUI and shell, that far exceeded NT’s capability at the time and wouldn’t show up until NT 4 a year later.

I always laugh when I read others’ claims of "ran better than" or "ran faster than." More times than I can count, I’ve sat down at computers running this OS or that (Linux, NT, OS/2) that were claimed to smoke WinXX, only to find them pathetically slow and unresponsive by my standards on Windows. This even though my hardware was most often the lesser and my apps most often the hungrier.

That’s not to suggest that other experiences are invalid, but to offer that often our needs and frames of reference differ. Win95, Win98, and Win98SE flat out smoked NT4 on every box I ever ran them on – albeit the last being a PIII 450 with 512MB RAM. They had far more stability problems, but even with the crashes and reboots they were far more enjoyable and productive for me. I cursed NT’s slowness and unresponsiveness (due to seemingly constant disk access) way more than WinXX’s instability and crashing.

When I took my current development job, where the hardware is mostly older P4-2Ghz machines with 1GB RAM running XPSP2, I asked for a primary PC with Win98SE. I took a lot of ragging from fellow developers about that but it wasn’t long before my manager was giving them all hard times about not running SE. It seemed that almost no matter what was asked of us I could perform it faster on the SE machine than they could on XP.

Of course I run XPSP2 at home on a much more modern, and very tweaked, box. There the speed and responsiveness keep pace with me and the stability makes it worth abandoning 98SE.

@Aargh: [And what "old hardware that doesn’t have 32-bit drivers" are you talking about ?]

You know, funny thing is, this is the same problem plaguing the transition from 32-bit and 64-bit, and probably close to the same reason Vista came out in both versions. You can’t expect millions of people to go out and buy the latest hardware just to run your software… can you?

Win95 did run better on 16MB machines than NT4 – you had to have 32MB for NT to have a comparable level of performance. Also, NT4 had a lower level of support for DOS programs, specifically games.

Being a PC technician in the late Win95 era, I found out the following path to MS-DOS compatibility mode:

Computer has an Intel chipset from the late Pentium era (430VX or 430TX), and no CD drive.

You install Win95 on it without CDROM – probably by copying the 30MB installation dir to the HD from a temp CDROM or whatever, and then installing from it.

During installation, the PnP manager doesn’t recognize the chipset’s IDE controller (because it doesn’t exist in the in-box INFs). So, the non-PnP manager recognizes the IDE controller as "Standard ESDI controller" or something. But, it only recognizes the first controller, since the second doesn’t have anything connected to it. Installation completes, and everyone is happy.

Have you tried installing NT 4 on the recommended minimum memory? It would actually install on the set amount, but needed the next step up in order to run. My memory tells me the steps were 12MB to install and 16MB to run, but it was more than ten years ago so it could well have been 8 and 12. I remember the discrepancy because we were incredibly pissed that after getting a machine theoretically capable of running it we then had to stop what we were doing and go out and buy more memory to really run it.

What is it with these people? There’s a really annoying myth among them that installing more RAM improves performance.

More pages committed to physical RAM != better performance.

Higher clock frequencies may improve performance, but if you don’t see squiggles on the CPU-usage history in task manager, you’ll hardly notice any difference.

Faster storage media, controllers and buses improve I/O performance, but if you never see your HDD LED flashing then you’ll hardly notice the difference.

Only if the performance of the program you run is limited by how much paging your computer can do can more available physical memory improve /perceived/ performance. But actually, under the hood everything is equally fast or slow. To use a bad car analogy, driving on a wider road doesn’t make your car faster. (It merely allows more simultaneous cars.)

I loved and hated MS-DOS compatibility. The few times it alerted me to viruses on friends’ or family members’ computers I loved it, but the many times a CD-ROM driver caused it I hated it. It’s nice to know the whys.

I didn’t use NT until NT4 so it’s my only reference against win9x systems. The latter ran considerably faster and smoother for me until I increased ram beyond 384MB. Beyond that I liked NT better because other apps were more responsive during Visual Studio compiles.

Even better than the above is when Windows 95 starts installing from the CD-ROM but is unable to load the protected-mode drivers for it (because the cdrom driver is still in config.sys) but can’t access it in real mode either (because it removed mscdex from autoexec.bat). Windows 98 "solved" this problem with a .inf file… on the CD-ROM…

That bugged me a bit back when I last used win98; not enough to go looking for what it actually did, or try to figure out how to remove it, but enough to wonder why it was there, and why it took a couple tens of K of memory (or something like that) according to "mem /c". Now I know. Thanks! :-)

"You know, funny thing is, this is the same problem plaguing the transition from 32-bit and 64-bit, and probably close to the same reason Vista came out in both versions."

There’s a big difference between Win95 not being 32-bit clean and 64-bit Vista: the hardware of any Win95 computer is capable of running 32-bit clean except for the lack of drivers, whereas there are many 32-bit computers that just cannot run 64-bit code (i.e., older 32-bit-only CPUs).

The entire PC industry has come a long way in standardizing hardware. For example, nobody loads vendor-specific CD-ROM drivers anymore because everything is ATAPI or SCSI.

Of course there is still a ways to go, like why does every network card need a different driver?

"Whereas there are many 32-bit computers that just cannot run 64-bit code"

The only difference between a 64-bit computer and a 32-bit one is the CPU. And 90% (if not 100%) of CPUs sold today are capable of 64-bit. It’s like saying that you had to upgrade your 286 CPU to a 386 in order to run Windows 95. By the time Windows 95 came out, nobody was selling 286s either.

The only thing stopping adoption of 64-bit Vista is driver support (well, that and the fact that it costs more than 32-bit Vista… what’s up with that?!)

"You know, funny thing is, this is the same problem plaguing the transition from 32-bit and 64-bit, and probably close to the same reason Vista came out in both versions. You can’t expect millions of people to go out and buy the latest hardware just to run your software… can you?"

It’s even worse with 64-bit Windows. With Win95, the lack of drivers was only because the manufacturers didn’t want to revisit an operating system released later. With Win64, it’s both that *and* that the software/hardware manufacturers just don’t care for even new products.

Guitar Hero 3’s copy protection system doesn’t work on Win64 because it’s a 32-bit kernel driver. And Guitar Hero 3 was released in 2007! (My guess is that a crack would work, but I haven’t tried.)

Now *this* is why I read the Old New Thing! Thank you, Raymond, for finally and definitively demystifying something techie people have argued about for years.

And while I sometimes may argue with MSFT’s design decisions, I have to say this MS-DOS fakery business was truly one of MSFT’s more elegant and clever solutions. For the exact problem posed (new memory model + full backwards compatibility), I doubt there exists a better way to solve it. Long overdue kudos on this.

Thanks for these articles, Raymond – they really are entertaining. I’m one of the Slashdot stampeding horde that I know you hate, but I just want to say I enjoy reading them. Double thanks for links to sites explaining the references to American culture (I’m European) – that’s a thoughtful touch.

"Of course there is still a ways to go, like why does every network card need a different driver?"

You’re mixing driver layers and hardware technology. SCSI CD-ROM drives should (or, "should") accept all standard SCSI commands, hence no need for separate drivers for each drive. In SCSI, one CD-ROM drive should (again, "should") look like the next one.

However, each SCSI host adapter (the SCSI card) is different, and needs its own driver. On NT this is called a SCSI miniport driver. It’s the go-between between MS’s SCSI port driver (which the SCSI CD-ROM driver talks to) and the SCSI card hardware itself.

If you’re using widely available SCSI technology (i.e. Adaptec), you don’t need to install a special miniport driver because they come with the out-of-box installation. Also, many knock-off brands are register-compatible with major vendors’ cards (again, Adaptec), and so the standard miniport drivers will recognize and operate with them.

So: NT understands Ethernet frames and Ethernet hardware basic principles, but doesn’t know the particulars of every Ethernet card (although there’s plenty of register-compatible hardware out there, i.e. NE2000-compatible). I’m not sure they’re named this way, but I suspect what you’re installing are network miniport drivers, analogous to the SCSI situation.

Yes, I have run Windows NT 4 for five years (from 1997 to 2002) with several different configurations, the first one being a Pentium 75 with 16 Mb of RAM. It felt reasonably responsive, unless I opened more applications than fitted in RAM and it started paging. Anyway, NT 4 was a bit slower than Windows 3.1 or 95 on the same hardware (not surprisingly), but, for me, the stability made it worth it.

My Pentium 75 computer had a bug in the motherboard, and sometimes, after a reboot, it would only see the first 8 Mb of RAM, requiring a hard reset to fix it. When I didn’t notice it at the POST screen, NT 4 would try to boot – and it always succeeded, arriving at the desktop after several minutes of frenzied paging :-) . So, I can assure you that even if the official specs say NT 4 requires 12 Mb of RAM, it *can* boot with just 8 Mb.

The last configuration I ran NT 4 on, just before switching to XP, was an AMD K7 500 with 128 Mb of RAM. Each time I had to sit at a comparable Windows 98 machine, I found no noticeable difference in speed either way, but felt that memory-intensive applications usually ran worse in 98.

Myria: most copy protection schemes are evil, evil monstrosities that make dodgy assumptions, do awful things to system security and/or break at the slightest change.

It’s just their nature – the requirements inevitably result in nasty, evil code that does several dozen things which it has no business doing and which have serious issues. If Microsoft wanted to do the right thing, they’d pick some of the worse practices and refuse the offending software permission to use their trademarks and "compatible with xxx" logos.

The Wine devs analysed a particularly evil example recently. Apparently, it disassembled all entry points to certain system libraries and did analysis of the code. If it didn’t like what it saw, it aborted execution.

This has to be a app compat expert’s worst nightmare, particularly since it can apparently follow several levels of jumps into non-exported code. Also, many of the criteria are statistical with a threshold for acceptance, so it probably wouldn’t even need compiler changes or unusual code to trip it.

(Of course, the fact that this made the code incompatible with Linux means that Microsoft is probably more likely to reward than punish the companies involved.)

A RAID 0 array with 10krpm disks and a decent controller can perform really well when the memory manager is paging like there’s no tomorrow. Even though to the end user this may be totally meaningless, the computer is still pretty fast at what it does.

So I’m being the hardware guy here: it’s a software problem. I know software people like to blame the hardware but they’re wrong. TBH I’m not a hardware geek and my computers probably won’t run Vista very well if at all, but it’s getting annoying to hear people cry that they’ve installed another GiB or 2 of RAM and things aren’t any faster. They should’ve known better.

To improve software performance by adding hardware is "cheating". It doesn’t solve the problem, it merely works around it. Use memory profilers or at the very least plain task manager to measure memory usage, make sure there are no surprises.

To be a bit more on-topic here: A typical DOS program could run well within low memory (which was, like, the famous and misquoted(?) 640kB). Of course a minimal Windows program needs at least 2MiB, but more realistically 5MiB; still, programs should be designed to use no more memory than necessary for their purpose.

I never got the sense that Windows was a shell, reading Schulman’s "Inside Windows 95". I thought he made a rather strong case that DOS was pretty much gone. He goes through a lot of trouble to explain that VMM32.SYS was really the boss and not MS-DOS. I remember a lot of elaborate examples, and it wasn’t in the fine print.

"You know, funny thing is, this is the same problem plaguing the transition from 32-bit and 64-bit, and probably close to the same reason Vista came out in both versions."

Yeah… only this time situation is vastly different. You can buy 8GB of DDR2-800 for as low as $175 and 32-bit Vista can’t use more than 3GB, not to mention that if you run three-way SLI with 8800GTX Ultra, those 3GB quickly go down to 1.75GB and with less than 4GB Vista is unusable anyway unless you consider using calculator and notepad as a serious computer use.

" … it’s getting annoying to hear people cry that they’ve installed another GiB or 2 of RAM and things aren’t any faster. They should’ve known better."

Obviously you’ve never met an average Windows user.

"To be a bit more on-topic here: A typical DOS program could run well within low memory (which was, like, the famous and misquoted(?) 640kB). Of course a minimal Windows program needs at least 2MiB, but more realistically 5MiB; still, programs should be designed to use no more memory than necessary for their purpose."

That is true. However: Your average DOS program doesn’t have translucent window borders, and a very long time ago Windows gave up on being compact. The cost of RAM has dropped, so Microsoft simply doesn’t seriously optimize memory usage anymore.

Most people complaining about Windows 95 have completely forgotten what things were like back then. If you had a CD-ROM, you were quite likely using an MS-DOS driver to access it. I still had plenty of DOS apps that required me to exit Windows to run (like games, demos, etc). And I ran Windows 95 in 4MB of RAM and it worked great. This was not a time for NT 3.51 to be in the hands of regular people!

Unfortunately, however, even though Win32 had existed since NT 3.1, Windows 95 encouraged a lot of Win32 apps that did not handle NT’s security well, causing trouble when 9x was finally abandoned. Originally it was supposed to be after 98, but it was later pushed to after Me. BTW, be glad Neptune did not release, else 9x vs NT would be even more confusing.

If you think that win 3.1 was just a simple shell over DOS, it’s instructive to do some research on win386.exe (remember 386-Enhanced mode, anyone?). The vmm + numerous vxds inside win386.exe were a new protected mode OS that certainly wasn’t DOS – we just didn’t know it at the time…
