A Very Critical Look at OS Re-creation Projects

There are, at this time, a number of what I would term ‘OS re-creation projects’ (OSRs) in active development. These are OSes that attempt, to varying degrees, to re-implement the features of another operating system. In this article, I’m going to explore some of the issues surrounding projects of this type. In the second half of the article, I apply these observations and examine two example platforms (Amiga and OS/2) and their related re-creation OSes.

Every OSR is defined by two criteria:

The choice of inspiration OS (IOS) on which the OSR is modelled.

The degree of adherence to the design and implementation of the IOS.

Obviously, these re-creation efforts do not seek to create a direct facsimile of the original OS. The reasons for this are two-fold:

Firstly, any OS which was simply a copy of another and did not vary from the original in any way would be, by definition, redundant. By way of example, let us say that a group of people sets out to create an open source clone of the Microsoft Windows operating system so that they can release it as freeware. In this example, it is the licence and the development model which vary from the original OS.

This first point is dependent on the motivation of the developers – why do they want to create this new OS? Example motivations and ideological aspirations for such a project might include, in varying amounts, factors such as:

The re-creation of the best parts of an older operating system, but utilising more modern methods of design and software engineering.

The re-implementation of another OS, under a more open development and distribution model.

Secondly, the IOS must, logically, predate the creation of the OSR; it is therefore natural that some advancement in the fields of operating system and hardware development will have occurred in the time between the release of the IOS and the start of the re-creation project.

Omission and Addition

The developers who are involved in the actual creation of the new OS will almost certainly have some opinions on the subject of OS design. The two ways in which a thing inspired by something else can deviate from the original are by omission and by addition. The creators of a clone OS have less motive to recreate a feature which they regard as unimportant than to create a feature which they regard as relevant. This is not to say that the omission of a feature is evidence that the re-creators felt it was a mistake in the original OS. An omitted feature may represent a function whose importance has diminished with time. For example, an OS created in 1994 might feature a very well designed and comprehensive Internet modem dialler; the importance of this feature, however, was greatly diminished by 2006. An OSR of this sort might place only limited importance on full and comprehensive floppy disk support, for the same reasons.

Also, an omission might represent a compromise on the part of the developers, brought on by practical considerations. If a team were to create a new OS which was similar in function and design to MacOSX, they may decide to stop short of recreating all of the iApps which are bundled within a MacOSX distribution. Having the complete iBundle set of applications might be desirable in a MacOSX clone, but its omission could be the result of the practical resource limitations which face any software engineering project.

Internal and External Similarity

All things that can be observed and therefore assessed can be subjected to an analysis from two different standpoints – internal and external – or, the way things look and the way things work. This philosophy can be applied to an assessment of OSR projects.

Some OSRs utilise a ‘ground up’ approach in that the internals of the OSR mirror the internals of the IOS. One motivation behind this approach can be that, in the view of the developers, the design of the internals of the original OS was based on sound principles. The FAQ of the Syllable project states this as a design motivation:

“The BeOS API is undoubtedly a good design though. The Syllable API does use a lot of good ideas from the BeOS API, but we also design and add our own classes.”

Another motivation behind a re-creation of the internal parts of another OS would be to maintain some degree of compatibility with that OS. This compatibility may take the form of source code or binary compatibility. In other words, the OSR may be similar enough to the IOS to be able to run programs or even drivers which were created for the IOS (binary compatibility), or it may simply be a design goal that programmers with access to the original source code should easily be able to ‘port’ their existing software to the OSR (source code compatibility).

There is a third, important aspect to the degree of correspondence between the internals of the IOS and OSR and that is architectural similarity. For this reason, it is often an easier job to port a Unix tool to an OS like AmigaOS than to an OS such as RISCOS. This is because AmigaOS shares greater architectural similarity with Unix than RISCOS does. Like Unix, AmigaOS is based around preemptive multitasking. AmigaOS also has a filing system architecture which was inspired by Unix file system concepts to a greater degree than that of RISCOS.

In the same respect, a window manager for Linux which resembled RISCOS wouldn’t be a particularly good target platform for the task of porting RISCOS applications; the reason is that such a platform may increase the external similarities between the two but would do little, if anything, to increase the internal, architectural similarities between the two OSes.

Sustainability

Any appraisal of the prospects for the long term survival of an OS must include an economic assessment. In the case of a FOSS OS, the economic factors are not monetary in nature but rather concerned with the commodity that is manpower. To be a viable, sustainable project, the amount of manpower that the project can attract must be the same as or greater than the amount of work that needs to be done to maintain the project.

In the case of a purely commercial OS project, the most important considerations are monetary. In a commercial context, the manpower that the project requires has a monetary value attached to it. To be sustainable, the project must have available to it an amount of money that is equivalent to the amount of manpower that the project needs.

So, the question, “Is the project viable, in the long run?” could be reformulated as “Is the project sustainable?”. In commercial and FOSS OS projects alike, any design decision that increases the amount of manpower needed has an impact upon the sustainability of the project. In the case of living things, businesses or operating system projects, the amount that comes out is equal to or less than the amount that is put in.
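The sustainability condition described above can be sketched as a simple check. This is purely illustrative; the function name and the hours figures are hypothetical, not drawn from any real project:

```python
def is_sustainable(contributed_hours_per_month: float,
                   required_hours_per_month: float) -> bool:
    """A project is sustainable when the manpower it attracts is
    the same as or greater than the work needed to maintain it."""
    return contributed_hours_per_month >= required_hours_per_month

# Hypothetical FOSS project: five regular contributors at 20 hours
# per month each, against an estimated 120 hours of monthly work.
print(is_sustainable(5 * 20, 120))  # 100 < 120, so this prints False
```

The same check models the commercial case if the two figures are converted into money: the budget available versus the cost of the manpower required.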

Whenever I read about a new OS project, I want to know what their plan or model is going to be: I want to know how they are going to attract the resources they need in order to do the work of creating the finished (or ongoing) product. If one guy says that he is going to do all of the work himself, can he? If the originator(s) of the project claim that other people are bound to join in, what is the evidence that supports their assertion? Is this assertion backed up by precedent, for example?

Similarly, a design decision which affects the accessibility of the project to developers will impact its sustainability. One way in which accessibility can be limited is through decisions that tie the platform to unusual hardware. Particularly in the case of open source software, there is a relationship between the accessibility of the platform, the number of users and the number of developers.

In the case of any OS project, any such appraisal must take into account the creation or maintenance of the applications that people are going to want to run.

Some current OSRs

Using the principles I’ve outlined, I’ll focus on two different platforms and their corresponding re-creation projects.

OS/2

There are two projects which are currently attempting to recreate some or all of IBM’s OS/2. These projects are called Voyager and OSFree.

OS/2 Itself

For those who aren’t in the know, OS/2 started life as a joint OS project between Microsoft and IBM. The earliest, 1.x versions of OS/2 were 16 bit and had a front end not dissimilar to early versions of the Windows OS. The popularity of Windows grew at a rate beyond the expectations of Microsoft and as a result, they decided to concentrate their efforts on Windows and to abandon the OS/2 project. IBM on the other hand wanted to continue developing OS/2.

The 2.x versions of OS/2 represented a considerable technological departure from what had gone before. OS/2 2.0 was a 32 bit, protected mode OS. The other departure from both the older version of OS/2 and the versions of Windows which were its contemporaries lay in the GUI. Amongst former users, the GUI is the most fondly remembered aspect of this operating system.

Prospects For Revival – The Bad

Whenever the matter of reviving OS/2 is brought up, I find myself asking – why? It is difficult to see which of the technologies that existed in OS/2 could usefully be brought back to life within a modern implementation. Many of the features, such as industry-leading DOS support, which gave OS/2 its edge, are simply no longer relevant.

OS/2 has some technological problems too. The messaging system of the GUI relies upon something called the ‘single input queue’. In a nutshell, this reliance upon a SIQ means that a single crashing application can bring down the entire OS. This manifests itself in a situation in which the user can see programs running but will find himself or herself unable to interact with the system. The only solution when this happens is to reset the machine.

Sustainability

Even if a complete, working clone of OS/2 were to be made available tomorrow, at no cost, where would the applications come from? The uptake of this OS would have to be considerable to attract the development manpower needed to develop and maintain software. If a developer dedicates the amount of time necessary to create the software that OS/2 needs, he has expended that effort in a way that only benefits OS/2 and its users. The same could be said of most operating systems, but at least the number of Linux, Windows or Mac users is considerable.

Another problem with OS/2 is that the API used to create full GUI applications isn’t very compatible with anything else. So, although it is technically possible to port full GUI applications to OS/2 from other OSes, it requires a lot of work; manpower in other words. To use a comparison: If a new Linux distribution were to be released, and if efforts had been made to ensure full compatibility with other Linuxes, the effort required to port an application such as the Firefox web browser should be somewhere between minimal and none.

Some people would point out that OS/2 has quite a lot of existing software. It does, but can anyone reading this article suggest a list of killer apps which aren’t equalled or bettered by the apps on other systems?

The Good

What could be salvaged from OS/2 if it were to be re-created?

The final version of OS/2 that IBM actively and extensively developed was OS/2 Version 4. This was released in 1996; consequently, its target hardware is the hardware that was common at that time. It offers a snappy, fairly feature-rich GUI OS that can easily run on a P166 with 64MB of RAM. In extreme conditions, it can be convinced to run quite well with even more meagre resources.

I would rate the Windows 9x of the same era as barely usable in terms of reliability and also somewhat feature-poor. On the other hand, despite design flaws which place limits on the fault tolerance of applications and, consequently, OS reliability, OS/2’s stability is probably better than that of any of the other client oriented, GUI OSes of that time.

Perhaps a new OS/2 could be a good OS for NGOs and third world countries? Given an early Pentium and the right software, it could form the basis of a nice little word processing, emailing and web browsing station.

Its GUI contains some elements which haven’t been fully assimilated into other operating systems yet. Many of the GUI concepts are rooted in the object-oriented design philosophy of the OS. For example, it is possible to make any file system object into a ‘template’. This object could be a blank text file, an empty .zip file, a standard letter or even a folder containing other file system objects. A web developer might, for example, create a blank ‘website’ template consisting of a root directory which contains an initial ‘index.html’ file, an ‘/images’ directory with all of the .png files that the web developer might normally use and a standard ‘robots.txt’ file to guide search engine spiders. Once this object has been marked as a template, the user can create another copy with a simple drag operation.
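In programming terms, instantiating a template of this sort amounts to a recursive copy of a prototype object, with the prototype itself left untouched. The sketch below is a rough analogue of the drag-to-instantiate behaviour, not actual OS/2 code; the function name and the example paths are invented for illustration:

```python
import shutil
from pathlib import Path

def instantiate_template(template: Path, destination: Path) -> Path:
    """Create a new object from a template, roughly as a WPS drag
    operation does: plain files are copied, directories are copied
    recursively, and the template itself is left unchanged."""
    if template.is_dir():
        shutil.copytree(template, destination)
    else:
        shutil.copy2(template, destination)
    return destination

# e.g. the web developer's 'website' template described above:
# instantiate_template(Path("templates/website"), Path("sites/newsite"))
```

The point of the WPS design is that any file system object can play the role of `template` here, with no special file format required.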

Another useful feature of the OS is the integration of the REXX scripting language. Many applications feature REXX ‘ports’, which means that users can add their own plug-ins created in the native scripting language.

OS/2 has a few neat features of this sort, and I think that they should live on as add-ons to other operating systems.

Conclusion

In its day, OS/2 struck a good balance between hi-tech features and reasonable performance on the hardware of the time. It was also the first consumer OS to ship with TCP/IP networking and an Internet dialler as standard.

The fact that it never quite broke through to the mainstream wasn’t as much of a handicap as some might think, as OS/2 had some high quality shareware, commercial and freeware native apps. OS/2 could also run a lot of software which had been created for other platforms or that had been created in a platform independent manner. For example, it was the first consumer OS to ship with Java support as standard; OS/2 had probably the most comprehensive DOS support of any GUI OS; it could run Win3.x software. In its later years, an impressive project called Odin enabled OS/2 to run a surprising amount of 32 bit Windows software. In other words, you had access to almost any type of application, even if it wasn’t a native OS/2 application.

However, the very features which made OS/2 the OS of choice for so many have faded in importance. It is with a heavy heart that many of OS/2’s former users (myself included) have to admit that they don’t really want to go back to OS/2 any more than they would trade in their broadband Internet connection for dial-up ANSI BBS access and 320×200 VGA games with ad-lib music. Perhaps IBM could have kept OS/2 relevant, but they didn’t make any serious efforts to develop it beyond about 1996.

What features, for example, does the kernel offer that modern operating systems do not? For that matter, does anyone really want to use an OS that uses drive letters?

In conclusion, in my opinion, recreating OS/2 would be more work than starting an OS from scratch, considerably more work than improving another OS and would ultimately produce a less useful result than either.

The Voyager website includes a fairly comprehensive outline of what the developers would like to accomplish with this relatively new project. As the project is new, the bulk of the materials are design docs as opposed to actual runnable code. Despite this wealth of design documentation, the actual design specs are still conceptual rather than specific. For example:

The kernel hasn’t been settled upon yet. It might be based upon a Linux kernel.

The exact nature of the GUI is still undecided. In design terms, it will not be based on X Windows, but it will make some concessions towards the PM+WPS nature of OS/2. The degree of API compatibility is also not specified.

Source and binary compatibility with OS/2 is still undecided.

It’s impossible to make reliable judgements about the viability of a project which is still at the design stage. In its favour, at least the planning stage has been extensive. Also, it’s worth noting that the people associated with the project have a track record of software development on OS/2.

OSFree was initiated in 2000 but, as with Voyager, the project exists as a set of design outlines and some test code. It would seem that they have settled on the idea of using the L4 micro-kernel and have successfully recreated some of the OS/2 CLI tools. The project certainly isn’t dead though; there is some activity in the forum.

Amiga

The history of the Amiga can be divided into three eras.

In 1985, the Amiga was, conceptually, a Unix-like workstation that could run on custom, inexpensive hardware while still offering cutting-edge multimedia capabilities. It could be scaled from a floppy disk based, basic model plugged into the TV of a lucky teenager all the way up to a professional class graphics workstation. For the hobbyist user, the most important element of the Amiga design was the use pattern that it engendered: a typical user would read some text files and check on the rendering taking place in the background, all while listening to some .mod music. Sound familiar? The Amiga was so far ahead of its time that it offered its users a genuine glimpse of what an average geek’s leisure computing experience would consist of 20 years later.

The rival platforms started to catch up but the Amiga arguably held its own throughout the middle period of its history. The platform didn’t stand still: chip giant Motorola kept putting out faster 680×0 series chips while, at the same time, Commodore made incremental improvements to the OS and hardware.

From the perspective of any Amiga fan, the third, current period is the bleakest. The pressure from the rival efforts of Microsoft became absolutely intense while at the same time the Amiga suffered a series of setbacks as the result of mismanagement and simple bad luck.

I would challenge any fan of the Amiga to discuss what happened to the platform next without getting angry. For the purposes of this article, it will have to suffice to say that Commodore went bust, the Amiga properties went on the market and then a succession of companies promised a great deal and delivered very little. The pattern became set: a new company would purchase the rights to develop the platform, they would then… actually, I’m not quite sure what they were actually doing… but at the end, they would have nothing to show for it. And then the pattern would repeat.

Prospects for revival – The bad

The Hardware Advantage

Much of the Amiga’s performance superiority came from its incredibly powerful custom chip-set. Needless to say, the capabilities which were jaw-dropping in 1985 couldn’t hope to power the typical flash-based banner ad now. If an effort were made to re-create a similar hardware platform orientated around a unique graphics architecture, as the original Amiga was, it’s difficult to see how any custom architecture could beat the output of mainstream graphics accelerator chip-sets from NVIDIA and ATI. In other words, it is unlikely that the Amiga could regain the multimedia hardware superiority that it enjoyed upon release.

Memory Protection

In its current form, AmigaOS has the dubious distinction of being one of the few OSes based around a micro-kernel but operating without memory protection. A typical micro-kernel implementation keeps all of the programs, and all but the minimal, low level hardware drivers, in their own protected memory. OSes such as Linux use a monolithic kernel (Windows and MacOSX use hybrid designs which share its key characteristic of running drivers in kernel mode). In a monolithic kernel, drivers exist within the kernel or execute at the same level of privilege; this means that if a driver crashes, the whole system crashes. The micro-kernel design also has security advantages over the monolithic.

Unfortunately, the micro-kernel has a performance cost. On the original 1985 hardware, a 20% performance hit was unacceptable, and for this reason AmigaOS uses a micro-kernel but without memory protection. In fact, it makes far less use of memory protection than most monolithic kernels do. If one application crashes, it can bring down the whole system.

Because of the way that the API is designed, a choice has to be made between breaking source and binary compatibility with current software or accepting that the machine is going to crash a bit more than average. One possible, hybrid solution would be to lose standard native app compatibility yet, at the same time, run the old applications through a compatibility emulation layer.

“I consider stability to be an area in which most modern operating systems have improved over the older generation. Some people seem to have a different memory of these things than I but I seem to remember that my RISC OS, Amiga OS and OS/2 based machines would crash quite often.”

And I stand by that: it would take quite a lot to convince me that a modern AmigaOS could achieve the level of stability that most people have come to expect while still being based on a design model that does not feature extensive memory protection.

On the one hand, the overall architecture of the OS is, in some respects, Unix-like, which eases the efforts of programmers porting applications. On the other hand, the Amiga GUI system doesn’t make any concessions towards compatibility with anything else. So, full GUI apps will take a lot of work to port, while command line and server type apps will take less.

The Good

Over the years, the graphics and sound APIs have been updated so that modern applications are no longer tied to the legacy hardware. Amiga OS is also small and fast. Amiga OS has a dedicated community of users and hobbyist developers. The size of this community and their loyalty are a huge asset that should be included in any assessment of the viability of the platform.

Funnily enough, like OS/2, AmigaOS includes integration of a variant of the IBM scripting language REXX. In AmigaOS this offers similar possibilities in terms of user created application plug-ins and automation of certain tasks.

In conclusion

In my opinion, the viability of a new AmigaOS hinges on the one Achilles’ heel of the entire OS: the stability and security problems raised by the lack of proper memory protection. It seems quite possible that, if a viable, architecturally sound AmigaOS could be created in the near future, a substantial user-base and developer base would be all but guaranteed.

The AROS project is an attempt to create an open source, portable implementation of AmigaOS. It aims for a feature set roughly equivalent to that of AmigaOS 3.1. As it is largely source code compatible, Amiga applications can be recompiled for the OS or run under the AROS port of the UAE Amiga emulator. This high degree of compatibility at the source code level is a mixed blessing, as it means that the OS has to make some concessions towards the legacy deficiencies of the original AmigaOS. For example, the FAQ has this to say on the subject of memory protection:

Several hundred Amiga experts (that’s what they thought of themselves at least) tried for three years to find a way to implement memory protection (MP) for AmigaOS. They failed. You should take it as a fact that the normal AmigaOS will never have MP like Unix or Windows NT.

However, it should be noted that they do present, in the FAQ, some ideas to work around this lack.

MorphOS is another attempt to create an Amiga-like operating system. It aims for good source code level compatibility with AmigaOS applications. The development combines open source components with proprietary, closed source development. The OS is tied to the PPC platform and can run on either the special Pegasos [see – http://en.wikipedia.org/wiki/Pegasos ] hardware or original Amigas which are equipped with PPC accelerator cards.

About the Author: Once, at school, Mike attempted to explain why Amigas were better than Spectrums to a member of the opposite sex; he’s regretted it ever since. Check out his website to learn more about his never-finished writing and music projects.

If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.

52 Comments

By way of example, let us say that a group of people might set out to create an open source clone of the Microsoft Windows operating system so that they could release it as freeware. In this example, it is the licence and the development model which vary from the original source OS.

Ehm. Is it just me, or is this guy completely unaware of the existence of ReactOS? It would have been the most obvious OS to feature in this article.

I think that this is one of the points that I was trying to make – the distinction between OSes which are externally similar and OSes that are internally similar. This holds true for Minix also (see further down thread). Minix and Linux are externally quite similar but *internally* they are implemented differently.

“During one of our weekly group meetings at the CSRG, Keith Bostic brought up the subject of the popularity of the freely-redistributable networking release and inquired about the possibility of doing an expanded release that included more of the BSD code. Mike Karels and I pointed out to Bostic that releasing large parts of the system was a huge task, but we agreed that if he could sort out how to deal with reimplementing the hundreds of utilities and the massive C library then we would tackle the kernel. Privately, Karels and I felt that would be the end of the discussion.

Undeterred, Bostic pioneered the technique of doing a mass net-based development effort. He solicited folks to rewrite the Unix utilities from scratch based solely on their published descriptions. Their only compensation would be to have their name listed among the Berkeley contributors next to the name of the utility that they rewrote. The contributions started slowly and were mostly for the trivial utilities. But as the list of completed utilities grew and Bostic continued to hold forth for contributions at public events such as Usenix, the rate of contributions continued to grow. Soon the list crossed one hundred utilities and within 18 months nearly all the important utilities and libraries had been rewritten.

Proudly, Bostic marched into Mike Karels’ and my office, list in hand, wanting to know how we were doing on the kernel. Resigned to our task, Karels, Bostic, and I spent the next several months going over the entire distribution, file by file, removing code that had originated in the 32/V release. When the dust settled, we discovered that there were only six remaining kernel files that were still contaminated and which could not be trivially rewritten. While we considered rewriting those six files so that we could release a complete system, we decided instead to release just what we had. We did, however, seek permission for our expanded release from folks higher up in the University administration. After much internal debate and verification of our method for determining proprietary code, we were given the go-ahead to do the release.”

Regarding UNIX and OSRs, “The UNIX system family tree: Research and BSD” is very interesting. You can find it on any FreeBSD installation or live system CD (FreeBSD, DragonflyBSD, FreeSBIE, PC-BSD, DesktopBSD etc.) in /usr/share/misc/bsd-family-tree or http://www.freebsd.org/cgi/cvsweb.cgi/~checkout~/src/share/misc/bsd… (text file with ascii diagram).

“MINIX is an open source, Unix-like operating system based on a microkernel architecture. Andrew S. Tanenbaum wrote the operating system to be used for educational purposes; MINIX also inspired the creation of Linux. Its name derives from the words minimal and Unix.”

“Linux is a monolithic kernel. Device drivers and kernel extensions run in kernel space (ring 0), with full access to the hardware, although some exceptions run in user space. The GNU/Linux graphics subsystem (the X Window System) is not part of the kernel, is optional, and runs in user space, in contrast with Microsoft Windows.

Kernel mode preemption means device drivers can be preempted under certain conditions. This latter feature was added to handle hardware interrupts correctly, and to improve support for symmetric multiprocessing (SMP). Preemption also improves latency, increasing responsiveness and making Linux more suitable for real-time applications.

The fact that Linux is not a microkernel was the topic of the Tanenbaum-Torvalds debate[12] which was started in 1992 by Andrew S. Tanenbaum with Linus Torvalds regarding Linux and kernel architecture in general on the Usenet discussion group comp.os.minix.[13] Tanenbaum argues that microkernels are superior to monolithic kernels and that, for this reason, Linux is obsolete. This subject was revisited in 2006.[14] [15]

Unlike traditional monolithic kernels, device drivers are easily configured as modules, and loaded or unloaded while running the system.”

OS/2 almost never crashed, as it has featured protected memory since the MS OS/2 1.x era.

Ran it for many years… Crashed? I can only remember once or twice. Lockups? All the f*%@ckin time until they fixed the SIQ, which didn’t happen till Warp was a few years old. Is there a difference from the user’s perspective? Nope, sux either way. I did like the UI though – that and it ran 16-bit Windows and DOS apps _really_ well.

Well to be fair the SIQ itself didn’t cause lockups, buggy software did. My experience with lockups was more due to the software I was trying to run. In fact if you were a run-of-the-mill OS2 user back in the day, the odds are that the only OS2 native apps you ran were WPS and maybe a few utilities. Most people ran Windows and DOS apps to be honest. And if you didn’t run native OS2 apps you would never run into the SIQ issue.

Anyway, if I remember correctly the 2 programs that caused the most trouble were Footprint Works (later IBM Works after IBM bought it for the release of Warp) and early versions of Lotus Smartsuite for OS2.

In regard to WPS, I have to say that the template function and the general object oriented approach are something which should really be implemented in other OSes. ASAP! (GNOME actually has some OS/2-like support for templates – it is just hidden away).

An even nicer feature of OS/2 was the “shadow copy” feature. Symlinks and shortcuts on steroids. You could make a shadow copy of an app, then move the original app, and the shadow would still point to it. Edit the shadow and you could change features of the original.

Just out of curiosity, why would you consider BSD an OSR of Unix? Considering that the code has evolved from the original Unix rather than being written from scratch (which is the case of Linux and many real OSRs), it doesn’t make much sense. Also, in terms of ideas, System V Unix has taken as much from BSD as it has given them.

Entertaining and informative, and thanks for taking the trouble. Yes, it would be interesting to hear your opinions on ReactOS. And the other thing I wonder, when looking at the timescales, is whether these efforts are doomed to relative failure. By the time they recreate the glories of the past, time has moved on too far. Still, the enthusiasm of the devotees is nice to see.

There is an interesting window manager inspired by Amiga by the way – came across it in the Debian repositories the other day, windowlab. A real blast from the past if you want a flavor without going all the way.

The bad is that the writer hasn’t been in touch with OS/2 in a long time. IBM did update Warp in 1999 and included USB support for most devices, the JFS file system, kernel upgrades to support more memory, a proto 32 bit device driver model, updates to the TCP/IP stack with new functions to accelerate Web work (web servers)… etc

Since then, OS/2 became eComStation, a commercial distribution that is working on version 2 of the product, with a lot of new things to improve the hardware support and the GUI.

The good is that even bad publicity is good publicity, and it is good to let people know that there are projects to bring OS/2 back to life.

Lots of the writer’s arguments are really flawed when it comes to defining the practical purpose of these kinds of projects. I have been following Linux since version 0.93 back in the early 90s, and believe me, no one imagined the fate of that system at the time. Much of the reason why Linux is where it is today is because of OS/2 and IBM’s failure to win the OS wars.

From the software point of view, OS/2 clearly has an advantage over other projects of this nature. You still have DB2, MQSeries, WebSphere 4.x and all the suites that IBM produced. Today there are lots of tools for porting Linux software to OS/2; that way we have Apache 2.2, SVN servers and LOTS of ports using the GCC compiler directly. SkyOS, Syllable, even BeOS (Haiku) don’t come close to the thousands of shareware, freeware and commercial programs that are available for OS/2, not to mention the old DOS and Win3.x programs.

In my opinion, it is not fair to compare any current operating system with OS/2 Version 4 from 1996, because since then a lot of FixPaks (17 in total!) have been published by IBM, with tons of updates, bugfixes and new features:

E.g. FixPak 15 has been freely available since 2001 and updates your Warp 4 installation to version 4.50, which is very different from its predecessor. So it would be fair, and necessary, to compare other (current) operating systems at least with this more modern 2001 version of OS/2 Warp 4.

And, moreover, please do not forget eComStation, the modern successor of OS/2 (OS/2 Warp Version 4.52). In my opinion, a comparison with this latest “Warp 4” would be the best.

I have to agree. The author seems like he needed something to write about and didn’t do much research. I guess you could say that eComStation is to OS/2 as ZETA is to BeOS, only properly executed. eCom is a pretty big jump up from what Warp 4 is/was. My dad still runs it exclusively. Has no issues with new hardware. So what if he can’t play Duke Nukem Forever (hehe), he is actually productive on his machine. He has Thunderbird, Firefox, OpenOffice, lots of imaging apps, etc. On my installation of eCom, I run VirtualPC and can run WinXP, BeOS, and tons of linux distros, thereby giving me the power of the apps from ALL those OSes.

For a reference, I run eCom on an IBM NetVista X41 P4 1.8 Ghz, 1 GB RAM, 160GB HD, and a 16x DVD+/-RW, USB2, Hauppauge PVR-150. There’s other goodies in there, but base system wise it’s pretty recent. I have PVR ability, which is something linux won’t even do on that machine, much less BeOS and ZETA which followed. WinXP likes to hiccup during recording.

So I’d say it’s a pretty damn useful OS in its current form, especially considering it can do things that “current” OSes can’t.

Not to sound like a rant (sorry if it did) but I really feel more research would have gone to make this article better than it was.

“For a reference, I run eCom on an IBM NetVista X41 P4 1.8 Ghz, 1 GB RAM, 160GB HD, and a 16x DVD+/-RW, USB2, Hauppauge PVR-150. There’s other goodies in there, but base system wise it’s pretty recent. I have PVR ability, which is something linux won’t even do on that machine, much less BeOS and ZETA which followed. WinXP likes to hiccup during recording.”

Linux has PVR ability, and in fact your machine is probably fully supported under Linux. Intel hardware has better support under Linux than most other hardware, and the Hauppauge 150 is also supported. You could run MythTV on that with no problem.

“The bad is that the writer hasn’t been in touch with OS/2 in a long time. IBM did update Warp in 1999 and included USB support for most devices, the JFS file system, kernel upgrades to support more memory, a proto-32-bit device driver model, and updates to the TCP/IP stack with new functions to accelerate Web work (web servers), etc.”

It is true that I have been out of the OS/2 scene for a while. However, I stand by what I said:

“Perhaps IBM could have kept OS/2 relevant but they didn’t make any serious efforts to develop it beyond about 1996.”

I could perhaps have been a bit clearer in my definition of ‘serious efforts to develop’, but I consider ‘development’ to be different from ‘maintenance’. Perhaps they were serious about the platform for a bit longer than 1996, too. My understanding was that IBM were forced to maintain OS/2 in order to comply with their corporate contracts. By the time I left the platform, IBM were recommending that customers transition over to Windows/Linux.

New printer drivers, the deal with SciTech for graphics support, and USB support kept the platform usable on new hardware. However, I consider that to be different from a ‘serious effort to develop’ the OS.

Why does this guy talk about things he doesn’t know a lot about? It seems that he has never heard of ReactOS; that AmigaOS has a modern version with partial memory protection that runs on modern hardware (AmigaOS 4); that there are three projects recreating the old Amiga hardware in FPGA (with working prototypes already shown); that MorphOS also has binary compatibility with AmigaOS; and there are probably more things that he doesn’t know about yet has commented on.

I think it was a well-written article, highlighting many of the problems in reviving an OS that has been out of shape for a long time.

Now, about porting programs from Unix/Linux to OS/2: the GUI system being a big obstacle is not really true. For example, Mac OS X faced the same challenge, so they ported X11 to Mac OS X and made it look transparent, as if it were part of the desktop. This is how programs like OpenOffice and GIMP can work without a complete rewrite of the GUI layer; having X11 makes it easier to port GTK and other toolkits.

As for porting Unix/Linux commands to AmigaOS, that has always been a challenge, but AmigaOS 4 and its clones have adopted POSIX standards to make it easier. Lots of energy has been directed at porting the most-used Linux libraries, and what we are seeing as an effect of this is a great number of SDL ports coming.

As for security and memory protection, AmigaOS 4 enables memory protection, and this is the first stepping stone to full memory protection: unreserved memory pages, some kernel memory, and a few memory areas that do not need to be shared (internal buffers) are now protected under OS4.

OS structures/tables are generally not protected, because many programs obtain information directly from them, but the header files for shared libraries and OS libraries advise which structures developers should not try to access directly. There also exist some APIs for accessing certain OS structures, so in most cases developers can do it in a future-safe way.

AmigaOS 4 added integrated debugging features that have helped detect unsafe software and have helped developers find problems, and where the problems are located within their programs.

Grim Reaper reports are very useful.

(And it is all thanks to memory protection and a well-designed exception handler under OS4.)

First step: moving from chipset dependencies to APIs, and laying out new APIs for new programs; getting old programs ported to the new platform; getting old software recompiled and updated.

Second step: add more APIs that deal with SMP, an internal messaging system, and graphics. New APIs will always be memory-protection safe, and fewer and fewer new programs will use the old APIs over time; in some cases the old APIs are wrapped or emulated under the new ones.

Windows has evolved over many versions; memory protection and security were added step by step. New versions of AmigaOS will do the same, but over a shorter time, so the updates will be bigger compared to what a “modern” operating system (Windows) gets. Many programs will break over time – even recent programs might break too. This is an ongoing process, in which software developers will need to update their software from time to time.

I believe there was a SoundBlaster 128 AHI driver that broke just recently under a version of OS4 that is only available to beta testers:

Programs often execute code in external libraries. Memory protected by a program can be used in those external libraries, but protected memory cannot be shared with other running tasks. If a driver, for example, has an internal process going, then memory reserved by the program must be shared, and cannot be protected in those cases – just like with shared memory under Linux.

One more way is a safe/managed language, so programs can run safely without any memory protection at all.

Imagine a language that needs to check whether every byte is safe to read or write. :-p

A simple piece of code like this:

int main(void)
{
    char *a = startmem;   /* startmem: some writable, zero-terminated region */
    for (; *a != 0; a++)
        *a = 20;
}

becomes quite complicated and slow:

/* Memtab[] describes the valid memory ranges; tabsize is its length. */

int safe_read(char *a)
{
    for (int i = 0; i < tabsize; i++)
    {
        if ((Memtab[i].startmem <= a) && (a < Memtab[i].endmem))
        {
            return *a;
        }
    }
    Memory_exception();
}

void safe_write(char *a, char data)
{
    for (int i = 0; i < tabsize; i++)
    {
        if ((Memtab[i].startmem <= a) && (a < Memtab[i].endmem))
        {
            *a = data;
            return;
        }
    }
    Memory_exception();
}

int main(void)
{
    char *a = startmem;
    for (; safe_read(a) != 0; a++)
        safe_write(a, 20);
}

Now just imagine the extra assembler that needs to be executed just to call safe_write() and safe_read(). Safe languages like BASIC were NOT popular for programs that need to be fast, like games, for example.

This type of memory protection was a natural consequence in the case of OS4, which is written for a PowerPC processor (PPC requires either memory mapping through BAT registers (max. 256MB, IIRC), or mapping with the MMU, or both combined). Having to deal with an MMU hashtable of finite capacity invites keeping as few MMU page descriptors in it as possible.

PS. The same kind of “memory protection” may be seen in MorphOS and will be seen in x86_64 AROS. Nothing amazing here.

…since Warp 4’s original GA release in 1996 than a few printer drivers and USB support. They added a whole new filesystem (JFS, a journalled filesystem); a whole new way to create and manipulate filesystems and disk partitions (LVM – Logical Volume Manager), which also allows one to combine different partitions into one logical drive, even across devices (so everything could be on your C: drive if you want); a whole new network stack (the original OS/2 TCP/IP stack was 16-bit and IBM wrote a new 32-bit stack); etc.

Some of these things are pretty serious updates, and I think that they deserve more than a shrug on the part of the article’s author.

Insofar as the usability of base OS/2 Warp 4 FP 15 is concerned, I just installed Photoshop 3.04 under WinOS2 last night. It works just fine, and even though it’s an OLD version of Photoshop, it’s still a kick-ass photo processing program. Now, for bitmap and vector graphics work under OS/2, I have concurrent access to ColorWorks 1+ (native), Embellish 2.02 (native), TrueSpectra Photo>Graphics (native), XV (native/X), GIMP (native/X), Photoshop 3.04 (Win16), PaintShop Pro 3.11 (Win16), A&L Draw (Win16), Visio 4 Pro (Win16), Drafix CADD (WinOS2), SmartDraw 3 (WinOS2), NeoPaint (DOS), and IrfanView (Odin). Not bad for a “dead” OS. 🙂

I think that we may have to agree to disagree on this point. I still maintain that IBM threw in the towel on the OS for political rather than technical or economic reasons. The OS still had something going for it at a point when IBM was advising its customers to pursue Windows, Linux and other Unix instead of OS/2. I still stand by my point that even the other extras that you mention do not constitute ‘serious development of the platform by IBM’.

Also, let’s not forget the original focus of the article: I was arguing against the re-implementation of OS/2. I am not arguing against people, such as yourself, getting pleasure/utility out of sticking with OS/2. Do you feel that a project to recreate OS/2 would be a good use of resources?

I don’t have a problem agreeing to disagree. 🙂 If the world all agreed with each other, it’d be a boring place to live…

OS/2, as a general desktop OS, has several qualities that I have yet to find to the same extent in Linux, the BSDs, or Windows, or BeOS, or … , and that has been enough to keep me using OS/2 to this point.

I like the mix of a 4OS2/4DOS command set (DOS-like but far less limited), REXX, a filesystem which retains case without being case-sensitive, low resource usage, a high level of responsiveness to changing process loads, and a very high level of out-of-the-box legacy DOS and Win16 app support, along with a more modern 32-bit native API.

For me, the WPS is a nice thing, but it isn’t the reason I like and use OS/2. It’s not even near the top of the list. I’m a fullscreen console guy who likes scripting and aliases, not a GUI clicker. 🙂

I’d certainly like to see it stick around in some form. As I see it, there are two ways to try and keep it alive:

(1) Try to prop up the existing kernel(s) with a newer supporting cast of device drivers, installation code, useful utilities, and other things, or

(2) Try to recreate the best aspects of OS/2 (hopefully including some level of native OS/2 API support) in a new OS built from scratch.

#1 is doable now, and probably provides the best bang for the buck, but the fact that the kernels are static now (or relatively so) and binary-only makes long-term survival a question.

#2 would ensure long-term viability, but OS/2 is not a simple operating system (it’s single-user, but it has a lot of APIs and uses some sophisticated x86 features), and the fact that it has more than one native API and an extensible OO desktop makes the task of re-creation a nontrivial one.

My own vote is for #1 as long as possible, with #2 going on in the wings. 🙂

I do think recreating it would be a very good use of resources. Why? Because it would provide one of the very few non-UNIX-like alternative platforms which actually has enough software to be a viable general desktop solution, and I think that would appeal a lot more to Windows folks than either Linux or BSD would.

OS/2 is not like Windows, not really, but it’s closer to the Windows way of thinking than Linux is without making many of the security/UI/resource sacrifices that Windows is saddled with. I see it as a happy medium which would otherwise not exist.