It's a real-time operating system and it's the brains behind BlackBerry 10.


By now, you know all about BlackBerry 10, with its myriad gestures, its bustling notifications Hub, and the BlackBerry Balance mode, which separates work-centric applications and accounts from personal ones. You may also be aware that BlackBerry is betting heavily on these features, which are the main attractions of the newly overhauled mobile operating system. All of these features are made possible by QNX’s real-time operating system—one that the Canada-based company takes great pride in.

BlackBerry (née RIM) acquired QNX Software Systems in April 2010 with the intention of gaining a major foothold in the automotive industry. Although QNX has its hand in several automotive projects—the QNX Car Platform 2.0 is featured in the Bentley Continental GT, for instance, and the software has historically been embedded in the control systems of other high-end luxury vehicles—BlackBerry’s focus, for now, remains on its mobile devices, and it’s using this acquisition to hold on to its relevance in the mobile operating system wars.

While BlackBerry 10 still hasn't officially debuted in the United States, we thought we'd take a look at the framework behind BlackBerry 10 and see how the company's acquisition of this real-time operating system has been implemented in its new OS.

It started with the PlayBook

Florence Ion

The seven-inch PlayBook marked the debut of the QNX kernel on a BlackBerry device. It was designed with a custom user interface to work in conjunction with a tethered BlackBerry handset (the tablet didn't even have its own native e-mail client), but the PlayBook wasn't exactly a bestseller.

Regardless, it was still the beginning of the QNX revolution for BlackBerry and was essentially a test subject for the company to discover how it would move forward with its new acquisition. In October 2011, BlackBerry announced that it would fuse together the best parts of its BlackBerry OS with the QNX operating system featured on the PlayBook. Then-CEO Mike Lazaridis had said that the software, initially dubbed BBX, was intended to make it easier for developers to write applications that work interchangeably on both BlackBerry phones and tablets, and that “the whole company is aligning behind this single platform and single vision.”

“PlayBook was a big step in that journey [to BlackBerry 10],” said Sebastien Marineau, senior vice president of QNX Engineering and BlackBerry OS, in a phone interview with Ars. “It was more than just QNX—it was really a reinvention of the software platform for RIM.”

QNX: How does it work?

First released in 1982, QNX is a real-time operating system that is used mostly in the embedded systems market. Over the years, it's been implemented in things like in-car dashboards, medical devices, and routers. QNX employs what is called a micro-kernel architecture, which is decidedly different from software like UNIX, Mac OS, and Windows, which use kernels that are, to a greater or lesser extent, monolithic. With a monolithic kernel, all the traditional kernel tasks—the scheduling of processes and threads, the arbitration of access to hardware and the device drivers for that hardware, the provision of file systems, and the enforcement of system security—run in a single shared address space and use the processor's most privileged mode. If any one of those things crashes, the entire system crashes.

With a microkernel architecture such as that used by QNX, each of these kernel tasks is broken out into its own address space, and most of them are run in the processor's least privileged mode. With this design, one thing can crash—a device driver, for example—without bringing down the rest of the system. In general, the crashed process can simply be restarted, and operation can continue as normal. This provides greater robustness and protection against programming errors. The operating system itself shares similar APIs and programming methodology with other UNIX and Linux frameworks, but according to Marineau, it was built from the ground up with a different underlying architecture.
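The isolation described above can be sketched in a few lines of Python. This is purely an illustration, not QNX code: the "driver" lives in its own OS process, so a crash inside it cannot take down the supervising process, which simply detects the failure and restarts the driver.

```python
# Toy sketch of microkernel-style fault isolation (illustrative only, not QNX).
# The driver runs as a separate process; a crash in it leaves the rest alive.
import multiprocessing as mp

def flaky_driver(conn):
    """Pretend device driver: services one request, then exits (or crashes)."""
    request = conn.recv()
    if request == "crash":
        raise RuntimeError("driver bug!")  # simulated fault inside the driver
    conn.send("handled " + request)

def supervised_request(request):
    """Send a request to the driver; if the driver dies, restart it and retry."""
    for _attempt in range(2):              # allow at most one restart
        parent, child = mp.Pipe()
        proc = mp.Process(target=flaky_driver, args=(child,))
        proc.start()
        parent.send(request)
        proc.join()
        if proc.exitcode == 0:             # driver survived this request
            return parent.recv()
        request = "read"                   # restart and retry with a safe request
    return None

if __name__ == "__main__":
    print(supervised_request("read"))      # serviced on the first try
    print(supervised_request("crash"))     # driver dies, is restarted, retries
```

In a monolithic kernel, the equivalent of `raise RuntimeError` inside a driver is a kernel panic; here it is just a dead child process.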

Multitasking is something that’s well monitored in BlackBerry 10. As CrackBerry points out, everything runs as an individual process. The thumbnails seen on the running-applications screen, for instance, use what BlackBerry calls “live covers,” which resemble widgets on the Android operating system. Most of the time they’re just intermittently grabbing data in the background—like the Gallery app, which refreshes every so often with a slideshow of the latest photos. When the frame goes out of view, the app is still running, but it isn't rendering anything to the screen; its output goes to an off-screen target instead.

To keep them from running rampant, however, BlackBerry has included “knobs” in the operating system that put a container around each individual application so that it can continue to execute where it makes sense, while balancing that with power and CPU consumption. If an app wants to run in the background, the developer can request permission from within the app, but only within a finite CPU budget. This is all to prevent any application from being too aggressive and exhausting the battery. In some cases, apps are frozen in a “ready to run” state.
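The budgeting behavior described above can be sketched as follows. The `BudgetedApp` class and its freezing rule are hypothetical stand-ins for BlackBerry's actual, unpublished mechanism; the point is only the shape of the policy: charge each background slice against a finite CPU allowance, and freeze the app in a "ready to run" state once the allowance is spent.

```python
# Hypothetical sketch of a per-app CPU budget (not BlackBerry's real API).
import time

class BudgetedApp:
    def __init__(self, name, budget_seconds):
        self.name = name
        self.budget = budget_seconds   # remaining CPU-time allowance
        self.frozen = False            # frozen = "ready to run", not scheduled

    def run_slice(self, work):
        """Run one unit of background work, charging its CPU cost to the budget.

        Returns False if the app is frozen and was not run at all.
        """
        if self.frozen:
            return False
        start = time.process_time()
        work()                                       # the app's background task
        self.budget -= time.process_time() - start   # charge actual CPU used
        if self.budget <= 0:
            self.frozen = True                       # budget exhausted: freeze
        return True

# Usage: a background task keeps running until its allowance runs out.
gallery = BudgetedApp("gallery", budget_seconds=0.05)
while gallery.run_slice(lambda: sum(range(100_000))):
    pass
print(gallery.name, "is now frozen:", gallery.frozen)
```

Measuring `time.process_time()` rather than wall-clock time matches the stated goal: it is CPU consumption, not elapsed time, that drains the battery.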

In spite of being used in a wide range of custom applications, QNX is not open source. BlackBerry does offer up portions of its frameworks, but these are typically pieces that customers can modify and adapt to various hardware. BlackBerry has also spent time evolving the software so that it's not tied to particular hardware; this way, it can easily run on different form factors. "We've spent a lot of time abstracting the operating system and the applications from the hardware, which means we can take the same software platform and the applications and run them on a variety of different processors, different form factors, and so on," explained Marineau. He later used the aforementioned Bentley as a prime example of this. "[It] runs an automotive stack, QNX car, which probably shares 90 percent of the code and the lineage with BlackBerry 10 on a handset." All that needed to be tweaked was the user interface.

"The whole ubiquitous computing vision that we all share...is really easy to do when you have the same software running on a handset, on a tablet, in a car," Marineau said. He then added that BlackBerry's user base has always tended to be very focused on communication, efficiency, and getting things done very quickly. While that may still hold true, our biggest hope for BlackBerry yet is that BlackBerry 10 takes off so it has a chance to eventually execute that ubiquitous computing dream.

Not being a CS type I couldn't immediately see the distinction between a micro-kernel architecture and something like services or daemons in an OS with a monolithic kernel. Granted, services are outside the kernel but isn't what runs in vs. out a somewhat arbitrary choice? Is there really a hard division between the two types of OS kernels or is it more of a spectrum with Windows and UNIX on one end and something like QNX on the other?

Not being a CS type I couldn't immediately see the distinction between a micro-kernel architecture and something like services or daemons in an OS with a monolithic kernel. Granted, services are outside the kernel but isn't what runs in vs. out a somewhat arbitrary choice? Is there really a hard division between the two types of OS kernels or is it more of a spectrum with Windows and UNIX on one end and something like QNX on the other?

There's a spectrum sure, but some things are pretty black and white in practice.

Can a shitty chinese USB peripheral with questionable drivers crash your system? If so, you know the architects made the "arbitrary choice" to let it run in kernel mode.

Not being a CS type I couldn't immediately see the distinction between a micro-kernel architecture and something like services or daemons in an OS with a monolithic kernel. Granted, services are outside the kernel but isn't what runs in vs. out a somewhat arbitrary choice? Is there really a hard division between the two types of OS kernels or is it more of a spectrum with Windows and UNIX on one end and something like QNX on the other?

There's a spectrum sure, but some things are pretty black and white in practice.

Can a shitty chinese USB peripheral with questionable drivers crash your system? If so, you know the architects made the "arbitrary choice" to let it run in kernel mode.

This. Part of that black and white spectrum is what the OS actually allows you to do. If it lets you (as a choice) run in kernel mode without any barriers, lazy programmers will pick that, just like they make Windows applications that required admin privs to run, even if they really didn't need to.

Would it have made the story too complex to explain how all the other systems you named use hybrid kernels, and what that means? I hope the greater/lesser-extent monolithic idea came from a substitute editor.

Is there really a hard division between the two types of OS kernels or is it more of a spectrum with Windows and UNIX on one end and something like QNX on the other?

It's a spectrum, except that the choices tend to be clustered towards either end. After all, if you want the advantages of microkernel, why go half-way?

As for 'arbitrary', the overall OS design dictates a philosophy that's usually unwise to ignore. As well, the cost-benefit analysis applies to service developers just as much as OS developers, although the benefit of 'requires less development time' can occasionally dominate many other concerns. :-).

Does anyone know what the BB10's browser is based on, if anything? Or is it simply WebKit, or perhaps full Chromium? From what I heard, it's a pretty mature and fast browser and developing a browser, including parsing, rendering and the script engine, seems like one of the most labor intensive tasks you can undertake in the software world, short of developing an OS.

Does anyone know what the BB10's browser is based on, if anything? Or is it simply WebKit, or perhaps full Chromium? From what I heard, it's a pretty mature and fast browser and developing a browser, including parsing, rendering and the script engine, seems like one of the most labor intensive tasks you can undertake in the software world, short of developing an OS.

From what I've read, the browser engine is WebKit-based, but the browser itself is actually an HTML5 app, developed from the ground up in house at BlackBerry. They discussed it at one of the developer events this past year. http://www.berryreview.com/2012/09/25/b ... avascript/

Would it have made the story too complex to explain how all the other systems you named use hybrid kernels, and what that means? I hope the greater/lesser-extent monolithic idea came from a substitute editor.

Actually, QNX is a different beast entirely.

Even the most modular configuration of Linux or Windows has drivers running in kernel mode (I don't know for sure on OSX, haven't been that deep under the hood there).

Under QNX everything but the most core OS services runs as a userspace process. Video, USB, ATA, filesystems: all run as userspace processes.

If I remember correctly, the only stuff actually in the kernel is memory management and the scheduler and a scant few utility functions for accessing hardware events.

I remember using QNX back when I was a pre-teen. It was the core OS of the ICON series of computers used in Ontario, Canada schools. It was my first experience at a unix-like system.

Ugh, those Unisys ICONs with the trackball mounted in the keyboard that always pinched your flesh into it when you were trying to navigate? Didn't realize those were running QNX; those would have been like grade 3 for me, I think.

I remember using QNX back when I was a pre-teen. It was the core OS of the ICON series of computers used in Ontario, Canada schools. It was my first experience at a unix-like system.

Ugh, those Unisys ICONs with the trackball mounted in the keyboard that always pinched your flesh into it when you were trying to navigate? Didn't realize those were running QNX; those would have been like grade 3 for me, I think.

I was in school around the same time but for whatever reason didn't get to use them too much. I think my primary school was too cheap, and they were mostly out of vogue by the time I got to high school. There was a lab that I used once or twice - the thing I remember was the built-in word processor that for spell check would mark which words were spelled wrong, but wouldn't give you suggestions. Pedagogically sound I suppose, but frustrating enough that I just went and used pencil and paper again.

....QNX employs what is called a micro-kernel architecture, which is decidedly different from software like UNIX, Mac OS, and Windows, which use kernels that are, to a greater or lesser extent, monolithic....

Mac OS is Unix-based, so your statement is redundant. It would be better stated as Unix (including Mac OS) and Windows....

Quote:

With a microkernel architecture such as that used by QNX, each of these kernel tasks is broken out into its own address space, and most of them are run in the processor's least privileged mode. With this design, one thing can crash—a device driver, for example—without bringing down the rest of the system. In general, the crashed process can simply be restarted, and operation can continue as normal. This provides greater robustness and protection against programming errors.

You don't use Linux/Unix-based OSes that much, do you?

Quote:

it can easily run on different form factors. "We've spent a lot of time abstracting the operating system and the applications from the hardware, which means we can take the same software platform and the applications and run them on a variety of different processors, different form factors, and so on,"

LOL - I'd like to see them port this to x86 or RISC without heavy modification.

Even Java has to have different wrapper code from one hardware spec to the next.

Why not make every OS a Micro Kernel? Then they would all be fast and not crash. As I recall, the original design of Windows NT was a Micro Kernel, but they could not get the performance out of it. In particular, the display drivers were not responsive, so they put the Display API and drivers back into the kernel. That worked well for performance but not so well for stability.

Well, I am the owner of QNX (originally QUNIX) prerelease version 0.7, serial number 004... :-) Let's say that I am an OLD QNX user. Over the years I have been a QNX OEM and major customer, as well as a contributor to QNX software overall. I started using this system in 1982 when the IBM PC first came out - I was a sales rep for ComputerLand in the heart of the Silicon Valley (Los Altos, CA), and was selling it to organizations such as the Stanford Research Institute, Stanford University, US Geological Survey, Ford Aerospace, NASA Ames Research Lab, and others because it was already a generation ahead of available PC software such as MS-DOS, UCSD-Pascal, or CP/M-86. It had a K&R C compiler, supported more disc drive types (DS/DD floppies, hard drives, etc.), and soon became fully multi-tasking, supporting much more sophisticated applications.

Why micro-kernel and message-passing vs. monolithic kernel w/ system calls / ioctl's, etc? Two issues - separation of responsibilities, and consistency of interface. It is this separation of responsibilities (running drivers in user-space) that allows the system to be self-healing. It is the consistency of interface (IPC's) that allows one to update drivers in real-time without bringing down the system, other than the device(s) in question for more than a few milliseconds. The other major advantage of QNX is that networking is inherent in the system. We used to call QNX the super-computer you built node-by-node. Need more computational power? Just add a node. Access/use of remote resources are no different than using local ones.
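The synchronous message passing this commenter describes (roughly analogous to QNX's MsgSend/MsgReceive/MsgReply calls) can be mimicked with a toy Python simulation. This is not QNX's C API: the point is the consistency-of-interface argument, since the client only knows the channel, so the server behind it can be replaced without the client changing.

```python
# Toy simulation of synchronous send/receive/reply message passing
# (illustrative only; QNX's real API is a C interface, not this).
import queue
import threading

class Channel:
    """Stand-in for a channel owned by a server process."""
    def __init__(self):
        self._requests = queue.Queue()

    def send(self, msg):
        """Client side: block until the server replies (like MsgSend)."""
        reply_box = queue.Queue(maxsize=1)
        self._requests.put((msg, reply_box))
        return reply_box.get()          # blocks until the server replies

    def receive(self):
        """Server side: block for the next request (like MsgReceive)."""
        return self._requests.get()

def disk_server(chan):
    """Pretend filesystem server: replies to one read request (like MsgReply)."""
    msg, reply_box = chan.receive()
    reply_box.put("data for block {}".format(msg["block"]))

chan = Channel()
threading.Thread(target=disk_server, args=(chan,), daemon=True).start()
print(chan.send({"block": 7}))
```

Because the client's only dependency is the message format, swapping in an updated `disk_server` (or one running on another network node, per the "super-computer built node-by-node" point) leaves client code untouched.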

These days, I am a major Linux systems engineer - both from the data center server perspective, as well as embedded "soft" real-time systems. Why? Because Linux is "hot", but for all of that, when I have my druthers, I'd druther use QNX for most of these applications. It has hard real-time behavior, integrated networking, fully conformant POSIX support, speed, and support by a bunch of tech wizards I have been happy to work with for 30 years.

This is very interesting from a technical perspective, but I can't figure out how in the world these differences can possibly become advantages in the market for BlackBerry. Even if there are technical advantages for QNX (which I suspect are probably different tradeoffs rather than clear wins), how does it make a difference for users? If it doesn't make a difference in the user experience, this is just going to be an interesting technical footnote in computing history.

Why not make every OS a Micro Kernel? Then they would all be fast and not crash.

Microkernels do not typically offer speed advantages.

In theory, maybe; however, QNX has consistently proved to be the fastest OS when responding to hardware and software interrupts. There are numerous tests that prove this out. It is a lean, mean, processing machine! Yes, highly modified Linux-based systems run some of the rovers on Mars. They work very well, but if you want an off-the-shelf real-time system that can expand on demand, then QNX is the best choice.

....QNX employs what is called a micro-kernel architecture, which is decidedly different from software like UNIX, Mac OS, and Windows, which use kernels that are, to a greater or lesser extent, monolithic....

Mac OS is Unix-based, so your statement is redundant. It would be better stated as Unix (including Mac OS) and Windows....

To be even more pedantic, Mac OS was a different product entirely. It's OS X.

Why not make every OS a Micro Kernel? Then they would all be fast and not crash. As I recall, the original design of Windows NT was a Micro Kernel. But, they could not get the performance out of it. In particular, the display/drivers was not responsive. So, they put the Display API and Drivers back into the Kernel. Worked well for performance but not so well for stability.

Every engineering decision is a balance.

Essentially you are refighting the Torvalds-Tanenbaum debate: Tanenbaum saying that Linux is obsolete and that microkernels are so much better, and Linus basically telling him that if they are so much better, then why doesn't he start coding instead of talking. Theoretically microkernels have a lot of advantages, but in the end the execution matters just as much, and a monolithic kernel like Linux seems to have a lot of advantages as well, which in reality seem to win out, even if it is only the ease of programming them.

I do not really see the security problems, most monolithic kernels almost never crash anymore and running drivers in kernel space has some real performance benefits, you just need to get drivers from reputable sources. If your video card driver crashes a microkernel will not help you.

Why not make every OS a Micro Kernel? Then they would all be fast and not crash.

Microkernels do not typically offer speed advantages.

In theory, maybe; however, QNX has consistently proved to be the fastest OS when responding to hardware and software interrupts. There are numerous tests that prove this out. It is a lean, mean, processing machine! Yes, highly modified Linux-based systems run some of the rovers on Mars. They work very well, but if you want an off-the-shelf real-time system that can expand on demand, then QNX is the best choice.

That comes from its "real time" nature, not from being a microkernel design.

From what I understand (been a while), a real-time OS attempts to guarantee a certain maximum time between context switches so that no single process can hog the system.

RiscOS, another real time OS, has been hailed as one of the most responsive OSs out there. I think one of the people behind OSNews.com still has a soft spot for RiscOS.

And I think there is at least one project focused on making Linux real-time-ish.

....QNX employs what is called a micro-kernel architecture, which is decidedly different from software like UNIX, Mac OS, and Windows, which use kernels that are, to a greater or lesser extent, monolithic....

Mac OS is Unix-based, so your statement is redundant. It would be better stated as Unix (including Mac OS) and Windows....

Didn't Apple have Unix run on top of Mach, a microkernel? I know that was true at the beginning and has probably changed a lot since then. But it definitely is not standard Unix at the lowest level.

I do not really see the security problems, most monolithic kernels almost never crash anymore and running drivers in kernel space has some real performance benefits, you just need to get drivers from reputable sources. If your video card driver crashes a microkernel will not help you.

Sure it would, with a microkernel a video card driver crash would disrupt your video/monitor, but that would be it. The rest of the system wouldn't go down and most likely the video system would just restart and you'd be off and running.

For a desktop user that'd be nice, but for any sort of embedded and/or control system that would be of critical importance. Losing a display might be a problem, but it's almost certainly worse to lose the whole system while it reboots.

UNIX is not an operating system. It once was. But in modern computing history, there are only unix-like systems, or *nix varieties, and UNIX as certified by the Single UNIX Specification (SUS). This has been true certainly since the 90s, but in this decade it is ludicrous to try and pretend there is a single operating system called UNIX, rather than various flavors of BSD, commercial System V derivatives, free linux distros, and so on.

It's more like POSIX. You don't say you run "POSIX OS" but you might run an operating system that meets the POSIX standard and specifications, and thus can run software against that spec.

Mac OS X was certified as UNIX. I am not sure whether the current OS X has met any UNIX certification, and I feel it is likely that iOS has not. I believe the last time Apple certified Mac OS X was in 2007 (UNIX 03). It seems that Apple no longer particularly cares about this as a marketing item.

I do not believe any linux distro has been certified against the SUS. I could be wrong, there are a lot of linux distros and commercialized versions.

Solaris, HP/UX, and AIX are or were all certified as meeting SUS.

Microkernels are another, more complex, issue. And almost entirely unrelated to whether something is "UNIX." Mach was a microkernel developed to replace BSD, which means you could have a full, compliant UNIX with a microkernel. I don't believe anyone actually sells this or certified this at any point. Mach development ceased in the mid 90s.

OS X uses a Mach microkernel derivative, but is not a true microkernel, instead representing a sort of hybrid kernel somewhere between a monolithic kernel and a microkernel.

I think GNU HURD uses a microkernel, though it has never been finished or released, but it would certainly qualify as UNIX-like. Please don't flame me about HURD. I will readily confess I know almost nothing about this project as I have never run into it in the wild.

In any case, microkernels and UNIX are not incompatible. There is at least one Unix-like microkernel OS in-development. And Mac OS X is/was UNIX. iOS could even be argued to meet certain requirements of unix-like operating system characteristics. And linux and other *nix operating systems should be mentioned. Additionally, "Mac OS" traditionally refers to the operating systems in the System 6 through Mac OS 9 era of Apple, which are completely different from the modern, NeXT-derived OS X (Note: no longer "Mac OS X").

In summary, the phrase

Quote:

QNX employs what is called a micro-kernel architecture, which is decidedly different from software like UNIX, Mac OS, and Windows...

is an absurd generalization and does not make clear sense. At best it reads like marketing copy from QNX; at worst it seems un-researched and ill-informed.

Are you talking about AIX, Mac OS 8.5.1, and Windows (something? 95? 3.11? NT? ME? Server 2008R2?)... it is not clear. Please be more specific.

I do not really see the security problems, most monolithic kernels almost never crash anymore and running drivers in kernel space has some real performance benefits, you just need to get drivers from reputable sources. If your video card driver crashes a microkernel will not help you.

Security by design is at least as important as not having known exploitable vulnerabilities. The monolithic vs microkernel security debate comes down to issues over their respective trust models. Architecturally, microkernels play to the idea of Defense in Depth, whereas monolithic kernels do not. Defense in Depth is basically the best way we have to mitigate flaws in software, of which there are demonstratively many, even at the kernel level.

That being said, microkernels have always looked better on paper. Just as you've said, execution is an important factor, but that's no reason to completely shrug off the benefits of clearly defined trust boundaries, especially between code from reputable vendors and the most critical sections of the software.

This is very interesting from a technical perspective, but I can't figure out how in the world these differences can possibly become advantages in the market for BlackBerry. Even if there are technical advantages for QNX (which I suspect are probably different tradeoffs rather than clear wins), how does it make a difference for users? If it doesn't make a difference in the user experience, this is just going to be an interesting technical footnote in computing history.

On my Android phone, if I install a shoddy app and it crashes, it generally takes my whole phone down with it. So I get stuck rebooting, can potentially lose any unsaved data if other things were going, and could potentially corrupt something if some settings or critical process was running at a key moment. This could be game-breaking during an emergency, or during a phone interview, or just any time in life. It gives the user the sense that the ENTIRE DEVICE is unreliable.

Likewise, I install an app... pull up my OS Monitor to find out the thing is using 90 percent of my 800MHz CPU in the background when the app shouldn't even be running.

With QNX's micro-kernel, if that POS app crashes ... it just dies while leaving everything else going. This gives the user the sense that the device is still good, but that app needs to gtfo ... it helps push the blame where the blame belongs (if you want to look at it from a psychological, end-user standpoint).

By focusing on "least privilege" instead of "most privilege", it helps shield your device from shoddy (or, more importantly, malicious) apps that could run amok. The specific CPU budget QNX allows helps prevent some stupid app from firing up and just going ape-shit, sucking up all of your CPU time, bogging down interaction or just locking up the device.

The QNX folks are basically tech-heads, and they have spent years fine-tuning their OS. It's an amazing thing, but it comes at a premium. Hence, we hardly see it except in top-end gear (medical equipment, high-end cars, etc.). All the rest of us bums get stuck with a "good enough" solution, like Android or Windows Phone.

edit:

The irony here is that in reading this story I almost felt like I was reading an Ubuntu/Shuttleworth article ... one OS to span the devices. QNX already has a unified OS back-end for many devices. If they can create a unified interface, they'll basically cut Ubuntu off at the pass, put BlackBerry back in good graces, and could become a major game-changer in the coming years. While other companies are content with resting on their laurels, just adding a bit more polish to the top of the OS, QNX has spent years refining an amazing OS that now just needs a bit of bling added on top to out-shine all the others.

If Blackberry plays their cards right, they could pull an "Apple" and come out from the woodwork to dominate in the next 10 years.

This is very interesting from a technical perspective, but I can't figure out how in the world these differences can possibly become advantages in the market for BlackBerry. Even if there are technical advantages for QNX (which I suspect are probably different tradeoffs rather than clear wins), how does it make a difference for users? If it doesn't make a difference in the user experience, this is just going to be an interesting technical footnote in computing history.

Failure is not an option in a phone. Blackberry picked QNX for reliability. In automotive use, they may use some flavor of linux for the entertainment system, but QNX for engine controls.

There are all sorts of ways to harden linux. Watchdog, kernel panic settings, etc. But they aren't particularly speedy. It takes some time to determine the system is hosed.

Actually failure is an option in a phone, but it is sure annoying. "In the beginning" in portable electronics, there was the battery pull. Just about any electronic device that is locked up can be brought back to functioning with a battery pull, essentially a POR (power on reset). As time progressed, batteries became captive in many consumer items. (Captive as in they are hard wired, not in a pack that can be pulled.) As a backup, the designer provided a tiny hole you could poke to force a reset. Today, there are devices with no way to externally reset them. With hardware watchdogs (a chip that needed to be tickled periodically else it resets the system), you can make somewhat reliable systems without external reset. OK for many applications, but not great for real time. (What do you do while your heart-lung machine is booting?)

Do we need a highly reliable smartphone? Well a friend of mine had her iphone 5 lock up and keep running until it killed the battery, and then not be able to be revived with a charge. Luckily apple replaces their junk without a hassle. It is all built into the price of the product.
