7/30/2010

Every month, QNX publishes The Source, a newsletter that provides the skinny on the latest QNX BSPs, product releases, whitepapers, webinars, and blog posts. It's a quick read — virtually no marketing fluff — and you can scan it in about 20 seconds. To subscribe, click here.

In case you missed the July issue, here it is:

The Source: monthly e-newsletter for the QNX community

QNX® Software Development Platform 6.5.0 now shipping
We're pleased to announce the general availability of release 6.5.0. New features include multi-core support for ARM Cortex-A9 processors and Freescale Power e500MC cores, a persistent publish/subscribe service, an updated tool chain, and various performance enhancements. More...

7/28/2010

I've posted a number of articles on the LTE Connected Car, which helps automakers, service providers, and content developers explore what happens when cars connect to the cloud over wideband wireless networks.

But here's the thing: people often assume that a cloud-connected car is simply about pushing more social media, more music, more video, more games, and, in short, more entertainment into the vehicle. Nested within that assumption is another assumption: that cloud-connected cars will lead to more driver distraction.

I'm not convinced. First, automakers aren't about to create cars that let you post Facebook updates or upload YouTube videos while driving. And even if they were tempted to do so, their teams of lawyers would advise otherwise. And if their lawyers failed, teams of government regulators (and lawyers) would step in. In short, mobile broadband technology will deliver more "infotainment" into the automobile, but drivers won't have unfettered access to this content while rocketing down the turnpike at 80 mph.

Second, cloud connectivity can actually help reduce distraction. For instance, today's voice recognition systems are relatively primitive, using grammars and speech models limited by the car's onboard processing and storage. But once you add a wideband wireless connection, the car’s infotainment unit can use a sophisticated server-based voice recognition system that lets the driver use natural language. In fact, a remote server can provide a variety of helpful features — such as realtime traffic reports integrated into navigation services — that are difficult or impossible to implement using only on-board resources.

"People forget that, when you have an Internet connection, data can travel in both directions..."

Third, people forget that, when you have an Internet connection, data can travel in both directions. Case in point: "AJ", the 2011 Ford Fiesta that can automatically post tweets on its own Twitter account. If you read the New York Times article on AJ, you may feel that a Twittering car is a solution in search of a problem. And you'd be right. But AJ is just the beginning — or, more correctly, a platform for exploring what happens when cars become mobile Internet terminals.

Imagine, if you will, a critical mass of vehicles acting as floating traffic probes, anonymously uploading real-time information about traffic and road conditions, such as whether a stretch of road is icy. And then imagine nearby vehicles accessing this aggregated data and warning their drivers to slow down accordingly.

Personally, I'd be happy if my car distracted me long enough to say, "Paul, buddy, ease up on the accelerator, or you'll wipe out." What about you?

7/26/2010

Has it been five years already? I just came across a blog post on the PARS3C website that says today marks the fifth anniversary of the Space Shuttle’s Return to Flight mission.

This was, of course, the first shuttle mission after the tragic loss of the Columbia and its crew in 2003. It was also the first mission to use the QNX-based Laser Camera Systems (LCS) from Neptec.

The goal of the LCS is simple: Detect any fractures or anomalies on the shuttle’s exterior surface that could lead to a repeat of the Columbia disaster. Achieving that goal is anything but simple, however. To get an idea, check out this article that I co-authored with Iain Christie of Neptec back in 2005...

On February 1, 2003, the space shuttle Columbia disintegrated upon re-entering the Earth’s atmosphere, killing the entire crew. The most likely cause: a hole in the shuttle’s left wing. To avoid such disasters in the future, NASA decided they needed in-flight technology that could inspect the shuttle exterior — including areas normally invisible to the astronauts inside — and identify even the smallest threat to mission safety.

Enter the Neptec Laser Camera System (LCS). Designed and built by Neptec, a Canadian-based developer of space vision systems, this high-precision 3D laser scanner made its debut on the space shuttle Discovery, which launched on July 26 of this year and returned safely to Earth two weeks later. Using the LCS, NASA can detect tiny fractures in the shuttle's heat shield, even if they’re only a few millimeters in size. Just as important, the LCS can provide NASA with the data needed to determine whether a fracture does, in fact, pose a threat to the shuttle crew.

In this artist’s rendering, Neptec's LCS scans the nose cone of the shuttle Discovery for potential damage to the shuttle's heatshield tiles.

While in orbit, the shuttle faces extreme, fluctuating temperatures, with the sun rising and setting 18 times each day. These harsh conditions make it virtually impossible to use a traditional video camera to inspect the shuttle exterior. Neptec's LCS not only provides 3D information of the exterior, but is immune to changing temperature and lighting conditions. To achieve this immunity, the LCS incorporates a number of design features, including the wavelength of the laser source (1.5 micrometers) and the very small instantaneous field of view of the scanner. Radiation at 1.5 micrometers falls in a part of the spectrum in which there is very little competing radiation from the sun. As a result, there is only a small possibility that direct sunlight or specular reflections of the sun will be in the field of view at any given time.

A striking insight
Neptec has been a prime NASA contractor for 10 years and has worked on over 25 space shuttle flights. Working closely with more than 40 astronauts, the Neptec team has logged over 10,000 hours at NASA's mission control center in Houston.

The inspiration for Neptec's core technology occurred when an engineer for Canada's National Research Council (NRC) was watching The Graduate, a 1967 movie starring Dustin Hoffman. At the end of the movie, Hoffman runs quickly past a brick wall. The engineer found the image — a person moving against a regular pattern of bricks — very striking. More importantly, he realized that a computer could be programmed to use this regular pattern as a means to accurately measure the person’s motion. The idea led to the birth of the Space Vision System (SVS), with which Neptec eventually became involved.

Synchronized scanning
Neptec’s LCS is a wide-angle, high-speed, high-precision laser scanner. At distances of up to 10 metres, it can create a model of any object that is accurate to a few millimeters. Using a synchronized scanning technique, the LCS generates precise images in three modes: QuickView Area Scan, Detailed Area Scan, and Continuous Line Scan.

Using the QuickView Area Scan, the LCS gathers data to generate a 2D image that helps the astronaut operating the system to determine what is in the field of view of the LCS. In the Detailed Area Scan and Continuous Line Scan modes, the LCS gathers 3D data that is processed by software running on a ground-based workstation to produce 3D images and make quantitative measurements.

The LCS generated this 3D wireframe model, which shows a damaged tile on the underside of the shuttle wing. The model is color-coded to indicate the depth of the damage.

System design
The LCS consists of three main components: the Laser Camera Head, the Laser Camera Controller, and the Image Analysis Workstation. The Laser Camera Head is attached to a 50-foot-long extension of the Canadarm (the robotic arm used to deploy and retrieve satellites from the shuttle), allowing it to scan critical portions of the shuttle exterior. The output of the Laser Camera Head consists of raw time-tagged 3D data that is forwarded to the Laser Camera Controller for consolidation and storage.

The Laser Camera Controller, installed in the shuttle cockpit, consists of a laptop computer that executes the Neptec-developed LCS control software. This software provides the graphical interface for the LCS, which lets the user control various functions of the LCS and view the inspection images.

The ground-based Image Analysis Workstation consists of a computer platform situated in or near the mission control center. It runs software applications that process and analyze the 3D image data to support damage-inspection and detection operations.

Tight schedules
The biggest challenge facing Neptec’s LCS software development team was the tight production schedule. The team had to design and develop software to run on new hardware platforms, port application code to a new version of the operating system, develop new features, and qualify and deliver the product — all in less than one year.

“As if that wasn’t difficult enough,” says John Schneider, Neptec’s director of engineering, “Neptec encountered technical challenges with interfacing third-party hardware and resolving cross-platform communications protocol issues.”

QNX Neutrino, the realtime operating system (RTOS) chosen for the LCS, played a key role in helping Neptec address these issues.

OS matters
Neptec’s software developers have worked with QNX operating systems since 1991. They first used QNX during research and development work for Neptec’s early Space Vision System (SVS). Used with the Canadarm, the SVS allows astronauts to accurately position and orient payloads, such as satellites. In fact, the SVS can support all kinds of shuttle operations — everything from piloting the shuttle to providing relative range, bearing, and elevation information that helps astronauts monitor clearances between a payload and the shuttle.

In 2000 the Neptec team used QNX in the first-generation LCS system, which they developed as a Design Test Object (DTO) for the STS-105 shuttle mission in August 2001. They then ported the code to the QNX Neutrino RTOS, the latest generation of QNX operating system technology.

“Having developed a core expertise in QNX and gaining trust in the reliability of QNX products,” says John Schneider, Neptec’s director of engineering, “QNX became the natural choice for the LCS.”

Because Neptec developed a new computing platform for the LCS project, they needed a custom Board Support Package (BSP) to run QNX Neutrino on the new hardware. Consequently, they invited the application support team at QNX Software Systems to develop the BSP. This saved Neptec considerable time and resources, allowing Neptec developers to concentrate on the application development.

“The Neptec staff found the QNX support team to be knowledgeable, competent, and very responsive to the requirements,” says Schneider.

Extreme conditions
On the space shuttle, there is no room for error. Every system must perform with absolute reliability. To address this challenge, the QNX Neutrino RTOS uses a microkernel architecture. Microkernel RTOSs have two defining characteristics, both of which are critical to ensuring system reliability:

1. The OS kernel contains only a small core of fundamental services, such as timers, messages, and scheduling.

2. All higher-level services and programs — device drivers, file systems, protocol stacks, user applications, and so on — run outside the kernel as separate, memory-protected components.

In the QNX Neutrino RTOS, most system services run as separate, memory-protected processes.

This architecture offers two key reliability benefits. First, it makes it much easier to isolate and correct programming errors before the errors can make their way into a deployed system. For instance, if any service or application under development attempts to access memory outside its process container, the OS can identify the process responsible, indicate the location of the fault, and create a process dump file viewable with source-level debugging tools. The dump file can include all the information the debugger needs to identify the source line that caused the problem, including a history of function calls, contents of data items, and other diagnostic information. Errors that would normally take days or weeks to resolve can be pinpointed almost immediately.

Second, microkernel architecture enables dramatically shorter Mean Time to Repair (MTTR). Consider what happens if, say, a device driver faults in a deployed system: the OS can terminate the driver, reclaim the resources the driver was using, and then restart it, often within a few milliseconds. From start to finish, the entire procedure can be orders of magnitude faster than the conventional solution, which is to reboot the entire system.

“The LCS was a critical element of NASA's Return to Flight mission and we had to be sure it used the most reliable operating system available,” said Iain Christie, vice president of research and development at Neptec. “Selecting the QNX Neutrino RTOS was an easy decision because we already knew that the system can handle the extreme conditions found in space.”

This article first appeared in Embedded Control Europe (ECE) magazine.

For more information on Neptec's technology and its role in NASA's Return to Flight mission, visit www.neptec.com. For more information on the QNX Neutrino RTOS, visit www.qnx.com.

7/21/2010

For decades, science fiction writers have speculated on what will happen to humanity once robots become sufficiently intelligent and sufficiently easy to mass-produce. The scenarios are endless: robots replacing humans, robots killing humans, robots entertaining humans, robots protecting humans, and last but not least, robots becoming human.

Meanwhile, back in the real world, many researchers and engineers are focusing on robotic technology that can assist humans. These include intelligent prostheses, such as bionic hands or arms, and "rehab robots" that help stroke patients re-learn to walk.

Designing a robotic system that can assist humans is one thing; testing to see whether it is accomplishing its goals is another.

Enter Kinea Design, a UK-based firm that specializes in human interactive mechatronics, including bionic hands. To test haptic "tactors" that let bionic devices provide a sense of touch, Kinea created the Greenbox, a test instrument based on the QNX Neutrino RTOS. This instrument can calibrate load cells, check closed-loop responses of actuators, interface with a variety of sensors, and perform a host of other tasks necessary for testing and verification.

Testing and refining these haptic tactors is critical: By providing a sense of touch, the tactors free amputees from having to rely solely on visual input when manipulating objects. Moreover, they enable amputees to sense vibration, surface texture, friction, and other useful environmental cues.

I'm only scratching the surface of Kinea's technology. For an in-depth article on their testing methods and philosophy, including the Greenbox, click here. For more information on their products, click here.

For other examples of how QNX technology enables robotics and robotics research, click here.

7/14/2010

Okay, boys and girls, time to press the fast forward button. The "30 years of QNX" series has only reached 1998, but if you don't mind, we're going to jump ahead a few years — albeit temporarily.

Why the jump? Because, to my mind, the period from 2005 to 2010 has been an era of unprecedented innovation for QNX Software Systems. And the more I think about it, the more I want to talk about it. Simple as that.

So without further ado, let's look at what QNX has been up to since 2005:

Pretty impressive, I think. But do you know what's most gratifying to a QNX employee like myself? The fact that all this innovative technology now touches the lives of tens of millions of people every day. Sometimes, it touches them through really cool things, like car infotainment systems. And sometimes, it touches them through really important things, like cancer-treatment systems. Either way, I feel a strong connection between what we do and the smooth running of everyday life. And what's better than feeling connected?

On Monday, RIM posted a new sneak peek of its BlackBerry 6 OS. The video highlights several features, including a new browser, universal search, and the ability to post to multiple social networking sites simultaneously. (That last feature makes my social-media me very happy.)

7/12/2010

If you're a web developer or designer, there's a good chance you read Adobe Edge, an e-zine published by — you guessed it — Adobe.

This month, Adobe Edge takes the road less travelled with an article on how QNX Software Systems is employing Adobe Flash. No, it isn't a case study on how QNX has used Flash to spiffy up its website, but rather, an exploration of how QNX is using Flash to change the face of in-car computing.

To quote the author, Eric Oldrin, "At first, I was just interested in ways to make my dash look like Knight Rider but as I quickly discovered, when QNX CAR and Flash connect your car to the cloud the possibilities are limitless."

The article includes quotes from Andy Gryc and Paul Streatch of QNX Software Systems. To read it, click here.

Yeah, I know, everyone posts their documentation online these days. So what's the big deal?

Well, you have to realize that the QNX RTOS v2 was replaced by a newer version of the QNX RTOS back in 1991. And the only reason that QNX has posted the manual online is because customers keep asking for it. And the only reason customers keep asking for it is because, after 20 or more years, their QNX 2 systems are *still* running.

7/08/2010

This just in: QNX has published the source code to its Flash-based smart energy demo.

The download provides source code for the demo's Flash application and device drivers. It also includes a workshop presentation that walks you through the process of building, editing, and running the demo software.

To download the source code and workshop presentation, click here. Note that you'll have to set up a "MyQNX" account, if you haven't already.

If you aren't familiar with QNX's smart energy reference design, it provides out-of-the-box support for ZigBee sensors, Insteon home area networks (HANs), streaming IP video cameras for security monitoring, an application for calculating the cost of energy consumption, and Internet connectivity for retrieving weather information and performing remote control and diagnostics. Other features include zone temperature controls, individual and zone light controls, and appliance monitoring and control.

The reference also employs persistent publish/subscribe technology from QNX Software Systems, which provides an abstraction layer between the HMI (user interface) and the system’s control software. As a result, it becomes much easier to add, change, or upgrade sensors, thermostats, alarms, and control mechanisms, without having to change the HMI.

Speaking of the HMI, here's a screen capture of the main screen for the smart energy reference. But why stop at a still picture, when you can see a moving picture instead? Click here to see videos of the reference on three platforms: Atmel, Freescale, and TI.