Poco-Blog...

...here's what's on our mind.

(Context: this is Part IV of a look at the future of smart product interface technology; Part I set the stage, Part II looked at touch-based systems, and Part III delved into touch-free systems.)

This week’s no doubt eagerly awaited installment of our series on smart product interfaces looks at some even more novel technology. Brain-computer interfaces (BCIs) are direct communication pathways between a human brain and an external device – that is, pathways that need no hands or voice to intermediate. Various types of sensor networks measure brain activity directly, which is then converted into something recognizable to external systems.

The Technology
Without getting too deep into the nitty-gritty of the technology, it’s worth a brief look at some of the common approaches used in these systems. Both invasive (implanted into the body) and non-invasive systems exist, with unsurprising trade-offs: invasive systems carry greater medical risk, while non-invasive systems often yield poor signal resolution because the skull dampens the signals. For both practical and psychological reasons, commercially plausible systems outside special circumstances are almost certain to be non-invasive for the foreseeable future.

On the sensing side, electroencephalography (EEG) is the most common technique; it typically involves an array of electrodes mounted to the scalp that measure voltage fluctuations resulting from ionic current flows within neurons. Functional magnetic resonance imaging (fMRI) has also been used effectively, detecting brain activity through the changes in blood flow that accompany it. However, fMRI carries vastly greater hardware requirements in size, cost, and complexity, so early commercial systems have typically used an EEG headset (such as the systems we looked at a few years ago).

On the processing side, I am confident that progress over the next few years will be rapid, partly because the current state of the art is still relatively crude. Historically, BCIs have had the most success translating physical commands (“Clench left fist”), but newer research has made strides in extracting more specific data (such as addresses or ATM numbers). However, one of the most interesting aspects of BCIs is that processing can be attacked from both sides of the problem. The traditional engineering approach is to develop and refine algorithms that can understand what the subject intends – to correlate particular brain activity measurements with physical commands or, ultimately, language. But because of the remarkable neuroplasticity of the human brain, it is also possible to work in reverse and train the brain to think in ways that are more easily measured and understood by existing systems. It will be fascinating to watch the interplay between these two approaches as the technology matures.
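To make the traditional engineering approach concrete, here is a minimal, entirely hypothetical sketch – not any particular product’s pipeline – of how a system might map a single-channel EEG window to a motor command: measure power in the ~10 Hz mu band (a classic motor-imagery feature), then compare it against a threshold learned from a few labeled calibration trials. All signals below are synthetic; real systems use many channels and far more sophisticated classifiers.

```python
import math
import random

random.seed(0)
FS = 250  # sample rate in Hz, typical of consumer EEG headsets

def bin_power(window, fs, freq):
    """Signal power at one frequency via a single-bin DFT."""
    n = len(window)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(window))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(window))
    return (re * re + im * im) / n

def synthetic_trial(command):
    """Fake a one-second, one-channel EEG window; 'left' trials add a 10 Hz rhythm."""
    noise = [random.gauss(0, 1) for _ in range(FS)]
    if command == "left":
        return [x + 3.0 * math.sin(2 * math.pi * 10 * i / FS)
                for i, x in enumerate(noise)]
    return noise

# "Calibration": learn a threshold on 10 Hz power from labeled example trials.
left_power = [bin_power(synthetic_trial("left"), FS, 10) for _ in range(20)]
rest_power = [bin_power(synthetic_trial("rest"), FS, 10) for _ in range(20)]
threshold = (sum(left_power) / len(left_power) +
             sum(rest_power) / len(rest_power)) / 2

def classify(window):
    """Map an EEG window to a command by thresholding mu-band power."""
    return "left" if bin_power(window, FS, 10) > threshold else "rest"

print(classify(synthetic_trial("left")))  # prints "left"
```

The “work in reverse” approach described above would keep exactly this crude classifier and instead train the user to produce brain activity it can distinguish – which is, roughly, how early neurofeedback-style BCI training works.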

The Why
Given the technical hurdles currently associated with BCIs, some people question whether they are worth the development headaches. Perhaps the most powerful potential application is for disability treatment – there is an already impressive body of research exploring the use of BCIs to control prosthetics and even, for instance, to restore non-congenital blindness. The ability to give a person back the use of a body part is a profoundly valuable result and should justify development of these systems all by itself.

However, there are other applications with larger potential markets that are also pushing both research and commercialization. Two of the major drivers are laziness and bandwidth. Laziness seems like a pretty minor incentive, but the galvanizing force of annoyance should not be underestimated. History is littered with product improvements that did nothing more than save users a few seconds but came to dominate the market anyway (does anyone under 30 even know what a rotary phone is anymore?).

The bandwidth issue is more powerful. In previous posts, I’ve described the progression of interface technology in terms of increasing technical sophistication, but another way to look at it is as a journey of increasing information transmission capacity. A button conveys a single bit with each press, and even a keyboard keystroke carries only a handful of bits. Touch screens improve on this but are inherently 2D. Gestural interfaces and voice command systems enable communication that is richer yet, but the ultimate in bandwidth will come when we can cut out the middleman (e.g. hands or mouth) altogether. Despite the sci-fi and touchy-feely connotations of a “mind-meld” (sarcasm intended), that concept embodies the pinnacle of possibility for human communication bandwidth – and arguably also richness of communication, depending on how philosophical we want to get.
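To put rough numbers on that progression, here is a back-of-envelope sketch. Every figure is an illustrative assumption – Shannon-style estimates put English at very roughly one bit of information per character, and the input rates are just typical ballpark speeds – not measurements of any real system:

```python
# Back-of-envelope interface bandwidth comparison.
# All figures below are illustrative assumptions, not measurements.
CHAR_ENTROPY = 1.0    # rough bits of information per English character
CHARS_PER_WORD = 5.5  # average word length, counting the trailing space

def bits_per_second(words_per_minute):
    """Effective information rate of text entry at a given words-per-minute."""
    return words_per_minute * CHARS_PER_WORD * CHAR_ENTROPY / 60

rates = {
    "hunt-and-peck typing": bits_per_second(20),   # ~1.8 bits/s
    "touch typing": bits_per_second(60),           # ~5.5 bits/s
    "speech": bits_per_second(150),                # ~13.8 bits/s
}
for name, bps in rates.items():
    print(f"{name:>22}: ~{bps:.1f} bits/s")
```

Even under generous assumptions, speech tops out at a few tens of bits per second – which is exactly why cutting out the hands-and-mouth middleman is so tantalizing.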

Such potential value does not come without potential costs, however. The idea of letting a person or system look directly into one’s thoughts will strike many as creepy, and significant technical obstacles stand between today’s state of the art and the kind of interface I posit above; it is even possible that the architecture of our brains does not lend itself to outputting the full information content of our thoughts. Also, and most importantly, we might have to wear dorky-looking helmets or headsets.

All in all, I expect brain-computer interfaces to become increasingly common in the world around us over the next few decades. In my next post, I will wrap up the series and summarize my best guess for where smart product interfaces are going – what would a competition be without a winner?