Tag Archives: E Ink Corporation

This story has it all: military, patents, international competition and cooperation, sex (well, not according to the academics but I think it’s possible), and a bizarre device – the PaperPhone (last mentioned in my May 6, 2011 posting on Human-Computer Interfaces).

“If you want to know what technologies people will be using 10 years in the future, talk to the people who’ve been working on a lab project for 10 years,” said Dr. Roel Vertegaal, Director of the Human Media Lab at Queen’s University in Kingston, Ontario. By the way, 10 years is roughly the length of time Vertegaal and his team have been working on a flexible/bendable phone/computer and he believes that it will be another five to 10 years before the device is available commercially.

PaperPhone consists of an Arizona State University Flexible Display Center 3.7” Bloodhound flexible electrophoretic display, augmented with a layer of 5 Flexpoint 2” bidirectional bend sensors. The prototype is driven by an E Ink Broadsheet AM 300 Kit featuring a Gumstix processor. The prototype has a refresh rate of 780 ms for a typical full screen gray scale image.

An Arduino microcontroller obtains data from the Flexpoint bend sensors at a frequency of 20 Hz. Figure 2 shows the back of the display, with the bend sensor configuration mounted on a flexible printed circuit (FPC) of our own design. We built the FPC by printing its design on DuPont Pyralux flexible circuit material with a solid ink printer, then etching the result to obtain a fully functional flexible circuit substrate. PaperPhone is not fully wireless. This is because of the supporting rigid electronics that are required to drive the display. A single, thin cable bundle connects the AM300 and Arduino hardware to the display and sensors. This design maximizes the flexibility and mobility of the display, while keeping its weight to a minimum. The AM300 and Arduino are connected to a laptop running a Max 5 patch that processes sensor data, performs bend gesture recognition and sends images to the display. p. 3
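The pipeline described above (five bend sensors sampled at 20 Hz by an Arduino, with gesture recognition done on a host machine) can be sketched in rough form. Everything in this sketch is hypothetical: the sensor positions, the threshold, and the gesture vocabulary are my own illustrations, not details from the paper, and the actual recognition runs in a Max 5 patch rather than Python.

```python
# Hypothetical sketch of threshold-based bend-gesture recognition over
# five bidirectional bend-sensor readings. Names and values are
# illustrative, not taken from the PaperPhone paper.

SENSORS = ["top_left", "top_right", "side", "bottom_left", "bottom_right"]
BEND_THRESHOLD = 0.3  # normalized reading beyond which a bend "counts"

def classify_gesture(readings):
    """Map one 5-tuple of normalized readings (-1..1) to a gesture name.

    Positive values mean a bend toward the user, negative away;
    a reading near zero means that corner/side is flat.
    """
    bends = {
        name: value
        for name, value in zip(SENSORS, readings)
        if abs(value) > BEND_THRESHOLD
    }
    if not bends:
        return "flat"
    if set(bends) == {"top_right"}:
        # e.g. flicking the top-right corner to turn a page
        return "page_forward" if bends["top_right"] > 0 else "page_back"
    if set(bends) == {"side"}:
        return "select" if bends["side"] > 0 else "back"
    return "unrecognized"

# A 20 Hz polling loop would call classify_gesture on each new sample:
sample = (0.0, 0.55, 0.0, 0.0, 0.0)
print(classify_gesture(sample))  # page_forward
```

A real recognizer would also debounce across successive samples so a single slow bend doesn’t fire repeatedly, but the basic idea of mapping sensor patterns to discrete gestures is the same.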

It may look ungainly but it represents a significant step forward for the technology as this team (composed of researchers from Queen’s University, Arizona State University, and E Ink Corporation) appears to have produced the only working prototype in the world for a personal portable flexible device that will let you make phone calls, play music, read a book, and more by bending it. As they continue to develop the product, the device will become wireless.

The PaperPhone and the research about ‘bending’, i.e., the kinds of bending gestures people would find easiest and most intuitive to use when activating the device, were presented in Vancouver in an early session at the CHI 2011 Conference where I got a chance to speak to Dr. Vertegaal and his team.

Amongst other nuggets, I found out the US Department of Defense (not DARPA [Defense Advanced Research Projects Agency] oddly enough) has provided funding for the project. Military interest is focused on the device’s low energy requirements, lowlight screen, and light weight, in addition to its potential ability to be folded up and carried like a piece of paper (i.e., it could mould itself to fit a number of tight spaces), as opposed to the rigid, unyielding form of a standard mobile device. Of course, all of these factors are quite attractive to consumers too.

As is imperative these days, the ‘bends’ that activate the device have been patented and Vertegaal is in the process of developing a startup company that will bring this device and others to market. Queen’s University has an ‘industrial transfer’ office (they probably call it something else) which is assisting him with the startup.

There is international interest in the PaperPhone that is collaborative and competitive. Vertegaal’s team at Queen’s is partnered with a team at Arizona State University led by Dr. Winslow Burleson, professor in the Computer Systems Engineering and the Arts, Media, and Engineering graduate program and with Michael McCreary, Vice President Research & Development of E Ink Corporation representing an industry partner.

On the competitive side of things, the UK’s University of Cambridge and the Finnish Nokia Research Centre have been working on the Morph which as I noted in my May 6, 2011 posting still seems to be more concept than project.

Vertegaal noted that the idea of a flexible screen is not new and that North American companies have gone bankrupt trying to bring the screens to market. These days, you have to go to Taiwan for industrial production of flexible screens such as the PaperPhone’s.

One of my last questions to the team was about pornography. (In the early days of the Internet [which had its origins in military research], there were only two industries that made money online, pornography and gambling. The gambling opportunities seem pretty similar to what we already enjoy.) After an amused response, the consensus was that like gambling it’s highly unlikely a flexible phone could lend itself to anything new in the field of pornography. Personally, I’m not convinced about that one.

So there you have a case study for innovation. Work considered bleeding edge 10 years ago is now cutting edge and, in the next five to 10 years, that work will become a consumer product. Along the way you have military investment, international collaboration and competition, failure and success, and, possibly, sex.

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,
Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles: as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
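The hub role the “Infinity Simulator” plays (one program between the headset and everything else) can be sketched in rough form. Everything below is hypothetical: the function names, the normalized calm score, and the scale factors are my own illustrations. The real system is three C programs talking to a NOMAD rigging console, a MIDI-driven ION light board, Max/MSP, and Isadora/Jitter, none of which is modelled here.

```python
# Illustrative sketch of an intermediary hub: read one calm/focus score
# from the EEG headset and fan it out to rigging, lights, and sound.
# All names and scale factors are hypothetical.

MAX_HEIGHT_FT = 30.0  # riders can rise up to thirty feet

def route(calm_score, outputs):
    """Fan one normalized calm score (0.0-1.0) out to each subsystem.

    `outputs` maps subsystem names to callables; returning the computed
    command values makes the mapping easy to inspect or test.
    """
    calm = max(0.0, min(1.0, calm_score))  # clamp noisy headset values
    commands = {
        "rigging": calm * MAX_HEIGHT_FT,  # target height in feet
        "lights": int(calm * 127),        # e.g. a MIDI-style 0-127 level
        "sound": 1.0 - calm,              # more distraction when less calm
    }
    for name, value in commands.items():
        outputs[name](value)
    return commands

# Stub subsystems that just record what they were told to do:
log = {}
outputs = {name: (lambda v, n=name: log.__setitem__(n, v))
           for name in ("rigging", "lights", "sound")}
print(route(0.5, outputs))  # {'rigging': 15.0, 'lights': 63, 'sound': 0.5}
```

The appeal of this shape, as Duenyas describes it, is that once everything talks to one center, any new input (an iPad, another sensor) can drive the whole theater.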

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,
Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.
…
NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.
…
But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kid’s toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two technologies that are very similar to each other and that allow for more traditional human/computer interactions, one of which I’ve posted about previously: the Nokia Morph (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. User will never have to worry about the battery life. It is a device that will help us in our everyday life, to keep our self connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. new structures enabled by the novel materials and manufacturing methods it would be impossible to build Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy efficient way.

Graphene will enable evolution of the current technology e.g. continuation of the ever increasing computing power when the performance of the computing would require sub nanometer scale transistors by using conventional materials.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab at the School of Computing at Queen’s University (Kingston, Ontario, Canada) in cooperation with a team from Arizona State University and E Ink Corporation have been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association of Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both are more intimate relationships than we’ve had with the computer till now, the latter far more so than the former. If any of you have thoughts on this, please do leave a comment as I would be delighted to engage in some discussion about this.

You can get more information about the Association of Computing Machinery’s CHI 2011 (Computer Human Interaction) conference where Dr. Vertegaal will be presenting here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin-off company from the Massachusetts Institute of Technology (MIT).

While the debates rage on about tablets versus e-readers and about e-ink vs LCD readers and about Kindle vs Nook and other e-reader contenders, there are other more fundamental debates taking place as per articles like E-reading: Revolution in the making or fading fad? by Annie Huang on physorg.com,

Four years ago Cambridge, Mass.-based E Ink Corporation and Taiwan’s Prime View International Co. hooked up to create an e-paper display that now supplies 90 percent of the fast growing e-reader market.

The Taiwanese involvement has led some observers to compare e-reading to the Chinese technological revolution 2,000 years ago in which newly invented paper replaced the bulky wooden blocks and bamboo slats on which Chinese characters were written.

But questions still hang over the Taiwanese-American venture, including the readiness of the marketplace to dispense with paper-based reading, in favor of relatively unfamiliar e-readers.

“It’s cockamamie to think a product like that is going to revolutionize the way most people read,” analyst Michael Norris of Rockville, Maryland research firm Simba Information Co. said in an e-mail. Americans use e-books at a rate “much, much slower than it looks.”

At 26, I’m part of a generation raised on gadgets, but actual books are something I just refuse to give up.

…

One recent story in the New York Times went so far as to claim that iPads and Kindles and Nooks are making the very act of reading better by — of course — making it social. As one user explained, “We are in a high-tech era and the sleekness and portability of the iPad erases any negative notions or stigmas associated with reading alone.” Hear that? There’s a stigma about reading alone. (How does everyone else read before bed — in pre-organized groups?) Regardless, it turns out that, for the last two decades, I’ve been Doing It Wrong. And funny enough, up until e-books came along, reading was one of the few things I felt confident I was doing exactly right.

…

So is my overly personal, defensive reaction to the e-reader boom nothing more than preemptive fear of the future, of change in general? I’d like to think I’m slightly more mature than that, but at its core my visceral hatred of the computer screen-as-book is at least partially composed of sadness at the thought of kids growing up differently from how I did, of the rituals associated with learning to read — and learning to love to read — ceasing to resemble yours and mine. Nine-year-olds currently exist who will recall, years from now, the first time they read “Charlotte’s Web” on their iPads, and I’m going to have to let that go. For me, there’s just still something universal about ink on paper, the dog-earing of yellowed pages, the loans to friends, the discovery of a relative’s secret universe of interests via the pile on their nightstand. And it’s not really hyperbole to say it makes me feel disconnected from humanity to imagine these rituals funneled into copy/paste functions, annotated files on a screen that could, potentially, crash.

I doubt I’m the only one, even in my supposedly tech-obsessed generation, who thinks this way.

Well, maybe Silvers is in the minority, but there is at least one market sector, education texts, that e-readers don’t seem to satisfy. As Pasco Phronesis (David Bruggeman) notes in an August 12, 2010 posting, there is evidence that e-readers are less efficient than regular books,

Edward Tenner (who you should be following on general principle) at The Atlantic gathers some findings that suggest e-readers are less effective than regular books from an efficiency perspective – something that matters to readers concerned with educational texts. Both in terms of reading speed and the distraction of hypertext links, e-Readers are not the best means to focus on whatever text you’re trying to read.

Those problems may be remedied with a new $46M investment in Kno, Inc. (from the Sept. 8, 2010 news item on physorg.com),

Founded in May 2009 and short for “knowledge,” Kno is developing a two-panel, touchscreen tablet computer that will allow users to read digital textbooks, take notes, access the Web and run educational applications.

“Kno is gearing up to launch the first digital device that we believe will fundamentally improve the way students learn,” said Osman Rashid, Kno’s chief executive and co-founder.

Rashid said the funding will “help us continue to deliver on our product roadmap and ultimately deliver on our vision to bring innovative digital technology to the world of education.”

We’ve pointed out in the past that if you’re “buying” ebooks on devices like the Kindle or the iPad, it’s important to remember that you’re not really “buying” the books, and you don’t really own them. We’re seeing that once again with a story on Consumerist about a woman who was locked out of the ebooks on her Kindle for a month:

The worst thing about this story isn’t Amazon’s conduct; it’s the company’s technical capabilities. Now we know that Amazon can delete anything it wants from your electronic reader. That’s an awesome power, and Amazon’s justification in this instance is beside the point. As our media libraries get converted to 1’s and 0’s, we are at risk of losing what we take for granted today: full ownership of our book and music and movie collections.

Most of the e-books, videos, video games, and mobile apps that we buy these days aren’t really ours. They come to us with digital strings that stretch back to a single decider—Amazon, Apple, Microsoft, or whomever else. … Now we know what the future of book banning looks like, too.

Consider the legal difference between purchasing a physical book and buying one for your Kindle. When you walk into your local Barnes & Noble to pick up a paperback of Animal Farm, the store doesn’t force you to sign a contract limiting your rights. If the Barnes & Noble later realizes that it accidentally sold you a bootlegged copy, it can’t compel you to give up the book—after all, it’s your property. The rules are completely different online. When you buy a Kindle a book [sic], you’re implicitly agreeing to Amazon’s Kindle terms of service. The contract gives the company “the right to modify, suspend, or discontinue the Service at any time, and Amazon will not be liable to you should it exercise such right.” In Amazon’s view, the books you buy aren’t your property—they’re part of a “service,” and Amazon maintains complete control of that service at all times. Amazon has similar terms covering downloadable movies and TV shows, as does Apple for stuff you buy from iTunes.

I certainly like owning my books, and the idea that some unseen individual might decide to remove access with a few keystrokes gives me pause. As for whether or not people are using e-readers and their ilk, I have more about that along with my thoughts on these debates and what’s happening with ‘the word’ in part 3.