Human Media Lab
http://www.hml.queensu.ca/

The Human Media Lab is one of Canada's premier media laboratories. Its mandate is to develop disruptive technologies and new ways of working with computers that are viable 10 to 20 years from now. We are currently working on the design of organic user interfaces, an exciting new paradigm that allows computer interfaces to have any physical shape or form through flexible display technologies.

Experimenting with the Future of Play at LEGO® World Expo
Posted by Roel on Mon, 12 Feb 2018 (http://www.hml.queensu.ca/blog/flyinglegobricks)

Visitors will be able to experience and play with new technology combining LEGO® bricks and drones.

COPENHAGEN - February 15-18, children and families visiting the LEGO® World expo in Copenhagen, Denmark will have the chance to make their brick-building dreams take flight with a flock of interactive miniature drones developed by the Human Media Lab at Queen’s University in Canada in collaboration with the LEGO Group’s Creative Play Lab.

The system allows children to arrange LEGO elements into a shape of their choice and watch as a group of miniature drones takes flight to mimic the shape and colour of their creation in mid-air. With the aid of tiny sensors and gyroscopes, the system also tracks when the children move, twist and bend their designs. The drones faithfully replicate any shape alterations as an in-air animation.
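
As an illustration of the mapping such a system needs, here is a minimal Python sketch under stated assumptions: tracked brick positions and colours are scaled into flight-space waypoints, one drone per brick. The brick-tracker fields, the drone API, and the scaling constants are all hypothetical; the actual HML/LEGO implementation is not public.

```python
# Hypothetical sketch, not the actual HML/LEGO system: map tracked LEGO
# bricks to drone waypoints so the swarm mirrors the construction in mid-air.
from dataclasses import dataclass

@dataclass
class Brick:
    x: int        # stud column reported by the brick tracker (assumed input)
    y: int        # stud row
    z: int        # plate height
    color: str    # e.g. "red", copied to the drone's LEDs

STUD_PITCH_M = 0.008   # 8 mm between LEGO studs
SCALE = 50             # magnify the model 50x into flight space (assumed)
FLOOR_M = 1.0          # keep the swarm at least 1 m above the table

def brick_to_waypoint(b: Brick) -> tuple[float, float, float]:
    """Scale stud coordinates into metres of flight space."""
    s = STUD_PITCH_M * SCALE
    return (b.x * s, b.y * s, FLOOR_M + b.z * s)

def update_swarm(bricks: list[Brick], drones: list) -> None:
    """One drone per brick: fly to the scaled position, copy the colour.

    Called again whenever the tracker reports the model was moved, twisted
    or bent, so shape changes replay as an in-air animation.
    """
    for drone, brick in zip(drones, bricks):
        drone.goto(*brick_to_waypoint(brick))  # assumed drone API
        drone.set_led(brick.color)             # assumed drone API
```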

“At the LEGO Group, we continuously explore the opportunities offered by new technologies to create fun and creative experiences for children. We are happy to offer the visitors at LEGO World the chance to experiment with LEGO bricks and drones in collaboration with Queen’s Human Media Lab. While the technology is a playful experiment, and not a real LEGO product, it is a way for us to explore the boundaries of what can be done with a combination of technology, LEGO bricks and loads of playful imagination,” said Tom Donaldson, VP of Creative Play Lab, at the LEGO Group.

The LEGO Creative Play Lab is a department within the LEGO Group, focusing on inventing the future of play. One of the ways it does this is by looking at different trends and ways in which children, parents and families play and interact with play material, aiming to create the play experiences of tomorrow and unleash their creative potential.

“At the Human Media Lab, we believe this technology has the potential to take experiential learning to an entirely new level. We have created a technology that works to blend the digital and physical worlds together right before children’s eyes,” says Dr. Vertegaal, head of the Human Media Lab and professor of Human-Computer Interaction at Queen’s University in Kingston, Canada.

He believes that the drone technology could potentially unlock new realms of interactive teaching, capable of providing children with insights into the physical world. While the system is currently at an experimental stage, Vertegaal sees potential for the technology to be used in the future to teach young schoolchildren about physics.

“As an example, imagine us interactively reconstructing the movement of planets around our sun or distant stars in the Milky Way galaxy,” says Dr. Vertegaal. “With this technology, we are able to simulate the physics of the natural world like gravity, planetary orbits, and more, giving children a chance to see what they have long learned from textbooks and two-dimensional depictions, in a real physical environment.”

The Human Media Lab has worked closely together with the Creative Play Lab team at the LEGO Group and Dr. Vertegaal’s research collaborator, Prof. Tim Merritt from Aalborg University in Denmark, to create the installation. Children and parents at the LEGO World expo who try the new technology will have the opportunity to discuss their experiences with Prof. Merritt. Information from these discussions will provide insights into how children interact with the drones, their ideas for how they’d like to play and learn with it in the future, and about the issues they encountered while operating the technology.

Media Footage
High resolution photographs are available rights-free by clicking the thumbnails below. Please include a photo credit to Human Media Lab. Music by bensound.com.

About the Human Media Lab at Queen’s University
The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include eye contact sensors, smart pause, attention-aware smartphones, PaperPhone, the world’s first flexible phone, PaperTab, the world’s first paper computer and TeleHuman2, the world’s first holographic teleconferencing system. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing. Working with him are a number of graduate and undergraduate students in engineering, design and psychology.

About the LEGO Group
The LEGO Group is a privately held, family-owned company with headquarters in Billund, Denmark, and main offices in Enfield, USA, London, UK, Shanghai, China, and Singapore. Founded in 1932 by Ole Kirk Kristiansen, and based on the iconic LEGO® brick, it is one of the world's leading manufacturers of play materials. Guided by the company spirit, “Only the best is good enough”, the company is committed to the development of children and aims to inspire and develop the builders of tomorrow through creative play and learning. LEGO products are sold worldwide and can be virtually explored at www.LEGO.com. For more news from the LEGO Group, information about our financial performance and responsibility engagement, please visit http://www.LEGO.com/aboutus

About Human Centered Computing at Aalborg University
Human-Centered Computing at Aalborg University is a research unit in the Department of Computer Science providing research and teaching in Interaction Design (IxD) and Human-Computer Interaction (HCI). Since 1974, Aalborg University (AAU) has provided knowledge, development and highly qualified graduates to the outside world. More than 20,000 students are enrolled at Aalborg University and more than 3,500 staff members are employed across the University’s three campuses in Aalborg, Esbjerg and Copenhagen. For more information about the research unit, please visit http://www.cs.aau.dk/research/human-centered-computing

Prof. Vertegaal on Real Reality Interfaces: Holograms and Programmable Matter
Posted by Roel on Wed, 02 Aug 2017 (http://www.hml.queensu.ca/blog/2017/8/2/prof-vertegaal-on-real-reality-interfaces-holograms-and-programmable-matter)

Roel Vertegaal's take on Virtuality, showcasing the latest research at the Human Media Lab at Queen's, which is moving towards hologrammatic displays that represent physical matter without any augmentation of the body.

Presentation: Content Delivery for Tomorrow, From Vibrotactile Notifications to Mid-Air Displays
Posted by Roel on Sun, 11 Dec 2016 (http://www.hml.queensu.ca/blog/2016/12/11/content-delivery-for-tomorrow-from-vibrotactile-notifications-to-mid-air-displays)

Monday, 12th of December, 11:00 AM @ HML, 3rd Floor Jackson Hall, Queen's University
Professor Morten Fjeld
Head of t2i Lab, www.t2i.se
Chalmers University of Technology, Sweden

Abstract: The talk presents three projects in the field of emerging and alternative display techniques; the first two are in the field of haptic display, the third is in the area of mid-air display. The OmniVib project presents some basic studies and principles to leverage cross-body vibrotactile notifications for mobile phones. The HaptiColor project deals with a more specific challenge, but its insights bear on a wider range of applications: to assist the colorblind, we employed a vibration wristband that conveys interpolated color information as haptic feedback. As part of a more futuristic initiative, we present a map navigation concept using a wearable mid-air display. The projects presented have been carried out in collaboration with NUS Singapore and the University of Maryland (UMD), College Park.

Bio: Morten Fjeld's research activities are situated in the field of Human-Computer Interaction with a focus on tangible, mobile, and cross-device interaction. In 2005, he founded the t2i Lab at Chalmers. He holds a dual MSc degree in applied mathematics from NTNU (Norway) and ENSIMAG (France), and a PhD from ETH-Z (Switzerland). In 2002, Morten Fjeld received the ETH Medal for his PhD titled "Designing for Tangible Interaction". In 2004, he became a faculty member of the Dept. of CSE, Chalmers University of Technology, Sweden. In 2011, he was a visiting professor at NUS (Singapore); in 2016 he was a visiting professor at Tohoku University (Japan). Since 2016, he has also been an adjunct faculty member of the University of Bergen (Norway).

WhammyPhone: Bending Sound with a Flexible Smartphone
Posted by Roel on Fri, 14 Oct 2016 (http://www.hml.queensu.ca/blog/whammyphone)

Queen’s University’s Human Media Lab to unveil musical instrument for a flexible smartphone

KINGSTON - Researchers at the Human Media Lab at Queen’s University have developed the world’s first musical instrument for a flexible smartphone. The device, dubbed WhammyPhone, allows users to bend the display in order to create sound effects on a virtual instrument, such as a guitar or violin.

“WhammyPhone is a completely new way of interacting with sound using a smartphone. It allows for the kind of expressive input normally only seen in traditional musical instruments,” says Dr. Vertegaal.

WhammyPhone features a 1920x1080 full high-definition Flexible Organic Light Emitting Diode (FOLED) touchscreen display. The display shows keys that can be used to play sounds on sound synthesis software running on a computer. Like the ReFlex flexible smartphone, WhammyPhone is also equipped with a bend sensor, which allows the user to bend the phone to manipulate the sound.
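
As an illustration of this kind of bend-to-sound mapping, here is a minimal Python sketch that converts bend-sensor readings arriving over a serial link into MIDI pitch-bend messages for synthesis software. The port name, baud rate, sensor range, and choice of pyserial and mido are assumptions for the sketch, not details of the actual WhammyPhone implementation.

```python
# Sketch only: turn bend-sensor readings into MIDI pitch-bend messages,
# analogous to how WhammyPhone's bend input manipulates sound. Port name,
# baud rate, sensor range and libraries (pyserial, mido) are assumptions.
import serial  # pip install pyserial
import mido    # pip install mido python-rtmidi

BEND_MIN, BEND_MAX = 200, 800  # raw ADC range of the bend sensor (assumed)

def to_pitchwheel(raw: int) -> int:
    """Normalize a raw reading to the MIDI pitchwheel range -8192..8191."""
    t = (raw - BEND_MIN) / (BEND_MAX - BEND_MIN)
    t = min(max(t, 0.0), 1.0)
    return int(-8192 + t * 16383)

link = serial.Serial("/dev/ttyUSB0", 115200)  # assumed link to the sensor
synth = mido.open_output()                    # default MIDI output port

while True:
    raw = int(link.readline())  # one integer reading per line (assumed)
    synth.send(mido.Message("pitchwheel", pitch=to_pitchwheel(raw)))
```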

Dr. Vertegaal demonstrates a number of applications for the new functionality of the WhammyPhone. The bend input can be used to simulate bending a string on a virtual guitar, providing Hendrix-style feedback sounds. In another example, the phone is used to simulate the bowing of a violin. Here, the bending of the phone provides the same kind of experience as exerting pressure on a real bow. A final example shows how WhammyPhone can be used to control loops in Electronic Dance Music, making it more intuitive for DJs to interact with their instruments.

“The real importance of WhammyPhone is that it provides the same kind of kinesthetic feedback that, say, a string provides when it is bent to alter the pitch”, says Dr. Vertegaal. “This kind of effect is critical for musicians to control their expression, and provides another level of utility for bend input in smartphones”.

Queen’s researchers will unveil WhammyPhone in Tokyo, Japan at one of the top conferences in Human-Computer Interaction, ACM UIST 2016, on Monday October 17th.

This research was supported by Immersion Canada Inc. and the Natural Sciences and Engineering Research Council of Canada (NSERC).

A high resolution photograph of WhammyPhone is available below. Please include a credit to Human Media Lab.

WhammyPhone

HoloFlex: Holographic, Flexible Smartphone Projects Princess Leia into the Palm of Your Hand
Posted by Roel on Thu, 05 May 2016 (http://www.hml.queensu.ca/blog/holoflex)

Queen’s University’s Human Media Lab to unveil world’s first flexible lightfield-enabled smartphone.

KINGSTON - Researchers at the Human Media Lab at Queen’s University have developed the world’s first holographic flexible smartphone. The device, dubbed HoloFlex, is capable of rendering 3D images with motion parallax and stereoscopy to multiple simultaneous users without head tracking or glasses.

“HoloFlex offers a completely new way of interacting with your smartphone. It allows for glasses-free interactions with 3D video and images in a way that does not encumber the user,” says Dr. Vertegaal.

HoloFlex features a 1920x1080 full high-definition Flexible Organic Light Emitting Diode (FOLED) touchscreen display. Images are rendered into 12-pixel-wide circular blocks, each presenting the full view of the 3D object from a particular viewpoint. These pixel blocks project through a 3D printed flexible microlens array consisting of over 16,000 fisheye lenses. The resulting 160 x 104 resolution image allows users to inspect a 3D object from any angle simply by rotating the phone.
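
The rendering described here is a form of integral imaging. A rough sketch of the pixel-to-ray mapping it implies follows: each 12-pixel block sits behind one lenslet, and a pixel's offset within its block selects the viewing angle it serves. The field-of-view constant is illustrative, not HoloFlex's actual optical calibration.

```python
# Illustrative integral-imaging mapping, not HoloFlex's actual calibration:
# each 12x12-pixel block sits behind one fisheye lenslet, and a pixel's
# offset within its block selects the ray direction it serves.
BLOCK = 12          # pixels per lenslet block (from the press release)
W, H = 1920, 1080   # FOLED panel resolution
FOV_DEG = 30.0      # lenslet half-angle; illustrative value

def pixel_to_ray(px: int, py: int):
    """Return (lenslet index, view angles in degrees) for a panel pixel."""
    lens = (px // BLOCK, py // BLOCK)         # 1920/12 = 160 lenslets across
    u = ((px % BLOCK) + 0.5) / BLOCK * 2 - 1  # offset in block, -1..1
    v = ((py % BLOCK) + 0.5) / BLOCK * 2 - 1
    return lens, (u * FOV_DEG, v * FOV_DEG)

# Rendering then runs in reverse: rasterize the scene once per view
# direction and scatter each result into the matching offset of every
# block, giving one low-resolution image (about 160 lenslets wide) per
# viewing angle.
```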

Building on the success of the ReFlex flexible smartphone, HoloFlex is also equipped with a bend sensor, which allows the user to bend the phone to move objects along the z-axis of the display. HoloFlex is powered by a 1.5 GHz Qualcomm Snapdragon 810 processor and 2 GB of memory. The board runs Android 5.1 and includes an Adreno 430 GPU supporting OpenGL 3.1.

Dr. Vertegaal envisions a number of applications for the new functionality of the HoloFlex technology. A first application is the use of bend gestures for Z-Input to facilitate the editing of 3D models, for example, when 3D printing. Using the touchscreen, a user can swipe to manipulate objects in the x and y axes, while squeezing the display to move objects along the z-axis. Due to the wide view angle, multiple users can examine a 3D model simultaneously from different points of view.

“By employing a depth camera, users can also perform holographic video conferences with one another”, says Dr. Vertegaal. “When bending the display users literally pop out of the screen and can even look around each other, with their faces rendered correctly from any angle to any onlooker”.

HoloFlex also can be used for holographic gaming. In a game such as Angry Birds, for example, users would be able to bend the side of the display to pull the elastic rubber band that propels the bird. When the bird flies across the screen, the holographic display makes the bird literally pop out of the screen in the third dimension.

Queen’s researchers will unveil HoloFlex in San Jose, California at the top conference in Human-Computer Interaction, ACM CHI 2016, on Monday May 9th.

This research was supported by Immersion Canada Inc. and the Natural Sciences and Engineering Research Council of Canada (NSERC).

Media Footage

High resolution photographs of HoloFlex are available rights-free by clicking the thumbnails below. Please include a photo credit to Human Media Lab.

The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, eye tracking TVs and cellphones, PaperPhone, the world’s first flexible phone, PaperTab, the world’s first flexible iPad and TeleHuman, the world’s first pseudo-holographic teleconferencing system. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing, as well as a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

MagicWand: Canadian researchers unveil world’s first cylindrical handheld device
Posted by Roel on Wed, 04 May 2016 (http://www.hml.queensu.ca/blog/magicwand)

Queen’s University’s Human Media Lab to unveil world’s first handheld with a fully cylindrical display at CHI 2016 conference in San Jose, CA.

KINGSTON - Researchers at Queen’s University’s Human Media Lab have developed the world’s first handheld device with a fully cylindrical user interface. The device, dubbed MagicWand, has a wide range of possible applications, including use as a game controller.

Similar to the Nintendo Wii remote, but with a 340-degree cylindrical display, the device lets users employ physical gestures to interact with virtual 3D objects displayed on the wand. It uses visual perspective correction to create the illusion of motion parallax; by rotating the wand, users can look around the 3D object.

“This, for example, means you can rotate MagicWand and see a gaming character inside it from all sides, as if it were 3D,” says Roel Vertegaal (School of Computing), director of the Human Media Lab at Queen’s University. “Smartphones are flat and not ergonomically suitable for use as a controller or pointing device. MagicWand is the first handheld with a 340 degree high resolution display to have the regular physical affordances of a pointer stick.”

MagicWand uses two high definition 720p LG Display Flexible OLED screens powered by Android 4.4 “KitKat” boards. Sensors inside the device are used to capture and transmit movements, as well as to adjust how objects are displayed. The screens are synchronized to act as a single, continuous 1440x1280 display. The MagicWand runs the Unity 3D game engine, allowing it to interact with gaming content on consoles over WiFi.
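
A minimal sketch of the perspective-correction idea under a simplified, yaw-only model (the actual MagicWand implementation in Unity is not public): the virtual camera counter-rotates against the wand's measured rotation, so the rendered object appears to stay fixed in space while the wand turns, producing the motion parallax described above.

```python
# Sketch of perspective correction under an assumed yaw-only model, not
# the MagicWand/Unity code: counter-rotate the virtual camera against the
# wand's measured rotation so the object appears fixed while the wand turns.
import math

def camera_for_yaw(yaw_deg: float, radius: float = 0.3):
    """Place the virtual camera on a circle around the displayed object.

    yaw_deg: wand rotation about its long axis from the IMU (assumed input).
    radius:  virtual orbit distance in metres; illustrative value.
    """
    a = math.radians(-yaw_deg)  # opposite rotation keeps the object still
    cam_pos = (radius * math.sin(a), 0.0, radius * math.cos(a))
    look_at = (0.0, 0.0, 0.0)   # the object is centred inside the wand
    return cam_pos, look_at
```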

"As the Wii U has shown, the form factor of a controller is really critical when interacting with gaming content." says Dr. Vertegaal. "MagicWand really allows users, for the first time, to play around with gestures while being able to hold gaming visuals literally in their hand. It is the world's first true DisplayObject, going well beyond what has been demonstrated in any smartphone or controller."

Dr. Vertegaal thinks DisplayObjects with cylindrical form factors will be in the hands of consumers within five years. Queen’s researchers will unveil the MagicWand prototype at the ACM CHI 2016 Conference on Human Factors in Computing Systems in San Jose, California on May 9th. The annual forum is the world’s top conference on Human-Computer Interaction.

This research was supported by Immersion Canada, Inc. and the Natural Sciences and Engineering Research Council of Canada (NSERC).

Media Footage

High resolution photographs of MagicWand are available rights-free by clicking the thumbnails below. Please include a photo credit to Human Media Lab.

The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, eye tracking TVs and cellphones, PaperPhone, the world’s first flexible phone, PaperTab, the world’s first flexible iPad and TeleHuman, the world’s first pseudo-holographic teleconferencing system. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing, as well as a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

ReFlex: Revolutionary flexible smartphone allows users to feel the buzz by bending their apps
Posted by Roel on Tue, 16 Feb 2016 (http://www.hml.queensu.ca/blog/reflex)

Queen’s University’s Human Media Lab to unveil world’s first wireless flexible smartphone; simulates feeling of navigating pages via haptic bend input

KINGSTON - Researchers at Queen’s University’s Human Media Lab have developed the world’s first full-colour, high-resolution and wireless flexible smartphone to combine multitouch with bend input. The phone, which they have named ReFlex, allows users to experience physical tactile feedback when interacting with their apps through bend gestures.

“This represents a completely new way of physical interaction with flexible smartphones,” says Roel Vertegaal (School of Computing), director of the Human Media Lab at Queen’s University.

“When this smartphone is bent down on the right, pages flip through the fingers from right to left, just like they would in a book. More extreme bends speed up the page flips. Users can feel the sensation of the page moving through their fingertips via a detailed vibration of the phone. This allows eyes-free navigation, making it easier for users to keep track of where they are in a document.”

ReFlex is based on a high definition 720p LG Display Flexible OLED touch screen powered by an Android 4.4 “KitKat” board mounted to the side of the display. Bend sensors behind the display sense the force with which a user bends the screen, which is made available to apps for use as input. ReFlex also features a voice coil that allows the phone to simulate forces and friction through highly detailed vibrations of the display. Combined with the passive force feedback felt when bending the display, this allows for a highly realistic simulation of physical forces when interacting with virtual objects.
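
A sketch of the bend-to-haptics mapping the previous paragraphs describe, with assumed constants rather than values from the ReFlex source: bend magnitude sets the page-flip rate, and one voice-coil pulse is scheduled per flipped page so each flip can be felt.

```python
# Sketch of the bend-to-haptics mapping described above, with assumed
# constants rather than values from ReFlex: bend magnitude sets the
# page-flip rate, and one voice-coil pulse is scheduled per flipped page.
MAX_FLIPS_PER_S = 20.0
DEADZONE = 0.05  # ignore tiny accidental bends

def flip_rate(bend: float) -> float:
    """bend: normalized bend force in -1..1 (negative = bend left)."""
    mag = max(abs(bend) - DEADZONE, 0.0) / (1.0 - DEADZONE)
    return MAX_FLIPS_PER_S * mag  # more extreme bends flip faster

def step(bend: float, dt: float, acc: float = 0.0):
    """Advance one frame; returns (pulses, direction, carry-over).

    Each whole page flipped this frame earns one short voice-coil pulse,
    so the user feels pages moving through their fingertips.
    """
    acc += flip_rate(bend) * dt
    pulses = int(acc)                  # whole pages flipped this frame
    direction = 1 if bend > 0 else -1  # right bend flips right-to-left
    return pulses, direction, acc - pulses
```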

“This allows for the most accurate physical simulation of interacting with virtual data possible on a smartphone today,” says Dr. Vertegaal. “When a user plays the “Angry Birds” game with ReFlex, they bend the screen to stretch the sling shot. As the rubber band expands, users experience vibrations that simulate those of a real stretching rubber band. When released, the band snaps, sending a jolt through the phone and sending the bird flying across the screen.”

Dr. Vertegaal thinks bendable, flexible smartphones will be in the hands of consumers within five years. Queen’s researchers will unveil the ReFlex prototype at the tenth anniversary Conference on Tangible, Embedded and Embodied Interaction (TEI) in Eindhoven, The Netherlands on February 17th. The annual forum is the world’s premier conference on tangible human-computer interaction.

This research was supported by Immersion Canada, Inc. and the Natural Sciences and Engineering Research Council of Canada (NSERC).

Media Footage
High resolution photographs of ReFlex are available rights-free by clicking the thumbnails below. Please include a photo credit to Human Media Lab.

About the Human Media Lab
The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, eye tracking TVs and cellphones, PaperPhone, the world’s first flexible phone, PaperTab, the world’s first flexible iPad and TeleHuman, the world’s first pseudo-holographic teleconferencing system. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing, as well as a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

BitDrones: Interactive Flying Microbots Show Future of Virtual Reality is Physical
Posted by Roel on Thu, 05 Nov 2015 (http://www.hml.queensu.ca/blog/bitdrones)

Queen’s University’s Roel Vertegaal says self-levitating displays are a breakthrough in programmable matter, allowing physical interactions with mid-air virtual objects

KINGSTON, ON – An interactive swarm of flying 3D pixels (voxels) developed at Queen’s University’s Human Media Lab is set to revolutionize the way people interact with virtual reality. The system, called BitDrones, allows users to explore virtual 3D information by interacting with physical self-levitating building blocks.

Queen’s professor Roel Vertegaal and his students are unveiling the BitDrones system on Monday, Nov. 9 at the ACM Symposium on User Interface Software and Technology in Charlotte, North Carolina. BitDrones is the first step towards creating interactive self-levitating programmable matter – materials capable of changing their 3D shape in a programmable fashion – using swarms of nano quadcopters. The work highlights many possible applications for the new technology, including real-reality 3D modeling, gaming, molecular modeling, medical imaging, robotics and online information visualization.

“BitDrones brings flying programmable matter, such as featured in the futuristic Disney movie Big Hero 6, closer to reality,” says Dr. Vertegaal. “It is a first step towards allowing people to interact with virtual 3D objects as real physical objects.”

Dr. Vertegaal and his team at the Human Media Lab created three types of BitDrones, each representing self-levitating displays of distinct resolutions. “PixelDrones” are equipped with one LED and a small dot matrix display. “ShapeDrones” are augmented with a light-weight mesh and a 3D printed geometric frame, and serve as building blocks for complex 3D models. “DisplayDrones” are fitted with a curved flexible high resolution touchscreen, a forward-facing video camera and Android smartphone board. All three BitDrone types are equipped with reflective markers, allowing them to be individually tracked and positioned in real time via motion capture technology. The system also tracks the user’s hand motion and touch, allowing users to manipulate the voxels in space.
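
One building block such a system plausibly needs is an assignment of tracked drones to target voxel positions. The greedy sketch below is an illustration under assumed inputs (motion-capture positions keyed by drone id), not the BitDrones code.

```python
# Illustration under assumed inputs, not the BitDrones code: greedily
# assign motion-captured drones to target voxel positions so the swarm
# re-forms a shape with reasonably short travel. Assumes at least as many
# drones as target voxels.
import math

def assign(drones: dict, targets: list) -> dict:
    """drones: id -> (x, y, z) from motion capture; targets: voxel centres."""
    remaining = dict(drones)
    plan = {}
    for t in targets:
        # Nearest unassigned drone takes this voxel (math.dist: Python 3.8+).
        did = min(remaining, key=lambda d: math.dist(remaining[d], t))
        plan[did] = t
        del remaining[did]
    return plan  # id -> goal position; leftover drones can land
```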

“We call this a Real Reality interface rather than a Virtual Reality interface. This is what distinguishes it from technologies such as Microsoft HoloLens and the Oculus Rift: you can actually touch these pixels, and see them without a headset,” says Dr. Vertegaal.

Dr. Vertegaal and his team demonstrate a number of applications for this technology. In one scenario, users physically explore a file folder by touching the folder’s associated PixelDrone. When the folder opens, its contents are shown by other PixelDrones flying in a horizontal wheel below it. Files in this wheel are browsed by physically swiping drones to the left or right.

Users are also able to manipulate ShapeDrones to serve as building blocks for a real-time 3D model. Finally, the BitDrone system allows for telepresence by letting remote users move around locally through a DisplayDrone with Skype. The DisplayDrone automatically tracks and replicates all of the remote user’s head movements, allowing a remote user to virtually inspect a location and making it easier for the local user to understand the remote user’s actions.

While their system currently only supports a dozen comparatively large 2.5” to 5” drones, the team at the Human Media Lab is working to scale up the system to support thousands of drones. These future drones would measure no more than half an inch in size, allowing users to render more seamless, high resolution programmable matter.

High resolution photographs of BitDrones are available rights-free below.

About Human Media Lab
The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, eye tracking TVs and cellphones, PaperPhone, the world’s first flexible phone, PaperTab, the world’s first flexible iPad and TeleHuman, the world’s first pseudo-holographic teleconferencing system. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing. Working with him are a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

PrintPut: Resistive and Capacitive Input Widgets for Interactive 3D Prints
Posted by Roel on Wed, 16 Sep 2015 (http://www.hml.queensu.ca/blog/printput)

Queen’s University’s Human Media Lab presents 3D printed touch and pressure sensors at the INTERACT 2015 conference

Queen's professor Roel Vertegaal and students Jesse Burstyn, Nicholas Fellion, and Paul Strohmeier introduced PrintPut, a new method for integrating simple touch and pressure sensors directly into 3D printed objects. The project was unveiled at the INTERACT 2015 conference in Bamberg, Germany, one of the largest conferences in the field of human-computer interaction.

PrintPut is a method for 3D printing that embeds interactivity directly into printed objects. When developing new artifacts, designers often create prototypes to guide their design process about how an object should look, feel, and behave. PrintPut uses conductive filament to offer an assortment of sensors that an industrial designer can easily incorporate into these 3D designs, including buttons, pressure sensors, sliders, touchpads, and flex sensors.

Existing touch solutions, even if flexible, cannot seamlessly wrap around many non-planar objects. Alternatively, using many individual sensors introduces wires that are difficult to manage and impede interaction. PrintPut addresses these concerns by seamlessly integrating interaction points within the existing surface geometry of the object and internally routing the wires to a common connection point.

PrintPut's main components are conductive ABS filament, a dual-extruder 3D printer, and a series of scripts to generate conductive geometry. After a designer makes an object with sensor geometry, they import it into their 3D printer’s build manager and assign the base and conductive geometry to standard and conductive filaments, respectively. Once the object is printed, sensor values can be easily read by connecting it to an Arduino or other microcontroller with alligator clips.
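
For illustration, here is the arithmetic a read-out sketch would typically use for such a printed resistive sensor wired as a voltage divider; the supply voltage, reference resistor, wiring and ADC depth are assumptions, not values from the paper.

```python
# Illustrative read-out math, with assumed wiring not taken from the paper:
# the printed resistive trace sits between Vcc and the ADC pin, with a
# known reference resistor from the ADC pin to ground.
V_CC = 5.0        # supply voltage (assumed)
R_REF = 10_000.0  # known series resistor in ohms (assumed)
ADC_MAX = 1023    # 10-bit ADC, as on a stock Arduino

def sensor_resistance(adc: int) -> float:
    """Invert the divider V_adc = Vcc * R_ref / (R_ref + R_sensor)."""
    v = max(V_CC * adc / ADC_MAX, 1e-6)  # avoid divide-by-zero at adc == 0
    return R_REF * (V_CC - v) / v        # ohms; varies as the print is pressed
```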

About Human Media Lab

The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, Smart Pause, PaperPhone, the world’s first flexible phone and PaperTab, the world’s first flexible paper computer. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing. Working with him are a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

DisplayCover: A Tablet Keyboard with an Embedded Thin-Film Touchscreen Display
Posted by HML Student on Thu, 03 Sep 2015 (http://www.hml.queensu.ca/blog/2015/9/3/displaycover-a-tablet-keyboard-with-an-embedded-thin-film-touchscreen-display)

Queen’s University’s Human Media Lab and the Microsoft Applied Sciences Group unveil DisplayCover at MobileHCI'15

Queen’s professor Roel Vertegaal and student Antonio Gomes, in collaboration with the Applied Sciences Group at Microsoft, unveiled DisplayCover, a novel tablet cover that integrates a physical keyboard as well as a touch- and stylus-sensitive thin-film e-ink display. The technology was released at the ACM MobileHCI 2015 conference in Copenhagen, widely regarded as a leading conference on human-computer interaction with mobile devices and services.

DisplayCover is a peripheral cover designed for compact touch-enabled laptops. A tactile keyboard affords users the comfort and ease of use provided by physical keys. A thin-film e-ink display with a resolution of 1280x305px extends the available screen real estate of the slate device by up to 8%. An e-ink display was chosen due to the bistable nature of electrophoretic ink, reducing the secondary screen's impact on battery life.

DisplayCover explores the ability to dynamically alter the peripheral display content based on usage context, while extending the user experience and interaction model to the horizontal plane, where hands naturally rest. For example, stylus annotation can be performed directly on the peripheral cover, reducing the need for users to routinely home their hands between the slate display and the physical keyboard.

To illustrate the potential and immediate feasibility of our approach, we highlight a series of application scenarios to showcase interaction techniques and features enabled by DisplayCover, aimed at increasing productivity in compact, touch-enabled devices. Application scenarios include concurrent access to multiple applications; gestures and trackpad interactions; context-aware applications; as well as stylus annotation on the horizontal plane.

Inspiration for DisplayCover came from an ongoing effort at the Applied Sciences Group to explore peripheral input devices as a means to extend the desktop experience to peripheral hardware, which led to the development of the Microsoft Adaptive Keyboard.

About Human Media Lab

The Human Media Lab (HML) at Queen’s University is one of Canada's premier Human-Computer Interaction (HCI) laboratories. Inventions include ubiquitous eye tracking sensors, Smart Pause, PaperPhone, the world’s first flexible phone and PaperTab, the world’s first flexible paper computer. HML is directed by Dr. Roel Vertegaal, Professor of HCI at Queen's University's School of Computing. Working with him are a number of graduate and undergraduate students with computing, design, psychology and engineering backgrounds.

That pièce de résistance—which included overlapping windows, a desktop metaphor, icons, and a pointer—was largely the brainchild of Alan Kay. Decades before Steve Jobs popularized the idea of "the intersection of technology and the liberal arts," Kay lived it: he was (and is) a computer scientist, professional musician, designer, educator, and artist all in one. Kay thrived in PARC's blue-sky R&D environment; he occupied similar littoral zones between art and tech at Apple's Advanced Technology Group, Disney's Imagineering division, and HP's Advanced Software Research Team.

Kay has also tried to help companies re-bottle the lightning that crackled within Xerox PARC during his tenure there. It usually doesn't work. The reason? "It’s either so antithetical to the corporate culture that nobody really wants it," Kay told me, "or [they] want to monetize everything." But that didn't stop Kay from trying again, this time with business-intelligence software behemoth SAP. True to form, the Communications Design Group (CDG) at SAP includes uncategorizable geniuses like Bret Victor and Vi Hart as founding members. It may or may not produce the "many trillions of dollars worth of inventions" that Kay gives PARC credit for, but if SAP—or any other organization—wants to take a serious shot at it, they ought to follow Kay's recipe. Here it is in his own words:

Problem-finding is about how to get something out of almost nothing in some new area. You're by definition not doing something incremental. There’s a lot of playful stuff going on. The probability of a good idea is pretty low. Most of the ideation that happens [in an invention center] are things that get rejected, which is normal in this line of work. Very few people understand that.

My attempts to do Xerox PARC-like things over the years, like Interval [Research Corporation] with [Microsoft co-founder] Paul Allen, most of them have failed for one of two reasons. It’s either so antithetical to the corporate culture that nobody really wants it. Or in Paul Allen’s case, he wanted to monetize everything. He treated [Interval] as an innovation center or product engineering division. He never understood the process of doing invention research. A company should have an invention center because it’s a wild card.

Successful innovation is much harder than inventing. Companies with a good innovation center will easily spend 10 to 20 times as much [as they would on an invention center]. [Inventing is] a matter of finding a small number of special people and funding them with a relatively small amount of money. You’ll get something good out of it.

2. Think artist colony, not R&D lab.
One of the keys to this kind of thing working is having a community. Xerox PARC and CDG are like collective MacArthur grants. You’re not funding individuals, you’re putting together a community. You’re trying to create an environment, a world. Not a thing. We never discussed what our ultimate goal was at PARC. It didn’t look like we were doing anything for the first couple years, and Xerox was upset. But the process was surprisingly efficient, because most of that trillion-dollar return was invented in the first five years.

You’re trying to create an environment, a world. Not a thing.

The rule is, if you put together an artist colony and fund it, you’re going to get art. And some of it’s going to be great. The way to get something [out of it] that’s relevant to a company like Xerox or SAP is to set up a context so that the free thinking of these people is more often than not going to be about something of concern to the company. Inside an organization like PARC or CDG, a vision isn’t articulated in words. It’s more like a shared feeling, a sense of this magnetic field.

For example, SAP has great needs for improved user interfaces. Not just in the style of today, but totally new UIs for dealing with the enormities of what big data brings to us, or a programming language that involves search as an integral part of what it means to get at resources. These are all natural things for these talented people [at CDG] to immerse themselves in. They’re not given a direct goal, and they’re not lazy. They start thinking of this and that. Things happen in flashes and come out of conversations that start off about something else. What we try to do is not be anarchic, but when the butterfly we want comes by, we’re prepared to catch it.

3. Get the best people. Don't manage them.
Vi Hart and Bret Victor have this way of being nuts part of the time, and the other part of the time having exquisite attention to detail at the level of an obsessive. It's like Michelangelo: first he had to imagine putting something on the ceiling of the Sistine Chapel, but he also personally spent four years lying on his back with candle wax dripping into his eyes, painting the goddamn thing. That’s the simplest recipe: find Michelangelos.

I don’t run CDG, I visit it. [Xerox PARC founder Robert] Taylor didn’t want to hire anyone who needed to be managed. That’s not the way it works. I have people on my list who are already moving in great directions, according to their own inner visions. I didn’t have to explain to these people what they would be working on, because they already are. Bret Victor has already hired four people that I didn't know about. I wanted people to fund, not manage.

4. Stay small, avoid hype, and pick a boring name.
25 scientists at Xerox PARC were responsible for "the eight big inventions," which resulted in about $35 trillion in return on [an investment of] maybe $100 million.

The simplest recipe? Find Michelangelos.

The shortest lived group at Xerox PARC was "Office of the Future," because Xerox executives would not leave them alone. I chose the most innocuous name for my own group, the Learning Research Group. Nobody knew what it meant, so they left us alone to invent object-oriented programming and the GUI.

We had a contest [at CDG] to come up with the most innocuous name that didn't sound ridiculous. "Communications Design Group" is pretty vague, because what we’re likely to [invent] will be something other than what we’d put on a list. [But] the communication and design aspects do mean something. People communicate with each other, with themselves, in groups, with computers, and computers communicate with each other. If you take all the things in the world that can communicate and think of a future in which all those communications are qualitatively richer, then you have a vision.

We live in a world full of hype. When I look at most of the Silicon Valley companies [claiming to do invention research], they’re really selling pop culture. Pop culture is very incremental and is about creating things other than high impact. Being able to do things that change what business means is going to have a huge impact—more than something that changes what social interaction means in pop culture.

5. Hire Alan Kay (or someone like him).
[Alan Kay is a modest man, so we turned to David Liddle, Kay's former colleague at PARC and an eminent inventor in his own right, to make this point.]

The computer industry has a lot of remarkable people in it, but they’re low-dimensional. Alan Kay is very high dimensional: a big space of instincts and interests and capabilities. I mean, the guy could design furniture. He planned a whole programming language while he was building a harpsichord. He was able to do the two things concurrently.

He also has amazingly good taste about choosing things that are worth working on: aiming a few years ahead, anticipating where the technology will go and therefore what will be interesting to do with it when it becomes fast and cheap, even if it’s slow and expensive now. And he’s really a good judge of talent. Especially in young people.

[We asked Liddle point blank: are you saying Alan Kay was extra special among the extra special people at Xerox PARC?]

Iris Scanners Can Now Identify Us From 40 Feet Away | IFLScience
Posted by roel.vertegaal@me.com on Tue, 26 May 2015 (http://www.hml.queensu.ca/blog/2015/5/26/iris-scanners-can-now-identify-us-from-40-feet-away-iflscience)

Biometric technologies are on the rise. By electronically recording data about individuals’ physical attributes such as fingerprints or iris patterns, security and law enforcement services can quickly identify people with a high degree of accuracy.

The latest development in this field is the scanning of irises from a distance of up to 40 feet (12 metres) away (http://news.discovery.com/tech/gear-and-gadgets/iris-scanner-identifies-a-person-40-feet-away-150410.htm). Researchers from Carnegie Mellon University in the US demonstrated they were able to use their iris recognition technology to identify drivers from an image of their eye captured from their vehicle’s side mirror.

The developers of this technology envisage that, as well as improving security, it will be more convenient for the individuals being identified. By using measurements of physiological characteristics, people no longer need security tokens or cumbersome passwords to identify themselves.

However, introducing such technology will come with serious challenges. There are both legal issues and public anxiety around having such sensitive data captured, stored, and accessed.

Social Resistance

We have researched this area by presenting people with potential future scenarios that involved biometrics. We found that, despite the convenience of long-range identification (no queuing in front of scanners), there is a considerable reluctance to accept this technology.

On a basic level, people prefer a physical interaction when their biometrics are being read. “I feel negatively about a remote iris scan because I want there to be some kind of interaction between me and this system that’s going to be monitoring me,” said one participant in our research.

But another serious concern was that of “function creep”, whereby people slowly become accustomed to security and surveillance technologies because they are introduced gradually. This means the public may eventually be faced with much greater use of these systems than they would initially agree to.

Such familiarity could lead to the introduction of more invasive long-distance recognition systems. This could ultimately produce far more widespread commercial and governmental usage of biometric identification than the average citizen might be comfortable with. As one participant put it: “[A remote scan] could be done every time we walk into a big shopping centre, they could just identify people all over the place and you’re not aware of it.”

Legal Barriers

The implementation of biometric systems is not just dependent on user acceptance or resistance. Before iris-scanning technology could be introduced in the EU, major data protection and privacy considerations would have to be made.

The EU has a robust legal framework on privacy and data protection. These are recognised as fundamental rights and so related laws are among the highest ranking. Biometric data, such as iris scans, are often treated as special due to the sensitivity of the information they can contain. Our respondents also acknowledged this: “I think it’s a little too invasive and to me it sounds a bit creepy. Who knows what they can find out by scanning my irises?”

Before iris technology could be deployed, certain legal steps would need to be taken. Under EU law and the European Convention on Human Rights, authorities would need to demonstrate it was a necessary and proportionate solution to a legitimate, specific problem. They would also need to prove iris recognition was the least intrusive way to achieve that goal. And a proportionality test would have to take into account the risks the technology brings along with the benefits.

The very fact that long-range iris scanners can capture data without the collaboration of their subject also creates legal issues. EU law requires individuals to be informed when such information is being collected, by whom, for what purposes, and of their rights surrounding the data.

Another issue is how the data is kept secure, particularly in the case of iris-scanning by objects such as smart phones. Scans stored on the device and/or on the cloud for purposes of future authentication would legally require robust security protection. Data stored on the cloud tends to move around between different servers and countries, which makes preventing unauthorised access more difficult.

The other issue with iris scanning is that, while the technology could be precise, it is not infallible. At its current level, the technology can still be fooled (see video above). And processing data accurately is another principle of EU data protection law.

Even if we do find ourselves subject to unwanted iris-scanning from 40 feet, safeguards for individuals should always be in place to ensure that they do not bear the burden of technological imperfections.

New liquid metal inkjet printing could produce flexible circuitry

Schematic of wrinkled rubrene single-crystal field-effect transistor. Wrinkles are obtained when in-plane compressive strain is applied on the elastomeric substrate. Electric current between gold (Au) electrodes is modulated by the deformation imposed by the wrinkles. Courtesy UMass Amherst.

Researchers at Purdue University have proposed a way of printing liquid metal alloys. Their method involves dispersing gallium-indium in ethanol via ultrasound. The resulting nanoparticle solution is suitable for inkjet printing and can be printed onto any surface. Once printed, the ethanol evaporates, and after light pressure the particles fuse together, creating a conductive material.

"We want to create stretchable electronics that might be compatible with soft machines, such as robots that need to squeeze through small spaces, or wearable technologies that aren't restrictive of motion," said Rebecca Kramer, an assistant professor of mechanical engineering at Purdue University. "Conductors made from liquid metal can stretch and deform without breaking."

COCA 201 students at the Creative Computing Exhibit 2015
Posted by Roel on Tue, 07 Apr 2015 (http://www.hml.queensu.ca/blog/2015/4/7/coca-201-students-at-the-creative-computing-exhibit-2015)

Last week, 2nd year students presented the results from their introductory course in Computing and the Creative Arts at Queen's in the BioSciences Atrium. Here is a report from CKWS.

Andrew McLuhan Presentation
Posted by Roel on Fri, 20 Feb 2015 (http://www.hml.queensu.ca/blog/2015/2/20/andrew-mcluhan-presentation)

Andrew McLuhan

On Thursday, 26th of February, Andrew McLuhan will be exploring the legacy of his grandfather Marshall McLuhan on Metaphor and Media.

Where: ILC 213
When: Thursday 26th of February 2015, 14:3

Abstract: I will discuss how Marshall McLuhan, an English professor who became an international figure, came to study technology and culture through literature and literary criticism. We will take a brief look at a few of McLuhan’s key ideas: “The medium is the message”, figure and ground, the four laws of media (where media = ‘the extensions of man’), and to finish where we start, we will explore perception, and why McLuhan believed, as Ezra Pound said, “The artist is the antennae of the race”.