Description

Our research presents a conceptual model for bimanual input that spans a wide range of form factors, including smartphones, slates, and tabletop systems, using a combination of pen, touch, motion-sensing, and voice input modalities. The pen writes, the hands manipulate, and combining modalities yields new tools, as well as compelling over-the-shoulder playback experiences for human-human communication.