Stephen Latta, Seattle, WA US

Patent application number

Description

Published

20100199229

MAPPING A NATURAL INPUT DEVICE TO A LEGACY SYSTEM - Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input with a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.

08-05-2010
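The pipeline in the abstract above (identify a natural user input, associate it with a gesture in a gesture library, map the gesture to a legacy controller input) can be sketched as follows. This is a minimal illustration, not the patented implementation; the gesture names and controller codes are assumptions.

```python
# Hypothetical sketch of the described pipeline: a recognized natural
# user input is associated with a gesture in a gesture library, then
# mapped to a legacy controller input. All names are illustrative.

GESTURE_LIBRARY = {"wave", "push", "jump"}  # assumed gesture library

LEGACY_MAP = {  # mapping module: gesture -> legacy controller input (assumed)
    "wave": "BUTTON_START",
    "push": "BUTTON_A",
    "jump": "BUTTON_B",
}

def map_natural_input(recognized_gesture):
    """Associate the natural user input with a library gesture and map
    it to a legacy controller input; return None if nothing matches."""
    if recognized_gesture in GESTURE_LIBRARY:
        return LEGACY_MAP.get(recognized_gesture)
    return None  # unrecognized input produces no legacy event
```

A caller would then forward the returned legacy input to the legacy system unchanged, so the legacy system never needs to know a natural input device was involved.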

20100281439

Method to Control Perspective for a Camera-Controlled Computer - Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

11-04-2010

20100306261

Localized Gesture Aggregation - Systems, methods and computer readable media are disclosed for localized gesture aggregation. In a system where user movement is captured by a capture device to provide gesture input to the system, demographic information regarding users as well as data corresponding to how those users respectively make various gestures is gathered. When a new user begins to use the system, his demographic information is analyzed to determine the most likely way that he will attempt to make, or find it easiest to make, a given gesture. That most likely way is then used to process the new user's gesture input.

12-02-2010

20100306712

Gesture Coach - A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.

12-02-2010

20100306713

Gesture Tool - Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. From that, the data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of such to the gesture recognizer engine and application, where that change occurs.

12-02-2010

20100306714

Gesture Shortcuts - Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates to the application as such.

12-02-2010

20100306715

Gestures Beyond Skeletal - Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.

12-02-2010

20110299728

AUTOMATIC DEPTH CAMERA AIMING - Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.

12-08-2011
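The far/near decision in the aiming method above reduces to a threshold check on the depth of the target's point of interest. The sketch below is illustrative only; the 2.5 m boundary and the mode names are assumptions, not values from the application.

```python
# Illustrative sketch of the automatic depth camera aiming decision:
# if a point of interest is found, operate with far logic when it lies
# in the far range, otherwise with near logic. Threshold is assumed.

FAR_RANGE_METERS = 2.5  # assumed boundary between near and far ranges

def select_aiming_logic(point_of_interest_depth):
    """Choose the camera operating logic from the depth (in meters)
    of the target's point of interest."""
    if point_of_interest_depth is None:
        return "search"       # no point of interest found in the scene
    if point_of_interest_depth >= FAR_RANGE_METERS:
        return "far_logic"    # point of interest is within the far range
    return "near_logic"       # point of interest is not within the far range
```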

20110304632

INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

12-15-2011

20110304774

CONTEXTUAL TAGGING OF RECORDED DATA - Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of a depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with the contextual tag associated with the recognized input and record the contextual tag with the input data.

12-15-2011

20120154618

MODELING AN OBJECT FROM IMAGE DATA - A method for modeling an object from image data comprises identifying in an image from the video a set of reference points on the object, and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.

06-21-2012

20120155705

FIRST PERSON SHOOTER CONTROL WITH VIRTUAL SKELETON - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.

06-21-2012

20120157198

DRIVING SIMULATOR CONTROL WITH VIRTUAL SKELETON - Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.

06-21-2012

20120157200

INTELLIGENT GAMEPLAY PHOTO CAPTURE - Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs at an electronic storage media with the respective scores assigned to the captured photographs.

06-21-2012
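The event-based scoring step above can be sketched as comparing each photo's depicted event against the scoring parameter and assigning a score from the comparison. The binary scoring rule below is an assumption for illustration; the application does not specify the scoring function.

```python
# Hedged sketch of event-based photo scoring: each captured photo is
# compared to an event-based scoring parameter and assigned a score.
# The binary match/no-match rule is an assumption.

def score_photos(photos, event_parameter):
    """Assign a score to each photo (id -> depicted event) based on
    whether its event matches the event-based scoring parameter."""
    scored = {}
    for photo_id, depicted_event in photos.items():
        scored[photo_id] = 1.0 if depicted_event == event_parameter else 0.0
    return scored
```

The scores would then be stored alongside the photographs so high-scoring shots can be surfaced to the player.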

20120157203

SKELETAL CONTROL OF THREE-DIMENSIONAL VIRTUAL WORLD - A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.

06-21-2012

20120309534

AUTOMATED SENSOR DRIVEN MATCH-MAKING - A method of matching a player of a multi-player game with a remote participant includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, using an identity of the observer to find one or more candidates to play as the remote participant of the multi-player game, and when selecting the remote participant, choosing a candidate from the one or more candidates above a non-candidate if the candidate satisfies a matching criteria.

12-06-2012
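The selection rule in the match-making abstract above (prefer a candidate derived from an observed bystander's identity over a non-candidate, provided the candidate satisfies the matching criteria) can be sketched as a two-pass search. All names here are illustrative assumptions.

```python
# Minimal sketch of sensor-driven match-making selection: candidates
# found via an identified observer are preferred over non-candidates
# whenever one satisfies the matching criteria.

def select_remote_participant(candidates, non_candidates, matches):
    """Return the first candidate satisfying the matching criteria,
    falling back to non-candidates, else None."""
    for candidate in candidates:
        if matches(candidate):
            return candidate          # candidate chosen above non-candidates
    for other in non_candidates:
        if matches(other):
            return other              # fallback when no candidate matches
    return None
```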

20120311031

AUTOMATED SENSOR DRIVEN FRIENDING - A method of finding a new social network service friend for a player belonging to a social network service and having a friend group including one or more player-accepted friends includes recognizing the player, automatically identifying an observer within a threshold proximity to the player, and adding the observer to the friend group of the player in the social network service if the observer satisfies a friending criteria of the player.

12-06-2012

20130007013

MATCHING USERS OVER A NETWORK - Various embodiments are disclosed that relate to negatively matching users over a network. For example, one disclosed embodiment provides a method including storing a plurality of user profiles corresponding to a plurality of users, each user profile in the plurality of user profiles including one or more user attributes, and receiving a request from a user for a list of one or more suggested negatively matched other users. In response to the request, the method further includes ranking each of a plurality of other users based on a magnitude of a difference between one or more user attributes of the user and corresponding one or more user attributes of the other user, and sending a list of one or more negatively matched users to the exclusion of more positively matched users based on the ranking.

01-03-2013
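The negative-matching ranking above orders other users by the magnitude of the difference between profile attributes, most dissimilar first. A minimal sketch, assuming numeric attributes and a simple absolute-difference distance (the actual distance measure is not specified in the abstract):

```python
# Sketch of negative matching: rank other users by total absolute
# difference between numeric profile attributes, largest first, and
# return the most dissimilar users. Profile shape is an assumption.

def negative_matches(user, others, top_n=3):
    """Return the top_n most dissimilar user profiles, ranked by the
    magnitude of attribute differences (descending)."""
    def distance(other):
        # sum of per-attribute differences over shared attributes
        return sum(abs(user[k] - other[k]) for k in user if k in other)
    return sorted(others, key=distance, reverse=True)[:top_n]
```

Sending the head of this ranking implements "negatively matched users to the exclusion of more positively matched users."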

20130127994

VIDEO COMPRESSION USING VIRTUAL SKELETON - Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.

05-23-2013
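The dual-rate transmission described above (skeleton every frame, surface data less often, with the receiver estimating surface for skipped frames) can be sketched as a per-frame payload decision. The 10:1 ratio below is an assumption; the abstract only says the skeleton is sent at a higher frame rate.

```python
# Sketch of virtual-skeleton video compression scheduling: skeleton
# frames are transmitted every frame, surface information only every
# Nth frame; the receiver interpolates surface data in between.

SURFACE_EVERY_N_FRAMES = 10  # assumed skeleton:surface frame ratio

def frames_to_send(frame_index, skeleton, surface):
    """Decide what to transmit for one captured frame."""
    payload = {"skeleton": skeleton}           # always sent (high rate)
    if frame_index % SURFACE_EVERY_N_FRAMES == 0:
        payload["surface"] = surface           # sent at the lower rate
    return payload
```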

20130135180

SHARED COLLABORATION USING HEAD-MOUNTED DISPLAY - Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.

05-30-2013

20130141419

AUGMENTED REALITY WITH REALISTIC OCCLUSION - A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display.

06-06-2013

20130141421

AUGMENTED REALITY VIRTUAL MONITOR - A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.

06-06-2013

20130169682

TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER - A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).

07-04-2013

20130174213

IMPLICIT SHARING AND PRIVACY CONTROL THROUGH PHYSICAL BEHAVIORS USING SENSOR-RICH DEVICES - A system for automatically sharing virtual objects between different mixed reality environments is described. In some embodiments, a see-through head-mounted display device (HMD) automatically determines a privacy setting associated with another HMD by inferring a particular social relationship with a person associated with the other HMD (e.g., inferring that the person is a friend or acquaintance). The particular social relationship may be inferred by considering the distance to the person associated with the other HMD, the type of environment (e.g., at home or work), and particular physical interactions involving the person (e.g., handshakes or hugs). The HMD may subsequently transmit one or more virtual objects associated with the privacy setting to the other HMD. The HMD may also receive and display one or more other virtual objects from the other HMD based on the privacy setting.

07-04-2013

20130187835

RECOGNITION OF IMAGE ON EXTERNAL DISPLAY - Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.

07-25-2013

20130194164

EXECUTABLE VIRTUAL OBJECTS ASSOCIATED WITH REAL OBJECTS - Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to the portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.

08-01-2013

20130194259

VIRTUAL ENVIRONMENT GENERATING SYSTEM - A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.

08-01-2013

20130194304

COORDINATE-SYSTEM SHARING FOR AUGMENTED REALITY - A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.

08-01-2013

20130194389

HEAD-MOUNTED DISPLAY DEVICE TO MEASURE ATTENTIVENESS - A method for assessing a wearer's attentiveness to visual stimuli received through a head-mounted display device. The method employs first and second detectors arranged in the head-mounted display device. An ocular state of the wearer of the head-mounted display device is detected with the first detector while the wearer is receiving a visual stimulus. With the second detector, the visual stimulus received by the wearer is detected. The ocular state is then correlated to the wearer's attentiveness to the visual stimulus.

08-01-2013

20130196757

MULTIPLAYER GAMING WITH HEAD-MOUNTED DISPLAY - A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program receives an acceptance response from the potential player, and joins the potential player account with a user account in participating in the multiplayer game.

08-01-2013

20130196772

MATCHING PHYSICAL LOCATIONS FOR SHARED VIRTUAL EXPERIENCE - Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.

08-01-2013

20130208014

DISPLAY WITH BLOCKING IMAGE GENERATION - A blocking image generating system including a head-mounted display device having an opacity layer and related methods are disclosed. A method may include receiving a virtual image to be presented by display optics in the head-mounted display device. Lighting information and an eye-position parameter may be received from an optical sensor system in the head-mounted display device. A blocking image may be generated in the opacity layer of the head-mounted display device based on the lighting information and the virtual image. The location of the blocking image in the opacity layer may be adjusted based on the eye-position parameter.

08-15-2013

20130335404

DEPTH OF FIELD CONTROL FOR SEE-THRU DISPLAY - One embodiment provides a method for controlling a virtual depth of field perceived by a wearer of a see-thru display device. The method includes estimating the ocular depth of field of the wearer and projecting virtual imagery with a specified amount of blur. The amount of blur is determined as a function of the ocular depth of field. Another embodiment provides a method for controlling an ocular depth of field of a wearer of a see-thru display device. This method includes computing a target value for the depth of field and increasing the pixel brightness of the virtual imagery presented to the wearer. The increase in pixel brightness contracts the wearer's pupils and thereby deepens the depth of field to the target value.

12-19-2013

20130335435

COLOR VISION DEFICIT CORRECTION - Embodiments related to improving a color-resolving ability of a user of a see-thru display device are disclosed. For example, one disclosed embodiment includes, on a see-thru display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-thru display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user.

12-19-2013

20130335442

LOCAL RENDERING OF TEXT IN IMAGE - Various embodiments are disclosed that relate to enhancing the display of images comprising text on various computing device displays. For example, one disclosed embodiment provides, on a computing device, a method of displaying an image, the method including receiving from a remote computing device image data representing a non-text portion of the image, receiving from the remote computing device unrendered text data representing a text portion of the image, rendering the unrendered text data based upon local contextual rendering information to form locally rendered text data, compositing the locally rendered text data and the image data to form a composited image, and providing the composited image to a display.

12-19-2013

20130342568

LOW LIGHT SCENE AUGMENTATION - Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see through display device, an image augmenting the one or more geometrical features.

12-26-2013

20140043433

AUGMENTED REALITY DISPLAY OF SCENE BEHIND SURFACE - Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.

02-13-2014

20140044305

OBJECT TRACKING - Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, and, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object.

02-13-2014

20140049558

AUGMENTED REALITY OVERLAY FOR CONTROL DEVICES - Embodiments for providing instructional information for control devices are disclosed. In one example, a method on a see-through display device comprising a see-through display and an outward-facing image sensor includes acquiring an image of a scene viewable through the see-through display and detecting a control device in the scene. The method also includes retrieving information pertaining to a function of an interactive element of the control device and displaying an image on the see-through display augmenting an appearance of the interactive element of the control device with image data related to the function of the interactive element.

02-20-2014

20140049559

MIXED REALITY HOLOGRAPHIC OBJECT DEVELOPMENT - Systems and related methods for presenting a holographic object that self-adapts to a mixed reality environment are provided. In one example, a holographic object presentation program captures physical environment data from a destination physical environment and creates a model of the environment including physical objects having associated properties. The program identifies a holographic object for display on a display of a display device, the holographic object including one or more rules linking a detected environmental condition and/or properties of the physical objects with a display mode of the holographic object. The program applies the one or more rules to select the display mode for the holographic object based on the detected environmental condition and/or the properties of the physical objects.

02-20-2014

20140125574

USER AUTHENTICATION ON DISPLAY DEVICE - Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.

05-08-2014
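The authentication check above reduces to comparing the order in which the user selected the augmented reality features against the stored predefined order. A minimal sketch, with the exact-sequence comparison as an assumption consistent with the abstract:

```python
# Illustrative sketch of order-based augmented reality authentication:
# the user is authenticated only if the sequence of selected AR
# features matches the predefined authentication order exactly.

def authenticate(selected_features, predefined_order):
    """Return True only when the features were selected in the
    predefined order stored for this user."""
    return list(selected_features) == list(predefined_order)
```

In practice the selections would come from movements identified via the display device's sensors, gaze or gesture, before being fed to a check like this one.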

20140125668

CONSTRUCTING AUGMENTED REALITY ENVIRONMENT WITH PRE-COMPUTED LIGHTING - Embodiments related to efficiently constructing an augmented reality environment with global illumination effects are disclosed. For example, one disclosed embodiment provides a method of displaying an augmented reality image via a display device. The method includes receiving image data, the image data capturing an image of a local environment of the display device, and identifying a physical feature of the local environment via the image data. The method further includes constructing an augmented reality image of a virtual structure for display over the physical feature in spatial registration with the physical feature from a viewpoint of a user, the augmented reality image comprising a plurality of modular virtual structure segments arranged in adjacent locations to form the virtual structure, each modular virtual structure segment comprising a pre-computed global illumination effect, and outputting the augmented reality image to the display device.

05-08-2014

20140125698

MIXED-REALITY ARENA - A computing system comprises a see-through display device, a logic subsystem, and a storage subsystem storing instructions. When executed by the logic subsystem, the instructions display on the see-through display device a virtual arena, a user-controlled avatar, and an opponent avatar. The virtual arena appears to be integrated within a physical space when the physical space is viewed through the see-through display device. In response to receiving a user input, the instructions may also display on the see-through display device an updated user-controlled avatar.

05-08-2014

20140128161

CROSS-PLATFORM AUGMENTED REALITY EXPERIENCE - A plurality of game sessions are hosted at a server system. A first computing device of a first user is joined to a first multiplayer gaming session, the first computing device including a see-through display. Augmentation information is sent to the first computing device for the first multiplayer gaming session to provide an augmented reality experience to the first user. A second computing device of a second user is joined to the first multiplayer gaming session. Experience information is sent to the second computing device for the first multiplayer gaming session to provide a cross-platform representation of the augmented reality experience to the second user.

05-08-2014

20140145914

HEAD-MOUNTED DISPLAY RESOURCE MANAGEMENT - A system and related methods for resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.

05-29-2014
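The power-mode switching described in the abstract above can be sketched as follows; the sensor names, power figures, and the shape of the detection predicate are invented for illustration and are not drawn from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    default_power_mw: float
    reduced_power_mw: float
    mode: str = "default"

    def power_draw(self) -> float:
        # Reduced mode uses less power than the default mode, per the abstract.
        return self.default_power_mw if self.mode == "default" else self.reduced_power_mw

def manage_sensor(sensor: Sensor, target_detected: bool) -> Sensor:
    # Once target information is detected, drop the selected sensor
    # into its reduced power mode; otherwise run at default fidelity.
    sensor.mode = "reduced" if target_detected else "default"
    return sensor

depth_cam = Sensor("depth", default_power_mw=450.0, reduced_power_mw=120.0)
manage_sensor(depth_cam, target_detected=True)
print(depth_cam.mode, depth_cam.power_draw())  # reduced 120.0
```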

20140168075

Method to Control Perspective for a Camera-Controlled Computer - Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.

06-19-2014

20140192084

MIXED REALITY DISPLAY ACCOMMODATION - A mixed reality accommodation system and related methods are provided. In one example, a head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A mixed reality safety program is configured to receive a holographic object and associated content provider ID from a source. The program assigns a trust level to the object based on the content provider ID. If the trust level is less than a threshold, the object is displayed according to a first set of safety rules that provide a protective level of display restrictions. If the trust level is greater than or equal to the threshold, the object is displayed according to a second set of safety rules that provide a permissive level of display restrictions that are less than the protective level of display restrictions.

07-10-2014
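A minimal sketch of the trust-threshold logic in the abstract above; the provider IDs, trust scores, threshold value, and rule-set names are assumptions made for illustration only.

```python
# Hypothetical provider trust registry; unknown providers default to zero trust.
TRUST_LEVELS = {"trusted-studio": 0.9, "unknown-source": 0.2}
TRUST_THRESHOLD = 0.5

def safety_rules_for(content_provider_id: str) -> str:
    """Pick a safety rule set for a holographic object by its provider's trust level."""
    trust = TRUST_LEVELS.get(content_provider_id, 0.0)
    if trust < TRUST_THRESHOLD:
        return "protective"  # first set: a protective level of display restrictions
    return "permissive"      # second set: fewer restrictions than the protective level

print(safety_rules_for("trusted-studio"))  # permissive
print(safety_rules_for("unknown-source"))  # protective
```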

20140267311

INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.

09-18-2014

20140320389

MIXED REALITY INTERACTIONS - Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.

10-30-2014

20140333665

CALIBRATION OF EYE LOCATION - Embodiments are disclosed that relate to calibrating a predetermined eye location in a head-mounted display. For example, in one disclosed embodiment a method includes displaying a virtual marker visually alignable with a real world target at an alignment condition. At the alignment condition, image data is acquired to determine a location of the real world target. From the image data, an estimated eye location relative to a location of the head-mounted display is determined. Based upon the estimated eye location, the predetermined eye location is then calibrated.

11-13-2014

20140347390

BODY-LOCKED PLACEMENT OF AUGMENTED REALITY OBJECTS - Embodiments are disclosed that relate to placing virtual objects in an augmented reality environment. For example, one disclosed embodiment provides a method comprising receiving sensor data comprising one or more of motion data, location data, and orientation data from one or more sensors located on a head-mounted display device, and based upon the motion data, determining a body-locking direction vector that is based upon an estimated direction in which a body of a user is facing. The method further comprises positioning a displayed virtual object based on the body-locking direction vector.

11-27-2014
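One plausible reading of the body-locking direction vector in the abstract above is a smoothed estimate of noisy per-frame heading samples, with the virtual object positioned along that estimated facing. The exponential-smoothing approach, the smoothing constant, and the 2-D heading representation are assumptions beyond what the abstract states.

```python
import math

def body_lock_heading(headings_rad, alpha=0.1):
    """Smooth noisy heading samples into a stable body-facing angle (radians)."""
    # Smooth on the unit circle to avoid wrap-around artifacts near +/-pi.
    est_x, est_y = math.cos(headings_rad[0]), math.sin(headings_rad[0])
    for h in headings_rad[1:]:
        est_x = (1 - alpha) * est_x + alpha * math.cos(h)
        est_y = (1 - alpha) * est_y + alpha * math.sin(h)
    return math.atan2(est_y, est_x)

def place_object(body_heading_rad, distance_m=2.0):
    """Position a virtual object along the body-locking direction vector."""
    return (distance_m * math.cos(body_heading_rad),
            distance_m * math.sin(body_heading_rad))

# A user facing straight ahead (heading 0) gets the object 2 m in front.
heading = body_lock_heading([0.0, 0.02, -0.01, 0.0])
x, y = place_object(heading)
```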

20140375683

INDICATING OUT-OF-VIEW AUGMENTED REALITY IMAGES - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.

12-25-2014
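The out-of-view indication described above can be sketched in a simplified 2-D form: given the user's gaze direction and an object's position, report whether the object falls outside a field of view and, if so, which way to turn. The 60-degree field of view and the planar simplification are assumptions for illustration.

```python
import math

def out_of_view_indicator(gaze_rad, object_xy, fov_rad=math.radians(60)):
    """Return None if the object is in view, else a turn direction hint."""
    bearing = math.atan2(object_xy[1], object_xy[0])
    # Signed angular offset of the object from the gaze, wrapped to (-pi, pi].
    offset = math.atan2(math.sin(bearing - gaze_rad), math.cos(bearing - gaze_rad))
    if abs(offset) <= fov_rad / 2:
        return None  # object is within the field of view; no indicator needed
    return "turn left" if offset > 0 else "turn right"

print(out_of_view_indicator(0.0, (0.0, 1.0)))   # turn left (object at +90 degrees)
print(out_of_view_indicator(0.0, (1.0, 0.0)))   # None (object straight ahead)
```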

20140380254

GESTURE TOOL - Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. The data is parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of that change to the gesture recognizer engine and application, where the change takes effect.
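A sketch of a tunable gesture filter in the spirit of the abstract above; the "wave" gesture, its min_amplitude parameter, and the hand-position representation are invented examples, not the patent's actual filter model.

```python
class GestureFilter:
    def __init__(self, name, min_amplitude):
        self.name = name
        self.min_amplitude = min_amplitude  # parameter tunable via the gesture tool

    def matches(self, hand_x_positions):
        # A wave satisfies this filter if side-to-side hand travel
        # meets or exceeds the amplitude threshold.
        return max(hand_x_positions) - min(hand_x_positions) >= self.min_amplitude

wave = GestureFilter("wave", min_amplitude=0.3)
samples = [0.0, 0.4, 0.1, 0.5]
print(wave.matches(samples))   # True

# The gesture tool tightens the filter; the same motion no longer qualifies.
wave.min_amplitude = 0.8
print(wave.matches(samples))   # False
```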