Abstract: A display device includes an image display section that transmits an outside scene and displays an image to be visually recognizable together with the outside scene. A control section acquires depth data concerning a direction visually recognizable through the image display section and controls the image display section on the basis of the acquired depth data. According to the control, the control section changes visibility of the outside scene visually recognized through the image display section.

Abstract: Disclosed is an operation display system including: an operation display device having a display unit to display an operation window, an operating unit to receive an operation on the operation window, and a display control unit to change the operation window in accordance with the operation received by the operating unit; an air operation detecting unit to detect an air operation performed by one user in the air, apart from the display unit; a virtual operation window creating unit to create a virtual operation window in which the operation window is changed in accordance with the air operation; and an AR display unit to show the one user an augmented reality space in which the virtual operation window is synthesized with a real space, wherein the display control unit does not change the operation window displayed on the display unit in accordance with the air operation.

Abstract: A virtual-reality system includes a head-mounted display, a camera mounted on and protruding from a surface of the head-mounted display, and a compressible shock mount mounting the camera on the surface. The shock mount is configured to retract the camera towards the head-mounted display when compressed, protecting the camera from damage when the head-mounted display is dropped.

Abstract: A head-mounted display (HMD) divides an image into a high resolution (HR) inset portion at a first resolution, a peripheral portion, and a transitional portion. The peripheral portion is downsampled to a second resolution that is less than the first resolution. The transitional portion is blended such that there is a smooth change in resolution that corresponds to a change in resolution between a fovea region and a non-fovea region of a retina. An inset region is generated using the HR inset portion and the blended transitional portion, and a background region is generated using the downsampled peripheral portion. The inset region is provided to an HR inset display, and the background region is provided to a peripheral display. An optics block combines the displayed inset region with the displayed background region to generate composite content.
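
The split-downsample-blend pipeline described above can be sketched in a few lines. This is a minimal single-image illustration, not the patented method: the function name, the box downsampling, the linear blend ramp, and the assumption of a square grayscale image whose side is divisible by the downsample factor are all illustrative choices.

```python
import numpy as np

def split_foveated(image, inset_radius, transition_width, downsample=4):
    # image: 2-D grayscale array whose dimensions are divisible by `downsample`.
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)                 # distance from gaze center

    # Peripheral background: naive downsampling to a lower resolution.
    background = image[::downsample, ::downsample]

    # Upsample the background back to full size so it can be blended.
    lowres = np.repeat(np.repeat(background, downsample, 0),
                       downsample, 1)[:h, :w]

    # Blend weight ramps from 0 at the inset edge to 1 at the outer rim of
    # the transitional ring, giving the smooth fovea-to-periphery change.
    t = np.clip((r - inset_radius) / transition_width, 0.0, 1.0)
    inset_region = (1.0 - t) * image + t * lowres   # HR center, blended ring
    return inset_region, background
```

In the abstract's terms, `inset_region` would feed the HR inset display and `background` the peripheral display, with the optics block doing the final composition.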

Abstract: Provided are a head-mounted display and an information display apparatus that match the user's intuition and offer excellent operability. A head-mounted display according to an embodiment of the present technology includes a reception unit, an image display element, and a display processing unit. The reception unit receives an operation signal, output from an input device, that includes information on a relative position of a detection target in contact with an input operation surface. The image display element forms an image V1 presented to a user. The display processing unit causes, based on the operation signal, the image display element to display an operation image V10 in which an auxiliary image P indicating the position of the detection target is overlapped on the image V1.

Abstract: This document describes techniques and devices for occluded gesture recognition. Through use of the techniques and devices described herein, users may control their devices even when a user's gesture is occluded by some material between the user's hands and the device itself. Thus, the techniques enable users to control their mobile devices in many situations in which control is desired but conventional techniques do not permit effective control, such as when a user's mobile computing device is occluded by being in a purse, bag, pocket, or even in another room.

Abstract: Embodiments are disclosed that relate to determining a pose of a device. One disclosed embodiment provides a method comprising receiving sensor information from one or more sensors of the device, and selecting a motion-family model from a plurality of different motion-family models based on the sensor information. The method further comprises providing the sensor information to the selected motion-family model and outputting an estimated pose of the device according to the selected motion-family model.
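
The select-then-estimate flow above can be illustrated with a toy dispatcher. Everything here is an assumption for illustration: the family names (stationary/walking/vehicle), the accelerometer-variance heuristic, the numeric thresholds, and the trivial one-dimensional "pose" outputs are not taken from the patent.

```python
import statistics

# Hypothetical motion-family models; each maps sensor samples to a pose
# estimate. Real models would fuse full IMU data, not a 1-D sample list.
def stationary_model(samples):   # trust the latest reading almost entirely
    return samples[-1]

def walking_model(samples):      # short-window smoothing to damp gait bounce
    return statistics.mean(samples[-5:])

def vehicle_model(samples):      # heavier smoothing for sustained vibration
    return statistics.mean(samples)

def select_motion_family(accel_samples):
    # Pick a motion-family model from accelerometer variance (illustrative
    # thresholds, not values from the disclosure).
    var = statistics.pvariance(accel_samples)
    if var < 0.01:
        return stationary_model
    elif var < 1.0:
        return walking_model
    return vehicle_model

def estimate_pose(accel_samples):
    # Route the sensor information to the selected model, as the method does.
    model = select_motion_family(accel_samples)
    return model(accel_samples)
```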

Abstract: An augmented reality image display system may be implemented together with a surgical robot system. The surgical robot system may include a slave system performing a surgical operation, a master system controlling the surgical operation of the slave system, an imaging system generating a virtual image of the inside of a patient's body, and an augmented reality image display system including a camera capturing a real image having a plurality of markers attached to the patient's body or a human body model. The augmented reality image display system may include an augmented reality image generator which detects the plurality of markers in the real image, estimates the position and gaze direction of the camera using the detected markers, and generates an augmented reality image by overlaying a region of the virtual image over the real image, and a display which displays the augmented reality image.

Abstract: A display apparatus includes: a glass-type frame mounted on the head of an observer; and two image displaying devices, for the left and right eyes, that are attached to the frame. Each of the image displaying devices includes an image forming device, an optical system that collimates light from the image forming device into parallel light, and an optical device into which the light from the optical system is incident and in which the light is guided so as to be output. At least one of the image displaying devices further includes a movement device that relatively moves the optical axes of the image forming device and the optical system in a horizontal direction, and a convergence angle is adjusted by relatively moving the optical axes in the horizontal direction using the movement device, depending on the observation position of the observer.

Abstract: A head-wearable device includes a center support extending in generally lateral directions, a first side arm extending from a first end of the center support, and a second side arm extending from a second end of the center support. The device may further include a nosebridge that is removably coupled to the center support. The device may also include a lens assembly that is removably coupled to the center support or the nosebridge. The lens assembly may have a single lens, or a multi-lens arrangement configured to cooperate with a display to correct for a user's ocular disease or disorder.

Abstract: This disclosure concerns an interactive head-mounted eyepiece with an integrated processor for handling content for display and an integrated image source for introducing the content to an optical assembly through which the user views a surrounding environment and the displayed content, wherein the eyepiece includes an event- and sensor-triggered interface to external devices.

Abstract: Embodiments related to mapping an environment of a machine-vision system are disclosed. For example, one disclosed method includes acquiring image data resolving one or more reference features of an environment and computing a parameter value based on the image data, wherein the parameter value is responsive to physical deformation of the machine-vision system.

Abstract: A display includes a projector configured to provide light of a virtual image, a waveguide into which the light of the virtual image is injected at an injection angle by the projector, and a combiner disposed along the waveguide and configured to redirect the light of the virtual image. The waveguide is configured to emit the light at a point established by the injection angle. The combiner is further configured to allow ambient light from beyond the waveguide to pass through the combiner. The waveguide constrains the light of the virtual image through total internal reflection along a curved path for the light between the projector and the combiner.

Abstract: A head-mounted display device is disclosed, which includes an at least partially see-through display, a processor configured to detect a physical feature, generate an alignment hologram based on the physical feature, determine a view of the alignment hologram based on a default view matrix for a first eye of a user of the head-mounted display device, display the view of the alignment hologram to the first eye of the user on the at least partially see-through display, output an instruction to the user to enter an adjustment input to visually align the alignment hologram with the physical feature, determine a calibrated view matrix based on the default view matrix and the adjustment input, and adjust a view matrix setting of the head-mounted display device based on the calibrated view matrix.
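
The final step of folding the user's adjustment input into the default view matrix can be sketched as composing small corrective rotations with the default matrix. This is an illustrative assumption about the calibration math: the yaw/pitch parameterization, the function name, and the composition order are not specified by the abstract.

```python
import numpy as np

def calibrate_view_matrix(default_view, yaw_adj, pitch_adj):
    # Build small corrective rotations (radians) from the user's alignment
    # input and compose them with the default 4x4 view matrix.
    cy, sy = np.cos(yaw_adj), np.sin(yaw_adj)
    cp, sp = np.cos(pitch_adj), np.sin(pitch_adj)
    yaw = np.array([[cy, 0.0, sy, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [-sy, 0.0, cy, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
    pitch = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, cp, -sp, 0.0],
                      [0.0, sp, cp, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    # The calibrated view matrix replaces the default in the device settings.
    return yaw @ pitch @ default_view
```

With zero adjustment the default matrix is returned unchanged, which matches the intuition that a user who sees the hologram already aligned needs no correction.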

Abstract: A head mounted display (HMD) apparatus includes a head posture information sensor for sensing a value of head posture information and a display for displaying an image in a display area. An information terminal (IT) includes an IT-side sensor for sensing a value of IT-side information. In a first control mode, the IT transmits first control mode information to the HMD apparatus, wherein the first control mode information is determined based on a received first value of the head posture information and the value of the IT-side information, and the HMD apparatus sets the image to be displayed based on the first control mode information. In a second control mode, the HMD apparatus sets the image to be displayed based on a second value of the head posture information. Methods and information storage devices storing programs for controlling the HMD apparatus and the IT are also provided.

Abstract: A wearable device is disclosed according to one embodiment. The wearable device can include an eyewear body, onboard electronic components, a thermal coupling and a heat transfer device. The eyewear body can be configured for wearing by a user to hold one or more optical elements mounted on the eyewear body within a field of view of the user. The onboard electronic components can be carried by the eyewear body at a first portion of the eyewear body and can comprise a heat source that generates heat during electrically powered operation thereof. The thermal coupling can be thermally coupled to the heat transfer device at a second portion of the eyewear body. The heat transfer device can be elongate, disposed within the eyewear body, and thermally coupled to the heat source and the thermal coupling. The heat transfer device can extend lengthwise between the heat source and the thermal coupling to transfer heat from the heat source to the thermal coupling.

Abstract: A display apparatus assembly includes: a display apparatus; and a speed measuring device that measures a movement speed of the display apparatus. The display apparatus includes a glass-type frame that is mounted on the head of an observer and two image displaying devices, for the left and right eyes, that are mounted in the frame. Each of the image displaying devices includes an image forming device, an optical system that collimates light output from the image forming device into parallel light, and an optical device into which light output from the optical system is incident and in which the light is guided so as to be output. A convergence angle is changed based on the movement speed of the display apparatus that is measured by the speed measuring device.

Abstract: An immersive headset device is provided that includes a display portion and a body portion. The display portion may include microdisplays having a compact size. The microdisplays may be movable (e.g., rotational) relative to the body portion and can be moved (e.g., rotated) between a flipped-up position and a flipped-down position. In some instances, when the microdisplays are flipped up, the headset provides an augmented reality (AR) mode to a user, and when the microdisplays are flipped down, the headset provides a virtual reality (VR) mode to the user. In certain implementations, the headset includes an electronics source module to provide power and/or signal to the microdisplays. The electronics source module can be attached to a rear of the body portion in order to provide advantageous weight distribution about the head of the user.

Abstract: A head-mounted display device is provided. The head-mounted display device includes: a frame having one surface formed to face a user's face; a wearing part formed to be coupled to at least a part of the frame so as to allow the frame to be fixed to the face; a mounting part formed in a cavity structure so that an external electronic device is capable of being mounted on the other surface of the frame; and one or more fastening parts that fix the external electronic device to the mounting part, wherein the fastening parts have an inclined opening angle.

Abstract: The present invention provides a display apparatus and a display method that realize control over display operations by precisely reflecting the user's status, i.e., the user's intentions, visual state, and physical condition. Worn as an eyeglass-like or head-mounted wearable unit, for example, the display apparatus enables the user to visually recognize various images on a display unit positioned in front of the user's eyes, thereby providing picked-up images, reproduced images, and received images. To control various display operations, such as switching between the display state and the see-through state, selecting a display operation mode, and selecting sources, the display apparatus acquires information about either the behavior or the physical status of the user, determines either the intention or the status of the user in accordance with the acquired information, and controls the display operation appropriately on the basis of the determination result.

Abstract: An electronic device and an operation mode switching method thereof are described. The operation mode switching method is applied to an electronic device that includes a display unit, and the electronic device has a first operation mode and a second operation mode. The display unit has a first light-transmittance in the first operation mode and a second light-transmittance in the second operation mode, the first light-transmittance being higher than the second light-transmittance. The method includes detecting a trigger event; judging whether the trigger event satisfies a predefined condition to obtain a judgment result; generating a switching instruction when the judgment result indicates that the trigger event satisfies the predefined condition; and switching the electronic device between the first operation mode and the second operation mode according to the switching instruction.

Abstract: An imaging device includes a first imaging optical system; a first imaging unit that converts an optical image of a subject formed via the first imaging optical system into an electrical signal and produces an image signal of a first imaged image; a second imaging optical system; a second imaging unit that converts an optical image of the subject formed via the second imaging optical system into an electrical signal and produces an image signal of a second imaged image; and a control unit that independently controls the first imaging optical system and the second imaging optical system and individually performs focus adjustment of the first imaged image and the second imaged image.

Abstract: A preferred system and method for projecting a building information model (BIM) at a construction site includes a network, a system administrator connected to the network, a database connected to the system administrator, a set of registration markers positioned in the construction site, and a set of user devices connected to the network. Each user device includes a hard hat, a set of headsets mounted to the hard hat, a set of display units movably connected to the set of headsets, a set of cameras connected to the set of headsets, and a wearable computer connected to the set of headsets and to the network. The cameras capture an image of the set of registration markers. A position of the user device is determined from the image, and an orientation is determined from motion sensors. A BIM is downloaded and projected to a removable visor based on the position and orientation.

Abstract: An apparatus having a mounting system and a helmet mounted display. The mounting system includes a mounting plate and a mounting rocker. The helmet mounted display includes a housing; latching tabs; an optical element having a combiner surface; at least one processor; and at least one memory including software, the at least one memory and software configured to, with the at least one processor, cause the apparatus at least to display heads-up information on the combiner surface. The optical element is housed within a second portion of the housing, which is hingedly attached to a first portion of the housing. The apparatus is configured such that the combiner surface is positionable within the field of view of a wearer of the helmet. Related assemblies and methods are described.

Abstract: A method includes pre-setting a geo-spatial location where content will be displayed to a user of a see-through head-worn display; establishing a region proximate the geo-spatial location; and causing a marker recognition system of the see-through head-worn display to activate when data indicates that the see-through head-worn display is within the region, wherein the marker recognition system monitors a surrounding environment to identify a marker that will act as a virtual anchor for the presentation of the content.

Abstract: An optical device detachably attached to a frame fixed to a user's head is provided. The optical device includes a body extending in one direction, a display extending in a direction intersecting the one direction, connected to the body, and disposed to be adjacent to the user's eye when fixed to the frame, so as to provide visual information, and a clip module protruding from the body and caught in one region of the frame.

Abstract: Methods, apparatus, and computer-readable media are described herein related to using self-generated sounds for determining a worn state of a wearable computing device. A wearable computing device can transmit an audio signal. One or more sensors coupled to the wearable computing device may then receive a modified version of the audio signal. A comparison may be made between the modified version of the audio signal and at least one reference signal, where the at least one reference signal is based on the audio signal that is transmitted. Based on an output of the comparison, a determination can be made of whether the wearable computing device is being worn.
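
The compare-against-reference step above can be sketched with a simple similarity measure. This is a deliberately simplified illustration: the cosine-similarity comparison, the 0.8 threshold, and the function names are assumptions, and a real device would compare against reference signals derived from the transmitted audio rather than raw sample lists.

```python
import math

def normalized_correlation(a, b):
    # Cosine similarity between two equal-length sample sequences.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_worn(received, worn_reference, threshold=0.8):
    # Decide the worn state by comparing the sensed (modified) audio signal
    # against a reference captured while the device was known to be worn.
    return normalized_correlation(received, worn_reference) >= threshold
```

The intuition matching the abstract: the head and ear modify the transmitted audio in a characteristic way, so a received signal that resembles the "worn" reference implies the device is on the user's head.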

Abstract: One or more sensors gather data, one or more processors analyze the data, and one or more indicators notify a user if the data represent an event that requires a response. One or more of the sensors and/or the indicators is a wearable device for wireless communication. Optionally, other components may be vehicle-mounted or deployed on-site. The components form an ad-hoc network enabling users to keep track of each other in challenging environments where traditional communication may be impossible, unreliable, or inadvisable. The sensors, processors, and indicators may be linked and activated manually or they may be linked and activated automatically when they come within a threshold proximity or when a user does a triggering action, such as exiting a vehicle. The processors distinguish extremely urgent events requiring an immediate response from less-urgent events that can wait longer for response, routing and timing the responses accordingly.

Abstract: A system is disclosed including a mountable component configured with an attachment to allow the mountable component to be readily attached to and removed from an independent wearable garment, a positioning slide connected to the mountable component, a display mount connected to the positioning slide, a lens adjustment arm attached to the positioning slide, and a curved lens surface connected to the lens adjustment arm. The positioning slide is configured to move the lens and display mount in a back and forth direction and a left to right direction. Another system and method are also disclosed.

Abstract: Provided is an electronic device, including: a display unit that displays, on one screen, a plurality of objects to be selected by a user; a line-of-sight detection unit that detects visually-recognized coordinates on the display unit to which the user's line-of-sight is directed; an object visually-recognized time period calculation unit that calculates a time period during which the visually-recognized coordinates detected by the line-of-sight detection unit fall within a range of the object; and a selection exclusion unit that excludes the object from candidates to be selected by the user if the time period calculated by the object visually-recognized time period calculation unit is longer than a time period threshold value and a frequency at which the object is selected is higher than a frequency threshold value.
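
The dual-threshold exclusion rule above reduces to a small predicate. The threshold values and the frequency-as-selection-ratio definition here are illustrative assumptions, not values from the disclosure.

```python
def should_exclude(dwell_time, selection_count, total_selections,
                   time_threshold=2.0, freq_threshold=0.5):
    # Exclude an object from the selection candidates only when BOTH hold:
    # the gaze dwell time exceeds the time threshold, and the object's
    # selection frequency exceeds the frequency threshold.
    frequency = selection_count / total_selections if total_selections else 0.0
    return dwell_time > time_threshold and frequency > freq_threshold
```

A plausible rationale for the conjunction: an object the user stares at often and selects often is likely being fixated out of habit, so excluding it can prevent accidental gaze-based selection.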

Abstract: Methods and systems involving the orienting of video data based on the orientation of a display are described herein. An example system may be configured to (1) receive first video data, the first video data corresponding to a first orientation of the image-capture device; (2) send the first video data to a second computing device; (3) receive, from the second computing device, first orientation data indicating a requested orientation of the image-capture device; (4) cause a visual depiction of the requested orientation to be displayed on a graphical display; (5) receive second video data, the second video data corresponding to a second orientation of the image-capture device, where the second orientation is closer to the requested orientation than is the first orientation; and (6) send the second video data to the second computing device.

Abstract: A metal detector of the disclosure includes a detection head, a supporting rod, and a headset. The supporting rod is connected with the detection head. A first control printed circuit board (PCB) is located inside the supporting rod and is electrically connected to the detection head. The headset includes a second control PCB connected to the first control PCB by wireless communication, and a display electrically connected to the second control PCB. When an operator wears the headset, the visual direction of the operator remains within the display range of the display. When the detection head detects a metal signal, the first control PCB transmits the metal signal to the second control PCB by wireless communication. The second control PCB transforms the metal signal into a video signal, which is transmitted to the display for the operator to view.

Abstract: A rotary device may include: a binding unit detachably coupled to a part of an external object and including a side surface defining a geometric plane and having a length through which a longitudinal axis extends; a stationary unit positioned adjacent to the side surface of the binding unit and formed integrally with or coupled to the binding unit; and a rotary unit coupled to the stationary unit, the rotary unit including a first end and a second end, the first and second ends being opposite one another, the first end being rotatably coupled to the stationary unit, wherein the rotary unit is rotatable between a first position where the rotary unit forms a first angle with respect to the longitudinal axis and a second position where the rotary unit forms a second angle with respect to the longitudinal axis, wherein at the first position, the second end of the rotary unit is a first distance from the geometric plane, and wherein at the second position, the second end of the rotary unit is a second distance from the geometric plane.

Abstract: A method may involve: forming a first bio-compatible layer; forming a conductive pattern on the first bio-compatible layer, wherein the conductive pattern defines an antenna, sensor electrodes, electrical contacts, and one or more electrical interconnects; forming a protective layer over the sensor electrodes, such that the sensor electrodes are covered by the protective layer; mounting an electronic component to the electrical contacts; forming a second bio-compatible layer over the first bio-compatible layer, the electronic component, the antenna, the protective layer, the electrical contacts, and the one or more electrical interconnects; removing a portion of the second bio-compatible layer to form an opening in the second bio-compatible layer; and removing the protective layer through the opening in the second bio-compatible layer to thereby expose the sensor electrodes.

Abstract: Aspects of the present invention relate to methods and systems for imaging, recognizing, and tracking the eye of a user who is wearing an HWC. Aspects further relate to processing images reflected from the user's eye and controlling displayed content in accordance therewith.

Abstract: The present application generally relates to guidance systems configured to assist individuals in learning to play a piano. Specifically, the invention relates to a system for projecting animated guidance onto the keys of a standard piano, with such system being controlled by a computing device directing the speed, tempo, location and other aspects of displaying such guidance. Further embodiments of the invention also provide for the system projecting graphical images onto the keys of the piano to assist with note association.

Abstract: Embodiments that relate to sharing mixed reality experiences among multiple display devices are disclosed. In one embodiment, a method includes receiving current versions of a plurality of data subtypes geo-located at a keyframe location. A world map data structure is updated to include the current versions, and a neighborhood request including the keyframe location is received from a display device. Based on the keyframe location, an identifier and current version indicator for each data subtype is provided to the device. A data request from the device for two or more of the data subtypes is received, and the two or more data subtypes are prioritized based on a priority hierarchy. Based on the prioritization, current versions of the data subtypes are sequentially provided to the device for augmenting an appearance of a mixed reality environment.

Abstract: A binocular display vergence correction system is described. The binocular display vergence correction system includes a binocular display, a display tracking system, and a controller. The binocular display, being pixelated, includes a left eye image display and a right eye image display. The display tracking system is configured to determine the position and angular orientation of the binocular display relative to an origin position and origin angular orientation. The controller includes a processor and is configured to correct the vergence of the binocular display based on an apparent viewing distance from an eye position of the binocular display to a target object to be viewed on a screen, based on the determined position and angular orientation of the binocular display. A method of correcting vergence for a pixelated binocular display is also described.

Abstract: An augmented reality system in which video imagery of a physical environment is combined with video images output by a game engine through the use of a traveling matte, which identifies portions of the visible physical environment by techniques such as computer vision or chroma keying and replaces them with the video images output by the game engine. The composited imagery of the physical environment and the video game imagery is supplied to a trainee through a head-mounted display screen. Additionally, peripheral vision is preserved either by providing a complete binocular display to the limits of peripheral vision, or by providing a visual path to the peripheral vision that is matched in luminance to the higher-resolution augmented reality images provided by the binocular displays. A software/hardware element, comprising a server control station and a controller carried by the trainee, performs the modeling, scenario generation, communications, tracking, and metric generation.

Abstract: A head-mounted display device includes: an image display unit including an image-light generating unit that generates image light representing an image and emits the image light and a light guide unit that guides the emitted image light to the eye of a user, the image display unit being for causing the user to visually recognize a virtual image; and a control unit that includes an operation surface, is connected to the image display unit, and controls image display by the image display unit. When it is assumed that the user shifts the user's attention from the virtual image, the control unit adjusts the luminance of the image-light generating unit or adjusts the image light generated by the image-light generating unit to reduce the visibility of the virtual image.

Abstract: Image capturing systems and methods of capturing images, especially in contact-prone environments, are provided. The image capturing systems include an optical sensor, an output device adapted to receive electrical signals from the optical sensor and transmit a corresponding signal, and a housing retaining the optical sensor and the output device. The housing is adapted to mount to a surface, for example, inside or on top of a helmet, and has a low profile above the surface and a footprint in contact with the surface. The size of the footprint is selected so that impact loads on the housing are minimized or attenuated before being transmitted to the surface to which the housing is mounted. The invention is uniquely applicable to headgear mounting, but may be used on a broad range of articles and in a broad range of fields.

Abstract: A computing device can be controlled based on changes in the angle of a user's head with respect to the device, such as due to the user tilting the device and/or the user tilting his head with respect to the device. Such control based on the angle of the user's head can be achieved even when the user is operating the device “off-axis” or when the device is not orthogonal and/or not centered with respect to the user. This can be accomplished by using an elastic reference point that dynamically adjusts to a detected angle of the user's head with respect to the device. Such an approach can account for differences between when the user is changing his natural resting position and/or the resting position of the device and when the user is intending to perform a gesture based on the angle of the user's head relative to the device.

Abstract: Methods and apparatus, including computer program products, implementing and using techniques for projecting a source image in a head-mounted display apparatus having a left and a right display for projecting a left and right images viewable by the left and right eyes, respectively, of a user. Source image data is received. The source image has right, left, top, and bottom edges. The source image data is processed to generate left image data for the left display and right image data for the right display. The left image data includes the left edge, but not the right edge, of the source image and the right image data includes the right edge, but not the left edge, of the source image. The right image data is presented on the right display and the left image data is presented on the left display.
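
The edge-asymmetric split above can be shown with simple slicing. The overlap fraction and the representation of the image as a list of pixel rows are illustrative assumptions; the point is only that each eye's image keeps its own edge and drops the opposite one.

```python
def split_stereo(source_rows, overlap_fraction=0.8):
    # source_rows: list of equal-length pixel rows.
    width = len(source_rows[0])
    keep = int(width * overlap_fraction)
    # Left image keeps the left edge of the source but not the right edge.
    left = [row[:keep] for row in source_rows]
    # Right image keeps the right edge of the source but not the left edge.
    right = [row[width - keep:] for row in source_rows]
    return left, right
```

The shared middle region appears in both images, which is what lets the two displays fuse into a single wider perceived image.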

Abstract: A method for distributing sports entertainment includes the step of providing a plurality of video cameras positioned on vehicles or athletes that are participating in sporting events, transmitters for transmitting information from the plurality of cameras to a processing station, retransmission equipment for directing the camera feed from each of the plurality of cameras to separate channels for distribution and remote viewing at viewers' locations, and channel selectors that permit viewers to select from among the various channels, thereby allowing the viewers to select from the plurality of camera feeds. The cameras are simultaneously operated during the sporting event so as to generate a plurality of camera feeds during the event, each feed reflecting the perspective of an individual participant. The plurality of feeds is received by the retransmission equipment and retransmitted to selectable channels, each channel being associated with a respective camera feed.

Abstract: An eyewear display system includes a camera coupled to capture an image of an object in a surrounding environment. A projector is coupled to receive the captured image and to output a projected image. A polarizing beam splitter is optically coupled to receive the projected image and an actual view of the surrounding environment, and to output a combined view of the projected image combined with the actual view of the surrounding environment. The combined view is to be directed to an eye of a user. An intensity controller is optically coupled between the surrounding environment and the polarizing beam splitter to control an intensity of the actual view of the surrounding environment received by the polarizing beam splitter.

Abstract: A display system comprises a head-mounted projector including an exit aperture and a projection engine to project image light through the exit aperture. The image light is projected onto a retro-reflective display that reflects image light in a first dimension at above 90% efficiency within a 25 degree exit angular spread and reflects image light in the first dimension below 10% efficiency outside of a 35 degree exit angular spread.

Abstract: A three-dimensional (3-D) program content viewing system in a media environment with different media presentation devices detects a first synchronization signal transmitted from a first synchronization signal source, wherein the first synchronization signal is associated with first 3-D program content and includes a first signal identifier. The system detects a second synchronization signal transmitted from a second synchronization signal source, wherein the second synchronization signal is associated with second 3-D program content and includes a second signal identifier. The system then receives a selection of one of the first synchronization signal and the second synchronization signal and discriminates between the first synchronization signal and the second synchronization signal based upon the first signal identifier and the second signal identifier.
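The identifier-based discrimination step can be sketched as locking onto whichever detected signal carries the viewer-selected identifier and ignoring the rest. The data layout and names below are illustrative assumptions, not structures defined in the abstract.

```python
# Hypothetical sketch of discriminating between multiple shutter-sync
# signals by their embedded identifiers.

from dataclasses import dataclass, field

@dataclass
class SyncSignal:
    identifier: str                         # e.g. ID of a 3-D display's emitter
    timestamps: list = field(default_factory=list)  # shutter toggle times

def select_sync(detected_signals, chosen_id):
    """Return the detected signal whose identifier matches the viewer's
    selection, or None if no such signal is present."""
    for signal in detected_signals:
        if signal.identifier == chosen_id:
            return signal
    return None
```

In a room with two 3-D displays, glasses running this logic would synchronize only to the display the viewer selected, even though both emitters are in range.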

Abstract: A computer system comprising a headset configured to sit on top of a user's head. The headset includes a microphone and a headset haptic device. The headset is configured to receive audio signals and to output a plurality of sound waves based on the received audio signals. The computer system also includes a sound processing module configured to receive a plurality of sound data corresponding with a sound profile associated with a virtual reality environment and to convert the sound data so that sound can be emitted from a sound emitting device of the headset. The headset haptic device is configured to convert audio signals into a haptic profile corresponding to the sound profile and to transmit vibrations corresponding with the haptic profile from the headset haptic device through the headband to the crown of the user's skull and from each ear cup to the skull around the user's ears.
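One plausible way to derive a haptic profile from audio, sketched below, is to map the signal's energy envelope to vibration intensity per window; the windowing and RMS mapping are assumptions for illustration, not the patented conversion.

```python
# Illustrative sketch (not the patented method) of converting audio samples
# into a haptic profile: each fixed-size window yields one vibration
# intensity following the signal's RMS energy.

def haptic_profile(samples, window=256):
    """samples: list of floats in [-1, 1]. Returns one vibration intensity
    in [0, 1] per complete window."""
    profile = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        energy = sum(s * s for s in chunk) / window  # mean-square energy
        profile.append(min(1.0, energy ** 0.5))      # RMS, clamped to [0, 1]
    return profile
```

Silence maps to zero vibration and a full-scale signal maps to maximum intensity, so the haptic output tracks loudness rather than raw waveform detail.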

Abstract: A head mountable display (HMD) system comprises an eye position detector comprising one or more cameras configured to detect the position of each of the HMD user's eyes; a dominant eye detector configured to detect a dominant eye of the HMD user; and an image generator configured to generate images for display by the HMD in dependence upon the HMD user's eye positions, the image generator being configured to apply a greater weight to the detected position of the dominant eye than to the detected position of the non-dominant eye.
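The dominant-eye weighting can be sketched as a weighted average of the two detected eye positions; the 0.7/0.3 split below is an illustrative assumption, since the abstract only requires that the dominant eye receive the greater weight.

```python
# Sketch of combining detected eye positions with a heavier weight on the
# dominant eye (weights are assumed, not specified by the abstract).

def combined_gaze(dominant_pos, non_dominant_pos, dominant_weight=0.7):
    """Each position is an (x, y) pair; returns the weighted gaze point."""
    w = dominant_weight
    return tuple(w * d + (1 - w) * n
                 for d, n in zip(dominant_pos, non_dominant_pos))
```

With the default weight, a disagreement between the eyes resolves 70% toward the dominant eye's detected position.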