Leap motion technology

Vertical leap measuring apparatus and method

A vertical leap measuring apparatus and method are illustrated, including a series of vertically spaced electric pressure-sensitive switches and a device for recording an indication of the height reached, first from a standing position and subsequently from a vertical leap; the difference between the two height indications is calculated to determine the height of the vertical leap.
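
The measurement reduces to a subtraction of the two recorded heights. A minimal sketch, assuming the switch array reports each reach as a height in centimetres (the function name and units are illustrative, not from the patent):

```python
def vertical_leap(standing_reach_cm: float, jump_reach_cm: float) -> float:
    """Return the vertical leap as the difference between the highest
    switch triggered during the jump and the highest switch reached
    while standing flat-footed."""
    if jump_reach_cm < standing_reach_cm:
        raise ValueError("jump reach cannot be below standing reach")
    return jump_reach_cm - standing_reach_cm

# Standing reach 230 cm, jump touch 305 cm:
print(vertical_leap(230.0, 305.0))  # 75.0
```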

Identifying an object in a field of view

The technology disclosed relates to identifying an object in a field of view of a camera. In particular, it relates to identifying a display in the field of view of the camera. This is achieved by monitoring a space, including acquiring a series of image frames of the space using the camera and detecting one or more light sources in the series of image frames. Further, one or more frequencies of periodic intensity or brightness variation (also referred to as 'refresh rate') of light emitted from the light sources are measured. Based on the one or more frequencies of periodic intensity variations of light emitted from the light sources, at least one display that includes the light sources is identified.
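
The frequency measurement amounts to finding the dominant periodic component in a per-frame brightness series. A minimal sketch, assuming a hypothetical list of mean brightness values for one detected light source (one value per frame) and a known camera frame rate; a real system would also have to handle aliasing when the refresh rate exceeds half the frame rate:

```python
import math

def dominant_frequency(samples, fps):
    """Estimate the dominant frequency (Hz) of a periodic brightness
    variation via a naive DFT over mean-brightness-per-frame samples."""
    n = len(samples)
    mean = sum(samples) / n
    centred = [s - mean for s in samples]      # remove the DC component
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):                 # skip DC, stay below Nyquist
        re = sum(c * math.cos(2 * math.pi * k * i / n)
                 for i, c in enumerate(centred))
        im = sum(c * math.sin(2 * math.pi * k * i / n)
                 for i, c in enumerate(centred))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n

# Synthetic 12 Hz flicker sampled at 120 fps for one second:
flicker = [math.sin(2 * math.pi * 12 * i / 120) for i in range(120)]
print(dominant_frequency(flicker, 120))  # 12.0
```

Frequency resolution here is fps / n, so longer frame series give finer estimates of the refresh rate.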

Access control device and method for operating same

The device (1) has a reading device (9) that reads an access authorization such that a locking element (2), e.g. a rotary lock, is moved from a locking position into a release position when the read access authorization is valid. A sensor (13), e.g. a proximity sensor, arranged after the locking element in the passage direction detects climbing over and/or leaping over the locking element. A sensor (12) arranged before the locking element in the passage direction detects crawling under the locking element. A digital camera (11) records images of the person while a crawling and/or leaping attempt by the person is being detected. An independent claim is also included for a method for operating an access control device for persons.

Feature tracking for device input

A user can emulate touch screen events with motions and gestures that the user performs at a distance from a computing device. A user can utilize specific gestures, such as a pinch gesture, to designate portions of motion that are to be interpreted as input, to differentiate them from other portions of the motion. A user can then provide input such as text by performing motions with the pinch gesture that correspond to words or other selections recognized by a text input program. A camera-based detection approach can be used to recognize the location of features performing the motions and gestures, such as a hand, finger, and/or thumb of the user.

Interactive bridge console system

The present invention relates to an interactive bridge console system (1) comprising a computer server system configurable for marine vessel operations, providing interactive interfaces for a plurality of instruments and apparatus operationally connected to the interactive bridge console system (1), wherein a three-dimensional image recording and object position tracking system (13) is located above a console surface (11) and is configurable to record and track hand movements in the space above the console surface (11).

System and method for detecting three dimensional gestures to initiate and complete the transfer of application data between networked devices

An apparatus and method for detecting a three-dimensional gesture are provided. The method includes detecting, by at least one three dimensional motion sensing input device embedded in a network having a plurality of interconnected hardware, a three-dimensional gesture of a user, selecting, based on the detected gesture, application data corresponding to an application being executed, stored or displayed on a first device in the network to be transmitted to a second device in the network, transmitting the selected application data to hardware and software associated with the second device, and performing at least one of executing, storing or displaying the selected application data on the second device, wherein the at least one three dimensional motion sensing input device comprises gesture detection hardware and software.

System for warning the driver of a motor vehicle

The device has a video camera (1) for generating an analog video signal that is supplied to an amplifier (5). The amplifier, which comprises a differentiator and threshold circuits, transmits the signal leaps present in the video signal to a microcontroller (8); the camera is a rear-view camera. The microcontroller determines the position of the signal leaps and delivers a warning signal when the position of the signal leaps deviates from a target area.
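
The differentiator-plus-threshold stage can be sketched in software as a discrete derivative compared against a threshold. A minimal sketch, assuming a hypothetical list of sample amplitudes from one video line (the patent describes an analog hardware circuit, not code):

```python
def detect_leaps(signal, threshold):
    """Return sample indices where the discrete derivative of the
    signal exceeds a threshold -- a rough software analogue of a
    differentiator followed by threshold circuits."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# A flat line with two abrupt steps (e.g. edges of a marking):
video_line = [0.1] * 5 + [0.9] * 5 + [0.1] * 5
print(detect_leaps(video_line, 0.5))  # [5, 10]
```

Tracking how these indices drift between frames is what lets the microcontroller decide that the leap positions have left the target area.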

Touch free hygienic display control panel for a smart toilet

A touch-free toilet control panel display which may be used for activating various components of a smart toilet. The user may use a touch-free input device to choose from a displayed menu of toilet functions. Various input devices may be used to activate the displayed toilet devices, such as a floating capacitive field (404) to detect a user's fingers hovering above the displayed menu options (406), body movement recognition used to control the movement of a displayed cursor, an eye tracking system (604) that uses the user's gaze point (610), tongue recognition, etc. Touch-free activation of the menu functions reduces the transmission of bacteria from the menu panel to the user, and may also ease the use of the toilet devices. The display menu may show toilet devices such as a bidet, bidet water temperature, internet connection, etc. Other embodiments are described and shown.

User gesture recognition

A method and device for gesture recognition, the gesture being executed by a user in a gesture region which may be defined relative to a display surface. In an embodiment the gesture comprises a select gesture and the device comprises at least three cameras operating in the visual range, where a first camera is used to determine the horizontal location of the select gesture and the other cameras are used to determine its vertical location.

Computer display object controller

A display object controller includes a sensor elevated above the keys of a keyboard. The sensor is directed to view and monitor a user's first hand in a typing position on the keyboard. The controller includes a switch positioned adjacent the spacebar of the keyboard. The sensor and switch are connected to a processor which is connected to a computer system. The switch is arranged to enable or disable display object control. When the switch is engaged by the thumb of the user's second hand, the processor is responsive to the sensor to track hand motion for controlling display objects. When the switch is disengaged by the thumb, the processor communicates with the computer system to disable tracking hand motion so the fingers may type on the keyboard without controlling the display objects. In another embodiment, a switching means is provided in software, wherein the sensor is arranged to detect the second hand performing or ceasing to perform a predetermined gesture as a command to enable or disable display object control mode.
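
The switch-gated behaviour described above can be sketched as a small state machine: hand motion moves display objects only while the enable switch (or the equivalent software gesture) is engaged. All names here are illustrative, not from the patent:

```python
class ObjectController:
    """Minimal sketch of switch-gated tracking: motion events are
    applied to the cursor only while the enable switch is engaged."""

    def __init__(self):
        self.tracking = False
        self.cursor = [0, 0]

    def set_switch(self, engaged: bool):
        # Thumb on the switch (or a recognized gesture) toggles tracking.
        self.tracking = engaged

    def hand_moved(self, dx: int, dy: int):
        # Motion is ignored while disengaged, so the fingers can type
        # without disturbing display objects.
        if self.tracking:
            self.cursor[0] += dx
            self.cursor[1] += dy

c = ObjectController()
c.hand_moved(5, 5)    # switch disengaged: typing, cursor unchanged
c.set_switch(True)
c.hand_moved(5, 5)    # switch engaged: cursor follows the hand
print(c.cursor)       # [5, 5]
```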

Leap second support in content timestamps

In embodiments, apparatuses, methods and storage media are described that are associated with support for leap seconds in the provision of media content. In embodiments, a leap second is identified for a time during which media content may be timestamped. In embodiments, timestamps may be generated so that no segment of the media content contains a repeated timestamp, and the media content is provisioned. In embodiments, content may be provisioned using a non-repeating time standard, such as TAI, and segments of media content may be defined to have different lengths. In other embodiments, different time standards may be used, but seconds may be repeated across segment boundaries. Other embodiments may be described and/or claimed.
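
The appeal of a non-repeating standard such as TAI is that the accumulated leap-second offset keeps timestamps monotonic even when a POSIX-style UTC second count repeats. A minimal sketch, assuming the offset values are looked up from a leap-second table (the offsets below are illustrative; real systems consult published leap-second data):

```python
def tai_timestamp(posix_seconds: int, leap_seconds_so_far: int) -> int:
    """Convert a POSIX-style UTC second count to a TAI second count by
    adding the accumulated leap-second offset.  Because the offset
    increments exactly when the UTC count repeats, two media segments
    can never carry the same timestamp across a leap second."""
    return posix_seconds + leap_seconds_so_far

# During a positive leap second the POSIX count repeats a second,
# but the offset has incremented (e.g. 36 -> 37), so the TAI stamps
# stay distinct and strictly increasing:
before = tai_timestamp(1_000_000, 36)
after = tai_timestamp(1_000_000, 37)   # same POSIX count, post-leap
assert after == before + 1
```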

Creating a virtual environment for touchless interaction

This disclosure is directed to a touchless interactive environment. An input device may be configured to capture electronic images corresponding to physical objects detectable within a physical three-dimensional region. A computer system may establish a virtual three-dimensional region mapped to the physical three-dimensional region, with the virtual three-dimensional region defining a space where a plurality of virtual objects are instantiated based on the plurality of electronic images. The computer system may select a virtual object from the plurality of virtual objects as one or more commanding objects, with the one or more commanding objects indicating a command of a graphical user interface to be performed based on a position of the one or more commanding objects. The computer system may then perform the command of the graphical user interface based on the position of the one or more commanding objects.
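
The mapping between the physical and virtual regions can be sketched as an independent linear normalization per axis. A minimal sketch, assuming hypothetical axis-aligned bounding boxes for both regions (the patent does not specify the mapping form):

```python
def map_to_virtual(point, phys_min, phys_max, virt_min, virt_max):
    """Linearly map a tracked 3-D point from the physical capture
    region into the virtual region; each axis is scaled independently
    between its minimum and maximum bounds."""
    return tuple(
        v0 + (p - p0) / (p1 - p0) * (v1 - v0)
        for p, p0, p1, v0, v1
        in zip(point, phys_min, phys_max, virt_min, virt_max)
    )

# A hand at the centre of a 40x40x40 cm capture box maps to the
# centre of a unit virtual cube:
print(map_to_virtual((20, 20, 20), (0, 0, 0), (40, 40, 40),
                     (0, 0, 0), (1, 1, 1)))  # (0.5, 0.5, 0.5)
```

Virtual objects instantiated at these mapped positions can then be tested against interface elements to decide which one acts as the commanding object.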

Computing device

A computing device includes a base member, an input device attached to the base member and a display member connected to the base member. A sensor is attached to the base member and a controller adjusts input sensitivity of the input device when the sensor detects an object close to the display member.