
Abstract:

This disclosure concerns an interactive head-mounted eyepiece with an
integrated processor for handling content for display and an integrated
image source for introducing the content to an optical assembly through
which the user views a surrounding environment and the displayed content,
wherein the optical assembly comprises a light transmissive wedge-shaped
illumination system with an LED lighting system coupled to an edge of the
wedge, and wherein an angled surface of the wedge directs light from the
LED lighting system to uniformly irradiate a reflective image display to
produce an image that is reflected through the illumination system to
provide the displayed content to the user.

Claims:

1. A system, comprising: an interactive head-mounted eyepiece worn by a
user, wherein the eyepiece includes an optical assembly through which the
user views a surrounding environment combined with displayed content, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly, wherein the optical assembly comprises a light transmissive
wedge-shaped illumination system with an LED lighting system coupled to
an edge of the wedge, and wherein an angled surface of the wedge directs
light from the LED lighting system to uniformly irradiate a reflective
image display to produce an image that is reflected through the
illumination system to provide the displayed content to the user.

2. The system of claim 1, wherein the LED lighting system is coupled to a
long edge of the wedge.

3. The system of claim 1, wherein the LED lighting system is coupled to a
short edge of the wedge.

4. The system of claim 1, wherein a polarizing layer is disposed on the
angled surface of the wedge.

5. The system of claim 1, wherein a polarizing layer is disposed on the
LED lighting system.

Description:

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent
Application 61/584,029, filed Jan. 6, 2012, which is incorporated herein
by reference in its entirety.

[0002] This application is a continuation-in-part of the following United
States non-provisional patent applications, each of which is incorporated
herein by reference in its entirety:

[0003] U.S. Non-Provisional application Ser. No. 13/341,758, filed Dec.
30, 2011, which claims the benefit of the following provisional
applications, each of which is hereby incorporated herein by reference
in its entirety: U.S. Provisional Patent Application 61/557,289, filed
Nov. 8, 2011.

[0004] U.S. Non-Provisional application Ser. No. 13/232,930, filed Sep.
14, 2011, which claims the benefit of the following provisional
applications, each of which is hereby incorporated herein by reference
in its entirety: U.S. Provisional Application 61/382,578, filed Sep. 14,
2010; U.S. Provisional Application 61/472,491, filed Apr. 6, 2011; U.S.
Provisional Application 61/483,400, filed May 6, 2011; U.S. Provisional
Application 61/487,371, filed May 18, 2011; and U.S. Provisional
Application 61/504,513, filed Jul. 5, 2011.

[0005] The present disclosure relates to an augmented reality eyepiece,
associated control technologies, and applications for use, and more
specifically to software applications running on the eyepiece.

SUMMARY

[0006] In embodiments, the eyepiece may include an internal software
application running on an integrated multimedia computing facility that
has been adapted for 3D augmented reality (AR) content display and
interaction with the eyepiece. 3D AR software applications may be
developed in conjunction with mobile applications and provided through
application store(s), or as stand-alone applications specifically
targeting the eyepiece as the end-use platform and through a dedicated 3D
AR eyepiece store. Internal software applications may interface with
input and output facilities provided by the eyepiece through facilities
internal and external to the eyepiece, such as inputs initiated from the
surrounding environment, sensing devices, user action capture devices,
internal processing facilities, internal multimedia processing
facilities, other internal applications, a camera, sensors, a
microphone, a transceiver, a tactile interface, external computing
facilities, external applications, event and/or data feeds, external
devices, third parties, and the like. Command and control modes
operating in conjunction with the eyepiece may be initiated by sensing
inputs through input devices, user action, external device interaction,
reception of events and/or data feeds, internal application execution,
external application execution, and the like. In embodiments, there may
be a series of steps included in the execution control as provided
through the internal software application, including combinations of at
least two of the following: events and/or data feeds, sensing
inputs and/or sensing devices, user action capture inputs and/or outputs,
user movements and/or actions for controlling and/or initiating commands,
command and/or control modes and interfaces in which the inputs may be
reflected, applications on the platform that may use commands to respond
to inputs, communications and/or connection from the on-platform
interface to external systems and/or devices, external devices, external
applications, feedback to the user (such as related to external devices,
external applications), and the like.
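
The following is a speculative sketch, in Python, of the execution
control chain outlined above: sensed inputs and events are mapped to
commands under the active command and control mode, routed to
on-platform applications, and feedback is returned to the user. All
names and interfaces here are illustrative assumptions, not part of the
disclosure.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Event:
        source: str      # e.g. "sensor", "user_action", "external_feed"
        payload: dict

    class ControlPipeline:
        """Hypothetical dispatcher mapping sensed events to application commands."""
        def __init__(self) -> None:
            self.handlers: Dict[str, List[Callable[[Event], str]]] = {}

        def register(self, source: str, handler: Callable[[Event], str]) -> None:
            # applications register for the input sources they respond to
            self.handlers.setdefault(source, []).append(handler)

        def dispatch(self, event: Event) -> List[str]:
            # run every handler for the event source; return user feedback strings
            return [handler(event) for handler in self.handlers.get(event.source, [])]

    pipeline = ControlPipeline()
    pipeline.register("user_action", lambda e: "command issued: " + e.payload["gesture"])
    print(pipeline.dispatch(Event("user_action", {"gesture": "nod"})))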

[0007] These and other systems, methods, objects, features, and advantages
of the present disclosure will be apparent to those skilled in the art
from the following detailed description of the embodiments and the
drawings.

[0008] All documents mentioned herein are hereby incorporated in their
entirety by reference. References to items in the singular should be
understood to include items in the plural, and vice versa, unless
explicitly stated otherwise or clear from the text. Grammatical
conjunctions are intended to express any and all disjunctive and
conjunctive combinations of conjoined clauses, sentences, words, and the
like, unless otherwise stated or clear from the context.

BRIEF DESCRIPTION OF THE FIGURES

[0009] The present disclosure and the following detailed description of
certain embodiments thereof may be understood by reference to the
following figures:

[0132] FIG. 106 depicts a top-level block diagram showing software
application facilities and markets in conjunction with functional and
control aspects of the eyepiece in an embodiment of the present
invention.

[0133] FIG. 107 depicts a functional block diagram of the eyepiece
application development environment in an embodiment of the present
invention.

[0134] FIG. 108 depicts a platform elements development stack in relation
to software applications for the eyepiece in an embodiment of the present
invention.

[0135] FIG. 109 is an illustration of a head mounted display with
see-through capability according to an embodiment of the present
invention.

[0136] FIG. 110 is an illustration of a view of an unlabeled scene as
viewed through the head mounted display depicted in FIG. 109.

[0137] FIG. 111 is an illustration of a view of the scene of FIG. 110 with
2D overlaid labels.

[0138] FIG. 112 is an illustration of 3D labels of FIG. 111 as displayed
to the viewer's left eye.

[0139] FIG. 113 is an illustration of 3D labels of FIG. 111 as displayed
to the viewer's right eye.

[0140] FIG. 114 is an illustration of the left and right 3D labels of FIG.
111 overlaid on one another to show the disparity.

[0141] FIG. 115 is an illustration of the view of a scene of FIG. 110 with
the 3D labels.

[0142] FIG. 116 is an illustration of stereo images captured of the scene
of FIG. 110.

[0143] FIG. 117 is an illustration of the overlaid left and right stereo
images of FIG. 116 showing the disparity between the images.

[0144] FIG. 118 is an illustration of the scene of FIG. 110 showing the
overlaid 3D labels.

[0145] FIG. 119 is a flowchart for a depth cue method embodiment of the
present invention for providing 3D labels.

[0146] FIG. 120 is a flowchart for another depth cue method embodiment of
the present invention for providing 3D labels.

[0147] FIG. 121 is a flowchart for yet another depth cue method embodiment
of the present invention for providing 3D labels.

[0148] FIG. 122 is a flowchart for still another depth cue method
embodiment of the present invention for providing 3D labels.

DETAILED DESCRIPTION

[0149] The present disclosure relates to eyepiece electro-optics. The
eyepiece may include projection optics suitable to project an image onto
a see-through or translucent lens, enabling the wearer of the eyepiece to
view the surrounding environment as well as the displayed image. The
projection optics, also known as a projector, may include an RGB LED
module that uses field sequential color. With field sequential color, a
single full color image may be broken down into color fields based on the
primary colors of red, green, and blue and imaged by an LCoS (liquid
crystal on silicon) optical display 210 individually. As each color field
is imaged by the optical display 210, the corresponding LED color is
turned on. When these color fields are displayed in rapid sequence, a
full color image may be seen. With field sequential color illumination,
the resulting projected image in the eyepiece can be adjusted to
compensate for chromatic aberration by shifting the red image relative
to the blue and/or green image, and so on. The image may thereafter be
reflected into a two-surface freeform waveguide where the image light
undergoes total internal reflection (TIR) until reaching the active
viewing area of the lens, where the user sees the image. A processor,
which may include a
memory and an operating system, may control the LED light source and the
optical display. The projector may also include or be optically coupled
to a display coupling lens, a condenser lens, a polarizing beam splitter,
and a field lens.
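
As a minimal illustration of the field sequential color scheme just
described (the frame size and field rate below are assumptions, not
values from the disclosure), a full color frame can be split into red,
green, and blue fields that are imaged one at a time while only the
matching LED is lit:

    import numpy as np

    def field_sequential_fields(rgb_frame):
        """Split an (H, W, 3) RGB frame into ordered (led, field) pairs."""
        return [(color, rgb_frame[:, :, i])
                for i, color in enumerate(("red", "green", "blue"))]

    # hypothetical 854 x 480 frame; at 180 fields/s a 60 Hz frame becomes
    # three sequential color fields that the eye fuses into full color
    frame = (np.random.rand(480, 854, 3) * 255).astype(np.uint8)
    for led, field in field_sequential_fields(frame):
        print(led, field.shape, int(field.mean()))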

[0150] Referring to FIG. 1, an illustrative embodiment of the augmented
reality eyepiece 100 is depicted. It will be understood that
embodiments of the eyepiece 100 may not include all of the elements
depicted in FIG. 1 while other embodiments may include additional or
different elements. In embodiments, the optical elements may be embedded
in the arm portions 122 of the frame 102 of the eyepiece. Images may be
projected with a projector 108 onto at least one lens 104 disposed in an
opening of the frame 102. One or more projectors 108, such as a
nanoprojector, picoprojector, microprojector, femtoprojector, laser-based
projector, holographic projector, and the like, may be disposed in an arm
portion of the eyepiece frame 102. In embodiments, both lenses 104 are
see-through or translucent while in other embodiments only one lens 104
is translucent while the other is opaque or missing. In embodiments, more
than one projector 108 may be included in the eyepiece 100.

[0151] In embodiments such as the one depicted in FIG. 1, the eyepiece 100
may also include at least one articulating ear bud 120, a radio
transceiver 118 and a heat sink 114 to absorb heat from the LED light
engine, to keep it cool and to allow it to operate at full brightness.
There are also one or more TI OMAP4 (Open Multimedia Applications
Platform 4) processors 112 and a flex cable with RF antenna 110, all of
which will be further described herein.

[0152] In an embodiment and referring to FIG. 2, the projector 200 may be
an RGB projector. The projector 200 may include a housing 202, a heatsink
204 and an RGB LED engine or module 206. The RGB LED engine 206 may
include LEDs, dichroics, concentrators, and the like. A digital signal
processor (DSP) (not shown) may convert the images or video stream into
control signals, such as voltage drops/current modifications, pulse width
modulation (PWM) signals, and the like to control the intensity,
duration, and mixing of the LED light. For example, the DSP may control
the duty cycle of each PWM signal to control the average current flowing
through each LED generating a plurality of colors. A still image
co-processor of the eyepiece may employ noise-filtering, image/video
stabilization, and face detection, and be able to make image
enhancements. An audio back-end processor of the eyepiece may employ
buffering, SRC, equalization and the like.
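
As a hedged sketch of the PWM control idea above: the average current
through an LED is the peak drive current multiplied by the PWM duty
cycle, so the DSP can set per-channel duty cycles to mix colors. The
current values below are illustrative assumptions only.

    def duty_cycle_for_average_current(target_avg_ma, peak_ma):
        """Return the PWM duty cycle (0..1) giving the target average current."""
        if not 0 <= target_avg_ma <= peak_ma:
            raise ValueError("target must lie between 0 and the peak drive current")
        return target_avg_ma / peak_ma

    peak_ma = 100.0                                       # assumed peak LED current
    targets = {"red": 60.0, "green": 25.0, "blue": 10.0}  # desired averages (mA)
    duties = {c: duty_cycle_for_average_current(i, peak_ma)
              for c, i in targets.items()}
    print(duties)  # {'red': 0.6, 'green': 0.25, 'blue': 0.1}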

[0153] The projector 200 may include an optical display 210, such as an
LCoS display, and a number of components as shown. In embodiments, the
projector 200 may be designed with a single panel LCoS display 210;
however, a three panel display may be possible as well. In the single
panel embodiment, the display 210 is illuminated with red, blue, and
green sequentially (aka field sequential color). In other embodiments,
the projector 200 may make use of alternative optical display
technologies, such as a back-lit liquid crystal display (LCD), a
front-lit LCD, a transflective LCD, an organic light emitting diode
(OLED), a field emission display (FED), a ferroelectric LCoS (FLCOS),
liquid crystal technologies mounted on Sapphire, transparent
liquid-crystal micro-displays, quantum-dot displays, and the like.

[0154] The eyepiece may be powered by any power supply, such as battery
power, solar power, line power, and the like. The power supply may be
integrated into the frame 102 or disposed external to the eyepiece 100
and in electrical communication with the powered elements of the
eyepiece 100.
For example, a solar energy collector may be placed on the frame 102, on
a belt clip, and the like. Battery charging may occur using a wall
charger, car charger, on a belt clip, in an eyepiece case, and the like.

[0155] The projector 200 may include the LED light engine 206, which may
be mounted on heat sink 204 and holder 208 to ensure vibration-free
mounting of the LED light engine, a hollow tapered light tunnel 220, a
diffuser 212, and a condenser lens 214. Hollow tunnel 220 helps to
homogenize the rapidly-varying light from the RGB LED light engine. In
one embodiment, hollow light tunnel 220 includes a silvered coating. The
diffuser lens 212 further homogenizes and mixes the light before the
light is led to the condenser lens 214. The light leaves the condenser
lens 214 and then enters the polarizing beam splitter (PBS) 218. In the
PBS, the LED light is propagated and split into polarization components
before it is refracted to a field lens 216 and the LCoS display 210. The
LCoS display provides the image for the microprojector. The image is then
reflected from the LCoS display and back through the polarizing beam
splitter, and then reflected ninety degrees. Thus, the image leaves
microprojector 200 at about the middle of the microprojector. The light
is then led to the coupling lens 504, described below.

[0156] FIG. 2 depicts an embodiment of the projector assembly along with
other supporting figures as described herein, but one skilled in the art
will appreciate that other configurations and optical technologies may be
employed. For instance, transparent structures, such as with substrates
of Sapphire, may be utilized to implement the optical path of the
projector system rather than with reflective optics, thus potentially
altering and/or eliminating optical components, such as the beam
splitter, redirecting mirror, and the like. The system may have a backlit
system, where the LED RGB triplet may be the light source directed to
pass light through the display. As a result, the backlight and the
display may be mounted either adjacent to the waveguide, or there may be
collimating/directing optics after the display so that the light
properly enters the optic. If there are no directing optics, the display
may be mounted on the top, the side, and the like, of the waveguide. In
an example, a small transparent display may be implemented with a silicon
active backplane on a transparent substrate (e.g. sapphire), transparent
electrodes controlled by the silicon active backplane, a liquid crystal
material, a polarizer, and the like. The function of the polarizer may be
to correct for depolarization of light passing through the system to
improve the contrast of the display. In another example, the system may
utilize a spatial light modulator that imposes some form of
spatially-varying modulation on the light path, such as a micro-channel
spatial light modulator in which membrane-mirror light shutters are
based on micro-electromechanical systems (MEMS). The system may also
utilize other
optical components, such as a tunable optical filter (e.g. with a
deformable membrane actuator), a high angular deflection micro-mirror
system, a discrete phase optical element, and the like.

[0157] In other embodiments the eyepiece may utilize OLED displays,
quantum-dot displays, and the like, that provide higher power efficiency,
brighter displays, less costly components, and the like. In addition,
display technologies such as OLED and quantum-dot displays may allow for
flexible displays, thus providing greater packaging efficiency that may
reduce the overall size of the eyepiece. For example, OLED and
quantum-dot display materials may be printed through stamping techniques
onto plastic substrates, thus creating a flexible display component. For
example, the OLED (organic LED) display may be a flexible, low-power
display that does not require backlighting. It can be curved, as in
standard eyeglass lenses. In one embodiment, the OLED display may be or
provide for a transparent display.

[0158] Referring to FIG. 82, the eyepiece may utilize a planar
illumination facility 8208 in association with a reflective display 8210,
where light source(s) 8202 are coupled 8204 with an edge of the planar
illumination facility 8208, and where the planar side of the planar
illumination facility 8208 illuminates the reflective display 8210 that
provides imaging of content to be presented to the eye 8222 of the wearer
through transfer optics 8212. In embodiments, the reflective display 8210
may be an LCD, an LCD on silicon (LCoS), cholesteric liquid crystal,
guest-host liquid crystal, polymer dispersed liquid crystal, phase
retardation liquid crystal, and the like, or other liquid crystal
technology known in the art. In other embodiments, the reflective display
8210 may be a bi-stable display, such as electrophoretic, electrofluidic,
electrowetting, electrokinetic, cholesteric liquid crystal, and the like,
or any other bi-stable display known to the art. The reflective display
8210 may also be a combination of an LCD technology and a bi-stable
display technology. In embodiments, the coupling 8204 between a light
source 8202 and the `edge` of the planar illumination facility 8208 may
be made through other surfaces of the planar illumination facility 8208
and then directed into the plane of the planar illumination facility
8208, such as initially through the top surface, bottom surface, an
angled surface, and the like. For example, light may enter the planar
illumination facility from the top surface, but into a 45° facet
such that the light is bent into the direction of the plane. In an
alternate embodiment, this bending of direction of the light may be
implemented with optical coatings.
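
A short worked check of the 45 degree facet example, assuming the planar
illumination facility is PMMA with a refractive index of about 1.49 (an
assumption, since the disclosure does not fix the material here): the
incidence angle at the facet (45 degrees) exceeds the critical angle, so
the light is totally internally reflected and folded into the plane.

    import math

    n_pmma = 1.49                                    # assumed refractive index
    critical = math.degrees(math.asin(1.0 / n_pmma))
    print("critical angle (deg):", round(critical, 1))  # about 42.2
    print("TIR at 45 deg facet:", 45.0 > critical)      # True: ray bent into the plane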

[0159] In an example, the light source 8202 may be an RGB LED source (e.g.
an LED array) coupled 8204 directly to the edge of the planar
illumination facility. The light entering the edge of the planar
illumination facility may then be directed to the reflective display for
imaging, such as described herein. Light may enter the reflective display
to be imaged, and then redirected back through the planar illumination
facility, such as with a reflecting surface at the backside of the
reflective display. Light may then enter the transfer optics 8212 for
directing the image to the eye 8222 of the wearer, such as through a lens
8214, reflected by a beam splitter 8218 to a reflective surface 8220,
back through the beam splitter 8218, and the like, to the eye 8222.
Although the transfer optics 8212 have been described in terms of
elements 8214, 8218, and 8220, it will be appreciated by one skilled in
the art that the transfer optics 8212 may include any transfer optics
configuration known, including more complex or simpler configurations
than described herein. For instance, with a different focal length in the
field lens 8214, the beam splitter 8218 could bend the image directly
towards the eye, thus eliminating the curved mirror 8220, and achieving a
simpler design implementation. In embodiments, the light source 8202 may
be an LED light source, a laser light source, a white light source, and
the like, or any other light source known in the art. The light coupling
mechanism 8204 may be direct coupling between the light source 8202 and
the planar illumination facility 8208, or coupling through a medium or
mechanism such as a waveguide, fiber optic, light pipe, lens, and the
like. The planar illumination facility 8208 may receive and redirect the
light to a planar side of its structure through an interference grating,
optical imperfections, scattering features, reflective surfaces,
refractive elements, and the like. The planar illumination facility 8208
may be a cover glass over the reflective display 8210, such as to reduce
the combined thickness of the reflective display 8210 and the planar
illumination facility 8208. The planar illumination facility 8208 may
further include a diffuser located on the side nearest the transfer
optics 8212, to expand the cone angle of the image light as it passes
through the planar illumination facility 8208 to the transfer optics
8212. The transfer optics 8212 may include a plurality of optical
elements, such as lenses, mirrors, beam splitters, and the like, or any
other optical transfer element known to the art.

[0160] FIG. 83 presents an embodiment of an optical system 8302 for the
eyepiece 8300, where a planar illumination facility 8310 and reflective
display 8308 mounted on substrate 8304 are shown interfacing through
transfer optics 8212 including an initial diverging lens 8312, a beam
splitter 8314, and a spherical mirror 8318, which present the image to
the eyebox 8320 where the wearer's eye receives the image. In an example,
the flat beam splitter 8314 may be a wire-grid polarizer, a metal
partially transmitting mirror coating, and the like, and the spherical
reflector 8318 may be a series of dielectric coatings to give a partial
mirror on the surface. In another embodiment, the coating on the
spherical mirror 8318 may be a thin metal coating to provide a partially
transmitting mirror.

[0161] In an embodiment of an optics system, FIG. 84 shows a planar
illumination facility 8408 as part of a ferroelectric light-wave circuit
(FLC) 8404, including a configuration that utilizes laser light sources
8402 coupling to the planar illumination facility 8408 through a
waveguide wavelength converter 8420 8422, where the planar illumination
facility 8408 utilizes a grating technology to present the incoming light
from the edge of the planar illumination facility to the planar surface
facing the reflective display 8410. The image light from the reflective
display 8410 is then redirected back though the planar illumination
facility 8408 though a hole 8412 in the supporting structure 8414 to the
transfer optics. Because this embodiment utilizes laser light, the FLC
also utilizes optical feedback to reduce speckle from the lasers, by
broadening the laser spectrum as described in U.S. Pat. No. 7,265,896. In
this embodiment, the laser source 8402 is an IR laser source, where the
FLC combines the beams to RGB, with back reflection that causes the laser
light to hop and produce a broadened bandwidth to provide the speckle
suppression. In this embodiment, the speckle suppression occurs in the
wave-guides 8420. The laser light from laser sources 8402 is coupled to
the planar illumination facility 8408 through a multi-mode interference
combiner (MMI) 8422. Each laser source port is positioned such that the
light traversing the MMI combiner superimposes on one output port to the
planar illumination facility 8408. The grating of the planar illumination
facility 8408 produces uniform illumination for the reflective display.
In embodiments, the grating elements may use a very fine pitch (e.g.
interferometric) to produce the illumination to the reflective display,
which is reflected back with very low scatter off the grating as the
light passes through the planar illumination facility to the transfer
optics. That is, light comes out aligned such that the grating is nearly
fully transparent. Note that the optical feedback utilized in this
embodiment is due to the use of laser light sources, and when LEDs are
utilized, speckle suppression may not be required because the LEDs are
already broadband enough.

[0162] An embodiment of an optics system utilizing a planar illumination
facility 8502 that includes a configuration with optical imperfections,
in this case a `grooved` configuration, is shown in FIG. 85. In this
embodiment, the light source(s) 8202 are coupled 8204
directly to the edge of the planar illumination facility 8502. Light then
travels through the planar illumination facility 8502 and encounters
small grooves 8504A-D in the planar illumination facility material, such
as grooves in a piece of Poly-methyl methacrylate (PMMA). In embodiments,
the grooves 8504A-D may vary in spacing as they progress away from the
input port (e.g. less `aggressive` as they progress from 8504A to 8504D),
vary in heights, vary in pitch, and the like. The light is then
redirected by the grooves 8504A-D to the reflective display 8210 as an
incoherent array of light sources, producing fans of rays traveling to
the reflective display 8210, where the reflective display 8210 is far
enough away from the grooves 8504A-D to produce illumination patterns
from each groove that overlap to provide uniform illumination of the area
of the reflective display 8210. In other embodiments, there may be an
optimum spacing for the grooves: the number of grooves per pixel on the
reflective display 8210 may be increased to make the light more
incoherent (more fill), but more grooves in turn interfere within the
provided image and produce lower contrast in the image provided to the
wearer. While this embodiment has been discussed with respect
to grooves, other optical imperfections, such as dots, are also possible.
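
The uniformity argument above can be sketched numerically: if each
groove is modeled as a small source whose illumination on the display
plane is a broad patch, then with sufficient distance between the
grooves and the display the patches overlap into a nearly uniform sum.
The groove positions, patch profile, and distance below are illustrative
assumptions, not values from the disclosure.

    import numpy as np

    x = np.linspace(0.0, 10.0, 2000)                   # position across the display (mm)
    groove_positions = np.array([1.0, 3.0, 5.5, 8.5])  # spacing varies along the guide
    display_distance = 3.0                             # grooves-to-display distance (mm)

    def illumination_patch(x, center, distance):
        # farther displays see wider, flatter patches from each groove
        width = 1.5 * distance
        return np.exp(-((x - center) / width) ** 2)

    total = sum(illumination_patch(x, c, display_distance) for c in groove_positions)
    print("min/max uniformity:", round(float(total.min() / total.max()), 2))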

[0163] In embodiments, and referring to FIG. 86, counter ridges 8604 (or
`anti-grooves`) may be applied into the grooves of the planar
illumination facility, such as in a `snap-on` ridge assembly 8602,
wherein the counter ridges 8604 are positioned in the grooves 8504A-D
such that there is an air gap between the groove sidewalls and the
counter ridge sidewalls. This air gap provides a defined change in
refractive index as perceived by the light as it travels through the
planar illumination facility that promotes a reflection of the light at
the groove sidewall. The application of counter ridges 8604 reduces
aberrations and deflections of the image light caused by the grooves.
That is, image light reflected from reflective display 8210 is refracted
by the groove sidewall and as such it changes direction because of
Snell's law. By providing counter ridges in the grooves, where the
sidewall angle of the groove matches the sidewall angle of the counter
ridge, the refraction of the image light is compensated for and the image
light is redirected toward the transfer optics 8212.
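
A small Snell's law check of this compensation, assuming PMMA with a
refractive index of about 1.49 and an arbitrary 30 degree incidence at
the groove sidewall: because the counter ridge sidewall is parallel to
the groove sidewall, the ray refracts at the PMMA-to-air wall and back
at the air-to-PMMA wall, leaving its direction unchanged (only laterally
shifted), so the image light is not deflected.

    import math

    n_pmma, n_air = 1.49, 1.00
    theta_in = 30.0                                  # degrees, inside the PMMA
    theta_air = math.degrees(math.asin(n_pmma / n_air * math.sin(math.radians(theta_in))))
    theta_out = math.degrees(math.asin(n_air / n_pmma * math.sin(math.radians(theta_air))))
    print(round(theta_air, 1), round(theta_out, 1))  # 48.2 in the gap, 30.0 back in PMMA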

[0164] In embodiments, and referring to FIG. 87, the planar illumination
facility 8702 may be a laminate structure created out of a plurality of
laminating layers 8704 wherein the laminating layers 8704 have
alternating different refractive indices. For instance, the planar
illumination facility 8702 may be cut across two diagonal planes 8708 of
the laminated sheet. In this way, the grooved structure shown in FIGS. 85
and 86 is replaced with the laminate structure 8702. For example, the
laminating sheet may be made of similar materials (PMMA 1 versus PMMA
2--where the difference is in the molecular weight of the PMMA). As long
as the layers are fairly thick, there may be no interference effects, and
the laminate acts as a clear sheet of plastic. In the configuration shown, the diagonal
laminations will redirect a small percentage of light source 8202 to the
reflective display, where the pitch of the lamination is selected to
minimize aberration.

[0165] In an embodiment of an optics system, FIG. 88 shows a planar
illumination facility 8802 utilizing a `wedge` configuration. In this
embodiment, the light source(s) are coupled 8204 directly to the edge of
the planar illumination facility 8802. Light then travels through the
planar illumination facility 8802 and encounters the slanted surface of
the first wedge 8804, where the light is redirected to the reflective
display 8210, and then back to the illumination facility 8802 and through
both the first wedge 8804 and the second wedge 8812 and on to the
transfer optics. In addition, multi-layer coatings 8808 8810 may be
applied to the wedges to improve transfer properties. In an example, the
wedge may be made from PMMA, with dimensions of 1/2 mm high-10 mm width,
and spanning the entire reflective display, have 1 to 1.5 degrees angle,
and the like. In embodiments, the light may go through multiple
reflections within the wedge 8804 before passing through the wedge 8804
to illuminate the reflective display 8210. If the wedge 8804 is coated
with a highly reflecting coating 8808 and 8810, the ray may make many
reflections inside wedge 8804 before turning around and coming back out
to the light source 8202 again. However, by employing multi-layer
coatings 8808 and 8810 on the wedge 8804, such as with SiO2, Niobium
Pentoxide, and the like, light may be directed to illuminate the
reflective display 8210. The coatings 8808 and 8810 may be designed to
reflect light at a specified wavelength over a wide range of angles, but
transmit light within a certain range of angles (e.g. theta out angles).
In embodiments, the design may allow the light to reflect within the
wedge until it reaches a transmission window for presentation to the
reflective display 8210, where the coating is then configured to enable
transmission. The angle of the wedge directs light from an LED lighting
system to uniformly irradiate a reflective image display to produce an
image that is reflected through the illumination system. By providing
light from the light source 8202 such that a wide cone angle of light
enters the wedge 8804, different rays of light will reach transmission
windows at different locations along the length of the wedge 8804 so that
uniform illumination of the surface of the reflective display 8210 is
provided and as a result, the image provided to the wearer's eye has
uniform brightness as determined by the image content in the image.
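
The ray walk through the wedge can be sketched geometrically (the launch
angle is an assumption, and the coating's transmission window is
approximated here by the TIR critical angle for PMMA): each reflection
off the tilted surface steepens the ray by twice the wedge angle, so
after a number of bounces the ray falls inside the transmission window
and exits toward the reflective display.

    import math

    n_pmma = 1.49
    critical = math.degrees(math.asin(1.0 / n_pmma))  # about 42.2 degrees
    wedge_angle = 1.5                                 # degrees, per the example above
    theta = 80.0                                      # assumed launch angle from the normal

    bounces = 0
    while theta > critical:
        theta -= 2 * wedge_angle                      # each bounce steepens the ray
        bounces += 1
    print("exits after", bounces, "bounces at about", round(theta, 1), "degrees")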

[0166] In embodiments, the see-through optics system including a planar
illumination facility 8208 and reflective display 8210 as described
herein may be applied to any head-worn device known to the art, such as
including the eyepiece as described herein, but also to helmets (e.g.
military helmets, pilot helmets, bike helmets, motorcycle helmets, deep
sea helmets, space helmets, and the like), ski goggles, eyewear, water
diving masks, dust masks, respirators, Hazmat head gear, virtual reality
headgear, simulation devices, and the like. In addition, the optics
system and protective covering associated with the head-worn device may
incorporate the optics system in a plurality of ways, including inserting
the optics system into the head-worn device in addition to optics and
covering traditionally associated with the head-worn device. For
instance, the optics system may be included in a ski goggle as a separate
unit, providing the user with projected content, but where the optics
system doesn't replace any component of the ski goggle, such as the
see-through covering of the ski goggle (e.g. the clear or colored plastic
covering that is exposed to the outside environment, keeping the wind and
snow from the user's eyes). Alternatively, the optics system may replace,
at least in part, certain optics traditionally associated with the
head-worn gear. For instance, certain optical elements of the transfer
optics 8212 may replace the outer lens of an eyewear application. In an
example, a beam splitter, lens, or mirror of the transfer optics 8212
could replace the front lens for an eyewear application (e.g.
sunglasses), thus eliminating the need for the front lens of the glasses,
such as if the curved reflection mirror 8220 is extended to cover the
glasses, eliminating the need for the cover lens. In embodiments, the
see-through optics system including a planar illumination facility 8208
and reflective display 8210 may be located in the head-worn gear so as to
be unobtrusive to the function and aesthetic of the head-worn gear. For
example, in the case of eyewear, or more specifically the eyepiece, the
optics system may be located in proximity with an upper portion of the
lens, such as in the upper portion of the frame.

[0167] A planar illumination facility, also known as an illumination
module, may provide light in a plurality of colors including
Red-Green-Blue (RGB) light and/or white light. The light from the
illumination module may be directed to a 3LCD system, a Digital Light
Processing (DLP®) system, a Liquid Crystal on Silicon (LCoS) system,
or other micro-display or micro-projection systems. The illumination
module may use wavelength combining and nonlinear frequency conversion
with nonlinear feedback to the source to provide a source of
high-brightness, long-life, speckle-reduced or speckle-free light. The
illumination modules described herein may be used in the optical
assembly for the eyepiece 100.

[0168] One embodiment of the invention includes a system comprising a
laser, LED or other light source configured to produce an optical beam at
a first wavelength, a planar lightwave circuit coupled to the laser and
configured to guide the optical beam, and a waveguide optical frequency
converter coupled to the planar lightwave circuit, and configured to
receive the optical beam at the first wavelength and convert the optical
beam at the first wavelength into an output optical beam at a second
wavelength. The system may provide optically coupled feedback which is
nonlinearly dependent on the power of the optical beam at the first
wavelength to the laser.

[0169] Another embodiment of the invention includes a system comprising a
substrate, a light source, such as a laser diode array or one or more
LEDs disposed on the substrate and configured to emit a plurality of
optical beams at a first wavelength, a planar lightwave circuit disposed
on the substrate and coupled to the light source, and configured to
combine the plurality of optical beams and produce a combined optical
beam at the first wavelength, and a nonlinear optical element disposed on
the substrate and coupled to the planar lightwave circuit, and configured
to convert the combined optical beam at the first wavelength into an
optical beam at a second wavelength using nonlinear frequency conversion.
The system may provide optically coupled feedback which is nonlinearly
dependent on a power of the combined optical beam at the first wavelength
to the laser diode array.

[0170] Another embodiment of the invention includes a system comprising a
light source, such as a semiconductor laser array or one or more LEDs
configured to produce a plurality of optical beams at a first wavelength,
an arrayed waveguide grating coupled to the light source and configured
to combine the plurality of optical beams and output a combined optical
beam at the first wavelength, and a quasi-phase matching
wavelength-converting waveguide coupled to the arrayed waveguide grating
and configured to use second harmonic generation to produce an output
optical beam at a second wavelength based on the combined optical beam at
the first wavelength.
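
As a one-line worked example of the second harmonic generation step
named above: frequency doubling halves the pump wavelength, so a near
infrared source such as the approximately 830 nm array mentioned later
in this disclosure would convert to an approximately 415 nm second
harmonic, with subsequent parametric stages able to reach other visible
wavelengths.

    def second_harmonic_nm(pump_nm):
        # SHG doubles the optical frequency, halving the wavelength
        return pump_nm / 2.0

    print(second_harmonic_nm(830.0))  # 415.0 nm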

[0171] Power may be obtained from within a wavelength conversion device
and fed back to the source. The feedback power has a nonlinear dependence
on the input power provided by the source to the wavelength conversion
device. Nonlinear feedback may reduce the sensitivity of the output power
from the wavelength conversion device to variations in the nonlinear
coefficients of the device because the feedback power increases if a
nonlinear coefficient decreases. The increased feedback tends to increase
the power supplied to the wavelength conversion device, thus mitigating
the effect of the reduced nonlinear coefficient.

[0172] Referring to FIGS. 109A and 109B, a processor 10902 (e.g. a digital
signal processor) may provide display sequential frames 10924 for image
display through a display component 10928 (e.g. an LCOS display
component) of the eyepiece 100. In embodiments, the sequential frames
10924 may be produced with or without a display driver 10912 as an
intermediate component between the processor 10902 and the display
component 10928. For example, and referring to FIG. 109A, the processor
10902 may include a frame buffer 10904 and a display interface 10908
(e.g. a mobile industry processor interface (MIPI), with a display serial
interface (DSI)). The display interface 10908 may provide per-pixel RGB
data 10910 to the display driver 10912 as an intermediate component
between the processor 10902 and the display component 10928, where the
display driver 10912 accepts the per-pixel RGB data 10910 and generates
individual full frame display data for red 10918, green 10920, and blue
10922, thus providing the display sequential frames 10924 to the display
component 10928. In addition, the display driver 10912 may provide timing
signals, such as to synchronize the delivery of the full frames 10918
10920 10922 as display sequential frames 10924 to the display component
10928. In another example, and referring to FIG. 109B, the display
interface 10930 may be configured to eliminate the display driver 10912
by providing full frame display data for red 10934, green 10938, and blue
10940 directly to the display component 10928 as display sequential
frames 10924. In addition, timing signals 10932 may be provided directly
from the display interface 10930 to the display components. This
configuration may provide significantly lower power consumption by
removing the need for a display driver. Not only may this direct panel
information remove the need for a driver, but it may also simplify the
overall logic of the configuration and remove the redundant memory
required to re-form panel information from pixels, to generate pixel
information from frames, and the like.
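
A minimal sketch (with a hypothetical buffer layout and resolution) of
the regrouping step that the display driver, or in FIG. 109B the display
interface itself, performs: the interleaved per-pixel RGB data is
reorganized into three full frames, one per color, which are then sent
to the panel as display sequential frames.

    import numpy as np

    def per_pixel_rgb_to_sequential(rgb):
        """Regroup an (H, W, 3) per-pixel RGB buffer into [red, green, blue] frames."""
        return [np.ascontiguousarray(rgb[:, :, i]) for i in range(3)]

    rgb_buffer = (np.random.rand(720, 1280, 3) * 255).astype(np.uint8)  # assumed size
    red_frame, green_frame, blue_frame = per_pixel_rgb_to_sequential(rgb_buffer)
    for name, frame in (("red", red_frame), ("green", green_frame), ("blue", blue_frame)):
        print(name, frame.shape)  # each full frame goes to the panel in sequence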

[0174] Suitable optical sources 8902 and 8904 include one or more LEDs or
any source of optical radiation having an emission wavelength that is
influenced by optical feedback. Examples of sources include lasers,
which may be semiconductor diode lasers. For example, optical sources 8902 and
8904 may be elements of an array of semiconductor lasers. Sources other
than lasers may also be employed (e.g., an optical frequency converter
may be used as a source). Although two sources are shown on FIG. 89, the
invention may also be practiced with more than two sources. Combiner 8906
is shown in general terms as a three port device having ports 8922, 8924,
and 8926. Although ports 8922 and 8924 are referred to as input ports,
and port 8926 is referred to as a combiner output port, these ports may
be bidirectional and may both receive and emit optical radiation as
indicated above.

[0175] Combiner 8906 may include a wavelength dispersive element and
optical elements to define the ports. Suitable wavelength dispersive
elements include arrayed waveguide gratings, reflective diffraction
gratings, transmissive diffraction gratings, holographic optical
elements, assemblies of wavelength-selective filters, and photonic
band-gap structures. Thus, combiner 8906 may be a wavelength combiner,
where each of the input ports has a corresponding, non-overlapping
input port wavelength range for efficient coupling to the combiner output
port.

[0177] In general, optical frequency converter 8908 accepts optical inputs
at an input set of optical wavelengths and provides an optical output at
an output set of optical wavelengths, where the output set differs from
the input set.

[0179] In cases where optical frequency converter 8908 provides a
parametric nonlinear optical process, this nonlinear optical process is
preferably phase-matched. Such phase-matching may be birefringent
phase-matching or quasi-phase-matching. Quasi-phase matching may include
methods disclosed in U.S. Pat. No. 7,116,468 to Miller, the disclosure of
which is hereby incorporated by reference.

[0180] Optical frequency converter 8908 may also include various elements
to improve its operation, such as a wavelength selective reflector for
wavelength selective output coupling, a wavelength selective reflector
for wavelength selective resonance, and/or a wavelength selective loss
element for controlling the spectral response of the converter.

[0181] In embodiments, multiple illumination modules as described in FIG.
89 may be associated to form a compound illumination module.

[0182] FIG. 90 is a block diagram of an optical frequency converter,
according to an embodiment of the invention. FIG. 90 illustrates how
feedback radiation 8920 is provided by an exemplary optical frequency
converter 8908 which provides parametric frequency conversion. Combined
radiation 8918 provides forward radiation 9002 within optical frequency
converter 8908 that propagates to the right on FIG. 90, and parametric
radiation 9004, also propagating to the right on FIG. 90, is generated
within optical frequency converter 8908 and emitted from optical
frequency converter 8908 as output optical radiation 8928. Typically
there is a net power transfer from forward radiation 9002 to parametric
radiation 9004 as the interaction proceeds (i.e., as the radiation
propagates to the right in this example). A reflector 9008, which may
have wavelength-dependent transmittance, is disposed in optical frequency
converter 8908 to reflect (or partially reflect) forward radiation 9002
to provide backward radiation 9006 or may be disposed externally to
optical frequency converter 8908 after endface 9010. Reflector 9008 may
be a grating, an internal interface, a coated or uncoated endface, or any
combination thereof. The preferred level of reflectivity for reflector
9008 is greater than 90%. A reflector located at an input interface 9012
provides purely linear feedback (i.e., feedback that does not depend on
the process efficiency). A reflector located at an endface 9010 provides
a maximum degree of nonlinear feedback, since the dependence of forward
power on process efficiency is maximized at the output interface
(assuming a phase-matched parametric interaction).

[0183] FIG. 91 is a block diagram of a laser illumination module,
according to an embodiment of the invention. While lasers are used in
this embodiment, it is understood that other light sources, such as LEDs,
may also be used. Laser illumination module 9100 comprises an array of
diode lasers 9102, waveguides 9104 and 9106, star couplers 9108 and 9110
and optical frequency converter 9114. An array of diode lasers 9102 has
lasing elements coupled to waveguides 9104 acting as input ports (such as
ports 8922 and 8924 on FIG. 89) to a planar waveguide star coupler 9108.
Star coupler 9108 is coupled to another planar waveguide star coupler
9110 by waveguides 9106 which have different lengths. The combination of
star couplers 9108 and 9110 with waveguides 9106 may be an arrayed
waveguide grating, and acts as a wavelength combiner (e.g., combiner 8906
on FIG. 89) providing combined radiation 8918 to waveguide 9112.
Waveguide 9112 provides combined radiation 8918 to optical frequency
converter 9114. Within optical frequency converter 9114, an optional
reflector 9116 provides a back reflection of combined radiation 8918. As
indicated above in connection with FIG. 90, this back reflection provides
nonlinear feedback according to embodiments of the invention. One or more
of the elements described with reference to FIG. 91 may be fabricated on
a common substrate using planar coating methods and/or lithography
methods to reduce cost, parts count and alignment requirements.

[0184] A second waveguide may be disposed such that its core is in close
proximity with the core of the waveguide in optical frequency converter
8908. As is known in the art, this arrangement of waveguides functions as
a directional coupler, such that radiation in the second waveguide may
provide additional radiation in optical frequency converter 8908.
Significant coupling may be avoided by providing radiation at wavelengths
other than the wavelengths of forward radiation 9002, or additional
radiation may be coupled into optical frequency converter 8908 at a
location where forward radiation 9002 is depleted.

[0185] While standing wave feedback configurations where the feedback
power propagates backward along the same path followed by the input power
are useful, traveling wave feedback configurations may also be used. In a
traveling wave feedback configuration, the feedback re-enters the gain
medium at a location different from the location from which the input
power is emitted.

[0186] FIG. 92 is a block diagram of a compound laser illumination module,
according to another embodiment of the invention. Compound laser
illumination module 9200 comprises one or more laser illumination modules
9100 described with reference to FIG. 91. Although FIG. 92 illustrates
compound laser illumination module 9200 including three laser
illumination modules 9100 for simplicity, compound laser illumination
module 9200 may include more or fewer laser illumination modules 9100. An
array of diode lasers 9210 may include one or more arrays of diode lasers
9102 which may be an array of laser diodes, a diode laser array, and/or a
semiconductor laser array configured to emit optical radiation within the
infrared spectrum, i.e., with a wavelength shorter than radio waves and
longer than visible light.

[0187] Laser array output waveguides 9220 couple to the diode lasers in
the array of diode lasers 9210 and direct the outputs of the array of
diode lasers 9210 to star couplers 9108A-C. The laser array output
waveguides 9220, the arrayed waveguide gratings 9230, and the optical
frequency converters 9114A-C may be fabricated on a single substrate
using a planar lightwave circuit, and may comprise silicon oxynitride
waveguides and/or lithium tantalate waveguides.

[0190] Compound laser illumination module 9200 may produce output optical
radiation at a plurality of wavelengths. The plurality of wavelengths may
be within a visible spectrum, i.e., with a wavelength shorter than
infrared and longer than ultraviolet light. For example, waveguide 9240A
may similarly provide output optical radiation between about 450 nm and
about 470 nm, waveguide 9240B may provide output optical radiation
between about 525 nm and about 545 nm, and waveguide 9240C may provide
output optical radiation between about 615 nm and about 660 nm. These
ranges of output optical radiation may again be selected to provide
visible wavelengths (for example, blue, green and red wavelengths,
respectively) that are pleasing to a human viewer, and may again be
combined to produce a white light output.

[0191] The waveguides 9240A-C may be fabricated on the same planar
lightwave circuit as the laser array output waveguides 9220, the arrayed
waveguide gratings 9230, and the optical frequency converters 9114A-C. In
some embodiments, the output optical radiation provided by each of the
waveguides 9240A-C may provide an optical power in a range between
approximately 1 watt and approximately 20 watts.

[0192] The optical frequency converter 9114 may comprise a quasi-phase
matching wavelength-converting waveguide configured to perform second
harmonic generation (SHG) on the combined radiation at a first
wavelength, and generate radiation at a second wavelength. A quasi-phase
matching wavelength-converting waveguide may be configured to use the
radiation at the second wavelength to pump an optical parametric
oscillator integrated into the quasi-phase matching wavelength-converting
waveguide to produce radiation at a third wavelength, the third
wavelength optionally different from the second wavelength. The
quasi-phase matching wavelength-converting waveguide may also produce
feedback radiation propagated via waveguide 9112 through the arrayed
waveguide grating 9230 to the array of diode lasers 9210, thereby
enabling each laser disposed within the array of diode lasers 9210 to
operate at a distinct wavelength determined by a corresponding port on
the arrayed waveguide grating.

[0193] For example, compound laser illumination module 9200 may be
configured using an array of diode lasers 9210 nominally operating at a
wavelength of approximately 830 nm to generate output optical radiation
in a visible spectrum corresponding to any of the colors red, green, or
blue.

[0194] Compound laser illumination module 9200 may be optionally
configured to directly illuminate spatial light modulators without
intervening optics. In some embodiments, compound laser illumination
module 9200 may be configured using an array of diode lasers 9210
nominally operating at a single first wavelength to simultaneously
produce output optical radiation at multiple second wavelengths, such as
wavelengths corresponding to the colors red, green, and blue. Each
different second wavelength may be produced by an instance of laser
illumination module 9100.

[0195] The compound laser illumination module 9200 may be configured to
produce diffraction-limited white light by combining output optical
radiation at multiple second wavelengths into a single waveguide using,
for example, waveguide-selective taps (not shown).

[0196] The array of diode lasers 9210, laser array output waveguides 9220,
arrayed waveguide gratings 9230, waveguides 9112, optical frequency
converters 9114, and frequency converter output waveguides 9240 may be
fabricated on a common substrate using fabrication processes such as
coating and lithography. The beam shaping element 9250 is coupled to the
compound laser illumination module 9200 by waveguides 9240A-C, described
with reference to FIG. 92.

[0197] Beam shaping element 9250 may be disposed on a same substrate as
the compound laser illumination module 9200. The substrate may, for
example, comprise a thermally conductive material, a semiconductor
material, or a ceramic material. The substrate may comprise
copper-tungsten, silicon, gallium arsenide, lithium tantalate, silicon
oxynitride, and/or gallium nitride, and may be processed using
semiconductor manufacturing processes including coating, lithography,
etching, deposition, and implantation.

[0198] Some of the described elements, such as the array of diode lasers
9210, laser array output waveguides 9220, arrayed waveguide gratings
9230, waveguides 9112, optical frequency converters 9114, waveguides
9240, beam shaping element 9250, and various related planar lightwave
circuits may be passively coupled and/or aligned, and in some
embodiments, passively aligned by height on a common substrate. Each of
the waveguides 9240A-C may couple to a different instance of beam shaping
element 9250, rather than to a single element as shown.

[0199] Beam shaping element 9250 may be configured to shape the output
optical radiation from waveguides 9240A-C into an approximately
rectangular diffraction-limited optical beam, and may further configure
the output optical radiation from waveguides 9240A-C to have a brightness
uniformity greater than approximately 95% across the approximately
rectangular beam shape.
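
One common way to quantify such a uniformity figure (an assumed metric,
since the disclosure does not define one here) is the ratio of minimum
to maximum intensity across the rectangular beam cross-section:

    import numpy as np

    def brightness_uniformity(beam):
        """Return the min/max intensity ratio (0..1) over the beam cross-section."""
        return float(beam.min() / beam.max())

    beam = np.full((100, 160), 1.0)               # idealized flat rectangular profile
    beam += 0.02 * np.random.rand(*beam.shape)    # small illustrative ripple
    print(round(brightness_uniformity(beam), 3))  # close to 1.0, i.e. > 95% uniform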

[0200] The beam shaping element 9250 may comprise an aspheric lens, such
as a "top-hat" microlens, a holographic element, or an optical grating.
In some embodiments, the diffraction-limited optical beam output by the
beam shaping element 9250 produces substantially reduced or no speckle.
The optical beam output by the beam shaping element 9250 may provide an
optical power in a range between approximately 1 watt and approximately
20 watts, and a substantially flat phase front.

[0201] FIG. 93 is a block diagram of an imaging system, according to an
embodiment of the invention. Imaging system 9300 comprises light engine
9310, optical beams 9320, spatial light modulator 9330, modulated optical
beams 9340, and projection lens 9350. The light engine 9310 may be a
compound optical illumination module, such as multiple illumination
modules described in FIG. 89, a compound laser illumination module 9200,
described with reference to FIG. 92, or a laser illumination system 9300,
described with reference to FIG. 93. Spatial light modulator 9330 may be
a 3LCD system, a DLP system, an LCoS system, a transmissive liquid crystal
display (e.g. transmissive LCoS), a liquid-crystal-on-silicon array, a
grating-based light valve, or other micro-display or micro-projection
system or reflective display.

[0202] The spatial light modulator 9330 may be configured to spatially
modulate the optical beam 9320. The spatial light modulator 9330 may be
coupled to electronic circuitry configured to cause the spatial light
modulator 9330 to modulate a video image, such as may be displayed by a
television or a computer monitor, onto the optical beam 9320 to produce a
modulated optical beam 9340. In some embodiments, modulated optical beam
9340 may be output from the spatial light modulator on a same side as the
spatial light modulator receives the optical beam 9320, using optical
principles of reflection. In other embodiments, modulated optical beam
9340 may be output from the spatial light modulator on an opposite side
as the spatial light modulator receives the optical beam 9320, using
optical principles of transmission. The modulated optical beam 9340 may
optionally be coupled into a projection lens 9350. The projection lens
9350 is typically configured to project the modulated optical beam 9340
onto a display, such as a video display screen.

[0203] A method of illuminating a video display may be performed using a
compound illumination module such as one comprising multiple illumination
modules 8900, a laser illumination module 9100, a compound laser
illumination module 9200, or an imaging system 9300. A
diffraction-limited output optical beam is generated using a compound
illumination module, laser illumination module 9100, compound laser
illumination module 9200, or light engine 9310. The output optical beam
is directed using a spatial light modulator, such as spatial light
modulator 9330, and optionally projection lens 9350. The spatial light
modulator may project an image onto a display, such as a video display
screen.

[0204] The illumination module may be configured to emit any number of
wavelengths including one, two, three, four, five, six, or more, the
wavelengths spaced apart by varying amounts, and having equal or unequal
power levels. An illumination module may be configured to emit a single
wavelength per optical beam, or multiple wavelengths per optical beam. An
illumination module may also comprise additional components and
functionality including polarization controller, polarization rotator,
power supply, power circuitry such as power FETs, electronic control
circuitry, thermal management system, heat pipe, and safety interlock. In
some embodiments, an illumination module may be coupled to an optical
fiber or a lightguide, such as glass (e.g. BK7).

[0205] Some options for an LCoS front light design include: 1) a wedge
with a MultiLayer Coating (MLC), a concept that uses the MLC to define
specific reflected and transmitted angles; 2) a wedge with a polarizing
beamsplitter coating, which works like a regular PBS cube but at a much
shallower angle, and which can be a PBS coating or a wire grid film; 3)
PBS prism bars, which are similar to Option #2 but have a seam down the
center of the panel; and 4) a Wire Grid Polarizer plate beamsplitter,
similar to the PBS wedge but just a plate, so that it is mostly air
instead of solid glass.

[0206] FIG. 95 depicts an embodiment of an LCoS front light design. In this embodiment, light from an RGB LED 9508 illuminates a front light 9504, which can be a wedge, a PBS, and the like. The light strikes a polarizer 9510 and is transmitted in its S state to an LCoS 9502, where it is reflected as image light in its P state back through an asphere 9512. An inline polarizer 9514 may polarize the image light again and/or impart a 1/2 wave rotation to the S state. The image light then strikes a wire grid polarizer 9520 and reflects to a curved (spherical) partial mirror 9524, passing through a 1/2 wave retarder 9522 on its way. The image light reflects from the mirror to the user's eye 9518, once more traversing the 1/2 wave retarder 9522 and the wire grid polarizer 9520. Various examples of the front light 9504 will now be described.

[0207] In embodiments, the optical assembly includes a partially
reflective, partially transmitting optical element that reflects
respective portions of image light from the image source and transmits
scene light from a see-through view of the surrounding environment, so
that a combined image comprised of portions of the reflected image light
and the transmitted scene light is provided to a user's eye.

[0208] FIG. 96 depicts an embodiment of a front light 9504 comprising
optically bonded prisms with a polarizer. The prisms appear as two
rectangular solids with a substantially transparent interface 9602
between the two. Each rectangular solid is diagonally bisected and a polarizing
coating 9604 is disposed along the interface of the bisection. The lower
triangle formed by the bisected portion of the rectangular solid may
optionally be made as a single piece 9608. The prisms may be made from
BK-7 or the equivalent. In this embodiment, the rectangular solids have
square ends that measure 2 mm by 2 mm. The length of the solids in this
embodiment is 10 mm. In an alternate embodiment, the bisection comprises a 50% mirror surface 9704, and the interface between the two rectangular solids comprises a polarizer 9702 that may pass light in the P state.

[0209] FIG. 98 depicts three versions of an LCoS front light design. FIG.
98A depicts a wedge with MultiLayer Coating (MLC). This concept uses MLC
to define specific reflected and transmitted angles. In this embodiment,
image light of either P or S polarization state is observed by the user's
eye. FIG. 98B depicts a PBS with a polarizer coating. Here, only
S-polarized image light is transmitted to the user's eye. FIG. 98C depicts a right angle prism, which eliminates much of the material of the prism and enables the image light to be transmitted through air as S-polarized light.

[0211] FIG. 100 depicts two embodiments of prisms with light entering the
short end (A) and light entering along the long end (B). In FIG. 100A, a
wedge is formed by offset bisecting a rectangular solid to form at least
one 8.6 degree angle at the bisect interface. In this embodiment, the
offset bisection results in a segment that is 0.5 mm high and another
that is 1.5 mm on the side through which the RGB LEDs 10002 are
transmitting light. Along the bisection, a polarizing coating 10004 is
disposed. In FIG. 100B, a wedge is formed by offset bisecting a
rectangular solid to form at least one 14.3 degree angle at the bisect
interface. In this embodiment, the offset bisection results in a segment
that is 0.5 mm high and another that is 1.5 mm on the side through which
the RGB LEDs 10008 are transmitting light. Along the bisection, a
polarizing coating 10010 is disposed.

[0212] FIG. 101 depicts a curved PBS film 10104 illuminated by an RGB LED
10102 disposed over an LCoS chip 10108. The PBS film 10104 reflects the RGB light from the LED array 10102 onto the surface of the LCoS chip 10108, but lets the light reflected from the imaging chip pass through unobstructed to the optical assembly and eventually to the user's eye. Films used in this system include Asahi Film, which has a Tri-Acetate Cellulose (TAC), or cellulose acetate, substrate. In embodiments, the film may have UV embossed corrugations at 100 nm and a calendered coating built up on ridges that can be angled for the incidence angle of light. The Asahi film may come in rolls that are 20 cm wide by 30 m long and has BEF (brightness enhancement film) properties when used in LCD illumination. The Asahi film may support wavelengths from visible through IR and may be stable up to 100° C.

[0213] In another embodiment, FIGS. 21 and 22 depict an alternate
arrangement of the waveguide and projector in exploded view. In this
arrangement, the projector is placed just behind the hinge of the arm of
the eyepiece and it is vertically oriented such that the initial travel
of the RGB LED signals is vertical until the direction is changed by a
reflecting prism in order to enter the waveguide lens. The vertically
arranged projection engine may have a PBS 218 at the center, the RGB LED
array at the bottom, a hollow, tapered tunnel with thin film diffuser to
mix the colors for collection in an optic, and a condenser lens. The PBS
may have a pre-polarizer on an entrance face. The pre-polarizer may be
aligned to transmit light of a certain polarization, such as p-polarized
light and reflect (or absorb) light of the opposite polarization, such as
s-polarized light. The polarized light may then pass through the PBS to
the field lens 216. The purpose of the field lens 216 may be to create
near telecentric illumination of the LCoS panel. The LCoS display may be
truly reflective, reflecting colors sequentially with correct timing so
the image is displayed properly. Light may reflect from the LCoS panel
and, for bright areas of the image, may be rotated to s-polarization. The
light then may refract through the field lens 216 and may be reflected at
the internal interface of the PBS and exit the projector, heading toward
the coupling lens. The hollow, tapered tunnel 220 may replace the
homogenizing lenslet from other embodiments. By vertically orienting the
projector and placing the PBS in the center, space is saved and the
projector is able to be placed in a hinge space with little moment arm
hanging from the waveguide.

[0214] Light reflected or scattered from the image source or associated
optics of the eyepiece may pass outward into the environment. These light
losses are perceived by external viewers as "eyeglow" or "night glow"
where portions of the lenses or the areas surrounding the eyepiece appear
to be glowing when viewed in a dimly lit environment. In certain cases of
eyeglow as shown in FIG. 22A, the displayed image can be seen as an
observable image 2202A in the display areas when viewed externally by
external viewers. To maintain privacy of the viewing experience for the
user both in terms of maintaining privacy of the images being viewed and
in terms of making the user less noticeable when using the eyepiece in a
dimly lit environment, it is preferable to reduce eyeglow. Methods and
apparatus may reduce eyeglow through a light control element, such as
with a partially reflective mirror in the optics associated with the
image source, with polarizing optics, and the like. For instance, light
entering the waveguide may be polarized, such as s-polarized. The light control element may include a linear polarizer, wherein the linear polarizer in the light control element is oriented relative to the linearly polarized image light so that the second portion of the linearly polarized image light that passes through the partially reflecting mirror is blocked and eyeglow is reduced. In embodiments, eyeglow may be
minimized or eliminated by attaching lenses to the waveguide or frame,
such as the snap-fit optics described herein, that are oppositely
polarized from the light reflecting from the user's eye, such as
p-polarized in this case.

[0215] In embodiments, the light control element may include a second quarter wave film and a linear polarizer, wherein the second quarter wave film converts a second portion of a circularly polarized image light into linearly polarized image light with a polarization state that is blocked by the linear polarizer in the light control element so that eyeglow is reduced. For example, when the light control element includes a linear
polarizer and a quarter wave film, incoming unpolarized scene light from
the external environment in front of the user is converted to linearly
polarized light while 50% of the light is blocked. The first portion of
scene light that passes through the linear polarizer is linearly
polarized light which is converted by the quarter wave film to circularly
polarized light. The third portion of scene light that is reflected from
the partially reflecting mirror has reversed circular polarization which
is then converted to linearly polarized light by the second quarter wave
film. The linear polarizer then blocks the reflected third portion of the
scene light thereby reducing escaping light and reducing eyeglow. FIG.
22B shows an example of a see-through display assembly with a light
control element in a glasses frame. The glasses cross-section 2200B shows the components of the see-through display assembly in a glasses frame 2202B, wherein the light control element covers the entire see-through view seen by the user. Supporting members 2204B and 2208B are shown supporting
the partially reflecting mirror 2210B and the beam splitter layer 2212B
respectively in the field of view of the user's eye 2214B. The supporting
members 2204B and 2208B along with the light control element 2218B are
connected to the glasses frame 2202B. The other components such as the
folding mirror 2220B and the first quarter wave film 2222B are also
connected to the supporting members 2204B and 2208B so that the combined
assembly is structurally sound.

[0216] In an embodiment, an absorptive polarizer in the optical assembly
is used to reduce stray light. The absorptive polarizer may include an
anti-reflective coating. The absorptive polarizer may be disposed after a
focusing lens of the optical assembly to reduce light passing through an
optically flat film of the optical assembly. The light from the image
source may be polarized to increase contrast.

[0217] In an embodiment, an anti-reflective coating in the optical
assembly may be used to reduce stray light. The anti-reflective coating
may be disposed on a polarizer of the optical assembly or a retarding
film of the optical assembly. The retarding film may be a quarter wave
film or a half wave film. The anti-reflective coating may be disposed on
an outer surface of a partially reflecting mirror. The light from the
image source may be polarized to increase contrast.

[0218] Referring to FIG. 102, an image source 10228 directs image light to
a beam splitter layer of the optical assembly. FIG. 103 depicts a blow-up
of the image source 10228. In this particular embodiment, the image
source 10228 is shown containing a light source (LED Bar 10302) that
directs light through a diffuser 10304 and prepolarizer 10308 to a curved
wire grid polarizer 10310 where the light is reflected to an LCoS display
10312. Image light from the LCoS is then reflected back through the
curved wire grid polarizer 10310 and a half wave film 10312 to the beam
splitter layer of the optical assembly 10200.

[0219] Referring to FIG. 104, LEDs provide unpolarized light. The diffuser
spreads and homogenizes the light from the LEDs. The absorptive
prepolarizer converts the light to S polarization. The S polarized light is then reflected toward the LCoS by the curved wire grid polarizer. The LCoS reflects the S polarized light and converts it to P polarized light depending on local image content. The P polarized light passes through
the curved wire grid polarizer becoming P polarized image light. The half
wave film converts the P polarized image light to S polarized image
light.
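
The polarization chain described above can be checked with a simple Jones-calculus calculation. The sketch below is illustrative only and is not taken from the disclosure: it assumes idealized, lossless components and models an "on" LCoS pixel as a half wave retarder at the design wavelength.

```python
import numpy as np

# Jones vector convention for this sketch: index 0 = S polarization, index 1 = P polarization.
pol_S = np.array([[1.0, 0.0], [0.0, 0.0]])         # absorptive prepolarizer passing S
pol_P = np.array([[0.0, 0.0], [0.0, 1.0]])         # wire grid polarizer transmission (passes P, reflects S)
half_wave_45 = np.array([[0.0, 1.0], [1.0, 0.0]])  # half wave retarder at 45 degrees: swaps S and P

# 1) The prepolarizer converts the diffused LED light to S.
beam = pol_S @ np.array([1.0, 1.0]) / np.sqrt(2)   # crude stand-in for unpolarized light
# 2) The curved wire grid polarizer reflects the S component toward the LCoS (state unchanged here).
# 3) An "on" LCoS pixel is modeled as a half wave retarder: S -> P.
beam = half_wave_45 @ beam
# 4) The P polarized image light passes through the wire grid polarizer.
beam = pol_P @ beam
# 5) The half wave film converts the P polarized image light back to S.
beam = half_wave_45 @ beam

print("final Jones vector (S, P):", beam)   # energy ends up in the S component only
```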

[0220] Referring again to FIG. 102, the beam splitter layer 10204 may be a polarizing beam splitter, or the image source may provide polarized image light 10208 while the beam splitter layer 10204 is a polarizing beam splitter, so that the reflected image light 10208 is linearly polarized light; this embodiment and the associated polarization control are shown in FIG. 102. For the case where the image source provides linearly
polarized image light and the beam splitter layer 10204 is a polarizing
beam splitter, the polarization state of the image light is aligned to
the polarizing beam splitter so that the image light 10208 is reflected
by the polarizing beam splitter. FIG. 102 shows the reflected image light
as having S state polarization. In cases where the beam splitter layer
10204 is a polarizing beam splitter, a first quarter wave film 10210 is
provided between the beam splitter layer 10204 and the partially
reflecting mirror 10212. The first quarter wave film 10210 converts the
linearly polarized image light to circularly polarized image light (shown
as S being converted to CR in FIG. 102). The reflected first portion of
image light 10208 is then also circularly polarized where the circular
polarization state is reversed (shown as CL in FIG. 102) so that after
passing back through the quarter wave film, the polarization state of the
reflected first portion of image light 10208 is reversed (to P
polarization) compared to the polarization state of the image light 10208
provided by the image source (shown as S). As a result, the reflected
first portion of the image light 10208 passes through the polarizing beam
splitter without reflection losses. When the beam splitter layer 10204 is
a polarizing beam splitter and the see-through display assembly 10200
includes a first quarter wave film 10210, the light control element 10230
is a second quarter wave film and a linear polarizer 10220. In
embodiments, the light control element 10230 includes a controllable
darkening layer 10214. The second quarter wave film 10218
converts the second portion of the circularly polarized image light 10208
into linearly polarized image light 10208 (shown as CR being converted to
S) with a polarization state that is blocked by the linear polarizer
10220 in the light control element 10230 so that eyeglow is reduced.

[0221] When the light control element 10230 includes a linear polarizer
10220 and a quarter wave film 10218, incoming unpolarized scene light
10222 from the external environment in front of the user is converted to
linearly polarized light (shown as P polarization state in FIG. 102)
while 50% of the light is blocked. The first portion of scene light 10222
that passes through the linear polarizer 10220 is linearly polarized
light which is converted by the quarter wave film to circularly polarized
light (shown as P being converted to CL in FIG. 102). The third portion
of scene light that is reflected from the partially reflecting mirror
10212 has reversed circular polarization (shown as converting from CL to
CR in FIG. 102) which is then converted to linearly polarized light by
the second quarter wave film 10218 (shown as CR converting to S
polarization in FIG. 102). The linear polarizer 10220 then blocks the
reflected third portion of the scene light thereby reducing escaping
light and reducing eyeglow.
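
A similar Jones-calculus sketch, again purely illustrative and assuming ideal components, shows why the reflected third portion of scene light is blocked: the P polarized light becomes circular after the second quarter wave film, the partially reflecting mirror reverses the handedness, and the second pass through the quarter wave film yields S polarized light that the P-passing linear polarizer 10220 absorbs.

```python
import numpy as np

# Index 0 = S polarization, index 1 = P polarization (fixed lab frame).
P_state = np.array([0.0, 1.0])

# Quarter wave film with its fast axis at 45 degrees (global phase dropped).
qwp_45 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                         [1 - 1j, 1 + 1j]])

# Linear polarizer that passes P (the assumed orientation of linear polarizer 10220).
pol_P = np.array([[0.0, 0.0], [0.0, 1.0]])

beam = P_state          # scene light after the linear polarizer is P polarized
beam = qwp_45 @ beam    # first pass: P -> circular
# Reflection from the partially reflecting mirror; in this fixed-frame sketch the handedness
# reversal described in the text is captured by the second pass through the same film.
beam = qwp_45 @ beam    # second pass: circular -> linear, rotated 90 degrees (now S)
blocked = pol_P @ beam  # the linear polarizer blocks the returning S polarized light

print("returning beam (S, P):", np.round(beam, 3))
print("intensity leaking past polarizer:", float(np.abs(blocked @ blocked.conj())))  # ~0
```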

[0222] As shown in FIG. 102, the reflected first portion of image light
10208 and the transmitted second portion of scene light have the same
circular polarization state (shown as CL) so that they combine and are
converted by the first quarter wave film 10210 into linearly polarized
light (shown as P) which passes through the beam splitter when the beam
splitter layer 10204 is a polarizing beam splitter. The linearly
polarized combined light 10224 then provides a combined image to the
user's eye 10202 located at the back of the see-through display assembly
10200, where the combined image is comprised of overlaid portions of the
displayed image from the image source and the see-through view of the
external environment in front of the user.

[0223] The beamsplitter layer 10204 includes an optically flat film, such
as the Asahi TAC film discussed herein. The beamsplitter layer 10204 may
be disposed at an angle in front of a user's eye so that it reflects and
transmits respective portions of image light and transmits scene light
from a see-through view of the surrounding environment, so that a
combined image comprised of portions of the image light and the
transmitted scene light is provided to a user's eye. The optically flat
film may be a wire grid polarizer. The optically flat film may be
laminated to a transparent substrate. The optically flat film may be
molded into a surface of the eyepiece. The optically flat film may be positioned at less than 40 degrees from vertical.

[0224] In an embodiment, the components in FIG. 102 collectively form an
electro-optic module. The angle of the optical axis associated with the
display may be 10 degrees or more forward of vertical. This degree of
tilt refers to how the upper part of the optics module leans forward.
This allows the beamsplitter angle to be reduced which makes the optics
module thinner.
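
The thickness benefit of tilting the module can be pictured with simple geometry. The sketch below is not taken from the disclosure; it merely assumes a flat beamsplitter plate that must span a given vertical aperture and computes the horizontal depth that the plate alone requires as a function of its angle from vertical.

```python
import math

def beamsplitter_depth(aperture_height_mm: float, angle_from_vertical_deg: float) -> float:
    """Horizontal depth occupied by a flat plate spanning a vertical aperture."""
    return aperture_height_mm * math.tan(math.radians(angle_from_vertical_deg))

aperture = 20.0  # mm, assumed vertical aperture in front of the eye (illustrative only)
for angle in (45.0, 35.0, 30.0):
    print(f"angle {angle:4.1f} deg from vertical -> depth {beamsplitter_depth(aperture, angle):5.2f} mm")
```

Reducing the plate angle from 45 degrees toward vertical shrinks the required depth, which is consistent with the forward tilt making the optics module thinner.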

[0225] The ratio of the height of the curved polarizing film to the width
of the reflective image display is less than 1:1. The curve on the
polarizing film determines the width of the illuminated area on the
reflective display, and the tilt of the curved area determines the
positioning of the illuminated area on the reflective display. The curved
polarizing film reflects illumination light of a first polarization state
onto the reflective display, which changes the polarization of the
illumination light and generates image light, and the curved polarizing
film passes reflected image light. The curved polarizing film includes a
portion that is parallel to the reflective display over the light source.
The height of the image source may be at least 80% of the display active
area width, at least 3.5 mm, or less than 4 mm.
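
As a purely illustrative numerical cross-check, and reading the three figures above as simultaneous bounds rather than alternatives, an image source height h satisfying h >= 0.8 w, h >= 3.5 mm, and h < 4 mm only has solutions when the display active area width w is below about 5 mm. The helper below is a hypothetical sketch, not part of the disclosure.

```python
def feasible_heights(active_area_width_mm: float):
    """Return the (min, max) image source heights allowed by the stated bounds, or None."""
    lower = max(0.8 * active_area_width_mm, 3.5)   # at least 80% of width, and at least 3.5 mm
    upper = 4.0                                    # less than 4 mm
    return (lower, upper) if lower < upper else None

for w in (4.0, 4.5, 5.0, 5.5):
    print(f"display width {w} mm -> feasible height range {feasible_heights(w)}")
```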

[0226] Referring to FIGS. 105 A through C, the angle of the curved wire
grid polarizer controls the direction of the image light. The curve of
the curved wire grid polarizer controls the width of the image light. The
curve enables use of a narrow light source because it spreads the light
when the light strikes it and then folds it/reflects it to uniformly
illuminate an image display. Image light passing back through the wire
grid polarizer is unperturbed. Thus, the curve also enables the
miniaturization of the optical assembly.

[0227] In FIGS. 21-22, augmented reality eyepiece 2100 includes a frame
2102 and left and right earpieces or temple pieces 2104. Protective
lenses 2106, such as ballistic lenses, are mounted on the front of the
frame 2102 to protect the eyes of the user or to correct the user's view
of the surrounding environment if they are prescription lenses. The front
portion of the frame may also be used to mount a camera or image sensor
2130 and one or more microphones 2132. Not visible in FIG. 21, waveguides
are mounted in the frame 2102 behind the protective lenses 2106, one on
each side of the center or adjustable nose bridge 2138. The front cover
2106 may be interchangeable, so that tints or prescriptions may be
changed readily for the particular user of the augmented reality device.
In one embodiment, each lens is quickly interchangeable, allowing for a
different prescription for each eye. In one embodiment, the lenses are
quickly interchangeable with snap-fits as discussed elsewhere herein.
Certain embodiments may only have a projector and waveguide combination
on one side of the eyepiece while the other side may be filled with a
regular lens, reading lens, prescription lens, or the like. The left and
right ear pieces 2104 may each vertically mount a projector or
microprojector 2114 or other image source atop a spring-loaded hinge 2128
for easier assembly and vibration/shock protection. Each temple piece
also includes a temple housing 2116 for mounting associated electronics
for the eyepiece, and each may also include an elastomeric head grip pad
2120, for better retention on the user. Each temple piece also includes
extending, wrap-around ear buds 2112 and an orifice 2126 for mounting a
headstrap 2142.

[0228] As noted, the temple housing 2116 contains electronics associated
with the augmented reality eyepiece. The electronics may include several
circuit boards, as shown, such as for the microprocessor and radios 2122,
the communications system on a chip (SOC) 2124, and the open multimedia
applications processor (OMAP) processor board 2140. The communications
system on a chip (SOC) may include electronics for one or more
communications capabilities, including a wireless local area network (WLAN),
BlueTooth® communications, frequency modulation (FM) radio, a global
positioning system (GPS), a 3-axis accelerometer, one or more gyroscopes,
and the like. In addition, the right temple piece may include an optical
trackpad (not shown) on the outside of the temple piece for user control
of the eyepiece and one or more applications.

[0229] In an embodiment, a digital signal processor (DSP) may be
programmed and/or configured to receive video feed information and
configure the video feed to drive whatever type of image source is being
used with the optical display. The DSP may include a bus or other
communication mechanism for communicating information, and an internal
processor coupled with the bus for processing the information. The DSP
may include a memory, such as a random access memory (RAM) or other
dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and
synchronous DRAM (SDRAM)), coupled to the bus for storing information and
instructions to be executed. The DSP can include a non-volatile memory
such as for example a read only memory (ROM) or other static storage
device (e.g., programmable ROM (PROM), erasable PROM (EPROM), and
electrically erasable PROM (EEPROM)) coupled to the bus for storing
static information and instructions for the internal processor. The DSP
may include special purpose logic devices (e.g., application specific
integrated circuits (ASICs)) or configurable logic devices (e.g., simple
programmable logic devices (SPLDs), complex programmable logic devices
(CPLDs), and field programmable gate arrays (FPGAs)).
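
By way of illustration only, the kind of configuration step the DSP performs when matching a video feed to the attached image source might be sketched as follows; the panel names, timing numbers, and function names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PanelTiming:
    width: int             # active pixels per line
    height: int            # active lines per frame
    field_rate_hz: int     # refresh rate of each color field
    color_sequential: bool

# Hypothetical panel table; real values would come from the display data sheet.
PANELS = {
    "lcos_720p": PanelTiming(1280, 720, 180, True),
    "lcd_wvga":  PanelTiming(800, 480, 60, False),
}

def configure_feed(panel_name: str, source_fps: int) -> dict:
    """Derive a feed configuration that matches the attached image source."""
    panel = PANELS[panel_name]
    frame_rate = panel.field_rate_hz // 3 if panel.color_sequential else panel.field_rate_hz
    return {
        "scale_to": (panel.width, panel.height),
        "output_frame_rate": frame_rate,
        "drop_or_repeat": "repeat" if source_fps < frame_rate else "drop",
    }

print(configure_feed("lcos_720p", source_fps=30))
```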

[0230] The DSP may include at least one computer readable medium or memory
for holding instructions programmed and for containing data structures,
tables, records, or other data necessary to drive the optical display.
Examples of computer readable media suitable for applications of the present disclosure may be hard disks, floppy disks, tape, magneto-optical disks, PROMs (EPROM, EEPROM, flash EPROM), DRAM, SRAM, SDRAM, or any other magnetic medium, compact discs (e.g., CD-ROM), or any
other optical medium, punch cards, paper tape, or other physical medium
with patterns of holes, a carrier wave (described below), or any other
medium from which a computer can read. Various forms of computer readable
media may be involved in carrying out one or more sequences of one or
more instructions to the optical display for execution. The DSP may also
include a communication interface to provide a data communication
coupling to a network link that can be connected to, for example, a local
area network (LAN), or to another communications network such as the
Internet. Wireless links may also be implemented. In any such
implementation, an appropriate communication interface can send and
receive electrical, electromagnetic or optical signals that carry digital
data streams representing various types of information (such as the video
information) to the optical display.

[0231] In embodiments, the eyepiece may provide an external interface to
computer peripheral devices, such as a monitor, display, TV, keyboards,
mice, memory storage (e.g. external hard drive, optical drive, solid
state memory), network interface (e.g. to the Internet), and the like.
For instance, the external interface may provide direct connectivity to
external computer peripheral devices (e.g. connect directly to a
monitor), indirect connectivity to external computer peripheral devices
(e.g. through a central external peripheral interface device), through a
wired connection, through a wireless connection, and the like. In an
example, the eyepiece may be able to connect to a central external
peripheral interface device that provides connectivity to external
peripheral devices, where the external peripheral interface device may
include computer interface facilities, such as a computer processor,
memory, operating system, peripheral drivers and interfaces, USB port,
external display interface, network port, speaker interface, microphone
interface, and the like. In embodiments, the eyepiece may be connected to
the central external peripheral interface by a wired connection, wireless
connection, directly in a cradle, and the like, and when connected may
provide the eyepiece with computational facilities similar to or
identical to a personal computer.

[0232] The frame 2102 is in a general shape of a pair of wrap-around
sunglasses. The sides of the glasses include shape-memory alloy straps
2134, such as nitinol straps. The nitinol or other shape-memory alloy
straps are fitted for the user of the augmented reality eyepiece. The
straps are tailored so that they assume their trained or preferred shape
when worn by the user and warmed to near body temperature. In
embodiments, the fit of the eyepiece may provide user eye width alignment
techniques and measurements. For instance, the position and/or alignment
of the projected display to the wearer of the eyepiece may be adjustable
in position to accommodate the various eye widths of the different
wearers. The positioning and/or alignment may be automatic, such as
through detection of the position of the wearer's eyes through the optical
system (e.g. iris or pupil detection), or manual, such as by the wearer,
and the like.

[0233] Other features of this embodiment include detachable,
noise-cancelling earbuds. As seen in the figure, the earbuds are intended
for connection to the controls of the augmented reality eyepiece for
delivering sounds to ears of the user. The sounds may include inputs from
the wireless internet or telecommunications capability of the augmented
reality eyepiece. The earbuds also include soft, deformable plastic or
foam portions, so that the inner ears of the user are protected in a
manner similar to earplugs. In one embodiment, the earbuds limit inputs
to the user's ears to about 85 dB. This allows for normal hearing by the wearer, while providing protection from gunshot noise or other explosive noises and enabling listening in high-background-noise environments. In one
embodiment, the controls of the noise-cancelling earbuds have an
automatic gain control for very fast adjustment of the cancelling feature
in protecting the wearer's ears.
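
A minimal sketch of the kind of output limiting described here, assuming a digital gain stage in front of the earbud driver; the threshold handling and function below are illustrative only and are not the disclosed automatic gain control.

```python
def limiter_gain_db(estimated_output_db_spl: float, ceiling_db_spl: float = 85.0) -> float:
    """Attenuation (in dB) needed so the delivered level does not exceed the ceiling."""
    return min(0.0, ceiling_db_spl - estimated_output_db_spl)

for level in (70.0, 85.0, 120.0, 140.0):   # e.g. speech, loud music, gunshot-like transient
    print(f"input {level:5.1f} dB SPL -> apply {limiter_gain_db(level):6.1f} dB gain")
```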

[0234] FIG. 23 depicts a layout of the vertically arranged projector 2114
in an eyepiece 2300, where the illumination light passes from bottom to top through one side of the PBS on its way to the display and imager board, which may be silicon backed, is refracted as image light where it hits the internal interfaces of the triangular prisms which constitute the polarizing beam splitter, and is reflected out of the projector and into the waveguide lens. In this example, the dimensions of
the projector are shown with the width of the imager board being 11 mm,
the distance from the end of the imager board to the image centerline
being 10.6 mm, and the distance from the image centerline to the end of
the LED board being about 11.8 mm.

[0235] A detailed and assembled view of the components of the projector
discussed above may be seen in FIG. 25. This view depicts how compact the
micro-projector 2500 is when assembled, for example, near a hinge of the
augmented reality eyepiece. Microprojector 2500 includes a housing and a
holder 2508 for mounting certain of the optical pieces. As each color
field is imaged by the optical display 2510, the corresponding LED color
is turned on. The RGB LED light engine 2502 is depicted near the bottom,
mounted on heat sink 2504. The holder 2508 is mounted atop the LED light
engine 2502, the holder mounting light tunnel 2520, diffuser lens 2512
(to eliminate hotspots) and condenser lens 2514. Light passes from the
condenser lens into the polarizing beam splitter 2518 and then to the
field lens 2516. The light then refracts onto the LCoS (liquid crystal on
silicon) chip 2510, where an image is formed. The light for the image
then reflects back through the field lens 2516 and is polarized and
reflected 90° through the polarizing beam splitter 2518. The light
then leaves the microprojector for transmission to the optical display of
the glasses.

[0236] FIG. 26 depicts an exemplary RGB LED module 2600. In this example,
the LED is a 2×2 array with 1 red, 1 blue and 2 green die and the
LED array has 4 cathodes and a common anode. The maximum current may be
0.5 A per die and the maximum voltage (≈4V) may be needed for the
green and blue die.
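
For a rough, illustrative electrical budget of such a module, using only the 0.5 A and approximately 4 V limits quoted above and an assumed lower forward voltage for the red die, the worst-case dissipation can be estimated as follows; the numbers are not design values from the disclosure.

```python
# Illustrative worst-case electrical power of the 2x2 LED array.
max_current_a = 0.5                                              # per die, from the text
dies = {"red": 2.2, "green1": 4.0, "green2": 4.0, "blue": 4.0}   # forward voltages; red value assumed

all_on = sum(v * max_current_a for v in dies.values())
one_color_field = max(v * max_current_a for v in dies.values()) * 2  # both green die during the green field

print(f"all die driven at once:   {all_on:.1f} W")           # ~7.1 W
print(f"worst single color field: {one_color_field:.1f} W")  # ~4.0 W
```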

[0237] In embodiments, the system may utilize an optical system that is
able to generate a monochrome display to the wearer, which may provide
advantages to image clarity, image resolution, frame rate, and the like.
For example, the frame rate may triple (over an RGB system), and this may be useful in night vision and similar situations where the camera is imaging the surroundings and those images may be processed and displayed as content. The image may be brighter, such as three times brighter if three LEDs are used, or the system may provide a space savings with only one
LED. If multiple LEDs are used, they may be the same color or they could
be different (RGB). The system may be a switchable monochrome/color
system where RGB is used but when the wearer wants monochrome they could
either choose an individual LED or a number of them. All three LEDs may
be used at the same time, as opposed to sequencing, to create white
light. Using three LEDs without sequencing may be like any other white
light where the frame rate goes up by a factor of three. The "switching"
between monochrome and color may be done "manually" (e.g. a physical
button, a GUI interface selection) or it may be done automatically
depending on the application that is running. For instance, a wearer may
go into a night vision mode or fog clearing mode, and the processing
portion of the system automatically determines that the eyepiece needs to
go into a monochrome high refresh rate mode.
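
The frame-rate advantage of monochrome operation follows directly from field-sequential timing. The sketch below is illustrative only and assumes a panel that can refresh individual fields at 180 Hz.

```python
def full_frame_rate(field_rate_hz: float, colors_per_frame: int) -> float:
    """Full-image frame rate when color fields are shown sequentially."""
    return field_rate_hz / colors_per_frame

field_rate = 180.0  # Hz, assumed panel field refresh rate
print("RGB (3 sequential fields):", full_frame_rate(field_rate, 3), "Hz")  # 60 Hz color frames
print("monochrome (1 field):     ", full_frame_rate(field_rate, 1), "Hz")  # 180 Hz, i.e. three times faster
```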

[0238] FIG. 3 depicts an embodiment of a horizontally disposed projector
in use. The projector 300 may be disposed in an arm portion of an
eyepiece frame. The LED module 302, under processor control 304, may emit
a single color at a time in rapid sequence. The emitted light may travel
down a light tunnel 308 and through at least one homogenizing lenslet 310
before encountering a polarizing beam splitter 312 and being deflected
towards an LCoS display 314 where a full color image is displayed. The
LCoS display may have a resolution of 1280×720p. The image may then
be reflected back up through the polarizing beam splitter, reflected off
a fold mirror 318 and travel through a collimator on its way out of the
projector and into a waveguide. The projector may include a diffractive
element to eliminate aberrations.

[0239] In an embodiment, the interactive head-mounted eyepiece includes an
optical assembly through which a user views a surrounding environment and
displayed content, wherein the optical assembly includes a corrective
element that corrects the user's view of the surrounding environment, a
freeform optical waveguide enabling internal reflections, and a coupling
lens positioned to direct an image from an optical display, such as an
LCoS display, to the optical waveguide. The eyepiece further includes one
or more integrated processors for handling content for display to the
user and an integrated image source, such as a projector facility, for
introducing the content to the optical assembly. In embodiments where the
image source is a projector, the projector facility includes a light
source and the optical display. Light from the light source, such as an
RGB module, is emitted under control of the processor and traverses a
polarizing beam splitter where it is polarized before being reflected off
the optical display, such as the LCoS display or LCD display in certain
other embodiments, and into the optical waveguide. A surface of the
polarizing beam splitter may reflect the color image from the optical
display into the optical waveguide. The RGB LED module may emit light
sequentially to form a color image that is reflected off the optical
display. The corrective element may be a see-through correction lens that
is attached to the optical waveguide to enable proper viewing of the
surrounding environment whether the image source is on or off. This
corrective element may be a wedge-shaped correction lens, and may be
prescription, tinted, coated, or the like. The freeform optical
waveguide, which may be described by a higher order polynomial, may
include dual freeform surfaces that enable a curvature and a sizing of
the waveguide. The curvature and the sizing of the waveguide enable its
placement in a frame of the interactive head-mounted eyepiece. This frame
may be sized to fit a user's head in a similar fashion to sunglasses or
eyeglasses. Other elements of the optical assembly of the eyepiece
include a homogenizer through which light from the light source is
propagated to ensure that the beam of light is uniform and a collimator
that improves the resolution of the light entering the optical waveguide.
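
A freeform surface "described by a higher order polynomial" can be represented, for example, as a sag z(x, y) given by a sum of polynomial terms in x and y. The sketch below is a generic illustration of that representation; the coefficients are placeholders and are not the actual waveguide prescription.

```python
import numpy as np

def freeform_sag(x, y, coeffs):
    """Sag z(x, y) = sum over i, j of coeffs[i][j] * x**i * y**j for a freeform surface."""
    z = np.zeros_like(np.asarray(x, dtype=float))
    for i, row in enumerate(coeffs):
        for j, c in enumerate(row):
            z += c * np.asarray(x, dtype=float) ** i * np.asarray(y, dtype=float) ** j
    return z

# Placeholder coefficients (mm units): base curvature plus mild higher-order terms.
coeffs = [
    [0.0,  0.0,  5e-3],   # 1, y, y^2
    [0.0,  1e-4, 0.0],    # x, x*y
    [8e-3, 0.0,  1e-6],   # x^2, x^2*y^2
]

x = np.linspace(-10, 10, 5)   # mm
y = np.zeros_like(x)
print(np.round(freeform_sag(x, y, coeffs), 4))
```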

[0240] In embodiments, the prescription lens may be mounted on the inside
of the eyepiece lens or on the outside. In some embodiments, the
prescription power may be divided into prescription lenses mounted on the
outside and inside of the eyepiece lens. In embodiments, the prescription correction is provided by corrective optics that cling to the eyepiece lens or to a component of the optical assembly, such as the beamsplitter, for example through surface tension. Suitable optics may be provided by 3M's Press-On
Optics, which are available at least as Prisms (a.k.a. Fresnel Prisms),
Aspheric Minus Lenses, Aspheric Plus Lenses, and Bifocal Lenses. The
corrective optics may be a user removable and replaceable diopter
correction facility adapted to be removably attached in a position
between the user's eye and the displayed content such that the diopter
correction facility corrects the user's eyesight with respect to the
displayed content and the surrounding environment. The diopter correction
facility may be adapted to mount to the optical assembly. The diopter
correction facility may be adapted to mount to the head-mounted eyepiece.
The diopter correction facility may mount using a friction fit. The
diopter correction facility may mount using a magnetic attachment
facility. The user may select from a plurality of different diopter
correction facilities depending on the user's eyesight.

[0241] Referring to FIG. 4, the image light, which may be polarized and
collimated, may optionally traverse a display coupling lens 412, which
may or may not be the collimator itself or in addition to the collimator,
and enter the waveguide 414. In embodiments, the waveguide 414 may be a
freeform waveguide, where the surfaces of the waveguide are described by
a polynomial equation. The waveguide may be rectilinear. The waveguide
414 may include two reflective surfaces. When the image light enters the
waveguide 414, it may strike a first surface with an angle of incidence
greater than the critical angle above which total internal reflection
(TIR) occurs. The image light may engage in TIR bounces between the first
surface and a second facing surface, eventually reaching the active
viewing area 418 of the composite lens. In an embodiment, light may
engage in at least three TIR bounces. Since the waveguide 414 tapers to
enable the TIR bounces to eventually exit the waveguide, the thickness of
the composite lens 420 may not be uniform. Distortion through the viewing
area of the composite lens 420 may be minimized by disposing a
wedge-shaped correction lens 410 along a length of the freeform waveguide
414 in order to provide a uniform thickness across at least the viewing
area of the lens 420. The correction lens 410 may be a prescription lens, a tinted lens, a polarized lens, a ballistic lens, and the like, mounted on the inside or outside of the eyepiece lens, or in some embodiments, mounted on both the inside and outside of the eyepiece lens.
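
The critical angle referred to above depends only on the refractive indices at the interface, theta_c = arcsin(n_outside / n_waveguide). The short check below is illustrative only, with assumed index values for typical optical plastics and a higher-index glass.

```python
import math

def critical_angle_deg(n_waveguide: float, n_outside: float = 1.0) -> float:
    """Angle of incidence above which total internal reflection occurs at the interface."""
    return math.degrees(math.asin(n_outside / n_waveguide))

for n in (1.49, 1.53, 1.74):   # assumed refractive indices, e.g. acrylic, optical plastic, high-index glass
    print(f"n = {n:.2f} -> critical angle ~ {critical_angle_deg(n):.1f} deg")
```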

[0242] In some embodiments, while the optical waveguide may have a first
surface and a second surface enabling total internal reflections of the
light entering the waveguide, the light may not actually enter the
waveguide at an internal angle of incidence that would result in total
internal reflection. The eyepiece may include a mirrored surface on the
first surface of the optical waveguide to reflect the displayed content
towards the second surface of the optical waveguide. Thus, the mirrored
surface enables a total reflection of the light entering the optical
waveguide or a reflection of at least a portion of the light entering the
optical waveguide. In embodiments, the surface may be 100% mirrored or
mirrored to a lower percentage. In some embodiments, in place of a
mirrored surface, an air gap between the waveguide and the corrective
element may cause a reflection of the light that enters the waveguide at
an angle of incidence that would not result in TIR.

[0243] In an embodiment, the eyepiece includes an integrated image source,
such as a projector, that introduces content for display to the optical
assembly from a side of the optical waveguide adjacent to an arm of the
eyepiece. As opposed to prior art optical assemblies where image
injection occurs from a top side of the optical waveguide, the present
disclosure provides image injection to the waveguide from a side of the
waveguide. The displayed content aspect ratio is between approximately square and approximately rectangular, with the long axis approximately horizontal. In embodiments, the displayed content aspect ratio is 16:9.
In embodiments, achieving a rectangular aspect ratio for the displayed
content where the long axis is approximately horizontal may be done via
rotation of the injected image. In other embodiments, it may be done by
stretching the image until it reaches the desired aspect ratio.

[0244] FIG. 5 depicts a design for a waveguide eyepiece showing sample
dimensions. For example, in this design, the width of the coupling lens
504 may be 13-15 mm, with the optical display 502 optically coupled
in series. These elements may be disposed in an arm or redundantly in
both arms of an eyepiece. Image light from the optical display 502 is
projected through the coupling lens 504 into the freeform waveguide 508.
The thickness of the composite lens 520, including waveguide 508 and
correction lens 510, may be 9 mm. In this design, the waveguide 508 enables an exit pupil diameter of 8 mm with an eye clearance of 20 mm. The resultant see-through view 512 may be about 60-70 mm. The distance from the pupil to the image light path as it enters the waveguide 508 (dimension a) may be about 50-60 mm, which can accommodate a large percentage of human head breadths. In an embodiment, the field of view may be larger
than the pupil. In embodiments, the field of view may not fill the lens.
It should be understood that these dimensions are for a particular
illustrative embodiment and should not be construed as limiting. In an
embodiment, the waveguide, snap-on optics, and/or the corrective lens may
comprise optical plastic. In other embodiments, the waveguide, snap-on optics, and/or the corrective lens may comprise glass, marginal glass,
bulk glass, metallic glass, palladium-enriched glass, or other suitable
glass. In embodiments, the waveguide 508 and correction lens 510 may be
made from different materials selected to result in little to no
chromatic aberrations. The materials may include a diffraction grating, a
holographic grating, and the like.

[0245] In embodiments such as that shown in FIG. 1, the projected image
may be a stereo image when two projectors 108 are used for the left and
right images. To enable stereo viewing, the projectors 108 may be
disposed at an adjustable distance from one another that enables
adjustment based on the interpupillary distance for individual wearers of
the eyepiece. For example, a single optical assembly may include two
independent electro-optic modules with individual adjustments for
horizontal, vertical and tilt positioning. Alternatively, the optical
assembly may include only a single electro-optic module.

[0246] FIG. 6 depicts an embodiment of the eyepiece 600 with a see-through
or translucent lens 602. A projected image 618 can be seen on the lens
602. In this embodiment, the image 618 that is being projected onto the
lens 602 happens to be an augmented reality version of the scene that the
wearer is seeing, wherein tagged points of interest (POI) in the field of
view are displayed to the wearer. The augmented reality version may be
enabled by a forward facing camera embedded in the eyepiece (not shown in
FIG. 6) that images what the wearer is looking at and identifies the
location/POI. In one embodiment, the output of the camera or optical
transmitter may be sent to the eyepiece controller or memory for storage,
for transmission to a remote location, or for viewing by the person
wearing the eyepiece or glasses. For example, the video output may be
streamed to the virtual screen seen by the user. The video output may
thus be used to help determine the user's location, or may be sent remotely to others to assist in locating the wearer, or for any other purpose. Other detection technologies, such as
GPS, RFID, manual input, and the like, may be used to determine a
wearer's location. Using location or identification data, a database may
be accessed by the eyepiece for information that may be overlaid,
projected or otherwise displayed with what is being seen. Augmented
reality applications and technology will be further described herein.

[0247] In FIG. 7, an embodiment of the eyepiece 700 is depicted with a
translucent lens 702 on which is being displayed streaming media (an
e-mail application) and an incoming call notification 704. In this
embodiment, the media obscures a portion of the viewing area; however, it should be understood that the displayed image may be positioned anywhere
in the field of view. In embodiments, the media may be made to be more or
less transparent.

[0248] In an embodiment, the eyepiece may receive input from any external
source, such as an external converter box. The source may be depicted in
the lens of eyepiece. In an embodiment, when the external source is a
phone, the eyepiece may use the phone's location capabilities to display
location-based augmented reality, including marker overlay from
marker-based AR applications. In embodiments, a VNC client running on the
eyepiece's processor or an associated device may be used to connect to
and control a computer, where the computer's display is seen in the
eyepiece by the wearer. In an embodiment, content from any source may be
streamed to the eyepiece, such as a display from a panoramic camera
riding atop a vehicle, a user interface for a device, imagery from a
drone or helicopter, and the like. For example, a gun-mounted camera may
enable shooting a target not in direct line of sight when the camera feed
is directed to the eyepiece.

[0249] The lenses may be chromic, such as photochromic or electrochromic.
The electrochromic lens may include integral chromic material or a
chromic coating which changes the opacity of at least a portion of the
lens in response to a burst of charge applied by the processor across the
chromic material. For example, and referring to FIG. 9, a chromic portion
902 of the lens 904 is shown darkened, such as for providing greater
viewability by the wearer of the eyepiece when that portion is showing
displayed content to the wearer. In embodiments, there may be a plurality
of chromic areas on the lens that may be controlled independently, such
as large portions of the lens, sub-portions of the projected area,
programmable areas of the lens and/or projected area, controlled to the
pixel level, and the like. Activation of the chromic material may be
controlled via the control techniques further described herein or
automatically enabled with certain applications (e.g. a streaming video
application, a sun tracking application, an ambient brightness sensor, a
camera tracking brightness in the field of view) or in response to a
frame-embedded UV sensor. In embodiments, an electrochromic layer may be
located between optical elements and/or on the surface of an optical
element on the eyepiece, such as on a corrective lens, on a ballistic
lens, and the like. In an example, the electrochromic layer may consist
of a stack, such as an Indium Tin Oxide (ITO) coated PET/PC film with two
layers of electrochromic (EC) between, which may eliminate another layer
of PET/PC, thereby reducing reflections (e.g. a layer stack may comprise
a PET/PC-EC-PET/PC-EC-PET/PC). In embodiments, the electrically
controllable optical layer may be provided as a liquid crystal based
solution with a binary state of tint. In other embodiments, multiple
layers of liquid crystal or an alternative e-tint forming the optical
layer may be used to provide variable tint such that certain layers or
segments of the optical layer may be turned on or off in stages.
Electrochromic layers may be used generically for any of the electrically
controlled transparencies in the eyepiece, including SPD, LCD,
electrowetting, and the like.

[0250] In embodiments, the lens may have an angular sensitive coating
which enables transmitting light-waves with low incident angles and
reflecting light, such as s-polarized light, with high incident angles.
The chromic coating may be controlled in portions or in its entirety,
such as by the control technologies described herein. The lenses may be
variable contrast and the contrast may be under the control of a push
button or any other control technique described herein. In embodiments,
the user may wear the interactive head-mounted eyepiece, where the
eyepiece includes an optical assembly through which the user views a
surrounding environment and displayed content. The optical assembly may
include a corrective element that corrects the user's view of the
surrounding environment, an integrated processor for handling content for
display to the user, and an integrated image source for introducing the
content to the optical assembly. The optical assembly may include an
electrochromic layer that provides a display characteristic adjustment
that is dependent on displayed content requirements and surrounding
environmental conditions. In embodiments, the display characteristic may
be brightness, contrast, and the like. The surrounding environmental
condition may be a level of brightness that without the display
characteristic adjustment would make the displayed content difficult to
visualize by the wearer of the eyepiece, where the display characteristic
adjustment may be applied to an area of the optical assembly where
content is being displayed.

[0251] In embodiments, the eyepiece may have brightness, contrast,
spatial, resolution, and the like control over the eyepiece projected
area, such as to alter and improve the user's view of the projected
content against a bright or dark surrounding environment. For example, a
user may be using the eyepiece under bright daylight conditions, and in
order for the user to clearly see the displayed content, the display area may need to be altered in brightness and/or contrast. Alternatively, the
viewing area surrounding the display area may be altered. In addition,
the area altered, whether within the display area or not, may be
spatially oriented or controlled per the application being implemented.
For instance, only a small portion of the display area may need to be
altered, such as when that portion of the display area deviates from some
determined or predetermined contrast ratio between the display portion of
the display area and the surrounding environment. In embodiments,
portions of the lens may be altered in brightness, contrast, spatial
extent, resolution, and the like, such as fixed to include the entire
display area, adjusted to only a portion of the lens, adaptable and
dynamic to changes in lighting conditions of the surrounding environment
and/or the brightness-contrast of the displayed content, and the like.
Spatial extent (e.g. the area affected by the alteration) and resolution
(e.g. display optical resolution) may vary over different portions of the
lens, including high resolution segments, low resolution segments, single
pixel segments, and the like, where differing segments may be combined to
achieve the viewing objectives of the application(s) being executed. In
embodiments, technologies for implementing alterations of brightness,
contrast, spatial extent, resolution, and the like, may include
electrochromic materials, LCD technologies, embedded beads in the optics,
flexible displays, suspended particle device (SPD) technologies, colloid
technologies, and the like.
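
One way to picture the contrast-ratio trigger described above is the following sketch, which is illustrative only: it assumes a per-region luminance estimate from the camera or ambient brightness sensor and a hypothetical darkening control with levels from 0 to 1; the threshold values are placeholders, not disclosed parameters.

```python
def required_darkening(display_nits: float, background_nits: float,
                       target_contrast: float = 3.0, max_attenuation: float = 0.9) -> float:
    """Fraction by which the see-through background behind a display region should be dimmed.

    Returns 0.0 when the displayed content is already at least target_contrast times
    brighter than the scene behind it."""
    if background_nits <= 0:
        return 0.0
    contrast = display_nits / background_nits
    if contrast >= target_contrast:
        return 0.0
    needed_background = display_nits / target_contrast
    return min(max_attenuation, 1.0 - needed_background / background_nits)

# Example: 300-nit displayed content against bright daylight vs. an indoor scene.
for scene in (5000.0, 50.0):
    print(f"scene {scene:6.1f} nits -> darken region by {required_darkening(300.0, scene):.2f}")
```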

[0252] In embodiments, there may be various modes of activation of the
electrochromic layer. For example, the user may enter sunglass mode where
the composite lenses appear only somewhat darkened or the user may enter
"Blackout" mode, where the composite lenses appear completely blackened.

[0253] An example of a technology that may be employed in implementing the alterations of brightness, contrast, spatial extent, resolution, and the like is electrochromic materials, films, inks, and the like.
Electrochromism is the phenomenon displayed by some materials of
reversibly changing appearance when electric charge is applied. Various
types of materials and structures can be used to construct electrochromic
devices, depending on the specific applications. For instance,
electrochromic materials include tungsten oxide (WO3), which is the
main chemical used in the production of electrochromic windows or smart
glass. In embodiments, electrochromic coatings may be used on the lens of
the eyepiece in implementing alterations. In another example,
electrochromic displays may be used in implementing `electronic paper`,
which is designed to mimic the appearance of ordinary paper, where the
electronic paper displays reflected light like ordinary paper. In
embodiments, electrochromism may be implemented in a wide variety of
applications and materials, including gyricon (consisting of polyethylene
spheres embedded in a transparent silicone sheet, with each sphere
suspended in a bubble of oil so that they can rotate freely),
electro-phoretic displays (forming images by rearranging charged pigment
particles using an applied electric field), E-Ink technology,
electro-wetting, electro-fluidic, interferometric modulator, organic
transistors embedded into flexible substrates, nano-chromics displays
(NCD), and the like.

[0254] Another example of a technology that may be employed in implementing the alterations of brightness, contrast, spatial extent, resolution, and the like is suspended particle devices (SPD). When a
small voltage is applied to an SPD film, its microscopic particles, which
in their stable state are randomly dispersed, become aligned and allow
light to pass through. The response may be immediate, uniform, and with
stable color throughout the film. Adjustment of the voltage may allow
users to control the amount of light, glare and heat passing through. The
system's response may range from a dark blue appearance, with up to full
blockage of light in its off state, to clear in its on state. In
embodiments, SPD technology may be an emulsion applied on a plastic
substrate creating the active film. This plastic film may be laminated
(as a single glass pane), suspended between two sheets of glass, plastic
or other transparent materials, and the like.

[0255] Referring to FIGS. 8A-C, in certain embodiments, the electro-optics
may be mounted in a monocular or binocular flip-up/flip-down arrangement
in two parts: 1) electro-optics; and 2) correction lens. FIG. 8A depicts
a two part eyepiece where the electro-optics are contained within a
module 802 that may be electrically connected to the eyepiece 804 via an
electrical connector 810, such as a plug, pin, socket, wiring, and the
like. In this arrangement, the lens 818 in the frame 814 may be a
correction lens entirely. The interpupillary distance (IPD) between the
two halves of the electro-optic module 802 may be adjusted at the bridge
808 to accommodate various IPDs. Similarly, the placement of the display
812 may be adjusted via the bridge 808. FIG. 8B depicts the binocular
electro-optics module 802 where one half is flipped up and the other half
is flipped down. The nose bridge may be fully adjustable and elastomeric.
This enables 3-point mounting on the nose bridge and ears with a head strap to assure the stability of images in the user's eyes, unlike the instability of helmet-mounted optics, which shift on the scalp. Referring
to FIG. 8C, the lens 818 may be ANSI-compliant, hard-coat
scratch-resistant polycarbonate ballistic lenses, may be chromic, may
have an angular sensitive coating, may include a UV-sensitive material,
and the like. In this arrangement, the electro-optics module may include
a CMOS-based VIS/NIR/SWIR black silicon sensor for night vision
capability. The electro-optics module 802 may feature quick disconnect
capability for user flexibility, field replacement and upgrade. The
electro-optics module 802 may feature an integrated power dock.

[0256] As in FIG. 79, the flip-up/flip-down lens 7910 may include a light
block 7908. Removable, elastomeric night adapters/light dams/light blocks
7908 may be used to shield the flip-up/flip-down lens 7910, such as for
night operations. The exploded top view of the eyepiece also depicts a
headstrap 7900, frame 7904, and adjustable nose bridge 7902. FIG. 80
depicts an exploded view of the electro-optic assembly in a front (A) and
side angle (B) view. A holder 8012 holds the see-through optic with
corrective lens 7910. An O-ring 8020 and screw 8022 secure the holder to the shaft 8024. A spring 8028 provides a spring-loaded connection between
the holder 8012 and shaft 8024. The shaft 8024 connects to the attachment
bracket 8014, which secures to the eyepiece using the thumbscrew 8018.
The shaft 8024 serves as a pivot and an IPD adjustment tool using the IPD
adjustment knob 8030. As seen in FIG. 81, the knob 8030 rotates along
adjustment threads 8134. The shaft 8024 also features two set screw
grooves 8132.

[0257] In embodiments, a photochromic layer may be included as part of the
optics of the eyepiece. Photochromism is the reversible transformation of
a chemical species between two forms by the absorption of electromagnetic
radiation, where the two forms have different absorption spectra, such as
a reversible change of color, darkness, and the like, upon exposure to a
given frequency of light. In an example, a photochromic layer may be
included between the waveguide and corrective optics of the eyepiece, on
the outside of the corrective optic, and the like. In embodiments, a
photochromic layer (such as used as a darkening layer) may be activated
with a UV diode, or other photochromic responsive wavelength known in the
art. In the case of the photochromic layer being activated with UV light,
the eyepiece optics may also include a UV coating outside the
photochromic layer to prevent UV light from the Sun from accidentally
activating it.

[0258] Photochromics are presently fast to change from light to dark and slow to change from dark to light. This is due to the molecular changes that are involved when the photochromic material changes from clear to dark. Photochromic molecules vibrate back to clear after the UV light, such as UV light from the sun, is removed. By increasing the vibration of the molecules, such as by exposure to heat, the optic will clear more quickly.
The speed at which the photochromic layer goes from dark to light may be
temperature-dependent. Rapid changing from dark to light is particularly
important for military applications where users of sunglasses often go
from a bright outside environment to a dark inside environment and it is
important to be able to see quickly in the inside environment.

[0259] This disclosure provides a photochromic film device with an
attached heater that is used to accelerate the transition from dark to
clear in the photochromic material. This method relies on the relationship between temperature and the speed of transition of photochromic materials from dark to clear, wherein the transition is faster at higher temperatures. To enable the heater to increase the temperature of the
photochromic material rapidly, the photochromic material is provided as a
thin layer with a thin heater. By keeping the thermal mass of the
photochromic film device low per unit area, the heater only has to
provide a small amount of heat to rapidly produce a large temperature
change in the photochromic material. Since the photochromic material only
needs to be at a higher temperature during the transition from dark to
clear, the heater only needs to be used for short periods of time so the
power requirement is low.
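
As a rough, illustrative sketch of why a low thermal mass keeps the power
requirement small, the heat needed to warm a thin carrier can be estimated
from its volume, material properties, and the desired temperature rise. The
material constants, lens area, and temperature step below are assumptions
for illustration only; the disclosure itself specifies only a thin,
low-thermal-mass film.

    # Illustrative estimate only: assumed glass carrier and lens area.
    density = 2500.0          # kg/m^3, typical soda-lime glass (assumed)
    specific_heat = 840.0     # J/(kg*K) (assumed)
    thickness = 150e-6        # m, thin carrier layer
    area = 0.040 * 0.050      # m^2, assumed 40 mm x 50 mm lens area
    delta_t = 20.0            # K, assumed temperature rise for fast clearing

    energy_j = density * thickness * area * specific_heat * delta_t  # ~12.6 J
    power_w = energy_j / 2.0  # ~6.3 W if the heater delivers it over 2 seconds

Because the heater runs only during the dark-to-clear transition, the
average power drawn over time remains small.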

[0260] The heater may be a thin and transparent heater element, such as an
ITO heater or any other transparent and electrically conductive film
material. When a user needs the eyepiece to go clear quickly, the user
may activate the heater element by any of the control techniques
discussed herein.

[0261] In an embodiment, the heater element may be used to calibrate the
photochromic element to compensate for cold ambient conditions when the
lenses might go dark on their own.

[0262] In another embodiment, a thin coat of photochromic material may be
deposited on a thick substrate with the heater element layered on top.
For example, the cover sunglass lens may comprise an accelerated
photochromic solution and still have a separate electrochromic patch over
the display area that may optionally be controlled with or without UV
light.

[0263] FIG. 94A depicts a photochromic film device with a serpentine
heater pattern and FIG. 94B depicts a side view of a photochromic film
device wherein the device is a lens for sunglasses. The photochromic film
device is shown positioned above, and not in contact with, a protective
cover lens to reduce the thermal mass of the device.

[0264] U.S. Pat. No. 3,152,215 describes a heater layer combined with a
photochromic layer to heat the photochromic material for the purpose of
reducing the time to transition from dark to clear. However, the
photochromic layer is positioned in a wedge, which would greatly increase
the thermal mass of the device and thereby decrease the rate at which the
heater could change the temperature of the photochromic material, or
alternatively greatly increase the power required to change the temperature
of the photochromic material.

[0265] This disclosure includes the use of a thin carrier layer that the
photochromic material is applied to. The carrier layer can be glass or
plastic. The photochromic material can be applied by vacuum coating, by
dipping or by thermal diffusion into the carrier layer as is well known
in the art. The thickness of the carrier layer can be 150 microns or
less. The thickness of the carrier layer is selected
based on the desired darkness of the photochromic film device in the dark
state and the desired speed of transition between the dark state and the
clear state. Thicker carrier layers can be darker in the dark state while
being slower to heat to an elevated temperature due to having more
thermal mass. Conversely, thinner carrier layers can be less dark in the
dark state while being faster to heat to an elevated temperature due to
having less thermal mass.

[0266] The protective layer shown in FIG. 94 is separated from the
photochromic film device to keep the thermal mass of the photochromic
film device low. In this way, the protective layer can be made thicker to
provide higher impact strength. The protective layer can be glass or
plastic; for example, the protective layer can be polycarbonate.

[0267] The heater can be a transparent conductor that is patterned into a
conductive path that is relatively uniform so that the heat generated
over the length of the patterned heater is relatively uniform. An example
of a transparent conductor that can be patterned is indium tin oxide (ITO). A
larger area is provided at the ends of the heater pattern for electrical
contacts such as is shown in FIG. 94.
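
As a hedged sketch of how a patterned conductive path relates to heater
power, the trace resistance can be treated as the conductor's sheet
resistance multiplied by the number of "squares" along the serpentine path.
The function below and the example values are illustrative assumptions, not
figures from the disclosure.

    def heater_power_w(sheet_res_ohm_per_sq, trace_len_mm, trace_width_mm, volts):
        """Power dissipated in a serpentine transparent-conductor heater trace.
        Trace resistance = sheet resistance x number of squares (length/width)."""
        squares = trace_len_mm / trace_width_mm
        resistance_ohm = sheet_res_ohm_per_sq * squares
        return volts ** 2 / resistance_ohm

    # Example with assumed values: a 300 mm long, 3 mm wide trace at
    # 10 ohm/sq driven from 12 V dissipates about 0.14 W.
    print(heater_power_w(10.0, 300.0, 3.0, 12.0))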

[0268] As noted in the discussion for FIG. 8A-C, the augmented reality
glasses may include a lens 818 for each eye of the wearer. The lenses 818
may be made to fit readily into the frame 814, so that each lens may be
tailored for the person for whom the glasses are intended. Thus, the
lenses may be corrective lenses, and may also be tinted for use as
sunglasses, or have other qualities suitable for the intended
environment. Thus, the lenses may be tinted yellow, dark or other
suitable color, or may be photochromic, so that the transparency of the
lens decreases when exposed to brighter light. In one embodiment, the
lenses may also be designed for snap fitting into or onto the frames,
i.e., snap on lenses are one embodiment. For example, the lenses may be
made from high-quality Schott optical glass and may include a polarizing
filter.

[0269] Of course, the lenses need not be corrective lenses; they may
simply serve as sunglasses or as protection for the optical system within
the frame. In non-flip up/flip down arrangements, it goes without saying
that the outer lenses are important for helping to protect the rather
expensive waveguides, viewing systems and electronics within the
augmented reality glasses. At a minimum, the outer lenses offer
protection from scratching by the environment of the user, whether sand,
brambles, thorns and the like, in one environment, and flying debris,
bullets and shrapnel, in another environment. In addition, the outer
lenses may be decorative, acting to change a look of the composite lens,
perhaps to appeal to the individuality or fashion sense of a user. The
outer lenses may also help one individual user to distinguish his or her
glasses from others, for example, when many users are gathered together.

[0270] It is desirable that the lenses be suitable for impact, such as a
ballistic impact. Accordingly, in one embodiment, the lenses and the
frames meet ANSI Standard Z87.1-2010 for ballistic resistance. In one
embodiment, the lenses also meet ballistic standard CE EN166B. In another
embodiment, for military uses, the lenses and frames may meet the
standards of MIL-PRF-31013, standards 3.5.1.1 or 4.4.1.1. Each of these
standards has slightly different requirements for ballistic resistance
and each is intended to protect the eyes of the user from impact by
high-speed projectiles or debris. While no particular material is
specified, polycarbonate, such as certain Lexan® grades, usually is
sufficient to pass tests specified in the appropriate standard.

[0271] In one embodiment, as shown in FIG. 8D, the lenses snap in from the
outside of the frame, not the inside, for better impact resistance, since
any impact is expected from the outside of the augmented reality
eyeglasses. In this embodiment, replaceable lens 819 has a plurality of
snap-fit arms 819a which fit into recesses 820a of frame 820. The
engagement angle 819b of the arm is greater than 90°, while the
engagement angle 820b of the recess is also greater than 90°.
Making the angles greater than right angles has the practical effect of
allowing removal of lens 819 from the frame 820. The lens 819 may need to
be removed if the person's vision has changed or if a different lens is
desired for any reason. The design of the snap fit is such that there is
a slight compression or bearing load between the lens and the frame. That
is, the lens may be held firmly within the frame, such as by a slight
interference fit of the lens within the frame.

[0272] The cantilever snap fit of FIG. 8D is not the only possible way to
removably snap-fit the lenses and the frame. For example, an annular snap
fit may be used, in which a continuous sealing lip of the frame engages
an enlarged edge of the lens, which then snap-fits into the lip, or
possibly over the lip. Such a snap fit is typically used to join a cap to
an ink pen. This configuration may have an advantage of a sturdier joint
with fewer chances for admission of very small dust and dirt particles.
Possible disadvantages include the fairly tight tolerances required
around the entire periphery of both the lens and frame, and the
requirement for dimensional integrity in all three dimensions over time.

[0273] It is also possible to use an even simpler interface, which may
still be considered a snap-fit. A groove may be molded into an outer
surface of the frame, with the lens having a protruding surface, which
may be considered a tongue that fits into the groove. If the groove is
semi-cylindrical, such as from about 270° to about 300°,
the tongue will snap into the groove and be firmly retained, with removal
still possible through the gap that remains in the groove. In this
embodiment, shown in FIG. 8E, a lens or replacement lens or cover 826
with a tongue 828 may be inserted into a groove 827 in a frame 825, even
though the lens or cover is not snap-fit into the frame. Because the fit
is a close one, it will act as a snap-fit and securely retain the lens in
the frame.

[0274] In another embodiment, the frame may be made in two pieces, such as
a lower portion and an upper portion, with a conventional
tongue-and-groove fit. In another embodiment, this design may also use
standard fasteners to ensure a tight grip of the lens by the frame. The
design should not require disassembly of anything on the inside of the
frame. Thus, the snap-on or other lens or cover should be assembled onto
the frame, or removed from the frame, without having to go inside the
frame. As noted in other parts of this disclosure, the augmented reality
glasses have many component parts. Some of the assemblies and
subassemblies may require careful alignment. Moving and jarring these
assemblies may be detrimental to their function, as may moving and jarring
of the frame and the outer or snap-on lens or cover.

[0275] In embodiments, the flip-up/flip-down arrangement enables a modular
design for the eyepiece. For example, not only can the eyepiece be
equipped with a monocular or binocular module 802, but the lens 818 may
also be swapped. In embodiments, additional features may be included with
the module 802, either associated with one or both displays 812.
Referring to FIG. 8F, either monocular or binocular versions of the
module 802 may be display only 852 (monocular), 854 (binocular) or may be
equipped with a forward-looking camera 858 (monocular), and 860 & 862
(binocular). In some embodiments, the module may have additional
integrated electronics, such as a GPS, a laser range finder, and the
like. In the embodiment 862 enabling urban leader tactical response,
awareness & visualization, also known as `Ultra-Vis`, a binocular
electro-optic module 862 is equipped with stereo forward-looking cameras
870, GPS, and a laser range finder 868. These features may enable the
Ultra-Vis embodiment to have panoramic night vision, and panoramic night
vision with laser range finder and geolocation.

[0276] In an embodiment, the electro-optic characteristics may include, but
are not limited to, the following:

[0278] In another embodiment, an augmented reality eyepiece may include
electrically-controlled lenses as part of the microprojector or as part
of the optics between the microprojector and the waveguide. FIG. 21
depicts an embodiment with such liquid lenses 2152.

[0279] The glasses may also include at least one camera or optical sensor
2130 that may furnish an image or images for viewing by the user. The
images are formed by a microprojector 2114 on each side of the glasses
for conveyance to the waveguide 2108 on that side. In one embodiment, an
additional optical element, a variable focus lens 2152, may also be
furnished. The lens may be electrically adjustable by the user so that
the image seen in the waveguides 2108 is focused for the user. In
embodiments, the camera may be a multi-lens camera, such as an `array
camera`, where the eyepiece processor may combine the data from the
multiple lenses and multiple viewpoints of the lenses to build a single
high-quality image. This technology may be referred to as computational
imaging, since software is used to process the image. Computational
imaging may provide image-processing advantages, such as allowing
processing of the composite image as a function of individual lens
images. For example, since each lens may provide its own image, the
processor may provide image processing to create images with special
focusing, such as foveal imaging, where the focus from one of the lens
images is clear, higher resolution, and the like, and where the rest of
the image is defocused, lower resolution, and the like. The processor may
also select portions of the composite image to store in memory, while
deleting the rest, such as when memory storage is limited and only
portions of the composite image are critical to save. In embodiments, use
of the array camera may provide the ability to alter the focus of an
image after the image has been taken. In addition to the imaging
advantages of an array camera, the array camera may provide a thinner
mechanical profile than a traditional single-lens assembly, thus making
it easier to integrate into the eyepiece.
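
The following is a minimal sketch, in Python with NumPy, of the kind of
foveal compositing described above: a full-resolution window is kept around
a point of interest while the periphery is replaced by a coarsely resampled
copy. The function name, the use of a single source image, and the
resampling scheme are simplifying assumptions; an actual array camera would
merge data from several lenses and viewpoints.

    import numpy as np

    def foveal_composite(full_res, fovea_center, fovea_size=128, down=4):
        """Keep full resolution in a window around fovea_center and replace
        the periphery with a coarse, pixel-replicated copy of the image."""
        h, w = full_res.shape[:2]
        # coarse periphery: subsample, then repeat pixels back to full size
        coarse = full_res[::down, ::down].repeat(down, axis=0).repeat(down, axis=1)
        composite = coarse[:h, :w].copy()
        cy, cx = fovea_center
        half = fovea_size // 2
        y0, y1 = max(0, cy - half), min(h, cy + half)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        composite[y0:y1, x0:x1] = full_res[y0:y1, x0:x1]  # sharp foveal region
        return composite

Storing only the foveal window at full resolution is one way to realize the
selective-storage behavior mentioned above when memory is limited.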

[0280] Variable lenses may include the so-called liquid lenses furnished
by Varioptic, S.A., Lyons, France, or by LensVector, Inc., Mountain View,
Calif., U.S.A. Such lenses may include a central portion with two
immiscible liquids. Typically, in these lenses, the path of light through
the lens, i.e., the focal length of the lens, is altered or focused by
applying an electric potential between electrodes immersed in the
liquids. At least one of the liquids is affected by the resulting
electric or magnetic field potential. Thus, electrowetting may occur, as
described in U.S. Pat. Appl. Publ. 2010/0007807, assigned to LensVector,
Inc. Other techniques are described in LensVector Pat. Appl. Publs.
2009/0213321 and 2009/0316097. All three of these disclosures are
incorporated herein by reference, as though each page and figure were
set forth verbatim herein.

[0281] Other patent documents from Varioptic, S.A., describe other devices
and techniques for a variable focus lens, which may also work through an
electrowetting phenomenon. These documents include U.S. Pat. Nos.
7,245,440 and 7,894,440 and U.S. Pat. Appl. Publs. 2010/0177386 and
2010/0295987, each of which is also incorporated herein by reference, as
though each page and figure were set forth verbatim herein. In these
documents, the two liquids typically have different indices of refraction
and different electrical conductivities, e.g., one liquid is conductive,
such as an aqueous liquid, and the other liquid is insulating, such as an
oily liquid. Applying an electric potential may change the thickness of
the lens and does change the path of light through the lens, thus
changing the focal length of the lens.

[0282] The electrically-adjustable lenses may be controlled by the
controls of the glasses. In one embodiment, a focus adjustment is made by
calling up a menu from the controls and adjusting the focus of the lens.
The lenses may be controlled separately or may be controlled together.
The adjustment is made by physically turning a control knob, by
indicating with a gesture, or by voice command. In another embodiment,
the augmented reality glasses may also include a rangefinder, and focus
of the electrically-adjustable lenses may be controlled automatically by
pointing the rangefinder, such as a laser rangefinder, to a target or
object a desired distance away from the user.

[0283] As shown in U.S. Pat. No. 7,894,440, discussed above, the variable
lenses may also be applied to the outer lenses of the augmented reality
glasses or eyepiece. In one embodiment, the lenses may simply take the
place of a corrective lens. The variable lenses with their
electrically-adjustable control may be used instead of or in addition to
the image source- or projector-mounted lenses. The corrective lens inserts
provide corrective optics for the user's environment, the outside world,
whether the waveguide displays are active or not.

[0284] It is important to stabilize the images presented to the wearer of
the augmented reality glasses or eyepiece(s), that is, the images seen in
the waveguide. The view or images presented travel from one or two
digital cameras or sensors mounted on the eyepiece, to digital circuitry,
where the images are processed and, if desired, stored as digital data
before they appear in the display of the glasses. In any event, and as
discussed above, the digital data is then used to form an image, such as
by using an LCOS display and a series of RGB light emitting diodes. The
light images are processed using a series of lenses, a polarizing beam
splitter, an electrically-powered liquid corrective lens and at least one
transition lens from the projector to the waveguide.

[0285] The process of gathering and presenting images includes several
mechanical and optical linkages between components of the augmented
reality glasses. It seems clear, therefore, that some form of
stabilization will be required. This may include optical stabilization of
the most immediate cause, the camera itself, since it is mounted on a
mobile platform, the glasses, which themselves are movably mounted on a
mobile user. Accordingly, camera stabilization or correction may be
required. In addition, at least some stabilization or correction should
be used for the liquid variable lens. Ideally, a stabilization circuit at
that point could correct not only for the liquid lens, but also for any
aberration and vibration from many parts of the circuit upstream from the
liquid lens, including the image source. One advantage of the present
system is that many commercial off-the-shelf cameras are very advanced
and typically have at least one image-stabilization feature or option.
Thus, there may be many embodiments of the present disclosure, each with
a same or a different method of stabilizing an image or a very fast
stream of images, as discussed below. The term optical stabilization is
typically used herein with the meaning of physically stabilizing the
camera, camera platform, or other physical object, while image
stabilization refers to data manipulation and processing.

[0286] One technique of image stabilization is performed on digital images
as they are formed. This technique may use pixels outside the border of
the visible frame as a buffer for the undesired motion. Alternatively,
the technique may use another relatively steady area or basis in
succeeding frames. This technique is applicable to video cameras,
shifting the electronic image from frame to frame of the video in a
manner sufficient to counteract the motion. This technique does not
depend on sensors and directly stabilizes the images by reducing
vibrations and other distracting motion from the moving camera. In some
techniques, the speed of the images may be slowed in order to add the
stabilization process to the remainder of the digital process, requiring
more time per image. These techniques may use a global motion
vector calculated from frame-to-frame motion differences to determine the
direction of the stabilization.
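
A minimal sketch of estimating such a global motion vector by phase
correlation between successive grayscale frames is shown below. This is one
standard way to compute a frame-to-frame shift and is offered as an
illustration, not as the specific algorithm contemplated by the disclosure.

    import numpy as np

    def global_motion_vector(prev_frame, curr_frame):
        """Estimate the (dx, dy) translation of curr_frame relative to
        prev_frame using phase correlation; both are 2-D float arrays."""
        f_prev = np.fft.fft2(prev_frame)
        f_curr = np.fft.fft2(curr_frame)
        cross = f_curr * np.conj(f_prev)
        cross /= np.abs(cross) + 1e-9            # normalize to pure phase
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # shifts beyond half the frame wrap around and are really negative
        if dy > prev_frame.shape[0] // 2:
            dy -= prev_frame.shape[0]
        if dx > prev_frame.shape[1] // 2:
            dx -= prev_frame.shape[1]
        return dx, dy                            # stabilizer applies (-dx, -dy)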

[0287] Optical stabilization for images uses a gravity- or
electronically-driven mechanism to move or adjust an optical element or
imaging sensor such that it counteracts the ambient vibrations. Another
way to optically stabilize the displayed content is to provide gyroscopic
correction or sensing of the platform housing the augmented reality
glasses, e.g., the user. As noted above, the sensors available and used
on the augmented reality glasses or eyepiece include MEMS gyroscopic
sensors. These sensors capture movement and motion in three dimensions in
very small increments and can be used as feedback to correct the images
sent from the camera in real time. At least a large part of the undesired
movement is likely caused by movement of the user and the camera itself.
These larger movements may include gross movements of the user, e.g.,
walking, running, or riding in a vehicle. Smaller vibrations may also
result within the augmented reality
eyeglasses, that is, vibrations in the components in the electrical and
mechanical linkages that form the path from the camera (input) to the
image in the waveguide (output). These gross movements may be more
important to correct or to account for, rather than, for instance,
independent and small movements in the linkages of components downstream
from the projector. In embodiments, the gyroscopic stabilization may
stabilize the image when it is subject to a periodic motion. For such
periodic motion, the gyroscope may determine the periodicity of the
user's motion and transmit the information to a processor to correct for
the placement of content in the user's view. The gyroscope may utilize a
rolling average of two or three or more cycles of periodic motion in
determining the periodicity. Other sensors may also be used to stabilize
the image or correctly place the image in the user's field of view, such
as an accelerometer, a position sensor, a distance sensor, a rangefinder,
a biological sensor, a geodetic sensor, an optical sensor, a video
sensor, a camera, an infrared sensor, a light sensor, a photocell sensor,
or an RF sensor. When a sensor detects user head or eye movement, the
sensor provides an output to a processor which may determine the
direction, speed, amount, and rate of the user's head or eye movement.
The processor may convert this information into a suitable data structure
for further processing by the processor controlling the optical assembly
(which may be the same processor). The data structure may be one or more
vector quantities. For example, the direction of the vector may define
the orientation of the movement, and the length of the vector may define
the rate of the movement. Using the processed sensor output, the display
of content is adjusted accordingly.
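
A minimal sketch of turning a gyroscope sample into the kind of vector
quantity described above is shown below: the angular change over one frame
is scaled into a pixel offset that counter-shifts the displayed content.
The sample interval, scale factor, and function interface are assumptions
used only for illustration.

    import numpy as np

    def content_offset(omega_dps, dt_s, pixels_per_degree):
        """Convert a gyro sample (deg/s about the vertical and horizontal
        axes) into a pixel shift that opposes the user's head motion."""
        dtheta = np.asarray(omega_dps, dtype=float) * dt_s  # degrees this frame
        shift = -dtheta * pixels_per_degree                 # oppose the motion
        direction = shift / (np.linalg.norm(shift) + 1e-9)  # orientation of movement
        rate = np.linalg.norm(dtheta) / dt_s                # magnitude encodes rate
        return shift, direction, rate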

[0288] Motion sensing may thus be used to sense the motion and correct for
it, as in optical stabilization, or to sense the motion and then correct
the images that are being taken and processed, as in image stabilization.
An apparatus for sensing motion and correcting the images or the data is
depicted in FIG. 34A. In this apparatus, one or more kinds of motion
sensors may be used, including accelerometers, angular position sensors
or gyroscopes, such as MEMS gyroscopes. Data from the sensors is fed back
to the appropriate sensor interfaces, such as analog to digital
converters (ADCs) or other suitable interface, such as digital signal
processors (DSPs). A microprocessor then processes this information, as
discussed above, and sends image-stabilized frames to the display driver
and then to the see-through display or waveguide discussed above. In one
embodiment, the display begins with the RGB display in the microprojector
of the augmented reality eyepiece.

[0289] In another embodiment, a video sensor, augmented reality glasses, or
another device with a video sensor may be mounted on a vehicle. In this
embodiment, the video stream may be communicated through a
telecommunication capability or an Internet capability to personnel in
the vehicle. One application could be sightseeing or touring of an area.
Another embodiment could be exploring or reconnaissance, or even
patrolling, of an area. In these embodiments, gyroscopic stabilization of
the image sensor would be helpful, rather than applying a gyroscopic
correction to the images or digital data representing the images. An
embodiment of this technique is depicted in FIG. 34B. In this technique,
a camera or image sensor 3407 is mounted on a vehicle 3401. One or more
motion sensors 3406, such as gyroscopes, are mounted in the camera
assembly 3405. A stabilizing platform 3403 receives information from the
motion sensors and stabilizes the camera assembly 3405, so that jitter
and wobble are minimized while the camera operates. This is true optical
stabilization. Alternatively, the motion sensors or gyroscopes may be
mounted on or within the stabilizing platform itself. This technique
would actually provide optical stabilization, stabilizing the camera or
image sensor, in contrast to digital stabilization, correcting the image
afterwards by computer processing of the data taken by the camera.

[0290] In one technique, the key to optical stabilization is to apply the
stabilization or correction before an image sensor converts the image
into digital information. In one technique, feedback from sensors, such
as gyroscopes or angular velocity sensors, is encoded and sent to an
actuator that moves the image sensor, much as an autofocus mechanism
adjusts a focus of a lens. The image sensor is moved in such a way as to
maintain the projection of the image onto the image plane, which is a
function of the focal length of the lens being used. Autoranging and
focal length information, perhaps from a range finder of the interactive
head-mounted eyepiece, may be acquired through the lens itself. In
another technique, angular velocity sensors, sometimes also called
gyroscopic sensors, can be used to detect horizontal and vertical
movements. The motion detected may then be fed back to
electromagnets to move a floating lens of the camera. This optical
stabilization technique, however, would have to be applied to each lens
contemplated, making the result rather expensive.
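
A hedged sketch of the feedback computation implied here is given below:
for a small rotation during one sample interval, the image displaces on the
focal plane by roughly the focal length times the tangent of the rotation
angle, so the sensor (or floating lens) must be moved by about that amount
to hold the projection steady. The interface and units are illustrative
assumptions.

    import math

    def sensor_shift_mm(angular_rate_dps, dt_s, focal_length_mm):
        """Lateral shift needed to keep the image stationary on the focal
        plane for a small camera rotation during one sample interval."""
        dtheta = math.radians(angular_rate_dps * dt_s)  # rotation this interval
        return focal_length_mm * math.tan(dtheta)       # ~ f * dtheta for small angles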

[0291] Stabilization of the liquid lens is discussed in U.S. Pat. Appl.
Publ. 2010/0295987, assigned to Varioptic, S.A., Lyon, France. In theory,
control of a liquid lens is relatively simple, since there is only one
variable to control: the level of voltage applied to the electrodes in
the conducting and non-conducting liquids of the lens, using, for
example, the lens housing and the cap as electrodes. Applying a voltage
causes a change or tilt in the liquid-liquid interface via the
electrowetting effect. This change or tilt adjusts the focus or output of
the lens. In its most basic terms, a control scheme with feedback would
then apply a voltage and determine the effect of the applied voltage on
the result, i.e., a focus or an astigmatism of the image. The voltages
may be applied in patterns, for example, equal and opposite + and -
voltages, both positive voltages of differing magnitude, both negative
voltages of differing magnitude, and so forth. Such lenses are known as
electrically variable optic lenses or electro-optic lenses.

[0292] Voltages may be applied to the electrodes in patterns for a short
period of time and a check on the focus or astigmatism made. The check
may be made, for instance, by an image sensor. In addition, sensors on
the camera or in this case the lens, may detect motion of the camera or
lens. Motion sensors would include accelerometers, gyroscopes, angular
velocity sensors or piezoelectric sensors mounted on the liquid lens or a
portion of the optic train very near the liquid lens. In one embodiment,
a table, such as a calibration table, is then constructed of voltages
applied and the degree of correction or voltages needed for given levels
of movement. More sophistication may also be added, for example, by using
segmented electrodes in different portions of the liquid so that four
voltages may be applied rather than two. Of course, if four electrodes
are used, four voltages may be applied, in many more patterns than with
only two electrodes. These patterns may include equal and opposite
positive and negative voltages to opposite segments, and so forth. An
example is depicted in FIG. 34C. Four electrodes 3409 are mounted within
a liquid lens housing (not shown). Two electrodes are mounted in or near
the non-conducting liquid and two are mounted in or near the conducting
liquid. Each electrode is independent in terms of the possible voltage
that may be applied.

[0293] Look-up or calibration tables may be constructed and placed in the
memory of the augmented reality glasses. In use, the accelerometer or
other motion sensor will sense the motion of the glasses, i.e., the
camera on the glasses or the lens itself. A motion sensor such as an
accelerometer will sense, in particular, small vibration-type motions that
interfere with smooth delivery of images to the waveguide. In one
embodiment, the image stabilization techniques described here can be
applied to the electrically-controllable liquid lens so that the image
from the projector is corrected immediately. This will stabilize the
output of the projector, at least partially correcting for the vibration
and movement of the augmented reality eyepiece, as well as at least some
movement by the user. There may also be a manual control for adjusting
the gain or other parameter of the corrections. Note that this technique
may also be used to correct for near-sightedness or far-sightedness of
the individual user, in addition to the focus adjustment already provided
by the image sensor controls and discussed as part of the
adjustable-focus projector.
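
A minimal sketch of how such a calibration table might be consulted at run
time is shown below; the table values, the single-axis treatment, and the
interface are hypothetical and serve only to illustrate mapping a sensed
motion level to a corrective electrode voltage.

    import numpy as np

    # hypothetical calibration data: sensed angular rate (deg/s) versus the
    # corrective voltage offset (V) applied across the liquid-lens electrodes
    rates_dps    = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
    voltage_corr = np.array([0.0, 0.2, 0.4, 0.9, 2.1,  4.0])

    def lens_voltage_correction(sensed_rate_dps):
        """Interpolate the calibration table for a given motion level."""
        return float(np.interp(abs(sensed_rate_dps), rates_dps, voltage_corr))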

[0294] Another variable focus element uses tunable liquid crystal cells to
focus an image. These are disclosed, for example, in U.S. Pat. Appl.
Publ. Nos. 2009/0213321, 2009/0316097 and 2010/0007807, which are hereby
incorporated by reference in their entirety and relied on. In this
method, a liquid crystal material is contained within a transparent cell,
preferably with a matching index of refraction. The cell includes
transparent electrodes, such as those made from indium tin oxide (ITO).
Using one spiral-shaped electrode, and a second spiral-shaped electrode
or a planar electrode, a spatially non-uniform electric field is applied.
Electrodes of other shapes may be used. The shape of the electric field
determines the rotation of molecules in the liquid crystal cell to
achieve a change in refractive index and thus a focus of the lens. The
liquid crystals can thus be electromagnetically manipulated to change
their index of refraction, making the tunable liquid crystal cell act as
a lens.

[0295] In a first embodiment, a tunable liquid crystal cell 3420 is
depicted in FIG. 34D. The cell includes an inner layer of liquid crystal
3421 and thin layers 3423 of orienting material such as polyimide. This
material helps to orient the liquid crystals in a preferred direction.
Transparent electrodes 3425 are on each side of the orienting material.
An electrode may be planar, or may be spiral shaped as shown on the right
in FIG. 34D. Transparent glass substrates 3427 contain the materials
within the cell. The electrodes are formed so that they will lend shape
to the electric field. As noted, a spiral-shaped electrode on one or both
sides, such that the two are not symmetrical, is used in one embodiment.
A second embodiment is depicted in FIG. 34E. Tunable liquid crystal cell
3430 includes central liquid crystal material 3431, transparent glass
substrate walls 3433, and transparent electrodes. Bottom electrode 3435
is planar, while top electrode 3437 is in the shape of a spiral.
Transparent electrodes may be made of indium tin oxide (ITO).

[0296] Additional electrodes may be used for quick reversion of the liquid
crystal to a non-shaped or natural state. A small control voltage is thus
used to dynamically change the refractive index of the material the light
passes through. The voltage generates a spatially non-uniform electric
field of a desired shape, allowing the liquid crystal to function as a
lens.

[0297] In one embodiment, the camera includes the black silicon, short
wave infrared (SWIR) CMOS sensor described elsewhere in this patent. In
another embodiment, the camera is a 5 megapixel (MP) optically-stabilized
video sensor. In one embodiment, the controls include a 3 GHz
microprocessor or microcontroller, and may also include a 633 MHz digital
signal processor with a 30 M polygon/second graphic accelerator for
real-time image processing for images from the camera or video sensor. In
one embodiment, the augmented reality glasses may include a wireless
internet, radio, or telecommunications capability for wideband
communications, a personal area network (PAN), a local area network (LAN),
a wireless local area network (WLAN) conforming to IEEE 802.11, or
reach-back communications. The equipment furnished in one embodiment
includes a Bluetooth capability conforming to IEEE 802.15. In one
embodiment, the augmented reality glasses include an encryption system,
such as a 256-bit Advanced Encryption Standard (AES) encryption system or
other suitable encryption program, for secure communications.

[0298] In one embodiment, the wireless telecommunications may include a
capability for a 3G or 4G network and may also include a wireless
internet capability. In order to provide extended life, the augmented reality
eyepiece or glasses may also include at least one lithium-ion battery,
and as discussed above, a recharging capability. The recharging plug may
comprise an AC/DC power converter and may be capable of using multiple
input voltages, such as 120 or 240 VAC. The controls for adjusting the
focus of the adjustable focus lenses in one embodiment comprise a 2D or
3D wireless air mouse or other non-contact control responsive to gestures
or movements of the user. A 2D mouse is available from Logitech, Fremont,
Calif., USA. A 3D mouse is described herein; alternatively, others such as
the Cideko AVK05, available from Cideko, Taiwan, R.O.C., may be used.

[0299] In an embodiment, the eyepiece may comprise electronics suitable
for controlling the optics, and associated systems, including a central
processing unit, non-volatile memory, digital signal processors, 3-D
graphics accelerators, and the like. The eyepiece may provide additional
electronic elements or features, including inertial navigation systems,
cameras, microphones, audio output, power, communication systems,
sensors, stopwatch or chronometer functions, thermometer, vibratory
temple motors, motion sensor, a microphone to enable audio control of the
system, a UV sensor to enable contrast and dimming with photochromic
materials, and the like.

[0300] In an embodiment, the central processing unit (CPU) of the eyepiece
may be an OMAP 4, with dual 1 GHz processor cores. The CPU may include a
633 MHz DSP, giving the CPU a capability of 30 million polygons/second.

[0301] The system may also provide dual micro-SD (secure digital) slots
for provisioning of additional removable non-volatile memory.

[0302] An on-board camera may provide 1.3 MP color and record up to 60
minutes of video footage. The recorded video may be transferred
wirelessly or using a mini-USB transfer device to off-load footage.

[0303] The communications system-on-a-chip (SOC) may be capable of
operating with wide local area networks (WLAN), Bluetooth version 3.0, a
GPS receiver, an FM radio, and the like.

[0304] The eyepiece may operate on a 3.6 VDC lithium-ion rechargeable
battery for long battery life and ease of use. An additional power source
may be provided through solar cells on the exterior of the frame of the
system. These solar cells may supply power and may also be capable of
recharging the lithium-ion battery.

[0305] The total power consumption of the eyepiece may be approximately
400 mW, but is variable depending on features and applications used. For
example, processor-intensive applications with significant video graphics
demand more power, and will be closer to 400 mW. Simpler, less
video-intensive applications will use less power. The operation time on a
charge also may vary with application and feature usage.
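
As a rough worked example of how the operation time scales with load, the
average current draw at 400 mW from the 3.6 V cell is about 111 mA; with an
assumed 1000 mAh cell capacity (a hypothetical figure, not one given in
this disclosure) that corresponds to roughly nine hours on a charge.

    power_w, voltage_v, capacity_mah = 0.400, 3.6, 1000.0  # capacity is assumed
    current_a = power_w / voltage_v                         # ~0.111 A average draw
    runtime_h = (capacity_mah / 1000.0) / current_a         # ~9 hours on a charge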

[0306] The micro-projector illumination engine, also known herein as the
projector, may include multiple light emitting diodes (LEDs). In order to
provide life-like color, Osram red, Cree green, and Cree blue LEDs are
used. These are die-based LEDs. The RGB engine may provide an adjustable
color output, allowing a user to optimize viewing for various programs
and applications.

[0307] In embodiments, illumination may be added to the glasses or
controlled through various means. For example, LED lights or other lights
may be embedded in the frame of the eyepiece, such as in the nose bridge,
around the composite lens, or at the temples.

[0308] The intensity of the illumination and/or the color of illumination
may be modulated. Modulation may be accomplished through the various
control technologies described herein, through various applications, or
through filtering and magnification.

[0309] By way of example, illumination may be modulated through various
control technologies described herein such as through the adjustment of a
control knob, a gesture, eye movement, or voice command. If a user
desires to increase the intensity of illumination, the user may adjust a
control knob on the glasses, adjust a control knob in the user interface
displayed on the lens, or use other means. The user may use eye movements
to control the knob displayed on the lens, or may control the knob by
other means. The user may adjust illumination through a movement
of the hand or other body movement such that the intensity or color of
illumination changes based on the movement made by the user. Also, the
user may adjust the illumination through a voice command such as by
speaking a phrase requesting increased or decreased illumination or
requesting other colors to be displayed. Additionally, illumination
modulation may be achieved through any control technology described
herein or by other means.
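
The following is a small, hypothetical sketch of mapping a normalized
control input (from a knob, gesture, eye movement, or voice command) onto
per-channel LED drive values; the interface is an assumption used only to
illustrate intensity and color modulation.

    def modulate_illumination(level, color=(255, 255, 255)):
        """Scale the requested color by a normalized intensity level (0.0-1.0)."""
        level = max(0.0, min(1.0, level))                   # clamp the request
        return tuple(int(round(c * level)) for c in color)

    # e.g. a voice command "increase illumination" might raise `level`,
    # while a gesture could select a different base color.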

[0310] Further, the illumination may be modulated per the particular
application being executed. As an example, an application may
automatically adjust the intensity of illumination or color of
illumination based on the optimal settings for that application. If the
current levels of illumination are not at the optimal levels for the
application being executed, a message or command may be sent to provide
for illumination adjustment.

[0311] In embodiments, illumination modulation may be accomplished through
filtering and or through magnification. For example, filtering techniques
may be employed that allow the intensity and or color of the light to be
changed such that the optimal or desired illumination is achieved. Also,
in embodiments, the intensity of the illumination may be modulated by
applying greater or less magnification to reach the desired illumination
intensity.

[0312] The projector may be connected to the display to output the video
and other display elements to the user. The display used may be an SVGA
800×600 pixel SYNDIANT liquid crystal on silicon (LCoS) display.

[0313] The target MPE dimensions for the system may be 24 mm×12
mm×6 mm.

[0314] The focus may be adjustable, allowing a user to refine the
projector output to suit their needs.

[0315] The optics system may be contained within a housing fabricated from
6061-T6 aluminum and glass-filled ABS/PC.

[0316] The weight of the system, in an embodiment, is estimated to be 3.75
ounces, or 95 grams.

[0317] In an embodiment, the eyepiece and associated electronics provide
night vision capability. This night vision capability may be enabled by a
black silicon SWIR sensor. Black silicon is a complementary
metal-oxide-semiconductor (CMOS) processing technique that enhances the
photo response of
silicon over 100 times. The spectral range is expanded deep into the
short wave infra-red (SWIR) wavelength range. In this technique, a 300 nm
deep absorbing and anti-reflective layer is added to the glasses. This
layer offers improved responsivity as shown in FIG. 11, where the
responsivity of black silicon is much greater than silicon's over the
visible and NIR ranges and extends well into the SWIR range. This
technology is an improvement over current technology, which suffers from
extremely high cost, performance issues, as well as high volume
manufacturability problems. Incorporating this technology into night
vision optics brings the economic advantages of CMOS technology into the
design.

[0318] Unlike current night-vision goggles (NVGs), which amplify starlight
or other ambient light from the visible light spectrum, SWIR sensors pick
up individual photons and convert light in the SWIR spectrum to
electrical signals, similar to digital photography. The photons can be
produced from the natural recombination of oxygen and hydrogen atoms in
the atmosphere at night, also referred to as "Night Glow." Shortwave
infrared devices see objects at night by detecting the invisible,
shortwave infrared radiation within reflected star light, city lights or
the moon. They also work in daylight, or through fog, haze or smoke,
whereas the current NVG image intensifier infrared sensors would be
overwhelmed by heat or brightness. Because shortwave infrared devices
pick up invisible radiation on the edge of the visible spectrum, the SWIR
images look like the images produced by visible light with the same
shadows and contrast and facial details, only in black and white,
dramatically enhancing recognition so people look like people; they don't
look like the blobs often seen with thermal imagers. One of the important
SWIR capabilities is providing views of targeting lasers on the
battlefield. Targeting lasers (1.064 um) are not visible with current
night-vision goggles. With SWIR electro-optics, soldiers will be able to
view every targeting laser in use, including those used by the enemy.
Unlike thermal imagers, which do not penetrate windows on vehicles or
buildings, the Visible/Near Infrared/Short Wave Infrared Sensor can see
through them--day or night, giving users an important tactical advantage.

[0319] Certain advantages include using active illumination only when
needed. In some instances there may be sufficient natural illumination at
night, such as during a full moon. When such is the case, artificial
night vision using active illumination may not be necessary. With black
silicon CMOS-based SWIR sensors, active illumination may not be needed
during these conditions, and is not provided, thus improving battery
life.

[0320] In addition, a black silicon image sensor may have over eight times
the signal to noise ratio found in costly indium-gallium arsenide image
sensors under night sky conditions. Better resolution is also provided by
this technology, offering much higher resolution than available using
current technology for night vision. Typically, long wavelength images
produced by CMOS-based SWIR have been difficult to interpret, having good
heat detection, but poor resolution. This problem is solved with a black
silicon SWIR image sensor, which relies on much shorter wavelengths. SWIR
is highly desirable for battlefield night vision glasses for these
reasons. FIG. 12 illustrates the effectiveness of black silicon night
vision technology, providing both before and after images of seeing
through a) dust; b) fog, and c) smoke. The images in FIG. 12 demonstrate
the performance of the new VIS/NIR/SWIR black silicon sensor. In
embodiments, the image sensor may be able to distinguish between changes
in the natural environment, such as disturbed vegetation, disturbed
ground, and the like. For example, an enemy combatant may have recently
placed an explosive device in the ground, and so the ground over the
explosive will be `disturbed ground`, and the image sensor (along with
processing facilities internal or external to the eyepiece) may be able
to distinguish the recently disturbed ground from the surrounding ground.
In this way, a soldier may be able to detect the possible placement of an
underground explosive device (e.g. an improvised explosive device (IED))
from a distance.

[0321] Previous night vision systems suffered from "blooms" from bright
light sources, such as streetlights. These "blooms" were particularly
strong in image intensifying technology and are also associated with a
loss of resolution. In some cases, cooling systems are necessary in image
intensifying technology systems, increasing weight and shortening battery
power lifespan. FIG. 17 shows the difference in image quality between A)
a flexible platform of uncooled CMOS image sensors capable of
VIS/NIR/SWIR imaging and B) an image intensified night vision system.

[0322] FIG. 13 depicts the difference in structure between current or
incumbent vision enhancement technology 1300 and uncooled CMOS image
sensors 1307. The incumbent platform (FIG. 13A) limits deployment because
of cost, weight, power consumption, spectral range, and reliability
issues. Incumbent systems typically comprise a front lens 1301,
photocathode 1302, microchannel plate 1303, high voltage power supply
1304, phosphor screen 1305, and eyepiece 1306. This is in contrast to
a flexible platform (FIG. 13B) of uncooled CMOS image sensors 1307
capable of VIS/NIR/SWIR imaging at a fraction of the cost, power
consumption, and weight. These much simpler sensors include a front lens
1308 and an image sensor 1309 with a digital image output.

[0323] These advantages derive from the CMOS compatible processing
technique that enhances the photo response of silicon over 100 times and
extends the spectral range deep into the short wave infrared region. The
difference in responsivity is illustrated in FIG. 13C. While typical
night vision goggles are limited to the UV, visible, and near infrared
(NIR) ranges, to about 1100 nm (1.1 micrometers), the newer CMOS image
sensor range also includes the short wave infrared (SWIR) spectrum, out
to as much as 2000 nm (2 micrometers).

[0324] The black silicon core technology may offer significant improvement
over current night vision glasses. Femtosecond laser doping may enhance
the light detection properties of silicon across a broad spectrum.
Additionally, optical response may be improved by a factor of 100 to
10,000. The black silicon technology is a fast, scalable, and CMOS
compatible technology at a very low cost, compared to current night
vision systems. Black silicon technology may also provide a low operation
bias, with 3.3 V typical. In addition, uncooled performance may be
possible up to 50° C. Cooling requirements of current technology
increase both weight and power consumption, and also create discomfort in
users. As noted above, the black silicon core technology offers a
high-resolution replacement for current image intensifier technology.
Black silicon core technology may provide high speed electronic
shuttering at speeds up to 1000 frames/second with minimal cross talk. In
certain embodiments of the night vision eyepiece, an OLED display may be
preferred over other optical displays, such as the LCoS display.

[0326] In some embodiments, the VIS/NIR/SWIR black silicon sensor may be
incorporated into a form factor suitable for night vision only, such as a
night vision goggle or a night vision helmet. The night vision goggle may
include features that make it suitable for the military market, such as
ruggedization and alternative power supplies, while other form factors
may be suitable for the consumer or toy market. In one example, the night
vision goggles may have an extended range, such as 500-1200 nm, and may
also be usable as a camera.

[0327] In some embodiments, the VIS/NIR/SWIR black silicon sensor as well
as other outboard sensors may be incorporated into a mounted camera that
may be mounted on transport or combat vehicles so that the real-time feed
can be sent to the driver or other occupants of the vehicle by
superimposing the video on the forward view without obstructing it. The
driver can better see where he or she is going, the gunner can better see
threats or targets of opportunity, and the navigator can better maintain
situational awareness (SAAS) while also looking for threats. The feed
could also be sent to off-site locations as desired, such as higher
headquarters or memory/storage locations for later use in targeting,
navigation, surveillance, data mining, and the like.

[0328] Further advantages of the eyepiece may include robust connectivity.
This connectivity enables download and transmission using Bluetooth,
Wi-Fi/Internet, cellular, satellite, 3G, FM/AM, TV, and UWB transceiver
for sending/receiving vast amounts of data quickly. For example, the UWB
transceiver may be used to create a very high data rate,
low-probability-of-intercept/low-probability-of-detection (LPI/LPD),
Wireless Personal Area Network (WPAN) to connect weapons sights,
weapons-mounted mouse/controller, E/O sensors, medical sensors,
audio/video displays, and the like. In other embodiments, the WPAN may be
created using other communications protocols. For example, a WPAN
transceiver may be a COTS-compliant module front end to make the power
management of a combat radio highly responsive and to avoid jeopardizing
the robustness of the radio. By integrating the ultra wideband (UWB)
transceiver, baseband/MAC and encryption chips onto a module, a
physically small dynamic and configurable transceiver to address multiple
operational needs is obtained. The WPAN transceivers create a low power,
encrypted, wireless personal area network (WPAN) between soldier worn
devices. The WPAN transceivers can be attached or embedded into nearly
any fielded military device with a network interface (handheld computers,
combat displays, etc.). The system is capable of supporting many users and
AES encryption, is robust against jamming and RF interference, and is well
suited for combat, providing low probabilities of interception and
detection (LPI/LPD). The WPAN transceivers eliminate the bulk, weight, and
"snagability" of data cables on the soldier. Interfaces include USB 1.1,
USB 2.0 OTG, Ethernet 10/100Base-T, and RS232 9-pin D-Sub. The power
output may be -10 or -20 dBm for a variable range of up to 2 meters. The
data capacity may be 768 Mbps or greater. The bandwidth may be 1.7 GHz.
Encryption may be 128-bit, 192-bit, or 256-bit AES. The WPAN transceiver
may include Optimized Message Authentication Code (MAC) generation. The
WPAN transceiver may comply with MIL-STD-461F. The WPAN
transceiver may be in the form of a connector dust cap and may attach to
any fielded military device. The WPAN transceiver allows simultaneous
video, voice, stills, text and chat, eliminates the need for data cables
between electronic devices, allows hands-free control of multiple devices
without distraction, features an adjustable connectivity range,
interfaces with Ethernet and USB 2.0, features an adjustable frequency
range of 3.1 to 10.6 GHz, and has a 200 mW peak draw and nominal standby
power.

[0329] For example, the WPAN transceiver may enable creating a WPAN
between the eyepiece 100 in the form of a GSE stereo heads-up combat
display glasses, a computer, a remote computer controller, and biometric
enrollment devices like that seen in FIG. 58. In another example, the
WPAN transceiver may enable creating a WPAN between the eyepiece in the
form of flip-up/-down heads-up display combat glasses, the HUD CPU (if it
is external), a weapon fore-grip controller, and a forearm computer
similar to that seen in FIG. 58.

[0330] The eyepiece may provide its own cellular connectivity, such as
through a personal wireless connection with a cellular system. The
personal wireless connection may be available for only the wearer of the
eyepiece, or it may be available to a plurality of proximate users, such
as in a Wi-Fi hot spot (e.g. MiFi), where the eyepiece provides a local
hotspot for others to utilize. These proximate users may be other wearers
of an eyepiece, or users of some other wireless computing device, such as
a mobile communications facility (e.g. mobile phone). Through this
personal wireless connection, the wearer may not need other cellular or
Internet wireless connections to connect to wireless services. For
instance, without a personal wireless connection integrated into the
eyepiece, the wearer may have to find a WiFi connection point or tether
to their mobile communications facility in order to establish a wireless
connection. In embodiments, the eyepiece may be able to replace the need
for having a separate mobile communications device, such as a mobile
phone, mobile computer, and the like, by integrating these functions and
user interfaces into the eyepiece. For instance, the eyepiece may have an
integrated WiFi connection or hotspot, a real or virtual keyboard
interface, a USB hub, speakers (e.g. to stream music to) or speaker input
connections, integrated camera, external camera, and the like. In
embodiments, an external device, in connectivity with the eyepiece, may
provide a single unit with a personal network connection (e.g. WiFi,
cellular connection), keyboard, control pad (e.g. a touch pad), and the
like.

[0331] Communications from the eyepiece may include communication links
for special purposes. For instance, an ultra-wide bandwidth
communications link may be utilized when sending and/or receiving large
volumes of data in a short amount of time. In another instance, a
near-field communications (NFC) link may be used with very limited
transmission range in order to post information for transmission to
personnel only when they are very near, such as for tactical reasons, for
local directions, for warnings, and the like. For example, a soldier may
be able to post/hold information securely and transmit it only to people
very nearby who have a need to know or need to use the information. In another
instance, a wireless personal area network (PAN) may be utilized, such as
to connect weapons sights, weapons-mounted mouse/controller,
electro-optic sensors, medical sensors, audio-visual displays, and the
like.

[0332] The eyepiece may include MEMS-based inertial navigation systems,
such as a GPS processor, an accelerometer (e.g. for enabling head control
of the system and other functions), a gyroscope, an altimeter, an
inclinometer, a speedometer/odometer, a laser rangefinder, and a
magnetometer, which also enables image stabilization.

[0333] The eyepiece may include integrated headphones, such as the
articulating earbud 120, that provide audio output to the user or wearer.

[0334] In an embodiment, a forward facing camera (see FIG. 21) integrated
with the eyepiece may enable basic augmented reality. In augmented
reality, a viewer can image what is being viewed and then layer an
augmented, edited, tagged, or analyzed version on top of the basic view.
In the alternative, associated data may be displayed with or over the
basic image. If two cameras are provided and are mounted at the correct
interpupillary distance for the user, stereo video imagery may be
created. This capability may be useful for persons requiring vision
assistance. Many people suffer from deficiencies in their vision, such as
near-sightedness, far-sightedness, and so forth. A camera and a very
close, virtual screen as described herein provide a "video" for such
persons, the video adjustable in terms of focal point, nearer or farther,
and fully under the control of the person via voice or other command. This
capability may also be useful for persons suffering diseases of the eye,
such as cataracts, retinitis pigmentosa, and the like. So long as some
organic vision capability remains, an augmented reality eyepiece can help
a person see more clearly. Embodiments of the eyepiece may feature one or
more of magnification, increased brightness, and ability to map content
to the areas of the eye that are still healthy. Embodiments of the
eyepiece may be used as bifocals or a magnifying glass. The wearer may be
able to increase zoom in the field of view or increase zoom within a
partial field of view. In an embodiment, an associated camera may make an
image of the object and then present the user with a zoomed picture. A
user interface may allow a wearer to point at the area that he wants
zoomed, such as with the control techniques described herein, so the
image processing can stay on task as opposed to just zooming in on
everything in the camera's field of view.
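
A minimal sketch of the partial-field-of-view zoom described above is given
below: the camera frame is cropped around the pointed-at location and
rescaled, so processing stays on the region of interest rather than the
whole field of view. The nearest-neighbour resampling and the interface are
simplifying assumptions.

    import numpy as np

    def zoom_region(image, cx, cy, zoom=2.0, out_size=(480, 640)):
        """Crop a window around (cx, cy) and upscale it to out_size."""
        h, w = image.shape[:2]
        half_h = int(out_size[0] / (2 * zoom))
        half_w = int(out_size[1] / (2 * zoom))
        y0, y1 = max(0, cy - half_h), min(h, cy + half_h)
        x0, x1 = max(0, cx - half_w), min(w, cx + half_w)
        crop = image[y0:y1, x0:x1]
        # nearest-neighbour upscale of the cropped region
        rows = np.linspace(0, crop.shape[0] - 1, out_size[0]).astype(int)
        cols = np.linspace(0, crop.shape[1] - 1, out_size[1]).astype(int)
        return crop[np.ix_(rows, cols)]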

[0335] A rear-facing camera (not shown) may also be incorporated into the
eyepiece in a further embodiment. In this embodiment, the rear-facing
camera may enable eye control of the eyepiece, with the user making
application or feature selection by directing his or her eyes to a
specific item displayed on the eyepiece.

[0336] A further embodiment of a device for capturing biometric data about
individuals may incorporate a microcassegrain telescoping folded optic
camera into the device. The microcassegrain telescoping folded optic
camera may be mounted on a handheld device, such as the bio-print device,
the bio-phone, and could also be mounted on glasses used as part of a
bio-kit to collect biometric data.

[0337] A cassegrain reflector is a combination of a primary concave mirror
and a secondary convex mirror. These reflectors are often used in optical
telescopes and radio antennas because they deliver good light (or radio
wave) collecting capability in a shorter, smaller package.

[0338] In a symmetrical cassegrain both mirrors are aligned about the
optical axis, and the primary mirror usually has a hole in the center,
allowing light to reach the eyepiece or a camera chip or light detection
device, such as a CCD chip. An alternate design, often used in radio
telescopes, places the final focus in front of the primary reflector. A
further alternate design may tilt the mirrors to avoid obstructing the
primary or secondary mirror and may eliminate the need for a hole in the
primary mirror or secondary mirror. The microcassegrain telescoping
folded optic camera may use any of the above variations, with the final
selection determined by the desired size of the optic device.

[0339] The classic cassegrain configuration 3500 uses a parabolic
reflector as the primary mirror and a hyperbolic mirror as the secondary
mirror. Further embodiments of the microcassegrain telescoping folded
optic camera may use a hyperbolic primary mirror and/or a spherical or
elliptical secondary mirror. In operation the classic cassegrain with a
parabolic primary mirror and a hyperbolic secondary mirror reflects the
light back down through a hole in the primary, as shown in FIG. 35.
Folding the optical path makes the design more compact and, in a "micro"
size, suitable for use with the bio-print sensor and bio-print kit
described herein. In a folded optic system, the beam is bent to make the
optical path much longer than the physical length of the system. One
common example of folded optics is prismatic binoculars. In a camera lens
the secondary mirror may be mounted on an optically flat, optically clear
glass plate that closes the lens tube. This support eliminates
"star-shaped" diffraction effects that are caused by a straight-vaned
support spider. This allows for a sealed closed tube and protects the
primary mirror, albeit at some loss of light collecting power.

[0340] The cassegrain design also makes use of the special properties of
parabolic and hyperbolic reflectors. A concave parabolic reflector will
reflect all incoming light rays parallel to its axis of symmetry to a
single focus point. A convex hyperbolic reflector has two foci and
reflects all light rays directed at one focus point toward the other
focus point. Mirrors in this type of lens are designed and positioned to
share one focus, placing the second focus of the hyperbolic mirror at the
same point as where the image is observed, usually just outside the
eyepiece. The parabolic mirror reflects parallel light rays entering the
lens to its focus, which is coincident with the focus of the hyperbolic
mirror. The hyperbolic mirror then reflects those light rays to the other
focus point, where the camera records the image.
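
The compactness of the folded arrangement can be made concrete with the
standard Gaussian-optics formula for a two-mirror combination. The short
sketch below uses illustrative values only, not design data from this
disclosure: the effective focal length is much longer than the mirror
separation.

    def cassegrain_focal_length(f_primary, f_secondary, separation):
        # Gaussian two-mirror combination: concave primary f_primary > 0,
        # convex secondary f_secondary < 0, separation between mirrors.
        return (f_primary * f_secondary) / (
            f_primary + f_secondary - separation)

    # Example: a 100 mm primary and a -50 mm secondary spaced 75 mm apart
    # behave like a 200 mm lens folded into a roughly 75 mm package.
    print(cassegrain_focal_length(100.0, -50.0, 75.0))   # 200.0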

[0341] FIG. 36 shows the configuration of the microcassegrain telescoping
folded optic camera. The camera may be mounted on augmented reality
glasses, a bio-phone, or other biometric collection device. The assembly
3600 has multiple telescoping segments that allow the camera to extend
with cassegrain optics providing for a longer optical path. Threads 3602
allow the camera to be mounted on a device, such as augmented reality
glasses or other biometric collection device. While the embodiment
depicted in FIG. 36 uses threads, other mounting schemes such as bayonet
mount, knobs, or press-fit, may also be used. A first telescoping section
3604 also acts as an external housing when the lens is in the fully
retracted position. The camera may also incorporate a motor to drive the
extension and retraction of the camera. A second telescoping section 3606
may also be included. Other embodiments may incorporate varying numbers
of telescoping sections, depending on the length of optical path needed
for the selected task or data to be collected. A third telescoping
section 3608 includes the lens and a reflecting mirror. The reflecting
mirror may be a primary reflector if the camera is designed following
classic cassegrain design. The secondary mirror may be contained in first
telescoping section 3604.

[0342] Further embodiments may utilize microscopic mirrors to form the
camera, while still providing for a longer optical path through the use
of folded optics. The same principles of cassegrain design are used.

[0343] Lens 3610 provides optics for use in conjunction with the folded
optics of the cassegrain design. The lens 3610 may be selected from a
variety of types, and may vary depending on the application. The threads
3602 permit a variety of cameras to be interchanged depending on the
needs of the user.

[0344] Eye control of feature and option selection may be controlled and
activated by object recognition software loaded on the system processor.
Object recognition software may enable augmented reality, combine the
recognition output with querying a database, combine the recognition
output with a computational tool to determine dependencies/likelihoods,
and the like.

[0345] Three-dimensional viewing is also possible in an additional
embodiment that incorporates a 3D projector. Two stacked picoprojectors
(not shown) may be used to create the three dimensional image output.

[0346] Referring to FIG. 10, a plurality of digital CMOS sensors with
redundant microcontrollers and DSPs for each sensor array and projector
detect visible, near infrared, and short wave infrared light to enable passive
day and night operations, such as real-time image enhancement 1002,
real-time keystone correction 1004, and real-time virtual perspective
correction 1008. The eyepiece may utilize digital CMOS image sensors and
directional microphones (e.g. microphone arrays) as described herein,
such as for visible imaging for monitoring the visible scene (e.g. for
biometric recognition, gesture control, coordinated imaging with 2D/3D
projected maps), IR/UV imaging for scene enhancement (e.g. seeing through
haze, smoke, in the dark), sound direction sensing (e.g. the direction of
a gunshot or explosion, voice detection), and the like. In embodiments,
each of these sensor inputs may be fed to a digital signal processor
(DSP) for processing, such as internal to the eyepiece or as interfaced
to external processing facilities. The outputs of the DSP processing of
each sensor input stream may then be algorithmically combined in a manner
to generate useful intelligence data. For instance, this system may be
useful for a combination of real-time facial recognition, real time voice
detection, and analysis through links to a database, especially with
distortion corrections and contemporaneous GPS location for soldiers,
service personnel, and the like, such as in monitoring remote areas of
interest, e.g., known paths or trails, or high-security areas.
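
One way such an algorithmic combination could look in practice is
sketched below. This is a hypothetical illustration; the field names,
identifiers, and confidence rule are assumptions, not part of the
disclosure: each DSP chain reports its own output, and the records are
merged into a single time- and GPS-stamped event whose confidence rises
when independent channels agree.

    from dataclasses import dataclass
    from typing import Optional, Tuple
    import time

    @dataclass
    class FusedEvent:
        timestamp: float
        gps: Tuple[float, float]          # (lat, lon) of the wearer
        face_id: Optional[str]            # visible-band DSP output
        voice_id: Optional[str]           # audio DSP output
        sound_bearing: Optional[float]    # degrees, from microphone array

    def fuse(face_id, voice_id, sound_bearing, gps):
        # Merge per-sensor DSP outputs into one record; treat the event
        # as high confidence only when independent channels identify the
        # same individual.
        event = FusedEvent(time.time(), gps, face_id, voice_id,
                           sound_bearing)
        high_confidence = face_id is not None and face_id == voice_id
        return event, high_confidence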

[0347] The augmented reality eyepiece or glasses may be powered by any
stored energy system, such as battery power, solar power, line power, and
the like. A solar energy collector may be placed on the frame, on a belt
clip, and the like. Battery charging may occur using a wall charger, car
charger, on a belt clip, in a glasses case, and the like. In one
embodiment, the eyepiece may be rechargeable and be equipped with a
mini-USB connector for recharging. In another embodiment, the eyepiece
may be equipped for remote inductive recharging by one or more remote
inductive power conversion technologies, such as those provided by
Powercast, Ligonier, Pa., USA; and Fulton Int'l Inc., Ada, MI, USA, which
also owns another provider, Splashpower, Inc., Cambridge, UK.

[0348] The augmented reality eyepiece also includes a camera and any
interface necessary to connect the camera to the circuit. The output of
the camera may be stored in memory and may also be displayed on the
display available to the wearer of the glasses. A display driver may also
be used to control the display. The augmented reality device also
includes a power supply, such as a battery, as shown, power management
circuits and a circuit for recharging the power supply. As noted
elsewhere, recharging may take place via a hard connection, e.g., a
mini-USB connector, or by means of an inductor, a solar panel input, and
so forth.

[0349] The control system for the eyepiece or glasses may include a
control algorithm for conserving power when the power source, such as a
battery, indicates low power. This conservation algorithm may include
shutting power down to applications that are energy intensive, such as
lighting, a camera, or sensors that require high levels of energy, such
as any sensor requiring a heater, for example. Other conservation steps
may include reducing the power used by a sensor or a camera, e.g., going
to a slower sampling or frame rate when the power is low, or shutting
down the sensor or camera entirely at an even lower power level. Thus,
there may be at least three operating modes
depending on the available power: a normal mode; a conserve power mode;
and an emergency or shutdown mode.
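
A minimal sketch of such a three-mode conservation algorithm follows. The
thresholds, rates, and the camera/sensor/lighting driver objects are
hypothetical illustrations, not values from the disclosure.

    NORMAL, CONSERVE, EMERGENCY = "normal", "conserve", "emergency"

    def select_mode(battery_fraction):
        # Illustrative thresholds for the three operating modes.
        if battery_fraction > 0.30:
            return NORMAL
        if battery_fraction > 0.10:
            return CONSERVE
        return EMERGENCY

    def apply_mode(mode, camera, sensors, lighting):
        # camera, sensors, and lighting stand in for device drivers.
        if mode == NORMAL:
            camera.set_frame_rate(30)
        elif mode == CONSERVE:
            lighting.off()                    # shed energy-intensive loads
            camera.set_frame_rate(5)          # slow the frame rate
            for s in sensors:
                s.set_sampling_rate_hz(1)     # slow the sampling rate
        else:
            camera.power_down()               # emergency/shutdown mode
            for s in sensors:
                if s.requires_heater:         # heated sensors drain most
                    s.power_down()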

[0350] Applications of the present disclosure may be controlled through
movements and direct actions of the wearer, such as movement of his or
her hand, finger, feet, head, eyes, and the like, enabled through
facilities of the eyepiece (e.g. accelerometers, gyros, cameras, optical
sensors, GPS sensors, and the like) and/or through facilities worn or
mounted on the wearer (e.g. body mounted sensor control facilities). In
this way, the wearer may directly control the eyepiece through movements
and/or actions of their body without the use of a traditional hand-held
remote controller. For instance, the wearer may have a sense device, such
as a position sense device, mounted on one or both hands, such as on at
least one finger, on the palm, on the back of the hand, and the like,
where the position sense device provides position data of the hand, and
provides wireless communications of position data as command information
to the eyepiece. In embodiments, the sense device of the present
disclosure may include a gyroscopic device (e.g. electronic gyroscope,
MEMS gyroscope, mechanical gyroscope, quantum gyroscope, ring laser
gyroscope, fiber optic gyroscope), accelerometers, MEMS accelerometers,
velocity sensors, force sensors, pressure sensors, optical sensors,
proximity sensors, RFID, and the like, for providing position
information. For example, a wearer may have a position sense device
mounted on their right index finger, where the device is able to sense
motion of the finger. In this example, the user may activate the eyepiece
either through some switching mechanism on the eyepiece or through some
predetermined motion sequence of the finger, such as moving the finger
quickly, tapping the finger against a hard surface, and the like. Note
that tapping against a hard surface may be interpreted through sensing by
accelerometers, force sensors, pressure sensors, and the like. The
position sense device may then transmit motions of the finger as command
information, such as moving the finger in the air to move a cursor across
the displayed or projected image, moving in quick motion to indicate a
selection, and the like. In embodiments, the position sense device may
send sensed command information directly to the eyepiece for command
processing, or the command processing circuitry may be co-located with
the position sense device, such as in this example, mounted on the finger
as part of an assembly including the sensors of the position sense
device.
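
By way of illustration, the output of a finger-worn accelerometer could
be screened for the activation taps described above roughly as follows;
the thresholds and function names are assumed for the sketch.

    import math

    TAP_THRESHOLD_G = 2.5    # illustrative spike threshold, in g
    ACTIVATION_TAPS = 2      # e.g. a double tap against a hard surface

    def count_taps(samples, threshold_g=TAP_THRESHOLD_G):
        # samples: iterable of (ax, ay, az) accelerometer readings in g.
        taps, armed = 0, True
        for ax, ay, az in samples:
            magnitude = math.sqrt(ax * ax + ay * ay + az * az)
            if armed and magnitude > threshold_g:
                taps += 1
                armed = False        # wait for the spike to subside
            elif magnitude < 1.2:    # near 1 g again: finger is quiet
                armed = True
        return taps

    def should_activate(samples):
        return count_taps(samples) >= ACTIVATION_TAPS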

[0351] In embodiments, the wearer may have a plurality of position sense
devices mounted on their body. For instance, and in continuation of the
preceding example, the wearer may have position sense devices mounted on
a plurality of points on the hand, such as with individual sensors on
different fingers, or as a collection of devices, such as in a glove. In
this way, the aggregate sense command information from the collection of
sensors at different locations on the hand may be used to provide more
complex command information. For instance, the wearer may use a sensor
device glove to play a game, where the glove senses the grasp and motion
of the user's hands on a ball, bat, racket, and the like, in the use of
the present disclosure in the simulation and play of a game. In
embodiments, the plurality of position sense devices may be mounted on
different parts of the body, allowing the wearer to transmit complex
motions of the body to the eyepiece for use by an application.

[0352] In embodiments, the sense device may have a force sensor, pressure
sensor, and the like, such as for detecting when the sense device comes
in contact with an object. For instance, a sense device may include a
force sensor at the tip of a wearer's finger. In this case, the wearer
may tap, multiple tap, sequence taps, swipe, touch, and the like to
generate a command to the eyepiece. Force sensors may also be used to
indicate degrees of touch, grip, push, and the like, where predetermined
or learned thresholds determine different command information. In this
way, commands may be delivered as a series of continuous commands that
constantly update the command information being used in an application
through the eyepiece. In an example, a wearer may be running a
simulation, such as a game application, military application, commercial
application, and the like, where the movements and contact with objects,
such as through at least one of a plurality of sense devices, are fed to
the eyepiece as commands that influence the simulation displayed through
the eyepiece. For instance, a sense device may be included in a pen
controller, where the pen controller may have a force sensor, pressure
sensor, inertial measurement unit, and the like, and where the pen
controller may be used to produce virtual writing, control a cursor
associated with the eyepiece's display, act as a computer mouse, provide
control commands through physical motion and/or contact, and the like.
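
The threshold-based mapping of touch force to different commands
mentioned above might, as a simple sketch, look like the following; the
band values are illustrative and not taken from the disclosure.

    # Illustrative force bands in newtons; a deployed system might learn
    # these per user rather than hard-coding them.
    COMMAND_BANDS = [
        (0.5, "hover"),    # light touch: highlight item under the cursor
        (2.0, "select"),   # firm press: select the item
        (5.0, "drag"),     # hard press: grab and drag the item
    ]

    def force_to_command(force_newtons):
        command = None
        for threshold, name in COMMAND_BANDS:
            if force_newtons >= threshold:
                command = name
        return command     # None: the fingertip sensor reads no contact

    print(force_to_command(1.1))   # 'hover'
    print(force_to_command(6.0))   # 'drag'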

[0353] In embodiments, the sense device may include an optical sensor or
optical transmitter as a way for movement to be interpreted as a command.
For instance, a sense device may include an optical sensor mounted on the
hand of the wearer, and the eyepiece housing may include an optical
transmitter, such that when a user moves their hand past the optical
transmitter on the eyepiece, the motions may be interpreted as commands.
A motion detected through an optical sensor may include swiping past at
different speeds, with repeated motions, combinations of dwelling and
movement, and the like. In embodiments, optical sensors and/or
transmitters may be located on the eyepiece, mounted on the wearer (e.g.
on the hand, foot, in a glove, piece of clothing), or used in
combinations between different areas on the wearer and the eyepiece, and
the like.

[0354] In one embodiment, a number of sensors useful for monitoring the
condition of the wearer or a person in proximity to the wearer are
mounted within the augmented reality glasses. Sensors have become much
smaller, thanks to advances in electronics technology. Signal transducing
and signal processing technologies have also made great progress in the
direction of size reduction and digitization. Accordingly, it is possible
to have not merely a temperature sensor in the AR glasses, but an entire
sensor array. These sensors may include, as noted, a temperature sensor,
and also sensors to detect: pulse rate; beat-to-beat heart variability;
EKG or ECG; respiration rate; core body temperature; heat flow from the
body; galvanic skin response or GSR; EMG; EEG; EOG; blood pressure; body
fat; hydration level; activity level; oxygen consumption; glucose or
blood sugar level; body position; and UV radiation exposure or
absorption. In addition, there may also be a retinal sensor and a blood
oxygenation sensor (such as an SpO2 sensor), among others. Such
sensors are available from a variety of manufacturers, including Vermed,
Bellows Falls, VT, USA; VTI, Vantaa, Finland; and ServoFlow, Lexington,
Mass., USA.

[0355] In some embodiments, it may be more useful to have sensors mounted
on the person or on equipment of the person, rather than on the glasses
themselves. For example, accelerometers, motion sensors and vibration
sensors may be usefully mounted on the person, on clothing of the person,
or on equipment worn by the person. These sensors may maintain continuous
or periodic contact with the controller of the AR glasses through a
Bluetooth® radio transmitter or other radio device adhering to IEEE
802.11 specifications. For example, if a physician wishes to monitor
motion or shock experienced by a patient during a foot race, the sensors
may be more useful if they are mounted directly on the person's skin, or
even on a T-shirt worn by the person, rather than mounted on the glasses.
In these cases, a more accurate reading may be obtained by a sensor
placed on the person or on the clothing rather than on the glasses. Such
sensors need not be as tiny as the sensors which would be suitable for
mounting on the glasses themselves, and, as noted, may be more useful.

[0356] The AR glasses or goggles may also include environmental sensors or
sensor arrays. These sensors are mounted on the glasses and sample the
atmosphere or air in the vicinity of the wearer. These sensors or sensor
arrays may be sensitive to certain substances or concentrations of
substances. For example, sensors and arrays are available to measure
concentrations of carbon monoxide, oxides of nitrogen ("NOx"),
temperature, relative humidity, noise level, volatile organic chemicals
(VOC), ozone, particulates, hydrogen sulfide, barometric pressure and
ultraviolet light and its intensity. Vendors and manufacturers include:
Sensares, Crolles, FR; Cairpol, Ales, FR; Critical Environmental
Technologies of Canada, Delta, B.C., Canada; Apollo Electronics Co.,
Shenzhen, China; and AV Technology Ltd., Stockport, Cheshire, UK. Many
other sensors are well known. If such sensors are mounted on the person
or on clothing or equipment of the person, they may also be useful. These
environmental sensors may include radiation sensors, chemical sensors,
poisonous gas sensors, and the like.

[0357] In one embodiment, environmental sensors, health monitoring
sensors, or both, are mounted on the frames of the augmented reality
glasses. In another embodiment, the sensors may be mounted on the person
or on clothing or equipment of the person. For example, a sensor for
measuring electrical activity of a heart of the wearer may be implanted,
with suitable accessories for transducing and transmitting a signal
indicative of the person's heart activity.

[0358] The signal may be transmitted a very short distance via a
Bluetooth® radio transmitter or other radio device adhering to IEEE
802.15.1 specifications. Other frequencies or protocols may be used
instead. The signal may then be processed by the signal-monitoring and
processing equipment of the augmented reality glasses, and recorded and
displayed on the virtual screen available to the wearer. In another
embodiment, the signal may also be sent via the AR glasses to a friend or
squad leader of the wearer. Thus, the health and well-being of the person
may be monitored by the person and by others, and may also be tracked
over time.

[0359] In another embodiment, environmental sensors may be mounted on the
person or on equipment of the person. For example, radiation or chemical
sensors may be more useful if worn on outer clothing or a web-belt of the
person, rather than mounted directly on the glasses. As noted above,
signals from the sensors may be monitored locally by the person through
the AR glasses. The sensor readings may also be transmitted elsewhere,
either on demand or automatically, perhaps at set intervals, such as
every quarter-hour or half-hour. Thus, a history of sensor readings,
whether of the person's body readings or of the environment, may be made
for tracking or trending purposes.

[0360] In an embodiment, an RF/micropower impulse radio (MIR) sensor may
be associated with the eyepiece and serve as a short-range medical radar.
The sensor may operate on an ultra-wide band. The sensor may include an
RF/impulse generator, receiver, and signal processor, and may be useful
for detecting and measuring cardiac signals by measuring ion flow in
cardiac cells within 3 mm of the skin. The receiver may be a phased array
antenna to enable determining a location of the signal in a region of
space. The sensor may be used to detect and identify cardiac signals
through blockages, such as walls, water, concrete, dirt, metal, wood, and
the like. For example, a user may be able to use the sensor to determine
how many people are located in a concrete structure by how many heart
rates are detected. In another embodiment, a detected heart rate may
serve as a unique identifier for a person so that they may be recognized
in the future. In an embodiment, the RF/impulse generator may be embedded
in one device, such as the eyepiece or some other device, while the
receiver is embedded in a different device, such as another eyepiece or
device. In this way, a virtual "tripwire" may be created when a heart
rate is detected between the transmitter and receiver. In an embodiment,
the sensor may be used as an in-field diagnostic or self-diagnosis tool.
EKGs may be analyzed and stored for future use as a biometric
identifier. A user may receive alerts of sensed heart rate signals and
how many heart rates are present as displayed content in the eyepiece.

[0361] FIG. 29 depicts an embodiment 2900 of an augmented reality eyepiece
or glasses with a variety of sensors and communication equipment. One or
more environmental or health sensors are connected to a sensor
interface locally or remotely through a short range radio circuit and an
antenna, as shown. The sensor interface circuit includes all devices for
detecting, amplifying, processing and sending on or transmitting the
signals detected by the sensor(s). The remote sensors may include, for
example, an implanted heart rate monitor or other body sensor (not
shown). The other sensors may include an accelerometer, an inclinometer,
a temperature sensor, a sensor suitable for detecting one or more
chemicals or gasses, or any of the other health or environmental sensors
discussed in this disclosure. The sensor interface is connected to the
microprocessor or microcontroller of the augmented reality device, from
which point the information gathered may be recorded in memory, such as
random access memory (RAM) or permanent memory, read only memory (ROM),
as shown.

[0362] In an embodiment, a sense device enables simultaneous electric
field sensing through the eyepiece. Electric field (EF) sensing is a
method of proximity sensing that allows computers to detect, evaluate and
work with objects in their vicinity. Physical contact with the skin, such
as a handshake with another person or some other physical contact with a
conductive or a non-conductive device or object, may be sensed as a
change in an electric field and either enable data transfer to or from
the eyepiece or terminate data transfer. For example, videos captured by
the eyepiece may be stored on the eyepiece until a wearer of the eyepiece
with an embedded electric field sensing transceiver touches an object and
initiates data transfer from the eyepiece to a receiver. The transceiver
may include a transmitter that includes a transmitter circuit that
induces electric fields toward the body and a data sense circuit, which
distinguishes transmitting and receiving modes by detecting both
transmission and reception data and outputs control signals corresponding
to the two modes to enable two-way communication. An instantaneous
private network between two people may be generated with a contact, such
as a handshake. Data may be transferred between an eyepiece of a user and
a data receiver or eyepiece of the second user. Additional security
measures may be used to enhance the private network, such as facial or
audio recognition, detection of eye contact, fingerprint detection,
biometric entry, and the like.

[0363] In embodiments, there may be an authentication facility associated
with accessing functionality of the eyepiece, such as access to displayed
or projected content, access to restricted projected content, enabling
functionality of the eyepiece itself (e.g. as through a login to access
functionality of the eyepiece) either in whole or in part, and the like.
Authentication may be provided through recognition of the wearer's voice,
iris, retina, fingerprint, and the like, or other biometric identifier.
For example, the eyepiece or an associated controller may have an IR,
ultrasonic or capacitive tactile sensor for receiving control input
related to authentication or other eyepiece functions. A capacitance
sensor can detect a fingerprint and launch an application or otherwise
control an eyepiece function. Each finger has a different fingerprint so
each finger can be used to control different eyepiece functions or quick
launch different applications or provide various levels of
authentication. A capacitive sensor does not work through gloves, but an
ultrasonic sensor does and can be used in the same way to provide
biometric authentication or control. Ultrasonic sensors useful in the eyepiece or
associated controller include Sonavation's SonicTouch® technology used
in Sonavation's SonicSlide® sensors, which works by acoustically
measuring the ridges and valleys of the fingerprint to image the
fingerprint in 256 shades of gray in order to discern the slightest
fingerprint detail. The key imaging component of the SonicSlide®
sensor is the ceramic Micro-Electro Mechanical System (MEMS)
piezoelectric transducer array that is made from a ceramic composite
material.

[0364] The authentication system may provide for a database of biometric
inputs for a plurality of users such that access control may be provided
for use of the eyepiece based on policies and associated access
privileges for each of the users entered into the database. The eyepiece
may provide for an authentication process. For instance, the
authentication facility may sense when a user has taken the eyepiece off,
and require re-authentication when the user puts it back on. This better
ensures that the eyepiece only provides access to those users that are
authorized, and for only those privileges that the wearer is authorized
for. In an example, the authentication facility may be able to detect the
presence of a user's eye or head as the eyepiece is put on. In a first
level of access, the user may only be able to access low-sensitivity
items until authentication is complete. During authentication, the
authentication facility may identify the user, and look up their access
privileges. Once these privileges have been determined, the
authentication facility may then provide the appropriate access to the
user. In the case of an unauthorized user being detected, the eyepiece
may maintain access to low-sensitivity items, further restrict access,
deny access entirely, and the like.
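
A highly simplified sketch of this authentication flow is shown below;
the policy table, level names, and method names are hypothetical. Access
drops back to low-sensitivity items whenever the eyepiece is removed,
and is restored to the enrolled privilege level once the wearer is
re-identified.

    LOW, STANDARD, FULL = "low", "standard", "full"

    # Hypothetical policy database of enrolled users and their privileges.
    ACCESS_POLICY = {"user-a": FULL, "user-b": STANDARD}

    class AuthenticationFacility:
        def __init__(self, policy):
            self.policy = policy
            self.level = LOW               # until authentication completes

        def on_eyepiece_donned(self, biometric_id):
            # Re-run authentication each time the eyepiece is put back on.
            self.level = self.policy.get(biometric_id, LOW)
            return self.level

        def on_eyepiece_removed(self):
            self.level = LOW               # force re-authentication

        def may_access(self, item_level):
            order = [LOW, STANDARD, FULL]
            return order.index(item_level) <= order.index(self.level)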

[0365] In an embodiment, a receiver may be associated with an object to
enable control of that object via touch by a wearer of the eyepiece,
wherein touch enables transmission or execution of a command signal in
the object. For example, a receiver may be associated with a car door
lock. When a wearer of the eyepiece touches the car, the car door may
unlock. In another example, a receiver may be embedded in a medicine
bottle. When the wearer of the eyepiece touches the medicine bottle, an
alarm signal may be initiated. In another example, a receiver may be
associated with a wall along a sidewalk. As the wearer of the eyepiece
passes the wall or touches the wall, advertising may be launched either
in the eyepiece or on a video panel of the wall.

[0366] In an embodiment, when a wearer of the eyepiece initiates a
physical contact, a WiFi exchange of information with a receiver may
provide an indication that the wearer is connected to an online activity
such as a game or may provide verification of identity in an online
environment. In the embodiment, a representation of the person could
change color or undergo some other visual indication in response to the
contact.

[0367] In embodiments, the eyepiece may include a tactile interface as in
FIG. 14, such as to enable haptic control of the eyepiece, such as with a
swipe, tap, touch, press, click, roll of a rollerball, and the like. For
instance, the tactile interface 1402 may be mounted on the frame of the
eyepiece 1400, such as on an arm, both arms, the nosepiece, the top of
the frame, the bottom of the frame, and the like. In embodiments, the
tactile interface 1402 may include controls and functionality similar to
a computer mouse, with left and right buttons, a 2D position control pad
such as described herein, and the like. For example, the tactile
interface may be mounted on the eyepiece near the user's temple and act
as a `temple mouse` controller for the content the eyepiece projects to
the user, and may include a temple-mounted rotary selector and enter button.
In another example, the tactile interface may be one or more vibratory
temple motors which may vibrate to alert or notify the user, such as to
danger left, danger right, a medical condition, and the like. The tactile
interface may be mounted on a controller separate from the eyepiece, such
as a worn controller, a hand-carried controller, and the like. If there is
an accelerometer in the controller then it may sense the user tapping,
such as on a keyboard, on their hand (either on the hand with the
controller or tapping with the hand that has the controller), and the
like. The wearer may then touch the tactile interface in a plurality of
ways to be interpreted by the eyepiece as commands, such as by tapping
one or multiple times on the interface, by brushing a finger across the
interface, by pressing and holding, by pressing more than one interface
at a time, and the like. In embodiments, the tactile interface may be
attached to the wearer's body (e.g. their hand, arm, leg, torso, neck),
their clothing, as an attachment to their clothing, as a ring 1500, as a
bracelet, as a necklace, and the like. For example, the interface may be
attached on the body, such as on the back of the wrist, where touching
different parts of the interface provides different command information
(e.g. touching the front portion, the back portion, the center, holding
for a period of time, tapping, swiping, and the like). In embodiments,
user contact with the tactile interface may be interpreted through force,
pressure, movement, and the like. For instance, the tactile interface may
incorporate resistive touch technologies, capacitive touch technologies,
proportional pressure touch technologies, and the like. In an example,
the tactile interface may utilize discrete resistive touch technologies
where the application requires the interface to be simple, rugged, low
power, and the like. In another example, the tactile interface may
utilize capacitive touch technologies where more functionality is
required through the interface, such as through movement, swiping,
multi-point contacts, and the like. In another example, the tactile
interface may utilize pressure touch technologies, such as when variable
pressure commanding is required. In embodiments, any of these, or like
touch technologies, may be used in any tactile interface as described
herein.
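
As a rough illustration of how touches on such an interface could be
distinguished, one interaction's samples can be classified by duration,
displacement, and the number of touch-down episodes; the timing and
distance thresholds below are assumptions, not values from the
disclosure.

    def classify_touch(events, hold_ms=600, swipe_px=30):
        # events: list of (t_ms, x, y, touching) samples for one
        # interaction on the tactile interface.
        touches = [e for e in events if e[3]]
        if not touches:
            return None
        duration = touches[-1][0] - touches[0][0]
        dx = touches[-1][1] - touches[0][1]
        if abs(dx) >= swipe_px:
            return "swipe_right" if dx > 0 else "swipe_left"
        if duration >= hold_ms:
            return "press_and_hold"
        # Count separate touch-down episodes to tell tap from double tap.
        downs, was_touching = 0, False
        for _, _, _, touching in events:
            if touching and not was_touching:
                downs += 1
            was_touching = touching
        return "double_tap" if downs >= 2 else "tap"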

[0368] In another example, the wearer may have an interface mounted in a
ring as shown in FIG. 15, a hand piece, and the like, where the interface
may have at least one of a plurality of command interface types, such as
a tactile interface, a position sensor device, and the like with wireless
command connection to the eyepiece. In an embodiment, the ring 1500 may
have controls that mirror a computer mouse, such as buttons 1504 (e.g.
functioning as one-button, multi-button, and similar mouse functions), a
2D position control 1502, scroll wheel, and the like. The buttons 1504
and 2D position control 1502 may be as shown in FIG. 15, where the
buttons are on the side facing the thumb and the 2D position controller
is on the top. Alternately, the buttons and 2D position control may be in
other configurations, such as all facing the thumb side, all on the top
surface, or any other combination. The 2D position control 1502 may be a
2D button position controller (e.g. such as the TrackPoint pointing
device embedded in some laptop keyboards to control the position of the
mouse), a pointing stick, joystick, an optical track pad, an opto touch
wheel, a touch screen, touch pad, track pad, scrolling track pad,
trackball, any other position or pointing controller, and the like. In
embodiments, control signals from the tactile interface (such as the ring
tactile interface 1500) may be provided with a wired or wireless
interface to the eyepiece, where the user is able to conveniently supply
control inputs, such as with their hand, thumb, finger, and the like. For
example, the user may be able to articulate the controls with their
thumb, where the ring is worn on the user's index finger. In embodiments,
a method or system may provide an interactive head-mounted eyepiece worn
by a user, wherein the eyepiece includes an optical assembly through
which the user views a surrounding environment and displayed content, a
processor for handling content for display to the user, and an integrated
projector facility for projecting the content to the optical assembly,
and a control device worn on the body of the user, such as a hand of the
user, including at least one control component actuated by the user, and
providing a control command from the actuation of the at least one
control component to the processor as a command instruction. The command
instruction may be directed to the manipulation of content for display to
the user. The control device may be worn on a first digit of the hand of
the user, and the at least one control component may be actuated by a
second digit of a hand of the user. The first digit may be the index
finger, the second digit the thumb, and the first and second digit on the
same hand of the user. The control device may have at least one control
component mounted on the index finger side facing the thumb. The at least
one control component may be a button. The at least one control component
may be a 2D position controller. The control device may have at least one
button actuated control component mounted on the index finger side facing
the thumb, and a 2D position controller actuated control component
mounted on the top facing side of the index finger. The control
components may be mounted on at least two digits of the user's hand. The
control device may be worn as a glove on the hand of the user. The
control device may be worn on the wrist of the user. The at least one
control component may be worn on at least one digit of the hand, and a
transmission facility may be worn separately on the hand. The
transmission facility may be worn on the wrist. The transmission facility
may be worn on the back of the hand. The control component may be at
least one of a plurality of buttons. The at least one button may provide
a function substantially similar to a conventional computer mouse button.
Two of the plurality of buttons may function substantially similar to
primary buttons of a conventional two-button computer mouse. The control
component may be a scrolling wheel. The control component may be a 2D
position control component. The 2D position control component may be a
button position controller, pointing stick, joystick, optical track pad,
opto-touch wheel, touch screen, touch pad, track pad, scrolling track
pad, trackball, capacitive touch screen, and the like. The 2D position
control component may be controlled with the user's thumb. The control
component may be a touch-screen capable of implementing touch controls
including button-like functions and 2D manipulation functions. The
control component may be actuated when the user puts on the projected
processor content pointing and control device.

[0369] In embodiments, the wearer may have an interface mounted in a ring
1500AA that includes a camera 1502AA, such as shown in FIG. 15AA. In
embodiments, the ring controller 1500AA may have control interface types
as described herein, such as through buttons 1504, 2D position control
1502, 3D position control (e.g. utilizing accelerometers, gyros), and the
like. The ring controller 1500AA may then be used to control functions
within the eyepiece, such as controlling the manipulation of the
projected display content to the wearer. In embodiments, the control
interfaces 1502, 1504 may provide control aspects to the embedded camera
1502AA, such as on/off, zoom, pan, focus, recording a still image
picture, recording a video, and the like. Alternately, the functions may
be controlled through other control aspects of the eyepiece, such as
through voice control, other tactile control interfaces, eye gaze
detection as described herein, and the like. The camera may also have
automatic control functions enabled, such as auto-focus, timed functions,
face detection and/or tracking, auto-zoom, and the like. For example, the
ring controller 1500AA with integrated camera 1502AA may be used to view
the wearer 1508AA during a videoconference enabled through the eyepiece,
where the wearer 1508AA may hold the ring controller (e.g. as mounted on
their finger) out in order to allow the camera 1502AA a view of their
face for transmission to at least one other participant on the
videoconference. Alternately, the wearer may take the ring controller
1500AA off and place it down on a surface 1510AA (e.g. a table top) such
that the camera 1502AA has a view of the wearer. An image of the wearer
1512AA may then be displayed on the display area 1518AA of the eyepiece
and transmitted to others on the videoconference, such as along with the
images 1514AA of other participants on the videoconference call. In
embodiments, the camera 1502AA may provide for manual or automatic FOV
1504AA adjustment. For instance, the wearer may set the ring controller
1500AA down on a surface 1510AA for use in a video conference call, and
the FOV 1504AA may be controlled either manually (e.g. through button
controls 1502, 1504, voice control, other tactile interface) or
automatically (e.g. through face recognition) in order for the camera's
FOV 1504AA to be directed to the wearer's face. The FOV 1504AA may be
enabled to change as the wearer moves, such as by tracking by face
recognition. The FOV 1504AA may also be zoomed in/out to adjust to changes
in the position of the wearer's face. In embodiments, the camera 1502AA
may be used for a plurality of still and/or video applications, where the
view of the camera is provided to the wearer on the display area 1518AA
of the eyepiece, and where storage may be available in the eyepiece for
storing the images/videos, which may be transferred, communicated, and
the like, from the eyepiece to some external storage facility, user,
web-application, and the like. In embodiments, a camera may be
incorporated in a plurality of different mobile devices, such as worn on
the arm, hand, wrist, finger, and the like, such as the watch 3202 with
embedded camera 3200 as shown in FIGS. 32-33. As with the ring controller
1500AA, any of these mobile devices may include manual and/or automatic
functions as described for the ring controller 1500AA. In embodiments,
the ring controller 1500AA may have additional sensors, embedded
functions, control features, and the like, such as a fingerprint scanner,
tactile feedback, an LCD screen, an accelerometer, Bluetooth, and the
like. For instance, the ring controller may provide for synchronized
monitoring between the eyepiece and other control components, such as
described herein.
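
The automatic adjustment of the camera FOV toward the wearer's face
could be sketched as follows. This is an illustration only, using a
stock OpenCV face detector; the device-specific pan/zoom control calls
themselves are omitted.

    import cv2

    _face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def fov_correction(frame_bgr):
        # Returns (dx, dy, zoom): how far to pan the crop window, as a
        # fraction of the frame, and how much to zoom so the face fills
        # roughly a third of the frame height; None if no face is found.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = _face_detector.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        frame_h, frame_w = gray.shape
        dx = (x + w / 2) / frame_w - 0.5
        dy = (y + h / 2) / frame_h - 0.5
        zoom = (frame_h / 3.0) / h
        return dx, dy, zoom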

[0370] In embodiments, the eyepiece may provide a system and method for
providing an image of the wearer to videoconference participants through
the use of an external mirror, where the wearer views themselves in the
mirror and an image of themselves is captured through an integrated
camera of the eyepiece. The captured image may be used directly, or the
image may be flipped to correct for the image reversal of the mirror. In
an example, the wearer may enter into a videoconference with a plurality
of other people, where the wearer may be able to view live video images
of the others through the eyepiece. By utilizing an ordinary mirror and
an integrated camera in the eyepiece, the user may be able to view
themselves in the mirror, have the image captured by the integrated
camera, and provide the other people with an image of themselves for
purposes of the videoconference. This image may also be available to the
wearer as a projected image to the eyepiece, such as in addition to the
images of the other people involved in the videoconference.
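
The mirror-reversal correction mentioned above is a single operation on
the captured frame; a minimal sketch (function name assumed) is:

    import cv2

    def correct_mirror_capture(frame_bgr):
        # Flip about the vertical axis so videoconference participants
        # see the wearer un-reversed; flipCode=1 flips left/right.
        return cv2.flip(frame_bgr, 1)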

[0371] In embodiments, a surface-sensing component may also be provided
in the control device for detecting motion across a surface. The
surface-sensing component may be disposed on the
palmar side of the user's hand. The surface may be at least one of a hard
surface, a soft surface, surface of the user's skin, surface of the
user's clothing, and the like. Providing control commands may be
transmitted wirelessly, through a wired connection, and the like. The
control device may control a pointing function associated with the
displayed processor content. The pointing function may be control of a
cursor position; selection of displayed content, selecting and moving
displayed content; control of zoom, pan, field of view, size, position of
displayed content; and the like. The control device may control a
pointing function associated with the viewed surrounding environment. The
pointing function may be placing a cursor on a viewed object in the
surrounding environment. The viewed object's location may be
determined by the processor in association with a camera integrated with
the eyepiece. The viewed object's identification may be determined by the
processor in association with a camera integrated with the eyepiece. The
control device may control a function of the eyepiece. The function may
be associated with the displayed content. The function may be a mode
control of the eyepiece. The control device may be foldable for ease of
storage when not worn by the user. In embodiments, the control device may
be used with external devices, such as to control the external device in
association with the eyepiece. External devices may be entertainment
equipment, audio equipment, portable electronic devices, navigation
devices, weapons, automotive controls, and the like.

[0372] In embodiments, a body worn control device (e.g. as worn on a
finger, attached to the hand at the palm, on the arm, leg, torso, and the
like) may provide 3D position sensor information to the eyepiece. For
instance, the control device may act as an `air mouse`, where 3D position
sensors (e.g. accelerometers, gyros, and the like) provide position
information when a user commands so, such as with the click of a button,
a voice command, a visually detected gesture, and the like. The user may
be able to use this feature to navigate either a 2D or 3D image being
projected to the user via the eyepiece projection system. Further, the
eyepiece may provide an external relay of the image for display or
projection to others, such as in the case of a presentation. The user may
be able to change the mode of the control device between 2D and 3D, in
order to accommodate different functions, applications, user interfaces,
and the like. In embodiments, multiple 3D control devices may be utilized
for certain applications, such as in simulation applications.

[0373] In embodiments, a system may comprise an interactive head-mounted
eyepiece worn by a user, wherein the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content, wherein the optical assembly comprises a corrective
element that corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly; and a tactile control interface mounted on the eyepiece that
accepts control inputs from the user through at least one of a user
touching the interface and the user being proximate to the interface.

[0374] In embodiments, control of the eyepiece, and especially control of
a cursor associated with displayed content to the user, may be enabled
through hand control, such as with a worn device 1500 as in FIG. 15, as a
virtual computer mouse 1500A as in FIG. 15A, and the like. For instance,
the worn device 1500 may transmit commands through physical interfaces
(e.g. a button 1502, scroll wheel 1504), and the virtual computer mouse
1500A may be able to interpret commands through detecting motion and actions
of the user's thumb, fist, hand, and the like. In computing, a physical
mouse is a pointing device that functions by detecting two-dimensional
motion relative to its supporting surface. A physical mouse traditionally
consists of an object held under one of the user's hands, with one or
more buttons. It sometimes features other elements, such as "wheels",
which allow the user to perform various system-dependent operations, or
extra buttons or features that can add more control or dimensional input.
The mouse's motion translates into the motion of a cursor on a display,
which allows for fine control of a graphical user interface. In the case
of the eyepiece, the user may be able to utilize a physical mouse, a
virtual mouse, or combinations of the two. In embodiments, a virtual
mouse may involve one or more sensors attached to the user's hand, such
as on the thumb 1502A, finger 1504A, palm 1508A, wrist 1510A, and the
like, where the eyepiece receives signals from the sensors and translates
the received signals into motion of a cursor on the eyepiece display to
the user. In embodiments, the signals may be received through an exterior
interface, such as the tactile interface 1402, through a receiver on the
interior of the eyepiece, at a secondary communications interface, on an
associated physical mouse or worn interface, and the like. The virtual
mouse may also include actuators or other output type elements attached
to the user's hand, such as for haptic feedback to the user through
vibration, force, pressure, electrical impulse, temperature, and the
like. Sensors and actuators may be attached to the user's hand by way of
a wrap, ring, pad, glove, and the like. As such, the eyepiece virtual
mouse may allow the user to translate motions of the hand into motion of
the cursor on the eyepiece display, where `motions` may include slow
movements, rapid motions, jerky motions, position, change in position,
and the like, and may allow users to work in three dimensions, without
the need for a physical surface, and including some or all of the six
degrees of freedom. Note that because the `virtual mouse` may be
associated with multiple portions of the hand, the virtual mouse may be
implemented as multiple `virtual mouse` controllers, or as a distributed
controller across multiple control members of the hand. In embodiments,
the eyepiece may provide for the use of a plurality of virtual mice, such
as for one on each of the user's hands, one or more of the user's feet,
and the like.

[0375] In embodiments, the eyepiece virtual mouse may need no physical
surface to operate, and may detect motion through sensors, such as
one of a plurality of accelerometer types (e.g. tuning fork,
piezoelectric, shear mode, strain mode, capacitive, thermal, resistive,
electromechanical, resonant, magnetic, optical, acoustic, laser, three
dimensional, and the like), and through the output signals of the
sensor(s) determine the translational and angular displacement of the
hand, or some portion of the hand. For instance, accelerometers may
produce output signals of magnitudes proportional to the translational
acceleration of the hand in the three directions. Pairs of accelerometers
may be configured to detect rotational accelerations of the hand or
portions of the hand. Translational velocity and displacement of the hand
or portions of the hand may be determined by integrating the
accelerometer output signals and the rotational velocity and displacement
of the hand may be determined by integrating the difference between the
output signals of the accelerometer pairs. Alternatively, other sensors
may be utilized, such as ultrasound sensors, imagers, IR/RF,
magnetometer, gyro magnetometer, and the like. As accelerometers, or
other sensors, may be mounted on various portions of the hand, the
eyepiece may be able to detect a plurality of movements of the hand,
ranging from simple motions normally associated with computer mouse
motion, to more highly complex motion, such as interpretation of complex
hand motions in a simulation application. In embodiments, the user may
require only a small translational or rotational action to have these
actions translated to motions associated with user intended actions on
the eyepiece projection to the user.
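
A stripped-down sketch of the integration described above follows. It is
illustrative only; gravity compensation, filtering, and the drift
correction a practical virtual mouse would need are noted but not
implemented.

    def integrate_motion(accel_samples, dt):
        # accel_samples: (ax, ay, az) in m/s^2 with gravity removed;
        # dt: sample period in seconds.  Double integration drifts, so a
        # real implementation would, e.g., zero velocity when the hand is
        # detected to be still.
        velocity = [0.0, 0.0, 0.0]
        displacement = [0.0, 0.0, 0.0]
        for sample in accel_samples:
            for i in range(3):
                velocity[i] += sample[i] * dt
                displacement[i] += velocity[i] * dt
        return velocity, displacement

    def angular_acceleration_from_pair(a_tan_1, a_tan_2, baseline_m):
        # The difference between a pair of accelerometers separated by
        # baseline_m metres gives the angular acceleration about the axis
        # normal to the pair; integrating it yields rotational velocity
        # and displacement, as described above.
        return (a_tan_1 - a_tan_2) / baseline_m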

[0376] In embodiments, the virtual mouse may have physical switches
associated with it to control the device, such as an on/off switch
mounted on the hand, the eyepiece, or other part of the body. The virtual
mouse may also have on/off control and the like through pre-defined
motions or actions of the hand. For example, the operation of the virtual
mouse may be enabled through a rapid back and forth motion of the hand.
In another example, the virtual mouse may be disabled through a motion of
the hand past the eyepiece, such as in front of the eyepiece. In
embodiments, the virtual mouse for the eyepiece may provide for the
interpretation of a plurality of motions to operations normally
associated with physical mouse control, and as such, familiar to the user
without training, such as single clicking with a finger, double clicking,
triple clicking, right clicking, left clicking, click and drag,
combination clicking, roller wheel motion, and the like. In embodiments,
the eyepiece may provide for gesture recognition, such as in interpreting
hand gestures via mathematical algorithms.

[0377] In embodiments, gesture control recognition may be provided through
technologies that utilize capacitive changes resulting from changes in
the distance of a user's hand from a conductor element as part of the
eyepiece's control system, and so would require no devices mounted on the
user's hand. In embodiments, the conductor may be mounted as part of the
eyepiece, such as on the arm or other portion of the frame, or as some
external interface mounted on the user's body or clothing. For example,
the conductor may be an antenna, where the control system behaves in a
similar fashion to the touch-less musical instrument known as the
theremin. The theremin uses the heterodyne principle to generate an audio
signal, but in the case of the eyepiece, the signal may be used to
generate a control input signal. The control circuitry may include a
number of radio frequency oscillators, such as where one oscillator
operates at a fixed frequency and another is controlled by the user's hand,
where the distance from the hand varies the input at the control antenna.
In this technology, the user's hand acts as a grounded plate (the user's
body being the connection to ground) of a variable capacitor in an L-C
(inductance-capacitance) circuit, which is part of the oscillator and
determines its frequency. In another example, the circuit may use a
single oscillator, two pairs of heterodyne oscillators, and the like. In
embodiments, there may be a plurality of different conductors used as
control inputs. In embodiments, this type of control interface may be
ideal for control inputs that vary across a range, such as a volume
control, a zoom control, and the like. However, this type of control
interface may also be used for more discrete control signals (e.g. on/off
control) where a predetermined threshold determines the state change of
the control input.
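
The oscillator behavior can be made concrete with the textbook resonance
formula f = 1/(2*pi*sqrt(L*C)). The component values below are purely
illustrative, and the beat (heterodyne) frequency between the fixed and
hand-controlled oscillators becomes the control signal rather than an
audio tone.

    import math

    def lc_frequency(inductance_h, capacitance_f):
        # Resonant frequency of an L-C oscillator.
        return 1.0 / (2.0 * math.pi *
                      math.sqrt(inductance_h * capacitance_f))

    L = 1e-3                 # 1 mH (illustrative)
    c_reference = 100e-12    # 100 pF fixed branch
    c_hand_near = 103e-12    # hand near the antenna adds a few picofarads

    f_reference = lc_frequency(L, c_reference)
    beat_hz = abs(f_reference - lc_frequency(L, c_hand_near))  # a few kHz

    def beat_to_control(beat_frequency_hz, full_scale_hz=10_000.0):
        # Map the beat frequency onto a 0..1 range, e.g. for a volume or
        # zoom control; a fixed threshold on the same value would give a
        # discrete on/off control instead.
        return min(beat_frequency_hz / full_scale_hz, 1.0)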

[0378] In embodiments, the eyepiece may interface with a physical remote
control device, such as a wireless track pad mouse, hand held remote
control, body mounted remote control, remote control mounted on the
eyepiece, and the like. The remote control device may be mounted on an
external piece of equipment, such as for personal use, gaming,
professional use, military use, and the like. For example, the remote
control may be mounted on a weapon for a soldier, such as mounted on a
pistol grip, on a muzzle shroud, on a fore grip, and the like, providing
remote control to the soldier without the need to remove their hands from
the weapon. The remote control may be removably mounted to the eyepiece.

[0379] In embodiments, a remote control for the eyepiece may be activated
and/or controlled through a proximity sensor. A proximity sensor may be a
sensor able to detect the presence of nearby objects without any physical
contact. For example, a proximity sensor may emit an electromagnetic or
electrostatic field, or a beam of electromagnetic radiation (infrared,
for instance), and look for changes in the field or return signal. The
object being sensed is often referred to as the proximity sensor's
target. Different proximity sensor targets may demand different sensors.
For example, a capacitive or photoelectric sensor might be suitable for a
plastic target; an inductive proximity sensor requires a metal target.
Other examples of proximity sensor technologies include capacitive
displacement sensors, eddy-current, magnetic, photocell (reflective),
laser, passive thermal infrared, passive optical, CCD, reflection of
ionizing radiation, and the like. In embodiments, the proximity sensor
may be integral to any of the control embodiments described herein,
including physical remote controls, virtual mouse, interfaces mounted on
the eyepiece, controls mounted on an external piece of equipment (e.g. a
game controller, a weapon), and the like.

[0380] In embodiments, sensors for measuring a user's body motion may be
used to control the eyepiece, or as an external input, such as using an
inertial measurement unit (IMU), a 3-axis magnetometer, a 3-axis gyro, a
3-axis accelerometer, and the like. For instance, a sensor may be
mounted on the hand(s) of the user, thereby enabling the use of the
signals from the sensor for controlling the eyepiece, as described herein. In
another instance, sensor signals may be received and interpreted by the
eyepiece to assess and/or utilize the body motions of the user for
purposes other than control. In an example, sensors mounted on each leg
and each arm of the user may provide signals to the eyepiece that allow
the eyepiece to measure the gait of the user. The measured gait may then
in turn be monitored over time, such as
to monitor changes in physical behavior, improvement during physical
therapy, changes due to a head trauma, and the like. In the instance of
monitoring for a head trauma, the eyepiece may initially determine a
baseline gait profile for the user, and then monitor the user over time,
such as before and after a physical event (e.g. a sports-related
collision, an explosion, a vehicle accident, and the like). In the case
of an athlete or person in physical therapy, the eyepiece may be used
periodically to measure the gait of the user, and maintain the
measurements in a database for analysis. A running gait time profile may
be produced, such as to monitor the user's gait for indications of
physical traumas, physical improvements, and the like.
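
A bare-bones sketch of the gait comparison follows; the names are
hypothetical, and detecting the heel strikes themselves from the
leg-mounted sensor signals is assumed to have been done upstream.

    def stride_times(heel_strike_timestamps):
        # Stride time: interval between successive heel strikes of the
        # same foot, derived from a leg-mounted inertial sensor.
        return [t2 - t1 for t1, t2 in
                zip(heel_strike_timestamps, heel_strike_timestamps[1:])]

    def gait_change(baseline_strides, current_strides):
        # Fractional change in mean stride time against the stored
        # baseline profile; a persistent shift could be flagged for
        # review.
        baseline = sum(baseline_strides) / len(baseline_strides)
        current = sum(current_strides) / len(current_strides)
        return (current - baseline) / baseline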

[0381] In embodiments, control of the eyepiece, and especially control of
a cursor associated with displayed content to the user, may be enabled
through the sensing of the motion of a facial feature, the tensing of a
facial muscle, the clicking of the teeth, the motion of the jaw, and the
like, of the user wearing the eyepiece through a facial actuation sensor
1502B. For instance, as shown in FIG. 15B, the eyepiece may have a facial
actuation sensor as an extension from the eyepiece earphone assembly
1504B, from the arm 1508B of the eyepiece, and the like, where the facial
actuation sensor may sense a force, a vibration, and the like associated
with the motion of a facial feature. The facial actuation sensor may also
be mounted separate from the eyepiece assembly, such as part of a
standalone earpiece, where the sensor output of the earpiece and the
facial actuation sensor may be either transferred to the eyepiece by
either wired or wireless communication (e.g. Bluetooth or other
communications protocol known to the art). The facial actuation sensor
may also be attached around the ear, in the mouth, on the face, on the
neck, and the like. The facial actuation sensor may also comprise a
plurality of sensors, such as to optimize the sensing of different
facial or interior motions or actions. In embodiments, the
facial actuation sensor may detect motions and interpret them as
commands, or the raw signals may be sent to the eyepiece for
interpretation. Commands may be commands for the control of eyepiece
functions, controls associated with a cursor or pointer as provided as
part of the display of content to the user, and the like. For example, a
user may click their teeth once or twice to indicate a single or double
click, such as normally associated with the click of a computer mouse. In
another example, the user may tense a facial muscle to indicate a
command, such as a selection associated with the projected image. In
embodiments, the facial actuation sensor may utilize noise reduction
processing to minimize the background motions of the face, the head, and
the like, such as through adaptive signal processing technologies. A
voice activity sensor may also be utilized to reduce interference, such
as from the user, from other individuals nearby, from surrounding
environmental noise, and the like. In an example, the facial actuation
sensor may also improve communications and eliminate noise by detecting
vibrations in the cheek of the user during speech, such as with multiple
microphones to identify the background noise and eliminate it through
noise cancellation, volume augmentation, and the like.

[0382] In embodiments, the user of the eyepiece may be able to obtain
information on some environmental feature, location, object, and the
like, viewed through the eyepiece by raising their hand into the field of
view of the eyepiece and pointing at the object or position. For
instance, the pointing finger of the user may indicate an environmental
feature, where the finger is not only in the view of the eyepiece but
also in the view of an embedded camera. The system may now be able to
correlate the position of the pointing finger with the location of the
environmental feature as seen by the camera. Additionally, the eyepiece
may have position and orientation sensors, such as GPS and a
magnetometer, to allow the system to know the location and line of sight
of the user. From this, the system may be able to extrapolate the
position information of the environmental feature, such as to provide the
location information to the user, to overlay the position of the
environmental information onto a 2D or 3D map, to further associate the
established position information to correlate that position information
to secondary information about that location (e.g. address, names of
individuals at the address, name of a business at that location,
coordinates of the location), and the like. Referring to FIG. 15C, in an
example, the user is looking through the eyepiece 1502C and pointing with
their hand 1504C at a house 1508C in their field of view, where an
embedded camera 1510C has both the pointed hand 1504C and the house 1508C
in its field of view. In this instance, the system is able to determine
the location of the house 1508C and provide location information 1514C
and a 3D map superimposed onto the user's view of the environment. In
embodiments, the information associated with an environmental feature may
be provided by an external facility, such as communicated with through a
wireless communication connection, stored internal to the eyepiece, such
as downloaded to the eyepiece for the current location, and the like. In
embodiments, information provided to the wearer of the eyepiece may
include any of a plurality of information related to the scene as viewed
by the wearer, such as geographic information, point of interest
information, social networking information (e.g. Twitter, Facebook, and
the like information related to a person standing in front of the wearer
augmented around the person, such as `floating` around the person),
profile information (e.g. such as stored in the wearer's contact list),
historical information, consumer information, product information, retail
information, safety information, advertisements, commerce information,
security information, game related information, humorous annotations,
news related information, and the like.
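
As a non-limiting sketch of how the eyepiece's GPS fix and magnetometer
heading might be combined with the pointing direction to extrapolate the
position of an environmental feature, the following assumes a heading in
degrees, an angular offset of the pointing finger derived from the camera
image, an estimated range to the feature, and a flat-earth approximation;
all names and values are illustrative:

    import math

    def extrapolate_feature(lat_deg, lon_deg, heading_deg, finger_offset_deg, range_m):
        """Project a point range_m along the combined bearing (equirectangular approx.)."""
        bearing = math.radians(heading_deg + finger_offset_deg)
        earth_r = 6371000.0
        dlat = (range_m * math.cos(bearing)) / earth_r
        dlon = (range_m * math.sin(bearing)) / (earth_r * math.cos(math.radians(lat_deg)))
        return lat_deg + math.degrees(dlat), lon_deg + math.degrees(dlon)

    # Wearer at a known fix, facing east, finger offset slightly left, house ~120 m away.
    print(extrapolate_feature(40.7128, -74.0060, heading_deg=90.0,
                              finger_offset_deg=-5.0, range_m=120.0))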

[0383] In embodiments, the user may be able to control their view
perspective relative to a 3D projected image, such as a 3D projected
image associated with the external environment, a 3D projected image that
has been stored and retrieved, a 3D displayed movie (such as downloaded
for viewing), and the like. For instance, and referring again to FIG.
15C, the user may be able to change the view perspective of the 3D
displayed image 1512C, such as by turning their head, and where the live
external environment and the 3D displayed image stay together even as the
user turns their head, moves their position, and the like. In this way,
the eyepiece may be able to provide an augmented reality by overlaying
information onto the user's viewed external environment, such as the
overlaid 3D displayed map 1512C, the location information 1514C, and the
like, where the displayed map, information, and the like, may change as
the user's view changes. In another instance, with 3D movies or 3D
converted movies, the perspective of the viewer may be changed to put the
viewer `into` the movie environment with some control of the viewing
perspective, where the user may be able to move their head around and
have the view change in correspondence to the changed head position,
where the user may be able to `walk into` the image when they physically
walk forward, have the perspective change as the user moves the gazing
view of their eyes, and the like. Further, additional image
information may be provided, such as at the sides of the user's view,
that could be accessed by turning the head.

[0384] In embodiments, the user of one eyepiece may be able to synchronize
their view of a projected image with at least the view of a second user
of an eyepiece. For instance, two separate eyepiece users may wish to
view the same 3D map, game projection, point-of-interest projection, and
the like, where the two viewers are not only seeing the same projected
content, but where the projected content's view is synchronized between
them. In an example, two users may want to jointly view a 3D map of a
region, and the image is synchronized such that the one user may be able
to point at a position on the 3D map that the other user is able to see
and interact with. The two users may be able to move around the 3D map
and share a virtual-physical interaction between the two users and the 3D
map, and the like. Further, a group of eyepiece wearers may be able to
jointly interact with a projection as a group. In this way, two or more
users may be able to have a unified augmented reality experience through
the coordination-synchronization of their eyepieces. Synchronization of
two or more eyepieces may be provided by communication of position
information between the eyepieces, such as absolute position information,
relative position information, translation and rotational position
information, and the like, such as from position sensors as described
herein (e.g. gyroscopes, IMU, GPS, and the like). Communications between
the eyepieces may be direct, through an Internet network, through the
cell-network, through a satellite network, and the like. Processing of
position information contributing to the synchronization may be executed
in a master processor in a single eyepiece, collectively amongst a group
of eyepieces, in a remote server system, and the like, or any combination
thereof. In embodiments, the coordinated, synchronized view of projected
content between multiple eyepieces may provide an extended augmented
reality experience from the individual to a plurality of individuals,
where the plurality of individuals benefit from the group augmented
reality experience. For example, a group of concertgoers may synchronize
their eyepieces with a feed from the concert producers such that visual
effects or audio may be pushed to people with eyepieces by the concert
producer, performers, other audience members, and the like. In an
example, the performer may have a master eyepiece and may control sending
content to audience members. In one embodiment, the content may be the
performer's view of the surrounding environment. The performer may be
using the master eyepiece for applications as well, such as controlling
an external lighting system, interacting with an augmented reality drum
kit or sampling board, calling up song lyrics, and the like.

[0385] In embodiments, the eyepiece may utilize sound projection
techniques to realize a direction of sound for the wearer of the
eyepiece, such as with surround sound techniques. Realization of a
direction of sound for a wearer may include the reproduction of the sound
from the direction of origin, either in real-time or as a playback. It
may include a visual or audible indicator to provide a direction for the
source of sound. Sound projection techniques may be useful to an
individual whose hearing is impaired or blocked, such as due to the
user experiencing hearing loss, a user wearing headphones, a user wearing
hearing protection, and the like. In this instance, the eyepiece may
provide enhanced 3D audible reproduction. In an example, the wearer may
have headphones on, and a gunshot has been fired. In this example, the
eyepiece may be able to reproduce the 3D sound profile for the sound of
the gunshot, thus allowing the wearer to respond to the gunshot knowing
where the sound came from. In another example, a wearer with headphones,
hearing loss, in a loud environment, and the like, may not otherwise be
able to tell what's being said and/or the direction of the person
speaking, but is provided with a 3D sound enhancement from the eyepiece
(e.g. the wearer is listening to other proximate individuals through
headphones and so does not have directionality information). In another
example, a wearer may be in a loud ambient environment, or in an
environment where periodic loud noises can occur. In this instance, the
eyepiece may have the ability to cut off the loud sound to protect the
wearer's hearing, or the sound could be so loud that the wearer cannot
tell where it came from, or so loud that their ears are left ringing
and they cannot hear anything. To aid in this situation, the eyepiece
may provide visible, auditory, vibration, and the like cues to
the wearer to indicate the direction of the sound source. In embodiments,
the eyepiece may provide "augmented" hearing where the wearer's ears are
plugged to protect them from loud noises, but the ear buds are used to
generate a reproduction of sound to replace what's missing from the
natural world. This artificial sound may then be used to give
directionality to a wirelessly transmitted communication that the
operator could not hear naturally.
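
A minimal sketch of reproducing a direction of sound through the ear
buds is shown below, using a crude interaural time and level difference
to cue the direction of, for example, a gunshot; the Woodworth-style
delay approximation, the head radius, and the attenuation factor are
illustrative assumptions rather than the disclosed implementation:

    import numpy as np

    def spatialize(mono, azimuth_deg, fs=44100, head_radius_m=0.0875, c=343.0):
        """Crude ITD/ILD panning: delay and attenuate the far ear to cue direction."""
        az = np.radians(azimuth_deg)                      # 0 = straight ahead, + = right
        itd_s = head_radius_m / c * (abs(az) + np.sin(abs(az)))   # Woodworth approximation
        delay = int(round(itd_s * fs))                    # interaural delay in samples
        near = mono
        far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * 0.7
        left, right = (far, near) if azimuth_deg >= 0 else (near, far)
        return np.stack([left, right], axis=1)            # stereo buffer for the ear buds

    gunshot = np.random.randn(4410)                       # stand-in for a recorded impulse
    stereo = spatialize(gunshot, azimuth_deg=60.0)        # source off to the wearer's right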

[0386] In embodiments, an example of a configuration for establishing
directionality of a source sound may be to point different microphones in
different directions. For instance, at least one microphone may be used
for the voice of the wearer, at least one microphone for the surrounding
environment, at least one pointing down at the ground, and potentially in
a plurality of different discrete directions. In this instance, the
signal from the microphone pointing down may be subtracted to isolate
other sounds, which may be combined with 3D surround sound and
augmented hearing techniques,
as described herein.
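
A minimal sketch of subtracting the downward-pointing microphone to
isolate other sounds might use simple spectral subtraction, as below;
the frame size and subtraction factor are illustrative assumptions:

    import numpy as np

    def subtract_reference(env_frame, ground_frame, alpha=1.0):
        """Remove the ground-facing mic's magnitude spectrum from the environment mic."""
        env_fft = np.fft.rfft(env_frame)
        ground_mag = np.abs(np.fft.rfft(ground_frame))
        cleaned_mag = np.maximum(np.abs(env_fft) - alpha * ground_mag, 0.0)
        return np.fft.irfft(cleaned_mag * np.exp(1j * np.angle(env_fft)), n=len(env_frame))

    env = np.random.randn(1024)       # frame from the environment-facing microphone
    ground = np.random.randn(1024)    # frame from the downward-pointing microphone
    isolated = subtract_reference(env, ground)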

[0387] In an example of a sound augmented system as part of the eyepiece,
there are a number of users with eyepieces, such as in a noisy
environment where all the users have `plugged ears` as implemented
through artificial noise blockage through the eyepiece ear buds. One of
the wearers may yell out that they need some piece of equipment. Because of
all the ambient noise and the hearing protection the eyepiece creates, no
one can hear the request for equipment. Here, the wearer making the
verbal request has a filtered microphone close to their mouth, and they
could wirelessly transmit the request to the others, where their eyepiece
could relay a sound signal to the other users' eyepieces, and to the ear
on the correct side, and the others would know to look to the right or
left to see who has made the request. This system could be further
enhanced with geo-locations of all the wearers, and a "virtual" surround
sound system that uses the two ear buds to give the perception of 3D
space (such as the SRS True Surround Technology).

[0388] In embodiments, auditory cues could also be computer generated so
the communicating user doesn't need to verbalize their communication but
can select it from a list of common commands, the computer generates the
communication based on preconfigured conditions, and the like. In an
example, the wearers may be in a situation where they don't want a
display in front of their eyes but want to have ear buds in their ears.
In this case, if they wanted to notify someone in a group to get up and
follow them, they could just click a controller a certain number of
times, or provide a visual hand gesture to a camera, an IMU, and the
like. The system may choose the `follow me` command and transmit it to
the other users along with the communicating user's location, so that
the 3D sound system makes the command appear to come from where the
communicating user is actually sitting, even when out of sight. In
embodiments, directional information may be determined
and/or provided through position information from the users of eyepieces.

[0389] In embodiments, the eyepiece may provide aspects of signals
intelligence (SIGINT), such as in the use of existing WiFi, 3G,
Bluetooth, and the like communications signals to gather signals
intelligence for devices and users in proximity to the wearer of the
eyepiece. These signals may be from other eyepieces, such as to gather
information about other known friendly users; other eyepieces that have
been picked up by an unauthorized individual, such as through a signal
that is generated when an unauthorized user tries to use the eyepiece;
other communications devices (e.g. radios, cell phones, pagers,
walkie-talkies, and the like); electronic signals emanating from devices
that may not be directly used for communications; and the like.
Information gathered by the eyepiece may be direction information,
position information, motion information, number of and/or rate of
communications, and the like. Further, information may be gathered
through the coordinated operations of multiple eyepieces, such as in the
triangulation of a signal for determination of the signal's location.
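
As a non-limiting illustration of triangulating a signal from the
coordinated operation of multiple eyepieces, the following intersects two
bearing measurements taken from two eyepieces at known positions in a
local planar frame (x east, y north, bearings clockwise from north); the
geometry and names are illustrative assumptions:

    import math

    def triangulate(p1, bearing1_deg, p2, bearing2_deg):
        """Intersect two bearing lines to estimate an emitter's position (local metres)."""
        b1, b2 = math.radians(bearing1_deg), math.radians(bearing2_deg)
        d1 = (math.sin(b1), math.cos(b1))
        d2 = (math.sin(b2), math.cos(b2))
        denom = d1[0] * -d2[1] - d1[1] * -d2[0]
        if abs(denom) < 1e-9:
            return None                       # bearings are parallel; no fix possible
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        t = (dx * -d2[1] - dy * -d2[0]) / denom
        return p1[0] + t * d1[0], p1[1] + t * d1[1]

    # Two wearers 100 m apart both bearing on the same emitter.
    print(triangulate((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))   # approximately (50, 50)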

[0390] Referring to FIG. 15D, in embodiments the user of the eyepiece
1502D may be able to use multiple hand/finger points from their hand
1504D to define the field of view (FOV) 1508D of the camera 1510D
relative to the see-thru view, such as for augmented reality
applications. For instance, in the example shown, the user is utilizing
their first finger and thumb to adjust the FOV 1508D of the camera 1510D
of the eyepiece 1502D. The user may utilize other combinations to adjust
the FOV 1508D, such as with combinations of fingers, fingers and thumb,
combinations of fingers and thumbs from both hands, use of the palm(s),
cupped hand(s), and the like. The use of multiple hand/finger points may
enable the user to alter the FOV 1508D of the camera 1510D in much the
same way as users of touch screens, where different points of the
hand/finger establish points of the FOV to define the desired view. In
this instance however, there is no physical contact made between the
user's hand(s) and the eyepiece. Here, the camera may be commanded to
associate portions of the user's hand(s) to the establishing or changing
of the FOV of the camera. The command may be any command type described
herein, including but not limited to hand motions in the FOV of the
camera, commands associated with physical interfaces on the eyepiece,
commands associated with sensed motions near the eyepiece, commands
received from a command interface on some portion of the user, and the
like. The eyepiece may be able to recognize the finger/hand motions as
the command, such as in some repetitive motion. In embodiments, the user
may also utilize this technique to adjust some portion of the projected
image, where the eyepiece relates the viewed image by the camera to some
aspect of the projected image, such as the hand/finger points in view to
the projected image of the user. For example, the user may be
simultaneously viewing the external environment and a projected image,
and the user utilizes this technique to change the projected viewing
area, region, magnification, and the like. In embodiments, the user may
perform a change of FOV for a plurality of reasons, including zooming in
or out from a viewed scene in the live environment, zooming in or out
from a viewed portion of the projected image, changing the viewing area
allocated to the projected image, changing the perspective view of the
environment or projected image, and the like.
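
A minimal sketch of mapping the imaged span between the user's thumb and
first finger to a camera FOV might look like the following, where the
pixel-span limits and the FOV range are illustrative assumptions:

    def fov_from_pinch(thumb_px, finger_px, min_fov=15.0, max_fov=75.0,
                       min_span=40.0, max_span=400.0):
        """Map the on-image thumb-to-finger span (pixels) to a FOV in degrees."""
        span = ((thumb_px[0] - finger_px[0]) ** 2 +
                (thumb_px[1] - finger_px[1]) ** 2) ** 0.5
        frac = min(max((span - min_span) / (max_span - min_span), 0.0), 1.0)
        return min_fov + frac * (max_fov - min_fov)   # wider pinch -> wider FOV

    print(fov_from_pinch((300, 240), (520, 260)))     # moderate pinch -> mid-range FOV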

[0391] In embodiments, the eyepiece may enable simultaneous FOVs. For
example, simultaneous wide, medium, and narrow camera FOVs may be used,
where the user can have different FOVs up simultaneously in view (i.e.
wide to show the entire field, perhaps static, and narrow to focus on a
particular target, perhaps moving with the eye or with a cursor).

[0392] In embodiments the eyepiece may be able to determine where the user
is gazing, or the motion of the user's eye, by tracking the eye through
reflected light off the user's eye. This information may then be used to
help correlate the user's line of sight with respect to the projected
image, a camera view, the external environment, and the like, and used in
control techniques as described herein. For instance, the user may gaze
at a location on the projected image and make a selection, such as with
an external remote control or with some detected eye movement (e.g.
blinking). In an example of this technique, and referring to FIG. 15E,
transmitted light 1508E, such as infrared light, may be reflected 1510E
from the eye 1504E and sensed at the optical display 502 (e.g. with a
camera or other optical sensor). The information may then be analyzed to
extract eye rotation from changes in reflections. In embodiments, an eye
tracking facility may use the corneal reflection and the center of the
pupil as features to track over time; use reflections from the front of
the cornea and the back of the lens as features to track; image features
from inside the eye, such as the retinal blood vessels, and follow these
features as the eye rotates; and the like. Alternatively, the eyepiece
may use other techniques to track the motions of the eye, such as with
components surrounding the eye, mounted in contact lenses on the eye, and
the like. For instance, a special contact lens may be provided to the
user with an embedded optical component, such as a mirror, magnetic field
sensor, and the like, for measuring the motion of the eye. In another
instance, electric potentials may be measured and monitored with
electrodes placed around the eyes, utilizing the steady electric
potential field from the eye as a dipole, such as with its positive pole
at the cornea and its negative pole at the retina. In this instance, the
electric signal may be derived using contact electrodes placed on the
skin around the eye, on the frame of the eyepiece, and the like. If the
eye moves from the center position towards the periphery, the retina
approaches one electrode while the cornea approaches the opposing one.
This change in the orientation of the dipole and consequently the
electric potential field results in a change in the measured signal. By
analyzing these changes eye movement may be tracked.
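
By way of a non-limiting sketch of the corneal-reflection technique, the
following fits an affine mapping from the pupil-center-minus-glint vector
to display coordinates using a brief calibration, and then maps new
observations to an estimated gaze point; the calibration targets and the
affine model are illustrative assumptions:

    import numpy as np

    def calibrate(pupil_glint_vecs, screen_points):
        """Fit an affine map from pupil-minus-glint vectors to display coordinates."""
        v = np.hstack([pupil_glint_vecs, np.ones((len(pupil_glint_vecs), 1))])
        coeffs, *_ = np.linalg.lstsq(v, np.asarray(screen_points, dtype=float), rcond=None)
        return coeffs

    def gaze_point(coeffs, pupil_px, glint_px):
        """Map a new pupil/glint observation to an estimated point on the display."""
        vec = np.array([pupil_px[0] - glint_px[0], pupil_px[1] - glint_px[1], 1.0])
        return vec @ coeffs

    # Illustrative calibration: the user fixates four known on-display targets.
    vecs = np.array([[-10, -6], [12, -5], [-9, 8], [11, 7]], dtype=float)
    targets = [[100, 100], [540, 100], [100, 380], [540, 380]]
    coeffs = calibrate(vecs, targets)
    print(gaze_point(coeffs, pupil_px=(402, 298), glint_px=(400, 300)))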

[0393] Another example of how the user's eye gaze direction and
associated control may be applied involves placement (by the eyepiece)
and optional selection (by the user) of a visual indicator in the user's
peripheral vision, such as in order to reduce clutter in the narrow
portion of the user's visual field around the gaze direction where the
eye's highest visual input resides. Since the brain is limited as to how
much information it can process at a time, and the brain pays the most
attention to visual content close to the direction of gaze, the eyepiece
may provide projected visual indicators in the periphery of vision as
cues to the user. This way the brain may only have to process the
detection of the indicator, and not the information associated with the
indicator, thus decreasing the potential for overloading the user with
information. The indicator may be an icon, a picture, a color, a symbol, a
blinking object, and the like, and indicate an alert, an email arriving,
an incoming phone call, a calendar event, an internal or external
processing facility that requires attention from the user, and the like.
With the visual indicator in the periphery, the user may become aware of
it without being distracted by it. The user may then optionally decide to
elevate the content associated with the visual cue in order to see more
information, such as by gazing over to the visual indicator and, by
doing so, opening up its content. For example, an icon representing an
incoming email may indicate an email being received. The user may notice
the icon, and choose to ignore it (such as the icon disappearing after a
period of time if not activated, such as by a gaze or some other control
facility). Alternately, the user may notice the visual indicator and
choose to `activate` it by gazing in the direction of the visual indicator.
In the case of the email, when the eyepiece detects that the user's eye
gaze is coincident with the location of the icon, the eyepiece may open
up the email and reveal its content. In this way the user maintains
control over what information is being paid attention to, and as a
result, minimizes distractions and maximizes content usage efficiency.
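
A minimal sketch of the peripheral-indicator behavior described above
follows, where an icon either quietly expires if ignored or opens its
content once the gaze dwells on it; the dwell time, timeout, and
activation radius are illustrative assumptions:

    import time

    class PeripheralIndicator:
        """An icon in the periphery that activates on gaze dwell or expires if ignored."""

        def __init__(self, position_px, content, timeout_s=10.0, dwell_s=0.4, radius_px=40):
            self.position_px, self.content = position_px, content
            self.created, self.timeout_s = time.time(), timeout_s
            self.dwell_s, self.radius_px = dwell_s, radius_px
            self._dwell_start = None

        def update(self, gaze_px):
            """Call each frame with the current gaze point; returns content when activated."""
            if time.time() - self.created > self.timeout_s:
                return "expired"
            dist = ((gaze_px[0] - self.position_px[0]) ** 2 +
                    (gaze_px[1] - self.position_px[1]) ** 2) ** 0.5
            if dist <= self.radius_px:
                self._dwell_start = self._dwell_start or time.time()
                if time.time() - self._dwell_start >= self.dwell_s:
                    return self.content            # the user chose to attend to the cue
            else:
                self._dwell_start = None
            return None

    indicator = PeripheralIndicator(position_px=(60, 440), content="open email preview")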

[0394] In embodiments, the eyepiece may utilize sub-conscious control
aspects, such as images in the wearer's periphery, images presented to
the user at rates below conscious perception, sub-conscious perceptions
to a viewed scene by the viewer, and the like. For instance, a wearer may
be presented images through the eyepiece that are at a rate the wearer is
unaware of, but is subconsciously made aware of as presented content,
such as a reminder, an alert (e.g. an alert that calls on the wearer to
increase a level of attention to something, but not so much so that the
user needs a full conscious reminder), an indication related to the
wearer's immediate environment (e.g. the eyepiece has detected something
in the wearer's field of view that may have some interest to the wearer,
and to which the indication draws the wearer's attention), and the like.
In another instance, the eyepiece may provide indicators to the wearer
through a brain activity monitoring interface, where electrical signals
within the brain fire before a person realizes they've recognized an
image. For instance, the brain activity-monitoring interface may include
electroencephalogram (EEG) sensors (or the like) to monitor brain
activity as the wearer is viewing the current environment. When the
eyepiece, through the brain activity-monitoring interface, senses that
the wearer has become `aware` of an element of the surrounding
environment, the eyepiece may provide conscious level feedback to the
wearer to make the wearer more aware of the element. For example, a
wearer may unconsciously become aware of seeing a familiar face in a
crowd (e.g. a friend, a suspect, a celebrity), and the eyepiece provides
a visual or audio indication to the wearer to bring the person more
consciously to the attention of the wearer. In another example, the
wearer may view a product that arouses their attention at a subconscious
level, and the eyepiece provides a conscious indication to the wearer,
more information about the product, an enhanced view of the product, a
link to more information about the product, and the like. In embodiments,
the ability for the eyepiece to extend the wearer's reality to a
subconscious level may enable the eyepiece to provide the wearer with an
augmented reality beyond their normal conscious experience with the world
around them.

[0395] In embodiments, the eyepiece may have a plurality of modes of
operation where control of the eyepiece is controlled at least in part by
positions, shapes, motions of the hand, and the like. To provide this
control the eyepiece may utilize hand recognition algorithms to detect
the shape of the hand/fingers, and to then associate those hand
configurations, possibly in combination with motions of the hand, as
commands. Realistically, as there may be only a limited number of hand
configurations and motions available to command the eyepiece, these hand
configurations may need to be reused depending upon the mode of operation
of the eyepiece. In embodiments, certain hand configurations or motions
may be assigned for transitioning the eyepiece from one mode to the next,
thereby allowing for the reuse of hand motions. For instance, and
referring to FIG. 15F, the user's hand 1504F may be moved in view of a
camera on the eyepiece, and the movement may then be interpreted as a
different command depending upon the mode, such as a circular motion
1508F, a motion across the field of view 1510F, a back and forth motion
1512F, and the like. In a simplistic example, suppose there are two modes
of operation, mode one for panning a view from the projected image and
mode two for zooming the projected image. In this example the user may
want to use a left-to-right finger-pointed hand motion to command a
panning motion to the right. However, the user may also want to use a
left-to-right finger-pointed hand motion to command a zooming of the
image to greater magnification. To allow the dual use of this hand motion
for both command types, the eyepiece may be configured to interpret the
hand motion differently depending upon the mode the eyepiece is currently
in, and where specific hand motions have been assigned for mode
transitions. For instance, a clockwise rotational motion may indicate a
transition from pan to zoom mode, and a counter-clockwise rotational
motion may indicate a transition from zoom to pan mode. This example is
meant to be illustrative and not limiting in any way, where one skilled in
the art will recognize how this general technique could be used to
implement a variety of command/mode structures using the hand(s) and
finger(s), such as hand-finger configurations-motions, two-hand
configuration-motions, and the like.
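
A non-limiting sketch of mode-dependent gesture interpretation follows,
where the gesture labels are assumed outputs of a hand-recognition stage
and the mode and command names are illustrative:

    # Rotational gestures are reserved for mode transitions; the same swipe is
    # reused as a different command depending on the current mode.
    MODE_TRANSITIONS = {"rotate_cw": "zoom", "rotate_ccw": "pan"}
    COMMANDS = {
        "pan":  {"swipe_right": "pan_right", "swipe_left": "pan_left"},
        "zoom": {"swipe_right": "zoom_in",   "swipe_left": "zoom_out"},
    }

    def interpret(mode, gesture):
        """Return (new_mode, command) for a recognized gesture in the current mode."""
        if gesture in MODE_TRANSITIONS:
            return MODE_TRANSITIONS[gesture], None
        return mode, COMMANDS.get(mode, {}).get(gesture)

    mode = "pan"
    for g in ["swipe_right", "rotate_cw", "swipe_right"]:
        mode, command = interpret(mode, g)
        print(mode, command)       # pan pan_right / zoom None / zoom zoom_in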

[0396] In embodiments, a system may comprise an interactive head-mounted
eyepiece worn by a user, wherein the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content, wherein the optical assembly comprises a corrective
element that corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly; and an integrated camera facility that images a gesture,
wherein the integrated processor identifies and interprets the gesture as
a control instruction. The control instruction may provide manipulation
of the content for display, a command communicated to an external device,
and the like.

[0397] In embodiments, control of the eyepiece may be enabled through eye
movement, an action of the eye, and the like. For instance, there may be
a camera on the eyepiece that views back to the wearer's eye(s), where
eye movements or actions may be interpreted as command information, such
as through blinking, repetitive blinking, blink count, blink rate, eye
open-closed, gaze tracking, eye movements to the side, up and down, side
to side, through a sequence of positions, to a specific position, dwell
time in a position, gazing toward a fixed object (e.g. the corner of the
lens of the eyepiece), through a certain portion of the lens, at a
real-world object, and the like. In addition, eye control may enable the
viewer to focus on a certain point on the displayed image from the
eyepiece, and because the camera may be able to correlate the viewing
direction of the eye to a point on the display, the eyepiece may be able
to interpret commands through a combination of where the wearer is
looking and an action by the wearer (e.g. blinking, touching an interface
device, movement of a position sense device, and the like). For example,
the viewer may be able to look at an object on the display, and select
that object through the motion of a finger enabled through a position
sense device.

[0398] In some embodiments, the glasses may be equipped with eye tracking
devices for tracking movement of the user's eye, or preferably both eyes;
alternatively, the glasses may be equipped with sensors for six-degree
freedom of movement tracking, i.e., head movement tracking. These devices
or sensors are available, for example, from Chronos Vision GmbH, Berlin,
Germany and ISCAN, Woburn, Mass. Retinal scanners are also available for
tracking eye movement. Retinal scanners may also be mounted in the
augmented reality glasses and are available from a variety of companies,
such as Tobii, Stockholm, Sweden, and SMI, Teltow, Germany, and ISCAN.

[0399] The augmented reality eyepiece also includes a user input
interface, as shown, to allow a user to control the device. Inputs used
to control the device may include any of the sensors discussed above, and
may also include a trackpad, one or more function keys and any other
suitable local or remote device. For example, an eye tracking device may
be used to control another device, such as a video game or external
tracking device. As an example, FIG. 29A depicts a user with an augmented
reality eyepiece equipped with an eye tracking device 2900A, discussed
elsewhere in this document. The eye tracking device allows the eyepiece
to track the direction of the user's eye or preferably, eyes, and send
the movements to the controller of the eyepiece. The control system includes
the augmented reality eyepiece and a control device for the weapon. The
movements may then be transmitted to the control device for a weapon
controlled by the control device, which may be within sight of the user.
The movement of the user's eyes is then converted by suitable software to
signals for controlling movement in the weapon, such as quadrant (range)
and azimuth (direction). Additional controls may be used in conjunction
with eye tracking, such as with the user's trackpad or function keys. The
weapon may be large caliber, such as a howitzer or mortar, or may be
small caliber, such as a machine gun.

[0400] The movement of the user's eyes is then converted by suitable
software to signals for controlling movement of the weapon, such as
quadrant (range) and azimuth (direction) of the weapon. Additional
controls may be used for single or continuous discharges of the weapon,
such as with the user's trackpad or function keys. Alternatively, the
weapon may be stationary and non-directional, such as an implanted mine
or shape-charge, and may be protected by safety devices, such as by
requiring specific encoded commands. The user of the augmented reality
device may activate the weapon by transmitting the appropriate codes and
commands, without using eye-tracking features.

[0401] In embodiments, control of the eyepiece may be enabled through
gestures by the wearer. For instance, the eyepiece may have a camera that
views outward (e.g. forward, to the side, down) and interprets gestures
or movements of the hand of the wearer as control signals. Hand signals
may include passing the hand past the camera, hand positions or sign
language in front of the camera, pointing to a real-world object (such as
to activate augmentation of the object), and the like. Hand motions may
also be used to manipulate objects displayed on the inside of the
translucent lens, such as moving an object, rotating an object, deleting
an object, opening-closing a screen or window in the image, and the like.
Although hand motions have been used in the preceding examples, any
portion of the body or object held or worn by the wearer may also be
utilized for gesture recognition by the eyepiece.

[0402] In embodiments, head motion control may be used to send commands to
the eyepiece, where motion sensors such as accelerometers, gyros, or any
other sensor described herein, may be mounted on the wearer's head, on
the eyepiece, in a hat, in a helmet, and the like. Referring to FIG. 14A,
head motions may include quick motions of the head, such as jerking the
head in a forward and/or backward motion 1412, in an up and/or down
motion 1410, in a side to side motion as a nod, dwelling in a position,
such as to the side, moving and holding in position, and the like. Motion
sensors may be integrated into the eyepiece, mounted on the user's head
or in a head covering (e.g. hat, helmet) by wired or wireless connection
to the eyepiece, and the like. In embodiments, the user may wear the
interactive head-mounted eyepiece, where the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content. The optical assembly may include a corrective element
that corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly. At least one of a plurality of head motion sensing control
devices may be integrated or in association with the eyepiece that
provide control commands to the processor as command instructions based
upon sensing a predefined head motion characteristic. The head motion
characteristic may be a nod of the user's head such that the nod is an
overt motion dissimilar from ordinary head motions. The overt motion may
be a jerking motion of the head. The control instructions may provide
manipulation of the content for display, be communicated to control an
external device, and the like. Head motion control may be used in
combination with other control mechanisms, such as using another control
mechanism as discussed herein to activate a command and for the head
motion to execute it. For example, a wearer may want to move an object to
the right, and through eye control, as discussed herein, select the
object and activate head motion control. Then, by tipping their head to
the right, the object may be commanded to move to the right, and the
command terminated through eye control.
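
A minimal sketch of distinguishing an overt, jerking nod from ordinary
head motion using gyro pitch-rate samples is shown below; the rate
threshold and sample count are illustrative assumptions:

    def detect_overt_nod(pitch_rates_dps, threshold_dps=120.0, min_samples=3):
        """Flag a deliberate nod: pitch rate stays above a threshold that ordinary
        head motion rarely reaches for several consecutive samples."""
        run = 0
        for rate in pitch_rates_dps:
            run = run + 1 if abs(rate) > threshold_dps else 0
            if run >= min_samples:
                return True
        return False

    ordinary = [5, -8, 12, 7, -10, 6]                    # casual head movement
    deliberate = [10, 150, 180, 165, -140, -170, 20]     # quick down-and-up jerk
    print(detect_overt_nod(ordinary), detect_overt_nod(deliberate))   # False True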

[0403] In embodiments, the eyepiece may be controlled through audio, such
as through a microphone. Audio signals may include speech recognition,
voice recognition, sound recognition, sound detection, and the like.
Audio may be detected through a microphone on the eyepiece, a throat
microphone, a jaw bone microphone, a boom microphone, a headphone, an
ear bud with microphone, and the like.

[0404] In embodiments, command inputs may provide for a plurality of
control functions, such as turning on/off the eyepiece projector,
turning on/off audio, turning on/off a camera, turning on/off augmented
reality projection, turning on/off GPS, interaction with display (e.g.
select/accept
function displayed, replay of captured image or video, and the like),
interaction with the real-world (e.g. capture image or video, turn a page
of a displayed book, and the like), perform actions with an embedded or
external mobile device (e.g. mobile phone, navigation device, music
device, VoIP, and the like), browser controls for the Internet (e.g.
submit, next result, and the like), email controls (e.g. read email,
display text, text-to-speech, compose, select, and the like), GPS and
navigation controls (e.g. save position, recall saved position, show
directions, view location on map), and the like.

[0405] In embodiments, the eyepiece may provide 3D display imaging to the
user, such as through conveying a stereoscopic, auto-stereoscopic,
computer-generated holography, volumetric display image,
stereograms/stereoscopes, view-sequential displays, electro-holographic
displays, parallax "two view" displays and parallax panoramagrams,
re-imaging systems, and the like, creating the perception of 3D depth to
the viewer. Display of 3D images to the user may employ different images
presented to the user's left and right eyes, such as where the left and
right optical paths have some optical component that differentiates the
image, where the projector facility is projecting different images to the
user's left and right eyes, and the like. The optical path, including
from the projector facility through the optical path to the user's eye,
may include a graphical display device that forms a visual representation
of an object in three physical dimensions. A processor, such as the
integrated processor in the eyepiece or one in an external facility, may
provide 3D image processing as at least a step in the generation of the
3D image to the user.

[0406] In embodiments, holographic projection technologies may be employed
in the presentation of a 3D imaging effect to the user, such as
computer-generated holography (CGH), a method of digitally generating
holographic interference patterns. For instance, a holographic image may
be projected by a holographic 3D display, such as a display that operates
on the basis of interference of coherent light. Computer generated
holograms have the advantage that the objects which one wants to show do
not have to possess any physical reality at all, that is, they may be
completely generated as a `synthetic hologram`. There are a plurality of
different methods for calculating the interference pattern for a CGH,
including from the fields of holographic information and computational
reduction as well as in computational and quantization techniques. For
instance, the Fourier transform method and point source holograms are two
examples of computational techniques. The Fourier transformation method
may be used to simulate the propagation of each plane of depth of the
object to the hologram plane, where the reconstruction of the image may
occur in the far field. In an example process, there may be two steps,
where first the light field in the far observer plane is calculated, and
then the field is Fourier transformed back to the lens plane, where the
wavefront to be reconstructed by the hologram is the superposition of the
Fourier transforms of each object plane in depth. In another example, a
target image may be multiplied by a phase pattern to which an inverse
Fourier transform is applied. Intermediate holograms may then be
generated by shifting this image product, and combined to create a final
set. The final set of holograms may then be approximated to form
kinoforms for sequential display to the user, where the kinoform is a
phase hologram in which the phase modulation of the object wavefront is
recorded as a surface-relief profile. In the point source hologram method
the object is broken down in self-luminous points, where an elementary
hologram is calculated for every point source and the final hologram is
synthesized by superimposing all the elementary holograms.
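
As a non-limiting sketch of the Fourier transform method described above,
the following multiplies a target image by a random phase pattern,
inverse-transforms it to the hologram plane, and keeps only the phase as
a kinoform, with far-field reconstruction simulated by a forward
transform; array sizes and the random seed are illustrative assumptions:

    import numpy as np

    def kinoform_from_target(target_image):
        """Fourier-transform CGH: random phase, inverse FFT, keep the phase only."""
        rng = np.random.default_rng(0)
        phase = np.exp(1j * 2 * np.pi * rng.random(target_image.shape))
        field = np.fft.ifft2(np.fft.ifftshift(target_image * phase))
        return np.angle(field)                  # phase-only surface-relief profile

    def reconstruct(kinoform):
        """Simulate far-field reconstruction by propagating the phase hologram forward."""
        return np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * kinoform)))) ** 2

    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                  # simple square target object
    hologram = kinoform_from_target(target)
    image = reconstruct(hologram)               # intensity resembling the square target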

[0407] In an embodiment, 3-D or holographic imagery may be enabled by a
dual projector system where two projectors are stacked on top of each
other for a 3D image output. Holographic projection mode may be entered
by a control mechanism described herein or by capture of an image or
signal, such as an outstretched hand with palm up, an SKU, an RFID
reading, and the like. For example, a wearer of the eyepiece may view a
letter `X` on a piece of cardboard which causes the eyepiece to enter
holographic mode and turn on the second, stacked projector. Selecting
what hologram to display may be done with a control technique. The
projector may project the hologram onto the cardboard over the letter
`X`. Associated software may track the position of the letter `X` and
move the projected image along with the movement of the letter `X`. In
another example, the eyepiece may scan a SKU, such as a SKU on a toy
construction kit, and a 3-D image of the completed toy construction may
be accessed from an online source or non-volatile memory. Interaction
with the hologram, such as rotating it, zooming in/out, and the like, may
be done using the control mechanisms described herein. Scanning may be
enabled by associated bar code/SKU scanning software. In another example,
a keyboard may be projected in space or on a surface. The holographic
keyboard may be used in or to control any of the associated
applications/functions.

[0408] In embodiments, eyepiece facilities may provide for locking the
position of a virtual keyboard down relative to a real environmental
object (e.g. a table, a wall, a vehicle dashboard, and the like) where
the virtual keyboard then does not move as the wearer moves their head.
In an example, and referring to FIG. 24, the user may be sitting at a
table and wearing the eyepiece 2402, and wish to input text into an
application, such as a word processing application, a web browser, a
communications application, and the like. The user may be able to bring
up a virtual keyboard 2408, or other interactive control element (e.g.
virtual mouse, calculator, touch screen, and the like), to use for input.
The user may provide a command for bringing up the virtual keyboard 2408,
and use a hand gesture 2404 for indicating the fixed location of the
virtual keyboard 2408. The virtual keyboard 2408 may then remain fixed in
space relative to the outside environment, such as fixed to a location on
the table 2410, where the eyepiece facilities keep the location of the
virtual keyboard 2408 on the table 2410 even when the user turns their
head. That is, the eyepiece 2402 may compensate for the user's head
motion in order to keep the user's view of the virtual keyboard 2408
located on the table 2410. In embodiments, the user may wear the
interactive head-mounted eyepiece, where the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content. The optical assembly may include a corrective element
that corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly. An integrated camera facility may be provided that images the
surrounding environment, and identifies a user hand gesture as an
interactive control element location command, such as a hand-finger
configuration moved in a certain way, positioned in a certain way, and
the like. The location of the interactive control element then may remain
fixed in position with respect to an object in the surrounding
environment, in response to the interactive control element location
command, regardless of a change in the viewing direction of the user. In
this way, the user may be able to utilize a virtual keyboard in much the
same way they would a physical keyboard, where the virtual keyboard
remains in the same location. However, in the case of the virtual
keyboard there are not `physical limitations`, such as gravity, to limit
where the user may locate the keyboard. For instance, the user could be
standing next to a wall, and place the keyboard location on the wall, and
the like. It will be appreciated by one skilled in the art that the
`virtual keyboard` technology may be applied to any controller, such as a
virtual mouse, virtual touch pad, virtual game interface, virtual phone,
virtual calculator, virtual paintbrush, virtual drawing pad, and the
like. For example, a virtual touchpad may be visualized through the
eyepiece to the user, and positioned by the user such as by use of hand
gestures, and used in place of a physical touchpad.
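
A minimal sketch of the head-motion compensation that keeps a virtual
keyboard fixed to the table might re-express a world-fixed anchor point
in the current head frame on every frame, as below; the yaw-only rotation
and the coordinate convention are illustrative assumptions:

    import numpy as np

    def yaw_matrix(yaw_rad):
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def keyboard_in_head_frame(anchor_world, head_pos_world, head_yaw_rad):
        """Undo the head's translation and rotation so the keyboard stays on the table."""
        offset = np.asarray(anchor_world, dtype=float) - np.asarray(head_pos_world, dtype=float)
        return yaw_matrix(-head_yaw_rad) @ offset

    anchor = [0.4, 0.0, -0.3]                   # keyboard locked to a spot on the tabletop
    print(keyboard_in_head_frame(anchor, [0.0, 0.0, 0.0], 0.0))          # head facing ahead
    print(keyboard_in_head_frame(anchor, [0.0, 0.0, 0.0], np.pi / 6))    # head turned 30 deg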

[0409] In embodiments, eyepiece facilities may use visual techniques to
render the projection of an object (e.g. virtual keyboard, keypad,
calculator, notepad, joystick, control panel, book) onto a surface, such
as by applying distortions like parallax, keystone, and the like. For
example, the appearance of a keyboard projected onto a tabletop in front
of the user with proper perspective may be aided through applying a
keystone effect, where the projection as provided through the eyepiece to
the user is distorted so that it looks like it is lying down on the
surface of the table. In addition, these techniques may be applied
dynamically, to provide the proper perspective even as the user moves
around in relationship to the surface.

[0410] In embodiments, eyepiece facilities may provide for gesture
recognition that may be used to provide a keyboard and mouse experience
with the eyepiece. For instance, with images of a keyboard, mouse, and
fingers overlaid on the lower part of the display, the system may be
capable of tracking finger positions in real time to enable a virtual
desktop. Through gesture recognition, tracking may be done without wires
and external powered devices. In another instance, fingertip locations
may be tracked through gesture recognition through the eyepiece without
wires and external power, such as with gloves with passive RFID chips in
each fingertip. In this instance, each RFID chip may have its own
response characteristic, enabling a plurality of digits of the fingers to
be read simultaneously. The RFID chips may be paired with the eyewear so
that they are distinguishable from other RFID chips that may be operating
nearby. The eyewear may provide the signals to activate the RFID chips
and have two or more receiving antennas. Each receiving antenna may be
connected to a phase-measurement circuit element that in turn provides
input to a location-determining algorithm. The location-determining
algorithm may also provide velocity and acceleration information, and
the algorithm ultimately may provide keyboard and mouse information to
the eyepiece operating system. In embodiments, with two receiving
antennas, the azimuthal positions of each fingertip can be determined
with the phase difference between the receiving antennas. The relative
phase difference between RFID chips may then be used to determine the
radial positions of the fingertips.
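
By way of a non-limiting illustration of determining a fingertip's
azimuth from the phase difference measured between the two receiving
antennas, the following applies the standard two-element interferometer
relation; the antenna spacing and wavelength are illustrative values
only:

    import math

    def azimuth_from_phase(delta_phi_rad, antenna_spacing_m, wavelength_m):
        """Estimate azimuth from an interferometric phase difference (no ambiguity handling)."""
        s = delta_phi_rad * wavelength_m / (2 * math.pi * antenna_spacing_m)
        s = max(-1.0, min(1.0, s))              # clamp to the valid range of asin
        return math.degrees(math.asin(s))

    print(azimuth_from_phase(delta_phi_rad=0.8, antenna_spacing_m=0.12, wavelength_m=0.33))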

[0411] In embodiments, eyepiece facilities may use visual techniques to
render the projection of a previously taken medical scan onto the
wearer's body, such as an x-ray, an ultrasound, an MRI, a PET scan, and
the like. For example, and referring to FIG. 24A, the eyepiece may have
access to an x-ray image taken of the wearer's hand. The eyepiece may
then utilize its integrated camera to view the wearer's hand 2402A, and
overlay a projected image 2404A of the x-ray onto the hand. Further, the
eyepiece may be able to maintain the image overlay as the wearer moves
their hand and gaze relative to one another. In embodiments, this technique
may also be implemented while the wearer is looking in the mirror, where
the eyepiece transposes an image on top of the reflected image. This
technique may be used as part of a diagnostic procedure, for
rehabilitation during physical therapy, to encourage exercise and diet,
to explain to a patient a diagnosis or condition, and the like. The
images may be the images of the wearer, generic images from a database of
images for medical conditions, and the like. The generic overlay may show
some type of internal issue that is typical of a physical condition, a
projection of what the body will look like if a certain routine is
followed for a period of time, and the like. In embodiments, an external
control device, such as pointer controller, may enable the manipulation
of the image. Further, the overlay of the image may be synchronized
between multiple people, each wearing an eyepiece, as described herein.
For instance, a patient and a doctor may both project the image onto the
patient's hand, where the doctor may now explain a physical ailment while
the patient views the synchronized images of the projected scan and the
doctor's explanation.

[0412] In embodiments, eyepiece facilities may provide for removing the
portions of a virtual keyboard projection where intervening obstructions
appear (e.g. the user's hand getting in the way, where it is not desired
to project the keyboard onto the user's hand). In an example, and
referring to FIG. 30, the eyepiece 3002 may provide a projected virtual
keyboard 3008 to the wearer, such as onto a tabletop. The wearer may then
reach `over` the virtual keyboard 3008 to type. As the keyboard is merely
a projected virtual keyboard, rather than a physical keyboard, without
some sort of compensation to the projected image, the projected virtual
keyboard would be projected `onto` the back of the user's hand. However,
as in this example, the eyepiece may provide compensation to the
projected image such that the portion of the wearer's hand 3004 that is
obstructing the intended projection of the virtual keyboard onto the
table may be removed from the projection. That is, it may not be
desirable for portions of the keyboard projection 3008 to be visualized
onto the user's hand, and so the eyepiece subtracts the portion of the
virtual keyboard projection that is co-located with the wearer's hand
3004. In embodiments, the user may wear the interactive head-mounted
eyepiece, where the eyepiece includes an optical assembly through which
the user views a surrounding environment and displayed content. The
optical assembly may include a corrective element that corrects the
user's view of the surrounding environment, an integrated processor for
handling content for display to the user, and an integrated image source
for introducing the content to the optical assembly. The displayed
content may include an interactive control element (e.g. virtual
keyboard, virtual mouse, calculator, touch screen, and the like). An
integrated camera facility may image a user's body part as it interacts
with the interactive control element, wherein the processor removes a
portion of the interactive control element by subtracting the portion of
the interactive control element that is determined to be co-located with
the imaged user body part based on the user's view. In embodiments, this
technique of partial projected image removal may be applied to other
projected images and obstructions, and is not meant to be restricted to
this example of a hand over a virtual keyboard.
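
A minimal sketch of the subtraction step described above might make the
keyboard layer transparent wherever a hand-segmentation mask is set, as
below; the image sizes, the alpha-channel representation, and the mask
source are illustrative assumptions:

    import numpy as np

    def mask_projection(keyboard_rgba, hand_mask):
        """Zero the alpha of keyboard pixels co-located with the imaged hand."""
        out = keyboard_rgba.copy()
        out[hand_mask, 3] = 0                   # fully transparent over the hand
        return out

    keyboard = np.full((480, 640, 4), 255, dtype=np.uint8)    # opaque keyboard layer
    hand = np.zeros((480, 640), dtype=bool)
    hand[200:480, 250:450] = True                             # hand region from segmentation
    composited = mask_projection(keyboard, hand)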

[0413] In embodiments, eyepiece facilities may provide for intervening
obstructions for any virtual content that is displayed over "real" world
content. If some reference frame is determined that places the content at
some distance, then any object that passes between the virtual image and
the viewer may be subtracted from the displayed content so as not to
create a discontinuity for the user that is expecting the displayed
information to exist at a certain distance away. In embodiments, variable
focus techniques may also be used to increase the perception of a
distance hierarchy amongst the viewed content.

[0414] In embodiments, eyepiece facilities may provide for the ability to
determine an intended text input from a sequence of character contacts
swiped across a virtual keypad, such as with the finger, a stylus, the
entire hand, and the like. For example, and referring to FIG. 37, the
eyepiece may be projecting a virtual keyboard 3700, where the user wishes
to input the word `wind`. Normally, the user would discretely press the
key positions for `w`, then `i`, then `n`, and finally `d`, and a facility
(camera, accelerometer, and the like, such as described herein)
associated with the eyepiece would interpret each position as being the
letter for that position. However, the system may also be able to monitor
the movement, or swipe, of the user's finger or other pointing device
across the virtual keyboard and determine best fit matches for the
pointer movement. In the figure, the pointer has started at the character
`w` and swept a path 3704 through the characters e, r, t, y, u, i, k, n,
b, v, f, and d where it stops. The eyepiece may observe this sequence and
determine the sequence, such as through an input path analyzer, feed the
sensed sequence into a word matching search facility, and output a best
fit word, in this case `wind` as text 3708. In embodiments, the eyepiece
may monitor the motion of the pointing device across the keypad and
determine the word more directly, such as through auto-complete word
matching, pattern recognition, object recognition, and the like, where
some `separator` indicates the space between words, such as a pause in
the motion of the pointing device, a tap of the pointing device, a
swirling motion of the pointing device, and the like. For instance, the
entire swipe path may be used with pattern or object recognition
algorithms to associate whole words with the discrete patterns formed by
the user's finger as they move through each character to form words, with
a pause between the movements as demarcations between the words. The
eyepiece may provide the best-fit word, a listing of best-fit words, and
the like. In embodiments, the user may wear the interactive head-mounted
eyepiece, where the eyepiece includes an optical assembly through which
the user views a surrounding environment and displayed content. The
optical assembly may include a corrective element that corrects the
user's view of the surrounding environment, an integrated processor for
handling content for display to the user, and an integrated image source
for introducing the content to the optical assembly. The displayed
content may comprise an interactive keyboard control element (e.g. a
virtual keyboard, calculator, touch screen, and the like), and where the
keyboard control element is associated with an input path analyzer, a
word matching search facility, and a keyboard input interface. The user
may input text by sliding a pointing device (e.g. a finger, a stylus, and
the like) across character keys of the keyboard input interface in a
sliding motion through an approximate sequence of a word the user would
like to input as text, wherein the input path analyzer determines the
characters contacted in the input path, the word matching facility finds
a best word match to the sequence of characters contacted and inputs the
best word match as input text. In embodiments, the reference displayed
content may be something other than a keyboard, such as a sketch pad for
freehand text, or other interface references like a 4-way joystick pad
for controlling a game or real robots and aircraft, and the like. Another
example may be a virtual drum kit, such as with colored pads the user
"taps" to make a sound. The eyepiece's ability to interpret patterns of
motion across a surface may allow for projecting reference content in
order to give the user something to point at and provide them with visual
and/or audio feedback. In embodiments, the `motion` detected by the
eyepiece may be the motion of the user's eye as they look at the surface.
For example, the eyepiece may have facilities for tracking the eye
movement of the user, and by having both the content display locations of
a projected virtual keyboard and the gazing direction of the user's eye,
the eyepiece may be able to detect the line-of-sight motion of the user's
eye across the keyboard, and then interpret the motions as words as
described herein.
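
A non-limiting sketch of the word-matching step might keep dictionary
words whose first and last letters match the swipe endpoints and whose
letters appear in order along the swiped character sequence, preferring
the longest such word; the tiny dictionary and the scoring rule are
illustrative assumptions:

    def is_subsequence(word, path):
        it = iter(path)
        return all(ch in it for ch in word)     # letters of word appear in order in path

    def best_fit_word(swipe_path, dictionary):
        """Pick the longest word consistent with the swiped character sequence."""
        candidates = [w for w in dictionary
                      if w and w[0] == swipe_path[0] and w[-1] == swipe_path[-1]
                      and is_subsequence(w, swipe_path)]
        return max(candidates, key=len, default=None)

    path = "wertyuiknbvfd"                      # characters swept through on the keyboard
    print(best_fit_word(path, ["wind", "wed", "word", "wand"]))   # -> "wind"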

[0415] In embodiments, the eyepiece may provide the capability to command
the eyepiece via hand gesture `air lettering`, such as the wearer using
their finger to air swipe out a letter, word, and the like in view of an
embedded eyepiece camera, where the eyepiece interprets the finger motion
as letters, words, symbols for commanding, signatures, writing, emailing,
texting, and the like. For instance, the wearer may use this technique to
sign a document utilizing an `air signature`. The wearer may use this
technique to compose text, such as in an email, text, document, and the
like. The wearer eyepiece may recognize a symbol made through the hand
motion as a control command. In embodiments, the air lettering may be
implemented through hand gesture recognition as interpreted by images
captured through an eyepiece camera, or through other input control
devices, such as via an inertial measurement unit (IMU) mounted in a
device on the user's finger, hand, and the like, as described herein.

[0416] In embodiments, eyepiece facilities may provide for presenting
displayed content corresponding to an identified marker indicative of the
intention to display the content. That is, the eyepiece may be commanded
to display certain content based upon sensing a predetermined external
visual cue. The visual cue may be an image, an icon, a picture, face
recognition, a hand configuration, a body configuration, and the like.
The displayed content may be an interface device that is brought up for
use, a navigation aid to help the user find a location once they get to
some travel location, an advertisement when the eyepiece views a target
image, an informational profile, and the like. In embodiments, visual
marker cues and their associated content for display may be stored in
memory on the eyepiece, in an external computer storage facility and
imported as needed (such as by geographic location, proximity to a
trigger target, command by the user, and the like), generated by a
third-party, and the like. In embodiments, the user may wear the
interactive head-mounted eyepiece, where the eyepiece includes an optical
assembly through which the user views a surrounding environment and
displayed content. The optical assembly may include a corrective element
that corrects the user's view of the surrounding environment, an
integrated processor for handling content for display to the user, and an
integrated image source for introducing the content to the optical
assembly. An integrated camera facility may be provided that images an
external visual cue, wherein the integrated processor identifies and
interprets the external visual cue as a command to display content
associated with the visual cue. Referring to FIG. 38, in embodiments the
visual cue 3812 may be included in a sign 3814 in the surrounding
environment, where the projected content is associated with an
advertisement. The sign may be a billboard, and the advertisement may be
a personalized advertisement based on a preference profile of the user.
The visual cue 3802, 3808 may be a hand gesture, and the projected
content a projected virtual keyboard 3804, 3810. For instance, the hand gesture
may be a thumb and index finger gesture 3802 from a first user hand, and
the virtual keyboard 3804 projected on the palm of the first user hand,
and where the user is able to type on the virtual keyboard with a second
user hand. The hand gesture 3808 may be a thumb and index finger gesture
combination of both user hands, and the virtual keyboard 3810 projected
between the user hands as configured in the hand gesture, where the user
is able to type on the virtual keyboard using the thumbs of the user's
hands. Visual cues may provide the wearer of the eyepiece with an
automated resource for associating a predetermined external visual cue
with a desired outcome in the way of projected content, thus freeing the
wearer from searching for the cues themselves.
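
A minimal, hypothetical sketch of the cue-to-content association is shown
below: a recognizer label produced from the integrated camera's image is
looked up first in an on-eyepiece table and then, if absent, in an
external store (for example, one selected by geographic location). The
cue labels and content identifiers are examples, not disclosed values.

    # Hypothetical mapping of recognized visual cues to displayed content.
    LOCAL_CUE_CONTENT = {
        "thumb_index_one_hand": "virtual_keyboard_on_palm",
        "thumb_index_both_hands": "virtual_keyboard_between_hands",
        "billboard_target_image": "personalized_advertisement",
    }

    def content_for_cue(cue_label, remote_store=None):
        """Resolve a recognized visual cue to content for display."""
        if cue_label in LOCAL_CUE_CONTENT:
            return LOCAL_CUE_CONTENT[cue_label]
        if remote_store is not None:
            return remote_store.get(cue_label)   # assumed dict-like interface
        return None

    print(content_for_cue("thumb_index_one_hand"))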

[0417] In embodiments, the eyepiece may include a visual recognition
language translation facility for providing translations for visually
presented content, such as for road signs, menus, billboards, store
signs, books, magazines, and the like. The visual recognition language
translation facility may utilize optical character recognition to
identify letters from the content and match the strings of letters to
words and phrases through a database of translations. This capability may be
completely contained within the eyepiece, such as in an offline mode, or
at least in part in an external computing facility, such as on an
external server. For instance, a user may be in a foreign country, where
the signs, menus, and the like are not understood by the wearer of the
eyepiece, but for which the eyepiece is able to provide translations.
These translations may appear as an annotation to the user, replace the
foreign language words (such as on the sign) with the translation, be
provided through an audio translation to the user, and the like. In this
way, the wearer does not have to make the effort to look up word
translations, as they are provided automatically. In an example, an
Italian user of the eyepiece visiting the United States may need to
interpret the large number of road signs in order to drive around safely.
Referring to FIG. 38A, the Italian user of the eyepiece is viewing a U.S.
stop sign 3802A. In this instance, the eyepiece may identify the letters
on the sign, translate the word `stop` into the Italian for stop,
`arresto`, and make the stop sign 3804A appear to read the word `arresto`
rather than `stop`. In embodiments, the
eyepiece may also provide simple translation messages to the wearer,
provide audio translations, provide a translation dictionary to the
wearer, and the like.
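
The sketch below illustrates, under stated assumptions, the flow of the
visual recognition language translation facility: OCR text recovered from
a sign is matched against a translation table and returned either as a
replacement string (as in the `arresto` example) or as an annotation. The
tiny table stands in for the translation database, which could reside on
the eyepiece or on an external server.

    TRANSLATIONS_EN_IT = {"stop": "arresto", "exit": "uscita",
                          "one way": "senso unico"}

    def translate_sign(ocr_text, table=TRANSLATIONS_EN_IT, mode="replace"):
        """Translate OCR'd sign text, preserving upper case in replace mode."""
        translated = table.get(ocr_text.strip().lower())
        if translated is None:
            return ocr_text                          # leave unknown words as-is
        if mode == "replace":
            return translated.upper() if ocr_text.isupper() else translated
        return f"{ocr_text} ({translated})"          # annotation mode

    print(translate_sign("STOP"))                    # -> ARRESTO
    print(translate_sign("Exit", mode="annotate"))   # -> Exit (uscita)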

[0418] In one example, the eyepiece may be used in an adaptive
environment, such as for blind users. In embodiments, the results of face
recognition or object identification may be processed to obtain an
audible result and can be presented as audio to a wearer of the glasses
through associated earbuds/headphones. In other embodiments, the results
of face recognition or object identification may be translated into
haptic vibrations in the glasses or an associated controller. In an
example, if someone stands in front of a user of the adaptive glasses, a
camera may image the person and transmit the image to the integrated
processor for processing by face recognition software or to face
recognition software operating on a server or in the cloud. The results
of the face recognition may be presented as written text in the display
of the glasses for certain individuals, but for blind or poor vision
users, the result may be processed to obtain audio. In other examples,
object recognition may determine the user is approaching a curb, doorway,
or other object and the glasses or controller would audibly or haptically
warn the user. For poor vision users, the text on the display could be
magnified or the contrast could be increased.
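
A minimal sketch of this output routing is shown below, assuming a simple
user profile with `blind` and `low_vision` flags; the field names and the
haptic pattern are illustrative assumptions.

    def present_result(result_text, profile):
        """Route a recognition result to display, audio, or haptic output."""
        if profile.get("blind"):
            return [("audio", result_text), ("haptic", "short_double_pulse")]
        if profile.get("low_vision"):
            return [("display", (result_text, "magnified_high_contrast")),
                    ("audio", result_text)]
        return [("display", (result_text, "normal"))]

    print(present_result("Jane Doe is in front of you", {"blind": True}))
    print(present_result("Approaching a curb", {"low_vision": True}))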

[0419] In embodiments, a GPS sensor may be used to determine a location of
the user wearing the adaptive display. The GPS sensor may be accessed by
a navigation application to audibly announce various points of interest
to the user as they are approached or reached. In embodiments, the user
may be audibly guided to an endpoint by the navigation application.
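
As an illustration, the sketch below checks the eyepiece's GPS fix
against a small local list of points of interest and announces any within
a radius; the haversine distance, the 200 m radius, and the use of
print() in place of the eyepiece's text-to-speech output are assumptions.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two latitude/longitude points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    POINTS_OF_INTEREST = [("Fontaines de la Concorde", 48.8656, 2.3212),
                          ("Eiffel Tower", 48.8584, 2.2945)]

    def announce_nearby(lat, lon, radius_m=200, announce=print):
        for name, plat, plon in POINTS_OF_INTEREST:
            if haversine_m(lat, lon, plat, plon) <= radius_m:
                announce(f"Approaching {name}")

    announce_nearby(48.8655, 2.3210)   # announces the fountain only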

[0420] The eyepiece may be useful for various applications and markets. It
should be understood that the control mechanisms described herein may be
used to control the functions of the applications described herein. The
eyepiece may run a single application at a time or multiple applications
may run at a time. Switching between applications may be done with the
control mechanisms described herein. The eyepiece may be used in military
applications, gaming, image recognition applications, to view/order
e-books, GPS Navigation (Position, Direction, Speed and ETA), Mobile TV,
athletics (view pacing, ranking, and competition times; receive
coaching), telemedicine, industrial inspection, aviation, shopping,
inventory management tracking, firefighting (enabled by a VIS/NIR/SWIR
sensor that sees through fog, haze, and dark), outdoor/adventure, custom
advertising, and the like. In an embodiment, the eyepiece may be used
with e-mail, such as GMAIL in FIG. 7, the Internet, web browsing, viewing
sports scores, video chat, and the like. In an embodiment, the eyepiece
may be used for educational/training purposes, such as by displaying step
by step guides, such as hands-free, wireless maintenance and repair
instructions. For example, a video manual and/or instructions may be
displayed in the field of view. In an embodiment, the eyepiece may be
used in Fashion, Health, and Beauty. For example, potential outfits,
hairstyles, or makeup may be projected onto a mirror image of a user. In
an embodiment, the eyepiece may be used in Business Intelligence,
Meetings, and Conferences. For example, a user's name tag can be scanned,
their face run through a facial recognition system, or their spoken name
searched in a database to obtain biographical information. Scanned name
tags, faces, and conversations may be recorded for subsequent viewing or
filing.

[0421] In an embodiment, a "Mode" may be entered by the eyepiece. In the
mode, certain applications may be available. For example, a consumer
version of the eyepiece may have a Tourist Mode, Educational Mode,
Internet Mode, TV Mode, Gaming Mode, Exercise Mode, Stylist Mode,
Personal Assistant Mode, and the like.

[0422] A user of the augmented reality glasses may wish to participate in
video calling or video conferencing while wearing the glasses. Many
computers, both desktop and laptop have integrated cameras to facilitate
using video calling and conferencing. Typically, software applications
are used to integrate use of the camera with calling or conferencing
features. With the augmented reality glasses providing much of the
functionality of laptops and other computing devices, many users may wish
to utilize video calling and video conferencing while on the move wearing
the augmented reality glasses.

[0423] In an embodiment, a video calling or video conferencing application
may work with a WiFi connection, or may be part of a 3G or 4G calling
network associated with a user's cell phone. The camera for video calling
or conferencing is placed on a device controller, such as a watch or
other separate electronic computing device. Placing the video calling or
conferencing camera on the augmented reality glasses is not feasible, as
such placement would provide the user with a view only of themselves, and
would not display the other participants in the conference or call.
However, the user may choose to use the forward-facing camera to display
their surroundings or another individual in the video call.

[0424] FIG. 32 depicts a typical camera 3200 for use in video calling or
conferencing. Such cameras are typically small and could be mounted on a
watch 3202, as shown in FIG. 32, cell phone or other portable computing
device, including a laptop computer. Video calling works by connecting
the device controller with the cell phone or other communications device.
The devices utilize software compatible with the operating system of the
glasses and the communications device or computing device. In an
embodiment, the screen of the augmented reality glasses may display a
list of options for making the call and the user may gesture using a
pointing control device or use any other control technique described
herein to select the video calling option on the screen of the augmented
reality glasses.

[0425] FIG. 33 illustrates an embodiment 3300 of a block diagram of a
video-calling camera. The camera incorporates a lens 3302, a CCD/CMOS
sensor 3304, and analog to digital converters for video signals 3306 and
audio signals 3314. Microphone 3312 collects audio input. Both analog to
digital converters 3306 and 3314 send their output signals to a signal
enhancement module 3308. The signal enhancement module 3308 forwards the
enhanced signal, which is a composite of both video and audio signals, to
interface 3310. Interface 3310 is connected to an IEEE 1394 standard bus
interface, along with a control module 3316.

[0426] In operation, the video call camera depends on signal capture,
which transforms incident light, as well as incident sound, into
electrical signals. For light, this process is performed by the CCD or
CMOS chip 3304. The microphone transforms sound into electrical impulses.

[0427] The first step in the process of generating an image for a video
call is to digitize the image. The CCD or CMOS chip 3304 dissects the
image and converts it into pixels. If a pixel has collected many photons,
the voltage will be high. If the pixel has collected few photons, the
voltage will be low. This voltage is an analog value. During the second
step of digitization, the voltage is transformed into a digital value by
the analog to digital converter 3306, which handles image processing. At
this point, a raw digital image is available.
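
A toy illustration of this digitization step follows: each pixel's analog
voltage is clamped to the converter's range and quantized to an 8-bit
code. The full-scale voltage and bit depth are assumptions for the
example, not parameters of the disclosed converter 3306.

    def quantize(voltage, v_full_scale=1.0, bits=8):
        """Map an analog voltage onto an integer code, clamped to the ADC range."""
        levels = (1 << bits) - 1
        return round(max(0.0, min(voltage, v_full_scale)) / v_full_scale * levels)

    raw_pixels = [quantize(v) for v in (0.02, 0.45, 0.97, 1.30)]
    print(raw_pixels)   # -> [5, 115, 247, 255]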

[0428] Audio captured by the microphone 3312 is also transformed into a
voltage. This voltage is sent to the analog to digital converter 3314
where the analog values are transformed into digital values.

[0429] The next step is to enhance the signal so that it may be sent to
viewers of the video call or conference. Signal enhancement includes
creating color in the image using a color filter, located in front of the
CCD or CMOS chip 3304. This filter is red, green, or blue and changes its
color from pixel to pixel, and in an embodiment, may be a color filter
array, or Bayer filter. The raw digital images are then enhanced in the
signal enhancement module to meet aesthetic requirements. Audio data may
also be enhanced for a better calling experience.
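
A much-simplified sketch of color reconstruction behind such a color
filter array is given below: each 2x2 block of raw sensor values, assumed
here to follow an R, G / G, B pattern, is combined into one RGB pixel.
Real demosaicing interpolates missing color values per pixel rather than
per block.

    def rgb_from_bayer_block(block):
        """block is ((R, G1), (G2, B)); return a single (R, G, B) tuple."""
        (r, g1), (g2, b) = block
        return (r, (g1 + g2) // 2, b)

    print(rgb_from_bayer_block(((200, 120), (118, 64))))   # -> (200, 119, 64)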

[0430] In the final step before transmission, the image and audio data are
compressed and output as a digital video stream, in an embodiment using a
digital video camera. If a photo camera is used, single images may be
output, and in a further embodiment, voice comments may be appended to
the files. The enhancement of the raw digital data takes place away from
the camera, and in an embodiment may occur in the device controller or
computing device that the augmented reality glasses communicate with
during a video call or conference.

[0431] Further embodiments may provide for portable cameras for use in
industry, medicine, astronomy, microscopy, and other fields requiring
specialized camera use. These cameras often forgo signal enhancement and
output the raw digital image. These cameras may be mounted on other
electronic devices or the user's hand for ease of use.

[0432] The camera interfaces to the augmented reality glasses and the
device controller or computing device using an IEEE 1394 interface bus.
This interface bus transmits time-critical data, such as video, and data
whose integrity is critically important, including parameters or files to
manipulate data or transfer images.

[0433] In addition to the interface bus, protocols define the behavior of
the devices associated with the video call or conference. The camera for
use with the augmented reality glasses, may, in embodiments, employ one
of the following protocols: AV/C, DCAM, or SBP-2.

[0434] AV/C is a protocol for Audio Video Control and defines the behavior
of digital video devices, including video cameras and video recorders.

[0435] DCAM refers to the 1394 based Digital Camera Specification and
defines the behavior of cameras that output uncompressed image data
without audio.

[0436] SBP-2 refers to Serial Bus Protocol 2 and defines the behavior of
mass storage devices, such as hard drives or disks.

[0437] Devices that use the same protocol are able to communicate with
each other. Thus, for video calling using the augmented reality glasses,
the same protocol may be used by the video camera on the device
controller and the augmented reality glasses. Because the augmented
reality glasses, device controller, and camera use the same protocol,
data may be exchanged among these devices. Files that may be transferred
among devices include: image and audio files, image and audio data flows,
parameters to control the camera, and the like.
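
The sketch below illustrates the protocol check implied by the preceding
paragraphs: two devices on the IEEE 1394 bus can exchange files only when
they share a protocol. The device names and protocol assignments are
assumptions for the example.

    DEVICE_PROTOCOLS = {
        "ar_glasses": {"AV/C", "DCAM"},
        "controller_camera": {"AV/C"},
        "storage_unit": {"SBP-2"},
    }

    def common_protocol(dev_a, dev_b, table=DEVICE_PROTOCOLS):
        """Return a protocol both devices support, or None if they share none."""
        shared = table[dev_a] & table[dev_b]
        return next(iter(shared), None)

    print(common_protocol("ar_glasses", "controller_camera"))   # -> AV/C
    print(common_protocol("ar_glasses", "storage_unit"))        # -> None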

[0438] In an embodiment, a user desiring to initiate a video call may
select a video call option from a screen presented when the call process
is initiated. The user selects by making a gesture using a pointing
device, or by making a gesture that signals the selection of the video
call option. The
user then positions the camera located on the device controller,
wristwatch, or other separable electronic device so that the user's image
is captured by the camera. The image is processed through the process
described above and is then streamed to the augmented reality glasses and
the other participants for display to the users.

[0439] In embodiments, the camera may be mounted on a cell phone, personal
digital assistant, wristwatch, pendant, or other small portable device
capable of being carried, worn, or mounted. The images or video captured
by the camera may be streamed to the eyepiece. For example, when a camera
is mounted on a rifle, a wearer may be able to image targets not in the
line of sight and wirelessly receive imagery as a stream of displayed
content to the eyepiece.

[0440] In embodiments, the present disclosure may provide the wearer with
GPS-based content reception, as in FIG. 6. As noted, augmented reality
glasses of the present disclosure may include memory, a global
positioning system, a compass or other orienting device, and a camera.
GPS-based computer programs available to the wearer may include a number
of applications typically available from the Apple Inc. App Store for
iPhone use. Similar versions of these programs are available for other
brands of smart phone and may be applied to embodiments of the present
disclosure. These programs include, for example, SREngine (scene
recognition engine), NearestTube, TAT Augmented ID, Yelp, Layar, and
TwittARound, as well as other more specialized applications, such as
RealSki.

[0441] SREngine is a scene recognition engine that is able to identify
objects viewed by the user's camera. It is a software engine able to
recognize static scenes, such as scenes of architecture, structures,
pictures, objects, rooms, and the like. It is then able to automatically
apply a virtual "label" to the structures or objects according to what it
recognizes. For example, the program may be called up by a user of the
present disclosure when viewing a street scene, such as FIG. 6. Using a
camera of the augmented reality glasses, the engine will recognize the
Fontaines de la Concorde in Paris. The program will then summon a virtual
label, shown in FIG. 6 as part of a virtual image 618 projected onto the
lens 602. The label may be text only, as seen at the bottom of the image
618. Other labels applicable to this scene may include "fountain,"
"museum," "hotel," or the name of the columned building in the rear.
Other programs of this type may include the Wikitude AR Travel Guide,
Yelp and many others.

[0442] NearestTube, for example, uses the same technology to direct a user
to the closest subway station in London, and other programs may perform
the same function, or similar, in other cities. Layar is another
application that uses the camera, a compass or direction sensor, and GPS data to
identify a user's location and field of view. With this information, an
overlay or label may appear virtually to help orient and guide the user.
Yelp and Monocle perform similar functions, but their databases are
somewhat more specialized, helping to direct users in a similar manner to
restaurants or to other service providers.

[0443] The user may control the glasses, and call up these functions,
using any of the controls described in this patent. For example, the
glasses may be equipped with a microphone to pick up voice commands from
a user and process them using software contained within a memory of the
glasses. The user may then respond to prompts from small speakers or
earbuds also contained within the glasses frame. The glasses may also be
equipped with a tiny track pad, similar to those found on smartphones.
The trackpad may allow a user to move a pointer or indicator on the
virtual screen within the AR glasses, similar to a touch screen. When the
user reaches a desired point on the screen, the user depresses the track
pad to indicate his or her selection. Thus, a user may call up a program,
e.g., a travel guide, and then find his or her way through several menus,
perhaps selecting a country, a city and then a category. The category
selections may include, for example, hotels, shopping, museums,
restaurants, and so forth. The user makes his or her selections and is
then guided by the AR program. In one embodiment, the glasses also
include a GPS locator, and the present country and city provide default
locations that may be overridden.

[0444] In an embodiment, the eyepiece's object recognition software may
process the images being received by the eyepiece's forward facing camera
in order to determine what is in the field of view. In other embodiments,
the GPS coordinates of the location as determined by the eyepiece's GPS
may be enough to determine what is in the field of view. In other
embodiments, an RFID or other beacon in the environment may be
broadcasting a location. Any one or combination of the above may be used
by the eyepiece to identify the location and the identity of what is in
the field of view.

[0445] When an object is recognized, the resolution for imaging that
object may be increased or images or video may be captured at low
compression. Additionally, the resolution for other objects in the user's
view may be decreased, or captured at a higher compression rate in order
to decrease the needed bandwidth.

[0446] Once determined, content related to points of interest in the field
of view may be overlaid on the real world image, such as social
networking content, interactive tours, local information, and the like.
Information and content related to movies, local information, weather,
restaurants, restaurant availability, local events, local taxis, music,
and the like may be accessed by the eyepiece and projected on to the lens
of the eyepiece for the user to view and interact with. For example, as
the user looks at the Eiffel Tower, the forward facing camera may take an
image and send it for processing to the eyepiece's associated processor.
Object recognition software may determine that the structure in the
wearer's field of view is the Eiffel Tower. Alternatively, the GPS
coordinates determined by the eyepiece's GPS may be searched in a
database to determine that the coordinates match those of the Eiffel
Tower. In any event, content relating to the Eiffel Tower may then be
searched, such as visitor information, restaurants in the vicinity and in
the Tower
itself, local weather, local Metro information, local hotel information,
other nearby tourist spots, and the like. Interacting with the content
may be enabled by the control mechanisms described herein. In an
embodiment, GPS-based content reception may be enabled when a Tourist
Mode of the eyepiece is entered.
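
A hypothetical sketch of the GPS-based lookup in the Eiffel Tower example
is shown below: the eyepiece's coordinates are matched against a landmark
database and a set of related content queries is assembled. The
coordinate tolerance, landmark table, and topic list are assumptions.

    LANDMARKS = {"Eiffel Tower": (48.8584, 2.2945)}

    def identify_landmark(lat, lon, tolerance_deg=0.003):
        """Return the landmark whose stored coordinates match the GPS fix."""
        for name, (plat, plon) in LANDMARKS.items():
            if abs(lat - plat) <= tolerance_deg and abs(lon - plon) <= tolerance_deg:
                return name
        return None

    def related_content_queries(landmark):
        topics = ["visitor information", "restaurants nearby", "local weather",
                  "Metro information", "hotel information", "nearby tourist spots"]
        return [f"{landmark} {topic}" for topic in topics]

    landmark = identify_landmark(48.8585, 2.2947)
    if landmark:
        print(related_content_queries(landmark)[:3])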

[0447] In an embodiment, the eyepiece may be used to view streaming video.
For example, videos may be identified via search by GPS location, search
by object recognition of an object in the field of view, a voice search,
a holographic keyboard search, and the like. Continuing with the example
of the Eiffel Tower, a video database may be searched via the GPS
coordinates of the Tower or by the term `Eiffel Tower` once it has been
determined that is the structure in the field of view. Search results may
include geo-tagged videos or videos associated with the Eiffel Tower. The
videos may be scrolled or flipped through using the control techniques
described herein. Videos of interest may be played using the control
techniques described herein. The video may be laid over the real world
scene or may be displayed on the lens out of the field of view. In an
embodiment, the eyepiece may be darkened via the mechanisms described
herein to enable higher contrast viewing. In another example, the
eyepiece may be able to utilize a camera and network connectivity, such
as described herein, to provide the wearer with streaming video
conferencing capabilities.

[0448] As noted, the user of augmented reality may receive content from an
abundance of sources. A visitor or tourist may desire to limit the
choices to local businesses or institutions; on the other hand,
businesses seeking out visitors or tourists may wish to limit their
offers or solicitations to persons who are in their area or location but
who are visiting rather than local residents. Thus, in one embodiment,
the visitor or tourist may limit his or her search only to local
businesses, say those within certain geographic limits. These limits may
be set via GPS criteria or by manually indicating a geographic
restriction. For example, a person may require that sources of streaming
content or ads be limited to those within a certain radius (a set number
of km or miles) of the person. Alternatively, the criteria may require
that the sources are limited to those within a certain city or province.
These limits may be set by the augmented reality user just as a user of a
computer at a home or office would limit his or her searches using a
keyboard or a mouse; the entries for augmented reality users are simply
made by voice, by hand motion, or other ways described elsewhere in the
portions of this disclosure discussing controls.
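
An illustrative filter for such limits is sketched below: a content
source passes only if it is in the required city and within the required
radius of the user. The source records, the flat-earth distance
approximation, and the 10 km radius are assumptions for the example.

    def within_limits(source, user_city=None, user_pos=None, radius_km=None,
                      distance_km=None):
        """source is a dict with 'city' and 'pos'; distance_km is a callable."""
        if user_city is not None and source["city"] != user_city:
            return False
        if radius_km is not None and user_pos is not None:
            if distance_km(user_pos, source["pos"]) > radius_km:
                return False
        return True

    # Rough degrees-to-km conversion, adequate for a short-range sketch.
    rough_km = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 111

    sources = [{"name": "Museum feed", "city": "Paris", "pos": (48.86, 2.34)},
               {"name": "Far-away ad", "city": "Lyon", "pos": (45.76, 4.84)}]
    print([s["name"] for s in sources
           if within_limits(s, user_city="Paris", user_pos=(48.85, 2.35),
                            radius_km=10, distance_km=rough_km)])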

[0449] In addition, the available content chosen by a user may be
restricted or limited by the type of provider. For example, a user may
restrict choices to those with a website operated by a government
institution (.gov) or by a non-profit institution or organization (.org).
In this way, a tourist or visitor who may be more interested in visiting
government offices, museums, historical sites and the like, may find his
or her choices less cluttered. The person may be more easily able to make
decisions when the available choices have been pared down to a more
reasonable number. The ability to quickly cut down the available choices
is desirable in more urban areas, such as Paris or Washington, D.C.,
where there are many choices.

[0450] The user controls the glasses in any of the manners or modes
described elsewhere in this patent. For example, the user may call up a
desired program or application by voice or by indicating a choice on the
virtual screen of the augmented reality glasses. The augmented glasses
may respond to a track pad mounted on the frame of the glasses, as
described above. Alternatively, the glasses may be responsive to one or
more motion or position sensors mounted on the frame. The signals from
the sensors are then sent to a microprocessor or microcontroller within
the glasses, the glasses also providing any needed signal transducing or
processing. Once the program of choice has begun, the user makes
selections and enters a response by any of the methods discussed herein,
such as signaling "yes" or "no" with a head movement, a hand gesture, a
trackpad depression, or a voice command.

[0451] At the same time, content providers, that is, advertisers, may also
wish to restrict their offerings to persons who are within a certain
geographic area, e.g., their city limits. Conversely, an
advertiser, perhaps a museum, may not wish to offer content to local
persons, but may wish to reach visitors or out-of-towners. In another
example, advertisements may not be presented when the user is home but
may be presented when the user is traveling or away from home. The
augmented reality devices discussed herein are desirably equipped with
both GPS capability and telecommunications capability and an integrated
processor for implementing geographic-based rules for advertisement
presentation. It will be a simple matter for the museum to provide
streaming content within a limited area by limiting its broadcast power.
The museum, however, may provide the content through the Internet and its
content may be available world-wide. In this instance, a user may receive
content through an augmented reality device advising that the museum is
open today and is available for touring.

[0452] The user may respond to the content by the augmented reality
equivalent of clicking on a link for the museum. The augmented reality
equivalent may be a voice indication, a hand or eye movement, or other
sensory indication of the user's choice, or by using an associated
body-mounted controller. The museum then receives a cookie indicating the
identity of the user or at least the user's internet service provider
(ISP). If the cookie indicates or suggests an internet service provider
other than local providers, the museum server may then respond with
advertisements or offers tailored to visitors. The cookie may also
include an indication of a telecommunications link, e.g., a telephone
number. If the telephone number is not a local number, this is an
additional clue that the person responding is a visitor. The museum or
other institution may then follow up with the content desired or
suggested by its marketing department.

[0453] Another application of the augmented reality eyepiece takes
advantage of a user's ability to control the eyepiece and its tools with
a minimum use of the user's hands, using instead voice commands, gestures
or motions. As noted above, a user may call upon the augmented reality
eyepiece to retrieve information. This information may already be stored
in a memory of the eyepiece, but may instead be located remotely, such as
a database accessible over the Internet or perhaps via an intranet which
is accessible only to employees of a particular company or organization.
The eyepiece may thus be compared to a computer or to a display screen
which can be viewed and heard at an extremely close range and generally
controlled with a minimal use of one's hands.

[0454] Applications may thus include providing information on-the-spot to
a mechanic or electronics technician. The technician can don the glasses
when seeking information about a particular structure or problem
encountered, for example, when repairing an engine or a power supply.
Using voice commands, he or she may then access the database and search
within the database for particular information, such as manuals or other
repair and maintenance documents. The desired information may thus be
promptly accessed and applied with a minimum of effort, allowing the
technician to more quickly perform the needed repair or maintenance and
to return the equipment to service. For mission-critical equipment, such
time savings may also save lives, in addition to saving repair or
maintenance costs.

[0455] The information imparted may include repair manuals and the like,
but may also include a full range of audio-visual information, i.e., the
eyepiece screen may display to the technician or mechanic a video of how
to perform a particular task at the same time the person is attempting to
perform the task. The augmented reality device also includes
telecommunications capabilities, so the technician also has the ability
to call on others to assist if there is some complication or unexpected
difficulty with the task. This educational aspect of the present
disclosure is not limited to maintenance and repair, but may be applied
to any educational endeavor, such as secondary or post-secondary classes,
continuing education courses or topics, seminars, and the like.

[0456] In an embodiment, a Wi-Fi enabled eyepiece may run a location-based
application for geo-location of opted-in users. Users may opt-in by
logging into the application on their phone and enabling broadcast of
their location, or by enabling geo-location on their own eyepiece. As a
wearer of the eyepiece scans people, and thus their opted-in device, the
application may identify opted-in users and send an instruction to the
projector to project an augmented reality indicator on an opted-in user
in the user's field of view. For example, green rings may be placed
around people who have opted-in to have their location seen. In another
example, yellow rings may indicate people who have opted-in but don't
meet some criteria, such as they do not have a FACEBOOK account, or that
there are no mutual friends if they do have a FACEBOOK account.
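
A minimal sketch of the overlay decision just described follows: an
opted-in person receives a ring whose color reflects additional criteria,
here whether a social networking account and mutual friends exist. The
field names and criteria are illustrative assumptions.

    def ring_color(person):
        """Return the augmented reality ring color for a scanned person."""
        if not person.get("opted_in"):
            return None                          # no indicator at all
        if person.get("has_social_account") and person.get("mutual_friends", 0) > 0:
            return "green"
        return "yellow"

    print(ring_color({"opted_in": True, "has_social_account": True,
                      "mutual_friends": 3}))                       # green
    print(ring_color({"opted_in": True, "has_social_account": False}))  # yellow
    print(ring_color({"opted_in": False}))                             # None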

[0457] Some social networking, career networking, and dating applications
may work in concert with the location-based application. Software
resident on the eyepiece may coordinate data from the networking and
dating sites and the location-based application. For example, TwittARound
is one such program which makes use of a mounted camera to detect and
label location-stamped tweets from other tweeters nearby. This will
enable a person using the present disclosure to locate other nearby
Twitter users. Alternatively, users may have to set their devices to
coordinate information from various networking and dating sites. For
example, the wearer of the eyepiece may want to see all E-HARMONY users
who are broadcasting their location. If an opted-in user is identified by
the eyepiece, an augmented reality indicator may be laid over the
opted-in user. The indicator may take on a different appearance if the
user has something in common with the wearer, many things in common with
the wearer, and the like. For example, and referring to FIG. 16, two people
are being viewed by the wearer. Both of the people are identified as
E-HARMONY users by the rings placed around them. However, the woman shown
with solid rings has more than one item in common with the wearer while
the woman shown with dotted rings has no items in common with the wearer.
Any available profile information may get accessed and displayed to the
user.

[0458] In an embodiment, when the wearer directs the eyepiece in the
direction of a user who has a networking account, such as FACEBOOK,
TWITTER, BLIPPY, LINKEDIN, GOOGLE, WIKIPEDIA, and the like, the user's
recent posts or profile information may be displayed to the wearer. For
example, recent status updates, "tweets", "blips", and the like may get
displayed, as mentioned above for TwittARound. In an embodiment, when the
wearer points the eyepiece in a target user's direction, they may
indicate interest in the user if the eyepiece is pointed for a duration
of time and/or a gesture, head, eye, or audio control is activated. The
target user may receive an indication of interest on their phone or in
their glasses. If the target user had marked the wearer as interesting
but was waiting on the wearer to show interest first, an indication may
immediately pop up in the eyepiece of the target user's interest. A
control mechanism may be used to capture an image and store the target
user's information on associated non-volatile memory or in an online
account.

[0459] In other applications for social networking, a facial recognition
program, such as TAT Augmented ID, from TAT--The Astonishing Tribe,
Malmo, Sweden, may be used. Such a program uses facial recognition
software to identify a person by his or her facial
characteristics. Using other applications, such
as photo identifying software from Flickr, one can then identify the
particular nearby person, and one can then download information from
social networking sites with information about the person. This
information may include the person's name and the profile the person has
made available on sites such as Facebook, Twitter, and the like. This
application may be used to refresh a user's memory of a person or to
identify a nearby person, as well as to gather information about the
person.

[0460] In other applications for social networking, the wearer may be able
to utilize location-based facilities of the eyepiece to leave notes,
comments, reviews, and the like, at locations, in association with
people, places, products, and the like. For example, a person may be able
to post a comment on a place they visited, where the posting may then be
made available to others through the social network. In another example,
a person may be able to post that comment at the location of the place
such that the comment is available when another person comes to that
location. In this way, a wearer may be able to access comments left by
others when they come to the location. For instance, a wearer may come to
the entrance to a restaurant, and be able to access reviews for the
restaurant, such as sorted by some criteria (e.g. most recent review, age
of reviewer, and the like).

[0461] A user may initiate the desired program by voice, by selecting a
choice from a virtual touchscreen, as described above, by using a
trackpad to select and choose the desired program, or by any of the
control techniques described herein. Menu selections may then be made in
a similar or complementary manner. Sensors or input devices mounted in
convenient locations on the user's body may also be used, e.g., sensors
and a track pad mounted on a wrist pad, on a glove, or even a discreet
device, perhaps the size of a smart phone or a personal digital
assistant.

[0462] Applications of the present disclosure may provide the wearer with
Internet access, such as for browsing, searching, shopping,
entertainment, and the like, such as through a wireless communications
interface to the eyepiece. For instance, a wearer may initiate a web
search with a control gesture, such as through a control facility worn on
some portion of the wearer's body (e.g. on the hand, the head, the foot),
on some component being used by the wearer (e.g. a personal computer, a
smart phone, a music player), on a piece of furniture near the wearer
(e.g. a chair, a desk, a table, a lamp), and the like, where the image of
the web search is projected for viewing by the wearer through the
eyepiece. The wearer may then view the search through the eyepiece and
control web interaction through the control facility.

[0463] In an example, a user may be wearing an embodiment configured as a
pair of glasses, with the projected image of an Internet web browser
provided through the glasses while retaining the ability to
simultaneously view at least portions of the surrounding real
environment. In this instance, the user may be wearing a motion sensitive
control facility on their hand, where the control facility may transmit
relative motion of the user's hand to the eyepiece as control motions for
web control, such as similar to that of a mouse in a conventional
personal computer configuration. It is understood that the user would be
enabled to perform web actions in a similar fashion to that of a
conventional personal computer configuration. In this case, the image of
the web search is provided through the eyepiece while control for
selection of actions to carry out the search is provided through motions
of the hand. For instance, the overall motion of the hand may move a
cursor within the projected image of the web search, the flick of the
finger(s) may provide a selection action, and so forth. In this way, the
wearer may be enabled to perform the desired web search, or any other
Internet browser-enabled function, through an embodiment connected to the
Internet. In one example, a user may have downloaded computer programs
Yelp or Monocle, available from the App Store, or a similar product, such
as NRU ("near you"), an application from Zagat to locate nearby
restaurants or other stores, Google Earth, Wikipedia, or the like. The
person may initiate a search, for example, for restaurants, or other
providers of goods or services, such as hotels, repairmen, and the like,
or information. When the desired information is found, locations are
displayed or a distance and direction to a desired location is displayed.
The display may take the form of a virtual label co-located with the real
world object in the user's view.

[0464] Other applications from Layar (Amsterdam, the Netherlands) include
a variety of "layers" tailored for specific information desired by a
user. A layer may include restaurant information, information about a
specific company, real estate listings, gas stations, and so forth. Using
the information provided in a software application, such as a mobile
application and a user's global positioning system (GPS), information may
be presented on a screen of the glasses with tags having the desired
information. Using the haptic controls or other control discussed
elsewhere in this disclosure, a user may pivot or otherwise rotate his or
her body and view buildings tagged with virtual tags containing
information. If the user seeks restaurants, the screen will display
restaurant information, such as name and location. If a user seeks a
particular address, virtual tags will appear on buildings in the field of
view of the wearer. The user may then make selections or choices by
voice, by trackpad, by virtual touch screen, and so forth.

[0465] Applications of the present disclosure may provide a way for
advertisements to be delivered to the wearer. For example, advertisements
may be displayed to the viewer through the eyepiece as the viewer is
going about his or her day, while browsing the Internet, conducting a web
search, walking through a store, and the like. For instance, the user may
be performing a web search, and through the web search the user is
targeted with an advertisement. In this example, the advertisement may be
projected in the same space as the projected web search, floating off to
the side, above, or below the view angle of the wearer. In another
example, advertisements may be triggered for delivery to the eyepiece
when some advertising providing facility, perhaps one in proximity to the
wearer, senses the presence of the eyepiece (e.g. through a wireless
connection, RFID, and the like), and directs the advertisement to the
eyepiece. In embodiments, the eyepiece may be used for tracking of
advertisement interactions, such as the user seeing or interacting with a
billboard, a promotion, an advertisement, and the like. For instance,
the user's behavior with respect to advertisements may be tracked, such as to
provide benefits, rewards, and the like to the user. In an example, the
user may be paid five dollars in virtual cash whenever they see a
billboard. The eyepiece may provide impression tracking, such as based on
seeing branded images (e.g. based on time, geography), and the like. As a
result, offers may be targeted based on the location and the event
related to the eyepiece, such as what the user saw, heard, interacted
with, and the like. In embodiments, ad targeting may be based on
historical behavior, such as based on what the user has interacted with
in the past, patterns of interactions, and the like.

[0466] For example, the wearer may be window-shopping in Manhattan, where
stores are equipped with such advertising providing facilities. As the
wearer walks by the stores, the advertising providing facilities may
trigger the delivery of an advertisement to the wearer based on a known
location of the user determined by an integrated location sensor of the
eyepiece, such as a GPS. In an embodiment, the location of the user may
be further refined via other integrated sensors, such as a magnetometer
to enable hyperlocal augmented reality advertising. For example, a user
on a ground floor of a mall may receive certain advertisements if the
magnetometer and GPS readings place the user in front of a particular
store. When the user goes up one flight in the mall, the GPS location may
remain the same, but the magnetometer reading may indicate a change in
elevation of the user and a new placement of the user in front of a
different store. In embodiments, one may store personal profile
information such that the advertising providing facility is able to
better match advertisements to the needs of the wearer, the wearer may
provide preferences for advertisements, the wearer may block at least
some of the advertisements, and the like. The wearer may also be able to
pass advertisements, and associated discounts, on to friends. The wearer
may communicate them directly to friends that are in close proximity and
enabled with their own eyepiece; they may also communicate them through a
wireless Internet connection, such as to a social network of friends,
through email, SMS, and the like. The wearer may be connected to
facilities and/or infrastructure that enables the communication of
advertisements from a sponsor to the wearer; feedback from the wearer to
an advertisement facility, the sponsor of the advertisement, and the
like; to other users, such as friends and family, or someone in proximity
to the wearer; to a store, such as locally on the eyepiece or in a remote
site, such as on the Internet or on a user's home computer; and the like.
These interconnectivity facilities may include integrated facilities to
the eyepiece to provide the user's location and gaze direction, such as
through the use of GPS, 3-axis sensors, magnetometer, gyros,
accelerometers, and the like, for determining direction, speed, attitude
(e.g. gaze direction) of the wearer. Interconnectivity facilities may
provide telecommunications facilities, such as cellular link, a WiFi/MiFi
bridge, and the like. For instance, the wearer may be able to communicate
through an available WiFi link, through an integrated MiFi (or any other
personal or group cellular link) to the cellular system, and the like.
There may be facilities for the wearer to store advertisements for a
later use. There may be facilities integrated with the wearer's eyepiece
or located in local computer facilities that enable caching of
advertisements, such as within a local area, where the cached
advertisements may enable the delivery of the advertisements as the
wearer nears the location associated with the advertisement. For example,
local advertisements may be stored on a server that contains geo-located
local advertisements and specials, and these advertisements may be
delivered to the wearer individually as the wearer approaches a
particular location, or a set of advertisements may be delivered to the
wearer in bulk when the wearer enters a geographic area that is
associated with the advertisements so that the advertisements are
available when the user nears a particular location. The geographic
location may be a city, a part of the city, a number of blocks, a single
block, a street, a portion of the street, sidewalk, and the like,
representing regional, local, hyper-local areas. Note that the preceding
discussion uses the term advertisement, but one skilled in the art will
appreciate that this can also mean an announcement, a broadcast, a
circular, a commercial, a sponsored communication, an endorsement, a
notice, a promotion, a bulletin, a message, and the like.
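
A minimal sketch of the caching behavior described above follows: the
advertisements associated with a geographic area are loaded in bulk when
the wearer enters it, then surfaced individually as the wearer nears each
advertisement's location. The area names, coordinates, and the 200 m
radius are assumptions for illustration.

    class AdCache:
        def __init__(self, area_ads):
            self.area_ads = area_ads     # {area: [(ad_text, (lat, lon)), ...]}
            self.cached = []

        def enter_area(self, area_name):
            """Bulk-load the advertisements associated with a geographic area."""
            self.cached = list(self.area_ads.get(area_name, []))

        def nearby(self, pos, distance_km, radius_km=0.2):
            """Return cached ads whose location is within radius_km of pos."""
            return [ad for ad, loc in self.cached
                    if distance_km(pos, loc) <= radius_km]

    rough_km = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 111
    cache = AdCache({"SoHo": [("2-for-1 espresso", (40.723, -74.002)),
                              ("Gallery opening tonight", (40.725, -73.998))]})
    cache.enter_area("SoHo")
    print(cache.nearby((40.7231, -74.0021), rough_km))   # espresso ad only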

[0467] FIGS. 18-20A depict ways to deliver custom messages to persons
within a short distance of an establishment that wishes to send a
message, such as a retail store. Referring to FIG. 18 now, embodiments
may provide for a way to view custom billboards, such as when the wearer
of the eyepiece is walking or driving, by applications as mentioned above
for searching for providers of goods and services. As depicted in FIG.
18, the billboard 1800 shows an exemplary augmented reality-based
advertisement displayed by a seller or a service provider. The exemplary
advertisement, as depicted, may relate to an offer on drinks by a bar.
For example, two drinks may be provided for the cost of just one drink.
With such augmented reality-based advertisements and offers, the wearer's
attention may be easily directed towards the billboards. The billboards
may also provide details about location of the bar such as street
address, floor number, phone number, and the like. In accordance with
other embodiments, several devices other than the eyepiece may be utilized to
view the billboards. These devices may include without limitations smart
phones, IPHONEs, IPADs, car windshields, user glasses, helmets,
wristwatches, headphones, vehicle mounts, and the like. In accordance
with an embodiment, a user (wearer in case the augmented reality
technology is embedded in the eyepiece) may automatically receive offers
or view a scene of the billboards as the user passes or drives by on the
road. In accordance with another embodiment, the user may receive
offers or view the scene of the billboards based on his request.

[0468] FIG. 19 illustrates two exemplary roadside billboards 1900
containing offers and advertisements from sellers or service providers
that may be viewed in the augmented reality manner. The augmented
advertisement may provide a live and near-to-reality perception to the
user or the wearer.

[0469] As illustrated in FIG. 20, the augmented reality enabled device
such as the camera lens provided in the eyepiece may be utilized to
receive and/or view graffiti 2000, slogans, drawings, and the like, that
may be displayed on the roadside or on top, side, front of the buildings
and shops. The roadside billboards and the graffiti may have a visual
(e.g. a code, a shape) or wireless indicator that may link the
advertisement, or advertisement database, to the billboard. When the
wearer nears and views the billboard, a projection of the billboard
advertisement may then be provided to the wearer. In embodiments, one may
also store personal profile information such that the advertisements may
better match the needs of the wearer, the wearer may provide preferences
for advertisements, the wearer may block at least some of the
advertisements, and the like. In embodiments, the eyepiece may have
brightness and contrast control over the eyepiece projected area of the
billboard so as to improve readability for the advertisement, such as in
a bright outside environment.

[0470] In other embodiments, users may post information or messages on a
particular location, based on its GPS location or other indicator of
location, such as a magnetometer reading. The intended viewer is able to
see the message when the viewer is within a certain distance of the
location, as explained with FIG. 20A. In a first step 2001 of the method
FIG. 20A, a user decides the location where the message is to be received
by persons to whom the message is sent. The message is then posted 2003,
to be sent to the appropriate person or persons when the recipient is
close to the intended "viewing area." Location of the wearers of the
augmented reality eyepiece is continuously updated 2005 by the GPS system
which forms a part of the eyepiece. When the GPS system determines that
the wearer is within a certain distance of the desired viewing area,
e.g., 10 meters, the message is then sent 2007 to the viewer. In one
embodiment, the message then appears as e-mail or a text message to the
recipient, or if the recipient is wearing an eyepiece, the message may
appear in the eyepiece. Because the message is sent to the person based
on the person's location, in one sense, the message may be displayed as
"graffiti" on a building or feature at or near the specified location.
Specific settings may be used to determine whether all passersby to the
"viewing area" can see the message or whether only a specific person, a
group of people, or devices with specific identifiers can see it. For
example, a soldier
clearing a village may virtually mark a house as cleared by associating a
message or identifier with the house, such as a big X marking the
location of the house. The soldier may indicate that only other American
soldiers may be able to receive the location-based content. When other
American soldiers pass the house, they may receive an indication
automatically, such as by seeing the virtual `X` on the side of the house
if they have an eyepiece or some other augmented reality-enabled device,
or by receiving a message indicating that the house has been cleared. In
another example, content related to safety applications may be streamed
to the eyepiece, such as alerts, target identification, communications,
and the like.
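
A hypothetical sketch of the delivery decision in the flow of FIG. 20A is
shown below: a posted message carries a viewing area and an audience, and
is delivered once a permitted recipient's continuously updated position
comes within the specified distance. The coordinates, the audience set,
and the rough distance conversion are assumptions.

    def should_deliver(message, recipient_id, recipient_pos, distance_m):
        """Deliver only inside the viewing area and only to permitted recipients."""
        within = distance_m(recipient_pos, message["location"]) <= message["radius_m"]
        allowed = (message["audience"] is None           # None means all passersby
                   or recipient_id in message["audience"])
        return within and allowed

    rough_m = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 111_000
    cleared_house = {"text": "House cleared (virtual X)",
                     "location": (34.5000, 69.2000),
                     "radius_m": 10,
                     "audience": {"us_soldier_17", "us_soldier_22"}}
    print(should_deliver(cleared_house, "us_soldier_17",
                         (34.50003, 69.20004), rough_m))   # True
    print(should_deliver(cleared_house, "civilian_03",
                         (34.50003, 69.20004), rough_m))   # False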

[0471] Embodiments may provide for a way to view information associated
with products, such as in a store. Information may include nutritional
information for food products, care instructions for clothing products,
technical specifications for consumer electronics products, e-coupons,
promotions, price comparisons with other like products, price comparisons
with other stores, and the like. This information may be projected in
relative position with the product, to the periphery of sight to the
wearer, in relation to the store layout, and the like. The product may be
identified visually through a SKU, a brand tag, and the like; transmitted
by the product packaging, such as through an RFID tag on the product;
transmitted by the store, such as based on the wearer's position in the
store, in relative position to the products; and the like.

[0472] For example, a viewer may be walking through a clothing store, and
as they walk are provided with information on the clothes on the rack,
where the information is provided through the product's RFID tag. In
embodiments, the information may be delivered as a list of information,
as a graphic representation, as audio and/or video presentation, and the
like. In another example, the wearer may be food shopping, and
advertisement providing facilities may be providing information to the
wearer in association with products in the wearer's proximity, or the
wearer may be provided information when they pick up the product and view the
brand, product name, SKU, and the like. In this way, the wearer may be
provided a more informative environment in which to effectively shop.

[0473] One embodiment may allow a user to receive or share information
about shopping or an urban area through the use of the augmented reality
enabled devices such as the camera lens fitted in the eyepiece of
exemplary sunglasses. These embodiments will use augmented reality (AR)
software applications such as those mentioned above in conjunction with
searching for providers of goods and services. In one scenario, the
wearer of the eyepiece may walk down a street or a market for shopping
purposes. Further, the user may activate various modes that may assist in
defining user preferences for a particular scenario or environment. For
example the user may enter navigation mode through which the wearer may
be guided across the streets and the market for shopping of the preferred
accessories and products. The mode may be selected and various directions
may be given by the wearer through various methods such as through text
commands, voice commands, and the like. In an embodiment, the wearer may
give a voice command to select the navigation mode which may result in
the augmented display in front of the wearer. The augmented information
may depict information pertinent to the location of various shops and
vendors in the market, offers in various shops and by various vendors,
current happy hours, current date and time and the like. Various sorts of
options may also be displayed to the wearer. The wearer may scroll the
options and walk down the street guided through the navigation mode.
Based on options provided, the wearer may select a place that suits him
the best for shopping based on such as offers and discounts and the like.
In embodiments, the eyepiece may provide the ability to search, browse,
select, save, share, receive advertisements, and the like for items of
purchase, such as viewed through the eyepiece. For example, the wearer
may search for an item across the Internet and make a purchase without
making a phone call, such as through an application store, commerce
application, and the like.

[0474] The wearer may give a voice command to navigate toward the place
and the wearer may then be guided toward it. The wearer may also receive
advertisements and offers automatically or based on request regarding
current deals, promotions and events in the interested location such as a
nearby shopping store. The advertisements, deals and offers may appear in
proximity of the wearer and options may be displayed for purchasing
desired products based on the advertisements, deals and offers. The
wearer may, for example, select a product and purchase it through Google
Checkout. A message or an email may appear on the eyepiece, similar to
the one depicted in FIG. 7, with information that the transaction for the
purchase of the product has been completed. A product delivery
status/information may also be displayed. The wearer may further convey
or alert friends and relatives regarding the offers and events through
social networking platforms and may also ask them to join.

[0475] In embodiments, the user may wear the head-mounted eyepiece wherein
the eyepiece includes an optical assembly through which the user may view
a surrounding environment and displayed content. The displayed content
may comprise one or more local advertisements. The location of the
eyepiece may be determined by an integrated location sensor and the local
advertisement may have a relevance to the location of the eyepiece. By
way of example, the user's location may be determined via GPS, RFID,
manual input, and the like. Further, the user may be walking by a coffee
shop, and based on the user's proximity to the shop, an advertisement,
similar to that depicted in FIG. 19, showing the store's brand 1900, such
as the brand for a fast food restaurant or coffee shop, may appear in the user's
field of view. The user may experience similar types of local
advertisements as he or she moves about the surrounding environment.

[0476] In other embodiments, the eyepiece may contain a capacitive sensor
capable of sensing whether the eyepiece is in contact with human skin.
Such a sensor or group of sensors may be placed on the eyepiece and/or
eyepiece arm in such a manner that allows detection of when the glasses
are being worn by a user. In other embodiments, sensors may be used to
determine whether the eyepiece is in a position such that it may be
worn by a user, for example, when the earpiece is in the unfolded
position. Furthermore, local advertisements may be sent only when the
eyepiece is in contact with human skin, in a wearable position, a
combination of the two, actually worn by the user and the like. In other
embodiments, the local advertisement may be sent in response to the
eyepiece being powered on or in response to the eyepiece being powered on
and worn by the user and the like. By way of example, an advertiser may
choose to only send local advertisements when a user is in proximity to a
particular establishment, when the user is actually wearing the glasses,
and when they are powered on, allowing the advertiser to target the
advertisement to the user at the appropriate time.
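
One way the delivery conditions described above might be combined is sketched
below, assuming hypothetical boolean state flags for power, capacitive skin
contact, and wearable (unfolded) position; an advertiser could require any
subset of these conditions:

```python
def should_deliver_local_ad(powered_on, skin_contact, wearable_position,
                            require_worn=True, require_power=True):
    """Gate local advertisement delivery on hypothetical eyepiece state flags.

    skin_contact      -- capacitive sensor reports contact with the wearer's skin
    wearable_position -- e.g. earpieces unfolded, so the glasses could be worn
    """
    if require_power and not powered_on:
        return False
    if require_worn and not (skin_contact or wearable_position):
        return False
    return True

# Deliver only when the eyepiece is powered on and actually being worn.
print(should_deliver_local_ad(powered_on=True, skin_contact=True,
                              wearable_position=True))
```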

[0477] In accordance with other embodiments, the local advertisement may
be displayed to the user as a banner advertisement, two-dimensional
graphic, text and the like. Further, the local advertisement may be
associated with a physical aspect of the user's view of the surrounding
environment. The local advertisement may also be displayed as an
augmented reality advertisement wherein the advertisement is associated
with a physical aspect of the surrounding environment. Such an advertisement
may be two- or three-dimensional. By way of example, a local advertisement
may be associated with a physical billboard as described further in FIG.
18 wherein the user's attention may be drawn to displayed content showing
a beverage being poured from a billboard 1800 onto an actual building in
the surrounding environment. The local advertisement may also contain
sound that is displayed to the user through an earpiece, audio device or
other means. Further, the local advertisement may be animated in
embodiments. For example, the user may view the beverage flow from the
billboard onto an adjacent building and, optionally, into the surrounding
environment. Similarly, an advertisement may display any other type of
motion as desired in the advertisement. Additionally, the local
advertisement may be displayed as a three-dimensional object that may be
associated with or interact with the surrounding environment. In
embodiments where the advertisement is associated with an object in the
user's view of the surrounding environment, the advertisement may remain
associated with or in proximity to the object even as the user turns his
head. For example, if an advertisement, such as the coffee cup as
described in FIG. 19, is associated with a particular building, the
coffee cup advertisement may remain associated with and in place over the
building even as the user turns his head to look at another object in his
environment.
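
A simplified sketch of keeping an advertisement registered to a world-fixed
object as the head turns, assuming a hypothetical head-yaw reading (for
example from an integrated magnetometer) and a known compass bearing from the
wearer to the anchored object; only horizontal placement is shown:

```python
import math

def overlay_offset(anchor_bearing_deg, head_yaw_deg, fov_deg=30.0, screen_px=1280):
    """Horizontal pixel offset that keeps an ad pinned to a world-fixed anchor.

    anchor_bearing_deg -- compass bearing from the wearer to the anchored object
    head_yaw_deg       -- current head orientation (e.g. from a magnetometer)
    Returns None when the anchor falls outside the horizontal field of view.
    """
    rel = (anchor_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None  # anchor is off-screen; the ad is not drawn this frame
    return int((rel / fov_deg + 0.5) * screen_px)

# As the user turns his head from 80 to 95 degrees, the coffee-cup ad slides
# across the display but stays registered to the building at bearing 90 degrees.
for yaw in (80.0, 90.0, 95.0):
    print(yaw, overlay_offset(90.0, yaw))
```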

[0478] In other embodiments, local advertisements may be displayed to the
user based on a web search conducted by the user where the advertisement
is displayed in the content of the web search results. For example, the
user may search for "happy hour" as he is walking down the street, and in
the content of the search results, a local advertisement may be displayed
advertising a local bar's beer prices.

[0479] Further, the content of the local advertisement may be determined
based on the user's personal information. The user's information may be
made available to a web application, an advertising facility and the
like. Further, a web application, advertising facility or the user's
eyepiece may filter the advertising based on the user's personal
information. Generally, for example, a user may store personal
information about his likes and dislikes and such information may be used
to direct advertising to the user's eyepiece. By way of specific example,
the user may store data about his affinity for a local sports team, and
as advertisements are made available, those advertisements with his
favorite sports team may be given preference and pushed to the user.
Similarly, a user's dislikes may be used to exclude certain
advertisements from view. In various embodiments, the advertisements may
be cached on a server where the advertisement may be accessed by at least
one of an advertising facility, web application and eyepiece and
displayed to the user.
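
A minimal sketch of preference-based filtering, assuming a hypothetical
profile of liked and disliked topic tags and advertisements tagged with
topics; disliked topics are excluded and liked topics are given preference:

```python
def filter_and_rank_ads(ads, likes, dislikes):
    """Drop ads matching a disliked tag and push ads matching liked tags forward.

    ads, likes, dislikes are hypothetical structures: each ad carries a set
    of topic tags, and the user's profile is two sets of tags.
    """
    kept = [ad for ad in ads if not (ad["tags"] & dislikes)]
    # Ads overlapping the user's likes are pushed to the front of the queue.
    return sorted(kept, key=lambda ad: len(ad["tags"] & likes), reverse=True)

profile_likes = {"local sports team", "coffee"}
profile_dislikes = {"tobacco"}
ads = [{"id": 1, "tags": {"coffee"}},
       {"id": 2, "tags": {"tobacco"}},
       {"id": 3, "tags": {"local sports team", "coffee"}}]
print([ad["id"] for ad in filter_and_rank_ads(ads, profile_likes, profile_dislikes)])
# -> [3, 1]; the tobacco ad is excluded and the sports-team ad is given preference
```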

[0480] In various embodiments, the user may interact with any type of
local advertisement in numerous ways. The user may request additional
information related to a local advertisement by making at least one
action such as an eye movement, body movement, or other gesture. For example,
if an advertisement is displayed to the user, he may wave his hand over
the advertisement in his field of view or move his eyes over the
advertisement in order to select the particular advertisement to receive
more information relating to such advertisement. Moreover, the user may
choose to ignore the advertisement by any movement or control technology
described herein such as through an eye movement, body movement, other
gesture and the like. Further, the user may choose to ignore the
advertisement by allowing it to be ignored by default by not selecting
the advertisement for further interaction within a given period of time.
For example, if the user chooses not to gesture for more information from
the advertisement within five seconds of the advertisement being
displayed, the advertisement may be ignored by default and disappear from
the user's view. Furthermore, the user may select to not allow local
advertisements to be displayed whereby said user selects such an option
on a graphical user interface or by turning such feature off via a
control on said eyepiece.
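
The ignore-by-default behavior might be sketched as a simple timeout loop,
assuming a hypothetical select_gesture_seen() callable that reports when any
of the control technologies described herein registers a selection:

```python
import time

def present_ad(ad, select_gesture_seen, timeout_s=5.0, poll_s=0.1):
    """Show an ad, then dismiss it by default if no selection gesture arrives.

    The default five-second window matches the example above; rendering of
    `ad` on the display is omitted from this sketch.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if select_gesture_seen():
            return "expand"   # fetch and display additional information
        time.sleep(poll_s)
    return "dismissed"        # ignored by default; removed from the user's view

# With no gesture within the window the ad simply disappears.
print(present_ad({"id": 7}, select_gesture_seen=lambda: False, timeout_s=0.3))
```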

[0481] In other embodiments, the eyepiece may include an audio device.
Accordingly, the displayed content may comprise a local advertisement and
audio such that the user is also able to hear a message or other sound
effects as they relate to the local advertisement. By way of example, and
referring again to FIG. 18, while the user sees the beer being poured, he
will actually be able to hear an audio transmission corresponding to the
actions in the advertisement. In this case, the user may hear the bottle
open and then the sound of the liquid pouring out of the bottle and onto
the rooftop. In yet other embodiments, a descriptive message may be
played, and or general information may be given as part of the
advertisement. In embodiments, any audio may be played as desired for the
advertisement.

[0482] In accordance with another embodiment, social networking may be
facilitated with the use of the augmented reality enabled devices such as
a camera lens fitted in the eyepiece. This may be utilized to connect
several users, or other persons who may not have the augmented reality
enabled device, so that they may share thoughts and ideas with each other.
For instance, the wearer of the eyepiece may be sitting in a school
campus along with other students. The wearer may connect with and send a
message to a first student who may be present in a coffee shop. The
wearer may ask the first student regarding persons interested in a
particular subject such as environmental economics for example. As other
students pass through the field of view of the wearer, the camera lens
fitted inside the eyepiece may track and match the students to a
networking database such as `Google me` that may contain public profiles.
Profiles of interested and relevant persons from the public database may
appear and pop up in front of the wearer on the eyepiece. Some of the
profiles that may not be relevant may either be blocked or appear blocked
to the user. The relevant profiles may be highlighted for quick reference
of the wearer. The relevant profiles selected by the wearer may be
interested in the subject environmental economics and the wearer may also
connect with them. Further, they may also be connected with the first
student. In this manner, a social network may be established by the
wearer with the use of the eyepiece enabled with the augmented reality
feature. The social networks managed by the wearer and the
conversations therein may be saved for future reference.

[0483] The present disclosure may be applied in a real estate scenario
with the use of the augmented reality enabled devices such as a camera
lens fitted in an eyepiece. The wearer, in accordance with this
embodiment, may want to get information about a place in which the user
may be present at a particular time such as during driving, walking,
jogging and the like. The wearer may, for instance, want to understand
the residential benefits and drawbacks of that place. He may also want to
get detailed information about the facilities in that place. Therefore, the
wearer may utilize a map such as a Google online map and recognize the
real estate that may be available there for lease or purchase. As noted
above, the user may receive information about real estate for sale or
rent using mobile Internet applications such as Layar. In one such
application, information about buildings within the user's field of view
is projected onto the inside of the glasses for consideration by the
user. Options may be displayed to the wearer on the eyepiece lens for
scrolling, such as with a trackpad mounted on a frame of the glasses. The
wearer may select and receive information about the selected option. The
augmented reality enabled scenes of the selected options may be displayed
to the wearer and the wearer may be able to view pictures and take a
facility tour in the virtual environment. The wearer may further receive
information about real estate agents and set up an appointment with one of
them. An email notification or a call notification may also be received
on the eyepiece for confirmation of the appointment. If the wearer finds
the selected real estate worthwhile, a deal may be made and the property
may be purchased by the wearer.

[0484] In accordance with another embodiment, customized and sponsored
tours and travels may be enhanced through the use of the augmented
reality-enabled devices, such as a camera lens fitted in the eyepiece.
For instance, the wearer (as a tourist) may arrive in a city such as
Paris and may want to receive tourism and sightseeing related information
about the place in order to plan his visit for the following days of his
stay. The wearer may put on his eyepiece or operate any other
augmented reality enabled device and give a voice or text command
regarding his request. The augmented reality enabled eyepiece may locate
the wearer's position through geo-sensing techniques and determine the
tourism preferences of the wearer. The eyepiece may receive and display
customized information based on the request of the wearer on a screen.
The customized tourism information may include information about art
galleries and museums, monuments and historical places, shopping
complexes, entertainment and nightlife spots, restaurants and bars, most
popular tourist destinations and centers/attractions of tourism, most
popular local/cultural/regional destinations and attractions, and the
like without limitations. Based on user selection of one or more of these
categories, the eyepiece may prompt the user with other questions such as
time of stay, investment in tourism and the like. The wearer may respond
through the voice command and in return receive customized tour
information in an order as selected by the wearer. For example, the wearer
may give a priority to the art galleries over monuments. Accordingly, the
information may be made available to the wearer. Further, a map may also
appear in front of the wearer with different sets of tour options and
with different priority rank such as:

[0488] The wearer, for instance, may select the first option since it is
ranked as highest in priority based on wearer indicated preferences.
Advertisements related to sponsors may pop up right after selection.
Subsequently, a virtual tour may begin in the augmented reality manner
that may be very close to the real environment. The wearer may, for
example, take a 30-second tour of a vacation special at the Atlantis
Resort in the Bahamas. The virtual 3D tour may include a quick look at
the rooms, beach, public spaces, parks, facilities, and the like. The
wearer may also experience shopping facilities in the area and receive
offers and discounts in those places and shops. At the end of the day,
the wearer might have experienced a whole-day tour while sitting in his
room or hotel. Finally, the wearer may decide and schedule his plan
accordingly.

[0489] Another embodiment may provide information concerning auto repair
and maintenance services through the use of the augmented reality enabled
devices such as a camera lens fitted in the eyepiece. The wearer may
receive advertisements related to auto repair shops and dealers by
sending a voice command for the request. The request may, for example,
include a requirement for an oil change in the vehicle. The eyepiece may
receive information from the repair shop and display it to the wearer. The
eyepiece may pull up a 3D model of the wearer's vehicle and show the
amount of oil left in the car through an augmented reality enabled
scene/view. The eyepiece may also show other relevant information about
the wearer's vehicle, such as maintenance requirements for other parts
like the brake pads. The wearer may see a 3D view of the worn brake pads
and may be interested in having them repaired or changed. Accordingly, the
wearer may schedule an appointment with a vendor to fix the problem
using the integrated wireless communication capability of the eyepiece.
The confirmation may be received through an email or an incoming call
alert on the eyepiece camera lens.

[0490] In accordance with another embodiment, gift shopping may benefit
through the use of the augmented reality enabled devices such as a camera
lens fitted in the eyepiece. The wearer may post a request for a gift for
some occasion through a text or voice command. The eyepiece may prompt
the wearer for his preferences, such as the type of gift, the age group of
the person receiving the gift, the cost range of the gift, and the like.
Various options may be presented to the user based on the received
preferences. For instance, the options presented to the wearer may be:
Cookie basket, Wine and cheese basket, Chocolate assortment, Golfer's
gift basket, and the like.

[0491] The available options may be scrolled by the wearer and the best
fit option may be selected via the voice command or text command. For
example, the wearer may select the Golfer's gift basket. A 3D view of the
Golfer's gift basket along with a golf course may appear in front of the
wearer. The virtual 3D view of the Golfer's gift basket and the golf
course enabled through the augmented reality may be perceived very close
to the real world environment. The wearer may finally respond to the
address, location and other similar queries prompted through the
eyepiece. A confirmation may then be received through an email or an
incoming call alert on the eyepiece camera lens.

[0492] Another application that may appeal to users is mobile on-line
gaming using the augmented reality glasses. These games may be computer
video games, such as those furnished by Electronic Arts Mobile, UbiSoft
and Activision Blizzard, e.g., World of Warcraft® (WoW). Just as
games and recreational applications are played on computers at home
(rather than computers at work), augmented reality glasses may also use
gaming applications. The screen may appear on an inside of the glasses so
that a user may observe the game and participate in the game. In
addition, controls for playing the game may be provided through a virtual
game controller, such as a joystick, control module or mouse, described
elsewhere herein. The game controller may include sensors or other output
type elements attached to the user's hand, such as for feedback from the
user through acceleration, vibration, force, pressure, electrical
impulse, temperature, electric field sensing, and the like. Sensors and
actuators may be attached to the user's hand by way of a wrap, ring, pad,
glove, bracelet, and the like. As such, an eyepiece virtual mouse may
allow the user to translate motions of the hand, wrist, and/or fingers
into motions of the cursor on the eyepiece display, where "motions" may
include slow movements, rapid motions, jerky motions, position, change in
position, and the like, and may allow users to work in three dimensions,
without the need for a physical surface, and including some or all of the
six degrees of freedom.

[0493] As seen in FIG. 27, gaming application implementations 2700 may use
both the internet and a GPS. In one embodiment, a game is downloaded from
a customer database via a game provider, perhaps using their web services
and the internet as shown, to a user computer or augmented reality
glasses. At the same time, the glasses, which also have telecommunication
capabilities, receive and send telecommunications and telemetry signals
via a cellular tower and a satellite. Thus, an on-line gaming system has
access to information about the user's location as well as the user's
desired gaming activities.

[0494] Games may take advantage of this knowledge of the location of each
player. For example, the games may build in features that use the
player's location, via a GPS locator or magnetometer locator, to award
points for reaching the location. The game may also send a message, e.g.,
display a clue, or a scene or images, when a player reaches a particular
location. A message, for example, may be to go to a next destination,
which is then provided to the player. Scenes or images may be provided as
part of a struggle or an obstacle which must be overcome, or as an
opportunity to earn game points. Thus, in one embodiment, augmented
reality eyepieces or glasses may use the wearer's location to quicken and
enliven computer-based video games.
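
A sketch of awarding points and issuing the next clue when a player's GPS fix
reaches a game location, assuming hypothetical waypoint and clue lists and a
rough flat-earth distance test:

```python
import math

def reached(player_fix, waypoint, tolerance_m=25.0):
    """Rough equirectangular distance test: has the player reached the waypoint?"""
    lat1, lon1 = player_fix
    lat2, lon2 = waypoint
    kx = 111320.0 * math.cos(math.radians((lat1 + lat2) / 2))  # meters per degree lon
    ky = 110540.0                                              # meters per degree lat
    d = math.hypot((lon2 - lon1) * kx, (lat2 - lat1) * ky)
    return d <= tolerance_m

def update_game(player, player_fix, waypoints, clues, points_per_stop=100):
    """Award points and reveal the next clue when a GPS waypoint is reached."""
    i = player["next"]
    if i < len(waypoints) and reached(player_fix, waypoints[i]):
        player["score"] += points_per_stop
        player["next"] += 1
        return clues[i]    # message displayed on the eyepiece
    return None

player = {"score": 0, "next": 0}
print(update_game(player, (48.8584, 2.2945), [(48.8584, 2.2945)],
                  ["Go to the next destination."]))
```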

[0495] One method of playing augmented reality games is depicted in FIG.
28. In this method 2800, a user logs into a website whereby access to a
game is permitted. The game is selected. In one example, the user may
join a game, if multiple player games are available and desired;
alternatively, the user may create a custom game, perhaps using special
roles the user desires. The game may be scheduled, and in some instances,
players may select a particular time and place for the game, distribute
directions to the site where the game will be played, etc. Later, the
players meet and check into the game, with one or more players using the
augmented reality glasses. Participants then play the game and if
applicable, the game results and any statistics (scores of the players,
game times, etc.) may be stored. Once the game has begun, the location
may change for different players in the game, sending one player to one
location and another player or players to a different location. The game
may then have different scenarios for each player or group of players,
based on their GPS or magnetometer-provided locations. Each player may
also be sent different messages or images based on his or her role, his
or her location, or both. Of course, each scenario may then lead to other
situations, other interactions, directions to other locations, and so
forth. In one sense, such a game mixes the reality of the player's
location with the game in which the player is participating.

[0496] Games can range from simple games of the type that would be played
in the palm of a player's hand, such as small, single-player games, to
more complicated, multi-player games.
In the former category are games such as SkySiege, AR Drone and Fire
Fighter 360. In addition, multiplayer games are also easily envisioned.
Since all players must log into the game, a particular game may be played
by friends who log in and specify the other person or persons. The
location of the players is also available, via GPS or other method.
Sensors in the augmented reality glasses or in a game controller as
described above, such as accelerometers, gyroscopes or even a magnetic
compass, may also be used for orientation and game playing. An example is
AR Invaders, available for iPhone applications from the App Store. Other
games may be obtained from other vendors and for non-iPhone type systems,
such as Layar, of Amsterdam, and Parrot SA, of Paris, France, supplier of
AR Drone, AR Flying Ace and AR Pursuit.

[0497] In embodiments, games may also be in 3D such that the user can
experience 3D gaming. For example, when playing a 3D game, the user may
view a virtual, augmented reality or other environment where the user is
able to control his view perspective. The user may turn his head to view
various aspects of the virtual environment or other environment. As such,
when the user turns his head or makes other movements, he may view the
game environment as if he were actually in such environment. For example,
the perspective of the user may be such that the user is put `into` a 3D
game environment with at least some control over the viewing perspective
where the user may be able to move his head and have the view of the game
environment change in correspondence to the changed head position.
Further, the user may be able to `walk into` the game when he physically
walks forward, and have the perspective change as the user moves.
Further, the perspective may also change as the user moves the gazing
view of his eyes, and the like. Additional image information may be
provided, such as at the sides of the user's view that could be accessed
by turning the head.

[0498] In embodiments, the 3D game environment may be projected onto the
lenses of the glasses or viewed by other means. Further, the lenses may
be opaque or transparent. In embodiments, the 3D game image may be
associated with and incorporate the external environment of the user such
that the user may be able to turn his head and the 3D image and external
environment stay together. Further, such 3D gaming image and external
environment associations may change such that the 3D image associates
with more than one object or more than one part of an object in the
external environment at various instances such that it appears to the
user that the 3D image is interacting with various aspects or objects of
the actual environment. By way of example, the user may view a 3D game
monster climb up a building or on to an automobile where such building or
automobile is an actual object in the user's environment. In such a game,
the user may interact with the monster as part of the 3D gaming
experience. The actual environment around the user may be part of the 3D
gaming experience. In embodiments where the lenses are transparent, the
user may interact in a 3D gaming environment while moving about his or
her actual environment. The 3D game may incorporate elements of the
user's environment into the game, it may be wholly fabricated by the
game, or it may be a mixture of both.

[0499] In embodiments, the 3D images may be associated with or generated
by an augmented reality program, 3D game software and the like or by
other means. In embodiments where augmented reality is employed for the
purpose of 3D gaming, a 3D image may appear or be perceived by the user
based on the user's location or other data. Such an augmented reality
application may provide for the user to interact with such 3D image or
images to provide a 3D gaming environment when using the glasses. As the
user changes his location, for example, play in the game may advance and
various 3D elements of the game may become accessible or inaccessible to
the viewer. By way of example, various 3D enemies of the user's game
character may appear in the game based on the actual location of the
user. The user may interact with or cause reactions from other users
playing the game and/or 3D elements associated with the other users
playing the game. Such elements associated with users may include
weapons, messages, currency, a 3D image of the user and the like. Based
on a user's location or other data, he or she may encounter, view, or
engage, by any means, other users and 3D elements associated with other
users. In embodiments, 3D gaming may also be provided by software
installed in or downloaded to the glasses where the user's location is or
is not used.

[0500] In embodiments, the lenses may be opaque to provide the user with a
virtual reality or other virtual 3D gaming experience where the user is
`put into` the game where the user's movements may change the viewing
perspective of the 3D gaming environment for the user. The user may move
through or explore the virtual environment through various body, head,
and/or eye movements, use of game controllers, one or more touch screens,
or any of the control techniques described herein which may allow the
user to navigate, manipulate, and interact with the 3D environment, and
thereby play the 3D game.

[0501] In various embodiments, the user may navigate, interact with and
manipulate the 3D game environment and experience 3D gaming via body,
hand, finger, eye, or other movements, through the use of one or more
wired or wireless controllers, one or more touch screens, any of the
control techniques described herein, and the like.

[0502] In embodiments, internal and external facilities available to the
eyepiece may provide for learning the behavior of a user of the eyepiece,
and storing that learned behavior in a behavioral database to enable
location-aware control, activity-aware control, predictive control, and
the like. For example, a user may have events and/or tracking of actions
recorded by the eyepiece, such as commands from the user, images sensed
through a camera, GPS location of the user, sensor inputs over time,
triggered actions by the user, communications to and from the user, user
requests, web activity, music listened to, directions requested,
recommendations used or provided, and the like. This behavioral data may
be stored in a behavioral database, such as tagged with a user identifier
or autonomously. The eyepiece may collect this data in a learn mode,
collection mode, and the like. The eyepiece may utilize past data taken
by the user to inform or remind the user of what they did before, or
alternatively, the eyepiece may utilize the data to predict what eyepiece
functions and applications the user may need based on past collected
experiences. In this way, the eyepiece may act as an automated assistant
to the user, for example, launching applications at the usual time the
user launches them, turning off augmented reality and the GPS when
nearing a location or entering a building, streaming in music when the
user enters the gym, and the like. Alternately, the learned behavior
and/or actions of a plurality of eyepiece users may be autonomously
stored in a collective behavior database, where learned behaviors amongst
the plurality of users are available to individual users based on similar
conditions. For example, a user may be visiting a city, and waiting for a
train on a platform, and the eyepiece of the user accesses the collective
behavior database to determine what other users have done while waiting
for the train, such as getting directions, searching for points of
interest, listening to certain music, looking up the train schedule,
contacting the city website for travel information, connecting to social
networking sites for entertainment in the area, and the like. In this
way, the eyepiece may be able to provide the user with an automated
assistant with the benefit of many different user experiences. In
embodiments, the learned behavior may be used to develop preference
profiles, recommendations, advertisement targeting, social network
contacts, behavior profiles for the user or groups of users, and the
like, for/to the user.
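
A toy sketch of such a behavioral database, assuming events are keyed by a
hypothetical (user, context) pair and that prediction simply returns the most
frequent past action, falling back to the collective pool when the individual
user has no history for that context:

```python
from collections import Counter, defaultdict

class BehaviorDB:
    """Toy behavioral database: action counts keyed by user and context."""
    def __init__(self):
        self.events = defaultdict(Counter)   # (user, context) -> Counter of actions

    def record(self, user_id, context, action):
        # e.g. context = ("entering", "gym") or ("waiting", "train platform")
        self.events[(user_id, context)][action] += 1
        self.events[("*", context)][action] += 1   # collective behavior pool

    def predict(self, user_id, context):
        """Most frequent past action for this user, else for all users combined."""
        source = self.events.get((user_id, context)) or self.events.get(("*", context))
        return source.most_common(1)[0][0] if source else None

db = BehaviorDB()
db.record("u1", ("entering", "gym"), "stream music")
db.record("u2", ("waiting", "train platform"), "look up train schedule")
print(db.predict("u1", ("entering", "gym")))            # learned from u1's own history
print(db.predict("u1", ("waiting", "train platform")))  # falls back to collective pool
```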

[0503] In an embodiment, the augmented reality eyepiece or glasses may
include one or more acoustic sensors for detecting sound 2900. An example
is depicted above in FIG. 29. In one sense, acoustic sensors are similar
to microphones, in that they detect sounds. Acoustic sensors typically
have one or more frequency bandwidths at which they are more sensitive,
and the sensors can thus be chosen for the intended application. Acoustic
sensors are available from a variety of manufacturers and are available
with appropriate transducers and other required circuitry. Manufacturers
include ITT Electronic Systems, Salt Lake City, Utah, USA; Meggitt
Sensing Systems, San Juan Capistrano, Calif., USA; and National
Instruments, Austin, Tex., USA. Suitable microphones include those which
comprise a single microphone as well as those which comprise an array of
microphones, or a microphone array.

[0504] Acoustic sensors may include those using micro electromechanical
systems (MEMS) technology. Because of the very fine structure in a MEMS
sensor, the sensor is extremely sensitive and typically has a wide range
of sensitivity. MEMS sensors are typically made using semiconductor
manufacturing techniques. An element of a typical MEMS accelerometer is a
moving beam structure composed of two sets of fingers. One set is fixed
to a solid ground plane on a substrate; the other set is attached to a
known mass mounted on springs that can move in response to an applied
acceleration. This applied acceleration changes the capacitance between
the fixed and moving beam fingers. The result is a very sensitive sensor.
Such sensors are made, for example, by STMicroelectronics, Austin, Tex.,
and Honeywell International, Morristown, N.J., USA.

[0505] In addition to identification, sound capabilities of the augmented
reality devices may also be applied to locating an origin of a sound. As
is well known, at least two sound or acoustic sensors are needed to
locate a sound. The acoustic sensor will be equipped with appropriate
transducers and signal processing circuits, such as a digital signal
processor, for interpreting the signal and accomplishing a desired goal.
One application for sound locating sensors may be to determine the origin
of sounds from within an emergency location, such as a burning building,
an automobile accident, and the like. Emergency workers equipped with
embodiments described herein may each have one or more than one acoustic
sensors or microphones embedded within the frame. Of course, the sensors
could also be worn on the person's clothing or even attached to the
person. In any event, the signals are transmitted to the controller of
the augmented reality eyepiece. The eyepiece or glasses are equipped with
GPS technology and may also be equipped with direction-finding
capabilities; alternatively, with two sensors per person, the
microcontroller can determine a direction from which the noise
originated.

[0506] If there are two or more firefighters, or other emergency
responders, their location is known from their GPS capabilities. Either
of the two, or a fire chief, or the control headquarters, then knows the
position of two responders and the direction from each responder to the
detected noise. The exact point of origin of the noise can then be
determined using known techniques and algorithms. See e.g., Acoustic
Vector-Sensor Beamforming and Capon Direction Estimation, M. Hawkes and
A. Nehorai, IEEE Transactions on Signal Processing, vol. 46, no. 9,
September 1998, at 2291-2304; see also Cramer-Rao Bounds for Direction
Finding by an Acoustic Vector Sensor Under Nonideal Gain-Phase Responses,
Noncollocation or Nonorthogonal Orientation, P. K. Tam and K. T. Wong,
IEEE Sensors Journal, vol. 9, no. 8, August 2009, at 969-982. The
techniques used may include timing differences (differences in time of
arrival of the parameter sensed), acoustic velocity differences, and
sound pressure differences. Of course, acoustic sensors typically measure
levels of sound pressure (e.g., in decibels), and these other parameters
may be used in appropriate types of acoustic sensors, including acoustic
emission sensors and ultrasonic sensors or transducers.
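
The position-plus-bearing step can be illustrated with a simple planar
bearing-intersection sketch (a stand-in for, not an implementation of, the
vector-sensor methods cited above), assuming the responder GPS positions have
been converted to a local east/north frame in meters:

```python
import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Intersect two lines of bearing in a local east/north (x, y) frame, meters.

    p1, p2             -- responder positions from their GPS fixes
    brg1_deg, brg2_deg -- direction each responder's sensors report for the noise,
                          measured clockwise from north
    Returns the estimated (x, y) origin of the sound, or None if the bearings
    are parallel and do not intersect.
    """
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two responders 100 m apart both hear a noise; their bearings cross at the source.
print(intersect_bearings((0.0, 0.0), 45.0, (100.0, 0.0), 315.0))   # -> (50.0, 50.0)
```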

[0507] The appropriate algorithms and all other necessary programming may
be stored in the microcontroller of the eyepiece, or in memory accessible
to the eyepiece. Using more than one responder, or several responders, a
likely location may then be determined, and the responders can attempt to
locate the person to be rescued. In other applications, responders may
use these acoustic capabilities to determine the location of a person of
interest to law enforcement. In still other applications, a number of
people on maneuvers may encounter hostile fire, including direct fire
(line of sight) or indirect fire (out of line of sight, including high
angle fire). The same techniques described here may be used to estimate a
location of the hostile fire. If there are several persons in the area,
the estimation may be more accurate, especially if the persons are
separated at least to some extent, over a wider area. This may be an
effective tool to direct counter-battery or counter-mortar fire against
hostiles. Direct fire may also be used if the target is sufficiently
close.

[0508] An example using embodiments of the augmented reality eyepieces is
depicted in FIG. 29B. In this example 2900B, numerous soldiers are on
patrol, each equipped with augmented reality eyepieces, and are alert for
hostile fire. The sounds detected by their acoustic sensors or
microphones may be relayed to a squad vehicle as shown, to their platoon
leader, or to a remote tactical operations center (TOC) or command post
(CP). Alternatively, or in addition to these, the signals may also be
sent to a mobile device, such as an airborne platform, as shown.
Communications among the soldiers and the additional locations may be
facilitated using a local area network, or other network. In addition,
all the transmitted signals may be protected by encryption or other
protective measures. One or more of the squad vehicle, the platoon
commander, the mobile platform, the TOC or the CP will have an
integration capability for combining the inputs from the several soldiers
and determining a possible location of the hostile fire. The signals from
each soldier will include the location of the soldier from a GPS
capability inherent in the augmented reality glasses or eyepiece. The
acoustic sensors on each soldier may indicate a possible direction of the
noise. Using signals from several soldiers, the direction and possibly
the location of the hostile fire may be determined. The soldiers may then
neutralize the location.

[0509] In addition to microphones, the augmented reality eyepiece may be
equipped with ear buds, which may be articulating ear buds, as mentioned
elsewhere herein, and may be removably attached 1403, or may be equipped
with an audio output jack 1401. The eyepiece and ear buds may be equipped
to deliver noise-cancelling interference, allowing the user to better
hear sounds delivered from the audio-video communications capabilities of
the augmented reality eyepiece or glasses, and may feature automatic gain
control. The speakers or ear buds of the augmented reality eyepiece may
also connect with the full audio and visual capabilities of the device,
with the ability to deliver high quality and clear sound from the
included telecommunications device. As noted elsewhere herein, this
includes radio or cellular telephone (smart phone) audio capabilities,
and may also include complementary technologies, such as Bluetooth®
capabilities or related technologies, such as IEEE 802.11, for wireless
personal area networks (WPAN).

[0510] Another aspect of the augmented audio capabilities includes speech
recognition and identification capabilities. Speech recognition concerns
understanding what is said while speech identification concerns
understanding who the speaker is. Speech identification may work hand in
hand with the facial recognition capabilities of these devices to more
positively identify persons of interest. As described elsewhere in this
document, a camera connected as part of the augmented reality eyepiece
can unobtrusively focus on desired personnel, such as a single person in
a crowd or multiple faces in a crowd. Using the camera and appropriate
facial recognition software, an image of the person or people may be
taken. The features of the image are then broken down into any number of
measurements and statistics, and the results are compared to a database
of known persons. An identification may then be made. In the same manner, a
voice or voice sampling from the person of interest may be taken. The
sample may be marked or tagged, e.g., at a particular time interval, and
labeled, e.g., a description of the person's physical characteristics or
a number. The voice sample may be compared to a database of known
persons, and if the person's voice matches, then an identification may be
made. In embodiments, multiple individuals of interest may be selected,
such as for biometric identification. The multiple selection may be
through the use of a cursor, a hand gesture, an eye movement, and the
like. As a result of the multiple selection, information concerning the
selected individuals may be provided to the user, such as through the
display, through audio, and the like.
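
A minimal sketch of the compare-to-database step, assuming hypothetical
fixed-length feature vectors derived from a face image or voice sample and a
cosine-similarity threshold below which no identification is reported:

```python
import numpy as np

def identify(sample_vec, database, threshold=0.8):
    """Compare a measurement vector against a database of known persons.

    sample_vec -- hypothetical feature vector extracted from a face image or
                  voice sample (the disclosure leaves the exact features open)
    database   -- mapping of person name -> stored feature vector
    Returns (name, score) when the best cosine similarity clears the threshold,
    otherwise (None, score) so the operator can be asked for more data.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    best_name, best_score = None, -1.0
    for name, vec in database.items():
        score = cosine(sample_vec, vec)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

db = {"person A": np.array([0.9, 0.1, 0.3]), "person B": np.array([0.1, 0.8, 0.5])}
print(identify(np.array([0.88, 0.12, 0.31]), db))
```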

[0511] In embodiments where the camera is used for biometric
identification of multiple people in a crowd, control technologies
described herein may be used to select faces or irises for imaging. For
example, a cursor selection using the hand-worn control device may be
used to select multiple faces in a view of the user's surrounding
environment. In another example, gaze tracking may be used to select
which faces to select for biometric identification. In another example,
the hand-worn control device may sense a gesture used to select the
individuals, such as pointing at each individual.

[0512] In one embodiment, important characteristics of a particular
person's speech may be understood from a sample or from many samples of
the person's voice. The samples are typically broken into segments,
frames and subframes. Typically, important characteristics include a
fundamental frequency of the person's voice, energy, formants, speaking
rate, and the like. These characteristics are analyzed by software which
analyzes the voice according to certain formulae or algorithms. This
field is constantly changing and improving. However, currently such
classifiers may include algorithms such as neural network classifiers,
k-classifiers, hidden Markov models, Gaussian mixture models and pattern
matching algorithms, among others.

[0513] A general template 3100 for speech recognition and speaker
identification is depicted in FIG. 31. A first step 3101 is to provide a
speech signal. Ideally, one has a known sample from prior encounters with
which to compare the signal. The signal is then digitized in step 3102
and is partitioned in step 3103 into fragments, such as segments, frames
and subframes. Features and statistics of the speech sample are then
generated and extracted in step 3104. The classifier, or more than one
classifier, is then applied in step 3105 to determine general
classifications of the sample. Post-processing of the sample may then be
applied in step 3106, e.g., to compare the sample to known samples for
possible matching and identification. The results may then be output in
step 3107. The output may be directed to the person requesting the
matching, and may also be recorded and sent to other persons and to one
or more databases.
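
A compact sketch of steps 3103-3106, assuming 16 kHz samples, frame-level
energy plus a crude autocorrelation pitch estimate standing in for the richer
feature set, and a nearest-template comparison standing in for the
neural-network, hidden Markov, or Gaussian mixture classifiers named above:

```python
import numpy as np

def frame_signal(samples, frame_len=400, hop=200):
    """Step 3103: partition the digitized signal into overlapping frames."""
    samples = np.asarray(samples, dtype=float)
    n = 1 + max(0, (len(samples) - frame_len) // hop)
    return np.stack([samples[i * hop:i * hop + frame_len] for i in range(n)])

def extract_features(frames, rate=16000):
    """Step 3104: per-frame energy and a crude autocorrelation pitch estimate."""
    feats = []
    for f in frames:
        energy = float(np.mean(f ** 2))
        ac = np.correlate(f, f, mode="full")[len(f):]      # lags 1..len(f)-1
        lag_lo, lag_hi = rate // 400, rate // 60           # search roughly 60-400 Hz
        lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
        feats.append((energy, rate / lag))                 # (energy, fundamental frequency)
    return np.array(feats)

def classify(feats, templates):
    """Steps 3105-3106: nearest known speaker by distance to stored mean features."""
    mean = feats.mean(axis=0)
    return min(templates, key=lambda name: np.linalg.norm(mean - templates[name]))
```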

[0514] In an embodiment, the audio capabilities of the eyepiece include
hearing protection with the associated earbuds. The audio processor of
the eyepiece may enable automatic noise suppression, such as if a loud
noise is detected near the wearer's head. Any of the control technologies
described herein may be used with automatic noise suppression.

[0515] In an embodiment, the eyepiece may include a nitinol head strap.
The head strap may be a thin band of curved metal which may either pull
out from the arms of the eyepiece or rotate out and extend out to behind
the head to secure the eyepiece to the head. In one embodiment, the tip
of the nitinol strap may have a silicone cover such that the silicone
cover is grasped to pull out from the ends of the arms. In embodiments,
only one arm has a nitinol band, and it gets secured to the other arm to
form a strap. In other embodiments, both arms have a nitinol band and
both sides get pulled out to either get joined to form a strap or
independently grasp a portion of the head to secure the eyepiece on the
wearer's head. In embodiments, the eyepiece may have interchangeable
equipment to attach the eyepiece to an individual's head, such as a joint
where a head strap, glasses arms, helmet strap, helmet snap connection,
and the like may be attached. For example, there may be a joint in the
eyepiece near the user's temple where the eyepiece may attach to a strap,
and where the strap may be disconnected so the user may attach arms to
make the eyepiece take the form of glasses, attach to a helmet, and the
like. In embodiments, the interchangeable equipment attaching the
eyepiece to the user's head or to a helmet may include an embedded
antenna. For example, a Nitinol head strap may have an embedded antenna
inside, such as for a particular frequency, for a plurality of
frequencies, and the like. In addition, the arm, strap, and the like, may
contain RF absorbing foam in order to aid in the absorption of RF energy
while the antenna is used in transmission.

[0516] Referring to FIG. 21, the eyepiece may include one or more
adjustable wrap around extendable arms 2134. The adjustable wrap around
extendable arms 2134 may secure the position of the eyepiece to the
user's head. One or more of the extendable arms 2134 may be made out of a
shape memory material. In embodiments, one or both of the arms may be
made of nitinol and/or any shape-memory material. In other instances, the
end of at least one of the wrap around extendable arms 2134 may be
covered with silicone. Further, the adjustable wrap around extendable
arms 2134 may extend from the end of an eyepiece arm 2116. They may
extend telescopically and/or they may slide out from an end of the
eyepiece arms. They may slide out from the interior of the eyepiece arms
2116 or they may slide along an exterior surface of the eyepiece arms
2116. Further, the extendable arms 2134 may meet and secure to each
other. The extendable arms may also attach to another portion of the head
mounted eyepiece to create a means for securing the eyepiece to the
user's head. The wrap around extendable arms 2134 may meet to secure to
each other, interlock, connect, magnetically couple, or secure by other
means so as to provide a secure attachment to the user's head. In
embodiments, the adjustable wrap around extendable arms 2134 may also be
independently adjusted to attach to or grasp portions of the user's head.
As such the independently adjustable arms may allow the user increased
customizability for a personalized fit to secure the eyepiece to the
user's head. Further, in embodiments, at least one of the wrap around
extendable arms 2134 may be detachable from the head mounted eyepiece. In
yet other embodiments, the wrap around extendable arms 2134 may be an
add-on feature of the head mounted eyepiece. In such instances, the user
may choose to put extendable, non-extendable or other arms onto the head
mounted eyepiece. For example, the arms may be sold as a kit or part of a
kit that allows the user to customize the eyepiece to his or her specific
preferences. Accordingly, the user may customize the type of material
from which the adjustable wrap around extendable arm 2134 is made by
selecting a different kit with specific extendable arms suited to his
preferences. Accordingly, the user may customize his eyepiece for his
particular needs and preferences.

[0517] In yet other embodiments, an adjustable strap, 2142, may be
attached to the eyepiece arms such that it extends around the back of the
user's head in order to secure the eyepiece in place. The strap may be
adjusted to a proper fit. It may be made out of any suitable material,
including but not limited to rubber, silicone, plastic, cotton and the
like.

[0518] In an embodiment, the eyepiece may be secured to the user's head by
a plurality of other structures, such as a rigid arm, a flexible arm, a
gooseneck flex arm, a cable tensioned system, and the like. For instance,
a flexible arm may be constructed from a flexible tubing, such as in a
gooseneck configuration, where the flexible arm may be flexed into
position to adjust to the fit of a given user, and where the flexible arm
may be reshaped as needed. In another instance, a flexible arm may be
constructed from a cable tensioned system, such as in a robotic finger
configuration, having multiple joints connecting members that are bent
into a curved shape with a pulling force applied to a cable running
through the joints and members. In this case, the cable-driven system may
implement an articulating ear horn for size adjustment and eyepiece
headwear retention. The cable-tensioned system may have two or more
linkages, the cable may be stainless steel, Nitinol-based,
electro-actuated, ratcheted, wheel adjusted, and the like.

[0519] In an embodiment, the eyepiece may include security features, such
as M-Shield Security, Secure content, DSM, Secure Runtime, IPSec, and the
like. Other software features may include: User Interface, Apps,
Framework, BSP, Codecs, Integration, Testing, System Validation, and the
like.

[0520] In an embodiment, the eyepiece materials may be chosen to enable
ruggedization.

[0521] In an embodiment, the eyepiece may be able to access a 3G access
point that includes a 3G radio, an 802.11b connection and a Bluetooth
connection to enable hopping data from a device to a 3G-enabled embodiment
of the eyepiece.

[0522] The present disclosure also relates to methods and apparatus for
the capture of biometric data about individuals. The methods and
apparatus provide wireless capture of fingerprints, iris patterns, facial
structure and other unique biometric features of individuals and then
send the data to a network or directly to the eyepiece. Data collected
from an individual may also be compared with previously collected data
and used to identify a particular individual.

[0523] In embodiments, the eyepiece 100 may be associated with mobile
biometric devices, such as a biometric flashlight 7300, a biometric phone
5000, a biometric camera, a pocket biometric device 5400, an arm strap
biometric device 5600, and the like, where the mobile biometrics device
may act as a stand-alone device or in communications with the eyepiece,
such as for control of the device, display of data from the device,
storage of data, linking to an external system, linking to other
eyepieces and/or other mobile biometrics devices, and the like. The
mobile biometrics device may enable a soldier or other non-military
personnel to collect or utilize existing biometrics to profile an
individual. The device may provide for tracking, monitoring, and
collecting biometric records such as including video, voice, gait, face,
iris biometrics and the like. The device may provide for geo-location
tags for collected data, such as with time, date, location, data-taking
personnel, the environment, and the like. The device may be able to
capture and record fingerprints, palm prints, scars, marks, tattoos,
audio, video, annotations, and the like, such as utilizing a thin film
sensor, recording, collecting, identifying, and verifying face,
fingerprint, iris, latent fingerprints, latent palm prints, voice, pocket
litter, and other identifying visible marks and environmental data. The
device may be able to read prints wet or dry. The device may include a
camera, such as with IR illumination, UV illumination, and the like,
with a capability to see through dust, smoke, haze, and the like. The
camera may support dynamic range extension, adaptive defect pixel
correction, advanced sharpness enhancement, geometric distortion
correction, advanced color management, hardware-based face detection,
video stabilization, and the like. In embodiments, the camera output may
be transmitted to the eyepiece for presentation to the soldier. The
device may accommodate a plurality of other sensors, such as described
herein, including an accelerometer, compass, ambient light, proximity,
barometric and temperature sensors, and the like, depending on
requirements. The device may also have a mosaic print sensor, as
described herein, producing high resolution images of the whorls and
pores of an individual's fingerprint, multiple fingerprints
simultaneously, palm print, and the like. A soldier may utilize a mobile
biometrics device to more easily collect personnel information, such as
for document and media exploitation (DOMEX). For instance, during an
interview, enrollment, interrogations, and the like, operators may
photograph and read identifying data or `pocket litter` (e.g. passport,
ID cards, personal documents, cell phone directories, pictures), take
biometric data, and the like, into a person of interest profile that may
be entered into a searchable secure database. In embodiments, biometric
data may be filed using the most salient image plus manual entry,
enabling partial data capture. Data may be automatically geo-located,
time/date stamped, filed into a digital dossier, and the like, such as
with a locally or network assigned global unique identifier (GUID). For
instance, a face image may be captured at the scene of an IED bombing,
the left iris image may be captured at a scene of a suicide bombing,
latent fingerprints may be lifted from a sniper rifle, each taken from a
different mobile biometrics device at different locations and times, and
together identifying a person of interest from the multiple inputs, such
as at a random vehicle inspection point.
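
A minimal sketch of how a collected item might be geo-located, time/date
stamped, and filed into a digital dossier under a locally assigned GUID; the
field names are illustrative, not the actual record format:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BiometricRecord:
    """One collected item (face, iris, latent print, voice, pocket-litter photo)."""
    modality: str      # e.g. "face", "left iris", "latent fingerprint"
    payload_ref: str   # pointer to the captured image or audio file
    lat: float
    lon: float
    collector: str     # data-taking personnel
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def file_into_dossier(dossier, record):
    """Append a geo-located, time-stamped record to a person-of-interest dossier."""
    dossier.setdefault(record.modality, []).append(record)
    return record.guid

dossier = {}
guid = file_into_dossier(dossier, BiometricRecord("face", "img_0001.jpg",
                                                  33.3152, 44.3661, "operator-7"))
print(guid)
```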

[0524] A further embodiment of the eyepiece may be used to provide
biometric data collection and result reporting. Biometric data may be
visual biometric data, such as facial biometric data or iris biometric
data, or may be audio biometric data. FIG. 39 depicts an embodiment
providing biometric data capture. The assembly, 3900 incorporates the
eyepiece 100, discussed above in connection with FIG. 1. Eyepiece 100
provides an interactive head-mounted eyepiece that includes an optical
assembly. Other eyepieces providing similar functionality may also be
used. Eyepieces may also incorporate global positioning system capability
to permit location information display and reporting.

[0525] The optical assembly allows a user to view the surrounding
environment, including individuals in the vicinity of the wearer. An
embodiment of the eyepiece allows a user to biometrically identify nearby
individuals using facial images and iris images or both facial and iris
images or audio samples. The eyepiece incorporates a corrective element
that corrects a user's view of the surrounding environment and also
displays content provided to the user through an integrated processor and
image source. The integrated image source introduces the content to be
displayed to the user to the optical assembly.

[0526] The eyepiece also includes an optical sensor for capturing
biometric data. The integrated optical sensor, in an embodiment may
incorporate a camera mounted on the eyepiece. This camera is used to
capture biometric images of an individual near the user of the eyepiece.
The user directs the optical sensor or the camera toward a nearby
individual by positioning the eyepiece in the appropriate direction,
which may be done just by looking at the individual. The user may select
whether to capture one or more of a facial image, an iris image, or an
audio sample.

[0527] The biometric data that may be captured by the eyepiece illustrated
in FIG. 39 includes facial images for facial recognition, iris images for
iris recognition, and audio samples for voice identification. The
eyepiece 3900 incorporates multiple microphones 3902 in an endfire array
disposed along both the right and left temples of the eyepiece. The
microphone arrays 3902 are specifically tuned to enable capture of human
voices in an environment with a high level of ambient noise. The
microphones may be directional, steerable, and covert. Microphones 3902
provide selectable options for improved audio capture, including
omni-directional operation, or directional beam operation. Directional
beam operation allows a user to record audio samples from a specific
individual by steering the microphone array in the direction of the
subject individual. Adaptive microphone arrays may be created that will
allow the operator to steer the directionality of the microphone array in
three dimensions, where the directional beam may be adjusted in real time
to maximize signal or minimize interfering noise for a non-stationary
target. Array processing may allow summing of cardioid elements by analog
or digital means, where there may be switching between omni and
directional array operations. In embodiments, beam forming, array
steering, adaptive array processing (speech source location), and the
like, may be performed by the on-board processor. In an embodiment, the
microphone may be capable of 10 dB directional recording.
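
A minimal delay-and-sum sketch of steering such an array toward a talker,
assuming plane-wave arrival, hypothetical microphone coordinates along the
temple, and digital processing on the on-board processor:

```python
import numpy as np

def delay_and_sum(channels, mic_xy_m, steer_deg, rate=48000, c=343.0):
    """Steer a small microphone array toward steer_deg with delay-and-sum.

    channels  -- ndarray of shape (num_mics, num_samples), time-aligned capture
    mic_xy_m  -- ndarray of shape (num_mics, 2), microphone coordinates in meters
    steer_deg -- look direction in the same x/y plane, in degrees
    Per-microphone delays align a plane wave from the look direction, so the
    summed output reinforces the target talker and attenuates other directions.
    """
    channels = np.asarray(channels, dtype=float)
    u = np.array([np.cos(np.radians(steer_deg)), np.sin(np.radians(steer_deg))])
    arrival_advance_s = np.asarray(mic_xy_m) @ u / c       # earlier arrival toward u
    delays = np.round((arrival_advance_s - arrival_advance_s.min()) * rate).astype(int)
    n = channels.shape[1]
    out = np.zeros(n)
    for ch, d in zip(channels, delays):
        out[d:] += ch[: n - d]                              # delay early channels to align
    return out / channels.shape[0]
```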

[0528] Audio biometric capture is enhanced by incorporating phased array
audio and video tracking for audio and video capture. Audio tracking
allows for continuing to capture an audio sample when the target
individual is moving in an environment with other noise sources. In
embodiments, the user's voice may be subtracted from the audio track so
as to enable a clearer rendition of the target individual, such as for
distinguishing what is being said, to provide better location tracking,
to provide better audio tracking, and the like.

[0529] To provide power for the display optics and biometric data
collection, the eyepiece 3900 also incorporates a lithium-ion battery
3904 that is capable of operating for over twelve hours on a single
charge. In addition, the eyepiece 100 also incorporates a processor and
solid-state memory 3906 for processing the captured biometric data. The
processor and memory are configurable to function with any software or
algorithm used as part of a biometric capture protocol or format, such as
the .wav format.

[0530] A further embodiment of the eyepiece assembly 3900 provides an
integrated communications facility that transmits the captured biometric
data to a remote facility that stores the biometric data in a biometric
data database. The biometric data database interprets the captured
biometric data and prepares content for display on the eyepiece.

[0531] In operation, a wearer of the eyepiece desiring to capture
biometric data from a nearby observed individual positions himself or
herself so that the individual appears in the field of view of the
eyepiece. Once in position the user initiates capture of biometric
information. Biometric information that may be captured includes iris
images, facial images, and audio data.

[0532] In operation, a wearer of the eyepiece desiring to capture audio
biometric data from a nearby observed individual positions himself or
herself so that the individual is near the eyepiece,
specifically, near the microphone arrays located in the eyepiece temples.
Once in position the user initiates capture of audio biometric
information. This audio biometric information consists of a recorded
sample of the target individual speaking. Audio samples may be captured
in conjunction with visual biometric data, such as iris and facial
images.

[0533] To capture an iris image, the wearer/user observes the desired
individual and positions the eyepiece such that the optical sensor
assembly or camera may collect an image of the biometric parameters of
the desired individual. Once captured the eyepiece processor and
solid-state memory prepare the captured image for transmission to the
remote computing facility for further processing.

[0534] The remote computing facility receives the transmitted biometric
image and compares the transmitted image to previously captured biometric
data of the same type. Iris or facial images are compared with previously
collected iris or facial images to determine if the individual has been
previously encountered and identified.

[0535] Once the comparison has been made, the remote computing facility
transmits a report of the comparison to the wearer/user's eyepiece, for
display. The report may indicate that the captured biometric image
matches previously captured images. In such cases, the user receives a
report including the identity of the individual, along with other
identifying information or statistics. Not all captured biometric data
allows for an unambiguous determination of identity. In such cases, the
remote computing facility provides a report of findings and may request
the user to collect additional biometric data, possibly of a different
type, to aid in the identification and comparison process. Visual
biometric data may be supplemented with audio biometric data as a further
aid to identification.

[0536] Facial images are captured in a similar manner as iris images. The
field of view is necessarily larger, due to the size of the images
collected. This also permits the user to stand farther from the subject
whose facial biometric data is being captured.

[0537] In operation the user may have originally captured a facial image
of the individual. However, the facial image may be incomplete or
inconclusive because the individual may be wearing clothing or other
apparel, such as a hat, that obscures facial features. In such a case,
the remote computing facility may request that a different type of
biometric capture be used and additional images or data be transmitted.
In the case described above, the user may be directed to obtain an iris
image to supplement the captured facial image. In other instances, the
additional requested data may be an audio sample of the individual's
voice.

[0538] FIG. 40 illustrates capturing an iris image for iris recognition.
The figure illustrates the focus parameters used to analyze the image and
includes a geographical location of the individual at the time of
biometric data capture. FIG. 40 also depicts a sample report that is
displayed on the eyepiece.

[0539] FIG. 41 illustrates capture of multiple types of biometric data, in
this instance, facial and iris images. The capture may be done at the
same time, or by request of the remote computing facility if a first type
of biometric data leads to an inconclusive result.

[0540] FIG. 42 shows the electrical configuration of the multiple
microphone arrays contained in the temples of the eyepiece of FIG. 39.
The endfire microphone arrays allow for greater discrimination of signals
and better directionality at a greater distance. Signal processing is
improved by incorporating a delay into the transmission line of the back
microphone. The use of dual omni-directional microphones enables
switching from an omni-directional microphone to a directional
microphone. This allows for better direction finding for audio capture of
a desired individual. FIG. 43 illustrates the directionality improvements
available with multiple microphones.

[0541] The multiple microphones may be arranged in a composite microphone
array. Instead of using one standard high quality microphone to capture
an audio sample, the eyepiece temple pieces house multiple microphones of
different character. For example, this may be provided when the user is
generating a biometric fingerprint of someone's voice for future capture
and comparison. One example of multiple microphone use employs microphones
from cut-off cell phones to reproduce the exact electrical and acoustic
properties of the individual's voice. This sample is stored for future
comparison in a database. If the individual's voice is later captured,
the earlier sample is available for comparison, and will be reported to
the eyepiece user, as the acoustic properties of the two samples will
match.

[0542] FIG. 44 shows the use of adaptive arrays to improve audio data
capture. By modifying pre-existing algorithms for audio processing,
adaptive arrays can be created that allow the user to steer the
directionality of the antenna in three dimensions. Adaptive array
processing permits location of the source of the speech, thus tying the
captured audio data to a specific individual. Array processing permits
simple summing of the cardioid elements of the signal to be done either
digitally or using analog techniques. In normal use, a user should switch
the microphone between the omni-directional pattern and the directional
array. The processor allows for beamforming, array steering and adaptive
array processing to be performed on the eyepiece. In embodiments, an
audio phased array may be used for audio tracking of a specific
individual. For instance, the user may lock onto the audio signature of
an individual in the surrounding environment (such as acquired in
real-time or from a database of sound signatures), and track the location
of the individual without the need to maintain eye contact or the user
moving their head. The location of the individual may be projected to the
user through the eyepiece display. In embodiments, the tracking of an
individual may also be provided through an embedded camera in the
eyepiece, where the user would not be required to maintain eye contact
with the individual, or move their head to follow. That is, in the case
of either the audio or visual tracking, the eyepiece may be able to track
the individual within the local environment, without the user needing to
show any physical motion to indicate that tracking is taking place, and
even as the user moves their direction of view.
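
For illustration only, the following is a minimal delay-and-sum sketch in
Python (using NumPy) of the kind of endfire steering described above, in
which the rear microphone signal is delayed and summed with the front
signal. The function name, the two-sample-stream interface, and the 343
m/s speed of sound are assumptions made for the sketch, not part of the
eyepiece implementation.

    import numpy as np

    def delay_and_sum(front, back, mic_spacing_m, angle_deg, fs, c=343.0):
        """Steer a two-element endfire array toward angle_deg by delaying
        the back-microphone samples and summing them with the front."""
        # Time difference of arrival for a plane wave from the steering angle.
        tau = mic_spacing_m * np.cos(np.radians(angle_deg)) / c
        shift = int(round(tau * fs))                  # delay in whole samples
        delayed_back = np.roll(back.astype(float), shift)
        if shift > 0:
            delayed_back[:shift] = 0.0                # clear wrap-around samples
        elif shift < 0:
            delayed_back[shift:] = 0.0
        return 0.5 * (front + delayed_back)

The same summation could equally be done with analog delays; steering the
delay adjusts the direction of maximum sensitivity, while setting the delay
to zero recovers a simple summed omni-directional response.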

[0543] In an embodiment, the integrated camera may continuously record a
video file, and the integrated microphone may continuously record an
audio file. The integrated processor of the eyepiece may enable event
tagging in long sections of the continuous audio or video recording. For
example, a full day of passive recording may be tagged whenever an event,
conversation, encounter, or other item of interest takes place. Tagging
may be accomplished through the explicit press of a button, a noise or
physical tap, a hand gesture, or any other control technique described
herein. A marker may be placed in the audio or video file or stored in a
metadata header. In embodiments, the marker may include the GPS
coordinate of the event, conversation, encounter, or other item of
interest. In other embodiments, the marker may be time-synced with a GPS
log of the day. Other logic-based triggers can also tag the audio or
video file such as proximity relationships to other users, devices,
locations, or the like. Event tags may be active event tags that the user
triggers manually, passive event tags that occur automatically (such as
through preprogramming, through an event profile management facility, and
the like), a location-sensitive tag triggered by the user's location, and
the like. The event tag may be triggered by a sound, a sight, a visual
marker, a signal received from a network connection, an
optical trigger, an acoustic trigger, a proximity trigger, a temporal
trigger, a geo-spatial trigger, and the like. The event trigger may
generate feedback to the user (such as an audio tone, a visual indicator,
a message, and the like), store information (such as storing a file,
document, entry in a listing, an audio file, a video file, and the like),
generate an informational transmission, and the like.
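
As a purely illustrative sketch of the tagging just described, the snippet
below appends a marker carrying a GPS coordinate and a timestamp (which can
later be synced against a GPS log) to a sidecar file rather than rewriting
the recording itself. The field names and the JSON-lines sidecar format are
assumptions for the sketch, not a format defined by this disclosure.

    import json, time
    from dataclasses import dataclass, asdict

    @dataclass
    class EventTag:
        """Marker for an item of interest within a continuous recording."""
        media_file: str     # audio or video file being tagged
        offset_s: float     # position of the event within the recording
        lat: float          # GPS coordinate at the time of the event
        lon: float
        utc: float          # wall-clock timestamp, for syncing with a GPS log
        label: str          # e.g. "conversation", "encounter"

    def append_tag(sidecar_path, tag):
        # Store markers in a sidecar file rather than rewriting the media file.
        with open(sidecar_path, "a") as f:
            f.write(json.dumps(asdict(tag)) + "\n")

    # Example: tag an encounter 3,412 seconds into the day's passive recording.
    append_tag("day.tags.jsonl",
               EventTag("day.mp4", 3412.0, 38.0, -77.0, time.time(), "encounter"))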

[0544] In an embodiment, the eyepiece may be used as SigInt Glasses. Using
one or more of the integrated WiFi, 3G, or Bluetooth radios, the eyepiece
may be used to inconspicuously and passively gather signals intelligence
for devices and individuals in the user's proximity. Signals intelligence
may be gathered automatically or may be triggered when a particular
device ID is in proximity, when a particular audio sample is detected,
when a particular geo-location has been reached, and the like.

[0545] Various embodiments of tactical glasses may include standalone
identification or collection of biometrics to geo-locate POIs, with
visual biometrics (face, iris, walking gait) at a safe distance and
positively identify POIs with robust sparse recognition algorithms for
the face and iris. The glasses may include a hands free display for
biometric computer interface to merge print and visual biometrics on one
comprehensive display with augmented target highlighting and view matches
and warnings without alerting the POI. The glasses may include location
awareness, such as displaying current and average speeds plus routes and
ETA to destination and preloading or recording trouble spots and
ex-filtration routes. The glasses may include real-time networked
tracking of blue and red forces to always know where friendly forces are,
achieve visual separation range between blue and red forces, and
geo-locate the enemy and share their location in real-time. A processor
associated with the glasses may include capabilities for OCR translation
and speech translation.

[0546] The tactical glasses can be used in combat to provide a graphical
user interface projected on the lens that provides users with directions
and augmented reality data on such things as team member positional data,
map information of the area, SWIR/CMOS night vision, vehicular S/A for
soldiers, geo-locating laser range finder for geo-locating a POI or a
target to >500m with positional accuracy of typically less than two
meters, S/A blue force range rings, Domex registration, AR field repair
overlay, and real time UAV video. In one embodiment, the laser range
finder may be a 1.55 micron eye-safe laser range finder.

[0547] The eyepiece may utilize GPS and inertial navigation (e.g.
utilizing an inertial measurement unit), as described herein, to provide
positional and directional accuracy.
However, the eyepiece may utilize additional sensors and associated
algorithms to enhance positional and directional accuracy, such as with a
3-axis digital compass, inclinometer, accelerometer, gyroscope, and the
like. For instance, a military operation may require greater positional
accuracy than is available from GPS, and so other navigation sensors may
be utilized in combination to increase the positional accuracy of GPS.
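
A minimal sketch, assuming a 3-axis digital compass heading and a gyroscope
yaw rate as inputs, of one common way such sensors may be combined to
improve directional accuracy: a complementary filter. It is offered only as
an illustration of sensor fusion; the eyepiece's actual algorithms are not
specified here.

    import numpy as np

    def fuse_heading(gyro_rate_dps, compass_deg, dt_s, alpha=0.98, heading=0.0):
        """Complementary filter: integrate the gyroscope for short-term
        accuracy and pull toward the compass to cancel long-term drift.
        (Angle wrap-around near 0/360 degrees is ignored in this sketch.)"""
        headings = []
        for rate, mag in zip(gyro_rate_dps, compass_deg):
            gyro_estimate = heading + rate * dt_s                 # fast but drifts
            heading = alpha * gyro_estimate + (1 - alpha) * mag   # slow correction
            headings.append(heading % 360.0)
        return np.array(headings)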

[0548] The tactical glasses may feature enhanced resolution, such as
1280×1024 pixels, and may also feature auto-focus.

[0549] In dismounted and occupied enemy engagement missions, defeating a
low-intensity, low-density, asymmetrical form of warfare depends
upon efficient information management. The tactical glasses system
incorporates ES2 (every soldier is a sensor) capabilities through
uncooperative data recording and intuitive tactical displays for a
comprehensive picture of situational awareness.

[0550] In embodiments, the tactical glasses may include one or more
waveguides being integrated into the frame. In some embodiments, the
total internal reflection lens is attached to a pair of ballistic glasses
in a monocular or binocular flip-up/flip-down arrangement. The tactical
glasses may include omni-directional ear buds for advanced hearing and
protection and a noise-cancelling boom microphone for communicating
phonetically differentiated commands.

[0551] In some embodiments, the waveguides may have contrast control. The
contrast may be controlled using any of the control techniques described
herein, such as gesture control, automatic sensor control, manual control
using a temple mounted controller, and the like.

[0552] The tactical glasses may include a non-slip, adjustable elastic
head-strap. The tactical glasses may include clip-in corrective lenses.

[0553] In some embodiments, the total internal reflection lens is attached
to a device that is helmet-mounted, such as in FIG. 74, and may include a
day/night, VIS/NIR/SWIR CMOS color camera. The device enables unimpeded
"sight" of the threat as well as the soldier's own weapon with "see
through", flip-up electro-optic projector image display. The
helmet-mounted device, shown in FIG. 74A, may include an IR/SWIR
illuminator 7402, UV/SWIR illuminator 7404, visible to SWIR panoramic
lens 7408, visible to SWIR objective lens (not shown), transparent
viewing pane 7410, iris recognition objective lens 7412, laser emitter
7414, laser receiver 7418, or any other sensor, processor, or technology
described with respect to the eyepiece described herein, such as an
integrated IMU, an eye-safe laser range finder, integrated GPS receiver,
compass and inclinometer for positional accuracy, perspective control
that changes the viewing angle of the image to match the eye position,
electronic image stabilization and real-time enhancement, a library of
threats stored onboard or remotely for access over a tactical network,
and the like. A body-worn wireless computer may interface with the device
in FIG. 74. The helmet-mounted device includes visible to SWIR projector
optics, such as RGB microprojector optics. Multi-spectral IR and UV
imaging helps spot fake or altered documents. The helmet-mounted device may be
controlled with an encrypted wireless UWB wrist or weapon fore grip
controller.

[0554] In an embodiment, the transparent viewing pane 7410 can rotate
through 180° to project imagery onto a surface to share with
others.

[0555] FIG. 74B shows a side view of the exploded device mounted to a
helmet. The device may include a fully ambidextrous mount for mounting on
the left or right side of the helmet. In some embodiments, two devices
may be mounted, one on each of the left and right sides of the helmet, to
enable binocular vision. The device or devices may snap into a standard
MICH or PRO-TECH helmet mount.

[0556] Today the warfighter cannot utilize fielded data devices
effectively. The tactical glasses system combines a low profile form,
lightweight materials and fast processors to make quick and accurate
decisions in the field. The modular design of the system allows the
devices to be effectively deployed to the individual, squad or company
while retaining the ability to interoperate with any fielded computer.
The tactical glasses system incorporates real-time dissemination of data.
With the onboard computer interface the operator can view, upload or
compare data in real time. This provides valuable situational and
environmental data that can be rapidly disseminated to all networked personnel
as well as command posts (CPs) and tactical operations centers (TOCs).

[0557] FIGS. 75A and 75B in a front and side view, respectively, depict an
exemplary embodiment of biometric and situational awareness glasses. This
embodiment may include multiple field of view sensors 7502 for biometric
collection, situational awareness, and an augmented view user interface, a
fast-locking GPS receiver and IMU, including 3-axis digital compass,
gyroscope, accelerometer and inclinometer for positional and directional
accuracy, a 1.55 micron eye-safe laser range finder 7504 to assist
biometric capture and targeting, an integrated digital video recorder storing
to two Flash SD cards, real-time electronic image stabilization and
real-time image enhancement, library of threats stored in onboard mini-SD
card or remotely loaded over a tactical network, flip-up photochromic
lenses 7508, noise-cancelling flexible boom mike 7510 and 3-axis
detachable stereo ear buds plus augmented hearing and protection system
7512. For example, the multiple field of view sensors 7502 may enable a
100°×40° FOV, which may be panoramic SXGA. For
example, the sensors may be a VGA sensor, SXGA sensor, and a VGA sensor
that generates a panoramic SXGA view with stitched
100°×40° FOV on a display of the glasses. The
displays may be translucent with perspective control that changes the
viewing angle of the image to match the eye position. This embodiment may
also include SWIR detection to let wearers see 1064 nm and 1550 nm laser
designators, invisible to the enemy and may feature ultra-low power
256-bit AES Encrypted connection between glasses, tactical radios and
computers, instant 2× zoom, auto face tracking, face and iris
recording and recognition, and GPS geo-location with a 1 m
auto-recognition range. This embodiment may include a power supply, such
as a 24 hour duration 4-AA alkaline, lithium and rechargeable battery box
with its computer and memory expansion slots with a water- and dust-proof
cord. In an embodiment, the glasses include a curved holographic wave
guide.

[0558] In embodiments, the eyepiece may be able to sense lasers such as
used in battlefield targeting. For instance, sensors in the eyepiece may
be able to detect laser light in typical military-use laser transmission
bands, such as 1064 nm, 1550 nm, and the like. In this way, the eyepiece
may be able to detect whether their position is being targeted, if
another location is being targeted, the location of a spotter using the
laser as a targeting aid, and the like. Further, since the eyepiece may
be able to sense laser light, such as directly or reflected, the soldier
may not only detect enemy laser sources that have been directed or
reflected to their position, but may supply the laser source themselves
in order to locate optical surfaces (e.g. binoculars) in the battlefield
scene. For example, the soldier scans the field with a laser and watches
with the eyepiece for a reflected return of the laser as a possible
location of an enemy viewing through binoculars. In embodiments, the
eyepiece may continuously scan the surrounding environment for laser
light, and provide feedback and/or action as a result of a detection,
such as an audible alarm to the soldier, a location indicated through a
visual indicator on the eyepiece display, and the like.
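
By way of illustration, a detection loop of this kind could reduce to a
simple threshold test over narrow-band detector readings. The dictionary of
per-band readings and the 0.8 normalized threshold below are assumptions
made for the sketch, not properties of any particular sensor.

    # Illustrative sketch only: band_readings is assumed to come from
    # hypothetical narrow-band photodetectors centered on common
    # designator wavelengths.
    ALERT_BANDS_NM = (1064, 1550)

    def check_laser_bands(band_readings, threshold=0.8):
        """Return the designator bands whose detector reading exceeds a
        normalized threshold, so the eyepiece can raise an audible alarm
        or a visual indicator on the display."""
        return [nm for nm, level in band_readings.items()
                if nm in ALERT_BANDS_NM and level >= threshold]

    # Example: the 1550 nm detector reads high; the wearer should be alerted.
    print(check_laser_bands({1064: 0.1, 1550: 0.95}))   # -> [1550]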

[0559] In some embodiments, a Pocket Camera may record video and capture
still pictures, allowing the operator to record environmental data for
analysis with a mobile, lightweight, rugged biometric device sized to be
stored in a pocket. An embodiment may be 2.25''×3.5''×0.375''
and capable of face capture at 10 feet, iris capture at 3 feet, recording
voice, pocket litter, walking gait, and other identifying visible marks
and environmental data in EFTS and EBTS compliant formatting compatible
with any Iris/Face algorithm. The device is designed to pre-qualify and
capture EFTS/EBTS/NIST/ISO/ITL 1-2007 compliant salient images to be
matched and filed by any biometric matching software or user interface.
The device may include a high definition video chip, 1 GHz processor with
533 MHz DSP, GPS chip, active illumination and pre-qualification
algorithms. In some embodiments, the Pocket Bio Cam may not incorporate a
biometric watch list so it can be used at all echelons and/or for
constabulary leave-behind operations. Data may be automatically
geo-located and date/time stamped. In some embodiments, the device may
operate Linux SE OS, meet MIL-STD-810 environmental standards, and be
waterproof to 3 ft depth.

[0560] In an embodiment, a device for collection of fingerprints may be
known as a bio-print device. The bio-print apparatus comprises a clear
platen with two beveled edges. The platen is illuminated by a bank of
LEDs and one or more cameras. Multiple cameras are used and are closely
disposed and directed to the beveled edge of the platen. A finger or palm
is disposed over the platen and pressed against an upper surface of the
platen, where the cameras capture the ridge pattern. The image is
recorded using frustrated total internal reflection (FTIR). In FTIR,
light escapes the platen across the air gap created by the ridges and
valleys of the fingers or palm pressed against the platen.

[0561] Other embodiments are also possible. In one embodiment, multiple
cameras are placed in inverted `V`s of a sawtooth pattern. In another
embodiment, a rectangle is formed and uses light directed through one side
and an array of cameras capture the images produced. The light enters the
rectangle through the side of the rectangle, while the cameras are
directly beneath the rectangle, enabling the cameras to capture the
ridges and valleys illuminated by the light passing through the
rectangle.

[0562] After the images are captured, software is used to stitch the
images from the multiple cameras together. A custom FPGA may be used for
the digital image processing.
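
A minimal sketch of the stitching step, assuming same-height tiles from
adjacent cameras with a known pixel overlap fixed by the camera geometry;
real processing, whether in software or in the custom FPGA, would also
correct lens distortion and blend seams rather than simply averaging the
overlap as done here.

    import numpy as np

    def stitch_row(tiles, overlap_px):
        """Assemble one row of the print mosaic from same-height camera
        tiles whose horizontal overlap is known in advance. Overlapping
        columns are averaged."""
        out = tiles[0].astype(np.float32)
        for tile in tiles[1:]:
            tile = tile.astype(np.float32)
            blended = 0.5 * (out[:, -overlap_px:] + tile[:, :overlap_px])
            out = np.hstack([out[:, :-overlap_px], blended, tile[:, overlap_px:]])
        return out.astype(np.uint8)

With the roughly one-inch tiles and 500 ppi discussed below, overlap_px
would be the number of pixel columns shared between adjacent cameras.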

[0563] Once captured and processed, the images may be streamed to a remote
display, such as a smart phone, computer, handheld device, or eyepiece,
or other device.

[0564] The above description provides an overview of the operation of the
methods and apparatus of the disclosure. Additional description and
discussion of these and other embodiments is provided below.

[0565] FIG. 45 illustrates the construction and layout of an optics based
finger and palm print system according to an embodiment. The optical
array consists of approximately 60 wafer scale cameras 4502. The optics
based system uses sequential perimeter illumination 4503, 4504 for high
resolution imaging of the whorls and pores that comprise a finger or palm
print. This configuration provides a low profile, lightweight, and
extremely rugged configuration. Durability is enhanced with a scratch
proof, transparent platen.

[0566] The mosaic print sensor uses a frustrated total internal reflection
(FTIR) optical faceplate that provides images to an array of wafer scale
cameras mounted on a PCB-like substrate 4505. The sensor may be scaled to
any flat width and length with a depth of approximately 1/2''. Size may
vary from a plate small enough to capture just one finger roll print, up
to a plate large enough to capture prints of both hands simultaneously.

[0567] The mosaic print sensor allows an operator to capture prints and
compare the collected data against an on-board database. Data may also be
uploaded and downloaded wirelessly. The unit may operate as a standalone
unit or may be integrated with any biometric system.

[0568] In operation the mosaic print sensor offers high reliability in
harsh environments with excessive sunlight. To provide this capability,
multiple wafer scale optical sensors are digitally stitched together
using pixel subtraction. The resulting images are engineered to be over
500 dots per inch (dpi). Power is supplied by a battery or by
parasitically drawing power from other sources using a USB protocol.
Formatting is EFTS, EBTS, NIST, ISO, and ITL 1-2007 compliant.

[0569] FIG. 46 illustrates the traditional optical approach used by other
sensors. This approach is also based on FTIR. In the figure, the ridges
contact the prism and scatter the light. The ridges on the finger being
printed show as dark lines, while the valleys of the fingerprint show as
bright lines.

[0570] FIG. 47 illustrates the approach used by the mosaic sensor 4700.
The mosaic sensor also uses FTIR. However, the plate is illuminated from
the side and the internal reflections are contained within the plate of
the sensor. The ridges contact the prism and scatter the light, allowing
the camera to capture the scattered light. The ridges on the finger show
as bright lines, while the valleys show as dark lines.

[0571] FIG. 48 depicts the layout of the mosaic sensor 4800. The LED array
is arranged around the perimeter of the plate. Underneath the plate are
the cameras used to capture the fingerprint image. The image is captured
on this bottom plate, known as the capture plane. The capture plane is
parallel to the sensor plane, where the fingers are placed. The thickness
of the plate, the number of the cameras, and the number of the LEDs may
vary, depending on the size of the active capturing area of the plate.
The thickness of the plate may be reduced by adding mirrors that fold the
optical path of the camera, reducing the thickness needed. Each camera
should cover one inch of space with some pixels overlapping between the
cameras. This allows the mosaic sensor to achieve 500 ppi. The cameras
may have a field of view of 60 degrees; however, there may be significant
distortion in the image.

[0572] FIG. 49 shows an embodiment 4900 of a camera field of view and the
interaction of the multiple cameras used in the mosaic sensor. Each
camera covers a small capturing area. This area depends on the camera
field of view and the distance between the camera and the top surface of
the plate. α is one half of the camera's horizontal field of view
and β is one half of the camera's vertical field of view.
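
The capture area follows directly from these angles. The short calculation
below, with an assumed 30° half field of view in both directions and an
assumed 0.87 inch camera-to-platen distance, shows how roughly one inch of
coverage per camera and the pixel budget for 500 ppi fall out of the
geometry; the specific numbers are illustrative only.

    import math

    def capture_area_in(alpha_deg, beta_deg, distance_in):
        """Per-camera capture area on the platen: width and height follow
        from the half field-of-view angles and the camera-to-surface
        distance."""
        width = 2 * distance_in * math.tan(math.radians(alpha_deg))
        height = 2 * distance_in * math.tan(math.radians(beta_deg))
        return width, height

    # Assumed example numbers: 30 deg half-FOV, camera 0.87 in below the platen.
    w, h = capture_area_in(30, 30, 0.87)
    print(round(w, 2), round(h, 2))       # ~1.0 x 1.0 inch per camera
    print(int(w * 500), int(h * 500))     # pixels needed for 500 ppi coverage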

[0573] The mosaic sensor may be incorporated into a bio-phone and tactical
computer as illustrated in FIG. 50. The bio-phone and tactical computer
uses a complete mobile computer architecture that incorporates dual core
processors, DSP, 3-D graphics accelerator, 3G-4G Wi-Lan (in accordance
with 802.11a/b/g/n), Bluetooth 3.0, and a GPS receiver. The bio-phone and
tactical computer delivers power equivalent to a standard laptop in a
phone size package.

[0574] FIG. 50 illustrates the components of the bio-phone and tactical
computer. The bio-phone and tactical computer assembly 5000 provides a
display screen 5001, speaker 5002 and keyboard 5003 contained within case
5004. These elements are visible on the front of the bio-phone and
tactical computer assembly 5000. On the rear of the assembly 5000 are
located a camera for iris imaging 5005, a camera for facial imaging and
video recording 5006 and a bio-print fingerprint sensor 5009.

[0577] The bio-phone can search, collect, enroll, and verify multiple
types of biometric data, including face, iris, two-finger fingerprint, as
well as biographic data. The device also records video, voice, gait,
identifying marks, and pocket litter. Pocket litter includes a variety of
small items normally carried in a pocket, wallet, or purse and may
include such items as spare change, identification, passports, charge
cards, and the like. FIG. 52 shows a typical collection of this type of
information. Depicted in FIG. 52 are examples of a collection of pocket
litter 5200. The types of items that may be included are personal
documents and pictures 5201, books 5202, notebooks and paper 5203, and
documents, such as a passport 5204.

[0578] The biometrics phone and tactical computer may include a camera,
such as a high definition still and video camera, capable of biometric
data taking and video conferencing. In embodiments, the eyepiece camera
and videoconference capabilities, as described herein, may be used in
conjunction with the biometrics phone and tactical computer. For
instance, a camera integrated into the eyepiece may capture images and
communicate the images to the biometrics phone and tactical computer, and
vice versa. Data may be exchanged between the eyepiece and biometrics
phone, network connectivity may be established by either, and shared, and
the like. In addition, the biometric phone and tactical computer may be
housed in a rugged, fully militarized construction, tolerant to a
militarized temperature range, waterproof (such as to a depth of 5 m),
and the like.

[0579] FIG. 51 illustrates an embodiment 5100 of the use of the bio-phone
to capture latent fingerprints and palm prints. Fingerprints and palm
prints are captured at 1000 dpi with active illumination from an
ultraviolet diode with scale overlay. Both fingerprint and palm prints
5100 may be captured using the bio-phone.

[0580] Data collected by the bio-phone is automatically geo-located and
date and time stamped using the GPS capability. Data may be uploaded or
downloaded and compared against onboard or networked databases. This data
transfer is facilitated by the 3G-4G, Wi-Lan, and Bluetooth capabilities
of the device. Data entry may be done with the QWERTY keyboard, or other
methods that may be provided, such as stylus or touch screen, or the
like. Biometric data is filed after collection using the most salient
image. Manual entry allows for partial data capture. FIG. 53 illustrates
the interplay 5300 between the digital dossier images and the biometric
watch list held at a database. The biometric watch list is used for
comparing data captured in the field with previously captured data.

[0581] Formatting may use EFTS, EBTS, NIST, ISO, and ITL 1-2007 formats to
provide compatibility with a range and variety of databases for biometric
data.

[0603] Additional devices and kits may also incorporate the mosaic sensors
and may operate in conjunction with the bio-phone and tactical computer
to provide a complete field solution for collecting biometric data.

[0621] The features of the bio-phone and tactical computer may also be
provided in a bio-kit that provides for a biometric data collection
system that folds into a rugged and compact case. Data is collected in
biometric standard image and data formats that can be cross-referenced
for near real-time data communication with Department of Defense
Biometric Authoritative Databases.

[0622] The pocket bio-kit shown in FIG. 55 can capture latent fingerprints
and palm prints at 1,000 dpi with active illumination from an ultraviolet
diode with scale overlay. The bio-kit holds 32 GB memory storage cards
that are capable of interoperation with combat radios or computers for
upload and download of data in real-time field conditions. Power is
provided by lithium ion batteries. Components of the bio-kit assembly
5500 include a GPS antenna 5501, a bio-print sensor 5502, and a case 5503
with a base bottom 5505.

[0623] Biometric data collected is geo-located for monitoring and tracking
individual movement. Finger and palm prints, iris images, face images,
latent fingerprints, and video may be collected and enrolled in a
database using the bio-kit. Algorithms for finger and palm prints, iris
images, and face images facilitate these types of data collection. To aid
in capturing iris images and latent fingerprint images simultaneously,
the bio-kit has IR and UV diodes that actively illuminate an iris or
latent fingerprint. In addition, the pocket bio-kit is also fully
EFTS/EBTS compliant, including ITL 1-2007 and WSQ. The bio-kit meets
MIL-STD-810 for operation in environmental extremes and uses a Linux
operating system.

[0625] The bio-kit is also capable of recording video and stores
full-motion (30 fps) color video in an onboard "camcorder on chip."

[0626] The eyepiece 100 may interface with the mobile folding biometrics
enrollment kit (aka bio-kit) 5500, a biometric data collection system
that folds into a compact rugged case that unfolds into a mini
workstation for fingerprints, iris and facial recognition, latent
fingerprint, and the like biometric data as described herein. As is the
case for the other mobile biometrics devices, the mobile folding
biometrics enrollment kit 5500 may be used as a stand-alone device or in
association with the eyepiece 100, as described herein. In an embodiment,
the mobile folding biometrics enrollment kit may fold up to a small size
such as 6''×3''×1.5'' with weight such as 2 pounds. It may
contain a processor, digital signal processor, 3D accelerator, fast
syndrome-based hash (FSB) functions, solid state memory (e.g.
package-on-package (PoP)), hard drive, display (e.g. 75 mm×50 mm,
640×480 (VGA) daylight-readable LCD anti-glare, anti-reflective,
anti scratch screen), USB, Ethernet, embedded battery, mosaic optical
fingerprint reader, digital iris camera (such as with active IR
illumination), digital face and DOMEX camera with flash, fast lock GPS,
and the like. Data may be collected in biometric standard image and data
formats that may be cross-referenced for a near real-time data
communication with the DoD biometric authoritative databases. The device
may be capable of collecting biometric data and geo-location of persons
of interest for monitoring and tracking, wireless data upload/download
using combat radio or computer with standard networking interface, and
the like.

[0627] In addition to the bio-kit, the mosaic sensor may be incorporated
into a wrist mounted fingerprint, palm print, geo-location, and POI
enrollment device, shown in FIG. 56. The eyepiece 100 may interface with
the biometric device 5600, a biometric data collection system that straps
on a soldier's wrist or arm and folds open for fingerprints, iris
recognition, computer, and the like biometric data as described herein.
The device may have an integrated computer, keyboard, sunlight-readable
display, biometric sensitive platen, and the like, so operators may
rapidly and remotely store or compare data for collection and
identification purposes. For instance, the arm strap biometric sensitive
platen may be used to scan a palm, fingerprints, and the like. The device
may provide geo-location tags for person of interest and collected data
with time, date, location, and the like. As is the case for the other
mobile biometrics devices, the biometric device 5600 may be used as a
stand-alone device or in association with the eyepiece 100, as described
herein. In an embodiment, the biometric device may be small and light to
allow it to be comfortably worn on a soldier's arm, such as with
dimensions 5''×2.5'' for the active fingerprint and palm print
sensor, and a weight of 16 ounces. There may be algorithms for
fingerprint and palm capture. The device may include a processor, digital
signal processor, a transceiver, a QWERTY keyboard, a large
weather-resistant pressure driven print sensor, sunlight readable
transflective QVGA color backlit LCD display, internal power source, and
the like.

[0628] In one embodiment, the wrist mounted assembly 5600 includes the
following elements in case 5601: straps 5602, setting and on/off buttons
5603, protective cover for sensor 5604, pressure-driven sensor 4405, and
a keyboard and LCD screen 5606.

[0629] The fingerprint, palm print, geo-location, and POI enrollment
device includes an integrated computer, QWERTY keyboard, and display. The
display is designed to allow easy operation in strong sunlight and uses
an LCD screen or LED indicator to alert the operator of successful
fingerprint and palm print capture. The display uses transflective QVGA
color, with a backlit LCD screen to improve readability. The device is
lightweight and compact, weighing 16 oz. and measuring 5''×2.5'' at
the mosaic sensor. This compact size and weight allows the device to slip
into an LBV pocket or be strapped to a user's forearm, as shown in FIG.
56. As with other devices incorporating the mosaic sensor, all POIs are
tagged with geo-location information at the time of capture.

[0630] The size of the sensor screen allows 10 fingers, palm, four-finger
slap, and finger tip capture. The sensor incorporates a large pressure
driven print sensor for rapid enrollment in any weather conditions as
specified in MIL-STD-810, at 500 dpi. Software algorithms support both
fingerprint and palm print capture modes, and the device uses a Linux
operating system for device management. Capture is rapid, due to the 720
MHz processor with 533 MHz DSP. This processing capability delivers
correctly formatted, salient images to any existing approved system
software. In addition, the device is also fully EFTS/EBTS compliant,
including ITL 1-2007 and WSQ.

[0631] As with other mosaic sensor devices, communication in wireless mode
is possible using a removable UWB wireless 256-bit AES transceiver. This
also provides secure upload and download to and from biometric databases
stored off the device.

[0632] Power is supplied using lithium polymer or AA alkaline batteries.

[0633] The wrist-mounted device described above may also be used in
conjunction with other devices, including augmented reality eyepieces
with data and video display, shown in FIG. 57. The assembly 5700 includes
the following components: an eyepiece 5702, and a bio-print sensor device
5700. The augmented reality eyepiece provides redundant, binocular,
stereo sensors and display and provides the ability to see in a variety
of lighting conditions, from glaring sun at midday to the extremely low
light levels found at night. Operation of the eyepiece is simple: with a
rotary switch located on the temple of the eyepiece, a user can access
data from a forearm computer or sensor, or a laptop device. The eyepiece
also provides omni-directional earbuds for hearing protection and
improved hearing. A noise cancelling boom microphone may also be
integrated into the eyepiece to provide better communication of
phonetically differentiated commands.

[0634] The eyepiece is capable of communicating wirelessly with the
bio-phone sensor and forearm mounted devices using a 256-bit AES
encrypted UWB. This also allows the device to communicate with a laptop
or combat radio, as well as network to CPs, TOCs, and biometric
databases. The eyepiece is ABIS, EBTS, EFTS, and JPEG 2000 compatible.

[0635] Similar to other mosaic sensor devices described above, the
eyepiece uses a networked GPS to provide highly accurate geo-location of
POIs, as well as an RF filter array.

[0636] In operation the low profile forearm mounted computer and tactical
display integrate face, iris, fingerprint, palm print, and fingertip
collection and identification. The device also records video, voice,
gait, and other distinguishing characteristics. Facial and iris tracking
is automatic, allowing the device to assist in recognizing
non-cooperative POIs. With the transparent display provided by the
eyepiece, the operator may also view sensor imagery, moving maps,
superimposed applications with navigation, targeting, position or other
information from sensors, UAVs, and the like, and data as well as the
individual whose biometric data is being captured or other targets/POIs.

[0637] FIG. 58 illustrates a further embodiment of the fingerprint, palm
print, geo-location, and POI enrollment device. The device is 16 oz and
uses a 5''×2.5'' active fingerprint and palm print capacitance
sensor. The sensor is capable of enrolling 10 fingers, a palm, 4 finger
slap, and finger tip prints at 500 dpi. A 0.6-1 GHz processor with 430
MHz DSP provides rapid enrollment and data capture. The device is ABIS,
EBTS, EFTS, and JPEG 2000 compatible and features networked GPS for
highly accurate location of persons of interest. In addition, the device
communicates wirelessly over a 256-bit AES encrypted UWB, laptop, or
combat radio. Database information may also be stored on the device,
allowing in-the-field comparison without uploading information. This
onboard data may also be shared wirelessly with other devices, such as a
laptop or combat radio.

[0638] A further embodiment of the wrist mounted bio-print sensor assembly
5800 includes the following elements: a bio-print sensor 5801, wrist
strap 5802, keyboard 5803, and combat radio connector interface 5804.

[0639] Data may be stored on the forearm device since the device can
utilize Mil-con data storage caps for increased storage capacity. Data
entry is performed on the QWERTY keyboard and may be done wearing gloves.

[0640] The display is a transflective QVGA, color, backlit LCD display
designed to be readable in sunlight. In addition to operation in strong
sunlight, the device may be operated in a wide range of environments, as
the device meets the requirements of MIL-STD-810 operation in
environmental extremes.

[0641] The mosaic sensor described above may also be incorporated into a
mobile, folding biometric enrollment kit, as shown in FIG. 59. The mobile
folding biometric enrollment kit 5900 folds into itself and is sized to
fit into a tactical vest pocket, having dimensions of 8×12×4
inches when unfolded.

[0642] FIG. 60 illustrates an embodiment 6000 of how the eyepiece and
forearm mounted device may interface to provide a complete system for
biometric data collection.

[0644] In operation the mobile folding biometric enrollment kit allows a
user to search, collect, identify, verify, and enroll face, iris, palm
print, fingertip, and biographic data for a subject and may also record
voice samples, pocket litter, and other visible identifying marks. Once
collected, the data is automatically geo-located and date and time stamped.
Collected data may be searched and compared against onboard and networked
databases. For communicating with databases not onboard the device,
wireless data up/download using combat radio or laptop computer with
standard networking interface is provided. Formatting is compliant with
EFTS, EBTS, NIST, ISO, and ITL 1-2007. Prequalified images may be sent
directly to matching software as the device may use any matching and
enrollment software.

[0645] The devices and systems described above provide a
comprehensive solution for mobile biometric data collection,
identification, and situational awareness. The devices are capable of
collecting fingerprints, palm prints, fingertips, faces, irises, voice,
and video data for recognition of uncooperative persons of interest
(POI). Video is captured using high speed video to enable capture in
unstable situations, such as from a moving vehicle. Captured information
may be readily shared and additional data entered via the keyboard. In
addition, all data is tagged with date, time, and geo-location. This
facilitates rapid dissemination of information necessary for situational
awareness in potentially volatile environments. Additional data
collection is possible with more personnel equipped with the devices,
thus, demonstrating the idea that "every soldier is a sensor." Sharing is
facilitated by integration of biometric devices with combat radios and
battlefield computers.

[0646] In embodiments, the eyepiece may utilize flexible thin-film
sensors, such as integrated into the eyepiece itself, into an external
device that the eyepiece interfaces with, and the like. A thin film
sensor may comprise a thin multi-layer electromechanical arrangement that
produces an electrical signal when subjected to a sudden contact force or
to continuously varying forces. Typical applications of electromechanical
thin film sensors employ both on-off electrical switch sensing and the
time-resolved sensing of forces. Thin-film sensors may include switches,
force gauges, and the like, where thin film sensors may rely upon the
effects of sudden electrical contact (switching), the gradual change of
electrical resistance under the action of force, the gradual release of
electrical charges under the action of stress forces, the generation of a
gradual electromotive force across a conductor when moving in a magnetic
field, and the like. For example, flexible thin-film sensors may be
utilized in force-pressure sensors with microscopic force sensitive
pixels for two-dimensional force array sensors. This may be useful for
touch screens for computers, smart-phones, notebooks, MP-3-like devices,
especially those with military applications; screens for controlling
anything under computer control including unmanned aerial vehicles (UAV),
drones, mobile robots, exoskeleton-based devices; and the like. Thin-film
sensors may be useful in security applications, such as in remote or
local sensors for detecting intrusion, opening or closing of devices,
doors, windows, equipment, and the like. Thin-film sensors may be useful
for trip wire detection, such as with electronics and radio used in
silent, remote trip-wire detectors. Thin-film sensors may be used in
open-close detections, such as force sensors for detecting strain-stress
in vehicle compartments, ship hulls, aircraft panels, and the like.
Thin-film sensors may be useful as biometric sensors, such as in
fingerprinting, palm-printing, finger tip printing, and the like.
Thin-film sensors may be useful for leak detection, such as detecting leaking
tanks, storage facilities, and the like. Thin-film sensors may be useful
in medical sensors, such as in detecting liquid or blood external to a
body, and the like. These sensor applications are meant to be
illustrative of the many applications thin-film sensors may be employed
in association with control and monitoring of external devices through
the eyepiece, and are not meant to be limiting in any way.
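
As an illustration only, reading a two-dimensional force array reduces to
scanning a frame of force-pixel values and thresholding. The frame layout,
newton units, and threshold below are assumptions made for the sketch
rather than a specification of any thin-film sensor.

    import numpy as np

    def detect_contacts(force_frame, threshold_n=0.5):
        """Given one frame from a hypothetical 2-D force-pixel array
        (values in newtons), return the (row, col) coordinates of pixels
        under load. A switch-style sensor would reduce this to a single
        boolean; a time-resolved sensor would run this per frame."""
        contacts = np.argwhere(force_frame >= threshold_n)
        return [tuple(map(int, rc)) for rc in contacts]

    # Example frame: a 4x4 array with one pressed region.
    frame = np.zeros((4, 4))
    frame[1, 2] = 0.9
    print(detect_contacts(frame))   # -> [(1, 2)]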

[0647] FIG. 62 illustrates an embodiment 6200 of a thin-film finger and
palm print collection device. The device can record four fingerprint
slaps and rolls, palm prints, and fingerprints to the NIST standard.
Superior quality finger print images can be captured with either wet or
dry hands. The device is reduced in weight and power consumption compared
to other large sensors. In addition, the sensor is self-contained and is
hot swappable. The configuration of the sensor may be varied to suit a
variety of needs, and the sensor may be manufactured in various shapes
and dimensions.

[0648] FIG. 63 depicts an embodiment 6300 of a finger, palm, and
enrollment data collection device. This device records fingertip, roll,
slap, and palm prints. A built in QWERTY keyboard allows entry of written
enrollment data. As with the devices described above, all data is tagged
with date, time, and geo-location of collection. A built in database
provides on board matching of potential POIs against the built in
database. Matching may also be performed with other databases over a
battlefield network. This device can be integrated with the optical
biometric collection eyepiece described above to support face and iris
recognition.

[0649] The specifications for the finger, palm, and enrollment device are
given below:

[0670] FIGS. 64-66 depict use of the devices incorporating a sensor for
collecting biometric data. FIG. 64 shows an embodiment 6400 of the
capture of a two-stage palm print. FIG. 65 shows collection 6500 using a
fingertip tap. FIG. 66 demonstrates an embodiment 6600 of a slap and roll
print being collected.

[0671] The discussion above pertains to methods of gathering biometric
data, such as fingerprints or palm prints using a platen or touch screen,
as shown in FIGS. 62-66. This disclosure also includes methods and
systems for touchless or contactless fingerprinting using polarized
light. In one embodiment, fingerprints may be taken by persons using a
polarized light source and retrieving images of the fingerprints using
reflected polarized light in two planes. In another embodiment,
fingerprints may be taken by persons using a light source and retrieving
images of the fingerprints using multispectral processing, e.g., using
two imagers at two different locations with different inputs. The
different inputs may be caused by using different filters or different
sensors/imagers. Applications of this technology may include biometric
checks of unknown persons or subjects in which the safety of the persons
doing the checking may be at issue.

[0672] In this method, an unknown person or subject may approach a
checkpoint, for example, to be allowed further travel to his or her
destination. As depicted in the system 6700 shown in FIG. 67, the person
P and an appropriate body part, such as a hand, a palm P, or other part,
are illuminated by a source of polarized light 6701. As is well known to
those with skill in optical arts, the source of polarized light may
simply be a lamp or other source of illumination with a polarizing filter
to emit light that is polarized in one plane. The light travels to the
person in an area which has been specified for non-contact
fingerprinting, so that the polarized light impinges on the fingers or
other body part of the person P. The incident polarized light is then
reflected from the fingers or other body part and passes in all
directions from the person. Two imagers or cameras 6704 receive the
reflected light after the light has passed through optical elements such
as a lens 6702 and a polarizing filter 6703. The cameras or imagers may
be mounted on the augmented reality glasses, as discussed above with
respect to FIG. 8F.

[0673] The light then passes from palm or finger or fingers of the person
of interest to two different polarizing filters 6704a, 6704b and then to
the imagers or cameras 6705. Light which has passed through the
polarizing filters may have a 90° orientation difference (horizontal and
vertical) or other orientation difference, such as 30°, 45°, 60° or 120°.
The cameras may be digital cameras with appropriate digital imaging
sensors to convert the incident light into appropriate signals. The
signals are then processed by appropriate processing circuitry 6706, such
as digital signal processors. The signals may then be combined in a
conventional manner, such as by a digital microprocessor with memory
6707. The digital processor with appropriate memory is programmed to
produce data suitable for an image of a palm, fingerprint, or other image
as desired. The digital data from the imagers may then be combined in
this process, for example, using the techniques of U.S. Pat. No.
6,249,616 and others. As noted above in the present disclosure, the
combined "image" may then be checked against a database to determine an
identity of the person. The augmented reality glasses may include such a
database in the memory, or may refer the signals data elsewhere 6708 for
comparison and checking.

[0674] A process for taking contactless fingerprints, palm prints or other
biometric prints is disclosed in the flowchart of FIG. 68. In one
embodiment, a polarized light source is provided 6801. In a second step
6802, the person of interest and the selected body part is positioned for
illumination by the light. In another embodiment, it may be possible to
use incident white light rather than using a polarized light source. When
the image is ready to be taken, light is reflected 6803 from the person
to two cameras or imagers. A polarizing filter is placed in front of each
of the two cameras, so that the light received by the cameras is
polarized 6804 in two different planes, such as in a horizontal and
vertical plane. Each camera then detects 6805 the polarized light. The
cameras or other sensors then convert the incidence of light into signals
or data 6806 suitable for preparation of images. Finally, the images are
then combined 6807 to form a very distinct, reliable print. The result is
an image of very high quality that may be compared to digital databases
to identify the person and to detect persons of interest.
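
The actual combination may follow techniques such as those of U.S. Pat.
No. 6,249,616, as noted above; the following is only a schematic stand-in
showing one simple way two differently polarized images of the same hand
might be merged to emphasize ridge contrast.

    import numpy as np

    def combine_polarized(img_h, img_v):
        """Schematic combination of two images taken through horizontally
        and vertically oriented polarizers. The difference image captures
        polarization-dependent detail while the sum preserves overall
        reflectance; the result is normalized to 8-bit range."""
        h = img_h.astype(np.float32)
        v = img_v.astype(np.float32)
        contrast = np.abs(h - v)             # polarization-dependent detail
        base = 0.5 * (h + v)                 # overall reflectance
        combined = 0.5 * base + 0.5 * contrast
        combined -= combined.min()
        if combined.max() > 0:
            combined *= 255.0 / combined.max()
        return combined.astype(np.uint8)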

[0675] It should be understood that while digital cameras are used in this
contactless system, other imagers may be used, such as active pixel
imagers, CMOS imagers, imagers that image in multiple wavelengths, CCD
cameras, photo detector arrays, TFT imagers, and so forth. It should also
be understood that while polarized light has been used to create two
different images, other variations in the reflected light may also be
used. For example, rather than using polarized light, white light may be
used and then different filters applied to the imagers, such as a Bayer
filter, a CYGM filter, or an RGBE filter. In other embodiments, it may be
possible to dispense with a source of polarized light and instead use
natural or white light.

[0676] The use of touchless or contactless fingerprinting has been under
development for some time, as evidenced by earlier systems. For example,
U.S. Pat. Appl. 2002/0106115 used polarized light in a non-contact
system, but required a metallic coating on the fingers of the person
being fingerprinted. Later systems, such as those described in U.S. Pat.
No. 7,651,594 and U.S. Pat. Appl. Publ. 2008/0219522, required contact
with a platen or other surface. The contactless system described herein
does not require contact at the time of imaging, nor does it require
prior contact, e.g., placing a coating or a reflective coating on the
body part of interest. Of course, the positions of the imagers or cameras
with respect to each other should be known for easier processing.

[0677] In use, the contactless fingerprint system may be employed at a
checkpoint, such as a compound entrance, a building entrance, a roadside
checkpoint or other convenient location. Such a location may be one where
it is desirable to admit some persons and to refuse entrance or even
detain other persons of interest. In practice, the system may make use of
an external light source, such as a lamp, if polarized light is used. The
cameras or other imagers used for the contactless imaging may be mounted
on opposite sides of one set of augmented reality glasses (for one
person). For example, a two-camera version is shown in FIG. 8F, with two
cameras 870 mounted on frame 864. In this embodiment, the software for at
least processing the image may be contained within a memory of the
augmented reality glasses. Alternatively, the digital data from the
cameras/imagers may be routed to a nearby datacenter for appropriate
processing. This processing may include combining the digital data to
form an image of the print. The processing may also include checking a
database of known persons to determine whether the subject is of
interest.

[0678] Alternatively, one camera on each of two persons may be used, as
seen in the camera 858 in FIG. 8F. In this configuration, the two persons
would be relatively near so that their respective images would be
suitably similar for combining by the appropriate software. For example,
the two cameras 6705 in FIG. 67 may be mounted on two different pairs of
augmented reality glasses, such as on two soldiers manning a checkpoint.
Alternatively, the cameras may be mounted on a wall or on stationary
parts of the checkpoint itself. The two images may then be combined by a
remote processor with memory 6707, such as a computer system at the
building checkpoint.

[0679] As discussed above, persons using the augmented reality glasses may
be in constant contact with each other through at least one of many
wireless technologies, especially if they are both on duty at a
checkpoint. Accordingly, the data from the single cameras or from the
two-camera version may be sent to a data center or other command post for
the appropriate processing, followed by checking the database for a match
of the palm print, fingerprint, iris print, and so forth. The data center
may be conveniently located near the checkpoint. With the availability of
modern computers and storage, the cost of providing multiple datacenters
and wirelessly updating the software will not be a major cost
consideration in such systems.

[0680] The touchless or contactless biometric data gathering discussed
above may be controlled in several ways, such as the control techniques
discussed elsewhere in this disclosure. For example, in one embodiment, a user
may initiate a data-gathering session by pressing a touch pad on the
glasses, or by giving a voice command. In another embodiment, the user
may initiate a session by a hand movement or gesture or using any of the
control techniques described herein. Any of these techniques may bring up
a menu, from which the user may select an option, such as "begin data
gathering session," "terminate data-gathering session," or "continue
session." If a data-gathering session is selected, the
computer-controlled menu may then offer menu choices for number of
cameras, which cameras, and so forth, much as a user selects a printer.
There may also be modes, such as a polarized light mode, a color filter
mode, and so forth. After each selection, the system may complete a task
or offer another choice, as appropriate. User intervention may also be
required, such as turning on a source of polarized light or other light
source, applying filters or polarizers, and so forth.

[0681] After fingerprints, palm prints, iris images or other desired data
has been acquired, the menu may then offer selections as to which
database to use for comparison, which device(s) to use for storage, etc.
The touchless or contactless biometric data gathering system may be
controlled by any of the methods described herein.

[0682] While the system and sensors have obvious uses in identifying
potential persons of interest, there are positive battlefield uses as
well. The fingerprint sensor may be used to call up a soldier's medical
history, giving information immediately on allergies, blood type, and
other time sensitive and treatment determining data quickly and easily,
thus allowing proper treatment to be provided under battlefield
conditions. This is especially helpful for patients who may be
unconscious when initially treated and who may be missing identification
tags.

[0683] A further embodiment of a device for capturing biometric data from
individuals may incorporate a server to store and process biometric data
collected. The biometric data captured may include a hand image with
multiple fingers, a palm print, a face camera image, an iris image, an
audio sample of an individual's voice, and a video of the individual's
gait or movement. The collected data must be accessible to be useful.

[0684] Processing of the biometric data may be done locally or remotely at
a separate server. Local processing may offer the option to capture raw
images and audio and make the information available on demand from a
computer host over a WiFi or USB link. As an alternative, another local
processing method processes the images and then transmits the processed
data over the Internet. This local processing includes the steps of
finding the fingerprints, rating the fingerprints, finding the face and
then cropping it, finding and then rating the iris, and other similar
steps for audio and video data. While processing the data locally
requires more complex code, it does offer the advantage of reduced data
transmission over the Internet.

[0685] A scanner associated with the biometric data collection devices may
use code that is compliant with the USB Image Device protocol, a
commonly used scanner standard. Other embodiments may use different
scanner standards, depending on need.

[0686] When a WiFi network is used to transfer the data, the Bio-Print
device, which is further described herein, can function as, or appear to
the network as, a web server. Each of the various types of images may be
available by selecting or clicking on a web page link or button from a
browser client. This web server functionality may be part of the
Bio-Print device, specifically included in the microcomputer
functionality.
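
As a purely illustrative sketch of such web server functionality (using
Python's standard http.server module; the directory and port shown are
assumptions, not part of the disclosure), the microcomputer could serve
each captured image at a simple URL that a browser client selects:

    # Minimal sketch: serve captured image files to a browser client.
    import http.server
    import socketserver

    CAPTURE_DIR = "/data/bioprint/captures"  # assumed capture location
    PORT = 8080                              # assumed port

    class CaptureHandler(http.server.SimpleHTTPRequestHandler):
        def __init__(self, *args, **kwargs):
            # Serve hand, palm, face, and iris images from the capture
            # directory so each appears as a clickable link in a browser.
            super().__init__(*args, directory=CAPTURE_DIR, **kwargs)

    if __name__ == "__main__":
        with socketserver.TCPServer(("", PORT), CaptureHandler) as httpd:
            httpd.serve_forever()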

[0687] A web server may be a part of the Bio-Print microcomputer host,
allowing for the Bio-Print device to author a web page that exposes
captured data and also provides some controls. An additional embodiment
of the browser application could provide controls to capture high
resolution hand prints, face images, iris images, set the camera
resolution, set the capture time for audio samples, and also enable a
streaming connection, using a web cam, Skype, or similar mechanism. This
connection could be attached to the audio and face camera.

[0688] A further embodiment provides a browser application that gives
access to images and audio captured via file transfer protocol (FTP) or
other protocol. A still further embodiment of the browser application may
provide for automatic refreshes at a selectable rate to repeatedly grab
preview images.

[0689] An additional embodiment provides local processing of captured
biometric data using a microcomputer and provides additional controls to
display a rating of each captured image, allowing a user to rate each of
the prints found, to retrieve captured faces, and to retrieve cropped
iris images and rate each of the iris prints.

[0690] Yet another embodiment provides a USB port compatible with the Open
Multimedia Application Platform (OMAP3) system. OMAP3 is a proprietary
system-on-a-chip for portable multimedia applications. The OMAP3 device
port supports the Remote Network Driver Interface Specification (RNDIS),
a proprietary protocol that may be used on top of USB. With this
capability, when a Bio-Print device is plugged into a Windows PC USB host
port, the device shows up as an IP interface. This IP interface behaves
the same as over WiFi (a TCP/IP web server).
This allows for moving data off the microcomputer host and provides for
display of the captured print.

[0691] An application on the microcomputer may implement the above by
receiving data from an FPGA over the USB bus. Once received, JPEG content
is created. This content may be written over a socket to a server running
on a laptop, or be written to a file. Alternatively, the server could
receive the socket stream, pop the image, and leave it open in a window,
thus creating a new window for each biometric capture. If the
microcomputer runs Network File System (NFS), a protocol for use with
Sun-based systems, or SAMBA, a free software reimplementation that
provides file and print services for Windows clients, the captured files
may be shared and accessed by any client running NFS or Server Message
Block (SMB), the file-sharing protocol implemented by SAMBA. In this
embodiment, a
JPEG viewer would display the files. The display client could include a
laptop, augmented reality glasses, or a phone running the Android
platform.
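
A minimal sketch of the socket transfer described above is shown below
(Python; the host, port, and length-prefix framing are illustrative
assumptions), where each JPEG capture is length-prefixed so the
receiving server can write each capture to its own file or window:

    # Illustrative sketch: send one JPEG capture over TCP with a 4-byte
    # length prefix; the receiver writes each capture to a new file.
    import socket
    import struct

    def send_capture(jpeg_bytes, host="192.168.1.50", port=5000):
        # Host and port are assumptions for illustration only.
        with socket.create_connection((host, port)) as s:
            s.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

    def _read_exact(conn, n):
        data = b""
        while len(data) < n:
            chunk = conn.recv(n - len(data))
            if not chunk:
                raise ConnectionError("socket closed early")
            data += chunk
        return data

    def receive_captures(port=5000):
        with socket.create_server(("", port)) as srv:
            count = 0
            while True:
                conn, _ = srv.accept()
                with conn:
                    (length,) = struct.unpack("!I", _read_exact(conn, 4))
                    data = _read_exact(conn, length)
                count += 1
                with open(f"capture_{count}.jpg", "wb") as f:
                    f.write(data)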

[0692] An additional embodiment provides for a server-side application
offering the same services described above.

[0693] In an alternative embodiment, a server-side application displays
the results on the augmented reality glasses.

[0694] A further embodiment provides the microcomputer on a removable
platform, similar to a mass storage device or streaming camera. The
removable platform also incorporates an active USB serial port.

[0695] In embodiments, the eyepiece may include audio and/or visual
sensors to capture sounds and/or visuals from 360 degrees around the
wearer of an eyepiece. This may be from sensors mounted on the eyepiece
itself, or coupled to sensors mounted on a vehicle that the wearer is in.
For instance, sound sensors and/or cameras may be mounted to the outside
of a vehicle, where the sensors are communicatively coupled to the
eyepiece to provide a surround sound and/or sight `view` of the
surrounding environment. In addition, the sound system of the eyepiece
may provide sound protection, canceling, augmentation, and the like, to
help improve the hearing quality of the wearer while they are surrounded
by extraneous or loud noise. In an example, a wearer may be coupled to
cameras mounted on the vehicle they are driving. These cameras may then
be in communication with the eyepiece, and provide a 360-degree view
around the vehicle, such as provided in a projected graphical image
through the eyepiece display to the wearer.

[0696] In an example, and referring to FIG. 69, control aspects of the
eyepiece may include a remote device in the form of a watch controller
6902, such as including a receiver and/or transmitter for interfacing
with the eyepiece for messaging and/or controlling the eyepiece when the
user is not wearing the eyepiece. The watch controller may include a
camera, a fingerprint scanner, discrete control buttons, a 2D control
pad, an LCD screen, a capacitive touch screen for multi-touch control, a
shake motor/piezo bumper to give tactile feedback, buttons with tactile
feel, Bluetooth, an accelerometer, and the like, such
as provided in a control function area 6904 or on other functional
portions 6910 of the watch controller 6902. For instance, a watch
controller may have a standard watch display 6908, but additionally have
functionality to control the eyepiece, such as through control functions
6914 in the control function area 6904. The watch controller may display
and/or otherwise notify the user (e.g. vibration, audible sounds) of
messages from the eyepiece, such as email, advertisements, calendar
alerts, and the like, and show the content of the message that comes in
from the eyepiece that the user is currently not wearing. A shake motor,
piezo bumper, and the like, may provide tactile feedback to the touch
screen control interface. The watch receiver may be able to provide
virtual buttons and clicks in the control function area 6904 user
interface, buzz and bump the user's wrist, and the like, when a message
is received. Communications connectivity between the eyepiece and the
watch receiver may be provided through Bluetooth, WiFi, Cell network, or
any other communications interface known to the art. The watch controller
may utilize an embedded camera for videoconferencing (such as described
herein), iris scanning (e.g. for recording an image of the iris for
storage in a database, for use in authentication in conjunction with an
existing iris image in storage, and the like), picture taking, video, and
the like. The watch controller may have a fingerprint scanner, such as
described herein. The watch controller, or any other tactile interface
described herein, may measure a user's pulse, such as through a pulse
sensor 6912 (which may be located in the band, on the underside of the
main body of the watch, and the like). In embodiments, the eyepiece and
other control/tactile interface components may have pulse detection such
that the pulses from different control interface components are monitored
in a synchronized way, such as for health, activity monitoring,
authorization, and the like. For example, a watch controller and the
eyepiece may both have pulse monitoring, where the eyepiece is capable of
sensing whether the two are in synchronization, if both match a
previously measured profile (such as for authentication), and the like.
Similarly, other biometrics may be used for authentication between
multiple control interfaces and the eyepiece, such as with fingerprints,
iris scans, pulse, health profile, and the like, where the eyepiece knows
whether the same person is wearing the interface component (e.g. the
watch controller) and the eyepiece. Biometric/health information for a
person may be determined by looking at an IR LED view of the skin, such
as for observing a subsurface pulse, and the like. In embodiments,
multi-device
authentication (e.g. token for Bluetooth handshake) may be used, such as
using the sensors on both devices (e.g. fingerprint on both devices as a
hash for the Bluetooth token), and the like.
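
By way of illustration only, the synchronized-pulse check and the
fingerprint-derived token mentioned above might be sketched as follows
(Python; the tolerance value and hashing scheme are assumptions, not
part of the disclosure):

    # Illustrative sketch of two of the checks described above.
    import hashlib

    def pulses_in_sync(eyepiece_bpm, watch_bpm, tolerance_bpm=3.0):
        # Simple check that the two devices report similar heart rates;
        # a real system could compare full waveforms or stored profiles.
        return abs(eyepiece_bpm - watch_bpm) <= tolerance_bpm

    def bluetooth_token(template_a, template_b):
        # Hash the two devices' fingerprint templates into a shared
        # token for the handshake; the scheme shown is an assumption.
        return hashlib.sha256(template_a + template_b).hexdigest()

    # Example with placeholder values:
    # if pulses_in_sync(72.4, 71.8):
    #     token = bluetooth_token(b"eyepiece-template", b"watch-template")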

[0697] Referring to FIGS. 70A-70D, the eyepiece may be stored in an
eyepiece carrying case, such as including a recharge capability, an
integrated display, and the like. FIG. 70A depicts an embodiment of a
case, shown closed, with integrated recharge AC plug and digital display,
and FIG. 70B shows the same embodiment case open. FIG. 70C shows another
embodiment case closed, and FIG. 70D shows the same embodiment open,
where a digital display is shown through the cover. In embodiments, the
case may have the ability to recharge the eyepiece while in the case,
such as through an AC connection or battery (e.g. a rechargeable
lithium-ion battery built into the carrying case for charging the
eyepiece while away from AC power). Electrical power may be transferred
to the eyepiece through a wired or wireless connection, such as through a
wireless induction pad configuration between the case and the eyepiece.
In embodiments, the case may include a digital display in communication
with the eyepiece, such as through Bluetooth wireless, and the like. The
display may provide information about the state of the eyepiece, such as
messages received, battery level indication, notifications, and the like.

[0698] Referring to FIG. 71, the eyepiece 7120 may be used in conjunction
with an unattended ground sensor unit 7102, such as formed as a stake
7104 that can be inserted in the ground 7118 by personnel, fired from a
remote control helicopter, dropped by plane, and the like. The ground
sensor unit 7102 may include a camera 7108, a controller 7110, a sensor
7112, and the like. Sensors 7112 may include a magnetic sensor, sound
sensor, vibration sensor, thermal sensor, passive IR sensor, motion
detector, GPS, real-time clock, and the like, and provide monitoring at
the location of the ground sensor unit 7102. The camera 7108 may have a
field of view 7114 in both azimuth and elevation, such as a full or
partial 360-degree camera array in azimuth and +/-90 degrees in
elevation. The ground sensor unit 7102 may capture sensor and image data
of an event(s) and transmit it over a wireless network connection to
an eyepiece 7120. Further, the eyepiece may then transmit the data to an
external communications facility 7122, such as a cell network, a
satellite network, a WiFi network, to another eyepiece, and the like. In
embodiments, ground sensor units 7102 may relay data from unit to unit,
such as from 7102A to 7102B to 7102C. Further, the data may then be
relayed from eyepiece 7120A to eyepiece 7120B and on to the
communications facility 7122, such as in a backhaul data network. Data
collected from a ground sensor unit 7102, or array of ground sensor
units, may be shared with a plurality of eyepieces, such as from eyepiece
to eyepiece, from the communications facility to the eyepiece, and the
like, such that users of the eyepiece may utilize and share the data,
either in its raw form or in a post-processed form (e.g. as a graphic
display of the data through the eyepiece). In embodiments, the ground
sensor units may be inexpensive, disposable, toy-grade, and the like. In
embodiments, the ground sensor unit 7102 may provide backup for computer
files from the eyepiece 7120.
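
The unit-to-unit relay described above (7102A to 7102B to 7102C, then
eyepiece to eyepiece to the communications facility 7122) might be
sketched, purely for illustration, as a chained forwarding of sensor
records (Python; the node names and record fields are assumptions):

    # Illustrative sketch of the backhaul relay: each node forwards a
    # sensor record to the next hop until it reaches the communications
    # facility.

    def forward(record, route):
        # Pass a record along an ordered list of nodes, noting each hop.
        for node in route:
            record = dict(record, last_hop=node)  # a real node retransmits
            print(f"{node} relayed event {record['event_id']}")
        return record

    if __name__ == "__main__":
        reading = {"event_id": 17, "sensor": "passive_IR", "value": 1,
                   "origin": "7102A"}
        route = ["7102B", "7102C", "eyepiece_7120A", "eyepiece_7120B",
                 "comms_facility_7122"]
        forward(reading, route)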

[0699] Referring to FIG. 72, the eyepiece may provide control through
facilities internal and external to the eyepiece, such as initiated from
the surrounding environment 7202, input devices 7204, sensing devices
7208, user action capture devices 7210, internal processing facilities
7212, internal multimedia processing facilities, internal applications
7214, camera 7218, sensors 7220, earpiece 7222, projector 7224, through a
transceiver 7228, through a tactile interface 7230, from external
computing facilities 7232, external applications 7234, event and/or data
feeds 7238, external devices 7240, third parties 7242, and the like.
Command and control modes 7260 of the eyepiece may be initiated by
sensing inputs through input devices 7244, user action 7248, external
device interaction 7250, reception of events and/or data feeds 7252,
internal application execution 7254, external application execution 7258,
and the like. In embodiments, there may be a series of steps included in
the execution control, including combinations of at least two of the
following: events and/or data feeds, sensing inputs and/or sensing
devices, user action capture inputs and/or outputs, user movements and/or
actions for controlling and/or initiating commands, command and/or
control modes and interfaces in which the inputs may be reflected,
applications on the platform that may use commands to respond to inputs,
communications and/or connection from the on-platform interface to
external systems and/or devices, external devices, external applications,
feedback 7262 to the user (such as related to external devices, external
applications), and the like.

[0700] In embodiments, events and/or data feeds may include email,
military related communications, calendar alerts, security events, safety
events, financial events, personal events, a request for input,
instruction, entering an activity state, entering a military engagement
activity state, entering a type of environment, entering a hostile
environment, entering a location, and the like, and combinations of the
same.

[0704] In embodiments, command and/or control modes and interfaces in
which inputs can be reflected may include a graphical user interface
(GUI), auditory command interface, clickable icons, navigable lists,
virtual reality interface, augmented reality interface, heads-up display,
semi-opaque display, 3D navigation interface, command line, virtual touch
screen, robot control interface, typing (e.g. with persistent virtual
keyboard locked in place), predictive and/or learning based user
interface (e.g. learns what the wearer does in a `training mode`, and
when and where they do it), simplified command mode (e.g. hand gestures
to kick off an application, etc.), Bluetooth controllers, cursor hold,
lock a virtual display, head movement around a located cursor, and the
like, and combinations of the same.

[0710] In an example, control aspects of the eyepiece may include
combinations of a head nod from a soldier as movement to initiate a
silent command (such as during a combat engagement), through a graphical
user interface for reflecting modes and/or interfaces in which the
control input is reflected, a military application on the eyepiece that
uses the commands and/or responds to the control input, an audio system
controller to communicate and/or connect from the eyepiece interface to
an external system or device, and the like. For instance, the soldier may
be controlling a secure communications device through the eyepiece during
a combat engagement, and wish to change some aspect of communications,
such as a channel, a frequency, an encoding level, and the like, without
making a sound and with minimal motion so as to minimize the chance of
being heard or seen. In this instance, a nod of the soldier's head may be
programmed to indicate the change, such as a quick nod forward to
indicate the beginning of a transmission, a quick nod backward to
indicate the end of a transmission, and the like. In addition, the
eyepiece may be projecting a graphical user interface to the soldier for
the secure communications device, such as showing what channel is active,
what alternative channels are available, others in their team that are
currently transmitting, and the like. The nod of the soldier may then be
interpreted by processing facilities of the eyepiece as a change command,
the command transmitted to the audio system controller, and the graphical
user interface for the communications device showing the change. Further,
certain nods/body motions may be interpreted as specific commands to be
transmitted such that the eyepiece sends a pre-established communication
without the soldier needing to be audible. That is, the soldier may be
able to send pre-canned communications to their team through body motions
(for example, as determined together with the team prior to the
engagement). In this way, a soldier wearing and utilizing the facilities
of the eyepiece may be able to connect and interface with the external
secure communications device in a completely stealthy manner, maintaining
silent communications with their team during engagement, even when out of
sight of the team. In embodiments, other movements or actions for
controlling or initiating commands, command and/or control modes and
interfaces in which the inputs can be reflected, applications on platform
that can use commands and/or respond to inputs, communication or
connection from the on-platform interface to external systems and
devices, and the like, as described herein, may also be applied.

[0711] In an example, control aspects of the eyepiece may include
combinations of motion and position sensors as sensing inputs, an
augmented reality interface as a command and control interface in which
the inputs can be reflected to a soldier, a motion sensor and range
finder for a weapon system as external devices to be controlled and
information collected from, feedback to the soldier related to the
external devices, and the like. For instance, a soldier wearing the
eyepiece may be monitoring military movements within an environment with
the motion sensor, and when the motion sensor is triggered an augmented
reality interface may be projected to the wearer that helps identify a
target, such as a person, vehicle, and the like for further monitoring
and/or targeting. In addition, the range finder may be able to determine
the range to the object and feed that information back to the soldier for
use in targeting (such as manually, with the soldier executing a firing
action; or automatically, with the weapon system receiving the
information for targeting and the soldier providing a command to fire).
In embodiments, the augmented reality interface may provide information
to the soldier about the target, such as the location of the object on a
2D or 3D projected map, identity of the target from previously collected
information (e.g. as stored in an object database, including face
recognition, object recognition), coordinates of the target, night vision
imaging of the target, and the like. In embodiments, the triggering of
the motion detector may be interpreted by processing facilities of the
eyepiece as a warning event, the command may be transmitted to the range
finder to determine the location of the object, as well as to the
speakers of the ear phones of the eyepiece to provide an audio warning to
the soldier that a moving object has been sensed in the area being
monitored. The audio warning plus visual indicators to the soldier may
serve as inputs to the soldier that attention should be paid to the
moving object, such as if the object has been identified as an object of
interest to the soldier, such as through an accessed database for known
combatants, known vehicle types, and the like. For instance, the soldier
may be at a guard post monitoring the perimeter around the post at night.
In this case, the environment may be dark, and the soldier may have
fallen into a low attentive state, as it may be late at night, with all
environmental conditions quiet. The eyepiece may then act as a sentry
augmentation device, `watching` from the soldier's personal perspective
(as opposed to some external monitoring facility for the guard post).
When the eyepiece senses movement, the soldier may be instantly alerted
as well as guided to the location, range, identity, and the like, of the
motion. In this way, the soldier may be able to react to avoid personal
danger, to target fire to the located movement, and the like, as well as
alert the post to potential danger. Further, if a firefight were to
ensue, the soldier may have improved reaction time as a result of the
warning from the eyepiece, with better decision making through
information about the target, and minimizing the danger of being injured
or of the guard post being infiltrated. In embodiments, other sensing
inputs and/or
sensing devices, command and/or control modes and interfaces in which the
inputs can be reflected, useful external devices to be controlled,
feedback related to external devices and/or external applications, and
the like, as described herein, may also be applied.
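
For instance, placing a detected target on the projected 2D or 3D map
from the range finder distance and a compass bearing could use the
standard forward-bearing calculation sketched below (Python; the
spherical-earth approximation and the example values are assumptions):

    # Illustrative sketch: project a target position from the observer's
    # location, a compass bearing, and a range-finder distance.
    import math

    EARTH_RADIUS_M = 6371000.0

    def target_position(lat_deg, lon_deg, bearing_deg, range_m):
        lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
        brg = math.radians(bearing_deg)
        d = range_m / EARTH_RADIUS_M  # angular distance
        lat2 = math.asin(math.sin(lat1) * math.cos(d)
                         + math.cos(lat1) * math.sin(d) * math.cos(brg))
        lon2 = lon1 + math.atan2(
            math.sin(brg) * math.sin(d) * math.cos(lat1),
            math.cos(d) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)

    # Example with assumed values: observer at 34.0 N, 69.0 E, target at
    # bearing 045 degrees and 850 m range.
    # print(target_position(34.0, 69.0, 45.0, 850.0))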

[0712] In embodiments, the eyepiece may enable remote control of vehicles,
such as a truck, robot, drone, helicopter, watercraft, and the like. For
instance, a soldier wearing the eyepiece may be able to issue commands
through an internal communications interface to control the vehicle.
Vehicle
control may be provided through voice commands, body movement (e.g. a
soldier instrumented with movement sensors that are in interactive
communication with the eyepiece, and interfaced through the eyepiece to
control the vehicle), keyboard interface, and the like. In an example, a
soldier wearing an eyepiece may provide remote control to a bomb disposal
robot or vehicle, where commands are generated by the soldier through a
command interface of the eyepiece, such as described herein. In another
example, a soldier may command an aircraft, such as a remote control
drone, remote control tactical counter-rotating helicopter, and the like.
Again, the soldier may provide control of the remote control aircraft
through control interfaces as described herein.

[0713] In an example, control aspects of the eyepiece may include
combinations of a wearable sensor set as an action capture input for a
soldier, utilizing a robot control interface as a command and control
interface in which the inputs can be reflected, a drone or other robotic
device as an external device to be controlled, and the like. For
instance, the soldier wearing the eyepiece may be instrumented with a
sensor set for the control of a military drone, such as with motion
sensor inputs to control motion of the drone, hand recognition control
for manipulation of control features of the drone (e.g. such as through a
graphical user interface displayed through the eyepiece), voice command
inputs for control of the drone, and the like. In embodiments, control of
the drone through the eyepiece may include control of flight, control of
on-board interrogation sensors (e.g. visible camera, IR camera, radar),
threat avoidance, and the like. The soldier may be able to guide the
drone to its intended target using body-mounted sensors and picturing
the actual battlefield through a virtual 2D/3D projected image, where
flight, camera, and monitoring controls are commanded through body
motions of the soldier. In this way, the soldier may be able to maintain
an individual, fully immersive visual view of the flight and environment
of the drone for greater intuitive control. The eyepiece may have a robot
control interface for managing and reconciling the various control inputs
from the soldier-worn sensor set, and for providing an interface for
control of the drone. The drone may then be controlled remotely through
physical action of the soldier, such as through a wireless connection to
a military control center for drone control and management. In another
similar example, a soldier may control a bomb-disarming robot that may be
controlled through a soldier-worn sensor set and associated eyepiece
robot control interface. For instance, the soldier may be provided with a
graphical user interface that provides a 2D or 3D view of the environment
around the bomb disarming robot, and where the sensor pack provides
translation of the motion of the soldier (e.g. arms, hands, and the like)
to motions of the robot. In this way, the soldier may be able to provide
a remote control interface to the robot to better enable sensitive
control during the delicate bomb disarming process. In embodiments, other
user action capture inputs and/or devices, command and/or control modes
and interfaces in which the inputs can be reflected, useful external
devices to be controlled, and the like, as described herein, may also be
applied.
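
As one illustrative sketch (Python; the sensor fields, scaling factor,
and command structure are assumptions rather than a defined interface),
the robot control interface might translate a worn motion sensor's
pitch and roll into bounded drone velocity commands:

    # Illustrative sketch: map body-worn sensor pitch/roll (degrees)
    # into clamped forward/lateral velocity commands for a drone.

    MAX_SPEED_MPS = 5.0
    DEG_TO_SPEED = 0.2  # assumed: 0.2 m/s of command per degree of lean

    def clamp(value, low, high):
        return max(low, min(high, value))

    def body_to_drone_command(pitch_deg, roll_deg):
        forward = clamp(pitch_deg * DEG_TO_SPEED,
                        -MAX_SPEED_MPS, MAX_SPEED_MPS)
        lateral = clamp(roll_deg * DEG_TO_SPEED,
                        -MAX_SPEED_MPS, MAX_SPEED_MPS)
        return {"forward_mps": forward, "lateral_mps": lateral}

    # Example: leaning forward 10 degrees and right 5 degrees.
    # print(body_to_drone_command(10.0, 5.0))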

[0714] In an example, control aspects of the eyepiece may include
combinations of an event indication to the soldier as they enter a
location, a predictive-learning based user interface as a command and
control mode and/or interface in which the input occurrence of the event
is reflected, a weapons control system as an external device to be
controlled, and the like. For instance, an eyepiece may be programmed to
learn the behavior of a soldier, such as what the soldier typically does
when they enter a particular environment with a particular weapons
control system, e.g. does the wearer turn on the system, arm the system,
bring up visual displays for the system, and the like. From this learned
behavior, the eyepiece may be able to make a prediction of what the
soldier wants in the way of an eyepiece control function. For example,
the soldier may be thrust into a combat situation, and needs the
immediate use of a weapons control system. In this case, the eyepiece may
sense the location and/or the identity of the weapons system as the
soldier approaches, and configure/enable the weapons system according to
how the soldier typically configures it when near the weapons control
system, such as in previous uses of the weapons system where the eyepiece
was in a learning mode, commanding the weapons control system to turn on
as last configured. In embodiments, the
eyepiece may sense the location and/or identity of the weapons system
through a plurality of methods and systems, such as through a vision
system recognizing the location, an RFID system, a GPS system, and the
like. In embodiments, the commanding of the weapons control system may be
through a graphical user interface that provides the soldier with a
visual for fire-control of the weapon system, an audio-voice command
system interface that provides choices to the soldier and voice
recognition for commanding, pre-determined automatic activation of a
function, and the like. In embodiments, there may be a profile associated
with such learned commanding, where the soldier is able to modify the
learned profile and/or set preferences within the learned profile to help
optimize automated actions, and the like. For example, the soldier may
have separate weapon control profiles for weapons readiness (i.e. while
on post and awaiting action) and for active weapons engagement with the
enemy. The soldier may need to modify a profile to adjust to changing
conditions associated with use of the weapon system, such as a change in
fire command protocols, ammunition type, added capabilities of the weapon
system, and the like. In embodiments, other events and/or data feeds,
command and/or control modes and interfaces in which the inputs can be
reflected, useful external devices to be controlled, and the like, as
described herein, may also be applied.
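
Purely as a sketch of such a learned profile (Python; the recorded
fields and the most-frequent-configuration rule are assumptions), the
eyepiece could tally how the soldier configured a given weapons system
while in learning mode and replay the most common configuration when
that system is detected again:

    # Illustrative sketch of a learned profile: remember past
    # configurations per weapons system and predict the most common one.
    from collections import Counter, defaultdict

    class LearnedWeaponProfile:
        def __init__(self):
            self._history = defaultdict(Counter)

        def record(self, weapon_id, configuration):
            # configuration is hashable, e.g. ("power_on", "armed")
            self._history[weapon_id][configuration] += 1

        def predict(self, weapon_id):
            counts = self._history.get(weapon_id)
            if not counts:
                return None
            return counts.most_common(1)[0][0]

    # Example with assumed values:
    # profile = LearnedWeaponProfile()
    # profile.record("turret-7", ("power_on", "armed", "display_on"))
    # profile.record("turret-7", ("power_on", "armed", "display_on"))
    # profile.predict("turret-7")  # ('power_on', 'armed', 'display_on')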

[0715] In an example, control aspects of the eyepiece may include
combinations of an individual responsibility event for a soldier (such as
deployed in a theater of action, and managing their time) as an event
and/or data feed, a voice recognition system as a user action capture
input device, an auditory command interface as a command and control
interface in which the inputs can be reflected, video-based
communications as an application on the eyepiece that is used to respond
to the input from the soldier, and the like. For instance, a soldier
wearing the eyepiece may get a visual indication projected to them of a
scheduled event for a group video supported communication between
commanders. The soldier may then use a voice command to an auditory
command interface on the eyepiece to bring up the contact information for
the call, and voice command the group video communication to be
initiated. In this way, the eyepiece may serve as a personal assistant
for the soldier, bringing up scheduled events and providing the soldier
with a hands-free command interface to execute the scheduled events. In
addition, the eyepiece may provide for the visual interface for the group
video communication, where the images of the other commanders are
projected to the soldier through the eyepiece, and where an external
camera is providing the soldier's video image through communicative
connection to the eyepiece (such as with an external device with a
camera, using a mirror with the internally integrated camera, and the
like, as described herein). In this way, the eyepiece may provide a fully
integrated personal assistant and phone/video-based communications
platform, subsuming the functions of other traditionally separate
electronics devices, such as the radio, mobile phone, a video-phone, a
personal computer, a calendar, a hands-free command and control
interface, and the like. In embodiments, other events and/or data feeds,
user action capture inputs and/or devices, command and/or control modes
and interfaces in which the inputs can be reflected, applications on
platform that can use commands and/or respond to inputs, and the like, as
described herein, may also be applied.

[0716] In an example, control aspects of the eyepiece may include
combinations of a security event to a soldier as an event and/or data
feed; a camera and touch screen as user action capture input devices; an
information processing, fingerprint capture, facial recognition
application on the eyepiece to respond to the inputs; a graphical user
interface for communications and/or connection between the eyepiece and
external systems and devices; and an external information processing,
fingerprint capture, facial recognition application and database for
access to external security facilities and connectivity, and the like.
For instance, a soldier may receive a `security event` while on post at a
military checkpoint where a plurality of individuals is to be security
checked and/or identified. In this case there may be a need for recording
the biometrics of the individuals, such as because they don't show up in
a security database, because of suspicious behavior, because they fit
the profile of a combatant, and the like. The soldier may then
use biometric input devices, such as a camera for photographing faces and
a touch screen for recording fingerprints, where the biometric inputs are
managed through an internal information processing, fingerprint capture,
and facial recognition application on the eyepiece. In addition, the
eyepiece may provide a graphical user interface as a communications
connection to an external information processing, fingerprint capture,
and facial recognition application, where the graphical user interface
provides data capture interfaces, external database access, people of
interest database, and the like. The eyepiece may provide for an
end-to-end security management facility, including monitoring for people
of interest, input devices for taking biometric data, displaying inputs
and database information, connectivity to external security and database
applications, and the like. For instance, the soldier may be checking
people through a military checkpoint, and the soldier has been commanded
to collect facial images, such as with iris biometrics, for anyone that
meets a profile and is not currently in a security database. As
individuals approach the soldier, as in a line to pass through the
checkpoint, the soldier's eyepiece takes high-resolution images of each
individual for facial and/or iris recognition, such as checked through a
database accessible through a network communication link. A person may be
allowed to pass the checkpoint if they do not meet the profile (e.g. a
young child), or is in the database with an indication that they are not
considered a threat. A person may not be allowed to pass through the
checkpoint, and is pulled aside, if the individual is indicated to be a
threat or meets the profile and is not in the database. If they need to
be entered into the security database, the soldier may be able to process
the individual directly through facilities of the eyepiece or with the
eyepiece controlling an external device, such as for collecting personal
information for the individual, taking a close-up image of the
individual's face and/or iris, recording fingerprints, and the like, such
as described herein. In embodiments, other events and/or data feeds, user
action capture inputs and/or devices, applications on platform that can
use commands and/or respond to inputs, communication or connection from
the on-platform interface to external systems and devices, applications
for external devices, and the like, as described herein, may also be
applied.

[0717] In an example, control aspects of the eyepiece may include
combinations of a finger movement as a user action for a soldier
initiating an eyepiece command, a clickable icon as a command and control
mode and/or interface in which the user action can be reflected, an
application on the eyepiece (e.g. weapons control, troop movements,
intelligence data feed, and the like), a military application tracking
API as a communication and/or connection from the eyepiece application to
an external system, an external personnel tracking application, feedback
to military personnel, and the like. For instance, a system for
monitoring a soldier's selection of an on-eyepiece application may be
implemented through an API such that the monitoring provides a service to
the military for monitoring and tracking application usage, feedback to
the soldier as to other applications available to them based on the
monitored behavior, and the like. In the course of a day, the soldier may
select an application for use and/or download, such as through a
graphical user interface where clickable icons are presented, and from
which the soldier may be able to select an icon based on a finger
movement control implementation facility (such as a camera or inertial
system through which the soldier's finger action is used as a control
input, in this case to select the clickable icon). The selection may then
be monitored through the military application tracking API that sends the
selection, or stored number of selections (such as transmitting stored
selections over a period of time), to the external personnel tracking
application. The soldier's application selections, in this case `virtual
clicks`, may then be analyzed for the purpose of optimizing usage, such
as through increasing bandwidth, change of available applications,
improvement to existing applications, and the like. Further, the external
personnel tracking application may utilize the analysis to determine what
the wearer's preferences are in terms of applications use, and send the
wearer feedback in the form of recommendations of applications the wearer
may be interested in, a preference profile, a list of what other similar
military users are utilizing, and the like. In embodiments, the eyepiece
may provide services to improve the soldier's experience with the
eyepiece, such as with recommendations for usage that the soldier may
benefit from, and the like, while aiding in guiding the military use of
the eyepiece and applications thereof. For instance, a soldier that is
new to using the eyepiece may not fully utilize its capabilities, such as
in use of augmented reality interfaces, organizational applications,
mission support, and the like. The eyepiece may have the capability to
monitor the soldier's utilization, compare the utilization to utilization
metrics (such as stored in an external eyepiece utilization facility),
and provide feedback to the soldier in order to improve use and
associated efficiency of the eyepiece, and the like. In embodiments,
other user movements or actions for controlling or initiating commands,
command and/or control modes and interfaces in which the inputs can be
reflected, applications on platform that can use commands and/or respond
to inputs, communication or connection from the on-platform interface to
external systems and devices, applications for external devices, feedback
related to external devices and/or external applications, and the like,
as described herein, may also be applied.
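
As an illustrative sketch only (Python; the payload fields and batching
policy are assumptions, and no actual tracking endpoint is defined
here), such a tracking API could buffer the soldier's `virtual clicks`
and report them in batches to the external personnel tracking
application:

    # Illustrative sketch: buffer application-selection events and
    # report them in batches to an external tracking application.
    import json
    import time

    class SelectionTracker:
        def __init__(self, batch_size=10):
            self.batch_size = batch_size
            self._events = []

        def record_selection(self, user_id, application_name):
            self._events.append({"user": user_id,
                                 "application": application_name,
                                 "timestamp": time.time()})
            if len(self._events) >= self.batch_size:
                self.flush()

        def flush(self):
            if not self._events:
                return
            payload = json.dumps(self._events)
            # A real client would transmit this payload to the tracking
            # service; it is only printed here since none is defined.
            print("would upload:", payload)
            self._events = []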

[0718] In an example, control aspects of the eyepiece may include
combinations of body movement (e.g. kinetic sensor) and touch sensors as
user action capture sensing devices, head and hand movement as user
actions for controlling and/or initiating commands, a virtual reality
interface as a command and control interface through which the inputs can
be reflected, an information display as an application on the eyepiece
that can respond to the inputs, a combat simulator as an external device
to be controlled through a combat simulation application, and the
activation of the combat simulator content to the soldier with
performance, rating, score, and the like, as feedback to the user related
to the external device and application. For instance, a soldier may be
able to interact with an artificial reality enhanced combat simulator,
where the wearer's body movements are interpreted as control inputs, such
as through body movement sensors, touch sensors, and the like. In this
way, movements of the wearer's body may be fed into the combat simulator,
rather than using more traditional control inputs such as a handheld
controller. Thus, the soldier's experience may be more realistic, such as
to provide better muscle memory from the simulated combat exercise, such
as when engaged in defensive avoidance, in a firefight, and the like, and
where the eyepiece provides a full immersion experience for the soldier
without the need for external devices that would normally not be used by
the soldier in a live action. Body motion control inputs may feed into a
virtual reality interface and information display application on the
eyepiece to provide the user with the visual depiction of the simulated
combat environment. In embodiments, the combat simulator may be run
entirely on-board the eyepiece as a local application, interfaced to an
external combat simulator facility local to the wearer, interfaced to a
networked combat simulator facility (e.g. a massively multiplayer combat
simulator, an individual combat simulator, a group combat simulator
through a local network connection), and the like. In the case where the
eyepiece is interfacing and controlling a hybrid local-external combat
simulator environment, the eyepiece application portion of simulation
execution may provide the visual environment and information display to
the soldier, and the external combat simulator facility may provide the
combat simulator application execution. It would be clear to one skilled
in the art that many different partitioning configurations between the
processing provided by the eyepiece and processing provided by external
facilities may be implemented. Further, the combat simulator
implementation may extend to external facilities across a secure network.
External facilities, whether local or across the secure network, may then
provide feedback to the soldier, such as in providing at least a portion
of the executed content (e.g. the locally provided projection combined
with content from the external facilities and other soldiers),
performance indications, scores, rankings, and the like. In embodiments,
the eyepiece may provide a soldier environment where the eyepiece
interfaces with external control inputs and external processing
facilities, to create the next generation of combat simulator platform.
In embodiments, other sensing inputs and/or sensing devices, user
movements or actions for controlling or initiating commands, command
and/or control modes and interfaces in which the inputs can be reflected,
applications on platform that can use commands and/or respond to inputs,
useful external devices to be controlled, feedback related to external
devices and/or external applications, and the like, as described herein,
may also be applied.

[0719] In an example, control aspects of the eyepiece may include
combinations of IR, thermal, force, carbon monoxide, and the like sensors
as inputs; microphone as an additional input device; voice commands as an
action by a soldier to initiate commands; a heads-up display as a command
and control interface in which the inputs can be reflected; an
instructional guidance application to provide guidance while reducing the
need for the soldier to use their hands, such as in emergency repair in
the field, maintenance, assembly, and the like; a visual display that
provides feedback to the soldier based on the actions of the soldier and
the sensor inputs; and the like. For instance, a soldier's vehicle may
have been damaged in a firefight, leaving the soldier(s) stranded without
immediate transport capabilities. The soldier may be able to bring up an
instructional guidance application, as running through the eyepiece, to
provide hands-free instruction and computer-based expert knowledge access
to diagnosing the problem with the vehicle. In addition, the application
may provide a tutorial for procedures not familiar to the soldier, such
as restoring basic and temporary functionality of the vehicle. The
eyepiece may also be monitoring various sensor inputs relevant to the
diagnosis, such as an IR, thermal, force, ozone, carbon monoxide, and the
like sensors, so that the sensor input may be accessible to the
instructional application and/or directly accessible to the soldier. The
application may also provide for a microphone through which voice
commands may be accepted; a heads-up display for the display of
instruction information, 2D or 3D depiction of the portion of the vehicle
under repair; and the like. In embodiments, the eyepiece may be able to
provide a hands-free virtual assistant to the soldier to assist them in
the diagnosis and repair of the vehicle in order to re-establish a means
for transport, allowing the soldier to re-engage the enemy or move to
safety. In embodiments, other sensing inputs and/or sensing devices, user
action capture inputs and/or devices, user movements or actions for
controlling or initiating commands, command and/or control modes and
interfaces in which the inputs can be reflected, applications on platform
that can use commands and/or respond to inputs, feedback related to
external devices and/or external applications, and the like, as described
herein, may also be applied.

[0720] In an example, control aspects of the eyepiece may include
combinations of the eyepiece entering an `activity state`, such as a
`military engagement` activity mode, e.g. the soldier commanding the
eyepiece into a military engagement mode, or the eyepiece sensing it is
in proximity to a military activity, perhaps even a predetermined or
targeted engagement area through a received mission directive, which may
have further been developed in part through self monitoring and learning
the wearer's general engagement assignment. Continuing with this example,
entering an activity state e.g. a military engagement activity state,
such as while driving in a vehicle into an encounter with the enemy or
into hostile territory, may be combined with an object detector as a
sensing input or sensing device, a head-mounted camera and/or eye-gaze
detection system as a user action capture input, eye movement as a user
movement or action for controlling or initiating commands, a 3D
navigation interface as a command and control mode and/or interface in
which the inputs can be reflected, an engagement management application
on-board the eyepiece as an application for coordinating command inputs
and user interface, a navigation system controller to communicate or
connect with external systems or devices, a vehicle navigation system as
an external device to be controlled and/or interfaced with, a military
planning and execution facility as an external application for processing
user actions with regard to a military directive, bulls-eye or target
tracking system as feedback to the wearer as to enemy targeting
opportunities within sight while driving, and the like. For instance, a
soldier may enter a hostile environment while driving their vehicle, and
the eyepiece, detecting the presence of the enemy engagement area (e.g.
through GPS, direct viewing targets through an integrated camera, and the
like) may enter a `military engagement activity state` (such as enabled
and/or approved by the soldier). The eyepiece may then detect an enemy
vehicle, hostile dwelling, and the like with an object detector that
locates an enemy targeting opportunity, such as through a head-mounted
camera. Further, an eye-gaze detection system on the eyepiece may monitor
where the soldier is looking, and possibly highlight information about a
target at the location of the wearer's gaze, such as enemy personnel,
enemy vehicle, enemy weapons, as well as friendly forces, where friend
and foe are identified and differentiated. The soldier's eye movement may
also be tracked, such as for changing targets of interest, or for command
inputs (e.g. a quick nod indicating a selection command, a downward eye
movement indicating a command for additional information, and the like).
The eyepiece may invoke a 3D navigation interface projection to assist in
providing the soldier with information associated with their
surroundings, and a military engagement application for coordinating the
military engagement activity state, such as taking inputs from the
soldier, providing outputs to the 3D navigation interface, interfacing
with external devices and applications, and the like. The eyepiece may
for instance utilize a navigation system controller to interface with a
vehicle navigation system, and thus may include the vehicle navigation
system into the military engagement experience. Alternately, the eyepiece
may use its own navigation system, such as in place of the vehicle system
or to augment it, such as when the soldier gets out of the vehicle and
wishes to have over-the-ground directions provided to them. As part of
the military engagement activity state, the eyepiece may interface with
an external military planning and execution facility, such as to provide
current status, troop movements, weather conditions, friendly forces
position and strength, and the like. In embodiments, the soldier, through
entering an activity state, may be provided feedback associated with the
activity state, such as for a military engagement activity state being
supplied feedback in the form of information associated with an
identified target. In embodiments, other events and/or data feeds,
sensing inputs and/or sensing devices, user action capture inputs and/or
devices, user movements or actions for controlling or initiating
commands, command and/or control modes and interfaces in which the inputs
can be reflected, applications on platform that can use commands and/or
respond to inputs, communication or connection from the on-platform
interface to external systems and devices, applications for external
devices, feedback related to external devices and/or external
applications, and the like, as described herein, may also be applied.

[0721] In an example, control aspects of the eyepiece may include
combinations of a secure communications reception as a triggering event
to a soldier, inertial movement tracking as a user action capture input
device, drag-and-drop with fingers and swipe movements by the soldier as
user movements or actions for controlling or initiating commands,
navigable lists as a command and control interface in which the inputs
can be reflected, information conveying as a type of application on the
eyepiece that can use commands and respond to inputs, a reconciliation
system as a communication or connection from the on-eyepiece interface to
external systems and devices, iris capture and recognition system as an
external application for external systems and devices, and the like. A
soldier wearing the eyepiece may receive a secure communication, and the
communication may come in to the eyepiece as an `event` to the soldier,
such as to trigger an operations mode of the eyepiece, with a visual
and/or audible alert, to initiate an application or action on the
eyepiece, and the like. The soldier may be able to react to the event
through a plurality of control mechanisms, such as the wearer `drag and
dropping`, swiping, and the like with their fingers and hands through a
hand gesture interface (e.g. through a camera and hand gesture
application on-board the eyepiece, where the wearer drags the email or
information within the communication into a file, an application, another
communication, and the like). The wearer may call up navigable lists as
part of acting on the communication. The user may convey the information
from the secure communication through an eyepiece application to external
systems and devices, such as a reconciliation system for tracking
communications and related actions. In embodiments, the eyepiece and/or
secure access system may require identification verification, such as
through biometric identity verification, e.g. fingerprint capture, iris
capture and recognition, and the like. For instance, the soldier may receive
a secure communication that is a security alert, where the secure
communication comes with secure links to further information, and where
the soldier is required to provide biometric authentication before being
provided access. Once authenticated, the soldier may be able to use hand
gestures in their response and manipulation of content available through
the eyepiece, such as manipulating lists, links, data, images, and the
like available directly from the communications and/or through the
included links. Providing the capability for the soldier to respond and
manipulate content in association with the secure communication may
better allow the soldier to interact with the message and content in a
manner that does not compromise any non-secure environment they may
currently be in. In embodiments, other events and/or data feeds, user
action capture inputs and/or devices, user movements or actions for
controlling or initiating commands, command and/or control modes and
interfaces in which the inputs can be reflected, applications on platform
that can use commands and/or respond to inputs, communication or
connection from the on-platform interface to external systems and
devices, applications for external devices, and the like, as described
herein, may also be applied.

[0722] In an example, control aspects of the eyepiece may include
combinations of using an inertial user interface as a user action capture
input device to provide military instruction to a soldier through the
eyepiece to an external display device. For instance, a soldier, wearing
the eyepiece, may wish to provide instruction to a group of other
soldiers in the field from a briefing that has been made available to
them through the facilities of the eyepiece. The soldier may be aided
through the use of a physical 3D or 2D mouse (e.g. with inertial motion
sensor, MEMS inertial sensor, ultrasonic 3D motion sensor, IR,
ultrasonic, or capacitive tactile sensor, accelerometer, and the like), a
virtual mouse, a virtual touch screen, a virtual keyboard, and the like
to provide an interface for manipulating content in the briefing. The
briefing may be viewable to and manipulated through the eyepiece, but also
exported in real-time, such as to an external router that is connected to
an external display device (e.g. computer monitor, projector, video
screen, TV screen, and the like). As such, the eyepiece may provide a way
for the soldier to have others view what they see through the eyepiece
and as controlled through the control facilities of the eyepiece,
allowing the soldier to export multimedia content associated with the
briefing as enabled through the eyepiece to other non-eyepiece wearers.
In an example, a mission briefing may be provided to a commander in the
field, and the commander, through the eyepiece, may be able to brief
their team with multimedia and augmented reality resources available
through the eyepiece, as described herein, thus gaining the benefit that
such visual resources provide. In embodiments, other sensing inputs
and/or sensing devices, user action capture inputs and/or devices,
command and/or control modes and interfaces in which the inputs can be
reflected, communication or connection from the on-platform interface to
external systems and devices, useful external devices to be controlled,
feedback related to external devices and/or external applications, and
the like, as described herein, may also be applied.

[0723] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and sensing inputs/sensing
devices, such as where a security event plus an acoustic sensor may be
implemented. There may be a security alert sent to a soldier and an
acoustic sensor is utilized as an input device to monitor voice content
in the surrounding environment, directionality of gunfire, and the like.
For instance, a security alert is broadcast to all military personnel in
a specific area, and with the warning, the eyepiece activates an
application that monitors an embedded acoustic sensor array that analyzes
loud sounds to identify the type of source for the sound and the direction
from which the sound came. In embodiments, other events and/or data
feeds, sensing inputs and/or sensing devices, and the like, as described
herein, may also be applied.
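
A minimal sketch of estimating the direction of a loud sound from a
two-microphone pair follows (Python; the microphone spacing and the
time-difference-of-arrival method are assumptions about one way such an
acoustic sensor array could work):

    # Illustrative sketch: estimate the bearing of a sound source from
    # the time difference of arrival (TDOA) at two microphones a fixed
    # distance apart.
    import math

    SPEED_OF_SOUND_MPS = 343.0

    def bearing_from_tdoa(delta_t_s, mic_spacing_m):
        # Angle in degrees from broadside of the microphone pair.
        ratio = SPEED_OF_SOUND_MPS * delta_t_s / mic_spacing_m
        ratio = max(-1.0, min(1.0, ratio))  # clamp measurement noise
        return math.degrees(math.asin(ratio))

    # Example: sound arrives 0.25 ms later at the second microphone of
    # a pair spaced 0.2 m apart (roughly a 25 degree bearing).
    # print(bearing_from_tdoa(0.00025, 0.2))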

[0724] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and user action capture
inputs/devices, such as for a request for an input plus use of a camera.
A soldier may be in a location of interest and is sent a request for
photos or video from their location, such as where the request is
accompanied with instructions for what to photograph. For instance, the
soldier is at a checkpoint, and at some central command post it is
determined that an individual of interest may attempt to cross the
checkpoint. Central command may then provide instructions to eyepiece
users in proximity to the checkpoint to record and upload images and
video, which may in embodiments be performed automatically without the
soldier needing to manually turn on the camera. In embodiments, other
events and/or data feeds, user action capture inputs and/or devices, and
the like, as described herein, may also be applied.

[0725] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and user movements or actions
for controlling or initiating commands, such as when a soldier is
entering an `activity state` and they use a hand gesture for control. A
soldier may be put in an activity state of readiness to engage the enemy,
and the soldier uses hand gestures to silently command the eyepiece
within an engagement command and control environment. For instance, the
soldier may suddenly enter an enemy area as determined by new
intelligence received that places the eyepiece in a heightened alert
state. In this state, silence may be required, and so the eyepiece
transitions to a hand gesture command mode.
In embodiments, other events and/or data feeds, user movements or actions
for controlling or initiating commands, and the like, as described
herein, may also be applied.

[0726] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and command/control modes and
interfaces in which the inputs can be reflected, such as entering a type
of environment and the use of a virtual touch screen. A soldier may
enter a weapons system area, and a virtual touch screen is made available
to the wearer for at least a portion of the control of the weapons
system. For instance, the soldier enters a weapons vehicle, and the
eyepiece, detecting the presence of the weapons system and that the
soldier is authorized to use the weapon, brings up a virtual fire control
interface with a virtual touch screen. In embodiments, other events and/or
data feeds, command and/or control modes and interfaces in which the
inputs can be reflected, and the like, as described herein, may also be
applied.

[0727] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and applications on platform
that can use commands/respond to inputs, such as for a safety event in
combination with easy access to information for pilots. A military pilot
(or someone responsible for the flight checkout of a pilotless aircraft)
may receive a safety event notification as they approach an aircraft
prior to the aircraft taking off, and an application is brought up to
walk them through the pre-flight checkout. For instance, a drone
specialist approaches a drone to prepare it for launch, and an
interactive checkout procedure is displayed to the soldier by the
eyepiece. In addition, a communications channel may be opened to the
pilot of the drone so they are included in the pre-flight checkout. In
embodiments, other events and/or data feeds, applications on platform
that can use commands and/or respond to inputs, and the like, as
described herein, may also be applied.

[0728] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and a communication or
connection from the on-platform interface to external systems and
devices, such as the soldier entering a location and a graphical user
interface (GUI). A soldier may enter a location where they are required
to interact with external devices, and where the external device is
interfaced through the GUI. For instance, a soldier gets in a military
transport, and the soldier is presented with a GUI that opens up an
interactive interface that instructs the soldier on what they need to do
during different phases of the transport. In embodiments, other events
and/or data feeds, communication or connection from the on-platform
interface to external systems and devices, and the like, as described
herein, may also be applied.

[0729] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and a useful external device to
be controlled, such as for an instruction provided and a weapon system. A
soldier may be provided instructions, or a feed of instructions, where at
least one instruction pertains to the control of an external weapons
system. For instance, a soldier may be operating a piece of artillery,
and the eyepiece is providing them not only performance and procedural
information in association with the weapon, but also provides a feed of
instructions, corrections, and the like, associated with targeting. In
embodiments, other events and/or data feeds, useful external devices to
be controlled, and the like, as described herein, may also be applied.

[0730] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and an application for a useful
external device, such as in a security event/feed and biometrics
capture/recognition. A soldier may be sent a security event notification
(such as through a security feed) to capture biometrics
(fingerprints, iris scan, walking gait profile) of certain individuals,
where the biometrics are stored, evaluated, analyzed, and the like,
through an external biometrics application (such as served from a secure
military network-based server/cloud). In embodiments, other events and/or
data feeds, applications for external devices, and the like, as described
herein, may also be applied.

[0731] In an example, control aspects of the eyepiece may include
combinations of using an events/data feed and feedback to a soldier
related to the external devices and applications, such as entering an
activity state and the soldier being provided a display of information. A
soldier may place the eyepiece into an activity state such as for
military staging, readiness, action, debrief, and the like, and as
feedback to being placed into the activity state the soldier receives a
display of information pertaining to the entered state. For instance, a
soldier enters into a staging state for a mission, where the eyepiece
fetches information from a remote server as part of the tasks the soldier
has to complete during staging, including securing equipment, additional
training, and the like. In embodiments, other events and/or data feeds,
feedback related to external devices and/or external applications, and
the like, as described herein, may also be applied.

[0732] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and user action
capture inputs/devices, such as with an inertial motion sensor and head
tracking system. The head motion of a soldier may be tracked through
inertial motion sensor(s) in the eyepiece, such as for nod control of the
eyepiece, view direction sensing for the eyepiece, and the like. For
instance, the soldier may be targeting a weapon system, and the
eyepiece senses the gaze direction of the soldier's head through the
inertial motion sensor(s) to provide continuous targeting of the weapon.
Further, the weapon system may move continuously in response to the
soldier's gaze direction, and so be continuously ready to fire on the
target. In embodiments, other sensing inputs and/or sensing devices, user
action capture inputs and/or devices, and the like, as described herein,
may also be applied.
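
By way of a non-limiting illustration only, the following minimal sketch
shows one way a head orientation reading might be slaved to an external
mount with simple smoothing; read_imu(), MountLink, and the smoothing
constant are hypothetical placeholders rather than actual eyepiece or
weapon-system interfaces.

    import time

    class MountLink:
        """Stand-in for whatever interface the external weapon mount exposes."""
        def point(self, azimuth_deg, elevation_deg):
            print(f"mount -> az {azimuth_deg:6.2f}  el {elevation_deg:6.2f}")

    def read_imu():
        """Placeholder for the eyepiece inertial sensors; returns (yaw, pitch)."""
        return 12.0, -3.0

    def track(mount, updates=5, rate_hz=20, alpha=0.3):
        az = el = 0.0
        for _ in range(updates):
            yaw, pitch = read_imu()
            # Exponential smoothing keeps the mount from chasing sensor jitter.
            az = alpha * yaw + (1 - alpha) * az
            el = alpha * pitch + (1 - alpha) * el
            mount.point(az, el)
            time.sleep(1.0 / rate_hz)

    track(MountLink())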

[0733] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and user movements
or actions for controlling or initiating commands, such as with an
optical sensor and an eye shut, blink, and the like movement. The state
of the soldier's eye may be sensed by an optical sensor that is included
in the optical chain of the eyepiece, such as for using eye movement for
control of the eyepiece. For instance, the soldier may be aiming their
rifle, where the rifle has the capability to be fired through control
commands from the eyepiece (such as in the case of a sniper, where
commanding through the eyepiece may decrease the errors in targeting due
to pulling the trigger manually). The soldier may then fire the weapon
through a command initiated by the optical sensor detecting a
predetermined eye movement, such as in a command profile kept on the
eyepiece. In embodiments, other sensing inputs and/or sensing devices,
user movements or actions for controlling or initiating commands, and the
like, as described herein, may also be applied.
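
By way of a non-limiting illustration only, the following minimal sketch
shows how a deliberate, held eye closure might be distinguished from an
ordinary blink before a command is issued; the frame rate, hold time,
command name, and frame feed are assumptions rather than actual eyepiece
parameters.

    from collections import deque

    FRAME_RATE = 30                          # assumed optical sensor frame rate
    HOLD_FRAMES = int(0.6 * FRAME_RATE)      # eye must stay shut roughly 0.6 s

    def detect_command(eye_closed_frames):
        """Yield 'FIRE' whenever the eye has been shut HOLD_FRAMES in a row."""
        run = 0
        for closed in eye_closed_frames:
            run = run + 1 if closed else 0
            if run == HOLD_FRAMES:
                yield "FIRE"

    # Example: a short blink (5 frames) is ignored; a held closure triggers once.
    frames = [False] * 10 + [True] * 5 + [False] * 10 + [True] * 25 + [False] * 5
    print(list(detect_command(frames)))      # -> ['FIRE']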

[0734] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and command/control
modes and interfaces in which the inputs can be reflected, such as with a
proximity sensor and robotic control interface. A proximity sensor
integrated into the eyepiece may be used to sense the soldier's proximity
to a robotic control interface in order to activate and enable the use of
the robotics. For instance, a soldier walks up to a bomb-detecting robot,
and the robot automatically activates and initializes configuration for
this particular soldier (e.g. configuring for the preferences of the
soldier). In embodiments, other sensing inputs and/or sensing devices,
command and/or control modes and interfaces in which the inputs can be
reflected, and the like, as described herein, may also be applied.

[0735] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and applications on
platform that can use commands/respond to inputs, such as with an audio
sensor and music/sound application. An audio sensor may monitor the
ambient sound and initiate and/or adjust the volume for music, ambient
sound, sound cancelling, and the like, to help counter an undesirable
ambient sound. For instance, a soldier is loaded onto a transport and the
engines of the transport are initially off. At this time the soldier may
have no other duties except to rest, so they initiate music to help them
rest. When the engines of the transport come on the music/sound
application adjusts the volume and/or initiates additional sound
cancelling audio in order to help keep the music input the same as before
the engines started up. In embodiments, other sensing inputs and/or
sensing devices, applications on platform that can use commands and/or
respond to inputs, and the like, as described herein, may also be
applied.
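
By way of a non-limiting illustration only, the following minimal sketch
shows how a measured ambient level might be mapped to a playback gain and a
noise-cancellation flag; the gain curve, thresholds, and audio frames are
assumptions, not the actual music/sound application.

    import math

    def rms_dbfs(samples):
        """Ambient level of one audio frame, in dB relative to full scale."""
        if not samples:
            return -120.0
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(max(rms, 1e-6))

    def playback_settings(ambient_db, quiet_db=-60.0, loud_db=-10.0):
        """Map ambient loudness to a playback gain and a noise-cancel flag."""
        span = loud_db - quiet_db
        boost = min(max((ambient_db - quiet_db) / span, 0.0), 1.0)
        gain = 0.4 + 0.6 * boost             # 40% volume when quiet, 100% when loud
        return gain, ambient_db > -30.0      # enable cancellation past a threshold

    quiet_frame = [0.001] * 480              # engines off
    loud_frame = [0.3] * 480                 # engines running
    for frame in (quiet_frame, loud_frame):
        level = rms_dbfs(frame)
        gain, cancel = playback_settings(level)
        print(f"ambient {level:6.1f} dBFS -> gain {gain:.2f}, cancel={cancel}")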

[0736] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and communication or
connection from the on-platform interface to external systems and
devices, such as with a passive IR proximity sensor and external digital
signal processor. A soldier may be monitoring a night scene with the
passive IR proximity sensor, the sensor indicates a motion, and the
eyepiece initiates a connection to an external digital signal processor
for aiding in identifying the target from the proximity sensor data.
Further, an IR imaging camera may be initiated to contribute additional
data to the digital signal processor. In embodiments, other sensing
inputs and/or sensing devices, communication or connection from the
on-platform interface to external systems and devices, and the like, as
described herein, may also be applied.

[0737] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and useful external
devices to be controlled, such as with an acoustic sensor and a weapons
system, where an eyepiece being worn by a soldier senses a loud sound,
such as may be an explosion or gunfire, and where the eyepiece then
initiates the control of a weapons system for possible action against a
target associated with the creation of the loud sound. For instance, a
soldier is on guard duty, and gunfire is heard. The eyepiece may be able
to detect the direction of the gunshot, and direct the soldier to the
position from which the gunshot was fired. In embodiments, other sensing
inputs and/or sensing devices, useful external devices to be controlled,
and the like, as described herein, may also be applied.

[0738] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and applications for
those useful external devices, such as with a camera and external
application for instructions. The camera embedded in a soldier's eyepiece
may view a target icon indicating that instructions are available, and the
eyepiece may access the external application for instructions. For
instance, a soldier is delivered to a staging area, and upon entry the
eyepiece camera views the icon, accesses the instructions externally, and
provides the soldier with the instructions for what to do, where all the
steps may be automatic so that the instructions are provided without the
soldier being aware of the icon. In embodiments, other sensing inputs
and/or sensing devices, applications for external devices, and the like,
as described herein, may also be applied.

[0739] In an example, control aspects of the eyepiece may include
combinations of using sensing inputs/sensing devices and feedback to user
related to the external devices and applications, such as with a GPS
sensor and a visual display from a remote application. The soldier may
have an embedded GPS sensor that sends/streams location coordinates to a
remote location facility/application that sends/streams a visual display
of the surrounding physical environment to the eyepiece for display. For
instance, a soldier may be constantly viewing the surrounding environment
through the eyepiece, and by way of the embedded GPS sensor, is
continuously streamed a visual display overlay that allows for the
soldier to have an augmented reality view of the surrounding environment,
even as they change locations. In embodiments, other sensing inputs and/or
sensing devices, feedback related to external devices and/or external
applications, and the like, as described herein, may also be applied.
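
By way of a non-limiting illustration only, the following minimal sketch
shows the loop implied above, in which position fixes are sent out and an
overlay for the current position is received back; read_gps() and
OverlayService are hypothetical stand-ins for the embedded GPS sensor and
the remote rendering facility.

    import time

    def read_gps():
        """Placeholder for the embedded GPS sensor; returns (lat, lon)."""
        return 34.0522, -118.2437

    class OverlayService:
        """Stand-in for the remote application that renders the AR overlay."""
        def overlay_for(self, lat, lon):
            return {"lat": lat, "lon": lon, "layers": ["terrain", "units"]}

    def stream_overlays(service, period_s=0.2, updates=3):
        for _ in range(updates):
            lat, lon = read_gps()
            overlay = service.overlay_for(lat, lon)  # send fix, receive overlay
            print("display overlay:", overlay["layers"], "at", (lat, lon))
            time.sleep(period_s)

    stream_overlays(OverlayService())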

[0740] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and user
movements or actions for controlling or initiating commands, such as with
a body movement sensor (e.g. kinetic sensor) and an arm motion. The
soldier may have body movement sensors attached to their arms, where the
motion of their arms conveys a command. For instance, a soldier may have
kinetic sensors on their arms, and the motion of their arms is duplicated
in an aircraft landing lighting system, such that the lights
normally held by personnel aiding in a landing may be made to be larger
and more visible. In embodiments, other user action capture inputs and/or
devices, user movements or actions for controlling or initiating
commands, and the like, as described herein, may also be applied.

[0741] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and
command/control modes and interfaces in which the inputs can be
reflected, such as wearable sensor sets and a predictive learning-based
user interface. A soldier may wear a sensor set where the data from the
sensor set is continuously collected and fed to a machine-learning
facility through a learning-based user interface, where the soldier may
be able to accept, reject, modify, and the like, the learning from their
motions and behaviors. For instance, a soldier may perform the same tasks
in generally the same physical manner every Monday morning, and the
machine-learning facility may establish a learned routine that it
provides to the soldier on subsequent Monday mornings, such as a reminder
to clean certain equipment, fill out certain forms, play certain music,
meet with certain people, and the like. Further, the soldier may be able
to modify the outcome of the learning through direct edits to the
routine, such as in a learned behavior profile. In embodiments, other
user action capture inputs and/or devices, command and/or control modes
and interfaces in which the inputs can be reflected, and the like, as
described herein, may also be applied.

[0742] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and applications
on platform that can use commands/respond to inputs, such as a
finger-following camera and video application. A soldier may be able to
control the direction that the eyepiece embedded camera is taking video
through a resident video application. For instance, a soldier may be
viewing a battle scene where they need to be gazing in one
direction, such as being watchful for new developments in the engagement,
while filming in a different direction, such as the current point of
engagement. In embodiments, other user action capture inputs and/or
devices, applications on platform that can use commands and/or respond to
inputs, and the like, as described herein, may also be applied.

[0743] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and
communication or connection from the on-platform interface to external
systems and devices, such as a microphone and voice recognition input
plus a steering wheel control interface. The soldier may be able to
change aspects of the handling of a vehicle through voice commands
received through the eyepiece and delivered to a vehicle's steering wheel
control interface (such as through radio communications between the
eyepiece and the steering wheel control interface). For instance, a
soldier is driving a vehicle on a road, and so the vehicle has certain
handling capabilities that are ideal for the road. But the vehicle also
has other modes for driving under different conditions, such as off-road,
in snow, in mud, in heavy rain, while in pursuit of another vehicle, and
the like. In this instance, the soldier may be able to change the mode
through voice command as the vehicle changes driving conditions. In
embodiments, other user action capture inputs and/or devices,
communication or connection from the on-platform interface to external
systems and devices, and the like, as described herein, may also be
applied.

[0744] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and useful
external devices to be controlled, such as a microphone and voice
recognition input plus an automotive dashboard interface device. The
soldier may use voice commands to control various devices associated with
the dashboard of a vehicle, such as heating and ventilation, radio,
music, lighting, trip computer, and the like. For instance, a soldier may
be driving a vehicle on a mission, across rough terrain, such that they
cannot let go of the steering wheel with either hand in order to manually
control a vehicle dashboard device. In this instance, the soldier may be
able to control the vehicle dashboard device through voice controls to
the eyepiece. Voice commands through the eyepiece may be especially
advantageous, such as opposed to voice control through a dashboard
microphone system, because the military vehicle may be immersed in a very
loud acoustic environment, and so using the microphone in the eyepiece
may give substantially improved performance under such conditions. In
embodiments, other user action capture inputs and/or devices, useful
external devices to be controlled, and the like, as described herein, may
also be applied.

[0745] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and applications
for useful external devices, such as with a joystick device and external
entertainment application. A soldier may have access to a gaming joystick
controller and is able to play a game through an external entertainment
application, such as a multi-player game hosted on a network server. For
instance, the soldier may be experiencing down time during a deployment,
and on base they have access to a joystick device that interfaces to the
eyepiece, and the eyepiece in turn to the external entertainment
application. In embodiments, the soldier may be networked together with
other military personnel across the network. The soldier may have stored
preferences, a profile, and the like, associated with the game play. The
external entertainment application may manage the game play of the
soldier, such as in terms of their deployment, current state of
readiness, required state of readiness, past history, ability level,
command position, rank, geographic location, future deployment, and the
like. In embodiments, other user action capture inputs and/or devices,
applications for external devices, and the like, as described herein, may
also be applied.

[0746] In an example, control aspects of the eyepiece may include
combinations of using user action capture inputs/devices and feedback to
the user related to external devices and applications, such as with an
activity determination system and tonal output or sound warning. The
soldier may have access to the activity determination system through the
eyepiece to monitor and determine the soldier's state of activity, such
as in extreme activity, at rest, bored, anxious, in exercise, and the
like, and where the eyepiece may provide forms of tonal output or sound
warning when conditions go out of limits in any way, such as pre-set,
learned, as typical, and the like. For instance, the soldier may be
monitored for current state of health during combat, and where the
soldier and/or another individual (e.g. medic, hospital personnel,
another member of the soldier's team, a command center, and the like) are
provided an audible signal when health conditions enter a dangerous
level, such as indicating that the soldier has been hurt in battle. As
such, others may be alerted to the soldier's injuries, and would be able
to attend to the injuries in a more time effective manner. In
embodiments, other user action capture inputs and/or devices, feedback
related to external devices and/or external applications, and the like,
as described herein, may also be applied.

[0747] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands plus command/control modes and interfaces in which
the inputs can be reflected, such as a clenched fist and a navigable list.
A soldier may bring up a navigable list as projected content on the
eyepiece display with a gesture such as a clenched fist, and the like.
For instance, the eyepiece camera may be able to view the soldier's hand
gesture(s), recognize and identify the hand gesture(s), and execute the
command in terms of a pre-determined gesture-to-command database. In
embodiments, hand gestures may include gestures of the hand, finger, arm,
leg, and the like. In embodiments, other user movements or actions for
controlling or initiating commands, command and/or control modes and
interfaces in which the inputs can be reflected, and the like, as
described herein, may also be applied.
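
By way of a non-limiting illustration only, the following minimal sketch
shows a recognized gesture label being resolved against a pre-determined
gesture-to-command table and dispatched; the gesture labels, command names,
and dispatch function are hypothetical examples.

    GESTURE_COMMANDS = {
        "clenched_fist": "show_navigable_list",
        "open_palm":     "dismiss_display",
        "two_fingers":   "start_video_capture",
    }

    def execute(command_name):
        print("executing:", command_name)    # placeholder for the real dispatch

    def handle_gesture(label):
        command = GESTURE_COMMANDS.get(label)
        if command is None:
            return False                     # unrecognized gesture, ignore it
        execute(command)
        return True

    handle_gesture("clenched_fist")          # -> executing: show_navigable_list
    handle_gesture("wave")                   # unknown label, silently ignored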

[0748] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands plus applications on platform that can use
commands/respond to inputs, such as a head nod and information display.
The soldier may bring up an information display application with a
gesture such as a head nod, arm motion, leg motion, eye motion, and the
like. For instance, the soldier may wish to access an application,
database, network connection, and the like, through the eyepiece, and is
able to bring up a display application as part of a graphical user
interface with the nod of their head (such as sensed through motion
detectors in the eyepiece, on the soldier's head, on the soldier's helmet,
and the like). In embodiments, other user movements or actions for
controlling or initiating commands, applications on platform that can use
commands and/or respond to inputs, and the like, as described herein, may
also be applied.

[0749] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands plus communication or connection from the on-platform
interface to external systems and devices, such as the blink of an eye
and through an API to external applications. The soldier may be able to
bring up an application program interface to access external
applications, such as with the blink of an eye, a nod of the head, the
movement of an arm or leg, and the like. For instance, the soldier may be
able to access an external application through an API embedded in an
eyepiece facility, and do so with the blink of an eye, such as detected
through an optical monitoring capability of the optics system of the
eyepiece. In embodiments, other user movements or actions for controlling
or initiating commands, communication or connection from the on-platform
interface to external systems and devices, and the like, as described
herein, may also be applied.

[0750] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands and external devices to be controlled, such as
through the tap of a foot accessing an external range finder device. A
soldier may have a sensor such as a kinetic sensor on their shoe that
will detect the motion of the soldier's foot, and the soldier uses a foot
motion such as a tap of their foot to use an external range finder device
to determine the range to an object such as an enemy target. For instance,
the soldier may be targeting a weapon system, and using both hands in the
process. In this instance, commanding by way of a foot action through the
eyepiece may allow for `hands free` commanding. In embodiments, other
user movements or actions for controlling or initiating commands, useful
external devices to be controlled, and the like, as described herein, may
also be applied.

[0751] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands plus applications for those useful external devices,
such as making a symbol with a hand and an information conveying
application. The soldier may utilize a hand formed symbol to trigger
information shared through an external information conveying application,
such as an external information feed, a photo/video sharing application,
a text application, and the like. For instance, a soldier uses a hand
signal to turn on the embedded camera and share the video stream with
another person, to storage, and the like. In embodiments, other user
movements or actions for controlling or initiating commands, applications
for external devices, and the like, as described herein, may also be
applied.

[0752] In an example, control aspects of the eyepiece may include
combinations of using user movements or actions for controlling or
initiating commands plus feedback to soldier as related to an external
device and application, such as a headshake plus an audible alert. The
soldier may be wearing an eyepiece equipped with an accelerometer (or a
similar sensor capable of detecting a g-force headshake), where when the
soldier experiences a g-force headshake at a dangerously high level, an
audible alert is sounded as feedback to the user, such as determined as
part of on- or off-eyepiece applications. Further,
the output of the accelerometer may be recorded and stored for analysis.
For instance, the soldier may experience a g-force headshake from a
proximate explosion, and the eyepiece may sense and record the sensor
data associated with the headshake. Further, headshakes of a dangerous
level may trigger automatic actions by the eyepiece, such as transmitting
an alert to other soldiers and/or to a command center, beginning to monitor
and/or transmit the health of the soldier from other body-mounted sensors,
and providing audible instructions to the soldier related to their
potential injuries, and the like. In embodiments, other user movements or
actions for controlling or initiating commands, feedback related to
external devices and/or external applications, and the like, as described
herein, may also be applied.
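
By way of a non-limiting illustration only, the following minimal sketch
shows acceleration samples being logged and compared against a danger
threshold so that an alert can be raised and the trace retained for later
analysis; the threshold value and the synthetic trace are assumptions.

    import math

    DANGER_G = 8.0     # assumed threshold for a dangerous g-force event

    def magnitude_g(ax, ay, az):
        return math.sqrt(ax * ax + ay * ay + az * az)

    def monitor(samples, log):
        alerts = []
        for t, (ax, ay, az) in samples:
            g = magnitude_g(ax, ay, az)
            log.append((t, g))               # record every sample for analysis
            if g > DANGER_G:
                alerts.append(t)
                print(f"t={t:.2f}s  {g:.1f} g -> audible alert, notify command")
        return alerts

    # Synthetic trace: quiet motion, then a spike from a proximate explosion.
    trace = [(0.00, (0.1, 0.0, 1.0)),
             (0.02, (0.3, 0.1, 1.1)),
             (0.04, (9.5, 4.0, 2.0)),
             (0.06, (0.2, 0.0, 1.0))]
    history = []
    monitor(trace, history)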

[0753] In an example, control aspects of the eyepiece may include
combinations of using command/control modes and interfaces in which the
inputs can be reflected plus applications on platform that can use
commands/respond to inputs, such as a graphical user interface plus
various applications resident on the eyepiece. The eyepiece may provide a
graphical user interface to the soldier and applications presented for
selection. For instance, the soldier may have a graphical user interface
projected by the eyepiece that provides different domains of application,
such as military, personal, civil, and the like. In embodiments, other
command and/or control modes and interfaces in which the inputs can be
reflected, applications on platform that can use commands and/or respond
to inputs, and the like, as described herein, may also be applied.

[0754] In an example, control aspects of the eyepiece may include
combinations of using command/control modes and interfaces in which the
inputs can be reflected plus a communication or connection from the
on-platform interface to external systems and devices, such as a 3D
navigation eyepiece interface plus navigation system controller interface
to external system. The eyepiece may enter a navigation mode and connect
to an external system through a navigation system controller interface.
For instance, a soldier is in military maneuvers and brings up a
preloaded 3D image of the surrounding terrain through the eyepiece
navigation mode, and the eyepiece automatically connects to the external
system for updates, current objects of interest such as overlaid by
satellite images, and the like. In embodiments, other command and/or
control modes and interfaces in which the inputs can be reflected,
communication or connection from the on-platform interface to external
systems and devices, and the like, as described herein, may also be
applied.

[0755] In an example, control aspects of the eyepiece may include
combinations of using command/control modes and interfaces in which the
inputs can be reflected plus an external device to be controlled, such as
an augmented reality interface plus external tracking device. The
soldier's eyepiece may enter into an augmented reality mode and interface
with an external tracking device to overlay information pertaining to the
location of a traced object or person with an augmented reality display.
For instance, the augmented reality mode may include a 3D map, and a
person's location as determined by the external tracking device may be
overlaid onto the map, and show a trail as the tracked person moves. In
embodiments, other command and/or control modes and interfaces in which
the inputs can be reflected, useful external devices to be controlled,
and the like, as described herein, may also be applied.

[0756] In an example, control aspects of the eyepiece may include
combinations of using command/control modes and interfaces in which the
inputs can be reflected plus applications for those external devices,
such as semi-opaque display mode plus simulation application. The
eyepiece may be placed into a semi-opaque display mode to enhance the
display of a simulation display application to the soldier. For instance,
the soldier is preparing for a mission, and before entering the field the
soldier is provided a simulation of the mission environment, and since
there is no real need for the user to see the real environment around
them during the simulation, the eyepiece places itself into a
semi-opaque display mode. In embodiments, other command and/or control
modes and interfaces in which the inputs can be reflected, applications
for external devices, and the like, as described herein, may also be
applied.

[0757] In an example, control aspects of the eyepiece may include
combinations of using command/control modes and interfaces in which the
inputs can be reflected plus feedback to user related to the external
devices and applications, such as an auditory command interface plus a
tonal output feedback. The soldier may place the eyepiece into an
auditory command interface mode and the eyepiece responds back with a
tonal output as feedback from the system that the eyepiece is ready to
receive the auditory commands. For instance, the auditory command interface
may include portions located externally, such as out on a network, and the
tone is provided once the entire system is ready to accept auditory
commands. In
embodiments, other command and/or control modes and interfaces in which
the inputs can be reflected, feedback related to external devices and/or
external applications, and the like, as described herein, may also be
applied.

[0758] In an example, control aspects of the eyepiece may include
combinations of using applications on platform that can use
commands/respond to inputs plus communication or connection from the
on-platform interface to external systems and devices, such as a
communication application plus a network router, where the soldier is
able to open up a communications application, and the eyepiece
automatically searches for a network router for connectivity to a network
utility. For instance, a soldier is in the field with their unit, and a
new base camp is established. The soldier's eyepiece may be able to
connect into the secure wireless connection once communications
facilities have been established. Further, the eyepiece may alert the
soldier once communications facilities have been established, even if the
soldier has not yet attempted communications. In embodiments, other
applications on platform that can use commands and/or respond to inputs,
communication or connection from the on-platform interface to external
systems and devices, and the like, as described herein, may also be
applied.

[0759] In an example, control aspects of the eyepiece may include
combinations of using applications on platform that can use
commands/respond to inputs plus useful external devices to be controlled,
such as a video application plus an external camera. The soldier may
interface with deployed cameras, such as for surveillance in the field.
For instance, mobile deployable cameras may be dropped from an aircraft,
and the soldier then has connection to the cameras through the eyepiece
video application. In embodiments, other applications on platform that
can use commands and/or respond to inputs, useful external devices to be
controlled, and the like, as described herein, may also be applied.

[0760] In an example, control aspects of the eyepiece may include
combinations of using applications on platform that can use
commands/respond to inputs plus applications for external devices, such
as an on-eyepiece search application plus an external search application.
A search application on the eyepiece may be augmented with an external
search application. For instance, a soldier may be searching for the
identity of an individual that is being questioned, and when the
on-eyepiece search yields no result, the eyepiece connects with an
external search facility. In embodiments, other applications on platform
that can use commands and/or respond to inputs, applications for external
devices, and the like, as described herein, may also be applied.

[0761] In an example, control aspects of the eyepiece may include
combinations of using applications on platform that can use
commands/respond to inputs plus feedback to the soldier as related to the
external devices and applications, such as an entertainment application
plus a performance indicator feedback. The entertainment application may
be used as a resting mechanism for a soldier that needs to rest but may
be otherwise anxious, and performance feedback is designed for the
soldier in given environments, such as in a deployment when they need to
rest but remain sharp, during down time when attentiveness is declining
and needs to be brought back up, and the like. For instance, a soldier
may be on a transport and about to enter an engagement. In this instance,
an entertainment application may be an action-thinking game to heighten
attention and aggressiveness, and where the performance indicator
feedback is designed to maximize the soldier's desire to perform and to
think through problems in a quick and efficient manner. In embodiments,
other applications on platform that can use commands and/or respond to
inputs, feedback related to external devices and/or external
applications, and the like, as described herein, may also be applied.

[0762] In an example, control aspects of the eyepiece may include
combinations of using a communication or connection from the on-platform
interface to external systems and devices plus external devices to be
controlled, such as an on-eyepiece processor interface to external
facilities plus an external projector. The eyepiece processor may be able
to connect to an external projector so that others may view the content
available to the eyepiece. For instance, a soldier may be in the field
and has access to content that they need to share with others who are not
wearing an eyepiece, such as individuals not in the military. In this
instance, the soldier's eyepiece may be able to interface with an
external projector, and feed content from the eyepiece to the projector.
In embodiments, the projector may be a pocket projector, a projector in a
vehicle, in a conference room, remotely located, and the like. In
embodiments the projector may also be integrated into the eyepiece, such
that the content may be externally projected from the integrated
projector. In embodiments, other communication or connection from the
on-platform interface to external systems and devices, useful external
devices to be controlled, and the like, as described herein, may also be
applied.

[0763] In an example, control aspects of the eyepiece may include
combinations of using a communication or connection from the on-platform
interface to external systems and devices plus an application for
external devices, such as an audio system controller interface plus an
external sound system. The soldier may be able to connect the audio
portion of the eyepiece facilities (e.g. music, audio playback, audio
network files, and the like) to an external sound system. For instance,
the soldier may be able to patch communications being received by the
eyepiece to a vehicle sound system so that others can hear. In
embodiments, other communication or connection from the on-platform
interface to external systems and devices, applications for external
devices, and the like, as described herein, may also be applied.

[0764] In an example, control aspects of the eyepiece may include
combinations of using a communication or connection from the on-platform
interface to external systems and devices plus feedback to a soldier
related to the external devices and applications, such as a stepper
controller interface plus status feedback. The soldier may have access
and control of a mechanism with digital stepper control through a stepper
controller interface, where the mechanism provides feedback to the
soldier as to the state of the mechanism. For instance, a soldier working
on removing a roadblock may have a lift mechanism on their vehicle, and
the soldier may be able to directly interface with the lift mechanism
through the eyepiece. In embodiments, other communication or connection
from the on-platform interface to external systems and devices, feedback
related to external devices and/or external applications, and the like,
as described herein, may also be applied.

[0765] In an example, control aspects of the eyepiece may include
combinations of using external devices to be controlled plus applications
for those external devices, such as storage-enabled devices plus
automatic backup applications. The soldier in the field may be provided
data storage facilities and associated automatic backup applications. For
instance, the storage facility may be located in a military vehicle, so
that data may be backed up from a plurality of soldiers' eyepieces to the
vehicle, especially when a network link is not available to upload to a
remote backup site. A storage facility may be associated with an
encampment, with a subset of soldiers in the field (e.g. in a pack),
located on the soldier themselves, and the like. In embodiments, a local
storage facility may upload the backup when network service connections
become available. In embodiments, other useful external devices to be
controlled, applications for external devices, and the like, as described
herein, may also be applied.
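
By way of a non-limiting illustration only, the following minimal sketch
shows records being written to a local store and mirrored to a remote
backup only when a network link is reported available; the store, the link
check, and the upload call are placeholders, not actual eyepiece or vehicle
interfaces.

    class LocalStore:
        def __init__(self):
            self.records = []                # [payload, uploaded?] pairs

        def save(self, payload):
            self.records.append([payload, False])

        def pending(self):
            return [r for r in self.records if not r[1]]

    def network_available():
        """Placeholder for a real connectivity check."""
        return True

    def upload(payload):
        print("uploaded:", payload)          # placeholder for the remote backup call

    def sync(store):
        if not network_available():
            return 0
        sent = 0
        for record in store.pending():
            upload(record[0])
            record[1] = True
            sent += 1
        return sent

    store = LocalStore()
    store.save("patrol_video_0413.bin")
    store.save("sensor_log_0413.json")
    print("synced", sync(store), "records")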

[0766] In an example, control aspects of the eyepiece may include
combinations of using external devices to be controlled plus feedback to
a soldier related to external devices and applications, such as an
external payment system plus feedback from the system. The soldier may
have access to a military managed payment system, and where that system
provides feedback to the soldier (e.g. receipts, account balance, account
activity, and the like). For instance, the soldier may make payments to a
vendor via the eyepiece where the eyepiece and external payment system
exchange data, authorization, funds, and the like, and the payment system
provides feedback data to the soldier. In embodiments, other useful
external devices to be controlled, feedback related to external devices
and/or external applications, and the like, as described herein, may also
be applied.

[0767] In an example, control aspects of the eyepiece may include
combinations of using applications for external devices plus feedback to
a soldier related to external devices and applications, such as an
information display from an external 3D mapping-rendering facility plus
feedback along with the information display. The soldier may be able to
have 3D mapping information data displayed through the eyepiece, where
the mapping facility may provide feedback to the soldier, such as based
on past information delivered, past information requested, requests from
others in the area, based on changes associated with the geographical
area, and the like. For instance, a soldier may be receiving a 3D map
rendering from an external application, where the external application is
also providing 3D map rendering to at least a second soldier in the same
geographic area. The soldier may then receive feedback from the external
facility related to the second soldier, such as their position depicted
on the 3D map rendering, identity information, history of movement, and
the like. In embodiments, other applications for external devices,
feedback related to external devices and/or external applications, and
the like, as described herein, may also be applied.

[0768] In embodiments, the eyepiece may provide a user with various forms
of guidance in responding to medical situations. As a first example, the
user may use the eyepiece for training purposes to simulate medical
situations that may arise in combat, training, on or off duty and the
like. The simulation may be geared towards a medical professional or
non-medical personnel.

[0769] By way of example, a low level combat soldier may use the eyepiece
to view a medical simulation as part of a training module to provide
training for response to medical situations on the battlefield. The
eyepiece may provide an augmented environment where the user views
injuries overlaid on another soldier to simulate those common or capable
of being found on the battlefield. The soldier may then be prompted
through a user interface to respond to the situation as presented. The
user may be given step-by-step instructions of a course of action in
providing emergency medical care on the field, or the user may carry out
actions in response to the situation that are then corrected until the
appropriate response is given.

[0770] Similarly, the eyepiece may provide a training environment for a
medical professional. The eyepiece may present the user with a medical
emergency or situation requiring a medical response for the purpose of
training the medical professional. The eyepiece may play out common
battlefield scenarios for which the user must master appropriate
responses and lifesaving techniques.

[0771] By way of example, the user may be presented with an augmented
reality of a wounded soldier with a gunshot wound to the soldier's body.
The medical professional may then act out the steps he feels to be the
appropriate response for the situation, select steps through a user
interface of the eyepiece that he feels are appropriate for the
situation, input the steps into a user interface of the eyepiece, and the
like. The user may act out the response through use of sensors and/or an
input device, or he may input the steps of his response into a user
interface via eye movements, hand gestures and the like. Similarly, he
may select the appropriate steps as presented to him through the user
interface via eye movements, hand gestures and the like. As actions are
carried out and the user makes decisions about treatment, the user may be
presented with additional guidance and instruction based on his
performance. For example, if the user is presented with a soldier with a
gunshot wound to the chest, and the user begins to lift the soldier to a
dangerous position, the user may be given a warning or prompt to change
his course of treatment. Alternatively, the user may be prompted with the
correct steps in order to practice proper procedure. Further, the trainee
may be presented with an example of a medical chart for the wounded
soldier in the training situation where the user may have to base his
decisions at least in part on what is contained in the medical chart. In
various embodiments, the user's actions and performance may be recorded
and/or documented by the eyepiece for further critiquing and instruction
after the training session has paused or otherwise stopped.

[0772] In embodiments, the eyepiece may provide a user with various forms
of guidance in responding to actual medical situations in combat. By way
of example, a non-trained soldier may be prompted with step-by-step life
saving instructions for fellow soldiers in medical emergencies when a
medic is not immediately present. When a fellow soldier is wounded, the
user may input the type of injury, the eyepiece may detect the injury or
a combination of these may occur. From there, the user may be provided
with life saving instruction with which to treat the wounded soldier.
Such instruction may be presented in the form of augmented reality in a
step-wise process of instructions for the user. Further, the eyepiece may
provide augmented visual aids to the user regarding location of vital
organs near the wounded soldier's injury, an anatomical overlay of the
soldier's body and the like. Further, the eyepiece may take video of the
situation that is then sent back to a medic not in the field or on his
way to the field, thereby allowing the medic to walk the untrained user
through an appropriate lifesaving technique on the battlefield. Further,
the wounded soldier's eyepiece may send vital information, such as
information collected through integral or associated sensors, about the
wounded soldier to the treating soldier's eyepiece to be sent to the
medic or it may be sent directly to the medic in a remote location such
that the treating soldier may provide the wounded soldier with medical
help based on the information gathered from the wounded soldier's
eyepiece.

[0773] In other embodiments, when presented with a medical emergency on
the battlefield, a trained medic may use the eyepiece to provide an
anatomical overlay of the soldier's body so that he may respond more
appropriately to the situation at hand. By way of example only and not to
limit the present invention, if the wounded soldier is bleeding from a
gunshot wound to the leg, the user may be presented with an augmented
reality view of the soldier's arteries such that the user may determine
whether an artery has been hit and how severe the wound may be. The user
may be presented with the proper protocol via the eyepiece for the given
wound so that he may check each step as he moves through treatment. Such
protocol may also be presented to the user in an augmented reality,
video, audio or other format. The eyepiece may provide the medic with
protocols in the form of augmented reality instructions in a step-wise
process. In embodiments, the user may also be presented with an augmented
reality overlay of the wounded soldier's organs in order to guide the
medic through any procedure such that the medic does not do additional
harm to the soldier's organs during treatment. Further, the eyepiece may
provide augmented visual aids to the user regarding location of vital
organs near the wounded soldier's injury, an anatomical overlay of the
soldier's body and the like.

[0774] In embodiments, the eyepiece may be used to scan the retina of the
wounded soldier in order to pull up his medical chart on the battlefield.
This may alert the medic to possible allergies to medication or other
important issues that may provide a benefit during medical treatment.

[0775] Further, if the wounded soldier is wearing the eyepiece, the device
may send information to the medic's glasses including the wounded
soldier's heart rate, blood pressure, breathing stress, and the like. The
eyepiece may also help the user observe the walking gait of a soldier to
determine if the soldier has a head injury, and it may help the user
determine the location of bleeding or an injury. Such information may
provide the user with information of possible medical treatment, and in
embodiments, the proper protocol or a selection of protocols may be
displayed to the user to help him in treating the patient.

[0776] In other embodiments, the eyepiece may allow the user to monitor
other symptoms of the patient for a mental health status check.
Similarly, the user can check to determine if the patient is exhibiting
rapid eye movement and further may use the eyepiece to provide the
patient with calming treatment such as providing the patient with eye
movement exercises, breathing exercises, and the like. Further, the medic
may be provided with information regarding the wounded soldier's vital
signs and health data as it is collected from the wounded soldier's
eyepiece and sent to the medic's eyepiece. This may provide the medic
with real time data from the wounded soldier without having to determine
such data on his own for example by taking the wounded soldier's blood
pressure.

[0777] In various embodiments, the user may be provided with alerts from
the eyepiece that tell him how far away an air or ground rescue is from
his location on the battlefield. This may provide a medic with important
information and alert him to whether certain procedures should or must be
attempted given the time available in the situation, and it may provide
an injured soldier with comfort knowing help is on the way or alert him
that he may need other sources of help.
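
By way of a non-limiting illustration only, the following minimal sketch
shows the arithmetic behind such an alert: a great-circle distance from the
rescue asset to the soldier, divided by an assumed speed, yields the
minutes-out figure that could be displayed; the coordinates and speed used
here are examples only.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        r = 6371.0                           # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def eta_minutes(dist_km, speed_kmh):
        return 60.0 * dist_km / speed_kmh

    soldier = (33.312, 44.361)               # example coordinates
    helicopter = (33.500, 44.200)
    d = haversine_km(*helicopter, *soldier)
    print(f"rescue {d:.1f} km out, ~{eta_minutes(d, 240):.0f} min at 240 km/h")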

[0778] In other embodiments, the user may be provided alerts of his own
vital signs if a problem is detected. For example, a soldier may be
alerted if his blood pressure is too high, thereby alerting him that he
must take medication or remove himself from combat if possible to return
his blood pressure to a safe level. Also, the user may be alerted of
other such personal data such as his pupil size, heart rate, walking gait
change and the like in order to determine if the user is experiencing a
medical problem. In other embodiments, a user's eyepiece may also alert
medical personnel in another location of the user's medical status in
order to send help for the user whether or not he knows he requires such
help. Further, general data may be aggregated from multiple eyepieces in
order to provide the commanding officer with detailed information on his
wounded soldiers, how many soldiers he has in combat, how many of those
are wounded, and the like.
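
By way of a non-limiting illustration only, the following minimal sketch
compares a set of readings against per-vital limits and raises alerts to
both the wearer and a remote monitor; the limit values, vital names, and
notification hooks are assumptions rather than actual medical parameters.

    VITAL_LIMITS = {
        "heart_rate_bpm":    (40, 160),
        "systolic_mmHg":     (90, 150),
        "pupil_diameter_mm": (2.0, 7.0),
    }

    def check_vitals(readings):
        problems = []
        for name, value in readings.items():
            low, high = VITAL_LIMITS.get(name, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                problems.append((name, value))
        return problems

    def alert(problems, remote=True):
        for name, value in problems:
            print(f"ALERT to wearer: {name} = {value}")
            if remote:
                print(f"ALERT to medical personnel: {name} = {value}")

    readings = {"heart_rate_bpm": 172, "systolic_mmHg": 128,
                "pupil_diameter_mm": 6.1}
    alert(check_vitals(readings))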

[0779] In various embodiments, a trained medical professional may use the
eyepiece in medical responses out of combat as well. Such an eyepiece may
have similar uses as described above on or off the home base of the medic
but outside of combat situations. In this way, the eyepiece may provide a
user with a means to gain augmented reality assistance during a medical
procedure, to document a medical procedure, perform a medical procedure
at the guidance of a remote commanding officer via video and/or audio,
and the like on or off a military base. This may provide assistance in a
plurality of situations where the medic may need additional assistance.
An example of this may occur when the medic is on duty on a training
exercise, a calisthenics outing, a military hike and the like. Such
assistance may be of importance when the medic is the only responder,
when he is a new medic, when he is approached with a new situation, and the
like.

[0780] In some embodiments, the eyepiece may provide user guidance in an
environment related to a military transport plane. For example, the
eyepiece may be used in such an environment when training, going into
battle, on a reconnaissance or rescue mission, while moving equipment,
performing maintenance on the plane and the like. Such use may be suited
for personnel of various ranks and levels.

[0781] For illustrative purposes, a user may receive audio and visual
information through the eyepiece while on the transport plane and going
into a training exercise. The information may provide the user with
details about the training mission such as the battle field conditions,
weather conditions, mission instructions, map of the area and the like.
The eyepiece may simulate actual battle scenarios to prepare the user for
battle. The eyepiece may also record the user's responses and actions
through various means. Such data gathering may allow the user to receive
feedback about his performance. Further, the eyepiece may then adjust the
simulation based on the results obtained during the training exercise,
either while it is underway or by changing future simulations for the user
or various users.

[0782] In embodiments, the eyepiece may provide user guidance and/or
interaction on a military transport plane when going into battle. The
user may receive audio and visual information about the mission as the
user boards the plane. Checklists may be presented to the user for ensuring
he has the appropriate materials and equipment for the mission.
Further, instructions for securing equipment and proper use of safety
harnesses may be presented along with information about the aircraft such
as emergency exits, location of oxygen tanks, and safety devices. The
user may be presented with instructions such as when to rest prior to the
mission and have a drug administered for that purpose. The eyepiece may
provide the user with noise cancellation for rest prior to mission, and
then may alert the user when his rest is over and further mission
preparation is to begin. Additional information may be provided such as a
map of the battle area, number of vehicles and/or people on the field,
weather conditions of the battle area and the like. The device may
provide a link to other soldiers so that instructions and battle
preparation may include soldier interaction where the commanding officer
is heard by subordinates and the like. Further, information for each user
may be formatted to suit his particular needs. For example, a commanding
officer may receive higher level or more confidential information that
may not be necessary to provide a lower ranking officer.

[0783] In embodiments, the user may use the eyepiece on a military
transport plane in a reconnaissance or rescue mission where the eyepiece
captures and stores various images and/or video of places of interest as
it flies over areas which may be used for gaining information about a
potential ground battle area and the like. The eyepiece may be used to
detect movement of people and vehicles on the ground and thereby detect
enemy to be defeated or friendlies to be rescued or assisted. The
eyepiece may provide the ability to apply tags to a map or images of
areas flown over and searched giving a particular color coding for areas
that have been searched or still need to be searched.

[0784] In embodiments, a user on a military transport plane may be
provided with instructions and/or a checklist for equipment to be
stocked, the quantity and location to be moved and special handling
instructions for various equipment. Alerts may be provided to the user
for approaching vehicles as items are unloaded or loaded in order to
ensure security.

[0785] For maintenance and safety of the military transport plane, the
user may be provided with a preflight check for proper functioning of the
aircraft. The pilot may be alerted if proper maintenance was not
completed prior to the mission. Further, the aircraft operators may be
provided with a graphic overview or a list of the aircraft history to
track the history of the aircraft maintenance.

[0786] In some embodiments, the eyepiece may provide user guidance in an
environment related to a military fighter plane. For example, the
eyepiece may be used in such an environment when training, going into
battle, for maintenance and the like. Such use may be suited for
personnel of various ranks and levels.

[0787] By way of example, a user may use the eyepiece for training for
military fighter plane combat. The user may be presented with augmented
reality situations that simulate combat situations in a particular
military jet or plane. The user's responses and actions may be recorded
and/or analyzed to provide the user with additional information, critique
and to alter training exercises based on past data.

[0788] In embodiments related to actual combat, the user may be presented
with information showing him friendly and non-friendly aircraft
surrounding and/or approaching him. The user may be presented information
regarding the enemy aircraft such as top speed, maneuvering ability and
missile range. In embodiments, the user may receive information relating
to the presence of ground threats and may be alerted about the same. The
eyepiece may sync to the user's aircraft and/or aircraft instruments and
gauges such that the pilot may see emergency alerts and additional
information regarding the aircraft that may not normally be displayed in
the cockpit. Further, the eyepiece may display the number of seconds to the
targeted area, and the time to fire a missile or eject from the aircraft
based on incoming threats. The eyepiece may suggest maneuvers for the
pilot to preform based on the surrounding environment, potential threats
and the like. In embodiments, the eyepiece may detect and display
friendly aircraft even when such aircraft is in stealth mode.

[0789] In embodiments, the user may be provided with a preflight check for
proper functioning of the fighter aircraft. The pilot may be alerted if
proper routine maintenance was not completed prior to the mission by
linking with maintenance records, aircraft computers, or otherwise. The
eyepiece may allow the pilot to view the aircraft's maintenance history
along with diagrams and schematics of the same.

[0790] In some embodiments, the eyepiece may provide user guidance in an
environment related to a military helicopter. For example, the eyepiece
may be used in such an environment when training, going into combat, for
maintenance and the like. Such use may be suited for personnel of various
ranks and levels.

[0791] By way of example, a user may use the eyepiece for training for
military helicopter operation in combat or high-stress situations. The
user may be presented with augmented reality situations that simulate
combat situations in a particular aircraft. The user's responses and
actions may be recorded and/or analyzed to provide the user with
additional information and critique, and to alter training exercises
based on past data.

[0792] During training and/or combat, a user's eyepiece may sync into the
aircraft for alerts about the vital statistics and maintenance of the
aircraft. The user may view safety procedures and emergency procedures
for passengers as he boards the aircraft. Such procedures may show how to
ride in the aircraft safely, how to operate the doors for entering and
exiting the aircraft, the location of lifesaving equipment, among other
information. In embodiments, the eyepiece may present the user with the
location and/or position of threats such as those that could pose a
danger to a helicopter during its typical flight. For example, the user
may be presented with the location of low-flying threats such as drones
and other helicopters, and the location of land threats. In embodiments,
noise-cancelling earphones and a multi-user interface may be provided
with the eyepiece, allowing for communication during flight. In the event
that the helicopter goes down, the user's eyepiece may transmit the
location and helicopter information to a commanding officer and a rescue
team. Further, use of the eyepiece's night vision during a low-flying
mission may enable a user to turn a high-powered helicopter spotlight off
in order to search for or find the enemy without being detected.

[0793] In embodiments, and as described in various instances herein, the
eyepiece may provide assistance in tracking the maintenance of the
aircraft and to determine if proper routine maintenance has been
performed. Further, as with other aircraft and vehicles mentioned
herein, augmented reality may be used to assist in maintaining and
working on the aircraft.

[0794] In some embodiments, the eyepiece may provide user guidance in an
environment related to military drone aircraft or robots. For example,
the eyepiece may be used in such an environment in reconnaissance,
capture and rescue missions, combat, areas that pose particular danger
to humans, and the like.

[0795] In embodiments, the eyepiece may provide a video feed to the user
regarding the drone's surrounding environment. Real-time video may be
displayed for up-to-the-second information about various areas of
interest. Gathering such information may provide a soldier with the
knowledge of the number of enemy soldiers in the area, the layout of
buildings and the like. Further, data may be gathered and sent to the
eyepiece from the drone and/or robot in order to gather intelligence on
the location of persons of interest to be captured or rescued. By way of
illustration, a user outside of a secure compound or bunker may use the
drone and/or robot to send back a video or data feed of the location,
number and activity of persons in the secure compound in preparation for
a capture or rescue.

[0796] In embodiments, use of the eyepiece with a drone and/or robot may
allow a commanding officer to gather battlefield data during a mission to
make plan changes and to give various instructions to the team depending
on the data gathered. Further, the eyepiece and controls associated
therewith may allow users to deploy weapons on the drone and/or robot via
a user interface in the eyepiece. The data feed sent from the drone
and/or robot may give the user information as to what weapons to deploy
and when to deploy them.

[0797] In embodiments, the data gathered from the drone and/or robot may
allow the user to get up close to potentially hazardous situations. For
example, this may allow the user to investigate biological spills, bombs,
alleyways, foxholes, and the like to provide the user with data of the
situation and environment while keeping him out of direct harm's way.

[0798] In some embodiments, the eyepiece may provide user guidance in an
environment related to a military ship at sea. For example, the eyepiece
may be used in such an environment when training, going into battle,
performing a search and rescue mission, performing disaster clean up,
when performing maintenance and the like. Such use may be suited for
personnel of various ranks and levels.

[0799] In embodiments, the eyepiece may be used in training to prepare
users of various skill sets for performance of their job duties on the
vessel. The training may include simulations testing the user's ability
to navigate, control the ship and/or perform various tasks while in a
combat situation, and the like. The user's responses and actions may be
recorded and/or analyzed to provide the user with additional information
and critique, and to alter training exercises based on past data.

[0800] In embodiments, the eyepiece may allow the user to view potential
ship threats out on the horizon by providing him with an augmented
reality view of the same. Such threats may be indicated by dots,
graphics, or other means. Instructions may be sent to the user via the
eyepiece regarding preparation for enemy engagement once the eyepiece
detects a particular threat. Further, the user may view a map or video of
the port where they will dock and be provided with enemy location. In
embodiments, the eyepiece may allow the user to sync with the ship and/or
weapon equipment to guide the user in the use of the equipment during
battle. The user may be alerted by the eyepiece to where international
and national water boundaries lie.

[0801] In embodiments where search and rescue is needed, the eyepiece may
provide for tracking the current and/or for tagging the area of water
recently searched. In embodiments where the current is tracked, this may
provide the user information conveying the potential location or changed
location of persons of interest to be rescued. Similarly, the eyepiece
may be used in environments where the user must survey the surrounding
environment. For example, the user may be alerted to significant shifts
in water pressure and/or movement that may signal mantle movement and/or
the imminence of an upcoming disaster. Alerts may be sent to the user via
the eyepiece regarding the shifting of the mantle, the threat of an
earthquake and/or tsunami, and the like. Such alerts may be provided by
the eyepiece syncing with devices on the ship, by tracking ocean water
movement, current change, change in water pressure, a drop or rise in the
surrounding water level, and the like.

[0802] In embodiments where military ships are deployed for disaster clean
up, the eyepiece may be used in detecting areas of pollution, the speed
of travel of the pollution and predictions of the depth and where the
pollution will settle. In embodiments, the eyepiece may be useful in
detecting the parts per million of pollution and the variance thereof to
determine the change in position of the volume of the pollution.

[0803] In various embodiments the eyepiece may provide a user with a
program to check for proper functioning of the ship and the equipment
thereon. Further, various operators of the ship may be alerted if proper
routine maintenance was not completed prior to deployment. In embodiments
the user may also be able to view the maintenance history of the ship
along with the status of vital functioning of the ship.

[0804] In embodiments, the eyepiece may provide a user with various forms
of guidance in the environment of a submarine. For example, the eyepiece
may be used in such an environment when training, going into combat, for
maintenance and the like. Such use may be suited for personnel of various
ranks and levels.

[0805] By way of example, a user may use the eyepiece for training for
submarine operation in combat or high-stress situations. The user may be
presented with augmented reality situations, or otherwise, that simulate
combat situations in a particular submarine. The training program may be
based on the user's rank such that his rank will determine the type of
situation presented. The user's responses and actions may be recorded
and/or analyzed to provide the user with additional information and
critique, and to alter training exercises based on past data. In
embodiments, the eyepiece may also train the user in maintaining the
submarine, use of the submarine, proper safety procedures, and the like.

[0806] In combat environments, the eyepiece may be used to provide the
user with information relating to the user's depth, the location of the
enemy and objects, friendlies and/or enemies on the surface. In
embodiments, such information may be conveyed to the user in a visual
representation, through audio, and the like. In various embodiments, the
eyepiece may sync into and/or utilize devices and equipment of the
submarine to gather data from GPS, sonar and the like to gather various
information such as the location of other objects, submarines, and the
like. The eyepiece may display instructions to the soldier regarding
safety procedures, mission specifics, and the presence of enemies in the
area. In embodiments, the device may communicate or sync with the ship
and/or weapon equipment to guide the soldier in the use of such equipment
and to provide a display relating to the particular equipment. Such a
display may include visual and audio data relating to the equipment. By
way of further example, the device may be used with the periscope to
augment the user's visual picture and/or audio to show potential threats,
places of interest, and information that may not otherwise be displayed
by using the periscope, such as the location of enemies out of view,
national and international water boundaries, various threats, and the
like.

[0807] The eyepiece may also be used in maintenance of the submarine. For
example, it may provide the user with a pre-journey check for proper
functioning of the ship, and it may alert the operator whether proper
routine maintenance was performed or not completed prior to the mission. Further,
a user may be provided with a detailed history to review maintenance
performed and the like. In embodiments, the eyepiece may also assist in
maintaining the submarine by providing an augmented reality or other
program that instructs the user in performing such maintenance.

[0808] In embodiments, the eyepiece may provide a user with various forms
of guidance in the environment of a ship in port. For example, the
eyepiece may be used in such an environment when training, going into
combat, for maintenance and the like. Such use may be suited for
personnel of various ranks and levels.

[0809] By way of example, a user may use the eyepiece for training for a
ship in a port when in combat, under attack, or in a high-stress
situation. The user may be presented with augmented reality situations,
or otherwise, that simulate combat situations that may be seen in a
particular port and on such a ship. The training program may show various
ports from around the world and the surrounding land data, data for the
number of ally ships or enemy ships that may be in the port at a given
time, and it may show the local fueling stations and the like. The
training program may be based on the user's rank such that his rank will
determine the type of situation presented. The user's responses and
actions may be recorded and/or analyzed to provide the user with
additional information and critique, and to alter training exercises
based on past data. In embodiments, the eyepiece may also train the user
in maintaining and performing mechanical maintenance on the ship, use of
the ship, proper safety procedures to employ on the ship, and the like.

[0810] In combat environments, the eyepiece may be used to provide the
user with information relating to the port where the user will or is
docked. The user may be provided with information on the location or
other visual representation of the enemy and/or friendly ships in the
port. In embodiments, the user may obtain alerts of approaching aircraft
and enemy ships and the user may sync into the ship and/or weapon
equipment to guide the user in using the equipment while providing
information and/or display data about the equipment. Such data may
include the amount and efficacy of particular ammunition and the like.
The eyepiece may display instructions to the soldier regarding safety
procedures, mission specifics, and the presence of enemies in the area.
Such display may include visual and/or audio information.

[0811] The eyepiece may also be used in maintenance of the ship. For
example, it may provide the user with a pre-journey check for proper
functioning of the ship, and it may alert the operator whether proper
routine maintenance was performed or not completed prior to the mission. Further,
a user may be provided with a detailed history to review maintenance
performed and the like. In embodiments, the eyepiece may also assist in
maintaining the ship by providing an augmented reality or other program
that instructs the user in performing such maintenance.

[0812] In other embodiments, the user may use the eyepiece or other device
to gain biometric information of those coming into the port. Such
information may provide the person's identity and allow the user to know
if the person is a threat or someone of interest. In other embodiments, the
user may scan an object or container imported into the port for potential
threats in shipments of cargo and the like. The user may be able to
detect hazardous material based on density or various other information
collected by the sensors associated with the eyepiece or device. The
eyepiece may record information or scan a document to determine whether
the document may be counterfeit or altered in some way. This may assist
the user in checking an individual's credentials, and it may be used to
check the papers associated with particular pieces of cargo to alert the
user to potential threats or issues that may be related to the cargo such
as inaccurate manifests, counterfeit documents, and the like.

[0813] In embodiments, the eyepiece may provide a user with various forms
of guidance when using a tank or other land vehicles. For example, the
eyepiece may be used in such an environment when training, going into
combat, for surveillance, group transport, for maintenance and the like.
Such use may be suited for personnel of various ranks and levels.

[0814] By way of example, a user may use the eyepiece for training for
using a tank or other ground vehicle when in combat, under attack, in a
high-stress situation, or otherwise. The user may be presented with
augmented reality situations, or otherwise, that simulate combat
situations that may be seen when in and/or operating a tank. The training
program may test the user on proper equipment and weapon use and the
like. The training program may be based on the user's rank such that his
rank will determine the type of situation presented. The user's responses
and actions may be recorded and/or analyzed to provide the user with
additional information and critique, and to alter training exercises
based on past data. In embodiments, the eyepiece may also train the user
in maintaining the tank, use of the tank, and proper safety procedures to
employ when in the tank or land vehicle, and the like.

[0815] In combat environments, the eyepiece may be used to provide the
user with information and/or visual representations relating to the
location of the enemy and/or friendly vehicles on the landscape. In
embodiments, the user may obtain alerts of approaching aircraft and enemy
vehicles and the user may sync into the tank and/or weapon equipment to
guide the user in using the equipment while providing information and/or
display data about the equipment. Such data may include the amount and
efficacy of particular ammunition and the like. The eyepiece may display
instructions to the soldier regarding safety procedures, mission
specifics, and the presence of enemies and friendlies in the area. Such a
display may include visual and audio information. In embodiments, the
user may stream a 360-degree view of the surrounding environment outside
of the tank by using the eyepiece to sync into a camera or other device
with such a view. A video/audio feed may be provided to as many users
inside of or outside of the tank/vehicle as necessary. This may allow the
user to monitor vehicle and stationary threats. The eyepiece may
communicate with the vehicle, and with various vehicles, aircraft,
vessels and devices as described herein or otherwise apparent to one of
ordinary skill in the art, to monitor vehicle statistics such as armor
breach, engine status, and the like. The eyepiece may further provide GPS
for navigational purposes, and use of Black Silicon or other technology
as described herein to detect the enemy and navigate the environment at
night, in times of less-than-optimal viewing, and the like.

[0816] Further, the eyepiece may be used in the tank/land vehicle
environment for surveillance. In embodiments, the user may be able to
sync into cameras or other devices to get a 360-degree field of view to
gather information. Night vision and/or SWIR and the like as described
herein may be used for further information gathering where necessary. The
user may use the eyepiece to detect heat signatures to survey the
environment to detect potential threats, and may view soil density and
the like to detect roadside bombs, vehicle tracks, various threats and
the like.

[0817] In embodiments, the eyepiece may be used to facilitate group
transport with a tank or other land vehicle. For example, the user may be
provided with a checklist that is visual, interactive, or otherwise for
items and personnel to be transported. The user may be able to track and
update a manifest of items, such as those in transport, and the like. The
user may be able to view maps of the surrounding area, scan
papers and documents for identification of personnel, identify and track
items associated with individuals in transport, view the
itinerary/mission information of the individual in transport and the
like.

[0818] The eyepiece may also be used in maintenance of the vehicle. For
example, it may provide the user with a pre-journey check for proper
functioning of the tank or other vehicle, and it may alert the operator
whether proper routine maintenance was performed or not completed prior
to the mission. Further, a user may be provided with a detailed history to
review maintenance performed and the like. In embodiments, the eyepiece
may also assist in maintaining the vehicle by providing an augmented
reality or other program that instructs the user in performing such
maintenance.

[0819] In embodiments, the eyepiece may provide a user with various forms
of guidance when in an urban or suburban environment. For example, the
eyepiece may be used in such environments when training, going into
combat, for surveillance, and the like. Such use may be suited for
personnel of various ranks and levels.

[0820] By way of example, a user may use the eyepiece for training when in
combat, under attack or a high stress situation, when interacting with
local people, and the like in an urban or suburban environment. The user
may be presented with augmented reality situations, or otherwise, that
simulate combat situations that may be seen when in such an environment.
The training program may test the user on proper equipment and weapon use
and the like. The training program may be based on the user's rank such
that his rank will determine the type of situation presented. The user's
responses and actions may be recorded and/or analyzed to provide the user
with additional information and critique, and to alter training exercises
based on past data. In embodiments, the user may view alternate scenarios
of urban and suburban settings including actual buildings and layouts of
buildings and areas of potential combat. The user may be provided with
climate and weather information prior to going into the area, and may be
apprised of the number of people in the area at a given time generally or
at that time of day to prepare for possible attacks or other engagement.
Further, the user may be provided with the location of individuals in,
around and atop of buildings in a given area so that the user is prepared
prior to entering the environment.

[0821] In urban and suburban environments, the eyepiece or other device
may allow the user to survey the local people as well. The user may be
able to gather face, iris, voice, and finger and palm print data of
persons of interest. The user may be able to scan such data without
detection by the POI, from 0 to 5 meters, from a greater distance, or
right next to the POI. In embodiments, the user may employ the eyepiece
to see through smoke and/or destroyed environments, to note and record
the presence of vehicles in the area, to record environment images for
future use such as in battle plans, to note population density of an area
at various times of day, the layout of various buildings and alleys, and the like.
Furthermore, the user may gather and receive facts about a particular
indigenous population with which the soldier will have contact.

[0822] The user may also employ the eyepiece or other device in
urban/suburban environments when in combat. The device may allow the user
to use geo-location with a laser range finder to locate and kill an enemy
target. In embodiments, it may give an aerial view of the surrounding
environment and buildings. It may display enemy in the user's surrounding
area and identify the location of individuals such as enemies or
friendlies or those on the user's team. The user may use the eyepiece or
other device to stay in contact with his home base, to view/hear
instructions from commanding officers through the eyepiece where the
instructions may be developed after viewing or hearing data from the
user's environment. Further, the eyepiece may also allow the user to give
orders to others on his team. In embodiments, the user may perform
biometric data collection on those in the vicinity, record such
information and/or retrieve information about them for use in combat. The
user may link with other soldier devices for monitoring and using various
equipment carried by the soldier. In embodiments, the eyepiece may alert
the user to upcoming edges of buildings when on a rooftop and alert him
when approaching a ground shift, ledge, or the like. The user may be
enabled to view a map overlay of the environment and the members of his
team, and he may be able to detect nearby signals to be alerted and to
alert others of possible enemies in the vicinity. In various embodiments,
the user may use the eyepiece for communicating with other team members
to execute a plan. Further, the user may use the eyepiece to detect
enemies located in dark tunnels and other areas where they may be
located.

[0823] The eyepiece may also be used in a desert environment. In addition
to the general and/or applicable uses noted herein in relation to
training, combat, survival, surveillance purposes, and the like, the
eyepiece may be further employed in various use scenarios that may be
encountered in environments such as a desert environment. By way of
example, when going into combat or training, the user may use the
eyepiece to correct impaired vision through sand storms in combat,
surveillance, and training. Further, the eyepiece may simulate the poor
visibility of sand storms and other desert dangers for the user in
training mode. In combat, the eyepiece may assist the user in seeing or
detecting the enemy in the presence of a sandstorm through various means
as described above. Further, the user may be alerted to and/or be able to
see the difference between sand clouds caused by vehicles and those
generated by the wind in order to be alerted of potential enemy approach.

[0824] In various embodiments, the user may use the eyepiece to detect
ground hazards and environmental hazards. For example, the user may use
the eyepiece to detect the edges of sand dunes, sand traps and the like.
The user may also use the eyepiece to detect sand density to detect
various hazards such as ground holes, cliffs, buried devices such as
landmines and bombs, and the like. The user may be presented with a map
of the desert to view the location of such hazards. In embodiments, the
user may be provided a means by which to monitor his vital signs and to
give him alerts when he is in danger due to the extreme environmental
conditions such as heat during the day, cold at night, fluctuating
temperatures, dehydration and the like. Such alerts and monitoring may be
provided graphically in a user interface displayed in the eyepiece and/or
via audio information.

[0825] In embodiments, the user may be presented with a map of the desert
to view the location of his team, and he may use the eyepiece to detect
nearby signals, or otherwise, to get alerts of possible enemy forces that
may be displayed on the map or in an audio alert from an earpiece. In
such embodiments, the user may have an advantage over his enemies as he
may have the ability to determine the location of his team and enemies in
sandstorms, buildings, vehicles and the like. The user may view a map of
his location which may show areas in which the user has traveled recently
as one color and new areas as another. In this way or through other
means, the device may help the user avoid getting lost and/or keep moving
in the proper direction. In embodiments, the user may be provided with a
weather satellite overlay to warn the user of sand storms and hazardous
weather.

[0826] The eyepiece may also be used in a wilderness environment. In
addition to the general and/or applicable uses noted herein in relation
to training, combat, survival, surveillance purposes, and the like, the
eyepiece may be further employed in various use scenarios that may be
encountered in environments such as a wilderness environment.

[0827] By way of example, the user may use the eyepiece in training in
preparation for being in the wilderness. For example, the user may employ
the eyepiece to simulate varying degrees of wilderness environments. In
embodiments, the user may experience very thick and heavy trees/brush
with dangerous animals about, and in other training environments he may
be challenged with fewer places to hide from the enemy.

[0828] In combat, the user may use the eyepiece for various purposes. The
user may use the eyepiece to detect freshly broken twigs and branches to
detect recent enemy presence. Further, the user may use the eyepiece to
detect dangerous cliffs, caves, changes in terrain, recently
moved/disturbed dirt and the like. By way of example, by detecting the
presence of recently disturbed dirt, which may be detected if it has a
different density or heat signature from the surrounding dirt/leaves or
which may be detected by other means, the user may be alerted to a trap,
bomb or other dangerous device. In various environments described herein,
the user may use the eyepiece to communicate with his team via a user
interface or other means such that communication may remain silent and/or
undetected by the enemy in close environments, open environments
susceptible to echo, and the like. Also, in various environments, the
user may employ night vision as described herein to detect the presence
of enemies. The user may also view an overlay of trail maps and/or
mountain trail maps in the eyepiece so that the user may view a path
prior to encountering potentially dangerous terrain and/or situations
where the enemy may be located. In various environments as described
herein, the eyepiece may also amplify the user's hearing for the
detection of potential enemies.

[0829] In embodiments, a user may employ the eyepiece in a wilderness
environment in a search and rescue use scenario. For example, the user
may use the eyepiece to detect soil/leaf movement to determine if it has
been disturbed, for tracking human tracks and for finding a buried body.
The user may view a map of the area which has been tagged to show areas
already covered by air and/or other team member searches to direct the
user away from areas already scoured and toward areas not searched.
Further, the user may use the eyepiece for night vision for human and/or
animal detection through trees, brush, thickets and the like. Further, by
using the eyepiece to detect the presence of freshly broken twigs, the
user may be able to detect the presence or recent presence of persons of
interest when in a surveillance and/or rescue mission. In embodiments,
the user may also view an overlay of trail maps and/or mountain trail
maps in the eyepiece so that the user may view a path prior to
encountering potentially dangerous terrain and/or situations.

[0830] In yet other embodiments, a user may employ the eyepiece in the
wilderness for living off the land and in survival-type situations.
By way of example, the user may use the eyepiece to track animal presence
and movement when hunting for food. Further, the user may use the
eyepiece for detection of soil moisture and to detect the presence and
location of a water supply. In embodiments, the eyepiece may also amplify
the user's hearing to detect potential prey.

[0831] The eyepiece may also be used in an arctic environment. In addition
to the general and/or applicable uses noted herein in relation to
training, combat, survival, surveillance purposes, and the like, the
eyepiece may be further employed in various use scenarios that may be
encountered in environments such as an arctic environment. For example,
when in training, the eyepiece may simulate visual and audio white out
conditions that a user may encounter in an arctic environment so that the
user may adapt to operating under such stresses. Further, the eyepiece
may provide the user with a program that simulates various conditions and
scenarios due to extreme cold that he may encounter, and the program may
track and display data related to the user's predicted loss of heat.
Further, the program may adapt to simulate such conditions that the user
would experience with such heat loss. In embodiments, the program may
simulate the inability of the user to control his limbs properly which
may manifest in a loss of weapon accuracy. In other embodiments, the user
may be provided with life-saving information and instructions about such
things as burrowing in the snow for warmth, and various survival tips for
arctic conditions. In yet other embodiments, the eyepiece may sync into a
vehicle such that the vehicle responds as if the vehicle were performing
in a particular environment, for example in arctic conditions with snow
and ice. Accordingly, the vehicle may respond to the user as such and the
eyepiece may also simulate visuals and audio as if the user were in such
an environment.

[0832] In embodiments, the user may use the eyepiece in combat. The
soldier may use the eyepiece to allow him to see through white out
conditions. The user may be able to pull up an overlay map and/or audio
that provides information on buildings, ditches, land hazards and the
like to allow the soldier to move around the environment safely. The
eyepiece may alert the user to detected increases or decreases in snow
density to let him know when the landmass under the snow has changed,
such as to denote a possible ditch, hole or other hazard, an object
buried in the snow, and the like. Further, in conditions where it is
difficult to see, the user may be provided with the location of his team
members and enemies whether or not snow has obstructed his view. The
eyepiece may also provide heat signatures to display animals and
individuals to the user in an arctic environment. In embodiments, a user
interface in the eyepiece may show a soldier his vitals and give alerts
when he is in danger due to the surrounding extreme environmental
conditions. Furthermore, the eyepiece may help the user operate a vehicle
in snowy conditions by providing alerts from the vehicle to the user
regarding transmission slipping, wheel spinning, and the like.

[0833] The eyepiece may also be used in a jungle environment. In addition
to the general and/or applicable uses noted herein in relation to
training, combat, survival, surveillance purposes, and the like, the
eyepiece may be further employed in various use scenarios that may be
encountered in environments such as a jungle environment. For example,
the eyepiece may be employed in training to provide the user with information
regarding which plants may be eaten, which are poisonous and what insects
and animals may present the user with danger. In embodiments, the
eyepiece may simulate various noises and environments the user may
encounter in the jungle so that when in battle the environment is not a
distraction. Further, when in combat or an actual jungle environment, the
user may be provided with a graphical overlay or other map to show him
the surrounding area and/or to help him track where he's been and where
he must go. It may alert him of allies and enemies in the area, and it
may sense movement in order to alert the user of potential animals and/or
insects nearby. Such alerts may help the user survive by avoiding attack
and finding food. In other embodiments, the user may be provided with
augmented reality data such as in the form of a graphical overlay that
allows the user to compare a creature and/or animal to those encountered
to help the user discern which are safe for eating, which are poisonous
and the like. By having information that a particular creature is not a
threat to the user, he may be spared from having to deploy a weapon when
in stealth or quiet mode.

[0834] The eyepiece may also be used in relation to Special Forces
missions. In addition to the general and/or applicable uses noted herein
in relation to training, combat, survival, surveillance purposes, and the
like, the eyepiece may be further employed in various use scenarios that
may be encountered in relation to special forces missions. In
embodiments, the eyepiece may be of particular use on stealth missions.
For example, the user may communicate with his team in complete silence
through a user interface that each member may see on his eyepiece. The
user sharing information may navigate through the user interface with eye
movements and/or a controller device and the like. As the user puts up
instructions and/or navigates through the user interface and particular
data concerning the information to convey, the other users may see the
data as well. In embodiments, various users may be able to insert
questions via the user interface to be answered by the instruction
leader. In embodiments, a user may speak or launch other audio that all
users may hear through their eyepiece or other device. This may allow
users in various locations on the battlefield to communicate battle
plans, instructions, questions, share information and the like and may
allow them to do so without being detected.

[0835] In embodiments, the eyepiece may also be used for military
firefighting. By way of example, the user may employ the eyepiece to run a
simulation of firefighting scenarios. The device may employ augmented
reality to simulate fire and structural damage to a building as time goes
by and it may otherwise recreate life-like scenarios. As noted herein,
the training program may monitor the user's progress and/or alter
scenarios and training modules based on the user's actions. In
embodiments, the eyepiece may be used in actual firefighting. The
eyepiece may allow the user to see through smoke through various means as
described herein. The user may view, download, or otherwise access a
layout of the building, vessel, aircraft, vehicle or structure that is on
fire. In embodiments, the user will have an overview map or other map
that displays where each team member is located. The eyepiece may monitor
the user-worn or other devices during firefighting. The user may see his
oxygen supply levels in his eyepiece and may be alerted as to when he
should come out for more. The eyepiece may send notifications from the
user's devices to the command outside of the structure to deploy new
personnel to come in or out of the fire and to give status updates and
alert of possible fire fighter danger. The user may have his vital signs
displayed to determine if he is overheating, losing too much oxygen and
the like. In embodiments, the eyepiece may be used to analyze whether
cracks in beams are forming based on beam density, heat signatures and the
like and inform the user of the structural integrity of the building or
other environment. The eyepiece may provide automatic alerts when
structural integrity is compromised.

[0836] In embodiments, the eyepiece may also be used for maintenance
purposes. For example, the eyepiece may provide the user with a
pre-mission and/or use checklist for proper functioning of the item to be
used. It may alert the operator if proper maintenance has not been logged
in the item's database. It may provide a virtual maintenance and/or
performance history for the user to determine the safety of the item or
of necessary measures to be taken for safety and/or performance. In
embodiments, the eyepiece may be used to perform augmented reality
programs and the like for training the user in weapon care and
maintenance and for lessons in the mechanics of new and/or advanced
equipment. In embodiments, the eyepiece may be used in maintenance and/or
repair of various items such as weapons, vehicles, aircraft, devices and
the like. The user may use the eyepiece to view an overlay of visual
and/or audio instructions of the item to walk the user through
maintenance without the need for a handheld manual. In embodiments,
video, still images, 3D and/or 2D images, animated images, audio and the
like may be used for such maintenance. In embodiments, the user may view
an overlay and/or video of various images of the item such that the user
is shown what parts to remove, in what order, and how, which parts to
add, replace, repair, enhance and the like. In embodiments, such
maintenance programs may be augmented reality programs or otherwise. In
embodiments, the user may use the eyepiece to connect with the machine or
device to monitor the functioning and/or vital statistics of the machine
or device to assist in repair and/or to provide maintenance information.
In embodiments, the user may be able to use the eyepiece to propose a
next course of action during maintenance and the eyepiece may send the
user information on the likelihood of such action harming the machine,
helping to fix the machine, how and/or if the machine will function after
the next step and the like. In embodiments, the eyepiece may be used for
maintenance of all items, machines, vehicles, devices, aircraft and the
like as mentioned herein or otherwise applicable to or encountered in a
military environment.

[0837] The eyepiece may also be used in environments where the user has
some degree of unfamiliarity with the language spoken. By way of example,
a soldier may use the eyepiece and/or device to access near real-time
translation of those speaking around him. Through the device's earpiece,
he may hear a translation in his native language of one speaking to him.
Further, he may record and translate comments made by prisoners and/or
other detainees. In embodiments, the soldier may have a user interface
that enables translating a phrase or providing translation to the user
via an earpiece, via the user's eyepiece in a textual image or otherwise.
In embodiments, the eyepiece may be used by a linguist to provide a
skilled linguist with supplemental information regarding dialect spoken
in a particular area or that which is being spoken by people near him. In
embodiments, the linguist may use the eyepiece to record language samples
for further comparison and/or study. Other experts may use the eyepiece
to employ voice analysis to determine if the speaker is experiencing
anger or shame, is lying, and the like by monitoring inflection, tone,
stutters and the like. This may give the listener insight into the
speaker's intentions even when the listener and speaker speak different
languages.

[0838] In embodiments, the eyepiece may allow the user to decipher body
language and/or facial expressions or other biometric data from another.
For example, the user may use the device to analyze a person's pupil
dilation, eye blink rates, voice inflection, body movement and the like
to determine if the person is lying, hostile, under stress, likely a
threat, and the like. In embodiments, the eyepiece may also gather data
such as facial expressions to detect and warn the user if the speaker is
lying, likely making unreliable statements, hostile, and the like. In
embodiments, the eyepiece may provide alerts to the user when
interacting with a population or other individuals to warn about
potential threatening individuals that may be disguised as non-combative
or ordinary citizens or other individuals. User alerts may be audio
and/or visual and may appear in the user's eyepiece in a user interface
or overlaid in the user's vision and/or be associated with the surveyed
individual in the user's line of vision. Such monitoring as described
herein may be undetected as the user employs the eyepiece and/or device
to gather the data from a distance, or it may be performed up close in a
disguised or discreet fashion, or performed with the knowledge and/or
consent of the individual in question.

[0839] The eyepiece may also be used when dealing with bombs and other
hazardous environments. By way of example, the eyepiece may provide a
user with alerts of soil density changes near the roadside which could
alert the user and/or team of a buried bomb. In embodiments, similar
methods may be employed in various environments, such as testing the
density of snow to determine if a bomb or other explosive may be found in
arctic environments and the like. In embodiments, the eyepiece may provide
a density calculation to determine whether luggage and/or transport items
tend to have an unexpected density or one that falls outside of a
particular range for the items being transported. In embodiments, the
eyepiece may provide a similar density calculation and provide an alert
if the density is found to be one that falls within that expected for
explosive devices, other weapons and the like. One skilled in the art
will recognize that bomb detection may be employed via chemical sensors
as well and/or means known in the art and may be employed by the eyepiece
in various embodiments. In embodiments, the eyepiece may be useful in
bomb disposal. The user may be provided with an augmented reality or
other audio and/or visual overlay in order to gain instructions on how to
defuse the particular type of bomb present. Similar to the maintenance
programs described above, the user may be provided with instructions for
defusing a bomb. In embodiments, if the bomb type is unknown, a user
interface may provide the user with instructions for safe handling and
possible next steps to be taken. In embodiments, the user may be alerted
of a potential bomb in the vicinity and may be presented with
instructions for safely dealing with the situation, such as how to safely
flee the bomb area, how to safely exit a vehicle with a bomb, how closely
the user may come to the bomb safely, how to defuse the bomb via
instructions appropriate for the situation and the user's skill level,
and the like. In embodiments, the eyepiece may also provide a user with
training in such hazardous environments and the like.
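
By way of non-limiting illustration, the density screening described in
the preceding paragraph amounts to a simple range comparison: compute an
item's density from its mass and volume, compare it to the range expected
for the declared contents, and compare it to any ranges associated with
materials of concern. The following minimal sketch in Python shows one
possible form of such a check; the function name, parameters, and any
numeric values supplied to it are illustrative assumptions only.

    # Illustrative sketch of a density range check; all ranges are supplied
    # by the caller and the example numbers below are placeholders.
    def screen_item(mass_kg, volume_m3, expected_range, ranges_of_concern):
        if volume_m3 <= 0:
            raise ValueError("volume must be positive")
        density = mass_kg / volume_m3  # kg per cubic meter
        alerts = []
        low, high = expected_range
        if not (low <= density <= high):
            alerts.append("density outside expected range for declared contents")
        for label, (c_low, c_high) in ranges_of_concern.items():
            if c_low <= density <= c_high:
                alerts.append("density consistent with " + label)
        return density, alerts

    # Example usage with placeholder numbers:
    # density, alerts = screen_item(12.0, 0.05, (150.0, 300.0),
    #                               {"material of concern": (1500.0, 1800.0)})

An alert is raised either when the measured density falls outside the
range declared for the item or when it falls inside a range flagged as of
concern, mirroring the two checks described above.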

[0840] In embodiments, the eyepiece may detect various other hazards such
as biological spills, chemical spills, and the like and provide the user
with alerts of the hazardous situation. In embodiments, the user may also
be provided with various instructions on defusing the situation, getting
to safety, and keeping others safe in the environment and/or under such
conditions. Although situations with bombs have been described, it is
intended that the eyepiece may be used similarly in various hazardous
and/or dangerous situations and to guard against and to neutralize and/or
provide instruction and the like when such danger and hazards are
encountered.

[0841] The eyepiece may be used in a general fitness and training
environment in various embodiments. The eyepiece may provide the user
with such information as the miles traveled during his run, hike, walk
and the like. The eyepiece may provide the user with information such as
the number of exercises performed, the calories burned, and the like. In
embodiments, the eyepiece may provide virtual instructions to the user in
relation to performing particular exercises correctly, and it may provide
the user with additional exercises as needed or desired. Further, the
eyepiece may provide a user interface or otherwise where physical
benchmarks are disclosed for the soldier to meet the requirements for his
particular program. Further, the eyepiece may provide data related to the
amount and type of exercise needed to be carried out in order for the
user to meet such requirements. Such requirements may be geared toward Special
Forces qualification, basic training, and the like. In embodiments, the
user may work with virtual obstacles during the workout to prevent the
user from setting up actual hurdles, obstacles and the like.

[0842] Although specific various environments and use scenarios have been
described herein, such description is not intended to be limiting.
Further, it is intended that the eyepiece may be used in various
instances apparent to one of ordinary skill in the art. It is also
intended that applicable uses of the eyepiece as noted for particular
environments may be applied in various other environments even though not
specifically mentioned therewith.

[0843] In embodiments, a user may access and/or otherwise manipulate a
library of information stored on a secure digital (SD) card, Mini SD
card, other memory, remotely loaded over a tactical network, or stored by
other means. The library may be part of the user's equipment and/or it
may be remotely accessible. The user's equipment may include a DVR or
other means for storing information gathered by the user and the recorded
data and/or feed may be transmitted elsewhere as desired. In embodiments,
the library may include images of local threats, information and/or
images of various persons listed as threats and the like. The library of
threats may be stored in an onboard mini-SD card or other means. In
embodiments, it may be remotely loaded over a tactical network.
Furthermore, in embodiments, the library of information may contain
programs and other information useful in the maintenance of military
vehicles or the data may be of any variety or concerning any type of
information. In various embodiments, the library of information may be
used with a device such that data is transferred and/or sent to or from
the storage medium and the user's device. By way of example, data may be
sent to a user's eyepiece and from a stored library such that he is able
to view images of local persons of interest. In embodiments, data may be
sent to and from a library included in the soldier's equipment or located
remotely, and data may be sent to and from various devices as described
herein. Further, data may be sent between various devices as described
herein and various libraries as described above.

[0844] In embodiments, military simulation and training may be employed.
By way of example, gaming scenarios normally used for entertainment may
be adapted and used for battlefield simulation and training. Various
devices, such as the eyepiece described herein may be used for such
purpose. Near field communications may be used in such simulation to
alert personnel, present dangers, change strategy and scenario and for
various other communication. Such information may be posted to share
information where it is needed to give instruction and/or information.
Various scenarios, training modules and the like may be run on the user's
equipment. For example only, and not to limit the use of such training, a
user's eyepiece may display an augmented reality battle environment. In
embodiments, the user may act and react in such an environment as if he
were actually in battle. The user may advance or regress depending on his
performance. In various embodiments, the user's actions may be recorded
for feedback to be provided based on his performance. In embodiments, the
user may be provided with feedback independent of whether his performance
was recorded. In embodiments, information posted as described above may
be password or biometrically protected and/or encrypted and instantly
available or available after a particular period of time. Such
information stored in electronic form may be updated instantly for all
the change orders and updates that may be desired.

[0845] Near field communications or other means may also be used in
training environments and for maintenance to share and post information
where it is needed to give instruction and/or information. By way of
example, information may be posted in classrooms, laboratories,
maintenance facilities, repair bays, and the like, or wherever it is
needed for such
training and instruction. A user's device, such as the eyepiece described
herein, may allow such transmission and receipt of information.
Information may be shared via augmented reality where a user encounters a
particular area and once there he is notified of such information.
Similarly, as described herein, near field communications may be used in
maintenance. By way of example, information may be posted precisely where
it is needed, such as in maintenance facilities, repair bays, associated
with the item to be repaired, and the like. More specifically, and not to
limit the present disclosure, repair instructions may be posted under the
hood of a military vehicle and visible with the use of the soldier's
eyepiece. Similarly, various instruction and training information may be
shared with various users in any given training situation such as
training for combat and/or training for military device maintenance. In
embodiments, information posted as described above may be password or
biometrically protected and/or encrypted and instantly available or
available after a particular period of time. Such information stored in
electronic form may be updated instantly for all the change orders and
updates that may be desired.

[0846] In embodiments, an application applied to the present invention may
be for facial recognition or sparse facial recognition. Such sparse
facial recognition may use one or more facial features to exclude
possibilities in identifying persons of interest. Sparse facial
recognition may have automatic obstruction masking and error and angle
correction. In embodiments, and by way of example and not to limit the
present invention, the eyepiece, flashlight and devices as described
herein may allow for sparse facial recognition. This may work like human
vision and quickly exclude regions or entire profiles that don't match by
using sparse matching on all image vectors at once. This may make false
positives almost impossible. Further, this may simultaneously utilize
multiple images to enlarge the vector space and increase accuracy. This
may work with either multiple database images or multiple target images
based on availability or operational requirement. In embodiments,
a device may manually or automatically identify one or more specific
clean features with minimal reduction in accuracy. By way of example,
accuracy may be of various ranges and it may be at least 87.3% for a
nose, 93.7% for an eye, and 98.3% for a mouth and chin. Further angle
correction with facial reconstruction may be employed and, in
embodiments, up to a 45 degree off angle correction with facial
reconstruction may be achieved. This may be further enhanced with 3D
image mapping technology. Further, obscured area masking and replacement
may be employed. In embodiments, 97.5% and 93.5% obscured area masking
and replacement may be achieved for sunglasses and a scarf respectively.
In embodiments, the ideal input image may be 640 by 480. The target image
may match reliably with less than 10% of the input resolution due to long
range or atmospheric obscurants. Further, the specific ranges as noted
above may be greater or lesser in various embodiments.
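
The exclusion step described in the preceding paragraph can be thought of
as comparing the probe image to each enrolled profile one facial feature
at a time and discarding any profile whose available features do not
match, so that obscured features are simply skipped rather than
penalized. The following minimal sketch in Python (using NumPy)
illustrates that idea under stated assumptions; the feature names,
distance metric, and thresholds are hypothetical and are not drawn from
the accuracy figures above.

    # Illustrative sketch: exclude enrolled profiles whose available facial
    # feature vectors are too far from the probe's. Obscured or missing
    # features are skipped rather than counted against a candidate.
    import numpy as np

    def exclude_candidates(probe, gallery, thresholds):
        # probe: {feature_name: vector}; gallery: {person_id: {feature_name: vector}}
        survivors = []
        for person_id, profile in gallery.items():
            rejected = False
            for feature, vec in probe.items():
                if feature not in profile or feature not in thresholds:
                    continue  # masked/obscured feature: do not penalize
                distance = np.linalg.norm(
                    np.asarray(vec) - np.asarray(profile[feature]))
                if distance > thresholds[feature]:
                    rejected = True
                    break
            if not rejected:
                survivors.append(person_id)
        return survivors

Only candidates that survive every available feature comparison would
then proceed to a full match, which is how entire profiles can be
excluded quickly without examining every image in detail.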

[0847] In various embodiments, the devices and/or networks described
herein may be applied for the identification and/or tracking of friends
and/or allies. In embodiments, facial recognition may be employed to
positively identify friends and/or friendly forces. Further, real-time
network tracking and/or real-time network tracking of blue and red forces
may allow a user to know where his allies and/or friendlies are. In
embodiments, there may be a visual separation range between blue and red
forces and/or forces identified by various markers and/or means. Further,
the user may be able to geo-locate the enemy and share the enemy's
location in real time. Further, the location of friendlies may be shared
in real time as well. Devices used for such an application may be
biometric collection glasses, the eyepiece, other devices as described
herein, and those known to one of ordinary skill in the art.

[0848] In embodiments, the devices and/or networks described herein may be
applied in medical treatment and diagnosis. By way of example, such
devices may enable medical personnel to make remote diagnoses. Further,
and by way of example, when field medics arrive on a scene, or remotely,
they may use a device such as a fingerprint sensor to instantaneously
call up the soldier's medical history, allergies, blood type and other
time-sensitive medical data to apply the most effective treatment. In
embodiments, such data may be called up via facial recognition, iris
recognition, and the like of the soldier, which may be accomplished via
the eyepiece described herein or another device.

[0849] In embodiments, users may share various data via various networks
and devices as described herein. By way of example, a 256-bit AES
encrypted video wireless transceiver may bi-directionally share video
between units and/or with a vehicle's computer. Further, biometric
collection of data, enrollment, identification and verification of
potential persons of interest, biometric data of persons of interest and
the like may be shared locally and/or remotely over a wireless network.
Further, such identification and verification of potential persons of
interest may be accomplished or aided by the data shared locally and/or
remotely over a wireless network. The line of biometric systems and
devices as described herein may be enabled to share data over a network
as well. In embodiments, data may be shared with, from and/or between
various devices, individuals, vehicles, locations, units and the like. In
embodiments there may be inter-unit and intra unit communication and data
sharing. Data may be shared via, from and/or between existing
communications assets, a mesh network or other network, a mil-con type
ultra wide band transceiver caps with 256-bit encryption, a mil-con type
cable, removable SD and/or microSD memory card, a Humvee, PSDS2, unmanned
aerial vehicle, WBOTM, or other network relay, a combat radio, a mesh
networked computer, devices such as but not limited to various devices
described herein, a bio-phone 3G/4G networked computer, a digital
dossier, tactical operating centers, command posts, DCSG-A, BAT servers,
individuals and/or groups of individuals, and any eyepiece and/or device
described herein and/or those known to persons skilled in the art and the
like.

[0850] In embodiments, a device as described herein or another device may
contain a viewing pane that reverses to project imagery on any surface
for combat team viewing by a squad and/or team leader. The transparent
viewing pane or other viewing pane may be rotated 180 degrees, or another
number of degrees, in projection mode to share data with a team and/or
various individuals. In embodiments, devices including but not limited to
a monocular and binocular NVG may interface with all or virtually all
tactical radios in use and allow the user to share live video, S/A,
biometric data and other data in real time or otherwise. Such devices as
the binocular and monocular noted above may be a VIS, NIR and/or SWIR
binocular or monocular that is self-contained and comprises a color
day/night vision and/or digital display with a compact, encrypted,
wireless-enabled computer for interfacing with tactical radios. Various
data may be shared over combat radios, mesh networks and long-range
tactical networks in real time or near real time. Further, data may be
organized into a digital dossier. Data of a person of interest (POI) may
be organized into a digital dossier whether or not such POI was enrolled.
Data that is shared, in embodiments, may be compared, manipulated and the
like. While specific devices are mentioned, any device mentioned herein
may be capable of sharing information as described herein and/or as would
be recognized by one having ordinary skill in the art.

[0851] In embodiments, biometric data, video, and various other types of
data may be collected via various devices, methods and means. For
example, fingerprints and other data may be collected from weapons and
other objects at a battle, terrorism and/or crime scene. Such collection
may be captured by video or other means. A pocket bio cam, a flashlight
as described herein with a built-in still/video camera, various other
devices described herein, or another device may collect video; record and
monitor; and collect and identify biometric photographic data. In
embodiments, various
devices may record, collect, identify and verify data and biometric data
relating to the face, fingerprints, latent fingerprints, latent palm
prints, iris, voice, pocket litter, scars, tattoos, and other identifying
visible marks and environmental data. Data may be geo-located and
date/time stamped. The device may capture EFTS/EBTS compliant salient
images to be matched and filed by any biometric matching software.
Further, video scanning and potential matching against a built-in or
remote iris and facial database may be performed. In embodiments, various
biometric data may be captured and/or compared against a database and/or
it may be organized into a digital dossier. In embodiments, an imaging
and detection system may provide for biometrics scanning and may allow
facial tracking and iris recognition of multiple subjects. The subjects
may be moving in or out of crowds at high speeds and may be identified
immediately, and local and/or remote storage and/or analysis may be
performed on such images and/or data. In embodiments, devices may perform
multi-modal biometric recognition. For example, a device may collect and
identify a face and iris, an iris and latent fingerprints, various other
combinations of biometric data, and the like. Further, a device may
record video, voice, gait, fingerprints, latent fingerprints, palm
prints, latent palm prints and the like and other distinguishing marks
and/or movements. In various embodiments, biometric data may be filed
using the most salient image plus manual entry, enabling partial data
capture. Data may be automatically geo-located, time/date stamped and
filed into a digital dossier with a locally or network assigned GUID. In
embodiments, devices may record full livescan 4-fingerprint slaps and
rolls, palm prints, fingertips and fingerprints. In embodiments,
operators may collect and verify POIs with an
onboard or remote database while overseeing indigenous forces. In
embodiments, a device may access web portals and biometric enabled watch
list databases and/or may contain existing biometric pre-qualification
software for POI acquisition. In embodiments, biometrics may be matched
and filed by any approved biometric matching software for sending and
receiving secure perishable voice, video and data. A device may integrate
and/or otherwise analyze biometric content. In embodiments, biometric
data may be collected in biometric standard image and data formats that
can be cross-referenced for near real-time or real-time data
communication with the Department of Defense Biometric Authoritative
Database or other database. In embodiments, a device may employ
algorithms for detection,
analysis, or otherwise in relation to finger and palm prints, iris and
face images. A device, in embodiments, may illuminate an iris or latent
fingerprint simultaneously for a comprehensive solution. In embodiments,
a device may use high-speed video to capture salient images in unstable
situations and may facilitate rapid dissemination of situational
awareness with intuitive tactical display. Real time situational
awareness may be provided to command posts and/or tactical operating
centers. In embodiments, a device may allow every soldier to be a sensor
and to observe and report. Collected data may be tagged with date, time
and geo-location of collection. Further, biometric images may be NIST/ISO
compliant, including ITL 1-2007. Further, in embodiments, a laser range
finder may assist in biometric capture and targeting. A library of
threats may be stored on an onboard Mini-SD card or remotely loaded over
a tactical network. In embodiments, devices may wirelessly transfer
encrypted data between devices with a band transceiver and/or
ultra-wideband transceiver. A device may perform onboard matching of
potential POIs against a built-in database or securely over a battlefield
network. Further, a device may employ high-speed video to capture salient
images in all environmental conditions. Biometric profiles may be
uploaded, downloaded and searched in seconds or less. In embodiments, a
user may employ a device to geo-locate a POI with visual biometrics at a
safe distance and positively identify a POI with robust sparse
recognition algorithms for the face, iris and the like. In embodiments, a
user may merge and print visual biometrics on one comprehensive display
with augmented target highlighting and view matches and warnings without
alerting the POI. Such a display may be in various devices such as an
eyepiece, handheld device and the like.
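
By way of a non-limiting illustration only, the following sketch shows
one way a biometric capture could be tagged with a GUID, a UTC timestamp,
and a geo-location and filed into a digital dossier as described above;
the field names and data layout are assumptions for illustration and are
not a required format.

    # Minimal sketch (not an actual dossier format): tag a biometric
    # capture with a GUID, UTC timestamp, and geo-location, then file it
    # in a digital dossier keyed by person of interest.
    import uuid
    from datetime import datetime, timezone

    def file_capture(dossier, poi_id, modality, image_bytes, lat, lon):
        """Create a geo-located, time-stamped capture record and append
        it to the dossier entry for the given (possibly provisional) POI."""
        record = {
            "guid": str(uuid.uuid4()),             # locally assigned GUID
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "location": {"lat": lat, "lon": lon},  # from the device's GPS
            "modality": modality,                  # e.g. "iris", "latent_fingerprint"
            "image": image_bytes,                  # salient image payload
        }
        dossier.setdefault(poi_id, []).append(record)
        return record["guid"]

    # Example: file an iris capture for a provisional POI.
    dossier = {}
    file_capture(dossier, "poi-0001", "iris", b"<image data>", 33.312, 44.361)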

[0852] In embodiments, as indigenous persons filter through a controlled
checkpoint and/or vehicle stop, an operator can collect, enroll, identify
and verify POIs from a watch list using low profile face and iris
biometrics. In embodiments, biometric collection and identification may
take place at a crime scene. For example, an operator may rapidly collect
biometric data from all potential POIs at a bombing or other crime scene.
The data may be collected, geo-tagged and stored in a digital dossier to
compare POIs against past and future crime scenes. Further, biometric
data may be collected in real time from POIs in house and building
searches. Such data, as displayed, may let the operator know whether to
release, detain or arrest a potential POI. In other embodiments, low
profile collection of data and identification may occur in street
environments or otherwise. A user may move through a marketplace, for
example, and assimilate with the local population while collecting
biometric, geo-location and/or environmental data with minimal visible
impact. Furthermore, biometric data may be collected on the dead or
wounded to identify whether they were or are a POI. In embodiments, a
user may identify known or unknown POIs by facial identification, iris
identification, fingerprint identification, visible identifying marks,
and the like of the deceased or wounded, or others, and keep a digital
dossier updated with such data.

[0853] In embodiments, a laser range finder and/or inclinometer may be
used to determine the location of persons of interest and/or improvised
explosive devices, other items of interest, and the like. Various devices
described herein may contain a digital compass, an inclinometer and a
laser range finder to provide geo-location of POIs, targets, IEDs, items
of interest and the like. The geo-location of a POI and/or item of
interest may be transmitted over networks, tactical networks, or
otherwise, and such data may be shared among individuals. In embodiments,
a device may allow an optical array and a laser range finder to
geo-locate and range multiple POIs simultaneously with continuous
observation of a group or crowd in the field in an uncontrolled
environment. Further, in embodiments, a device may contain a laser range
finder and designator to range and paint a target simultaneously with
continuous observation of one or more targets. Further, in embodiments, a
device may be soldier-worn, handheld or otherwise and include target
geo-location with an integrated laser range finder, digital compass,
inclinometer and GPS receiver to locate the enemy in the field. In
embodiments, a device may contain an integrated digital compass,
inclinometer, MEMS gyro and GPS receiver to record and display the
soldier's position and direction of sight. Further, various devices may
include an integrated GPS receiver or other GPS receiver, IMU, 3-axis
digital compass or other compass, laser range finder, gyroscope,
micro-electro-mechanical system based gyroscope, accelerometer and/or an
inclinometer for positional and directional accuracy and the like.
Various devices and methods as described herein may enable a user to
locate enemies and POIs in the field and share such information with
friendlies via a network or other means.

[0854] In embodiments, users may be mesh networked or otherwise networked
together with communications and geo-location. Further, each user may be
provided with a pop-up or other location map of all users or proximate
users. This may provide the user with knowledge of where friendly forces
are located. As described above, the location of enemies may be
discovered. The location of enemies may be tracked, and the user may be
provided with a pop-up or other location map of enemies which may provide
the user with knowledge of where enemy forces are located. Locations of
friendlies and enemies may be shared in real time. Users may be provided
with a map depicting such locations. Such maps of the location and/or
number of friendlies, enemies and combinations thereof may be displayed
in the user's eyepiece or other device for viewing.

[0855] In embodiments, devices, methods, and applications may allow for
hands-free, wireless maintenance and repair with visually and/or audio
enhanced instructions. Such applications may include RFID sensing for
parts location and kitting. In examples, a user may use a device for
augmented reality guided field repair. Such field repair may be guided by
hands-free, wireless maintenance and repair instructions. A device, such
as an eyepiece, projector, monocular and the like and/or other devices as
described herein, may display images of maintenance and repair
procedures. In embodiments, such images may be still and/or video,
animated, 3-D, 2-D, and the like. Further, the user may be provided with
voice and/or audio annotation of such procedures. In embodiments, this
application may be used in high threat environments where working
undetected is a safety consideration. Augmented reality images and video
may be projected onto or otherwise overlaid on the actual object with
which the user is working, or in the user's field of view of the object,
to provide video, graphical, textual or other instructions for the
procedure to be performed. In embodiments, a library of programs for
various procedures may be downloaded and accessed, wired or wirelessly,
from a body worn computer or from a remote device, database and/or
server, and the like. Such programs may be used for actual maintenance or
for training purposes.

[0856] In embodiments, the devices, methods and descriptions found herein
may provide for an inventory tracking system. In embodiments, such a
tracking system may allow a scan from up to a 100 m distance and handle
more than 1000 simultaneous links at a 2 Mb/s data rate. The system may
give annotated audio and/or visual information regarding inventory
tracking when viewing and/or in the vicinity of the inventory. In
embodiments, devices may include an eyepiece, monocular, binocular and/or
other devices as described herein, and inventory tracking may use SWIR,
SWIR color, and/or night vision technology, body worn wired or wireless
computers, wireless UWB secure tags, RFID tags, a helmet/hardhat reader
and display and the like. In embodiments, and by way of example only, a
user may receive visual and/or audio information regarding inventory such
as which items are to be destroyed or transferred, the quantity of items
to be destroyed or transferred, where the items are to be transferred or
disposed of, and the like. Further, such information may highlight, or
otherwise provide a visual identification of, the items in question along
with instructions. Such information may be displayed on a user's
eyepiece, projected onto an item, displayed on a digital or other display
or monitor, and the like. The items in question may be tagged via UWB
and/or RFID tags, and/or augmented reality programs may be used to
provide visualization and/or instruction to the user such that the
various devices as described herein may provide the information as
necessary for inventory tracking and management.
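
By way of a non-limiting illustration, the following sketch shows how
scanned UWB/RFID tag identifiers might be matched against an inventory
manifest to produce the annotated instructions described above; the
manifest format and tag identifiers are hypothetical.

    # Illustrative sketch: match scanned tag IDs against a manifest and
    # produce one human-readable instruction per recognized tag.
    MANIFEST = {
        "TAG-00017": {"item": "crate, medical", "action": "transfer",
                      "destination": "FOB North"},
        "TAG-00042": {"item": "expired rations", "action": "destroy",
                      "destination": None},
    }

    def annotate_scan(scanned_tags):
        notes = []
        for tag in scanned_tags:
            entry = MANIFEST.get(tag)
            if entry is None:
                notes.append(f"{tag}: not on manifest - flag for review")
            elif entry["action"] == "transfer":
                notes.append(f"{tag}: {entry['item']} - transfer to "
                             f"{entry['destination']}")
            else:
                notes.append(f"{tag}: {entry['item']} - mark for destruction")
        return notes

    for line in annotate_scan(["TAG-00017", "TAG-00042", "TAG-09999"]):
        print(line)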

[0857] In various embodiments, SWIR, SWIR color, monocular, night vision,
and body worn wireless computer devices, the eyepiece as described herein
and/or other devices as described herein may be used when firefighting.
In embodiments, a user may have increased visibility through smoke, and
the location of various individuals may be displayed to the user by his
device in an overlaid map or other map so that he may know the location
of firefighters and/or others. The device may show a real-time display of
all firefighters' locations and provide hot spot detection of areas with
temperatures below and above 200 degrees Celsius without triggering false
alarms. Maps of the facility may also be provided by the device,
displayed on the device, projected from the device and/or overlaid in the
user's line of sight through augmented reality or other means to help
guide the user through the structure and/or environment.

[0858] Systems and devices as described herein may be configurable to any
software and/or algorithm to conform to mission specific needs and/or
system upgrades.

[0859] Referring to FIG. 73, the eyepiece 100 may interface with a
`biometric flashlight` 7300, such as one including biometric data taking
sensors for recording an individual's biometric signature(s) while also
providing the function, and taking the form factor, of a typical handheld
flashlight. The biometric flashlight may interface with the eyepiece
directly, such as through a wireless connection directly from the
biometric flashlight to the eyepiece 100, or, as shown in the embodiment
represented in FIG. 73, through an intermediate transceiver 7302 that
interfaces wirelessly with the biometric flashlight and through a wired
or wireless interface from the transceiver to the eyepiece (e.g. where
the transceiver device is worn, such as on the belt). Although other
mobile biometric devices are depicted in figures without showing the
transceiver, one skilled in the art will appreciate that any of the
mobile biometric devices may be made to communicate with the eyepiece 100
indirectly through the transceiver 7302, directly to the eyepiece 100, or
operate independently. Data may be transferred from the biometric
flashlight to the eyepiece memory, to memory in the transceiver device,
to removable storage cards 7304 as part of the biometric flashlight, and
the like. The biometric flashlight may include an integrated camera and
display, as described herein. In embodiments, the biometric flashlight
may be used as a stand-alone device, without the eyepiece, where data is
stored internally and information is provided on a display. In this way,
non-military personnel may more easily and securely use the biometric
flashlight. The biometric flashlight may have a range for capturing
certain types of biometric data, such as a range of 1 meter, 3 meters, 10
meters, and the like. The camera may provide for monochrome or color
images. In embodiments, the biometric flashlight may provide a covert
biometric data collection flashlight-camera that may rapidly geo-locate,
monitor and collect environmental and biometric data for onboard or
remote biometric matching. In an example use scenario, a soldier may be
assigned to a guard post at nighttime. The soldier may utilize the
biometric flashlight seemingly only as a typical flashlight, but where,
unbeknownst to the individuals being illuminated by the device, it is
also running and/or taking biometrics as part of a data collection and/or
biometrics identification process.

[0860] Referring now to FIG. 76, a 360° imager utilizes digital
foveated imaging to concentrate pixels on any given region, delivering a
high resolution image of the specified region. Embodiments of the
360° imager may feature a continuous 360°×40°
panoramic FOV with a super-high resolution foveated view and simultaneous
and independent 10× optical zoom. The 360° imager may
include dual 5 megapixel sensors, imaging capabilities of 30 fps and an
image acquisition time <100. The 360° imager may include a
gyro-stabilized platform with independently stabilized image sensors. The
360° imager may have only one moving part and two imaging sensors,
which allows for reduced image processing bandwidth in a compact optical
system design. The 360° imager may also feature low angular
resolution and high-speed video processing and may be sensor agnostic.
The 360° imager may be used as a surveillance fixture in a
facility, on a mobile vehicle with a gyro stabilized platform, mounted on
a traffic light or telephone pole, robot, aircraft, or other location
that allows for persistent surveillance. Multiple users may independently
and simultaneously view the environment imaged by the 360° imager.
For example, imagery captured by the 360° imager may be displayed
in the eyepiece to allow all recipients of the data, such as all
occupants in a combat vehicle, to have real-time 360° situational
awareness. The panoramic 360° imager may recognize a person at 100
meters, and foveated 10× zoom can be used to read a license plate
at 500 meters. The 360° imager allows constant recording of the
environment and features an independently controllable foveated imager.
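
By way of a non-limiting illustration of foveated region selection, the
following sketch maps a requested azimuth and elevation to a pixel window
within a 360°×40° panorama so that processing, or the
independent zoom channel, can be concentrated on that region; the
panorama dimensions are assumptions and do not reflect an actual sensor
layout.

    # Illustrative sketch: map a foveation direction to a pixel window
    # within a continuous 360 x 40 degree panorama.
    PANO_WIDTH_PX = 8192    # assumed panorama width covering 360 degrees
    PANO_HEIGHT_PX = 1024   # assumed panorama height covering 40 degrees

    def fovea_window(azimuth_deg, elevation_deg, window_px=512):
        """Return (x0, y0, x1, y1) of a square window centered on the
        requested azimuth (0-360 deg) and elevation (-20 to +20 deg)."""
        cx = int((azimuth_deg % 360.0) / 360.0 * PANO_WIDTH_PX)
        cy = int((20.0 - elevation_deg) / 40.0 * PANO_HEIGHT_PX)
        half = window_px // 2
        return (max(cx - half, 0), max(cy - half, 0),
                min(cx + half, PANO_WIDTH_PX), min(cy + half, PANO_HEIGHT_PX))

    # Example: concentrate pixels at azimuth 135 deg, elevation +5 deg.
    print(fovea_window(135.0, 5.0))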

[0862] The 360° imager may be part of a network with wireless or
physical reach back to a TOC or database. For example, a user may use a
display with a 360° imager driver to view imagery from a
360° imager wirelessly or using a wired connection, such as a
mil-con type cable. The display may be a combat radio or mesh networked
computer that is networked with a headquarters. Data from a database,
such as a DoD authoritative database may be accessed by the combat radio
or mesh networked computer, such as by using a removable memory storage
card or through a networked connection.

[0863] Referring now to FIG. 77, a multi-coincident view camera may be
used for imaging. The feed from the multi-coincident view camera may be
transmitted to the eyepiece 100 or any other suitable display device. In
one embodiment, the multi-coincident view camera may be a
fully-articulating, 3- or 4-coincident view, SWIR/LWIR imaging, and
target designating system that allows simultaneous wide, medium and
narrow field-of-view surveillance, with each sensor at VGA or SXVGA
resolution for day or night operations. The lightweight, gimbaled sensor
array may be inertially stabilized as well as geo-referenced, enabling
highly accurate sensor positioning and target designation with its
NVG-compatible laser pointer capability in all conditions. Its unique
multiple and simultaneous fields-of-view enable wide area surveillance in
the visible, near-infrared, short wave infrared and long wave infrared
regions. It also permits a high resolution, narrow field-of-view for more
precise target identification and designation with point-to-grid
coordinates, when coupled with outputs from a digital compass,
inclinometer and GPS receiver.

[0864] In one embodiment of the multi-coincident view camera, there may
be separate, steerable, coincident fields of view, such as 30°,
10° and 1°, with automated tracking of a POI or multiple
POIs, face and iris recognition, onboard matching, and wireless
communication over 256-bit AES encrypted UWB with a laptop, combat radio,
or other networked or mesh-networked device. The camera may network to
CPs, TOCs and biometric databases and may include a 3-axis,
gyro-stabilized, high dynamic range, high resolution sensor to deliver
the ability to see in conditions from a glaring sun to extremely low
light. IDs may be made immediately and stored and analyzed locally or in
remote storage. The camera may feature "look and locate" accurate
geo-location of POIs and threats to >1,000 m distance; an integrated
1550 nm, eye-safe laser range finder; networked GPS; a 3-axis gyro,
3-axis magnetometer, accelerometer and inclinometer; electronic image
enhancement and augmenting electronic stabilization that aid in tracking;
recording of full-motion (30 fps) color video; ABIS, EBTS, EFTS and JPEG
2000 compatibility; and compliance with MIL-STD 810 for operation in
environmental extremes. The camera may be mounted via a gimbaled ball
system that integrates mobile uncooperative biometric collection and
identification for a stand-off biometric capture solution as well as
laser range-finding and POI geo-location, such as at chokepoints,
checkpoints, and facilities. Multi-modal biometric recognition includes
collecting and identifying faces and irises and recording video, gait and
other distinguishing marks or movements. The camera may include the
capability to geo-location tag all POIs and collected data with time,
date and location. The camera facilitates rapid dissemination of
situational awareness to network-enabled units, CPs and TOCs.

[0865] In another embodiment of the multi-coincident view camera, the
camera features 3 separate, color VGA SWIR electro-optic modules that
provide coincident 20°, 7.5° and 2.5° fields of
view and 1 LWIR thermal electro-optic module for broad area to pinpoint
imaging of POIs and targets in an ultra-compact configuration. The
3-axis, gyro-stabilized, high dynamic range, color VGA SWIR cameras
deliver the ability to see in conditions from a glaring sun to extremely
low light as well as through fog, smoke and haze, with no "blooming."
Geo-location is obtained by integration of Micro-Electro-Mechanical
System (MEMS) 3-axis gyroscopes and 3-axis accelerometers which augment
the GPS receiver and magnetometer data. An integrated 1840 nm, eye-safe
laser range finder and target designator, GPS receiver and IMU provide
"look and locate", accurate geo-location of POIs and threats, to a 3 km
distance. The camera displays and stores full-motion (30 fps) color video
in its "camcorder on chip", and stores it on solid state, removable
drives, for remote access during flight or for post-op review. Electronic
image enhancement and augmenting electronic stabilization aid in
tracking, geo-location, range-finding and designation of POIs and
targets. Thus, the eyepiece 100 delivers unimpeded "sight" of the threat
by displaying the feed from the multi-coincident view camera. In certain
embodiments of the eyepiece 100, the eyepiece 100 may also provide an
unimpeded view of the soldier's own weapon with a "see through", flip
up/down, electro-optic display mechanism showing sensor imagery, moving
maps, and data. In one embodiment, the flip up/down, electro-optic
display mechanism may snap into any standard, MICH or PRO-TECH helmet's
NVG mount.

[0867] Referring to FIG. 78, a flight eye is depicted. The feed from the
flight eye may be transmitted to the eyepiece 100 or any other suitable
display device. The flight eye may include multiple individual SWIR
sensors mounted in a folded imager array with multiple FOVs. The flight
eye is a low profile, surveillance and target designating system that
enables a continuous image of a whole battlefield in a single flyover,
with each sensor at VGA to SXGA resolution, day or night, through fog,
smoke and haze. Its modular design allows selective, fixed resolution
changes in any element from 1° to 30° for telephoto to wide
angle imaging in any area of the array. Each SWIR imager's resolution is
1280×1024 and sensitive from 380-1600 nm. A multi-DSP array board
"stiches" all the imagery together and auto-subtracts the overlapping
pixels for a seamless image. A coincident 1064 nm laser designator and
rangefinder 7802 can be mounted coincident with any imager, without
blocking its FOV.

[0868] Referring to FIG. 106, the eyepiece 100 may operate in conjunction
with software internal applications 7214 for the eyepiece that may be
developed in association with an eyepiece application development
environment 10604, where the eyepiece 100 may include a projection
facility suitable to project an image onto a see-through or translucent
lens, enabling the wearer of the eyepiece to view the surrounding
environment as well as the displayed image as provided through the
software internal application 7214. A processor, which may include a
memory and an operating system (OS) 10624, may host the software internal
application 7214, control interfaces between eyepiece command & control
and the software application, control the projection facility, and the
like.

[0869] In embodiments, the eyepiece 100 may include an operating system
10624 running on a multimedia computing facility 7212 that hosts a
software internal application 7214, wherein the internal application 7214
may be a software application that has been developed by a third party
7242 and provided for download to the eyepiece 100, such as from an app
store 10602, a 3D AR eyepiece app store 10610, from third party networked
application servers 10612, and the like. The internal application 7214
may interact with the processes of the eyepiece control process facility
10634, such as in conjunction with an API 10608, through input devices
7204, external devices 7240, external computing facilities 7232, command
and control 10630 facilities of the eyepiece, and the like. Internal
applications 7214 may be made available to the eyepiece 100 through a
network communications connection 10622, such as the Internet, a local
area network (LAN), a mesh network with other eyepieces or mobile
devices, a satellite communications link, a cellular network, and the
like. Internal applications 7214 may be purchased through an applications
store, such as the app store 10602, 3D AR eyepiece app store 10610, and
the like. Internal applications 7214 may be provided through the 3D AR
eyepiece store 10610, such as software internal applications 7214
specifically developed for the eyepiece 100.
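
By way of a non-limiting illustration, the following sketch shows how a
downloaded internal application might register a handler with the
eyepiece command and control facilities; the class and method names used
here (EyepieceAPI, on_command, display_text) are hypothetical stand-ins
for illustration and are not the actual API 10608.

    # Illustrative sketch: a minimal internal application registering a
    # command handler with a hypothetical eyepiece interface.
    class EyepieceAPI:
        """Stand-in for the command/control and projection interfaces."""
        def __init__(self):
            self._handlers = {}
        def on_command(self, name, handler):
            self._handlers[name] = handler
        def display_text(self, text, eye="both"):
            print(f"[{eye}] {text}")
        def dispatch(self, name, *args):
            if name in self._handlers:
                self._handlers[name](*args)

    def start_app(api):
        """Entry point of a minimal internal application."""
        api.on_command("show_greeting", lambda: api.display_text("Hello, wearer"))

    api = EyepieceAPI()
    start_app(api)
    api.dispatch("show_greeting")   # e.g. triggered by a head-nod control input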

[0870] An eyepiece applications development environment 10604 may be
available for software developers to create new eyepiece applications
(e.g. 3D applications), modify base applications to create new 3D
application versions of the base applications, and the like. The eyepiece
application development environment 10604 may include a 3D application
environment that is adapted to provide a developer with access to control
schemes, UI parameters and other specifications available on the eyepiece
once the finished application is loaded on, or otherwise made functional
for, the eyepiece. The eyepiece may include an API 10608 that is designed
to facilitate communications between the finished application and the
eyepiece computing systems. The application developer, within the
developer's development environment, may then focus on developing an
application with certain functionality without concerning themselves with
the particulars of how to interact with the eyepiece hardware. The API
may also make it more straightforward for a developer to modify an
existing application to create a 3D application for use on the eyepiece
100. In embodiments, an internal application 7214 may utilize networked
servers 10612 for client-server configurations, hybrid client-server
configurations (e.g. running the internal application 7214 in part
locally on the eyepiece 100 and in part on the application servers
10612), hosting the application completely on the server, downloading the
application from the server, and the like. Network data storage 10614 may
be provided in association with the internal application 7214, such as in
further association with application servers 10612, purchased
applications, and the like. In embodiments, internal applications 7214
may interact with a sponsor facility 10614, markets 10620, and the like,
such as to provide sponsored advertisements in conjunction with the
execution of the internal application 7214, to provide marketplace
content to the user of the eyepiece 100, and the like.

[0871] Referring to FIG. 107, the eyepiece application development
environment 10604 may be used for the development of applications that
may be presented to the app store 10602, the 3D AR eyepiece app store
10610, and the like. The eyepiece application development environment
10604 may include a user interface 10702, access to control schemes
10704, and the like. For instance, a developer may utilize menus and
dialog boxes within the user interface for accessing control schemes
10704 for selection so the application developer may choose a scheme. The
developer may be able to select a template scheme that generally operates
the application, but may also have individual controls that may be
selected for various functions that may override the template function
scheme at a point in the application execution. The developer may also be
able to utilize the user interface 10702 to develop applications with
control schemes having a field of view (FOV) control, such as through a
FOV interface. The FOV interface may provide a way to move between a FOV
that shows both displays (one for each eye) and a single display. In
embodiments, 3D applications for the eyepiece may be designed within the
single display view because the API 10608 will provide the translation
that determines which display is to be used for which content, although
developers may be able to select a specific eye display for certain
content. In embodiments, developers may be able to manually select and/or
see what is going to be displayed in each eye, such as through the user
interface 10702.

[0872] The eyepiece may have a software stack 10800 as described in FIG.
108. The software stack 10800 may have a head-mounted hardware and
software platform layer 10818, an interface-API-wrapper to platform layer
10814, libraries for development 10812 layer, an applications layer
10801, and the like. The applications layer 10801 may in turn include
consumer applications 10802, enterprise applications 10804, industrial
applications 10808, and other like applications 10810. In addition,
hardware 10820 associated with the execution or development of internal
applications 7214 may also be incorporated into the software stack 10800.

[0873] In embodiments, the user experience may be optimized by ensuring
that the augmented images are in focus with respect to the surrounding
environment and that the displays are set at the appropriate brightness
given the ambient light and the content being displayed.

[0874] In an embodiment, the eyepiece optical assembly may include an
electrooptic module, also known as a display, for each eye that delivers
content in a stereoscopic manner. In certain cases, a stereoscopic view
is not
desired. In embodiments, for certain content, only one display may be
turned on or only one electrooptic module may be included in the optical
assembly. In other embodiments, the brightness of each display may be
varied so that the brain ignores the dimmer display. An auto-brightness
control of the image source may control the brightness of the displayed
content based on the brightness in the environment. The rate of
brightness change may depend on the change in the environment. The rate
of brightness change may be matched to the adaptation of the eye. The
display content may be turned off for a period following a sudden change
in environment brightness. The display content may be dimmed with a
darkening of the environment. The display content may get brighter with a
brightening of the environment.
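
By way of a non-limiting illustration of such auto-brightness control,
the following sketch derives a brightness target from an ambient light
reading and limits the rate of change, with the slew-rate limit standing
in for matching the adaptation of the eye; the lux-to-brightness mapping
and thresholds are assumptions.

    # Illustrative sketch: move display brightness (0-1) toward a target
    # derived from ambient light, with a rate limit and optional blanking
    # after a sudden change in environment brightness.
    def update_brightness(current, ambient_lux, dt_s,
                          max_change_per_s=0.1,
                          blank_threshold_lux_per_s=None,
                          ambient_rate=0.0):
        target = min(1.0, max(0.05, ambient_lux / 10000.0))  # crude mapping
        if blank_threshold_lux_per_s and abs(ambient_rate) > blank_threshold_lux_per_s:
            return 0.0        # turn content off briefly after a sudden change
        step = max(-max_change_per_s * dt_s,
                   min(max_change_per_s * dt_s, target - current))
        return current + step

    b = 0.5
    b = update_brightness(b, ambient_lux=200.0, dt_s=0.1)  # dims gradually at dusk
    print(round(b, 3))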

[0875] The methods and systems described herein may be deployed in part or
in whole through a machine that executes computer software, program
codes, and/or instructions on a processor. The processor may be part of a
server, a cloud server, client, network infrastructure, mobile computing
platform, stationary computing platform, or other computing platform. A
processor may be any kind of computational or processing device capable
of executing program instructions, codes, binary instructions and the
like. The processor may be or include a signal processor, digital
processor, embedded processor, microprocessor or any variant such as a
co-processor (math co-processor, graphic co-processor, communication
co-processor and the like) and the like that may directly or indirectly
facilitate execution of program code or program instructions stored
thereon. In addition, the processor may enable execution of multiple
programs, threads, and codes. The threads may be executed simultaneously
to enhance the performance of the processor and to facilitate
simultaneous operations of the application. By way of implementation,
methods, program codes, program instructions and the like described
herein may be implemented in one or more threads. A thread may spawn
other threads that may have assigned priorities associated with them; the
processor may execute these threads based on priority or any other order
based on instructions provided in the program code. The processor may
include memory that stores methods, codes, instructions and programs as
described herein and elsewhere. The processor may access a storage medium
through an interface that may store methods, codes, and instructions as
described herein and elsewhere. The storage medium associated with the
processor for storing methods, programs, codes, program instructions or
other type of instructions capable of being executed by the computing or
processing device may include but may not be limited to one or more of a
CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the
like.

[0876] A processor may include one or more cores that may enhance speed
and performance of a multiprocessor. In embodiments, the processor may be
a dual core processor, a quad core processor, another chip-level
multiprocessor, or the like that combines two or more independent cores
on a single chip (called a die).

[0877] The methods and systems described herein may be deployed in part or
in whole through a machine that executes computer software on a server,
client, firewall, gateway, hub, router, or other such computer and/or
networking hardware. The software program may be associated with a server
that may include a file server, print server, domain server, internet
server, intranet server and other variants such as secondary server, host
server, distributed server and the like. The server may include one or
more of memories, processors, computer readable media, storage media,
ports (physical and virtual), communication devices, and interfaces
capable of accessing other servers, clients, machines, and devices
through a wired or a wireless medium, and the like. The methods, programs
or codes as described herein and elsewhere may be executed by the server.
In addition, other devices required for execution of methods as described
in this application may be considered as a part of the infrastructure
associated with the server.

[0878] The server may provide an interface to other devices including,
without limitation, clients, other servers, printers, database servers,
print servers, file servers, communication servers, distributed servers,
social networks, and the like. Additionally, this coupling and/or
connection may facilitate remote execution of programs across the
network. The networking of some or all of these devices may facilitate
parallel processing of a program or method at one or more locations. In
addition,
any of the devices attached to the server through an interface may
include at least one storage medium capable of storing methods, programs,
code and/or instructions. A central repository may provide program
instructions to be executed on different devices. In this implementation,
the remote repository may act as a storage medium for program code,
instructions, and programs.

[0879] The software program may be associated with a client that may
include a file client, print client, domain client, internet client,
intranet client and other variants such as secondary client, host client,
distributed client and the like. The client may include one or more of
memories, processors, computer readable media, storage media, ports
(physical and virtual), communication devices, and interfaces capable of
accessing other clients, servers, machines, and devices through a wired
or a wireless medium, and the like. The methods, programs or codes as
described herein and elsewhere may be executed by the client. In
addition, other devices required for execution of methods as described in
this application may be considered as a part of the infrastructure
associated with the client.

[0880] The client may provide an interface to other devices including,
without limitation, servers, other clients, printers, database servers,
print servers, file servers, communication servers, distributed servers
and the like. Additionally, this coupling and/or connection may
facilitate remote execution of programs across the network. The
networking of some or all of these devices may facilitate parallel
processing of a program or method at one or more locations. In addition,
any of the
devices attached to the client through an interface may include at least
one storage medium capable of storing methods, programs, applications,
code and/or instructions. A central repository may provide program
instructions to be executed on different devices. In this implementation,
the remote repository may act as a storage medium for program code,
instructions, and programs.

[0881] The methods and systems described herein may be deployed in part or
in whole through network infrastructures. The network infrastructure may
include elements such as computing devices, servers, routers, hubs,
firewalls, clients, personal computers, communication devices, routing
devices and other active and passive devices, modules and/or components
as known in the art. The computing and/or non-computing device(s)
associated with the network infrastructure may include, apart from other
components, a storage medium such as flash memory, buffer, stack, RAM,
ROM and the like. The processes, methods, program codes, instructions
described herein and elsewhere may be executed by one or more of the
network infrastructural elements.

[0882] The methods, program codes, and instructions described herein and
elsewhere may be implemented on a cellular network having multiple cells.
The cellular network may be either a frequency division multiple access
(FDMA) network or a code division multiple access (CDMA) network. The
cellular network may include mobile devices, cell sites, base stations,
repeaters, antennas, towers, and the like. The cell network may be a GSM,
GPRS, 3G, EVDO, mesh, or other network type.

[0883] The methods, program codes, and instructions described herein and
elsewhere may be implemented on or through mobile devices. The mobile
devices may include navigation devices, cell phones, mobile phones,
mobile personal digital assistants, laptops, palmtops, netbooks, pagers,
electronic book readers, music players and the like. These devices may
include, apart from other components, a storage medium such as a flash
memory, buffer, RAM, ROM and one or more computing devices. The computing
devices associated with mobile devices may be enabled to execute program
codes, methods, and instructions stored thereon. Alternatively, the
mobile devices may be configured to execute instructions in collaboration
with other devices. The mobile devices may communicate with base stations
interfaced with servers and configured to execute program codes. The
mobile devices may communicate on a peer-to-peer network, mesh network,
or other communications network. The program code may be stored on the
storage medium associated with the server and executed by a computing
device embedded within the server. The base station may include a
computing device and a storage medium. The storage device may store
program codes and instructions executed by the computing devices
associated with the base station.

[0885] The methods and systems described herein may transform physical
and/or intangible items from one state to another. The methods and
systems described herein may also transform data representing physical
and/or intangible items from one state to another.

[0886] The elements described and depicted herein, including in flow
charts and block diagrams throughout the figures, imply logical
boundaries between the elements. However, according to software or
hardware engineering practices, the depicted elements and the functions
thereof may be implemented on machines through computer executable media
having a processor capable of executing program instructions stored
thereon as a monolithic software structure, as standalone software
modules, or as modules that employ external routines, code, services, and
so forth, or any combination of these, and all such implementations may
be within the scope of the present disclosure. Examples of such machines
may include, but may not be limited to, personal digital assistants,
laptops, personal computers, mobile phones, other handheld computing
devices, medical equipment, wired or wireless communication devices,
transducers, chips, calculators, satellites, tablet PCs, electronic
books, gadgets, electronic devices, devices having artificial
intelligence, computing devices, networking equipment, servers, routers,
processor-embedded eyewear and the like. Furthermore, the elements
depicted in the flow chart and block diagrams or any other logical
component may be implemented on a machine capable of executing program
instructions. Thus, while the foregoing drawings and descriptions set
forth functional aspects of the disclosed systems, no particular
arrangement of software for implementing these functional aspects should
be inferred from these descriptions unless explicitly stated or otherwise
clear from the context. Similarly, it will be appreciated that the
various steps identified and described above may be varied, and that the
order of steps may be adapted to particular applications of the
techniques disclosed herein. All such variations and modifications are
intended to fall within the scope of this disclosure. As such, the
depiction and/or description of an order for various steps should not be
understood to require a particular order of execution for those steps,
unless required by a particular application, or explicitly stated or
otherwise clear from the context.

[0887] The methods and/or processes described above, and steps thereof,
may be realized in hardware, software or any combination of hardware and
software suitable for a particular application. The hardware may include
a general purpose computer and/or dedicated computing device or specific
computing device or particular aspect or component of a specific
computing device. The processes may be realized in one or more
microprocessors, microcontrollers, embedded microcontrollers,
programmable digital signal processors or other programmable device,
along with internal and/or external memory. The processes may also, or
instead, be embodied in an application specific integrated circuit, a
programmable gate array, programmable array logic, or any other device or
combination of devices that may be configured to process electronic
signals. It will further be appreciated that one or more of the processes
may be realized as a computer executable code capable of being executed
on a machine readable medium.

[0888] The computer executable code may be created using a structured
programming language such as C, an object oriented programming language
such as C++, or any other high-level or low-level programming language
(including assembly languages, hardware description languages, and
database programming languages and technologies) that may be stored,
compiled or interpreted to run on one of the above devices, as well as
heterogeneous combinations of processors, processor architectures, or
combinations of different hardware and software, or any other machine
capable of executing program instructions.

[0889] Thus, in one aspect, each method described above and combinations
thereof may be embodied in computer executable code that, when executing
on one or more computing devices, performs the steps thereof. In another
aspect, the methods may be embodied in systems that perform the steps
thereof, and may be distributed across devices in a number of ways, or
all of the functionality may be integrated into a dedicated, standalone
device or other hardware. In another aspect, the means for performing the
steps associated with the processes described above may include any of
the hardware and/or software described above. All such permutations and
combinations are intended to fall within the scope of the present
disclosure.

[0890] In embodiments, the augmented reality (AR) eyepiece of the present
invention, e.g., AR eyepiece 100 in FIG. 1, is adapted to determine
and/or compensate for the vergence of the user's eyes. Vergence is the
simultaneous rotation of the user's eyes around a vertical axis to move
their respective optical axes in opposite directions to obtain or
maintain binocular vision. When a person looks at a closer object, the
person's eyes move their respective optical axes inwardly toward the
nose, a composite motion that is known as convergence. To look at a
farther object, the person's eyes move their respective optical axes
outwardly away from the nose, a composite motion that is known as
divergence. The person's eyes diverge until their respective optical axes
are essentially parallel to each other when the person is fixating on a
point at infinity or very far away. Vergence works in conjunction with
eye accommodation to permit a person to maintain a clear image of an
object as the object moves relative to the person. Vergence compensation
becomes important in situations where a virtual image, i.e., an AR image,
such as a label or other information, is to be placed near to or overlap
a real image or when a virtual image of an object is to be superimposed
upon a real image of the object in order to make the placement of the
virtual image correct with respect to the real image. Methods of the
present invention for vergence compensation and/or determination are
described in the following paragraphs [00829] through [00850] and are
collectively referred to as vergence methods.

[0891] The vergence methods may include a determination of the distance
of an object of interest from the user of the AR eyepiece and subsequent
use of that distance to determine the vergence angle, i.e., the angle
formed by the intersection of the optical axes of the user's eyes as they
look at the object. The vergence angle is then used to determine the
correct placement of the AR image with respect to the object, which may
be in front of, behind, or matched to the object. For example, in a first
set of vergence method embodiments, a single autofocus digital camera
having an output signal is mounted in the AR eyepiece at some convenient
location, e.g., in the bridge section or near one of the temples. The
output of the camera is provided to a microprocessor within the AR
eyepiece and/or transmitted to a remote processor. In either case, the
signals relating to its autofocus capabilities are used to determine the
distance to objects which the user may see when the user is looking
straight ahead. This distance, along with the interpupillary distance of
the user's eyes, is used to determine the vergence and the correct
placement of a virtual image, e.g., a label, which may be desired for
those objects. The distance and/or the vergence angle may also be used to
determine the level of focus the virtual object is to have in order to be
properly observable by the user. Optionally, additional information about
that particular user's vergence characteristics may be input and stored
in memory associated with the microprocessor and used to adjust the
determination of the vergence.
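
By way of a non-limiting worked illustration of this geometry, the
following sketch computes the vergence angle for a fixation point
straight ahead of the user from the measured distance and the user's
interpupillary distance.

    # Illustrative sketch: full vergence angle (degrees) when both eyes
    # fixate a point straight ahead at the given distance.
    import math

    def vergence_angle_deg(distance_m, ipd_m=0.063):
        return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

    # About 7.2 deg at 0.5 m, 1.8 deg at 2 m, near 0 deg far away.
    for d in (0.5, 2.0, 100.0):
        print(d, round(vergence_angle_deg(d), 2))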

[0892] In a second set of vergence method embodiments, an electronic range
finder that is independent of a camera is incorporated into the AR
eyepiece at some convenient location, e.g., in the bridge section or near
one of the temples. In these embodiments, the output of the electronic
range finder is used in the same manner as was the output of the
autofocus camera described with regard to the first set of vergence
method embodiments.

[0893] In a third set of vergence method embodiments, the AR eyepiece
includes a plurality of range finding devices which may be autofocus
cameras and/or electronic range finders. All of the plurality of devices
may be aligned so as to determine the distance of objects in the same
direction or one or more of the devices may be aligned differently from
the other devices so that information about the distance to a variety of
objects is obtainable. The outputs from one or more of the devices are
input and analyzed in the same manner as was the output of the autofocus
camera described with regard to the first set of vergence methods.

[0894] In a fourth set of vergence method embodiments, one or more range
finding devices are employed in the manner discussed above. Additionally,
the AR eyepiece includes one or more eye-tracking devices which are
configured to track the movement and/or viewing direction of one or both
of the user's eyes. The output of the eye-tracking devices is provided to
a microprocessor within the AR eyepiece or may be transmitted to a remote
processor. This output is used to determine the direction the user is
viewing, and, when eye-tracking information from both eyes is available,
to determine the vergence of the user's eyes. This direction and, if
available, vergence information is then used alone or in conjunction with
the vergence information determined from the range finding devices to
determine placement, and optionally, the level of focus, of one or more
virtual images related to one or more objects which the user may be
viewing.
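
By way of a non-limiting illustration of such fusion, the following
sketch combines a vergence estimate derived from the two eyes' tracked
gaze directions with a vergence estimate derived from a range-finder
distance; the sign convention and weights are assumptions standing in for
device-specific confidence values.

    # Illustrative sketch: fuse eye-tracking vergence with range-derived
    # vergence by a weighted average.
    import math

    def vergence_from_gaze(left_yaw_deg, right_yaw_deg):
        """Convergence of the two gaze directions (positive when the eyes
        rotate inward toward the nose, per the assumed sign convention)."""
        return left_yaw_deg - right_yaw_deg

    def vergence_from_range(distance_m, ipd_m=0.063):
        return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

    def fused_vergence(gaze_deg, range_deg, w_gaze=0.6, w_range=0.4):
        return w_gaze * gaze_deg + w_range * range_deg

    g = vergence_from_gaze(2.1, -1.9)   # eyes rotated inward by about 2 deg each
    r = vergence_from_range(0.9)        # object ranged at 0.9 m
    print(round(fused_vergence(g, r), 2))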

[0895] In a fifth set of vergence methods, one or more range finding
devices are directed away from the direction that is straight ahead of the
user of the AR eyepiece. Distances to objects detected by the range
finder device are used to display virtual images of the object in the
manner described above. Although the user may or may not be aware of the
virtual images when he is looking straight ahead, the user will be aware
of the virtual images when the user looks in the direction of the objects
to which they are related.

[0896] A calibration sequence may be used with any of the vergence method
embodiments. The calibration sequence may employ steps of mechanical
calibration nature, of an electronic calibration nature, or both. During
the calibration sequence, the interpupillary distance of the user may be
determined. Also, the user may be requested to look at a series of real
or virtual objects having a range of real or virtual distances, e.g., from
near to far, and the vergence of the eyes is measured either mechanically
or electronically or both. The information from this calibration sequence
may then be employed in the determinations of vergence, focusing, and/or
virtual image placement when the AR eyepieces are in use. The calibration
sequence is preferably employed when a user first puts on the AR
eyepiece, but may be employed anytime the user believes that a
recalibration would be helpful. Information correlating the user to that
obtained during a calibration sequence may be stored for use whenever
that particular user identifies himself to the AR eyepiece as its user,
e.g., using any of the techniques described in this document.
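
By way of a non-limiting illustration of such a calibration sequence, the
following sketch fits a simple gain-and-offset correction between the
vergence predicted from target distance and the vergence actually
measured for a particular user; the fitting method and sample values are
illustrative only.

    # Illustrative sketch: ordinary least-squares fit of
    # measured = gain * predicted + offset over calibration targets.
    def fit_correction(predicted, measured):
        n = len(predicted)
        mp = sum(predicted) / n
        mm = sum(measured) / n
        cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
        var = sum((p - mp) ** 2 for p in predicted)
        gain = cov / var
        offset = mm - gain * mp
        return gain, offset

    predicted = [7.2, 3.6, 1.8, 0.4]  # vergence (deg) expected from distance and IPD
    measured = [6.8, 3.5, 1.9, 0.6]   # vergence observed during calibration
    gain, offset = fit_correction(predicted, measured)
    print(round(gain, 3), round(offset, 3))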

[0897] It is to be noted that some range finding devices use range
determining methodologies in which information received from a device's
sensors is mapped upon a space-representing rectilinear or
non-rectilinear grid. The information from the various sectors of the
grid is inter-compared to determine the range distance. In the vergence
method embodiments, the raw sensor information, the mapping information,
the calculated distance, or any combination of these may be used in the
determination of the placement and/or focus of the virtual image or
images.

[0898] It is to be understood that the vergence method embodiments include
the placement of a virtual image for one of the user's eyes or for both
of the user's eyes. In some embodiments, one virtual image is provided to
the user's left eye and a different virtual image is provided to the
user's right eye. In cases where multiple images are placed before the
user, whether or not the images are the same or different, the placement
may be simultaneous, at different times, or interlaced in time, e.g., the
images are shown at a predetermined flicker rate or rates (e.g., 30, 60,
and/or 180 Hz) with the image for the left eye being present when the
image for the right eye is not and vice versa. In some embodiments, a
virtual image is shown only to the person's dominant eye and in others a
virtual image is shown only to the person's non-dominant eye. In some
embodiments which employ images which are interlaced in time, virtual
images of various objects which are located at various distances from the
user are displayed in the manner described above; when the user looks
from the real image of one object to the real image of another object,
only the virtual image corresponding to the real image of the object
being viewed will be seen by the user's brain.
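
By way of a non-limiting illustration of such time interlacing, the
following sketch generates an alternating left/right presentation
schedule at a chosen flicker rate so that the left-eye image is present
only when the right-eye image is not, and vice versa.

    # Illustrative sketch: alternate left- and right-eye frames at a
    # chosen flicker rate for a given duration.
    def frame_schedule(flicker_hz, duration_s):
        """Yield (time_s, eye) pairs alternating left/right."""
        period = 1.0 / flicker_hz
        n_frames = int(duration_s * flicker_hz)
        for i in range(n_frames):
            yield (round(i * period, 4), "left" if i % 2 == 0 else "right")

    for t, eye in frame_schedule(60, 0.1):  # 60 Hz flicker for 100 ms
        print(t, eye)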

[0899] In embodiments, the invention provides methods for providing a
depth cue with augmented reality virtual objects or virtual information
that can convey a wide range of perceived depth to a broad range of
individuals with different eye characteristics. These depth cue method
embodiments of the present invention use differences in the lateral
positioning or disparity of the augmented reality images provided to the
two eyes of the individual to provide differences in the vergence of the
virtual objects or virtual information that convey a sense of depth. One
advantage of these methods is that the lateral shifting of the augmented
reality images can be different for different portions of the augmented
reality images so that the perceived depth is different for those
portions. In addition, the lateral shifting can be done through image
processing of the portions of the augmented reality images. The user can
experience a full range of perceived depth through this method from as
near as the individual can focus to infinity regardless of the
individual's age.
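
By way of a non-limiting illustration of this disparity-based depth cue,
the following sketch converts a desired perceived depth into a horizontal
pixel shift between the left- and right-eye copies of a virtual object;
the pixel density and virtual image distance of the display are
assumptions for illustration.

    # Illustrative sketch: horizontal shift (pixels) between left and
    # right images for a desired perceived depth. Zero disparity places
    # the object at the display's virtual image distance; positive values
    # pull it nearer, negative values push it farther.
    import math

    def disparity_px(perceived_depth_m, ipd_m=0.063,
                     virtual_image_dist_m=2.0, px_per_deg=40.0):
        angle_at_depth = 2.0 * math.atan((ipd_m / 2.0) / perceived_depth_m)
        angle_at_screen = 2.0 * math.atan((ipd_m / 2.0) / virtual_image_dist_m)
        return math.degrees(angle_at_depth - angle_at_screen) * px_per_deg

    for depth in (0.5, 2.0, 10.0):
        print(depth, round(disparity_px(depth), 1))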

[0900] In order to better understand these depth cue method embodiments of
the present invention, it is useful to keep in mind that in some aspects
of augmented reality, head mounted displays are used to add images of
virtual objects or virtual information that are associated with the view
of a scene as seen by a user. To add additional effects to the perception
of the augmented reality, it is useful to place the virtual objects or
virtual
information at a perceived depth in the scene. As an example, a virtual
label can be placed onto an object in a scene such as the name of a
building. The perceived association of the virtual label with the
building is enhanced if the label and the building are perceived by the
user to be at the same depth in the scene. Head mounted displays with
see-through capabilities are well suited to providing augmented reality
information such as labels and objects because they provide the user with
a clear view of the environment. However, for the augmented reality
information to be of value, it must be easily associated with the objects
in the environment and as such the positioning of the augmented reality
information relative to the objects in the see-through view is important.
While horizontal and vertical positioning of augmented reality
information is relatively straightforward if the head mounted display
has a camera that can be calibrated to the see-through view, the depth
positioning is more complicated. U.S. Pat. No. 6,690,393 describes a
method for positioning 2D labels in a 3D virtual world. However, this
method is not directed at displays with a see-through view where the
majority of the image the user sees is not provided digitally and as such
the 3D location of objects is not known. U.S. Pat. No. 7,907,166
describes a robotic surgical system using a stereo viewer in which
telestration graphics are overlaid onto stereo images of an operating
site. However, similar to the method described in U.S. Pat. No.
6,690,393, this system uses captured images which are then manipulated to
add graphics and as such does not address the unique situation with
see-through displays wherein the majority of the image is not provided
digitally and the relative locations of objects that the user sees are
not known. Another prior art method for augmented reality is to adjust
the focus of the virtual objects or virtual information so that the user
perceives differences in focus depth that provide a depth cue to the
user. As the user has to refocus his/her eyes to look at objects in the
scene and to look at the virtual objects or virtual information, the user
perceives an associated depth. However, the range of depth that can be
associated with focus is limited by the accommodation that the user's
eyes are capable of. This accommodation can be limited in some
individuals, particularly if the individual is older, as eyes lose much
of their accommodation range. In addition, the accommodation range is
different depending on whether the user is near sighted or far sighted.
These factors make the result of using focus cues unreliable for a large
population of users with different ages and different eye
characteristics. Therefore, the need persists beyond what is available in
the prior art for a widely useable method for associating depth
information with augmented reality.

[0901] Some of the depth cue method embodiments of the present invention
are described in this and the following paragraphs [0XXX] through [0XYZ].
Head mounted displays with see-through capabilities provide a clear view
of the scene in front of the user while also providing the ability to
display an image, where the user sees a combined image comprised of the
see-through view with the display image overlaid. The methods entail
displaying 3D labels and other 3D information using the
see-through display to aid the user in interpreting the environment
surrounding the user. A stereo pair of images of the 3D labels and other
3D information may be presented to the left and right eyes of the user to
position the 3D labels and other 3D information at different depths in
the scene as perceived by the user. In this way the 3D labels and other
3D information can be more easily associated with the see-through view
and the surrounding environment.

[0902] FIG. 109 is an illustration of a head mounted display device 109100
with see-through capabilities and is a special version of augmented
reality eyepiece 100 shown in FIG. 1 and described throughout this
document. The head mounted display device 109100 includes see-through
displays 109110, stereo cameras 109120, electronics 109130, and range
finder 109140. The electronics can include one or more of
the following: a processor, a battery, a global positioning sensor (GPS),
a direction sensor, data storage, a wireless communication system and a
user interface.

[0903] FIG. 110 is an illustration of the scene in front of the user as
seen by the user in the see-through view. A number of objects at
different depths in the scene are shown for discussion. In FIG. 111
several of the objects in the scene have been identified and labeled.
However, the labels are presented in two dimensional (2D) fashion either
by presenting the labels only to one eye of the user or by presenting the
labels at the same positions in the image to each eye so the labels are
coincident when viewed simultaneously. This type of labeling makes it
more difficult to associate the labels with the objects particularly when
there are foreground and background objects as the labels appear to be
all located at the same perceived depth.

[0904] To make it easier to associate labels or other information with the
desired objects or aspects of the environment, it is advantageous to
present the labels or other information as three dimensional (3D) labels
or other 3D information so that the information is perceived by the user
to be at different depths. This may be done by presenting 3D labels or
other 3D information in overlaid images to the two eyes of the user with
a lateral shift in position between the images that are overlaid onto the
see-through image so that the overlaid images have a perceived depth.
This lateral shifting between images is also known as disparity to those
skilled in stereo imaging and it causes the user to change the relative
pointing of his/her eyes to align the images visually and this induces a
perception of depth. The images with disparity are images of the 3D
labels or other 3D information that are overlaid onto the see-through
view of the scene seen by the user. When 3D labels are provided with a
large disparity, the user must align the optical axes of his/her eyes
somewhat to bring the labels in the stereo images into alignment, which
gives a
perception of the labels being located close to the user. The 3D labels
that have a small disparity (or no disparity) can be visually aligned
with the user's eyes looking straight ahead and this gives the perception
of the 3D labels being located at a distance.
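
As a minimal sketch of this lateral shifting (an illustration, not part of
the original disclosure), the following Python fragment offsets a label's
horizontal pixel position in the left- and right-eye overlay images by half
of a chosen disparity; the coordinate convention is an assumption made for
the example.

    # Sketch: place a label in left/right overlay images with a chosen pixel disparity.
    # Convention (assumed): larger disparity -> label perceived closer to the user.

    def label_positions(x_center, y, disparity_px):
        """Return (left_eye_xy, right_eye_xy) for a label centered at x_center with the given disparity."""
        half = disparity_px / 2.0
        left_xy = (x_center + half, y)   # shifted right in the left-eye image
        right_xy = (x_center - half, y)  # shifted left in the right-eye image
        return left_xy, right_xy

    # A distant label gets (near-)zero disparity; a close label gets a large disparity.
    print(label_positions(640, 300, disparity_px=0))   # background label, coincident positions
    print(label_positions(400, 500, disparity_px=40))  # foreground label, large disparity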

[0905] FIGS. 112 and 113 illustrate a stereo image pair for 3D labels to
be applied to the see-through view shown in FIG. 110. FIG. 112 is an
image of the 3D labels shown to the user's left eye, while FIG. 113 is an
image of the 3D labels shown to the user's right eye. Together, FIG. 112
and FIG. 113 provide a stereo pair of images. In this stereo pair, the
lateral positioning of the 3D labels is different between images shown in
FIG. 112 and FIG. 113. FIG. 114 provides an overlaid image of FIG. 112
and FIG. 113. For added clarity in FIG. 114, the 3D labels from FIG. 113
have been shown in grey while the 3D labels from FIG. 112 are shown in
black. In the foreground of FIG. 114, the 3D labels from FIG. 113 are
positioned to the left of the 3D labels from FIG. 112 with a relatively
large disparity. In the background of FIG. 114, the 3D labels from FIG.
113 are coincident with and positioned on top of the 3D labels from FIG.
112 with no disparity. In the mid-ground region shown in FIG. 114, the 3D
labels from FIG. 112 and FIG. 113 have a medium disparity. This relative
disparity of the 3D labels as presented to the left and right eyes
corresponds to the depth perceived by the user. Selecting a depth for the
3D labels that is coincident with the depth of the object in the scene
that the 3D label is associated with makes it easy for the user to
understand the connection between the 3D label and the object or other
aspect of the environment that the user sees in the see-through view.
FIG. 115 shows the see-through view of the scene with the 3D labels
showing their disparity. However, when viewed in real life, the user
would change the pointing direction of his/her eyes to make the 3D labels
be coincident within each left/right set and it is this that provides the
perception of depth to the user. The calculation of disparity is known to
those skilled in the art. The equation for relating disparity and
distance is given by the equation

Z=Tf/d

where Z is the distance to the object from the stereo cameras, T is the
separation distance between the stereo cameras, f is the focal length of
the camera lens, and d is the disparity distance on the camera sensor
between images of the same object in the scene. Rearranging the terms to
solve for the disparity, the equation becomes

d=Tf/Z

[0906] For example, for 7 mm focal length cameras which are separated by
120 mm and used in conjunction with image sensors having center-to-center
pixel distances of 2.2 microns, the disparities, expressed in the number
of pixels a visual target point is shifted when one display is compared
to the other, are given in Table 1 for some representative distances
(given in meters).
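
The following Python fragment is an illustrative calculation only (it is
not the referenced Table 1 itself); it applies d=Tf/Z with the parameters
recited above, namely a 120 mm camera separation, a 7 mm focal length, and
2.2 micron pixels, to a few representative distances chosen for the example.

    # Illustrative calculation of disparity in pixels using d = T*f/Z.
    # Parameters are from the example above; the distances are only representative.

    T = 0.120             # camera separation in meters
    f = 0.007             # focal length in meters
    pixel_pitch = 2.2e-6  # center-to-center pixel distance in meters

    for Z in (0.5, 1.0, 2.0, 5.0, 10.0, 100.0):  # object distances in meters
        d_meters = T * f / Z                      # disparity on the sensor in meters
        d_pixels = d_meters / pixel_pitch         # disparity expressed in pixels
        print(f"Z = {Z:6.1f} m -> disparity = {d_pixels:7.1f} pixels")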

[0907] It is noted that sometimes in the art the disparity values for
stereo images are described using numbers which range from negative to
positive, wherein zero disparity is defined for an object at a selected
distance from the observer which the observer would perceive as being in
the mid-ground. The above-recited equations must be adapted to account
for this shift of the zero point. When disparity values are described in
this way, the disparities of a close object and a far object may be the
same in magnitude but opposite in sign.
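
A brief sketch of this signed convention follows (an illustration only,
under the assumption that the zero-disparity reference distance is freely
chosen); it subtracts the disparity at the reference distance so that
closer objects take one sign and farther objects the other.

    # Sketch: signed disparity measured relative to a zero-disparity reference distance Z0.
    # Positive values here denote objects closer than Z0 and negative values objects
    # farther away; the sign convention itself is an assumption for illustration.

    def signed_disparity_px(Z, Z0, T=0.120, f=0.007, pixel_pitch=2.2e-6):
        return (T * f / Z - T * f / Z0) / pixel_pitch

    Z0 = 2.0  # assumed mid-ground reference distance in meters
    print(signed_disparity_px(1.0, Z0))   # close object: positive disparity
    print(signed_disparity_px(2.0, Z0))   # reference distance: zero disparity
    print(signed_disparity_px(10.0, Z0))  # far object: negative disparity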

[0908] FIG. 116 shows illustrations of the stereo pair of images captured
by the stereo cameras 109120 on the head mounted display device 109100.
Since these images are captured from different perspectives, they will
have disparities that correspond to the distance from the head mounted
display device 109100. In FIG. 117, the two images from FIG. 116 are
overlaid to show the disparity between the images in the stereo pair.
This disparity matches the disparity seen in the 3D labels shown for the
objects in FIGS. 114 and 115. As such, the 3D labels will be perceived to
be located at the same depth as the objects that they are intended to be
associated with. FIG. 118 shows an illustration of the 3D labels as seen
by the user as overlays to the see-through view seen with the left and
right eyes.

[0909] FIG. 119 is a flowchart for a depth cue method embodiment of the
present invention. In step 119010, the electronics 109130 in the head
mounted display device 109100 determine the GPS location using the GPS
for the head mounted display device 109100. In step 119020, the
electronics 109130 determine the direction of view using an electronic
compass. This enables the view location and view direction to be
determined so that objects in the view and nearby objects can be located
relative to the user's field of view by comparing the GPS location of the
head mounted display device 109100 to databases of the GPS locations of
other objects stored in the head mounted display device 109100 or by connecting
to other databases using a wireless connection. In step 119030, objects
of interest are identified relative to the user's field of view either by
the electronics 109130 analyzing databases stored on the device 109100 or
by wirelessly communicating in conjunction with another device. In step
119040, distances to the objects of interest are determined by comparing
the GPS location of the head mounted display device 109100 to the GPS
locations of the objects of interest. Labels relating names or other
information about the objects of interest are then generated along with
disparities to provide 3D labels at distances perceived by the user
corresponding to the distances to the objects of interest in step 119050.
FIG. 111 shows examples of labels comprising names, distances and
descriptions for objects of interest in the user's field of view. In step
119060, the 3D labels for the objects of interest are displayed to the
user's left and right eyes with the disparities to provide the 3D labels
at the desired depths.
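
Purely as an illustrative sketch of the sequence of FIG. 119 (not an
implementation taken from the disclosure), the following Python fragment
strings the distance and disparity steps together; the object records,
coordinates, and assumed camera geometry are hypothetical placeholders.

    # Sketch of steps 119040-119050. Object records and coordinates are placeholders.
    import math

    T, F, PIXEL_PITCH = 0.120, 0.007, 2.2e-6  # assumed geometry, as in the example above

    def disparity_px(distance_m):
        return T * F / distance_m / PIXEL_PITCH

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two GPS coordinates."""
        R = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    def label_objects(device_lat, device_lon, objects_of_interest):
        """Compute a distance label and a disparity for each identified object of interest."""
        labels = []
        for obj in objects_of_interest:
            dist = haversine_m(device_lat, device_lon, obj["lat"], obj["lon"])
            labels.append({"text": f'{obj["name"]}: {dist:.0f} m',
                           "disparity_px": disparity_px(dist)})
        return labels

    # Example with made-up coordinates for two objects of interest.
    print(label_objects(40.0000, -105.0000,
                        [{"name": "building", "lat": 40.0003, "lon": -105.0000},
                         {"name": "town", "lat": 40.1400, "lon": -105.0000}]))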

[0910] FIG. 120 is a flowchart for another depth cue method embodiment of
the present invention wherein steps similar to those of FIG. 119 have
been numbered using the same reference numerals as used in
FIG. 119. In step 120140, the distances and directions to the objects of
interest relative to the user's field of view are determined either by
the electronics 109130 on the device or in conjunction with another
wirelessly connected device. In step 120160, 3D labels are displayed to the
user's left and right eyes with disparities to provide the 3D labels at
the desired depths, and in addition, the 3D labels are provided in the
portions of the user's field of view that correspond to the direction to
the objects of interest. FIG. 111 shows an example where the label for a
distant object of interest is provided toward the rear of the user's
field of view and in the direction toward the distant object of interest,
shown in this example as the label "10 miles to town this direction."
This feature provides a visual cue in the 3D information which makes it
easy for the user to navigate to objects of interest. It should be noted
that the 3D labels can be provided in front of other objects in the
see-through field of view.
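
A short sketch of how the direction to an object of interest might be
mapped into the horizontal portion of the user's field of view follows; it
is an illustration only, and the 30 degree half field of view and
1280 pixel display width are assumptions not taken from the disclosure.

    # Sketch: map the bearing to an object of interest into a horizontal display position.
    # The 30-degree half field of view and 1280-pixel display width are assumptions.

    def horizontal_position_px(bearing_to_object_deg, view_direction_deg,
                               half_fov_deg=30.0, display_width_px=1280):
        """Return the x pixel at which to place a label, or None if the object lies outside the field of view."""
        offset = (bearing_to_object_deg - view_direction_deg + 180.0) % 360.0 - 180.0
        if abs(offset) > half_fov_deg:
            return None
        return int((offset / half_fov_deg + 1.0) * display_width_px / 2.0)

    print(horizontal_position_px(95.0, 90.0))   # object slightly to the right of the view direction
    print(horizontal_position_px(200.0, 90.0))  # object behind the user: outside the field of view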

[0911] FIG. 121 is a flowchart for yet another depth cue method embodiment
of the present invention. In this embodiment, distances to objects of
interest in the scene are determined with a distance measuring device
109140 such as a rangefinder. In step 121010, one or more images are
captured of a scene adjacent to the head mounted display device 109100
using stereo cameras 109120. Alternately, a single camera may be used to
capture the one or more images of the scene. The one or more images of
the scene can be different spectral types of images, for example, the
images can be visible-light images, ultraviolet images, infrared images,
or hyperspectral images. The image or images are analyzed in step 121020
to identify one or more objects of interest, wherein the analysis can be
conducted by the electronics 109130 or the image can be sent wirelessly
to another device for analysis. In step 121030, distances to objects of
interest are determined using the distance measuring device 109140.
Disparities corresponding to the distances of the objects of interest
are determined in step 121040. In step 121050, labels or other
information are determined for the objects of interest. In step 121060,
the 3D labels or other 3D information are displayed for the objects of
interest.

[0912] FIG. 122 is a flowchart for another depth cue method embodiment of
the present invention. In this embodiment, the distances to objects in
the scene are measured directly by using the stereo cameras to obtain a
depth map of the scene. In step 122010, stereo cameras 109120 are used to
capture one or more stereo image sets of the scene adjacent to the head
mounted display device 109100. The one or more stereo image sets of the
scene can be different spectral image types, for example, the stereo
images can be visible-light images, ultraviolet images, infrared images,
or hyperspectral images. The stereo image set or sets are analyzed in
step 122020 to identify one or more objects of interest, wherein the
analysis can be conducted by the electronics 109130 or the stereo image
set or sets can be sent wirelessly to another device for analysis. In
step 122030, the images in the stereo image set or sets are compared to
determine the disparities for the one or more objects of interest. In
step 122040, labels or other information related to the one or more
objects of interest are determined. In step 122050, 3D labels and/or 3D
information are displayed for the one or more objects of interest.
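
As an illustrative sketch only, the following Python fragment compares a
captured stereo pair with OpenCV block matching, which is one conventional
way to obtain a disparity map and is not a technique specified by this
disclosure, and reads out the disparity at an object's pixel location; the
image file names and the object location are placeholders.

    # Sketch: measure disparity for an object of interest from a captured stereo pair.
    # OpenCV's StereoBM block matcher stands in for the comparison step; file names are placeholders.
    import cv2

    left = cv2.imread("left_camera.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_camera.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity_map = matcher.compute(left, right).astype("float32") / 16.0  # StereoBM returns fixed-point values

    # Read the measured disparity at the pixel location of an identified object,
    # then reuse it as the disparity of the 3D label overlaid at that location.
    object_x, object_y = 400, 300  # assumed pixel location of an object of interest
    label_disparity_px = disparity_map[object_y, object_x]
    print(f"disparity at object: {label_disparity_px:.1f} pixels")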

[0913] While the present disclosure includes many embodiments shown and
described in detail, various modifications and improvements thereon will
become readily apparent to those skilled in the art. Accordingly, the
spirit and scope of the present invention is not to be limited by the
foregoing examples, but is to be understood in the broadest sense
allowable by law.

[0914] All documents referenced herein are hereby incorporated by
reference.