Abstract:

A touch device to sense and compute the coordinate of a touch object is
provided. The touch device comprises a panel, a light-emitting element,
an image sensor, a reflective strip and a processing unit. The panel has
a sensing area surrounded by first to fourth boundaries where the first
and the third boundaries define an x direction and the other two
boundaries define a y direction. The light-emitting element and the image
sensor are located on the first boundary with a specific distance
therebetween. The reflective strip is located on the second to fourth
boundaries. When the touch object touches the sensing area, a first and a
second light path from the light-emitting element to the image sensor are
blocked to form a real and a virtual image such that the processing unit
computes the coordinate of the touch object according to the real and the
virtual images. A touch method is disclosed herein as well.

Claims:

1. A touch device for sensing and computing a coordinate of a touch
object, wherein the touch device comprises: a panel having a sensing area
surrounded by a first boundary, a second boundary, a third boundary and a
fourth boundary, wherein a coordinate system is defined with an
extension direction of the first boundary and the third boundary as
x-direction and an extension direction of the second boundary and the
fourth boundary as y-direction; a light-emitting element located on the
first boundary and emitting a first light and a second light; an image sensor
located on the first boundary with a specific distance relative to the
light-emitting element for sensing an image of the sensing area; a
reflective strip located on the second, the third and the fourth
boundaries; and a processing unit electrically connected to the image
sensor; wherein when the touch object touches the sensing area, a real
dark point and a virtual dark point are generated in the image; wherein
the processing unit computes the coordinate of the touch object on the
coordinate system according to positions of the real dark point and the
virtual dark point in the image.

2. The touch device of claim 1, wherein the real dark point is formed by
blocking a first light path formed by reflecting the first light on a
first reflection point of the reflective strip, and the virtual dark
point is formed by blocking a second light path formed by reflecting the
second light on a second reflection point of the reflective strip;
wherein the second light is blocked directly by the touch object before
being reflected.

3. The touch device of claim 2, wherein the processing unit has
calibration data, which is a relationship between a position in the image
and a sensed angle.

4. The touch device of claim 3, wherein the sensed angle is an angle
relative to the x direction.

5. The touch device of claim 3, wherein the processing unit computes a
first sensed angle of the first light path and a second sensed angle of
the second light path according to the calibration data.

6. The touch device of claim 5, wherein the processing unit further
computes a first coordinate of the first reflection point and a second
coordinate of the second reflection point according to the first sensed
angle and the second sensed angle.

7. The touch device of claim 6, wherein the processing unit further
computes an angle of the second light relative to the x direction
according to the first coordinate and the second coordinate.

8. The touch device of claim 7, wherein the processing unit further
computes the coordinate of the touch object according to the angle of the
second light.

9. The touch device of claim 2, further comprising an auxiliary
light-emitting element, wherein the light-emitting element and the
auxiliary light-emitting element are driven non-simultaneously.

10. The touch device of claim 1, wherein the specific distance is between
5 mm and 20 mm.

11. The touch device of claim 1, wherein an angle of view of the image
sensor is larger than or equal to 90 degrees.

12. A touch method to sense and compute a coordinate of a touch object,
wherein the touch method comprises the steps of: (a) providing a panel
having a sensing area surrounded by a first boundary, a second boundary,
a third boundary and a fourth boundary, wherein a coordinate system is
defined by an extension direction of the first boundary and the third
boundary as x-direction and an extension direction of the second boundary
and the fourth boundary as y-direction; (b) disposing the touch object in
the sensing area to generate a real dark point and a virtual dark point;
(c) generating an image for sensing the real dark point and the virtual
dark point; and (d) computing the coordinate of the touch object
according to positions of the real dark point and the virtual dark point
in the image.

13. The touch method of claim 12, further comprising a step of: providing
calibration data, which is a relationship between a position in the image
and a sensed angle.

14. The touch method of claim 13, wherein the sensed angle is an angle
relative to the x direction.

[0003] The present disclosure relates to a touch device and a touch
method. More particularly, the present disclosure relates to a touch
device and a touch method utilizing a single light-emitting element and
an image sensor.

[0004] 2. Description of Related Art

[0005] Touch panels have become the mainstream panel technology due to
their convenience and user-friendliness. Generally, touch panels can be
categorized into resistive, capacitive, acoustic, optical and
electromagnetic touch panels depending on their different sensing
mechanisms.

[0006] The conventional optical touch panel utilizes two modules, each of
which includes a sensor and a light-emitting element, disposed at two
neighboring corners of the panel respectively, and reflection strips are
disposed on the other three sides of the panel. Once a stylus or a finger
touches the panel (i.e. blocks the light paths between the light-emitting
elements and the reflection strips), a dark point is generated on the
sensed image of each sensor due to the blocked light paths. The position
or the coordinate of the stylus or the finger can be computed according
to the dark points in the sensed images. However, a touch device
deploying two light-emitting elements and two sensors is not economical
enough.

[0007] Accordingly, what is needed is a touch device and a touch method
utilizing fewer light-emitting elements and sensors to realize the touch
input mechanism and lower the cost. The present disclosure addresses such
a need.

SUMMARY

[0008] An aspect of the present disclosure is to provide a touch device.
The touch device senses and computes a coordinate of a touch object. The
touch device comprises a panel, a light-emitting element, an image
sensor, a reflective strip and a processing unit. The panel has a sensing
area successively surrounded by a first boundary, a second boundary, a
third boundary and a fourth boundary, wherein a coordinate system is
defined by an extension direction of the first and the third boundaries
as x-direction and an extension direction of the second and the fourth
boundaries as y-direction. The light-emitting element is located on the
first boundary and emits a first light and a second light. The image
sensor is located on the first boundary with a specific distance relative
to the light-emitting element for sensing an image of the sensing area.
The reflective strip is located on the second, the third and the fourth
boundaries. The processing unit is electrically connected to the image
sensor. When the touch object touches the sensing area, a real dark point
and a virtual dark point are generated in the image and the processing
unit computes the coordinate of the touch object according to positions
of the real dark point and the virtual dark point in the image.

[0009] Another aspect of the present disclosure is to provide a touch
method to sense and compute a coordinate of a touch object. The touch
method comprises the steps as follows. (a) providing a panel having a
sensing area successively surrounded by a first boundary, a second
boundary, a third boundary and a fourth boundary is provided, wherein a
coordinate system is defined by an extension direction of the first and
the third boundaries as x-direction and an extension direction of the
second and the fourth boundaries as y-direction; (b) disposing the touch
object in the sensing area to generate a real dark point and a virtual
dark point; (c) generating an image to sense the real dark point and the
virtual dark point; and (d) computing the coordinate of the touch object
according to positions of the real dark point and the virtual dark point
in the image.

[0010] It is to be understood that both the foregoing general description
and the following detailed description are by examples, and are intended
to provide further explanation of the disclosure as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The disclosure can be more fully understood by reading the
following detailed description of the embodiments, with reference made to
the accompanying drawings as follows:

[0012] FIG. 1A is a diagram of a touch device in an embodiment of the
present disclosure;

[0013] FIG. 1B is a diagram depicting the position of the real dark point
and the virtual dark point on the sensed image sensed by the image sensor
depicted in FIG. 1A;

[0014] FIG. 2 is a diagram of the touch device when the calibration
procedure is performed;

[0015] FIG. 3 is a diagram of the touch device in another embodiment of
the present disclosure; and

[0016] FIG. 4 is a flow chart of a touch method in an embodiment of the
present disclosure.

DETAILED DESCRIPTION

[0017] Reference will now be made in detail to the present embodiments of
the disclosure, examples of which are illustrated in the accompanying
drawings. Wherever possible, the same reference numbers are used in the
drawings and the description to refer to the same or like parts.

[0018] Please refer to FIG. 1A. FIG. 1A is a diagram of a touch device 1
in an embodiment of the present disclosure. The touch device 1 is able to
sense and compute the position or the coordinate of a touch object or
point 2. The touch device 1 comprises a panel 10, a light-emitting
element 12, an image sensor 14, a reflective strip 16 and a processing
unit (not shown).

[0019] The panel 10 has a sensing area 100 such that the touch object 2
can be disposed therein. The sensing area 100 is surrounded by a first
boundary 101, a second boundary 103, a third boundary 105 and a fourth
boundary 107. A coordinate system is defined with the extension direction
of the first and the third boundaries 101 and 105 as x direction, the
extension direction of the second and the fourth boundaries 103 and 107
as y direction, and the top-left corner of the panel 10 as the origin.
The coordinate of the light-emitting element 12 (x2, y2), the coordinate
of the image sensor 14 (x1, y1), the width W of the panel and the
specific distance D between the image sensor 14 and the light-emitting
element 12 are all known parameters which can be preset when the touch
device 1 is assembled.

[0020] The light-emitting element 12 and the image sensor 14 can be
located on the first boundary 101 or located at a certain distance from
the first boundary 101. In the present
embodiment, the light-emitting element 12 and the image sensor 14 are
both located on the first boundary 101 with a specific distance D along
the x direction therebetween. The specific distance is selected such that
the image sensor 14 is able to receive the reflected light from the
reflective strip 16, where the reflected light is generated according to
light emitted by the light-emitting element 12.

[0021] The image sensor 14 performs the sensing process to sense the touch
object 2 within the sensing area 100 (by retrieving the image comprising
the touch object 2). Substantially, due to the touch object 2 could be
placed at any position within the sensing area 100, a suitable (large
enough) angle of view is necessary for the image sensor 14 to sense the
whole sensing area 100. In an embodiment, if the light-emitting element
12 and the image sensor 14 are located on an end of the first boundary
101 (or a corner of the panel 10), the angle of view of the image sensor
14 has to be larger than or equal to 90 degrees to sense the whole
sensing area 100. In the present embodiment, since the image sensor 14 is
placed at about the middle of the first boundary 101, the angle of view
of the image sensor 14 has to be 180 degrees for sensing the whole
sensing area 100.

[0022] The reflective strip 16 is located on the second boundary 103, the
third boundary 105 and the fourth boundary 107 to reflect the light from
the light-emitting element 12. The reflective strip 16 reflects the
incident light concentratedly back along the incident path. The term
"concentratedly" means that most of the energy of the incident light is
reflected back to the light-emitting element 12 along the incident path.
However, it is impossible to reflect 100% of the energy back to the
light-emitting element 12 due to physical limits. In other words, a small
portion of the energy, not reflected back to the light-emitting element
12 along the incident path, is scattered to the area neighboring the
light-emitting element 12, and the farther that area is from the
light-emitting element 12, the more the energy is reduced. In
the present embodiment, the image sensor 14 utilizes the light not
reflected back to the incident path (i.e. the scattered reflected light)
to compute the coordinate of the touch object 2. Hence, the distance D
between the image sensor 14 and the light-emitting element 12 cannot be
too large. If the distance D between the image sensor 14 and the
light-emitting element 12 is too large, the image sensor 14 is not able
to sense the reflected light since the energy of the reflected light
scattered to the image sensor 14 is too low. In a preferred embodiment,
the range of the specific distance D along the x direction between the
image sensor 14 and the light-emitting element 12 is about 5 mm to 20 mm
such that the light reflected to the image sensor 14 has enough energy
for the image sensor 14 to perform the sensing process. The processing
unit is electrically connected to the image sensor 14. In an embodiment,
the processing unit is an internal module of the panel 10 or is
integrally formed with (i.e., integrated into) the image sensor 14. The
processing unit can convert the position of the touch
object 2 within the sensed image to the coordinate of the touch object 2
in the sensing area 100.

[0023] When the touch object (a stylus or a finger) touches the sensing
area 100, the light from the light-emitting element 12 would be blocked.
In detail, as described previously, if the touch object 2 is absent, a
first light 11 would be reflected at the first reflection point P1 on the
reflective strip 16; most of the energy is reflected back to the
light-emitting element 12 along the incident path, and a small portion of
the remaining energy (that not reflected back to the light-emitting
element 12) is reflected to the image sensor 14 along a first light path
13 and sensed by the image sensor 14. Likewise, a second light 15 would
be reflected at the second reflection point P2 on the reflective strip
16; most of the energy is reflected back to the light-emitting element 12
along the incident path, and a small portion of the remaining energy is
reflected to the image sensor 14 along a second light path 17 and is
sensed by the image sensor 14.

[0024] However, once the touch object 2 touches the sensing area 100, it
blocks both the first light path 13 and, directly, the second light 15,
such that a real dark point 20 corresponding to the first light path 13
and a virtual dark point 22 corresponding to the second light path 17 are
formed on the sensed image 21 of the image sensor 14, as shown in FIG.
1B. Afterwards, the processing unit can compute the coordinate of the
touch object 2 on the coordinate system according to the positions of the
real and the virtual dark points 20 and 22 on the sensed image 21.

[0025] Before describing how to compute the coordinate of the touch object
2, it is necessary to understand how the calibration procedure is
performed on the touch device 1 to obtain a regression curve (i.e., the
calibration data) such that sensed angles can be computed according to
the regression curve and the positions of the dark points on the sensed
image 21. The so-called sensed angles are the angles relative to the x
direction. Please refer to FIG. 2. FIG. 2 is a diagram of the touch
device 1 when the calibration procedure is performed. The light-emitting
element 12 and the image sensor 14 are
disposed on a corner of the panel 10. The touch object 2 is first placed
on a path L1 corresponding to a predetermined first sensed angle
θ10 to form a dark point P10 on the sensed image 21. Then, the
touch object 2 is placed on a path L2 corresponding to a predetermined
second sensed angle θ20 to form a dark point P20 on the sensed image
21. The touch object 2 keeps being placed at different positions, each on
a path corresponding to a specific sensed angle, until the touch object 2
is placed on a path L8 corresponding to a predetermined eighth sensed
angle θ80 to form a dark point P80 on the sensed image 21. Hence, a
group of images comprising the dark points (P10, P20, . . . , P80)
respectively corresponding to the different sensed angles (θ10,
θ20, . . . , θ80) is obtained. Then, the two
groups (positions of the dark points and corresponding sensed angles) of
data are used to determine a regression curve or a fitting curve, i.e.
the calibration data. The calibration data is stored in the touch device
1. Consequently, once the dark point on the sensed image 21 is known, the
sensed angle can be determined by the position of the dark point through
the regression curve. It is noted that the number of the predetermined
sensed angles used to perform the calibration procedure is not limited to
eight. The number of the predetermined sensed angles can be determined
depending on the practical situation; with more predetermined sensed
angles, the computed regression curve approximates the actual
relationship more closely.
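The regression fit described above can be sketched in a few lines of Python. The snippet below is illustrative only: the pixel positions, the degree of the polynomial, and the variable names are assumed values, since the disclosure does not fix the form of the regression curve.

```python
import numpy as np

# Calibration measurements: the pixel position of each dark point on the
# sensed image, and the predetermined sensed angle (in degrees) of the
# path the touch object was placed on. Values here are illustrative.
pixel_positions = np.array([40.0, 120.0, 210.0, 305.0, 400.0, 500.0, 605.0, 715.0])
sensed_angles = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0])

# Fit a regression curve (here a cubic polynomial, an assumed choice)
# mapping dark-point position to sensed angle; this is the calibration data.
coeffs = np.polyfit(pixel_positions, sensed_angles, deg=3)
pixel_to_angle = np.poly1d(coeffs)

# After calibration, any dark-point position yields its sensed angle.
angle = pixel_to_angle(260.0)
```

With more calibration points, the fitted curve tracks the true position-to-angle relationship more closely, exactly as the paragraph above notes.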

[0026] After the calibration procedure is performed, the processing unit
stores the calibration data (i.e. the regression curve) that stands for
the relation between the positions of the dark points in the sensed image
21 and the sensed angles. It is noted that since the light-emitting
element 12 and the image sensor 14 are placed at the top-right corner of
the panel 10 in FIG. 2, an angle of view of 90 degrees is enough for the
image sensor 14. However, for an image sensor 14 with an angle of view of
180 degrees, as depicted in FIG. 1A, the calibration procedure is still
similar to the procedure described above.

[0027] Please refer to FIG. 1A again. After retrieving the stored
calibration data, the processing unit computes a first sensed angle of
the first light path 13 and a second sensed angle of the second light
path 17 according to the positions of the real dark point 20 and the
virtual dark point 22 on the sensed image 21, respectively. Assume that
the first sensed angle of the first light path 13 is θ1 and the
second sensed angle of the second light path 17 is θ2.

[0028] As described above, the coordinate of the light-emitting element 12
(x2, y2), the coordinate of the image sensor 14 (x1, y1), the width W of
the panel and the specific distance D between the image sensor 14 and the
light-emitting element 12 are all known parameters, wherein

[0029] y1=y2;

[0030] x1-x2=D;

[0031] The processing unit can compute the coordinate (x3, y3) of the
first reflection point P1 and the coordinate (x4, y4) of the second
reflection point P2:

[0032] tan θ1=W/(x1-x3)

[0033] tan θ2=W/(x1-x4)

[0034] where

[0035] y3=y4;

[0036] y3-y1=W

[0037] Accordingly, (x1, y1), (x2, y2), (x3, y3) and (x4, y4) are known.
Then, the angle θ3 of the second light 15 relative to the x
direction can be computed by the following equation:

θ3=tan⁻¹[(y4-y2)/(x4-x2)]

[0038] Finally, two linear equations can be obtained according to the
known angles θ1 and θ3, where (x, y) is the
coordinate of the touch object 2 on the sensing area 100:

y-y1=(tan θ1)(x1-x)

y-y2=(tan θ3)(x-x2)

[0039] By solving the set of linear equations, the solution (x, y), i.e.
the coordinate of the touch object 2 on the sensing area, is obtained.
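The computation in paragraphs [0028] through [0039] can be condensed into a short routine. The following Python function is a sketch under stated assumptions, not the disclosed implementation: the function name and signature are hypothetical, the angles are taken in radians, and the coordinate system is the one defined above (origin at the top-left corner of the panel, y increasing toward the third boundary).

```python
import math

def touch_coordinate(x1, y1, x2, y2, W, theta1, theta2):
    """Triangulate the touch point (x, y) from the two sensed angles.

    (x1, y1): image sensor; (x2, y2): light-emitting element, with
    y1 == y2 and x1 - x2 == D; W: panel width (first to third boundary);
    theta1, theta2: sensed angles (radians) of the real and virtual dark
    points, measured relative to the x direction.
    """
    # Reflection points P1, P2 lie on the third boundary: y3 = y4 = y1 + W.
    x3 = x1 - W / math.tan(theta1)   # from tan(theta1) = W / (x1 - x3)
    x4 = x1 - W / math.tan(theta2)   # from tan(theta2) = W / (x1 - x4)
    y4 = y1 + W

    # Angle theta3 of the second light (emitter -> P2) relative to x.
    theta3 = math.atan2(y4 - y2, x4 - x2)

    # Intersect the two sight lines:
    #   y - y1 = tan(theta1) * (x1 - x)   (image sensor -> touch object)
    #   y - y2 = tan(theta3) * (x - x2)   (emitter -> touch object)
    t1, t3 = math.tan(theta1), math.tan(theta3)
    x = (t1 * x1 + t3 * x2) / (t1 + t3)
    y = y1 + t1 * (x1 - x)
    return x, y
```

For an illustrative geometry with the image sensor at (50, 0), the light-emitting element at (40, 0), W = 100 and a touch object at (30, 50), the sensed angles are tan⁻¹(2.5) and tan⁻¹(10/3), and the routine recovers (30, 50).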

[0040] By keeping a distance between the light-emitting element and the
image sensor, the image sensor of the touch device of the present
disclosure can sense the real dark point and the virtual dark point when
the touch object touches the sensing area and blocks two light paths. The
coordinate of the touch object can thus be computed with fewer
light-emitting elements and image sensors.

[0041] Please refer to FIG. 3. FIG. 3 is a diagram of the touch device 1
in another embodiment of the present disclosure. Similar to the touch
device depicted in FIG. 1A, the touch device 1 in FIG. 3 comprises a panel 10, a
light-emitting element 12, an image sensor 14, a reflective strip 16 and
a processing unit (not shown). In the present embodiment, the touch
device 1 further comprises an auxiliary light-emitting element 3.

[0042] Similar to the light-emitting element 12, when the touch object 2
touches the sensing area 100, a third light path 33 formed by reflecting
a third light 31, which is generated from the auxiliary light-emitting
element 3, on the reflective strip 16 to the image sensor 14 is blocked
to generate an auxiliary real dark point, and a fourth light 35,
generated from the auxiliary light-emitting element 3, is directly
blocked such that an auxiliary virtual dark point is generated. The image
sensor 14 senses the auxiliary real dark point and the auxiliary virtual
dark point in the sensed image 21, and the processing unit then computes
the coordinate of the touch object 2 on the coordinate system according
to the positions of the auxiliary real dark point and the auxiliary
virtual dark point on the sensed image 21.

[0043] It is noted that in the present embodiment, the image sensor 14
still senses light from the light-emitting element 12. In other words,
two light-emitting elements 12 and 3 and one image sensor 14 are used to
compute the coordinate of the touch object 2 in the present embodiment.
The results obtained respectively are then averaged, or averaged with
weights, to improve the accuracy of the sensing result. However, when the
light-emitting element 12 and the auxiliary light-emitting element 3 are
both present as shown in FIG. 3, they have to be driven
non-simultaneously such that the virtual dark point generated by one of
the light-emitting elements does not disappear as a result of light
compensation by the other light-emitting element. In other words, when
the light-emitting element 12 emits light, the auxiliary light-emitting
element 3 should be turned off, and when the auxiliary light-emitting
element 3 emits light, the light-emitting element 12 should be turned
off, thus preventing unnecessary light from compensating the blocked
light path of the virtual dark point, which would otherwise result in
only the real dark point being generated on the sensed image 21.
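The combination of the two per-emitter results described above can be sketched as follows. The function name and the equal default weights are assumptions, since the disclosure leaves the exact weighting open.

```python
def combine_estimates(xy_main, xy_aux, w_main=0.5):
    """Average (or weighted-average) the two coordinate estimates obtained
    from the light-emitting element and the auxiliary light-emitting
    element during their alternating (non-simultaneous) drive frames.
    With w_main = 0.5 this is a plain mean of the two estimates."""
    w_aux = 1.0 - w_main
    x = w_main * xy_main[0] + w_aux * xy_aux[0]
    y = w_main * xy_main[1] + w_aux * xy_aux[1]
    return x, y
```

For example, estimates (30, 50) and (32, 48) from the two emitters combine to (31, 49) with equal weights; unequal weights could favor whichever emitter yields the more reliable dark points in a given geometry.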

[0044] Please refer to FIG. 4. FIG. 4 is a flow chart of a touch method in
an embodiment of the present disclosure. The touch method is adapted to
the touch device 1 depicted in FIG. 1A and FIG. 3 to sense the touch
object 2 and compute the coordinate of the touch object 2. The touch
method comprises the steps as follows. (The steps are not recited in the
sequence in which the steps are performed. That is, unless the sequence
of the steps is expressly indicated, the sequence of the steps is
interchangeable, and all or part of the steps may be simultaneously,
partially simultaneously, or sequentially performed).

[0045] In step 401, the panel 10, the light-emitting element 12, the image
sensor 14 and the reflective strip 16 are provided. The image sensor 14
and the light-emitting element 12 are disposed with a specific distance
D therebetween. The width of the panel 10 is W. The panel 10 has the
sensing area 100 surrounded by the first boundary 101, the second
boundary 103, the third boundary 105 and the fourth boundary 107. The
extension direction of the first and the third boundaries 101 and 105
defines an x-direction and the extension direction of the second and the
fourth boundaries 103 and 107 defines a y-direction. With the x-direction
and y-direction, the origin point is chosen to be the left-top corner of
the panel 10 to define the coordinate system. A processing unit can be
further provided in step 401. The coordinate of the light-emitting
element 12 (x2, y2) and the coordinate of the image sensor 14 (x1, y1)
are already known in step 401.

[0046] In step 402, the touch object 2 is disposed in the sensing area 100
such that a first light path 13, formed by reflecting a first light 11
generated from the light-emitting element 12 on a first reflection point
P1, to the image sensor 14 is blocked to generate a real dark point, and
a second light 15 is directly blocked such that a virtual dark point is
generated on a second light path 17 formed by reflecting the second light
15 on a second reflection point P2 to the image sensor 14.

[0047] In step 403, the first sensed angle of the real dark point (on the
first light path 13) θ1 relative to the x direction and the
second sensed angle of the virtual dark point (on the second light path
17) θ2 relative to the x direction are computed respectively.
The computing process in step 403 is performed according to the
calibration data, wherein in an embodiment, the calibration data is a
regression curve or a fitting curve. In an embodiment, the calibration
data is pre-stored in the image sensor 14 or the processing unit.

[0048] In step 404, the coordinate (x3, y3) of the first reflection point
P1 and the coordinate (x4, y4) of the second reflection point P2 are
computed. The computing process in step 404 can be performed with the use
of triangulation according to the sensed angles θ1,
θ2, the width W of the panel 10 and the coordinate of the
image sensor 14.

[0049] In step 405, the angle θ3 of the second light 15
relative to the x direction is computed. The computing process in step
405 can be performed with the use of triangulation according to the
coordinate of the second reflection point P2 (x4, y4) and the coordinate
of the light-emitting element 12 (x2, y2).

[0050] In step 406, the coordinate of the touch object 2 (x, y) is
computed. The computing process in step 406 can be performed with the use
of triangulation according to the coordinate of the light-emitting
element 12 (x2, y2), the coordinate of the image sensor 14 (x1, y1), the
angle θ3 and the sensed angle θ1.

[0051] In an embodiment, the auxiliary light-emitting element 3 can be
provided in step 401 as well to let the touch object 2 block the third
light path 33 and the fourth light path 37 in step 402. The image sensor
14 can further sense the auxiliary real dark point and the auxiliary
virtual dark point generated due to
the auxiliary light-emitting element 3 to compute the coordinate of the
touch object 2, thus improving the accuracy of the sensing result.

[0052] It will be apparent to those skilled in the art that various
modifications and variations can be made to the structure of the present
disclosure without departing from the scope or spirit of the disclosure.
In view of the foregoing, it is intended that the present disclosure
cover modifications and variations of this disclosure provided they fall
within the scope of the following claims.