Abstract:

The present invention relates to an interactive image and graphic system
and method capable of detecting collision. A storage device stores a
plurality of image data streams. Each image data stream includes a
header, which has at least one position coordinate field, and the at
least one position coordinate field corresponds to at least one object of
the image data stream. An image engine plays a first image data stream of
the plurality of image data streams. A graphic engine receives sprite
picture data, which includes a sprite position coordinate, and also
receives the header of the first image data stream. When the sprite
position coordinate superimposes over a position coordinate of the at
least one object of the first image data stream, the graphic engine
drives the image engine to select a second image data stream from the
storage device to be played.

Claims:

1. An interactive image and graphic system capable of detecting collision,
comprising:
a storage device, storing a plurality of image data streams, each said
image data stream including a header, the header having at least one
position coordinate field corresponding to an object of the image data
stream;
an image engine, coupled to the storage device, for playing a first image
data stream of the plurality of image data streams; and
a graphic engine, coupled to the storage device and the image engine, for
receiving a header of the first image data stream and receiving sprite
picture data having a sprite position coordinate, wherein when the sprite
position coordinate superimposes over a position coordinate of the at
least one object from the first image data stream, the graphic engine
drives the image engine to select a second image data stream from the
storage device to be played.

2. The system as claimed in claim 1, wherein when the sprite position
coordinate does not superimpose over the position coordinate of the at
least one object from the first image data stream, the graphic engine
drives the image engine to continuously play the first image data stream,
or to select a third image data stream from the storage device to be
played after a predetermined time interval.

3. The system as claimed in claim 1, wherein the image engine plays the
plurality of image data streams in YUV format.

4. The system as claimed in claim 1, wherein the graphic engine plays in
RGB format.

5. The system as claimed in claim 4, further comprising:
a YUV-to-RGB converting device, coupled to the image engine and the
graphic engine, so as to convert the data outputted by the image engine
from the YUV format to the RGB format to be played by the graphic engine.

6. The system as claimed in claim 5, wherein the graphic engine has a
first frame buffer, and the first frame buffer is used for temporarily
storing the data outputted by the YUV-to-RGB converting device.

7. The system as claimed in claim 6, wherein the graphic engine has a
second frame buffer, and the second frame buffer is used for temporarily
storing the sprite picture data.

8. The system as claimed in claim 7, wherein the graphic engine executes
an alpha blending process on the first frame buffer and the second frame
buffer.

9. A method for detecting collision in an interactive image and graphic
system, the image and graphic system having an image engine and a graphic
engine, the graphic engine receiving sprite picture data having a sprite
position coordinate, the image engine receiving one of a plurality of
image data streams, the image data stream including a header having at
least one position coordinate field corresponding to an object of the
image data stream, the method comprising the steps of:
(A) the graphic engine and the image engine respectively playing the
sprite picture data and a first image data stream;
(B) the graphic engine receiving a header of the first image data stream;
and
(C) when the graphic engine determines that the sprite position coordinate
superimposes over a position coordinate of the at least one object of the
first image data stream, the graphic engine driving the image engine to
select a second image data stream from the storage device to be played.

10. The method as claimed in claim 9, further comprising the step of:
(D) when the graphic engine determines that the sprite position coordinate
does not superimpose over the position coordinate of the at least one
object of the first image data stream, the graphic engine driving the
image engine to continuously play the first image data stream, or to
select a third image data stream to be played after a predetermined time
interval.

11. The method as claimed in claim 9, wherein the image engine plays the
image data stream in the YUV format.

12. The method as claimed in claim 11, wherein the graphic engine plays in
the RGB format.

13. The method as claimed in claim 12, further comprising the following
step:
a YUV-to-RGB converting step, converting the data outputted by the image
engine from the YUV format to the RGB format to be played by the graphic
engine.

Description:

BACKGROUND OF THE INVENTION

[0001]1. Field of the Invention

[0002]The present invention relates to the technical field of 2-D images
and graphics and, more particularly, to an interactive image and graphic
system and method capable of detecting collision.

[0003]2. Description of Related Art

[0004]In a TV game, a graphic engine is generally used to execute a
drawing procedure for a background picture and a sprite to be displayed
on a display monitor. The graphic engine can receive an input from a
user, so as to control the sprite on the display monitor, thereby
achieving the effect of interactive entertainment. However, as users'
expectations of image quality grow ever higher, the quality of a
background picture drawn by a graphic engine can no longer meet their
requirements.

[0005]In order to solve the above problem, one known approach utilizes an
MPEG4 decoder to play a background image animation and utilizes a graphic
engine to execute the drawing procedure of the sprite. The colors of
typical image animations are in YUV format: the animation is decoded by
the MPEG4 decoder and saved in a frame buffer in frame form, and the
frames are then read one by one from the frame buffer to be played. The
frames saved in the frame buffer are therefore in YUV format. If an OSD
process or other image-superimposing effects are applied to the frames
saved in the frame buffer, the rendering method of the MPEG4 decoder is
disrupted, making it impossible to continue the animation playing
procedure.

[0006]In order to solve the problem that the MPEG4 decoder cannot execute
the OSD process or other superimposing effects, another known approach
utilizes a 3D game engine to play the background image animation, draw
the sprite, and superimpose the sprite over the background image
animation, among other operations. The 3D game engine provides a visual
effect close to an actual view and is therefore well suited to a game
platform. However, not only is a 3D game engine very expensive, but its
long learning curve also keeps game manufacturing companies from
successfully developing 3D games with it. Accordingly, the conventional
interactive game image and graphic engine method still needs further
improvement.

SUMMARY OF THE INVENTION

[0007]One object of the present invention is to provide an interactive
image and graphic system and method capable of detecting collision, so as
to avoid the problem that a conventional MPEG4 decoder cannot execute an
OSD process or other image superimposed effects.

[0008]Another object of the present invention is to provide an interactive
image and graphic system and method capable of detecting collision, so as
to avoid utilizing an expensive 3D game engine with a long learning
curve.

[0009]According to one aspect of the present invention, the present
invention provides an interactive image and graphic system capable of
detecting collision, which comprises a storage device, an image engine
and a graphic engine. The storage device stores a plurality of image data
streams. Each image data stream includes a header; the header has at
least one position coordinate field, and the at least one position
coordinate field corresponds to at least one object of the image data
stream. The image engine is coupled to the storage device, and is used
for playing a first image data stream of the plurality of image data
streams. The graphic engine receives sprite picture data that includes a
sprite position coordinate. The graphic engine is
coupled to the storage device and the image engine for receiving a header
of the first image data stream. When the sprite position coordinate
superimposes over a position coordinate of the at least one object of the
first image data stream, the graphic engine drives the image engine to
select a second image data stream from the storage device to be played.

[0010]According to another aspect of the present invention, the present
invention provides a method for detecting collision in an interactive
image and graphic system. The image and graphic system has an image
engine and a graphic engine. The graphic engine receives sprite picture
data, and the sprite picture data includes a sprite position coordinate.
The image engine receives an image data stream of a plurality of image
data streams. The image data stream includes a header, the header has at
least one position coordinate field, and the at least one position
coordinate field corresponds to at least one object of the image data
stream. The method comprises the following steps: (A) the graphic engine
and the image engine respectively playing the sprite picture data and a
first image data stream; (B) the graphic engine receiving a header of the
first image data stream; and (C) when the graphic engine determines that
the sprite position coordinate superimposes over a position coordinate of
the at least one object of the first image data stream, the graphic
engine driving the image engine to select a second image data stream from
the storage device to be played.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011]FIG. 1 illustrates a block diagram of an interactive image and
graphic system capable of detecting collision according to the present
invention.

[0012]FIG. 2 illustrates a schematic drawing of an image data stream
according to the present invention.

[0013]FIG. 3 illustrates a schematic drawing of a sprite picture data
according to the present invention.

[0014]FIG. 4 illustrates a schematic drawing showing a collision between
the sprite picture and the at least one object 230 according to the
present invention.

[0015]FIG. 5 illustrates a flowchart of a method for detecting collision
in the interactive image and graphic system according to the present
invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0016]FIG. 1 illustrates a block diagram of an interactive image and
graphic system capable of detecting collision according to the present
invention. The interactive image and graphic system includes a storage
device 110, an image engine 120, a graphic engine 140 and a YUV-to-RGB
converting device 130.

[0017]The storage device 110 stores a plurality of image data streams. In
this embodiment, the storage device 110 stores a first image data stream
111, a second image data stream 112 and a third image data stream 113. As
shown in FIG. 2, each image data stream is composed of a header 210 and
data 220. The header 210 has at least one position coordinate field 211,
and the at least one position coordinate field 211 corresponds to at
least one object 230 of the image data stream. In this embodiment, the at
least one object 230 could be an airplane, for example, and is
represented as a rectangle. The position coordinate field 211 records the
left-top coordinate position (x1, y1) and the right-bottom coordinate
position (x2, y2) of the rectangle of the at least one object 230. The
data 220 is the compressed image data, wherein the compression format
could be MPEG1, MPEG2, MPEG4 or H.263.
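The header layout described above can be sketched as a simple data
structure. The following Python sketch is purely illustrative: the class
and field names (PositionField, ImageDataStream) and the example
coordinates are assumptions, not part of the disclosed embodiment, and a
real stream would store the header in a packed binary form.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PositionField:
    # Bounding rectangle of one object (e.g. the airplane), recorded as
    # the left-top and right-bottom corners, per the header layout above.
    left_top: Tuple[int, int]      # (x1, y1)
    right_bottom: Tuple[int, int]  # (x2, y2)

@dataclass
class ImageDataStream:
    header: List[PositionField]  # one position coordinate field per object
    data: bytes                  # compressed payload (MPEG1/2/4 or H.263)

# Hypothetical stream whose header marks one object at (40, 30)-(120, 90).
stream = ImageDataStream(
    header=[PositionField(left_top=(40, 30), right_bottom=(120, 90))],
    data=b"",
)
```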

[0018]The storage device 110 could be a dynamic random access memory,
which could be an asynchronous dynamic random access memory or a
synchronous dynamic random access memory. If the storage device 110 is a
synchronous double data rate dynamic random access memory, it could be a
DDR-I, DDR-II, DDR-333 or DDR-400.

[0019]The image engine 120 is coupled to the storage device 110, and is
used for playing a first image data stream 111 of the plurality of image
data streams. Since signals decompressed from conventional MPEG or H.263
image data streams are in YUV format, the image engine 120 plays the
plurality of image data streams in YUV format.

[0020]The YUV-to-RGB converting device 130 is coupled to the image engine
120 and the graphic engine 140, so as to convert the data outputted by
the image engine 120 from YUV format to RGB format for being played by
the graphic engine 140.
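As a rough illustration of what such a converting device computes, the
sketch below applies one common BT.601 full-range conversion per pixel.
The patent does not specify which conversion matrix the converting device
130 uses, so the coefficients here are an assumption.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV pixel to RGB (BT.601 coefficients assumed)."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, int(round(c))))  # keep channels in 0..255
    return clamp(r), clamp(g), clamp(b)
```

For example, a mid-gray pixel (Y = 128, U = V = 128) maps to RGB
(128, 128, 128) under this matrix.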

[0021]The graphic engine 140 is coupled to the storage device 110 and the
YUV-to-RGB converting device 130. The graphic engine 140 has a first
frame buffer 141 and a second frame buffer 142. The first frame buffer
141 is used for temporarily storing the data outputted by the YUV-to-RGB
converting device 130.

[0022]The graphic engine 140 receives a sprite picture data 143. The
sprite picture data 143 is in RGB format and includes a position
coordinate of the sprite picture. As shown in FIG. 3, in this embodiment,
the sprite picture is a crosshair 144, and the sprite picture position
coordinate could be located in the center of the crosshair 144, which is
currently represented as (x3, y3). The sprite picture position coordinate
could be represented as a rectangle, and the sprite picture position
coordinate is recorded with the left-top coordinate position (x4, y4) and
the right-bottom coordinate position (x5, y5) of the rectangle of the
sprite picture.

[0023]The second frame buffer 142 is used for temporarily storing the
sprite picture data 143. The graphic engine 140 executes an alpha
blending process on the data of the first frame buffer 141 and the data
of the second frame buffer 142, so as to superimpose the sprite picture
over the first image data stream 111 outputted by the image engine 120,
and then outputs the blended result.
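Per pixel, the alpha blending step amounts to a weighted sum of the two
frame buffers. The sketch below is a generic formulation, not the graphic
engine 140's actual implementation; the function name and the scalar
alpha parameter are assumptions.

```python
def alpha_blend(sprite_px, video_px, alpha):
    # out = alpha * sprite + (1 - alpha) * video, per RGB channel:
    # sprite_px comes from the second frame buffer (sprite picture),
    # video_px from the first frame buffer (converted video frame).
    return tuple(int(round(alpha * s + (1 - alpha) * v))
                 for s, v in zip(sprite_px, video_px))
```

With alpha = 1 the sprite pixel fully covers the video pixel; with
alpha = 0 the video frame shows through unchanged.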

[0024]The graphic engine 140 is coupled to the storage device 110 for
receiving a header 210 from the first image data stream 111. The graphic
engine 140 determines whether the sprite position coordinate superimposes
over the position coordinate 211 of the at least one object 230 from the
first image data stream 111 by means of the two tests
x1 ≤ x3 ≤ x2 and y1 ≤ y3 ≤ y2. As shown in FIG. 4,
when the sprite position coordinate superimposes over the position
coordinate 211 of the at least one object 230 of the first image data
stream 111, the graphic engine 140 determines there is a collision
generated between the sprite picture and the picture of the at least one
object 230, therefore, the graphic engine 140 drives the image engine 120
to select a second image data stream 112 from the storage device 110 for
being played, wherein the second image data stream 112 could be an image
data stream relating to an airplane crash, for example.
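The two inequality tests above amount to a point-in-rectangle check,
which can be sketched as follows. The code is illustrative only: the
rectangle is taken from the header's position coordinate field, the point
is the sprite's center coordinate (x3, y3), and the function name is an
assumption.

```python
def collides(object_rect, sprite_point):
    # object_rect: ((x1, y1), (x2, y2)) = left-top and right-bottom corners
    # of the at least one object 230, read from position coordinate field 211.
    # sprite_point: (x3, y3), e.g. the center of the crosshair 144.
    (x1, y1), (x2, y2) = object_rect
    x3, y3 = sprite_point
    return x1 <= x3 <= x2 and y1 <= y3 <= y2
```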

[0025]When the sprite position coordinate does not superimpose over the
position coordinate 211 of the at least one object 230 from the first
image data stream 111, the image engine 120 could continue playing the
first image data stream 111, or the graphic engine 140 could drive the
image engine 120 to select a third image data stream 113 from the storage
device 110 to be played after a predetermined time interval, wherein the
third image data stream 113 could be an image data stream of a
continuously flying airplane.

[0026]FIG. 5 illustrates a flowchart of a method for detecting collision
in the interactive image and graphic system according to the present
invention. According to the above description, the image and graphic
system has an image engine 120 and a graphic engine 140. The graphic
engine 140 receives a sprite picture data 143, and the sprite picture
data 143 includes a sprite position coordinate. The image engine 120
receives a first image data stream 111 of a plurality of image data
streams. The first image data stream 111 includes a header 210, the
header 210 has at least one position coordinate field 211, and the at
least one position coordinate field 211 corresponds to at least one
object 230 of the first image data stream 111.

[0027]First, in step S510, the graphic engine 140 and the image engine 120
respectively play the sprite picture data 143 and the first image data
stream 111.

[0028]In step S520, the graphic engine 140 receives the header 210 of the
first image data stream 111.

[0029]In step S530, the graphic engine 140 determines whether the sprite
position coordinate superimposes over a position coordinate of the at
least one object 230 from the first image data stream 111.

[0030]When the graphic engine 140 determines that the sprite position
coordinate superimposes over the position coordinate of the at least one
object 230 from the first image data stream 111, it means a collision is
generated between the sprite picture and the picture of the at least one
object 230, and then the graphic engine 140 drives the image engine 120
to select a second image data stream 112 to be played (step S540).

[0031]When the graphic engine 140 determines that the sprite position
coordinate does not superimpose over the position coordinate of the at
least one object 230 from the first image data stream 111, it means no
collision is generated between the sprite picture and the picture of the
at least one object 230, and then the graphic engine 140 drives the image
engine 120 to continuously play the first image data stream 111, or to
select a third image data stream to be played after a predetermined time
interval (step S550).
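Steps S530 through S550 can be summarized as a small decision function.
The sketch below is illustrative only; the string stream labels and the
elapsed/interval bookkeeping are assumptions standing in for the engines'
actual control logic.

```python
def next_stream(collided, elapsed, interval):
    # S540: on a collision, switch to the second stream (e.g. the crash clip).
    if collided:
        return "second"
    # S550: with no collision, keep playing the first stream until the
    # predetermined time interval elapses, then switch to the third stream
    # (e.g. the continuously flying airplane).
    return "third" if elapsed >= interval else "first"
```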

[0032]According to the above description, the technique of the present
invention could be applied in detecting object collision in an image
plane and a graphic plane. The image plane is driven by the image engine
120, while the graphic plane is driven by the graphic engine 140. The
technique of the present invention could be applied to, for example, a
video subtitle menu, a video interactive commercial, a karaoke menu, and
so on.

[0033]According to the above description, the present invention executes a
superimposing process on video streams and graphics, thereby providing a
more vivid visual effect than conventional game images. Meanwhile, a
collision determination is processed by detecting whether the sprite
position coordinate superimposes over the position coordinate of the at
least one object 230, so as to achieve an interactive effect. Further,
the problem of utilizing an expensive 3D game engine with a long learning
curve could be avoided.

[0034]Although the present invention has been explained in relation to its
preferred embodiment, it is to be understood that many other possible
modifications and variations can be made without departing from the
spirit and scope of the invention as hereinafter claimed.