
Abstract:

This invention relates to an animation authoring system and an animation
authoring method that enable beginners to produce a three-dimensional
animation easily and solve the input ambiguity problem of the
three-dimensional environment. The animation authoring method according
to the invention comprises the steps of: (a) receiving a plane route of
an object on a predetermined reference plane from a user; (b) creating a
motion window formed along the plane route and having a predetermined
angle to the reference plane to receive motion information of the object
on the motion window from the user; and (c) implementing an animation
according to the received motion information.

Claims:

1-40. (canceled)

41. An animation authoring method, comprising the steps of: (a) providing
a first screen displaying a reference plane to a user; (b) determining a
plane route of an object on the reference plane according to a sketch
that the user makes on the first screen; (c) providing a second screen
displaying a motion window to the user; (d) determining a motion of the
object in the motion window according to information on the motion in the
motion window that the user inputs via the second screen; and (e)
implementing an animation based on the determined plane route and the
determined motion.

42. The animation authoring method as claimed in claim 41, wherein the
motion window is formed along the plane route, making an angle to the
reference plane.

43. The animation authoring method as claimed in claim 41, wherein the
motion information includes gesture data of the object and corresponding
parameters.

44. The animation authoring method as claimed in claim 43, wherein the
parameters include the speed or height of the object, and wherein the
speed of the object corresponds to the speed of the user drawing the
gesture and the height of the object corresponds to the height of the
gesture.

45. The animation authoring method as claimed in claim 43, wherein step
(d) comprises the step of displaying the gesture to the user via the
second screen while a virtual camera moves along the motion window in the
input direction of the gesture, maintaining a distance to the motion
window.

46. The animation authoring method as claimed in claim 45, wherein, when
the gesture gets out of the motion window, the virtual camera performs
zoom out with respect to the second screen.

47. The animation authoring method as claimed in claim 45, wherein, when
the input direction of the gesture is changed to the opposite direction,
the virtual camera performs scrolling with respect to the second screen
in the corresponding direction.

48. The animation authoring method as claimed in claim 43, wherein step
(d) comprises the step of converting the gesture data into the
corresponding one of pre-stored motion data.

49. The animation authoring method as claimed in claim 48, further
comprising the step of: (d') after step (d) and before step (e), creating
a motion sequence of the object by arranging the motion data and the
corresponding parameters in a temporal manner.

50. The animation authoring method as claimed in claim 43, wherein step
(d) comprises: (d1) partitioning the gesture data; (d2) determining
whether the gesture data in each partition corresponds to predetermined
motion data; and (d3) if it is determined that the gesture data
corresponds to the predetermined motion data, converting the gesture data
to the motion data, and if it is determined that the gesture data does
not correspond to the predetermined motion data, converting the gesture
data to pre-stored basic motion data.

51. The animation authoring method as claimed in claim 41, wherein the
determined motion of the object in the motion window is a main route
movement of the object in the motion window, and the method further
comprising the step of: (d') after step (d) and before step (e),
providing a third screen displaying a plurality of secondary motion
windows passing through the main route and each formed with an interval
therebetween, and determining detailed motions of the object in the
plurality of secondary motion windows according to information on the
detailed motions in the plurality of secondary motion windows that the
user inputs via the third screen.

52. An animation authoring system, comprising: a plane route module to
provide a first screen displaying a reference plane to a user and
determine a plane route of an object on the reference plane according to
a sketch that the user makes on the first screen; a motion window module
to provide a second screen displaying a motion window to the user and
determine a motion of the object in the motion window according to
information on the motion in the motion window that the user inputs via
the second screen; and an animation implementation unit to implement an
animation based on the determined plane route and the determined motion.

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to an animation authoring system and
a method for authoring animation, and more particularly, to an animation
authoring system and a method for authoring animation that enable
beginners to easily author a three-dimensional animation while solving
the input ambiguity problems of three-dimensional environments.

BACKGROUND

[0002] Three-dimensional animations often appear in movies and TV
programs nowadays, and three-dimensional animation authoring tools are
used to author them, but conventional three-dimensional animation
authoring tools are so complicated and difficult to use that only persons
of professional skill can use them.

[0003] Recently, due to the development of the Internet and multimedia,
the number of general users who want to actively author and use
three-dimensional animations, rather than passively watch them, is
increasing.

[0004] Thus, non-professional tools that enable general users, including
children and beginners, to author their own three-dimensional animations
have been developed. Such non-professional tools create an animation of
an object according to routes and motions simultaneously drawn on a
screen by a user. However, because the routes and the motions are
inputted at the same time, accurate input of both is impossible and the
motions that can be inputted are highly limited.

[0005] Therefore, there is an increasing need for an animation authoring
tool that enables non-professional users to easily author
three-dimensional animations and to accurately input the routes and
motions of an object.

SUMMARY OF THE INVENTION

[0006] The present invention is intended to solve the above prior-art
problems, and it is an objective of the present invention to provide an
animation authoring system and a method for authoring animation that
enable non-professional users, including children and beginners, to
easily author three-dimensional animations and to accurately input the
routes and motions of an object.

[0007] The animation authoring method according to the first preferred
embodiment of the present invention to achieve the above objective
comprises the steps of: (a) receiving a plane route of an object on a
predetermined reference plane from a user; (b) creating a motion window
formed along the plane route and having a predetermined angle to the
reference plane to receive motion information of the object on the motion
window from the user; and (c) implementing an animation according to the
received motion information.

[0008] The animation authoring system according to the first preferred
embodiment of the present invention to achieve the above objective
comprises: a plane route module to receive a plane route of an object on
a predetermined reference plane from a user; a motion window module to
create a motion window formed along the plane route and having a
predetermined angle to the reference plane to receive motion information
of the object on the motion window from the user; and an animation
implementation unit to implement an animation according to the received
motion information.

[0009] The animation authoring method according to the second preferred
embodiment of the present invention to achieve the above objective
comprises the steps of: (a) receiving a plane route of an object on a
predetermined reference plane from a user; (b) creating a first motion
window extended from the plane route and having a predetermined angle to
the reference plane to receive a main route of the object on the first
motion window from the user; (c) creating a plurality of second motion
windows passing through the main route to receive a detailed route of the
object on the second motion window from the user; and (d) implementing an
animation according to the received detailed route.

[0010] The animation authoring system according to the second preferred
embodiment of the present invention to achieve the above objective
comprises: a plane route module to receive a plane route of an object on
a predetermined reference plane from a user; a first motion window module
to create a first motion window extended from the plane route and having
a predetermined angle to the reference plane to receive a main route of
the object on the first motion window from the user; a second motion
window module to create a plurality of second motion windows passing
through the main route to receive a detailed route of the object on the
second motion window from the user; and an animation implementation unit
to implement an animation according to the received detailed route.

[0011] The present invention employing the above structure and methodology
has advantages in that non-professional users, including children and
beginners, may easily author three-dimensional animations since a user
may sketch routes and motions of an object with a simple input tool such
as a tablet pen, a mouse, and a touch input device so as to input the
information on the routes and motions.

[0012] The first preferred embodiment of the present invention employing
the above structure and methodology has advantages in that routes and
motions of an object may be accurately inputted since the motions of the
object are inputted after the routes of the object are inputted.

[0013] The second preferred embodiment of the present invention employing
the above structure and methodology has advantages in that the
information on motions of an object may be accurately inputted since
routes of the object are inputted via the reference plane, the first
motion window, and the second motion window.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1 is a block diagram illustrating an animation authoring
system according to the first preferred embodiment of the present
invention.

[0015] FIGS. 2a to 2g are diagrams illustrating the structure of the
animation authoring system of FIG. 1.

[0016] FIG. 3 is a block diagram illustrating an animation authoring
method according to the first preferred embodiment of the present
invention.

[0017] FIG. 4 is a block diagram illustrating an animation authoring
system according to the second preferred embodiment of the present
invention.

[0018] FIGS. 5a and 5b are diagrams illustrating the structure of the
animation authoring system of FIG. 4.

[0019] FIG. 6 is a block diagram illustrating an animation authoring
method according to the second preferred embodiment of the present
invention.

DETAILED DESCRIPTION OF THE INVENTION

[0020] Hereinafter, the animation authoring system and method according to
the preferred embodiments of the invention will be described below in
detail with reference to the accompanying drawings.

First Embodiment

[0021] First, the animation authoring system according to the first
preferred embodiment of the invention will be described below with
reference to FIGS. 1 to 2g.

[0022] Referring to FIG. 1, the animation authoring system according to
the first preferred embodiment of the invention comprises: a plane route
module (101) to provide a first screen displaying a reference plane to a
user and to receive a plane route of an object that the user draws on the
reference plane; a motion window module (102) to provide a second screen
displaying a motion window formed along the plane route to the user and
to receive gesture data and parameters when the user draws a gesture for
a motion of the object on the motion window; an analysis module (103) to
convert the gesture data to the corresponding one of pre-stored motion
data and then arrange the motion data together with the parameters in a
temporal manner to create a motion sequence; and an animation
implementation unit (104) to implement an animation according to the
motion sequence.

[0023] Each element of the animation authoring system according to the
first preferred embodiment of the invention employing the above structure
will be described below in detail.

[0025] The plane route sketching unit (101a) of the plane route module
(101) provides a first screen displaying a reference plane of a
three-dimensional space to a user, and samples, approximates and stores a
plane route of an object (e.g., a character having joints) drawn on the
reference plane of the first screen by the user using an input tool such
as a tablet pen, a mouse and a touch input device. At this time, a
uniform cubic B-spline interpolation method may be employed in sampling
and approximating the plane route, since such method has advantages in
having local modification properties and affine invariance properties.
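
For illustration only, the following Python sketch shows one way the sampled stroke points could be approximated with a uniform cubic B-spline as described above; it treats the raw pen samples directly as control points, and the function names and the sampling density are assumptions rather than part of the invention.

    def cubic_bspline_point(p0, p1, p2, p3, t):
        """Evaluate one uniform cubic B-spline segment at parameter t in [0, 1]."""
        b0 = (1 - t) ** 3 / 6.0
        b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
        b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
        b3 = t ** 3 / 6.0
        return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                     for a, b, c, d in zip(p0, p1, p2, p3))

    def approximate_plane_route(samples, points_per_segment=8):
        """Approximate raw (x, y) pen samples as a smooth plane route.

        The samples are used directly as control points; a real implementation
        would first down-sample the stroke.
        """
        route = []
        for i in range(len(samples) - 3):
            for k in range(points_per_segment):
                t = k / points_per_segment
                route.append(cubic_bspline_point(samples[i], samples[i + 1],
                                                 samples[i + 2], samples[i + 3], t))
        return route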

[0026] FIG. 2a illustrates how the plane route of the object is drawn by
the user using a tablet pen on the reference plane shown on the first
screen provided by the plane route sketching unit (101a). The reference
plane herein is a ground plane constructed by x- and y-axes, on the basis
of the three-dimensional space including x-, y- and z-axes.

[0027] When the user inputs a revised route crossing the previously drawn
plane route at least once, the plane route control unit (101b) of the
plane route module (101) divides the revised route into multiple domains
based on the crossing points, substitutes the longest one of the multiple
domains for a part of the plane route, and discards the rest of the
domains.
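
A minimal Python sketch of this revision rule follows, under the assumption that both routes are polylines of (x, y) points and that the revised stroke crosses the existing route at least twice; the segment-intersection test and the way the longest domain is spliced back into the original route are simplifications of my own, not the patented procedure itself.

    def segments_intersect(a1, a2, b1, b2):
        """True if segments a1-a2 and b1-b2 properly cross (collinear cases ignored)."""
        def cross(o, p, q):
            return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
        d1, d2 = cross(b1, b2, a1), cross(b1, b2, a2)
        d3, d4 = cross(a1, a2, b1), cross(a1, a2, b2)
        return d1 * d2 < 0 and d3 * d4 < 0

    def arc_length(points):
        return sum(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for p, q in zip(points, points[1:]))

    def revise_route(route, revised):
        """Splice the longest crossing-bounded piece of `revised` into `route`."""
        crossings = []  # pairs of (index on route, index on revised)
        for i in range(len(route) - 1):
            for j in range(len(revised) - 1):
                if segments_intersect(route[i], route[i + 1],
                                      revised[j], revised[j + 1]):
                    crossings.append((i, j))
        if len(crossings) < 2:
            return route  # at least two crossings are needed to bound a replacement
        cuts = sorted(j for _, j in crossings)
        # Domains of the revised stroke between consecutive crossings; keep the longest.
        domains = [revised[a:b + 1] for a, b in zip(cuts, cuts[1:])]
        longest = max(domains, key=arc_length)
        route_cuts = sorted(i for i, _ in crossings)
        # Replace the spanned part of the original route; the other domains are discarded.
        return route[:route_cuts[0] + 1] + longest + route[route_cuts[-1] + 1:]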

[0029] The gesture sketching unit (102a) of the motion window module (102)
creates a motion window along the plane route inputted via the plane
route module (101) and provides the user with a second screen displaying
the motion window using the virtual camera unit (102b), so that gesture
data and parameters are inputted when the user draws a gesture for a
motion of the object on the motion window using an input tool such as a
tablet pen, a mouse and a touch input device.

[0030] The parameters herein may include the speed and height of the
object, and the speed of the object corresponds to the speed of the user
drawing the gesture and the height of the object corresponds to the
height of the gesture drawn by the user. The basis of the height of the
gesture is the plane route drawn by the user.

[0031] FIG. 2b shows the motion windows displayed on the second screen
provided by the gesture sketching unit (102a) of the motion window module
(102).

[0032] The motion windows are surfaces which are vertically extended in an
upward direction along the plane route sketched on the reference plane
and formed with a predetermined vertical width.

[0033] FIG. 2b shows that the motion windows are perpendicular to the
reference plane and the plane route. However, this is for illustrative
purposes and shall not be construed to limit the present invention
thereto. The motion windows may have a predetermined angle instead of
being perpendicular to the reference plane and the plane route.
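
By way of illustration, a Python sketch of how such a motion-window surface might be generated from the plane route is given below; extruding each route point upward by the vertical width and tilting along the local route normal are assumptions about one possible construction, not the patented one.

    import math

    def build_motion_window(plane_route, vertical_width=2.0, tilt_deg=0.0):
        """Return (bottom, top) vertex rows of a motion-window surface.

        plane_route: list of (x, y) points on the reference (ground) plane.
        tilt_deg: 0 keeps the window perpendicular to the reference plane.
        """
        tilt = math.radians(tilt_deg)
        bottom, top = [], []
        for i, (x, y) in enumerate(plane_route):
            bottom.append((x, y, 0.0))
            # Tilt the window sideways along the local route normal.
            nx, ny = 0.0, 0.0
            if 0 < i < len(plane_route) - 1:
                dx = plane_route[i + 1][0] - plane_route[i - 1][0]
                dy = plane_route[i + 1][1] - plane_route[i - 1][1]
                length = math.hypot(dx, dy) or 1.0
                nx, ny = -dy / length, dx / length
            offset = vertical_width * math.sin(tilt)
            top.append((x + nx * offset, y + ny * offset,
                        vertical_width * math.cos(tilt)))
        return bottom, top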

[0034] The gesture sketching unit (102a) of the motion window module (102)
may receive gesture data for various motions including moving motions
(e.g., walking motions, running motions, jumping motions, etc.) and
standing motions (e.g., greeting motions, saluting motions, etc.) when
receiving the gesture data for the motions of the object from the user.

[0035] FIG. 2c shows a second screen shown to the user when the gesture
sketching unit (102a) receives gesture data for standing motions of the
object from the user.

[0036] Referring to FIG. 2c, when the user stays for a predetermined time
after drawing a line upward in a direction corresponding to the z-axis in
the three-dimensional space while sketching a gesture, the gesture
sketching unit (102a) shows a standing motion selection menu to the user
so that the user may select/input one of the standing motions.

[0037] The virtual camera unit (102b) of the motion window module (102)
comprises a virtual camera which moves along the moving directions of the
gestures, maintaining a distance to the motion windows, and records the
motion windows and the gestures sketched by the user so that they may be
displayed on the second screen for the user.

[0038] FIG. 2d shows how the virtual camera moves along the motion windows
as shown in FIG. 2b.

[0039] Referring to FIG. 2d, the virtual camera is positioned at a height
h corresponding to the half of the vertical width of the motion windows
with a predetermined distance d to the motion windows, and provides the
recorded images of the motion windows to the user via the second screen.
Here, a Catmull-Rom spline interpolation method may be employed in
displaying via the second screen the images of the motion windows
recorded by the virtual camera, and the Catmull-Rom spline interpolation
method has advantages in that a screen on which a user may easily sketch
gestures can be shown to the user since the Catmull-Rom spline passes the
control points on the motion windows.
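
The following Python sketch shows the uniform Catmull-Rom evaluation referred to above, used here to sample a camera path that passes through control points on the motion window; the offset of the camera from the window is omitted, and the step count is an assumption.

    def catmull_rom_point(p0, p1, p2, p3, t):
        """Uniform Catmull-Rom segment between p1 and p2, evaluated at t in [0, 1]."""
        return tuple(
            0.5 * ((2 * b) + (-a + c) * t
                   + (2 * a - 5 * b + 4 * c - d) * t ** 2
                   + (-a + 3 * b - 3 * c + d) * t ** 3)
            for a, b, c, d in zip(p0, p1, p2, p3))

    def camera_path(control_points, steps=16):
        """Sample a path that passes through every interior control point."""
        path = []
        for i in range(len(control_points) - 3):
            for k in range(steps):
                path.append(catmull_rom_point(control_points[i], control_points[i + 1],
                                              control_points[i + 2], control_points[i + 3],
                                              k / steps))
        return path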

[0040] FIG. 2e shows an example in which the virtual camera zooms out when
the gestures drawn by the user get out of the motion windows and another
example in which the virtual camera scrolls to the left or right when the
user sketches the gestures in a predetermined direction and then in an
opposite direction.

[0041] Referring to FIG. 2e, the virtual camera provides the user via the
second screen with a domain corresponding to the solid-line rectangle,
that is, a domain corresponding to the vertical width of the motion
windows, when the gestures drawn by the user are within the motion
windows, whereas it zooms out and provides the user via the second screen
with a domain corresponding to the dotted-line rectangle, that is, a
domain longer than the vertical width of the motion windows, when the
gestures drawn by the user get out of the motion windows.

[0042] Referring further to FIG. 2e, when the user sketches the gestures
in a predetermined direction (e.g., to the right in FIG. 2e) and then in
an opposite direction (e.g., to the left in FIG. 2e), the virtual camera
moves along the moving directions of the gestures sketched by the user
and provides the user with the recorded images of the motion windows via
the second screen by scrolling to a direction corresponding to the
opposite direction (e.g., to the left in FIG. 2e).
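
A minimal sketch of this camera behaviour, assuming a simple camera state kept in a dictionary; the field names, the zoomed-out extent, and the direction test are illustrative assumptions.

    def update_camera(camera, gesture_height, window_height, prev_dx, dx):
        """Adjust a simple camera state while the user sketches a gesture.

        gesture_height: height of the current gesture point above the plane route.
        prev_dx, dx: previous and current horizontal stroke directions.
        """
        # Zoom out so the visible band covers the gesture when it leaves the window.
        if gesture_height < 0.0 or gesture_height > window_height:
            overshoot = max(-gesture_height, gesture_height - window_height)
            camera["visible_height"] = window_height + 2 * overshoot
        else:
            camera["visible_height"] = window_height
        # When the stroke direction reverses, scroll toward the new direction.
        if prev_dx * dx < 0:
            camera["scroll_direction"] = 1 if dx > 0 else -1
        return camera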

[0043] When the virtual camera moves along the moving directions of the
gestures, maintaining a distance to the motion windows, it checks the
moving distance of the virtual camera relative to a predetermined length
of the motion windows and determines the corresponding domain to be bent
if the moving distance of the virtual camera is longer than a
predetermined threshold so that it moves along the shortest route instead
of following the motion windows.

[0044] In the upper parts of FIGS. 2f and 2g, examples are shown wherein
the motion windows have a domain bent at a predetermined angle, and in
the lower parts of FIGS. 2f and 2g, examples of the bent domains (a, d),
the moving routes of the virtual camera (b, e) and the camera offset
segments (c, f) are shown when the motion windows have a domain bent at a
predetermined angle. FIG. 2f shows an example wherein the virtual camera
moves along the outside of the bent domain (referred to as "outside
turn") when the motion windows have a domain bent at a predetermined
angle, and FIG. 2g shows an example wherein the virtual camera moves
along the inside of the bent domain (referred to as "inside turn") when
the motion windows have a domain bent at a predetermined angle.

[0045] Referring to FIG. 2f, when the two camera offset segments (c) do
not cross and the moving distance of the virtual camera (li) is
longer than a first threshold (l1), the virtual camera is determined
to be moving along the outside of the bent domain of the motion windows,
and moves along the shortest route instead of following the motion
windows to get out of the bent domain.

[0046] Referring to FIG. 2g, when the two camera offset segments (c) cross
and the moving distance of the virtual camera (li) is longer than a
second threshold (l2), the virtual camera is determined to be moving
along the inside of the bent domain of the motion windows, and moves
along the shortest route instead of following the motion windows to get
out of the bent domain. Here, the first threshold (l1) is larger
than the second threshold (l2).
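
The outside-turn and inside-turn tests can be summarized by the short Python sketch below; whether the two camera-offset segments cross is passed in as a boolean, and the two threshold values are placeholders chosen only so that the first threshold exceeds the second, as stated above.

    OUTSIDE_TURN_THRESHOLD = 4.0   # first threshold (l1), placeholder value
    INSIDE_TURN_THRESHOLD = 2.0    # second threshold (l2), smaller than l1

    def classify_bend(offset_segments_cross, camera_travel):
        """Classify a candidate bent domain as 'outside', 'inside', or not bent.

        offset_segments_cross: whether the two camera offset segments intersect.
        camera_travel: distance the camera moved over the checked window length.
        """
        if not offset_segments_cross and camera_travel > OUTSIDE_TURN_THRESHOLD:
            return "outside"   # camera should cut across the outside of the bend
        if offset_segments_cross and camera_travel > INSIDE_TURN_THRESHOLD:
            return "inside"    # camera should cut across the inside of the bend
        return None            # not bent; keep following the motion windows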

[0047] When the virtual camera is determined to be moving along the
outside or the inside of the bent domain, the motion window module (102)
indicates the position of the gesture being currently sketched by the
user with a pause mark and saves the current sketch status including the
sketch speed, and after the virtual camera moves via the shortest route
to get out of the bent domain, the motion window module (102) enables the
user to resume sketching in the previously saved sketch status from the
position indicated with the pause mark.

[0049] The gesture analysis unit (103a) of the analysis module (103)
receives gesture data and parameters from the motion window module (102),
partitions the gesture data (in which the unit of partition is a single
gesture), and then converts each of the partitioned gesture data into the
corresponding one of pre-stored motion data to create a motion sequence
by arranging the motion data together with the parameters in a temporal
manner. The gesture analysis unit (103a) may use a corner detection algorithm
in converting the gesture data into the corresponding one of the
pre-stored motion data.
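
As an informal illustration of this step, the Python sketch below splits a stroke at sharp corners, labels each partition with a crude shape heuristic, and maps it to an entry of a small motion library, falling back to a basic motion when no entry matches (compare paragraph [0052]); the motion library, the shape heuristic, and the thresholds are all assumptions.

    import math

    PRE_STORED_MOTIONS = {"flat": "walk", "arc": "jump", "spike": "salute"}  # assumed library

    def corner_indices(points, angle_threshold_deg=60.0):
        """Indices where the stroke turns sharply; used as partition boundaries."""
        corners = []
        for i in range(1, len(points) - 1):
            ax, ay = points[i][0] - points[i - 1][0], points[i][1] - points[i - 1][1]
            bx, by = points[i + 1][0] - points[i][0], points[i + 1][1] - points[i][1]
            turn = math.degrees(abs(math.atan2(by, bx) - math.atan2(ay, ax)))
            if min(turn, 360.0 - turn) > angle_threshold_deg:
                corners.append(i)
        return corners

    def classify_partition(partition):
        """Very rough shape label for one gesture partition (assumed heuristic)."""
        rise = max(p[1] for p in partition) - min(p[1] for p in partition)
        run = abs(partition[-1][0] - partition[0][0])
        if rise < 0.2 * max(run, 1e-6):
            return "flat"
        return "arc" if partition[-1][1] <= partition[0][1] + 0.2 * rise else "spike"

    def build_motion_sequence(points, speeds):
        """Convert partitioned gesture data into a time-ordered motion sequence."""
        cuts = [0] + corner_indices(points) + [len(points) - 1]
        sequence = []
        for a, b in zip(cuts, cuts[1:]):
            shape = classify_partition(points[a:b + 1])
            motion = PRE_STORED_MOTIONS.get(shape, "basic_motion")  # basic-motion fallback
            sequence.append({"motion": motion,
                             "speed": sum(speeds[a:b]) / max(b - a, 1)})
        return sequence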

[0050] The pre-stored motion data include multiple possible motions of the
object, and if the gesture data corresponding to the motion data for
similar motions are similar, the user may sketch gestures more easily
through the gesture sketching unit (102a) of the motion window module
(102).

[0051] When incorporating the parameters into the motion sequence, the
gesture analysis unit (103a) of the analysis module (103) may make the
change curve of the speed parameter more gradual during a predetermined
period corresponding to the boundaries between the motion data so that a
more natural motion of the object may be created when the animation is
implemented later.
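
One possible way to make the speed curve change gradually around the motion boundaries is sketched below; the window size and the linear blend are assumptions, shown only to make the idea concrete.

    def smooth_speed_at_boundaries(speed_curve, boundary_indices, window=5):
        """Return a copy of speed_curve blended linearly around each motion boundary."""
        smoothed = list(speed_curve)
        for b in boundary_indices:
            lo, hi = max(b - window, 0), min(b + window, len(speed_curve) - 1)
            for i in range(lo, hi + 1):
                t = (i - lo) / max(hi - lo, 1)
                smoothed[i] = (1 - t) * speed_curve[lo] + t * speed_curve[hi]
        return smoothed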

[0052] When, among the gesture data inputted by the user, there are any
gesture data that cannot be recognized by the animation authoring system
of the present invention, the gesture control unit (103b) of the
analysis module (103) substitutes gesture data corresponding to
pre-stored basic motions for the unrecognizable data.

[0053] That is, the gesture control unit (103b) determines whether each of
the gesture data from the gesture analysis unit (103a) does not
correspond to at least one of the pre-stored motion data and thus is
unrecognizable, and if there are any gesture data that are determined to
be unrecognizable, substitutes the gesture data corresponding to the
pre-stored basic motions for the unrecognizable data and then output them
to the gesture analysis unit (103a).

[0054] Referring to FIG. 1, the animation implementation unit (104)
implements a three-dimensional animation using the motion sequence
received from the analysis module (103).

[0055] Hereafter, the animation authoring method according to the first
preferred embodiment of the present invention will be described with
reference to FIG. 3.

[0056] First, the plane route module (101) provides a first screen
displaying a reference plane to a user, and samples, approximates and
stores a plane route of an object sketched on the reference plane of the
first screen by the user using an input tool such as a tablet pen, a
mouse and a touch input device (S101). At this time, a uniform cubic
B-spline interpolation method may be employed in sampling and
approximating the plane route.

[0057] Then, the plane route module (101) determines whether the user
inputs a revised route crossing the previously drawn plane route at least
once (S102), and if a revised route is determined to be inputted, divides
the revised route into multiple domains based on the crossing points and
substitutes the longest one of the multiple domains for a part of the
plane route (S103).

[0058] Next, the motion window module (102) creates a motion window along
the plane route inputted via the plane route module (101) and provides
the user with a second screen displaying the motion window using a
virtual camera, so that gesture data and parameters are inputted when the
user sketches a gesture for a motion of the object on the motion window
using an input tool such as a tablet pen, a mouse and a touch input
device (S105).

[0059] The motion window herein may have a predetermined angle instead of
being perpendicular to the reference plane and the plane route.

[0060] The parameters herein may include the speed and height of the
object, and the speed of the object corresponds to the speed of the user
drawing the gesture and the height of the object corresponds to the
height of the gesture drawn by the user.

[0061] The motion window module (102) may receive gesture data for various
motions including moving motions (e.g., walking motions, running motions,
jumping motions, etc.) and standing motions (e.g., greeting motions,
saluting motions, etc.) when receiving the gesture data for the motions
of the object from the user. When the user stays for a predetermined time
after drawing a line upward in a direction corresponding to the z-axis in
the three-dimensional space while sketching a gesture, the motion window
module shows a standing motion selection menu to the user so that the
user may select/input one of the standing motions.

[0062] The virtual camera moves along the moving directions of the
gestures, maintaining a distance to the plane route, and records the
motion windows and the gestures sketched by the user so that they may be
displayed on the second screen for the user.

[0063] The virtual camera is positioned at a height corresponding to the
half of the vertical width of the motion windows with a predetermined
distance to the motion windows, and provides the recorded image of the
motion windows to the user via the second screen. Here, a Catmull-Rom
spline interpolation method may be employed in displaying via the second
screen the images of the motion windows recorded by the virtual camera.

[0064] The virtual camera provides the user via the second screen with a
domain corresponding to the vertical width of the motion windows when the
gestures sketched by the user are within the motion windows, whereas it
zooms out and provides the user via the second screen with a domain
longer than the vertical width of the motion windows when the gestures
drawn by the user get out of the motion windows.

[0065] And, when the user sketches the gestures in a predetermined
direction and then in an opposite direction, the virtual camera moves
along the moving directions of the gestures sketched by the user and
provides the user with the recorded images of the motion windows via the
second screen by scrolling to a direction corresponding to the opposite
direction.

[0066] When the virtual camera moves along the moving directions of the
gestures, maintaining a distance to the motion windows, it checks the
moving distance of the virtual camera relative to a predetermined length
of the motion windows and determines the corresponding domain to be bent
if the moving distance of the virtual camera is longer than a
predetermined threshold so that it moves along the shortest route instead
of following the motion windows. The detailed description thereon can be
found in the above description on the animation authoring system
according to the first preferred embodiment of the present invention with
reference to FIGS. 2f and 2g.

[0067] When the virtual camera is determined to be moving along the bent
domain, the motion window module indicates the position of the gesture
being currently sketched by the user with a pause mark and saves the
current sketch status including the sketch speed, and after the virtual
camera moves via the shortest route to get out of the bent domain, the
motion window module enables the user to resume sketching in the
previously saved sketch status from the position indicated with the pause
mark.

[0068] Then, the analysis module (103) partitions gesture data received
from the motion window module (in which the unit of partition is a single
gesture) (S106).

[0069] Next, the analysis module (103) determines whether each of the
partitioned gesture data does not correspond to at least one of
pre-stored motion data and thus is unrecognizable (S107).

[0070] Then, if there are any partitioned gesture data that are determined
to be unrecognizable since they do not correspond to at least one of the
pre-stored motion data, the analysis module (103) substitutes gesture
data corresponding to pre-stored basic motions for the unrecognizable
data (S108), whereas it proceeds to the next step (S109) if each of the
partitioned gesture data is determined to be recognizable.

[0071] Next, the analysis module (103) converts each of the gesture data
into the corresponding one of pre-stored motion data to create a motion
sequence by arranging the motion data together with the parameters from
the motion window module in a temporal manner (S110). The analysis module
(103) may use a corner detection algorithm in converting the gesture data
into the corresponding one of the pre-stored motion data.

[0072] When incorporating the parameters into the motion sequence, the
analysis module (103) may make the change curve of the speed parameter
more gradual during a predetermined period corresponding to the
boundaries between the motion data so that a more natural motion of the
object may be created when the animation is implemented later.

[0073] Then, the animation implementation unit (104) implements a
three-dimensional animation using the motion sequence received from the
analysis module (103) (S111).

[0074] Next, the motion window module (102) determines whether the user
completed the sketch of the gestures for the motions of the object
(S112), and terminates the animation authoring if the sketch is
determined to be completed, whereas the above-mentioned step of
partitioning the gesture data received from the motion window module
(102) (S106) and the following steps are repeated if the sketch is
determined to be incomplete.

Second Embodiment

[0075] Hereinafter, the animation authoring system and method according to
the second preferred embodiment of the present invention will be
described in detail with reference to the accompanying drawings.

[0076] First, referring to FIG. 4 to FIG. 5b, the animation authoring
system according to the second preferred embodiment of the invention will
be described below. In describing the animation authoring system
according to the second preferred embodiment of the invention, reference
can be made to FIGS. 2a, 2b, and 2d to 2g, which are related to the
animation authoring system according to the first embodiment of the
invention.

[0077] Referring to FIG. 4, the animation authoring system according to
the second preferred embodiment of the present invention comprises: a plane
route module (101) to receive a plane route of an object on a
predetermined reference plane from a user; a first motion window module
(102) to create a first motion window extended from the plane route and
having a predetermined angle to the reference plane to receive a main
route of the object on the first motion window from the user; a second
motion window module (103) to create a plurality of second motion windows
passing through the main route to receive a detailed route of the object
on the second motion window from the user; and an animation
implementation unit (104) to implement an animation according to the
received detailed route.

[0078] Each element of the animation authoring system according to the
second preferred embodiment of the invention employing the above structure
will be described below in detail.

[0080] The plane route sketching unit (101a) of the plane route module
(101) provides a first screen displaying a reference plane of a
three-dimensional space to a user, and samples, approximates and stores a
plane route of an object (e.g., an object without joints such as an
airplane, etc.) drawn on the reference plane of the first screen by the
user using an input tool such as a tablet pen, a mouse and a touch input
device. At this time, a uniform cubic B-spline interpolation method may
be employed in sampling and approximating the plane route, since such
method has advantages in having local modification properties and affine
invariance properties.

[0081] FIG. 2a illustrates how the plane route of the object is drawn by
the user using a tablet pen on the reference plane shown on the first
screen provided by the plane route sketching unit (101a). The reference
plane herein is a ground plane constructed by x- and y-axes, on the basis
of the three-dimensional space including x-, y- and z-axes.

[0082] When the user inputs a revised route crossing the previously drawn
plane route at least once, the plane route control unit (101b) of the
plane route module (101) divides the revised route into multiple domains
based on the crossing points, substitutes the longest one of the multiple
domains for a part of the plane route, and discards the rest of the
domains.

[0083] Referring to FIG. 4, the first motion window module (102) comprises
a main route sketching unit (102a) and a first virtual camera unit
(102b).

[0084] The main route sketching unit (102a) of the first motion window
module (102) creates a first motion window along the plane route inputted
via the plane route module (101) and provides the user with a second
screen displaying the motion window using the first virtual camera unit
(102b), so that information on the main route and the speed of the object
is inputted when the user sketches the main route of the object on the
first motion window using an input tool such as a tablet pen, a mouse and
a touch input device. The speed information herein corresponds to the
speed of the user drawing the main route.

[0085] In describing the present invention, an object without joints such
as an airplane is exemplified as an object to be animated, but the
invention is not limited thereto. The object to be animated may be an
object with joints as long as it falls within the scope of the invention,
and in case of an object having joints, the main route sketching unit
(102a) of the first motion window module (102) may receive information on
the main route and the speed of the object as well as information on the
gesture and the height of the object from the user.

[0086] FIG. 2b shows the first motion windows shown in the second screen
provided by the main route sketching unit (102a) of the first motion
window module (102).

[0087] The first motion windows are surfaces which are vertically extended
in an upward direction perpendicular to the plane route sketched on the
reference plane and formed with a predetermined vertical width.

[0088] FIG. 2b shows that the first motion windows are perpendicular to
the reference plane and the plane route. However, this is for illustrative
purposes and shall not be construed to limit the present invention
thereto. The first motion windows may have a predetermined angle instead
of being perpendicular to the reference plane and the plane route as
necessary.

[0089] The first virtual camera unit (102b) of the first motion window
module (102) comprises a first virtual camera which moves along the
moving direction of the main route, maintaining a distance to the motion
windows, and records the first motion windows and the main route sketched
by the user so that they may be displayed on the second screen for the
user.

[0090] FIG. 2d shows how the first virtual camera moves along the first
motion windows as shown in FIG. 2b.

[0091] Referring to FIG. 2d, the first virtual camera is positioned at a
height h corresponding to the half of the vertical width of the first
motion windows with a predetermined distance d to the first motion
windows, and provides the recorded images of the first motion windows to
the user via the second screen. Here, a Catmull-Rom spline interpolation
method may be employed in displaying via the second screen the images of
the first motion windows recorded by the first virtual camera, and the
Catmull-Rom spline interpolation method has advantages in that a screen
on which a user may easily sketch the main route can be shown to the user
since the Catmull-Rom spline passes the control points on the first
motion windows.

[0092] FIG. 2e shows an example in which the first virtual camera zooms
out when the main route drawn by the user gets out of the first motion
windows and another example in which the first virtual camera scrolls to
the left or right when the user sketches the main route in a
predetermined direction and then in an opposite direction.

[0093] Referring to FIG. 2e, the first virtual camera provides the user
via the second screen with a domain corresponding to the solid-line
rectangle, that is, a domain corresponding to the vertical width of the
first motion windows, when the main route drawn by the user is within the
first motion windows, whereas it zooms out and provides the user via the
second screen with a domain corresponding to the dotted-line rectangle,
that is, a domain longer than the vertical width of the first motion
windows, when the main route drawn by the user gets out of the first
motion windows.

[0094] Referring further to FIG. 2e, when the user sketches the main route
in a predetermined direction (e.g., to the right in FIG. 2e) and then in
an opposite direction (e.g., to the left in FIG. 2e), the first virtual
camera moves along the moving direction of the main route sketched by the
user and provides the user with the recorded images of the first motion
windows via the second screen by scrolling to a direction corresponding
to the opposite direction (e.g., to the left in FIG. 2e).

[0095] When the first virtual camera moves along the moving direction of
the main route, maintaining a distance to the first motion windows, it
checks the moving distance of the first virtual camera relative to a
predetermined length of the first motion windows and determines the
corresponding domain to be bent if the moving distance of the first
virtual camera is longer than a predetermined threshold so that it moves
along the shortest route instead of following the first motion windows.

[0096] In the upper parts of FIGS. 2f and 2g, examples are shown wherein
the first motion windows have a domain bent at a predetermined angle, and
in the lower parts of FIGS. 2f and 2g, examples of the bent domains (a,
d), the moving routes of the virtual camera (b, e) and the camera offset
segments (c, f) are shown when the first motion windows have a domain
bent at a predetermined angle. FIG. 2f shows an example wherein the first
virtual camera moves along the outside of the bent domain (referred to as
"outside turn") when the first motion windows have a domain bent at a
predetermined angle, and FIG. 2g shows an example wherein the first
virtual camera moves along the inside of the bent domain (referred to as
"inside turn") when the first motion window have a domain bent at a
predetermined angle.

[0097] Referring to FIG. 2f, when the two camera offset segments (c) do
not cross and the moving distance of the virtual camera (li) is
longer than a first threshold (l1), the first virtual camera is
determined to be moving along the outside of the bent domain of the first
motion windows, and moves along the shortest route instead of following the
first motion windows to get out of the bent domain.

[0098] Referring to FIG. 2g, when the two camera offset segments (c) cross
and the moving distance of the virtual camera (li) is longer than a
second threshold (l2), the first virtual camera is determined to be
moving along the inside of the bent domain of the first motion windows,
and moves along the shortest route instead of following the first motion
windows to get out of the bent domain. Here, the first threshold
(l1) is larger than the second threshold (l2).

[0099] When the first virtual camera is determined to be moving along the
outside or the inside of the bent domain, the first motion window module
(102) indicates the position of the main route being currently sketched
by the user with a pause mark and saves the current sketch status
including the sketch speed, and after the first virtual camera moves via
the shortest route to get out of the bent domain, the first motion window
module (102) enables the user to resume sketching in the previously saved
sketch status from the position indicated with the pause mark.

[0101] The detailed route sketching unit (103a) of the second motion
window module (103) successively creates a plurality of second motion
windows passing through the main route at the center thereof with an
interval therebetween and provides the user with the third screen
displaying the second motion windows via the second virtual camera unit
(103b), so that a detailed route of the object is inputted, approximated
and stored when the user sketches the detailed route of the object on the
second motion windows using an input tool such as a tablet pen, a mouse
and a touch input device. At this time, a uniform cubic B-spline
interpolation method may be employed in sampling and approximating the
detailed route.

[0102] The intervals between the plurality of the second motion windows
created in the detailed route sketching unit (103a) are determined by the
speed information inputted via the first motion window module (102).

[0103] That is, the detailed route sketching unit (103a) of the second
motion window module (103) displays the second motion windows on the
third screen with a relatively large interval therebetween in the domains
corresponding to high speeds in the main route inputted via the first
motion window module (102), and displays the second motion windows on the
third screen with a relatively short interval therebetween in the domains
corresponding to low speeds.
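
A hedged Python sketch of this spacing rule follows; the base spacing, the normalization by the average speed, and the lower bound that keeps the loop advancing are assumptions, not values taken from the patent.

    def second_window_positions(route_length, speeds, base_spacing=1.0):
        """Arc-length positions of the second motion windows along the main route.

        speeds: sampled speeds of the user's main-route stroke (non-empty list).
        Fast domains get widely spaced windows, slow domains get closely spaced ones.
        """
        avg_speed = sum(speeds) / len(speeds) or 1.0
        positions, s, i = [], 0.0, 0
        while s < route_length:
            positions.append(s)
            local = speeds[min(i, len(speeds) - 1)]
            # Interval grows with local speed; the floor keeps the loop advancing.
            s += base_spacing * max(local, 0.1 * avg_speed) / avg_speed
            i += 1
        return positions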

[0104] The detailed route of the object sketched on the second motion
windows represents a detailed movement (e.g., a spiral movement) that the
object makes around the main route while moving along the main route.

[0105] In describing the present invention, an object without joints such
as an airplane is exemplified as an object to be animated, but the
invention is not limited thereto. The object to be animated may be an
object with joints as long as it falls within the scope of the invention,
and in case of an object having joints, the detailed route sketching unit
(103a) of the second motion window module (103) may receive information
on the detailed route and the speed of the object as well as information
on the gesture and the height of the object from the user.

[0106] FIG. 5a shows examples of the second motion windows successively
created with an interval therebetween in the detailed route sketching
unit (103a) of the second motion window module (103), and also shows how
the second virtual camera records the images of the second motion
windows, moving forward (or backward) as the second motion windows are
created.

[0107] If the first motion windows formed along the plane route are
heavily bent so that some domains of the second motion windows are
overlapped, a problem may arise as the detailed route sketched on the
second motion windows may be recognized as an undesirable reverse
movement. In order to demonstrate a solution to this problem carried out
in the second motion window module (103), FIG. 5b shows a plan view
illustrating the second motion windows from the (+) direction toward the
(-) direction of the z-axis in FIG. 5a for illustrative purposes.

[0108] Referring to FIG. 5b, if the first motion windows formed along the
plane route are heavily bent so that some domains of the second motion
windows are overlapped, the detailed route sketching unit (103a) of the
second motion window module (103) may recognize the detailed route as a
reverse movement against the intent of the user when the user sketches
the detailed route in the overlapping domains. In order to solve this
problem, the detailed route sketching unit (103a) monitors for every
detailed route sketched on the second motion windows (e.g., P1 to P6 in
FIG. 5b) the tangent values between the detailed routes sketched on a
predetermined number of the previous and subsequent successive second
motion windows. When an undesirable reverse movement is detected, the
detailed route causing the reverse movement (e.g., P4 in FIG. 5b) is
ignored (deleted). Hereby, a problem such as an abnormal movement of the
object against the intent of the user may not arise when a
three-dimensional animation of the object is implemented later.
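
For illustration, the sketch below drops detailed-route points that would make the object step backwards along the main route relative to their neighbours; using the dot product against the local main-route tangent stands in for the tangent-value check described above and is an assumption.

    def drop_reverse_points(detail_points, main_tangents):
        """Filter detailed-route points that move against the main-route direction.

        detail_points: one 3D point per second motion window (non-empty list).
        main_tangents: unit tangent of the main route at each window.
        """
        kept = [detail_points[0]]
        for i in range(1, len(detail_points)):
            step = tuple(a - b for a, b in zip(detail_points[i], kept[-1]))
            forward = sum(s * t for s, t in zip(step, main_tangents[i]))
            if forward >= 0:
                kept.append(detail_points[i])   # advances along the main route
            # points with forward < 0 would cause a reverse movement and are ignored
        return kept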

[0109] Referring to FIG. 4, the animation implementation unit (104)
performs a beautification process on a three-dimensional route according
to the detailed routes received from the above-mentioned second motion
window module (103) and then implements an animation according to the
beautified three-dimensional route.

[0110] When the above object is an object with joints, the detailed routes
received from the second motion window module (103) as well as the
information on the gesture and the height received from the first motion
window module (102) or the second motion window module (103) are applied
in implementing the animation.

[0111] Hereafter, the animation authoring method according to the second
preferred embodiment of the present invention will be described with
reference to FIG. 6.

[0112] First, the plane route module (101) provides a first screen
displaying a reference plane to a user, and samples, approximates and
stores a plane route of the object (e.g., an object without joints such
as an airplane, etc.) sketched on the reference plane of the first screen
by the user using an input tool such as a tablet pen, a mouse and a touch
input device (S101). At this time, a uniform cubic B-spline interpolation
method may be employed in sampling and approximating the plane route.

[0113] Then, the plane route module (101) determines whether the user
inputs a revised route crossing the previously drawn plane route at least
once (S102), and if a revised route is determined to be inputted, divides
the revised route into multiple domains based on the crossing points and
substitutes the longest one of the multiple domains for a part of the
plane route (S103).

[0114] Next, the first motion window module (102) creates a first motion
window along the plane route inputted via the plane route module (101)
and provides the user with a second screen displaying the first motion
window using a first virtual camera, so that information on the main
route and the speed of the object are inputted when the user sketches the
main route of the object on the first motion window using an input tool
such as a tablet pen, a mouse and a touch input device (S105). The speed
information herein corresponds to the speed of the user drawing the main
route.

[0115] In describing the present invention, an object without joints such
as an airplane is exemplified as an object to be animated, but the
invention is not limited thereto. The object to be animated may be an
object with joints as long as it falls within the scope of the invention,
and in case of an object having joints, the main route sketching unit
(102a) of the first motion window module (102) may receive information on
the main route and the speed of the object as well as information on the
gesture and the height of the object from the user.

[0116] The first motion windows may be perpendicular to, or have a
predetermined angle to, the reference plane and the plane route.

[0117] And, the first virtual camera moves along the moving direction of
the main route, maintaining a distance to the plane route, and records
the first motion windows and the main route sketched by the user so that
they may be displayed on the second screen for the user.

[0118] The first virtual camera is positioned at a height corresponding to
the half of the vertical width of the first motion windows with a
predetermined distance d to the first motion windows, and provides the
recorded images of the first motion windows to the user via the second
screen. Here, a Catmull-Rom spline interpolation method may be employed
in displaying via the second screen the images of the first motion
windows recorded by the first virtual camera.

[0119] The first virtual camera provides the user via the second screen
with a domain corresponding to the vertical width of the first motion
windows when the main route sketched by the user is within the first
motion windows, whereas it zooms out and provides the user via the second
screen with a domain longer than the vertical width of the first motion
windows when the main route drawn by the user gets out of the first
motion windows.

[0120] When the user sketches the main route in a predetermined direction
and then in an opposite direction, the first virtual camera moves along
the moving direction of the main route sketched by the user and provides
the user with the recorded images of the first motion windows via the
second screen by scrolling to a direction corresponding to the opposite
direction.

[0121] When the first virtual camera moves along the moving direction of
the main route, maintaining a distance to the first motion windows, it
checks the moving distance of the first virtual camera relative to a
predetermined length of the first motion windows and determines the
corresponding domain to be bent if the moving distance of the first
virtual camera is longer than a predetermined threshold, so that it moves
along the shortest route instead of following the first motion windows.
The detailed description thereon can be found in the above description on
the animation authoring system according to the embodiments of the
invention with reference to FIGS. 2f and 2g.

[0122] When the first virtual camera is determined to be moving along the
bent domain of the first motion windows, the first motion window module
indicates the position of the main route being currently sketched by the
user with a pause mark and saves the current sketch status including the
sketch speed, and after the first virtual camera moves via the shortest
route to get out of the bent domain, the first motion window module
enables the user to resume sketching in the previously saved sketching
status from the position indicated with the pause mark.

[0123] Then, it is determined whether a detailed route for the main route
of the object will be inputted (S106).

[0124] Next, if the detailed route for the main route of the object is
determined to be inputted, the second motion window module (103)
successively creates a plurality of second motion windows passing through
the main route inputted via the first motion window module (102) at the
center thereof with an interval therebetween (S107) and provides the user
with the third screen displaying the second motion windows via the second
virtual camera, so that a detailed route of the object is inputted when
the user sketches the detailed route of the object on the second motion
windows using an input tool such as a tablet pen, a mouse and a touch
input device (S108), and then the inputted detailed route of the object
is approximated and stored before proceeding to the next step (S109). On
the other hand, if the detailed route for the main route of the object is
determined not to be inputted, the next step proceeds without receiving
the detailed route for the main route of the object.

[0125] The uniform cubic B-spline interpolation method may be employed in
sampling and approximating the detailed route.

[0126] The intervals between the plurality of the second motion windows
created in the second motion window module are determined by the speed
information inputted via the first motion window module (102).

[0127] That is, the second motion window module (103) displays the second
motion windows on the third screen with a relatively large interval
therebetween in the domains corresponding to high speeds in the main
route inputted via the first motion window module (102), and displays the
second motion windows on the third screen with a relatively short
interval therebetween in the domains corresponding to low speeds.

[0128] The detailed route of the object sketched on the second motion
windows represents a detailed movement (e.g., a spiral movement) that the
object makes around the main route while moving along the main route.

[0129] In describing the present invention, an object without joints such
as an airplane is exemplified as an object to be animated, but the
invention is not limited thereto. The object to be animated may be an
object with joints as long as it falls within the scope of the invention,
and in case of an object having joints, the detailed route sketching unit
(103a) of the second motion window module (103) may receive the detailed
route of the object as well as information on the gesture and the height
of the object from the user.

[0130] And when the second motion window module (103) receives the
detailed route on the second motion window from the user, it monitors for
every detailed route sketched on the second motion windows the tangent
values between the detailed routes sketched on a predetermined number of
the previous and subsequent successive second motion windows. When a
reverse movement is detected, the detailed route causing the reverse
movement is ignored (deleted). Hereby, even though the first motion
windows are heavily bent so that some domains of the second motion
windows are overlapped and the user sketches the detailed route on the
overlapped domains, the second motion window module (103) recognizes the
detailed route coincident with the intent of the user without errors.

[0131] Next, the animation implementation unit (104) performs a
beautification process on a three-dimensional route according to the
detailed route received from the second motion window module (103) or on
a three-dimensional route according to the main route received from the
first motion window module (102) (S109), and then implements an animation
according to the beautified three-dimensional route (S110).

[0132] When the above object is an object with joints, the detailed route
received from the second motion window module (103) as well as the
information of the gesture and the height received from the first motion
window module (102) or the second motion window module (103) are applied
in implementing the animation.

[0134] Although explanatory embodiments have been shown and described, it
would be appreciated by those skilled in the art that various changes,
alterations, and substitutions can be made in the embodiments without
departing from the spirit of the invention. Therefore, it shall be
understood that the embodiments and the accompanying drawings are to
illustrate the invention, and the scope and the spirit of the invention
shall not be construed to be limited to these embodiments and the
accompanying drawings. The scope of the invention may only be interpreted
by the claims below, and the claims below and their equivalents will fall
within the scope of the spirit of the invention.