
Abstract:

The present invention makes it possible to obtain processing results which
are more accurate and of higher precision as to events of the real world.
An inverse gamma correction unit 5001 applies inverse gamma correction to
an input image in which light signals of the actual world have been
projected, in which a part of the continuity of the light signals of the
actual world has been lost, and which has been subjected to gamma
correction. A data continuity detecting unit 101 detects, in the input
image subjected to inverse gamma correction, data continuity corresponding
to the continuity of the light signals of the real world. An actual world
estimating unit 102 estimates the light signals by estimating the
continuity of the light signals of the real world, based on the data
continuity. An image generating unit 103 generates an output image from
the estimated light signals. A presentation system property correcting
unit 5002 applies correction corresponding to the properties of the
presentation system to the generated output image. The present invention
can be applied to an image processing device for generating images with
higher resolution.

Claims:

1. (canceled)

2. An image processing device comprising: inverse gamma correction means
for applying inverse gamma correction to first image data which has been
subjected to gamma correction and in which light signals of the real
world, having continuity in that, at any arbitrary position in the
length-wise direction of an object in the distribution of intensity of
light from the object, the cross-sectional shape, as the change in level
corresponding to the change in position in the direction orthogonal to
the length-wise direction, is the same, are projected so that a part of
the continuity of the light signals of the real world is lost, and
outputting the first image data subjected to inverse gamma correction;
image data continuity detecting means for detecting continuity of the
image data, in that a constant characteristic is obtained in a
predetermined dimensional direction, which has changed from the
continuity of the light signals of the real world, within the first image
data output from the inverse gamma correction means; actual world
estimating means for estimating the light signals by estimating the
continuity of the light signals of the real world, based on the
continuity of the image data detected by the image data continuity
detecting means; image generating means for generating second image data
from the estimated light signals; and presentation system property
correcting means for applying correction corresponding to the properties
of a presentation system to the second image data generated by the image
generating means.

3. The image processing device according to claim 2, the image data
continuity detecting means including: discontinuous portion detecting
means for detecting a discontinuous portion of a plurality of pixel
values within the first image data; peak detecting means for detecting
the peak of change of the pixel values from the discontinuous portion;
monotonous increase/decrease region detecting means for detecting a
monotonous increase/decrease region wherein the pixel values are
increasing or decreasing monotonously from the peak; and continuousness
detecting means for detecting a monotonous increase/decrease region, for
which another monotonous increase/decrease region exists at an adjacent
position, as a continuity region having the continuity of the image data.

4. An image processing method comprising: an inverse gamma correction
step for applying inverse gamma correction to first image data which has
been subjected to gamma correction and in which light signals of the real
world, having continuity in that, at any arbitrary position in the
length-wise direction of an object in the distribution of intensity of
light from the object, the cross-sectional shape, as the change in level
corresponding to the change in position in the direction orthogonal to
the length-wise direction, is the same, are projected so that a part of
the continuity of the light signals of the real world is lost, and
outputting the first image data subjected to inverse gamma correction; an
image data continuity detecting step for detecting continuity of the
image data, in that a constant characteristic is obtained in a
predetermined dimensional direction, which has changed from the
continuity of the light signals of the real world, within the first image
data output in the inverse gamma correction step; an actual world
estimating step for estimating the light signals by estimating the
continuity of the light signals of the real world, based on the
continuity of the image data detected in the image data continuity
detecting step; an image generating step for generating second image data
from the estimated light signals; and a presentation system property
correcting step for applying correction corresponding to the properties
of a presentation system to the second image data generated in the image
generating step.

5. A recording medium having recorded thereon a computer-readable program
comprising: an inverse gamma correction step for applying inverse gamma
correction to first image data which has been subjected to gamma
correction and in which light signals of the real world, having
continuity in that, at any arbitrary position in the length-wise
direction of an object in the distribution of intensity of light from the
object, the cross-sectional shape, as the change in level corresponding
to the change in position in the direction orthogonal to the length-wise
direction, is the same, are projected so that a part of the continuity of
the light signals of the real world is lost, and outputting the first
image data subjected to inverse gamma correction; an image data
continuity detecting step for detecting continuity of the image data, in
that a constant characteristic is obtained in a predetermined dimensional
direction, which has changed from the continuity of the light signals of
the real world, within the first image data output in the inverse gamma
correction step; an actual world estimating step for estimating the light
signals by estimating the continuity of the light signals of the real
world, based on the continuity of the image data detected in the image
data continuity detecting step; an image generating step for generating
second image data from the estimated light signals; and a presentation
system property correcting step for applying correction corresponding to
the properties of a presentation system to the second image data
generated in the image generating step.

6. A computer-readable program comprising: an inverse gamma correction
step for applying inverse gamma correction to first image data which has
been subjected to gamma correction and in which light signals of the real
world, having continuity in that, at any arbitrary position in the
length-wise direction of an object in the distribution of intensity of
light from the object, the cross-sectional shape, as the change in level
corresponding to the change in position in the direction orthogonal to
the length-wise direction, is the same, are projected so that a part of
the continuity of the light signals of the real world is lost, and
outputting the first image data subjected to inverse gamma correction; an
image data continuity detecting step for detecting continuity of the
image data, in that a constant characteristic is obtained in a
predetermined dimensional direction, which has changed from the
continuity of the light signals of the real world, within the first image
data output in the inverse gamma correction step; an actual world
estimating step for estimating the light signals by estimating the
continuity of the light signals of the real world, based on the
continuity of the image data detected in the image data continuity
detecting step; an image generating step for generating second image data
from the estimated light signals; and a presentation system property
correcting step for applying correction corresponding to the properties
of a presentation system to the second image data generated in the image
generating step.

Description:

[0001]This application is a continuation of and claims the benefit of
priority under 35 U.S.C. § 120 from U.S. Ser. No. 10/546,510, filed May
24, 2006, the entire contents of which are incorporated herein by
reference. U.S. Ser. No. 10/546,510 is the National Stage of PCT
Application No. PCT/JP04/01581, filed Feb. 13, 2004. This application is
also based upon and claims the benefit of priority under 35 U.S.C. § 119
from the Japanese Patent Application No. 2003-048018, filed Feb. 25,
2003.

BACKGROUND OF THE INVENTION

[0002]1. Technical Field

[0003]The present invention relates to an image processing device and
method, and a program, and particularly relates to an image processing
device and method, and program, taking into consideration the real world
where data has been acquired.

[0004]2. Background Art

[0005]Technology for detecting phenomena in the actual world (real world)
with sensors and processing sampling data output from the sensors is
widely used. For example, image processing technology, wherein the actual
world is imaged with an imaging sensor and the sampling data which is the
image data is processed, is widely employed.

[0006]Also, video cameras have a built-in gamma correction circuit, and
subject the output image data to gamma correction processing. Japanese
Unexamined Patent Application Publication No. 10-233942 discloses image
data output from a video camera being subjected to inverse gamma
correction processing by an inverse gamma correction circuit, so as to be
formed into image data with linear properties. Following interpolation
filter processing, such as enlargement, reduction, pixel-number
conversion processing, or the like, being performed on this image data by
an interpolation filter, an LCD property correction circuit performs
correction processing with non-linear properties which are the inverse of
the properties of the LCD.
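
To make this prior-art flow concrete, the following is a minimal
illustrative sketch in Python; it is not the implementation of the cited
publication, and the encoding gamma value, the power-law display model,
and the bilinear resize are all assumptions chosen for the example.
Inverse gamma correction brings the camera output back to linear light,
the interpolation filter processing is performed in that linear domain,
and a display property correction then applies a non-linear curve that is
the inverse of the (assumed) display response.

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma of the camera output

def inverse_gamma(image, gamma=GAMMA):
    """Convert gamma-corrected pixel values (0..1) back to linear light."""
    return np.clip(image, 0.0, 1.0) ** gamma

def bilinear_resize(image, new_h, new_w):
    """Simple bilinear interpolation, standing in for the interpolation filter."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = image[np.ix_(y0, x0)] * (1 - wx) + image[np.ix_(y0, x1)] * wx
    bot = image[np.ix_(y1, x0)] * (1 - wx) + image[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def display_property_correction(image, display_gamma=2.2):
    """Apply the inverse of the display's non-linear response (assumed power law)."""
    return np.clip(image, 0.0, 1.0) ** (1.0 / display_gamma)

# Example: linearize, enlarge in the linear domain, then correct for the display.
camera_out = np.random.rand(120, 160)          # gamma-corrected input image
linear = inverse_gamma(camera_out)             # inverse gamma correction
enlarged = bilinear_resize(linear, 240, 320)   # interpolation filter processing
for_display = display_property_correction(enlarged)
```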

[0007]Further, Japanese Unexamined Patent Application Publication No.
2001-250119 discloses detecting, with sensors, first signals which are
signals of the real world having first dimensions, obtaining second
signals which have second dimensions fewer than the first dimensions and
which include distortion as to the first signals, and performing signal
processing based on the second signals, thereby generating third signals
with alleviated distortion as compared to the second signals.

[0008]However, signal processing for estimating the first signals from the
second signals had not taken into consideration the fact that the second
signals, which have the second dimensions fewer than the first
dimensions, which are obtained by projecting the first signals that are
signals of the real world having the first dimensions, and in which a
part of the continuity of the real world signals is lost, have data
continuity corresponding to the continuity of the signals of the real
world that has been lost.

DISCLOSURE OF INVENTION

[0009]The present invention has been made in light of such a situation,
and it is an object thereof to take into consideration the real world
where data was acquired, and to obtain processing results which are more
accurate and more precise as to phenomena in the real world.

[0010]The image processing device according to the present invention
includes: inverse gamma correction means for applying inverse gamma
correction to first image data, acquired by light signals of the real
world being cast upon a plurality of detecting elements each having
spatio-temporal integration effects, of which a part of the continuity of
the light signals of the real world has been lost, and which has been
subjected to gamma correction, and outputting the first image data
subjected to inverse gamma correction; image data continuity detecting
means for detecting the continuity of image data corresponding to the
continuity of the light signals of the real world within the first image
data output from the inverse gamma correction means; real world
estimating means for generating a function approximating the light
signals by estimating the continuity of the light signals of the real
world, based on the continuity of the image data detected by the image
data continuity detecting means; image generating means for generating
second image data from the function generated by the real world
estimating means; and presentation system property correcting means for
applying correction corresponding to the properties of a presentation
system to the second image data generated by the image generating means.
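
As a rough illustration of how these means might be chained together, the
following is a hypothetical Python sketch of the processing flow just
described; the class and method names, the gradient-based stand-in for
continuity detection, and the bilinear approximating function are all
placeholders made up for the example and are not the actual configuration
described in the embodiments.

```python
import numpy as np

class ImageProcessingDevice:
    """Hypothetical sketch: inverse gamma correction -> data continuity
    detection -> real world estimation -> image generation -> presentation
    system property correction."""

    def __init__(self, encoding_gamma=2.2, display_gamma=2.2):
        self.encoding_gamma = encoding_gamma
        self.display_gamma = display_gamma

    def inverse_gamma_correction(self, first_image):
        # Undo the camera's gamma correction (assumed simple power law).
        return np.clip(first_image, 0.0, 1.0) ** self.encoding_gamma

    def detect_data_continuity(self, linear_image):
        # Placeholder: use the local gradient direction as a crude stand-in
        # for the direction in which pixel values stay roughly constant.
        gy, gx = np.gradient(linear_image)
        return np.arctan2(gy, gx)  # per-pixel continuity angle (illustrative)

    def estimate_real_world(self, linear_image, continuity):
        # Placeholder: return a callable approximating the light signals.
        # The detected continuity is not used in this toy stand-in.
        h, w = linear_image.shape
        def f(y, x):
            y = float(np.clip(y, 0, h - 1)); x = float(np.clip(x, 0, w - 1))
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            return ((1 - dy) * (1 - dx) * linear_image[y0, x0]
                    + (1 - dy) * dx * linear_image[y0, x1]
                    + dy * (1 - dx) * linear_image[y1, x0]
                    + dy * dx * linear_image[y1, x1])
        return f

    def generate_image(self, f, out_h, out_w, in_h, in_w):
        # Re-sample the approximating function at a higher resolution.
        ys = np.linspace(0, in_h - 1, out_h)
        xs = np.linspace(0, in_w - 1, out_w)
        return np.array([[f(y, x) for x in xs] for y in ys])

    def presentation_correction(self, second_image):
        # Apply the inverse of the presentation system's (assumed) response.
        return np.clip(second_image, 0.0, 1.0) ** (1.0 / self.display_gamma)

    def process(self, first_image):
        linear = self.inverse_gamma_correction(first_image)
        continuity = self.detect_data_continuity(linear)
        f = self.estimate_real_world(linear, continuity)
        h, w = first_image.shape
        second = self.generate_image(f, 2 * h, 2 * w, h, w)
        return self.presentation_correction(second)

# Example usage on a random stand-in for gamma-corrected input data.
device = ImageProcessingDevice()
output = device.process(np.random.rand(32, 32))
```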

[0011]The image data continuity detecting means may comprise:
discontinuity detecting means for detecting discontinuity of a plurality
of pixel values within the first image data; peak detecting means for
detecting the peak of change of the pixel values from the discontinuity;
monotonous increase/decrease region detecting means for detecting a
monotonous increase/decrease region wherein the pixel value is increasing
or decreasing monotonously from the peak; and continuousness detecting
means for detecting a monotonous increase/decrease region, for which
another monotonous increase/decrease region exists at an adjacent
position, as a constant region having continuity of image data.
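
For illustration only, here is a minimal Python sketch of this kind of
detection applied to individual rows of background-subtracted pixel
values; the threshold, the row-wise scan, and the simple overlap test for
adjacency are assumptions made for the example and are not the detection
processing described in the embodiments below, which operates on
two-dimensional data.

```python
import numpy as np

def detect_monotonous_regions(row, threshold=0.1):
    """Find peaks of change exceeding `threshold` in one row of pixel values
    and the monotonous increase/decrease region around each peak.
    Returns a list of (start, end) index ranges."""
    regions = []
    n = len(row)
    for p in range(1, n - 1):
        if row[p] > threshold and row[p] >= row[p - 1] and row[p] >= row[p + 1]:
            left = p
            while left > 0 and row[left - 1] < row[left]:
                left -= 1          # values increase monotonously up to the peak
            right = p
            while right < n - 1 and row[right + 1] < row[right]:
                right += 1         # values decrease monotonously after the peak
            regions.append((left, right))
    return regions

def link_adjacent_regions(regions_per_row):
    """Keep a region only if a monotonous increase/decrease region exists at
    an adjacent position (an overlapping region in a neighbouring row)."""
    kept = []
    for y, regions in enumerate(regions_per_row):
        for (s, e) in regions:
            neighbours = []
            if y > 0:
                neighbours += regions_per_row[y - 1]
            if y + 1 < len(regions_per_row):
                neighbours += regions_per_row[y + 1]
            if any(ns <= e and ne >= s for (ns, ne) in neighbours):
                kept.append((y, s, e))
    return kept

# Example usage on a toy image containing a thin diagonal line.
img = np.zeros((8, 16))
for y in range(8):
    img[y, 4 + y] = 1.0
rows = [detect_monotonous_regions(img[y]) for y in range(img.shape[0])]
continuity_regions = link_adjacent_regions(rows)
```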

[0012]The image processing method according to the present invention
includes: an inverse gamma correction step for applying inverse gamma
correction to first image data, acquired by light signals of the real
world being cast upon a plurality of detecting elements each having
spatio-temporal integration effects, of which a part of the continuity of
the light signals of the real world has been lost, and which has been
subjected to gamma correction, and outputting the first image data
subjected to inverse gamma correction; an image data continuity detecting
step for detecting the continuity of image data corresponding to the
continuity of the light signals of the real world within the first image
data output in the inverse gamma correction step; a real world estimating
step for generating a function approximating the light signals by
estimating the continuity of the light signals of the real world, based
on the continuity of the image data detected in the image data continuity
detecting step; an image generating step for generating second image data
from the function generated in the real world estimating step; and a
presentation system property correcting step for applying correction
corresponding to the properties of a presentation system to the second
image data generated in the image generating step.

[0013]The program according to the present invention causes a computer to
execute: an inverse gamma correction step for applying inverse gamma
correction to first image data, acquired by light signals of the real
world being cast upon a plurality of detecting elements each having
spatio-temporal integration effects, of which a part of the continuity of
the light signals of the real world has been lost, and which has been
subjected to gamma correction, and outputting the first image data
subjected to inverse gamma correction; an image data continuity detecting
step for detecting the continuity of image data corresponding to the
continuity of the light signals of the real world within the first image
data output in the inverse gamma correction step; a real world estimating
step for generating a function approximating the light signals by
estimating the continuity of the light signals of the real world, based
on the continuity of the image data detected in the image data continuity
detecting step; an image generating step for generating second image data
from the function generated in the real world estimating step; and a
presentation system property correcting step for applying correction
corresponding to the properties of a presentation system to the second
image data generated in the image generating step.

[0014]With the image processing device and method, and the program,
according to the present invention, inverse gamma correction is applied
to first image data, acquired by light signals of the real world being
cast upon a plurality of detecting elements each having spatio-temporal
integration effects, of which a part of the continuity of the light
signals of the real world has been lost, and which has been subjected to
gamma correction, and the first image data subjected to inverse gamma
correction is output; the continuity of image data corresponding to the
continuity of the light signals of the real world within the output first
image data is detected; a function approximating the light signals is
generated by estimating the continuity of the light signals of the real
world, based on the continuity of the detected image data; second image
data is generated from the generated function; and correction
corresponding to the properties of a presentation system is applied to
the generated second image data.

[0015]The image processing device may be a stand-alone device, or may be a
block which performs image processing.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016]FIG. 1 is a diagram illustrating the principle of the present
invention.

[0017]FIG. 2 is a block diagram illustrating an example of a configuration
of a signal processing device.

[0053]FIG. 38 is a diagram illustrating results of detecting a region with
a fine line.

[0054]FIG. 39 is a diagram illustrating an example of an output image
output from a signal processing device.

[0055]FIG. 40 is a flowchart for describing signal processing with the
signal processing device.

[0056]FIG. 41 is a block diagram illustrating the configuration of a data
continuity detecting unit.

[0057]FIG. 42 is a diagram illustrating an image in the actual world with
a fine line in front of the background.

[0058]FIG. 43 is a diagram for describing approximation of a background
with a plane.

[0059]FIG. 44 is a diagram illustrating the cross-sectional shape of image
data regarding which the image of a fine line has been projected.

[0060]FIG. 45 is a diagram illustrating the cross-sectional shape of image
data regarding which the image of a fine line has been projected.

[0061]FIG. 46 is a diagram illustrating the cross-sectional shape of image
data regarding which the image of a fine line has been projected.

[0062]FIG. 47 is a diagram for describing the processing for detecting a
peak and detecting of monotonous increase/decrease regions.

[0063]FIG. 48 is a diagram for describing the processing for detecting a
fine line region wherein the pixel value of the peak exceeds a threshold,
while the pixel value of the adjacent pixel is equal to or below the
threshold value.

[0064]FIG. 49 is a diagram representing the pixel value of pixels arrayed
in the direction indicated by dotted line AA' in FIG. 48.

[0065]FIG. 50 is a diagram for describing processing for detecting
continuity in a monotonous increase/decrease region.

[0066]FIG. 51 is a diagram illustrating an example of an image regarding
which a continuity component has been extracted by approximation on a
plane.

[0067]FIG. 52 is a diagram illustrating results of detecting regions with
monotonous decrease.

[0068]FIG. 53 is a diagram illustrating regions where continuity has been
detected.

[0069]FIG. 54 is a diagram illustrating pixel values at regions where
continuity has been detected.

[0070]FIG. 55 is a diagram illustrating an example of other processing for
detecting regions where an image of a fine line has been projected.

[0071]FIG. 56 is a flowchart for describing continuity detection
processing.

[0072]FIG. 57 is a diagram for describing processing for detecting
continuity of data in the time direction.

[0073]FIG. 58 is a block diagram illustrating the configuration of a
non-continuity component extracting unit.

[0074]FIG. 59 is a diagram for describing the number of times of
rejection.

[0075]FIG. 60 is a diagram illustrating an example of an input image.

[0076]FIG. 61 is a diagram illustrating an image wherein standard error
obtained as the result of planar approximation without rejection is taken
as pixel values.

[0077]FIG. 62 is a diagram illustrating an image wherein standard error
obtained as the result of planar approximation with rejection is taken as
pixel values.

[0078]FIG. 63 is a diagram illustrating an image wherein the number of
times of rejection is taken as pixel values.

[0079]FIG. 64 is a diagram illustrating an image wherein the gradient of
the spatial direction X of a plane is taken as pixel values.

[0080]FIG. 65 is a diagram illustrating an image wherein the gradient of
the spatial direction Y of a plane is taken as pixel values.

[0081]FIG. 66 is a diagram illustrating an image formed of planar
approximation values.

[0082]FIG. 67 is a diagram illustrating an image formed of the difference
between planar approximation values and pixel values.

[0083]FIG. 68 is a flowchart describing the processing for extracting the
non-continuity component.

[0084]FIG. 69 is a flowchart describing the processing for extracting the
continuity component.

[0085]FIG. 70 is a flowchart describing other processing for extracting
the continuity component.

[0086]FIG. 71 is a flowchart describing still other processing for
extracting the continuity component.

[0087]FIG. 72 is a block diagram illustrating another configuration of a
continuity component extracting unit.

[0088]FIG. 73 is a diagram for describing the activity on an input image
having data continuity.

[0089]FIG. 74 is a diagram for describing a block for detecting activity.

[0090]FIG. 75 is a diagram for describing the angle of data continuity as
to activity.

[0091]FIG. 76 is a block diagram illustrating a detailed configuration of
the data continuity detecting unit.

[0092]FIG. 77 is a diagram describing a set of pixels.

[0093]FIG. 78 is a diagram describing the relation between the position of
a pixel set and the angle of data continuity.

[0094]FIG. 79 is a flowchart for describing processing for detecting data
continuity.

[0095]FIG. 80 is a diagram illustrating a set of pixels extracted when
detecting the angle of data continuity in the time direction and space
direction.

[0096]FIG. 81 is a block diagram illustrating another further detailed
configuration of the data continuity detecting unit.

[0097]FIG. 82 is a diagram for describing a set of pixels made up of
pixels of a number corresponding to the range of angle of set straight
lines.

[0098]FIG. 83 is a diagram describing the range of angle of the set
straight lines.

[0099]FIG. 84 is a diagram describing the range of angle of the set
straight lines, the number of pixel sets, and the number of pixels per
pixel set.

[0100]FIG. 85 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0101]FIG. 86 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0102]FIG. 87 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0103]FIG. 88 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0104]FIG. 89 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0105]FIG. 90 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0106]FIG. 91 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0107]FIG. 92 is a diagram for describing the number of pixel sets and the
number of pixels per pixel set.

[0108]FIG. 93 is a flowchart for describing processing for detecting data
continuity.

[0109]FIG. 94 is a block diagram illustrating still another configuration
of the data continuity detecting unit.

[0110]FIG. 95 is a block diagram illustrating a further detailed
configuration of the data continuity detecting unit.

[0111]FIG. 96 is a diagram illustrating an example of a block.

[0112]FIG. 97 is a diagram describing the processing for calculating the
absolute value of difference of pixel values between a block of interest
and a reference block.

[0113]FIG. 98 is a diagram describing the distance in the spatial
direction X between the position of a pixel in the proximity of the pixel
of interest, and a straight line having an angle θ.

[0114]FIG. 99 is a diagram illustrating the relationship between the shift
amount γ and angle θ.

[0115]FIG. 100 is a diagram illustrating the distance in the spatial
direction X between the position of a pixel in the proximity of the pixel
of interest and a straight line which passes through the pixel of
interest and has an angle of θ, as to the shift amount γ.

[0116]FIG. 101 is a diagram illustrating a reference block wherein the
distance as to a straight line which passes through the pixel of interest
and has an angle of θ as to the axis of the spatial direction X, is
minimal.

[0117]FIG. 102 is a diagram for describing processing for halving the
range of angle of continuity of data to be detected.

[0118]FIG. 103 is a flowchart for describing the processing for detection
of data continuity.

[0119]FIG. 104 is a diagram illustrating a block which is extracted at the
time of detecting the angle of data continuity in the space direction and
time direction.

[0120]FIG. 105 is a block diagram illustrating the configuration of the
data continuity detecting unit which executes processing for detection of
data continuity, based on component signals of an input image.

[0121]FIG. 106 is a block diagram illustrating the configuration of the
data continuity detecting unit which executes processing for detection of
data continuity, based on component signals of an input image.

[0122]FIG. 107 is a block diagram illustrating still another configuration
of the data continuity detecting unit.

[0123]FIG. 108 is a diagram for describing the angle of data continuity
with a reference axis as a reference, in the input image.

[0124]FIG. 109 is a diagram for describing the angle of data continuity
with a reference axis as a reference, in the input image.

[0125]FIG. 110 is a diagram for describing the angle of data continuity
with a reference axis as a reference, in the input image.

[0126]FIG. 111 is a diagram illustrating the relationship between the
change in pixel values as to the position of pixels in the spatial
direction, and a regression line, in the input image.

[0127]FIG. 112 is a diagram for describing the angle between the
regression line A, and an axis indicating the spatial direction X, which
is a reference axis, for example.

[0128]FIG. 113 is a diagram illustrating an example of a region.

[0129]FIG. 114 is a flowchart for describing the processing for detection
of data continuity with the data continuity detecting unit of which the
configuration is illustrated in FIG. 107.

[0130]FIG. 115 is a block diagram illustrating still another configuration
of the data continuity detecting unit.

[0131]FIG. 116 is a diagram illustrating the relationship between the
change in pixel values as to the position of pixels in the spatial
direction, and a regression line, in the input image.

[0132]FIG. 117 is a diagram for describing the relationship between
standard deviation and a region having data continuity.

[0133]FIG. 118 is a diagram illustrating an example of a region.

[0134]FIG. 119 is a flowchart for describing the processing for detection
of data continuity with the data continuity detecting unit of which the
configuration is illustrated in FIG. 115.

[0135]FIG. 120 is a flowchart for describing other processing for
detection of data continuity with the data continuity detecting unit of
which the configuration is illustrated in FIG. 115.

[0136]FIG. 121 is a block diagram illustrating the configuration of the
data continuity detecting unit for detecting the angle of a fine line or
a two-valued edge, as data continuity information, to which the present
invention has been applied.

[0137]FIG. 122 is a diagram for describing a detection method for data
continuity information.

[0138]FIG. 123 is a diagram for describing a detection method for data
continuity information.

[0139]FIG. 124 is a diagram illustrating a further detailed configuration
of the data continuity detecting unit.

[0140]FIG. 125 is a diagram for describing horizontal/vertical
determination processing.

[0141]FIG. 126 is a diagram for describing horizontal/vertical
determination processing.

[0142]FIG. 127A is a diagram for describing the relationship between a
fine line in the real world and a fine line imaged by a sensor.

[0143]FIG. 127B is a diagram for describing the relationship between a
fine line in the real world and a fine line imaged by a sensor.

[0144]FIG. 127C is a diagram for describing the relationship between a
fine line in the real world and a fine line imaged by a sensor.

[0145]FIG. 128A is a diagram for describing the relationship between a
fine line in the real world and the background.

[0146]FIG. 128B is a diagram for describing the relationship between a
fine line in the real world and the background.

[0147]FIG. 129A is a diagram for describing the relationship between a
fine line in an image imaged by a sensor and the background.

[0148]FIG. 129B is a diagram for describing the relationship between a
fine line in an image imaged by a sensor and the background.

[0149]FIG. 130A is a diagram for describing an example of the relationship
between a fine line in an image imaged by a sensor and the background.

[0150]FIG. 130B is a diagram for describing an example of the relationship
between a fine line in an image imaged by a sensor and the background.

[0151]FIG. 131A is a diagram for describing the relationship between a
fine line in an image in the real world and the background.

[0152]FIG. 131B is a diagram for describing the relationship between a
fine line in an image in the real world and the background.

[0153]FIG. 132A is a diagram for describing the relationship between a
fine line in an image imaged by a sensor and the background.

[0154]FIG. 132B is a diagram for describing the relationship between a
fine line in an image imaged by a sensor and the background.

[0155]FIG. 133A is a diagram for describing an example of the relationship
between a fine line in an image imaged by a sensor and the background.

[0156]FIG. 133B is a diagram for describing an example of the relationship
between a fine line in an image imaged by a sensor and the background.

[0157]FIG. 134 is a diagram illustrating a model for obtaining the angle
of a fine line.

[0158]FIG. 135 is a diagram illustrating a model for obtaining the angle
of a fine line.

[0159]FIG. 136A is a diagram for describing the maximum value and minimum
value of pixel values in a dynamic range block corresponding to a pixel
of interest.

[0160]FIG. 136B is a diagram for describing the maximum value and minimum
value of pixel values in a dynamic range block corresponding to a pixel
of interest.

[0161]FIG. 137A is a diagram for describing how to obtain the angle of a
fine line.

[0162]FIG. 137B is a diagram for describing how to obtain the angle of a
fine line.

[0163]FIG. 137C is a diagram for describing how to obtain the angle of a
fine line.

[0164]FIG. 138 is a diagram for describing how to obtain the angle of a
fine line.

[0165]FIG. 139 is a diagram for describing an extracted block and dynamic
range block.

[0166]FIG. 140 is a diagram for describing a least-square solution.

[0167]FIG. 141 is a diagram for describing a least-square solution.

[0168]FIG. 142A is a diagram for describing a two-valued edge.

[0169]FIG. 142B is a diagram for describing a two-valued edge.

[0170]FIG. 142C is a diagram for describing a two-valued edge.

[0171]FIG. 143A is a diagram for describing a two-valued edge of an image
imaged by a sensor.

[0172]FIG. 143B is a diagram for describing a two-valued edge of an image
imaged by a sensor.

[0173]FIG. 144A is a diagram for describing an example of a two-valued
edge of an image imaged by a sensor.

[0174]FIG. 144B is a diagram for describing an example of a two-valued
edge of an image imaged by a sensor.

[0175]FIG. 145A is a diagram for describing a two-valued edge of an image
imaged by a sensor.

[0176]FIG. 145B is a diagram for describing a two-valued edge of an image
imaged by a sensor.

[0177]FIG. 146 is a diagram illustrating a model for obtaining the angle
of a two-valued edge.

[0178]FIG. 147A is a diagram illustrating a method for obtaining the angle
of a two-valued edge.

[0179]FIG. 147B is a diagram illustrating a method for obtaining the angle
of a two-valued edge.

[0180]FIG. 147C is a diagram illustrating a method for obtaining the angle
of a two-valued edge.

[0181]FIG. 148 is a diagram illustrating a method for obtaining the angle
of a two-valued edge.

[0182]FIG. 149 is a flowchart for describing the processing for detecting
the angle of a fine line or a two-valued edge along with data continuity.

[0183]FIG. 150 is a flowchart for describing data extracting processing.

[0184]FIG. 151 is a flowchart for describing addition processing to a
normal equation.

[0185]FIG. 152A is a diagram for comparing the gradient of a fine line
obtained by application of the present invention, and the angle of a fine
line obtained using correlation.

[0186]FIG. 152B is a diagram for comparing the gradient of a fine line
obtained by application of the present invention, and the angle of a fine
line obtained using correlation.

[0187]FIG. 153A is a diagram for comparing the gradient of a two-valued
edge obtained by application of the present invention, and the angle of a
fine line obtained using correlation.

[0188]FIG. 153B is a diagram for comparing the gradient of a two-valued
edge obtained by application of the present invention, and the angle of a
fine line obtained using correlation.

[0189]FIG. 154 is a block diagram illustrating the configuration of the
data continuity detecting unit for detecting a mixture ratio under
application of the present invention as data continuity information.

[0190]FIG. 155A is a diagram for describing how to obtain the mixture
ratio.

[0191]FIG. 155B is a diagram for describing how to obtain the mixture
ratio.

[0192]FIG. 155C is a diagram for describing how to obtain the mixture
ratio.

[0193]FIG. 156 is a flowchart for describing processing for detecting the
mixture ratio along with data continuity.

[0194]FIG. 157 is a flowchart for describing addition processing to a
normal equation.

[0195]FIG. 158A is a diagram illustrating an example of distribution of
the mixture ratio of a fine line.

[0196]FIG. 158B is a diagram illustrating an example of distribution of
the mixture ratio of a fine line.

[0197]FIG. 159A is a diagram illustrating an example of distribution of
the mixture ratio of a two-valued edge.

[0198]FIG. 159B is a diagram illustrating an example of distribution of
the mixture ratio of a two-valued edge.

[0199]FIG. 160 is a diagram for describing linear approximation of the
mixture ratio.

[0200]FIG. 161A is a diagram for describing a method for obtaining
movement of an object as data continuity information.

[0201]FIG. 161B is a diagram for describing a method for obtaining
movement of an object as data continuity information.

[0202]FIG. 162A is a diagram for describing a method for obtaining
movement of an object as data continuity information.

[0203]FIG. 162B is a diagram for describing a method for obtaining
movement of an object as data continuity information.

[0204]FIG. 163A is a diagram for describing a method for obtaining a
mixture ratio according to movement of an object as data continuity
information.

[0205]FIG. 163B is a diagram for describing a method for obtaining a
mixture ratio according to movement of an object as data continuity
information.

[0206]FIG. 163C is a diagram for describing a method for obtaining a
mixture ratio according to movement of an object as data continuity
information.

[0207]FIG. 164 is a diagram for describing linear approximation of the
mixture ratio at the time of obtaining the mixture ratio according to
movement of the object as data continuity information.

[0208]FIG. 165 is a block diagram illustrating the configuration of the
data continuity detecting unit for detecting the processing region under
application of the present invention, as data continuity information.

[0209]FIG. 166 is a flowchart for describing the processing for detection
of continuity with the data continuity detecting unit shown in FIG. 165.

[0210]FIG. 167 is a diagram for describing the integration range of
processing for detection of continuity with the data continuity detecting
unit shown in FIG. 165.

[0211]FIG. 168 is a diagram for describing the integration range of
processing for detection of continuity with the data continuity detecting
unit shown in FIG. 165.

[0212]FIG. 169 is a block diagram illustrating another configuration of
the data continuity detecting unit for detecting a processing region to
which the present invention has been applied as data continuity
information.

[0213]FIG. 170 is a flowchart for describing the processing for detecting
continuity with the data continuity detecting unit shown in FIG. 169.

[0214]FIG. 171 is a diagram for describing the integration range of
processing for detecting continuity with the data continuity detecting
unit shown in FIG. 169.

[0215]FIG. 172 is a diagram for describing the integration range of
processing for detecting continuity with the data continuity detecting
unit shown in FIG. 169.

[0216]FIG. 173 is a block diagram illustrating the configuration of an
actual world estimating unit 102.

[0217]FIG. 174 is a diagram for describing the processing for detecting
the width of a fine line in actual world signals.

[0218]FIG. 175 is a diagram for describing the processing for detecting
the width of a fine line in actual world signals.

[0219]FIG. 176 is a diagram for describing the processing for estimating
the level of a fine line signal in actual world signals.

[0220]FIG. 177 is a flowchart for describing the processing of estimating
the actual world.

[0221]FIG. 178 is a block diagram illustrating another configuration of
the actual world estimating unit.

[0222]FIG. 179 is a block diagram illustrating the configuration of a
boundary detecting unit.

[0223]FIG. 180 is a diagram for describing the processing for calculating
allocation ratio.

[0224]FIG. 181 is a diagram for describing the processing for calculating
allocation ratio.

[0225]FIG. 182 is a diagram for describing the processing for calculating
allocation ratio.

[0226]FIG. 183 is a diagram for describing the process for calculating a
regression line indicating the boundary of monotonous increase/decrease
regions.

[0227]FIG. 184 is a diagram for describing the process for calculating a
regression line indicating the boundary of monotonous increase/decrease
regions.

[0228]FIG. 185 is a flowchart for describing processing for estimating the
actual world.

[0229]FIG. 186 is a flowchart for describing the processing for boundary
detection.

[0230]FIG. 187 is a block diagram illustrating the configuration of the
real world estimating unit which estimates the derivative value in the
spatial direction as actual world estimating information.

[0231]FIG. 188 is a flowchart for describing the processing of actual
world estimation with the real world estimating unit shown in FIG. 187.

[0232]FIG. 189 is a diagram for describing a reference pixel.

[0233]FIG. 190 is a diagram for describing the position for obtaining the
derivative value in the spatial direction.

[0234]FIG. 191 is a diagram for describing the relationship between the
derivative value in the spatial direction and the amount of shift.

[0235]FIG. 192 is a block diagram illustrating the configuration of the
actual world estimating unit which estimates the gradient in the spatial
direction as actual world estimating information.

[0236]FIG. 193 is a flowchart for describing the processing of actual
world estimation with the actual world estimating unit shown in FIG. 192.

[0237]FIG. 194 is a diagram for describing processing for obtaining the
gradient in the spatial direction.

[0238]FIG. 195 is a diagram for describing processing for obtaining the
gradient in the spatial direction.

[0239]FIG. 196 is a block diagram illustrating the configuration of the
actual world estimating unit for estimating the derivative value in the
frame direction as actual world estimating information.

[0240]FIG. 197 is a flowchart for describing the processing of actual
world estimation with the actual world estimating unit shown in FIG. 196.

[0241]FIG. 198 is a diagram for describing a reference pixel.

[0242]FIG. 199 is a diagram for describing the position for obtaining the
derivative value in the frame direction.

[0243]FIG. 200 is a diagram for describing the relationship between the
derivative value in the frame direction and the amount of shift.

[0244]FIG. 201 is a block diagram illustrating the configuration of the
real world estimating unit which estimates the gradient in the frame
direction as actual world estimating information.

[0245]FIG. 202 is a flowchart for describing the processing of actual
world estimation with the actual world estimating unit shown in FIG. 201.

[0246]FIG. 203 is a diagram for describing processing for obtaining the
gradient in the frame direction.

[0247]FIG. 204 is a diagram for describing processing for obtaining the
gradient in the frame direction.

[0248]FIG. 205 is a diagram for describing the principle of function
approximation, which is an example of an embodiment of the actual world
estimating unit shown in FIG. 3.

[0249]FIG. 206 is a diagram for describing integration effects in the
event that the sensor is a CCD.

[0250]FIG. 207 is a diagram for describing a specific example of the
integration effects of the sensor shown in FIG. 206.

[0251]FIG. 208 is a diagram for describing a specific example of the
integration effects of the sensor shown in FIG. 206.

[0252]FIG. 209 is a diagram representing a fine-line-inclusive actual
world region shown in FIG. 207.

[0253]FIG. 210 is a diagram for describing the principle of an example of
an embodiment of the actual world estimating unit shown in FIG. 3, in
comparison with the example shown in FIG. 205.

[0255]FIG. 212 is a diagram wherein each of the pixel values contained in
the fine-line-inclusive data region shown in FIG. 211 are plotted on a
graph.

[0256]FIG. 213 is a diagram wherein an approximation function,
approximating the pixel values contained in the fine-line-inclusive data
region shown in FIG. 212, is plotted on a graph.

[0257]FIG. 214 is a diagram for describing the continuity in the spatial
direction which the fine-line-inclusive actual world region shown in FIG.
207 has.

[0258]FIG. 215 is a diagram wherein each of the pixel values contained in
the fine-line-inclusive data region shown in FIG. 211 are plotted on a
graph.

[0259]FIG. 216 is a diagram for describing a state wherein each of the
input pixel values indicated in FIG. 215 are shifted by a predetermined
shift amount.

[0260]FIG. 217 is a diagram wherein an approximation function,
approximating the pixel values contained in the fine-line-inclusive data
region shown in FIG. 212, is plotted on a graph, taking into
consideration the spatial-direction continuity.

[0261]FIG. 218 is a diagram for describing a space-mixed region.

[0262]FIG. 219 is a diagram for describing an approximation function
approximating actual-world signals in a space-mixed region.

[0263]FIG. 220 is a diagram wherein an approximation function,
approximating the actual world signals corresponding to the
fine-line-inclusive data region shown in FIG. 212, is plotted on a graph,
taking into consideration both the sensor integration properties and the
spatial-direction continuity.

[0264]FIG. 221 is a block diagram for describing a configuration example
of the actual world estimating unit using, of function approximation
techniques having the principle shown in FIG. 205, primary polynomial
approximation.

[0265]FIG. 222 is a flowchart for describing actual world estimation
processing which the actual world estimating unit of the configuration
shown in FIG. 221 executes.

[0266]FIG. 223 is a diagram for describing a tap range.

[0267]FIG. 224 is a diagram for describing actual world signals having
continuity in the spatial direction.

[0268]FIG. 225 is a diagram for describing integration effects in the
event that the sensor is a CCD.

[0269]FIG. 226 is a diagram for describing distance in the cross-sectional
direction.

[0270]FIG. 227 is a block diagram for describing a configuration example
of the actual world estimating unit using, of function approximation
techniques having the principle shown in FIG. 205, quadratic polynomial
approximation.

[0271]FIG. 228 is a flowchart for describing actual world estimation
processing which the actual world estimating unit of the configuration
shown in FIG. 227 executes.

[0272]FIG. 229 is a diagram for describing a tap range.

[0273]FIG. 230 is a diagram for describing direction of continuity in the
time-spatial direction.

[0274]FIG. 231 is a diagram for describing integration effects in the
event that the sensor is a CCD.

[0275]FIG. 232 is a diagram for describing actual world signals having
continuity in the spatial direction.

[0276]FIG. 233 is a diagram for describing actual world signals having
continuity in the space-time directions.

[0277]FIG. 234 is a block diagram for describing a configuration example
of the actual world estimating unit using, of function approximation
techniques having the principle shown in FIG. 205, cubic polynomial
approximation.

[0278]FIG. 235 is a flowchart for describing actual world estimation
processing which the actual world estimating unit of the configuration
shown in FIG. 234 executes.

[0279]FIG. 236 is a diagram for describing the principle of
re-integration, which is an example of an embodiment of the image
generating unit shown in FIG. 3.

[0280]FIG. 237 is a diagram for describing an example of an input pixel
and an approximation function for approximation of an actual world signal
corresponding to the input pixel.

[0281]FIG. 238 is a diagram for describing an example of creating four
high-resolution pixels in the one input pixel shown in FIG. 237, from the
approximation function shown in FIG. 237.

[0282]FIG. 239 is a block diagram for describing a configuration example
of an image generating unit using, of re-integration techniques having
the principle shown in FIG. 236, one-dimensional re-integration.

[0283]FIG. 240 is a flowchart for describing the image generating
processing which the image generating unit of the configuration shown in
FIG. 239 executes.

[0284]FIG. 241 is a diagram illustrating an example of the original image
of the input image.

[0285]FIG. 242 is a diagram illustrating an example of image data
corresponding to the image shown in FIG. 241.

[0286]FIG. 243 is a diagram illustrating an example of an input image.

[0287]FIG. 244 is a diagram representing an example of image data
corresponding to the image shown in FIG. 243.

[0288]FIG. 245 is a diagram illustrating an example of an image obtained
by subjecting an input image to conventional class classification
adaptation processing.

[0289]FIG. 246 is a diagram representing an example of image data
corresponding to the image shown in FIG. 245.

[0290]FIG. 247 is a diagram illustrating an example of an image obtained
by subjecting an input image to the one-dimensional re-integration
technique according to the present invention.

[0291]FIG. 248 is a diagram illustrating an example of image data
corresponding to the image shown in FIG. 247.

[0292]FIG. 249 is a diagram for describing actual-world signals having
continuity in the spatial direction.

[0293]FIG. 250 is a block diagram for describing a configuration example
of an image generating unit which uses, of the re-integration techniques
having the principle shown in FIG. 236, a two-dimensional re-integration
technique.

[0294]FIG. 251 is a diagram for describing distance in the cross-sectional
direction.

[0295]FIG. 252 is a flowchart for describing the image generating
processing which the image generating unit of the configuration shown in
FIG. 250 executes.

[0296]FIG. 253 is a diagram for describing an example of an input pixel.

[0297]FIG. 254 is a diagram for describing an example of creating four
high-resolution pixels in the one input pixel shown in FIG. 253, with the
two-dimensional re-integration technique.

[0298]FIG. 255 is a diagram for describing the direction of continuity in
the space-time directions.

[0299]FIG. 256 is a block diagram for describing a configuration example
of the image generating unit which uses, of the re-integration techniques
having the principle shown in FIG. 236, a three-dimensional
re-integration technique.

[0300]FIG. 257 is a flowchart for describing the image generating
processing which the image generating unit of the configuration shown in
FIG. 256 executes.

[0301]FIG. 258 is a block diagram illustrating another configuration of
the image generating unit to which the present invention is applied.

[0302]FIG. 259 is a flowchart for describing the processing for image
generating with the image generating unit shown in FIG. 258.

[0303]FIG. 260 is a diagram for describing processing of creating a
quadruple density pixel from an input pixel.

[0304]FIG. 261 is a diagram for describing the relationship between an
approximation function indicating the pixel value and the amount of
shift.

[0305]FIG. 262 is a block diagram illustrating another configuration of
the image generating unit to which the present invention has been
applied.

[0306]FIG. 263 is a flowchart for describing the processing for image
generating with the image generating unit shown in FIG. 262.

[0307]FIG. 264 is a diagram for describing processing of creating a
quadruple density pixel from an input pixel.

[0308]FIG. 265 is a diagram for describing the relationship between an
approximation function indicating the pixel value and the amount of
shift.

[0309]FIG. 266 is a block diagram for describing a configuration example
of the image generating unit which uses the one-dimensional
re-integration technique in the class classification adaptation process
correction technique, which is an example of an embodiment of the image
generating unit shown in FIG. 3.

[0310]FIG. 267 is a block diagram describing a configuration example of
the class classification adaptation processing unit of the image
generating unit shown in FIG. 266.

[0311]FIG. 268 is a block diagram illustrating the configuration example
of class classification adaptation processing unit shown in FIG. 266, and
a learning device for determining a coefficient for the class
classification adaptation processing correction unit to use by way of
learning.

[0312]FIG. 269 is a block diagram for describing a detailed configuration
example of the learning unit for the class classification adaptation
processing unit, shown in FIG. 268.

[0313]FIG. 270 is a diagram illustrating an example of processing results
of the class classification adaptation processing unit shown in FIG. 267.

[0314]FIG. 271 is a diagram illustrating a difference image between the
prediction image shown in FIG. 270 and an HD image.

[0315]FIG. 272 is a diagram plotting each of specific pixel values of the
HD image in FIG. 270, specific pixel values of the SD image, and actual
waveform (actual world signals), corresponding to the four HD pixels from
the left of the six continuous HD pixels in the X direction contained in
the region shown in FIG. 271.

[0316]FIG. 273 is a diagram illustrating a difference image of the
prediction image in FIG. 270 and an HD image.

[0317]FIG. 274 is a diagram plotting each of specific pixel values of the
HD image in FIG. 270, specific pixel values of the SD image, and actual
waveform (actual world signals), corresponding to the four HD pixels from
the left of the six continuous HD pixels in the X direction contained in
the region shown in FIG. 273.

[0318]FIG. 275 is a diagram for describing understanding obtained based on
the contents shown in FIG. 272 through FIG. 274.

[0319]FIG. 276 is a block diagram for describing a configuration example
of the class classification adaptation processing correction unit of the
image generating unit shown in FIG. 266.

[0320]FIG. 277 is a block diagram for describing a detailed configuration
example of the learning unit for the class classification adaptation
processing correction unit.

[0321]FIG. 278 is a diagram for describing in-pixel gradient.

[0322]FIG. 279 is a diagram illustrating the SD image shown in FIG. 270,
and a features image having as the pixel value thereof the in-pixel
gradient of each of the pixels of the SD image.

[0323]FIG. 280 is a diagram for describing an in-pixel gradient
calculation method.

[0324]FIG. 281 is a diagram for describing an in-pixel gradient
calculation method.

[0325]FIG. 282 is a flowchart for describing the image generating
processing which the image generating unit of the configuration shown in
FIG. 266 executes.

[0327]FIG. 284 is a flowchart for describing detailed correction
processing of the class classification adaptation processing in the image
generating processing in FIG. 282.

[0328]FIG. 285 is a diagram for describing an example of a class tap
array.

[0329]FIG. 286 is a diagram for describing an example of class
classification.

[0330]FIG. 287 is a diagram for describing an example of a prediction tap
array.

[0331]FIG. 288 is a flowchart for describing learning processing of the
learning device shown in FIG. 268.

[0332]FIG. 289 is a flowchart for describing detailed learning processing
for the class classification adaptation processing in the learning
processing shown in FIG. 288.

[0333]FIG. 290 is a flowchart for describing detailed learning processing
for the class classification adaptation processing correction in the
learning processing shown in FIG. 288.

[0334]FIG. 291 is a diagram illustrating the prediction image shown in
FIG. 270, and an image wherein a correction image is added to the
prediction image (the image generated by the image generating unit shown
in FIG. 266).

[0335]FIG. 292 is a block diagram describing a first configuration example
of a signal processing device using a hybrid technique, which is another
example of an embodiment of the signal processing device shown in FIG. 1.

[0336]FIG. 293 is a block diagram for describing a configuration example
of an image generating unit for executing the class classification
adaptation processing of the signal processing device shown in FIG. 292.

[0337]FIG. 294 is a block diagram for describing a configuration example
of the learning device as to the image generating unit shown in FIG. 293.

[0338]FIG. 295 is a flowchart for describing the processing of signals
executed by the signal processing device of the configuration shown in
FIG. 292.

[0339]FIG. 296 is a flowchart for describing the details of executing
processing of the class classification adaptation processing of the
signal processing in FIG. 295.

[0340]FIG. 297 is a flowchart for describing the learning processing of
the learning device shown in FIG. 294.

[0341]FIG. 298 is a block diagram describing a second configuration
example of a signal processing device using a hybrid technique, which is
another example of an embodiment of the signal processing device shown in
FIG. 1.

[0342]FIG. 299 is a flowchart for describing signal processing which the
signal processing device of the configuration shown in FIG. 298 executes.

[0343]FIG. 300 is a block diagram describing a third configuration example
of a signal processing device using a hybrid technique, which is another
example of an embodiment of the signal processing device shown in FIG. 1.

[0344]FIG. 301 is a flowchart for describing signal processing which the
signal processing device of the configuration shown in FIG. 300 executes.

[0345]FIG. 302 is a block diagram describing a fourth configuration
example of a signal processing device using a hybrid technique, which is
another example of an embodiment of the signal processing device shown in
FIG. 1.

[0346]FIG. 303 is a flowchart for describing signal processing which the
signal processing device of the configuration shown in FIG. 302 executes.

[0347]FIG. 304 is a block diagram describing a fifth configuration example
of a signal processing device using a hybrid technique, which is another
example of an embodiment of the signal processing device shown in FIG. 1.

[0348]FIG. 305 is a flowchart for describing signal processing which the
signal processing device of the configuration shown in FIG. 304 executes.

[0349]FIG. 306 is a block diagram illustrating the configuration of
another embodiment of the data continuity detecting unit.

[0358]FIG. 1 illustrates the principle of the present invention. As shown
in the drawing, events (phenomena) in an actual world 1 having dimensions
such as space, time, mass, and so forth, are acquired by a sensor 2, and
formed into data. Events in the actual world 1 refer to light (images),
sound, pressure, temperature, mass, humidity, brightness/darkness, or
acts, and so forth. The events in the actual world 1 are distributed in
the space-time directions. For example, an image of the actual world 1 is
a distribution of the intensity of light of the actual world 1 in the
space-time directions.

[0359]Taking note of the sensor 2: of the events in the actual world 1
having the dimensions of space, time, and mass, those events which the
sensor 2 can acquire are converted into data 3 by the sensor 2. It can be
said that information indicating events in the actual world 1 is acquired
by the sensor 2.

[0360]That is to say, the sensor 2 converts information indicating events
in the actual world 1, into data 3. It can be said that signals which are
information indicating the events (phenomena) in the actual world 1
having dimensions such as space, time, and mass, are acquired by the
sensor 2 and formed into data.

[0361]Hereafter, the distribution of events such as light (images), sound,
pressure, temperature, mass, humidity, brightness/darkness, or smells, and
so forth, in the actual world 1, will be referred to as signals of the
actual world 1, which are information indicating events. Also, signals
which are information indicating events of the actual world 1 will also
be referred to simply as signals of the actual world 1. In the present
Specification, signals are to be understood to include phenomena and
events, and also include those wherein there is no intent on the
transmitting side.

[0362]The data 3 (detected signals) output from the sensor 2 is
information obtained by projecting the information indicating the events
of the actual world 1 on a space-time having a lower dimension than the
actual world 1. For example, the data 3 which is image data of a moving
image, is information obtained by projecting an image of the
three-dimensional space direction and time direction of the actual world
1 on the time-space having the two-dimensional space direction and time
direction. Also, in the event that the data 3 is digital data for
example, the data 3 is rounded off according to the sampling increments.
In the event that the data 3 is analog data, information of the data 3 is
either compressed according to the dynamic range, or a part of the
information has been deleted by a limiter or the like.

[0363]Thus, by projecting the signals, which are information indicating
events in the actual world 1 having a predetermined number of dimensions,
onto the data 3 (detection signals), a part of the information indicating
events in the actual world 1 is dropped. That is to say, a part of the
information indicating events in the actual world 1 is dropped from the
data 3 which the sensor 2 outputs.

[0364]However, even though a part of the information indicating events in
the actual world 1 is dropped due to projection, the data 3 includes
useful information for estimating the signals which are information
indicating events (phenomena) in the actual world 1.

[0365]With the present invention, information having continuity contained
in the data 3 is used as useful information for estimating the signals
which are information of the actual world 1. Continuity is a concept which
is newly defined.

[0366]Taking note of the actual world 1, events in the actual world 1
include characteristics which are constant in predetermined dimensional
directions. For example, an object (corporeal object) in the actual world
1 either has shape, pattern, or color that is continuous in the space
direction or time direction, or has repeated patterns of shape, pattern,
or color.

[0367]Accordingly, the information indicating the events in the actual world 1
includes characteristics constant in a predetermined dimensional
direction.

[0368]With a more specific example, a linear object such as a string,
cord, or rope, has a characteristic which is constant in the length-wise
direction, i.e., the spatial direction, that the cross-sectional shape is
the same at arbitrary positions in the length-wise direction. The
constant characteristic in the spatial direction that the cross-sectional
shape is the same at arbitrary positions in the length-wise direction
comes from the characteristic that the linear object is long.

[0369]Accordingly, an image of the linear object has a characteristic
which is constant in the length-wise direction, i.e., the spatial
direction, that the cross-sectional shape is the same, at arbitrary
positions in the length-wise direction.

[0370]Also, a monotone object, which is a corporeal object, having an
expanse in the spatial direction, can be said to have a constant
characteristic of having the same color in the spatial direction
regardless of the part thereof.

[0371]In the same way, an image of a monotone object, which is a corporeal
object, having an expanse in the spatial direction, can be said to have a
constant characteristic of having the same color in the spatial direction
regardless of the part thereof.

[0372]In this way, events in the actual world 1 (real world) have
characteristics which are constant in predetermined dimensional
directions, so signals of the actual world 1 have characteristics which
are constant in predetermined dimensional directions.

[0373]In the present Specification, such characteristics which are
constant in predetermined dimensional directions will be called
continuity. Continuity of the signals of the actual world 1 (real world)
means the characteristics which are constant in predetermined dimensional
directions which the signals indicating the events of the actual world 1
(real world) have.

[0374]Countless such continuities exist in the actual world 1 (real
world).

[0375]Next, taking note of the data 3, the data 3 is obtained by the
sensor 2 projecting signals which are information indicating events of the
actual world 1 having predetermined dimensions, and includes continuity
corresponding to the continuity of signals in the real world.
It can be said that the data 3 includes continuity wherein the continuity
of actual world signals has been projected.

[0376]However, as described above, in the data 3 output from the sensor 2,
a part of the information of the actual world 1 has been lost, so a part
of the continuity contained in the signals of the actual world 1 (real
world) is lost.

[0377]In other words, the data 3 contains a part of the continuity within
the continuity of the signals of the actual world 1 (real world) as data
continuity. Data continuity means characteristics which are constant in
predetermined dimensional directions, which the data 3 has.

[0378]With the present invention, the data continuity which the data 3 has
is used as significant data for estimating signals which are information
indicating events of the actual world 1.

[0379]For example, with the present invention, information indicating an
event in the actual world 1 which has been lost is generated by signal
processing of the data 3, using data continuity.

[0380]Now, with the present invention, of the length (space), time, and
mass, which are dimensions of signals serving as information indicating
events in the actual world 1, continuity in the spatial direction or time
direction is used.

[0381]Returning to FIG. 1, the sensor 2 is formed of, for example, a
digital still camera, a video camera, or the like, and takes images of
the actual world 1, and outputs the image data which is the obtained data
3, to a signal processing device 4. The sensor 2 may also be a
thermography device, a pressure sensor using photo-elasticity, or the
like.

[0382]The signal processing device 4 is configured of, for example, a
personal computer or the like.

[0383]The signal processing device 4 is configured as shown in FIG. 2, for
example. A CPU (Central Processing Unit) 21 executes various types of
processing following programs stored in ROM (Read Only Memory) 22 or the
storage unit 28. RAM (Random Access Memory) 23 stores programs to be
executed by the CPU 21, data, and so forth, as suitable. The CPU 21, ROM
22, and RAM 23, are mutually connected by a bus 24.

[0384]Also connected to the CPU 21 is an input/output interface 25 via the
bus 24. An input unit 26 made up of a keyboard, mouse, microphone, and
so forth, and an output unit 27 made up of a display, speaker, and so
forth, are connected to the input/output interface 25. The CPU 21
executes various types of processing corresponding to commands input from
the input unit 26. The CPU 21 then outputs images and audio and the like
obtained as a result of processing to the output unit 27.

[0385]A storage unit 28 connected to the input/output interface 25 is
configured of a hard disk for example, and stores the programs which the
CPU 21 executes and various types of data. A communication unit 29
communicates with external devices via the Internet and other networks.
In the case of this example, the communication unit 29 acts as an
acquiring unit for capturing data 3 output from the sensor 2.

[0386]Also, an arrangement may be made wherein programs are obtained via
the communication unit 29 and stored in the storage unit 28.

[0387]A drive 30 connected to the input/output interface 25 drives a
magnetic disk 51, optical disk 52, magneto-optical disk 53, or
semiconductor memory 54 or the like mounted thereto, and obtains programs
and data recorded therein. The obtained programs and data are transferred
to the storage unit 28 as necessary and stored.

[0389]Note that whether the functions of the signal processing device 4
are realized by hardware or realized by software is irrelevant. That is
to say, the block diagrams in the present Specification may be taken to
be hardware block diagrams or may be taken to be software function block
diagrams.

[0390]With the signal processing device 4 shown in FIG. 3, image data
which is an example of the data 3 is input, and the continuity of the
data is detected from the input image data (input image). Next, the
signals of the actual world 1 acquired by the sensor 2 are estimated from
the continuity of the data detected. Then, based on the estimated signals
of the actual world 1, an image is generated, and the generated image
(output image) is output. That is to say, FIG. 3 is a diagram
illustrating the configuration of the signal processing device 4 which is
an image processing device.

[0391]The input image (image data which is an example of the data 3) input
to the signal processing device 4 is supplied to a data continuity
detecting unit 101 and actual world estimating unit 102.

[0392]The data continuity detecting unit 101 detects the continuity of the
data from the input image, and supplies data continuity information
indicating the detected continuity to the actual world estimating unit
102 and an image generating unit 103. The data continuity information
includes, for example, the position of a region of pixels having
continuity of data, the direction of a region of pixels having continuity
of data (the angle or gradient of the time direction and space
direction), or the length of a region of pixels having continuity of
data, or the like in the input image. Detailed configuration of the data
continuity detecting unit 101 will be described later.

[0393]The actual world estimating unit 102 estimates the signals of the
actual world 1, based on the input image and the data continuity
information supplied from the data continuity detecting unit 101. That is
to say, the actual world estimating unit 102 estimates an image which is
the signals of the actual world cast into the sensor 2 at the time that
the input image was acquired. The actual world estimating unit 102
supplies the actual world estimation information indicating the results
of the estimation of the signals of the actual world 1, to the image
generating unit 103. The detailed configuration of the actual world
estimating unit 102 will be described later.

[0394]The image generating unit 103 generates signals further
approximating the signals of the actual world 1, based on the actual
world estimation information indicating the estimated signals of the
actual world 1, supplied from the actual world estimating unit 102, and
outputs the generated signals. Or, the image generating unit 103
generates signals further approximating the signals of the actual world
1, based on the data continuity information supplied from the data
continuity detecting unit 101, and the actual world estimation
information indicating the estimated signals of the actual world 1,
supplied from the actual world estimating unit 102, and outputs the
generated signals.

[0395]That is to say, the image generating unit 103 generates an image
further approximating the image of the actual world 1 based on the actual
world estimation information, and outputs the generated image as an
output image. Or, the image generating unit 103 generates an image
further approximating the image of the actual world 1 based on the data
continuity information and actual world estimation information, and
outputs the generated image as an output image.

[0396]For example, the image generating unit 103 generates an image with
higher resolution in the spatial direction or time direction in
comparison with the input image, by integrating the estimated image of
the actual world 1 within a desired range of the spatial direction or
time direction, based on the actual world estimation information, and
outputs the generated image as an output image. For example, the image
generating unit 103 generates an image by extrapolation/interpolation,
and outputs the generated image as an output image.

[0397]Detailed configuration of the image generating unit 103 will be
described later.

[0398]Next, the principle of the present invention will be described with
reference to FIG. 4 through FIG. 7.

[0399]FIG. 4 is a diagram describing the principle of processing with a
conventional signal processing device 121. The conventional signal
processing device 121 takes the data 3 as the reference for processing,
and executes processing such as increasing resolution and the like with
the data 3 as the object of processing. With the conventional signal
processing device 121, the actual world 1 is never taken into
consideration, and the data 3 is the ultimate reference, so information
exceeding the information contained in the data 3 can not be obtained as
output.

[0400]Also, with the conventional signal processing device 121, distortion
in the data 3 due to the sensor 2 (difference between the signals which
are information of the actual world 1, and the data 3) is not taken into
consideration whatsoever, so the conventional signal processing device
121 outputs signals still containing the distortion. Further, depending
on the processing performed by the signal processing device 121, the
distortion due to the sensor 2 present within the data 3 is further
amplified, and data containing the amplified distortion is output.

[0401]Thus, with conventional signal processing, (the signals of) the
actual world 1, from which the data 3 has been obtained, was never taken
into consideration. In other words, with the conventional signal
processing, the actual world 1 was understood within the framework of the
information contained in the data 3, so the limits of the signal
processing are determined by the information and distortion contained in
the data 3. The present Applicant has separately proposed signal
processing taking into consideration the actual world 1, but this did not
take into consideration the later-described continuity.

[0402]In contrast with this, with the signal processing according to the
present invention, processing is executed taking (the signals of) the
actual world 1 into consideration in an explicit manner.

[0403]FIG. 5 is a diagram for describing the principle of the processing
at the signal processing device 4 according to the present invention.

[0404]This is the same as the conventional arrangement wherein signals,
which are information indicating events of the actual world 1, are
obtained by the sensor 2, and the sensor 2 outputs data 3 wherein the
signals which are information of the actual world 1 are projected.

[0405]However, with the present invention, signals, which are information
indicating events of the actual world 1, obtained by the sensor 2, are
explicitly taken into consideration. That is to say, signal processing is
performed conscious of the fact that the data 3 contains distortion due
to the sensor 2 (difference between the signals which are information of
the actual world 1, and the data 3).

[0406]Thus, with the signal processing according to the present invention,
the processing results are not restricted due to the information
contained in the data 3 and the distortion, and for example, processing
results which are more accurate and which have higher precision than
conventionally can be obtained with regard to events in the actual world
1. That is to say, with the present invention, processing results which
are more accurate and which have higher precision can be obtained with
regard to signals, which are information indicating events of the actual
world 1, input to the sensor 2.

[0407]FIG. 6 and FIG. 7 are diagrams for describing the principle of the
present invention in greater detail.

[0408]As shown in FIG. 6, signals of the actual world, which are an image
for example, are imaged on the photoreception face of a CCD (Charge
Coupled Device) which is an example of the sensor 2, by an optical system
141 made up of lenses, an optical LPF (Low Pass Filter), and the like.
The CCD, which is an example of the sensor 2, has integration properties,
so a difference is generated between the data 3 output from the CCD and
the image of the actual world 1. Details of the integration properties of the
sensor 2 will be described later.

[0409]With the signal processing according to the present invention, the
relationship between the image of the actual world 1 obtained by the CCD,
and the data 3 taken by the CCD and output, is explicitly taken into
consideration. That is to say, the relationship between the data 3 and
the signals which is information of the actual world obtained by the
sensor 2, is explicitly taken into consideration.

[0410]More specifically, as shown in FIG. 7, the signal processing device
4 uses a model 161 to approximate (describe) the actual world 1. The
model 161 is represented by, for example, N variables. More accurately,
the model 161 approximates (describes) signals of the actual world 1.

[0411]In order to predict the model 161, the signal processing device 4
extracts M pieces of data 162 from the data 3. At the time of extracting
the M pieces of data 162 from the data 3, the signal processing device 4
uses the continuity of the data contained in the data 3. In other words,
the signal processing device 4 extracts data 162 for predicting the model
161, based on the continuity of the data contained in the data 3.
Consequently, the model 161 is constrained by the continuity of the data.

[0412]That is to say, the model 161 approximates (information (signals)
indicating) events of the actual world having continuity (constant
characteristics in a predetermined dimensional direction), which
generates the data continuity in the data 3.

[0413]Now, in the event that the number M of the data 162 is N or more, N
being the number of variables of the model, the model 161 represented by
the N variables can be predicted from the M pieces of the data 162.

[0414]In this way, the signal processing device 4 can take into
consideration the signals which are information of the actual world 1, by
predicting the model 161 approximating (describing) the (signals of the)
actual world 1.

[0415]Next, the integration effects of the sensor 2 will be described.

[0416]An image sensor such as a CCD or CMOS (Complementary Metal-Oxide
Semiconductor), which is the sensor 2 for taking images, projects
signals, which are information of the real world, onto two-dimensional
data, at the time of imaging the real world. The pixels of the image
sensor each have a predetermined area, as a so-called photoreception face
(photoreception region). Incident light to the photoreception face having
a predetermined area is integrated in the space direction and time
direction for each pixel, and is converted into a single pixel value for
each pixel.

[0417]The space-time integration of images will be described with
reference to FIG. 8 through FIG. 11.

[0418]An image sensor images a subject (object) in the real world, and
outputs the obtained image data as a result of imaging in increments of
single frames. That is to say, the image sensor acquires signals of the
actual world 1 which are light reflected off of the subject of the actual
world 1, and outputs the data 3.

[0419]For example, the image sensor outputs image data of 30 frames per
second. In this case, the exposure time of the image sensor can be made
to be 1/30 seconds. The exposure time is the time from the image sensor
starting conversion of incident light into electric charge, to ending of
the conversion of incident light into electric charge. Hereafter, the
exposure time will also be called shutter time.

[0420]FIG. 8 is a diagram describing an example of a pixel array on the
image sensor. In FIG. 8, A through I denote individual pixels. The pixels
are placed on a plane corresponding to the image displayed by the image
data. A single detecting element corresponding to a single pixel is
placed on the image sensor. At the time of the image sensor taking images
of the actual world 1, the one detecting element outputs one pixel value
corresponding to the one pixel making up the image data. For example, the
position in the spatial direction X (X coordinate) of the detecting
element corresponds to the horizontal position on the image displayed by
the image data, and the position in the spatial direction Y (Y
coordinate) of the detecting element corresponds to the vertical position
on the image displayed by the image data.

[0421]Distribution of intensity of light of the actual world 1 has expanse
in the three-dimensional spatial directions and the time direction, but
the image sensor acquires light of the actual world 1 in two-dimensional
spatial directions and the time direction, and generates data 3
representing the distribution of intensity of light in the
two-dimensional spatial directions and the time direction.

[0422]As shown in FIG. 9, the detecting device which is a CCD for example,
converts light cast onto the photoreception face (photoreception region)
(detecting region) into electric charge during a period corresponding to
the shutter time, and accumulates the converted charge. The light is
information (signals) of the actual world 1 regarding which the intensity
is determined by the three-dimensional spatial position and
point-in-time. The distribution of intensity of light of the actual world
1 can be represented by a function F(x, y, z, t), wherein position x, y,
z, in three-dimensional space, and point-in-time t, are variables.

[0423]The amount of charge accumulated in the detecting device which is a
CCD is approximately proportionate to the intensity of the light cast
onto the entire photoreception face having two-dimensional spatial
expanse, and the amount of time that light is cast thereupon. The
detecting device adds the charge converted from the light cast onto the
entire photoreception face, to the charge already accumulated during a
period corresponding to the shutter time. That is to say, the detecting
device integrates the light cast onto the entire photoreception face
having a two-dimensional spatial expanse, and accumulates a charge of an
amount corresponding to the integrated light during a period
corresponding to the shutter time. The detecting device can also be said
to have an integration effect regarding space (photoreception face) and
time (shutter time).

[0424]The charge accumulated in the detecting device is converted into a
voltage value by an unshown circuit, the voltage value is further
converted into a pixel value such as digital data or the like, and is
output as data 3. Accordingly, the individual pixel values output from
the image sensor have a value projected on one-dimensional space, which
is the result of integrating the portion of the information (signals) of
the actual world 1 having time-space expanse with regard to the time
direction of the shutter time and the spatial direction of the
photoreception face of the detecting device.

[0425]That is to say, the pixel value of one pixel is represented as the
integration of F(x, y, t). F(x, y, t) is a function representing the
distribution of light intensity on the photoreception face of the
detecting device. For example, the pixel value P is represented by
Expression (1).

P = \int_{t1}^{t2} \int_{y1}^{y2} \int_{x1}^{x2} F(x, y, t) dx dy dt (1)

[0426]In Expression (1), x1 represents the spatial coordinate at the
left-side boundary of the photoreception face of the detecting device (X
coordinate). x2 represents the spatial coordinate at the right-side
boundary of the photoreception face of the detecting device (X
coordinate). In Expression (1), y1 represents the spatial coordinate
at the top-side boundary of the photoreception face of the detecting
device (Y coordinate). y2 represents the spatial coordinate at the
bottom-side boundary of the photoreception face of the detecting device
(Y coordinate). Also, t1 represents the point-in-time at which
conversion of incident light into an electric charge was started. t2
represents the point-in-time at which conversion of incident light into
an electric charge was ended.
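
The integration of Expression (1) can also be illustrated numerically. The
following Python sketch is only an illustration (the function name
pixel_value, the midpoint-rule discretization, and the example light
distribution F are assumptions, not part of the embodiment); it sums
F(x, y, t) over the photoreception face [x1, x2] x [y1, y2] and the
shutter time [t1, t2]:

import numpy as np

def pixel_value(F, x1, x2, y1, y2, t1, t2, n=32):
    # Midpoint-rule approximation of Expression (1):
    # P = triple integral of F(x, y, t) over [x1, x2], [y1, y2] and [t1, t2].
    xs = x1 + (np.arange(n) + 0.5) * (x2 - x1) / n
    ys = y1 + (np.arange(n) + 0.5) * (y2 - y1) / n
    ts = t1 + (np.arange(n) + 0.5) * (t2 - t1) / n
    X, Y, T = np.meshgrid(xs, ys, ts, indexing="ij")
    cell = (x2 - x1) * (y2 - y1) * (t2 - t1) / n ** 3
    return float(np.sum(F(X, Y, T)) * cell)

# Example: a static edge; half of the photoreception face receives level 1.0,
# the other half level 0.2, over a shutter time of 1/30 second.
F = lambda x, y, t: np.where(x < 0.5, 1.0, 0.2)
print(pixel_value(F, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0 / 30))

Any structure of F finer than the photoreception face is averaged away into
the single value P, which is the loss of continuity described above.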

[0427]Note that actually, the gain of the pixel values of the image data
output from the image sensor is corrected for the overall frame.

[0428]Each of the pixel values of the image data are integration values of
the light cast on the photoreception face of each of the detecting
elements of the image sensor, and of the light cast onto the image
sensor, waveforms of light of the actual world 1 finer than the
photoreception face of the detecting element are hidden in the pixel
value as integrated values.

[0429]Hereafter, in the present Specification, the waveform of signals
represented with a predetermined dimension as a reference may be referred
to simply as waveforms.

[0430]Thus, the image of the actual world 1 is integrated in the spatial
direction and time direction in increments of pixels, so a part of the
continuity of the image of the actual world 1 drops out from the image
data, so only another part of the continuity of the image of the actual
world 1 is left in the image data. Or, there may be cases wherein
continuity which has changed from the continuity of the image of the
actual world 1 is included in the image data.

[0431]Further description will be made regarding the integration effect in
the spatial direction for an image taken by an image sensor having
integration effects.

[0432]FIG. 10 is a diagram describing the relationship between incident
light to the detecting elements corresponding to the pixel D through
pixel F, and the pixel values. F(x) in FIG. 10 is an example of a
function representing the distribution of light intensity of the actual
world 1, having the coordinate x in the spatial direction X in space (on
the detecting device) as a variable. In other words, F(x) is an example
of a function representing the distribution of light intensity of the
actual world 1, with the spatial direction Y and time direction constant.
In FIG. 10, L indicates the length in the spatial direction X of the
photoreception face of the detecting device corresponding to the pixel D
through pixel F.

[0433]The pixel value of a single pixel is represented as the integral of
F(x). For example, the pixel value P of the pixel E is represented by
Expression (2).

P = \int_{x1}^{x2} F(x) dx (2)

[0434]In the Expression (2), x1 represents the spatial coordinate in
the spatial direction X at the left-side boundary of the photoreception
face of the detecting device corresponding to the pixel E. x2
represents the spatial coordinate in the spatial direction X at the
right-side boundary of the photoreception face of the detecting device
corresponding to the pixel E.

[0435]In the same way, further description will be made regarding the
integration effect in the time direction for an image taken by an image
sensor having integration effects.

[0436]FIG. 11 is a diagram for describing the relationship between time
elapsed, the incident light to a detecting element corresponding to a
single pixel, and the pixel value. F(t) in FIG. 11 is a function
representing the distribution of light intensity of the actual world 1,
having the point-in-time t as a variable. In other words, F(t) is an
example of a function representing the distribution of light intensity of
the actual world 1, with the spatial direction Y and the spatial
direction X constant. ts represents the shutter time.

[0437]The frame #n-1 is a frame which is previous to the frame #n
time-wise, and the frame #n+1 is a frame following the frame #n
time-wise. That is to say, the frame #n-1, frame #n, and frame #n+1, are
displayed in the order of frame #n-1, frame #n, and frame #n+1.

[0438]Note that in the example shown in FIG. 11, the shutter time ts
and the frame intervals are the same.

[0439]The pixel value of a single pixel is represented as the integral of
F(t). For example, the pixel value P of the pixel of frame #n is
represented by Expression (3).

P = \int_{t1}^{t2} F(t) dt (3)

[0440]In the Expression (3), t1 represents the time at which
conversion of incident light into an electric charge was started. t2
represents the time at which conversion of incident light into an
electric charge was ended.

[0441]Hereafter, the integration effect in the spatial direction by the
sensor 2 will be referred to simply as spatial integration effect, and
the integration effect in the time direction by the sensor 2 also will be
referred to simply as time integration effect. Also, space integration
effects or time integration effects will be simply called integration
effects.

[0442]Next, description will be made regarding an example of continuity of
data included in the data 3 acquired by the image sensor having
integration effects.

[0443]FIG. 12 is a diagram illustrating a linear object of the actual
world 1 (e.g., a fine line), i.e., an example of distribution of light
intensity. In FIG. 12, the position to the upper side of the drawing
indicates the intensity (level) of light, the position to the upper right
side of the drawing indicates the position in the spatial direction X
which is one direction of the spatial directions of the image, and the
position to the right side of the drawing indicates the position in the
spatial direction Y which is the other direction of the spatial
directions of the image.

[0444]The image of the linear object of the actual world 1 includes
predetermined continuity. That is to say, the image shown in FIG. 12 has
continuity in that the cross-sectional shape (the change in level as to
the change in position in the direction orthogonal to the length
direction) is the same at any arbitrary position in the length direction.

[0445]FIG. 13 is a diagram illustrating an example of pixel values of
image data obtained by actual image-taking, corresponding to the image
shown in FIG. 12.

[0446]FIG. 14 is a model diagram of the image data shown in FIG. 13.

[0447]The model diagram shown in FIG. 14 is a model diagram of image data
obtained by imaging, with the image sensor, an image of a linear object
having a diameter shorter than the length L of the photoreception face of
each pixel, and extending in a direction offset from the array of the
pixels of the image sensor (the vertical or horizontal array of the
pixels). The image cast into the image sensor at the time that the image
data shown in FIG. 14 was acquired is an image of the linear object of
the actual world 1 shown in FIG. 12.

[0448]In FIG. 14, the position to the upper side of the drawing indicates
the pixel value, the position to the upper right side of the drawing
indicates the position in the spatial direction X which is one direction
of the spatial directions of the image, and the position to the right
side of the drawing indicates the position in the spatial direction Y
which is the other direction of the spatial directions of the image. The
direction indicating the pixel value in FIG. 14 corresponds to the
direction of level in FIG. 12, and the spatial direction X and spatial
direction Y in FIG. 14 also are the same as the directions in FIG. 12.

[0449]In the event of taking an image of a linear object having a diameter
narrower than the length L of the photoreception face of each pixel with
the image sensor, the linear object is represented in the image data
obtained as a result of the image-taking as multiple arc shapes
(half-discs) having a predetermined length which are arrayed in a
diagonally-offset fashion, in a model representation, for example. The
arc shapes are of approximately the same shape. One arc shape is formed
on one row of pixels vertically, or is formed on one row of pixels
horizontally. For example, one arc shape shown in FIG. 14 is formed on
one row of pixels vertically.

[0450]Thus, with the image data taken and obtained by the image sensor,
for example, the continuity which the linear object image of the actual
world 1 had, in that the cross-sectional shape in the spatial direction Y
is the same at any arbitrary position in the length direction, is lost.
Also, it can be said that the continuity which the linear object image of
the actual world 1 had has changed into continuity in that arc shapes of
the same shape, formed on one row of pixels vertically or on one row of
pixels horizontally, are arrayed at predetermined intervals.
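
This projection of a fine line into diagonally-offset pixel-value bumps can
be reproduced with a small Python simulation. The simulation below is
purely illustrative (unit-square pixels, a background level of 0, and the
particular line level, width, and slope are all assumptions); it is not a
reproduction of FIG. 14 itself:

import numpy as np

def simulate_fine_line(size=8, slope=0.2, width=0.3, level=1.0, n=64):
    # Illustrative simulation only: a linear object narrower than the
    # length L of the photoreception face is integrated per pixel, so the
    # resulting image data shows pixel-value bumps offset diagonally from
    # column to column, in the manner of the model diagram of FIG. 14.
    image = np.zeros((size, size))
    offs = (np.arange(n) + 0.5) / n
    for row in range(size):
        for col in range(size):
            X, Y = np.meshgrid(col + offs, row + offs, indexing="ij")
            center = slope * X + 2.0                 # fine line: y = 0.2*x + 2
            inside = np.abs(Y - center) < width / 2
            image[row, col] = level * inside.mean()  # spatial integration
    return image

print(np.round(simulate_fine_line(), 2))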

[0451]FIG. 15 is a diagram illustrating an image in the actual world 1 of
an object having a straight edge, and is of a monotone color different
from that of the background, i.e., an example of distribution of light
intensity. In FIG. 15, the position to the upper side of the drawing
indicates the intensity (level) of light, the position to the upper right
side of the drawing indicates the position in the spatial direction X
which is one direction of the spatial directions of the image, and the
position to the right side of the drawing indicates the position in the
spatial direction Y which is the other direction of the spatial
directions of the image.

[0452]The image of the object of the actual world 1 which has a straight
edge and is of a monotone color different from that of the background,
includes predetermined continuity. That is to say, the image shown in
FIG. 15 has continuity in that the cross-sectional shape (the change in
level as to the change in position in the direction orthogonal to the
length direction) is the same at any arbitrary position in the length
direction.

[0453]FIG. 16 is a diagram illustrating an example of pixel values of the
image data obtained by actual image-taking, corresponding to the image
shown in FIG. 15. As shown in FIG. 16, the image data is in a stepped
shape, since the image data is made up of pixel values in increments of
pixels.

[0455]The model diagram shown in FIG. 17 is a model diagram of image data
obtained by taking, with the image sensor, an image of the object of the
actual world 1 which has a straight edge and is of a monotone color
different from that of the background, and extending in a direction
offset from the array of the pixels of the image sensor (the vertical or
horizontal array of the pixels). The image cast into the image sensor at
the time that the image data shown in FIG. 17 was acquired is an image of
the object of the actual world 1 which has a straight edge and is of a
monotone color different from that of the background, shown in FIG. 15.

[0456]In FIG. 17, the position to the upper side of the drawing indicates
the pixel value, the position to the upper right side of the drawing
indicates the position in the spatial direction X which is one direction
of the spatial directions of the image, and the position to the right
side of the drawing indicates the position in the spatial direction Y
which is the other direction of the spatial directions of the image. The
direction indicating the pixel value in FIG. 17 corresponds to the
direction of level in FIG. 15, and the spatial direction X and spatial
direction Y in FIG. 17 also are the same as the directions in FIG. 15.

[0457]In the event of taking an image of an object of the actual world 1
which has a straight edge and is of a monotone color different from that
of the background with an image sensor, the straight edge is represented
in the image data obtained as a result of the image-taking as multiple
pawl shapes having a predetermined length which are arrayed in a
diagonally-offset fashion, in a model representation, for example. The
pawl shapes are of approximately the same shape. One pawl shape is formed
on one row of pixels vertically, or is formed on one row of pixels
horizontally. For example, one pawl shape shown in FIG. 17 is formed on
one row of pixels vertically.

[0458]Thus, the continuity of the image of the object of the actual world 1
which has a straight edge and is of a monotone color different from that
of the background, in that the cross-sectional shape is the same at any
arbitrary position in the length direction of the edge, for example, is
lost in the image data obtained by imaging with an image sensor. Also, it
can be said that the continuity, which the image of the object of the
actual world 1 which has a straight edge and is of a monotone color
different from that of the background had, has changed into continuity in
that pawl shapes of the same shape formed on one row of pixels vertically
or formed on one row of pixels horizontally are arrayed at predetermined
intervals.

[0459]The data continuity detecting unit 101 detects such data continuity
of the data 3 which is an input image, for example. For example, the data
continuity detecting unit 101 detects data continuity by detecting
regions having a constant characteristic in a predetermined dimensional
direction. For example, the data continuity detecting unit 101 detects a
region wherein the same arc shapes are arrayed at constant intervals,
such as shown in FIG. 14. Also, the data continuity detecting unit 101
detects a region wherein the same pawl shapes are arrayed at constant
intervals, such as shown in FIG. 17.

[0460]Also, the data continuity detecting unit 101 detects continuity of
the data by detecting angle (gradient) in the spatial direction,
indicating an array of the same shapes.

[0461]Also, for example, the data continuity detecting unit 101 detects
continuity of data by detecting angle (movement) in the space direction
and time direction, indicating the array of the same shapes in the space
direction and the time direction.

[0462]Further, for example, the data continuity detecting unit 101 detects
continuity in the data by detecting the length of the region having
constant characteristics in a predetermined dimensional direction.
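
As a rough illustration of what detecting such an angle (gradient) can
look like, the following sketch fits a straight line to the peak positions
of a fine-line image. This is only an illustrative sketch under
assumptions; it is not the processing of the data continuity detecting
unit 101, whose detailed configuration is described later:

import numpy as np

def estimate_continuity_gradient(image):
    # Illustrative sketch only: for image data such as the fine-line data
    # simulated earlier, take the row of the peak pixel value in each
    # column and fit a straight line to those positions. The fitted slope
    # indicates the direction (gradient) in which the same shapes are
    # arrayed, i.e. the direction of the data continuity in the data 3.
    cols = np.arange(image.shape[1])
    peak_rows = image.argmax(axis=0)              # row of the peak per column
    gradient, _ = np.polyfit(cols, peak_rows, 1)  # rows moved per column
    return gradient

Applied to the simulated fine-line data shown earlier, this approximately
recovers the slope with which the line was drawn.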

[0463]Hereafter, the portion of data 3 where the sensor 2 has projected
the image of the object of the actual world 1 which has a straight edge
and is of a monotone color different from that of the background, will
also be called a two-valued edge.

[0464]Next, the principle of the present invention will be described in
further detail.

[0466]Conversely, with the signal processing according to the present
invention, the actual world 1 is estimated from the data 3, and the
high-resolution data 181 is generated based on the estimation results.
That is to say, as shown in FIG. 19, the actual world 1 is estimated from
the data 3, and the high-resolution data 181 is generated based on the
estimated actual world 1, taking into consideration the data 3.

[0467]In order to generate the high-resolution data 181 from the actual
world 1, there is the need to take into consideration the relationship
between the actual world 1 and the data 3. For example, how the actual
world 1 is projected on the data 3 by the sensor 2 which is a CCD, is
taken into consideration.

[0468]The sensor 2 which is a CCD has integration properties as described
above. That is to say, one unit of the data 3 (e.g., pixel value) can be
calculated by integrating a signal of the actual world 1 with a detection
region (e.g., photoreception face) of a detection device (e.g., CCD) of
the sensor 2.

[0469]Applying this to the high-resolution data 181, the high-resolution
data 181 can be obtained by applying, to the estimated actual world 1,
processing wherein a virtual high-resolution sensor projects signals of
the actual world 1 into data 3.

[0470]In other words, as shown in FIG. 20, if the signals of the actual
world 1 can be estimated from the data 3, one value contained in the
high-resolution data 181 can be obtained by integrating signals of the
actual world 1 for each detection region of the detecting elements of the
virtual high-resolution sensor (in the time-space direction).

[0471]For example, in the event that the change in signals of the actual
world 1 is smaller than the size of the detection region of the
detecting elements of the sensor 2, the data 3 cannot express the small
changes in the signals of the actual world 1. Accordingly,
high-resolution data 181 indicating small change of the signals of the
actual world 1 can be obtained by integrating the signals of the actual
world 1 estimated from the data 3 with each region (in the time-space
direction) that is smaller in comparison with the change in signals of
the actual world 1.

[0472]That is to say, integrating the signals of the estimated actual
world 1 with the detection region with regard to each detecting element
of the virtual high-resolution sensor enables the high-resolution data
181 to be obtained.

[0473]With the present invention, the image generating unit 103 generates
the high-resolution data 181 by integrating the signals of the estimated
actual world 1 in the time-space direction regions of the detecting
elements of the virtual high-resolution sensor.
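
A minimal one-dimensional sketch of this re-integration follows. The
function name, the scale factor, and the example estimate f_est are
assumptions introduced only for illustration; the actual image generating
unit 103 is described later:

import numpy as np

def high_resolution_data(f_est, x1, x2, scale=4, n=64):
    # Once a continuous estimate f_est(x) of the actual-world signal is
    # available, integrate it again over detection regions `scale` times
    # narrower than the original photoreception face, i.e. the detection
    # regions of a virtual high-resolution sensor.
    edges = np.linspace(x1, x2, scale + 1)
    values = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = a + (np.arange(n) + 0.5) * (b - a) / n     # midpoint rule
        values.append(float(np.mean(f_est(xs)) * (b - a)))
    return np.array(values)

# A sharp edge that the original pixel [0, 1] blurs into one value is
# resolved into four values by the virtual high-resolution sensor.
f_est = lambda x: np.where(x < 0.6, 1.0, 0.2)
print(high_resolution_data(f_est, 0.0, 1.0))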

[0474]Next, with the present invention, in order to estimate the actual
world 1 from the data 3, the relationship between the data 3 and the
actual world 1, continuity, and a space mixture in the data 3, are used.

[0475]Here, a mixture means a value in the data 3 wherein the signals of
two objects in the actual world 1 are mixed to yield a single value.

[0476]A space mixture means the mixture of the signals of two objects in
the spatial direction due to the spatial integration effects of the
sensor 2.

[0477]The actual world 1 itself is made up of countless events, and
accordingly, in order to represent the actual world 1 itself with
mathematical expressions, for example, there is the need to have an
infinite number of variables. It is impossible to predict all events of
the actual world 1 from the data 3.

[0478]In the same way, it is impossible to predict all of the signals of
the actual world 1 from the data 3.

[0479]Accordingly, as shown in FIG. 21, with the present embodiment, of
the signals of the actual world 1, a portion which has continuity and
which can be expressed by the function f(x, y, z, t) is taken note of,
and the portion of the signals of the actual world 1 which can be
represented by the function f(x, y, z, t) and has continuity is
approximated with a model 161 represented by N variables. As shown in
FIG. 22, the model 161 is predicted from the M pieces of data 162 in the
data 3.

[0480]In order to enable the model 161 to be predicted from the M pieces
of data 162, first, there is the need to represent the model 161 with N
variables based on the continuity, and second, to generate an expression
using the N variables which indicates the relationship between the model
161 represented by the N variables and the M pieces of data 162 based on
the integral properties of the sensor 2. Since the model 161 is
represented by the N variables, based on the continuity, it can be said
that the expression using the N variables that indicates the relationship
between the model 161 represented by the N variables and the M pieces of
data 162, describes the relationship between the part of the signals of
the actual world 1 having continuity, and the part of the data 3 having
data continuity.

[0481]In other words, the part of the signals of the actual world 1 having
continuity, that is approximated by the model 161 represented by the N
variables, generates data continuity in the data 3.

[0482]The data continuity detecting unit 101 detects the part of the data
3 where data continuity has been generated by the part of the signals of
the actual world 1 having continuity, and the characteristics of the part
where data continuity has been generated.

[0483]For example, as shown in FIG. 23, in an image of the object of the
actual world 1 which has a straight edge and is of a monotone color
different from that of the background, the edge at the position of
interest indicated by A in FIG. 23, has a gradient. The arrow B in FIG.
23 indicates the gradient of the edge. A predetermined edge gradient can
be represented as an angle as to a reference axis or as a direction as to
a reference position. For example, a predetermined edge gradient can be
represented as the angle between the coordinate axis of the spatial
direction X and the edge. For example, the predetermined edge gradient
can be represented as the direction indicated by the length of the
spatial direction X and the length of the spatial direction Y.

[0484]At the time that the image of the object of the actual world 1 which
has a straight edge and is of a monotone color different from that of the
background is obtained at the sensor 2 and the data 3 is output, pawl
shapes corresponding to the edge are arrayed in the data 3 at the
position corresponding to the position of interest (A) of the edge in the
image of the actual world 1, which is indicated by A' in FIG. 23, and
pawl shapes corresponding to the edge are arrayed in the direction
corresponding to the gradient of the edge of the image in the actual
world 1, in the direction of the gradient indicated by B' in FIG. 23.

[0485]The model 161 represented with the N variables approximates such a
portion of the signals of the actual world 1 generating data continuity
in the data 3.

[0486]At the time of formulating an expression using the N variables
indicating the relationship between the model 161 represented with the N
variables and the M pieces of data 162, the part where data continuity is
generated in the data 3 is used.

[0487]In this case, in the data 3 shown in FIG. 24, taking note of the
values where data continuity is generated and which belong to a mixed
region, an expression is formulated with a value integrating the signals
of the actual world 1 as being equal to a value output by the detecting
element of the sensor 2. For example, multiple expressions can be
formulated regarding the multiple values in the data 3 where data
continuity is generated.

[0488]In FIG. 24, A denotes the position of interest of the edge, and A'
denotes (the position of) the pixel corresponding to the position (A) of
interest of the edge in the image of the actual world 1.

[0489]Now, a mixed region means a region of data in the data 3 wherein the
signals for two objects in the actual world 1 are mixed and become one
value. For example, a pixel value wherein, in the image of the object of
the actual world 1 which has a straight edge and is of a monotone color
different from that of the background in the data 3, the image of the
object having the straight edge and the image of the background are
integrated, belongs to a mixed region.

[0490]FIG. 25 is a diagram illustrating signals for two objects in the
actual world 1 and values belonging to a mixed region, in a case of
formulating an expression.

[0491]FIG. 25 illustrates, to the left, signals of the actual world 1
corresponding to two objects in the actual world 1 having a predetermined
expansion in the spatial direction X and the spatial direction Y, which
are acquired at the detection region of a single detecting element of the
sensor 2. FIG. 25 illustrates, to the right, a pixel value P of a single
pixel in the data 3 wherein the signals of the actual world 1 illustrated
to the left in FIG. 25 have been projected by a single detecting element
of the sensor 2. That is to say, FIG. 25 illustrates a pixel value P of a single
pixel in the data 3 wherein the signals of the actual world 1
corresponding to two objects in the actual world 1 having a predetermined
expansion in the spatial direction X and the spatial direction Y which
are acquired by a single detecting element of the sensor 2, have been
projected.

[0492]L in FIG. 25 represents the level of the signal of the actual world
1 which is shown in white in FIG. 25, corresponding to one object in the
actual world 1. R in FIG. 25 represents the level of the signal of the
actual world 1 which is shown hatched in FIG. 25, corresponding to the
other object in the actual world 1.

[0493]Here, the mixture ratio α is the ratio of (the area of) the
signals corresponding to the two objects cast into the detecting region
of the one detecting element of the sensor 2 having a predetermined
expansion in the spatial direction X and the spatial direction Y. For
example, the mixture ratio α represents the ratio of area of the
level L signals cast into the detecting region of the one detecting
element of the sensor 2 having a predetermined expansion in the spatial
direction X and the spatial direction Y, as to the area of the detecting
region of a single detecting element of the sensor 2.

[0494]In this case, the relationship between the level L, level R, and the
pixel value P, can be represented by Expression (4).

α×L+(1-α)×R=P (4)

[0495]Note that there may be cases wherein the level R may be taken as the
pixel value of the pixel in the data 3 positioned to the right side of
the pixel of interest, and there may be cases wherein the level L may be
taken as the pixel value of the pixel in the data 3 positioned to the
left side of the pixel of interest.
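
As a small worked example of Expression (4) (the numerical values below
are illustrative only, and taking L and R from the neighboring pixel
values is the assumption noted in the preceding paragraph): rearranging
the expression gives the mixture ratio directly from the pixel value.

# Expression (4): P = alpha*L + (1 - alpha)*R, so alpha = (P - R) / (L - R).
# L and R are taken here from the neighboring pixel values on either side
# of the pixel of interest (an assumption for illustration).
def mixture_ratio(P, L, R):
    return (P - R) / (L - R)

print(mixture_ratio(P=0.65, L=1.0, R=0.2))   # 0.5625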

[0496]Also, the time direction can be taken into consideration in the same
way as with the spatial direction for the mixture ratio α and the
mixed region. For example, in the event that an object in the actual
world 1 which is the object of image-taking, is moving as to the sensor
2, the ratio of signals for the two objects cast into the detecting
region of the single detecting element of the sensor 2 changes in the
time direction. The signals for the two objects regarding which the ratio
changes in the time direction, that have been cast into the detecting
region of the single detecting element of the sensor 2, are projected
into a single value of the data 3 by the detecting element of the sensor
2.

[0497]The mixture of signals for two objects in the time direction due to
time integration effects of the sensor 2 will be called time mixture.

[0498]The data continuity detecting unit 101 detects regions of pixels in
the data 3 where signals of the actual world 1 for two objects in the
actual world 1, for example, have been projected. The data continuity
detecting unit 101 detects gradient in the data 3 corresponding to the
gradient of an edge of an image in the actual world 1, for example.

[0499]The actual world estimating unit 102 estimates the signals of the
actual world by formulating an expression using N variables, representing
the relationship between the model 161 represented by the N variables and
the M pieces of data 162, based on the region of the pixels having a
predetermined mixture ratio α detected by the data continuity
detecting unit 101 and the gradient of the region, for example, and
solving the formulated expression.

[0500]Description will be made further regarding specific estimation of
the actual world 1.

[0501]Of the signals of the actual world represented by the function F(x,
y, z, t), let us consider approximating the signals of the actual world
represented by the function F(x, y, t) at the cross-section in the
spatial direction Z (the position of the sensor 2), with an approximation
function f(x, y, t) determined by a position x in the spatial direction
X, a position y in the spatial direction Y, and a point-in-time t.

[0502]Now, the detection region of the sensor 2 has an expanse in the
spatial direction X and the spatial direction Y. In other words, the
approximation function f(x, y, t) is a function approximating the signals
of the actual world 1 having an expanse in the spatial direction and time
direction, which are acquired with the sensor 2.

[0503]Let us say that projection of the signals of the actual world 1
yields a value P(x, y, t) of the data 3. The value P(x, y, t) of the data
3 is a pixel value which the sensor 2 which is an image sensor outputs,
for example.

[0504]Now, in the event that the projection by the sensor 2 can be
formulated, the value obtained by projecting the approximation function
f(x, y, t) can be represented as a projection function S(x, y, t).

[0505]Obtaining the projection function S(x, y, t) has the following
problems.

[0506]First, generally, the function F(x, y, z, t) representing the
signals of the actual world 1 can be a function with an infinite number
of orders.

[0507]Second, even if the signals of the actual world could be described
as a function, the projection function S(x, y, t) via projection of the
sensor 2 generally cannot be determined. That is to say, the action of
projection by the sensor 2, in other words, the relationship between the
input signals and output signals of the sensor 2, is unknown, so the
projection function S(x, y, t) cannot be determined.

[0508]With regard to the first problem, let us consider expressing the
function f(x, y, t) approximating signals of the actual world 1 with the
sum of products of the function fi(x, y, t) which is a describable
function (e.g., a function with a finite number of orders) and variables
wi.

[0509]Also, with regard to the second problem, formulating projection by
the sensor 2 allows us to describe the function Si(x, y, t) from the
description of the function fi(x, y, t).

[0510]That is to say, representing the function f(x, y, t) approximating
signals of the actual world 1 with the sum of products of the function
fi(x, y, t) and variables wi, the Expression (5) can be
obtained.

f(x, y, t) = \sum_{i=1}^{N} w_i f_i(x, y, t) (5)

[0511]For example, by formulating the projection of the sensor 2 as
indicated in Expression (6), the relationship between the data 3 and the
signals of the actual world can be formulated from Expression (5) as shown
in Expression (7).

S_i(x, y, t) = \int \int \int f_i(x, y, t) dx dy dt (6)

P_j(x_j, y_j, t_j) = \sum_{i=1}^{N} w_i S_i(x_j, y_j, t_j) (7)

[0512]In Expression (7), j represents the index of the data.

[0513]In the event that M groups of data (j=1 through M) sharing the N
variables wi (i=1 through N) exist for Expression (7), Expression
(8) is satisfied, so the model 161 of the actual world can be obtained
from the data 3.

N≦M (8)

[0514]N is the number of variables representing the model 161
approximating the actual world 1. M is the number of pieces of data 162
included in the data 3.

[0515]Representing the function f(x, y, t) approximating the actual world
1 with Expression (5) allows the variable portion wi to be handled
independently. At this time, i represents the index of the variable. Also,
the form of the function represented by fi can be handled
independently, and a desired function can be used for fi.

[0516]Accordingly, the number N of the variables wi can be defined
without dependence on the function fi, and the variables wi can
be obtained from the relationship between the number N of the variables
wi and the number of pieces of data M.

[0517]That is to say, using the following three allows the actual world 1
to be estimated from the data 3.

[0518]First, the N variables are determined. That is to say, Expression
(5) is determined. This enables describing the actual world 1 using
continuity. For example, the signals of the actual world 1 can be
described with a model 161 wherein a cross-section is expressed with a
polynomial, and the same cross-sectional shape continues in a constant
direction.

[0519]Second, for example, projection by the sensor 2 is formulated,
describing Expression (7). For example, this is formulated such that the
results of integration of the signals of the actual world 1 are data 3.

[0520]Third, M pieces of data 162 are collected to satisfy Expression (8).
For example, the data 162 is collected from a region having data
continuity that has been detected with the data continuity detecting unit
101. For example, data 162 of a region wherein a constant cross-section
continues, which is an example of continuity, is collected.

[0521]In this way, the relationship between the data 3 and the actual
world 1 is described with the Expression (5), and M pieces of data 162
are collected, thereby satisfying Expression (8), and the actual world 1
can be estimated.

[0522]More specifically, in the event of N=M, the number of variables N
and the number of expressions M are equal, so the variables wi can
be obtained by formulating and solving a simultaneous equation.
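
For instance, with N = M the simultaneous equation can be solved directly. The following is a minimal illustrative sketch in Python/NumPy (the numerical values and the array names S and P are hypothetical stand-ins for the projected functions Si(j) and the pixel values Pj, not values taken from the embodiment):

import numpy as np

# N = M = 2: two pixel values Pj expressed with two variables wi,
# Pj = w1*S1(j) + w2*S2(j)  (Expression (7) with N = M = 2)
S = np.array([[1.0, 2.0],   # S1(1), S2(1)
              [3.0, 4.0]])  # S1(2), S2(2)
P = np.array([5.0, 6.0])    # observed pixel values P1, P2

w = np.linalg.solve(S, P)   # solve the simultaneous equation S @ w = P
print(w)                    # -> [-4.   4.5]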

[0523]Also, in the event that N<M, various solving methods can be
applied. For example, the variables wi can be obtained by
least-square.

[0524]Now, the solving method by least-square will be described in detail.

[0525]First, an Expression (9) for predicting data 3 from the actual world
1 will be shown according to Expression (7).

P'j(xj, yj, tj) = Σ(i=1 to N) wi·Si(xj, yj, tj)   (9)

[0526]In Expression (9), P'j(xj, yj, tj) is a
prediction value.

[0527]The sum of squared differences E for the prediction value P' and
observed value P is represented by Expression (10).

E = Σ(j=1 to M) (Pj(xj, yj, tj) − P'j(xj, yj, tj))²   (10)

[0528]The variables wi are obtained such that the sum of squared
differences E is the smallest. Accordingly, the partial differential
value of Expression (10) for each variable wk is 0. That is to say,
Expression (11) holds.

∂E/∂wk = 0   (11)

[0529]Expression (11) yields Expression (12).

Σ(j=1 to M) Sk(xj, yj, tj)·Pj(xj, yj, tj) = Σ(i=1 to N) wi·Σ(j=1 to M) Sk(xj, yj, tj)·Si(xj, yj, tj)   (12)

[0530]When Expression (12) holds with k=1 through N, the solution by
least-square is obtained. The normal equation thereof is shown in
Expression (13).

| Σj S1(j)S1(j)  Σj S2(j)S1(j)  …  Σj SN(j)S1(j) |  | w1 |     | Σj S1(j)Pj |
| Σj S1(j)S2(j)  Σj S2(j)S2(j)  …  Σj SN(j)S2(j) |  | w2 |  =  | Σj S2(j)Pj |
|      …              …         …        …      |  |  … |     |     …      |
| Σj S1(j)SN(j)  Σj S2(j)SN(j)  …  Σj SN(j)SN(j) |  | wN |     | Σj SN(j)Pj |   (13)

[0531]Note that in Expression (13), Si(xj, yj, tj) is
described as Si(j).

SMAT = | Σj S1(j)S1(j)  Σj S2(j)S1(j)  …  Σj SN(j)S1(j) |
       | Σj S1(j)S2(j)  Σj S2(j)S2(j)  …  Σj SN(j)S2(j) |
       |      …              …         …        …      |
       | Σj S1(j)SN(j)  Σj S2(j)SN(j)  …  Σj SN(j)SN(j) |   (14)

WMAT = | w1 |
       | w2 |
       |  … |
       | wN |   (15)

PMAT = | Σj S1(j)Pj |
       | Σj S2(j)Pj |
       |     …      |
       | Σj SN(j)Pj |   (16)

[0532]From Expression (14) through Expression (16), Expression (13) can be
expressed as SMAT WMAT = PMAT.

[0533]In Expression (13), Si represents the projection of the actual
world 1. In Expression (13), Pj represents the data 3. In Expression
(13), wi represents variables for describing and obtaining the
characteristics of the signals of the actual world 1.

[0534]Accordingly, inputting the data 3 into Expression (13) and obtaining
WMAT by a matrix solution or the like enables the actual world 1 to
be estimated. That is to say, the actual world 1 can be estimated by
computing Expression (17).

WMAT = SMAT^(-1) PMAT   (17)

[0535]Note that in the event that SMAT is not regular (i.e., is not
invertible), a transposed matrix of SMAT can be used to obtain WMAT.

[0536]The actual world estimating unit 102 estimates the actual world 1
by, for example, inputting the data 3 into Expression (13) and obtaining
WMAT by a matrix solution or the like.
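
The least-square computation of Expression (13) through Expression (17) can be sketched roughly as follows (hypothetical Python/NumPy code; the helper name estimate_weights and the random example data are illustrative assumptions, not the actual implementation of the actual world estimating unit 102):

import numpy as np

def estimate_weights(S, P):
    """Solve the normal equation SMAT WMAT = PMAT for the variables wi.

    S : (M, N) array with S[j, i] = Si(j), the projection of the function fi for data j
    P : (M,) array with P[j] = observed pixel value Pj
    """
    S_mat = S.T @ S    # (N, N) matrix of sums over j of Si(j)*Sk(j), Expression (14)
    P_mat = S.T @ P    # (N,) vector of sums over j of Sk(j)*Pj, Expression (16)
    # lstsq also copes with the case where S_mat is not regular (not invertible)
    W_mat, *_ = np.linalg.lstsq(S_mat, P_mat, rcond=None)
    return W_mat       # Expression (17): WMAT = SMAT^(-1) PMAT

# Example with M = 27 observations and N = 4 basis functions (random stand-ins)
rng = np.random.default_rng(0)
S = rng.normal(size=(27, 4))
true_w = np.array([1.0, -0.5, 0.25, 2.0])
P = S @ true_w
print(estimate_weights(S, P))   # recovers approximately true_w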

[0537]Now, an even more detailed example will be described. For example,
the cross-sectional shape of the signals of the actual world 1, i.e., the
change in level as to the change in position, will be described with a
polynomial. Let us assume that the cross-sectional shape of the signals
of the actual world 1 is constant, and that the cross-section of the
signals of the actual world 1 moves at a constant speed. Projection of
the signals of the actual world 1 from the sensor 2 to the data 3 is
formulated by three-dimensional integration in the time-space direction
of the signals of the actual world 1.

[0538]The assumption that the cross-section of the signals of the actual
world 1 moves at a constant speed yields Expression (18) and Expression
(19).

x' = x + vx·t   (18)

y' = y + vy·t   (19)

[0539]Here, vx and vy are constant.

[0540]Using Expression (18) and Expression (19), the cross-sectional shape
of the signals of the actual world 1 can be represented as in Expression
(20).

f(x', y') = f(x + vx·t, y + vy·t)   (20)

[0541]Formulating projection of the signals of the actual world 1 from the
sensor 2 to the data 3 by three-dimensional integration in the time-space
direction of the signals of the actual world 1 yields Expression (21).

S(x, y, t) = ∫(ts to te) ∫(ys to ye) ∫(xs to xe) f(x', y') dx dy dt
           = ∫(ts to te) ∫(ys to ye) ∫(xs to xe) f(x + vx·t, y + vy·t) dx dy dt   (21)

[0542]In Expression (21), S(x, y, t) represents the value integrated over
the region from position xs to position xe in the spatial
direction X, from position ys to position ye in the spatial
direction Y, and from point-in-time ts to point-in-time te in
the time direction t, i.e., over the region represented as a space-time
cuboid.

[0543]Solving Expression (13) using a desired function f(x', y') whereby
Expression (21) can be determined enables the signals of the actual world
1 to be estimated.

[0544]In the following, we will use the function indicated in Expression
(22) as an example of the function f(x', y').

f(x', y') = w1·f1(x', y') + w2·f2(x', y') + … + wN·fN(x', y')   (22)

[0545]That is to say, the signals of the actual world 1 are estimated to
include the continuity represented in Expression (18), Expression (19),
and Expression (22). This indicates that the cross-section with a
constant shape is moving in the space-time direction as shown in FIG. 26.

[0549]FIG. 27 is a diagram illustrating an example of the M pieces of data
162 extracted from the data 3. For example, let us say that 27 pixel
values are extracted as the data 162, and that the extracted pixel values
are Pj(x, y, t). In this case, j is 0 through 26.

[0550]In the example shown in FIG. 27, in the event that the pixel value
of the pixel corresponding to the position of interest at the
point-in-time t which is n is P13(x, y, t), and the direction of
array of the pixel values of the pixels having the continuity of data
(e.g., the direction in which the same-shaped pawl shapes detected by the
data continuity detecting unit 101 are arrayed) is a direction connecting
P4(x, y, t), P13(x, y, t), and P22(x, y, t), the pixel
values P9(x, y, t) through P17(x, y, t) at the point-in-time t
which is n, the pixel values P0(x, y, t) through P8(x, y, t) at
the point-in-time t which is n-1 which is earlier in time than n, and the
pixel values P18(x, y, t) through P26(x, y, t) at the
point-in-time t which is n+1 which is later in time than n, are
extracted.

[0551]Now, the region regarding which the pixel values, which are the data
3 output from the image sensor which is the sensor 2, have been obtained,
has an expanse in the time direction and the two-dimensional spatial
directions, as shown in FIG. 28. Now, as shown in FIG. 29, the center of gravity of the
cuboid corresponding to the pixel values (the region regarding which the
pixel values have been obtained) can be used as the position of the pixel
in the space-time direction. The circle in FIG. 29 indicates the center
of gravity.

[0552]Generating Expression (13) from the 27 pixel values P0(x, y, t)
through P26(x, y, t) and from Expression (23), and obtaining W,
enables the actual world 1 to be estimated.

[0553]In this way, the actual world estimating unit 102 generates
Expression (13) from the 27 pixel values P0(x, y, t) through
P26(x, y, t) and from Expression (23), and obtains W, thereby
estimating the signals of the actual world 1.

[0554]Note that a Gaussian function, a sigmoid function, or the like, can
be used for the function fi(x, y, t).

[0555]An example of processing for generating high-resolution data 181
with even higher resolution, corresponding to the data 3, from the
estimated actual world 1 signals, will be described with reference to
FIG. 30 through FIG. 34.

[0556]As shown in FIG. 30, the data 3 has a value wherein signals of the
actual world 1 are integrated in the time direction and two-dimensional
spatial directions. For example, a pixel value which is data 3 that has
been output from the image sensor which is the sensor 2 has a value
wherein the signals of the actual world 1, which is light cast into the
detecting device, are integrated by the shutter time which is the
detection time in the time direction, and integrated by the
photoreception region of the detecting element in the spatial direction.

[0557]Conversely, as shown in FIG. 31, the high-resolution data 181 with
even higher resolution in the spatial direction is generated by
integrating the estimated actual world 1 signals in the time direction by
the same time as the detection time of the sensor 2 which has output the
data 3, and also integrating in the spatial direction by a region
narrower in comparison with the photoreception region of the detecting
element of the sensor 2 which has output the data 3.

[0558]Note that at the time of generating the high-resolution data 181
with even higher resolution in the spatial direction, the region where
the estimated signals of the actual world 1 are integrated can be set
completely independently of the photoreception region of the detecting element
of the sensor 2 which has output the data 3. For example, the
high-resolution data 181 can be provided with resolution which is that of
the data 3 magnified in the spatial direction by an integer, of course,
and further, can be provided with resolution which is that of the data 3
magnified in the spatial direction by a rational number such as 5/3
times, for example.

[0559]Also, as shown in FIG. 32, the high-resolution data 181 with even
higher resolution in the time direction is generated by integrating the
estimated actual world 1 signals in the spatial direction by the same
region as the photoreception region of the detecting element of the
sensor 2 which has output the data 3, and also integrating in the time
direction by a time shorter than the detection time of the sensor 2 which
has output the data 3.

[0560]Note that at the time of generating the high-resolution data 181
with even higher resolution in the time direction, the time by which the
estimated signals of the actual world 1 are integrated can be set
completely independently of the shutter time of the detecting element of the
sensor 2 which has output the data 3. For example, the high-resolution
data 181 can be provided with resolution which is that of the data 3
magnified in the time direction by an integer, of course, and further,
can be provided with resolution which is that of the data 3 magnified in
the time direction by a rational number such as 7/4 times, for example.

[0561]As shown in FIG. 33, high-resolution data 181 with movement blurring
removed is generated by integrating the estimated actual world 1 signals
only in the spatial direction and not in the time direction.

[0562]Further, as shown in FIG. 34, high-resolution data 181 with higher
resolution in the time direction and space direction is generated by
integrating the estimated actual world 1 signals in the spatial direction
by a region narrower in comparison with the photoreception region of the
detecting element of the sensor 2 which has output the data 3, and also
integrating in the time direction by a time shorter in comparison with
the detection time of the sensor 2 which has output the data 3.

[0563]In this case, the region and time for integrating the estimated
actual world 1 signals can be set completely unrelated to the
photoreception region and shutter time of the detecting element of the
sensor 2 which has output the data 3.

[0564]Thus, the image generating unit 103 generates data with higher
resolution in the time direction or the spatial direction, by integrating
the estimated actual world 1 signals by a desired space-time region, for
example.
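
As a rough sketch of this integration (hypothetical Python/NumPy code; the function name integrate_subpixels, the midpoint-rule approximation, and the Gaussian-ridge example are illustrative assumptions, not the actual implementation of the image generating unit 103), the estimated continuous signal f(x, y) is integrated over sub-regions narrower than the photoreception region of one detecting element, yielding several high-resolution pixel values per input pixel:

import numpy as np

def integrate_subpixels(f, x0, x1, y0, y1, factor=2, samples=8):
    """Approximate the integral of f(x, y) over each of factor x factor
    sub-regions of the photoreception region [x0, x1] x [y0, y1]."""
    out = np.empty((factor, factor))
    xs = np.linspace(x0, x1, factor + 1)
    ys = np.linspace(y0, y1, factor + 1)
    for i in range(factor):
        for j in range(factor):
            sx = (xs[j + 1] - xs[j]) / samples
            sy = (ys[i + 1] - ys[i]) / samples
            gx = xs[j] + (np.arange(samples) + 0.5) * sx   # midpoint sample grid
            gy = ys[i] + (np.arange(samples) + 0.5) * sy
            gxx, gyy = np.meshgrid(gx, gy)
            out[i, j] = f(gxx, gyy).mean() * (xs[j + 1] - xs[j]) * (ys[i + 1] - ys[i])
    return out

# Example: an estimated signal shaped like a slanted fine line (Gaussian ridge)
f = lambda x, y: np.exp(-((x - 0.3 * y) ** 2) / 0.02)
print(integrate_subpixels(f, 0.0, 1.0, 0.0, 1.0, factor=2))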

[0565]Accordingly, data which is more accurate with regard to the signals
of the actual world 1, and which has higher resolution in the time
direction or the space direction, can be generated by estimating the
signals of the actual world 1.

[0566]An example of an input image and the results of processing with the
signal processing device 4 according to the present invention will be
described with reference to FIG. 35 through FIG. 39.

[0567]FIG. 35 is a diagram illustrating an original image of an input
image. FIG. 36 is a diagram illustrating an example of an input image.
The input image shown in FIG. 36 is an image generated by taking the
average value of pixel values of pixels belonging to blocks made up of 2
by 2 pixels of the image shown in FIG. 35, as the pixel value of a single
pixel. That is to say, the input image is an image obtained by applying
spatial direction integration to the image shown in FIG. 35, imitating
the integrating properties of the sensor.
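
A minimal sketch of producing such an input image (hypothetical Python/NumPy code; the function name block_average_2x2 is an illustrative assumption) is to take the average of each 2-by-2 block of pixel values as a single output pixel value, imitating the spatial integration of the sensor:

import numpy as np

def block_average_2x2(image):
    """Average each 2x2 block of pixels into one output pixel."""
    h, w = image.shape[:2]
    h2, w2 = h - h % 2, w - w % 2            # drop an odd trailing row/column
    img = image[:h2, :w2].astype(np.float64)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

# Example: a 4x4 ramp image reduced to 2x2
print(block_average_2x2(np.arange(16).reshape(4, 4)))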

[0568]The original image shown in FIG. 35 contains an image of a fine line
inclined at approximately 5 degrees in the clockwise direction from the
vertical direction. In the same way, the input image shown in FIG. 36
contains an image of a fine line inclined at approximately 5 degrees in
the clockwise direction from the vertical direction.

[0569]FIG. 37 is a diagram illustrating an image obtained by applying
conventional class classification adaptation processing to the input
image shown in FIG. 36. Now, class classification adaptation processing is
made up of class classification processing and adaptation processing,
wherein the data is classified based on the nature thereof by the class
classification processing, and subjected to adaptation processing for each
class. In the adaptation processing, a low-image-quality or
standard-image-quality image, for example, is converted into a
high-image-quality image by being subjected to mapping using a
predetermined tap coefficient.

[0570]It can be understood in the image shown in FIG. 37 that the image of
the fine line is different from that of the original image in FIG. 35.

[0571]FIG. 38 is a diagram illustrating the results of detecting the fine
line regions from the input image shown in the example in FIG. 36, by the
data continuity detecting unit 101. In FIG. 38, the white region
indicates the fine line region, i.e., the region wherein the arc shapes
shown in FIG. 14 are arrayed.

[0572]FIG. 39 is a diagram illustrating an example of the output image
output from the signal processing device 4 according to the present
invention, with the image shown in FIG. 36 as the input image. As shown
in FIG. 39, the signal processing device 4 according to the present
invention yields an image closer to the fine line image of the original
image shown in FIG. 35.

[0573]FIG. 40 is a flowchart for describing the processing of signals with
the signal processing device 4 according to the present invention.

[0575]The data continuity detecting unit 101 detects the continuity of
data corresponding to the continuity of the signals of the actual world.
In the processing in step S101, the continuity of data detected by the
data continuity detecting unit 101 is either part of the continuity of
the image of the actual world 1 contained in the data 3, or continuity
which has changed from the continuity of the signals of the actual world
1.

[0576]The data continuity detecting unit 101 detects the data continuity
by detecting a region having a constant characteristic in a predetermined
dimensional direction. Also, the data continuity detecting unit 101
detects data continuity by detecting angle (gradient) in the spatial
direction indicating an array of the same shape.

[0577]Details of the continuity detecting processing in step S101 will be
described later.

[0578]Note that the data continuity information can be used as features,
indicating the characteristics of the data 3.

[0579]In step S102, the actual world estimating unit 102 executes
processing for estimating the actual world. That is to say, the actual
world estimating unit 102 estimates the signals of the actual world based
on the input image and the data continuity information supplied from the
data continuity detecting unit 101. In the processing in step S102 for
example, the actual world estimating unit 102 estimates the signals of
the actual world 1 by predicting a model 161 approximating (describing)
the actual world 1. The actual world estimating unit 102 supplies the
actual world estimation information indicating the estimated signals of
the actual world 1 to the image generating unit 103.

[0580]For example, the actual world estimating unit 102 estimates the
actual world 1 signals by predicting the width of the linear object.
Also, for example, the actual world estimating unit 102 estimates the
actual world 1 signals by predicting a level indicating the color of the
linear object.

[0581]Details of processing for estimating the actual world in step S102
will be described later.

[0582]Note that the actual world estimation information can be used as
features, indicating the characteristics of the data 3.

[0583]In step S103, the image generating unit 103 performs image
generating processing, and the processing ends. That is to say, the image
generating unit 103 generates an image based on the actual world
estimation information, and outputs the generated image. Or, the image
generating unit 103 generates an image based on the data continuity
information and actual world estimation information, and outputs the
generated image.

[0584]For example, in the processing in step S103, the image generating
unit 103 integrates the generated function approximating the real world
light signals in the spatial direction, based on the actual world
estimation information, thereby generating an image with higher resolution
in the spatial direction in comparison with the input image, and outputs
the generated image. For example, the image generating unit 103
integrates the generated function approximating the real world light
signals in the time-space direction, based on the actual world estimation
information, thereby generating an image with higher resolution in the
time direction and the spatial direction in comparison with the input
image, and outputs the generated image. The details of the image
generating processing in step S103 will be described later.

[0585]Thus, the signal processing device 4 according to the present
invention detects data continuity from the data 3, and estimates the
actual world 1 from the detected data continuity. The signal processing
device 4 then generates signals more closely approximating the actual world 1
based on the estimated actual world 1.

[0586]As described above, in the event of performing the processing for
estimating signals of the real world, accurate and highly-precise
processing results can be obtained.

[0587]Also, in the event that first signals, which are real world signals
having first dimensions, are projected, the continuity of data
corresponding to the lost continuity of the real world signals is
detected for second signals of second dimensions, having a number of
dimensions fewer than the first dimensions, from which a part of the
continuity of the signals of the real world has been lost, and the first
signals are estimated by estimating the lost continuity of the real world
signals based on the detected data continuity, accurate and
highly-precise processing results can be obtained as to the events in the
real world.

[0588]Next, the details of the configuration of the data continuity
detecting unit 101 will be described.

[0589]FIG. 41 is a block diagram illustrating the configuration of the
data continuity detecting unit 101.

[0590]Upon taking an image of an object which is a fine line, the data
continuity detecting unit 101, of which the configuration is shown in
FIG. 41, detects the continuity of data contained in the data 3, which is
generated from the continuity in that the cross-sectional shape which the
object has is the same. That is to say, the data continuity detecting
unit 101 of the configuration shown in FIG. 41 detects the continuity of
data contained in the data 3, which is generated from the continuity in
that the change in level of light as to the change in position in the
direction orthogonal to the length-wise direction is the same at an
arbitrary position in the length-wise direction, which the image of the
actual world 1 which is a fine line, has.

[0591]More specifically, the data continuity detecting unit 101 of which
configuration is shown in FIG. 41 detects the region where multiple arc
shapes (half-disks) having a predetermined length are arrayed in a
diagonally-offset adjacent manner, within the data 3 obtained by taking
an image of a fine line with the sensor 2 having spatial integration
effects.

[0592]The data continuity detecting unit 101 extracts the portions of the
image data other than the portion of the image data where the image of
the fine line having data continuity has been projected (hereafter, the
portion of the image data where the image of the fine line having data
continuity has been projected will also be called continuity component,
and the other portions will be called non-continuity component), from an
input image which is the data 3, detects the pixels where the image of
the fine line of the actual world 1 has been projected, from the
extracted non-continuity component and the input image, and detects the
region of the input image made up of pixels where the image of the fine
line of the actual world 1 has been projected.

[0593]A non-continuity component extracting unit 201 extracts the
non-continuity component from the input image, and supplies the
non-continuity component information indicating the extracted
non-continuity component to a peak detecting unit 202 and a monotonous
increase/decrease detecting unit 203 along with the input image.

[0594]For example, as shown in FIG. 42, in the event that an image of the
actual world 1 wherein a fine line exists in front of a background with
an approximately constant light level is projected on the data 3, the
non-continuity component extracting unit 201 extracts the non-continuity
component which is the background, by approximating the background in the
input image which is the data 3, on a plane, as shown in FIG. 43. In FIG.
43, the solid line indicates the pixel values of the data 3, and the
dotted line illustrates the approximation values indicated by the plane
approximating the background. In FIG. 43, A denotes the pixel value of
the pixel where the image of the fine line has been projected, and PL
denotes the plane approximating the background.

[0595]In this way, the pixel values of the multiple pixels at the portion
of the image data having data continuity are discontinuous as to the
non-continuity component.

[0596]The non-continuity component extracting unit 201 detects the
discontinuous portion of the pixel values of the multiple pixels of the
image data which is the data 3, where an image which is light signals of
the actual world 1 has been projected and a part of the continuity of the
image of the actual world 1 has been lost.

[0597]Details of the processing for extracting the non-continuity
component with the non-continuity component extracting unit 201 will be
described later.

[0598]The peak detecting unit 202 and the monotonous increase/decrease
detecting unit 203 remove the non-continuity component from the input
image, based on the non-continuity component information supplied from
the non-continuity component extracting unit 201. For example, the peak
detecting unit 202 and the monotonous increase/decrease detecting unit
203 remove the non-continuity component from the input image by setting
the pixel values of the pixels of the input image where only the
background image has been projected, to 0. Also, for example, the peak
detecting unit 202 and the monotonous increase/decrease detecting unit
203 remove the non-continuity component from the input image by
subtracting values approximated by the plane PL from the pixel values of
each pixel of the input image.
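
The removal of the non-continuity component by planar approximation can be sketched roughly as follows (hypothetical Python/NumPy code; the least-squares plane fit and the function name remove_background_plane are illustrative assumptions about one way to realize the approximation described above, not the actual implementation of the non-continuity component extracting unit 201):

import numpy as np

def remove_background_plane(block):
    """Fit a plane a*x + b*y + c to the pixel values of `block` by least
    squares and subtract it, leaving (approximately) only the portion where
    the fine line has been projected."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    plane = (A @ coeffs).reshape(h, w)       # approximation PL of the background
    return block - plane

# Example: a sloped background with a brighter fine line in one column
background = np.fromfunction(lambda y, x: 10 + 0.5 * x + 0.2 * y, (5, 5))
image = background.copy()
image[:, 2] += 5.0
print(np.round(remove_background_plane(image), 2))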

[0599]Since the background can be removed from the input image, the peak
detecting unit 202 through continuousness detecting unit 204 can process
only the portion of the image data where the fine line has been projected,
thereby further simplifying the processing by the peak detecting unit 202
through the continuousness detecting unit 204.

[0600]Note that the non-continuity component extracting unit 201 may
supply image data wherein the non-continuity component has been removed
from the input image, to the peak detecting unit 202 and the monotonous
increase/decrease detecting unit 203.

[0601]In the example of processing described below, the image data wherein
the non-continuity component has been removed from the input image, i.e.,
image data made up of only pixels containing the continuity component,
is the object.

[0602]Now, description will be made regarding the image data upon which
the fine line image has been projected, which the peak detecting unit 202
through continuousness detecting unit 204 are to detect.

[0603]In the event that there is no optical LPF, the cross-sectional
shape in the spatial direction Y (change in the pixel values as to change
in the position in the spatial direction) of the image data upon which
the fine line image has been projected as shown in FIG. 42 can be thought
to be the trapezoid shown in FIG. 44, or the triangle shown in FIG. 45.
However, ordinary image sensors have an optical LPF, so the image sensor
obtains the image which has passed through the optical LPF and projects
the obtained image on the data 3; in reality, therefore, the cross-sectional
shape of the image data with fine lines in the spatial direction Y has a
shape resembling a Gaussian distribution, as shown in FIG. 46.

[0604]The peak detecting unit 202 through continuousness detecting unit
204 detect a region made up of pixels upon which the fine line image has
been projected wherein the same cross-sectional shape (change in the
pixel values as to change in the position in the spatial direction) is
arrayed vertically in the screen at constant intervals, and further,
detect a region made up of pixels upon which the fine line image has been
projected which is a region having data continuity, by detecting regional
connection corresponding to the length-wise direction of the fine line of
the actual world 1. That is to say, the peak detecting unit 202 through
continuousness detecting unit 204 detect regions wherein arc shapes
(half-disc shapes) are formed on a single vertical row of pixels in the
input image, and determine whether or not the detected regions are
adjacent in the horizontal direction, thereby detecting connection of
regions where arc shapes are formed, corresponding to the length-wise
direction of the fine line image which is signals of the actual world 1.

[0605]Also, the peak detecting unit 202 through continuousness detecting
unit 204 detect a region made up of pixels upon which the fine line image
has been projected wherein the same cross-sectional shape is arrayed
horizontally in the screen at constant intervals, and further, detect a
region made up of pixels upon which the fine line image has been
projected which is a region having data continuity, by detecting
connection of detected regions corresponding to the length-wise direction
of the fine line of the actual world 1. That is to say, the peak
detecting unit 202 through continuousness detecting unit 204 detect
regions wherein arc shapes are formed on a single horizontal row of
pixels in the input image, and determine whether or not the detected
regions are adjacent in the vertical direction, thereby detecting
connection of regions where arc shapes are formed, corresponding to the
length-wise direction of the fine line image, which is signals of the
actual world 1.

[0606]First, description will be made regarding processing for detecting a
region of pixels upon which the fine line image has been projected
wherein the same arc shape is arrayed vertically in the screen at
constant intervals.

[0607]The peak detecting unit 202 detects a pixel having a pixel value
greater than the surrounding pixels, i.e., a peak, and supplies peak
information indicating the position of the peak to the monotonous
increase/decrease detecting unit 203. In the event that pixels arrayed in
a single vertical row in the screen are the object, the peak detecting
unit 202 compares the pixel value of the pixel position upwards in the
screen and the pixel value of the pixel position downwards in the screen,
and detects the pixel with the greater pixel value as the peak. The peak
detecting unit 202 detects one or multiple peaks from a single image,
e.g., from the image of a single frame.

[0608]Here, a single screen refers to a frame or a field. This holds true
in the following description as well.

[0609]For example, the peak detecting unit 202 selects a pixel of interest
from pixels of an image of one frame which have not yet been taken as
pixels of interest, compares the pixel value of the pixel of interest
with the pixel value of the pixel above the pixel of interest, compares
the pixel value of the pixel of interest with the pixel value of the
pixel below the pixel of interest, detects a pixel of interest which has
a greater pixel value than the pixel value of the pixel above and a
greater pixel value than the pixel value of the pixel below, and takes
the detected pixel of interest as a peak. The peak detecting unit 202
supplies peak information indicating the detected peak to the monotonous
increase/decrease detecting unit 203.
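
A minimal sketch of this peak detection for a single vertical row of pixels (hypothetical Python/NumPy code; the function name detect_vertical_peaks is an illustrative assumption, not the actual implementation of the peak detecting unit 202):

import numpy as np

def detect_vertical_peaks(column):
    """Return the indices of pixels whose value is greater than both the
    pixel above and the pixel below (peaks in the spatial direction Y)."""
    c = np.asarray(column, dtype=np.float64)
    # only interior pixels have both an upper and a lower neighbor
    is_peak = (c[1:-1] > c[:-2]) & (c[1:-1] > c[2:])
    return np.nonzero(is_peak)[0] + 1

# Example: a fine-line-like profile with a single peak at index 3
print(detect_vertical_peaks([0.0, 0.1, 0.6, 1.0, 0.5, 0.1, 0.0]))   # -> [3]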

[0610]There are cases wherein the peak detecting unit 202 does not detect
a peak. For example, in the event that the pixel values of all of the
pixels of an image are the same value, or in the event that the pixel
values decrease in one or two directions, no peak is detected. In this
case, no fine line image has been projected on the image data.

[0611]The monotonous increase/decrease detecting unit 203 detects a
candidate for a region made up of pixels upon which the fine line image
has been projected wherein the pixels are vertically arrayed in a single
row as to the peak detected by the peak detecting unit 202, based upon
the peak information indicating the position of the peak supplied from
the peak detecting unit 202, and supplies the region information
indicating the detected region to the continuousness detecting unit 204
along with the peak information.

[0612]More specifically, the monotonous increase/decrease detecting unit
203 detects a region made up of pixels having pixel values monotonously
decreasing with reference to the peak pixel value, as a candidate of a
region made up of pixels upon which the image of the fine line has been
projected. Monotonous decrease means that the pixel values of pixels
which are farther distance-wise from the peak are smaller than the pixel
values of pixels which are closer to the peak.

[0613]Also, the monotonous increase/decrease detecting unit 203 detects a
region made up of pixels having pixel values monotonously increasing with
reference to the peak pixel value, as a candidate of a region made up of
pixels upon which the image of the fine line has been projected.
Monotonous increase means that the pixel values of pixels which are
farther distance-wise from the peak are greater than the pixel values of
pixels which are closer to the peak.

[0614]In the following, the processing regarding regions of pixels having
pixel values monotonously increasing is the same as the processing
regarding regions of pixels having pixel values monotonously decreasing,
so description thereof will be omitted. Also, with the description
regarding processing for detecting a region of pixels upon which the fine
line image has been projected wherein the same arc shape is arrayed
horizontally in the screen at constant intervals, the processing
regarding regions of pixels having pixel values monotonously increasing
is the same as the processing regarding regions of pixels having pixel
values monotonously decreasing, so description thereof will be omitted.

[0615]For example, the monotonous increase/decrease detecting unit 203
obtains, for each of the pixels in a vertical row as to a peak, the pixel
value, the difference as to the pixel value of the pixel above, and the
difference as to the pixel value of the pixel below. The monotonous
increase/decrease detecting unit 203 then detects a region wherein the
pixel value monotonously decreases by detecting pixels wherein the sign
of the difference changes.

[0616]Further, the monotonous increase/decrease detecting unit 203
detects, from the region wherein pixel values monotonously decrease, a
region made up of pixels having pixel values with the same sign as that
of the pixel value of the peak, with the sign of the pixel value of the
peak as a reference, as a candidate of a region made up of pixels upon
which the image of the fine line has been projected.

[0617]For example, the monotonous increase/decrease detecting unit 203
compares the sign of the pixel value of each pixel with the sign of the
pixel value of the pixel above and sign of the pixel value of the pixel
below, and detects the pixel where the sign of the pixel value changes,
thereby detecting a region of pixels having pixel values of the same sign
as the peak within the region where pixel values monotonously decrease.

[0618]Thus, the monotonous increase/decrease detecting unit 203 detects a
region formed of pixels arrayed in a vertical direction wherein the pixel
values monotonously decrease as to the peak and have pixel values of the
same sign as the peak.
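
A rough sketch of this monotonous decrease detection (hypothetical Python/NumPy code; the function name monotonous_decrease_region is an illustrative assumption, and the sign check of the pixel values is folded into the same loop as a simplification):

import numpy as np

def monotonous_decrease_region(column, peak):
    """Walk upward and downward from the peak while the pixel values keep
    decreasing and keep the same sign as the peak value; return the
    (top, bottom) indices of the detected region."""
    c = np.asarray(column, dtype=np.float64)
    peak_sign = np.sign(c[peak])

    top = peak
    while top - 1 >= 0 and c[top - 1] < c[top] and np.sign(c[top - 1]) == peak_sign:
        top -= 1

    bottom = peak
    while (bottom + 1 < len(c) and c[bottom + 1] < c[bottom]
           and np.sign(c[bottom + 1]) == peak_sign):
        bottom += 1

    return top, bottom

# Example: monotonous decrease region around the peak at index 3
print(monotonous_decrease_region([0.0, 0.2, 0.6, 1.0, 0.7, 0.3, 0.3], 3))   # -> (1, 5)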

[0619]FIG. 47 is a diagram describing processing for peak detection and
monotonous increase/decrease region detection, for detecting the region
of pixels wherein the image of the fine line has been projected, from the
pixel values as to a position in the spatial direction Y.

[0620]In FIG. 47 through FIG. 49, and in the description of the data
continuity detecting unit 101 of which the configuration is shown in FIG.
41, P represents a peak.

[0621]The peak detecting unit 202 compares the pixel values of the pixels
with the pixel values of the pixels adjacent thereto in the spatial
direction Y, and detects the peak P by detecting a pixel having a pixel
value greater than the pixel values of the two pixels adjacent in the
spatial direction Y.

[0622]The region made up of the peak P and the pixels on both sides of the
peak P in the spatial direction Y is a monotonous decrease region wherein
the pixel values of the pixels on both sides in the spatial direction Y
monotonously decrease as to the pixel value of the peak P. In FIG. 47,
the arrow denoted by A and the arrow denoted by B represent the monotonous
decrease regions existing on either side of the peak P.

[0623]The monotonous increase/decrease detecting unit 203 obtains the
difference between the pixel values of each pixel and the pixel values of
the pixels adjacent in the spatial direction Y, and detects pixels where
the sign of the difference changes. The monotonous increase/decrease
detecting unit 203 takes the boundary between the detected pixel where
the sign of the difference changes and the pixel immediately prior
thereto (on the peak P side) as the boundary of the fine line region made
up of pixels where the image of the fine line has been projected.

[0624]In FIG. 47, the boundary of the fine line region which is the
boundary between the pixel where the sign of the difference changes and
the pixel immediately prior thereto (on the peak P side) is denoted by C.

[0625]Further, the monotonous increase/decrease detecting unit 203
compares the sign of the pixel values of each pixel with the pixel values
of the pixels adjacent thereto in the spatial direction Y, and detects
pixels where the sign of the pixel value changes in the monotonous
decrease region. The monotonous increase/decrease detecting unit 203
takes the boundary between the detected pixel where the sign of the pixel
value changes and the pixel immediately prior thereto (on the peak P
side) as the boundary of the fine line region.

[0626]In FIG. 47, the boundary of the fine line region which is the
boundary between the pixel where the sign of the pixel value changes and
the pixel immediately prior thereto (on the peak P side) is denoted by D.

[0627]As shown in FIG. 47, the fine line region F made up of pixels where
the image of the fine line has been projected is the region between the
fine line region boundary C and the fine line region boundary D.

[0628]The monotonous increase/decrease detecting unit 203 obtains a fine
line region F which is longer than a predetermined threshold, from fine
line regions F made up of such monotonous increase/decrease regions,
i.e., a fine line region F having a greater number of pixels than the
threshold value. For example, in the event that the threshold value is 3,
the monotonous increase/decrease detecting unit 203 detects a fine line
region F including 4 or more pixels.

[0629]Further, the monotonous increase/decrease detecting unit 203
compares the pixel value of the peak P, the pixel value of the pixel to
the right side of the peak P, and the pixel value of the pixel to the
left side of the peak P, from the fine line region F thus detected, each
with the threshold value, detects a fine line region F having the peak P
wherein the pixel value of the peak P exceeds the threshold value, and
wherein the pixel value of the pixel to the right side of the peak P is
the threshold value or lower, and wherein the pixel value of the pixel to
the left side of the peak P is the threshold value or lower, and takes
the detected fine line region F as a candidate for the region made up of
pixels containing the component of the fine line image.

[0630]In other words, determination is made that a fine line region F
having the peak P, wherein the pixel value of the peak P is the threshold
value or lower, or wherein the pixel value of the pixel to the right side
of the peak P exceeds the threshold value, or wherein the pixel value of
the pixel to the left side of the peak P exceeds the threshold value,
does not contain the component of the fine line image, and is eliminated
from candidates for the region made up of pixels including the component
of the fine line image.

[0631]That is, as shown in FIG. 48, the monotonous increase/decrease
detecting unit 203 compares the pixel value of the peak P with the
threshold value, and also compares the pixel value of the pixel adjacent
to the peak P in the spatial direction X (the direction indicated by the
dotted line AA') with the threshold value, thereby detecting the fine
line region F to which the peak P belongs, wherein the pixel value of the
peak P exceeds the threshold value and wherein the pixel values of the
pixels adjacent thereto in the spatial direction X are equal to or below
the threshold value.
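
The threshold test of FIG. 48 can be sketched as follows (hypothetical Python code; the function name contains_fine_line_component is an illustrative assumption, not the actual implementation of the monotonous increase/decrease detecting unit 203):

def contains_fine_line_component(image, peak_row, peak_col, threshold):
    """True when the peak P exceeds the threshold while the pixels to its
    left and right in the spatial direction X are at or below it."""
    p = image[peak_row][peak_col]
    left = image[peak_row][peak_col - 1]
    right = image[peak_row][peak_col + 1]
    return p > threshold and left <= threshold and right <= threshold

# Example: a peak of 0.9 flanked by background values of 0.1, threshold 0.5
row = [0.1, 0.1, 0.9, 0.1, 0.1]
print(contains_fine_line_component([row], 0, 2, 0.5))   # -> True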

[0632]FIG. 49 is a diagram illustrating the pixel values of pixels arrayed
in the spatial direction X indicated by the dotted line AA' in FIG. 48.
The fine line region F to which the peak P belongs, wherein the pixel
value of the peak P exceeds the threshold value Ths and wherein the
pixel values of the pixels adjacent thereto in the spatial direction X are
equal to or below the threshold value Ths, contains the fine line
component.

[0633]Note that an arrangement may be made wherein the monotonous
increase/decrease detecting unit 203 compares the difference between the
pixel value of the peak P and the pixel value of the background with the
threshold value, taking the pixel value of the background as a reference,
and also compares the difference between the pixel value of the pixels
adjacent to the peak P in the spatial direction and the pixel value of
the background with the threshold value, thereby detecting the fine line
region F to which the peak P belongs, wherein the difference between the
pixel value of the peak P and the pixel value of the background exceeds
the threshold value, and wherein the difference between the pixel value
of the pixel adjacent in the spatial direction X and the pixel value of
the background is equal to or below the threshold value.

[0634]The monotonous increase/decrease detecting unit 203 outputs to the
continuousness detecting unit 204 monotonous increase/decrease region
information indicating a region made up of pixels of which the pixel
values monotonously decrease with the peak P as a reference and the sign
of the pixel value is the same as that of the peak P, wherein the peak P
exceeds the threshold value and wherein the pixel value of the pixel to
the right side of the peak P is equal to or below the threshold value and
the pixel value of the pixel to the left side of the peak P is equal to
or below the threshold value.

[0635]In the event of detecting a region of pixels arrayed in a single row
in the vertical direction of the screen where the image of the fine line
has been projected, pixels belonging to the region indicated by the
monotonous increase/decrease region information are arrayed in the
vertical direction and include pixels where the image of the fine line
has been projected. That is to say, the region indicated by the
monotonous increase/decrease region information includes a region formed
of pixels arrayed in a single row in the vertical direction of the screen
where the image of the fine line has been projected.

[0636]In this way, the peak detecting unit 202 and the monotonous
increase/decrease detecting unit 203 detect a continuity region made up
of pixels where the image of the fine line has been projected, employing
the nature that, of the pixels where the image of the fine line has been
projected, change in the pixel values in the spatial direction Y
approximates Gaussian distribution.

[0637]Of the region made up of pixels arrayed in the vertical direction,
indicated by the monotonous increase/decrease region information supplied
from the monotonous increase/decrease detecting unit 203, the
continuousness detecting unit 204 detects regions including pixels
adjacent in the horizontal direction, i.e., regions having similar pixel
value change and duplicated in the vertical direction, as continuous
regions, and outputs the peak information and data continuity information
indicating the detected continuous regions. The data continuity
information includes monotonous increase/decrease region information,
information indicating the connection of regions, and so forth.

[0638]Arc shapes are aligned at constant intervals in an adjacent manner
with the pixels where the fine line has been projected, so the detected
continuous regions include the pixels where the fine line has been
projected.

[0639]The detected continuous regions include the pixels where arc shapes
are aligned at constant intervals in an adjacent manner to which the fine
line has been projected, so the detected continuous regions are taken as
a continuity region, and the continuousness detecting unit 204 outputs
data continuity information indicating the detected continuous regions.

[0640]That is to say, the continuousness detecting unit 204 uses the
continuity wherein arc shapes are aligned at constant intervals in an
adjacent manner in the data 3 obtained by imaging the fine line, which
has been generated due to the continuity of the image of the fine line in
the actual world 1, the nature of the continuity being continuing in the
length direction, so as to further narrow down the candidates of regions
detected with the peak detecting unit 202 and the monotonous
increase/decrease detecting unit 203.

[0641]FIG. 50 is a diagram describing the processing for detecting the
continuousness of monotonous increase/decrease regions.

[0642]As shown in FIG. 50, in the event that a fine line region F formed
of pixels aligned in a single row in the vertical direction of the screen
includes pixels adjacent in the horizontal direction, the continuousness
detecting unit 204 determines that there is continuousness between the
two monotonous increase/decrease regions, and in the event that pixels
adjacent in the horizontal direction are not included, determines that
there is no continuousness between the two fine line regions F. For
example, a fine line region F-1 made up of pixels aligned in a
single row in the vertical direction of the screen is determined to be
continuous to a fine line region F0 made up of pixels aligned in a
single row in the vertical direction of the screen in the event of
containing a pixel adjacent to a pixel of the fine line region F0 in
the horizontal direction. The fine line region F0 made up of pixels
aligned in a single row in the vertical direction of the screen is
determined to be continuous to a fine line region F1 made up of
pixels aligned in a single row in the vertical direction of the screen in
the event of containing a pixel adjacent to a pixel of the fine line
region F1 in the horizontal direction.
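
This adjacency test can be sketched as follows (hypothetical Python code; representing each fine line region as a set of (row, column) pixel coordinates is an illustrative assumption, not the actual data structure used by the continuousness detecting unit 204):

def regions_are_continuous(region_a, region_b):
    """True when some pixel of region_a is horizontally adjacent to a pixel
    of region_b, i.e., there is continuousness between the two regions."""
    for (row, col) in region_a:
        if (row, col - 1) in region_b or (row, col + 1) in region_b:
            return True
    return False

# Example: region F-1 occupies column 2 (rows 0-2) and region F0 occupies
# column 3 (rows 2-5); they share horizontally adjacent pixels at row 2.
f_minus_1 = {(0, 2), (1, 2), (2, 2)}
f_0 = {(2, 3), (3, 3), (4, 3), (5, 3)}
print(regions_are_continuous(f_minus_1, f_0))   # -> True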

[0643]In this way, regions made up of pixels aligned in a single row in
the vertical direction of the screen where the image of the fine line has
been projected are detected by the peak detecting unit 202 through the
continuousness detecting unit 204.

[0644]As described above, the peak detecting unit 202 through the
continuousness detecting unit 204 detect regions made up of pixels
aligned in a single row in the vertical direction of the screen where the
image of the fine line has been projected, and further detect regions
made up of pixels aligned in a single row in the horizontal direction of
the screen where the image of the fine line has been projected.

[0645]Note that the order of processing does not restrict the present
invention, and may be executed in parallel, as a matter of course.

[0646]That is to say, the peak detecting unit 202, with regard to pixels
aligned in a single row in the horizontal direction of the screen,
detects as a peak a pixel which has a pixel value greater in comparison
with the pixel value of the pixel situated to the left side on the screen
and the pixel value of the pixel situated to the right side on the
screen, and supplies peak information indicating the position of the
detected peak to the monotonous increase/decrease detecting unit 203. The
peak detecting unit 202 detects one or multiple peaks from one image, for
example, one frame image.

[0647]For example, the peak detecting unit 202 selects a pixel of interest
from pixels in the one frame image which has not yet been taken as a
pixel of interest, compares the pixel value of the pixel of interest with
the pixel value of the pixel to the left side of the pixel of interest,
compares the pixel value of the pixel of interest with the pixel value of
the pixel to the right side of the pixel of interest, detects a pixel of
interest having a pixel value greater than the pixel value of the pixel
to the left side of the pixel of interest and having a pixel value
greater than the pixel value of the pixel to the right side of the pixel
of interest, and takes the detected pixel of interest as a peak. The peak
detecting unit 202 supplies peak information indicating the detected peak
to the monotonous increase/decrease detecting unit 203.

[0648]There are cases wherein the peak detecting unit 202 does not detect
a peak.

[0649]The monotonous increase/decrease detecting unit 203 detects
candidates for a region made up of pixels aligned in a single row in the
horizontal direction as to the peak detected by the peak detecting unit
202 wherein the fine line image has been projected, and supplies the
monotonous increase/decrease region information indicating the detected
region to the continuousness detecting unit 204 along with the peak
information.

[0650]More specifically, the monotonous increase/decrease detecting unit
203 detects regions made up of pixels having pixel values monotonously
decreasing with the pixel value of the peak as a reference, as candidates
of regions made up of pixels where the fine line image has been
projected.

[0651]For example, the monotonous increase/decrease detecting unit 203
obtains, with regard to each pixel in a single row in the horizontal
direction as to the peak, the pixel value of each pixel, the difference
as to the pixel value of the pixel to the left side, and the difference
as to the pixel value of the pixel to the right side. The monotonous
increase/decrease detecting unit 203 then detects the region where the
pixel value monotonously decreases by detecting the pixel where the sign
of the difference changes.

[0652]Further, the monotonous increase/decrease detecting unit 203 detects
a region made up of pixels having pixel values of the same sign as the
pixel value of the peak, with the sign of the pixel value of the peak as a
reference, as a candidate for a region made up of pixels where the fine
line image has been projected.

[0653]For example, the monotonous increase/decrease detecting unit 203
compares the sign of the pixel value of each pixel with the sign of the
pixel value of the pixel to the left side or with the sign of the pixel
value of the pixel to the right side, and detects the pixel where the
sign of the pixel value changes, thereby detecting a region made up of
pixels having pixel values with the same sign as the peak, from the
region where the pixel values monotonously decrease.

[0654]Thus, the monotonous increase/decrease detecting unit 203 detects a
region made up of pixels aligned in the horizontal direction and having
pixel values with the same sign as the peak wherein the pixel values
monotonously decrease as to the peak.

[0655]From a fine line region made up of such a monotonous
increase/decrease region, the monotonous increase/decrease detecting unit
203 obtains a fine line region longer than a threshold value set
beforehand, i.e., a fine line region having a greater number of pixels
than the threshold value.

[0656]Further, from the fine line region thus detected, the monotonous
increase/decrease detecting unit 203 compares the pixel value of the
peak, the pixel value of the pixel above the peak, and the pixel value of
the pixel below the peak, each with the threshold value, detects a fine
line region to which belongs a peak wherein the pixel value of the peak
exceeds the threshold value, the pixel value of the pixel above the peak
is within the threshold, and the pixel value of the pixel below the peak
is within the threshold, and takes the detected fine line region as a
candidate for a region made up of pixels containing the fine line image
component.

[0657]Another way of saying this is that fine line regions to which
belongs a peak wherein the pixel value of the peak is within the
threshold value, or the pixel value of the pixel above the peak exceeds
the threshold, or the pixel value of the pixel below the peak exceeds the
threshold, are determined to not contain the fine line image component,
and are eliminated from candidates of the region made up of pixels
containing the fine line image component.

[0658]Note that the monotonous increase/decrease detecting unit 203 may be
arranged to take the background pixel value as a reference, compare the
difference between the pixel value of the peak and the pixel value of
the background with the threshold value, and also to compare the
difference between the pixel value of the background and the pixel values
adjacent to the peak in the vertical direction with the threshold value,
and take a detected fine line region wherein the difference between the
pixel value of the peak and the pixel value of the background exceeds the
threshold value, and the difference between the pixel value of the
background and the pixel value of the pixels adjacent in the vertical
direction is within the threshold, as a candidate for a region made up of
pixels containing the fine line image component.

[0659]The monotonous increase/decrease detecting unit 203 supplies to the
continuousness detecting unit 204 monotonous increase/decrease region
information indicating a region made up of pixels having a pixel value
sign which is the same as the peak and monotonously decreasing pixel
values as to the peak as a reference, wherein the peak exceeds the
threshold value, and the pixel value of the pixel to the right side of
the peak is within the threshold, and the pixel value of the pixel to the
left side of the peak is within the threshold.

[0660]In the event of detecting a region made up of pixels aligned in a
single row in the horizontal direction of the screen wherein the image of
the fine line has been projected, pixels belonging to the region
indicated by the monotonous increase/decrease region information include
pixels aligned in the horizontal direction wherein the image of the fine
line has been projected. That is to say, the region indicated by the
monotonous increase/decrease region information includes a region made up
of pixels aligned in a single row in the horizontal direction of the
screen wherein the image of the fine line has been projected.

[0661]Of the regions made up of pixels aligned in the horizontal direction
indicated in the monotonous increase/decrease region information supplied
from the monotonous increase/decrease detecting unit 203, the
continuousness detecting unit 204 detects regions including pixels
adjacent in the vertical direction, i.e., regions having similar pixel
value change and which are repeated in the horizontal direction, as
continuous regions, and outputs the peak information and data continuity
information indicating the detected continuous regions. The data
continuity information includes information indicating the connection of
the regions.

[0662]At the pixels where the fine line has been projected, arc shapes are
arrayed at constant intervals in an adjacent manner, so the detected
continuous regions include pixels where the fine line has been projected.

[0663]The detected continuous regions include pixels where arc shapes are
arrayed at constant intervals wherein the fine line has been projected,
so the detected continuous regions are taken as a continuity region, and
the continuousness detecting unit 204 outputs data continuity information
indicating the detected continuous regions.

[0664]That is to say, the continuousness detecting unit 204 uses the
continuity which is that the arc shapes are arrayed at constant intervals
in an adjacent manner in the data 3 obtained by imaging the fine line,
generated from the continuity of the image of the fine line in the actual
world 1 which is continuation in the length direction, so as to further
narrow down the candidates of regions detected by the peak detecting unit
202 and the monotonous increase/decrease detecting unit 203.

[0665]FIG. 51 is a diagram illustrating an example of an image wherein the
continuity component has been extracted by planar approximation.

[0666]FIG. 52 is a diagram illustrating the results of detecting peaks in
the image shown in FIG. 51, and detecting monotonously decreasing
regions. In FIG. 52, the portions indicated by white are the detected
regions.

[0667]FIG. 53 is a diagram illustrating regions wherein continuousness has
been detected by detecting continuousness of adjacent regions in the
image shown in FIG. 52. In FIG. 53, the portions shown in white are
regions where continuity has been detected. It can be understood that
detection of continuousness further identifies the regions.

[0668]FIG. 54 is a diagram illustrating the pixel values of the regions
shown in FIG. 53, i.e., the pixel values of the regions where
continuousness has been detected.

[0669]Thus, the data continuity detecting unit 101 is capable of detecting
continuity contained in the data 3 which is the input image. That is to
say, the data continuity detecting unit 101 can detect continuity of data
included in the data 3 which has been generated by the actual world 1
image which is a fine line having been projected on the data 3. The data
continuity detecting unit 101 detects, from the data 3, regions made up
of pixels where the actual world 1 image which is a fine line has been
projected.

[0670]FIG. 55 is a diagram illustrating an example of other processing for
detecting regions having continuity, where a fine line image has been
projected, with the data continuity detecting unit 101. As shown in FIG.
55, the data continuity detecting unit 101 calculates the absolute value
of difference of pixel values for each pixel and adjacent pixels. The
calculated absolute values of difference are placed corresponding to the
pixels. For example, in a situation such as shown in FIG. 55 wherein
there are pixels aligned which have respective pixel values of P0, P1,
and P2, the data continuity detecting unit 101 calculates the difference
d0=P0-P1 and the difference d1=P1-P2. Further, the data continuity
detecting unit 101 calculates the absolute values of the difference d0
and the difference d1.

[0671]In the event that the non-continuity component contained in the
pixel values P0, P1, and P2 is identical, only values corresponding to
the component of the fine line appear in the difference d0 and the
difference d1.

[0672]Accordingly, of the absolute values of the differences placed
corresponding to the pixels, in the event that adjacent difference values
are identical, the data continuity detecting unit 101 determines that the
pixel corresponding to the absolute values of the two differences (the
pixel between the two absolute values of difference) contains the
component of the fine line. Also, of the absolute values of the
differences placed corresponding to pixels, in the event that adjacent
difference values are identical but the absolute values of difference are
smaller than a predetermined threshold value, the data continuity
detecting unit 101 determines that the pixel corresponding to the
absolute values of the two differences (the pixel between the two
absolute values of difference) does not contain the component of the fine
line.

[0673]The data continuity detecting unit 101 can also detect fine lines
with a simple method such as this.

[0674]FIG. 56 is a flowchart for describing continuity detection
processing.

[0675]In step S201, the non-continuity component extracting unit 201
extracts non-continuity component, which is portions other than the
portion where the fine line has been projected, from the input image. The
non-continuity component extracting unit 201 supplies non-continuity
component information indicating the extracted non-continuity component,
along with the input image, to the peak detecting unit 202 and the
monotonous increase/decrease detecting unit 203. Details of the
processing for extracting the non-continuity component will be described
later.

[0676]In step S202, the peak detecting unit 202 eliminates the
non-continuity component from the input image, based on the
non-continuity component information supplied from the non-continuity
component extracting unit 201, so as to leave only pixels including the
continuity component in the input image. Further, in step S202, the peak
detecting unit 202 detects peaks.

[0677]That is to say, in the event of executing processing with the
vertical direction of the screen as a reference, of the pixels containing
the continuity component, the peak detecting unit 202 compares the pixel
value of each pixel with the pixel values of the pixels above and below,
and detects pixels having a greater pixel value than the pixel value of
the pixel above and the pixel value of the pixel below, thereby detecting
a peak. Also, in step S202, in the event of executing processing with the
horizontal direction of the screen as a reference, of the pixels
containing the continuity component, the peak detecting unit 202 compares
the pixel value of each pixel with the pixel values of the pixels to the
right side and left side, and detects pixels having a greater pixel value
than the pixel value of the pixel to the right side and the pixel value
of the pixel to the left side, thereby detecting a peak.
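
(The following is a minimal sketch, in Python with NumPy, of the peak
detection of step S202. It is illustrative only; it assumes the input
array already holds only the continuity component, and the function name
and the boolean output format are assumptions.)

import numpy as np

def detect_peaks(image, vertical_reference=True):
    # A pixel is a peak if its value is greater than both of its neighbours
    # above and below (vertical reference) or to its left and right
    # (horizontal reference). The image is assumed to hold only the
    # continuity component, the non-continuity component having been
    # eliminated beforehand.
    img = np.asarray(image, dtype=float)
    peaks = np.zeros(img.shape, dtype=bool)
    if vertical_reference:
        peaks[1:-1, :] = (img[1:-1, :] > img[:-2, :]) & (img[1:-1, :] > img[2:, :])
    else:
        peaks[:, 1:-1] = (img[:, 1:-1] > img[:, :-2]) & (img[:, 1:-1] > img[:, 2:])
    return peaks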

[0679]In step S203, the monotonous increase/decrease detecting unit 203
eliminates the non-continuity component from the input image, based on
the non-continuity component information supplied from the non-continuity
component extracting unit 201, so as to leave only pixels including the
continuity component in the input image. Further, in step S203, the
monotonous increase/decrease detecting unit 203 detects the region made
up of pixels having data continuity, by detecting monotonous
increase/decrease as to the peak, based on peak information indicating
the position of the peak, supplied from the peak detecting unit 202.

[0680]In the event of executing processing with the vertical direction of
the screen as a reference, the monotonous increase/decrease detecting
unit 203 detects monotonous increase/decrease made up of one row of
pixels aligned vertically where a single fine line image has been
projected, based on the pixel value of the peak and the pixel values of
the one row of pixels aligned vertically as to the peak, thereby
detecting a region made up of pixels having data continuity. That is to
say, in step S203, in the event of executing processing with the vertical
direction of the screen as a reference, the monotonous increase/decrease
detecting unit 203 obtains, with regard to a peak and a row of pixels
aligned vertically as to the peak, the difference between the pixel value
of each pixel and the pixel value of a pixel above or below, thereby
detecting a pixel where the sign of the difference changes. Also, with
regard to a peak and a row of pixels aligned vertically as to the peak,
the monotonous increase/decrease detecting unit 203 compares the sign of
the pixel value of each pixel with the sign of the pixel value of a pixel
above or below, thereby detecting a pixel where the sign of the pixel
value changes. Further, the monotonous increase/decrease detecting unit
203 compares the pixel value of the peak and the pixel values of the pixels
to the right side and to the left side of the peak with a threshold
value, and detects a region made up of pixels wherein the pixel value of
the peak exceeds the threshold value, and wherein the pixel values of the
pixels to the right side and to the left side of the peak are within the
threshold.
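
(The following is a minimal sketch, in Python with NumPy, of detecting a
monotonous increase/decrease region for one peak with the vertical
direction as a reference, following step S203. It is illustrative only;
the additional threshold test on the peak and the pixels to its right and
left is omitted, and the function name and the region representation are
assumptions.)

import numpy as np

def monotone_region_vertical(column, peak_row):
    # Starting from the peak, the region is extended upward and downward
    # while the pixel values decrease monotonously and keep the same sign
    # as the peak; it ends where the sign of the difference between
    # neighbouring pixels, or the sign of the pixel value itself, changes.
    # Returns (top_row, bottom_row), inclusive.
    col = np.asarray(column, dtype=float)
    top = bottom = peak_row
    while bottom + 1 < len(col):
        if (col[bottom + 1] - col[bottom] > 0 or
                np.sign(col[bottom + 1]) != np.sign(col[peak_row])):
            break
        bottom += 1
    while top - 1 >= 0:
        if (col[top - 1] - col[top] > 0 or
                np.sign(col[top - 1]) != np.sign(col[peak_row])):
            break
        top -= 1
    return top, bottom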

[0682]In the event of executing processing with the horizontal direction
of the screen as a reference, the monotonous increase/decrease detecting
unit 203 detects monotonous increase/decrease made up of one row of
pixels aligned horizontally where a single fine line image has been
projected, based on the pixel value of the peak and the pixel values of
the one row of pixels aligned horizontally as to the peak, thereby
detecting a region made up of pixels having data continuity. That is to
say, in step S203, in the event of executing processing with the
horizontal direction of the screen as a reference, the monotonous
increase/decrease detecting unit 203 obtains, with regard to a peak and a
row of pixels aligned horizontally as to the peak, the difference between
the pixel value of each pixel and the pixel value of a pixel to the right
side or to the left side, thereby detecting a pixel where the sign of the
difference changes. Also, with regard to a peak and a row of pixels
aligned horizontally as to the peak, the monotonous increase/decrease
detecting unit 203 compares the sign of the pixel value of each pixel
with the sign of the pixel value of a pixel to the right side or to the
left side, thereby detecting a pixel where the sign of the pixel value
changes. Further, the monotonous increase/decrease detecting unit 203
compares the pixel value of the peak and the pixel values of the pixels to
the upper side and to the lower side of the peak with a threshold value,
and detects a region made up of pixels wherein the pixel value of the
peak exceeds the threshold value, and wherein the pixel values of the
pixels to the upper side and to the lower side of the peak are within the
threshold.

[0684]In step S204, the monotonous increase/decrease detecting unit 203
determines whether or not processing of all pixels has ended. For
example, determination is made as to whether or not peaks have been
detected and monotonous increase/decrease regions have been detected for
all pixels of a single screen (for example, a frame, a field, or the
like) of the input image.

[0685]In the event that determination is made in step S204 that processing
of all pixels has not ended, i.e., that there are still pixels which have
not been subjected to the processing of peak detection and detection of
monotonous increase/decrease region, the flow returns to step S202, a
pixel which has not yet been subjected to the processing of peak
detection and detection of monotonous increase/decrease region is
selected as an object of the processing, and the processing of peak
detection and detection of monotonous increase/decrease region are
repeated.

[0686]In the event that determination is made in step S204 that processing
of all pixels has ended, i.e., that peaks and monotonous
increase/decrease regions have been detected with regard to all pixels,
the flow proceeds to step S205, where the continuousness detecting unit
204 detects the continuousness of detected regions, based on the
monotonous increase/decrease region information. For example, in the
event that monotonous increase/decrease regions made up of one row of
pixels aligned in the vertical direction of the screen, indicated by
monotonous increase/decrease region information, include pixels adjacent
in the horizontal direction, the continuousness detecting unit 204
determines that there is continuousness between the two monotonous
increase/decrease regions, and in the event of not including pixels
adjacent in the horizontal direction, determines that there is no
continuousness between the two monotonous increase/decrease regions. For
example, in the event that monotonous increase/decrease regions made up
of one row of pixels aligned in the horizontal direction of the screen,
indicated by monotonous increase/decrease region information, include
pixels adjacent in the vertical direction, the continuousness detecting
unit 204 determines that there is continuousness between the two
monotonous increase/decrease regions, and in the event of not including
pixels adjacent in the vertical direction, determines that there is no
continuousness between the two monotonous increase/decrease regions.
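
(The following is a minimal sketch, in Python, of the adjacency test of
step S205 for the vertical case. It is illustrative only; the tuple
representation of a region as (column, top_row, bottom_row) and the
function name are assumptions.)

def regions_are_continuous(region_a, region_b):
    # Each monotonous increase/decrease region is one vertical row of pixels,
    # described here as (column, top_row, bottom_row). Two regions are taken
    # as continuous when they lie in horizontally adjacent columns and
    # contain pixels adjacent in the horizontal direction, i.e. their row
    # ranges overlap.
    col_a, top_a, bottom_a = region_a
    col_b, top_b, bottom_b = region_b
    horizontally_adjacent = abs(col_a - col_b) == 1
    rows_overlap = top_a <= bottom_b and top_b <= bottom_a
    return horizontally_adjacent and rows_overlap

# Regions in columns 4 and 5 sharing rows 10 through 12 are continuous.
print(regions_are_continuous((4, 8, 12), (5, 10, 15)))   # True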

[0687]The continuousness detecting unit 204 takes the detected continuous
regions as continuity regions having data continuity, and outputs data
continuity information indicating the peak position and continuity
region. The data continuity information contains information indicating
the connection of regions. The data continuity information output from
the continuousness detecting unit 204 indicates the fine line region,
which is the continuity region, made up of pixels where the actual world
1 fine line image has been projected.

[0688]In step S206, a continuity direction detecting unit 205 determines
whether or not processing of all pixels has ended. That is to say, the
continuity direction detecting unit 205 determines whether or not region
continuation has been detected with regard to all pixels of a certain
frame of the input image.

[0689]In the event that determination is made in step S206 that processing
of all pixels has not yet ended, i.e., that there are still pixels which
have not yet been taken as the object of detection of region
continuation, the flow returns to step S205, a pixel which has not yet
been subjected to the processing of detection of region continuity is
selected, and the processing for detection of region continuity is
repeated.

[0690]In the event that determination is made in step S206 that processing
of all pixels has ended, i.e., that all pixels have been taken as the
object of detection of region continuity, the processing ends.

[0691]Thus, the continuity contained in the data 3 which is the input
image is detected. That is to say, continuity of data included in the
data 3 which has been generated by the actual world 1 image which is a
fine line having been projected on the data 3 is detected, and a region
having data continuity, which is made up of pixels on which the actual
world 1 image which is a fine line has been projected, is detected from
the data 3.

[0692]Now, the data continuity detecting unit 101 shown in FIG. 41 can
detect time-directional data continuity, based on the region having data
continuity detected from the frame of the data 3.

[0693]For example, as shown in FIG. 57, the continuousness detecting unit
204 detects time-directional data continuity by connecting the edges of
the region having detected data continuity in frame #n, the region having
detected data continuity in frame #n-1, and the region having detected
data continuity in frame #n+1.

[0694]The frame #n-1 is a frame preceding the frame #n time-wise, and the
frame #n+1 is a frame following the frame #n time-wise. That is to say,
the frame #n-1, the frame #n, and the frame #n+1, are displayed in the
order of the frame #n-1, the frame #n, and the frame #n+1.

[0695]More specifically, in FIG. 57, G denotes a movement vector obtained
by connecting one edge of the region having detected data continuity
in frame #n, the region having detected data continuity in frame #n-1,
and the region having detected data continuity in frame #n+1, and G'
denotes a movement vector obtained by connecting the other edges of the
regions having detected data continuity. The movement vector G and the
movement vector G' are an example of data continuity in the time
direction.

[0696]Further, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 41 can output information indicating the
length of the region having data continuity as data continuity
information.

[0697]FIG. 58 is a block diagram illustrating the configuration of the
non-continuity component extracting unit 201 which performs planar
approximation of the non-continuity component which is the portion of the
image data which does not have data continuity, and extracts the
non-continuity component.

[0698]The non-continuity component extracting unit 201 of which the
configuration is shown in FIG. 58 extracts blocks, which are made up of a
predetermined number of pixels, from the input image, and performs planar
approximation of each block such that the error between the pixel values
of the block and the approximating plane falls below a predetermined
threshold value, thereby extracting the non-continuity component.

[0699]The input image is supplied to a block extracting unit 221, and is
also output without change.

[0700]The block extracting unit 221 extracts blocks, which are made up of
a predetermined number of pixels, from the input image. For example, the
block extracting unit 221 extracts a block made up of 7×7 pixels,
and supplies this to a planar approximation unit 222. For example, the
block extracting unit 221 moves the pixel serving as the center of the
block to be extracted in raster scan order, thereby sequentially
extracting blocks from the input image.

[0701]The planar approximation unit 222 approximates the pixel values of
the pixels contained in the block on a predetermined plane. For example,
the planar approximation unit 222 approximates the pixel values of the
pixels contained in the block on a plane expressed by Expression (24).

z = ax + by + c (24)

[0702]In Expression (24), x represents the position of the pixel in one
direction on the screen (the spatial direction X), and y represents the
position of the pixel in the other direction on the screen (the spatial
direction Y). z represents the approximation value represented by the
plane. a represents the gradient of the spatial direction X of the plane,
and b represents the gradient of the spatial direction Y of the plane. In
Expression (24), c represents the offset of the plane (intercept).

[0703]For example, the planar approximation unit 222 obtains the gradient
a, gradient b, and offset c, by regression processing, thereby
approximating the pixel values of the pixels contained in the block on a
plane expressed by Expression (24). The planar approximation unit 222
obtains the gradient a, gradient b, and offset c, by regression
processing including rejection, thereby approximating the pixel values of
the pixels contained in the block on a plane expressed by Expression
(24).

[0704]For example, the planar approximation unit 222 obtains the plane
expressed by Expression (24) wherein the error is least as to the pixel
values of the pixels of the block using the least-square method, thereby
approximating the pixel values of the pixels contained in the block on
the plane.
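
(The following is a minimal sketch, in Python with NumPy, of fitting the
plane of Expression (24) to a block by the least-square method. It is
illustrative only; the coordinate convention and the function name are
assumptions.)

import numpy as np

def fit_plane(block):
    # Fit z = a*x + b*y + c to the pixel values of a block by ordinary
    # least squares (Expression (24)); x runs along columns (spatial
    # direction X) and y along rows (spatial direction Y).
    block = np.asarray(block, dtype=float)
    rows, cols = block.shape
    y, x = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(block.size)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel(), rcond=None)
    a, b, c = coeffs
    return a, b, c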

[0705]Note that while the planar approximation unit 222 has been described
as approximating the block on the plane expressed by Expression (24), the
approximation is not restricted to the plane expressed by Expression (24);
rather, the block may be approximated on a surface represented by a
function with a higher degree of freedom, for example, an n-order
(wherein n is an arbitrary integer) polynomial.

[0706]A repetition determining unit 223 calculates the error between the
approximation value represented by the plane upon which the pixel values
of the block have been approximated, and the corresponding pixel values
of the pixels of the block. Expression (25) is an expression which shows
the error ei which is the difference between the approximation value
represented by the plane upon which the pixel values of the block have
been approximated, and the corresponding pixel values zi of the pixels of
the block.

ei = zi − ẑ = zi − (âxi + b̂yi + ĉ) (25)

[0707]In Expression (25), z-hat (a symbol with ^ over z will be described
as z-hat; the same notation will be used in the present specification
hereafter) represents an approximation value expressed by the plane on
which the pixel values of the block are approximated, a-hat represents
the gradient of the spatial direction X of the plane on which the pixel
values of the block are approximated, b-hat represents the gradient of
the spatial direction Y of the plane on which the pixel values of the
block are approximated, and c-hat represents the offset (intercept) of
the plane on which the pixel values of the block are approximated.

[0708]The repetition determining unit 223 rejects the pixel for which the
error ei between the approximation value and the corresponding pixel value
of the pixel of the block, shown in Expression (25), is the greatest. Thus,
pixels where the fine line has been projected, i.e., pixels having
continuity, are rejected. The repetition determining unit 223 supplies
rejection information indicating the rejected pixels to the planar
approximation unit 222.

[0709]Further, the repetition determining unit 223 calculates a standard
error, and in the event that the standard error is equal to or greater
than a threshold value which has been set beforehand for determining
ending of approximation, and half or more of the pixels of a block
have not been rejected, the repetition determining unit 223 causes the
planar approximation unit 222 to repeat the processing of planar
approximation on the pixels contained in the block, from which the
rejected pixels have been eliminated.

[0710]Pixels having continuity are rejected, so approximating the pixels
from which the rejected pixels have been eliminated on a plane means that
the plane approximates the non-continuity component.

[0711]At the point that the standard error falls below the threshold value
for determining ending of approximation, or half or more of the pixels of
a block have been rejected, the repetition determining unit 223
ends planar approximation.

[0712]With a block made up of 5×5 pixels, the standard error es
can be calculated with, for example, Expression (26).

es = √( Σ ei² / (n − 3) ) (26)

[0713]Here, n is the number of pixels.

[0714]Note that the repetition determining unit 223 is not restricted to
standard error, and may be arranged to calculate the sum of the square of
errors for all of the pixels contained in the block, and perform the
following processing.
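
(The following is a minimal sketch, in Python with NumPy, of the planar
approximation with rejection described in paragraphs [0706] through
[0713]. It is illustrative only; the maximum iteration count, the
rejection of one pixel per iteration, and the (n − 3) divisor of the
standard error are assumptions.)

import numpy as np

def approximate_with_rejection(block, error_threshold, max_iterations=20):
    # Iterative planar approximation with rejection: the plane of
    # Expression (24) is fitted, the standard error of Expression (26) is
    # evaluated over the pixels still in use, and while it stays at or
    # above the threshold and fewer than half of the pixels have been
    # rejected, the remaining pixel with the greatest error is rejected
    # and the plane is fitted again.
    block = np.asarray(block, dtype=float)
    rows, cols = block.shape
    y, x = np.mgrid[0:rows, 0:cols]
    rejected = np.zeros(block.shape, dtype=bool)
    for _ in range(max_iterations):
        keep = ~rejected
        A = np.column_stack([x[keep], y[keep], np.ones(keep.sum())])
        (a, b, c), *_ = np.linalg.lstsq(A, block[keep], rcond=None)
        errors = block - (a * x + b * y + c)        # e_i of Expression (25)
        n = keep.sum()
        es = np.sqrt(np.sum(errors[keep] ** 2) / (n - 3))
        if es < error_threshold or rejected.sum() >= block.size // 2:
            break
        worst = np.where(keep, np.abs(errors), -np.inf)
        rejected[np.unravel_index(np.argmax(worst), block.shape)] = True
    return (a, b, c), rejected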

[0715]Now, at the time of planar approximation of blocks shifted one pixel
in the raster scan direction, a pixel having continuity, indicated by the
black circle in the diagram, i.e., a pixel containing the fine line
component, will be rejected multiple times, as shown in FIG. 59.

[0716]Upon completing planar approximation, the repetition determining
unit 223 outputs information expressing the plane for approximating the
pixel values of the block (the gradient and intercept of the plane of
Expression (24)) as non-continuity information.

[0717]Note that an arrangement may be made wherein the repetition
determining unit 223 compares the number of times of rejection per pixel
with a preset threshold value, and takes a pixel which has been rejected
a number of times equal to or greater than the threshold value as a pixel
containing the continuity component, and outputs the information
indicating the pixel including the continuity component as continuity
component information. In this case, the peak detecting unit 202 through
the continuity direction detecting unit 205 execute their respective
processing on pixels containing continuity component, indicated by the
continuity component information.

[0718]Examples of results of non-continuity component extracting
processing will be described with reference to FIG. 60 through FIG. 67.

[0719]FIG. 60 is a diagram illustrating an example of an input image
generated by taking the average value of the pixel values of 2×2 pixels
in an original image containing fine lines as the pixel value.

[0720]FIG. 61 is a diagram illustrating an image from the image shown in
FIG. 60 wherein standard error obtained as the result of planar
approximation without rejection is taken as the pixel value. In the
example shown in FIG. 61, a block made up of 5×5 pixels as to a
single pixel of interest was subjected to planar approximation. In FIG.
61, white pixels are pixels which have greater pixel values, i.e.,
pixels having greater standard error, and black pixels are pixels which
have smaller pixel values, i.e., pixels having smaller standard error.

[0721]From FIG. 61, it can be confirmed that in the event that the
standard error obtained as the result of planar approximation without
rejection is taken as the pixel value, great values are obtained over a
wide area at the perimeter of non-continuity portions.

[0722]In the examples shown in FIG. 62 through FIG. 67, a block made up of
7×7 pixels as to a single pixel of interest was subjected to planar
approximation. In the event of planar approximation of a block made up of
7×7 pixels, one pixel is repeatedly included in 49 blocks, meaning
that a pixel containing the continuity component is rejected as many as
49 times.

[0723]FIG. 62 is an image wherein standard error obtained by planar
approximation with rejection of the image shown in FIG. 60 is taken as
the pixel value.

[0724]In FIG. 62, white pixels are pixels which have greater pixel
values, i.e., pixels having greater standard error, and black pixels are
pixels which have smaller pixel values, i.e., pixels having smaller
standard error. It can be understood that the standard error is smaller
overall in the case of performing rejection, as compared with a case of
not performing rejection.

[0725]FIG. 63 is an image wherein the number of times of rejection in
planar approximation with rejection of the image shown in FIG. 60 is
taken as the pixel value. In FIG. 63, white pixels are pixels with greater
pixel values, i.e., pixels which have been rejected a greater number of
times, and black pixels are pixels with smaller pixel values, i.e., pixels
which have been rejected fewer times.

[0726]From FIG. 63, it can be understood that pixels where the fine line
images are projected have been discarded a greater number of times. An
image for masking the non-continuity portions of the input image can be
generated using the image wherein the number of times of rejection is
taken as the pixel value.
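
(The following is a minimal sketch, in Python with NumPy, of deriving a
mask from the number of times of rejection, as suggested in paragraph
[0726]. It is illustrative only; the function name and the threshold
parameter are assumptions.)

import numpy as np

def continuity_mask(rejection_counts, count_threshold):
    # Pixels rejected at least count_threshold times during planar
    # approximation are treated as containing the continuity (fine-line)
    # component; the complement of this mask covers the non-continuity
    # portions of the input image.
    return np.asarray(rejection_counts) >= count_threshold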

[0727]FIG. 64 is a diagram illustrating an image wherein the gradient of
the spatial direction X of the plane for approximating the pixel values
of the block is taken as the pixel value. FIG. 65 is a diagram
illustrating an image wherein the gradient of the spatial direction Y of
the plane for approximating the pixel values of the block is taken as the
pixel value.

[0728]FIG. 66 is a diagram illustrating an image formed of approximation
values expressed by a plane for approximating the pixel values of the
block. It can be understood that the fine lines have disappeared from the
image shown in FIG. 66.

[0729]FIG. 67 is a diagram illustrating an image made up of the difference
between the image shown in FIG. 60 generated by the average value of the
block of 2×2 pixels in the original image being taken as the pixel
value, and an image made up of approximate values expressed as a plane,
shown in FIG. 66. The pixel values of the image shown in FIG. 67 have had
the non-continuity component removed, so only the values where the image
of the fine line has been projected remain. As can be understood from
FIG. 67, with an image made up of the difference between the pixel value
of the original image and approximation values expressed by a plane
whereby approximation has been performed, the continuity component of the
original image is extracted well.

[0730]The number of times of rejection, the gradient of the spatial
direction X of the plane for approximating the pixel values of the pixel
of the block, the gradient of the spatial direction Y of the plane for
approximating the pixel values of the pixel of the block, approximation
values expressed by the plane approximating the pixel values of the
pixels of the block, and the error ei, can be used as features of the
input image.

[0731]FIG. 68 is a flowchart for describing the processing of extracting
the non-continuity component with the non-continuity component extracting
unit 201 of which the configuration is shown in FIG. 58.

[0732]In step S221, the block extracting unit 221 extracts a block made up
of a predetermined number of pixels from the input image, and supplies
the extracted block to the planar approximation unit 222. For example,
the block extracting unit 221 selects one of the pixels of the
input image which has not been selected yet, and extracts a block made
up of 7×7 pixels centered on the selected pixel. For example, the
block extracting unit 221 can select pixels in raster scan order.

[0733]In step S222, the planar approximation unit 222 approximates the
extracted block on a plane. The planar approximation unit 222
approximates the pixel values of the pixels of the extracted block on a
plane by regression processing, for example. For example, the planar
approximation unit 222 approximates the pixel values of the pixels of the
extracted block excluding the rejected pixels on a plane, by regression
processing. In step S223, the repetition determining unit 223 executes
repetition determination. For example, repetition determination is
performed by calculating the standard error from the pixel values of the
pixels of the block and the planar approximation values, and counting the
number of rejected pixels.

[0734]In step S224, the repetition determining unit 223 determines whether
or not the standard error is equal to or above a threshold value, and in
the event that determination is made that the standard error is equal to
or above the threshold value, the flow proceeds to step S225.

[0735]Note that an arrangement may be made wherein the repetition
determining unit 223 determines in step S224 whether or not half or more
of the pixels of the block have been rejected, and whether or not the
standard error is equal to or above the threshold value, and in the event
that determination is made that half or more of the pixels of the block
have not been rejected, and the standard error is equal to or above the
threshold value, the flow proceeds to step S225.

[0736]In step S225, the repetition determining unit 223 calculates the
error between the pixel value of each pixel of the block and the
approximated planar approximation value, rejects the pixel with the
greatest error, and notifies the planar approximation unit 222. The
procedure returns to step S222, and the planar approximation processing
and repetition determination processing is repeated with regard to the
pixels of the block excluding the rejected pixel.

[0737]In step S225, in the event that a block which is shifted one pixel
in the raster scan direction is extracted in the processing in step S221,
the pixel including the fine line component (indicated by the black
circle in the drawing) is rejected multiple times, as shown in FIG. 59.

[0738]In the event that determination is made in step S224 that the
standard error is not equal to or greater than the threshold value, the
block has been approximated on the plane, so the flow proceeds to step
S226.

[0739]Note that an arrangement may be made wherein the repetition
determining unit 223 determines in step S224 whether or not half or more
of the pixels of the block have been rejected, and whether or not the
standard error is equal to or above the threshold value, and in the event
that determination is made that half or more of the pixels of the block
have been rejected, or the standard error is not equal to or above the
threshold value, the flow proceeds to step S226.

[0740]In step S226, the repetition determining unit 223 outputs the
gradient and intercept of the plane for approximating the pixel values of
the pixels of the block as non-continuity component information.

[0741]In step S227, the block extracting unit 221 determines whether or
not processing of all pixels of one screen of the input image has ended,
and in the event that determination is made that there are still pixels
which have not yet been taken as the object of processing, the flow
returns to step S221, a block is extracted for a pixel which has not yet
been subjected to the processing, and the above processing is repeated.

[0742]In the event that determination is made in step S227 that processing
has ended for all pixels of one screen of the input image, the processing
ends.

[0743]Thus, the non-continuity component extracting unit 201 of which the
configuration is shown in FIG. 58 can extract the non-continuity
component from the input image. The non-continuity component extracting
unit 201 extracts the non-continuity component from the input image, so
the peak detecting unit 202 and monotonous increase/decrease detecting
unit 203 can obtain the difference between the input image and the
non-continuity component extracted by the non-continuity component
extracting unit 201, so as to execute the processing regarding the
difference containing the continuity component.

[0744]Note that the standard error in the event that rejection is
performed, the standard error in the event that rejection is not
performed, the number of times of rejection of a pixel, the gradient of
the spatial direction X of the plane (a-hat in Expression (24)), the
gradient of the spatial direction Y of the plane (b-hat in Expression
(24)), the offset (intercept) of the plane (c-hat in Expression (24)), and
the difference between the pixel values of the input image and the
approximation values represented by the plane, calculated in planar
approximation processing, can be used as features.

[0745]FIG. 69 is a flowchart for describing processing for extracting the
continuity component with the non-continuity component extracting unit
201 of which the configuration is shown in FIG. 58, instead of the
processing for extracting the non-continuity component corresponding to
step S201. The processing of step S241 through step S245 is the same as
the processing of step S221 through step S225, so description thereof
will be omitted.

[0746]In step S246, the repetition determining unit 223 outputs the
difference between the approximation value represented by the plane and
the pixel values of the input image, as the continuity component of the
input image. That is to say, the repetition determining unit 223 outputs
the difference between the planar approximation values and the true pixel
values.

[0747]Note that the repetition determining unit 223 may be arranged to
output the difference between the approximation value represented by the
plane and the pixel values of the input image, regarding pixel values of
pixels of which the difference is equal to or greater than a
predetermined threshold value, as the continuity component of the input
image.

[0748]The processing of step S247 is the same as the processing of step
S227, and accordingly description thereof will be omitted.

[0749]The plane approximates the non-continuity component, so the
non-continuity component extracting unit 201 can remove the
non-continuity component from the input image by subtracting the
approximation value represented by the plane for approximating pixel
values, from the pixel values of each pixel in the input image. In this
case, the peak detecting unit 202 through the continuousness detecting
unit 204 can be made to process only the continuity component of the
input image, i.e., the values where the fine line image has been
projected, so the processing with the peak detecting unit 202 through the
continuousness detecting unit 204 becomes easier.
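
(The following is a minimal sketch, in Python with NumPy, of removing the
non-continuity component by subtracting the plane approximation, as
described in paragraph [0749]. It is illustrative only; the plane
parameters (a, b, c) are assumed to come from the planar approximation
with rejection sketched earlier.)

import numpy as np

def remove_non_continuity(block, a, b, c):
    # Subtract the plane of Expression (24), which approximates the
    # non-continuity component, from each pixel value of the block; what
    # remains are the values where the fine-line image has been projected.
    block = np.asarray(block, dtype=float)
    rows, cols = block.shape
    y, x = np.mgrid[0:rows, 0:cols]
    return block - (a * x + b * y + c)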

[0750]FIG. 70 is a flowchart for describing other processing for
extracting the continuity component with the non-continuity component
extracting unit 201 of which the configuration is shown in FIG. 58,
instead of the processing for extracting the non-continuity component
corresponding to step S201. The processing of step S261 through step S265
is the same as the processing of step S221 through step S225, so
description thereof will be omitted.

[0751]In step S266, the repetition determining unit 223 stores the number
of times of rejection for each pixel, the flow returns to step S262, and
the processing is repeated.

[0752]In step S264, in the event that determination is made that the
standard error is not equal to or greater than the threshold value, the
block has been approximated on the plane, so the flow proceeds to step
S267, the repetition determining unit 223 determines whether or not
processing of all pixels of one screen of the input image has ended, and
in the event that determination is made that there are still pixels which
have not yet been taken as the object of processing, the flow returns to
step S261, with regard to a pixel which has not yet been subjected to the
processing, a block is extracted, and the above processing is repeated.

[0753]In the event that determination is made in step S267 that processing
has ended for all pixels of one screen of the input image, the flow
proceeds to step S268, the repetition determining unit 223 selects a
pixel which has not yet been selected, and determines whether or not the
number of times of rejection of the selected pixel is equal to or greater
than a threshold value. For example, the repetition determining unit 223
determines in step S268 whether or not the number of times of rejection
of the selected pixel is equal to or greater than a threshold value
stored beforehand.

[0754]In the event that determination is made in step S268 that the number
of times of rejection of the selected pixel is equal to or greater than
the threshold value, the selected pixel contains the continuity
component, so the flow proceeds to step S269, where the repetition
determining unit 223 outputs the pixel value of the selected pixel (the
pixel value in the input image) as the continuity component of the input
image, and the flow proceeds to step S270.

[0755]In the event that determination is made in step S268 that the number
of times of rejection of the selected pixel is not equal to or greater
than the threshold value, the selected pixel does not contain the
continuity component, so the processing in step S269 is skipped, and the
procedure proceeds to step S270. That is to say, the pixel value of a
pixel regarding which determination has been made that the number of
times of rejection is not equal to or greater than the threshold value is
not output.

[0756]Note that an arrangement may be made wherein the repetition
determining unit 223 outputs a pixel value set to 0 for pixels regarding
which determination has been made that the number of times of rejection
is not equal to or greater than the threshold value.

[0757]In step S270, the repetition determining unit 223 determines whether
or not the processing of determining whether the number of times of
rejection is equal to or greater than the threshold value has ended for
all pixels of one screen of the input image, and in the event that
determination is made that processing has not ended for all pixels, this
means that there are still pixels which have not yet been taken as the
object of processing, so the flow returns to step S268, a pixel which has
not yet been subjected to the processing is selected, and the above
processing is repeated.

[0758]In the event that determination is made in step S270 that processing
has ended for all pixels of one screen of the input image, the processing
ends.

[0759]Thus, of the pixels of the input image, the non-continuity component
extracting unit 201 can output the pixel values of pixels containing the
continuity component, as continuity component information. That is to
say, of the pixels of the input image, the non-continuity component
extracting unit 201 can output the pixel values of pixels containing the
component of the fine line image.

[0760]FIG. 71 is a flowchart for describing yet other processing for
extracting the continuity component with the non-continuity component
extracting unit 201 of which the configuration is shown in FIG. 58,
instead of the processing for extracting the non-continuity component
corresponding to step S201. The processing of step S281 through step S288
is the same as the processing of step S261 through step S268, so
description thereof will be omitted.

[0761]In step S289, the repetition determining unit 223 outputs the
difference between the approximation value represented by the plane, and
the pixel value of a selected pixel, as the continuity component of the
input image. That is to say, the repetition determining unit 223 outputs
an image wherein the non-continuity component has been removed from the
input image, as the continuity information.

[0762]The processing of step S290 is the same as the processing of step
S270, and accordingly description thereof will be omitted.

[0763]Thus, the non-continuity component extracting unit 201 can output an
image wherein the non-continuity component has been removed from the
input image as the continuity information.

[0764]As described above, in a case wherein real world light signals are
projected, a non-continuous portion of pixel values of multiple pixels of
first image data wherein a part of the continuity of the real world light
signals has been lost is detected, data continuity is detected from the
detected non-continuous portions, a model (function) is generated for
approximating the light signals by estimating the continuity of the real
world light signals based on the detected data continuity, and second
image data is generated based on the generated function, whereby
processing results which are more accurate and have higher precision as
to the event in the real world can be obtained.

[0765]FIG. 72 is a block diagram illustrating another configuration of the
data continuity detecting unit 101.

[0766]With the data continuity detecting unit 101 of which the
configuration is shown in FIG. 72, change in the pixel value of the pixel
of interest which is a pixel of interest in the spatial direction of the
input image, i.e. activity in the spatial direction of the input image,
is detected, multiple sets of pixels made up of a predetermined number of
pixels in one row in the vertical direction or one row in the horizontal
direction are extracted for each angle based on the pixel of interest and
a reference axis according to the detected activity, the correlation of
the extracted pixel sets is detected, and the angle of data continuity
based on the reference axis in the input image is detected based on the
correlation.

[0767]The angle of data continuity means the angle assumed between the
reference axis and the direction of a predetermined dimension where
constant characteristics repeatedly appear in the data 3. Constant
characteristics
repeatedly appearing means a case wherein, for example, the change in
value as to the change in position in the data 3, i.e., the
cross-sectional shape, is the same, and so forth.

[0768]The reference axis may be, for example, an axis indicating the
spatial direction X (the horizontal direction of the screen), an axis
indicating the spatial direction Y (the vertical direction of the
screen), and so forth.

[0770]The activity detecting unit 401 detects change in the pixel values
as to the spatial direction of the input image, i.e., activity in the
spatial direction, and supplies the activity information which indicates
the detected results to the data selecting unit 402 and a continuity
direction derivation unit 404.

[0771]For example, the activity detecting unit 401 detects the change of a
pixel value as to the horizontal direction of the screen, and the change
of a pixel value as to the vertical direction of the screen, and compares
the detected change of the pixel value in the horizontal direction and
the change of the pixel value in the vertical direction, thereby
detecting whether the change of the pixel value in the horizontal
direction is greater as compared with the change of the pixel value in
the vertical direction, or whether the change of the pixel value in the
vertical direction is greater as compared with the change of the pixel
value in the horizontal direction.

[0772]The activity detecting unit 401 supplies to the data selecting unit
402 and the continuity direction derivation unit 404 activity
information, which is the detection results, indicating that the change
of the pixel value in the horizontal direction is greater as compared
with the change of the pixel value in the vertical direction, or
indicating that the change of the pixel value in the vertical direction
is greater as compared with the change of the pixel value in the
horizontal direction.

[0773]In the event that the change of the pixel value in the horizontal
direction is greater as compared with the change of the pixel value in
the vertical direction, arc shapes (half-disc shapes) or pawl shapes are
formed on one row in the vertical direction, as indicated by FIG. 73 for
example, and the arc shapes or pawl shapes are formed repetitively more
in the vertical direction. That is to say, in the event that the change
of the pixel value in the horizontal direction is greater as compared
with the change of the pixel value in the vertical direction, with the
reference axis as the axis representing the spatial direction X, the
angle of the data continuity based on the reference axis in the input
image is a value of any from 45 degrees to 90 degrees.

[0774]In the event that the change of the pixel value in the vertical
direction is greater as compared with the change of the pixel value in
the horizontal direction, arc shapes or pawl shapes are formed on one row
in the horizontal direction, for example, and the arc shapes or pawl shapes
are formed repetitively more in the horizontal direction. That is to say,
in the event that the change of the pixel value in the vertical direction
is greater as compared with the change of the pixel value in the
horizontal direction, with the reference axis as the axis representing
the spatial direction X, the angle of the data continuity based on the
reference axis in the input image is a value of any from 0 degrees to 45
degrees.

[0775]For example, the activity detecting unit 401 extracts from the input
image a block made up of 9 pixels, 3×3, centered on the pixel of
interest, as shown in FIG. 74. The activity detecting unit 401 calculates
the sum of differences of the pixel values regarding the pixels
vertically adjacent, and the sum of differences of the pixel values
regarding the pixels horizontally adjacent. The sum of differences
hdiff of the pixel values regarding the pixels horizontally
adjacent can be obtained with Expression (27).

hdiff = Σ(P(i+1,j) − P(i,j)) (27)

[0776]In the same way, the sum of differences vdiff of the pixel
values regarding the pixels vertically adjacent can be obtained with
Expression (28).

vdiff = Σ(P(i,j+1) − P(i,j)) (28)

[0777]In Expression (27) and Expression (28), P represents the pixel
value, i represents the position of the pixel in the horizontal
direction, and j represents the position of the pixel in the vertical
direction.
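
(The following is a minimal sketch, in Python with NumPy, of the activity
detection of Expressions (27) and (28) over a block centered on the pixel
of interest. It is illustrative only; absolute differences are used here
for robustness, whereas the expressions as printed sum signed differences,
and the function name is an assumption.)

import numpy as np

def detect_activity(block):
    # Sums of differences between horizontally adjacent and vertically
    # adjacent pixel values over a block centred on the pixel of interest
    # (for example 3x3). P[j, i] holds the pixel value at horizontal
    # position i and vertical position j.
    P = np.asarray(block, dtype=float)
    hdiff = np.sum(np.abs(P[:, 1:] - P[:, :-1]))   # cf. Expression (27)
    vdiff = np.sum(np.abs(P[1:, :] - P[:-1, :]))   # cf. Expression (28)
    # hdiff > vdiff suggests a data continuity angle of roughly 45 to 135
    # degrees; otherwise 0 to 45 or 135 to 180 degrees.
    return hdiff, vdiff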

[0778]An arrangement may be made wherein the activity detecting unit 401
compares the calculated sum of differences hdiff of the pixel
values regarding the pixels horizontally adjacent with the sum of
differences vdiff of the pixel values regarding the pixels
vertically adjacent, so as to determine the range of the angle of the
data continuity based on the reference axis in the input image. That is
to say, in this case, the activity detecting unit 401 determines whether
a shape indicated by change in the pixel value as to the position in the
spatial direction is formed repeatedly in the horizontal direction, or
formed repeatedly in the vertical direction.

[0779]For example, the change in pixel values in the horizontal direction
with regard to an arc formed on pixels in one vertical row is greater than
the change of pixel values in the vertical direction, and the change in
pixel values in the vertical direction with regard to an arc formed on
pixels in one horizontal row is greater than the change of pixel values in
the horizontal direction; so it can be said that the change in the
direction of data continuity, i.e., the change in the direction of the
predetermined dimension in which the input image that is the data 3 has a
constant feature, is smaller in comparison with the change in the
direction orthogonal to the data continuity. In other words, the
difference in the direction orthogonal to the direction of data continuity
(hereafter also referred to as the non-continuity direction) is greater as
compared to the difference in the direction of data continuity.

[0780]For example, as shown in FIG. 75, the activity detecting unit 401
compares the calculated sum of differences hdiff of the pixel
values regarding the pixels horizontally adjacent with the sum of
differences vdiff of the pixel values regarding the pixels
vertically adjacent, and in the event that the sum of differences
hdiff of the pixel values regarding the pixels horizontally
adjacent is greater, determines that the angle of the data continuity
based on the reference axis is a value of any from 45 degrees to 135
degrees, and in the event that the sum of differences vdiff of the
pixel values regarding the pixels vertically adjacent is greater,
determines that the angle of the data continuity based on the reference
axis is a value of any from 0 degrees to 45 degrees, or a value of any
from 135 degrees to 180 degrees.

[0782]Note that the activity detecting unit 401 can detect activity by
extracting blocks of arbitrary sizes, such as a block made up of 25
pixels of 5×5, a block made up of 49 pixels of 7×7, and so
forth.

[0783]The data selecting unit 402 sequentially selects pixels of interest
from the pixels of the input image, and extracts multiple sets of pixels
made up of a predetermined number of pixels in one row in the vertical
direction or one row in the horizontal direction for each angle based on
the pixel of interest and the reference axis, based on the activity
information supplied from the activity detecting unit 401.

[0784]For example, in the event that the activity information indicates
that the change in pixel values in the horizontal direction is greater in
comparison with the change in pixel values in the vertical direction,
this means that the data continuity angle is a value of any from 45
degrees to 135 degrees, so the data selecting unit 402 extracts multiple
sets of pixels made up of a predetermined number of pixels in one row in
the vertical direction, for each predetermined angle in the range of 45
degrees to 135 degrees, based on the pixel of interest and the reference
axis.

[0785]In the event that the activity information indicates that the change
in pixel values in the vertical direction is greater in comparison with
the change in pixel values in the horizontal direction, this means that
the data continuity angle is a value of any from 0 degrees to 45 degrees
or from 135 degrees to 180 degrees, so the data selecting unit 402
extracts multiple sets of pixels made up of a predetermined number of
pixels in one row in the horizontal direction, for each predetermined
angle in the range of 0 degrees to 45 degrees or 135 degrees to 180
degrees, based on the pixel of interest and the reference axis.

[0786]Also, for example, in the event that the activity information
indicates that the angle of data continuity is a value of any from 45
degrees to 135 degrees, the data selecting unit 402 extracts multiple
sets of pixels made up of a predetermined number of pixels in one row in
the vertical direction, for each predetermined angle in the range of 45
degrees to 135 degrees, based on the pixel of interest and the reference
axis.

[0787]In the event that the activity information indicates that the angle
of data continuity is a value of any from 0 degrees to 45 degrees or from
135 degrees to 180 degrees, the data selecting unit 402 extracts multiple
sets of pixels made up of a predetermined number of pixels in one row in
the horizontal direction, for each predetermined angle in the range of 0
degrees to 45 degrees or 135 degrees to 180 degrees, based on the pixel
of interest and the reference axis.

[0788]The data selecting unit 402 supplies the multiple sets made up of
the extracted pixels to an error estimating unit 403.

[0789]The error estimating unit 403 detects correlation of pixel sets for
each angle with regard to the multiple sets of extracted pixels.

[0790]For example, with regard to the multiple sets of pixels made up of a
predetermined number of pixels in one row in the vertical direction
corresponding to one angle, the error estimating unit 403 detects the
correlation of the pixel values of the pixels at corresponding positions
of the pixel sets. With regard to the multiple sets of pixels made up of
a predetermined number of pixels in one row in the horizontal direction
corresponding to one angle, the error estimating unit 403 detects the
correlation of the pixel values of the pixels at corresponding positions
of the sets.

[0791]The error estimating unit 403 supplies correlation information
indicating the detected correlation to the continuity direction
derivation unit 404. As the value indicating the correlation, the error
estimating unit 403 calculates the sum of absolute values of difference
between the pixel values of the pixels of the set including the pixel of
interest and the pixel values of the pixels at corresponding positions in
the other sets, supplied from the data selecting unit 402, and supplies
the sum of absolute values of difference to the continuity direction
derivation unit 404 as the correlation information.

[0792]Based on the correlation information supplied from the error
estimating unit 403, the continuity direction derivation unit 404 detects
the data continuity angle based on the reference axis in the input image,
corresponding to the lost continuity of the light signals of the actual
world 1, and outputs data continuity information indicating an angle. For
example, based on the correlation information supplied from the error
estimating unit 403, the continuity direction derivation unit 404 detects
an angle corresponding to the pixel set with the greatest correlation as
the data continuity angle, and outputs data continuity information
indicating the angle corresponding to the pixel set with the greatest
correlation that has been detected.
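
(The following is a minimal, simplified sketch, in Python with NumPy, of
the angle detection of paragraphs [0789] through [0792] for the 45 degree
to 135 degree case: for each candidate angle, 9-pixel vertical sets are
taken from neighboring columns, centered on the pixel nearest a straight
line through the pixel of interest at that angle, and the angle whose sets
have the smallest sum of absolute differences against the set containing
the pixel of interest, i.e. the greatest correlation, is returned. It is
illustrative only; the candidate angles, column offsets, set length, sign
convention for the row offset, and boundary handling are assumptions, and
the pixel of interest is assumed to lie sufficiently inside the image.)

import numpy as np

def detect_continuity_angle(image, row, col, angles_deg=range(45, 136),
                            half_len=4, col_offsets=(-2, -1, 1, 2)):
    # Sum-of-absolute-differences (SAD) based angle search for the
    # 45-135 degree case; the angle with the smallest SAD, i.e. the
    # greatest correlation of the pixel sets, is returned.
    img = np.asarray(image, dtype=float)
    reference = img[row - half_len:row + half_len + 1, col]
    best_angle, best_sad = None, np.inf
    for angle in angles_deg:
        sad = 0.0
        for dx in col_offsets:
            # Row of the pixel nearest the line at horizontal offset dx
            # (rows grow downward, so moving up means a smaller row index).
            r = row - int(round(dx * np.tan(np.radians(angle))))
            if r - half_len < 0 or r + half_len + 1 > img.shape[0]:
                sad = np.inf
                break
            candidate = img[r - half_len:r + half_len + 1, col + dx]
            sad += np.sum(np.abs(reference - candidate))
        if sad < best_sad:
            best_angle, best_sad = angle, sad
    return best_angle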

[0793]The following description will be made regarding detection of data
continuity angle in the range of 0 degrees through 90 degrees (the
so-called first quadrant).

[0794]FIG. 76 is a block diagram illustrating a more detailed
configuration of the data continuity detecting unit 101 shown in FIG. 72.

[0796]First, description will be made regarding the processing of the
pixel selecting unit 411-1 through pixel selecting unit 411-L in the
event that the data continuity angle indicated by the activity
information is a value of any from 45 degrees to 135 degrees.

[0797]The pixel selecting unit 411-1 through pixel selecting unit 411-L
set straight lines of mutually differing predetermined angles which pass
through the pixel of interest, with the axis indicating the spatial
direction X as the reference axis. The pixel selecting unit 411-1 through
pixel selecting unit 411-L select, of the pixels belonging to a vertical
row of pixels to which the pixel of interest belongs, a predetermined
number of pixels above the pixel of interest, a predetermined number of
pixels below the pixel of interest, and the pixel of interest, as a set.

[0798]For example, as shown in FIG. 77, the pixel selecting unit 411-1
through pixel selecting unit 411-L select 9 pixels centered on the pixel
of interest, as a set of pixels, from the pixels belonging to a vertical
row of pixels to which the pixel of interest belongs.

[0799]In FIG. 77, one grid-shaped square (one grid) represents one pixel.
In FIG. 77, the circle shown at the center represents the pixel of
interest.

[0800]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a vertical row of pixels to the left of
the vertical row of pixels to which the pixel of interest belongs, a
pixel at the position closest to the straight line set for each. In FIG.
77, the circle to the lower left of the pixel of interest represents an
example of a selected pixel. The pixel selecting unit 411-1 through pixel
selecting unit 411-L then select, from the pixels belonging to the
vertical row of pixels to the left of the vertical row of pixels to which
the pixel of interest belongs, a predetermined number of pixels above the
selected pixel, a predetermined number of pixels below the selected
pixel, and the selected pixel, as a set of pixels.

[0801]For example, as shown in FIG. 77, the pixel selecting unit 411-1
through pixel selecting unit 411-L select 9 pixels centered on the pixel
at the position closest to the straight line, from the pixels belonging
to the vertical row of pixels to the left of the vertical row of pixels
to which the pixel of interest belongs, as a set of pixels.

[0802]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a vertical row of pixels second left
from the vertical row of pixels to which the pixel of interest belongs, a
pixel at the position closest to the straight line set for each. In FIG.
77, the circle to the far left represents an example of the selected
pixel. The pixel selecting unit 411-1 through pixel selecting unit 411-L
then select, as a set of pixels, from the pixels belonging to the
vertical row of pixels second left from the vertical row of pixels to
which the pixel of interest belongs, a predetermined number of pixels
above the selected pixel, a predetermined number of pixels below the
selected pixel, and the selected pixel.

[0803]For example, as shown in FIG. 77, the pixel selecting unit 411-1
through pixel selecting unit 411-L select 9 pixels centered on the pixel
at the position closest to the straight line, from the pixels belonging
to the vertical row of pixels second left from the vertical row of pixels
to which the pixel of interest belongs, as a set of pixels.

[0804]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a vertical row of pixels to the right of
the vertical row of pixels to which the pixel of interest belongs, a
pixel at the position closest to the straight line set for each. In FIG.
77, the circle to the upper right of the pixel of interest represents an
example of a selected pixel. The pixel selecting unit 411-1 through pixel
selecting unit 411-L then select, from the pixels belonging to the
vertical row of pixels to the right of the vertical row of pixels to
which the pixel of interest belongs, a predetermined number of pixels
above the selected pixel, a predetermined number of pixels below the
selected pixel, and the selected pixel, as a set of pixels.

[0805]For example, as shown in FIG. 77, the pixel selecting unit 411-1
through pixel selecting unit 411-L select 9 pixels centered on the pixel
at the position closest to the straight line, from the pixels belonging
to the vertical row of pixels to the right of the vertical row of pixels
to which the pixel of interest belongs, as a set of pixels.

[0806]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a vertical row of pixels second right
from the vertical row of pixels to which the pixel of interest belongs, a
pixel at the position closest to the straight line set for each. In FIG.
77, the circle to the far right represents an example of the selected
pixel. The pixel selecting unit 411-1 through pixel selecting unit 411-L
then select, from the pixels belonging to the vertical row of pixels
second right from the vertical row of pixels to which the pixel of
interest belongs, a predetermined number of pixels above the selected
pixel, a predetermined number of pixels below the selected pixel, and the
selected pixel, as a set of pixels.

[0807]For example, as shown in FIG. 77, the pixel selecting unit 411-1
through pixel selecting unit 411-L select 9 pixels centered on the pixel
at the position closest to the straight line, from the pixels belonging
to the vertical row of pixels second right from the vertical row of
pixels to which the pixel of interest belongs, as a set of pixels.
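As a rough illustration of the selection described above, the following minimal Python sketch gathers, for one candidate angle, the five sets of nine pixels from the vertical row of the pixel of interest and from the rows one and two to its left and right. The NumPy array layout, the function name select_pixel_sets_vertical, and the clipping at the image border are assumptions introduced here for illustration, and the sketch is not the implementation of the present invention.

```python
import numpy as np

def select_pixel_sets_vertical(image, x0, y0, angle_deg, pixels_per_set=9):
    """For the vertical row of the pixel of interest and the rows one and
    two to its left and right, find the pixel closest to a straight line
    of the given angle passing through the pixel of interest, and take
    pixels_per_set pixels centered on it as one set."""
    half = pixels_per_set // 2
    slope = np.tan(np.radians(angle_deg))        # change in y per one pixel in x
    sets = []
    for dx in (-2, -1, 0, 1, 2):                 # two rows left ... two rows right
        x = int(np.clip(x0 + dx, 0, image.shape[1] - 1))
        yc = int(round(y0 + slope * dx))         # pixel closest to the line in this row
        ys = np.clip(np.arange(yc - half, yc + half + 1), 0, image.shape[0] - 1)
        sets.append(image[ys, x])
    return sets                                  # five sets of pixels_per_set pixel values
```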

[0810]Note that the number of pixel sets may be any number, such
as 3 or 7, for example, and does not restrict the present invention.
Also, the number of pixels selected as one set may be any number,
such as 5 or 13, for example, and does not restrict the present
invention.

[0811]Note that the pixel selecting unit 411-1 through pixel selecting
unit 411-L may be arranged to select pixel sets from pixels within a
predetermined range in the vertical direction. For example, the pixel
selecting unit 411-1 through pixel selecting unit 411-L can select pixel
sets from 121 pixels in the vertical direction (60 pixels upward from the
pixel of interest, and 60 pixels downward). In this case, the data
continuity detecting unit 101 can detect the angle of data continuity up
to 88.09 degrees as to the axis representing the spatial direction X.

[0812]The pixel selecting unit 411-1 supplies the selected set of pixels
to the estimated error calculating unit 412-1, and the pixel selecting
unit 411-2 supplies the selected set of pixels to the estimated error
calculating unit 412-2. In the same way, each pixel selecting unit 411-3
through pixel selecting unit 411-L supplies the selected set of pixels to
each estimated error calculating unit 412-3 through estimated error
calculating unit 412-L.

[0813]The estimated error calculating unit 412-1 through estimated error
calculating unit 412-L detect the correlation of the pixel values of the
pixels at corresponding positions in the multiple sets, supplied from each
of the pixel selecting unit 411-1 through pixel selecting unit 411-L. For
example, the estimated error calculating unit 412-1 through estimated error
calculating unit 412-L calculate, as a value indicating the correlation,
the sum of absolute values of difference between the pixel values of the
pixels of the set containing the pixel of interest, and the pixel values
of the pixels at corresponding positions in other sets, supplied from one
of the pixel selecting unit 411-1 through pixel selecting unit 411-L.

[0814]More specifically, based on the pixel values of the pixels of the
set containing the pixel of interest and the pixel values of the pixels
of the set made up of pixels belonging to one vertical row of pixels to
the left side of the pixel of interest supplied from one of the pixel
selecting unit 411-1 through pixel selecting unit 411-L, the estimated
error calculating unit 412-1 through estimated error calculating unit
412-L calculate the difference of the pixel values of the topmost pixel,
then calculates the difference of the pixel values of the second pixel
from the top, and so on to calculate the absolute values of difference of
the pixel values in order from the top pixel, and further calculates the
sum of absolute values of the calculated differences. Based on the pixel
values of the pixels of the set containing the pixel of interest and the
pixel values of the pixels of the set made up of pixels belonging to one
vertical row of pixels two to the left from the pixel of interest
supplied from one of the pixel selecting unit 411-1 through pixel
selecting unit 411-L, the estimated error calculating unit 412-1 through
estimated error calculating unit 412-L calculate the absolute values of
difference of the pixel values in order from the top pixel, and
calculates the sum of absolute values of the calculated differences.

[0815]Then, based on the pixel values of the pixels of the set containing
the pixel of interest and the pixel values of the pixels of the set made
up of pixels belonging to one vertical row of pixels to the right side of
the pixel of interest supplied from one of the pixel selecting unit 411-1
through pixel selecting unit 411-L, the estimated error calculating unit
412-1 through estimated error calculating unit 412-L calculate the
difference of the pixel values of the topmost pixel, then calculates the
difference of the pixel values of the second pixel from the top, and so
on to calculate the absolute values of difference of the pixel values in
order from the top pixel, and further calculates the sum of absolute
values of the calculated differences. Based on the pixel values of the
pixels of the set containing the pixel of interest and the pixel values
of the pixels of the set made up of pixels belonging to one vertical row
of pixels two to the right from the pixel of interest supplied from one
of the pixel selecting unit 411-1 through pixel selecting unit 411-L, the
estimated error calculating unit 412-1 through estimated error
calculating unit 412-L calculate the absolute values of difference of
the pixel values in order from the top pixel, and calculates the sum of
absolute values of the calculated differences.

[0816]The estimated error calculating unit 412-1 through estimated error
calculating unit 412-L add all of the sums of absolute values of
difference of the pixel values thus calculated, thereby calculating the
aggregate of absolute values of difference of the pixel values.

[0818]Note that the estimated error calculating unit 412-1 through
estimated error calculating unit 412-L are not restricted to the sum of
absolute values of difference of pixel values, and can also calculate
other values as correlation values as well, such as the sum of squared
differences of pixel values, or correlation coefficients based on pixel
values, and so forth.
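As a rough illustration of these alternatives, the sketch below scores the correlation between the set containing the pixel of interest and the other sets using the sum of absolute differences, the sum of squared differences, or a correlation coefficient. The function name correlation_score, the treatment of each pixel set as a one-dimensional NumPy array, and the way the per-set values are accumulated are assumptions made here for illustration only.

```python
import numpy as np

def correlation_score(set_of_interest, other_sets, metric="sad"):
    """Accumulate a correlation value between the set containing the pixel
    of interest and the other pixel sets. For "sad" and "ssd" a smaller
    value indicates a stronger correlation; for "corrcoef" a larger value
    indicates a stronger correlation."""
    a = np.asarray(set_of_interest, dtype=float)
    total = 0.0
    for s in other_sets:
        b = np.asarray(s, dtype=float)
        if metric == "sad":            # sum of absolute values of difference
            total += np.abs(a - b).sum()
        elif metric == "ssd":          # sum of squared differences
            total += ((a - b) ** 2).sum()
        elif metric == "corrcoef":     # correlation coefficient of the two sets
            total += np.corrcoef(a, b)[0, 1]
        else:
            raise ValueError(metric)
    return total
```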

[0819]The smallest error angle selecting unit 413 detects the data
continuity angle based on the reference axis in the input image which
corresponds to the continuity of the image which is the lost actual world
1 light signals, based on the correlation detected by the estimated error
calculating unit 412-1 through estimated error calculating unit 412-L
with regard to mutually different angles. That is to say, based on the
correlation detected by the estimated error calculating unit 412-1
through estimated error calculating unit 412-L with regard to mutually
different angles, the smallest error angle selecting unit 413 selects the
greatest correlation, and takes the angle regarding which the selected
correlation was detected as the data continuity angle based on the
reference axis, thereby detecting the data continuity angle based on the
reference axis in the input image.

[0820]For example, of the aggregates of absolute values of difference of
the pixel values supplied from the estimated error calculating unit 412-1
through estimated error calculating unit 412-L, the smallest error angle
selecting unit 413 selects the smallest aggregate. With regard to the
pixel set of which the selected aggregate was calculated, the smallest
error angle selecting unit 413 makes reference to a pixel belonging to
the one vertical row of pixels two to the left from the pixel of interest
and at the closest position to the straight line, and to a pixel
belonging to the one vertical row of pixels two to the right from the
pixel of interest and at the closest position to the straight line.

[0821]As shown in FIG. 77, the smallest error angle selecting unit 413
obtains the distance S in the vertical direction of the position of the
pixels to reference from the position of the pixel of interest. As shown
in FIG. 78, the smallest error angle selecting unit 413 calculates the
angle θ of data continuity based on the axis indicating the spatial
direction X which is the reference axis in the input image which is image
data, that corresponds to the lost actual world 1 light signals
continuity, from Expression (29).

θ=tan⁻¹(S/2) (29)
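A brief sketch applying Expression (29) is shown below; the function name is hypothetical, and the usage example merely reproduces the 88.09 degree figure mentioned above for a vertical distance of 60 pixels between the pixel of interest and the referenced pixels two rows away.

```python
import math

def data_continuity_angle(S):
    """Expression (29): the data continuity angle, in degrees, from the
    vertical distance S of the referenced pixels, which belong to the
    vertical rows of pixels two to the left and two to the right of the
    pixel of interest."""
    return math.degrees(math.atan(S / 2.0))

print(data_continuity_angle(60))  # approximately 88.09 degrees
```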

[0822]Next, description will be made regarding the processing of the pixel
selecting unit 411-1 through pixel selecting unit 411-L in the event that
the data continuity angle indicated by the activity information is any
value from 0 degrees to 45 degrees or from 135 degrees to 180 degrees.

[0823]The pixel selecting unit 411-1 through pixel selecting unit 411-L
set straight lines of predetermined angles which pass through the pixel
of interest, with the axis indicating the spatial direction X as the
reference axis, and select, of the pixels belonging to a horizontal row
of pixels to which the pixel of interest belongs, a predetermined number
of pixels to the left of the pixel of interest, and a predetermined number
of pixels to the right of the pixel of interest, and the pixel of
interest, as a pixel set.

[0824]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a horizontal row of pixels above the
horizontal row of pixels to which the pixel of interest belongs, a pixel
at the position closest to the straight line set for each. The pixel
selecting unit 411-1 through pixel selecting unit 411-L then select, from
the pixels belonging to the horizontal row of pixels above the horizontal
row of pixels to which the pixel of interest belongs, a predetermined
number of pixels to the left of the selected pixel, a predetermined
number of pixels to the right of the selected pixel, and the selected
pixel, as a pixel set.

[0825]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a horizontal row of pixels two above the
horizontal row of pixels to which the pixel of interest belongs, a pixel
at the position closest to the straight line set for each. The pixel
selecting unit 411-1 through pixel selecting unit 411-L then select, from
the pixels belonging to the horizontal row of pixels two above the
horizontal row of pixels to which the pixel of interest belongs, a
predetermined number of pixels to the left of the selected pixel, a
predetermined number of pixels to the right of the selected pixel, and
the selected pixel, as a pixel set.

[0826]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a horizontal row of pixels below the
horizontal row of pixels to which the pixel of interest belongs, a pixel
at the position closest to the straight line set for each. The pixel
selecting unit 411-1 through pixel selecting unit 411-L then select, from
the pixels belonging to the horizontal row of pixels below the horizontal
row of pixels to which the pixel of interest belongs, a predetermined
number of pixels to the left of the selected pixel, a predetermined
number of pixels to the right of the selected pixel, and the selected
pixel, as a pixel set.

[0827]The pixel selecting unit 411-1 through pixel selecting unit 411-L
select, from pixels belonging to a horizontal row of pixels two below the
horizontal row of pixels to which the pixel of interest belongs, a pixel
at the position closest to the straight line set for each. The pixel
selecting unit 411-1 through pixel selecting unit 411-L then select, from
the pixels belonging to the horizontal row of pixels two below the
horizontal row of pixels to which the pixel of interest belongs, a
predetermined number of pixels to the left of the selected pixel, a
predetermined number of pixels to the right of the selected pixel, and
the selected pixel, as a pixel set.

[0830]The pixel selecting unit 411-1 supplies the selected set of pixels
to the estimated error calculating unit 412-1, and the pixel selecting
unit 411-2 supplies the selected set of pixels to the estimated error
calculating unit 412-2. In the same way, each pixel selecting unit 411-3
through pixel selecting unit 411-L supplies the selected set of pixels to
each estimated error calculating unit 412-3 through estimated error
calculating unit 412-L.

[0832]The smallest error angle selecting unit 413 detects the data
continuity angle based on the reference axis in the input image which
corresponds to the continuity of the image which is the lost actual world
1 light signals, based on the correlation detected by the estimated error
calculating unit 412-1 through estimated error calculating unit 412-L.

[0833]Next, data continuity detection processing with the data continuity
detecting unit 101 of which the configuration is shown in FIG. 72,
corresponding to the processing in step S101, will be described with
reference to the flowchart in FIG. 79.

[0834]In step S401, the activity detecting unit 401 and the data selecting
unit 402 select a pixel of interest from the input image. The activity
detecting unit 401 and the data selecting
unit 402 select the same pixel of interest. For example, the activity
detecting unit 401 and the data selecting unit 402 select the pixel of
interest from the input image in raster scan order.

[0835]In step S402, the activity detecting unit 401 detects activity with
regard to the pixel of interest. For example, the activity detecting unit
401 detects activity based on the difference of pixel values of pixels
aligned in the vertical direction of a block made up of a predetermined
number of pixels centered on the pixel of interest, and the difference of
pixel values of pixels aligned in the horizontal direction.

[0836]The activity detecting unit 401 detects activity in the spatial
direction as to the pixel of interest, and supplies activity information
indicating the detected results to the data selecting unit 402 and the
continuity direction derivation unit 404.

[0837]In step S403, the data selecting unit 402 selects, from a row of
pixels including the pixel of interest, a predetermined number of pixels
centered on the pixel of interest, as a pixel set. For example, the data
selecting unit 402 selects a predetermined number of pixels above or to
the left of the pixel of interest, and a predetermined number of pixels
below or to the right of the pixel of interest, which are pixels
belonging to a vertical or horizontal row of pixels to which the pixel of
interest belongs, and also the pixel of interest, as a pixel set.

[0838]In step S404, the data selecting unit 402 selects, as a pixel set, a
predetermined number of pixels each from a predetermined number of pixel
rows for each angle in a predetermined range based on the activity
detected by the processing in step S402. For example, the data selecting
unit 402 sets straight lines with angles of a predetermined range which
pass through the pixel of interest, with the axis indicating the spatial
direction X as the reference axis, selects a pixel which is one or two
rows away from the pixel of interest in the horizontal direction or
vertical direction and which is closest to the straight line, and selects
a predetermined number of pixels above or to the left of the selected
pixel, and a predetermined number of pixels below or to the right of the
selected pixel, and the selected pixel closest to the line, as a pixel
set. The data selecting unit 402 selects pixel sets for each angle.

[0840]In step S405, the error estimating unit 403 calculates the
correlation between the set of pixels centered on the pixel of interest,
and the pixel sets selected for each angle. For example, the error
estimating unit 403 calculates the sum of absolute values of difference
of the pixel values of the pixels of the set including the pixel of
interest and the pixel values of the pixels at corresponding positions in
other sets, for each angle.

[0841]The angle of data continuity may be detected based on the
correlation between pixel sets selected for each angle.

[0843]In step S406, from the position of the pixel set having the strongest
correlation based on the correlation calculated in the processing in step
S405, the continuity direction derivation unit 404 detects the data
continuity angle based on the reference axis in the input image which is
image data that corresponds to the lost actual world 1 light signal
continuity. For example, the continuity direction derivation unit 404
selects the smallest of the aggregates of absolute values of
difference of pixel values, and detects the data continuity angle θ
from the position of the pixel set regarding which the selected aggregate
has been calculated.

[0845]In step S407, the data selecting unit 402 determines whether or not
processing of all pixels has ended, and in the event that determination
is made that processing of all pixels has not ended, the flow returns to
step S401, a pixel of interest is selected from pixels not yet taken as
the pixel of interest, and the above-described processing is repeated.

[0846]In the event that determination is made in step S407 that processing
of all pixels has ended, the processing ends.

[0847]Thus, the data continuity detecting unit 101 can detect the data
continuity angle based on the reference axis in the image data,
corresponding to the lost actual world 1 light signal continuity.
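Taken together, steps S403 through S406 amount to scoring every candidate angle by the correlation of its pixel sets and keeping the angle with the strongest correlation. The outline below expresses that loop in Python, reusing the hypothetical helpers select_pixel_sets_vertical and correlation_score from the earlier sketches; it is only a sketch of the flow described above, not the claimed processing.

```python
def detect_continuity_angle(image, x0, y0, candidate_angles):
    """For each candidate angle, gather the pixel sets, score them against
    the set containing the pixel of interest by the sum of absolute
    differences, and return the angle with the smallest sum (that is, the
    strongest correlation)."""
    best_angle, smallest_error = None, float("inf")
    for angle in candidate_angles:
        sets = select_pixel_sets_vertical(image, x0, y0, angle)
        center_set, other_sets = sets[2], sets[:2] + sets[3:]
        error = correlation_score(center_set, other_sets, metric="sad")
        if error < smallest_error:
            best_angle, smallest_error = angle, error
    return best_angle
```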

[0848]Note that an arrangement may be made wherein the data continuity
detecting unit 101 of which the configuration is shown in FIG. 72 detects
activity in the spatial direction of the input image with regard to the
pixel of interest in the frame of interest, extracts multiple pixel sets
made up of a predetermined number of pixels in one row in the vertical
direction or one row in the horizontal direction from the frame of
interest and from each of the frames before and after the frame of
interest time-wise, for each angle and movement vector based on the pixel
of interest and the space-directional reference axis, according to the
detected activity, detects the correlation of the extracted pixel sets,
and detects the data continuity angle in the time direction and spatial
direction in the input image, based on this correlation.

[0849]For example, as shown in FIG. 80, the data selecting unit 402
extracts multiple pixel sets made up of a predetermined number of pixels
in one row in the vertical direction or one row in the horizontal
direction from frame #n which is the frame of interest, frame #n-1, and
frame #n+1, for each angle and movement vector based on the pixel of
interest and the space-directional reference axis, according to the
detected activity.

[0850]The frame #n-1 is a frame which is previous to the frame #n
time-wise, and the frame #n+1 is a frame following the frame #n
time-wise. That is to say, the frame #n-1, frame #n, and frame #n+1, are
displayed in the order of frame #n-1, frame #n, and frame #n+1.

[0851]The error estimating unit 403 detects the correlation of pixel sets
for each single angle and single movement vector, with regard to the
multiple sets of the pixels that have been extracted. The continuity
direction derivation unit 404 detects the data continuity angle in the
temporal direction and spatial direction in the input image which
corresponds to the lost actual world 1 light signal continuity, based on
the correlation of pixel sets, and outputs the data continuity
information indicating the angle.
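A rough sketch of this time-and-space variant is shown below, again reusing the hypothetical helpers from the earlier sketches. The way a candidate movement vector offsets the selection position in frame #n-1 and frame #n+1 is an assumption made here for illustration.

```python
def detect_spatiotemporal_continuity(frames, x0, y0, candidate_angles, candidate_motions):
    """frames is (frame #n-1, frame #n, frame #n+1). Each candidate pairs an
    angle with a movement vector (mx, my) per frame step; the candidate whose
    pixel sets correlate most strongly across the three frames is returned
    as (angle, movement vector)."""
    prev_frame, cur_frame, next_frame = frames
    best, smallest_error = None, float("inf")
    for angle in candidate_angles:
        for mx, my in candidate_motions:
            # Sets around the pixel of interest in frame #n, and around the
            # motion-shifted positions in frame #n-1 and frame #n+1.
            sets = (select_pixel_sets_vertical(cur_frame, x0, y0, angle)
                    + select_pixel_sets_vertical(prev_frame, x0 - mx, y0 - my, angle)
                    + select_pixel_sets_vertical(next_frame, x0 + mx, y0 + my, angle))
            center_set, other_sets = sets[2], sets[:2] + sets[3:]
            error = correlation_score(center_set, other_sets, metric="sad")
            if error < smallest_error:
                best, smallest_error = (angle, (mx, my)), error
    return best
```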

[0852]FIG. 81 is a block diagram illustrating another configuration of the
data continuity detecting unit 101 shown in FIG. 72, in further detail.
Portions which are the same as the case shown in FIG. 76 are denoted with
the same numerals, and description thereof will be omitted.

[0854]With the data continuity detecting unit 101 shown in FIG. 81, pixel
sets of a number corresponding to the range of the angle, each made up of
a number of pixels corresponding to the range of the angle, are extracted,
the correlation of the extracted pixel sets is detected, and the data
continuity angle based on the reference axis in the input image is
detected based on the detected correlation.

[0855]First, the processing of the pixel selecting unit 421-1 through
pixel selecting unit 421-L in the event that the angle of the data
continuity indicated by activity information is any value from 45 degrees to
135 degrees, will be described.

[0856]As shown to the left side in FIG. 82, with the data continuity
detecting unit 101 shown in FIG. 76, pixel sets of a predetermined number
of pixels are extracted regardless of the angle of the set straight line,
but with the data continuity detecting unit 101 shown in FIG. 81, pixel
sets of a number of pixels corresponding to the range of the angle of the
set straight line are extracted, as indicated at the right side of FIG.
82. Also, with the data continuity detecting unit 101 shown in FIG. 81,
pixel sets of a number corresponding to the range of the angle of the
set straight line are extracted.

[0857]The pixel selecting unit 421-1 through pixel selecting unit 421-L
set straight lines of mutually differing predetermined angles which pass
through the pixel of interest with the axis indicating the spatial
direction X as a reference axis, in the range of 45 degrees to 135
degrees.

[0858]The pixel selecting unit 421-1 through pixel selecting unit 421-L
select, from pixels belonging to one vertical row of pixels to which the
pixel of interest belongs, pixels above the pixel of interest and pixels
below the pixel of interest of a number corresponding to the range of the
angle of the straight line set for each, and the pixel of interest, as a
pixel set.

[0859]The pixel selecting unit 421-1 through pixel selecting unit 421-L
select, from pixels belonging to one vertical row of pixels each on the
left side and the right side of the one vertical row of pixels to which
the pixel of interest belongs, a predetermined distance away therefrom in
the horizontal direction with the pixel of interest as a reference, the
pixels closest to the straight lines set for each, and select, from the
one vertical row of pixels containing each selected pixel, pixels above
the selected pixel of a number corresponding to the range of angle of the
set straight line, pixels below the selected pixel of a number
corresponding to the range of angle of the set straight line, and the
selected pixel, as a pixel set.

[0860]That is to say, the pixel selecting unit 421-1 through pixel
selecting unit 421-L select pixels of a number corresponding to the range
of angle of the set straight line as pixel sets. The pixel selecting unit
421-1 through pixel selecting unit 421-L select pixel sets of a number
corresponding to the range of angle of the set straight line.

[0861]For example, in the event that the image of a fine line, positioned
at an angle approximately 45 degrees as to the spatial direction X, and
having a width which is approximately the same width as the detection
region of a detecting element, has been imaged with the sensor 2, the
image of the fine line is projected on the data 3 such that arc shapes
are formed on three pixels aligned in one row in the spatial direction Y
for the fine-line image. Conversely, in the event that the image of a
fine line, positioned at an angle approximately vertical to the spatial
direction X, and having a width which is approximately the same width as
the detection region of a detecting element, has been imaged with the
sensor 2, the image of the fine line is projected on the data 3 such that
arc shapes are formed on a great number of pixels aligned in one row in
the spatial direction Y for the fine-line image.

[0862]With the same number of pixels included in the pixel sets, in the
event that the fine line is positioned at an angle approximately 45
degrees to the spatial direction X, the number of pixels on which the
fine line image has been projected is smaller in the pixel set, meaning
that the resolution is lower. On the other hand, in the event that the
fine line is positioned approximately vertical to the spatial direction
X, processing is performed on a part of the pixels on which the fine line
image has been projected, which may lead to lower accuracy.

[0863]Accordingly, to make the number of pixels upon which the fine line
image is projected approximately equal, the pixel selecting unit
421-1 through pixel selecting unit 421-L select the pixels and the pixel
sets so as to reduce the number of pixels included in each of the pixel
sets and increase the number of pixel sets in the event that the straight
line set is closer to an angle of 45 degrees as to the spatial direction
X, and increase the number of pixels included in each of the pixel sets
and reduce the number of pixel sets in the event that the straight line
set is closer to being vertical as to the spatial direction X.

[0864]For example, as shown in FIG. 83 and FIG. 84, in the event that the
angle of the set straight line is within the range of 45 degrees or
greater but smaller than 63.4 degrees (the range indicated by A in FIG.
83 and FIG. 84), the pixel selecting unit 421-1 through pixel selecting
unit 421-L select five pixels centered on the pixel of interest from one
vertical row of pixels as to the pixel of interest, as a pixel set, and
also select as pixel sets five pixels each from pixels belonging to one
row of pixels each on the left side and the right side of the pixel of
interest within five pixels therefrom in the horizontal direction.

[0865]That is to say, in the event that the angle of the set straight line
is within the range of 45 degrees or greater but smaller than 63.4
degrees, the pixel selecting unit 421-1 through pixel selecting unit 421-L
select 11 pixel sets each made up of five pixels, from the input image.
In this case, the pixel selected as the pixel which is at the closest
position to the set straight line is at a position five pixels to nine
pixels in the vertical direction as to the pixel of interest.

[0866]In FIG. 84, the number of rows indicates the number of rows of
pixels to the left side or right side of the pixel of interest from which
pixels are selected as pixel sets. In FIG. 84, the number of pixels in
one row indicates the number of pixels selected as a pixel set from the
one vertical row of pixels to which the pixel of interest belongs, or from
the rows of pixels to the left side or the right side of the pixel of
interest. In FIG. 84, the
selection range of pixels indicates the position of pixels to be selected
in the vertical direction, as the pixel at a position closest to the set
straight line as to the pixel of interest.
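The selection rule summarized in FIG. 84 and detailed in the following paragraphs can be captured in a small lookup, sketched below. The range boundaries and the pairs of values correspond to the ranges A through D described in the text; the function name, the mirroring of angles above 90 degrees, and the handling of angles of 87.7 degrees or greater are assumptions made here for illustration.

```python
def set_size_and_count(angle_deg):
    """Return (pixels per set, number of sets) for a set straight line at
    the given angle, in the 45 degree to 135 degree case of FIG. 84."""
    # Assumption: angles above 90 degrees mirror those below 90 degrees.
    a = angle_deg if angle_deg <= 90.0 else 180.0 - angle_deg
    if 45.0 <= a < 63.4:      # range A
        return 5, 11
    elif a < 71.6:            # range B
        return 7, 9
    elif a < 76.0:            # range C
        return 9, 7
    elif a < 87.7:            # range D
        return 11, 5
    else:                     # assumption: near-vertical lines treated like range D
        return 11, 5
```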

[0867]As shown in FIG. 85, for example, in the event that the angle of the
set straight line is 45 degrees, the pixel selecting unit 421-1 selects
five pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets five pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
five pixels therefrom in the horizontal direction. That is to say, the
pixel selecting unit 421-1 selects 11 pixel sets each made up of five
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
five pixels in the vertical direction as to the pixel of interest.

[0868]Note that in FIG. 85 through FIG. 92, the squares represented by
dotted lines (single grids separated by dotted lines) indicate single
pixels, and squares represented by solid lines indicate pixel sets. In
FIG. 85 through FIG. 92, the coordinate of the pixel of interest in the
spatial direction X is 0, and the coordinate of the pixel of interest in
the spatial direction Y is 0.

[0869]Also, in FIG. 85 through FIG. 92, the hatched squares indicate the
pixel of interest or the pixels at positions closest to the set straight
line. In FIG. 85 through FIG. 92, the squares represented by heavy lines
indicate the set of pixels selected with the pixel of interest as the
center.

[0870]As shown in FIG. 86, for example, in the event that the angle of the
set straight line is 60.9 degrees, the pixel selecting unit 421-2 selects
five pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets five pixels each from pixels belonging to one vertical row of
pixels each on the left side and the right side of the pixel of interest
within five pixels therefrom in the horizontal direction. That is to say,
the pixel selecting unit 421-2 selects 11 pixel sets each made up of five
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
nine pixels in the vertical direction as to the pixel of interest.

[0871]For example, as shown in FIG. 83 and FIG. 84, in the event that the
angle of the set straight line is 63.4 degrees or greater but smaller
than 71.6 degrees (the range indicated by B in FIG. 83 and FIG. 84), the
pixel selecting unit 421-1 through pixel selecting unit 421-L select
seven pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also select as
pixel sets seven pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
four pixels therefrom in the horizontal direction.

[0872]That is to say, in the event that the angle of the set straight line
is 63.4 degrees or greater but smaller than 71.6 degrees, the pixel
selecting unit 421-1 through pixel selecting unit 421-L select nine pixel
sets each made up of seven pixels, from the input image. In this case,
the pixel selected as the pixel which is at the closest position to the
set straight line is at a position eight pixels to 11 pixels in the
vertical direction as to the pixel of interest.

[0873]As shown in FIG. 87, for example, in the event that the angle of the
set straight line is 63.4 degrees, the pixel selecting unit 421-3 selects
seven pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets seven pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
four pixels therefrom in the horizontal direction. That is to say, the
pixel selecting unit 421-3 selects nine pixel sets each made up of seven
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
eight pixels in the vertical direction as to the pixel of interest.

[0874]As shown in FIG. 88, for example, in the event that the angle of the
set straight line is 70.0 degrees, the pixel selecting unit 421-4 selects
seven pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets seven pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
four pixels therefrom in the horizontal direction. That is to say, the
pixel selecting unit 421-4 selects nine pixel sets each made up of seven
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
11 pixels in the vertical direction as to the pixel of interest.

[0875]For example, as shown in FIG. 83 and FIG. 84, in the event that the
angle of the set straight line is 71.6 degrees or greater but smaller
than 76.0 degrees (the range indicated by C in FIG. 83 and FIG. 84), the
pixel selecting unit 421-1 through pixel selecting unit 421-L select nine
pixels centered on the pixel of interest from one vertical row of pixels
as to the pixel of interest, as a pixel set, and also select as pixel
sets nine pixels each from pixels belonging to one row of pixels each on
the left side and the right side of the pixel of interest within three
pixels therefrom in the horizontal direction.

[0876]That is to say, in the event that the angle of the set straight line
is 71.6 degrees or greater but smaller than 76.0 degrees, the pixel
selecting unit 421-1 through pixel selecting unit 421-L select seven
pixel sets each made up of nine pixels, from the input image. In this
case, the pixel selected as the pixel which is at the closest position to
the set straight line is at a position nine pixels to 11 pixels in the
vertical direction as to the pixel of interest.

[0877]As shown in FIG. 89, for example, in the event that the angle of the
set straight line is 71.6 degrees, the pixel selecting unit 421-5 selects
nine pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets nine pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
three pixels therefrom in the horizontal direction. That is to say, the
pixel selecting unit 421-5 selects seven pixel sets each made up of nine
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
nine pixels in the vertical direction as to the pixel of interest.

[0878]Also, as shown in FIG. 90, for example, in the event that the angle
of the set straight line is 74.7 degrees, the pixel selecting unit 421-6
selects nine pixels centered on the pixel of interest from one vertical
row of pixels as to the pixel of interest, as a pixel set, and also
selects as pixel sets nine pixels each from pixels belonging to one row
of pixels each on the left side and the right side of the pixel of
interest within three pixels therefrom in the horizontal direction. That
is to say, the pixel selecting unit 421-6 selects seven pixel sets each
made up of nine pixels, from the input image. In this case, of the pixels
selected as the pixels at the closest position to the set straight line,
the pixel which is at the farthest position from the pixel of interest is
at a position 11 pixels in the vertical direction as to the pixel of
interest.

[0879]For example, as shown in FIG. 83 and FIG. 84, in the event that the
angle of the set straight line is 76.0 degrees or greater but smaller
than 87.7 degrees (the range indicated by D in FIG. 83 and FIG. 84), the
pixel selecting unit 421-1 through pixel selecting unit 421-L select 11
pixels centered on the pixel of interest from one vertical row of pixels
as to the pixel of interest, as a pixel set, and also select as pixel
sets 11 pixels each from pixels belonging to one row of pixels each on
the left side and the right side of the pixel of interest within two
pixels therefrom in the horizontal direction. That is to say, in the
event that the angle of the set straight line is 76.0 degrees or greater
but smaller than 87.7 degrees, the pixel selecting unit 421-1 through
pixel selecting unit 421-L select five pixel sets each made up of 11
pixels, from the input image. In this case, the pixel selected as the
pixel which is at the closest position to the set straight line is at a
position eight pixels to 50 pixels in the vertical direction as to the
pixel of interest.

[0880]As shown in FIG. 91, for example, in the event that the angle of the
set straight line is 76.0 degrees, the pixel selecting unit 421-7 selects
11 pixels centered on the pixel of interest from one vertical row of
pixels as to the pixel of interest, as a pixel set, and also selects as
pixel sets 11 pixels each from pixels belonging to one row of pixels each
on the left side and the right side of the pixel of interest within two
pixels therefrom in the horizontal direction. That is to say, the pixel
selecting unit 421-7 selects five pixel sets each made up of 11 pixels,
from the input image. In this case, of the pixels selected as the pixels
at the closest position to the set straight line, the pixel which is at
the farthest position from the pixel of interest is at a position eight
pixels in the vertical direction as to the pixel of interest.

[0881]Also, as shown in FIG. 92, for example, in the event that the angle
of the set straight line is 87.7 degrees, the pixel selecting unit 421-8
selects 11 pixels centered on the pixel of interest from one vertical row
of pixels as to the pixel of interest, as a pixel set, and also selects
as pixel sets 11 pixels each from pixels belonging to one row of pixels
each on the left side and the right side of the pixel of interest within
two pixels therefrom in the horizontal direction. That is to say, the
pixel selecting unit 421-8 selects five pixel sets each made up of 11
pixels, from the input image. In this case, of the pixels selected as the
pixels at the closest position to the set straight line, the pixel which
is at the farthest position from the pixel of interest is at a position
50 pixels in the vertical direction as to the pixel of interest.

[0882]Thus, the pixel selecting unit 421-1 through pixel selecting unit
421-L each select a predetermined number of pixel sets corresponding to
the range of the angle, made up of a predetermined number of pixels
corresponding to the range of the angle.

[0884]The estimated error calculating unit 422-1 through estimated error
calculating unit 422-L detect the correlation of pixel values of the
pixels at corresponding positions in the multiple sets supplied from each
of the pixel selecting unit 421-1 through pixel selecting unit 421-L. For
example, the estimated error calculating unit 422-1 through estimated
error calculating unit 422-L calculate the sum of absolute values of
difference between the pixel values of the pixels of the pixel set
including the pixel of interest, and of the pixel values of the pixels at
corresponding positions in the other multiple sets, supplied from each of
the pixel selecting unit 421-1 through pixel selecting unit 421-L, and
divide the calculated sum by the number of pixels contained in the pixel
sets other than the pixel set containing the pixel of interest. The
reason for dividing the calculated sum by the number of pixels contained
in sets other than the set containing the pixel of interest is to
normalize the value indicating the correlation, since the number of
pixels selected differs according to the angle of the straight line that
has been set.
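A minimal sketch of this normalization is shown below, assuming the same array-of-sets layout as the earlier sketches; dividing by the number of pixels contained in the other sets makes the values comparable between angle ranges that use different set sizes and set counts.

```python
import numpy as np

def normalized_sad(set_of_interest, other_sets):
    """Sum of absolute values of difference against the set containing the
    pixel of interest, divided by the total number of pixels in the other
    sets, so that a smaller value always indicates a stronger correlation
    regardless of the angle range."""
    a = np.asarray(set_of_interest, dtype=float)
    total = sum(np.abs(a - np.asarray(s, dtype=float)).sum() for s in other_sets)
    return total / sum(len(s) for s in other_sets)
```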

[0886]Next, the processing of the pixel selecting unit 421-1 through pixel
selecting unit 421-L in the event that the angle of the data continuity
indicated by activity information is any value from 0 degrees to 45
degrees or from 135 degrees to 180 degrees, will be described.

[0887]The pixel selecting unit 421-1 through pixel selecting unit 421-L
set straight lines of mutually differing predetermined angles which pass
through the pixel of interest with the axis indicating the spatial
direction X as a reference, in the range of 0 degrees to 45 degrees or
135 degrees to 180 degrees.

[0888]The pixel selecting unit 421-1 through pixel selecting unit 421-L
select, from pixels belonging to one horizontal row of pixels to which
the pixel of interest belongs, pixels to the left side of the pixel of
interest of a number corresponding to the range of angle of the set line,
pixels to the right side of the pixel of interest of a number
corresponding to the range of angle of the set line, and the pixel of
interest, as a pixel set.

[0889]The pixel selecting unit 421-1 through pixel selecting unit 421-L
select, from pixels belonging to one horizontal row of pixels each above
and below the one horizontal row of pixels to which the pixel of interest
belongs, a predetermined distance away therefrom in the vertical
direction with the pixel of interest as a reference, the pixels closest
to the straight lines set for each, and select, from the one horizontal
row of pixels containing each selected pixel, pixels to the left side of
the selected pixel of a number corresponding to the range of angle of the
set line, pixels to the right side of the selected pixel of a number
corresponding to the range of angle of the set line, and the selected
pixel, as a pixel set.

[0890]That is to say, the pixel selecting unit 421-1 through pixel
selecting unit 421-L select pixels of a number corresponding to the range
of angle of the set line as pixel sets. The pixel selecting unit 421-1
through pixel selecting unit 421-L select pixel sets of a number
corresponding to the range of angle of the set line.

[0891]The pixel selecting unit 421-1 supplies the selected set of pixels
to the estimated error calculating unit 422-1, and the pixel selecting
unit 421-2 supplies the selected set of pixels to the estimated error
calculating unit 422-2. In the same way, each pixel selecting unit 421-3
through pixel selecting unit 421-L supplies the selected set of pixels to
each estimated error calculating unit 422-3 through estimated error
calculating unit 422-L.

[0892]The estimated error calculating unit 422-1 through estimated error
calculating unit 422-L detect the correlation of pixel values of the
pixels at corresponding positions in the multiple sets supplied from each
of the pixel selecting unit 421-1 through pixel selecting unit 421-L.

[0894]Next, the processing for data continuity detection with the data
continuity detecting unit 101 of which the configuration is shown in FIG.
81, corresponding to the processing in step S101, will be described with
reference to the flowchart shown in FIG. 93.

[0895]The processing of step S421 and step S422 is the same as the
processing of step S401 and step S402, so description thereof will be
omitted.

[0896]In step S423, the data selecting unit 402 selects, from a row of
pixels containing a pixel of interest, a number of pixels predetermined
with regard to the range of the angle which are centered on the pixel of
interest, as a set of pixels, for each angle of a range corresponding to
the activity detected in the processing in step S422. For example, the
data selecting unit 402 selects from pixels belonging to one vertical or
horizontal row of pixels, pixels of a number determined by the range of
angle, for the angle of the straight line to be set, above or to the left
of the pixel of interest, below or to the right of the pixel of interest,
and the pixel of interest, as a pixel set.

[0897]In step S424, the data selecting unit 402 selects, from pixel rows
of a number determined according to the range of angle, pixels of a
number determined according to the range of angle, as a pixel set, for
each predetermined angle range, based on the activity detected in the
processing in step S422. For example, the data selecting unit 402 sets a
straight line passing through the pixel of interest with an angle of a
predetermined range, taking an axis representing the spatial direction X
as a reference axis, selects a pixel closest to the straight line while
being distanced from the pixel of interest in the horizontal direction or
the vertical direction by a predetermined range according to the range of
angle of the straight line to be set, and selects pixels of a number
corresponding to the range of angle of the straight line to be set from
above or to the left side of the selected pixel, pixels of a number
corresponding to the range of angle of the straight line to be set from
below or to the right side of the selected pixel, and the selected pixel
closest to the line, as a pixel set. The data selecting unit 402 selects
a set of pixels for each angle.

[0899]In step S425, the error estimating unit 403 calculates the
correlation between the pixel set centered on the pixel of interest, and
the pixel set selected for each angle. For example, the error estimating
unit 403 calculates the sum of absolute values of difference between the
pixel values of pixels of the set including the pixel of interest and the
pixel values of pixels at corresponding positions in the other sets, and
divides the sum of absolute values of difference between the pixel values
by the number of pixels belonging to the other sets, thereby calculating
the correlation.

[0900]An arrangement may be made wherein the data continuity angle is
detected based on the mutual correlation between the pixel sets selected
for each angle.

[0902]The processing of step S426 and step S427 is the same as the
processing of step S406 and step S407, so description thereof will be
omitted.

[0903]Thus, the data continuity detecting unit 101 can detect the angle of
data continuity based on a reference axis in the image data,
corresponding to the lost actual world 1 light signal continuity, more
accurately and precisely. With the data continuity detecting unit 101 of
which the configuration is shown in FIG. 81, the correlation of a greater
number of pixels where the fine line image has been projected can be
evaluated particularly in the event that the data continuity angle is
around 45 degrees, so the angle of data continuity can be detected with
higher precision.

[0904]Note that an arrangement may be made with the data continuity
detecting unit 101 of which the configuration is shown in FIG. 81 as
well, wherein activity in the spatial direction of the input image is
detected with regard to the pixel of interest in the frame of interest,
pixel sets of a number determined according to the angle range, each made
up of pixels of a number corresponding to the angle range in one vertical
row or one horizontal row, are extracted from the frame of interest and
frames previous to or following the frame of interest time-wise, for each
angle and movement vector based on the pixel of interest and the
reference axis in the spatial direction, according to the detected
activity, the correlation of the extracted pixel sets is detected, and
the data continuity angle in the time direction and the spatial direction
in the input image is detected based on the correlation.

[0906]With the data continuity detecting unit 101 of which the
configuration is shown in FIG. 94, with regard to a pixel of interest
which is the pixel of interest, a block made up of a predetermined number
of pixels centered on the pixel of interest, and multiple blocks each
made up of a predetermined number of pixels around the pixel of interest,
are extracted, the correlation of the block centered on the pixel of
interest and the surrounding blocks is detected, and the angle of data
continuity in the input image based on a reference axis is detected,
based on the correlation.

[0907]A data selecting unit 441 sequentially selects the pixel of interest
from the pixels of the input image, extracts the block made of the
predetermined number of pixels centered on the pixel of interest and the
multiple blocks made up of the predetermined number of pixels surrounding
the pixel of interest, and supplies the extracted blocks to an error
estimating unit 442.

[0908]For example, the data selecting unit 441 extracts a block made up of
5×5 pixels centered on the pixel of interest, and two blocks made
up of 5×5 pixels from the surroundings of the pixel of interest for
each predetermined angle range based on the pixel of interest and the
reference axis.

[0909]The error estimating unit 442 detects the correlation between the
block centered on the pixel of interest and the blocks in the
surroundings of the pixel of interest supplied from the data
selecting unit 441, and supplies correlation information indicating the
detected correlation to a continuity direction derivation unit 443.

[0910]For example, for each angle range, the error estimating unit 442
detects the correlation of pixel values between the block made up of
5×5 pixels centered on the pixel of interest and the two blocks
made up of 5×5 pixels corresponding to that angle range.

[0911]From the position of the block in the surroundings of the pixel of
interest with the greatest correlation based on the correlation
information supplied from the error estimating unit 442, the continuity
direction derivation unit 443 detects the angle of data continuity in the
input image based on the reference axis, that corresponds to the lost
actual world 1 light signal continuity, and outputs data continuity
information indicating this angle. For example, the continuity direction
derivation unit 443 detects the range of the angle regarding the two
blocks made up of 5×5 pixels from the surroundings of the pixel of
interest which have the greatest correlation with the block made up of
5×5 pixels centered on the pixel of interest, as the angle of data
continuity, based on the correlation information supplied from the error
estimating unit 442, and outputs data continuity information indicating
the detected angle.

[0912]FIG. 95 is a block diagram illustrating a more detailed
configuration of the data continuity detecting unit 101 shown in FIG. 94.

[0915]Each of the pixel selecting unit 461-1 through pixel selecting unit
461-L extracts a block made up of a predetermined number of pixels
centered on the pixel of interest, and two blocks made up of a
predetermined number of pixels according to a predetermined angle range
based on the pixel of interest and the reference axis.

[0916]FIG. 96 is a diagram for describing an example of a 5×5 pixel
block extracted by the pixel selecting unit 461-1 through pixel selecting
unit 461-L. The center position in FIG. 96 indicates the position of the
pixel of interest.

[0917]Note that a 5×5 pixel block is only an example, and the number
of pixels contained in a block does not restrict the present invention.

[0918]For example, the pixel selecting unit 461-1 extracts a 5×5
pixel block centered on the pixel of interest, and also extracts a
5×5 pixel block (indicated by A in FIG. 96) centered on a pixel at
a position shifted five pixels to the right side from the pixel of
interest, and extracts a 5×5 pixel block (indicated by A' in FIG.
96) centered on a pixel at a position shifted five pixels to the left
side from the pixel of interest, corresponding to 0 degrees to 18.4
degrees and 161.6 degrees to 180.0 degrees. The pixel selecting unit
461-1 supplies the three extracted 5×5 pixel blocks to the
estimated error calculating unit 462-1.

[0919]The pixel selecting unit 461-2 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by B in FIG. 96) centered on a pixel at a position
shifted 10 pixels to the right side from the pixel of interest and five
pixels upwards, and extracts a 5×5 pixel block (indicated by B' in
FIG. 96) centered on a pixel at a position shifted 10 pixels to the left
side from the pixel of interest and five pixels downwards, corresponding
to the range of 18.4 degrees through 33.7 degrees. The pixel selecting
unit 461-2 supplies the three extracted 5×5 pixel blocks to the
estimated error calculating unit 462-2.

[0920]The pixel selecting unit 461-3 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by C in FIG. 96) centered on a pixel at a position
shifted five pixels to the right side from the pixel of interest and five
pixels upwards, and extracts a 5×5 pixel block (indicated by C' in
FIG. 96) centered on a pixel at a position shifted five pixels to the
left side from the pixel of interest and five pixels downwards,
corresponding to the range of 33.7 degrees through 56.3 degrees. The
pixel selecting unit 461-3 supplies the three extracted 5×5 pixel
blocks to the estimated error calculating unit 462-3.

[0921]The pixel selecting unit 461-4 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by D in FIG. 96) centered on a pixel at a position
shifted five pixels to the right side from the pixel of interest and 10
pixels upwards, and extracts a 5×5 pixel block (indicated by D' in
FIG. 96) centered on a pixel at a position shifted five pixels to the
left side from the pixel of interest and 10 pixels downwards,
corresponding to the range of 56.3 degrees through 71.6 degrees. The
pixel selecting unit 461-4 supplies the three extracted 5×5 pixel
blocks to the estimated error calculating unit 462-4.

[0922]The pixel selecting unit 461-5 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by E in FIG. 96) centered on a pixel at a position
shifted five pixels upwards from the pixel of interest, and extracts a
5×5 pixel block (indicated by E' in FIG. 96) centered on a pixel at
a position shifted five pixels downwards from the pixel of interest,
corresponding to the range of 71.6 degrees through 108.4 degrees. The
pixel selecting unit 461-5 supplies the three extracted 5×5 pixel
blocks to the estimated error calculating unit 462-5.

[0923]The pixel selecting unit 461-6 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by F in FIG. 96) centered on a pixel at a position
shifted five pixels to the left side from the pixel of interest and 10
pixels upwards, and extracts a 5×5 pixel block (indicated by F' in
FIG. 96) centered on a pixel at a position shifted five pixels to the
right side from the pixel of interest and 10 pixels downwards,
corresponding to the range of 108.4 degrees through 123.7 degrees. The
pixel selecting unit 461-6 supplies the three extracted 5×5 pixel
blocks to the estimated error calculating unit 462-6.

[0924]The pixel selecting unit 461-7 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by G in FIG. 96) centered on a pixel at a position
shifted five pixels to the left side from the pixel of interest and five
pixels upwards, and extracts a 5×5 pixel block (indicated by G' in
FIG. 96) centered on a pixel at a position shifted five pixels to the
right side from the pixel of interest and five pixels downwards,
corresponding to the range of 123.7 degrees through 146.3 degrees. The
pixel selecting unit 461-7 supplies the three extracted 5×5 pixel
blocks to the estimated error calculating unit 462-7.

[0925]The pixel selecting unit 461-8 extracts a 5×5 pixel block
centered on the pixel of interest, and also extracts a 5×5 pixel
block (indicated by H in FIG. 96) centered on a pixel at a position
shifted 10 pixels to the left side from the pixel of interest and five
pixels upwards, and extracts a 5×5 pixel block (indicated by H' in
FIG. 96) centered on a pixel at a position shifted 10 pixels to the right
side from the pixel of interest and five pixels downwards, corresponding
to the range of 146.3 degrees through 161.6 degrees. The pixel selecting
unit 461-8 supplies the three extracted 5×5 pixel blocks to the
estimated error calculating unit 462-8.
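The eight extraction patterns just listed can be summarized by the center offsets of the two reference blocks for each angle range. The small Python table below restates those offsets as (pixels to the right, pixels upward) from the pixel of interest; the dictionary form itself is only an illustrative rendering of the description above.

```python
# (low angle, high angle) in degrees -> (offset of first block, offset of second block),
# each offset given as (pixels to the right, pixels upward) from the pixel of interest.
REFERENCE_BLOCK_OFFSETS = {
    (0.0, 18.4):    ((5, 0),    (-5, 0)),    # A, A'  (also 161.6 to 180.0 degrees)
    (18.4, 33.7):   ((10, 5),   (-10, -5)),  # B, B'
    (33.7, 56.3):   ((5, 5),    (-5, -5)),   # C, C'
    (56.3, 71.6):   ((5, 10),   (-5, -10)),  # D, D'
    (71.6, 108.4):  ((0, 5),    (0, -5)),    # E, E'
    (108.4, 123.7): ((-5, 10),  (5, -10)),   # F, F'
    (123.7, 146.3): ((-5, 5),   (5, -5)),    # G, G'
    (146.3, 161.6): ((-10, 5),  (10, -5)),   # H, H'
}
```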

[0926]Hereafter, a block made up of a predetermined number of pixels
centered on the pixel of interest will be called a block of interest.

[0927]Hereafter, a block made up of a predetermined number of pixels
corresponding to a predetermined range of angle based on the pixel of
interest and reference axis will be called a reference block.

[0928]In this way, the pixel selecting unit 461-1 through pixel selecting
unit 461-8 extract a block of interest and reference blocks from a range
of 25×25 pixels, centered on the pixel of interest, for example.

[0930]For example, the estimated error calculating unit 462-1 calculates
the absolute value of difference between the pixel values of the pixels
contained in the block of interest and the pixel values of the pixels
contained in the reference block, with regard to the block of interest
made up of 5×5 pixels centered on the pixel of interest, and the
5×5 pixel reference block centered on a pixel at a position shifted
five pixels to the right side from the pixel of interest, extracted
corresponding to 0 degrees to 18.4 degrees and 161.6 degrees to 180.0
degrees.

[0931]In this case, as shown in FIG. 97, in order for the pixel value of
the pixel of interest to be used in the calculation of the absolute value
of difference of pixel values, with the position where the center pixel
of the block of interest and the center pixel of the reference block
overlap as a reference, the estimated error calculating unit 462-1
calculates the absolute value of difference of pixel values of pixels at
overlapping positions in the event that the position of the block of
interest is shifted to any one of two pixels to the left side through two
pixels to the right side and any one of two pixels upwards through two
pixels downwards as to the reference block. This means that the absolute
values of difference of the pixel values of pixels at corresponding
positions are calculated for 25 relative positions of the block of
interest and the reference block. In other words, in a case wherein the
absolute values of difference of the pixel values are calculated, the
range covered by the relatively moved block of interest and the reference
block is 9×9 pixels.
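
To make the 25-position comparison concrete, the following Python sketch
(not part of the embodiment; the function name and use of numpy arrays
are assumptions for illustration) accumulates the absolute differences of
overlapping pixels while a 5×5 block of interest is displaced from two
pixels left through two pixels right and two pixels up through two pixels
down relative to a 5×5 reference block.

```python
import numpy as np

def shifted_sad(block_of_interest: np.ndarray, reference_block: np.ndarray) -> float:
    """Sum of |difference| over the 25 relative positions of two 5x5 blocks (a 9x9 span)."""
    assert block_of_interest.shape == (5, 5) and reference_block.shape == (5, 5)
    total = 0.0
    for dy in range(-2, 3):              # vertical shift of the block of interest
        for dx in range(-2, 3):          # horizontal shift of the block of interest
            for y in range(5):
                for x in range(5):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry < 5 and 0 <= rx < 5:   # pixels that overlap for this shift
                        total += abs(float(block_of_interest[y, x])
                                     - float(reference_block[ry, rx]))
    return total
```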

[0932]In FIG. 97, the squares represent pixels, A represents the reference
block, and B represents the block of interest. In FIG. 97, the heavy
lines indicate the pixel of interest. That is to say, FIG. 97 is a
diagram illustrating a case wherein the block of interest has been
shifted two pixels to the right side and one pixel upwards, as to the
reference block.

[0933]Further, the estimated error calculating unit 462-1 calculates the
absolute value of difference between the pixel values of the pixels
contained in the block of interest and the pixel values of the pixels
contained in the reference block, with regard to the block of interest
made up of 5×5 pixels centered on the pixel of interest, and the
5×5 pixel reference block centered on a pixel at a position shifted
five pixels to the left side from the pixel of interest, extracted
corresponding to 0 degrees to 18.4 degrees and 161.6 degrees to 180.0
degrees.

[0934]The estimated error calculating unit 462-1 then obtains the sum of
the absolute values of difference that have been calculated, and supplies
the sum of the absolute values of difference to the smallest error angle
selecting unit 463 as correlation information indicating correlation.

[0935]The estimated error calculating unit 462-2 calculates the absolute
value of difference between the pixel values with regard to the block of
interest made up of 5×5 pixels and the two 5×5 reference
pixel blocks extracted corresponding to the range of 18.4 degrees to 33.7
degrees, and further calculates the sum of the absolute values of difference
that have been calculated. The estimated error calculating unit 462-2
supplies the sum of the absolute values of difference that has been
calculated to the smallest error angle selecting unit 463 as correlation
information indicating correlation.

[0936]In the same way, the estimated error calculating unit 462-3 through
estimated error calculating unit 462-8 calculate the absolute value of
difference between the pixel values with regard to the block of interest
made up of 5×5 pixels and the two 5×5 pixel reference blocks
extracted corresponding to the predetermined angle ranges, and further
calculate the sum of the absolute values of difference that have been
calculated. The estimated error calculating unit 462-3 through estimated
error calculating unit 462-8 each supply the sum of the absolute values
of difference to the smallest error angle selecting unit 463 as
correlation information indicating correlation.

[0937]The smallest error angle selecting unit 463 detects, as the data
continuity angle, the angle corresponding to the two reference blocks at
the reference block position where, of the sums of the absolute values of
difference of pixel values serving as correlation information supplied
from the estimated error calculating unit 462-1 through estimated error
calculating unit 462-8, the smallest value indicating the strongest
correlation has been obtained, and outputs data continuity information
indicating the detected angle.
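
As a rough illustration of this selection, the following sketch (the
names and data layout are assumptions, not part of the embodiment) picks,
from the eight per-range sums of absolute differences, the angle range
whose sum is smallest, i.e., whose reference blocks correlate most
strongly with the block of interest.

```python
ANGLE_RANGES = [
    (0.0, 18.4),      # the unit handling 0-18.4 degrees also covers 161.6-180.0 degrees
    (18.4, 33.7),
    (33.7, 56.3),
    (56.3, 71.6),
    (71.6, 108.4),
    (108.4, 123.7),
    (123.7, 146.3),
    (146.3, 161.6),
]

def select_angle_range(sums_of_abs_diff):
    """Pick the range whose reference blocks give the smallest sum (strongest correlation)."""
    best = min(range(len(sums_of_abs_diff)), key=lambda i: sums_of_abs_diff[i])
    return ANGLE_RANGES[best]
```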

[0938]Now, description will be made regarding the relationship between the
position of the reference blocks and the range of angle of data
continuity.

[0939]In a case of approximating an approximation function f(x) for
approximating actual world signals with an n-order one-dimensional
polynomial, the approximation function f(x) can be expressed by
Expression (30).

f(x) = w0 + w1·x + w2·x^2 + . . . + wn·x^n (30)

[0940]In the event that the waveform of the signal of the actual world 1
approximated by the approximation function f(x) has a certain gradient
(angle) as to the spatial direction Y, the approximation function f(x, y)
for approximating actual world 1 signals is expressed by Expression (31)
which has been obtained by taking x in Expression (30) as x+γy.

f(x, y) = w0 + w1·(x+γy) + w2·(x+γy)^2 + . . . + wn·(x+γy)^n (31)

[0941]γ represents the ratio of change in position in the spatial
direction X as to the change in position in the spatial direction Y.
Hereafter, γ will also be called amount of shift.

[0942]FIG. 98 is a diagram illustrating the distance to a straight line
having an angle θ in the spatial direction X from the position of
surrounding pixels of the pixel of interest in a case wherein the
distance in the spatial direction X between the position of the pixel of
interest and the straight line having the angle θ is 0, i.e.,
wherein the straight line passes through the pixel of interest. Here, the
position of the pixel is the center of the pixel. Also, in the event that
the position is to the left side of the straight line, the distance
between the position and the straight line is indicated by a negative
value, and in the event that the position is to the right side of the
straight line, is indicated by a positive value.

[0943]For example, the distance in the spatial direction X between the
position of the pixel adjacent to the pixel of interest on the right
side, i.e., the position where the coordinate x in the spatial direction
X increases by 1, and the straight line having the angle θ, is 1,
and the distance in the spatial direction X between the position of the
pixel adjacent to the pixel of interest on the left side, i.e., the
position where the coordinate x in the spatial direction X decreases by
1, and the straight line having the angle θ, is -1. The distance in
the spatial direction X between the position of the pixel adjacent to the
pixel of interest above, i.e., the position where the coordinate y in the
spatial direction Y increases by 1, and the straight line having the
angle θ, is -γ, and the distance in the spatial direction X
between the position of the pixel adjacent to the pixel of interest
below, i.e., the position where the coordinate y in the spatial direction
Y decreases by 1, and the straight line having the angle θ, is
γ.

[0944]In the event that the angle θ exceeds 45 degrees but is
smaller than 90 degrees, and the amount of shift γ exceeds 0 but is
smaller than 1, the relational expression of γ=1/tan θ holds
between the amount of shift γ and the angle θ. FIG. 99 is a
diagram illustrating the relationship between the amount of shift γ
and the angle θ.
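
The boundary values used throughout this description can be checked
numerically from γ=1/tan θ; the short sketch below (illustrative only)
prints the amount of shift γ for the boundary angles.

```python
import math

for theta_deg in (45.0, 56.3, 71.6, 90.0):
    gamma = 0.0 if theta_deg == 90.0 else 1.0 / math.tan(math.radians(theta_deg))
    print(f"theta = {theta_deg:5.1f} deg -> gamma = {gamma:.3f}")
# 45 degrees gives gamma = 1, 56.3 degrees roughly 2/3, 71.6 degrees roughly 1/3,
# and 90 degrees gives gamma = 0.
```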

[0945]Now, let us take note of the change in distance in the spatial
direction X between the position of a pixel nearby the pixel of interest,
and the straight line which passes through the pixel of interest and has
the angle θ, as to change in the amount of shift γ.

[0946]FIG. 100 is a diagram illustrating the distance in the spatial
direction X between the position of a pixel nearby the pixel of interest
and the straight line which passes through the pixel of interest and has
the angle θ, as to the amount of shift γ. In FIG. 100, the
single-dot broken line which heads toward the upper right indicates the
distance in the spatial direction X between the position of a pixel
adjacent to the pixel of interest on the bottom side, and the straight
line, as to the amount of shift γ. The single-dot broken line which
heads toward the lower left indicates the distance in the spatial
direction X between the position of a pixel adjacent to the pixel of
interest on the top side, and the straight line, as to the amount of
shift γ.

[0947]In FIG. 100, the two-dot broken line which heads toward the upper
right indicates the distance in the spatial direction X between the
position of a pixel two pixels below the pixel of interest and one to the
left, and the straight line, as to the amount of shift γ; the
two-dot broken line which heads toward the lower left indicates the
distance in the spatial direction X between the position of a pixel two
pixels above the pixel of interest and one to the right, and the straight
line, as to the amount of shift γ.

[0948]In FIG. 100, the three-dot broken line which heads toward the upper
right indicates the distance in the spatial direction X between the
position of a pixel one pixel below the pixel of interest and one to the
left, and the straight line, as to the amount of shift γ; the
three-dot broken line which heads toward the lower left indicates the
distance in the spatial direction X between the position of a pixel one
pixel above the pixel of interest and one to the right, and the straight
line, as to the amount of shift γ.

[0949]The pixel with the smallest distance as to the amount of shift
γ can be found from FIG. 100.

[0950]That is to say, in the event that the amount of shift γ is 0
through 1/3, the distance to the straight line is minimal from a pixel
adjacent to the pixel of interest on the top side and from a pixel
adjacent to the pixel of interest on the bottom side. That is to say, in
the event that the angle θ is 71.6 degrees to 90 degrees, the
distance to the straight line is minimal from the pixel adjacent to the
pixel of interest on the top side and from the pixel adjacent to the
pixel of interest on the bottom side.

[0951]In the event that the amount of shift γ is 1/3 through 2/3,
the distance to the straight line is minimal from a pixel two pixels
above the pixel of interest and one to the right and from a pixel two
pixels below the pixel of interest and one to the left. That is to say,
in the event that the angle θ is 56.3 degrees to 71.6 degrees, the
distance to the straight line is minimal from the pixel two pixels above
the pixel of interest and one to the right and from a pixel two pixels
below the pixel of interest and one to the left.

[0952]In the event that the amount of shift γ is 2/3 through 1, the
distance to the straight line is minimal from a pixel one pixel above the
pixel of interest and one to the right and from a pixel one pixel below
the pixel of interest and one to the left. That is to say, in the event
that the angle θ is 45 degrees to 56.3 degrees, the distance to the
straight line is minimal from the pixel one pixel above the pixel of
interest and one to the right and from a pixel one pixel below the pixel
of interest and one to the left.
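
The following illustrative helper (the candidate set and naming are
assumptions based on FIG. 100) returns, for a given amount of shift γ,
which of the three candidate pixels above the pixel of interest lies
nearest to the straight line in the spatial direction X, reproducing the
three cases above.

```python
def nearest_candidate(gamma: float) -> str:
    """Name of the candidate pixel whose center is nearest to the line, in the spatial direction X."""
    candidates = {
        "adjacent above":       (0, 1),   # pixel directly above the pixel of interest
        "one above, one right": (1, 1),
        "two above, one right": (1, 2),
    }
    # distance in X from a pixel at offset (dx, dy) to the line x = gamma * y is dx - gamma * dy
    return min(candidates, key=lambda name: abs(candidates[name][0] - gamma * candidates[name][1]))

for gamma in (0.2, 0.5, 0.8):
    print(gamma, "->", nearest_candidate(gamma))
# 0.2 -> adjacent above, 0.5 -> two above and one right, 0.8 -> one above and one right
```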

[0953]The relationship between the straight line in a range of angle
θ from 0 degrees to 45 degrees and a pixel can also be considered
in the same way.

[0954]The pixels shown in FIG. 98 can be replaced with the block of
interest and reference block, to consider the distance in the spatial
direction X between the reference block and the straight line.

[0955]FIG. 101 shows the reference blocks wherein the distance to the
straight line which passes through the pixel of interest and has an angle
θ as to the axis of the spatial direction X is the smallest.

[0956]A through H and A' through H' in FIG. 101 represent the reference
blocks A through H and A' through H' in FIG. 96.

[0957]That is to say, of the distances in the spatial direction X between
a straight line having an angle θ which is any of 0 degrees through
18.4 degrees and 161.6 degrees through 180.0 degrees which passes through
the pixel of interest with the axis of the spatial direction X as a
reference, and each of the reference blocks A through H and A' through
H', the distance between the straight line and the reference blocks A and
A' is the smallest. Accordingly, following reverse logic, in the event
that the correlation between the block of interest and the reference
blocks A and A' is the greatest, this means that a certain feature is
repeatedly manifested in the direction connecting the block of interest
and the reference blocks A and A', so it can be said that the angle of
data continuity is within the ranges of 0 degrees through 18.4 degrees
and 161.6 degrees through 180.0 degrees.

[0958]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 18.4 degrees through 33.7 degrees
which passes through the pixel of interest with the axis of the spatial
direction X as a reference, and each of the reference blocks A through H
and A' through H', the distance between the straight line and the
reference blocks B and B' is the smallest. Accordingly, following reverse
logic, in the event that the correlation between the block of interest
and the reference blocks B and B' is the greatest, this means that a
certain feature is repeatedly manifested in the direction connecting the
block of interest and the reference blocks B and B', so it can be said
that the angle of data continuity is within the range of 18.4 degrees
through 33.7 degrees.

[0959]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 33.7 degrees through 56.3 degrees
which passes through the pixel of interest with the axis of the spatial
direction X as a reference, and each of the reference blocks A through H
and A' through H', the distance between the straight line and the
reference blocks C and C' is the smallest. Accordingly, following reverse
logic, in the event that the correlation between the block of interest
and the reference blocks C and C' is the greatest, this means that a
certain feature is repeatedly manifested in the direction connecting the
block of interest and the reference blocks C and C', so it can be said
that the angle of data continuity is within the range of 33.7 degrees
through 56.3 degrees.

[0960]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 56.3 degrees through 71.6 degrees
which passes through the pixel of interest with the axis of the spatial
direction X as a reference, and each of the reference blocks A through H
and A' through H', the distance between the straight line and the
reference blocks D and D' is the smallest. Accordingly, following reverse
logic, in the event that the correlation between the block of interest
and the reference blocks D and D' is the greatest, this means that a
certain feature is repeatedly manifested in the direction connecting the
block of interest and the reference blocks D and D', so it can be said
that the angle of data continuity is within the range of 56.3 degrees
through 71.6 degrees.

[0961]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 71.6 degrees through 108.4
degrees which passes through the pixel of interest with the axis of the
spatial direction X as a reference, and each of the reference blocks A
through H and A' through H', the distance between the straight line and
the reference blocks E and E' is the smallest. Accordingly, following
reverse logic, in the event that the correlation between the block of
interest and the reference blocks E and E' is the greatest, this means
that a certain feature is repeatedly manifested in the direction
connecting the block of interest and the reference blocks E and E', so it
can be said that the angle of data continuity is within the range of 71.6
degrees through 108.4 degrees.

[0962]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 108.4 degrees through 123.7
degrees which passes through the pixel of interest with the axis of the
spatial direction X as a reference, and each of the reference blocks A
through H and A' through H', the distance between the straight line and
the reference blocks F and F' is the smallest. Accordingly, following
reverse logic, in the event that the correlation between the block of
interest and the reference blocks F and F' is the greatest, this means
that a certain feature is repeatedly manifested in the direction
connecting the block of interest and the reference blocks F and F', so it
can be said that the angle of data continuity is within the range of
108.4 degrees through 123.7 degrees.

[0963]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 123.7 degrees through 146.3
degrees which passes through the pixel of interest with the axis of the
spatial direction X as a reference, and each of the reference blocks A
through H and A' through H', the distance between the straight line and
the reference blocks G and G' is the smallest. Accordingly, following
reverse logic, in the event that the correlation between the block of
interest and the reference blocks G and G' is the greatest, this means
that a certain feature is repeatedly manifested in the direction
connecting the block of interest and the reference blocks G and G', so it
can be said that the angle of data continuity is within the range of
123.7 degrees through 146.3 degrees.

[0964]Of the distances in the spatial direction X between a straight line
having an angle θ which is any of 146.3 degrees through 161.6
degrees which passes through the pixel of interest with the axis of the
spatial direction X as a reference, and each of the reference blocks A
through H and A' through H', the distance between the straight line and
the reference blocks H and H' is the smallest. Accordingly, following
reverse logic, in the event that the correlation between the block of
interest and the reference blocks H and H' is the greatest, this means
that a certain feature is repeatedly manifested in the direction
connecting the block of interest and the reference blocks H and H', so it
can be said that the angle of data continuity is within the range of
146.3 degrees through 161.6 degrees.

[0965]Thus, the data continuity detecting unit 101 can detect the data
continuity angle based on the correlation between the block of interest
and the reference blocks.

[0966]Note that with the data continuity detecting unit 101 of which the
configuration is shown in FIG. 94, an arrangement may be made wherein the
angle range of data continuity is output as data continuity information,
or an arrangement may be made wherein a representative value representing
the range of angle of the data continuity is output as data continuity
information. For example, the median value of the range of angle of the
data continuity may serve as a representative value.

[0967]Further, with the data continuity detecting unit 101 of which the
configuration is shown in FIG. 94, using the correlations, as to the block
of interest, of the reference blocks adjacent to the reference blocks
having the greatest correlation allows the angle range of data continuity
to be detected to be halved, i.e., the resolution of the detected angle of
data continuity to be doubled.

[0968]For example, when the correlation between the block of interest and
the reference blocks E and E' is the greatest, the smallest error angle
selecting unit 463 compares the correlation of the reference blocks D and
D' as to the block of interest with the correlation of the reference
blocks F and F' as to the block of interest, as shown in FIG. 102. In the
event that the correlation of the reference blocks D and D' as to the
block of interest is greater than the correlation of the reference blocks
F and F' as to the block of interest, the smallest error angle selecting
unit 463 sets the range of 71.6 degrees to 90 degrees for the data
continuity angle. Or, in this case, the smallest error angle selecting
unit 463 may set 81 degrees for the data continuity angle as a
representative value.

[0969]In the event that the correlation of the reference blocks F and F'
as to the block of interest is greater than the correlation of the
reference blocks D and D' as to the block of interest, the smallest error
angle selecting unit 463 sets the range of 90 degrees to 108.4 degrees
for the data continuity angle. Or, in this case, the smallest error angle
selecting unit 463 may set 99 degrees for the data continuity angle as a
representative value.

[0970]The smallest error angle selecting unit 463 can halve the range of
the data continuity angle to be detected for other angle ranges as well,
with the same processing.
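
A hypothetical sketch of this refinement (the data structures are assumed
for illustration): when the reference blocks E and E' give the strongest
correlation, the sums of absolute differences of the neighboring block
pairs decide which half of the 71.6-degree through 108.4-degree range, and
which representative value, is output.

```python
def refine_range_around_E(sums: dict):
    """sums maps a reference block pair label ('D', 'E', 'F', ...) to its sum of absolute differences."""
    # a smaller sum of absolute differences means a stronger correlation
    if sums["D"] < sums["F"]:
        return (71.6, 90.0), 81.0    # detected range and a representative value
    return (90.0, 108.4), 99.0
```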

[0971]The technique described with reference to FIG. 102 is also called
simplified 16-directional detection.

[0972]Thus, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 94 can detect the angle of data continuity
in narrower ranges, with simple processing.

[0973]Next, the processing for detecting data continuity with the data
continuity detecting unit 101 of which the configuration is shown in FIG.
94, corresponding to the processing in step S101, will be described with
reference to the flowchart shown in FIG. 103.

[0974]In step S441, the data selecting unit 441 selects the pixel of
interest which is a pixel of interest from the input image. For example,
the data selecting unit 441 selects the pixel of interest in raster scan
order from the input image.

[0975]In step S442, the data selecting unit 441 selects a block of
interest made up of a predetermined number of pixels centered on the
pixel of interest. For example, the data selecting unit 441 selects a
block of interest made up of 5×5 pixels centered on the pixel of
interest.

[0976]In step S443, the data selecting unit 441 selects reference blocks
made up of a predetermined number of pixels at predetermined positions at
the surroundings of the pixel of interest. For example, the data
selecting unit 441 selects reference blocks made up of 5×5 pixels
centered on pixels at predetermined positions based on the size of the
block of interest, for each predetermined angle range based on the pixel
of interest and the reference axis.

[0977]The data selecting unit 441 supplies the block of interest and the
reference blocks to the error estimating unit 442.

[0978]In step S444, the error estimating unit 442 calculates the
correlation between the block of interest and the reference blocks
corresponding to the range of angle, for each predetermined angle range
based on the pixel of interest and the reference axis. The error
estimating unit 442 supplies the correlation information indicating the
calculated correlation to the continuity direction derivation unit 443.

[0979]In step S445, the continuity direction derivation unit 443 detects
the angle of data continuity in the input image based on the reference
axis, corresponding to the lost actual world 1 light signal continuity,
from the position of the reference block which has the greatest
correlation as to the block of interest.

[0981]In step S446, the data selecting unit 441 determines whether or not
processing of all pixels has ended, and in the event that determination
is made that processing of all pixels has not ended, the flow returns to
step S441, a pixel of interest is selected from pixels not yet selected
as the pixel of interest, and the above-described processing is repeated.

[0982]In step S446, in the event that determination is made that
processing of all pixels has ended, the processing ends.
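
The overall flow of FIG. 103 might be sketched as follows, assuming a
grayscale image held as a two-dimensional numpy array, the shifted_sad
helper sketched earlier, and a caller-supplied table mapping each angle
range to the (dx, dy) center offsets of its two reference blocks; only
some of those offsets appear in the present excerpt, so the table itself
is purely illustrative.

```python
import numpy as np

def detect_angle_at(image: np.ndarray, px: int, py: int, reference_offsets: dict):
    """Detect the angle range at pixel (px, py); px, py are assumed far enough from the border."""
    block = image[py - 2:py + 3, px - 2:px + 3]             # 5x5 block of interest (step S442)
    best_range, best_err = None, float("inf")
    for angle_range, offsets in reference_offsets.items():  # steps S443 and S444
        err = 0.0
        for dx, dy in offsets:                               # the two reference block centers
            ref = image[py + dy - 2:py + dy + 3, px + dx - 2:px + dx + 3]
            err += shifted_sad(block, ref)
        if err < best_err:                                   # step S445: strongest correlation
            best_range, best_err = angle_range, err
    return best_range
```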

[0983]Thus, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 94 can detect the data continuity angle in
the image data based on the reference axis, corresponding to the lost
actual world 1 light signal continuity with easier processing. Also, the
data continuity detecting unit 101 of which the configuration is shown in
FIG. 94 can detect the angle of data continuity using pixel values of
pixels of a relatively narrow range in the input image, so the angle of
data continuity can be detected more accurately even in the event that
noise and the like is in the input image.

[0984]Note that an arrangement may be made with the data continuity
detecting unit 101 of which the configuration is shown in FIG. 94,
wherein, with regard to a pixel of interest which is the pixel of
interest in a frame of interest which is the frame of interest, in
addition to extracting a block centered on the pixel of interest and made
up of a predetermined number of pixels in the frame of interest, and
multiple blocks each made up of a predetermined number of pixels from the
surroundings of the pixel of interest, also extracting, from frames
previous to or following the frame of interest time-wise, a block
centered on a pixel at a position corresponding to the pixel of interest
and made up of a predetermined number of pixels, and multiple blocks each
made up of a predetermined number of pixels from the surroundings of the
pixel centered on the pixel corresponding to the pixel of interest, and
detecting the correlation between the block centered on the pixel of
interest and blocks in the surroundings thereof space-wise or time-wise,
so as to detect the angle of data continuity in the input image in the
temporal direction and spatial direction, based on the correlation.

[0985]For example, as shown in FIG. 104, the data selecting unit 441
sequentially selects the pixel of interest from the frame #n which is the
frame of interest, and extracts from the frame #n a block centered on the
pixel of interest and made up of a predetermined number of pixels and
multiple blocks each made up of a predetermined number of pixels from the
surroundings of the pixel of interest. Also, the data selecting unit 441
extracts from the frame #n-1 and frame #n+1 a block centered on the pixel
at a position corresponding to the position of the pixel of interest and
made up of a predetermined number of pixels and multiple blocks each made
up of a predetermined number of pixels from the surroundings of a pixel
at a position corresponding to the pixel of interest. The data selecting
unit 441 supplies the extracted blocks to the error estimating unit 442.

[0986]The error estimating unit 442 detects the correlation between the
block centered on the pixel of interest and the blocks in the
surroundings thereof space-wise or time-wise, supplied from the data
selecting unit 441, and supplies correlation information indicating the
detected correlation to the continuity direction derivation unit 443.
Based on the correlation information from the error estimating unit 442,
the continuity direction derivation unit 443 detects the angle of data
continuity in the input image in the space direction or time direction,
corresponding to the lost actual world 1 light signal continuity, from
the position of the block in the surroundings thereof space-wise or
time-wise which has the greatest correlation, and outputs the data
continuity information which indicates the angle.

[0988]FIG. 105 is a block diagram illustrating the configuration of the
data continuity detecting unit 101 for performing data continuity
detection processing based on component signals of the input image.

[0989]Each of the data continuity detecting units 481-1 through 481-3 has
the same configuration as the above-described or later-described data
continuity detecting unit 101, and executes the above-described or
later-described processing on the corresponding component signal of the
input image.

[0990]The data continuity detecting unit 481-1 detects the data continuity
based on the first component signal of the input image, and supplies
information indicating the continuity of the data detected from the first
component signal to a determining unit 482. For example, the data
continuity detecting unit 481-1 detects data continuity based on the
brightness signal of the input image, and supplies information indicating
the continuity of the data detected from the brightness signal to the
determining unit 482.

[0991]The data continuity detecting unit 481-2 detects the data continuity
based on the second component signal of the input image, and supplies
information indicating the continuity of the data detected from the
second component signal to the determining unit 482. For example, the
data continuity detecting unit 481-2 detects data continuity based on the
I signal which is a color difference signal of the input image, and
supplies information indicating the continuity of the data detected from
the I signal to the determining unit 482.

[0992]The data continuity detecting unit 481-3 detects the data continuity
based on the third component signal of the input image, and supplies
information indicating the continuity of the data detected from the third
component signal to the determining unit 482. For example, the data
continuity detecting unit 481-3 detects data continuity based on the Q
signal which is a color difference signal of the input image, and
supplies information indicating the continuity of the data detected from
the Q signal to the determining unit 482.

[0993]The determining unit 482 detects the final data continuity of the
input image based on the information indicating data continuity that has
been detected from each of the component signals supplied from the data
continuity detecting units 481-1 through 481-3, and outputs data
continuity information indicating the detected data continuity.

[0994]For example, the determining unit 482 takes as the final data
continuity the greatest data continuity of the data continuities detected
from each of the component signals supplied from the data continuity
detecting units 481-1 through 481-3. Or, the determining unit 482 takes as
the final data continuity the smallest data continuity of the data
continuities detected from each of the component signals supplied from
the data continuity detecting units 481-1 through 481-3.

[0995]Further, for example, the determining unit 482 takes as the final
data continuity the average data continuity of the data continuities
detected from each of the component signals supplied from the data
continuity detecting units 481-1 through 481-3. The determining unit 482
may be arranged so as to take as the final data continuity the median
(median value) of the data continuities detected from each of the
component signals supplied from the data continuity detecting units 481-1
through 481-3.
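
One possible reading of the median option, as a hedged sketch (the
function and its argument are assumptions, with the detected data
continuity angle used as the quantity being combined):

```python
import statistics

def combine_component_angles(angles):
    """angles: data continuity angles detected from the component signals (e.g. Y, I, Q)."""
    return statistics.median(angles)
```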

[0996]Also, for example, based on signals externally input, the
determining unit 482 takes as the final data continuity the data
continuity specified by the externally input signals, of the data
continuities detected from each of the component signals supplied from
each of the data continuity detecting units 481-1 through 481-3. The
determining unit 482 may be arranged so as to take as the final data
continuity a predetermined data continuity of the data continuities
detected from each of the component signals supplied from the data
continuity detecting units 481-1 through 481-3.

[0997]Moreover, the determining unit 482 may be arranged so as to determine
the final data continuity based on the error obtained in the processing
for detecting the data continuity of the component signals supplied from
the data continuity detecting units 481-1 through 481-3. The error which
can be obtained in the processing for data continuity detection will be
described later.

[0998]FIG. 106 is a diagram illustrating another configuration of the data
continuity detecting unit 101 for performing data continuity detection
based on component signals of the input image.

[0999]A component processing unit 491 generates one signal based on the
component signals of the input image, and supplies this to a data
continuity detecting unit 492. For example, the component processing unit
491 adds values of each of the component signals of the input image for a
pixel at the same position on the screen, thereby generating a signal
made up of the sum of the component signals.

[1000]For example, the component processing unit 491 averages the pixel
values in each of the component signals of the input image with regard to
a pixel at the same position on the screen, thereby generating a signal
made up of the average values of the pixel values of the component
signals.
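
A minimal sketch of this component processing, assuming the input image is
held as a (height, width, components) numpy array:

```python
import numpy as np

def combine_components(image: np.ndarray, mode: str = "average") -> np.ndarray:
    """image: (height, width, components) array; returns one signal value per pixel."""
    return image.sum(axis=2) if mode == "sum" else image.mean(axis=2)
```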

[1002]The data continuity detecting unit 492 has the same configuration as
the above-described or later-described data continuity detecting unit
101, and executes the above-described or later-described processing on
the signals supplied from the component processing unit 491.

[1003]Thus, the data continuity detecting unit 101 can detect data
continuity by detecting the data continuity of the input image based on
component signals, so the data continuity can be detected more accurately
even in the event that noise and the like is in the input image. For
example, the data continuity detecting unit 101 can detect data
continuity angle (gradient), mixture ratio, and regions having data
continuity more precisely, by detecting data continuity of the input
image based on component signals.

[1004]Note that the component signals are not restricted to brightness
signals and color difference signals, and may be other component signals
of other formats, such as RGB signals, YUV signals, and so forth.

[1005]As described above, with an arrangement wherein light signals of the
real world are projected, the angle as to the reference axis of the data
continuity corresponding to the dropped continuity of the real world light
signals is detected from the image data in which a part of the continuity
of the real world light signals has dropped out, and the light signals are
estimated by estimating the continuity of the real world light signals
that has dropped out based on the detected angle, processing results which
are more accurate and more precise can be obtained.

[1006]Also, with an arrangement wherein multiple sets are extracted of
pixel sets made up of a predetermined number of pixels for each angle
based on a pixel of interest which is the pixel of interest and the
reference axis in image data obtained by light signals of the real world
being projected on multiple detecting elements in which a part of the
continuity of the real world light signals has dropped out, the
correlation of the pixel values of pixels at corresponding positions in
multiple sets which have been extracted for each angle is detected, the
angle of data continuity in the image data, based on the reference axis,
corresponding to the real world light signal continuity which has dropped
out, is detected based on the detected correlation, and the light signals
are estimated by estimating the continuity of the real world light
signals that has dropped out, based on the detected angle of the data
continuity as to the reference axis in the image data, processing results
which are more accurate and more precise as to the real world events can
be obtained.

[1008]With the data continuity detecting unit 101 shown in FIG. 107, light
signals of the real world are projected, a region, corresponding to a
pixel of interest which is the pixel of interest in the image data of
which a part of the continuity of the real world light signals has
dropped out, is selected, and a score based on correlation value is set
for pixels wherein the correlation value of the pixel value of the pixel
of interest and the pixel value of a pixel belonging to a selected region
is equal to or greater than a threshold value, thereby detecting the
score of pixels belonging to the region, and a regression line is
detected based on the detected score, thereby detecting the data
continuity of the image data corresponding to the continuity of the real
world light signals which has dropped out.

[1009]Frame memory 501 stores input images in increments of frames, and
supplies the pixel values of the pixels making up stored frames to a
pixel acquiring unit 502. The frame memory 501 can supply pixel values of
pixels of frames of an input image which is a moving image to the pixel
acquiring unit 502, by storing the current frame of the input image in
one page, supplying the pixel values of the pixel of the frame one frame
previous (in the past) as to the current frame stored in another page to
the pixel acquiring unit 502, and switching pages at the switching
point-in-time of the frames of the input image.

[1010]The pixel acquiring unit 502 selects the pixel of interest which is
a pixel of interest based on the pixel values of the pixels supplied from
the frame memory 501, and selects a region made up of a predetermined
number of pixels corresponding to the selected pixel of interest. For
example, the pixel acquiring unit 502 selects a region made up of
5×5 pixels centered on the pixel of interest.

[1011]The size of the region which the pixel acquiring unit 502 selects
does not restrict the present invention.

[1012]The pixel acquiring unit 502 acquires the pixel values of the pixels
of the selected region, and supplies the pixel values of the pixels of
the selected region to a score detecting unit 503.

[1013]Based on the pixel values of the pixels of the selected region
supplied from the pixel acquiring unit 502, the score detecting unit 503
detects the score of pixels belonging to the region, by setting a score
based on correlation for pixels wherein the correlation value of the
pixel value of the pixel of interest and the pixel value of a pixel
belonging to the selected region is equal to or greater than a threshold
value. The details of processing for setting score based on correlation
at the score detecting unit 503 will be described later.

[1015]The regression line computing unit 504 computes a regression line
based on the score supplied from the score detecting unit 503. For
example, the regression line computing unit 504 computes a regression
line which is a straight line, based on the score supplied from the score
detecting unit 503. Also, the regression line computing unit 504 computes
a regression line which is a predetermined curve, based on the score
supplied from the score detecting unit 503. The regression line computing
unit 504 supplies computation result parameters indicating the computed
regression line and the results of computation to an angle calculating
unit 505. The computation results which the computation result parameters
indicate include later-described variation and covariation.

[1016]The angle calculating unit 505 detects the continuity of the data of
the input image which is image data, corresponding to the continuity of
the light signals of the real world that has dropped out, based on the
regression line indicated by the computation result parameters supplied
from the regression line computing unit 504. For example, based on the
regression line indicated by the computation result parameters supplied
from the regression line computing unit 504, the angle calculating unit
505 detects the angle of data continuity in the input image based on the
reference axis, corresponding to the dropped actual world 1 light signal
continuity. The angle calculating unit 505 outputs data continuity
information indicating the angle of the data continuity in the input
image based on the reference axis.

[1017]The angle of the data continuity in the input image based on the
reference axis will be described with reference to FIG. 108 through FIG.
110.

[1018]In FIG. 108, each circle represents a single pixel, and the double
circle represents the pixel of interest. The colors of the circles
schematically represent the pixel values of the pixels, with the lighter
colors indicating greater pixel values. For example, black represents a
pixel value of 30, while white indicates a pixel value of 120.

[1019]In the event that a person views the image made up of the pixels
shown in FIG. 108, the person who sees the image can recognize that a
straight line is extending in the diagonally upper right direction.

[1020]Upon inputting an input image made up of the pixels shown in FIG.
108, the data continuity detecting unit 101 of which the configuration is
shown in FIG. 107 detects that a straight line is extending in the
diagonally upper right direction.

[1021]FIG. 109 is a diagram illustrating the pixel values of the pixels
shown in FIG. 108 with numerical values. Each circle represents one
pixel, and the numerical values in the circles represent the pixel
values.

[1022]For example, the pixel value of the pixel of interest is 120, the
pixel value of the pixel above the pixel of interest is 100, and the
pixel value of the pixel below the pixel of interest is 100. Also, the
pixel value of the pixel to the left of the pixel of interest is 80, and
the pixel value of the pixel to the right of the pixel of interest is 80.
In the same way, the pixel value of the pixel to the lower left of the
pixel of interest is 100, and the pixel value of the pixel to the upper
right of the pixel of interest is 100. The pixel value of the pixel to
the upper left of the pixel of interest is 30, and the pixel value of the
pixel to the lower right of the pixel of interest is 30.

[1023]The data continuity detecting unit 101 of which the configuration is
shown in FIG. 107 plots a regression line A as to the input image shown
in FIG. 109, as shown in FIG. 110.

[1024]FIG. 111 is a diagram illustrating the relation between change in
pixel values in the input image as to the position of the pixels in the
spatial direction, and the regression line A. The pixel values of pixels
in the region having data continuity change in the form of a crest, for
example, as shown in FIG. 111.

[1025]The data continuity detecting unit 101 of which the configuration is
shown in FIG. 107 plots the regression line A by least-square, weighted
with the pixel values of the pixels in the region having data continuity.
The regression line A obtained by the data continuity detecting unit 101
represents the data continuity in the neighborhood of the pixel of
interest.

[1026]The angle of data continuity in the input image based on the
reference axis is detected by obtaining the angle θ between the
regression line A and an axis indicating the spatial direction X which is
the reference axis for example, as shown in FIG. 112.

[1027]Next, a specific method for calculating the regression line with the
data continuity detecting unit 101 of which the configuration is shown in
FIG. 107 will be described.

[1028]From the pixel values of pixels in a region made up of 9 pixels in
the spatial direction X and 5 pixels in the spatial direction Y for a
total of 45 pixels, centered on the pixel of interest, supplied from the
pixel acquiring unit 502, for example, the score detecting unit 503
detects the score corresponding to the coordinates of the pixels
belonging to the region.

[1029]For example, the score detecting unit 503 detects the score
Li,j of the coordinates (xi, yj) belonging to the region,
by calculating the score with the computation of Expression (32).

Li,j = (value given by an exponential function of |P0,0-Pi,j|) when |P0,0-Pi,j| ≦ Th; Li,j = 0 when |P0,0-Pi,j| > Th (32)

[1030]In Expression (32), P0,0 represents the pixel value of the
pixel of interest, and Pi,j represents the pixel values of the pixel
at the coordinates (xi, yj). Th represents a threshold value.

[1031]i represents the order of the pixel in the spatial direction X in
the region wherein 1≦i≦k. j represents the order of the
pixel in the spatial direction Y in the region wherein
1≦j≦l.

[1032]k represents the number of pixels in the spatial direction X in the
region, and l represents the number of pixels in the spatial direction Y
in the region. For example, in the event of a region made up of 9 pixels
in the spatial direction X and 5 pixels in the spatial direction Y for a
total of 45 pixels, k is 9 and l is 5.

[1033]FIG. 113 is a diagram illustrating an example of a region acquired
by the pixel acquiring unit 502. In FIG. 113, the dotted squares each
represent one pixel.

[1034]For example, as shown in FIG. 113, in the event that the region is
made up of 9 pixels centered on the pixel of interest in the spatial
direction X, and is made up of 5 pixels centered on the pixel of interest
in the spatial direction Y, with the coordinates (x, y) of the pixel of
interest being (0, 0), the coordinates (x, y) of the pixel at the upper
left of the region are (-4, 2), the coordinates (x, y) of the pixel at
the upper right of the region are (4, 2), the coordinates (x, y) of the
pixel at the lower left of the region are (-4, -2), and the coordinates
(x, y) of the pixel at the lower right of the region are (4, -2).

[1035]The order i of the pixels at the left side of the region in the
spatial direction X is 1, and the order i of the pixels at the right side
of the region in the spatial direction X is 9. The order j of the pixels
at the lower side of the region in the spatial direction Y is 1, and the
order j of the pixels at the upper side of the region in the spatial
direction Y is 5.

[1036]That is to say, with the coordinates (x5, y3) of the pixel
of interest as (0, 0), the coordinates (x1, y5) of the pixel at
the upper left of the region are (-4, 2), the coordinates (x9,
y5) of the pixel at the upper right of the region are (4, 2), the
coordinates (x1, y1) of the pixel at the lower left of the
region are (-4, -2), and the coordinates (x9, y1) of the pixel
at the lower right of the region are (4, -2).

[1037]The score detecting unit 503 calculates the absolute values of
difference of the pixel value of the pixel of interest and the pixel
values of the pixels belonging to the region as a correlation value with
Expression (32), so this is not restricted to a region having data
continuity in the input image where a fine line image of the actual world
1 has been projected; rather, a score can be detected representing the
feature of spatial change of pixel values in the region of the input
image having two-valued edge data continuity, wherein an image of an
object in the actual world 1 having a straight edge and which is of a
monotone color different from that of the background has been projected.

[1038]Note that the score detecting unit 503 is not restricted to the
absolute values of difference of the pixel values of pixels, and may be
arranged to detect the score based on other correlation values such as
correlation coefficients and so forth.

[1039]Also, the reason that an exponential function is applied in
Expression (32) is to exaggerate difference in score as to difference in
pixel values, and an arrangement may be made wherein other functions are
applied.

[1040]The threshold value Th may be an optional value. For example, the
threshold value Th may be 30.
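
Since Expression (32) itself is not reproduced here, the following sketch
only mirrors its structure as described above: pixels of the region whose
absolute difference from the pixel of interest is within the threshold Th
receive a nonzero score that grows as the difference shrinks, and all
other pixels receive a score of 0. The particular scoring function used
below is an assumption, not Expression (32).

```python
import numpy as np

def detect_scores(region: np.ndarray, th: float = 30.0) -> np.ndarray:
    """region: pixel values of the selected region, with the pixel of interest at its center."""
    p0 = float(region[region.shape[0] // 2, region.shape[1] // 2])
    diff = np.abs(region.astype(float) - p0)
    # assumed scoring (not Expression (32)): nonzero only when the difference is within Th
    return np.where(diff <= th, th - diff, 0.0)
```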

[1041]Thus, the score detecting unit 503 sets a score, based on the
correlation value, for pixels wherein the correlation value between the
pixel value of the pixel of interest and the pixel value of a pixel
belonging to the selected region is equal to or greater than a threshold
value, and thereby detects the score of the pixels belonging to the
region.

[1042]Also, the score detecting unit 503 performs the computation of
Expression (33), thereby calculating the score, whereby the score
Li,j of the coordinates (xi, yj) belonging to the region
is detected.

##EQU00020##

[1043]With the score of the coordinates (xi, yj) as
Li,j (1≦i≦k, 1≦j≦l), the sum qi of
the score Li,j of the coordinate xi in the spatial direction Y
is expressed by Expression (34), and the sum hj of the score
Li,j of the coordinate yj in the spatial direction X is
expressed by Expression (35).

qi = Σ(j=1 to l) Li,j (34)
hj = Σ(i=1 to k) Li,j (35)

[1044]The summation u of the scores is expressed by Expression (36).

u = Σ(i=1 to k) Σ(j=1 to l) Li,j (36)

[1045]In the example shown in FIG. 113, the score L5,3 of the
coordinate of the pixel of interest is 3, the score L5,4 of the
coordinate of the pixel above the pixel of interest is 1, the score
L6,4 of the coordinate of the pixel to the upper right of the pixel
of interest is 4, the score L6,5 of the coordinate of the pixel two
pixels above and one pixel to the right of the pixel of interest is 2,
and the score L7,5 of the coordinate of the pixel two pixels above
and two pixels to the right of the pixel of interest is 3. Also, the
score L5,2 of the coordinate of the pixel below the pixel of
interest is 2, the score L4,3 of the coordinate of the pixel to the
left of the pixel of interest is 1, the score L4,2 of the coordinate
of the pixel to the lower left of the pixel of interest is 3, the score
L3,2 of the coordinate of the pixel one pixel below and two pixels
to the left of the pixel of interest is 2, and the score L3,1 of the
coordinate of the pixel two pixels below and two pixels to the left of
the pixel of interest is 4. The score of all other pixels in the region
shown in FIG. 113 is 0, and description of pixels which have a score of 0
are omitted from FIG. 113.

[1046]In the region shown in FIG. 113, the sum q1 of the scores in
the spatial direction Y is 0, since all scores L wherein i is 1 are 0,
and q2 is 0 since all scores L wherein i is 2 are 0. q3 is 6
since L3,2 is 2 and L3,1 is 4. In the same way, q4 is 4,
q5 is 6, q6 is 6, q7 is 3, q8 is 0, and q9 is 0.

[1047]In the region shown in FIG. 113, the sum h1 of the scores in
the spatial direction X is 4, since L3,1 is 4. h2 is 7 since
L3,2 is 2, L4,2 is 3, and L5,2 is 2. In the same way,
h3 is 4, h4 is 5, and h5 is 5.

[1048]In the region shown in FIG. 113, the summation u of scores is 25.
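
The sums above can be reproduced with the following worked check, which
lists the nonzero scores of FIG. 113 keyed by (x, y) coordinates relative
to the pixel of interest (the data layout is illustrative only):

```python
# nonzero scores of FIG. 113, keyed by (x, y) with the pixel of interest at (0, 0)
scores = {(0, 0): 3, (0, 1): 1, (1, 1): 4, (1, 2): 2, (2, 2): 3,
          (0, -1): 2, (-1, 0): 1, (-1, -1): 3, (-2, -1): 2, (-2, -2): 4}

q = {x: sum(L for (xi, yj), L in scores.items() if xi == x) for x in range(-4, 5)}  # sums per column
h = {y: sum(L for (xi, yj), L in scores.items() if yj == y) for y in range(-2, 3)}  # sums per row
u = sum(scores.values())

print([q[x] for x in range(-4, 5)])   # [0, 0, 6, 4, 6, 6, 3, 0, 0]  (q1 ... q9)
print([h[y] for y in range(-2, 3)])   # [4, 7, 4, 5, 5]              (h1 ... h5)
print(u)                              # 25
```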

[1049]The sum Tx of the results of multiplying the sum qi of the
scores Li,j in the spatial direction Y by the coordinate xi is
shown in Expression (37).

Tx = Σ(i=1 to k) qi·xi (37)

[1050]The sum Ty of the results of multiplying the sum hj of the
scores Li,j in the spatial direction X by the coordinate yj is
shown in Expression (38).

Ty = Σ(j=1 to l) hj·yj (38)

[1051]For example, in the region shown in FIG. 113, q1 is 0 and
x1 is -4, so q1x1 is 0, and q2 is 0 and x2 is
-3, so q2x2 is 0. In the same way, q3 is 6 and x3 is
-2, so q3x3 is -12; q4 is 4 and x4 is -1, so q4
x4 is -4; q5 is 6 and x5 is 0, so q5x5 is 0;
q6 is 6 and x6 is 1, so q6x6 is 6; q7 is 3 and
x7 is 2, so q7x7 is 6; q8 is 0 and x8 is 3, so
q8x8 is 0; and q9 is 0 and x9 is 4, so q9
x9 is 0. Accordingly, Tx which is the sum of q1x1
through q9x9 is -4.

[1052]For example, in the region shown in FIG. 113, h1 is 4 and
y1 is -2, so h1 y1 is -8, and h2 is 7 and y2 is
-1, so h2 y2 is -7. In the same way, h3 is 4 and y3
is 0, so h3 y3 is 0; h4 is 5 and y4 is 1, so
h4y4 is 5; and h5 is 5 and y5 is 2, so h5y5
is 10. Accordingly, Ty which is the sum of h1y1 through
h5y5 is 0.

[1053]Also, Qi is defined as follows.

##EQU00025##

[1054]The variation Sx of x is expressed by Expression (40).

Sx = Σ(i=1 to k) qi·xi^2 - Tx^2/u (40)

[1055]The variation Sy of y is expressed by Expression (41).

Sy = Σ(j=1 to l) hj·yj^2 - Ty^2/u (41)

[1056]The covariation sxy is expressed by Expression (42).

Sxy = Σ(i=1 to k) Σ(j=1 to l) Li,j·xi·yj - Tx·Ty/u (42)

[1057]Let us consider obtaining the primary regression line shown in
Expression (43).

y=ax+b (43)

[1058]The gradient a and intercept b can be obtained as follows by the
least-square method.

a = Sxy/Sx (44)
b = (Ty - a·Tx)/u (45)

[1059]However, it should be noted that the condition necessary for
obtaining a correct regression line is that the scores Li,j are
distributed in a Gaussian distribution as to the regression line. To put
this the other way around, there is the need for the score detecting unit
503 to convert the pixel values of the pixels of the region into the
scores Li,j such that the scores Li,j have a Gaussian
distribution.

[1060]The regression line computing unit 504 performs the computation of
Expression (44) and Expression (45) to obtain the regression line.

[1061]The angle calculating unit 505 performs the computation of
Expression (46) to convert the gradient a of the regression line to an
angle θ as to the axis in the spatial direction X, which is the
reference axis.

θ=tan-1(a) (46)

Now, in the case of the regression line computing unit 504 computing a
regression line which is a predetermined curve, the angle calculating
unit 505 obtains the angle θ of the regression line at the position
of the pixel of interest as to the reference axis.
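
Because Expressions (39) through (45) are not reproduced in this text, the
sketch below falls back on the standard score-weighted least-squares
formulas (an assumption, not necessarily the embodiment's exact
expressions) to obtain the gradient a and the angle θ of Expression (46)
from scores keyed by (x, y) coordinates, as in the worked example above.

```python
import math

def regression_angle(scores: dict) -> float:
    """scores: (x, y) coordinate -> score Li,j; returns theta in degrees as to the X axis."""
    u   = sum(scores.values())
    tx  = sum(L * x for (x, y), L in scores.items())
    ty  = sum(L * y for (x, y), L in scores.items())
    sx  = sum(L * x * x for (x, y), L in scores.items()) - tx * tx / u   # variation of x
    sxy = sum(L * x * y for (x, y), L in scores.items()) - tx * ty / u   # covariation
    a = sxy / sx                                                          # gradient of y = ax + b
    return math.degrees(math.atan(a))                                     # Expression (46)
```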

[1062]Here, the intercept b is unnecessary for detecting the data
continuity for each pixel. Accordingly, let us consider obtaining the
primary regression line shown in Expression (47).

y=ax (47)

[1063]In this case, the regression line computing unit 504 can obtain the
gradient a by the least-square method as in Expression (48).

a = (Σ(i=1 to k) Σ(j=1 to l) Li,j·xi·yj) / (Σ(i=1 to k) qi·xi^2) (48)

[1064]The processing for detecting data continuity with the data
continuity detecting unit 101 of which the configuration is shown in FIG.
107, corresponding to the processing in step S101, will be described with
reference to the flowchart shown in FIG. 114.

[1065]In step S501, the pixel acquiring unit 502 selects a pixel of
interest from pixels which have not yet been taken as the pixel of
interest. For example, the pixel acquiring unit 502 selects the pixel of
interest in raster scan order. In step S502, the pixel acquiring unit 502
acquires the pixel values of the pixel contained in a region centered on
the pixel of interest, and supplies the pixel values of the pixels
acquired to the score detecting unit 503. For example, the pixel
acquiring unit 502 selects a region made up of 9×5 pixels centered
on the pixel of interest, and acquires the pixel values of the pixels
contained in the region.

[1066]In step S503, the score detecting unit 503 converts the pixel values
of the pixels contained in the region into scores, thereby detecting
scores. For example, the score detecting unit 503 converts the pixel
values into scores Li,j by the computation shown in Expression (32).
In this case, the score detecting unit 503 converts the pixel values of
the pixels of the region into the scores Li,j such that the scores
Li,j have a Gaussian distribution. The score detecting unit 503
supplies the converted scores to the regression line computing unit 504.

[1067]In step S504, the regression line computing unit 504 obtains a
regression line based on the scores supplied from the score detecting
unit 503. For example, the regression line computing unit 504 obtains the
regression line based on the scores supplied from the score detecting
unit 503. More specifically, the regression line computing unit 504
obtains the regression line by executing the computation shown in
Expression (44) and Expression (45). The regression line computing unit
504 supplies computation result parameters indicating the regression line
which is the result of computation, to the angle calculating unit 505.

[1068]In step S505, the angle calculating unit 505 calculates the angle of
the regression line as to the reference axis, thereby detecting the data
continuity of the image data, corresponding to the continuity of the
light signals of the real world that has dropped out. For example, the
angle calculating unit 505 converts the gradient a of the regression line
into the angle θ as to the axis of the spatial direction X which is
the reference axis, by the computation of Expression (46).

[1069]Note that an arrangement may be made wherein the angle calculating
unit 505 outputs data continuity information indicating the gradient a.

[1070]In step S506, the pixel acquiring unit 502 determines whether or not
the processing of all pixels has ended, and in the event that
determination is made that the processing of all pixels has not ended,
the flow returns to step S501, a pixel of interest is selected from the
pixels which have not yet been taken as a pixel of interest, and the
above-described processing is repeated.

[1071]In the event that determination is made in step S506 that the
processing of all pixels has ended, the processing ends.

[1072]Thus, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 107 can detect the angle of data
continuity in the image data based on the reference axis, corresponding
to the dropped continuity of the actual world 1 light signals.

[1073]Particularly, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 107 can obtain angles smaller than pixels,
based on the pixel values of pixels in a relatively narrow region.

[1074]As described above, in a case wherein light signals of the real
world are projected, a region, corresponding to a pixel of interest which
is the pixel of interest in the image data of which a part of the
continuity of the real world light signals has dropped out, is selected,
and a score based on correlation value is set for pixels wherein the
correlation value of the pixel value of the pixel of interest and the
pixel value of a pixel belonging to a selected region is equal to or
greater than a threshold value, thereby detecting the score of pixels
belonging to the region, and a regression line is detected based on the
detected score, thereby detecting the data continuity of the image data
corresponding to the continuity of the real world light signals which has
dropped out, and subsequently estimating the light signals by estimating
the continuity of the dropped real world light signals based on the
detected data continuity of the image data, processing results which are
more accurate and more precise as to events in the real world can be
obtained.

[1075]Note that with the data continuity detecting unit 101 of which the
configuration is shown in FIG. 107, an arrangement wherein the pixel
values of pixels in a predetermined region of the frame of interest where
the pixel of interest belongs and in frames before and after the frame of
interest time-wise are converted into scores, and a regression plane is
obtained based on the scores, allows the angle of time-directional data
continuity to be detected along with the angle of the data continuity in
the spatial direction.

[1077]With the data continuity detecting unit 101 shown in FIG. 115, light
signals of the real world are projected, a region, corresponding to a
pixel of interest which is the pixel of interest in the image data of
which a part of the continuity of the real world light signals has
dropped out, is selected, and a score based on correlation value is set
for pixels wherein the correlation value of the pixel value of the pixel
of interest and the pixel value of a pixel belonging to a selected region
is equal to or greater than a threshold value, thereby detecting the
score of pixels belonging to the region, and a regression line is
detected based on the detected score, thereby detecting the data
continuity of the image data corresponding to the continuity of the real
world light signals which has dropped out.

[1078]Frame memory 601 stores input images in increments of frames, and
supplies the pixel values of the pixels making up stored frames to a
pixel acquiring unit 602. The frame memory 601 can supply pixel values of
pixels of frames of an input image which is a moving image to the pixel
acquiring unit 602, by storing the current frame of the input image in
one page, supplying the pixel values of the pixels of the frame one frame
previous (in the past) as to the current frame stored in another page to
the pixel acquiring unit 602, and switching pages at the switching
point-in-time of the frames of the input image.
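
As a rough illustration of this page-switching behavior only (the class and method names below are hypothetical, and the patent does not specify any implementation), a two-page frame memory can be sketched in Python as follows.

import numpy as np

class FrameMemory:
    """Two-page frame memory sketch: one page holds the current frame, the
    other holds the frame one frame previous, and the pages are switched at
    each frame boundary."""

    def __init__(self, height, width):
        self.pages = [np.zeros((height, width)), np.zeros((height, width))]
        self.current = 0  # index of the page holding the current frame

    def store_frame(self, frame):
        # Switch pages at the frame switching point-in-time, then store the
        # newly arrived frame as the current frame.
        self.current = 1 - self.current
        self.pages[self.current] = np.asarray(frame, dtype=float)

    def current_frame(self):
        return self.pages[self.current]

    def previous_frame(self):
        return self.pages[1 - self.current]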

[1079]The pixel acquiring unit 602 selects the pixel of interest which is
a pixel of interest based on the pixel values of the pixels supplied from
the frame memory 601, and selects a region made up of a predetermined
number of pixels corresponding to the selected pixel of interest. For
example, the pixel acquiring unit 602 selects a region made up of
5×5 pixels centered on the pixel of interest.

[1080]The size of the region which the pixel acquiring unit 602 selects
does not restrict the present invention.

[1081]The pixel acquiring unit 602 acquires the pixel values of the pixels
of the selected region, and supplies the pixel values of the pixels of
the selected region to a score detecting unit 603.

[1082]Based on the pixel values of the pixels of the selected region
supplied from the pixel acquiring unit 602, the score detecting unit 603
detects the score of pixels belonging to the region, by setting a score
based on correlation value for pixels wherein the correlation value of
the pixel value of the pixel of interest and the pixel value of a pixel
belonging to the selected region is equal to or greater than a threshold
value. The details of processing for setting score based on correlation
at the score detecting unit 603 will be described later.

[1084]The regression line computing unit 604 computes a regression line
based on the score supplied from the score detecting unit 603. For
example, the regression line computing unit 604 computes a regression
line which is a straight line, based on the score supplied from the score
detecting unit 603. Also, for example, the regression line computing unit
604 computes a regression line which is a predetermined curve, based on
the score supplied from the score detecting unit 603. The regression line
computing unit 604 supplies computation result parameters indicating the
computed regression line and the results of computation to a region
calculating unit 605. The computation results which the computation
result parameters indicate include later-described variation and covariation.

[1085]The region calculating unit 605 detects the region having the
continuity of the data of the input image which is image data,
corresponding to the continuity of the light signals of the real world
that has dropped out, based on the regression line indicated by the
computation result parameters supplied from the regression line computing
unit 604.

[1086]FIG. 116 is a diagram illustrating the relation between change in
pixel values in the input image as to the position of the pixels in the
spatial direction, and the regression line A. The pixel values of pixels
in the region having data continuity change in the form of a crest, for
example, as shown in FIG. 116.

[1087]The data continuity detecting unit 101 of which the configuration is
shown in FIG. 115 plots the regression line A by the least-square method, weighted
with the pixel values of the pixels in the region having data continuity.
The regression line A obtained by the data continuity detecting unit 101
represents the data continuity in the neighborhood of the pixel of
interest.

[1088]Plotting a regression line means approximation assuming a Gaussian
function. As shown in FIG. 117, the data continuity detecting unit of
which the configuration is illustrated in FIG. 115 can tell the general
width of the region in the data 3 where the image of the fine line has
been projected, by obtaining standard deviation, for example. Also, the
data continuity detecting unit of which the configuration is illustrated
in FIG. 115 can tell the general width of the region in the data 3 where
the image of the fine line has been projected, based on correlation
coefficients.

[1089]Next, a specific method for calculating the regression line with the
data continuity detecting unit 101 of which the configuration is shown in
FIG. 115 will be described.

[1090]From the pixel values of pixels in a region made up of 9 pixels in
the spatial direction X and 5 pixels in the spatial direction Y for a
total of 45 pixels, centered on the pixel of interest, supplied from the
pixel acquiring unit 602, for example, the score detecting unit 603
detects the score corresponding to the coordinates of the pixels
belonging to the region.

[1091]For example, the score detecting unit 603 detects the score
Li,j of the coordinates (xi, yj) belonging to the region,
by calculating the score with the computation of Expression (49).

##EQU00031##

[1092]In Expression (49), P0,0 represents the pixel value of the
pixel of interest, and Pi,j represents the pixel values of the pixel
at the coordinates (xi, yj). Th represents the threshold value.

[1093]i represents the order of the pixel in the spatial direction X in
the region wherein 1≦i≦k. j represents the order of the
pixel in the spatial direction Y in the region wherein
1≦j≦l.

[1094]k represents the number of pixels in the spatial direction X in the
region, and l represents the number of pixels in the spatial direction Y
in the region. For example, in the event of a region made up of 9 pixels
in the spatial direction X and 5 pixels in the spatial direction Y for a
total of 45 pixels, k is 9 and l is 5.

[1095]FIG. 118 is a diagram illustrating an example of a region acquired
by the pixel acquiring unit 602. In FIG. 118, the dotted squares each
represent one pixel.

[1096]For example, as shown in FIG. 118, in the event that the region is
made up of 9 pixels centered on the pixel of interest in the spatial
direction X, and is made up of 5 pixels centered on the pixel of interest
in the spatial direction Y, with the coordinates (x, y) of the pixel of
interest being (0, 0), the coordinates (x, y) of the pixel at the upper
left of the region are (-4, 2), the coordinates (x, y) of the pixel at
the upper right of the region are (4, 2), the coordinates (x, y) of the
pixel at the lower left of the region are (-4, -2), and the coordinates
(x, y) of the pixel at the lower right of the region are (4, -2).

[1097]The order i of the pixels at the left side of the region in the
spatial direction X is 1, and the order i of the pixels at the right side
of the region in the spatial direction X is 9. The order j of the pixels
at the lower side of the region in the spatial direction Y is 1, and the
order j of the pixels at the upper side of the region in the spatial
direction Y is 5.

[1098]That is to say, with the coordinates (x5, y3) of the pixel
of interest as (0, 0), the coordinates (x1, y5) of the pixel at
the upper left of the region are (-4, 2), the coordinates (x9,
y5) of the pixel at the upper right of the region are (4, 2), the
coordinates (x1, y1) of the pixel at the lower left of the
region are (-4, -2), and the coordinates (x9, y1) of the pixel
at the lower right of the region are (4, -2).

[1099]The score detecting unit 603 calculates the absolute value of the
difference between the pixel value of the pixel of interest and the pixel
values of the pixels belonging to the region as a correlation value with
Expression (49), so this is not restricted to a region having data
continuity in the input image where a fine line image of the actual world
1 has been projected; rather, a score can also be detected representing
the feature of spatial change of pixel values in a region of the input
image having two-valued edge data continuity, wherein an image of an
object in the actual world 1 which has a straight edge and is of a
monotone color different from that of the background has been projected.

[1100]Note that the score detecting unit 603 is not restricted to the
absolute values of difference of the pixel values of the pixels, and may
be arranged to detect the score based on other correlation values such as
correlation coefficients and so forth.

[1101]Also, the reason that an exponential function is applied in
Expression (49) is to exaggerate difference in score as to difference in
pixel values, and an arrangement may be made wherein other functions are
applied.

[1102]The threshold value Th may be an arbitrary value. For example, the
threshold value Th may be 30.

[1103]Thus, the score detecting unit 603 sets a score to pixels having a
correlation value with a pixel value of a pixel belonging to a selected
region equal to or greater than the threshold value, based on the
correlation value, and thereby detects the score of the pixels belonging
to the region.
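
As a minimal sketch of this score setting, the following Python fragment converts a block of pixel values into scores. Expression (49) itself (its exponential weighting and constants) is not reproduced in this text, so a simple 255 minus absolute-difference rule is used here purely as a stand-in; the function name and the threshold default are assumptions.

import numpy as np

def detect_scores(region, th=30.0):
    # Convert a block of pixel values into scores in the spirit of
    # Expression (49): pixels whose absolute difference from the pixel of
    # interest is within the threshold Th receive a score that grows as the
    # correlation grows; all other pixels receive a score of 0.  The
    # exponential weighting of Expression (49) is not reproduced here, so a
    # simple 255 - |difference| rule stands in for it.
    region = np.asarray(region, dtype=float)
    rows, cols = region.shape            # l pixels in Y, k pixels in X
    p00 = region[rows // 2, cols // 2]   # pixel of interest at the block centre
    diff = np.abs(region - p00)
    return np.where(diff <= th, 255.0 - diff, 0.0)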

[1104]Also, the score detecting unit 603 performs the computation of
Expression (50) for example, thereby calculating the score, whereby the
score Li,j of the coordinates (xi, yj) belonging to the
region is detected.

##EQU00032##

[1105]With the score of the coordinates (xi, yj) as
Li,j(1≦i≦k, 1≦j≦l), the sum qi of
the score Li,j of the coordinate xi in the spatial direction Y
is expressed by Expression (51), and the sum hj of the score
Li,j of the coordinate yj in the spatial direction X is
expressed by Expression (52).

qi=Li,1+Li,2+ . . . +Li,l (51)

hj=L1,j+L2,j+ . . . +Lk,j (52)

[1106]The summation u of the scores is expressed by Expression (53).

u=q1+q2+ . . . +qk=h1+h2+ . . . +hl (53)

[1107]In the example shown in FIG. 118, the score L5,3 of the
coordinate of the pixel of interest is 3, the score L5,4 of the
coordinate of the pixel above the pixel of interest is 1, the score
L6,4 of the coordinate of the pixel to the upper right of the pixel
of interest is 4, the score L6,5 of the coordinate of the pixel two
pixels above and one pixel to the right of the pixel of interest is 2,
and the score L7,5 of the coordinate of the pixel two pixels above
and two pixels to the right of the pixel of interest is 3. Also, the
score L5,2 of the coordinate of the pixel below the pixel of
interest is 2, the score L4,3 of the coordinate of the pixel to the
left of the pixel of interest is 1, the score L4,2 of the coordinate
of the pixel to the lower left of the pixel of interest is 3, the score
L3,2 of the coordinate of the pixel one pixel below and two pixels
to the left of the pixel of interest is 2, and the score L3,1 of the
coordinate of the pixel two pixels below and two pixels to the left of
the pixel of interest is 4. The score of all other pixels in the region
shown in FIG. 118 is 0, and description of pixels which have a score of 0
is omitted from FIG. 118.

[1108]In the region shown in FIG. 118, the sum q1 of the scores in
the spatial direction Y is 0, since all scores L wherein i is 1 are 0,
and q2 is 0 since all scores L wherein i is 2 are 0. q3 is 6
since L3,2 is 2 and L3,1 is 4. In the same way, q4 is 4,
q5 is 6, q6 is 6, q7 is 3, q8 is 0, and q9 is 0.

[1109]In the region shown in FIG. 118, the sum h1 of the scores in
the spatial direction X is 4, since L3,1 is 4. h2 is 7 since
L3,2 is 2, L4,2 is 3, and L5,2 is 2. In the same way,
h3 is 4, h4 is 5, and h5 is 5.

[1110]In the region shown in FIG. 118, the summation u of scores is 25.

[1111]The sum Tx of the results of multiplying the sum qi of the
scores Li,j in the spatial direction Y by the coordinate xi is
shown in Expression (54).

Tx=x1q1+x2q2+ . . . +xkqk (54)

[1112]The sum Ty of the results of multiplying the sum hj of the
scores Li,j in the spatial direction X by the coordinate yj is
shown in Expression (55).

Ty=y1h1+y2h2+ . . . +ylhl (55)

[1113]For example, in the region shown in FIG. 118, q1 is 0 and
x1 is -4, so q1 x1 is 0, and q2 is 0 and x2 is
-3, so q2 x2 is 0. In the same way, q3 is 6 and x3 is
-2, so q3 x3 is -12; q4 is 4 and x4 is -1, so q4
x4 is -4; q5 is 6 and x5 is 0, so q5 x5 is 0;
q6 is 6 and x6 is 1, so q6 x6 is 6; q7 is 3 and
x7 is 2, so q7 x7 is 6; q8 is 0 and x8 is 3, so
q8 x8 is 0; and q9 is 0 and x9 is 4, so q9
x9 is 0. Accordingly, Tx which is the sum of q1x1
through q9x9 is -4.

[1114]For example, in the region shown in FIG. 118, h1 is 4 and
y1 is -2, so h1 y1 is -8, and h2 is 7 and y2 is
-1, so h2 y2 is -7. In the same way, h3 is 4 and y3
is 0, so h3 y3 is 0; h4 is 5 and y4 is 1, so
h4y4 is 5; and h5 is 5 and y5 is 2, so h5y5
is 10. Accordingly, Ty which is the sum of h1y1 through
h5y5 is 0.
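
The bookkeeping above can be checked numerically. The following Python sketch (names chosen here for illustration) builds the score table of the FIG. 118 example and reproduces the sums qi, hj, u, Tx, and Ty described in the text.

import numpy as np

k, l = 9, 5
L = np.zeros((k, l))
# Non-zero scores Li,j of the FIG. 118 example (i = order in X, j = order in Y).
for (i, j), s in {(5, 3): 3, (5, 4): 1, (6, 4): 4, (6, 5): 2, (7, 5): 3,
                  (5, 2): 2, (4, 3): 1, (4, 2): 3, (3, 2): 2, (3, 1): 4}.items():
    L[i - 1, j - 1] = s

x = np.arange(k) - 4   # x1..x9 = -4..4 (pixel of interest at x5 = 0)
y = np.arange(l) - 2   # y1..y5 = -2..2 (pixel of interest at y3 = 0)

q = L.sum(axis=1)      # Expression (51): q3=6, q4=4, q5=6, q6=6, q7=3
h = L.sum(axis=0)      # Expression (52): h1=4, h2=7, h3=4, h4=5, h5=5
u = L.sum()            # Expression (53): 25
Tx = (x * q).sum()     # Expression (54): -4
Ty = (y * h).sum()     # Expression (55): 0
print(u, Tx, Ty)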

[1115]Also, Qi is defined as follows.

Qi=y1Li,1+y2Li,2+ . . . +ylLi,l (56)

[1116]The variation Sx of x is expressed by Expression (57).

Sx=(x1)2q1+(x2)2q2+ . . . +(xk)2qk-(Tx)2/u (57)

[1117]The variation Sy of y is expressed by Expression (58).

Sy=(y1)2h1+(y2)2h2+ . . . +(yl)2hl-(Ty)2/u (58)

[1118]The covariation Sxy is expressed by Expression (59).

Sxy=x1Q1+x2Q2+ . . . +xkQk-TxTy/u (59)

[1119]Let us consider obtaining the primary regression line shown in
Expression (60).

y=ax+b (60)

[1120]The gradient a and intercept b can be obtained as follows by the
least-square method.

a=Sxy/Sx (61)

b=(Ty-a×Tx)/u (62)

[1121]However, it should be noted that the condition necessary for
obtaining a correct regression line is that the scores Li,j are
distributed in a Gaussian distribution as to the regression line. To put
this the other way around, there is the need for the score detecting unit
603 to convert the pixel values of the pixels of the region into the
scores Li,j such that the scores Li,j have a Gaussian
distribution.

[1122]The regression line computing unit 604 performs the computation of
Expression (61) and Expression (62) to obtain the regression line.

[1123]Also, the intercept b is unnecessary for detecting the data
continuity for each pixel. Accordingly, let us consider obtaining the
primary regression line shown in Expression (63).

y=ax (63)

[1124]In this case, the regression line computing unit 604 can obtain the
gradient a by the least-square method as in Expression (64).

a=(x1Q1+x2Q2+ . . . +xkQk)/((x1)2q1+(x2)2q2+ . . . +(xk)2qk) (64)
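
Putting Expressions (51) through (64), as reconstructed above, together, the following Python sketch computes the gradient, intercept, and angle from a k×l array of scores L and coordinate vectors x and y. The function name is hypothetical, and this is an illustration under the reconstructions above, not the patent's implementation.

import numpy as np

def regression_gradient_and_angle(L, x, y):
    # Score-weighted sums and moments, Expressions (51) through (59) as
    # reconstructed above.
    q = L.sum(axis=1)                       # (51)
    h = L.sum(axis=0)                       # (52)
    u = L.sum()                             # (53)
    Tx = (x * q).sum()                      # (54)
    Ty = (y * h).sum()                      # (55)
    Q = L @ y                               # (56): Qi = y1*Li,1 + ... + yl*Li,l
    Sx = (x ** 2 * q).sum() - Tx ** 2 / u   # (57)
    Sy = (y ** 2 * h).sum() - Ty ** 2 / u   # (58), used by the region techniques below
    Sxy = (x * Q).sum() - Tx * Ty / u       # (59)
    a = Sxy / Sx                            # (61): gradient of y = a*x + b
    b = (Ty - a * Tx) / u                   # (62): intercept
    a0 = (x * Q).sum() / (x ** 2 * q).sum() # (64): gradient of y = a*x (no intercept)
    theta = np.degrees(np.arctan(a0))       # angle as to the spatial direction X
    return a, b, a0, theta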

[1125]With a first technique for determining the region having data
continuity, the estimation error of the regression line shown in
Expression (60) is used.

[1126]The variation Syx of y is obtained with the computation shown
in Expression (65).

Syx=Sy-a×Sxy (65)

[1127]Scattering of the estimation error is obtained by the computation
shown in Expression (66) using variation.

σ2=Syx/(u-2) (66)

[1128]Accordingly, the following Expression yields the standard deviation.

σ={square root over (Syx/(u-2))} (67)

[1129]However, in the case of handling a region where a fine line image
has been projected, the standard deviation is a quantity corresponding to
the width of the fine line, so a determination cannot be categorically
made that a great standard deviation means that a region is not a region
with data continuity. However, for example, information indicating
regions detected using the standard deviation can be utilized to detect
regions where there is a great possibility that class classification
adaptation processing breakdown will occur, since class classification
adaptation processing breakdown occurs at portions of the region having
data continuity where the fine line is narrow.

[1130]The region calculating unit 605 calculates the standard deviation by
the computation shown in Expression (67), and calculates the region of
the input image having data continuity, based on the standard deviation,
for example. The region calculating unit 605 multiplies the standard
deviation by a predetermined coefficient so as to obtain distance, and
takes the region within the obtained distance from the regression line as
a region having data continuity. For example, the region calculating unit
605 calculates the region within the standard deviation distance from the
regression line as a region having data continuity, with the regression
line as the center thereof.
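
A minimal sketch of this first technique, assuming the reconstructed Expressions (65) through (67) above, might look as follows; the function name and the default coefficient are illustrative only.

import numpy as np

def region_by_standard_deviation(L, x, y, coeff=1.0):
    # Fit the regression line as above, then mark as the data-continuity
    # region the pixels within coeff * (standard deviation) of the line,
    # using Expressions (65) through (67) as reconstructed above.
    q, h, u = L.sum(axis=1), L.sum(axis=0), L.sum()
    Tx, Ty = (x * q).sum(), (y * h).sum()
    Q = L @ y
    Sx = (x ** 2 * q).sum() - Tx ** 2 / u
    Sy = (y ** 2 * h).sum() - Ty ** 2 / u
    Sxy = (x * Q).sum() - Tx * Ty / u
    a = Sxy / Sx
    b = (Ty - a * Tx) / u
    Syx = Sy - a * Sxy                       # (65): variation of y about the line
    sigma = np.sqrt(Syx / (u - 2.0))         # (66), (67): scattering and standard deviation
    xx, yy = np.meshgrid(x, y, indexing="ij")
    dist = np.abs(a * xx - yy + b) / np.sqrt(a * a + 1.0)
    return dist <= coeff * sigma             # boolean mask over the (k, l) region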

[1131]With a second technique, the correlation of score is used for
detecting a region having data continuity.

[1132]The correlation coefficient rxy can be obtained by the
computation shown in Expression (68), based on the variation Sx of
x, the variation Sy of y, and the covariation Sxy.

rxy=Sxy/{square root over (Sx×Sy)} (68)

[1133]Correlation includes positive correlation and negative correlation,
so the region calculating unit 605 obtains the absolute value of the
correlation coefficient rxy, and determines that the closer to 1 the
absolute value of the correlation coefficient rxy is, the greater
the correlation is. More specifically, the region calculating unit 605
compares the threshold value with the absolute value of the correlation
coefficient rxy and detects a region wherein the correlation
coefficient rxy is equal to or greater than the threshold value as a
region having data continuity.
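
The second technique can be sketched in the same way, assuming Expression (68); the threshold value used here is an arbitrary example, not a value taken from the patent.

import numpy as np

def region_by_correlation(L, x, y, threshold=0.7):
    # Correlation coefficient rxy of Expression (68) from the score-weighted
    # variations and covariation; the region is taken as having data
    # continuity when |rxy| is at or above the threshold.
    q, h, u = L.sum(axis=1), L.sum(axis=0), L.sum()
    Tx, Ty = (x * q).sum(), (y * h).sum()
    Q = L @ y
    Sx = (x ** 2 * q).sum() - Tx ** 2 / u
    Sy = (y ** 2 * h).sum() - Ty ** 2 / u
    Sxy = (x * Q).sum() - Tx * Ty / u
    rxy = Sxy / np.sqrt(Sx * Sy)             # Expression (68)
    return abs(rxy) >= threshold, rxy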

[1134]The processing for detecting data continuity with the data
continuity detecting unit 101 of which the configuration is shown in FIG.
115, corresponding to the processing in step S101, will be described with
reference to the flowchart shown in FIG. 119.

[1135]In step S601, the pixel acquiring unit 602 selects a pixel of
interest from pixels which have not yet been taken as the pixel of
interest. For example, the pixel acquiring unit 602 selects the pixel of
interest in raster scan order. In step S602, the pixel acquiring unit 602
acquires the pixel values of the pixel contained in a region centered on
the pixel of interest, and supplies the pixel values of the pixels
acquired to the score detecting unit 603. For example, the pixel
acquiring unit 602 selects a region made up of 9×5 pixels centered
on the pixel of interest, and acquires the pixel values of the pixels
contained in the region.

[1136]In step S603, the score detecting unit 603 converts the pixel values
of the pixels contained in the region into scores, thereby detecting
scores. For example, the score detecting unit 603 converts the pixel
values into scores Li,j by the computation shown in Expression (49).
In this case, the score detecting unit 603 converts the pixel values of
the pixels of the region into the scores Li,j such that the scores
Li,j have a Gaussian distribution. The score detecting unit 603
supplies the converted scores to the regression line computing unit 604.

[1137]In step S604, the regression line computing unit 604 obtains a
regression line based on the scores supplied from the score detecting
unit 603. More specifically, the regression line computing unit 604
obtains the regression line by executing the computation shown in
Expression (61) and Expression (62). The regression line computing unit
604 supplies computation result parameters indicating the regression line
which is the result of computation, to the region calculating unit 605.

[1138]In step S605, the region calculating unit 605 calculates the
standard deviation regarding the regression line. For example, an
arrangement may be made wherein the region calculating unit 605
calculates the standard deviation as to the regression line by the
computation of Expression (67).

[1139]In step S606, the region calculating unit 605 determines the region
of the input image having data continuity, from the standard deviation.
For example, the region calculating unit 605 multiplies the standard
deviation by a predetermined coefficient to obtain distance, and
determines the region within the obtained distance from the regression
line to be the region having data continuity.

[1141]In step S607, the pixel acquiring unit 602 determines whether or not
the processing of all pixels has ended, and in the event that
determination is made that the processing of all pixels has not ended,
the flow returns to step S601, a pixel of interest is selected from the
pixels which have not yet been taken as a pixel of interest, and the
above-described processing is repeated.

[1142]In the event that determination is made in step S607 that the
processing of all pixels has ended, the processing ends.

[1143]Other processing for detecting data continuity with the data
continuity detecting unit 101 of which the configuration is shown in FIG.
115, corresponding to the processing in step S101, will be described with
reference to the flowchart shown in FIG. 120. The processing of step S621
through step S624 is the same as the processing of step S601 through step
S604, so description thereof will be omitted.

[1144]In step S625, the region calculating unit 605 calculates a
correlation coefficient regarding the regression line. For example, the
region calculating unit 605 calculates the correlation coefficient as to
the regression line by the computation of Expression (68).

[1145]In step S626, the region calculating unit 605 determines the region
of the input image having data continuity, from the correlation
coefficient. For example, the region calculating unit 605 compares the
absolute value of the correlation coefficient with a threshold value
stored beforehand, and determines a region wherein the absolute value of
the correlation coefficient is equal to or greater than the threshold
value to be the region having data continuity.

[1147]The processing of step S627 is the same as the processing of step
S607, so description thereof will be omitted.

[1148]Thus, the data continuity detecting unit 101 of which the
configuration is shown in FIG. 115 can detect the region in the image
data having data continuity, corresponding to the dropped actual world 1
light signal continuity.

[1149]As described above, in a case wherein light signals of the real
world are projected, a region, corresponding to a pixel of interest which
is the pixel of interest in the image data of which a part of the
continuity of the real world light signals has dropped out, is selected,
and a score based on correlation value is set for pixels wherein the
correlation value of the pixel value of the pixel of interest and the
pixel value of a pixel belonging to a selected region is equal to or
greater than a threshold value, thereby detecting the score of pixels
belonging to the region, and a regression line is detected based on the
detected score, thereby detecting the region having the data continuity
of the image data corresponding to the continuity of the real world light
signals which has dropped out, and subsequently estimating the light
signals by estimating the dropped real world light signal continuity
based on the detected data continuity of the image data, processing
results which are more accurate and more precise as to events in the real
world can be obtained.

[1150]FIG. 121 illustrates the configuration of another form of the data
continuity detecting unit 101.

[1152]The data selecting unit 701 takes each pixel of the input image as
the pixel of interest, selects pixel value data of pixels corresponding
to each pixel of interest, and outputs this to the data supplementing
unit 702.

[1153]The data supplementing unit 702 performs least-square
supplementation computation based on the data input from the data
selecting unit 701, and outputs the supplementation computation results
to the continuity direction derivation unit 703. The supplementation
computation by the data supplementing unit 702 is computation regarding
the summation item used in the later-described least-square computation,
and the computation results thereof can be said to be the feature of the
image data for detecting the angle of continuity.

[1154]The continuity direction derivation unit 703 computes the continuity
direction, i.e., the angle as to the reference axis which the data
continuity has (e.g., the gradient or direction of a fine line or
two-valued edge) from the supplementation computation results input by
the data supplementing unit 702, and outputs this as data continuity
information.

[1155]Next, the overview of the operations of the data continuity
detecting unit 101 in detecting continuity (direction or angle) will be
described with reference to FIG. 122. Portions in FIG. 122 and FIG. 123
which correspond with those in FIG. 6 and FIG. 7 are denoted with the
same symbols, and description thereof in the following will be omitted as
suitable.

[1156]As shown in FIG. 122, signals of the actual world 1 (e.g., an
image), are imaged on the photoreception face of a sensor 2 (e.g., a CCD
(Charge Coupled Device) or CMOS (Complementary Metal-Oxide
Semiconductor)), by an optical system 141 (made up of lenses, an LPF (Low
Pass Filter), and the like, for example). The sensor 2 is configured of a
device having integration properties, such as a CCD or CMOS, for example.
Due to this configuration, the image obtained from the data 3 output from
the sensor 2 is an image differing from the image of the actual world 1
(difference as to the image of the actual world 1 occurs).

[1157]Accordingly, as shown in FIG. 123, the data continuity detecting
unit 101 uses a model 705 to describe in an approximate manner the actual
world 1 by an approximation expression and extracts the data continuity
from the approximation expression. The model 705 is represented by, for
example, N variables. More accurately, the model 705 approximates
(describes) signals of the actual world 1.

[1158]In order to predict the model 705, the data continuity detecting
unit 101 extracts M pieces of data 706 from the data 3. Consequently, the
model 705 is constrained by the continuity of the data.

[1159]That is to say, the model 705 approximates continuity of the
(information (signals) indicating) events of the actual world 1 having
continuity (constant characteristics in a predetermined dimensional
direction), which generates the data continuity in the data 3 when
obtained with the sensor 2.

[1160]Now, in the event that the number M of the pieces of data 706 is
equal to or greater than the number N of variables of the model 705, the
model 705 represented by the N variables can be predicted from the M
pieces of data 706.
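
As a generic illustration of this M versus N condition (the model and the numbers below are made up and are not the patent's model 705), an over-determined linear model with N variables can be recovered from M pieces of data by least squares:

import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 10                        # N model variables, M extracted pieces of data
true_vars = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(M, N))         # how each piece of data depends on the N variables
data = A @ true_vars                # the M pieces of data
estimate, *_ = np.linalg.lstsq(A, data, rcond=None)
print(estimate)                     # recovers the N variables because M >= N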

[1161]Further, by predicting the model 705 approximating (describing) the
signals of the actual world 1, the data continuity detecting unit 101
derives the data continuity contained in the signals which are
information of the actual world 1 as, for example, fine line or
two-valued edge direction (the gradient, or the angle as to an axis in a
case wherein a predetermined direction is taken as an axis), and outputs
this as data continuity information.

[1162]Next, the data continuity detecting unit 101 which outputs the
direction (angle) of a fine line from the input image as data continuity
information will be described with reference to FIG. 124.

[1163]The data selecting unit 701 is configured of a horizontal/vertical
determining unit 711, and a data acquiring unit 712. The
horizontal/vertical determining unit 711 determines, from the difference
in pixel values between the pixel of interest and the surrounding pixels,
whether the fine line in the input image is a fine line at an angle
closer to the horizontal direction or a fine line at an angle closer to
the vertical direction, and outputs the determination results to the data
acquiring unit 712 and the data supplementing unit 702.

[1164]In more detail, other techniques may be used for this determination
as well; for example, simplified 16-directional detection may be used. As
shown in FIG. 125, of the differences between the pixel of interest and
the surrounding pixels (differences in pixel values between the pixels),
the horizontal/vertical determining unit 711 obtains the difference
between the sum of differences (activity) between pixels in the
horizontal direction (hdiff) and the sum of differences (activity)
between pixels in the vertical direction (vdiff), and determines whether
the sum of differences is greater between the pixel of interest and
pixels adjacent thereto in the vertical direction, or between the pixel
of interest and pixels adjacent thereto in the horizontal direction. Now,
in FIG. 125, each grid represents a pixel, and
the pixel at the center of the diagram is the pixel of interest. Also,
the differences between pixels indicated by the dotted arrows in the
diagram are the differences between pixels in the horizontal direction,
and the sum thereof is indicated by hdiff. Also, the differences between
pixels indicated by the solid arrows in the diagram are the differences
between pixels in the vertical direction, and the sum thereof is
indicated by vdiff.

[1165]Based on the sum of differences hdiff of the pixel values of the
pixels in the horizontal direction, and the sum of differences vdiff of
the pixel values of the pixels in the vertical direction, that have been
thus obtained, in the event that (hdiff minus vdiff) is positive, this
means that the change (activity) of pixel values between pixels is
greater in the horizontal direction than the vertical direction, so in a
case wherein the angle as to the horizontal direction is represented by
θ (0 degrees≦θ≦180 degrees) as shown in FIG.
126, the horizontal/vertical determining unit 711 determines that the
pixels belong to a fine line which is 45
degrees≦θ≦135 degrees, i.e., an angle closer to the
vertical direction, and conversely, in the event that this is negative,
this means that the change (activity) of pixel values between pixels is
greater in the vertical direction, so the horizontal/vertical determining
unit 711 determines that the pixels belong to a fine line which is 0
degrees≦θ<45 degrees or 135 degrees<θ≦180 degrees, i.e., an angle closer to the
horizontal direction (pixels in the direction (angle) in which the fine
line extends each are pixels representing the fine line, so change
(activity) between those pixels should be smaller).
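
A minimal sketch of this determination over a 3×3 block might look as follows; using absolute differences for the activity is an assumption, since the text only speaks of sums of differences.

import numpy as np

def horizontal_vertical_determination(block3x3):
    # Compare the activities of FIG. 125 over a 3x3 block centred on the
    # pixel of interest: hdiff sums differences between horizontally adjacent
    # pixels, vdiff between vertically adjacent pixels.
    b = np.asarray(block3x3, dtype=float)
    hdiff = np.abs(np.diff(b, axis=1)).sum()   # horizontal-direction activity
    vdiff = np.abs(np.diff(b, axis=0)).sum()   # vertical-direction activity
    if hdiff - vdiff > 0:
        return "fine line closer to vertical"     # 45 deg <= theta <= 135 deg
    return "fine line closer to horizontal"       # 0 deg <= theta < 45 deg or 135 deg < theta <= 180 deg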

[1166]Also, the horizontal/vertical determining unit 711 has a counter
(not shown) for identifying individual pixels of the input image, which
can be used whenever suitable or necessary.

[1167]Also, while description has been made regarding FIG. 125 of an
example of comparing the sums of differences of pixel values between
pixels in the vertical direction and the horizontal direction in a 3
pixel×3 pixel range centered on the pixel of interest, to determine
whether the fine line is closer to the vertical direction or closer to
the horizontal direction, the direction of the fine line can be
determined with the same technique using a greater number of pixels; for
example, determination may be made based on blocks of 5 pixels×5 pixels
centered on the pixel of interest, 7 pixels×7 pixels, and so forth, i.e.,
a greater number of pixels.

[1168]Based on the determination results regarding the direction of the
fine line input from the horizontal/vertical determining unit 711, the
data acquiring unit 712 reads out (acquires) pixel values in increments
of blocks made up of multiple pixels arrayed in the horizontal direction
corresponding to the pixel of interest, or in increments of blocks made
up of multiple pixels arrayed in the vertical direction, and along with
data of difference between pixels adjacent in the direction according to
the determination results from the horizontal/vertical determining unit
711 between multiple corresponding pixels for each pixel of interest read
out (acquired), maximum value and minimum value data of pixel values of
the pixels contained in blocks of a predetermined number of pixels is
output to the data supplementing unit 702. Hereafter, a block made up of
multiple pixels obtained corresponding to the pixel of interest by the
data acquiring unit 712 will be referred to as an acquired block (of the
multiple pixels (each represented by a grid) shown in FIG. 139 described
later for example, with the pixel indicated by the black square as the
pixel of interest, an acquired block is the three pixels above and below,
and one pixel to the right and left, for a total of 15 pixels).

[1169]The difference supplementing unit 721 of the data supplementing unit
702 detects the difference data input from the data selecting unit 701,
executes supplementing processing necessary for solution of the
later-described least-square method, based on the determination results
of horizontal direction or vertical direction input from the
horizontal/vertical determining unit 711 of the data selecting unit 701,
and outputs the supplementing results to the continuity direction
derivation unit 703. More specifically, of the multiple pixels, the data
of difference in the pixel values between the pixel i adjacent in the
direction determined by the horizontal/vertical determining unit 711 and
the pixel (i+1) is taken as yi, and in the event that the acquired block
corresponding to the pixel of interest is made up of n pixels, the
difference supplementing unit 721 computes supplementing of
(y1)2+(y2)2+(y3)2+ . . . for each horizontal direction or
vertical direction, and outputs to the continuity direction derivation
unit 703.

[1170]Upon obtaining the maximum value and minimum value of pixel values
of pixels contained in a block set for each of the pixels contained in
the acquired block corresponding to the pixel of interest input from the
data selecting unit 701 (hereafter referred to as a dynamic range block
(of the pixels in the acquired block indicated in FIG. 139 which will be
described later, a dynamic range block of the pixel pix11 and the three
pixels above and below it for a total of 7 pixels, illustrated as the
dynamic range block B1 surrounded with a black solid line)), a MaxMin acquiring
unit 722 computes (detects) from the difference thereof a dynamic range
Dri (the difference between the maximum value and minimum value of pixel
values of pixels contained in the dynamic range block corresponding to
the i'th pixel in the acquired block), and outputs this to a difference
supplementing unit 723.

[1171]The difference supplementing unit 723 detects the dynamic range Dri
input from the MaxMin acquiring unit 722 and the difference data input
from the data selecting unit 701, supplements each horizontal direction
or vertical direction input from the horizontal/vertical determining unit
711 of the data selecting unit 701 with a value obtained by multiplying
the dynamic range Dri and the difference data yi based on the dynamic
range Dri and the difference data which have been detected, and outputs
the computation results to the continuity direction derivation unit 703.
That is to say, the computation results which the difference
supplementing unit 723 outputs is y1×Dr1+y2×Dr2+y3×Dr3+
. . . in each horizontal direction or vertical direction.

[1172]The continuity direction computation unit 731 of the continuity
direction derivation unit 703 computes the angle (direction) of the fine
line based on the supplemented computation results in each horizontal
direction or vertical direction input from the data supplementing unit
702, and outputs the computed angle as continuity information.

[1173]Now, the method for computing the direction (gradient or angle of
the fine line) of the fine line will be described.

[1174]Enlarging the portion surrounded by the white line in an input image
such as shown in FIG. 127A shows that the fine line (the white line
extending diagonally in the upwards right direction in the drawing) is
actually displayed as in FIG. 127B. That is to say, in the real world,
the image is such that as shown in FIG. 127C, the two levels of fine-line
level (the lighter hatched portion in FIG. 127C) and the background level
form boundaries, and no other levels exist. Conversely, the image taken
with the sensor 2, i.e., the image imaged in increments of pixels, is an
image wherein, as shown in FIG. 127B, there is a repeated array in the
fine line direction of blocks which are made up of multiple pixels with
the background level and the fine line level spatially mixed due to the
integration effects, arrayed in the vertical direction so that the ratio
(mixture ratio) thereof changes according to a certain pattern. Note that
in FIG. 127B, each square-shaped grid represents one pixel of the CCD,
and we will say that the length of each side thereof is d_CCD. Also, the
portions of the grids filled in lattice-like are the minimum value of the
pixel values, equivalent to the background level, and the other portions
filled in hatched have a greater pixel value the less dense the shading
is (accordingly, white grids with no shading have the maximum value of
the pixel values).

[1175]In the event that a fine line exists on the background in the real
world as shown in FIG. 128A, the image of the real world can be
represented as shown in FIG. 128B with the level as the horizontal axis
and the area in the image of the portion corresponding to that level as
the vertical axis, which shows that there is a relation in area occupied
in the image between the area corresponding to the background in the
image and the area of the portion corresponding to the fine line.

[1176]In the same way, as shown in FIG. 129A, the image taken with the
sensor 2 is an image wherein there is a repeated array, in the direction
in which the fine line exists, of blocks which are made up of pixels with
the background level and the fine line level mixed, arrayed in the
vertical direction within pixels of the background level, so that the
mixture ratio thereof changes according to a certain pattern;
accordingly, a spatially mixed region, made up of pixels occurring as the
result of spatially mixing the background and the fine line and having a
level partway between the background level (background region) and the
fine line level, is formed as shown in FIG. 129B. Now, while the
vertical axis in FIG. 129B is the number of pixels, the area of one pixel
is (d_CCD)2, so it can be said that the relation between the level
of pixel values and the number of pixels in FIG. 129B is the same as the
relation between the level of pixel values and distribution of area.

[1177]The same results are obtained regarding the portion enclosed with
the white line in the actual image shown in FIG. 130A (an image 31
pixels×31 pixels), as shown in FIG. 130B. As shown in FIG. 130B,
the background portions shown in FIG. 130A (the portions which appear
black in FIG. 130A) have a distribution of a great number of pixels with
low pixel value level (with pixel values around 20), and these portions
with little change make up the image of the background region. Conversely, the
portion wherein the pixel value level in FIG. 130B is not low, i.e.,
pixels with pixel value level distribution of around 40 to around 160 are
pixels belonging to the spatial mixture region which make up the image of
the fine line, and while the number of pixels for each pixel value is not
great, these are distributed over a wide range of pixel values.

[1178]Now, viewing the levels of each of the background and the fine line
in the real world image along the arrow direction (Y-coordinate
direction) shown in FIG. 131A for example, change occurs as shown in FIG.
131B. That is to say, the background region from the start of the arrow
to the fine line has a relatively low background level, and the fine line
region has the fine line level which is a high level, and passing the
fine line region and returning to the background region returns to the
background level which is a low level. As a result, this forms a
pulse-shaped waveform where only the fine line region is high level.

[1179]Conversely, in the image taken with the sensor 2, the relationship
between the pixel values of the pixels of the spatial direction X=X1 in
FIG. 132A corresponding to the arrow in FIG. 131A (the pixels indicated
by black dots in FIG. 132A) and the spatial direction Y of these pixels
is as shown in FIG. 132B. Note that in FIG. 132A, the space between the
two white lines extending toward the upper right represents the fine line
in the image of the real world.

[1180]That is to say, as shown in FIG. 132B, the pixel corresponding to
the center pixel in FIG. 132A has the highest pixel value, so the pixel
values of the pixels increase as the position in the spatial direction Y
moves from the lower part of the figure toward the center pixel, and then
gradually decrease after passing the center position. As a result, as
shown in FIG. 132B, peak-shaped waveforms are formed. Also, the change in
pixel values of the pixels corresponding to the spatial directions X=X0
and X2 in FIG. 132A also have the same shape, although the position of
the peak in the spatial direction Y is shifted according to the gradient
of the fine line.

[1181]Even in a case of an image actually taken with the sensor 2 as shown
in FIG. 133A for example, the same sort of results are obtained, as shown
in FIG. 133B. That is to say, FIG. 133B shows the change in pixel values
corresponding to the spatial direction Y for each predetermined spatial
direction X (in the figure, X=561, 562, 563) of the pixel values around
the fine line in the range enclosed by the white lines in the image in FIG.
133A. In this way, the image taken with the actual sensor 2 also has
waveforms wherein X=561 peaks at Y=730, X=562 at Y=705, and X=563 at
Y=685.

[1182]Thus, while the waveform indicating change of level near the fine
line in the real world image exhibits a pulse-like waveform, the waveform
indicating change of pixel values in the image taken by the sensor 2
exhibits peak-shaped waveforms.

[1183]In other words, the level of the real world image should be a
waveform as shown in FIG. 131B, but distortion occurs in the
change in the imaged image due to having been taken by the sensor 2, and
accordingly it can be said that this has changed into a waveform which is
different from the real world image (wherein information of the real
world has dropped out), as shown in FIG. 132B.

[1184]Accordingly, a model (equivalent to the model 705 in FIG. 123) for
approximately describing the real world from the image data obtained from
the sensor 2 is set, in order to obtain continuity information of the
real world image from the image taken by the sensor 2. For example, in
the case of a fine line, a model of the real world image is set as shown
in FIG. 134. That is to say, parameters are set with the level of the
background portion at the left part of the image as B1, the level of the
background portion at the right part of the image as B2, the level of the
fine line portion as L, the mixture ratio of the fine line as α, the
width of the fine line as W, and the angle of the fine line as to the
horizontal direction as θ; this is formed into a model, a function approximately
expressing the real world is set, an approximation function which
approximately expresses the real world is obtained by obtaining the
parameters, and the direction (gradient or angle as to the reference
axis) of the fine line is obtained from the approximation function.

[1185]At this time, the left part and right part of the background region
can be approximated as being the same, and accordingly are integrated
into B (=B1=B2) as shown in FIG. 135. Also, the width of the fine line is
to be one pixel or more. At the time of taking the real world thus set
with the sensor 2, the taken image is imaged as shown in FIG. 136A. Note
that in FIG. 136A, the space between the two white lines extending
towards the upper right represents the fine line on the real world image.

[1186]That is to say, pixels existing in a position on the fine line of
the real world are of a level closest to the level of the fine line, so
the pixel value decreases the further away from the fine line in the
vertical direction (direction of the spatial direction Y), and the pixel
values of pixels which exist at positions which do not come into contact
with the fine line region, i.e., background region pixels, have pixel
values of the background value. At this time, the pixel values of the
pixels existing at positions straddling the fine line region and the
background region have pixel values wherein the pixel value B of the
background level and the pixel value L of the fine line level are mixed
with a mixture ratio α.

[1187]In the case of taking each of the pixels of the imaged image as the
pixel of interest in this way, the data acquiring unit 712 extracts the
pixels of an acquired block corresponding to the pixel of interest,
extracts a dynamic range block for each of the pixels making up the
extracted acquired block, and extracts from the pixels making up the
dynamic range block a pixel with a pixel value which is the maximum value
and a pixel with a pixel value which is the minimum value. That is to
say, in the event of extracting pixels of a dynamic range block (e.g.,
the 7 pixels of pix1 through pix7 surrounded by the black solid line in
the drawing) corresponding to a predetermined pixel in the acquired block
(the pixel pix4 regarding which a square is drawn with a black solid line
in one grid of the drawing) as shown in FIG. 136A, the image of the real
world corresponding to each pixel is as shown in FIG. 136B.

[1188]That is to say, as shown in FIG. 136B, with the pixel pix1, the
portion taking up generally 1/8 of the area to the left is the background
region, and the portion taking up generally 7/8 of the area to the right
is the fine line region. With the pixel pix2, generally the entire region
is the fine line region. With the pixel pix3, the portion taking up
generally 7/8 of the area to the left is the fine line region, and the
portion taking up generally 1/8 of the area to the right is the
background region. With the pixel pix4, the portion taking up generally
2/3 of the area to the left is the fine line region, and the portion
taking up generally 1/3 of the area to the right is the background
region. With the pixel pix5, the portion taking up generally 1/3 of the
area to the left is the fine line region, and the portion taking up
generally 2/3 of the area to the right is the background region. With the
pixel pix6, the portion taking up generally 1/8 of the area to the left
is the fine line region, and the portion taking up generally 7/8 of the
area to the right is the background region. Further, with the pixel pix7,
the entire region is the background region.

[1189]As a result, the pixel values of the pixels pix1 through 7 of the
dynamic range block shown in FIG. 136A and FIG. 136B are pixel values
wherein the background level and the fine line level are mixed at a
mixture ratio corresponding to the ratio of the fine line region and the
background region.

[1191]Accordingly, of the pixel values of the pixels pix1 through 7 of the
dynamic range block that has been extracted, pixel pix2 is the highest,
followed by pixels pix1 and 3, and then in the order of pixel value,
pixels pix4, 5, 6, and 7. Accordingly, with the case shown in FIG. 136B,
the maximum value is the pixel value of the pixel pix2, and the minimum
value is the pixel value of the pixel pix7.

[1192]Also, as shown in FIG. 137A, the direction of the fine line can be
said to be the direction in which pixels with maximum pixel values
continue, so the direction in which pixels with the maximum value are
arrayed is the direction of the fine line.

[1193]Now, the gradient Gf1 indicating the direction of the fine line
is the ratio of change in the spatial direction Y (change in distance) as
to the unit distance in the spatial direction X, so in the case of an
illustration such as in FIG. 137A, the distance of the spatial direction
Y as to the distance of one pixel in the spatial direction X in the
drawing is the gradient Gf1.

[1194]Change of pixel values in the spatial direction Y of the spatial
directions X0 through X2 is such that the peak waveform is repeated at
predetermined intervals for each spatial direction X, as shown in FIG.
137B. As described above, the direction of the fine line is the direction
in which pixels with maximum value continue in the image taken by the
sensor 2, so the interval S in the spatial direction Y between the
positions where the maximum values appear for adjacent positions in the
spatial direction X is the gradient Gf1 of the fine line. That is to say,
as shown in FIG. 137C, the amount of change in
the vertical direction as to the distance of one pixel in the horizontal
direction is the gradient Gf1. Accordingly, with the horizontal
direction corresponding to the gradient thereof as the reference axis,
and the angle of the fine line thereto expressed as θ, as shown in
FIG. 137C, the gradient Gf1 (corresponding to the angle with the
horizontal direction as the reference axis) of the fine line can be
expressed in the relation shown in the following Expression (69).

θ=Tan-1(Gf1)(=Tan-1(S)) (69)

[1195]Also, in the case of setting a model such as shown in FIG. 135, and
further assuming that the relationship between the pixel values of the
pixels in the spatial direction Y is such that the waveform of the peaks
shown in FIG. 137B is formed of perfect triangles (an isosceles triangle
waveform where the leading edge or trailing edge change linearly), and,
as shown in FIG. 138, with the maximum value of pixel values of the
pixels existing in the spatial direction Y, in the spatial direction X of
a predetermined pixel of interest as Max=L (here, a pixel value
corresponding to the level of the fine line in the real world), and the
minimum value as Min=B (here, a pixel value corresponding to the level of
the background in the real world), the relationship illustrated in the
following Expression (70) holds.

L-B=Gf1×d_y (70)

[1196]Here, d_y indicates the difference in pixel values between pixels in
the spatial direction Y.

[1197]That is to say, the greater the gradient Gf1 in the spatial
direction is, the closer the fine line is to being vertical, so the
waveform of the peaks is a waveform of isosceles triangles with a great
base, and conversely, the smaller the gradient S is, the smaller the base
of the isosceles triangles of the waveform is. Consequently, the greater
the gradient Gf1 is, the smaller the difference d_y of the pixel
values between pixels in the spatial direction Y is, and the smaller the
gradient S is, the greater the difference d_y of the pixel values between
pixels in the spatial direction Y is.

[1198]Accordingly, obtaining the gradient Gf1 where the above
Expression (70) holds allows the angle θ of the fine line as to the
reference axis to be obtained. Expression (70) is a single-variable
function wherein Gf1 is the variable, so this could be obtained
using one set of difference d_y of the pixel values between pixels (in
the vertical direction) around the pixel of interest, and the difference
between the maximum value and minimum value (L-B), however, as described
above, this uses an approximation expression assuming that the change of
pixel values in the spatial direction Y assumes a perfect triangle, so
dynamic range blocks are extracted for each of the pixels of the
extracted block corresponding to the pixel of interest, and further the
dynamic range Dr is obtained from the maximum value and the minimum value
thereof, as well as statistically obtaining by the least-square method,
using the difference d_y of pixel values between pixels in the spatial
direction Y for each of the pixels in the extracted block.

[1199]Now, before starting description of statistical processing by the
least-square method, first, the extracted block and dynamic range block
will be described in detail.

[1200]As shown in FIG. 139 for example, the extracted block may be three
pixels above and below the pixel of interest (the pixel of the grid where
a square is drawn with black solid lines in the drawing) in the spatial
direction Y, and one pixel to the right and left in the spatial direction
X, for a total of 15 pixels, or the like. Also, in this case, for the
difference d_y of pixel values between each of the pixels in the
extracted block, with difference corresponding to pixel pix11 being
expressed as d_y11 for example, in the case of spatial direction X=X0,
differences d_y11 through d_y16 are obtained for the pixel values between
the pixels pix11 and pix12, pix12 and pix13, pix13 and pix14, pix14 and
pix15, pix15 and pix16, and pix16 and pix17. At this time, the difference of pixel values
between pixels is obtained in the same way for spatial direction X=X1 and
X2, as well. As a result, there are 18 differences d_y of pixel values
between the pixels.

[1201]Further, with regard to the pixels of the extracted block,
determination has been made for this case based on the determination
results of the horizontal/vertical determining unit 711 that the pixels
of the dynamic range block are, with regard to pix11 for example, in the
vertical direction, so as shown in FIG. 139, the pixel pix11 is taken
along with three pixels in both the upwards and downwards direction which
is the vertical direction (spatial direction Y) so that the range of the
dynamic range block B1 is 7 pixels, the maximum value and minimum value
of the pixel values of the pixels in this dynamic range block B1 is
obtained, and further, the dynamic range obtained from the maximum value
and the minimum value is taken as dynamic range Dr11. In the same way,
the dynamic range Dr12 is obtained regarding the pixel pix12 of the
extracted block from the 7 pixels of the dynamic range block B2 shown in
FIG. 139. Thus, the gradient Gf1 is statistically
obtained using the least-square method, based on the combination of the
18 pixel differences d_yi in the extracted block and the corresponding
dynamic ranges Dri.
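
To make the gathered quantities concrete, the following Python sketch (hypothetical function name and parameter defaults) collects, for one pixel of interest judged to lie on a near-vertical fine line, the vertical differences d_y within each column of the extracted block and the dynamic range Dri of a vertical dynamic range block around each of those pixels. It assumes the pixel of interest is far enough from the image border and uses absolute differences, which the text does not specify.

import numpy as np

def block_differences_and_dynamic_ranges(image, cy, cx, half_h=3, half_w=1, dr_half=3):
    # For a pixel of interest at row cy, column cx judged to lie on a
    # near-vertical fine line: gather, column by column of the extracted
    # block, the vertical differences d_y between adjacent pixels and the
    # dynamic range Dri (max - min of a vertical dynamic range block) of the
    # upper pixel of each adjacent pair.  Assumes (cy, cx) is far enough from
    # the image border; absolute differences are an assumption of this sketch.
    img = np.asarray(image, dtype=float)
    d_y, dr = [], []
    for x in range(cx - half_w, cx + half_w + 1):
        col = img[cy - half_h:cy + half_h + 1, x]
        d_y.extend(np.abs(np.diff(col)))
        for yy in range(cy - half_h, cy + half_h):
            drb = img[yy - dr_half:yy + dr_half + 1, x]
            dr.append(drb.max() - drb.min())
    return np.array(d_y), np.array(dr)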

[1202]Next, the single-variable least-square solution will be described.
Let us assume here that the determination results of the
horizontal/vertical determining unit 711 are the vertical direction.

[1203]The single-variable least-square solution is for obtaining, for
example, the gradient Gf1 of the straight line made up of prediction
values Dri_c wherein the distance to all of the actual measurement values
indicated by black dots in FIG. 140 is minimal. Thus, the gradient Gf1 is
obtained with the following technique based on the relationship indicated
in the above-described Expression (70).

[1204]That is to say, with the difference between the maximum value and
the minimum value as the dynamic range Dr, the above Expression (70) can
be described as in the following Expression (71).

Dr=Gf1×d_y (71)

[1205]Thus, the dynamic range Dri_c can be obtained by substituting the
difference d_yi between each of the pixels in the extracted block into
the above Expression (71). Accordingly, the relation of the following
Expression (72) is satisfied for each of the pixels.

Dri_c=Gf1×d_yi (72)

[1206]Here, the difference d_yi is the difference in pixel values between
pixels in the spatial direction Y for each of the pixels i (for example,
the difference in pixel values between pixels adjacent to the pixel i in
the upward direction or the downward direction), and Dri_c is the dynamic
range obtained when Expression (70) holds regarding the pixel i.

[1207]As described above, the least-square method as used here is a method
for obtaining the gradient Gf1 wherein the sum of squared
differences Q of the dynamic range Dri_c for the pixel of the extracted
block and the dynamic range Dri_r which is the actual measured value of
the pixel i, obtained with the method described with reference to FIG.
136A and FIG. 136B, is the smallest for all pixels within the image.
Accordingly, the sum of squared differences Q can be obtained by the
following Expression (73).

Q=(Dr1_r-Gf1×d_y1)2+(Dr2_r-Gf1×d_y2)2+ . . . +(Drn_r-Gf1×d_yn)2 (73)

[1208]The sum of squared differences Q shown in Expression (73) is a
quadratic function, which assumes a downward-convex curve as shown in
FIG. 141 regarding the variable Gf1 (gradient Gf1), so the Gf1min at
which the sum of squared differences Q is the smallest is the solution of
the least-square method.

[1209]Differentiating the sum of squared differences Q shown in Expression
(73) with the variable Gf1 yields dQ/dGf1 shown in the
following Expression (74).

dQ/dGf1=-2×{d_y1×(Dr1_r-Gf1×d_y1)+d_y2×(Dr2_r-Gf1×d_y2)+ . . . +d_yn×(Drn_r-Gf1×d_yn)} (74)

[1210]The Gf1 at which Expression (74) becomes 0 is the Gf1min giving the
minimal value of the sum of squared differences Q shown in FIG. 141, so
expanding the Expression obtained by setting Expression (74) to 0 yields
the gradient Gf1 in the following Expression (75).

Gf1=(d_y1×Dr1_r+d_y2×Dr2_r+ . . . +d_yn×Drn_r)/((d_y1)2+(d_y2)2+ . . . +(d_yn)2) (75)

[1212]Thus, substituting the obtained gradient Gf1 into the above
Expression (69) yields the angle θ of the fine line with the
horizontal direction as the reference axis, corresponding to the gradient
Gf1 of the fine line.

[1213]Now, the above description has been made regarding a case wherein
the pixel of interest is a pixel on a fine line which is within a range
of angle θ of 45 degrees≦θ≦135 degrees with the horizontal direction as
the reference axis. In the event that the pixel of interest is a pixel on
a fine line closer to the horizontal direction, within a range of angle
θ of 0 degrees≦θ<45 degrees or 135 degrees≦θ<180 degrees with the
horizontal direction as the reference axis for example, the difference of
pixel values between pixels adjacent to the pixel i in the horizontal
direction is taken as d_xi, and in the same way, at the time of obtaining
the maximum value or minimum value of pixel values from the multiple
pixels corresponding to the pixel i, the pixels of the dynamic range
block to be extracted are selected from multiple pixels existing in the
horizontal direction as to the pixel i. With the processing in this case,
the relationship between the horizontal direction and vertical direction
in the above description is simply switched, so description thereof will
be omitted.

[1214]Also, similar processing can be used to obtain the angle
corresponding to the gradient of a two-valued edge.

[1215]That is to say, enlarging the portion in an input image such as that
enclosed by the white lines as illustrated in FIG. 142A shows that the
edge portion in the image (the lower part of the cross-shaped character
written in white on a black banner in the figure) (hereafter, an edge
portion in an image made up of two value levels will also be called a
two-valued edge) is actually displayed as shown in FIG. 142B. That is to
say, in the real world, the image has a boundary formed of the two types
of levels of a first level (the field level of the banner) and a second
level (the level of the character (the hatched portion with low
concentration in FIG. 142C)), and no other levels exist. Conversely, with
the image taken by the sensor 2, i.e., the image taken in increments of
pixels, a portion where pixels of the first level are arrayed and a
portion where pixels of the second level are arrayed border on a region
in which blocks of pixels, produced by spatially mixing the first level
and the second level such that the ratio (mixture ratio) thereof changes
according to a certain pattern, are arrayed in the vertical direction and
repeated in the direction in which the edge exists.

[1216]That is to say, as shown in FIG. 143A, with regard to the spatial
direction X=X0, X1, and X2, the respective change of pixel values in the
spatial direction Y is such that, as shown in FIG. 143B, the pixel values
are at a predetermined minimum pixel value from the bottom of the
figure to near to the two-valued edge (the straight line in FIG. 143A
which heads toward the upper right) boundary, but the pixel value
gradually increases near the two-valued edge boundary, and at the point
PE in the drawing past the edge the pixel value reaches a
predetermined maximum value. More specifically, the change of the spatial
direction X=X0 is such that the pixel value gradually increases after
passing the point PS which is the minimum value of the pixel value,
and reaches the point P0 where the pixel value is the maximum value, as
shown in FIG. 143B. In comparison with this, the change of pixel values
of the pixels in the spatial direction X=X1 exhibits a waveform offset in
the spatial direction, and accordingly increases to the maximum value of
the pixel value via the point P1 in the drawing, with the position where
the pixel value gradually increases from the minimum value of pixel
values being a direction offset in the positive direction of the spatial
direction Y as shown in FIG. 143B. Further, change of pixel values in the
spatial direction Y at the spatial direction X=X2 decreases via a point
P2 in the drawing which is even further shifted in the positive direction
of the spatial direction Y, and goes from the maximum value of the pixel
value to the minimum value.

[1217]A similar tendency can be observed at the portion enclosed with the
white line in the actual image, as well. That is to say, in the portion
enclosed with the white line in the actual image in FIG. 144A (a 31
pixel×31 pixel image), the background portion (the portion which
appears black in FIG. 144A) has distribution of a great number of pixels
with low pixel values (pixel value around 90) as shown in FIG. 144B, and
these portions with little change form the image of the background
region. Conversely, the portion in FIG. 144B wherein the pixel values are
not low, i.e., pixels with pixel values distributed around 100 to 200 are
a distribution of pixels belonging to the spatially mixed region between
the character region and the background region, and while the number of
pixels per pixel value is small, the distribution is over a wide range of
pixel values. Further, a great number of pixels of the character region
with high pixel values (the portion which appears white in FIG. 144A) are
distributed around the pixel value shown as 220.

[1218]As a result, the change of pixel values in the spatial direction Y
as to the predetermined spatial direction X in the edge image shown in
FIG. 145A is as shown in FIG. 145B.

[1219]That is, FIG. 145B illustrates the change of pixel values
corresponding to the spatial direction Y, for each predetermined spatial
direction X (in the drawing, X=658, 659, 660) regarding the pixel values
near the edge within the range enclosed by the white lines in the image
in FIG. 145A. As can be seen here, in the image taken by the actual
sensor 2 as well, with X=658, the pixel value begins to increase around
Y=374 (the distribution indicated by black circles in the drawing), and
reaches the maximum value around Y=382. Also, with X=659, the pixel value
begins to increase around Y=378, which is shifted in the positive
direction as to the spatial direction Y (the distribution indicated by
black triangles in the drawing), and reaches the maximum pixel value
around Y=386. Further, with X=660, the pixel value begins to increase
around Y=382, which is shifted even further in the positive direction as
to the spatial direction Y (the distribution indicated by black squares
in the drawing), and reaches the maximum value around Y=390.

[1220]Accordingly, in order to obtain continuity information of the real
world image from the image taken by the sensor 2, a model is set to
approximately describe the real world from the image data acquired by the
sensor 2. For example, in the case of a two-valued edge, a real world
image is set as shown in FIG. 146. That is to say, parameters are set
with the background portion level to the left in the figure as V1, the
character portion level to the right side in the figure as V2, the
mixture ratio between pixels around the two-valued edge as α, and the
angle of the edge as to the horizontal direction as θ. These are formed
into a model, a function which approximately expresses the real world is
set, the parameters of that function are obtained, and the direction
(gradient, or angle as to the reference axis) of the edge is obtained
from the approximation function.
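As a simple illustration of this kind of model (a minimal sketch in
Python with hypothetical names, not the embodiment's implementation), the
pixel value of a pixel straddling the two-valued edge can be written as a
spatial mixture of the two levels V1 and V2, with the mixture ratio α
giving the area fraction occupied by the V1 side; in the model of FIG.
146, α itself would change according to the edge angle θ and the pixel's
position relative to the edge.

    def two_valued_edge_pixel(V1, V2, alpha):
        # Pixel value of a pixel that spatially mixes the background level
        # V1 and the character level V2; alpha is the area fraction of V1.
        return alpha * V1 + (1.0 - alpha) * V2

    # Example: a pixel half covered by the banner field (V1=30) and half
    # by the white character (V2=220) takes an intermediate value of 125.
    print(two_valued_edge_pixel(30.0, 220.0, 0.5))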

[1221]Now, the gradient indicating the direction of the edge is the ratio
of change in the spatial direction Y (change in distance) as to the unit
distance in the spatial direction X, so in a case such as shown in FIG.
147A, the distance in the spatial direction Y as to the distance of one
pixel in the spatial direction X in the drawing is the gradient.

[1222]The change in pixel values as to the spatial direction Y for each of
the spatial directions X0 through X2 is such that the same waveforms are
repeated at predetermined intervals for each of the spatial directions X,
as shown in FIG. 147B. As described above, the edge in the image taken by
the sensor 2 is the direction in which similar pixel value change (in
this case, change in pixel values in a predetermined spatial direction Y,
changing from the minimum value to the maximum value) spatially
continues, so the interval S, in the spatial direction Y, between the
positions where the change of pixel values begins (or where the change
ends) for each of the spatial directions X corresponds to the gradient
Gfe of the edge. That is to say, as shown in FIG. 147C, the amount of
change in the vertical direction as to the distance of one pixel in the
horizontal direction is the gradient Gfe.

[1223]Now, this relationship is the same as the relationship regarding the
gradient Gf1 of the fine line described above with reference to FIG.
137A through C. Accordingly, the relational expression is the same. That
is to say, the relational expression in the case of a two-valued edge is
that shown in FIG. 148, with the pixel value of the background region as
V1, and the pixel value of the character region as V2, each as the
minimum value and the maximum value. Also, with the mixture ratio of
pixels near the edge as α, and the edge gradient as Gfe,
relational expressions which hold will be the same as the above
Expression (69) through Expression (71) (with Gf1 replaced with
Gfe).

[1224]Accordingly, the data continuity detecting unit 101 shown in FIG.
124 can detect the angle corresponding to the gradient of the fine line,
and the angle corresponding to the gradient of the edge, as data
continuity information with the same processing. Accordingly, in the
following, gradient will collectively refer to the gradient of the fine
line and the gradient of the two-valued edge, and will be called gradient
Gf. Also, the gradient Gf1 in the above Expression (73) through
Expression (75) may be Gfe, and consequently, will be considered to
be substitutable with Gf.
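Since Expression (69) itself is not reproduced in this passage, the exact
form of the conversion between the gradient Gf and the angle θ is assumed
here, as a sketch only, to be the usual arctangent relationship:

    import math

    def gradient_to_angle(Gf):
        # Angle theta in degrees, with the horizontal direction as the
        # reference axis, assumed to correspond to the gradient Gf as
        # theta = arctan(Gf).
        return math.degrees(math.atan(Gf))

    print(gradient_to_angle(1.0))   # 45.0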

[1225]Next, the processing for detecting data continuity will be described
with reference to the flowchart in FIG. 149.

[1226]In step S701, the horizontal/vertical determining unit 711
initializes a counter T which identifies each of the pixels of the input
image.

[1228]Now, the processing for extracting data will be described with
reference to the flowchart in FIG. 150.

[1229]In step S711, the horizontal/vertical determining unit 711 of the
data selecting unit 701 computes, for each pixel of interest T, as
described with reference to FIG. 125, the sum of differences (activity)
of the pixel values between the pixels in the horizontal direction
(hdiff) and the sum of differences (activity) between pixels in the
vertical direction (vdiff), with regard to the nine pixels adjacent in
the horizontal, vertical, and diagonal directions, and further obtains
the difference thereof (hdiff minus vdiff); in the event that (hdiff
minus vdiff)≧0, with the horizontal direction as the reference axis,
determination is made that the pixel of interest T is a pixel near a fine
line or two-valued edge closer to the vertical direction, wherein the
angle θ as to the reference axis is 45 degrees≦θ<135 degrees, and
determination results indicating that the extracted block to be used
corresponds to the vertical direction are output to the data acquiring
unit 712 and the data supplementing unit 702.

[1230]On the other hand, in the event that (hdiff minus vdiff)<0, with
the horizontal direction as the reference axis, determination is made by
the horizontal/vertical determining unit 711 that the pixel of interest
is a pixel near a fine line or edge closer to the horizontal direction,
wherein the angle θ of the fine line or the two-valued edge as to the
reference axis is 0 degrees≦θ<45 degrees or 135 degrees≦θ<180 degrees,
and determination results indicating that the extracted block to be used
corresponds to the horizontal direction are output to the data acquiring
unit 712 and the data supplementing unit 702.

[1231]That is, the gradient of the fine line or two-valued edge being
closer to the vertical direction means that, as shown in FIG. 131A for
example, the portion of the fine line which intersects with the arrow in
the drawing is greater, so extracted blocks with an increased number of
pixels in the vertical direction are set (vertically long extracted
blocks are set). In the same way, with the case of fine lines having a
gradient closer to the horizontal direction, extracted blocks with an
increased number of pixels in the horizontal direction are set
(horizontally long extracted blocks are set). Thus, accurate maximum
values and minimum values can be computed without increasing the amount
of unnecessary calculations.
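The following sketch gives one plausible reading of this determination
(hypothetical helper in Python; the exact activity computation of FIG.
125 may differ): the activity between horizontally adjacent pixels
(hdiff) and vertically adjacent pixels (vdiff) is compared over the 3×3
neighbourhood of the pixel of interest, and the orientation of the
extracted block is chosen accordingly.

    import numpy as np

    def horizontal_vertical_determination(image, x, y):
        # 3x3 neighbourhood of the pixel of interest (boundary checks omitted).
        block = image[y - 1:y + 2, x - 1:x + 2].astype(float)
        hdiff = np.abs(np.diff(block, axis=1)).sum()  # horizontal differences
        vdiff = np.abs(np.diff(block, axis=0)).sum()  # vertical differences
        # (hdiff - vdiff) >= 0: structure closer to the vertical direction
        # (45 deg <= theta < 135 deg), so a vertically long extracted block
        # is used; otherwise a horizontally long one is used.
        return "vertical" if (hdiff - vdiff) >= 0 else "horizontal"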

[1232]In step S712, the data acquiring unit 712 extracts pixels of an
extracted block corresponding to the determination results input from the
horizontal/vertical determining unit 711 indicating the horizontal
direction or the vertical direction for the pixel of interest. That is to
say, as shown in FIG. 139 for example, (three pixels in the horizontal
direction)×(seven pixels in the vertical direction) for a total of
21 pixels, centered on the pixel of interest, are extracted as the
extracted block, and stored.

[1233]In step S713, the data acquiring unit 712 extracts the pixels of
dynamic range blocks corresponding to the direction corresponding to the
determination results of the horizontal/vertical determining unit 711 for
each of the pixels in the extracted block, and stores these. That is to
say, as described above with reference to FIG. 139, in this case, with
regard to the pixel pix11 of the extracted block for example, the
determination results of the horizontal/vertical determining unit 711
indicate the vertical direction, so the data acquiring unit 712 extracts
the dynamic range block B1 in the vertical direction, and extracts the
dynamic range block B2 for the pixel pix12 in the same way. Dynamic range
blocks are extracted for the other pixels of the extracted block in the
same way.

[1234]That is to say, information of pixels necessary for computation of
the normal equation regarding a certain pixel of interest T is stored in
the data acquiring unit 712 with this data extracting processing (a
region to be processed is selected).
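A minimal sketch of this data extraction for the vertical-direction case
of FIG. 139 (hypothetical helper names, boundary handling omitted) is as
follows: a 3 (horizontal) × 7 (vertical) extracted block centred on the
pixel of interest is taken, and for each of its pixels a 7-pixel dynamic
range block is taken in the vertical direction.

    import numpy as np

    def extract_data_vertical(image, x, y):
        # Extracted block: 7 rows x 3 columns centred on the pixel of interest.
        extracted_block = image[y - 3:y + 4, x - 1:x + 2]
        # Dynamic range block per extracted-block pixel: the pixel itself plus
        # three pixels above and three below (7 pixels in the vertical direction).
        dynamic_range_blocks = {}
        for dy in range(-3, 4):
            for dx in range(-1, 2):
                py, px = y + dy, x + dx
                dynamic_range_blocks[(py, px)] = image[py - 3:py + 4, px]
        return extracted_block, dynamic_range_blocks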

[1235]Now, let us return to the flowchart in FIG. 149.

[1236]In step S703, the data supplementing unit 702 performs processing
for supplementing the values necessary for each of the items in the
normal equation (Expression (74) here).

[1237]Now, the supplementing process to the normal equation will be
described with reference to the flowchart in FIG. 151.

[1238]In step S721, the difference supplementing unit 721 obtains
(detects) the difference of pixel values between the pixels of the
extracted block stored in the data acquiring unit 712, according to the
determination results of the horizontal/vertical determining unit 711 of
the data selecting unit 701, and further raises these to the second power
(squares) and supplements. That is to say, in the event that the
determination results of the horizontal/vertical determining unit 711 are
the vertical direction, the difference supplementing unit 721 obtains the
difference of pixel values between pixels adjacent to each of the pixels
of the extracted block in the vertical direction, and further squares and
supplements these. In the same way, in the event that the determination
results of the horizontal/vertical determining unit 711 are the
horizontal direction, the difference supplementing unit 721 obtains the
difference of pixel values between pixels adjacent to each of the pixels
of the extracted block in the horizontal direction, and further squares
and supplements these. As a result, the difference supplementing unit 721
generates the sum of squared differences serving as the denominator in
the above-described Expression (75), and stores this.

[1239]In step S722, the MaxMin acquiring unit 722 obtains the maximum
value and minimum value of the pixel values of the pixels contained in
the dynamic range block stored in the data acquiring unit 712, and in
step S723, obtains (detects) the dynamic range from the maximum value and
minimum value, and outputs this to the difference supplementing unit 723.
That is to say, in the case of a 7-pixel dynamic range block made up of
pixels pix1 through pix7 as illustrated in FIG. 136B, the pixel value of
pix2 is detected as the maximum value, the pixel value of pix7 is
detected as the minimum value, and the difference of these is obtained as
the dynamic range.

[1240]In step S724, the difference supplementing unit 723 obtains
(detects), from the pixels in the extracted block stored in the data
acquiring unit 712, the difference in pixel values between pixels
adjacent in the direction corresponding to the determination results of
the horizontal/vertical determining unit 711 of the data selecting unit
701, and supplements these values multiplied by the dynamic range input
from the MaxMin acquiring unit 722. That is to say, the difference
supplementing unit 723 generates a sum of items to serve as the numerator
in the above-described Expression (75), and stores this.
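Putting the two supplemented sums together, the gradient follows directly
from Expression (75) as reconstructed above; the sketch below (not the
embodiment's code) assumes the differences d_yi and the measured dynamic
ranges Dri_r have already been collected for the extracted block.

    import numpy as np

    def estimate_gradient(d_y, Dr_r):
        # d_y:  vertical pixel-value differences d_yi of the extracted block.
        # Dr_r: measured dynamic ranges Dri_r of the corresponding dynamic
        #       range blocks.
        d_y, Dr_r = np.asarray(d_y, float), np.asarray(Dr_r, float)
        numerator = np.sum(d_y * Dr_r)      # sum supplemented by unit 723
        denominator = np.sum(d_y ** 2)      # sum supplemented by unit 721
        return numerator / denominator      # gradient Gf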

[1241]Now, let us return to description of the flowchart in FIG. 149.

[1242]In step S704, the difference supplementing unit 721 determines
whether or not the difference in pixel values between pixels (the
difference in pixel values between pixels adjacent in the direction
corresponding to the determination results of the horizontal/vertical
determining unit 711) has been supplemented for all pixels of the
extracted block, and in the event that determination is made that, for
example, the difference in pixel values between pixels has not been
supplemented for all pixels of the extracted block, the flow returns to
step S702, and the subsequent processing is repeated. That is to say, the
processing of step S702 through S704 is repeated until determination is
made that the difference in pixel values between pixels has been
supplemented for all pixels of the extracted block.

[1243]In the event that determination is made in step S704 that the
difference in pixel values between pixels has been supplemented for all
pixels of the extracted block, in step S705, the difference supplementing
units 721 and 723 output the supplementing results stored therein to the
continuity direction derivation unit 703.

[1244]In step S706, the continuity direction computation unit 731 solves
the normal equation given in the above-described Expression (75), based
on: the sum of squared difference in pixel values between pixels adjacent
in the direction corresponding to the determination results of the
horizontal/vertical determining unit 711, of the pixels in the acquired
block input from the difference supplementing unit 721 of the data
supplementing unit 702; the difference in pixel values between pixels
adjacent in the direction corresponding to the determination results of
the horizontal/vertical determining unit 711, of the pixels in the
acquired block input from the difference supplementing unit 723; and the
sum of products of the dynamic ranges corresponding to the pixels of the
obtained block; thereby statistically computing and outputting the angle
indicating the direction of continuity (the angle indicating the gradient
of the fine line or two-valued edge), which is the data continuity
information of the pixel of interest, using the least-square method.

[1245]In step S707, the data acquiring unit 712 determines whether or not
processing has been performed for all pixels of the input image, and in
the event that determination is made that processing has not been
performed for all pixels of the input image for example, i.e., that
information of the angle of the fine line or two-valued edge has not been
output for all pixels of the input image, the counter T is incremented by
1 in step S708, and the process returns to step S702. That is to say, the
processing of steps S702 through S708 is repeated until pixels of the
input image to be processed are changed and processing is performed for
all pixels of the input image. Change of pixel by the counter T may be
according to raster scan or the like for example, or may be sequential
change according to other rules.

[1246]In the event that determination is made in step S707 that processing
has been performed for all pixels of the input image, in step S709 the
data acquiring unit 712 determines whether or not there is a next input
image, and in the event that determination is made that there is a next
input image, the processing returns to step S701, and the subsequent
processing is repeated.

[1247]In the event that determination is made in step S709 that there is
no next input image, the processing ends.

[1248]According to the above processing, the angle of the fine line or
two-valued edge is detected as continuity information and output.

[1249]The angle of the fine line or two-valued edge obtained by this
statistical processing approximately matches the angle of the fine line
or two-valued edge obtained using correlation. That is to say, with
regard to the image of the range enclosed by the white lines in the image
shown in FIG. 152A, as shown in FIG. 152B, the angle indicating the
gradient of the fine line obtained by the method using correlation (the
black circles in the figure) and the angle of the fine line obtained by
statistical processing with the data continuity detecting unit 101 shown
in FIG. 124 (the black triangles in the figure) approximately agree at
the spatial direction Y coordinates near the fine line, with regard to
change in gradient in the spatial direction Y at predetermined
coordinates in the horizontal direction on the fine line. Note that in
FIG. 152B, the spatial directions Y=680 through 730 between the black
lines in the figure are the coordinates on the fine line.

[1250]In the same way, with regard to the image of the range enclosed by
the white lines in the image shown in FIG. 153A, as shown in FIG. 153B,
the angle indicating the gradient of the two-valued edge obtained by the
method using correlation (the black circles in the figure) and the angle
of the two-valued edge obtained by statistical processing with the data
continuity detecting unit 101 shown in FIG. 124 (the black triangles in
the figure) approximately agree at the spatial direction Y coordinates
near the fine line, with regard to change in gradient in the spatial
direction Y at predetermined coordinates in the horizontal direction on
the two-valued edge.

[1251]Note that in FIG. 153B, the spatial directions Y=(around) 376
through (around) 388 are the coordinates on the fine line.

[1252]Consequently, the data continuity detecting unit 101 shown in FIG.
124 can statistically obtain the angle indicating the gradient of the
fine line or two-valued edge (the angle with the horizontal direction as
the reference axis here) using information around each pixel for
obtaining the angle of the fine line or two-valued edge as the data
continuity, unlike the method using correlation with blocks made up of
predetermined pixels, and accordingly, there is no switching according to
predetermined angle ranges as observed with the method using correlation,
thus, the angle of the gradients of all fine lines or two-valued edges
can be obtained with the same processing, thereby enabling simplification
of the processing.

[1253]Also, while description has been made above regarding an example of
the data continuity detecting unit 101 outputting the angle between the
fine line or two-valued edge and a predetermined reference axis as the
continuity information, it is conceivable that, depending on the
subsequent processing, outputting the gradient as such may improve
processing efficiency. In such a case, the continuity direction
derivation unit 703 and continuity direction computation unit 731 of the
data continuity detecting unit 101 may output the gradient Gf of the
fine line or two-valued edge obtained by the least-square method as
continuity information, without change.

[1254]Further, while description has been made above regarding a case
wherein the dynamic range Dri_r in Expression (75) is computed having
been obtained for each of the pixels in the extracted block, if the
dynamic range block is set sufficiently great, i.e., if the dynamic range
is set using the pixel of interest and a great number of pixels
therearound, the maximum value and minimum value of the pixel values of
pixels in the image should always be selected for the dynamic range.
Accordingly, an arrangement may be made wherein the dynamic range Dri_r
is not computed for each pixel of the extracted block, but is instead
taken as a fixed value obtained as the dynamic range from the maximum
value and minimum value of the pixels in the extracted block or in the
image data.

[1255]That is to say, an arrangement may be made to obtain the angle
θ (gradient Gf) of the fine line by supplementing only the
difference in pixel values between the pixels, as in the following
Expression (76). Fixing the dynamic range in this way allows the
computation processing to be simplified, and processing can be performed
at high speed.

Gf=Dr_r×Σd_yi/Σ(d_yi)² (76)

[1256]Next, description will be made regarding the data continuity
detecting unit 101 for detecting the mixture ratio of the pixels as data
continuity information with reference to FIG. 154.

[1257]Note that with the data continuity detecting unit 101 shown in FIG.
154, portions which correspond to those of the data continuity detecting
unit 101 shown in FIG. 124 are denoted with the same symbols, and
description thereof will be omitted as appropriate.

[1259]A MaxMin acquiring unit 752 of the data supplementing unit 751
performs the same processing as the MaxMin acquiring unit 722 in FIG.
124: the maximum value and minimum value of the pixel values of the
pixels in the dynamic range block are obtained, the difference (dynamic
range) between the maximum value and minimum value is obtained and output
to supplementing units 753 and 755, and the maximum value is also output
to a difference computing unit 754.

[1260]The supplementing unit 753 squares the dynamic range obtained by the
MaxMin acquiring unit 752, performs supplementing for all pixels of the
extracted block, obtains the sum thereof, and outputs this to the mixture
ratio derivation unit 761.

[1261]The difference computing unit 754 obtains the difference between
each pixel in the acquired block of the data acquiring unit 712 and the
maximum value of the corresponding dynamic range block, and outputs this
to the supplementing unit 755.

[1262]The supplementing unit 755 multiplies the dynamic range (the
difference between the maximum value and minimum value) of each pixel of
the acquired block, input from the MaxMin acquiring unit 752, with the
difference, input from the difference computing unit 754, between the
pixel value of each of the pixels in the acquired block and the maximum
value of the corresponding dynamic range block, obtains the sum thereof,
and outputs this to the mixture ratio derivation unit 761.

[1263]A mixture ratio calculating unit 762 of the mixture ratio derivation
unit 761 statistically obtains the mixture ratio of the pixel of interest
by the least-square method, based on the values input from the
supplementing units 753 and 755 of the data supplementing unit, and
outputs this as data continuity information.

[1264]Next, the mixture ratio derivation method will be described.

[1265]As shown in FIG. 155A, in the event that a fine line exists on the
image, the image taken with the sensor 2 is an image such as shown in
FIG. 155B. In this image, let us take as the pixel of interest the pixel
enclosed by the black solid lines on the spatial direction X=X1 in FIG.
155B. Note that the range between the white lines in FIG. 155B indicates
the position corresponding to the fine line region in the real world. The
pixel value PS of this pixel should be an intermediate color between
the pixel value B corresponding to the level of the background region and
the pixel value L corresponding to the level of the fine line region;
in further detail, this pixel value PS should be a mixture of each
level according to the area ratio between the background region and fine
line region. Accordingly, the pixel value PS can be expressed by the
following Expression (77).

PS=α×B+(1-α)×L (77)

[1266]Here, α is the mixture ratio, and more specifically, indicates
the ratio of area which the background region occupies in the pixel of
interest. Accordingly, (1-α) can be said to indicate the ratio of
area which the fine line region occupies. Now, pixels of the background
region can be considered to be the component of an object existing in the
background, and thus can be said to be a background object component.
Also, pixels of the fine line region can be considered to be the
component of an object existing in the foreground as to the background
object, and thus can be said to be a foreground object component.

[1267]Consequently, the mixture ratio α can be expressed by the
following Expression (78) by expanding the Expression (77).

α=(PS-L)/(B-L) (78)

[1268]Further, in this case, we are assuming that the pixel of interest
exists at a position straddling the first pixel value (pixel value B)
region and the second pixel value (pixel value L) region, and
accordingly, the pixel value L can be substituted with the maximum value
Max of the pixel values, and further, the pixel value B can be
substituted with the minimum value Min of the pixel values. Accordingly,
the mixture ratio α can also be expressed by the following Expression
(79).

α=(PS-Max)/(Min-Max) (79)

[1269]As a result of the above, the mixture ratio α can be obtained
from the dynamic range (equivalent to (Min-Max)) of the dynamic range
block regarding the pixel of interest, and the difference between the
pixel of interest and the maximum value of pixels within the dynamic
range block, but in order to further improve precision, the mixture ratio
α will here be statistically obtained by the least-square method.

[1270]That is to say, expanding the above Expression (79) yields the
following Expression (80).

(PS-Max)=α×(Min-Max) (80)

[1271]As with the case of the above-described Expression (71), this
Expression (80) is a single-variable least-square equation. That is to
say, in Expression (71), the gradient Gf was obtained by the
least-square method, but here, the mixture ratio α is obtained.
Accordingly, the mixture ratio α can be statistically obtained by
solving the normal equation shown in the following Expression (81).

α=Σ((PSi-Maxi)×(Mini-Maxi))/Σ(Mini-Maxi)² (81)

[1272]Here, i is for identifying the pixels of the extracted block.
Accordingly, in Expression (81), the number of pixels in the extracted
block is n.
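A minimal sketch of this computation (in Python, with hypothetical names;
Expression (81) itself is reconstructed above) is:

    import numpy as np

    def estimate_mixture_ratio(PS, Max, Min):
        # PS:  pixel values of the n pixels of the extracted block.
        # Max: maximum value of each pixel's dynamic range block.
        # Min: minimum value of each pixel's dynamic range block.
        PS, Max, Min = (np.asarray(a, float) for a in (PS, Max, Min))
        numerator = np.sum((PS - Max) * (Min - Max))   # sum by supplementing unit 755
        denominator = np.sum((Min - Max) ** 2)         # sum by supplementing unit 753
        return numerator / denominator                 # mixture ratio alpha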

[1273]Next, the processing for detecting data continuity with the mixture
ratio as data continuity will be described with reference to the
flowchart in FIG. 156.

[1274]In step S731, the horizontal/vertical determining unit 711
initializes the counter U which identifies the pixels of the input image.

[1275]In step S732, the horizontal/vertical determining unit 711 performs
processing for extracting data necessary for subsequent processing. Note
that the processing of step S732 is the same as the processing described
with reference to the flowchart in FIG. 150, so description thereof will
be omitted.

[1277]Now, the processing for supplementing to the normal equation will be
described with reference to the flowchart in FIG. 157.

[1278]In step S751, the MaxMin acquiring unit 752 obtains the maximum
value and minimum value of the pixel values of the pixels contained in
the dynamic range block stored in the data acquiring unit 712, and of
these, outputs the maximum value to the difference computing unit 754.

[1279]In step S752, the MaxMin acquiring unit 752 obtains the dynamic
range from the difference between the maximum value and the minimum
value, and outputs this to the supplementing units 753 and 755.

[1280]In step S753, the supplementing unit 753 squares the dynamic range
(Max-Min) input from the MaxMin acquiring unit 752, and supplements. That
is to say, the supplementing unit 753 generates by supplementing a value
equivalent to the denominator in the above Expression (81).

[1281]In step S754, the difference computing unit 754 obtains the
difference between the maximum value of the dynamic range block input
from the MaxMin acquiring unit 752 and the pixel values of the pixels
currently being processed in the extracted block, and outputs to the
supplementing unit 755.

[1282]In step S755, the supplementing unit 755 multiplies the dynamic
range input from the MaxMin acquiring unit 752 with the difference
between the pixel values of the pixels currently being processed input
from the difference computing unit 754 and the maximum value of the
pixels of the dynamic range block, and supplements. That is to say, the
supplementing unit 755 generates values equivalent to the numerator item
of the above Expression (81).

[1283]As described above, the data supplementing unit 751 performs
computation of the items of the above Expression (81) by supplementing.

[1284]Now, let us return to the description of the flowchart in FIG. 156.

[1285]In step S734, the data supplementing unit 751 determines whether or
not supplementing has ended for all pixels of the extracted block, and in
the event that determination is made that supplementing has not ended for
all pixels of the extracted block, for example, the processing returns to
step S732, and the subsequent processing is repeated. That is to say, the
processing of steps S732 through S734 is repeated until determination is
made that supplementing has ended for all pixels of the extracted block.

[1286]In step S734, in the event that determination is made that
supplementing has ended for all pixels of the extracted block, in step
S735 the supplementing units 753 and 755 output the supplementing results
stored therein to the mixture ratio derivation unit 761.

[1287]In step S736, the mixture ratio calculating unit 762 of the mixture
ratio derivation unit 761 statistically computes, by the least-square
method, and outputs, the mixture ratio of the pixel of interest which is
the data continuity information, by solving the normal equation shown in
Expression (81), based on the sum of squares of the dynamic range, and
the sum of multiplying the difference between the pixel values of the
pixels of the extracted block and the maximum value of the dynamic range
block by the dynamic range, input from the supplementing units 753 and
755 of
the data supplementing unit 751.

[1288]In step S737, the data acquiring unit 712 determines whether or not
processing has been performed for all pixels in the input image, and in
the event that determination is made that, for example, processing has
not been performed for all pixels in the input image, i.e., in the event
that determination is made that the mixture ratio has not been output for
all pixels of the input image, in step S738 the counter U is incremented
by 1, and the processing returns to step S732.

[1289]That is to say, the processing of steps S732 through S738 is
repeated until pixels to be processed within the input image are changed
and the mixture ratio is computed for all pixels of the input image.
Change of pixel by the counter U may be according to raster scan or the
like for example, or may be sequential change according to other rules.

[1290]In the event that determination is made in step S737 that processing
has been performed for all pixels of the input image, in step S739 the
data acquiring unit 712 determines whether or not there is a next input
image, and in the event that determination is made that there is a next
input image, the processing returns to step S731, and the subsequent
processing is repeated.

[1291]In the event that determination is made in step S739 that there is
no next input image, the processing ends.

[1292]Due to the above processing, the mixture ratio of the pixels is
detected as continuity information, and output.

[1293]FIG. 158B illustrates the change in the mixture ratio on
predetermined spatial directions X (=561, 562, 563) with regard to the
fine line image within the white lines in the image shown in FIG. 158A,
according to the above technique, for example. As shown in FIG. 158B, the
change in the mixture ratio in the spatial direction Y which is
continuous in the horizontal direction is such that, respectively, in the
case of the spatial direction X=563, the mixture ratio starts rising at
around the spatial direction Y=660, peaks at around Y=685, and drops to
Y=710. Also, in the case of the spatial direction X=562, the mixture
ratio starts rising at around the spatial direction Y=680, peaks at
around Y=705, and drops to Y=735. Further, in the case of the spatial
direction X=561, the mixture ratio starts rising at around the spatial
direction Y=705, peaks at around Y=725, and drops to Y=755.

[1294]Thus, as shown in FIG. 158B, the change of each of the mixture
ratios in the continuous spatial directions X is the same change as the
change in pixel values changing according to the mixture ratio (the
change in pixel values shown in FIG. 133B), and is cyclically continuous,
so it can be understood that the mixture ratio of pixels near the fine
line is being accurately represented.

[1295]Also, in the same way, FIG. 159B illustrates the change in the
mixture ratio on predetermined spatial directions X (=658, 659, 660) with
regard to the two-valued edge image within the white lines in the image
shown in FIG. 159A. As shown in FIG. 159B, the change in the mixture
ratio in the spatial direction Y which is continuous in the horizontal
direction is such that, respectively, in the case of the spatial
direction X=660, the mixture ratio starts rising at around the spatial
direction Y=750, and peaks at around Y=765. Also, in the case of the
spatial direction X=659, the mixture ratio starts rising at around the
spatial direction Y=760, and peaks at around Y=775. Further, in the case
of the spatial direction X=658, the mixture ratio starts rising at around
the spatial direction Y=770, and peaks at around Y=785.

[1296]Thus, as shown in FIG. 159B, the change of each of the mixture
ratios of the two-valued edge is approximately the same as the change in
pixel values changing according to the mixture ratio (the change in pixel
values shown in FIG. 145B), and is cyclically continuous, so it can be
understood that the mixture ratio of pixels near the two-valued edge is
being accurately represented.

[1297]According to the above, the mixture ratio of each pixel can be
statistically obtained as data continuity information by the least-square
method. Further, the pixel values of each of the pixels can be directly
generated based on this mixture ratio.

[1298]Also, if we say that the change in mixture ratio has continuity, and
further, the change in the mixture ratio is linear, the relationship such
as indicated in the following Expression (82) holds.

α=m×y+n (82)

[1299]Here, m represents the gradient when the mixture ratio α
changes as to the spatial direction Y, and also, n is equivalent to the
intercept when the mixture ratio α changes linearly.

[1300]That is, as shown in FIG. 160, the straight line indicating the
mixture ratio is a straight line indicating the boundary between the
pixel value B equivalent to the background region level and the level L
equivalent to the fine line level, and in this case, the amount of change
of the mixture ratio upon progressing a unit distance with regard to the
spatial direction Y is the gradient m.

[1302]That is, substituting Expression (82) into Expression (77) (taking
the pixel value of the pixel of interest as M) gives
M=(m×y+n)×B+(1-(m×y+n))×L as Expression (83), and further expanding this
Expression (83) yields the following Expression (84).

M-L=(y×B-y×L)×m+(B-L)×n (84)

[1303]In Expression (84), the first term represents the gradient m of the
mixture ratio in the spatial direction, and the second term represents
the intercept n of the mixture ratio. Accordingly, an arrangement may be
made wherein a normal equation is generated using the two-variable
least-square method to obtain m and n in Expression (84) described above.

[1304]However, the gradient m of the mixture ratio α is the
above-described gradient of the fine line or two-valued edge (the
above-described gradient Gf) itself, so an arrangement may be made
wherein the above-described method is used to obtain the gradient Gf
of the fine line or two-valued edge beforehand, following which that
gradient is substituted into Expression (84), thereby making it a
single-variable function with regard to the intercept term, which is then
obtained with the single-variable least-square method in the same way as
the technique described above.
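Under the assumption that the least-square setup mirrors the
single-variable case described above (a sketch, not the original
expression), substituting the known gradient m = Gf into Expression (84)
and minimizing the squared residual over the pixels i of the region gives
the intercept as:

    n = \frac{\sum_i (B_i - L_i)\left[(M_i - L_i) - m\,y_i\,(B_i - L_i)\right]}
             {\sum_i (B_i - L_i)^2}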

[1305]While the above example has been described regarding a data
continuity detecting unit 101 for detecting the angle (gradient) or
mixture ratio of a fine line or two-valued edge in the spatial direction
as data continuity information, an arrangement may be made wherein that
which corresponds to the angle in the spatial direction is obtained by
replacing one of the spatial-direction axes (spatial directions X and Y)
with the time-direction (frame direction) T axis, for example. That is to
say, that which corresponds to the angle obtained by replacing one of the
spatial-direction axes (spatial directions X and Y) with the
time-direction (frame direction) T axis is the movement vector of an
object (the direction of the movement vector).

[1306]More specifically, as shown in FIG. 161A, in the event that an
object is moving upwards in the drawing with regard to the spatial
direction Y over time, the track of movement of the object is manifested
at the portion equivalent to the fine line in the drawing (in comparison
with that in FIG. 131A). Accordingly, the gradient at the fine line in
the time direction T represents the direction of movement of the object
(angle indicating the movement of the object) (is equivalent to the
direction of the movement vector) in FIG. 161A. Accordingly, in the real
world, in a frame of a predetermined point-in-time indicated by the arrow
in FIG. 161A, a pulse-shaped waveform wherein the portion to be the track
of the object is the level of (the color of) the object, and other
portions are the background level, as shown in FIG. 161B, is obtained.

[1307]In this way, in the case of imaging an object with movement with the
sensor 2, as shown in FIG. 162A, the distribution of pixel values of each
of the pixels of the frames from point-in-time T1 through T3 each assumes
a peak-shaped waveform in the spatial direction Y, as shown in FIG. 162B.
This relationship can be thought to be the same as the relationship in
the spatial directions X and Y, described with reference to FIG. 132A and
FIG. 132B. Accordingly, in the event that the object has movement in the
frame direction T, the direction of the movement vector of the object can
be obtained as data continuity information in the same way as with the
information of the gradient of the fine line or the angle (gradient) of
the two-valued edge described above. Note that in FIG. 162B, each grid in
the frame direction T (time direction T) is the shutter time making up
the image of one frame.

[1308]Also, in the same way, in the event that there is movement of an
object in the spatial direction Y for each frame direction T as shown in
FIG. 163A, each pixel value corresponding to the movement of the object
as to the spatial direction Y on a frame corresponding to a predetermined
point-in-time T1 can be obtained as shown in FIG. 163B. At this time, the
pixel value of the pixel enclosed by the black solid lines in FIG. 163B
is a pixel value wherein the background level and the object level are
mixed in the frame direction at a mixture ratio β, corresponding to
the movement of the object, as shown in FIG. 163C, for example.

[1309]This relationship is the same as the relationship described with
reference to FIG. 155A, FIG. 155B, and FIG. 155C.

[1310]Further, as shown in FIG. 164, the mixture ratio β between the
level O of the object and the level B of the background can also be
linearly approximated in the frame direction (time direction). This
relationship is the same relationship as the linear approximation of the
mixture ratio in the spatial direction, described with reference to FIG.
160.

[1311]Accordingly, the mixture ratio β in the time (frame) direction
can be obtained as data continuity information with the same technique as
the case of the mixture ratio α in the spatial direction.

[1312]Also, an arrangement may be made wherein the frame direction, or one
dimension of the spatial direction, is selected, and the data continuity
angle or the movement vector direction is obtained, and in the same way,
the mixture ratios α and β may be selectively obtained.

[1313]According to the above, light signals of the real world are
projected, a region, corresponding to a pixel of interest in the image
data of which a part of the continuity of the real world light signals
has dropped out, is selected, features for detecting the angle as to a
reference axis of the image data continuity corresponding to the lost
real world light signal continuity are detected in the selected region,
the angle is statistically detected based on the detected features, and
light signals are estimated by estimating the lost real world light
signal continuity based on the detected angle of the continuity of the
image data as to the reference axis, so the angle of continuity
(direction of movement vector) or (a time-space) mixture ratio can be
obtained.

[1314]Next, description will be made, with reference to FIG. 165, of a
data continuity information detecting unit 101 which outputs, as data
continuity information, information of regions where processing using
data continuity information should be performed.

[1315]An angle detecting unit 801 detects, of the input image, the
spatial-direction angle of regions having continuity, i.e., of portions
configuring fine lines and two-valued edges having continuity in the
image, and outputs the detected angle to an actual world estimating unit
802. Note that this angle detecting unit 801 is the same as the data
continuity detecting unit 101 in FIG. 3.

[1316]The actual world estimating unit 802 estimates the actual world
based on the angle indicating the direction of data continuity input from
the angle detecting unit 801, and information of the input image. That is
to say, the actual world estimating unit 802 obtains a coefficient of an
approximation function which approximately describes the intensity
distribution of the actual world light signals, from the input angle and
each pixel of the input image, and outputs to an error computing unit 803
the obtained coefficient as estimation results of the actual world. Note
that this actual world estimating unit 802 is the same as the actual
world estimating unit 102 shown in FIG. 3.

[1317]The error computing unit 803 formulates an approximation function
approximately describing the real world light intensity distribution,
based on the coefficients input from the actual world estimating unit
802, and further, integrates the light intensity corresponding to each
pixel position based on this approximation function, thereby generating
the pixel value of each of the pixels from the light intensity
distribution estimated by the approximation function, and outputs the
difference as to the actually-input pixel values to a comparing unit 804
as an error.

[1318]The comparing unit 804 compares, for each pixel, the error input
from the error computing unit 803 with a threshold value set beforehand,
so as to distinguish between processing regions, where pixels exist
regarding which processing using continuity information is to be
performed, and non-processing regions, and outputs region information
distinguishing between the processing regions and the non-processing
regions as continuity information.
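A minimal sketch of this comparison (hypothetical helper in Python; the
threshold is a design parameter set beforehand) is:

    import numpy as np

    def classify_regions(errors, threshold):
        # errors: per-pixel error between the reintegrated pixel value and
        # the input pixel value. True marks a processing region (error at or
        # below the threshold); False marks a non-processing region.
        return np.abs(np.asarray(errors, float)) <= threshold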

[1319]Next, description will be made regarding continuity detection
processing using the data continuity detecting unit 101 in FIG. 165 with
reference to the flowchart in FIG. 166.

[1320]The angle detecting unit 801 acquires an input image in step S801,
and detects an angle indicating the direction of continuity in step S802.
More particularly, the angle detecting unit 801 detects an angle
indicating the direction of continuity of a fine line or a two-valued
edge, with the horizontal direction taken as the reference axis for
example, and outputs this to the actual world estimating unit 802.

[1321]In step S803, the actual world estimating unit 802 obtains the
coefficients of an approximation function f(x) made up of a polynomial,
which approximately describes a function F(x) expressing the real world,
based on angular information input from the angle detecting unit 801 and
input image information, and outputs these to the error calculation unit
803. That is to say, the approximation function f(x) expressing the real
world is represented with a one-dimensional polynomial such as the
following Expression (85).

f(x)=Σwi×x^i (i=0 through n) (85)

[1322]Here, wi is a coefficient of the polynomial, and the actual world
estimating unit 802 obtains these coefficients wi and outputs them to the
error calculation unit 803. Further, the gradient of the direction of
continuity can be obtained based on the angle input from the angle
detecting unit 801 (Gf=tan θ, where Gf is the gradient and θ is the
angle), so the above Expression (85) can be described with a
two-dimensional polynomial such as shown in the following Expression (86)
by substituting a constraint condition of this gradient Gf.

f(x, y)=Σwi×(x+α×y)^i (i=0 through n) (86)

[1323]That is to say, the above Expression (86) describes a
two-dimensional function f(x, y) obtained by expressing the shift that
occurs as the one-dimensional approximation function f(x) described with
Expression (85) moves parallel along the spatial direction Y, using a
shift amount α (=-dy/Gf, where dy is the amount of change in the
spatial direction Y).

[1324]Accordingly, the actual world estimating unit 802 solves each
coefficient wi of the above Expression (86) using an input image and
angular information in the direction of continuity, and outputs the
obtained coefficients wi to the error calculation unit 803.

[1325]Here, description will return to the flowchart in FIG. 166.

[1326]In step S804, the error calculation unit 803 performs reintegration
regarding each pixel based on the coefficients input by the actual world
estimating unit 802. More specifically, the error calculation unit 803
subjects the above Expression (86) to integration regarding each pixel
such as shown in the following Expression (87) based on the coefficients
input from the actual world estimating unit 802.

Ss=∫∫f(x, y)dxdy (x: xm through xm+B, y: ym through ym+A) (87)

[1327]Here, Ss denotes the integrated result in the spatial direction
shown in FIG. 167. Also, the integral range thereof is, as shown in FIG.
167, xm through xm+B for the spatial direction X, and ym
through ym+A for the spatial direction Y. Also, in FIG. 167, let us
say that each grid (square) denotes one pixel, and that the grid interval
is 1 for both the spatial direction X and the spatial direction Y.

[1328]Accordingly, the error calculation unit 803, as shown in FIG. 168,
subjects the curved surface represented by the approximation function
f(x, y) to an integral arithmetic operation for each pixel, such as shown
in the following Expression (88), with an integral range of xm through
xm+1 for the spatial direction X and ym through ym+1 for the
spatial direction Y (A=B=1), and calculates the pixel value PS of
each pixel obtained by spatially integrating the approximation function
expressing the actual world in an approximate manner.

PS=∫∫f(x, y)dxdy (x: xm through xm+1, y: ym through ym+1) (88)

[1329]In other words, according to this processing, the error calculation
unit 803 serves as, so to speak, a kind of pixel value generating unit,
and generates pixel values from the approximation function.
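As a sketch of this pixel value generation (assuming, as in the
reconstruction of Expression (86) above, an approximation function of the
form f(x, y)=Σwi×(x+α×y)^i; the numerical integration below simply stands
in for the analytical reintegration of Expression (88)):

    import numpy as np

    def reintegrated_pixel_value(w, alpha, xm, ym, samples=32):
        # Double integral of f(x, y) = sum_i w[i] * (x + alpha*y)**i over the
        # unit pixel [xm, xm+1] x [ym, ym+1], evaluated by midpoint sampling.
        xs = xm + (np.arange(samples) + 0.5) / samples
        ys = ym + (np.arange(samples) + 0.5) / samples
        X, Y = np.meshgrid(xs, ys)
        f = sum(w[i] * (X + alpha * Y) ** i for i in range(len(w)))
        return float(f.mean())   # mean over a unit area equals the integral

The error of step S805 would then be the difference between this value
and the corresponding input pixel value.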

[1330]In step S805, the error calculation unit 803 calculates the
difference between a pixel value obtained with integration such as shown
in the above Expression (88) and a pixel value of the input image, and
outputs this to the comparison unit 804 as an error. In other words, the
error calculation unit 803 obtains the difference between the pixel value
of a pixel corresponding to the integral range (xm through xm+1
for the spatial direction X, and ym through ym+1 for the
spatial direction Y) shown in the above FIG. 167 and FIG. 168, and a
pixel value obtained with the integrated result in a range corresponding
to the pixel as an error, and outputs this to the comparison unit 804.

[1331]In step S806, the comparison unit 804 determines whether
or not the absolute value of the error between the pixel value obtained
with integration input from the error calculation unit 803 and the pixel
value of the input image is a predetermined threshold value or less.

[1332]In step S806, in the event that determination is made that the error
is the predetermined threshold value or less, since the pixel value
obtained with integration is a value close to the pixel value of the
pixel of the input image, the comparison unit 804 regards the
approximation function set for calculating the pixel value of the pixel
as a function that sufficiently approximates the light intensity
distribution of the light signal in the real world, and, in step S807,
recognizes the region of the pixel now being processed as a processing
region where processing using the approximation function based on
continuity information is performed. In further detail, the comparison
unit 804 stores the pixel now being processed in unshown memory as a
pixel in the processing regions for subsequent processing.

[1333]On the other hand, in the event that determination is made in step
S806 that the error is not the threshold value or less, since the pixel
value obtained with integration is a value far from the actual pixel
value, the comparison unit 804 regards the approximation function set for
calculating the pixel value of the pixel as a function that
insufficiently approximates the light intensity distribution of the light
signal in the real world, and, in step S808, recognizes the region of the
pixel now being processed as a non-processing region where processing
using the approximation function based on continuity information is not
performed at a subsequent stage. In further detail, the comparison unit
804 stores the region of the pixel now being processed in unshown memory
as part of the non-processing regions for subsequent processing.

[1334]In step S809, the comparison unit 804 determines whether or not the
processing has been performed as to all of the pixels, and in the event
that determination is made that the processing has not been performed as
to all of the pixels, the processing returns to step S802, and the
subsequent processing is repeatedly performed. In other words, the
processing in steps S802 through S809 is repeatedly performed until the
determination processing, wherein a pixel value obtained with integration
is compared with an input pixel value and determination is made regarding
whether or not the pixel belongs to a processing region, is completed
regarding all of the pixels.

[1335]In step S809, in the event that determination is made that the
determination processing, wherein a pixel value obtained with
reintegration is compared with an input pixel value and determination is
made regarding whether or not the pixel belongs to a processing region,
has been completed regarding all of the pixels, the comparison unit 804,
in step S810, outputs, as continuity information, region information
regarding the input image stored in the unshown memory, wherein a
processing region where processing based on the continuity information in
the spatial direction is performed in subsequent processing, and a
non-processing region where such processing is not performed, are
identified.

[1336]According to the above processing, the reliability with which the
approximation function f(x), calculated based on the continuity
information, expresses the actual pixel values is evaluated for each
region (for each pixel), based on the error between the pixel value
obtained by integrating the approximation function over the region
corresponding to each pixel and the pixel value in the actual input
image. Accordingly, only a region having a small error, i.e., a region
made up of pixels whose pixel values obtained with integration based on
the approximation function are reliable, is regarded as a processing
region, and the regions other than this are regarded as non-processing
regions. Consequently, only the reliable regions are subjected to the
processing based on the continuity information in the spatial direction,
so only the necessary processing is performed, whereby the processing
speed can be improved, and moreover the processing is applied to the
reliable regions alone, thereby preventing deterioration of image quality
due to this processing.
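This region classification amounts to a simple thresholding of the
per-pixel error. The following Python sketch is a minimal illustration of
that rule; the function name, the array-based interface, and the dummy
data are assumptions made only for this illustration and are not part of
the disclosed configuration.

import numpy as np

def classify_regions(input_image, reintegrated_image, threshold):
    """Mark each pixel as a processing region (True) or a non-processing
    region (False) by comparing the pixel value reproduced by integrating
    the approximation function with the actual input pixel value."""
    error = np.abs(reintegrated_image.astype(float) - input_image.astype(float))
    # A small error means the approximation function reproduces the pixel
    # well, so continuity-based processing is applied there later.
    return error <= threshold

# Example with dummy data: a 4x4 input image and a noisy "reintegrated" copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (4, 4))
reintegrated = img + rng.normal(0.0, 5.0, (4, 4))
print(classify_regions(img, reintegrated, threshold=10.0))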

[1337]Next, description will be made regarding other embodiments regarding
the data continuity information detecting unit 101 which outputs region
information where a pixel to be processed using data continuity
information exists, as data continuity information with reference to FIG.
169.

[1338]A movement detecting unit 821 detects, from the input images, a
region having continuity, i.e., movement having continuity in the frame
direction of an image (the direction of a movement vector: Vf), and
outputs the detected movement to the actual world estimating unit 822.
Note that this movement detecting unit 821 is the same as the data
continuity detecting unit 101 in FIG. 3.

[1339]The actual world estimating unit 822 estimates the actual world
based on the movement of the data continuity input from the movement
detecting unit 821, and the input image information. More specifically,
the actual world estimating unit 822 obtains coefficients of the
approximation function approximately describing the intensity allocation
of a light signal in the actual world in the frame direction (time
direction) based on the movement input and each pixel of the input image,
and outputs the obtained coefficients to the error calculation unit 823
as an estimated result in the actual world. Note that this actual world
estimating unit 822 is the same as the actual world estimating unit 102
in FIG. 3.

[1340]The error calculation unit 823 constructs an approximation function
indicating the intensity allocation of light in the real world in the
frame direction, which is approximately described based on the
coefficients input from the actual world estimating unit 822, integrates
the intensity of light corresponding to each pixel position for each
frame from this approximation function, generates the pixel value of each
pixel from the intensity allocation of light estimated by the
approximation function, and outputs the difference from the pixel value
actually input to the comparison unit 824 as an error.

[1341]The comparison unit 824 identifies a processing region where a pixel
to be subjected to processing using the continuity information exists,
and a non-processing region by comparing the error input from the error
calculation unit 823 regarding each pixel with a predetermined threshold
value set beforehand, and outputs region information wherein a processing
region where processing is performed using this continuity information
and a non-processing region are identified, as continuity information.

[1342]Next, description will be made regarding continuity detection
processing using the data continuity detecting unit 101 in FIG. 169 with
reference to the flowchart in FIG. 170.

[1343]The movement detecting unit 821 acquires an input image in step
S821, and detects movement indicating continuity in step S822. In further
detail, the movement detecting unit 821 detects, for example, movement of
a substance moving within the input image (the direction of a movement
vector: Vf), and outputs this to the actual world estimating unit 822.

[1344]In step S823, the actual world estimating unit 822 obtains
coefficients of a function f(t) made up of a polynomial, which
approximately describes a function F(t) in the frame direction, which
expresses the real world, based on the movement information input from
the movement detecting unit 821 and the information of the input image,
and outputs this to the error calculation unit 823. That is to say, the
function f(t) approximating the real world is represented as a
one-dimensional polynomial such as the following Expression (89).

f(t) = w0 + w1·t + w2·t^2 + . . . + wn·t^n (89)

[1345]Here, the wi are the coefficients of the polynomial, and the actual
world estimating unit 822 obtains these coefficients wi and outputs them
to the error calculation unit 823. Further, the movement serving as
continuity can be obtained from the movement input from the movement
detecting unit 821 (Vf = tanθv, where Vf is the gradient of the movement
vector in the frame direction and θv is the angle of the movement vector
in the frame direction), so the above Expression (89) can be rewritten as
a two-variable polynomial such as shown in the following Expression (90)
by substituting a constraint condition based on this gradient.

f(t, y) = w0 + w1·(t + αt) + w2·(t + αt)^2 + . . . + wn·(t + αt)^n (90)

[1346]That is to say, the above Expression (90) describes a function
f(t, y) obtained by shifting the one-dimensional approximation function
f(t), described with Expression (89), in parallel along the spatial
direction Y, with the width of the shift expressed as a shift amount
αt (= -dy/Vf, where dy is the amount of change in the spatial
direction Y).

[1347]Accordingly, the actual world estimating unit 822 solves each
coefficient wi of the above Expression (90) using the input image and
continuity movement information, and outputs the obtained coefficients wi
to the error calculation unit 823.

[1348]Now, description will return to the flowchart in FIG. 170.

[1349]In step S824, the error calculation unit 823 performs integration
regarding each pixel in the frame direction from the coefficients input
by the actual world estimating unit 822. That is to say, the error
calculation unit 823 integrates the above Expression (90) regarding each
pixel from coefficients input by the actual world estimating unit 822
such as shown in the following Expression (91).

St = ∫[ym, ym+A] ∫[Tm, Tm+B] f(t, y) dt dy (91)

[1350]Here, St represents the integrated result in the frame
direction shown in FIG. 171. The integral range thereof is, as shown in
FIG. 171, Tm through Tm+B for the frame direction T, and
ym through ym+A for the spatial direction Y. Also, in FIG. 171,
let us say that each grid (square) denotes one pixel, and that its size
is 1 both in the frame direction T and in the spatial direction Y. Here,
a size of 1 in the frame direction T means that the shutter time for one
frame is 1.

[1351]Accordingly, the error calculation unit 823 performs, as shown in
FIG. 172, an integral arithmetic operation such as shown in the following
Expression (92) regarding each pixel, with an integral range of Tm
through Tm+1 for the frame direction T and ym through ym+1 for the
spatial direction Y (A=B=1) over the curved surface represented by the
approximation function f(t, y), and calculates the pixel value Pt
of each pixel obtained from the function approximately expressing the
actual world.

Pt = ∫[ym, ym+1] ∫[Tm, Tm+1] f(t, y) dt dy (92)

[1352]That is to say, according to this processing, the error calculation
unit 823 serves as, so to speak, a kind of pixel value generating unit,
and generates pixel values from the approximation function.
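As a rough numerical illustration of the integration in Expression (92),
the sketch below integrates a polynomial approximation function, shifted
along the frame direction according to the gradient Vf, over one unit
pixel. The polynomial form, the sign of the shift, and the helper names
are assumptions made only for this illustration.

import numpy as np

def approx_f(t, y, coeffs, v_f):
    """Polynomial approximation function in the frame direction t, shifted
    along t by -y / v_f so that it follows the continuity (movement)."""
    shift = -y / v_f
    return sum(w * (t + shift) ** i for i, w in enumerate(coeffs))

def pixel_value(coeffs, v_f, t_m, y_m, n=200):
    """Approximate the double integral of f(t, y) over one pixel
    (t_m..t_m+1, y_m..y_m+1), as in Expression (92) with A = B = 1,
    using a midpoint Riemann sum on an n-by-n grid."""
    ts = t_m + (np.arange(n) + 0.5) / n
    ys = y_m + (np.arange(n) + 0.5) / n
    tt, yy = np.meshgrid(ts, ys)
    return approx_f(tt, yy, coeffs, v_f).mean()  # the cell areas sum to 1

print(pixel_value(coeffs=[10.0, 2.0, -0.5], v_f=2.0, t_m=0.0, y_m=0.0))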

[1353]In step S825, the error calculation unit 823 calculates the
difference between a pixel value obtained with integration such as shown
in the above Expression (92) and a pixel value of the input image, and
outputs this to the comparison unit 824 as an error. That is to say, the
error calculation unit 823 obtains, as an error, the difference between
the pixel value of a pixel corresponding to the integral range shown in
the above FIG. 171 and FIG. 172 (Tm through Tm+1 for the frame direction
T, and ym through ym+1 for the spatial direction Y) and the pixel value
obtained by the integrated result over the range corresponding to the
pixel, and outputs this to the comparison unit 824.

[1354]In step S826, the comparison unit 824 determines regarding whether
or not the absolute value of the error between the pixel value obtained
with integration and the pixel value of the input image, which are input
from the error calculation unit 823, is a predetermined threshold value
or less.

[1355]In step S826, in the event that determination is made that the error
is the predetermined threshold value or less, since the pixel value
obtained with integration is a value close to the pixel value of the
input image, the comparison unit 824 regards the approximation function
set for calculating the pixel value of the pixel as a function
sufficiently approximated with the light intensity allocation of a light
signal in the real world, and recognizes the region of the pixel now
processed as a processing region in step S827. In further detail, the
comparison unit 824 stores the pixel now processed in unshown memory as
the pixel in the subsequent processing regions.

[1356]On the other hand, in the event that determination is made that the
error is not the threshold value or less in step S826, since the pixel
value obtained with integration is a value far from the actual pixel
value, the comparison unit 824 regards the approximation function set for
calculating the pixel value of the pixel as a function insufficiently
approximated with the light intensity allocation in the real world, and
recognizes the region of the pixel now processed as a non-processing
region where processing using the approximation function based on
continuity information is not performed at a subsequent stage in step
S828. In further detail, the comparison unit 824 stores the region of the
pixel now processed in unshown memory as the subsequent non-processing
regions.

[1357]In step S829, the comparison unit 824 determines regarding whether
or not the processing has been performed as to all of the pixels, and in
the event that determination is made that the processing has not been
performed as to all of the pixels, the processing returns to step S822,
wherein the subsequent processing is repeatedly performed. In other
words, the processing in steps S822 through S829 is repeatedly performed
until determination processing wherein comparison between a pixel value
obtained with integration and a pixel value input is performed, and
determination is made regarding whether or not the pixel is a processing
region, is completed regarding all of the pixels.

[1358]In step S829, in the event that determination is made that
determination processing wherein comparison between a pixel value
obtained by reintegration and a pixel value input is performed, and
determination is made regarding whether or not the pixel is a processing
region, has been completed regarding all of the pixels, the comparison
unit 824, in step S830, outputs region information wherein a processing
region where processing based on the continuity information in the frame
direction is performed at subsequent processing, and a non-processing
region where processing based on the continuity information in the frame
direction is not performed are identified regarding the input image
stored in the unshown memory, as continuity information.

[1359]According to the above processing, the reliability with which the
approximation function f(t), calculated based on the continuity
information, expresses the actual pixel values is evaluated for each
region (for each pixel), based on the error between the pixel value
obtained by integrating the approximation function over the region
corresponding to each pixel and the pixel value within the actual input
image. Accordingly, only a region having a small error, i.e., a region
made up of pixels whose pixel values obtained with integration based on
the approximation function are reliable, is regarded as a processing
region, and the regions other than this are regarded as non-processing
regions. Consequently, only the reliable regions are subjected to the
processing based on the continuity information in the frame direction, so
only the necessary processing is performed, whereby the processing speed
can be improved, and moreover the processing is applied to the reliable
regions alone, thereby preventing deterioration of image quality due to
this processing.

[1360]An arrangement may be made wherein the configurations of the data
continuity information detecting unit 101 in FIG. 165 and FIG. 169 are
combined, any one-dimensional direction of the spatial and temporal
directions is selected, and the region information is selectively output.

[1361]According to the above configuration, light signals in the real
world are projected by the multiple detecting elements of the sensor,
each having spatio-temporal integration effects; continuity of data is
detected in image data made up of multiple pixels having pixel values
projected by the detecting elements, in which a part of the continuity of
the light signals in the real world drops; a function corresponding to
the light signals in the real world is approximated on condition that the
pixel value of each pixel corresponding to the detected continuity, and
corresponding to at least a position in a one-dimensional direction of
the spatial and temporal directions of the image data, is the pixel value
acquired with at least integration effects in that one-dimensional
direction; a difference value is detected between the pixel value
acquired by estimating the function corresponding to the light signals in
the real world and integrating the estimated function at least in
increments corresponding to each pixel in the one-dimensional direction,
and the pixel value of each pixel; and the function is selectively output
according to the difference value. Accordingly, only a region containing
pixels whose pixel values obtained with integration based on the
approximation function are reliable can be regarded as a processing
region, and the other regions can be regarded as non-processing regions,
so that the reliable region alone is subjected to processing based on the
continuity information in the frame direction. Thus, only the necessary
processing is performed, whereby the processing speed can be improved,
and the processing is applied to the reliable region alone, thereby
preventing deterioration of image quality due to this processing.

[1362]Next, description will be made regarding estimation of signals in
the actual world 1.

[1363]FIG. 173 is a block diagram illustrating the configuration of the
actual world estimating unit 102.

[1364]With the actual world estimating unit 102 of which the configuration
is shown in FIG. 173, based on the input image and the data continuity
information supplied from the continuity detecting unit 101, the width of
a fine line in the image, which is a signal in the actual world 1, is
detected, and the level of the fine line (light intensity of the signal
in the actual world 1) is estimated.

[1365]A line-width detecting unit 2101 detects the width of a fine line
based on the data continuity information indicating a continuity region
serving as a fine-line region made up of pixels, on which the fine-line
image is projected, supplied from the continuity detecting unit 101. The
line-width detecting unit 2101 supplies fine-line width information
indicating the width of a fine line detected to a signal-level estimating
unit 2102 along with the data continuity information.

[1366]The signal-level estimating unit 2102 estimates, based on the input
image, the fine-line width information indicating the width of a fine
line, which is supplied from the line-width detecting unit 2101, and the
data continuity information, the level of the fine-line image serving as
the signals in the actual world 1, i.e., the level of light intensity,
and outputs actual world estimating information indicating the width of a
fine line and the level of the fine-line image.

[1367]FIG. 174 and FIG. 175 are diagrams for describing processing for
detecting the width of a fine line in signals in the actual world 1.

[1368]In FIG. 174 and FIG. 175, a region surrounded with a thick line
(region made up of four squares) denotes one pixel, a region surrounded
with a dashed line denotes a fine-line region made up of pixels on which
a fine-line image is projected, and a circle denotes the gravity of a
fine-line region. In FIG. 174 and FIG. 175, a hatched line denotes a
fine-line image cast in the sensor 2. In other words, it can be said that
this hatched line denotes a region where a fine-line image in the actual
world 1 is projected on the sensor 2.

[1369]In FIG. 174 and FIG. 175, S denotes a gradient to be calculated from
the gravity position of a fine-line region, and D is the duplication of
fine-line regions. Here, fine-line regions are adjacent to each other, so
the gradient S is a distance between the gravities thereof in increments
of pixel. Also, the duplication D of fine-line regions denotes the number
of pixels adjacent to each other in two fine-line regions.

[1370]In FIG. 174 and FIG. 175, W denotes the width of a fine line.

[1371]In FIG. 174, the gradient S is 2, and the duplication D is 2.

[1372]In FIG. 175, the gradient S is 3, and the duplication D is 1.

[1373]The fine-line regions are adjacent to each other, and the distance
between the gravity positions thereof in the direction in which the
fine-line regions are adjacent is one pixel, so W:D=1:S holds, and the
fine-line width W can be obtained as the duplication D divided by the
gradient S.

[1374]For example, as shown in FIG. 174, when the gradient S is 2, and the
duplication D is 2, 2/2 is 1, so the fine-line width W is 1. Also, for
example, as shown in FIG. 175, when the gradient S is 3, and the
duplication D is 1, the fine-line width W is 1/3.

[1375]The line-width detecting unit 2101 thus detects the width of a
fine-line based on the gradient calculated from the gravity positions of
fine-line regions, and duplication of fine-line regions.
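The width calculation W = D / S can be written as a one-line helper. The
sketch below is a minimal illustration assuming the gradient S and the
duplication D have already been measured from the gravity positions of
the fine-line regions; the function name is hypothetical.

def fine_line_width(duplication_d, gradient_s):
    """Width of the fine line obtained from the duplication D of adjacent
    fine-line regions and the gradient S calculated from their gravity
    positions: W = D / S."""
    return duplication_d / gradient_s

print(fine_line_width(2, 2))  # the case of FIG. 174: S = 2, D = 2 -> W = 1
print(fine_line_width(1, 3))  # the case of FIG. 175: S = 3, D = 1 -> W = 1/3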

[1376]FIG. 176 is a diagram for describing the processing for estimating
the level of a fine-line signal in signals in the actual world 1.

[1377]In FIG. 176, a region surrounded with a thick line (region made up
of four squares) denotes one pixel, a region surrounded with a dashed
line denotes a fine-line region made up of pixels on which a fine-line
image is projected. In FIG. 176, E denotes the length of the fine-line
region in increments of pixels, and D is the duplication of fine-line
regions (the number of pixels adjacent to another fine-line region).

[1378]The level of the fine-line signal is approximated as being constant
within the processing increment (fine-line region), and the level of the
image other than the fine line, which is projected onto the pixel values
of the pixels along with the fine line, is approximated as being equal to
the level corresponding to the pixel value of the adjacent pixel.

[1379]With the level of a fine-line signal as C, let us say that with a
signal (image) projected on the fine-line region, the level of the left
side portion of a portion where the fine-line signal is projected is A in
the drawing, and the level of the right side portion of the portion where
the fine-line signal is projected is B in the drawing.

[1380]At this time, Expression (93) holds.

Sum of pixel values of a fine-line
region=(E-D)/2×A+(E-D)/2×B+D×C (93)

[1381]The width of a fine line is constant, and the width of a fine-line
region is one pixel, so the area of (the portion where the signal is
projected of) the fine line in a fine-line region is equal to the
duplication D of fine-line regions. Also, since the width of a fine-line
region is one pixel, the area of the fine-line region in increments of
pixels is equal to the length E of the fine-line region.

[1382]Of a fine-line region, the area on the left side of a fine line is
(E-D)/2. Of a fine-line region, the area on the right side of a fine line
is (E-D)/2.

[1383]The first term of the right side of Expression (93) is the portion
of the pixel value where the signal having the same level as that in the
signal projected on a pixel adjacent to the left side is projected, and
can be represented with Expression (94).

A = Σ αi×Ai = Σ {1/(E-D)}×(i+0.5)×Ai (94)

[1384]In Expression (94), Ai denotes the pixel value of a pixel
adjacent to the left side.

[1385]In Expression (94), αi denotes the proportion of the area
where the signal having the same level as that in the signal projected on
a pixel adjacent to the left side is projected on the pixel of the
fine-line region. In other words, αi denotes the proportion of
the same pixel value as that of a pixel adjacent to the left side, which
is included in the pixel value of the pixel in the fine-line region.

[1386]i represents the position of a pixel adjacent to the left side of
the fine-line region.

[1387]For example, in FIG. 176, the proportion of the same pixel value as
the pixel value A0 of a pixel adjacent to the left side of the
fine-line region, which is included in the pixel value of the pixel in
the fine-line region, is α0. In FIG. 176, the proportion of
the same pixel value as the pixel value A1 of a pixel adjacent to
the left side of the fine-line region, which is included in the pixel
value of the pixel in the fine-line region, is α1. In FIG.
176, the proportion of the same pixel value as the pixel value A2 of
a pixel adjacent to the left side of the fine-line region, which is
included in the pixel value of the pixel in the fine-line region, is
α2.

[1388]The second term of the right side of Expression (93) is the portion
of the pixel value where the signal having the same level as that in the
signal projected on a pixel adjacent to the right side is projected, and
can be represented with Expression (95).

B = Σ βj×Bj = Σ {1/(E-D)}×(j+0.5)×Bj (95)

[1389]In Expression (95), Bj denotes the pixel value of a pixel
adjacent to the right side.

[1390]In Expression (95), βj denotes the proportion of the area
where the signal having the same level as that in the signal projected on
a pixel adjacent to the right side is projected on the pixel of the
fine-line region. In other words, βj denotes the proportion of
the same pixel value as that of a pixel adjacent to the right side, which
is included in the pixel value of the pixel in the fine-line region.

[1391]j denotes the position of a pixel adjacent to the right side of the
fine-line region.

[1392]For example, in FIG. 176, the proportion of the same pixel value as
the pixel value B0 of a pixel adjacent to the right side of the
fine-line region, which is included in the pixel value of the pixel in
the fine-line region, is β0. In FIG. 176, the proportion of the
same pixel value as the pixel value B1 of a pixel adjacent to the
right side of the fine-line region, which is included in the pixel value
of the pixel in the fine-line region, is β1. In FIG. 176, the
proportion of the same pixel value as the pixel value B2 of a pixel
adjacent to the right side of the fine-line region, which is included in
the pixel value of the pixel in the fine-line region, is β2.

[1393]Thus, the signal level estimating unit 2102 obtains the pixel values
of the image including a fine line alone, of the pixel values included in
a fine-line region, by calculating the pixel values of the image other
than a fine line, of the pixel values included in the fine-line region,
based on Expression (94) and Expression (95), and removing the pixel
values of the image other than the fine line from the pixel values in the
fine-line region based on Expression (93). Subsequently, the signal level
estimating unit 2102 obtains the level of the fine-line signal based on
the pixel values of the image including the fine line alone and the area
of the fine line. More specifically, the signal level estimating unit
2102 calculates the level of the fine line signal by dividing the pixel
values of the image including the fine line alone, of the pixel values
included in the fine-line region, by the area of the fine line in the
fine-line region, i.e., the duplication D of the fine-line regions.
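Rearranging Expression (93) gives the fine-line level as
C = (sum of the pixel values of the fine-line region - (E-D)/2×A - (E-D)/2×B) / D.
The following sketch is a minimal illustration of that step, assuming the
neighbouring levels A and B have already been estimated as in Expressions
(94) and (95); the function name and the example numbers are made up.

def fine_line_level(region_pixel_values, length_e, duplication_d,
                    level_a, level_b):
    """Estimate the fine-line signal level C by removing the contributions
    of the left (A) and right (B) neighbouring levels from the sum of the
    pixel values of the fine-line region, then dividing by the area of the
    fine line (the duplication D), following Expression (93)."""
    total = sum(region_pixel_values)
    side_area = (length_e - duplication_d) / 2.0
    return (total - side_area * level_a - side_area * level_b) / duplication_d

# Example: E = 4, D = 2, A = 10, B = 20; the region's pixel values sum to
# 230, so Expression (93) gives C = (230 - 10 - 20) / 2 = 100.
print(fine_line_level([60, 110, 50, 10], length_e=4, duplication_d=2,
                      level_a=10, level_b=20))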

[1394]The signal level estimating unit 2102 outputs actual world
estimating information indicating the width of a fine line, and the
signal level of a fine line, in a signal in the actual world 1.

[1395]With the technique of the present invention, the waveform of a fine
line is geometrically described instead of pixels, so any resolution can
be employed.

[1396]Next, description will be made regarding actual world estimating
processing corresponding to the processing in step S102 with reference to
the flowchart in FIG. 177.

[1397]In step S2101, the line-width detecting unit 2101 detects the width
of a fine line based on the data continuity information. For example, the
line-width detecting unit 2101 estimates the width of a fine line in a
signal in the actual world 1 by dividing duplication of fine-line regions
by a gradient calculated from the gravity positions in fine-line regions.

[1398]In step S2102, the signal level estimating unit 2102 estimates the
signal level of a fine line based on the width of a fine line, and the
pixel value of a pixel adjacent to a fine-line region, outputs actual
world estimating information indicating the width of the fine line and
the signal level of the fine line, which are estimated, and the
processing ends. For example, the signal level estimating unit 2102
obtains the pixel values on which the image including the fine line alone
is projected, by calculating the pixel values on which the image other
than the fine line is projected within the fine-line region and removing
those pixel values from the pixel values of the fine-line region, and
then estimates the level of the fine line in a signal in the actual world
1 by calculating the signal level of the fine line based on the obtained
pixel values on which the image including the fine line alone is
projected, and the area of the fine line.

[1399]Thus, the actual world estimating unit 102 can estimate the width
and level of a fine line of a signal in the actual world 1.

[1400]As described above, a light signal in the real world is projected,
continuity of data regarding first image data wherein part of continuity
of a light signal in the real world drops, is detected, the waveform of
the light signal in the real world is estimated from the continuity of
the first image data based on a model representing the waveform of the
light signal in the real world corresponding to the continuity of data,
and in the event that the estimated light signal is converted into second
image data, a more accurate higher-precision processing result can be
obtained as to the light signal in the real world.

[1401]FIG. 178 is a block diagram illustrating another configuration of
the actual world estimating unit 102.

[1402]With the actual world estimating unit 102 of which the configuration
is illustrated in FIG. 178, a region is detected again based on an input
image and the data continuity information supplied from the data
continuity detecting unit 101, the width of a fine line in the image
serving as a signal in the actual world 1 is detected based on the region
detected again, and the light intensity (level) of the signal in the
actual world 1 is estimated. For example, with the actual world
estimating unit 102 of which the configuration is illustrated in FIG.
178, a continuity region made up of pixels on which a fine-line image is
projected is detected again, the width of a fine line in an image serving
as a signal in the actual world 1 is detected based on the region
detected again, and the light intensity of the signal in the actual world
1 is estimated.

[1403]The data continuity information, which is supplied from the data
continuity detecting unit 101 and input to the actual world estimating
unit 102 of which the configuration is shown in FIG. 178, includes
non-continuity component information indicating components other than the
continuity components on which the fine-line image is projected, of the
input images serving as the data 3, monotonous increase/decrease region
information indicating a monotonous increase/decrease region of
continuity regions, information indicating a continuity region, and the
like. For example, the non-continuity component information included in
the data continuity information is made up of the gradient and intercept
of a plane which approximates the non-continuity components such as the
background in an input image.

[1404]The data continuity information input to the actual world estimating
unit 102 is supplied to a boundary detecting unit 2121. The input image
input to the actual world estimating unit 102 is supplied to the boundary
detecting unit 2121 and signal level estimating unit 2102.

[1405]The boundary detecting unit 2121 generates an image made up of
continuity components alone on which a fine-line image is projected from
the non-continuity component information included in the data continuity
information, and the input image, calculates an allocation ratio
indicating a proportion wherein a fine-line image serving as a signal in
the actual world 1 is projected, and detects a fine-line region serving
as a continuity region again by calculating a regression line indicating
the boundary of the fine-line region from the calculated allocation
ratio.

[1406]FIG. 179 is a block diagram illustrating the configuration of the
boundary detecting unit 2121.

[1407]An allocation-ratio calculation unit 2131 generates an image made up
of continuity components alone, on which the fine-line image is
projected, from the data continuity information, the non-continuity
component information included in the data continuity information, and
the input image. More specifically, the allocation-ratio calculation unit
2131 detects adjacent monotonous increase/decrease regions of the
continuity region from the input image based on the monotonous
increase/decrease region information included in the data continuity
information, and generates the image made up of continuity components
alone, on which the fine-line image is projected, by subtracting the
approximate value given by the plane indicated with the gradient and
intercept included in the non-continuity component information from the
pixel value of each pixel belonging to the detected monotonous
increase/decrease regions.

[1408]Note that the allocation-ratio calculation unit 2131 may generate
the image made up of continuity components alone, on which the fine-line
image is projected, by subtracting the approximate value given by the
plane indicated with the gradient and intercept included in the
non-continuity component information from the pixel value of each pixel
in the input image.
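The continuity-component-only image can be pictured as the input image
minus the plane approximating the background. The sketch below is a
minimal illustration, assuming the plane is parameterized as
grad_x·x + grad_y·y + intercept (the exact parameterization of the
non-continuity components is not restated here, so this form, the
function name, and the example values are assumptions).

import numpy as np

def continuity_components(image, grad_x, grad_y, intercept):
    """Subtract the plane approximating the non-continuity components
    (e.g. the background) so that only the continuity components, on which
    the fine-line image is projected, remain."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    plane = grad_x * xs + grad_y * ys + intercept
    return image.astype(float) - plane

img = np.full((3, 3), 50.0)
img[1, 1] = 120.0  # a fine-line component on top of a flat background
print(continuity_components(img, grad_x=0.0, grad_y=0.0, intercept=50.0))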

[1409]The allocation-ratio calculation unit 2131 calculates an allocation
ratio indicating the proportion in which a fine-line image serving as a
signal in the actual world 1 is allocated to two pixels belonging to
adjacent monotonous increase/decrease regions within a continuity region,
based on the generated image made up of the continuity components alone.
The allocation-ratio calculation unit 2131 supplies the calculated
allocation ratio to a regression-line calculation unit 2132.

[1410]Description will be made regarding allocation-ratio calculation
processing in the allocation-ratio calculation unit 2131 with reference
to FIG. 180 through FIG. 182.

[1411]The numeric values in the two columns on the left side in FIG. 180
denote the pixel values of pixels vertically arrayed in two columns of an
image calculated by subtracting the approximate values given by the plane
indicated with the gradient and intercept included in the non-continuity
component information from the pixel values of the input image. The two
regions surrounded with a square on the left side in FIG. 180 denote a
monotonous increase/decrease region 2141-1 and a monotonous
increase/decrease region 2141-2, which are two adjacent monotonous
increase/decrease regions. In other words, the numeric values shown in
the monotonous increase/decrease region 2141-1 and the monotonous
increase/decrease region 2141-2 denote the pixel values of pixels
belonging to a monotonous increase/decrease region serving as a
continuity region, which is detected by the data continuity detecting
unit 101.

[1412]The numeric values in one column on the right side in FIG. 180
denote values obtained by adding the pixel values of the pixels
horizontally arrayed, of the pixel values of the pixels in two columns on
the left side in FIG. 180. In other words, the numeric values in one
column on the right side in FIG. 180 denote values obtained by adding the
pixel values on which a fine-line image is projected for each pixel
horizontally adjacent regarding the two monotonous increase/decrease
regions made up of pixels in one column vertically arrayed.

[1413]For example, when the pixel values of two horizontally adjacent
pixels, each belonging to one of the monotonous increase/decrease region
2141-1 and the monotonous increase/decrease region 2141-2 (each made up
of pixels arrayed vertically in one column), are 2 and 58, the value
added is 60. When the pixel values of the horizontally adjacent pixels
are 1 and 65, the value added is 66.

[1414]It can be understood that the numeric values in one column on the
right side in FIG. 180, i.e., the values obtained by adding the pixel
values on which a fine-line image is projected regarding the pixels
adjacent in the horizontal direction of the two adjacent monotonous
increase/decrease regions made up of the pixels in one column vertically
arrayed, are generally constant.

[1415]Similarly, the values obtained by adding the pixel values on which a
fine-line image is projected regarding the pixels adjacent in the
vertical direction of the two adjacent monotonous increase/decrease
regions made up of the pixels in one column horizontally arrayed, are
generally constant.

[1416]The allocation-ratio calculation unit 2131 calculates how a
fine-line image is allocated on the pixel values of the pixels in one
column by utilizing characteristics that the values obtained by adding
the pixel values on which the fine-line image is projected regarding the
adjacent pixels of the two adjacent monotonous increase/decrease regions,
are generally constant.

[1417]The allocation-ratio calculation unit 2131 calculates an allocation
ratio regarding each pixel belonging to the two adjacent monotonous
increase/decrease regions, by dividing the pixel value of each pixel
belonging to the two adjacent monotonous increase/decrease regions made
up of pixels arrayed vertically in one column by the value obtained by
adding the pixel values on which the fine-line image is projected for
each horizontally adjacent pixel. However, in the event that the
calculated result, i.e., the calculated allocation ratio, exceeds 100,
the allocation ratio is set to 100.

[1418]For example, as shown in FIG. 181, when the pixel values of
horizontally adjacent pixels belonging to two adjacent monotonous
increase/decrease regions made up of pixels arrayed vertically in one
column are 2 and 58 respectively, the value added is 60, and accordingly,
allocation ratios of 3.5 and 96.5 are calculated for the corresponding
pixels respectively. When the pixel values of horizontally adjacent
pixels belonging to two adjacent monotonous increase/decrease regions
made up of pixels arrayed vertically in one column are 1 and 65
respectively, the value added is 66, and accordingly, allocation ratios
of 1.5 and 98.5 are calculated for the corresponding pixels respectively.
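A minimal sketch of the allocation-ratio calculation follows, expressing
each pixel value as a percentage of the (roughly constant) sum of the
horizontally adjacent pair and clipping values above 100; the function
name and the exact rounding are assumptions, and the printed figures may
differ slightly from the rounded values quoted for FIG. 181.

def allocation_ratios(left_value, right_value):
    """Allocation ratios (in percent) for two horizontally adjacent pixels
    belonging to two adjacent monotonous increase/decrease regions; each
    pixel value is divided by the sum of the pair, and any ratio above
    100 is clipped to 100."""
    total = left_value + right_value
    ratios = [100.0 * left_value / total, 100.0 * right_value / total]
    return [min(r, 100.0) for r in ratios]

print(allocation_ratios(2, 58))  # the pair sums to 60
print(allocation_ratios(1, 65))  # the pair sums to 66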

[1419]In the event that three monotonous increase/decrease regions are
adjacent, the question arises as to which column to use first, of the two
values obtained by adding the pixel values on which the fine-line image
is projected for each horizontally adjacent pixel; in this case, the
allocation ratio is calculated based on the value closer to the pixel
value of the peak P, as shown in FIG. 182.

[1420]For example, when the pixel value of the peak P is 81, and the pixel
value of a pixel of interest belonging to a monotonous increase/decrease
region is 79, in the event that the pixel value of a pixel adjacent to
the left side is 3, and the pixel value of a pixel adjacent to the right
side is -1, the value obtained by adding the pixel value adjacent to the
left side is 82, and the value obtained by adding the pixel value
adjacent to the right side is 78, and consequently, 82 which is closer to
the pixel value 81 of the peak P is selected, so an allocation ratio is
calculated based on the pixel adjacent to the left side. Similarly, when
the pixel value of the peak P is 81, and the pixel value of a pixel of
interest belonging to the monotonous increase/decrease region is 75, in
the event that the pixel value of a pixel adjacent to the left side is 0,
and the pixel value of a pixel adjacent to the right side is 3, the value
obtained by adding the pixel value adjacent to the left side is 75, and
the value obtained by adding the pixel value adjacent to the right side
is 78, and consequently, 78 which is closer to the pixel value 81 of the
peak P is selected, so an allocation ratio is calculated based on the
pixel adjacent to the right side.

[1421]Thus, the allocation-ratio calculation unit 2131 calculates an
allocation ratio regarding a monotonous increase/decrease region made up
of pixels in one column vertically arrayed.

[1422]With the same processing, the allocation-ratio calculation unit 2131
calculates an allocation ratio regarding a monotonous increase/decrease
region made up of pixels in one column horizontally arrayed.

[1423]The regression-line calculation unit 2132 assumes that the boundary
of a monotonous increase/decrease region is a straight line, and detects
the monotonous increase/decrease region within the continuity region
again by calculating a regression line indicating the boundary of the
monotonous increase/decrease region based on the calculated allocation
ratio by the allocation-ratio calculation unit 2131.

[1424]Description will be made regarding processing for calculating a
regression line indicating the boundary of a monotonous increase/decrease
region in the regression-line calculation unit 2132 with reference to
FIG. 183 and FIG. 184.

[1425]In FIG. 183, a white circle denotes a pixel positioned in the
boundary on the upper side of the monotonous increase/decrease region
2141-1 through the monotonous increase/decrease region 2141-5. The
regression-line calculation unit 2132 calculates a regression line
regarding the boundary on the upper side of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5 using the regression processing. For example, the
regression-line calculation unit 2132 calculates a straight line A
wherein the sum of squares of the distances to the pixels positioned in
the boundary on the upper side of the monotonous increase/decrease region
2141-1 through the monotonous increase/decrease region 2141-5 becomes the
minimum value.

[1426]Also, in FIG. 183, a black circle denotes a pixel positioned in the
boundary on the lower side of the monotonous increase/decrease region
2141-1 through the monotonous increase/decrease region 2141-5. The
regression-line calculation unit 2132 calculates a regression line
regarding the boundary on the lower side of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5 using the regression processing. For example, the
regression-line calculation unit 2132 calculates a straight line B
wherein the sum of squares of the distances to the pixels positioned in
the boundary on the lower side of the monotonous increase/decrease region
2141-1 through the monotonous increase/decrease region 2141-5 becomes the
minimum value.

[1427]The regression-line calculation unit 2132 detects the monotonous
increase/decrease region within the continuity region again by
determining the boundary of the monotonous increase/decrease region based
on the calculated regression line.
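The straight lines A and B above are ordinary least-squares fits through
the boundary pixel positions. The following sketch is a minimal
illustration, assuming the boundary pixel coordinates have already been
collected (the coordinates used here are made up) and using a
vertical-residual least-squares fit as the regression.

import numpy as np

def boundary_regression_line(xs, ys):
    """Fit y = a*x + b through boundary pixel positions so that the sum of
    squared residuals is minimized (the straight line A or B used to
    redetect the monotonous increase/decrease regions)."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b

upper_boundary_x = [0, 1, 2, 3, 4]
upper_boundary_y = [5.0, 4.1, 2.9, 2.1, 1.0]
print(boundary_regression_line(upper_boundary_x, upper_boundary_y))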

[1428]As shown in FIG. 184, the regression-line calculation unit 2132
determines the boundary on the upper side of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5 based on the calculated straight line A. For example, the
regression-line calculation unit 2132 determines the boundary on the
upper side from the pixel closest to the calculated straight line A
regarding each of the monotonous increase/decrease region 2141-1 through
the monotonous increase/decrease region 2141-5. For example, the
regression-line calculation unit 2132 determines the boundary on the
upper side such that the pixel closest to the calculated straight line A
is included in each region regarding each of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5.

[1429]As shown in FIG. 184, the regression-line calculation unit 2132
determines the boundary on the lower side of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5 based on the calculated straight line B. For example, the
regression-line calculation unit 2132 determines the boundary on the
lower side from the pixel closest to the calculated straight line B
regarding each of the monotonous increase/decrease region 2141-1 through
the monotonous increase/decrease region 2141-5. For example, the
regression-line calculation unit 2132 determines the boundary on the
lower side such that the pixel closest to the calculated straight line B
is included in each region regarding each of the monotonous
increase/decrease region 2141-1 through the monotonous increase/decrease
region 2141-5.

[1430]Thus, the regression-line calculation unit 2132 again detects a
region wherein the pixel value monotonously increases or decreases from
the peak, based on a regression line representing the boundary of the
continuity region detected by the data continuity detecting unit 101. In
other words, the regression-line calculation unit 2132 detects a region
serving as the monotonous increase/decrease region within the continuity
region again by determining the boundary of the monotonous
increase/decrease region based on the calculated regression line, and
supplies region information indicating the detected region to the
line-width detecting unit 2101.

[1431]As described above, the boundary detecting unit 2121 calculates an
allocation ratio indicating proportion wherein a fine-line image serving
as a signal in the actual world 1 is projected on pixels, and detects the
monotonous increase/decrease region within the continuity region again by
calculating a regression line indicating the boundary of the monotonous
increase/decrease region from the calculated allocation ratio. Thus, a
more accurate monotonous increase/decrease region can be detected.

[1432]The line-width detecting unit 2101 shown in FIG. 178 detects the
width of a fine line in the same processing as the case shown in FIG. 173
based on the region information indicating the region detected again,
which is supplied from the boundary detecting unit 2121. The line-width
detecting unit 2101 supplies fine-line width information indicating the
width of a fine line detected to the signal level estimating unit 2102
along with the data continuity information.

[1433]The processing of the signal level estimating unit 2102 shown in
FIG. 178 is the same processing as the case shown in FIG. 173, so the
description thereof is omitted.

[1434]FIG. 185 is a flowchart for describing actual world estimating
processing using the actual world estimating unit 102 of which
configuration is shown in FIG. 178, which corresponds to the processing
in step S102.

[1435]In step S2121, the boundary detecting unit 2121 executes boundary
detecting processing for detecting a region again based on the pixel
values of the pixels belonging to the continuity region detected by the
data continuity detecting unit 101. The details of the boundary detecting
processing will be described later.

[1436]The processing in step S2122 and step S2123 is the same as the
processing in step S2101 and step S2102, so the description thereof is
omitted.

[1437]FIG. 186 is a flowchart for describing boundary detecting processing
corresponding to the processing in step S2121.

[1438]In step S2131, the allocation-ratio calculation unit 2131 calculates
an allocation ratio indicating the proportion in which a fine-line image
is projected, based on the data continuity information indicating a
monotonous increase/decrease region and the input image. For example, the
allocation-ratio calculation unit 2131 detects adjacent monotonous
increase/decrease regions within the continuity region from the input
image based on the monotonous increase/decrease region information
included in the data continuity information, and generates an image made
up of continuity components alone, on which the fine-line image is
projected, by subtracting the approximate values given by the plane
indicated with the gradient and intercept included in the non-continuity
component information from the pixel values of the pixels belonging to
the detected monotonous increase/decrease regions. Subsequently, the
allocation-ratio calculation unit 2131 calculates an allocation ratio
regarding each pixel belonging to the two adjacent monotonous
increase/decrease regions, by dividing the pixel values of the pixels
belonging to the two monotonous increase/decrease regions made up of
pixels in one column by the sum of the pixel values of the adjacent
pixels.

[1440]In step S2132, the regression-line calculation unit 2132 detects a
region within the continuity region again by calculating a regression
line indicating the boundary of a monotonous increase/decrease region
based on the allocation ratio indicating proportion wherein a fine-line
image is projected. For example, the regression-line calculation unit
2132 assumes that the boundary of a monotonous increase/decrease region
is a straight line, and detects the monotonous increase/decrease region
within the continuity region again by calculating a regression line
indicating the boundary of one end of the monotonous increase/decrease
region, and calculating a regression line indicating the boundary of
another end of the monotonous increase/decrease region.

[1442]Thus, the actual world estimating unit 102 of which configuration is
shown in FIG. 178 detects a region made up of pixels on which a fine-line
image is projected again, detects the width of a fine line in the image
serving as a signal in the actual world 1 based on the region detected
again, and estimates the intensity (level) of light of the signal in the
actual world 1. Thus, the width of a fine line can be detected more
accurately, and the intensity of light can be estimated more accurately
regarding a signal in the actual world 1.

[1443]As described above, in the event that a light signal in the real
world is projected, a discontinuous portion of the pixel values of
multiple pixels is detected in the first image data in which part of the
continuity of the light signal in the real world drops, a continuity
region having continuity of data is detected from the detected
discontinuous portion, a region is detected again based on the pixel
values of the pixels belonging to the detected continuity region, and the
actual world is estimated based on the region detected again, so a more
accurate and higher-precision processing result can be obtained as to
events in the real world.

[1444]Next, description will be made regarding the actual world estimating
unit 102 for outputting derivative values of the approximation function
in the spatial direction for each pixel in a region having continuity as
actual world estimating information with reference to FIG. 187.

[1445]A reference-pixel extracting unit 2201 determines regarding whether
or not each pixel in an input image is a processing region based on the
data continuity information (angle as continuity or region information)
input from the data continuity detecting unit 101, and in the event of a
processing region, extracts reference pixel information necessary for
obtaining an approximate function for approximating the pixel values of
pixels in the input image (the positions and pixel values of multiple
pixels around a pixel of interest necessary for calculation), and outputs
this to an approximation-function estimating unit 2202.

[1446]The approximation-function estimating unit 2202 estimates, based on
the least-squares method, an approximation function for approximately
describing the pixel values of pixels around a pixel of interest based on
the reference pixel information input from the reference-pixel extracting
unit 2201, and outputs the estimated approximation function to a
differential processing unit 2203.

[1447]The differential processing unit 2203 obtains a shift amount in the
position of a pixel to be generated from a pixel of interest according to
the angle of the data continuity information (for example, angle as to a
predetermined axis of a fine line or two-valued edge: gradient) based on
the approximation function input from the approximation-function
estimating unit 2202, calculates a derivative value in the position on
the approximation function according to the shift amount (the derivative
value of a function for approximating the pixel value of each pixel
corresponding to a distance from a line corresponding to continuity along
in the one-dimensional direction), and further, adds information
regarding the position and pixel value of a pixel of interest, and
gradient as continuity to this, and outputs this to the image generating
unit 103 as actual world estimating information.

[1448]Next, description will be made regarding actual world estimating
processing by the actual world estimating unit 102 in FIG. 187 with
reference to the flowchart in FIG. 188.

[1449]In step S2201, the reference-pixel extracting unit 2201 acquires an
angle and region information as the data continuity information from the
data continuity detecting unit 101 as well as an input image.

[1450]In step S2202, the reference-pixel extracting unit 2201 sets a pixel
of interest from unprocessed pixels in the input image.

[1451]In step S2203, the reference-pixel extracting unit 2201 determines
regarding whether or not the pixel of interest is included in a
processing region based on the region information of the data continuity
information, and in the event that the pixel of interest is not a pixel
in a processing region, the processing proceeds to step S2210, the
differential processing unit 2203 is informed that the pixel of interest
is in a non-processing region via the approximation-function estimating
unit 2202, in response to this, the differential processing unit 2203
sets the derivative value regarding the corresponding pixel of interest
to zero, further adds the pixel value of the pixel of interest to this,
and outputs this to the image generating unit 103 as actual world
estimating information, and also the processing proceeds to step S2211.
Also, in the event that determination is made that the pixel of interest
is in a processing region, the processing proceeds to step S2204.

[1452]In step S2204, the reference-pixel extracting unit 2201 determines
regarding whether the direction having data continuity is an angle close
to the horizontal direction or angle close to the vertical direction
based on the angular information included in the data continuity
information. That is to say, in the event that an angle θ having
data continuity is 45°>θ≧0°, or
180°>θ≧135°, the reference-pixel extracting
unit 2201 determines that the direction of continuity of the pixel of
interest is close to the horizontal direction, and in the event that the
angle θ having data continuity is
135°>θ≧45°, determines that the direction
of continuity of the pixel of interest is close to the vertical
direction.

[1453]In step S2205, the reference-pixel extracting unit 2201 extracts the
positional information and pixel values of reference pixels corresponding
to the determined direction from the input image respectively, and
outputs these to the approximation-function estimating unit 2202. That is
to say, reference pixels become data to be used for calculating a
later-described approximation function, so are preferably extracted
according to the gradient thereof. Accordingly, corresponding to any
determined direction of the horizontal direction and the vertical
direction, reference pixels in a long range in the direction thereof are
extracted. More specifically, for example, as shown in FIG. 189, in the
event that a gradient Gf is close to the vertical direction,
determination is made that the direction is the vertical direction. In
this case, as shown in FIG. 189 for example, when a pixel (0, 0) in the
center of FIG. 189 is taken as a pixel of interest, the reference-pixel
extracting unit 2201 extracts each pixel value of pixels (-1, 2), (-1,
1), (-1, 0), (-1, -1), (-1, -2), (0, 2), (0, 1), (0, 0), (0, -1), (0,
-2), (1, 2), (1, 1), (1, 0), (1, -1), and (1, -2). Note that in FIG. 189,
let us say that the size of each pixel in both the horizontal direction
and the vertical direction is 1.

[1454]In other words, the reference-pixel extracting unit 2201 extracts
pixels in a long range in the vertical direction as reference pixels such
that the reference pixels are 15 pixels in total of 2 pixels respectively
in the vertical (upper/lower) direction×1 pixel respectively in the
horizontal (left/right) direction centered on the pixel of interest.

[1455]On the contrary, in the event that determination is made that the
direction is the horizontal direction, the reference-pixel extracting
unit 2201 extracts pixels in a long range in the horizontal direction as
reference pixels such that the reference pixels are 15 pixels in total of
1 pixel respectively in the vertical (upper/lower) direction×2
pixels respectively in the horizontal (left/right) direction centered on
the pixel of interest, and outputs these to the approximation-function
estimating unit 2202. Needless to say, the number of reference pixels is
not restricted to 15 pixels as described above, so any number of pixels
may be employed.
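The choice between a vertically long and a horizontally long
reference-pixel range can be sketched as follows; the 45°/135° thresholds
follow the determination in step S2204, while the offset representation
and the function name are assumptions made only for this illustration.

def reference_offsets(angle_deg):
    """Relative (dx, dy) offsets of the 15 reference pixels around the
    pixel of interest: a vertically long range (2 up/down x 1 left/right)
    when the continuity direction is close to vertical
    (135 deg > angle >= 45 deg), otherwise a horizontally long range."""
    if 45.0 <= angle_deg < 135.0:  # close to the vertical direction
        return [(dx, dy) for dx in (-1, 0, 1) for dy in (-2, -1, 0, 1, 2)]
    return [(dx, dy) for dx in (-2, -1, 0, 1, 2) for dy in (-1, 0, 1)]

print(len(reference_offsets(80.0)), len(reference_offsets(10.0)))  # 15 15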

[1456]In step S2206, the approximation-function estimating unit 2202
estimates the approximation function f(x) using the least squares method
based on information of reference pixels input from the reference-pixel
extracting unit 2201, and outputs this to the differential processing
unit 2203.

[1457]That is to say, the approximation function f(x) is a polynomial such
as shown in the following Expression (96).

f(x) = w1·x^n + w2·x^(n-1) + . . . + wn+1 (96)

[1458]Thus, if each of coefficients W1 through Wn+1 of the
polynomial in Expression (96) can be obtained, the approximation function
f(x) for approximating the pixel value of each reference pixel (reference
pixel value) can be obtained. However, reference pixel values exceeding
the number of coefficients are necessary, so for example, in the case
such as shown in FIG. 189, the number of reference pixels is 15 pixels in
total, and accordingly, the number of obtainable coefficients in the
polynomial is restricted to 15. In this case, let us say that the
polynomial is up to 14-dimension, and the approximation function is
estimated by obtaining the coefficients W1 through W15. Note
that in this case, simultaneous equations may be employed by setting the
approximation function f(x) made up of a 15-dimensional polynomial.

[1459]Accordingly, when 15 reference pixel values shown in FIG. 189 are
employed, the approximation-function estimating unit 2202 estimates the
approximation function f(x) by solving the following Expression (97)
using the least squares method.

P(-1,-2)=f(-1-Cx(-2))

P(-1,-1)=f(-1-Cx(-1))

P(-1,0)=f(-1)(=f(-1-Cx(0)))

P(-1,1)=f(-1-Cx(1))

P(-1,2)=f(-1-Cx(2))

P(0,-2)=f(0-Cx(-2))

P(0,-1)=f(0-Cx(-1))

P(0,0)=f(0)(=f(0-Cx(0)))

P(0,1)=f(0-Cx(1))

P(0,2)=f(0-Cx(2))

P(1,-2)=f(1-Cx(-2))

P(1,-1)=f(1-Cx(-1))

P(1,0)=f(1)(=f(1-Cx(0)))

P(1,1)=f(1-Cx(1))

P(1,2)=f(1-Cx(2)) (97)

[1460]Note that the number of reference pixels may be changed in
accordance with the degree of the polynomial.

[1461]Here, Cx(ty) denotes a shift amount, and when the gradient as
continuity is denoted with Gf, Cx(ty)=ty/Gf is defined. This
shift amount Cx(ty) denotes the width of a shift as to the spatial
direction X in the position in the spatial direction Y=ty on condition
that the approximation function f(x) defined on the position in the
spatial direction Y=0 is continuous (has continuity) along the gradient
Gf. Accordingly, for example, in the event that the approximation
function is defined as f(x) on the position in the spatial direction Y=0,
this approximation function f(x) must be shifted by Cx(ty) as to the
spatial direction X along the gradient Gf in the spatial direction
Y=ty, so the function is defined as f(x-Cx(ty)) (=f(x-ty/Gf)).
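
For reference, the following is a minimal sketch in Python of how the
approximation-function estimating unit 2202 might set up and solve the
system of Expression (97): each of the 15 reference pixel values is paired
with a position shifted by the shift amount Cx(y)=y/Gf, and a polynomial
is fitted by the least squares method. The patch layout, the value of Gf,
and the modest polynomial degree (4 rather than 14) are illustrative
assumptions, not part of the arrangement described above.

    import numpy as np

    def estimate_approximation_function(patch, Gf, degree=4):
        # patch[y + 2][x + 1] holds the reference pixel value P(x, y) for
        # x in {-1, 0, 1} and y in {-2, ..., 2} (the 15 pixels of FIG. 189).
        xs, ps = [], []
        for y in range(-2, 3):
            Cx = y / Gf                      # shift amount Cx(y) = y / Gf
            for x in range(-1, 2):
                xs.append(x - Cx)            # position shifted along the continuity
                ps.append(patch[y + 2][x + 1])
        # Least squares fit of f(x) = W1*x^n + ... + W(n+1).
        return np.poly1d(np.polyfit(xs, ps, degree))

    # Hypothetical 5x3 patch of pixel values and a hypothetical gradient Gf = 2.0.
    patch = [[10, 30, 90],
             [10, 40, 90],
             [20, 50, 80],
             [30, 60, 90],
             [30, 70, 90]]
    f = estimate_approximation_function(patch, Gf=2.0)
    f_prime = f.deriv()                      # first-order differential f(x)'
    print(f(0.0), f_prime(0.0))              # value and derivative at the pixel of interest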

[1462]In step S2207, the differential processing unit 2203 obtains a shift
amount in the position of a pixel to be generated based on the
approximation function f(x) input from the approximation-function
estimating unit 2202.

[1463]That is to say, in the event that pixels are generated so as to have
a double density in the horizontal direction and in the vertical direction
respectively (quadruple density in total), the differential processing
unit 2203 first obtains the shift amount at the center position Pin (Xin,
Yin) of the pixel of interest, in order to obtain a derivative value there
for dividing the pixel of interest into two pixels Pa and Pb which become
a double density in the vertical direction, as shown in FIG. 190. This
shift amount is Cx(0), so it is actually zero. Note that in FIG. 190, the
pixel Pin of which the center of gravity is at (Xin, Yin) is a square, and
the pixels Pa and Pb of which the centers of gravity are at (Xin,
Yin+0.25) and (Xin, Yin-0.25) respectively are rectangles which are long
in the horizontal direction in the drawing.

[1464]In step S2208, the differential processing unit 2203 differentiates
the approximation function f(x) so as to obtain a primary differential
function f(x)' of the approximation function, obtains a derivative value
at a position according to the obtained shift amount, and outputs this to
the image generating unit 103 as actual world estimating information.
That is to say, in this case, the differential processing unit 2203
obtains a derivative value f(Xin)', and adds the position thereof (in
this case, a pixel of interest (Xin, Yin)), the pixel value thereof, and
the gradient information in the direction of continuity to this, and
outputs this.

[1465]In step S2209, the differential processing unit 2203 determines
regarding whether or not derivative values necessary for generating
desired-density pixels are obtained. For example, in this case, the
obtained derivative values are only derivative values necessary for a
double density (only derivative values to become a double density for the
spatial direction Y are obtained), so determination is made that
derivative values necessary for generating desired-density pixels are not
obtained, and the processing returns to step S2207.

[1466]In step S2207, the differential processing unit 2203 obtains a shift
amount in the position of a pixel to be generated based on the
approximation function f(x) input from the approximation-function
estimating unit 2202 again. That is to say, in this case, the
differential processing unit 2203 obtains derivative values necessary for
further dividing the divided pixels Pa and Pb into 2 pixels respectively.
The positions of the pixels Pa and Pb are denoted with black circles in
FIG. 190 respectively, so the differential processing unit 2203 obtains a
shift amount corresponding to each position. The shift amounts of the
pixels Pa and Pb are Cx(0.25) and Cx(-0.25) respectively.

[1467]In step S2208, the differential processing unit 2203 subjects the
approximation function f(x) to a primary differentiation, obtains a
derivative value in the position according to a shift amount
corresponding to each of the pixels Pa and Pb, and outputs this to the
image generating unit 103 as actual world estimating information.

[1468]That is to say, in the event of employing the reference pixels shown
in FIG. 189, the differential processing unit 2203, as shown in FIG. 191,
obtains a differential function f(x)' regarding the obtained
approximation function f(x), obtains derivative values in the positions
(Xin-Cx(0.25)) and (Xin-Cx(-0.25)), which are positions shifted by shift
amounts Cx(0.25) and Cx(-0.25) for the spatial direction X, as
f(Xin-Cx(0.25))' and f(Xin-Cx(-0.25))' respectively, adds the positional
information corresponding to the derivative values thereof to this, and
outputs this as actual world estimating information. Note that the
information of the pixel values is output at the first processing, so
this is not added at this processing.

[1469]In step S2209, the differential processing unit 2203 determines
regarding whether or not derivative values necessary for generating
desired-density pixels are obtained again. For example, in this case,
derivative values to become a quadruple density have been obtained, so
determination is made that derivative values necessary for generating
desired-density pixels have been obtained, and the processing proceeds to
step S2211.

[1470]In step S2211, the reference-pixel extracting unit 2201 determines
regarding whether or not all of the pixels have been processed, and in
the event that determination is made that all of the pixels have not been
processed, the processing returns to step S2202. Also, in step S2211, in
the event that determination is made that all of the pixels have been
processed, the processing ends.

[1471]As described above, in the event that pixels are generated so as to
become a quadruple density in the horizontal direction and in the
vertical direction regarding the input image, pixels are divided by
extrapolation/interpolation using the derivative value of the
approximation function in the center position of the pixel to be divided,
so in order to generate quadruple-density pixels, information of three
derivative values in total is necessary.

[1472]That is to say, as shown in FIG. 190, derivative values necessary
for generating the four pixels P01, P02, P03, and P04 (in FIG. 190, the
pixels P01, P02, P03, and P04 are squares whose centers of gravity are at
the positions of the four cross marks in the drawing; the length of each
side is 1 for the pixel Pin, and accordingly around 0.5 for the pixels
P01, P02, P03, and P04) are ultimately necessary for one pixel of
interest. Accordingly, in order to generate quadruple-density pixels,
first, double-density pixels are generated in the horizontal direction or
in the vertical direction (in this case, in the vertical direction) (the
above first processing in steps S2207 and S2208), and then the two
divided pixels are each divided in the direction orthogonal to the
initial dividing direction (in this case, in the horizontal direction)
(the above second processing in steps S2207 and S2208).
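
As a rough illustration of why three derivative values are needed, the
following Python sketch splits one pixel into quadruple-density pixels. It
assumes a simple first-order extrapolation in which the two halves of each
split preserve the original mean and differ by the local derivative
multiplied by twice the center offset; the actual
extrapolation/interpolation formula of the image generating unit 103 is
described elsewhere, and the stand-in polynomial and gradient Gf below are
hypothetical.

    import numpy as np

    f = np.poly1d([0.5, -1.0, 3.0, 20.0])    # stand-in approximation function f(x)
    f_prime = f.deriv()                       # f(x)'
    Gf = 2.0                                  # hypothetical gradient of continuity
    Cx = lambda ty: ty / Gf                   # shift amount Cx(ty) = ty / Gf

    def split(value, derivative, offset=0.25):
        # First-order split: the mean is preserved and the two halves differ
        # by derivative * 2 * offset (an assumed extrapolation rule).
        return value + derivative * offset, value - derivative * offset

    Xin = 0.0
    Pin = f(Xin)                                          # pixel of interest
    Pa, Pb = split(Pin, f_prime(Xin))                     # double density (vertical)
    P01, P02 = split(Pa, f_prime(Xin - Cx(0.25)))         # split Pa (horizontal)
    P03, P04 = split(Pb, f_prime(Xin - Cx(-0.25)))        # split Pb (horizontal)
    print(P01, P02, P03, P04)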

[1473]Note that with the above example, description has been made
regarding derivative values at the time of calculating quadruple-density
pixels as an example, but in the event of calculating pixels having a
density more than a quadruple density, many more derivative values
necessary for calculating pixel values may be obtained by repeatedly
performing the processing in steps S2207 through S2209. Also, with the
above example, description has been made regarding an example for
obtaining double-density pixel values, but the approximation function
f(x) is a continuous function, so necessary derivative values may be
obtained even regarding pixel values having a density other than a
pluralized density.

[1474]According to the above arrangement, an approximation function for
approximating the pixel values of pixels near a pixel of interest can be
obtained, and derivative values in the positions corresponding to the
pixel positions in the spatial direction can be output as actual world
estimating information.

[1475]With the actual world estimating unit 102 described in FIG. 187,
derivative values necessary for generating an image have been output as
actual world estimating information, but a derivative value is the same
value as a gradient of the approximation function f(x) in a necessary
position.

[1476]Now, description will be made next regarding the actual world
estimating unit 102 wherein gradients alone on the approximation function
f(x) necessary for generating pixels are directly obtained without
obtaining the approximation function f(x), and output as actual world
estimating information, with reference to FIG. 192.

[1477]The reference-pixel extracting unit 2211 determines regarding
whether or not each pixel of an input image is a processing region based
on the data continuity information (angle as continuity, or region
information) input from the data continuity detecting unit 101, and in
the event of a processing region, extracts information of reference
pixels necessary for obtaining gradients from the input image (perimeter
multiple pixels arrayed in the vertical direction including a pixel of
interest, which are necessary for calculation, or the positions of
perimeter multiple pixels arrayed in the horizontal direction including a
pixel of interest, and information of each pixel value), and outputs this
to a gradient estimating unit 2212.

[1478]The gradient estimating unit 2212 generates gradient information of
a pixel position necessary for generating a pixel based on the reference
pixel information input from the reference-pixel extracting unit 2211,
and outputs this to the image generating unit 103 as actual world
estimating information. More specifically, the gradient estimating unit
2212 obtains a gradient in the position of a pixel of interest on the
approximation function f(x) approximately expressing the actual world
using the difference information of the pixel values between pixels,
outputs this along with the position information and pixel value of the
pixel of interest, and the gradient information in the direction of
continuity, as actual world estimating information.

[1479]Next, description will be made regarding the actual world estimating
processing by the actual world estimating unit 102 in FIG. 192 with
reference to the flowchart in FIG. 193.

[1480]In step S2221, the reference-pixel extracting unit 2211 acquires an
angle and region information as the data continuity information from the
data continuity detecting unit 101 along with an input image.

[1481]In step S2222, the reference-pixel extracting unit 2211 sets a pixel
of interest from unprocessed pixels in the input image.

[1482]In step S2223, the reference-pixel extracting unit 2211 determines
regarding whether or not the pixel of interest is in a processing region
based on the region information of the data continuity information, and
in the event that determination is made that the pixel of interest is not
a pixel in the processing region, the processing proceeds to step S2228,
wherein the gradient estimating unit 2212 is informed that the pixel of
interest is in a non-processing region, in response to this, the gradient
estimating unit 2212 sets the gradient for the corresponding pixel of
interest to zero, and further adds the pixel value of the pixel of
interest to this, and outputs this as actual world estimating information
to the image generating unit 103, and also the processing proceeds to
step S2229. Also, in the event that determination is made that the pixel
of interest is in a processing region, the processing proceeds to step
S2224.

[1483]In step S2224, the reference-pixel extracting unit 2211 determines
regarding whether the direction having data continuity is an angle close
to the horizontal direction or angle close to the vertical direction
based on the angular information included in the data continuity
information. That is to say, in the event that an angle θ having
data continuity is 45°>θ≧0°, or
180°>θ≧135°, the reference-pixel extracting
unit 2211 determines that the direction of continuity of the pixel of
interest is close to the horizontal direction, and in the event that the
angle θ having data continuity is
135°>θ≧45°, determines that the direction
of continuity of the pixel of interest is close to the vertical
direction.

[1484]In step S2225, the reference-pixel extracting unit 2211 extracts the
positional information and pixel values of reference pixels corresponding
to the determined direction from the input image respectively, and
outputs these to the gradient estimating unit 2212. That is to say,
reference pixels become data to be used for calculating a later-described
gradient, so are preferably extracted according to a gradient indicating
the direction of continuity. Accordingly, corresponding to any determined
direction of the horizontal direction and the vertical direction,
reference pixels in a long range in the direction thereof are extracted.
More specifically, for example, in the event that determination is made
that a gradient is close to the vertical direction, as shown in FIG. 194,
when a pixel (0, 0) in the center of FIG. 194 is taken as a pixel of
interest, the reference-pixel extracting unit 2211 extracts each pixel
value of pixels (0, 2), (0, 1), (0, 0), (0, -1), and (0, -2). Note that
in FIG. 194, let us say that the size of each pixel in both the
horizontal direction and the vertical direction is 1.

[1485]In other words, the reference-pixel extracting unit 2211 extracts
pixels in a long range in the vertical direction as reference pixels such
that the reference pixels are 5 pixels in total of 2 pixels respectively
in the vertical (upper/lower) direction centered on the pixel of
interest.

[1486]On the contrary, in the event that determination is made that the
direction is the horizontal direction, the reference-pixel extracting
unit 2211 extracts pixels in a long range in the horizontal direction as
reference pixels such that the reference pixels are 5 pixels in total of
2 pixels respectively in the horizontal (left/right) direction centered
on the pixel of interest, and outputs these to the gradient estimating
unit 2212. Needless to say, the number of reference pixels is
not restricted to 5 pixels as described above, so any number of pixels
may be employed.

[1487]In step S2226, the gradient estimating unit 2212 calculates a shift
amount of each pixel value based on the reference pixel information input
from the reference-pixel extracting unit 2211, and the gradient Gf
in the direction of continuity. That is to say, in the event that the
approximation function f(x) corresponding to the spatial direction Y=0 is
taken as a basis, the approximation functions corresponding to the
spatial directions Y=-2, -1, 1, and 2 are continuous along the gradient
Gf as continuity as shown in FIG. 194, so the respective
approximation functions are described as f(x-Cx(2)), f(x-Cx(1)),
f(x-Cx(-1)), and f(x-Cx(-2)), and are represented as functions shifted by
each shift amount in the spatial direction X for each of the spatial
directions Y=-2, -1, 1, and 2.

[1489]In step S2227, the gradient estimating unit 2212 calculates
(estimates) a gradient on the approximation function f(x) in the position
of the pixel of interest. For example, as shown in FIG. 194, in the event
that the direction of continuity regarding the pixel of interest is an
angle close to the vertical direction, the pixel values between the
pixels adjacent in the horizontal direction exhibit great differences,
but change between the pixels in the vertical direction is small and
similar, and accordingly, the gradient estimating unit 2212 substitutes
the difference between the pixels in the vertical direction for the
difference between the pixels in the horizontal direction, and obtains a
gradient on the approximation function f(x) in the position of the pixel
of interest, by treating the change between the pixels in the vertical
direction as change in the spatial direction X according to the shift
amount.

[1490]That is to say, if we assume that the approximation function f(x)
approximately describing the real world exists, the relations between the
above shift amounts and the pixel values of the respective reference
pixels is such as shown in FIG. 195. Here, the pixel values of the
respective pixels in FIG. 194 are represented as P(0, 2), P(0, 1), P(0,
0), P(0, -1), and P(0, -2) from the top. As a result, with regard to the
pixel value P and shift amount Cx near the pixel of interest (0, 0), 5
pairs of relations (P, Cx)=(P(0, 2), -Cx(2)), (P(0, 1), -Cx(1)), (P(0,
-1), -Cx(-1)), (P(0, -2), -Cx(-2)), and (P(0, 0), 0) are obtained.

[1491]Now, with the pixel value P, shift amount Cx, and gradient Kx
(gradient on the approximation function f(x)), the relation such as the
following Expression (98) holds.

P=Kx×Cx (98)

[1492]The above Expression (98) is a one-variable function regarding the
variable Kx, so the gradient estimating unit 2212 obtains the gradient Kx
(gradient) using the least squares method of one variable.

[1493]That is to say, the gradient estimating unit 2212 obtains the
gradient of the pixel of interest by solving a normal equation such as
shown in the following Expression (99), adds the pixel value of the pixel
of interest, and the gradient information in the direction of continuity
to this, and outputs this to the image generating unit 103 as actual
world estimating information.

Kx = (Σ(i=1 to m) Cx_i × P_i) / (Σ(i=1 to m) (Cx_i)²) (99)

[1494]Here, i denotes a number, 1 through m, for identifying each pair of
the pixel value P and shift amount Cx of the above reference pixels.
Also, m denotes the number of the reference pixels including the pixel of
interest.
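
The following is a minimal Python sketch of this one-variable least
squares calculation, i.e. Kx = Σ(Cx_i × P_i)/Σ((Cx_i)²) over the m pairs.
Each pixel value is taken relative to the pixel of interest, which is one
plausible reading of the pair (P(0, 0), 0) in FIG. 195; the pixel values
and the gradient Gf used below are hypothetical.

    def estimate_gradient_Kx(pixel_values, Gf):
        # pixel_values holds P(0, 2), P(0, 1), P(0, 0), P(0, -1), P(0, -2)
        # from top to bottom, as in FIG. 194; the shift amount is Cx(y) = y / Gf.
        ys = [2, 1, 0, -1, -2]
        center = pixel_values[2]                   # P(0, 0), the pixel of interest
        num, den = 0.0, 0.0
        for p, y in zip(pixel_values, ys):
            cx = -(y / Gf)                         # pair (P, Cx) = (P(0, y), -Cx(y))
            num += cx * (p - center)               # value relative to the pixel of interest
            den += cx * cx
        return num / den if den != 0.0 else 0.0    # Kx of Expression (99)

    # Hypothetical pixel values along a nearly vertical fine line, Gf = 2.0.
    print(estimate_gradient_Kx([40.0, 45.0, 50.0, 55.0, 60.0], Gf=2.0))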

[1495]In step S2229, the reference-pixel extracting unit 2211 determines
regarding whether or not all of the pixels have been processed, and in
the event that determination is made that all of the pixels have not been
processed, the processing returns to step S2222. Also, in the event that
determination is made that all of the pixels have been processed in step
S2229, the processing ends.

[1496]Note that the gradient to be output as actual world estimating
information by the above processing is employed at the time of
calculating desired pixel values to be obtained finally through
extrapolation/interpolation. Also, with the above example, description
has been made regarding the gradient at the time of calculating
double-density pixels as an example, but in the event of calculating
pixels having a density more than a double density, gradients in many
more positions necessary for calculating the pixel values may be
obtained.

[1497]For example, as shown in FIG. 190, in the event that pixels having a
quadruple density in the spatial directions in total of a double density
in the horizontal direction and also a double density in the vertical
direction are generated, the gradient Kx of the approximation function
f(x) corresponding to the respective positions Pin, Pa, and Pb in FIG.
190 should be obtained, as described above.

[1498]Also, with the above example, an example for obtaining
double-density pixels has been described, but the approximation function
f(x) is a continuous function, so it is possible to obtain a necessary
gradient even regarding the pixel value of a pixel in a position other
than a pluralized density.

[1499]According to the above arrangements, it is possible to generate and
output gradients on the approximation function necessary for generating
pixels in the spatial direction as actual world estimating information by
using the pixel values of pixels near a pixel of interest without
obtaining the approximation function approximately representing the
actual world.

[1500]Next, description will be made regarding the actual world estimating
unit 102, which outputs derivative values on the approximation function
in the frame direction (temporal direction) for each pixel in a region
having continuity as actual world estimating information, with reference
to FIG. 196.

[1501]The reference-pixel extracting unit 2231 determines regarding
whether or not each pixel in an input image is in a processing region
based on the data continuity information (movement as continuity
(movement vector), and region information) input from the data continuity
detecting unit 101, and in the event that each pixel is in a processing
region, extracts reference pixel information necessary for obtaining an
approximation function approximating the pixel values of the pixels in
the input image (multiple pixel positions around a pixel of interest
necessary for calculation, and the pixel values thereof), and outputs
this to the approximation-function estimating unit 2232.

[1502]The approximation-function estimating unit 2232 estimates an
approximation function, which approximately describes the pixel value of
each pixel around the pixel of interest based on the reference pixel
information in the frame direction input from the reference-pixel
extracting unit 2231, based on the least squares method, and outputs the
estimated function to the differential processing unit 2233.

[1503]The differential processing unit 2233 obtains a shift amount in the
frame direction in the position of a pixel to be generated from the pixel
of interest according to the movement of the data continuity information
based on the approximation function in the frame direction input from the
approximation-function estimating unit 2232, calculates a derivative
value in a position on the approximation function in the frame direction
according to the shift amount thereof (derivative value of the function
approximating the pixel value of each pixel corresponding to a distance
along the primary direction from a line corresponding to continuity),
further adds the position and pixel value of the pixel of interest, and
information regarding movement as continuity to this, and outputs this to
the image generating unit 103 as actual world estimating information.

[1504]Next, description will be made regarding the actual world estimating
processing by the actual world estimating unit 102 in FIG. 196 with
reference to the flowchart in FIG. 197.

[1505]In step S2241, the reference-pixel extracting unit 2231 acquires the
movement and region information as the data continuity information from
the data continuity detecting unit 101 along with an input image.

[1506]In step S2242, the reference-pixel extracting unit 2231 sets a pixel
of interest from unprocessed pixels in the input image.

[1507]In step S2243, the reference-pixel extracting unit 2231 determines
regarding whether or not the pixel of interest is included in a
processing region based on the region information of the data continuity
information, and in the event that the pixel of interest is not a pixel
in a processing region, the processing proceeds to step S2250, the
differential processing unit 2233 is informed that the pixel of interest
is in a non-processing region via the approximation-function estimating
unit 2232, in response to this, the differential processing unit 2233
sets the derivative value regarding the corresponding pixel of interest
to zero, further adds the pixel value of the pixel of interest to this,
and outputs this to the image generating unit 103 as actual world
estimating information, and also the processing proceeds to step S2251.
Also, in the event that determination is made that the pixel of interest
is in a processing region, the processing proceeds to step S2244.

[1508]In step S2244, the reference-pixel extracting unit 2231 determines
regarding whether the direction having data continuity is movement close
to the spatial direction or movement close to the frame direction based
on movement information included in the data continuity information. That
is to say, as shown in FIG. 198, if we say that an angle indicating the
spatial and temporal directions within a surface made up of the frame
direction T, which is taken as a reference axis, and the spatial
direction Y, is taken as θv, in the event that an angle θv
having data continuity is 45°>θv≧0°, or
180°>θv≧135°, the reference-pixel
extracting unit 2231 determines that the movement as continuity of the
pixel of interest is close to the frame direction (temporal direction),
and in the event that the angle θv having data continuity is
135°>θv≧45°, determines that the direction
of continuity of the pixel of interest is close to the spatial direction.

[1509]In step S2245, the reference-pixel extracting unit 2231 extracts the
positional information and pixel values of reference pixels corresponding
to the determined direction from the input image respectively, and
outputs these to the approximation-function estimating unit 2232. That is
to say, reference pixels become data to be used for calculating a
later-described approximation function, so are preferably extracted
according to the angle thereof. Accordingly, corresponding to any
determined direction of the frame direction and the spatial direction,
reference pixels in a long range in the direction thereof are extracted.
More specifically, for example, as shown in FIG. 198, in the event that a
movement direction Vf is close to the spatial direction,
determination is made that the direction is the spatial direction. In
this case, as shown in FIG. 198 for example, when a pixel (t, y)=(0, 0)
in the center of FIG. 198 is taken as a pixel of interest, the
reference-pixel extracting unit 2231 extracts each pixel value of pixels
(t, y)=(-1, 2), (-1, 1), (-1, 0), (-1, -1), (-1, -2), (0, 2), (0, 1), (0,
0), (0, -1), (0, -2), (1, 2), (1, 1), (1, 0), (1, -1), and (1, -2). Note
that in FIG. 198, let us say that the size of each pixel in both the
frame direction and the spatial direction is 1.

[1510]In other words, the reference-pixel extracting unit 2231 extracts
pixels in a long range in the spatial direction as to the frame direction
as reference pixels such that the reference pixels are 15 pixels in total
of 2 pixels respectively in the spatial direction (upper/lower direction
in the drawing)×1 frame respectively in the frame direction
(left/right direction in the drawing) centered on the pixel of interest.

[1511]On the contrary, in the event that determination is made that the
direction is the frame direction, the reference-pixel extracting unit
2231 extracts pixels in a long range in the frame direction as reference
pixels such that the reference pixels are 15 pixels in total of 1 pixel
respectively in the spatial direction (upper/lower direction in the
drawing)×2 frames respectively in the frame direction (left/right
direction in the drawing) centered on the pixel of interest, and outputs
these to the approximation-function estimating unit 2232. Needless to
say, the number of reference pixels is not restricted to 15 pixels as
described above, so any number of pixels may be employed.

[1512]In step S2246, the approximation-function estimating unit 2232
estimates the approximation function f(t) using the least squares method
based on information of reference pixels input from the reference-pixel
extracting unit 2231, and outputs this to the differential processing
unit 2233.

[1513]That is to say, the approximation function f(t) is a polynomial such
as shown in the following Expression (100).

f(t)=W1t^n+W2t^(n-1)+ . . . +W(n+1) (100)

[1514]Thus, if each of the coefficients W1 through W(n+1) of the
polynomial in Expression (100) can be obtained, the approximation
function f(t) in the frame direction for approximating the pixel value of
each reference pixel can be obtained. However, at least as many reference
pixel values as coefficients are necessary, so for example, in the case
shown in FIG. 198, the number of reference pixels is 15 in total, and
accordingly, the number of obtainable coefficients in the polynomial is
restricted to 15. In this case, let us say that the polynomial is of up
to 14 dimensions, and the approximation function is estimated by
obtaining the coefficients W1 through W15. Note that in this case, the
coefficients may also be obtained by solving a system of 15 simultaneous
equations set up for the approximation function f(t), instead of using
the least squares method.

[1515]Accordingly, when 15 reference pixel values shown in FIG. 198 are
employed, the approximation-function estimating unit 2232 estimates the
approximation function f(t) by solving the following Expression (101)
using the least squares method.

P(-1,-2)=f(-1-Ct(-2))

P(-1,-1)=f(-1-Ct(-1))

P(-1,0)=f(-1)(=f(-1-Ct(0)))

P(-1,1)=f(-1-Ct(1))

P(-1,2)=f(-1-Ct(2))

P(0,-2)=f(0-Ct(-2))

P(0,-1)=f(0-Ct(-1))

P(0,0)=f(0)(=f(0-Ct(0)))

P(0,1)=f(0-Ct(1))

P(0,2)=f(0-Ct(2))

P(1,-2)=f(1-Ct(-2))

P(1,-1)=f(1-Ct(-1))

P(1,0)=f(1)(=f(1-Ct(0)))

P(1,1)=f(1-Ct(1))

P(1,2)=f(1-Ct(2)) (101)

[1516]Note that the number of reference pixels may be changed in
accordance with the degree of the polynomial.

[1517]Here, Ct(ty) denotes a shift amount, which is the same as the above
Cx(ty), and when the gradient as continuity is denoted with Vf,
Ct(ty)=ty/Vf is defined. This shift amount Ct(ty) denotes the width
of a shift as to the frame direction T in the position in the spatial
direction Y=ty on condition that the approximation function f(t) defined
on the position in the spatial direction Y=0 is continuous (has
continuity) along the gradient Vf. Accordingly, for example, in the
event that the approximation function is defined as f(t) on the position
in the spatial direction Y=0, this approximation function f(t) must be
shifted by Ct(ty) as to the frame direction (temporal direction) T in the
spatial direction Y=ty, so the function is defined as f(t-Ct(ty))
(=f(t-ty/Vf)).
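
The fit in the frame direction can be sketched in Python exactly like the
spatial-direction sketch given earlier, simply replacing x with t, Cx(ty)
with Ct(ty), and the gradient Gf with the movement Vf; the patch layout,
the value of Vf, and the modest polynomial degree remain illustrative
assumptions rather than the arrangement itself.

    import numpy as np

    def estimate_frame_direction_function(patch, Vf, degree=4):
        # patch[y + 2][t + 1] holds the reference pixel value P(t, y) for
        # t in {-1, 0, 1} and y in {-2, ..., 2} (the 15 pixels of FIG. 198).
        ts, ps = [], []
        for y in range(-2, 3):
            Ct = y / Vf                      # shift amount Ct(y) = y / Vf
            for t in range(-1, 2):
                ts.append(t - Ct)            # position shifted along the movement
                ps.append(patch[y + 2][t + 1])
        return np.poly1d(np.polyfit(ts, ps, degree))   # approximation function f(t)

    # Hypothetical 5x3 patch of pixel values (rows: y = -2..2, columns: t = -1..1).
    patch = [[12, 30, 88],
             [14, 38, 86],
             [18, 50, 82],
             [22, 62, 86],
             [24, 70, 88]]
    f_t = estimate_frame_direction_function(patch, Vf=2.0)
    print(f_t(0.0), f_t.deriv()(0.0))        # value and derivative at the pixel of interest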

[1518]In step S2247, the differential processing unit 2233 obtains a shift
amount in the position of a pixel to be generated based on the
approximation function f(t) input from the approximation-function
estimating unit 2232.

[1519]That is to say, in the event that pixels are generated so as to have
a double density in the frame direction and in the spatial direction
respectively (quadruple density in total), the differential processing
unit 2233 first obtains the shift amount at the center position Pin (Tin,
Yin) of the pixel of interest, in order to obtain a derivative value there
for dividing the pixel of interest into two pixels Pat and Pbt which
become a double density in the spatial direction, as shown in FIG. 199.
This shift amount is Ct(0), so it is actually zero. Note that in FIG. 199,
the pixel Pin of which the center of gravity is at (Tin, Yin) is a square,
and the pixels Pat and Pbt of which the centers of gravity are at (Tin,
Yin+0.25) and (Tin, Yin-0.25) respectively are rectangles which are long
in the horizontal direction in the drawing. Also, the length in the frame
direction T of the pixel of interest Pin is 1, which corresponds to the
shutter time for one frame.

[1520]In step S2248, the differential processing unit 2233 differentiates
the approximation function f(t) so as to obtain a primary differential
function f(t)' of the approximation function, obtains a derivative value
at a position according to the obtained shift amount, and outputs this to
the image generating unit 103 as actual world estimating information.
That is to say, in this case, the differential processing unit 2233
obtains a derivative value f(Tin)', and adds the position thereof (in
this case, a pixel of interest (Tin, Yin)), the pixel value thereof, and
the movement information in the direction of continuity to this, and
outputs this.

[1521]In step S2249, the differential processing unit 2233 determines
regarding whether or not derivative values necessary for generating
desired-density pixels are obtained. For example, in this case, the
obtained derivative values are only derivative values necessary for a
double density in the spatial direction (derivative values to become a
double density for the frame direction are not obtained), so
determination is made that derivative values necessary for generating
desired-density pixels are not obtained, and the processing returns to
step S2247.

[1522]In step S2247, the differential processing unit 2233 obtains a shift
amount in the position of a pixel to be generated based on the
approximation function f(t) input from the approximation-function
estimating unit 2232 again. That is to say, in this case, the
differential processing unit 2233 obtains derivative values necessary for
further dividing the divided pixels Pat and Pbt into 2 pixels
respectively. The positions of the pixels Pat and Pbt are denoted with
black circles in FIG. 199 respectively, so the differential processing
unit 2233 obtains a shift amount corresponding to each position. The
shift amounts of the pixels Pat and Pbt are Ct(0.25) and Ct(-0.25)
respectively.

[1523]In step S2248, the differential processing unit 2233 differentiates
the approximation function f(t), obtains a derivative value in the
position according to a shift amount corresponding to each of the pixels
Pat and Pbt, and outputs this to the image generating unit 103 as actual
world estimating information.

[1524]That is to say, in the event of employing the reference pixels shown
in FIG. 198, the differential processing unit 2233, as shown in FIG. 200,
obtains a differential function f(t)' regarding the obtained
approximation function f(t), obtains derivative values in the positions
(Tin-Ct(0.25)) and (Tin-Ct(-0.25)), which are positions shifted by shift
amounts Ct(0.25) and Ct(-0.25) for the frame direction T, as
f(Tin-Ct(0.25))' and f(Tin-Ct(-0.25))' respectively, adds the positional
information corresponding to the derivative values thereof to this, and
outputs this as actual world estimating information. Note that the
information of the pixel values is output at the first processing, so
this is not added at this processing.

[1525]In step S2249, the differential processing unit 2233 determines
regarding whether or not derivative values necessary for generating
desired-density pixels are obtained again. For example, in this case,
derivative values to become a double density in the spatial direction Y
and in the frame direction T respectively (quadruple density in total)
are obtained, so determination is made that derivative values necessary
for generating desired-density pixels are obtained, and the processing
proceeds to step S2251.

[1526]In step S2251, the reference-pixel extracting unit 2231 determines
regarding whether or not all of the pixels have been processed, and in
the event that determination is made that all of the pixels have not been
processed, the processing returns to step S2242. Also, in step S2251, in
the event that determination is made that all of the pixels have been
processed, the processing ends.

[1527]As described above, in the event that pixels are generated so as to
become a quadruple density in the frame direction (temporal direction)
and in the spatial direction regarding the input image, pixels are
divided by extrapolation/interpolation using the derivative value of the
approximation function in the center position of the pixel to be divided,
so in order to generate quadruple-density pixels, information of three
derivative values in total is necessary.

[1528]That is to say, as shown in FIG. 199, derivative values necessary
for generating the four pixels P01t, P02t, P03t, and P04t (in FIG. 199,
the pixels P01t, P02t, P03t, and P04t are squares whose centers of gravity
are at the positions of the four cross marks in the drawing; the length of
each side is 1 for the pixel Pin, and accordingly around 0.5 for the
pixels P01t, P02t, P03t, and P04t) are ultimately necessary for one pixel
of interest. Accordingly, in order to generate quadruple-density pixels,
first, double-density pixels are generated in the frame direction or in
the spatial direction (in this case, in the spatial direction) (the above
first processing in steps S2247 and S2248), and then the two divided
pixels are each divided in the direction orthogonal to the initial
dividing direction (in this case, in the frame direction) (the above
second processing in steps S2247 and S2248).

[1529]Note that with the above example, description has been made
regarding derivative values at the time of calculating quadruple-density
pixels as an example, but in the event of calculating pixels having a
density more than a quadruple density, many more derivative values
necessary for calculating pixel values may be obtained by repeatedly
performing the processing in steps S2247 through S2249. Also, with the
above example, description has been made regarding an example for
obtaining double-density pixel values, but the approximation function
f(t) is a continuous function, so derivative values may be obtained even
regarding pixel values having a density other than a pluralized density.

[1530]According to the above arrangement, an approximation function for
approximately expressing the pixel value of each pixel can be obtained
using the pixel values of pixels near a pixel of interest, and derivative
values in the positions necessary for generating pixels can be output as
actual world estimating information.

[1531]With the actual world estimating unit 102 described in FIG. 196,
derivative values necessary for generating an image have been output as
actual world estimating information, but a derivative value is the same
value as a gradient of the approximation function f(t) in a necessary
position.

[1532]Now, description will be made next regarding the actual world
estimating unit 102 wherein gradients alone in the frame direction on the
approximation function necessary for generating pixels are directly
obtained without obtaining the approximation function, and output as
actual world estimating information, with reference to FIG. 201.

[1533]A reference-pixel extracting unit 2251 determines regarding whether
or not each pixel of an input image is a processing region based on the
data continuity information (movement as continuity, or region
information) input from the data continuity detecting unit 101, and in
the event of a processing region, extracts information of reference
pixels necessary for obtaining gradients from the input image (perimeter
multiple pixels arrayed in the spatial direction including a pixel of
interest, which are necessary for calculation, or the positions of
perimeter multiple pixels arrayed in the frame direction including a
pixel of interest, and information of each pixel value), and outputs this
to a gradient estimating unit 2252.

[1534]The gradient estimating unit 2252 generates gradient information of
a pixel position necessary for generating a pixel based on the reference
pixel information input from the reference-pixel extracting unit 2251,
and outputs this to the image generating unit 103 as actual world
estimating information. In further detail, the gradient estimating unit
2252 obtains a gradient in the frame direction in the position of a pixel
of interest on the approximation function approximately expressing the
pixel value of each reference pixel using the difference information of
the pixel values between pixels, outputs this along with the position
information and pixel value of the pixel of interest, and the movement
information in the direction of continuity, as actual world estimating
information.

[1535]Next, description will be made regarding the actual world estimating
processing by the actual world estimating unit 102 in FIG. 201 with
reference to the flowchart in FIG. 202.

[1536]In step S2261, the reference-pixel extracting unit 2251 acquires
movement and region information as the data continuity information from
the data continuity detecting unit 101 along with an input image.

[1537]In step S2262, the reference-pixel extracting unit 2251 sets a pixel
of interest from unprocessed pixels in the input image.

[1538]In step S2263, the reference-pixel extracting unit 2251 determines
regarding whether or not the pixel of interest is in a processing region
based on the region information of the data continuity information, and
in the event that determination is made that the pixel of interest is not
a pixel in a processing region, the processing proceeds to step S2268,
wherein the gradient estimating unit 2252 is informed that the pixel of
interest is in a non-processing region, in response to this, the gradient
estimating unit 2252 sets the gradient for the corresponding pixel of
interest to zero, and further adds the pixel value of the pixel of
interest to this, and outputs this as actual world estimating information
to the image generating unit 103, and also the processing proceeds to
step S2269. Also, in the event that determination is made that the pixel
of interest is in a processing region, the processing proceeds to step
S2264.

[1539]In step S2264, the reference-pixel extracting unit 2251 determines
regarding whether movement as data continuity is movement close to the
frame direction or movement close to the spatial direction based on the
movement information included in the data continuity information. That is
to say, if we say that an angle indicating the spatial and temporal
directions within a surface made up of the frame direction T, which is
taken as a reference axis, and the spatial direction Y, is taken as
θv, in the event that an angle θv of movement as data
continuity is 45°>θv≧0°, or
180°>θv≧135°, the reference-pixel
extracting unit 2251 determines that the movement as continuity of the
pixel of interest is close to the frame direction, and in the event that
the angle θv having data continuity is
135°>θv≧45°, determines that the movement
as continuity of the pixel of interest is close to the spatial direction.

[1540]In step S2265, the reference-pixel extracting unit 2251 extracts the
positional information and pixel values of reference pixels corresponding
to the determined direction from the input image respectively, and
outputs these to the gradient estimating unit 2252. That is to say,
reference pixels become data to be used for calculating a later-described
gradient, so are preferably extracted according to movement as
continuity. Accordingly, corresponding to any determined direction of the
frame direction and the spatial direction, reference pixels in a long
range in the direction thereof are extracted. More specifically, for
example, in the event that determination is made that movement is close
to the spatial direction, as shown in FIG. 203, when a pixel (t, y)=(0,
0) in the center of FIG. 203 is taken as a pixel of interest, the
reference-pixel extracting unit 2251 extracts each pixel value of pixels
(t, y)=(0, 2), (0, 1), (0, 0), (0, -1), and (0, -2). Note that in FIG.
203, let us say that the size of each pixel in both the frame direction
and the spatial direction is 1.

[1541]In other words, the reference-pixel extracting unit 2251 extracts
pixels in a long range in the spatial direction as reference pixels such
that the reference pixels are 5 pixels in total of 2 pixels respectively
in the spatial direction (upper/lower direction in the drawing) centered
on the pixel of interest.

[1542]On the contrary, in the event that determination is made that the
direction is the frame direction, the reference-pixel extracting unit
2251 extracts pixels in a long range in the frame direction as
reference pixels such that the reference pixels are 5 pixels in total of
2 pixels respectively in the frame direction (left/right direction in the
drawing) centered on the pixel of interest, and outputs these to the
gradient estimating unit 2252. Needless to say, the number
of reference pixels is not restricted to 5 pixels as described above, so
any number of pixels may be employed.

[1543]In step S2266, the gradient estimating unit 2252 calculates a shift
amount of each pixel value based on the reference pixel information input
from the reference-pixel extracting unit 2251, and the movement Vf
in the direction of continuity. That is to say, in the event that the
approximation function f(t) corresponding to the spatial direction Y=0 is
taken as a basis, the approximation functions corresponding to the
spatial directions Y=-2, -1, 1, and 2 are continuous along the gradient
Vf as continuity as shown in FIG. 203, so the respective
approximation functions are described as f(t-Ct(2)), f(t-Ct(1)),
f(t-Ct(-1)), and f(t-Ct(-2)), and are represented as functions shifted by
each shift amount in the frame direction T for each of the spatial
directions Y=-2, -1, 1, and 2.

[1545]In step S2267, the gradient estimating unit 2252 calculates
(estimates) a gradient in the frame direction of the pixel of interest.
For example, as shown in FIG. 203, in the event that the direction of
continuity regarding the pixel of interest is an angle close to the
spatial direction, the pixel values between the pixels adjacent in the
frame direction exhibit great differences, but change between the pixels
in the spatial direction is small and similar, and accordingly, the
gradient estimating unit 2252 substitutes the difference between the
pixels in the frame direction for the difference between the pixels in
the spatial direction, and obtains a gradient at the pixel of interest,
by treating the change between the pixels in the spatial direction as
change in the frame direction T according to the shift amount.

[1546]That is to say, if we assume that the approximation function f(t)
approximately describing the real world exists, the relations between the
above shift amounts and the pixel values of the respective reference
pixels is such as shown in FIG. 204. Here, the pixel values of the
respective pixels in FIG. 204 are represented as P(0, 2), P(0, 1), P(0,
0), P(0, -1), and P(0, -2) from the top. As a result, with regard to the
pixel value P and shift amount Ct near the pixel of interest (0, 0), 5
pairs of relations (P, Ct)=(P(0, 2), -Ct(2)), (P(0, 1), -Ct(1)), (P(0,
-1), -Ct(-1)), (P(0, -2), -Ct(-2)), and (P(0, 0), 0) are obtained.

[1547]Now, with the pixel value P, shift amount Ct, and gradient Kt
(gradient on the approximation function f(t)), the relation such as the
following Expression (102) holds.

P=Kt×Ct (102)

[1548]The above Expression (102) is a one-variable function regarding the
variable Kt, so the gradient estimating unit 2252 obtains the variable Kt
(gradient) using the least squares method of one variable.

[1549]That is to say, the gradient estimating unit 2252 obtains the
gradient of the pixel of interest by solving a normal equation such as
shown in the following Expression (103), adds the pixel value of the
pixel of interest, and the gradient information in the direction of
continuity to this, and outputs this to the image generating unit 103 as
actual world estimating information.

Kt = (Σ(i=1 to m) Ct_i × P_i) / (Σ(i=1 to m) (Ct_i)²) (103)

[1550]Here, i denotes a number for identifying each pair of the pixel
value P and shift amount Ct of the above reference pixel, 1 through m.
Also, m denotes the number of the reference pixels including the pixel of
interest.

[1551]In step S2269, the reference-pixel extracting unit 2251 determines
regarding whether or not all of the pixels have been processed, and in
the event that determination is made that all of the pixels have not been
processed, the processing returns to step S2262. Also, in the event that
determination is made that all of the pixels have been processed in step
S2269, the processing ends.

[1552]Note that the gradient in the frame direction to be output as actual
world estimating information by the above processing is employed at the
time of calculating desired pixel values to be obtained finally through
extrapolation/interpolation. Also, with the above example, description
has been made regarding the gradient at the time of calculating
double-density pixels as an example, but in the event of calculating
pixels having a density more than a double density, gradients in many
more positions necessary for calculating the pixel values may be
obtained.

[1553]For example, as shown in FIG. 199, in the event that pixels having a
quadruple density in the temporal and spatial directions in total of a
double density in the spatial direction and also a double density in the
frame direction are generated, the gradient Kt of the approximation
function f(t) corresponding to the respective positions Pin, Pat, and Pbt
in FIG. 199 should be obtained, as described above.

[1554]Also, with the above example, an example for obtaining
double-density pixel values has been described, but the approximation
function f(t) is a continuous function, so it is possible to obtain a
necessary gradient even regarding the pixel value of a pixel in a
position other than a pluralized density.

[1555]Needless to say, there is no restriction regarding the sequence of
processing for obtaining gradients on the approximation function as to
the frame direction or the spatial direction or derivative values.
Further, with the above example in the spatial direction, description has
been made using the relation between the spatial direction Y and frame
direction T, but the relation between the spatial direction X and frame
direction T may be employed instead of this. Further, a gradient (in any
one-dimensional direction) or a derivative value may be selectively
obtained from any two-dimensional relation of the temporal and spatial
directions.

[1556]According to the above arrangements, it is possible to generate and
output gradients on the approximation function in the frame direction
(temporal direction) of positions necessary for generating pixels as
actual world estimating information by using the pixel values of pixels
near a pixel of interest without obtaining the approximation function in
the frame direction approximately representing the actual world.

[1557]Next, description will be made regarding another embodiment example
of the actual world estimating unit 102 (FIG. 3) with reference to FIG.
205 through FIG. 235.

[1558]FIG. 205 is a diagram for describing the principle of this
embodiment example.

[1559]As shown in FIG. 205, a signal (light intensity allocation) in the
actual world 1, which is an image cast on the sensor 2, is represented
with a predetermined function F. Note that hereafter, with the
description of this embodiment example, the signal serving as an image in
the actual world 1 is particularly referred to as a light signal, and the
function F is particularly referred to as a light signal function F.

[1560]With this embodiment example, in the event that the light signal in
the actual world 1 represented with the light signal function F has
predetermined continuity, the actual world estimating unit 102 estimates
the light signal function F by approximating the light signal function F
with a predetermined function f using an input image (image data
including continuity of data corresponding to continuity) from the sensor
2, and data continuity information (data continuity information
corresponding to continuity of the input image data) from the data
continuity detecting unit 101. Note that with the description of this
embodiment example, the function f is particularly referred to as an
approximation function f, hereafter.

[1561]In other words, with this embodiment example, the actual world
estimating unit 102 approximates (describes) the image (light signal in
the actual world 1) represented with the light signal function F using a
model 161 (FIG. 7) represented with the approximation function f.
Accordingly, hereafter, this embodiment example is referred to as a
function approximating method.

[1562]Now, description will be made regarding the background wherein the
present applicant has invented the function approximating method, prior
to entering the specific description of the function approximating
method.

[1563]FIG. 206 is a diagram for describing integration effects in the case
in which the sensor 2 is treated as a CCD.

[1564]As shown in FIG. 206, multiple detecting elements 2-1 are disposed
on the plane of the sensor 2.

[1565]With the example in FIG. 206, a direction in parallel with a
predetermined side of the detecting elements 2-1 is taken as the X
direction, which is one direction in the spatial direction, and the a
direction orthogonal to the X direction is taken as the Y direction,
which is another direction in the spatial direction. Also, the direction
perpendicular to the X-Y plane is taken as the direction t serving as the
temporal direction.

[1566]Also, with the example in FIG. 206, the spatial shape of each
detecting element 2-1 of the sensor 2 is represented with a square of
which one side is 1 in length. The shutter time (exposure time) of the
sensor 2 is represented with 1.

[1567]Further, with the example in FIG. 206, the center of one detecting
element 2-1 of the sensor 2 is taken as the origin (position x=0 in the X
direction, and position y=0 in the Y direction) in the spatial direction
(X direction and Y direction), and the intermediate point-in-time of the
exposure time is taken as the origin (position t=0 in the t direction) in
the temporal direction (t direction).

[1568]In this case, the detecting element 2-1 of which the center is in
the origin (x=0, y=0) in the spatial direction subjects the light signal
function F(x, y, t) to integration with a range between -0.5 and 0.5 in
the X direction, range between -0.5 and 0.5 in the Y direction, and range
between -0.5 and 0.5 in the t direction, and outputs the integral value
thereof as a pixel value P.

[1569]That is to say, the pixel value P output from the detecting element
2-1 of which the center is in the origin in the spatial direction is
represented with the following Expression (104).

P = ∫(t=-0.5 to +0.5) ∫(y=-0.5 to +0.5) ∫(x=-0.5 to +0.5) F(x, y, t) dx dy dt (104)

[1570]The other detecting elements 2-1 also output the pixel value P shown
in Expression (104) by taking the center thereof as the origin in the
spatial direction in the same way.
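
To make the integration effect of Expression (104) concrete, the following
Python sketch numerically integrates a hypothetical light signal function
F(x, y, t) over one detecting element and one shutter time using a simple
midpoint rule; the particular F and the number of sub-intervals are
illustrative assumptions only.

    import numpy as np

    def pixel_value(F, n=20):
        # Approximates P, the triple integral of F(x, y, t) over x, y, and t
        # each in [-0.5, 0.5] (one detecting element, one shutter time).
        step = 1.0 / n
        pts = -0.5 + step * (np.arange(n) + 0.5)     # midpoints of sub-intervals
        total = 0.0
        for x in pts:
            for y in pts:
                for t in pts:
                    total += F(x, y, t)
        return total * step ** 3

    # Hypothetical light signal: a narrow bright fine line along the Y direction.
    F = lambda x, y, t: 100.0 if abs(x) < 0.1 else 10.0
    print(pixel_value(F))    # the detecting element outputs a single averaged value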

[1571]FIG. 207 is a diagram for describing a specific example of the
integration effects of the sensor 2.

[1573]A portion 2301 of the light signal in the actual world 1 (hereafter,
such a portion is referred to as a region) represents an example of a
region having predetermined continuity.

[1574]Note that the region 2301 is in reality a portion of the continuous
light signal (a continuous region). On the other hand, in FIG. 207, the
region 2301 is shown as divided into 20 small regions (square regions).
This is in order to represent that the size of the region 2301
is equivalent to the size wherein the four detecting elements (pixels) of
the sensor 2 in the X direction, and also the five detecting elements
(pixels) of the sensor 2 in the Y direction are arrayed. That is to say,
each of the 20 small regions (virtual regions) within the region 2301 is
equivalent to one pixel.

[1575]Also, a white portion within the region 2301 represents a light
signal corresponding to a fine line. Accordingly, the region 2301 has
continuity in the direction wherein a fine line continues. Hereafter, the
region 2301 is referred to as the fine-line-including actual world region
2301.

[1576]In this case, when the fine-line-including actual world region 2301
(a portion of a light signal in the actual world 1) is detected by the
sensor 2, a region 2302 (hereafter, this is referred to as a
fine-line-including data region 2302) of the input image (pixel values)
is output from the sensor 2 by integration effects.

[1577]Note that each pixel of the fine-line-including data region 2302 is
represented as an image in the drawing, but is data representing a
predetermined value in reality. That is to say, the fine-line-including
actual world region 2301 is changed (distorted) to the
fine-line-including data region 2302, which is divided into 20 pixels (20
pixels in total of 4 pixels in the X direction and also 5 pixels in the Y
direction) each having a predetermined pixel value by the integration
effects of the sensor 2.

[1578]FIG. 208 is a diagram for describing another specific example
(example different from FIG. 207) of the integration effects of the
sensor 2.

[1580]A portion (region) 2303 of the light signal in the actual world 1
represents another example (example different from the
fine-line-including actual world region 2301 in FIG. 207) of a region having
predetermined continuity.

[1581]Note that the region 2303 is a region having the same size as the
fine-line-including actual world region 2301. That is to say, the region
2303 is also a portion of the continuous light signal in the actual world
1 (continuous region) as with the fine-line-including actual world region
2301 in reality, but is shown as divided into 20 small regions (square
regions) equivalent to one pixel of the sensor 2 in FIG. 208.

[1582]Also, the region 2303 includes a first portion having a
predetermined first light intensity (value), and a second portion having
a predetermined second light intensity (value), with an edge between
them. Accordingly, the region 2303 has continuity in the direction
wherein the edge continues. Hereafter, the region 2303 is referred to as
the two-valued-edge-including actual world region 2303.

[1583]In this case, when the two-valued-edge-including actual world region
2303 (a portion of the light signal in the actual world 1) is detected by
the sensor 2, a region 2304 (hereafter, referred to as
two-valued-edge-including data region 2304) of the input image (pixel
value) is output from the sensor 2 by integration effects.

[1584]Note that each pixel value of the two-valued-edge-including data
region 2304 is represented as an image in the drawing as with the
fine-line-including data region 2302, but is data representing a
predetermined value in reality. That is to say, the
two-valued-edge-including actual world region 2303 is changed (distorted)
to the two-valued-edge-including data region 2304, which is divided into
20 pixels (20 pixels in total of 4 pixels in the X direction and also 5
pixels in the Y direction) each having a predetermined pixel value by the
integration effects of the sensor 2.

[1585]Conventional image processing devices have regarded image data
output from the sensor 2, such as the fine-line-including data region 2302
and the two-valued-edge-including data region 2304, as the origin (basis),
and have subjected that image data to the subsequent image processing. That
is to say, despite the fact that the image data output from the sensor 2 had
been changed (distorted) by the integration effects into data different from
the light signal in the actual world 1, the conventional image processing
devices have performed image processing on the assumption that the data
different from the light signal in the actual world 1 is correct.

[1586]As a result, the conventional image processing devices have had a
problem in that the details of the actual world are already distorted in the
waveform (image data) at the stage where the image data is output from the
sensor 2, so it is very difficult to restore the original details from that
waveform.

[1587]Accordingly, with the function approximating method, in order to
solve this problem, as described above (as shown in FIG. 205), the actual
world estimating unit 102 estimates the light signal function F by
approximating the light signal function F (the light signal in the actual
world 1) with the approximation function f, based on the image data (input
image) such as the fine-line-including data region 2302 and the
two-valued-edge-including data region 2304 output from the sensor 2.

[1588]Thus, at a later stage than the actual world estimating unit 102 (in
this case, the image generating unit 103 in FIG. 3), the processing can
be performed by taking the image data wherein integration effects are
taken into consideration, i.e., image data that can be represented with
the approximation function f as the origin.

[1589]Hereafter, description will be made independently regarding three
specific methods (first through third function approximating methods), of
such a function approximating method with reference to the drawings.

[1590]First, description will be made regarding the first function
approximating method with reference to FIG. 209 through FIG. 223.

[1591]FIG. 209 is a diagram representing the fine-line-including actual
world region 2301 shown in FIG. 207 described above again.

[1593]The first function approximating method is a method for
approximating a one-dimensional waveform (hereafter, such a waveform is
referred to as an X cross-sectional waveform F(x)) wherein the light
signal function F(x, y, t) corresponding to the fine-line-including
actual world region 2301 such as shown in FIG. 209 is projected in the X
direction (direction of an arrow 2311 in the drawing), with the
approximation function f(x) serving as an n-dimensional (n is an
arbitrary integer) polynomial. Accordingly, hereafter, the first function
approximating method is particularly referred to as a one-dimensional
polynomial approximating method.

[1594]Note that with the one-dimensional polynomial approximating method,
the X cross-sectional waveform F(x), which is to be approximated, is not
restricted to a waveform corresponding to the fine-line-including actual
world region 2301 in FIG. 209, of course. That is to say, as described
later, with the one-dimensional polynomial approximating method, any
waveform can be approximated as long as the X cross-sectional waveform
F(x) corresponds to the light signals in the actual world 1 having
continuity.

[1595]Also, the direction of the projection of the light signal function
F(x, y, t) is not restricted to the X direction; rather, the Y direction or
the t direction may be employed instead. That is to say, with the
one-dimensional polynomial approximating method, a function F(y) wherein
the light signal function F(x, y, t) is projected in the Y direction may
be approximated with a predetermined approximation function f(y), or a
function F(t) wherein the light signal function F(x, y, t) is projected
in the t direction may be approximated with a predetermined approximation
function f(t).

[1596]More specifically, the one-dimensional polynomial approximating
method is a method for approximating, for example, the X cross-sectional
waveform F(x) with the approximation function f(x) serving as an
n-dimensional polynomial such as shown in the following Expression (105).

f(x) = w0 + w1x + w2x^2 + . . . + wnx^n = Σ(i=0 to n) wi·x^i (105)

[1597]That is to say, with the one-dimensional polynomial approximating
method, the actual world estimating unit 102 estimates the X
cross-sectional waveform F(x) by calculating the coefficients (features)
wi of x^i in Expression (105).
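
As a small illustrative sketch (the coefficient values shown are hypothetical and the use of NumPy is an assumption of the sketch, not part of the embodiment), evaluating an approximation function f(x) of the form of Expression (105) might look as follows:

```python
import numpy as np

def f_approx(x, w):
    """Evaluate f(x) = w0 + w1*x + ... + wn*x^n for given features w_i.

    `w` is an illustrative coefficient vector; the features w_i themselves
    are determined by one of the three methods described below.
    """
    # np.polyval expects the highest-order coefficient first.
    return np.polyval(w[::-1], x)

# Example: a hypothetical 5th-degree approximation function.
w = np.array([0.2, 0.1, -0.05, 0.0, 0.01, -0.002])
print(f_approx(0.5, w))
```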

[1598]The calculation method of the features wi is not restricted to any
particular method; for example, the following first through third methods
may be employed.

[1599]That is to say, the first method is a method that has been employed
so far.

[1600]On the other hand, the second method is a method that has been newly
invented by the present applicant, which is a method that considers
continuity in the spatial direction as to the first method.

[1601]However, as described later, with the first and second methods, the
integration effects of the sensor 2 are not taken into consideration.
Accordingly, an approximation function f(x) obtained by substituting the
features wi calculated by the first method or the second method for
the above Expression (105) is an approximation function regarding an
input image, but strictly speaking, cannot be referred to as the
approximation function of the X cross-sectional waveform F(x).

[1602]Consequently, the present applicant has invented the third method
that calculates the features wi further in light of the integration
effects of the sensor 2 as to the second method. An approximation
function f(x) obtained by substituting the features wi calculated
with this third method for the above Expression (105) can be referred to
as the approximation function of the X cross-sectional waveform F(x) in
that the integration effects of the sensor 2 are taken into
consideration.

[1603]Thus, strictly speaking, the first method and the second method
cannot be referred to as the one-dimensional polynomial approximating
method, and the third method alone can be referred to as the
one-dimensional polynomial approximating method.

[1604]In other words, as shown in FIG. 210, the second method is an
embodiment of the actual world estimating unit 102 according to the
present invention, which is different from the one-dimensional polynomial
approximating method. That is to say, FIG. 210 is a diagram for
describing the principle of the embodiment corresponding to the second
method.

[1605]As shown in FIG. 210, with the embodiment corresponding to the
second method, in the event that the light signal in the actual world 1
represented with the light signal function F has predetermined continuity,
the actual world estimating unit 102 does not approximate the X
cross-sectional waveform F(x); rather, using the input image (image data
including continuity of data corresponding to the continuity) from the
sensor 2 and the data continuity information (data continuity information
corresponding to the continuity of the input image data) from the data
continuity detecting unit 101, it approximates the input image itself with a
predetermined approximation function f2(x).

[1606]Thus, it is hard to say that the second method is a method on the
same level as the third method, in that it only approximates the input
image without considering the integration effects of the sensor 2. However,
the second method is superior to the conventional first method in that it
takes continuity in the spatial direction into consideration.

[1607]Hereafter, description will be made independently regarding the
details of the first method, second method, and third method in this
order.

[1608]Note that hereafter, in the event that the respective approximation
functions f(x) generated by the first method, second method, and third
method are distinguished from that of the other method, they are
particularly referred to as approximation function f1(x),
approximation function f2(x), and approximation function f3(x)
respectively.

[1609]First, description will be made regarding the details of the first
method.

[1610]With the first method, on condition that the approximation function
f1(x) shown in the above Expression (105) holds within the
fine-line-including actual world region 2301 in FIG. 211, the following
prediction equation (106) is defined.

P(x,y)=f1(x)+e (106)

[1611]In Expression (106), x represents a pixel position in the X direction
relative to the pixel of interest. y represents a pixel position in the Y
direction relative to the pixel of interest. e represents a margin of error.
Specifically, for example, as shown in FIG. 211, let us say that the pixel of
interest is the second pixel in the X direction from the left, and also the
third pixel in the Y direction from the bottom in the drawing, of the
fine-line-including data region 2302 (data obtained when the
fine-line-including actual world region 2301 (FIG. 209) is detected by the
sensor 2 and output). Also, let us say that the center of the pixel of
interest is the origin (0, 0), and a coordinate system (hereafter, referred
to as a pixel-of-interest coordinates system) of which the axes are an x axis
and a y axis in parallel with the X direction and Y direction of the sensor 2
(FIG. 206), respectively, is set. In this case, the coordinate values (x, y)
of the pixel-of-interest coordinates system represent a relative pixel
position.

[1612]Also, in Expression (106), P(x, y) represents a pixel value in the
relative pixel positions (x, y). Specifically, in this case, the P(x, y)
within the fine-line-including data region 2302 is such as shown in FIG.
212.

[1614]In FIG. 212, the respective vertical axes of the graphs represent
pixel values, and the horizontal axes represent a relative position x in
the X direction from the pixel of interest. Also, in the drawing, the
dashed line in the first graph from the top represents an input pixel
value P(x, -2), the chain triple-dashed line in the second graph from the
top represents an input pixel value P(x, -1), the solid line in the third
graph from the top represents an input pixel value P (x, 0), the chain
single-dashed line in the fourth graph from the top represents an input
pixel value P(x, 1), and the chain double-dashed line in the fifth graph
from the top (the first from the bottom) represents an input pixel value
P(x, 2) respectively.

[1615]Upon the 20 input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1),
and P(x, 2) (however, x is any one integer value of -1 through 2) shown
in FIG. 212 being substituted for the above Expression (106)
respectively, 20 equations as shown in the following Expression (107) are
generated. Note that each ek (k is any one of integer values 1
through 20) represents a margin of error.

P(-1,-2)=f1(-1)+e1

P(0,-2)=f1(0)+e2

P(1,-2)=f1(1)+e3

P(2,-2)=f1(2)+e4

P(-1,-1)=f1(-1)+e5

P(0,-1)=f1(0)+e6

P(1,-1)=f1(1)+e7

P(2,-1)=f1(2)+e8

P(-1,0)=f1(-1)+e9

P(0,0)=f1(0)+e10

P(1,0)=f1(1)+e11

P(2,0)=f1(2)+e12

P(-1,1)=f1(-1)+e13

P(0,1)=f1(0)+e14

P(1,1)=f1(1)+e15

P(2,1)=f1(2)+e16

P(-1,2)=f1(-1)+e17

P(0,2)=f1(0)+e18

P(1,2)=f1(1)+e19

P(2,2)=f1(2)+e20 (107)

Expression (107) is made up of 20 equations, so in the event that the
number of the features wi of the approximation function f1(x)
is less than 20, i.e., in the event that the approximation function
f1(x) is a polynomial having the number of dimensions less than 19,
the features wi can be calculated using the least squares method,
for example. Note that the specific solution of the least squares method
will be described later.

[1616]For example, if we say that the number of dimensions of the
approximation function f1(x) is five, the approximation function
f1(x) calculated with the least squares method using Expression
(107) (the approximation function f1(x) generated by the calculated
features wi) becomes a curve shown in FIG. 213.
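
The following Python sketch (illustrative only; the pixel values are hypothetical placeholders and NumPy's least squares routine is an assumption of the sketch) shows one way the 20 equations of Expression (107) might be solved:

```python
import numpy as np

# A minimal sketch of the first method: fit f1(x) to the 20 input pixel
# values P(x, y) while ignoring the relative position y, as in Expressions
# (106)/(107).  The pixel values below are hypothetical placeholders.
P = np.random.default_rng(0).random((5, 4))   # rows: y = -2..2, columns: x = -1..2

n = 5                                         # number of dimensions of f1(x)
rows, vals = [], []
for j, y in enumerate(range(-2, 3)):
    for k, x in enumerate(range(-1, 3)):
        rows.append([x**i for i in range(n + 1)])   # one row per equation of (107)
        vals.append(P[j, k])

# Least squares solution; with only four distinct x positions the system is
# rank-deficient, and lstsq returns the minimum-norm solution.
w1, *_ = np.linalg.lstsq(np.array(rows, dtype=float), np.array(vals), rcond=None)
f1 = lambda x: np.polyval(w1[::-1], x)        # the approximation function f1(x)
print(w1)
```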

[1617]Note that in FIG. 213, the vertical axis represents pixel values,
and the horizontal axis represents a relative position x from the pixel
of interest.

[1618]That is to say, for example, if we plot the respective 20 pixel values
P(x, y) (the respective input pixel values P(x, -2), P(x, -1), P(x, 0),
P(x, 1), and P(x, 2) shown in FIG. 212) making up the fine-line-including
data region 2302 in FIG. 211 along the x axis without any modification (that
is, if we regard the relative position y in the Y direction as constant, and
overlay the five graphs shown in FIG. 212), multiple lines (dashed line,
chain triple-dashed line, solid line, chain single-dashed line, and chain
double-dashed line) in parallel with the x axis, such as shown in FIG. 213,
are distributed.

[1619]However, in FIG. 213, the dashed line represents the input pixel
value P(x, -2), the chain triple-dashed line represents the input pixel
value P(x, -1), the solid line represents the input pixel value P(x, 0),
the chain single-dashed line represents the input pixel value P(x, 1),
and the chain double-dashed line represents the input pixel value P(x, 2)
respectively. Also, where pixel values are the same, two or more lines
actually overlap; in FIG. 213, however, the lines are drawn so that each line
can be distinguished and no lines are overlaid.

[1620]Fitting a regression curve to the respective 20 input pixel values
(P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2)) thus distributed so as to
minimize the error, i.e., the approximation function f1(x) obtained by
substituting the features wi calculated with the least squares method into
the above Expression (105), yields the curve (approximation function f1(x))
shown in FIG. 213.

[1621]Thus, the approximation function f1(x) represents nothing but a
curve connecting, in the X direction, the means taken in the Y direction of
the pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2) (pixel
values having the same relative position x in the X direction from the pixel
of interest). That is to say, the approximation function f1(x) is
generated without considering the continuity in the spatial direction
included in the light signal.

[1622]For example, in this case, the fine-line-including actual world
region 2301 (FIG. 209) is regarded as a subject to be approximated. This
fine-line-including actual world region 2301 has continuity in the
spatial direction, which is represented with a gradient GF, such as
shown in FIG. 214. Note that in FIG. 214, the X direction and Y direction
represent the X direction and Y direction of the sensor 2 (FIG. 206).

[1623]Accordingly, the data continuity detecting unit 101 (FIG. 205) can
output an angle θ (angle θ generated between the direction of
data continuity represented with a gradient Gf corresponding to the
gradient GF, and the X direction) such as shown in FIG. 214 as data
continuity information corresponding to the gradient GF as
continuity in the spatial direction.

[1624]However, with the first method, the data continuity information
output from the data continuity detecting unit 101 is not employed at
all.

[1625]In other words, such as shown in FIG. 214, the direction of
continuity in the spatial direction of the fine-line-including actual
world region 2301 is a general angle θ direction. However, the
first method is a method for calculating the features wi of the
approximation function f1(x) on assumption that the direction of
continuity in the spatial direction of the fine-line-including actual
world region 2301 is the Y direction (i.e., on assumption that the angle
θ is 90°).

[1626]Consequently, the approximation function f1(x) becomes a function
whose waveform is dulled and whose detail is reduced compared with the
original pixel values. In other words, though not shown in the drawing, the
waveform of the approximation function f1(x) generated with the first
method becomes a waveform different from the actual X cross-sectional
waveform F(x).

[1627]In light of this, the present applicant has invented the second
method, which calculates the features wi by further taking continuity in the
spatial direction into consideration (utilizing the angle θ), relative to
the first method.

[1628]That is to say, the second method is a method for calculating the
features wi of the approximation function f2(x) on assumption
that the direction of continuity of the fine-line-including actual world
region 2301 is a general angle θ direction.

[1629]Specifically, for example, the gradient Gf representing
continuity of data corresponding to continuity in the spatial direction
is represented with the following Expression (108).

Gf = tan θ = dy/dx (108)

[1630]Note that in Expression (108), dx represents the amount of fine
movement in the X direction such as shown in FIG. 214, dy represents the
amount of fine movement in the Y direction as to the dx such as shown in
FIG. 214.

[1631]In this case, if we define the shift amount Cx(y) as shown in
the following Expression (109), with the second method, an equation
corresponding to Expression (106) employed in the first method becomes
such as the following Expression (110).

Cx(y) = y/Gf (109)

P(x, y) = f2(x - Cx(y)) + e (110)

[1632]That is to say, Expression (106) employed in the first method
represents that the pixel value P(x, y) is the same for any pixel whose
center position (x, y) has the same position x in the X direction, regardless
of y. In other words, Expression (106) represents that pixels having the same
pixel value continue in the Y direction (exhibits continuity in the Y
direction).

[1633]On the other hand, Expression (110) employed in the second method
represents that the pixel value P(x, y) of a pixel of which the center
position is (x, y) is not identical to the value of the approximation
function at the position x as seen from the pixel of interest (a pixel of
which the center position is the origin (0, 0)), i.e., approximately f2(x),
but is identical to the value of the approximation function at a position
further shifted in the X direction by the shift amount Cx(y), i.e.,
approximately f2(x - Cx(y)). In other words, Expression (110) represents
that pixels having the same pixel value continue in the angle θ direction
corresponding to the shift amount Cx(y) (exhibits continuity in the
general angle θ direction).

[1634]Thus, the shift amount Cx(y) is the amount of correction
considering continuity (in this case, continuity represented with the
gradient GF in FIG. 214 (strictly speaking, continuity of data
represented with the gradient Gf)) in the spatial direction, and
Expression (110) is obtained by correcting Expression (106) with the
shift amount Cx(y).
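
A small Python sketch of Expressions (108) and (109) follows (the angle θ used here is a hypothetical example value, and the form Cx(y) = y/Gf is taken from Expression (109) as reconstructed above):

```python
import numpy as np

# Shift amount Cx(y): the correction in the X direction needed to follow the
# continuity direction when moving y pixels in the Y direction.
theta = np.deg2rad(60.0)          # hypothetical angle from the data continuity detecting unit
Gf = np.tan(theta)                # gradient Gf = tan(theta) = dy/dx, Expression (108)

def Cx(y):
    return y / Gf                 # Expression (109)

print([Cx(y) for y in (-2, -1, 0, 1, 2)])
```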

[1635]In this case, upon the 20 pixel values P(x, y) (however, x is any
one integer value of -1 through 2, and y is any one integer value of -2
through 2) of the fine-line-including data region 2302 shown in FIG. 211
being substituted for the above Expression (110) respectively, 20 equations
as shown in the following Expression (111) are generated.

P(-1,-2)=f2(-1-Cx(-2))+e1

P(0,-2)=f2(0-Cx(-2))+e2

P(1,-2)=f2(1-Cx(-2))+e3

P(2,-2)=f2(2-Cx(-2))+e4

P(-1,-1)=f2(-1-Cx(-1))+e5

P(0,-1)=f2(0-Cx(-1))+e6

P(1,-1)=f2(1-Cx(-1))+e7

P(2,-1)=f2(2-Cx(-1))+e8

P(-1,0)=f2(-1)+e9

P(0,0)=f2(0)+e10

P(1,0)=f2(1)+e11

P(2,0)=f2(2)+e12

P(-1,1)=f2(-1-Cx(1))+e13

P(0,1)=f2(0-Cx(1))+e14

P(1,1)=f2(1-Cx(1))+e15

P(2,1)=f2(2-Cx(1))+e16

P(-1,2)=f2(-1-Cx(2))+e17

P(0,2)=f2(0-Cx(2))+e18

P(1,2)=f2(1-Cx(2))+e19

P(2,2)=f2(2-Cx(2))+e20 (111)

Expression (111) is made up of 20 equations, as with the above Expression
(107). Accordingly, with the second method, as with the first method, in
the event that the number of the features wi of the approximation
function f2(x) is less than 20, i.e., the approximation function
f2(x) is a polynomial having the number of dimensions less than 19,
the features wi can be calculated with the least squares method, for
example. Note that the specific solution regarding the least squares
method will be described later.
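
As a hedged illustration of the second method (the pixel values and the angle θ are hypothetical, and NumPy's least squares routine is an assumption of the sketch), the 20 equations of Expression (111) might be solved as follows:

```python
import numpy as np

# A minimal sketch of the second method: the same 20 pixel values are fitted,
# but each sample is evaluated at the shifted position x - Cx(y), as in
# Expressions (110)/(111).
rng = np.random.default_rng(0)
P = rng.random((5, 4))                 # hypothetical pixel values; rows: y = -2..2, cols: x = -1..2
theta = np.deg2rad(60.0)               # hypothetical continuity angle
Gf = np.tan(theta)                     # Expression (108)
Cx = lambda y: y / Gf                  # shift amount, Expression (109)

n = 5
rows, vals = [], []
for j, y in enumerate(range(-2, 3)):
    for k, x in enumerate(range(-1, 3)):
        xc = x - Cx(y)                 # corrected position, Expression (110)
        rows.append([xc**i for i in range(n + 1)])
        vals.append(P[j, k])

w2, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
f2 = lambda x: np.polyval(w2[::-1], x)
print(w2)
```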

[1636]For example, if we say that the number of dimensions of the
approximation function f2(x) is five as with the first method, with
the second method, the features wi are calculated as follows.

[1637]That is to say, FIG. 215 represents the pixel value P(x, y) shown in
the left side of Expression (111) in a graphic manner. The respective
five graphs shown in FIG. 215 are basically the same as shown in FIG.
212.

[1638]As shown in FIG. 215, the maximal pixel values (pixel values
corresponding to fine lines) are continuous in the direction of
continuity of data represented with the gradient Gf.

[1639]Consequently, with the second method, when we plot the respective
input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2) shown in
FIG. 215 along the x axis, for example, we plot the pixel values after they
have been changed into the states shown in FIG. 216, instead of plotting the
pixel values without any modification as with the first method (i.e., instead
of assuming that y is constant and overlaying the five graphs in the states
shown in FIG. 215).

[1640]That is to say, FIG. 216 represents a state wherein the respective
input pixel values P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2) shown in
FIG. 215 have been shifted by the shift amount Cx(y) shown in the above
Expression (109). In other words, FIG. 216 represents a state wherein the
five graphs shown in FIG. 215 are moved as if the gradient GF
representing the actual direction of continuity of data were replaced with a
gradient GF' (in the drawing, as if the straight line made up of a dashed
line were replaced with the straight line made up of a solid line).

[1641]In the states in FIG. 216, if we plot the respective input pixel
values P(x, -2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2), for example, along
the x axis (i.e., if we overlay the five graphs in the states shown in FIG.
216), multiple lines (dashed line, chain triple-dashed line, solid line,
chain single-dashed line, and chain double-dashed line) in parallel with the
x axis, such as shown in FIG. 217, are distributed.

[1642]Note that in FIG. 217, the vertical axis represents pixel values,
and the horizontal axis represents a relative position x from the pixel
of interest. Also, the dashed line represents the input pixel value P(x,
-2), the chain triple-dashed line represents the input pixel value P(x,
-1), the solid line represents the input pixel value P(x, 0), the chain
single-dashed line represents the input pixel value P(x, 1), and the
chain double-dashed line represents the input pixel value P(x, 2)
respectively. Further, where pixel values are the same, two or more lines
actually overlap; in FIG. 217, however, the lines are drawn so that each line
can be distinguished and no lines are overlaid.

[1643]Fitting a regression curve to the respective 20 input pixel values
P(x, y) (however, x is any one integer value of -1 through 2, and y is any
one integer value of -2 through 2) thus distributed so as to minimize the
error with respect to the value f2(x-Cx(y)), i.e., the approximation
function f2(x) obtained by substituting the features wi calculated with
the least squares method into the above Expression (105), yields the curve
f2(x) shown with the solid line in FIG. 217.

[1644]Thus, the approximation function f2(x) generated with the
second method represents a curve connecting, in the X direction, the means
of the input pixel values P(x, y) taken in the angle θ direction (i.e., the
general direction of continuity in the spatial direction) output from the
data continuity detecting unit 101 (FIG. 205).

[1645]On the other hand, as described above, the approximation function
f1(x) generated with the first method represents nothing but a curve
connecting in the X direction the means of the input pixel values P(x, y)
in the Y direction (i.e., the direction different from the continuity in
the spatial direction).

[1646]Accordingly, as shown in FIG. 217, the approximation function
f2(x) generated with the second method becomes a function in which the
dullness of the waveform is reduced, and the loss of detail with respect to
the original pixel values is also reduced, compared with the approximation
function f1(x) generated with the first method. In other words, though
not shown in the drawing, the waveform of the approximation function
f2(x) generated with the second method becomes a waveform closer to the
actual X cross-sectional waveform F(x) than that of the approximation
function f1(x) generated with the first method.

[1647]However, as described above, the approximation function f2(x)
is a function that considers continuity in the spatial direction, but is
nothing but a function generated with the input image (input pixel values)
regarded as the origin (basis). That is to say, as shown in FIG. 210
described above, the approximation function f2(x) merely approximates the
input image, which differs from the X cross-sectional waveform F(x), and it
is hard to say that the approximation function f2(x) approximates the X
cross-sectional waveform F(x) itself. In other words, the second method is a
method for calculating the features wi on the assumption that the above
Expression (110) holds, but does not take the relation in Expression (104)
described above into consideration (does not consider the integration
effects of the sensor 2).

[1648]Consequently, the present applicant has invented the third method
that calculates the features wi of the approximation function
f3(x) by further taking the integration effects of the sensor 2 into
consideration as to the second method.

[1649]That is to say, the third method is a method that introduces the
concept of a spatial mixed region.

[1650]Description will be made regarding a spatial mixed region with
reference to FIG. 218 prior to description of the third method.

[1651]In FIG. 218, a portion 2321 (hereafter, referred to as a region
2321) of a light signal in the actual world 1 represents a region having
the same area as one detecting element (pixel) of the sensor 2.

[1652]Upon the sensor 2 detecting the region 2321, the sensor 2 outputs a
value (one pixel value) 2322 obtained by the region 2321 being subjected
to integration in the temporal and spatial directions (X direction, Y
direction, and t direction). Note that the pixel value 2322 is
represented as an image in the drawing, but is actually data representing
a predetermined value.

[1653]The region 2321 in the actual world 1 is clearly classified into a
light signal (white region in the drawing) corresponding to the
foreground (the above fine line, for example), and a light signal (black
region in the drawing) corresponding to the background.

[1654]On the other hand, the pixel value 2322 is a value obtained by the
light signal in the actual world 1 corresponding to the foreground and
the light signal in the actual world 1 corresponding to the background
being subjected to integration. In other words, the pixel value 2322 is a
value corresponding to a level wherein the light corresponding to the
foreground and the light corresponding to the background are spatially
mixed.

[1655]Thus, in the event that a portion corresponding to one pixel
(detecting element of the sensor 2) of the light signals in the actual
world 1 is not a portion where light signals having the same level are
spatially uniformly distributed, but a portion where light signals having
different levels, such as a foreground and a background, are distributed,
then upon that region being detected by the sensor 2, the region becomes one
pixel value in which the different light levels are, as it were, spatially
mixed (integrated in the spatial direction) by the integration effects of the
sensor 2. A region made up of pixels in which an image (light signals in the
actual world 1) corresponding to a foreground and an image (light signals in
the actual world 1) corresponding to a background have been subjected to
spatial integration in this way is referred to here as a spatial mixed
region.

[1656]Accordingly, with the third method, the actual world estimating unit
102 (FIG. 205) estimates the X cross-sectional waveform F(x) representing
the original region 2321 in the actual world 1 (of the light signals in
the actual world 1, the portion 2321 corresponding to one pixel of the
sensor 2) by approximating the X cross-sectional waveform F(x) with the
approximation function f3(x) serving as a one-dimensional polynomial
such as shown in FIG. 219.

[1657]That is to say, FIG. 219 represents an example of the approximation
function f3(x) corresponding to the pixel value 2322 serving as a
spatial mixed region (FIG. 218), i.e., the approximation function
f3(x) that approximates the X cross-sectional waveform F(x)
corresponding to the solid line within the region 2321 in the actual
world 1 (FIG. 218). In FIG. 219, the axis in the horizontal direction in
the drawing represents an axis in parallel with the side from the upper
left end xs to the lower right end xe of the pixel corresponding to
the pixel value 2322 (FIG. 218), which is taken as the x axis. The axis
in the vertical direction in the drawing is taken as an axis representing
pixel values.

[1658]In FIG. 219, the following Expression (112) is defined on condition
that the result obtained by subjecting the approximation function
f3(x) to integration in a range (pixel width) from the xs to
the xe is generally identical with the pixel values P(x, y) output
from the sensor 2 (dependent on a margin of error e alone).

P(x, y) = ∫[xs to xe] f3(x)dx + e (112)

[1659]In this case, the features wi of the approximation function
f3(x) are calculated from the 20 pixel values P(x, y) (however, x is
any one integer value of -1 through 2, and y is any one integer value of
-2 through 2) of the fine-line-including data region 2302 shown in FIG.
214, so the pixel value P in Expression (112) becomes the pixel values
P(x, y).

[1660]Also, as with the second method, it is necessary to take continuity
in the spatial direction into consideration, and accordingly, each of the
start position xs and end position xe in the integral range in
Expression (112) is dependent upon the shift amount Cx(y). That is
to say, each of the start position xs and end position xe of
the integral range in Expression (112) is represented such as the
following Expression (113).

xs=x-Cx(y)-0.5

xe=x-Cx(y)+0.5 (113)

[1661]In this case, upon each pixel value of the fine-line-including data
region 2302 shown in FIG. 214, i.e., each of the input pixel values P(x,
-2), P(x, -1), P(x, 0), P(x, 1), and P(x, 2) (however, x is any one
integer value of -1 through 2) shown in FIG. 215 being substituted for
the above Expression (112) (the integral range is the above Expression
(113)), 20 equations shown in the following Expression (114) are
generated.

P(x, y) = ∫[x-Cx(y)-0.5 to x-Cx(y)+0.5] f3(x')dx' + ek (114)

(one equation for each of the 20 input pixel values P(x, y), where x is any
one integer value of -1 through 2, y is any one integer value of -2 through
2, and ek (k is any one of the integer values 1 through 20) is the
corresponding margin of error)

[1662]Expression (114) is made up of 20 equations as with the above
Expression (111). Accordingly, with the third method as with the second
method, in the event that the number of the features wi of the
approximation function f3(x) is less than 20, i.e., in the event
that the approximation function f3(x) is a polynomial having the
number of dimensions less than 19, for example, the features wi may
be calculated with the least squares method. Note that the specific
solution of the least squares method will be described later.
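
As an illustrative sketch of the third method described by Expressions (112) through (114) (the pixel values, the angle θ, the closed-form per-term integral, and the use of NumPy's least squares routine are assumptions of the sketch, not the embodiment itself), the features wi might be obtained as follows:

```python
import numpy as np

# A minimal sketch of the third method: each pixel value is modelled as the
# integral of f3(x) over one pixel width shifted by Cx(y), Expressions
# (112)-(113).  The per-term integral of x^i over [xs, xe] is written in
# closed form.
rng = np.random.default_rng(0)
P = rng.random((5, 4))                          # hypothetical pixel values; rows: y = -2..2, cols: x = -1..2
theta = np.deg2rad(60.0)                        # hypothetical continuity angle
Cx = lambda y: y / np.tan(theta)                # shift amount, Expression (109)

def S(i, xs, xe):
    # Integral of x^i over [xs, xe]: (xe^(i+1) - xs^(i+1)) / (i + 1).
    return (xe**(i + 1) - xs**(i + 1)) / (i + 1)

n = 5
A, b = [], []
for j, y in enumerate(range(-2, 3)):
    for k, x in enumerate(range(-1, 3)):
        xs_, xe_ = x - Cx(y) - 0.5, x - Cx(y) + 0.5   # integral range, Expression (113)
        A.append([S(i, xs_, xe_) for i in range(n + 1)])
        b.append(P[j, k])

# Least squares solution of the 20 equations of Expression (114).
w3, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
f3 = lambda x: np.polyval(w3[::-1], x)
print(w3)
```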

[1663]For example, if we say that the number of dimensions of the
approximation function f3(x) is five, the approximation function
f3(x) calculated with the least squares method using Expression
(114) (the approximation function f3(x) generated with the
calculated features wi) becomes a curve shown with the solid line in
FIG. 220.

[1664]Note that in FIG. 220, the vertical axis represents pixel values,
and the horizontal axis represents a relative position x from the pixel
of interest.

[1665]As shown in FIG. 220, in the event that the approximation function
f3(x) (a curve shown with a solid line in the drawing) generated
with the third method is compared with the approximation function
f2(x) (a curve shown with a dashed line in the drawing) generated
with the second method, a pixel value at x=0 becomes great, and also the
gradient of the curve creates a steep waveform. This is because details
increase more than the input pixels, resulting in being unrelated to the
resolution of the input pixels. That is to say, we can say that the
approximation function f3(x) approximates the X cross-sectional
waveform F(x). Accordingly, though not shown in the drawing, the
approximation function f3(x) becomes a waveform closer to the X
cross-sectional waveform F(x) than the approximation function f2(x).

[1666]FIG. 221 represents a configuration example of the actual world
estimating unit 102 employing such a one-dimensional polynomial
approximating method.

[1667]In FIG. 221, the actual world estimating unit 102 estimates the X
cross-sectional waveform F(x) by calculating the features wi using
the above third method (least squares method), and generating the
approximation function f(x) of the above Expression (105) using the
calculated features wi.

[1669]The conditions setting unit 2331 sets a pixel range (hereafter,
referred to as a tap range) used for estimating the X cross-sectional
waveform F(x) corresponding to a pixel of interest, and the number of
dimensions n of the approximation function f(x).

[1671]The input pixel value acquiring unit 2333 acquires, of the input
images stored in the input image storage unit 2332, an input image region
corresponding to the tap range set by the conditions setting unit 2331,
and supplies this to the normal equation generating unit 2335 as an input
pixel value table. That is to say, the input pixel value table is a table
in which the respective pixel values of pixels included in the input
image region are described. Note that a specific example of the input
pixel value table will be described later.

[1672]Now, the actual world estimating unit 102 calculates the features
wi of the approximation function f(x) with the least squares method
using the above Expression (112) and Expression (113); here, the above
Expression (112) can be rewritten as the following Expression (115).

P(x, y) = Σ(i=0 to n) wi × Si(xs, xe) + e (115)

[1673]In Expression (115), Si(xs, xe) represents the integral
component of the i-dimensional term. That is to say, the integral components
Si(xs, xe), obtained by integrating x^i over the range from xs to xe, are
shown in the following Expression (116).

Si(xs, xe) = (xe^(i+1) - xs^(i+1))/(i+1) (116)

[1675]Specifically, the integral components Si(xs, xe)
(however, the value xs and value xe are the values shown in the
above Expression (113)) shown in Expression (116) may be calculated as
long as the relative pixel position (x, y), the shift amount Cx(y), and
i of the i-dimensional term are known. Also, of these, the relative
pixel position (x, y) is determined by the pixel of interest and the
tap range, the shift amount Cx(y) is determined by the angle θ
(by the above Expression (108) and Expression (109)), and the range of i
is determined by the number of dimensions n, respectively.

[1676]Accordingly, the integral component calculation unit 2334 calculates
the integral components Si(xs, xe) based on the tap range
and the number of dimensions set by the conditions setting unit 2331, and
the angle θ of the data continuity information output from the data
continuity detecting unit 101, and supplies the calculated results to the
normal equation generating unit 2335 as an integral component table.

[1677]The normal equation generating unit 2335 generates a normal equation
for obtaining the features wi on the right side of the above Expression
(115) with the least squares method, using the input pixel value table
supplied from the input pixel value acquiring unit 2333 and the integral
component table supplied from the integral component calculation unit 2334,
and supplies this to the approximation function generating unit 2336 as a
normal equation table. Note that a specific example of a normal equation
will be described later.

[1678]The approximation function generating unit 2336 calculates the
respective features wi of the above Expression (115) (i.e., the
respective coefficients wi of the approximation function f(x)
serving as a one-dimensional polynomial) by solving a normal equation
included in the normal equation table supplied from the normal equation
generating unit 2335 using the matrix solution, and outputs these to the
image generating unit 103.

[1679]Next, description will be made regarding the actual world estimating
processing (processing in step S102 in FIG. 40) of the actual world
estimating unit 102 (FIG. 221) which employs the one-dimensional
polynomial approximating method with reference to the flowchart in FIG.
222.

[1680]For example, let us say that an input image, which is a one-frame
input image output from the sensor 2, including the fine-line-including
data region 2302 in FIG. 207 described above has been already stored in
the input image storage unit 2332. Also, let us say that the data
continuity detecting unit 101 has subjected, at the continuity detection
processing in step S101 (FIG. 40), the fine-line-including data region
2302 to the processing thereof, and has already output the angle θ
as data continuity information.

[1681]In this case, the conditions setting unit 2331 sets conditions (a
tap range and the number of dimensions) in step S2301 in FIG. 222.

[1682]For example, let us say that a tap range 2351 shown in FIG. 223 is
set, and 5 dimensions are set as the number of dimensions.

[1683]That is to say, FIG. 223 is a diagram for describing an example of a
tap range. In FIG. 223, the X direction and Y direction are the X
direction and Y direction of the sensor 2 (FIG. 206) respectively. Also,
the tap range 2351 represents a pixel group made up of 20 pixels in total
(20 squares in the drawing) of 4 pixels in the X direction, and also 5
pixels in the Y direction.

[1684]Further, as shown in FIG. 223, let us say that a pixel of interest
is set at the second pixel from the left and also the third pixel from
the bottom in the drawing, of the tap range 2351. Also, let us say that
each pixel is denoted with a number l such as shown in FIG. 223 (l is any
integer value of 0 through 19) according to the relative pixel positions
(x, y) from the pixel of interest (a coordinate value of a
pixel-of-interest coordinates system wherein the center (0, 0) of the
pixel of interest is taken as the origin).

[1686]In step S2303, the input pixel value acquiring unit 2333 acquires an
input pixel value based on the condition (tap range) set by the
conditions setting unit 2331, and generates an input pixel value table.
That is to say, in this case, the input pixel value acquiring unit 2333
acquires the fine-line-including data region 2302 (FIG. 211), and
generates a table made up of 20 input pixel values P(l) as an input pixel
value table.

[1687]Note that in this case, the relation between the input pixel values
P(l) and the above input pixel values P(x, y) is a relation shown in the
following Expression (117). However, in Expression (117), the left side
represents the input pixel values P(l), and the right side represents the
input pixel values P(x, y).

P(0)=P(0,0)

P(1)=P(-1,2)

P(2)=P(0,2)

P(3)=P(1,2)

P(4)=P(2,2)

P(5)=P(-1,1)

P(6)=P(0,1)

P(7)=P(1,1)

P(8)=P(2,1)

P(9)=P(-1,0)

P(10)=P(1,0)

P(11)=P(2,0)

P(12)=P(-1,-1)

P(13)=P(0,-1)

P(14)=P(1,-1)

P(15)=P(2,-1)

P(16)=P(-1,-2)

P(17)=P(0,-2)

P(18)=P(1,-2)

P(19)=P(2,-2) (117)
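
For illustration, the pixel numbering of Expression (117) can be held as a simple lookup table; the helper below is a hypothetical sketch and not part of the described device:

```python
# Pixel number l -> relative position (x, y), following Expression (117).
TAP = [(0, 0),
       (-1, 2), (0, 2), (1, 2), (2, 2),
       (-1, 1), (0, 1), (1, 1), (2, 1),
       (-1, 0), (1, 0), (2, 0),
       (-1, -1), (0, -1), (1, -1), (2, -1),
       (-1, -2), (0, -2), (1, -2), (2, -2)]

def P_l(l, P_xy):
    # P(l) = P(x, y), where P_xy is any mapping (e.g. a dict keyed by (x, y))
    # giving the input pixel value at a relative position.
    x, y = TAP[l]
    return P_xy[(x, y)]
```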

[1688]In step S2304, the integral component calculation unit 2334
calculates integral components based on the conditions (a tap range and
the number of dimensions) set by the conditions setting unit 2331, and
the data continuity information (angle θ) supplied from the data
continuity detecting unit 101, and generates an integral component table.

[1689]In this case, as described above, the input pixel values are not
P(x, y) but P(l), and are acquired as the value of a pixel number l, so
the integral component calculation unit 2334 calculates the above
integral components Si(xs, xe) in Expression (116) as a
function of l such as the integral components Si(l) shown in the
left side of the following Expression (118).

Si(l)=Si(xs,xe) (118)

[1690]Specifically, in this case, the integral components Si(l) shown
in the following Expression (119) are calculated.

Si(0)=Si(-0.5,0.5)

Si(1)=Si(-1.5-Cx(2),-0.5-Cx(2))

Si(2)=Si(-0.5-Cx(2),0.5-Cx(2))

Si(3)=Si(0.5-Cx(2),1.5-Cx(2))

Si(4)=Si(1.5-Cx(2),2.5-Cx(2))

Si(5)=Si(-1.5-Cx(1),-0.5-Cx(1))

Si(6)=Si(-0.5-Cx(1),0.5-Cx(1))

Si(7)=Si(0.5-Cx(1),1.5-Cx(1))

Si(8)=Si(1.5-Cx(1),2.5-Cx(1))

Si(9)=Si(-1.5,-0.5)

Si(10)=Si(0.5,1.5)

Si(11)=Si(1.5,2.5)

Si(12)=Si(-1.5-Cx(-1),-0.5-Cx(-1))

Si(13)=Si(-0.5-Cx(-1),0.5-Cx(-1))

Si(14)=Si(0.5-Cx(-1),1.5-Cx(-1))

Si(15)=Si(1.5-Cx(-1),2.5-Cx(-1))

Si(16)=Si(-1.5-Cx(-2),-0.5-Cx(-2))

Si(17)=Si(-0.5-Cx(-2),0.5-Cx(-2))

Si(18)=Si(0.5-Cx(-2),1.5-Cx(-2))

Si(19)=Si(1.5-Cx(-2),2.5-Cx(-2)) (119)

[1691]Note that in Expression (119), the left side represents the integral
components Si(l), and the right side represents the integral
components Si(xs, xe). That is to say, in this case, i is
0 through 5, and accordingly, the 120 Si(l) in total of the 20
S0(l), 20 S1(l), 20 S2(l), 20 S3(l), 20 S4(l),
and 20 S5(l) are calculated.

[1692]More specifically, first the integral component calculation unit
2334 calculates each of the shift amounts Cx(-2), Cx(-1),
Cx(1), and Cx(2) using the angle θ supplied from the data
continuity detecting unit 101. Next, the integral component calculation
unit 2334 calculates each of the 20 integral components Si(xs,
xe) shown in the right side of Expression (118) regarding each of
i=0 through 5 using the calculated shift amounts Cx(-2),
Cx(-1), Cx(1), and Cx(2). That is to say, the 120 integral
components Si(xs, xe) are calculated. Note that with this
calculation of the integral components Si(xs, xe), the
above Expression (116) is used. Subsequently, the integral component
calculation unit 2334 converts each of the calculated 120 integral
components Si (xs, xe) into the corresponding integral
components Si(l) in accordance with Expression (119), and generates
an integral component table including the converted 120 integral
components Si(l).
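
A minimal sketch of how the integral component table of Expression (119) might be generated follows (the angle θ is a hypothetical value, and the helper names and closed-form integral are illustrative assumptions):

```python
import numpy as np

theta = np.deg2rad(60.0)                                      # hypothetical angle from the detecting unit
Cx = lambda y: y / np.tan(theta)                              # shift amount, Expression (109)
S = lambda i, xs, xe: (xe**(i + 1) - xs**(i + 1)) / (i + 1)   # integral component, Expression (116)

# Pixel number l -> relative position (x, y), as in Expression (117).
TAP = [(0, 0), (-1, 2), (0, 2), (1, 2), (2, 2),
       (-1, 1), (0, 1), (1, 1), (2, 1),
       (-1, 0), (1, 0), (2, 0),
       (-1, -1), (0, -1), (1, -1), (2, -1),
       (-1, -2), (0, -2), (1, -2), (2, -2)]

n = 5
# 120 components in total: S_0(l) .. S_5(l) for l = 0 .. 19, Expression (119).
S_table = np.array([[S(i, x - Cx(y) - 0.5, x - Cx(y) + 0.5)
                     for i in range(n + 1)]
                    for (x, y) in TAP])
print(S_table.shape)   # (20, 6)
```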

[1693]Note that the sequence of the processing in step S2303 and the
processing in step S2304 is not restricted to the example in FIG. 222; the
processing in step S2304 may be executed first, or the processing in
step S2303 and the processing in step S2304 may be executed
simultaneously.

[1694]Next, in step S2305, the normal equation generating unit 2335
generates a normal equation table based on the input pixel value table
generated by the input pixel value acquiring unit 2333 at the processing
in step S2303, and the integral component table generated by the integral
component calculation unit 2334 at the processing in step S2304.

[1695]Specifically, in this case, the features wi of the following
Expression (120) corresponding to the above Expression (115) are
calculated using the least squares method. A normal equation
corresponding to this is represented as the following Expression (121).

P(l) = Σ(i=0 to n) wi × Si(l) + e (120)

[ Σl S0(l)S0(l)  Σl S0(l)S1(l)  . . .  Σl S0(l)Sn(l) ]   [ w0 ]   [ Σl S0(l)P(l) ]
[ Σl S1(l)S0(l)  Σl S1(l)S1(l)  . . .  Σl S1(l)Sn(l) ] × [ w1 ] = [ Σl S1(l)P(l) ]
[      . . .          . . .     . . .       . . .    ]   [ .. ]   [      . . .   ]
[ Σl Sn(l)S0(l)  Σl Sn(l)S1(l)  . . .  Σl Sn(l)Sn(l) ]   [ wn ]   [ Σl Sn(l)P(l) ] (121)

(where Σl denotes the summation over l = 0 through L)

[1696]Note that in Expression (121), L represents the maximum value of the
pixel number l in the tap range. n represents the number of dimensions of
the approximation function f(x) serving as a polynomial. Specifically, in
this case, n=5, and L=19.

[1697]If we define each matrix of the normal equation shown in Expression
(121) as the following Expressions (122) through (124), the normal
equation is represented as the following Expression (125).

SMAT = the (n+1)×(n+1) matrix whose component in row j and column i is Σl Si(l)Sj(l) (122)

WMAT = the column vector whose components are the features w0, w1, . . . , wn (123)

PMAT = the column vector whose j-th component is Σl Sj(l)P(l) (124)

SMAT WMAT = PMAT (125)

(where Σl denotes the summation over l = 0 through L, as above)

[1698]As shown in Expression (123), the respective components of the
matrix WMAT are the features wi to be obtained. Accordingly, in
Expression (125), if the matrix SMAT of the left side and the matrix
PMAT of the right side are determined, the matrix WMAT (i.e., the
features wi) may be calculated with the matrix solution.

[1699]Specifically, as shown in Expression (122), the respective
components of the matrix SMAT may be calculated as long as the above
integral components Si(l) are known. The integral components
Si(l) are included in the integral component table supplied from the
integral component calculation unit 2334, so the normal equation
generating unit 2335 can calculate each component of the matrix SMAT
using the integral component table.

[1700]Also, as shown in Expression (124), the respective components of the
matrix PMAT may be calculated as long as the integral components
Si(l) and the input pixel values P(l) are known. The integral
components Si(l) are the same as those included in the respective
components of the matrix SMAT, and the input pixel values P(l) are
included in the input pixel value table supplied from the input pixel
value acquiring unit 2333, so the normal equation generating unit 2335
can calculate each component of the matrix PMAT using the integral
component table and the input pixel value table.

[1701]Thus, the normal equation generating unit 2335 calculates each
component of the matrix SMAT and matrix PMAT, and outputs the
calculated results (each component of the matrix SMAT and matrix
PMAT) to the approximation function generating unit 2336 as a normal
equation table.

[1702]Upon the normal equation table being output from the normal equation
generating unit 2335, in step S2306, the approximation function
generating unit 2336 calculates the features wi(i.e., the
coefficients wi of the approximation function f(x) serving as a
one-dimensional polynomial) serving as the respective components of the
matrix WMAT in the above Expression (125) based on the normal
equation table.

[1703]Specifically, the normal equation in the above Expression (125) can
be transformed as the following Expression (126).

WMAT=SMAT^(-1)·PMAT (126)

[1704]In Expression (126), the respective components of the matrix
WMAT in the left side are the features wi to be obtained. The
respective components regarding the matrix SMAT and matrix PMAT
are included in the normal equation table supplied from the normal
equation generating unit 2335. Accordingly, the approximation function
generating unit 2336 calculates the matrix WMAT by calculating the
matrix in the right side of Expression (126) using the normal equation
table, and outputs the calculated results (features wi) to the image
generating unit 103.
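
As a sketch of how the normal equation of Expression (125) might be assembled and solved (the tables below are placeholder values; np.linalg.solve is used rather than an explicit matrix inverse, which is a design choice of the sketch for numerical stability):

```python
import numpy as np

# Placeholder integral component table (20 x (n+1)) and input pixel value
# table (length 20); in the device these come from the preceding steps.
rng = np.random.default_rng(0)
S_table = rng.random((20, 6))
P_l = rng.random(20)

S_MAT = S_table.T @ S_table        # components: sum_l Si(l)Sj(l), Expression (122)
P_MAT = S_table.T @ P_l            # components: sum_l Sj(l)P(l),  Expression (124)

# Solve SMAT * WMAT = PMAT for the features w0 .. wn (Expressions (125)/(126)).
W_MAT = np.linalg.solve(S_MAT, P_MAT)
print(W_MAT)
```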

[1705]In step S2307, the approximation function generating unit 2336
determines regarding whether or not the processing of all the pixels has
been completed.

[1706]In step S2307, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S2303, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in step S2302 through S2307 is repeatedly performed.

[1707]In the event that the processing of all the pixels has been
completed (in step S2307, in the event that determination is made that
the processing of all the pixels has been completed), the estimating
processing of the actual world 1 ends.

[1708]Note that the waveform of the approximation function f(x) generated
with the coefficients (features) wi thus calculated becomes a
waveform such as the approximation function f3(x) in FIG. 220 described
above.

[1709]Thus, with the one-dimensional polynomial approximating method, the
features of the approximation function f(x) serving as a one-dimensional
polynomial are calculated on the assumption that a waveform having the same
form as the one-dimensional X cross-sectional waveform F(x) is continuous
in the direction of continuity. Accordingly, with the one-dimensional
polynomial approximating method, the features of the approximation
function f(x) can be calculated with a smaller amount of calculation
processing than with other function approximating methods.

[1710]In other words, with the one-dimensional polynomial approximating
method, for example, the multiple detecting elements of the sensor (for
example, the detecting elements 2-1 of the sensor 2 in FIG. 206), each having
time-space integration effects, project the light signals in the actual
world 1 (for example, the portion 2301 of the light signal in the actual
world 1 in FIG. 207), and the data continuity detecting unit 101 in FIG.
205 (FIG. 3) detects continuity of data (for example, continuity of data
represented with Gf in FIG. 214) in image data (for example, the image
data (input image region) 2302 in FIG. 207) made up of multiple pixels
having pixel values (for example, the input pixel values P(x, y) shown in
the respective graphs in FIG. 212) projected by the detecting elements
2-1, in which part of the continuity (for example, the continuity
represented with the gradient GF in FIG. 214) of the light signal in the
actual world 1 has been lost.

[1711]For example, the actual world estimating unit 102 in FIG. 205 (FIG.
3) estimates the light signal function F by approximating the light
signal function F representing the light signal in the actual world 1
(specifically, X cross-sectional waveform F(x)) with a predetermined
approximation function f (specifically, for example, the approximation
function f3(x) in FIG. 220) on condition that the pixel value (for
example, input pixel value P serving as the left side of the above
Expression (112)) of a pixel corresponding to a position in the
one-dimensional direction (for example, arrow 2311 in FIG. 209, i.e., X
direction) of the time-space directions of image data corresponding to
continuity of data detected by the data continuity detecting unit 101 is
the pixel value (for example, as shown in the right side of Expression
(112), the value obtained by the approximation function f3(x) being
integrated in the X direction) acquired by integration effects in the
one-dimensional direction.

[1712]Speaking in detail, for example, the actual world estimating unit
102 estimates the light signal function F by approximating the light
signal function F with the approximation function f, on condition that the
pixel value of a pixel corresponding to a distance (for example, the shift
amounts Cx(y) in FIG. 216) along the one-dimensional direction
(for example, the X direction) from a line corresponding to continuity of
data (for example, the line (dashed line) corresponding to the gradient
Gf in FIG. 216) detected by the data continuity detecting unit 101
is the pixel value (for example, the value obtained by the approximation
function f3(x) being integrated in the X direction as shown in
the right side of Expression (112), with an integral range such as shown
in Expression (113)) acquired by integration effects in the
one-dimensional direction.

[1713]Accordingly, with the one-dimensional polynomial approximating
method, the features of the approximation function f(x) can be calculated
with less amount of calculation processing than other function
approximating methods.

[1714]Next, description will be made regarding the second function
approximating method with reference to FIG. 224 through FIG. 230.

[1715]That is to say, the second function approximating method is a method
wherein the light signal in the actual world 1 having continuity in the
spatial direction represented with the gradient GF, such as shown in
FIG. 224 for example, is regarded as a waveform F(x, y) on the X-Y plane
(the plane spanned by the X direction serving as one direction of the
spatial directions and the Y direction orthogonal to the X direction), and
the waveform F(x, y) is approximated with the approximation function
f(x, y) serving as a two-dimensional polynomial, thereby estimating the
waveform F(x, y). Accordingly, hereafter, the second function approximating
method is referred to as a two-dimensional polynomial approximating method.

[1716]Note that in FIG. 224, the horizontal direction represents the X
direction serving as one direction of the spatial directions, the upper
right direction represents the Y direction serving as the other direction
of the spatial directions, and the vertical direction represents the
level of light respectively. GF represents the gradient as
continuity in the spatial direction.

[1717]Also, with description of the two-dimensional polynomial
approximating method, let us say that the sensor 2 is a CCD made up of
the multiple detecting elements 2-1 disposed on the plane thereof, such
as shown in FIG. 225.

[1718]With the example in FIG. 225, the direction in parallel with a
predetermined side of the detecting elements 2-1 is taken as the X
direction serving as one direction of the spatial directions, and the
direction orthogonal to the X direction is taken as the Y direction
serving as the other direction of the spatial directions. The direction
orthogonal to the X-Y plane is taken as the t direction serving as the
temporal direction.

[1719]Also, with the example in FIG. 225, the spatial shape of the
respective detecting elements 2-1 of the sensor 2 is taken as a square of
which one side is 1 in length. The shutter time (exposure time) of the
sensor 2 is taken as 1.

[1720]Further, with the example in FIG. 225, the center of one certain
detecting element 2-1 of the sensor 2 is taken as the origin (the
position in the X direction is x=0, and the position in the Y direction
is y=0) in the spatial directions (X direction and Y direction), and also
the intermediate point-in-time of the exposure time is taken as the
origin (the position in the t direction is t=0) in the temporal direction
(t direction).

[1721]In this case, the detecting element 2-1 of which the center is in
the origin (x=0, y=0) in the spatial directions subjects the light signal
function F(x, y, t) to integration with a range of -0.5 through 0.5 in
the X direction, with a range of -0.5 through 0.5 in the Y direction, and
with a range of -0.5 through 0.5 in the t direction, and outputs the
integral value as the pixel value P.

[1722]That is to say, the pixel value P output from the detecting element
2-1 of which the center is in the origin in the spatial directions is
represented with the following Expression (127).

P = ∫[-0.5 to 0.5]∫[-0.5 to 0.5]∫[-0.5 to 0.5] F(x, y, t)dx dy dt (127)
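
A small numerical sketch of Expression (127) follows; the light signal function F below is a hypothetical stand-in, and the triple integral is approximated with a simple midpoint rule rather than computed exactly:

```python
import numpy as np

def F(x, y, t):
    # Hypothetical placeholder light level; vectorized over NumPy arrays.
    return 1.0 + 0.5 * x - 0.2 * y

# Midpoint-rule approximation over the detecting element (-0.5..0.5 in x, y, t).
g = np.linspace(-0.5, 0.5, 50, endpoint=False) + 0.01   # midpoints of 50 sub-intervals
X, Y, T = np.meshgrid(g, g, g, indexing="ij")
P = np.mean(F(X, Y, T))          # cell volume is 1, so the mean approximates the integral
print(P)
```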

[1723]Similarly, the other detecting elements 2-1 output the pixel value P
shown in Expression (127) by taking the center of the detecting element
2-1 to be processed as the origin in the spatial directions.

[1724]Incidentally, as described above, the two-dimensional polynomial
approximating method is a method wherein the light signal in the actual
world 1 is handled as a waveform F(x, y) such as shown in FIG. 224 for
example, and the two-dimensional waveform F(x, y) is approximated with
the approximation function f(x, y) serving as a two-dimensional
polynomial.

[1725]First, description will be made regarding a method for representing
such an approximation function f(x, y) with a two-dimensional polynomial.

[1726]As described above, the light signal in the actual world 1 is
represented with the light signal function F(x, y, t), of which the
variables are the positions x, y, and z in three-dimensional space and the
point-in-time t. A one-dimensional waveform wherein this light signal
function F(x, y, t) is projected in the X direction at an arbitrary
position y in the Y direction will be referred to here as an X
cross-sectional waveform F(x).

[1727]When paying attention to this X cross-sectional waveform F(x), in
the event that the signal in the actual world 1 has continuity in a
certain direction in the spatial directions, it can be conceived that a
waveform having the same form as the X cross-sectional waveform F(x)
continues in the continuity direction. For example, with the example in
FIG. 224, a waveform having the same form as the X cross-sectional
waveform F(x) continues in the direction of the gradient GF. In
other words, it can be said that the waveform F(x, y) is formed by a
waveform having the same form as the X cross-sectional waveform F(x)
continuing in the direction of the gradient GF.

[1728]Accordingly, the approximation function f(x, y) can be represented
with a two-dimensional polynomial by considering that the waveform of the
approximation function f(x, y) approximating the waveform F(x, y) is
formed by a waveform having the same form as the approximation function
f(x) approximating the X cross-sectional waveform F(x) continuing in the
direction of continuity.

[1729]Description will be made in more detail regarding the representing
method of the approximation function f(x, y).

[1730]For example, let us say that the light signal in the actual world 1
such as shown in FIG. 224 described above, i.e., a light signal having
continuity in the spatial direction represented with the gradient GF
is detected by the sensor 2 (FIG. 225), and output as an input image
(pixel value).

[1731]Further, let us say that as shown in FIG. 226, the data continuity
detecting unit 101 (FIG. 3) subjects an input image region 2401 of this
input image, made up of 20 pixels in total (in the drawing, 20 squares
represented with a dashed line), 4 pixels in the X direction by 5 pixels
in the Y direction, to its processing, and outputs an angle θ (the angle
generated between the direction of data continuity represented with the
gradient Gf corresponding to the gradient GF, and the X direction) as one
piece of the data continuity information.

[1732]Note that with the input image region 2401, the horizontal direction
in the drawing represents the X direction serving as one direction in the
spatial directions, and the vertical direction in the drawing represents
the Y direction serving as the other direction of the spatial directions.

[1733]Also, in FIG. 226, an (x, y) coordinates system is set such that
the pixel which is the second pixel from the left and also the third
pixel from the bottom is taken as the pixel of interest, and the center
of the pixel of interest is taken as the origin (0, 0). The relative
distance in the X direction (hereafter referred to as the cross-sectional
direction distance) as to the straight line having the angle θ passing
through the origin (0, 0) (the straight line having the gradient Gf
representing the direction of data continuity) is described as x'.

[1734]Further, in FIG. 226, the graph on the right side represents an
approximation function f(x') serving as an n-dimensional polynomial (n is
an arbitrary integer), which approximates the X cross-sectional waveform
F(x'). Of the axes in the graph on the right side, the axis in the
horizontal direction in the drawing represents the cross-sectional
direction distance, and the axis in the vertical direction in the drawing
represents pixel values.

[1735]In this case, the approximation function f(x') shown in FIG. 226 is
an n-dimensional polynomial, so it is represented as the following
Expression (128).

f(x') = w0 + w1×x' + w2×x'^2 + . . . + wn×x'^n = Σ_{i=0}^{n} wi×x'^i (128)

[1736]Also, since the angle θ is determined, the straight line
having angle θ passing through the origin (0, 0) is uniquely
determined, and a position x1 in the X direction of the straight
line at an arbitrary position y in the Y direction is represented as the
following Expression (129). However, in Expression (129), s represents
cot θ.

x1=s×y (129)

[1737]That is to say, as shown in FIG. 226, a point on the straight line
corresponding to continuity of data represented with the gradient Gf
is represented with a coordinate value (x1, y).

[1738]The cross-sectional direction distance x' is represented as the
following Expression (130) using Expression (129).

x'=x-x1=x-s×y (130)

[1739]Accordingly, the approximation function f(x, y) at an arbitrary
position (x, y) within the input image region 2401 is represented as the
following Expression (131) using Expression (128) and Expression (130).

f(x, y) = Σ_{i=0}^{n} wi×(x-s×y)^i (131)

[1740]Note that in Expression (131), wi represents the coefficients of
the approximation function f(x, y). The coefficients wi of the
approximation function f, including the approximation function f(x, y),
can be evaluated as the features of the approximation function f.
Accordingly, the coefficients wi of the approximation function f are also
referred to as the features wi of the approximation function f.

[1741]Thus, the approximation function f(x, y) having a two-dimensional
waveform can be represented as the polynomial of Expression (131) as long
as the angle θ is known.
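
For illustration only, the following is a minimal Python sketch of
Expression (131): it evaluates f(x, y) from a given angle θ and a list of
features wi. The function name and the use of math.tan to obtain cot θ
are assumptions made here for the example, not part of the embodiment.

import math

def f_xy(x, y, theta, w):
    # Evaluate the two-dimensional approximation polynomial of Expression (131):
    # f(x, y) = sum_i wi * (x - s*y)**i, with s = cot(theta) as in Expression (129).
    s = 1.0 / math.tan(theta)   # theta assumed to be given in radians
    xp = x - s * y              # cross-sectional direction distance x' (Expression (130))
    return sum(wi * xp ** i for i, wi in enumerate(w))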

[1742]Accordingly, if the actual world estimating unit 102 can calculate
the features wi of Expression (131), the actual world estimating
unit 102 can estimate the waveform F(x, y) such as shown in FIG. 224.

[1743]Consequently, hereafter, description will be made regarding a method
for calculating the features wi of Expression (131).

[1744]That is to say, upon the approximation function f(x, y) represented
with Expression (131) being subjected to integration with an integral
range (integral range in the spatial direction) corresponding to a pixel
(the detecting element 2-1 of the sensor 2 (FIG. 225)), the integral
value becomes the estimated value of the pixel value of that pixel. This
is represented as an equation by the following Expression (132). Note
that with the two-dimensional polynomial approximating method, the
temporal direction t is regarded as a constant value, so Expression (132)
is taken as an equation of which the variables are the positions x and y
in the spatial directions (X direction and Y direction).

P(x, y) = ∫_{y-0.5}^{y+0.5} ∫_{x-0.5}^{x+0.5} f(x, y) dx dy + e (132)

[1745]In Expression (132), P(x, y) represents the pixel value of a pixel
of which the center position is in a position (x, y) (relative position
(x, y) from the pixel of interest) of an input image from the sensor 2.
Also, e represents a margin of error.

[1746]Thus, with the two-dimensional polynomial approximating method, the
relation between the input pixel value P(x, y) and the approximation
function f(x, y) serving as a two-dimensional polynomial can be
represented with Expression (132). Accordingly, the actual world
estimating unit 102 can estimate the two-dimensional function F(x, y)
(the waveform F(x, y) wherein the light signal in the actual world 1
having continuity in the spatial direction represented with the gradient
GF (FIG. 224) is represented focusing attention on the spatial
directions) by calculating the features wi with, for example, the least
squares method or the like using Expression (132), and generating the
approximation function f(x, y) by substituting the calculated features
wi into Expression (131).

[1747]FIG. 227 represents a configuration example of the actual world
estimating unit 102 employing such a two-dimensional polynomial
approximating method.

[1749]The conditions setting unit 2421 sets a pixel range (tap range) used
for estimating the function F(x, y) corresponding to a pixel of interest,
and the number of dimensions n of the approximation function f(x, y).

[1751]The input pixel value acquiring unit 2423 acquires, of the input
images stored in the input image storage unit 2422, an input image region
corresponding to the tap range set by the conditions setting unit 2421,
and supplies this to the normal equation generating unit 2425 as an input
pixel value table. That is to say, the input pixel value table is a table
in which the respective pixel values of pixels included in the input
image region are described. Note that a specific example of the input
pixel value table will be described later.

[1752]Incidentally, as described above, the actual world estimating unit
102 employing the two-dimensional polynomial approximating method
calculates the features wi of the approximation function f(x, y)
represented with the above Expression (131) by solving the above
Expression (132) using the least squares method.

[1753]Expression (132) can be represented as the following Expression
(137) by using the following Expression (136) obtained by the following
Expressions (133) through (135).

Si(x-0.5, x+0.5, y-0.5, y+0.5) = ∫_{y-0.5}^{y+0.5} ∫_{x-0.5}^{x+0.5} (x-s×y)^i dx dy (136)

P(x, y) = Σ_{i=0}^{n} wi×Si(x-0.5, x+0.5, y-0.5, y+0.5) + e (137)

[1754]In Expression (137), Si(x-0.5, x+0.5, y-0.5, y+0.5) represents
the integral components of i-dimensional terms. That is to say, the
integral components Si(x-0.5, x+0.5, y-0.5, y+0.5) are as shown in
the following Expression (138).

[1756]Specifically, the integral components Si(x-0.5, x+0.5, y-0.5,
y+0.5) shown in Expression (138) can be calculated as long as the
relative pixel positions (x, y), the variable s, and the i of the
i-dimensional terms in the above Expression (131) are known. Of these,
the relative pixel positions (x, y) are determined with the pixel of
interest and the tap range, the variable s is cot θ, which is determined
with the angle θ, and the range of i is determined with the number of
dimensions n.

[1757]Accordingly, the integral component calculation unit 2424 calculates
the integral components Si(x-0.5, x+0.5, y-0.5, y+0.5) based on the
tap range and the number of dimensions set by the conditions setting unit
2421, and the angle θ of the data continuity information output
from the data continuity detecting unit 101, and supplies the calculated
results to the normal equation generating unit 2425 as an integral
component table.
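
As a rough illustration only, the following Python sketch approximates
one integral component Si(x-0.5, x+0.5, y-0.5, y+0.5), i.e., the double
integral of (x-s×y)^i over the 1×1 pixel at the relative position (x, y),
with a simple midpoint rule; the integral component calculation unit 2424
evaluates these components in accordance with Expression (138), and the
helper name and sample count here are assumptions for the example.

import numpy as np

def integral_component(x, y, s, i, samples=200):
    # Midpoint-rule approximation of Si(x-0.5, x+0.5, y-0.5, y+0.5): the double
    # integral of (u - s*v)**i over the unit pixel centred at the relative position (x, y).
    u = x - 0.5 + (np.arange(samples) + 0.5) / samples   # sample points in the X direction
    v = y - 0.5 + (np.arange(samples) + 0.5) / samples   # sample points in the Y direction
    uu, vv = np.meshgrid(u, v)
    # The pixel area is 1, so the mean of the integrand equals the integral.
    return float(np.mean((uu - s * vv) ** i))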

[1758]The normal equation generating unit 2425 generates a normal equation
in the case of obtaining the above Expression (132), i.e., Expression
(137) by the least squares method using the input pixel value table
supplied from the input pixel value acquiring unit 2423, and the integral
component table supplied from the integral component calculation unit
2424, and outputs this to the approximation function generating unit 2426
as a normal equation table. Note that a specific example of a normal
equation will be described later.

[1759]The approximation function generating unit 2426 calculates the
respective features wi of the above Expression (132) (i.e., the
coefficients wi of the approximation function f(x, y) serving as a
two-dimensional polynomial) by solving the normal equation included in
the normal equation table supplied from the normal equation generating
unit 2425 using the matrix solution, and outputs these to the image
generating unit 103.

[1760]Next, description will be made regarding the actual world estimating
processing (processing in step S102 in FIG. 40) to which the
two-dimensional polynomial approximating method is applied, with
reference to the flowchart in FIG. 228.

[1761]For example, let us say that the light signal in the actual world 1
having continuity in the spatial direction represented with the gradient
GF has been detected by the sensor 2 (FIG. 225), and has been stored
in the input image storage unit 2422 as an input image corresponding to
one frame. Also, let us say that the data continuity detecting unit 101
has subjected the region 2401 shown in FIG. 226 described above of the
input image to processing in the continuity detecting processing in step
S101 (FIG. 40), and has output the angle θ as data continuity
information.

[1762]In this case, in step S2401, the conditions setting unit 2421 sets
conditions (a tap range and the number of dimensions).

[1763]For example, let us say that a tap range 2441 shown in FIG. 229 has
been set, and also 5 has been set as the number of dimensions.

[1764]FIG. 229 is a diagram for describing an example of a tap range. In
FIG. 229, the X direction and Y direction represent the X direction and Y
direction of the sensor 2 (FIG. 225). Also, the tap range 2441 represents
a pixel group made up of 20 pixels (20 squares in the drawing) in total
of 4 pixels in the X direction and also 5 pixels in the Y direction.

[1765]Further, as shown in FIG. 229, let us say that a pixel of interest
has been set to a pixel, which is the second pixel from the left and also
the third pixel from the bottom in the drawing, of the tap range 2441.
Also, let us say that each pixel is denoted with a number l such as shown
in FIG. 229 (l is any integer value of 0 through 19) according to the
relative pixel positions (x, y) from the pixel of interest (a coordinate
value of a pixel-of-interest coordinates system wherein the center (0, 0)
of the pixel of interest is taken as the origin).

[1767]In step S2403, the input pixel value acquiring unit 2423 acquires an
input pixel value based on the condition (tap range) set by the
conditions setting unit 2421, and generates an input pixel value table.
That is to say, in this case, the input pixel value acquiring unit 2423
acquires the input image region 2401 (FIG. 226), and generates a table
made up of 20 input pixel values P(l) as an input pixel value table.

[1768]Note that in this case, the relation between the input pixel values
P(l) and the above input pixel values P(x, y) is a relation shown in the
following Expression (139). However, in Expression (139), the left side
represents the input pixel values P(l), and the right side represents the
input pixel values P(x, y).

P(0)=P(0,0)

P(1)=P(-1,2)

P(2)=P(0,2)

P(3)=P(1,2)

P(4)=P(2,2)

P(5)=P(-1,1)

P(6)=P(0,1)

P(7)=P(1,1)

P(8)=P(2,1)

P(9)=P(-1,0)

P(10)=P(1,0)

P(11)=P(2,0)

P(12)=P(-1,-1)

P(13)=P(0,-1)

P(14)=P(1,-1)

P(15)=P(2,-1)

P(16)=P(-1,-2)

P(17)=P(0,-2)

P(18)=P(1,-2)

P(19)=P(2,-2) (139)
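
The correspondence of Expression (139) between the pixel number l and the
relative position (x, y) can be enumerated mechanically. The following
sketch (function name hypothetical) reproduces that ordering for the tap
range 2441 of FIG. 229; pairing these positions with an integral-component
helper such as the one sketched above, for i of 0 through 5, would yield
the 120 values Si(l) of Expression (141).

def tap_positions():
    # Relative positions (x, y) for the pixel numbers l = 0 through 19,
    # in the order of Expression (139); l = 0 is the pixel of interest.
    positions = [(0, 0)]
    for y in (2, 1, 0, -1, -2):        # rows of the tap range, top to bottom
        for x in (-1, 0, 1, 2):        # columns, left to right
            if (x, y) != (0, 0):       # the pixel of interest is already l = 0
                positions.append((x, y))
    return positions                   # 20 pairs, so P(l) = P(x, y) with (x, y) = positions[l]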

[1769]In step S2404, the integral component calculation unit 2424
calculates integral components based on the conditions (a tap range and
the number of dimensions) set by the conditions setting unit 2421, and
the data continuity information (angle θ) supplied from the data
continuity detecting unit 101, and generates an integral component table.

[1770]In this case, as described above, the input pixel values are not
P(x, y) but P(l), and are acquired as the value of a pixel number l, so
the integral component calculation unit 2424 calculates the integral
components Si(x-0.5, x+0.5, y-0.5, y+0.5) in the above Expression (138)
as a function of l, such as the integral components Si(l) shown in the
left side of the following Expression (140).

Si(l)=Si(x-0.5,x+0.5,y-0.5,y+0.5) (140)

[1771]Specifically, in this case, the integral components Si(l) shown
in the following Expression (141) are calculated.

Si(0)=Si(-0.5,0.5,-0.5,0.5)

Si(1)=Si(-1.5,-0.5,1.5,2.5)

Si(2)=Si(-0.5,0.5,1.5,2.5)

Si(3)=Si(0.5,1.5,1.5,2.5)

Si(4)=Si(1.5,2.5,1.5,2.5)

Si(5)=Si(-1.5,-0.5,0.5,1.5)

Si(6)=Si(-0.5,0.5,0.5,1.5)

Si(7)=Si(0.5,1.5,0.5,1.5)

Si(8)=Si(1.5,2.5,0.5,1.5)

Si(9)=Si(-1.5,-0.5,-0.5,0.5)

Si(10)=Si(0.5,1.5,-0.5,0.5)

Si(11)=Si(1.5,2.5,-0.5,0.5)

Si(12)=Si(-1.5,-0.5,-1.5,-0.5)

Si(13)=Si(-0.5,0.5,-1.5,-0.5)

Si(14)=Si(0.5,1.5,-1.5,-0.5)

Si(15)=Si(1.5,2.5,-1.5,-0.5)

Si(16)=Si(-1.5,-0.5,-2.5,-1.5)

Si(17)=Si(-0.5,0.5,-2.5,-1.5)

Si(18)=Si(0.5,1.5,-2.5,-1.5)

Si(19)=Si(1.5,2.5,-2.5,-1.5) (141)

[1772]Note that in Expression (141), the left side represents the integral
components Si(l), and the right side represents the integral
components Si(x-0.5, x+0.5, y-0.5, y+0.5). That is to say, in this
case, i is 0 through 5, and accordingly, the 120 Si(l) in total of
the 20 S0(l), 20 S1(l), 20 S2(l), 20 S3(l), 20
S4(l), and 20 S5(l) are calculated.

[1773]More specifically, first the integral component calculation unit
2424 calculates cot θ corresponding to the angle θ supplied
from the data continuity detecting unit 101, and takes the calculated
result as a variable s. Next, the integral component calculation unit
2424 calculates each of the 20 integral components Si(x-0.5, x+0.5,
y-0.5, y+0.5) shown in the right side of Expression (140) regarding each
of i=0 through 5 using the calculated variable s. That is to say, the 120
integral components Si(x-0.5, x+0.5, y-0.5, y+0.5) are calculated.
Note that with this calculation of the integral components Si(x-0.5,
x+0.5, y-0.5, y+0.5), the above Expression (138) is used. Subsequently,
the integral component calculation unit 2424 converts each of the
calculated 120 integral components Si(x-0.5, x+0.5, y-0.5, y+0.5)
into the corresponding integral components Si(l) in accordance with
Expression (141), and generates an integral component table including the
converted 120 integral components Si(l).

[1774]Note that the sequence of the processing in step S2403 and the
processing in step S2404 is not restricted to the example in FIG. 228;
the processing in step S2404 may be executed first, or the processing in
step S2403 and the processing in step S2404 may be executed
simultaneously.

[1775]Next, in step S2405, the normal equation generating unit 2425
generates a normal equation table based on the input pixel value table
generated by the input pixel value acquiring unit 2423 at the processing
in step S2403, and the integral component table generated by the integral
component calculation unit 2424 at the processing in step S2404.

[1776]Specifically, in this case, the features wi are calculated with
the least squares method using the above Expression (137) (however, in
Expression (136), the Si(l) into which the integral components
Si(x-0.5, x+0.5, y-0.5, y+0.5) are converted using Expression (140)
are used), so a normal equation corresponding to this is represented as
the following Expression (142).

[ Σ_{l=0}^{L} S0(l)S0(l)   Σ_{l=0}^{L} S0(l)S1(l)   . . .   Σ_{l=0}^{L} S0(l)Sn(l) ] [ w0 ]   [ Σ_{l=0}^{L} S0(l)P(l) ]
[ Σ_{l=0}^{L} S1(l)S0(l)   Σ_{l=0}^{L} S1(l)S1(l)   . . .   Σ_{l=0}^{L} S1(l)Sn(l) ] [ w1 ] = [ Σ_{l=0}^{L} S1(l)P(l) ]
[ . . .                                                                            ] [ .. ]   [ . . .                 ]
[ Σ_{l=0}^{L} Sn(l)S0(l)   Σ_{l=0}^{L} Sn(l)S1(l)   . . .   Σ_{l=0}^{L} Sn(l)Sn(l) ] [ wn ]   [ Σ_{l=0}^{L} Sn(l)P(l) ] (142)

[1777]Note that in Expression (142), L represents the maximum value of the
pixel number l in the tap range. n represents the number of dimensions of
the approximation function f(x, y) serving as a polynomial. Specifically,
in this case, n=5 and L=19.

[1778]If we define each matrix of the normal equation shown in Expression
(142) as the following Expressions (143) through (145), the normal
equation is represented as the following Expression (146).

SMAT = [ Σ_{l=0}^{L} S0(l)S0(l)   . . .   Σ_{l=0}^{L} S0(l)Sn(l) ]
       [ . . .                                                   ]
       [ Σ_{l=0}^{L} Sn(l)S0(l)   . . .   Σ_{l=0}^{L} Sn(l)Sn(l) ] (143)

WMAT = [ w0  w1  . . .  wn ]^T (144)

PMAT = [ Σ_{l=0}^{L} S0(l)P(l)  Σ_{l=0}^{L} S1(l)P(l)  . . .  Σ_{l=0}^{L} Sn(l)P(l) ]^T (145)

SMAT WMAT = PMAT (146)

[1779]As shown in Expression (144), the respective components of the
matrix WMAT are the features wi to be obtained. Accordingly, in
Expression (146), if the matrix SMAT of the left side and the matrix
PMAT of the right side are determined, the matrix WMAT may be
calculated with the matrix solution.

[1780]Specifically, as shown in Expression (143), the respective
components of the matrix SMAT may be calculated with the above
integral components Si(l). That is to say, the integral components
Si(l) are included in the integral component table supplied from the
integral component calculation unit 2424, so the normal equation
generating unit 2425 can calculate each component of the matrix SMAT
using the integral component table.

[1781]Also, as shown in Expression (145), the respective components of the
matrix PMAT may be calculated with the integral components Si(l) and the
input pixel values P(l). That is to say, the integral components Si(l)
are the same as those included in the respective components of the matrix
SMAT, and the input pixel values P(l) are included in the input pixel
value table supplied from the input pixel value acquiring unit 2423, so
the normal equation generating unit 2425 can calculate each component of
the matrix PMAT using the integral component table and the input pixel
value table.

[1782]Thus, the normal equation generating unit 2425 calculates each
component of the matrix SMAT and matrix PMAT, and outputs the
calculated results (each component of the matrix SMAT and matrix
PMAT) to the approximation function generating unit 2426 as a normal
equation table.

[1783]Upon the normal equation table being output from the normal equation
generating unit 2425, in step S2406, the approximation function
generating unit 2426 calculates the features wi (i.e., the
coefficients wi of the approximation function f(x, y) serving as a
two-dimensional polynomial) serving as the respective components of the
matrix WMAT in the above Expression (146) based on the normal
equation table.

[1784]Specifically, the normal equation in the above Expression (146) can
be transformed as the following Expression (147).

WMAT=SMAT-1PMAT (147)

[1785]In Expression (147), the respective components of the matrix
WMAT in the left side are the features wi to be obtained. The
respective components regarding the matrix SMAT and matrix PMAT
are included in the normal equation table supplied from the normal
equation generating unit 2425. Accordingly, the approximation function
generating unit 2426 calculates the matrix WMAT by calculating the
matrix in the right side of Expression (147) using the normal equation
table, and outputs the calculated results (features wi) to the image
generating unit 103.
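
As a minimal sketch of Expressions (142) through (147), assuming the
integral components Si(l) have been arranged in an array S of shape
(L+1, n+1) and the input pixel values P(l) in an array P of length L+1
(names chosen here only for illustration), the features could be obtained
as follows:

import numpy as np

def solve_features(S, P):
    # Least-squares features of Expression (147): WMAT = SMAT^-1 PMAT.
    # S[l, i] holds the integral component Si(l); P[l] holds the input pixel value P(l).
    S_mat = S.T @ S                        # SMAT of Expression (143): sums of Si(l)*Sj(l) over l
    P_mat = S.T @ P                        # PMAT of Expression (145): sums of Si(l)*P(l) over l
    return np.linalg.solve(S_mat, P_mat)   # the features w0 ... wn (WMAT of Expression (144))

Solving the normal equation directly, rather than forming an explicit
inverse of SMAT, is a common numerical choice; either way the result is
the set of features wi passed to the image generating unit 103.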

[1786]In step S2407, the approximation function generating unit 2426
determines regarding whether or not the processing of all the pixels has
been completed.

[1787]In step S2407, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S2402, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in step S2402 through S2407 is repeatedly performed.

[1788]In the event that the processing of all the pixels has been
completed (in step S2407, in the event that determination is made that
the processing of all the pixels has been completed), the estimating
processing of the actual world 1 ends.

[1789]As description of the two-dimensional polynomial approximating
method, an example for calculating the coefficients (features) wi of
the approximation function f(x, y) corresponding to the spatial
directions (X direction and Y direction) has been employed, but the
two-dimensional polynomial approximating method can be applied to the
temporal and spatial directions (X direction and t direction, or Y
direction and t direction) as well.

[1790]That is to say, the above example is an example in the case of the
light signal in the actual world 1 having continuity in the spatial
direction represented with the gradient GF (FIG. 224), and accordingly,
an equation including two-dimensional integration in the spatial
directions (X direction and Y direction), such as shown in the above
Expression (132), is employed. However, the concept of two-dimensional
integration can be applied not only to the spatial directions but also to
the temporal and spatial directions (X direction and t direction, or Y
direction and t direction).

[1791]In other words, with the two-dimensional polynomial approximating
method, even in the case in which the light signal function F(x, y, t),
which needs to be estimated, has not only continuity in the spatial
direction but also continuity in the temporal and spatial directions
(however, X direction and t direction, or Y direction and t direction),
this can be approximated with a two-dimensional polynomial.

[1792]Specifically, for example, in the event that there is an object
moving horizontally in the X direction at uniform velocity, the direction
of movement of the object is represented with a gradient VF in the X-t
plane such as shown in FIG. 230. In other words, it can be said that the
gradient VF represents the direction of continuity in the temporal and
spatial directions in the X-t plane. Accordingly, the data continuity
detecting unit 101 can output a movement θ such as shown in FIG. 230
(strictly speaking, though not shown in the drawing, the movement θ is
the angle generated between the direction of data continuity represented
with the gradient Vf corresponding to the gradient VF, and the X
direction in the spatial directions) as data continuity information
corresponding to the gradient VF representing continuity in the temporal
and spatial directions in the X-t plane, in the same way as the above
angle θ (data continuity information corresponding to continuity in the
spatial directions represented with the gradient GF in the X-Y plane).

[1793]Accordingly, the actual world estimating unit 102 employing the
two-dimensional polynomial approximating method can calculate the
coefficients (features) wi of an approximation function f(x, t) in
the same method as the above method by employing the movement θ
instead of the angle θ. However, in this case, the equation to be
employed is not the above Expression (132) but the following Expression
(148).

P(x, t) = ∫_{t-0.5}^{t+0.5} ∫_{x-0.5}^{x+0.5} Σ_{i=0}^{n} wi×(x-s×t)^i dx dt + e (148)

[1794]Note that in Expression (148), s is cot θ (however, θ is
movement).

[1795]Also, an approximation function f(y, t) focusing attention on the
spatial direction Y instead of the spatial direction X can be handled in
the same way as the above approximation function f(x, t).

[1796]Thus, with the two-dimensional polynomial approximating method, for
example, the multiple detecting elements of the sensor (for example,
detecting elements 2-1 of the sensor 2 in FIG. 225) each having
time-space integration effects project the light signals in the actual
world 1 (FIG. 205), and the data continuity detecting unit 101 in FIG.
205 (FIG. 3) detects continuity of data (for example, continuity of data
represented with Gf in FIG. 226) in image data (for example, input
image in FIG. 205) made up of multiple pixels having a pixel value
projected by the detecting elements 2-1, which drop part of continuity
(for example, continuity represented with the gradient GF in FIG.
224) of the light signal in the actual world 1.

[1797]For example, the actual world estimating unit 102 in FIG. 205 (FIG.
3) (FIG. 227 for configuration) estimates the light signal function F by
approximating the light signal function F representing the light signal
in the actual world 1 (specifically, the function F(x, y) in FIG. 224)
with an approximation function f (for example, the approximation function
f(x, y) shown in Expression (131)) serving as a polynomial, on condition
that the pixel value (for example, the input pixel value P(x, y) serving
as the left side of the above Expression (132)) of a pixel corresponding
to a position at least in the two-dimensional direction (for example, the
spatial direction X and spatial direction Y in FIG. 224 and FIG. 225) of
the time-space directions of the image data corresponding to continuity
of data detected by the data continuity detecting unit 101 is the pixel
value (for example, as shown in the right side of Expression (132), the
value obtained by the approximation function f(x, y) shown in the above
Expression (131) being integrated in the X direction and Y direction)
acquired by integration effects in the two-dimensional direction.

[1798]Speaking in detail, for example, the actual world estimating unit
102 estimates a first function representing the light signals in the real
world by approximating the first function with a second function serving
as a polynomial on condition that the pixel value of a pixel
corresponding to a distance (for example, cross-sectional direction
distance x' in FIG. 226) along in the two-dimensional direction from a
line corresponding to continuity of data (for example, a line (arrow)
corresponding to the gradient Gf in FIG. 226) detected by the
continuity detecting unit 101 is the pixel value acquired by integration
effects at least in the two-dimensional direction.

[1799]Thus, the two-dimensional polynomial approximating method takes not
one-dimensional but two-dimensional integration effects into
consideration, so can estimate the light signals in the actual world 1
more accurately than the one-dimensional polynomial approximating method.

[1800]Next, description will be made regarding the third function
approximating method with reference to FIG. 231 through FIG. 235.

[1801]That is to say, the third function approximating method is a method
for estimating the light signal function F(x, y, t) by approximating the
light signal function F(x, y, t) with the approximation function f(x, y,
t) focusing attention on that the light signal in the actual world 1
having continuity in a predetermined direction of the temporal and
spatial directions is represented with the light signal function F(x, y,
t), for example. Accordingly, hereafter, the third function approximating
method is referred to as a three-dimensional function approximating
method.

[1802]Also, with description of the three-dimensional function
approximating method, let us say that the sensor 2 is a CCD made up of
the multiple detecting elements 2-1 disposed on the plane thereof, such
as shown in FIG. 231.

[1803]With the example in FIG. 231, the direction in parallel with a
predetermined side of the detecting elements 2-1 is taken as the X
direction serving as one direction of the spatial directions, and the
direction orthogonal to the X direction is taken as the Y direction
serving as the other direction of the spatial directions. The direction
orthogonal to the X-Y plane is taken as the t direction serving as the
temporal direction.

[1804]Also, with the example in FIG. 231, the spatial shape of the
respective detecting elements 2-1 of the sensor 2 is taken as a square of
which one side is 1 in length. The shutter time (exposure time) of the
sensor 2 is taken as 1.

[1805]Further, with the example in FIG. 231, the center of one certain
detecting element 2-1 of the sensor 2 is taken as the origin (the
position in the X direction is x=0, and the position in the Y direction
is y=0) in the spatial directions (X direction and Y direction), and also
the intermediate point-in-time of the exposure time is taken as the
origin (the position in the t direction is t=0) in the temporal direction
(t direction).

[1806]In this case, the detecting element 2-1 of which the center is in
the origin (x=0, y=0) in the spatial directions subjects the light signal
function F(x, y, t) to integration with a range of -0.5 through 0.5 in
the X direction, with a range of -0.5 through 0.5 in the Y direction, and
with a range of -0.5 through 0.5 in the t direction, and outputs the
integral value as the pixel value P.

[1807]That is to say, the pixel value P output from the detecting element
2-1 of which the center is in the origin in the spatial directions is
represented with the following Expression (149).

P = ∫_{-0.5}^{+0.5} ∫_{-0.5}^{+0.5} ∫_{-0.5}^{+0.5} F(x, y, t) dx dy dt (149)

[1808]Similarly, the other detecting elements 2-1 output the pixel value P
shown in Expression (149) by taking the center of the detecting element
2-1 to be processed as the origin in the spatial directions.

[1810]Specifically, for example, the approximation function f(x, y, t) is
taken as a function having N variables (features), and a relational
expression between the input pixel values P(x, y, t) corresponding to
Expression (149) and the approximation function f(x, y, t) is defined.
Thus, in the event that M input pixel values P(x, y, t), where M is
greater than N, are acquired, the N variables (features) can be
calculated from the defined relational expression. That is to say, the
actual world estimating unit 102 can estimate the light signal function
F(x, y, t) by acquiring M input pixel values P(x, y, t) and calculating
the N variables (features).

[1811]In this case, the actual world estimating unit 102 extracts
(acquires) the M input pixel values P(x, y, t) out of the entire input
image by using continuity of data included in the input image (input
pixel values) from the sensor 2 as a constraint (i.e., using the data
continuity information as to the input image output from the data
continuity detecting unit 101). As a result, the approximation function
f(x, y, t) is constrained by continuity of data.

[1812]For example, as shown in FIG. 232, in the event that the light
signal function F(x, y, t) corresponding to an input image has continuity
in the spatial direction represented with the gradient GF, the data
continuity detecting unit 101 results in outputting the angle θ
(the angle θ generated between the direction of continuity of data
represented with the gradient Gf(not shown) corresponding to the
gradient GF, and the X direction) as data continuity information as
to the input image.

[1813]In this case, let us say that a one-dimensional waveform wherein the
light signal function F(x, y, t) is projected in the X direction (such a
waveform is referred to as an X cross-sectional waveform here) has the
same form even in the event of projection in any position in the Y
direction.

[1814]That is to say, let us say that there is a two-dimensional (spatial
directional) waveform wherein an X cross-sectional waveform having the
same form continues in the direction of continuity (the angle θ direction
as to the X direction), and that a three-dimensional waveform wherein
such a two-dimensional waveform continues in the temporal direction t is
approximated with the approximation function f(x, y, t).

[1815]In other words, an X cross-sectional waveform, which is shifted by a
position y in the Y direction from the center of the pixel of interest,
becomes a waveform wherein the X cross-sectional waveform passing through
the center of the pixel of interest is moved (shifted) by a predetermined
amount (amount varies according to the angle θ) in the X direction.
Note that hereafter, such an amount is referred to as a shift amount.

[1816]This shift amount can be calculated as follows.

[1817]That is to say, the gradient Gf (for example, the gradient Gf
representing the direction of data continuity corresponding to the
gradient GF in FIG. 232) and the angle θ are represented as the following
Expression (150).

Gf = tan θ = dy/dx (150)

[1818]Note that in Expression (150), dx represents the amount of fine
movement in the X direction, and dy represents the amount of fine
movement in the Y direction as to the dx.

[1819]Accordingly, if the shift amount as to the X direction is described
as Cx(y), this is represented as the following Expression (151).

Cx(y) = y/Gf (151)

[1820]If the shift amount Cx(y) is thus defined, a relational
expression between the input pixel values P(x, y, t) corresponding to
Expression (149) and the approximation function f(x, y, t) is represented
as the following Expression (152).

P(x, y, t) = ∫_{ts}^{te} ∫_{ys}^{ye} ∫_{xs}^{xe} f(x, y, t) dx dy dt + e (152)

[1821]In Expression (152), e represents a margin of error. ts
represents an integration start position in the t direction, and te
represents an integration end position in the t direction. In the same
way, ys represents an integration start position in the Y direction,
and ye represents an integration end position in the Y direction.
Also, xs represents an integration start position in the X
direction, and xe represents an integration end position in the X
direction. However, the respective specific integral ranges are as shown
in the following Expression (153).

ts=t-0.5

te=t+0.5

ys=y-0.5

ye=y+0.5

xs=x-Cx(y)-0.5

xe=x-Cx(y)+0.5 (153)

[1822]As shown in Expression (153), it can be represented that an X
cross-sectional waveform having the same form continues in the direction
of continuity (angle θ direction as to the X direction) by shifting
an integral range in the X direction as to a pixel positioned distant
from the pixel of interest by (x, y) in the spatial direction by the
shift amount Cx(y).
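
A small sketch of the integral range of Expression (153), assuming, per
Expressions (150) and (151), that the gradient Gf is tan θ (θ in radians)
and that the shift amount Cx(y) is y divided by Gf; the function name is
hypothetical.

import math

def integral_range_x_shifted(x, y, t, theta):
    # Integral range of Expression (153): the X range of the pixel at the relative
    # position (x, y) is shifted by Cx(y) = y / Gf, with Gf = tan(theta).
    Gf = math.tan(theta)
    Cx = y / Gf                       # shift amount of Expression (151)
    return (x - Cx - 0.5, x - Cx + 0.5), (y - 0.5, y + 0.5), (t - 0.5, t + 0.5)

The returned tuples correspond to the (xs, xe), (ys, ye), and (ts, te)
ranges of Expression (153).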

[1823]Thus, with the three-dimensional function approximating method, the
relation between the pixel values P(x, y, t) and the three-dimensional
approximation function f(x, y, t) can be represented with Expression
(152) (with Expression (153) for the integral range), and accordingly,
the light signal function F(x, y, t) (for example, a light signal having
continuity in the spatial direction represented with the gradient GF
such as shown in FIG. 232) can be estimated by calculating the N features
of the approximation function f(x, y, t) with, for example, the least
squares method using Expression (152) and Expression (153).

[1824]Note that in the event that a light signal represented with the
light signal function F(x, y, t) has continuity in the spatial direction
represented with the gradient GF such as shown in FIG. 232, the light
signal function F(x, y, t) may be approximated as follows.

[1825]That is to say, let us say that a one-dimensional waveform wherein
the light signal function F(x, y, t) is projected in the Y direction
(hereafter, such a waveform is referred to as a Y cross-sectional
waveform) has the same form even in the event of projection in any
position in the X direction.

[1826]In other words, let us say that there is a two-dimensional (spatial
directional) waveform wherein a Y cross-sectional waveform having the
same form continues in the direction of continuity (the angle θ direction
as to the X direction), and that a three-dimensional waveform wherein
such a two-dimensional waveform continues in the temporal direction t is
approximated with the approximation function f(x, y, t).

[1827]Accordingly, the Y cross-sectional waveform, which is shifted by x
in the X direction from the center of the pixel of interest, becomes a
waveform wherein the Y cross-sectional waveform passing through the
center of the pixel of interest is moved by a predetermined shift amount
(shift amount changing according to the angle θ) in the Y
direction.

[1828]This shift amount can be calculated as follows.

[1829]That is to say, the gradient Gf is represented as the above
Expression (150), so if the shift amount as to the Y direction is
described as Cy(x), this is represented as the following Expression
(154).

Cy(x)=Gf×x (154)

[1830]If the shift amount Cy(x) is thus defined, a relational
expression between the input pixel values P(x, y, t) corresponding to
Expression (149) and the approximation function f(x, y, t) is represented
as the above Expression (152), as with when the shift amount Cx(y)
is defined.

[1831]However, in this case, the respective specific integral ranges are
as shown in the following Expression (155).

ts=t-0.5

te=t+0.5

ys=y-Cy(x)-0.5

ye=y-Cy(x)+0.5

xs=x-0.5

xe=x+0.5 (155)

[1832]As shown in Expression (155) (and the above Expression (152)), it
can be represented that a Y cross-sectional waveform having the same form
continues in the direction of continuity (the angle θ direction as to the
X direction) by shifting the integral range in the Y direction as to a
pixel positioned distant from the pixel of interest by (x, y), by the
shift amount Cy(x).

[1833]Thus, with the three-dimensional function approximating method, the
integral range of the right side of the above Expression (152) can be set
to not only Expression (153) but also Expression (155), and accordingly,
the light signal function F(x, y, t) (the light signal in the actual
world 1 having continuity in the spatial direction represented with the
gradient GF) can be estimated by calculating the N features of the
approximation function f(x, y, t) with, for example, the least squares
method or the like using Expression (152) in which Expression (155) is
employed as the integral range.

[1834]Thus, Expression (153) and Expression (155), which represent an
integral range, represent essentially the same thing; the only difference
is whether perimeter pixels are shifted in the X direction (in the case
of Expression (153)) or shifted in the Y direction (in the case of
Expression (155)) in response to the direction of continuity.

[1835]However, in response to the direction of continuity (gradient
GF), there is a difference regarding whether the light signal
function F(x, y, t) is regarded as a group of X cross-sectional
waveforms, or is regarded as a group of Y cross-sectional waveforms. That
is to say, in the event that the direction of continuity is close to the
Y direction, the light signal function F(x, y, t) is preferably regarded
as a group of X cross-sectional waveforms. On the other hand, in the
event that the direction of continuity is close to the X direction, the
light signal function F(x, y, t) is preferably regarded as a group of Y
cross-sectional waveforms.

[1836]Accordingly, it is preferable that the actual world estimating unit
102 prepares both Expression (153) and Expression (155) as an integral
range, and selects any one of Expression (153) and Expression (155) as
the integral range of the right side of the appropriate Expression (152)
in response to the direction of continuity.

[1837]Description has been made regarding the three-dimensional function
method in the case in which the light signal function F(x, y, t) has
continuity (for example, continuity in the spatial direction represented
with the gradient GF in FIG. 232) in the spatial directions (X
direction and Y direction), but the three-dimensional function method can
be applied to the case in which the light signal function F(x, y, t) has
continuity (continuity represented with the gradient VF) in the
temporal and spatial directions (X direction, Y direction, and t
direction), as shown in FIG. 233.

[1838]That is to say, in FIG. 233, a light signal function corresponding
to a frame #N-1 is taken as F (x, y, #N-1), a light signal function
corresponding to a frame #N is taken as F (x, y, #N), and a light signal
function corresponding to a frame #N+1 is taken as F (x, y, #N+1).

[1839]Note that in FIG. 233, the horizontal direction is taken as the X
direction serving as one direction of the spatial directions, the upper
right diagonal direction is taken as the Y direction serving as the other
direction of the spatial directions, and also the vertical direction is
taken as the t direction serving as the temporal direction in the
drawing.

[1840]Also, the frame #N-1 is a frame temporally prior to the frame #N,
the frame #N+1 is a frame temporally following the frame #N. That is to
say, the frame #N-1, frame #N, and frame #N+1 are displayed in the
sequence of the frame #N-1, frame #N, and frame #N+1.

[1841]With the example in FIG. 233, a cross-sectional light level along
the direction shown with the gradient VF (upper right inner
direction from lower left near side in the drawing) is regarded as
generally constant. Accordingly, with the example in FIG. 233, it can be
said that the light signal function F(x, y, t) has continuity in the
temporal and spatial directions represented with the gradient VF.

[1842]In this case, in the event that a function C (x, y, t) representing
continuity in the temporal and spatial directions is defined, and also
the integral range of the above Expression (152) is defined with the
defined function C (x, y, t), N features of the approximation function
f(x, y, t) can be calculated as with the above Expression (153) and
Expression (155).

[1843]The function C (x, y, t) is not restricted to a particular function
as long as it is a function representing the direction of continuity.
However, hereafter, let us say that linear continuity is employed, and
that Cx(t) and Cy(t), corresponding to the shift amount Cx(y)
(Expression (151)) and the shift amount Cy(x) (Expression (154)), which
are functions representing continuity in the spatial direction described
above, are defined as the function C (x, y, t) corresponding thereto, as
follows.

[1844]That is to say, if the gradient as continuity of data in the
temporal and spatial directions corresponding to the gradient Gf
representing continuity of data in the above spatial direction is taken
as Vf, and if this gradient Vf is divided into the gradient in
the X direction (hereafter, referred to as Vfx) and the gradient in
the Y direction (hereafter, referred to as Vfy), the gradient
Vfx is represented with the following Expression (156), and the
gradient Vfy is represented with the following Expression (157),
respectively.

Vfx = dx/dt (156)

Vfy = dy/dt (157)

(where dx and dy represent the amounts of fine movement in the X
direction and Y direction, respectively, as to the amount of fine
movement dt in the t direction)

[1845]In this case, the function Cx(t) is represented as the
following Expression (158) using the gradient Vfx shown in
Expression (156).

Cx(t)=Vfx×t (158)

[1846]Similarly, the function Cy(t) is represented as the following
Expression (159) using the gradient Vfy shown in Expression (157).

Cy(t)=Vfy×t (159)

[1847]Thus, upon the function Cx(t) and function Cy(t), which
represent continuity 2511 in the temporal and spatial directions, being
defined, the integral range of Expression (152) is represented as the
following Expression (160).

ts=t-0.5

te=t+0.5

ys=y-Cy(t)-0.5

ye=y-Cy(t)+0.5

xs=x-Cx(t)-0.5

xe=x-Cx(t)+0.5 (160)

[1848]Thus, with the three-dimensional function approximating method, the
relation between the pixel values P(x, y, t) and the three-dimensional
approximation function f(x, y, t) can be represented with Expression
(152), and accordingly, the light signal function F(x, y, t) (light
signal in the actual world 1 having continuity in a predetermined
direction of the temporal and spatial directions) can be estimated by
calculating the n+1 features of the approximation function f(x, y, t)
with, for example, the least squares method or the like using Expression
(160) as the integral range of the right side of Expression (152).
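
Similarly, a sketch of the integral range of Expression (160), assuming
Vfx and Vfy are supplied as the X-direction and Y-direction components of
the gradient Vf obtained from the data continuity information; the
function name is hypothetical.

def integral_range_xt_shifted(x, y, t, Vfx, Vfy):
    # Integral range of Expression (160): both the X range and the Y range of the
    # pixel at the relative position (x, y, t) are shifted according to time, by
    # Cx(t) = Vfx*t (Expression (158)) and Cy(t) = Vfy*t (Expression (159)).
    Cx, Cy = Vfx * t, Vfy * t
    return (x - Cx - 0.5, x - Cx + 0.5), (y - Cy - 0.5, y - Cy + 0.5), (t - 0.5, t + 0.5)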

[1849]FIG. 234 represents a configuration example of the actual world
estimating unit 102 employing such a three-dimensional function
approximating method.

[1850]Note that the approximation function f(x, y, t) (in reality, the
features (coefficients) thereof) calculated by the actual world
estimating unit 102 employing the three-dimensional function
approximating method is not restricted to a particular function, but an n
(n=N-1)-dimensional polynomial is employed in the following description.

[1852]The conditions setting unit 2521 sets a pixel range (tap range) used
for estimating the light signal function F(x, y, t) corresponding to a
pixel of interest, and the number of dimensions n of the approximation
function f(x, y, t).

[1854]The input pixel value acquiring unit 2523 acquires, of the input images
stored in the input image storage unit 2522, an input image region
corresponding to the tap range set by the conditions setting unit 2521,
and supplies this to the normal equation generating unit 2525 as an input
pixel value table. That is to say, the input pixel value table is a table
in which the respective pixel values of pixels included in the input
image region are described.

[1855]Incidentally, as described above, the actual world estimating unit
102 employing the three-dimensional function approximating method
calculates the N features (in this case, the coefficient of each
dimension) of the approximation function f(x, y, t) with the least
squares method using the above Expression (152) (however, with Expression
(153), Expression (155), or Expression (160) for the integral range).

[1856]The right side of Expression (152) can be represented as the
following Expression (161) by calculating the integration thereof.

P(x, y, t) = Σ_{i=0}^{n} wi×Si(xs, xe, ys, ye, ts, te) + e (161)

[1857]In Expression (161), wi represents the coefficients (features)
of the i-dimensional term, and also Si(xs, xe, ys,
ye, ts, te) represents the integral components of the
i-dimensional term. However, xs represents an integral range start
position in the X direction, xe represents an integral range end
position in the X direction, ys represents an integral range start
position in the Y direction, ye represents an integral range end
position in the Y direction, ts represents an integral range start
position in the t direction, te represents an integral range end
position in the t direction, respectively.

[1859]That is to say, the integral component calculation unit 2524
calculates the integral components Si(xs, xe, ys, ye, ts, te) based on
the tap range and the number of dimensions set by the conditions setting
unit 2521, and the angle or movement of the data continuity information
output from the data continuity detecting unit 101 (the angle in the case
of using the above Expression (153) or Expression (155) as the integral
range, and the movement in the case of using the above Expression (160)),
and supplies the calculated results to the normal equation generating
unit 2525 as an integral component table.
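
For illustration, the following sketch approximates one integral
component Si(xs, xe, ys, ye, ts, te) of Expression (161) numerically,
with the i-dimensional basis term of the approximation polynomial passed
in as a callable, since its exact form is not repeated here; the integral
component calculation unit 2524 evaluates these components analytically,
and all names below are assumptions for the example.

import numpy as np

def integral_component_3d(basis_i, xs, xe, ys, ye, ts, te, samples=40):
    # Midpoint-rule approximation of Si(xs, xe, ys, ye, ts, te): the triple integral
    # of the i-dimensional basis term over the given X, Y, and t ranges.
    u = xs + (np.arange(samples) + 0.5) * (xe - xs) / samples
    v = ys + (np.arange(samples) + 0.5) * (ye - ys) / samples
    w = ts + (np.arange(samples) + 0.5) * (te - ts) / samples
    uu, vv, ww = np.meshgrid(u, v, w, indexing="ij")
    volume = (xe - xs) * (ye - ys) * (te - ts)
    return float(np.mean(basis_i(uu, vv, ww)) * volume)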

[1860]The normal equation generating unit 2525 generates a normal equation
in the case of obtaining the above Expression (161) with the least
squares method using the input pixel value table supplied from the input
pixel value acquiring unit 2523, and the integral component table
supplied from the integral component calculation unit 2524, and outputs
this to the approximation function generating unit 2526 as a normal
equation table. An example of a normal equation will be described later.

[1861]The approximation function generating unit 2526 calculates the
respective features wi (in this case, the coefficients wi of the
approximation function f(x, y, t) serving as a three-dimensional
polynomial) by solving the normal equation included in the normal
equation table supplied from the normal equation generating unit 2525
with the matrix solution, and outputs these to the image generating unit
103.

[1862]Next, description will be made regarding the actual world estimating
processing (processing in step S102 in FIG. 40) to which the
three-dimensional function approximating method is applied, with
reference to the flowchart in FIG. 235.

[1863]First, in step S2501, the conditions setting unit 2521 sets
conditions (a tap range and the number of dimensions).

[1864]For example, let us say that a tap range made up of L pixels has
been set. Also, let us say that a predetermined number l (l is any one of
integer values 0 through L-1) is appended to each of the pixels.

[1866]In step S2503, the input pixel value acquiring unit 2523 acquires an
input pixel value based on the condition (tap range) set by the
conditions setting unit 2521, and generates an input pixel value table.
In this case, a table made up of L input pixel values P(x, y, t) is
generated. Here, let us say that each of the L input pixel values P(x, y,
t) is described as P(l) serving as a function of the number l of the
pixel thereof. That is to say, the input pixel value table becomes a
table including L P(l).

[1867]In step S2504, the integral component calculation unit 2524
calculates integral components based on the conditions (a tap range and
the number of dimensions) set by the conditions setting unit 2521, and
the data continuity information (angle or movement) supplied from the
data continuity detecting unit 101, and generates an integral component
table.

[1868]However, in this case, as described above, the input pixel values
are not P(x, y, t) but P(l), and are acquired as the value of a pixel
number l, so the integral component calculation unit 2524 results in
calculating the integral components Si(xs, xe, ys, ye, ts, te) in the
above Expression (161) as a function of l, such as the integral
components Si(l). That is to say, the integral component table becomes a
table including L×(n+1) integral components Si(l).

[1869]Note that the sequence of the processing in step S2503 and the
processing in step S2504 is not restricted to the example in FIG. 235, so
the processing in step S2504 may be executed first, or the processing in
step S2503 and the processing in step S2504 may be executed
simultaneously.

[1870]Next, in step S2505, the normal equation generating unit 2525
generates a normal equation table based on the input pixel value table
generated by the input pixel value acquiring unit 2523 at the processing
in step S2503, and the integral component table generated by the integral
component calculation unit 2524 at the processing in step S2504.

[1871]Specifically, in this case, the features wi of the following
Expression (162) corresponding to the above Expression (161) are
calculated using the least squares method. A normal equation
corresponding to this is represented as the following Expression (163).

P(l) = Σ_{i=0}^{n} wi×Si(l) + e (162)

[ Σ_{l=0}^{L-1} S0(l)S0(l)   . . .   Σ_{l=0}^{L-1} S0(l)Sn(l) ] [ w0 ]   [ Σ_{l=0}^{L-1} S0(l)P(l) ]
[ . . .                                                       ] [ .. ] = [ . . .                   ]
[ Σ_{l=0}^{L-1} Sn(l)S0(l)   . . .   Σ_{l=0}^{L-1} Sn(l)Sn(l) ] [ wn ]   [ Σ_{l=0}^{L-1} Sn(l)P(l) ] (163)

[1872]If we define each matrix of the normal equation shown in Expression
(163) as the following Expressions (164) through (166), the normal
equation is represented as the following Expression (167).

SMAT = [ Σ_{l=0}^{L-1} S0(l)S0(l)   . . .   Σ_{l=0}^{L-1} S0(l)Sn(l) ]
       [ . . .                                                       ]
       [ Σ_{l=0}^{L-1} Sn(l)S0(l)   . . .   Σ_{l=0}^{L-1} Sn(l)Sn(l) ] (164)

WMAT = [ w0  w1  . . .  wn ]^T (165)

PMAT = [ Σ_{l=0}^{L-1} S0(l)P(l)  . . .  Σ_{l=0}^{L-1} Sn(l)P(l) ]^T (166)

SMAT WMAT = PMAT (167)

[1873]As shown in Expression (165), the respective components of the
matrix WMAT are the features wi to be obtained. Accordingly, in
Expression (167), if the matrix SMAT of the left side and the matrix
PMAT of the right side are determined, the matrix WMAT (i.e., the
features wi) may be calculated with the matrix solution.

[1874]Specifically, as shown in Expression (164), the respective
components of the matrix SMAT may be calculated as long as the above
integral components Si(l) are known. The integral components
Si(l) are included in the integral component table supplied from the
integral component calculation unit 2524, so the normal equation
generating unit 2525 can calculate each component of the matrix SMAT
using the integral component table.

[1875]Also, as shown in Expression (166), the respective components of the
matrix PMAT may be calculated as long as the integral components
Si(l) and the input pixel values P(l) are known. The integral
components Si(l) are the same as those included in the respective
components of the matrix SMAT, and the input pixel values P(l) are
included in the input pixel value table supplied from the input pixel
value acquiring unit 2523, so the normal equation generating unit 2525
can calculate each component of the matrix PMAT using the integral
component table and the input pixel value table.

[1876]Thus, the normal equation generating unit 2525 calculates each
component of the matrix SMAT and matrix PMAT, and outputs the
calculated results (each component of the matrix SMAT and matrix
PMAT) to the approximation function generating unit 2526 as a normal
equation table.

[1877]Upon the normal equation table being output from the normal equation
generating unit 2525, in step S2506, the approximation function
generating unit 2526 calculates the features wi (i.e., the
coefficients wi of the approximation function f(x, y, t)) serving as
the respective components of the matrix WMAT in the above Expression
(167) based on the normal equation table.

[1878]Specifically, the normal equation in the above Expression (167) can
be transformed as the following Expression (168).

WMAT=SMAT-1PMAT (168)

[1879]In Expression (168), the respective components of the matrix
WMAT in the left side are the features wi to be obtained. The
respective components regarding the matrix SMAT and matrix PMAT
are included in the normal equation table supplied from the normal
equation generating unit 2525. Accordingly, the approximation function
generating unit 2526 calculates the matrix WMAT by calculating the
matrix in the right side of Expression (168) using the normal equation
table, and outputs the calculated results (features wi) to the image
generating unit 103.

[1880]In step S2507, the approximation function generating unit 2526
determines regarding whether or not the processing of all the pixels has
been completed.

[1881]In step S2507, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S2502, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in step S2502 through S2507 is repeatedly performed.

[1882]In the event that the processing of all the pixels has been
completed (in step S2507, in the event that determination is made that
the processing of all the pixels has been completed), the estimating
processing of the actual world 1 ends.

[1883]As described above, the three-dimensional function approximating
method takes three-dimensional integration effects in the temporal and
spatial directions into consideration instead of one-dimensional or
two-dimensional integration effects, and accordingly, can estimate the
light signals in the actual world 1 more accurately than the
one-dimensional polynomial approximating method and two-dimensional
polynomial approximating method.

[1884]In other words, with the three-dimensional function approximating
method, for example, the actual world estimating unit 102 in FIG. 205
(FIG. 3) (for example, FIG. 234 for configuration) estimates the light
signal function F by approximating the light signal function F
representing the light signal in the actual world (specifically, for
example, the light signal function F(x, y, t) in FIG. 232 and FIG. 233)
with a predetermined approximation function f (specifically, for example,
the approximation function f(x, y, t) in the right side of Expression
(152)), on condition that, with the multiple detecting elements of the
sensor (for example, the detecting elements 2-1 of the sensor 2 in FIG.
231) each having time-space integration effects projecting the light
signals in the actual world 1 into an input image made up of multiple
pixels having pixel values projected by the detecting elements, which
drop part of the continuity (for example, continuity represented with the
gradient GF in FIG. 232, or represented with the gradient VF in FIG. 233)
of the light signal in the actual world 1, the pixel value (for example,
the input pixel value P(x, y, t) in the left side of Expression (152)) of
a pixel corresponding to a position at least in the one-dimensional
direction (for example, of the three-dimensional directions of the
spatial direction X, the spatial direction Y, and the temporal direction
t in FIG. 233) of the time-space directions is a pixel value (for
example, a value obtained by the approximation function f(x, y, t) being
integrated in the three dimensions of the X direction, Y direction, and t
direction, such as shown in the right side of the above Expression (152))
acquired by at least integration effects in the one-dimensional
direction.

[1885]Further, for example, in the event that the data continuity
detecting unit 101 in FIG. 205 (FIG. 3) detects continuity of input image
data, the actual world estimating unit 102 estimates the light signal
function F by approximating the light signal function F with the
approximation function f on condition that the pixel value of a pixel
corresponding to at least a position in the one-dimensional direction of
the time-space directions of the image data corresponding to continuity
of data detected by the data continuity detecting unit 101 is the pixel
value acquired by at least integration effects in the one-dimensional
direction.

[1886]Speaking in detail, for example, the actual world estimating unit
102 estimates the light signal function by approximating the light signal
function F with the approximation function f on condition that the pixel
value of a pixel corresponding to a distance (for example, shift amounts
Cx(y) in the above Expression (151)) along at least in the
one-dimensional direction from a line corresponding to continuity of data
detected by the data continuity detecting unit 101 is the pixel value (for
example, a value obtained by the approximation function f(x, y, t) being
integrated in three dimensions of the X direction, Y direction, and t
direction, such as shown in the right side of Expression (152) with an
integral range such as shown in the above Expression (153)) acquired by
at least integration effects in the one-dimensional direction.

[1887]Accordingly, the three-dimensional function approximating method can
estimate the light signals in the actual world 1 more accurately.

[1888]Next, description will be made regarding an embodiment of the image
generating unit 103 (FIG. 3) with reference to FIG. 236 through FIG. 257.

[1889]FIG. 236 is a diagram for describing the principle of the present
embodiment.

[1890]As shown in FIG. 236, the present embodiment is based on the
condition that the actual world estimating unit 102 employs the function
approximating method. That is to say, on the assumption that the signals
in the actual world 1 (distribution of light intensity) serving as an
image cast into the sensor 2 are represented with a predetermined
function F, the actual world estimating unit 102 estimates the function F
by approximating the function F with a predetermined function f, using
the input image (pixel values P) output from the sensor 2 and the data
continuity information output from the data continuity detecting unit
101.

[1891]Note that hereafter, with description of the present embodiment, the
signals in the actual world 1 serving as an image are particularly
referred to as light signals, and the function F is particularly referred
to as a light signal function F. Also, the function f is particularly
referred to as an approximation function f.

[1892]With the present embodiment, based on such an assumption, the image
generating unit 103 integrates the approximation function f over a
predetermined time-space region using the data continuity information
output from the data continuity detecting unit 101 and the actual world
estimating information (in the example in FIG. 236, the features of the
approximation function f) output from the actual world estimating unit
102, and outputs the integral value as an output pixel value M (output
image). Note that with the present embodiment, an input pixel value is
described as P, and an output pixel value is described as M, in order to
distinguish an input image pixel from an output image pixel.

[1893]In other words, upon the light signal function F being integrated
once, the light signal function F becomes an input pixel value P; the
light signal function F is estimated from the input pixel value P
(approximated with the approximation function f), and the estimated light
signal function F (i.e., the approximation function f) is integrated
again, whereby an output pixel value M is generated. Accordingly,
hereafter, integration of the approximation function f executed by the
image generating unit 103 is referred to as reintegration. Also, the
present embodiment is referred to as a reintegration method.

[1894]Note that, as described later, with the reintegration method, the
integral range of the approximation function f in the event that the
output pixel value M is generated is not restricted to the integral range
of the light signal function F in the event that the input pixel value P
is generated (i.e., the vertical width and horizontal width of the
detecting element of the sensor 2 for the spatial directions, and the
exposure time of the sensor 2 for the temporal direction); an arbitrary
integral range may be employed.

[1895]For example, in the event that the output pixel value M is
generated, varying the spatial portion of the integral range of the
approximation function f enables the pixel pitch of the output image to
be varied according to that integral range. That is to say, creation of
spatial resolution becomes available.

[1896]In the same way, for example, in the event that the output pixel
value M is generated, varying the temporal portion of the integral range
of the approximation function f enables creation of temporal resolution.
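
As a purely illustrative reading of the two paragraphs above, the following is a minimal Python sketch in which a known one-dimensional polynomial approximation function is integrated over sub-ranges of one pixel; the coefficient values, and the gain of 2 taken as the reciprocal of the width ratio, are assumptions made for this example and are not values from the embodiment.

```python
# Minimal sketch: creating spatial resolution by varying the integral range of
# a known one-dimensional polynomial approximation f(x) = sum_i w[i] * x**i.
# All numeric values are illustrative only.

def integrate_poly(w, xs, xe):
    """Definite integral of f(x) = sum_i w[i] * x**i over [xs, xe]."""
    return sum(wi * (xe ** (i + 1) - xs ** (i + 1)) / (i + 1)
               for i, wi in enumerate(w))

w = [0.3, 0.8, -0.5]               # illustrative features w0, w1, w2

# One input pixel: the integral over its full width [-0.5, 0.5].
P = integrate_poly(w, -0.5, 0.5)

# Two output pixels at double density in X: integrate over each half and apply
# a gain of 2 (reciprocal of the width ratio) so the levels remain comparable.
M_left = 2 * integrate_poly(w, -0.5, 0.0)
M_right = 2 * integrate_poly(w, 0.0, 0.5)

print(P, M_left, M_right)
```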

[1897]Hereafter, description will be made individually regarding three
specific methods of such a reintegration method with reference to the
drawings.

[1898]That is to say, three specific methods are reintegration methods
corresponding to three specific methods of the function approximating
method (the above three specific examples of the embodiment of the actual
world estimating unit 102) respectively.

[1899]Specifically, the first method is a reintegration method
corresponding to the above one-dimensional polynomial approximating
method (one method of the function approximating method). Accordingly,
with the first method, one-dimensional reintegration is performed, so
hereafter, such a reintegration method is referred to as a
one-dimensional reintegration method.

[1900]The second method is a reintegration method corresponding to the
above two-dimensional polynomial approximating method (one method of the
function approximating method). Accordingly, with the second method,
two-dimensional reintegration is performed, so hereafter, such a
reintegration method is referred to as a two-dimensional reintegration
method.

[1901]The third method is a reintegration method corresponding to the
above three-dimensional function approximating method (one method of the
function approximating method). Accordingly, with the third method,
three-dimensional reintegration is performed, so hereafter, such a
reintegration method is referred to as a three-dimensional reintegration
method.

[1902]Hereafter, description will be made regarding the details of each
of the one-dimensional reintegration method, the two-dimensional
reintegration method, and the three-dimensional reintegration method, in
this order.

[1903]First, the one-dimensional reintegration method will be described.

[1904]With the one-dimensional reintegration method, it is an assumption
that the approximation function f(x) is generated using the
one-dimensional polynomial approximating method.

[1905]That is to say, it is an assumption that a one-dimensional waveform
(with the description of the reintegration method, a waveform projected
in the X direction is referred to as an X cross-sectional waveform F(x)),
obtained by projecting the light signal function F(x, y, t), of which the
variables are the positions x and y in the spatial directions and the
point-in-time t, in a predetermined direction (for example, the X
direction) of the X direction and Y direction serving as the spatial
directions and the t direction serving as the temporal direction, is
approximated with the approximation function f(x) serving as an
n-dimensional (n is an arbitrary integer) polynomial.

[1906]In this case, with the one-dimensional reintegration method, the
output pixel value M is calculated as in the following Expression (169),
wherein xs and xe represent the integration start position and
integration end position, and Ge represents a predetermined gain.

M=Ge×∫_{xs}^{xe}f(x)dx (169)

[1908]Specifically, for example, let us say that the actual world
estimating unit 102 has already generated the approximation function f(x)
(the approximation function f(x) of the X cross-sectional waveform F(x))
such as shown in FIG. 237 with a pixel 3101 (pixel 3101 corresponding to
a predetermined detecting element of the sensor 2) such as shown in FIG.
237 as a pixel of interest.

[1909]Note that with the example in FIG. 237, the pixel value (input pixel
value) of the pixel 3101 is taken as P, and the shape of the pixel 3101
is taken as a square of which one side is 1 in length. Also, of the
spatial directions, the direction in parallel with one side of the pixel
3101 (horizontal direction in the drawing) is taken as the X direction,
and the direction orthogonal to the X direction (vertical direction in
the drawing) is taken as the Y direction.

[1910]Also, on the lower side in FIG. 237, the coordinates system
(hereafter, referred to as a pixel-of-interest coordinates system) in the
spatial directions (X direction and Y direction) of which the origin is
taken as the center of the pixel 3101, and the pixel 3101 in the
coordinates system are shown.

[1911]Further, on the upper side in FIG. 237, a graph representing
the approximation function f(x) at y=0 (y is a coordinate value in the Y
direction in the pixel-of-interest coordinates system shown on the lower
side in the drawing) is shown. In this graph, the axis in parallel with
the horizontal direction in the drawing is the same axis as the x axis in
the X direction in the pixel-of-interest coordinates system shown on the
lower side in the drawing (the origin is also the same), and also the
axis in parallel with the vertical direction in the drawing is taken as
an axis representing pixel values.

[1912]In this case, the relation of the following Expression (170) holds
between the approximation function f(x) and the pixel value P of the
pixel 3101.

P=∫_{-0.5}^{+0.5}f(x)dx (170)

[1913]Also, as shown in FIG. 237, let us say that the pixel 3101 has
continuity of data in the spatial direction represented with the gradient
Gf. Further, let us say that the data continuity detecting unit 101
(FIG. 236) has already output the angle θ such as shown in FIG. 237
as data continuity information corresponding to continuity of data
represented with the gradient Gf.

[1914]In this case, for example, with the one-dimensional reintegration
method, as shown in FIG. 238, four pixels 3111 through 3114 can be newly
created in a range of -0.5 through 0.5 in the X direction, and also in a
range of -0.5 through 0.5 in the Y direction (in the range where the
pixel 3101 in FIG. 237 is positioned).

[1915]Note that on the lower side in FIG. 238, the same pixel-of-interest
coordinates system as that in FIG. 237, and the pixels 3111 through 3114
in the pixel-of-interest coordinates system thereof are shown. Also, on
the upper side in FIG. 238, the same graph (graph representing the
approximation function f(x) at y=0) as that in FIG. 237 is shown.

[1916]Specifically, as shown in FIG. 238, with the one-dimensional
reintegration method, calculation of the pixel value M(1) of the pixel
3111 using the following Expression (171), calculation of the pixel value
M(2) of the pixel 3112 using the following Expression (172), calculation
of the pixel value M(3) of the pixel 3113 using the following Expression
(173), and calculation of the pixel value M(4) of the pixel 3114 using
the following Expression (174) are available respectively.

M(1)=2×∫_{xs1}^{xe1}f(x)dx (171)

M(2)=2×∫_{xs2}^{xe2}f(x)dx (172)

M(3)=2×∫_{xs3}^{xe3}f(x)dx (173)

M(4)=2×∫_{xs4}^{xe4}f(x)dx (174)

[1917]Note that xs1 in Expression (171), xs2 in Expression
(172), xs3 in Expression (173), and xs4 in Expression (174)
each represent the integration start position of the corresponding
expression. Also, xe1 in Expression (171), xe2 in Expression
(172), xe3 in Expression (173), and xe4 in Expression (174)
each represent the integration end position of the corresponding
expression.

[1918]The integral range in the right side of each of Expression (171)
through Expression (174) becomes the pixel width (length in the X
direction) of each of the pixel 3111 through pixel 3114. That is to say,
each of xe1-xs1, xe2-xs2, xe3-xs3, and
xe4-xs4 becomes 0.5.

[1919]However, in this case, it can be conceived that a one-dimensional
waveform having the same form as that in the approximation function f(x)
at y=0 continues not in the Y direction but in the direction of data
continuity represented with the gradient Gf(i.e., angle θ
direction) (in fact, a waveform having the same form as the X
cross-sectional waveform F(x) at y=0 continues in the direction of
continuity). That is to say, in the case in which a pixel value f(0) in
the origin (0, 0) in the pixel-of-interest coordinates system in FIG. 238
(center of the pixel 3101 in FIG. 237) is taken as a pixel value f1, the
direction where the pixel value f1 continues is not the Y direction but
the direction of data continuity represented with the gradient Gf
(angle θ direction).

[1920]In other words, in the case of conceiving the waveform of the
approximation function f(x) in a predetermined position y in the Y
direction (however, y is a numeric value other than zero), the position
corresponding to the pixel value f1 is not a position (0, y) but a
position (Cx(y), y) obtained by moving in the X direction from the
position (0, y) by a predetermined amount (here, let us say that such an
amount is also referred to as a shift amount. Also, a shift amount is an
amount depending on the position y in the Y direction, so let us say that
this shift amount is described as Cx(y)).

[1921]Accordingly, as the integral range of the right side of each of the
above Expression (171) through Expression (174), the integral range needs
to be set in light of the position y in the Y direction where the center
of the pixel value M(l) to be obtained (however, l is any integer value
of 1 through 4) exists, i.e., the shift amount Cx(y).

[1922]Specifically, for example, the position y in the Y direction where
the centers of the pixel 3111 and pixel 3112 exist is not y=0 but y=0.25.

[1923]Accordingly, the waveform of the approximation function f(x) at
y=0.25 is equivalent to a waveform obtained by moving the waveform of the
approximation function f(x) at y=0 by the shift amount Cx(0.25) in
the X direction.

[1924]In other words, in the above Expression (171), if we say that the
pixel value M(1) as to the pixel 3111 is obtained by integrating the
approximation function f(x) at y=0 with a predetermined integral range
(from the start position xs1 to the end position xe1), the
integral range thereof becomes not a range from the start position
xs1=-0.5 to the end position xe1=0 (a range itself where the
pixel 3111 occupies in the X direction) but the range shown in FIG. 238,
i.e., from the start position xs1=-0.5+Cx(0.25) to the end
position xe1=0+Cx(0.25) (a range where the pixel 3111 occupies
in the X direction in the event that the pixel 3111 is tentatively moved
by the shift amount Cx(0.25)).

[1925]Similarly, in the above Expression (172), if we say that the pixel
value M(2) as to the pixel 3112 is obtained by integrating the
approximation function f(x) at y=0 with a predetermined integral range
(from the start position xs2 to the end position xe2), the
integral range thereof becomes not a range from the start position
xs2=0 to the end position xe2=0.5 (a range itself where the
pixel 3112 occupies in the X direction) but the range shown in FIG. 238,
i.e., from the start position xs2=0+Cx(0.25) to the end
position xe1=0.5+Cx(0.25) (a range where the pixel 3112
occupies in the X direction in the event that the pixel 3112 is
tentatively moved by the shift amount Cx(0.25)).

[1926]Also, for example, the position y in the Y direction where the
centers of the pixel 3113 and pixel 3114 exist is not y=0 but y=-0.25.

[1927]Accordingly, the waveform of the approximation function f(x) at
y=-0.25 is equivalent to a waveform obtained by moving the waveform of
the approximation function f(x) at y=0 by the shift amount Cx(-0.25)
in the X direction.

[1928]In other words, in the above Expression (173), if we say that the
pixel value M(3) as to the pixel 3113 is obtained by integrating the
approximation function f(x) at y=0 with a predetermined integral range
(from the start position xs3 to the end position xe3), the
integral range thereof becomes not a range from the start position
xs3=-0.5 to the end position xe3=0 (a range itself where the
pixel 3113 occupies in the X direction) but the range shown in FIG. 238,
i.e., from the start position xs3=-0.5+Cx(-0.25) to the end
position xe3=0+Cx(-0.25) (a range where the pixel 3113 occupies
in the X direction in the event that the pixel 3113 is tentatively moved
by the shift amount Cx(-0.25)).

[1929]Similarly, in the above Expression (174), if we say that the pixel
value M(4) as to the pixel 3114 is obtained by integrating the
approximation function f(x) at y=0 with a predetermined integral range
(from the start position xs4 to the end position xe4), the
integral range thereof becomes not a range from the start position
xs4=0 to the end position xe4=0.5 (a range itself where the
pixel 3114 occupies in the X direction) but the range shown in FIG. 238,
i.e., from the start position xs4=0+Cx(-0.25) to the end
position xe1=0.5+Cx(-0.25) (a range where the pixel 3114
occupies in the X direction in the event that the pixel 3114 is
tentatively moved by the shift amount Cx(-0.25)).
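
The shifted integral ranges of paragraphs [1921] through [1929] can be summarized in a short sketch. The helper below is hypothetical; it assumes the shift amount is Cx(y) = y/Gf with Gf = tanθ, consistent with Expressions (178) and (179) described later, and simply returns the start and end positions for each of the four quadruple-density pixels.

```python
import math

def shift_amount(y, theta_deg):
    """Cx(y): shift in the X direction at height y, assuming Cx(y) = y / tan(theta)."""
    Gf = math.tan(math.radians(theta_deg))    # gradient of the continuity direction
    return y / Gf

def shifted_ranges(theta_deg):
    """(xs, xe) for the four quadruple-density pixels of FIG. 238 (modes 1 to 4)."""
    c_up = shift_amount(0.25, theta_deg)      # centers of pixels 3111 and 3112
    c_down = shift_amount(-0.25, theta_deg)   # centers of pixels 3113 and 3114
    return {
        1: (-0.5 + c_up, 0.0 + c_up),         # pixel 3111
        2: (0.0 + c_up, 0.5 + c_up),          # pixel 3112
        3: (-0.5 + c_down, 0.0 + c_down),     # pixel 3113
        4: (0.0 + c_down, 0.5 + c_down),      # pixel 3114
    }

print(shifted_ranges(60.0))                   # illustrative angle of 60 degrees
```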

[1930]Accordingly, the image generating unit 103 (FIG. 236) calculates
the above Expression (171) through Expression (174) by substituting the
corresponding integral range of the above integral ranges into each of
these expressions, and outputs the calculated results thereof as the
output pixel values M(1) through M(4).

[1931]Thus, by employing the one-dimensional reintegration method, the
image generating unit 103 can create four pixels having higher spatial
resolution than that of the input pixel 3101, i.e., the pixel 3111
through pixel 3114 (FIG. 238), as pixels at the pixel 3101 of the input
image (FIG. 237) from the sensor 2 (FIG. 236). Further, though not shown
in the drawing, as described above, the image generating unit 103 can
create a pixel having an arbitrary powered spatial resolution as to the
input pixel 3101 without deterioration by appropriately changing the
integral range, in addition to the pixel 3111 through pixel 3114.

[1934]The conditions setting unit 3121 sets the number of dimensions n of
the approximation function f(x) based on the actual world estimating
information (the features of the approximation function f(x) in the
example in FIG. 239) supplied from the actual world estimating unit 102.

[1935]The conditions setting unit 3121 also sets an integral range in the
case of reintegrating the approximation function f(x) (in the case of
calculating an output pixel value). Note that an integral range set by
the conditions setting unit 3121 does not need to be the width of a
pixel. For example, the approximation function f(x) is integrated in the
spatial direction (X direction), and accordingly, a specific integral
range can be determined as long as the relative size (power of spatial
resolution) of an output pixel (pixel to be calculated by the image
generating unit 103) as to the spatial size of each pixel of an input
image from the sensor 2 (FIG. 236) is known. Accordingly, the conditions
setting unit 3121 can set, for example, a spatial resolution power as an
integral range.

[1936]The features storage unit 3122 temporarily stores the features of the
approximation function f(x) sequentially supplied from the actual world
estimating unit 102. Subsequently, upon the features storage unit 3122
storing all of the features of the approximation function f(x), the
features storage unit 3122 generates a features table including all of
the features of the approximation function f(x), and supplies this to the
output pixel value calculation unit 3124.

[1937]Incidentally, as described above, the image generating unit 103
calculates the output pixel value M using the above Expression (169), but
the approximation function f(x) included in the right side of the above
Expression (169) is represented as the following Expression (175)
specifically.

f(x)=Σ_{i=0}^{n}wi×x^i (175)

[1938]Note that in Expression (175), wi represents the features of
the approximation function f(x) supplied from the actual world estimating
unit 102.

[1939]Accordingly, upon the approximation function f(x) of Expression
(175) being substituted for the approximation function f(x) of the right
side of the above Expression (169) so as to expand (calculate) the right
side of Expression (169), the output pixel value M is represented as the
following Expression (176).

M=Σ_{i=0}^{n}wi×Ki(xs,xe) (176)

[1940]In Expression (176), Ki(xs, xe) represent the
integral components of the i-dimensional term. That is to say, the
integral components Ki(xs, xe) are such as shown in the
following Expression (177).

Ki(xs,xe)=Ge×(xe^(i+1)-xs^(i+1))/(i+1) (177)

[1942]Specifically, as shown in Expression (177), the components
Ki(xs, xe) can be calculated as long as the start position
xs and end position xe of an integral range, gain Ge, and
i of the i-dimensional term are known.

[1943]Of these, the gain Ge is determined with the spatial resolution
power (integral range) set by the conditions setting unit 3121.

[1944]The range of i is determined with the number of dimensions n set by
the conditions setting unit 3121.

[1945]Also, each of the start position xs and end position xe of
an integral range is determined with the center pixel position (x, y) and
pixel width of an output pixel to be generated from now, and the shift
amount Cx(y) representing the direction of data continuity. Note
that (x, y) represents the relative position from the center position of
a pixel of interest when the actual world estimating unit 102 generates
the approximation function f(x).

[1946]Further, each of the center pixel position (x, y) and pixel width of
an output pixel to be generated from now is determined with the spatial
resolution power (integral range) set by the conditions setting unit
3121.

[1947]Also, with the shift amount Cx(y), and the angle θ
supplied from the data continuity detecting unit 101, the relation such
as the following Expression (178) and Expression (179) holds, and
accordingly, the shift amount Cx(y) is determined with the angle
θ.

Gf=tanθ=dy/dx (178)

Cx(y)=y/Gf (179)

[1948]Note that in Expression (178), Gf represents a gradient
representing the direction of data continuity, θ represents an
angle (angle generated between the X direction serving as one direction
of the spatial directions and the direction of data continuity
represented with a gradient Gf) of one of the data continuity
information output from the data continuity detecting unit 101 (FIG.
236). Also, dx represents the amount of fine movement in the X direction,
and dy represents the amount of fine movement in the Y direction (spatial
direction perpendicular to the X direction) as to the dx.

[1949]Accordingly, the integral component calculation unit 3123 calculates
the integral components Ki(xs, xe) based on the number of
dimensions and spatial resolution power (integral range) set by the
conditions setting unit 3121, and the angle θ of the data
continuity information output from the data continuity detecting unit
101, and supplies the calculated results to the output pixel value
calculation unit 3124 as an integral component table.
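
As a rough illustration of the processing attributed to the integral component calculation unit 3123, the sketch below builds an integral component table for the quadruple-density example; it assumes the closed form Ki(xs, xe) = Ge×(xe^(i+1)-xs^(i+1))/(i+1) obtained by integrating each monomial term, the shift amount Cx(y) = y/tanθ of Expressions (178) and (179), and an illustrative gain Ge = 2.

```python
import math

def integral_component(i, xs, xe, Ge):
    """Ki(xs, xe): Ge times the definite integral of x**i over [xs, xe]."""
    return Ge * (xe ** (i + 1) - xs ** (i + 1)) / (i + 1)

def component_table(theta_deg, n=5, Ge=2.0):
    """Integral component table Ki(l) for the four quadruple-density pixels."""
    Gf = math.tan(math.radians(theta_deg))
    Cx = lambda y: y / Gf                      # shift amount, assumed Cx(y) = y / Gf
    ranges = {
        1: (-0.5 + Cx(0.25), 0.0 + Cx(0.25)),
        2: (0.0 + Cx(0.25), 0.5 + Cx(0.25)),
        3: (-0.5 + Cx(-0.25), 0.0 + Cx(-0.25)),
        4: (0.0 + Cx(-0.25), 0.5 + Cx(-0.25)),
    }
    return {l: [integral_component(i, xs, xe, Ge) for i in range(n + 1)]
            for l, (xs, xe) in ranges.items()}

print(component_table(60.0))                   # 4 modes x 6 components = 24 values
```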

[1950]The output pixel value calculation unit 3124 calculates the right
side of the above Expression (176) using the features table supplied from
the features storage unit 3122 and the integral component table supplied
from the integral component calculation unit 3123, and outputs the
calculation result as an output pixel value M.

[1951]Next, description will be made regarding image generating processing
(processing in step S103 in FIG. 40) by the image generating unit 103
(FIG. 239) employing the one-dimensional reintegration method with
reference to the flowchart in FIG. 240.

[1952]For example, now, let us say that the actual world estimating unit
102 has already generated the approximation function f(x) such as shown
in FIG. 237 while taking the pixel 3101 such as shown in FIG. 237
described above as a pixel of interest at the processing in step S102 in
FIG. 40 described above.

[1953]Also, let us say that the data continuity detecting unit 101 has
already output the angle θ such as shown in FIG. 237 as data
continuity information at the processing in step S101 in FIG. 40
described above.

[1954]In this case, the conditions setting unit 3121 sets conditions (the
number of dimensions and an integral range) at step S3101 in FIG. 240.

[1955]For example, now, let us say that 5 has been set as the number of
dimensions, and also spatial quadruple density (a spatial resolution
power causing the pitch width of a pixel to become half in each of the
upper/lower/left/right directions) has been set as the integral range.

[1956]That is to say, in this case, it has consequently been set that the
four pixels 3111 through 3114 are newly created in a range of -0.5
through 0.5 in the X direction, and also a range of -0.5 through 0.5 in
the Y direction (in the range of the pixel 3101 in FIG. 237), such as
shown in FIG. 238.

[1957]In step S3102, the features storage unit 3122 acquires the features
of the approximation function f(x) supplied from the actual world
estimating unit 102, and generates a features table. In this case,
coefficients w0 through w5 of the approximation function f(x)
serving as a five-dimensional polynomial are supplied from the actual
world estimating unit 102, and accordingly, (w0, w1, w2,
w3, w4, w5) is generated as a features table.

[1958]In step S3103, the integral component calculation unit 3123
calculates integral components based on the conditions (the number of
dimensions and integral range) set by the conditions setting unit 3121,
and the data continuity information (angle θ) supplied from the
data continuity detecting unit 101, and generates an integral component
table.

[1959]Specifically, for example, if we say that the respective pixels 3111
through 3114, which are to be generated from now, are appended with
numbers (hereafter, such a number is referred to as a mode number) 1
through 4, the integral component calculation unit 3123 calculates the
integral components Ki(xs, xe) of the above Expression
(177) as a function of l (however, l represents a mode number) such as
integral components Ki(l) shown in the left side of the following
Expression (180).

Ki(l)=Ki(xs,xe) (180)

[1960]Specifically, in this case, the integral components Ki(l) shown
in the following Expression (181) are calculated.

Ki(1)=Ki(-0.5-Cx(-0.25),0-Cx(-0.25))

Ki(2)=Ki(0-Cx(-0.25),0.5-Cx(-0.25))

Ki(3)=Ki(-0.5-Cx(0.25),0-Cx(0.25))

Ki(4)=Ki(0-Cx(0.25),0.5-Cx(0.25)) (181)

[1961]Note that in Expression (181), the left side represents the integral
components Ki(l), and the right side represents the integral
components Ki(xs, xe). That is to say, in this case, l is
any one of 1 through 4, and also i is any one of 0 through 5, and
accordingly, 24 Ki(l) in total of 6 Ki(1), 6 Ki(2), 6
Ki(3), and 6 Ki(4) are calculated.

[1962]More specifically, first, the integral component calculation unit
3123 calculates each of the shift amounts Cx(-0.25) and
Cx(0.25) from the above Expression (178) and Expression (179) using
the angle θ supplied from the data continuity detecting unit 101.

[1963]Next, the integral component calculation unit 3123 calculates the
integral components Ki(xs, xe) of each right side of the
four expressions in Expression (181) regarding i=0 through 5 using the
calculated shift amounts Cx(-0.25) and Cx(0.25). Note that with
this calculation of the integral components Ki(xs, xe),
the above Expression (177) is employed.

[1965]Note that the sequence of the processing in step S3102 and the
processing in step S3103 is not restricted to the example in FIG. 240,
the processing in step S3103 may be executed first, or the processing in
step S3102 and the processing in step S3103 may be executed
simultaneously.

[1966]Next, in step S3104, the output pixel value calculation unit 3124
calculates the output pixel values M(1) through M (4) respectively based
on the features table generated by the features storage unit 3122 at the
processing in step S3102, and the integral component table generated by
the integral component calculation unit 3123 at the processing in step
S3103.

[1967]Specifically, in this case, the output pixel value calculation unit
3124 calculates each of the pixel value M(1) of the pixel 3111 (pixel of
mode number 1), the pixel value M (2) of the pixel 3112 (pixel of mode
number 2), the pixel value M(3) of the pixel 3113 (pixel of mode number
3), and the pixel value M(4) of the pixel 3114 (pixel of mode number 4)
by calculating the right sides of the following Expression (182) through
Expression (185) corresponding to the above Expression (176).

M(1)=Σ_{i=0}^{n}wi×Ki(1) (182)

M(2)=Σ_{i=0}^{n}wi×Ki(2) (183)

M(3)=Σ_{i=0}^{n}wi×Ki(3) (184)

M(4)=Σ_{i=0}^{n}wi×Ki(4) (185)
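
Expressions (182) through (185) share a single form, so step S3104 reduces to a weighted sum of the integral components for each mode; the following sketch uses placeholder tables purely for illustration.

```python
def output_pixel_value(features, components):
    """M(l) = sum_i w[i] * Ki(l): one output pixel value from the two tables."""
    return sum(w * k for w, k in zip(features, components))

features = [0.3, 0.8, -0.5, 0.1, 0.0, 0.02]                  # placeholder w0 .. w5
component_table = {1: [1.0, 0.2, 0.08, 0.01, 0.004, 0.001]}  # placeholder Ki(1)

M1 = output_pixel_value(features, component_table[1])
print(M1)
```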

[1968]In step S3105, the output pixel value calculation unit 3124
determines whether or not the processing of all the pixels has been
completed.

[1969]In step S3105, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S3102, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in step S3102 through S3104 is repeatedly performed.

[1970]In the event that the processing of all the pixels has been
completed (in step S3105, in the event that determination is made that
the processing of all the pixels has been completed), the output pixel
value calculation unit 3124 outputs the image in step S3106. Then, the
image generating processing ends.

[1971]Next, description will be made regarding the differences between the
output image obtained by employing the one-dimensional reintegration
method and the output image obtained by employing another method
(conventional classification adaptive processing) regarding a
predetermined input image with reference to FIG. 241 through FIG. 248.

[1972]FIG. 241 is a diagram illustrating the original image of the input
image, and FIG. 242 illustrates image data corresponding to the original
image in FIG. 241. In FIG. 242, the axis in the vertical direction in the
drawing represents pixel values, and the axis in the lower right
direction in the drawing represents the X direction serving as one
direction of the spatial directions of the image, and the axis in the
upper right direction in the drawing represents the Y direction serving
as the other direction of the spatial directions of the image. Note that
the respective axes in later-described FIG. 244, FIG. 246, and FIG. 248
correspond to the axes in FIG. 242.

[1973]FIG. 243 is a diagram illustrating an example of an input image. The
input image illustrated in FIG. 243 is an image generated by taking the
mean of the pixel values of the pixels belonging to a block made up of
2×2 pixels shown in FIG. 241 as the pixel value of one pixel. That
is to say, the input image is an image obtained by integrating the image
shown in FIG. 241 in the spatial direction, which imitates the
integration property of a sensor. Also, FIG. 244 illustrates image data
corresponding to the input image in FIG. 243.
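
The degradation that produces the input image of FIG. 243 from the original image of FIG. 241 is a 2×2 block mean imitating the spatial integration of a sensor; a minimal sketch of such a step (using numpy and a stand-in array rather than the actual image data) might look as follows.

```python
import numpy as np

def block_mean_2x2(image):
    """Average each non-overlapping 2x2 block, imitating sensor spatial integration."""
    h, w = image.shape
    return image[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

original = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for the original image
print(block_mean_2x2(original))                       # stand-in for the input image
```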

[1974]The original image illustrated in FIG. 241 includes a fine-line
image inclined almost 5° clockwise from the vertical direction.
Similarly, the input image illustrated in FIG. 243 includes a fine-line
image inclined almost 5° clockwise from the vertical direction.

[1975]FIG. 245 is a diagram illustrating an image (hereafter, the image
illustrated in FIG. 245 is referred to as a conventional image) obtained
by subjecting the input image illustrated in FIG. 243 to conventional
classification adaptive processing. Also, FIG. 246 illustrates image data
corresponding to the conventional image.

[1976]Note that the classification adaptive processing is made up of
class classification processing and adaptive processing: data is
classified based on the properties thereof by the class classification
processing, and is subjected to the adaptive processing for each class.
With the adaptive processing, for example, a low-quality or
standard-quality image is subjected to mapping using predetermined tap
coefficients so as to be converted into a high-quality image.

[1977]FIG. 247 is a diagram illustrating an image (hereafter, the image
illustrated in FIG. 247 is referred to as an image according to the
present invention) obtained by applying the one-dimensional reintegration
method to which the present invention is applied, to the input image
illustrated in FIG. 243. Also, FIG. 248 illustrates image data
corresponding to the image according to the present invention.

[1978]Comparing the conventional image in FIG. 245 with the image
according to the present invention in FIG. 247, it can be understood that
the fine-line image in the conventional image differs from that in the
original image in FIG. 241, whereas the fine-line image in the image
according to the present invention is almost the same as that in the
original image in FIG. 241.

[1979]This difference arises because the conventional class
classification adaptive processing is a method for performing processing
with the input image in FIG. 243 itself as the basis (origin), whereas
the one-dimensional reintegration method according to the present
invention is a method for estimating the original image in FIG. 241
(generating the approximation function f(x) corresponding to the original
image) in light of the continuity of the fine line, and performing
processing (performing reintegration so as to calculate pixel values)
with the estimated original image as the basis (origin).

[1980]Thus, with the one-dimensional reintegration method, an output image
(pixel values) is generated by integrating the approximation function
f(x) in an arbitrary range on the basis (origin) of the approximation
function f(x) (the approximation function f(x) of the X cross-sectional
waveform F(x) in the actual world) serving as the one-dimensional
polynomial generated with the one-dimensional polynomial approximating
method.

[1981]Accordingly, with the one-dimensional reintegration method, it
becomes possible to output an image more similar to the original image
(the light signal in the actual world 1 which is to be cast into the
sensor 2) in comparison with other conventional methods.

[1982]In other words, the one-dimensional reintegration method is based on
condition that the data continuity detecting unit 101 in FIG. 236 detects
continuity of data in an input image made up of multiple pixels having a
pixel value on which the light signals in the actual world 1 are
projected by the multiple detecting elements of the sensor 2 each having
spatio-temporal integration effects, and projected by the detecting
elements of which a part of continuity of the light signals in the actual
world 1 drops, and in response to the detected continuity of data, the
actual world estimating unit 102 estimates the light signal function F by
approximating the light signal function F (specifically, X
cross-sectional waveform F(x)) representing the light signals in the
actual world 1 with a predetermined approximation function f(x) on
assumption that the pixel value of a pixel corresponding to a position in
the one-dimensional direction of the time-space directions of the input
image is the pixel value acquired by integration effects in the
one-dimensional direction thereof.

[1983]Speaking in detail, for example, the one-dimensional reintegration
method is based on condition that the X cross-sectional waveform F(x) is
approximated with the approximation function f(x) on assumption that the
pixel value of each pixel corresponding to a distance along in the
one-dimensional direction from a line corresponding to the detected
continuity of data is the pixel value obtained by the integration effects
in the one-dimensional direction thereof.

[1984]With the one-dimensional reintegration method, for example, the
image generating unit 103 in FIG. 236 (FIG. 3) generates a pixel value M
corresponding to a pixel having a desired size by integrating the X
cross-sectional waveform F(x) estimated by the actual world estimating
unit 102, i.e., the approximation function f(x) in desired increments in
the one-dimensional direction based on such an assumption, and outputs
this as an output image.

[1985]Accordingly, with the one-dimensional reintegration method, it
becomes possible to output an image more similar to the original image
(the light signal in the actual world 1 which is to be cast into the
sensor 2) in comparison with other conventional methods.

[1986]Also, with the one-dimensional reintegration method, as described
above, the integral range is arbitrary, and accordingly, it becomes
possible to create resolution (temporal resolution or spatial resolution)
different from the resolution of the input image by varying the integral
range. That is to say, it becomes possible to generate an image having a
resolution of an arbitrary power, not only an integer multiple, as to the
resolution of the input image.

[1987]Further, the one-dimensional reintegration method enables
calculation of an output image (pixel values) with less calculation
processing amount than other reintegration methods.

[1988]Next, description will be made regarding a two-dimensional
reintegration method with reference to FIG. 249 through FIG. 255.

[1989]The two-dimensional reintegration method is based on condition that
the approximation function f(x, y) has been generated with the
two-dimensional polynomial approximating method.

[1990]That is to say, for example, it is an assumption that the waveform
obtained by projecting the image function F(x, y, t), representing the
light signal in the actual world 1 (FIG. 236) having continuity in the
spatial directions represented with the gradient GF, in the spatial
directions (X direction and Y direction), i.e., the waveform F(x, y) on
the X-Y plane, has been approximated with the approximation function
f(x, y) serving as an n-dimensional (n is an arbitrary integer)
polynomial, such as shown in FIG. 249.

[1991]In FIG. 249, the horizontal direction represents the X direction
serving as one direction in the spatial directions, the upper right
direction represents the Y direction serving as the other direction in
the spatial directions, and the vertical direction represents light
levels, respectively in the drawing. GF represents the gradient
representing continuity in the spatial directions.

[1992]Note that with the example in FIG. 249, the direction of continuity
is taken as the spatial directions (X direction and Y direction), so the
projection function of a light signal to be approximated is taken as the
function F(x, y), but as described later, the function F(x, t) or
function F(y, t) may be a target of approximation according to the
direction of continuity.

[1993]In the case of the example in FIG. 249, with the two-dimensional
reintegration method, the output pixel value M is calculated as the
following Expression (186).

M=Ge×∫_{ys}^{ye}∫_{xs}^{xe}f(x, y)dxdy (186)

[1994]Note that in Expression (186), ys represents an integration
start position in the Y direction, and ye represents an integration
end position in the Y direction. Similarly, xs represents an
integration start position in the X direction, and xe represents an
integration end position in the X direction. Also, Ge represents a
predetermined gain.

[1995]In Expression (186), an integral range can be set arbitrarily, and
accordingly, with the two-dimensional reintegration method, it becomes
possible to create pixels having an arbitrary powered spatial resolution
as to the original pixels (the pixels of an input image from the sensor 2
(FIG. 236)) without deterioration by appropriately changing this integral
range.
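
To make the notion of an arbitrary spatial resolution power concrete, the hypothetical helper below enumerates an m×m grid of integral ranges over one input pixel; under such a split, the gain for each range would be m×m. This is only a sketch of how the integral ranges might be enumerated, not part of the embodiment.

```python
def subpixel_ranges(m):
    """Split the unit input pixel [-0.5, 0.5] x [-0.5, 0.5] into an m x m grid
    of integral ranges (xs, xe, ys, ye)."""
    edges = [-0.5 + k / m for k in range(m + 1)]
    return [(edges[i], edges[i + 1], edges[j], edges[j + 1])
            for j in range(m) for i in range(m)]

print(subpixel_ranges(3))    # nine integral ranges: triple density in X and in Y
```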

[1998]The conditions setting unit 3201 sets the number of dimensions n of
the approximation function f(x, y) based on the actual world estimating
information (with the example in FIG. 250, the features of the
approximation function f(x, y)) supplied from the actual world estimating
unit 102.

[1999]The conditions setting unit 3201 also sets an integral range in the
case of reintegrating the approximation function f(x, y) (in the case of
calculating an output pixel value). Note that an integral range set by
the conditions setting unit 3201 does not need to be the vertical width
or the horizontal width of a pixel. For example, the approximation
function f(x, y) is integrated in the spatial directions (X direction and
Y direction), and accordingly, a specific integral range can be
determined as long as the relative size (power of spatial resolution) of
an output pixel (pixel to be generated from now by the image generating
unit 103) as to the spatial size of each pixel of an input image from the
sensor 2 is known. Accordingly, the conditions setting unit 3201 can set,
for example, a spatial resolution power as an integral range.

[2000]The features storage unit 3202 temporarily stores the features of the
approximation function f(x, y) sequentially supplied from the actual
world estimating unit 102. Subsequently, upon the features storage unit
3202 storing all of the features of the approximation function f(x, y),
the features storage unit 3202 generates a features table including all
of the features of the approximation function f(x, y), and supplies this
to the output pixel value calculation unit 3204.

[2001]Now, description will be made regarding the details of the
approximation function f(x, y).

[2002]For example, now, let us say that the light signals (light signals
represented with the waveform F(x, y)) in the actual world 1 (FIG. 236)
having continuity in the spatial directions represented with the gradient
GF shown in FIG. 249 described above have been detected by the
sensor 2 (FIG. 236), and have been output as an input image (pixel
values).

[2003]Further, for example, let us say that the data continuity detecting
unit 101 (FIG. 3) has subjected a region 3221 of an input image made up
of 20 pixels in total (20 squares represented with a dashed line in the
drawing) of 4 pixels in the X direction and also 5 pixels in the Y
direction of this input image to the processing thereof, and has output
an angle θ (angle θ generated between the direction of data
continuity represented with the gradient Gf corresponding to the
gradient GF and the X direction) as one of data continuity
information, as shown in FIG. 251.

[2004]Note that as viewed from the actual world estimating unit 102, the
data continuity detecting unit 101 should simply output the angle θ
at a pixel of interest, and accordingly, the processing region of the
data continuity detecting unit 101 is not restricted to the above region
3221 in the input image.

[2005]Also, with the region 3221 in the input image, the horizontal
direction in the drawing represents the X direction serving as one
direction of the spatial directions, and the vertical direction in the
drawing represents the Y direction serving as the other direction of the
spatial directions.

[2006]Further, in FIG. 251, a pixel, which is the second pixel from the
left, and also the third pixel from the bottom, is taken as a pixel of
interest, and an (x, y) coordinates system is set so as to take the
center of the pixel of interest as the origin (0, 0). A relative distance
(hereafter, referred to as a cross-sectional direction distance) in the X
direction as to a straight line (straight line of the gradient Gf
representing the direction of data continuity) having an angle θ
passing through the origin (0, 0) is taken as x'.

[2007]Further, in FIG. 251, the graph on the right side represents the
approximation function f(x') serving as an n-dimensional (n is an
arbitrary integer) polynomial, which is a function approximating a
one-dimensional waveform (hereafter, referred to as an X cross-sectional
waveform F(x')) obtained by projecting the image function F(x, y, t), of
which the variables are the positions x and y in the spatial directions
and the point-in-time t, in the X direction at an arbitrary position
y in the Y direction. Of the axes in the graph on the right side, the
axis in the horizontal direction in the drawing represents a
cross-sectional direction distance, and the axis in the vertical
direction in the drawing represents pixel values.

[2008]In this case, the approximation function f(x') shown in FIG. 251 is
an n-dimensional polynomial, and so is represented as the following
Expression (187).

f(x')=w0+w1×x'+w2×x'^2+ ... +wn×x'^n=Σ_{i=0}^{n}wi×x'^i (187)

[2009]Also, since the angle θ is determined, the straight line
having angle θ passing through the origin (0, 0) is uniquely
determined, and a position x1 in the X direction of the straight
line at an arbitrary position y in the Y direction is represented as the
following Expression (188). However, in Expression (188), s represents
cot θ.

x1=s×y (188)

[2010]That is to say, as shown in FIG. 251, a point on the straight line
corresponding to continuity of data represented with the gradient Gf
is represented with a coordinate value (x1, y).

[2011]The cross-sectional direction distance x' is represented as the
following Expression (189) using Expression (188).

x'=x-x1=x-s×y (189)

[2012]Accordingly, the approximation function f(x, y) at an arbitrary
position (x, y) within the input image region 3221 is represented as the
following Expression (190) using Expression (187) and Expression (189).

f(x, y)=Σ_{i=0}^{n}wi×(x-s×y)^i (190)

[2013]Note that in Expression (190), wi represents the features of
the approximation function f(x, y).
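
Expression (190) can be evaluated directly once the features wi and the variable s = cotθ are known; the short sketch below does so with placeholder feature values.

```python
import math

def f_xy(x, y, w, theta_deg):
    """Expression (190): f(x, y) = sum_i w[i] * (x - s*y)**i with s = cot(theta)."""
    s = 1.0 / math.tan(math.radians(theta_deg))
    xp = x - s * y                       # cross-sectional direction distance x'
    return sum(wi * xp ** i for i, wi in enumerate(w))

w = [0.3, 0.8, -0.5, 0.1, 0.0, 0.02]     # placeholder features w0 .. w5
print(f_xy(0.1, -0.2, w, 60.0))
```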

[2014]Now, description will return to FIG. 250, wherein the features
wi included in Expression (190) are supplied from the actual world
estimating unit 102, and stored in the features storage unit 3202. Upon
the features storage unit 3202 storing all of the features wi
represented with Expression (190), the features storage unit 3202
generates a features table including all of the features wi, and
supplies this to the output pixel value calculation unit 3204.

[2015]Also, upon the right side of the above Expression (186) being
expanded (calculated) by substituting the approximation function f(x, y)
of Expression (190) for the approximation function f(x, y) in the right
side of Expression (186), the output pixel value M is represented as the
following Expression (191).

M=Σ_{i=0}^{n}wi×Ki(xs,xe,ys,ye) (191)

[2016]In Expression (191), Ki(xs, xe, ys, ye)
represent the integral components of the i-dimensional term. That is to
say, the integral components Ki(xs, xe, ys, ye)
are such as shown in the following Expression (192).

Ki(xs,xe,ys,ye)=Ge×∫_{ys}^{ye}∫_{xs}^{xe}(x-s×y)^i dxdy (192)

[2018]Specifically, as shown in Expression (191) and Expression (192), the
integral components Ki(xs, xe, ys, ye) can be
calculated as long as the start position xs in the X direction and
end position xe in the X direction of an integral range, the start
position ys in the Y direction and end position ye in the Y
direction of an integral range, variable s, gain Ge, and i of the
i-dimensional term are known.
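
Carrying out the double integration of Expression (192) term by term yields a closed form for the integral components; the sketch below implements that form as derived here (it assumes s ≠ 0 and is an illustration consistent with Expressions (186) and (190), not necessarily the exact notation of the embodiment).

```python
def integral_component_2d(i, xs, xe, ys, ye, s, Ge):
    """Ge times the double integral of (x - s*y)**i over [xs, xe] x [ys, ye]
    (s != 0), obtained by integrating first in x and then in y."""
    def corner(x, y):
        return (x - s * y) ** (i + 2)
    num = (corner(xe, ys) - corner(xe, ye)) - (corner(xs, ys) - corner(xs, ye))
    return Ge * num / (s * (i + 1) * (i + 2))

# Sanity check: for i = 0 the result should be Ge times the area of the rectangle.
print(integral_component_2d(0, 0.0, 0.5, 0.0, 0.5, s=0.5, Ge=4.0))   # expect 1.0
```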

[2019]Of these, the gain Ge is determined with the spatial resolution
power (integral range) set by the conditions setting unit 3201.

[2020]The range of i is determined with the number of dimensions n set by
the conditions setting unit 3201.

[2021]The variable s is, as described above, cot θ, and so is
determined by the angle θ output from the data continuity detecting
unit 101.

[2022]Also, each of the start position xs in the X direction and end
position xe in the X direction of an integral range, and the start
position ys in the Y direction and end position ye in the Y
direction of an integral range is determined with the center pixel
position (x, y) and pixel width of an output pixel to be generated from
now. Note that (x, y) represents a relative position from the center
position of the pixel of interest when the actual world estimating unit
102 generates the approximation function f(x, y).

[2023]Further, each of the center pixel position (x, y) and pixel width of
an output pixel to be generated from now is determined with the spatial
resolution power (integral range) set by the conditions setting unit
3201.

[2024]Accordingly, the integral component calculation unit 3203 calculates
Ki(xs, xe, ys, ye) based on the number of
dimensions and the spatial resolution power (integral range) set by the
conditions setting unit 3201, and the angle θ of the data
continuity information output from the data continuity detecting unit
101, and supplies the calculated result to the output pixel value
calculation unit 3204 as an integral component table.

[2025]The output pixel value calculation unit 3204 calculates the right
side of the above Expression (191) using the features table supplied from
the features storage unit 3202, and the integral component table supplied
from the integral component calculation unit 3203, and outputs the
calculated result to the outside as the output pixel value M.

[2026]Next, description will be made regarding image generating processing
(processing in step S103 in FIG. 40) by the image generating unit 103
(FIG. 250) employing the two-dimensional reintegration method with
reference to the flowchart in FIG. 252.

[2027]For example, let us say that the light signals represented with the
function F(x, y) shown in FIG. 249 have been cast in the sensor 2 so as
to become an input image, and the actual world estimating unit 102 has
already generated the approximation function f(x, y) for approximating
the function F(x, y) with one pixel 3231 such as shown in FIG. 253 as a
pixel of interest at the processing in step S102 in FIG. 40 described
above.

[2028]Note that in FIG. 253, the pixel value (input pixel value) of the
pixel 3231 is taken as P, and the shape of the pixel 3231 is taken as a
square of which one side is 1 in length. Also, of the spatial directions,
the direction in parallel with one side of the pixel 3231 is taken as the
X direction, and the direction orthogonal to the X direction is taken as
the Y direction. Further, a coordinates system (hereafter, referred to as
a pixel-of-interest coordinates system) in the spatial directions (X
direction and Y direction) of which the origin is the center of the pixel
3231 is set.

[2029]Also, let us say that in FIG. 253, the data continuity detecting
unit 101, which takes the pixel 3231 as a pixel of interest, has already
output the angle θ as data continuity information corresponding to
continuity of data represented with the gradient Gf at the
processing in step S101 in FIG. 40 described above.

[2030]Description will return to FIG. 252, and in this case, the
conditions setting unit 3201 sets conditions (the number of dimensions
and an integral range) at step S3201.

[2031]For example, now, let us say that 5 has been set as the number of
dimensions, and also spatial quadruple density (a spatial resolution
power causing the pitch width of a pixel to become half in each of the
upper/lower/left/right directions) has been set as the integral range.

[2032]That is to say, in this case, it has been set that the four pixels
3241 through 3244 are newly created in a range of -0.5 through 0.5
in the X direction, and also a range of -0.5 through 0.5 in the Y
direction (in the range of the pixel 3231 in FIG. 253), such as shown in
FIG. 254. Note that in FIG. 254 as well, the same pixel-of-interest
coordinates system as that in FIG. 253 is shown.

[2033]Also, in FIG. 254, M(1) represents the pixel value of the pixel 3241
to be generated from now, M(2) represents the pixel value of the pixel
3242 to be generated from now, M(3) represents the pixel value of the
pixel 3243 to be generated from now, and M(4) represents the pixel value
of the pixel 3244 to be generated from now.

[2034]Description will return to FIG. 252, in step S3202, the features
storage unit 3202 acquires the features of the approximation function
f(x, y) supplied from the actual world estimating unit 102, and generates
a features table. In this case, the coefficients w0 through w5
of the approximation function f(x, y) serving as a five-dimensional polynomial
are supplied from the actual world estimating unit 102, and accordingly,
(w0, w1, w2, w3, w4, w5) is generated as a
features table.

[2035]In step S3203, the integral component calculation unit 3203
calculates integral components based on the conditions (the number of
dimensions and an integral range) set by the conditions setting unit
3201, and the data continuity information (angle θ) supplied from
the data continuity detecting unit 101, and generates an integral
component table.

[2036]Specifically, for example, let us say that numbers (hereafter, such
a number is referred to as a mode number) 1 through 4 are respectively
appended to the pixel 3241 through pixel 3244 to be generated from now,
the integral component calculation unit 3203 calculates the integral
components Ki(xs, xe, ys, ye) of the above
Expression (191) as a function of l (however, l represents a mode number)
such as the integral components Ki(l) shown in the left side of the
following Expression (193).

Ki(l)=Ki(xs,xe,ys,ye) (193)

[2037]Specifically, in this case, the integral components Ki(l) shown
in the following Expression (194) are calculated.

Ki(1)=Ki(-0.5,0,0,0.5)

Ki(2)=Ki(0,0.5,0,0.5)

Ki(3)=Ki(-0.5,0,-0.5,0)

Ki(4)=Ki(0,0.5,-0.5,0) (194)

[2038]Note that in Expression (194), the left side represents the integral
components Ki(l), and the right side represents the integral
components Ki(xs, xe, ys, ye). That is to say,
in this case, l is any one of 1 through 4, and also i is any one of 0
through 5, and accordingly, 24 Ki(l) in total of 6 Ki(1), 6
Ki(2), 6 Ki(3), and 6 Ki(4) are calculated.

[2040]Next, the integral component calculation unit 3203 calculates the
integral components Ki(xs, xe, ys, ye) of each
right side of the four expressions in Expression (194) regarding i=0
through 5 using the calculated variable s. Note that with this
calculation of the integral components Ki(xs, xe, ys,
ye), the above Expression (192) is employed.

[2042]Note that the sequence of the processing in step S3202 and the
processing in step S3203 is not restricted to the example in FIG. 252,
the processing in step S3203 may be executed first, or the processing in
step S3202 and the processing in step S3203 may be executed
simultaneously.

[2043]Next, in step S3204, the output pixel value calculation unit 3204
calculates the output pixel values M(1) through M (4) respectively based
on the features table generated by the features storage unit 3202 at the
processing in step S3202, and the integral component table generated by
the integral component calculation unit 3203 at the processing in step
S3203.

[2044]Specifically, in this case, the output pixel value calculation unit
3204 calculates each of the pixel value M(1) of the pixel 3241 (pixel of
mode number 1), the pixel value M (2) of the pixel 3242 (pixel of mode
number 2), the pixel value M(3) of the pixel 3243 (pixel of mode number
3), and the pixel value M(4) of the pixel 3244 (pixel of mode number 4)
shown in FIG. 254 by calculating the right sides of the following
Expression (195) through Expression (198) corresponding to the above
Expression (191).

M(1)=Σ_{i=0}^{n}wi×Ki(1) (195)

M(2)=Σ_{i=0}^{n}wi×Ki(2) (196)

M(3)=Σ_{i=0}^{n}wi×Ki(3) (197)

M(4)=Σ_{i=0}^{n}wi×Ki(4) (198)

[2045]However, in this case, each n of Expression (195) through Expression
(198) becomes 5.

[2046]In step S3205, the output pixel value calculation unit 3204
determines whether or not the processing of all the pixels has been
completed.

[2047]In step S3205, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S3202, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in step S3202 through S3204 is repeatedly performed.

[2048]In the event that the processing of all the pixels has been
completed (in step S3205, in the event that determination is made that
the processing of all the pixels has been completed), the output pixel
value calculation unit 3204 outputs the image in step S3206. Then, the
image generating processing ends.

[2049]Thus, four pixels having higher spatial resolution than the input
pixel 3231, i.e., the pixel 3241 through pixel 3244 (FIG. 254) can be
created by employing the two-dimensional reintegration method as a pixel
at the pixel 3231 of the input image (FIG. 253) from the sensor 2 (FIG.
236). Further, though not shown in the drawing, as described above, the
image generating unit 103 can create a pixel having an arbitrary powered
spatial resolution as to the input pixel 3231 without deterioration by
appropriately changing an integral range, in addition to the pixel 3241
through pixel 3244.

[2050]As described above, in the description of the two-dimensional
reintegration method, an example has been employed wherein the
approximation function f(x, y) is subjected to two-dimensional
integration as to the spatial directions (X direction and Y direction),
but the two-dimensional reintegration method can also be applied to the
time-space directions (X direction and t direction, or Y direction and t
direction).

[2051]That is to say, the above example is an example in the case in which
the light signals in the actual world 1 (FIG. 236) have continuity in the
spatial directions represented with the gradient GF such as shown in
FIG. 249, and accordingly, an expression including two-dimensional
integration in the spatial directions (X direction and Y direction) such
as shown in the above Expression (186) has been employed. However, the
concept regarding two-dimensional integration can be applied not only to
the spatial direction but also the time-space directions (X direction and
t direction, or Y direction and t direction).

[2052]In other words, with the two-dimensional polynomial approximating
method serving as an assumption of the two-dimensional reintegration
method, it is possible to perform approximation using a two-dimensional
polynomial even in the case in which the image function F(x, y, t)
representing the light signals has continuity in the time-space
directions (however, X direction and t direction, or Y direction and t
direction) as well as continuity in the spatial directions.

[2053]Specifically, for example, in the event that there is an object moving horizontally in the X direction at uniform velocity, the direction of movement of the object is represented by a gradient VF in the X-t plane such as shown in FIG. 255. In other words, it can be said
that the gradient VF represents the direction of continuity in the
time-space directions in the X-t plane. Accordingly, the data continuity
detecting unit 101 (FIG. 236) can output movement θ such as shown
in FIG. 255 (strictly speaking, though not shown in the drawing, movement
θ is an angle generated by the direction of data continuity
represented with the gradient Vf corresponding to the gradient
VF and the X direction in the spatial direction) as data continuity
information corresponding to the gradient VF representing continuity
in the time-space directions in the X-t plane as well as the above angle
θ (data continuity information corresponding to the gradient
GF representing continuity in the spatial directions in the X-Y
plane).

[2054]Also, the actual world estimating unit 102 (FIG. 236) employing the
two-dimensional polynomial approximating method can calculate the
coefficients (features) wi of an approximation function f(x, t) with
the same method as the above method by employing the movement θ
instead of the angle θ. However, in this case, the equation to be
employed is not the above Expression (190) but the following Expression
(199).

f(x,t)=Σ(i=0 to n)wi×(x-s×t)^i (199)

[2055]Note that in Expression (199), s is cot θ (however, θ is
movement).

[2056]Accordingly, the image generating unit 103 (FIG. 236) employing the
two-dimensional reintegration method can calculate the pixel value M by
substituting the f(x, t) of the above Expression (199) into the right side of the following Expression (200), and calculating this.

M=Ge×∫(t=ts to te)∫(x=xs to xe)f(x,t)dxdt (200)

[2057]Note that in Expression (200), ts represents an integration
start position in the t direction, and te represents an integration
end position in the t direction. Similarly, xs represents an
integration start position in the X direction, and xe represents an
integration end position in the X direction. Ge represents a
predetermined gain.
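
As a rough illustration of Expression (199) and Expression (200), the following Python sketch evaluates the pixel value M by integrating f(x, t) = Σ wi (x - s·t)^i over the ranges [xs, xe] and [ts, te]; the inner integral in x is taken in closed form and the outer integral in t numerically, which is an illustrative choice rather than the method of the text. The function name and the use of radians for the movement θ are assumptions.

    import math

    # M = Ge x integral over [ts, te] and [xs, xe] of f(x, t) dx dt,
    # with f(x, t) = sum of w[i] * (x - s*t)**i and s = cot(theta).
    def pixel_value_xt(w, theta, xs, xe, ts, te, Ge=1.0, steps=100):
        s = 1.0 / math.tan(theta)          # s = cot(theta), theta in radians
        dt = (te - ts) / steps
        M = 0.0
        for k in range(steps):
            t = ts + (k + 0.5) * dt        # midpoint rule in the t direction
            # closed-form integral of (x - s*t)**i over [xs, xe]
            inner = sum(w[i] * ((xe - s*t)**(i + 1) - (xs - s*t)**(i + 1)) / (i + 1)
                        for i in range(len(w)))
            M += inner * dt
        return Ge * M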

[2058]Alternatively, an approximation function f(y, t) focusing attention on the spatial direction Y instead of the spatial direction X can be handled in the same way as the above approximation function f(x, t).

[2059]Incidentally, with Expression (199), it becomes possible to obtain data not integrated in the temporal direction, i.e., data without movement blurring, by regarding the t direction as constant, i.e., by performing integration in the X direction alone while omitting integration in the t direction. In other words, this method may be regarded as one of the two-dimensional reintegration methods in that reintegration is performed on condition that one certain dimension of the two-dimensional polynomial is held constant, or, in fact, may be regarded as one of the one-dimensional reintegration methods in that only one-dimensional reintegration in the X direction is performed.

[2060]Also, in Expression (200), an integral range may be set arbitrarily,
and accordingly, with the two-dimensional reintegration method, it
becomes possible to create a pixel having an arbitrary powered resolution
as to the original pixel (pixel of an input image from the sensor 2 (FIG.
236)) without deterioration by appropriately changing this integral
range.

[2061]That is to say, with the two-dimensional reintegration method, it
becomes possible to create temporal resolution by appropriately changing
an integral range in the temporal direction t. Also, it becomes possible
to create spatial resolution by appropriately changing an integral range
in the spatial direction X (or spatial direction Y). Further, it becomes
possible to create both temporal resolution and spatial resolution by
appropriately changing each integral range in the temporal direction and
in the spatial direction X.

[2062]Note that as described above, creation of any one of temporal
resolution and spatial resolution may be performed even with the
one-dimensional reintegration method, but creation of both temporal
resolution and spatial resolution cannot be performed with the
one-dimensional reintegration method in theory, which becomes possible
only by performing two-dimensional or more reintegration. That is to say,
creation of both temporal resolution and spatial resolution becomes
possible only by employing the two-dimensional reintegration method and a
later-described three-dimensional reintegration method.

[2063]Also, the two-dimensional reintegration method takes not
one-dimensional but two-dimensional integration effects into
consideration, and accordingly, an image more similar to the light signal
in the actual world 1 (FIG. 236) may be created.

[2064]In other words, with the two-dimensional reintegration method, for
example, the data continuity detecting unit 101 in FIG. 236 (FIG. 3)
detects continuity (e.g., continuity of data represented with the
gradient Gf in FIG. 251) of data in an input image made up of
multiple pixels having a pixel value on which the light signals in the
actual world 1 are projected by the multiple detecting elements of the
sensor 2 each having spatio-temporal integration effects, and projected
by the detecting elements of which a part of continuity (e.g., continuity
represented with the gradient GF in FIG. 249) of the light signals
in the actual world 1 drops.

[2065]Subsequently, for example, in response to the continuity of data detected by the data continuity detecting unit 101, the actual world estimating unit 102 in FIG. 236 (FIG. 3) estimates the light signal function F by approximating the light signal function F (specifically, the function F(x, y) in FIG. 249) representing the light signals in the actual world 1 with an approximation function f(x, y), which is a polynomial, on the assumption that the pixel value of a pixel corresponding to at least a position in the two-dimensional direction (e.g., spatial direction X and spatial direction Y in FIG. 249) of the time-space directions of the image data is the pixel value acquired by at least integration effects in the two-dimensional direction.

[2066]Speaking in detail, for example, the actual world estimating unit 102 estimates a first function representing the light signals in the real world by approximating the first function with a second function serving as a polynomial, on the assumption that the pixel value of a pixel corresponding to at least a distance (for example, the cross-sectional direction distance x' in FIG. 251) along the two-dimensional direction from a line corresponding to continuity of data (for example, a line (arrow) corresponding to the gradient Gf in FIG. 251) detected by the data continuity detecting unit 101 is the pixel value acquired by at least integration effects in the two-dimensional direction.

[2067]With the two-dimensional reintegration method, based on such an assumption, for example, the image generating unit 103 (FIG. 250 for configuration) in FIG. 236 (FIG. 3) generates a pixel value corresponding to a pixel (for example, the output image (pixel value M) in FIG. 236; specifically, for example, the pixel 3241 through pixel 3244 in FIG. 254) having a desired size by integrating the function F(x, y) estimated by the actual world estimating unit 102, i.e., the approximation function f(x, y), in at least desired increments in the two-dimensional direction (e.g., by calculating the right side of the above Expression (186)).

[2068]Accordingly, the two-dimensional reintegration method enables not
only any one of temporal resolution and spatial resolution but also both
temporal resolution and spatial resolution to be created. Also, with the
two-dimensional reintegration method, an image more similar to the light
signal in the actual world 1 (FIG. 236) than that in the one-dimensional
reintegration method may be generated.

[2069]Next, description will be made regarding a three-dimensional
reintegration method with reference to FIG. 256 and FIG. 257.

[2070]With the three-dimensional reintegration method, it is assumed that the approximation function f(x, y, t) has already been created using the three-dimensional function approximating method.

[2071]In this case, with the three-dimensional reintegration method, the output pixel value M is calculated with the following Expression (201).

M=Ge×∫(t=ts to te)∫(y=ys to ye)∫(x=xs to xe)f(x,y,t)dxdydt (201)

[2072]Note that in Expression (201), ts represents an integration
start position in the t direction, and te represents an integration
end position in the t direction. Similarly, ys represents an
integration start position in the Y direction, and ye represents an
integration end position in the Y direction. Also, xs represents an
integration start position in the X direction, and xe represents an
integration end position in the X direction. Ge represents a
predetermined gain.

[2073]Also, in Expression (201), an integral range may be set arbitrarily,
and accordingly, with the three-dimensional reintegration method, it
becomes possible to create a pixel having an arbitrary powered time-space
resolution as to the original pixel (pixel of an input image from the
sensor 2 (FIG. 236)) without deterioration by appropriately changing this
integral range. That is to say, upon the integral range in the spatial
direction being reduced, a pixel pitch can be reduced without restraint.
On the other hand, upon the integral range in the spatial direction being
enlarged, a pixel pitch can be enlarged without restraint. Also, upon the
integral range in the temporal direction being reduced, temporal
resolution can be created based on an actual waveform.
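
The following is a hedged numerical sketch of Expression (201): the pixel value M is Ge times the triple integral of the approximation function f(x, y, t) over the chosen ranges. A midpoint rule is used purely for illustration; the text instead expands the integral analytically into the integral components Ki of Expression (202). All names are assumptions.

    # M = Ge x triple integral of f(x, y, t) over [xs,xe] x [ys,ye] x [ts,te]
    def pixel_value_xyt(f, xs, xe, ys, ye, ts, te, Ge=1.0, steps=20):
        dx = (xe - xs) / steps
        dy = (ye - ys) / steps
        dt = (te - ts) / steps
        total = 0.0
        for i in range(steps):
            for j in range(steps):
                for k in range(steps):
                    x = xs + (i + 0.5) * dx
                    y = ys + (j + 0.5) * dy
                    t = ts + (k + 0.5) * dt
                    total += f(x, y, t) * dx * dy * dt
        return Ge * total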

[2076]The conditions setting unit 3301 sets the number of dimensions n of
the approximation function f(x, y, t) based on the actual world
estimating information (with the example in FIG. 256, features of the
approximation function f(x, y, t)) supplied from the actual world
estimating unit 102.

[2077]The conditions setting unit 3301 sets an integral range in the case
of reintegrating the approximation function f(x, y, t) (in the case of
calculating output pixel values). Note that an integral range set by the conditions setting unit 3301 does not need to be the width (vertical width and horizontal width) of a pixel or the shutter time itself. For example, it
becomes possible to determine a specific integral range in the spatial
direction as long as the relative size (spatial resolution power) of an
output pixel (pixel to be generated from now by the image generating unit
103) as to the spatial size of each pixel of an input image from the
sensor 2 (FIG. 236) is known. Similarly, it becomes possible to determine
a specific integral range in the temporal direction as long as the
relative time (temporal resolution power) of an output pixel as to the
shutter time of the sensor 2 (FIG. 236) is known. Accordingly, the
conditions setting unit 3301 can set, for example, a spatial resolution
power and temporal resolution power as an integral range.
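
As one way of picturing how a resolution power maps to an integral range, the sketch below assumes an input pixel of width 1 (or one shutter time) centered on the origin of the corresponding direction and splits it evenly; this convention is an assumption for illustration only, not a convention stated in the text.

    # k-th output pixel (k = 0 .. power-1) of an input pixel covering [-0.5, 0.5]
    def integral_range(k, power):
        start = -0.5 + k / power
        end = start + 1.0 / power
        return start, end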

[2078]The features storage unit 3302 temporarily stores the features of the
approximation function f(x, y, t) sequentially supplied from the actual
world estimating unit 102. Subsequently, upon the features storage unit
3302 storing all of the features of the approximation function f(x, y,
t), the features storage unit 3302 generates a features table including
all of the features of the approximation function f(x, y, t), and
supplies this to the output pixel value calculation unit 3304.

[2079]Incidentally, upon the right side of the above Expression (201), i.e., the integral of the approximation function f(x, y, t), being expanded (calculated), the output pixel value M is represented as the following Expression (202).

M=Ge×Σ(i=0 to n)wi×Ki(xs,xe,ys,ye,ts,te) (202)

[2080]In Expression (202), Ki(xs, xe, ys, ye,
ts, te) represents the integral components of the i-dimensional
term. However, xs represents an integration range start position in
the X direction, xe represents an integration range end position in
the X direction, ys represents an integration range start position
in the Y direction, ye represents an integration range end position
in the Y direction, ts represents an integration range start
position in the t direction, and te represents an integration range
end position in the t direction, respectively.
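
Given the two tables, the calculation of Expression (202) is again a simple product-sum, sketched below under assumed names; w[i] stands for the features and K[i] for Ki(xs, xe, ys, ye, ts, te) supplied in the integral component table.

    # M = Ge x (sum over i of w[i] * K[i])
    def output_pixel_value_3d(w, K, Ge=1.0):
        return Ge * sum(w[i] * K[i] for i in range(len(w)))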

[2082]Specifically, the integral component calculation unit 3303
calculates the integral components Ki(xs, xe, ys,
ye, ts, te) based on the number of dimensions and the
integral range (spatial resolution power or temporal resolution power)
set by the conditions setting unit 3301, and the angle θ or
movement θ of the data continuity information output from the data
continuity detecting unit 101, and supplies the calculated results to the
output pixel value calculation unit 3304 as an integral component table.

[2083]The output pixel value calculation unit 3304 calculates the right
side of the above Expression (202) using the features table supplied from
the features storage unit 3302, and the integral component table supplied
from the integral component calculation unit 3303, and outputs the
calculated result to the outside as the output pixel value M.

[2084]Next, description will be made regarding image generating processing
(processing in step S103 in FIG. 40) by the image generating unit 103
(FIG. 256) employing the three-dimensional reintegration method with
reference to the flowchart in FIG. 257.

[2085]For example, let us say that the actual world estimating unit 102
(FIG. 236) has already generated an approximation function f(x, y, t) for
approximating the light signals in the actual world 1 (FIG. 236) with a
predetermined pixel of an input image as a pixel of interest at the
processing in step S102 in FIG. 40 described above.

[2086]Also, let us say that the data continuity detecting unit 101 (FIG.
236) has already output the angle θ or movement θ as data
continuity information with the same pixel as the actual world estimating
unit 102 as a pixel of interest.

[2087]In this case, the conditions setting unit 3301 sets conditions (the
number of dimensions and an integral range) at step S3301 in FIG. 257.

[2088]In step S3302, the features storage unit 3302 acquires the features
wi of the approximation function f(x, y, t) supplied from the actual
world estimating unit 102, and generates a features table.

[2089]In step S3303, the integral component calculation unit 3303
calculates integral components based on the conditions (the number of
dimensions and an integral range) set by the conditions setting unit
3301, and the data continuity information (angle θ or movement
θ) supplied from the data continuity detecting unit 101, and
generates an integral component table.

[2090]Note that the sequence of the processing in step S3302 and the
processing in step S3303 is not restricted to the example in FIG. 257;
the processing in step S3303 may be executed first, or the processing in
step S3302 and the processing in step S3303 may be executed
simultaneously.

[2091]Next, in step S3304, the output pixel value calculation unit 3304
calculates each output pixel value based on the features table generated
by the features storage unit 3302 at the processing in step S3302, and
the integral component table generated by the integral component
calculation unit 3303 at the processing in step S3303.

[2092]In step S3305, the output pixel value calculation unit 3304
determines whether or not the processing of all the pixels has
been completed.

[2093]In step S3305, in the event that determination is made that the
processing of all the pixels has not been completed, the processing
returns to step S3302, wherein the subsequent processing is repeatedly
performed. That is to say, the pixels that have not become a pixel of
interest are sequentially taken as a pixel of interest, and the
processing in steps S3302 through S3304 is repeatedly performed.

[2094]In the event that the processing of all the pixels has been
completed (in step S3305, in the event that determination is made that
the processing of all the pixels has been completed), the output pixel
value calculation unit 3304 outputs the image in step S3306. Then, the
image generating processing ends.

[2095]Thus, in the above Expression (201), an integral range may be set
arbitrarily, and accordingly, with the three-dimensional reintegration
method, it becomes possible to create a pixel having an arbitrary powered
resolution as to the original pixel (pixel of an input image from the
sensor 2 (FIG. 236)) without deterioration by appropriately changing this
integral range.

[2096]That is to say, with the three-dimensional reintegration method,
appropriately changing an integral range in the temporal direction
enables temporal resolution to be created. Also, appropriately changing
an integral range in the spatial direction enables spatial resolution to
be created. Further, appropriately changing each integral range in the
temporal direction and in the spatial direction enables both temporal
resolution and spatial resolution to be created.

[2097]Specifically, with the three-dimensional reintegration method, there is no need to degenerate three dimensions into two dimensions or one dimension by approximation, thereby enabling high-precision processing. Also, movement in an oblique direction can be processed without degeneration into two dimensions. Further, since there is no degeneration into two dimensions, processing in each dimension remains possible. For example, with the two-dimensional reintegration method, in the event of degenerating in the spatial directions (X direction and Y direction), processing in the t direction serving as the temporal direction cannot be performed. On the other hand, with the three-dimensional reintegration method, any processing in the time-space directions may be performed.

[2098]Note that as described above, creation of any one of temporal
resolution and spatial resolution may be performed even with the
one-dimensional reintegration method, but creation of both temporal
resolution and spatial resolution cannot be performed with the
one-dimensional reintegration method in theory, which becomes possible
only by performing two-dimensional or more reintegration. That is to say,
creation of both temporal resolution and spatial resolution becomes
possible only by employing the above two-dimensional reintegration method
and the three-dimensional reintegration method.

[2099]Also, the three-dimensional reintegration method takes not
one-dimensional and two-dimensional but three-dimensional integration
effects into consideration, and accordingly, an image more similar to the
light signal in the actual world 1 (FIG. 236) may be created.

[2100]In other words, with the three-dimensional reintegration method, for example, the actual world estimating unit 102 in FIG. 236 (FIG. 3) estimates the light signal function F representing the light signals in the actual world by approximating the light signal function F with a predetermined approximation function f, on the assumption that the pixel value of a pixel corresponding to at least a position in the one-dimensional direction of the time-space directions, of an input image made up of multiple pixels having a pixel value on which the light signals in the actual world 1 are projected by the multiple detecting elements of the sensor 2 each having spatio-temporal integration effects, and projected by the detecting elements such that a part of the continuity of the light signals in the actual world 1 is lost, is a pixel value acquired by at least integration effects in the one-dimensional direction.

[2101]Further, for example, in the event that the data continuity detecting unit 101 in FIG. 236 (FIG. 3) detects continuity of data of an input image, the actual world estimating unit 102 estimates the light signal function F by approximating the light signal function F with the approximation function f, on the assumption that the pixel value of a pixel corresponding to at least a position in the one-dimensional direction in the time-space directions of the image data, corresponding to the continuity of data detected by the data continuity detecting unit 101, is the pixel value acquired by at least integration effects in the one-dimensional direction.

[2102]Speaking in detail, for example, the actual world estimating unit 102 estimates the light signal function by approximating the light signal function F with an approximation function, on the assumption that the pixel value of a pixel corresponding to at least a distance along the one-dimensional direction from a line corresponding to the continuity of data detected by the data continuity detecting unit 101 is the pixel value acquired by at least integration effects in the one-dimensional direction.

[2103]With the three-dimensional reintegration method, for example, the image generating unit 103 (of which the configuration is shown in FIG. 256) in FIG. 236 (FIG. 3) generates a pixel value corresponding to a pixel having a desired size
by integrating the light signal function F estimated by the actual world
estimating unit 102, i.e., the approximation function f in at least
desired increments in the one-dimensional direction (e.g., by calculating
the right side of the above Expression (201)).

[2104]Accordingly, with the three-dimensional reintegration method, an
image more similar to the light signal in the actual world 1 (FIG. 236)
than that in conventional image generating methods, or the above
one-dimensional or two-dimensional reintegration method may be generated.

[2105]Next, description will be made with reference to FIG. 258 regarding the image generating unit 103 which newly generates pixels based on the derivative value or gradient of each pixel, in the event that the actual world estimating information input from the actual world estimating unit 102 is information of the derivative value or gradient of each pixel on the approximation function f(x) approximately representing each pixel value of reference pixels.

[2106]Note that the term "derivative value" mentioned here, following the
approximation function f(x) approximately representing each pixel value
of reference pixels being obtained, means a value obtained at a predetermined position using the first-order derivative f(x)' obtained from the approximation function f(x) (or the first-order derivative f(t)' obtained from an approximation function f(t) in the event that the approximation function is in the frame direction). Also, the term "gradient" mentioned here
means the gradient of a predetermined position on the approximation
function f(x) directly obtained from the pixel values of perimeter pixels
at the predetermined position without obtaining the above approximation
function f(x) (or f(t)). However, derivative values mean the gradient at
a predetermined position on the approximation function f(x), and
accordingly, either case means the gradient at a predetermined position
on the approximation function f(x). Accordingly, with regard to
derivative values and a gradient serving as the actual world estimating
information input from the actual world estimating unit 102, they are unified and referred to as the gradient on the approximation function f(x) (or f(t)) in the description of the image generating unit 103 in FIG. 258 and FIG. 262.

[2107]A gradient acquiring unit 3401 acquires the gradient information of
each pixel, the pixel value of the corresponding pixel, and the gradient
in the direction of continuity regarding the approximation function f(x)
approximately representing the pixel values of the reference pixels input
from the actual world estimating unit 102, and outputs these to an
extrapolation/interpolation unit 3402.

[2108]The extrapolation/interpolation unit 3402 generates pixels of a certain power of higher density than the input image using
extrapolation/interpolation based on the gradient of each pixel on the
approximation function f(x), the pixel value of the corresponding pixel,
and the gradient in the direction of continuity, which are input from the
gradient acquiring unit 3401, and outputs the pixels as an output image.

[2109]Next, description will be made regarding image generating processing
by the image generating unit 103 in FIG. 258 with reference to the
flowchart in FIG. 259.

[2110]In step S3401, the gradient acquiring unit 3401 acquires information
regarding the gradient (derivative value) on the approximation function
f(x), position, and pixel value of each pixel, and the gradient in the
direction of continuity, which is input from the actual world estimating
unit 102, as actual world estimating information.

[2111]At this time, for example, in the event of generating an image made up of pixels having double density in the spatial direction X and spatial direction Y (quadruple in total) as to an input image, information regarding a pixel Pin such as shown in FIG. 260, namely the gradients f(Xin)' (gradient in the center position of the pixel Pin), f(Xin-Cx(-0.25))' (gradient of the center position of a pixel Pa when generating a pixel of double density in the Y direction from the pixel Pin), and f(Xin-Cx(0.25))' (gradient of the center position of a pixel Pb when generating a pixel of double density in the Y direction from the pixel Pin), the position and pixel value of the pixel Pin, and a gradient Gf in the direction of continuity, is input from the actual world estimating unit 102.

[2112]In step S3402, the gradient acquiring unit 3401 selects information
of the corresponding pixel of interest, of the actual world estimating
information input, and outputs this to the extrapolation/interpolation
unit 3402.

[2113]In step S3403, the extrapolation/interpolation unit 3402 obtains a
shift amount from the position information of the input pixels, and the
gradient Gf in the direction of continuity.

[2114]Here, a shift amount Cx(ty) is defined as Cx(ty)=ty/Gf when the
gradient as continuity is represented with Gf. This shift amount
Cx(ty) represents a shift width as to the spatial direction X at a
position in the spatial direction Y=ty of the approximation function
f(x), which is defined on the position in the spatial direction Y=0.
Accordingly, for example, in the event that an approximation function on
the position in the spatial direction Y=0 is defined as f(x), in the
spatial direction Y=ty this approximation function f(x) becomes a
function shifted by the Cx(ty) as to the spatial direction X, so that
this approximation function is defined as f(x-Cx(ty)) (=f(x-ty/Gf)).
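
A minimal sketch of this shift, under the assumption that f0 is a callable giving the approximation function defined at the position Y=0:

    # Cx(ty) = ty / Gf; the approximation at Y = ty is f0 shifted in X by Cx(ty)
    def shifted_approximation(f0, Gf):
        Cx = lambda ty: ty / Gf
        return lambda x, ty: f0(x - Cx(ty))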

[2115]For example, in the event of the pixel Pin such as shown in FIG.
260, when one pixel (one pixel size in the drawing is 1 both in the
horizontal direction and in the vertical direction) in the drawing is
divided into two pixels in the vertical direction (when generating a
double-density pixel in the vertical direction), the
extrapolation/interpolation unit 3402 obtains the shift amounts of the
pixels Pa and Pb, which are to be obtained. That is to say, in this case,
the pixels Pa and Pb are shifted by -0.25 and 0.25 as to the spatial
direction Y respectively as viewed from the pixel Pin, so that the shift
amounts of the pixels Pa and Pb become Cx(-0.25) and Cx(0.25)
respectively. Note that in FIG. 260, the pixel Pin is a square of which
general gravity position is (Xin, Yin), and the pixels Pa and Pb are
rectangles long in the horizontal direction in the drawing of which
general gravity positions are (Xin, Yin+0.25) and (Xin, Yin-0.25)
respectively.

[2116]In step S3404, the extrapolation/interpolation unit 3402 obtains the
pixel values of the pixels Pa and Pb using extrapolation/interpolation
through the following Expression (203) and Expression (204) based on the
shift amount Cx obtained at the processing in step S3403, the gradient f(Xin)' on the pixel of interest on the approximation function f(x) of the
pixel Pin acquired as the actual world estimating information, and the
pixel value of the pixel Pin.

[2118]That is to say, as shown in FIG. 261, the amount of change of the
pixel value is set by multiplying the gradient f(Xin)' in the pixel of
interest Pin by the movement distance in the X direction, i.e., shift
amount, and the pixel value of a pixel to be newly generated is set on
the basis of the pixel value of the pixel of interest.

[2119]In step S3405, the extrapolation/interpolation unit 3402 determines
regarding whether or not pixels having predetermined resolution have been
obtained. For example, in the event that predetermined resolution is
pixels having double density in the vertical direction as to the pixels
in an input image, the extrapolation/interpolation unit 3402 determines
that pixels having predetermined resolution have been obtained by the
above processing, but for example, in the event that pixels having
quadruple density (double in the horizontal direction×double in the
vertical direction) as to the pixels in the input image have been
desired, pixels having predetermined resolution have not been obtained by
the above processing. Consequently, in the event that a quadruple-density
image is a desired image, the extrapolation/interpolation unit 3402
determines that pixels having predetermined resolution have not been
obtained, and the processing returns to step S3403.
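
The loop over steps S3403 through S3405 can be pictured as repeated densification until the requested resolution is reached; the control-flow sketch below is an assumption about one way to organize that loop, with densify_once standing in for one pass of steps S3403 and S3404.

    # Repeat densification passes, doubling pixel density each pass,
    # until the target density factor (e.g., 4 for quadruple density) is reached.
    def densify_until(target_density, densify_once, pixels):
        density = 1
        while density < target_density:
            pixels = densify_once(pixels)
            density *= 2
        return pixels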

[2120]In step S3403, the extrapolation/interpolation unit 3402 obtains the
shift amounts of pixels P01, P02, P03, and P04 (pixel having quadruple
density as to the pixel of interest Pin), which are to be obtained, from
the center position of a pixel, which is to be generated, at the second
processing respectively. That is to say, in this case, the pixels P01 and
P02 are pixels to be obtained from the pixel Pa, so that each shift
amount from the pixel Pa is obtained respectively. Here, the pixels P01
and P02 are shifted by -0.25 and 0.25 as to the spatial direction X
respectively as viewed from the pixel Pa, and accordingly, each value
itself becomes the shift amount thereof (since the pixels are shifted as
to the spatial direction X). Similarly, the pixels P03 and P04 are
shifted by -0.25 and 0.25 respectively as to the spatial direction X as
viewed from the pixel Pb, and accordingly, each value itself becomes the
shift amount thereof. Note that in FIG. 260, the pixels P01, P02, P03, and P04 are squares of which the gravity positions are the four cross-marked positions in the drawing; the length of each side of the pixel Pin is 1, and accordingly, the length of each side of the pixels P01, P02, P03, and P04 is approximately 0.5 respectively.

[2121]In step S3404, the extrapolation/interpolation unit 3402 obtains the
pixel values of the pixels P01, P02, P03, and P04 using
extrapolation/interpolation through the following Expression (205)
through Expression (208) based on the shift amount Cx obtained at the
processing in step S3403, the gradients f(Xin-Cx(-0.25))' and
f(Xin-Cx(0.25))' at a predetermined position on the approximation
function f(x) of the pixels Pa and Pb acquired as actual world estimating
information, and the pixel values of the pixels Pa and Pb obtained at the
above processing, and stores these in unshown memory.

P01=Pa+f(Xin-Cx(0.25))'×(-0.25) (205)

P02=Pa+f(Xin-Cx(0.25))'×(0.25) (206)

P03=Pb+f(Xin-Cx(-0.25))'×(-0.25) (207)

P04=Pb+f(Xin-Cx(-0.25))'×(0.25) (208)

[2122]In the above Expression (205) through Expression (208), P01 through
P04 represent the pixel values of the pixels P01 through P04
respectively.
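
The four extrapolations of Expressions (205) through (208) can be written compactly as below; grad_plus and grad_minus are assumed shorthand for f(Xin-Cx(0.25))' and f(Xin-Cx(-0.25))' respectively, and the function name is illustrative.

    # Expressions (205) through (208)
    def quadruple_density_pixels(Pa, Pb, grad_plus, grad_minus):
        P01 = Pa + grad_plus * (-0.25)
        P02 = Pa + grad_plus * (0.25)
        P03 = Pb + grad_minus * (-0.25)
        P04 = Pb + grad_minus * (0.25)
        return P01, P02, P03, P04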

[2123]In step S3405, the extrapolation/interpolation unit 3402 determines whether or not pixels having predetermined resolution have been
obtained, and in this case, the desired quadruple-density pixels have
been obtained, and accordingly, the extrapolation/interpolation unit 3402
determines that the pixels having predetermined resolution have been
obtained, and the processing proceeds to step S3406.

[2124]In step S3406, the gradient acquiring unit 3401 determines whether or not the processing of all pixels has been completed, and in
the event that determination is made that the processing of all pixels
has not been completed, the processing returns to step S3402, wherein the
subsequent processing is repeatedly performed.

[2125]In step S3406, in the event that the gradient acquiring unit 3401
determines that the processing of all pixels has been completed, the
extrapolation/interpolation unit 3402 outputs an image made up of the
generated pixels, which are stored in unshown memory, in step S3407.

[2126]That is to say, as shown in FIG. 261, the pixel values of new pixels are obtained using extrapolation/interpolation according to the distance in the spatial direction X from the pixel of interest, using the gradient f(x)' on the approximation function f(x) obtained at the pixel of interest.

[2127]Note that with the above example, description has been made
regarding the gradient (derivative value) at the time of calculating a
quadruple-density pixel as an example, but in the event that gradient
information at many more positions can be obtained as the actual world
estimating information, pixels having more density in the spatial
directions than that in the above example may be calculated using the
same method as the above example.

[2128]Also, with regard to the above example, description has been made
regarding an example for obtaining double-density pixel values, but the
approximation function f(x) is a continuous function, and accordingly, in
the event that necessary gradient (derivative value) information can be
obtained even regarding pixel values having density other than double
density, an image made up of further high-density pixels may be
generated.

[2129]According to the above description, based on the gradient (or
derivative value) f(x)' information of the approximation function f(x)
approximating the pixel value of each pixel of an input image supplied as
the actual world estimating information in the spatial direction, the
pixels of an higher resolution image than the input image may be
generated.

[2130]Next, description will be made with reference to FIG. 262 regarding
the image generating unit 103 for generating new pixel values so as to
output an image based upon the derivative values or gradient information
for each pixel in a case that the actual world estimation information
input from the actual world estimating unit 102 is derivative values or
gradient information for these pixels, obtained from f(t) that is a
function in the frame direction (time direction) representing approximate
pixel values of the reference pixels.

[2131]A gradient acquisition unit 3411 acquires the gradient information
obtained from an approximate function f(t) which represents approximate
pixel values of the reference pixels, the corresponding pixel value, and
movement as continuity, for each pixel position, which are input from the
actual world estimating unit 102, and outputs the information thus
obtained to an extrapolation unit 3412.

[2132]The extrapolation unit 3412 generates pixels of a predetermined power of higher density than the input image using
extrapolation based upon the gradient which is obtained from the
approximate function f(t), the corresponding pixel value, and movement as
continuity, for each pixel, which are input from the gradient acquisition
unit 3411, and outputs the image thus generated as an output image.

[2133]Next, description will be made regarding image generating processing
by the image generating unit 103 shown in FIG. 262, with reference to the
flowchart shown in FIG. 263.

[2134]In Step S3421, the gradient acquisition unit 3411 acquires
information regarding the gradient (derivative value) which is obtained
from the approximate function f(t), the position, the pixel value, and
movement as continuity, for each pixel, which are input from the actual
world estimating unit 102, as actual world estimation information.

[2135]For example, in a case of generating an image from the input image
with double pixel density in both the spatial direction and the frame
direction (i.e., a total of quadruple pixel density), the input
information regarding the pixel Pin shown in FIG. 264, received from the
actual world estimating unit 102 includes: the gradient f(Tin)' (the
gradient at the center of the pixel Pin), f(Tin-Ct(0.25))' (the gradient
at the center of the pixel Pat generated in a step for generating pixels
in the Y direction from the pixel Pin with double pixel density),
f(Tin-Ct(-0.25))' (the gradient at the center of the pixel Pbt generated
in a step for generating pixels in the Y direction from the pixel Pin
with double pixel density), the position of the pixel Pin, the pixel
value, and movement as continuity (motion vector).

[2136]In Step S3422, the gradient acquisition unit 3411 selects the
information regarding the pixel of interest, from the input actual world
estimation information, and outputs the information thus acquired, to the
extrapolation unit 3412.

[2137]In Step S3423, the extrapolation unit 3412 calculates the shift amount based upon the input position information of the pixel and the gradient in the direction of continuity.

[2138]Here, with movement as continuity (gradient on the plane having the
frame direction and the spatial direction) as Vf, the shift amount
Ct(ty) is obtained by the equation Ct(ty)=ty/Vf. The shift amount
Ct(ty) represents the shift of the approximate function f(t) in the frame
direction T, calculated at the position of Y=ty in the spatial direction.
Note that the approximate function f(t) is defined at the position Y=0 in
the spatial direction. Accordingly, in a case that the approximate
function f(t) is defined at the position Y=0 in the spatial direction,
for example, the approximate function f(t) at Y=ty in the spatial direction is shifted by Ct(ty) in the frame direction T, and accordingly, the approximate function at Y=ty is defined as f(t-Ct(ty)) (=f(t-ty/Vf)).

[2139]For example, let us consider the pixel Pin as shown in FIG. 264. In
a case that the one pixel in the drawing (let us say that the pixel is
formed with a pixel size of (1, 1) both in the frame direction and the
spatial direction) is divided into two in the spatial direction (in a
case of generating an image with double pixel density in the spatial
direction), the extrapolation unit 3412 calculates the shift amounts for
obtaining the pixels Pat and Pbt. That is to say, the pixels Pat and Pbt
are shifted along the spatial direction Y from the pixel Pin by 0.25 and
-0.25, respectively. Accordingly, the shift amounts for obtaining the
pixel values of the pixels Pat and Pbt are Ct(0.25) and Ct(-0.25),
respectively. Note that in FIG. 264, the pixel Pin is formed in the shape
of a square with the center of gravity at around (Xin, Yin). On the other
hand, the pixels Pat and Pbt are formed in the shape of a rectangle
having long sides in the horizontal direction in the drawing with the
centers of gravity of around (Xin, Yin+0.25) and (Xin, Yin-0.25),
respectively.

[2140]In Step S3424, the extrapolation unit 3412 calculates the pixel
values of the pixels Pat and Pbt with the following Expressions (209) and
(210) using extrapolation based upon the shift amount obtained in Step
S3423, the gradient f(Tin)' at the pixel of interest, which is obtained
from the approximate function f(t) for providing the pixel value of the
pixel Pin and has been acquired as the actual world estimation
information, and the pixel value of the pixel Pin.

Pat=Pin-f(Tin)'×Ct(0.25) (209)

Pbt=Pin-f(Tin)'×Ct(-0.25) (210)

[2141]In the above Expressions (209) and (210), Pat, Pbt, and Pin
represent the pixel values of the pixel Pat, Pbt, and Pin, respectively.

[2142]That is to say, as shown in FIG. 265, the change in the pixel value is calculated by multiplying the gradient f(Tin)' at the pixel of interest Pin by the distance in the frame direction T, i.e., the shift amount. Then, the value of a new pixel, which is to be generated, is determined using the change thus calculated with the pixel value of the pixel of interest as a base.
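
A corresponding sketch for the frame-direction case of Expressions (209) and (210), assuming Vf is the movement as continuity and grad_Tin is the gradient f(Tin)' at the pixel of interest; the names are illustrative assumptions.

    # Ct(ty) = ty / Vf; Expressions (209) and (210)
    def double_density_frame_direction(Pin, grad_Tin, Vf):
        Ct = lambda ty: ty / Vf
        Pat = Pin - grad_Tin * Ct(0.25)
        Pbt = Pin - grad_Tin * Ct(-0.25)
        return Pat, Pbt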

[2143]In Step S3425, the extrapolation unit 3412 determines whether or not the pixels thus generated provide the requested resolution. For example, in a case that the user has requested resolution of double pixel density in the spatial direction as compared with the input image, the extrapolation unit 3412 determines that the requested resolution has been obtained. However, in a case that the user has requested resolution of quadruple pixel density (double pixel density in both the frame direction and the spatial direction), the above processing does not provide the requested pixel density. Accordingly, in a case that the user has requested resolution of quadruple pixel density, the extrapolation unit 3412 determines that the requested resolution has not been obtained, and the flow returns to Step S3423.

[2144]In Step S3423 for the second processing, the extrapolation unit 3412
calculates the shift amounts from the pixels as bases for obtaining the
centers of the pixels P01t, P02t, P03t, and P04t (quadruple pixel density
as compared with the pixel of interest Pin). That is to say, in this
case, the pixels P01t and P02t are obtained from the pixel Pat, and
accordingly, the shift amounts from the pixel Pat are calculated for
obtaining these pixels. Here, the pixels P01t and P02t are shifted from
the pixel Pat in the frame direction T by -0.25 and 0.25, respectively,
and accordingly, the distances therebetween without any conversion are
employed as the shift amounts. In the same way, the pixels P03t and P04t
are shifted from the pixel Pbt in the frame direction T by -0.25 and
0.25, respectively, and accordingly, the distances therebetween without
any conversion are employed as the shift amounts. Note that in FIG. 264,
each of the pixels P01t, P02t, P03t, and P04t is formed in the shape of a
square having the center of gravity denoted by a corresponding one of the
four cross marks in the drawing, and the length of each side of each of
these pixels P01t, P02t, P03t, and P04t is approximately 0.5, since the
length of each side of the pixel Pin is 1.

[2145]In Step S3424, the extrapolation unit 3412 calculates the pixel
values of the pixels P01t, P02t, P03t, and P04t, with the following
Expressions (211) through (214) using extrapolation based upon the shift
amount Ct obtained in Step S3423, f(Tin-Ct(0.25))' and f(Tin-Ct(-0.25))'
which are the gradients of the approximate function f(t) at the
corresponding positions of Pat and Pbt and acquired as the actual world
estimation information, and the pixel values of the pixels Pat and Pbt
obtained in the above processing. The pixel values of the pixels P01t,
P02t, P03t, and P04t thus obtained are stored in unshown memory.

P01t=Pat+f(Tin-Ct(0.25))'×(-0.25) (211)

P02t=Pat+f(Tin-Ct(0.25))'×(0.25) (212)

P03t=Pbt+f(Tin-Ct(-0.25))'×(-0.25) (213)

P04t=Pbt+f(Tin-Ct(-0.25))'×(0.25) (214)

[2146]In the above Expressions (211) through (214), P01t through P04t represent the pixel values of the pixels P01t through P04t, respectively.

[2147]In Step S3425, the extrapolation unit 3412 determines whether or not
the pixel density for achieving the requested resolution has been
obtained. In this stage, the requested quadruple pixel density is
obtained. Accordingly, the extrapolation unit 3412 determines that the
pixel density for requested resolution has been obtained, following which
the flow proceeds to Step S3426.

[2148]In Step S3426, the gradient acquisition unit 3411 determines whether
or not processing has been performed for all the pixels. In a case that
the gradient acquisition unit 3411 determines that processing has not
been performed for all the pixels, the flow returns to Step S3422, and
subsequent processing is repeated.

[2149]In Step S3426, in a case that the gradient acquisition unit 3411 determines that processing has been performed for all the pixels, the extrapolation unit 3412 outputs an image formed of the generated pixels stored in the unshown memory in Step S3427.

[2150]That is to say, as shown in FIG. 265, the pixel values of new pixels are calculated according to the distance along the frame direction T from the pixel of interest, using the gradient f(t)' of the approximate function f(t) obtained at the pixel of interest.

[2151]While description has been made in the above example regarding an
example of the gradient (derivative value) at the time of computing a
quadruple-density pixel, the same technique can be used to further
compute pixels in the frame direction as well, if gradient information at
a greater number of positions can be obtained as actual world estimation
information.

[2152]While description has been made regarding an arrangement for obtaining a double pixel-density image, an arrangement may be made wherein a much higher pixel-density image is obtained based upon the necessary gradient (derivative value) information, using the nature of the approximate function f(t) as a continuous function.

[2153]The above-described processing enables creation of an image with higher resolution in the frame direction than the input image, based upon the gradient (or derivative value) information of the approximate function f(t), which is supplied as the actual world estimation information and provides an approximate value of the pixel value of each pixel of the input image.

[2154]With the present embodiment described above, data continuity is
detected from the image data formed of multiple pixels having the pixel
values obtained by projecting the optical signals in the real world by
actions of multiple detecting elements; a part of continuity of the
optical signals in the real world being lost due to the projection with
the multiple detecting elements each of which has time-space integration
effects. Then, the gradients at the multiple pixels shifted from the
pixel of interest in the image data in one dimensional direction of the
time-space directions are employed as a function corresponding to the
optical signals in the real world. Subsequently, the line is calculated
for each of the aforementioned multiple pixels shifted from the center of
the pixel of interest in the predetermined direction, with the center
matching that of the corresponding pixel and with the gradient at the
pixel thus employed. Then, the values at both ends of the line thus
obtained within the pixel of interest are employed as the pixel values of
a higher pixel-density image than the input image formed of the pixel of
interest. This enables creation of an image with higher resolution in the time-space directions than the input image.

[2155]Next, description will be made regarding another arrangement of the
image generating unit 103 (see FIG. 3) according to the present
embodiment with reference to FIG. 266 through FIG. 291.

[2156]FIG. 266 shows an example of a configuration of the image generating
unit 103 according to the present embodiment.

[2157]The image generating unit 103 shown in FIG. 266 includes a class classification adaptation processing unit 3501 for executing conventional class classification adaptation processing, a class classification adaptation processing correction unit 3502 for performing correction of the results of the class classification adaptation processing (detailed description will be made later), and an addition unit 3503 for making the sum of an image output from the class classification adaptation processing unit 3501 and an image output from the class classification adaptation processing correction unit 3502, and outputting the summed image as an output image to external circuits.

[2158]Note that the image output from the class classification adaptation
processing unit 3501 will be referred to as "predicted image" hereafter.
On the other hand, the image output from the class classification
adaptation processing correction unit 3502 will be referred to as
"correction image" or "subtraction predicted image". Note that
description will be made later regarding the concept behind the
"predicted image" and "subtraction predicted image".

[2159]Also, in the present embodiment, let us say that the class
classification adaptation processing is processing for improving the
spatial resolution of the input image, for example. That is to say, the
class classification adaptation processing is processing for converting
the input image with standard resolution into the predicted image with
high resolution.

[2160]Note that the image with the standard resolution will be referred to
as "SD (Standard Definition) image" hereafter as appropriate. Also, the
pixels forming the SD image will be referred to as "SD pixels" as
appropriate.

[2161]On the other hand, the high-resolution image will be referred to as
"HD (High Definition) image" hereafter as appropriate. Also, the pixels
forming the HD image will be referred to as "HD pixels" as appropriate.

[2162]Next, description will be made below regarding a specific example of
the class classification adaptation processing according to the present
embodiment.

[2163]First, the features are obtained for each of the SD pixels including
the pixel of interest and the pixels therearound (such SD pixels will be
referred to as "class tap" hereafter) for calculating the HD pixels of
the predicted image (HD image) corresponding to the pixel of interest (SD
pixel) of the input image (SD image). Then, the class of the class tap is
selected from classes prepared beforehand, based upon the features thus
obtained (the class code of the class tap is determined).

[2164]Then, product-sum calculation is performed using the coefficients
forming a coefficient set selected from multiple coefficient sets
prepared beforehand (each coefficient set corresponds to a certain class
code) based upon the class code thus determined, and the SD pixels
including the pixel of interest and the pixels therearound (Such SD
pixels will be referred to as "prediction tap" hereafter. Note that the
class tap may also be employed as the prediction tap), so as to obtain HD
pixels of a predicted image (HD image) corresponding to the pixel of
interest (SD pixel) of the input image (SD image).
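
The two steps just described, class determination from a class tap and a product-sum over a prediction tap, can be sketched as follows; classify and coefficient_sets are placeholders standing in for the learned classifier and the coefficient sets prepared beforehand, not identifiers from the text.

    # One HD pixel by conventional class classification adaptation processing
    def predict_hd_pixel(class_tap, prediction_tap, coefficient_sets, classify):
        class_code = classify(class_tap)          # class selected from the features
        coeffs = coefficient_sets[class_code]     # prediction coefficients for that class
        return sum(c * p for c, p in zip(coeffs, prediction_tap))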

[2165]Accordingly, with the arrangement according to the present
embodiment, the input image (SD image) is subjected to conventional class
classification adaptation processing at the class classification
adaptation processing unit 3501 so as to generate the predicted image (HD
image). Furthermore, the predicted image thus obtained is corrected at
the addition unit 3503 using the correction image output from the class
classification adaptation processing correction unit 3502 (by making the
sum of the predicted image and the correction image), thereby obtaining
the output image (HD image).
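
The correction performed at the addition unit 3503 is a per-pixel sum of the predicted image and the correction image, as in the following sketch, where images are assumed to be flat lists of pixel values of equal length.

    # Output image = predicted image + correction image (pixel by pixel)
    def addition_unit(predicted_image, correction_image):
        return [p + c for p, c in zip(predicted_image, correction_image)]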

[2166]That is to say, the arrangement according to the present embodiment
can be said to be an arrangement of the image generating unit 103 of the
image processing device (FIG. 3) for performing processing based upon the
continuity, from the perspective of the continuity. On the other hand,
the arrangement according to the present embodiment can also be said to
be an arrangement of the image processing device further including the
data continuity detecting unit 101, the actual world estimating unit 102,
the class classification adaptation correction unit 3502, and the
addition unit 3503, for performing correction of the class classification
adaptation processing, as compared with a conventional image processing
device formed of the sensor 2 and the class classification adaptation
processing unit 3501, from the perspective of class classification
adaptation processing.

[2167]Accordingly, such an arrangement according to the present embodiment
will be referred to as "class classification processing correction means"
hereafter, as opposed to reintegration means described above.

[2168]Detailed description will be made regarding the image generating
unit 103 using the class classification processing correction means.

[2169]In FIG. 266, upon input of signals in the actual world 1
(distribution of the light intensity) to the sensor 2, the input image is
output from the sensor 2. The input image is input to the class
classification adaptation processing unit 3501 of the image generating
unit 103, as well as to the data continuity detecting unit 101.

[2171]As described above, with the class classification adaptation
processing unit 3501, the input image (image data) input from the sensor
2 is employed as a target image which is to be subjected to processing,
as well as a reference image. That is to say, although the input image
from the sensor 2 is different (distorted) from the signals of the actual
world 1 due to the integration effects described above, the class
classification adaptation processing unit 3501 performs the processing
using the input image different from the signals of the actual world 1,
as a correct reference image.

[2172]As a result, in a case that the HD image is generated using the
class classification adaptation processing based upon the input image (SD
image) in which original details have been lost in the input stage where
the input image has been output from the sensor 2, such an HD image may
have a problem that original details cannot be reproduced completely.

[2173]In order to solve the aforementioned problem, with the class
classification processing correction means, the class classification
adaptation processing correction unit 3502 of the image generating unit
103 employs the information (actual world estimation information) for
estimating the original image (signals of the actual world 1 having
original continuity) which is to be input to the sensor 2, as a target
image to be subjected to processing as well as a reference image, instead
of the input image from the sensor 2, so as to create a correction image
for correcting the predicted image output from the class classification
adaptation processing unit 3501.

[2174]The actual world estimation information is created by actions of the
data continuity detecting unit 101 and the actual world estimating unit
102.

[2175]That is to say, the data continuity detecting unit 101 detects the
continuity of the data (the data continuity corresponding to the
continuity contained in signals of the actual world 1, which are input to
the sensor 2) contained in the input image output from the sensor 2, and
outputs the detection results as the data continuity information, to the
actual world estimating unit 102.

[2176]Note that while FIG. 266 shows an arrangement wherein the angle is
employed as the data continuity information, the data continuity
information is not restricted to the angle; rather, various kinds of information may be employed as the data continuity information.

[2178]Note that while FIG. 266 shows an arrangement wherein the
features-amount image (detailed description thereof will be made later)
is employed as the actual world estimation information, the actual world
estimation information is not restricted to the features-amount image; rather, various kinds of information may be employed as described above.

[2181]The output image thus output is closer to the signals (image) of the actual world 1 than the predicted image is. That is to say, the class classification adaptation processing correction means enables the aforementioned problem to be solved.

[2182]Furthermore, with the signal processing device (image processing
device) 4 having a configuration as shown in FIG. 266, such processing
can be applied for the entire area of one frame. That is to say, while a
signal processing device using a hybrid technique described later (e.g.,
an arrangement described later with reference to FIG. 292) or the like
needs to identify the pixel region for generating the output image,
the signal processing device 4 shown in FIG. 266 has the advantage that
there is no need to identify such a pixel region.

[2183]Next, description will be made in detail regarding the class
classification adaptation processing unit 3501 of the image generating
unit 103.

[2185]In FIG. 267, the input image (SD image) input from the sensor 2 is
supplied to a region extracting unit 3511 and a region extracting unit
3515. The region extracting unit 3511 extracts a class tap (the SD pixels
existing at predetermined positions, which include the pixel of interest
(SD pixel)), and outputs the class tap to a pattern detecting unit 3512.
The pattern detecting unit 3512 detects the pattern of the input image
based upon the class tap thus input.

[2186]A class-code determining unit 3513 determines the class code based
upon the pattern detected by the pattern detecting unit 3512, and outputs
the class code to a coefficient memory 3514 and a region extracting unit
3515. The coefficient memory 3514 stores the coefficients for each class
code prepared beforehand by learning, reads out the coefficients
corresponding to the class code input from the class code determining
unit 3513, and outputs the coefficients to a prediction computing unit
3516.
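
The flow from class tap to class code can be sketched as follows; this is
a rough Python illustration rather than the disclosed circuitry, and the
tap layout and the 1-bit thresholding used for pattern detection are
assumptions made purely for the example.

import numpy as np

def extract_class_tap(sd_image, y, x,
                      offsets=((0, 0), (0, -1), (0, 1), (-1, 0), (1, 0))):
    """Gather the SD pixels at predetermined positions around the pixel of interest."""
    h, w = sd_image.shape
    return np.array([sd_image[min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)]
                     for dy, dx in offsets], dtype=np.float64)

def determine_class_code(class_tap):
    """Detect the pattern of the tap and map it to a class code
    (illustrative 1-bit thresholding; the patent does not fix this method here)."""
    threshold = (class_tap.max() + class_tap.min()) / 2.0
    bits = (class_tap >= threshold).astype(np.uint8)      # pattern detection
    return int(bits.dot(1 << np.arange(bits.size)))       # pattern -> class code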

[2187]Note that description will be made later regarding the learning
processing for obtaining the coefficients stored in the coefficient
memory 3514, with reference to a block diagram of a class classification
adaptation processing learning unit shown in FIG. 269.

[2188]Also, the coefficients stored in the coefficient memory 3514 are
used for creating a prediction image (HD image) as described later.
Accordingly, the coefficients stored in the coefficient memory 3514 will
be referred to as "prediction coefficients" in order to distinguish
the aforementioned coefficients from other kinds of coefficients.

[2189]The region extracting unit 3515 extracts a prediction tap (SD pixels
which exist at predetermined positions including the pixel of interest)
necessary for predicting and creating a prediction image (HD image) from
the input image (SD image) input from the sensor 2 based upon the class
code input from the class code determining unit 3513, and outputs the
prediction tap to the prediction computing unit 3516.

[2190]The prediction computing unit 3516 executes product-sum computation
using the prediction tap input from the region extracting unit 3515 and
the prediction coefficients input from the coefficient memory 3514,
creates the HD pixels of the prediction image (HD image) corresponding to
the pixel of interest (SD pixel) of the input image (SD image), and
outputs the HD pixels to the addition unit 3503.

[2191]More specifically, the coefficient memory 3514 outputs the
prediction coefficients corresponding to the class code supplied from the
class code determining unit 3513 to the prediction computing unit 3516.
The prediction computing unit 3516 executes the product-sum computation
represented by the following Expression (215) using the prediction tap
which is supplied from the region extracting unit 3515 and is extracted
from the pixel values of predetermined pixels of the input image, and the
prediction coefficients supplied from the coefficient memory 3514,
thereby obtaining (predicting and estimating) the HD pixels of the
prediction image (HD image).
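
As a rough illustration of the product-sum computation of Expression
(215) performed by the prediction computing unit 3516, the following
Python sketch may be considered; the function and variable names are
illustrative only.

import numpy as np

def predict_hd_pixel(prediction_tap, prediction_coefficients):
    """Product-sum computation of Expression (215): q' = sum over i of d_i * c_i."""
    c = np.asarray(prediction_tap, dtype=np.float64)           # c_i: SD pixels of the tap
    d = np.asarray(prediction_coefficients, dtype=np.float64)  # d_i: coefficients for the class code
    return float(np.dot(d, c))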

[2193]As described above, the class classification adaptation processing
unit 3501 predicts and estimates the corresponding HD image based upon
the SD image (input image), and accordingly, in this case, the HD image
output from the class classification adaptation processing unit 3501 is
referred to as "prediction image".

[2195]Note that with the class classification adaptation processing
correction technique, coefficient memory (correction coefficient memory
3554 which will be described later with reference to FIG. 276) is
included in the class classification adaptation processing correction
unit 3502, in addition to the coefficient memory 3514. Accordingly, as
shown in FIG. 268, a learning device 3504 according to the class
classification adaptation processing technique includes a learning unit
3561 (which will be referred to as "class classification adaptation
processing correction learning unit 3561" hereafter) for determining the
coefficients stored in the correction coefficient memory 3554 of the
class classification adaptation processing correction unit 3502 as well
as a learning unit 3521 (which will be referred to as "class
classification adaptation processing learning unit 3521" hereafter) for
determining the prediction coefficients (di in Expression (215))
stored in the coefficient memory 3514 of the class classification
adaptation processing unit 3501.

[2196]Accordingly, while the tutor image used in the class classification
adaptation processing learning unit 3521 will be referred to as "first
tutor image" hereafter, the tutor image used in the class classification
adaptation processing correction learning unit 3561 will be referred to
as "second tutor image" hereafter. In the same way, while the student
image used in the class classification adaptation processing learning
unit 3521 will be referred to as "first student image" hereafter, the
student image used in the class classification adaptation processing
correction learning unit 3561 will be referred to as "second student
image" hereafter.

[2197]Note that description will be made later regarding the class
classification adaptation processing correction learning unit 3561.

[2199]In FIG. 269, a certain image is input to the class classification
adaptation processing correction learning unit 3561 (FIG. 268), as well
as to a down-converter unit 3531 and a normal equation generating unit
3536 as a first tutor image (HD image).

[2200]The down-converter unit 3531 generates a first student image (SD
image) with a lower resolution than the first tutor image based upon the
input first tutor image (HD image) (converts the first tutor image into a
first student image with a lower resolution), and outputs the first
student image to region extracting units 3532 and 3535, and the class
classification adaptation processing correction learning unit 3561 (FIG.
268).

[2201]As described above, the class classification adaptation processing
learning unit 3521 includes the down-converter unit 3531, and
accordingly, the first tutor image (HD image) does not need to have a
higher resolution than the input image from the aforementioned sensor 2
(FIG. 266). The reason is that in this case, the first tutor image
subjected to down-converting processing (the processing for reducing the
resolution of the image) is employed as the first student image, i.e.,
the SD image. That is to say, the first tutor image corresponding to the
first student image is employed as an HD image. Accordingly, the input
image from the sensor 2 may be employed as the first tutor image without
any conversion.

[2203]The region extracting unit 3535 extracts the prediction tap (SD
pixels) from the first student image (SD image) input from the
down-converter unit 3531 based upon the class code input from the class
code determining unit 3534, and outputs the prediction tap to the normal
equation generating unit 3536 and a prediction computing unit 3538.

[2204]Note that the region extracting unit 3532, the pattern detecting
unit 3533, the class-code determining unit 3534, and the region
extracting unit 3535 have generally the same configurations and functions
as those of the region extracting unit 3511, the pattern detecting unit
3512, the class-code determining unit 3513, and the region extracting
unit 3515, of the class classification adaptation processing unit 3501
shown in FIG. 267.

[2205]The normal equation generating unit 3536 generates normal equations
based upon the prediction tap (SD pixels) of the first student image (SD
image) input from the region extracting unit 3535, and the HD pixels of
the first tutor image (HD image), for each class code of all class codes
input from the class-code determining unit 3534, and supplies the normal
equations to a coefficient determining unit 3537. Upon reception of the
normal equations corresponding to a certain class code from the normal
equation generating unit 3536, the coefficient determining unit 3537
computes the prediction coefficients using the normal equations. Then,
the coefficient determining unit 3537 supplies the computed prediction
coefficients to a prediction computing unit 3538, as well as storing the
prediction coefficients in the coefficient memory 3514 in association
with the class code.

[2206]Detailed description will be made regarding the normal equation
generating unit 3536 and the coefficient determining unit 3537.

[2207]In the aforementioned Expression (215), each of the prediction
coefficients di is an undetermined coefficient before the learning
processing. The learning processing is performed by inputting HD pixels
of the multiple tutor images (HD image) for each class code. Let us say
that there are m HD pixels corresponding to a certain class code. With
each of the m HD pixels as qk (k represents an integer of 1 through
m), the following Expression (216) is introduced from the Expression
(215).

q_k = Σ_{i=1}^{n} (d_i × c_ik) + e_k   (216)

[2208]That is to say, the Expression (216) indicates that the HD pixel
qk can be predicted and estimated by computing the right side of the
Expression (216). Note that in Expression (216), ek represents
error. That is to say, the HD pixel qk' of the prediction image
(HD image), which is the result of computing the right side, does not
completely match the actual HD pixel qk, and includes a certain
error ek.

[2209]Accordingly, the prediction coefficients di which minimize the sum
of the squares of the errors ek should be obtained by the learning
processing, for example.

[2210]Specifically, the number of the HD pixels qk prepared for the
learning processing should be greater than n (i.e., m > n). In this case,
the prediction coefficients di are determined as a unique solution
using the least squares method.

[2211]That is to say, the normal equations for obtaining the prediction
coefficients di in the right side of the Expression (216) using the
least squares method are represented by the following Expression (217).

[ Σ(c_1k × c_1k)  Σ(c_1k × c_2k)  . . .  Σ(c_1k × c_nk) ] [ d_1 ]   [ Σ(c_1k × q_k) ]
[ Σ(c_2k × c_1k)  Σ(c_2k × c_2k)  . . .  Σ(c_2k × c_nk) ] [ d_2 ] = [ Σ(c_2k × q_k) ]
[      . . .            . . .     . . .       . . .      ] [ . . ]   [      . . .     ]
[ Σ(c_nk × c_1k)  Σ(c_nk × c_2k)  . . .  Σ(c_nk × c_nk) ] [ d_n ]   [ Σ(c_nk × q_k) ]   (217)

(where each Σ denotes the sum over k = 1 through m)

[2212]Accordingly, the normal equations represented by the Expression
(217) are created and solved, thereby determining the prediction
coefficients di as a unique solution.

[2213]Specifically, let us say that the matrices in the Expression (217)
representing the normal equations are defined as the following
Expressions (218) through (220). In this case, the normal equations are
represented by the following Expression (221).

CMAT = [ Σ(c_ik × c_jk) ]  (an n × n matrix with i, j = 1 through n)   (218)

DMAT = [ d_1, d_2, . . . , d_n ]^T   (219)

QMAT = [ Σ(c_1k × q_k), Σ(c_2k × q_k), . . . , Σ(c_nk × q_k) ]^T   (220)

CMAT × DMAT = QMAT   (221)

(where each Σ denotes the sum over k = 1 through m)

[2214]As shown in Expression (219), each component of the matrix DMAT
is the prediction coefficient di which is to be obtained. With the
present embodiment, the matrix CMAT in the left side and the matrix
QMAT in the right side in Expression (221) are determined, thereby
obtaining the matrix DMAT (i.e., the prediction coefficients
di) using matrix computation.

[2215]More specifically, as shown in Expression (218), each component of
the matrix CMAT can be computed since the prediction tap cik is
known. With the present embodiment, the prediction tap cik is
extracted by the region extracting unit 3535. The normal equation
generating unit 3536 computes each component of the matrix CMAT
using the prediction tap cik supplied from the region extracting
unit 3535.

[2216]Also, with the present embodiment, the prediction tap cik and
the HD pixel qk are known. Accordingly, each component of the matrix
QMAT can be computed as shown in Expression (220). Note that the
prediction tap cik is the same as in the matrix CMAT. Also,
employed as the HD pixel qk is the HD pixel of the first tutor image
corresponding to the pixel of interest (SD pixel of the first student
image) included in the prediction tap cik. Accordingly, the normal
equation generating unit 3536 computes each component of the matrix
QMAT based upon the prediction tap cik supplied from the region
extracting unit 3535 and the first tutor image.

[2217]As described above, the normal equation generating unit 3536
computes each component of the matrix CMAT and the matrix QMAT,
and supplies the computation results in association with the class code
to the coefficient determining unit 3537.

[2218]The coefficient determining unit 3537 computes the prediction
coefficient di serving as each component of the matrix DMAT in
the above Expression (221) based upon the normal equation corresponding
to the supplied certain class code.

[2219]Specifically, the above Expression (221) can be transformed into the
following Expression (222).

DMAT = CMAT^(-1) QMAT   (222)

[2220]In Expression (222), each component of the matrix DMAT in the
left side is the prediction coefficient di which is to be obtained.
On the other hand, each component of the matrix CMAT and the matrix
QMAT is supplied from the normal equation generating unit 3536. With
the present embodiment, upon reception of each component of the matrix
CMAT and the matrix QMAT corresponding to the current class
code from the normal equation generating unit 3536, the coefficient
determining unit 3537 executes the matrix computation represented by the
right side of Expression (222), thereby computing the matrix DMAT.
Then, the coefficient determining unit 3537 supplies the computation
results (prediction coefficient di) to the prediction computation
unit 3538, as well as storing the computation results in the coefficient
memory 3514 in association with the class code.
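
The accumulation of CMAT and QMAT and the solution of Expression (222)
can be sketched as follows; this assumes the prediction taps and tutor HD
pixels for one class code have already been gathered, and uses a standard
linear solver instead of an explicit matrix inverse.

import numpy as np

def learn_prediction_coefficients(taps, hd_pixels):
    """Build CMAT and QMAT (Expressions (218) and (220)) from m > n samples of one
    class code and solve CMAT * DMAT = QMAT (Expression (221)) for DMAT."""
    c = np.asarray(taps, dtype=np.float64)        # shape (m, n): prediction taps c_ik
    q = np.asarray(hd_pixels, dtype=np.float64)   # shape (m,): first-tutor HD pixels q_k
    c_mat = c.T @ c                               # components: sum over k of c_ik * c_jk
    q_mat = c.T @ q                               # components: sum over k of c_ik * q_k
    return np.linalg.solve(c_mat, q_mat)          # prediction coefficients d_i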

[2221]The prediction computation unit 3538 executes product-sum
computation using the prediction tap input from the region extracting
unit 3535 and the prediction coefficients determined by the coefficient
determining unit 3537, thereby generating the HD pixel of the prediction
image (predicted image as the first tutor image) corresponding to the
pixel of interest (SD pixel) of the first student image (SD image). The
HD pixels thus generated are output as a learning-prediction image to the
class classification adaptation processing correction learning unit 3561
(FIG. 268).

[2222]More specifically, with the prediction computation unit 3538, the
prediction tap extracted from the pixel values around a certain pixel
position in the first student image supplied from the region extracting
unit 3535 is employed as ci (i represents an integer of 1 through
n). Furthermore, each of the prediction coefficients supplied from the
coefficient determining unit 3537 is employed as di. The prediction
computation unit 3538 executes product-sum computation represented by the
above Expression (215) using the ci and di thus employed,
thereby obtaining the HD pixel q' of the learning-prediction image (HD
image) (i.e., thereby predicting and estimating the first tutor image).

[2223]Now, description will be made with reference to FIG. 270 through
FIG. 275 regarding a problem of the conventional class classification
adaptation processing (class classification adaptation processing unit
3501) described above, i.e., a problem that original details cannot be
reproduced completely in a case that the HD image (predicted image of
signals in the actual world 1) is generated by the class classification
adaptation processing unit 3501 shown in FIG. 266 based upon the input
image (SD image) in which original details have been lost in the input
stage where the input image has been output from the sensor 2.

[2225]In FIG. 270, an HD image 3541 has a fine line with a gradient of
around 5° clockwise as to the vertical direction in the drawing.
On the other hand, an SD image 3542 is generated from the HD image 3541
such that the average of each block of 2×2 pixels (HD pixels) of
the HD image 3541 is employed as the corresponding single pixel (SD
pixel) thereof. That is to say, the SD image 3542 is a "down-converted"
(reduced-resolution) image of the HD image 3541.
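
The down-conversion used in this simulation (averaging each 2×2 block of
HD pixels into one SD pixel) can be sketched as follows; the function
name is illustrative.

import numpy as np

def down_convert_2x2(hd_image):
    """Average each 2x2 block of HD pixels into one SD pixel."""
    h, w = hd_image.shape
    hd = hd_image[:h - h % 2, :w - w % 2].astype(np.float64)   # crop to even dimensions
    return hd.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))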

[2226]In other words, the HD image 3541 can be assumed to be an image
(signals in the actual world 1 (FIG. 266)) which is to be output from the
sensor 2 (FIG. 266) in this simulation. In this case, the SD image 3542
can be assumed to be an image corresponding to the HD image 3541,
obtained from the sensor 2 having certain integration properties in the
spatial direction in this simulation. That is to say, the SD image 3542
can be assumed to be an image input from the sensor 2 in this simulation.

[2227]In this simulation, the SD image 3542 is input to the class
classification adaptation processing unit 3501 (FIG. 266). The predicted
image output from the class classification adaptation processing unit
3501 is a predicted image 3543. That is to say, the predicted image 3543
is an HD image (image with the same resolution as with the original HD
image 3541) generated by conventional class classification adaptation
processing. Note that the prediction coefficients (prediction
coefficients stored in the coefficient memory 3514 (FIG. 267)) used for
prediction computation by the class classification adaptation processing
unit 3501 are obtained with learning/computation processing performed by
the class classification adaptation processing learning unit 3561 (FIG.
269) with the HD image 3541 as the first tutor image and with the SD
image 3542 as the first student image.

[2228]Making a comparison between the HD image 3541, the SD image 3542,
and the predicted image 3543, it has been confirmed that the predicted
image 3543 is more similar to the HD image 3541 than the SD image 3542.

[2229]The comparison results indicate that the class classification
adaptation processing unit 3501 generates the predicted image 3543 with
reproduced original details using conventional class classification
adaptation processing based upon the SD image 3542 in which the original
details in the HD image 3541 have been lost.

[2230]However, making a comparison between the predicted image 3543 and
the HD image 3541, it cannot be said definitely that the predicted image
3543 is a completely reproduced image of the HD image 3541.

[2231]In order to investigate the cause of such insufficient reproduction
of the predicted image 3543 as to the HD image 3541, the present
applicant formed a summed image by making the sum of the HD image 3541
and the inverse image of the predicted image 3543 using the addition unit
3546, i.e., a subtraction image 3544 obtained by subtracting the
predicted image 3543 from the HD image 3541 (In a case of large
difference in pixel values therebetween, the pixel of the subtraction
image is formed with a density close to white. On the other hand, in a
case of small difference in pixel values therebetween, the pixel of the
subtraction image is formed with a density close to black).

[2232]In the same way, the present applicant formed a summed image by
making the sum of the HD image 3541 and the inverse image of the SD image
3542 using the addition unit 3547, i.e., a subtraction image 3545
obtained by subtracting the SD image 3542 from the HD image 3541 (In a
case of large difference in pixel values therebetween, the pixel of the
subtraction image is formed with a density close to white. On the other
hand, in a case of small difference in pixel values therebetween, the
pixel of the subtraction image is formed with a density close to black).

[2233]Then, making a comparison between the subtraction image 3544 and the
subtraction image 3545, the present applicant obtained investigation
results as follows.

[2234]That is to say, the region which exhibits great difference in the
pixel value between the HD image 3541 and the SD image 3542 (i.e., the
region formed with a density close to white, in the subtraction image
3545) generally matches the region which exhibits great difference in the
pixel value between the HD image 3541 and the predicted image 3543 (i.e.,
the region formed with a density close to white, in the subtraction image
3544).

[2235]In other words, the region in the predicted image 3543, exhibiting
insufficient reproduction results as to the HD image 3541 generally
matches the region which exhibits great difference in the pixel value
between the HD image 3541 and the SD image 3542 (i.e., the region formed
with a density close to white, in the subtraction image 3545).

[2236]Then, in order to identify the cause behind these investigation
results, the present applicant made further investigation as follows.

[2237]That is to say, first, the present applicant investigated
reproduction results in the region which exhibits small difference in the
pixel value between the HD image 3541 and the predicted image 3543 (i.e.,
the region formed with a density close to black, in the subtraction image
3544). With the aforementioned region, the information obtained for this
investigation is: the actual pixel values of the HD image 3541; the actual
pixel values of the SD image 3542; and the actual waveform corresponding
to the HD image 3541 (signals in the actual world 1). The investigation
results are shown in FIG. 271 and FIG. 272.

[2238]FIG. 271 shows an example of the investigation-target region. Note
that in FIG. 271, the horizontal direction is represented by the X
direction which is one spatial direction, and the vertical direction is
represented by the Y direction which is another spatial direction.

[2239]That is to say, the present applicant investigated reproduction
results of a region 3544-1 in the subtraction image 3544 shown in FIG.
271, which is an example of a region which exhibits small difference in
the pixel value between the HD image 3541 and the predicted image 3543.

[2240]FIG. 272 is a chart which shows: the actual pixel values of the HD
image 3541; the actual pixel values of the SD image 3542, corresponding
to the four pixels from the left side of a series of six HD pixels in the
X direction within the region 3544-1 shown in FIG. 271; and the actual
waveform (signals in the actual world 1).

[2241]In FIG. 272, the vertical axis represents the pixel value, and the
horizontal axis represents the x-axis parallel with the spatial direction
X. Note that the X axis is defined with the origin as the position of the
left end of the third HD pixel from the left side of the six HD pixels
within the subtraction image 3544 in the drawing. Each coordinate value
is defined with the origin thus obtained as the base. Note that the
X-axis coordinate values are defined with the pixel width of an HD pixel
of the subtraction image 3544 as 0.5. That is to say, the subtraction
image 3544 is an HD image, and accordingly, each pixel of the HD image is
plotted in the chart with the pixel width Lt of 0.5 (which will be
referred to as "HD-pixel width Lt" hereafter). On the other hand, in
this case, each pixel of the SD image 3542 is plotted with the pixel
width (which will be referred to as "SD-pixel width Ls" hereafter)
which is twice the HD-pixel width Lt, i.e., with the SD-pixel width
Ls of 1.

[2242]Also, in FIG. 272, the solid line represents the pixel values of the
HD image 3541, the dotted line represents the pixel values of the SD
image 3542, and the broken line represents the signal waveform of the
actual world 1 along the X-direction. Note that it is difficult to plot
the actual waveform of the actual world 1 in reality. Accordingly, the
broken line shown in FIG. 272 represents an approximate function f(x)
which approximates the waveform along the X-direction using the
aforementioned linear polynomial approximation technique (the actual world
estimating unit 102 according to the first embodiment shown in FIG. 266).

[2243]Then, the present applicant investigated reproduction results in the
region which exhibits large difference in the pixel value between the HD
image 3541 and the predicted image 3543 (i.e., the region formed with a
density close to white, in the subtraction image 3544) in the same way as
in the aforementioned investigation with regard to the region which
exhibits small difference in the pixel value therebetween. With the
aforementioned region, the information obtained for this investigation is:
the actual pixel values of the HD image 3541; the actual pixel values of the SD
image 3542; and the actual waveform corresponding to the HD image 3541
(signals in the actual world 1), in the same way. The investigation
results are shown in FIG. 273 and FIG. 274.

[2244]FIG. 273 shows an example of the investigation-target region. Note
that in FIG. 273, the horizontal direction is represented by the X
direction which is a spatial direction, and the vertical direction is
represented by the Y direction which is another spatial direction.

[2245]That is to say, the present applicant investigated reproduction
results of a region 3544-2 in the subtraction image 3544 shown in FIG.
273, which is an example of a region which exhibits large difference in
the pixel value between the HD image 3541 and the predicted image 3543.

[2246]FIG. 274 is a chart which shows: the actual pixel values of the HD
image 3541; the actual pixel values of the SD image 3542, corresponding
to the four pixels from the left side of a series of six HD pixels in the
X direction within the region 3544-2 shown in FIG. 273; and the actual
waveform (signals in the actual world 1).

[2247]In FIG. 274, the vertical axis represents the pixel value, and the
horizontal axis represents the x-axis parallel with the spatial direction
X. Note that the X axis is defined with the origin as the position of the
left end of the third HD pixel from the left side of the six HD pixels
within the subtraction image 3544 in the drawing. Each coordinate value
is defined with the origin thus obtained as the base. Note that the
X-axis coordinate values are defined with the SD-pixel width Ls of
1.

[2248]In FIG. 274, the solid line represents the pixel values of the HD
image 3541, the dotted line represents the pixel values of the SD image
3542, and the broken line represents the signal waveform of the actual
world 1 along the X-direction. Note that the broken line shown in FIG.
274 represents an approximate function f(x) which approximates the
waveform along the X-direction, in the same way as with the broken line
shown in FIG. 272.

[2249]Making a comparison between the charts shown in FIG. 272 and FIG.
274, it is clear from the waveforms of the approximate functions f(x)
shown in the drawings that each region includes the line object.

[2250]However, there is the difference therebetween as follows. That is to
say, while the line object extends over the region of x of around 0 to 1
in FIG. 272, the line object extends over the region of x of around -0.5
to 0.5 in FIG. 274. That is to say, in FIG. 272, most of the
line object is included within the single SD pixel positioned at the
region of x of 0 to 1 in the SD image 3542. On the other hand, in FIG.
274, a part of the line object is included within the single SD pixel
positioned at the region of x of 0 to 1 in the SD image 3542 (the edge of
the line object adjacent to the background is also included therewithin).

[2251]Accordingly, in a case shown in FIG. 272, there is only a small
difference in the pixel value between the two HD pixels (represented by
the solid line) extending over the region of x of 0 to 1.0 in the HD image
3541. The pixel value of the corresponding SD pixel (represented by the
dotted line in the drawing) is the average of the pixel values of the two
HD pixels. As a result, it can be easily understood that there is only a
small difference in the pixel value between the SD pixel of the SD image
3542 and the two HD pixels of the HD image 3541.

[2252]In such a state (the state shown in FIG. 272), let us consider
reproduction processing for generating two HD pixels (the pixels of the
predicted image 3543) which extend over the region of x of 0 to 1.0 with
the single SD pixel extending over the region of x of 0 to 1.0 as the pixel of
interest using the conventional class classification adaptation
processing. In this case, the generated HD pixels of the predicted image
3543 approximate the HD pixels of the HD image 3541 with sufficiently
high precision as shown in FIG. 271. That is to say, in the region
3544-1, there is the small difference in the pixel value of the HD pixel
between the predicted image 3543 and the HD image 3541, and accordingly,
the subtraction image is formed with a density close to black as shown in
FIG. 271.

[2253]On the other hand, in a case shown in FIG. 274, there is a large
difference in the pixel value between the two HD pixels (represented by
the solid line) extending over the region of x of 0 to 1.0 in the HD image
3541. The pixel value of the corresponding SD pixel (represented by the
dotted line in the drawing) is the average of the pixel values of the two
HD pixels. As a result, it can be easily understood that there is a
large difference in the pixel value between the SD pixel of the SD image
3542 and the two HD pixels of the HD image 3541, as compared with the
corresponding difference shown in FIG. 272.

[2254]In such a state (the state shown in FIG. 274), let us consider
reproduction processing for generating two HD pixels (the pixels of the
predicted image 3543) which extend over the region of x of 0 to 1.0 with
the single SD pixel extending over the region of x of 0 to 1.0 as the pixel of
interest using the conventional class classification adaptation
processing. In this case, the generated HD pixels of the predicted image
3543 approximate the HD pixels of the HD image 3541 with poor precision
as shown in FIG. 273. That is to say, in the region 3544-2, there is the
large difference in the pixel value of the HD pixel between the predicted
image 3543 and the HD image 3541, and accordingly, the subtraction image
is formed with a density close to white as shown in FIG. 273.

[2255]Making a comparison between the approximate functions f(x)
(represented by the broken line shown in the drawings) for the signals in
the actual world 1 shown in FIG. 272 and FIG. 274, it can be understood
as follows. That is to say, while the change in the approximate function
f(x) is small over the region of x of 0 to 1 in FIG. 272, the change in
the approximate function f(x) is large over the region of x of 0 to 1 in
FIG. 274.

[2256]Accordingly, there is an SD pixel in the SD image 3542 as shown in
FIG. 272, which extends over the range of x of 0 to 1.0, over which the
change in the approximate function f(x) is small (i.e., the change in
signals in the actual world 1 is small).

[2257]From this perspective, the investigation results described above can
also be stated as follows. That is to say, in a case of reproducing the
HD pixels based upon an SD pixel which extends over a region over
which the change in the approximate function f(x) is small (i.e., the
change in signals in the actual world 1 is small), such as the SD pixel
extending over the region of x of 0 to 1.0 shown in FIG. 272, using the
conventional class classification adaptation processing, the generated HD
pixels approximate the signals in the actual world 1 (in this case, the
image of the line object) with sufficiently high precision.

[2258]On the other hand, there is another SD pixel in the SD image 3542 as
shown in FIG. 274, which extends over the range of x of 0 to 1.0, over
which the change in the approximate function f(x) is large (i.e., the
change in signals in the actual world 1 is large).

[2259]From this perspective, the investigation results described above can
also be stated as follows. That is to say, in a case of reproducing the
HD pixels based upon an SD pixel which extends over a region over
which the change in the approximate function f(x) is large (i.e., the
change in signals in the actual world 1 is large), such as the SD pixel
extending over the region of x of 0 to 1.0 shown in FIG. 274, using the
conventional class classification adaptation processing, the generated HD
pixels approximate the signals in the actual world 1 (in this case, the
image of the line object) with poor precision.

[2260]The conclusion of the investigation results described above is that
in a case as shown in FIG. 275, it is difficult to reproduce the details
extending over the region corresponding to a single pixel using the
conventional signal processing based upon the relation between pixels
(e.g., the class classification adaptation processing).

[2261]That is to say, FIG. 275 is a diagram for describing the
investigation results obtained by the present applicant.

[2262]In FIG. 275, the horizontal direction in the drawing represents the
X-direction which is a direction (spatial direction) along which the
detecting elements of the sensor 2 (FIG. 266) are arrayed. On the other
hand, the vertical direction in the drawing represents the light-amount
level or the pixel value. The dotted line represents the X
cross-sectional waveform F(x) of the signal in the actual world 1 (FIG.
266). The solid line represents the pixel value P output from the sensor
2 in a case where the sensor 2 receives a signal (image) in the actual world 1
represented as described above. Also, the width (length in the
X-direction) of a detecting element of the sensor 2 is represented by
Lc. The change in the X cross-sectional waveform F(x) as to the
pixel width Lc of the sensor 2, which is the width Lc of the
detecting element of the sensor 2, is represented by ΔP.

[2263]Here, the aforementioned SD image 3542 (FIG. 270) is an image for
simulating the image (FIG. 266) input from the sensor 2. With this
simulation, evaluation can be made with the SD-pixel width Ls of the
SD image 3542 (FIG. 272 and FIG. 274) as the pixel width (width of the
detecting element) Lc of the sensor 2.

[2264]While description has been made regarding investigation for the
signal in the actual world 1 (approximate function f(x)) which reflects
the fine line, there are various types of change in the signal level in
the actual world 1.

[2265]Accordingly, the reproduction results under the conditions shown in
FIG. 275 can be estimated based upon the investigation results. The
reproduction results thus estimated are as follows.

[2266]That is to say, in a case of reproducing HD pixels (e.g., pixels of
the predicted image output from the class classification adaptation
processing unit 3501 in FIG. 266) using the conventional class
classification adaptation processing with an SD pixel (output pixel from
the sensor 2), over which the change ΔP in signals in the actual
world 1 (the change in the X cross-sectional waveform F(x)) is large, as
the pixel of interest, the generated HD pixels approximate the signals in
the actual world 1 (X cross-sectional waveform F(x) in a case shown in
FIG. 275) with poor precision.

[2267]Specifically, with the conventional methods such as the class
classification adaptation processing, image processing is performed based
upon the relation between multiple pixels output from the sensor 2.

[2268]That is to say, as shown in FIG. 275, let us consider a signal which
exhibits rapid change ΔP in the X cross-sectional waveform F(x),
i.e., rapid change in the signal in the actual world 1, over the region
corresponding to a single pixel. Such a signal is integrated (strictly,
time-spatial integration), and only a single pixel value P is output (the
signal over the single pixel is represented by the uniform pixel value
P).
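
A small numerical illustration of this loss of detail may help; the
waveform and the numbers below are purely illustrative and are not taken
from the drawings.

import numpy as np

# Illustrative only: a cross-sectional waveform F(x) with a rapid change
# (a step of height ΔP = 0.8) inside a single detecting element of width Lc = 1.
def F(x):
    return np.where(x < 0.5, 0.1, 0.9)

xs = np.linspace(0.0, 1.0, 1001)   # positions across the one pixel
P = np.trapz(F(xs), xs)            # the sensor outputs only this single integrated value
print(P)                           # approximately 0.5: the step inside the pixel is no longer recoverable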

[2269]With the conventional methods, image processing is performed with
the pixel value P as both the reference and the target. In other words,
with the conventional methods, image processing is performed without
giving consideration to the change in the signal in the actual world 1 (X
cross-sectional waveform F(x)) over a single pixel, i.e., without giving
consideration to the details extending over a single pixel.

[2270]Any image processing (even class classification adaptation
processing) has difficulty in reproducing change in the signal in the
actual world 1 over a single pixel with high precision as long as the
image processing is performed in increments of pixels. In particular,
great change ΔP in the signal in the actual world 1 leads to marked
difficulty therein.

[2271]In other words, the problem of the aforementioned class
classification adaptation processing, i.e., the cause of insufficient
reproduction of the original details using the class classification
adaptation processing, which often occurs in a case of employing the
input image (SD image) in which the details have been lost in the stage
where the image has been output from the sensor 2, is as follows. The
cause is that the class classification adaptation processing is performed
in increments of pixels (a single pixel has a single pixel value) without
giving consideration to change in signals in the actual world 1 over a
single pixel.

[2272]Note that all the conventional image processing methods including
the class classification adaptation processing have the same problem, and
the cause of the problem is completely the same.

[2273]As described above, the conventional image processing methods have
the same problem and the same cause of the problem.

[2274]On the other hand, the combination of the data continuity detecting
unit 101 and the actual world estimating unit 102 (FIG. 3) allows
estimation of the signals in the actual world 1 based upon the input
image from the sensor 2 (i.e., the image in which the change in the
signal in the actual world 1 has been lost) using the continuity of the
signals in the actual world 1. That is to say, the actual world
estimating unit 102 has a function for outputting the actual world
estimation information which allows estimation of the signal in the
actual world 1.

[2275]Accordingly, the change in the signals in the actual world 1 over a
single pixel can be estimated based upon the actual world estimation
information.

[2276]In this specification, the present applicant has proposed a class
classification adaptation processing correction method as shown in FIG.
266, for example, based upon the mechanism in which the predicted image
(which represents the image in the actual world 1, predicted without
giving consideration to the change in the signal in the actual world 1
over a single pixel) generated by the conventional class classification
adaptation processing is corrected using a predetermined correction image
(which represents the estimated error of the predicted image due to
change in the signal in the actual world 1 over a single pixel) generated
based on the actual world estimation information, thereby solving the
aforementioned problem.

[2277]That is to say, in FIG. 266, the data continuity detecting unit 101
and the actual world estimating unit 102 generate the actual world
estimation information. Then, the class classification adaptation
processing correction unit 3502 generates a correction image having a
predetermined format based upon the actual world estimation information
thus generated. Subsequently, the addition unit 3503 corrects the
predicted image output from the class classification adaptation
processing unit 3501 using the correction image output from the class
classification adaptation processing correction unit 3502 (Specifically,
makes the sum of the predicted image and the correction image, and
outputs the summed image as an output image).

[2278]Note that detailed description has been made regarding the class
classification adaptation processing unit 3501 included in the image
generating unit 103 for performing the class classification adaptation
processing correction method. Also, the type of the addition unit 3503 is
not restricted in particular as long as the addition unit 3503 has a
function of making the sum of the predicted image and the correction
image. Examples employed as the addition unit 3503 include various types
of adders, addition programs, and so forth.

[2279]Accordingly, detailed description will be made below regarding the
class classification adaptation processing correction unit 3502 which has
not been described.

[2280]First, description will be made regarding the mechanism of the class
classification adaptation processing correction unit 3502.

[2281]As described above, in FIG. 270, let us assume the HD image 3541 as
the original image (signals in the actual world 1) which is to be input
to the sensor 2 (FIG. 266). Furthermore, let us assume the SD image 3542
as the input image from the sensor 2. In this case, the predicted image
3543 can be assumed as the predicted image (image obtained by predicting
the original image (HD image 3541)) output from the class classification
adaptation processing unit 3501.

[2282]On the other hand, the image obtained by subtracting the predicted
image 3543 from the HD image 3541 is the subtraction image 3544.

[2283]Accordingly, the HD image 3541 is reproduced by actions of: the
class classification adaptation processing correction unit 3502 having a
function of creating the subtraction image 3544 and outputting the
subtraction image 3544 as a correction image; and the addition unit 3503
having a function of making the sum of the predicted image 3543 output
from the class classification adaptation processing unit 3501 and the
subtraction image 3544 (correction image) output from the class
classification adaptation processing correction unit 3502.

[2284]That is to say, the class classification adaptation processing
correction unit 3502 suitably predicts the subtraction image (with the
same resolution as with the predicted image output from the class
classification adaptation processing unit 3501), which is the difference
between the image which represents the signals in the actual world 1
(original image which is to be input to the sensor 2) and the predicted
image output from the class classification adaptation processing unit
3501, and outputs the subtraction image thus predicted (which will be
referred to as "subtraction predicted image" hereafter) as a correction
image, thereby almost completely reproducing the signals in the actual
world 1 (original image).

[2285]On the other hand, as described above, there is a relation between:
the difference (error) between the signals in the actual world 1 (the
original image which is to be input to the sensor 2) and the predicted
image output from the class classification adaptation processing unit
3501; and the change in the signals in the actual world 1 over a single
pixel of the input image. Also, the actual world estimating unit 102 has
a function of estimating the signals in the actual world 1, thereby
allowing estimation of the features for each pixel, representing the
change in the signal in the actual world 1 over a single pixel of the
input image.

[2286]With such a configuration, the class classification adaptation
processing correction unit 3502 receives the features for each pixel of
the input image, and creates the subtraction predicted image based
thereupon (predicts the subtraction image).

[2287]Specifically, for example, the class classification adaptation
processing correction unit 3502 receives an image (which will be referred
to as "feature-amount image" hereafter) from the actual world estimating
unit 102, as the actual world estimation information in which the
features are represented by the pixel values.

[2288]Note that the feature-amount image has the same resolution as with
the input image from the sensor 2. On the other hand, the correction
image (subtraction predicted image) has the same resolution as with the
predicted image output from the class classification adaptation
processing unit 3501.

[2289]With such a configuration, the class classification adaptation
processing correction unit 3502 predicts and computes the subtraction
image based upon the feature-amount image using the conventional class
classification adaptation processing with the feature-amount image as an
SD image and with the correction image (subtraction predicted image) as
an HD image, thereby obtaining suitable subtraction predicted image as a
result of the prediction computation.

[2291]FIG. 276 shows a configuration example of the class classification
adaptation processing correction unit 3502 which operates according to the
mechanism described above.

[2292]In FIG. 276, the feature-amount image (SD image) input from the
actual world estimating unit 102 is supplied to region extracting units
3551 and 3555. The region extracting unit 3551 extracts a class tap (a
set of SD pixels positioned at a predetermined region including the pixel
of interest) necessary for class classification from the supplied
feature-amount image, and outputs the extracted class tap to a pattern
detecting unit 3552. The pattern detecting unit 3552 detects the pattern
of the feature-amount image based upon the class tap thus input.

[2293]A class code determining unit 3553 determines the class code based
upon the pattern detected by the pattern detecting unit 3552, and outputs
the determined class code to correction coefficient memory 3554 and the
region extracting unit 3555. The correction coefficient memory 3554
stores the coefficients for each class code, obtained by learning. The
correction coefficient memory 3554 reads out the coefficients
corresponding to the class code input from the class code determining
unit 3553, and outputs the coefficients to a correction computing unit
3556.

[2294]Note that description will be made later with reference to the block
diagram of the class classification adaptation processing correction
learning unit shown in FIG. 277 regarding the learning processing for
calculating the coefficients stored in the correction coefficient memory
3554.

[2295]On the other hand, the coefficients, i.e., prediction coefficients,
stored in the correction coefficient memory 3554 are used for predicting
the subtraction image (for generating the subtraction predicted image
which is an HD image) as described later. However, the term, "prediction
coefficients" used in the above description has indicated the
coefficients stored in the coefficient memory 3514 (FIG. 267) of the
class classification adaptation processing unit 3501. Accordingly, the
prediction coefficients stored in the correction coefficient memory 3554
will be referred to as "correction coefficients" hereafter in order to
distinguish the coefficients from the prediction coefficients stored in
the coefficient memory 3514.

[2296]The region extracting unit 3555 extracts a prediction tap (a set of
the SD pixels positioned at a predetermined region including the pixel of
interest), necessary for predicting the subtraction image (HD image)
(i.e., for generating the subtraction predicted image which is an HD
image) corresponding to the class code, from the feature-amount image (SD
image) input from the actual world estimating unit 102, based upon the
class code input from the class code determining unit 3553, and outputs
the extracted prediction tap to the correction computing unit 3556. The
correction computing
unit 3556 executes product-sum computation using the prediction tap input
from the region extracting unit 3555 and the correction coefficients
input from the correction coefficient memory 3554, thereby generating HD
pixels of the subtraction predicted image (HD image) corresponding to the
pixel of interest (SD pixel) of the feature-amount image (SD image).

[2297]More specifically, the correction coefficient memory 3554 outputs
the correction coefficients corresponding to the class code supplied from
the class code determining unit 3553 to the correction computing unit
3556. The correction computing unit 3556 executes product-sum computation
represented by the following Expression (223) using the prediction tap
(SD pixels) extracted from the pixel values at predetermined positions
in the feature-amount image supplied from the region extracting unit
3555 and the correction coefficients supplied from the correction
coefficient memory 3554, thereby obtaining HD pixels of the subtraction
predicted image (HD image) (i.e., predicting and estimating the
subtraction image).

u' = Σ_{i=1}^{n} (g_i × a_i)   (223)

[2298]In Expression (223), u' represents the HD pixel of the subtraction
predicted image (HD image). Each of ai (i represents an integer of 1
through n) represents the corresponding prediction tap (SD pixels). On
the other hand, each of gi represents the corresponding correction
coefficient.

[2299]Accordingly, while the class classification adaptation processing
unit 3501 shown in FIG. 266 outputs the HD pixel q' represented by the
above Expression (215), the class classification adaptation processing
correction unit 3502 outputs the HD pixel u' of the subtraction predicted
image represented by Expression (223). Then, the addition unit 3503 makes
the sum of the HD pixel q' of the predicted image and the HD pixel u' of
the subtraction predicted image (the sum will be represented by "o'"
hereafter), and outputs the sum to external circuits, as an HD pixel of
the output image.

[2300]That is to say, the HD pixel o' of the output image output from the
image generating unit 103 in the final stage is represented by the
following Expression (224).

o' = q' + u'   (224)
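
A minimal sketch of the addition unit 3503 computing Expression (224)
per pixel follows; the array names are illustrative.

import numpy as np

def correct_predicted_image(predicted_hd, subtraction_predicted_hd):
    """Addition unit 3503: o' = q' + u' (Expression (224)) for every HD pixel."""
    q = np.asarray(predicted_hd, dtype=np.float64)
    u = np.asarray(subtraction_predicted_hd, dtype=np.float64)
    return q + u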

[2302]In FIG. 268 as described above, upon completion of learning
processing, the class classification adaptation processing learning unit
3521 outputs a learning predicted image obtained by predicting the first
tutor image based upon the first student image using the prediction
coefficients calculated by learning, as well as outputting the first
tutor image (HD image) and the first student image (SD image) used for
learning processing to the class classification adaptation processing
correction learning unit 3561.

[2303]Returning to FIG. 277, of these images, the first student image is
input to a data continuity detecting unit 3572.

[2304]On the other hand, of these images, the first tutor image and the
learning predicted image are input to an addition unit 3571. Note that
the learning predicted image is inverted before input to the addition
unit 3571.

[2305]The addition unit 3571 makes the sum of the input first tutor image
and the inverted input learning predicted image, i.e., generates a
subtraction image between the first tutor image and the learning
predicted image, and outputs the generated subtraction image to a normal
equation generating unit 3578 as a tutor image used in the class
classification adaptation processing correction learning unit 3561 (which
will be referred to as "second tutor image" for distinguishing this image
from the first tutor image).

[2306]The data continuity detecting unit 3572 detects the continuity of
the data contained in the input first student image, and outputs the
detection results to an actual world estimating unit 3573 as data
continuity information.

[2307]The actual world estimating unit 3573 generates a feature-amount
image based upon the data continuity information thus input, and outputs
the generated image to region extracting units 3574 and 3577 as a student
image used in the class classification adaptation processing correction
learning unit 3561 (the student image will be referred to as "second
student image" for distinguishing this student image from the first
student image described above).

[2309]The region extracting unit 3577 extracts the prediction tap (SD
pixels) from the second student image (SD image) input from the actual
world estimating unit 3573 based upon the class code input from the class
code determining unit 3576, and outputs the extracted prediction tap to
the normal equation generating unit 3578.

[2310]Note that the aforementioned region extracting unit 3574, the
pattern detecting unit 3575, the class code determining unit 3576, and
the region extracting unit 3577, have generally the same configurations
and functions as with the region extracting unit 3551, the pattern
detecting unit 3552, the class code determining unit 3553, and the region
extracting unit 3555 of the class classification adaptation processing
correction unit 3502 shown in FIG. 276, respectively. Also, the
aforementioned data continuity detecting unit 3572 and the actual world
estimating unit 3573 have generally the same configurations and functions
as with the data continuity detecting unit 101 and the actual world
estimating unit 102 shown in FIG. 266, respectively.

[2311]The normal equation generating unit 3578 generates a normal equation
based upon the prediction tap (SD pixels) of the second student image (SD
image) input from the region extracting unit 3577 and the HD pixels of
the second tutor image (HD image), for each of the class codes input from
the class code determining unit 3576, and supplies the normal equation to
a correction coefficient determining unit 3579. Upon reception of the
normal equation for the corresponding class code from the normal equation
generating unit 3578, the correction coefficient determining unit 3579
computes the correction coefficients using the normal equation, and the
computed correction coefficients are stored in the correction coefficient
memory 3554 in association with the class code.

[2312]Now, detailed description will be made regarding the normal equation
generating unit 3578 and the correction coefficient determining unit
3579.

[2313]In the above Expression (223), all the correction coefficients
gi are undetermined before learning. With the present embodiment,
learning is performed by inputting multiple HD pixels of the tutor image
(HD image) for each class code. Let us say that there are m HD pixels
corresponding to a certain class code, and each of the m HD pixels is
represented by uk (k is an integer of 1 through m). In this case,
the following Expression (225) is introduced from the above Expression
(223).

u_k = Σ_{i=1}^{n} (g_i × a_ik) + e_k   (225)

[2314]That is to say, the Expression (225) indicates that the HD pixels
corresponding to a certain class code can be predicted and estimated by
computing the right side of this Expression. Note that in Expression
(225), ek represents error. That is to say, the HD pixel uk' of
the subtraction predicted image (HD image), which is the computation
result of the right side of this Expression, does not exactly match the HD
pixel uk of the actual subtraction image, but contains a certain
error ek.

[2315]With Expression (225), the correction coefficients gi are
obtained by learning such that the sum of squares of the errors ek
is minimized, for example.

[2316]With the present embodiment, the m (m>n) HD pixels uk are
prepared for learning processing. In this case, the correction
coefficients gi can be calculated as a unique solution using the
least squares method.

[2317]That is to say, the normal equation for calculating the correction
coefficients gi in the right side of the Expression (225) using the
least squares method is represented by the following Expression (226).

[ Σ(a_1k × a_1k)  Σ(a_1k × a_2k)  . . .  Σ(a_1k × a_nk) ] [ g_1 ]   [ Σ(a_1k × u_k) ]
[ Σ(a_2k × a_1k)  Σ(a_2k × a_2k)  . . .  Σ(a_2k × a_nk) ] [ g_2 ] = [ Σ(a_2k × u_k) ]
[      . . .            . . .     . . .       . . .      ] [ . . ]   [      . . .     ]
[ Σ(a_nk × a_1k)  Σ(a_nk × a_2k)  . . .  Σ(a_nk × a_nk) ] [ g_n ]   [ Σ(a_nk × u_k) ]   (226)

(where each Σ denotes the sum over k = 1 through m)

[2318]With the matrices in the Expression (226) defined as the following
Expressions (227) through (229), the normal equation is represented by the
following Expression (230).

AMAT = [ Σ(a_ik × a_jk) ]  (an n × n matrix with i, j = 1 through n)   (227)

GMAT = [ g_1, g_2, . . . , g_n ]^T   (228)

UMAT = [ Σ(a_1k × u_k), Σ(a_2k × u_k), . . . , Σ(a_nk × u_k) ]^T   (229)

AMAT × GMAT = UMAT   (230)

(where each Σ denotes the sum over k = 1 through m)

[2319]As shown in Expression (228), each component of the matrix GMAT
is the correction coefficient gi which is to be obtained. With the
present embodiment, in Expression (230), the matrix AMAT in the left
side thereof and the matrix UMAT in the right side thereof are
prepared, thereby calculating the matrix GMAT (i.e., the correction
coefficients gi) using the matrix solution method.

[2320]Specifically, with the present embodiment, each prediction tap
aik is known, and accordingly, each component of the matrix
AMAT represented by Expression (227) can be obtained. Each
prediction tap aik is extracted by the region extracting unit 3577,
and the normal equation generating unit 3578 computes each component of
the matrix AMAT using the prediction tap aik supplied from the
region extracting unit 3577.

[2321]On the other hand, with the present embodiment, the prediction tap
aik and the HD pixel uk of the subtraction image are prepared,
and accordingly, each component of the matrix UMAT represented by
Expression (229) can be calculated. Note that the prediction tap aik
is the same as that of the matrix AMAT. On the other hand, the HD
pixel uk of the subtraction image matches the corresponding HD pixel
of the second tutor image output from the addition unit 3571. With the
present embodiment, the normal equation generating unit 3578 computes
each component of the matrix UMAT using the prediction tap aik
supplied from the region extracting unit 3577 and the second tutor image
(the subtraction image between the first tutor image and the learning
predicted image).

[2322]As described above, the normal equation generating unit 3578
computes each component of the matrix AMAT and the matrix UMAT
for each class code, and supplies the computation results to the
correction coefficient determining unit 3579 in association with the
class code.

[2323]The correction coefficient determining unit 3579 computes the
correction coefficients gi each of which is the component of the
matrix GMAT represented by the above Expression (230) based upon the
normal equation corresponding to the supplied class code.

[2324]Specifically, the normal equation represented by the above
Expression (230) can be transformed into the following Expression (231).

GMAT=AMAT^-1 UMAT (231)

[2325]In Expression (231), each component of the matrix GMAT in the
left side thereof is the correction coefficient gi which is to be
obtained. Note that each component of the matrix AMAT and each
component of the matrix UMAT are supplied from the normal equation
generating unit 3578. With the present embodiment, upon reception of the
components of the matrix AMAT in association with a certain class
code and the components of the matrix UMAT from the normal equation
generating unit 3578, the correction coefficient determining unit 3579
computes the matrix GMAT by executing matrix computation represented
by the right side of Expression (231), and stores the computation results
(correction coefficients gi) in the correction coefficient memory
3554 in association with the class code.

[2327]Note that the type of the feature-amount image employed in the
present invention is not restricted in particular, as long as the
correction image (subtraction predicted image) is generated based
thereupon by the class classification adaptation processing correction
unit 3502. In other words, the pixel value of each pixel in the
feature-amount image, i.e., the features, employed in the present
invention is not restricted in particular, as long as the features
represent the change in the signal in the actual world 1 (FIG. 266) over
a single pixel (pixel of the sensor 2 (FIG. 266)).

[2328]For example, "intra-pixel gradient" can be employed as the features.

[2329]Note that the "intra-pixel gradient" is a new term defined here.
Description will be made below regarding the intra-pixel gradient.

[2330]As described above, the signal in the actual world 1, which is an
image in FIG. 266, is represented by the function F(x, y, t) with the
positions x and y in the spatial directions and the time t as
variables.

[2331]Now, let us say that the signal in the actual world 1 which is an
image has continuity in a certain spatial direction. In this case, let us
consider a one-dimensional waveform (the waveform obtained by projecting
the function F along the X direction will be referred to as "X
cross-section waveform F(x)") obtained by projecting the function F(x, y,
t) along a certain direction (e.g., X-direction) selected from the
spatial directions of the X-direction, Y-direction, and Z-direction. In
this case, it can be understood that waveforms similar to the
aforementioned one-dimensional waveform F(x) can be obtained therearound
along the direction of the continuity.

[2332]Based upon the fact described above, with the present embodiment,
the actual world estimating unit 102 approximates the X cross-section
waveform F(x) using an n'th-order (n represents a certain integer)
polynomial approximate function f(x) based upon the data continuity
information (e.g., angle) which reflects the continuity of the signal in
the actual world 1, and which is output from the data continuity
detecting unit 101, for
example.

[2333]FIG. 278 shows f4(x) (which is a fifth-order polynomial function)
represented by the following Expression (232), and f5(x) (which is a
first-order polynomial function) represented by the following Expression
(233), as examples of such a polynomial approximate function f(x).

f4(x)=w0+w1x+w2x^2+w3x^3+w4x^4+w5x^5 (232)

f5(x)=w0'+w1'x (233)

[2334]Note that each of w0 through w5 in Expression (232) and
w0' and w1' in Expression (233) represents the coefficient of
the corresponding order of the function computed by the actual world
estimating unit 102.

[2335]On the other hand, in FIG. 278, the x-axis in the horizontal
direction in the drawing is defined with the left end of the pixel of
interest as the origin (x=0), and represents the relative position from
the pixel of interest along the spatial direction x. Note that the x-axis
is defined with the width LC of the detecting element of the sensor
2 as 1. On the other hand, the axis in the vertical direction in the
drawing represents the pixel value.

[2336]As shown in FIG. 278, the one-dimensional approximate function
f5(x) (the approximate function f5(x) represented by Expression
(233)) approximates the X cross-sectional waveform F(x) around the pixel
of interest using linear approximation. In this specification, the
gradient of the linear approximate function will be referred to as
"intra-pixel gradient". That is to say, the intra-pixel gradient is
represented by the coefficient w1' of x in Expression (233).
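As a rough illustration (not part of the patent text), the intra-pixel
gradient can be obtained from sample positions and levels of the X
cross-sectional waveform by a linear fit; the sample values below are
hypothetical.

    import numpy as np

    # Hypothetical samples of the X cross-sectional waveform around the pixel of
    # interest: relative positions x (left end of the pixel as the origin, pixel
    # width taken as 1) and the corresponding levels.
    x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    levels = np.array([0.10, 0.18, 0.27, 0.35, 0.42])

    # Linear approximate function f5(x) = w0' + w1'x (Expression (233)).
    w1_prime, w0_prime = np.polyfit(x, levels, 1)

    # The coefficient w1' of x is the intra-pixel gradient.
    intra_pixel_gradient = w1_prime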

[2337]A rapid intra-pixel gradient reflects a large change in the X
cross-sectional waveform F(x) around the pixel of interest. On the other
hand, a gradual intra-pixel gradient reflects a small change in the X
cross-sectional waveform F(x) around the pixel of interest.

[2338]As described above, the intra-pixel gradient suitably reflects
change in the signal in the actual world 1 over a single pixel (pixel of
the sensor 2). Accordingly, the intra-pixel gradient may be employed as
the features.

[2340]That is to say, the image on the left side in FIG. 279 is the same
as the SD image 3542 shown in FIG. 270 described above. On the other
hand, the image on the right side in FIG. 279 is a feature-amount image
3591 generated as follows. That is to say, the intra-pixel gradient is
obtained for each pixel of the SD image 3542 on the left side in the
drawing. Then, the image on the right side in the drawing is generated
with the value corresponding to the intra-pixel gradient as the pixel
value. Note that the feature-amount image 3591 has the following nature.
That is to say, in a case where the intra-pixel gradient is zero (the
linear approximate function is parallel with the X-direction), the
corresponding pixel is generated with a density corresponding to black.
On the other hand, in a case where the intra-pixel gradient corresponds
to an angle of 90° (the linear approximate function is parallel with the
Y-direction), the corresponding pixel is generated with a density
corresponding to white.
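A minimal sketch of such a feature-amount image, assuming a simple linear
mapping from gradient magnitude to gray level (the exact mapping is not
specified in the text), is as follows.

    import numpy as np

    def feature_amount_image(gradients):
        # gradients: 2-D array holding the intra-pixel gradient of each SD pixel.
        # Map the gradient magnitude to a density: 0 (black) for a gradient of
        # zero, 255 (white) for the steepest gradient present in the image.
        magnitude = np.abs(np.asarray(gradients, dtype=float))
        peak = magnitude.max()
        if peak == 0.0:
            return np.zeros_like(magnitude, dtype=np.uint8)
        return (255.0 * magnitude / peak).astype(np.uint8)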

[2341]The region 3542-1 in the SD image 3542 corresponds to the region
3544-1 (which has been used in the above description with reference to
FIG. 272, as an example of the region in which change in the signal in
the actual world 1 is small over a single pixel) in the subtraction image
3544 shown in FIG. 271 described above. In FIG. 279, the region 3591-1 in
the feature-amount image 3591 corresponds to the region 3542-1 in the SD
image 3542.

[2342]On the other hand, the region 3542-2 in the SD image 3542
corresponds to the region 3544-2 (which has been used in the above
description with reference to FIG. 274, as an example of the region in
which change in the signal in the actual world 1 is large over a single
pixel) in the subtraction image 3544 shown in FIG. 273 described above.
In FIG. 279, the region 3591-2 in the feature-amount image 3591
corresponds to the region 3542-2 in the SD image 3542.

[2343]Making a comparison between the region 3542-1 of the SD image 3542
and the region 3591-1 of the feature-amount image 3591, it can be
understood that the region in which change in the signal in the actual
world 1 is small corresponds to the region of the feature-amount image
3591 having a density close to black (corresponding to the region having
a gradual intra-pixel gradient).

[2344]On the other hand, making a comparison between the region 3542-2 of
the SD image 3542 and the region 3591-2 of the feature-amount image 3591,
it can be understood that the region in which change in the signal in the
actual world 1 is large corresponds to the region of the feature-amount
image 3591 having a density close to white (corresponding to the region
having a rapid intra-pixel gradient).

[2345]As described above, the feature-amount image generated with the
value corresponding to the intra-pixel gradient as the pixel value
suitably reflects the degree of change in the signal in the actual world
1 for each pixel.

[2346]Next, description will be made regarding a specific computing method
for the intra-pixel gradient.

[2347]That is to say, with the intra-pixel gradient around the pixel of
interest as "grad", the intra-pixel gradient grad is represented by the
following Expression (234).

grad=(Pn-PC)/xn' (234)

[2348]In Expression (234), Pn represents the pixel value of the pixel
of interest. Also, PC represents the pixel value of the center
pixel.

[2349]Specifically, as shown in FIG. 280, let us consider a region 3601
(which will be referred to as "continuity region 3601" hereafter) of
5×5 pixels (square region of 5×5=25 pixels in the drawing) in
the input image from the sensor 2, having a certain data continuity. In a
case of the continuity region 3601, the center pixel is the pixel 3602
positioned at the center of the continuity region 3601. Accordingly,
PC is the pixel value of the center pixel 3602. Also, in a case that
the pixel 3603 is the pixel of interest, Pn is the pixel value of
the pixel of interest 3603.

[2350]Also, in Expression (234), xn' represents the cross-sectional
direction distance at the center of the pixel of interest. Note that with
the center of the center pixel (pixel 3602 in a case shown in FIG. 280)
as the origin (0, 0) in the spatial directions, "the cross-sectional
direction distance" is defined as the relative distance along the
X-direction between the center of the pixel of interest and the line (the line
3604 in a case shown in FIG. 280) which is parallel with the
data-continuity direction, and which passes through the origin.

[2351]FIG. 281 is a diagram which shows the cross-sectional direction
distance for each pixel within the continuity region 3601 in FIG. 280.
That is to say, in FIG. 281, the value marked within each pixel in the
continuity region 3601 (square region of 5×5=25 pixels in the
drawing) represents the cross-sectional direction distance at the
corresponding pixel. For example, the cross-sectional direction distance
xn' at the pixel of interest 3603 is -2β.

[2352]Note that the X-axis and the Y-axis are defined with the pixel width
of 1 in both the X-direction and the Y-direction. Furthermore, the
X-direction is defined with the positive direction matching the right
direction in the drawing. Also, in this case, β represents the
cross-sectional direction distance at the pixel 3605 adjacent to the
center pixel 3602 in the Y-direction (adjacent thereto downward in the
drawing). With the present embodiment, the data continuity detecting unit
101 supplies the angle θ (the angle θ between the direction
of the line 3604 and the X-direction) as shown in FIG. 281 as the data
continuity information, and accordingly, the value β can be obtained
with ease using the following Expression (235).

β=1/tan θ (235)

[2353]As described above, the intra-pixel gradient can be obtained with
simple computation based upon the two input pixel values of the center
pixel (e.g., pixel 3602 in FIG. 281) and the pixel of interest (e.g.,
pixel 3603 in FIG. 281) and the angle θ. With the present
embodiment, the actual world estimating unit 102 generates a
feature-amount image with the value corresponding to the intra-pixel
gradient as the pixel value, thereby greatly reducing the processing
amount.
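A minimal sketch of this computation, assuming the cross-sectional
direction distance of a pixel at relative offset (dx, dy) from the center
pixel is dx - dy×β (the sign convention is an assumption), is as follows.

    import math

    def cross_sectional_direction_distance(dx, dy, theta):
        # dx, dy: offset of the pixel from the center pixel, in pixel widths
        # theta:  angle (radians) between the continuity line 3604 and the X-direction
        beta = 1.0 / math.tan(theta)   # Expression (235)
        return dx - dy * beta          # distance along the X-direction to the line

    def intra_pixel_gradient(p_n, p_c, x_n):
        # p_n: pixel value Pn of the pixel of interest
        # p_c: pixel value PC of the center pixel
        # x_n: cross-sectional direction distance xn' of the pixel of interest
        return (p_n - p_c) / x_n       # Expression (234)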

[2354]Note that with an arrangement which requires a higher-precision
intra-pixel gradient, the actual world estimating unit 102 should compute
the intra-pixel gradient using the pixels around and including the pixel
of interest with the least squares method. Specifically, let us say that m
(m represents an integer of 2 or more) pixels around and including the
pixel of interest are represented by index number i (i represents an
integer of 1 through m). The actual world estimating unit 102 substitutes
the input pixel values Pi and the corresponding cross-sectional
direction distance xi' into the right side of the following
Expression (236), thereby computing the intra-pixel gradient grad at the
pixel of interest. That is to say, the intra-pixel gradient is calculated
using the least squares method with a single variable in the same way as
described above.

grad=(Σi xi'×Pi)/(Σi (xi')^2) (i=1 through m) (236)
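A minimal sketch of this least squares computation, assuming the form of
Expression (236) reconstructed above, is as follows.

    import numpy as np

    def intra_pixel_gradient_lsq(pixel_values, distances):
        # pixel_values: input pixel values Pi of the m pixels around and
        #               including the pixel of interest
        # distances:    the corresponding cross-sectional direction distances xi'
        p = np.asarray(pixel_values, dtype=float)
        x = np.asarray(distances, dtype=float)
        # Single-variable least squares slope, Expression (236).
        return float(np.sum(x * p) / np.sum(x * x))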

[2355]Next, description will be made with reference to FIG. 282 regarding
processing (processing in Step S103 shown in FIG. 40) for generating an
image performed by the image generating unit 103 (FIG. 266) using the
class classification adaptation processing correction method.

[2356]In FIG. 266, upon reception of the signal in the actual world 1
which is an image, the sensor 2 outputs the input image. The input image
is input to the class classification adaptation processing unit 3501 of
the image generating unit 103 as well as being input to the data
continuity detecting unit 101.

[2358]Note that such processing in Step S3501 performed by the class
classification adaptation processing unit 3501 will be referred to as
"input image class classification adaptation processing" hereafter.
Detailed description will be made later with reference to the flowchart
shown in FIG. 283 regarding the "input image class classification
adaptation processing" in this case.

[2359]The data continuity detecting unit 101 detects the data continuity
contained in the input image at almost the same time as with the
processing in Step S3501, and outputs the detection results (angle in
this case) to the actual world estimating unit 102 as data continuity
information (processing in Step S101 shown in FIG. 40).

[2360]The actual world estimating unit 102 generates the actual world
estimation information (the feature-amount image which is an SD image in
this case) based upon the input angle (data continuity information), and
supplies the actual world estimation information to the class
classification adaptation processing correction unit 3502 (processing in
Step S102 shown in FIG. 40).

[2361]Then, in Step S3502, the class classification adaptation processing
correction unit 3502 performs class classification adaptation processing
for the feature-amount image (SD image) thus supplied, so as to generate
the subtraction predicted image (HD image) (i.e., so as to predict and
compute the subtraction image (HD image) between the actual image (signal
in the actual world 1) and the predicted image output from the class
classification adaptation processing unit 3501), and outputs the
subtraction predicted image to the addition unit 3503 as a correction
image.

[2362]Note that such processing in Step S3502 performed by the class
classification adaptation processing correction unit 3502 will be
referred to as "class classification adaptation processing correction
processing" hereafter. Detailed description will be made later with
reference to the flowchart shown in FIG. 284 regarding the "class
classification adaptation processing correction processing" in this case.

[2364]In Step S3504, the addition unit 3503 determines whether or not the
processing has been performed for all the pixels.

[2365]In the event that determination has been made that the processing
has not been performed for all the pixels in Step S3504, the flow returns
to Step S3501, and the subsequent processing is repeated. That is to say,
the processing in Steps S3501 through S3503 is performed for each of the
remaining pixels which have not been subjected to the processing in
order.

[2366]Upon completion of the processing for all the pixels (in the event
that determination has been made that processing has been performed for
all the pixels in Step S3504), the addition unit 3503 outputs the output
image (HD image) to external circuits in Step S3505, whereby processing
for generating an image ends.
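The overall flow of Steps S3501 through S3505 can be sketched as follows
(illustrative Python, not the patent's implementation); predict_hd and
correct_hd stand in for the class classification adaptation processing
unit 3501 and the correction unit 3502, and the pixel-wise addition models
the addition unit 3503 (Step S3503).

    def generate_output_image(input_pixels, predict_hd, correct_hd):
        # predict_hd(pixel): Step S3501 - returns the predicted HD pixel value
        # correct_hd(pixel): Step S3502 - returns the corresponding correction value
        output = []
        for pixel_of_interest in input_pixels:      # repeated until Step S3504 finds no pixel left
            predicted = predict_hd(pixel_of_interest)
            correction = correct_hd(pixel_of_interest)
            output.append(predicted + correction)   # Step S3503: addition unit 3503
        return output                               # Step S3505: output image (HD image)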

[2367]Next, detailed description will be made with reference to the
drawings regarding the "input image class classification adaptation
processing (the processing in Step S3501)", and the "class classification
adaptation correction processing (the processing in Step S3502)", step by
step in that order.

[2370]In Step S3522, the region extracting unit 3511 extracts the pixel of
interest (SD pixel) from the input image and (one or more) pixels (SD
pixels) at predetermined relative positions away from the pixel of
interest as a class tap, and supplies the extracted class tap to the
pattern detecting unit 3512.

[2373]In Step S3525, the coefficient memory 3514 selects the prediction
coefficients (set) corresponding to the supplied class code, which are to
be used in the subsequent processing, from the multiple prediction
coefficients (set) determined beforehand with learning processing, and
supplies the selected prediction coefficients to the prediction computing
unit 3516.

[2374]Note that description will be made later regarding the learning
processing with reference to the flowchart shown in FIG. 288.

[2375]In Step S3526, the region extracting unit 3515 extracts the pixel of
interest (SD pixel) from the input image and (one or more) pixels (SD
pixels) at predetermined relative positions (which may be set to the same
positions as with the class tap) away from the pixel of interest as a
prediction tap, and supplies the extracted prediction tap to the
prediction computing unit 3516.

[2376]In Step S3527, the prediction computing unit 3516 performs
computation processing for the prediction tap supplied from the region
extracting unit 3515 using the prediction coefficients supplied from the
coefficient memory 3514 so as to generate the predicted image (HD image),
and outputs the generated predicted image to the addition unit 3503.

[2377]Specifically, the prediction computing unit 3516 performs
computation processing as follows. That is to say, with each pixel of the
prediction tap supplied from the region extracting unit 3515 as ci
(i represents an integer of 1 through n), and with each of the prediction
coefficients supplied from the coefficient memory 3514 as di, the
prediction computing unit 3516 performs computation represented by the
right side of the above Expression (215), thereby calculating the HD
pixel q' corresponding to the pixel of interest (SD pixel). Then, the
prediction computing unit 3516 outputs the calculated HD pixel q' to the
addition unit 3503 as a pixel forming the predicted image (HD image),
whereby the input image class classification adaptation processing ends.
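A minimal sketch of this computation, assuming Expression (215) is the
linear combination of the prediction tap and the prediction coefficients
implied by the description, is as follows.

    import numpy as np

    def predict_hd_pixel(prediction_tap, prediction_coefficients):
        # prediction_tap:          c1 ... cn extracted by the region extracting unit 3515
        # prediction_coefficients: d1 ... dn selected from the coefficient memory 3514
        # Expression (215): q' = d1*c1 + d2*c2 + ... + dn*cn
        return float(np.dot(prediction_coefficients, prediction_tap))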

[2379]Upon input of the feature-amount image (SD image) to the class
classification adaptation processing correction unit 3502 as the actual
world estimation information from the actual world estimating unit 102,
the region extracting units 3551 and 3555 each receive the feature-amount
image in Step S3541.

[2380]In Step S3542, the region extracting unit 3551 extracts the pixel of
interest (SD pixel) and (one or more) pixels (SD pixels) at predetermined
relative positions away from the pixel of interest from the feature
amount image as a class tap, and supplies the extracted class tap to the
pattern detecting unit 3552.

[2381]Specifically, in this case, let us say that the region extracting
unit 3551 extracts a class tap (a set of pixels) 3621 shown in FIG. 285,
for example. That is to say, FIG. 285 shows an example of the layout of
the class tap.

[2382]In FIG. 285, the horizontal axis in the drawing represents the
X-direction which is one spatial direction, and the vertical direction in
the drawing represents the Y-direction which is another spatial
direction. Note that the pixel of interest is represented by the pixel
3621-2.

[2383]In this case, the pixels extracted as the class tap are a total of
five pixels: the pixel of interest 3621-2; the pixels 3621-0 and
3621-4 which are adjacent to the pixel of interest 3621-2 along the
Y-direction; and the pixels 3621-1 and 3621-3 which are adjacent to the
pixel of interest 3621-2 along the X-direction, which together make up
the pixel set 3621.

[2384]It is needless to say that the layout of the class tap employed in
the present embodiment is not restricted to the example shown in FIG.
285, rather, various kinds of layouts may be employed as long as it
includes the pixel of interest 3621-2.

[2385]Returning to FIG. 284, in Step S3543, the pattern detecting unit
3552 detects the pattern of the class tap thus supplied, and supplies the
detected pattern to the class code determining unit 3553.

[2386]Specifically, in this case, the pattern detecting unit 3552 detects
the class to which the pixel value, i.e., the value of the features
(e.g., the intra-pixel gradient), belongs for each of the five pixels
3621-0 through 3621-4 forming the class tap shown in FIG. 285, and
outputs the detection results in the form of a single data set as a
pattern, for example.

[2387]Now, let us say that a pattern shown in FIG. 286 is detected, for
example. That is to say, FIG. 286 shows an example of the pattern of the
class tap.

[2388]In FIG. 286, the horizontal axis in the drawing represents the class
taps, and the vertical axis in the drawing represents the intra-pixel
gradient. On the other hand, let us say that the classes prepared
beforehand are a total of three classes of class 3631, class 3632, and
class 3633.

[2390]As described above, each of the five class taps 3621-0 through
3621-4 belongs to one of the three classes 3631 through 3633.
Accordingly, in this case, there are a total of 243 (=3^5) patterns
including the pattern shown in FIG. 286.
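One way to map such a pattern to a class code (the encoding itself is an
assumption; the text only requires one class code per pattern) is to
treat the five class indices as digits of a base-3 number, which yields
the 243 patterns counted above.

    def pattern_to_class_code(tap_classes, number_of_classes=3):
        # tap_classes: for each of the five class taps 3621-0 through 3621-4,
        # the index (0, 1 or 2) of the class (3631, 3632 or 3633) to which its
        # feature value (e.g., the intra-pixel gradient) belongs.
        code = 0
        for c in tap_classes:
            code = code * number_of_classes + c
        return code

    # Example: all taps in class 3631 (index 0) -> class code 0;
    #          all taps in class 3633 (index 2) -> class code 242.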

[2391]Returning to FIG. 284, in Step S3544, the class code determining
unit 3553 determines the class code corresponding to the pattern of the
class tap thus supplied, from the multiple class codes prepared
beforehand, and supplies the determined class code to the correction
coefficient memory 3554 and the region extracting unit 3555. In this
case, there are 243 patterns, and accordingly, there are 243 (or more)
class codes prepared beforehand.

[2392]In Step S3545, the correction coefficient memory 3554 selects the
correction coefficients (set), which are to be used in the subsequent
processing, corresponding to the class code thus supplied, from the
multiple correction-coefficient sets determined beforehand with the
learning processing, and supplies the selected correction coefficients to
the correction computing unit 3556. Note that each of the
correction-coefficient sets prepared beforehand is stored in the
correction coefficient memory 3554 in association with one of the class
codes prepared beforehand. Accordingly, in this case, the number of the
correction-coefficient sets matches the number of the class codes
prepared beforehand (i.e., 243 or more).

[2393]Note that description will be made later regarding the learning
processing with reference to the flowchart shown in FIG. 288.

[2394]In Step S3546, the region extracting unit 3555 extracts the pixel of
interest (SD pixel) and the pixels (SD pixels) at predetermined relative
positions (one or more positions determined independently of those of the
class tap; however, the positions of the prediction tap may match those
of the class tap) away from the pixel of interest from the feature-amount
image, which are used as a prediction tap, and supplies the extracted
prediction tap to the correction computing unit 3556.

[2395]Specifically, in this case, let us say that the prediction tap (set)
3641 shown in FIG. 287 is extracted. That is to say, FIG. 287 shows an
example of the layout of the prediction tap.

[2396]In FIG. 287, the horizontal axis in the drawing represents the
X-direction which is one spatial direction, and the vertical direction in
the drawing represents the Y-direction which is another spatial
direction. Note that the pixel of interest is represented by the pixel
3641-1. That is, the pixel 3641-1 is a pixel corresponding to the class
tap 3621-2 (FIG. 285).

[2397]In this case, the pixels extracted as the prediction tap (group) are
the 5×5 pixels 3641 (a set of pixels formed of a total of 25 pixels)
with the pixel of interest 3641-1 as the center.

[2398]It is needless to say that the layout of the prediction tap employed
in the present embodiment is not restricted to the example shown in FIG.
287, rather, various kinds of layouts including the pixel of interest
3641-1 may be employed.

[2400]More specifically, with each of the prediction taps supplied from
the region extracting unit 3555 as ai (i represents an integer of 1
through n), and with each of the correction coefficients supplied from
the correction coefficient memory 3554 as gi, the correction
computing unit 3556 performs computation represented by the right side of
the above Expression (223), thereby calculating the HD pixel u'
corresponding to the pixel of interest (SD pixel). Then, the correction
computing unit 3556 outputs the calculated HD pixel to the addition unit
3503 as a pixel of the correction image (HD image), whereby the class
classification adaptation correction processing ends.
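A minimal sketch of this correction computation, assuming Expression
(223) is the linear combination implied by Expression (225) above, is as
follows.

    import numpy as np

    def correct_hd_pixel(prediction_tap, correction_coefficients):
        # prediction_tap:          a1 ... an extracted by the region extracting
        #                          unit 3555 from the feature-amount image
        # correction_coefficients: g1 ... gn read from the correction coefficient
        #                          memory 3554 for the determined class code
        # Expression (223): u' = g1*a1 + g2*a2 + ... + gn*an
        return float(np.dot(correction_coefficients, prediction_tap))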

[2401]Next, description will be made with reference to the flowchart shown
in FIG. 288 regarding the learning processing performed by the learning
device (FIG. 268), i.e., the learning processing for generating the
prediction coefficients used in the class classification adaptation
processing unit 3501 (FIG. 267), and the learning processing for
generating the correction coefficients used in the class classification
adaptation processing correction unit 3502 (FIG. 276).

[2403]That is to say, the class classification adaptation processing
learning unit 3521 receives a certain image as a first tutor image (HD
image), and generates a first student image (SD image) with a reduced
resolution based upon the first tutor image.
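As an illustration of this resolution reduction (the patent does not fix
the method; averaging 2×2 blocks of HD pixels into one SD pixel is an
assumption), a student image could be generated as follows.

    import numpy as np

    def make_student_image(first_tutor_image):
        # first_tutor_image: 2-D array holding the first tutor image (HD image).
        hd = np.asarray(first_tutor_image, dtype=float)
        h = hd.shape[0] // 2 * 2
        w = hd.shape[1] // 2 * 2
        hd = hd[:h, :w]
        # Average each 2x2 block of HD pixels into a single SD pixel.
        return (hd[0::2, 0::2] + hd[0::2, 1::2] + hd[1::2, 0::2] + hd[1::2, 1::2]) / 4.0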

[2404]Then, the class classification adaptation processing learning unit
3521 generates the prediction coefficients which allow suitable
prediction of the first tutor image (HD image) based upon the first
student image (SD image) using the class classification adaptation
processing, and stores the generated prediction coefficients in the
coefficient memory 3514 (FIG. 267) of the class classification adaptation
processing unit 3501.

[2405]Note that such processing shown in Step S3561 executed by the class
classification adaptation processing learning unit 3521 will be referred
to as "class classification adaptation processing learning processing"
hereafter. Detailed description will be made later regarding the "class
classification adaptation processing learning processing" in this case,
with reference to the flowchart shown in FIG. 289.