Abstract:

A method of automatically grading beef quality by analyzing a digital
image is provided. The method includes: an image acquiring step of
acquiring a color image of beef using a CCD camera; a region separating
step of separating a lean region from the acquired image; a boundary
extracting step of extracting a boundary line of the lean region; a
boundary smoothing step of smoothing the boundary line extracted in the
boundary extracting step; a boundary correcting step of correcting an
indented portion and a protruded portion included in the boundary line
having been subjected to the boundary smoothing step; a grading region
determining step of determining a grading region on the basis of the
boundary line corrected in the boundary correcting step; and a grading
step of grading the beef quality on the basis of the image of the grading
region.

Claims:

1. A method of automatically grading beef quality, comprising: an image
acquiring step of acquiring a color image of beef using a CCD camera; a
region separating step of separating a lean region from the acquired
image; a boundary extracting step of extracting a boundary line of the
lean region; a boundary smoothing step of smoothing the boundary line
extracted in the boundary extracting step; a boundary correcting step of
correcting an indented portion and a protruded portion included in the
boundary line having been subjected to the boundary smoothing step; a
grading region determining step of determining a grading region on the
basis of the boundary line corrected in the boundary correcting step; and
a grading step of grading the beef quality on the basis of the image of
the grading region.

2. The method according to claim 1, wherein the boundary smoothing step
employs a curve generating method using relationships of pixels selected
from pixels in the boundary line, and wherein the pixels in a part with a
complex boundary line are selected so that a distance between the pixels
is small, and the pixels in a part with a smooth boundary line are
selected so that the distance between the pixels is great.

3. The method according to claim 2, wherein the pixels are selected in
the boundary smoothing step by: a first sub-step of selecting a start
pixel from the pixels in the boundary line, storing positional
information of the start pixel, and selecting an end pixel which is
separated from the start pixel along the boundary line by a predetermined
number of pixels X; a second sub-step of determining a degree of
complexity of the boundary line between the start pixel and the end
pixel; and a third sub-step of storing the positional information of the
end pixel, selecting the end pixel as a new start pixel, and then
repeatedly performing the first sub-step when the boundary line is
determined not to be complex in the second sub-step, and detecting an
intermediate pixel separated from the start pixel along the boundary line
by the number of pixels W smaller than the number of pixels between the
start pixel and the end pixel, storing the positional information of the
intermediate pixel, selecting the intermediate pixel as a new start
pixel, and then repeatedly performing the first sub-step when the
boundary line is determined to be complex in the second sub-step.

4. The method according to claim 3, wherein the degree of complexity of
the boundary line is determined in the second sub-step by comparing a
predetermined value Z with a value z obtained by dividing the number of
pixels Y in a straight line between the start pixel and the end pixel by
the number of pixels X in the boundary line between the start pixel and
the end pixel.

5. The method according to claim 4, wherein W=5, X=20, and Z=0.8.

6. The method according to claim 1, wherein the boundary correcting step
includes: a sub-step of detecting protruded pixels by comparing slopes of
the pixels in the boundary line; a sub-step of determining whether the
boundary line between the adjacent protruded pixels out of the protruded
pixels should be corrected; and a sub-step of correcting the boundary
line using a curve generating method when it is determined that the
boundary line should be corrected.

7. The method according to claim 6, wherein the sub-step of determining
whether the boundary line should be corrected includes comparing a
predetermined value K with a value k obtained by dividing the number of
pixels I in the boundary line between the adjacent protruded pixels by
the number of pixels J in a straight line between the adjacent protruded
pixels, determining that the boundary line should be maintained when the
obtained value is smaller than the predetermined value, and determining
that the boundary line should be corrected when the obtained value is
greater than the predetermined value, where K=1.8.

8. The method according to claim 6, wherein the sub-step of correcting
the boundary line using the curve generating method is performed by
applying the curve generating method to the adjacent protruded pixels and
two pixels separated outward from the adjacent protruded pixels by 30
pixels.

9. The method according to claim 1, wherein the region separating step
includes a binarization sub-step of calculating an optimal threshold
value and displaying only the lean region.

10. The method according to claim 9, wherein the optimal threshold value
is calculated in the binarization sub-step by: analyzing a gray-scale
level using a brightness distribution of an image in a green band;
excluding a region where the gray-scale level of the image in the green
band is less than 25 and a region where the gray-scale level is greater
than 150 and reducing the gray-scale level in the remaining region to a
half; calculating a probability distribution of the lean region and a
probability distribution of a fat region using probability density
functions of the gray-scale levels, a sum of probability density
functions of the lean region, and a sum of probability density functions
of the fat region; applying a probability distribution of the lean region
and a probability distribution of the fat region to α-dimension R
nyi entropy; calculating the gray-scale level at which the sum of the R
nyi entropy in the lean region and the R nyi entropy in the fat region is
the maximum; and calculating the optimal threshold value using the
gray-scale level at which the sum of the R nyi entropy having three
different values depending on the range of a is the maximum.

11. The method according to claim 1, wherein the grading region
determining step includes an interactive checking sub-step of allowing a
user to check the determined grading region and correcting the boundary
line.

12. The method according to claim 1, wherein the boundary extracting step
includes a labeling sub-step of labeling the lean region of which the
boundary line would be extracted, a dilation sub-step of filling an empty
space remaining in the labeled region, an erosion sub-step of eroding a
part of the lean region exaggerated in the dilation sub-step, and an
automatic boundary extracting sub-step of extracting the boundary line of
the lean region determined up to the erosion sub-step.

13. The method according to claim 1, wherein the grading step includes at
least one sub-step of a size determining sub-step of determining an area
of a lean region, an intramuscular fat determining sub-step of
determining a marbling state of beef, a color determining sub-step of
determining lean and fat colors, and a fat thickness determining sub-step
of determining a thickness of back fat.

14. The method according to claim 13, wherein the size determining
sub-step includes converting the number of pixels of the grading region
into an area.

15. The method according to claim 13, wherein the intramuscular fat
determining sub-step includes grading the beef quality by performing a
binarization process with respect to a gray-scale level of 135 using the
image of the red band and by calculating tissue indexes of element
difference moment, entropy, uniformity, and area ratio using four paths
selected from a co-occurrence matrix as a horizontal path mask.

16. The method according to claim 13, wherein the color determining
sub-step uses L*a*b* values of the International Commission on
Illumination (CIE), which are obtained by converting average RGB values
calculated from output values of an image expressed in RGB by learning
using a back-propagation multi-layer neural network.

17. The method according to claim 13, wherein the fat thickness determining
sub-step includes performing a triangular method on the grading region to
detect the longest straight line in the grading region, selecting the fat
part of which the thickness should be measured on the basis of the
straight line, drawing a normal line perpendicular to the straight line
in the selected fat region, and measuring the length of the normal line.

18. A system for automatically grading beef quality, comprising: an image
acquiring unit including a lamp and a CCD camera; a grading unit
including an analyzer analyzing an image acquired by the image acquiring
unit and grading the beef quality and a monitor displaying the image and
the analysis result; and a data storage unit storing the image data and
the analysis result data.

19. The system according to claim 18, wherein the monitor is a touch pad.

20. The system according to claim 18, wherein the data storage unit is
connected to a computer network.

Description:

BACKGROUND

[0001] 1. Field of the Invention

[0002] The present invention relates to a method and a system for grading
beef quality, and more particularly, to a method and a system for
automatically grading beef quality by analyzing a digital image.

[0003] 2. Description of the Related Art

[0004] In general, beef quality is graded just after butchering and the
price of beef is determined depending on the grade. The grades of beef
quality are determined on the basis of grades of meat quality and
quantity and the grading is made with a specialized grader's naked eye.

[0005] However, grading with the naked eye has a problem in that the
objectivity of the grading result is not guaranteed, because it is
difficult to accumulate quantified data for the grading items. There are
also problems in that the grading takes a long time and it is difficult
to train specialized graders because of the importance of experience.

[0006] Techniques for automatically grading beef quality by image analysis
have been studied to solve the above-mentioned problems and have
attracted more and more attention with improvements in digital imaging
techniques. However, since lean and fat are mixed in a section of beef
and are not clearly distinguished from each other, the boundary lines of
grading regions extracted in the related art are greatly different from
the boundary lines extracted by specialized graders.

[0007] Therefore, for automatic grading, it is very important to develop a
new method of extracting grading regions with boundary lines similar to
the boundary lines extracted by the specialized graders.

SUMMARY

[0008] An advantage of some aspects of the invention is that it provides a
method and a system for automatically grading beef quality by image
analysis after determining a grading region with a boundary line
substantially similar to a boundary line extracted by a specialized
grader.

[0009] According to an aspect of the invention, there is provided a method
of automatically grading beef quality, including: an image acquiring step
of acquiring a color image of beef using a CCD camera; a region
separating step of separating a lean region from the acquired image; a
boundary extracting step of extracting a boundary line of the lean
region; a boundary smoothing step of smoothing the boundary line
extracted in the boundary extracting step; a boundary correcting step of
correcting an indented portion and a protruded portion included in the
boundary line having been subjected to the boundary smoothing step; a
grading region determining step of determining a grading region on the
basis of the boundary line corrected in the boundary correcting step; and
a grading step of grading the beef quality on the basis of the image of
the grading region.

[0010] The boundary smoothing step may employ a curve generating method
using relationships of pixels selected from pixels in the boundary line.
Here, the pixels in a part with a complex boundary line may be selected
so that a distance between the pixels is small, and the pixels in a part
with a smooth boundary line may be selected so that the distance between
the pixels is great.

[0011] In the boundary smoothing step, the pixels may be selected by: a
first sub-step of selecting a start pixel from the pixels in the boundary
line, storing positional information of the start pixel, and selecting an
end pixel which is separated from the start pixel along the boundary line
by a predetermined number of pixels X; a second sub-step of determining a
degree of complexity of the boundary line between the start pixel and the
end pixel; and a third sub-step of storing the positional information of
the end pixel, selecting the end pixel as a new start pixel, and then
repeatedly performing the first sub-step when the boundary line is
determined not to be complex in the second sub-step, and detecting an
intermediate pixel separated from the start pixel along the boundary line
by the number of pixels W smaller than the number of pixels between the
start pixel and the end pixel, storing the positional information of the
intermediate pixel, selecting the intermediate pixel as a new start
pixel, and then repeatedly performing the first sub-step when the
boundary line is determined to be complex in the second sub-step.

[0012] In the second sub-step, the degree of complexity of the boundary
line may be determined by comparing a predetermined value Z with a value
z obtained by dividing the number of pixels Y in a straight line between
the start pixel and the end pixel by the number of pixels X in the
boundary line between the start pixel and the end pixel. Here, W=5, X=20,
and Z=0.8.

[0013] The boundary correcting step may include: a sub-step of detecting
protruded pixels by comparing slopes of the pixels in the boundary line;
a sub-step of determining whether the boundary line between the adjacent
protruded pixels out of the protruded pixels should be corrected; and a
sub-step of correcting the boundary line using a curve generating method
when it is determined that the boundary line should be corrected.

[0014] In this case, the sub-step of determining whether the boundary line
should be corrected may include comparing a predetermined value K with a
value k obtained by dividing the number of pixels I in the boundary line
between the adjacent protruded pixels by the number of pixels J in a
straight line between the adjacent protruded pixels, determining that the
boundary line should be maintained when the obtained value is smaller
than the predetermined value, and determining that the boundary line
should be corrected when the obtained value is greater than the
predetermined value, where K=1.8.

[0015] The sub-step of correcting the boundary line using the curve
generating method may be performed by applying the curve generating
method to the adjacent protruded pixels and two pixels separated outward
from the adjacent protruded pixels by 30 pixels.

[0016] The region separating step may include a binarization sub-step of
calculating an optimal threshold value and displaying only the lean
region. The optimal threshold value may be calculated in the binarization
sub-step by: analyzing a gray-scale level using a brightness distribution
of an image in a green band; excluding a region where the gray-scale
level of the image in the green band is less than 25 and a region where
the gray-scale level is greater than 150 and reducing the gray-scale
level in the remaining region to a half; calculating a probability
distribution of the lean region and a probability distribution of a fat
region using probability density functions of the gray-scale levels, a
sum of probability density functions of the lean region, and a sum of
probability density functions of the fat region; applying the probability
distribution of the lean region and the probability distribution of the
fat region to α-dimension Rényi entropy; calculating the gray-scale level
at which the sum of the Rényi entropy in the lean region and the Rényi
entropy in the fat region is the maximum; and calculating the optimal
threshold value using the gray-scale levels at which the sum of the Rényi
entropy is the maximum, the gray-scale levels having three different
values depending on the range of α.

[0017] The grading region determining step may include an interactive
checking sub-step of allowing a user to check the determined grading
region and correcting the boundary line.

[0018] The boundary extracting step may include a labeling sub-step of
labeling the lean region of which the boundary line would be extracted, a
dilation sub-step of filling an empty space remaining in the labeled
region, an erosion sub-step of eroding a part of the lean region
exaggerated in the dilation sub-step, and an automatic boundary
extracting sub-step of extracting the boundary line of the lean region
determined up to the erosion sub-step.

[0019] The grading step may include at least one sub-step of a size
determining sub-step of determining an area of a lean region, an
intramuscular fat determining sub-step of determining a marbling state of
beef, a color determining sub-step of determining lean and fat colors,
and a fat thickness determining sub-step of determining a thickness of
back fat.

[0020] The size determining sub-step may include converting the number of
pixels of the grading region into an area.

[0021] The intramuscular fat determining sub-step may include grading the
beef quality by performing a binarization process with respect to a
gray-scale level of 135 using the image of the red band and by
calculating tissue indexes of element difference moment, entropy,
uniformity, and area ratio using four paths selected from a co-occurrence
matrix as a horizontal path mask.

[0022] The color determining sub-step may use L*a*b* values of the
International Commission on Illumination (CIE), which are obtained by
converting average RGB values calculated from output values of an image
expressed in RGB by learning using a back-propagation multi-layer neural
network.

[0023] The thickness determining sub-step may include performing a
triangular method on the grading region to detect the longest straight
line in the grading region, selecting the fat part of which the thickness
should be measured on the basis of the straight line, drawing a normal
line perpendicular to the straight line in the selected fat region, and
measuring the length of the normal line.

[0024] According to another aspect of the invention, there is provided a
system for automatically grading beef quality, including: an image
acquiring unit including a lamp and a CCD camera; a grading unit
including an analyzer analyzing an image acquired by the image acquiring
unit and grading the beef quality and a monitor displaying the image and
the analysis result; and a data storage unit storing the image data and
the analysis result data.

[0025] Here, the monitor may include a touch pad and the data storage unit
may be connected to a computer network.

[0026] According to the invention, it is possible to automatically grade
beef quality by extracting a boundary line substantially similar to a
boundary line extracted by a specialized grader.

[0027] It is also possible to enhance the accuracy of the determination
result and to allow a user to participate directly in the correction, by
providing a user with an interactive checking procedure in the grading
region determining step.

[0028] According to the invention, since the image data and the grading
result data are stored in the data storage unit, the image data and the
grading result data can be formed into a database. The database can allow
the grading result including the measured values of grading items to be
checked at any time, whereby the objectivity of the grading is guaranteed
and the database can be utilized as base materials for improving meat
quality of cattle farms. In addition, by applying the beef grading data
according to the invention to the recent beef history rule, the database
can be utilized as materials useful for selling or purchasing beef.

[0029] Particularly, by connecting the data storage unit to a computer
network, it is possible to check the grading result data according to the
invention at any place of the country using the Internet.

BRIEF DESCRIPTION OF THE DRAWINGS

[0030] FIG. 1 is a flowchart illustrating a method of automatically
grading beef quality according to an embodiment of the invention.

[0031] FIG. 2 is a diagram illustrating an image acquiring unit including an
LED lamp and a CCD camera.

[0032] FIGS. 3A to 3C are diagrams illustrating RGB-channel images of an
image captured by the CCD camera.

[0033] FIG. 4 is a diagram illustrating an image obtained by binarizing an
image of a green band using the optimal threshold value acquired in an
embodiment of the invention.

[0034] FIG. 5 is a diagram illustrating an image of only a blob of a lean
region obtained by labeling the image shown in FIG. 4.

[0035] FIGS. 6 and 7 are diagrams illustrating images obtained by filling
the blob of the image shown in FIG. 5 in a dilation sub-step.

[0036] FIG. 8 is a diagram illustrating an image obtained by performing an
erosion sub-step on the image shown in FIG. 7.

[0037] FIG. 9 is a diagram illustrating an image obtained by performing an
automatic boundary extracting sub-step on the image shown in FIG. 8.

[0038] FIG. 10 is a diagram illustrating a boundary line before it is
subjected to a boundary smoothing step.

[0039] FIG. 11 is a diagram illustrating the boundary line after the
boundary smoothing step according to the embodiment is performed on the
boundary line shown in FIG. 10.

[0040] FIG. 12 is a diagram illustrating the boundary line of a lean
region extracted from beef including a valley portion formed by fat.

[0041] FIG. 13 is a diagram illustrating the boundary line obtained by
performing the boundary smoothing step according to the embodiment on the
image shown in FIG. 12.

[0042] FIG. 14 is a diagram illustrating an image in which protrusions are
marked in the boundary line including indented portions.

[0043] FIGS. 15A to 15D are diagrams showing differences between the
boundary lines corrected depending on a value of K.

[0044] FIG. 16 is a diagram illustrating a curve obtained by correcting
the boundary line between adjacent protruded pixels p2 and p3 using an
Overhauser curve generating method.

[0045] FIG. 17 is a graph illustrating grades of beef based on four tissue
indexes.

[0046] FIGS. 18 to 20 are diagrams illustrating a procedure of measuring a back fat
thickness.

[0047] FIG. 21 is a diagram illustrating a grading start picture displayed
on a monitor of a system for automatically grading beef quality according
to an embodiment of the invention.

[0048] FIG. 22 is a diagram illustrating a grading result picture
displayed on the monitor of the system for automatically grading beef
quality according to the embodiment of the invention.

DESCRIPTION OF EXEMPLARY EMBODIMENTS

[0049] Hereinafter, exemplary embodiments of the invention will be
described in detail with reference to the accompanying drawings.

[0050] FIG. 1 is a diagram illustrating the flow of a method of
automatically grading beef quality according to an embodiment of the
invention.

[0052] In the image acquiring step S10, an image is acquired by
photographing a section of beef to be graded. In general, the section
used in the grading step is a section of a thirteenth rib. The acquired
image is a color image captured by a CCD camera so as to enable digital
analysis. Since the color of beef is very important in grading the beef
quality and the color of the captured image may vary with the lamp, it is
preferable that a lamp employing an LED is used. Particularly, when an LED
lamp and a CCD camera are combined to construct an image acquiring unit
which can easily be moved to the section of beef to be photographed, it
is possible to acquire images of beef under the same lamp condition.

[0053] FIG. 2 is a diagram illustrating an image acquiring unit including
the LED lamp and the CCD camera.

[0054] In the region separating step S20, a lean region is separated from
the acquired image. The beef to be graded includes fat surrounding the
outer portion and lean meat located inside the fat. It is the lean
portion remaining after the outer fat is removed that is eaten, and it is
therefore the lean portion that is graded in quality. Accordingly, as the
first step in determining the grading region, the lean portion should be
separated from the fat portion and the background portion.

[0055] The region separating step S20 employs a binarization process of
displaying only the lean portion in white. The image captured by the CCD
camera has an RGB format in which it can be divided into three images of
red, green, and blue. As the process of binarizing the color image,
various methods using a gray-scale level histogram which is scaled in 256
levels have been developed.

[0056] FIGS. 3A to 3C are diagrams illustrating RGB-channel images of the image
captured by the CCD camera. FIG. 3A shows an image in a red band, FIG. 3B
shows an image in a green band, and FIG. 3C shows an image in a blue
band.

[0057] In the past, the lean region and the fat region were separated on
the basis of threshold values predetermined for the red, green, and blue
images. However, in this case, the predetermined threshold values were
not suitable for all images, and thus particular threshold values had to
be determined for the respective images.

[0058] To solve this problem, the binarization using an entropic method
has been studied. Examples of the binarization include a Rényi entropy
method, a maximum entropy method, a maximum entropy sum method, a minimum
entropy method, a Tsallis method, and an entropic correlation method.
However, the threshold values determined by the entropic methods were not
satisfactory and the entropic methods take much time to compute.

[0059] In this embodiment, to reduce the operation time, only one
suitable image is used and a new optimal threshold value having the
merits of the entropic methods is calculated.

[0060] In this embodiment, to binarize an image, a brightness distribution
is analyzed using only the image in the green band, which is most suitable
for distinguishing the regions on the basis of its histogram.

[0061] In this analysis, first, a region having a gray-scale level less
than 25 is determined and excluded as a background, and a region having a
gray-scale level greater than 150 is determined and excluded as fat.
Then, a process of reducing 126 gray-scale levels between the two values
to 63 levels is carried out. In this embodiment, by reducing the
gray-scale levels to which the entropic methods are applied, it is
possible to reduce the operation time greatly. Even when about 60
gray-scale levels are used in the entropic methods, it is possible to
accurately distinguish the fat and the lean from each other.

[0062] In this process, an expression for calculating the reduced
gray-scale level J(i,j) from the gray level I(i,j) of the (i,j) pixel is
as follows.
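The expression itself is not reproduced in the text above. The following
is a minimal sketch of the reduction, assuming the retained levels
between 25 and 150 are simply shifted and halved (the exact mapping used
in the embodiment may differ):

    import numpy as np

    def reduce_gray_levels(green, low=25, high=150):
        """Reduce the gray-scale range of the green-band image ([0061]-[0062]).

        Pixels below `low` (background) and above `high` (fat) are excluded,
        and the remaining 126 levels are halved to about 63 levels. The
        mapping (I - low) // 2 is an assumption consistent with "reducing
        the gray-scale level in the remaining region to a half".
        """
        mask = (green >= low) & (green <= high)       # candidate lean/fat pixels
        reduced = np.zeros_like(green, dtype=np.int32)
        reduced[mask] = (green[mask].astype(np.int32) - low) // 2
        return reduced, mask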

[0063] The probability distribution A1 of the lean region and the
probability distribution A2 of the fat region can be calculated as
follows using the probability density functions of the gray-scale levels
pLlow, pLlow+1, pLlow+2, . . . , pLhigh, the sum p(A1) of the probability
density functions of the lean region, and the sum p(A2) of the
probability density functions of the fat region.
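The expressions themselves are not reproduced in the text above. In the
standard entropic-thresholding form that this description follows, for a
candidate threshold t the class sums and normalized distributions are (a
reconstruction, not the original notation):

    p(A_1) = \sum_{i=L_{low}}^{t} p_i , \qquad
    p(A_2) = \sum_{i=t+1}^{L_{high}} p_i ,

    A_1 : \frac{p_{L_{low}}}{p(A_1)}, \ldots, \frac{p_t}{p(A_1)} , \qquad
    A_2 : \frac{p_{t+1}}{p(A_2)}, \ldots, \frac{p_{L_{high}}}{p(A_2)} ,

and the α-dimension Rényi entropy of a class with normalized densities
q_i is

    H_\alpha = \frac{1}{1 - \alpha} \ln \sum_i q_i^{\alpha} ,

so that each candidate threshold is scored by H_α(A1) + H_α(A2).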

Here, t*1, t*2, and t*3 are values different from each other. The
gray-scale level t*2, at which α comes close to 1, is equal to the
optimal threshold value in the maximum entropy sum method, and the
gray-scale level t*3, at which α is greater than 1, is equal to the
optimal threshold value in the entropic correlation method. The
gray-scale level t*1 is the threshold value in the Rényi entropy method
when α is 0.5.

[0066] Accordingly, the optimal threshold value t*c in this
embodiment is expressed by the following expression.

[0067] The image in the green band is binarized using the optimal
threshold value calculated in the above-mentioned method.
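As an illustration, the following sketch scores every candidate threshold
of the reduced histogram by the sum of the two class Rényi entropies and
returns the maximizing level. It covers the α ≠ 1 cases (t*1 and t*3);
the α → 1 limit reduces to the maximum entropy sum method, and the
combination rule of [0066] is not reproduced here:

    import numpy as np

    def renyi_threshold(hist, alpha):
        """Gray level maximizing the sum of class Renyi entropies.

        `hist` is the probability density over the reduced gray levels;
        the class below the threshold is assumed to be lean and the class
        above it fat, per the description. Valid for alpha != 1.
        """
        best_t, best_h = None, -np.inf
        for t in range(1, len(hist) - 1):
            p1, p2 = hist[:t + 1].sum(), hist[t + 1:].sum()
            if p1 <= 0 or p2 <= 0:
                continue
            q1, q2 = hist[:t + 1] / p1, hist[t + 1:] / p2
            # Renyi entropy of order alpha: ln(sum q^alpha) / (1 - alpha)
            h = (np.log((q1 ** alpha).sum())
                 + np.log((q2 ** alpha).sum())) / (1.0 - alpha)
            if h > best_h:
                best_t, best_h = t, h
        return best_t

    # `hist`: density of the reduced green-band histogram (computed earlier)
    t1 = renyi_threshold(hist, 0.5)   # Renyi entropy method (alpha = 0.5)
    t3 = renyi_threshold(hist, 2.0)   # entropic correlation regime (alpha > 1)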

[0068] FIG. 4 is a diagram illustrating an image obtained by binarizing
the image in the green band using the calculated optimal threshold value
according to the embodiment.

[0069] In the boundary extracting step S30, a boundary line of the lean
region is extracted. That is, the boundary line of the lean region marked
with white by the binarization process is extracted. Here, the outline of
a binarized image can be extracted by various methods, many of which are
automated. However, in the image having been subjected to the region
separating step S20, the fat portions in the lean region are empty and
small lean portions are displayed in addition to a large lean blob.
Accordingly, when an automatic outline extracting method is applied
directly, the boundary line of the lean region necessary to determine the
grading region cannot be extracted. Therefore, in this embodiment, an
automatic boundary extracting sub-step S34 is performed after a labeling
sub-step S31, a dilation sub-step S32, and an erosion sub-step S33 are
performed.

[0070] In the labeling sub-step S31, a blob labeling process is performed
on the binarized image to label the lean region from which the boundary
line should be extracted. The blob labeling process is performed on
pixels connected in eight directions to each pixel, whereby only the
principal lean blobs remain and the lean regions including much fat so as
not to be used are removed.

[0071] FIG. 5 is a diagram illustrating an image in which only the lean
blobs remain by labeling the image shown in FIG. 4.

[0072] In the dilation sub-step S32, empty spaces resulting from
intramuscular fat located in the lean blobs having been subjected to the
labeling sub-step are filled. In this embodiment, the insides of the lean
regions are filled by two dilation passes using a 5×5 square mask.

[0073] FIGS. 6 and 7 are diagrams illustrating images obtained by filling
the blobs in the image shown in FIG. 5 by the dilation sub-step.

[0074] In the erosion sub-step S33, the blobs of the lean regions
exaggerated in the dilation sub-step are reduced. In this embodiment, the
erosion sub-step is performed using a square mask of 5×5.

[0075] FIG. 8 is a diagram illustrating an image obtained by performing
the erosion sub-step on the image shown in FIG. 7.

[0076] In the automatic boundary extracting sub-step S34, the boundary
line of the lean region trimmed in the dilation sub-step and the erosion
sub-step is extracted using an automatic boundary extracting method. In
this embodiment, the boundary line of the lean blob is extracted using an
8-direction chain coding method.
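The following sketch chains the four sub-steps with OpenCV. The blob-size
cutoff min_area is an assumption (the text keeps only the principal lean
blobs without giving a number), the two erosion passes mirror the two
dilation passes as an assumption, and findContours stands in for the
8-direction chain coder, since both follow the border of the blob:

    import cv2
    import numpy as np

    def extract_lean_boundary(binary, min_area=5000):
        """Labeling (S31), dilation (S32), erosion (S33), boundary (S34).

        `binary` is the thresholded green-band image (uint8, lean = 255).
        """
        # S31: 8-connected blob labeling; keep only the principal lean blobs
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary,
                                                               connectivity=8)
        keep = np.zeros_like(binary)
        for i in range(1, n):
            if stats[i, cv2.CC_STAT_AREA] >= min_area:
                keep[labels == i] = 255

        # S32: two dilation passes with a 5x5 square mask fill the empty
        # spaces left by intramuscular fat
        kernel = np.ones((5, 5), np.uint8)
        filled = cv2.dilate(keep, kernel, iterations=2)

        # S33: erosion with the same mask reduces the exaggerated blob
        trimmed = cv2.erode(filled, kernel, iterations=2)

        # S34: follow the outer border of the remaining lean blob
        contours, _ = cv2.findContours(trimmed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        return max(contours, key=cv2.contourArea)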

[0077] FIG. 9 is a diagram illustrating an image obtained by performing
the automatic boundary extracting sub-step on the image shown in FIG. 8.

[0078] In the boundary smoothing step S40, the boundary line of the lean
region is smoothed. Since the boundary line extracted in the boundary
extracting step S30 is extracted in units of pixels, the boundary line
is very rugged and is different from the boundary line extracted by a
specialized grader. Accordingly, the boundary line has to be smoothed so
as to be similar to the boundary line extracted by the specialized
grader.

[0079] FIG. 10 is a diagram illustrating the boundary line not having been
subjected to the boundary smoothing step.

[0080] In the boundary smoothing step S40, specific pixels are extracted
from the boundary line and then a curve generating method using the
relationships of the extracted pixels is applied.

[0081] The pixels for generating a curve are extracted so that a distance
between the pixels is great in a part with a smooth boundary line, and
the pixels are extracted so that the distance between the pixels is small
in a part with a rugged boundary line. That is, the part with the smooth
boundary line and the part with the complex boundary line should be
distinguished. In this embodiment, the number of pixels Y in a straight
line between two pixels A and B is compared with the number of pixels X
in the boundary line between the two pixels A and B, thereby determining
the degree of complexity of the boundary line. For example, when two
pixels A and B are separated by 20 pixels along the boundary line, X is
20, and Y is also 20 when the boundary line between A and B is a straight
line, which is the smoothest case. The value of Y decreases as the
complexity of the boundary line between A and B increases. Accordingly,
z, defined as the value obtained by dividing Y by X, can be used as a
coefficient indicating the degree of complexity of the boundary line
between two pixels. When z is greater than a predetermined value Z, the
boundary line can be determined to be a smooth line. When z is smaller
than the predetermined value, the boundary line can be determined to be a
complex line. The pixel extracting process using this coefficient is
described below.

[0082] First, a start pixel A in the boundary line is selected, the
positional information of the start pixel is stored, and an end pixel B
separated from the start pixel by X along the boundary line is detected.

[0083] The distance Y in the straight line between A and B is divided by X
to acquire z and the value of z is compared with the predetermined value
Z.

[0084] When z is equal to or greater than Z, the positional information of
the pixel B is stored and the above-mentioned entire processes are
repeatedly performed using the pixel B as a new start pixel.

[0085] When z is smaller than Z, the positional information of an
intermediate pixel C separated from the pixel A by W (<X) along the
boundary line is stored and the above-mentioned processes are repeatedly
performed using the intermediate pixel C as a new start pixel.

[0086] In this course, W, X, and Z are predetermined values, and the
pixels to be extracted change depending on these values. In this
embodiment, W=5, X=20, and Z=0.8 are used.
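A minimal sketch of this selection loop, with the straight-line pixel
count Y approximated by the Euclidean distance between the two pixels:

    import math

    def select_control_pixels(boundary, X=20, W=5, Z=0.8):
        """Adaptive selection of curve control pixels ([0082]-[0086]).

        `boundary` is the ordered list of (x, y) boundary pixels from the
        chain coding step.
        """
        selected = [boundary[0]]
        start, n = 0, len(boundary)
        while start + X < n:
            end = start + X                    # end pixel B, X pixels ahead
            (ax, ay), (bx, by) = boundary[start], boundary[end]
            Y = math.hypot(bx - ax, by - ay)   # straight-line distance
            if Y / X >= Z:                     # smooth part: jump to B
                start = end
            else:                              # complex part: advance only W
                start += W
            selected.append(boundary[start])
        return selected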

[0087] By carrying out the curve generating method using the positional
information of the extracted pixels, it is possible to obtain a smoothed
boundary line. In this embodiment, an Overhauser curve generating method
is used.

[0088] FIG. 11 is a diagram illustrating an image of the boundary line
obtained by performing the boundary smoothing step according to this
embodiment on the boundary line shown in FIG. 10.

[0089] In the boundary correcting step S50, protruded portions and
indented portions of the boundary line which are not smoothed in the
boundary smoothing step S40 are corrected. In the boundary smoothing step
S40, since pixels are extracted at small intervals and the curve
generating method is applied, valley portions indented by fat or steeply
protruding portions remain after the boundary smoothing step.

[0090] FIG. 12 is a diagram illustrating an image of the boundary line of
the lean region extracted from beef including the valley portions
indented by the fat. FIG. 13 is a diagram illustrating an image of the
boundary line obtained by performing the boundary smoothing step
according to this embodiment on the image shown in FIG. 12.

[0091] Some protruded portions or indented portions may not be cut along
the boundary in the actual cutting operation as shown in FIG. 12 and the
specialized graders generally set the grading region to include these
portions. Accordingly, a correction step of correcting small-sized
indented portions or protruded portions has to be performed.

[0092] To correct the indented portions or the protruded portions, the
indented portions or the protruded portions should be first determined.
Since a protruded portion is formed at the entry of the indented portion,
the protruded portions are first detected in the boundary line in this
embodiment. The protruded pixels are detected by comparing the slopes of
the pixels along the boundary line.

[0093] FIG. 14 is a diagram illustrating an image in which the protruded
portions are marked in the boundary line including the indented portions.

[0094] It is then determined whether the boundary between the adjacent
protruded pixels should be corrected. This determination is made on the
basis of a value k obtained by dividing the number of pixels I in the
boundary line between the adjacent protruded pixels by the number of
pixels J in the straight line between the protruded pixels. When the
value of k is smaller than a predetermined value K, the boundary line is
maintained. When the value of k is greater than the predetermined value
K, the boundary line is corrected on the basis of the adjacent protruded
pixels.
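The decision rule reduces to a one-line comparison; a sketch, again
approximating the straight-line pixel count by the Euclidean distance:

    import math

    def needs_correction(boundary, i1, i2, K=1.8):
        """k = I / J test of [0094] for adjacent protruded pixels at
        boundary indices i1 < i2; correct the segment when k exceeds K."""
        (x1, y1), (x2, y2) = boundary[i1], boundary[i2]
        I = i2 - i1                            # pixels along the boundary
        J = math.hypot(x2 - x1, y2 - y1)       # straight-line pixel count
        return I / J > K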

[0095] FIGS. 15A to 15D are diagrams illustrating images in which the
boundary line is corrected differently depending on the value of K. FIG.
15A shows the originally photographed picture, FIG. 15B shows the
corrected boundary line when the value of K is set to 1.6, FIG. 15C shows
the corrected boundary line when the value of K is set to 1.8, and FIG.
15D shows the corrected boundary line when the value of K is set to 2.1.
It can be seen from the drawings that the boundary line is corrected less
as the value of K becomes greater. The boundary line of the large
indented portion in the left beef image need not be corrected, while the
boundary line of the small indented portion in the right beef image needs
to be corrected. Accordingly, it can be seen that the optimal value of K
is 1.8.

[0096] The method of correcting the boundary line on the basis of the
protruded pixels employs the curve generating method, particularly, the
Overhauser curve generating method. To correct the boundary line using
the Overhauser curve generating method, two adjacent protruded pixels p2
and p3 and two pixels p1 and p4 separated outward from the pixels p2 and
p3 by a predetermined number of pixels along the boundary line are
extracted. In this embodiment, the pixels separated from the pixels p2
and p3 by 30 pixels are extracted as p1 and p4. Four pixels p1, p2, p3,
and p4 extracted in this way are used in the Overhauser curve generating
method to correct the boundary line between the protruded pixels. The
Overhauser curve C(t) is generated by the following expression.
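The expression itself is not reproduced in the text above. The Overhauser
curve coincides with the uniform Catmull-Rom spline, whose standard form
over the four control pixels, interpolating p2 at t=0 and p3 at t=1, is:

    C(t) = \tfrac{1}{2}\bigl[(-t^3 + 2t^2 - t)\,p_1 + (3t^3 - 5t^2 + 2)\,p_2
           + (-3t^3 + 4t^2 + t)\,p_3 + (t^3 - t^2)\,p_4\bigr], \quad t \in [0, 1].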

[0099] Here, D(i,j) represents a region surrounded by a desirable boundary
line extracted by specialized graders, C(i,j) represents a region
surrounded by the boundary line extracted in this embodiment, and PLM
represents the degree of overlapping between D(i,j) and C(i,j).

[0100] The percent error (PE) of the extracted boundary line is calculated
by PE=100-PLM. However, this value does not express the case where C(i,j)
is not included in D(i,j).

[0101] Therefore, an AEPD (Average Error Pixel Distance) indicating the
difference between the two results is applied together.

Here, XOR represents the exclusive OR, which is 1 when a difference
exists in the background or boundary line and 0 when a difference hardly
exists in the background or boundary line. P(i,j) represents the outline
of D(i,j).

[0102] Table 1 shows the accuracies of the boundary lines extracted in
this embodiment.

Here, A(i,j) = D(i,j) ∩ C(i,j), and the units of D(i,j), C(i,j), and
A(i,j) are pixels.

[0103] In the table, the PE of the boundary lines extracted in this
embodiment has an average value of 2.63, a maximum value of 7.38, and a
minimum value of 0.89. The AEPD has an average value of 2.51, a maximum
value of 7.13, and a minimum value of 1.03.

[0104] It can be seen from the table that the boundary lines of the lean
regions extracted according to this embodiment are very similar to the
desirable boundary lines extracted by the specialized graders and can be
applied as the boundary lines of the regions to be automatically graded.

[0105] In the grading region determining step S60, a grading region to be
graded is determined on the basis of the smoothed and corrected boundary
line. The grading region can be determined automatically in a grading
system, but it is preferable that a user check the determined grading
region. Particularly, an interactive checking procedure allowing the user
to correct the boundary line is preferably included in the checking
course. In this case, by providing a touch panel as a monitor
displaying an image, the user can be allowed to directly input a
correction point to the image displayed on the monitor. At the time of
correcting the boundary line, the user can directly input a corrected
boundary line with the touch panel, or can input a correction point with
a pointer or the like and apply the Overhauser curve generating method.

[0106] In the grading step S70, the beef quality is graded on the basis of
the determined grading region. The specialized graders synthetically
consider the size of a lean portion, the distribution of intramuscular
fat, the fat and lean colors, and the back fat thickness to grade the
beef quality.

[0107] In the size determining sub-step, an area of a lean portion is
determined by converting the number of pixels in the determined grading
region into an area.
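A trivial sketch, assuming the camera calibration supplies the physical
width of one pixel (the hypothetical constant cm_per_pixel):

    def grading_region_area_cm2(num_pixels, cm_per_pixel):
        """Convert the pixel count of the grading region into an area
        ([0107]); cm_per_pixel is an assumed calibration constant."""
        return num_pixels * cm_per_pixel ** 2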

[0108] In general, in the intramuscular fat determining sub-step, the
distribution of the intramuscular fat which is expressed by "marbling" is
determined. For this purpose, a co-occurrence matrix is calculated from
the image of the lean region. The binarization process is performed using
the image in the red band out of the RGB channels of the image of the
lean region. Four paths are selected as a horizontal path mask from the
co-occurrence matrix. Then, four tissue indexes of element difference
moment (EDM), entropy (ENT), uniformity (UNF), and area ratio (AR) are
calculated therefrom. The four tissue indexes are calculated as follows.

[0109] Here, i and j represent positional information values of a pixel.
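The expressions are not reproduced in the text above. For a normalized
co-occurrence matrix P(i, j), the first three indexes presumably follow
the standard co-occurrence definitions; the area ratio is not a standard
co-occurrence feature and is read here as the fat-pixel share of the
binarized grading region:

    EDM = \sum_{i,j} (i - j)^2 \, P(i, j), \qquad
    ENT = -\sum_{i,j} P(i, j) \ln P(i, j), \qquad
    UNF = \sum_{i,j} P(i, j)^2 .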

[0110] FIG. 17 is a graph illustrating beef grades based on the four
tissue indexes. This reflects the fact that the beef grade is higher as
the element difference moment, the entropy, and the area ratio become
smaller, and the beef grade is higher as the uniformity becomes greater.

[0111] In the color determining sub-step, the state of beef is checked
using the lean color and the fat color. Here, various lean and fat colors
of samples are compared with colors of the image. However, the RGB color
system of the image may not give a constant result due to the influence
of the lamp or the like. The L*a*b* color system of the International
Commission on Illumination (CIE) may be used instead of the RGB color
system having the above-mentioned problem. The L*a*b* color values are
generally measured using a colorimeter. An error may occur when the image
captured by the CCD camera is converted into the L*a*b* color values
using a conversion expression. Accordingly, in this embodiment, the
average RGB values calculated from the output values of the color camera
which are expressed in RGB are converted into the L*a*b* color values of
the CIE by the learning of a neural network, and a back-propagation
multi-layer neural network is used as the neural network.
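A minimal sketch of this conversion with scikit-learn's back-propagation
MLP; the training files, layer sizes, and iteration count are all
assumptions (the text specifies only a back-propagation multi-layer
network):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training pairs: camera RGB means and the L*a*b* values
    # measured for the same samples with a colorimeter.
    rgb_means = np.load("rgb_means.npy")          # shape (n, 3), assumed file
    lab_values = np.load("lab_colorimeter.npy")   # shape (n, 3), assumed file

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000)
    net.fit(rgb_means / 255.0, lab_values)        # back-propagation training

    def rgb_to_lab(mean_rgb):
        """Map an average RGB triple to CIE L*a*b* via the trained network."""
        x = np.asarray(mean_rgb, dtype=float).reshape(1, 3) / 255.0
        return net.predict(x)[0]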

[0112] In the fat thickness determining sub-step, the thickness of a back
fat which is attached to the outside of the lean region is measured to
grade the beef quality. First, a triangular method is performed using the
protruded portions of the determined boundary line as vertexes and then
the longest straight line is detected from the straight lines connecting
the protruded portions. The back fat portion of which the thickness
should be measured is selected from the fat layers surrounding the lean
region using the detected straight line. Finally, the normal line
perpendicular to the longest straight line is drawn on the back fat, and
the length of the normal line is measured and is determined as the back
fat thickness.
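A simplified sketch of the measurement: march along the unit normal to
the longest chord from a probe point on the selected back-fat layer,
counting fat pixels (the probe point and the binary fat mask are assumed
to come from the earlier steps):

    import numpy as np

    def back_fat_thickness(fat_mask, chord_a, chord_b, probe):
        """Back-fat thickness in pixels along the normal line ([0112]).

        `chord_a` and `chord_b` are the endpoints (row, col) of the longest
        straight line found by the triangular method; `probe` lies on the
        selected back-fat layer.
        """
        d = np.asarray(chord_b, float) - np.asarray(chord_a, float)
        normal = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])  # unit normal
        p = np.asarray(probe, float)
        rows, cols = fat_mask.shape
        thickness = 0
        while (0 <= int(round(p[0])) < rows
               and 0 <= int(round(p[1])) < cols
               and fat_mask[int(round(p[0])), int(round(p[1]))]):
            p += normal       # step outward through the fat layer
            thickness += 1
        return thickness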

[0113] FIGS. 18 to 20 are diagrams illustrating the procedure of measuring
the back fat thickness. FIG. 18 shows a state where the triangular method
is applied to the grading region, FIG. 19 shows a state where the longest
straight line is detected to select the back fat portion, and FIG. 20
shows a state where the back fat thickness is measured using the normal
line.

[0114] The beef quality is finally graded by synthesizing the grades
estimated on the basis of the size of the lean portion, the intramuscular
fat distribution, the lean and fat colors, and the back fat thickness.

[0115] The system for automatically grading beef quality according to an
embodiment of the invention includes an image acquiring unit, a grading
unit, and a data storage unit.

[0116] The image acquiring unit serves to capture an image of a beef
section and includes a CCD camera and a lamp. The CCD camera is an image
capturing device which can store a beef section as a digital image, and
the lamp is a device illuminating the beef with strong light so as for
the CCD camera to capture an accurate image. In the image acquiring unit
shown in FIG. 2 according to this embodiment, a white LED lamp 20 is
attached to the periphery of the CCD camera 10 in a round form, and a
knob with a switch is attached to the outside thereof, so that an image
of a beef section can be easily captured.

[0117] The grading unit includes an analyzer analyzing the image acquired
by the image acquiring unit to grade the beef quality and a monitor
displaying the image and the analysis result.

[0118] The analyzer serves to analyze a digital image to determine a
grading region and to grade the beef quality, and includes a processor
analyzing the digital image.

[0119] The monitor is an imaging device displaying the image acquired by
the image acquiring unit and the analysis result of the image for the
user. In this embodiment, an interactive system can be constructed using
a touch pad monitor through which a user can directly input data on the
screen.

[0120] FIG. 21 is a diagram illustrating a grading start picture displayed
on the monitor of the system for automatically grading beef quality
according to this embodiment. In the grading start picture, a user can
check the boundary line marked in the image and touch a start button,
thereby allowing the analyzer to start the grading. When it is necessary
to correct the boundary line marked in the image, a boundary correction
button may be touched to start the correction step. In this embodiment,
since the touch pad is employed, the user can directly input a correcting
part to the image displayed on the monitor.

[0121] FIG. 22 is a diagram illustrating a grading result picture
displayed on the monitor of the system for automatically grading beef
quality according to this embodiment. Since the grading result picture
includes a beef section image and the grading result, the user can see
all information on the grading.

[0122] The data storage unit serves to store the image data acquired by
the image acquiring unit and the analysis result data including the
boundary line information analyzed by the grading unit. Since the
procedures of the system according to the embodiment are all
computerized, the image data and the analysis result data can be stored
in the data storage unit. When the result picture shown in FIG. 22 is
stored in the data storage unit to construct a database, it is possible
to check the grading result including the measured values of grading
items at any time, whereby the objectivity of the grading is guaranteed
and the database can be utilized as base materials for improving meat
quality of cattle farms. In addition, by applying the beef grading data
according to the invention to the recent beef history rule, the database
can be utilized as materials useful for selling or purchasing beef.

[0123] Particularly, since the data storage unit according to the
embodiment is connected to a computer network, it is possible to check
the analysis result data of the beef grading according to the invention
at any place of the country using the Internet.

[0124] While the exemplary embodiments of the invention have been shown
and described above, the invention is not limited to the exemplary
embodiments, but it will be understood by those skilled in the art that
the invention can be modified in various forms without departing from the
technical spirit of the invention. Therefore, the scope of the invention
is not limited to any specific embodiment, but should be determined by
the appended claims.