This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2008-310010, filed on Dec. 4, 2008; the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image evaluation device and an image evaluation method for evaluating the quality of an image.
2. Description of the Related Art
Various techniques have been developed to evaluate the quality of an image (for instance, the presence/absence of blurring, camera-shake, and the like occurring in an image captured by an imaging system such as a camera). Here, techniques for evaluating the quality of an image based on edges of the image have been disclosed (refer to References 1 to 3, which are JP-A 2006-019874 (KOKAI), JP-A 2006-172417 (KOKAI), and JP-A 2008-123346 (KOKAI), in order). Reference 1 discloses a technique in which a histogram of an estimated value of edge width is computed for each direction, and if the histograms differ among the directions, it is determined that camera-shake has affected the image. Reference 2 discloses a technique in which an average of edge widths of an original image is estimated to evaluate the degree of blurring. Furthermore, Reference 2 discloses a technique in which the direction of the camera-shake is estimated as the direction perpendicular to the direction in which the edge strength is maximum, an autocorrelation is calculated along the camera-shake direction, and the displacement of the minimum value of the autocorrelation is computed as the width of the camera-shake. Reference 3 discloses a technique in which an average of edge widths is computed for each direction, and if all the averages are equal to or less than a threshold value, it is determined that no blurring occurs. Furthermore, Reference 3 discloses a technique in which edge patterns are classified based on a pattern matching of DCT coefficients, an edge width is estimated based on a representative value of edge widths previously computed for each classification, and a blur region is narrowed down by taking it as the part in which the estimated edge width is wide.
In the techniques disclosed in References 1 to 3, the edge direction is estimated and the edge width along the edge direction is calculated. A method based on the calculation of the edge width is effective for evaluating simple blurring. However, when camera-shake has affected the image, the estimation of the edge direction may be incorrect. For this reason, it may be difficult to accurately evaluate the image quality with these methods. Furthermore, with these methods, when ghost images are present but no other defects are significant, the sharpness of the edges is unaffected. Accordingly, the defect of the image quality may go undetected.
One of the objects of the present invention is to provide an image evaluation device and an image evaluation method which do not depend on estimation of the gradient direction.
An image evaluation device according to one aspect of the present invention includes: a partial area extracting section extracting a plurality of partial areas from an image; an extracted image generating section extracting a plurality of extracted images corresponding to the plurality of partial areas from the image; an autocorrelation calculating section calculating a plurality of autocorrelation coefficients corresponding to the plurality of extracted images; a representative coefficient value calculating section calculating a representative coefficient value of the plurality of autocorrelation coefficients; and a checking section checking the quality of the image based on a distribution of the representative coefficient values.
An image evaluation method according to one aspect of the present invention includes: extracting a plurality of partial areas from an image; generating a plurality of extracted images corresponding to the plurality of partial areas of which pixel values are gradient of pixel values in the image; calculating a plurality of autocorrelation coefficients corresponding to the plurality of extracted images; calculating a representative coefficient value of the plurality of autocorrelation coefficients; and determining the quality of the image based on a distribution of the representative coefficient values.
In a later-described embodiment of the present invention, a degradation function representing an image degradation in the imaging system is calculated from the original image, and as the degradation function, an autocorrelation of impulse response of the imaging system is used, for instance. The autocorrelation of impulse response is estimated as the degradation function for the following reason.
Many of the image degradations in the imaging system can be modeled by a two-dimensional linear system whose input is the ideal image captured by an imaging system without image degradation and whose output is the image captured by the imaging system with image degradation. The impulse response of the linear system is spread because of image degradation such as blurring, camera-shake, ghost images, and afterimages. For instance, if the captured image is blurred, the impulse response is spread isotropically like a Gaussian, and when camera-shake has affected the image, the response is spread along a line. Furthermore, when ghost images or afterimages are present, extra peaks other than the one at the origin are observed in the impulse response. Since the extent of the impulse response causes a corresponding extent of its autocorrelation, the image degradation causes an extent of the autocorrelation of the impulse response. Therefore, by estimating the autocorrelation of the impulse response as the degradation function and evaluating the degree of the extent, it is possible to check the presence/absence of the image degradation.
In the explanation hereinbelow, it is assumed that a linear operator is used for extracting edge images, and an edge image of an ideal image with no image degradation is called “ideal edge image”, and an edge image extracted from the original image with image degradation is simply called “edge image”.
In the present invention, the degradation function is estimated based on the edge image instead of the original image. Under the assumption that a linear operator is used for extracting the edge image, the linear system that converts the ideal edge image into the edge image is equal to the linear system by which the image degradation is modeled; therefore, it is possible to estimate the degradation function using the edge image instead of the original image, as described above.
The extent of the autocorrelation coefficients of the edge images in the partial areas can be regarded as the sum of the extent due to the image degradation and the extent due to the geometrical structure of the ideal edge images in the respective partial areas. Furthermore, the extent in accordance with the geometrical structure reflects the tendency of the positional relationship among the pixels whose pixel values are non-zero in the ideal edge image. For instance, at a portion where the edge is linear in the horizontal direction, the autocorrelation coefficients of the edge image are spread horizontally, and at a portion where the edge is linear in the vertical direction, the coefficients are spread vertically.
The tendency of the geometrical structure differs among the partial areas. When the autocorrelation coefficient of the ideal edge image in each of the partial areas is calculated, the extent in accordance with the structure of each of the partial areas is observed, and the origin is the only point at which the autocorrelation coefficient is always positive regardless of the structure of each of the partial areas. Accordingly, the autocorrelation of the ideal edge image calculated for each of the partial areas can be decomposed into a part at the origin where the correlation is always positive, and a sum of parts that depend on the geometrical structure.
Therefore, the autocorrelation of each of the partial areas of the edge image formed by degrading the ideal edge image with the linear system can also be decomposed into a component corresponding to the image degradation and a sum of components corresponding to the geometrical structure. Accordingly, if the components that depend on the geometrical structure are assumed to be positive, they are eliminated by taking, at each of the coordinates, the minimum value of the autocorrelation calculated over the partial areas of the edge image, so that the component corresponding to the image degradation is computed. Therefore, in the present invention, the autocorrelation of the impulse response, namely, the degradation function, is estimated by calculating the representative coefficient value, such as the minimum value, at each of the coordinates of the autocorrelation, as described above.
The pixel value of each pixel in the edge image is a complex number expressing the gradient vector of the original image. By this design, the extraction of the edge image is performed through a linear conversion of the original image, which is convenient since it matches the aforementioned assumption.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
The image evaluation device 100 can be structured by incorporating image evaluation software into a computer, so that the explanation will be made hereinafter by assuming that such a structure is made. However, it is also possible to structure the image evaluation device 100 using dedicated hardware, an aggregate of dedicated hardware, or a computer network for distributed processing. The image evaluation device 100 can adopt, not only the structure cited here, but also various structures.
The original image input unit 110 is an input device to input data of an original image. The image evaluation unit 120 is for evaluating the quality of the original image (for instance, the presence/absence of blurring, camera-shake, and the like occurring in an image captured by an imaging system such as a camera), and includes a gray-scale image generating unit 121, an edge image generating unit 122, an autocorrelation calculating unit 123, a degradation function estimating unit 124, an extent width calculating unit 125, and a quality checking unit 126. Note that the image evaluation unit 120 can be structured by a combination of a CPU (Central Processing Unit) and software. The display unit 130 is a display device that displays an image and the like, such as, for instance, a CRT or an LCD. The input unit 140 is an input device to input information, such as, for instance, a keyboard and a mouse.
It is set such that the original image in the present embodiment is a color image or a gray-scale image whose number of pixels in the horizontal direction and the vertical direction are respectively w and h. The color image can be represented as a combination of two-dimensional arrays R (x, y), G (x, y), and B (x, y) of luminance values of a red component, a green component, and a blue component. The gray-scale image can be represented as a two-dimensional array I (x, y) of a luminance value.
Here, x and y indicate coordinates in the horizontal direction and in the vertical direction, respectively. In the coordinates x, y, the right direction and the downward direction are respectively set as positive directions. Note that the coordinates x, y are represented by using a pixel as a unit.
In the present embodiment, the luminance values R (x, y), G (x, y), B (x, y), and I (x, y) are set to be represented by an integer from 0 to 255. For instance, a gray-scale image shown in
The gray-scale image generating unit 121 gray-scales the original image when the original image is a color image. As a result, the two-dimensional array I (x, y) of the luminance value is generated, similar to the case where the original image is a gray-scale image. For example, the gray-scaling is performed in accordance with the following expression (1).
I (x, y)=max (0, min (Imax, WR·R (x, y)+WG·G (x, y)+WB·B (x, y)))   expression (1)
Note that WR, WG, and WB are positive constants for weighting the red component, the green component, and the blue component, and min (x1, . . . xn) and max (x1, . . . xn) indicate the minimum value of x1, . . . xn, and the maximum value of x1, . . . xn, respectively. Imax is the upper limit value of the luminance.
The expression (1) indicates, as in the following expression (2), that the two-dimensional array I (x, y) is represented by the weighting sum of the two-dimensional arrays R (x, y), G (x, y) and B (x, y). Both min and max are for ensuring that the calculated value of the two-dimensional array I (x, y) is between the upper limit (Imax) and the lower limit (0).
I (x, y)=WR·R (x, y)+WG·G (x, y)+WB·B (x, y) expression (2)
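For illustration only, the gray-scaling of expressions (1) and (2) could be sketched as follows in Python with numpy (not part of the described embodiment). The weight values shown are conventional luma weights chosen merely as an example of positive constants WR, WG, and WB; the function name is hypothetical.

```python
import numpy as np

def to_grayscale(R, G, B, w_r=0.299, w_g=0.587, w_b=0.114, I_max=255):
    """Weighted sum of the color planes, clamped to [0, I_max] as in expression (1).

    R, G, B are 2-D arrays of luminance values. The weights shown here are
    conventional luma weights, used only as an example of positive constants.
    """
    I = (w_r * R.astype(np.float64)
         + w_g * G.astype(np.float64)
         + w_b * B.astype(np.float64))
    # max(0, min(I_max, ...)) of expression (1) is exactly a clamp to [0, I_max].
    return np.clip(I, 0, I_max)
```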
The edge image generating unit 122 generates an edge image from the gray-scale image. A combination of the edge image generating unit 122 and a later-described extracted image extracting unit 156 works as an extracted image generating section that generates a plurality of extracted images corresponding to a plurality of partial areas. The gray-scale image can be either the original image itself, or the gray-scale image generated from the original image (color image).
The edge image is, for instance, an image having pixels whose pixel values correspond to a gradient of pixel values of the original image (gray-scale image). The pixel values of the edge image here indicate the gradient of pixel values of the gray-scale image. Here, the pixel value of the edge image (value of gradient) is set to a complex number. Namely, the edge image generating unit 122 calculates a gradient vector of each pixel of the gray-scale image, and generates the edge image in which a horizontal component and a vertical component of the vector are respectively set to a real component and an imaginary component of the pixel.
A gradient vector g (x, y) in coordinates (x, y) can be defined by the following expression (3).
g (x, y)=(I (x+1, y)−I (x−1, y), I (x, y+1)−I (x, y−1))   expression (3)
Therefore, a pixel value E (x, y) of the edge image in the coordinates (x, y) can be calculated in accordance with the following expression (4).
E (x, y)=(I (x+1, y)−I (x−1, y))+j·(I (x, y+1)−I (x, y−1))   expression (4)
Note that “j” is set to indicate an imaginary unit.
Here, in the calculation of pixel value E (x, y), when the coordinates (x+1, y) position on the right side of a right edge of the image, namely, when x+1>w, it is set that I (x+1, y)=I (w−1, y). In like manner, when x−1<0, it is set that I (x−1, y)=I (0, y). Furthermore, when y+1>h, it is set that I (x, y+1)=I (x, h−1). When y−1<0, it is set that I (x, y−1)=I (x, 0).
Since a pixel value is not defined on the outside of the gray-scale image, the gradient at the edge of the image cannot be calculated. By setting the pixel value on the outside of the image to have the same value as the pixel value in the periphery of the image, it becomes possible to calculate the gradient at the edge of the image.
Real components and imaginary components as a result of calculating the edge image in the edge image generating unit 122 from the image shown in
The expression (4) represents an expanded gradient coefficient (edge image) in which gradients in two directions (horizontal direction and vertical direction) are set to the real component and the imaginary component.
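As a non-limiting sketch (assuming Python with numpy and an image array indexed as I[y, x]), expression (4) together with the border rule described above might be implemented like this; the function name is illustrative.

```python
import numpy as np

def edge_image(I):
    """Complex-valued edge image of expression (4).

    The gradient at the image border is computed by reusing the border pixel
    itself, which mirrors the rule I(x+1, y) = I(w-1, y), etc., described above.
    """
    I = I.astype(np.float64)
    # np.pad with mode="edge" replicates the outermost row/column of the image.
    P = np.pad(I, 1, mode="edge")
    dx = P[1:-1, 2:] - P[1:-1, :-2]   # I(x+1, y) - I(x-1, y)
    dy = P[2:, 1:-1] - P[:-2, 1:-1]   # I(x, y+1) - I(x, y-1)
    return dx + 1j * dy               # real part: horizontal, imaginary part: vertical
```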
The amount corresponding to the clarity of the edge (particularly, an amount whose absolute value corresponds in magnitude to the clarity of the edge) can be used instead of the pixel value represented by the expression (4). For instance, as in the following expressions (4a), (4b), and (4c), it is possible to define the edge image using an image in which a gradient in a single direction (horizontal, vertical, or diagonal) itself is set as the pixel value.
E (x, y)=I (x+1, y)−I (x−1, y) expression (4a)
E (x, y)=I (x, y+1)−I (x, y−1) expression (4b)
E (x, y)=I (x+1, y+1)−I (x−1, y−1) expression (4c)
The edge image represented by the expression (4) is defined by a linear amount corresponding to differences in luminance of pixels adjacent to the pixel at coordinates (x, y). Alternatively, it is also possible to define the edge image using a non-linear amount (for instance, an amount computed by non-linearly converting a difference in luminance). For example, it is possible to create a non-linear edge image by setting the norm of the gradient vector g (x, y) as E (x, y), and the like.
As described above, the amount having an absolute value whose magnitude corresponds to the clarity of the edge (in other words, the amount having a large absolute value which intensively appears on the periphery of the edge of the original image) can be used for the definition of the edge image.
The autocorrelation calculating unit 123 calculates a second-order or higher autocorrelation coefficient of the edge image in each of the n partial areas R1, . . . Rn.
The partial area enumeration unit 151 enumerates (extracts), from the original image, n partial areas R1, . . . Rn including a certain number or more of pixels whose pixel values are largely different from a representative pixel value. The partial area enumeration unit 151 works as a partial area extracting section that extracts a plurality of partial areas from the original image.
Note that each of the partial areas Rk is a quadrangle having a width of a and a height of b, and coordinates at the upper-left corner of the kth quadrangle Rk are set to (xk, yk). Furthermore, p and q indicate coordinates in the horizontal direction and in the vertical direction, respectively, and take integers within the following range in which a pixel is used as a unit.
pmin≦p≦pmax
qmin≦q≦qmax
pmin=−Floor (a/2), pmax=pmin+a−1
qmin=−Floor (b/2), qmax=qmin+b−1
Note that Floor (x) is set to indicate the maximum value of integers equal to or less than x.
The minimum value pmin and the maximum value pmax are set so that substantially the center of each of the partial areas Rk corresponds to the origin (0, 0). When the width a and the height b are odd numbers, the center of the partial area Rk coincides with the origin (0, 0). When the width a and the height b are even numbers, the center of the partial area Rk does not correspond to a pixel (the center is disposed between pixels), so that, in order to make the origin (0, 0) correspond to a pixel, the origin (0, 0) is displaced from the center.
The partial area enumeration unit 151 includes a block enumeration unit 153, a partial area selecting unit 154, and a partial area selecting unit 155. The block enumeration unit 153 enumerates t blocks B1, . . . Bt. The partial area selecting unit 154 selects, from the original image, m candidates for partial areas D1, . . . Dm, within the blocks B1, . . . Bt, including a certain number or more of pixels whose pixel values are largely different from a representative pixel value. The partial area selecting unit 155 further selects n partial areas R1, . . . Rn from the candidates for partial areas D1, . . . Dm.
The block enumeration unit 153 enumerates t blocks B1, . . . Bt from the image.
In the block enumeration unit 153, the number t of blocks B1, . . . Bt, each having a width of a and a height of b, and the coordinates (xB, 1, yB, 1), . . . (xB, t, yB, t) at the upper-left corners of the blocks are calculated in accordance with the following method.
A. thorz and tvert are calculated through the following expressions.
thorz=Floor (w/a)
tvert=Floor (h/b)
t=thorz·tvert
Specifically, the original image (width: w, height: h) is basically divided into the blocks B1, . . . Bt arranged as thorz blocks horizontally by tvert blocks vertically. Floor is used to round (w/a) and (h/b) down to integers when w and h are not evenly divisible by a and b.
B. k is initialized to 1.
C. Coordinates (xB, k, yB, k) at the upper-left corner of the block Bk are calculated as follows.
(1) The following processing is repeated while changing a value of ivert from 0 to (tvert−1).
(2) The following processing is repeated while changing a value of ihorz from 0 to (thorz−1).
1) xB, k is calculated: xB, k=a·ihorz.
2) yB, k is calculated: yB, k=b·ivert.
3) k is increased by 1.
By changing ivert and ihorz, k is changed from 1 to t. As a result, the coordinates (xB, 1, yB, 1), . . . (xB, t, yB, t) at the upper-left corners of all the blocks B1, . . . Bt are calculated.
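A minimal sketch of this block enumeration, under the assumption that plain Python integer arithmetic is acceptable, might look as follows; the function name and the list-of-tuples return format are illustrative only.

```python
def enumerate_blocks(w, h, a, b):
    """Enumerate the upper-left coordinates of the t = t_horz * t_vert blocks.

    Follows the loop order described above: the vertical index i_vert is the
    outer loop and the horizontal index i_horz is the inner loop.
    """
    t_horz = w // a          # Floor(w / a)
    t_vert = h // b          # Floor(h / b)
    coords = []
    for i_vert in range(t_vert):
        for i_horz in range(t_horz):
            coords.append((a * i_horz, b * i_vert))  # (x_Bk, y_Bk)
    return coords
```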
The partial area selecting unit 154 selects the m candidates for partial areas D1, . . . Dm, each having a width of a and a height of b and coordinates (xD, 1, yD, 1), . . . (xD, m, yD, m) at the upper-left corners of the areas, in accordance with the following method.
A. It is set that m=0.
B. The following processing is repeated while changing a value of k from 1 to t.
(1) The median value of the pixel values I (x, y) of the original image in the block Bk is calculated and is set as a representative pixel value Gk of the original image. The median value is the value of the data located at the center when the pieces of data are arranged in order. When the number nn of data is an odd number, the value of the [(nn+1)/2]th smallest (substantially central) data is the median value. Furthermore, when the number nn of data is an even number, the average of the Floor [nn/2]th smallest (central) data and the Floor [(nn/2)+1]th smallest (central) data is the median value. The median value can be regarded as the representative pixel value in the block Bk.
(2) Threshold values θ1, k and θ2, k are calculated in accordance with the following expressions.
θ1, k=α1·Gk
θ2, k=α2·Gk
Note that α1 and α2 are predetermined constants which satisfy 0<α1<1<α2.
(3) The number ek of pixels in the block Bk whose pixel values I (x, y) are out of the range determined based on the representative pixel value, that is, pixels satisfying the following condition, is counted.
I (x, y)≦θ1, k, or θ2, k≦I (x, y)
(4) When ek≧β·a·b is satisfied, the following processing is conducted. Note that β is a predetermined constant which satisfies 0<β<1.
1) m is increased by 1.
2) It is set that xD, m=xB, k, yD, m=yB, k
Through the above processing, the partial area selecting unit 154 selects, from the original image, the candidates for partial areas D1, . . . Dm, within the blocks B1, . . . Bt, including a certain number or more of pixels whose pixel values are largely different from the representative pixel value. Specifically, the partial area selecting unit 154 selects, as a candidate for partial area Dk, a block in which the number ek of pixels whose pixel values are out of the range of the threshold values θ1, k to θ2, k exceeds the predetermined ratio β of the total number of pixels (β·a·b).
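The candidate selection by the partial area selecting unit 154 could be sketched as follows (Python with numpy assumed). The constants alpha1, alpha2, and beta are hypothetical example values satisfying the stated conditions, and the median is used as the representative value, following the description above.

```python
import numpy as np

def select_candidates(I, blocks, a, b, alpha1=0.5, alpha2=1.5, beta=0.1):
    """Keep blocks containing enough pixels far from the representative value.

    I is the gray-scale image indexed as I[y, x]; blocks is the list of
    upper-left coordinates (x, y) produced by the block enumeration step.
    """
    candidates = []
    for (x, y) in blocks:
        block = I[y:y + b, x:x + a].astype(np.float64)
        G_k = np.median(block)                       # representative value of the block
        theta1, theta2 = alpha1 * G_k, alpha2 * G_k  # thresholds of step (2)
        # Count pixels with I <= theta1 or theta2 <= I, as in step (3).
        e_k = np.count_nonzero((block <= theta1) | (block >= theta2))
        if e_k >= beta * a * b:                      # condition of step (4)
            candidates.append((x, y))
    return candidates
```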
The partial area selecting unit 155 selects, from the candidates for partial areas D1, . . . Dm, the n partial areas R1, . . . Rn each having a width of a and a height of b and coordinates (x1, y1), . . . (xn, yn) at the upper-left corners of the areas. The selection can be performed randomly. In this case, the partial areas R1, . . . Rn can be selected by using a pseudo-random index k1 represented by the following expression (11). The candidate for partial area Dk1 is selected as the partial area Rk.
k1={[N (k−1)] mod m}+1 expression (11)
Note that n is a predetermined number. Furthermore, N is a prime number and can be set as N=9973, for instance. “x mod y” indicates a remainder as a result of dividing x by y.
It is also possible to regularly select the partial areas R1, . . . Rn from the candidates for partial areas D1, . . . Dm. For instance, it is possible to select the partial areas R1, . . . Rn by using an integer k1 represented by the following expression (11a). A candidate for partial area Dk1 is selected as the partial area Rk.
k1=Floor [(m−1) (k−1)/(n−1)]+1 expression (11a)
The coordinates (xk, yk) at the upper-left corner of the selected partial area Rk are represented as follows.
(xk, yk)=(xD, k1, yD, k1)
The selection by the partial area selecting unit 155 is conducted to fix the number of the partial areas R1, . . . Rn at n. The number m of the candidates for partial areas D1, . . . Dm selected by the partial area selecting unit 154 depends on the original image. Specifically, the number depends on the ratio of areas, included in the original image, that include a certain number or more of pixels whose pixel values are largely different from the representative pixel value. It is also possible to fix the number of the partial areas R1, . . . Rn at n by stopping the selection by the partial area selecting unit 154 partway through. In the present embodiment, after the selection by the partial area selecting unit 154 is completed, the number of the partial areas R1, . . . Rn is adjusted to n by the partial area selecting unit 155.
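A sketch of the pseudo-random selection of expression (11), translated to 0-based indexing, might look as follows; the function name is illustrative, and candidates is assumed to be the list of upper-left coordinates of D1, . . . Dm produced by the previous step (with m ≧ 1).

```python
def select_partial_areas(candidates, n, N=9973):
    """Pick n partial areas from the m candidates using expression (11).

    N is a prime number (9973 in the text). In 1-based notation the text uses
    k1 = {[N(k-1)] mod m} + 1; here both k and k1 are shifted to 0-based indices.
    """
    m = len(candidates)
    selected = []
    for k in range(n):            # k corresponds to (k - 1) in the 1-based text
        k1 = (N * k) % m          # 0-based index of the candidate D_{k1}
        selected.append(candidates[k1])
    return selected
```

The regular selection of expression (11a) would only change the index computation, for example to `k1 = ((m - 1) * k) // (n - 1)` for n > 1.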
As described above, various methods, which can be performed randomly or regularly, can be adopted for the selection of the partial areas R1, . . . Rn by the partial area selecting unit 155. As the entire partial area enumeration unit 151, various methods can be adopted for enumerating, from the original image, the partial areas R1, . . . Rn including a certain number or more of pixels whose pixel values are largely different from the representative pixel value.
The calculating unit of autocorrelation in area 152 calculates, for each of the partial areas Rk, the autocorrelation coefficient Ck (p, q) of the pixel value E (x, y) of the edge image. The calculating unit of autocorrelation in area 152 includes an extracted image extracting unit 156, and a calculating unit of autocorrelation of extracted image 157.
The extracted image extracting unit 156 extracts, from the edge image, partial edge images U1, . . . Un corresponding to the respective partial areas R1, . . . Rn. The calculating unit of autocorrelation in area 152 calculates an autocorrelation coefficient corresponding to each of the extracted images. As described above, the partial area Rk is the area having a width of a and a height of b and coordinates (xk, yk) at the upper-left corner. The extracted image extracting unit 156 copies the pixel value E (xk+x, yk+y) of the edge image into the pixel value Fk (x, y) located x pixels to the right of and y pixels below the upper-left corner of the extracted image, as shown by the following expression.
Fk (x, y)=E (xk+x, yk+y)
The calculating unit of autocorrelation of extracted image 157 calculates a second-order or higher autocorrelation coefficient Ck (p, q) of the extracted image. The calculating unit of autocorrelation of extracted image 157 works as an autocorrelation calculating section that calculates the autocorrelation coefficient for each of the partial areas R1, . . . Rn. The calculating unit of autocorrelation of extracted image 157 calculates the autocorrelation coefficient Ck (p, q) for each of the integers p and q satisfying pmin≦p≦pmax, qmin≦q≦qmax, through the following expression (21), for instance.
Ck (p, q)=η·[Σy=0..b Σx=0..a Fk (x, y)·CJ (Fk ((p+x) mod a, (q+y) mod b))]/[Σy=0..b Σx=0..a Fk (x, y)·CJ (Fk (x, y))]   expression (21)
Here, η is a predetermined constant (255, for instance). Furthermore, CJ (x) represents a conjugate complex number of x.
The expression (21) is basically represented by the following expression (22).
Ck (p, q)=η·[Σy=0..b Σx=0..a Fk (x, y)·CJ (Fk (p+x, q+y))]/[Σy=0..b Σx=0..a Fk (x, y)·CJ (Fk (x, y))]   expression (22)
The reason why “(p+x) mod a” and “(q+y) mod b” are used in the expression (21) is to simplify the calculation of the autocorrelation coefficient Ck (p, q) by defining pixel values outside the area of the original extracted image. Specifically, the autocorrelation coefficient Ck (p, q) is calculated by assuming that copies of the extracted image are tiled vertically and horizontally.
The expressions (21) and (22) take into consideration that the pixel value Fk (x, y) of the extracted image is the complex number. If the pixel value Fk (x, y) is a real number (for instance, if the pixel value E (x, y) of the edge image can be represented by the expressions (4a) to (4c)), an arithmetic expression of the autocorrelation coefficient Ck (p, q) is simplified. For instance, the expression (22) is represented as the following expression (23).
Ck (p, q)=η·[Σy=0..b Σx=0..a Fk (x, y)·Fk (p+x, q+y)]/[Σy=0..b Σx=0..a Fk (x, y)·Fk (x, y)]   expression (23)
As the calculation method of the autocorrelation coefficient in the calculating unit of autocorrelation of extracted image 157, not only the aforementioned method but also various methods can be adopted, as long as the autocorrelation coefficient Ck (p, q) of the edge image in the partial area can be calculated. For example, the autocorrelation coefficient Ck (p, q) can be calculated by Fourier-transforming the pixel values Fk (x, y) to obtain their spectrum (to resolve the spatial frequencies) and then inverse-Fourier-transforming the square of the absolute value of the spectrum.
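A direct (non-FFT) sketch of expression (21) is given below, assuming Python with numpy and an extracted image stored as a complex array indexed [y, x]. Offsets are returned modulo the area size, so the value for a negative offset p appears at index p mod a (and likewise for q); the function name and the value of η are illustrative.

```python
import numpy as np

def autocorrelation(F, eta=255.0):
    """Normalized circular autocorrelation of one extracted image (expression (21)).

    F is a complex 2-D array F_k(x, y) indexed as [y, x]. The image is treated
    as periodically tiled, which is what the "mod a" / "mod b" terms express.
    Returns C_k(p, q) for every offset (p, q) within the area.
    """
    denom = np.sum(F * np.conj(F)).real          # sum of |F|^2, a real number
    b, a = F.shape
    C = np.empty((b, a), dtype=complex)
    for q in range(b):
        for p in range(a):
            # shifted[y, x] = F((x + p) mod a, (y + q) mod b)
            shifted = np.roll(F, shift=(-q, -p), axis=(0, 1))
            C[q, p] = eta * np.sum(F * np.conj(shifted)) / denom
    return C
```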
The degradation function estimating unit 124 estimates a degradation function T (p, q) by calculating the representative coefficient value for each of the coordinates (p, q) of the autocorrelation coefficients Ck (p, q) (that is, for each of the pixels). The degradation function estimating unit 124 works as a representative coefficient value calculating section that calculates the representative coefficient value of the plurality of autocorrelation coefficients at each of the corresponding pixels. The degradation function T (p, q) is a function representing the degree of degradation of the image caused by the imaging system used for capturing the original image. The degradation function estimating unit 124 calculates the degradation function T (p, q) at the coordinates (p, q) in accordance with the following expression.
T (p, q)=min (C1 (p, q), . . . Cn (p, q))
Note that min (x1, . . . xn) indicates the minimum value of x1, . . . xn.
The autocorrelation coefficients Ck (p, q) have a distribution corresponding to the degree of correlation of the pixel values Fk (x, y). At the origin (0, 0), the correlation between identical pixels is calculated, and it takes the maximum value (η). A point other than the origin takes a value corresponding to the correlation at offsets p and q in the horizontal direction and the vertical direction. For example, if the edge (boundary of bright and dark) is clear in the original image, large pixel values Fk (x, y) are aligned linearly along the edge direction. Furthermore, if the edge (boundary of bright and dark) is not clear in the original image, relatively large pixel values Fk (x, y) are spread with some width along the edge direction.
Specifically, it is possible to determine the quality of the original image based on the width of the autocorrelation coefficients Ck (p, q). Generally, determination of the edge direction is required to obtain this width. In this embodiment, instead of determining the edge direction, the degradation function T (p, q) is calculated by computing a plurality of autocorrelation coefficients Ck (p, q) from the original image and obtaining the representative coefficient value of the plurality of autocorrelation coefficients Ck (p, q) for each of the coordinates (p, q). By obtaining the representative coefficient values of the autocorrelation coefficients Ck (p, q) in this manner, it becomes possible to alleviate the influence corresponding to the edge direction (to eliminate the autocorrelation depending on the geometrical structure of the image), and thus to eliminate the necessity of determining the edge direction.
The reason why the component corresponding to the geometrical structure of the image can be eliminated by setting the minimum value of the autocorrelation coefficients Ck (p, q) at each of the coordinates (p, q) as the degradation function T (p, q) is as follows. The autocorrelation coefficient Ck (p, q) can be decomposed into the component corresponding to the image degradation and the sum of components that depend on the geometrical structure. If the components that depend on the geometrical structure are assumed to be positive, they can be eliminated by taking, at each of the coordinates, the minimum value of the autocorrelation calculated over the partial areas of the edge image.
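A sketch of this pixel-wise minimum, assuming numpy arrays C1, . . . Cn of identical shape, might be as follows. The real part is taken so that the minimum is well defined even when the coefficients were computed from a complex edge image; that choice is an assumption of the sketch, not something stated in the text.

```python
import numpy as np

def degradation_function(C_list):
    """Pixel-wise minimum: T(p, q) = min(C_1(p, q), ..., C_n(p, q)).

    C_list holds the arrays C_1 .. C_n computed for the n partial areas.
    """
    stack = np.stack([np.real(C) for C in C_list], axis=0)
    return stack.min(axis=0)
```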
It is also possible to calculate the degradation function T (p, q) through the following expression.
T (p, q)=max (0, min (C1 (p, q), . . . Cn (p, q)))
This expression prevents the value of the degradation function T (p, q) from being less than 0.
The degradation function T (p, q) can also be calculated as an order statistic represented by the following expression.
T (p, q)=odr (C1 (p, q), . . . Cn (p, q);r)
r=Floor (γ·(n−1))+1
Note that odr (x1, . . . xn;r) indicates the r-th smallest value in x1, . . . xn, and γ indicates a predetermined real constant which is not less than 0 nor more than 1. For instance, if it is set that γ=0.05, the degradation function T (p, q) is defined by setting, as the representative coefficient value, the autocorrelation coefficient whose rank counted from the minimum value corresponds to 5% of the total number of values. Floor (x) is set to indicate the maximum value of integers equal to or less than x.
The degradation function estimating unit 124 can also calculate the degradation function T (p, q) by selecting the larger of the order statistic odr (C1 (p, q), . . . Cn (p, q);r) of the autocorrelation coefficients and 0, as represented by the following expression. When the order statistic odr (C1 (p, q), . . . Cn (p, q);r) is less than 0, it is replaced with 0.
T (p, q)=max (0, odr (C1 (p, q), . . . Cn (p, q);r))
r=Floor (γ·(n−1))+1
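The order-statistic variant, including the optional flooring at 0, could be sketched as follows; gamma = 0.05 is the example value from the text, and everything else (names, the clamp flag) is illustrative.

```python
import numpy as np

def degradation_function_order(C_list, gamma=0.05, clamp=True):
    """Order-statistic variant: T(p, q) = odr(C_1(p, q), ..., C_n(p, q); r).

    r = Floor(gamma * (n - 1)) + 1, i.e. the r-th smallest value at each (p, q).
    With clamp=True the result is additionally floored at 0, as in
    T(p, q) = max(0, odr(...)).
    """
    stack = np.stack([np.real(C) for C in C_list], axis=0)
    n = stack.shape[0]
    r = int(np.floor(gamma * (n - 1))) + 1        # 1-based rank
    T = np.sort(stack, axis=0)[r - 1]             # r-th smallest per coordinate
    return np.maximum(T, 0.0) if clamp else T
```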
As described above, by using the representative coefficient value of the autocorrelation coefficients C1 (p, q), . . . Cn (p, q), the degradation function T (p, q), from which the autocorrelation depending on the geometrical structure (for instance, the arrangement of the edge) is eliminated, is calculated. Accordingly, the degradation function T (p, q) contains a large proportion of the autocorrelation components corresponding to the image degradation, which makes it possible to easily calculate the degree of image degradation from the degradation function T (p, q).
For the calculation of the degradation function T (p, q) from the autocorrelation coefficients C1 (p, q), . . . Cn (p, q), not only the example cited here but also any method can be adopted as long as the autocorrelation corresponding to the image degradation can be computed by eliminating the autocorrelation depending on the geometrical structure.
The extent width calculating unit 125 calculates an extent width of the degradation function T (p, q). The extent width calculating unit 125 works as an extent width calculating section that calculates an extent width of the distribution of the representative coefficient values.
The per-distance sum calculation unit 161 calculates per-distance sums of the degradation function for each interval of distance ν from the origin (0, 0). The intervals are set as [0, 1), [1, 2), . . . , [ν, ν+1), . . . , [νmax, νmax+1). Here, [x, y) indicates the interval of real values not less than x and less than y, ν indicates an integer, and νmax is defined by the following expression.
νmax=Ceil ((max (−pmin, pmax)^2+max (−qmin, qmax)^2)^(1/2))−1
Note that Ceil (x) is set to indicate the minimum value of integers equal to or more than x.
The per-distance sum of the degradation function at an interval of distance [ν, ν+1) is represented as S(ν).
νmax indicates the maximum distance from the origin (0, 0) to the periphery of the extracted image, and is basically represented by the following expression.
νmax=(pmax^2+qmax^2)^(1/2)−1
The expression given earlier is more complicated because it also handles the case where the absolute value of pmin is greater than pmax (and likewise for qmin and qmax), and because νmax is set as an integer.
The per-distance sum calculation unit 161 calculates the per-distance sum S(ν) of the degradation function in accordance with the following method.
A. It is set that S(ν)=0 for each integer ν satisfying 0≦ν≦νmax.
B. The following processing is repeated for each integer q satisfying qmin≦q≦qmax.
(1) The following processing is repeated for each integer p satisfying pmin≦p≦pmax.
1) ν is calculated: ν=Floor ((p^2+q^2)^(1/2)).
2) T (p, q) is added to S(ν).
The per-distance sum evaluation unit 162 calculates a extent width λ of the degradation function based on the per-distance sum of the degradation function. The per-distance sum evaluation unit 162 calculates the extent width λ in accordance with the following method.
A. It is set that S=0.
B. It is set that λ=νmax.
C. The following processing is repeated until it is satisfied that λ=0 or S≧θ.
(1) S(λ) is added to S: S=S+S(λ)
This corresponds to accumulating the area under the function of the per-distance sum S(ν) of the degradation function.
(2) 1 is subtracted from λ.
Note that θ is a predetermined real constant value.
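A combined sketch of the per-distance sum calculation unit 161 and the per-distance sum evaluation unit 162, assuming numpy and a degradation function stored as T[q − qmin, p − pmin], might look as follows; the loop structure mirrors steps A to C above, and the function name and return format are illustrative.

```python
import numpy as np

def extent_width(T, p_min, q_min, theta):
    """Per-distance sums S(nu) and the extent width lambda of T(p, q).

    theta is the predetermined constant of step C. Sums are accumulated from
    the largest distance inward until the accumulated sum reaches theta.
    """
    b, a = T.shape
    q_idx, p_idx = np.indices((b, a))
    # nu = Floor((p^2 + q^2)^(1/2)) for every coordinate (p, q)
    dist = np.floor(np.hypot(p_idx + p_min, q_idx + q_min)).astype(int)
    nu_max = dist.max()
    S = np.zeros(nu_max + 1)
    np.add.at(S, dist, T)          # S(nu): sum of T over the ring [nu, nu+1)

    lam, acc = nu_max, 0.0
    while lam > 0 and acc < theta:  # repeat until lambda = 0 or the sum reaches theta
        acc += S[lam]
        lam -= 1
    return S, lam
```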
In the extent width calculating unit 125, an extent width S of the degradation function can also be calculated in accordance with the following expression.
S={[Σq=qmin..qmax Σp=pmin..pmax T (p, q)·(p^2+q^2)]/[Σq=qmin..qmax Σp=pmin..pmax T (p, q)]}^(1/2)
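For completeness, this alternative extent width (the T-weighted root-mean-square distance from the origin) could be sketched as follows, under the same indexing assumption as before.

```python
import numpy as np

def extent_width_rms(T, p_min, q_min):
    """S = { sum T(p,q)*(p^2+q^2) / sum T(p,q) }^(1/2) over all (p, q) in the area."""
    b, a = T.shape
    q_idx, p_idx = np.indices((b, a))
    r2 = (p_idx + p_min) ** 2 + (q_idx + q_min) ** 2
    return float(np.sqrt(np.sum(T * r2) / np.sum(T)))
```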
As a calculation method of the extent width of the degradation function in the extent width calculating unit 125, not only the aforementioned method but also any method can be adopted as long as a value that reflects a magnitude of the extent of the degradation function can be calculated with the method.
The quality checking unit 126 determines the quality of the original image by comparing the extent width λ with a predetermined threshold value Λ. The quality checking unit 126 works as a checking section that determines the quality of the image based on the distribution of the representative coefficient values. The quality checking unit 126 determines that the original image is non-defective when the extent width λ is equal to or less than the threshold value Λ, and it determines that the original image is defective when the extent width λ is greater than the threshold value Λ.
In the present embodiment, the degradation function T (p, q) is calculated in the degradation function estimating unit 124, and the extent width of the degradation function is calculated in the extent width calculating unit 125 for all of the coordinates (p, q) in a range of pmin≦p≦pmax, qmin≦q≦qmax. However, there is no problem if the degradation function T (p, q) is calculated for only a part of the coordinates (p, q) in the range, and the extent width is calculated based on the function.
As described above, in the present embodiment, the degradation function T (p, q) that reflects the image degradation corresponding to the defect caused by blurring, camera-shake and the like is estimated, and the extent width of the degradation function T (p, q) is used for the determination of presence/absence of the defect. In the present embodiment, advantages as described below can be attained.
(1) The extent width of the degradation function T (p, q) is not influenced by a contrast of the image, so that even when the contrast of the image is varied, it is possible to accurately evaluate the image quality.
(2) The estimation of the edge direction is not required, so that there arises no problem caused by an error in the estimation of the edge direction.
(3) Even when the ghost images are present, it is possible to detect the extent of the degradation function caused by the ghost images as the defect of the image quality. For instance, double image occurs in an image shown in
(4) The magnitude of the spatial frequency is not used for the evaluation of sharpness, so that even when there is a problem in the quantization method of pixel values, or even when noise including high-frequency components is generated at the time of capturing an image, there is no chance of overestimating the sharpness of an image in which blurring or the like occurs.
Embodiments of the present invention are not limited to the above embodiment and can be expanded or modified, and the expanded or modified embodiment is also included in the technical scope of the present invention.
Number | Date | Country | Kind
P2008-310010 | Dec. 4, 2008 | JP | national