BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present invention will be described in detail based on the following figures, wherein:
FIG. 1 is a functional block diagram for explaining a digital camera according to the present embodiment;
FIG. 2 is a functional block diagram for explaining an image capturing section according to the present embodiment;
FIG. 3 is a functional block diagram for explaining an image processor according to the present embodiment;
FIG. 4 is a flow chart showing an image restoration processing procedure performed by a first image restoration processor according to the present embodiment; and
FIG. 5 is a flow chart showing an image restoration processing procedure performed by a second image restoration processor according to the present embodiment.
DETAILED DESCRIPTION OF THE INVENTION
The best mode for carrying out the present invention (hereinafter referred to as an embodiment) is explained below with reference to the accompanying drawings.
FIG. 1 is a functional block diagram of a digital camera according to the present embodiment. Note that in the present embodiment, a consumer digital camera is explained as an example of an image processing apparatus. However, the image processing apparatus may be formed as a camera for other applications, such as a camera for monitoring, a camera for television, or an endoscope camera. In addition, the image processing apparatus may also be applied to an apparatus other than a camera, such as a microscope, binoculars, or a diagnostic imaging apparatus for NMR imaging or the like.
In FIG. 1, an image capturing section 10 receives light from an object under the control of a CPU 20, and outputs RAW data in accordance with the received light. The image capturing section 10 includes an optical system 12, an image sensor 14, and a CDS (Correlated Double Sampling)-A/D (Analog/Digital) circuit 16, as shown in FIG. 2.
The image sensor 14 is provided with a color filter, in which a red filter (R), a green filter (Gr) of R column, a blue filter (B), and a green filter (Gb) of B column are arranged in a Bayer array. From the image sensor 14, an R signal which is a pixel signal of the red filter (R), a Gr signal which is a pixel signal of the green filter (Gr) of R column, a B signal which is a pixel signal of the blue filter (B), and a Gb signal which is a pixel signal of the green filter (Gb) of B column are outputted. The CDS (Correlated Double Sampling)-A/D (Analog/Digital) circuit 16 reduces noise of the RAW data outputted from the image sensor 14 by performing correlated double sampling, and converts an analog signal of the RAW data to a digital signal.
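The separation of the Bayer-array RAW data into its four pixel-signal planes (R, Gr, Gb, B) can be sketched as follows. This is an illustrative Python/NumPy sketch only, not part of the embodiment; it assumes an RGGB arrangement with R and Gr on even rows and Gb and B on odd rows, while the actual arrangement depends on the sensor.

```python
import numpy as np

# Hypothetical 4x4 Bayer mosaic (values are just placeholders).
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)

# Assumed RGGB layout: even rows hold R and Gr samples,
# odd rows hold Gb and B samples.
r = raw[0::2, 0::2]    # R signal  (red filter)
gr = raw[0::2, 1::2]   # Gr signal (green filter of R row/column)
gb = raw[1::2, 0::2]   # Gb signal (green filter of B row/column)
b = raw[1::2, 1::2]    # B signal  (blue filter)
# Each plane is a quarter-resolution sub-image of the mosaic.
```

Each extracted plane has half the width and height of the mosaic, which is why the lacking color components must later be interpolated from peripheral pixels.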
The CPU 20 is a central processing unit which controls the whole digital camera. The CPU 20 expands, in a RAM 24, various programs and parameters which are stored in a ROM 22, and performs various kinds of calculation. An image processor 30 performs various kinds of image processing, such as RGB interpolation and white balance, on the RAW data, and outputs image data obtained as the result of the processing. A display device 40 functions as a viewfinder for image capturing by displaying a video image based on the image data. Further, a recording medium 50 records the image data. A blurring detector 60 is provided with two angular velocity sensors which detect angular velocities about the X-axis and Y-axis, which are perpendicular to the Z-axis serving as an optical axis of the digital camera, and outputs time-sequential displacement angles θx and θy about the X-axis and the Y-axis, these angles being caused by the user's hand movement at the time of image capturing.
Note that in the present embodiment, it is necessary to obtain beforehand a deterioration function h in order to use the function for image restoration processing as described below. Thus, for example, the CPU 20 calculates a displacement trajectory of blurring on the image sensor 14 based on a focal distance of a lens which is presently obtained on the basis of a zoom position, and on the displacement angles θx and θy outputted from the blurring detector 60, and obtains the deterioration function h from the calculated displacement trajectory of blurring on the image sensor 14, so as to store the deterioration function in the RAM 24. Note that the deterioration function h may be obtained by a known method based on information obtained during or prior to image capturing, such as information on defocusing, aberration and optical low pass filter.
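The derivation of the deterioration function h from the displacement trajectory described above can be sketched as follows. This is an illustrative Python/NumPy sketch under simplifying assumptions (small-angle approximation, focal distance already expressed in pixels, uniform exposure along the trajectory); the function name, kernel size, and sample values are hypothetical, not taken from the embodiment.

```python
import numpy as np

def psf_from_trajectory(theta_x, theta_y, focal_px, size=15):
    """Accumulate the blur trajectory on the image sensor into a
    normalized blur kernel (deterioration function h).

    theta_x, theta_y: time-sequential displacement angles in radians
                      (stand-ins for the blurring detector 60 output).
    focal_px: focal distance in pixels (assumed known from the zoom
              position).
    """
    # Small-angle model: displacement on the sensor is about f * tan(theta).
    dx = focal_px * np.tan(np.asarray(theta_x, dtype=float))
    dy = focal_px * np.tan(np.asarray(theta_y, dtype=float))
    h = np.zeros((size, size))
    c = size // 2
    for x, y in zip(dx, dy):
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:
            h[iy, ix] += 1.0   # equal weight per time sample
    return h / h.sum()         # normalize so h sums to 1

# Hypothetical angle samples (radians) for a short exposure.
h = psf_from_trajectory([0.0, 1e-4, 2e-4], [0.0, 0.0, 1e-4],
                        focal_px=5000.0)
```

A kernel obtained this way can then be stored (as in the RAM 24 of the embodiment) for use by the restoration stages.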
FIG. 3 is a functional block diagram showing the image processor 30 in more detail. Here, the image processor 30 includes a first image restoration processor 35 for executing image restoration processing by performing Fourier transform and a calculation in the frequency domain on a degradation image, the deterioration of which is caused by blurring, and a second image restoration processor 36 for executing image restoration processing by performing an iterative computation on the degradation image in the real domain. In the present embodiment, the second image restoration processor 36 performs image restoration processing on a restored image obtained by the image restoration processing in the first image restoration processor 35.
In FIG. 3, an RGB interpolation section 32 performs known pixel interpolation processing on the RAW data to interpolate the lacking color components of each pixel constituting the RAW data by referring to the color components of peripheral pixels, and temporarily stores the image in a first image memory 34a. Further, a correcting section 38 performs various kinds of correction processing, such as white balance adjustment, color adjustment, and γ (gamma) correction, on the image which is subjected to the restoration processing in the first image restoration processor 35 and the second image restoration processor 36. The image outputted from the correcting section 38 is outputted to the display device 40 as a restored image so as to be screen displayed. Alternatively, the image is temporarily stored in a second image memory 34b, and thereafter is compressed into JPEG data or the like, so as to be recorded in the recording medium 50 as image data of a restored image.
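The correction processing performed by the correcting section 38 can be sketched, in simplified form, as follows. This Python/NumPy sketch is illustrative only; the gain values and gamma value are hypothetical, and a real camera pipeline applies considerably more elaborate processing.

```python
import numpy as np

def correct(rgb, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Apply per-channel white-balance gains and gamma correction to a
    linear RGB image with values in [0, 1]. Gains/gamma are illustrative.
    """
    out = rgb * np.asarray(wb_gains)   # white balance adjustment
    out = np.clip(out, 0.0, 1.0)       # keep values displayable
    return out ** (1.0 / gamma)        # gamma correction

# Hypothetical 2x2 mid-gray linear image.
img = np.full((2, 2, 3), 0.25)
res = correct(img)
```

The corrected image would then be displayed or compressed and recorded, as described above.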
FIG. 4 is a flow chart showing an image restoration processing procedure executed on a degradation image by the first image restoration processor 35.
In FIG. 4, the first image restoration processor 35 first reads a degradation image g from the first image memory 34a, and obtains a Fourier transform G of the degradation image g (S100). Further, the first image restoration processor 35 reads the deterioration function h from the RAM 24, and obtains a Fourier transform H of the deterioration function h (S102). Subsequently, the first image restoration processor 35 obtains an inverse filter H⁻¹ by a known technique using the Fourier transform H of the deterioration function h and noise information registered beforehand in the ROM 22 (S104). Note that when the noise information is ignored, the inverse filter H⁻¹ is expressed by an inverse matrix of H. Note that when the Fourier transform H is singular, an inverse matrix cannot be obtained, and hence, the inverse filter H⁻¹ is expressed by a Moore-Penrose generalized inverse matrix. Then, the first image restoration processor 35 multiplies the Fourier transform G of the degradation image g by the inverse filter H⁻¹ to obtain G/H, and applies an inverse Fourier transform to G/H to obtain a restored image f1 (S106). The first image restoration processor 35 stores the obtained restored image f1 in the second image memory 34b (S108).
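The frequency-domain procedure of steps S100 to S106 can be sketched as follows. This is an illustrative Python/NumPy sketch, not the embodiment's implementation: it assumes the deterioration function has been zero-padded to the image size, and it models the registered noise information with a single regularization constant eps, which keeps the inverse filter finite where H is (nearly) singular, in the spirit of the pseudo-inverse mentioned above.

```python
import numpy as np

def restore_freq(g, h, eps=1e-3):
    """First-stage restoration of degradation image g with
    deterioration function h (same shape as g, assumed zero-padded).
    eps stands in for the registered noise information.
    """
    G = np.fft.fft2(g)   # S100: Fourier transform of the degradation image
    H = np.fft.fft2(h)   # S102: Fourier transform of the deterioration function
    # S104: regularized inverse filter; reduces to 1/H where eps = 0.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    # S106: multiply G by the inverse filter, then inverse-transform.
    return np.real(np.fft.ifft2(G * H_inv))

# Sanity check with an identity PSF (a delta): restoration returns g.
rng = np.random.default_rng(0)
g = rng.random((8, 8))                  # stand-in degradation image
h = np.zeros((8, 8)); h[0, 0] = 1.0     # delta kernel: no blur
f1 = restore_freq(g, h, eps=0.0)
```

With eps = 0 and an invertible H, this is exactly the G/H computation of step S106; a nonzero eps corresponds to taking the noise information into account.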
Next, an image restoration processing procedure in the second image restoration processor 36 will be explained with reference to a flow chart shown in FIG. 5. The second image restoration processor 36 performs image restoration processing using an image restoration algorithm based on a steepest descent method, which is one of the iterative methods. Here, the image restoration algorithm performed by the second image restoration processor 36 is not limited to the steepest descent method, and other iterative methods, such as a moment method, a correction moment method, and a conjugate gradient method, may also be used.
The present embodiment is characterized in that the second image restoration processor 36 utilizes the restored image f1, which is obtained by the first image restoration processor 35 by performing the image restoration processing on a degradation image, as an initial image at the time when the second image restoration processor 36 starts image restoration processing.
In FIG. 5, the second image restoration processor 36 sets the restored image f1, which is obtained by the first image restoration processor 35 by performing the image restoration processing on a degradation image, as the 0-th restored image (that is, the initial image) in the second image memory 34b (S200). Next, the second image restoration processor 36 initializes a parameter n indicating the number of repetition times to 0 (S202), and reads a predetermined convergence parameter ε from the ROM 22 (S204). Further, the second image restoration processor 36 reads a threshold value Thr as an end determination parameter from the ROM 22 (S206). Next, when the number of repetition times n is smaller than a predetermined maximum number of repetition times (the determination result in step S208 is Yes: “Y”), after incrementing the number of repetition times n (S210), the second image restoration processor 36 calculates ∇J (nabla) (S212), and calculates the square of the norm of ∇J, so as to set the calculation result as a parameter t (S214).
Here, J is an evaluation quantity of a general inverse filter, and is given by the formula: J=∥g(x, y)−h(x, y)*f(x, y)∥², where g(x, y) is a degradation image, f(x, y) is a restored image, and h(x, y) is a deterioration function. The above formula means that the evaluation quantity J can be given as the magnitude of the difference between an image h(x, y)*f(x, y), which is obtained by convolving the deterioration function h(x, y) with the restored image f(x, y), and the actual degradation image g(x, y). If the restored image is correctly restored, the formula: h(x, y)*f(x, y)=g(x, y) is theoretically established, and hence, the evaluation quantity is zero. Thus, a smaller evaluation quantity J means that the restored image f(x, y) is restored better. In the steepest descent method, the iterative computation is repeated until the magnitude of ∇J, which is the gradient of the evaluation quantity J, i.e., the square of the norm of ∇J, becomes equal to or smaller than the threshold value. When the magnitude of ∇J becomes equal to or smaller than the threshold value, the iterative computation is completed, and thereby the restored image f(x, y) is obtained.
Now, returning to FIG. 5, the second image restoration processor 36 determines whether or not t exceeds the threshold value Thr (S216). When t exceeds the threshold value Thr, the second image restoration processor 36 determines that the restoration is not sufficiently performed, and multiplies ∇J and the convergence parameter ε (S218). Then, the second image restoration processor 36 creates a new restored image by subtracting ε∇J from the restored image (S220), and repeats the processing of S208 to S220 until t becomes equal to or smaller than the threshold value Thr. When t becomes equal to or smaller than the threshold value Thr (the determination result in step S216 is Yes: “Y”), or when the number of repetition times reaches the maximum number of repetition times even though t is not equal to or smaller than the threshold value Thr (the determination result of step S208 is No: “N”), the second image restoration processor 36 completes the processing.
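The iteration of steps S200 to S220 can be sketched as a whole as follows. This Python/NumPy sketch is illustrative only: it assumes circular convolution via the FFT, and the convergence parameter ε, threshold Thr, and maximum iteration count are hypothetical values, not those registered in the ROM 22 of the embodiment.

```python
import numpy as np

def restore_iterative(f1, g, h, eps=0.25, thr=1e-8, max_iter=100):
    """Second-stage restoration (FIG. 5): steepest descent starting
    from the first-stage restored image f1.

    eps: convergence parameter (S204), thr: end-determination
    threshold (S206); both values are illustrative.
    """
    H = np.fft.fft2(h)
    f = f1.copy()                                  # S200: initial image
    for n in range(max_iter):                      # S208/S210: iteration count
        # S212: gradient of J = ||g - h*f||^2, via the FFT.
        r = g - np.real(np.fft.ifft2(H * np.fft.fft2(f)))
        nabla_j = -2.0 * np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(r)))
        t = float(np.sum(nabla_j ** 2))            # S214: squared norm of grad J
        if t <= thr:                               # S216: end determination
            break
        f = f - eps * nabla_j                      # S218/S220: new restored image
    return f

# Sanity check with a delta kernel: starting from a perturbed f1,
# the iteration pulls the estimate back to g.
rng = np.random.default_rng(2)
g = rng.random((4, 4))
h = np.zeros((4, 4)); h[0, 0] = 1.0
f1 = g + 0.1                                       # crude first-stage result
f2 = restore_iterative(f1, g, h)
```

Starting the loop from f1, as the embodiment does, means the residual g − h*f is already small at n = 0, so fewer iterations are needed before t falls below the threshold.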
Thereby, the second image restoration processor 36 eventually obtains a restored image f2. In the present embodiment, the second image restoration processor 36 which performs image restoration processing by a repetition method utilizes the restored image f1, which is obtained by the first image restoration processor 35 by performing the image restoration processing on a degradation image, as an initial image. Therefore, the second image restoration processor 36 applies image restoration processing to the image, the deterioration of which is improved to some extent, thereby enabling the iterative computation to quickly converge in comparison with the case where the image restoration processing is applied to a degradation image obtained by image-capturing as an image which is not subjected to the image restoration processing.
Note that the above described image processor 30 can be realized by installing programs for embodying various kinds of processing, such as image restoration processing, in a microcomputer, and by executing the programs.
That is, the microcomputer has a CPU, various memories such as ROM, RAM and EEPROM, a communication bus and an interface. The CPU reads the image processing programs, such as an image restoration algorithm stored beforehand in the ROM as firmware, and executes the programs successively. The CPU receives an input of a degradation image from an image sensor, such as a CCD (Charge Coupled Device) or CMOS sensor, via the interface, executes image restoration processing by performing Fourier transform and a calculation on the degradation image in the frequency domain, and further executes image restoration processing by performing an iterative computation on the resultant restored image in the real domain.
PARTS LIST
10 image capturing section
12 optical system
14 image sensor
16 CDS (Correlated Double Sampling)-A/D (Analog/Digital) circuit
20 CPU
22 ROM
24 RAM
30 image processor
32 RGB interpolation section
34a first image memory
34b second image memory
35 first image restoration processor
36 second image restoration processor
38 correcting section
40 display device
50 recording medium
60 blurring detector
S100 step
S102 step
S104 step
S106 step
S108 step
S200 step
S202 step
S204 step
S206 step
S208 step
S210 step
S212 step
S214 step
S216 step
S218 step
S220 step