The present invention relates to an imaging system and an imaging program. In particular, the present invention relates to an imaging system which captures a plurality of images and estimates image displacement amounts between the plurality of images, and the like.
In a digital camera disclosed in JP2005-94288A (page 1 and FIG. 3), a plurality of images that have higher resolution than normal images are captured and recorded into an internal memory while a user half-presses the shutter button. When the user fully-presses the shutter button, the digital camera records, as a non-provisional image, the high-resolution image captured during the half-pressing immediately before the full-pressing into a memory pack, whereby the time lag between the image displayed on the liquid crystal display panel and the non-provisional image recorded upon pressing the shutter is reduced.
Moreover, WO04/63991 discloses a technique for performing registration (alignment) of a plurality of low resolution images by estimating image displacement amounts between the plurality of images by sub-pixel matching.
Furthermore, WO04/68862 discloses a technique for generating a single high-resolution image from a plurality of low resolution images by super-resolution processing.
An imaging system according to one aspect of this invention, which captures an image and generates image data of the image, comprises: a capture instruction input unit through which a first stage capture instruction and a following second stage capture instruction are input; a capture unit which performs earlier capture processing for capturing a plurality of images during a period from when the first stage capture instruction is input until the second stage capture instruction is input, and performs later capture processing for capturing a plurality of images after the second stage capture instruction is input; and an image displacement amount estimation unit which estimates an image displacement amount between a reference image and each of the plurality of images captured in the earlier capture processing and the later capture processing, using, as the reference image, one of the images captured during a prescribed period which is a predetermined period including before and after the second stage capture instruction is input.
A computer readable storage medium according to another aspect of this invention stores an imaging program. The imaging program instructs a computer to execute a method comprising: an image data acquisition step of acquiring image data of a plurality of images from an imaging system which captures a plurality of images before and after a shutter button is fully-pressed; a reference image determination step of automatically determining a reference image from the images captured in a prescribed period which is a predetermined period including before and after a time when the shutter button is fully-pressed; and an image displacement amount estimation step of estimating an image displacement amount between the reference image and each of the plurality of images captured before and after the shutter button is fully-pressed.
The embodiments and advantageous characteristics of the present invention will be described in detail below with reference to the attached drawings.
Hereafter, the imaging system according to the first embodiment of the present invention will be described with reference to the drawings.
The imaging system according to the present embodiment includes a lens system 1 which includes a diaphragm 1a, a spectral half-mirror system 3, a shutter 4, a lowpass filter 5, a CCD (charge coupled device) imaging device 6, an analog-to-digital (A/D) conversion circuit 7, a switching unit 8, an AE (automatic exposure) photosensor 9, an AF (auto focus) motor 10, an imaging control unit 11, a first image processing unit 12, a buffer memory 13, a compression unit 14, a memory card I/F (interface) 15, a memory card 16, a shutter button determination unit 17, a second image processing unit 20, a liquid crystal display panel 102, and a shutter button (capture instruction input unit) 104. The lens system 1, the spectral half-mirror system 3, the shutter 4, the lowpass filter 5, the CCD imaging device 6, the A/D conversion circuit 7, the switching unit 8, the AE photosensor 9, the AF motor 10, the imaging control unit 11, the first image processing unit 12, the buffer memory 13, the compression unit 14, and the like constitute a capture unit which performs earlier capture processing and later capture processing, which will be described later.
The lens system 1 including the diaphragm 1a, the spectral half-mirror system 3, the shutter 4, the lowpass filter 5, and the CCD imaging device 6 are arranged along an optical axis. A single CCD imaging device is used as the CCD imaging device 6 in the first embodiment. However, for example, a CMOS imaging device may be used instead of the CCD imaging device 6. A light flux branched off by the spectral half-mirror system 3 is guided to the AE photosensor 9. Connected to the lens system 1 is the AF motor 10, which moves a part of the lens system during focusing. Signals from the CCD imaging device 6 are fed into the buffer memory 13 through the A/D conversion circuit 7, the switching unit 8, and the first image processing unit 12, or alternatively, directly fed from the switching unit 8 into the buffer memory 13 without passing through the first image processing unit 12.
The buffer memory 13 is capable of performing input and output of image data to and from the compression unit 14 and the second image processing unit 20, and is also used as a work buffer in each processing. The image data saved in the buffer memory 13 is fed and recorded into a removable memory card 16 through the memory card I/F 15.
Signals from the AE photosensor 9 are fed to the imaging control unit 11. The imaging control unit 11 controls the diaphragm 1a based on signals from the AE photosensor 9, and also controls the AF motor 10, the CCD imaging device 6, and the switching unit 8. Moreover, the shutter button determination unit 17 determines the state of the shutter button 104, and feeds the determined result into the imaging control unit 11. Furthermore, signals from the imaging control unit 11 are fed into the liquid crystal display panel 102.
The second image processing unit 20 includes an image displacement amount estimation unit 20a and a high-resolution processing unit 20b. Moreover, the second image processing unit 20 is capable of performing input and output of the image data to and from the buffer memory 13, and is also capable of performing input and output of the image data to and from the memory card 16 through the memory card I/F 15.
(number of earlier capture images)=α×((number of capture images)/(α+β)) (1)
(number of later capture images)=(number of capture images)−(number of earlier capture images) (2)
In the present embodiment, α and β in Expression (1) are set beforehand. In the number-of-capture-images setting (S101), the number of earlier capture images and the number of later capture images may instead be specified directly. Moreover, the number of capture images set in the number-of-capture-images setting (S101) and the set values of α and β may depend on the scene which the user is going to shoot. For example, when the user specifies, with an operation of the operation button 103, that the scene to be shot is one in which motion of the photographic subject is rapid, the number of capture images may be increased, or α may be set to a relatively large value.
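For illustration only, the split defined by Expressions (1) and (2) can be sketched as follows; the function name, the integer truncation, and the example ratios are assumptions and not part of the described system.

```python
def split_capture_counts(num_capture_images, alpha, beta):
    """Split the set number of capture images into earlier and later counts
    according to Expressions (1) and (2)."""
    # Expression (1): the earlier count is the alpha/(alpha + beta) share,
    # truncated to an integer here (the rounding rule is an assumption).
    num_earlier = int(alpha * num_capture_images / (alpha + beta))
    # Expression (2): the remaining images are captured after the full-press.
    num_later = num_capture_images - num_earlier
    return num_earlier, num_later

# For example, 16 images split 1:1 gives (8, 8); a 3:1 split, which favors the
# earlier capture for fast-moving subjects, gives (12, 4).
print(split_capture_counts(16, 1, 1))
print(split_capture_counts(16, 3, 1))
```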
After the prescribed-numbers determination processing (S102) for the earlier and later capture, the recorded numbers of earlier and later capture images recorded in the buffer memory 13 are initialized to zero (S103), and the state determination of the shutter button 104 is performed (S104). If the shutter button 104 is in a fully-pressed state, a single usual capture is performed (S113), the image data of the captured image is recorded (S112), and the process ends. If the shutter button 104 is in a half-pressed state, the AF control (S105) is performed. Thereafter, it is determined whether or not the number of capture images is a plural number (S106). If the number of capture images is a plural number, a plurality of images are captured and recorded in the earlier capture processing (S107).
Thereafter, the recorded number of earlier capture images is incremented (S302). It is then determined whether or not the recorded number of earlier capture images has reached the number of earlier capture images determined in the prescribed-numbers determination processing (S102).
After the earlier capture processing (S107), the state of the shutter button 104 is determined (S108). If the shutter button is in a half-pressed state, the earlier capture processing (S107) is repeated, and if the shutter button is in a fully-pressed state, a plurality of images are captured and recorded in the later capture processing (S109). If the shutter button is in another state, such as a state where the user has released the shutter button 104, the processing returns to the initialization processing of the recorded numbers of earlier and later capture images (S103).
Moreover, in the present embodiment, m images (as described above, m=1 in the present embodiment) are captured immediately after the full-pressing (S401) in the later capture processing (S109). Thereafter, it is determined whether or not a predetermined time period t has elapsed since the shutter button 104 was fully-pressed (S404), and the capture of images is halted until the predetermined time period t elapses. If it is determined in S404 that the predetermined time period t has elapsed since the shutter button 104 was fully-pressed, the capture of the second and subsequent images starts. The predetermined time period t for halting the capture is set in advance. For example, the predetermined time period t may be set as a time period during which the amount of camera shake upon the shutter button 104 being fully-pressed is considered large. The predetermined time period t is therefore set to a suitable value according to the above-described scenes to be shot.
After the later capture processing (S109), the image displacement amount estimation processing (S110) and the high-resolution processing (S111) are performed, the high-resolution image generated in the high-resolution processing (S111) is recorded (S112), and the processing ends. The image displacement amount estimation processing (S110) and the high-resolution processing (S111) will be described later.
Now, the above-described processing performed in the imaging system according to the present embodiment will be described in more detail along with the flow of the image signal (image data). First, when the user presses the shutter button 104 after turning the power switch 101 to the ON state, the imaging control unit 11 controls the diaphragm 1a, the shutter 4, the AF motor 10, and so on, whereby images are captured. In capturing images, image signals from the CCD imaging device 6 are converted into digital signals in the A/D conversion circuit 7, and are output to the buffer memory 13 as RGB (red, green, blue) image signals which have been subjected to well-known white balance, emphasis processing, interpolation processing, and the like by the first image processing unit 12.
When the shutter button 104 is determined to be in the half-pressed state (S104), the AF control (S105) is performed and the AF is locked.
Then, after locking the AF, the earlier capture processing (S107) is performed during the time period when the shutter button 104 is half-pressed. For example, circular recording of the image data is performed using a storage area for the number of earlier capture images, which is maintained in the buffer memory 13. During the circular recording, the image signals converted into digital signals in the A/D conversion circuit 7 are recorded into the buffer memory 13 through the first image processing unit 12. When the image data for the number of earlier capture images has been recorded into the buffer memory 13, the imaging control unit 11 outputs signals for issuing the notification of the completion of preparation for the later capture; for example, the notification of the completion of preparation for the later capture is displayed on the liquid crystal display panel 102.
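A minimal sketch of such circular recording follows, assuming frames arrive as in-memory arrays; the class and method names are hypothetical and only illustrate the buffering behaviour of the buffer memory 13.

```python
from collections import deque

class EarlierCaptureBuffer:
    """Circular recording of the most recent frames during the half-press
    (illustrative only; the actual buffer memory 13 stores processed RGB
    image signals)."""

    def __init__(self, num_earlier_images):
        # A bounded deque overwrites the oldest frame once it is full.
        self.frames = deque(maxlen=num_earlier_images)

    def record(self, frame):
        self.frames.append(frame)

    def preparation_complete(self):
        # True once image data for the number of earlier capture images has been
        # recorded, i.e. when the completion-of-preparation notification would be issued.
        return len(self.frames) == self.frames.maxlen
```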
After the notification of the completion of preparation for the later capture, when the user fully-presses the shutter button 104, the imaging control unit 11 performs the later capture processing (S109). In the later capture processing (S109), image signals of a first image, which is captured first, are saved into the buffer memory 13 through the first image processing unit 12, and it is determined in the imaging control unit 11 whether or not a predetermined time period t has elapsed since the shutter button 104 was fully-pressed (S404). The capture of images is halted from when the shutter button 104 is fully-pressed until the predetermined time period t elapses. When the predetermined time period t has elapsed, the second and subsequent captures in the later capture processing (S109) are started, and image signals of the second and subsequent images are also saved into the buffer memory 13 through the first image processing unit 12. The image data saved in the buffer memory 13 is subjected to image compression, such as JPEG, in the compression unit 14, and the compressed image data is recorded into the memory card 16 from the buffer memory 13 via the memory card I/F 15 using the buffer memory 13 as a work buffer.
The second image processing unit 20 is capable of performing input and output of the image data to and from the memory card 16 via the memory card I/F 15, and is also capable of performing input and output of the image data to and from the buffer memory 13. For this reason, if the buffer memory 13 has a large capacity, the compressed image data and the image data before compression need not be recorded into the memory card 16; instead, these pieces of image data may be kept in the buffer memory 13, and only the image data of the high-resolution image generated in the high-resolution processing unit 20b may be recorded into the memory card 16 from the buffer memory 13.
Now, the image displacement amount estimation processing (S110) and the high-resolution processing (S111) which are performed in the second image processing unit 20 will be described. The second image processing unit 20 first estimates the image displacement amount between the images (the correspondence of the pixel positions between the images) in the image displacement amount estimation unit 20a. Then, the second image processing unit 20 generates a high-resolution image in the high-resolution processing unit 20b using the image displacement amount and the image data obtained in the earlier capture processing (S107) and the later capture processing (S109).
First, the image displacement amount estimation processing (S110) will be described. The image displacement amount estimation unit 20a sets an image captured in a prescribed period, which is a predetermined period before and after the shutter button 104 is fully-pressed (i.e., a predetermined period including the time when the shutter button 104 is fully-pressed), as the reference image. The image displacement amount estimation unit 20a estimates the image displacement amount between the reference image and each of the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109). In the present embodiment, an image captured immediately after the shutter button 104 is fully-pressed is used as the reference image. In this case, the above-described prescribed period is a period from just before the shutter button 104 is fully-pressed until the first image is captured.
Next, similarity values between the image sequence generated by transforming the reference image in S502 and the subject images are computed (S505). The similarity value can be obtained as a difference measure between an image of the image sequence and the subject image, such as the SSD (sum of squared differences) or the SAD (sum of absolute differences). Then, a discrete similarity map is created using the relationship between the image transformation parameter used when generating the image sequence in S502 and the similarity value computed in S505 (S506). A continuous similarity curve is then obtained by interpolating the discrete similarity map created in S506, and the extremum of the similarity value is searched for on the continuous similarity curve (S507). Examples of methods for obtaining the continuous similarity curve by interpolating the discrete similarity map include a parabola fitting technique and a spline interpolation technique. The image transformation parameter at which the similarity value takes the extremum on the continuous similarity curve is estimated as the image displacement amount between the reference image and the subject image.
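The following sketch illustrates the idea of S505 to S507 for a purely translational displacement between grayscale images: a discrete SSD similarity map over integer shifts is refined by parabola fitting. The translation-only model, the search range, and all names are illustrative assumptions.

```python
import numpy as np

def estimate_displacement(reference, subject, search=3):
    """Estimate a translational image displacement at sub-pixel accuracy:
    compute a discrete SSD similarity map over integer shifts, then refine
    the minimum by parabola fitting along each axis."""
    ref = reference.astype(np.float64)
    sub = subject.astype(np.float64)
    h, w = ref.shape
    m = search + 1                          # margin so every shift stays inside the image
    ref_c = ref[m:h - m, m:w - m]           # central crop of the reference image
    sim = np.empty((2 * search + 1, 2 * search + 1))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sub_c = sub[m + dy:h - m + dy, m + dx:w - m + dx]
            sim[dy + search, dx + search] = np.sum((ref_c - sub_c) ** 2)  # SSD similarity value
    iy, ix = np.unravel_index(np.argmin(sim), sim.shape)

    def refine(c_minus, c_0, c_plus):
        # Parabola fitting: sub-pixel offset of the extremum through three samples.
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy_sub = refine(sim[iy - 1, ix], sim[iy, ix], sim[iy + 1, ix]) if 0 < iy < 2 * search else 0.0
    dx_sub = refine(sim[iy, ix - 1], sim[iy, ix], sim[iy, ix + 1]) if 0 < ix < 2 * search else 0.0
    return iy - search + dy_sub, ix - search + dx_sub
```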
Thereafter, it is determined whether or not the image displacement amount estimation has been performed for all the images to be used in the high-resolution processing (S111) (S508). If the image displacement amount estimation has not been performed for all the images, another image among the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109) is set as the next image (S509), and then the processing from S503 to S508 is repeated. In the present embodiment, the image displacement amount is estimated for all the images captured in the earlier capture processing (S107) and the later capture processing (S109), and is recorded into the buffer memory 13. However, the image displacement amount may be estimated for only a part of the plurality of images recorded in the buffer memory 13. In S508, if it is determined that the image displacement amount estimation has been performed for all the images which are to be used in the high-resolution processing (S111), the processing ends.
Thus, the image displacement amounts estimated in the image displacement amount estimation unit 20a and the image data of the images for which the image displacement amounts are obtained are transferred to the high-resolution processing unit 20b, and the reference image is subjected to the high-resolution processing (S111) using super-resolution processing.
Then, the relationship of the pixel correspondences between the target image and each image for which the image displacement amount is estimated in the image displacement amount estimation processing (S110) is represented by the image displacement amount estimated in the image displacement amount estimation unit 20a. Using the image displacement amount, these images are aligned and superposed in a coordinate space which is based on the expanded coordinates (magnified coordinates) of the target image, whereby the registration image y is generated (S603). Here, y is a vector representing image data of the registration image. The details of the method of generating the registration image y are disclosed in "M. Tanaka and M. Okutomi, Fast Algorithm for Reconstruction-based Super-resolution, Computer Vision and Image Media (CVIM), Vol. 2004, No. 113, pp. 97-104, (2004-11)". For example, in the superposition processing in S603, each pixel of the plurality of subject images for which the image displacement amount is estimated in the image displacement amount estimation processing (S110) is fitted to the pixel positions of the expanded coordinates of the target image, and thereby each pixel is arranged on the nearest lattice point of the expanded coordinates of the target image. There are cases where a plurality of pixel values are set on the same lattice point; in such cases, the plurality of pixel values are averaged.
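As a rough sketch of the superposition in S603, the following assumes grayscale images, purely translational displacements (dy, dx) expressed in low-resolution pixels, and an integer expansion factor; none of these names or conventions come from the text.

```python
import numpy as np

def generate_registration_image(images, displacements, scale):
    """Superpose low-resolution images onto the expanded (magnified) coordinates
    of the target image: each pixel is placed on the nearest lattice point, and
    lattice points that receive several pixel values are averaged."""
    h, w = images[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)   # summed pixel values
    cnt = np.zeros_like(acc)                                   # contributions per lattice point
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(images, displacements):
        # Map each low-resolution pixel into the expanded coordinates of the target
        # image (the sign convention of (dy, dx) depends on how the displacement was
        # estimated) and round to the nearest lattice point.
        gy = np.rint((ys + dy) * scale).astype(int)
        gx = np.rint((xs + dx) * scale).astype(int)
        valid = (gy >= 0) & (gy < h * scale) & (gx >= 0) & (gx < w * scale)
        np.add.at(acc, (gy[valid], gx[valid]), img[valid].astype(np.float64))
        np.add.at(cnt, (gy[valid], gx[valid]), 1.0)
    # Average where several pixel values fall on the same lattice point; empty
    # lattice points are left at zero in this sketch.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```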
Next, the point spread function (PSF) is obtained in consideration of the capture characteristics, such as the optical transfer function (OTF) and the CCD aperture (CCD opening) (S604). The PSF is reflected in the matrix A in the following Expression (3); for example, a Gaussian function may be used as the PSF for simplicity. Thereafter, the minimization of the evaluation function f(z) represented by the following Expression (3) is performed using the registration image y generated in S603 and the PSF obtained in S604 (S605). Furthermore, it is determined whether or not f(z) is minimized (S606).
f(z)=∥y−Az∥²+λg(z) (3)
In Expression (3), y is a column vector representing image data of the registration image generated in S603, z is a column vector representing image data of a high-resolution image obtained by subjecting the target image to high-resolution processing, and A is an image transformation matrix representing the characteristics of the imaging system including the PSF and the like. Moreover, g(z) is a regularized term in consideration of the smoothness of the image, the correlation of the colors of the image, and the like, and λ is a weighting factor. For example, a steepest descent method can be used for the minimization of the evaluation function f(z) represented by Expression (3). In cases where the steepest descent method is used, the partial derivative of f(z) with respect to each element (component) of z is calculated, and the vector which has those values as elements (the gradient of f(z)) is generated. As shown in the following Expression (4), this gradient vector, scaled by a step size and negated, is added to z, whereby the high-resolution image z is repeatedly updated (S607) and the z that minimizes f(z) is obtained.

zn+1=zn−α·∂f(z)/∂z (4)
In Expression (4), zn is a column vector representing the image data of the high-resolution image that has been updated n times, and α is the step size of the update amount. In the first processing of S605, the initial image z0 obtained in S602 can be used as the high-resolution image z. If it is determined that f(z) is minimized in S606, the processing ends, and zn at that time is recorded as the final high-resolution image in the memory card 16 or the like. Thus, it is possible to obtain a high-resolution image having higher resolution than the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109).
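The following is a minimal sketch of the steepest-descent loop of S605 to S607, assuming a grayscale registration image y on the expanded grid, a Gaussian PSF model for A (symmetric, so its transpose is the same blur), a Laplacian smoothness regularizer for g(z), y itself as the initial image, and placeholder parameter values; the actual PSF, regularizer, and parameters in the system may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def minimize_evaluation_function(y, lam=0.05, step=0.2, psf_sigma=1.0,
                                 n_iter=100, tol=1e-4):
    """Steepest-descent minimization of f(z) = ||y - Az||^2 + lambda*g(z)
    (Expressions (3) and (4))."""
    z = y.astype(np.float64).copy()        # initial image (the text uses an interpolation-expanded z0)
    for _ in range(n_iter):
        residual = y - gaussian_filter(z, psf_sigma)             # y - Az
        grad_data = -2.0 * gaussian_filter(residual, psf_sigma)  # d/dz ||y - Az||^2 = -2 A^T (y - Az)
        grad_reg = 2.0 * laplace(laplace(z))                     # d/dz ||Lz||^2 = 2 L^T L z (L symmetric)
        update = step * (grad_data + lam * grad_reg)
        z -= update                                               # Expression (4): z is updated by the scaled negative gradient
        if np.max(np.abs(update)) < tol:                          # stop when the update amount is small
            break
    return z
```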
First, a reference image captured during the prescribed period, which is a predetermined period including before and after the shutter button 104 is fully-pressed, is supplied to the interpolation expansion unit 201, whereby an interpolation expansion of the reference image is performed (corresponding to S602), and the interpolation-expanded image is accumulated in the image accumulation unit 202 as the initial high-resolution image.
Moreover, the reference image and a plurality of subject images for which the image displacement amount is estimated in the image displacement amount estimation processing (S110) are supplied to the registration image generation unit 205. Then, on the basis of the image displacement amount obtained in the image displacement amount estimation unit 20a, the registration image y is generated by performing the superposition processing in a coordinate space that is based on the expanded coordinates of the reference image (corresponding to S603).
The image data (vector) that was subjected to the convolution in the convolution unit 204 is sent to the image comparison unit 206. In the image comparison unit 206, the difference of the pixel values at the same pixel position is computed between the image data that was subjected to the convolution and the registration image y generated in the registration image generation unit 205, whereby the difference image data (corresponding to (y−Az) of Expression (3)) is generated. The difference image data generated in the image comparison unit 206 is supplied to the convolution unit 207, where it is convolved with the PSF data supplied from the PSF data retention unit 203. The convolution unit 207, for example, performs convolution of the transposed matrix of the image transformation matrix A in Expression (3) with the column vector representing the difference image data, so as to generate a vector which is obtained by partially differentiating ∥y−Az∥² of Expression (3) with respect to each element (component) of z.
Moreover, the image accumulated in the image accumulation unit 202 is supplied to the regularized term computation unit 208, where the regularized term g(z) in Expression (3) is obtained. The regularized term g(z) is partially differentiated with respect to each element of z to thereby derive a vector ∂g(z)/∂z. For example, the regularized term computation unit 208 performs color conversion processing from RGB to YCrCb on the image data accumulated in the image accumulation unit 202, and obtains a vector by executing convolution of a frequency highpass filter (Laplacian filter) on the YCrCb components (the brightness component and the color difference components). Then, the regularized term computation unit 208 uses the square norm (square of the length) of the obtained vector as the regularized term g(z), and derives the vector ∂g(z)/∂z by executing the partial differentiation of g(z) with respect to each element of z. Although a false-color component is extracted by applying the Laplacian filter to the Cr and Cb components (color difference components), it is possible to remove the false-color component by minimizing the regularized term g(z). Because the regularized term g(z) is included in Expression (3), the empirical rule relating to images that a color difference component of an image generally changes smoothly is exploited. Therefore, it is possible to stably obtain a high-resolution image in which false color is suppressed.
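A sketch of this regularized term computation for an RGB high-resolution image follows; the ITU-R BT.601 conversion coefficients are an assumption, since the text only specifies a conversion from RGB to YCrCb followed by Laplacian filtering and a square norm.

```python
import numpy as np
from scipy.ndimage import laplace

# Illustrative RGB -> YCrCb conversion (ITU-R BT.601 coefficients, an assumption).
RGB_TO_YCRCB = np.array([[ 0.299,  0.587,  0.114],   # Y  (brightness)
                         [ 0.500, -0.419, -0.081],   # Cr (color difference)
                         [-0.169, -0.331,  0.500]])  # Cb (color difference)

def regularized_term(z_rgb):
    """g(z): sum of the square norms of Laplacian-filtered Y, Cr, and Cb components
    of the high-resolution image (cf. the regularized term computation unit 208)."""
    ycrcb = z_rgb.astype(np.float64) @ RGB_TO_YCRCB.T    # per-pixel color conversion
    return sum(float(np.sum(laplace(ycrcb[..., c]) ** 2)) for c in range(3))
```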
The updated image generation unit 209 is provided with the image data (vector) generated in the convolution unit 207, the image data (vector) accumulated in the image accumulation unit 202, and the image data (vector) generated in the regularized term computation unit 208. In the updated image generation unit 209, these pieces of image data (vectors) are multiplied by the weighting factors, such as λ and α shown in Expression (3) and Expression (4), and summed, whereby the updated high-resolution image is generated (corresponding to Expression (4)).
Thereafter, the high-resolution image updated in the updated image generation unit 209 is provided to the convergence determination unit 210, and convergence is determined. In the convergence determination, it may be determined that the updating of the high-resolution image has converged when the number of iterations exceeds a predetermined number. Alternatively, the previously updated high-resolution image may be recorded, the difference from the present high-resolution image may be calculated as the update amount, and it may be determined that the updating of the high-resolution image has converged when the update amount is less than a fixed value.
If it is determined in the convergence determination unit 210 that the updating has converged, the updated high-resolution image is output to the exterior as the final high-resolution image. If it is determined that the updating has not converged, the updated high-resolution image is provided to the image accumulation unit 202 and is used for the next update; the high-resolution image is provided to the convolution unit 204 and the regularized term computation unit 208 for the next update. The above processing is repeated and the high-resolution image is updated in the updated image generation unit 209, whereby it is possible to obtain a high-resolution image.
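For illustration, the two convergence criteria just described can be sketched as follows; the iteration limit and the update-amount threshold are placeholder values, not values given in the text.

```python
import numpy as np

def has_converged(z_prev, z_curr, iteration, max_iterations=200, update_eps=1e-3):
    """Convergence determination (cf. the convergence determination unit 210):
    stop when a predetermined number of iterations has been exceeded, or when the
    update amount between the previous and present high-resolution images falls
    below a fixed value."""
    if iteration >= max_iterations:
        return True
    update_amount = float(np.mean(np.abs(z_curr - z_prev)))
    return update_amount < update_eps
```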
In the present embodiment, the earlier capture processing (S107) is performed while the shutter button 104 is half-pressed as the first stage capture instruction, and the later capture processing (S109) is performed after the shutter button 104 is fully-pressed as the second stage capture instruction. The image displacement amount between each subject image and the reference image is estimated by using an image captured immediately after the shutter button 104 is fully-pressed as the reference image. For this reason, the image displacement amount between the subject image and the reference image can be estimated with high accuracy, and it is possible to generate good high-resolution images.
Moreover, it is possible to estimate the image displacement amount between the subject image and the reference image with even higher accuracy because, after one image is captured (S401) immediately after the full-press in the later capture processing (S109), the capture of images is halted until a time period considered as having a large amount of camera shake has elapsed.
In the present embodiment, instead of generating the registration image y, high-resolution processing (equivalent to the high-resolution processing S111 of the first embodiment) is performed using image data yk of the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109). In this processing, weighting is applied to the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109). The evaluation function f(z) in the present embodiment is represented by the following Expression (5).
f(z)=Σk{ak∥yk−Akz∥²}+λg(z) (5)
In Expression (5), yk indicates a column vector representing image data of the image (low resolution image) captured k-th in the earlier capture processing (S107) and the later capture processing (S109), ak indicates a weighting factor for each low resolution image, z indicates a column vector representing image data of a high-resolution image obtained by subjecting the target image to high-resolution processing, and Ak indicates an image transformation matrix representing characteristics of the imaging system including motions between the images, the PSF, and the like. In the present embodiment, Ak is computed using the image displacement amount estimated in the image displacement amount estimation processing (S110), whereby it is possible to perform registration (alignment) of the target image and the low resolution images. Moreover, g(z) indicates a regularized term which takes into consideration the smoothness of the image, the correlation of the colors of the image, and the like, and λ indicates a weighting factor. In the present embodiment, as in the first embodiment, the high-resolution image z which minimizes the evaluation function f(z) represented by Expression (5) is obtained using a steepest descent method or the like.
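The sketch below evaluates Expression (5) under simplifying assumptions: Ak is modelled as a translation by the estimated displacement, a Gaussian PSF blur, and decimation by the magnification factor, with a Laplacian smoothness regularizer; the actual Ak of the system also encodes its full imaging characteristics, and all parameter values here are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift, laplace

def weighted_evaluation_function(z, low_res_images, displacements, weights,
                                 scale, lam=0.05, psf_sigma=1.0):
    """Evaluate Expression (5): f(z) = sum_k a_k*||y_k - A_k z||^2 + lambda*g(z)."""
    cost = lam * float(np.sum(laplace(z) ** 2))                         # lambda * g(z)
    for y_k, (dy, dx), a_k in zip(low_res_images, displacements, weights):
        warped = shift(z, (dy * scale, dx * scale), order=1)            # motion between the images
        simulated = gaussian_filter(warped, psf_sigma)[::scale, ::scale]  # PSF blur + downsampling
        cost += a_k * float(np.sum((y_k - simulated) ** 2))             # a_k * ||y_k - A_k z||^2
    return cost
```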
In the present embodiment, the weighting factor ak is set such that an image captured at a time closer to the time when the reference image is captured is given a larger weight.
In the present embodiment, since the weighting is performed such that an image captured at a time closer to the time when the reference image is captured is given a larger weight, it is possible to generate a good high-resolution image using images that are considered to have a high degree of similarity to the reference image. The other advantageous effects are similar to those of the imaging systems according to the first to third embodiments.
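One possible weighting rule consistent with this description uses an exponential falloff with capture time; the falloff shape and time constant are assumptions, as the text only requires that images captured closer in time to the reference image receive larger weights.

```python
import math

def temporal_weights(capture_times, reference_time, time_constant=0.1):
    """Assign a weighting factor a_k to each captured image such that images
    captured closer in time to the reference image receive larger weights."""
    return [math.exp(-abs(t - reference_time) / time_constant) for t in capture_times]

# For example, frames captured 0.00 s, 0.05 s, and 0.30 s away from the reference
# image receive weights of roughly 1.00, 0.61, and 0.05.
print(temporal_weights([0.00, 0.05, 0.30], 0.0))
```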
In the present embodiment, image capturing is not halted in the later capture processing (S109 of the first embodiment), and a plurality of images are continuously captured as in the earlier capture processing (S107 of the first embodiment). Further, an image captured immediately after the shutter button 104 is fully-pressed is used as the reference image.
In the present embodiment, a predetermined number of images that are captured after the shutter button 104 is fully-pressed and are considered to experience a large amount of camera shake are excluded from, or treated differently in, the high-resolution processing, and therefore it is possible to generate a good high-resolution image. The other advantageous effects are similar to those of the imaging systems according to the first to fourth embodiments.
In the present embodiment, among the plurality of images captured in the earlier capture processing (S107) and the later capture processing (S109), the number of images used for the high-resolution processing is automatically set according to the magnification ratio (the ratio of the resolution of the generated image to that of the original image) of the high-resolution image to be generated.
Moreover, for example, instead of the number-of-capture-images setting (S101) of the first embodiment, the user may set the magnification ratio of the high-resolution image, so that the number of images required for the generation of the high-resolution image is calculated and that number of images is captured in the earlier capture processing (S107) and the later capture processing (S109). At this time, the numbers of images to be captured in the earlier capture processing (S107) and the later capture processing (S109) may be calculated separately.
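As one rule of thumb for such a calculation (an assumption, since the text does not give a formula), roughly r² low-resolution images are needed for a magnification ratio r, so that the high-resolution lattice receives about one sample per point; the split into earlier and later counts can then reuse Expressions (1) and (2).

```python
import math

def images_for_magnification(magnification_ratio, alpha=1, beta=1):
    """Derive the number of capture images from the magnification ratio of the
    high-resolution image to be generated, then split it into earlier and later
    counts with Expressions (1) and (2). The r**2 rule of thumb is an assumption."""
    num_images = max(2, math.ceil(magnification_ratio ** 2))
    num_earlier = int(alpha * num_images / (alpha + beta))
    return num_images, num_earlier, num_images - num_earlier

# For example, a 2x magnification suggests about 4 images, split evenly into
# 2 earlier and 2 later capture images.
print(images_for_magnification(2.0))
```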
In the present embodiment, the high-resolution processing is performed using a number of images that accords with the magnification ratio of the high-resolution image, and therefore it is possible to generate a good high-resolution image. The other advantageous effects are similar to those of the imaging systems according to the first to fifth embodiments.
The imaging system according to the present embodiment performs the earlier capture processing (S107) and the later capture processing (S109) with an imaging apparatus, such as a digital still camera, and performs the image displacement amount estimation processing (S110) and the high-resolution processing (S111) with an image processing apparatus, such as a personal computer. The image processing program is stored in a computer-readable storage medium. The program is encoded and saved in a computer-readable format. The computer has a microprocessor and a memory. The program includes program code (commands) for causing the computer to perform the above-described processing. The rest of the arrangement and processing is the same as in any one of the imaging systems of the first to sixth embodiments. In cases where the image capture and the image processing are performed in separate apparatuses as in the present embodiment, it is necessary to transfer or input, to the image processing apparatus, the image data to be used in the image displacement amount estimation processing (S110) and the high-resolution processing (S111), the information as to which image is set as the reference image, and the like. Moreover, in cases where high-resolution processing using weighting is performed as in the fourth embodiment, it is necessary to provide the information on the weighting factors and the like to the image processing apparatus. The advantageous effects are similar to those of any of the imaging systems of the first to sixth embodiments.
The present invention is not limited to the above-described embodiments, and obviously includes various changes and improvements made within the scope of the technical idea. For example, in the first to sixth embodiments, half-pressing of the shutter button 104 is described as the example of the first stage capture instruction that starts the earlier capture processing (S107), and full-pressing of the shutter button 104 is described as the example of the second stage capture instruction that starts the later capture processing (S109). However, the first stage and second stage capture instructions may be input by other techniques. Moreover, in the first to sixth embodiments, super-resolution processing is used as the method of subjecting the reference image to the high-resolution processing. However, instead of the high-resolution processing using super-resolution processing, image processing that reduces random noise may be performed, for example by estimating the image displacement amounts of the images captured in the earlier capture processing (S107) and the later capture processing (S109) and then obtaining a weighted average by superposing the images.
This application is a continuation of International Patent Application No. PCT/JP2008/060929, filed on Jun. 10, 2008, which claims the benefit of Japanese Patent Application No. JP 2007-154097, filed on Jun. 11, 2007, which are incorporated by reference as if fully set forth.