1. Field of the Invention
The present invention relates to an image processing method and apparatus, and particularly to combining sharp and blurred images of different resolution and exposure to produce a relatively high resolution, fully exposed and relatively sharp image.
2. Description of the Related Art
Two source images nominally of the same scene may be used to produce a single target image of better quality or higher resolution than either of the source images.
In super-resolution, multiple differently exposed lower resolution images can be combined to produce a single high resolution image of a scene, for example, see “High-Resolution Image Reconstruction from Multiple Differently Exposed Images”, Gunturk et al., IEEE Signal Processing Letters, Vol. 13, No. 4, April 2006; or “Optimizing and Learning for Super-resolution”, Lyndsey Pickup et al, BMVC 2006, 4-7 Sep. 2006, Edinburgh, UK, each being incorporated by reference. However, in super-resolution, blurring of the individual source images, whether because of camera or subject motion, is usually not addressed before the source images are combined.
U.S. Pat. No. 7,072,525, incorporated by reference, discloses adaptive filtering of a target version of an image that has been produced by processing an original version of the image to mitigate the effects of processing including adaptive gain noise, up-sampling artifacts or compression artifacts.
PCT Application No. PCT/EP2005/011011 and U.S. Ser. No. 10/985,657, incorporated by reference, disclose using information from one or more presumed-sharp short exposure time (SET) preview images to calculate a motion function for a fully exposed higher resolution main image to assist in the de-blurring of the main image.
Indeed many other documents, including US 2006/0187308, Suk Hwan Lim et al.; and “Image Deblurring with Blurred/Noisy Image Pairs”, Lu Yuan et al, SIGGRAPH07, Aug. 5-9, 2007, San Diego, Calif., incorporated by reference, are directed towards attempting to calculate a blur function in the main image using a second reference image before de-blurring the main image.
Other approaches, such as disclosed in US2006/0017837, incorporated by reference, have involved selecting information from two or more images, having varying exposure times, to reconstruct a target image where image information is selected from zones with high image details in SET images and from zones with low image details in longer exposure time images.
It is desired to provide an improved method of combining a sharp image and a blurred image of differing resolution and exposure to produce a relatively high resolution, fully exposed and relatively sharp image.
An image processing method is provided to process a first relatively underexposed and sharp image of a scene, and a second relatively well exposed and blurred image nominally of the same scene. The first and second images are derived from respective image sources. A portion of the first relatively underexposed image is provided as an input signal to an adaptive filter. A corresponding portion of the second relatively well exposed image is provided as a desired signal to the adaptive filter. The adaptive filter produces an output signal from the input signal and the desired signal. A first filtered image is constructed from the output signal that is relatively less blurred than the second image. The first filtered image or a further processed version is displayed, transmitted, projected and/or stored.
The first and second images may be in YCC format. The image portions may include a respective Y plane of the first and second images. The constructing the first filtered image may include using the output signal as a Y plane of the first filtered image and using Cb and Cr planes of the input image as the Cb and Cr planes of the first filtered image.
The image source for the second image may be of a relatively higher resolution than the image source for the first image. The method may include aligning and interpolating the first image source to match the alignment and resolution of the second image source.
The first image may be one of an image acquired soon before or after the second image.
The image source for the second image may be of a relatively lower resolution than the image source for the first image. The method may include aligning and interpolating the second source to match the alignment and resolution of the first source. Responsive to the first and second sources being misaligned by more than a pre-determined threshold, the method may further include providing the desired signal from a linear combination of the first and second image sources, or from a combination of phase values from one of the first and second image sources and amplitude values from the other.
The method may include noise filtering the first filtered image and/or applying color correction to one of the first filtered image or the noise filtered image.
The method may include acquiring a first partially exposed image from an image sensor, acquiring a second further exposed image from the image sensor, and subsequently resetting the image sensor. The obtaining the first relatively underexposed and sharp image of a scene may include subtracting the first partially exposed image from the second further exposed image. The second relatively well exposed and blurred image may be obtained from the image sensor immediately prior to the resetting.
A further image processing method is provided. A first relatively underexposed and sharp image of a scene is obtained. A second relatively well exposed and blurred image, nominally of the same scene, is also obtained, wherein the first and second images are derived from respective image sources. A portion of the first relatively underexposed image is provided as an input signal to an adaptive filter. A corresponding portion of the second relatively well exposed image is provided as a desired signal to the adaptive filter. The input signal is adaptively filtered to produce an output signal. A first filtered image is constructed from the output signal, and is relatively less blurred than the second image. The first filtered image and/or a processed version is/are displayed, transmitted, projected and/or stored.
The method may include providing a portion of the first filtered image as the input signal to an adaptive filter, providing a corresponding portion of the second image as a desired signal to the adaptive filter, further adaptively filtering the input signal to produce a further output signal, and constructing a further filtered image from the further output signal relatively less blurred than the first filtered image.
The first and second images may be in RGB format and may be used to produce the first filtered image. The image portions may include a respective color plane of the first and second images. The providing of a portion of the first filtered image may include converting the first filtered image to YCC format. The method may also include converting the second image to YCC format. The image portions for further adaptive filtering may include a respective Y plane of said converted images.
The first and second images may be in YCC format and may be used to produce the first filtered image. The image portions may include a respective Y plane of the first and second images. The providing of a portion of the first filtered image may include converting the first filtered image to RGB format. The method may include converting the second image to RGB format. The image portions for further adaptive filtering may include a respective color plane of the converted images.
The adaptive filtering may be performed either row wise or column wise on the input signal, with the further adaptive filtering being performed in the other of row wise or column wise.
The method may include noise filtering the first filtered image and/or applying color correction to the first filtered image and/or the noise filtered image.
The first and second images may be in RGB format. The image portions may include a respective color plane of the first and second images. The adaptive filtering includes producing a set of filter coefficients from a combination of the input signal and an error signal, the error signal being the difference between the desired signal and the output signal. The method further comprises constructing each color plane of the first filtered image from a combination of the filter coefficients and the input signal color plane information.
Prior to adaptive filtering, a point spread function, PSF, is estimated for the second image. The second image is de-blurred with the point spread function. The de-blurring may be performed in response to the PSF being less than a pre-determined threshold.
The method may include amplifying the luminance characteristics of the under-exposed image prior to the adaptive filtering.
An image processing apparatus is further provided to process a first relatively underexposed and sharp image of a scene, as well as a second relatively well exposed and blurred image, nominally of the same scene. The first and second images are derived from respective image sources. The apparatus includes processor-based components to provide a portion of the first relatively underexposed image as an input signal to an adaptive filter and to provide a corresponding portion of the second relatively well exposed image as a desired signal to the adaptive filter. The apparatus also includes an adaptive filter arranged to produce an output signal from the input signal and the desired signal. An image generator of the apparatus is arranged to construct a first filtered image from the output signal, which is relatively less blurred than the second image.
One or more processor-readable media are also provided having embedded therein processor-readable code for programming a processor to perform any of the image processing methods described herein.
A portable digital image acquisition device is also provided. The device includes a lens and image sensor for capturing a digital image, and a processor, and one or more processor-readable media having processor-readable code embedded therein for programming the processor to perform any of the image processing methods described herein.
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIGS. 5(a) and 5(b) illustrate adaptive filtering of images in accordance with certain embodiments.
a)-(e) illustrate some image data produced during the image acquisition illustrated at
Embodiments are described to provide an image processing method arranged to process a first relatively underexposed and sharp image of a scene, and a second relatively well exposed and blurred image, nominally of the same scene, the first and second images being derived from respective image sources. The method provides a portion of the first relatively underexposed image as an input signal to an adaptive filter; and a corresponding portion of the second relatively well exposed image as a desired signal to the adaptive filter.
Embodiments are also described to provide an image processing apparatus arranged to process a first relatively underexposed and sharp image of a scene, and a second relatively well exposed and blurred image, nominally of the same scene, the first and second images being derived from respective image sources. The apparatus provides a portion of the first relatively underexposed image as an input signal to an adaptive filter; and a corresponding portion of the second relatively well exposed image as a desired signal to the adaptive filter. The adaptive filter produces an output signal from the input signal and the desired signal; and an image generator constructs a first filtered image from the output signal, relatively less blurred than the second image.
Embodiments further provide image enhancement methods and systems. In accordance with one embodiment, there is no need to perform image registration between the picture pair, linear regression or gamma correction. The method is robust to image content differences and to low quality previews, and can be easily applied to create video frames from an under-exposed and well exposed frame pair. For one such embodiment, implementation complexity is reduced and the resultant technique is robust in finite precision implementations.
In this description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “in certain embodiments” in various places throughout the specification are not necessarily all referring to the same embodiment or embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Moreover, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, any claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
Embodiments of the invention are applicable in a variety of settings in which image enhancement is effected. For one embodiment of the invention, during the acquisition step, the camera acquires two images: one full-resolution under-exposed image and a well exposed blurred image (which can be a preview or another full resolution image).
Referring now to
The size of the lower resolution image 12 is O×P and the size of the under-exposed full resolution image 10 is Q×R, with O<Q and P<R.
Where the images are acquired in a digital image acquisition device such as a digital stills camera, camera phone or digital video camera, the lower resolution image 12 may be a preview image of a scene acquired soon before or after the acquisition of a main image comprising the full resolution image 10, with the dimensions of the preview and full resolution images depending on the camera type and settings. For example, the preview size can be 320×240 (O=320; P=240) and the full resolution image can be much bigger (e.g. Q=3648; R=2736).
In accordance with certain embodiments, adaptive filtering (described in more detail later) is applied to the (possibly pre-processed) source images 10, 12 to produce an improved filtered image. Adaptive filtering requires an input image (referred to in the present specification as x(k)) and a desired image (referred to in the present specification as d(k)) of the same size, with the resultant filtered image (referred to in the present specification as y(k)) having the same size as both input and desired images.
As such, in an embodiment, the preview image is interpolated to the size Q×R of the full resolution image.
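As an illustration, a minimal sketch of this up-sampling step is given below, assuming bicubic interpolation via OpenCV; the function name and the choice of interpolation kernel are illustrative and not prescribed by the method.

```python
# Minimal sketch: up-sample the O×P preview to the Q×R size of the full
# resolution image. Bicubic interpolation is assumed here; any standard
# resampling would serve.
import cv2

def interpolate_preview(preview, full_image):
    # cv2.resize expects the destination size as (width, height)
    height, width = full_image.shape[:2]
    return cv2.resize(preview, (width, height), interpolation=cv2.INTER_CUBIC)
```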
In interpolating the preview image, a misalignment between the interpolated image 14 and the full resolution image might exist. As such, in an embodiment, the images are aligned 16 to produce an aligned interpolated preview image 18 and an aligned full resolution image 20. Any known image alignment procedure can be used, for example, as described in Kuglin, C. D. and Hines, D. C., “The phase correlation image alignment method”, Proc. Int. Conf. Cybernetics and Society, IEEE, Bucharest, Romania, September 1975, pp. 163-165, incorporated by reference.
Other possible image registration methods are surveyed in “Image registration methods: a survey”, Image and Vision Computing 21 (2003), 977-1000, Barbara Zitova and Jan Flusser, incorporated by reference.
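A hedged sketch of the phase-correlation approach cited above follows; it estimates only a translational offset, which is all the displacement test described later requires. The function name and the small epsilon guard are illustrative.

```python
# Sketch of phase-correlation alignment (after Kuglin & Hines): the peak
# of the inverse FFT of the normalized cross-power spectrum gives the
# translation between two grayscale images of equal size.
import numpy as np

def phase_correlation_shift(img_a, img_b):
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12      # normalize magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```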
Alternatively, the displacements between the images 10 and 12/14 can be measured if camera sensors producing such a measure are made available.
In any case, either before or during alignment, the full resolution image can be down-sampled to an intermediate size S×T with the preview image being interpolated accordingly to produce the input and desired images of the required resolution, so that after alignment 16, the size of the aligned interpolated image and the aligned full resolution image will be S×T (S ≦ Q, T ≦ R).
These images are now subjected to further processing 22 to compute the input and desired images (IMAGE 1 and IMAGE 2) to be used in adaptive filtering after a decision is made based on the displacement value(s) provided from image alignment 16 as indicated by the line 24.
In real situations, there may be relatively large differences between the images 10, 14, with one image being severely blurred and the other one being under-exposed. As such, alignment may fail to give the right displacement between images.
If the displacement values are lower than a specified number of pixels (e.g. 20), then the full resolution aligned image 20 is used as IMAGE 1 and the aligned interpolated preview image 18 is used as IMAGE 2.
Otherwise, if the displacement values are higher than the specified number of pixels, several alternatives are possible for IMAGE 2, although in general these involve obtaining IMAGE 2 by combining the interpolated preview image 14 and the full resolution image 10 in one of a number of manners.
In a first implementation, we compute two coefficients c1 and c2, and the pixel values of IMAGE 2 are obtained by multiplying the pixel values of the full resolution image 10 with c1 and adding c2. These coefficients are computed using linear regression, a common form of which is least square fitting (G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md., 3rd edition, 1996), incorporated by reference.
Referring to
Therefore we obtain two 5×5 matrices, M1 that corresponds to the pixel values chosen from the preview image and M2 that corresponds to the pixel values chosen from the full resolution image. Two vectors are obtained from the pixel values of these matrices by column-wise ordering of M1 (giving a=(ai)) and M2 (giving b=(bi)). We therefore have pairs of data (ai, bi) for i=1, 2, . . . , n, where n=25 is the total number of grid points from each image. We define the matrix V=[a 1], i.e. the n×2 matrix whose first column is the vector a and whose second column is all ones.
The coefficient vector c=[c1, c2] is obtained by solving the linear system VᵀVc=Vᵀb. The linear system can be solved with any known method.
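A minimal sketch of this fit is shown below, assuming an ordinary least-squares routine is used to solve the normal equations; the function name and the orientation of a and b simply mirror the text above.

```python
# Sketch of the least-squares fit: find c1, c2 with b ≈ c1·a + c2, where
# a holds the 25 grid values from one image and b those from the other
# (column-wise ordering of M1 and M2 as described above).
import numpy as np

def fit_exposure_coefficients(a, b):
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    V = np.column_stack([a, np.ones_like(a)])    # V = [a 1]
    c, *_ = np.linalg.lstsq(V, b, rcond=None)    # solves VᵀVc = Vᵀb
    return c[0], c[1]

# IMAGE 2 is then formed as c1 * full_resolution_image + c2.
```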
Another alternative is to amplify the pixels of the under-exposed image 10 with the ratio of average values of the 25 grid points of both images 10, 12 and rescale within the [0-255] interval for use as IMAGE 2.
In a still further alternative, IMAGE 2 is obtained by combining the amplitude spectrum of the interpolated blurred preview image 14 and the phase of the under-exposed full resolution image 10. As such, IMAGE 2 will be slightly deblurred, with some color artifacts, although it will be aligned with the under-exposed image 10. This should produce relatively fewer artifacts in the final image produced by adaptive filtering.
Alternatively, instead of computing FFTs on full resolution images to determine phase values, an intermediate image at preview resolution can be computed by combining the amplitude spectrum of the blurred image 12 and the phase of a reduced sized version of the under-exposed image 10. This can then be interpolated to produce IMAGE 2.
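The following is a hedged sketch of this amplitude/phase combination for one grayscale plane; clipping the result to [0, 255] is an assumption for 8-bit data.

```python
# Sketch: take the amplitude spectrum of the blurred image and the phase
# of the (reduced-size) under-exposed image, then invert the FFT.
import numpy as np

def combine_amplitude_and_phase(blurred, sharp):
    amplitude = np.abs(np.fft.fft2(blurred))
    phase = np.angle(np.fft.fft2(sharp))
    combined = amplitude * np.exp(1j * phase)
    out = np.fft.ifft2(combined).real
    return np.clip(out, 0, 255)                 # assume 8-bit range
```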
Another possibility is to use as IMAGE 2, a weighted combination of image 20 and image 18, e.g. 0.1*(Image 18)+0.9*(Image 20). This can be used if the preview image 12 has large saturated areas.
In any case, once the processing 22 is complete, two images of similar size are available for adaptive filtering 30, as illustrated at
In a first implementation, the input and desired images are in RGB color space, e.g., see
In the YCC case, the Y plane is selected with the Cb and Cr planes being left unchanged.
Referring now to
In the simplest implementation, L=1 and this can be used if the original image acquisition device can provide good quality under-exposed pictures with a low exposure time. Where the acquisition device produces low quality and noisy under-exposed images, a longer filter length L should be chosen (e.g. 2 or 3 coefficients).
The sliding vectors 32, 34 are obtained from the columns of the image matrices, as illustrated at
When the vectors 32, 34 are combined in the adaptive filter 36, the most recent pixel value added to the first sliding vector 32 is updated. In the preferred embodiment, the updated pixel is the dot product of the filter coefficients and the L pixel values of the first vector. Many variations of adaptive algorithms (e.g., Least Mean Square based, Recursive Least Square based) can be applied and many such algorithms can be found for example in S. Haykin, “Adaptive filter theory”, Prentice Hall, 1996, incorporated by reference. Preferably, the sign-data LMS is employed as described in Hayes, M, Statistical Digital Signal Processing and Modeling, New York, Wiley, 1996, incorporated by reference.
The formulae are:
x(k)=[x(k), x(k−1), . . . , x(k−L+1)],
w(k)=[w1(k), w2(k), . . . , wL(k)],
y(k)=w(k)·x(k),
e(k)=d(k)−y(k),
w(k+1)=w(k)+μ(k)·e(k)·sign(x(k))=w(k)+μ(k)·e(k) (the last equality holding since the pixel values x(k) are non-negative, so sign(x(k))=1),
where
Other considered variants were:
w(k+1)=w(k)+μ(k)·e(k)·x(k) (standard LMS) or
w(k+1)=w(k)+μ(k)·e(k)/(1+x(k))
The term 1+x(k) is used above to avoid division by zero. Alternatively, the formula:
could be used, with any zero-valued x pixel value replaced with a 1.
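Below is a hedged sketch of the sign-data LMS pass along a single column. Since the exact variable step size μ(k) and coefficient threshold formulas are not reproduced here, a fixed μ and a simple clamp (cf. the threshold δ discussed below) are assumed for illustration.

```python
# Sketch of sign-data LMS filtering of one column: x is the column from
# the under-exposed image (input signal), d the corresponding column of
# the well exposed image (desired signal). Fixed step size mu and a
# coefficient clamp delta are assumptions, not the text's exact formulas.
import numpy as np

def sign_data_lms_column(x, d, L=1, mu=1e-5, delta=4.0):
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(L)                             # filter coefficients w(k)
    y = np.zeros_like(x)                        # output signal y(k)
    for k in range(len(x)):
        xk = x[max(0, k - L + 1):k + 1][::-1]   # sliding vector x(k)
        y[k] = w[:len(xk)] @ xk                 # y(k) = w(k)·x(k)
        e = d[k] - y[k]                         # e(k) = d(k) − y(k)
        w[:len(xk)] += mu * e * np.sign(xk)     # sign-data update
        np.clip(w, None, delta, out=w)          # clamp to avoid artifacts
    return y
```

Each column of IMAGE 1 would be filtered against the corresponding column of IMAGE 2 in this way, with the outputs assembled into the updated matrix 38 described below.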
In a further variant, the step size μ(k) is variable as follows:
or
So, using the above formula:
w(k+1)=w(k)+μ(k)·e(k)·sign(x(k))=w(k)+μ(k)·e(k)
this gives:
If μ(k)=μ=1−α, α very close to 1 (e.g. 0.99999), for L=1, we have
with vectors being replaced with scalars. Therefore, for this particular fixed step size, the sign-data LMS and the previous equation are equivalent.
The β parameter can be used in order to avoid division by zero and to avoid over-amplifying black pixels. β is preferably in the interval [1 . . . 10], and more particularly in the interval [5 . . . 10] if the under-exposed image is too dark. If not, β=1 is enough.
Some thresholds or resetting for the filter coefficients w(k) or output values y(k) can be imposed in order to avoid artifacts in the filtered image 38. An upper threshold δ is imposed on the values allowed for the coefficients of w(k) (i.e. wi(k)=δ for any i=1 . . . L, if its computed value at iteration k is above δ). A suitable threshold value for the mentioned LMS algorithm can be chosen as
where
The updated color matrix 38 is completed when the last pixel from the last column has been updated. If filtering has been performed in RGB space, then a final reconstructed image 40 is obtained by concatenating the updated R/G/B matrices. Alternatively, if filtering has been performed in YCC space, the updated Y plane, i.e. matrix 38, together with the unchanged Cb and Cr planes of the under-exposed image 10, can be converted back to RGB color space.
The filtering can be repeated with the reconstructed image 40 replacing the under-exposed image, i.e. IMAGE 1.
In this case, adaptive filtering can be performed on the Y plane of an image converted from RGB space, if previous filtering had been performed in RGB space; or alternatively filtering can be performed on an RGB color plane of an image converted from YCC space, if previous filtering had been performed on the Y plane.
It will also be seen that filtering can be operated column wise or row wise. As such, adaptive filtering can be performed first column or row wise and subsequently in the other of column or row wise.
In each case where filtering is repeated, it has been found that the quality of the reconstructed image after two filtering operations is superior to that of each individual filtering result.
Referring to
Therefore, the pixel value of the filtered image z(k) is generated by the following formula:
Referring now to
A PSF estimation block 74 computes a PSF for the blurred image 72, from the interpolated preview 70 and the full resolution image 72, using any suitable method such as outlined in the introduction.
The blurred image 72 is then deblurred using this estimated PSF to produce a relatively deblurred image 76. Examples of deblurring using a PSF are disclosed in “Deconvolution of Images and Spectra”, 2nd Edition, Academic Press, 1997, edited by Jansson, Peter A., and “Digital Image Restoration”, Prentice Hall, 1977, authored by Andrews, H. C. and Hunt, B. R., incorporated by reference.
Prior to adaptive filtering, the average luminance of the interpolated preview image 70 is equalized in processing block 78 with that of the full resolution (relatively) deblurred image 76. Preferably, this comprises a gamma (γ) amplification of the under-exposed image. The exact value of gamma is determined by obtaining a ratio of the average luminance of the two images.
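The precise gamma formula is not reproduced above; the sketch below is one plausible reading, offered as an assumption, in which γ is chosen so that the mean luminance of the amplified image approximately matches that of the deblurred image.

```python
# Hedged sketch of luminance equalization: gamma-amplify the under-exposed
# Y plane so its mean approximately matches the deblurred image's mean.
# The choice log(target_mean)/log(source_mean) is an assumption: it makes
# mean(src)**gamma equal the target mean, which is only approximate since
# mean(src**gamma) differs from mean(src)**gamma.
import numpy as np

def equalize_luminance(under_exposed_y, deblurred_y):
    src = np.clip(np.asarray(under_exposed_y, float) / 255.0, 1e-6, 1.0)
    target_mean = np.mean(deblurred_y) / 255.0
    gamma = np.log(target_mean) / np.log(np.mean(src))
    return 255.0 * src ** gamma
```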
Adaptive filtering is then applied, and re-applied if necessary, to IMAGE 1 and IMAGE 2 as in the first embodiment. Again, when repeating adaptive filtering, the under-exposed image, i.e. IMAGE 1, is replaced with the reconstructed one.
In the second embodiment, the quality of the reconstructed image 76 produced by adaptive filtering may not be good enough, especially if the PSF is relatively large. In such cases, de-blurring using the PSF may not be used, because it can introduce significant ringing.
In cases such as this, re-applying adaptive filtering as in the first embodiment can attenuate the blurring artifacts in the original image 72 and improve the quality of the image to some extent.
Again, the adaptive filtering can be performed on the Y plane if RGB filtering had been performed previously, and on the RGB color planes if Y filtering had been performed previously.
Again, filtering can be operated on columns or rows, and sequentially on columns and rows.
It has also been found that the second embodiment is useful if the ratio between the sizes of the full resolution image 72 and the preview image is less than three and the preview image is not too noisy. If this is not the case, the filtered image can have a lower quality than that obtained by de-blurring the blurred image with a very good PSF estimation such as described in the introduction.
In both of the first and second embodiments, a single preview image is described as being interpolated to match the resolution of the full resolution image. However, it will also be appreciated that super-resolution of more than one preview image, nominally of the same scene, could also be used to generate the interpolated images 14, 70 of the first and second embodiments, and/or one or more post-view images may be used, and/or another reference image may be used having a partially or wholly overlapping exposure period. Thus, wherever “preview image” is mentioned herein, it is meant to include post-view and overlapping images.
In the above embodiments and in particular in relation to the second embodiment, the short-exposure time (presumed sharp) image is described as comprising a preview image acquired either soon before or after acquisition of a main high resolution image.
However, in a further refined embodiment, the two images are acquired within the longer time period of acquisition of the relatively blurred image. In an implementation of this embodiment, an image acquisition device including a CMOS sensor which allows for a non-destructive readout of an image sensor during image acquisition is employed to acquire the images.
A schematic representation of the timing involved in acquiring these images is explained in relation to
In certain embodiments, the read-out of the under-exposed image is placed mid-way through the longer exposure period, i.e. between T0 and T0+Tshort. As such, the actual exposing scheme goes as follows:
This means that, statistically, the chances of content differences between the short exposure and the long exposure images G and F are minimized. Again, statistically, it is therefore more likely that the differences are caused only by the motion existing in the period [0, Tlong]. The well exposed picture is blurred by the motion existing in its exposure period, while the other is not blurred at all; i.e. it is the motion blur that accounts for the content differences.
Referring now to
The image G can now be combined with the image F through adaptive filtering as described above; in particular, as in relation to the second embodiment, luminance enhancement can be performed on the image G before it is combined with the image F.
Subsequent to producing the filtered image 40 through adaptive filtering, the filtered image can be subjected to further processing to improve its quality further.
Noise correction of the filtered image can be performed using a modified version of the Lee least mean square (LLMSE) filter. In the following, G1 is the filtered image and G1X is the convolution of G1 with an X×X uniform averaging kernel; thus G13 is the convolution of G1 with a 3×3 uniform averaging kernel, and G17 is the convolution of G1 with a 7×7 uniform averaging kernel.
The noise cleared picture is: G2=αG1X+(1−α)G1
If SF is smaller than a predetermined threshold (meaning that the current pixel is in a perfectly uniform area) then G1X=G17; otherwise (there is an edge in the current pixel neighborhood) G1X=G13. It will therefore be seen that where the variation around a pixel is high, G2 is approximately equal to G1.
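A hedged sketch follows. The definitions of SF and α are not reproduced above, so the sketch assumes a local standard deviation for SF and the classical Lee choice α = σn²/(σn² + σlocal²), which reproduces the stated behaviour that G2 ≈ G1 where local variation is high.

```python
# Sketch of the modified LLMSE noise filter. SF and alpha are assumptions:
# SF is taken as the local standard deviation, and alpha as the Lee
# weighting noise_var / (noise_var + local_var), so flat regions are
# smoothed strongly while edges are left close to G1.
import numpy as np
from scipy.ndimage import uniform_filter

def llmse_denoise(G1, noise_var=25.0, sf_threshold=4.0):
    G1 = np.asarray(G1, dtype=float)
    G13 = uniform_filter(G1, size=3)            # 3×3 averaging kernel
    G17 = uniform_filter(G1, size=7)            # 7×7 averaging kernel
    local_var = np.maximum(uniform_filter(G1**2, size=3) - G13**2, 0.0)
    SF = np.sqrt(local_var)                     # assumed smoothness measure
    G1X = np.where(SF < sf_threshold, G17, G13) # flat → 7×7, edge → 3×3
    alpha = noise_var / (noise_var + local_var + 1e-6)
    return alpha * G1X + (1.0 - alpha) * G1
```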
As discussed, the under-exposed acquired image has intensities in the lower part of the range (darkness range). The spectral characteristics of cameras in this area differ from those of normally exposed areas. Therefore, the adaptively filtered image, G1 or G2, depending on whether noise filtering has been applied or not, may have deviations in color. To compensate for these deviations, a rotation or a translation in the (Cb,Cr) plane can be applied. The parameter values for these operations will depend on the camera and the number of exposure stops between the well-exposed and the under-exposed images. One exemplary scheme for color correction in RGB space is as follows:
In certain embodiments, non-linear adaptive amplification is used with a logarithmic model. A one-pass dynamic range correction method may also be used to correct the uneven luminance of a previously obtained image, and noise correction may be used.
As shown in
The non-linear amplification is performed for each pixel of the under-exposed image, G(i,j,k), with i=1 . . . M, j=1 . . . N, k=1, 2, 3. A conventional logarithmic image model may be used in order to avoid the saturation of the colors.
For each color channel, k, the amplification is performed as:
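The amplification formula itself is not reproduced above. As a labelled assumption, the sketch below uses the classical logarithmic image processing (LIP) scalar multiplication, which amplifies dark pixels strongly while keeping every output below a saturation bound D, consistent with the stated goal of avoiding color saturation.

```python
# Assumed LIP-style amplification: gain = 1 leaves the channel unchanged,
# gain > 1 brightens, and outputs always stay below the bound D.
import numpy as np

def lip_amplify(G, gain, D=256.0):
    G = np.asarray(G, dtype=float)
    return D - D * (1.0 - G / D) ** gain
```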
Still referring to
In an image acquisition operation in accordance with certain embodiments, the camera acquires two images: one full-resolution under-exposed image and a well exposed blurred image (which can be a preview, post-view, exposure overlap, or another full resolution image).
Two images of similar size are obtained. For example, a pair of images may include an image G which is a full resolution under-exposed image (M×N×3) and an image F which is a well exposed preview or postview image (P×Q×3), with M>P and N>Q.
The smaller image can be interpolated or up-sampled to the size of the full resolution image.
The method includes the following steps:
An initial coefficient, H(1,1), is computed as a function of the ratio of exposure times of the well exposed preview and of the full resolution image, or of the ratio of average luminance of both pictures. We parse the rows or columns of one color channel and take two corresponding pixels from both images (F(i,j) and G(i,j)). They can be from each color channel or, preferably, the grayscale values. For this pixel pair, the coefficient is changed to: H(i,j)=H(i−1,j)+s·sign(F(i,j)−H(i−1,j)·G(i,j)), where s is a step size whose value depends on the dynamic range of the pixels. Therefore, if F(i,j)>H(i−1,j)·G(i,j), H(i,j) becomes H(i−1,j)+s; otherwise it is H(i−1,j)−s.
The update of H can be performed after a number of pixels have passed through the comparator. For example, a counter is incremented for each +1 from the output of the comparator and decremented for each −1 from the output of the comparator. The value of H is updated to H+s if the counter reaches a specified positive value (e.g. 256) and to H−s if the counter reaches a specified negative value (e.g. −256). The step size depends on the size of the full resolution picture, the thresholds, and the allowed bandwidth around the initial value (e.g. around 0.0001 for 8-bit images with the above mentioned thresholds). Another possibility is to use an adaptive step size, depending on previous values of the comparator.
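A minimal sketch of this counter-based variant follows; the step size and counter threshold shown are the illustrative values from the text, and the use of grayscale inputs is an assumption.

```python
# Sketch of the sign/counter gain estimation: H tracks the ratio between
# the well exposed image F and the under-exposed image G, and is nudged
# by ±s only after the comparator votes accumulate past the threshold.
import numpy as np

def estimate_gain(F, G, h0, s=1e-4, counter_threshold=256):
    H, counter = h0, 0                       # H(1,1) from exposure ratio
    for f, g in zip(np.ravel(F), np.ravel(G)):
        counter += 1 if f > H * g else -1    # comparator output ±1
        if counter >= counter_threshold:     # enough "+1" votes: raise H
            H, counter = H + s, 0
        elif counter <= -counter_threshold:  # enough "−1" votes: lower H
            H, counter = H - s, 0
    return H
```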
The non-linear amplification is performed for each pixel of the under-exposed image, G(i,j,k), with i=1 . . . M, j=1 . . . N, k=1, 2, 3. The classical logarithmic image model may be used in order to avoid the saturation of the colors.
For each color channel, k, the amplification is performed as:
For each pixel, a conversion from RGB to YCbCr is performed, and the one pass dynamic range correction method presented in PCT/EP08/000,378, assigned to the same assignee and incorporated by reference, may be used to correct the uneven luminance of the previously computed pixel. A conversion from YCbCr to RGB is then performed (see
The noise correction is performed using any known method in YCC, RGB or other color space on the enhanced image. The strength of the de-noising method depends on the computed DRC coefficient.
The dynamic range correction and noise correction are shown in
For alternative embodiments, the non-linear adaptive technique may use a fixed or variable step size, a counter and two thresholds. In a further alternative embodiment, the strength of the noise reduction method is correlated with the coefficient computed by the DRC method.
The present invention provides a one-pass image technique that uses an IIR filter to improve the quality of pictures, using only one image and with efficient use of processor resources.
Certain embodiments provide for the automatic correction of uneven luminance in the foreground/background of an image. This implementation improves quality especially where the background is more illuminated or darker than the foreground.
Implementations of these embodiments provide an estimate of the average of the red, green and blue channels, while another recursive filter filters a term that has a component inversely proportional to each color plane pixel value or to the intensity value. Its output is multiplied with one or more correction terms dependent on the color channel(s) and preferably limited by two thresholds. The enhanced pixel value is obtained by using a linear or logarithmic model.
Using this embodiment, a color boost is obtained in addition to the automatic correction of uneven luminance in the foreground/background.
In certain embodiments, the average values of each color channel are not used for comparison purposes and they can be replaced by sliding averaging windows ending on the pixel being processed. In any case, these average values are used to determine correction terms which in turn are used to avoid over-amplification of red or blue channels.
Coefficients of the IIR filter may advantageously be fixed, rather than employing adaptive filters. As such, the present method may involve merely one pass through an image, and the output of one filter does not have to be used as an input to another filter.
Referring now to
Only one input image, G, is used and a running average on each color channel is computed 20 as each pixel value is read. Therefore, for each pixel G(i,j,k) of each plane k=1 . . . 3, we compute:
where β is a coefficient between 0 and 1.
Another variant is to compute, on each color channel, the sum of 2N+1 pixel values around the pixel G(i,j,k) and divide by 2N+1.
From the moving average values,
Preferably, both correction terms γR and γB are limited within a chosen interval (e.g. between 0.95 and 1.05): if either value is below 0.95 it is set to 0.95, and if either is above 1.05 it is set to 1.05. This prevents over-amplification of the red and blue channels in further processing.
In parallel with generating the moving average values, the pixels are parsed on rows or columns and for each pixel of a color plane G(i,j,k), a coefficient H(i,j) is calculated as follows:
This processing can be broken down into step 30:
followed by a recursive filter, step 40:
H(i,j)=αH(i,j−1)+(1−α)(f(G(i,j,k), a, δ))
where:
The comparison with δ is used in order to avoid division by zero and to amplify dark pixels (e.g. δ=15). The initial value H(1,1) can have values between 1 and 2.
Using this filter, darker areas are amplified more than illuminated areas due to the inverse values averaging and, therefore, an automatic correction of uneven luminance in the foreground/background is obtained.
It will be seen from the above that the recursive filter, H, doesn't filter the pixel values. For example, if a=α=⅛ and δ=15, the filter 30/40 is filtering a sequence of numbers that varies between 1 and 3 depending on actual pixel value G(i,j,k) and the preceding values of the image. If the filter 40 simply uses as input the pixel values G(i,j,k), it generates a simple low pass filtered image, with no luminance correction.
In one implementation of the embodiment, the modified pixel values, G1(i,j,k), are given by a linear combination, step 50, of the filter parameters H and the correction terms γR,γB:
G1(i,j,1)=G(i,j,1)·H(i,j)·γR
G1(i,j,2)=G(i,j,2)·H(i,j)
G1(i,j,3)=G(i,j,3)·H(i,j)·γB.
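An end-to-end sketch of the linear model (steps 20 to 50) is given below. The running-average formula, the derivation of γR and γB, and the form of f(·) are not reproduced above, so the following are labelled assumptions: an exponential moving average for step 20; γR and γB taken as green/red and green/blue average ratios clamped to [0.95, 1.05]; and f(G, a, δ) = 1 + a·255/(G + δ), which for a=α=⅛ and δ=15 varies between roughly 1 and 3 as noted above.

```python
# Sketch of the one-pass luminance correction, steps 20-50. The moving
# average, correction terms and f(·) are assumptions (see lead-in); f is
# computed on the green plane here for illustration.
import numpy as np

def enhance_linear(G, beta=0.95, a=1/8.0, alpha=1/8.0, delta=15.0):
    G = np.asarray(G, dtype=float)              # M×N×3 RGB image
    out = np.empty_like(G)
    avg = G[0, 0].copy()                        # running channel averages
    H = 1.5                                     # H(1,1) between 1 and 2
    for i in range(G.shape[0]):
        for j in range(G.shape[1]):
            avg = beta * avg + (1 - beta) * G[i, j]               # step 20
            gR = np.clip(avg[1] / max(avg[0], 1e-6), 0.95, 1.05)  # γR
            gB = np.clip(avg[1] / max(avg[2], 1e-6), 0.95, 1.05)  # γB
            f = 1.0 + a * 255.0 / (G[i, j, 1] + delta)            # step 30
            H = alpha * H + (1 - alpha) * f                       # step 40
            out[i, j] = G[i, j] * H * np.array([gR, 1.0, gB])     # step 50
    return np.clip(out, 0, 255)
```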
A more complex alternative to the linear model is a logarithmic model. In such an implementation, the output pixel G1(i,j,k) corresponding to the enhanced color plane (R/G/B color planes) is as follows:
where:
Examination of the formula above shows that only values smaller than D may be obtained. In this implementation, the degree of color and brightness boost is controlled by varying the pole value (α) and the logarithmic model factor (ε).
The computations can be adapted for the YCC or other color spaces. For example, when using YCC color space in the embodiment of
The linear model can be applied for the luminance channel and the logarithmic model can be used for the chrominance channels using the H(i,j) coefficient computed on the luminance channel.
This approach leads to computational savings and adds the possibility of adjusting the color saturation by using a different positive value for ε (e.g. ε=0.9) when computing the new chrominance values. The brightness of the enhanced image can be varied by multiplying the Y channel with a positive factor, g, whose value can be different from the value of g used for the chrominance channels.
In another embodiment, the processing structure of
In this embodiment, the image is preferably provided in YCC format and the processing is performed on the Y channel only. The ratio of the next pixel and the current pixel value is computed and filtered with a one-pole IIR filter (e.g. α = 1/16), step 40. The operations can be performed on successive or individual rows or columns. The initial H coefficient is set to 1 and in case of operating on row i we have:
where:
Again, this processing can be broken down into step 30:
followed by the recursive filter, step 40:
H(i,j)=αH(i,j−1)+(1−α)(f(Y(i,j),δ))
Again, the comparison with δ is used in order to avoid division by zero (δ is usually set to 1). H(i,j) is a coefficient that corresponds to the current pixel position (i, j) of the original image. The initial coefficient can be set to 1 at the beginning of the first row or at the beginning of each row. In the first case, the coefficient computed at the end of one row is used to compute the coefficient corresponding to the first pixel of the next row.
The enhanced pixel value Y1(i,j) is given by the following formula:
Y1(i,j)=Y(i,j)[1+ε(i,j)·(1−H(i,j))]
where ε(i,j) can be a constant gain factor or a variable gain depending on the H coefficients. Another alternative for ε(i,j) is to use the difference between consecutive pixels or the ratio of successive pixel values. For example, if the difference between successive pixels is small (or the ratio of consecutive pixel values is close to 1), the value of ε(i,j) should be lower, because the pixel might be situated in a smooth area. If the difference is big (or the ratio is much higher or much lower than 1), the pixels might be situated on an edge, therefore the value of ε(i,j) should be close to zero, in order to avoid possible over-shooting or under-shooting problems. For intermediate values, the gain function should vary between 0 and a maximum chosen gain. An example of ε(i,j) according to these requirements has a Rayleigh distribution.
In some implementations, a look up table (LUT) can be used if a variable ε(i,j) is chosen, because the absolute difference between consecutive pixels has limited integer values.
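A hedged sketch of this one-pass sharpening on a single row follows; the exact form of f(·) in step 30 is not reproduced above, so the next/current pixel ratio with a δ floor on the denominator is assumed, and ε is taken as a constant gain.

```python
# Sketch of one-pass IIR sharpening on the Y channel of one row:
# step 30 forms the next/current pixel ratio (with a delta floor), step 40
# smooths it with a one-pole IIR filter, and each pixel is corrected as
# Y1 = Y·[1 + eps·(1 − H)]. eps is constant here; a variable, LUT-based
# eps(i,j) could be substituted as described above.
import numpy as np

def sharpen_row(y_row, alpha=1/16.0, delta=1.0, eps=0.3):
    y_row = np.asarray(y_row, dtype=float)
    out = np.empty_like(y_row)
    H = 1.0                                       # initial coefficient
    for j in range(len(y_row)):
        nxt = y_row[j + 1] if j + 1 < len(y_row) else y_row[j]
        f = nxt / max(y_row[j], delta)            # step 30 (assumed f)
        H = alpha * H + (1 - alpha) * f           # step 40: one-pole IIR
        out[j] = y_row[j] * (1 + eps * (1 - H))
    return np.clip(out, 0, 255)
```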
This method is highly parallelizable and its complexity is very low. The complexity can be further reduced if LUTs are used and some multiplications are replaced by shifts.
Furthermore, this embodiment can also be applied to images in RGB space.
This embodiment can also be applied in sharpening video frames, either by sharpening each individual video frame or only identified slightly blurred frames.
In each embodiment, the pixels can be parsed using any space-filling curve (e.g. Hilbert curves), not only by rows or columns. The corrected image can be thought of as a continuously modified image, pixel by pixel, through the path of a continuously moving point.
It will also be seen that the image sharpening processing of this embodiment can be applied after the luminance correction of the first embodiment to provide a filtered image with characteristics superior to those of either method implemented independently.
Indeed either method can be applied in conjunction with other image processing methods as required for example following the processing described in PCT Application No. PCT/EP2007/009939, hereby incorporated by reference.
In accordance with an embodiment, image enhancement data processing is effected using a digital processing system (DPS). The DPS may be configured to store, process, and communicate a plurality of various types of digital information including digital images and video.
Certain embodiments may employ a DPS or devices having digital processing capabilities. Exemplary components of such a system include a central processing unit (CPU), and a signal processor coupled to a main memory, static memory, and mass storage device. The main memory may store various applications to effect operations of certain embodiments, while the mass storage device may store various digital content. The DPS may also be coupled to input/output (I/O) devices and audio/visual devices. The CPU may be used to process information and/or signals for the processing system. The main memory may be a random access memory (RAM) or some other dynamic storage device, for storing information or instructions (program code), which are used by the CPU. The static memory may be a read only memory (ROM) and/or other static storage devices, for storing information or instructions, which may also be used by the CPU. The mass storage device may be, for example, a hard disk drive, optical disk drive, or firmware for storing information or instructions for the processing system.
Embodiments have been described involving image enhancement methods and systems.
Embodiments have been described as including various operations. Many of the processes are described in their most basic form, but operations can be added to or deleted from any of the processes without departing from the scope of the invention. The operations of the invention may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the operations. Alternatively, the steps may be performed by a combination of hardware and software. The invention may be provided as a computer program product that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process according to the invention. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions. Moreover, the invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). All operations may be performed at the same central site or, alternatively, one or more operations may be performed elsewhere.
While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the arts without departing from the scope of the present invention as set forth in the claims that follow and their structural and functional equivalents.
In addition, in methods that may be performed according to the claims below and/or preferred embodiments herein, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, unless a particular ordering is expressly provided or understood by those skilled in the art as being necessary.
This application claims priority to U.S. provisional patent application Ser. No. 61/023,774, filed Jan. 25, 2008, and is a continuation-in-part of U.S. patent application Ser. No. 11/856,721, filed Sep. 18, 2007, and is a continuation-in-part of U.S. patent application Ser. No. 12/116,140, filed May 6, 2008, now U.S. Pat. No. 7,995,855. Each of these is assigned to the same assignee and is hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
5251019 | Moorman et al. | Oct 1993 | A |
5374956 | D'Luna | Dec 1994 | A |
5392088 | Abe et al. | Feb 1995 | A |
5428723 | Ainscow et al. | Jun 1995 | A |
5510215 | Prince et al. | Apr 1996 | A |
5599766 | Boroson et al. | Feb 1997 | A |
5686383 | Long et al. | Nov 1997 | A |
5747199 | Roberts et al. | May 1998 | A |
5751836 | Wildes et al. | May 1998 | A |
5756239 | Wake | May 1998 | A |
5756240 | Roberts et al. | May 1998 | A |
5802220 | Black et al. | Sep 1998 | A |
5889277 | Hawkins et al. | Mar 1999 | A |
5889554 | Mutze | Mar 1999 | A |
5909242 | Kobayashi et al. | Jun 1999 | A |
5981112 | Roberts | Nov 1999 | A |
6028960 | Graf et al. | Feb 2000 | A |
6035072 | Read | Mar 2000 | A |
6041078 | Rao | Mar 2000 | A |
6061462 | Tostevin et al. | May 2000 | A |
6081606 | Hansen et al. | Jun 2000 | A |
6114075 | Long et al. | Sep 2000 | A |
6122017 | Taubman | Sep 2000 | A |
6124864 | Madden et al. | Sep 2000 | A |
6134339 | Luo | Oct 2000 | A |
6269175 | Hanna et al. | Jul 2001 | B1 |
6297071 | Wake | Oct 2001 | B1 |
6297846 | Edanami | Oct 2001 | B1 |
6326108 | Simons | Dec 2001 | B2 |
6330029 | Hamilton et al. | Dec 2001 | B1 |
6360003 | Doi et al. | Mar 2002 | B1 |
6365304 | Simons | Apr 2002 | B2 |
6381279 | Taubman | Apr 2002 | B1 |
6387577 | Simons | May 2002 | B2 |
6407777 | DeLuca | Jun 2002 | B1 |
6535244 | Lee et al. | Mar 2003 | B1 |
6555278 | Loveridge et al. | Apr 2003 | B1 |
6567536 | McNitt et al. | May 2003 | B2 |
6599668 | Chari et al. | Jul 2003 | B2 |
6602656 | Shore et al. | Aug 2003 | B1 |
6607873 | Chari et al. | Aug 2003 | B2 |
6618491 | Abe | Sep 2003 | B1 |
6625396 | Sato | Sep 2003 | B2 |
6643387 | Sethuraman et al. | Nov 2003 | B1 |
6741960 | Kim et al. | May 2004 | B2 |
6863368 | Sadasivan et al. | Mar 2005 | B2 |
6892029 | Tsuchida et al. | May 2005 | B2 |
6947609 | Seeger et al. | Sep 2005 | B2 |
6961518 | Suzuki | Nov 2005 | B2 |
7019331 | Winters et al. | Mar 2006 | B2 |
7072525 | Covell | Jul 2006 | B1 |
7084037 | Gamo et al. | Aug 2006 | B2 |
7160573 | Sadasivan et al. | Jan 2007 | B2 |
7177538 | Sato et al. | Feb 2007 | B2 |
7180238 | Winters | Feb 2007 | B2 |
7195848 | Roberts | Mar 2007 | B2 |
7292270 | Higurashi et al. | Nov 2007 | B2 |
7315324 | Cleveland et al. | Jan 2008 | B2 |
7315630 | Steinberg et al. | Jan 2008 | B2 |
7315631 | Corcoran et al. | Jan 2008 | B1 |
7316630 | Tsukada et al. | Jan 2008 | B2 |
7316631 | Tsunekawa | Jan 2008 | B2 |
7317815 | Steinberg et al. | Jan 2008 | B2 |
7336821 | Ciuc et al. | Feb 2008 | B2 |
7369712 | Steinberg et al. | May 2008 | B2 |
7403643 | Ianculescu et al. | Jul 2008 | B2 |
7453493 | Pilu | Nov 2008 | B2 |
7453510 | Kolehmainen et al. | Nov 2008 | B2 |
7460695 | Steinberg et al. | Dec 2008 | B2 |
7469071 | Drimbarean et al. | Dec 2008 | B2 |
7489341 | Yang et al. | Feb 2009 | B2 |
7548256 | Pilu | Jun 2009 | B2 |
7551755 | Steinberg et al. | Jun 2009 | B1 |
7565030 | Steinberg et al. | Jul 2009 | B2 |
7593144 | Dymetman | Sep 2009 | B2 |
7623153 | Hatanaka | Nov 2009 | B2 |
7657172 | Nomura et al. | Feb 2010 | B2 |
7692696 | Steinberg et al. | Apr 2010 | B2 |
20010036307 | Hanna et al. | Nov 2001 | A1 |
20020006163 | Hibi et al. | Jan 2002 | A1 |
20030052991 | Stavely et al. | Mar 2003 | A1 |
20030058361 | Yang | Mar 2003 | A1 |
20030091225 | Chen | May 2003 | A1 |
20030103076 | Neuman | Jun 2003 | A1 |
20030151674 | Lin | Aug 2003 | A1 |
20030152271 | Tsujino et al. | Aug 2003 | A1 |
20030169818 | Obrador | Sep 2003 | A1 |
20030193699 | Tay | Oct 2003 | A1 |
20030219172 | Caviedes et al. | Nov 2003 | A1 |
20040066981 | Li et al. | Apr 2004 | A1 |
20040076335 | Kim | Apr 2004 | A1 |
20040090532 | Imada | May 2004 | A1 |
20040120598 | Feng | Jun 2004 | A1 |
20040120698 | Hunter | Jun 2004 | A1 |
20040130628 | Stavely | Jul 2004 | A1 |
20040145659 | Someya et al. | Jul 2004 | A1 |
20040169767 | Norita et al. | Sep 2004 | A1 |
20040212699 | Molgaard | Oct 2004 | A1 |
20040218057 | Yost et al. | Nov 2004 | A1 |
20040218067 | Chen et al. | Nov 2004 | A1 |
20040247179 | Miwa et al. | Dec 2004 | A1 |
20050010108 | Rahn et al. | Jan 2005 | A1 |
20050019000 | Lim et al. | Jan 2005 | A1 |
20050031224 | Prilutsky et al. | Feb 2005 | A1 |
20050041121 | Steinberg et al. | Feb 2005 | A1 |
20050041123 | Ansari et al. | Feb 2005 | A1 |
20050047672 | Ben-Ezra et al. | Mar 2005 | A1 |
20050052553 | Kido et al. | Mar 2005 | A1 |
20050057687 | Irani et al. | Mar 2005 | A1 |
20050068446 | Steinberg et al. | Mar 2005 | A1 |
20050068452 | Steinberg et al. | Mar 2005 | A1 |
20050140801 | Prilutsky et al. | Jun 2005 | A1 |
20050140829 | Uchida et al. | Jun 2005 | A1 |
20050195317 | Myoga | Sep 2005 | A1 |
20050201637 | Schuler et al. | Sep 2005 | A1 |
20050219391 | Sun et al. | Oct 2005 | A1 |
20050231625 | Parulski et al. | Oct 2005 | A1 |
20050248660 | Stavely et al. | Nov 2005 | A1 |
20050259864 | Dickinson et al. | Nov 2005 | A1 |
20050270381 | Owens et al. | Dec 2005 | A1 |
20050281477 | Shiraki et al. | Dec 2005 | A1 |
20060006309 | Dimsdale et al. | Jan 2006 | A1 |
20060017837 | Sorek et al. | Jan 2006 | A1 |
20060038891 | Okutomi et al. | Feb 2006 | A1 |
20060039690 | Steinberg et al. | Feb 2006 | A1 |
20060093212 | Steinberg et al. | May 2006 | A1 |
20060098237 | Steinberg et al. | May 2006 | A1 |
20060098890 | Steinberg et al. | May 2006 | A1 |
20060098891 | Steinberg et al. | May 2006 | A1 |
20060119710 | Ben-Ezra et al. | Jun 2006 | A1 |
20060120599 | Steinberg et al. | Jun 2006 | A1 |
20060125938 | Ben-Ezra et al. | Jun 2006 | A1 |
20060133688 | Kang et al. | Jun 2006 | A1 |
20060140455 | Costache et al. | Jun 2006 | A1 |
20060170786 | Won | Aug 2006 | A1 |
20060171464 | Ha | Aug 2006 | A1 |
20060187308 | Lim et al. | Aug 2006 | A1 |
20060204034 | Steinberg et al. | Sep 2006 | A1 |
20060204054 | Steinberg et al. | Sep 2006 | A1 |
20060204110 | Steinberg et al. | Sep 2006 | A1 |
20060227249 | Chen et al. | Oct 2006 | A1 |
20060285754 | Steinberg et al. | Dec 2006 | A1 |
20070025714 | Shiraki | Feb 2007 | A1 |
20070058073 | Steinberg et al. | Mar 2007 | A1 |
20070083114 | Yang et al. | Apr 2007 | A1 |
20070086675 | Chinen et al. | Apr 2007 | A1 |
20070097221 | Stavely et al. | May 2007 | A1 |
20070110305 | Corcoran et al. | May 2007 | A1 |
20070147820 | Steinberg et al. | Jun 2007 | A1 |
20070189748 | Drimbarean et al. | Aug 2007 | A1 |
20070201724 | Steinberg et al. | Aug 2007 | A1 |
20070234779 | Hsu et al. | Oct 2007 | A1 |
20070269108 | Steinberg et al. | Nov 2007 | A1 |
20070296833 | Corcoran et al. | Dec 2007 | A1 |
20080012969 | Kasai et al. | Jan 2008 | A1 |
20080037827 | Corcoran et al. | Feb 2008 | A1 |
20080037839 | Corcoran et al. | Feb 2008 | A1 |
20080037840 | Steinberg et al. | Feb 2008 | A1 |
20080043121 | Prilutsky et al. | Feb 2008 | A1 |
20080166115 | Sachs et al. | Jul 2008 | A1 |
20080175481 | Petrescu et al. | Jul 2008 | A1 |
20080211943 | Egawa et al. | Sep 2008 | A1 |
20080218611 | Parulski et al. | Sep 2008 | A1 |
20080219581 | Albu et al. | Sep 2008 | A1 |
20080219585 | Kasai et al. | Sep 2008 | A1 |
20080220750 | Steinberg et al. | Sep 2008 | A1 |
20080231713 | Florea et al. | Sep 2008 | A1 |
20080232711 | Prilutsky et al. | Sep 2008 | A1 |
20080240555 | Nanu et al. | Oct 2008 | A1 |
20080240607 | Sun et al. | Oct 2008 | A1 |
20080259175 | Muramatsu et al. | Oct 2008 | A1 |
20080267530 | Lim | Oct 2008 | A1 |
20080292193 | Bigioi et al. | Nov 2008 | A1 |
20080309769 | Albu et al. | Dec 2008 | A1 |
20080309770 | Florea et al. | Dec 2008 | A1 |
20090003652 | Steinberg et al. | Jan 2009 | A1 |
20090009612 | Tico et al. | Jan 2009 | A1 |
20090080713 | Bigioi et al. | Mar 2009 | A1 |
20090080796 | Capata et al. | Mar 2009 | A1 |
20090080797 | Nanu et al. | Mar 2009 | A1 |
20090179999 | Albu et al. | Jul 2009 | A1 |
20090185041 | Kang et al. | Jul 2009 | A1 |
20090185753 | Albu et al. | Jul 2009 | A1 |
20090190803 | Neghina et al. | Jul 2009 | A1 |
20090196466 | Capata et al. | Aug 2009 | A1 |
20090284610 | Fukumoto et al. | Nov 2009 | A1 |
20090303342 | Corcoran et al. | Dec 2009 | A1 |
20090303343 | Drimbarean et al. | Dec 2009 | A1 |
20100026823 | Sawada | Feb 2010 | A1 |
20100053349 | Watanabe et al. | Mar 2010 | A1 |
20100126831 | Ceelen | May 2010 | A1 |
20110090352 | Wang et al. | Apr 2011 | A1 |
20110102642 | Wang et al. | May 2011 | A1 |
Number | Date | Country |
---|---|---|
3729324 | Mar 1989 | DE |
10154203 | Jun 2002 | DE |
10107004 | Sep 2002 | DE |
0944251 | Sep 1999 | EP |
944251 | Apr 2003 | EP |
1583033 | Oct 2005 | EP |
1779322 | Jan 2008 | EP |
1429290 | Jul 2008 | EP |
10285542 | Oct 1998 | JP |
11327024 | Nov 1999 | JP |
WO9843436 | Oct 1998 | WO |
WO0113171 | Feb 2001 | WO |
WO-0245003 | Jun 2002 | WO |
WO-03071484 | Aug 2003 | WO |
WO-2004001667 | Dec 2003 | WO |
WO-2004036378 | Apr 2004 | WO |
WO-2006050782 | May 2006 | WO |
WO2007093199 | Aug 2007 | WO |
WO2007142621 | Dec 2007 | WO |
WO-2007143415 | Dec 2007 | WO |
WO2008017343 | Feb 2008 | WO |
WO2008131438 | Oct 2008 | WO |
WO2008151802 | Dec 2008 | WO |
WO2009036793 | Mar 2009 | WO |
Entry |
---|
John Russ, The Image Processing Handbook, CRC Press 2002. |
Xinqiao Liu, Photocurrent Estimation from Multiple Non-destructive Samples in a CMOS Image Sensor, SPIE 2001. |
Andrews, H.C. et al., “Digital Image Restoration”, Prentice Hall, 1977. |
Bates et al., “Some Implications of Zero Sheets for Blind Deconvolution and Phase Retrieval”, J. Optical Soc. Am. A, 1990, pp. 468-479, vol. 7. |
Ben Ezra, Moshe et al., “Motion Deblurring Using Hybrid Imaging”, Proceedings IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. |
Ben-Ezra, M. et al., “Motion-Based Motion Deblurring”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, pp. 689-698, vol. 26—Issue 6. |
Bennett, Eric P. et al., “Video Enhancement Using Per-Pixel Virtual Exposures”, International Conference on Computer Graphics and Interactive Techniques, ACM Siggraph, 2005, pp. 845-852. |
Bhaskaran, V. et al., “Motion estimation using a computation-constrained criterion”, Digital Signal Processing Proceedings, 1997, pp. 229-232, vol. 1. |
Bones et al., “Deconvolution and Phase Retrieval With Use of Zero Sheets”, J. Optical Soc. Am. A, 1995, pp. 1,842-1,857, vol. 12. |
Chen-Kuei Y. et al., “Color image sharpening by moment-preserving technique”, Signal Processing, 1995, pp. 397-403, vol. 45—Issue 3, Elsevier Science Publishers. |
Crowley, J. et al., “Multi-modal tracking of faces for video communication, http://citeseer.ist.psu.edu/crowley97multimodal.html”, In Computer Vision and Pattern Recognition, 1997. |
Deever, A., “In-camera all-digital video stabilization”, Proceedings of the International Conference on Decision Support Systems, Proceedings of ISDSS, 2006, pp. 190-193. |
Demir, B. et al., “Block motion estimation using adaptive modified two-bit transform”, 2007, pp. 215-222, vol. 1—Issue 2. |
Dufournaud et al., “Matching Images With Different Resolutions”, IEEE Conference Proceedings on Computer Vision and Pattern Recognition, 2000. |
Elad et al., “Restoration of a Single Superresolution Image from several Blurred, Noisy and Undersampled Measured Images”, IEEE Trans on Image Processing, 1997, vol. 6—Issue 12. |
Elad, Michael et al., “Superresolution Restoration of an Image Sequence: Adaptive Filtering Approach”, IEEE Transactions on Image Processing, 1999, pp. 529-541, vol. 8—Issue 3. |
Favaro, Paolo, “Depth from focus/defocus, http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/FAVARO1/dfdtutorial.html”, 2002. |
Feng, J. et al., “Adaptive block matching motion estimation algorithm using bit plane matching”, ICIP, 1995, pp. 496-499. |
Fujita K. et al., “An edge-adaptive iterative method for image restoration”, Singapore ICCS/ISITA '92, “Communications on the Move”, Singapore, Nov. 16-20, 1992, New York, NY, USA, IEEE, pp. 361-365, XP010066997, ISBN: 0-7803-0803-4. |
Jansson, Peter A., “Chapter 1: Convolution and Related Concepts”, Deconvolution of Images and Spectra, 1997, 2nd. Edition, Academic Press. |
Jiang, Wei et al., “Dense Panoramic Imaging and 3D Reconstruction by Sensors Fusion, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, Japan Sci. and Technol. Agency, JPN(JST); National Inst. Industrial Safety, JPN Nippon Kikai Gakkai Robotikusu, Mekatoronikusu Koenkai Koen Ronbunshu (CD-ROM), 2006, pp. 2P1-C15. |
Ko, S. et al., “Fast digital image stabilizer based on gray-coded bit-plane matching”, IEEE Transactions on Consumer Electronics, 1999, pp. 598-603, vol. 45—Issue 3. |
Lane et al., “Automatic multidimensional deconvolution”, J. Opt. Soc. Am. A, 1987, pp. 180-188, vol. 4—Issue 1. |
Lhuillier, M. et al., “A quasi-dense approach to surface reconstruction from uncalibrated images, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, pp. 418-433, vol. 27—Issue 3, IEEE Comput. Soc. |
Mase, Mitsuhito et al., “A Wide Dynamic Range CMOS image Sensor with Multiple Exposure-Time Signal Outputs and 12-bit Column-Parallel Cyclic A/D Converters”, IEEE Journal of Solid-State Circuits, 2005, vol. 40—Issue 12. |
Natarajan B. et al., “Low-complexity block-based motion estimation via one-bit transforms”, IEEE Trans. Circuits Syst. Video Technol., 1997, pp. 702-706, vol. 7—Issue 5. |
Oppenheim, A.V. et al., “The Importance of Phase in Signals, XP008060042, ISSN: 0018-9219.”, Proceedings of the IEEE, 1981, pp. 529-541, vol. 69—Issue 5. |
Park, Sung Cheol et al., “Super-resolution image reconstruction: a technical overview, ISSN: 1053-5888. DOI: 10.1109/MSP.2003.1203207.”, Signal Processing Magazine, 2003, pp. 21-36, vol. 20—Issue 3, IEEE Publication. |
Patti A. et al., “Super-Resolution video reconstruction with arbitrary sampling lattices and non-zero aperture time, http://citeseer.ist.psu.edu/patti97super.html”, In IEEE Transactions on Image Processing, 1997, pp. 1064-1078. |
PCT International Preliminary Report on Patentability, for PCT Application No. PCT/EP2005/011011, dated Jan. 22, 2007, 8 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration for PCT application No. PCT/US2007/069638, dated Mar. 5, 2008, 9 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2005/011011, dated Apr. 24, 2006, 12 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2007/009939, dated May 21, 2008, 13 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2008/004729, dated Oct. 29, 2008, 9 pages. |
Pulli, Kari et al., “Robust Meshes from Multiple Range Maps, http://citeseer.ist.psu.edu/pulli97robust.html”, In Proc IEEE International Conference on Recent Advances in 3-D Digital Imaging and Modeling, 1997. |
Rahgozar et al., “Motion Estimation Based on Time-Sequentially Sampled Imagery”, IEEE Transactions on Image Processing, 1995. |
Rav-Acha, A. et al., “Restoration of Multiple Images with Motion Blur in Different Directions, XP002375829, ISBN: 0-7695-0813-8”, Proceedings Fifth IEEE Workshop on Applications of Computer Vision, IEEE Comput. Soc, 2000, pp. 22-28. |
Sasaki et al., “A Wide Dynamic Range CMOS Image Sensor with Multiple Short-Time Exposures”, IEEE Proceedings on Sensors, 2004, pp. 967-972, vol. 2. |
Sauer, K. et al., “Efficient Block Motion Estimation Using Integral Projections”, IEEE Trans. Circuits, Systems for video Tech, 1996, pp. 513-518, vol. 6—Issue 5. |
Schultz, Richard R. et al., “Extraction of High-Resolution Frames from Video Sequences, http://citeseer.ist.psu.edu/schultz96extraction.html”, IEEE transactions on image processing, 1996, pp. 996-1011. |
Seldin et al., “Iterative blind deconvolution algorithm applied to phase retrieval”, J. Opt. Soc. Am. A, 1990, pp. 428-433, vol. 7—Issue 3. |
She, Peng et al., “Motion View Reconstruction Method with Real Object Image based on Virtual Object Movement, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, Eizo Joho Media Gakkai Gijutsu Hokoku, 2005, pp. 67-70, vol. 29—Issue 17. |
Siu, Angus et al., “Image registration for image-based rendering, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, IEEE Transactions on Image Processing , 2005, pp. 241-252, vol. 14—Issue 2. |
Trussell, H.J. et al., “Identification and restoration of spatially variant motion blurs in sequential images, XP002375828”, IEEE Transactions on Image Processing, 1992, pp. 123-126, vol. 1—Issue 1. |
Uomori, K. et al., “Automatic image stabilizing system by full-digital signal processing”, IEEE Transactions on Consumer Electronics, 1990, vol. 36, No. 3, pp. 510-519. |
Zhang, Junping et al., “Change detection for the urban area based on multiple sensor information fusion, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering”, IEEE International Geoscience and Remote Sensing Symposium, 2005, p. 4, IEEE. |
Baker S., et al., Lucas Kanade 20 Years on: A Unifying Framework, International Journal of Computer Vision, Springer Netherlands, 2004, vol. 56 (3), pp. 221-255. |
Cannon M., Blind Deconvolution of Spatially Invariant Image Blurs with Phase, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1976, vol. ASSP-24, No. 1. |
Internet Reference: CCD Fundamentals, Canon, URL: http://www.roper.co.jp/Html/roper/tech_note/html/tefbin.htm, Nov. 2003. |
Golub G. H. et al., Matrix Computations, 1996, 3rd edition, Johns Hopkins University Press, Baltimore. |
Bahadir K. Gunturk, Murat Gevrekci, High-Resolution Image Reconstruction From Multiple Differently Exposed Images, IEEE Signal Processing Letters, vol. 13, No. 4, Apr. 2006, pp. 197-200. |
Kuglin C. D. et al., The phase correlation image alignment method, Proc. Int. Conf. Cybernetics and Society, 1975, pp. 163-165, IEEE, Bucharest, Romania. |
Liu X., et al., Photocurrent Estimation from Multiple Non-Destructive Samples in a CMOS Image Sensor, in Proceedings of the SPIE Electronic Imaging 2001 Conference, vol. 4306, San Jose, CA, Jan. 2001, pp. 450-458. |
Lyndsey Pickup, et al., Optimizing and Learning for Super-resolution, BMVC, Sep. 4-7, 2006. |
Russ J.C., Chapter 3: Correcting Imaging Defects, in the Image Processing Handbook, 2002, by CRC Press, LLC., 75 pages. |
Shi J., et al., Good Features to Track, IEEE Conference on Computer Vision and Pattern Recognition, 1994, pp. 593-600. |
Tomasi C., et al., Detection and Tracking of Point Features, Carnegie Mellon University Technical Report CMU-CS-91-132, Apr. 1991. |
Trimeche M., et al., Multichannel Image Deblurring of Raw Color Components, Computational Imaging III., Edited by Bouman, Charles A., Miller, Eric L., Proceedings of the SPIE, vol. 5674, pp. 169-178, 2005. |
Lu Yuan, et al., Image Deblurring with Blurred/Noisy Image Pairs, SIGGRAPH07, Aug. 5-9, 2007. |
Barbara Zitova, et al., Image registration methods: a survey, Image and Vision Computing, 2003, pp. 977-1000, vol. 21. |
PCT International Preliminary Report on Patentability, Chapter I, for PCT Application No. PCT/EP2009/008674, dated Jun. 14, 2011, 7 pages. |
PCT International Preliminary Report on Patentability, Chapter I, for PCT Application No. PCT/US2007/069638, dated Dec. 10, 2008, 5 pages. |
PCT International Preliminary Report on Patentability, Chapter I, for PCT Application No. PCT/EP2008/004729, dated Dec. 17, 2009, 10 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2010/056999, dated Sep. 1, 2010, 10 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2009/008674, dated Mar. 29, 2010, 10 pages. |
PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2008/004729, dated Oct. 29, 2008, 14 pages. |
PCT Written Opinion of the International Searching Authority, for PCT Application No. PCT/EP2008/004729, dated Dec. 17, 2009, 9 pages. |
Final Office Action mailed Nov. 17, 2011 for U.S. Appl. No. 12/901,577, filed Oct. 11, 2010. |
Non-Final Office Action mailed Dec. 7, 2011 for U.S. Appl. No. 12/789,300, filed May 27, 2010. |
Non-Final Office Action mailed Dec. 8, 2011 for U.S. Appl. No. 12/820,086, filed Jun. 21, 2010. |
Non-Final Office Action mailed Dec. 8, 2011 for U.S. Appl. No. 12/820,034, filed Jun. 21, 2010. |
Non-Final Office Action mailed Nov. 21, 2011 for U.S. Appl. No. 12/956,904, filed Nov. 30, 2010. |
Notice of Allowance mailed Nov. 25, 2011 for U.S. Appl. No. 12/485,316, filed Jun. 16, 2009. |
Number | Date | Country
---|---|---|
20090179999 A1 | Jul 2009 | US |
Number | Date | Country
---|---|---|
61023774 | Jan 2008 | US |
 | Number | Date | Country
---|---|---|---|
Parent | 11856721 | Sep 2007 | US |
Child | 12336416 | | US
Parent | 12116140 | May 2008 | US |
Child | 11856721 | | US