Method and apparatus for correcting blur in all or part of an image

Information

  • Patent Grant
  • Patent Number
    11,924,551
  • Date Filed
    Monday, April 10, 2023
  • Date Issued
    Tuesday, March 5, 2024
Abstract
A method, apparatus, and processor for capturing digital images. The method comprises: displaying a preview of a scene to be captured in a user interface of the imaging device; capturing a plurality of images using a lens having one or more lens elements, at least one of which is moveable; and moving the moveable lens element electro-mechanically to counter an effect of motion. The method further includes processing the plurality of images, by receiving and executing instructions stored in a memory of the imaging device, to obtain a corrected image such that the corrected image includes a first and a second subject, the first subject in the corrected image is blur free, and the second subject in the corrected image is blurred compared to the first subject; storing the corrected image in the memory; and displaying the corrected image in the user interface.
Description
FIELD OF INVENTION

The present invention generally relates to digital image processing. More specifically, this invention relates to processing of digitized image data in order to correct for image distortion caused by relative motion between the imaging device and the subject at the time of image capture, or by optical distortion from other sources.


BACKGROUND

When capturing images, as with a camera, it is desirable to capture images without unwanted distortion. In general, sources of unwanted distortion can be characterized as equipment errors and user errors. Examples of common equipment errors include inadequate or flawed optical equipment, and undesirable characteristics of the film or other recording media. Using equipment and media of a quality that is suitable for a particular photograph can help mitigate the problems associated with the equipment and the recording medium, but in spite of this, image distortion due to equipment errors can still appear.


Another source of image distortion is user error. Examples of common user errors include poor image processing, and relative motion between the imaging device and the subject of the image. For example, one common problem that significantly degrades the quality of a photograph is the blur that results from camera movement (i.e. shaking) at the time the photograph is taken. This can be difficult to avoid, especially when a slow shutter speed is used, such as in low light conditions, or when a large depth of field is needed and the lens aperture is small. Similarly, if the subject being photographed is moving, use of a slow shutter speed can also result in image blur.


There are currently many image processing techniques that are used to improve the quality, or “correctness,” of a photograph. These techniques are applied to the image either at the time it is captured by a camera, or later when it is post-processed. This is true both for traditional “hardcopy” photographs that are chemically recorded on film, and for digital photographs that are captured as digital data, for example using a charge-coupled device (CCD) or a CMOS sensor. Also, hardcopy photographs can be scanned and converted into digital data, and are thereby able to benefit from the same digital signal processing techniques as digital photographs.


Commonly used post-processing techniques for digitally correcting blurred images typically seek to increase the sharpness or contrast of the image. This can give the mistaken impression that the blur is remedied. However, in reality, this process causes loss of data from the original image, and also alters the nature of the photograph. Thus, current techniques for increasing the sharpness of an image do not really “correct” the blur that results from relative motion between a camera and a subject being photographed. In fact, the data loss from increasing the sharpness can result in a less accurate image than the original. Therefore, a different method that actually corrects the blur is desirable.


In the prior art, electro-mechanical devices for correcting image blur due to camera motion are built into some high quality lenses, variously called “image stabilization”, “vibration reduction”, or similar names by camera/lens manufacturers. These devices seek to compensate for the camera/lens movement by moving one or more of the lens elements; hence countering the effect of the motion. Adding such a device to a lens typically makes the lens much more expensive, heavier and less sturdy, and can also compromise image quality.


Accordingly, it is desirable to have a technique that corrects for distortion in photographs without adding excessively to the price or weight of a camera or other imaging device, compromising its robustness, or adversely affecting image quality.


SUMMARY

The present invention processes image data in order to correct an image for distortion caused by imager movement or by movement of the subject being imaged. In another embodiment, the present invention can prevent image distortion due to motion of the imaging device or subject at relatively slow shutter speeds, resulting in a substantially undistorted image.


In another embodiment, the present invention measures relative motion between the imaging device and the subject by using sensors that detect the motion. When an image is initially captured, the effect of relative motion between the imaging device and the subject is that it transforms the “true image” into a blurred image, according to a 2-dimensional transfer function defined by the motion. The invention determines a transfer function that represents the motion and corrects the blur.


In yet another embodiment, the transfer function is estimated using blind detection techniques. The transfer function is then inverted, and the inverted function is implemented in an image correcting filter that essentially reverses the blurring effect of the motion on the image. The image is processed through the filter, wherein blur due to the motion is reversed, and the true image is recovered.


In yet another embodiment, the invention uses the transfer function to combine consecutive images taken at a fast shutter speed to avoid blur due to motion between camera and subject that could result from using a slow shutter speed. In still another embodiment, the image sensor is moved to counter camera motion while the image is being captured.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a portion of memory having memory locations wherein elements of a recorded image are stored.



FIG. 2 is a portion of memory having memory locations wherein elements of a deconvolution filter are stored.



FIG. 3 is a portion of memory having memory locations wherein the recorded image is stored for calculating the next value of a corrected image.



FIG. 4 is a functional block diagram of a system for correcting an image for distortion using a transfer function representing the distortion, wherein the transfer function is derived from measurements of the motion that caused the distortion.



FIG. 5 is a functional block diagram of a system for correcting an image for distortion using a transfer function representing the distortion, wherein the transfer function is derived using blind estimation techniques.



FIG. 6 shows a unit for iterative calculation of the corrective filter coefficients and estimation of the correct image data.



FIG. 7 illustrates support regions of an image r(n,m) and of a transfer function h(n,m), and the transfer function h(n,m) being applied to different parts of the image r(n,m).



FIG. 8 shows a unit for blind deconvolution to calculate the correct image data.



FIG. 9 is an image of an object being captured on an image sensor wherein pixel values represent points of the image.



FIG. 10 illustrates the effect of moving an imager while capturing an image, resulting in multiple copies of the image being recorded over each other, causing blur.



FIG. 11 illustrates combining images taken at fast shutter speeds to result in the equivalent of a final image taken at a slower shutter speed, but with reduced blur.



FIG. 12 illustrates image blur correction where an image sensor is moved to compensate for imager movement.



FIG. 13 is an example of an image distorted by movement of the imager when the image was captured.



FIG. 14 represents the image of FIG. 13 corrected according to the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will be described with reference to the figures wherein like numerals represent like elements throughout. Although the invention is explained hereinafter as a method of correcting for image distortion due to the shaking of a camera when a picture is taken, similar distortions can also be caused by other types of imaging equipment and by imperfections in photo processing equipment, movement of the subject being photographed, and other sources. The present invention can be applied to correct for these types of distortions as well. Additionally, although reference is made throughout the specification to a camera as the exemplary imaging device, the present invention is not limited to such a device. As aforementioned, the teachings of the present invention may be applied to any type of imaging device, as well as image post-processing techniques.


Capturing and recording a photograph, for example by a camera, involves gathering the light reflected or emanating from a subject, passing it through an optical system, such as a series of lenses, and directing it onto a light sensitive recording medium. A typical recording medium in traditional analog photography is a film that is coated with light sensitive material. During processing of the exposed film, the image is fixed and recorded. In digital cameras, the recording medium is typically a dense arrangement of light sensors, such as a Charge-Coupled Device (CCD) or a CMOS sensor.


The recording medium continuously captures the impression of the light that falls upon it as long as the camera shutter is open. Therefore, if the camera and the subject are moving with respect to each other (such as in the case when the user is unsteady and is shaking the camera, or when the subject is moving), the recorded image becomes blurred. To reduce this effect, a fast shutter speed can be used, thereby reducing the amount of motion occurring while the shutter is open. However, this reduces the amount of light from the subject captured on the recording medium, which can adversely affect image quality. In addition, increasing the shutter speed beyond a certain point is not always practical. Therefore, undesired motion blur occurs in many pictures taken by both amateur and professional photographers.


The nature of the blur is that the light reflected from a reference point on the subject does not fall on a single point on the recording medium, but rather it ‘travels’ across the recording medium. Thus a spread-out, or smudged, representation of the reference point is recorded.


Generally, all points of the subject move together, and the optics of the camera and the recording medium also move together. Consider, for example, a photograph of a moving car, in which the image of the car is blurred due to the uniform motion of all parts of the car. In other words, the image falling on the recording medium ‘travels’ uniformly across the recording medium, and all points of the subject blur in the same manner.


The nature of the blur resulting from uniform relative motion can be expressed mathematically. In a 2-dimensional space with discrete coordinate indices ‘n’ and ‘m’, the undistorted image of the subject can be represented by s(n,m), and a transfer function h(n,m) can be used to represent the blur. Note that h(n,m) describes the way the image ‘travels’ on the recording medium while it is captured. The resulting image that is recorded, r(n,m), is given by:

r(n,m)=s(n,m)**h(n,m);  Equation (1)

where ** represents 2-dimensional convolution. The mathematical operation of convolution is well known to those skilled in the art and describes the operation:










r(n,m) = Σ_{i=−∞}^{∞} Σ_{j=−∞}^{∞} h(i,j) s(n−i,m−j).  Equation (2)








In the sum operations in Equation (2), the summation limits are infinite. In practice, the summations are not infinite, since the support region of the transfer function is finite. In other words, the region where the function is non-zero is limited by the time the camera shutter is open and the amount of motion. Therefore, the summation is calculated for only the indices of the transfer function where the function itself is non-zero, for example, from i=−N . . . N and j=−M . . . M.
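By way of illustration only, the finite-support summation of Equation (2) can be sketched in a few lines of Python (NumPy assumed); the explicit loops below simply mirror the summation and are not a required implementation:

```python
import numpy as np

def blur(s, h):
    """Recorded image of Equation (2):
    r(n,m) = sum_{i=-N..N} sum_{j=-M..M} h(i,j) * s(n-i, m-j).

    s : 2-D array holding the true image s(n,m).
    h : transfer function over its finite support, stored as a (2N+1) x (2M+1)
        array with h[i+N, j+M] holding h(i,j).
    """
    N, M = h.shape[0] // 2, h.shape[1] // 2
    padded = np.pad(s, ((N, N), (M, M)))      # zeros outside the image borders
    r = np.zeros(s.shape, dtype=float)
    for n in range(s.shape[0]):
        for m in range(s.shape[1]):
            acc = 0.0
            for i in range(-N, N + 1):
                for j in range(-M, M + 1):
                    acc += h[i + N, j + M] * padded[n - i + N, m - j + M]
            r[n, m] = acc
    return r

# Example transfer function: a purely horizontal 'travel' of 5 pixels during exposure.
h_example = np.full((1, 5), 1.0 / 5.0)
```

In practice a library convolution routine would be used instead of the explicit loops, which are shown only to make the correspondence with Equation (2) visible.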


If the transfer function h(n,m) is known, or its estimate is available, the blur that it represents can be “undone” or compensated for in a processor or in a computer program, and a corrected image can be obtained, as follows. Represent the “reverse” of the transfer function h(n,m) as h⁻¹(n,m) such that:

h(n,m)**h⁻¹(n,m)=δ(n,m);  Equation (3)

where δ(n,m) is the 2-dimensional Dirac delta function, which is:










δ(n,m) = { 1 if n = m = 0; 0 otherwise }.  Equation (4)








The delta function has the property that when convolved with another function, it does not change the nature of that function. Therefore, once h(n,m) and hence h⁻¹(n,m) are known, an image r(n,m) can be put through a correcting filter, called a “deconvolution filter”, which implements the inverse transfer function w(n,m)=h⁻¹(n,m) and undoes the effect of blur. Then:
















r(n,m)**w(n,m) = r(n,m)**h⁻¹(n,m)
               = s(n,m)**h(n,m)**h⁻¹(n,m)
               = s(n,m)**δ(n,m)
               = s(n,m);  Equation (5)









and the correct image data s(n,m) is recovered.


The deconvolution filter in this example is such that:













Σ_{i=−N}^{N} Σ_{j=−M}^{M} w(i,j) h(n−i,m−j) = { 1 if n = m = 0; 0 otherwise }.  Equation (6)









Because of the property that the deconvolution operation forces the output of the convolution to be zero for all but one index, this method is called the “zero-forcing algorithm”. The zero-forcing algorithm is only one possible method; others, such as the least mean-square algorithm described in more detail below, can also be used.
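As an illustrative sketch of the zero-forcing idea, a finite-support filter w can be fitted to the condition of Equation (6) in the least-squares sense (an exact solution generally does not exist for a finite filter); the helper below and its parameters are assumptions made for illustration, not part of the described apparatus:

```python
import numpy as np

def zero_forcing_filter(h, size):
    """Fit a finite-support deconvolution filter w to the condition of Equation (6)
    in the least-squares sense: the convolution of w with h should be as close as
    possible to the 2-dimensional delta function.

    h    : transfer function as a 2-D array.
    size : (rows, cols) support to allow for w.
    """
    A, B = size
    P, Q = h.shape
    out = (A + P - 1, B + Q - 1)                     # support of the full convolution
    # Each column of C is the full convolution produced by a single unit tap of w.
    C = np.zeros((out[0] * out[1], A * B))
    for a in range(A):
        for b in range(B):
            canvas = np.zeros(out)
            canvas[a:a + P, b:b + Q] = h
            C[:, a * B + b] = canvas.ravel()
    # Target: delta function at the centre of the convolution support.
    d = np.zeros(out)
    d[out[0] // 2, out[1] // 2] = 1.0
    w, *_ = np.linalg.lstsq(C, d.ravel(), rcond=None)
    return w.reshape(A, B)
```

Convolving the recorded image with the filter returned here, as in Equation (7), then yields the corrected estimate; placing the delta at the centre of the support keeps the corrected image aligned with the recorded one.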


In order to define a deconvolution filter, the transfer function h(n,m) representing the relative motion between the imager and the subject must be derived from measuring the motion, or alternatively by using blind estimation techniques. The inverse function h⁻¹(n,m) must then be calculated and incorporated in a filter to recover a corrected image s(n,m). It is possible to determine h(n,m) using sensors that detect motion, and record it at the time the image is captured.


One embodiment of the present invention includes one or more motion sensors, attached to or included within the imager body, the lens, or otherwise configured to sense any motion of the imager while an image is being captured, and to record this information. Such sensors are currently commercially available which are able to capture movement in a single dimension, and progress is being made to improve their accuracy, cost, and characteristics. To capture motion in two dimensions, two sensors may be used, each capable of detecting motion in a single direction. Alternatively, a sensor able to detect motion in more than one dimension can be used.


The convolution in Equation (5) can be performed using memory elements, by performing an element-by-element multiplication and summation over the support region of the transfer function. The recorded image is stored, at least temporarily, in memory elements forming a matrix of values such as shown in FIG. 1. Similarly, the deconvolution filter w(n,m) is stored in another memory location as shown in FIG. 2. The deconvolution operation is then performed by multiplying the values in the appropriate memory locations on an element-by-element basis, such as multiplying r(n,m) and w(0,0); r(n−1,m) and w(1,0), and so on, and summing them all up.


Element-by-element multiplication and summing results in the convolution:










y(n,m) = Σ_{i=−N}^{N} Σ_{j=−M}^{M} w(i,j) r(n−i,m−j).  Equation (7)









To calculate the next element, y(n+1,m) for example, the deconvolution filter w(n,m) multiplies the shifted memory locations, such as shown in FIG. 3, followed by the summation. Note that the memory locations do not need to be shifted in practice; rather, the pointers indicating the memory locations would move. In FIG. 1 and FIG. 3, portions of r(n,m) are shown that would be included in the element-by-element multiplication and summation, and this portion is the same size as w(n,m). However, it should be understood that r(n,m), that is the whole image, is typically much larger than the support region of w(n,m). To determine the value of the convolution at different points, the appropriate portion of r(n,m) would be included in the calculations.
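A minimal sketch of this element-by-element multiply-and-sum, with the “shifted memory locations” realised as a sliding window over the stored image (Python/NumPy assumed, for illustration only):

```python
import numpy as np

def deconvolve(r, w):
    """Element-by-element multiply-and-sum of Equation (7):
    y(n,m) = sum_{i=-N..N} sum_{j=-M..M} w(i,j) * r(n-i, m-j),
    computed by sliding a window over the stored image instead of moving the data."""
    N, M = w.shape[0] // 2, w.shape[1] // 2
    padded = np.pad(r, ((N, N), (M, M)))
    # Flipping w lets a plain windowed dot product implement the convolution sum.
    w_flipped = w[::-1, ::-1]
    y = np.zeros(r.shape, dtype=float)
    for n in range(r.shape[0]):
        for m in range(r.shape[1]):
            window = padded[n:n + 2 * N + 1, m:m + 2 * M + 1]  # the "shifted" locations
            y[n, m] = np.sum(window * w_flipped)
    return y
```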


The filter defined by Equation (5) is ideal in the sense that it reconstructs the corrected image from the blurred image with no data loss. A first embodiment calculates the inverse of h(n,m) where h(n,m) is known. As explained above, by making use of motion detecting devices, such as accelerometers, the motion of the imager (such as a camera and/or the associated lens) can be recorded while the picture is being captured, and the motion defines the transfer function describing this motion.


A functional block diagram of this embodiment in accordance with the present invention is illustrated in FIG. 4, wherein a method 40 for correcting image distortion is shown. An image r(n,m) from camera optics is captured by an imager (step 41) and recorded in memory (step 42). Simultaneously, motion sensors detect and record camera motion (step 43) that occurs while the shutter of the camera is open. The transfer function representing the motion h(n,m) is derived (step 44), and the inverse transfer function h⁻¹(n,m) is determined (step 46). The inverse transfer function is applied in a corrective filter (step 48) to the image, which outputs a corrected image s(n,m) (step 49).


In this and other embodiments that make use of motion sensors to represent the imager's movement, derivation of the transfer function from motion information (step 44) also takes into account the configuration of the imager and the lens. For an imager that is a digital camera, for example, the focal length of the lens factors into the way the motion of the imager affects the final image. Therefore the configuration of the imager is part of the derivation of h(n,m). This is especially important for imagers with varying configurations, such as digital cameras with interchangeable lenses.
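Purely as an illustrative sketch of how such a derivation might look for a digital camera with gyroscopic motion sensors, the fragment below integrates assumed angular-rate samples into an image-plane trajectory using the focal length and pixel pitch, and accumulates that trajectory into h(n,m); all names and parameters are hypothetical:

```python
import numpy as np

def psf_from_motion(omega_x, omega_y, dt, focal_length_mm, pixel_pitch_mm, size=31):
    """Build h(n,m) from angular-rate samples recorded while the shutter was open.

    omega_x, omega_y : arrays of angular rates (rad/s) about the two axes
                       (hypothetical gyro output); dt is the sample interval (s).
    The angles are projected to the image plane through the focal length, and h
    accumulates the fraction of the exposure spent at each pixel offset."""
    theta_x = np.cumsum(omega_x) * dt                  # integrate rate to angle
    theta_y = np.cumsum(omega_y) * dt
    px = focal_length_mm * theta_x / pixel_pitch_mm    # trajectory in pixels (small angles)
    py = focal_length_mm * theta_y / pixel_pitch_mm
    h = np.zeros((size, size))
    c = size // 2
    for x, y in zip(px, py):                           # assumes the trajectory stays in the window
        n, m = int(round(c + y)), int(round(c + x))
        if 0 <= n < size and 0 <= m < size:
            h[n, m] += 1.0
    return h / h.sum()                                 # each sample represents equal exposure time
```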


In this first embodiment of the invention, an iterative procedure is used to compute the inverse transfer function from h(n,m). The approximate inverse transfer function at iteration k is denoted as ĥk⁻¹(n,m). At this iteration, the output of the deconvolution filter is:















yk(n,m) = ĥk⁻¹(n,m)**r(n,m) = Σ_i Σ_j ĥk⁻¹(i,j) r(n−i,m−j).  Equation (8)








The filter output can be written as the sum of the ideal term and the estimation noise as:










yk(n,m) = h⁻¹(n,m)**r(n,m) + (ĥk⁻¹(n,m) − h⁻¹(n,m))**r(n,m) = s(n,m) + vk(n,m);  Equation (9)








where vk(n,m) is the estimation noise, which is desirable to eliminate. An initial estimate of the correct image can be written as:

ŝk(n,m)=ĥk⁻¹(n,m)**r(n,m).  Equation (10)


However, this estimate can in general be iteratively improved. There are a number of currently known techniques described in estimation theory to achieve this. A preferable option is the Least Mean-Square (LMS) algorithm. A block diagram of a calculation unit 60 which implements this method is shown in FIG. 6.


As an initial state, ĥ0⁻¹(n,m) is set equal to μr(n,m). Then, the following steps are iteratively repeated:


Step 1, an estimate of the correct image is calculated in a first 2-dimensional finite impulse response (2D FIR) filter 62:

ŝk(n,m)=ĥk⁻¹(n,m)**r(n,m).


Step 2, a received signal based on the estimated correct image is calculated in a second 2D FIR filter 64:

r̃k(n,m)=ŝk(n,m)**h(n,m);

and the estimation error is calculated using an adder 66:

ek(n,m)=rk(n,m)−r̃k(n,m).


Step 3, the inverse transfer function coefficients are then updated in the LMS algorithm unit 68:

ĥk+1⁻¹(n,m)=ĥk⁻¹(n,m)+μr(n,m)ek(n,m);

where μ is the step-size parameter.


These steps are repeated until the estimation error becomes small enough to be acceptable; this value can be predetermined or may be set by a user. As the iterative algorithm converges, the estimated inverse transfer function approaches the correct inverse transfer function h⁻¹(n,m). The inverse transfer function coefficients are the coefficients of the deconvolution filter, and the estimate ŝ(n,m) converges to s(n,m), the correct image, at the same time.
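A compact sketch of Steps 1-3, transliterated directly into Python with SciPy's 2-D convolution (the step size, iteration count, and stopping rule are illustrative and would need tuning in practice; convergence depends on the choice of μ):

```python
import numpy as np
from scipy.signal import convolve2d

def lms_inverse_filter(r, h, mu=1e-3, iterations=500):
    """Direct transliteration of Steps 1-3 for a known transfer function h.

    r : recorded (blurred) image data, here restricted to a support region of h
        as suggested in the text.
    h : transfer function describing the motion.
    Returns the estimated inverse-filter coefficients and the corrected estimate."""
    h_inv = mu * r                                   # initial state
    for _ in range(iterations):
        s_est = convolve2d(h_inv, r, mode="same")    # Step 1: current image estimate
        r_tilde = convolve2d(s_est, h, mode="same")  # Step 2: re-blur the estimate
        e = r - r_tilde                              #         estimation error
        h_inv = h_inv + mu * r * e                   # Step 3: LMS coefficient update
    return h_inv, convolve2d(h_inv, r, mode="same")
```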


This process can be repeated for the entire image, but it is less complex, and therefore preferable, to find the inverse filter first over a single transfer function support region, then apply it to the entire image r(n,m).


While the above Steps 1-3 are being repeated, a different portion of the recorded image r(n,m) can be used in each iteration. As in FIG. 7, it should be noted that the recorded image r(n,m) typically has a much larger support region than the transfer function h(n,m) that represents the camera motion. Therefore, the above steps are preferably performed over a support region of h(n,m), and not over the entire image r(n,m), for each iteration.


Although the present invention has been explained with reference to the LMS algorithm, this is by way of example and not by way of limitation. It should be clear to those skilled in the art that there are other iterative algorithms beside the LMS algorithm that can be used to achieve acceptable results, and also that there are equivalent frequency domain derivations of these algorithms. For example, it is possible to write Equation (1) in frequency domain as:

R(ω₁,ω₂)=S(ω₁,ω₂)H(ω₁,ω₂);  Equation (11)

where R(ω₁,ω₂), S(ω₁,ω₂), and H(ω₁,ω₂) are the frequency domain representations (Fourier Transforms) of the captured image, the correct image, and the transfer function, respectively, and therefore:










S(ω₁,ω₂)=R(ω₁,ω₂)/H(ω₁,ω₂).  Equation (12)








To obtain s(n,m) one would calculate S(ω₁,ω₂) as above and take the Inverse Fourier Transform, which should be known to those skilled in the art. However, this method does not always lead to well behaved solutions, especially when numerical precision is limited.
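For illustration, a frequency-domain version can be sketched as follows; a small regularising constant is added to the division, a slight departure from the plain division of Equation (12), to keep the result well behaved when H(ω₁,ω₂) is close to zero:

```python
import numpy as np

def frequency_domain_deconvolve(r, h, eps=1e-3):
    """Frequency-domain correction in the spirit of Equations (11)-(12)."""
    H = np.fft.fft2(h, s=r.shape)                 # zero-pad h to the image size
    R = np.fft.fft2(r)
    S = R * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularised division by H
    return np.real(np.fft.ifft2(S))
```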


In a second embodiment of the present invention, h(n,m) is not known. This second embodiment uses so-called blind deconvolution, whereby the transfer function h(n,m) is estimated using signal processing techniques. A functional block diagram of this embodiment is illustrated in FIG. 5, wherein a method 50 for correcting image distortion according to this embodiment is shown. An image r(n,m) from the camera optics is captured (step 51) and recorded in memory (step 52). Unlike the first embodiment, there are no motion sensors to detect and record camera motion that occurs while the shutter of the camera is open. Instead, the transfer function representing the motion h(n,m) is derived using blind estimation techniques (step 54), and the inverse transfer function h⁻¹(n,m) is determined (step 56). The inverse transfer function is applied in a corrective filter to the image (step 58), which outputs a corrected image s(n,m) (step 59).


Blind equalization techniques are used to obtain the deconvolution filter coefficients. This is also an iterative LMS algorithm, similar to that used in the first embodiment. In this second embodiment, an iterative procedure is also used to compute an approximate deconvolution filter, and the approximation is improved at each iteration until it substantially converges to the ideal solution. As aforementioned with respect to the first embodiment, the level of convergence may be predetermined or may be set by a user. The approximate deconvolution filter is denoted at iteration k as ŵk(n,m). At this iteration, the output of the deconvolution filter is:














yk(n,m) = ŵk(n,m)**r(n,m) = Σ_i Σ_j ŵk(i,j) r(n−i,m−j);  Equation (13)








The filter output can be written as the sum of the ideal term and the estimation noise as:












yk(n,m) = w(n,m)**r(n,m) + [ŵk(n,m) − w(n,m)]**r(n,m) = s(n,m) + vk(n,m);  Equation (14)








where v(n,m) is the estimation noise, which is desirable to eliminate. An initial estimate of the correct image can be written as:

ŝk(n,m)=ŵk(n,m)**r(n,m).  Equation (15)


However, this estimate can be iteratively improved. There are a number of currently known techniques described in estimation theory to achieve this. A preferable option is the LMS algorithm. A block diagram of a calculation unit 80 which implements this method is shown in FIG. 8.


As an initial state, ĥ0⁻¹(n,m) is set equal to μr(n,m). Then, the following steps are iteratively repeated:


Step 1, an estimate of the correct image is calculated in a first 2D FIR filter 82:

ŝk(n,m)=ĥk⁻¹(n,m)**r(n,m)


Step 2, a received signal based on the estimated correct image is calculated in a non-linear estimator 84:

r̃k(n,m)=g(ŝk(n,m));

and the estimation error is calculated using an adder 86:

ek(n,m)=rk(n,m)−r̃k(n,m).


Step 3, the inverse transfer function coefficients are then updated in the LMS algorithm unit 88:

ĥk+1⁻¹(n,m)=ĥk⁻¹(n,m)+μr(n,m)ek(n,m),

where μ is the step-size parameter.


The function g(.) calculated in step 2 is a non-linear function chosen to yield a Bayes estimate of the image data. Since this function is not central to the present invention and is well known to those of skill in the art, it will not be described in detail hereinafter.
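A rough sketch of this blind variant is shown below; the non-linear estimator g is only a placeholder (a simple clip to the valid intensity range is used in the commented example), since the actual Bayes estimator is not specified here:

```python
import numpy as np
from scipy.signal import convolve2d

def blind_lms_deconvolve(r, g, mu=1e-3, iterations=500):
    """Blind variant of the iteration: Step 2 replaces the re-blurred reference with
    a non-linear estimate g(s_k) of the image data."""
    w = mu * r                                    # initial deconvolution filter
    for _ in range(iterations):
        s_est = convolve2d(w, r, mode="same")     # Step 1: current image estimate
        r_tilde = g(s_est)                        # Step 2: non-linear estimator
        e = r - r_tilde                           #         estimation error
        w = w + mu * r * e                        # Step 3: LMS coefficient update
    return convolve2d(w, r, mode="same")

# Example with a placeholder non-linearity (clip to the valid intensity range),
# assuming r holds the recorded image normalised to [0, 1]:
#   corrected = blind_lms_deconvolve(r, g=lambda s: np.clip(s, 0.0, 1.0))
```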


There are known blind detection algorithms for calculating s(n,m) by looking at higher order statistics of the image data r(n,m). A group of algorithms in this category are called Bussgang algorithms. There are also variations called Sato algorithms and Godard algorithms. Another class of blind estimation algorithms uses spectral properties (polyspectra) of the image data to deduce information about h(n,m). Any appropriate blind estimation algorithm can be used to determine h(n,m), and to construct a correcting filter.


The first two embodiments of the present invention described hereinbefore correct blur in an image based on determining a transfer function that represents the motion of an imager while an image is being captured, and then correcting for the blur by making use of the “inverse” transfer function. One method determines the transfer function at the time the photograph is being captured by using devices that can detect camera motion directly. The other method generates a transfer function after the image is captured by using blind estimation techniques. Both methods then post-process the digital image to correct for blur. In both cases, the captured image is originally blurred by motion, and the blur is then removed.


In accordance with a third embodiment of the present invention, the blurring of an image is prevented as it is being captured, as described below. When an imager is moved while an image is being captured, multiple copies of the same image are, in effect, recorded over each other. For example, when an image is captured digitally it is represented as pixel values in the sensor points of the image sensor. This is pictorially represented in FIG. 9, in which the imager (for example, a camera and its associated lens) is not shown in order to simplify the depiction.


If the imager is shaken or moved while the image is being captured, the situation is equivalent to copies of the same image being captured multiple times in an overlapping fashion with an offset. The result is a blurred image. This is particularly true if the shutter speed is relatively slow compared to the motion of the camera. This is graphically illustrated in FIG. 10.


When the shutter speed is sufficiently fast compared to the motion of the imager, blur does not occur or is very limited because the displacement of the imager is not large enough to cause the light reflected from a point on the image to fall onto more than one point on the image sensor. This third embodiment of the invention takes advantage of the ability of an imager to record multiple images using fast shutter speeds. When an image is being captured using a setting of a relatively slow shutter speed, the imager actually operates at a higher shutter speed (for instance at the fastest shutter speed at which the imager is designed to operate), and captures multiple images “back to back.” For example, if the photograph is being taken with a shutter speed setting of 1/125 sec and the fastest shutter speed of the camera is 1/1000 sec, the camera actually captures 8 consecutive images, each taken with a shutter speed setting of 1/1000 sec. Then, the camera combines the images into a single image by aligning them such that each pixel corresponding to the same image point in each image is combined pixel-by-pixel into one pixel value by adding pixel values, averaging them, or using any other appropriate operation to combine them. The multiple images can all be stored and aligned once all of them are captured, or alternatively, each image can be aligned and combined with the first image in “real time” without the need to store all images individually. The blur of the resulting image is substantially reduced, as depicted in FIG. 11.


The quality of an image can be measured in terms of signal-to-noise power ratio (SNR). When a fast shutter speed is used, the SNR of the image is degraded because the image sensor operates less effectively when the amount of light falling on it is reduced. However, since multiple images are being added, this degradation is overcome. Indeed, an SNR improvement can be expected using this embodiment, because the image data is being added coherently while the noise is being added non-coherently. This phenomenon is the basis for such concepts as maximal ratio combining (MRC).
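For instance, if K frames are combined, the image data adds coherently so the signal amplitude grows by a factor of K, while independent sensor noise adds non-coherently so its standard deviation grows only by about √K; the amplitude SNR of the combined image therefore improves by roughly √K over a single short-exposure frame (assuming the noise in the different frames is independent).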


To determine how to align the pixel values, a device that can detect motion, such as an accelerometer or other motion sensor, is attached to or incorporated within the imager, and it records the motion of the imager while the photograph is being taken. The detected motion indicates how much the imager moved while each of the series of images was captured, each image having been captured back-to-back with a high shutter speed as explained in the example above. The imager moves each of the images in the series by an amount which is preferably measured in pixels, in the direction opposite the motion of the imager that occurred during the interval between the capture of the first image and each respective image in the series. Thus, the shift of each image is compensated for, and the correct pixels are aligned in each of the images. This is illustrated in FIG. 11. The combined image will not be blurred since there is no spilling of image points into more than one pixel in the combined final image.
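An illustrative sketch of this alignment and combination step, assuming the motion-sensor output has already been converted into per-frame pixel offsets:

```python
import numpy as np

def combine_frames(frames, offsets):
    """Shift each fast-shutter frame opposite to the measured imager motion and
    combine the aligned frames pixel-by-pixel (here by averaging).

    frames  : list of 2-D arrays captured back to back.
    offsets : list of (dy, dx) pixel displacements of the imager relative to the
              first frame, as derived from the motion sensors."""
    combined = np.zeros(frames[0].shape, dtype=float)
    for frame, (dy, dx) in zip(frames, offsets):
        # np.roll wraps at the borders; a real implementation would crop or pad instead.
        aligned = np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
        combined += aligned
    return combined / len(frames)
```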


As an alternative to the third embodiment, the reference point for aligning the higher speed images is not the imager location, but the subject itself. In other words, higher shutter speed images can be aligned and combined such that a designated subject in a field of view is clear and sharp whereas other parts of the image may be blurred. For example, a moving subject such as a car in motion can be the designated subject. If high shutter speed images are combined such that the points of the image of the moving car are aligned, the image of the car will be clear and sharp, while the background is blurred. As a way to align a designated subject, such as the car in this example, pattern recognition and segmentation algorithms that are well known to those skilled in the art and described in the current literature may be used, as sketched below. Alternatively, a tracking signal that is transmitted from the subject can be used to convey its position. Alternatively, the user can indicate, such as by an indicator in a viewfinder, which object in the field of view is the designated subject to be kept blur-free.
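As an illustrative sketch of one way the designated subject could be located from frame to frame, the fragment below uses a zero-mean cross-correlation over a small search window; the patch location and search radius are hypothetical parameters:

```python
import numpy as np
from scipy.signal import correlate2d

def subject_offset(reference_patch, frame, top_left, search=10):
    """Estimate how far a designated subject moved between frames by zero-mean
    cross-correlation of a reference patch over a small search window.

    top_left : (row, col) of the patch in the reference frame; search is the
               search radius in pixels (both illustrative parameters)."""
    ph, pw = reference_patch.shape
    y0, x0 = top_left
    y1, x1 = max(0, y0 - search), max(0, x0 - search)
    region = frame[y1:y0 + ph + search, x1:x0 + pw + search]
    corr = correlate2d(region - region.mean(),
                       reference_patch - reference_patch.mean(), mode="valid")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offset of the subject relative to its position in the reference frame.
    return dy - (y0 - y1), dx - (x0 - x1)
```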


A fourth embodiment of the invention compensates for movement of the imager or the subject by adjusting the position of the image sensor during image capture, according to the inverse of the transfer function describing the imager or subject motion, or both. This embodiment is illustrated in FIG. 12. This embodiment is preferably used in digital cameras wherein the image sensor 108 is a relatively small component and can be moved independently of the camera, but can also be used with film. Accordingly, this embodiment makes use of motion sensors, and detects the movement of the camera and/or the subject while the image is being captured. The signals from the motion sensors are used to control devices that adjust the position of the image sensor. In FIG. 12, horizontal motion sensor 102 and vertical motion sensor 104 measure movement of the camera while its shutter (not shown) is open and an image is being captured. The motion information is conveyed to a controller 106, which determines and sends signals to devices 110a, 110b, 110c, and 110d, which adjust the position of the image sensor 108. The control mechanism is such that the devices 110a-d, for example electromagnets or servos, move the image sensor 108 in the opposite direction of the camera motion to prevent motion blur. Additional sensors (not shown) can be used to detect motion of the subject, and the control mechanism configured to correct for that motion as well.
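A purely illustrative control-loop sketch follows; read_motion, move_sensor, and shutter_open are hypothetical callables standing in for the motion sensors 102/104, the actuators 110a-110d, and the shutter state, not a real device API:

```python
import time

def stabilise_sensor(read_motion, move_sensor, shutter_open, gain_px_per_rad, dt=0.001):
    """Illustrative stabilisation loop for the fourth embodiment.

    gain_px_per_rad converts camera rotation into the sensor travel (in pixels)
    needed to keep the image stationary on the sensor."""
    x = y = 0.0                                  # accumulated sensor displacement (pixels)
    while shutter_open():
        wx, wy = read_motion()                   # angular rates about the two axes (rad/s)
        x -= gain_px_per_rad * wx * dt           # move opposite to the camera motion
        y -= gain_px_per_rad * wy * dt
        move_sensor(x, y)
        time.sleep(dt)                           # assumed 1 kHz control loop
```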



FIG. 13 shows an example of a photographic image that is blurred due to user movement of the imager while taking the picture. FIG. 14 shows the same image, corrected according to the present invention. The invention substantially recovers the correct image from the blurred image.


Those skilled in the art will recognize that all embodiments of the invention are applicable to digitized images which are blurred by uniform motion, regardless of the source of the image or the source of the motion blur. The invention is applicable to digital images blurred due to motion of the imager, of the subject, or both. In some cases, it is also applicable to images captured on film and then scanned into digital files. In the latter case, however, motion sensor information typically is not available, and therefore only the blind estimation embodiment can be used. Also, where appropriate, the different embodiments of the invention can be combined. For example, the superposition embodiment can be used to avoid most blur, and the correcting filter using the blind estimation embodiment can then be applied to correct the combined image for any remaining blur.


In describing the invention, no distinction has been made between an imager that captures images one at a time, such as a digital camera, and one that captures a sequence of images, such as a digital or analog video recorder. A digital video recorder or similar device operates substantially the same way as a digital camera, with the addition of video compression techniques to reduce the amount of image data being stored, and various filtering operations used to improve image quality. The invention is also applicable to digital and analog video capture and processing, being applied to each image in the sequence of images, and can be used in conjunction with compression and other filtering.


The implementation of the apparatus that performs the restoration of the images to their correct form can be done as part of the imager capturing the image, or it can be done as a post-process. When done as part of the imager, the image correcting apparatus can be implemented in an integrated circuit, or in software to run on a processor, or a combination of the two. When done as a post process, a preferred embodiment is that the image data is input into a post processing device such as a computer, and the blind estimation algorithm is performed by a computer program. In this embodiment, the implementation could be a dedicated computer program, or an add-on function to an existing computer program.


Where a computer program performs the image restoration, a blind estimation algorithm can be executed by the program to calculate the estimated transfer function h(n,m). Alternatively, motion information can be recorded by the camera at the time the image is captured, and can be downloaded into the program to be used as an input to calculate h(n,m). In either case, the program then derives the correcting filter and applies the filter to correct the image.


It should also be noted that if there are multiple blurred objects in an image, and the blur is caused by the objects moving in different directions, the image of each object will be blurred differently, each blurred object having a different transfer function describing its motion. The present invention can allow the user to individually select independently blurred parts of the image and individually correct only the selected parts, or alternatively, to correct a selected part of the image at the expense of the rest of the image, resulting in a blur-corrected subject and a blurred background.


When increased accuracy is needed in obtaining h(n,m), those skilled in the art will recognize that, in some cases, the motion information from sensors can be used to calculate h(n,m), an estimate of h(n,m) can also be calculated by blind estimation, and the two transfer functions can be advantageously combined for more accurate results.


There are other signal processing algorithms and digital filters which can be applied to digital images in order to improve their color saturation, reduce noise, adjust contrast and sharpness, etc. These can be incorporated as part of an imager, such as a digital camera, or as part of a post-processing application, such as photo editing software running on a computer. It should be clear to those skilled in the art that those techniques can be applied in addition to the distortion correction of this invention.

Claims
  • 1. A method for use in an imaging device for capturing digital images, the method comprising: displaying a preview of a scene to be captured in a user interface of the imaging device; capturing a plurality of images using a lens having one or more lens elements and at least one moveable lens element movable relative to the imaging device, wherein the plurality of images include a first subject and a second subject; moving the at least one moveable lens element of the lens electro-mechanically to counter an effect of motion of the imaging device; processing the plurality of images by a processor of the imaging device, the processor receiving and executing instructions stored in a memory of the imaging device, to obtain a corrected image, such that the corrected image includes the first subject and the second subject, the first subject in the corrected image is blur free, and the second subject in the corrected image is blurred compared to the first subject; storing the corrected image in the memory of the imaging device; and displaying the corrected image in the user interface of the imaging device.
  • 2. The method of claim 1, wherein the method further comprises: displaying the corrected image in the user interface; receiving a user input in the user interface, wherein the user input selects the second subject in the corrected image; processing the plurality of images by the processor to obtain a second corrected image, such that the second corrected image includes the first subject and the second subject, the second subject in the second corrected image is blur free, and the first subject in the second corrected image is blurred compared to the second subject; storing the second corrected image in the memory; and displaying the second corrected image in the user interface.
  • 3. The method of claim 1, wherein the imaging device receives a user input in the user interface of the imaging device and the user input designates the first subject to be blur free in the corrected image.
  • 4. The method of claim 1, wherein the processor designates the first subject to be blur free in the corrected image.
  • 5. The method of claim 1, wherein the processing of the images includes calculating by the processor a pixel value for each pixel representing an image point of the first subject in the corrected image based on values of the pixels representing the image point of the first subject in one or more of the plurality of images.
  • 6. The method of claim 1, wherein areas of the corrected image other than the first subject are blurred compared to areas other than the first subject in the plurality of images being processed.
  • 7. The method of claim 1, wherein processing the plurality of images comprises modifying one or more of the plurality of images such that the first subject in the plurality of images is aligned in a same location in the corrected image.
  • 8. An imaging device for capturing digital images, comprising: a user interface configured to display a preview of a scene to be captured; at least one image sensor for capturing a plurality of images, wherein the plurality of images include a first subject and a second subject; at least one lens optically connected to the at least one image sensor, wherein the lens includes one or more lens elements, at least one of the lens elements electro-mechanically movable relative to the imaging device; a memory configured to store instructions for execution by a processor; the processor, wherein the processor is connected to the memory for receiving instructions stored therein, and executing the instructions by the processor causes the processor to correct the plurality of images to obtain a corrected image, such that the corrected image includes the first subject and the second subject, the first subject in the corrected image is blur free and the second subject in the corrected image is blurred compared to the first subject; the memory further configured to store the corrected image; and the user interface further configured to display the corrected image.
  • 9. The imaging device of claim 8, wherein: the user interface is further configured to display the corrected image and to receive a user input, wherein the user input selects the second subject in the corrected image; the instructions, when executed by the processor further cause the processor to process the plurality of images to obtain a second corrected image, such that the second corrected image includes the first subject and the second subject, the second subject in the second corrected image is blur free, and the first subject in the second corrected image is blurred compared to the second subject; the memory is further configured to store the second corrected image; and the user interface is further configured to display the second corrected image.
  • 10. The imaging device of claim 8, wherein executing the instructions by the processor further causes the processor to calculate a pixel value for each pixel representing an image point of the first subject in the corrected image based on values of the pixels representing the image point of the first subject in the one or more of the plurality of images.
  • 11. The imaging device of claim 8, wherein the user interface is further configured to receive a user input and the user input designates the first subject to be blur free in the corrected image.
  • 12. The imaging device of claim 8, wherein executing the instructions by the processor further causes the processor to designate the first subject to be blur free in the corrected image.
  • 13. The imaging device of claim 8, wherein executing the instructions by the processor further causes the processor to process the plurality of images to obtain the corrected image such that areas of the corrected image other than the first subject are blurred compared to areas other than the first subject in the plurality of images being processed.
  • 14. The imaging device of claim 8, wherein executing the instructions by the processor further causes the processor to modify the plurality of images such that the first subject in the plurality of images is aligned in a same location in the corrected image.
  • 15. A processor for use in an imaging device, wherein the processor is connected to a memory of the imaging device, the memory holding instructions for execution by the processor, wherein receiving and executing the instructions by the processor causes the processor to: move at least one movable lens element within a lens of the imaging device electro-mechanically to counter an effect of a motion of the imaging device, wherein the lens includes one or more lens elements and at least one of the lens elements is movable relative to the imaging device; receive a plurality of images captured by at least one image sensor and the lens of the imaging device, wherein the plurality of images include a first subject and a second subject; process the plurality of images to obtain a corrected image, such that the corrected image includes the first subject and the second subject, the first subject in the corrected image is blur free and the second subject in the corrected image is blurred compared to the first subject; store the corrected image in the memory of the imaging device; and display the corrected image in the user interface of the imaging device.
  • 16. The processor of claim 15, wherein executing the instructions by the processor further causes the processor to: display the corrected image in the user interface of the imaging device and receive a user input via the user interface, wherein the user input selects the second subject in the corrected image; correct the plurality of images to obtain a second corrected image, such that the second corrected image includes the first subject and the second subject, the second subject in the second corrected image is blur free, and the first subject in the second corrected image is blurred compared to the second subject; store the second corrected image in the memory of the imaging device; and display the second corrected image in the user interface of the imaging device.
  • 17. The processor of claim 15, wherein executing the instructions by the processor further causes the processor to calculate a pixel value for each pixel representing an image point of the first subject in the corrected image based on values of the pixels representing the image point of the first subject in the one or more of the plurality of images.
  • 18. The processor of claim 15, wherein executing the instructions by the processor further causes the processor to receive a user input via the user interface of the imaging device and the user input designates the first subject to be blur free in the corrected image.
  • 19. The processor of claim 15, wherein executing the instructions by the processor further causes the processor to process the plurality of images to obtain the corrected image such that areas of the corrected image other than the first subject are blurred compared to areas other than the first subject in the plurality of images being processed.
  • 20. The processor of claim 15, wherein executing the instructions by the processor further causes the processor to modify the plurality of images such that the first subject in the plurality of images is aligned in a same location in the corrected image.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/514,486, filed Oct. 29, 2021, which is a continuation of U.S. patent application Ser. No. 17/066,882, filed Oct. 9, 2020, which issued as U.S. Pat. No. 11,165,961, Nov. 2, 2021, which is a continuation of U.S. patent application Ser. No. 16/544,426, filed Aug. 19, 2019, which issued as U.S. Pat. No. 10,880,483 on Dec. 29, 2020, which is a continuation of U.S. patent application Ser. No. 15/858,339, filed Dec. 29, 2017, which issued as U.S. Pat. No. 10,389,944 on Aug. 20, 2019, which is a continuation of U.S. patent application Ser. No. 15/431,332, filed Feb. 13, 2017, which issued as U.S. Pat. No. 9,860,450 on Jan. 2, 2018, which is a continuation of U.S. patent application Ser. No. 15/149,481, filed May 9, 2016, which issued as U.S. Pat. No. 9,800,787 on Oct. 24, 2017, which is a continuation of U.S. patent application Ser. No. 14/690,818, filed on Apr. 20, 2015, which issued as U.S. Pat. No. 9,338,356 on May 10, 2016, which is a continuation of U.S. patent application Ser. No. 14/532,654, filed on Nov. 4, 2014, which issued as U.S. Pat. No. 9,013,587 on Apr. 21, 2015, which is a continuation of U.S. patent application Ser. No. 13/442,370, filed on Apr. 9, 2012, which issued as U.S. Pat. No. 8,922,663 on Dec. 30, 2014, which is a continuation of U.S. patent application Ser. No. 12/274,032, filed on Nov. 19, 2008, which issued as U.S. Pat. No. 8,154,607 on Apr. 10, 2012, which is a continuation of U.S. patent application Ser. No. 11/089,081, filed on Mar. 24, 2005, which issued as U.S. Pat. No. 8,331,723 on Dec. 11, 2012, which claims the benefit of U.S. Provisional Application Ser. No. 60/556,230, filed on Mar. 25, 2004, the contents of each of which are incorporated by reference herein.

US Referenced Citations (208)
Number Name Date Kind
4612575 Ishman et al. Sep 1986 A
4614966 Yunoki et al. Sep 1986 A
4646274 Martinez Feb 1987 A
4717958 Gal et al. Jan 1988 A
4890160 Thomas Dec 1989 A
5060074 Kinugasa et al. Oct 1991 A
5125041 Kimura et al. Jun 1992 A
5170255 Yamada et al. Dec 1992 A
5189518 Nishida Feb 1993 A
5193124 Subbarao Mar 1993 A
5262867 Kojima Nov 1993 A
5264846 Oikawa Nov 1993 A
5282044 Misawa et al. Jan 1994 A
5291300 Ueda Mar 1994 A
5309243 Tsai May 1994 A
5311240 Wheeler May 1994 A
5365603 Karmann Nov 1994 A
5418595 Iwasaki et al. May 1995 A
5430480 Allen et al. Jul 1995 A
5438361 Coleman Aug 1995 A
5475428 Hintz et al. Dec 1995 A
5510831 Mayhew Apr 1996 A
5559551 Sakamoto et al. Sep 1996 A
5596366 Takashima et al. Jan 1997 A
5610654 Parulski et al. Mar 1997 A
5617138 Ito et al. Apr 1997 A
5627543 Moreira May 1997 A
5646684 Nishizawa et al. Jul 1997 A
5649032 Burt et al. Jul 1997 A
5652918 Usui Jul 1997 A
5666158 Sekine et al. Sep 1997 A
5684887 Lee et al. Nov 1997 A
5706402 Bell Jan 1998 A
5706416 Mann et al. Jan 1998 A
5712474 Naneda Jan 1998 A
5729290 Tokumitsu et al. Mar 1998 A
5734739 Sheehan et al. Mar 1998 A
5742840 Hansen et al. Apr 1998 A
5771403 Imada Jun 1998 A
5828793 Mann Oct 1998 A
5867213 Ouchi Feb 1999 A
5870103 Luo Feb 1999 A
5881272 Balmer Mar 1999 A
5889553 Kino et al. Mar 1999 A
5963675 Van der Wal et al. Oct 1999 A
5982421 Inou et al. Nov 1999 A
5990942 Ogino Nov 1999 A
6067367 Nakajima et al. May 2000 A
6069639 Takasugi May 2000 A
6075905 Herman et al. Jun 2000 A
6079862 Kawashima et al. Jun 2000 A
6097854 Szeliski et al. Aug 2000 A
6101238 Murthy et al. Aug 2000 A
6122004 Hwang Sep 2000 A
6124864 Madden et al. Sep 2000 A
6157733 Swain Dec 2000 A
6166384 Dentinger et al. Dec 2000 A
6166853 Sapia et al. Dec 2000 A
6191813 Fujisaki et al. Feb 2001 B1
6195460 Kobayashi et al. Feb 2001 B1
6198283 Foo et al. Mar 2001 B1
6208765 Bergen Mar 2001 B1
6249616 Hashimoto Jun 2001 B1
6266086 Okada et al. Jul 2001 B1
6278460 Myers et al. Aug 2001 B1
6292593 Nako et al. Sep 2001 B1
6342918 Inou et al. Jan 2002 B1
6349114 Mory Feb 2002 B1
6353689 Kanamaru et al. Mar 2002 B1
6353823 Kumar et al. Mar 2002 B1
6384975 Hayakawa May 2002 B1
6385398 Matsumoto May 2002 B1
6392696 Onuki May 2002 B1
6400908 Parulski Jun 2002 B1
6411305 Chui Jun 2002 B1
6429895 Onuki Aug 2002 B1
6437306 Melen Aug 2002 B1
6466262 Miyatake et al. Oct 2002 B1
6470100 Horiuchi et al. Oct 2002 B2
6476869 Sekine et al. Nov 2002 B1
6480192 Sakamoto et al. Nov 2002 B1
6512807 Pohlman et al. Jan 2003 B1
6563542 Hatakenaka et al. May 2003 B1
6583823 Shimada et al. Jun 2003 B1
6646687 Vlahos Nov 2003 B1
6650704 Carlson et al. Nov 2003 B1
6687458 Masuda Feb 2004 B2
6745066 Lin et al. Jun 2004 B1
6757434 Miled et al. Jun 2004 B2
6759979 Vashisth et al. Jul 2004 B2
6773110 Gale Aug 2004 B1
6778210 Sugahara et al. Aug 2004 B1
6781623 Thomas Aug 2004 B1
6784927 Itokawa Aug 2004 B1
6856708 Aoki Feb 2005 B1
6909914 Pedrizzetti et al. Jun 2005 B2
6919927 Hyodo Jul 2005 B1
6930708 Sato et al. Aug 2005 B1
6940545 Ray et al. Sep 2005 B1
6947073 Seal Sep 2005 B1
6967780 Hillis et al. Nov 2005 B2
6993157 Oue et al. Jan 2006 B1
6993204 Yahil et al. Jan 2006 B1
7024050 Kondo et al. Apr 2006 B2
7057645 Hara et al. Jun 2006 B1
7058233 Silber Jun 2006 B2
7075569 Niikawa Jul 2006 B2
7095001 Kawahara Aug 2006 B2
7162102 Cahill et al. Jan 2007 B2
7180043 Washisu Feb 2007 B2
7232221 Hillis et al. Jun 2007 B2
7286164 Shinohara et al. Oct 2007 B2
7286168 Yamasaki Oct 2007 B2
7298923 Zhang et al. Nov 2007 B2
7352389 Uenaka Apr 2008 B2
7397500 Yost et al. Jul 2008 B2
7443434 Silverbrook Oct 2008 B2
7483056 Shinohara et al. Jan 2009 B2
7489760 Hemmendorff Feb 2009 B2
7561186 Poon Jul 2009 B2
7612909 Kondo et al. Nov 2009 B2
7619655 Kondo et al. Nov 2009 B2
7693563 Suresh et al. Apr 2010 B2
7710460 Stavely et al. May 2010 B2
7961323 Tibbetts Jun 2011 B2
8154607 Ozluturk Apr 2012 B2
8228400 Liu et al. Jul 2012 B2
8259184 Murashima et al. Sep 2012 B2
8331723 Ozluturk Dec 2012 B2
8798388 Atanassov et al. Aug 2014 B2
8922663 Ozluturk Dec 2014 B2
9013587 Ozluturk Apr 2015 B2
9313375 Chakravarty Apr 2016 B1
9338356 Ozluturk May 2016 B2
9800787 Ozluturk Oct 2017 B2
9860450 Ozluturk Jan 2018 B2
10389944 Ozluturk Aug 2019 B2
10880483 Ozluturk Dec 2020 B2
11165961 Ozluturk Nov 2021 B2
11627391 Ozluturk Apr 2023 B2
20010010546 Chen Aug 2001 A1
20010013895 Aizawa et al. Aug 2001 A1
20010022619 Nishiwaki Sep 2001 A1
20010022860 Kitamura et al. Sep 2001 A1
20010028798 Manowitz et al. Oct 2001 A1
20010030693 Fisher et al. Oct 2001 A1
20010045989 Onuki Nov 2001 A1
20020036692 Okada Mar 2002 A1
20020094200 Yamaguchi Jul 2002 A1
20020097324 Onuki Jul 2002 A1
20020110268 Brinker et al. Aug 2002 A1
20020140823 Sakurai et al. Oct 2002 A1
20020159651 Tener et al. Oct 2002 A1
20030067544 Wada Apr 2003 A1
20030076408 Dutta Apr 2003 A1
20030076421 Dutta Apr 2003 A1
20030108240 Gutta et al. Jun 2003 A1
20030113035 Cahill et al. Jun 2003 A1
20030117511 Belz et al. Jun 2003 A1
20030118227 Winsor et al. Jun 2003 A1
20030122942 Parker et al. Jul 2003 A1
20030128893 Castorina et al. Jul 2003 A1
20030156216 Nonaka Aug 2003 A1
20030174899 Kondo et al. Sep 2003 A1
20040008872 Goldberg et al. Jan 2004 A1
20040017930 Kim et al. Jan 2004 A1
20040080661 Afsenius et al. Apr 2004 A1
20040091158 Miled et al. May 2004 A1
20040100561 Shinohara et al. May 2004 A1
20040145673 Washisu Jul 2004 A1
20040170327 Kim et al. Sep 2004 A1
20040179111 Hattori Sep 2004 A1
20040184667 Raskar et al. Sep 2004 A1
20040218055 Yost et al. Nov 2004 A1
20040260935 Usami et al. Dec 2004 A1
20050018927 Manabe Jan 2005 A1
20050047672 Ben-Ezra et al. Mar 2005 A1
20050053309 Szczuka et al. Mar 2005 A1
20050063568 Sun et al. Mar 2005 A1
20050078881 Xu et al. Apr 2005 A1
20050140793 Kojima et al. Jun 2005 A1
20050157180 Takahashi et al. Jul 2005 A1
20050179784 Qi Aug 2005 A1
20050213850 Zhang et al. Sep 2005 A1
20050231603 Poon Oct 2005 A1
20050286388 Ayres et al. Dec 2005 A1
20060110147 Tomita et al. May 2006 A1
20060177145 Lee et al. Aug 2006 A1
20060257051 Zavadsky et al. Nov 2006 A1
20060280249 Poon Dec 2006 A1
20070025503 Hemmendorff Feb 2007 A1
20070031004 Matsui et al. Feb 2007 A1
20070086675 Chinen et al. Apr 2007 A1
20070236573 Alon et al. Oct 2007 A1
20070242141 Ciurea Oct 2007 A1
20070263914 Tibbetts Nov 2007 A1
20080123996 Zavadsky et al. May 2008 A1
20090097136 Otsu Apr 2009 A1
20100091124 Hablutzel Apr 2010 A1
20100231748 Takeda Sep 2010 A1
20100328482 Chang et al. Dec 2010 A1
20110228123 Matsumoto et al. Sep 2011 A1
20120128202 Shimizu et al. May 2012 A1
20140313367 Iwasaki Oct 2014 A1
20150103190 Corcoran et al. Apr 2015 A1
20150262341 Nash et al. Sep 2015 A1
20160117829 Yoon et al. Apr 2016 A1
20160127641 Gove May 2016 A1
Foreign Referenced Citations (30)
Number Date Country
1004983 May 2000 EP
01-174076 Jul 1989 JP
05-110931 Apr 1993 JP
06-078210 Mar 1994 JP
06-141191 May 1994 JP
06-087581 Nov 1994 JP
08-307762 Nov 1996 JP
09-261526 Oct 1997 JP
10-215405 Aug 1998 JP
11-024122 Jan 1999 JP
11-252445 Sep 1999 JP
2000-023024 Jan 2000 JP
2000-187478 Jul 2000 JP
2000-207538 Jul 2000 JP
2000-299813 Oct 2000 JP
2000-341577 Dec 2000 JP
2002-057933 Feb 2002 JP
2002-077725 Mar 2002 JP
2002-084412 Mar 2002 JP
2002-112095 Apr 2002 JP
2002-247444 Aug 2002 JP
3395770 Apr 2003 JP
2003-209727 Jul 2003 JP
2003-209773 Jul 2003 JP
2004-056581 Feb 2004 JP
2004-104652 Apr 2004 JP
2004-158905 Jun 2004 JP
2005-039680 Feb 2005 JP
2003045263 Jun 2003 WO
2003088147 Oct 2003 WO
Non-Patent Literature Citations (37)
Entry
Aizawa et al., “Implicit 3D Approach to Image Generation: Object-Based Visual Effects by Linear Processing of Multiple Differently Focused Images,” R. Klette et al. (Eds.): Multi-Image Analysis, LNCS 2032, pp. 226-237, (2001).
Aizawa et al., “Object-Based Visual Effects by Using Multi-Focus Images and its Real-Time Implementation,” Proceedings of the 2000 IEEE International Conference on Multimedia and Expo: Latest Advances in the Fast Changing World of Multimedia (Jul. 2000).
Aizawa et al., “Producing Object-Based Special Effects by Fusing Multiple Differently Focused Images,” IEEE Transaction on Circuits and Systems for Video Technology, vol. 10, No. 2 (Mar. 2000).
Aizawa et al., “Producing Object-based Special Visual Effects by Integrating Multiple Differently Focused Images: Implicit 3D approach to Image Content Manipulation,” Proceedings of the 1999 International Conference on Image Processing (Oct. 1999).
Banham, et al., Digital Image Restoration, IEEE Signal Processing Magazine, Mar. 1997, pp. 24-41.
Biemond et al., “Iterative Methods for Image Deblurring,” Proceedings of the IEEE, May 1990, vol. 78, No. 5.
Bogoni et al., “Pattern-selective color image fusion,” Pattern Recognition 34, pp. 1515-1526 (2001).
Brott, “Digital Video Now Available to Consumers,” Videomaker.com, Nov. 1, 1995.
Burt et al., “Enhanced Image Capture Through Fusion,” 4th International Conference on Computer Vision (May 1993).
Canon ES2000 Camcorder.
Canon, 8mm Video Camcorder Instruction Manual, ES190/ES290, Pub. DIM-297 (1999).
Eltoukhy et al., “A Computationally Efficient Algorithm for Multi-Focus Image Reconstruction,” Proceedings of SPIE—The International Society for Optical Engineering (2003).
Hejtmanek, “Digital Camera Technology,” Broadcast Engineering Magazine, InterTec/Primedia Publishers, pp. 83-86 (Aug. 1998).
John, “Multiframe Selective Information Fusion for 'Looking Through the Woods,” Proceedings of the 2003 International Conference on Multimedia and Expo (Jul. 2003).
Kodak DCS Pro 14n / Pro 14nx Camera Manual.
Kodak Professional DCS Pro SLR/c Digital Camera User's Guide.
Kodama et al., “Generation of arbitrarily focused images by using multiple differently focused images,” Journal of Electronic Imaging 7(1), 138-144 (Jan. 1998).
Kubota et al., “A New Approach to Depth Range Detection by Producing Depth-Dependent Blurring Effect,” Proceedings of the 2001 International Conference on Image Processing (Oct. 2001).
Kubota et al., “A Novel Image-Based Rendering Method by Linear Filtering of Multiple Focused Images Acquired by a Camera Array,” Proceedings of the 2003 International Conference on Image Processing (Sep. 2003).
Kubota et al., “Arbitrary View and Focus Image Generation: Rendering Object-Based Shifting and Focussing Effect by Linear Filtering,” Proceedings of the International Conference on Image Processing (Sep. 2002).
Kubota et al., “Reconstructing Arbitrarily Focused Images From Two Differently Focused Images Using Linear Filters,” IEEE Transaction on Image Processing, vol. 14, No. 11 (Nov. 2005).
Kubota et al., “Registration and Blur Estimation Methods for Multiple Differently Focused Images,” Proceedings of the 1999 International Conference on Image Processing (Oct. 1999).
Kubota et al., “Virtual View Generation by Linear Processing of Two Differently Focused Images,” Object recognition supported by user interaction for service robots, pp. 504-507, vol. 1 (Aug. 2002).
Kundur, et al., Blind Image Deconvolution, IEEE Signal Processing Magazine, May 1996, pp. 43-64.
Kurazume et al., “Development of image stabilization system for remote operation of walking robots,” Proceedings of the 2000 IEEE International Conference on Robotics & Automation (Apr. 2000).
Liles, “Digital Camcorders,” Broadcast Engineering Magazine, InterTec/Primedia Publishers, pp. 86-92 (Jan. 2000).
McGarvey, “The DCS Story: 17 Years of Kodak Professional digital camera systems, 1987-2004,” (Jun. 2004).
McMann et al., “A Digital Noise Reducer for Encoded NTSC Signals,” SMPTE Journal, vol. 87, No. 3 (Mar. 1978).
Nikon Digital Camera D1 User's Manual.
Nikon N8008 AF Instruction Manual.
Pentax *istD Camera Manual.
Piella, “A region-based multiresolution image fusion algorithm,” Proceedings of the Fifth International Conference on Information Fusion. (Jul. 2002).
Popular Electronics Magazine, Mar. 1996.
Samsung Digimax 370 Digital Camera Manual.
Samsung Digimax 430 Digital Camera Manual.
Sole et al., “Region-Selective Sharpening of Magnetic Resonance Images,” Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Nov. 1988).
Yitzhaky, et al., Comparison of Direct Blind Deconvolution Methods for Motion-Blurred Images, Applied Optics, vol. 38, No. 20, Jul. 1999, pp. 4325-4332.
Related Publications (1)
Number Date Country
20230247295 A1 Aug 2023 US
Provisional Applications (1)
Number Date Country
60556230 Mar 2004 US
Continuations (11)
Number Date Country
Parent 17514486 Oct 2021 US
Child 18132673 US
Parent 17066882 Oct 2020 US
Child 17514486 US
Parent 16544426 Aug 2019 US
Child 17066882 US
Parent 15858339 Dec 2017 US
Child 16544426 US
Parent 15431332 Feb 2017 US
Child 15858339 US
Parent 15149481 May 2016 US
Child 15431332 US
Parent 14690818 Apr 2015 US
Child 15149481 US
Parent 14532654 Nov 2014 US
Child 14690818 US
Parent 13442370 Apr 2012 US
Child 14532654 US
Parent 12274032 Nov 2008 US
Child 13442370 US
Parent 11089081 Mar 2005 US
Child 12274032 US