A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The disclosed subject matter relates to methods and systems for coded rolling shutter of an image of a physical object or a scene. More particularly, the disclosed subject matter relates to controlling readout timing and/or exposure for different rows of a pixel array in an image sensor to flexibly sample the three-dimensional space-time volume of scene appearance for various applications.
Digital cameras, digital camcorders, and other imaging devices typically include image sensors, such as charge-coupled device (CCD) image sensors or complementary metal-oxide semiconductor (CMOS) image sensors. Between these two types of image sensors, CMOS image sensors are being used more frequently than CCD image sensors in a variety of imaging devices, such as digital still and video cameras, mobile phone cameras, surveillance cameras, web cameras, etc. This growth is based at least in part on the ability of CMOS image sensors to easily integrate electronics, such as programmable signal processing circuits, on-chip, which provides low cost, low power consumption, and high speed readout features that can be critical for many applications.
More particularly, as shown in
Rolling shutter and, in particular, the row-wise exposure discrepancy in rolling shutter are considered to be detrimental to image quality. Rolling shutter exposes pixels in different rows to light at different times. This often causes skew, geometric distortions, and other artifacts in an image of a physical object or a scene. The effects of rolling shutter are particularly noticeable in images of moving objects, which is shown in
Various approaches, such as global shutter, attempt to address the limitations of rolling shutter. Imaging devices that implement global shutter expose each pixel in every row of an image sensor to light at the same time to simultaneously capture an image. In global shutter, the readout times and the exposure length for each pixel are the same. However, despite these advances, global shutter and other shutter approaches remain one-dimensional functions of time; none of these approaches extends sampling beyond the time dimension.
There is therefore a need in the art for approaches that exploit rolling shutter advantageously for computational photography and other applications. There is also a need in the art for approaches that extend the shutter mechanisms to a two-dimensional sampling of the three-dimensional space-time volume of a scene.
Accordingly, it is desirable to provide methods and systems for coded readout of an image that overcome these and other deficiencies of the prior art.
Methods and systems for coded rolling shutter are provided.
In some embodiments, mechanisms are provided that control the readout timing and exposure length for each row of a pixel array in an image sensor, thereby flexibly sampling the three-dimensional space-time volume of a scene and capturing sub-images that effectively encode motion and dynamic range information within a single captured image. Instead of sending out the row-reset and row-select signals sequentially, signals can be sent using a coded pattern. This is sometimes referred to herein as “coded rolling shutter” or “coded shutter.”
In some embodiments, the readout timing can be controlled by providing an interlaced readout pattern, such as the interlaced readout pattern shown in
In some embodiments, additionally or alternatively to controlling the readout timing, mechanisms are provided for controlling the exposure length. As described herein, an optimal exposure for each row of a pixel array can be determined. An illustrative coded exposure pattern, where an optimal exposure has been determined for each row of a pixel array, is shown in
Upon exposing each of the plurality of rows in an image sensor using a coded pattern (e.g., an interlaced readout pattern, a staggered readout pattern, a coded exposure pattern, and/or any suitable combination thereof), a single image is captured that encodes motion and/or dynamic range. From this image, multiple sub-images can be read out or extracted from different subsets of the rows of the pixel array. The multiple sub-images and/or other information encoded in the single captured image can be used to determine optical flow, generate a skew-free image, generate a slow motion video, generate a high dynamic range (HDR) image, etc.
These coded rolling shutter mechanisms can be implemented in an image sensor (such as the image sensor shown in
It should be noted that these mechanisms can be used in a variety of applications, such as skew compensation, recovering slow motion in images of moving objects, high-speed photography, and high dynamic range (HDR) imaging. For example, these mechanisms can be used to improve sampling over the time dimension for high-speed photography. In another example, the coded rolling shutter mechanisms can be used to estimate optical flow, which can be useful for recovering slow motion in an image of a moving object, generating a skew-free image, or removing motion blur in an image due to camera shake. In yet another example, these mechanisms can be used to control readout timing and exposure length to capture high dynamic range images from a single captured image. In a further example, these mechanisms can be used to control readout timing and exposure length to recover a skew-free video from a single captured image.
In accordance with various embodiments, a method for reading an image of a scene detected in an image sensor comprising a pixel array having a plurality of rows of pixels is provided, the method comprising: exposing each of the plurality of rows of the pixel array to the image of the scene; reading-out a first subset of the rows of the pixel array to extract a first sub-image from the image; and reading-out a second subset of the rows of the pixel array to extract a second sub-image from the image, wherein the first subset of the rows of the pixel array is different from the second subset of the rows of the pixel array.
It should be noted that, in some embodiments, the first subset of the rows of the pixel array and the second subset of the rows of the pixel array are uniformly distributed between the plurality of rows of the pixel array.
In some embodiments, optical flow between the first sub-image and the second sub-image can be estimated for at least one of recovering slow motion, substantially removing skew from the image, and substantially removing motion blur from the image. For example, an intermediate image that is interpolated between the first sub-image and the second sub-image can be determined based at least in part on the estimated optical flow, wherein skew is substantially removed in the intermediate image. In another example, the first sub-image, the intermediate image, and the second sub-image can be combined to create a slow motion interpolated video. In yet another example, where the estimated optical flow corresponds to motion information associated with the image sensor, a point spread function for the image can be estimated based at least in part on the motion information and applied to enhance an output image generated from at least one of the first sub-image and the second sub-image, thereby substantially removing motion blur from the output image.
In accordance with various embodiments, a method for reading an image of a scene detected in an image sensor comprising a pixel array having a plurality of rows of pixels, where the plurality of rows includes a given row, a higher row that is higher in the pixel array than the given row, and a lower row that is lower in the pixel array than the given row, is provided, the method comprising: receiving a coded pattern for controlling readout times in the pixel array and for extracting a plurality of sub-images from the image; exposing each of the plurality of rows of the pixel array to the image of the scene; reading-out the given row of the plurality of rows, wherein the given row is selected for readout based on the number of sub-images; reading-out a first set of higher rows subsequent to reading-out the given row; reading-out the lower row subsequent to reading-out the first set of higher rows, wherein the lower row is selected for readout based on the number of sub-images; and reading-out a second set of higher rows subsequent to reading-out the lower row.
In accordance with various embodiments, a method for reading an image of a scene detected in an image sensor comprising a pixel array having a plurality of rows of pixels is provided, the method comprising: controlling a first exposure time for a first row of the plurality of rows and a second exposure time for a second row of the plurality of rows, wherein the first exposure time is controlled to be different from the second exposure time; and reading-out the first row and the second row.
In some embodiments, an optimal exposure time for at least one of the first exposure time for the first row and the second exposure time for the second row can be determined. For example, a first image of the scene (e.g., using conventional auto-exposure) can be obtained, an optimal exposure time for each of the plurality of rows of the pixel array can be determined based at least in part on scene radiance, and a second image of the scene can be obtained, where a first exposure time for a first row of the plurality of rows of the pixel array and a second exposure time for a second row of the plurality of rows of the pixel array are adjusted based at least in part on the determined optimal exposure time for that row. In addition, in some embodiments, the pixel values of the second image can be normalized with respect to the determined optimal exposure time applied for each of the plurality of rows of the pixel array.
In some embodiments, a plurality of sub-images can be extracted from the image using the controlled readout times and the controlled exposure times, where the plurality of sub-images are uniformly distributed between the plurality of rows in the pixel array.
In some embodiments, the plurality of sub-images extracted from the image can be combined to compose a high dynamic range image.
In some embodiments, two or more of the plurality of sub-images can be used and/or compared to compensate for motion blur. For example, optical flow between a first sub-image and a second sub-image of the plurality of sub-images can be estimated. Motion information (e.g., blur kernels) can be determined based at least in part on the estimated optical flow and the determined motion information can be applied to enhance a high dynamic range image that is composed by combining the plurality of sub-images extracted from the image, thereby substantially removing motion blur from the high dynamic range image.
In some embodiments, the determined motion information is incrementally applied to each of the plurality of sub-images to substantially remove motion blur from each of the plurality of sub-images prior to combining the plurality of sub-images to compose a high dynamic range image.
In accordance with various embodiments, a method for reading an image of a scene detected in an image sensor comprising a pixel array having a plurality of rows of pixels is provided, the method comprising: receiving a coded pattern that controls a plurality of exposure times and a plurality of readout times and wherein each of the plurality of exposure times and each of the plurality of readout times are associated with one of the plurality of rows of the pixel array; exposing each of the plurality of rows of the pixel array to the image of the scene in accordance with the received coded pattern; reading-out the plurality of rows of the pixel array in accordance with the received coded pattern to obtain a pixel value for each pixel; reconstructing estimated pixel values for each pixel over time based on pixel values from neighboring rows in the pixel array; and constructing a video using the reconstructed estimated pixel values, wherein skew is substantially reduced in the constructed video.
In some embodiments, the coded pattern randomly assigns the plurality of exposure times and the plurality of readout times for the plurality of rows of the pixel array.
In accordance with various embodiments, a system for reading an image of a scene is provided, the system comprising: an image sensor comprising a pixel array having a plurality of rows; and at least one controller that: exposes each of the plurality of rows of the pixel array to the image of the scene; reads-out a first subset of the rows of the pixel array to extract a first sub-image from the image; and reads-out a second subset of the rows of the pixel array to extract a second sub-image from the image, wherein the first subset of the rows of the pixel array is different from the second subset of the rows of the pixel array.
In accordance with various embodiments, a system for reading an image of a scene is provided, the system comprising: an image sensor comprising a pixel array having a plurality of rows of pixels, wherein the plurality of rows includes a given row, a higher row that is higher in the pixel array than the given row, and a lower row that is lower in the pixel array than the given row; and at least one controller that: receives a coded pattern for controlling readout times in the pixel array and for extracting a plurality of sub-images from the image; exposes each of the plurality of rows of the pixel array to the image of the scene; reads-out the given row of the plurality of rows, wherein the given row is selected for readout based on the number of sub-images; reads-out a first set of higher rows subsequent to reading-out the given row; reads-out the lower row subsequent to reading-out the first set of higher rows, wherein the lower row is selected for readout based on the number of sub-images; and reads-out a second set of higher rows subsequent to reading-out the lower row.
In accordance with various embodiments, a system for reading an image of a scene is provided, the system comprising: an image sensor comprising a pixel array having a plurality of rows; and at least one controller that: controls a first exposure time for a first row of the plurality of rows and a second exposure time for a second row of the plurality of rows, wherein the first exposure time is controlled to be different from the second exposure time; and reads-out the first row and the second row.
In accordance with various embodiments, a system for reading an image of a scene is provided, the system comprising: an image sensor comprising a pixel array having a plurality of rows; and at least one controller that: receives a coded pattern that controls a plurality of exposure times and a plurality of readout times and wherein each of the plurality of exposure times and each of the plurality of readout times are associated with one of the plurality of rows of the pixel array; exposes each of the plurality of rows of the pixel array to the image of the scene in accordance with the received coded pattern; reads-out the plurality of rows of the pixel array in accordance with the received coded pattern to obtain a pixel value for each pixel; reconstructs estimated pixel values for each pixel over time based on pixel values from neighboring rows in the pixel array; and constructs a video using the reconstructed estimated pixel values, wherein skew is substantially reduced in the constructed video.
In accordance with various embodiments, mechanisms for coded shutter are provided. In some embodiments, mechanisms are provided that control the readout timing and exposure length for each row of a pixel array in an image sensor, thereby flexibly sampling the three-dimensional space-time volume of a scene and capturing sub-images that effectively encode motion and dynamic range information within a single captured image. Instead of sending out the row-reset and row-select signals sequentially, signals can be sent using a coded pattern.
In some embodiments, the readout timing can be controlled by providing an interlaced readout pattern, such as the interlaced readout pattern shown in
In some embodiments, additionally or alternatively to controlling the readout timing, mechanisms are provided for controlling the exposure length. As described herein, an optimal exposure for each row of a pixel array can be determined. An illustrative coded exposure pattern, where an optimal exposure has been determined for each row of a pixel array, is shown in
Upon exposing each of the plurality of rows in an image sensor using a coded pattern (e.g., an interlaced readout pattern, a staggered readout pattern, a coded exposure pattern, and/or any suitable combination thereof), a single image is captured that encodes motion and/or dynamic range. From this image, multiple sub-images can be read out or extracted from different subsets of the rows of the pixel array. The multiple sub-images and/or other information encoded in the single captured image can be used to determine optical flow, generate a skew-free image, generate a slow motion video, generate a high dynamic range (HDR) image, etc.
These coded rolling shutter mechanisms can be implemented in an image sensor (such as the image sensor shown in
It should be noted that these mechanisms can be used in a variety of applications, such as skew compensation, recovering slow motion in images of moving objects, high-speed photography, and high dynamic range (HDR) imaging. For example, these mechanisms can be used to improve sampling over the time dimension for high-speed photography. In another example, the coded rolling shutter mechanisms can be used to estimate optical flow, which can be useful for recovering slow motion in an image of a moving object, generating a skew-free image, or removing motion blur in an image due to camera shake. In yet another example, these mechanisms can be used to control readout timing and exposure length to capture high dynamic range images from a single captured image. In a further example, these mechanisms can be used to control readout timing and exposure length to recover a skew-free video from a single captured image.
Referring back to
In accordance with some embodiments of the disclosed subject matter, the reset timing, ts(y), and the readout timing, tr(y), can be controlled by the address generator. Accordingly, the address generator can also control the exposure time, Δte. For example,
It should be noted that controlling the readout timing and exposure length for the rows of an image sensor provide greater flexibility for sampling the three-dimensional space-time volume of a scene into a single two-dimensional image. To illustrate this, let E(x,y,t) denote the radiance of a scene point (x,y) at time t, and let S(x,y,t) denote the shutter function of a camera. The captured image, I(x,y), can then be represented as:
$$I(x,y)=\int_{-\infty}^{\infty} E(x,y,t)\cdot S(x,y,t)\,dt$$
The coded rolling shutter mechanisms described herein extend the shutter function, S(x,y,t), to two dimensions as both readout timing, tr(y), and the exposure time, Δte, can be row-specific. That is, the shutter function for coded rolling shutter is a function of time and image row index. It should be noted that this is unlike conventional rolling shutter and other shutter mechanisms that are merely one-dimensional functions of time.
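By way of a non-limiting illustration, the following Python sketch discretizes the above integral over T time slices and evaluates the captured image for an arbitrary row-wise shutter function. The array shapes and the helper names (capture, rowwise_shutter) are assumptions made only for this example.

```python
import numpy as np

def capture(E, S):
    """Discrete form of I(x,y) = integral of E(x,y,t) * S(x,y,t) dt.

    E: scene radiance volume, shape (T, M, N)  (time, rows, columns).
    S: shutter function,      shape (T, M, N)  (1 while a pixel integrates, else 0).
    Returns the captured two-dimensional image I of shape (M, N).
    """
    return (E * S).sum(axis=0)

def rowwise_shutter(T, M, N, t_s, t_r):
    """Row-wise shutter: row y integrates from time slot t_s[y] to t_r[y]."""
    S = np.zeros((T, M, N))
    for y in range(M):
        S[t_s[y]:t_r[y], y, :] = 1.0
    return S
```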
As mentioned previously, because there is only one row of readout circuits, the readout timings for different rows cannot overlap. This imposes a constraint on the readout timing, tr(y). More particularly, for an image sensor with M rows, the total readout time for one frame is MΔtr. Each readout timing pattern corresponds to a one-to-one assignment of the M readout timing slots to the M rows. The assignment for conventional rolling shutter is shown in
In accordance with some embodiments, these coded rolling shutter mechanisms can be used to better sample the time dimension by controlling (e.g., shuffling) the readout timings, tr(y), among rows. For example, in some embodiments, an interlaced readout pattern, such as the one shown in
It should be noted that, although
For the interlaced readout pattern, the readout timing, tr(y) for the y-th row can be represented as follows:
where the image sensor has a total of M rows and where └·┘ is the floor function.
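The closed-form expression for tr(y) is not reproduced above. By way of a non-limiting illustration only, the following Python sketch shows one plausible interlaced assignment of the M readout slots to the M rows for K sub-images that is consistent with the description (every K-th row is read out within each of K passes), and verifies that the assignment is one-to-one as required by the shared readout circuitry. It assumes M is divisible by K.

```python
import numpy as np

def interlaced_readout_slots(M, K):
    """One plausible interlaced assignment of M readout slots to M rows
    (illustrative assumption; not the patent's own formula).

    Rows y with the same value of y % K are read out in one consecutive
    pass, so each of the K passes yields one sub-image of M // K rows.
    Returns slot[y], the index of the readout time slot assigned to row y,
    so that the readout time of row y is t_r(y) = slot[y] * dt_r.
    """
    assert M % K == 0
    slot = np.empty(M, dtype=int)
    for y in range(M):
        p, i = y % K, y // K          # pass number, position within the pass
        slot[y] = p * (M // K) + i
    # shared readout circuitry: the slot assignment must be one-to-one
    assert sorted(slot.tolist()) == list(range(M))
    return slot
```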
In some embodiments, the interlaced readout pattern can be used for skew compensation and high speed photography. More particularly, an address generator, such as address generator 110 of
Regarding skew in the sub-images, it should be noted that the time lag between the top row and the bottom row of each sub-image is MΔtr/K, where M is the total number of rows in the image sensor, Δtr is the readout time, and K is the number of sub-images. As described above, for conventional rolling shutter, the time lag between the top row and the bottom row for one frame is MΔtr. Thus, the skew in these sub-images is 1/K of the skew in conventional rolling shutter. Accordingly, the skew in the K sub-images is substantially reduced from the skew of an image captured using conventional rolling shutter.
In addition, it should be noted that the time lag between two consecutive sub-images is also reduced to MΔtr/K. Thus, the frame rate increases K times between the obtained sub-images. Accordingly, the frame rate in the K sub-images is substantially increased from the frame rate of an image captured using conventional rolling shutter.
Illustrative examples of the sub-images obtained using the interlaced readout pattern of
It should also be noted that cubic interpolation or any other suitable approach can be used to resize sub-images 820 and 830 vertically to full resolution. For example, for a captured input image with a resolution of 640×480 (M=480 and K=2), each sub-image read out from the captured input image has a resolution of 640×240. Note that, using the interlaced readout pattern, full resolution in the horizontal direction is preserved. Each sub-image can be resized to 640×480 or full resolution by interpolation. The interpolated images are represented by solid lines 910 and 920 (
In some embodiments, the extracted sub-images can be used to estimate optical flow. As described above, cubic interpolation or any other suitable interpolation approach can be used to resize the two sub-images I1 and I2 vertically to full resolution (shown as solid lines 910 and 920 in
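By way of a non-limiting illustration, the resize-and-flow step described above can be sketched as follows. OpenCV's Farneback method is used purely as a stand-in for any suitable optical flow approach, grayscale 8-bit sub-images are assumed, and the function and variable names are assumptions made for this example.

```python
import cv2

def upsample_and_flow(I1_sub, I2_sub, full_height):
    """Resize two sub-images to full vertical resolution (cubic interpolation)
    and estimate dense optical flow between them (illustrative sketch)."""
    w = I1_sub.shape[1]
    I1 = cv2.resize(I1_sub, (w, full_height), interpolation=cv2.INTER_CUBIC)
    I2 = cv2.resize(I2_sub, (w, full_height), interpolation=cv2.INTER_CUBIC)
    # Farneback flow stands in for "any suitable optical flow approach".
    u0 = cv2.calcOpticalFlowFarneback(I1, I2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return I1, I2, u0   # u0 has shape (H, W, 2): per-pixel (dx, dy)
```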
Using the determined optical flow, intermediate images within a shaded area 930 and between the interpolated sub-images (I1 and I2) can be determined using bidirectional interpolation. For example, in some embodiments, intermediate images can be determined using the following equation:
$$I_w(p)=(1-w)\,I_1\big(p-w\,u_w(p)\big)+w\,I_2\big(p+(1-w)\,u_w(p)\big),$$
where 0 ≤ w ≤ 1, p=(x,y) represents one pixel, and u_w(p) is the forward-warped optical flow computed as u_w(p + w·u_0(p)) = u_0(p). For example, as shown in
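By way of a non-limiting illustration, a minimal Python sketch of the bidirectional interpolation equation above is shown below; nearest-neighbor sampling is used for brevity, and the forward-warped flow u_w is assumed to be supplied as an (H, W, 2) array.

```python
import numpy as np

def interpolate_frame(I1, I2, u_w, w):
    """I_w(p) = (1 - w) * I1(p - w*u_w(p)) + w * I2(p + (1 - w)*u_w(p)),
    with 0 <= w <= 1 (nearest-neighbor sampling for brevity)."""
    H, W = I1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # sample I1 backward along the flow
    x1 = np.clip(np.round(xs - w * u_w[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.round(ys - w * u_w[..., 1]).astype(int), 0, H - 1)
    # sample I2 forward along the flow
    x2 = np.clip(np.round(xs + (1 - w) * u_w[..., 0]).astype(int), 0, W - 1)
    y2 = np.clip(np.round(ys + (1 - w) * u_w[..., 1]).astype(int), 0, H - 1)
    return (1 - w) * I1[y1, x1] + w * I2[y2, x2]
```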
In some embodiments, a skew-free image can be interpolated from the obtained sub-images. As shown in
To illustrate the skew compensation feature of the disclosed subject matter,
In addition to compensating for skew, in some embodiments, the sub-images read out from the single captured image using the interlaced readout pattern of
It should be noted that motion blur due to camera shake is a common problem in photography. Merely pressing the shutter release button can itself cause the camera to shake and result in blurred images. The compact form and small lenses of many of these digital cameras only serve to increase the camera shake problem.
In some embodiments, the sub-images and the estimated optical flow from the sub-images can be used to remove motion blur from a single image caused by camera shake or any other suitable motion (sometimes referred to herein as “motion de-blurring”). As described above, cubic interpolation or any other suitable interpolation approach can be used to resize the two sub-images I1 and I2 vertically to full resolution (see, e.g.,
Alternatively, in accordance with some embodiments, the readout timing of the image sensor can be controlled by implementing a staggered readout pattern, such as the staggered readout pattern shown in
In a more particular example,
For the staggered readout pattern, the readout timing, tr(y), for the y-th row can be represented as follows:
It should be noted that the time lag within each sub-image for staggered readout is (M−K+1)Δtr.
It should also be noted that the time lag between two consecutive sub-images is Δtr, which corresponds to a substantially higher frame rate than the frame rate achieved using conventional rolling shutter. The readout time, Δtr, is generally between about 15×10⁻⁶ seconds (15 microseconds) and about 40×10⁻⁶ seconds (40 microseconds). Accordingly, an image sensor that uses staggered readout can be used for ultra-high speed photography of time-critical events, such as a speeding bullet, a bursting balloon, a foot touching the ground, etc.
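The closed-form expression for the staggered readout timing is likewise not reproduced above. By way of a non-limiting illustration only, the following Python sketch shows one plausible staggered schedule in which rows are read in their natural order and consecutive rows belong to different sub-images, which matches the stated inter-sub-image lag of Δtr and, to within one readout slot, the stated intra-sub-image lag.

```python
import numpy as np

def staggered_readout(M, K, dt_r):
    """One plausible staggered readout schedule (illustrative assumption).

    Row y is read out in slot y and assigned to sub-image y % K, so
    consecutive sub-images are offset by a single readout slot."""
    t_r = np.arange(M) * dt_r            # readout time of each row
    sub = np.arange(M) % K               # sub-image index of each row
    lag_within = t_r[sub == 0][-1] - t_r[sub == 0][0]   # about (M - K) * dt_r
    lag_between = t_r[sub == 1][0] - t_r[sub == 0][0]   # = dt_r
    return t_r, sub, lag_within, lag_between
```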
High dynamic range (HDR) imaging generally requires either multiple images of a particular scene taken with different exposures or specially designed image sensors and/or hardware. Capturing multiple images of a particular scene with different exposures requires a static scene and a stable camera to avoid ghosting and/or motion blur. Specially designed image sensors, on the other hand, are expensive. Accordingly, these requirements generally make high dynamic range imaging inconvenient or impractical, especially for handheld consumer cameras.
In accordance with some embodiments, high dynamic range images can be obtained from a single captured image by controlling the exposure length, Δte(y), for each row of the pixel array. Moreover, as described herein, by controlling the readout timing, tr(y), and the exposure length, Δte(y), for each row of the pixel array, high dynamic range images, where motion blur is substantially removed, can be obtained from a single captured image.
In some embodiments, an optimal exposure for each row of the pixel array can be determined (sometimes referred to herein as “adaptive row-wise auto-exposure”) using a process 1500 as illustrated in
In response to capturing the temporary image, an optimal exposure can be determined for each row of the pixel array at 1520. Generally speaking, an optimal exposure for a given row can be determined that minimizes the number of saturated and under-exposed pixels within the row while keeping a substantial number of pixels well exposed. As shown in
It should be noted that scene radiance can be measured everywhere except in the saturated regions, where no information is recorded. It should also be noted that a small value for the scale factor, s, corresponds to a conservative auto-exposure algorithm.
Accordingly, the optimal exposure, Δte(y), for the y-th row can be found by maximizing the following equation:
where μ(i) can be defined as:
$$\mu(i)=\mu_s(i)+\lambda_d\,\mu_d(i)+\lambda_g\,\mu_g(i),$$
which includes weights λ_d and λ_g and lower and upper bounds of the exposure adjustment, Δt_l and Δt_u.
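The objective above and its terms μs(i), μd(i), and μg(i) are not reproduced here. By way of a non-limiting illustration only, the following simplified Python sketch captures the idea described in the text: for each row of the normally exposed temporary image, it scores a set of candidate exposures by the fraction of pixels that would be neither saturated nor under-exposed and keeps the best one. The thresholds, the candidate set, and the function name are assumptions made for this example.

```python
import numpy as np

def row_optimal_exposures(temp_image, temp_exposure, candidates,
                          low=0.05, high=0.95):
    """Simplified, hypothetical row-wise auto-exposure sketch.

    temp_image: temporary image normalized to [0, 1], shape (M, N).
    temp_exposure: exposure time used for the temporary image.
    candidates: sequence of candidate exposure times for each row.
    Returns the chosen exposure time for each of the M rows."""
    best = np.empty(temp_image.shape[0])
    for y, row in enumerate(temp_image):
        scores = []
        for dt in candidates:
            predicted = np.clip(row * (dt / temp_exposure), 0.0, 1.0)
            scores.append(np.mean((predicted > low) & (predicted < high)))
        best[y] = candidates[int(np.argmax(scores))]
    return best
```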
Referring back to
In some embodiments, the second image (Ic) can be normalized to generate the final output image (Ir) at 1540. For example, in some embodiments, the second image can be normalized by dividing the second image by the row-wise exposure, Δte(y). Accordingly:
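By way of a non-limiting illustration, the normalization just described (dividing each row of the second image by its row-wise exposure) can be written as the following minimal Python sketch; the variable names are assumptions.

```python
import numpy as np

def normalize_rows(Ic, dt_e):
    """Ir(x, y) = Ic(x, y) / dt_e(y): divide each row of the row-wise
    exposed capture Ic (shape (M, N)) by its exposure dt_e (shape (M,))."""
    return Ic / dt_e[:, None]
```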
Illustrative examples of the images and exposures obtained using process 1500 of
It should be noted that the adaptive row-wise auto-exposure mechanism described above requires little to no image processing. However, in some embodiments, additional post-processing, such as de-noising, can be performed. For example, noise amplification along the vertical direction, which can be derived from the exposure patterns, can be considered. In another example, for scenes where the dynamic range is predominantly spanned in the horizontal direction (e.g., a dark room that is being viewed from the outside), the adaptive row-wise auto-exposure mechanism can revert the imaging device to use a conventional auto-exposure feature.
In some embodiments, high dynamic range images can be obtained from a single captured image using the above-mentioned adaptive row-wise auto-exposure approach with the previously described coded readout pattern. Using adaptive row-wise auto-exposure to determine the optimal exposure for each row in the pixel array along with a coded readout pattern, multiple exposures can be coded into a single captured image and planar camera motion can be estimated to remove blur due to camera shake.
These sub-images (I1, I2, and I3) can be used to compose a high dynamic range image. For example, for static scenes/cameras, an output high dynamic range image can be produced by combining the sub-images of multiple exposures.
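By way of a non-limiting illustration, one simple way to combine differently exposed sub-images of a static scene into a high dynamic range estimate is sketched below. The triangle weighting that down-weights saturated and under-exposed pixels is an assumption made for this example and is not asserted to be the specific combination used in any embodiment.

```python
import numpy as np

def compose_hdr(sub_images, exposures):
    """Minimal HDR composition sketch for a static scene.

    Each sub-image (values in [0, 1]) is converted to a radiance estimate by
    dividing by its exposure; estimates are blended with weights that peak at
    mid-gray and fall off toward saturated and under-exposed values."""
    num = np.zeros_like(sub_images[0], dtype=float)
    den = np.zeros_like(sub_images[0], dtype=float)
    for img, dt in zip(sub_images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)      # triangle weight, peak at 0.5
        num += w * (img / dt)
        den += w
    return num / np.maximum(den, 1e-8)
```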
In addition, in some embodiments, these sub-images (I1, I2, and I3) obtained from using coded pattern 1900 can be used to compose a high dynamic range image and remove motion blur due to camera shake as shown in process flow 1950. For example, motion blur due to camera shake is a common problem in photography, and the compact form and small lenses of handheld digital cameras only serve to increase the camera shake problem. For handheld digital cameras, motion blur in images caused by camera shake is inevitable, especially for long exposure times. Accordingly, in some embodiments, where camera shake is an issue, optical flow can be determined between the sub-images to account for the camera shake.
It should be noted that, as the sub-images are obtained using a staggered readout, the time lag between the sub-images is small. Therefore, the camera shake velocity can generally be assumed to be the same across the sub-images. It should also be noted that, within one frame time, the amount of motion caused by camera shake is small and can be approximated as a planar motion.
In some embodiments, the sub-images, which are sampled at different timings, and the estimated optical flow from the sub-images can be used to remove motion blur from a single image caused by camera shake or any other suitable motion as shown in flow 1950. A motion vector, $\vec{u}=[u_x, u_y]$, can be estimated from sub-images I1 and I2 by the estimated optical flow:
$$\vec{u}=\mathrm{average}\big(\mathrm{computeFlow}(I_1, I_2-I_1)\big)$$
The motion vector can be used to determine blur kernels. More particularly, by de-blurring two composed images, I1⊕I2 and I1⊕I2⊕I3, ringing can be effectively suppressed, where the operator ⊕ denotes that the images are first center-aligned with the motion vector, $\vec{u}$, and then added together. The two de-blurred images can be represented as:
$$I_{b1}=\mathrm{deblur}(I_1\oplus I_2,\ \vec{u},\ \Delta t_{e1},\ \Delta t_{e2})$$
$$I_{b2}=\mathrm{deblur}(I_1\oplus I_2\oplus I_3,\ \vec{u},\ \Delta t_{e1},\ \Delta t_{e2},\ \Delta t_{e3})$$
Accordingly, the output de-blurred high dynamic range (HDR) image can be calculated by the following:
It should be noted that the optimal exposure ratios Δte3:Δte2:Δte1 (e.g., 8Δte1:2Δte1:Δte1) can be determined based at least in part on the desired extended dynamic range and the noise amplification due to motion de-blurring. For example, a larger Δte3:Δte1 exposure ratio provides a larger extended dynamic range, but can also amplify noise during motion de-blurring.
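By way of a non-limiting illustration, one plausible reading of the ⊕ operator (center-align the images with the motion vector u and then add them) is sketched below; integer np.roll shifts are used for brevity, the deblur step itself is not reproduced, and the alignment rule is an assumption made for this example.

```python
import numpy as np

def center_align_add(images, u):
    """Sketch of the '⊕' operator: shift each image by its offset from the
    temporal center of the stack, scaled by the motion vector u = (ux, uy),
    and add the shifted images (integer shifts for brevity)."""
    n = len(images)
    out = np.zeros_like(images[0], dtype=float)
    for i, img in enumerate(images):
        s = i - (n - 1) / 2.0                   # offset from the stack center
        dx, dy = int(round(s * u[0])), int(round(s * u[1]))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out
```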
Illustrative examples of the coded input image, sub-images, and output high dynamic range image obtained using the staggered readout and multiple exposure coding 1900 of
Although
It should be noted that, as described above in connection with interlaced readout patterns, the sub-images obtained using coded pattern 2100 have substantially reduced skew compared with images obtained using conventional rolling shutter. Accordingly, coded pattern 2100 of
In accordance with some embodiments, mechanisms are provided for controlling exposure length and readout times that can recover a skew-free video from a single captured image. Generally speaking, by modeling the scene brightness for one pixel (x,y) over time t as a one-dimensional signal, the corresponding pixel intensity in the captured image is a linear projection of this one-dimensional signal with the exposure pattern. Accordingly, with randomly coded exposure patterns, space-time volume (e.g., a skew-free video) can be reconstructed from a single captured image by exploiting the sparsity in signal gradients.
In a more particular example, a skew-free video can be recovered from a single captured image using compressive sensing techniques. Compressive sensing techniques provide an approach for reconstructing sparse signals from far fewer samples than classical approaches based on the Shannon sampling theorem would require. As described above, the captured image I(x,y) can be described as a line-integral measurement of the space-time volume E(x,y,t). Accordingly, by controlling the shutter function, S(x,y,t), from a single image, several measurements can be acquired within neighboring rows to recover the space-time volume E(x,y,t).
Consider the time-varying appearance of a given pixel (x,y) within one time frame E(x,y,t), where 0 ≤ t ≤ MΔtr. This can be discretized into a P-element vector, where
As shown in the coded pattern 2200 of
If b denotes the intensities of the K neighboring pixels in the input image I, b can be represented as:
Accordingly, Ax=b, where A is a K×P matrix representing the coding patterns.
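By way of a non-limiting illustration, the coding matrix A can be assembled from the per-row exposure windows as in the following Python sketch, assuming slot-aligned exposure windows and that t_s and t_r are given as integer slot indices; these assumptions are made only for this example.

```python
import numpy as np

def build_coding_matrix(t_s, t_r, rows, P):
    """Assemble the K x P coding matrix A for K neighboring rows.

    Each row of A sums the time samples that fall inside the corresponding
    pixel row's exposure window [t_s[y], t_r[y]), so that A @ x reproduces
    the measured intensities b."""
    A = np.zeros((len(rows), P))
    for k, y in enumerate(rows):
        A[k, t_s[y]:t_r[y]] = 1.0
    return A
```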
The process for recovering a skew-free video from a single captured image begins by obtaining an initial estimate E0 using, for example, block matching and linear interpolation. In a more particular example, the input image can be normalized by dividing by the exposure such that: In(x,y)=I(x,y)/Δte(y). Each pixel (x,y) in the normalized image (In) corresponds to a set of sampled voxels in the initial estimate, E0(x,y,t), where ts(y) ≤ t ≤ tr(y). These sampled voxels can be used to fill in portions of the initial estimate, E0.
In some embodiments, a particular voxel can be interpolated multiple times. If a particular voxel is interpolated multiple times, the value of that voxel can be set to the result computed from the matched pair with the minimum matching error. This fills in a substantial portion of the initial estimate, E0. The remaining voxels can then be initialized to the values in the normalized image (In(x,y)) at the corresponding rows.
The initial estimate, E0, can then be used to reconstruct the time-varying appearance x for each pixel by exploiting the sparsity in the gradient of the pixel's radiance over time:
$$\min_x\ |x'| + \lambda\,|x-x_0|$$
where |x′| is the L1 norm of the gradient of x over time, λ is a weight parameter, and x0 is the corresponding signal in E0. An optimization using the initial estimate can be run multiple times (e.g., twice). The output of the first iteration can be used to adaptively adjust the values of K for different rows. It should be noted that for rows with large variance in the recovered appearance over time, the K value can be lowered, and vice versa. The adjustment is performed based on a precomputed or predetermined mapping between K values, variances of appearance over time, and reconstruction errors. Multiple iterations can continue to be performed and, in some embodiments, a median filter can be applied prior to generating the final output.
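By way of a non-limiting illustration, the per-pixel objective above can be minimized with a standard smooth surrogate for the absolute value, as in the following Python sketch; adding the measurement constraint Ax = b (for example, as a quadratic penalty) is omitted for brevity, and the surrogate and solver choice are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct_pixel(x0, lam=0.1, eps=1e-3):
    """Sketch of min |x'| + lam * |x - x0| for one pixel's appearance over time.

    A smooth Charbonnier surrogate sqrt(v**2 + eps**2) replaces |v| so that
    L-BFGS-B can be applied; x0 is the initial estimate taken from E0."""
    def smooth_abs(v):
        return np.sqrt(v * v + eps * eps)

    def objective(x):
        return smooth_abs(np.diff(x)).sum() + lam * smooth_abs(x - x0).sum()

    return minimize(objective, x0, method="L-BFGS-B").x
```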
In some embodiments, if a frame buffer is available on the CMOS image sensor, intermittent exposures can be implemented for each pixel, where each pixel can receive multiple row-select and row-reset signals during one frame as shown in
In some embodiments, hardware used in connection with the coded mechanisms can include an image capture device. The image capture device can be any suitable device for capturing images and/or video, such as a portable camera, a video camera or recorder, a computer camera, a scanner, a mobile telephone, a personal data assistant, a closed-circuit television camera, a security camera, an Internet Protocol camera, etc.
The image capture device can include an image sensor, such as the image sensor shown in
In some embodiments, the hardware can also include an image processor. The image processor can be any suitable device that can process images and image-related data as described herein. For example, the image processor can be a general purpose device such as a computer or a special purpose device, such as a client, a server, an image capture device (such as a camera, video recorder, scanner, mobile telephone, personal data assistant, etc.), etc. It should be noted that any of these general or special purpose devices can include any suitable components such as a processor (which can be a microprocessor, digital signal processor, a controller, etc.), memory, communication interfaces, display controllers, input devices, etc.
In some embodiments, the hardware can also include image storage. The image storage can be any suitable device for storing images such as memory (e.g., non-volatile memory), an interface to an external device (such as a thumb drive, a memory stick, a network server, or other storage or target device), a disk drive, a network drive, a database, a server, etc.
In some embodiments, any suitable computer readable media can be used for storing instructions for performing the processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
Accordingly, methods and systems for coded readout of an image are provided.
Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is only limited by the claims which follow. Features of the disclosed embodiments can be combined and rearranged in various ways.
This application is a continuation of U.S. patent application Ser. No. 13/504,905, filed Dec. 7, 2012, which is the United States National Phase Application under 35 U.S.C. §371 of International Application No. PCT/US2010/054424, filed Oct. 28, 2010, which claims the benefit of U.S. Provisional Patent Application No. 61/255,802, filed Oct. 28, 2009. Each of the above-referenced patent applications is hereby incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3971065 | Bayer | Jul 1976 | A |
4590367 | Ross et al. | May 1986 | A |
4623928 | Handy | Nov 1986 | A |
4630307 | Cok | Dec 1986 | A |
4652916 | Suzaki et al. | Mar 1987 | A |
4868649 | Gaudin | Sep 1989 | A |
4873561 | Wen | Oct 1989 | A |
4918534 | Lam et al. | Apr 1990 | A |
5030985 | Bryant | Jul 1991 | A |
5138458 | Nagasaki et al. | Aug 1992 | A |
5185671 | Lieberman et al. | Feb 1993 | A |
5193016 | Cornuejols | Mar 1993 | A |
5282063 | Deacon et al. | Jan 1994 | A |
5309243 | Tsai | May 1994 | A |
5373322 | Laroche et al. | Dec 1994 | A |
5420635 | Konishi et al. | May 1995 | A |
5455621 | Morimura | Oct 1995 | A |
5629734 | Hamilton et al. | May 1997 | A |
5638118 | Takahashi et al. | Jun 1997 | A |
5638119 | Cornuejols | Jun 1997 | A |
5670280 | Lawandy | Sep 1997 | A |
5696848 | Patti et al. | Dec 1997 | A |
5703677 | Simoncelli et al. | Dec 1997 | A |
5767987 | Wolff et al. | Jun 1998 | A |
5789737 | Street | Aug 1998 | A |
5801773 | Ikeda et al. | Sep 1998 | A |
5828793 | Mann | Oct 1998 | A |
5889554 | Mutze | Mar 1999 | A |
5990952 | Hamazaki | Nov 1999 | A |
6122408 | Fang et al. | Sep 2000 | A |
6124974 | Burger | Sep 2000 | A |
6501504 | Tatko et al. | Dec 2002 | B1 |
6690422 | Daly et al. | Feb 2004 | B1 |
6753909 | Westerman et al. | Jun 2004 | B1 |
6809761 | Tamaru | Oct 2004 | B1 |
6864916 | Nayar et al. | Mar 2005 | B1 |
6922209 | Hwang et al. | Jul 2005 | B1 |
7084905 | Nayar et al. | Aug 2006 | B1 |
7304771 | Walmsley et al. | Dec 2007 | B2 |
7428019 | Irani et al. | Sep 2008 | B2 |
7511643 | Baraniuk et al. | Mar 2009 | B2 |
7525583 | Kimbell | Apr 2009 | B2 |
7612822 | Ajito et al. | Nov 2009 | B2 |
7639289 | Agrawal et al. | Dec 2009 | B2 |
7697778 | Steinberg et al. | Apr 2010 | B2 |
7924321 | Nayar et al. | Apr 2011 | B2 |
7986857 | Kim et al. | Jul 2011 | B2 |
7999858 | Nayar et al. | Aug 2011 | B2 |
8068153 | Kumar et al. | Nov 2011 | B2 |
8248496 | Sekine | Aug 2012 | B2 |
8797433 | Kaizu et al. | Aug 2014 | B2 |
8798395 | Jo | Aug 2014 | B2 |
8803985 | Kaizu et al. | Aug 2014 | B2 |
8848063 | Jo et al. | Sep 2014 | B2 |
8933924 | Sato | Jan 2015 | B2 |
9036060 | Kaizu et al. | May 2015 | B2 |
9060134 | Mitsunaga | Jun 2015 | B2 |
9100514 | Gu et al. | Aug 2015 | B2 |
9124809 | Kaizu et al. | Sep 2015 | B2 |
9344637 | Kasai et al. | May 2016 | B2 |
9357137 | Mitsunaga | May 2016 | B2 |
20020050518 | Roustaei | May 2002 | A1 |
20030076423 | Dolgoff | Apr 2003 | A1 |
20030108101 | Frossard et al. | Jun 2003 | A1 |
20030160875 | Kobayashi et al. | Aug 2003 | A1 |
20060221067 | Kim et al. | Oct 2006 | A1 |
20060291844 | Kakkori | Dec 2006 | A1 |
20070030342 | Wilburn et al. | Feb 2007 | A1 |
20070103595 | Gong et al. | May 2007 | A1 |
20070104382 | Jasinschi | May 2007 | A1 |
20070223059 | Oishi | Sep 2007 | A1 |
20080002043 | Inoue et al. | Jan 2008 | A1 |
20080219655 | Yoon et al. | Sep 2008 | A1 |
20080278610 | Boettiger | Nov 2008 | A1 |
20080316862 | Bernecky et al. | Dec 2008 | A1 |
20090257653 | Ashikaga | Oct 2009 | A1 |
20100026819 | Koh | Feb 2010 | A1 |
20100141263 | Nakamura | Jun 2010 | A1 |
20100309333 | Smith | Dec 2010 | A1 |
20120218426 | Kaizu et al. | Aug 2012 | A1 |
20120281111 | Jo et al. | Nov 2012 | A1 |
20120287294 | Kaizu et al. | Nov 2012 | A1 |
20120314124 | Kaizu et al. | Dec 2012 | A1 |
20130033616 | Kaizu et al. | Feb 2013 | A1 |
20130050177 | Sato | Feb 2013 | A1 |
20130050284 | Sato | Feb 2013 | A1 |
20130050520 | Takeuchi | Feb 2013 | A1 |
20130051700 | Jo | Feb 2013 | A1 |
20130308044 | Mitsunaga | Nov 2013 | A1 |
20130329128 | Kaizu et al. | Dec 2013 | A1 |
20140192235 | Hitomi et al. | Jul 2014 | A1 |
20140192250 | Mitsunaga | Jul 2014 | A1 |
20140267828 | Kasai et al. | Sep 2014 | A1 |
20140313400 | Kaizu et al. | Oct 2014 | A1 |
20140321766 | Jo | Oct 2014 | A1 |
20140340550 | Kaizu et al. | Nov 2014 | A1 |
20140368697 | Jo et al. | Dec 2014 | A1 |
20150312463 | Gupta et al. | Oct 2015 | A1 |
20150341576 | Gu et al. | Nov 2015 | A1 |
20160248956 | Mitsunaga | Aug 2016 | A1 |
Number | Date | Country |
---|---|---|
4305807 | Oct 1994 | DE |
4420637 | Dec 1995 | DE |
19618476 | Nov 1997 | DE |
1729524 | Dec 1899 | EP |
0472299 | Dec 2006 | EP |
2255465 | Nov 1992 | GB |
2331426 | May 1995 | GB |
59217358 | Dec 1984 | JP |
6070225 | Mar 1992 | JP |
6141229 | May 1994 | JP |
7077700 | Mar 1995 | JP |
7115643 | May 1995 | JP |
7254965 | Oct 1995 | JP |
7254966 | Oct 1995 | JP |
7264488 | Oct 1995 | JP |
8154201 | Jun 1996 | JP |
8223491 | Aug 1996 | JP |
8331461 | Dec 1996 | JP |
8340486 | Dec 1996 | JP |
10069011 | Mar 1998 | JP |
10270673 | Oct 1998 | JP |
2003009006 | Jan 2003 | JP |
2006033381 | Feb 2006 | JP |
2006166294 | Jun 2006 | JP |
2007027604 | Jan 2007 | JP |
2008035278 | May 2008 | JP |
2009177332 | Jun 2009 | JP |
WO 9001844 | Feb 1990 | WO |
WO 9314595 | Jul 1993 | WO |
WO 9705742 | Feb 1997 | WO |
Entry |
---|
Office Action dated Dec. 22, 2015 in Japanese Patent Application No. 2013-555637. |
Office Action dated Dec. 17, 2015 in U.S. Appl. No. 14/001,139. |
Agrawal, A. et al., “Optimal Coded Sampling for Temporal Super-Resolution”, In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 13-18, 2010, pp. 599-606. |
Aharon, M. et al., “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation”, In IEEE Transactions on Signal Processing, vol. 54, No. 11, Nov. 2006, pp. 4311-4322. |
Ait-Aider, O. and Berry, F., “Structure and Kinematics Triangulation with a Rolling Shutter Stereo Rig”, In Proceedings of IEEE International Conference on Computer Vision (ICCV '09), Kyoto, JP, Sep. 29-Oct. 2, 2009, pp. 1835-1840. |
Ait-Aider, O. et al., “Kinematics from Lines in a Single Rolling Shutter Image”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '2007, Minneapolis, MN, US, Jun. 17-22, 2007, pp. 1-6. |
Ait-Aider, O. et al., “Simultaneous Object Pose and Velocity Computation using a Single View from a Rolling Shutter Camera”, In Proceedings of European Conference on Computer Vision (ECCV '06, Graz, AT, May 7-13, 2006, pp. 56-68. |
Baker, S. et al., “A Database and Evaluation Methodology for Optical Flow”, In Proceedings of IEEE International Conference on Computer Vision (ICCV '07), Rio de Janeiro, BR, Oct. 14-20, 2007, pp. 1-8. |
Baone, G.A. and Qi, H., “Demosaicking Methods for Multispectral Cameras Using Mosaic Focal Plane Array Technology”, In Proceedings of SPIE, the International Society for Optical Engineering, vol. 6062, Jan. 15, 2006, pp. 1-13. |
Ben-Ezra, M., et al., “Penrose Pixels: Super-Resolution in the Detector Layout Domain”, In IEEE International Conference on Computer Vision (ICCV), Oct. 14-21, 2007, pp. 1-8. |
Ben-Ezra, M., “Segmentation with Invisible Keying Signal”, In the Proceedings of the Conference on Computer Vision and Pattern Recognition, vol. 1, Jun. 13-15, 2000, pp. 32-37. |
Bradley, D. et al., “Synchronization and Rolling Shutter Compensation for Consumer Video Camera Arrays”, In IEEE International Workop on Projector-Camera Systems (PROCAMS '09), Miami, FL, US, Jun. 20-25, 2009, pp. 1-8. |
Brajovi. V. and Kanade, T., “A Sorting Image Sensor: An Example of Massively Parallel Intensity-to-Time Processing for Low-Latency Computational Sensors”, In Proceedings of IEEE Conference on Robotics and Automaton, Minneapolis, MN, US, Apr. 22-28, 1996, pp. 1638-1643. |
Bruckstein, A. et al., “From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images”, In SIAM Review, vol. 51, No, 1. Feb. 2009, pp. 34-81. |
Bub, G. et al., “Temporal Pixel Multiplexing for Simultaneous High-Speed, High-Resolution Imaging”, Nature Methods, vol. 7, No. 3, Feb. 14, 2010, pp. 209-211. |
Burt, P. and Adelson, E., “A Multiresolution Spline with Application to Image Mosaics”, In ACM Transactions on Graphics, vol. 2, No. 4, Oct. 1983, pp. 217-236. |
Burt, P. and Kolczynski, R., “Enhanced Image Caputre through Fusion”, In Proceedings of International Conference on Computer Vision (ICCV), Berlin, DE, May 11-14, 1993, pp. 173-182. |
Candes, E. and Romberg, J., “Sparsity and Incoherence in Compressive Sampling”, In Inverse Problems, vol. 23, No, 3, Jun. 2007. pp. 969-985. |
Cantles, E. and Tao, T., “Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies?”, In IEEE Transactions on Information Theory, vol. 59, No. 12, Dec. 2006, pp. 5406-5425. |
Candes, E. et al., “Stable Signal Recovery from Incomplete and Inaccurate Measurements”, Communication on Pure and Applied Mathematics, vol. 59, No. 8. Aug. 2006, pp. 1207-1223. |
Chang, Y.-C. and Reid, J.F., “RGB Calibration for Analysis in Machine Vision”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 5, No. 10, Oct. 1996, pp. 1414-1422. |
Chen, T. et al., “How Small Should Pixel Size Be?”, In Proceedings of SPIE, vol. 3965, Apr. 2000, pp. 451-459. |
Chi, C. and Ben-Ezra, M., “Spectral Probing: Multi-Spectral Imaging by Optimized Wide Band Illumination”, In Proceedings of the First International Workshop on Photometric Analysis For Computer Vision (PACV '07), Rio de Janeiro, BR, Oct. 2007, pp. 1-8. |
Dabov, K. et al., “Image Denoising by Sparse 3D Transform-Domain Collaboration Filtering”, In IEEE Transactions on Image Processing, vol. 16, No. 8, Aug. 2007, pp. 2080-2095. |
Debevec, P. and Malik, J., “Recovering High Dynamic Range Radiance Maps from Photographs”, In Proceedings of ACM SIGGRAPH, Los Angeles, CA, US, Aug. 5-7, 1997, pp. 369-378. |
Donoho. D. et al., “Compressed Sensing”, In IEEE Transactions on Information Theory, vol. 52, No. 4, Apr. 2006, pp. 1289-1306. |
Donoho, D. et al., “Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise”, In IEEE Transactions on Information Theory, vol. 52, No. 1, Jan. 2006, pp. 6-18. |
Elad. M. and Aharon, M., “Image Denoising via Learned Dictionaries and Sparse Representation”, In IEEE Computer Society Conference on in Computer Vision an Pattern Recognition (CVPR), Jun. 17-22, 2006, pp. 895-900. |
Fife, K., et al., “A 0.5 μm Pixel Frame-Transfer CCD Image Sensor in 110nm CMOS”, In IEEE International Electron Devices Meeting (IEDM 2007), Dec. 10-12, 2007, pp. 1003-1006. |
Fife, K., et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7μm Pixels in 0.11μm CMOS”, In Proceedings in the IEEE International Solid-State Circuit Conference: Digest of Technical Papers (ISSCC '08), Feb. 3-8, 2008, pp. 48-50. |
Fossum. E.R., “CMOS Image Sensors: Electronic Camera-on-a-Chip”, In international Electron Devices Meeting, Washington D.C., US, Dec. 10-13, 1995, pp. 17-25. |
Fujifilm, “Fujifilm Announces Super CCD EXR”, Press Release, Sep. 22, 2008, pp. 1-5, available at: http://www.dpreview.com/news/0809/08092210fujifilmexr.asp. |
Gallo, O. et al., “Artifact-Free High Dynamic Range Imaging”, In Proceedings of IEEE International Conference on Computational Photography (ICCCP '09), San Francisco, CA, US, Apr. 16-17, 2009. pp. 1-7. |
Gamal, A. and Eltoukhy, H., “CMOS Image Sensors”, In IEEE Circuits and Devices Magazine, vol. 5, May 2005, pp. 6-20. |
Geyer, C. et al., “Geometric Models of Rolling-Shutter Cameras”, In IEEE Workshop on Omnidirectional Vision, Oct. 21, 2005, pp. 1-8. |
Gu, J. et al., “Coded Rolling Shutter Photography: Flexible Space-Time Sampling”, In IEEE International Conference on Computational Photography (ICCP '10), Cambridge, MA, US, Mar. 29-30, 2010, pp. 1-8. |
Gu, J. et al., “Compressive Structured Light for Recovering Inhomogeneous Participating Media”, In Proceedings of European Conference on Computer Vision (ECCV '08), Marseille, FR, Oct. 12-18, 2008, pp. 845-858. |
Gupta, A. et al., “Enhancing and Experiencing Spacetime Resolution with Videos and Stills”, In International Conference on Computational Photography (ICCP '09), San Francisco, CA, US, Apr. 16-17, 2009, pp. 1-9. |
Gupta, M. et al., “Flexible Voxels for Motion-Aware Videography”, In Proceedings of the 11th European Conference on Computer Vision: Part 1 (ECCV'10), Crete, GR, Sep. 5-11, 2010, pp. 100-114. |
Hirakawa, K. and Parks, T.W., “Adaptive Homogeneity-Directed Demosaicing Algorithm”, In IEEE International Conference on Image Processing, vol. 3, Sep. 14-17, 2003, pp. 669-672. |
International Patent Application No. PCT/US2000/014515, filed May 26, 2000 |
International Patent Application No. PCT/US2009/038510, filed Mar. 27, 2009. |
International Patent Application No. PCT/US2012/026816, filed Feb. 27, 2012. |
International Preliminary Report on Patentability dated Oct. 7, 2010 in International Patent Application No. PCT/US2009/038510. |
International Preliminary Report on Patentability dated May 1, 2012 in International Patent Application No. PCT/US2010/054424. |
International Preliminary Report on Patentability dated Sep. 6, 20013 in International Patent Application No. PCT/US2012/026816. |
International Search Report dated Dec. 21, 2010 in International Patent Application No. PCT/US2010/054424. |
International Search Report dated Apr. 12, 2000 in International Patent Application No. PCT/US2000/014515. |
International Search Report dated May 23, 2012 in International Patent Application No. PCT/US2012/026816. |
International Search Report dated May 27, 2009 in International Patent Application No. PCT/US2009/038510. |
Kapur, J.P., “Face Detection in Color images”, Technical Report (EE499), Department of Electrical Engineering, University of Washington, Spring 1997, pp. 1-6. |
Kemeny, S. et al., “Multiresoution Image Ssensor”, In IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 4, Aug. 1997, pp. 575-583. |
Kimmel, R., “Demosaicing: Image Reconstruction from Color CCD Samples”, In IEEE Transactions on Image Processing, vol. 8, No. 9, Sep. 1999, pp. 1221-1228. |
Kleinfelder, S. et al., “A 10.000 Frames/s CMOS Digital Pixel Sensor”, IEEE Journal of Solid-Stale Circuits, vol. 36, No. 12, Dec. 2001, pp. 2049-2059. |
Levin, A. et al., “image and Depth from a Conventional Camera with a Coded Aperture”, In ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH '07, vol. 26, No. 3, Jul. 2007, pp, 1-9. |
Liang, C.-K. et al., “Analysis and Compensation of Rolling Shutter Effect”, In IEEE Transations on Image Processing, vol, 17, No. 8, Jun. 2008, pp. 1323-1330. |
Liu, X. and Gamal, A., “Synthesis of High Dynamic Range Motion Blur Free Image from Multiple Captures”, In IEEE Transactions on Circuits and Systems—I: Fundamental Theory and Application, vol. 50. No. 4, Apr. 2003, pp. 530-539. |
Lu, P.-Y. et al., “High Dynamic Range Image Reconstruction from Hand-Held Cameras”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), Miami, FL, US, Jun. 20-25, 2009, pp. 509-516. |
Lu, W. and Tan, Y.P., “Color Filter Army Demosaicking: New Method and Performance Measures”, In IEEE Transactions on Image Processing, vol. 12, No. 10, Oct. 2003, pp. 1194-1210. |
Lyon, R. and Hubel, P., “Eyeing the Camera: Into the Next Century”, In the IS&T Reporter, vol. 17, No. 6, Dec. 2002, pp. 1-7. |
Madden, B. C., “Extended Intensity Range Imaging”, Technical Report MS-CIS-93-96, Grasp Laboratory, University of Pennsylvania, Dec. 1993, pp. 1-21. |
Mairal, J. et al., “Learning Multiscale Sparse Representations for Image and Video Restoration”, Technical Support, 2007, pp. 214-241. |
Mairal, J. et al., “Non-Local Sparse Models for Image Restoration”, In IEEE 12th International Conference on Computer Vision (ICCV), Sep. 29-Oct. 2, 2009, pp. 2272-2279. |
Mann. S. and Picard, R.W., “On Being ‘Undigital’ with Digital Cameras: Extending Dynamic Range by Combining Differently Exposed Pictures”, In Proceedings of Society for Imaging Science and Technology's 48th Annual Conference (IS&T '95), Washington DC, US, May 1995, pp. 442-448. |
Mannami, H. et al., “Adaptive Dynamic Range Camera with Reflective Liquid Crystal”, In Journal of Visual Communication and Image Representation, vol. 18, No. 5, Oct. 2007, pp. 359-365. |
Marcia. R.F. and Willett, R.M., “Compressive Coded Aperture Video Reconstruction”, In European Signal Processing Conference (EUSIPCO '08), Lausanne, CH, Aug. 25-29, 2008, pp. 1-5. |
Mase, M. et al., “A Wide Dynamic Range CMOS Image Sensor with Multiple Exposure-Time Signal Outputs and 12-hit Column-Parallel Cyclic A/D Converters”, In IEEE Journal of Solid-State Circuits, vol. 40, No. 12, Dec. 2005, pp. 2787-2795. |
Milgram, D., “Computer methods for creating photomosaics”, In IEEE Transactions on Computers, vol c-24, No. 5, May 1975, pp. 1113-1119. |
Mitsunaga, T. and Nayar, S., “Radiometric Self Calibration”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Fort Collins, CO, US, Jun. 23-25, 1999, pp. 374-380. |
Nagahara, H. et al., “Programmable Aperture Camera Using LCoS”, In Proceedings of the 11th European Conference on Computer Vision: Part VI (ECCV'10), Crete, GR, Sep. 5-11, 2010, pp. 337-350. |
Nakamura, J., “Image Sensors and Signal Processingfor Digital Still Cameras”, CRC Press, Sep. 2005, pp. 1-322. |
Narasimhan, S.G. and Nayar, S.K., “Enhancing Resolution Along Multiple Imaging Dimensions Using Assorted Pixels”, In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 27, No. 4, Apr. 2005, pp. 518-530.
Nayar, S.K. and Branzoi, V., “Adaptive Dynamic Range Imaging: Optical Control of Pixel Exposures Over Space and Time”, In the International Conference on Computer Vision (ICCV '03), Nice, FR, Oct. 13-16, 2003, pp. 1168-1175.
Nayar, S.K. and Mitsunaga, T., “High Dynamic Range Imaging: Spatially Varying Pixel Exposures”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), Hilton Head Island, SC, US, Jun. 15, 2000, pp. 472-479.
Nayar, S.K. et al., “Programmable Imaging: Towards a Flexible Camera”, In International Journal of Computer Vision, vol. 70, No. 1, Oct. 2006, pp. 7-22.
Ng, R. et al., “Light Field Photography with a Hand-Held Plenoptic Camera”, Technical Report CSTR 2005-02, Stanford University, Feb. 2005, pp. 1-11.
Notice of Allowance dated May 23, 2014 in U.S. Appl. No. 12/736,333.
Nyquist, H., “Certain Topics in Telegraph Transmission Theory”, In Proceedings of the IEEE, vol. 90, No. 2, Feb. 2002, pp. 280-305.
Office Action dated Oct. 17, 2013 in U.S. Appl. No. 12/736,333.
Office Action dated Oct. 17, 2014 in Japanese Patent Application No. 2012-537033.
Office Action dated Oct. 24, 2003 in U.S. Appl. No. 09/326,422.
Office Action dated Oct. 8, 2009 in U.S. Appl. No. 10/886,746.
Office Action dated Nov. 13, 2012 in Japanese Patent Application No. 2010-219490.
Office Action dated Dec. 8, 2010 in European Patent Application No. 00936324.3.
Office Action dated Feb. 14, 2012 in European Patent Application No. 09724220.0.
Office Action dated Feb. 20, 2007 in European Patent Application No. 00936324.3.
Office Action dated Feb. 25, 2009 in U.S. Appl. No. 10/886,746.
Office Action dated Feb. 25, 2013 in U.S. Appl. No. 12/736,333.
Office Action dated Mar. 15, 2011 in European Patent Application No. 09724220.0.
Office Action dated Mar. 31, 2004 in U.S. Appl. No. 09/326,422.
Office Action dated May 18, 2015 in Japanese Patent Application No. 2012-537033.
Office Action dated May 22, 2015 in U.S. Appl. No. 14/546,627.
Office Action dated May 27, 2015 in U.S. Appl. No. 14/001,139.
Office Action dated Jun. 29, 2010 in Japanese Patent Application No. 2009-297250.
Office Action dated Jun. 30, 2009 in Japanese Patent Application No. 2001-504676.
Office Action dated Jul. 30, 2013 in Japanese Patent Application No. 2011-502091.
Office Action dated Jul. 8, 2013 in U.S. Appl. No. 13/045,270.
Office Action dated Sep. 15, 2014 in U.S. Appl. No. 13/504,905.
Office Action dated Sep. 2, 2014 in Japanese Patent Application No. 2011-502091.
Park, J.Y., “A Multiscale Framework for Compressive Sensing of Video”, In Proceedings of the 27th Conference on Picture Coding Symposium (PCS '09), Chicago, IL, US, May 6-8, 2009, pp. 197-200.
Park, J.I. et al., “Multispectral Imaging Using Multiplexed Illumination”, In Proceedings of the IEEE International Conference on Computer Vision (ICCV '07), Rio de Janeiro, BR, Oct. 14-21, 2007, pp. 1-8.
Parkkinen, J.P.S. et al., “Characteristic Spectra of Munsell Colors”, In Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 6, No. 2, Feb. 1989, pp. 318-322.
Pati, Y.C. et al., “Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition”, In Conference Record of the 27th Asilomar Conference on Signals, Systems, and Computers, vol. 1, Nov. 1993, pp. 40-44.
Pattanaik, S.N. et al., “A Multiscale Model of Adaptation and Spatial Vision for Realistic Image Display”, In Proceedings of the 25th Annual Conference on Computer Graphics (SIGGRAPH '98), Orlando, FL, US, Jul. 19-24, 1998, pp. 278-298.
Peers, P. et al., “Compressive Light Transport Sensing”, In ACM Transactions on Graphics, vol. 28, No. 1, Jan. 2009, pp. 1289-1306.
Peleg, S. and Herman, J., “Panoramic Mosaics by Manifold Projection”, In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR '97), San Juan, Puerto Rico, US, Jun. 17-19, 1997, pp. 338-343.
Protter, M. and Elad, M., “Image Sequence Denoising via Sparse and Redundant Representations”, In IEEE Transactions on Image Processing, vol. 18, No. 1, Jan. 2009, pp. 27-35.
Quan, S. et al., “Unified Measure of Goodness and Optimal Design of Spectral Sensitivity Functions”, In Journal of Imaging Science and Technology, vol. 46, No. 6, Nov./Dec. 2002, pp. 485-497.
Raskar, R. et al., “Coded Exposure Photography: Motion Deblurring Using Fluttered Shutter”, In Proceedings of ACM SIGGRAPH 2006, vol. 25, No. 3, Jul. 2006, pp. 795-804.
Reddy, D., “P2C2: Programmable Pixel Compressive Camera for High Speed Imaging”, In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), Colorado Springs, CO, US, Jun. 20-25, 2011, pp. 329-336.
Rubinstein, R. et al., “Dictionaries for Sparse Representation Modeling”, In Proceedings of the IEEE, vol. 98, No. 6, Jun. 2010, pp. 1045-1057.
Sankaranarayanan, A. et al., “Compressive Acquisition of Dynamic Scenes”, In Proceedings of the 11th European Conference on Computer Vision: Part I (ECCV'10), Crete, GR, Sep. 5-11, 2010, pp. 129-142.
Savard, J., “Color Filter Array Designs”, In Quadibloc, Feb. 19, 2006, pp. 1-18.
Sharma, G. and Trussell, H.J., “Figures of Merit for Color Scanners”, In IEEE Transactions on Image Processing, vol. 6, No. 7, Jul. 1997, pp. 990-1001.
Shogenji, R. et al., “Multispectral Imaging Using Compact Compound Optics”, In Optics Express, vol. 12, No. 8, Apr. 19, 2004, pp. 1643-1655.
Tropp, J.A. and Gilbert, A.C., “Signal Recovery from Random Measurements via Orthogonal Matching Pursuit”, In IEEE Transactions on Information Theory, vol. 53, No. 12, Dec. 2007, pp. 4655-4666.
Tropp, J.A., “Just Relax: Convex Programming Methods for Subset Selection and Sparse Approximation”, Technical Report, California Institute of Technology, 2004, pp. 1-39.
U.S. Appl. No. 09/326,422, filed Jun. 4, 1999.
U.S. Appl. No. 10/886,746, filed Jul. 7, 2004.
U.S. Appl. No. 12/736,333, filed May 11, 2011.
U.S. Appl. No. 13/045,270, filed Mar. 10, 2011.
U.S. Appl. No. 61/072,301, filed Mar. 28, 2008.
U.S. Appl. No. 61/194,725, filed Sep. 30, 2006.
U.S. Appl. No. 61/446,970, filed Feb. 25, 2011.
Veeraraghavan, A. et al., “Coded Strobing Photography: Compressive Sensing of High-Speed Periodic Events”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, No. 4, Apr. 2011, pp. 671-686.
Veeraraghavan, A. et al., “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing”, In ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH '07, vol. 26, No. 3, Jul. 2007, pp. 1-12.
Wakin, M.B. et al., “Compressive Imaging for Video Representation and Coding”, In Proceedings of Picture Coding Symposium (PCS), Beijing, CN, Apr. 24-26, 2006, pp. 1-6.
Wilburn, B. et al., “High Speed Video Using a Dense Camera Array”, In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), Washington, D.C., US, Jun. 27-Jul. 2, 2004, pp. 294-301.
Written Opinion dated Dec. 21, 2010 in International Patent Application No. PCT/US2010/054424.
Written Opinion dated Mar. 13, 2002 in International Patent Application No. PCT/US2000/014515.
Written Opinion dated May 23, 2012 in International Patent Application No. PCT/US2012/026816.
Written Opinion dated May 27, 2009 in International Patent Application No. PCT/US2009/038510.
Yadid-Pecht, O. and Fossum, E., “Wide Intrascene Dynamic Range CMOS APS Using Dual Sampling”, In IEEE Transactions on Electron Devices, vol. 44, No. 10, Oct. 1997, pp. 1721-1723.
Yamada, K. et al., “Effectiveness of Video Camera Dynamic Range Expansion for Lane Mark Detection”, In Proceedings of the IEEE Conference on Intelligent Transportation Systems (ITSC '97), Boston, MA, US, Nov. 9-12, 1997, pp. 584-588.
Yang, D.X.D. et al., “A 640 × 512 CMOS Image Sensor with Ultrawide Dynamic Range Floating-Point Pixel-Level ADC”, In IEEE Journal of Solid-State Circuits, vol. 34, No. 12, Dec. 1999, pp. 1821-1834.
Yoshihara, S. et al., “A 1/1.8-inch 6.4 Mpixel 60 frames/s CMOS Image Sensor with Seamless Mode Change”, In IEEE International Solid-State Circuits Conference (ISSCC), vol. 41, No. 12, Dec. 2006, pp. 2995-3006.
Yoshimura, S. et al., “A 48K frames/s CMOS Image Sensor for Real-Time 3-D Sensing and Motion Detection”, In IEEE International Solid-State Circuits Conference (ISSCC), vol. 436, Feb. 2001, pp. 94-95.
Yuan, L. et al., “Image Deblurring with Blurred and Noisy Image Pairs”, In ACM Transactions on Graphics (SIGGRAPH '07), vol. 26, No. 3, Jul. 2007, pp. 1-10.
Number | Date | Country
---|---|---
20150341576 A1 | Nov. 2015 | US
Number | Date | Country
---|---|---
61255802 | Oct. 2009 | US
Relationship | Application Number | Country
---|---|---
Parent | 13504905 | US
Child | 14816976 | US