The present invention relates to decoding of image files and more specifically to the decoding of light field image files.
The ISO/IEC 10918-1 standard, more commonly referred to as the JPEG standard after the Joint Photographic Experts Group that developed the standard, establishes a standard process for digital compression and coding of still images. The JPEG standard specifies a codec for compressing an image into a bitstream and for decompressing the bitstream back into an image.
A variety of container file formats, including the JPEG File Interchange Format (JFIF) specified in ISO/IEC 10918-5 and the Exchangeable Image File Format (Exif), can be used to store a JPEG bitstream. JFIF can be considered a minimal file format that enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. The color space used in JFIF files is YCbCr as defined by CCIR Recommendation 601, involving 256 levels. The Y, Cb, and Cr components of the image file are converted from R, G, and B, but are normalized so as to occupy the full 256 levels of an 8-bit binary encoding. YCbCr is one of the color encodings commonly used in JPEG compression. Another popular option is to perform compression directly on the R, G, and B color planes. Direct RGB color plane compression is also popular when lossless compression is being applied.
A JPEG bitstream stores 16-bit word values in big-endian format. JPEG data in general is stored as a stream of blocks, and each block is identified by a marker value. The first two bytes of every JPEG bitstream are the Start Of Image (SOI) marker values FFh D8h. In a JFIF-compliant file there is a JFIF APP0 (Application) marker, immediately following the SOI, which consists of the marker code values FFh E0h and the characters JFIF in the marker data, as described in the next section. In addition to the JFIF marker segment, there may be one or more optional JFIF extension marker segments, followed by the actual image data.
Overall, the JFIF format supports sixteen “Application markers” to store metadata. Using application markers makes it possible for a decoder to parse a JFIF file and decode only required segments of image data. Application marker segments are limited to 64K bytes each, but it is possible to use the same marker ID multiple times and refer to different memory segments.
An APP0 marker after the SOI marker is used to identify a JFIF file. Additional APP0 marker segments can optionally be used to specify JFIF extensions. When a decoder does not support decoding a specific JFIF application marker, the decoder can skip the segment and continue decoding.
One of the most popular file formats used by digital cameras is Exif. When Exif is employed with JPEG bitstreams, an APP1 Application marker is used to store the Exif data. The Exif tag structure is borrowed from the Tagged Image File Format (TIFF) maintained by Adobe Systems Incorporated of San Jose, Calif.
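For illustration only, the following is a minimal sketch of how a decoder might walk the marker segments of a JPEG/JFIF bitstream described above in order to locate application marker (APPn) segments before the entropy-coded image data. The marker values (FFh D8h for SOI, FFh E0h through FFh EFh for APP0 through APP15, FFh DAh for Start of Scan) come from the JPEG and JFIF standards; the file name and helper names are hypothetical.

```python
import struct

STANDALONE = {0xD8, 0xD9, 0x01} | set(range(0xD0, 0xD8))  # SOI, EOI, TEM, RST0-RST7

def iter_marker_segments(data):
    """Yield (marker, payload) pairs for the marker segments that precede
    the entropy-coded data in a JPEG/JFIF byte string."""
    assert data[0:2] == b"\xFF\xD8", "missing Start Of Image (SOI) marker"
    pos = 2
    while pos < len(data):
        assert data[pos] == 0xFF, "expected a marker prefix byte"
        marker = data[pos + 1]
        if marker in STANDALONE:                 # markers that carry no length field
            yield marker, b""
            pos += 2
            continue
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])  # big-endian, includes itself
        yield marker, data[pos + 4:pos + 2 + length]
        if marker == 0xDA:                       # Start Of Scan: compressed data follows
            return
        pos += 2 + length

with open("photo.jpg", "rb") as f:               # hypothetical input file
    for marker, payload in iter_marker_segments(f.read()):
        if 0xE0 <= marker <= 0xEF:               # APP0..APP15 application markers
            ident = payload.split(b"\x00", 1)[0]
            print("APP%d segment, %d bytes, identifier %r" % (marker - 0xE0, len(payload), ident))
```

Run against a JFIF file, this prints an APP0 segment whose identifier is b'JFIF'; run against an Exif file, it prints an APP1 segment whose identifier is b'Exif'.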
Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map to create a rendered image.
In a further embodiment the rendering application configuring the processor to post process the decoded image by modifying the pixels based on the depths indicated within the depth map to create the rendered image comprises applying a depth based effect to the pixels of the decoded image.
In another embodiment, the depth based effect comprises at least one effect selected from the group consisting of: modifying the focal plane of the decoded image; modifying the depth of field of the decoded image; modifying the blur in out-of-focus regions of the decoded image; locally varying the depth of field of the decoded image; creating multiple focus areas at different depths within the decoded image; and applying a depth related blur.
In a still further embodiment, the encoded image is an image of a scene synthesized from a reference viewpoint using a plurality of lower resolution images that capture the scene from different viewpoints, the metadata in the light field image file further comprises pixels from the lower resolution images that are occluded in the reference viewpoint, and the rendering application configuring the processor to post process the decoded image by modifying the pixels based on the depths indicated within the depth map to create the rendered image comprises rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint.
In still another embodiment, the metadata in the light field image file includes descriptions of the pixels from the lower resolution images that are occluded in the reference viewpoint including the color, location, and depth of the occluded pixels, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further includes: shifting pixels from the decoded image and the occluded pixels in the metadata to the different viewpoint based upon the depths of the pixels; determining pixel occlusions; and generating an image from the different viewpoint using the shifted pixels that are not occluded and by interpolating to fill in missing pixels using adjacent pixels that are not occluded.
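The embodiment above describes three steps: shifting pixels according to their depths, resolving occlusions, and interpolating to fill holes. The sketch below illustrates one way those steps could be implemented and is not the claimed method itself; it assumes strictly positive depths that are converted to a horizontal shift through a hypothetical baseline_px constant, and that the occluded-pixel metadata has already been unpacked into (row, column, depth, color) records.

```python
import numpy as np

def render_novel_view(image, depth, occluded, baseline_px):
    """Sketch of rendering a new viewpoint from a decoded image, its depth map,
    and occluded-pixel records (row, col, depth, color) carried in the metadata.
    `baseline_px` is a hypothetical constant converting depth to pixel disparity."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)

    def splat(r, c, z, color):
        d = int(round(baseline_px / z))          # nearer pixels shift further
        c2 = c + d
        if 0 <= c2 < w and z < zbuf[r, c2]:      # keep the nearest pixel (occlusion test)
            zbuf[r, c2] = z
            out[r, c2] = color

    for r in range(h):                           # 1) shift the visible pixels
        for c in range(w):
            splat(r, c, depth[r, c], image[r, c])
    for r, c, z, color in occluded:              # 2) shift the occluded pixels from metadata
        splat(r, c, z, color)

    holes = np.isinf(zbuf)                       # 3) fill remaining holes by interpolation
    for r, c in zip(*np.nonzero(holes)):
        nbrs = [out[r, cc] for cc in (c - 1, c + 1) if 0 <= cc < w and not holes[r, cc]]
        if nbrs:
            out[r, c] = np.mean(nbrs, axis=0)
    return out
```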
In a yet further embodiment, the image rendered from the different viewpoint is part of a stereo pair of images.
In yet another embodiment, the metadata in the light field image file further comprises a confidence map for the depth map, where the confidence map indicates the reliability of the depth values provided for pixels by the depth map, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises applying at least one filter based upon the confidence map.
In a further embodiment again, the metadata in the light field image file further comprises an edge map that indicates pixels in the decoded image that lie on a discontinuity, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises applying at least one filter based upon the edge map.
In another embodiment again, the edge map identifies whether a pixel lies on an intensity discontinuity.
In a further additional embodiment, the edge map identifies whether a pixel lies on an intensity and depth discontinuity.
In another additional embodiment, the metadata in the light field image file further comprises a missing pixel map that indicates pixels in the decoded image that do not correspond to a pixel from the plurality of low resolution images of the scene and that are generated by interpolating pixel values from adjacent pixels in the synthesized image, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises ignoring pixels based upon the missing pixel map.
In a still further embodiment again, the light field image file conforms to the JFIF standard and the encoded image is encoded in accordance with the JPEG standard, the memory comprises a JPEG decoder application, and the rendering application configures the processor to: locate the encoded image by locating a Start of Image marker within the light field image file; and decode the encoded image using the JPEG decoder.
In still another embodiment again, the metadata is located within an Application marker segment within the light field image file.
In a still further additional embodiment, the Application marker segment is identified using the APPS marker.
In still another additional embodiment, the depth map is encoded in accordance with the JPEG standard using lossless compression, and the rendering application configures the processor to: locate at least one Application marker segment containing the metadata comprising the depth map; and decode the depth map using the JPEG decoder.
In a yet further embodiment again, the encoded image is an image of a scene synthesized from a reference viewpoint using a plurality of lower resolution images that capture the scene from different viewpoints, the metadata in the light field image file further comprises pixels from the lower resolution images that are occluded in the reference viewpoint, the rendering application configures the processor to locate at least one Application marker segment containing the metadata comprising the pixels from the lower resolution images that are occluded in the reference viewpoint, and the rendering application configuring the processor to post process the decoded image by modifying the pixels based on the depth of the pixel indicated within the depth map to create the rendered image comprises rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint.
In yet another embodiment again, the metadata in the light field image file includes descriptions of the pixels from the lower resolution images that are occluded in the reference viewpoint including the color, location, and depth of the occluded pixels, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further includes: shifting pixels from the decoded image and the occluded pixels in the metadata to the different viewpoint based upon the depths of the pixels; determining pixel occlusions; and generating an image from the different viewpoint using the shifted pixels that are not occluded and by interpolating to fill in missing pixels using adjacent pixels that are not occluded.
In a yet further additional embodiment, the image rendered from the different viewpoint is part of a stereo pair of images.
In yet another additional embodiment, the metadata in the light field image file further comprises a confidence map for the depth map, where the confidence map indicates the reliability of the depth values provided for pixels by the depth map, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises applying at least one filter based upon the confidence map.
In a further additional embodiment again, the metadata in the light field image file further comprises an edge map that indicates pixels in the decoded image that lie on a discontinuity, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises applying at least one filter based upon the edge map.
In another additional embodiment again, the edge map identifies whether a pixel lies on an intensity discontinuity.
In a still yet further embodiment again, the edge map identifies whether a pixel lies on an intensity and depth discontinuity.
In still yet another embodiment again, the edge map is encoded in accordance with the JPEG standard using lossless compression, and the rendering application configures the processor to: locate at least one Application marker segment containing the metadata comprising the edge map; and decode the edge map using the JPEG decoder.
In a still yet further additional embodiment, the metadata in the light field image file further comprises a missing pixel map that indicates pixels in the decoded image that do not correspond to a pixel from the plurality of low resolution images of the scene and that are generated by interpolating pixel values from adjacent pixels in the synthesized image, and rendering an image from a different viewpoint using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint further comprises ignoring pixels based upon the missing pixel map.
In still yet another additional embodiment, the missing pixel map is encoded in accordance with the JPEG standard using lossless compression, and the rendering application configures the processor to: locate at least one Application marker segment containing the metadata comprising the missing pixel map; and decode the missing pixel map using the JPEG decoder.
An embodiment of the method of the invention includes locating an encoded image within a light field image file using a rendering device, decoding the encoded image using the rendering device, locating the metadata within the light field image file using the rendering device, and post processing the decoded image by modifying the pixels based on the depths indicated within the depth map to create a rendered image using the rendering device.
In a further embodiment of the method of the invention, post processing the decoded image by modifying the pixels based on the depths indicated within the depth map to create the rendered image comprises applying a depth based effect to the pixels of the decoded image using the rendering device.
In another embodiment of the method of the invention, the depth based effect comprises at least one effect selected from the group consisting of: modifying the focal plane of the decoded image using the rendering device; modifying the depth of field of the decoded image using the rendering device; modifying the blur in out-of-focus regions of the decoded image using the rendering device; locally varying the depth of field of the decoded image using the rendering device; creating multiple focus areas at different depths within the decoded image using the rendering device; and applying a depth related blur using the rendering device.
In a still further embodiment of the method of the invention, the encoded image is an image of a scene synthesized from a reference viewpoint using a plurality of lower resolution images that capture the scene from different viewpoints, the metadata in the light field image file further comprises pixels from the lower resolution images that are occluded in the reference viewpoint, and post processing the decoded image by modifying the pixels based on the depths indicated within the depth map to create the rendered image comprises using the depth map and the pixels from the lower resolution images that are occluded in the reference viewpoint to render an image from a different viewpoint using the rendering device.
Another further embodiment of the invention includes a machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process involving: locating an encoded image within a light field image file, where the light field image file includes an encoded image and metadata describing the encoded image comprising a depth map that specifies depths from the reference viewpoint for pixels in the encoded image; decoding the encoded image; locating the metadata within the light field image file; and post processing the decoded image by modifying the pixels based on the depths indicated within the depth map to create a rendered image.
Turning now to the drawings, systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files and for rendering images using the stored image and the metadata in accordance with embodiments of the invention are illustrated. A file containing an image synthesized from light field image data and metadata derived from the light field image data can be referred to as a light field image file. As is discussed further below, the encoded image in a light field image file is typically synthesized using a super resolution process from a number of lower resolution images. The light field image file can also include metadata describing the synthesized image derived from the light field image data that enables post processing of the synthesized image. In many embodiments, a light field image file is created by encoding an image synthesized from light field image data and combining the encoded image with a depth map derived from the light field image data. In several embodiments, the encoded image is synthesized from a reference viewpoint and the metadata includes information concerning pixels in the light field image that are occluded from the reference viewpoint. In a number of embodiments, the metadata can also include additional information including (but not limited to) auxiliary maps such as confidence maps, edge maps, and missing pixel maps that can be utilized during post processing of the encoded image to improve the quality of an image rendered using the light field image data file.
In many embodiments, the light field image file is compatible with the JPEG File Interchange Format (JFIF). The synthesized image is encoded as a JPEG bitstream and stored within the file. The accompanying depth map, occluded pixels and/or any appropriate additional information including (but not limited to) auxiliary maps are then stored within the JFIF file as metadata using an Application marker to identify the metadata. A legacy rendering device can simply display the synthesized image by decoding the JPEG bitstream. Rendering devices in accordance with embodiments of the invention can perform additional post-processing on the decoded JPEG bitstream using the depth map and/or any available auxiliary maps. In many embodiments, the maps included in the metadata can also be compressed using lossless JPEG encoding and decoded using a JPEG decoder. Although much of the discussion that follows references the JFIF and JPEG standards, these standards are simply discussed as examples and it should be appreciated that similar techniques can be utilized to embed metadata derived from light field image data used to synthesize the encoded image within a variety of standard file formats, where the synthesized image and/or maps are encoded using any of a variety of standards based image encoding processes.
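As a rough illustration of the storage scheme described above, the sketch below appends a metadata blob (for example, a losslessly encoded depth map) to an existing JFIF file as application marker segments. The marker number, identifier string, and chunking scheme are arbitrary assumptions made for the sketch, not the format defined here; as noted earlier, each marker segment is limited to 64K bytes, so larger maps are split across repeated segments with the same marker ID.

```python
import struct

APP_MARKER = 0xE9                      # illustrative choice of application marker
IDENT = b"LFMETA\x00"                  # hypothetical identifier string
MAX_PAYLOAD = 65533 - len(IDENT)       # the 16-bit length field includes itself

def app_segments(kind, blob):
    """Split one metadata blob (e.g. an encoded depth map) into APPn segments."""
    for i in range(0, len(blob), MAX_PAYLOAD - 2):
        chunk = blob[i:i + MAX_PAYLOAD - 2]
        payload = IDENT + bytes([kind]) + b"\x00" + chunk   # 2 bookkeeping bytes
        yield b"\xFF" + bytes([APP_MARKER]) + struct.pack(">H", len(payload) + 2) + payload

def embed_metadata(jfif_bytes, kind, blob):
    """Insert metadata segments right after the SOI marker; a legacy JFIF decoder
    skips unknown APPn segments and still renders the image.  (A stricter writer
    would place them after the mandatory JFIF APP0 segment.)"""
    assert jfif_bytes[:2] == b"\xFF\xD8"
    return jfif_bytes[:2] + b"".join(app_segments(kind, blob)) + jfif_bytes[2:]
```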
By transmitting a light field image file including an encoded image, and metadata describing the encoded image, a rendering device (i.e. a device configured to generate an image rendered using the information within the light field image file) can render new images using the information within the file without the need to perform super resolution processing on the original light field image data. In this way, the amount of data transmitted to the rendering device and the computational complexity of rendering an image is reduced. In several embodiments, rendering devices are configured to perform processes including (but not limited to) refocusing the encoded image based upon a focal plane specified by the user, synthesizing an image from a different viewpoint, and generating a stereo pair of images. The capturing of light field image data and the encoding and decoding of light field image files in accordance with embodiments of the invention are discussed further below.
Capturing Light Field Image Data
A light field, which is often defined as a 4D function characterizing the light from all directions at all points in a scene, can be interpreted as a two-dimensional (2D) collection of 2D images of a scene. Array cameras, such as those described in U.S. patent application Ser. No. 12/935,504 entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., can be utilized to capture light field images. In a number of embodiments, super resolution processes such as those described in U.S. patent application Ser. No. 12/967,807 entitled “Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al., are utilized to synthesize a higher resolution 2D image or a stereo pair of higher resolution 2D images from the lower resolution images in the light field captured by an array camera. The terms high or higher resolution and low or lower resolution are used here in a relative sense and not to indicate the specific resolutions of the images captured by the array camera. The disclosures of U.S. patent application Ser. No. 12/935,504 and U.S. patent application Ser. No. 12/967,807 are hereby incorporated by reference in their entirety.
Array Cameras
Embodiments relate to using a distributed approach to capturing images using a plurality of imagers of different imaging characteristics. Each imager may be spatially shifted from another imager in such a manner that an imager captures an image that is shifted by a sub-pixel amount with respect to an image captured by another imager. Each imager may also include separate optics with different filters and operate with different operating parameters (e.g., exposure time). Distinct images generated by the imagers are processed to obtain an enhanced image. Each imager may be associated with an optical element fabricated using wafer level optics (WLO) technology.
A sensor element or pixel refers to an individual light sensing element in a camera array. The sensor element or pixel includes, among others, traditional CIS (CMOS Image Sensor), CCD (charge-coupled device), high dynamic range pixel, multispectral pixel and various alternatives thereof.
An imager refers to a two dimensional array of pixels. The sensor elements of each imager have similar physical properties and receive light through the same optical component. Further, the sensor elements in each imager may be associated with the same color filter.
A camera array refers to a collection of imagers designed to function as a unitary component. The camera array may be fabricated on a single chip for mounting or installing in various devices.
An array of camera arrays refers to an aggregation of two or more camera arrays. Two or more camera arrays may operate in conjunction to provide extended functionality over a single camera array.
Imaging characteristics of an imager refer to any characteristics or parameters of the imager associated with the capturing of images. The imaging characteristics may include, among others, the size of the imager, the type of pixels included in the imager, the shape of the imager, filters associated with the imager, the exposure time of the imager, aperture size associated with the imager, the configuration of the optical element associated with the imager, gain of the imager, the resolution of the imager, and operational timing of the imager.
Structure of Camera Array
The camera array may include two or more types of heterogeneous imagers, each imager including two or more sensor elements or pixels. Each one of the imagers may have different imaging characteristics. Alternatively, there may be two or more different types of imagers where the same type of imagers shares the same imaging characteristics.
In one embodiment, each imager 1A through NM has its own filter and/or optical element (e.g., lens). Specifically, each of the imagers 1A through NM or a group of imagers may be associated with spectral color filters to receive certain wavelengths of light. Example filters include a traditional filter used in the Bayer pattern (R, G, B or their complements C, M, Y), an IR-cut filter, a near-IR filter, a polarizing filter, and a custom filter to suit the needs of hyper-spectral imaging. Some imagers may have no filter to allow reception of both the entire visible spectrum and near-IR, which increases the imager's signal-to-noise ratio. The number of distinct filters may be as large as the number of imagers in the camera array. Further, each of the imagers 1A through NM or a group of imagers may receive light through lenses having different optical characteristics (e.g., focal lengths) or apertures of different sizes.
In one embodiment, the camera array includes other related circuitry. The other circuitry may include, among others, circuitry to control imaging parameters and sensors to sense physical parameters. The control circuitry may control imaging parameters such as exposure times, gain, and black level offset. The sensor may include dark pixels to estimate dark current at the operating temperature. The dark current may be measured for on-the-fly compensation for any thermal creep that the substrate may suffer from.
In one embodiment, the circuit for controlling imaging parameters may trigger each imager independently or in a synchronized manner. The start of the exposure periods for the various imagers in the camera array (analogous to opening a shutter) may be staggered in an overlapping manner so that the scenes are sampled sequentially while having several imagers being exposed to light at the same time. In a conventional video camera sampling a scene at N exposures per second, the exposure time per sample is limited to 1/N seconds. With a plurality of imagers, there is no such limit to the exposure time per sample because multiple imagers may be operated to capture images in a staggered manner.
Each imager can be operated independently. All or most operations associated with each individual imager may be individualized. In one embodiment, a master setting is programmed and deviation (i.e., offset or gain) from such master setting is configured for each imager. The deviations may reflect functions such as high dynamic range, gain settings, integration time settings, digital processing settings or combinations thereof. These deviations can be specified at a low level (e.g., deviation in the gain) or at a higher level (e.g., difference in the ISO number, which is then automatically translated to deltas for gain, integration time, or otherwise as specified by context/master control registers) for the particular camera array. By setting the master values and deviations from the master values, higher levels of control abstraction can be achieved to facilitate a simpler programming model for many operations. In one embodiment, the parameters for the imagers are arbitrarily fixed for a target application. In another embodiment, the parameters are configured to allow a high degree of flexibility and programmability.
In one embodiment, the camera array is designed as a drop-in replacement for existing camera image sensors used in cell phones and other mobile devices. For this purpose, the camera array may be designed to be physically compatible with conventional image sensors of approximately the same resolution although the achieved resolution of the camera array may exceed conventional image sensors in many photographic situations. Taking advantage of the increased performance, the camera array of the embodiment may include fewer pixels to obtain equal or better quality images compared to conventional image sensors. Alternatively, the size of the pixels in the imager may be reduced compared to pixels in conventional image sensors while achieving comparable results.
In order to match the raw pixel count of a conventional image sensor without increasing silicon area, the logic overhead for the individual imagers is preferably constrained in silicon area. In one embodiment, much of the pixel control logic is a single collection of functions common to all or most of the imagers, with a smaller set of functions applicable to each imager. In this embodiment, the conventional external interface for the imager may be used because the data output does not increase significantly for the imagers.
The number of imagers in the camera array may be determined based on, among other factors, (i) resolution, (ii) parallax, (iii) sensitivity, and (iv) dynamic range. A first factor in determining the number of imagers is resolution. From a resolution point of view, the preferred number of imagers ranges from 2×2 to 6×6 because an array size larger than 6×6 is likely to destroy frequency information that cannot be recreated by the super-resolution process. For example, 8 Megapixel resolution with a 2×2 array will require each imager to have 2 Megapixels. Similarly, 8 Megapixel resolution with a 5×5 array will require each imager to have 0.32 Megapixels.
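The per-imager pixel counts quoted above follow directly from dividing the target resolution by the number of imagers, e.g.:

```python
# Per-imager pixel count needed for an 8 Megapixel output image.
target_megapixels = 8
for n in (2, 5, 6):
    print("%dx%d array: %.2f Megapixels per imager" % (n, n, target_megapixels / (n * n)))
# 2x2 -> 2.00, 5x5 -> 0.32, 6x6 -> 0.22
```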
A second factor that may constrain the number of imagers is the issue of parallax and occlusion. With respect to an object captured in an image, the portion of the background scene that is occluded from the view of the imager is called the "occlusion set." When two imagers capture the object from two different locations, the occlusion set of each imager is different. Hence, there may be scene pixels captured by one imager but not the other. To resolve this issue of occlusion, it is desirable to include a certain minimal set of imagers for a given type of imager.
A third factor that may put a lower bound on the number of imagers is the issue of sensitivity in low light conditions. To improve low light sensitivity, imagers for detecting near-IR spectrum may be needed. The number of imagers in the camera array may need to be increased to accommodate such near-IR imagers.
A fourth factor in determining the size of the imager is dynamic range. To provide dynamic range in the camera array, it is advantageous to provide several imagers of the same filter type (chroma or luma). Each imager of the same filter type may then be operated with different exposures simultaneously. The images captured with different exposures may be processed to generate a high dynamic range image.
Based on these factors, the preferred number of imagers is 2×2 to 6×6. 4×4 and 5×5 configurations are more preferable than 2×2 and 3×3 configurations because the former are likely to provide a sufficient number of imagers to resolve occlusion issues, increase sensitivity, and increase the dynamic range. At the same time, the computational load required to recover resolution from these array sizes will be modest in comparison to that required for a 6×6 array. Arrays larger than 6×6 may, however, be used to provide additional features such as optical zooming and multispectral imaging.
Another consideration is the number of imagers dedicated to luma sampling. By ensuring that the imagers in the array dedicated to near-IR sampling do not reduce the achieved resolution, the information from the near-IR images is added to the resolution captured by the luma imagers. For this purpose, at least 50% of the imagers may be used for sampling the luma and/or near-IR spectra. In one embodiment with 4×4 imagers, 4 imagers sample luma, 4 imagers sample near-IR, and the remaining 8 imagers sample two chroma (Red and Blue). In another embodiment with 5×5 imagers, 9 imagers sample luma, 8 imagers sample near-IR, and the remaining 8 imagers sample two chroma (Red and Blue). Further, the imagers with these filters may be arranged symmetrically within the camera array to address occlusion due to parallax.
In one embodiment, the imagers in the camera array are spatially separated from each other by a predetermined distance. By increasing the spatial separation, the parallax between the images captured by the imagers may be increased. The increased parallax is advantageous where more accurate distance information is important. Separation between two imagers may also be increased to approximate the separation of a pair of human eyes. By approximating the separation of human eyes, a realistic stereoscopic 3D image may be provided to present the resulting image on an appropriate 3D display device.
In one embodiment, multiple camera arrays are provided at different locations on a device to overcome space constraints. One camera array may be designed to fit within a restricted space while another camera array may be placed in another restricted space of the device. For example, if a total of 20 imagers are required but the available space allows only a camera array of 1×10 imagers to be provided on either side of a device, two camera arrays each including 10 imagers may be placed on available space at both sides of the device. Each camera array may be fabricated on a substrate and be secured to a motherboard or other parts of a device. The images collected from multiple camera arrays may be processed to generate images of desired resolution and performance.
A design for a single imager may be applied to different camera arrays each including other types of imagers. Other variables in the camera array such as spatial distances, color filters and combination with the same or other sensors may be modified to produce a camera array with differing imaging characteristics. In this way, a diverse mix of camera arrays may be produced while maintaining the benefits from economies of scale.
The use of polychromatic imagers and near-IR imagers is advantageous because these sensors may capture high quality images in low lighting conditions. The images captured by the polychromatic imager or the near-IR imager are used to denoise the images obtained from regular color imagers.
The premise of increasing resolution by aggregating multiple low resolution (LR) images is based on the fact that the different LR images represent slightly different viewpoints of the same scene. If the LR images are all shifted by integer units of a pixel, then each image contains essentially the same information. Therefore, there is no new information in the LR images that can be used to create a high resolution (HR) image. In the imagers according to embodiments, the layout of the imagers may be preset and controlled so that each imager in a row or a column is a fixed sub-pixel distance from its neighboring imagers. The wafer level manufacturing and packaging process allows accurate formation of imagers to attain the sub-pixel precision required for the super-resolution processing.
An issue of separating the spectral sensing elements into different imagers is parallax caused by the physical separation of the imagers. By ensuring that the imagers are symmetrically placed, at least two imagers can capture the pixels around the edge of a foreground object. In this way, the pixels around the edge of a foreground object may be aggregated to increase resolution as well as avoiding any occlusions. Another issue related to parallax is the sampling of color. The issue of sampling the color may be reduced by using parallax information in the polychromatic imagers to improve the accuracy of the sampling of color from the color filtered imagers.
In one embodiment, near-IR imagers are used to determine relative luminance differences compared to a visible spectra imager. Differences in the material reflectivity of objects result in differences in the images captured in the visible spectrum and the near-IR spectrum. At low lighting conditions, the near-IR imager exhibits a higher signal-to-noise ratio. Therefore, the signals from the near-IR sensor may be used to enhance the luminance image. The transferring of details from the near-IR image to the luminance image may be performed before aggregating spectral images from different imagers through the super-resolution process. In this way, edge information about the scene may be improved to construct edge-preserving images that can be used effectively in the super-resolution process.
Image Fusion of Color Images with Near-IR Images
The spectral response of CMOS imagers is typically very good in the near-IR regions covering 650 nm to 800 nm and reasonably good between 800 nm and 1000 nm. Although near-IR images have no chroma information, information in this spectral region is useful in low lighting conditions because the near-IR images are relatively free of noise. Hence, the near-IR images may be used to denoise color images under low lighting conditions.
In one embodiment, an image from a near-IR imager is fused with another image from a visible light imager. Before proceeding with the fusion, a registration is performed between the near-IR image and the visible light image to resolve differences in viewpoints. The registration process may be performed in an offline, one-time, processing step. After the registration is performed, the luminance information on the near-IR image is interpolated to grid points that correspond to each grid point on the visible light image.
After the pixel correspondence between the near-IR image and the visible light image is established, denoising and detail transfer process may be performed. The denoising process allows transfer of signal information from the near-IR image to the visible light image to improve the overall SNR of the fusion image. The detail transfer ensures that edges in the near-IR image and the visible light image are preserved and accentuated to improve the overall visibility of objects in the fused image.
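A minimal sketch of the detail-transfer idea is shown below, assuming the near-IR image has already been registered and interpolated onto the visible image's grid as described above. It uses a simple Gaussian low-pass/high-pass split rather than any particular denoising or dual bilateral filter; the function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_nir_luma(luma, nir, sigma=2.0, detail_gain=1.0):
    """Sketch of near-IR assisted denoising and detail transfer.
    Both inputs are registered float arrays in [0, 1] on the same grid."""
    luma_base = gaussian_filter(luma, sigma)        # low-pass: keeps visible-light tones,
                                                    # discards high-frequency sensor noise
    nir_detail = nir - gaussian_filter(nir, sigma)  # high-pass: low-noise edge detail
    fused = luma_base + detail_gain * nir_detail    # transfer near-IR detail to the luma image
    return np.clip(fused, 0.0, 1.0)
```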
In one embodiment, a near-IR flash may serve as a near-IR light source during capturing of an image by the near-IR imagers. Using the near-IR flash is advantageous, among other reasons, because (i) the harsh lighting on objects of interest may be prevented, (ii) ambient color of the object may be preserved, and (iii) red-eye effect may be prevented.
In one embodiment, a filter that blocks visible light and allows only near-IR rays to pass through is used to further optimize the optics for near-IR imaging. The visible light blocking filter improves the near-IR optics transfer function because the filter results in sharper details in the near-IR image. The details may then be transferred to the visible light images using a dual bilateral filter.
Dynamic Range Determination by Differing Exposures at Imagers
An auto-exposure (AE) algorithm is important for obtaining an appropriate exposure for the scene to be captured. The design of the AE algorithm affects the dynamic range of captured images. The AE algorithm determines an exposure value that allows the acquired image to fall in the linear region of the camera array's sensitivity range. The linear region is preferred because a good signal-to-noise ratio is obtained in this region. If the exposure is too low, the picture becomes under-saturated, while if the exposure is too high, the picture becomes over-saturated. In conventional cameras, an iterative process is used to reduce the difference between measured picture brightness and a previously defined brightness below a threshold. This iterative process requires a large amount of time for convergence, and sometimes results in an unacceptable shutter delay.
In one embodiment, the picture brightness of images captured by a plurality of imagers is independently measured. Specifically, a plurality of imagers are set to capture images with different exposures to reduce the time for computing the adequate exposure. For example, in a camera array with 5×5 imagers where 8 luma imagers and 9 near-IR imagers are provided, each of the imagers may be set with different exposures. The near-IR imagers are used to capture low-light aspects of the scene and the luma imagers are used to capture the high illumination aspects of the scene. This results in a total of 17 possible exposures. If the exposure for each imager is offset from an adjacent imager by a factor of 2, for example, a maximum dynamic range of 2^17 (approximately 102 dB) can be captured. This maximum dynamic range is considerably higher than the typical 48 dB attainable in a conventional camera with 8-bit image outputs.
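The dynamic range figures quoted above can be checked with a few lines of arithmetic (20·log10 of the usable signal ratio):

```python
import math

stops = 17                                 # 17 exposures, each offset by a factor of 2
array_dr_db = 20 * math.log10(2 ** stops)  # approx. 102.3 dB, the figure quoted above
single_dr_db = 20 * math.log10(2 ** 8)     # approx. 48.2 dB for a conventional 8-bit output
print(round(array_dr_db, 1), round(single_dr_db, 1))
```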
At each time instant, the responses (under-exposed, over-exposed or optimal) from each of the multiple imagers are analyzed to determine how many exposures are needed at the subsequent time instant. The ability to query multiple exposures simultaneously in the range of possible exposures accelerates the search compared to the case where only one exposure is tested at a time. By reducing the processing time for determining the adequate exposure, shutter delays and shot-to-shot lags may be reduced.
In one embodiment, the HDR image is synthesized from multiple exposures by combining the images after linearizing the imager response for each exposure. The images from the imagers may be registered before combining to account for the difference in the viewpoints of the imagers.
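One common way to combine linearized exposures into a radiance map is a weighted average that down-weights clipped pixels. The sketch below shows such a generic merge as an illustration of the step described above; registration is assumed to have already been performed, and the triangular weighting function is an assumption of the sketch.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Sketch of synthesizing an HDR radiance map from registered images taken
    at different exposures, assuming linear (or already linearized) sensor
    response and pixel values normalized to [0, 1]."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)      # trust mid-tones, not clipped pixels
        acc += w * (img / t)                   # divide by exposure time -> scene radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)
```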
In one embodiment, at least one imager includes HDR pixels to generate HDR images. HDR pixels are specialized pixels that capture high dynamic range scenes. Although HDR pixels show superior performance compared to other pixels, HDR pixels show poor performance at low lighting conditions in comparison with near-IR imagers. To improve performance at low lighting conditions, signals from the near-IR imagers may be used in conjunction with the signal from the HDR imager to attain better quality images across different lighting conditions.
In one embodiment, an HDR image is obtained by processing images captured by multiple imagers. The ability to capture multiple exposures simultaneously using the imagers is advantageous because artifacts caused by motion of objects in the scene can be mitigated or eliminated.
Hyperspectral Imaging by Multiple Imagers
In one embodiment, a multi-spectral image is rendered by multiple imagers to facilitate the segmentation or recognition of objects in a scene. Because the spectral reflectance coefficients vary smoothly in most real world objects, the spectral reflectance coefficients may be estimated by capturing the scene in multiple spectral dimensions using imagers with different color filters and analyzing the captured images using Principal Components Analysis (PCA).
In one embodiment, half of the imagers in the camera array are devoted to sampling in the basic spectral dimensions (R, G, and B) and the other half of the imagers are devoted to sampling in shifted basic spectral dimensions (R′, G′, and B′). The shifted basic spectral dimensions are shifted from the basic spectral dimensions by a certain wavelength (e.g., 10 nm).
In one embodiment, pixel correspondence and non-linear interpolation is performed to account for the sub-pixel shifted views of the scene. Then the spectral reflectance coefficients of the scene are synthesized using a set of orthogonal spectral basis functions. The basis functions are eigenvectors derived by PCA of a correlation matrix, and the correlation matrix is derived from a database storing spectral reflectance coefficients measured from, for example, Munsell color chips (a total of 1257) representing the spectral distribution of a wide range of real world materials, to reconstruct the spectrum at each point in the scene.
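The reconstruction described above is, at its core, a small linear least-squares problem per scene point. The sketch below shows that step in isolation; the PCA basis matrix and the combined illuminant/filter/sensor response matrix are placeholders that would in practice be derived from the Munsell reflectance database and the camera's filter set.

```python
import numpy as np

def estimate_reflectance(band_values, basis, response):
    """band_values: (num_bands,) measurements for one scene point (e.g. R, G, B, R', G', B').
    basis: (num_wavelengths, num_components) PCA eigenvectors of the reflectance database.
    response: (num_bands, num_wavelengths) combined illuminant/filter/sensor response.
    Returns the reconstructed reflectance sampled at num_wavelengths points."""
    A = response @ basis                            # how each PCA component appears in each band
    coeffs, *_ = np.linalg.lstsq(A, band_values, rcond=None)
    return basis @ coeffs                           # reconstructed spectrum for this point
```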
At first glance, capturing different spectral images of the scene through different imagers in the camera array appears to trade resolution for higher dimensional spectral sampling. However, some of the lost resolution may be recovered. The multiple imagers sample the scene over different spectral dimensions where each sampling grid of each imager is offset by a sub-pixel shift from the others. In one embodiment, no two sampling grids of the imagers overlap. That is, the superposition of all the sampling grids from all the imagers forms a dense, possibly non-uniform, montage of points. Scattered data interpolation methods may be used to determine the spectral density at each sample point in this non-uniform montage for each spectral image. In this way, a certain amount of resolution lost in the process of sampling the scene using different spectral filters may be recovered.
As described above, image segmentation and object recognition are facilitated by determining the spectral reflectance coefficients of the object. The situation often arises in security applications wherein a network of cameras is used to track an object as it moves from the operational zone of one camera to another. Each zone may have its own unique lighting conditions (fluorescent, incandescent, D65, etc.) that may cause the object to have a different appearance in each image captured by different cameras. If these cameras capture the images in a hyper-spectral mode, all images may be converted to the same illuminant to enhance object recognition performance.
In one embodiment, camera arrays with multiple imagers are used for providing medical diagnostic images. Full spectral digitized images of diagnostic samples contribute to accurate diagnosis because doctors and medical personnel can place higher confidence in the resulting diagnosis. The imagers in the camera arrays may be provided with color filters to provide full spectral data. Such camera array may be installed on cell phones to capture and transmit diagnostic information to remote locations. Further, the camera arrays including multiple imagers may provide images with a large depth of field to enhance the reliability of image capture of wounds, rashes, and other symptoms.
In one embodiment, a small imager (including, for example, 20-500 pixels) with narrow spectral bandpass filters is used to produce a signature of the ambient and local light sources in a scene. By using the small imager, the exposure and white balance characteristics may be determined more accurately and at a faster speed. The spectral bandpass filters may be ordinary color filters or diffractive elements of a bandpass width adequate to allow the number of camera arrays to cover the visible spectrum of about 400 nm. These imagers may run at a much higher frame rate and obtain data (which may or may not be used for its pictorial content) for processing into information to control the exposure and white balance of other larger imagers in the same camera array. The small imagers may also be interspersed within the camera array.
Optical Zoom Implemented Using Multiple Imagers
In one embodiment, a subset of imagers in the camera array includes telephoto lenses. The subset of imagers may have other imaging characteristics that are the same as those of imagers with non-telephoto lenses. Images from this subset of imagers are combined and super-resolution processed to form a super-resolution telephoto image. In another embodiment, the camera array includes two or more subsets of imagers equipped with lenses of more than two magnifications to provide differing zoom magnifications.
Embodiments of the camera arrays may achieve their final resolution by aggregating images through super-resolution. Taking an example of providing 5×5 imagers with a 3× optical zoom feature, if 17 imagers are used to sample the luma (G) and 8 imagers are used to sample the chroma (R and B), the 17 luma imagers allow a resolution that is four times higher than what is achieved by any single imager in the set of 17 imagers. If the number of imagers is increased from 5×5 to 6×6, an additional 11 imagers become available. In comparison with an 8 Megapixel conventional image sensor fitted with a 3× zoom lens, a resolution that is 60% of the conventional image sensor is achieved when 8 of the additional 11 imagers are dedicated to sampling luma (G) and the remaining 3 imagers are dedicated to chroma (R and B) and near-IR sampling at 3× zoom. This considerably reduces the chroma sampling (or near-IR sampling) to luma sampling ratio. The reduced chroma to luma sampling ratio is somewhat offset by using the super-resolved luma image at 3× zoom as a recognition prior on the chroma (and near-IR) image to resample the chroma image at a higher resolution.
With 6×6 imagers, a resolution equivalent to the resolution of a conventional image sensor is achieved at 1× zoom. At 3× zoom, a resolution equivalent to about 60% of a conventional image sensor outfitted with a 3× zoom lens is obtained from the same imagers. Also, there is a decrease in luma resolution at 3× zoom compared with a conventional image sensor at 3× zoom. The decreased luma resolution, however, is offset by the fact that the optics of a conventional image sensor have reduced efficiency at 3× zoom due to crosstalk and optical aberrations.
The zoom operation achieved by multiple imagers has the following advantages. First, the quality of the achieved zoom is considerably higher than what is achieved in the conventional image sensor due to the fact that the lens elements may be tailored for each change in focal length. In conventional image sensors, optical aberrations and field curvature must be corrected across the whole operating range of the lens, which is considerably harder in a zoom lens with moving elements than in a fixed lens element where only aberrations for a fixed focal length need to be corrected. Additionally, the fixed lens in the imagers has a fixed chief ray angle for a given height, which is not the case with conventional image sensor with a moving zoom lens. Second, the imagers allow simulation of zoom lenses without significantly increasing the optical track height. The reduced height allows implementation of thin modules even for camera arrays with zooming capability.
The overhead required to support a certain level of optical zoom in camera arrays according to some embodiments is tabulated in Table 2.
In one embodiment, the pixels in the images are mapped onto an output image with a size and resolution corresponding to the amount of zoom desired in order to provide a smooth zoom capability from the widest-angle view to the greatest-magnification view. Assuming that the higher magnification lenses have the same center of view as the lower magnification lenses, the image information available is such that a center area of the image has a higher resolution available than the outer area. In the case of three or more distinct magnifications, nested regions of different resolution may be provided with resolution increasing toward the center.
An image with the most telephoto effect has a resolution determined by the super-resolution ability of the imagers equipped with the telephoto lenses. An image with the widest field of view can be formatted in at least one of two following ways. First, the wide field image may be formatted as an image with a uniform resolution where the resolution is determined by the super-resolution capability of the set of imagers having the wider-angle lenses. Second, the wide field image is formatted as a higher resolution image where the resolution of the central part of the image is determined by the super-resolution capability of the set of imagers equipped with telephoto lenses. In the lower resolution regions, information from the reduced number of pixels per image area is interpolated smoothly across the larger number of “digital” pixels. In such an image, the pixel information may be processed and interpolated so that the transition from higher to lower resolution regions occurs smoothly.
In one embodiment, zooming is achieved by inducing a barrel-like distortion into some, or all, of the array lenses so that a disproportionate number of the pixels are dedicated to the central part of each image. In this embodiment, every image has to be processed to remove the barrel distortion. To generate a wide angle image, pixels closer to the center are sub-sampled while outer pixels are super-sampled. As zooming is performed, the pixels at the periphery of the imagers are progressively discarded and the sampling of the pixels nearer the center of the imager is increased.
In one embodiment, mipmap filters are built to allow images to be rendered at a zoom scale that is between the specific zoom scales of the optical elements (e.g., the 1× and 3× zoom scales of the camera array). Mipmaps are a precalculated, optimized set of images that accompany a baseline image. A set of images associated with the 3× zoom luma image can be created from a baseline scale at 3× down to 1×. Each image in this set is a version of the baseline 3× zoom image but at a reduced level of detail. Rendering an image at a desired zoom level is achieved using the mipmap by (i) taking the image at 1× zoom and computing the coverage of the scene for the desired zoom level (i.e., which pixels in the baseline image need to be rendered at the requested scale to produce the output image), (ii) for each pixel in the coverage set, determining whether the pixel is in the image covered by the 3× zoom luma image, (iii) if the pixel is available in the 3× zoom luma image, choosing the two closest mipmap images and interpolating (using a smoothing filter) the corresponding pixels from the two mipmap images to produce the output image, and (iv) if the pixel is unavailable in the 3× zoom luma image, choosing the pixel from the baseline 1× luma image and scaling it up to the desired scale to produce the output pixel. By using mipmaps, smooth optical zoom may be simulated at any point between two given discrete levels (i.e., 1× zoom and 3× zoom).
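The sketch below illustrates steps (ii) and (iii) for a 2D luma image: reduced-detail copies of the 3× image are precomputed, and a requested zoom level is rendered by blending the two closest levels. It assumes a zoom between 1× and 3×, uses simple box-filter downsampling and nearest-neighbour upsampling to stay self-contained, and omits the coverage test against the baseline 1× image in step (iv).

```python
import numpy as np

def build_mipmaps(tele_luma, levels=4):
    """Reduced-detail copies of the 3x (telephoto) luma image, halving each time."""
    maps = [tele_luma.astype(np.float64)]
    for _ in range(levels - 1):
        prev = maps[-1]
        h, w = prev.shape[0] // 2, prev.shape[1] // 2
        maps.append(prev[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3)))
    return maps                                    # maps[0] = full 3x detail, maps[-1] = coarsest

def render_zoom_level(mipmaps, zoom, max_zoom=3.0):
    """Blend the two mipmap levels bracketing the requested zoom (steps (ii)-(iii)).
    Assumes 1 <= zoom <= max_zoom; the 1x coverage test (step (iv)) is omitted."""
    level = np.log2(max_zoom / zoom)               # level 0 corresponds to max_zoom
    lo, hi = int(np.floor(level)), int(np.ceil(level))
    lo = min(max(lo, 0), len(mipmaps) - 1)
    hi = min(max(hi, 0), len(mipmaps) - 1)
    t = float(level - np.floor(level))             # blend weight between the two levels
    a, b = mipmaps[lo], mipmaps[hi]
    factor = 2 ** (hi - lo)
    b_up = b.repeat(factor, axis=0).repeat(factor, axis=1)   # nearest-neighbour upsample
    h = min(a.shape[0], b_up.shape[0])
    w = min(a.shape[1], b_up.shape[1])
    return (1.0 - t) * a[:h, :w] + t * b_up[:h, :w]
```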
Capturing Video Images
In one embodiment, the camera array generates high frame rate image sequences. The imagers in the camera array can operate independently to capture images. Compared to conventional image sensors, the camera array may capture images at a frame rate up to N times higher (where N is the number of imagers). Further, the frame periods for the imagers may overlap to improve operations under low-light conditions. To increase the resolution, a subset of imagers may operate in a synchronized manner to produce images of higher resolution. In this case, the maximum frame rate is reduced by the number of imagers operating in a synchronized manner. The high-speed video frame rates can enable slow-motion video playback at a normal video rate.
In one example, two luma imagers (green imagers or near-IR imagers), two blue imagers and two green imagers are used to obtain high-definition 1080p images. Using permutations of four luma imagers (two green imagers and two near-IR imagers or three green imagers and one near-IR imager) together with one blue imager and one red imager, the chroma imagers can be upsampled to achieve 120 frames/sec for 1080p video. For higher frame rate imaging devices, the number of frame rates can be scaled up linearly. For Standard-Definition (480p) operation, a frame rate of 240 frames/sec may be achieved using the same camera array.
Conventional imaging devices with a high-resolution image sensor (e.g., 8 Megapixels) use binning or skipping to capture lower resolution images (e.g., 1080p30, 720p30 and 480p30). In binning, rows and columns in the captured images are interpolated in the charge, voltage or pixel domains in order to achieve the target video resolutions while reducing the noise. In skipping, rows and columns are skipped in order to reduce the power consumption of the sensor. Both of these techniques result in reduced image quality.
In one embodiment, the imagers in the camera arrays are selectively activated to capture a video image. For example, 9 imagers (including one near-IR imager) may be used to obtain 1080p (1920×1080 pixels) images while 6 imagers (including one near-IR imager) may be used to obtain 720p (1280×720 pixels) images or 4 imagers (including one near-IR imager) may be used to obtain 480p (720×480 pixels) images. Because there is an accurate one-to-one pixel correspondence between the imagers and the target video images, the resolution achieved is higher than with traditional approaches. Further, since only a subset of the imagers is activated to capture the images, significant power savings can also be achieved. For example, a 60% reduction in power consumption is achieved in 1080p and an 80% reduction in power consumption is achieved in 480p.
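Purely as an illustrative configuration (the numbers restate the example above), the mapping between video mode and the number of activated imagers could be expressed as:

```python
# Each activated subset includes one near-IR imager, per the example above.
VIDEO_MODES = {
    "1080p": {"resolution": (1920, 1080), "active_imagers": 9},
    "720p":  {"resolution": (1280, 720),  "active_imagers": 6},
    "480p":  {"resolution": (720, 480),   "active_imagers": 4},
}

def imagers_for_mode(mode):
    """Return how many imagers to activate for a requested video mode."""
    return VIDEO_MODES[mode]["active_imagers"]
```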
Using the near-IR imager to capture video images is advantageous because the information from the near-IR imager may be used to denoise each video image. In this way, the camera arrays of embodiments exhibit excellent low-light sensitivity and can operate in extremely low-light conditions. In one embodiment, super-resolution processing is performed on images from multiple imagers to obtain higher resolution video images. The noise-reduction characteristics of the super-resolution process along with fusion of images from the near-IR imager result in very low-noise images.
In one embodiment, high-dynamic-range (HDR) video capture is enabled by activating more imagers. For example, in a 5×5 camera array operating in 1080p video capture mode, there are only 9 cameras active. A subset of the remaining 16 cameras may be overexposed and underexposed by a stop, in sets of two or four, to achieve a video output with a very high dynamic range.
Other Applications for Multiple Imagers
In one embodiment, the multiple imagers are used for estimating distance to an object in a scene. Since information regarding the distance to each point in an image is available in the camera array along with the extent in x and y coordinates of an image element, the size of an image element may be determined. Further, the absolute size and shape of physical items may be measured without other reference information. For example, a picture of a foot can be taken and the resulting information may be used to accurately estimate the size of an appropriate shoe.
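As a rough illustration of the geometry this relies on (a simple pinhole-camera approximation, not a formula taken from the specification), the physical extent of an image element follows from its pixel extent, its depth, and the focal length expressed in pixels; the sketch below, with hypothetical names and example values, shows the calculation:

```python
def estimate_physical_size(pixel_extent, depth, focal_length_px):
    """Approximate the physical size of an image element under a pinhole model.

    pixel_extent    -- extent of the element in the image, in pixels
    depth           -- distance to the element (same units as the returned size)
    focal_length_px -- focal length expressed in pixels

    Similar triangles give: size / depth = pixel_extent / focal_length_px.
    """
    return depth * pixel_extent / focal_length_px

# Example: a foot spanning 600 pixels at 0.5 m, with a 1400-pixel focal length,
# measures roughly 0.21 m.
print(estimate_physical_size(600, 0.5, 1400.0))
```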
In one embodiment, reduction in depth of field is simulated in images captured by the camera array using distance information. The camera arrays according to the present invention produce images with greatly increased depth of field. The long depth of field, however, may not be desirable in some applications. In such cases, a particular distance or several distances may be selected as the “in best focus” distance(s) for the image, and based on the distance (z) information from parallax information, the image can be blurred pixel-by-pixel using, for example, a simple Gaussian blur. In one embodiment, the depth map obtained from the camera array enables a tone mapping algorithm to use the depth information to guide the level of the mapping, thereby emphasizing or exaggerating the 3D effect.
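The following is a minimal sketch of this kind of depth-guided blur, assuming a grayscale image with a per-pixel depth map aligned to it and using a layered approximation (quantize depth, blur each layer with a Gaussian whose sigma grows with distance from the selected in-focus depth, then composite); the layer count and sigma scaling are illustrative choices rather than values from the specification:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_defocus(image, depth, focus_depth, max_sigma=6.0, layers=8):
    """Blur each pixel in proportion to how far its depth lies from focus_depth.

    image -- 2D grayscale array; depth -- 2D depth map aligned with image.
    """
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Sigma grows with the distance of each depth layer from the in-focus depth.
    span = max(abs(depth.max() - focus_depth), abs(depth.min() - focus_depth), 1e-9)
    sigmas = max_sigma * np.abs(centers - focus_depth) / span
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(layers):
        hi = edges[i + 1] if i < layers - 1 else np.inf
        mask = (depth >= edges[i]) & (depth < hi)
        if mask.any():
            out[mask] = gaussian_filter(image.astype(np.float64), sigmas[i])[mask]
    return out

# Example with a synthetic ramp depth map: pixels far from depth 1.0 are blurred most.
img = np.random.rand(64, 64)
dep = np.tile(np.linspace(0.5, 3.0, 64), (64, 1))
result = synthetic_defocus(img, dep, focus_depth=1.0)
```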
In one embodiment, apertures of different sizes are provided to obtain aperture diversity. The aperture size has a direct relationship with the depth of field. In miniature cameras, however, the aperture is generally made as large as possible to allow as much light as possible to reach the camera array. Different imagers may receive light through apertures of different sizes. For imagers intended to produce a large depth of field, the aperture may be reduced, whereas other imagers may have large apertures to maximize the light received. By fusing images captured with different aperture sizes, images with a large depth of field may be obtained without sacrificing image quality.
In one embodiment, the camera array according to the present invention refocuses based on images captured from offsets in viewpoints. Unlike a conventional plenoptic camera, the images obtained from the camera array of the present invention do not suffer from an extreme loss of resolution. The camera array according to the present invention, however, produces sparse data points for refocusing compared to the plenoptic camera. In order to overcome the sparse data points, interpolation may be performed to refocus data from the sparse data points.
In one embodiment, each imager in the camera array has a different centroid. That is, the optics of each imager are designed and arranged so that the fields of view for each imager slightly overlap but for the most part constitute distinct tiles of a larger field of view. The images from each of the tiles are panoramically stitched together to render a single high-resolution image.
In one embodiment, camera arrays may be formed on separate substrates and mounted on the same motherboard with spatial separation. The lens elements on each imager may be arranged so that the corner of the field of view slightly encompasses a line perpendicular to the substrate. Thus, if four imagers are mounted on the motherboard with each imager rotated 90 degrees with respect to another imager, the fields of view will be four slightly overlapping tiles. This allows a single design of WLO lens array and imager chip to be used to capture different tiles of a panoramic image.
In one embodiment, one or more sets of imagers are arranged to capture images that are stitched to produce panoramic images with overlapping fields of view while another imager or sets of imagers have a field of view that encompasses the tiled image generated. This embodiment provides different effective resolution for imagers with different characteristics. For example, it may be desirable to have more luminance resolution than chrominance resolution. Hence, several sets of imagers may detect luminance with their fields of view panoramically stitched. Fewer imagers may be used to detect chrominance with the field of view encompassing the stitched field of view of the luminance imagers.
In one embodiment, the camera array with multiple imagers is mounted on a flexible motherboard such that the motherboard can be manually bent to change the aspect ratio of the image. For example, a set of imagers can be mounted in a horizontal line on a flexible motherboard so that, in the quiescent state of the motherboard, the fields of view of all of the imagers are approximately the same. If there are four imagers, an image with double the resolution of each individual imager is obtained, so that details in the subject image that are half the dimension of the details resolvable by an individual imager can be resolved. If the motherboard is bent so that it forms part of a vertical cylinder, the imagers point outward. With a partial bend, the width of the subject image is doubled while the detail that can be resolved is reduced because each point in the subject image is in the field of view of two rather than four imagers. At the maximum bend, the subject image is four times wider while the detail that can be resolved in the subject is further reduced.
Each two-dimensional (2D) image in a captured light field is from the viewpoint of one of the cameras in the array camera. A high resolution image produced using super resolution processing is synthesized from a specific viewpoint that can be referred to as a reference viewpoint. The reference viewpoint can be the viewpoint of one of the cameras in a camera array. Alternatively, the reference viewpoint can be an arbitrary virtual viewpoint.
Due to the different viewpoint of each of the cameras, parallax results in variations in the position of foreground objects within the images of the scene. Processes for performing parallax detection are discussed in U.S. Provisional Patent Application Ser. No. 61/691,666 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras” to Venkataraman et al., the disclosure of which is incorporated by reference herein in its entirety. As is disclosed in U.S. Provisional Patent Application Ser. No. 61/691,666, a depth map from a reference viewpoint can be generated by determining the disparity between the pixels in the images within a light field due to parallax. A depth map indicates the distance of the surfaces of scene objects from a reference viewpoint. In a number of embodiments, the computational complexity of generating depth maps is reduced by generating an initial low resolution depth map and then increasing the resolution of the depth map in regions where additional depth information is desirable such as (but not limited to) regions involving depth transitions and/or regions containing pixels that are occluded in one or more images within the light field.
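For context, the standard pinhole-stereo relation between disparity and depth (a general relation rather than a formula quoted from the cited application) for two views separated by a baseline $B$ with focal length $f$ expressed in pixels is

$$ d = \frac{fB}{z} \qquad\Longleftrightarrow\qquad z = \frac{fB}{d}, $$

so nearer objects exhibit larger disparities, and the depth resolution available from a fixed disparity precision degrades with distance.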
During super resolution processing, a depth map can be utilized in a variety of ways. U.S. patent application Ser. No. 12/967,807 describes how a depth map can be utilized during super resolution processing to dynamically refocus a synthesized image, i.e. to blur the synthesized image so that portions of the scene that do not lie on the focal plane appear out of focus. U.S. patent application Ser. No. 12/967,807 also describes how a depth map can be utilized during super resolution processing to generate a stereo pair of higher resolution images for use in 3D applications. A depth map can also be utilized to synthesize a high resolution image from one or more virtual viewpoints. In this way, a rendering device can simulate motion parallax and a dolly zoom (i.e. virtual viewpoints in front of or behind the reference viewpoint). In addition to utilizing a depth map during super-resolution processing, a depth map can be utilized in a variety of post processing processes to achieve effects including (but not limited to) dynamic refocusing, generation of stereo pairs, and generation of virtual viewpoints without performing super-resolution processing. Light field image data captured by array cameras, storage of the light field image data in a light field image file, and the rendering of images using the light field image file in accordance with embodiments of the invention are discussed further below.
Array Camera Architecture
Array cameras in accordance with embodiments of the invention are configured so that the array camera software can control the capture of light field image data and can store the light field image data in a file that can be used to render one or more images on any of a variety of appropriately configured rendering devices. An array camera including an imager array in accordance with an embodiment of the invention is illustrated in
In the illustrated embodiment, the processor receives image data generated by the sensor and reconstructs the light field captured by the sensor from the image data. The processor can manipulate the light field in any of a variety of different ways including (but not limited to) determining the depth and visibility of the pixels in the light field and synthesizing higher resolution 2D images from the image data of the light field. Sensors including multiple focal planes are discussed in U.S. patent application Ser. No. 13/106,797 entitled “Architectures for System on Chip Array Cameras”, to Pain et al., the disclosure of which is incorporated herein by reference in its entirety.
In the illustrated embodiment, the focal planes are configured in a 5×5 array. Each focal plane 104 on the sensor is capable of capturing an image of the scene. The sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements and/or any other structure configured to generate an electrical signal indicative of light incident on the structure. In many embodiments, the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present). In other embodiments, the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element.
In many embodiments, an array of images (i.e. a light field) is created using the image data captured by the focal planes in the sensor. As noted above, processors 108 in accordance with many embodiments of the invention are configured using appropriate software to take the image data within the light field and synthesize one or more high resolution images. In several embodiments, the high resolution image is synthesized from a reference viewpoint, typically that of a reference focal plane 104 within the sensor 102. In many embodiments, the processor is able to synthesize an image from a virtual viewpoint, which does not correspond to the viewpoints of any of the focal planes 104 in the sensor 102. Unless all of the objects within a captured scene are a significant distance from the array camera, the images in the light field will include disparity due to the different fields of view of the focal planes used to capture the images. Processes for detecting and correcting for disparity when performing super-resolution processing in accordance with embodiments of the invention are discussed in U.S. Provisional Patent Application Ser. No. 61/691,666 (incorporated by reference above). The detected disparity can be utilized to generate a depth map. The high resolution image and depth map can be encoded and stored in memory 110 in a light field image file. The processor 108 can use the light field image file to render one or more high resolution images. The processor 108 can also coordinate the sharing of the light field image file with other devices (e.g. via a network connection), which can use the light field image file to render one or more high resolution images.
Although a specific array camera architecture is illustrated in
Capturing and Storing Light Field Image Data
Processes for capturing and storing light field image data in accordance with many embodiments of the invention involve capturing light field image data, generating a depth map from a reference viewpoint, and using the light field image data and the depth map to synthesize an image from the reference viewpoint. The synthesized image can then be compressed for storage. The depth map and additional data that can be utilized in the post processing can also be encoded as metadata that can be stored in the same container file with the encoded image.
A process for capturing and storing light field image data in accordance with an embodiment of the invention is illustrated in
The light field image data and the depth map can be utilized to synthesize (206) an image from a specific viewpoint. In many embodiments, the light field image data includes a number of low resolution images that are used to synthesize a higher resolution image using a super resolution process. In a number of embodiments, a super resolution process such as (but not limited to) any of the super resolution processes disclosed in U.S. patent application Ser. No. 12/967,807 can be utilized to synthesize a higher resolution image from the reference viewpoint.
In order to be able to perform post processing to modify the synthesized image without the original light field image data, metadata can be generated (208) from the light field image data, the synthesized image, and/or the depth map. The metadata can be included in a light field image file and utilized during post processing of the synthesized image to perform processing including (but not limited to) refocusing the encoded image based upon a focal plane specified by the user, and synthesizing one or more images from a different viewpoint. In a number of embodiments, the auxiliary data includes (but is not limited to) pixels in the light field image data occluded from the reference viewpoint used to synthesize the image from the light field image data, and one or more auxiliary maps including (but not limited to) a confidence map, an edge map, and/or a missing pixel map. Auxiliary data that is formatted as maps or layers provides information corresponding to pixel locations within the synthesized image. A confidence map is produced during the generation of a depth map and reflects the reliability of the depth value for a particular pixel. This information may be used to apply different filters in areas of the image and improve image quality of the rendered image. An edge map defines which pixels are edge pixels, which enables application of filters that refine edges (e.g. post sharpening). A missing pixel map represents pixels computed by interpolation of neighboring pixels and enables selection of post-processing filters to improve image quality. As can be readily appreciated, the specific metadata generated depends upon the post processing supported by the image data file. In a number of embodiments, no auxiliary data is included in the image data file.
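As one way of picturing how these per-pixel layers might be organized in memory before being written to a file (the field names below are hypothetical and not taken from the format described here), a minimal container could look like the following sketch:

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class LightFieldMetadata:
    """Per-pixel layers accompanying a synthesized image (hypothetical field names)."""
    depth_map: np.ndarray                           # depth from the reference viewpoint
    confidence_map: Optional[np.ndarray] = None     # reliability of each depth value
    edge_map: Optional[np.ndarray] = None           # nonzero where a pixel lies on an edge
    missing_pixel_map: Optional[np.ndarray] = None  # nonzero where the pixel was interpolated
    occluded_pixels: list = field(default_factory=list)  # pixels hidden from the reference view

    def validate(self, image_shape):
        """Every map must align pixel-for-pixel with the synthesized image."""
        for m in (self.depth_map, self.confidence_map,
                  self.edge_map, self.missing_pixel_map):
            assert m is None or m.shape[:2] == tuple(image_shape[:2])
```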
In order to generate an image data file, the synthesized image is encoded (210). The encoding typically involves compressing the synthesized image and can involve lossless or lossy compression of the synthesized image. In many embodiments, the depth map and any auxiliary data are written (212) to a file with the encoded image as metadata to generate a light field image data file. In a number of embodiments, the depth map and/or the auxiliary maps are encoded. In many embodiments, the encoding involves lossless compression.
Although specific processes for encoding light field image data for storage in a light field image file are discussed above, any of a variety of techniques can be utilized to process light field image data and store the results in an image file including but not limited to processes that encode low resolution images captured by an array camera and calibration information concerning the array camera that can be utilized in super resolution processing. Storage of light field image data in JFIF files in accordance with embodiments of the invention is discussed further below.
Image Data Formats
In several embodiments, the encoding of a synthesized image and the container file format utilized to create the light field image file are based upon standards including (but not limited to) the JPEG standard (ISO/IEC 10918-1) for encoding a still image as a bitstream and the JFIF standard (ISO/IEC 10918-5). By utilizing these standards, the synthesized image can be rendered by any rendering device configured to support rendering of JPEG images contained within JFIF files. In many embodiments, additional data concerning the synthesized image, such as (but not limited to) a depth map and auxiliary data that can be utilized in the post processing of the synthesized image, can be stored as metadata associated with an Application marker within the JFIF file. Conventional rendering devices can simply skip Application markers containing this metadata. Rendering devices in accordance with many embodiments of the invention can decode the metadata and utilize the metadata in any of a variety of post processing processes.
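A minimal sketch of how metadata bytes could be wrapped in an Application marker segment and spliced into a JPEG bitstream follows; the segment structure (0xFF, the APPn marker code, a two-byte big-endian length that counts itself, then the payload) follows the JFIF conventions described earlier, while the choice of APP marker number and the four-character payload tag are arbitrary assumptions for illustration:

```python
import struct

SOI = b"\xff\xd8"

def app_segment(marker_index, payload):
    """Build an APPn segment: 0xFF 0xEn | 2-byte big-endian length (counts itself) | payload."""
    if not 0 <= marker_index <= 15:
        raise ValueError("APP marker index must be 0..15")
    length = len(payload) + 2
    if length > 0xFFFF:
        raise ValueError("payload too large for a single marker segment")
    return bytes([0xFF, 0xE0 + marker_index]) + struct.pack(">H", length) + payload

def insert_app_segment(jpeg_bytes, segment):
    """Insert a segment after the SOI marker and any existing APPn segments,
    so that the JFIF APP0 segment stays immediately after SOI."""
    assert jpeg_bytes[:2] == SOI, "not a JPEG bitstream"
    pos = 2
    while (pos + 4 <= len(jpeg_bytes) and jpeg_bytes[pos] == 0xFF
           and 0xE0 <= jpeg_bytes[pos + 1] <= 0xEF):
        seg_len = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])[0]
        pos += 2 + seg_len
    return jpeg_bytes[:pos] + segment + jpeg_bytes[pos:]

# Hypothetical example: tag a small payload and place it in an APP8 segment
# (the marker number and "LFMD" tag are arbitrary choices for illustration).
segment = app_segment(8, b"LFMD" + b"\x00" * 16)
```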
A process for encoding an image synthesized using light field image data in accordance with the JPEG specification and for including the encoded image and metadata that can be utilized in the post processing of the image in a JFIF file in accordance with an embodiment of the invention is illustrated in
Although specific processes are discussed above for storing light field image data in JFIF files, any of a variety of processes can be utilized to encode synthesized images and additional metadata derived from the light field image data used to synthesize the encoded images in a JFIF file as appropriate to the requirements of a specific application in accordance with embodiments of the invention. The encoding of synthesized images and metadata for insertion into JFIF files in accordance with embodiments of the invention are discussed further below. Although much of the discussion that follows relates to JFIF files, synthesized images and metadata can be encoded for inclusion in a light field image file using any of a variety of proprietary or standards based encoding techniques and/or utilizing any of a variety of proprietary or standards based file formats.
Encoding Images Synthesized from Light Field Image Data
An image synthesized from light field image data using super resolution processing can be encoded in accordance with the JPEG standard for inclusion in a light field image file in accordance with embodiments of the invention. The JPEG standard is a lossy compression standard. However, the information losses typically do not impact edges of objects. Therefore, the loss of information during the encoding of the image typically does not impact the accuracy of maps generated based upon the synthesized image (as opposed to the encoded synthesized image). The pixels within images contained within files that comply with the JFIF standard are typically encoded as YCbCr values. Many array cameras synthesize images, where each pixel is expressed in terms of a Red, Green and Blue intensity value. In several embodiments, the process of encoding the synthesized image involves mapping the pixels of the image from the RGB domain to the YCbCr domain prior to encoding. In other embodiments, mechanisms are used within the file to encode the image in the RGB domain. Typically, encoding in the YCbCr domain provides better compression ratios and encoding in the RGB domain provides higher decoded image quality.
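The RGB-to-YCbCr mapping referred to here is, in JFIF files, the full-range CCIR 601 conversion in which all three components occupy the full 0-255 range; a direct vectorized version is sketched below:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to full-range YCbCr as used in JFIF files."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.clip(np.stack([y, cb, cr], axis=-1).round(), 0, 255).astype(np.uint8)
```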
Storing Additional Metadata Derived from Light Field Image Data
The JFIF standard does not specify a format for storing depth maps or auxiliary data generated by an array camera. The JFIF standard does, however, provide sixteen Application markers that can be utilized to store metadata concerning the encoded image contained within the file. In a number of embodiments, one or more of the Application markers of a JFIF file is utilized to store an encoded depth map and/or one or more auxiliary maps that can be utilized in the post processing of the encoded image contained within the file.
A JFIF Application marker segment that can be utilized to store a depth map, individual camera occlusion data and auxiliary map data in accordance with an embodiment of the invention is illustrated in
The Application marker segment includes a header 404 indicated as “DZ Header” that provides a description of the metadata contained within the Application marker segment. In the illustrated embodiment, the “DZ Header” 404 includes a DZ Endian field that indicates whether the data in the “DZ Header” is big endian or little endian. The “DZ Header” 404 also includes a “DZ Selection Descriptor”.
An embodiment of a “DZ Selection Descriptor” is illustrated in
Depth Map
Referring back to
A “Depth Map Attributes” table in accordance with an embodiment of the invention is illustrated in
A “Depth Map Descriptor” in accordance with an embodiment of the invention is illustrated in
A JFIF Application marker segment is restricted to 65,533 bytes. However, an Application marker can be utilized multiple times within a JFIF file. Therefore, depth maps in accordance with many embodiments of the invention can span multiple Application marker segments. The manner in which depth map data is stored within an Application marker segment in a JFIF file in accordance with an embodiment of the invention is illustrated in
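A sketch of how metadata larger than a single marker segment could be split across several segments and reassembled on read is shown below; the per-chunk header used here (a four-byte tag plus a chunk index and chunk count) is a hypothetical stand-in for the header fields actually defined for the Application marker segment described above:

```python
import struct

MAX_SEGMENT_PAYLOAD = 65533             # 2-byte length field counts itself
CHUNK_HEADER = struct.Struct(">4sHH")   # tag, chunk index, chunk count (hypothetical)

def split_into_chunks(data, tag=b"DPTH"):
    """Split metadata bytes into payloads that each fit one Application marker segment."""
    room = MAX_SEGMENT_PAYLOAD - CHUNK_HEADER.size
    chunks = [data[i:i + room] for i in range(0, len(data), room)] or [b""]
    return [CHUNK_HEADER.pack(tag, i, len(chunks)) + c for i, c in enumerate(chunks)]

def reassemble(payloads, tag=b"DPTH"):
    """Reassemble chunked metadata from marker segment payloads, in index order."""
    parts = {}
    for p in payloads:
        t, index, count = CHUNK_HEADER.unpack_from(p)
        if t == tag:
            parts[index] = p[CHUNK_HEADER.size:]
    return b"".join(parts[i] for i in sorted(parts))
```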
Although specific implementations of a depth map and header describing a depth map within an Application marker segment of a JFIF file are illustrated in
Occlusion Data
Referring back to
A “Camera Array General Attributes” table in accordance with an embodiment of the invention is illustrated in
A “Camera Array Descriptor” in accordance with an embodiment of the invention is illustrated in
In many embodiments, occlusion data is provided on a camera by camera basis. In several embodiments, the occlusion data is included within a JFIF file using an individual camera descriptor and an associated set of occlusion data. An individual camera descriptor that identifies a camera and identifies the number of occluded pixels related to the identified camera described within the JFIF file in accordance with an embodiment of the invention is illustrated in
A table describing an occluded pixel that can be inserted within a JFIF file in accordance with an embodiment of the invention is illustrated in
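Because the exact field layout of the occluded pixel table is given in the referenced figure, the record below (pixel coordinates, depth, and color) is only a hypothetical stand-in intended to show how fixed-size per-pixel records of this kind can be packed into, and parsed back out of, a marker segment payload:

```python
import struct

# Hypothetical record: x, y (uint16), depth (float32), R, G, B (uint8) -- 11 bytes.
OCCLUDED_PIXEL = struct.Struct(">HHfBBB")

def pack_occluded_pixels(pixels):
    """pixels: iterable of (x, y, depth, r, g, b) tuples -> packed bytes."""
    return b"".join(OCCLUDED_PIXEL.pack(*p) for p in pixels)

def unpack_occluded_pixels(blob):
    """Packed bytes -> list of (x, y, depth, r, g, b) tuples."""
    return list(OCCLUDED_PIXEL.iter_unpack(blob))

blob = pack_occluded_pixels([(10, 20, 1.5, 200, 180, 160)])
print(unpack_occluded_pixels(blob))   # [(10, 20, 1.5, 200, 180, 160)]
```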
Although specific implementations for storing information describing occluded pixel depth within an Application marker segment of a JFIF file are illustrated in
Auxiliary Maps
Referring back to
An “Auxiliary Map Descriptor” that describes an auxiliary map contained within a light field image file in accordance with an embodiment of the invention is illustrated in
“Auxiliary Map Data” stored in a JFIF file in accordance with an embodiment of the invention is conceptually illustrated in
Although specific implementations for storing auxiliary maps within an Application marker segment of a JFIF file are illustrated in
Confidence Maps
A confidence map can be utilized to provide information concerning the relative reliability of the information at a specific pixel location. In several embodiments, a confidence map is represented as a complementary one bit per pixel map representing pixels within the encoded image that were visible in only a subset of the images used to synthesize the encoded image. In other embodiments, a confidence map can utilize additional bits of information to express confidence using any of a variety of metrics including (but not limited to) a confidence measure determined during super resolution processing, or the number of images in which the pixel is visible.
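A one-bit-per-pixel map of this kind can be packed compactly; the sketch below simply packs and unpacks a boolean confidence mask with numpy (the bit ordering is numpy's default, which a writer and reader would need to agree on):

```python
import numpy as np

def pack_bitmap(mask):
    """Pack a 2D boolean map into bytes, one bit per pixel, row-major."""
    return np.packbits(mask.astype(np.uint8), axis=None).tobytes()

def unpack_bitmap(blob, shape):
    """Recover the 2D boolean map given the original (height, width)."""
    bits = np.unpackbits(np.frombuffer(blob, dtype=np.uint8))
    return bits[: shape[0] * shape[1]].reshape(shape).astype(bool)

confidence = np.random.rand(1080, 1920) > 0.1   # True where the depth value is reliable
restored = unpack_bitmap(pack_bitmap(confidence), confidence.shape)
assert np.array_equal(confidence, restored)
```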
Edge Maps
A variety of edge maps can be provided, including (but not limited to) a regular edge map and a silhouette edge map. A regular edge map is a map that identifies pixels that are on an edge in the image, where the edge is an intensity discontinuity. A silhouette edge map is a map that identifies pixels that are on an edge, where the edge involves both an intensity discontinuity and a depth discontinuity. In several embodiments, each can be expressed as a separate one bit per pixel map, or the two maps can be combined as a map including two bits per pixel. The bits simply signal the presence of a particular type of edge at a specific location to post processing processes that apply filters including (but not limited to) various edge preserving and/or edge sharpening filters.
Missing Pixel Maps
A missing pixel map indicates pixel locations in a synthesized image that do not include a pixel from the light field image data, but instead include an interpolated pixel value. In several embodiments, a missing pixel map can be represented using a complementary one bit per pixel map. The missing pixel map enables selection of post-processing filters to improve image quality. In many embodiments, a simple interpolation algorithm can be used during the synthesis of a higher resolution image from light field image data, and the missing pixel map can be utilized to apply a more computationally expensive interpolation process as a post processing process. In other embodiments, missing pixel maps can be utilized in any of a variety of different post processing processes as appropriate to the requirements of a specific application in accordance with embodiments of the invention.
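As one example of the post processing this map enables (an illustrative choice of interpolator, not one prescribed by the specification), the flagged pixels can be re-estimated with a more expensive method than the one used during synthesis, filling only the flagged locations from the surrounding original pixels:

```python
import numpy as np
from scipy.interpolate import griddata

def refine_missing_pixels(image, missing_mask):
    """Re-interpolate pixels flagged in missing_mask from the surrounding real pixels.

    image -- 2D grayscale array; missing_mask -- boolean array of the same shape.
    """
    known = ~missing_mask
    ys, xs = np.nonzero(known)
    values = image[known]
    target_ys, target_xs = np.nonzero(missing_mask)
    filled = griddata((ys, xs), values, (target_ys, target_xs), method="cubic")
    # Fall back to nearest-neighbour where cubic interpolation is undefined (image borders).
    nearest = griddata((ys, xs), values, (target_ys, target_xs), method="nearest")
    filled = np.where(np.isnan(filled), nearest, filled)
    out = image.astype(np.float64).copy()
    out[missing_mask] = filled
    return out
```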
Rendering Images Using Light Field Image Files
When light field image data is encoded in a light field image file, the light field image file can be shared with a variety of rendering devices including but not limited to cameras, mobile devices, personal computers, tablet computers, network connected televisions, network connected game consoles, network connected media players, and any other device that is connected to the Internet and can be configured to display images. A system for sharing light field image files in accordance with an embodiment of the invention is illustrated in
Rendering Devices
A rendering device in accordance with embodiments of the invention typically includes a processor and a rendering application that enables the rendering of an image based upon a light field image data file. The simplest rendering is for the rendering device to decode the encoded image contained within the light field image data file. More complex renderings involve applying post processing to the encoded image using the metadata contained within the light field image file to perform manipulations including (but not limited to) modifying the viewpoint of the image and/or modifying the focal plane of the image.
A rendering device in accordance with an embodiment of the invention is illustrated in
Processes for Rendering Images Using Light Field Image Files
As noted above, rendering a light field image file can be as simple as decoding an encoded image contained within the light field image file or can involve more complex post processing of the encoded image using metadata derived from the same light field image data used to synthesize the encoded image. A process for rendering a light field image in accordance with an embodiment of the invention is illustrated in
Although specific processes for rendering an image from a light field image file are discussed with reference to
Rendering Images Using JFIF Light Field Image Files
The ability to leverage deployed JPEG decoders can greatly simplify the process of rendering light field images. When a light field image file conforms to the JFIF standard and the image and/or metadata encoded within the light field image file is encoded in accordance with the JPEG standard, a rendering application can leverage an existing implementation of a JPEG decoder to render an image using the light field image file. Similar efficiencies can be obtained where the light field image file includes an image and/or metadata encoded in accordance with another popular standard for image encoding.
A rendering device configured by a rendering application to render an image using a light field image file in accordance with an embodiment of the invention is illustrated in
Although specific rendering devices including JPEG decoders are discussed above with reference to
Processes for Rendering Images from JFIF Light Field Image Files
Processes for rendering images using light field image files that conform to the JFIF standard can utilize markers within the light field image file to identify encoded images and metadata. Headers within the metadata provide information concerning the metadata present in the file and can provide offset information or pointers to the location of additional metadata and/or markers within the file to assist with parsing the file. Once the appropriate information is located, a standard JPEG decoder implementation can be utilized to decode encoded images and/or maps within the file.
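A minimal marker scanner of the kind this parsing implies is sketched below; it assumes only the JPEG/JFIF segment structure already described (two-byte markers followed by big-endian length fields that count themselves) and returns the marker code, offset, and payload of each Application marker segment so that a rendering application can locate metadata segments before handing the bitstream to a standard JPEG decoder:

```python
import struct

def scan_app_segments(jpeg_bytes):
    """Yield (marker, offset, payload) for every APPn segment before the scan data."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "missing SOI marker"
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        if jpeg_bytes[pos] != 0xFF:
            break
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:            # SOS: entropy-coded image data follows, stop scanning
            break
        length = struct.unpack(">H", jpeg_bytes[pos + 2:pos + 4])[0]
        if 0xE0 <= marker <= 0xEF:    # APP0 .. APP15
            yield marker, pos, jpeg_bytes[pos + 4:pos + 2 + length]
        pos += 2 + length

# Usage (hypothetical file name):
# for marker, offset, payload in scan_app_segments(open("image.jfif", "rb").read()):
#     print(hex(marker), offset, len(payload))
```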
A process for displaying an image rendered using a light field image file that conforms to the JFIF standard using a JPEG decoder in accordance with an embodiment of the invention is illustrated in
Although specific processes for displaying images rendered using light field image files are discussed above with respect to
Post Processing of Images Using Metadata Derived from Light Field Image Data
Images can be synthesized from light field image data in a variety of ways. Metadata included in light field image files in accordance with embodiments of the invention can enable images to be rendered from a single image synthesized from the light field image data without the need to perform super resolution processing. Advantages of rendering images in this way can include that the process of obtaining the final image is less processor intensive and less data is used to obtain the final image. However, the light field image data provides rich information concerning a captured scene from multiple viewpoints. In many embodiments, a depth map and occluded pixels from the light field image data (i.e. pixels that are not visible from the reference viewpoint of the synthesized image) can be included in a light field image file to provide some of the additional information typically contained within light field image data. The depth map can be utilized to modify the focal plane when rendering an image and/or to apply depth dependent effects to the rendered image. The depth map and the occluded pixels can be utilized to synthesize images from different viewpoints. In several embodiments, additional maps are provided (such as, but not limited to, confidence maps, edge maps, and missing pixel maps) that can be utilized when rendering alternative viewpoints to improve the resulting rendered image. The ability to render images from different viewpoints can be utilized to simply render an image from a different viewpoint. In many embodiments, the ability to render images from different viewpoints can be utilized to generate a stereo pair for 3D viewing. In several embodiments, processes similar to those described in U.S. Provisional Patent Application Ser. No. 61/707,691, entitled “Synthesizing Images From Light Fields Utilizing Virtual Viewpoints” to Jain (the disclosure of which is incorporated herein by reference in its entirety) can be utilized to modify the viewpoint based upon motion of a rendering device to create a motion parallax effect. Processes for rendering images using depth based effects and for rendering images using different viewpoints are discussed further below.
Rendering Images Using Depth Based Effects
A variety of depth based effects can be applied to an image synthesized from light field image data in accordance with embodiments of the invention, including (but not limited to) applying dynamic refocusing of an image, locally varying the depth of field within an image, selecting multiple in-focus areas at different depths, and/or applying one or more depth related blur models. A process for applying depth based effects to an image synthesized from light field image data and contained within a light field image file that includes a depth map in accordance with an embodiment of the invention is illustrated in
Although specific processes for applying depth dependent effects to an image synthesized from light field image data using a depth map obtained using the light field image data are discussed above with respect to
Rendering Images Using Different Viewpoints
One of the compelling aspects of computational imaging is the ability to use light field image data to synthesize images from different viewpoints. The ability to synthesize images from different viewpoints creates interesting possibilities including the creation of stereo pairs for 3D applications and the simulation of motion parallax as a user interacts with an image. Light field image files in accordance with many embodiments of the invention can include an image synthesized from light field image data from a reference viewpoint, a depth map for the synthesized image, and information concerning pixels from the light field image data that are occluded in the reference viewpoint. A rendering device can use the information concerning the depths of the pixels in the synthesized image and the depths of the occluded pixels to determine the appropriate shifts to apply to the pixels to shift them to the locations in which they would appear from a different viewpoint. Occluded pixels from the different viewpoint can be identified, locations on the grid of the different viewpoint that are missing pixels can be identified, and hole filling can be performed using interpolation of adjacent non-occluded pixels. In many embodiments, the quality of an image rendered from a different viewpoint can be increased by providing additional information in the form of auxiliary maps that can be used to refine the rendering process. In a number of embodiments, auxiliary maps can include confidence maps, edge maps, and missing pixel maps. Each of these maps can provide a rendering process with information concerning how to render an image based on customized preferences provided by a user. In other embodiments, any of a variety of auxiliary information including additional auxiliary maps can be provided as appropriate to the requirements of a specific rendering process.
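A highly simplified sketch of the forward-warping step this describes follows, assuming a grayscale image, a purely horizontal change of viewpoint, and a per-pixel displacement proportional to inverse depth (the proportionality constant standing in for the product of baseline and focal length); nearer pixels win when several land on the same location, holes are filled from the nearest valid pixel in the same row, and the occlusion handling and auxiliary-map refinement described above are omitted:

```python
import numpy as np

def shift_viewpoint(image, depth, shift_scale):
    """Forward-warp a grayscale image horizontally by a disparity ~ shift_scale / depth."""
    h, w = image.shape
    out = np.full((h, w), np.nan)
    zbuf = np.full((h, w), np.inf)
    disparity = np.round(shift_scale / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:   # nearer pixels win
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    # Hole filling: copy each missing pixel from its nearest valid neighbour in the row.
    for y in range(h):
        row = out[y]
        valid = np.where(~np.isnan(row))[0]
        if valid.size == 0:
            continue
        targets = np.arange(w)
        nearest = valid[np.argmin(np.abs(targets[:, None] - valid[None, :]), axis=1)]
        out[y] = row[nearest]
    return out
```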
A process for rendering an image from a different viewpoint using a light field image file containing an image synthesized using light field image data from a reference viewpoint, a depth map describing the depth of the pixels of the synthesized image, and information concerning occluded pixels in accordance with an embodiment of the invention is illustrated in
Although specific processes for rendering an image from a different viewpoint using an image synthesized from a reference view point using light field image data, a depth map obtained using the light field image data, and information concerning pixels in the light field image data that are occluded in the reference viewpoint are discussed above with respect to
While the above description contains many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as an example of one embodiment thereof. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
The present application is a continuation of U.S. patent application Ser. No. 14/477,374, filed Sep. 4, 2014, which is a continuation of U.S. patent application Ser. No. 13/955,411, filed Jul. 31, 2013 and issued as U.S. Pat. No. 8,831,367, which is a continuation of U.S. patent application Ser. No. 13/631,736, filed Sep. 28, 2012 and issued as U.S. Pat. No. 8,542,933, which claims priority to U.S. Provisional Application No. 61/540,188 entitled “JPEG-DX: A Backwards-compatible, Dynamic Focus Extension to JPEG”, to Venkataraman et al., filed Sep. 28, 2011, the disclosures of which are incorporated herein by reference in their entireties.
| Number | Name | Date | Kind |
|---|---|---|---|
| 4124798 | Thompson | Nov 1978 | A |
| 4198646 | Alexander et al. | Apr 1980 | A |
| 4323925 | Abell et al. | Apr 1982 | A |
| 4460449 | Montalbano | Jul 1984 | A |
| 4467365 | Murayama et al. | Aug 1984 | A |
| 5005083 | Grage | Apr 1991 | A |
| 5070414 | Tsutsumi | Dec 1991 | A |
| 5144448 | Hornbaker | Sep 1992 | A |
| 5327125 | Iwase et al. | Jul 1994 | A |
| 5629524 | Stettner et al. | May 1997 | A |
| 5808350 | Jack et al. | Sep 1998 | A |
| 5832312 | Rieger et al. | Nov 1998 | A |
| 5880691 | Fossum et al. | Mar 1999 | A |
| 5933190 | Dierickx et al. | Aug 1999 | A |
| 5973844 | Burger | Oct 1999 | A |
| 6002743 | Telymonde | Dec 1999 | A |
| 6005607 | Uomori et al. | Dec 1999 | A |
| 6034690 | Gallery et al. | Mar 2000 | A |
| 6069351 | Mack | May 2000 | A |
| 6069365 | Chow et al. | May 2000 | A |
| 6097394 | Levoy et al. | Aug 2000 | A |
| 6124974 | Burger | Sep 2000 | A |
| 6137535 | Meyers | Oct 2000 | A |
| 6141048 | Meyers | Oct 2000 | A |
| 6160909 | Melen | Dec 2000 | A |
| 6163414 | Kikuchi et al. | Dec 2000 | A |
| 6175379 | Uomori et al. | Jan 2001 | B1 |
| 6205241 | Melen | Mar 2001 | B1 |
| 6239909 | Hayashi et al. | May 2001 | B1 |
| 6358862 | Ireland et al. | Mar 2002 | B1 |
| 6477260 | Shimomura | Nov 2002 | B1 |
| 6525302 | Dowski, Jr. et al. | Feb 2003 | B2 |
| 6563537 | Kawamura et al. | May 2003 | B1 |
| 6603513 | Berezin | Aug 2003 | B1 |
| 6611289 | Yu | Aug 2003 | B1 |
| 6627896 | Hashimoto et al. | Sep 2003 | B1 |
| 6628330 | Lin | Sep 2003 | B1 |
| 6635941 | Suda | Oct 2003 | B2 |
| 6657218 | Noda | Dec 2003 | B2 |
| 6671399 | Berestov | Dec 2003 | B1 |
| 6750904 | Lambert | Jun 2004 | B1 |
| 6765617 | Tangen et al. | Jul 2004 | B1 |
| 6771833 | Edgar | Aug 2004 | B1 |
| 6774941 | Boisvert et al. | Aug 2004 | B1 |
| 6795253 | Shinohara | Sep 2004 | B2 |
| 6819358 | Kagle et al. | Nov 2004 | B1 |
| 6879735 | Portniaguine et al. | Apr 2005 | B1 |
| 6903770 | Kobayashi et al. | Jun 2005 | B1 |
| 6909121 | Nishikawa | Jun 2005 | B2 |
| 6958862 | Joseph | Oct 2005 | B1 |
| 7085409 | Sawhney et al. | Aug 2006 | B2 |
| 7161614 | Yamashita et al. | Jan 2007 | B1 |
| 7199348 | Olsen et al. | Apr 2007 | B2 |
| 7262799 | Suda | Aug 2007 | B2 |
| 7292735 | Blake et al. | Nov 2007 | B2 |
| 7295697 | Satoh | Nov 2007 | B1 |
| 7369165 | Bosco et al. | May 2008 | B2 |
| 7391572 | Jacobowitz et al. | Jun 2008 | B2 |
| 7408725 | Sato | Aug 2008 | B2 |
| 7606484 | Richards et al. | Oct 2009 | B1 |
| 7633511 | Shum et al. | Dec 2009 | B2 |
| 7646549 | Zalevsky et al. | Jan 2010 | B2 |
| 7657090 | Omatsu et al. | Feb 2010 | B2 |
| 7675080 | Boettiger | Mar 2010 | B2 |
| 7675681 | Tomikawa et al. | Mar 2010 | B2 |
| 7706634 | Schmitt et al. | Apr 2010 | B2 |
| 7723662 | Levoy et al. | May 2010 | B2 |
| 7782364 | Smith | Aug 2010 | B2 |
| 7840067 | Shen et al. | Nov 2010 | B2 |
| 7912673 | Hébert et al. | Mar 2011 | B2 |
| 7986018 | Rennie | Jul 2011 | B2 |
| 7990447 | Honda et al. | Aug 2011 | B2 |
| 8000498 | Shih et al. | Aug 2011 | B2 |
| 8013904 | Tan et al. | Sep 2011 | B2 |
| 8027531 | Wilburn et al. | Sep 2011 | B2 |
| 8044994 | Vetro et al. | Oct 2011 | B2 |
| 8077245 | Adamo et al. | Dec 2011 | B2 |
| 8098304 | Pinto et al. | Jan 2012 | B2 |
| 8106949 | Tan et al. | Jan 2012 | B2 |
| 8126279 | Marcellin et al. | Feb 2012 | B2 |
| 8130120 | Kawabata et al. | Mar 2012 | B2 |
| 8131097 | Lelescu et al. | Mar 2012 | B2 |
| 8164629 | Zhang | Apr 2012 | B1 |
| 8180145 | Wu et al. | May 2012 | B2 |
| 8189089 | Georgiev et al. | May 2012 | B1 |
| 8212914 | Chiu | Jul 2012 | B2 |
| 8213711 | Tam et al. | Jul 2012 | B2 |
| 8231814 | Duparre | Jul 2012 | B2 |
| 8242426 | Ward et al. | Aug 2012 | B2 |
| 8244027 | Takahashi | Aug 2012 | B2 |
| 8254668 | Mashitani et al. | Aug 2012 | B2 |
| 8279325 | Pitts et al. | Oct 2012 | B2 |
| 8280194 | Wong et al. | Oct 2012 | B2 |
| 8289409 | Chang | Oct 2012 | B2 |
| 8294099 | Blackwell, Jr. | Oct 2012 | B2 |
| 8305456 | McMahon | Nov 2012 | B1 |
| 8315476 | Georgiev et al. | Nov 2012 | B1 |
| 8345144 | Georgiev et al. | Jan 2013 | B1 |
| 8360574 | Ishak et al. | Jan 2013 | B2 |
| 8406562 | Bassi et al. | Mar 2013 | B2 |
| 8446492 | Nakano et al. | May 2013 | B2 |
| 8514491 | Duparre | Aug 2013 | B2 |
| 8541730 | Inuiya | Sep 2013 | B2 |
| 8542933 | Venkataraman et al. | Sep 2013 | B2 |
| 8553093 | Wong et al. | Oct 2013 | B2 |
| 8559756 | Georgiev et al. | Oct 2013 | B2 |
| 8619082 | Ciurea et al. | Dec 2013 | B1 |
| 8655052 | Spooner et al. | Feb 2014 | B2 |
| 8682107 | Yoon et al. | Mar 2014 | B2 |
| 8692893 | McMahon | Apr 2014 | B2 |
| 8773536 | Zhang | Jul 2014 | B1 |
| 8780113 | Ciurea et al. | Jul 2014 | B1 |
| 8804255 | Duparre | Aug 2014 | B2 |
| 8830375 | Ludwig | Sep 2014 | B2 |
| 8831367 | Venkataraman et al. | Sep 2014 | B2 |
| 8854462 | Herbin et al. | Oct 2014 | B2 |
| 8861089 | Duparre | Oct 2014 | B2 |
| 8866920 | Venkataraman et al. | Oct 2014 | B2 |
| 8878950 | Lelescu et al. | Nov 2014 | B2 |
| 8885059 | Venkataraman et al. | Nov 2014 | B1 |
| 8896594 | Xiong et al. | Nov 2014 | B2 |
| 8896719 | Venkataraman et al. | Nov 2014 | B1 |
| 8902321 | Venkataraman et al. | Dec 2014 | B2 |
| 20010005225 | Clark et al. | Jun 2001 | A1 |
| 20010019621 | Hanna et al. | Sep 2001 | A1 |
| 20010038387 | Tomooka et al. | Nov 2001 | A1 |
| 20020012056 | Trevino | Jan 2002 | A1 |
| 20020027608 | Johnson | Mar 2002 | A1 |
| 20020039438 | Mori et al. | Apr 2002 | A1 |
| 20020063807 | Margulis | May 2002 | A1 |
| 20020087403 | Meyers et al. | Jul 2002 | A1 |
| 20020089596 | Suda | Jul 2002 | A1 |
| 20020094027 | Sato et al. | Jul 2002 | A1 |
| 20020101528 | Lee et al. | Aug 2002 | A1 |
| 20020113867 | Takigawa et al. | Aug 2002 | A1 |
| 20020113888 | Sonoda et al. | Aug 2002 | A1 |
| 20020163054 | Suda et al. | Nov 2002 | A1 |
| 20020167537 | Trajkovic | Nov 2002 | A1 |
| 20020177054 | Saitoh et al. | Nov 2002 | A1 |
| 20030086079 | Barth et al. | May 2003 | A1 |
| 20030124763 | Fan et al. | Jul 2003 | A1 |
| 20030140347 | Varsa | Jul 2003 | A1 |
| 20030179418 | Wengender et al. | Sep 2003 | A1 |
| 20030190072 | Adkins et al. | Oct 2003 | A1 |
| 20030211405 | Venkataraman | Nov 2003 | A1 |
| 20040008271 | Hagimori et al. | Jan 2004 | A1 |
| 20040012689 | Tinnerino | Jan 2004 | A1 |
| 20040027358 | Nakao | Feb 2004 | A1 |
| 20040047274 | Amanai | Mar 2004 | A1 |
| 20040050104 | Ghosh et al. | Mar 2004 | A1 |
| 20040056966 | Schechner et al. | Mar 2004 | A1 |
| 20040066454 | Otani et al. | Apr 2004 | A1 |
| 20040100570 | Shizukuishi | May 2004 | A1 |
| 20040114807 | Lelescu et al. | Jun 2004 | A1 |
| 20040151401 | Sawhney et al. | Aug 2004 | A1 |
| 20040165090 | Ning | Aug 2004 | A1 |
| 20040169617 | Yelton et al. | Sep 2004 | A1 |
| 20040170340 | Tipping et al. | Sep 2004 | A1 |
| 20040174439 | Upton | Sep 2004 | A1 |
| 20040207836 | Chhibber et al. | Oct 2004 | A1 |
| 20040213449 | Safaee-Rad et al. | Oct 2004 | A1 |
| 20040218809 | Blake et al. | Nov 2004 | A1 |
| 20040234873 | Venkataraman | Nov 2004 | A1 |
| 20040240052 | Minefuji et al. | Dec 2004 | A1 |
| 20040251509 | Choi | Dec 2004 | A1 |
| 20040264806 | Herley | Dec 2004 | A1 |
| 20050006477 | Patel | Jan 2005 | A1 |
| 20050012035 | Miller | Jan 2005 | A1 |
| 20050036778 | DeMonte | Feb 2005 | A1 |
| 20050047678 | Jones et al. | Mar 2005 | A1 |
| 20050048690 | Yamamoto | Mar 2005 | A1 |
| 20050068436 | Fraenkel et al. | Mar 2005 | A1 |
| 20050132098 | Sonoda et al. | Jun 2005 | A1 |
| 20050134712 | Gruhlke et al. | Jun 2005 | A1 |
| 20050147277 | Higaki et al. | Jul 2005 | A1 |
| 20050151759 | Gonzalez-Banos et al. | Jul 2005 | A1 |
| 20050175257 | Kuroki | Aug 2005 | A1 |
| 20050185711 | Pfister et al. | Aug 2005 | A1 |
| 20050205785 | Hornback et al. | Sep 2005 | A1 |
| 20050219363 | Kohler | Oct 2005 | A1 |
| 20050225654 | Feldman et al. | Oct 2005 | A1 |
| 20050275946 | Choo et al. | Dec 2005 | A1 |
| 20050286612 | Takanashi | Dec 2005 | A1 |
| 20060002635 | Nestares et al. | Jan 2006 | A1 |
| 20060023197 | Joel | Feb 2006 | A1 |
| 20060023314 | Boettiger et al. | Feb 2006 | A1 |
| 20060033005 | Jerdev et al. | Feb 2006 | A1 |
| 20060034003 | Zalevsky | Feb 2006 | A1 |
| 20060038891 | Okutomi et al. | Feb 2006 | A1 |
| 20060049930 | Zruya et al. | Mar 2006 | A1 |
| 20060054780 | Garrood et al. | Mar 2006 | A1 |
| 20060054782 | Olsen et al. | Mar 2006 | A1 |
| 20060055811 | Frtiz et al. | Mar 2006 | A1 |
| 20060069478 | Iwama | Mar 2006 | A1 |
| 20060072029 | Miyatake et al. | Apr 2006 | A1 |
| 20060087747 | Ohzawa et al. | Apr 2006 | A1 |
| 20060098888 | Morishita | May 2006 | A1 |
| 20060125936 | Gruhike et al. | Jun 2006 | A1 |
| 20060138322 | Costello et al. | Jun 2006 | A1 |
| 20060152803 | Provitola | Jul 2006 | A1 |
| 20060157640 | Perlman et al. | Jul 2006 | A1 |
| 20060159369 | Young | Jul 2006 | A1 |
| 20060176566 | Boettiger et al. | Aug 2006 | A1 |
| 20060187338 | May et al. | Aug 2006 | A1 |
| 20060197937 | Bamji et al. | Sep 2006 | A1 |
| 20060203113 | Wada et al. | Sep 2006 | A1 |
| 20060210186 | Berkner | Sep 2006 | A1 |
| 20060239549 | Kelly et al. | Oct 2006 | A1 |
| 20060243889 | Farnworth et al. | Nov 2006 | A1 |
| 20060251410 | Trutna | Nov 2006 | A1 |
| 20060274174 | Tewinkle | Dec 2006 | A1 |
| 20060278948 | Yamaguchi et al. | Dec 2006 | A1 |
| 20060279648 | Senba et al. | Dec 2006 | A1 |
| 20070002159 | Olsen et al. | Jan 2007 | A1 |
| 20070024614 | Tam et al. | Feb 2007 | A1 |
| 20070036427 | Nakamura et al. | Feb 2007 | A1 |
| 20070040828 | Zalevsky et al. | Feb 2007 | A1 |
| 20070040922 | McKee et al. | Feb 2007 | A1 |
| 20070041391 | Lin et al. | Feb 2007 | A1 |
| 20070052825 | Cho | Mar 2007 | A1 |
| 20070083114 | Yang et al. | Apr 2007 | A1 |
| 20070085917 | Kobayashi | Apr 2007 | A1 |
| 20070102622 | Olsen et al. | May 2007 | A1 |
| 20070126898 | Feldman | Jun 2007 | A1 |
| 20070127831 | Venkataraman | Jun 2007 | A1 |
| 20070139333 | Sato et al. | Jun 2007 | A1 |
| 20070146511 | Kinoshita et al. | Jun 2007 | A1 |
| 20070158427 | Zhu et al. | Jul 2007 | A1 |
| 20070159541 | Sparks et al. | Jul 2007 | A1 |
| 20070160310 | Tanida et al. | Jul 2007 | A1 |
| 20070165931 | Higaki | Jul 2007 | A1 |
| 20070171290 | Kroger | Jul 2007 | A1 |
| 20070206241 | Smith et al. | Sep 2007 | A1 |
| 20070211164 | Olsen et al. | Sep 2007 | A1 |
| 20070216765 | Wong et al. | Sep 2007 | A1 |
| 20070228256 | Mentzer et al. | Oct 2007 | A1 |
| 20070257184 | Olsen et al. | Nov 2007 | A1 |
| 20070258006 | Olsen et al. | Nov 2007 | A1 |
| 20070258706 | Raskar et al. | Nov 2007 | A1 |
| 20070263114 | Gurevich et al. | Nov 2007 | A1 |
| 20070268374 | Robinson | Nov 2007 | A1 |
| 20070296835 | Olsen et al. | Dec 2007 | A1 |
| 20080019611 | Larkin et al. | Jan 2008 | A1 |
| 20080025649 | Liu et al. | Jan 2008 | A1 |
| 20080030597 | Olsen et al. | Feb 2008 | A1 |
| 20080043095 | Vetro et al. | Feb 2008 | A1 |
| 20080043096 | Vetro et al. | Feb 2008 | A1 |
| 20080062164 | Bassi et al. | Mar 2008 | A1 |
| 20080079805 | Takagi et al. | Apr 2008 | A1 |
| 20080080028 | Bakin et al. | Apr 2008 | A1 |
| 20080084486 | Enge et al. | Apr 2008 | A1 |
| 20080088793 | Sverdrup et al. | Apr 2008 | A1 |
| 20080112635 | Kondo et al. | May 2008 | A1 |
| 20080118241 | TeKolste et al. | May 2008 | A1 |
| 20080131019 | Ng | Jun 2008 | A1 |
| 20080131107 | Ueno | Jun 2008 | A1 |
| 20080151097 | Chen et al. | Jun 2008 | A1 |
| 20080152215 | Horie et al. | Jun 2008 | A1 |
| 20080152296 | Oh et al. | Jun 2008 | A1 |
| 20080158259 | Kempf et al. | Jul 2008 | A1 |
| 20080158375 | Kakkori et al. | Jul 2008 | A1 |
| 20080158698 | Chang et al. | Jul 2008 | A1 |
| 20080187305 | Raskar et al. | Aug 2008 | A1 |
| 20080193026 | Horie et al. | Aug 2008 | A1 |
| 20080218610 | Chapman et al. | Sep 2008 | A1 |
| 20080219654 | Border et al. | Sep 2008 | A1 |
| 20080239116 | Smith | Oct 2008 | A1 |
| 20080240598 | Hasegawa | Oct 2008 | A1 |
| 20080247638 | Tanida et al. | Oct 2008 | A1 |
| 20080247653 | Moussavi et al. | Oct 2008 | A1 |
| 20080272416 | Yun | Nov 2008 | A1 |
| 20080273751 | Yuan et al. | Nov 2008 | A1 |
| 20080278591 | Barna et al. | Nov 2008 | A1 |
| 20080298674 | Baker et al. | Dec 2008 | A1 |
| 20090050946 | Duparre et al. | Feb 2009 | A1 |
| 20090052743 | Techmer | Feb 2009 | A1 |
| 20090060281 | Tanida et al. | Mar 2009 | A1 |
| 20090086074 | Li et al. | Apr 2009 | A1 |
| 20090091806 | Inuiya | Apr 2009 | A1 |
| 20090096050 | Park | Apr 2009 | A1 |
| 20090102956 | Georgiev | Apr 2009 | A1 |
| 20090109306 | Shan et al. | Apr 2009 | A1 |
| 20090128833 | Yahav | May 2009 | A1 |
| 20090152664 | Klem et al. | Jun 2009 | A1 |
| 20090167922 | Perlman et al. | Jul 2009 | A1 |
| 20090179142 | Duparre et al. | Jul 2009 | A1 |
| 20090180021 | Kikuchi et al. | Jul 2009 | A1 |
| 20090200622 | Tai et al. | Aug 2009 | A1 |
| 20090201371 | Matsuda et al. | Aug 2009 | A1 |
| 20090207235 | Francini et al. | Aug 2009 | A1 |
| 20090225203 | Tanida et al. | Sep 2009 | A1 |
| 20090237520 | Kaneko et al. | Sep 2009 | A1 |
| 20090263017 | Tanbakuchi | Oct 2009 | A1 |
| 20090268192 | Koenck et al. | Oct 2009 | A1 |
| 20090268970 | Babacan et al. | Oct 2009 | A1 |
| 20090268983 | Stone | Oct 2009 | A1 |
| 20090274387 | Jin | Nov 2009 | A1 |
| 20090284651 | Srinivasan | Nov 2009 | A1 |
| 20090297056 | Lelescu et al. | Dec 2009 | A1 |
| 20090302205 | Olsen et al. | Dec 2009 | A9 |
| 20090323195 | Hembree et al. | Dec 2009 | A1 |
| 20090323206 | Oliver et al. | Dec 2009 | A1 |
| 20090324118 | Maslov et al. | Dec 2009 | A1 |
| 20100002126 | Wenstrand et al. | Jan 2010 | A1 |
| 20100002313 | Duparre et al. | Jan 2010 | A1 |
| 20100002314 | Duparre | Jan 2010 | A1 |
| 20100013927 | Nixon | Jan 2010 | A1 |
| 20100053342 | Hwang et al. | Mar 2010 | A1 |
| 20100053600 | Tanida et al. | Mar 2010 | A1 |
| 20100060746 | Olsen et al. | Mar 2010 | A9 |
| 20100085425 | Tan | Apr 2010 | A1 |
| 20100086227 | Sun et al. | Apr 2010 | A1 |
| 20100091389 | Henriksen et al. | Apr 2010 | A1 |
| 20100097491 | Farina et al. | Apr 2010 | A1 |
| 20100103259 | Tanida et al. | Apr 2010 | A1 |
| 20100103308 | Butterfield et al. | Apr 2010 | A1 |
| 20100111444 | Coffman | May 2010 | A1 |
| 20100118127 | Nam et al. | May 2010 | A1 |
| 20100133230 | Henriksen et al. | Jun 2010 | A1 |
| 20100133418 | Sargent et al. | Jun 2010 | A1 |
| 20100141802 | Knight et al. | Jun 2010 | A1 |
| 20100142839 | Lakus-Becker | Jun 2010 | A1 |
| 20100157073 | Kondo et al. | Jun 2010 | A1 |
| 20100165152 | Lim | Jul 2010 | A1 |
| 20100177411 | Hegde et al. | Jul 2010 | A1 |
| 20100194901 | van Hoorebeke et al. | Aug 2010 | A1 |
| 20100195716 | Klein et al. | Aug 2010 | A1 |
| 20100201834 | Maruyama et al. | Aug 2010 | A1 |
| 20100208100 | Olsen et al. | Aug 2010 | A9 |
| 20100220212 | Perlman et al. | Sep 2010 | A1 |
| 20100231285 | Boomer et al. | Sep 2010 | A1 |
| 20100244165 | Lake et al. | Sep 2010 | A1 |
| 20100265385 | Knight et al. | Oct 2010 | A1 |
| 20100281070 | Chan et al. | Nov 2010 | A1 |
| 20100302423 | Adams, Jr. et al. | Dec 2010 | A1 |
| 20110001037 | Tewinkle | Jan 2011 | A1 |
| 20110018973 | Takayama | Jan 2011 | A1 |
| 20110032370 | Ludwig | Feb 2011 | A1 |
| 20110043661 | Podoleanu | Feb 2011 | A1 |
| 20110043665 | Ogasahara | Feb 2011 | A1 |
| 20110043668 | McKinnon et al. | Feb 2011 | A1 |
| 20110069189 | Venkataraman et al. | Mar 2011 | A1 |
| 20110080487 | Venkataraman et al. | Apr 2011 | A1 |
| 20110108708 | Olsen et al. | May 2011 | A1 |
| 20110121421 | Charbon et al. | May 2011 | A1 |
| 20110122308 | Duparre | May 2011 | A1 |
| 20110128412 | Milnes et al. | Jun 2011 | A1 |
| 20110149408 | Hahgholt et al. | Jun 2011 | A1 |
| 20110149409 | Haugholt et al. | Jun 2011 | A1 |
| 20110153248 | Gu et al. | Jun 2011 | A1 |
| 20110157321 | Nakajima et al. | Jun 2011 | A1 |
| 20110176020 | Chang | Jul 2011 | A1 |
| 20110211824 | Georgiev et al. | Sep 2011 | A1 |
| 20110221599 | Högasten | Sep 2011 | A1 |
| 20110221658 | Haddick et al. | Sep 2011 | A1 |
| 20110221939 | Jerdev | Sep 2011 | A1 |
| 20110234841 | Akeley et al. | Sep 2011 | A1 |
| 20110241234 | Duparre | Oct 2011 | A1 |
| 20110242342 | Goma et al. | Oct 2011 | A1 |
| 20110242355 | Goma et al. | Oct 2011 | A1 |
| 20110242356 | Aleksic et al. | Oct 2011 | A1 |
| 20110255592 | Sung et al. | Oct 2011 | A1 |
| 20110255745 | Hodder et al. | Oct 2011 | A1 |
| 20110267348 | Lin et al. | Nov 2011 | A1 |
| 20110273531 | Ito et al. | Nov 2011 | A1 |
| 20110274366 | Tardif | Nov 2011 | A1 |
| 20110279721 | McMahon | Nov 2011 | A1 |
| 20110285866 | Bhrugumalla et al. | Nov 2011 | A1 |
| 20110298917 | Yanagita | Dec 2011 | A1 |
| 20110300929 | Tardif et al. | Dec 2011 | A1 |
| 20110310980 | Mathew | Dec 2011 | A1 |
| 20110317766 | Lim et al. | Dec 2011 | A1 |
| 20120012748 | Pain et al. | Jan 2012 | A1 |
| 20120026297 | Sato | Feb 2012 | A1 |
| 20120026342 | Yu et al. | Feb 2012 | A1 |
| 20120039525 | Tian et al. | Feb 2012 | A1 |
| 20120044249 | Mashitani et al. | Feb 2012 | A1 |
| 20120044372 | Côte et al. | Feb 2012 | A1 |
| 20120069235 | Imai | Mar 2012 | A1 |
| 20120105691 | Waqas et al. | May 2012 | A1 |
| 20120113413 | Miahczylowicz-Wolski et al. | May 2012 | A1 |
| 20120147139 | Li et al. | Jun 2012 | A1 |
| 20120147205 | Lelescu et al. | Jun 2012 | A1 |
| 20120153153 | Chang et al. | Jun 2012 | A1 |
| 20120154551 | Inoue | Jun 2012 | A1 |
| 20120170134 | Bolis et al. | Jul 2012 | A1 |
| 20120176479 | Mayhew et al. | Jul 2012 | A1 |
| 20120188634 | Kubala et al. | Jul 2012 | A1 |
| 20120198677 | Duparre | Aug 2012 | A1 |
| 20120200734 | Tang | Aug 2012 | A1 |
| 20120229628 | Ishiyama et al. | Sep 2012 | A1 |
| 20120249550 | Akeley et al. | Oct 2012 | A1 |
| 20120262607 | Shimura et al. | Oct 2012 | A1 |
| 20120287291 | McMahon et al. | Nov 2012 | A1 |
| 20120293695 | Tanaka | Nov 2012 | A1 |
| 20120314033 | Lee et al. | Dec 2012 | A1 |
| 20120327222 | Ng et al. | Dec 2012 | A1 |
| 20130002828 | Ding et al. | Jan 2013 | A1 |
| 20130003184 | Duparre | Jan 2013 | A1 |
| 20130010073 | Do | Jan 2013 | A1 |
| 20130022111 | Chen et al. | Jan 2013 | A1 |
| 20130027580 | Olsen et al. | Jan 2013 | A1 |
| 20130033579 | Wajs | Feb 2013 | A1 |
| 20130050504 | Safaee-Rad et al. | Feb 2013 | A1 |
| 20130050526 | Keelan | Feb 2013 | A1 |
| 20130057710 | McMahon | Mar 2013 | A1 |
| 20130070060 | Chatterjee et al. | Mar 2013 | A1 |
| 20130077880 | Venkataraman et al. | Mar 2013 | A1 |
| 20130077882 | Venkataraman et al. | Mar 2013 | A1 |
| 20130088637 | Duparre | Apr 2013 | A1 |
| 20130113899 | Morohoshi et al. | May 2013 | A1 |
| 20130120605 | Georgiev et al. | May 2013 | A1 |
| 20130128068 | Georgiev et al. | May 2013 | A1 |
| 20130128069 | Georgiev et al. | May 2013 | A1 |
| 20130128087 | Georgiev et al. | May 2013 | A1 |
| 20130128121 | Agarwala et al. | May 2013 | A1 |
| 20130147979 | McMahon et al. | Jun 2013 | A1 |
| 20130215108 | McMahon et al. | Aug 2013 | A1 |
| 20130222556 | Shimada | Aug 2013 | A1 |
| 20130229540 | Farina et al. | Sep 2013 | A1 |
| 20130259317 | Gaddy | Oct 2013 | A1 |
| 20130265459 | Duparre et al. | Oct 2013 | A1 |
| 20140009586 | McNamer et al. | Jan 2014 | A1 |
| 20140076336 | Clayton et al. | Mar 2014 | A1 |
| 20140079336 | Venkataraman et al. | Mar 2014 | A1 |
| 20140092281 | Nisenzon et al. | Apr 2014 | A1 |
| 20140104490 | Hsieh et al. | Apr 2014 | A1 |
| 20140118493 | Sali et al. | May 2014 | A1 |
| 20140132810 | McMahon | May 2014 | A1 |
| 20140176592 | Wilburn et al. | Jun 2014 | A1 |
| 20140192253 | Laroia | Jul 2014 | A1 |
| 20140198188 | Izawa | Jul 2014 | A1 |
| 20140218546 | McMahon | Aug 2014 | A1 |
| 20140232822 | Venkataraman et al. | Aug 2014 | A1 |
| 20140240528 | Venkataraman et al. | Aug 2014 | A1 |
| 20140240529 | Venkataraman et al. | Aug 2014 | A1 |
| 20140253738 | Mullis | Sep 2014 | A1 |
| 20140267243 | Venkataraman et al. | Sep 2014 | A1 |
| 20140267286 | Duparre | Sep 2014 | A1 |
| 20140267633 | Venkataraman et al. | Sep 2014 | A1 |
| 20140267762 | Mullis | Sep 2014 | A1 |
| 20140267890 | Lelescu et al. | Sep 2014 | A1 |
| 20140285675 | Mullis | Sep 2014 | A1 |
| 20140321712 | Ciurea et al. | Oct 2014 | A1 |
| 20140333731 | Venkataraman et al. | Nov 2014 | A1 |
| 20140333764 | Venkataraman et al. | Nov 2014 | A1 |
| 20140333787 | Venkataraman et al. | Nov 2014 | A1 |
| 20140340539 | Venkataraman et al. | Nov 2014 | A1 |
| 20140347509 | Venkataraman et al. | Nov 2014 | A1 |
| 20140354773 | Venkataraman et al. | Dec 2014 | A1 |
| 20140354843 | Venkataraman et al. | Dec 2014 | A1 |
| 20140354844 | Venkataraman et al. | Dec 2014 | A1 |
| 20140354853 | Venkataraman et al. | Dec 2014 | A1 |
| 20140354854 | Venkataraman et al. | Dec 2014 | A1 |
| 20140354855 | Venkataraman et al. | Dec 2014 | A1 |
| 20140355870 | Venkataraman et al. | Dec 2014 | A1 |
| 20140368662 | Venkataraman et al. | Dec 2014 | A1 |
| 20140368683 | Venkataraman et al. | Dec 2014 | A1 |
| 20140368684 | Venkataraman et al. | Dec 2014 | A1 |
| 20140368685 | Venkataraman et al. | Dec 2014 | A1 |
| 20140368686 | Duparre | Dec 2014 | A1 |
| 20140369612 | Venkataraman et al. | Dec 2014 | A1 |
| 20140369615 | Venkataraman et al. | Dec 2014 | A1 |
| 20140376825 | Venkataraman et al. | Dec 2014 | A1 |
| 20140376826 | Venkataraman et al. | Dec 2014 | A1 |
| 20150003752 | Venkataraman et al. | Jan 2015 | A1 |
| 20150003753 | Venkataraman et al. | Jan 2015 | A1 |
| 20150009353 | Venkataraman et al. | Jan 2015 | A1 |
| 20150009354 | Venkataraman et al. | Jan 2015 | A1 |
| 20150009362 | Venkataraman et al. | Jan 2015 | A1 |
| 20150036014 | Lelescu et al. | Feb 2015 | A1 |
| 20150036015 | Lelescu et al. | Feb 2015 | A1 |
| 20150042766 | Ciurea et al. | Feb 2015 | A1 |
| 20150042767 | Ciurea et al. | Feb 2015 | A1 |
| 20150042833 | Lelescu et al. | Feb 2015 | A1 |
| 20150049915 | Ciurea et al. | Feb 2015 | A1 |
| 20150049916 | Ciurea et al. | Feb 2015 | A1 |
| 20150049917 | Ciurea et al. | Feb 2015 | A1 |
| 20150055884 | Venkataraman et al. | Feb 2015 | A1 |
| Number | Date | Country |
|---|---|---|
| 840502 | May 1998 | EP |
| 2336816 | Jun 2011 | EP |
| 2006033493 | Feb 2006 | JP |
| 2007520107 | Jul 2007 | JP |
| 2011109484 | Jun 2011 | JP |
| 2013526801 | Jun 2013 | JP |
| 2014521117 | Aug 2014 | JP |
| 1020110097647 | Aug 2011 | KR |
| 2007083579 | Jul 2007 | WO |
| 2008108271 | Sep 2008 | WO |
| 2009151903 | Dec 2009 | WO |
| 2011063347 | May 2011 | WO |
| 2011116203 | Sep 2011 | WO |
| 2011063347 | Oct 2011 | WO |
| 2011143501 | Nov 2011 | WO |
| 2012057619 | May 2012 | WO |
| 2012057620 | May 2012 | WO |
| 2012057621 | May 2012 | WO |
| 2012057622 | May 2012 | WO |
| 2012057623 | May 2012 | WO |
| 2012057620 | Jun 2012 | WO |
| 2012074361 | Jun 2012 | WO |
| 2012078126 | Jun 2012 | WO |
| 2012082904 | Jun 2012 | WO |
| 2012155119 | Nov 2012 | WO |
| 2013003276 | Jan 2013 | WO |
| 2013043751 | Mar 2013 | WO |
| 2013043761 | Mar 2013 | WO |
| 2013049699 | Apr 2013 | WO |
| 2013055960 | Apr 2013 | WO |
| 2013119706 | Aug 2013 | WO |
| 2013126578 | Aug 2013 | WO |
| 2014052974 | Apr 2014 | WO |
| 2014032020 | May 2014 | WO |
| 2014078443 | May 2014 | WO |
| 2014130849 | Aug 2014 | WO |
| 2014133974 | Sep 2014 | WO |
| 2014138695 | Sep 2014 | WO |
| 2014138697 | Sep 2014 | WO |
| 2014144157 | Sep 2014 | WO |
| 2014145856 | Sep 2014 | WO |
| 2014149403 | Sep 2014 | WO |
| 2014150856 | Sep 2014 | WO |
| 2014159721 | Oct 2014 | WO |
| 2014159779 | Oct 2014 | WO |
| 2014160142 | Oct 2014 | WO |
| 2014164550 | Oct 2014 | WO |
| 2014164909 | Oct 2014 | WO |
| 2014165244 | Oct 2014 | WO |
| Entry |
|---|
| US 8,957,977, 02/2015, Venkataraman et al. (withdrawn) |
| US 8,964,053, 02/2015, Venkataraman et al. (withdrawn) |
| US 8,965,058, 02/2015, Venkataraman et al. (withdrawn) |
| International Preliminary Report on Patentability for International Application PCT/US2013/024987, Mailed Aug. 21, 2014, 13 Pgs. |
| International Preliminary Report on Patentability for International Application PCT/US2013/027146, Report Completed Apr. 2, 2013, Mailed Sep. 4, 2014, 10 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US13/46002, Search Completed Nov. 13, 2013, Mailed Nov. 29, 2013, 7 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US13/48772, Search Completed Oct. 21, 2013, Mailed Nov. 8, 2013, 6 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US13/56065, Search Completed Nov. 25, 2013, Mailed Nov. 26, 2013, 8 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US13/59991, Search Completed Feb. 6, 2014, Mailed Feb. 26, 2014, 8 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2009/044687, date completed Jan. 5, 2010, date mailed Jan. 13, 2010, 9 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2013/024987, Search Completed Mar. 27, 2013, Mailed Apr. 15, 2013, 14 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2013/056502, Search Completed Feb. 18, 2014, Mailed Mar. 19, 2014, 7 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2013/069932, International Filing Date Nov. 13, 2013, Search Completed Mar. 14, 2014, Mailed Apr. 14, 2014, 12 pgs. |
| International Search Report and Written Opinion for International Application PCT/US13/62720, report completed Mar. 25, 2014, Mailed Apr. 21, 2014, 9 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/024903, report completed Jun. 12, 2014, Mailed Jun. 27, 2014, 13 pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/18116, report completed May 13, 2014, Mailed Jun. 2, 2014, 6 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/24407, report completed Jun. 11, 2014, Mailed Jul. 8, 2014, 9 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/25100, report completed Jul. 7, 2014, Mailed Aug. 7, 2014, 5 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/25904, report completed Jun. 10, 2014, Mailed Jul. 10, 2014, 6 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US2014/022123, report completed Jun. 9, 2014, Mailed Jun. 25, 2014, 5 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2014/024947, Report Completed Jul. 8, 2014, Mailed Aug. 5, 2014, 8 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US2014/028447, report completed Jun. 30, 2014, Mailed Jul. 21, 2014, 8 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US2014/030692, report completed Jul. 28, 2014, Mailed Aug. 27, 2014, 7 Pgs. |
| International Search Report and Written Opinion for International Application PCT/US2014/23762, Report Completed May 30, 2014, Mailed Jul. 3, 2014, 6 Pgs. |
| International Preliminary Report on Patentability for International Application No. PCT/US2012/059813, International Filing Date Oct. 11, 2012, Search Completed Apr. 15, 2014, 7 pgs. |
| International Search Report and Written Opinion for International Application PCT/US11/36349, mailed Aug. 22, 2011, 12 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2011/64921, Report Completed Feb. 25, 2011, mailed Mar. 6, 2012, 17 pgs. |
| International Search Report and Written Opinion for International Application No. PCT/US2013/027146, completed Apr. 2, 2013, 12 pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/17766, completed May 28, 2014, Mailed Jun. 18, 2014, 9 pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/18084, completed May 23, 2014, Mailed Jun. 10, 2014, 12 pgs. |
| International Search Report and Written Opinion for International Application PCT/US14/22118, report completed Jun. 9, 2014, Mailed Jun. 25, 2014, 5 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2010/057661, completed Mar. 9, 2011, 14 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2012/044014, completed Oct. 12, 2012, 15 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2012/056151, completed Nov. 14, 2012, 10 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2012/059813, completed Dec. 17, 2012, 8 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2012/37670, Mailed Jul. 18, 2012, Search Completed Jul. 5, 2012, 9 pgs. |
| International Search Report and Written Opinion for International Application PCT/US2012/58093, completed Nov. 15, 2012, 12 pgs. |
| Office Action for U.S. Appl. No. 12/952,106, dated Aug. 16, 2012, 12 pgs. |
| Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183. |
| Bertero et al., “Super-resolution in computational imaging”, Micron, 2003, vol. 34, Issues 6-7, 17 pgs. |
| Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV 2010, Part II, LNCS 6493, pp. 186-200. |
| Bishop et al., “Light Field Superresolution”, Retrieved from http://home.eps.hw.ac.uk/˜sz73/ICCP09/LightFieldSuperresolution.pdf, 9 pgs. |
| Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, pp. 972-986. |
| Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs. |
| Borman et al., “Image Sequence Processing”, Source unknown, Oct. 14, 2002, 81 pgs. |
| Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 1998, 3653, 10 pgs. |
| Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, Jun. 2003, 5016, 12 pgs. |
| Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 2004, vol. 5299, 12 pgs. |
| Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, 1998, 3459, 9 pgs. |
| Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473. |
| Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378. |
| Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, date unknown, 21 pgs. |
| Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE-IS&T Electronic Imaging, vol. 7246, pp. 72460X-1-72460X-9. |
| Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084. |
| Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, 2010, 11 pgs. |
| Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394. |
| Capel, “Image Mosaicing and Super-resolution”, [online], Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, Title pg., abstract, table of contents, pp. 1-263 (269 total pages). |
| Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, 2006, vol. 3, pp. 623-626. |
| Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE, ICASSP 2006, pp. 1177-1180. |
| Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim Syst Sign Process, 2007, vol. 18, pp. 83-101. |
| Chen et al., “Interactive deformation of light fields”, In Proceedings of SIGGRAPH I3D 2005, pp. 139-146. |
| Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, 8 pgs. |
| Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 8 pgs. |
| Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 2009, vol. 83, Issue 3, 8 pgs. |
| Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310. |
| Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, 2008, vol. 3, pp. 1-6. |
| Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 2004, pp. 89-100. |
| Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551. |
| Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, 2006, vol. 1, pp. R1-R16. |
| Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, pp. 59622A-1-59622A-12. |
| Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs. |
| Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903. |
| Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Prod. SPIE 5301, Jan. 2004, pp. 25-33. |
| Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, 2006, vol. 6196, pp. 619607-1-619607-15. |
| Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418. |
| Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956. |
| Duparre et al., “Ultra-Thin Camera Based on Artificial Apposition Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs. |
| Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://www.site.uottawa.ca/~edubois/theses/Fanaswala_thesis.pdf>, 163 pgs. |
| Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, 2006, vol. 6069, 8 pgs. |
| Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, 2004, vol. 14, pp. 47-57. |
| Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, vol. 13, No. 10, pp. 1327-1344. |
| Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, pp. 141-159. |
| Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs. |
| Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284. |
| Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7Mu Pixels in 0.11Mu CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50. |
| Fischer et al., Optical System Design, 2nd Edition, SPIE Press, pp. 191-198. |
| Fischer et al., Optical System Design, 2nd Edition, SPIE Press, pp. 49-58. |
| Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, pp. 3-12. |
| Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, pp. 43-54. |
| Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, 30, 4, 2011, pp. 70:1-70:10. |
| Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs. |
| Hardie, “A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, vol. 16, No. 12, pp. 2953-2964. |
| Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, Computational Photography (ICCP) 2010, pp. 1-8. |
| Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, 2010, vol. 3, pp. 022501-1-022501-3. |
| Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, 2011, vol. 4, pp. 112501-1-112501-3. |
| Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D 2007, pp. 121-128. |
| Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, pp. 297-306. |
| Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80. |
| Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, ICCV, IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8. |
| Kang et al., “Handling Occlusions in Dense Multi-View Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110. |
| Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727. |
| Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831. |
| Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Proc., CVPR 94, 8 pgs. |
| Lensvector, “How LensVector Autofocus Works”, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg. |
| Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Aug. 2006, pp. 46-55. |
| Levoy et al., “Light Field Rendering”, Proc. ACM SIGGRAPH '96, pp. 1-12. |
| Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution”, Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/~jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014. |
| Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120. |
| Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10. |
| Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs. |
| Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 2006, pp. 30-38. |
| Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs. |
| Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378. |
| Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900. |
| Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, 2007, 12 pgs. |
| Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36. |
| Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, 2008, pp. 1-19. |
| Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html. |
| Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Jan. 2009, vol. 18, No. 1, pp. 36-51. |
| Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077. |
| Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552. |
| Rhemann et al., “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell., 2013, vol. 35, No. 2, pp. 504-511. |
| Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs. |
| Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228. |
| Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2007, pp. 208-215. |
| Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995, pp. 93-96. |
| Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision 48, 2, pp. 115-129. |
| Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162. Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5. |
| Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759. |
| Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, Source and date unknown, 8 pgs. |
| Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975. |
| Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117. |
| Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813. |
| Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer 77, 9 (Sep.), 93-100. |
| Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, Proceeding, CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—vol. 2, pp. 2331-2338. |
| Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs. |
| Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs. |
| Veilleux, “CCD Gain Lab: The Theory”, University of Maryland, College Park, Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 [online], [retrieved on May 13, 2014]. Retrieved from the Internet <URL: http://www.astro.umd.edu/~veilleux/ASTR310/fall06/ccd_theory.pdf>, 5 pgs. |
| Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008. |
| Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, 10 pgs. |
| Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426. |
| Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, 2005, 5674, 12 pgs. |
| Wikipedia, “Polarizing Filter (Photography)”, http://en.wikipedia.org/wiki/Polarizing_filter_(photography), 1 pg. |
| Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs. |
| Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 765-776. |
| Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceeding, CVPR'04 Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 294-301. |
| Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs. |
| Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, 59622C-1-59622C-11. |
| Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), pp. 1-10. |
| Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Source and date unknown, 8 pgs. |
| Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, 2004, 12 pgs. |
| Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6. |
| International Preliminary Report on Patentability for International Application PCT/US2013/039155, report completed Nov. 4, 2014, Mailed Nov. 13, 2014, 10 Pgs. |
| International Preliminary Report on Patentability for International Application PCT/US2013/048772, Report completed Dec. 31, 2014, Mailed Jan. 8, 2015, 8 Pgs. |
| Extended European Search Report for European Application EP12835041.0, Report Completed Jan. 28, 2015, Mailed Feb. 4, 2015, 6 Pgs. |
| International Preliminary Report on Patentability for International Application PCT/US2014/023762, Report Issued Mar. 2, 2015, Mailed Mar. 9, 2015, 19 Pgs. |
| International Preliminary Report on Patentability for International Application PCT/US13/56065, Report Issued Feb. 24, 2015, Mailed Mar. 5, 2015, 4 Pgs. |
| International Preliminary Report on Patentability for International Application PCT/US2013/056502, Report Issued Feb. 24, 2015, Mailed Mar. 5, 2015, 7 Pgs. |
| Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188. |
| Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, source unknown, 2013, 8 pgs. |
| Levin et al., “A Closed Form Solution to Natural Image Matting”, Pattern Analysis and Machine Intelligence, Feb. 2008, vol. 30, 8 pgs. |
| Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, printed from www.raytrix.de, 15 pgs, 2008. |
| Tallon et al., “Upsampling and Denoising of Depth Maps Via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs. |
| Zhang, Qiang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, Proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-770505-8, XP055113797 ISSN: 0277-786X, DOI: 10.1117/12.852171. |
| Number | Date | Country |
|---|---|---|
| 20150015669 A1 | Jan 2015 | US |
| Number | Date | Country |
|---|---|---|
| 61540188 | Sep 2011 | US |
| | Number | Date | Country |
|---|---|---|---|
| Parent | 14477374 | Sep 2014 | US |
| Child | 14504687 | | US |
| Parent | 13955411 | Jul 2013 | US |
| Child | 14477374 | | US |
| Parent | 13631736 | Sep 2012 | US |
| Child | 13955411 | | US |