Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information

Information

  • Patent Grant
  • Patent Number
    9,462,164
  • Date Filed
    Friday, February 21, 2014
  • Date Issued
    Tuesday, October 4, 2016
Abstract
Systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory connected to the processor and configured to store an image processing application, wherein the image processing application configures the processor to obtain image data, wherein the image data includes a set of images including a reference image and at least one alternate view image, generate a depth map based on the image data, determine at least one prediction image based on the reference image and the depth map, compute prediction error data based on the at least one prediction image and the at least one alternate view image, and generate compressed light field representation data based on the reference image, the prediction error data, and the depth map.
Description
FIELD OF THE INVENTION

The present invention relates to systems and methods for capturing light fields and more specifically to the efficient representation of captured light fields using compressed light field representation data.


BACKGROUND

Imaging devices, such as cameras, can be used to capture images of portions of the electromagnetic spectrum, such as the visible light spectrum, incident upon an image sensor. For ease of discussion, the term light is generically used to cover radiation across the entire electromagnetic spectrum. In a typical imaging device, light enters through an opening (aperture) at one end of the imaging device and is directed to an image sensor by one or more optical elements such as lenses. The image sensor includes pixels or sensor elements that generate signals upon receiving light via the optical element. Commonly used image sensors include charge-coupled device (CCD) sensors and complementary metal-oxide semiconductor (CMOS) sensors.


Image sensors are devices capable of converting an image into a digital signal. Image sensors utilized in digital cameras are typically made up of an array of pixels. Each pixel in an image sensor is capable of capturing light and converting the captured light into electrical signals. In order to separate the colors of light and capture a color image, a Bayer filter is often placed over the image sensor, filtering the incoming light into its red, blue, and green (RGB) components that are then captured by the image sensor. The RGB signal captured by the image sensor using a Bayer filter can then be processed and a color image can be created.


SUMMARY OF THE INVENTION

Systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory connected to the processor and configured to store an image processing application, wherein the image processing application configures the processor to obtain image data, wherein the image data includes a set of images including a reference image and at least one alternate view image and each image in the set of images includes a set of pixels, generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image, determine at least one prediction image based on the reference image and the depth map, where the prediction images correspond to at least one alternate view image, compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image, and generate compressed light field representation data based on the reference image, the prediction error data, and the depth map.


In an additional embodiment of the invention, the array camera further includes an array camera module including an imager array having multiple focal planes and an optics array configured to form images through separate apertures on each of the focal planes, wherein the array camera module is configured to communicate with the processor and wherein the obtained image data includes images captured by the imager array.


In another embodiment of the invention, the reference image corresponds to an image captured using one of the focal planes within the imager array.


In yet another additional embodiment of the invention, the at least one alternate view image corresponds to the image data captured using the focal planes within the imager array separate from the focal planes associated with the reference image.


In still another additional embodiment of the invention, the reference image corresponds to a virtual image formed based on the images in the array.


In another embodiment of the invention, the depth map describes the geometrical linkage between the pixels in the reference image and the pixels in the other images in the image array.


In yet still another additional embodiment of the invention, the image processing application configures the processor to perform a parallax detection process to generate the depth map, where the parallax detection process identifies variations in the position of objects within the image data along epipolar lines between the reference image and the at least one alternate view image.


In yet another embodiment of the invention, the image processing application further configures the processor to compress the generated compressed light field representation data.


In still another embodiment of the invention, the generated compressed light field representation data is compressed using JPEG-DX.


In yet still another embodiment of the invention, the image processing application configures the processor to determine prediction error data by identifying at least one pixel in the at least one alternative view image corresponding to a reference pixel in the reference image, determining fractional pixel locations within the identified at least one pixel, where a fractional pixel location maps to a plurality of pixels in at least one alternative view image, and mapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location.


In yet another additional embodiment of the invention, the mapping of fractional pixel locations is determined as the nearest neighbor pixel within the alternative view image.


In still another additional embodiment of the invention, the image processing application configures the processor to map the fractional pixel locations based on the depth map, where the pixel in the alternate view image is likely to be similar based on its proximity to the corresponding pixel location determined using the depth map of the reference image.


In yet still another additional embodiment of the invention, the image processing application further configures the processor to identify areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map, where an area of low confidence indicates areas in the reference viewpoint where the pixels in the determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image.


In another embodiment of the invention, the depth map further comprises a confidence map describing areas of low confidence within the depth map.


In yet another embodiment of the invention, the image processing application further configures the processor to disregard identified areas of low confidence.


In still another embodiment of the invention, the image processing application further configures the processor to identify at least one additional reference image within the image data, where the at least one additional reference image is separate from the reference image, determine at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map, and compute the supplemental prediction error data based on the at least one alternate additional reference image and the at least one supplemental prediction image, and the generated compressed light field representation data further includes the supplemental prediction error data.


In yet still another embodiment of the invention, the generated compressed light field representation data further includes the at least one additional reference image.


In yet another additional embodiment of the invention, the image processing application configures the processor to identify the at least one additional reference image by generating an initial additional reference image based on the reference image and the depth map, where the initial additional reference image includes pixels projected from the viewpoint of the reference image based on the depth map and forming the additional reference image based on the initial additional reference image and the prediction error data, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.


In another embodiment of the invention, the prediction error data is decoded based on the reference image prior to the formation of the additional reference image.


Still another embodiment of the invention includes a method for generating compressed light field representation data including obtaining image data using an array camera, where the image data includes a set of images including a reference image and at least one alternate view image and the images in the set of images include a set of pixels, generating a depth map based on the image data using the array camera, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image based on the alternate view images, determining a set of prediction images based on the reference image and the depth map using the array camera, where a prediction image in the set of prediction images is a representation of a corresponding alternate view image in the at least one alternate view image, computing prediction error data by calculating the difference between a prediction image in the set of prediction images and the corresponding alternate view image that describes the difference in photometric information between a pixel in the reference image and a pixel in an alternate view image using the array camera, and generating compressed light field representation data based on the reference image, the prediction error data, and the depth map using the array camera.


In yet another additional embodiment of the invention, the reference image is a virtual image interpolated from a virtual viewpoint within the image data.


In still another additional embodiment of the invention, determining the set of predicted images further includes identifying at least one pixel in the at least one alternative view image corresponding to a reference pixel in the reference image using the array camera, determining fractional pixel locations within the identified at least one pixel using the array camera, where a fractional pixel location maps to a plurality of pixels in at least one alternative view image, and mapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location using the array camera.


In yet still another embodiment of the invention, the method further includes identifying areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map using the array camera, where an area of low confidence indicates areas in the reference viewpoint where the pixels in the determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image.


In yet another additional embodiment of the invention, the method further includes identifying at least one additional reference image within the image data using the array camera, where the at least one additional reference image is separate from the reference image, determining at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map using the array camera, and computing the supplemental prediction error data based on the at least one alternate additional reference image and the at least one supplemental prediction image using the array camera, where the generated compressed light field representation data further includes the supplemental prediction error data.


In still another additional embodiment of the invention, identifying the at least one additional reference image includes generating an initial additional reference image based on the reference image and the depth map using the array camera, where the initial additional reference image includes pixels projected from the viewpoint of the reference image based on the depth map and forming the additional reference image based on the initial additional reference image and the prediction error data using the array camera, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a system diagram of an array camera including a 5×5 imager array with storage hardware connected with a processor in accordance with an embodiment of the invention.



FIG. 2 is a flow chart conceptually illustrating a process for capturing and processing light fields in accordance with an embodiment of the invention.



FIG. 3 is a flow chart conceptually illustrating a process for generating compressed light field representation data in accordance with an embodiment of the invention.



FIG. 4A is a conceptual illustration of a reference image in a 4×4 array of images and corresponding epipolar lines in accordance with an embodiment of the invention.



FIG. 4B is a conceptual illustration of multiple reference images in a 4×4 array of images in accordance with an embodiment of the invention.



FIG. 5 is a conceptual illustration of a prediction error histogram in accordance with an embodiment of the invention.



FIG. 6 is a flow chart conceptually illustrating a process for decoding compressed light field representation data in accordance with an embodiment of the invention.





DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are illustrated. Array cameras, such as those described in U.S. patent application Ser. No. 12/935,504, entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., can be utilized to capture light fields and store the captured light fields. Captured light fields contain image data from an array of images of a scene captured from multiple points of view so that each image samples the light field of the same region within the scene (as opposed to a mosaic of images that sample partially overlapping regions of a scene). It should be noted that any configuration of images, including two-dimensional arrays, non-rectangular arrays, sparse arrays, and subsets of arrays of images could be utilized as appropriate to the requirements of specific embodiments of the invention. In a variety of embodiments, image data for a specific image that forms part of a captured light field describes a two-dimensional array of pixels. Storing all of the image data for the images in a captured light field can consume a disproportionate amount of storage space, limiting the number of light field images that can be stored within a fixed capacity storage device and increasing the amount of data transfer involved in transmitting a captured light field. Array cameras in accordance with many embodiments of the invention are configured to process captured light fields and generate data describing correlations between the images in the captured light field. Based on the image correlation data, some or all of the image data in the captured light field can be discarded, affording more efficient storage of the captured light fields as compressed light field representation data. Additionally, this process can be decoupled from the capturing of light fields to enable the efficient use of the hardware resources present in the array camera.


In many embodiments, each image in a captured light field is from a different viewpoint. Due to the different viewpoint of each of the images, parallax results in variations in the position of objects within the images of the scene. The disparity between corresponding pixels in images in a captured light field can be utilized to determine the distance to an object imaged by the corresponding pixels. Conversely, distance can be used to estimate the location of a corresponding pixel in another image. Processes that can be utilized to detect parallax and generate depth maps in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 13/972,881 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras that Contain Occlusions using Subsets of Images to Perform Depth Estimation” to Venkataraman et al. In many embodiments, a depth map is metadata describing the distance from the viewpoint from which an image is captured (or, in the case of super-resolution processing, synthesized) with respect to objects imaged by pixels within the image. Additionally, the depth map can also describe the geometrical linkage between pixels in the reference image and pixels in all other images in the array.
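Although the text above does not give a formula, the relationship between disparity and distance for a rectified pair of cameras is the standard pinhole relation; the sketch below illustrates it in both directions with hypothetical baseline and focal-length values.

```python
import numpy as np

def disparity_to_depth(disparity_px, baseline_m, focal_length_px):
    """Convert disparity (pixels) to depth (meters) for a rectified camera pair.

    Uses the pinhole relation depth = baseline * focal_length / disparity.
    Zero disparity (objects at infinity) maps to np.inf.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        baseline_m * focal_length_px / disparity_px,
                        np.inf)

def depth_to_disparity(depth_m, baseline_m, focal_length_px):
    """Inverse relation: estimate the disparity of a pixel imaging an object at depth_m."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(np.isfinite(depth_m) & (depth_m > 0),
                        baseline_m * focal_length_px / depth_m,
                        0.0)

# Example with a hypothetical 2.4 mm baseline and 1500 px focal length
print(disparity_to_depth(3.0, 0.0024, 1500.0))   # ~1.2 m
print(depth_to_disparity(1.2, 0.0024, 1500.0))   # ~3.0 px
```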


Array cameras in accordance with several embodiments of the invention are configured to process the images in a captured light field using a reference image selected from the captured array of images. In a variety of embodiments, the reference image is a synthetic image generated from the captured images, such as a synthetic viewpoint generated from a focal plane (e.g. a camera) that does not physically exist in the imager array. The remaining images can be considered to be images of alternate views of the scene relative to the viewpoint of the reference image. Using the reference image, array cameras in accordance with embodiments of the invention can generate a depth map using processes similar to those described above in U.S. patent application Ser. No. 13/972,881 and the depth map can be used to generate a set of prediction images describing the pixel positions within one or more of the alternate view images that correspond to specific pixels within the reference image. The relative locations of pixels in the alternate view images can be predicted along epipolar lines projected based on the configuration of the cameras (e.g. the calibration of the physical properties of the imager array in the array camera and their relationship to the reference viewpoint of the array camera) that captured the images. The predicted location of the pixels along the epipolar lines is a function of the distance from the reference viewpoint to the object imaged by the corresponding pixel in the reference image. In a number of embodiments, the predicted location is additionally a function of any calibration parameters intrinsic to or extrinsic to the physical imager array. The prediction images exploit the correlation between the images in the captured light field by describing the differences between the value of a pixel in the reference image and pixels adjacent to corresponding disparity-shifted pixel locations in the other alternate view images in the captured light field. The disparity-shifted pixel positions are often determined with fractional pixel precision (e.g. an integer position in the reference image is mapped to a fractional position in the alternate view image) based on a depth map of the reference image in the alternate view images. Significant compression of the image data forming the images of a captured light field can be achieved by selecting one reference image, generating prediction images with respect to the reference image using the depth map information relating the reference and alternate view images, generating prediction error data describing the differences between the predicted images and the alternate view images, and discarding the alternate view images. In a variety of embodiments, multiple reference images are utilized to generate prediction error data that describes the photometric differences between pixels in alternate view images adjacent to corresponding disparity-shifted pixel locations and pixels in one or more of the reference images.
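The encode path just described can be sketched minimally as below, under simplifying assumptions that are ours rather than the patent's (a single reference image, purely horizontal disparity proportional to inverse depth, and integer-pixel warping); function and key names are illustrative only.

```python
import numpy as np

def predict_alternate_view(reference, depth, baseline_factor):
    """Warp the reference image toward an alternate viewpoint using the depth map.

    Disparity is modeled as proportional to inverse depth along a purely horizontal
    epipolar line, a simplification made for illustration.
    """
    h, w = reference.shape
    predicted = np.zeros_like(reference)
    filled = np.zeros((h, w), dtype=bool)
    disparity = baseline_factor / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xa = int(round(x - disparity[y, x]))   # integer target column in the alternate view
            if 0 <= xa < w:
                predicted[y, xa] = reference[y, x]
                filled[y, xa] = True
    return predicted, filled

def encode_light_field(reference, depth, alternate_views, baseline_factors):
    """Keep only the reference image, the depth map, and per-view prediction errors."""
    errors = {}
    for key, alt in alternate_views.items():
        predicted, filled = predict_alternate_view(reference, depth, baseline_factors[key])
        # Signed photometric residual; unfilled (occluded) positions keep the raw pixel.
        residual = alt.astype(np.int16) - predicted.astype(np.int16)
        residual[~filled] = alt[~filled]
        errors[key] = residual
    return {"reference": reference, "depth": depth, "prediction_error": errors}

# Toy example with an 8x8 synthetic scene (hypothetical data)
ref = (np.arange(64).reshape(8, 8) % 256).astype(np.uint8)
depth = np.full((8, 8), 2.0)
alt = np.roll(ref, -1, axis=1)                  # alternate view shifted by one pixel
compressed = encode_light_field(ref, depth, {"cam_0_1": alt}, {"cam_0_1": 2.0})
print(compressed["prediction_error"]["cam_0_1"].dtype)  # int16 residuals
```

The alternate view images themselves are not retained in the returned dictionary, which is the source of the compression described above.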


It should also be noted that while, in a variety of embodiments, the reference image corresponds to an image in the captured array of images, virtual (e.g. synthetic) images corresponding to a virtual viewpoint within the captured light field can also be utilized as the reference image in accordance with embodiments of the invention. For example, a virtual red image, a virtual green image, and/or a virtual blue image can be used to form a reference image for each respective color channel and used as a starting point for forming predicted images for the alternate view images of each respective color channel. In many embodiments, a color channel includes a set of images within the image array corresponding to a particular color, potentially as captured by the focal planes within the imager array. However, in accordance with embodiments of the invention, the reference image for a particular color channel can be taken from a different color channel; for example, an infrared image can be used as the reference image for the green channel within the captured light field.


The reference image(s) and the set of prediction error data stored by an array camera can be referred to as compressed light field representation data. The compressed light field representation data can also include the depth map utilized to generate the prediction error data and/or any other metadata related to the creation of the compressed light field representation data and/or the captured light field. The prediction error data can be compressed using any compression technique, such as discrete cosine transform (DCT) techniques, as appropriate to the requirements of specific embodiments of the invention. The compressed light field representation data can be compressed and stored in a variety of formats. One such file format is the JPEG-DX extension to ISO/IEC 10918-1 described in U.S. patent application Ser. No. 13/631,731, titled “Systems and Methods for Encoding Light Field Image Files” to Venkataraman et al. As can readily be appreciated, the prediction error data can be stored in a similar manner to a depth map as compressed or uncompressed layers and/or metadata within an image file. In a variety of embodiments, array cameras are configured to capture light fields separate from the generation of the compressed light field representation data. For example, the compressed light field representation data can be generated when the array camera is no longer capturing light fields or in the background as the array camera captures additional light fields. Any variety of decoupled processing techniques can be utilized in accordance with the requirements of embodiments of the invention. Many array cameras in accordance with embodiments of the invention are capable of performing a variety of processes that utilize the information contained in the captured light field using the compressed light field representation data.
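The grouping of fields described above could be carried in a simple container such as the sketch below; the dataclass layout, field names, and the npz/zlib serialization are assumptions made for illustration and are not the JPEG-DX file structure.

```python
import io
import zlib
from dataclasses import dataclass, field
from typing import Any, Dict

import numpy as np

@dataclass
class CompressedLightField:
    """Illustrative container for compressed light field representation data."""
    reference_image: np.ndarray
    depth_map: np.ndarray
    prediction_error: Dict[str, np.ndarray]   # keyed by alternate-view identifier
    metadata: Dict[str, Any] = field(default_factory=dict)

    def serialize(self) -> bytes:
        """Pack arrays into an .npz payload and deflate it (stand-in for a real codec)."""
        buf = io.BytesIO()
        arrays = {"reference": self.reference_image, "depth": self.depth_map}
        arrays.update({f"err_{k}": v for k, v in self.prediction_error.items()})
        np.savez(buf, **arrays)
        return zlib.compress(buf.getvalue())

# Usage with tiny placeholder arrays
clf = CompressedLightField(
    reference_image=np.zeros((4, 4), dtype=np.uint8),
    depth_map=np.ones((4, 4), dtype=np.float32),
    prediction_error={"cam_1_0": np.zeros((4, 4), dtype=np.int16)},
    metadata={"timestamp": "2014-02-21T00:00:00Z"},
)
print(len(clf.serialize()), "bytes")
```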


In many instances, a captured light field contains image data from an array of images of a scene that sample an object space within the scene in such a way as to provide sampling diversity that can be utilized to synthesize higher resolution images of the object space using super-resolution processes. Systems and methods for performing super-resolution processing on image data captured by an array camera in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 12/967,807 entitled “System and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al. Synthesized high resolution images are representations of the scene captured in the captured light field. In many instances, the process of synthesizing a high resolution image may result in a single image, a stereoscopic pair of images that can be used to display three dimensional (3D) information via an appropriate 3D display, and/or a variety of images from different viewpoints. The process of synthesizing high resolution images from lower resolution image data captured by an array camera module in an array camera typically involves performing parallax detection and correction to reduce the effects of disparity between the images captured by each of the cameras in the array camera module. By using the reference image(s), the set of prediction error data, and/or the depth map contained in compressed light field representation data, high resolution images can be synthesized separately from the parallax detection and correction process, thereby alleviating the need to store and process the captured light field until the super-resolution process can be performed. Additionally, the parallax detection process can be optimized to improve speed or efficiency of compression. Once the compressed data is decoded, a parallax process can be re-run at a different (i.e. higher) precision using the reconstructed images. In this way, an initial super-resolution process can be performed in an efficient manner (such as on an array camera, where the processing power of the device limits the ability to perform a high precision parallax process in real-time) and, at a later time, a higher resolution parallax process can be performed to generate any of a variety of data, including a second set of compressed light field representation data and/or other captured light field image data, or perform any processing that relies on the captured light field. Later times include, but are not limited to, times when the array camera is not capturing light fields and/or when the compressed light field representation data has been transmitted to a separate image processing device with more advanced processing capabilities.


The disclosures of each of U.S. patent application Ser. Nos. 12/935,504, 12/967,807, 13/631,731, and 13/972,881 are hereby incorporated by reference in their entirety. Although the systems and methods described are with respect to array cameras configured to both capture and process captured light fields, devices that are configured to obtain captured light fields captured using a different device and process the received data can be utilized in accordance with the requirements of a variety of embodiments of the invention. Additionally, any of the various systems and processes described herein can be performed in sequence, in alternative sequences, and/or in parallel (e.g. on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application of the invention. Systems and methods for capturing light fields and generating compressed light field representation data using the captured light fields in accordance with embodiments of the invention are described below.


Array Camera Architectures


As described above, array cameras are capable of capturing and processing light fields and can be configured to generate compressed light field representation data using captured light fields in accordance with many embodiments of the invention. An array camera including an imager array in accordance with an embodiment of the invention is illustrated in FIG. 1. The array camera 100 includes an array camera module including an imager array 102 having multiple focal planes 104 and an optics array configured to form images through separate apertures on each of the focal planes. The imager array 102 is configured to communicate with a processor 108. In accordance with many embodiments of the invention, the processor 108 is configured to read out image data captured by the imager array 102 and generate compressed light field representation data using the image data captured by the imager array 102. Imager arrays including multiple focal planes are discussed in U.S. patent application Ser. No. 13/106,797, entitled “Architectures for System on Chip Array Cameras” to McMahon et al., the entirety of which is hereby incorporated by reference.


In the illustrated embodiment, the focal planes are configured in a 5×5 array. In other embodiments, any of a variety of array configurations can be utilized including linear arrays, non-rectangular arrays, and subsets of an array as appropriate to the requirements of specific embodiments of the invention. Each focal plane 104 of the imager array is capable of capturing image data from an image of the scene formed through a distinct aperture. Typically, each focal plane includes a plurality of rows of pixels that also forms a plurality of columns of pixels, and each focal plane is contained within a region of the imager that does not contain pixels from another focal plane. The pixels or sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements, and/or any other structure configured to generate an electrical signal indicative of light incident on the structure. In many embodiments, the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present). In other embodiments, the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element. In a variety of embodiments, a Bayer filter pattern of light filters can be applied to one or more of the focal planes 104. In a number of embodiments, the sensor elements are optimized to respond to light at a particular wavelength without utilizing a color filter. It should be noted that any optical channel, including those in non-visible portions of the electromagnetic spectrum (such as infrared) can be sensed by the focal planes as appropriate to the requirements of particular embodiments of the invention.


In several embodiments, information captured by one or more focal planes 104 is read out of the imager array 102 as packets of image data. In many embodiments, a packet of image data contains one or more pixels from a row of pixels captured from each of one or more of the focal planes 104. Packets of image data may contain other groupings of captured pixels, such as one or more pixels captured from a column of pixels in each of one or more focal planes 104 and/or a random sampling of pixels. Systems and methods for reading out image data from array cameras that can be utilized in array cameras configured in accordance with embodiments of the invention are described in U.S. Pat. No. 8,305,456, entitled “Systems and Methods for Transmitting and Receiving Array Camera Image Data” to McMahon, the entirety of which is hereby incorporated by reference. In several embodiments, the packets of image data are used to create a two-dimensional array of images representing the light field as captured from the one or more focal planes 104. In many embodiments, one or more of the images in the array of images are associated with a particular color; this color can be the same color associated with the focal plane 104 corresponding to the viewpoint of the image or a different color. The processor 108 can be configured to immediately process the captured light field from the one or more focal planes and/or the processor 108 can store the captured light field and later process the captured light field. In a number of embodiments, the processor 108 is configured to offload the captured light fields to an external device for processing.


The processing of captured light fields includes determining correspondences between pixels in the captured light field. In several embodiments, the pixels in the packets of image data are geometrically correlated based on a variety of factors, including, but not limited to, the characteristics of one or more of the focal planes 104. The calibration of imager arrays to determine the characteristics of focal planes is disclosed in U.S. patent application Ser. No. 12/967,807 incorporated by reference above. In several embodiments, processor 108 is configured (such as by an image processing application) to perform parallax detection on the captured light field to determine corresponding pixel locations along epipolar lines between a reference image and alternate view images within the captured light field. The process of performing parallax detection also involves generating a depth map with respect to the reference image (e.g. a reference viewpoint that may include synthesized ‘virtual’ viewpoints where a physical camera in the array does not exist). In a variety of embodiments, the captured packets of image data are associated with image packet timestamps and geometric calibration and/or photometric calibration between pixels in the packets of image data utilize the associated image packet timestamps. Corresponding pixel locations and differences between pixels in the reference image and the alternate view image(s) can be utilized by processor 108 to determine a prediction for at least some of the pixels of the alternate view image(s). In many embodiments, the corresponding pixel locations are determined with sub-pixel precision.
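As an illustration of determining a corresponding pixel location with sub-pixel precision, the sketch below projects a reference pixel into an alternate view under an assumed rectified, fronto-parallel geometry; the baseline and focal-length values in the example are hypothetical.

```python
import numpy as np

def corresponding_location(x_ref, y_ref, depth, baseline, focal_length_px):
    """Predict the sub-pixel location in an alternate view for a reference pixel.

    `baseline` is the 2-D offset (in meters) between the optical centers of the
    reference camera and the alternate-view camera; the disparity is applied along
    the corresponding epipolar direction. Assumes rectified, fronto-parallel geometry.
    """
    bx, by = baseline
    disparity_x = focal_length_px * bx / depth
    disparity_y = focal_length_px * by / depth
    return x_ref - disparity_x, y_ref - disparity_y   # fractional (sub-pixel) coordinates

# A pixel imaging an object 1.5 m away, seen from a camera 3 mm to the right
print(corresponding_location(120.0, 80.0, 1.5, (0.003, 0.0), 1400.0))
# -> (117.2, 80.0): the point shifts 2.8 px along the horizontal epipolar line
```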


The prediction image can be formed by propagating pixels from the reference image(s) to the corresponding pixel locations in the alternate view grid. In many embodiments, the corresponding pixel locations in the alternate view grid are fractional positions (e.g. sub-pixel positions). Once the pixels from the reference image(s) are propagated to the corresponding positions in the alternate view grid, a predicted image (from the same perspective as the alternate view image) is formed by calculating prediction values for the integer grid points in the alternate view grid based on propagated pixel values from the reference image. The predicted image values in the integer grid of the alternate view image can be determined by interpolating from multiple pixels propagated from the reference image in the neighborhood of the integer pixel grid position in the predicted image. In many embodiments, the predicted image values on the integer grid points of the alternate view image are interpolated through iterative interpolation schemes (e.g. a combination of linear or non-linear interpolations) that progressively fill in ‘holes’ or missing data at integer positions in the predicted alternate view image grid. In a variety of embodiments, integer grid locations in the predicted image can be filled using set selection criteria. In several embodiments, pixels propagated from the reference image within a particular radius of the integer pixel position can form a set, and the pixel in the set closest to the mean of the distribution of pixels in the region can be selected as the predictor. In a number of embodiments, within the same set, the pixel that lands nearest to the integer grid point may be used as the predictor (i.e. nearest neighbor interpolation). In another embodiment, an average of the N nearest neighbors may be used as the predicted image value at the integer grid point. However, it should be noted that the predicted value can be any function (linear or non-linear) that interpolates or inpaints values in the predicted image based on reference pixel values in some relationship to the integer grid position in the predicted image.
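The nearest-propagated-pixel strategy mentioned above might be sketched as follows; the support radius, the NaN hole marker, and the function name are assumptions, and a real implementation would also support the other fill strategies described.

```python
import numpy as np

def fill_integer_grid(propagated_xy, propagated_values, shape, radius=1.0):
    """Form a predicted image on the integer grid from pixels propagated to
    fractional positions. Each integer grid point takes the value of the nearest
    propagated pixel within `radius`; points with no candidate remain holes (NaN).
    """
    h, w = shape
    predicted = np.full((h, w), np.nan)
    best_dist = np.full((h, w), np.inf)
    for (px, py), value in zip(propagated_xy, propagated_values):
        # Visit the integer grid points surrounding this fractional position.
        for gy in range(int(np.floor(py - radius)), int(np.ceil(py + radius)) + 1):
            for gx in range(int(np.floor(px - radius)), int(np.ceil(px + radius)) + 1):
                if 0 <= gx < w and 0 <= gy < h:
                    d = np.hypot(gx - px, gy - py)
                    if d <= radius and d < best_dist[gy, gx]:
                        best_dist[gy, gx] = d
                        predicted[gy, gx] = value
    return predicted

# Three reference pixels land at fractional positions in a 3x4 alternate-view grid
positions = [(0.4, 0.2), (2.6, 1.1), (1.5, 2.0)]
values = [100.0, 150.0, 200.0]
print(fill_integer_grid(positions, values, (3, 4)))
```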


The prediction error data itself can be determined by performing a photometric comparison of the pixel values from the predicted image (e.g. the predicted alternate view image based on the reference image and the depth map) and the corresponding alternate view image. The prediction error data represents the difference between the predicted alternate view image based on the reference image and the depth map, and the actual alternate view image that must be later reproduced in the decoding process.


Due to variations in the optics and the pixels used to capture the image data, sampling diversity, and/or aliasing, the processor 108 is configured to anticipate photometric differences between corresponding pixels. These photometric differences may be further increased in the compared pixels because the nearest neighbor does not directly correspond to the pixel in the reference image. In many embodiments, the compression is lossless and the full captured light field can be reconstructed using the reference image, the depth map, and the prediction error data. In other embodiments, a lossy compression is used and an approximation of the full captured light field can be reconstructed. In this way, the pixel values of the alternate view images are available for use in super-resolution processing, enabling the super-resolution processes to exploit the sampling diversity and/or aliasing that may be reflected in the alternate view images. In a number of embodiments, the prediction images are sparse images. In several embodiments, sparse images contain predictions for some subset of points (e.g. pixels) in the space of the alternate view images. Processor 108 is further configured to generate compressed light field representation data using prediction error data, the reference image, and the depth map. Other data, such as one or more image packet timestamps, can be included as metadata associated with the compressed light field representation data as appropriate to the requirements of specific array cameras in accordance with embodiments of the invention. In several embodiments, the prediction error data and the reference image are compressed via lossless and/or lossy image compression techniques. In a variety of embodiments, an image processing application configures processor 108 to perform a variety of operations using the compressed light field representation data, including, but not limited to, synthesizing high resolution images using a super-resolution process. Other operations can be performed using the compressed light field representation data in accordance with a variety of embodiments of the invention.
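A short sketch of the residual computation and the lossless round trip discussed above, assuming 8-bit pixel values and a signed 16-bit residual (a representation choice made here for illustration).

```python
import numpy as np

def compute_prediction_error(predicted, alternate_view):
    """Signed photometric residual between a predicted image and the actual alternate view."""
    return alternate_view.astype(np.int16) - predicted.astype(np.int16)

def reconstruct_alternate_view(predicted, prediction_error):
    """Decoder-side inverse: adding the residual back recovers the alternate view exactly."""
    return (predicted.astype(np.int16) + prediction_error).astype(np.uint8)

rng = np.random.default_rng(0)
alt = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
pred = np.clip(alt.astype(np.int16) + rng.integers(-3, 4, size=(4, 4)), 0, 255).astype(np.uint8)
err = compute_prediction_error(pred, alt)
assert np.array_equal(reconstruct_alternate_view(pred, err), alt)   # lossless round trip
```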


Although a specific array camera configured to capture light fields and generate compressed light field representation data is illustrated in FIG. 1, alternative architectures, including those containing sensors measuring the movement of the array camera as light fields are captured, can also be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Systems and methods for capturing and processing light fields in accordance with embodiments of the invention are discussed below.


Processing and Interacting with Captured Light Fields


A captured light field, as an array of images, can consume a significant amount of storage space. Generating compressed light field representation data using the captured light field while reducing the storage space utilized can be a processor-intensive task. A variety of array cameras in accordance with embodiments of the invention lack the processing power to simultaneously capture and process light fields while maintaining adequate performance for one or both of the operations. Array cameras in accordance with several embodiments of the invention are configured to separately obtain a captured light field and generate compressed light field representation data using the captured light field, allowing the array camera to quickly capture light fields and efficiently process those light fields as the processing power becomes available and/or the compressed light field representation data is needed. A process for processing and interacting with captured light fields in accordance with an embodiment of the invention is illustrated in FIG. 2. The process 200 includes reading (210) image data from a captured light field out of an array camera module. Compressed light field representation data is generated (212) from the captured light field. In a variety of embodiments, a high resolution image is synthesized (214) using the compressed light field representation data. In several embodiments, users can then interact (216) with the synthesized high resolution image in a variety of ways appropriate to the requirements of a specific application in accordance with embodiments of the invention.


In many embodiments, a captured light field is obtained (210) using an imager array and a processor in the array camera generating (212) the compressed light field representation data. In a number of embodiments, a captured light field is obtained (210) from a separate device. In several embodiments, the captured light field is obtained (210) and compressed light field representation data is generated (212) as part of a single capture operation. In a variety of embodiments, obtaining (210) the captured light field and generating (212) the compressed light field representation data occurs at disparate times.


Generating (212) the compressed light field representation data includes normalizing the obtained (210) captured light field and generating a depth map for the obtained (210) captured light field using geometric calibration data and photometric calibration data. In many embodiments, parallax detection processes, such as those disclosed in U.S. patent application Ser. No. 13/972,881, are utilized to generate a depth map and prediction error data describing correlation between pixels in the captured light field from the perspective of one or more reference images. Processes other than those disclosed in U.S. patent application Ser. No. 13/972,881 can be utilized in accordance with many embodiments of the invention. The generated (212) compressed light field representation data includes the prediction error data, the reference images, and the depth map. Additional metadata, such as timestamps, location information, and sensor information, can be included in the generated (212) compressed light field representation data as appropriate to the requirements of specific applications in accordance with embodiments of the invention. In many embodiments, the generated (212) compressed light field representation data is compressed using lossy and/or non-lossy compression techniques. The generated (212) compressed light field representation data can be stored in a variety of formats, such as the JPEG-DX standard. In several embodiments, the alternate view images in the obtained (210) captured light field are not stored in the generated (212) compressed light field representation data.
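As an illustration of compressing the generated data, the sketch below quantizes and entropy-codes a residual image; zlib and the quantization step stand in for whichever lossy or lossless codec (or the JPEG-DX container) a given embodiment actually uses.

```python
import zlib
import numpy as np

def compress_residual(residual: np.ndarray, lossy_step: int = 1) -> bytes:
    """Optionally quantize a signed residual image, then entropy-code it.

    lossy_step == 1 keeps the residual intact (lossless); larger steps trade
    reconstruction accuracy for a smaller payload.
    """
    quantized = (residual // lossy_step).astype(np.int16)
    return zlib.compress(quantized.tobytes(), level=9)

def decompress_residual(payload: bytes, shape, lossy_step: int = 1) -> np.ndarray:
    quantized = np.frombuffer(zlib.decompress(payload), dtype=np.int16).reshape(shape)
    return quantized * lossy_step

residual = np.zeros((64, 64), dtype=np.int16)
residual[20:30, 20:30] = 5                       # small localized prediction error
payload = compress_residual(residual)
print(len(payload), "bytes vs", residual.nbytes, "raw")
assert np.array_equal(decompress_residual(payload, residual.shape), residual)
```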


In a number of embodiments, synthesizing (214) a high resolution image utilizes the reference image(s), the prediction error data, and the depth map in the generated (212) compressed light field representation data. In a variety of embodiments, the reference images, the prediction error data, and the depth map are utilized to reconstruct the array of images (or an approximation of the images) to synthesize (214) a high resolution image using a super-resolution process. A high resolution image can be synthesized (214) using the array of images representing the captured light field reconstructed based on the compressed light field representation data (212). However, in a number of embodiments synthesizing (214) a high resolution image using the generated (212) compressed light field representation data includes reconstructing (e.g. decoding) the array of images using the compressed light field representation data once the captured light field is to be viewed, such as in an image viewing application running on an array camera or other device. Techniques for decoding compressed light field representation data that can be utilized in accordance with embodiments of the invention are described in more detail below. In several embodiments, high resolution images are synthesized (214) at a variety of resolutions to support different devices and/or varying performance requirements. In a number of embodiments, the synthesis (214) of a number of high resolution images is part of an image fusion process such as the processes described in U.S. patent application Ser. No. 12/967,807, the disclosure of which is incorporated by reference above.


Many operations can be performed while interacting (216) with synthesized high resolution images, such as, but not limited to, modifying the depth of field of the synthesized high resolution image, changing the focal plane of the synthesized high resolution image, recoloring the synthesized high resolution image, and detecting objects within the synthesized high resolution image. Systems and methods for interacting (216) with compressed light field representation data and synthesized high resolution images that can be utilized in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 13/773,284 to McMahon et al., the entirety of which is hereby incorporated by reference.


Although a specific process for processing and interacting with captured light fields in accordance with an embodiment of the invention is described above with respect to FIG. 2, a variety of image deconvolution processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention. Processes for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are discussed below.


Generating Compressed Light Field Representation Data


A process for generating compressed light field representation data in accordance with an embodiment of the invention is illustrated in FIG. 3. The process 300 includes obtaining (310) an array of image data. A reference image viewpoint (e.g. a desired viewpoint for the reference image) is determined (312). Parallax detection is performed (314) to form a depth map from this reference viewpoint. Predicted images are determined (316) corresponding to the alternate view image by propagating pixels from the reference image to the alternate view grid. Prediction error data is computed (318) as the difference between the predicted image and the corresponding alternate view image. Where areas of low confidence are detected (320), supplemental prediction images and supplemental prediction error data are computed (322). In a variety of embodiments, the reference image(s), prediction error data, and/or the depth map are compressed (324). Compressed light field representation data can then be created (326) using the reference image(s), prediction error data, and/or depth map(s).
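Steps (320) and (322) might be realized along the lines of the following sketch; the error and confidence thresholds, and the co-located copy from an additional reference image, are assumptions made purely for illustration.

```python
import numpy as np

def low_confidence_mask(prediction_error, depth_confidence, error_threshold=12, conf_threshold=0.5):
    """Flag pixels whose residual is large or whose depth estimate is unreliable."""
    return (np.abs(prediction_error) > error_threshold) | (depth_confidence < conf_threshold)

def supplemental_prediction(primary_prediction, additional_reference, mask):
    """Re-predict flagged pixels from an additional reference image.

    This sketch simply copies co-located pixels from the additional reference;
    a fuller implementation would warp it through the depth map first.
    """
    supplemental = primary_prediction.copy()
    supplemental[mask] = additional_reference[mask]
    return supplemental

err = np.zeros((4, 4), dtype=np.int16)
err[1, 1] = 40                                  # one badly predicted pixel
conf = np.ones((4, 4))
conf[2, 3] = 0.1                                # one low-confidence depth estimate
mask = low_confidence_mask(err, conf)
print(mask.sum(), "pixels re-predicted from the additional reference image")
```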


In a variety of embodiments, the array of images is obtained (310) from a captured light field. In several embodiments, the obtained (310) array of images is packets of image data captured using an imager array. In many embodiments, the determined (312) reference image corresponds to the reference viewpoint of the array of images. Furthermore, the determined (312) reference image can be an arbitrary image (or synthetic image) in the obtained (310) array of images. In a number of embodiments, each image in the obtained (310) array of images is associated with a particular color channel, such as, but not limited to, green, red, and blue. Other colors and/or portions of the electromagnetic spectrum can be associated with each image in the array of images in accordance with a variety of embodiments of the invention. In several embodiments, the determined (312) reference image is a green image in the array of images. Parallax detection is performed (314) with respect to the viewpoint of the determined (312) reference image to locate pixels corresponding to pixels in a reference image by searching along epipolar lines in the alternate view images in the array of images. In a number of embodiments, the parallax detection uses correspondences between cameras that are not co-located with the viewpoint of the reference image(s). In many embodiments, the search area need not be directly along an epipolar line, but rather a region surrounding the epipolar line; this area can be utilized to account for inaccuracies in determining imager calibration parameters and/or the epipolar lines. In several embodiments, parallax detection can be performed (314) with a fixed and/or dynamically determined level of precision; this level of precision can be based on performance requirements and/or desired compression efficiency, the array of images, and/or on the desired level of precision in the result of the performed (314) parallax detection. Additional techniques for performing parallax processes with varying levels of precision are disclosed in U.S. Provisional Patent Application Ser. No. 61/780,974, filed Mar. 13, 2013, the entirety of which is hereby incorporated by reference.
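A sketch of the epipolar search described above, using a sum-of-absolute-differences cost over a small window and scanning a band of rows around a horizontal epipolar line to tolerate calibration error; the window size, band width, and disparity range are illustrative.

```python
import numpy as np

def search_epipolar_line(reference, alternate, x, y, max_disparity=16, window=2, band=1):
    """Return the disparity along a horizontal epipolar line that best matches the
    reference window at (x, y), also scanning `band` rows above and below the line
    to tolerate small calibration errors."""
    h, w = reference.shape
    ref_patch = reference[y - window:y + window + 1, x - window:x + window + 1].astype(np.int32)
    best_cost, best_disparity = np.inf, 0
    for dy in range(-band, band + 1):
        for d in range(max_disparity + 1):
            xa, ya = x - d, y + dy
            if window <= xa < w - window and window <= ya < h - window:
                alt_patch = alternate[ya - window:ya + window + 1,
                                      xa - window:xa + window + 1].astype(np.int32)
                cost = np.abs(ref_patch - alt_patch).sum()
                if cost < best_cost:
                    best_cost, best_disparity = cost, d
    return best_disparity

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
alt = np.roll(ref, -4, axis=1)                   # scene shifted 4 px between views
print(search_epipolar_line(ref, alt, x=16, y=16))  # expected disparity: 4
```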


Disparity Information from a Single Reference Image


Turning now to FIG. 4A, a conceptual illustration of a two-dimensional array of images and associated epipolar lines as utilized in determining pixel correspondences in accordance with an embodiment of the invention is shown. The 4×4 array of images 400 includes a reference image 410, a plurality of alternate view images 412, a plurality of epipolar lines 414, and baselines 416 representing the distance between optical centers of particular pairs of cameras in the array. Performing (314) parallax detection along epipolar lines 414 calculates disparity information for the pixels in one or more of the alternate view images 412 relative to the corresponding pixels in the reference image 410. In a number of embodiments, the epipolar lines are geometric distortion-compensated epipolar lines between the pixels corresponding to the photosensitive sensors in the focal planes in the imager array that captured the array of images. In several embodiments, the calculation of disparity information first involves the utilization of geometric calibration data so that disparity searches can be directly performed along epipolar lines within the alternate view images. Geometric calibration data can include a variety of information, such as inter- and intra-camera lens distortion data obtained from an array camera calibration process. Other geometric calibration data can be utilized in accordance with a number of embodiments of the invention. In a variety of embodiments, photometric pre-compensation processes are performed on one or more of the images prior to determining the disparity information. A variety of photometric pre-compensation processes, such as vignette correction, can be utilized in accordance with many embodiments of the invention. Although specific techniques for determining disparity are discussed above, any of a variety of techniques appropriate to the requirements of a specific application can be utilized in accordance with embodiments of the invention, such as those disclosed in U.S. patent application Ser. No. 13/972,881, incorporated by reference above.


In a variety of embodiments, performing (314) parallax detection includes generating a depth map describing depth information in the array of images. In many embodiments, the depth map is metadata describing the distance from the reference camera (i.e. viewpoint) to the portion of the scene captured in the pixels (or a subset of the pixels) of an image determined using the corresponding pixels in some or all of the alternate view images. In several embodiments, candidate corresponding pixels are those pixels in alternate view images that appear along epipolar lines from pixels in the reference image. In a number of embodiments, a depth map is generated using only images in the array of images that are associated with the same color (for example, green) as the reference image. In several embodiments, the depth map is generated using images that share a common color different from the color of the reference camera. For example, with a green reference image, a depth map can be generated using only the images associated with the color red (or blue) in the array of images. In many embodiments, depth information is determined with respect to multiple colors and combined to generate a depth map; e.g. depth information is determined separately for the subsets of green, red, and blue images in the array of images and a final depth map is generated using a combination of the green depth information, the red depth information, and the blue depth information. In a variety of embodiments, the depth map is generated using information from any set of cameras in the array. In a variety of embodiments, the depth map is generated without respect to colors associated with the images and/or with a combination of colors associated with the images. In several embodiments, performing (314) parallax detection can be performed utilizing techniques similar to those described in U.S. patent application Ser. No. 13/972,881, incorporated by reference above. Additionally, non-color images (such as infrared images) can be utilized to generate the depth map as appropriate to the requirements of specific embodiments of the invention.


Although a specific example of a 4×4 array of images that can be utilized to determine disparity information and a depth map from a reference image in the 4×4 array of images is described above with respect to FIG. 4A, any size array, and any set of cameras in that array can be used to determine disparity information and a depth map in accordance with embodiments of the invention.


Returning now to FIG. 3, depth information determined during the performed (314) parallax detection is used to determine (316) prediction images including pixel location predictions for one or more pixels in the reference image in at least one alternate view image. In several embodiments, the depth map generated during parallax detection can be used to identify pixel locations within alternate view images corresponding to a pixel location within the reference image with fractional pixel precision. In a variety of embodiments, determining (316) a prediction image in the alternate view includes mapping the fractional pixel location to a specific pixel location (or pixel locations) within the pixel grid for the alternate view image. In several embodiments, specific integer grid pixel location(s) in the predicted image for the alternate view are determined as a function of the neighbors within the support region. These functions include, but are not limited to, the nearest neighbor (or a function of the nearest N neighbors) to the integer grid point within the support region. In other embodiments, any other localized fixed or adaptive mapping technique including (but not limited to) techniques that map based on depth in boundary regions can be utilized to identify a pixel within an alternate view image for the purpose of generating a prediction for the selected pixel in the alternate view image. Additionally, filtering can be incorporated into the computation of prediction images in order to reduce the amount of prediction error. In several embodiments, the prediction error data is computed (318) from the difference of the prediction image(s) and their respective alternative view image(s). The prediction error data can be utilized in the compression of one or more images in the captured light field. In several embodiments, the computed (318) prediction error data is the signed difference between the values of a pixel in the predicted image and the pixel at the same grid position in the alternate view image. In this way, the prediction error data typically does not reflect the error in the location prediction for a pixel in the reference image relative to a pixel location in the alternate view image. Instead, the prediction error data primarily describes the difference in photometric information between a pixel in the determined prediction image that was generated by propagating pixels from the reference image and a pixel in an alternate view image. Although specific techniques are identified for determining predicted images utilizing correspondence information determined using a depth map, any of a variety of approaches can be utilized for determining prediction images utilizing correspondence information determined using a depth map as appropriate to the requirements of a specific application in accordance with embodiments of the invention.


In a variety of embodiments, virtual red, virtual green, and/or virtual blue reference images can be utilized as a reference image. For example, a depth map can be determined for a particular reference viewpoint that may not correspond to the location of a physical camera in the array. This depth map can be utilized to form a virtual red image, a virtual green image, and/or a virtual blue image from the captured light field. These virtual red, virtual green, and/or virtual blue images can then be utilized as the reference image(s) from which to create the prediction images for the alternate view(s) utilized in the processes described above. By way of a second example, one or more virtual red, virtual green, and/or virtual blue images and/or physical red, green, and/or blue images within the array of images can be used as reference images. When forming the prediction error, the depth map can be utilized to form a prediction image from the virtual and/or actual reference image(s) and to calculate the prediction error with respect to the corresponding alternate view images.


Turning now to FIG. 5, a prediction error histogram 500 conceptually illustrating computed (318) prediction error data between pixels in a predicted image and an alternate view image in accordance with an embodiment of the invention is shown. The prediction error represented by prediction error histogram 500 can be utilized in the compression of the corresponding image data using the computed (318) prediction error data. Although a specific example of a prediction error histogram in accordance with an embodiment of the invention is conceptually illustrated in FIG. 5, any variety of prediction errors, including those that have statistical properties differing from those illustrated in FIG. 5, and any other applicable error measurement can be utilized in accordance with the requirements of embodiments of the invention.
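
A short Python sketch can illustrate why prediction error data of the kind conceptually shown in FIG. 5 tends to compress well. The Laplacian stand-in below is purely illustrative of error data concentrated around zero and is not the data behind FIG. 5; the entropy estimate is only a rough indication of the bits per sample an entropy coder could approach under these assumptions.

import numpy as np

# Illustrative stand-in for E_(k,l): signed errors concentrated near zero.
error = np.rint(np.random.laplace(0.0, 4.0, size=100_000)).astype(int)
hist, edges = np.histogram(error, bins=np.arange(-64, 65))
probs = hist / hist.sum()
entropy_bits = -(probs[probs > 0] * np.log2(probs[probs > 0])).sum()
print(f"approximate bits per error sample: {entropy_bits:.2f} (vs. 8 bits raw)")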


Returning now to FIG. 3, the correlation between spatially proximate pixels in an image can be exploited to compare all pixels within a patch of an alternate view image to a pixel and/or a patch from a reference image in several embodiments of the invention. Effectively, the pixels from a region in the predicted image are copied from a patch in the reference image. In this way, a trade-off can be achieved between determining fewer corresponding pixel locations based on the depth and/or generating a lower resolution depth map for a reference image and encoding a potentially larger range of prediction errors with respect to one or more alternate view images. In a number of embodiments, the process of encoding the prediction error data can be adaptive in the sense that pixels within a region of an alternate view image can be encoded with respect to a specific pixel in a reference image and a new pixel from the reference image can be selected that has a corresponding pixel location closer to a pixel in the alternate view image in the event that the prediction error exceeds a predetermined threshold.
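
One way the adaptive, threshold-driven encoding described above could be realized is sketched below in Python. The function encode_region_adaptive, the ordering of ref_candidates by proximity of their predicted locations, and the switch-signaling scheme are assumptions introduced for illustration rather than a description of any particular embodiment.

import numpy as np

def encode_region_adaptive(alt_region, ref_candidates, threshold):
    # Encode a 1-D region of alternate-view pixels against a single reference
    # pixel value, re-selecting a closer reference candidate whenever the
    # residual exceeds the threshold. Switches must be signaled to the decoder.
    current = 0                       # start with the nearest candidate
    residuals, switches = [], []
    for value in alt_region:
        err = int(value) - int(ref_candidates[current])
        if abs(err) > threshold:
            current = int(np.argmin(np.abs(ref_candidates.astype(int) - int(value))))
            err = int(value) - int(ref_candidates[current])
            switches.append(current)
        residuals.append(err)
    return residuals, switches

# Example usage.
region = np.array([120, 122, 119, 200, 202, 121], dtype=np.uint8)
candidates = np.array([121, 201, 90], dtype=np.uint8)
residuals, switches = encode_region_adaptive(region, candidates, threshold=16)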


Prediction error data for the alternate view is computed (318) using the determined (312) reference image, the determined (316) prediction images, and the depth map. For the reference image p_ref and the alternate view images p_(k,l), where (k,l) represents the location of the alternate view image in the array of images p, the depth information provides, for a subset of the images p_(k,l), mappings of the form

p_ref(x, y) := p_(k,l)(i, j)

where (x, y) is the location of a pixel in p_ref and (i, j) is the (typically fractional) location in p_(k,l) corresponding to the pixel p_ref(x, y) based on the depth information. Using these mappings of subsets of pixels to the alternate viewpoint p_(k,l), a prediction image for p_(k,l) is determined (316) from the reference pixels that map to p_(k,l). The prediction error data E_(k,l) can then be computed (318) between the alternate view image p_(k,l) and the prediction image for viewpoint p_(k,l) described above. In a variety of embodiments, the determined (316) prediction images include sparse images. The missing values in the sparsely populated images can be interpolated using populated values within a neighborhood of the missing pixel value. For example, a kernel regression may be applied to the populated values to fill in the missing prediction values. In these cases, the prediction error data E_(k,l) is a representation of the error induced by the interpolation of the missing values.
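
The interpolation of missing prediction values and the computation of E_(k,l) described above can be sketched in Python as follows. The Gaussian-weighted local average is a simple stand-in for the kernel regression mentioned in the text, and the names fill_sparse_prediction and prediction_error are assumptions for illustration.

import numpy as np

def fill_sparse_prediction(prediction, filled, radius=2, sigma=1.0):
    # Fill unpopulated locations of a sparse prediction image with a
    # Gaussian-weighted average of populated neighbors within a small window.
    h, w = prediction.shape
    out = prediction.astype(float).copy()
    for y, x in zip(*np.where(~filled)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = prediction[y0:y1, x0:x1].astype(float)
        mask = filled[y0:y1, x0:x1]
        if not mask.any():
            continue                                  # leave the hole for a later pass
        yy, xx = np.mgrid[y0:y1, x0:x1]
        weights = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2)) * mask
        out[y, x] = (weights * window).sum() / weights.sum()
    return out

def prediction_error(filled_prediction, alternate_view):
    # E_(k,l): signed difference between the filled prediction and the alternate view.
    return alternate_view.astype(int) - np.rint(filled_prediction).astype(int)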


In many embodiments, the determined (316) initial prediction image is a sparsely populated grid of fractionally-positioned points from the reference frame p_ref that includes "holes", that is, locations or regions on the alternate view integer grid that are not occupied by any of the pixels mapped from the reference frame p_ref. The presence of "holes" can be particularly prevalent in occluded areas, but holes may also occur in non-occluded regions due to non-idealities in the depth map or due to the fact that many pixels in the reference camera correspond to fractional positions in the alternate view image. In several embodiments, holes in the prediction error data can be filled using the actual value of the pixel at that location in the alternate view image p_(k,l). This is similar to filling the predicted image with a value of zero (i.e. any null value or default value) to ensure that the coded error is equal to the value of the pixel in the alternate view image at that position. In a variety of embodiments, "holes" in a predicted image can be filled using interpolation, in which predicted values from neighboring pixels are used to create additional predictions for the holes based on the pixels from the predicted image. As can readily be appreciated, any interpolator can be utilized to create interpolated predicted image pixels from pixels propagated from the reference image. The details of the interpolation scheme used are a parameter of the encoding and decoding process, and the same scheme should be applied in both the encoder and decoder to ensure lossless output. In a variety of embodiments, such residuals can provide more efficient compression than encoding holes with absolute pixel values. In several embodiments, the pixels from the reference image p_ref do not map to exact grid locations within the prediction image, and a mapping that assigns a single pixel value to multiple adjacent pixel locations on the integer grid of the prediction image is used. In this case, multiple pixels from the reference image p_ref may map to the same integer grid location in the prediction image, and pixel stacking rules can be utilized to generate multiple prediction images in which different stacked pixels are used in each image. In many embodiments, if N pixels exist in a pixel stack, the resulting predicted value can be the mean of the N pixel values in the stack. However, any number of prediction images can be computed and/or any other technique for determining prediction images where multiple pixels map to the same location (e.g. where a pixel stack exists) can be utilized as appropriate to the requirements of specific embodiments of the invention. In a variety of embodiments, holes can remain within the predicted images after the initial interpolation; additional interpolation processes can be performed until every location on the integer grid of the predicted image (or a predetermined number of locations) is assigned a pixel value. Any interpolation technique, such as kernel regression or inpainting, can be used to fill the remaining holes as described. In other embodiments, a variety of techniques that involve creating multiple prediction images and/or pieces of prediction error data can be utilized to achieve compression of the raw data. Furthermore, any of a variety of interpolation techniques known to those skilled in the art can be utilized to fill holes in a prediction image as appropriate to the requirements of a specific application in accordance with embodiments of the invention.
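
A minimal Python sketch of one pixel stacking rule, the mean of the N stacked values, is shown below. The function name accumulate_pixel_stacks and the representation of propagated pixels as parallel arrays are assumptions for illustration; other stacking rules (for example, generating multiple prediction images from different stack members) could be substituted.

import numpy as np
from collections import defaultdict

def accumulate_pixel_stacks(values, target_rows, target_cols, shape):
    # Resolve pixel stacks by averaging all reference pixels that land on the
    # same integer grid location of the prediction image. Locations that
    # receive no pixels remain holes and can be interpolated afterwards.
    stacks = defaultdict(list)
    for v, r, c in zip(values, target_rows, target_cols):
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            stacks[(r, c)].append(float(v))
    prediction = np.zeros(shape)
    filled = np.zeros(shape, dtype=bool)
    for (r, c), stack in stacks.items():
        prediction[r, c] = sum(stack) / len(stack)    # mean of the N stacked pixels
        filled[r, c] = True
    return prediction, filled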


Prediction Error Data from Multiple Reference Images


In a variety of embodiments, performing (314) parallax detection does not return accurate disparity information for pixels in alternate view images that are occluded, appear in featureless (e.g. textureless) areas relative to the reference image, or where the depth map exhibits other non-idealities such as photometric mismatch. Using the computed (318) prediction error data and/or the depth map and/or a confidence map describing areas of low confidence in the depth map, areas of low confidence can be identified (320). Areas of low confidence indicate areas in the reference viewpoint where the depth measurement may be inaccurate or the pixels may otherwise not photometrically correspond (for example due to defects in the reference image), leading to potential inefficiencies in compression and/or performance. Low confidence can be determined in a variety of ways, such as identifying areas having a parallax cost function exceeding a threshold value. For example, if the parallax cost function indicates a low cost (e.g. low mismatch), this indicates that the focal planes agree on a particular depth and the pixels appear to correspond. Similarly, a high cost indicates that not all focal planes agree with respect to the depth, and therefore the computed depth is unlikely to correctly represent the locations of objects within the captured light field. However, any of a variety of techniques for identifying areas of low confidence can be utilized as appropriate to the requirements of specific embodiments of the invention, such as those disclosed in U.S. patent application Ser. No. 13/972,881, incorporated by reference above. In many embodiments, these potential inefficiencies are disregarded and no additional action is taken with respect to the identified (320) areas of low confidence. In several embodiments, potential inefficiencies are disregarded by simply encoding the pixels from the alternate view images rather than computing the prediction error. In a number of embodiments, if an area of low confidence (e.g. correspondence mismatch) is identified (320), one or more additional reference images are selected and supplemental prediction images (or portions of supplemental prediction image(s)) are computed (322) from the additional reference images. In several embodiments, additional reference images or portions of additional reference images are utilized when objects are detected in the array of images in areas where the computed (318) prediction error data would be large using a single reference image (for example, in an occlusion zone). A large prediction error can be anticipated when objects in a captured light field are close to the imager array, although any situation in which a large prediction error is anticipated can be the basis for selecting additional reference images in accordance with embodiments of the invention.
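
A minimal Python sketch of threshold-based low-confidence identification, and of falling back to direct encoding of alternate-view pixels, is shown below. The cost-threshold test and the names identify_low_confidence and select_encoding are assumptions for illustration and are not the confidence measures disclosed in U.S. patent application Ser. No. 13/972,881.

import numpy as np

def identify_low_confidence(parallax_cost, cost_threshold):
    # Flag pixels whose parallax cost (photometric mismatch at the selected
    # depth) exceeds a threshold; these are candidates for supplemental
    # reference images or for direct encoding of the alternate-view pixels.
    return parallax_cost > cost_threshold

def select_encoding(prediction_error, low_confidence, alternate_view):
    # Where confidence is low, encode the alternate-view pixel value directly;
    # elsewhere, encode the signed prediction error. The mask itself must be
    # signaled to (or re-derivable by) the decoder.
    payload = np.where(low_confidence, alternate_view.astype(int), prediction_error)
    return payload, low_confidence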


Turning now to FIG. 4B, a conceptual illustration of a two-dimensional array of images with two reference images as utilized in determining supplemental pixel correspondences in accordance with an embodiment of the invention is shown. The 4×4 array of images 450 includes a reference image 460, a secondary reference image 466, a plurality of green alternate view images 462, a plurality of red alternate view images 461, a plurality of blue alternate view images 463, a plurality of prediction dependencies 464 extending from the reference image 460, and a plurality of secondary prediction dependencies 470 extending from the secondary reference image 466. A baseline 468 extends from the primary reference image 460 to the secondary reference image 466. Primary prediction images are computed (316) for those alternate view images associated with the reference image 460 by the prediction dependencies 464 utilizing processes similar to those described above. Likewise, supplemental prediction images are computed (322) along secondary epipolar lines from the secondary reference image 466 for those alternate view images associated with the secondary reference image 466 via the secondary prediction dependencies 470. In a number of embodiments, the pixels in the computed (322) supplemental prediction images can be mapped to the pixels in the reference image 460 using the baseline 468.


In several embodiments, the alternate view images that are utilized in performing (314) parallax detection are clustered around the respective reference image that is utilized in performing (314) parallax detection. In a variety of embodiments, the images are clustered in a way to reduce the disparity and/or improve pixel correspondence between the clustered images, thereby reducing the number of pixels from the alternate view images that are occluded from the viewpoints of both the reference image and the secondary reference image. In many embodiments, the alternate view images are clustered to the primary reference image 460, the secondary reference image 466, and/or together based on the color associated with the images. For example, if reference image 460 and secondary reference image 466 are green, only the green alternate view images 462 are associated with the reference image 460 and/or the secondary reference image 466. Likewise, the red alternate view images 461 (or the blue alternate view images 463) are associated with each other for the purposes of computing (322) supplemental prediction images and/or performing (314) parallax detection.


In many embodiments, particularly those embodiments employing lossless compression techniques, the secondary reference image 466 is predicted using the reference image 460 and the baseline 468 that describes the distance between the optical centers of the reference image and the secondary reference image. In several embodiments, the secondary reference image 466 is selected to reduce the size of the occlusion zones (and thus improve the predictability of pixels); that is, parallax detection is performed and error data is determined as described above using the secondary reference image 466. In many embodiments, the secondary reference image 466 is associated with the same color channel as the reference image 460. A specific example of a two-dimensional array of images with two reference images that can be utilized to compute (322) supplemental prediction images is conceptually illustrated in FIG. 4B; however, any array of images and more than two reference images can be utilized in accordance with embodiments of the invention. For example, supplemental reference images can be computed per color channel. Taking the array illustrated in FIG. 4B, six reference images (one primary reference image for each of the red, blue, and green channels along with one secondary reference image for each of the red, blue, and green channels) can be utilized in the generation of prediction images and the associated prediction error data. Additionally, in a variety of embodiments, a subset of the pixels within the reference image and/or supplemental reference image (e.g. a region or a sub-portion) can be utilized in the calculation of prediction error data utilizing processes similar to those described above.


In a variety of embodiments, particularly those utilizing lossy compression techniques, a variety of coding techniques can be utilized to account for the effects of lossy compression in the reference images when predicting an alternate view image. In several embodiments, before the prediction image and prediction error data for the alternate view image are formed, the reference image is compressed using a lossy compression algorithm. The compressed reference image is then decompressed to form a lossy reference image. The lossy reference image represents the reference image that the decoder will have in the initial stages of decoding. The lossy reference image is used along with the depth map to form a lossy predicted image for the alternate view image. The prediction error data for the alternate view image is then calculated by comparing the lossy predicted image with the alternate view image (e.g. by taking the signed difference of the two images). In this way, when lossy compression is used, the prediction error data takes into account the lossy nature of the encoding of the reference image.
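
The closed-loop prediction described above can be sketched as follows in Python. The uniform quantizer is only a stand-in for a real lossy codec, and form_prediction is a placeholder for the depth-based propagation described earlier, so the names quantize, lossy_prediction_error, and step are assumptions for illustration.

import numpy as np

def quantize(image, step=8):
    # Stand-in for a lossy codec: uniform quantization followed by reconstruction.
    return np.rint(np.asarray(image, dtype=float) / step) * step

def lossy_prediction_error(reference, alternate_view, form_prediction, step=8):
    # Compute prediction error against the decoder's view of the reference:
    # compress and decompress the reference, warp it to the alternate view,
    # and take the signed difference against the actual alternate view image.
    lossy_reference = quantize(reference, step)
    lossy_prediction = form_prediction(lossy_reference)
    error = np.asarray(alternate_view, dtype=float) - lossy_prediction
    return lossy_reference, error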


In a variety of embodiments, the alternate reference images are based on the reference image. In several embodiments, the reference image used to predict the viewpoint of the alternate reference image undergoes lossy compression, and lossy compression is also applied to the prediction error data for the alternate reference image. The compressed reference image is then decompressed to generate a lossy reference image, and a lossy predicted image for the alternate reference image is generated from the decompressed reference image. The compressed prediction error data is decompressed to form the lossy prediction error data, which is added to the lossy predicted image to form the lossy predicted alternate reference image. The prediction image and prediction error data for any subsequent alternate view image that depends on the alternate reference image are formed using the lossy predicted alternate reference image. In a number of embodiments, this forms the alternate view image that can be reconstructed utilizing lossy reconstruction techniques as described below. This process can be repeated for each alternate view image as necessary. In this way, prediction error data can be accurately computed (relative to the uncompressed light field data) using the lossy compressed image data.
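
Continuing the sketch above, the chained formation of a lossy alternate reference could be written as follows. Both depth_warp (a depth-based warp of the reference to the alternate reference viewpoint) and quantize (the lossy codec stand-in) are placeholders assumed for illustration.

import numpy as np

def form_lossy_alternate_reference(reference, alternate_reference, depth_warp, quantize):
    # Build the alternate reference exactly as the decoder will see it, so that
    # subsequent alternate view images can be predicted from the same data the
    # decoder holds.
    lossy_reference = quantize(reference)
    lossy_prediction = depth_warp(lossy_reference)
    error = np.asarray(alternate_reference, dtype=float) - lossy_prediction
    lossy_error = quantize(error)                     # lossy-coded prediction error
    lossy_alt_reference = lossy_prediction + lossy_error
    return lossy_alt_reference, lossy_error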


Returning now to FIG. 3, in a number of embodiments, the determined (312) reference image along with the computed (318) prediction error data and/or any computed (322) supplemental prediction images (if relevant) are compressed (324). In many embodiments, supplemental prediction error data based on the computed (322) supplemental prediction images is compressed (324). This compression can be lossless or lossy depending on the requirements of a particular embodiment of the invention. When the images are compressed (324), they can be reconstructed (either exactly or approximately depending on the compression (324) technique(s) utilized) by forming a predicted alternate view image from the reference image data and the depth map, and adding the decoded prediction error data to the predicted alternate view image(s) using an image decoder. Additionally, particularly in those embodiments utilizing lossy compression techniques, metadata describing the information lost in the compression of the reference image(s) and/or prediction error data can be stored in the compressed light field representation data. Alternatively, this information can be stored in the prediction error data. This information can be utilized in the decoding of the compressed light field representation data to accurately reconstruct the originally captured images by correcting for the information lost in the lossy compression process. Techniques for decoding losslessly compressed light field representation data in accordance with embodiments of the invention are described in more detail below. In a variety of embodiments, the compression (324) of the images depends on the computed (318) prediction error data. Compressed light field representation data is generated (326) using the determined (312) reference image(s) and computed (318) prediction error data along with the depth map generated during the performed (314) parallax detection. In those embodiments with multiple reference images, the secondary reference images or portions of the secondary reference images can be included in the compressed light field representation data and/or the secondary reference images can be reconstructed using the computed (318) prediction error data with respect to the reference image. Additional metadata can be included in the generated (326) compressed light field representation data as appropriate to the requirements of a specific application in accordance with embodiments of the invention.


In a variety of embodiments, supplemental depth information is incorporated into the depth map and/or as metadata associated with the compressed light field representation data. In a number of embodiments, supplemental depth information is encoded with the additional reference viewpoint(s). In many embodiments, the depth information used for each reference viewpoint is calculated during the encoding process using sets of cameras that may be the same or may be different depending on the viewpoint. In many embodiments, depth for an alternate reference viewpoint is calculated for only sub-regions of the alternate reference viewpoint so that an entire depth map does not need to be encoded for each viewpoint. In many embodiments, a depth map for the alternate reference viewpoint is formed by propagating pixels from the depth map of a primary reference viewpoint. If there are holes in the depth map propagated to the alternate reference viewpoint, they can be filled by interpolating from nearby propagated pixels in the depth map or through direct detection from the alternate viewpoint. In many embodiments, the depth map for the alternate reference viewpoint can be formed by a combination of propagating depth values from another reference viewpoint, interpolating for missing depth values in the alternate reference viewpoint, and/or directly detecting regions of particular depth values in the alternate reference viewpoint. In this way, the depth map created by performing (314) parallax detection above can be augmented with depth information generated from alternate reference images.
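
The propagation of a depth map to an alternate reference viewpoint, with a simple hole fill, can be sketched in Python as follows. The horizontal disparity model and the nearest-populated-value fill are assumptions for illustration (propagate_depth and fill_depth_holes are hypothetical names), not a description of the depth propagation used in any particular embodiment.

import numpy as np

def propagate_depth(depth, baseline, focal_length):
    # Propagate a primary-viewpoint depth map to an alternate reference
    # viewpoint using a simple horizontal disparity model; unfilled locations
    # are marked with NaN and treated as holes.
    h, w = depth.shape
    propagated = np.full((h, w), np.nan)
    ys, xs = np.mgrid[0:h, 0:w]
    target_x = np.rint(xs + focal_length * baseline / np.maximum(depth, 1e-6)).astype(int)
    valid = (target_x >= 0) & (target_x < w)
    propagated[ys[valid], target_x[valid]] = depth[valid]
    return propagated

def fill_depth_holes(propagated):
    # Fill holes with the nearest populated depth value along each row (a
    # simple stand-in for interpolation or direct detection).
    out = propagated.copy()
    for row in out:
        populated = np.where(~np.isnan(row))[0]
        if populated.size == 0:
            continue
        nearest = populated[np.abs(np.arange(row.size)[:, None] - populated[None, :]).argmin(axis=1)]
        row[:] = row[nearest]
    return out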


A specific process for generating compressed light field representation data in accordance with an embodiment of the invention is described above with respect to FIG. 3; however, a variety of processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention. In particular, the above processes can be performed using all or a subset of the images in the obtained (310) array of images.


Decoding Compressed Light Field Representation Data


As described above, compressed light field representation data can be utilized to efficiently store captured light fields. However, in order to utilize the compressed light field representation data to perform additional processes on the captured light field (such as parallax processes), the compressed light field representation data needs to be decoded to retrieve the original captured light field (or an approximation of it). A process for decoding compressed light field representation data is conceptually illustrated in FIG. 6. The process 600 includes obtaining (610) compressed light field representation data. In many embodiments, the compressed light field representation data is decompressed (611). A reference image is determined (612) and alternate view images are formed (614). A determination is made (616) as to whether alternative reference images are present. If alternative reference images are present, the process 600 repeats using (618) the alternative reference image(s). Once the alternate view images are reconstructed, the captured light field is reconstructed (620).


In a variety of embodiments, decompressing (611) the captured light field representation data includes decompressing the reference image, depth map, and/or prediction error data compressed utilizing the techniques described above. In several embodiments, the determined (612) reference image corresponds to an image from a viewpoint (e.g. a focal plane in an imager array) in the compressed light field representation data; however, it should be noted that reference images from virtual viewpoints (e.g. viewpoints that do not correspond to a focal plane in the imager array) can also be utilized as appropriate to the requirements of specific embodiments of the invention. In a number of embodiments, the alternate view images are formed (614) by computing prediction images using the determined (612) reference image and the depth map, then applying the prediction error data to the computed prediction images. However, any technique for forming (614) the alternate view images, including directly forming the alternate view images using the determined (612) reference image and the prediction error data, can be utilized as appropriate to the requirements of specific embodiments of the invention. Additionally, metadata describing the interpolation techniques utilized in the creation of the compressed light field representation data can be utilized in computing the prediction images. In this way, the decoding process yields prediction images from which correct (or approximately correct) alternate view images are formed (614) once the prediction error data is applied. This allows multiple interpolation techniques to be utilized in the encoding of compressed light field representation data; e.g. adaptive interpolation techniques can be utilized based on the requirements of specific embodiments of the invention. The captured light field is reconstructed (620) using the alternative view images and the reference image. In many embodiments, the reconstructed captured light field also includes the depth map, the prediction error data, and/or any other metadata included in the compressed light field representation data.
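
A minimal Python sketch of the decode path for one alternate view image is shown below. The callable form_prediction stands in for recreating the encoder's prediction image (same propagation and interpolation parameters, signaled as metadata), and the 8-bit clipping is an assumption for illustration.

import numpy as np

def decode_alternate_view(reference, depth_map, prediction_error, form_prediction):
    # Recreate the encoder's prediction image from the decoded reference and
    # depth map, then apply the decoded prediction error to recover the
    # alternate view image.
    prediction = form_prediction(reference, depth_map)
    reconstructed = np.rint(prediction).astype(int) + prediction_error
    return np.clip(reconstructed, 0, 255).astype(np.uint8)   # assumes 8-bit data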


In a variety of embodiments, multiple reference images (e.g. a primary reference image and one or more secondary reference images) exist within the compressed light field representation data. The alternate reference images can be directly included in the compressed light field representation data and/or formed utilizing techniques similar to those described above. Using (618) an alternative reference image further includes recursively (and/or iteratively) forming alternate view images from the viewpoint of each reference image utilizing the techniques described above. In this way, the alternative view images are mapped back to the viewpoint of the (primary) reference image, allowing the captured light field to be reconstructed (620).


In those embodiments utilizing lossy compression techniques, information critical to determining (612) the reference image and/or an alternative reference image can be lost. However, this loss can be compensated for by storing the lost information as metadata within the compressed light field representation data and/or as part of the prediction error data. Then, when determining (612) the reference image and/or the alternate reference image, the metadata and/or prediction error data can be applied to the compressed image in order to reconstruct the original, uncompressed image. Using the uncompressed reference image, the decoding of the compressed light field representation data can proceed utilizing techniques similar to those described above to reconstruct (620) the captured light field. In a variety of embodiments, predicting the alternative reference image from the reference image, if lossy compression is used, includes reconstructing the alternate reference image by coding and decoding (losslessly) the prediction error data and then adding the decoded prediction error to the reference image. In this way, the original alternate reference image can be reconstructed and used to predict specific alternate views as described above.


A specific process for decoding compressed light field representation data in accordance with an embodiment of the invention is described above with respect to FIG. 6; however, a variety of processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention.


Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention can be practiced otherwise than specifically described without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims
  • 1. An array camera, comprising: a processor; anda memory connected to the processor and configured to store an image processing application;wherein the image processing application configures the processor to: obtain image data, wherein: the image data comprises a set of images comprising a reference image and at least one alternate view image; andeach image in the set of images comprises a set of pixels;generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image by performing a parallax detection process to generate the depth map, where the parallax detection process identifies variations in the position of objects within the image data along epipolar lines between the reference image and the at least one alternate view image;determine at least one prediction image based on the reference image and the depth map, where the prediction images correspond to at least one alternate view image;compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image; andgenerate compressed light field representation data based on the reference image, the prediction error data, and the depth map.
  • 2. The array camera of claim 1, further comprising an array camera module comprising an imager array having multiple focal planes and an optics array configured to form images through separate apertures on each of the focal planes; wherein the array camera module is configured to communicate with the processor; andwherein the obtained image data comprises images captured by the imager array.
  • 3. The array camera of claim 2, wherein the reference image corresponds to an image captured using one of the focal planes within the image array.
  • 4. The array camera of claim 3, wherein the at least one alternate view image corresponds to the image data captured using the focal planes within the image array separate from the focal planes associated with the reference image.
  • 5. The array camera of claim 2, wherein the reference image corresponds to a virtual image formed based on the images in the array.
  • 6. The array camera of claim 1, wherein the image processing application further configures the processor to compress the generated compressed light field representation data.
  • 7. The array camera of claim 6, wherein the generated compressed light field representation data is compressed using JPEG-DX.
  • 8. A method for generating compressed light field representation data, comprising: obtaining image data using an array camera, where the image data comprises a set of images comprising a reference image and at least one alternate view image and the images in the set of images comprise a set of pixels;identifying at least one pixel in the at least one alternative view image corresponding to a reference pixel in the reference image using the array camera;determining fractional pixel locations within the identified at least one pixel using the array camera, where a fractional pixel location maps to a plurality of pixels in at least one alternative view image;mapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location using the array camera:generating a depth map based on the image data using the array camera, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image based on the alternate view images;determining a set of prediction images based on the reference image and the depth map using the array camera, where a prediction image in the set of prediction images is a representation of a corresponding alternate view image in the at least one alternate view image;computing prediction error data by calculating the difference between a prediction image in the set of prediction images and the corresponding alternate view image that describes the difference in photometric information between a pixel in the reference image and a pixel in an alternate view image using the array camera; andgenerating compressed light field representation data based on the reference image, the prediction error data, and the depth map using the array camera.
  • 9. The method of claim 8, wherein the reference image is a virtual image interpolated from a virtual viewpoint within the image data.
  • 10. The method of claim 8, further comprising identifying areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map using the array camera, where an area of low confidence indicate areas where the information stored in a determined prediction image indicate areas in the reference viewpoint where the pixels in the determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image.
  • 11. The method of claim 10, further comprising: identifying at least one additional reference image within the image data using the array camera, where the at least one additional reference image is separate from the reference image;determining at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map using the array camera; andcomputing the supplemental prediction error data based on the at least one alternate additional reference image and the at least one supplemental prediction image using the array camera, where the generated compressed light field representation data further comprises the supplemental prediction error data.
  • 12. The method of claim 11, wherein identifying the at least one additional reference image comprises: generating an initial additional reference image based on the reference image and the depth map using the array camera, where the initial additional reference image comprises pixels projected from the viewpoint of the reference image based on the depth map; andforming the additional reference image based on the initial additional reference image and the prediction error data using the array camera, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.
  • 13. An array camera, comprising: a processor; anda memory connected to the processor and configured to store an image processing application;wherein the image processing application configures the processor to: obtain image data, wherein: the image data comprises a set of images comprising a reference image and at least one alternate view image; andeach image in the set of images comprises a set of pixels;generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image;determine at least one prediction image based on the reference image and the depth map, where the prediction images correspond to at least one alternate view image;compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image, by: identifying at least one pixel in the at least one alternative view image corresponding to a reference pixel in the reference image;determining fractional pixel locations within the identified at least one pixel, where a fractional pixel location maps to a plurality of pixels in at least one alternative view image; andmapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location;generate compressed light field representation data based on the reference image, the prediction error data, and the depth map.
  • 14. The array camera of claim 13, wherein the mapping of fractional pixel locations is determined as the nearest neighbor pixel within the alternative view image.
  • 15. The array camera of claim 13, wherein the image processing application configures the processor to map the fractional pixel locations based on the depth map, where the pixel in the alternate view image is likely to be similar based on its proximity to the corresponding pixel location determined using the depth map of the reference image.
  • 16. An array camera, comprising: a processor; anda memory connected to the processor and configured to store an image processing application;wherein the image processing application configures the processor to: obtain image data, wherein: the image data comprises a set of images comprising a reference image and at least one alternate view image; andeach image in the set of images comprises a set of pixels;generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image;determine at least one prediction image based on the reference image and the depth map, where the prediction images correspond to at least one alternate view image;compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image;identify areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map, where an area of low confidence indicate areas where the information stored in a determined prediction image indicate areas in the reference viewpoint where the pixels in the determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image; andgenerate compressed light field representation data based on the reference image, the prediction error data, and the depth map.
  • 17. The array camera of claim 16, wherein the image processing application further configures the processor to disregard identified areas of low confidence.
  • 18. The array camera of claim 16, wherein: the image processing application further configures the processor to: identify at least one additional reference image within the image data, where the at least one additional reference image is separate from the reference image;determine at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map; andcompute the supplemental prediction error data based on the at least one alternate additional reference image and the at least one supplemental prediction image; andthe generated compressed light field representation data further comprises the supplemental prediction error data.
  • 19. The array camera of claim 18, wherein the generated compressed light field representation data further comprises the at least one additional reference image.
  • 20. The array camera of claim 18, wherein the image processing application configures the processor to identify the at least one additional reference image by: generating an initial additional reference image based on the reference image and the depth map, where the initial additional reference image comprises pixels projected from the viewpoint of the reference image based on the depth map; andforming the additional reference image based on the initial additional reference image and the prediction error data, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.
CROSS-REFERENCE TO RELATED APPLICATIONS

The current application claims priority to U.S. Provisional Patent Application Ser. No. 61/767,520, filed Feb. 21, 2013, and to U.S. Provisional Patent Application Ser. No. 61/786,976, filed Mar. 15, 2013, the disclosures of which are hereby incorporated by reference in their entirety.

US Referenced Citations (486)
Number Name Date Kind
4124798 Thompson Nov 1978 A
4198646 Alexander et al. Apr 1980 A
4323925 Abell et al. Apr 1982 A
4460449 Montalbano Jul 1984 A
5005083 Grage Apr 1991 A
5327125 Iwase et al. Jul 1994 A
5808350 Jack et al. Sep 1998 A
5832312 Rieger et al. Nov 1998 A
5880691 Fossum et al. Mar 1999 A
5911008 Hamada et al. Jun 1999 A
5933190 Dierickx et al. Aug 1999 A
5973844 Burger Oct 1999 A
6002743 Telymonde Dec 1999 A
6005607 Uomori et al. Dec 1999 A
6034690 Gallery et al. Mar 2000 A
6069365 Chow et al. May 2000 A
6124974 Burger Sep 2000 A
6137100 Fossum et al. Oct 2000 A
6137535 Meyers Oct 2000 A
6141048 Meyers Oct 2000 A
6163414 Kikuchi et al. Dec 2000 A
6175379 Uomori et al. Jan 2001 B1
6239909 Hayashi et al. May 2001 B1
6358862 Ireland et al. Mar 2002 B1
6443579 Myers et al. Sep 2002 B1
6477260 Shimomura Nov 2002 B1
6525302 Dowski, Jr. et al. Feb 2003 B2
6563537 Kawamura et al. May 2003 B1
6603513 Berezin Aug 2003 B1
6611289 Yu Aug 2003 B1
6628330 Lin Sep 2003 B1
6635941 Suda Oct 2003 B2
6639596 Shum et al. Oct 2003 B1
6647142 Beardsley Nov 2003 B1
6657218 Noda Dec 2003 B2
6671399 Berestov Dec 2003 B1
6765617 Tangen et al. Jul 2004 B1
6771833 Edgar Aug 2004 B1
6774941 Boisvert et al. Aug 2004 B1
6795253 Shinohara Sep 2004 B2
6819328 Moriwaki et al. Nov 2004 B1
6819358 Kagle et al. Nov 2004 B1
6879735 Portniaguine et al. Apr 2005 B1
6903770 Kobayashi et al. Jun 2005 B1
6909121 Nishikawa Jun 2005 B2
6927922 George et al. Aug 2005 B2
6958862 Joseph Oct 2005 B1
7085409 Sawhney et al. Aug 2006 B2
7161614 Yamashita et al. Jan 2007 B1
7199348 Olsen et al. Apr 2007 B2
7206449 Raskar et al. Apr 2007 B2
7262799 Suda Aug 2007 B2
7292735 Blake et al. Nov 2007 B2
7295697 Satoh Nov 2007 B1
7369165 Bosco et al. May 2008 B2
7391572 Jacobowitz et al. Jun 2008 B2
7408725 Sato Aug 2008 B2
7606484 Richards et al. Oct 2009 B1
7633511 Shum et al. Dec 2009 B2
7646549 Zalevsky et al. Jan 2010 B2
7657090 Omatsu et al. Feb 2010 B2
7675080 Boettiger Mar 2010 B2
7675681 Tomikawa et al. Mar 2010 B2
7706634 Schmitt et al. Apr 2010 B2
7723662 Levoy et al. May 2010 B2
7738013 Galambos et al. Jun 2010 B2
7782364 Smith Aug 2010 B2
7840067 Shen et al. Nov 2010 B2
7986018 Rennie Jul 2011 B2
7990447 Honda et al. Aug 2011 B2
8000498 Shih et al. Aug 2011 B2
8013904 Tan et al. Sep 2011 B2
8027531 Wilburn et al. Sep 2011 B2
8044994 Vetro et al. Oct 2011 B2
8077245 Adamo et al. Dec 2011 B2
8098297 Crisan et al. Jan 2012 B2
8106949 Tan et al. Jan 2012 B2
8126279 Marcellin et al. Feb 2012 B2
8130120 Kawabata et al. Mar 2012 B2
8131097 Lelescu et al. Mar 2012 B2
8169486 Corcoran et al. May 2012 B2
8180145 Wu et al. May 2012 B2
8189089 Georgiev et al. May 2012 B1
8212914 Chiu Jul 2012 B2
8213711 Tam et al. Jul 2012 B2
8231814 Duparre Jul 2012 B2
8242426 Ward et al. Aug 2012 B2
8244027 Takahashi Aug 2012 B2
8254668 Mashitani et al. Aug 2012 B2
8279325 Pitts et al. Oct 2012 B2
8280194 Wong et al. Oct 2012 B2
8289409 Chang Oct 2012 B2
8289440 Pitts et al. Oct 2012 B2
8305456 McMahon Nov 2012 B1
8345144 Georgiev et al. Jan 2013 B1
8360574 Ishak et al. Jan 2013 B2
8406562 Bassi et al. Mar 2013 B2
8446492 Nakano et al. May 2013 B2
8456517 Mor et al. Jun 2013 B2
8493496 Freedman et al. Jul 2013 B2
8514491 Duparre Aug 2013 B2
8542933 Venkataraman et al. Sep 2013 B2
8553093 Wong et al. Oct 2013 B2
8559756 Georgiev et al. Oct 2013 B2
8648918 Kauker et al. Feb 2014 B2
8655052 Spooner et al. Feb 2014 B2
8682107 Yoon et al. Mar 2014 B2
8687087 Pertsel et al. Apr 2014 B2
8692893 McMahon Apr 2014 B2
8780113 Ciurea et al. Jul 2014 B1
8804255 Duparre Aug 2014 B2
8830375 Ludwig Sep 2014 B2
8831367 Venkataraman et al. Sep 2014 B2
8854462 Herbin et al. Oct 2014 B2
8866920 Venkataraman et al. Oct 2014 B2
8878950 Lelescu et al. Nov 2014 B2
8885059 Venkataraman et al. Nov 2014 B1
8896594 Xiong et al. Nov 2014 B2
8896719 Venkataraman et al. Nov 2014 B1
8902321 Venkataraman et al. Dec 2014 B2
9019426 Han et al. Apr 2015 B2
9025894 Venkataraman et al. May 2015 B2
9025895 Venkataraman et al. May 2015 B2
9030528 Pesach et al. May 2015 B2
9031335 Venkataraman et al. May 2015 B2
9031342 Venkataraman et al. May 2015 B2
9031343 Venkataraman et al. May 2015 B2
9036928 Venkataraman et al. May 2015 B2
9036931 Venkataraman et al. May 2015 B2
9041823 Venkataraman et al. May 2015 B2
9041824 Lelescu et al. May 2015 B2
9041829 Venkataraman et al. May 2015 B2
9042667 Venkataraman et al. May 2015 B2
9055233 Venkataraman et al. Jun 2015 B2
9060124 Venkataraman et al. Jun 2015 B2
9077893 Venkataraman et al. Jul 2015 B2
9094661 Venkataraman et al. Jul 2015 B2
9123117 Ciurea et al. Sep 2015 B2
9123118 Ciurea et al. Sep 2015 B2
9124815 Venkataraman et al. Sep 2015 B2
9129183 Venkataraman et al. Sep 2015 B2
9129377 Ciurea et al. Sep 2015 B2
9143711 McMahon Sep 2015 B2
9147254 Ciurea et al. Sep 2015 B2
9188765 Venkataraman et al. Nov 2015 B2
9191580 Venkataraman et al. Nov 2015 B2
9235898 Venkataraman et al. Jan 2016 B2
9235900 Ciurea et al. Jan 2016 B2
9240049 Ciurea et al. Jan 2016 B2
20010005225 Clark et al. Jun 2001 A1
20010019621 Hanna et al. Sep 2001 A1
20020012056 Trevino Jan 2002 A1
20020027608 Johnson Mar 2002 A1
20020039438 Mori et al. Apr 2002 A1
20020057845 Fossum May 2002 A1
20020063807 Margulis May 2002 A1
20020087403 Meyers et al. Jul 2002 A1
20020089596 Suda Jul 2002 A1
20020094027 Sato et al. Jul 2002 A1
20020113867 Takigawa et al. Aug 2002 A1
20020113888 Sonoda et al. Aug 2002 A1
20020163054 Suda Nov 2002 A1
20020167537 Trajkovic Nov 2002 A1
20020177054 Saitoh et al. Nov 2002 A1
20030086079 Barth et al. May 2003 A1
20030124763 Fan et al. Jul 2003 A1
20030179418 Wengender et al. Sep 2003 A1
20030190072 Adkins et al. Oct 2003 A1
20030211405 Venkataraman Nov 2003 A1
20040008271 Hagimori et al. Jan 2004 A1
20040012689 Tinnerino Jan 2004 A1
20040027358 Nakao Feb 2004 A1
20040047274 Amanai Mar 2004 A1
20040050104 Ghosh et al. Mar 2004 A1
20040056966 Schechner et al. Mar 2004 A1
20040066454 Otani et al. Apr 2004 A1
20040096119 Williams May 2004 A1
20040100570 Shizukuishi May 2004 A1
20040114807 Lelescu et al. Jun 2004 A1
20040151401 Sawhney et al. Aug 2004 A1
20040165090 Ning Aug 2004 A1
20040169617 Yelton et al. Sep 2004 A1
20040170340 Tipping et al. Sep 2004 A1
20040174439 Upton Sep 2004 A1
20040207836 Chhibber et al. Oct 2004 A1
20040213449 Safaee-Rad et al. Oct 2004 A1
20040234873 Venkataraman Nov 2004 A1
20040240052 Minefuji et al. Dec 2004 A1
20040251509 Choi Dec 2004 A1
20040264806 Herley Dec 2004 A1
20050006477 Patel Jan 2005 A1
20050007461 Chou et al. Jan 2005 A1
20050012035 Miller Jan 2005 A1
20050036778 DeMonte Feb 2005 A1
20050047678 Jones et al. Mar 2005 A1
20050048690 Yamamoto Mar 2005 A1
20050068436 Fraenkel et al. Mar 2005 A1
20050132098 Sonoda et al. Jun 2005 A1
20050134698 Schroeder Jun 2005 A1
20050134712 Gruhlke et al. Jun 2005 A1
20050147277 Higaki et al. Jul 2005 A1
20050151759 Gonzalez-Banos et al. Jul 2005 A1
20050175257 Kuroki Aug 2005 A1
20050185711 Pfister et al. Aug 2005 A1
20050205785 Hornback et al. Sep 2005 A1
20050219363 Kohler et al. Oct 2005 A1
20050224843 Boemler Oct 2005 A1
20050225654 Feldman et al. Oct 2005 A1
20050286612 Takanashi Dec 2005 A1
20060002635 Nestares et al. Jan 2006 A1
20060007331 Izumi et al. Jan 2006 A1
20060023197 Joel Feb 2006 A1
20060023314 Boettiger et al. Feb 2006 A1
20060034003 Zalevsky Feb 2006 A1
20060034531 Poon Feb 2006 A1
20060038891 Okutomi et al. Feb 2006 A1
20060039611 Rother et al. Feb 2006 A1
20060049930 Zruya et al. Mar 2006 A1
20060054782 Olsen et al. Mar 2006 A1
20060069478 Iwama Mar 2006 A1
20060072029 Miyatake et al. Apr 2006 A1
20060087747 Ohzawa et al. Apr 2006 A1
20060098888 Morishita May 2006 A1
20060125936 Gruhike et al. Jun 2006 A1
20060138322 Costello et al. Jun 2006 A1
20060152803 Provitola Jul 2006 A1
20060157640 Perlman et al. Jul 2006 A1
20060159369 Young Jul 2006 A1
20060176566 Boettiger et al. Aug 2006 A1
20060187338 May et al. Aug 2006 A1
20060197937 Bamji et al. Sep 2006 A1
20060210186 Berkner Sep 2006 A1
20060214085 Olsen Sep 2006 A1
20060239549 Kelly et al. Oct 2006 A1
20060243889 Farnworth et al. Nov 2006 A1
20060251410 Trutna Nov 2006 A1
20060278948 Yamaguchi et al. Dec 2006 A1
20060279648 Senba et al. Dec 2006 A1
20070002159 Olsen et al. Jan 2007 A1
20070024614 Tam Feb 2007 A1
20070036427 Nakamura et al. Feb 2007 A1
20070040828 Zalevsky et al. Feb 2007 A1
20070040922 McKee et al. Feb 2007 A1
20070041391 Lin et al. Feb 2007 A1
20070052825 Cho Mar 2007 A1
20070083114 Yang et al. Apr 2007 A1
20070085917 Kobayashi Apr 2007 A1
20070102622 Olsen et al. May 2007 A1
20070126898 Feldman Jun 2007 A1
20070127831 Venkataraman Jun 2007 A1
20070139333 Sato et al. Jun 2007 A1
20070146511 Kinoshita et al. Jun 2007 A1
20070158427 Zhu et al. Jul 2007 A1
20070160310 Tanida et al. Jul 2007 A1
20070165931 Higaki Jul 2007 A1
20070171290 Kroger Jul 2007 A1
20070206241 Smith et al. Sep 2007 A1
20070211164 Olsen et al. Sep 2007 A1
20070216765 Wong et al. Sep 2007 A1
20070257184 Olsen et al. Nov 2007 A1
20070258006 Olsen et al. Nov 2007 A1
20070258706 Raskar et al. Nov 2007 A1
20070268374 Robinson Nov 2007 A1
20070296832 Ota et al. Dec 2007 A1
20070296835 Olsen et al. Dec 2007 A1
20080019611 Larkin Jan 2008 A1
20080025649 Liu et al. Jan 2008 A1
20080030597 Olsen et al. Feb 2008 A1
20080043095 Vetro et al. Feb 2008 A1
20080043096 Vetro et al. Feb 2008 A1
20080062164 Bassi et al. Mar 2008 A1
20080079805 Takagi et al. Apr 2008 A1
20080080028 Bakin et al. Apr 2008 A1
20080084486 Enge et al. Apr 2008 A1
20080088793 Sverdrup et al. Apr 2008 A1
20080112635 Kondo et al. May 2008 A1
20080118241 Tekolste et al. May 2008 A1
20080131019 Ng Jun 2008 A1
20080151097 Chen et al. Jun 2008 A1
20080152296 Oh et al. Jun 2008 A1
20080156991 Hu et al. Jul 2008 A1
20080158259 Kempf et al. Jul 2008 A1
20080158375 Kakkori et al. Jul 2008 A1
20080187305 Raskar et al. Aug 2008 A1
20080218610 Chapman et al. Sep 2008 A1
20080219654 Border et al. Sep 2008 A1
20080239116 Smith Oct 2008 A1
20080240598 Hasegawa Oct 2008 A1
20080247638 Tanida et al. Oct 2008 A1
20080247653 Moussavi et al. Oct 2008 A1
20080272416 Yun Nov 2008 A1
20080273751 Yuan et al. Nov 2008 A1
20080278591 Barna et al. Nov 2008 A1
20080310501 Ward et al. Dec 2008 A1
20090050946 Duparre et al. Feb 2009 A1
20090052743 Techmer Feb 2009 A1
20090060281 Tanida et al. Mar 2009 A1
20090086074 Li et al. Apr 2009 A1
20090096050 Park Apr 2009 A1
20090102956 Georgiev Apr 2009 A1
20090109306 Shan et al. Apr 2009 A1
20090128833 Yahav May 2009 A1
20090152664 Klem et al. Jun 2009 A1
20090179142 Duparre et al. Jul 2009 A1
20090180021 Kikuchi et al. Jul 2009 A1
20090200622 Tai et al. Aug 2009 A1
20090201371 Matsuda et al. Aug 2009 A1
20090207235 Francini et al. Aug 2009 A1
20090225203 Tanida et al. Sep 2009 A1
20090245573 Saptharishi et al. Oct 2009 A1
20090256947 Ciurea et al. Oct 2009 A1
20090263017 Tanbakuchi Oct 2009 A1
20090268192 Koenck et al. Oct 2009 A1
20090268970 Babacan et al. Oct 2009 A1
20090274387 Jin Nov 2009 A1
20090284651 Srinivasan Nov 2009 A1
20090297056 Lelescu et al. Dec 2009 A1
20090302205 Olsen et al. Dec 2009 A9
20090323195 Hembree et al. Dec 2009 A1
20090323206 Oliver et al. Dec 2009 A1
20090324118 Maslov et al. Dec 2009 A1
20100002126 Wenstrand et al. Jan 2010 A1
20100002313 Duparre et al. Jan 2010 A1
20100002314 Duparre Jan 2010 A1
20100013927 Nixon Jan 2010 A1
20100053342 Hwang et al. Mar 2010 A1
20100053600 Tanida et al. Mar 2010 A1
20100060746 Olsen et al. Mar 2010 A9
20100074532 Gordon et al. Mar 2010 A1
20100085425 Tan Apr 2010 A1
20100086227 Sun et al. Apr 2010 A1
20100097491 Farina et al. Apr 2010 A1
20100103259 Tanida et al. Apr 2010 A1
20100103308 Butterfield et al. Apr 2010 A1
20100118127 Nam et al. May 2010 A1
20100133418 Sargent et al. Jun 2010 A1
20100142839 Lakus-Becker Jun 2010 A1
20100157073 Kondo et al. Jun 2010 A1
20100165152 Lim Jul 2010 A1
20100177411 Hegde et al. Jul 2010 A1
20100195716 Klein et al. Aug 2010 A1
20100201834 Maruyama et al. Aug 2010 A1
20100208100 Olsen et al. Aug 2010 A9
20100220212 Perlman et al. Sep 2010 A1
20100223237 Mishra et al. Sep 2010 A1
20100231285 Boomer et al. Sep 2010 A1
20100244165 Lake et al. Sep 2010 A1
20100265385 Knight et al. Oct 2010 A1
20100281070 Chan et al. Nov 2010 A1
20100302423 Adams, Jr. et al. Dec 2010 A1
20110019243 Constant, Jr. et al. Jan 2011 A1
20110032370 Ludwig Feb 2011 A1
20110043661 Podoleanu Feb 2011 A1
20110043665 Ogasahara Feb 2011 A1
20110043668 McKinnon et al. Feb 2011 A1
20110069189 Venkataraman et al. Mar 2011 A1
20110080487 Venkataraman et al. Apr 2011 A1
20110108708 Olsen et al. May 2011 A1
20110121421 Charbon et al. May 2011 A1
20110122308 Duparre May 2011 A1
20110128412 Milnes et al. Jun 2011 A1
20110153248 Gu et al. Jun 2011 A1
20110206291 Kashani Aug 2011 A1
20110211824 Georgiev et al. Sep 2011 A1
20110221599 Högasten Sep 2011 A1
20110221658 Haddick et al. Sep 2011 A1
20110234841 Akeley et al. Sep 2011 A1
20110241234 Duparre Oct 2011 A1
20110255745 Hodder et al. Oct 2011 A1
20110261993 Weiming et al. Oct 2011 A1
20110273531 Ito et al. Nov 2011 A1
20110274366 Tardif Nov 2011 A1
20110279721 McMahon Nov 2011 A1
20110285866 Bhrugumalla et al. Nov 2011 A1
20110285910 Bamji et al. Nov 2011 A1
20110300929 Tardif et al. Dec 2011 A1
20110310980 Mathew Dec 2011 A1
20110317766 Lim, II et al. Dec 2011 A1
20120012748 Pain et al. Jan 2012 A1
20120023456 Sun et al. Jan 2012 A1
20120026342 Yu et al. Feb 2012 A1
20120039525 Tian et al. Feb 2012 A1
20120044249 Mashitani et al. Feb 2012 A1
20120069235 Imai Mar 2012 A1
20120105691 Waqas et al. May 2012 A1
20120113413 Miahczylowicz-Wolski et al. May 2012 A1
20120147205 Lelescu et al. Jun 2012 A1
20120153153 Chang et al. Jun 2012 A1
20120155830 Sasaki Jun 2012 A1
20120176479 Mayhew et al. Jul 2012 A1
20120188420 Black et al. Jul 2012 A1
20120198677 Duparre Aug 2012 A1
20120200669 Lai Aug 2012 A1
20120200726 Bugnariu Aug 2012 A1
20120200734 Tang Aug 2012 A1
20120219236 Ali et al. Aug 2012 A1
20120224083 Jovanovski et al. Sep 2012 A1
20120249550 Akeley et al. Oct 2012 A1
20120249836 Ali et al. Oct 2012 A1
20120262601 Choi Oct 2012 A1
20120262607 Shimura et al. Oct 2012 A1
20120268574 Gidon et al. Oct 2012 A1
20120287291 McMahon Nov 2012 A1
20120293695 Tanaka Nov 2012 A1
20120314033 Lee et al. Dec 2012 A1
20120327222 Ng et al. Dec 2012 A1
20130003184 Duparre Jan 2013 A1
20130010073 Do et al. Jan 2013 A1
20130016885 Tsujimoto et al. Jan 2013 A1
20130022111 Chen et al. Jan 2013 A1
20130027580 Olsen et al. Jan 2013 A1
20130033579 Wajs Feb 2013 A1
20130050504 Safaee-Rad et al. Feb 2013 A1
20130050526 Keelan Feb 2013 A1
20130070060 Chatterjee Mar 2013 A1
20130077880 Venkataraman et al. Mar 2013 A1
20130077882 Venkataraman et al. Mar 2013 A1
20130088489 Schmeitz et al. Apr 2013 A1
20130088637 Duparre Apr 2013 A1
20130121559 Hu May 2013 A1
20130128087 Georgiev et al. May 2013 A1
20130147979 McMahon et al. Jun 2013 A1
20130215108 McMahon et al. Aug 2013 A1
20130229540 Farina et al. Sep 2013 A1
20130230237 Schlosser et al. Sep 2013 A1
20130259317 Gaddy Oct 2013 A1
20130265459 Duparre et al. Oct 2013 A1
20130293760 Nisenzon et al. Nov 2013 A1
20140076336 Clayton et al. Mar 2014 A1
20140079336 Venkataraman et al. Mar 2014 A1
20140092281 Nisenzon et al. Apr 2014 A1
20140098267 Tian et al. Apr 2014 A1
20140118493 Sali et al. May 2014 A1
20140118584 Lee et al. May 2014 A1
20140132810 McMahon May 2014 A1
20140176592 Wilburn et al. Jun 2014 A1
20140218546 McMahon Aug 2014 A1
20140253738 Mullis Sep 2014 A1
20140267243 Venkataraman et al. Sep 2014 A1
20140267286 Duparre Sep 2014 A1
20140267633 Venkataraman et al. Sep 2014 A1
20140267890 Lelescu et al. Sep 2014 A1
20140285675 Mullis Sep 2014 A1
20140313315 Shoham et al. Oct 2014 A1
20140321712 Ciurea et al. Oct 2014 A1
20140333731 Venkataraman et al. Nov 2014 A1
20140333764 Venkataraman et al. Nov 2014 A1
20140333787 Venkataraman et al. Nov 2014 A1
20140340539 Venkataraman et al. Nov 2014 A1
20140347509 Venkataraman et al. Nov 2014 A1
20140354773 Venkataraman et al. Dec 2014 A1
20140354843 Venkataraman et al. Dec 2014 A1
20140354844 Venkataraman et al. Dec 2014 A1
20140354853 Venkataraman et al. Dec 2014 A1
20140354854 Venkataraman et al. Dec 2014 A1
20140354855 Venkataraman et al. Dec 2014 A1
20140355870 Venkataraman et al. Dec 2014 A1
20140368662 Venkataraman et al. Dec 2014 A1
20140368683 Venkataraman et al. Dec 2014 A1
20140368684 Venkataraman et al. Dec 2014 A1
20140368685 Venkataraman et al. Dec 2014 A1
20140369612 Venkataraman et al. Dec 2014 A1
20140369615 Venkataraman et al. Dec 2014 A1
20140376825 Venkataraman et al. Dec 2014 A1
20140376826 Venkataraman et al. Dec 2014 A1
20150003752 Venkataraman et al. Jan 2015 A1
20150003753 Venkataraman et al. Jan 2015 A1
20150009353 Venkataraman et al. Jan 2015 A1
20150009354 Venkataraman et al. Jan 2015 A1
20150009362 Venkataraman et al. Jan 2015 A1
20150015669 Venkataraman et al. Jan 2015 A1
20150036014 Lelescu et al. Feb 2015 A1
20150036015 Lelescu et al. Feb 2015 A1
20150042766 Ciurea et al. Feb 2015 A1
20150042767 Ciurea et al. Feb 2015 A1
20150042833 Lelescu et al. Feb 2015 A1
20150049915 Ciurea et al. Feb 2015 A1
20150049916 Ciurea et al. Feb 2015 A1
20150049917 Ciurea et al. Feb 2015 A1
20150055884 Venkataraman et al. Feb 2015 A1
20150091900 Yang et al. Apr 2015 A1
20150312455 Venkataraman et al. Oct 2015 A1
20160037097 Duparre Feb 2016 A1
20160044252 Molina Feb 2016 A1
20160044257 Venkataraman et al. Feb 2016 A1
20160057332 Ciurea et al. Feb 2016 A1
Foreign Referenced Citations (43)
Number Date Country
0677821 Oct 1995 EP
840502 May 1998 EP
2336816 Jun 2011 EP
09181913 Jul 1997 JP
2006033493 Feb 2006 JP
2007520107 Jul 2007 JP
2011109484 Jun 2011 JP
1020110097647 Aug 2011 KR
2007083579 Jul 2007 WO
2011063347 May 2011 WO
2011116203 Sep 2011 WO
2011063347 Oct 2011 WO
2011143501 Nov 2011 WO
2012057619 May 2012 WO
2012057620 May 2012 WO
2012057621 May 2012 WO
2012057622 May 2012 WO
2012057623 May 2012 WO
2012057620 Jun 2012 WO
2012074361 Jun 2012 WO
2012078126 Jun 2012 WO
2012082904 Jun 2012 WO
2013003276 Jan 2013 WO
2013043751 Mar 2013 WO
2013043761 Mar 2013 WO
2013049699 Apr 2013 WO
2013055960 Apr 2013 WO
2013119706 Aug 2013 WO
2013126578 Aug 2013 WO
2014078443 May 2014 WO
2014130849 Aug 2014 WO
2014138695 Sep 2014 WO
2014138697 Sep 2014 WO
2014144157 Sep 2014 WO
2014145856 Sep 2014 WO
2014150856 Sep 2014 WO
2014159721 Oct 2014 WO
2014159779 Oct 2014 WO
2014160142 Oct 2014 WO
2014164550 Oct 2014 WO
2014164909 Oct 2014 WO
2014165244 Oct 2014 WO
2015048694 Apr 2015 WO
Non-Patent Literature Citations (171)
International Search Report and Written Opinion for International Application PCT/US14/024903, report completed Jun. 12, 2014, Mailed Jun. 27, 2014, 13 pgs.
International Search Report and Written Opinion for International Application PCT/US14/024407, report completed Jun. 11, 2014, Mailed Jul. 8, 2014, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US14/025100, report completed Jul. 7, 2014, Mailed Aug. 7, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022123, report completed Jun. 9, 2014, Mailed Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024947, report completed Jul. 8, 2014, Mailed Aug. 5, 2014, 8 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/023762, report completed May 30, 2014, Mailed Jul. 3, 2014, 6 pgs.
International Search Report and Written Opinion for International Application PCT/US14/017766, completed May 28, 2014, Mailed Jun. 18, 2014, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US14/022118, report completed Jun. 9, 2014, Mailed Jun. 25, 2014, 5 pgs.
Chen et al., “Interactive deformation of light fields”, In Proceedings of SIGGRAPH I3D 2005, pp. 139-146.
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, pp. 3-12.
Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, pp. 43-54.
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, 30, 4, 2011, pp. 70:1-70:10.
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, Computational Photography (ICCP) 2010, pp. 1-8.
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D 2007, pp. 121-128.
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, pp. 297-306.
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80.
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10.
Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision 48, 2, pp. 115-129.
International Search Report and Written Opinion for International Application No. PCT/US13/46002, Search Completed Nov. 13, 2013, Mailed Nov. 29, 2013, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/48772, Search Completed Oct. 21, 2013, Mailed Nov. 8, 2013, 6 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/56065, Search Completed Nov. 25, 2013, Mailed Nov. 26, 2013, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/59991, Search Completed Feb. 6, 2014, Mailed Feb. 26, 2014, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/024987, Search Completed Mar. 27, 2013, Mailed Apr. 15, 2013, 14 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/056502, Search Completed Feb. 18, 2014, Mailed Mar. 19, 2014, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/069932, International Filing Date Nov. 13, 2013, Search Completed Mar. 14, 2014, Mailed Apr. 14, 2014, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US11/36349, mailed Aug. 22, 2011, 12 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2011/64921, Report Completed Feb. 25, 2011, mailed Mar. 6, 2012, 17 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/027146, completed Apr. 2, 2013, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US2009/044687, completed Jan. 5, 2010, 13 pgs.
International Search Report and Written Opinion for International Application PCT/US2010/057661, completed Mar. 9, 2011, 14 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/044014, completed Oct. 12, 2012, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/056151, completed Nov. 14, 2012, 10 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/059813, completed Dec. 17, 2012, 8 pgs.
International Search Report and Written Opinion for International Application PCT/US12/37670, Mailed Jul. 18, 2012, Search Completed Jul. 5, 2012, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/58093, completed Nov. 15, 2012, 12 pgs.
Office Action for U.S. Appl. No. 12/952,106, dated Aug. 16, 2012, 12 pgs.
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183.
Bertero et al., “Super-resolution in computational imaging”, Micron, 2003, vol. 34, Issues 6-7, 17 pgs.
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV 2010, Part II, LNCS 6493, pp. 186-200, 2011.
Bishop et al., “Light Field Superresolution”, Retrieved from http://home.eps.hw.ac.uk/˜sz73/ICCP09/LightFieldSuperresolution.pdf, 9 pgs.
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, pp. 972-986.
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs.
Borman et al, “Image Sequence Processing”, Source unknown, Oct. 14, 2002, 81 pgs.
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc. SPIE, Dec. 1998, 3653, 10 pgs.
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, Jun. 2003, 5016, 12 pgs.
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 2004, vol. 5299, 12 pgs.
Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, 1998, 3459, 9 pgs.
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473.
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378.
Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, date unknown, 21 pgs.
Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE-IS&T Electronic Imaging, vol. 7246, pp. 72460X-1-72460X-9.
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084.
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, 2010, 11 pgs.
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394.
Capel, “Image Mosaicing and Super-resolution”, [online], Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, Title pg., abstract, table of contents, pp. 1-263 (269 total pages).
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, 2006, vol. 3, pp. 623-626.
Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE, ICASSP 2006, pp. 1177-1180.
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim. Syst. Sign Process, 2007, vol. 18, pp. 83-101.
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, 8 pgs.
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 8 pgs.
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 2009, vol. 83, Issue 3, 8 pgs.
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310.
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, 2008, vol. 3, pp. 1-6.
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 2004, pp. 89-100.
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551.
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, 2006, vol. 1, pp. R1-R16.
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, pp. 59622A-1-59622A-12.
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs.
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903.
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Prod. SPIE 5301, Jan. 2004, pp. 25-33.
Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, 2006, vol. 6196, pp. 619607-1-619607-15.
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418.
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956.
Duparre et al., “Ultra-Thin Camera Based on Artificial Apposition Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs.
Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://www.site.uottawa.ca/˜edubois/theses/Fanaswala_thesis.pdf>, 163 pgs., Aug. 2009.
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, 2006, vol. 6069, 8 pgs.
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, 2004, vol. 14, pp. 47-57.
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, vol. 13, No. 10, pp. 1327-1344.
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, pp. 141-159.
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs.
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284.
Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7μm Pixels in 0.11μm CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50.
Fischer et al., Optical System Design, 2nd Edition, SPIE Press, pp. 191-198.
Fischer et al., Optical System Design, 2nd Edition, SPIE Press, pp. 49-58.
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs.
Hardie, “A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, vol. 16, No. 12, pp. 2953-2964.
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, 2010, vol. 3, pp. 022501-1-022501-3.
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, 2011, vol. 4, pp. 112501-1-112501-3.
Kang et al., “Handling Occlusions in Dense Multi-View Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110.
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727.
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831.
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Proc., CVPR 94, 8 pgs.
LensVector, “How LensVector Autofocus Works”, http://www.lensvector.com/overview.html.
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Aug. 2006, pp. 46-55.
Levoy et al., “Light Field Rendering”, Proc. ACM SIGGRAPH '96, pp. 1-12.
Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution”, Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/˜jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014.
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120.
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics.
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 2006, pp. 30-38.
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs.
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378.
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900.
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, 2007, 12 pgs.
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36.
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, 2008, pp. 1-19.
Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html.
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Jan. 2009, vol. 18, No. 1, pp. 36-51.
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077.
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552.
Rhemann et al, “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell, 2013, vol. 35, No. 2, pp. 504-511.
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs.
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228.
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2007, pp. 208-215.
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995, pp. 93-96.
Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162. Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5.
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759.
Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, Source and date unknown, 8 pgs.
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975.
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117.
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813.
Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer, vol. 77, No. 9, Sep., pp. 93-100.
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, Proceeding, CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—vol. 2, pp. 2331-2338.
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs.
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs.
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”.
Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, 10 pgs.
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426.
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, 2005, 5674, 12 pgs.
Wikipedia, “Polarizing Filter (Photography)”.
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs.
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 765-776.
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceeding, CVPR'04 Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 294-301.
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs.
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, 59622C-1-59622C-11.
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering, 2002, pp. 1-10.
Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Source and date unknown, 8 pgs.
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, 2004, 12 pgs.
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6.
US 8,957,977, 02/2015, Venkataraman et al. (withdrawn).
US 8,964,053, 02/2015, Venkataraman et al. (withdrawn).
US 8,965,053, 02/2015, Venkataraman et al. (withdrawn).
US 9,014,491, 02/2015, Venkataraman et al. (withdrawn).
Extended European Search Report for European Application EP12835041.0, Report Completed Jan. 28, 2015, Mailed Feb. 4, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US13/56065, Report Issued Feb. 24, 2015, Mailed Mar. 5, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/069932, issued May 19, 2015, Mailed May 28, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/017766, issued Aug. 25, 2015, Mailed Sep. 3, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022118, issued Sep. 8, 2015, Mailed Sep. 17, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022123, issued Sep. 8, 2015, Mailed Sep. 17, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/023762, issued Mar. 2, 2015, Mailed Mar. 9, 2015, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024407, issued Sep. 15, 2015, Mailed Sep. 24, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024903, issued Sep. 15, 2015, Mailed Sep. 24, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024947, issued Sep. 15, 2015, Mailed Sep. 24, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/025100, issued Sep. 15, 2015, Mailed Sep. 24, 2015, 4 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US2009/044687, completed Jan. 5, 2010, mailed Jan. 13, 2010, 9 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2015/019529, completed May 5, 2015, Mailed Jun. 8, 2015, 10 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/18084, completed May 23, 2014, Mailed Jun. 10, 2014, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US14/18116, completed May 13, 2014, Mailed Jun. 2, 2014, 12 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/028447, completed Jun. 30, 2014, Mailed Jul. 21, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/030692, completed Jul. 28, 2014, Mailed Aug. 27, 2014, 7 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/066229, Completed Mar. 6, 2015, Mailed Mar. 19, 2015, 9 Pgs.
Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188.
Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, ICCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8.
Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, May 2011, 8 pgs.
Levin et al., “A Closed Form Solution to Natural Image Matting”, Pattern Analysis and Machine Intelligence, Feb. 2008, vol. 30, 8 pgs.
Merkle et al., “Adaptation and optimization of coding algorithms for mobile 3DTV”, Mobile3DTV Project No. 216503, Nov. 2008, 55 pgs.
Mitra et al., “Light Field Denoising, Light Field Superresolution and Stereo Camera Based Refocussing using a GMM Light Field Patch Prior”, Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on Jun. 16-21, 2012, pp. 22-28.
Moreno-Noguer et al., “Active Refocusing of Images and Videos”, ACM SIGGRAPH, 2007, vol. 26, pp. 1-10, [retrieved on Jul. 8, 2015], Retrieved from the Internet <URL:http://doi.acm.org/10.1145/1276377.1276461>.
Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, printed from www.raytrix.de, Jan. 2012, 15 pgs.
Philips 3D Solutions, “3D Interface Specifications, White Paper”, Philips 3D Solutions retrieved from www.philips.com/3dsolutions, 29 pgs., Feb. 15, 2008.
Tallon et al., “Upsampling and Denoising of Depth Maps via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs.
Zhang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, Proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-770505-8, XP055113797 ISSN: 0277-786X, DOI: 10.1117/12.852171.
Related Publications (1)
Number Date Country
20140232822 A1 Aug 2014 US
Provisional Applications (2)
Number Date Country
61767520 Feb 2013 US
61786976 Mar 2013 US