Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information

Information

  • Patent Grant
  • Patent Number
    10,009,538
  • Date Filed
    Friday, May 19, 2017
  • Date Issued
    Tuesday, June 26, 2018
Abstract
Systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory connected to the processor and configured to store an image processing application, wherein the image processing application configures the processor to obtain image data, wherein the image data includes a set of images including a reference image and at least one alternate view image, generate a depth map based on the image data, determine at least one prediction image based on the reference image and the depth map, compute prediction error data based on the at least one prediction image and the at least one alternate view image, and generate compressed light field representation data based on the reference image, the prediction error data, and the depth map.
Description
FIELD OF THE INVENTION

The present invention relates to systems and methods for capturing light fields and more specifically to the efficient representation of captured light fields using compressed light field representation data.


BACKGROUND

Imaging devices, such as cameras, can be used to capture images of portions of the electromagnetic spectrum, such as the visible light spectrum, incident upon an image sensor. For ease of discussion, the term light is generically used to cover radiation across the entire electromagnetic spectrum. In a typical imaging device, light enters through an opening (aperture) at one end of the imaging device and is directed to an image sensor by one or more optical elements such as lenses. The image sensor includes pixels or sensor elements that generate signals upon receiving light via the optical element. Commonly used image sensors include charge-coupled device (CCD) sensors and complementary metal-oxide semiconductor (CMOS) sensors.


Image sensors are devices capable of converting an image into a digital signal. Image sensors utilized in digital cameras are typically made up of an array of pixels. Each pixel in an image sensor is capable of capturing light and converting the captured light into electrical signals. In order to separate the colors of light and capture a color image, a Bayer filter is often placed over the image sensor, filtering the incoming light into its red, blue, and green (RGB) components that are then captured by the image sensor. The RGB signal captured by the image sensor using a Bayer filter can then be processed and a color image can be created.


SUMMARY OF THE INVENTION

Systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are disclosed. In one embodiment, an array camera includes a processor and a memory connected to the processor and configured to store an image processing application, wherein the image processing application configures the processor to obtain image data, wherein the image data includes a set of images including a reference image and at least one alternate view image and each image in the set of images includes a set of pixels, generate a depth map based on the image data, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image, determine at least one prediction image based on the reference image and the depth map, where the at least one prediction image corresponds to the at least one alternate view image, compute prediction error data based on the at least one prediction image and the at least one alternate view image, where a portion of the prediction error data describes the difference in photometric information between a pixel in a prediction image and a pixel in the at least one alternate view image corresponding to the prediction image, and generate compressed light field representation data based on the reference image, the prediction error data, and the depth map.


In an additional embodiment of the invention, the array camera further includes an array camera module including an imager array having multiple focal planes and an optics array configured to form images through separate apertures on each of the focal planes, wherein the array camera module is configured to communicate with the processor and wherein the obtained image data includes images captured by the imager array.


In another embodiment of the invention, the reference image corresponds to an image captured using one of the focal planes within the imager array.


In yet another additional embodiment of the invention, the at least one alternate view image corresponds to the image data captured using the focal planes within the imager array separate from the focal planes associated with the reference image.


In still another additional embodiment of the invention, the reference image corresponds to a virtual image formed based on the images in the array.


In another embodiment of the invention, the depth map describes the geometrical linkage between the pixels in the reference image and the pixels in the other images in the image array.


In yet still another additional embodiment of the invention, the image processing application configures the processor to perform a parallax detection process to generate the depth map, where the parallax detection process identifies variations in the position of objects within the image data along epipolar lines between the reference image and the at least one alternate view image.


In yet another embodiment of the invention, the image processing application further configures the processor to compress the generated compressed light field representation data.


In still another embodiment of the invention, the generated compressed light field representation data is compressed using JPEG-DX.


In yet still another embodiment of the invention, the image processing application configures the processor to determine prediction error data by identifying at least one pixel in the at least one alternate view image corresponding to a reference pixel in the reference image, determining fractional pixel locations within the identified at least one pixel, where a fractional pixel location maps to a plurality of pixels in the at least one alternate view image, and mapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location.


In yet another additional embodiment of the invention, the mapping of fractional pixel locations is determined as the nearest neighbor pixel within the alternate view image.


In still another additional embodiment of the invention, the image processing application configures the processor to map the fractional pixel locations based on the depth map, where the pixel in the alternate view image is likely to be similar based on its proximity to the corresponding pixel location determined using the depth map of the reference image.


In yet still another additional embodiment of the invention, the image processing application further configures the processor to identify areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map, where an area of low confidence indicates an area in the reference viewpoint where the pixels in a determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image.


In another embodiment of the invention, the depth map further comprises a confidence map describing areas of low confidence within the depth map.


In yet another embodiment of the invention, the image processing application further configures the processor to disregard identified areas of low confidence.


In still another embodiment of the invention, the image processing application further configures the processor to identify at least one additional reference image within the image data, where the at least one additional reference image is separate from the reference image, determine at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map, and compute the supplemental prediction error data based on the at least one additional reference image and the at least one supplemental prediction image, and the generated compressed light field representation data further includes the supplemental prediction error data.


In yet still another embodiment of the invention, the generated compressed light field representation data further includes the at least one additional reference image.


In yet another additional embodiment of the invention, the image processing application configures the processor to identify the at least one additional reference image by generating an initial additional reference image based on the reference image and the depth map, where the initial additional reference image includes pixels projected from the viewpoint of the reference image based on the depth map and forming the additional reference image based on the initial additional reference image and the prediction error data, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.


In another embodiment of the invention, the prediction error data is decoded based on the reference image prior to the formation of the additional reference image.


Still another embodiment of the invention includes a method for generating compressed light field representation data including obtaining image data using an array camera, where the image data includes a set of images including a reference image and at least one alternate view image and the images in the set of images include a set of pixels, generating a depth map based on the image data using the array camera, where the depth map describes the distance from the viewpoint of the reference image with respect to objects imaged by pixels within the reference image based on the alternate view images, determining a set of prediction images based on the reference image and the depth map using the array camera, where a prediction image in the set of prediction images is a representation of a corresponding alternate view image in the at least one alternate view image, computing prediction error data by calculating the difference between a prediction image in the set of prediction images and the corresponding alternate view image that describes the difference in photometric information between a pixel in the reference image and a pixel in an alternate view image using the array camera, and generating compressed light field representation data based on the reference image, the prediction error data, and the depth map using the array camera.


In yet another additional embodiment of the invention, the reference image is a virtual image interpolated from a virtual viewpoint within the image data.


In still another additional embodiment of the invention, determining the set of predicted images further includes identifying at least one pixel in the at least one alternate view image corresponding to a reference pixel in the reference image using the array camera, determining fractional pixel locations within the identified at least one pixel using the array camera, where a fractional pixel location maps to a plurality of pixels in the at least one alternate view image, and mapping fractional pixel locations to a specific pixel location within the alternate view image having a determined fractional pixel location using the array camera.


In yet still another embodiment of the invention, the method further includes identifying areas of low confidence within the computed prediction images based on the at least one alternate view image, the reference image, and the depth map using the array camera, where an area of low confidence indicates an area in the reference viewpoint where the pixels in a determined prediction image may not photometrically correspond to the corresponding pixels in the alternate view image.


In yet another additional embodiment of the invention, the method further includes identifying at least one additional reference image within the image data using the array camera, where the at least one additional reference image is separate from the reference image, determining at least one supplemental prediction image based on the reference image, the at least one additional reference image, and the depth map using the array camera, and computing the supplemental prediction error data based on the at least one additional reference image and the at least one supplemental prediction image using the array camera, where the generated compressed light field representation data further includes the supplemental prediction error data.


In still another additional embodiment of the invention, identifying the at least one additional reference image includes generating an initial additional reference image based on the reference image and the depth map using the array camera, where the initial additional reference image includes pixels projected from the viewpoint of the reference image based on the depth map and forming the additional reference image based on the initial additional reference image and the prediction error data using the array camera, where the additional reference image comprises pixels based on interpolations of pixels propagated from the reference image and the prediction error data.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a system diagram of an array camera including a 5×5 imager array with storage hardware connected with a processor in accordance with an embodiment of the invention.



FIG. 2 is a flow chart conceptually illustrating a process for capturing and processing light fields in accordance with an embodiment of the invention.



FIG. 3 is a flow chart conceptually illustrating a process for generating compressed light field representation data in accordance with an embodiment of the invention.



FIG. 4A is a conceptual illustration of a reference image in a 4×4 array of images and corresponding epipolar lines in accordance with an embodiment of the invention.



FIG. 4B is a conceptual illustration of multiple reference images in a 4×4 array of images in accordance with an embodiment of the invention.



FIG. 5 is a conceptual illustration of a prediction error histogram in accordance with an embodiment of the invention.



FIG. 6 is a flow chart conceptually illustrating a process for decoding compressed light field representation data in accordance with an embodiment of the invention.





DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are illustrated. Array cameras, such as those described in U.S. patent application Ser. No. 12/935,504, entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., can be utilized to capture light fields and store the captured light fields. Captured light fields contain image data from an array of images of a scene captured from multiple points of view so that each image samples the light field of the same region within the scene (as opposed to a mosaic of images that sample partially overlapping regions of a scene). It should be noted that any configuration of images, including two-dimensional arrays, non-rectangular arrays, sparse arrays, and subsets of arrays of images could be utilized as appropriate to the requirements of specific embodiments of the invention. In a variety of embodiments, image data for a specific image that forms part of a captured light field describes a two-dimensional array of pixels. Storing all of the image data for the images in a captured light field can consume a disproportionate amount of storage space, limiting the number of light field images that can be stored within a fixed capacity storage device and increasing the amount of data transfer involved in transmitting a captured light field. Array cameras in accordance with many embodiments of the invention are configured to process captured light fields and generate data describing correlations between the images in the captured light field. Based on the image correlation data, some or all of the image data in the captured light field can be discarded, affording more efficient storage of the captured light fields as compressed light field representation data. Additionally, this process can be decoupled from the capturing of light fields to enable the efficient use of the hardware resources present in the array camera.


In many embodiments, each image in a captured light field is from a different viewpoint. Due to the different viewpoint of each of the images, parallax results in variations in the position of objects within the images of the scene. The disparity between corresponding pixels in images in a captured light field can be utilized to determine the distance to an object imaged by the corresponding pixels. Conversely, distance can be used to estimate the location of a corresponding pixel in another image. Processes that can be utilized to detect parallax and generate depth maps in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 13/972,881 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras that Contain Occlusions using Subsets of Images to Perform Depth Estimation” to Venkataraman et al. In many embodiments, a depth map is metadata describing the distance from the viewpoint from which an image is captured (or, in the case of super-resolution processing, synthesized) with respect to objects imaged by pixels within the image. Additionally, the depth map can also describe the geometrical linkage between pixels in the reference image and pixels in all other images in the array.
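Although not part of the patent text, the relationship between object distance and the disparity observed across a pair of cameras in the array can be summarized with a short sketch. The snippet below is a simplified illustration assuming an idealized, rectified pinhole pair with a known baseline and a focal length expressed in pixels; the parallax detection processes referenced above additionally use geometric and photometric calibration data.

```python
# Idealized pinhole relationship between depth and disparity for a rectified
# camera pair (illustration only; actual parallax detection also accounts
# for calibration data and geometric distortion).
def disparity_from_depth(depth_m, baseline_m, focal_px):
    """Disparity (in pixels) of a point at depth_m metres seen across baseline_m."""
    return (baseline_m * focal_px) / depth_m

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Depth (in metres) recovered from an observed disparity in pixels."""
    return (baseline_m * focal_px) / disparity_px

# Example: a 5 mm baseline and a 1500 px focal length; an object at 2 m
# shifts by roughly 3.75 px between the two views.
d = disparity_from_depth(2.0, 0.005, 1500.0)
z = depth_from_disparity(d, 0.005, 1500.0)  # recovers 2.0 m
```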


Array cameras in accordance with several embodiments of the invention are configured to process the images in a captured light field using a reference image selected from the captured array of images. In a variety of embodiments, the reference image is a synthetic image generated from the captured images, such as a synthetic viewpoint generated from a focal plane (e.g. a camera) that does not physically exist in the imager array. The remaining images can be considered to be images of alternate views of the scene relative to the viewpoint of the reference image. Using the reference image, array cameras in accordance with embodiments of the invention can generate a depth map using processes similar to those described above in U.S. patent application Ser. No. 13/972,881 and the depth map can be used to generate a set of prediction images describing the pixel positions within one or more of the alternate view images that correspond to specific pixels within the reference image. The relative locations of pixels in the alternate view images can be predicted along epipolar lines projected based on the configuration of the cameras (e.g. the calibration of the physical properties of the imager array in the array camera and their relationship to the reference viewpoint of the array camera) that captured the images. The predicted location of the pixels along the epipolar lines is a function of the distance from the reference viewpoint to the object imaged by the corresponding pixel in the reference image. In a number of embodiments, the predicted location is additionally a function of any calibration parameters intrinsic to or extrinsic to the physical imager array. The prediction images exploit the correlation between the images in the captured light field by describing the differences between the value of a pixel in the reference image and pixels adjacent to corresponding disparity-shifted pixel locations in the other alternate view images in the captured light field. The disparity-shifted pixel positions are often determined with fractional pixel precision (e.g. an integer position in the reference image is mapped to a fractional position in the alternate view image) based on a depth map of the reference image in the alternate view images. Significant compression of the image data forming the images of a captured light field can be achieved by selecting one reference image, generating prediction images with respect to the reference image using the depth map information relating the reference and alternate view images, generating prediction error data describing the differences between the predicted images and the alternate view images, and discarding the alternate view images. In a variety of embodiments, multiple reference images are utilized to generate prediction error data that describes the photometric differences between pixels in alternate view images adjacent to corresponding disparity-shifted pixel locations and pixels in one or more of the reference images.
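As an illustration of the propagation step described above (not taken from the patent), the sketch below projects each reference pixel to a fractional, disparity-shifted position in one alternate view. The pinhole-style shift, the baseline parameters, and the function name are assumptions made for the example; an actual implementation would use the calibrated epipolar geometry of the imager array.

```python
import numpy as np

def propagate_reference_pixels(ref_img, depth_map, baseline_x, baseline_y, focal_px):
    """Propagate reference pixels to fractional locations in an alternate view.

    Each pixel is shifted along the epipolar direction implied by the camera
    baseline by an amount inversely proportional to its depth. Returns the
    fractional (x, y) target positions and the pixel values carried over.
    """
    h, w = ref_img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inv_depth = focal_px / np.maximum(depth_map, 1e-6)  # disparity per unit baseline
    target_x = xs + baseline_x * inv_depth              # fractional column in the alternate view
    target_y = ys + baseline_y * inv_depth              # fractional row in the alternate view
    positions = np.stack([target_x.ravel(), target_y.ravel()], axis=1)
    values = ref_img.ravel()
    return positions, values
```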


It should also be noted that while, in a variety of embodiments, the reference image corresponds to an image in the captured array of images, virtual (e.g. synthetic) images corresponding to a virtual viewpoint within the captured light field can also be utilized as the reference image in accordance with embodiments of the invention. For example, a virtual red image, a virtual green image, and/or a virtual blue image can be used to form a reference image for each respective color channel and used as a starting point for forming predicted images for the alternate view images of each respective color channel. In many embodiments, a color channel includes a set of images within the image array corresponding to a particular color, potentially as captured by the focal planes within the imager array. However, in accordance with embodiments of the invention, the reference image for a particular color channel can be taken from a different color channel; for example, an infrared image can be used as the reference image for the green channel within the captured light field.


The reference image(s) and the set of prediction error data stored by an array camera can be referred to as compressed light field representation data. The compressed light field representation data can also include the depth map utilized to generate the prediction error data and/or any other metadata related to the creation of the compressed light field representation data and/or the captured light field. The prediction error data can be compressed using any compression technique, such as discrete cosine transform (DCT) techniques, as appropriate to the requirements of specific embodiments of the invention. The compressed light field representation data can be compressed and stored in a variety of formats. One such file format is the JPEG-DX extension to ISO/IEC 10918-1 described in U.S. patent application Ser. No. 13/631,731, titled “Systems and Methods for Encoding Light Field Image Files” to Venkataraman et al. As can readily be appreciated, the prediction error data can be stored in a similar manner to a depth map as compressed or uncompressed layers and/or metadata within an image file. In a variety of embodiments, array cameras are configured to capture light fields separate from the generation of the compressed light field representation data. For example, the compressed light field representation data can be generated when the array camera is no longer capturing light fields or in the background as the array camera captures additional light fields. Any variety of decoupled processing techniques can be utilized in accordance with the requirements of embodiments of the invention. Many array cameras in accordance with embodiments of the invention are capable of performing a variety of processes that utilize the information contained in the captured light field using the compressed light field representation data.
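The sketch below illustrates one way such a bundle could be assembled. It is a generic container with generic lossless compression, not the JPEG-DX layout referenced above, and all names are hypothetical.

```python
import io
import json
import zlib
import numpy as np

def pack_light_field_representation(reference, depth_map, prediction_errors, metadata):
    """Bundle a reference image, depth map, per-view prediction error data, and
    metadata into a single compressed byte string.

    `prediction_errors` maps an alternate-view index (k, l) to its signed
    prediction error image. zlib stands in for whatever compression the
    implementation actually uses.
    """
    payload = {
        "reference": reference,
        "depth_map": depth_map,
        "metadata": np.frombuffer(json.dumps(metadata).encode("utf-8"), dtype=np.uint8),
    }
    for (k, l), err in prediction_errors.items():
        payload[f"error_{k}_{l}"] = err
    buf = io.BytesIO()
    np.savez(buf, **payload)              # serialize the arrays
    return zlib.compress(buf.getvalue())  # generic lossless compression
```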


In many instances, a captured light field contains image data from an array of images of a scene that sample an object space within the scene in such a way as to provide sampling diversity that can be utilized to synthesize higher resolution images of the object space using super-resolution processes. Systems and methods for performing super-resolution processing on image data captured by an array camera in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 12/967,807 entitled “System and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al. Synthesized high resolution images are representations of the scene captured in the captured light field. In many instances, the process of synthesizing a high resolution image may result in a single image, a stereoscopic pair of images that can be used to display three dimensional (3D) information via an appropriate 3D display, and/or a variety of images from different viewpoints. The process of synthesizing high resolution images from lower resolution image data captured by an array camera module in an array camera typically involves performing parallax detection and correction to reduce the effects of disparity between the images captured by each of the cameras in the array camera module. By using the reference image(s), the set of prediction error data, and/or the depth map contained in compressed light field representation data, high resolution images can be synthesized separately from the parallax detection and correction process, thereby alleviating the need to store and process the captured light field until the super-resolution process can be performed. Additionally, the parallax detection process can be optimized to improve speed or efficiency of compression. Once the compressed data is decoded, a parallax process can be re-run at a different (i.e. higher) precision using the reconstructed images. In this way, an initial super-resolution process can be performed in an efficient manner (such as on an array camera, where the processing power of the device limits the ability to perform a high precision parallax process in real-time) and, at a later time, a higher resolution parallax process can be performed to generate any of a variety of data, including a second set of compressed light field representation data and/or other captured light field image data, or perform any processing that relies on the captured light field. Later times include, but are not limited to, times when the array camera is not capturing light fields and/or when the compressed light field representation data has been transmitted to a separate image processing device with more advanced processing capabilities.


The disclosures of each of U.S. patent application Ser. Nos. 12/935,504, 12/967,807, 13/631,731, and 13/972,881 are hereby incorporated by reference in their entirety. Although the systems and methods described are with respect to array cameras configured to both capture and process captured light fields, devices that are configured to obtain captured light fields captured using a different device and process the received data can be utilized in accordance with the requirements of a variety of embodiments of the invention. Additionally, any of the various systems and processes described herein can be performed in sequence, in alternative sequences, and/or in parallel (e.g. on different computing devices) in order to achieve similar results in a manner that is more appropriate to the requirements of a specific application of the invention. Systems and methods for capturing light fields and generating compressed light field representation data using the captured light fields in accordance with embodiments of the invention are described below.


Array Camera Architectures


As described above, array cameras are capable of capturing and processing light fields and can be configured to generate compressed light field representation data using captured light fields in accordance with many embodiments of the invention. An array camera including an imager array in accordance with an embodiment of the invention is illustrated in FIG. 1. The array camera 100 includes an array camera module including an imager array 102 having multiple focal planes 104 and an optics array configured to form images through separate apertures on each of the focal planes. The imager array 102 is configured to communicate with a processor 108. In accordance with many embodiments of the invention, the processor 108 is configured to read out image data captured by the imager array 102 and generate compressed light field representation data using the image data captured by the imager array 102. Imager arrays including multiple focal planes are discussed in U.S. patent application Ser. No. 13/106,797, entitled “Architectures for System on Chip Array Cameras” to McMahon et al., the entirety of which is hereby incorporated by reference.


In the illustrated embodiment, the focal planes are configured in a 5×5 array. In other embodiments, any of a variety of array configurations can be utilized including linear arrays, non-rectangular arrays, and subsets of an array as appropriate to the requirements of specific embodiments of the invention. Each focal plane 104 of the imager array is capable of capturing image data from an image of the scene formed through a distinct aperture. Typically, each focal plane includes a plurality of rows of pixels that also forms a plurality of columns of pixels, and each focal plane is contained within a region of the imager that does not contain pixels from another focal plane. The pixels or sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements, and/or any other structure configured to generate an electrical signal indicative of light incident on the structure. In many embodiments, the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present). In other embodiments, the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element. In a variety of embodiments, a Bayer filter pattern of light filters can be applied to one or more of the focal planes 104. In a number of embodiments, the sensor elements are optimized to respond to light at a particular wavelength without utilizing a color filter. It should be noted that any optical channel, including those in non-visible portions of the electromagnetic spectrum (such as infrared) can be sensed by the focal planes as appropriate to the requirements of particular embodiments of the invention.


In several embodiments, information captured by one or more focal planes 104 is read out of the imager array 102 as packets of image data. In many embodiments, a packet of image data contains one or more pixels from a row of pixels captured from each of one or more of the focal planes 104. Packets of image data may contain other groupings of captured pixels, such as one or more pixels captured from a column of pixels in each of one or more focal planes 104 and/or a random sampling of pixels. Systems and methods for reading out image data from array cameras that can be utilized in array cameras configured in accordance with embodiments of the invention are described in U.S. Pat. No. 8,305,456, entitled “Systems and Methods for Transmitting and Receiving Array Camera Image Data” to McMahon, the entirety of which is hereby incorporated by reference. In several embodiments, the packets of image data are used to create a two-dimensional array of images representing the light field as captured from the one or more focal planes 104. In many embodiments, one or more of the images in the array of images are associated with a particular color; this color can be the same color associated with the focal plane 104 corresponding to the viewpoint of the image or a different color. The processor 108 can be configured to immediately process the captured light field from the one or more focal planes and/or the processor 108 can store the captured light field and later process the captured light field. In a number of embodiments, the processor 108 is configured to offload the captured light fields to an external device for processing.
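A minimal sketch of the row-per-focal-plane packet grouping described above is shown below; the structure is hypothetical, and actual readout hardware interleaves data as described in the cited patent.

```python
def read_row_packet(focal_planes, row_index):
    """Group one row of pixels from each focal plane into a single packet.

    `focal_planes` is a list of 2-D pixel arrays, one per focal plane; the
    packet is keyed by focal plane index. Column-based or random samplings
    of pixels would be grouped analogously.
    """
    return {plane_id: plane[row_index, :].copy()
            for plane_id, plane in enumerate(focal_planes)}
```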


The processing of captured light fields includes determining correspondences between pixels in the captured light field. In several embodiments, the pixels in the packets of image data are geometrically correlated based on a variety of factors, including, but not limited to, the characteristics of one or more of the focal planes 104. The calibration of imager arrays to determine the characteristics of focal planes is disclosed in U.S. patent application Ser. No. 12/967,807 incorporated by reference above. In several embodiments, processor 108 is configured (such as by an image processing application) to perform parallax detection on the captured light field to determine corresponding pixel locations along epipolar lines between a reference image and alternate view images within the captured light field. The process of performing parallax detection also involves generating a depth map with respect to the reference image (e.g. a reference viewpoint that may include synthesized ‘virtual’ viewpoints where a physical camera in the array does not exist). In a variety of embodiments, the captured packets of image data are associated with image packet timestamps and geometric calibration and/or photometric calibration between pixels in the packets of image data utilize the associated image packet timestamps. Corresponding pixel locations and differences between pixels in the reference image and the alternate view image(s) can be utilized by processor 108 to determine a prediction for at least some of the pixels of the alternate view image(s). In many embodiments, the corresponding pixel locations are determined with sub-pixel precision.


The prediction image can be formed by propagating pixels from the reference image(s) to the corresponding pixel locations in the alternate view grid. In many embodiments, the corresponding pixel locations in the alternate view grid are fractional positions (e.g. sub-pixel positions). Once the pixels from the reference image(s) are propagated to the corresponding positions in the alternate view grid, a predicted image (from the same perspective as the alternate view image) is formed by calculating prediction values for the integer grid points in the alternate view grid based on propagated pixel values from the reference image. The predicted image values in the integer grid of the alternate view image can be determined by interpolating from multiple pixels propagated from the reference image in the neighborhood of the integer pixel grid position in the predicted image. In many embodiments, the predicted image values on the integer grid points of the alternate view image are interpolated through an iterative interpolation scheme (e.g. a combination of linear or non-linear interpolations) that progressively fills in ‘holes’ or missing data at integer positions in the predicted alternate view image grid. In a variety of embodiments, integer grid locations in the predicted image can be filled using set selection criteria. In several embodiments, pixels propagated from the reference image within a particular radius of the integer pixel position can form a set, and the pixel in the set closest to the mean of the distribution of pixels in the region can be selected as the predictor. In a number of embodiments, within the same set, the pixel that lands nearest to the integer grid point may be used as the predictor (i.e. nearest neighbor interpolation). In another embodiment, an average of the N nearest neighbors may be used as the predicted image value at the integer grid point. However, it should be noted that the predicted value can be any function (linear or non-linear) that interpolates or inpaints values in the predicted image based on reference pixel values in some relationship to the integer grid position in the predicted image.
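As a sketch of the resampling step just described (assuming the scattered positions and values come from a propagation step such as the earlier sketch, and that nearest-neighbor fill is used; the text also contemplates averaged and iterative interpolation schemes):

```python
import numpy as np
from scipy.interpolate import griddata

def form_predicted_image(positions, values, shape):
    """Resample scattered, fractionally positioned reference pixels onto the
    integer grid of the alternate view using nearest-neighbor interpolation.

    `positions` holds fractional (x, y) targets and `values` the propagated
    reference pixel values.
    """
    h, w = shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    predicted = griddata(positions, values, (grid_x, grid_y), method="nearest")
    return predicted.astype(values.dtype)
```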


The prediction error data itself can be determined by performing a photometric comparison of the pixel values from the predicted image (e.g. the predicted alternate view image based on the reference image and the depth map) and the corresponding alternate view image. The prediction error data represents the difference between the predicted alternate view image based on the reference image and the depth map, and the actual alternate view image that must be later reproduced in the decoding process.


Due to variations in the optics and the pixels used to capture the image data, sampling diversity, and/or aliasing, the processor 108 is configured to anticipate photometric differences between corresponding pixels. These photometric differences may be further increased in the compared pixels because the nearest neighbor does not directly correspond to the pixel in the reference image. In many embodiments, the compression is lossless and the full captured light field can be reconstructed using the reference image, the depth map, and the prediction error data. In other embodiments, a lossy compression is used and an approximation of the full captured light field can be reconstructed. In this way, the pixel values of the alternate view images are available for use in super-resolution processing, enabling the super-resolution processes to exploit the sampling diversity and/or aliasing that may be reflected in the alternate view images. In a number of embodiments, the prediction images are sparse images. In several embodiments, sparse images contain predictions for some subset of points (e.g. pixels) in the space of the alternate view images. Processor 108 is further configured to generate compressed light field representation data using prediction error data, the reference image, and the depth map. Other data, such as one or more image packet timestamps, can be included as metadata associated with the compressed light field representation data as appropriate to the requirements of specific array cameras in accordance with embodiments of the invention. In several embodiments, the prediction error data and the reference image are compressed via lossless and/or lossy image compression techniques. In a variety of embodiments, an image processing application configures processor 108 to perform a variety of operations using the compressed light field representation data, including, but not limited to, synthesizing high resolution images using a super-resolution process. Other operations can be performed using the compressed light field representation data in accordance with a variety of embodiments of the invention.
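The prediction error and its use in reconstruction can be summarized in a few lines. This is an illustrative sketch assuming 8-bit images; lossy variants would quantize the error before storage.

```python
import numpy as np

def compute_prediction_error(predicted, alternate_view):
    """Signed photometric difference between the predicted image and the
    actual alternate view image (same integer grid, 8-bit inputs)."""
    return alternate_view.astype(np.int16) - predicted.astype(np.int16)

def reconstruct_alternate_view(predicted, error):
    """Lossless reconstruction of the alternate view image from the predicted
    image and the stored prediction error data."""
    return np.clip(predicted.astype(np.int16) + error, 0, 255).astype(np.uint8)
```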


Although a specific array camera configured to capture light fields and generate compressed light field representation data is illustrated in FIG. 1, alternative architectures, including those containing sensors measuring the movement of the array camera as light fields are captured, can also be utilized as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Systems and methods for capturing and processing light fields in accordance with embodiments of the invention are discussed below.


Processing and Interacting with Captured Light Fields


A captured light field, as an array of images, can consume a significant amount of storage space. Generating compressed light field representation data using the captured light field while reducing the storage space utilized can be a processor-intensive task. A variety of array cameras in accordance with embodiments of the invention lack the processing power to simultaneously capture and process light fields while maintaining adequate performance for one or both of the operations. Array cameras in accordance with several embodiments of the invention are configured to separately obtain a captured light field and generate compressed light field representation data using the captured light field, allowing the array camera to quickly capture light fields and efficiently process those light fields as the processing power becomes available and/or the compressed light field representation data is needed. A process for processing and interacting with captured light fields in accordance with an embodiment of the invention is illustrated in FIG. 2. The process 200 includes reading (210) image data from a captured light field out of an array camera module. Compressed light field representation data is generated (212) from the captured light field. In a variety of embodiments, a high resolution image is synthesized (214) using the compressed light field representation data. In several embodiments, users can then interact (216) with the synthesized high resolution image in a variety of ways appropriate to the requirements of a specific application in accordance with embodiments of the invention.
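The flow of FIG. 2 can be outlined as below; the callables passed in are hypothetical stand-ins for the capture, compression, super-resolution, and interaction stages, which may run at different times on different devices as described above.

```python
def process_captured_light_field(read_light_field, compress, super_resolve, interact):
    """High-level outline of the process of FIG. 2 (hypothetical callables).

    Each argument is supplied by the application: capture (210), compressed
    representation generation (212), high resolution synthesis (214), and
    user interaction (216). Capture and compression may occur at disparate
    times, as described in the text.
    """
    light_field = read_light_field()                 # (210) read image data from the array
    representation = compress(light_field)           # (212) generate compressed representation
    high_res_image = super_resolve(representation)   # (214) synthesize a high resolution image
    interact(high_res_image)                         # (216) user interaction
    return representation, high_res_image
```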


In many embodiments, a captured light field is obtained (210) using an imager array and a processor in the array camera generating (212) the compressed light field representation data. In a number of embodiments, a captured light field is obtained (210) from a separate device. In several embodiments, the captured light field is obtained (210) and compressed light field representation data is generated (212) as part of a single capture operation. In a variety of embodiments, obtaining (210) the captured light field and generating (212) the compressed light field representation data occurs at disparate times.


Generating (212) the compressed light field representation data includes normalizing the obtained (210) captured light field and generating a depth map for the obtained (210) captured light field using geometric calibration data and photometric calibration data. In many embodiments, parallax detection processes, such as those disclosed in U.S. patent application Ser. No. 13/972,881, are utilized to generate a depth map and prediction error data describing correlation between pixels in the captured light field from the perspective of one or more reference images. Processes other than those disclosed in U.S. patent application Ser. No. 13/972,881 can be utilized in accordance with many embodiments of the invention. The generated (212) compressed light field representation data includes the prediction error data, the reference images, and the depth map. Additional metadata, such as timestamps, location information, and sensor information, can be included in the generated (212) compressed light field representation data as appropriate to the requirements of specific applications in accordance with embodiments of the invention. In many embodiments, the generated (212) compressed light field representation data is compressed using lossy and/or non-lossy compression techniques. The generated (212) compressed light field representation data can be stored in a variety of formats, such as the JPEG-DX standard. In several embodiments, the alternate view images in the obtained (210) captured light field are not stored in the generated (212) compressed light field representation data.


In a number of embodiments, synthesizing (214) a high resolution image utilizes the reference image(s), the prediction error data, and the depth map in the generated (212) compressed light field representation data. In a variety of embodiments, the reference images, the prediction error data, and the depth map are utilized to reconstruct the array of images (or an approximation of the images) to synthesize (214) a high resolution image using a super-resolution process. A high resolution image can be synthesized (214) using the array of images representing the captured light field reconstructed based on the compressed light field representation data (212). However, in a number of embodiments synthesizing (214) a high resolution image using the generated (212) compressed light field representation data includes reconstructing (e.g. decoding) the array of images using the compressed light field representation data once the captured light field is to be viewed, such as in an image viewing application running on an array camera or other device. Techniques for decoding compressed light field representation data that can be utilized in accordance with embodiments of the invention are described in more detail below. In several embodiments, high resolution images are synthesized (214) at a variety of resolutions to support different devices and/or varying performance requirements. In a number of embodiments, the synthesis (214) of a number of high resolution images is part of an image fusion process such as the processes described in U.S. patent application Ser. No. 12/967,807, the disclosure of which is incorporated by reference above.
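A sketch of the decoding step follows; the names are hypothetical, and `predict_view` stands for whatever procedure re-forms the predicted image for a view from the reference image and depth map.

```python
import numpy as np

def decode_light_field(reference, depth_map, prediction_errors, predict_view):
    """Reconstruct the array of alternate view images from compressed light
    field representation data.

    For each view (k, l), the predicted image is re-formed from the reference
    image and depth map, and the stored prediction error is added back to
    recover the alternate view image (exactly, when the error data is lossless).
    """
    views = {}
    for (k, l), error in prediction_errors.items():
        predicted = predict_view(reference, depth_map, k, l).astype(np.int16)
        views[(k, l)] = np.clip(predicted + error, 0, 255).astype(np.uint8)
    return views
```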


Many operations can be performed while interacting (216) with synthesized high resolution images, such as, but not limited to, modifying the depth of field of the synthesized high resolution image, changing the focal plane of the synthesized high resolution image, recoloring the synthesized high resolution image, and detecting objects within the synthesized high resolution image. Systems and methods for interacting (216) with compressed light field representation data and synthesized high resolution images that can be utilized in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 13/773,284 to McMahon et al., the entirety of which is hereby incorporated by reference.


Although a specific process for processing and interacting with captured light fields in accordance with an embodiment of the invention is described above with respect to FIG. 2, a variety of image deconvolution processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention. Processes for generating compressed light field representation data using captured light fields in accordance with embodiments of the invention are discussed below.


Generating Compressed Light Field Representation Data


A process for generating compressed light field representation data in accordance with an embodiment of the invention is illustrated in FIG. 3. The process 300 includes obtaining (310) an array of image data. A reference image viewpoint (e.g. a desired viewpoint for the reference image) is determined (312). Parallax detection is performed (314) to form a depth map from this reference viewpoint. Predicted images are determined (316) corresponding to the alternate view images by propagating pixels from the reference image to the alternate view grid. Prediction error data is computed (318) as the difference between the predicted image and the corresponding alternate view image. Where areas of low confidence are detected (320), supplemental prediction images and supplemental prediction error data are computed (322). In a variety of embodiments, the reference image(s), prediction error data, and/or the depth map are compressed (324). Compressed light field representation data can then be created (326) using the reference image(s), prediction error data, and/or depth map(s).


In a variety of embodiments, the array of images is obtained (310) from a captured light field. In several embodiments, the obtained (310) array of images is packets of image data captured using an imager array. In many embodiments, the determined (312) reference image corresponds to the reference viewpoint of the array of images. Furthermore, the determined (312) reference image can be an arbitrary image (or synthetic image) in the obtained (310) array of images. In a number of embodiments, each image in the obtained (310) array of images is associated with a particular color channel, such as, but not limited to, green, red, and blue. Other colors and/or portions of the electromagnetic spectrum can be associated with each image in the array of images in accordance with a variety of embodiments of the invention. In several embodiments, the determined (312) reference image is a green image in the array of images. Parallax detection is performed (314) with respect to the viewpoint of the determined (312) reference image to locate pixels corresponding to pixels in a reference image by searching along epipolar lines in the alternate view images in the array of images. In a number of embodiments, the parallax detection uses correspondences between cameras that are not co-located with the viewpoint of the reference image(s). In many embodiments, the search area need not be directly along an epipolar line, but rather a region surrounding the epipolar line; this area can be utilized to account for inaccuracies in determining imager calibration parameters and/or the epipolar lines. In several embodiments, parallax detection can be performed (314) with a fixed and/or dynamically determined level of precision; this level of precision can be based on performance requirements and/or desired compression efficiency, the array of images, and/or on the desired level of precision in the result of the performed (314) parallax detection. Additional techniques for performing parallax processes with varying levels of precision are disclosed in U.S. Provisional Patent Application Ser. No. 61/780,974, filed Mar. 13, 2013, the entirety of which is hereby incorporated by reference.
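A minimal sketch of an integer-precision disparity search along a horizontal epipolar line is given below (sum-of-absolute-differences cost, hypothetical parameters). Sub-pixel precision and searching a band around the epipolar line, as described above, would require interpolating the alternate view image and are not shown.

```python
import numpy as np

def search_disparity(ref_img, alt_img, y, x, max_disp, win=3):
    """Find the integer disparity along a horizontal epipolar line that best
    matches a (2*win+1)-pixel square patch from the reference image, using a
    sum-of-absolute-differences cost."""
    ref_patch = ref_img[y - win:y + win + 1, x - win:x + win + 1].astype(np.float32)
    best_disp, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xs = x - d  # candidate column in the alternate view along the epipolar line
        if xs - win < 0:
            break
        cand = alt_img[y - win:y + win + 1, xs - win:xs + win + 1].astype(np.float32)
        cost = np.abs(ref_patch - cand).sum()
        if cost < best_cost:
            best_disp, best_cost = d, cost
    return best_disp
```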


Disparity Information from a Single Reference Image


Turning now to FIG. 4A, a conceptual illustration of a two-dimensional array of images and associated epipolar lines as utilized in determining pixel correspondences in accordance with an embodiment of the invention is shown. The 4×4 array of images 400 includes a reference image 410, a plurality of alternate view images 412, a plurality of epipolar lines 414, and baselines 416 representing the distance between optical centers of particular pairs of cameras in the array. Performing (314) parallax detection along epipolar lines 414 calculates disparity information for the pixels in one or more of the alternate view images 412 relative to the corresponding pixels in the reference image 410. In a number of embodiments, the epipolar lines are geometric distortion-compensated epipolar lines between the pixels corresponding to the photosensitive sensors in the focal planes in the imager array that captured the array of images. In several embodiments, the calculation of disparity information first involves the utilization of geometric calibration data so that disparity searches can be directly performed along epipolar lines within the alternate view images. Geometric calibration data can include a variety of information, such as inter- and intra-camera lens distortion data obtained from an array camera calibration process. Other geometric calibration data can be utilized in accordance with a number of embodiments of the invention. In a variety of embodiments, photometric pre-compensation processes are performed on one or more of the images prior to determining the disparity information. A variety of photometric pre-compensation processes, such as vignette correction, can be utilized in accordance with many embodiments of the invention. Although specific techniques for determining disparity are discussed above, any of a variety of techniques appropriate to the requirements of a specific application can be utilized in accordance with embodiments of the invention, such as those disclosed in U.S. patent application Ser. No. 13/972,881, incorporated by reference above.


In a variety of embodiments, performing (314) parallax detection includes generating a depth map describing depth information in the array of images. In many embodiments, the depth map is metadata describing the distance from the reference camera (i.e. viewpoint) to the portion of the scene captured in the pixels (or a subset of the pixels) of an image determined using the corresponding pixels in some or all of the alternate view images. In several embodiments, candidate corresponding pixels are those pixels in alternate view images that appear along epipolar lines from pixels in the reference image. In a number of embodiments, a depth map is generated using only images in the array of images that are associated with the same color (for example, green) as the reference image. In several embodiments, the depth map is generated using images that share a single color different from the color of the reference image. For example, with a green reference image, a depth map can be generated using only the images associated with the color red (or blue) in the array of images. In many embodiments, depth information is determined with respect to multiple colors and combined to generate a depth map; e.g. depth information is determined separately for the subsets of green, red, and blue images in the array of images and a final depth map is generated using a combination of the green depth information, the red depth information, and the blue depth information. In a variety of embodiments, the depth map is generated using information from any set of cameras in the array. In a variety of embodiments, the depth map is generated without respect to colors associated with the images and/or with a combination of colors associated with the images. In several embodiments, parallax detection (314) can be performed utilizing techniques similar to those described in U.S. patent application Ser. No. 13/972,881, incorporated by reference above. Additionally, non-color images (such as infrared images) can be utilized to generate the depth map as appropriate to the requirements of specific embodiments of the invention.


Although a specific example of a 4×4 array of images that can be utilized to determine disparity information and a depth map from a reference image in the 4×4 array of images is described above with respect to FIG. 4A, any size array, and any set of cameras in that array can be used to determine disparity information and a depth map in accordance with embodiments of the invention.


Returning now to FIG. 3, depth information determined during the performed (314) parallax detection is used to determine (316) prediction images including pixel location predictions for one or more pixels in the reference image in at least one alternate view image. In several embodiments, the depth map generated during parallax detection can be used to identify pixel locations within alternate view images corresponding to a pixel location within the reference image with fractional pixel precision. In a variety of embodiments, determining (316) a prediction image in the alternate view includes mapping the fractional pixel location to a specific pixel location (or pixel locations) within the pixel grid for the alternate view image. In several embodiments, specific integer grid pixel location(s) in the predicted image for the alternate view are determined as a function of the neighbors within the support region. These functions include, but are not limited to, the nearest neighbor (or a function of the nearest N neighbors) to the integer grid point within the support region. In other embodiments, any other localized fixed or adaptive mapping technique, including (but not limited to) techniques that map based on depth in boundary regions, can be utilized to identify a pixel within an alternate view image for the purpose of generating a prediction for the selected pixel in the alternate view image. Additionally, filtering can be incorporated into the computation of prediction images in order to reduce the amount of prediction error. In several embodiments, the prediction error data is computed (318) from the difference of the prediction image(s) and their respective alternate view image(s). The prediction error data can be utilized in the compression of one or more images in the captured light field. In several embodiments, the computed (318) prediction error data is the signed difference between the values of a pixel in the predicted image and the pixel at the same grid position in the alternate view image. In this way, the prediction error data typically does not reflect the error in the location prediction for a pixel in the reference image relative to a pixel location in the alternate view image. Instead, the prediction error data primarily describes the difference in photometric information between a pixel in the determined prediction image that was generated by propagating pixels from the reference image and a pixel in an alternate view image. Although specific techniques are identified for determining predicted images utilizing correspondence information determined using a depth map, any of a variety of approaches can be utilized for determining prediction images utilizing correspondence information determined using a depth map as appropriate to the requirements of a specific application in accordance with embodiments of the invention.


In a variety of embodiments, virtual red, virtual green, and/or virtual blue reference images can be utilized as a reference image. For example, a depth map can be determined for a particular reference viewpoint that may not correspond to the location of a physical camera in the array. This depth map can be utilized to form a virtual red, virtual green, and/or virtual blue image from the captured light field. These virtual red, virtual green, and/or virtual blue images can then be utilized as the reference image(s) from which to create the prediction images for the alternate view(s) utilized in the processes described above. By way of a second example, one or more virtual red, virtual green, and/or virtual blue images and/or physical red, green, and/or blue images within the array of images can be used as reference images. When forming the prediction error, the depth map can be utilized to form a prediction image from virtual and/or actual reference image(s) and calculate the prediction error with respect to the corresponding alternate view images.


Turning now to FIG. 5, a prediction error histogram 500 conceptually illustrating computed (318) prediction error data between pixels in a predicted image and an alternate view image in accordance with an embodiment of the invention is shown. The prediction error represented by the prediction error histogram 500 can be utilized in the compression of the corresponding image data. Although a specific example of a prediction error histogram in accordance with an embodiment of the invention is conceptually illustrated in FIG. 5, any of a variety of prediction errors, including those that have statistical properties differing from those illustrated in FIG. 5, and any other applicable error measurement can be utilized in accordance with the requirements of embodiments of the invention.
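
To make the statistics concrete, the short sketch below builds a signed prediction error histogram for 8-bit image data; a sharply peaked, zero-centered distribution of this kind is what the subsequent compression step exploits. The integer-wide bin layout is an illustrative choice, not a requirement of the described embodiments.

```python
import numpy as np

def prediction_error_histogram(error):
    """Histogram of signed prediction errors for 8-bit images (range -255..255).

    Returns per-bin counts with one integer-wide bin per possible error value.
    """
    edges = np.arange(-255, 257) - 0.5        # bin edges at ..., -1.5, -0.5, 0.5, ...
    counts, _ = np.histogram(error.ravel(), bins=edges)
    return counts, edges
```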


Returning now to FIG. 3, in several embodiments of the invention the correlation between spatially proximate pixels in an image can be exploited to compare all pixels within a patch of an alternate view image to a pixel and/or a patch from a reference image. Effectively, the pixels from a region in the predicted image are copied from a patch in the reference image. In this way, a trade-off can be achieved between determining fewer corresponding pixel locations based on the depth and/or generating a lower resolution depth map for a reference image and encoding a potentially larger range of prediction errors with respect to one or more alternate view images. In a number of embodiments, the process of encoding the prediction error data can be adaptive in the sense that pixels within a region of an alternate view image can be encoded with respect to a specific pixel in a reference image and, in the event that the prediction error exceeds a predetermined threshold, a new pixel from the reference image can be selected that has a corresponding pixel location closer to a pixel in the alternate view image.
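
The following sketch illustrates that adaptive trade-off for a single patch: the patch is first encoded against one reference pixel value, and a different candidate reference value is selected only when the resulting prediction error exceeds a threshold. The candidate list, the worst-case error criterion, and the function name are assumptions made for illustration, not the specific adaptive encoding rule of the embodiments above.

```python
import numpy as np

def encode_patch_adaptively(alt_patch, ref_value, candidate_values, threshold):
    """Encode an alternate-view patch against a single reference pixel value.

    If the largest absolute error exceeds the threshold, switch to the candidate
    reference value that minimizes the worst-case error (standing in for selecting
    a reference pixel with a closer corresponding location).
    """
    alt = alt_patch.astype(np.float32)
    error = alt - ref_value
    if np.max(np.abs(error)) <= threshold:
        return ref_value, error
    best = min(candidate_values, key=lambda v: np.max(np.abs(alt - v)))
    return best, alt - best
```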


Prediction error data for the alternate view is computed (318) using the determined (312) reference image, the determined (316) prediction images, and the depth map. For the reference image p_{ref} and the alternate view images p_{k,l}, where (k, l) denotes the location of the alternate view image in the array of images p, the depth information provides, for a subset of the images p_{k,l}, correspondences of the form

p_{ref}(x, y) := p_{k,l}(i, j)

where (x, y) is the location of a pixel in p_{ref} and (i, j) is the (generally fractional) location of the corresponding pixel in p_{k,l} determined from the depth information. Using these mappings of subsets of pixels to the alternate viewpoint p_{k,l}, a prediction image for p_{k,l} is calculated (316) from the reference pixels that map to it. The prediction error data E_{k,l} can then be computed (318) between the prediction image for the viewpoint p_{k,l} and the alternate view image p_{k,l}. In a variety of embodiments, the determined (316) prediction images include sparse images. The missing values of a sparsely populated prediction image can be interpolated using populated values within a neighborhood of the missing pixel value. For example, a kernel regression may be applied to the populated values to fill in the missing prediction values. In these cases, the prediction error data E_{k,l} also represents the error induced by the interpolation of the missing values.
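
A minimal sketch of the interpolation of missing values in a sparse prediction image is shown below, using a Gaussian-weighted average of populated neighbors as a stand-in for the kernel regression mentioned above; the window radius, the kernel shape, and the function name are illustrative assumptions.

```python
import numpy as np

def fill_sparse_prediction(prediction, filled, radius=2, sigma=1.0):
    """Interpolate unpopulated prediction grid points from populated neighbors.

    Each hole is filled with a Gaussian-weighted average of the populated values
    inside a (2*radius+1)^2 window; holes with no populated neighbor are left as-is.
    """
    h, w = prediction.shape
    out = prediction.astype(np.float32).copy()
    hole_ys, hole_xs = np.nonzero(~filled)
    for y, x in zip(hole_ys, hole_xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        vals = prediction[y0:y1, x0:x1].astype(np.float32)
        mask = filled[y0:y1, x0:x1]
        if not mask.any():
            continue
        wy, wx = np.mgrid[y0:y1, x0:x1]
        weights = np.exp(-((wy - y) ** 2 + (wx - x) ** 2) / (2.0 * sigma ** 2)) * mask
        out[y, x] = float(np.sum(weights * vals) / np.sum(weights))
    return out
```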


In many embodiments, the determined (316) initial prediction image is a sparsely populated grid of fractionally-positioned points from the reference frame p_{ref} that includes "holes", which are locations or regions on the alternate view integer grid that are not occupied by any of the pixels mapped from the reference frame p_{ref}. The presence of "holes" can be particularly prevalent in occluded areas, but holes may also occur in non-occlusion regions due to non-idealities in the depth map or due to the fact that many pixels in the reference camera correspond to fractional positions in the alternate view image. In several embodiments, holes in the prediction error data can be filled using the absolute value of the pixel at that location in the alternate view image p_{k,l}. This is similar to filling the predicted image with a value of zero (i.e. any null value or default value) to ensure that the coded error is equal to the value of the pixel in the alternate view image at that position. In a variety of embodiments, "holes" in a predicted image can be filled using interpolation with predicted values from neighboring pixels to create additional predictions for the holes based on the pixels from the predicted image. As can be readily appreciated, any interpolator can be utilized to create interpolated predicted image pixels from pixels propagated from the reference image. The interpolation scheme used is a parameter of the encoding and decoding process and should be applied in both the encoder and the decoder to ensure lossless output. In a variety of embodiments, residuals can provide more efficient compression than encoding holes with absolute values. In several embodiments, the pixels from the reference image p_{ref} do not map to exact grid locations within the prediction image, and a mapping that assigns a single pixel value to multiple adjacent pixel locations on the integer grid of the prediction image is used. In this way, there is a possibility that multiple pixels from the reference image p_{ref} may map to the same integer grid location in the prediction image. In this case, pixel stacking rules can be utilized to generate multiple prediction images in which different stacked pixels are used in each image. In many embodiments, if N pixels exist in a pixel stack, then the resulting predicted value could be the mean of the N pixel values in the stack. However, any number of prediction images can be computed, and/or any other technique for determining prediction images where multiple pixels map to the same location (i.e. a pixel stack exists) can be utilized as appropriate to the requirements of specific embodiments of the invention. In a variety of embodiments, holes can remain within the predicted images after the initial interpolation; additional interpolation processes can be performed until every location on the integer grid of the predicted image (or a predetermined number of locations) is assigned a pixel value. Any interpolation technique, such as kernel regression or inpainting, can be used to fill the remaining holes as described. In other embodiments, a variety of techniques that involve creating multiple prediction images and/or pieces of prediction error data can be utilized to achieve compression of the raw data. Furthermore, any of a variety of interpolation techniques known to those skilled in the art can be utilized to fill holes in a prediction image as appropriate to the requirements of a specific application in accordance with embodiments of the invention.
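
The sketch below illustrates one of the pixel stacking rules described above, averaging all reference pixels that land on the same integer grid location and reporting the remaining holes. The accumulation approach and function name are illustrative; other stacking rules (for example, generating multiple prediction images from different stacked pixels) are equally consistent with the text.

```python
import numpy as np

def propagate_with_pixel_stacks(reference, target_x, target_y, shape):
    """Propagate reference pixels onto the alternate-view grid, averaging stacks.

    Where several reference pixels map to the same grid location (a pixel stack),
    the predicted value is the mean of the stacked values; untouched locations are
    returned as holes to be filled by interpolation or encoded directly.
    """
    h, w = shape
    acc = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.int64)
    valid = (target_x >= 0) & (target_x < w) & (target_y >= 0) & (target_y < h)
    # np.add.at accumulates correctly even when target indices repeat
    np.add.at(acc, (target_y[valid], target_x[valid]), reference[valid])
    np.add.at(count, (target_y[valid], target_x[valid]), 1)
    prediction = np.where(count > 0, acc / np.maximum(count, 1), 0.0)
    return prediction.astype(np.float32), count == 0
```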


Prediction Error Data from Multiple Reference Images


In a variety of embodiments, performing (314) parallax detection does not return accurate disparity information for pixels in alternate view images that are occluded, that appear in featureless (e.g. textureless) areas relative to the reference image, or for which the depth map exhibits other non-idealities such as photometric mismatch. Using the computed (318) prediction error data and/or the depth map and/or a confidence map describing areas of low confidence in the depth map, areas of low confidence can be identified (320). Areas of low confidence indicate areas in the reference viewpoint where the depth measurement may be inaccurate or the pixels may otherwise not photometrically correspond (for example, due to defects in the reference image), leading to potential inefficiencies in compression and/or performance. Low confidence can be determined in a variety of ways, such as identifying areas having a parallax cost function exceeding a threshold value. For example, if the parallax cost function indicates a low cost (e.g. low mismatch), this indicates that the focal planes agree on a particular depth and the pixels appear to correspond. Similarly, a high cost indicates that not all focal planes agree with respect to the depth, and therefore the computed depth is unlikely to correctly represent the locations of objects within the captured light field. However, any of a variety of techniques for identifying areas of low confidence can be utilized as appropriate to the requirements of specific embodiments of the invention, such as those disclosed in U.S. patent application Ser. No. 13/972,881, incorporated by reference above. In many embodiments, these potential inefficiencies are disregarded and no additional action is taken with respect to the identified (320) areas of low confidence. In several embodiments, potential inefficiencies are disregarded by simply encoding the pixels from the alternate view images rather than computing the prediction error. In a number of embodiments, if an area of low confidence (e.g. correspondence mismatch) is identified (320), one or more additional reference images are selected and supplemental prediction images (or portions of supplemental prediction images) are computed (322) from the additional reference images. In several embodiments, additional reference images, or portions of additional reference images, are utilized when objects detected in the array of images create areas where the prediction error data would be large using a single reference image (for example, in an occlusion zone). A large prediction error rate can be anticipated when objects in a captured light field are close to the imager array, although any situation in which a large prediction error rate is determined can be the basis for selecting additional reference images in accordance with embodiments of the invention.
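
A minimal sketch of the fallback in which low-confidence pixels are encoded directly rather than as prediction error is shown below. The single cost-threshold test stands in for the confidence measures referenced above, and the names are illustrative.

```python
import numpy as np

def encode_with_confidence(prediction, alternate_view, parallax_cost, cost_threshold):
    """Choose per pixel between prediction-error coding and direct coding.

    High-cost (low-confidence) pixels are encoded directly from the alternate view;
    all other pixels are encoded as the signed difference against the prediction.
    The low-confidence mask must also be signalled so the decoder can invert this.
    """
    low_confidence = parallax_cost > cost_threshold
    residual = alternate_view.astype(np.float32) - prediction
    payload = np.where(low_confidence, alternate_view.astype(np.float32), residual)
    return payload, low_confidence
```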


Turning now to FIG. 4B, a conceptual illustration of a two-dimensional array of images with two reference images as utilized in determining supplemental pixel correspondences in accordance with an embodiment of the invention is shown. The 4×4 array of images 450 includes a reference image 460, a secondary reference image 466, a plurality of green alternate view images 462, a plurality of red alternate view images 461, a plurality of blue alternate view images 463, a plurality of prediction dependencies 464 extending from the reference image 460, and a plurality of secondary prediction dependencies 470 extending from the secondary reference image 466. A baseline 468 extends from the primary reference image 460 to the secondary reference image 466. Primary prediction images are computed (316) from those alternate view images associated with the reference image 460 by the prediction dependencies 464 utilizing processes similar to those described above. Likewise, supplemental prediction images are computed (322) along secondary epipolar lines from the secondary reference image 466 using those alternate view images associated with the secondary reference image 466 via the secondary prediction dependencies 470. In a number of embodiments, the pixels in the computed (322) supplemental prediction images can be mapped to the pixels in the reference image 460 using baseline 468.


In several embodiments, the alternate view images utilized in performing (314) parallax detection are clustered around the respective reference image used in that parallax detection. In a variety of embodiments, the images are clustered in a way that reduces the disparity and/or improves pixel correspondence between the clustered images, thereby reducing the number of pixels from the alternate view images that are occluded from the viewpoints of both the reference image and the secondary reference image. In many embodiments, the alternate view images are clustered to the primary reference image 460, to the secondary reference image 466, and/or together based on the color associated with the images. For example, if the reference image 460 and the secondary reference image 466 are green, only the green alternate view images 462 are associated with the reference image 460 and/or the secondary reference image 466. Likewise, the red alternate view images 461 (or the blue alternate view images 463) are associated with each other for the purposes of computing (322) supplemental prediction images and/or performing (314) parallax detection.
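
One way the clustering described above could be sketched is shown below, assigning each alternate view to the nearest reference view that shares its color and grouping views of other colors with the closest reference otherwise. The tuple layout, the distance criterion, and the fallback behavior are assumptions made for illustration and are not prescribed by the embodiments above.

```python
import math

def cluster_alternate_views(alternate_views, reference_views):
    """Group alternate views around same-color reference views by proximity.

    Views are (label, color, (x, y)) tuples describing positions in the camera
    array; each alternate view joins the cluster of the closest reference view of
    the same color, falling back to the closest reference of any color.
    """
    clusters = {label: [] for label, _, _ in reference_views}
    for label, color, (x, y) in alternate_views:
        same_color = [r for r in reference_views if r[1] == color]
        candidates = same_color if same_color else list(reference_views)
        nearest = min(candidates,
                      key=lambda r: math.hypot(r[2][0] - x, r[2][1] - y))
        clusters[nearest[0]].append(label)
    return clusters
```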


In many embodiments, particularly those embodiments employing lossless compression techniques, the secondary reference image 466 is predicted using the reference image 460 and the baseline 468 that describes the distance between the optical centers of the reference image and the secondary reference image. In several embodiments, the secondary reference image 466 is selected to reduce the size of the occlusion zones (and thus improve the predictability of pixels); that is, parallax detection is performed and error data is determined as described above using the secondary reference image 466. In many embodiments, the secondary reference image 466 is associated with the same color channel as the reference image 460. A specific example of a two-dimensional array of images with two reference images that can be utilized to compute (322) supplemental prediction images is conceptually illustrated in FIG. 4B; however, any array of images and more than two reference images can be utilized in accordance with embodiments of the invention. For example, supplemental reference images can be computed per color channel. Taking the array illustrated in FIG. 4B, six reference images (one primary reference image for each of the red, blue, and green channels along with one secondary reference image for each of the red, blue, and green channels) can be utilized in the generation of prediction images and the associated prediction error data. Additionally, in a variety of embodiments, a subset of the pixels within the reference image and/or supplemental reference image (e.g. a region or a sub-portion) can be utilized in the calculation of prediction error data utilizing processes similar to those described above.


In a variety of embodiments, particularly those utilizing lossy compression techniques, a variety of coding techniques can be utilized to account for the effects of lossy compression in the reference images when predicting an alternate view image. In several embodiments, before the prediction image and prediction error data for the alternate view image are formed, the reference image is compressed using a lossy compression algorithm. The compressed reference image is then decompressed to form a lossy reference image. The lossy reference image represents the reference image that the decoder will have in the initial stages of decoding. The lossy reference image is used along with the depth map to form a lossy predicted image for the alternate view image. The prediction error data for the alternate view image is then calculated by comparing the lossy predicted image with the alternate view image (e.g. by taking the signed difference of the two images). In this way, when lossy compression is used, the prediction error data takes into account the lossy nature of the encoding of the reference image.
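
A sketch of that closed-loop calculation appears below. The codec callables and function names are placeholders for whatever lossy compression scheme is used, and the prediction step is abstracted as a callable that applies the depth map; none of these names come from the patent.

```python
import numpy as np

def lossy_closed_loop_error(reference, alternate_view, depth_map,
                            form_prediction, compress, decompress):
    """Compute prediction error against the decoded (lossy) reference image.

    `compress`/`decompress` stand in for the lossy codec applied to the reference
    image; `form_prediction(reference, depth_map)` stands in for propagating the
    reference pixels to the alternate viewpoint. Using the decoded reference keeps
    the encoder's prediction identical to the one the decoder will form.
    """
    compressed_reference = compress(reference)
    lossy_reference = decompress(compressed_reference)     # what the decoder sees
    lossy_prediction = form_prediction(lossy_reference, depth_map)
    prediction_error = alternate_view.astype(np.float32) - lossy_prediction
    return compressed_reference, prediction_error
```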


In a variety of embodiments, the alternative reference images are based on the reference image. In several embodiments, the reference image used to predict the viewpoint of the alternate reference image undergoes lossy compression and is then decompressed to generate a lossy reference image. A lossy predicted image for the alternate reference image is generated from the lossy reference image. Lossy compression is also applied to the prediction error data for the alternate reference image, and the compressed prediction error data is decompressed to form the lossy prediction error data. The lossy prediction error data is added to the lossy predicted image to form the lossy predicted alternate reference image. The prediction image and prediction error data for any subsequent alternate view image that depends on the alternate reference image will be formed using the lossy predicted alternate reference image. In a number of embodiments, this forms the alternate view image that can be reconstructed utilizing lossy reconstruction techniques as described below. This process can be repeated for each alternate view image as necessary. In this way, prediction error data can be accurately computed (relative to the uncompressed light field data) using the lossy compressed image data.
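
The same discipline applied to an alternate reference image can be sketched as follows: the alternate reference that later alternate views depend on is rebuilt exactly as the decoder will rebuild it, from the lossy prediction plus the lossy prediction error. The callables are placeholders, as in the previous sketch.

```python
def rebuild_lossy_alternate_reference(lossy_reference, depth_map, prediction_error,
                                      form_prediction, compress_error, decompress_error):
    """Reconstruct the alternate reference image as the decoder will see it.

    Subsequent alternate views that depend on this alternate reference are then
    predicted from this lossy reconstruction rather than from the original image.
    """
    lossy_prediction = form_prediction(lossy_reference, depth_map)
    lossy_error = decompress_error(compress_error(prediction_error))
    return lossy_prediction + lossy_error
```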


Returning now to FIG. 3, in a number of embodiments, the determined (312) reference image along with the computed (318) prediction error data and/or any computed (322) supplemental prediction images (if relevant) are compressed (324). In many embodiments, supplemental prediction error data based on the computed (322) supplemental prediction images is compressed (324). This compression can be lossless or lossy depending on the requirements of a particular embodiment of the invention. When the images are compressed (324), they can be reconstructed (either exactly or approximately depending on the compression (324) technique(s) utilized) by forming a predicted alternate view image from the reference image data and the depth map, and adding the decoded prediction error data to the predicted alternate view image(s) using an image decoder. Additionally, particularly in those embodiments utilizing lossy compression techniques, metadata describing the information lost in the compression of the reference image(s) and/or prediction error data can be stored in the compressed light field representation data. Alternatively, this information can be stored in the prediction error data. This information can be utilized in the decoding of the compressed light field representation data to accurately reconstruct the originally captured images by correcting for the information lost in the lossy compression process. Techniques for decoding losslessly compressed light field representation data in accordance with embodiments of the invention are described in more detail below. In a variety of embodiments, the compression (324) of the images depends on the computed (318) prediction error data. Compressed light field representation data is generated (326) using the determined (312) reference image(s) and computed (318) prediction error data along with the depth map generated during the performed (314) parallax detection. In those embodiments with multiple reference images, the secondary reference images or portions of the secondary reference images can be included in the compressed light field representation data and/or the secondary reference images can be reconstructed using the computed (318) prediction error data with respect to the reference image. Additional metadata can be included in the generated (326) compressed light field representation data as appropriate to the requirements of a specific application in accordance with embodiments of the invention.
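
The components enumerated above can be bundled as in the illustrative container below. The patent does not prescribe a particular file layout, so the dictionary keys and the keying of prediction error data by viewpoint are assumptions made for illustration.

```python
def generate_compressed_light_field(reference_image, depth_map, prediction_error_data,
                                    supplemental_references=None, metadata=None):
    """Assemble compressed light field representation data from its components.

    `prediction_error_data` is assumed to be keyed by the (k, l) position of each
    alternate view in the array; metadata can carry interpolation parameters and
    any information needed to compensate for lossy compression of the reference.
    """
    return {
        "reference_image": reference_image,
        "depth_map": depth_map,
        "prediction_error_data": prediction_error_data,
        "supplemental_references": supplemental_references or {},
        "metadata": metadata or {},
    }
```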


In a variety of embodiments, supplemental depth information is incorporated into the depth map and/or as metadata associated with the compressed light field representation data. In a number of embodiments, supplemental depth information is encoded with the additional reference viewpoint(s). In many embodiments, the depth information used for each reference viewpoint is calculated during the encoding process using any set of cameras, and the sets used may be the same or may differ depending on the viewpoint. In many embodiments, depth for an alternate reference viewpoint is calculated for only sub-regions of the alternate reference viewpoint so that an entire depth map does not need to be encoded for each viewpoint. In many embodiments, a depth map for the alternate reference viewpoint is formed by propagating pixels from the depth map of a primary reference viewpoint. If there are holes in the depth map propagated to the alternate reference viewpoint, they can be filled by interpolating from nearby propagated pixels in the depth map or through direct detection from the alternate viewpoint. In many embodiments, the depth map for the alternate reference viewpoint can be formed by a combination of propagating depth values from another reference viewpoint, interpolating missing depth values in the alternate reference viewpoint, and/or directly detecting regions of particular depth values in the alternate reference viewpoint. In this way, the depth map created by performing (314) parallax detection above can be augmented with depth information generated from alternate reference images.
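
A minimal sketch of propagating the primary depth map to an alternate reference viewpoint is given below, assuming a pinhole disparity model and a baseline expressed in the same units as the disparity implied by each depth value; holes in the propagated map are reported for later interpolation or direct detection. The model and names are illustrative assumptions.

```python
import numpy as np

def propagate_depth_map(depth_map, baseline_x, baseline_y, focal_length):
    """Warp a primary-viewpoint depth map toward an alternate reference viewpoint.

    Each depth value is moved by the disparity it implies along the baseline to
    the alternate viewpoint; grid positions receiving no value are returned as
    holes.
    """
    h, w = depth_map.shape
    propagated = np.zeros((h, w), dtype=np.float32)
    filled = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    disparity = focal_length / np.maximum(depth_map.astype(np.float32), 1e-6)
    tx = np.rint(xs + baseline_x * disparity).astype(int)
    ty = np.rint(ys + baseline_y * disparity).astype(int)
    valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
    propagated[ty[valid], tx[valid]] = depth_map[ys[valid], xs[valid]]
    filled[ty[valid], tx[valid]] = True
    return propagated, ~filled    # propagated depth map and the hole mask
```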


A specific process for generating compressed light field representation data in accordance with an embodiment of the invention is described above with respect to FIG. 3; however, a variety of processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention. In particular, the above processes can be performed using all or a subset of the images in the obtained (310) array of images.


Decoding Compressed Light Field Representation Data


As described above, compressed light field representation data can be utilized to efficiently store captured light fields. However, in order to utilize the compressed light field representation data to perform additional processes on the captured light field (such as parallax processes), the compressed light field representation data needs to be decoded to retrieve the original (or an approximation of the original) captured light field. A process for decoding compressed light field representation data is conceptually illustrated in FIG. 6. The process 600 includes obtaining (610) compressed light field representation data. In many embodiments, the compressed light field representation data is decompressed (611). A reference image is determined (612) and alternate view images are formed (614). In many embodiments, a determination is made as to whether alternative reference images are present (616). If alternative reference images are present, the process 600 repeats using (618) the alternative reference image(s). Once the alternate view images are reconstructed, the captured light field is reconstructed (620).


In a variety of embodiments, decompressing (611) the compressed light field representation data includes decompressing the reference image, depth map, and/or prediction error data compressed utilizing the techniques described above. In several embodiments, the determined (612) reference image corresponds to a viewpoint image (e.g. an image captured by a focal plane in an imager array) in the compressed light field representation data; however, it should be noted that reference images from virtual viewpoints (e.g. viewpoints that do not correspond to a focal plane in the imager array) can also be utilized as appropriate to the requirements of specific embodiments of the invention. In a number of embodiments, the alternate view images are formed (614) by computing prediction images using the determined (612) reference image and the depth map, then applying the prediction error data to the computed prediction images. However, any technique for forming (614) the alternate view images, including directly forming the alternate view images using the determined (612) reference image and the prediction error data, can be utilized as appropriate to the requirements of specific embodiments of the invention. Additionally, metadata describing the interpolation techniques utilized in the creation of the compressed light field representation data can be utilized in computing the prediction images. In this way, the decoding process produces prediction images from which, once the prediction error data is applied, correct (or approximately correct) alternate view images are formed (614). This allows multiple interpolation techniques to be utilized in the encoding of compressed light field representation data; e.g. adaptive interpolation techniques can be utilized based on the requirements of specific embodiments of the invention. The captured light field is reconstructed (620) using the alternate view images and the reference image. In many embodiments, the reconstructed captured light field also includes the depth map, the prediction error data, and/or any other metadata included in the compressed light field representation data.
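
The core decoding step described above can be sketched as follows; `form_prediction` is a placeholder for recomputing the prediction image from the decoded reference image and depth map using the interpolation described by the stream metadata, and is an assumption rather than a named function of the patent.

```python
def decode_alternate_view(reference_image, depth_map, prediction_error, form_prediction):
    """Recover an alternate view image from compressed light field representation data.

    The decoder recomputes the prediction image exactly as the encoder did and then
    adds the transmitted prediction error data to obtain the alternate view image
    (exactly for lossless coding, approximately for lossy coding).
    """
    prediction = form_prediction(reference_image, depth_map)
    return prediction + prediction_error
```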


In a variety of embodiments, multiple reference images (e.g. a primary reference image and one or more secondary reference images) exist within the compressed light field representation data. The alternate reference images can be directly included in the compressed light field representation data and/or formed utilizing techniques similar to those described above. Using (618) an alternative reference image further includes recursively (and/or iteratively) forming alternate view images from the viewpoint of each reference image utilizing the techniques described above. In this way, the alternative view images are mapped back to the viewpoint of the (primary) reference image, allowing the captured light field to be reconstructed (620).


In those embodiments utilizing lossy compression techniques, information critical to determining (612) the reference image and/or an alternative reference image can be lost. However, this loss can be compensated for by storing the lost information as metadata within the compressed light field representation data and/or as part of the prediction error data. Then, when determining (612) the reference image and/or the alternate reference image, the metadata and/or prediction error data can be applied to the compressed image in order to reconstruct the original, uncompressed image. Using the uncompressed reference image, the decoding of the compressed light field representation data can proceed utilizing techniques similar to those described above to reconstruct (620) the captured light field. In a variety of embodiments, if lossy compression is used, predicting the alternative reference image from the reference image includes reconstructing the alternate reference image by coding and decoding (losslessly) the prediction error data and then adding the decoded prediction error to the reference image. In this way, the original alternate reference image can be reconstructed and used to predict specific alternate views as described above.


A specific process for decoding compressed light field representation data in accordance with an embodiment of the invention is described above with respect to FIG. 6; however, a variety of processes appropriate to the requirements of specific applications can be utilized in accordance with embodiments of the invention.


Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention can be practiced otherwise than specifically described without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims
  • 1. A method for decoding compressed light field representation data comprising: obtaining, using a processor, compressed light field representation data, wherein the compressed light field representation data contains a compressed representation of a plurality of images of a scene captured from multiple points of view comprising a compressed representation of a reference image, a depth map, and prediction error data; decompressing a reference image, a depth map, and prediction error data from the compressed light field representation data; forming at least one alternate view image by: computing at least one prediction image using the reference image and the depth map; and applying the prediction error data to modify the pixels of the at least one computed prediction image to generate the at least one alternate view image; wherein the reference image and the at least one alternative view image form a light field comprising a plurality of images captured from multiple points of view.
  • 2. The method of claim 1, wherein the reference image corresponds to a viewpoint image in the compressed light field representation data.
  • 3. The method of claim 2, wherein the viewpoint image corresponds to a focal plane in an imager array.
  • 4. The method of claim 1, wherein the reference image corresponds to a virtual viewpoint that does not correspond to a focal plane in an imager array.
  • 5. The method of claim 1, wherein the at least one prediction image is computed using metadata describing interpolation techniques utilized in the creation of the compressed light field representation data.
  • 6. The method of claim 1, wherein the light field further comprises the depth map, the prediction error data, and other metadata included in the compressed light field representation data.
  • 7. The method of claim 1, wherein the compressed light field representation data comprises a plurality of reference images.
  • 8. The method of claim 7, further comprising recursively forming alternate view images from the viewpoint of each reference image.
  • 9. The method of claim 1, wherein the compressed light field representation data comprises the at least one alternate view image.
  • 10. The method of claim 1, wherein the compressed light field representation data comprises metadata and prediction error data providing information that compensates for information lost during a lossy compression.
  • 11. The method of claim 10, wherein determining the reference image comprises applying the metadata and the prediction error data to a compressed image in order to reconstruct an original, uncompressed image.
  • 12. The method of claim 1, wherein the depth map describes the distance from the viewpoint of the reference image with reference to objects imaged by pixels within the reference image.
  • 13. The method of claim 1, wherein a portion of the prediction error data describes a difference in photometric information between a pixel in a prediction image and a pixel in at least one alternate view image corresponding to the prediction image.
  • 14. The method of claim 1, wherein the reference image is a virtual image interpolated from a virtual viewpoint within the image data.
CROSS-REFERENCE TO RELATED APPLICATIONS

The current application is a continuation of U.S. patent application Ser. No. 15/253,605, entitled “Systems and Methods for Generating Compressed Light Field Representation Data using Captured Light Fields, Array Geometry, and Parallax Information” to Venkataraman et al., filed on Aug. 31, 2016, which is a continuation of U.S. patent application Ser. No. 14/186,871, entitled “Systems and Methods for Generating Compressed Light Field Representation Data using Captured Light Fields, Array Geometry, and Parallax Information” to Venkataraman et al., filed on Feb. 21, 2014 and issued as U.S. Pat. No. 9,462,164, which claims priority to U.S. Provisional Patent Application Ser. No. 61/767,520, filed Feb. 21, 2013, and to U.S. Provisional Patent Application Ser. No. 61/786,976, filed Mar. 15, 2013, the disclosures of which are hereby incorporated by reference in their entirety.

US Referenced Citations (914)
Number Name Date Kind
4124798 Thompson Nov 1978 A
4198646 Alexander et al. Apr 1980 A
4323925 Abell et al. Apr 1982 A
4460449 Montalbano Jul 1984 A
4467365 Murayama et al. Aug 1984 A
4652909 Glenn Mar 1987 A
4899060 Lischke Feb 1990 A
5005083 Grage Apr 1991 A
5070414 Tsutsumi Dec 1991 A
5144448 Hornbaker et al. Sep 1992 A
5157499 Oguma et al. Oct 1992 A
5325449 Burt Jun 1994 A
5327125 Iwase et al. Jul 1994 A
5488674 Burt Jan 1996 A
5629524 Stettner et al. May 1997 A
5793900 Nourbakhsh et al. Aug 1998 A
5801919 Griencewic et al. Sep 1998 A
5808350 Jack et al. Sep 1998 A
5832312 Rieger et al. Nov 1998 A
5880691 Fossum et al. Mar 1999 A
5911008 Niikura et al. Jun 1999 A
5933190 Dierickx et al. Aug 1999 A
5973844 Burger Oct 1999 A
6002743 Telymonde Dec 1999 A
6005607 Uomori et al. Dec 1999 A
6034690 Gallery et al. Mar 2000 A
6069351 Mack May 2000 A
6069365 Chow et al. May 2000 A
6097394 Levoy et al. Aug 2000 A
6124974 Burger Sep 2000 A
6130786 Osawa et al. Oct 2000 A
6137100 Fossum et al. Oct 2000 A
6137535 Meyers Oct 2000 A
6141048 Meyers Oct 2000 A
6160909 Melen Dec 2000 A
6163414 Kikuchi et al. Dec 2000 A
6172352 Liu et al. Jan 2001 B1
6175379 Uomori et al. Jan 2001 B1
6205241 Melen Mar 2001 B1
6239909 Hayashi et al. May 2001 B1
6292713 Jouppi et al. Sep 2001 B1
6340994 Margulis et al. Jan 2002 B1
6358862 Ireland et al. Mar 2002 B1
6443579 Myers Sep 2002 B1
6476805 Shum et al. Nov 2002 B1
6477260 Shimomura Nov 2002 B1
6502097 Chan et al. Dec 2002 B1
6525302 Dowski, Jr. et al. Feb 2003 B2
6563537 Kawamura et al. May 2003 B1
6571466 Glenn et al. Jun 2003 B1
6603513 Berezin Aug 2003 B1
6611289 Yu Aug 2003 B1
6627896 Hashimoto et al. Sep 2003 B1
6628330 Lin Sep 2003 B1
6635941 Suda Oct 2003 B2
6639596 Shum et al. Oct 2003 B1
6647142 Beardsley Nov 2003 B1
6657218 Noda Dec 2003 B2
6671399 Berestov Dec 2003 B1
6674892 Melen et al. Jan 2004 B1
6750904 Lambert Jun 2004 B1
6765617 Tangen et al. Jul 2004 B1
6771833 Edgar Aug 2004 B1
6774941 Boisvert et al. Aug 2004 B1
6788338 Dinev Sep 2004 B1
6795253 Shinohara Sep 2004 B2
6801653 Wu et al. Oct 2004 B1
6819328 Moriwaki et al. Nov 2004 B1
6819358 Kagle et al. Nov 2004 B1
6879735 Portniaguine et al. Apr 2005 B1
6897454 Sasaki et al. May 2005 B2
6903770 Kobayashi et al. Jun 2005 B1
6909121 Nishikawa Jun 2005 B2
6927922 George et al. Aug 2005 B2
6958862 Joseph Oct 2005 B1
7015954 Foote et al. Mar 2006 B1
7085409 Sawhney Aug 2006 B2
7161614 Yamashita et al. Jan 2007 B1
7199348 Olsen et al. Apr 2007 B2
7206449 Raskar et al. Apr 2007 B2
7235785 Hornback et al. Jun 2007 B2
7262799 Suda Aug 2007 B2
7292735 Blake et al. Nov 2007 B2
7295697 Satoh Nov 2007 B1
7333651 Kim et al. Feb 2008 B1
7369165 Bosco et al. May 2008 B2
7391572 Jacobowitz et al. Jun 2008 B2
7408725 Sato Aug 2008 B2
7425984 Chen Sep 2008 B2
7496293 Shamir et al. Feb 2009 B2
7564019 Olsen Jul 2009 B2
7599547 Sun et al. Oct 2009 B2
7606484 Richards et al. Oct 2009 B1
7620265 Wolff Nov 2009 B1
7633511 Shum et al. Dec 2009 B2
7639435 Chiang et al. Dec 2009 B2
7646549 Zalevsky et al. Jan 2010 B2
7657090 Omatsu et al. Feb 2010 B2
7667824 Moran Feb 2010 B1
7675080 Boettiger Mar 2010 B2
7675681 Tomikawa et al. Mar 2010 B2
7706634 Schmitt et al. Apr 2010 B2
7723662 Levoy et al. May 2010 B2
7738013 Galambos et al. Jun 2010 B2
7741620 Doering et al. Jun 2010 B2
7782364 Smith Aug 2010 B2
7826153 Hong Nov 2010 B2
7840067 Shen et al. Nov 2010 B2
7912673 Hébert et al. Mar 2011 B2
7965314 Miller et al. Jun 2011 B1
7973834 Yang Jul 2011 B2
7986018 Rennie Jul 2011 B2
7990447 Honda et al. Aug 2011 B2
8000498 Shih et al. Aug 2011 B2
8013904 Tan et al. Sep 2011 B2
8027531 Wilburn et al. Sep 2011 B2
8044994 Vetro et al. Oct 2011 B2
8077245 Adamo et al. Dec 2011 B2
8098297 Crisan et al. Jan 2012 B2
8098304 Pinto et al. Jan 2012 B2
8106949 Tan et al. Jan 2012 B2
8126279 Marcellin et al. Feb 2012 B2
8130120 Kawabata et al. Mar 2012 B2
8131097 Lelescu et al. Mar 2012 B2
8149323 Li Apr 2012 B2
8164629 Zhang Apr 2012 B1
8169486 Corcoran et al. May 2012 B2
8180145 Wu et al. May 2012 B2
8189065 Georgiev et al. May 2012 B2
8189089 Georgiev May 2012 B1
8194296 Compton Jun 2012 B2
8212914 Chiu Jul 2012 B2
8213711 Tam Jul 2012 B2
8231814 Duparre Jul 2012 B2
8242426 Ward et al. Aug 2012 B2
8244027 Takahashi Aug 2012 B2
8244058 Intwala et al. Aug 2012 B1
8254668 Mashitani et al. Aug 2012 B2
8279325 Pitts et al. Oct 2012 B2
8280194 Wong et al. Oct 2012 B2
8284240 Tubic et al. Oct 2012 B2
8289409 Chang Oct 2012 B2
8289440 Pitts et al. Oct 2012 B2
8290358 Georgiev Oct 2012 B1
8294099 Blackwell, Jr. Oct 2012 B2
8294754 Jung et al. Oct 2012 B2
8300085 Yang et al. Oct 2012 B2
8305456 McMahon Nov 2012 B1
8315476 Georgiev et al. Nov 2012 B1
8345144 Georgiev et al. Jan 2013 B1
8360574 Ishak et al. Jan 2013 B2
8400555 Georgiev Mar 2013 B1
8406562 Bassi et al. Mar 2013 B2
8411146 Twede Apr 2013 B2
8446492 Nakano et al. May 2013 B2
8456517 Mor et al. Jun 2013 B2
8493496 Freedman et al. Jul 2013 B2
8514291 Chang et al. Aug 2013 B2
8514491 Duparre Aug 2013 B2
8541730 Inuiya Sep 2013 B2
8542933 Venkataraman Sep 2013 B2
8553093 Wong et al. Oct 2013 B2
8559756 Georgiev et al. Oct 2013 B2
8565547 Strandemar Oct 2013 B2
8576302 Yoshikawa Nov 2013 B2
8577183 Robinson Nov 2013 B2
8581995 Lin et al. Nov 2013 B2
8619082 Ciurea et al. Dec 2013 B1
8648918 Kauker et al. Feb 2014 B2
8655052 Spooner et al. Feb 2014 B2
8682107 Yoon et al. Mar 2014 B2
8687087 Pertsel et al. Apr 2014 B2
8692893 McMahon Apr 2014 B2
8754941 Sarwari et al. Jun 2014 B1
8773536 Zhang Jul 2014 B1
8780113 Ciurea et al. Jul 2014 B1
8804255 Duparre Aug 2014 B2
8830375 Ludwig Sep 2014 B2
8831367 Venkataraman Sep 2014 B2
8842201 Tajiri Sep 2014 B2
8854462 Herbin et al. Oct 2014 B2
8861089 Duparre Oct 2014 B2
8866912 Mullis Oct 2014 B2
8866920 Venkataraman et al. Oct 2014 B2
8866951 Keelan Oct 2014 B2
8878950 Lelescu et al. Nov 2014 B2
8885059 Venkataraman et al. Nov 2014 B1
8885922 Ito et al. Nov 2014 B2
8896594 Xiong et al. Nov 2014 B2
8896719 Venkataraman et al. Nov 2014 B1
8902321 Venkataraman et al. Dec 2014 B2
8928793 McMahon Jan 2015 B2
8977038 Tian et al. Mar 2015 B2
9001226 Ng et al. Apr 2015 B1
9019426 Han et al. Apr 2015 B2
9025894 Venkataraman May 2015 B2
9025895 Venkataraman May 2015 B2
9030528 Pesach et al. May 2015 B2
9031335 Venkataraman May 2015 B2
9031342 Venkataraman May 2015 B2
9031343 Venkataraman May 2015 B2
9036928 Venkataraman May 2015 B2
9036931 Venkataraman et al. May 2015 B2
9041823 Venkataraman et al. May 2015 B2
9041824 Lelescu et al. May 2015 B2
9041829 Venkataraman et al. May 2015 B2
9042667 Venkataraman et al. May 2015 B2
9049367 Venkataraman et al. Jun 2015 B2
9055233 Venkataraman et al. Jun 2015 B2
9060124 Venkataraman et al. Jun 2015 B2
9077893 Venkataraman et al. Jul 2015 B2
9094661 Venkataraman et al. Jul 2015 B2
9123117 Ciurea et al. Sep 2015 B2
9123118 Ciurea et al. Sep 2015 B2
9124815 Venkataraman et al. Sep 2015 B2
9124831 Mullis Sep 2015 B2
9124864 Mullis Sep 2015 B2
9128228 Duparre Sep 2015 B2
9129183 Venkataraman et al. Sep 2015 B2
9129377 Ciurea et al. Sep 2015 B2
9143711 McMahon Sep 2015 B2
9147254 Florian et al. Sep 2015 B2
9185276 Rodda et al. Nov 2015 B2
9188765 Venkataraman et al. Nov 2015 B2
9191580 Venkataraman et al. Nov 2015 B2
9197821 McMahon Nov 2015 B2
9210392 Nisenzon et al. Dec 2015 B2
9214013 Venkataraman et al. Dec 2015 B2
9235898 Venkataraman et al. Jan 2016 B2
9235900 Ciurea et al. Jan 2016 B2
9240049 Ciurea et al. Jan 2016 B2
9253380 Venkataraman et al. Feb 2016 B2
9256974 Hines Feb 2016 B1
9264592 Rodda et al. Feb 2016 B2
9264610 Duparre Feb 2016 B2
9361662 Lelescu et al. Jun 2016 B2
9374512 Venkataraman et al. Jun 2016 B2
9412206 McMahon et al. Aug 2016 B2
9413953 Maeda Aug 2016 B2
9426343 Rodda et al. Aug 2016 B2
9426361 Venkataraman et al. Aug 2016 B2
9438888 Venkataraman et al. Sep 2016 B2
9445003 Lelescu et al. Sep 2016 B1
9456134 Venkataraman et al. Sep 2016 B2
9456196 Kim et al. Sep 2016 B2
9462164 Venkataraman et al. Oct 2016 B2
9485496 Venkataraman et al. Nov 2016 B2
9497370 Venkataraman et al. Nov 2016 B2
9497429 Mullis et al. Nov 2016 B2
9516222 Duparre et al. Dec 2016 B2
9519972 Venkataraman et al. Dec 2016 B2
9521319 Rodda et al. Dec 2016 B2
9521416 McMahon et al. Dec 2016 B1
9536166 Venkataraman et al. Jan 2017 B2
9576369 Venkataraman et al. Feb 2017 B2
9578237 Duparre et al. Feb 2017 B2
9578259 Molina Feb 2017 B2
9602805 Venkataraman et al. Mar 2017 B2
9661310 Deng et al. May 2017 B2
9811753 Venkataraman et al. Nov 2017 B2
9864921 Venkataraman et al. Jan 2018 B2
20010005225 Clark et al. Jun 2001 A1
20010019621 Hanna et al. Sep 2001 A1
20010028038 Hamaguchi et al. Oct 2001 A1
20010038387 Tomooka et al. Nov 2001 A1
20020012056 Trevino Jan 2002 A1
20020015536 Warren Feb 2002 A1
20020027608 Johnson Mar 2002 A1
20020028014 Ono et al. Mar 2002 A1
20020039438 Mori et al. Apr 2002 A1
20020057845 Fossum May 2002 A1
20020063807 Margulis May 2002 A1
20020075450 Aratani Jun 2002 A1
20020087403 Meyers et al. Jul 2002 A1
20020089596 Yasuo Jul 2002 A1
20020094027 Sato et al. Jul 2002 A1
20020101528 Lee Aug 2002 A1
20020113867 Takigawa et al. Aug 2002 A1
20020113888 Sonoda et al. Aug 2002 A1
20020120634 Min et al. Aug 2002 A1
20020122113 Foote et al. Sep 2002 A1
20020163054 Suda Nov 2002 A1
20020167537 Trajkovic Nov 2002 A1
20020177054 Saitoh et al. Nov 2002 A1
20020190991 Efran et al. Dec 2002 A1
20020195548 Dowski, Jr. et al. Dec 2002 A1
20030025227 Daniell Feb 2003 A1
20030086079 Barth et al. May 2003 A1
20030124763 Fan et al. Jul 2003 A1
20030140347 Varsa Jul 2003 A1
20030179418 Wengender et al. Sep 2003 A1
20030188659 Merry et al. Oct 2003 A1
20030190072 Adkins et al. Oct 2003 A1
20030198377 Ng et al. Oct 2003 A1
20030211405 Venkataraman Nov 2003 A1
20040003409 Berstis et al. Jan 2004 A1
20040008271 Hagimori et al. Jan 2004 A1
20040012689 Tinnerino Jan 2004 A1
20040027358 Nakao Feb 2004 A1
20040047274 Amanai Mar 2004 A1
20040050104 Ghosh et al. Mar 2004 A1
20040056966 Schechner et al. Mar 2004 A1
20040061787 Liu et al. Apr 2004 A1
20040066454 Otani et al. Apr 2004 A1
20040071367 Irani et al. Apr 2004 A1
20040075654 Hsiao et al. Apr 2004 A1
20040096119 Williams May 2004 A1
20040100570 Shizukuishi May 2004 A1
20040105021 Hu et al. Jun 2004 A1
20040114807 Lelescu et al. Jun 2004 A1
20040141659 Zhang Jul 2004 A1
20040151401 Sawhney et al. Aug 2004 A1
20040165090 Ning Aug 2004 A1
20040169617 Yelton et al. Sep 2004 A1
20040170340 Tipping et al. Sep 2004 A1
20040174439 Upton Sep 2004 A1
20040179008 Gordon et al. Sep 2004 A1
20040179834 Szajewski Sep 2004 A1
20040207836 Chhibber et al. Oct 2004 A1
20040213449 Safaee-Rad et al. Oct 2004 A1
20040218809 Blake et al. Nov 2004 A1
20040234873 Venkataraman Nov 2004 A1
20040239885 Jaynes et al. Dec 2004 A1
20040240052 Minefuji et al. Dec 2004 A1
20040251509 Choi Dec 2004 A1
20040264806 Herley Dec 2004 A1
20050006477 Patel Jan 2005 A1
20050007461 Chou et al. Jan 2005 A1
20050009313 Suzuki et al. Jan 2005 A1
20050010621 Pinto et al. Jan 2005 A1
20050012035 Miller Jan 2005 A1
20050036778 DeMonte Feb 2005 A1
20050047678 Jones et al. Mar 2005 A1
20050048690 Yamamoto Mar 2005 A1
20050068436 Fraenkel et al. Mar 2005 A1
20050128509 Tokkonen et al. Jun 2005 A1
20050128595 Shimizu Jun 2005 A1
20050132098 Sonoda et al. Jun 2005 A1
20050134698 Schroeder Jun 2005 A1
20050134699 Nagashima Jun 2005 A1
20050134712 Gruhlke et al. Jun 2005 A1
20050147277 Higaki et al. Jul 2005 A1
20050151759 Gonzalez-Banos et al. Jul 2005 A1
20050168924 Wu et al. Aug 2005 A1
20050175257 Kuroki Aug 2005 A1
20050185711 Pfister et al. Aug 2005 A1
20050205785 Hornback et al. Sep 2005 A1
20050219363 Kohler Oct 2005 A1
20050224843 Boemler Oct 2005 A1
20050225654 Feldman et al. Oct 2005 A1
20050265633 Piacentino et al. Dec 2005 A1
20050275946 Choo et al. Dec 2005 A1
20050286612 Takanashi Dec 2005 A1
20050286756 Hong et al. Dec 2005 A1
20060002635 Nestares et al. Jan 2006 A1
20060007331 Izumi et al. Jan 2006 A1
20060018509 Miyoshi Jan 2006 A1
20060023197 Joel Feb 2006 A1
20060023314 Boettiger et al. Feb 2006 A1
20060028476 Sobel et al. Feb 2006 A1
20060029270 Berestov et al. Feb 2006 A1
20060029271 Miyoshi et al. Feb 2006 A1
20060033005 Jerdev et al. Feb 2006 A1
20060034003 Zalevsky Feb 2006 A1
20060034531 Poon et al. Feb 2006 A1
20060035415 Wood Feb 2006 A1
20060038891 Okutomi et al. Feb 2006 A1
20060039611 Rother Feb 2006 A1
20060046204 Ono et al. Mar 2006 A1
20060049930 Zruya et al. Mar 2006 A1
20060054780 Garrood et al. Mar 2006 A1
20060054782 Olsen Mar 2006 A1
20060055811 Frtiz et al. Mar 2006 A1
20060069478 Iwama Mar 2006 A1
20060072029 Miyatake et al. Apr 2006 A1
20060087747 Ohzawa et al. Apr 2006 A1
20060098888 Morishita May 2006 A1
20060103754 Wenstrand et al. May 2006 A1
20060125936 Gruhike et al. Jun 2006 A1
20060138322 Costello et al. Jun 2006 A1
20060152803 Provitola Jul 2006 A1
20060157640 Perlman et al. Jul 2006 A1
20060159369 Young Jul 2006 A1
20060176566 Boettiger et al. Aug 2006 A1
20060187338 May et al. Aug 2006 A1
20060197937 Bamji et al. Sep 2006 A1
20060203100 Ajito et al. Sep 2006 A1
20060203113 Wada et al. Sep 2006 A1
20060210146 Gu Sep 2006 A1
20060210186 Berkner Sep 2006 A1
20060214085 Olsen Sep 2006 A1
20060221250 Rossbach et al. Oct 2006 A1
20060239549 Kelly et al. Oct 2006 A1
20060243889 Farnworth et al. Nov 2006 A1
20060251410 Trutna Nov 2006 A1
20060274174 Tewinkle Dec 2006 A1
20060278948 Yamaguchi et al. Dec 2006 A1
20060279648 Senba et al. Dec 2006 A1
20060289772 Johnson et al. Dec 2006 A1
20070002159 Olsen Jan 2007 A1
20070008575 Yu et al. Jan 2007 A1
20070009150 Suwa Jan 2007 A1
20070024614 Tam Feb 2007 A1
20070030356 Yea et al. Feb 2007 A1
20070035707 Margulis Feb 2007 A1
20070036427 Nakamura et al. Feb 2007 A1
20070040828 Zalevsky et al. Feb 2007 A1
20070040922 McKee et al. Feb 2007 A1
20070041391 Lin et al. Feb 2007 A1
20070052825 Cho Mar 2007 A1
20070083114 Yang et al. Apr 2007 A1
20070085917 Kobayashi Apr 2007 A1
20070092245 Bazakos et al. Apr 2007 A1
20070102622 Olsen et al. May 2007 A1
20070126898 Feldman Jun 2007 A1
20070127831 Venkataraman Jun 2007 A1
20070139333 Sato et al. Jun 2007 A1
20070140685 Wu et al. Jun 2007 A1
20070146503 Shiraki Jun 2007 A1
20070146511 Kinoshita et al. Jun 2007 A1
20070153335 Hosaka Jul 2007 A1
20070158427 Zhu et al. Jul 2007 A1
20070159541 Sparks et al. Jul 2007 A1
20070160310 Tanida et al. Jul 2007 A1
20070165931 Higaki Jul 2007 A1
20070171290 Kroger Jul 2007 A1
20070177004 Kolehmainen et al. Aug 2007 A1
20070182843 Shimamura et al. Aug 2007 A1
20070201859 Sarrat et al. Aug 2007 A1
20070206241 Smith et al. Sep 2007 A1
20070211164 Olsen et al. Sep 2007 A1
20070216765 Wong et al. Sep 2007 A1
20070225600 Weibrecht et al. Sep 2007 A1
20070228256 Mentzer Oct 2007 A1
20070236595 Pan et al. Oct 2007 A1
20070247517 Zhang et al. Oct 2007 A1
20070257184 Olsen et al. Nov 2007 A1
20070258006 Olsen et al. Nov 2007 A1
20070258706 Raskar et al. Nov 2007 A1
20070263113 Baek et al. Nov 2007 A1
20070263114 Gurevich et al. Nov 2007 A1
20070268374 Robinson Nov 2007 A1
20070296832 Ota et al. Dec 2007 A1
20070296835 Olsen Dec 2007 A1
20070296847 Chang et al. Dec 2007 A1
20070297696 Hamza Dec 2007 A1
20080006859 Mionetto et al. Jan 2008 A1
20080019611 Larkin Jan 2008 A1
20080024683 Damera-Venkata et al. Jan 2008 A1
20080025649 Liu et al. Jan 2008 A1
20080030592 Border et al. Feb 2008 A1
20080030597 Olsen et al. Feb 2008 A1
20080043095 Vetro et al. Feb 2008 A1
20080043096 Vetro et al. Feb 2008 A1
20080054518 Ra et al. Mar 2008 A1
20080056302 Erdal et al. Mar 2008 A1
20080062164 Bassi et al. Mar 2008 A1
20080079805 Takagi et al. Apr 2008 A1
20080080028 Bakin et al. Apr 2008 A1
20080084486 Enge et al. Apr 2008 A1
20080088793 Sverdrup et al. Apr 2008 A1
20080095523 Schilling-Benz et al. Apr 2008 A1
20080099804 Venezia et al. May 2008 A1
20080106620 Sawachi et al. May 2008 A1
20080112059 Choi et al. May 2008 A1
20080112635 Kondo et al. May 2008 A1
20080117289 Schowengerdt et al. May 2008 A1
20080118241 Tekolste et al. May 2008 A1
20080131019 Ng Jun 2008 A1
20080131107 Ueno Jun 2008 A1
20080151097 Chen et al. Jun 2008 A1
20080152215 Horie et al. Jun 2008 A1
20080152296 Oh et al. Jun 2008 A1
20080156991 Hu et al. Jul 2008 A1
20080158259 Kempf et al. Jul 2008 A1
20080158375 Kakkori et al. Jul 2008 A1
20080158698 Chang et al. Jul 2008 A1
20080165257 Boettiger et al. Jul 2008 A1
20080174670 Olsen et al. Jul 2008 A1
20080187305 Raskar et al. Aug 2008 A1
20080193026 Horie et al. Aug 2008 A1
20080211737 Kim et al. Sep 2008 A1
20080218610 Chapman et al. Sep 2008 A1
20080218611 Parulski et al. Sep 2008 A1
20080218612 Border et al. Sep 2008 A1
20080218613 Janson et al. Sep 2008 A1
20080219654 Border et al. Sep 2008 A1
20080239116 Smith Oct 2008 A1
20080240598 Hasegawa Oct 2008 A1
20080247638 Tanida et al. Oct 2008 A1
20080247653 Moussavi et al. Oct 2008 A1
20080272416 Yun Nov 2008 A1
20080273751 Yuan et al. Nov 2008 A1
20080278591 Barna et al. Nov 2008 A1
20080278610 Boettiger et al. Nov 2008 A1
20080284880 Numata Nov 2008 A1
20080291295 Kato et al. Nov 2008 A1
20080298674 Baker et al. Dec 2008 A1
20080310501 Ward et al. Dec 2008 A1
20090027543 Kanehiro et al. Jan 2009 A1
20090050946 Duparre et al. Feb 2009 A1
20090052743 Techmer Feb 2009 A1
20090060281 Tanida et al. Mar 2009 A1
20090086074 Li et al. Apr 2009 A1
20090091645 Trimeche et al. Apr 2009 A1
20090091806 Inuiya Apr 2009 A1
20090096050 Park Apr 2009 A1
20090102956 Georgiev Apr 2009 A1
20090109306 Shan Apr 2009 A1
20090127430 Hirasawa et al. May 2009 A1
20090128644 Camp et al. May 2009 A1
20090128833 Yahav May 2009 A1
20090129667 Ho et al. May 2009 A1
20090140131 Utagawa et al. Jun 2009 A1
20090141933 Wagg Jun 2009 A1
20090147919 Goto et al. Jun 2009 A1
20090152664 Klem et al. Jun 2009 A1
20090167922 Perlman et al. Jul 2009 A1
20090167934 Gupta Jul 2009 A1
20090179142 Duparre et al. Jul 2009 A1
20090180021 Kikuchi et al. Jul 2009 A1
20090200622 Tai et al. Aug 2009 A1
20090201371 Matsuda et al. Aug 2009 A1
20090207235 Francini et al. Aug 2009 A1
20090219435 Yuan et al. Sep 2009 A1
20090225203 Tanida et al. Sep 2009 A1
20090237520 Kaneko et al. Sep 2009 A1
20090245573 Saptharishi et al. Oct 2009 A1
20090256947 Ciurea et al. Oct 2009 A1
20090263017 Tanbakuchi Oct 2009 A1
20090268192 Koenck et al. Oct 2009 A1
20090268970 Babacan et al. Oct 2009 A1
20090268983 Stone Oct 2009 A1
20090274387 Jin Nov 2009 A1
20090279800 Uetani Nov 2009 A1
20090284651 Srinivasan Nov 2009 A1
20090297056 Lelescu et al. Dec 2009 A1
20090302205 Olsen et al. Dec 2009 A9
20090317061 Jung et al. Dec 2009 A1
20090322876 Lee et al. Dec 2009 A1
20090323195 Hembree et al. Dec 2009 A1
20090323206 Oliver et al. Dec 2009 A1
20090324118 Maslov et al. Dec 2009 A1
20100002126 Wenstrand et al. Jan 2010 A1
20100002313 Duparre et al. Jan 2010 A1
20100002314 Duparre Jan 2010 A1
20100007714 Kim et al. Jan 2010 A1
20100013927 Nixon Jan 2010 A1
20100044815 Chang et al. Feb 2010 A1
20100053342 Hwang et al. Mar 2010 A1
20100053600 Tanida Mar 2010 A1
20100060746 Olsen et al. Mar 2010 A9
20100073463 Momonoi et al. Mar 2010 A1
20100074532 Gordon et al. Mar 2010 A1
20100085425 Tan Apr 2010 A1
20100086227 Sun et al. Apr 2010 A1
20100091389 Henriksen et al. Apr 2010 A1
20100097491 Farina et al. Apr 2010 A1
20100103175 Okutomi et al. Apr 2010 A1
20100103259 Tanida et al. Apr 2010 A1
20100103308 Butterfield et al. Apr 2010 A1
20100111444 Coffman May 2010 A1
20100118127 Nam May 2010 A1
20100128145 Pitts et al. May 2010 A1
20100133230 Henriksen et al. Jun 2010 A1
20100133418 Sargent et al. Jun 2010 A1
20100141802 Knight Jun 2010 A1
20100142828 Chang et al. Jun 2010 A1
20100142839 Lakus-Becker Jun 2010 A1
20100157073 Kondo et al. Jun 2010 A1
20100165152 Lim Jul 2010 A1
20100166410 Chang et al. Jul 2010 A1
20100171866 Brady et al. Jul 2010 A1
20100177411 Hegde et al. Jul 2010 A1
20100182406 Benitez et al. Jul 2010 A1
20100194860 Mentz et al. Aug 2010 A1
20100194901 van Hoorebeke et al. Aug 2010 A1
20100195716 Gunnewiek et al. Aug 2010 A1
20100201834 Maruyama et al. Aug 2010 A1
20100202054 Niederer Aug 2010 A1
20100202683 Robinson Aug 2010 A1
20100208100 Olsen et al. Aug 2010 A9
20100220212 Perlman et al. Sep 2010 A1
20100223237 Mishra et al. Sep 2010 A1
20100225740 Jung et al. Sep 2010 A1
20100231285 Boomer et al. Sep 2010 A1
20100238327 Griffith et al. Sep 2010 A1
20100244165 Lake et al. Sep 2010 A1
20100254627 Panahpour Tehrani et al. Oct 2010 A1
20100259610 Petersen et al. Oct 2010 A1
20100265346 Iizuka Oct 2010 A1
20100265381 Yamamoto et al. Oct 2010 A1
20100265385 Knight et al. Oct 2010 A1
20100281070 Chan et al. Nov 2010 A1
20100289941 Ito et al. Nov 2010 A1
20100290483 Park et al. Nov 2010 A1
20100302423 Adams, Jr. et al. Dec 2010 A1
20100309292 Ho et al. Dec 2010 A1
20100309368 Choi et al. Dec 2010 A1
20100321595 Chiu et al. Dec 2010 A1
20100321640 Yeh et al. Dec 2010 A1
20100329556 Mitarai et al. Dec 2010 A1
20110001037 Tewinkle Jan 2011 A1
20110018973 Takayama Jan 2011 A1
20110019048 Raynor et al. Jan 2011 A1
20110019243 Constant, Jr. et al. Jan 2011 A1
20110031381 Tay et al. Feb 2011 A1
20110032370 Ludwig Feb 2011 A1
20110033129 Robinson Feb 2011 A1
20110038536 Gong Feb 2011 A1
20110043661 Podoleanu Feb 2011 A1
20110043665 Ogasahara Feb 2011 A1
20110043668 McKinnon et al. Feb 2011 A1
20110044502 Liu et al. Feb 2011 A1
20110051255 Lee et al. Mar 2011 A1
20110055729 Mason et al. Mar 2011 A1
20110064327 Dagher et al. Mar 2011 A1
20110069189 Venkataraman et al. Mar 2011 A1
20110080487 Venkataraman et al. Apr 2011 A1
20110085028 Samadani et al. Apr 2011 A1
20110090217 Mashitani et al. Apr 2011 A1
20110108708 Olsen et al. May 2011 A1
20110115886 Nguyen May 2011 A1
20110121421 Charbon May 2011 A1
20110122308 Duparre May 2011 A1
20110128393 Tavi et al. Jun 2011 A1
20110128412 Milnes et al. Jun 2011 A1
20110129165 Lim et al. Jun 2011 A1
20110141309 Nagashima et al. Jun 2011 A1
20110142138 Tian et al. Jun 2011 A1
20110149408 Hahgholt et al. Jun 2011 A1
20110149409 Haugholt et al. Jun 2011 A1
20110153248 Gu et al. Jun 2011 A1
20110157321 Nakajima et al. Jun 2011 A1
20110157451 Chang Jun 2011 A1
20110169994 DiFrancesco et al. Jul 2011 A1
20110176020 Chang Jul 2011 A1
20110181797 Galstian et al. Jul 2011 A1
20110193944 Lian et al. Aug 2011 A1
20110206291 Kashani Aug 2011 A1
20110207074 Hall-Holt et al. Aug 2011 A1
20110211824 Georgiev et al. Sep 2011 A1
20110221599 Högasten Sep 2011 A1
20110221658 Haddick et al. Sep 2011 A1
20110221939 Jerdev Sep 2011 A1
20110221950 Oostra Sep 2011 A1
20110222757 Yeatman, Jr. et al. Sep 2011 A1
20110228142 Brueckner Sep 2011 A1
20110228144 Tian et al. Sep 2011 A1
20110234841 Akeley et al. Sep 2011 A1
20110241234 Duparre Oct 2011 A1
20110242342 Goma et al. Oct 2011 A1
20110242355 Goma et al. Oct 2011 A1
20110242356 Aleksic et al. Oct 2011 A1
20110243428 Das Gupta et al. Oct 2011 A1
20110255592 Sung Oct 2011 A1
20110255745 Hodder et al. Oct 2011 A1
20110261993 Weiming et al. Oct 2011 A1
20110267264 McCarthy et al. Nov 2011 A1
20110267348 Lin Nov 2011 A1
20110273531 Ito et al. Nov 2011 A1
20110274366 Tardif Nov 2011 A1
20110279705 Kuang et al. Nov 2011 A1
20110279721 McMahon Nov 2011 A1
20110285701 Chen et al. Nov 2011 A1
20110285866 Bhrugumalla et al. Nov 2011 A1
20110285910 Bamji et al. Nov 2011 A1
20110292216 Fergus et al. Dec 2011 A1
20110298917 Yanagita Dec 2011 A1
20110300929 Tardif et al. Dec 2011 A1
20110310980 Mathew Dec 2011 A1
20110316968 Taguchi et al. Dec 2011 A1
20110317766 Lim, II et al. Dec 2011 A1
20120012748 Pain et al. Jan 2012 A1
20120014456 Martinez Bauza et al. Jan 2012 A1
20120019530 Baker Jan 2012 A1
20120019700 Gaber Jan 2012 A1
20120023456 Sun et al. Jan 2012 A1
20120026297 Sato Feb 2012 A1
20120026342 Yu et al. Feb 2012 A1
20120026366 Golan et al. Feb 2012 A1
20120026451 Nystrom Feb 2012 A1
20120039525 Tian et al. Feb 2012 A1
20120044249 Mashitani et al. Feb 2012 A1
20120044372 Côté et al. Feb 2012 A1
20120051624 Ando et al. Mar 2012 A1
20120056982 Katz et al. Mar 2012 A1
20120057040 Park et al. Mar 2012 A1
20120062697 Treado et al. Mar 2012 A1
20120062702 Jiang et al. Mar 2012 A1
20120062756 Tian Mar 2012 A1
20120069235 Imai Mar 2012 A1
20120081519 Goma Apr 2012 A1
20120086803 Malzbender et al. Apr 2012 A1
20120105691 Waqas et al. May 2012 A1
20120113232 Joblove et al. May 2012 A1
20120113318 Galstian et al. May 2012 A1
20120113413 Miahczylowicz-Wolski et al. May 2012 A1
20120127275 Von Zitzewitz et al. May 2012 A1
20120147139 Li et al. Jun 2012 A1
20120147205 Lelescu et al. Jun 2012 A1
20120153153 Chang et al. Jun 2012 A1
20120154551 Inoue Jun 2012 A1
20120155830 Sasaki Jun 2012 A1
20120163672 McKinnon Jun 2012 A1
20120169433 Mullins Jul 2012 A1
20120170134 Bolis et al. Jul 2012 A1
20120176479 Mayhew et al. Jul 2012 A1
20120176481 Lukk et al. Jul 2012 A1
20120188235 Wu et al. Jul 2012 A1
20120188341 Klein Gunnewiek et al. Jul 2012 A1
20120188389 Lin et al. Jul 2012 A1
20120188420 Black et al. Jul 2012 A1
20120188634 Kubala et al. Jul 2012 A1
20120198677 Duparre Aug 2012 A1
20120200669 Lai Aug 2012 A1
20120200726 Bugnariu Aug 2012 A1
20120200734 Tang Aug 2012 A1
20120206582 DiCarlo et al. Aug 2012 A1
20120219236 Ali et al. Aug 2012 A1
20120224083 Jovanovski et al. Sep 2012 A1
20120229602 Chen et al. Sep 2012 A1
20120229628 Ishiyama et al. Sep 2012 A1
20120237114 Park et al. Sep 2012 A1
20120249550 Akeley et al. Oct 2012 A1
20120249750 Izzat et al. Oct 2012 A1
20120249836 Ali et al. Oct 2012 A1
20120249853 Krolczyk et al. Oct 2012 A1
20120262601 Choi et al. Oct 2012 A1
20120262607 Shimura et al. Oct 2012 A1
20120268574 Gidon et al. Oct 2012 A1
20120274626 Hsieh et al. Nov 2012 A1
20120287291 McMahon et al. Nov 2012 A1
20120290257 Hodge et al. Nov 2012 A1
20120293489 Chen et al. Nov 2012 A1
20120293624 Chen et al. Nov 2012 A1
20120293695 Tanaka Nov 2012 A1
20120307093 Miyoshi Dec 2012 A1
20120307099 Yahata et al. Dec 2012 A1
20120314033 Lee et al. Dec 2012 A1
20120314937 Kim et al. Dec 2012 A1
20120327222 Ng et al. Dec 2012 A1
20130002828 Ding et al. Jan 2013 A1
20130003184 Duparre Jan 2013 A1
20130010073 Do Jan 2013 A1
20130016885 Tsujimoto et al. Jan 2013 A1
20130022111 Chen et al. Jan 2013 A1
20130027580 Olsen et al. Jan 2013 A1
20130033579 Wajs Feb 2013 A1
20130033585 Li et al. Feb 2013 A1
20130038696 Ding et al. Feb 2013 A1
20130050504 Safaee-Rad et al. Feb 2013 A1
20130050526 Keelan Feb 2013 A1
20130057710 McMahon Mar 2013 A1
20130070060 Chatterjee Mar 2013 A1
20130076967 Brunner et al. Mar 2013 A1
20130077859 Stauder et al. Mar 2013 A1
20130077880 Venkataraman et al. Mar 2013 A1
20130077882 Venkataraman et al. Mar 2013 A1
20130083172 Baba Apr 2013 A1
20130088489 Schmeitz et al. Apr 2013 A1
20130088637 Duparre Apr 2013 A1
20130093842 Yahata Apr 2013 A1
20130107061 Kumar et al. May 2013 A1
20130113899 Morohoshi et al. May 2013 A1
20130113939 Strandemar May 2013 A1
20130120605 Georgiev et al. May 2013 A1
20130121559 Hu May 2013 A1
20130128068 Georgiev et al. May 2013 A1
20130128069 Georgiev et al. May 2013 A1
20130128087 Georgiev et al. May 2013 A1
20130128121 Agarwala et al. May 2013 A1
20130135315 Bares May 2013 A1
20130147979 McMahon et al. Jun 2013 A1
20130176394 Tian et al. Jul 2013 A1
20130208138 Li Aug 2013 A1
20130215108 McMahon et al. Aug 2013 A1
20130215231 Hiramoto et al. Aug 2013 A1
20130222556 Shimada Aug 2013 A1
20130223759 Nishiyama et al. Aug 2013 A1
20130229540 Farina et al. Sep 2013 A1
20130230237 Schlosser et al. Sep 2013 A1
20130250123 Zhang et al. Sep 2013 A1
20130250150 Malone Sep 2013 A1
20130258067 Zhang et al. Oct 2013 A1
20130259317 Gaddy Oct 2013 A1
20130265459 Duparre et al. Oct 2013 A1
20130274596 Azizian et al. Oct 2013 A1
20130274923 By et al. Oct 2013 A1
20130293760 Nisenzon et al. Nov 2013 A1
20140002674 Duparre et al. Jan 2014 A1
20140009586 McNamer et al. Jan 2014 A1
20140013273 Ng et al. Jan 2014 A1
20140037137 Broaddus et al. Feb 2014 A1
20140037140 Benhimane et al. Feb 2014 A1
20140043507 Wang et al. Feb 2014 A1
20140076336 Clayton et al. Mar 2014 A1
20140078333 Miao Mar 2014 A1
20140079336 Venkataraman et al. Mar 2014 A1
20140092281 Nisenzon et al. Apr 2014 A1
20140098267 Tian et al. Apr 2014 A1
20140104490 Hsieh et al. Apr 2014 A1
20140118493 Sali et al. May 2014 A1
20140118584 Lee et al. May 2014 A1
20140132810 McMahon May 2014 A1
20140146201 Knight et al. May 2014 A1
20140176592 Wilburn et al. Jun 2014 A1
20140186045 Poddar et al. Jul 2014 A1
20140192154 Jeong et al. Jul 2014 A1
20140192253 Laroia Jul 2014 A1
20140198188 Izawa Jul 2014 A1
20140204183 Lee et al. Jul 2014 A1
20140218546 McMahon Aug 2014 A1
20140232822 Venkataraman et al. Aug 2014 A1
20140240528 Venkataraman et al. Aug 2014 A1
20140240529 Venkataraman et al. Aug 2014 A1
20140253738 Mullis Sep 2014 A1
20140267243 Venkataraman et al. Sep 2014 A1
20140267286 Duparre Sep 2014 A1
20140267633 Venkataraman et al. Sep 2014 A1
20140267762 Mullis et al. Sep 2014 A1
20140267890 Lelescu et al. Sep 2014 A1
20140285675 Mullis Sep 2014 A1
20140313315 Shoham et al. Oct 2014 A1
20140321712 Ciurea et al. Oct 2014 A1
20140333731 Venkataraman et al. Nov 2014 A1
20140333764 Venkataraman et al. Nov 2014 A1
20140333787 Venkataraman et al. Nov 2014 A1
20140340539 Venkataraman et al. Nov 2014 A1
20140347509 Venkataraman et al. Nov 2014 A1
20140347748 Duparre Nov 2014 A1
20140354773 Venkataraman et al. Dec 2014 A1
20140354843 Venkataraman et al. Dec 2014 A1
20140354844 Venkataraman et al. Dec 2014 A1
20140354853 Venkataraman et al. Dec 2014 A1
20140354854 Venkataraman et al. Dec 2014 A1
20140354855 Venkataraman et al. Dec 2014 A1
20140355870 Venkataraman et al. Dec 2014 A1
20140368662 Venkataraman et al. Dec 2014 A1
20140368683 Venkataraman et al. Dec 2014 A1
20140368684 Venkataraman et al. Dec 2014 A1
20140368685 Venkataraman et al. Dec 2014 A1
20140368686 Duparre Dec 2014 A1
20140369612 Venkataraman et al. Dec 2014 A1
20140369615 Venkataraman et al. Dec 2014 A1
20140376825 Venkataraman et al. Dec 2014 A1
20140376826 Venkataraman et al. Dec 2014 A1
20150002734 Lee Jan 2015 A1
20150003752 Venkataraman et al. Jan 2015 A1
20150003753 Venkataraman et al. Jan 2015 A1
20150009353 Venkataraman et al. Jan 2015 A1
20150009354 Venkataraman et al. Jan 2015 A1
20150009362 Venkataraman et al. Jan 2015 A1
20150015669 Venkataraman et al. Jan 2015 A1
20150035992 Mullis Feb 2015 A1
20150036014 Lelescu et al. Feb 2015 A1
20150036015 Lelescu et al. Feb 2015 A1
20150042766 Ciurea et al. Feb 2015 A1
20150042767 Ciurea et al. Feb 2015 A1
20150042833 Lelescu et al. Feb 2015 A1
20150049915 Ciurea et al. Feb 2015 A1
20150049916 Ciurea et al. Feb 2015 A1
20150049917 Ciurea et al. Feb 2015 A1
20150055884 Venkataraman et al. Feb 2015 A1
20150085174 Shabtay et al. Mar 2015 A1
20150091900 Yang et al. Apr 2015 A1
20150104101 Bryant et al. Apr 2015 A1
20150122411 Rodda et al. May 2015 A1
20150124113 Rodda et al. May 2015 A1
20150124151 Rodda et al. May 2015 A1
20150146029 Venkataraman et al. May 2015 A1
20150146030 Venkataraman et al. May 2015 A1
20150199793 Lelescu et al. Jul 2015 A1
20150199841 Venkataraman et al. Jul 2015 A1
20150243480 Yamada et al. Aug 2015 A1
20150248744 Hayasaka et al. Sep 2015 A1
20150254868 Srikanth et al. Sep 2015 A1
20150296137 Duparre et al. Oct 2015 A1
20150312455 Venkataraman et al. Oct 2015 A1
20150326852 Duparre et al. Nov 2015 A1
20150373261 Rodda et al. Dec 2015 A1
20160037097 Duparre Feb 2016 A1
20160044252 Molina Feb 2016 A1
20160044257 Venkataraman et al. Feb 2016 A1
20160057332 Ciurea et al. Feb 2016 A1
20160165106 Duparre Jun 2016 A1
20160165134 Lelescu et al. Jun 2016 A1
20160165147 Nisenzon et al. Jun 2016 A1
20160165212 Mullis Jun 2016 A1
20160195733 Lelescu et al. Jul 2016 A1
20160227195 Venkataraman et al. Aug 2016 A1
20160249001 McMahon Aug 2016 A1
20160255333 Nisenzon et al. Sep 2016 A1
20160266284 Duparre et al. Sep 2016 A1
20160267665 Venkataraman et al. Sep 2016 A1
20160267672 Ciurea et al. Sep 2016 A1
20160269626 McMahon Sep 2016 A1
20160269627 McMahon Sep 2016 A1
20160269650 Venkataraman et al. Sep 2016 A1
20160269651 Venkataraman et al. Sep 2016 A1
20160316140 Nayar et al. Oct 2016 A1
20170006233 Venkataraman et al. Jan 2017 A1
20170048468 Pain et al. Feb 2017 A1
20170053382 Lelescu et al. Feb 2017 A1
20170054901 Venkataraman et al. Feb 2017 A1
20170070672 Rodda et al. Mar 2017 A1
20170085845 Venkataraman et al. Mar 2017 A1
20170094243 Venkataraman et al. Mar 2017 A1
20170099465 Mullis et al. Apr 2017 A1
20170163862 Molina Jun 2017 A1
20170178363 Venkataraman et al. Jun 2017 A1
20170187933 Duparre Jun 2017 A1
20170365104 McMahon et al. Dec 2017 A1
20180007284 Venkataraman et al. Jan 2018 A1
Foreign Referenced Citations (138)
Number Date Country
1669332 Sep 2005 CN
1839394 Sep 2006 CN
101010619 Aug 2007 CN
101064780 Oct 2007 CN
101102388 Jan 2008 CN
101147392 Mar 2008 CN
101427372 May 2009 CN
101606086 Dec 2009 CN
101883291 Nov 2010 CN
102037717 Apr 2011 CN
102375199 Mar 2012 CN
104081414 Oct 2014 CN
104081414 Aug 2017 CN
107230236 Oct 2017 CN
0677821 Oct 1995 EP
0840502 May 1998 EP
1201407 May 2002 EP
1355274 Oct 2003 EP
1734766 Dec 2006 EP
2026563 Feb 2009 EP
2104334 Sep 2009 EP
2244484 Oct 2010 EP
2336816 Jun 2011 EP
2381418 Oct 2011 EP
2761534 Aug 2014 EP
2482022 Jan 2012 GB
2708CHENP2014 Aug 2015 IN
59-025483 Feb 1984 JP
64-037177 Feb 1989 JP
02-285772 Nov 1990 JP
07-15457 Jan 1995 JP
09181913 Jul 1997 JP
11142609 May 1999 JP
11223708 Aug 1999 JP
2000209503 Jul 2000 JP
2001008235 Jan 2001 JP
2001194114 Jul 2001 JP
2001264033 Sep 2001 JP
2001277260 Oct 2001 JP
2001337263 Dec 2001 JP
2002195910 Jul 2002 JP
2002205310 Jul 2002 JP
2002252338 Sep 2002 JP
2003094445 Apr 2003 JP
2003139910 May 2003 JP
2003163938 Jun 2003 JP
2003298920 Oct 2003 JP
2004221585 Aug 2004 JP
2005116022 Apr 2005 JP
2005181460 Jul 2005 JP
2005295381 Oct 2005 JP
2005303694 Oct 2005 JP
2005354124 Dec 2005 JP
2006033228 Feb 2006 JP
2006033493 Feb 2006 JP
2006047944 Feb 2006 JP
2006258930 Sep 2006 JP
2007520107 Jul 2007 JP
2007259136 Oct 2007 JP
2008039852 Feb 2008 JP
2008055908 Mar 2008 JP
2008507874 Mar 2008 JP
2008258885 Oct 2008 JP
2009132010 Jun 2009 JP
2009300268 Dec 2009 JP
2011017764 Jan 2011 JP
2011030184 Feb 2011 JP
2011109484 Jun 2011 JP
2011523538 Aug 2011 JP
2013526801 Jun 2013 JP
2014521117 Aug 2014 JP
2014535191 Dec 2014 JP
6140709 May 2017 JP
2017163587 Sep 2017 JP
20110097647 Aug 2011 KR
200828994 Jul 2008 TW
200939739 Sep 2009 TW
2005057922 Jun 2005 WO
2006039906 Apr 2006 WO
2006039906 Sep 2006 WO
2007013250 Feb 2007 WO
2007083579 Jul 2007 WO
2007134137 Nov 2007 WO
2008045198 Apr 2008 WO
2008050904 May 2008 WO
2008108271 Sep 2008 WO
2008108926 Sep 2008 WO
2008150817 Dec 2008 WO
2009073950 Jun 2009 WO
2009151903 Dec 2009 WO
2009157273 Dec 2009 WO
2011008443 Jan 2011 WO
2011055655 May 2011 WO
2011063347 May 2011 WO
2011105814 Sep 2011 WO
2011116203 Sep 2011 WO
2011063347 Oct 2011 WO
2011143501 Nov 2011 WO
2012057619 May 2012 WO
2012057620 May 2012 WO
2012057621 May 2012 WO
2012057622 May 2012 WO
2012057623 May 2012 WO
2012057620 Jun 2012 WO
2012074361 Jun 2012 WO
2012078126 Jun 2012 WO
2012082904 Jun 2012 WO
2012155119 Nov 2012 WO
2013003276 Jan 2013 WO
2013043751 Mar 2013 WO
2013043761 Mar 2013 WO
2013049699 Apr 2013 WO
2013055960 Apr 2013 WO
2013119706 Aug 2013 WO
2013126578 Aug 2013 WO
2014052974 Apr 2014 WO
2014032020 May 2014 WO
2014078443 May 2014 WO
2014130849 Aug 2014 WO
2014133974 Sep 2014 WO
2014138695 Sep 2014 WO
2014138697 Sep 2014 WO
2014144157 Sep 2014 WO
2014145856 Sep 2014 WO
2014149403 Sep 2014 WO
2014149902 Sep 2014 WO
2014150856 Sep 2014 WO
2014159721 Oct 2014 WO
2014159779 Oct 2014 WO
2014160142 Oct 2014 WO
2014164550 Oct 2014 WO
2014164909 Oct 2014 WO
2014165244 Oct 2014 WO
2014133974 Apr 2015 WO
2015048694 Apr 2015 WO
2015070105 May 2015 WO
2015074078 May 2015 WO
2015081279 Jun 2015 WO
Non-Patent Literature Citations (273)
Entry
Extended European Search Report for EP Application No. 13810429.4, Completed Jan. 7, 2016, dated Jan. 15, 2016, 6 Pgs.
Extended European Search Report for European Application EP12782935.6, completed Aug. 28, 2014, dated Sep. 4, 2014, 7 Pgs.
Extended European Search Report for European Application EP12804266.0, Report Completed Jan. 27, 2015, dated Feb. 3, 2015, 6 Pgs.
Extended European Search Report for European Application EP12835041.0, Report Completed Jan. 28, 2015, dated Feb. 4, 2015, 7 Pgs.
Extended European Search Report for European Application EP13751714.0, completed Aug. 5, 2015, dated Aug. 18, 2015, 8 Pgs.
Extended European Search Report for European Application EP13810229.8, Report Completed Apr. 14, 2016, dated Apr. 21, 2016, 7 pgs.
Extended European Search Report for European Application No. 13830945.5, Search completed Jun. 28, 2016, dated Jul. 7, 2016, 14 Pgs.
Extended European Search Report for European Application No. 13841613.6, Search completed Jul. 18, 2016, dated Jul. 26, 2016, 8 Pgs.
Extended European Search Report for European Application No. 14763087.5, Search completed Dec. 7, 2016, dated Dec. 19, 2016, 9 Pgs.
Extended European Search Report for European Application No. 14860103.2, Search completed Feb. 23, 2017, dated Mar. 3, 2017, 7 Pgs.
Supplementary European Search Report for EP Application No. 13831768.0, Search completed May 18, 2016, dated May 30, 2016, 13 Pgs.
Extended European Search Report for EP Application No. 11781313.9, Completed Oct. 1, 2013, dated Oct. 8, 2013, 6 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/058093, dated Sep. 18, 2013, Mailed Oct. 22, 2013, 40 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/059813, dated Apr. 15, 2014, 7 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2013/059991, dated Mar. 17, 2015, Mailed Mar. 26, 2015, 8 pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/056065, dated Feb. 24, 2015, Mailed Mar. 5, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/062720, dated Mar. 31, 2015, Mailed Apr. 9, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/024987, dated Aug. 12, 2014, Mailed Aug. 12, 2014, 13 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/027146, completed Aug. 26, 2014, dated Sep. 4, 2014, 10 Pgs.
Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, ICCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8.
Kang et al., “Handling Occlusions in Dense Multi-view Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110.
Kim et al., “Scene reconstruction from high spatio-angular resolution light fields”, ACM Transactions on Graphics (TOG)—SIGGRAPH 2013 Conference Proceedings, vol. 32, Issue 4, Article 73, Jul. 21, 2013, 11 pages.
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727.
Konolige, “Projected Texture Stereo”, 2010 IEEE International Conference on Robotics and Automation, May 3-7, 2010, p. 148-155.
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831.
Kubota et al., “Reconstructing Dense Light Field From Array of Multifocus Images for Novel View Synthesis”, IEEE Transactions on Image Processing, vol. 16, No. 1, Jan. 2007, pp. 269-279.
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Computer Vision and Pattern Recognition, Proceedings CVPR 94, Seattle, Washington, Jun. 21-23, 1994, 8 pgs.
Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, Proceedings—IEEE International Conference on Robotics and Automation, Conference Date May 9-13, 2011, 8 pgs., DOI: 10.1109/ICRA.2011.5980382.
Lane et al., “A Survey of Mobile Phone Sensing”, IEEE Communications Magazine, vol. 48, Issue 9, Sep. 2010, pp. 140-150.
Lee et al., “Electroactive Polymer Actuator for Lens-Drive Unit in Auto-Focus Compact Camera Module”, ETRI Journal, vol. 31, No. 6, Dec. 2009, pp. 695-702.
Lee et al., “Nonlocal matting”, CVPR 2011, Jun. 20-25, 2011, pp. 2193-2200.
Lensvector, “How LensVector Autofocus Works”, 2010, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg.
Levin et al., “A Closed Form Solution to Natural Image Matting”, Pattern Analysis and Machine Intelligence, Dec. 18, 2007, vol. 30, Issue 2, 8 pgs.
Levin et al., “Spectral Matting”, 2007 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 17-22, 2007, Minneapolis, MN, USA, pp. 1-8.
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Sep. 1, 2006, vol. 39, Issue No. 8, pp. 46-55.
Levoy et al., “Light Field Rendering”, Proc. ACM SIGGRAPH '96, pp. 1-12.
Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution”, Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/˜jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014.
Li et al., “Fusing Images With Different Focuses Using Support Vector Machines”, IEEE Transactions on Neural Networks, vol. 15, No. 6, Nov. 8, 2004, pp. 1555-1561.
Lim, “Optimized Projection Pattern Supplementing Stereo Systems”, 2009 IEEE International Conference on Robotics and Automation, May 12-17, 2009, pp. 2823-2829.
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120.
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10.
Martinez et al., “Simple Telemedicine for Developing Regions: Camera Phones and Paper-Based Microfluidic Devices for Real-Time, Off-Site Diagnosis”, Analytical Chemistry (American Chemical Society), vol. 80, No. 10, May 15, 2008, pp. 3699-3707.
McGuire et al., “Defocus video matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2005, vol. 24, Issue 3, Jul. 2005, pp. 567-576.
Merkle et al., “Adaptation and optimization of coding algorithms for mobile 3DTV”, Mobile3DTV Project No. 216503, Nov. 2008, 55 pgs.
Mitra et al., “Light Field Denoising, Light Field Superresolution and Stereo Camera Based Refocussing using a GMM Light Field Patch Prior”, Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on Jun. 16-21, 2012, pp. 22-28.
Moreno-Noguer et al., “Active Refocusing of Images and Videos”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Jul. 2007, 10 pages.
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs.
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 14, 2006, pp. 30-38.
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs.
Ng et al., “Light Field Photography with a Hand-held Plenoptic Camera”, Stanford Tech Report CTSR 2005-02, Apr. 20, 2005, pp. 1-11.
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378.
Nguyen et al., “Error Analysis for Image-Based Rendering with Depth Information”, IEEE Transactions on Image Processing, vol. 18, Issue 4, Apr. 2009, pp. 703-716.
Nguyen et al., “Image-Based Rendering with Depth Information Using the Propagation Algorithm”, Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005, vol. 5, Mar. 23-23, 2005, pp. II-589-II-592.
Nishihara, “PRISM: A Practical Real-Time Imaging Stereo Matcher”, Massachusetts Institute of Technology, A.I. Memo 780, May 1984, 32 pgs.
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900.
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, Jun. 2007, 12 pgs.
Park et al., “Multispectral Imaging Using Multiplexed Illumination”, 2007 IEEE 11th International Conference on Computer Vision, Oct. 14-21, 2007, Rio de Janeiro, Brazil, pp. 1-8.
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36.
Parkkinen et al., “Characteristic Spectra of Munsell Colors”, Journal of the Optical Society of America A, vol. 6, Issue 2, Feb. 1989, pp. 318-322.
Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, printed from www.raytrix.de, Jan. 22, 2012, 15 pgs.
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, Jul. 2008, pp. 1-19.
Philips 3D Solutions, “3D Interface Specifications, White Paper”, Feb. 15, 2008, 2005-2008 Philips Electronics Nederland B.V., Philips 3D Solutions retrieved from www.philips.com/3dsolutions, 29 pgs.
Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, printed Nov. 2, 2012 from http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html, 1 pg.
Pouydebasque et al., “Varifocal liquid lenses with integrated actuator, high focusing power and low operating voltage fabricated on 200 mm wafers”, Sensors and Actuators A: Physical, vol. 172, Issue 1, Dec. 2011, pp. 280-286.
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Dec. 2, 2008, vol. 18, No. 1, pp. 36-51.
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077.
Rajan et al., “Simultaneous Estimation of Super Resolved Scene and Depth Map from Low Resolution Defocused Observations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 9, Sep. 8, 2003, pp. 1-16.
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552.
Rhemann et al., “A perceptually motivated online benchmark for image matting”, 2009 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 20-25, 2009, Miami, FL, USA, pp. 1826-1833.
Rhemann et al., “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell, 2013, vol. 35, No. 2, pp. 504-511.
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/039155, completed Nov. 4, 2014, dated Nov. 13, 2014, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/046002, dated Dec. 31, 2014, Mailed Jan. 8, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/048772, dated Dec. 31, 2014, Mailed Jan. 8, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/056502, dated Feb. 24, 2015, Mailed Mar. 5, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/069932, dated May 19, 2015, Mailed May 28, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/017766, dated Aug. 25, 2015, Mailed Sep. 3, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/018084, dated Aug. 25, 2015, Mailed Sep. 3, 2015, 11 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/018116, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/021439, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022118, dated Sep. 8, 2015, Mailed Sep. 17, 2015, 4 pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022123, dated Sep. 8, 2015, Mailed Sep. 17, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022774, dated Sep. 22, 2015, Mailed Oct. 1, 2015, 5 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/023762, dated Mar. 2, 2015, Mailed Mar. 9, 2015, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024407, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024903, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024947, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/025100, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/025904, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 5 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/028447, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/030692, dated Sep. 15, 2015, Mailed Sep. 24, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/064693, dated May 10, 2016, Mailed May 19, 2016, 14 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/066229, dated May 24, 2016, Mailed Jun. 6, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/067740, dated May 31, 2016, Mailed Jun. 9, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2015/019529, dated Sep. 13, 2016, Mailed Sep. 22, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2010/057661, dated May 22, 2012, mailed May 31, 2012, 10 pgs.
International Preliminary Report on Patentability for International Application PCT/US2011/036349, dated Nov. 13, 2012, Mailed Nov. 22, 2012, 9 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/046002, completed Nov. 13, 2013, dated Nov. 29, 2013, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/056065, Completed Nov. 25, 2013, dated Nov. 26, 2013, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/059991, Completed Feb. 6, 2014, dated Feb. 26, 2014, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2011/064921, Completed Feb. 25, 2011, dated Mar. 6, 2012, 17 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/024987, Completed Mar. 27, 2013, dated Apr. 15, 2013, 14 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/027146, completed Apr. 2, 2013, dated Apr. 19, 2013, 11 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/039155, completed Jul. 1, 2013, dated Jul. 11, 2013, 11 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/048772, Completed Oct. 21, 2013, dated Nov. 8, 2013, 6 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/056502, Completed Feb. 18, 2014, dated Mar. 19, 2014, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/069932, Completed Mar. 14, 2014, dated Apr. 14, 2014, 12 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2015/019529, completed May 5, 2015, dated Jun. 8, 2015, 11 Pgs.
International Search Report and Written Opinion for International Application PCT/US2011/036349, completed Aug. 11, 2011, dated Aug. 22, 2011, 11 pgs.
International Search Report and Written Opinion for International Application PCT/US2013/062720, completed Mar. 25, 2014, dated Apr. 21, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/017766, completed May 28, 2014, dated Jun. 18, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/018084, completed May 23, 2014, dated Jun. 10, 2014, 12 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/018116, completed May 13, 2014, dated Jun. 2, 2014, 12 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/021439, completed Jun. 5, 2014, dated Jun. 20, 2014, 10 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022118, completed Jun. 9, 2014, dated Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022774 report completed Jun. 9, 2014, dated Jul. 14, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024407, report completed Jun. 11, 2014, dated Jul. 8, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/025100, report completed Jul. 7, 2014, dated Aug. 7, 2014, 5 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/025904 report completed Jun. 10, 2014, dated Jul. 10, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2009/044687, completed Jan. 5, 2010, dated Jan. 13, 2010, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2010/057661, completed Mar. 9, 2011, dated Mar. 17, 2011, 14 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/037670, Completed Jul. 5, 2012, dated Jul. 18, 2012, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/044014, completed Oct. 12, 2012, dated Oct. 26, 2012, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/056151, completed Nov. 14, 2012, dated Nov. 30, 2012, 10 pgs.
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228.
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2002, pp. 208-215.
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995 Proceedings of the 1995 International Conference on Image Processing, Date of Conference: Oct. 23-26, 1995, pp. 93-96.
Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision 48, Conference Date Jan. 7, 1998, 29 pgs., DOI: 10.1109/ICCV.1998.710696.
Shotton et al., “Real-time human pose recognition in parts from single depth images”, CVPR 2011, Jun. 20-25, 2011, Colorado Springs, CO, USA, pp. 1297-1304.
Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162. Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5, 2014.
Silberman et al., “Indoor segmentation and support inference from RGBD images”, ECCV'12 Proceedings of the 12th European conference on Computer Vision, vol. Part V, Oct. 7-13, 2012, Florence, Italy, pp. 746-760.
Stober, “Stanford researchers developing 3-D camera with 12,616 lenses”, Stanford Report, Mar. 19, 2008, Retrieved from: http://news.stanford.edu/news/2008/march19/camera-031908.html, 5 pgs.
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759.
Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, 8 pgs.; DOI: 10.1109/CVPR.2008.4587659.
Taguchi et al., “Rendering-Oriented Decoding for a Distributed Multiview Coding System Using a Coset Code”, Hindawi Publishing Corporation, EURASIP Journal on Image and Video Processing, vol. 2009, Article ID 251081, Online: Apr. 22, 2009, 12 pages.
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975.
Tallon et al., “Upsampling and Denoising of Depth Maps Via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs.
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117.
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813.
Tao et al., “Depth from Combining Defocus and Correspondence Using Light-Field Cameras”, ICCV '13 Proceedings of the 2013 IEEE International Conference on Computer Vision, Dec. 1, 2013, pp. 673-680.
Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer vol. 77, No. 9, Sep. 1996, 93-100.
Tseng et al., “Automatic 3-D depth recovery from a single urban-scene image”, 2012 Visual Communications and Image Processing, Nov. 27-30, 2012, San Diego, CA, USA, pp. 1-6.
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2, Jun. 17-22, 2006, pp. 2331-2338.
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs.
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs.
Veilleux, “CCD Gain Lab: The Theory”, University of Maryland, College Park—Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 [online], [retrieved on May 13, 2014]. Retrieved from the Internet <URL: http://www.astro.umd.edu/˜veilleux/ASTR310/fall06/ccd_theory.pdf>, 5 pgs.
Venkataraman et al., “PiCam: An Ultra-Thin High Performance Monolithic Camera Array”, ACM Transactions on Graphics (TOG), ACM, US, vol. 32, No. 6, Nov. 1, 2013, pp. 1-13.
Vetro et al., “Coding Approaches for End-To-End 3D TV Systems”, Mitsubishi Electric Research Laboratories, Inc., TR2004-137, Dec. 2004, 6 pgs.
Viola et al., “Robust Real-time Object Detection”, Cambridge Research Laboratory, Technical Report Series, Compaq, CRL 2001/01, Feb. 2001, Printed from: http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf, 30 pgs.
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008, 5 pages.
Wang et al., “Automatic Natural Video Matting with Depth”, 15th Pacific Conference on Computer Graphics and Applications, PG '07, Oct. 29-Nov. 2, 2007, Maui, HI, USA, pp. 469-472.
Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, Dec. 29, 2010, OPTI521 Tutorial, 10 pgs.
Wang et al., “Soft scissors: an interactive tool for realtime high quality matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Article 9, Jul. 2007, 6 pages, published Aug. 5, 2007.
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426.
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, Mar. 11, 2005, vol. 5674, 12 pgs.
Wieringa et al., “Remote Non-invasive Stereoscopic Imaging of Blood Vessels: First In-vivo Results of a New Multispectral Contrast Enhancement Technology”, Annals of Biomedical Engineering, vol. 34, No. 12, Dec. 2006, pp. 1870-1878, Published online Oct. 12, 2006.
Wikipedia, “Polarizing Filter (Photography)”, retrieved from http://en.wikipedia.org/wiki/Polarizing_filter_(photography) on Dec. 12, 2012, last modified on Sep. 26, 2012, 5 pgs.
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs.
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 1-12.
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 2, Jun. 27-Jul. 2, 2004, pp. 294-301.
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs.
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, 59622C-1-59622C-11.
Wu et al., “A virtual view synthesis algorithm based on image inpainting”, 2012 Third International Conference on Networking and Distributed Computing, Hangzhou, China, Oct. 21-24, 2012, pp. 153-156.
Xu, “Real-Time Realistic Rendering and High Dynamic Range Image Display and Compression”, Dissertation, School of Computer Science in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida, Fall Term 2005, 192 pgs.
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), published Jul. 26, 2002, pp. 1-10.
Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Proceedings of SPIE—The International Society for Optical Engineering, Jul. 2002, 8 pgs.
Yokochi et al., “Extrinsic Camera Parameter Estimation Based-on Feature Tracking and GPS Data”, 2006, Nara Institute of Science and Technology, Graduate School of Information Science, LNCS 3851, pp. 369-378.
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, published Aug. 8, 2004, 12 pgs.
Zhang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, Proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-770505-8, XP055113797 ISSN: 0277-786X, DOI: 10.1117/12.852171.
Zheng et al., “Balloon Motion Estimation Using Two Frames”, Proceedings of the Asilomar Conference on Signals, Systems and Computers, IEEE, Comp. Soc. Press, US, vol. 2 of 02, Nov. 4, 1991, pp. 1057-1061.
Zhu et al., “Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, Anchorage, AK, USA, pp. 1-8.
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6.
Debevec et al., “Recovering High Dynamic Range Radiance Maps from Photographs”, Computer Graphics (ACM SIGGRAPH Proceedings), Aug. 16, 1997, 10 pgs.
Do, “Immersive Visual Communication with Depth”, Presented at Microsoft Research, Jun. 15, 2011, Retrieved from: http://minhdo.ece.illinois.edu/talks/ImmersiveComm.pdf, 42 pgs.
Do et al., “Immersive Visual Communication”, IEEE Signal Processing Magazine, vol. 28, Issue 1, Jan. 2011, DOI: 10.1109/MSP.2010.939075, Retrieved from: http://minhdo.ece.illinois.edu/publications/ImmerComm_SPM.pdf, pp. 58-66.
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), Ottawa, Ontario, Canada, Jun. 13-16, 2005, pp. 540-547.
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, Jun. 20-25, 2005, pp. 351-358.
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 5, 2006, vol. 83, Issue 3, 8 pgs.
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310.
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, Nov. 21, 2008, vol. 3, pp. 1-6.
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 24, 2004, pp. 89-100.
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551.
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, Apr. 6, 2006, vol. 1, pp. R1-R16.
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, Oct. 17, 2005, pp. 59622A-1-59622A-12.
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs.
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903.
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Proc. SPIE 5301, Jan. 2004, pp. 25-33.
Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, Apr. 21, 2006, vol. 6196, pp. 619607-1-619607-15.
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418.
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956.
Duparre et al., “Ultra-Thin Camera Based on Artificial Apposition Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs.
Eng, Wei Yong et al., “Gaze correction for 3D tele-immersive communication system”, 2013 IEEE 11th IVMSP Workshop, IEEE, Jun. 10, 2013.
Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012 (Oct. 11, 2012). Retrieved from the Internet at URL:<http://www.site.uottawa.ca/~edubois/theses/Fanaswala_thesis.pdf>, Aug. 2009, 163 pgs.
Fang et al., “Volume Morphing Methods for Landmark Based 3D Image Deformation”, SPIE vol. 2710, Proc. 1996 SPIE Intl Symposium on Medical Imaging, Newport Beach, CA, Feb. 10, 1996, pp. 404-415.
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, Feb. 2, 2006, vol. 6069, 8 pgs.
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, Aug. 12, 2004, vol. 14, pp. 47-57.
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, published Sep. 3, 2004, vol. 13, No. 10, pp. 1327-1344.
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, date of publication Dec. 12, 2005, pp. 141-159.
Fecker et al., “Depth Map Compression for Unstructured Lumigraph Rendering”, Proc. SPIE 6077, Proceedings Visual Communications and Image Processing 2006, Jan. 18, 2006, pp. 60770B-1-60770B-8.
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs.
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284.
Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7 μm Pixels in 0.11 μm CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50.
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 191-198.
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 49-58.
Gastal et al., “Shared Sampling for Real-Time Alpha Matting”, Computer Graphics Forum, EUROGRAPHICS 2010, vol. 29, Issue 2, May 2010, pp. 575-584.
Georgiev et al., “Light Field Camera Design for Integral View Photography”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs.
Georgiev et al., “Light-Field Capture by Multiplexing in the Frequency Domain”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs.
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, Oct. 19-22, 2008, Monterey CA, USA, pp. 3-12.
Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, published Aug. 1, 1996, pp. 43-54.
Gupta et al., “Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 564-571.
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, vol. 30, No. 4, Aug. 7, 2011, pp. 70:1-70:10.
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs.
Hardie, “A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, published Nov. 19, 2007, vol. 16, No. 12, pp. 2953-2964.
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, 2010 International Conference: Computational Photography (ICCP) Mar. 2010, pp. 1-8.
Hernandez-Lopez et al., “Detecting objects using color and depth segmentation with Kinect sensor”, Procedia Technology, vol. 3, Jan. 1, 2012, pp. 196-204, XP055307680, ISSN: 2212-0173, DOI: 10.1016/j.protcy.2012.03.021.
HOLOEYE Photonics AG, “Spatial Light Modulators”, Oct. 2, 2013, Brochure retrieved from https://web.archive.org/web/20131002061028/http://holoeye.com/wp-content/uploads/Spatial_Light_Modulators.pdf on Oct. 13, 2017, 4 pgs.
HOLOEYE Photonics AG, “Spatial Light Modulators”, Sep. 18, 2013, retrieved from https://web.archive.org/web/20130918113140/http://holoeye.com/spatial-light-modulators/ on Oct. 13, 2017, 4 pages.
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, Jan. 29, 2010, vol. 3, pp. 022501-1-022501-3.
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, Oct. 13, 2011, vol. 4, pp. 112501-1-112501-3.
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D, Jan. 1, 2007, pp. 121-128.
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, pp. 297-306.
Janoch et al., “A category-level 3-D object dataset: Putting the Kinect to work”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Nov. 6-13, 2011, Barcelona, Spain, pp. 1168-1174.
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80.
Jiang et al., “Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, Jun. 17-22, 2006, New York, NY, USA, pp. 371-378.
International Search Report and Written Opinion for International Application PCT/US2012/058093, completed Nov. 15, 2012, dated Nov. 29, 2012, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/059813, completed Dec. 17, 2012, dated Jan. 7, 2013, 8 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022123, completed Jun. 9, 2014, dated Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/023762, Completed May 30, 2014, dated Jul. 3, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024903, completed Jun. 12, 2014, dated Jun. 27, 2014, 13 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024947, Completed Jul. 8, 2014, dated Aug. 5, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/028447, completed Jun. 30, 2014, dated Jul. 21, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/030692, completed Jul. 28, 2014, dated Aug. 27, 2014, 7 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/064693, Completed Mar. 7, 2015, dated Apr. 2, 2015, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/066229, Completed Mar. 6, 2015, dated Mar. 19, 2015, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/067740, Completed Jan. 29, 2015, dated Mar. 3, 2015, 10 pgs.
Office Action for U.S. Appl. No. 12/952,106, dated Aug. 16, 2012, 12 pgs.
“Exchangeable image file format for digital still cameras: Exif Version 2.2”, Japan Electronics and Information Technology Industries Association, Prepared by Technical Standardization Committee on AV & IT Storage Systems and Equipment, JEITA CP-3451, Apr. 2002, Retrieved from: http://www.exif.org/Exif2-2.PDF, 154 pgs.
“File Formats Version 6”, Alias Systems, 2004, 40 pgs.
“Light fields and computational photography”, Stanford Computer Graphics Laboratory, Retrieved from: http://graphics.stanford.edu/projects/lightfield/, Earliest publication online: Feb. 10, 1997, 3 pgs.
Aufderheide et al., “A MEMS-based Smart Sensor System for Estimation of Camera Pose for Computer Vision Applications”, Research and Innovation Conference 2011, Jul. 29, 2011, pp. 1-10.
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183.
Barron et al., “Intrinsic Scene Properties from a Single RGB-D Image”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 17-24.
Bennett et al., “Multispectral Bilateral Video Fusion”, 2007 IEEE Transactions on Image Processing, vol. 16, No. 5, May 2007, published Apr. 16, 2007, pp. 1185-1194.
Bennett et al., “Multispectral Video Fusion”, Computer Graphics (ACM SIGGRAPH Proceedings), Jul. 25, 2006, published Jul. 30, 2006, 1 pg.
Bertero et al., “Super-resolution in computational imaging”, Micron, Jan. 1, 2003, vol. 34, Issues 6-7, 17 pgs.
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV Nov. 8, 2010, Part II, LNCS 6493, pp. 186-200.
Bishop et al., “Light Field Superresolution”, Computational Photography (ICCP), 2009 IEEE International Conference, Conference Date Apr. 16-17, published Jan. 26, 2009, 9 pgs.
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, published Aug. 18, 2011, pp. 972-986.
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs.
Borman et al, “Image Sequence Processing”, Dekker Encyclopedia of Optical Engineering, Oct. 14, 2002, 81 pgs.
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 28, 1998, vol. 3653, 10 pgs.
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, published Jul. 1, 2003, vol. 5016, 12 pgs.
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 21, 2004, vol. 5299, 12 pgs.
Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, Sep. 22, 1998, vol. 3459, 9 pgs.
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473.
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378.
Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, Aug. 2006, vol. 15, Issue 8, published Jul. 17, 2006, pp. 2239-2248.
Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE—IS&T Electronic Imaging, Feb. 3, 2009, vol. 7246, pp. 72460X-1-72460X-9; doi: 10.1117/12.810369.
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084.
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, May 13, 2010, 11 pgs.
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394.
Bryan et al., “Perspective Distortion from Interpersonal Distance is an Implicit Visual Cue for Social Judgments of Faces”, PLOS One, vol. 7, Issue 9, Sep. 26, 2012, e45301, doi:10.1371/journal.pone.0045301, 9 pages.
Capel, “Image Mosaicing and Super-resolution”, Retrieved on Nov. 10, 2012, Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, Trinity Term 2001, 269 pgs.
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, Jan. 1, 2006, vol. 3, pp. 623-626.
Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE, ICASSP, Jun. 19, 2006, pp. 1177-1180.
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim Syst Sign Process, published online Feb. 23, 2007, vol. 18, pp. 83-101.
Chen et al., “Image Matting with Local and Nonlocal Smooth Priors”, CVPR '13 Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23, 2013, pp. 1902-1907.
Chen et al., “Interactive deformation of light fields”, In Proceedings of SIGGRAPH I3D, Apr. 3, 2005, pp. 139-146.
Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188.
Cooper et al., “The perceptual basis of common photographic practice”, Journal of Vision, vol. 12, No. 5, Article 8, May 25, 2012, pp. 1-14.
Crabb et al., “Real-time foreground segmentation via range and color imaging”, 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, Jun. 23-28, 2008, pp. 1-5.
Robert et al., “Dense Depth Map Reconstruction: A Minimization and Regularization Approach which Preserves Discontinuities”, European Conference on Computer Vision (ECCV), pp. 439-451, 1996.
Van Der Wal et al., “The Acadia Vision Processor”, Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception, Sep. 13, 2000, Padova, Italy, pp. 31-40.
Related Publications (1)
Number Date Country
20170257562 A1 Sep 2017 US
Provisional Applications (2)
Number Date Country
61786976 Mar 2013 US
61767520 Feb 2013 US
Continuations (2)
Number Date Country
Parent 15253605 Aug 2016 US
Child 15599900 US
Parent 14186871 Feb 2014 US
Child 15253605 US