Systems and methods for encoding structured light field image files

Information

  • Patent Grant
  • Patent Number
    9,036,928
  • Date Filed
    Thursday, September 4, 2014
  • Date Issued
    Tuesday, May 19, 2015
Abstract
Systems and methods configured to store images synthesized from light field image data and metadata describing the images in electronic files and render images using the stored image and the metadata in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to: synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image and metadata including the depth map.
Description
FIELD OF THE INVENTION

The present invention relates to encoding of image files and more specifically to the encoding of light field image files.


BACKGROUND

The ISO/IEC 10918-1 standard, more commonly referred to as the JPEG standard after the Joint Photographic Experts Group that developed the standard, establishes a standard process for digital compression and coding of still images. The JPEG standard specifies a codec for compressing an image into a bitstream and for decompressing the bitstream back into an image.


A variety of container file formats, including the JPEG File Interchange Format (JFIF) specified in ISO/IEC 10918-5 and the Exchangeable Image File Format (Exif), can be used to store a JPEG bitstream. JFIF can be considered a minimal file format that enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. The color space used in JFIF files is YCbCr as defined by CCIR Recommendation 601. The Y, Cb, and Cr components of the image file are converted from R, G, and B, but are normalized so as to occupy the full 256 levels of an 8-bit binary encoding. YCbCr is one of the color spaces commonly used in JPEG compression. Another option is to compress the R, G, and B color planes directly, an approach that is particularly popular when lossless compression is being applied.
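

For illustration, the full-range conversion described above can be expressed in a few lines of code. The following sketch (the function name is ours) applies the CCIR Recommendation 601 luma coefficients used by JFIF, with the chroma components offset by 128 so that all three components occupy the full 8-bit range.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range (0-255) RGB to YCbCr conversion as used by JFIF,
    based on the CCIR Recommendation 601 luma coefficients."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    clamp = lambda v: max(0, min(255, round(v)))  # keep within 8 bits
    return clamp(y), clamp(cb), clamp(cr)
```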


A JPEG bitstream stores 16-bit word values in big-endian format. JPEG data in general is stored as a stream of blocks, and each block is identified by a marker value. The first two bytes of every JPEG bitstream are the Start Of Image (SOI) marker values FFh D8h. In a JFIF-compliant file there is a JFIF APP0 (Application) marker, immediately following the SOI, which consists of the marker code values FFh E0h and the characters JFIF in the marker data, as described below. In addition to the JFIF marker segment, there may be one or more optional JFIF extension marker segments, followed by the actual image data.


Overall, the JFIF format supports sixteen “Application markers” to store metadata. Using application markers makes it possible for a decoder to parse a JFIF file and decode only required segments of image data. Application markers are limited to 64K bytes each, but it is possible to use the same marker ID multiple times and refer to different memory segments.
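

The block structure described in the preceding paragraphs can be traversed without decoding any image data. The sketch below (ours, not part of either standard) walks a JPEG bitstream and yields each marker segment, reading every segment length as a big-endian 16-bit word that includes the two length bytes themselves.

```python
import struct

def marker_segments(jpeg_bytes):
    """Yield (marker, offset, payload) for each marker segment between
    the SOI marker and the Start of Scan (SOS) marker."""
    assert jpeg_bytes[0:2] == b'\xFF\xD8', "not a JPEG bitstream (no SOI)"
    pos = 2
    while pos + 4 <= len(jpeg_bytes):
        assert jpeg_bytes[pos] == 0xFF, "expected a marker"
        marker = jpeg_bytes[pos + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # Segment length is big-endian and counts the two length bytes.
        (length,) = struct.unpack('>H', jpeg_bytes[pos + 2:pos + 4])
        yield marker, pos, jpeg_bytes[pos + 4:pos + 2 + length]
        pos += 2 + length
```

A decoder that encounters an unsupported APPn marker (codes E0h through EFh) simply ignores the yielded payload and continues, which is the skipping behavior described above.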


An APP0 marker after the SOI marker is used to identify a JFIF file. Additional APP0 marker segments can optionally be used to specify JFIF extensions. When a decoder does not support decoding a specific JFIF application marker, the decoder can skip the segment and continue decoding.


One of the most popular file formats used by digital cameras is Exif. When Exif is employed with JPEG bitstreams, an APP1 Application marker is used to store the Exif data. The Exif tag structure is borrowed from the Tagged Image File Format (TIFF) maintained by Adobe Systems Incorporated of San Jose, Calif.


SUMMARY OF THE INVENTION

Systems and methods in accordance with embodiments of the invention are configured to store images synthesized from light field image data and metadata describing the images in electronic files. One embodiment of the invention includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to: synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image and metadata describing the encoded image, where the metadata includes the depth map.


In a further embodiment, the encoding application configures the processor to encode the depth map, and the depth map included in the metadata describing the encoded image is the encoded depth map.


In another embodiment, synthesizing the higher resolution image involves identifying pixels in the plurality of low resolution images of the scene that are occluded in the reference viewpoint, and the metadata describing the encoded image in the light field image file includes descriptions of the occluded pixels.


In a still further embodiment, the descriptions of the occluded pixels include colors, locations, and depths of the occluded pixels.


In still another embodiment, synthesizing the higher resolution image involves creating a confidence map for the depth map, where the confidence map indicates the reliability of the depth value for a pixel in the depth map, and the metadata describing the encoded image in the light field image file includes the confidence map.


In a yet further embodiment, the encoding application configures the processor to encode the confidence map.


In yet another embodiment, the encoding application configures the processor to generate an edge map that indicates pixels in the synthesized image that lie on a discontinuity, and the metadata describing the encoded image in the light field image file includes the edge map.


In a further embodiment again, the edge map identifies whether a pixel lies on an intensity discontinuity.


In another embodiment again, the edge map identifies whether a pixel lies on an intensity and depth discontinuity.


In a further additional embodiment, the encoding application configures the processor to encode the edge map.


In another additional embodiment, the encoding application configures the processor to generate a missing pixel map that indicates pixels in the synthesized image that do not correspond to a pixel from the plurality of low resolution images of the scene and that are generated by interpolating pixel values from adjacent pixels in the synthesized image, and the metadata describing the encoded image in the light field image file includes the missing pixel map.


In a still yet further embodiment, the encoding application configures the processor to encode the missing pixel map.


In still yet another embodiment, the metadata also includes a focal plane.


In a still further embodiment again, the light field image file conforms to the JFIF standard.


In still another embodiment again, the high resolution image is encoded in accordance with the JPEG standard.


In a still further additional embodiment, the metadata is located within an Application marker segment within the light field image file.


In still another additional embodiment, the Application marker segment is identified using the APP9 marker.


In a yet further embodiment again, the encoding application configures the processor to encode the depth map in accordance with the JPEG standard using lossless compression and the encoded depth map is stored within the Application marker segment containing the metadata.


In yet another embodiment again, synthesizing the higher resolution image involves identifying pixels from the plurality of low resolution images of the scene that are occluded in the reference viewpoint, and descriptions of the occluded pixels are stored within the Application marker segment containing the metadata.


In a yet further additional embodiment, the descriptions of the occluded pixels include colors, locations, and depths of the occluded pixels.


In yet another additional embodiment, synthesizing the higher resolution image involves creating a confidence map for the depth map, where the confidence map indicates the reliability of the depth value for a pixel in the depth map, and the confidence map is stored within the Application marker segment containing the metadata.


In a further additional embodiment again, the encoding application configures the processor to encode the confidence map in accordance with the JPEG standard using lossless compression.


In another additional embodiment again, the encoding application configures the processor to generate an edge map that indicates pixels in the synthesized image that lie on a discontinuity, and the edge map is stored within the Application marker segment containing the metadata.


In a still yet further embodiment again, the edge map identifies whether a pixel lies on an intensity discontinuity.


In still yet another embodiment again, the edge map identifies whether a pixel lies on an intensity and depth discontinuity.


In a still yet further additional embodiment, the encoding application configures the processor to encode the edge map in accordance with the JPEG standard using lossless compression.


In still yet another additional embodiment, the encoding application configures the processor to generate a missing pixel map that indicates pixels in the synthesized image that do not correspond to a pixel from the plurality of low resolution images of the scene and that are generated by interpolating pixel values from adjacent pixels in the synthesized image, and the missing pixel map is stored within the Application marker segment containing the metadata.


In a yet further additional embodiment again, the encoding application configures the processor to encode the missing pixel map in accordance with the JPEG standard using lossless compression.


An embodiment of the method of the invention includes synthesizing a higher resolution image of a scene from a reference viewpoint and a depth map that describes depths of pixels in the synthesized image using an encoding device and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints and synthesizing the higher resolution image includes creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encoding the higher resolution image using the encoding device; and creating a light field image file including the encoded image and metadata describing the encoded image using the encoding device, where the metadata includes the depth map.


Another embodiment of the invention includes a machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process involving: synthesizing a higher resolution image of a scene from a reference viewpoint using light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints and synthesizing the higher resolution image includes creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encoding the higher resolution image; and creating a light field image file including the encoded image and metadata describing the encoded image, where the metadata includes the depth map.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 conceptually illustrates the architecture of an array camera configured to generate light field image files in accordance with embodiments of the invention.



FIG. 2 is a flow chart of a process for creating a light field image file including an image synthesized from light field image data and a depth map for the synthesized image generated using the light field image data in accordance with an embodiment of the invention.



FIG. 3 is a process for creating a light field image file that conforms to the JFIF standard and that includes an image encoded in accordance with the JPEG standard in accordance with an embodiment of the invention.



FIG. 4 illustrates an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 5 illustrates a “DZ Selection Descriptor” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 6 illustrates a “Depth Map, Camera Array and Auxiliary Maps Selection Descriptor” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 7 illustrates a “Depth Map, Camera Array and Auxiliary Maps Compression Descriptor” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 8 illustrates a “Depth Map Attributes” field within a “Depth Map Header” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 9 illustrates a “Depth Map Descriptor” field within a “Depth Map Header” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 10 illustrates a “Depth Map Data Descriptor” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 11 illustrates a “Camera Array Attributes” field within a “Camera Array Header” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 12 illustrates a “Camera Array Descriptor” field within a “Camera Array Header” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 13 illustrates an “Individual Camera Descriptor” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 14 illustrates “Individual Camera Data” within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 15 illustrates an “Individual Pixel Data Structure” within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 16 illustrates an “Auxiliary Map Descriptor” within an “Auxiliary Map Header” contained within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 17 illustrates an “Auxiliary Map Data Descriptor” within an APP9 Application marker segment of a light field image file that conforms to the JFIF standard in accordance with an embodiment of the invention.



FIG. 18 illustrates a network including at least one encoding device configured to capture light field image data and encode light field image files and to share the light field image file with rendering devices via a network in accordance with an embodiment of the invention.



FIG. 19 conceptually illustrates a rendering device configured by a rendering application to render an image using a light field image file.



FIG. 20 is a flow chart illustrating a process for rendering an image using a light field image file in accordance with an embodiment of the invention.



FIG. 21 conceptually illustrates a rendering device configured by a rendering application to render an image using a light field image file containing an image and/or a map encoded in accordance with the JPEG standard.



FIG. 22 is a flow chart illustrating a process for rendering an image using a light field image file that conforms to the JFIF standard and includes an image encoded in accordance with the JPEG standard and metadata describing the encoded image.



FIG. 23 is a flow chart illustrating a process for applying depth dependent effects to an encoded image contained within a light field image file based upon a depth map contained within the light field image file in accordance with an embodiment of the invention.



FIG. 24 is a flow chart illustrating a process for rendering an image from a different viewpoint to a reference viewpoint of an image contained within a light field image file in accordance with an embodiment of the invention.





DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files and for rendering images using the stored images and metadata in accordance with embodiments of the invention are illustrated. A file containing an image synthesized from light field image data and metadata derived from the light field image data can be referred to as a light field image file. As is discussed further below, the encoded image in a light field image file is typically synthesized using a super resolution process from a number of lower resolution images. The light field image file can also include metadata describing the synthesized image derived from the light field image data that enables post processing of the synthesized image. In many embodiments, a light field image file is created by encoding an image synthesized from light field image data and combining the encoded image with a depth map derived from the light field image data. In several embodiments, the encoded image is synthesized from a reference viewpoint and the metadata includes information concerning pixels in the light field image that are occluded from the reference viewpoint. In a number of embodiments, the metadata can also include additional information including (but not limited to) auxiliary maps such as confidence maps, edge maps, and missing pixel maps that can be utilized during post processing of the encoded image to improve the quality of an image rendered using the light field image data file.


In many embodiments, the light field image file is compatible with the JPEG File Interchange Format (JFIF). The synthesized image is encoded as a JPEG bitstream and stored within the file. The accompanying depth map, occluded pixels and/or any appropriate additional information including (but not limited to) auxiliary maps are then stored within the JFIF file as metadata using an Application marker to identify the metadata. A legacy rendering device can simply display the synthesized image by decoding the JPEG bitstream. Rendering devices in accordance with embodiments of the invention can perform additional post-processing on the decoded JPEG bitstream using the depth map and/or any available auxiliary maps. In many embodiments, the maps included in the metadata can also be compressed using lossless JPEG encoding and decoded using a JPEG decoder. Although much of the discussion that follows references the JFIF and JPEG standards, these standards are simply discussed as examples and it should be appreciated that similar techniques can be utilized to embed metadata derived from light field image data used to synthesize the encoded image within a variety of standard file formats, where the synthesized image and/or maps are encoded using any of a variety of standards based image encoding processes.


By transmitting a light field image file including an encoded image, and metadata describing the encoded image, a rendering device (i.e. a device configured to generate an image rendered using the information within the light field image file) can render new images using the information within the file without the need to perform super resolution processing on the original light field image data. In this way, the amount of data transmitted to the rendering device and the computational complexity of rendering an image is reduced. In several embodiments, rendering devices are configured to perform processes including (but not limited to) refocusing the encoded image based upon a focal plane specified by the user, synthesizing an image from a different viewpoint, and generating a stereo pair of images. The capturing of light field image data and the encoding and decoding of light field image files in accordance with embodiments of the invention are discussed further below.


Capturing Light Field Image Data


A light field, which is often defined as a 4D function characterizing the light from all directions at all points in a scene, can be interpreted as a two-dimensional (2D) collection of 2D images of a scene. Array cameras, such as those described in U.S. patent application Ser. No. 12/935,504 entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., can be utilized to capture light field images. In a number of embodiments, super resolution processes such as those described in U.S. patent application Ser. No. 12/967,807 entitled “Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al. are utilized to synthesize a higher resolution 2D image or a stereo pair of higher resolution 2D images from the lower resolution images in the light field captured by an array camera. The terms high or higher resolution and low or lower resolution are used here in a relative sense and not to indicate the specific resolutions of the images captured by the array camera. The disclosures of U.S. patent application Ser. No. 12/935,504 and U.S. patent application Ser. No. 12/967,807 are hereby incorporated by reference in their entirety.


Each two-dimensional (2D) image in a captured light field is from the viewpoint of one of the cameras in the array camera. A high resolution image synthesized using super resolution processing is synthesized from a specific viewpoint that can be referred to as a reference viewpoint. The reference viewpoint can be from the viewpoint of one of the cameras in a camera array. Alternatively, the reference viewpoint can be an arbitrary virtual viewpoint.


Due to the different viewpoint of each of the cameras, parallax results in variations in the position of foreground objects within the images of the scene. Processes for performing parallax detection are discussed in U.S. Provisional Patent Application Ser. No. 61/691,666 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras” to Venkataraman et al., the disclosure of which is incorporated by reference herein in its entirety. As is disclosed in U.S. Provisional Patent Application Ser. No. 61/691,666, a depth map from a reference viewpoint can be generated by determining the disparity between the pixels in the images within a light field due to parallax. A depth map indicates the distance of the surfaces of scene objects from a reference viewpoint. In a number of embodiments, the computational complexity of generating depth maps is reduced by generating an initial low resolution depth map and then increasing the resolution of the depth map in regions where additional depth information is desirable such as (but not limited to) regions involving depth transitions and/or regions containing pixels that are occluded in one or more images within the light field.
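

For a rectified pair of cameras, the relationship between parallax-induced disparity and depth follows the standard pinhole-camera relation. The following minimal sketch illustrates that relation; the parameter names are ours, and this is a simplification rather than the specific process of the incorporated application.

```python
def disparity_to_depth(disparity_px, baseline_m, focal_px):
    """Standard rectified-stereo relation: depth = baseline * focal / disparity.
    Larger parallax-induced disparity means a closer foreground object."""
    if disparity_px <= 0:
        return float('inf')  # no measurable parallax: treat as infinitely far
    return baseline_m * focal_px / disparity_px
```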


During super resolution processing, a depth map can be utilized in a variety of ways. U.S. patent application Ser. No. 12/967,807 describes how a depth map can be utilized during super resolution processing to dynamically refocus a synthesized image by blurring it so that portions of the scene that do not lie on the focal plane appear out of focus. U.S. patent application Ser. No. 12/967,807 also describes how a depth map can be utilized during super resolution processing to generate a stereo pair of higher resolution images for use in 3D applications. A depth map can also be utilized to synthesize a high resolution image from one or more virtual viewpoints. In this way, a rendering device can simulate motion parallax and a dolly zoom (i.e. virtual viewpoints in front of or behind the reference viewpoint). In addition to utilizing a depth map during super-resolution processing, a depth map can be utilized in a variety of post processing processes to achieve effects including (but not limited to) dynamic refocusing, generation of stereo pairs, and generation of virtual viewpoints without performing super-resolution processing. Light field image data captured by array cameras, storage of the light field image data in a light field image file, and the rendering of images using the light field image file in accordance with embodiments of the invention are discussed further below.
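

As a concrete illustration of dynamic refocusing as a post processing effect, the sketch below blurs each pixel of a synthesized image in proportion to how far its depth lies from a chosen focal plane. The blur model, helper names, and parameters are our assumptions for illustration; this is not the super-resolution process of the incorporated applications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth_map, focal_depth, max_sigma=5.0, levels=6):
    """Depth-dependent blur: pixels on the focal plane stay sharp, pixels
    far from it are drawn from progressively blurrier copies of the image.
    image is H x W x 3, depth_map is H x W."""
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = [image] + [gaussian_filter(image, sigma=(s, s, 0))
                       for s in sigmas[1:]]
    defocus = np.abs(depth_map - focal_depth)
    scale = max(float(defocus.max()), 1e-6)
    level = np.clip((defocus / scale) * (levels - 1), 0, levels - 1).astype(int)
    out = np.empty_like(image)
    for i, blurred in enumerate(stack):
        mask = level == i          # pixels assigned to this blur level
        out[mask] = blurred[mask]
    return out
```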


Array Camera Architecture


Array cameras in accordance with embodiments of the invention are configured so that the array camera software can control the capture of light field image data and can store the captured light field image data in a file that can be used to render one or more images on any of a variety of appropriately configured rendering devices. An array camera including an imager array in accordance with an embodiment of the invention is illustrated in FIG. 1. The array camera 100 includes a sensor 102 having an array of focal planes 104 and is configured to communicate with a processor 108. The processor is also configured to communicate with one or more different types of memory 110 that can be utilized to store image data and/or contain machine readable instructions utilized to configure the processor to perform processes including (but not limited to) the various processes described below. The array camera 100 also includes a display 112 that can be utilized by the processor 108 to present a user interface to a user and to display an image rendered using the light field image data. Although the processor is illustrated as a single processor, array cameras in accordance with embodiments of the invention can utilize a single processor or multiple processors including (but not limited to) a graphics processing unit (GPU).


In the illustrated embodiment, the processor receives image data generated by the sensor and reconstructs the light field captured by the sensor from the image data. The processor can manipulate the light field in any of a variety of different ways including (but not limited to) determining the depth and visibility of the pixels in the light field and synthesizing higher resolution 2D images from the image data of the light field. Sensors including multiple focal planes are discussed in U.S. patent application Ser. No. 13/106,797 entitled “Architectures for System on Chip Array Cameras”, to Pain et al., the disclosure of which is incorporated herein by reference in its entirety.


In the illustrated embodiment, the focal planes are configured in a 5×5 array. Each focal plane 104 on the sensor is capable of capturing an image of the scene. The sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements and/or any other structure configured to generate an electrical signal indicative of light incident on the structure. In many embodiments, the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present). In other embodiments, the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element.


In many embodiments, an array of images (i.e. a light field) is created using the image data captured by the focal planes in the sensor. As noted above, processors 108 in accordance with many embodiments of the invention are configured using appropriate software to take the image data within the light field and synthesize one or more high resolution images. In several embodiments, the high resolution image is synthesized from a reference viewpoint, typically that of a reference focal plane 104 within the sensor 102. In many embodiments, the processor is able to synthesize an image from a virtual viewpoint, which does not correspond to the viewpoints of any of the focal planes 104 in the sensor 102. Unless all of the objects within a captured scene are a significant distance from the array camera, the images in the light field will include disparity due to the different fields of view of the focal planes used to capture the images. Processes for detecting and correcting for disparity when performing super-resolution processing in accordance with embodiments of the invention are discussed in U.S. Provisional Patent Application Ser. No. 61/691,666 (incorporated by reference above). The detected disparity can be utilized to generate a depth map. The high resolution image and depth map can be encoded and stored in memory 110 in a light field image file. The processor 108 can use the light field image file to render one or more high resolution images. The processor 108 can also coordinate the sharing of the light field image file with other devices (e.g. via a network connection), which can use the light field image file to render one or more high resolution images.


Although a specific array camera architecture is illustrated in FIG. 1, alternative architectures can also be utilized in accordance with embodiments of the invention. Systems and methods for encoding high resolution images and depth maps for storage in electronic files in accordance with embodiments of the invention are discussed below.


Capturing and Storing Light Field Image Data


Processes for capturing and storing light field image data in accordance with many embodiments of the invention involve capturing light field image data, generating a depth map from a reference viewpoint, and using the light field image data and the depth map to synthesize an image from the reference viewpoint. The synthesized image can then be compressed for storage. The depth map and additional data that can be utilized in the post processing can also be encoded as metadata that can be stored in the same container file with the encoded image.


A process for capturing and storing light field image data in accordance with an embodiment of the invention is illustrated in FIG. 2. The process 200 includes capturing (202) light field image data. In several embodiments, the light field image data is captured using an array camera similar to the array cameras described above. In other embodiments, any of a variety of image capture device(s) can be utilized to capture light field image data. The light field image data is used to generate (204) a depth map. A depth map can be generated using any of a variety of techniques including (but not limited to) using any of the processes disclosed in U.S. Provisional Patent Application Ser. No. 61/691,666 or U.S. patent application Ser. No. 13/623,091 entitled “Systems and Methods for Determining Depth from Multiple Views of a Scene that Include Aliasing Using Hypothesized Fusion”, to Venkataraman et al. The disclosure of U.S. patent application Ser. No. 13/623,091 is incorporated by reference herein in its entirety.


The light field image data and the depth map can be utilized to synthesize (206) an image from a specific viewpoint. In many embodiments, the light field image data includes a number of low resolution images that are used to synthesize a higher resolution image using a super resolution process. In a number of embodiments, a super resolution process such as (but not limited to) any of the super resolution processes disclosed in U.S. patent application Ser. No. 12/967,807 can be utilized to synthesize a higher resolution image from the reference viewpoint.


In order to be able to perform post processing to modify the synthesized image without the original light field image data, metadata can be generated (208) from the light field image data, the synthesized image, and/or the depth map. The metadata can be included in a light field image file and utilized during post processing of the synthesized image to perform processing including (but not limited to) refocusing the encoded image based upon a focal plane specified by the user, and synthesizing one or more images from a different viewpoint. In a number of embodiments, the auxiliary data includes (but is not limited to) pixels in the light field image data occluded from the reference viewpoint used to synthesize the image from the light field image data, and one or more auxiliary maps including (but not limited to) a confidence map, an edge map, and/or a missing pixel map. Auxiliary data that is formatted as maps or layers provides information corresponding to pixel locations within the synthesized image. A confidence map is produced during the generation of a depth map and reflects the reliability of the depth value for a particular pixel. This information may be used to apply different filters in areas of the image and improve image quality of the rendered image. An edge map defines which pixels are edge pixels, which enables application of filters that refine edges (e.g. post sharpening). A missing pixel map represents pixels computed by interpolation of neighboring pixels and enables selection of post-processing filters to improve image quality. As can be readily appreciated, the specific metadata generated depends upon the post processing supported by the image data file. In a number of embodiments, no auxiliary data is included in the image data file.


In order to generate an image data file, the synthesized image is encoded (210). The encoding typically involves compressing the synthesized image and can involve lossless or lossy compression of the synthesized image. In many embodiments, the depth map and any auxiliary data are written (212) to a file with the encoded image as metadata to generate a light field image data file. In a number of embodiments, the depth map and/or the auxiliary maps are encoded. In many embodiments, the encoding involves lossless compression.


Although specific processes for encoding light field image data for storage in a light field image file are discussed above, any of a variety of techniques can be utilized to process light field image data and store the results in an image file including but not limited to processes that encode low resolution images captured by an array camera and calibration information concerning the array camera that can be utilized in super resolution processing. Storage of light field image data in JFIF files in accordance with embodiments of the invention is discussed further below.


Image Data Formats


In several embodiments, the encoding of a synthesized image and the container file format utilized to create the light field image file are based upon standards including but not limited to the JPEG standard (ISO/IEC 10918-1) for encoding a still image as a bitstream and the JFIF standard (ISO/IEC 10918-5). By utilizing these standards, the synthesized image can be rendered by any rendering device configured to support rendering of JPEG images contained within JFIF files. In many embodiments, additional data concerning the synthesized image such as (but not limited to) a depth map and auxiliary data that can be utilized in the post processing of the synthesized image can be stored as metadata associated with an Application marker within the JFIF file. Conventional rendering devices can simply skip Application markers containing this metadata. Rendering devices in accordance with many embodiments of the invention can decode the metadata and utilize the metadata in any of a variety of post processing processes.


A process for encoding an image synthesized using light field image data in accordance with the JPEG specification and for including the encoded image and metadata that can be utilized in the post processing of the image in a JFIF file in accordance with an embodiment of the invention is illustrated in FIG. 3. The process 300 includes encoding (302) an image synthesized from light field image data in accordance with the JPEG standard. The image data is written (304) to a JFIF file. A depth map for the synthesized image is compressed (306) and the compressed depth map and any additional auxiliary data are written (308) as metadata to an Application marker segment of the JFIF file containing the encoded image. Where the auxiliary data includes maps, the maps can also be compressed by encoding the maps in accordance with the JPEG standard. At this point, the JFIF file contains an encoded image and metadata that can be utilized to perform post processing on the encoded image in ways that utilize the additional information captured in the light field image data utilized to synthesize the high resolution image (without the need to perform super resolution processing on the underlying light field image data).
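

A minimal sketch of writing such an Application marker segment is shown below: it splices an APP9 segment carrying the “PIDZ0” format identifier and an already-compressed metadata payload into a JFIF file immediately after the APP0 segment. The header layout inside the payload is simplified relative to FIGS. 4-10, and the helper assumes a well-formed JFIF file in which APP0 directly follows SOI.

```python
import struct

APP9 = b'\xFF\xE9'          # APP9 marker code values FFh E9h
FORMAT_ID = b'PIDZ0\x00'    # zero terminated "DZ Format Identifier"

def insert_app9_segment(jfif_bytes, metadata_payload):
    """Insert one APP9 Application marker segment after the APP0 segment."""
    body = FORMAT_ID + metadata_payload
    length = len(body) + 2                   # length field counts itself
    assert length <= 0xFFFF, "payload too large for a single segment"
    segment = APP9 + struct.pack('>H', length) + body
    # SOI occupies bytes 0-1, the APP0 marker bytes 2-3, and the APP0
    # segment length (big-endian) bytes 4-5.
    (app0_len,) = struct.unpack('>H', jfif_bytes[4:6])
    insert_at = 4 + app0_len                 # first byte after APP0 segment
    return jfif_bytes[:insert_at] + segment + jfif_bytes[insert_at:]
```

Payloads larger than a single segment allows must be split across multiple APP9 segments, as discussed in the sections that follow.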


Although specific processes are discussed above for storing light field image data in JFIF files, any of a variety of processes can be utilized to encode synthesized images and additional metadata derived from the light field image data used to synthesize the encoded images in a JFIF file as appropriate to the requirements of a specific application in accordance with embodiments of the invention. The encoding of synthesized images and metadata for insertion into JFIF files in accordance with embodiments of the invention are discussed further below. Although much of the discussion that follows relates to JFIF files, synthesized images and metadata can be encoded for inclusion in a light field image file using any of a variety of proprietary or standards based encoding techniques and/or utilizing any of a variety of proprietary or standards based file formats.


Encoding Images Synthesized from Light Field Image Data


An image synthesized from light field image data using super resolution processing can be encoded in accordance with the JPEG standard for inclusion in a light field image file in accordance with embodiments of the invention. The JPEG standard is a lossy compression standard. However, the information losses typically do not impact edges of objects. Therefore, the loss of information during the encoding of the image typically does not impact the accuracy of maps generated based upon the synthesized image (as opposed to the encoded synthesized image). The pixels within images contained within files that comply with the JFIF standard are typically encoded as YCbCr values. Many array cameras synthesize images, where each pixel is expressed in terms of a Red, Green and Blue intensity value. In several embodiments, the process of encoding the synthesized image involves mapping the pixels of the image from the RGB domain to the YCbCr domain prior to encoding. In other embodiments, mechanisms are used within the file to encode the image in the RGB domain. Typically, encoding in the YCbCr domain provides better compression ratios and encoding in the RGB domain provides higher decoded image quality.


Storing Additional Metadata Derived from Light Field Image Data


The JFIF standard does not specify a format for storing depth maps or auxiliary data generated by an array camera. The JFIF standard does, however, provide sixteen Application markers that can be utilized to store metadata concerning the encoded image contained within the file. In a number of embodiments, one or more of the Application markers of a JFIF file is utilized to store an encoded depth map and/or one or more auxiliary maps that can be utilized in the post processing of the encoded image contained within the file.


A JFIF Application marker segment that can be utilized to store a depth map, individual camera occlusion data, and auxiliary map data in accordance with an embodiment of the invention is illustrated in FIG. 4. The APP9 Application marker segment 400 uses a format identifier 402 that uniquely identifies the Application marker segment as containing metadata describing an image synthesized using light field image data. In a number of embodiments, the identifier is referred to as the “DZ Format Identifier” 402 and is expressed as the zero terminated string “PIDZ0”.


The Application marker segment includes a header 404 indicated as “DZ Header” that provides a description of the metadata contained within the Application marker segment. In the illustrated embodiment, the “DZ Header” 404 includes a DZ Endian field that indicates whether the data in the “DZ Header” is big endian or little endian. The “DZ Header” 404 also includes a “DZ Selection Descriptor”.


An embodiment of a “DZ Selection Descriptor”, which includes four bytes, is illustrated in FIG. 5. The first two bytes (i.e. bytes 0 and 1) contain information concerning the types of metadata describing the encoded image that are present (see FIG. 6) and the manner in which the different pieces of metadata are compressed (see FIG. 7). In the illustrated embodiment, the types of metadata that are supported are a depth map, occluded pixel data, virtual view point data, a missing pixel map, a regular edge map, a silhouette edge map, and/or a confidence map. In other embodiments, any of a variety of metadata describing an encoded image obtained from the light field image data used to synthesize the image can be included in the metadata contained within a JFIF file in accordance with an embodiment of the invention. In many instances, the metadata describing the encoded image can include maps that can be considered to be monochrome images and encoded using JPEG encoding. In a number of embodiments, the maps can be compressed using lossless JPEG LS encoding. In several embodiments, the maps can be compressed using lossy JPEG encoding. Utilizing JPEG encoding to compress the maps reduces the size of the maps and enables rendering devices to leverage a JPEG decoder to decode both the image contained within the JFIF file and the maps describing the encoded image. The third byte (i.e. byte 2) of the “DZ Selection Descriptor” indicates the number of sets of metadata describing the encoded image that are contained within the Application marker segment, and the fourth byte is reserved. Although specific implementations of the header 404 describing the metadata contained within the Application marker segment are illustrated in FIGS. 4-7, any of a variety of implementations can be utilized to identify the maps describing the synthesized image that are present within the metadata contained within a light field image file as appropriate to the requirements of the application in accordance with embodiments of the invention.
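

The sketch below shows how a four-byte descriptor of this kind might be packed. The specific bit positions assigned to each type of metadata are illustrative assumptions on our part; the actual assignments are those of FIGS. 6 and 7.

```python
# Illustrative flag positions only; the real assignments are in FIGS. 6-7.
METADATA_FLAGS = {
    'depth_map': 0x01, 'occluded_pixels': 0x02, 'virtual_viewpoint': 0x04,
    'missing_pixel_map': 0x08, 'regular_edge_map': 0x10,
    'silhouette_edge_map': 0x20, 'confidence_map': 0x40,
}

def pack_dz_selection(present, compression_flags, num_sets):
    """Bytes 0-1: which metadata are present and how each is compressed;
    byte 2: number of metadata sets; byte 3: reserved."""
    flags = 0
    for name in present:
        flags |= METADATA_FLAGS[name]
    return bytes([flags, compression_flags, num_sets, 0])
```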


Depth Map


Referring back to FIG. 4, the Application marker segment also includes a “Depth Map Header” 406 that describes the depth map 416 included within the Application marker segment. The “Depth Map Header” 406 includes an indication 408 of the size of the “Depth Map Attributes” 410 included within the “Depth Map Header”, the “Depth Map Attributes” 410, and a “Depth Map Descriptor” 412. As noted above, the depth map 416 can be considered to be a monochrome image, and lossless or lossy JPEG encoding can be utilized to compress the “Depth Map Data” included in a JFIF file.


A “Depth Map Attributes” table in accordance with an embodiment of the invention is illustrated in FIG. 8 and includes information concerning the manner in which the depth map should be used to render the encoded image. In the illustrated embodiment, the information contained within the “Depth Map Attributes” table includes the focal plane and the F# of the synthetic aperture to utilize when rendering the encoded image. Although specific pieces of information related to the manner in which the depth map can be utilized to render the encoded image are illustrated in FIG. 8, any of a variety of pieces of information appropriate to the requirements of a specific application can be utilized in accordance with embodiments of the invention.


A “Depth Map Descriptor” in accordance with an embodiment of the invention is illustrated in FIG. 9 and includes metadata describing the depth map. In the illustrated embodiment, the “Depth Map Descriptor” includes a zero terminated identifier string “PIDZDH0” and version information. In other embodiments, any of a variety of pieces of information appropriate to the specific requirements of particular applications can be utilized in accordance with embodiments of the invention.


A JFIF Application marker segment is restricted to 65,533 bytes. However, an Application marker can be utilized multiple times within a JFIF file. Therefore, depth maps in accordance with many embodiments of the invention can span multiple APP9 Application marker segments. The manner in which depth map data is stored within an Application marker segment in a JFIF file in accordance with an embodiment of the invention is illustrated in FIG. 10. In the illustrated embodiment, the depth map data is contained within a descriptor that is uniquely identified using the “PIDZDD0” zero terminated string. The descriptor also includes the length of the descriptor and the depth map data.
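

Spanning a depth map across segments reduces to chunking the encoded map so that each chunk, together with its descriptor and header bytes, fits within the segment limit. A rough sketch follows; the fixed header overhead is an assumed value for illustration.

```python
MAX_SEGMENT_DATA = 65533   # maximum data bytes in one Application marker segment

def chunk_map_data(encoded_map, header_overhead=64):
    """Split an encoded depth map (or other map) into payloads that each fit
    one APP9 segment alongside an assumed header_overhead bytes of
    identifiers and lengths."""
    chunk = MAX_SEGMENT_DATA - header_overhead
    return [encoded_map[i:i + chunk] for i in range(0, len(encoded_map), chunk)]
```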


Although specific implementations of a depth map and header describing a depth map within an Application marker segment of a JFIF file are illustrated in FIGS. 4, 8, 9, and 10, any of a variety of implementations can be utilized to include a depth map describing an encoded image within a JFIF file as appropriate to the requirements of the application in accordance with embodiments of the invention.


Occlusion Data


Referring back to FIG. 4, the Application marker segment also includes a “Camera Array Header” 418 that describes occlusion data 428 for individual cameras within an array camera that captured the light field image data utilized to synthesize the image contained within the light field image file. The occlusion data can be useful in a variety of post processing processes including (but not limited to) processes that involve modifying the viewpoint of the encoded image. The “Camera Array Header” 418 includes an indication 420 of the size of a “Camera Array General Attributes” table 422 included within the “Camera Array Header”, the “Camera Array General Attributes” table 422, and a “Camera Array Descriptor” 424.


A “Camera Array General Attributes” table in accordance with an embodiment of the invention is illustrated in FIG. 11 and includes information describing the number of cameras and dimensions of a camera array utilized to capture the light field image data utilized to synthesize the image encoded within the JFIF file. In addition, the “Camera Array General Attributes” table can indicate a reference camera position within the array and/or a virtual view position within the array. The “Camera Array General Attributes” table also provides information concerning the number of cameras within the array for which occlusion data is provided within the JFIF file.


A “Camera Array Descriptor” in accordance with an embodiment of the invention is illustrated in FIG. 12 and includes metadata describing the individual camera occlusion data contained within the JFIF file. In the illustrated embodiment, the “Camera Array Descriptor” includes a zero terminated identifier string “PIDZAH0” and version information. In other embodiments, any of a variety of pieces of information appropriate to the specific requirements of particular applications can be utilized in accordance with embodiments of the invention.


In many embodiments, occlusion data is provided on a camera by camera basis. In several embodiments, the occlusion data is included within a JFIF file using an individual camera descriptor and an associated set of occlusion data. An individual camera descriptor that identifies a camera and identifies the number of occluded pixels related to the identified camera described within the JFIF file in accordance with an embodiment of the invention is illustrated in FIG. 13. In the illustrated embodiment, the descriptor is identified using the “PIDZCD0” zero terminated string. The descriptor also includes a camera number that can be utilized to identify a camera within an array camera that captured light field image data utilized to synthesize the encoded image contained within the JFIF file. In addition, the descriptor includes the number of occluded pixels described in the JFIF file and the length (in bytes) of the data describing the occluded pixels. The manner in which the occluded pixel data can be described in accordance with embodiments of the invention is illustrated in FIG. 14. The same “PIDZCD0” descriptor is used to identify the occluded pixel data, and the descriptor also includes the number of pixels of occluded data contained within the segment, the length of the data in bytes, and an offset to the next marker, in addition to the occluded pixel data. Because an Application marker segment cannot exceed 65,533 bytes of data, this additional information enables a rendering device to reconstruct the occluded pixel data across multiple APP9 Application marker segments within a JFIF file in accordance with embodiments of the invention.


A table describing an occluded pixel that can be inserted within a JFIF file in accordance with an embodiment of the invention is illustrated in FIG. 15. The table includes the depth of the occluded pixel, the pixel color of the occluded pixel and the pixel coordinates. In the illustrated embodiment, the pixel color is illustrated as being in the RGB domain. In other embodiments, the pixel color can be expressed in any domain including the YCbCr domain.
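

One plausible fixed-width encoding of such a table entry is sketched below; the field widths (a 32-bit depth, 8-bit RGB components, and 16-bit coordinates) are our assumptions rather than the layout of FIG. 15.

```python
import struct

def pack_occluded_pixel(depth, rgb, xy):
    """Pack one occluded-pixel record: depth, color, and pixel coordinates."""
    r, g, b = rgb
    x, y = xy
    return struct.pack('>fBBBHH', depth, r, g, b, x, y)  # 11 bytes, big-endian

def unpack_occluded_pixel(record):
    depth, r, g, b, x, y = struct.unpack('>fBBBHH', record)
    return depth, (r, g, b), (x, y)
```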


Although specific implementations for storing information describing occluded pixel depth within an Application marker segment of a JFIF file are illustrated in FIGS. 4, 13, 14, and 15, any of a variety of implementations can be utilized to include occluded pixel information within a JFIF file as appropriate to the requirements of the application in accordance with embodiments of the invention.


Auxiliary Maps


Referring back to FIG. 4, any of a variety of auxiliary maps can be included in an Application marker segment within a JFIF file in accordance with an embodiment of the invention. The total number of auxiliary maps and the types of auxiliary maps can be indicated in the Application marker segment. Each auxiliary map can be expressed using an “Auxiliary Map Descriptor” 432 and “Auxiliary Map Data” 434. In the illustrated embodiment, the “Auxiliary Map Descriptor” 432 is included in an “Auxiliary Map Header” 430 within the Application marker segment in the JFIF file.


An “Auxiliary Map Descriptor” that describes an auxiliary map contained within a light field image file in accordance with an embodiment of the invention is illustrated in FIG. 16. The “Auxiliary Map Descriptor” includes an identifier, which is the “PIDZAM0” zero terminated string, and information specifying the type of auxiliary map and the number of bits per pixel in the map. As noted above, any of a variety of auxiliary maps derived from light field image data used to synthesize an encoded image can be included within a JFIF file in accordance with embodiments of the invention. In the illustrated embodiment, confidence maps, silhouette edge maps, regular edge maps, and missing pixel maps are supported.


“Auxiliary Map Data” stored in a JFIF file in accordance with an embodiment of the invention is conceptually illustrated in FIG. 17. The “Auxiliary Map Data” uses the “PIDZAD0” zero terminated string identifier and includes the number of pixels of the auxiliary map contained within the segment, the length of the data in bytes, and an offset to the next marker, in addition to the pixels of the auxiliary map. Because an Application marker segment cannot exceed 65,533 bytes of data, this additional information enables a rendering device to reconstruct the auxiliary map describing the encoded image across multiple APP9 Application marker segments within a JFIF file.


Although specific implementations for storing auxiliary maps within an Application marker segment of a JFIF file are illustrated in FIGS. 4, 16, and 17, any of a variety of implementations can be utilized to include auxiliary map information within a JFIF file as appropriate to the requirements of the application in accordance with embodiments of the invention. Various examples of auxiliary maps that can be utilized to provide additional information concerning an encoded image based upon the light field image data utilized to synthesize the encoded image in accordance with embodiments of the invention are discussed below.


Confidence Maps


A confidence map can be utilized to provide information concerning the relative reliability of the information at a specific pixel location. In several embodiments, a confidence map is represented as a complementary one bit per pixel map representing pixels within the encoded image that were visible in only a subset of the images used to synthesize the encoded image. In other embodiments, a confidence map can utilize additional bits of information to express confidence using any of a variety of metrics including (but not limited to) a confidence measure determined during super resolution processing, or the number of images in which the pixel is visible.
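

A complementary one bit per pixel map of this kind packs eight pixels into each byte; a minimal sketch using NumPy is shown below.

```python
import numpy as np

def pack_confidence_map(confidence):
    """Pack a boolean H x W confidence map into one bit per pixel; a set bit
    marks a pixel visible in only a subset of the source images."""
    return np.packbits(confidence.astype(np.uint8))

def unpack_confidence_map(packed, height, width):
    bits = np.unpackbits(packed, count=height * width)
    return bits.reshape(height, width).astype(bool)
```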


Edge Maps


A variety of edge maps can be provided, including (but not limited to) a regular edge map and a silhouette edge map. A regular edge map is a map that identifies pixels that are on an edge in the image, where the edge is an intensity discontinuity. A silhouette edge map is a map that identifies pixels that are on an edge, where the edge involves an intensity discontinuity and a depth discontinuity. In several embodiments, each can be expressed as a separate one bit map, or the two maps can be combined as a map including two bits per pixel. The bits simply signal the presence of a particular type of edge at a specific location to post processing processes that apply filters including (but not limited to) various edge preserving and/or edge sharpening filters.
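

Combining the two one bit maps into a single two bits per pixel map can be as simple as the following sketch, with bit 0 marking intensity edges and bit 1 marking combined intensity and depth edges (the bit assignment is our choice for illustration).

```python
import numpy as np

def combine_edge_maps(regular_edges, silhouette_edges):
    """Merge two boolean edge maps into one uint8 map with two bits per
    pixel: bit 0 = intensity edge, bit 1 = intensity-and-depth edge."""
    return (regular_edges.astype(np.uint8)
            | (silhouette_edges.astype(np.uint8) << 1))
```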


Missing Pixel Maps


A missing pixel map indicates pixel locations in a synthesized image that do not include a pixel from the light field image data, but instead include an interpolated pixel value. In several embodiments, a missing pixel map can be represented using a complementary one bit per pixel map. The missing pixel map enables selection of post-processing filters to improve image quality. In many embodiments, a simple interpolation algorithm can be used during the synthesis of a higher resolution image from light field image data, and the missing pixel map can be utilized to apply a more computationally expensive interpolation process as a post processing process. In other embodiments, missing pixel maps can be utilized in any of a variety of different post processing processes as appropriate to the requirements of a specific application in accordance with embodiments of the invention.


Rendering Images Using Light Field Imaging Files


When light field image data is encoded in a light field image file, the light field image file can be shared with a variety of rendering devices including but not limited to cameras, mobile devices, personal computers, tablet computers, network connected televisions, network connected game consoles, network connected media players, and any other device that is connected to the Internet and can be configured to display images. A system for sharing light field image files in accordance with an embodiment of the invention is illustrated in FIG. 18. The system 1800 includes a mobile device 1802 including an array camera configured to capture light field image data and encode the light field image data in a light field image file. The mobile device 1802 also includes a network interface that enables the transfer of a light field image file to other rendering devices via the Internet 1804. In several embodiments, the light field image file is transferred with the assistance of a server system 1806 that can either store the light field image file for access by other devices or relay the light field image file to other rendering devices. In many embodiments, the server system 1806 provides a user interface that enables users to modify the rendering of the image provided to the device. In several embodiments, the server system 1806 provides the light field image file to a device for rendering. In the illustrated embodiment, a variety of network connected rendering devices 1808 are illustrated including a mobile phone and a personal computer. In other embodiments, any of a variety of network connected and/or disconnected devices can render images using a light field image file in accordance with embodiments of the invention. Rendering devices and processes for rendering images in accordance with embodiments of the invention are discussed further below.


Rendering Devices


A rendering device in accordance with embodiments of the invention typically includes a processor and a rendering application that enables the rendering of an image based upon a light field image file. The simplest rendering is for the rendering device to decode the encoded image contained within the light field image file. More complex renderings involve applying post processing to the encoded image using the metadata contained within the light field image file to perform manipulations including (but not limited to) modifying the viewpoint of the image and/or modifying the focal plane of the image.


A rendering device in accordance with an embodiment of the invention is illustrated in FIG. 19. The rendering device 1900 includes a processor 1902, memory 1904, and an optional network interface 1906. The memory contains a rendering application 1908 that is used to configure the processor to render images for display using a light field image file 1910. In the illustrated embodiment, the light field image file is shown stored in memory. In other embodiments, the light field image file can be stored in an external storage device. Although a specific rendering device is illustrated in FIG. 19, any of a variety of rendering devices can be utilized in accordance with embodiments of the invention including (but not limited to) the types of devices that are customarily used to display images using image files. Processes for rendering images using light field image files in accordance with embodiments of the invention are discussed further below.


Processes for Rendering Images Using Light Field Image Files


As noted above, rendering a light field image file can be as simple as decoding an encoded image contained within the light field image file or can involve more complex post processing of the encoded image using metadata derived from the same light field image data used to synthesize the encoded image. A process for rendering a light field image in accordance with an embodiment of the invention is illustrated in FIG. 20. The process 2000 includes parsing (2002) the light field image file to locate the encoded image contained within the image file. The encoded image is decoded (2004). As noted above, the image can be encoded using a standards based encoder, in which case the decoding process can utilize a standards based codec within a rendering device, or the image can be encoded using a proprietary encoding, in which case a proprietary decoder is provided on the rendering device to decode the image. When the process for rendering the image simply involves rendering the image, the decoded image can be displayed. When the process for rendering the image includes post processing, the image file is parsed (2006) to locate metadata within the file that can be utilized to perform the post processing. The metadata is decoded (2008). The metadata can often take the form of maps that can be encoded using standards based image encoders, in which case a standards based decoder present on the rendering device can be utilized to decode the metadata. In other embodiments, a proprietary decoding process is utilized to decode the metadata. The metadata can then be used to perform (2010) the post processing of the encoded image and the image can be displayed (2012). The display of the image can be local. Alternatively, the image can be streamed to a remote device or encoded as an image and provided to a remote device for display.
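
The flow of FIG. 20 can be sketched as follows, assuming Python with Pillow standing in for a standards based JPEG codec; the function name and file handling are illustrative assumptions:

```python
import io
from PIL import Image  # Pillow's JPEG codec stands in for a standards based decoder

def render_light_field_file(file_bytes, post_process=None):
    # Parse (2002): locate the encoded image, assumed here to begin at the
    # first Start of Image (SOI) marker, FFh D8h, in the file.
    start = file_bytes.find(b"\xff\xd8")
    if start < 0:
        raise ValueError("no SOI marker found")
    image = Image.open(io.BytesIO(file_bytes[start:]))   # decode (2004)
    if post_process is not None:
        # Parsing (2006) and decoding (2008) the metadata, and applying it
        # (2010), are delegated to the supplied callable in this sketch.
        image = post_process(image, file_bytes)
    return image  # displaying (2012), e.g. image.show(), is left to the caller
```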


Although specific processes for rendering an image from a light field image file are discussed with reference to FIG. 20, any of a variety of processes appropriate to the requirements of a specific application can be utilized to render an image for display using a light field image file in accordance with an embodiment of the invention. As noted above, any of a variety of standards based encoders and decoders can be utilized in the encoding and decoding of light field image files in accordance with embodiments of the invention. Processes for rendering images using light field image files that conform to the JFIF standard and include an image and/or metadata encoded in accordance with the JPEG standard are discussed further below.


Rendering Images Using JFIF Light Field Image Files


The ability to leverage deployed JPEG decoders can greatly simplify the process of rendering light field images. When a light field image file conforms to the JFIF standard and the image and/or metadata encoded within the light field image file is encoded in accordance with the JPEG standard, a rendering application can leverage an existing implementation of a JPEG decoder to render an image using the light field image file. Similar efficiencies can be obtained where the light field image file includes an image and/or metadata encoded in accordance with another popular standard for image encoding.


A rendering device configured by a rendering application to render an image using a light field image file in accordance with an embodiment of the invention is illustrated in FIG. 21. The rendering device 2100 includes a processor 2102, memory 2104, and an optional network interface 2106 that can be utilized to receive light field image files. In the illustrated embodiment, the memory 2104 of the rendering device 2100 includes a rendering application 2108, a JPEG decoder application 2110, and a light field image file 2112 that contains at least one image and/or metadata encoded in accordance with the JPEG standard. The rendering application 2108 configures the processor to parse the light field image file to locate an encoded image and to decode the encoded image using the JPEG decoder application 2110. Similarly, the rendering application can configure the processor to parse the light field image file to locate metadata and to decode encoded maps contained within the metadata using the JPEG decoder.


Although specific rendering devices including JPEG decoders are discussed above with reference to FIG. 21, any of a variety of rendering devices incorporating standards based decoders can be utilized to render images from appropriately encoded light field image files in accordance with embodiments of the invention. Processes for decoding light field image files that conform to the JFIF standard and that contain at least one image and/or metadata encoded in accordance with the JPEG standard in accordance with embodiments of the invention are discussed further below.


Processes for Rendering Images from JFIF Light Field Image Files


Processes for rendering images using light field image files that conform to the JFIF standard can utilize markers within the light field image file to identify encoded images and metadata. Headers within the metadata provide information concerning the metadata present in the file and can provide offset information or pointers to the location of additional metadata and/or markers within the file to assist with parsing the file. Once the appropriate information is located, a standard JPEG decoder implementation can be utilized to decode encoded images and/or maps within the file.
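
A hedged sketch of such marker based parsing, assuming only the JPEG marker layout described earlier (FFh plus a code byte, followed by a big-endian two byte segment length that counts the length field itself):

```python
import struct

def iter_jpeg_segments(data):
    # Yield (marker_code, offset, payload) for each marker segment between
    # the SOI marker and the start of the entropy coded image data.
    pos = 2  # skip the SOI marker FFh D8h
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: stop scanning
            return
        (length,) = struct.unpack(">H", data[pos + 2:pos + 4])
        yield marker, pos, data[pos + 4:pos + 2 + length]
        pos += 2 + length
```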


A process for displaying an image rendered using a light field image file that conforms to the JFIF standard using a JPEG decoder in accordance with an embodiment of the invention is illustrated in FIG. 22. The process 2200 involves parsing (2202) the light field image file to locate a Start of Image (SOI) marker. The SOI marker is used to locate an image encoded in accordance with the JPEG format. The encoded image can be decoded (2204) using a JPEG decoder. When no post processing of the decoded image is desired, the image can simply be displayed. Where post processing of the image is desired (e.g. to change the viewpoint of the image and/or the focal plane of the image), the process parses (2206) the light field image file to locate an appropriate Application marker. In the illustrated embodiment, an APP9 marker indicates the presence of metadata within the light field image file. The specific metadata within the file can be determined by parsing a header within the APP9 Application marker segment that describes the metadata within the file. In the illustrated embodiment, the header is the "DZ Header" within the APP9 Application marker segment. The information within the metadata header can be utilized to locate (2208) specific metadata utilized in a post processing process within the light field image file. In instances where the metadata is encoded, the metadata can be decoded (2210). In many embodiments, metadata describing an encoded image within a light field image file is in the form of a map that provides information concerning specific pixels within an encoded image contained within the light field image file, and JPEG encoding is used to compress the map. Accordingly, a JPEG decoder can be utilized to decode the map. The decoded metadata can be utilized to perform (2212) post processing of the decoded image. The image can then be displayed (2214). In many embodiments, the image is displayed on a local display. In a number of embodiments, the image is streamed to a remote display or encoded as an image and forwarded to a remote device for display.
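
Building on the iter_jpeg_segments helper sketched above, the following illustrates locating the APP9 segment and decoding a JPEG compressed map with a standard decoder; the "DZ" tag check and payload layout are assumptions for illustration, not the file's actual wire format:

```python
import io
from PIL import Image

APP9 = 0xE9  # APPn marker codes are E0h + n

def extract_encoded_map(file_bytes, header=b"DZ"):
    for marker, offset, payload in iter_jpeg_segments(file_bytes):
        if marker == APP9 and payload.startswith(header):
            # Assumed layout: the encoded map begins at its own SOI marker
            # somewhere inside the APP9 payload.
            start = payload.find(b"\xff\xd8")
            if start >= 0:
                return Image.open(io.BytesIO(payload[start:]))
    return None
```

A production parser would also need to reassemble maps that span multiple APP9 segments, since each application marker segment is limited to 64K bytes.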


Although specific processes for displaying images rendered using light field image files are discussed above with respect to FIG. 22, any of a variety of processes for parsing a light field image file and decoding images and/or metadata encoded in accordance with the JPEG standard using a JPEG decoder can be utilized in accordance with embodiments of the invention. Much of the discussion above references the use of metadata derived from light field image data and contained within a light field image file to perform post processing processes on an encoded image synthesized from the light field image data. Post processing of images synthesized from light field image data using metadata obtained from the light field image data in accordance with embodiments of the invention is discussed further below.


Post Processing of Images Using Metadata Derived from Light Field Image Data


Images can be synthesized from light field image data in a variety of ways. Metadata included in light field image files in accordance with embodiments of the invention can enable images to be rendered from a single image synthesized from the light field image data without the need to perform super resolution processing. Advantages of rendering images in this way include that the process of obtaining the final image is less processor intensive and requires less data. However, the light field image data provides rich information concerning a captured scene from multiple viewpoints. In many embodiments, a depth map and occluded pixels from the light field image data (i.e. pixels that are not visible from the reference viewpoint of the synthesized image) can be included in a light field image file to provide some of the additional information typically contained within light field image data. The depth map can be utilized to modify the focal plane when rendering an image and/or to apply depth dependent effects to the rendered image. The depth map and the occluded pixels can be utilized to synthesize images from different viewpoints. In several embodiments, additional maps are provided (such as, but not limited to, confidence maps, edge maps, and missing pixel maps) that can be utilized when rendering alternative viewpoints to improve the resulting rendered image. The ability to render images from different viewpoints can be utilized simply to render an image from a different viewpoint. In many embodiments, the ability to render images from different viewpoints can be utilized to generate a stereo pair for 3D viewing. In several embodiments, processes similar to those described in U.S. Provisional Patent Application Ser. No. 61/707,691, entitled "Synthesizing Images From Light Fields Utilizing Virtual Viewpoints" to Jain (the disclosure of which is incorporated herein by reference in its entirety) can be utilized to modify the viewpoint based upon motion of a rendering device to create a motion parallax effect. Processes for rendering images using depth based effects and for rendering images using different viewpoints are discussed further below.
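
One way to picture the payload described above is as a simple container; the field names below are illustrative assumptions rather than the file format itself:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class LightFieldPayload:
    image: np.ndarray            # image synthesized from the reference viewpoint
    depth_map: np.ndarray        # depth from the reference viewpoint per pixel
    occluded_pixels: List[Tuple[int, int, float, tuple]]  # (x, y, depth, color)
    confidence_map: Optional[np.ndarray] = None
    edge_map: Optional[np.ndarray] = None
    missing_pixel_map: Optional[np.ndarray] = None
```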


Rendering Images Using Depth Based Effects


A variety of depth based effects can be applied to an image synthesized from light field image data in accordance with embodiments of the invention including (but not limited to) applying dynamic refocusing of an image, locally varying the depth of field within an image, selecting multiple in focus areas at different depths, and/or applying one or more depth related blur models. A process for applying depth based effects to an image synthesized from light field image data and contained within a light field image file that includes a depth map in accordance with an embodiment of the invention is illustrated in FIG. 23. The process 2300 includes decoding (2302) an image synthesized from light field image data contained within a light field image file. In addition, a depth map derived from the light field image data that describes the synthesized image is also decoded (2304) from metadata contained within the light field image file. One or more depth dependent effects can then be applied (2306) to the pixels of the decoded image based upon the depths of the pixels indicated by the depth map. In a number of embodiments, the depth dependent effects are determined by modifying the focal plane and/or the F number (which provides different depths of field and degrees of blur in out-of-focus regions). The image can then be displayed (2308). In many embodiments, the image is displayed on a local display. In a number of embodiments, the image is streamed to a remote display or encoded as an image and forwarded to a remote device for display.
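
A simplified refocusing sketch, assuming NumPy and SciPy and an H x W x 3 image with an H x W depth map; the linear blend model below is an illustrative stand-in for whatever blur model a given embodiment applies:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_refocus(image, depth_map, focal_depth, strength=0.05, sigma=3.0):
    # Blend a sharp and a blurred copy per pixel; the weight grows with the
    # pixel's depth map distance from the chosen focal plane, so a larger
    # `strength` acts like a smaller F number (shallower depth of field).
    blurred = gaussian_filter(image.astype(np.float32), sigma=(sigma, sigma, 0))
    weight = np.clip(np.abs(depth_map - focal_depth) * strength, 0.0, 1.0)
    out = (1.0 - weight[..., None]) * image + weight[..., None] * blurred
    return out.astype(image.dtype)
```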


Although specific processes for applying depth dependent effects to an image synthesized from light field image data using a depth map obtained using the light field image data are discussed above with respect to FIG. 23, any of a variety of processes can be utilized for extracting an image and a depth map from a light field image file and for using the depth map to apply one or more depth dependent effects in accordance with embodiments of the invention. Processes for rendering images from different viewpoints to the reference viewpoint of an image contained within a light field image file based upon a depth map and information concerning occluded pixels contained within the light field image file in accordance with embodiments of the invention are discussed further below.


Rendering Images Using Different Viewpoints


One of the compelling aspects of computational imaging is the ability to use light field image data to synthesize images from different viewpoints. The ability to synthesize images from different viewpoints creates interesting possibilities including the creation of stereo pairs for 3D applications and the simulation of motion parallax as a user interacts with an image. Light field image files in accordance with many embodiments of the invention can include an image synthesized from light field image data from a reference viewpoint, a depth map for the synthesized image, and information concerning pixels from the light field image data that are occluded in the reference viewpoint. A rendering device can use the information concerning the depths of the pixels in the synthesized image and the depths of the occluded pixels to determine the appropriate shifts to apply to the pixels to shift them to the locations in which they would appear from a different viewpoint. Pixels that become occluded from the different viewpoint can be identified, locations on the grid of the different viewpoint that are missing pixels can be identified, and hole filling can be performed using interpolation of adjacent non-occluded pixels. In many embodiments, the quality of an image rendered from a different viewpoint can be increased by providing additional information in the form of auxiliary maps that can be used to refine the rendering process. In a number of embodiments, auxiliary maps can include confidence maps, edge maps, and missing pixel maps. Each of these maps can provide a rendering process with information concerning how to render an image based on customized preferences provided by a user. In other embodiments, any of a variety of auxiliary information including additional auxiliary maps can be provided as appropriate to the requirements of a specific rendering process.
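
The shifting and hole filling can be sketched as follows, under the simplifying assumptions of a purely horizontal baseline and a disparity inversely proportional to depth; this is an illustration of the idea, not the patented algorithm:

```python
import numpy as np

def shift_viewpoint(image, depth_map, baseline=8.0):
    h, w = depth_map.shape
    shifted = np.zeros_like(image)
    z_buffer = np.full((h, w), np.inf)   # nearest pixel wins where pixels collide
    disparity = (baseline / np.maximum(depth_map, 1e-6)).astype(np.int32)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w and depth_map[y, x] < z_buffer[y, nx]:
                shifted[y, nx] = image[y, x]
                z_buffer[y, nx] = depth_map[y, x]
    holes = np.isinf(z_buffer)           # grid locations that received no pixel
    for y in range(h):                   # simple hole filling from the nearest
        for x in np.flatnonzero(holes[y]):  # non occluded pixel to the left
            left = x - 1
            while left >= 0 and holes[y, left]:
                left -= 1
            if left >= 0:
                shifted[y, x] = shifted[y, left]
    return shifted
```

In a full implementation, pixels that are occluded in the reference viewpoint but visible from the new viewpoint would be drawn from the occluded pixel descriptions in the file rather than interpolated.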


A process for rendering an image from a different viewpoint using a light field image file containing an image synthesized using light field image data from a reference viewpoint, a depth map describing the depth of the pixels of the synthesized image, and information concerning occluded pixels in accordance with an embodiment of the invention is illustrated in FIG. 24. The process 2400 includes decoding (2402) an image contained within a light field image file, where the image is an image synthesized from light field image data. The process also includes decoding (2404) a depth map from the light field image file, where the depth map was also obtained from the light field image data used to synthesize the encoded image. Information concerning pixels from the light field image data that are occluded in the reference viewpoint is also obtained (2406) from the light field image file, where the information includes the location and depth of the occluded pixels from the reference viewpoint. In many embodiments, auxiliary information, including auxiliary maps that specify additional information concerning the pixels in the encoded image, is also contained within the light field image file, and auxiliary information useful in the rendering of an image from a different viewpoint to the reference viewpoint can be extracted and decoded (2408) from the light field image file. Using the depth map and the depths of the occluded pixels, shifts in the location and depths of pixels in the different viewpoint can be determined (2410). Based upon the shifts, occluded pixels can be determined (2414) and the image displayed. Where auxiliary information is available, the auxiliary information can be utilized to adjust (2412) the pixels in the image prior to rendering. In many embodiments, the adjustments are performed prior to identifying occluded pixels and displaying the final image. In a number of embodiments, the adjustments are performed after occluded pixels are identified.


Although specific processes for rendering an image from a different viewpoint using an image synthesized from a reference viewpoint using light field image data, a depth map obtained using the light field image data, and information concerning pixels in the light field image data that are occluded in the reference viewpoint are discussed above with respect to FIG. 24, any of a variety of processes can be utilized for rendering images from different viewpoints using a light field image file as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Processes for rendering images simulating different lens characteristics in accordance with embodiments of the invention are discussed further below.


While the above description contains many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as an example of one embodiment thereof. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims
  • 1. A system for encoding light field image data as a light field image file, comprising: a processor; and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of images of a scene captured from different viewpoints; wherein the encoding application directs the processor to: create a depth map that specifies depths for pixels in a reference image in the plurality of images; encode the reference image; and create a light field image file including the encoded image and metadata describing the encoded image, wherein: the light field image file is structured using the Exchangeable image file (Exif) format; and the metadata includes the depth map.
  • 2. The system of claim 1, wherein the light field image file conforms to the JFIF standard.
  • 3. The system of claim 1, wherein the encoded image is encoded in accordance with the JPEG standard.
  • 4. The system of claim 1, wherein the metadata is located within an application marker segment within the light field image file.
  • 5. The system of claim 4, wherein the application marker segment is identified using the APP9 marker.
  • 6. The system of claim 4, wherein the application marker segment is identified using an APP1 marker that is used to identify the Exif data.
  • 7. The system of claim 1, wherein the encoding application directs the processor to encode the depth map and the depth map included in the metadata describing the encoded image is the encoded depth map.
  • 8. The system of claim 7, wherein the depth map is losslessly encoded.
  • 9. The system of claim 8, wherein the encoding application directs the processor to decode the depth map using a JPEG decoder.
  • 10. The system of claim 7, wherein the depth map is encoded using lossy compression.
  • 11. The system of claim 1, wherein: encoding the reference image includes creating a confidence map for the depth map, where the confidence map indicates the reliability of the depth value for a pixel in the depth map; and the metadata describing the encoded image in the light field image file includes the confidence map.
  • 12. The system of claim 1, wherein: encoding the reference image involves identifying pixels in the plurality of images of the scene that are occluded in a reference viewpoint; and the metadata describing the encoded image in the light field image file includes descriptions of the occluded pixels.
  • 13. The system of claim 12, wherein the descriptions of the occluded pixels include data selected from the group consisting of colors, locations, and depths of the occluded pixels.
  • 14. The system of claim 1, wherein: the encoding application directs the processor to generate an edge map that indicates pixels in the reference image that lie on a discontinuity; and the metadata describing the encoded image in the light field image file includes the edge map.
  • 15. The system of claim 14, wherein the edge map identifies whether a pixel lies on a discontinuity selected from the group consisting of an intensity and depth discontinuity.
  • 16. The system of claim 1, wherein: the encoding application directs the processor to generate a missing pixel map that indicates pixels in the reference image that do not correspond to a pixel from the plurality of images of the scene and that are generated by interpolating pixel values from adjacent pixels in the reference image; and the metadata describing the encoded image in the light field image file includes the missing pixel map.
  • 17. The system of claim 1, wherein the metadata also includes a focal plane.
  • 18. The system of claim 1, wherein the encoding application directs the processor to: select a second reference image from the plurality of images, where the second reference image is distinct from the reference image and the reference image and the second reference image comprise a stereo pair of images; encode the second reference image; and include the second reference image in the encoded light field image file.
  • 19. A method for encoding light field image data as a light field image file, comprising: obtaining a set of light field image data comprising a plurality of images of a scene captured from different viewpoints using an encoding system; creating a depth map that specifies depths for pixels in a reference image in the plurality of images using the encoding system; encoding the reference image using the encoding system; and creating a light field image file including the encoded image and metadata describing the encoded image using the encoding system, wherein: the light field image file is structured using the Exchangeable image file (Exif) format; and the metadata includes the depth map.
  • 20. A non-transitory machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process comprising: obtaining a set of light field image data comprising a plurality of images of a scene captured from different viewpoints; creating a depth map that specifies depths for pixels in a reference image in the plurality of images; encoding the reference image; and creating a light field image file including the encoded image and metadata describing the encoded image, wherein: the light field image file is structured using the Exchangeable image file (Exif) format; and the metadata includes the depth map.
CROSS-REFERENCE TO RELATED APPLICATION

The present invention is a continuation of U.S. patent application Ser. No. 13/631,731, filed Sep. 28, 2012, which claims priority to U.S. Provisional Application No. 61/540,188 entitled “JPEG-DX: A Backwards-compatible, Dynamic Focus Extension to JPEG”, to Venkataraman et al. and filed Sep. 28, 2011, the disclosures of which are incorporated herein by reference in their entirety.

US Referenced Citations (339)
Number Name Date Kind
4124798 Thompson Nov 1978 A
4198646 Alexander et al. Apr 1980 A
4323925 Abell et al. Apr 1982 A
4460449 Montalbano Jul 1984 A
4467365 Murayama et al. Aug 1984 A
5005083 Grage Apr 1991 A
5070414 Tsutsumi Dec 1991 A
5144448 Hornbaker Sep 1992 A
5629524 Stettner et al. May 1997 A
5808350 Jack et al. Sep 1998 A
5832312 Rieger et al. Nov 1998 A
5880691 Fossum et al. Mar 1999 A
5933190 Dierickx et al. Aug 1999 A
5973844 Burger Oct 1999 A
6002743 Telymonde Dec 1999 A
6005607 Uomori et al. Dec 1999 A
6034690 Gallery et al. Mar 2000 A
6069351 Mack May 2000 A
6069365 Chow et al. May 2000 A
6097394 Levoy et al. Aug 2000 A
6124974 Burger Sep 2000 A
6137535 Meyers Oct 2000 A
6141048 Meyers Oct 2000 A
6175379 Uomori et al. Jan 2001 B1
6239909 Hayashi et al. May 2001 B1
6358862 Ireland et al. Mar 2002 B1
6525302 Dowski, Jr. et al. Feb 2003 B2
6603513 Berezin Aug 2003 B1
6611289 Yu Aug 2003 B1
6627896 Hashimoto et al. Sep 2003 B1
6635941 Suda Oct 2003 B2
6671399 Berestov Dec 2003 B1
6750904 Lambert Jun 2004 B1
6765617 Tangen et al. Jul 2004 B1
6771833 Edgar Aug 2004 B1
6774941 Boisvert et al. Aug 2004 B1
6879735 Portniaguine et al. Apr 2005 B1
6903770 Kobayashi et al. Jun 2005 B1
6909121 Nishikawa Jun 2005 B2
6958862 Joseph et al. Oct 2005 B1
7085409 Sawhney et al. Aug 2006 B2
7199348 Olsen et al. Apr 2007 B2
7262799 Suda Aug 2007 B2
7292735 Blake et al. Nov 2007 B2
7295697 Satoh Nov 2007 B1
7369165 Bosco et al. May 2008 B2
7391572 Jacobowitz et al. Jun 2008 B2
7606484 Richards et al. Oct 2009 B1
7633511 Shum et al. Dec 2009 B2
7657090 Omatsu et al. Feb 2010 B2
7675080 Boettiger Mar 2010 B2
7675681 Tomikawa et al. Mar 2010 B2
7706634 Schmitt et al. Apr 2010 B2
7723662 Levoy et al. May 2010 B2
7986018 Rennie Jul 2011 B2
7990447 Honda et al. Aug 2011 B2
8013904 Tan et al. Sep 2011 B2
8027531 Wilburn et al. Sep 2011 B2
8044994 Vetro et al. Oct 2011 B2
8077245 Adamo et al. Dec 2011 B2
8106949 Tan et al. Jan 2012 B2
8126279 Marcellin et al. Feb 2012 B2
8131097 Lelescu et al. Mar 2012 B2
8180145 Wu et al. May 2012 B2
8189089 Georgiev May 2012 B1
8212914 Chiu Jul 2012 B2
8213711 Tam Jul 2012 B2
8231814 Duparre Jul 2012 B2
8242426 Ward et al. Aug 2012 B2
8289409 Chang Oct 2012 B2
8294099 Blackwell, Jr. Oct 2012 B2
8305456 McMahon Nov 2012 B1
8315476 Georgiev et al. Nov 2012 B1
8345144 Georgiev et al. Jan 2013 B1
8360574 Ishak et al. Jan 2013 B2
8406562 Bassi et al. Mar 2013 B2
8446492 Nakano et al. May 2013 B2
8514491 Duparre Aug 2013 B2
8541730 Inuiya Sep 2013 B2
8542933 Venkataraman et al. Sep 2013 B2
8619082 Ciurea et al. Dec 2013 B1
8692893 McMahon Apr 2014 B2
8780113 Ciurea et al. Jul 2014 B1
8804255 Duparre Aug 2014 B2
8831367 Venkataraman et al. Sep 2014 B2
8861089 Duparre Oct 2014 B2
20010005225 Clark et al. Jun 2001 A1
20010038387 Tomooka et al. Nov 2001 A1
20020012056 Trevino Jan 2002 A1
20020027608 Johnson et al. Mar 2002 A1
20020063807 Margulis May 2002 A1
20020087403 Meyers et al. Jul 2002 A1
20020089596 Yasuo Jul 2002 A1
20020094027 Sato et al. Jul 2002 A1
20020101528 Lee Aug 2002 A1
20020113867 Takigawa et al. Aug 2002 A1
20020113888 Sonoda et al. Aug 2002 A1
20020163054 Suda et al. Nov 2002 A1
20020167537 Trajkovic Nov 2002 A1
20020177054 Saitoh et al. Nov 2002 A1
20030086079 Barth et al. May 2003 A1
20030124763 Fan et al. Jul 2003 A1
20030140347 Varsa Jul 2003 A1
20030179418 Wengender et al. Sep 2003 A1
20030211405 Venkataraman Nov 2003 A1
20040008271 Hagimori et al. Jan 2004 A1
20040012689 Tinnerino Jan 2004 A1
20040047274 Amanai Mar 2004 A1
20040050104 Ghosh et al. Mar 2004 A1
20040056966 Schechner et al. Mar 2004 A1
20040100570 Shizukuishi May 2004 A1
20040114807 Lelescu et al. Jun 2004 A1
20040165090 Ning Aug 2004 A1
20040169617 Yelton et al. Sep 2004 A1
20040170340 Tipping et al. Sep 2004 A1
20040174439 Upton Sep 2004 A1
20040207836 Chhibber et al. Oct 2004 A1
20040213449 Safaee-Rad et al. Oct 2004 A1
20040234873 Venkataraman Nov 2004 A1
20040251509 Choi Dec 2004 A1
20050006477 Patel Jan 2005 A1
20050012035 Miller Jan 2005 A1
20050036778 DeMonte Feb 2005 A1
20050048690 Yamamoto Mar 2005 A1
20050068436 Fraenkel et al. Mar 2005 A1
20050132098 Sonoda et al. Jun 2005 A1
20050134712 Gruhlke et al. Jun 2005 A1
20050151759 Gonzalez-Banos et al. Jul 2005 A1
20050205785 Hornback et al. Sep 2005 A1
20050219363 Kohler et al. Oct 2005 A1
20050225654 Feldman et al. Oct 2005 A1
20050286612 Takanashi Dec 2005 A1
20060002635 Nestares et al. Jan 2006 A1
20060023197 Joel Feb 2006 A1
20060023314 Boettiger et al. Feb 2006 A1
20060033005 Jerdev et al. Feb 2006 A1
20060034003 Zalevsky Feb 2006 A1
20060038891 Okutomi et al. Feb 2006 A1
20060049930 Zruya et al. Mar 2006 A1
20060054780 Garrood et al. Mar 2006 A1
20060054782 Olsen et al. Mar 2006 A1
20060072029 Miyatake et al. Apr 2006 A1
20060087747 Ohzawa et al. Apr 2006 A1
20060098888 Morishita May 2006 A1
20060125936 Gruhike et al. Jun 2006 A1
20060138322 Costello et al. Jun 2006 A1
20060159369 Young Jul 2006 A1
20060176566 Boettiger et al. Aug 2006 A1
20060187338 May et al. Aug 2006 A1
20060197937 Bamji et al. Sep 2006 A1
20060210186 Berkner Sep 2006 A1
20060243889 Farnworth et al. Nov 2006 A1
20060251410 Trutna Nov 2006 A1
20060274174 Tewinkle Dec 2006 A1
20060278948 Yamaguchi et al. Dec 2006 A1
20060279648 Senba et al. Dec 2006 A1
20070002159 Olsen et al. Jan 2007 A1
20070024614 Tam Feb 2007 A1
20070040828 Zalevsky et al. Feb 2007 A1
20070040922 McKee et al. Feb 2007 A1
20070041391 Lin et al. Feb 2007 A1
20070052825 Cho Mar 2007 A1
20070083114 Yang et al. Apr 2007 A1
20070085917 Kobayashi Apr 2007 A1
20070102622 Olsen et al. May 2007 A1
20070126898 Feldman Jun 2007 A1
20070127831 Venkataraman Jun 2007 A1
20070139333 Sato et al. Jun 2007 A1
20070146511 Kinoshita et al. Jun 2007 A1
20070159541 Sparks et al. Jul 2007 A1
20070160310 Tanida et al. Jul 2007 A1
20070171290 Kroger Jul 2007 A1
20070206241 Smith et al. Sep 2007 A1
20070211164 Olsen et al. Sep 2007 A1
20070216765 Wong et al. Sep 2007 A1
20070228256 Mentzer Oct 2007 A1
20070257184 Olsen et al. Nov 2007 A1
20070258006 Olsen et al. Nov 2007 A1
20070258706 Raskar et al. Nov 2007 A1
20070268374 Robinson Nov 2007 A1
20070296835 Olsen et al. Dec 2007 A1
20080019611 Larkin Jan 2008 A1
20080025649 Liu et al. Jan 2008 A1
20080030597 Olsen et al. Feb 2008 A1
20080043095 Vetro et al. Feb 2008 A1
20080043096 Vetro et al. Feb 2008 A1
20080080028 Bakin et al. Apr 2008 A1
20080084486 Enge et al. Apr 2008 A1
20080088793 Sverdrup et al. Apr 2008 A1
20080118241 Tekolste et al. May 2008 A1
20080131019 Ng Jun 2008 A1
20080131107 Ueno Jun 2008 A1
20080151097 Chen et al. Jun 2008 A1
20080152215 Horie et al. Jun 2008 A1
20080152296 Oh et al. Jun 2008 A1
20080158698 Chang et al. Jul 2008 A1
20080193026 Horie et al. Aug 2008 A1
20080218610 Chapman et al. Sep 2008 A1
20080219654 Border et al. Sep 2008 A1
20080239116 Smith Oct 2008 A1
20080247638 Tanida et al. Oct 2008 A1
20080247653 Moussavi et al. Oct 2008 A1
20080278591 Barna et al. Nov 2008 A1
20090050946 Duparre et al. Feb 2009 A1
20090052743 Techmer Feb 2009 A1
20090060281 Tanida et al. Mar 2009 A1
20090086074 Li et al. Apr 2009 A1
20090091806 Inuiya Apr 2009 A1
20090102956 Georgiev Apr 2009 A1
20090109306 Shan et al. Apr 2009 A1
20090128833 Yahav May 2009 A1
20090152664 Klem et al. Jun 2009 A1
20090167922 Perlman et al. Jul 2009 A1
20090179142 Duparre et al. Jul 2009 A1
20090180021 Kikuchi et al. Jul 2009 A1
20090200622 Tai et al. Aug 2009 A1
20090201371 Matsuda et al. Aug 2009 A1
20090207235 Francini et al. Aug 2009 A1
20090225203 Tanida et al. Sep 2009 A1
20090237520 Kaneko et al. Sep 2009 A1
20090268192 Koenck et al. Oct 2009 A1
20090268983 Stone Oct 2009 A1
20090274387 Jin Nov 2009 A1
20090284651 Srinivasan Nov 2009 A1
20090297056 Lelescu et al. Dec 2009 A1
20090302205 Olsen et al. Dec 2009 A9
20090323195 Hembree et al. Dec 2009 A1
20090323206 Oliver et al. Dec 2009 A1
20090324118 Maslov et al. Dec 2009 A1
20100002126 Wenstrand et al. Jan 2010 A1
20100002313 Duparre et al. Jan 2010 A1
20100002314 Duparre Jan 2010 A1
20100053342 Hwang et al. Mar 2010 A1
20100053600 Tanida et al. Mar 2010 A1
20100060746 Olsen et al. Mar 2010 A9
20100086227 Sun et al. Apr 2010 A1
20100097491 Farina et al. Apr 2010 A1
20100103259 Tanida et al. Apr 2010 A1
20100103308 Butterfield et al. Apr 2010 A1
20100118127 Nam et al. May 2010 A1
20100133418 Sargent et al. Jun 2010 A1
20100141802 Knight et al. Jun 2010 A1
20100142839 Lakus-Becker Jun 2010 A1
20100157073 Kondo et al. Jun 2010 A1
20100177411 Hegde et al. Jul 2010 A1
20100195716 Klein et al. Aug 2010 A1
20100201834 Maruyama et al. Aug 2010 A1
20100208100 Olsen et al. Aug 2010 A9
20100220212 Perlman et al. Sep 2010 A1
20100231285 Boomer et al. Sep 2010 A1
20100244165 Lake et al. Sep 2010 A1
20100265385 Knight et al. Oct 2010 A1
20100281070 Chan et al. Nov 2010 A1
20100302423 Adams, Jr. et al. Dec 2010 A1
20110001037 Tewinkle Jan 2011 A1
20110032370 Ludwig Feb 2011 A1
20110043661 Podoleanu Feb 2011 A1
20110043665 Ogasahara Feb 2011 A1
20110043668 McKinnon et al. Feb 2011 A1
20110069189 Venkataraman et al. Mar 2011 A1
20110080487 Venkataraman et al. Apr 2011 A1
20110108708 Olsen et al. May 2011 A1
20110121421 Charbon et al. May 2011 A1
20110122308 Duparre May 2011 A1
20110153248 Gu et al. Jun 2011 A1
20110211824 Georgiev et al. Sep 2011 A1
20110221658 Haddick et al. Sep 2011 A1
20110241234 Duparre Oct 2011 A1
20110242342 Goma et al. Oct 2011 A1
20110242355 Goma et al. Oct 2011 A1
20110242356 Aleksic et al. Oct 2011 A1
20110255592 Sung et al. Oct 2011 A1
20110255745 Hodder et al. Oct 2011 A1
20110267348 Lin et al. Nov 2011 A1
20110273531 Ito et al. Nov 2011 A1
20110274366 Tardif Nov 2011 A1
20110279721 McMahon Nov 2011 A1
20110285866 Bhrugumalla et al. Nov 2011 A1
20110300929 Tardif et al. Dec 2011 A1
20110310980 Mathew Dec 2011 A1
20110317766 Lim, II et al. Dec 2011 A1
20120012748 Pain et al. Jan 2012 A1
20120026342 Yu et al. Feb 2012 A1
20120039525 Tian et al. Feb 2012 A1
20120044372 Côté et al. Feb 2012 A1
20120069235 Imai Mar 2012 A1
20120105691 Waqas et al. May 2012 A1
20120113413 Miahczylowicz-Wolski et al. May 2012 A1
20120147205 Lelescu et al. Jun 2012 A1
20120188634 Kubala et al. Jul 2012 A1
20120198677 Duparre Aug 2012 A1
20120200734 Tang Aug 2012 A1
20120249550 Akeley et al. Oct 2012 A1
20120262607 Shimura et al. Oct 2012 A1
20120287291 McMahon Nov 2012 A1
20120293695 Tanaka Nov 2012 A1
20120314033 Lee, III et al. Dec 2012 A1
20120327222 Ng et al. Dec 2012 A1
20130003184 Duparre Jan 2013 A1
20130010073 Do Jan 2013 A1
20130022111 Chen et al. Jan 2013 A1
20130027580 Olsen et al. Jan 2013 A1
20130033579 Wajs Feb 2013 A1
20130050504 Safaee-Rad et al. Feb 2013 A1
20130050526 Keelan Feb 2013 A1
20130057710 McMahon Mar 2013 A1
20130070060 Chatterjee Mar 2013 A1
20130077880 Venkataraman et al. Mar 2013 A1
20130077882 Venkataraman et al. Mar 2013 A1
20130088637 Duparre Apr 2013 A1
20130120605 Georgiev et al. May 2013 A1
20130128068 Georgiev et al. May 2013 A1
20130128069 Georgiev et al. May 2013 A1
20130128087 Georgiev et al. May 2013 A1
20130147979 McMahon et al. Jun 2013 A1
20130215108 McMahon et al. Aug 2013 A1
20130259317 Gaddy Oct 2013 A1
20130265459 Duparre et al. Oct 2013 A1
20140104490 Hsieh et al. Apr 2014 A1
20140118493 Sali et al. May 2014 A1
20140132810 McMahon May 2014 A1
20140192253 Laroia Jul 2014 A1
20140218546 Mcmahon Aug 2014 A1
20140232822 Venkataraman et al. Aug 2014 A1
20140240528 Venkataraman et al. Aug 2014 A1
20140240529 Venkataraman et al. Aug 2014 A1
20140267243 Venkataraman et al. Sep 2014 A1
20140267633 Venkataraman et al. Sep 2014 A1
20140267762 Mullis et al. Sep 2014 A1
20140267890 Lelescu et al. Sep 2014 A1
20150036014 Lelescu et al. Feb 2015 A1
20150036015 Lelescu et al. Feb 2015 A1
20150042766 Ciurea et al. Feb 2015 A1
20150042767 Ciurea et al. Feb 2015 A1
20150042833 Lelescu et al. Feb 2015 A1
20150049915 Ciurea et al. Feb 2015 A1
20150049916 Ciurea et al. Feb 2015 A1
20150049917 Ciurea et al. Feb 2015 A1
20150055884 Venkataraman et al. Feb 2015 A1
Foreign Referenced Citations (39)
Number Date Country
840502 May 1998 EP
2336816 Jun 2011 EP
2006033493 Feb 2006 JP
2007520107 Jul 2007 JP
2011109484 Jun 2011 JP
2007083579 Jul 2007 WO
2008108271 Sep 2008 WO
2009151903 Dec 2009 WO
2011063347 May 2011 WO
2011116203 Sep 2011 WO
2011063347 Oct 2011 WO
2011143501 Nov 2011 WO
2012057619 May 2012 WO
2012057620 May 2012 WO
2012057622 May 2012 WO
2012057623 May 2012 WO
2012057620 Jun 2012 WO
2012074361 Jun 2012 WO
2012078126 Jun 2012 WO
2012082904 Jun 2012 WO
2012155119 Nov 2012 WO
2013003276 Jan 2013 WO
2013043751 Mar 2013 WO
2013043761 Mar 2013 WO
2013049699 Apr 2013 WO
2013055960 Apr 2013 WO
2013119706 Aug 2013 WO
2013126578 Aug 2013 WO
2014052974 Apr 2014 WO
2012057621 May 2014 WO
2014032020 May 2014 WO
2014078443 May 2014 WO
2014130849 Aug 2014 WO
2014133974 Sep 2014 WO
2014138695 Sep 2014 WO
2014138697 Sep 2014 WO
2014145856 Sep 2014 WO
2014149403 Sep 2014 WO
2014150856 Sep 2014 WO
Non-Patent Literature Citations (150)
Entry
US 8,957,977, 02/2015, Venkataraman et al. (withdrawn)
US 8,964,053, 02/2015, Venkataraman et al. (withdrawn)
US 8,965,058, 02/2015, Venkataraman et al. (withdrawn)
International Search Report and Written Opinion for International Application No. PCT/US13/46002, Search Completed Nov. 13, 2013, Mailed Nov. 29, 2013, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/48772, Search Completed Oct. 21, 2013, Mailed Nov. 8, 2013, 6 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/56065, Search Completed Nov. 25, 2013, Mailed Nov. 26, 2013, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/59991, Search Completed Feb. 6, 2014, Mailed Feb. 26, 2014, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2009/044687, date completed Jan. 5, 2010, date mailed Jan. 13, 2010, 9 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/024987, Search Completed Mar. 27, 2013, Mailed Apr. 15, 2013, 14 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/056502, Search Completed Feb. 18, 2014, Mailed Mar. 19, 2014, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/069932, International Filing Date Nov. 13, 2013, Search Completed Mar. 14, 2014, Mailed Apr. 14, 2014, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US13/62720, report completed Mar. 25, 2014, Mailed Apr. 21, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/024903 report completed Jun. 12, 2014, Mailed, Jun. 27, 2014, 13 pgs.
International Search Report and Written Opinion for International Application PCT/US14/17766, report completed May 28, 2014, Mailed Jun. 18, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/18084, report completed May 23, 2014, Mailed Jun. 10, 2014, 12 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/18116, report completed May 13, 2014, Mailed Jun. 2, 2014, 12 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/24407, report completed Jun. 11, 2014, Mailed Jul. 8, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/25100, report completed Jul. 7, 2014, Mailed Aug. 7, 2014 5 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/25904 report completed Jun. 10, 2014, Mailed Jul. 10, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022123, report completed Jun. 9, 2014, Mailed Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024947, Report Completed Jul. 8, 2014, Mailed Aug. 5, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/028447, report completed Jun. 30, 2014, Mailed Jul. 21, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/23762, Report Completed May 30, 2014, Mailed Jul. 3, 2014, 6 Pgs.
IPRP for International Application No. PCT/US2012/059813, International Filing Date Oct. 11, 2012, Search Completed Apr. 15, 2014, 7 pgs.
Search Report and Written Opinion for International Application PCT/US11/36349, mailed Aug. 22, 2011, 12 pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/024987, Mailed Aug. 21, 2014, 13 Pgs., Aug. 21, 2014.
International Search Report and Written Opinion for International Application PCT/US2014/030692, report completed Jul. 28, 2014, Mailed Aug. 27, 2014, 7 Pages.
International Search Report and Written Opinion for International Application No. PCT/US2011/64921, Report Completed Feb. 25, 2011, mailed Mar. 6, 2012, 17 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/027146, completed Apr. 2, 2013, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US14/22118, report completed Jun. 9, 2014, Mailed, Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2010/057661, completed Mar. 9, 2011, 14 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/044014, completed Oct. 12, 2012, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/056151, completed Nov. 14, 2012, 10 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/059813, completed Dec. 17, 2012, 8 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/37670, Mailed Jul. 18, 2012, Search Completed Jul. 5, 2012, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/58093, completed Nov. 15, 2012, 12 pgs.
Office Action for U.S. Appl. No. 12/952,106, dated Aug. 16, 2012, 12 pgs.
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183.
Bertero et al., “Super-resolution in computational imaging”, Micron, 2003, vol. 34, Issues 6-7, 17 pgs.
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV 2010, Part II, LNCS 6493, pp. 186-200.
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, pp. 972-986.
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs.
Borman et al, “Image Sequence Processing”, Source unknown, Oct. 14, 2002, 81 pgs.
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 1998, 3653, 10 pgs.
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, Jun. 2003, 5016, 12 pgs.
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 2004, vol. 5299, 12 pgs.
Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, 1998. 3459, 9 pgs.
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473.
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378.
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084.
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, 2010, 11 pgs.
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394.
Capel, “Image Mosaicing and Super-resolution”, [online], Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1 &type=pdf>, Title pg., abstract, table of contents, pp. 1-263 (269 total pages).
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, 2006, vol. 3, pp. 623-626.
Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE, ISASSP 2006, pp. 1177-1180.
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim. Syst. Sign Process, 2007, vol. 18, pp. 83-101.
Chen et al., “Interactive deformation of light fields”, In Proceedings of Siggraph I3D 2005, pp. 139-146.
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Proceedings of The Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, 8 pgs.
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 8 pgs.
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 2009, vol. 83, Issue 3, 8 pgs.
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310.
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, 2008, vol. 3, pp. 1-6.
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 2004, pp. 89-100.
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551.
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, 2006, vol. 1, pp. R1-R16.
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, pp. 59622A-1-59622A-12.
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs.
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903.
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Prod. SPIE 5301, Jan. 2004, pp. 25-33.
Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, 2006, vol. 6196, pp. 619607-1-619607-15.
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418.
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 3005, vol. 44, No. 15, pp. 2949-2956.
Duparre et al., “Ultra-Thin Camera Based on Artificial Apposistion Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs.
Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012. Retrieved from the Internet at URL:<http://www.site.uottawa.ca/-edubois/theses/Fanaswala—thesis.pdf>, 163 pgs.
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, 2006, vol. 6069, 8 pgs.
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, 2004, vol. 14, pp. 47-57.
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, vol. 13, No. 10, pp. 1327-1344.
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, pp. 141-159.
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs.
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284.
Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7Mu Pixels in 0.11Mu CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50.
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, pp. 3-12.
Gortler et al., “The Lumigraph”, In Proceedings of Siggraph 1996, pp. 43-54.
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, 30, 4, 2011, pp. 70:1-70:10.
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs.
Hardie, “A Fast Image Super-Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, vol. 16, No. 12, pp. 2953-2964.
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, Computational Photography (ICCP) 2010, pp. 1-8.
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, 2010, vol. 3, pp. 022501-1-022501-3.
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, 2011, vol. 4, pp. 112501-1-112501-3.
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D 2007, pp. 121-128.
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of Siggraph 2000, pp. 297-306.
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80.
Kang et al., “Handling Occlusions inn Dense Multi-View Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. 1-103-1-110.
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727.
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831.
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Proc., CVPR 94, 8 pgs.
Lensvector, “How LensVector Autofocus Works”, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg.
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Aug. 2006, pp. 46-55.
Levoy et al., “Light Field Rendering”, Proc. ADM Siggraph '96, pp. 1-12.
Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution,” Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/˜jye/lab—research/08/deblur-feng.pdf on Feb. 5, 2014.
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120.
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10.
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs.
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 2006, pp. 30-38.
Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, I CCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL:http:I/ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8.
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs.
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378.
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900.
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, 2007, 12 pgs.
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36.
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, 2008, pp. 1-19.
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Jan. 2009, vol. 18, No. 1, pp. 36-51.
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077.
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552.
Rhemann et al, “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell, 2013, vol. 35, No. 2, pp. 504-511.
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs.
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228.
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2007, pp. 208-215.
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995, pp. 93-96.
Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, ACM Transactions on Graphics, vol. 23, No. 2, Apr. 2004, pp. 143-162.
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759.
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975.
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117.
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813.
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, Proceeding, CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—vol. 2, pp. 2331-2338.
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs.
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs.
Veilleux, “CCD Gain Lab: The Theory”, University of Maryland, College Park, Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 [online], [retrieved on May 13, 2014]. Retrieved from the Internet: <URL:http://www.astro.umd.edu/~veilleux/ASTR310/fall06/ccd_theory.pdf>.
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008.
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426.
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, 2005, 5674, 12 pgs.
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs.
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 765-776.
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. 294-301.
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs.
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, pp. 59622C-1-59622C-11.
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), pp. 1-10.
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, 2004, 12 pgs.
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6.
International Preliminary Report on Patentability for International Application PCT/US2013/027146, Report Completed Apr. 2, 2013, Mailed Sep. 4, 2014, 10 Pgs.
Extended European Search Report for European Application EP12835041.0, Report Completed Jan. 28, 2015, Mailed Feb. 4, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/023762, Report Issued Mar. 2, 2015, Mailed Mar. 9, 2015, 19 Pgs.
International Preliminary Report on Patentability for International Application PCT/US13/56065, Report Issued Feb. 24, 2015, Mailed Mar. 5, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/056502, Report Issued Feb. 24, 2015, Mailed Mar. 5, 2015, 7 Pgs.
Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188.
Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, Robotics and Automation (ICRA), 2011 IEEE International Conference on, May 2011, 8 pgs.
Levin et al., “A Closed Form Solution to Natural Image Matting”, Pattern Analysis and Machine Intelligence, Feb. 2008, vol. 30, 8 pgs.
Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, SPIE Proceedings, vol. 8291, Feb. 9, 2012, printed from www.raytrix.de, 15 pgs.
Tallon et al., “Upsampling and Denoising of Depth Maps Via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs.
Zhang, Qiang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, Proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-1-770505-8, XP055113797, ISSN: 0277-786X, DOI: 10.1117/12.852171.
Related Publications (1)
Number Date Country
20140369612 A1 Dec 2014 US
Provisional Applications (1)
Number Date Country
61540188 Sep 2011 US
Continuations (1)
Number Date Country
Parent 13631731 Sep 2012 US
Child 14477396 US