System and methods for depth regularization and semiautomatic interactive matting using RGB-D images

Information

  • Patent Grant
  • Patent Number
    10,574,905
  • Date Filed
    Monday, October 1, 2018
  • Date Issued
    Tuesday, February 25, 2020
Abstract
Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, the image processing pipeline application directs a processor to receive (i) an image and (ii) an initial depth map corresponding to the depths of pixels within the image, regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels, determine an object of interest to be extracted from the image, generate an initial trimap using the dense depth map and the object of interest to be extracted from the image, and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
Description
FIELD OF INVENTION

The present invention relates to image matting using depth information. In particular, the present invention relates to automatic generation of a trimap for image matting using a regularized depth map and/or interactive user input.


BACKGROUND

Image matting is the process of extracting an object of interest (e.g., a foreground object) from its background, an important task in image and video editing. Image matting may be used for a variety of different purposes, such as to extract an object for placement with a different background image. Current techniques exist that may be used to determine which pixels belong to the foreground object and which pixels are part of the background. In particular, many current approaches generally estimate foreground and background colors based on color values of nearby pixels, where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. Conventional matting operations, however, could result in errors in describing certain portions of images that are more difficult to identify as belonging to the background or foreground of the image.


SUMMARY OF THE INVENTION

Systems and methods in accordance with embodiments of this invention perform depth regularization and semiautomatic interactive matting using images. In an embodiment of the invention, an image matting system includes at least one processor for executing sets of instructions, memory containing an image processing pipeline application. In an embodiment of the invention, the image processing pipeline application directs the processor to receive (i) an image comprising a plurality of pixel color values for pixels in the image and (ii) an initial depth map corresponding to the depths of the pixels within the image; regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap using the dense depth map and the object of interest to be extracted from the image; and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.


In another embodiment of the invention, the image matting system further includes: a camera that captures images of a scene; and a display on the camera for providing a preview of the scene.


In yet another embodiment of the invention, the image processing application further directs the processor to: detect an insufficient separation between the object of interest and remaining portions of the scene being captured; and provide a notification within the display of the camera to capture a new image at a suggested setting.


In still yet another embodiment of the invention, the image processing application further directs the processor to: capture a candidate image using the camera; display the candidate image on the display of the camera; receive a selection of the portion of the image for image matting through a user interface of the camera, where generating the initial trimap includes using the selected portion of the image to determine foreground and background pixels of the image in the initial trimap.


In a still further embodiment of the invention, regularizing the initial depth map into a dense depth map includes performing Laplacian matting to compute a Laplacian L.


In still a further embodiment of the invention, the image processing application directs the processor to prune the Laplacian L.


In another embodiment of the invention, pruning the Laplacian L includes: for each pair i,j of pixels in affinity matrix A, determine if i and j have depth differences beyond a threshold; and if the difference is beyond the threshold, purge the pair i,j within the affinity matrix A.


In still another embodiment of the invention, the image processing application further directs the processor to detect and correct depth bleeding across edges by computing a Laplacian residual R and removing incorrect depth values based on the Laplacian residual R.


In another embodiment of the invention, the image processing application computes the Laplacian residual R by computing R=Lz* where z* is the regularized depth map, where removing incorrect depth values includes identifying regions where R>0.


In a further embodiment of the invention, the image is defined according to a red, green, blue (RGB) color model.


In still another embodiment, the camera is at least one selected from the group consisting of an array camera, a light-field camera, a time-of-flight depth camera, and a camera equipped with a depth sensor.


In a yet further embodiment, the image processing application further directs the processor to determine the object of interest to be extracted from the image using at least one selected from the group consisting of: face recognition and object recognition to automatically identify the object of interest in the image.


In another embodiment again, the image processing application further directs the processor to: receive a user touch input on the display of the camera indicating at least one selected from the group consisting of: an object of interest, a foreground region of the image, and a background region of the image.


In another embodiment yet again, the image processing application further directs the processor to place the object of interest on a new background as a composite image.


In a still further embodiment, the initial depth map is received from a device used to capture the image, where the device is at least one selected from the group consisting of: a camera, an array camera, a depth sensor, a time-of-flight camera, and a light-field camera.


An embodiment of the invention includes an array camera, including: a plurality of cameras that capture images of a scene from different viewpoints; a processor; memory containing an image processing pipeline application; where the image processing pipeline application directs the processor to: receive (i) an image comprising a plurality of pixel color values for pixels in the image and (ii) an initial depth map corresponding to the depths of the pixels within the image; regularize the initial depth map into a dense depth map using depth values of known pixels to compute depth values of unknown pixels; determine an object of interest to be extracted from the image; generate an initial trimap using the dense depth map and the object of interest to be extracted from the image; and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.


In a further embodiment, the image processing pipeline application further directs the processor to: capture a set of images using a group of cameras and determine the initial depth map using the set of images.


In yet a further embodiment of the invention, the image processing pipeline application regularizes the initial depth map into a dense depth map by performing Laplacian matting to compute a Laplacian L.


In yet another embodiment of the invention, the image processing application further directs the processor to prune the Laplacian L, wherein pruning the Laplacian L includes for each pair i,j of pixels in affinity matrix A, determine if i and j have depth differences beyond a threshold; and if the difference is beyond the threshold, purge the pair i,j within the affinity matrix A.


In still a further embodiment of the invention, the image processing application further directs the processor to detect and correct depth bleeding across edges by computing a Laplacian residual R and removing incorrect depth values based on the Laplacian residual R.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A conceptually illustrates an array camera in accordance with an embodiment of the invention.



FIG. 1B illustrates a process for interactive guided image capture for image matting in accordance with embodiments of the invention.



FIGS. 1C-1E conceptually illustrate examples of notifications provided to a user based on a captured candidate image in accordance with an embodiment of the invention.



FIG. 1F illustrates a process for depth regularization and semiautomatic interactive matting in accordance with an embodiment of the invention.



FIG. 1G conceptually illustrates an overall pipeline for semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 2 conceptually illustrates an example of depth map regularization for a sample 3D scene in accordance with an embodiment of the invention.



FIG. 3 conceptually illustrates Laplacian pruning and residual correction of a depth map in accordance with an embodiment of the invention.



FIG. 4 conceptually illustrates an example of depth regularization of an image produced from an array camera in accordance with an embodiment of the invention.



FIG. 5 conceptually illustrates a depth regularization comparison with several different depth sensors in accordance with embodiments of the invention.



FIG. 6 conceptually illustrates applications of regularized depths in accordance with embodiments of the invention.



FIG. 7 conceptually illustrates fully automatic image matting using object recognition in accordance with an embodiment of the invention.



FIG. 8 conceptually illustrates two mattes computed with an embodiment of the image matting system on a color RGB alpha matting benchmark.



FIG. 9 conceptually illustrates the speedup of an embodiment of the image matting system versus KNN matting in accordance with an embodiment of the invention.



FIG. 10 conceptually illustrates interactive editing of trimaps with real-time feedback in accordance with an embodiment of the invention.



FIG. 11 conceptually illustrates an example of an additional efficiency optimization to color matting to also solve for foreground and background efficiently in accordance with an embodiment of the invention.



FIG. 12 conceptually illustrates an example of a fast approximation optimization in accordance with an embodiment of the invention.



FIG. 13 conceptually illustrates a first example of semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 14 conceptually illustrates a second example of semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 15 conceptually illustrates a third example of semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 16 conceptually illustrates a fourth example of semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 17 conceptually illustrates a fifth example of semi-automatic interactive image matting in accordance with an embodiment of the invention.



FIG. 18 conceptually illustrates a sixth example of semi-automatic interactive image matting in accordance with an embodiment of the invention.





DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for depth regularization and semiautomatic interactive matting using depth information in images in accordance with embodiments of the invention are illustrated. In some embodiments, a camera may be used to capture images for use in image matting. In several embodiments, the camera may also provide information regarding depth to objects within a scene captured by an image that may be used in image matting. In many embodiments, the depth information may be captured using any one of array cameras, time-of-flight depth cameras, light-field cameras, and/or cameras that include depth sensors, among various other types of devices capable of capturing images and/or providing depth information. In several embodiments, the depth information may be obtained through various other methods, such as using a camera that captures multiple images and computing depth information from motion, multiple images captured with different focal lengths and computing depth from defocus, among various other methods for obtaining depth information.


In a number of embodiments, the depth information may be used to generate a trimap for use in image matting. In particular, in several embodiments, the trimap may be generated based on the depth of the pixels of the foreground relative to the depth of the pixels in the background. The discussion below may use the term “foreground object” to indicate an object of interest that is to be extracted from an image through image matting; however, this object does not necessarily have to be positioned as the foremost object in the foreground of the image, but may be located at any of a variety of different depths within the image. Users are typically interested in extracting the foreground object(s) (e.g., faces, people, among other items) appearing in an image during image matting, and for purposes of this discussion, the term foreground object may be used to specify the particular object of interest that is to be extracted with image matting.


In certain embodiments, a user may provide an indication of the object of interest (i.e., foreground object) to be extracted from an image through various types of user input and/or interaction with the device. In many embodiments, the user input may be a single stroke indicating an object of interest, a foreground and/or background of the image. In various embodiments, the image matting system may use face recognition and/or object recognition to avoid user input entirely during the image matting process.


In some embodiments, the camera may capture a candidate image and provide a preview display of the candidate image to a user and the user may then provide user input that indicates an object of interest that is to be extracted from the candidate image through image matting. In several embodiments, an image-matting system can be utilized to determine whether the indicated object of interest within the candidate image can be readily extracted from the captured candidate image or if the user should modify certain aspects of the scene being captured and/or camera settings to provide for a better candidate image for use in the matting of the indicated object of interest. In particular, in some embodiments, the camera may provide a notification to adjust the separation between the camera, foreground object of interest, and/or background relative to one another in order to provide the user with the ability to capture a better candidate image and/or depth information for use in image matting of the indicated object of interest.


As described above, the preview display of the camera in some embodiments may display a notification to the user to adjust certain properties of the camera and/or scene being captured. For example, the display may provide a notification to the user to increase the distance (i.e., separation) between the object of interest and the background, to decrease the distance between the camera lens and the object of interest (e.g., move the camera towards the object of interest or move the object closer to the camera), and/or to increase the distances between the background, object of interest, and camera lens. In some embodiments, the particular recommendation provided within the notification may vary based on the characteristics of the candidate image and/or scene settings.


In some embodiments, the image matting system may also use the user input identifying the object of interest, foreground, and/or background of the image during various stages of the image matting process, including during the generation of the trimap from the user input.


As described above, in many embodiments, the camera may capture color images (e.g., RGB images) that also provide depth information of the scene being captured. In certain embodiments, the initial depth information is provided as an initial sparse depth map. In particular, the initial sparse depth map may not have depth values for all pixels within the captured image, or may have depth values that are below a certain confidence value regarding the accuracy of the depth, for a certain portion of the pixels within the image.


In order to be able to use the initial sparse depth map for image matting, the sparse depth map can be regularized into a dense depth map. In a number of embodiments, the dense depth map is created using an affine combination of depths at nearby pixels to compute depths for unknown and/or ambiguous pixel depths. In several embodiments, the image matting system uses dense depth regularization and matting within a unified Laplacian framework. In certain embodiments, the image matting system may also use the Laplacian residual to correct input depth errors. In particular, the resulting dense depth map may be fairly accurate in most regions, but may not be as precise at boundaries as image pixel values. Therefore, some embodiments utilize depth discontinuities in RGB-D images to automate creation of a thin uncertain region in a trimap. In these embodiments, the user may mark as little as a single foreground and/or background stroke to provide an indication of the object of interest to be extracted with image matting. In some embodiments, the image matting system may also use occlusion and visibility information to automatically create an initial trimap.


In some embodiments, the image matting system uses the dense depth map for trimap generation for use in image matting. In particular, based on an identified object of interest, in some embodiments, the image matting system generates an initial trimap with thin uncertain regions by doing a segmentation based on depth into parts of the image closer to the foreground or background depth.


Upon generating the initial trimap using the dense depth map, the image matting system can apply conventional color matting (e.g., conventional Laplacian color matting) based on the color image to the initial thin trimap to extract the object of interest, using an optimization to the conventional color matting that solves a reduced linear system for alpha values in only the uncertain regions of the trimap. In particular, the image matting system may use an efficient color matting algorithm that utilizes a reduced linear system to compute alpha values only at unknown pixels and to generate the matte to use for image matting, achieving speedups of one to two orders of magnitude. This also lends itself to incremental computation, enabling interactive changes to the initial automatically-generated trimap, with real-time updates of the matte. In some embodiments, the image matting system may use the matte to extract the object of interest and subsequently overlay the object of interest and/or composite it with other images including (but not limited to) background images.


Image matting using depth information from images, dense depth map regularization, and optimizations to conventional color matting processes in accordance with embodiments of the invention are described below.


Array Cameras


As described above, an array camera may be used to capture color images that include depth information for use in image matting. Array cameras in accordance with many embodiments of the invention can include an array camera module including an array of cameras and a processor configured to read out and process image data from the camera module to synthesize images. An array camera in accordance with an embodiment of the invention is illustrated in FIG. 1A. The array camera 100 includes an array camera module 102 with an array of individual cameras 104, where an array of individual cameras refers to a plurality of cameras in a particular arrangement, such as (but not limited to) the square arrangement utilized in the illustrated embodiment. In other embodiments, any of a variety of grid or non-grid arrangements of cameras can be utilized. Various array camera configurations including monolithic and non-monolithic arrays incorporating various different types of cameras are disclosed in U.S. Patent Publication No. 2011/0069189 entitled “Capturing and Processing of Images Using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., the relevant disclosure with respect to different array camera configurations including (but not limited to) the disclosure with respect to arrays of arrays is hereby incorporated by reference herein in its entirety. The array camera module 102 is connected to the processor 106. The processor is also configured to communicate with one or more different types of memory 108 that can be utilized to store an image processing pipeline application 110, image data 112 captured by the array camera module 102, image matting application 108, and an image display UI 114. The image processing pipeline application 110 is typically non-transitory machine readable instructions utilized to direct the processor to perform processes including (but not limited to) the various processes described below. In several embodiments, the processes include coordinating the staggered capture of image data by groups of cameras within the array camera module 102, the estimation of depth information from the captured image data 112, and image matting of the captured images using the captured image data, including the captured depth information. The image display UI 114 receives user inputs on a display of the array camera regarding portions of a captured image to be extracted during image matting and displays the captured images on a display of the device. In some embodiments, the image display UI 114 may continuously receive user inputs for portions of a captured image to be extracted during image matting and provide these inputs to the image processing pipeline application 110. In some embodiments, the image processing pipeline application 110 may determine whether a captured image is optimal for image matting and, based on this determination, provides a notification, via the image display UI 114, providing a suggestion for capturing an image to use for image matting.


Processors 106 in accordance with many embodiments of the invention can be implemented using a microprocessor, a coprocessor, an application specific integrated circuit and/or an appropriately configured field programmable gate array that is directed using appropriate software to take the image data captured by the cameras within the array camera module 102 and apply image matting to captured images in order to extract one or more objects of interest from the captured images.


In several embodiments, a captured image is rendered from a reference viewpoint, typically that of a reference camera 104 within the array camera module 102. In many embodiments, the processor is able to synthesize the captured image from one or more virtual viewpoints, which do not correspond to the viewpoints of any of the focal planes 104 in the array camera module 102. Unless all of the objects within a captured scene are a significant distance from the array camera, the images of the scene captured within the image data will include disparity due to the different fields of view of the cameras used to capture the images. Processes for detecting and correcting for disparity are discussed further below. Although specific array camera architectures are discussed above with reference to FIG. 1A, alternative architectures can also be utilized in accordance with embodiments of the invention.


Array camera modules that can be utilized in array cameras in accordance with embodiments of the invention are disclosed in U.S. Patent Publication 2011/0069189 entitled “Capturing and Processing of Images Using Monolithic Camera Array with Heterogeneous Imagers”, to Venkataraman et al. and U.S. patent application Ser. No. 14/536,537 entitled “Methods of Manufacturing Array Camera Modules Incorporating Independently Aligned Lens Stacks,” to Rodda et al., which are hereby incorporated by reference in their entirety. Array cameras that include an array camera module augmented with a separate camera that can be utilized in accordance with embodiments of the invention are disclosed in U.S. patent application Ser. No. 14/593,369 entitled “Array Cameras Including An Array Camera Module Augmented With A Separate Camera,” to Venkataraman et al., and is herein incorporated by reference in its entirety. The use of image depth information for image matting in accordance with embodiments of the invention is discussed further below.


Guided Image Capture for Image Matting


In some embodiments, the image matting system guides a user to allow the user to capture a better quality image for image matting. In particular, a camera display can provide a notification to a user to adjust camera settings and/or physical composition of the imaged scene for better image matting results. For example, the camera may display real-time notifications to the user to move the object of interest closer to the camera lens, move the object of interest further away from the background, among various other possible notifications that would allow an image to be captured that is optimized for image matting. A process for guided image capture for image matting in accordance with an embodiment of the invention is illustrated in FIG. 1B.


The process displays 1B05 a preview of a captured candidate image. In some embodiments, the image may be captured by a camera, such as an array camera, time-of-flight camera, light-field camera, among various other types of cameras. In some embodiments, the image being previewed may be captured from a camera while the depth information may be provided by a depth sensor, such as a secondary depth camera.


In some embodiments, the process displays a preview of an image (and the user does not necessarily need to provide a command to capture the image), for example, through a real-time display of the image that is being imaged by the lens of the camera device.


The process may, optionally, receive 1B10 a selection of an image matting mode. For example, a user may select the image matting mode on the camera. The image matting mode may be used to extract an object of interest from the candidate image.


The process receives 1B15 an indication of an object of interest that is to be extracted from the candidate image and/or image being previewed within the display. The selection may be received from a user input identifying the object of interest (e.g., a foreground object), such as a user stroke over the object of interest, a user touch of the display, among various other mechanisms. In some embodiments, the object of interest may be automatically identified using one or more different object recognition processes. In particular, some embodiments may use face recognition processes to automatically identify the object(s) of interest.


The process determines (at 1B20) whether it detects an insufficient separation between the object of interest, foreground, background, and/or the camera. If the process determines (at 1B20) that it does not detect an insufficient separation, the process may provide (at 1B27) a notification that it is ready to capture the image (upon which the user may trigger the capturing of an image) and the process may compute (1B30) the image matte for the captured image.


In some embodiments, in order to determine whether a candidate image provides a sufficient separation, the process estimates depths of pixels in the candidate image scene and determines whether the depths of the object of interest are within a threshold of the depths of the remaining foreground and/or background of the scene. In some embodiments, the process regularizes the sparse depth map into a dense depth map. Techniques for depth regularization are described in detail below.


Based on the dense depth map, in several embodiments, the process computes a histogram and analyzes the distribution of pixels to determine the existence of a prominent foreground object (or object of interest) and a distribution of one or more background depths. In some embodiments, the process uses an automated threshold to separate the foreground object of interest from the background. As described above, when the object of interest is not necessarily the foremost object within the image, some embodiments may use a second threshold to exclude the foreground from the object of interest as well. In several embodiments, once a satisfactory separation of the distribution/histogram of the object of interest is obtained from the distribution/histogram of depths for the rest of the scene, the process determines that the scene satisfies criteria optimal for image matting.
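As an illustration of this separation check, a minimal sketch is shown below; it assumes a dense disparity map and a boolean mask over the selected object, and the gap threshold is an illustrative assumption rather than a value specified in the embodiments above.

```python
import numpy as np


def has_sufficient_separation(dense_depth, object_mask, min_gap=0.15):
    # Median disparity of the indicated object of interest.
    object_depth = np.median(dense_depth[object_mask])
    rest = dense_depth[~object_mask]
    # Coarse histogram of the remaining scene; take the dominant depth mode.
    hist, edges = np.histogram(rest, bins=32)
    dominant = np.argmax(hist)
    background_depth = 0.5 * (edges[dominant] + edges[dominant + 1])
    # Require the gap to be a minimum fraction of the overall depth range
    # (min_gap is an illustrative threshold).
    depth_range = float(dense_depth.max() - dense_depth.min()) + 1e-6
    return abs(object_depth - background_depth) / depth_range >= min_gap
```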


If the process determines (at 1B20) that it detects an insufficient separation between the object of interest, foreground, background, and/or camera, the process displays (at 1B25) a notification with a suggestion for obtaining a sufficient separation between the object of interest, foreground, background, and/or camera that is optimized for image matting.


The process then receives a new captured candidate image, and the initial sparse depth map for the image, for image matting and returns to 1B20 to determine whether the new candidate image is of a sufficient quality for image matting. This process can iterate until a candidate image with sufficient quality is captured.


The process then completes. Although specific processes are described above with respect to FIG. 1B with respect to guided image capture for image matting, any of a variety of processes can be utilized to provide physical (e.g., vibrations), audio and/or visual guides via a user interface to direct image capture for improved image matting quality as appropriate to the requirements of specific applications in accordance with embodiments of the invention. Several examples of guided image capture will now be described.



FIGS. 1C-1E illustrate several examples of various different notifications that may be provided to a user via a camera user interface based on a detection of an insufficient separation between a foreground object and the rest of the scene for image matting. In particular, FIG. 1C illustrates a candidate captured image 1C05 for image matting. In the candidate captured image 1C05, the user is selecting an object of interest of a person standing in front of a door. The image matting system computes a depth map 1C10 for the captured image. As illustrated in the depth map 1C10, the color coding for the depth of the person is a light shade of blue and that of the background is a slightly darker shade of blue, which, according to the depth key, indicates that there is relatively small depth separation between the object of interest and background.


Likewise, the image matting system has detected an insufficient separation between the object of interest (i.e., the person) and the background (i.e., the door and wall), and thus provides a notification 1C25 with a suggestion that the user increase the distance between the object of interest (i.e., person) and the background. In this example, the user (i.e., camera) moves back and the object of interest (i.e., person) moves closer to the camera, increasing the separation between the foreground person and the background wall/door.


In some embodiments, the image matting system provides an indication of “ready for capture” once it detects a sufficient depth separation between the foreground and background. As illustrated in this example, the depth map 1C20 now illustrates a greater depth distribution between the foreground object of interest and background, with the foreground person in bright green and the background wall/door in dark blue indicating a greater depth separation as compared to the depth map 1C10.


Another example of a preview for image matting in accordance with an embodiment of the invention is illustrated in FIG. 1D. Like the example illustrated in FIG. 1C, the image matting system has detected an insufficient separation between the foreground object of interest (i.e., person) and the background (i.e., wall/door). In this example, the image matting system suggests 1D25 that the user (i.e., camera) move closer to the object of interest and that the object of interest remain static. As illustrated by the new captured image 1D15 and the new depth map 1D20, there is an increased depth separation between the foreground object of interest (illustrated as orange in the depth map) and the background (illustrated as bright green in the depth map).


Yet another example of a preview for image matting in accordance with an embodiment of the invention is illustrated in FIG. 1E. In this example, the notification 1E25 suggests that the user remain static and the object of interest (i.e., person) move closer to the user (i.e., camera). This allows for a sufficient depth separation for image matting.


Although FIGS. 1C-1E illustrate several examples of previews for image matting, many different previews and/or notifications may be implemented as appropriate to the requirements of specific applications and/or scene content in accordance with embodiments of the invention. The use of depth regularization for image matting in accordance with embodiments of the invention is described further below.


Introduction to Image Matting with RGB-D Images


As described above, recent developments have made it easier to acquire RGB-D images with scene depth information D in addition to pixel RGB color. Examples of devices that may capture depth information include time-of-flight depth cameras, camera arrays, depth from light-field cameras and depth from sensors (e.g., Kinect). These developments provide for new opportunities for computer vision applications that utilize RGB-D images. However, the initial sparse depth is typically coarse and may only be available in sparse locations such as image and texture edges for stereo-based methods. Thus, in order to allow for image matting using the depth information, in some embodiments, the initial sparse depth may be regularized into a dense depth map. During the depth map regularization, some embodiments detect and correct depth bleeding across edges.


Accordingly, many embodiments provide for depth regularization and semi-automatic interactive alpha-matting of RGB-D images. In several embodiments, a compact form-factor camera array with multi-view stereo is utilized for depth acquisition. Certain embodiments may use high quality color images captured via an additional camera(s) that are registered with the depth map to create an RGB-D image. Although RGB-D images captured using array cameras are described above, image matting using image depth information provided by many different types of depth sensors may be used as appropriate to the requirements of specific application in accordance with embodiments of the invention.


As described above, many embodiments of the image matting system leverage a Laplacian-based matting framework, with the recent K Nearest Neighbors (“kNN”) matting approach disclosed in Chen et al. “Knn Matting”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 869-876, (2012), (“the Chen et al. 2012 paper”) the relevant disclosure from which is incorporated by reference herein in its entirety, to build the Laplacian in order to regularize the initial sparse depth map into a dense depth map for image matting. Some embodiments enable semi-automatic interactive matting with RGB-D images by making the following contributions described in more detail below: (1) dense depth regularization and matting within a unified Laplacian framework, and also use of the Laplacian residual to correct input depth errors; (2) the use of the dense depth segmentation for automatic detailed trimap generation from very sparse user input (a single stroke for foreground and/or background); (3) the use of face detectors to avoid user input entirely in some embodiments; and (4) efficient interactive color matting by incrementally solving a reduced linear system.


A process for depth regularization and semiautomatic interactive matting using color and/or depth information of images for image matting in accordance with an embodiment of the invention is illustrated in FIG. 1F.


The process receives (at 1F05) a candidate image that includes depth information. In some embodiments the image is an RGB-D (i.e., a red, green, blue image with an initial sparse depth map) image. In other embodiments, the image may be defined according to a different color model.


The process regularizes (at 1F10) the initial sparse depth map into a dense depth map. As described in detail below, the regularization process of some embodiments may include using a Laplacian framework to compute depth values and using the Laplacian residual to correct input depth bleeding across image edges.


The process receives (at 1F15) an indication of an object of interest to be extracted by the image matting system. In some embodiments, the indication of an object of interest may be received from a user input identifying the object of interest, foreground, and/or background portions of the image. In some embodiments, the process may receive a single user stroke on the foreground and the process may automatically identify the foreground and/or background layers based on the stroke. In certain embodiments, the user input may be a single stroke on the foreground and a single stroke on the background for image matting. In certain embodiments, object recognition processes, such as (but not limited to) face recognition, may be used to automatically identify an object of interest.


The process can generate (at 1F20) a trimap with a thin uncertain zone based on the indicated foreground object of interest and/or the dense depth map. In particular, in some embodiments, the process computes the average depth of the regions under the indicated foreground and/or background and segments based on depth into parts closer to the foreground depth and/or background depth. The process then automatically generates the thin trimap by dilating the boundary between foreground and background for the unknown regions. In some embodiments, the process may continue to receive user inputs (e.g., user strokes) indicative of foreground and/or background regions of the image, and the process may continue to refine the initial trimap. This may occur when the initial trimap omits parts of the object of interest (e.g., the foreground object), in which case the user may provide more hints as to the foreground object of interest.
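A minimal sketch of this depth-based trimap generation is shown below; the stroke masks, the dilation radius, and the 0/0.5/1 trimap encoding are illustrative assumptions rather than details taken from the embodiments described above.

```python
import numpy as np
from scipy import ndimage


def build_trimap(dense_depth, fg_mask, bg_mask, band_width=7):
    # Average depth under the foreground and background strokes.
    fg_depth = dense_depth[fg_mask].mean()
    bg_depth = dense_depth[bg_mask].mean()
    # Segment by depth: pixels closer to the foreground depth than the background depth.
    foreground = np.abs(dense_depth - fg_depth) < np.abs(dense_depth - bg_depth)
    # Thin unknown band: dilate the foreground/background boundary.
    boundary = ndimage.binary_dilation(foreground) ^ ndimage.binary_erosion(foreground)
    unknown = ndimage.binary_dilation(boundary, iterations=band_width)
    trimap = np.where(foreground, 1.0, 0.0)  # 1 = foreground, 0 = background
    trimap[unknown] = 0.5                    # 0.5 marks the thin unknown region
    return trimap
```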


The process can apply (at 1F25) optimized color matting to compute the image matte. As will be described in detail below, in some embodiments, the process may apply a conventional kNN-based (K nearest-neighbor) Laplacian color matting based on the color values of the image, with the Laplacian matting optimized to solve for alpha values in only the unknown regions of the trimap. In the illustrated embodiment, the process then completes.


Although specific processes are described above with respect to FIG. 1F with respect to depth regularization and semiautomatic interactive image matting using color and depth information for an image, any of a variety of processes can be utilized for depth regularization as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


An example of the image matting pipeline for semi-automatic interactive RGB-D matting in accordance with an embodiment of the invention is illustrated in FIG. 1G. As illustrated, given an input image 1G05 with a single user stroke or blob to identify foreground (F) and background (B), the image matting system produces an alpha matte 1G15. The image matting system can automatically create a thin trimap 1G10 that can also be interactively modified (e.g., to correct it slightly near the arm and shoulder, marked by squares). The bottom row shows the input depth map 1G20 and the regularized dense depth map 1G25 that can be utilized to perform depth-based segmentation 1G30, that can be dilated to create the trimap 1G10. In some embodiments, the RGB-D image may be captured on a tablet equipped with a miniature array camera 1G35. In other embodiments, any of a variety of camera architectures including (but not limited to) an array incorporating a sub-array dedicated to depth sensing and a separate camera(s) dedicated to capturing texture (e.g. RGB) data and/or an active depth sensor can be utilized as appropriate to the requirements of specific applications.


In some embodiments of the image matting system, essentially the same machinery as Laplacian-matting may be used for depth regularization. This enables an efficient unified framework for depth computation and matting. Some embodiments also provide a novel approach of using the Laplacian residual to correct input depth bleeding across image edges. In many embodiments, the dense depth may also be used for other tasks, such as image-based rendering.


In several embodiments, the image matting system may receive a single stroke on the foreground and/or background to identify the foreground object that is to be extracted during image matting. During image matting, the image matting system can compute the average depth of the regions under one or both strokes (i.e., foreground and background), and the image matting system may do a quick segmentation based on depth, into parts closer to the foreground or background depth. In some embodiments, the image matting system can automatically create a thin trimap for image matting, by dilating the boundary between foreground and background for the unknown regions. Thereafter, the image matting system may apply a kNN-based Laplacian color matting in the conventional way, but with certain optimizations described in detail below, based on the color image only (since colors typically have greater precision at object boundaries in the image than the regularized depth).


Some embodiments of the image matting system provide an optimization to conventional Laplacian color matting that makes it one to two orders of magnitude more efficient without any loss in quality. In particular, in some embodiments, instead of solving for alpha values over the entire image and treating the user-provided trimap as a soft constraint with high weight, the image matting system solves a reduced linear system for alpha values only in the unknown regions and no constraint weight is needed. Moreover, by starting the linear solver at the previous solution, the image matting system may incrementally update a matte efficiently. Thus, the user can interactively modify or correct the automatic trimap with real-time feedback.
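The sketch below illustrates this reduced-system idea: treat the known alphas as hard constraints and solve only the unknown block, L_UU x_U = −L_UK x_K, warm-starting the conjugate-gradient solver from the previous matte for incremental trimap edits. The function name, solver parameters, and trimap encoding are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg


def solve_unknown_alphas(L, trimap, alpha_prev=None):
    flat = trimap.ravel()
    unknown_idx = np.flatnonzero(flat == 0.5)
    known_idx = np.flatnonzero(flat != 0.5)
    x_known = (flat[known_idx] == 1.0).astype(np.float64)  # 1 = foreground, 0 = background
    L = sp.csr_matrix(L)
    # Reduced system: L_UU x_U = -L_UK x_K (known alphas act as hard constraints).
    L_uu = L[unknown_idx][:, unknown_idx]
    L_uk = L[unknown_idx][:, known_idx]
    rhs = -L_uk @ x_known
    # Warm start from the previous matte when the trimap is edited interactively.
    x0 = None if alpha_prev is None else alpha_prev.ravel()[unknown_idx]
    x_unknown, _ = cg(L_uu, rhs, x0=x0, maxiter=2000)
    alpha = flat.astype(np.float64)
    alpha[known_idx] = x_known
    alpha[unknown_idx] = np.clip(x_unknown, 0.0, 1.0)
    return alpha.reshape(trimap.shape)
```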


Related Work—Alpha Matting


Laplacian matting is a popular framework. Methods like local and non-local smooth priors (“LNSP”) matting build on this, achieving somewhat higher performance on the Alpha Matting benchmark by combining nonlocal and local priors. A few matting systems are interactive, but typically must make compromises in quality. Other methods are specialized to inputs from array cameras but do not apply to general RGB-D images, or demonstrate quality comparable to state of the art color matting.


Accordingly, image matting systems in accordance with many embodiments of the invention can provide for making semi-automatic and interactive, Laplacian matting methods on RGB-D images and also enable depth regularization in a unified framework.


Background of Affinity Based Matting


Described below is a brief review of Affinity-Matrix based matting. These methods typically involve construction of a Laplacian matrix. Some embodiments use this approach for its simplicity and high quality. However, processes in accordance with several embodiments of the invention perform certain optimizations, and use the framework as the basic building block for both depth regularization and matting.


In conventional matting, a color image is assumed to be generated as I=αF+(1−α)B, where I is the image, α is the matte, between 0 and 1, and F and B are the foreground and background layers. α is a number, while I, F and B are RGB intensity values (or intensity values in another appropriate color space).


The Laplacian L=D−A, where A is the affinity matrix (methods to construct A are discussed at the end of the section). D is a diagonal matrix, usually set to the row sum of A. The idea is that alpha at a pixel is an affine combination of close-by alpha values, guided by A. Some embodiments define x as a large (size of image) vector of alpha or matte values (between 0 and 1). Ideally, some embodiments provide that:










xi = (Σj Aij xj)/(Σj Aij) ⇔ Dii xi = Σj Aij xj  (1)

where the diagonal matrix D is the row sum, such that Dii = Σj Aij.


Succinctly,

Lx≈0  (2)


since L = D − A and D is a diagonal matrix. However, solving this equation without any additional constraints is largely meaningless; for example, x = x0 for any constant x0 is a solution. Therefore, Laplacian-matting systems solve:

x = arg min xTLx + λ(x−y)TC(x−y),  (3)

where the first term optimizes for the constraint that Lx=0, and the second term enforces user constraints in a soft way, with λ a user-defined parameter, y being the user-marked known region (either 0 or 1) and C being a diagonal confidence matrix (that will usually have entries of 1 for known or 0 for unknown).


The solution to this optimization problem (minimization) is:
(L+λC)x=λCy,  (4)

which can be viewed as a sum of the constraints Lx = 0 and Cx = Cy. Equation (4) is usually solved using preconditioned conjugate gradient descent.
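A minimal sketch of solving equation (4) with conjugate gradients is shown below; the Jacobi (diagonal) preconditioner and the constraint weight λ = 100 are illustrative choices, not values specified in the embodiments above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg


def solve_soft_constrained(L, confidence, y, lam=100.0):
    # C is the diagonal confidence matrix; y holds the user-marked known values.
    C = sp.diags(confidence.astype(np.float64))
    A = (L + lam * C).tocsr()
    b = lam * (C @ y)
    # Simple Jacobi (diagonal) preconditioner for conjugate gradients.
    M = sp.diags(1.0 / A.diagonal())
    x, info = cg(A, b, M=M, maxiter=2000)
    return x
```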


Several methods have been proposed to generate the affinity matrix. For example, the procedure described in Levin et al. “A Closed Form Solution To Natural Image Matting,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pages 61-68, (2006), the disclosure of which is hereby incorporated by reference in its entirety, uses the color lines model to look for neighboring pixels. Some embodiments use the kNN-method described in the Chen et al. 2012 paper. First, a feature vector may be computed for each pixel in the image, usually a combination of pixel color and scaled spatial location that may be referred to as RGBxy. Some embodiments may also include the regularized depth estimates in the feature vector, that is, the feature vector can be RGBDxy. A kd-tree may be constructed using this array of features. For each pixel (row), n nearest neighbors may be found by searching the kd-tree, and may be assigned affinity scores (mapped to the range [0, 1] based on distance in feature space). Several other methods do not use the affinity matrix explicitly; however, one can think of them in the affinity matrix framework.
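A sketch of this kNN affinity construction, using a kd-tree over RGBxy (or RGBDxy) feature vectors, is shown below; the spatial weighting, neighbor count, and the simple linear mapping of distances to [0, 1] are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.spatial import cKDTree


def knn_affinity(image, depth=None, n_neighbors=10, spatial_weight=0.05):
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Per-pixel feature vectors: RGB plus scaled spatial location (RGBxy),
    # optionally augmented with regularized depth (RGBDxy).
    feats = [image.reshape(-1, 3).astype(np.float64)]
    if depth is not None:
        feats.append(depth.reshape(-1, 1).astype(np.float64))
    feats.append(spatial_weight * np.stack([xx.ravel(), yy.ravel()], axis=1).astype(np.float64))
    features = np.hstack(feats)
    # kd-tree search for the n nearest neighbors of every pixel.
    tree = cKDTree(features)
    dist, idx = tree.query(features, k=n_neighbors + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop the trivial self-match
    aff = 1.0 - dist / (dist.max() + 1e-12)      # map feature distances to [0, 1]
    rows = np.repeat(np.arange(h * w), n_neighbors)
    A = sp.coo_matrix((aff.ravel(), (rows, idx.ravel())), shape=(h * w, h * w))
    return ((A + A.T) * 0.5).tocsr()             # symmetrize the affinity matrix
```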


Depth Regularization


In many embodiments, before the initial sparse depth map is usable for image matting, it may be regularized into a dense depth map. Some embodiments provide a novel matting-Laplacian based process to create the dense depth map. In particular, an unknown depth may be approximately an affine combination of depths at “nearby” pixels, just as the alpha value is in color matting, and there may be a strong correlation between image and depth discontinuities. Some embodiments thus are able to precompute the Laplacian once while creating the RGB-D image, and make it available to subsequent applications (see FIG. 6). As described above, in some embodiments, the image matting system handles depth ambiguity that is typical for depth maps obtained through multi-view stereo.


As noted above, initial depth maps can be obtained using multi-view stereo (MVS) and then regularized to generate the dense depth map. Other embodiments may use other depth acquisition devices with different characteristics, such as the Kinect depth sensor distributed by Microsoft Corporation of Redmond, Wash., and 3D light field cameras distributed by Raytrix GmbH of Kiel, Germany, as illustrated in the example in FIG. 5, to obtain the initial sparse depth map.


In some embodiments, the inputs to the image matting system may be a high resolution RGB image I with m×n pixels, and an indexed depth map z0. In the image matting system, z0 may correspond essentially to disparity, with higher z indicating greater disparity. Some embodiments may also use a binary confidence map C that indicates whether the depth at a given pixel is confident. Some embodiments may obtain C by thresholding the gradient of the intensity of the input image I, since stereo depth is generally accurate at image and texture edges. FIGS. 2-4 show synthetic (simulated) and real example outputs of the sensor.
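A minimal sketch of obtaining such a confidence map by thresholding the intensity gradient is shown below; the relative threshold value is an illustrative assumption.

```python
import numpy as np


def confidence_from_gradient(image, rel_threshold=0.05):
    # Intensity of the input RGB image I.
    intensity = image.astype(np.float64).mean(axis=2)
    gy, gx = np.gradient(intensity)
    grad_mag = np.hypot(gx, gy)
    # Depth is treated as confident only at strong image/texture edges.
    return grad_mag > rel_threshold * grad_mag.max()
```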


In a number of embodiments, since the confidence map may be defined only at edges, the depth map reported may span (“bleeds” across) an edge, as shown for simple synthetic data in FIG. 2(c,f). In particular, FIG. 2 illustrates an example of a simple synthetic scene (5 depth planes). FIG. 2(a) illustrates a 3D view of the scene. FIG. 2(b) illustrates the color image. FIG. 2(c) illustrates ground truth depth map (disparity) z0* shown in false color (normalized). FIG. 2(d) illustrates the confidence map C where the depth is available (simulated), mainly at texture and object edges. FIG. 2(e) illustrates bleeding edge depth z0 reported by the sensor (simulated). FIG. 2(f) shows a sectional view of the 3D scene along a cutting plane p-q marked in FIG. 2(c) and FIG. 2(e). Bleeding depth is shown along the profile indicated by the white line segment.


In FIG. 2, 14% of the pixels are confident, 11.2% because of surface texture gradients and 2.8% from depth at edges. Of these 2.8%, half (1.4%) are the correct depth (interior of foreground object) while the other half (1.4%) are incorrect depth (bleeding), assigning foreground depth on the background object. When there is a geometric discontinuity, it is difficult to determine which side of the surface the depth really belongs to. This is a fundamental limitation. However, the depth may belong to the closer foreground layer with greater disparity, whichever side that happens to be.


Depth Regularization in Laplacian Framework


An example of Laplacian pruning and residual correction in accordance with an embodiment of the invention is illustrated in FIG. 3. The synthetic scene is from FIG. 2. (A) illustrates an incorrect connection. The white line indicates the pixel highlighted by a circle is connected to another similar pixel from a different depth group. Black lines indicate acceptable connections. (B) highlights more pixels with “bad” connections. (C) illustrates the diffusion depth map zD. (D) illustrates the Laplacian residual R without Laplacian pruning, that is, retaining all the connections shown in (A). (E) illustrates the residual R after Laplacian pruning. (F) illustrates the regularized depth map z*. (G) illustrates the regularized depth map after the first iteration of residual correction. (H) illustrates the regularized depth map after the second pass of residual correction.


As described above, some embodiments perform depth regularization using the Laplacian L and the data term z0 weighted by the confidence map C. For mathematical convenience, some embodiments treat C as a sparse mn×mn diagonal matrix, whose diagonal entries are the binary confidence values. The regularized depth z* with reference to a given C can be computed through an optimization process similar to equations 3, 4 as:










z* = argminz (zTLz + λ(z−z0)TC(z−z0))
(L+λC)z* = λCz0  (5)







An example of z* in accordance with an embodiment of the invention is illustrated in FIG. 3(f). As illustrated, the very sparse depth at edges of the initial depth map has been regularized into a dense depth map everywhere in the image. However, the depth edges are smeared, which can be attributed to incorrect bleeding of input depth. The remainder of this sub-section addresses this issue by (1) pruning the Laplacian to avoid incorrect non-local affinity matches across depth boundaries, and (2) exploiting Laplacian residuals to detect and correct bleeding of input depths. The final result in FIG. 3(h) is much more precise.


As described above, some embodiments use the kNN approach which may pair similar pixels without regard to their depth, when constructing the affinity matrix A and Laplacian L. Two nearby color-wise similar pixels may be at two different depths (FIGS. 3(a,b)). Such non-local pairings can be beneficial to matte extraction. However, they can also cause issues in some cases for depth regularization, and unexpected results during matting if foreground and background objects have similar colors. Ideally, for each pair (i,j) corresponding to a nonzero entry in Aij, some embodiments test if i and j have depth differences beyond a threshold. However, detecting and pruning such pairs is a chicken-and-egg challenge, since depth is needed to do the pruning, but depth is obtained by using the original Laplacian. Therefore, some embodiments may first find an approximate dense depth map through a diffusion process,

(LD+λC)zD=λCz0  (6)

where LD is the diffusion Laplacian constructed such that each pixel may be connected to 8 of its surrounding neighbors (using only spatial proximity, not RGB color values). A result is shown in FIG. 3(c). Using zD, some embodiments may compute the depth difference for each pair (i,j) as |zD(i)−zD(j)|. If this difference is above a certain threshold, some embodiments purge those pairs by assigning Aij=0. While pruning, some embodiments ensure a minimum number of connections are retained.
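A sketch of this pruning step is shown below; it purges affinity entries whose diffusion depths differ by more than a threshold, with the threshold value being an illustrative assumption (a fuller implementation would also retain a minimum number of connections per pixel, as noted above).

```python
import numpy as np
import scipy.sparse as sp


def prune_affinity(A, z_diffused, depth_threshold=0.1):
    A = A.tocoo()
    zd = z_diffused.ravel()
    # Keep a pair (i, j) only if its diffusion depths zD(i) and zD(j) are close enough.
    keep = np.abs(zd[A.row] - zd[A.col]) <= depth_threshold
    pruned = sp.coo_matrix(
        (A.data[keep], (A.row[keep], A.col[keep])), shape=A.shape
    )
    return pruned.tocsr()
```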


Processes in accordance with certain embodiments of the invention provide a novel approach to detect and correct depth bleeding across edges. The key insight is that, when solving equation 5 above, the basic Laplacian condition in equation 2, namely that Lz*≈0, should also be satisfied. In some embodiments, after solving for z*, it is easy to compute the Laplacian residual,
R=Lz*  (7)


As shown in the example illustrated in FIG. 3(d), the residual R is close to 0 in most regions—those pixels have depth or disparity well represented as an affine combination of values at nearby pixels. However, there are regions involving connections across different depths, with large residuals. The pruning discussed above resolves most of these issues (FIG. 3(e)), but there are still regions along depth edges where |R| is large. This residue indicates a disagreement between the Laplacian prior and the data term. It indicates the data term is enforcing an incorrect depth value.


Some embodiments seek to find and remove depth edges that have “bled” over to the wrong side. Some embodiments observe that confident depth may always belong to the foreground layer. Since z represents disparity in some embodiments, the z value (disparity) of foreground should be greater than that of background. For example, consider a pixel that should have background depth, but is incorrectly given confident foreground depth (FIG. 2(c,f)). After regularization, this pixel may have higher disparity z* than the average of its neighbors (which will include both foreground and background disparity values). The Laplacian L=D−A, so the residual R=Lz*=Dz*−Az*. This is effectively the value at a pixel minus the average of its neighbors (connected through the affinity matrix). Since disparity z* is higher than its neighbors, incorrect confident depths can be identified as those regions where R>0.


A new confidence map can be computed by setting Ci=0 at pixel i whenever Ri>τ, leaving C unchanged otherwise (for example using an appropriate value such as, but not limited to, τ=0.005). In several embodiments, the process may iterate by solving equation 5 with the new confidence map, starting with the previous solution (compare FIGS. 3(f,g)). Results in this section are presented with two iterations, which were found to be adequate. As can readily be appreciated, however, any number of iterations can be performed as appropriate to the requirements of a specific application. FIG. 3(h) shows the result in the synthetic case, which is much more accurate than without the residual correction.
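A minimal sketch of this residual-correction loop is given below, assuming a sparse Laplacian L, a per-pixel binary confidence vector, the sparse input depth z0, and SciPy's conjugate gradient solver; λ=1 is an illustrative default, while τ=0.005 and the two iterations are the example values mentioned above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def regularize_with_residual_correction(L, conf, z0, lam=1.0, tau=0.005, n_iters=2):
    """Solve (L + lam*C) z = lam*C z0 (eq. 5), then zero the confidence of pixels
    whose Laplacian residual exceeds tau (depth bleeding), and re-solve (sketch)."""
    conf = conf.astype(float).copy()
    z = z0.astype(float).copy()
    for _ in range(n_iters):
        C = sp.diags(conf)
        system = (L + lam * C).tocsr()
        rhs = lam * (C @ z0)
        z, _ = cg(system, rhs, x0=z)     # warm-start from the previous estimate
        R = L @ z                        # Laplacian residual (eq. 7)
        conf[R > tau] = 0.0              # drop confident depths that bled across edges
    return z
```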


An example of image matting on a real scene in accordance with an embodiment of the invention is illustrated in FIG. 4. In particular, FIG. 4 illustrates an RGB image 405 produced from an array camera, initial sparse depth map 410 from multiview stereo at edges and texture boundaries, initial regularized depth map with bleeding 415, and final regularized depth map 420 after Laplacian residual correction. As illustrated in this example, the initial sparse depth map from the sensor is regularized into a high quality dense depth map, with Laplacian residual correction improving the results further.


Comparison and Generality:



FIG. 5 shows the generality of the approach to other RGB-D systems, and compares against other methods. In all cases, it can be seen that the depth from an embodiment of the image matting system is sharper and more complete. In particular, FIG. 5 illustrates the results of depth regularization processes performed on depth maps generated by various types of depth sensing technologies and provides a comparison. The top row (a-f) provides a comparison of the image matting system to a Markov Random Field (MRF) and the method described in Tallon et al., "Upsampling and Denoising of Depth Maps Via Joint-Segmentation," 20th European Signal Processing Conference (EUSIPCO), pp. 245-249 (2012), (the "Tallon et al. paper"), the relevant disclosure of which is hereby incorporated by reference in its entirety. (a) illustrates an input color image. (b) illustrates a ⅛-scaled and noisy version of the ground truth depth map (c). (d) is the recovered upscaled depth map using MRF. (e) is the method in the Tallon et al. paper. (f) is an embodiment of the image matting system method that produces a depth map closer to the ground truth. The middle row (g-i) provides a comparison with Kinect images: (g) is the color image, (h) is the Kinect depth map that has holes, and (i) is the sharper and more complete version after regularization. The bottom row (j-l) provides a comparison with Raytrix images. The color image from one view is shown in (j). The Raytrix regularized depth map in (k) has more spurious depths and bleeding than the sharper result from an embodiment of regularization of the image matting system in (l).


To compare to MRF, source images and results from the method described in the Tallon et al. paper are used. Here, a known ground truth image is downsized to ⅛ its original size and noise is added. The Tallon et al. paper proposes a method to upscale the depth map using the original color image as a prior, and also performs a comparison to MRFs. The comparison started with the same downsized image, upscaled it, and regularized the depth map. This produces a depth map with less noise and well-defined edges, resembling the ground truth, as shown in FIG. 5 (top row).


Next, an RGB-D image from the Kinect sensor (depth map warped to the color image) using the method disclosed in Lai et al., "A large-scale hierarchical multi-view RGB-D object dataset," Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 1817-1824 (2011), the relevant disclosure of which is herein incorporated by reference in its entirety, is considered. Due to warping errors and occlusions, the input depth map has holes and misalignment. In several embodiments, the image matting system may fill in the missing values (the confidence map is set to 1 when the depth is known) and may also align the depth map to the color image. In this case, residual correction (confidence set to 0) is performed wherever the absolute value of the Laplacian residual |R| is greater than a threshold. These regions essentially indicate incorrect alignment of depth. The result is shown in FIG. 5 (middle row).


Regularization was also performed on a sparse depth image captured with the Raytrix light field camera as disclosed in Perwass et al., "Single Lens 3D-Camera with Extended Depth-of-Field," SPIE 8291, 29-36, (2012), the relevant disclosure from which is hereby incorporated by reference in its entirety. A confidence map was generated based on the Raytrix depth map (wherever the depth is available, which is clustered around textured areas). The resulting regularized depth map shows a lot more depth detail, compared to the Raytrix regularization method, as seen in FIG. 5 (bottom row). However, a few outliers are observed in the resulting regularized depth map, due to a lack of access to the noise model of the input depth map and the noisy input depth.


Various applications of regularized depth maps in accordance with various embodiments of the invention are illustrated in FIG. 6. The left three images show examples of changing viewpoint as in image-based rendering, as well as the 3D mesh obtained simply by connecting neighboring pixels (foreground/background connections are not prevented). The right two images show refocusing on the foreground object with a simulated large aperture, and a visual effect obtained by making the light intensity fall off with depth from the camera.


Efficiency Optimizations


Performing depth regularization in large flat areas may involve redundant computation. Accordingly, in order to speed up processing, an image can be broken into super-pixels, where the size of a super-pixel is a function of the texture content in the underlying image. For example, processes in accordance with several embodiments of the invention use a quad-tree structure, starting with the entire image as a single super-pixel and sub-dividing each super-pixel into four roughly equal parts if the variance of pixel intensities within the super-pixel is larger than a certain threshold. This may be done recursively until either the variance within each super-pixel is lower than the predetermined threshold or the number of pixels in the super-pixel falls below a chosen number.
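A compact sketch of such a quad-tree subdivision on a grayscale intensity image follows; the variance threshold and minimum block size are illustrative parameters, not values prescribed above.

```python
import numpy as np

def quadtree_superpixels(gray, var_thresh=25.0, min_size=8):
    """Recursively split the image into super-pixel rectangles: a block is divided
    into four roughly equal parts while its intensity variance exceeds var_thresh
    and it still contains more than min_size*min_size pixels (sketch)."""
    blocks = []

    def split(y0, y1, x0, x1):
        block = gray[y0:y1, x0:x1]
        if block.size <= min_size * min_size or block.var() <= var_thresh:
            blocks.append((y0, y1, x0, x1))
            return
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        split(y0, ym, x0, xm)
        split(y0, ym, xm, x1)
        split(ym, y1, x0, xm)
        split(ym, y1, xm, x1)

    split(0, gray.shape[0], 0, gray.shape[1])
    return blocks  # list of (row_start, row_end, col_start, col_end) super-pixels
```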


In some embodiments, each super-pixel can be denoted by a single feature vector, for example the RGBxy or RGBxyz feature of the centroid of the super-pixel. This process may significantly reduce the number of unknowns for images that have large texture-less regions, thus allowing the image matting system to solve a reduced system of equations.


After performing regularization on the super-pixels, an interpolation can be performed to achieve smoothness at the seams of the super-pixels. In particular, to smooth this out, each uncertain (unknown) pixel's depth can be estimated as a weighted average of the depths of the k-nearest super-pixels. In some embodiments, the weights may be derived as a function of the distance of the RGBxy (or RGBxyz) feature of the pixel from the super-pixel centroids.
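One possible implementation of this seam-smoothing interpolation is sketched below, assuming the super-pixel centroid features and regularized depths are already available; the value of k and the inverse-distance weighting are illustrative choices.

```python
import numpy as np

def interpolate_from_superpixels(pixel_feats, sp_feats, sp_depths, k=4, eps=1e-6):
    """Estimate each uncertain pixel's depth as a distance-weighted average of the
    depths of its k nearest super-pixel centroids in RGBxy(z) feature space (sketch).

    pixel_feats : (n, d) features of the uncertain pixels
    sp_feats    : (m, d) super-pixel centroid features
    sp_depths   : (m,)   regularized depth of each super-pixel
    """
    d2 = ((pixel_feats[:, None, :] - sp_feats[None, :, :]) ** 2).sum(-1)   # (n, m) squared distances
    nn = np.argsort(d2, axis=1)[:, :k]                                     # k nearest centroids
    w = 1.0 / (np.sqrt(np.take_along_axis(d2, nn, axis=1)) + eps)          # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return (w * sp_depths[nn]).sum(axis=1)
```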


The speedup factor can be roughly linear in the percentage of pixel reduction. For example, with superpixels equal to 10% of the original image pixels, a speed up of an order of magnitude is obtained, and the regularized depths can be accurate to within 2-5%.


Automatic Trimap Generation


In some embodiments, an initial step in matting processes may be to create a trimap. In several embodiments, the trimap includes user-marked known foreground and background regions, and the unknown region. While methods like kNN matting can work with a coarse trimap or sparse strokes, they usually produce high quality results only when given a detailed thin trimap. Creating such a trimap often involves significant user effort. Accordingly, some embodiments provide a semi-automatic solution by exploiting the fact that RGB-D images enable separation of foreground and background based on depth. However, in a number of embodiments, the depth may be less precise at boundaries than the color image, even with the Laplacian-residual adjustments described above. Accordingly, some embodiments may use the depth to automatically create a detailed thin trimap from sparse user strokes, followed by color matting. Note that this is not possible with RGB-only images.


As described above, FIG. 1 illustrates the overall pipeline for semi-automatic interactive RGB-D matting. As illustrated in the example in FIG. 1, the user may draw a single foreground and a single background stroke or blob. In another embodiment, the user may only select the object of interest in the scene. This object need not necessarily be the foreground (or foremost) object in the scene. The image matting system in some embodiments analyzes the user stroke, along with the distribution of objects (in 3D, i.e., including depth) in the scene and automatically suggests a depth range so as to separate the object of interest from the rest of the scene. In some embodiments, the image matting system does so by analyzing the depth histogram of the image. If a multimodal distribution is found, the image matting system attempts to estimate thresholds to segregate the mode containing the depth of the object of interest from the rest of the depth distribution.
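The histogram analysis might look roughly like the following sketch, which locates the histogram mode containing the stroke's median depth and walks outward to the neighboring valleys; the bin count and the simple valley test are illustrative assumptions, since the description above only specifies analyzing the depth histogram.

```python
import numpy as np

def depth_range_from_stroke(dense_depth, stroke_mask, n_bins=64):
    """Suggest (near, far) depth thresholds isolating the stroked object (sketch)."""
    hist, edges = np.histogram(dense_depth, bins=n_bins)
    stroke_depth = np.median(dense_depth[stroke_mask])
    b = int(np.clip(np.searchsorted(edges, stroke_depth) - 1, 0, n_bins - 1))

    lo = b
    while lo > 0 and hist[lo - 1] <= hist[lo]:           # walk down to the valley below the mode
        lo -= 1
    hi = b
    while hi < n_bins - 1 and hist[hi + 1] <= hist[hi]:  # walk down to the valley above the mode
        hi += 1
    return edges[lo], edges[hi + 1]
```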


If multiple objects lie in the same depth range, the image matting system may also automatically limit the selection to a single object by analyzing depth discontinuities. If an incorrect selection is made, the user may have the option to provide active inputs (e.g., marking regions to be region of interest or not), whereby the image matting system may refine the initial identification of the object of interest from multiple inputs.


In many embodiments, the image matting system may analyze depths within only a window around the object of interest. The trimap may be enforced globally (that is, to the entire scene), locally (only in the window selected, the window may be resizable by the user) or by an object segmentation process whereby the object of interest is identified in the entire image based on the threshold selected from the user input.


In several embodiments, the image matting system computes the average depth in the foreground versus background strokes, and simply classifies pixels based on the depth to which they are closest. As shown in FIG. 1, this may provide a good segmentation, but may not be perfect at the boundaries. Therefore, some embodiments use edge detection, and apply standard image dilation on the boundaries to create the unknown part of the trimap (FIG. 1 top row).
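A minimal sketch of this stroke-based trimap construction is shown below, assuming a regularized dense depth map and boolean masks for the foreground and background strokes; the dilation radius is an illustrative parameter.

```python
import numpy as np
from scipy import ndimage

def trimap_from_strokes(dense_depth, fg_stroke, bg_stroke, band=5):
    """Classify every pixel by whether its depth is closer to the mean depth under
    the foreground or the background stroke, then dilate the boundary to form the
    unknown band. Returns 1 = foreground, 0 = background, 0.5 = unknown (sketch)."""
    fg_depth = dense_depth[fg_stroke].mean()
    bg_depth = dense_depth[bg_stroke].mean()
    fg_mask = np.abs(dense_depth - fg_depth) < np.abs(dense_depth - bg_depth)

    # Boundary pixels of the depth-based segmentation, dilated into the unknown band.
    boundary = fg_mask ^ ndimage.binary_erosion(fg_mask)
    unknown = ndimage.binary_dilation(boundary, iterations=band)

    trimap = np.where(fg_mask, 1.0, 0.0)
    trimap[unknown] = 0.5
    return trimap
```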


Many different alternatives may be possible for semi-automatic or automatic matting. In a number of embodiments, the user could simply draw a box around the region of interest. Alternatively, the image matting system could use a face detector and/or other types of automatic object detectors and automatically create the box, as illustrated in an example in FIG. 7, where the front person is matted from the background completely automatically. In particular, FIG. 7 illustrates an example of fully automatic RGB-D matting using a face detector in accordance with an embodiment of the invention. FIG. 7 illustrates the input image 705 with face detection. The image matting system creates a thin trimap 710 from which the alpha matte 715 is automatically computed. As illustrated, the insert in the alpha matte 715 shows using the same blobs for RGB-only matting; without depth information, the quality is much worse.


In some embodiments, the foreground and background blobs may be automatically computed as circles within the two largest boxes from the face detector. In certain embodiments, a comparison can be performed with simple RGB only matting, using the same foreground/background blobs (alpha inset in third sub-figure). Accordingly, the RGB-D image may be used for detailed trimap creation and high quality mattes.


Using Occlusion and Visibility Information


As described above, initial sparse depth maps from camera arrays may be obtained through disparity estimation (e.g., parallax processing) and typically yield high confidence depth values for textured regions. As part of the disparity estimation process in the camera array pipeline, it is easy to identify regions containing occlusions, which are regions that are adjacent to foreground objects and that are occluded by the foreground object from some of the cameras in the array. These occluded regions may be easily identified in the disparity estimation step and can potentially seed the trimap that is needed for the start of the color image matting stage. This enables a reduction in the user inputs to the matting process, resulting in an improved user experience.


Efficient Interactive Laplacian Color Matting


In some embodiments, after trimap extraction, Laplacian color matting is employed to compute the matte, and extract foreground/background colors if desired. Reduced matting equations can be used that solve for alpha only in the unknown regions of the trimap, with a speedup proportional to the ratio of the full image to the unknown part of the trimap. In several embodiments, since the automatic trimap may have a thin unknown region, efficiencies of one to two orders of magnitude may be achieved, providing interactive frame rates without any loss in quality. Moreover, in a number of embodiments, the exact Laplacian equation can be solved for, without an arbitrary parameter λ to enforce user-defined constraints (that by definition are now met exactly in the known regions). In certain embodiments, this method is very simple and may extend easily to incremental matting where interactive edits are made to the trimap, with real-time updates of the matte. Furthermore, as described above, some embodiments may use the interactive guided image capture for image matting.


Computational Considerations and Reduced Matting


In general, the conventional matting formulation wastes considerable effort in trying to enforce and trade off the Laplacian constraint even for pixel values in the known regions of the trimap. Instead, some embodiments solve for the constraint only at pixels marked unknown in the trimap. In other words, these embodiments directly solve equation 2 for unknown pixels. Accordingly, this provides a much simpler system, with no parameter λ. Furthermore, the image matting system is no longer under-constrained, since the unknown pixels will have neighbors that are known, and this may provide constraints that lead to a unique solution. More formally, these embodiments use superscripts u to denote unknown pixels, f for foreground pixels, and b for background pixels. The pixels can be conceptually re-ordered for simplicity, so that equation 2 can be written as











\left(\begin{matrix} L^{uu} & L^{uf} & L^{ub} \\ L^{uf} & L^{ff} & L^{fb} \\ L^{ub} & L^{fb} & L^{bb} \end{matrix}\right)\left(\begin{matrix} x^{u} \\ x^{f} \\ x^{b} \end{matrix}\right)=0  (8)







So far, this is simply rewriting equation 2. In some embodiments, the image matting system may now restrict the Laplacian and solve only for the rows corresponding to the unknown pixels. Unlike in the standard formulation, these embodiments may simply leave known pixels unchanged, and do not consider the corresponding rows in the Laplacian. Accordingly, in some embodiments this is rewritten as:











\left(\begin{matrix} L^{uu} & L^{uf} & L^{ub} \end{matrix}\right)\left(\begin{matrix} x^{u} \\ x^{f} \\ x^{b} \end{matrix}\right)=0  (9)







Several embodiments can do this, in a modified form, for depth regularization. This may be especially useful for images that are well textured, where regularization is only needed for a small percentage of pixels that are marked as non-confident. The formulation is as follows:











\left(\begin{matrix} L^{uu} & L^{uk} \end{matrix}\right)\left(\begin{matrix} x^{u} \\ x^{k} \end{matrix}\right)=0  (9B)








where Luk contains the Laplacian connections between unknown (non-confident) pixels with unknown depths xu and pixels with known (high-confidence) depths denoted by xk.


In the above equation (9), some embodiments may now set xf=1 and xb=0, to derive

Luuxu=−Luf·1  (10)

where the right-hand side corresponds to the negated row-sums of Luf (1 is a column vector of ones, with length equal to the number of foreground pixels).


Note that Luu is diagonally dominant, since its diagonal elements are row-sums of the full affinity matrix, which is more than the reduced affinity in Luu. Therefore, the reduced system has a solution and is in fact usually better conditioned than the original system.


The computational savings may be considerable, since the image matting system may only need the reduced matrices for the unknown pixels. The Laplacian size is now ur rather than pr, where r is the number of neighbors for each pixel, u is the number of unknown pixels, p is the total number of pixels, and u<<p. If unknown pixels in the trimap are one-tenth of the image, the image matting system can easily save an order of magnitude in computation with essentially no change in the final image.
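A sketch of the reduced solve of equation 10 follows, assuming the full sparse matting Laplacian and index arrays for the unknown, foreground, and background trimap regions; the use of SciPy's conjugate gradient solver and the optional warm start are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import cg

def reduced_alpha_matting(L, unknown_idx, fg_idx, bg_idx, x0=None):
    """Solve Luu xu = -Luf . 1 (eq. 10) for alpha at the unknown trimap pixels only;
    known pixels stay fixed at 1 (foreground) and 0 (background) (sketch)."""
    L = L.tocsr()
    Luu = L[unknown_idx][:, unknown_idx]
    Luf = L[unknown_idx][:, fg_idx]
    rhs = -np.asarray(Luf.sum(axis=1)).ravel()   # -Luf . 1 : negated row-sums of Luf
    xu, _ = cg(Luu, rhs, x0=x0)

    alpha = np.zeros(L.shape[0])
    alpha[fg_idx] = 1.0                          # background pixels (bg_idx) remain 0
    alpha[unknown_idx] = np.clip(xu, 0.0, 1.0)
    return alpha
```

Passing the previous matte restricted to the unknown pixels as x0 corresponds to the warm start used for the interactive updates described below.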


An interactive matting system may seek to update x in real time, as the user changes the trimap. A simple approach is to use x from the previous frame as the initial value for preconditioned conjugate gradient. Often, this is close enough to the error tolerance immediately, and usually only one or two iterations are needed, making the incremental cost of an update very fast. As can readily be appreciated, motion tracking can be utilized to accommodate motion of the depth sensor and/or camera relative to the scene.


Matting also may often involve solving for foreground and background. For kNN-matting, Chen et al., “KNN Matting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 9, September 2013, (the Chen et al. 2013 paper) the relevant disclosure of which is hereby incorporated by reference in its entirety, pose an optimization problem. As described in detail below, with respect to this optimization problem, some embodiments can solve a reduced system, providing considerable speedups for layer extraction.


Accuracy and Quality


An example of a quality comparison to unoptimized kNN on a color image (top row) and on RGB-D data from an embodiment of the invention (bottom row) is illustrated in FIG. 8. The results (c,g) are almost identical to the full Laplacian solution (b,f). Solving the reduced Laplacian provides a speed benefit without any loss in quality (d,h).


In particular, FIG. 8 shows two mattes computed with the described method, one on the color RGB alpha matting benchmark, and one with an image matting system in accordance with an embodiment of the invention. In both cases, the quality is essentially identical to kNN matting without the speed optimization. For all 20+ images in the alpha matting benchmark, the optimization introduced a difference in the matte of less than 0.1%, with the median being 0.0001.


An example of timings in accordance with an embodiment of the invention is illustrated in FIG. 9. The graph 905 graphs speedup vs. ratio of total to unknown trimap pixels on the alpha matting benchmark referred to above with respect to FIG. 8. Two trimaps are shown in red and blue circles, and the black line indicates the linear speedup achieved on average. The table 910 on the right illustrates timing for 1000×750 RGB-D images captured with the image matting system. Speedups of more than an order of magnitude are achieved, given the image matting system's detailed thin trimaps.


In particular, FIG. 9 plots 905 the speedup of the optimized method versus standard kNN. The example ran comparisons on all of the standard color only images in the alpha matting benchmark using two trimaps of varying precision (shown as red and blue circles in FIG. 9). The speed-ups are essentially linear (black line), based on the ratio of the number of total pixels to unknown trimap pixels, and obtain a 10x benefit in many cases. FIG. 9 (right) shows the running time and speedup for the final color matting step on a number of RGB-D images captured with the image matting system. Because the image matting system's method automatically creates thin detailed trimaps, speedups are even more impressive, with running times a fraction of a second. In some embodiments, a user may adjust the trimap in real-time using various interactive editing tools as described in detail below.


Interactive Editing



FIG. 10 illustrates screen shots from an example of an interactive system that allows the user to adjust the trimap in real-time. In particular, FIG. 10 illustrates interactive editing of trimaps with real-time feedback on the matte. FIG. 10 illustrates the graphical user interface ("GUI") 1005 and minor touch-ups to specific trimap 1010 regions, illustrated as boxes 1-4 on the trimap, to produce the final high-quality matte 1020. The user simply adds small foreground, background, or unknown strokes 1015 to modify the trimap. Updates to the matte happen in less than 0.1 seconds.


In particular, the incremental computation happens in real-time, taking an average of 0.1 seconds in Matlab to update after each edit, with between 0 and 5 iterations, starting from the previous solution. In many frames, the previous solution is close enough (using a threshold of 0.001), so no iterations or changes are required. The disclosure below includes more examples of depth regularization and matting using images from an array camera. This disclosure considers various scenes (objects, people, indoor and outdoor) and presents timing and performance information.


In some embodiments, the Laplacian residual may also be applied to image matting, to automatically fix regions that are incorrectly marked in the trimap. Some embodiments may use reduced Laplacians to reduce precompute time, in addition to speeding up run-time. In some embodiments, the image matting system could also be extended to depth estimation and matting in video. In several embodiments, the image matting system may be used in potential applications in mobile devices like phones and tablets, equipped with mobile light field or camera array sensors, among various other devices.


Depth Regularization and Semiautomatic Interactive Matting Using RGB-D Images


The image matting system of some embodiments may be further optimized using various different optimization techniques, as will be described in detail below. These techniques include an extension to efficient Laplacian color matting to also solve for foreground and background efficiently. In several embodiments, a further efficiency optimization may be performed, as will be described below, that makes a small additional approximation to achieve faster times and eliminate the tradeoff parameter λ, as illustrated in the examples in FIG. 11 and FIG. 12.


In particular, FIG. 11 illustrates: (a) a color image; (b) ground truth alpha; (c) the color image multiplied by alpha, with the blue background bleeding into the fur on the top; (d) foreground multiplied by alpha using kNN, which took 140 seconds; (e) foreground multiplied by alpha using the reduced system, which took 3.5 seconds (a speed-up of 40×); and (f) foreground multiplied by alpha using the faster approximate extraction, which took 1.4 seconds for a net speed-up of 100×. The images are almost identical in all cases.



FIG. 12 illustrates: (a) the color image multiplied by alpha (note the incorrect colors at the top of the hair); (b) kNN-matting; (c) the reduced system; and (d) the fast approximate method. The images are almost identical, with two orders of magnitude speedup between (d) and (b).


Finally, FIGS. 13-18 show a number of additional examples of semiautomatic interactive depth-based matting on RGB-D images from a compact array camera, beyond the results described above in accordance with embodiments of the invention. In particular, FIGS. 13-18 showcase a variety of successful examples of automatic trimap generation with minimal user input, and one failure case in FIG. 17.


Efficiently Solving for Foreground and Background


While the above description primarily focuses on estimation of the alpha value, processes in accordance with many embodiments of the invention also involve solving for the foreground and background regions of images. For kNN-matting, the Chen et al. 2013 paper poses an optimization problem. In this case, the image matting system cannot avoid the optimization, but can solve a reduced system, providing considerable speedups for layer extraction as well. This section first introduces the original formulation of the Chen et al. 2013 paper (correcting a minor typographical error in their equations) and then develops the speedup. The next section develops an approximation used by some embodiments that may be even faster and does not require optimization.


Optimization Formulation


The optimization considers two terms: the closeness of foreground and background to their neighbors, and faithfulness to the data. The data term can be written as,

min 2λΣk(αkFk+ᾱkBk−Ik)²  (11)

where the subscript k denotes the pixel, and ᾱ=1−α. Note that the data term handles each pixel separately, and couples foreground and background. Setting the derivatives with respect to Fk and Bk to zero separately, and omitting the constant factor of 4 (which is also present in the proximity term), yields:

αk²Fk+αkᾱkBk=αkIk  (12)
ᾱkαkFk+ᾱk²Bk=ᾱkIk  (12)


One can write this in matrix form as











\bar{K}\left[\begin{matrix} F \\ B \end{matrix}\right]=\left[\begin{matrix} \alpha I \\ \bar{\alpha} I \end{matrix}\right]  (13)








where the matrix K̄ is a sparse 2p×2p matrix, with p the number of pixels, whose first p diagonal entries are K̄ii=αi² and whose last p diagonal entries are K̄ii=ᾱi².


The off-diagonal values are also sparse, with K̄i,i+p=K̄i+p,i=αiᾱi for 0≤i<p. Some embodiments also use Ī to denote the matrix on the right-hand side.


The proximity constraint (proximity is in the standard RGBxy space for kNN) leads to the standard Laplacian equation for foreground and background,

LF=0 LB=0  (14)

which can be combined to











\left[\begin{matrix} L & 0 \\ 0 & L \end{matrix}\right]\left[\begin{matrix} F \\ B \end{matrix}\right]=\left[\begin{matrix} 0 \\ 0 \end{matrix}\right]  (15)







Some embodiments use a matrix L to encapsulate the matrix on the left. Some embodiments can combine all of these constraints to write, similar to the original matting (equation 4 above),
(L+λK̄)X=λĪ  (16)

where X=[FB]T. This can again be solved using preconditioned conjugate gradient with an initial condition of X=Ī. The solution is run separately for each color channel.
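For a single color channel, the stacked system of equations 13, 15, and 16 could be assembled and solved roughly as in the sketch below; the value of λ, the solver choice, and the function name are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def solve_fg_bg_layers(L, I_channel, alpha, lam=100.0):
    """Solve (L_blk + lam*Kbar) X = lam*Ibar (eq. 16), where X stacks the foreground
    and background layers for one colour channel (sketch)."""
    p = L.shape[0]
    a, abar = alpha, 1.0 - alpha
    L_blk = sp.block_diag([L, L]).tocsr()                                   # eq. 15
    Kbar = sp.bmat([[sp.diags(a * a),    sp.diags(a * abar)],
                    [sp.diags(a * abar), sp.diags(abar * abar)]]).tocsr()   # eq. 13
    Ibar = np.concatenate([a * I_channel, abar * I_channel])
    X, _ = cg(L_blk + lam * Kbar, lam * Ibar, x0=Ibar)                      # initial condition X = Ibar
    return X[:p], X[p:]                                                     # F, B
```

The reduced form of equation 18 would restrict L, K̄, and Ī to the unknown rows and columns in the same way.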


Reduced Formulation


As for efficient alpha matting, processes in accordance with many embodiments of the invention only solve in the unknown trimap regions (other regions are foreground or background and their colors are known). The formulation is even simpler than before, because Luk=0 using the weights in the Chen et al. 2013 paper (here the superscript k stands for known, which may be foreground or background). Indeed, the affinity matrix is weighted in such a way that it reduces to 0 when α=0 or α=1 (technically it uses Ajk=Ajk×min(Wj, Wk) where Wj=1−|2αj−1|). Thus, some embodiments can simply replace the Laplacian L by the reduced form Luu, writing











\left[\begin{matrix} L^{uu} & 0 \\ 0 & L^{uu} \end{matrix}\right]\left[\begin{matrix} F^{u} \\ B^{u} \end{matrix}\right]=\left[\begin{matrix} 0 \\ 0 \end{matrix}\right]  (17)







Similarly, the data term can be reduced to simply looking at the unknown pixels. The ultimate reduced form is directly analogous to equation 16,

(Lu+λK̄u)Xu=λĪu  (18)

where a single superscript u denotes the restriction of matrices/vectors to unknown rows and columns. This may immediately provide a dramatic speedup proportional to the size of the unknown region, relative to the image. Note that known regions are not needed or used here. As before, the image matting system may be well conditioned since L is diagonally dominant, and it also has the data term. Unlike previously, the image matting system may not omit the data term, since the requirement that the alpha-matting equation hold is fundamental.


Fast Direct Estimation of αF and (1−α)B


Instead of estimating pure foreground and background colors F and B, some embodiments may seek to estimate αF and (1−α)B instead. Computing only αF suffices for applications such as compositing (replacing the background), since the compositing is typically achieved through Icomposited=αF+(1−α)Bnew-background.


The basic idea is similar to previous Laplacian formulations, now applied to αF instead. Note that this is technically an approximation, since even if both α and F satisfy the Laplacian condition, the product does not necessarily do so. However, as seen in FIGS. 11 and 12, the images are almost identical, while speedups of 2-3× are obtained over the previous section, and two orders of magnitude over standard kNN. Moreover, no λ parameter or optimization is needed.


Let X=αF be the foreground layer and Y=(1−α)B be the background layer; thus X+Y=I. Since α has already been computed, it is possible to segment the image into three regions: foreground pixels f where α>0.99, background pixels b where α<0.01, and unknown pixels u elsewhere. Expanding equation 8 above provides

LuuXu+LufXf+LubXb=0  (19)
LuuYu+LufYf+LubYb=0  (19)


For foreground pixels, the foreground layer is simply the image, Xf=If. Similarly, for background pixels, Yb=Ib. One can assume the foreground color is black in background regions and vice-versa, so Xb=0 and Yf=0. Hence, one can reduce equation 19 to

LuuXu+LufIf+0=0  (20)
LuuYu+0+LubIb=0  (21)


In addition, it is known that Xu+Yu=Iu. In principle, this is an over-constrained system of three linear equations that could otherwise only be solved by optimization as in the previous section. However, some algebraic manipulation allows a symmetric form to be derived that can be solved directly. Consider solving for Xu. Applying Luu to both sides provides:

LuuYu=LuuIu−LuuXu  (22)


Replacing LuuYu in equation 21 using equation 22 provides:

LuuIu−LuuXu+LubIb=0  (23)


Note that both equations 20 and 23 are constraints on Xu. One can simply combine (sum) the two equations, rather than solving an optimization problem, to arrive at a more symmetric formulation:

2LuuXu+LufIf−LuuIu−LubIb=0  (24)

leading to a system that can be solved using the preconditioned conjugate gradient method:













LuuXu=−(LufIf−LuuIu−LubIb)/2  (25)







One can solve for Yu if desired simply using Xu+Yu=Iu. This simply flips the signs of If and Ib in the above equation.
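A sketch of this direct estimation of equation 25 follows, assuming the sparse matting Laplacian, the computed alpha matte, and the 0.99/0.01 cutoffs stated above; solving each color channel with SciPy's conjugate gradient is an illustrative choice.

```python
import numpy as np
from scipy.sparse.linalg import cg

def estimate_alpha_times_F(L, I, alpha):
    """Directly estimate the premultiplied foreground X = alpha*F via eq. 25 (sketch).
    L is the sparse matting Laplacian, I a (p, 3) image, alpha the computed matte."""
    f = np.where(alpha > 0.99)[0]                  # foreground pixels
    b = np.where(alpha < 0.01)[0]                  # background pixels
    u = np.where((alpha >= 0.01) & (alpha <= 0.99))[0]

    L = L.tocsr()
    Luu, Luf, Lub = L[u][:, u], L[u][:, f], L[u][:, b]

    X = np.zeros(I.shape)
    X[f] = I[f]                                    # X = I on foreground, X = 0 on background
    for c in range(I.shape[1]):                    # one solve per colour channel
        rhs = -(Luf @ I[f, c] - Luu @ I[u, c] - Lub @ I[b, c]) / 2.0
        X[u, c], _ = cg(Luu, rhs)
    return X                                        # background layer: Y = I - X
```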


Laplacian Pruning


In order to take advantage of the known α, some embodiments prune the links in the Laplacian when the α difference is high. That is, for each nonzero Ai,j, some embodiments find the difference |αi−αj|. If the difference is beyond a threshold, some embodiments set Ai,j=0.
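This alpha-based pruning might be implemented along the following lines; the threshold value is an illustrative assumption, since only "a threshold" is specified above.

```python
import numpy as np
import scipy.sparse as sp

def prune_affinity_by_alpha(A, alpha, alpha_thresh=0.5):
    """Drop affinity links whose endpoints have a large alpha difference, so that
    foreground and background layers are not mixed across the matte boundary (sketch)."""
    A = A.tocoo()
    keep = np.abs(alpha[A.row] - alpha[A.col]) <= alpha_thresh
    pruned = sp.csr_matrix((A.data * keep, (A.row, A.col)), shape=A.shape)
    pruned.eliminate_zeros()
    return pruned
```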


RGB-D Matting Examples


This section (FIGS. 13-18) shows a number of examples from a full RGB-D semi-automatic interactive matting system in accordance with an embodiment of the invention, similar to FIG. 1 described above. As illustrated, in most cases, a very good trimap is obtained with a couple of user strokes, and minor interactive touch-ups can be used to refine the matte. FIG. 17 shows one challenging failure case near the bottom of the trunk, where similar colors/texture lead to imperfect matting. As can readily be appreciated, feedback via a user interface during image capture preview can minimize the likelihood of capturing an RGB-D image that will yield a degenerate case during matting.



FIG. 13 illustrates results: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. In this example, the depth near the tape-roll 1305 is inaccurate, which leads to incorrect trimap generation. Even though the trimap is accurate at the top of the tea-box 1310, the alpha matting has failed there due to the thick trimap. The user can fix the trimap by reducing its width to obtain (E).



FIG. 14 illustrates: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. In this example, the user corrected only a very small portion of the trimap (1405 circled in (C)), after which a high quality matte was obtained in (E).



FIG. 15 illustrates: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. In this example, the raw depth is very sparse and noisy (on the left eye, and also at many places on the wall). Given that the subject is very close to the wall, the user had to mark larger foreground and background blobs for trimap generation. However, the initial alpha matte had a minor issue near the left shoulder 1505, which was fixed to obtain (E).



FIG. 16 illustrates: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. In this example, a very noisy input depth map (F) was used. In addition, the depth map was scaled down to quarter size in order to perform fast regularization. The regularized depth map was scaled up before using it for trimap generation. Even with such an operation, a thin trimap was obtained. It can be noted that the depth map does not retain holes (in the areas 1605 marked in (C)). This is because the raw depth does not have this information. The user then manually corrected the trimap through the GUI to obtain a high quality final alpha (E).



FIG. 17 illustrates: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. This example uses a very cluttered scene with strong highlights and shadows. The input raw depth map (F) is very sparse after noise-filtering. Even in this case, only a few foreground/background blobs were sufficient. However, as expected, it is hard to isolate the base of the tree since the depth does not provide discrimination there. The user then manually cleans up only the ground area. The trimap on the trunk of the tree 1705 was found to be accurate and no adjustment was needed. The final alpha matte (E) shows the fuzzy area at the bottom of the trunk. This is because the ground has similar colors/textures to the tree trunk at the bottom. This is a failure case of matting.



FIG. 18 illustrates: (A) input RGB image. User marked foreground (white blob) and background (orange blob). (B) segmented depth map with trimap outlining it. (C) Initial alpha matte. Circles 1805 show problematic areas, which the user then fixes through the GUI. (D) After the user fixes the trimap, final image cutout. (E) Final alpha. (F) Input raw depth map. (G) Regularized depth map. (H) Segmented depth map. In this example, the scene is cluttered and the input depth map also has outliers (on the hair of the front-most subject). As a result, the trimap fails to outline the subject correctly in these areas. However, the trimap is reasonable in most of the other places. After a few touch-up strokes, a high quality final matte (E) was obtained.


While the above description contains many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as an example of embodiments thereof. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.

Claims
  • 1. An array camera, comprising: a plurality of cameras that capture images of a scene from different viewpoints; memory containing an image processing pipeline application; wherein the image processing pipeline application directs the processor to: capture a set of images using a group of cameras from the plurality of cameras; receive (i) an image comprising a plurality of pixel color values for pixels in the image and (ii) an initial depth map corresponding to the depths of the pixels within the image, wherein the initial depth map is generated using the set of images; and regularize the initial depth map into a dense depth map using pixels for which depth is known to estimate depths of pixels for which depth is unknown, wherein regularizing the initial depth map into the dense depth map further comprises: performing Laplacian matting to compute a Laplacian L; obtaining a binary confidence map C that indicates whether a depth at a given pixel is confident, where the confidence map C is obtained by a thresholded gradient of the image; wherein the Laplacian matting is optimized by solving a reduced linear system for depth values of pixels that are marked as non-confident based on the confidence map C; and using the dense depth map to perform image-based rendering.
  • 2. The array camera of claim 1, wherein the image processing application further directs the processor to prune the Laplacian L, wherein pruning the Laplacian L comprises: for each pair i,j of pixels in affinity matrix A, determine if i and j have depth differences beyond a threshold; and if the difference is beyond the threshold, purge the pair i,j within the affinity matrix A.
  • 3. The array camera of claim 2, wherein the image processing application further directs the processor to: detect and correct depth bleeding across edges by computing a Laplacian residual R and removing incorrect depth values based on the Laplacian residual R.
  • 4. The array camera of claim 3, wherein the computing the Laplacian residual R comprises computing R=Lz* where z* is the regularized depth map, wherein removing incorrect depth values comprises identifying regions where R>0.
  • 5. The array camera of claim 1, wherein a pixel for which depth is unknown is a pixel that has a confidence value below a particular threshold regarding the accuracy of the depth.
  • 6. The array camera of claim 1, wherein the confidence map C is defined at texture and object edges within the image.
  • 7. The array camera of claim 1, wherein the confidence map C is a sparse mn×mn diagonal matrix whose diagonal entries are binary confidence values.
  • 8. The array camera of claim 3, wherein the image processing application further directs the processor to compute a new confidence map whenever the residual R is greater than a threshold.
  • 9. The array camera of claim 1, wherein the image processing application further directs the processor to utilize the regularized dense depth map to perform depth-based segmentation that can be dilated to create a trimap.
  • 10. The array camera of claim 1, wherein an unknown pixel's depth is estimated as a weighted average of depths of the k-nearest super-pixels.
  • 11. The array camera of claim 10, where the weights are derived as a function of distance of the RGBxy feature of the pixel from the super-pixel centroids.
  • 12. The array camera of claim 1, wherein regularizing the initial depth map into the dense depth map comprises solving for depths in only unknown regions of the image.
  • 13. The array camera of claim 1, wherein the Laplacian matting is optimized by solving a reduced linear system for alpha values only in unknown regions.
  • 14. The array camera of claim 1, wherein the image processing application further directs the processor to: determine an object of interest to be extracted from the image; generate an initial trimap using the dense depth map and the object of interest to be extracted from the image; and apply color image matting to unknown regions of the initial trimap to generate a matte for image matting.
  • 15. The array camera of claim 1, wherein the image processing pipeline application directs the processor to generate a trimap based on the regularized depth map.
  • 16. The array camera of claim 1, wherein the Laplacian matting is a conventional kNN-based (k-nearest-neighbor) Laplacian that pairs similar pixels without regard to their depth when constructing the affinity matrix A and Laplacian L.
  • 17. The array camera of claim 1, wherein the image processing application further directs the processor to detect and correct depth bleeding across edges.
RELATED APPLICATIONS

This application claims priority to U.S. patent application Ser. No. 14/642,637 filed Mar. 9, 2015 which claims priority to U.S. patent application Ser. No. 61/949,999 filed Mar. 7, 2014, the disclosures of which are incorporated by reference herein in their entirety.

US Referenced Citations (1075)
Number Name Date Kind
4124798 Thompson Nov 1978 A
4198646 Alexander et al. Apr 1980 A
4323925 Abell et al. Apr 1982 A
4460449 Montalbano Jul 1984 A
4467365 Murayama et al. Aug 1984 A
4652909 Glenn Mar 1987 A
4899060 Lischke Feb 1990 A
4962425 Rea Oct 1990 A
5005083 Grage Apr 1991 A
5070414 Tsutsumi Dec 1991 A
5144448 Hornbaker et al. Sep 1992 A
5157499 Oguma et al. Oct 1992 A
5325449 Burt Jun 1994 A
5327125 Iwase et al. Jul 1994 A
5463464 Ladewski Oct 1995 A
5488674 Burt Jan 1996 A
5629524 Stettner et al. May 1997 A
5638461 Fridge Jun 1997 A
5757425 Barton et al. May 1998 A
5793900 Nourbakhsh et al. Aug 1998 A
5801919 Griencewic Sep 1998 A
5808350 Jack et al. Sep 1998 A
5832312 Rieger et al. Nov 1998 A
5833507 Woodgate et al. Nov 1998 A
5880691 Fossum et al. Mar 1999 A
5911008 Niikura et al. Jun 1999 A
5933190 Dierickx et al. Aug 1999 A
5963664 Kumar et al. Oct 1999 A
5973844 Burger Oct 1999 A
6002743 Telymonde Dec 1999 A
6005607 Uomori et al. Dec 1999 A
6034690 Gallery et al. Mar 2000 A
6069351 Mack May 2000 A
6069365 Chow et al. May 2000 A
6095989 Hay et al. Aug 2000 A
6097394 Levoy et al. Aug 2000 A
6124974 Burger Sep 2000 A
6130786 Osawa et al. Oct 2000 A
6137100 Fossum et al. Oct 2000 A
6137535 Meyers Oct 2000 A
6141048 Meyers Oct 2000 A
6160909 Melen Dec 2000 A
6163414 Kikuchi et al. Dec 2000 A
6172352 Liu et al. Jan 2001 B1
6175379 Uomori et al. Jan 2001 B1
6205241 Melen Mar 2001 B1
6239909 Hayashi et al. May 2001 B1
6292713 Jouppi et al. Sep 2001 B1
6340994 Margulis et al. Jan 2002 B1
6358862 Ireland et al. Mar 2002 B1
6373518 Sogawa Apr 2002 B1
6419638 Hay et al. Jul 2002 B1
6443579 Myers Sep 2002 B1
6476805 Shum et al. Nov 2002 B1
6477260 Shimomura Nov 2002 B1
6502097 Chan et al. Dec 2002 B1
6525302 Dowski, Jr. et al. Feb 2003 B2
6552742 Seta Apr 2003 B1
6563537 Kawamura et al. May 2003 B1
6571466 Glenn et al. Jun 2003 B1
6603513 Berezin Aug 2003 B1
6611289 Yu Aug 2003 B1
6627896 Hashimoto et al. Sep 2003 B1
6628330 Lin Sep 2003 B1
6628845 Stone et al. Sep 2003 B1
6635941 Suda Oct 2003 B2
6639596 Shum et al. Oct 2003 B1
6647142 Beardsley Nov 2003 B1
6657218 Noda Dec 2003 B2
6671399 Berestov Dec 2003 B1
6674892 Melen Jan 2004 B1
6750904 Lambert Jun 2004 B1
6765617 Tangen et al. Jul 2004 B1
6771833 Edgar Aug 2004 B1
6774941 Boisvert et al. Aug 2004 B1
6788338 Dinev Sep 2004 B1
6795253 Shinohara Sep 2004 B2
6801653 Wu et al. Oct 2004 B1
6819328 Moriwaki et al. Nov 2004 B1
6819358 Kagle et al. Nov 2004 B1
6879735 Portniaguine et al. Apr 2005 B1
6897454 Sasaki et al. May 2005 B2
6903770 Kobayashi et al. Jun 2005 B1
6909121 Nishikawa Jun 2005 B2
6917702 Beardsley Jul 2005 B2
6927922 George Aug 2005 B2
6958862 Joseph Oct 2005 B1
6985175 Iwai et al. Jan 2006 B2
7015954 Foote et al. Mar 2006 B1
7085409 Sawhney Aug 2006 B2
7161614 Yamashita et al. Jan 2007 B1
7199348 Olsen et al. Apr 2007 B2
7206449 Raskar et al. Apr 2007 B2
7215364 Wachtel et al. May 2007 B2
7235785 Hornback et al. Jun 2007 B2
7245761 Swaminathan et al. Jul 2007 B2
7262799 Suda Aug 2007 B2
7292735 Blake et al. Nov 2007 B2
7295697 Satoh Nov 2007 B1
7333651 Kim et al. Feb 2008 B1
7369165 Bosco et al. May 2008 B2
7391572 Jacobowitz et al. Jun 2008 B2
7408725 Sato Aug 2008 B2
7425984 Chen Sep 2008 B2
7430312 Gu Sep 2008 B2
7471765 Jaffray et al. Dec 2008 B2
7496293 Shamir et al. Feb 2009 B2
7564019 Olsen Jul 2009 B2
7599547 Sun et al. Oct 2009 B2
7606484 Richards et al. Oct 2009 B1
7620265 Wolff Nov 2009 B1
7633511 Shum et al. Dec 2009 B2
7639435 Chiang et al. Dec 2009 B2
7646549 Zalevsky et al. Jan 2010 B2
7657090 Omatsu et al. Feb 2010 B2
7667824 Moran Feb 2010 B1
7675080 Boettiger Mar 2010 B2
7675681 Tomikawa et al. Mar 2010 B2
7706634 Schmitt et al. Apr 2010 B2
7723662 Levoy et al. May 2010 B2
7738013 Galambos et al. Jun 2010 B2
7741620 Doering et al. Jun 2010 B2
7782364 Smith Aug 2010 B2
7826153 Hong Nov 2010 B2
7840067 Shen et al. Nov 2010 B2
7912673 Hébert et al. Mar 2011 B2
7924321 Nayar et al. Apr 2011 B2
7956871 Fainstain et al. Jun 2011 B2
7965314 Miller et al. Jun 2011 B1
7973834 Yang Jul 2011 B2
7986018 Rennie Jul 2011 B2
7990447 Honda et al. Aug 2011 B2
8000498 Shih et al. Aug 2011 B2
8013904 Tan et al. Sep 2011 B2
8027531 Wilburn et al. Sep 2011 B2
8044994 Vetro et al. Oct 2011 B2
8055466 Bryll Nov 2011 B2
8077245 Adamo et al. Dec 2011 B2
8089515 Chebil et al. Jan 2012 B2
8098297 Crisan et al. Jan 2012 B2
8098304 Pinto et al. Jan 2012 B2
8106949 Tan et al. Jan 2012 B2
8111910 Tanaka Feb 2012 B2
8126279 Marcellin et al. Feb 2012 B2
8130120 Kawabata et al. Mar 2012 B2
8131097 Lelescu et al. Mar 2012 B2
8149323 Li Apr 2012 B2
8164629 Zhang Apr 2012 B1
8169486 Corcoran et al. May 2012 B2
8180145 Wu et al. May 2012 B2
8189065 Georgiev et al. May 2012 B2
8189089 Georgiev et al. May 2012 B1
8194296 Compton Jun 2012 B2
8212914 Chiu Jul 2012 B2
8213711 Tam Jul 2012 B2
8231814 Duparre Jul 2012 B2
8242426 Ward et al. Aug 2012 B2
8244027 Takahashi Aug 2012 B2
8244058 Intwala et al. Aug 2012 B1
8254668 Mashitani et al. Aug 2012 B2
8279325 Pitts et al. Oct 2012 B2
8280194 Wong et al. Oct 2012 B2
8284240 Saint-Pierre et al. Oct 2012 B2
8289409 Chang Oct 2012 B2
8289440 Pitts et al. Oct 2012 B2
8290358 Georgiev Oct 2012 B1
8294099 Blackwell, Jr. Oct 2012 B2
8294754 Jung et al. Oct 2012 B2
8300085 Yang et al. Oct 2012 B2
8305456 McMahon Nov 2012 B1
8315476 Georgiev et al. Nov 2012 B1
8345144 Georgiev et al. Jan 2013 B1
8360574 Ishak et al. Jan 2013 B2
8400555 Georgiev Mar 2013 B1
8406562 Bassi et al. Mar 2013 B2
8411146 Twede Apr 2013 B2
8446492 Nakano et al. May 2013 B2
8456517 Mor et al. Jun 2013 B2
8493496 Freedman et al. Jul 2013 B2
8514291 Chang Aug 2013 B2
8514491 Duparre Aug 2013 B2
8541730 Inuiya Sep 2013 B2
8542933 Venkataraman et al. Sep 2013 B2
8553093 Wong et al. Oct 2013 B2
8559756 Georgiev et al. Oct 2013 B2
8565547 Strandemar Oct 2013 B2
8576302 Yoshikawa Nov 2013 B2
8577183 Robinson Nov 2013 B2
8581995 Lin et al. Nov 2013 B2
8619082 Ciurea et al. Dec 2013 B1
8648918 Kauker et al. Feb 2014 B2
8655052 Spooner et al. Feb 2014 B2
8682107 Yoon et al. Mar 2014 B2
8687087 Pertsel et al. Apr 2014 B2
8692893 McMahon Apr 2014 B2
8754941 Sarwari et al. Jun 2014 B1
8773536 Zhang Jul 2014 B1
8780113 Ciurea et al. Jul 2014 B1
8804255 Duparre Aug 2014 B2
8830375 Ludwig Sep 2014 B2
8831367 Venkataraman et al. Sep 2014 B2
8831377 Pitts et al. Sep 2014 B2
8836793 Kriesel et al. Sep 2014 B1
8842201 Tajiri Sep 2014 B2
8854462 Herbin et al. Oct 2014 B2
8861089 Duparre Oct 2014 B2
8866912 Mullis Oct 2014 B2
8866920 Venkataraman et al. Oct 2014 B2
8866951 Keelan Oct 2014 B2
8878950 Lelescu et al. Nov 2014 B2
8885059 Venkataraman et al. Nov 2014 B1
8885922 Ito et al. Nov 2014 B2
8896594 Xiong et al. Nov 2014 B2
8896719 Venkataraman et al. Nov 2014 B1
8902321 Venkataraman et al. Dec 2014 B2
8928793 McMahon Jan 2015 B2
8977038 Tian et al. Mar 2015 B2
9001226 Ng et al. Apr 2015 B1
9019426 Han et al. Apr 2015 B2
9025894 Venkataraman May 2015 B2
9025895 Venkataraman May 2015 B2
9030528 Pesach et al. May 2015 B2
9031335 Venkataraman May 2015 B2
9031342 Venkataraman May 2015 B2
9031343 Venkataraman May 2015 B2
9036928 Venkataraman May 2015 B2
9036931 Venkataraman et al. May 2015 B2
9041823 Venkataraman et al. May 2015 B2
9041824 Lelescu et al. May 2015 B2
9041829 Venkataraman et al. May 2015 B2
9042667 Venkataraman et al. May 2015 B2
9047684 Lelescu et al. Jun 2015 B2
9049367 Venkataraman et al. Jun 2015 B2
9055233 Venkataraman et al. Jun 2015 B2
9060120 Venkataraman et al. Jun 2015 B2
9060124 Venkataraman et al. Jun 2015 B2
9077893 Venkataraman et al. Jul 2015 B2
9094661 Venkataraman et al. Jul 2015 B2
9100586 McMahon et al. Aug 2015 B2
9100635 Duparre et al. Aug 2015 B2
9123117 Ciurea et al. Sep 2015 B2
9123118 Ciurea et al. Sep 2015 B2
9124815 Venkataraman et al. Sep 2015 B2
9124831 Mullis Sep 2015 B2
9124864 Mullis Sep 2015 B2
9128228 Duparre Sep 2015 B2
9129183 Venkataraman et al. Sep 2015 B2
9129377 Ciurea et al. Sep 2015 B2
9143711 McMahon Sep 2015 B2
9147254 Florian et al. Sep 2015 B2
9185276 Rodda et al. Nov 2015 B2
9188765 Venkataraman et al. Nov 2015 B2
9191580 Venkataraman et al. Nov 2015 B2
9197821 McMahon Nov 2015 B2
9210392 Nisenzon et al. Dec 2015 B2
9214013 Venkataraman et al. Dec 2015 B2
9235898 Venkataraman et al. Jan 2016 B2
9235900 Ciurea et al. Jan 2016 B2
9240049 Ciurea et al. Jan 2016 B2
9253380 Venkataraman et al. Feb 2016 B2
9256974 Hines Feb 2016 B1
9264592 Rodda et al. Feb 2016 B2
9264610 Duparre Feb 2016 B2
9361662 Lelescu et al. Jun 2016 B2
9374512 Venkataraman et al. Jun 2016 B2
9412206 McMahon et al. Aug 2016 B2
9413953 Maeda Aug 2016 B2
9426343 Rodda et al. Aug 2016 B2
9426361 Venkataraman et al. Aug 2016 B2
9438888 Venkataraman et al. Sep 2016 B2
9445003 Lelescu et al. Sep 2016 B1
9456134 Venkataraman et al. Sep 2016 B2
9456196 Kim et al. Sep 2016 B2
9462164 Venkataraman et al. Oct 2016 B2
9485496 Venkataraman et al. Nov 2016 B2
9497370 Venkataraman et al. Nov 2016 B2
9497429 Mullis et al. Nov 2016 B2
9516222 Duparre et al. Dec 2016 B2
9519972 Venkataraman et al. Dec 2016 B2
9521319 Rodda et al. Dec 2016 B2
9521416 McMahon et al. Dec 2016 B1
9536166 Venkataraman et al. Jan 2017 B2
9576369 Venkataraman et al. Feb 2017 B2
9578237 Duparre et al. Feb 2017 B2
9578259 Molina Feb 2017 B2
9602805 Venkataraman et al. Mar 2017 B2
9633442 Venkataraman et al. Apr 2017 B2
9635274 Lin et al. Apr 2017 B2
9638883 Duparre May 2017 B1
9661310 Deng et al. May 2017 B2
9706132 Nisenzon et al. Jul 2017 B2
9712759 Venkataraman et al. Jul 2017 B2
9733486 Lelescu et al. Aug 2017 B2
9741118 Mullis Aug 2017 B2
9743051 Venkataraman et al. Aug 2017 B2
9749547 Venkataraman et al. Aug 2017 B2
9749568 McMahon Aug 2017 B2
9754422 McMahon et al. Sep 2017 B2
9766380 Duparre et al. Sep 2017 B2
9769365 Jannard Sep 2017 B1
9774789 Ciurea et al. Sep 2017 B2
9774831 Venkataraman et al. Sep 2017 B2
9787911 McMahon et al. Oct 2017 B2
9794476 Nayar et al. Oct 2017 B2
9800856 Venkataraman et al. Oct 2017 B2
9800859 Venkataraman et al. Oct 2017 B2
9807382 Duparre et al. Oct 2017 B2
9811753 Venkataraman et al. Nov 2017 B2
9813616 Lelescu et al. Nov 2017 B2
9813617 Venkataraman et al. Nov 2017 B2
9858673 Ciurea et al. Jan 2018 B2
9864921 Venkataraman et al. Jan 2018 B2
9888194 Duparre Feb 2018 B2
9898856 Yang et al. Feb 2018 B2
9917998 Venkataraman et al. Mar 2018 B2
9924092 Rodda et al. Mar 2018 B2
9936148 McMahon Apr 2018 B2
9955070 Lelescu et al. Apr 2018 B2
9986224 Mullis May 2018 B2
10009538 Venkataraman et al. Jun 2018 B2
10019816 Venkataraman et al. Jul 2018 B2
10027901 Venkataraman et al. Jul 2018 B2
10089740 Srikanth et al. Oct 2018 B2
10091405 Molina Oct 2018 B2
10142560 Venkataraman et al. Nov 2018 B2
20010005225 Clark et al. Jun 2001 A1
20010019621 Hanna et al. Sep 2001 A1
20010028038 Hamaguchi et al. Oct 2001 A1
20010038387 Tomooka et al. Nov 2001 A1
20020012056 Trevino Jan 2002 A1
20020015536 Warren Feb 2002 A1
20020027608 Johnson et al. Mar 2002 A1
20020028014 Ono Mar 2002 A1
20020039438 Mori et al. Apr 2002 A1
20020057845 Fossum May 2002 A1
20020061131 Sawhney et al. May 2002 A1
20020063807 Margulis May 2002 A1
20020075450 Aratani Jun 2002 A1
20020087403 Meyers et al. Jul 2002 A1
20020089596 Suda Jul 2002 A1
20020094027 Sato et al. Jul 2002 A1
20020101528 Lee Aug 2002 A1
20020113867 Takigawa et al. Aug 2002 A1
20020113888 Sonoda et al. Aug 2002 A1
20020118113 Oku et al. Aug 2002 A1
20020120634 Min et al. Aug 2002 A1
20020122113 Foote et al. Sep 2002 A1
20020163054 Suda et al. Nov 2002 A1
20020167537 Trajkovic Nov 2002 A1
20020177054 Saitoh et al. Nov 2002 A1
20020190991 Efran et al. Dec 2002 A1
20020195548 Dowski, Jr. et al. Dec 2002 A1
20030025227 Daniell Feb 2003 A1
20030086079 Barth et al. May 2003 A1
20030124763 Fan et al. Jul 2003 A1
20030140347 Varsa Jul 2003 A1
20030156189 Utsumi et al. Aug 2003 A1
20030179418 Wengender et al. Sep 2003 A1
20030188659 Merry et al. Oct 2003 A1
20030190072 Adkins et al. Oct 2003 A1
20030198377 Ng Oct 2003 A1
20030211405 Venkataraman Nov 2003 A1
20030231179 Suzuki Dec 2003 A1
20040003409 Berstis Jan 2004 A1
20040008271 Hagimori et al. Jan 2004 A1
20040012689 Tinnerino Jan 2004 A1
20040027358 Nakao Feb 2004 A1
20040047274 Amanai Mar 2004 A1
20040050104 Ghosh et al. Mar 2004 A1
20040056966 Schechner et al. Mar 2004 A1
20040061787 Liu et al. Apr 2004 A1
20040066454 Otani et al. Apr 2004 A1
20040071367 Irani et al. Apr 2004 A1
20040075654 Hsiao et al. Apr 2004 A1
20040096119 Williams May 2004 A1
20040100570 Shizukuishi May 2004 A1
20040105021 Hu et al. Jun 2004 A1
20040114807 Lelescu et al. Jun 2004 A1
20040141659 Zhang Jul 2004 A1
20040151401 Sawhney et al. Aug 2004 A1
20040165090 Ning Aug 2004 A1
20040169617 Yelton et al. Sep 2004 A1
20040170340 Tipping et al. Sep 2004 A1
20040174439 Upton Sep 2004 A1
20040179008 Gordon et al. Sep 2004 A1
20040179834 Szajewski Sep 2004 A1
20040196379 Chen et al. Oct 2004 A1
20040207600 Zhang et al. Oct 2004 A1
20040207836 Chhibber et al. Oct 2004 A1
20040213449 Safaee-Rad et al. Oct 2004 A1
20040218809 Blake et al. Nov 2004 A1
20040234873 Venkataraman Nov 2004 A1
20040239782 Equitz et al. Dec 2004 A1
20040239885 Jaynes et al. Dec 2004 A1
20040240052 Minefuji et al. Dec 2004 A1
20040251509 Choi Dec 2004 A1
20040264806 Herley Dec 2004 A1
20050006477 Patel Jan 2005 A1
20050007461 Chou et al. Jan 2005 A1
20050009313 Suzuki et al. Jan 2005 A1
20050010621 Pinto et al. Jan 2005 A1
20050012035 Miller Jan 2005 A1
20050036778 DeMonte Feb 2005 A1
20050047678 Jones et al. Mar 2005 A1
20050048690 Yamamoto Mar 2005 A1
20050068436 Fraenkel et al. Mar 2005 A1
20050083531 Millerd et al. Apr 2005 A1
20050084179 Hanna et al. Apr 2005 A1
20050128509 Tokkonen et al. Jun 2005 A1
20050128595 Shimizu Jun 2005 A1
20050132098 Sonoda et al. Jun 2005 A1
20050134698 Schroeder Jun 2005 A1
20050134699 Nagashima Jun 2005 A1
20050134712 Gruhlke et al. Jun 2005 A1
20050147277 Higaki et al. Jul 2005 A1
20050151759 Gonzalez-Banos et al. Jul 2005 A1
20050168924 Wu et al. Aug 2005 A1
20050175257 Kuroki Aug 2005 A1
20050185711 Pfister et al. Aug 2005 A1
20050205785 Hornback et al. Sep 2005 A1
20050219264 Shum et al. Oct 2005 A1
20050219363 Kohler et al. Oct 2005 A1
20050224843 Boemler Oct 2005 A1
20050225654 Feldman et al. Oct 2005 A1
20050265633 Piacentino et al. Dec 2005 A1
20050275946 Choo et al. Dec 2005 A1
20050286612 Takanashi Dec 2005 A1
20050286756 Hong et al. Dec 2005 A1
20060002635 Nestares et al. Jan 2006 A1
20060007331 Izumi et al. Jan 2006 A1
20060013318 Webb et al. Jan 2006 A1
20060018509 Miyoshi Jan 2006 A1
20060023197 Joel Feb 2006 A1
20060023314 Boettiger et al. Feb 2006 A1
20060028476 Sobel et al. Feb 2006 A1
20060029270 Berestov et al. Feb 2006 A1
20060029271 Miyoshi et al. Feb 2006 A1
20060033005 Jerdev et al. Feb 2006 A1
20060034003 Zalevsky Feb 2006 A1
20060034531 Poon et al. Feb 2006 A1
20060035415 Wood Feb 2006 A1
20060038891 Okutomi et al. Feb 2006 A1
20060039611 Rother Feb 2006 A1
20060046204 Ono et al. Mar 2006 A1
20060049930 Zruya et al. Mar 2006 A1
20060050980 Kohashi et al. Mar 2006 A1
20060054780 Garrood et al. Mar 2006 A1
20060054782 Olsen et al. Mar 2006 A1
20060055811 Frtiz et al. Mar 2006 A1
20060069478 Iwama Mar 2006 A1
20060072029 Miyatake et al. Apr 2006 A1
20060087747 Ohzawa et al. Apr 2006 A1
20060098888 Morishita May 2006 A1
20060103754 Wenstrand et al. May 2006 A1
20060125936 Gruhlke et al. Jun 2006 A1
20060138322 Costello et al. Jun 2006 A1
20060152803 Provitola Jul 2006 A1
20060157640 Perlman et al. Jul 2006 A1
20060159369 Young Jul 2006 A1
20060176566 Boettiger et al. Aug 2006 A1
20060187338 May et al. Aug 2006 A1
20060197937 Bamji et al. Sep 2006 A1
20060203100 Ajito et al. Sep 2006 A1
20060203113 Wada et al. Sep 2006 A1
20060210146 Gu Sep 2006 A1
20060210186 Berkner Sep 2006 A1
20060214085 Olsen Sep 2006 A1
20060221250 Rossbach et al. Oct 2006 A1
20060239549 Kelly et al. Oct 2006 A1
20060243889 Farnworth et al. Nov 2006 A1
20060251410 Trutna Nov 2006 A1
20060274174 Tewinkle Dec 2006 A1
20060278948 Yamaguchi et al. Dec 2006 A1
20060279648 Senba et al. Dec 2006 A1
20060289772 Johnson et al. Dec 2006 A1
20070002159 Olsen et al. Jan 2007 A1
20070008575 Yu et al. Jan 2007 A1
20070009150 Suwa Jan 2007 A1
20070024614 Tam Feb 2007 A1
20070030356 Yea et al. Feb 2007 A1
20070035707 Margulis Feb 2007 A1
20070036427 Nakamura et al. Feb 2007 A1
20070040828 Zalevsky et al. Feb 2007 A1
20070040922 McKee et al. Feb 2007 A1
20070041391 Lin et al. Feb 2007 A1
20070052825 Cho Mar 2007 A1
20070083114 Yang et al. Apr 2007 A1
20070085917 Kobayashi Apr 2007 A1
20070092245 Bazakos et al. Apr 2007 A1
20070102622 Olsen et al. May 2007 A1
20070126898 Feldman et al. Jun 2007 A1
20070127831 Venkataraman Jun 2007 A1
20070139333 Sato et al. Jun 2007 A1
20070140685 Wu Jun 2007 A1
20070146503 Shiraki Jun 2007 A1
20070146511 Kinoshita et al. Jun 2007 A1
20070153335 Hosaka Jul 2007 A1
20070158427 Zhu et al. Jul 2007 A1
20070159541 Sparks et al. Jul 2007 A1
20070160310 Tanida et al. Jul 2007 A1
20070165931 Higaki Jul 2007 A1
20070171290 Kroger Jul 2007 A1
20070177004 Kolehmainen et al. Aug 2007 A1
20070182843 Shimamura et al. Aug 2007 A1
20070201859 Sarrat Aug 2007 A1
20070206241 Smith et al. Sep 2007 A1
20070211164 Olsen et al. Sep 2007 A1
20070216765 Wong et al. Sep 2007 A1
20070225600 Weibrecht et al. Sep 2007 A1
20070228256 Mentzer Oct 2007 A1
20070236595 Pan et al. Oct 2007 A1
20070242141 Ciurea Oct 2007 A1
20070247517 Zhang et al. Oct 2007 A1
20070257184 Olsen et al. Nov 2007 A1
20070258006 Olsen et al. Nov 2007 A1
20070258706 Raskar et al. Nov 2007 A1
20070263113 Baek et al. Nov 2007 A1
20070263114 Gurevich et al. Nov 2007 A1
20070268374 Robinson Nov 2007 A1
20070296721 Chang et al. Dec 2007 A1
20070296832 Ota et al. Dec 2007 A1
20070296835 Olsen Dec 2007 A1
20070296847 Chang et al. Dec 2007 A1
20070297696 Hamza Dec 2007 A1
20080006859 Mionetto et al. Jan 2008 A1
20080019611 Larkin et al. Jan 2008 A1
20080024683 Damera-Venkata et al. Jan 2008 A1
20080025649 Liu et al. Jan 2008 A1
20080030592 Border et al. Feb 2008 A1
20080030597 Olsen et al. Feb 2008 A1
20080043095 Vetro et al. Feb 2008 A1
20080043096 Vetro et al. Feb 2008 A1
20080054518 Ra et al. Mar 2008 A1
20080056302 Erdal et al. Mar 2008 A1
20080062164 Bassi et al. Mar 2008 A1
20080079805 Takagi et al. Apr 2008 A1
20080080028 Bakin et al. Apr 2008 A1
20080084486 Enge et al. Apr 2008 A1
20080088793 Sverdrup et al. Apr 2008 A1
20080095523 Schilling-Benz et al. Apr 2008 A1
20080099804 Venezia et al. May 2008 A1
20080106620 Sawachi May 2008 A1
20080112059 Choi et al. May 2008 A1
20080112635 Kondo et al. May 2008 A1
20080117289 Schowengerdt et al. May 2008 A1
20080118241 Tekolste et al. May 2008 A1
20080131019 Ng Jun 2008 A1
20080131107 Ueno Jun 2008 A1
20080151097 Chen et al. Jun 2008 A1
20080152215 Horie et al. Jun 2008 A1
20080152296 Oh et al. Jun 2008 A1
20080156991 Hu et al. Jul 2008 A1
20080158259 Kempf et al. Jul 2008 A1
20080158375 Kakkori et al. Jul 2008 A1
20080158698 Chang et al. Jul 2008 A1
20080165257 Boettiger Jul 2008 A1
20080174670 Olsen et al. Jul 2008 A1
20080187305 Raskar et al. Aug 2008 A1
20080193026 Horie et al. Aug 2008 A1
20080211737 Kim et al. Sep 2008 A1
20080218610 Chapman et al. Sep 2008 A1
20080218611 Parulski et al. Sep 2008 A1
20080218612 Border et al. Sep 2008 A1
20080218613 Janson et al. Sep 2008 A1
20080219654 Border et al. Sep 2008 A1
20080239116 Smith Oct 2008 A1
20080240598 Hasegawa Oct 2008 A1
20080247638 Tanida et al. Oct 2008 A1
20080247653 Moussavi et al. Oct 2008 A1
20080272416 Yun Nov 2008 A1
20080273751 Yuan et al. Nov 2008 A1
20080278591 Barna et al. Nov 2008 A1
20080278610 Boettiger Nov 2008 A1
20080284880 Numata Nov 2008 A1
20080291295 Kato et al. Nov 2008 A1
20080298674 Baker et al. Dec 2008 A1
20080310501 Ward et al. Dec 2008 A1
20090027543 Kanehiro Jan 2009 A1
20090050946 Duparre et al. Feb 2009 A1
20090052743 Techmer Feb 2009 A1
20090060281 Tanida et al. Mar 2009 A1
20090066693 Carson Mar 2009 A1
20090079862 Subbotin Mar 2009 A1
20090086074 Li et al. Apr 2009 A1
20090091645 Trimeche et al. Apr 2009 A1
20090091806 Inuiya Apr 2009 A1
20090092363 Daum et al. Apr 2009 A1
20090096050 Park Apr 2009 A1
20090102956 Georgiev Apr 2009 A1
20090103792 Rahn et al. Apr 2009 A1
20090109306 Shan Apr 2009 A1
20090127430 Hirasawa et al. May 2009 A1
20090128644 Camp et al. May 2009 A1
20090128833 Yahav May 2009 A1
20090129667 Ho et al. May 2009 A1
20090140131 Utagawa et al. Jun 2009 A1
20090141933 Wagg Jun 2009 A1
20090147919 Goto et al. Jun 2009 A1
20090152664 Klem et al. Jun 2009 A1
20090167922 Perlman et al. Jul 2009 A1
20090167934 Gupta Jul 2009 A1
20090175349 Ye et al. Jul 2009 A1
20090179142 Duparre et al. Jul 2009 A1
20090180021 Kikuchi et al. Jul 2009 A1
20090200622 Tai et al. Aug 2009 A1
20090201371 Matsuda et al. Aug 2009 A1
20090207235 Francini et al. Aug 2009 A1
20090219435 Yuan et al. Sep 2009 A1
20090225203 Tanida et al. Sep 2009 A1
20090237520 Kaneko et al. Sep 2009 A1
20090245573 Saptharishi et al. Oct 2009 A1
20090256947 Ciurea Oct 2009 A1
20090263017 Tanbakuchi Oct 2009 A1
20090268192 Koenck et al. Oct 2009 A1
20090268970 Babacan et al. Oct 2009 A1
20090268983 Stone et al. Oct 2009 A1
20090273663 Yoshida et al. Nov 2009 A1
20090274387 Jin Nov 2009 A1
20090279800 Uetani et al. Nov 2009 A1
20090284651 Srinivasan Nov 2009 A1
20090290811 Imai Nov 2009 A1
20090297056 Lelescu et al. Dec 2009 A1
20090302205 Olsen et al. Dec 2009 A9
20090317061 Jung et al. Dec 2009 A1
20090322876 Lee et al. Dec 2009 A1
20090323195 Hembree et al. Dec 2009 A1
20090323206 Oliver et al. Dec 2009 A1
20090324118 Maslov et al. Dec 2009 A1
20100002126 Wenstrand et al. Jan 2010 A1
20100002313 Duparre et al. Jan 2010 A1
20100002314 Duparre Jan 2010 A1
20100007714 Kim et al. Jan 2010 A1
20100013927 Nixon Jan 2010 A1
20100044815 Chang et al. Feb 2010 A1
20100045809 Packard Feb 2010 A1
20100053342 Hwang et al. Mar 2010 A1
20100053600 Tanida et al. Mar 2010 A1
20100060746 Olsen et al. Mar 2010 A9
20100073463 Momonoi et al. Mar 2010 A1
20100074532 Gordon et al. Mar 2010 A1
20100085351 Deb et al. Apr 2010 A1
20100085425 Tan Apr 2010 A1
20100086227 Sun et al. Apr 2010 A1
20100091389 Henriksen et al. Apr 2010 A1
20100097491 Farina et al. Apr 2010 A1
20100103175 Okutomi et al. Apr 2010 A1
20100103259 Tanida et al. Apr 2010 A1
20100103308 Butterfield et al. Apr 2010 A1
20100111444 Coffman May 2010 A1
20100118127 Nam et al. May 2010 A1
20100128145 Pitts et al. May 2010 A1
20100129048 Pitts et al. May 2010 A1
20100133230 Henriksen et al. Jun 2010 A1
20100133418 Sargent et al. Jun 2010 A1
20100141802 Knight et al. Jun 2010 A1
20100142828 Chang et al. Jun 2010 A1
20100142839 Lakus-Becker Jun 2010 A1
20100157073 Kondo et al. Jun 2010 A1
20100165152 Lim Jul 2010 A1
20100166410 Chang et al. Jul 2010 A1
20100171866 Brady et al. Jul 2010 A1
20100177411 Hegde et al. Jul 2010 A1
20100182406 Benitez Jul 2010 A1
20100194860 Mentz et al. Aug 2010 A1
20100194901 van Hoorebeke et al. Aug 2010 A1
20100195716 Gunnewiek et al. Aug 2010 A1
20100201809 Oyama et al. Aug 2010 A1
20100201834 Maruyama et al. Aug 2010 A1
20100202054 Niederer Aug 2010 A1
20100202683 Robinson Aug 2010 A1
20100208100 Olsen et al. Aug 2010 A9
20100220212 Perlman et al. Sep 2010 A1
20100223237 Mishra et al. Sep 2010 A1
20100225740 Jung et al. Sep 2010 A1
20100231285 Boomer et al. Sep 2010 A1
20100238327 Griffith et al. Sep 2010 A1
20100244165 Lake et al. Sep 2010 A1
20100245684 Xiao et al. Sep 2010 A1
20100254627 Panahpour Tehrani et al. Oct 2010 A1
20100259610 Petersen et al. Oct 2010 A1
20100265346 Iizuka Oct 2010 A1
20100265381 Yamamoto et al. Oct 2010 A1
20100265385 Knight et al. Oct 2010 A1
20100281070 Chan et al. Nov 2010 A1
20100289941 Ito et al. Nov 2010 A1
20100290483 Park et al. Nov 2010 A1
20100302423 Adams, Jr. et al. Dec 2010 A1
20100309292 Ho et al. Dec 2010 A1
20100309368 Choi et al. Dec 2010 A1
20100321595 Chiu et al. Dec 2010 A1
20100321640 Yeh et al. Dec 2010 A1
20100329556 Mitarai et al. Dec 2010 A1
20110001037 Tewinkle Jan 2011 A1
20110018973 Takayama Jan 2011 A1
20110019048 Raynor et al. Jan 2011 A1
20110019243 Constant, Jr. et al. Jan 2011 A1
20110031381 Tay et al. Feb 2011 A1
20110032341 Ignatov et al. Feb 2011 A1
20110032370 Ludwig Feb 2011 A1
20110033129 Robinson Feb 2011 A1
20110038536 Gong Feb 2011 A1
20110043661 Podoleanu Feb 2011 A1
20110043665 Ogasahara Feb 2011 A1
20110043668 McKinnon et al. Feb 2011 A1
20110044502 Liu et al. Feb 2011 A1
20110051255 Lee et al. Mar 2011 A1
20110055729 Mason et al. Mar 2011 A1
20110064327 Dagher et al. Mar 2011 A1
20110069189 Venkataraman et al. Mar 2011 A1
20110080487 Venkataraman et al. Apr 2011 A1
20110085028 Samadani et al. Apr 2011 A1
20110090217 Mashitani et al. Apr 2011 A1
20110108708 Olsen et al. May 2011 A1
20110115886 Nguyen May 2011 A1
20110121421 Charbon May 2011 A1
20110122308 Duparre May 2011 A1
20110128393 Tavi et al. Jun 2011 A1
20110128412 Milnes et al. Jun 2011 A1
20110129165 Lim et al. Jun 2011 A1
20110141309 Nagashima et al. Jun 2011 A1
20110142138 Tian et al. Jun 2011 A1
20110149408 Haugholt et al. Jun 2011 A1
20110149409 Haugholt et al. Jun 2011 A1
20110150321 Cheong et al. Jun 2011 A1
20110153248 Gu et al. Jun 2011 A1
20110157321 Nakajima et al. Jun 2011 A1
20110157451 Chang Jun 2011 A1
20110169994 DiFrancesco et al. Jul 2011 A1
20110176020 Chang Jul 2011 A1
20110181797 Galstian et al. Jul 2011 A1
20110193944 Lian et al. Aug 2011 A1
20110199458 Hayasaka et al. Aug 2011 A1
20110200319 Kravitz et al. Aug 2011 A1
20110206291 Kashani et al. Aug 2011 A1
20110207074 Hall-Holt et al. Aug 2011 A1
20110211068 Yokota Sep 2011 A1
20110211077 Nayar et al. Sep 2011 A1
20110211824 Georgiev et al. Sep 2011 A1
20110221599 Högasten Sep 2011 A1
20110221658 Haddick et al. Sep 2011 A1
20110221939 Jerdev Sep 2011 A1
20110221950 Oostra Sep 2011 A1
20110222757 Yeatman, Jr. et al. Sep 2011 A1
20110228142 Brueckner Sep 2011 A1
20110228144 Tian et al. Sep 2011 A1
20110234841 Akeley et al. Sep 2011 A1
20110241234 Duparre Oct 2011 A1
20110242342 Goma et al. Oct 2011 A1
20110242355 Goma et al. Oct 2011 A1
20110242356 Aleksic et al. Oct 2011 A1
20110243428 Das Gupta Oct 2011 A1
20110255592 Sung Oct 2011 A1
20110255745 Hodder et al. Oct 2011 A1
20110261993 Weiming et al. Oct 2011 A1
20110267264 McCarthy et al. Nov 2011 A1
20110267348 Lin Nov 2011 A1
20110273531 Ito et al. Nov 2011 A1
20110274175 Sumitomo Nov 2011 A1
20110274366 Tardif Nov 2011 A1
20110279705 Kuang et al. Nov 2011 A1
20110279721 McMahon Nov 2011 A1
20110285701 Chen et al. Nov 2011 A1
20110285866 Bhrugumalla et al. Nov 2011 A1
20110285910 Bamji et al. Nov 2011 A1
20110292216 Fergus et al. Dec 2011 A1
20110298898 Jung et al. Dec 2011 A1
20110298917 Yanagita Dec 2011 A1
20110300929 Tardif et al. Dec 2011 A1
20110310980 Mathew Dec 2011 A1
20110316968 Taguchi et al. Dec 2011 A1
20110317766 Lim, II et al. Dec 2011 A1
20120012748 Pain et al. Jan 2012 A1
20120014456 Martinez Bauza et al. Jan 2012 A1
20120019530 Baker Jan 2012 A1
20120019700 Gaber Jan 2012 A1
20120023456 Sun et al. Jan 2012 A1
20120026297 Sato Feb 2012 A1
20120026342 Yu et al. Feb 2012 A1
20120026366 Golan et al. Feb 2012 A1
20120026451 Nystrom Feb 2012 A1
20120038745 Yu et al. Feb 2012 A1
20120039525 Tian et al. Feb 2012 A1
20120044249 Mashitani et al. Feb 2012 A1
20120044372 Côté et al. Feb 2012 A1
20120051624 Ando Mar 2012 A1
20120056982 Katz et al. Mar 2012 A1
20120057040 Park et al. Mar 2012 A1
20120062697 Treado et al. Mar 2012 A1
20120062702 Jiang et al. Mar 2012 A1
20120062756 Tian Mar 2012 A1
20120069235 Imai Mar 2012 A1
20120081519 Goma Apr 2012 A1
20120086803 Malzbender et al. Apr 2012 A1
20120105590 Fukumoto et al. May 2012 A1
20120105691 Waqas et al. May 2012 A1
20120113232 Joblove May 2012 A1
20120113318 Galstian et al. May 2012 A1
20120113413 Miahczylowicz-Wolski et al. May 2012 A1
20120114224 Xu et al. May 2012 A1
20120127275 Von Zitzewitz et al. May 2012 A1
20120147139 Li et al. Jun 2012 A1
20120147205 Lelescu et al. Jun 2012 A1
20120153153 Chang et al. Jun 2012 A1
20120154551 Inoue Jun 2012 A1
20120155830 Sasaki et al. Jun 2012 A1
20120163672 McKinnon Jun 2012 A1
20120163725 Fukuhara Jun 2012 A1
20120169433 Mullins et al. Jul 2012 A1
20120170134 Bolis et al. Jul 2012 A1
20120176479 Mayhew et al. Jul 2012 A1
20120176481 Lukk et al. Jul 2012 A1
20120188235 Wu et al. Jul 2012 A1
20120188341 Klein Gunnewiek et al. Jul 2012 A1
20120188389 Lin et al. Jul 2012 A1
20120188420 Black et al. Jul 2012 A1
20120188634 Kubala et al. Jul 2012 A1
20120198677 Duparre Aug 2012 A1
20120200669 Lai Aug 2012 A1
20120200726 Bugnariu Aug 2012 A1
20120200734 Tang Aug 2012 A1
20120206582 DiCarlo et al. Aug 2012 A1
20120219236 Ali Aug 2012 A1
20120224083 Jovanovski et al. Sep 2012 A1
20120229602 Chen et al. Sep 2012 A1
20120229628 Ishiyama et al. Sep 2012 A1
20120237114 Park et al. Sep 2012 A1
20120249550 Akeley et al. Oct 2012 A1
20120249750 Izzat et al. Oct 2012 A1
20120249836 Ali et al. Oct 2012 A1
20120249853 Krolczyk et al. Oct 2012 A1
20120262601 Choi et al. Oct 2012 A1
20120262607 Shimura et al. Oct 2012 A1
20120268574 Gidon et al. Oct 2012 A1
20120274626 Hsieh Nov 2012 A1
20120287291 McMahon et al. Nov 2012 A1
20120290257 Hodge et al. Nov 2012 A1
20120293489 Chen et al. Nov 2012 A1
20120293624 Chen et al. Nov 2012 A1
20120293695 Tanaka Nov 2012 A1
20120307093 Miyoshi Dec 2012 A1
20120307099 Yahata et al. Dec 2012 A1
20120314033 Lee et al. Dec 2012 A1
20120314937 Kim et al. Dec 2012 A1
20120327222 Ng et al. Dec 2012 A1
20130002828 Ding et al. Jan 2013 A1
20130003184 Duparre Jan 2013 A1
20130010073 Do et al. Jan 2013 A1
20130016245 Yuba Jan 2013 A1
20130016885 Tsujimoto et al. Jan 2013 A1
20130022111 Chen et al. Jan 2013 A1
20130027580 Olsen et al. Jan 2013 A1
20130033579 Wajs Feb 2013 A1
20130033585 Li et al. Feb 2013 A1
20130038696 Ding et al. Feb 2013 A1
20130047396 Au et al. Feb 2013 A1
20130050504 Safaee-Rad et al. Feb 2013 A1
20130050526 Keelan Feb 2013 A1
20130057710 McMahon Mar 2013 A1
20130070060 Chatterjee Mar 2013 A1
20130076967 Brunner et al. Mar 2013 A1
20130077859 Stauder et al. Mar 2013 A1
20130077880 Venkataraman et al. Mar 2013 A1
20130077882 Venkataraman et al. Mar 2013 A1
20130083172 Baba Apr 2013 A1
20130088489 Schmeitz et al. Apr 2013 A1
20130088637 Duparre Apr 2013 A1
20130093842 Yahata Apr 2013 A1
20130107061 Kumar et al. May 2013 A1
20130113888 Koguchi May 2013 A1
20130113899 Morohoshi et al. May 2013 A1
20130113939 Strandemar May 2013 A1
20130120536 Song et al. May 2013 A1
20130120605 Georgiev et al. May 2013 A1
20130121559 Hu May 2013 A1
20130127988 Wang et al. May 2013 A1
20130128068 Georgiev et al. May 2013 A1
20130128069 Georgiev et al. May 2013 A1
20130128087 Georgiev et al. May 2013 A1
20130128121 Agarwala et al. May 2013 A1
20130135315 Bares May 2013 A1
20130135448 Nagumo et al. May 2013 A1
20130147979 McMahon et al. Jun 2013 A1
20130155050 Rastogi et al. Jun 2013 A1
20130162641 Zhang et al. Jun 2013 A1
20130169754 Aronsson et al. Jul 2013 A1
20130176394 Tian et al. Jul 2013 A1
20130208138 Li Aug 2013 A1
20130215108 McMahon et al. Aug 2013 A1
20130215231 Hiramoto et al. Aug 2013 A1
20130222556 Shimada Aug 2013 A1
20130222656 Kaneko Aug 2013 A1
20130223759 Nishiyama et al. Aug 2013 A1
20130229540 Farina et al. Sep 2013 A1
20130230237 Schlosser Sep 2013 A1
20130250123 Zhang et al. Sep 2013 A1
20130250150 Malone et al. Sep 2013 A1
20130258067 Zhang et al. Oct 2013 A1
20130259317 Gaddy Oct 2013 A1
20130265459 Duparre et al. Oct 2013 A1
20130274596 Azizian et al. Oct 2013 A1
20130274923 By et al. Oct 2013 A1
20130286236 Mankowski Oct 2013 A1
20130293760 Nisenzon et al. Nov 2013 A1
20130321581 El-ghoroury et al. Dec 2013 A1
20130335598 Gustavsson et al. Dec 2013 A1
20140002674 Duparre et al. Jan 2014 A1
20140002675 Duparre et al. Jan 2014 A1
20140009586 McNamer et al. Jan 2014 A1
20140013273 Ng Jan 2014 A1
20140037137 Broaddus et al. Feb 2014 A1
20140037140 Benhimane et al. Feb 2014 A1
20140043507 Wang et al. Feb 2014 A1
20140059462 Wernersson Feb 2014 A1
20140076336 Clayton et al. Mar 2014 A1
20140078333 Miao Mar 2014 A1
20140079336 Venkataraman et al. Mar 2014 A1
20140081454 Nuyujukian et al. Mar 2014 A1
20140085502 Lin et al. Mar 2014 A1
20140092281 Nisenzon et al. Apr 2014 A1
20140098266 Nayar et al. Apr 2014 A1
20140098267 Tian et al. Apr 2014 A1
20140104490 Hsieh et al. Apr 2014 A1
20140118493 Sali et al. May 2014 A1
20140118584 Lee et al. May 2014 A1
20140125771 Grossmann et al. May 2014 A1
20140132810 McMahon May 2014 A1
20140146132 Bagnato et al. May 2014 A1
20140146201 Knight et al. May 2014 A1
20140176592 Wilburn et al. Jun 2014 A1
20140183334 Wang et al. Jul 2014 A1
20140186045 Poddar et al. Jul 2014 A1
20140192154 Jeong et al. Jul 2014 A1
20140192253 Laroia Jul 2014 A1
20140198188 Izawa Jul 2014 A1
20140204183 Lee et al. Jul 2014 A1
20140218546 McMahon Aug 2014 A1
20140232822 Venkataraman et al. Aug 2014 A1
20140240528 Venkataraman et al. Aug 2014 A1
20140240529 Venkataraman et al. Aug 2014 A1
20140253738 Mullis Sep 2014 A1
20140267243 Venkataraman et al. Sep 2014 A1
20140267286 Duparre Sep 2014 A1
20140267633 Venkataraman et al. Sep 2014 A1
20140267762 Mullis et al. Sep 2014 A1
20140267829 McMahon et al. Sep 2014 A1
20140267890 Lelescu et al. Sep 2014 A1
20140285675 Mullis Sep 2014 A1
20140300706 Song Oct 2014 A1
20140313315 Shoham et al. Oct 2014 A1
20140321712 Ciurea et al. Oct 2014 A1
20140333731 Venkataraman et al. Nov 2014 A1
20140333764 Venkataraman et al. Nov 2014 A1
20140333787 Venkataraman et al. Nov 2014 A1
20140340539 Venkataraman et al. Nov 2014 A1
20140347509 Venkataraman et al. Nov 2014 A1
20140347748 Duparre Nov 2014 A1
20140354773 Venkataraman et al. Dec 2014 A1
20140354843 Venkataraman et al. Dec 2014 A1
20140354844 Venkataraman et al. Dec 2014 A1
20140354853 Venkataraman et al. Dec 2014 A1
20140354854 Venkataraman et al. Dec 2014 A1
20140354855 Venkataraman et al. Dec 2014 A1
20140355870 Venkataraman et al. Dec 2014 A1
20140368662 Venkataraman et al. Dec 2014 A1
20140368683 Venkataraman et al. Dec 2014 A1
20140368684 Venkataraman et al. Dec 2014 A1
20140368685 Venkataraman et al. Dec 2014 A1
20140368686 Duparre Dec 2014 A1
20140369612 Venkataraman et al. Dec 2014 A1
20140369615 Venkataraman et al. Dec 2014 A1
20140376825 Venkataraman et al. Dec 2014 A1
20140376826 Venkataraman et al. Dec 2014 A1
20150002734 Lee Jan 2015 A1
20150003752 Venkataraman et al. Jan 2015 A1
20150003753 Venkataraman et al. Jan 2015 A1
20150009353 Venkataraman et al. Jan 2015 A1
20150009354 Venkataraman et al. Jan 2015 A1
20150009362 Venkataraman et al. Jan 2015 A1
20150015669 Venkataraman et al. Jan 2015 A1
20150035992 Mullis Feb 2015 A1
20150036014 Lelescu et al. Feb 2015 A1
20150036015 Lelescu et al. Feb 2015 A1
20150042766 Ciurea et al. Feb 2015 A1
20150042767 Ciurea et al. Feb 2015 A1
20150042833 Lelescu et al. Feb 2015 A1
20150049915 Ciurea et al. Feb 2015 A1
20150049916 Ciurea et al. Feb 2015 A1
20150049917 Ciurea et al. Feb 2015 A1
20150055884 Venkataraman et al. Feb 2015 A1
20150085073 Bruls et al. Mar 2015 A1
20150085174 Shabtay et al. Mar 2015 A1
20150091900 Yang et al. Apr 2015 A1
20150098079 Montgomery et al. Apr 2015 A1
20150104076 Hayasaka Apr 2015 A1
20150104101 Bryant et al. Apr 2015 A1
20150122411 Rodda et al. May 2015 A1
20150124059 Georgiev et al. May 2015 A1
20150124113 Rodda et al. May 2015 A1
20150124151 Rodda et al. May 2015 A1
20150138346 Venkataraman et al. May 2015 A1
20150146029 Venkataraman et al. May 2015 A1
20150146030 Venkataraman et al. May 2015 A1
20150161798 Venkataraman et al. Jun 2015 A1
20150199793 Venkataraman et al. Jul 2015 A1
20150199841 Venkataraman et al. Jul 2015 A1
20150235476 McMahon et al. Aug 2015 A1
20150243480 Yamada Aug 2015 A1
20150244927 Laroia et al. Aug 2015 A1
20150248744 Hayasaka et al. Sep 2015 A1
20150254868 Srikanth et al. Sep 2015 A1
20150264337 Venkataraman et al. Sep 2015 A1
20150296137 Duparre et al. Oct 2015 A1
20150312455 Venkataraman et al. Oct 2015 A1
20150326852 Duparre et al. Nov 2015 A1
20150332468 Hayasaka et al. Nov 2015 A1
20150373261 Rodda et al. Dec 2015 A1
20160037097 Duparre Feb 2016 A1
20160044252 Molina Feb 2016 A1
20160044257 Venkataraman et al. Feb 2016 A1
20160057332 Ciurea et al. Feb 2016 A1
20160065934 Kaza et al. Mar 2016 A1
20160163051 Mullis Jun 2016 A1
20160165106 Duparre Jun 2016 A1
20160165134 Lelescu et al. Jun 2016 A1
20160165147 Nisenzon et al. Jun 2016 A1
20160165212 Mullis Jun 2016 A1
20160195733 Lelescu et al. Jul 2016 A1
20160198096 McMahon et al. Jul 2016 A1
20160227195 Venkataraman et al. Aug 2016 A1
20160249001 McMahon Aug 2016 A1
20160255333 Nisenzon et al. Sep 2016 A1
20160266284 Duparre et al. Sep 2016 A1
20160267665 Venkataraman et al. Sep 2016 A1
20160267672 Ciurea et al. Sep 2016 A1
20160269626 McMahon Sep 2016 A1
20160269627 McMahon Sep 2016 A1
20160269650 Venkataraman et al. Sep 2016 A1
20160269651 Venkataraman et al. Sep 2016 A1
20160269664 Duparre Sep 2016 A1
20160316140 Nayar et al. Oct 2016 A1
20170006233 Venkataraman et al. Jan 2017 A1
20170048468 Pain et al. Feb 2017 A1
20170053382 Lelescu et al. Feb 2017 A1
20170054901 Venkataraman et al. Feb 2017 A1
20170070672 Rodda et al. Mar 2017 A1
20170070673 Lelescu et al. Mar 2017 A1
20170078568 Venkataraman et al. Mar 2017 A1
20170085845 Venkataraman et al. Mar 2017 A1
20170094243 Venkataraman et al. Mar 2017 A1
20170099465 Mullis et al. Apr 2017 A1
20170163862 Molina Jun 2017 A1
20170178363 Venkataraman et al. Jun 2017 A1
20170187933 Duparre Jun 2017 A1
20170188011 Panescu et al. Jun 2017 A1
20170244960 Ciurea et al. Aug 2017 A1
20170257562 Venkataraman et al. Sep 2017 A1
20170365104 McMahon et al. Dec 2017 A1
20180007284 Venkataraman et al. Jan 2018 A1
20180013945 Ciurea et al. Jan 2018 A1
20180024330 Laroia Jan 2018 A1
20180035057 McMahon et al. Feb 2018 A1
20180040135 Mullis Feb 2018 A1
20180048830 Venkataraman et al. Feb 2018 A1
20180048879 Venkataraman et al. Feb 2018 A1
20180081090 Duparre et al. Mar 2018 A1
20180097993 Nayar et al. Apr 2018 A1
20180109782 Duparre et al. Apr 2018 A1
20180124311 Lelescu et al. May 2018 A1
20180139382 Venkataraman et al. May 2018 A1
20180197035 Venkataraman et al. Jul 2018 A1
20180211402 Ciurea et al. Jul 2018 A1
20180240265 Yang et al. Aug 2018 A1
20180270473 Mullis Sep 2018 A1
20180302554 Lelescu et al. Oct 2018 A1
20180330182 Venkataraman et al. Nov 2018 A1
Foreign Referenced Citations (191)
Number Date Country
1669332 Sep 2005 CN
1839394 Sep 2006 CN
101010619 Aug 2007 CN
101064780 Oct 2007 CN
101102388 Jan 2008 CN
101147392 Mar 2008 CN
101427372 May 2009 CN
101606086 Dec 2009 CN
101883291 Nov 2010 CN
102037717 Apr 2011 CN
102375199 Mar 2012 CN
104081414 Oct 2014 CN
104508681 Apr 2015 CN
104662589 May 2015 CN
104685513 Jun 2015 CN
104685860 Jun 2015 CN
104081414 Aug 2017 CN
107230236 Oct 2017 CN
107346061 Nov 2017 CN
104685513 Apr 2018 CN
602011041799.1 Sep 2017 DE
0677821 Oct 1995 EP
0840502 May 1998 EP
1201407 May 2002 EP
1355274 Oct 2003 EP
1734766 Dec 2006 EP
1243945 Jan 2009 EP
2026563 Feb 2009 EP
2104334 Sep 2009 EP
2244484 Oct 2010 EP
0957642 Apr 2011 EP
2336816 Jun 2011 EP
2339532 Jun 2011 EP
2381418 Oct 2011 EP
2652678 Oct 2013 EP
2761534 Aug 2014 EP
2867718 May 2015 EP
2873028 May 2015 EP
2888698 Jul 2015 EP
2888720 Jul 2015 EP
2901671 Aug 2015 EP
2973476 Jan 2016 EP
3066690 Sep 2016 EP
2652678 Sep 2017 EP
2817955 Apr 2018 EP
3328048 May 2018 EP
3075140 Jun 2018 EP
2482022 Jan 2012 GB
2708CHENP2014 Aug 2015 IN
59-025483 Feb 1984 JP
64-037177 Feb 1989 JP
02-285772 Nov 1990 JP
06129851 May 1994 JP
07-015457 Jan 1995 JP
09171075 Jun 1997 JP
09181913 Jul 1997 JP
10253351 Sep 1998 JP
11142609 May 1999 JP
11223708 Aug 1999 JP
11325889 Nov 1999 JP
2000209503 Jul 2000 JP
2001008235 Jan 2001 JP
2001194114 Jul 2001 JP
2001264033 Sep 2001 JP
2001277260 Oct 2001 JP
2001337263 Dec 2001 JP
2002195910 Jul 2002 JP
2002205310 Jul 2002 JP
2002250607 Sep 2002 JP
2002252338 Sep 2002 JP
2003094445 Apr 2003 JP
2003139910 May 2003 JP
2003163938 Jun 2003 JP
2003298920 Oct 2003 JP
2004221585 Aug 2004 JP
2005116022 Apr 2005 JP
2005181460 Jul 2005 JP
2005295381 Oct 2005 JP
2005303694 Oct 2005 JP
2005341569 Dec 2005 JP
2005354124 Dec 2005 JP
2006033228 Feb 2006 JP
2006033493 Feb 2006 JP
2006047944 Feb 2006 JP
2006258930 Sep 2006 JP
2007520107 Jul 2007 JP
2007259136 Oct 2007 JP
2008039852 Feb 2008 JP
2008055908 Mar 2008 JP
2008507874 Mar 2008 JP
2008172735 Jul 2008 JP
2008258885 Oct 2008 JP
2009064421 Mar 2009 JP
2009132010 Jun 2009 JP
2009300268 Dec 2009 JP
2010139288 Jun 2010 JP
2011017764 Jan 2011 JP
2011030184 Feb 2011 JP
2011109484 Jun 2011 JP
2011523538 Aug 2011 JP
2011203238 Oct 2011 JP
2012504805 Feb 2012 JP
2013509022 Mar 2013 JP
2013526801 Jun 2013 JP
2014521117 Aug 2014 JP
2014535191 Dec 2014 JP
2015522178 Aug 2015 JP
2015534734 Dec 2015 JP
2016524125 Aug 2016 JP
6140709 May 2017 JP
2017163550 Sep 2017 JP
2017163587 Sep 2017 JP
2017531976 Oct 2017 JP
1020110097647 Aug 2011 KR
20170063827 Jun 2017 KR
101824672 Feb 2018 KR
101843994 Mar 2018 KR
191151 Jul 2013 SG
200828994 Jul 2008 TW
200939739 Sep 2009 TW
2005057922 Jun 2005 WO
2006039906 Apr 2006 WO
2006039906 Apr 2006 WO
2007013250 Feb 2007 WO
2007083579 Jul 2007 WO
2007134137 Nov 2007 WO
2008045198 Apr 2008 WO
2008050904 May 2008 WO
2008108271 Sep 2008 WO
2008108926 Sep 2008 WO
2008150817 Dec 2008 WO
2009073950 Jun 2009 WO
2009151903 Dec 2009 WO
2009157273 Dec 2009 WO
2010037512 Apr 2010 WO
2011008443 Jan 2011 WO
2011026527 Mar 2011 WO
2011046607 Apr 2011 WO
2011055655 May 2011 WO
2011063347 May 2011 WO
2011105814 Sep 2011 WO
2011116203 Sep 2011 WO
2011063347 Oct 2011 WO
2011143501 Nov 2011 WO
2012057619 May 2012 WO
2012057620 May 2012 WO
2012057620 May 2012 WO
2012057621 May 2012 WO
2012057622 May 2012 WO
2012057623 May 2012 WO
2012074361 Jun 2012 WO
2012078126 Jun 2012 WO
2012082904 Jun 2012 WO
2012155119 Nov 2012 WO
2013003276 Jan 2013 WO
2013043751 Mar 2013 WO
2013043761 Mar 2013 WO
2013049699 Apr 2013 WO
2013055960 Apr 2013 WO
2013119706 Aug 2013 WO
2013126578 Aug 2013 WO
2013166215 Nov 2013 WO
2014004134 Jan 2014 WO
2014005123 Jan 2014 WO
2014031795 Feb 2014 WO
2014052974 Apr 2014 WO
2014032020 May 2014 WO
2014078443 May 2014 WO
2014130849 Aug 2014 WO
2014133974 Sep 2014 WO
2014138695 Sep 2014 WO
2014138697 Sep 2014 WO
2014144157 Sep 2014 WO
2014145856 Sep 2014 WO
2014149403 Sep 2014 WO
2014149902 Sep 2014 WO
2014150856 Sep 2014 WO
2014153098 Sep 2014 WO
2014159721 Oct 2014 WO
2014159779 Oct 2014 WO
2014160142 Oct 2014 WO
2014164550 Oct 2014 WO
2014164909 Oct 2014 WO
2014165244 Oct 2014 WO
2014133974 Apr 2015 WO
2015048694 Apr 2015 WO
2015070105 May 2015 WO
2015074078 May 2015 WO
2015081279 Jun 2015 WO
2015134996 Sep 2015 WO
2016054089 Apr 2016 WO
Non-Patent Literature Citations (299)
Levin et al., “A Closed Form Solution to Natural Image Matting,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 61-68, 2006 (Year: 2006).
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2002, pp. 208-215.
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995 Proceedings of the 1995 International Conference on Image Processing, Date of Conference: Oct. 23-26, 1995, pp. 93-96.
Scharstein et al., “High-Accuracy Stereo Depth Maps Using Structured Light”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Jun. 2003, vol. 1, pp. 195-202.
Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision, vol. 48, and Sixth International Conference on Computer Vision (ICCV), Conference Date Jan. 7, 1998, 29 pgs., DOI: 10.1109/ICCV.1998.710696.
Shotton et al., “Real-time human pose recognition in parts from single depth images”, CVPR 2011, Jun. 20-25, 2011, Colorado Springs, CO, USA, pp. 1297-1304.
Shum et al., “A Review of Image-based Rendering Techniques”, Visual Communications and Image Processing 2000, May 2000, 12 pgs.
Shum et al., “Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System”, Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162. Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5, 2014.
Silberman et al., “Indoor segmentation and support inference from RGBD images”, ECCV'12 Proceedings of the 12th European conference on Computer Vision, vol. Part V, Oct. 7-13, 2012, Florence, Italy, pp. 746-760.
Stober, “Stanford researchers developing 3-D camera with 12,616 lenses”, Stanford Report, Mar. 19, 2008, Retrieved from: http://news.stanford.edu/news/2008/march19/camera-031908.html, 5 pgs.
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759.
Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, 8 pgs.; DOI: 10.1109/CVPR.2008.4587659.
Taguchi et al., “Rendering-Oriented Decoding for a Distributed Multiview Coding System Using a Coset Code”, Hindawi Publishing Corporation, EURASIP Journal on Image and Video Processing, vol. 2009, Article ID 251081, Online: Apr. 22, 2009, 12 pages.
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975.
Tallon et al., “Upsampling and Denoising of Depth Maps via Joint-Segmentation”, 20th European Signal Processing Conference, Aug. 27-31, 2012, 5 pgs.
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117.
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813.
Tao et al., “Depth from Combining Defocus and Correspondence Using Light-Field Cameras”, ICCV '13 Proceedings of the 2013 IEEE International Conference on Computer Vision, Dec. 1, 2013, pp. 673-680.
Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer vol. 77, No. 9, Sep. 1996, 93-100.
Tseng et al., “Automatic 3-D depth recovery from a single urban-scene image”, 2012 Visual Communications and Image Processing, Nov. 27-30, 2012, San Diego, CA, USA, pp. 1-6.
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2, Jun. 17-22, 2006, pp. 2331-2338.
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs.
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs.
Van Der Wal et al., “The Acadia Vision Processor”, Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception, Sep. 13, 2000, Padova, Italy, pp. 31-40.
Veilleux, “CCD Gain Lab: The Theory”, University of Maryland, College Park—Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 [online], [retrieved on May 13, 2014]. Retrieved from the Internet <URL: http://www.astro.umd.edu/˜veilleux/ASTR310/fall06/ccd_theory.pdf>, 5 pgs.
Venkataraman et al., “PiCam: An Ultra-Thin High Performance Monolithic Camera Array”, ACM Transactions on Graphics (TOG), ACM, US, vol. 32, No. 6, Nov. 1, 2013, pp. 1-13.
Vetro et al., “Coding Approaches for End-To-End 3D TV Systems”, Mitsubishi Electric Research Laboratories, Inc., TR2004-137, Dec. 2004, 6 pgs.
Viola et al., “Robust Real-time Object Detection”, Cambridge Research Laboratory, Technical Report Series, Compaq, CRL 2001/01, Feb. 2001, Printed from: http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf, 30 pgs.
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008.
Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, Dec. 29, 2010, OPTI521 Tutorial, 10 pgs.
Wang et al., “Automatic Natural Video Matting with Depth”, 15th Pacific Conference on Computer Graphics and Applications, PG '07, Oct. 29-Nov. 2, 2007, Maui, HI, USA, pp. 469-472.
Wang et al., “Image and Video Matting: A Survey”, Foundations and Trends, Computer Graphics and Vision, vol. 3, No. 2, 2007, pp. 91-175.
Wang et al., “Soft scissors: an interactive tool for realtime high quality matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Article 9, Jul. 2007, 6 pages, published Aug. 5, 2007.
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426.
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, Mar. 11, 2005, vol. 5674, 12 pgs.
Wieringa et al., “Remote Non-invasive Stereoscopic Imaging of Blood Vessels: First In-vivo Results of a New Multispectral Contrast Enhancement Technology”, Annals of Biomedical Engineering, vol. 34, No. 12, Dec. 2006, pp. 1870-1878, Published online Oct. 12, 2006.
Wikipedia, “Polarizing Filter (Photography)”, retrieved from http://en.wikipedia.org/wiki/Polarizing_filter_(photography) on Dec. 12, 2012, last modified on Sep. 26, 2012, 5 pgs.
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs.
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 1-12.
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004, vol. 2, Jun. 27-Jul. 2, 2004, pp. 294-301.
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs.
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, 59622C-1-59622C-11.
Wu et al., “A virtual view synthesis algorithm based on image inpainting”, 2012 Third International Conference on Networking and Distributed Computing, Hangzhou, China, Oct. 21-24, 2012, pp. 153-156.
Xu, “Real-Time Realistic Rendering and High Dynamic Range Image Display and Compression”, Dissertation, School of Computer Science in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida, Fall Term 2005, 192 pgs.
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), published Jul. 26, 2002, pp. 1-10.
Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Proceedings of SPIE—The International Society for Optical Engineering, Jul. 2002, 8 pgs.
Yokochi et al., “Extrinsic Camera Parameter Estimation Based-on Feature Tracking and GPS Data”, 2006, Nara Institute of Science and Technology, Graduate School of Information Science, LNCS 3851, pp. 369-378.
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, published Aug. 8, 2004, 12 pgs.
Zhang et al., “Depth estimation, spatially variant image registration, and super-resolution using a multi-lenslet camera”, Proceedings of SPIE, vol. 7705, Apr. 23, 2010, pp. 770505-770505-8, XP055113797 ISSN: 0277-786X, DOI: 10.1117/12.852171.
Zheng et al., “Balloon Motion Estimation Using Two Frames”, Proceedings of the Asilomar Conference on Signals, Systems and Computers, IEEE, Comp. Soc. Press, US, vol. 2 of 02, Nov. 4, 1991, pp. 1057-1061.
Zhu et al., “Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps”, 2008 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2008, Anchorage, AK, USA, pp. 1-8.
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6.
International Search Report and Written Opinion for International Application PCT/US2012/037670, dated Jul. 18, 2012, Completed Jul. 5, 2012, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/044014, completed Oct. 12, 2012, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/056151, completed Nov. 14, 2012, 10 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/058093, Report completed Nov. 15, 2012, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US2012/059813, completed Dec. 17, 2012, 8 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/022123, completed Jun. 9, 2014, dated Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/023762, Completed May 30, 2014, dated Jul. 3, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024903, completed Jun. 12, 2014, dated Jun. 27, 2014, 13 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/024947, Completed Jul. 8, 2014, dated Aug. 5, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/028447, completed Jun. 30, 2014, dated Jul. 21, 2014, 8 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/029052, completed Jun. 30, 2014, dated Jul. 24, 2014, 10 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/030692, completed Jul. 28, 2014, dated Aug. 27, 2014, 7 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/064693, Completed Mar. 7, 2015, dated Apr. 2, 2015, 15 pgs.
International Search Report and Written Opinion for International Application PCT/US2014/066229, Completed Mar. 6, 2015, dated Mar. 19, 2015, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US2014/067740, Completed Jan. 29, 2015, dated Mar. 3, 2015, 10 pgs.
Office Action for U.S. Appl. No. 12/952,106, dated Aug. 16, 2012, 12 pgs.
“Exchangeable image file format for digital still cameras: Exif Version 2.2”, Japan Electronics and Information Technology Industries Association, Prepared by Technical Standardization Committee on AV & IT Storage Systems and Equipment, JEITA CP-3451, Apr. 2002, Retrieved from: http://www.exif.org/Exif2-2.PDF, 154 pgs.
“File Formats Version 6”, Alias Systems, 2004, 40 pgs.
“Light fields and computational photography”, Stanford Computer Graphics Laboratory, Retrieved from: http://graphics.stanford.edu/projects/lightfield/, Earliest publication online: Feb. 10, 1997, 3 pgs.
Aufderheide et al., “A MEMS-based Smart Sensor System for Estimation of Camera Pose for Computer Vision Applications”, Research and Innovation Conference 2011, Jul. 29, 2011, pp. 1-10.
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183.
Barron et al., “Intrinsic Scene Properties from a Single RGB-D Image”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 17-24.
Bennett et al., “Multispectral Bilateral Video Fusion”, 2007 IEEE Transactions on Image Processing, vol. 16, No. 5, May 2007, published Apr. 16, 2007, pp. 1185-1194.
Bennett et al., “Multispectral Video Fusion”, Computer Graphics (ACM SIGGRAPH Proceedings), Jul. 25, 2006, published Jul. 30, 2006, 1 pg.
Bertalmio et al., “Image Inpainting”, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, ACM Pres/Addison-Wesley Publishing Co., pp. 417-424.
Bertero et al., “Super-resolution in computational imaging”, Micron, Jan. 1, 2003, vol. 34, Issues 6-7, 17 pgs.
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV Nov. 8, 2010, Part II, LNCS 6493, pp. 186-200.
Bishop et al., “Light Field Superresolution”, Computational Photography (ICCP), 2009 IEEE International Conference, Conference Date Apr. 16-17, published Jan. 26, 2009, 9 pgs.
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, published Aug. 18, 2011, pp. 972-986.
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs.
Borman et al., “Image Sequence Processing”, Dekker Encyclopedia of Optical Engineering, Oct. 14, 2002, 81 pgs.
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 28, 1998, vol. 3653, 10 pgs.
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, published Jul. 1, 2003, vol. 5016, 12 pgs.
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 21, 2004, vol. 5299, 12 pgs.
Borman et al., “Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models”, Proc. SPIE, Sep. 22, 1998, vol. 3459, 9 pgs.
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473.
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378.
Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, Aug. 2006, vol. 15, Issue 8, published Jul. 17, 2006, pp. 2239-2248.
Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE—IS&T Electronic Imaging, Feb. 3, 2009, vol. 7246, pp. 72460X-1-72460X-9; doi: 10.1117/12.810369.
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084.
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, May 13, 2010, 11 pgs.
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394.
Bryan et al., “Perspective Distortion from Interpersonal Distance is an Implicit Visual Cue for Social Judgments of Faces”, PLOS One, vol. 7, Issue 9, Sep. 26, 2012, e45301, doi:10.1371/journal.pone.0045301, 9 pages.
Capel, “Image Mosaicing and Super-resolution”, Retrieved on Nov. 10, 2012, Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, 2001, 269 pgs.
Carroll et al., “Image Warps for Artistic Perspective Manipulation”, ACM Transactions on Graphics (TOG), vol. 29, No. 4, Jul. 26, 2010, Article No. 127, 9 pgs.
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, Jan. 1, 2006, vol. 3, pp. 623-626.
Chan et al., “Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction”, IEEE, ICASSP, Jun. 19, 2006, pp. 1177-1180.
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim Syst Sign Process, published online Feb. 23, 2007, vol. 18, pp. 83-101.
Chen et al., “Image Matting with Local and Nonlocal Smooth Priors”, CVPR '13 Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23, 2013, pp. 1902-1907.
Chen et al., “Interactive deformation of light fields”, Symposium on Interactive 3D Graphics, 2005, pp. 139-146.
Chen et al., “KNN matting”, 2012 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 16-21, 2012, Providence, RI, USA, pp. 869-876.
Chen et al., “KNN Matting”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2013, vol. 35, No. 9, pp. 2175-2188.
Collins et al., “An Active Camera System for Acquiring Multi-View Video”, IEEE 2002 International Conference on Image Processing, Date of Conference: Sep. 22-25, 2002, Rochester, NY, 4 pgs.
Cooper et al., “The perceptual basis of common photographic practice”, Journal of Vision, vol. 12, No. 5, Article 8, May 25, 2012, pp. 1-14.
Crabb et al., “Real-time foreground segmentation via range and color imaging”, 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, Jun. 23-28, 2008, pp. 1-5.
Debevec et al., “Recovering High Dynamic Range Radiance Maps from Photographs”, Computer Graphics (ACM SIGGRAPH Proceedings), Aug. 16, 1997, 10 pgs.
Joshi et al., “Synthetic Aperture Tracking: Tracking Through Occlusions”, ICCV IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8.
Kang et al., “Handling Occlusions in Dense Multi-view Stereo”, Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110.
Kim et al., “Scene reconstruction from high spatio-angular resolution light fields”, ACM Transactions on Graphics (TOG)—SIGGRAPH 2013 Conference Proceedings, vol. 32 Issue 4, Article 73, Jul. 21, 2013, 11 pages.
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727.
Konolige, Kurt, “Projected Texture Stereo”, 2010 IEEE International Conference on Robotics and Automation, May 3-7, 2010, pp. 148-155.
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831.
Kubota et al., “Reconstructing Dense Light Field From Array of Multifocus Images for Novel View Synthesis”, IEEE Transactions on Image Processing, vol. 16, No. 1, Jan. 2007, pp. 269-279.
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Computer Vision and Pattern Recognition, Proceedings CVPR 94, Seattle, Washington, Jun. 21-23, 1994, 8 pgs.
Lai et al., “A Large-Scale Hierarchical Multi-View RGB-D Object Dataset”, Proceedings—IEEE International Conference on Robotics and Automation, Conference Date May 9-13, 2011, 8 pgs., DOI: 10.1109/ICRA.2011.5980382.
Lane et al., “A Survey of Mobile Phone Sensing”, IEEE Communications Magazine, vol. 48, Issue 9, Sep. 2010, pp. 140-150.
Lee et al., “Automatic Upright Adjustment of Photographs”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 877-884.
Lee et al., “Electroactive Polymer Actuator for Lens-Drive Unit in Auto-Focus Compact Camera Module”, ETRI Journal, vol. 31, No. 6, Dec. 2009, pp. 695-702.
Lee et al., “Nonlocal matting”, CVPR 2011, Jun. 20-25, 2011, pp. 2193-2200.
LensVector, “How LensVector Autofocus Works”, 2010, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg.
Levin et al., “A Closed Form Solution to Natural Image Matting”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 61-68, (2006).
Levin et al., “Spectral Matting”, 2007 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 17-22, 2007, Minneapolis, MN, USA, pp. 1-8.
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Sep. 1, 2006, vol. 39, Issue No. 8, pp. 46-55.
Levoy et al., “Light Field Rendering”, Proc. ACM SIGGRAPH '96, pp. 1-12.
Li et al., “A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution”, Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/˜jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014.
Li et al., “Fusing Images With Different Focuses Using Support Vector Machines”, IEEE Transactions on Neural Networks, vol. 15, No. 6, Nov. 8, 2004, pp. 1555-1561.
Lim, Jongwoo, “Optimized Projection Pattern Supplementing Stereo Systems”, 2009 IEEE International Conference on Robotics and Automation, May 12-17, 2009, pp. 2823-2829.
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120.
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10.
Martinez et al., “Simple Telemedicine for Developing Regions: Camera Phones and Paper-Based Microfluidic Devices for Real-Time, Off-Site Diagnosis”, Analytical Chemistry (American Chemical Society), vol. 80, No. 10, May 15, 2008, pp. 3699-3707.
McGuire et al., “Defocus video matting”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2005, vol. 24, Issue 3, Jul. 2005, pp. 567-576.
Merkle et al., “Adaptation and optimization of coding algorithms for mobile 3DTV”, Mobile3DTV Project No. 216503, Nov. 2008, 55 pgs.
Mitra et al., “Light Field Denoising, Light Field Superresolution and Stereo Camera Based Refocussing using a GMM Light Field Patch Prior”, Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on Jun. 16-21, 2012, pp. 22-28.
Moreno-Noguer et al., “Active Refocusing of Images and Videos”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Jul. 2007, 10 pages.
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs.
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 14, 2006, pp. 30-38.
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs.
Ng et al., “Light Field Photography with a Hand-held Plenoptic Camera”, Stanford Tech Report CTSR 2005-02, Apr. 20, 2005, pp. 1-11.
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378.
Nguyen et al., “Error Analysis for Image-Based Rendering with Depth Information”, IEEE Transactions on Image Processing, vol. 18, Issue 4, Apr. 2009, pp. 703-716.
Nguyen et al., “Image-Based Rendering with Depth Information Using the Propagation Algorithm”, Proceedings. (ICASSP '05). IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005, vol. 5, Mar. 23-23, 2005, pp. II-589-II-592.
Nishihara, H.K., “PRISM: A Practical Real-Time Imaging Stereo Matcher”, Massachusetts Institute of Technology, A.I. Memo 780, May 1984, 32 pgs.
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900.
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, Jun. 2007, 12 pgs.
Park et al., “Multispectral Imaging Using Multiplexed Illumination”, 2007 IEEE 11th International Conference on Computer Vision, Oct. 14-21, 2007, Rio de Janeiro, Brazil, pp. 1-8.
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36.
Parkkinen et al., “Characteristic Spectra of Munsell Colors”, Journal of the Optical Society of America A, vol. 6, Issue 2, Feb. 1989, pp. 318-322.
Perwass et al., “Single Lens 3D-Camera with Extended Depth-of-Field”, printed from www.raytrix.de, Jan. 22, 2012, 15 pgs.
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, Jul. 2008, pp. 1-19.
Philips 3D Solutions, “3D Interface Specifications, White Paper”, Feb. 15, 2008, 2005-2008 Philips Electronics Nederland B.V., Philips 3D Solutions retrieved from www.philips.com/3dsolutions, 29 pgs.
Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, printed Nov. 2, 2012 from http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html, 1 pg.
Pouydebasque et al., “Varifocal liquid lenses with integrated actuator, high focusing power and low operating voltage fabricated on 200 mm wafers”, Sensors and Actuators A: Physical, vol. 172, Issue 1, Dec. 2011, pp. 280-286.
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Dec. 2, 2008, vol. 18, No. 1, pp. 36-51.
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077.
Rajan et al., “Simultaneous Estimation of Super Resolved Scene and Depth Map from Low Resolution Defocused Observations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 9, Sep. 8, 2003, pp. 1-16.
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552.
Rhemann et al., “Fast Cost-Volume Filtering for Visual Correspondence and Beyond”, IEEE Trans. Pattern Anal. Mach. Intell., 2013, vol. 35, No. 2, pp. 504-511.
Rhemann et al., “A perceptually motivated online benchmark for image matting”, 2009 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 20-25, 2009, Miami, FL, USA, pp. 1826-1833.
Robert et al., “Dense Depth Map Reconstruction: A Minimization and Regularization Approach which Preserves Discontinuities”, European Conference on Computer Vision (ECCV), pp. 439-451, (1996).
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs.
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228.
Extended European Search Report for EP Application No. 11781313.9, Completed Oct. 1, 2013, dated Oct. 8, 2013, 6 pages.
Extended European Search Report for EP Application No. 13810429.4, Completed Jan. 7, 2016, dated Jan. 15, 2016, 6 Pgs.
Extended European Search Report for European Application EP12782935.6, completed Aug. 28, 2014, dated Sep. 4, 2014, 7 Pgs.
Extended European Search Report for European Application EP12804266.0, Report Completed Jan. 27, 2015, dated Feb. 3, 2015, 6 Pgs.
Extended European Search Report for European Application EP12835041.0, Report Completed Jan. 28, 2015, dated Feb. 4, 2015, 7 Pgs.
Extended European Search Report for European Application EP13751714.0, completed Aug. 5, 2015, dated Aug. 18, 2015, 8 Pgs.
Extended European Search Report for European Application EP13810229.8, Report Completed Apr. 14, 2016, dated Apr. 21, 2016, 7 pgs.
Extended European Search Report for European Application No. 13830945.5, Search completed Jun. 28, 2016, dated Jul. 7, 2016, 14 Pgs.
Extended European Search Report for European Application No. 13841613.6, Search completed Jul. 18, 2016, dated Jul. 26, 2016, 8 Pgs.
Extended European Search Report for European Application No. 14763087.5, Search completed Dec. 7, 2016, dated Dec. 19, 2016, 9 pgs.
Extended European Search Report for European Application No. 14860103.2, Search completed Feb. 23, 2017, dated Mar. 3, 2017, 7 Pgs.
Extended European Search Report for European Application No. 14865463.5, Search completed May 30, 2017, dated Jun. 8, 2017, 6 Pgs.
Extended European Search Report for European Application No. 15847754.7, Search completed Jan. 25, 2018, dated Feb. 9, 2018, 8 Pgs.
Extended European Search Report for European Application No. 18151530.5, Completed Mar. 28, 2018, dated Apr. 20, 2018, 11 pages.
Supplementary European Search Report for EP Application No. 13831768.0, Search completed May 18, 2016, dated May 30, 2016, 13 Pgs.
Supplementary European Search Report for European Application 09763194.9, completed Nov. 7, 2011, dated Nov. 29, 2011, 9 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2009/044687, Completed Jul. 30, 2010, 9 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/056151, Report dated Mar. 25, 2014, 9 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/056166, Report dated Mar. 25, 2014, dated Apr. 3, 2014, 8 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/058093, Report dated Sep. 18, 2013, dated Oct. 22, 2013, 40 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2012/059813, Search Completed Apr. 15, 2014, 7 pgs.
International Preliminary Report on Patentability for International Application No. PCT/US2013/059991, dated Mar. 17, 2015, dated Mar. 26, 2015, 8 pgs.
International Preliminary Report on Patentability for International Application PCT/US10/057661, dated May 22, 2012, dated May 31, 2012, 10 pages.
International Preliminary Report on Patentability for International Application PCT/US11/036349, Report dated Nov. 13, 2012, dated Nov. 22, 2012, 9 pages.
International Preliminary Report on Patentability for International Application PCT/US13/56065, dated Feb. 24, 2015, dated Mar. 5, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2011/064921, dated Jun. 18, 2013, dated Jun. 27, 2013, 14 pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/024987, dated Aug. 12, 2014, 13 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/027146, completed Aug. 26, 2014, dated Sep. 4, 2014, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/039155, completed Nov. 4, 2014, dated Nov. 13, 2014, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/046002, dated Dec. 31, 2014, dated Jan. 8, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/048772, dated Dec. 31, 2014, dated Jan. 8, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/056502, dated Feb. 24, 2015, dated Mar. 5, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2013/069932, dated May 19, 2015, dated May 28, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/017766, dated Aug. 25, 2015, dated Sep. 3, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/018084, dated Aug. 25, 2015, dated Sep. 3, 2015, 11 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/018116, dated Sep. 15, 2015, dated Sep. 24, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/021439, dated Sep. 15, 2015, dated Sep. 24, 2015, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022118, dated Sep. 8, 2015, dated Sep. 17, 2015, 4 pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022123, dated Sep. 8, 2015, dated Sep. 17, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/022774, dated Sep. 22, 2015, dated Oct. 1, 2015, 5 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/023762, dated Mar. 2, 2015, dated Mar. 9, 2015, 10 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024407, dated Sep. 15, 2015, dated Sep. 24, 2015, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024903, dated Sep. 15, 2015, dated Sep. 24, 2015, 12 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/024947, dated Sep. 15, 2015, dated Sep. 24, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/025100, dated Sep. 15, 2015, dated Sep. 24, 2015, 4 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/025904, dated Sep. 15, 2015, dated Sep. 24, 2015, 5 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/028447, dated Sep. 15, 2015, dated Sep. 24, 2015, 7 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/029052, dated Sep. 15, 2015, dated Sep. 24, 2015, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/030692, dated Sep. 15, 2015, dated Sep. 24, 2015, 6 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/064693, dated May 10, 2016, dated May 19, 2016, 14 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/066229, dated May 24, 2016, dated Jun. 2, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2014/067740, dated May 31, 2016, dated Jun. 9, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2015/019529, dated Sep. 13, 2016, dated Sep. 22, 2016, 9 Pgs.
International Preliminary Report on Patentability for International Application PCT/US2015/053013, dated Apr. 4, 2017, dated Apr. 13, 2017, 8 Pgs.
International Preliminary Report on Patentability for International Application PCT/US13/62720, dated Mar. 31, 2015, dated Apr. 9, 2015, 8 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/46002, completed Nov. 13, 2013, dated Nov. 29, 2013, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/56065, Completed Nov. 25, 2013, dated Nov. 26, 2013, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US13/59991, Completed Feb. 6, 2014, dated Feb. 26, 2014, 8 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2012/056166, Report Completed Nov. 10, 2012, dated Nov. 20, 2012, 9 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/024987, Completed Mar. 27, 2013, dated Apr. 15, 2013, 14 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/027146, completed Apr. 2, 2013, 11 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/039155, completed Jul. 1, 2013, dated Jul. 11, 2013, 11 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/048772, Completed Oct. 21, 2013, dated Nov. 8, 2013, 6 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/056502, Completed Feb. 18, 2014, dated Mar. 19, 2014, 7 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2013/069932, Completed Mar. 14, 2014, dated Apr. 14, 2014, 12 pgs.
International Search Report and Written Opinion for International Application No. PCT/US2015/019529, completed May 5, 2015, dated Jun. 8, 2015, 11 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US2015/053013, completed Dec. 1, 2015, dated Dec. 30, 2015, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US11/36349, dated Aug. 22, 2011, 11 pgs.
International Search Report and Written Opinion for International Application PCT/US13/62720, completed Mar. 25, 2014, dated Apr. 21, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/17766, completed May 28, 2014, dated Jun. 18, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/18084, completed May 23, 2014, dated Jun. 10, 2014, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US14/18116, Report completed May 13, 2014, 12 pgs.
International Search Report and Written Opinion for International Application PCT/US14/21439, completed Jun. 5, 2014, dated Jun. 20, 2014, 10 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/22118, completed Jun. 9, 2014, dated Jun. 25, 2014, 5 pgs.
International Search Report and Written Opinion for International Application PCT/US14/22774 report completed Jun. 9, 2014, dated Jul. 14, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/24407, report completed Jun. 11, 2014, dated Jul. 8, 2014, 9 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/25100, report completed Jul. 7, 2014, dated Aug. 7, 2014, 5 Pgs.
International Search Report and Written Opinion for International Application PCT/US14/25904, report completed Jun. 10, 2014, dated Jul. 10, 2014, 6 Pgs.
International Search Report and Written Opinion for International Application PCT/US2009/044687, completed Jan. 5, 2010, dated Jan. 13, 2010, 9 pgs.
International Search Report and Written Opinion for International Application PCT/US2010/057661, completed Mar. 9, 2011, 14 pgs.
International Search Report and Written Opinion for International Application PCT/US2011/064921, completed Feb. 25, 2011, dated Mar. 6, 2012, 17 pgs.
Do, Minh N. “Immersive Visual Communication with Depth”, Presented at Microsoft Research, Jun. 15, 2011, Retrieved from: http://minhdo.ece.illinois.edu/talks/ImmersiveComm.pdf, 42 pgs.
Do et al., “Immersive Visual Communication”, IEEE Signal Processing Magazine, vol. 28, Issue 1, Jan. 2011, DOI: 10.1109/MSP.2010.939075, Retrieved from: http://minhdo.ece.illinois.edu/publications/ImmerComm_SPM.pdf, pp. 58-66.
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), Ottawa, Ontario, Canada, Jun. 13-16, 2005, pp. 540-547.
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, Jun. 20-25, 2005, pp. 351-358.
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 5, 2006, vol. 83, Issue 3, 8 pgs.
Drulea et al., “Motion Estimation Using the Correlation Transform”, IEEE Transactions on Image Processing, Aug. 2013, vol. 22, No. 8, pp. 3260-3270, first published May 14, 2013.
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310.
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, Nov. 21, 2008, vol. 3, pp. 1-6.
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 24, 2004, pp. 89-100.
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551.
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, Apr. 6, 2006, vol. 1, pp. R1-R16.
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, Oct. 17, 2005, pp. 59622A-1-59622A-12.
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs.
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903.
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Proc. SPIE 5301, Jan. 2004, pp. 25-33.
Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, Apr. 21, 2006, vol. 6196, pp. 619607-1-619607-15.
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418.
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956.
Duparre et al., “Ultra-Thin Camera Based on Artificial Apposition Compound Eyes”, 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs.
Eng et al., “Gaze correction for 3D tele-immersive communication system”, 2013 IEEE 11th IVMSP Workshop, IEEE, Jun. 10, 2013.
Fanaswala, “Regularized Super-Resolution of Multi-View Images”, Retrieved on Nov. 10, 2012 from the Internet at URL:<http://www.site.uottawa.ca/-edubois/theses/Fanaswala_thesis.pdf>, 2009, 163 pgs.
Fang et al., “Volume Morphing Methods for Landmark Based 3D Image Deformation”, SPIE vol. 2710, Proc. 1996 SPIE Intl Symposium on Medical Imaging, Newport Beach, CA, Feb. 10, 1996, pp. 404-415.
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, Feb. 2, 2006, vol. 6069, 8 pgs.
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, Aug. 12, 2004, vol. 14, pp. 47-57.
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, published Sep. 3, 2004, vol. 13, No. 10, pp. 1327-1344.
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, date of publication Dec. 12, 2005, pp. 141-159.
Fecker et al., “Depth Map Compression for Unstructured Lumigraph Rendering”, Proc. SPIE 6077, Proceedings Visual Communications and Image Processing 2006, Jan. 18, 2006, pp. 60770B-1-60770B-8.
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans. on PAMI, 2006, 31 pgs.
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284.
Fife et al., “A 3MPixel Multi-Aperture Image Sensor with 0.7μm Pixels in 0.11μm CMOS”, ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50.
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 191-198.
Fischer et al., “Optical System Design”, 2nd Edition, SPIE Press, Feb. 14, 2008, pp. 49-58.
Gastal et al., “Shared Sampling for Real-Time Alpha Matting”, Computer Graphics Forum, EUROGRAPHICS 2010, vol. 29, Issue 2, May 2010, pp. 575-584.
Georgiev et al., “Light Field Camera Design for Integral View Photography”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs.
Georgiev et al., “Light-Field Capture by Multiplexing in the Frequency Domain”, Adobe Systems Incorporated, Adobe Technical Report, 2003, 13 pgs.
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, Oct. 19-22, 2008, Monterey CA, USA, pp. 3-12.
Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, published Aug. 1, 1996, pp. 43-54.
Gupta et al., “Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images”, 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 23-28, 2013, Portland, OR, USA, pp. 564-571.
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, vol. 30, No. 4, Aug. 7, 2011, 9 pgs.
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs.
Hardie, “A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter”, IEEE Transactions on Image Processing, Dec. 2007, published Nov. 19, 2007, vol. 16, No. 12, pp. 2953-2964.
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, 2010 International Conference: Computational Photography (ICCP) Mar. 2010, pp. 1-8.
Hernandez-Lopez et al., “Detecting objects using color and depth segmentation with Kinect sensor”, Procedia Technology, vol. 3, Jan. 1, 2012, pp. 196-204, XP055307680, ISSN: 2212-0173, DOI: 10.1016/j.protcy.2012.03.021.
Holoeye Photonics AG, “LC 2012 Spatial Light Modulator (transmissive)”, Sep. 18, 2013, retrieved from https://web.archive.org/web/20130918151716/http://holoeye.com/spatial-light-modulators/lc-2012-spatial-light-modulator/ on Oct. 20, 2017, 3 pages.
Holoeye Photonics AG, “Spatial Light Modulators”, Oct. 2, 2013, Brochure retrieved from https://web.archive.org/web/20131002061028/http://holoeye.com/wp-content/uploads/Spatial_Light_Modulators.pdf on Oct. 13, 2017, 4 pgs.
Holoeye Photonics AG, “Spatial Light Modulators”, Sep. 18, 2013, retrieved from https://web.archive.org/web/20130918113140/http://holoeye.com/spatial-light-modulators/ on Oct. 13, 2017, 4 pages.
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, Jan. 29, 2010, vol. 3, pp. 022501-1-022501-3.
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, Oct. 13, 2011, vol. 4, pp. 112501-1-112501-3.
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D, Jan. 1, 2007, pp. 121-128.
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, pp. 297-306.
Izadi et al., “KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera”, UIST'11, Oct. 16-19, 2011, Santa Barbara, CA, pp. 559-568.
Janoch et al., “A category-level 3-D object dataset: Putting the Kinect to work”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Nov. 6-13, 2011, Barcelona, Spain, pp. 1168-1174.
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80.
Jiang et al., “Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints”, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, Jun. 17-22, 2006, New York, NY, USA, pp. 371-378.
Joshi, Neel S. “Color Calibration for Arrays of Inexpensive Image Sensors”, Master's with Distinction in Research Report, Stanford University, Department of Computer Science, Mar. 2004, 30 pgs.
Related Publications (1)
  Number: 20190037150 A1; Date: Jan. 2019; Country: US
Provisional Applications (1)
  Number: 61949999; Date: Mar. 2014; Country: US
Continuations (1)
  Parent: 14642637; Date: Mar. 2015; Country: US
  Child: 16148816; Country: US