The present invention relates to stereo imaging with camera arrays, and in particular to extending the core capabilities of a single monolithic light field camera array.
An image captured by a digital camera provides some sense of the location of objects in a scene and the location of one object relative to another. Without information in a third dimension (depth), it can be difficult to draw firm conclusions about object locations or to make linear measurements.
A legacy stereo camera is a type of camera with two or more lenses with a separate image sensor for each lens. This allows the camera to simulate human binocular vision and the ability to capture three-dimensional (stereo) images. A legacy stereo camera has some ability to determine depth of objects in its field of view when the baseline, or distance, between the two cameras is known.
An array camera includes a plurality of individual imagers (i.e., cameras) that can capture images of a scene where the image obtained by each imager is from a slightly different viewpoint. The diversity of information between viewpoints can be used to calculate depth information. The depth calculations in an array camera are more sophisticated than in a stereo camera because additional combinations of images, from different cameras in the array, can be compared and correlated to make the estimates more robust in the presence of noise and aliasing. An array camera system typically still has reduced precision in determining depth beyond a certain distance from the camera because the disparity on which the depth calculation relies becomes smaller the farther an object is from the camera.
Systems and methods for stereo imaging with camera arrays in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating depth information for an object in a scene using two or more array cameras that each include a plurality of imagers where each imager captures an image of the scene includes obtaining a first set of image data including image data of a scene captured from a first set of different viewpoints, identifying an object of interest in the first set of image data, determining a first depth measurement for the object of interest using at least a portion of the first set of image data, determining whether the first depth measurement for the object of interest is above a predetermined threshold, and when the depth for the object of interest is above the predetermined threshold: obtaining a second set of image data including image data of the same scene from a second set of different viewpoints located known distances from at least one viewpoint in the first set of different viewpoints, identifying the object of interest in the second set of image data, and determining a second depth measurement for the object of interest using at least a portion of the first set of image data and at least a portion of the second set of image data.
In a further embodiment, obtaining a first set of image data including image data of a scene captured from a first set of different viewpoints includes capturing a first set of image data using a first plurality of imagers in a first array.
Another embodiment also includes determining image capture settings for active imagers in the first array.
In a still further embodiment, determining image capture settings for active imagers in the first array includes calibrating for nonlinearities in the lenses of a plurality of the first plurality of imagers.
In still another embodiment, one of the imagers is designated as a reference camera and captures image data from a reference viewpoint.
In a yet further embodiment, identifying an object of interest in the first set of image data includes generating a preview image, presenting the preview image via a screen, and capturing user input.
In yet another embodiment, identifying an object of interest in the first set of image data includes detecting motion in an area of the scene.
In a further embodiment again, identifying an object of interest in the first set of image data includes detecting an object in a designated region of interest.
In another embodiment again, identifying an object of interest in the first set of image data includes selecting one or more pixels and associating the one or more pixels with the object.
In a further additional embodiment, obtaining a second set of image data including image data of the same scene from a second set of different viewpoints located known distances from at least one viewpoint in the first set of different viewpoints includes capturing a second set of image data using a second plurality of imagers in a second array.
In another additional embodiment, the second array is the first array placed in a different location from the location in which the first array captured the first set of image data.
A still yet further embodiment also includes estimating the baseline distance between the two arrays by cross-correlating one or more sets of corresponding pixels from the first set of image data and the second set of image data.
In still yet another embodiment, determining a first depth measurement for the object of interest using at least a portion of the first set of image data includes determining the disparity between the location of a pixel in one image in the first set of image data and a corresponding pixel in a second image in the first set of image data.
A still further embodiment again also includes calculating a confidence measure for the depth of the object of interest.
A still another embodiment again also includes determining if the object of interest is visible in the second set of image data and identifying the object of interest in the second set of image data when the object of interest is visible in the second set of image data.
In a still further additional embodiment, determining a second depth measurement for the object of interest using at least a portion of the first set of image data and at least a portion of the second set of image data includes determining the disparity between a first pixel associated with the object of interest in at least one image in the first set of image data and a corresponding second pixel in at least one image in the second set of image data.
In still another additional embodiment, determining a second depth measurement for the object of interest using at least a portion of the first set of image data and at least a portion of the second set of image data includes calculating the disparity between the location of a pixel in one image in the first set of image data and a corresponding pixel in a second image in the second set of image data.
In a yet further embodiment again, determining a second depth measurement for the object of interest using at least a portion of the first set of image data and at least a portion of the second set of image data includes utilizing the first depth measurement for the object of interest.
In yet another embodiment again, a method for calculating the speed of an object in a scene using one or more array cameras that each include a plurality of imagers where each imager captures an image of the scene includes obtaining a first set of image data including image data of a scene captured from a first set of different viewpoints, identifying an object of interest in the first set of image data, determining a first depth measurement and a first angular measurement for the object of interest using at least a portion of the first set of image data, determining a first location of the object of interest using at least the first depth measurement and first angular measurement, obtaining a second set of image data including a second image data of a scene captured from a second set of different viewpoints at a time t after the first set of image data was captured, identifying the object of interest in the second set of image data, determining a second depth measurement and a second angular measurement for the object of interest using at least a portion of the second set of image data, determining a second location of the object of interest using at least the second depth measurement and second angular measurement, calculating a speed for the object of interest using at least the first location and the second location of the object of interest.
In a yet further additional embodiment, obtaining a first set of image data including image data of a scene captured from a first set of different viewpoints includes capturing a first set of image data using a first plurality of imagers in a first array, and obtaining a second set of image data including a second image data of a scene captured from a second set of different viewpoints at a time t after the first set of image data was captured includes capturing a second set of image data using a second plurality of imagers at a time t after the first set of image data was captured.
In yet another additional embodiment, the second plurality of imagers is the same as the first plurality of imagers and the second set of different viewpoints is the same as the first set of different viewpoints.
In a further additional embodiment again, the second plurality of imagers is selected from a second array that is different from the first array.
Another additional embodiment again also includes determining image capture settings for active imagers in the first array.
In a still yet further embodiment again, identifying an object of interest in the first set of image data includes generating a preview image, presenting the preview image via a screen, and capturing user input.
In still yet another embodiment again, identifying an object of interest in the first set of image data includes detecting motion in an area of the scene.
In a still yet further additional embodiment, identifying an object of interest in the first set of image data includes detecting an object in a designated region of interest.
In still yet another additional embodiment, identifying an object of interest in the first set of image data includes selecting one or more pixels and associating the one or more pixels with the object.
In a yet further additional embodiment again, capturing a first set of image data using a first plurality of imagers in a first array includes capturing a first set of image data at a first location, capturing a second set of image data using a second plurality of imagers at a time t after the first set of image data was captured includes capturing a second set of image data in a second location at a time t after the first set of image data was captured, and determining a second location of the object of interest using at least the second depth measurement and second angular measurement includes determining the change from the first location to the second location.
In yet another additional embodiment again, capturing a first set of image data using a first plurality of imagers in a first array includes determining a first orientation of the first array, capturing a second set of image data using a second plurality of imagers at a time t after the first set of image data was captured includes determining a second orientation of the first array at a time t after the first set of image data was captured, and determining a second location of the object of interest using at least the second depth measurement and second angular measurement includes determining the change from the first orientation to the second orientation.
A still yet further additional embodiment again also includes calculating a confidence measure of the speed of the object of interest.
In still yet another additional embodiment again, determining a second depth measurement and a second angular measurement for the object of interest using at least a portion of the second set of image data includes determining a second depth measurement for the object of interest using at least a portion of the first set of image data and at least a portion of the second set of image data.
Turning now to the drawings, systems and methods for measuring distance and speed in accordance with embodiments of the invention are illustrated. In many embodiments of the invention, one or more array cameras are utilized to capture image data of a scene from slightly different viewpoints. The diversity of information acquired from different viewpoints can be used to calculate depth of objects in the scene and a depth map that includes a set of depth information for a scene. In many embodiments of the invention, a stereo array camera composed of two array cameras provides depth information that can be used to calculate depth with greater accuracy at distances farther from the camera than can a single array camera. In other embodiments, a stereo array camera is composed of an array camera and a legacy camera (i.e. having a single lens and single image sensor/imager).
Array cameras including camera modules that can be utilized to capture image data from different viewpoints are disclosed in U.S. patent application Ser. No. 12/935,504, entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers”, filed May 20, 2009, the disclosure of which is incorporated by reference herein in its entirety. Array cameras offer a number of advantages and features over legacy cameras. An array camera typically contains two or more imagers (which can be referred to as cameras), each of which receives light through a separate lens system. The imagers operate to capture image data of a scene from slightly different viewpoints. Array cameras have a variety of applications, including capturing image data from multiple viewpoints that can be used in super-resolution processing and depth calculation.
Depth of an object of interest can be calculated by observing the disparity, or difference, in the location of corresponding pixels making up the object (pixels that capture the same content in a scene) in the images from two different cameras. As will be discussed further below, the baseline distance between the cameras, back focal length of the cameras, and disparity are factors in determining depth. The accuracy of a depth measurement is reduced with further distance from the camera because the disparity observed between the cameras' images for a given object decreases with increasing distance. Eventually, the disparity becomes smaller than a certain threshold sub-pixel amount for the given pixel size, and the resolution of depth measurement thus becomes more coarsely quantized with greater depth. Because increasing the baseline increases disparity, the accuracy of depth measurement can be increased accordingly by increasing the baseline. In many embodiments of the invention, a stereo array camera configuration includes two array cameras mounted a fixed distance apart. The fixed distance is greater than the distance between the cameras in a single array, and the stereo array camera can therefore provide greater accuracy when making depth estimates than a single array.
The depth and two-dimensional location of an object of interest (such as from an image captured by an array camera) can be used to locate the object in three-dimensional space. Given two sets of three-dimensional coordinates of an object and the time elapsed, the object's speed and direction can be calculated to within a certain accuracy depending on the accuracy of the depth estimates in the two measurements. In several embodiments of the invention, speed and direction are calculated for an object from depth and two-dimensional location information determined using an array camera or stereo array camera. Array camera architectures that can be utilized for depth and speed measurements in accordance with embodiments of the invention are discussed below.
Array Camera Architecture
An array camera architecture that can be used in a variety of array camera configurations in accordance with embodiments of the invention is illustrated in
Although a specific architecture is illustrated in
Stereo Array Cameras
In many embodiments of the invention, two array cameras mounted a fixed distance apart form a pair of stereo array cameras. In other embodiments, an array camera and a legacy camera form a stereo array camera. Each array camera can be of an architecture as described above with respect to
Distance Measurement Using Array Cameras
Images of a scene captured by different cameras in an array camera have differences due to the different points of view resulting from the different locations of the cameras, an effect known as parallax. These differences, referred to as disparity, provide information that can be used to measure depth of objects within a scene. Systems and methods for detecting disparity and calculating depth maps for an image are discussed in U.S. Patent Application Ser. No. 61/691,666 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras” to Venkataraman et al., filed Aug. 21, 2012, the disclosure of which is incorporated by reference herein in its entirety.
Parallax in a two camera system is illustrated in
U.S. Patent Application Ser. No. 61/691,666, incorporated by reference above, discusses depth measurement using the following relationship between disparity and depth.
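The referenced application's equation is not reproduced verbatim in this text; based on the proportionalities described in the following paragraph, the standard pinhole-stereo relationship can be sketched as follows, where d, B, f, and z are this sketch's notation for disparity, baseline, back focal length, and object distance:

```latex
% Disparity-depth relationship for two cameras separated by a baseline B
% (a reconstruction consistent with the proportionalities stated below,
% not a verbatim reproduction of the referenced application's equation):
%   d : disparity of a scene point between the two images
%   B : baseline separation between the two cameras
%   f : back focal length of the cameras
%   z : distance (depth) from the cameras to the object
\[
  d = \frac{B\,f}{z}
  \qquad\Longleftrightarrow\qquad
  z = \frac{B\,f}{d}
\]
```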
From the above equation and figure, it can be seen that disparity between images captured by the cameras is along a vector in the direction of the baseline of the two cameras, which can be referred to as the epipolar line between the two cameras. Furthermore, the magnitude of the disparity is directly proportional to the baseline separation of the two cameras and the back focal length of the cameras and is inversely proportional to the distance from the camera to an object appearing in the scene. The distance (or depth) from the two cameras to the foreground object can be obtained by determining the disparity of the foreground object in the two captured images.
One method of determining depth of a pixel or object using images captured by an array camera involves selecting an initial hypothesized depth or distance for a selected pixel from an image captured from a reference viewpoint/camera, and searching pixel locations in other images along the epipolar line between the reference viewpoint/camera and the camera capturing each of the other images for similar/matching pixels. This process is discussed in the patent application incorporated by reference above, and can be modified to utilize two array cameras set farther apart than the cameras in a single array to determine depth to a higher precision as will be discussed further below.
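A minimal sketch of such a hypothesized-depth search is shown below. It assumes rectified images, a simple sum-of-absolute-differences cost over a small window, and the pinhole relationship sketched above; the function name, parameters, and cost metric are this sketch's choices and not the referenced application's implementation.

```python
import numpy as np

def estimate_depth(ref_img, alt_imgs, baselines, focal_px, px, py,
                   depth_candidates, window=3):
    """Pick the hypothesized depth whose predicted matches look most similar.

    ref_img          : 2D array, image from the reference camera
    alt_imgs         : list of 2D arrays, images from the other cameras
    baselines        : list of (bx, by) baselines (meters) to each camera,
                       defining the epipolar direction in the rectified images
    focal_px         : back focal length expressed in pixels
    (px, py)         : pixel of interest in the reference image
    depth_candidates : iterable of hypothesized depths (meters)
    """
    half = window // 2
    ref_patch = ref_img[py - half:py + half + 1, px - half:px + half + 1]
    best_depth, best_cost = None, np.inf
    for z in depth_candidates:
        cost = 0.0
        for img, (bx, by) in zip(alt_imgs, baselines):
            # Disparity predicted by this depth hypothesis, along the
            # epipolar line between the reference camera and this camera.
            dx = int(round(focal_px * bx / z))
            dy = int(round(focal_px * by / z))
            qx, qy = px + dx, py + dy
            patch = img[qy - half:qy + half + 1, qx - half:qx + half + 1]
            if patch.shape != ref_patch.shape:   # hypothesis fell off the image
                cost = np.inf
                break
            cost += np.abs(patch.astype(float) - ref_patch.astype(float)).sum()
        if cost < best_cost:
            best_depth, best_cost = z, cost
    return best_depth
```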
Techniques such as those disclosed in the patent application incorporated above are typically used to generate a depth map from a reference viewpoint. The reference viewpoint can be from the viewpoint of one of the cameras in a camera array. Alternatively, the reference viewpoint can be an arbitrary virtual viewpoint. A depth map indicates the distance of the surfaces of scene objects from a reference viewpoint. Although a process for calculating depth using disparity is discussed above, any of a variety of techniques for calculating depth can be utilized in accordance with embodiments of the invention. Processes for depth measurement using stereo array cameras are discussed below.
Enhanced Distance Measurement Using Stereo Array Cameras
The closer that an object is to an array camera, the larger the disparity that will be observed in the object's location in different images captured by different cameras in the array. A representative graph of object distance with observed disparity is illustrated in
The further a camera is from the reference viewpoint, the larger the disparity that will be observed. Typically, larger shifts enable depth to be determined with greater precision. Increasing the baseline (distance between cameras) increases the observed disparity accordingly. Therefore, using a camera that captures an image from a reference viewpoint together with the cameras that are furthest from that camera to determine depth information can improve precision.
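As a rough numerical illustration of this point (all values below are assumptions chosen for the example, not figures from the text), the following snippet compares how much depth corresponds to a one-pixel change in disparity for a short within-array baseline and a longer between-array baseline:

```python
# Illustrative only: depth change per pixel of disparity for two baselines.
focal_px = 1500.0          # assumed back focal length, in pixels
depth = 10.0               # assumed object distance, in meters

for baseline in (0.01, 0.15):                        # 1 cm vs 15 cm baseline
    disparity = focal_px * baseline / depth          # disparity in pixels
    depth_at_next_pixel = focal_px * baseline / (disparity + 1.0)
    step = depth - depth_at_next_pixel               # depth per 1-px change
    print(f"baseline {baseline * 100:4.0f} cm: disparity {disparity:6.2f} px, "
          f"~{step:.2f} m of depth per pixel of disparity")
```

With these assumed numbers, the 1 cm baseline yields roughly 4 m of depth ambiguity per pixel of disparity at 10 m, while the 15 cm baseline yields well under half a meter, which is the effect the longer stereo-array baseline exploits.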
In many embodiments of the invention, two array cameras are set apart at a known distance in a stereo array camera configuration and image data from the two array cameras are used to generate depth information for an object observed by the cameras. In other embodiments, a stereo array camera includes an array camera and a legacy camera located a known distance from each other. A process for measuring depth using stereo array cameras in accordance with embodiments of the invention is illustrated in
A first set of image data is captured (320) using active cameras in the first array. Typically, each camera collects image data that can be used to form an image from the point of view of the camera. In array cameras, one camera is often designated as the reference camera and the image data captured by that camera is referred to as being captured from a reference viewpoint. In many embodiments of the invention, image data that is captured includes image data from a reference camera. In several embodiments, the active imagers capturing the image data are configured with color filters or other mechanisms to limit the spectral band of light captured. The spectral band can be (but is not limited to) red, blue, green, infrared, or extended color. Extended color is a band that includes at least a portion of the bands of wavelengths of at least two colors. Systems and methods for capturing and utilizing extended color are disclosed in U.S. Patent Application No. 61/798,602 and Ser. No. 14/145,734 incorporated by reference above.
An object of interest is identified (325) in the first set of image data. The identification can be based upon a variety of techniques that include, but are not limited to: user input (e.g., selection on a screen), motion activation, shape recognition, and region(s) of interest. The identification can be made in an image generated from the first set of image data from the cameras in the first array. For example, the object of interest can be indicated in a preview image generated from the first set of image data or in a reference image from a reference viewpoint that corresponds to a reference camera in the first array. The identification can include selection of a pixel or set of pixels within the image associated with the object.
Using the first set of image data, a depth is determined (330) for the object. Techniques for determining the depth of the object can include those disclosed in U.S. Patent Application Ser. No. 61/691,666 incorporated by reference and discussed further above. The effects of noise can be reduced by binning or averaging corresponding pixels across images captured by different cameras utilizing techniques such as, but not limited to, those disclosed in U.S. Patent Application Ser. No. 61/783,441, filed Mar. 14, 2013, entitled “Systems and Methods for Reducing Motion Blur in Images or Video in Ultra Low Light with Array Cameras” to Molina and P.C.T. patent application Ser. No. 14/025,100, filed Mar. 12, 2014, entitled “Systems and Methods for Reducing Motion Blur in Images or Video in Ultra Low Light with Array Cameras” to Molina, the disclosures of which are hereby incorporated by reference in their entirety. In several embodiments of the invention, intermediate images can be formed with pixel values in locations in each image where the pixel values are binned or averaged from corresponding pixels in different images. The intermediate images, which have noise components “averaged out,” can then be used in depth calculation.
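The following sketch illustrates one simple way such averaging could be done, assuming the images have already been shifted so that the pixels being averaged correspond to the same scene content; it is a generic noise-reduction illustration, not the specific processes of the incorporated applications.

```python
import numpy as np

def average_corresponding_pixels(registered_images):
    """Average a stack of images whose pixels already correspond.

    registered_images : list of numpy arrays of identical shape, one per
                        camera, already shifted so that the same scene
                        content falls on the same pixel locations.
    Returns an intermediate image in which independent noise components are
    reduced by roughly sqrt(N) for N contributing images.
    """
    stack = np.stack([img.astype(np.float32) for img in registered_images])
    return stack.mean(axis=0)
```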
If the disparity of the object is above a predetermined threshold (340), i.e., the object is within a predetermined distance of the first array, the depth calculated above (330) is accepted as the depth of the object (350). A confidence measure can be given that is based on factors such as lens calibration and/or pixel resolution (the width that a pixel represents based on distance from the camera). The confidence measure can also incorporate information from a confidence map that indicates the reliability of depth measurements for specific pixels as disclosed in U.S. Patent Application Ser. No. 61/691,666 incorporated by reference above.
If the disparity of the object is below the predetermined threshold (340), then the depth measurement of the object is refined using a second set of image data from camera(s) in a second array. In some embodiments, the second array is instead a legacy camera. As discussed further above, a longer baseline between cameras increases disparity and can therefore provide increased depth precision at greater distances.
A second set of image data is captured (355) using at least one camera in the second array (or legacy camera). The object of interest is identified (370) in the second set of image data based upon a variety of techniques that can include those discussed above with respect to identifying the object in the first set of image data or other tracking techniques known in the art. If the system does not assume that the object of interest is visible to the second array, it can first determine (360) if the object is visible to at least one camera in the second array. Visibility can be determined, for example, by searching for similar pixels as discussed with respect to
A depth measurement is performed (380) on the object using at least a portion of the first set of image data and at least a portion of the second set of image data. The measurement can include determining the disparity between pixel(s) associated with the object of interest in images captured by one or more cameras in the first array and corresponding pixel(s) in images captured by one or more cameras in the second array. In some embodiments, the second array is instead a legacy camera that captures a single image. The single image can similarly be used as a second set of image data to determine disparity so long as pixel correspondences can be found between pixels in the first set of image data and the second set of image data.
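Putting the numbered steps above together, a high-level sketch of the two-stage measurement might look like the following; the callables passed in (capture, find_object, depth_from_disparity, visible_in, cross_array_depth) are hypothetical stand-ins for the operations described in the text rather than functions defined by it.

```python
def measure_depth(first_array, second_array, disparity_threshold,
                  capture, find_object, depth_from_disparity,
                  visible_in, cross_array_depth):
    """Two-stage depth measurement following the process described above."""
    first_set = capture(first_array)                            # step 320
    obj = find_object(first_set)                                # step 325
    depth, disparity = depth_from_disparity(first_set, obj)     # step 330

    # A large disparity means the object is within the predetermined
    # distance, so the single-array depth is accepted (steps 340/350).
    if disparity >= disparity_threshold:
        return depth

    # Otherwise refine using the second array (or legacy camera), whose
    # longer baseline yields a larger disparity (steps 355-380).
    second_set = capture(second_array)
    if not visible_in(second_set, obj):                          # step 360
        return depth
    obj2 = find_object(second_set)                               # step 370
    return cross_array_depth(first_set, second_set, obj, obj2)   # step 380
```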
Although specific processes are described above for obtaining depth measurements using multiple array cameras, any of a variety of combinations of two or more array cameras can be utilized to obtain depth measurements based upon the disparity observed between image data captured by cameras within the two arrays, as appropriate to the requirements of specific applications in accordance with embodiments of the invention.
A stereo array camera configuration can be formed in an ad hoc manner using one array camera and changing the position of the array camera. In many embodiments of the invention, an ad hoc stereo array camera is formed by an array camera capturing an image of a scene in one position, being moved to a second position, and capturing a second image in the second position. The two images captured in this way can form an ad hoc stereo pair of images. By matching features between the two images and combining the matches with readings from internal sensors such as a gyroscope and/or accelerometer, the camera extrinsics (such as camera center of projection and camera viewing direction) can be determined.
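One way such extrinsics could be estimated from matched features is with standard two-view geometry routines, sketched below using OpenCV. The essential-matrix decomposition yields the relative rotation and only the direction of the translation; fixing the actual baseline length would require the inertial sensor data or an object of known depth. This is a generic sketch under those assumptions, not the implementation described in the text.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts1, pts2, K):
    """Estimate relative camera pose from matched feature points.

    pts1, pts2 : Nx2 float arrays of corresponding feature locations in the
                 images captured at the first and second camera positions.
    K          : 3x3 camera intrinsic matrix.
    Returns (R, t_dir): rotation and unit translation direction; the
    translation scale (the actual baseline length) is not recoverable from
    the images alone.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t / np.linalg.norm(t)
```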
Unified Parallax Computation
A stereo array camera provides additional optimization possibilities in computing parallax disparities as compared to a single array camera. Parallax calculations can be performed using processes such as those disclosed in U.S. Provisional Patent Application Ser. No. 61/691,666 incorporated by reference above. As discussed above with respect to certain embodiments of the invention, parallax calculations can be performed to compute depths using the cameras in a first array in the stereo array camera. In many embodiments, information calculated using the first array can be used to accelerate calculation of depths with the second array in the stereo array camera. For example, in many processes for calculating depth, images are sampled for similar pixels to determine disparity as discussed in U.S. Provisional Patent Application Ser. No. 61/691,666. When pixels and/or objects have a depth that was already calculated by a first array, the search for similar pixels in the second array can use the depth information for the same pixel/object as a starting point and/or to limit the search to the “expected” portions of the image as predicted by the existing depth information. In several embodiments, the pixel/object can be correspondingly identified in images captured by the second array such that the existing depths can be applied to the proper pixel/object, even when the corresponding pixel/object is not in the same location within the image(s). In many embodiments, correspondence of pixels/objects is not necessarily determined for part or all of an image, but the depths of each pixel in the first image are used for calculating the depth of the pixel in the same location in the second image.
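A sketch of how a depth already computed by the first array could seed and bound the search performed with the second array is shown below; the relative tolerance, number of candidates, and helper names are assumptions of this sketch.

```python
def refine_depth_with_prior(search_fn, prior_depth, rel_tolerance=0.25,
                            num_candidates=32):
    """Restrict a depth search around a depth computed by the first array.

    search_fn   : callable(candidates) -> best depth, e.g. a wrapper around
                  the epipolar-line search sketched earlier
    prior_depth : depth for the corresponding pixel/object from the first array
    Only depths near the prior are considered, which limits the pixel
    locations examined to the "expected" portions of the image.
    """
    lo = prior_depth * (1.0 - rel_tolerance)
    hi = prior_depth * (1.0 + rel_tolerance)
    candidates = [lo + i * (hi - lo) / (num_candidates - 1)
                  for i in range(num_candidates)]
    return search_fn(candidates)
```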
A process for reusing depth information in accordance with embodiments of the invention is illustrated in
High Resolution Image Synthesis
The image data in low resolution images captured by an array camera can be used to synthesize a high resolution image using super-resolution processes such as those described in U.S. patent application Ser. No. 12/967,807 entitled “Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al. The disclosure of U.S. patent application Ser. No. 12/967,807 is hereby incorporated by reference in its entirety. A super-resolution (SR) process can be utilized to synthesize a higher resolution (HR) 2D image or a stereo pair of higher resolution 2D images from the lower resolution (LR) images captured by an array camera. The terms high or higher resolution (HR) and low or lower resolution (LR) are used here in a relative sense and not to indicate the specific resolutions of the images captured by the array camera.
A stereo array camera configuration can also be used to create a HR image by using the cameras from both arrays. While the relatively large baseline between the two stereo array cameras would result in relatively larger occlusion zones (where parallax effects block some content that is captured in one camera from being captured in another camera), in other visible areas the cameras from the two arrays would enhance the final achieved solution. Preferably, each array camera is complete in its spectral sampling and utilizes a π color filter pattern so that the image that is synthesized using the cameras in one array is devoid of parallax artifacts in occlusion zones. In several embodiments, color filters in individual cameras can be used to pattern the camera module with π filter groups as further discussed in U.S. Provisional Patent Application No. 61/641,165 entitled “Camera Modules Patterned with pi Filter Groups”, to Nisenzon et al. filed May 1, 2012, the disclosure of which is incorporated by reference herein in its entirety.
High resolution (HR) images can be used to enhance depth measurement using stereo (two or more) array cameras in processes such as those described further above. In several embodiments of the invention, HR images are generated from image data captured by cameras in stereo array cameras. Each HR image can be generated using images captured by cameras in one array or images captured by cameras in both arrays. The HR images can then be used as image data in processes for generating depth measurements such as those described above. Measurement can be more robust using HR images because such images are typically less affected by noise. Creating high resolution depth maps in accordance with embodiments of the invention is discussed below.
High Resolution Depth Map
The image data captured by a stereo array camera can be used to generate a high resolution depth map whose accuracy is determined by the baseline separation between the two arrays rather than the baselines of the individual cameras within either array. Depth maps can be generated by any of a variety of processes including those disclosed in U.S. Provisional Patent Application Ser. No. 61/691,666 incorporated by reference above. As discussed further above, the accuracy of depth measurement by an array camera is reduced at further distances from the camera. By using images captured by cameras in one array in a stereo array configuration with images captured by cameras in a second array, the baseline between the two cameras is significantly increased over the baseline between two cameras in a single array.
Auto Calibration of Stereo Array Cameras
A legacy stereo camera typically relies on a very accurate calibration between the two cameras to achieve the stereo effect. However, if the two cameras go out of alignment (e.g., by being dropped) the baseline between the two cameras becomes unknown. Without knowing the baseline, the ability to generate stereo imagery from the camera system is lost because the measured disparities cannot be converted into accurate estimates of depth.
With array cameras arranged in a stereo configuration in accordance with embodiments of the invention, each array individually can generate depth information for objects in a scene. By cross-correlating the pixels of the two array cameras or the depths calculated by the two array cameras, the baseline between the two array cameras can be estimated. This approach to estimating the baseline typically only works well when there are objects visible to both camera arrays whose depths can be calculated reasonably accurately using each camera array independently. If only objects at infinity are visible to both camera arrays, auto calibration as described here may not work. The depths calculated by a single array camera often will have some degree of error due to noise, nonlinearities or manufacturing defects in the lenses of the cameras, and/or other factors. The error can manifest in statistical variations in the depths calculated by the array camera. By correlating the depths calculated by one array in a stereo array camera with the depths calculated by the second array and/or depths calculated using images from one array together with images from the second array, an estimate can be made of the most likely baseline between the two array cameras in the stereo array.
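A least-squares sketch of this idea follows: given per-array depth estimates for a set of objects and the cross-array disparities measured for the same objects, the single baseline that best reconciles the two can be estimated. The symbol names, the simple linear model, and the closed-form fit are assumptions of this sketch.

```python
import numpy as np

def estimate_baseline(depths, cross_disparities, focal_px):
    """Estimate the baseline between two arrays from corresponding objects.

    depths            : depths (meters) of objects measured independently by
                        a single array (the objects must be close enough for
                        those depths to be reasonably accurate)
    cross_disparities : disparities (pixels) of the same objects measured
                        between an image from the first array and an image
                        from the second array
    focal_px          : back focal length in pixels
    Model: disparity_i ~= focal_px * baseline / depth_i; least-squares fit
    for the single unknown baseline.
    """
    z = np.asarray(depths, dtype=float)
    d = np.asarray(cross_disparities, dtype=float)
    a = focal_px / z                      # disparity per meter of baseline
    return float(np.dot(a, d) / np.dot(a, a))
```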
Using the calculated baseline, the stereo array camera can calculate (or recalculate) depth to a higher precision for any object that is visible to both cameras in the array, such as by the processes outlined further above.
Near-Field and Far-Field Stereo
With a legacy stereo camera, an object is typically captured in stereo only if it is within the field of view of both (left and right) cameras. However, as the object comes closer to the stereo camera, it will eventually move out of the field of view of one of the cameras while still remaining in the field of view of the other camera. At this point, the stereo effect is lost because only one camera can “see” the object.
A stereo array camera in accordance with embodiments of the invention can generate both near-field and far-field stereo. As an object comes closer and moves out of the field of view of one array camera in a stereo configuration while staying within the field of view of the other array camera, it will still be captured in stereo. The cameras in the second array, which still “sees” the object, can be used to synthesize one or more virtual viewpoints (e.g., a left eye and right eye view). Good stereo acuity can be expected because the object will be close enough that the depth resolution will be high (i.e., precision of depth measurement). Processes for generating virtual viewpoints for stereo vision in accordance with embodiments of the invention are disclosed in U.S. Provisional Patent Application Ser. No. 61/780,906 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras” to Venkataraman et al., filed Mar. 13, 2013, the disclosure of which is hereby incorporated by reference in its entirety.
Time elapsed between two images captured by a camera can be utilized with location information to provide a speed measurement. Speed measurement using array cameras in accordance with embodiments of the invention is discussed below.
Speed Measurement Using Array Cameras
Motion of an object across the field of view of a digital camera can generally be translated into an angular measurement (or angular velocity with elapsed time information) if the pixel size and back focal length are known, within the tolerance of one pixel and the corresponding angular measure of one pixel. At any given distance d from the camera, the angular measure of one pixel uniquely corresponds to a linear measure. Therefore, given a starting and ending location of an object in two dimensional images captured by a digital camera and the starting and ending distance of the object from the camera, the relative starting and ending locations of the object can be determined in three dimensional space. Provided the time elapsed between the images, the speed (or velocity) of the object can also be calculated. Given one start location and one end location, this can be represented as a linear velocity. Given multiple locations over time, the distance between each pair of consecutive locations (i.e. segment) can be determined and the distances of the segments combined to give a total distance. Additionally, a total average speed can be found by dividing the total distance over the time elapsed or by averaging the speed in each segment (distance divided by time elapsed in that segment) over the total time elapsed.
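As a rough sketch of the conversions described here (the pixel pitch and back focal length are placeholders, and the simple pinhole geometry is an assumption), the angular measure of one pixel, the linear measure it corresponds to at distance d, and a total average speed over multiple segments can be computed as follows:

```python
import math

def pixel_angle(pixel_pitch_m, back_focal_length_m):
    """Angle (radians) subtended by one pixel."""
    return math.atan2(pixel_pitch_m, back_focal_length_m)

def pixel_linear_measure(pixel_pitch_m, back_focal_length_m, distance_m):
    """Linear width (meters) that one pixel spans at a given distance."""
    return distance_m * pixel_pitch_m / back_focal_length_m

def average_speed(locations, timestamps):
    """Total distance over consecutive 3D locations divided by elapsed time.

    locations  : list of (x, y, z) tuples in meters, one per frame of interest
    timestamps : list of capture times in seconds, same length as locations
    """
    total = sum(math.dist(a, b) for a, b in zip(locations, locations[1:]))
    return total / (timestamps[-1] - timestamps[0])
```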
Conventional digital cameras typically capture two dimensional images without the capability of depth/distance measurement and are thus limited to angular measurement of motion. As discussed further above, array cameras can be used to determine depth by observing the disparity between multiple images that are captured by different cameras in the array. Formulas and techniques for determining distance relative to pixel disparity as in U.S. Patent Application Ser. No. 61/691,666 incorporated by reference above can also be used to determine the linear measure that the width of one pixel corresponds to at a given distance from the camera. In addition, one can calculate the time elapsed between the starting and ending frames simply by counting the number of frames between them and observing the frame rate of video capture of the camera.
In many embodiments of the invention, depth information for an object is combined with an angular measure of the object's position to provide a three-dimensional location for the object. In various embodiments of the invention, depth can be calculated using a single array camera or two array cameras in a stereo configuration as discussed further above. The three-dimensional location of an object in two or more images can be used to calculate a speed and direction of the object. A process for measuring speed using an array camera in accordance with embodiments of the invention is illustrated in
A first set of image data is captured (420) using active cameras in the array camera. Typically, each camera collects image data that can be used to form an image from the point of view of the camera. In array cameras, often one camera is designated a reference camera and the image data captured by that camera is referred to as being captured from a reference viewpoint. In many embodiments of the invention, depth measurements are made with respect to the viewpoint of the reference camera using at least one other camera (alternate view cameras) within the array.
An object of interest is identified (430) in the first set of image data. The identification can be based upon a variety of techniques that include, but are not limited to: user input (e.g., selection on a screen), motion activation, shape recognition, and region(s) of interest. The identification can be made in an image generated from the first set of image data from the cameras in the first array. For example, the object of interest can be indicated in a preview image generated from the first set of image data or in a reference image from a reference viewpoint that corresponds to a reference camera in the first array. The identification can include selection of a pixel or set of pixels within the image associated with the object.
Using the first set of image data, a first depth measure and a first location are determined (440) for the object. Techniques for determining the depth of the object can include those disclosed in U.S. Patent Application Ser. No. 61/691,666 incorporated by reference and discussed further above. Depth can be calculated using a single array camera or two array cameras in a stereo configuration as discussed further above. Using the two-dimensional location of the object in an image (e.g., a reference image) an angular measure can be determined for the location of the object with respect to the camera. Combining the angular measure with the depth measure gives a three-dimensional location of the object with respect to the camera. Any of a variety of coordinate systems can be utilized in accordance with embodiments of the invention to represent the calculated location of the object. In several embodiments of the invention, the centerline of a camera is treated as the origin.
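A minimal sketch of this combination, assuming a pinhole model with the camera's centerline (optical axis) as the origin of the coordinate system and square pixels, is shown below; the function and parameter names are this sketch's own.

```python
def location_from_pixel_and_depth(px, py, cx, cy, focal_px, depth_m):
    """Back-project a pixel with a known depth to a 3D point.

    (px, py) : pixel coordinates of the object in the reference image
    (cx, cy) : principal point (where the optical axis meets the sensor)
    focal_px : back focal length expressed in pixels
    depth_m  : depth of the object along the optical axis, in meters
    Returns (X, Y, Z) in meters with the camera centerline as the origin.
    """
    x = (px - cx) * depth_m / focal_px
    y = (py - cy) * depth_m / focal_px
    return (x, y, depth_m)
```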
At some time t after the capture of the first set of image data, a second set of image data is captured (450) using the cameras in the array. In many embodiments of the invention, the same set of cameras utilized to capture the first set of image data are used to capture the second set of image data. In other embodiments, a second set with a different combination of cameras is used to capture the second set of image data.
The object of interest is identified (460) in the second set of image data. Identification can be based upon a variety of techniques that can include those discussed above with respect to identifying the object in the first set of image data or other tracking techniques known in the art.
Using the second set of image data, a second depth measure and a second location are determined for the object (470). Depth can be calculated using techniques discussed further above using a single array camera or two array cameras in a stereo configuration. Location can be calculated using techniques discussed further above and can incorporate known information about the location of the second camera in relation to the first camera (e.g., removing parallax effects).
In different scenarios, an array camera used to capture sets of image data for speed measurement may be stationary (e.g., tripod mounted) or may be in motion (e.g., handheld or panning across a scene). A moving array camera can also capture multiple sets of image data from slightly different points of view to gain the advantage of a larger baseline and more accurate depth. In several embodiments of the invention, an array camera is assumed to be stationary and need not compensate for motion of the camera. In other embodiments of the invention, an array camera includes sensors that collect camera motion information (480) for up to six degrees of movement of the camera, including motion along and rotation about three perpendicular axes. These sensors can include, but are not limited to, inertial sensors and MEMS gyroscopes. Camera motion information that is collected can be used to incorporate motion compensation when calculating the speed and/or direction of an object of interest (i.e., using the camera as a frame of reference). Motion compensation may be appropriate for functions such as stabilization (when there is jitter from slight movements of the camera such as by hand movement) or tracking an object (panning the camera to keep a moving object within the camera's field of view). In further embodiments of the invention, an array camera is configurable to switch between an assumption that it is stationary (no motion compensation) and that it is moving or moveable (apply motion compensation).
The speed of the object of interest is calculated (490) using the first location and second location of the object. The direction can also be calculated from the location information, as well as a vector representing the speed and direction of the object.
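A sketch of this final calculation, assuming two 3D locations of the kind produced above and the elapsed time t between captures, is:

```python
def speed_and_direction(loc1, loc2, elapsed_s):
    """Velocity vector, speed, and unit direction between two 3D locations.

    loc1, loc2 : (x, y, z) tuples in meters at the first and second captures
    elapsed_s  : time elapsed between the two captures, in seconds
    """
    velocity = tuple((b - a) / elapsed_s for a, b in zip(loc1, loc2))
    speed = sum(v * v for v in velocity) ** 0.5
    direction = tuple(v / speed for v in velocity) if speed else (0.0, 0.0, 0.0)
    return velocity, speed, direction
```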
A confidence measure can be given that is based on factors such as lens calibration and/or pixel resolution (the width that a pixel represents based on distance from the camera). The confidence measure can also incorporate information from a confidence map that indicates the reliability of depth measurements for specific pixels as disclosed in U.S. Patent Application Ser. No. 61/691,666 incorporated by reference above.
Additionally, calculating speed in accordance with embodiments of the invention can involve calculating a refined depth measurement using two or more array cameras as discussed further above with respect to
Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention may be practiced otherwise than specifically described, including various changes in the implementation, without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive.
This application is a continuation of U.S. Non-provisional patent application Ser. No. 14/216,968, entitled “Systems and Methods for Stereo Imaging with Camera Arrays”, filed Mar. 17, 2014, which application claims priority to U.S. Provisional Application No. 61/798,673, filed Mar. 15, 2013, the disclosure of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4124798 | Thompson | Nov 1978 | A |
4198646 | Alexander et al. | Apr 1980 | A |
4323925 | Abell et al. | Apr 1982 | A |
4460449 | Montalbano | Jul 1984 | A |
4467365 | Murayama et al. | Aug 1984 | A |
5005083 | Grage | Apr 1991 | A |
5070414 | Tsutsumi | Dec 1991 | A |
5144448 | Hornbaker | Sep 1992 | A |
5327125 | Iwase et al. | Jul 1994 | A |
5629524 | Stettner et al. | May 1997 | A |
5808350 | Jack et al. | Sep 1998 | A |
5832312 | Rieger et al. | Nov 1998 | A |
5880691 | Fossum et al. | Mar 1999 | A |
5933190 | Dierickx et al. | Aug 1999 | A |
5973844 | Burger | Oct 1999 | A |
6002743 | Telymonde | Dec 1999 | A |
6034690 | Gallery et al. | Mar 2000 | A |
6069351 | Mack | May 2000 | A |
6069365 | Chow et al. | May 2000 | A |
6097394 | Levoy et al. | Aug 2000 | A |
6124974 | Burger | Sep 2000 | A |
6137535 | Meyers | Oct 2000 | A |
6141048 | Meyers | Oct 2000 | A |
6160909 | Melen | Dec 2000 | A |
6205241 | Melen | Mar 2001 | B1 |
6358862 | Ireland et al. | Mar 2002 | B1 |
6477260 | Shimomura | Nov 2002 | B1 |
6603513 | Berezin | Aug 2003 | B1 |
6611289 | Yu | Aug 2003 | B1 |
6627896 | Hashimoto et al. | Sep 2003 | B1 |
6635941 | Suda | Oct 2003 | B2 |
6657218 | Noda | Dec 2003 | B2 |
6671399 | Berestov | Dec 2003 | B1 |
6750904 | Lambert | Jun 2004 | B1 |
6765617 | Tangen et al. | Jul 2004 | B1 |
6771833 | Edgar | Aug 2004 | B1 |
6774941 | Boisvert et al. | Aug 2004 | B1 |
6795253 | Shinohara | Sep 2004 | B2 |
6819358 | Kagle et al. | Nov 2004 | B1 |
6879735 | Portniaguine et al. | Apr 2005 | B1 |
6903770 | Kobayashi et al. | Jun 2005 | B1 |
6909121 | Nishikawa | Jun 2005 | B2 |
6958862 | Joseph | Oct 2005 | B1 |
7085409 | Sawhney et al. | Aug 2006 | B2 |
7199348 | Olsen et al. | Apr 2007 | B2 |
7262799 | Suda | Aug 2007 | B2 |
7292735 | Blake et al. | Nov 2007 | B2 |
7295697 | Satoh | Nov 2007 | B1 |
7369165 | Bosco et al. | May 2008 | B2 |
7391572 | Jacobowitz et al. | Jun 2008 | B2 |
7408725 | Sato | Aug 2008 | B2 |
7606484 | Richards et al. | Oct 2009 | B1 |
7633511 | Shum et al. | Dec 2009 | B2 |
7657090 | Omatsu et al. | Feb 2010 | B2 |
7675080 | Boettiger | Mar 2010 | B2 |
7675681 | Tomikawa et al. | Mar 2010 | B2 |
7706634 | Schmitt et al. | Apr 2010 | B2 |
7723662 | Levoy et al. | May 2010 | B2 |
7782364 | Smith | Aug 2010 | B2 |
7840067 | Shen et al. | Nov 2010 | B2 |
7912673 | Hébert et al. | Mar 2011 | B2 |
7973834 | Yang | Jul 2011 | B2 |
7986018 | Rennie | Jul 2011 | B2 |
7990447 | Honda et al. | Aug 2011 | B2 |
8000498 | Shih et al. | Aug 2011 | B2 |
8013904 | Tan et al. | Sep 2011 | B2 |
8027531 | Wilburn et al. | Sep 2011 | B2 |
8044994 | Vetro et al. | Oct 2011 | B2 |
8077245 | Adamo et al. | Dec 2011 | B2 |
8106949 | Tan et al. | Jan 2012 | B2 |
8126279 | Marcellin et al. | Feb 2012 | B2 |
8131097 | Lelescu et al. | Mar 2012 | B2 |
8164629 | Zhang | Apr 2012 | B1 |
8180145 | Wu et al. | May 2012 | B2 |
8189089 | Georgiev | May 2012 | B1 |
8213711 | Tam | Jul 2012 | B2 |
8231814 | Duparre | Jul 2012 | B2 |
8242426 | Ward et al. | Aug 2012 | B2 |
8244027 | Takahashi | Aug 2012 | B2 |
8254668 | Mashitani et al. | Aug 2012 | B2 |
8279325 | Pitts et al. | Oct 2012 | B2 |
8294099 | Blackwell, Jr. | Oct 2012 | B2 |
8305456 | McMahon | Nov 2012 | B1 |
8315476 | Georgiev et al. | Nov 2012 | B1 |
8345144 | Georgiev et al. | Jan 2013 | B1 |
8360574 | Ishak et al. | Jan 2013 | B2 |
8406562 | Bassi et al. | Mar 2013 | B2 |
8446492 | Nakano et al. | May 2013 | B2 |
8514491 | Duparre | Aug 2013 | B2 |
8541730 | Inuiya | Sep 2013 | B2 |
8542933 | Venkataraman et al. | Sep 2013 | B2 |
8619082 | Ciurea et al. | Dec 2013 | B1 |
8655052 | Spooner et al. | Feb 2014 | B2 |
8692893 | McMahon | Apr 2014 | B2 |
8773536 | Zhang | Jul 2014 | B1 |
8780113 | Ciurea et al. | Jul 2014 | B1 |
8804255 | Duparre | Aug 2014 | B2 |
8854462 | Herbin et al. | Oct 2014 | B2 |
8861089 | Duparre | Oct 2014 | B2 |
8866912 | Mullis | Oct 2014 | B2 |
8878950 | Lelescu et al. | Nov 2014 | B2 |
8885922 | Kobayashi et al. | Nov 2014 | B2 |
8902321 | Venkataraman et al. | Dec 2014 | B2 |
20010005225 | Clark et al. | Jun 2001 | A1 |
20010019621 | Hanna et al. | Sep 2001 | A1 |
20010038387 | Tomooka et al. | Nov 2001 | A1 |
20020012056 | Trevino | Jan 2002 | A1 |
20020027608 | Johnson | Mar 2002 | A1 |
20020039438 | Mori et al. | Apr 2002 | A1 |
20020063807 | Margulis | May 2002 | A1 |
20020087403 | Meyers et al. | Jul 2002 | A1 |
20020089596 | Suda | Jul 2002 | A1 |
20020094027 | Sato et al. | Jul 2002 | A1 |
20020101528 | Lee | Aug 2002 | A1 |
20020113867 | Takigawa et al. | Aug 2002 | A1 |
20020113888 | Sonoda et al. | Aug 2002 | A1 |
20020122113 | Foote et al. | Sep 2002 | A1 |
20020163054 | Suda et al. | Nov 2002 | A1 |
20020167537 | Trajkovic | Nov 2002 | A1 |
20020177054 | Saitoh et al. | Nov 2002 | A1 |
20030086079 | Barth et al. | May 2003 | A1 |
20030124763 | Fan et al. | Jul 2003 | A1 |
20030140347 | Varsa | Jul 2003 | A1 |
20030179418 | Wengender et al. | Sep 2003 | A1 |
20030190072 | Adkins et al. | Oct 2003 | A1 |
20030198377 | Ng et al. | Oct 2003 | A1 |
20040008271 | Hagimori et al. | Jan 2004 | A1 |
20040012689 | Tinnerino et al. | Jan 2004 | A1 |
20040027358 | Nakao | Feb 2004 | A1 |
20040047274 | Amanai | Mar 2004 | A1 |
20040050104 | Ghosh et al. | Mar 2004 | A1 |
20040056966 | Schechner et al. | Mar 2004 | A1 |
20040061787 | Liu et al. | Apr 2004 | A1 |
20040066454 | Otani et al. | Apr 2004 | A1 |
20040100570 | Shizukuishi | May 2004 | A1 |
20040114807 | Lelescu et al. | Jun 2004 | A1 |
20040151401 | Sawhney et al. | Aug 2004 | A1 |
20040165090 | Ning | Aug 2004 | A1 |
20040169617 | Yelton et al. | Sep 2004 | A1 |
20040170340 | Tipping et al. | Sep 2004 | A1 |
20040174439 | Upton | Sep 2004 | A1 |
20040207836 | Chhibber et al. | Oct 2004 | A1 |
20040213449 | Safaee-Rad et al. | Oct 2004 | A1 |
20040218809 | Blake et al. | Nov 2004 | A1 |
20040234873 | Venkataraman | Nov 2004 | A1 |
20040240052 | Minefuji et al. | Dec 2004 | A1 |
20040251509 | Choi | Dec 2004 | A1 |
20040264806 | Herley | Dec 2004 | A1 |
20050006477 | Patel | Jan 2005 | A1 |
20050012035 | Miller | Jan 2005 | A1 |
20050036778 | DeMonte | Feb 2005 | A1 |
20050048690 | Yamamoto | Mar 2005 | A1 |
20050068436 | Fraenkel et al. | Mar 2005 | A1 |
20050128509 | Tokkonen et al. | Jun 2005 | A1 |
20050132098 | Sonoda et al. | Jun 2005 | A1 |
20050134712 | Gruhlke et al. | Jun 2005 | A1 |
20050147277 | Higaki et al. | Jul 2005 | A1 |
20050151759 | Gonzalez-Banos et al. | Jul 2005 | A1 |
20050175257 | Kuroki | Aug 2005 | A1 |
20050185711 | Pfister et al. | Aug 2005 | A1 |
20050205785 | Hornback et al. | Sep 2005 | A1 |
20050219363 | Kohler | Oct 2005 | A1 |
20050225654 | Feldman et al. | Oct 2005 | A1 |
20050275946 | Choo et al. | Dec 2005 | A1 |
20050286612 | Takanashi | Dec 2005 | A1 |
20050286756 | Hong et al. | Dec 2005 | A1 |
20060002635 | Nestares et al. | Jan 2006 | A1 |
20060023197 | Joel | Feb 2006 | A1 |
20060023314 | Boettiger et al. | Feb 2006 | A1 |
20060028476 | Sobel et al. | Feb 2006 | A1 |
20060033005 | Jerdev et al. | Feb 2006 | A1 |
20060038891 | Okutomi et al. | Feb 2006 | A1 |
20060049930 | Zruya et al. | Mar 2006 | A1 |
20060054780 | Garrood et al. | Mar 2006 | A1 |
20060054782 | Olsen et al. | Mar 2006 | A1 |
20060055811 | Frtiz et al. | Mar 2006 | A1 |
20060072029 | Miyatake et al. | Apr 2006 | A1 |
20060087747 | Ohzawa et al. | Apr 2006 | A1 |
20060098888 | Morishita | May 2006 | A1 |
20060125936 | Gruhike et al. | Jun 2006 | A1 |
20060138322 | Costello et al. | Jun 2006 | A1 |
20060152803 | Provitola | Jul 2006 | A1 |
20060159369 | Young | Jul 2006 | A1 |
20060176566 | Boettiger et al. | Aug 2006 | A1 |
20060187338 | May et al. | Aug 2006 | A1 |
20060203113 | Wada et al. | Sep 2006 | A1 |
20060210186 | Berkner | Sep 2006 | A1 |
20060239549 | Kelly et al. | Oct 2006 | A1 |
20060243889 | Farnworth et al. | Nov 2006 | A1 |
20060251410 | Trutna | Nov 2006 | A1 |
20060274174 | Tewinkle | Dec 2006 | A1 |
20060278948 | Yamaguchi et al. | Dec 2006 | A1 |
20060279648 | Senba et al. | Dec 2006 | A1 |
20070002159 | Olsen et al. | Jan 2007 | A1 |
20070024614 | Tam | Feb 2007 | A1 |
20070040828 | Zalevsky et al. | Feb 2007 | A1 |
20070040922 | McKee et al. | Feb 2007 | A1 |
20070052825 | Cho | Mar 2007 | A1 |
20070083114 | Yang et al. | Apr 2007 | A1 |
20070085917 | Kobayashi | Apr 2007 | A1 |
20070092245 | Bazakos et al. | Apr 2007 | A1 |
20070102622 | Olsen et al. | May 2007 | A1 |
20070126898 | Feldman et al. | Jun 2007 | A1 |
20070127831 | Venkataraman | Jun 2007 | A1 |
20070139333 | Sato et al. | Jun 2007 | A1 |
20070146503 | Shiraki | Jun 2007 | A1 |
20070146511 | Kinoshita et al. | Jun 2007 | A1 |
20070159541 | Sparks et al. | Jul 2007 | A1 |
20070160310 | Tanida et al. | Jul 2007 | A1 |
20070165931 | Higaki | Jul 2007 | A1 |
20070171290 | Kroger | Jul 2007 | A1 |
20070211164 | Olsen et al. | Sep 2007 | A1 |
20070216765 | Wong et al. | Sep 2007 | A1 |
20070228256 | Mentzer | Oct 2007 | A1 |
20070257184 | Olsen et al. | Nov 2007 | A1 |
20070258006 | Olsen et al. | Nov 2007 | A1 |
20070258706 | Raskar et al. | Nov 2007 | A1 |
20070263113 | Baek et al. | Nov 2007 | A1 |
20070263114 | Gurevich et al. | Nov 2007 | A1 |
20070268374 | Robinson | Nov 2007 | A1 |
20070296835 | Olsen | Dec 2007 | A1 |
20080019611 | Larkin | Jan 2008 | A1 |
20080025649 | Liu et al. | Jan 2008 | A1 |
20080030597 | Olsen et al. | Feb 2008 | A1 |
20080043095 | Vetro et al. | Feb 2008 | A1 |
20080043096 | Vetro et al. | Feb 2008 | A1 |
20080062164 | Bassi et al. | Mar 2008 | A1 |
20080079805 | Takagi et al. | Apr 2008 | A1 |
20080080028 | Bakin et al. | Apr 2008 | A1 |
20080084486 | Enge et al. | Apr 2008 | A1 |
20080088793 | Sverdrup et al. | Apr 2008 | A1 |
20080112635 | Kondo et al. | May 2008 | A1 |
20080118241 | Tekolste et al. | May 2008 | A1 |
20080131019 | Ng | Jun 2008 | A1 |
20080131107 | Ueno | Jun 2008 | A1 |
20080151097 | Chen et al. | Jun 2008 | A1 |
20080152215 | Horie et al. | Jun 2008 | A1 |
20080152296 | Oh et al. | Jun 2008 | A1 |
20080158259 | Kempf et al. | Jul 2008 | A1 |
20080158375 | Kakkori et al. | Jul 2008 | A1 |
20080174670 | Olsen et al. | Jul 2008 | A1 |
20080193026 | Horie et al. | Aug 2008 | A1 |
20080218610 | Chapman et al. | Sep 2008 | A1 |
20080219654 | Border et al. | Sep 2008 | A1 |
20080240598 | Hasegawa | Oct 2008 | A1 |
20080247638 | Tanida et al. | Oct 2008 | A1 |
20080247653 | Moussavi et al. | Oct 2008 | A1 |
20080272416 | Yun | Nov 2008 | A1 |
20080273751 | Yuan et al. | Nov 2008 | A1 |
20080278591 | Barna et al. | Nov 2008 | A1 |
20080298674 | Baker et al. | Dec 2008 | A1 |
20090050946 | Duparre et al. | Feb 2009 | A1 |
20090052743 | Techmer | Feb 2009 | A1 |
20090060281 | Tanida et al. | Mar 2009 | A1 |
20090086074 | Li et al. | Apr 2009 | A1 |
20090091806 | Inuiya | Apr 2009 | A1 |
20090102956 | Georgiev | Apr 2009 | A1 |
20090109306 | Shan et al. | Apr 2009 | A1 |
20090128833 | Yahav | May 2009 | A1 |
20090129667 | Ho et al. | May 2009 | A1 |
20090167922 | Perlman et al. | Jul 2009 | A1 |
20090179142 | Duparre et al. | Jul 2009 | A1 |
20090180021 | Kikuchi et al. | Jul 2009 | A1 |
20090200622 | Tai et al. | Aug 2009 | A1 |
20090201371 | Matsuda et al. | Aug 2009 | A1 |
20090207235 | Francini et al. | Aug 2009 | A1 |
20090225203 | Tanida et al. | Sep 2009 | A1 |
20090237520 | Kaneko et al. | Sep 2009 | A1 |
20090268192 | Koenck et al. | Oct 2009 | A1 |
20090268983 | Stone | Oct 2009 | A1 |
20090274387 | Jin | Nov 2009 | A1 |
20090284651 | Srinivasan | Nov 2009 | A1 |
20090297056 | Lelescu et al. | Dec 2009 | A1 |
20090302205 | Olsen et al. | Dec 2009 | A9 |
20090323195 | Hembree et al. | Dec 2009 | A1 |
20090323206 | Oliver et al. | Dec 2009 | A1 |
20090324118 | Maslov et al. | Dec 2009 | A1 |
20100002126 | Wenstrand et al. | Jan 2010 | A1 |
20100002313 | Duparre et al. | Jan 2010 | A1 |
20100002314 | Duparre | Jan 2010 | A1 |
20100053342 | Hwang | Mar 2010 | A1 |
20100053600 | Tanida et al. | Mar 2010 | A1 |
20100060746 | Olsen et al. | Mar 2010 | A9 |
20100073463 | Momonoi et al. | Mar 2010 | A1 |
20100086227 | Sun et al. | Apr 2010 | A1 |
20100091389 | Henriksen et al. | Apr 2010 | A1 |
20100097491 | Farina et al. | Apr 2010 | A1 |
20100103259 | Tanida et al. | Apr 2010 | A1 |
20100103308 | Butterfield et al. | Apr 2010 | A1 |
20100111444 | Coffman | May 2010 | A1 |
20100118127 | Nam | May 2010 | A1 |
20100133230 | Henriksen et al. | Jun 2010 | A1 |
20100141802 | Knight et al. | Jun 2010 | A1 |
20100142839 | Lakus-Becker | Jun 2010 | A1 |
20100157073 | Kondo et al. | Jun 2010 | A1 |
20100165152 | Lim | Jul 2010 | A1 |
20100177411 | Hegde et al. | Jul 2010 | A1 |
20100194901 | van Hoorebeke et al. | Aug 2010 | A1 |
20100195716 | Klein et al. | Aug 2010 | A1 |
20100201834 | Maruyama et al. | Aug 2010 | A1 |
20100208100 | Olsen et al. | Aug 2010 | A9 |
20100220212 | Perlman et al. | Sep 2010 | A1 |
20100231285 | Boomer et al. | Sep 2010 | A1 |
20100244165 | Lake et al. | Sep 2010 | A1 |
20100259610 | Petersen et al. | Oct 2010 | A1 |
20100265346 | Iizuka | Oct 2010 | A1 |
20100265385 | Knight | Oct 2010 | A1 |
20100281070 | Chan et al. | Nov 2010 | A1 |
20100302423 | Adams, Jr. et al. | Dec 2010 | A1 |
20100309292 | Ho | Dec 2010 | A1 |
20110001037 | Tewinkle | Jan 2011 | A1 |
20110018973 | Takayama | Jan 2011 | A1 |
20110032370 | Ludwig | Feb 2011 | A1 |
20110043661 | Podoleanu | Feb 2011 | A1 |
20110043665 | Ogasahara | Feb 2011 | A1 |
20110043668 | McKinnon et al. | Feb 2011 | A1 |
20110044502 | Liu et al. | Feb 2011 | A1 |
20110069189 | Venkataraman et al. | Mar 2011 | A1 |
20110108708 | Olsen et al. | May 2011 | A1 |
20110121421 | Charbon et al. | May 2011 | A1 |
20110122308 | Duparre | May 2011 | A1 |
20110128412 | Milnes et al. | Jun 2011 | A1 |
20110149408 | Haugholt et al. | Jun 2011 | A1 |
20110149409 | Haugholt et al. | Jun 2011 | A1 |
20110153248 | Gu et al. | Jun 2011 | A1 |
20110157321 | Nakajima et al. | Jun 2011 | A1 |
20110176020 | Chang | Jul 2011 | A1 |
20110211824 | Georgiev et al. | Sep 2011 | A1 |
20110221599 | Högasten | Sep 2011 | A1 |
20110221658 | Haddick et al. | Sep 2011 | A1 |
20110221939 | Jerdev | Sep 2011 | A1 |
20110234841 | Akeley et al. | Sep 2011 | A1 |
20110241234 | Duparre | Oct 2011 | A1 |
20110242342 | Goma et al. | Oct 2011 | A1 |
20110242355 | Goma et al. | Oct 2011 | A1 |
20110242356 | Aleksic | Oct 2011 | A1 |
20110255592 | Sung et al. | Oct 2011 | A1 |
20110267348 | Lin et al. | Nov 2011 | A1 |
20110273531 | Ito et al. | Nov 2011 | A1 |
20110274366 | Tardif | Nov 2011 | A1 |
20110279721 | McMahon | Nov 2011 | A1 |
20110285866 | Bhrugumalla et al. | Nov 2011 | A1 |
20110298917 | Yanagita | Dec 2011 | A1 |
20110300929 | Tardif et al. | Dec 2011 | A1 |
20110310980 | Mathew | Dec 2011 | A1 |
20110317766 | Lim et al. | Dec 2011 | A1 |
20120012748 | Pain et al. | Jan 2012 | A1 |
20120026297 | Sato | Feb 2012 | A1 |
20120026342 | Yu et al. | Feb 2012 | A1 |
20120026366 | Golan et al. | Feb 2012 | A1 |
20120039525 | Tian et al. | Feb 2012 | A1 |
20120044249 | Mashitani et al. | Feb 2012 | A1 |
20120044372 | Côté et al. | Feb 2012 | A1 |
20120056982 | Katz | Mar 2012 | A1 |
20120062702 | Jiang et al. | Mar 2012 | A1 |
20120069235 | Imai | Mar 2012 | A1 |
20120113413 | Miahczylowicz-Wolski et al. | May 2012 | A1 |
20120147139 | Li et al. | Jun 2012 | A1 |
20120147205 | Lelescu et al. | Jun 2012 | A1 |
20120154551 | Inoue | Jun 2012 | A1 |
20120163672 | McKinnon et al. | Jun 2012 | A1 |
20120169433 | Mullins et al. | Jul 2012 | A1 |
20120170134 | Bolis et al. | Jul 2012 | A1 |
20120176479 | Mayhew et al. | Jul 2012 | A1 |
20120198677 | Duparre | Aug 2012 | A1 |
20120200734 | Tang | Aug 2012 | A1 |
20120229628 | Ishiyama et al. | Sep 2012 | A1 |
20120249550 | Akeley et al. | Oct 2012 | A1 |
20120249750 | Izzat et al. | Oct 2012 | A1 |
20120262607 | Shimura et al. | Oct 2012 | A1 |
20120287291 | McMahon | Nov 2012 | A1 |
20120293695 | Tanaka | Nov 2012 | A1 |
20120314033 | Lee et al. | Dec 2012 | A1 |
20120327222 | Ng et al. | Dec 2012 | A1 |
20130002828 | Ding et al. | Jan 2013 | A1 |
20130003184 | Duparre | Jan 2013 | A1 |
20130010073 | Do | Jan 2013 | A1 |
20130022111 | Chen et al. | Jan 2013 | A1 |
20130027580 | Olsen et al. | Jan 2013 | A1 |
20130033579 | Wajs | Feb 2013 | A1 |
20130033585 | Li et al. | Feb 2013 | A1 |
20130050504 | Safaee-Rad et al. | Feb 2013 | A1 |
20130050526 | Keelan | Feb 2013 | A1 |
20130057710 | McMahon | Mar 2013 | A1 |
20130070060 | Chatterjee | Mar 2013 | A1 |
20130077880 | Venkataraman et al. | Mar 2013 | A1 |
20130077882 | Venkataraman et al. | Mar 2013 | A1 |
20130088637 | Duparre | Apr 2013 | A1 |
20130093842 | Yahata | Apr 2013 | A1 |
20130113899 | Morohoshi et al. | May 2013 | A1 |
20130120605 | Georgiev et al. | May 2013 | A1 |
20130128068 | Georgiev et al. | May 2013 | A1 |
20130128069 | Georgiev et al. | May 2013 | A1 |
20130128087 | Georgiev et al. | May 2013 | A1 |
20130128121 | Agarwala et al. | May 2013 | A1 |
20130147979 | McMahon et al. | Jun 2013 | A1 |
20130215108 | McMahon et al. | Aug 2013 | A1 |
20130222556 | Shimada | Aug 2013 | A1 |
20130229540 | Farina et al. | Sep 2013 | A1 |
20130250150 | Malone | Sep 2013 | A1 |
20130259317 | Gaddy | Oct 2013 | A1 |
20130265459 | Duparre et al. | Oct 2013 | A1 |
20140009586 | McNamer et al. | Jan 2014 | A1 |
20140037137 | Broaddus et al. | Feb 2014 | A1 |
20140037140 | Benhimane et al. | Feb 2014 | A1 |
20140076336 | Clayton et al. | Mar 2014 | A1 |
20140078333 | Miao | Mar 2014 | A1 |
20140092281 | Nisenzon et al. | Apr 2014 | A1 |
20140118584 | Lee et al. | May 2014 | A1 |
20140132810 | McMahon | May 2014 | A1 |
20140176592 | Wilburn et al. | Jun 2014 | A1 |
20140198188 | Izawa | Jul 2014 | A1 |
20140218546 | McMahon | Aug 2014 | A1 |
20140240528 | Venkataraman et al. | Aug 2014 | A1 |
20140240529 | Venkataraman et al. | Aug 2014 | A1 |
20140253738 | Mullis | Sep 2014 | A1 |
20140267243 | Venkataraman et al. | Sep 2014 | A1 |
20140267286 | Duparre | Sep 2014 | A1 |
20140267633 | Venkataraman et al. | Sep 2014 | A1 |
20140267762 | Mullis et al. | Sep 2014 | A1 |
20140267890 | Lelescu et al. | Sep 2014 | A1 |
20140285675 | Mullis | Sep 2014 | A1 |
20140321712 | Ciurea et al. | Oct 2014 | A1 |
20140347748 | Duparre | Nov 2014 | A1 |
20150002734 | Lee | Jan 2015 | A1 |
20150085174 | Shabtay et al. | Mar 2015 | A1 |
20150146029 | Venkataraman et al. | May 2015 | A1 |
20150146030 | Venkataraman et al. | May 2015 | A1 |
Number | Date | Country |
---|---|---|
840502 | May 1998 | EP |
2336816 | Jun 2011 | EP |
2381418 | Oct 2011 | EP |
2006033493 | Feb 2006 | JP |
2007520107 | Jul 2007 | JP |
2011109484 | Jun 2011 | JP |
2013526801 | Jun 2013 | JP |
2014521117 | Aug 2014 | JP |
2007083579 | Jul 2007 | WO |
2008108271 | Sep 2008 | WO |
2011116203 | Sep 2011 | WO |
2011063347 | Oct 2011 | WO |
2011143501 | Nov 2011 | WO |
2012057619 | May 2012 | WO |
2012057620 | May 2012 | WO |
2012057621 | May 2012 | WO |
2012057622 | May 2012 | WO |
2012057623 | May 2012 | WO |
2012057620 | Jun 2012 | WO |
2012074361 | Jun 2012 | WO |
2012078126 | Jun 2012 | WO |
2012082904 | Jun 2012 | WO |
2012155119 | Nov 2012 | WO |
2013003276 | Jan 2013 | WO |
2013043751 | Mar 2013 | WO |
2013043761 | Mar 2013 | WO |
2013049699 | Apr 2013 | WO |
2013055960 | Apr 2013 | WO |
2013119706 | Aug 2013 | WO |
2013126578 | Aug 2013 | WO |
2014052974 | Apr 2014 | WO |
2014032020 | May 2014 | WO |
2014078443 | May 2014 | WO |
2014130849 | Aug 2014 | WO |
2014133974 | Sep 2014 | WO |
2014138695 | Sep 2014 | WO |
2014138697 | Sep 2014 | WO |
2014144157 | Sep 2014 | WO |
2014145856 | Sep 2014 | WO |
2014149403 | Sep 2014 | WO |
2014150856 | Sep 2014 | WO |
2014159721 | Oct 2014 | WO |
2014159779 | Oct 2014 | WO |
2014160142 | Oct 2014 | WO |
2014164550 | Oct 2014 | WO |
2014164909 | Oct 2014 | WO |
2014165244 | Oct 2014 | WO |
2015081279 | Jun 2015 | WO |
Entry |
---|
Yokochi et al., Extrinsic Camera Parameter Estimation Based on Feature Tracking and GPS Data, 2006, Nara Institute of Science and Technology, Graduate School of Information Science, LNCS 3851, pp. 369-378. |
International Preliminary Report on Patentability for International Application PCT/US2014/030692, issued Sep. 15, 2015, Mailed Sep. 24, 2015, 6 Pgs. |
International Search Report and Written Opinion for International Application PCT/US2014/030692, completed Jul. 28, 2014, Mailed Aug. 27, 2014, 7 Pages. |
Baker et al., “Limits on Super-Resolution and How to Break Them”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2002, vol. 24, No. 9, pp. 1167-1183. |
Bertero et al., “Super-resolution in computational imaging”, Micron, 2003, vol. 34, Issues 6-7, 17 pgs. |
Bishop et al., “Full-Resolution Depth Map Estimation from an Aliased Plenoptic Light Field”, ACCV 2010, Part II, LNCS 6493, pp. 186-200. |
Bishop et al., “Light Field Superresolution”, Retrieved from http://home.eps.hw.ac.uk/˜sz73/ICCP09/LightFieldSuperresolution.pdf, 9 pgs. |
Bishop et al., “The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution”, IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2012, vol. 34, No. 5, pp. 972-986. |
Borman, “Topics in Multiframe Superresolution Restoration”, Thesis of Sean Borman, Apr. 2004, 282 pgs. |
Borman et al., "Image Sequence Processing", Source unknown, Oct. 14, 2002, 81 pgs. |
Borman et al., “Block-Matching Sub-Pixel Motion Estimation from Noisy, Under-Sampled Frames—An Empirical Performance Evaluation”, Proc SPIE, Dec. 1998, 3653, 10 pgs. |
Borman et al., “Image Resampling and Constraint Formulation for Multi-Frame Super-Resolution Restoration”, Proc. SPIE, Jun. 2003, 5016, 12 pgs. |
Borman et al., “Linear models for multi-frame super-resolution restoration under non-affine registration and spatially varying PSF”, Proc. SPIE, May 2004, vol. 5299, 12 pgs. |
Borman et al., "Nonlinear Prediction Methods for Estimation of Clique Weighting Parameters in NonGaussian Image Models", Proc. SPIE, 1998, 3459, 9 pgs. |
Borman et al., “Simultaneous Multi-Frame MAP Super-Resolution Video Enhancement Using Spatio-Temporal Priors”, Image Processing, 1999, ICIP 99 Proceedings, vol. 3, pp. 469-473. |
Borman et al., “Super-Resolution from Image Sequences—A Review”, Circuits & Systems, 1998, pp. 374-378. |
Bose et al., “Superresolution and Noise Filtering Using Moving Least Squares”, IEEE Transactions on Image Processing, date unknown, 21 pgs. |
Boye et al., “Comparison of Subpixel Image Registration Algorithms”, Proc. of SPIE-IS&T Electronic Imaging, vol. 7246, pp. 72460X-1-72460X-9. |
Bruckner et al., “Artificial compound eye applying hyperacuity”, Optics Express, Dec. 11, 2006, vol. 14, No. 25, pp. 12076-12084. |
Bruckner et al., “Driving microoptical imaging systems towards miniature camera applications”, Proc. SPIE, Micro-Optics, 2010, 11 pgs. |
Bruckner et al., “Thin wafer-level camera lenses inspired by insect compound eyes”, Optics Express, Nov. 22, 2010, vol. 18, No. 24, pp. 24379-24394. |
Capel, "Image Mosaicing and Super-resolution", [online], Retrieved on Nov. 10, 2012 (Nov. 10, 2012). Retrieved from the Internet at URL:<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226.2643&rep=rep1&type=pdf>, Title pg., abstract, table of contents, pp. 1-263 (269 total pages). |
Chan et al., “Extending the Depth of Field in a Compound-Eye Imaging System with Super-Resolution Reconstruction”, Proceedings—International Conference on Pattern Recognition, 2006, vol. 3, pp. 623-626. |
Chan et al., "Investigation of Computational Compound-Eye Imaging System with Super-Resolution Reconstruction", IEEE, ICASSP 2006, pp. 1177-1180. |
Chan et al., “Super-resolution reconstruction in a computational compound-eye imaging system”, Multidim Syst Sign Process, 2007, vol. 18, pp. 83-101. |
Chen et al., “Interactive deformation of light fields”, In Proceedings of SIGGRAPH I3D 2005, pp. 139-146. |
Drouin et al., “Fast Multiple-Baseline Stereo with Occlusion”, Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling, 2005, 8 pgs. |
Drouin et al., “Geo-Consistency for Wide Multi-Camera Stereo”, Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 8 pgs. |
Drouin et al., “Improving Border Localization of Multi-Baseline Stereo Using Border-Cut”, International Journal of Computer Vision, Jul. 2009, vol. 83, Issue 3, 8 pgs. |
Duparre et al., “Artificial apposition compound eye fabricated by micro-optics technology”, Applied Optics, Aug. 1, 2004, vol. 43, No. 22, pp. 4303-4310. |
Duparre et al., “Artificial compound eye zoom camera”, Bioinspiration & Biomimetics, 2008, vol. 3, pp. 1-6. |
Duparre et al., “Artificial compound eyes—different concepts and their application to ultra flat image acquisition sensors”, MOEMS and Miniaturized Systems IV, Proc. SPIE 5346, Jan. 2004, pp. 89-100. |
Duparre et al., “Chirped arrays of refractive ellipsoidal microlenses for aberration correction under oblique incidence”, Optics Express, Dec. 26, 2005, vol. 13, No. 26, pp. 10539-10551. |
Duparre et al., “Micro-optical artificial compound eyes”, Bioinspiration & Biomimetics, 2006, vol. 1, pp. R1-R16. |
Duparre et al., “Microoptical artificial compound eyes—from design to experimental verification of two different concepts”, Proc. of SPIE, Optical Design and Engineering II, vol. 5962, pp. 59622A-1-59622A-12. |
Duparre et al., “Microoptical Artificial Compound Eyes—Two Different Concepts for Compact Imaging Systems”, 11th Microoptics Conference, Oct. 30-Nov. 2, 2005, 2 pgs. |
Duparre et al., “Microoptical telescope compound eye”, Optics Express, Feb. 7, 2005, vol. 13, No. 3, pp. 889-903. |
Duparre et al., “Micro-optically fabricated artificial apposition compound eye”, Electronic Imaging—Science and Technology, Prod. SPIE 5301, Jan. 2004, pp. 25-33. |
Duparre et al., “Novel Optics/Micro-Optics for Miniature Imaging Systems”, Proc. of SPIE, 2006, vol. 6196, pp. 619607-1-619607-15. |
Duparre et al., “Theoretical analysis of an artificial superposition compound eye for application in ultra flat digital image acquisition devices”, Optical Systems Design, Proc. SPIE 5249, Sep. 2003, pp. 408-418. |
Duparre et al., “Thin compound-eye camera”, Applied Optics, May 20, 2005, vol. 44, No. 15, pp. 2949-2956. |
Duparre et al., "Ultra-Thin Camera Based on Artificial Apposition Compound Eyes", 10th Microoptics Conference, Sep. 1-3, 2004, 2 pgs. |
Fanaswala, "Regularized Super-Resolution of Multi-View Images", Retrieved on Nov. 10, 2012 (Nov. 10, 2012). Retrieved from the Internet at URL:<http://www.site.uottawa.ca/~edubois/theses/Fanaswala_thesis.pdf>, 163 pgs. |
Farrell et al., “Resolution and Light Sensitivity Tradeoff with Pixel Size”, Proceedings of the SPIE Electronic Imaging 2006 Conference, 2006, vol. 6069, 8 pgs. |
Farsiu et al., “Advances and Challenges in Super-Resolution”, International Journal of Imaging Systems and Technology, 2004, vol. 14, pp. 47-57. |
Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, Oct. 2004, vol. 13, No. 10, pp. 1327-1344. |
Farsiu et al., “Multiframe Demosaicing and Super-Resolution of Color Images”, IEEE Transactions on Image Processing, Jan. 2006, vol. 15, No. 1, pp. 141-159. |
Feris et al., “Multi-Flash Stereopsis: Depth Edge Preserving Stereo with Small Baseline Illumination”, IEEE Trans on PAMI, 2006, 31 pgs. |
Fife et al., “A 3D Multi-Aperture Image Sensor Architecture”, Custom Integrated Circuits Conference, 2006, CICC '06, IEEE, pp. 281-284. |
Fife et al., "A 3MPixel Multi-Aperture Image Sensor with 0.7μm Pixels in 0.11μm CMOS", ISSCC 2008, Session 2, Image Sensors & Technology, 2008, pp. 48-50. |
Fischer et al., "Optical System Design", 2nd Edition, SPIE Press, pp. 191-198. |
Fischer et al., "Optical System Design", 2nd Edition, SPIE Press, pp. 49-58. |
Goldman et al., “Video Object Annotation, Navigation, and Composition”, In Proceedings of UIST 2008, pp. 3-12. |
Gortler et al., “The Lumigraph”, In Proceedings of SIGGRAPH 1996, pp. 43-54. |
Hacohen et al., “Non-Rigid Dense Correspondence with Applications for Image Enhancement”, ACM Transactions on Graphics, 30, 4, 2011, pp. 70:1-70:10. |
Hamilton, “JPEG File Interchange Format, Version 1.02”, Sep. 1, 1992, 9 pgs. |
Hardie, "A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter", IEEE Transactions on Image Processing, Dec. 2007, vol. 16, No. 12, pp. 2953-2964. |
Hasinoff et al., “Search-and-Replace Editing for Personal Photo Collections”, Computational Photography (ICCP) 2010, pp. 1-8. |
Horisaki et al., “Irregular Lens Arrangement Design to Improve Imaging Performance of Compound-Eye Imaging Systems”, Applied Physics Express, 2010, vol. 3, pp. 022501-1-022501-3. |
Horisaki et al., “Superposition Imaging for Three-Dimensionally Space-Invariant Point Spread Functions”, Applied Physics Express, 2011, vol. 4, pp. 112501-1-112501-3. |
Horn et al., “LightShop: Interactive Light Field Manipulation and Rendering”, In Proceedings of I3D 2007, pp. 121-128. |
Isaksen et al., “Dynamically Reparameterized Light Fields”, In Proceedings of SIGGRAPH 2000, pp. 297-306. |
Jarabo et al., “Efficient Propagation of Light Field Edits”, In Proceedings of SIACG 2011, pp. 75-80. |
Joshi et al., "Synthetic Aperture Tracking: Tracking Through Occlusions", ICCV: IEEE 11th International Conference on Computer Vision; Publication [online]. Oct. 2007 [retrieved Jul. 28, 2014]. Retrieved from the Internet: <URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4409032&isnumber=4408819>; pp. 1-8. |
Kang et al., "Handling Occlusions in Dense Multi-View Stereo", Computer Vision and Pattern Recognition, 2001, vol. 1, pp. I-103-I-110. |
Kitamura et al., “Reconstruction of a high-resolution image on a compound-eye image-capturing system”, Applied Optics, Mar. 10, 2004, vol. 43, No. 8, pp. 1719-1727. |
Krishnamurthy et al., “Compression and Transmission of Depth Maps for Image-Based Rendering”, Image Processing, 2001, pp. 828-831. |
Kutulakos et al., “Occluding Contour Detection Using Affine Invariants and Purposive Viewpoint Control”, Proc., CVPR 94, 8 pgs. |
Lensvector, “How LensVector Autofocus Works”, printed Nov. 2, 2012 from http://www.lensvector.com/overview.html, 1 pg. |
Levoy, “Light Fields and Computational Imaging”, IEEE Computer Society, Aug. 2006, pp. 46-55. |
Levoy et al., "Light Field Rendering", Proc. ACM SIGGRAPH '96, pp. 1-12. |
Li et al., "A Hybrid Camera for Motion Deblurring and Depth Map Super-Resolution", Jun. 23-28, 2008, IEEE Conference on Computer Vision and Pattern Recognition, 8 pgs. Retrieved from www.eecis.udel.edu/~jye/lab_research/08/deblur-feng.pdf on Feb. 5, 2014. |
Liu et al., “Virtual View Reconstruction Using Temporal Information”, 2012 IEEE International Conference on Multimedia and Expo, 2012, pp. 115-120. |
Lo et al., “Stereoscopic 3D Copy & Paste”, ACM Transactions on Graphics, vol. 29, No. 6, Article 147, Dec. 2010, pp. 147:1-147:10. |
Muehlebach, “Camera Auto Exposure Control for VSLAM Applications”, Studies on Mechatronics, Swiss Federal Institute of Technology Zurich, Autumn Term 2010 course, 67 pgs. |
Nayar, “Computational Cameras: Redefining the Image”, IEEE Computer Society, Aug. 2006, pp. 30-38. |
Ng, “Digital Light Field Photography”, Thesis, Jul. 2006, 203 pgs. |
Ng et al., “Super-Resolution Image Restoration from Blurred Low-Resolution Images”, Journal of Mathematical Imaging and Vision, 2005, vol. 23, pp. 367-378. |
Nitta et al., “Image reconstruction for thin observation module by bound optics by using the iterative backprojection method”, Applied Optics, May 1, 2006, vol. 45, No. 13, pp. 2893-2900. |
Nomura et al., “Scene Collages and Flexible Camera Arrays”, Proceedings of Eurographics Symposium on Rendering, 2007, 12 pgs. |
Park et al., “Super-Resolution Image Reconstruction”, IEEE Signal Processing Magazine, May 2003, pp. 21-36. |
Pham et al., “Robust Super-Resolution without Regularization”, Journal of Physics: Conference Series 124, 2008, pp. 1-19. |
Polight, “Designing Imaging Products Using Reflowable Autofocus Lenses”, http://www.polight.no/tunable-polymer-autofocus-lens-html--11.html. |
Protter et al., “Generalizing the Nonlocal-Means to Super-Resolution Reconstruction”, IEEE Transactions on Image Processing, Jan. 2009, vol. 18, No. 1, pp. 36-51. |
Radtke et al., “Laser lithographic fabrication and characterization of a spherical artificial compound eye”, Optics Express, Mar. 19, 2007, vol. 15, No. 6, pp. 3067-3077. |
Rander et al., “Virtualized Reality: Constructing Time-Varying Virtual Worlds From Real World Events”, Proc. of IEEE Visualization '97, Phoenix, Arizona, Oct. 19-24, 1997, pp. 277-283, 552. |
Rhemann et al., "Fast Cost-Volume Filtering for Visual Correspondence and Beyond", IEEE Trans. Pattern Anal. Mach. Intell., 2013, vol. 35, No. 2, pp. 504-511. |
Robertson et al., “Dynamic Range Improvement Through Multiple Exposures”, In Proc. of the Int. Conf. on Image Processing, 1999, 5 pgs. |
Robertson et al., “Estimation-theoretic approach to dynamic range enhancement using multiple exposures”, Journal of Electronic Imaging, Apr. 2003, vol. 12, No. 2, pp. 219-228. |
Roy et al., “Non-Uniform Hierarchical Pyramid Stereo for Large Images”, Computer and Robot Vision, 2007, pp. 208-215. |
Sauer et al., “Parallel Computation of Sequential Pixel Updates in Statistical Tomographic Reconstruction”, ICIP 1995, pp. 93-96. |
Seitz et al., “Plenoptic Image Editing”, International Journal of Computer Vision 48, 2, pp. 115-129. |
Shum et al., "Pop-Up Light Field: An Interactive Image-Based Modeling and Rendering System", Apr. 2004, ACM Transactions on Graphics, vol. 23, No. 2, pp. 143-162. Retrieved from http://131.107.65.14/en-us/um/people/jiansun/papers/PopupLightField_TOG.pdf on Feb. 5. |
Stollberg et al., “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects”, Optics Express, Aug. 31, 2009, vol. 17, No. 18, pp. 15747-15759. |
Sun et al., “Image Super-Resolution Using Gradient Profile Prior”, Source and date unknown, 8 pgs. |
Takeda et al., “Super-resolution Without Explicit Subpixel Motion Estimation”, IEEE Transaction on Image Processing, Sep. 2009, vol. 18, No. 9, pp. 1958-1975. |
Tanida et al., “Color imaging with an integrated compound imaging system”, Optics Express, Sep. 8, 2003, vol. 11, No. 18, pp. 2109-2117. |
Tanida et al., “Thin observation module by bound optics (TOMBO): concept and experimental verification”, Applied Optics, Apr. 10, 2001, vol. 40, No. 11, pp. 1806-1813. |
Taylor, “Virtual camera movement: The way of the future?”, American Cinematographer 77, 9 (Sep.), 93-100. |
Vaish et al., “Reconstructing Occluded Surfaces Using Synthetic Apertures: Stereo, Focus and Robust Measures”, Proceeding, CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—vol. 2, pp. 2331-2338. |
Vaish et al., “Synthetic Aperture Focusing Using a Shear-Warp Factorization of the Viewing Transform”, IEEE Workshop on A3DISS, CVPR, 2005, 8 pgs. |
Vaish et al., “Using Plane + Parallax for Calibrating Dense Camera Arrays”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004, 8 pgs. |
Veilleux, "CCD Gain Lab: The Theory", University of Maryland, College Park, Observational Astronomy (ASTR 310), Oct. 19, 2006, pp. 1-5 [online], [retrieved on May 13, 2014]. Retrieved from the Internet <URL: http://www.astro.umd.edu/~veilleux/ASTR310/fall06/ccd_theory.pdf>, 5 pgs. |
Vuong et al., “A New Auto Exposure and Auto White-Balance Algorithm to Detect High Dynamic Range Conditions Using CMOS Technology”, Proceedings of the World Congress on Engineering and Computer Science 2008, WCECS 2008, Oct. 22-24, 2008. |
Wang, “Calculation of Image Position, Size and Orientation Using First Order Properties”, 10 pgs. |
Wetzstein et al., “Computational Plenoptic Imaging”, Computer Graphics Forum, 2011, vol. 30, No. 8, pp. 2397-2426. |
Wheeler et al., “Super-Resolution Image Synthesis Using Projections Onto Convex Sets in the Frequency Domain”, Proc. SPIE, 2005, 5674, 12 pgs. |
Wikipedia, "Polarizing Filter (Photography)", http://en.wikipedia.org/wiki/Polarizing_filter_(photography), 1 pg. |
Wilburn, “High Performance Imaging Using Arrays of Inexpensive Cameras”, Thesis of Bennett Wilburn, Dec. 2004, 128 pgs. |
Wilburn et al., “High Performance Imaging Using Large Camera Arrays”, ACM Transactions on Graphics, Jul. 2005, vol. 24, No. 3, pp. 1-12. |
Wilburn et al., “High-Speed Videography Using a Dense Camera Array”, Proceeding, CVPR'04 Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 294-301. |
Wilburn et al., “The Light Field Video Camera”, Proceedings of Media Processors 2002, SPIE Electronic Imaging, 2002, 8 pgs. |
Wippermann et al., “Design and fabrication of a chirped array of refractive ellipsoidal micro-lenses for an apposition eye camera objective”, Proceedings of SPIE, Optical Design and Engineering II, Oct. 15, 2005, 59622C-1-59622C-11. |
Yang et al., “A Real-Time Distributed Light Field Camera”, Eurographics Workshop on Rendering (2002), pp. 1-10. |
Yang et al., “Superresolution Using Preconditioned Conjugate Gradient Method”, Source and date unknown, 8 pgs. |
Zhang et al., “A Self-Reconfigurable Camera Array”, Eurographics Symposium on Rendering, 2004, 12 pgs. |
Zomet et al., “Robust Super-Resolution”, IEEE, 2001, pp. 1-6. |
International Preliminary Report on Patentability for International Application PCT/US2014/067740, issued May 31, 2016, Mailed Jun. 9, 2016, 9 Pgs. |
Extended European Search Report for European Application No. 14763087.5, Search completed Dec. 7, 2016, Mailed Dec. 19, 2016, 9 Pgs. |
Aufderheide et al., "A MEMS-based Smart Sensor System for Estimation of Camera Pose for Computer Vision Applications", Research and Innovation Conference 2011, Jun. 29, 2011, pp. 1-10, XP055326747, University of Bolton, Retrieved from the Internet: URL: http://ubir.bolton.ac.uk/441/1/bolton_RIC_aufderheide.pdf, retrieved on Dec. 7, 2016, abstract; figure 4. |
Number | Date | Country | |
---|---|---|---|
20150237329 A1 | Aug 2015 | US |
Number | Date | Country | |
---|---|---|---|
61798673 | Mar 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14216968 | Mar 2014 | US |
Child | 14705903 | US |