Claims
- 1. In a system using a plurality of fixed imagers, a method to create a high quality virtual image, in real-time, as seen from a virtual viewpoint within a scene covered by the plurality of fixed imagers, comprising the steps of:
a. selecting at least two images corresponding to at least two of the plurality of fixed imagers to be used in creating the high quality virtual image;
b. creating at least two depth maps corresponding to the at least two images;
c. determining at least two sets of warp parameters corresponding to the at least two images using the at least two depth maps corresponding to said at least two images;
d. warping the at least two images to generate at least two warped images representing the virtual viewpoint using the at least two sets of warp parameters corresponding to said at least two images; and
e. merging the at least two warped images to create the high quality virtual image.
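For illustration, steps (d) and (e) might be sketched as below under a strong simplifying assumption: rectified views with purely horizontal parallax, so the warp reduces to a per-pixel disparity shift. The function names, the disparity parameterization, and the use of 0 as a hole marker are all hypothetical choices for this sketch, not the claimed warp parameters.

```python
import numpy as np

def warp_by_disparity(image, depth, baseline, focal):
    # Forward-warp one rectified grayscale image toward the virtual
    # viewpoint: disparity = focal * baseline / depth shifts each pixel
    # horizontally. Unfilled pixels stay 0 (holes).
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            shift = int(round(focal * baseline / depth[y, x]))
            if 0 <= x + shift < w:
                out[y, x + shift] = image[y, x]
    return out

def merge_warped(a, b):
    # Step (e): average where both warped images contribute, otherwise
    # take whichever one is non-empty (0 marks a hole in this sketch).
    both = (a != 0) & (b != 0)
    return np.where(both, (a + b) / 2.0, a + b)
```

Merging two independently warped views in this way lets each image fill holes left by occlusions in the other, which is one motivation for using at least two source imagers.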
- 2. The method of claim 1 further comprising the step of selecting the virtual viewpoint based on data supplied by an operator.
- 3. The method of claim 1 further comprising the step of selecting the virtual viewpoint based on tracking at least one feature in the scene.
- 4. The method of claim 1 wherein the step of creating the at least two depth maps comprises the steps of:
a. calculating a plurality of optical flow values between the at least two images;
b. calculating a plurality of parallax values corresponding to a plurality of image coordinates in the at least two images from the plurality of optical flow values; and
c. calculating the at least two depth maps from the plurality of image coordinates and the plurality of parallax values.
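Steps (b) and (c) of claim 4 can be sketched minimally as follows, assuming a calibrated rectified pair so that parallax is the flow component along a known epipolar direction and depth follows from triangulation. The function names and the focal/baseline parameters are illustrative assumptions, not claim language.

```python
import numpy as np

def flow_to_parallax(flow_x, flow_y, epipolar_dir):
    # Step (b): project per-pixel optical flow onto the (unit) epipolar
    # direction; the component along that line is the parallax.
    ex, ey = epipolar_dir
    return flow_x * ex + flow_y * ey

def parallax_to_depth(parallax, focal, baseline, eps=1e-6):
    # Step (c): triangulate, Z = f * B / parallax; eps guards against
    # division by zero at pixels with no measurable parallax.
    return focal * baseline / np.maximum(np.abs(parallax), eps)
```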
- 5. The method of claim 1 wherein creating the at least two depth maps comprises the steps of:
a. mounting a plurality of depth sensing sensors viewing the scene coincident with the plurality of fixed imagers;
b. selecting at least two depth sensing sensors corresponding to the at least two images;
c. measuring a plurality of depth values corresponding to a plurality of image coordinates in the at least two images with said at least two depth sensing sensors; and
d. creating the at least two depth maps from the plurality of depth values.
- 6. The method of claim 1 wherein creating the at least two depth maps comprises the steps of:
a. separating the at least two images into a plurality of segments, pixels of each segment having substantially homogeneous values;
b. calculating a depth value corresponding to each segment;
c. optimizing the depth values corresponding to each segment; and
d. creating the at least two depth maps from the plurality of optimized depth values.
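A minimal sketch of steps (b)-(d) of claim 6, assuming the segmentation (step a) is already given as an integer label map and that per-pixel depth estimates exist: each segment gets a single depth, here the median of its pixels as a simple robust stand-in for the claim's optimization step. The function name and the median choice are assumptions of this sketch.

```python
import numpy as np

def segment_depth_map(labels, pixel_depth):
    # Steps (b)-(d): assign every segment one depth -- the median of its
    # per-pixel estimates, robust to outliers -- and paint that value
    # back into a dense depth map.
    out = np.zeros(labels.shape, dtype=float)
    for s in np.unique(labels):
        mask = labels == s
        out[mask] = np.median(pixel_depth[mask])
    return out
```

Collapsing each color-homogeneous segment to one depth exploits the observation that depth discontinuities tend to coincide with color boundaries, which is the premise of segmentation-based depth methods.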
- 7. The method of claim 1 wherein the step of selecting the at least two images is based on a proximity of the virtual viewpoint to respective viewpoints corresponding to the at least two images.
- 8. The method of claim 1 wherein the step of selecting the at least two images selects exactly two images.
- 9. The method of claim 1 wherein the step of selecting the at least two images selects exactly three images.
- 10. The method of claim 9 wherein the exactly three images correspond to three fixed imagers from among the plurality of fixed imagers arranged in a triangle.
- 11. The method of claim 1 further comprising the step of placing the plurality of fixed imagers in a geometric pattern.
- 12. A virtual camera system to create a high quality virtual image, in real-time, as seen from a virtual viewpoint, comprising:
a. a plurality of fixed imagers;
b. image selection means for selecting an image from each of at least two of the plurality of fixed imagers for use in creating the high quality virtual image;
c. depth estimation means for creating at least two depth maps corresponding to the at least two images;
d. calculation means for calculating, based on the at least two depth maps, at least two sets of warp parameters that define respective warpings of the at least two images to the virtual viewpoint;
e. an image warper which applies the at least two sets of warp parameters from the calculation means to the at least two images respectively to create at least two warped images; and
f. an image merger to merge the at least two warped images to generate the high quality virtual image.
- 13. The system of claim 12 wherein the depth estimation means includes view-based volumetric mapping means to create depth maps of the images.
- 14. The system of claim 12 wherein the depth estimation means includes color segmentation depth calculation means to create depth maps of the images.
- 15. The system of claim 12 further comprising a plurality of depth sensing sensors aligned to view the scene coincident with the plurality of fixed imagers, whereby the at least two depth maps are generated using data provided by the plurality of depth sensing sensors.
- 16. A method which uses a plurality of fixed imagers to create at least one of a mosaic, a three dimensional model of a scene, and a virtual image, including the step of placing the plurality of fixed imagers in a hexagonal pattern.
- 17. The method of claim 16, further including the step of deploying the hexagonal pattern of imagers on a planar surface.
- 18. The method of claim 16, further including the step of deploying the hexagonal pattern of imagers on at least a portion of a tubular surface.
- 19. The method of claim 16, wherein an angular separation between a neighboring pair of the plurality of fixed imagers is less than 40°.
- 20. The method of claim 19 wherein the angular separation between the neighboring pair of the plurality of fixed imagers is greater than 30°.
- 21. The method of claim 16 wherein the hexagonal pattern is elongated in a direction to provide an aspect ratio equal to an aspect ratio of an image produced by one of the plurality of fixed imagers.
- 22. A method to create a local depth map of a scene from a first image including a first plurality of pixels and a second image including a second plurality of pixels, comprising the steps of:
a. segmenting the first image into a plurality of contiguous segments, each segment having a further plurality of pixels of the first plurality of pixels, wherein pixel values of the further plurality of pixels are within a predetermined pixel value range;
b. determining pixel depths of the further plurality of pixels by comparing the further plurality of pixels to the second plurality of pixels;
c. determining a segment surface based on the determined pixel depths of the further plurality of pixels; and
d. updating pixel depths of the further plurality of pixels to be within a predetermined depth range of the segment surface by comparing the further plurality of pixels to the second plurality of pixels.
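Steps (c) and (d) of claim 22 can be sketched as below, assuming the segment surface is a plane (as in claim 23) fit by least squares to the segment's (x, y, depth) samples, and that the claim's "updating … by comparing" step is approximated by simply clamping depths to a band around the fitted plane. Both function names and the clamping shortcut are assumptions of this sketch.

```python
import numpy as np

def fit_segment_plane(xs, ys, zs):
    # Step (c): least-squares plane z = a*x + b*y + c through a
    # segment's (x, y, depth) samples.
    xs, ys, zs = (np.asarray(v, dtype=float) for v in (xs, ys, zs))
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coeffs  # (a, b, c)

def clamp_to_plane(xs, ys, zs, coeffs, depth_range):
    # Step (d), simplified: pull outlier pixel depths back within
    # +/- depth_range of the fitted segment surface.
    xs, ys, zs = (np.asarray(v, dtype=float) for v in (xs, ys, zs))
    a, b, c = coeffs
    z_plane = a * xs + b * ys + c
    return np.clip(zs, z_plane - depth_range, z_plane + depth_range)
```

Fitting a surface per segment regularizes the noisy per-pixel depths from step (b), since pixels of a color-homogeneous segment usually lie on one smooth surface.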
- 23. The method of claim 22, wherein the segment surface is a plane.
- 24. The method of claim 22, wherein step (b) uses an optical flow method to compare the further plurality of pixels to the second plurality of pixels.
- 25. A method to create a local depth map of a scene from a first image showing the scene from a first viewpoint and a second image showing the scene from a second viewpoint, the first image including a first plurality of pixels and the second image including a second plurality of pixels, each pixel having a pixel value, comprising the steps of:
a. dividing the pixel values into a plurality of pixel value ranges;
b. segmenting the first image into a plurality of segments, each segment including a subset of pixels from the first plurality of pixels selected to have pixel values within one of the plurality of pixel value ranges;
c. determining a plurality of depth values for the plurality of segments;
d. warping the first image into a warped image showing the scene from the second viewpoint using the plurality of depth values;
e. calculating a first matching score of the second image and the warped image;
f. hypothesizing that a given segment of the plurality of segments has a hypothetical depth value equal to a neighboring depth value of a neighboring segment;
g. calculating a hypothetical matching score of the second image and a hypothetical warped image based on the hypothetical depth value; and
h. changing a given depth value of the given segment to the hypothetical depth value if the hypothetical matching score is better than the first matching score.
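Steps (f)-(h) amount to a greedy hypothesize-and-test loop over segment depths. A minimal sketch, assuming segments and their adjacency are given as dictionaries and that `score_fn` evaluates the warp-and-match of steps (d)-(e) with higher scores being better (all of which are assumptions of this sketch, including the function names):

```python
def refine_depths(depths, neighbors, score_fn):
    # Steps (f)-(h): hypothesize that each segment takes a neighbor's
    # depth, and keep the change only if the matching score improves.
    #   depths    : dict segment_id -> depth value
    #   neighbors : dict segment_id -> list of adjacent segment_ids
    #   score_fn  : callable(depths) -> score, higher is better
    best = score_fn(depths)
    for seg, nbrs in neighbors.items():
        for n in nbrs:
            trial = dict(depths)          # hypothetical depth assignment
            trial[seg] = depths[n]
            s = score_fn(trial)
            if s > best:                  # hypothesis wins (step h)
                depths[seg] = trial[seg]
                best = s
    return depths
```

Trying only neighboring segments' depths keeps the search cheap while targeting the common failure mode where one surface was split into segments with inconsistent depths.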
- 26. The method of claim 25, wherein step (c) includes the steps of:
c1. determining pixel depths of a selected subset of pixels corresponding to a selected segment by comparing the selected subset of pixels to the second plurality of pixels;
c2. determining a selected segment surface based on the determined pixel depths of the selected subset of pixels;
c3. updating the determined pixel depths of the selected subset of pixels to be within a predetermined depth range of the selected segment surface by comparing the selected subset of pixels to the second plurality of pixels; and
c4. determining depth values for all pixels of the selected segment based on the determined pixel depths of the selected subset of pixels and the selected segment surface.
- 27. The method of claim 25, wherein step (g) includes the steps of:
g1. determining a first portion of the second image which corresponds to a first visible portion of the given segment in the warped image;
g2. determining a second portion of the second image which corresponds to a second visible portion of the given segment in the hypothetical warped image;
g3. determining a changed portion of the second image which corresponds to a union of the first portion and the second portion;
g4. calculating a changed portion matching score of the changed portion of the second image and the warped image;
g5. calculating a changed portion hypothetical matching score of the second image and the hypothetical warped image; and
g6. calculating the hypothetical matching score from the first matching score, the changed portion matching score, and the changed portion hypothetical matching score.
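Step (g6) can be read as an incremental score update: if the matching score is additive over pixels, only the changed portion's contribution needs recomputing. A one-line sketch under that additivity assumption (the function name is hypothetical):

```python
def incremental_score(first_score, changed_old, changed_new):
    # Step (g6): swap the changed region's old contribution for its new
    # one instead of rescoring the whole image -- valid when the score
    # is a sum (or other separable combination) over pixels.
    return first_score - changed_old + changed_new
```

This is what makes the hypothesize-and-test loop of claim 25 affordable in real time: each depth hypothesis costs work proportional to the changed region, not the full image.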
Parent Case Info
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 60/241,261, filed Oct. 18, 2000 and U.S. Provisional Patent Application Serial No. 60/250,651, filed Dec. 1, 2000, the contents of which are incorporated herein by reference.
Government Interests
[0002] The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of contract nos. DAAB07-98-D-H751 and N00019-99-C-1385 awarded by DARPA.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60241261 | Oct 2000 | US |
| 60250651 | Dec 2000 | US |