Claims
- 1. A method of recovering depth information for pixels of a base image representing a view of a scene, the method comprising the steps of:
detecting a plurality of pixels in a base image that represents a first view of a scene;
determining 3-D depth of the plurality of pixels in the base image by matching correspondence to a plurality of pixels in a plurality of images representing a plurality of views of the scene; and
tracing pixels in a virtual piecewise continuous depth surface by spatial propagation starting from the detected pixels in the base image by using the matching and corresponding plurality of pixels in the plurality of images to create the virtual piecewise continuous depth surface viewed from the base image, each successfully traced pixel being associated with a depth in the scene viewed from the base image.
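As a rough illustration (not the claimed implementation), the detect/match/trace steps of claim 1 can be sketched as a best-first propagation from seed pixels; every name below is a hypothetical stand-in for the claimed detection and matching steps:

```python
import heapq

def propagate_depth(seeds, match, neighbors):
    """Grow a piecewise-continuous depth surface outward from seed
    pixels, best (lowest-cost) match first.  `seeds` maps a pixel to
    (depth, matching cost); `match(q, d)` searches a small depth range
    around d at pixel q and returns (depth, cost) or (None, None)."""
    depth = {p: d for p, (d, _) in seeds.items()}
    frontier = [(c, p, d) for p, (d, c) in seeds.items()]
    heapq.heapify(frontier)
    while frontier:
        _, pixel, d = heapq.heappop(frontier)
        for q in neighbors(pixel):
            if q in depth:
                continue  # already traced by this or another front
            d_q, c_q = match(q, d)
            if d_q is not None:  # successfully traced: record its depth
                depth[q] = d_q
                heapq.heappush(frontier, (c_q, q, d_q))
    return depth
```

The priority queue makes reliable (low-cost) matches expand before doubtful ones, which is what lets a traced surface stay piecewise continuous.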
- 2. The method of claim 1, wherein the step of determining the 3-D depth comprises the steps of:
detecting a plurality of image pixels in a first image corresponding to a first view of a scene;
detecting a plurality of image pixels in at least a second image corresponding to a respective at least a second view of the scene, wherein the at least a second image deviates from the first image as a result of camera relative motion;
determining a first two-view correspondence between the plurality of detected image pixels in the first image and a plurality of detected image pixels in one of the at least a second image resulting in a first potential match set of candidate image pixels between the first image and the one of the at least a second image; and
determining a multiple-view correspondence between the plurality of detected image pixels in the first image and the plurality of detected image pixels in the at least a second image, the multiple-view correspondence being a refinement of the first two-view correspondence, resulting in a second potential match set of candidate image pixels between the first image and the at least a second image, wherein the second potential match set is based at least in part on a computation of reprojection error for matched pixels that resulted from a projective reconstruction of the second potential match set.
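The reprojection error mentioned in claim 2 measures, after projective reconstruction, how far a reconstructed 3-D point lands from the pixel actually detected in a view. A minimal pure-Python sketch (the 3×4 camera matrix `P` and homogeneous point `X` are illustrative):

```python
def project(P, X):
    """Apply a 3x4 projection matrix P to a homogeneous 3-D point X
    (length 4) and dehomogenize to pixel coordinates."""
    u, v, w = (sum(P[r][c] * X[c] for c in range(4)) for r in range(3))
    return u / w, v / w

def reprojection_error(P, X, observed):
    """Euclidean distance between the reprojected point and the
    detected pixel `observed` = (x, y) in that view."""
    x, y = project(P, X)
    return ((x - observed[0]) ** 2 + (y - observed[1]) ** 2) ** 0.5
```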
- 3. The method of claim 2, wherein the second potential match set is based at least in part on a least median of squares computation of the reprojection errors related to matched pixels in the second potential match set.
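Claim 3's least-median-of-squares criterion scores a candidate match set by the median of its squared reprojection errors, which stays small even when close to half the matches are outliers. A sketch:

```python
from statistics import median

def lmeds_score(errors):
    """Median of the squared reprojection errors of a match set;
    robust to up to ~50% outlier correspondences, unlike a mean,
    which a single gross mismatch can dominate."""
    return median(e * e for e in errors)
```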
- 4. The method of claim 1, wherein the tracing step comprises the step of:
propagating a front of a virtual piece of a continuous depth surface to at least one neighboring pixel starting from the detected pixels in the base image.
- 5. The method of claim 4, wherein the tracing step comprises the step of:
determining when a boundary is reached between two propagating fronts of virtual pieces of a continuous depth surface.
- 6. The method of claim 5, wherein the tracing step comprises the steps of:
comparing the matching costs of the two propagating fronts about the reached boundary; and stopping the propagation of the front with the higher compared matching cost.
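Claims 5 and 6 describe what happens when two independently propagating fronts collide: the pixel at the boundary is kept by the front with the lower matching cost, and the other front stops there. A minimal sketch with hypothetical names:

```python
def claim_pixel(owner, front_id, pixel, cost):
    """A front reaching `pixel` keeps it (and keeps propagating) only
    if no front holds the pixel yet, or its matching cost beats the
    current holder's; otherwise the higher-cost front stops here."""
    held = owner.get(pixel)
    if held is None or cost < held[1]:
        owner[pixel] = (front_id, cost)
        return True   # this front continues through the pixel
    return False      # boundary reached; this front's propagation stops
```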
- 7. The method of claim 1, wherein the tracing step comprises the step of:
propagating a front of a virtual piece of a continuous depth surface to at least one neighboring pixel surrounded by a predefined size window in the continuous depth surface.
- 8. The method of claim 7, wherein the tracing step comprises the step of:
determining when a boundary is reached between two propagating fronts of virtual pieces of a continuous depth surface.
- 9. The method of claim 8, wherein the tracing step comprises the steps of:
comparing the matching costs of the two propagating fronts about the reached boundary; and stopping the propagation of the front with the higher compared matching cost.
- 10. The method of claim 9, wherein the matching cost is determined by computing the summation of all the normalized cross-correlations between a first image window of a pre-determined size in the base view, and a second image window of the same pre-determined size in a reference view.
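The matching cost in claim 10 is built from normalized cross-correlations between same-sized windows in the base and reference views. A zero-mean NCC over flattened windows might look like this (a sketch, not the patented implementation):

```python
from math import sqrt

def ncc(w0, w1):
    """Zero-mean normalized cross-correlation of two equal-size
    windows (flattened to intensity lists): +1 for a perfect match,
    -1 for a perfectly inverted one."""
    n = len(w0)
    m0, m1 = sum(w0) / n, sum(w1) / n
    num = sum((a - m0) * (b - m1) for a, b in zip(w0, w1))
    den = sqrt(sum((a - m0) ** 2 for a in w0) *
               sum((b - m1) ** 2 for b in w1))
    return num / den if den else 0.0

def matching_cost(base_window, ref_windows):
    """Per claim 10: sum the NCC of the base-view window against the
    corresponding window in each reference view (higher = better)."""
    return sum(ncc(base_window, w) for w in ref_windows)
```

Subtracting the window means makes the score invariant to additive brightness changes between views, which is why NCC is preferred over raw correlation for multi-view matching.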
- 11. The method of claim 10, further comprising a step of:
rectification of at least one pair of images corresponding to the base view of the scene and at least one of the reference views.
- 12. An image processing system comprising:
a memory;
a controller/processor electrically coupled to the memory;
an image matching module, electrically coupled to the controller/processor and to the memory, for providing a plurality of seed pixels that represent 3-D depth of the plurality of pixels in the base image view of a scene by matching correspondence to a plurality of pixels in a plurality of images representing a plurality of views of the scene; and
a propagation module, electrically coupled to the controller/processor and to the memory, for tracing pixels in a virtual piecewise continuous depth surface by spatial propagation starting from the provided plurality of seed pixels in the base image by using the matching and corresponding plurality of pixels in the plurality of images to create the virtual piecewise continuous depth surface viewed from the base image, each successfully traced pixel being associated with a depth in the scene viewed from the base image.
- 13. The image processing system of claim 12, further comprising at least one camera interface, electrically coupled to the controller/processor, for sending image information from at least one camera to the controller/processor.
- 14. The image processing system of claim 12, wherein the controller/processor, the memory, the image matching module, and the propagation module, are implemented in at least one of an integrated circuit, a circuit supporting substrate, and a scanner.
- 15. The image processing system of claim 12, wherein the propagation module is further for determining when a boundary is reached between two propagating fronts of virtual pieces of a continuous depth surface.
- 16. The image processing system of claim 15, wherein the propagation module is further for comparing the matching costs of the two propagating fronts about the reached boundary; and
stopping the propagation of the front with the higher compared matching cost.
- 17. The image processing system of claim 12, wherein the propagation module is further for propagating a front of a virtual piece of a continuous depth surface to at least one neighboring pixel surrounded by a predefined size window in the continuous depth surface.
- 18. A computer readable medium including computer instructions for a 3-D image reconstruction computer system, the computer instructions comprising instructions for:
detecting a plurality of pixels in a base image that represents a first view of a scene;
determining 3-D depth of the plurality of pixels in the base image by matching correspondence to a plurality of pixels in a plurality of images representing a plurality of views of the scene; and
tracing pixels in a virtual piecewise continuous depth surface by spatial propagation starting from the detected pixels in the base image by using the matching and corresponding plurality of pixels in the plurality of images to create the virtual piecewise continuous depth surface viewed from the base image, each successfully traced pixel being associated with a depth in the scene viewed from the base image.
- 19. The computer readable medium of claim 18, wherein the step of determining the 3-D depth comprises the steps of:
detecting a plurality of image pixels in a first image corresponding to a first view of a scene;
detecting a plurality of image pixels in at least a second image corresponding to a respective at least a second view of the scene, wherein the at least a second image deviates from the first image as a result of camera relative motion;
determining a first two-view correspondence between the plurality of detected image pixels in the first image and a plurality of detected image pixels in one of the at least a second image resulting in a first potential match set of candidate image pixels between the first image and the one of the at least a second image; and
determining a multiple-view correspondence between the plurality of detected image pixels in the first image and the plurality of detected image pixels in the at least a second image, the multiple-view correspondence being a refinement of the first two-view correspondence, resulting in a second potential match set of candidate image pixels between the first image and the at least a second image, wherein the second potential match set is based at least in part on a computation of reprojection error for matched pixels that resulted from a projective reconstruction of the second potential match set.
- 20. The computer readable medium of claim 19, wherein the second potential match set is based at least in part on a least median of squares computation of the reprojection errors related to matched pixels in the second potential match set.
- 21. The computer readable medium of claim 18, wherein the tracing step comprises the step of:
propagating a front of a virtual piece of a continuous depth surface to at least one neighboring pixel starting from the detected pixels in the base image.
- 22. The computer readable medium of claim 21, wherein the tracing step comprises the step of:
determining when a boundary is reached between two propagating fronts of virtual pieces of a continuous depth surface.
- 23. The computer readable medium of claim 22, wherein the tracing step comprises the steps of:
comparing the matching costs of the two propagating fronts about the reached boundary; and stopping the propagation of the front with the higher compared matching cost.
- 24. The computer readable medium of claim 18, wherein the tracing step comprises the step of:
propagating a front of a virtual piece of a continuous depth surface to at least one neighboring pixel surrounded by a predefined size window in the continuous depth surface.
- 25. The computer readable medium of claim 24, wherein the tracing step comprises the step of:
determining when a boundary is reached between two propagating fronts of virtual pieces of a continuous depth surface.
- 26. The computer readable medium of claim 25, wherein the tracing step comprises the steps of:
comparing the matching costs of the two propagating fronts about the reached boundary; and stopping the propagation of the front with the higher compared matching cost.
- 27. The computer readable medium of claim 26, wherein the matching cost is determined by computing a normalized cross-correlation between the windows associated with the two propagating fronts about the reached boundary.
- 28. The computer readable medium of claim 27, further comprising a step of:
rectification of at least one pair of images corresponding to at least one pair of views of the scene.
- 29. A method of recovering depth information for pixels of a base image representing a view of a scene, the method comprising the steps of:
tracing at least one parameter surface, each of the at least one parameter surface traced starting from at least one predetermined seed pixel point; and
calculating a derivative of the function E(g) with respect to the parameter g by using a finite difference to minimize the following equation:

E(g) = Σ_{i=1}^{m} { 1 − NCC_ω[ I_i(ū(g), v̄(g)), I_0(u_j, v_j) ] },

where NCC_ω[I_i(ū, v̄), I_0(u_j, v_j)] is a normalized cross-correlation between I_i(ū, v̄) and I_0(u_j, v_j); I_i(ū, v̄) is a first window of size ω×ω centered at pixel (ū, v̄) in I_i; I_0(u_j, v_j) is a second window of size ω×ω centered at pixel (u_j, v_j) in I_0; I_i is a reference image and I_0 is the base image; (ū, v̄) is the pixel point closest to P_i(C_0 + gL(u_j, v_j)); C_0 + gL(u_j, v_j) is a 3-D point that projects to (u_j, v_j) in the base view and to P_i(C_0 + gL(u_j, v_j)) in the i-th reference view; C_0 is the base image camera's center of projection; L(u_j, v_j) is the unit vector from C_0 to the point on the image plane that corresponds to the pixel (u_j, v_j); and g is a depth parameter.
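The derivative in claim 29 can be approximated with a central finite difference and used in a simple descent on the depth parameter g. A sketch with illustrative step sizes (the claimed method only specifies finite differencing, not this particular descent):

```python
def minimize_depth(E, g0, step=0.1, h=1e-4, iters=100):
    """Descend the matching cost E(g) along the depth parameter g,
    approximating dE/dg by a central finite difference; `step`, `h`,
    and `iters` are illustrative choices."""
    g = g0
    for _ in range(iters):
        dE = (E(g + h) - E(g - h)) / (2 * h)  # central finite difference
        g -= step * dE
    return g
```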
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application generally relates to the teachings of U.S. patent application Ser. No. 09/825,266, entitled “Methods And Apparatus For Matching Multiple Images” filed on Apr. 3, 2001, which is assigned to the same assignee as the present patent application and the teachings of which are herein incorporated by reference.