Claims
- 1. A compound camera system comprising:
a plurality of component cameras capable of generating image data of an object; and
a data processor capable of receiving first image data from a first one of said plurality of component cameras and second image data from a second one of said plurality of component cameras and generating a virtual image therefrom,
wherein said data processor generates said virtual image by back-projecting virtual pixel data (u,v) to generate point data (x,y,z) located at a depth, z=Z1, associated with a first object plane of said object and then projecting said point data (x,y,z) to generate first pixel data (u1,v1) located at the image plane of said first image data.
- 2. The compound camera system as set forth in claim 1 wherein said data processor generates said virtual image by projecting point data (x,y,z) located at said depth, z=Z1, associated with said first object plane of said object, to generate second pixel data (u2,v2) located at the image plane of said second image data.
- 3. The compound camera system as set forth in claim 2 wherein said data processor generates said virtual image by combining color of said first pixel data (u1,v1) and color of said second pixel data (u2,v2).
- 4. The compound camera system as set forth in claim 3 wherein said data processor combines color of said first pixel data (u1,v1) and color of said second pixel data (u2,v2) by multiplying said first color by a first weighting factor to form a first product, multiplying said second color by a second weighting factor to form a second product, adding said first and second products, and dividing the summation of said products by the summation of said weighting factors.
- 5. The compound camera system as set forth in claim 4 wherein each of said weighting factors is proportional to cos(φ), where φ is the angle between the virtual ray and the corresponding ray from a component camera.
- 6. The compound camera system as set forth in claim 1 wherein said data processor back-projects said virtual pixel data (u,v) to generate said point data (x,y,z) using an inverse Plane Projection Matrix and projects said point data (x,y,z) to generate said first pixel data (u1,v1) using a first Plane Projection Matrix.
- 7. The compound camera system as set forth in claim 6 wherein said data processor projects said point data (x,y,z) to generate said second pixel data (u2,v2) using a second Plane Projection Matrix.
- 8. The compound camera system as set forth in claim 7 wherein said data processor is further capable of adjusting a focus of said compound camera system by back-projecting said virtual pixel data (u,v) to generate said point data (x,y,z) located at a depth, z=Z2, associated with a second object plane of said object and projecting said point data (x,y,z) to generate said first pixel data (u1,v1) located at said image plane of said first image data.
- 9. The compound camera system as set forth in claim 8 wherein said data processor is further capable of adjusting said focus of said compound camera system by projecting said point data (x,y,z) located at a depth, z=Z2, associated with said second object plane of said object to generate second pixel data (u2,v2) located at said image plane of said second image.
- 10. A method of generating a virtual image using a compound camera system comprising the steps of:
generating image data of an object from a plurality of component cameras;
receiving first image data from a first one of the plurality of component cameras and second image data from a second one of the plurality of component cameras;
projecting virtual pixel data (u,v) to generate point data (x,y,z) located at a depth, z=Z1, associated with a first object plane of the object;
projecting the point data (x,y,z) to generate first pixel data (u1,v1) located at the image plane of the first image data; and
projecting the point data (x,y,z) to generate second pixel data (u2,v2) located at the image plane of the second image data.
- 11. The method of generating a virtual image as set forth in claim 10 wherein the data processor generates the virtual image by combining the first pixel data (u1,v1) and the second pixel data (u2,v2).
- 12. The method of generating a virtual image as set forth in claim 11 wherein the data processor combines the first pixel data (u1,v1) and the second pixel data (u2,v2) by multiplying the color of first pixel data (u1,v1) by a first weighting factor to form a first product, multiplying the color of second pixel data (u2,v2) by a second weighting factor to form a second product, adding the first and second products, and dividing the summation of products by the summation of weighting factors.
- 13. The method of generating a virtual image as set forth in claim 12 wherein the first and second weighting factors are positive fractional values that are proportional to cos(φ) where φ is the angle between the virtual ray and the corresponding ray from the component camera.
- 14. The method of generating a virtual image as set forth in claim 10 wherein the data processor projects the virtual pixel data (u,v) to generate the point data (x,y,z) using an inverse Plane Projection Matrix and projects the point data (x,y,z) to generate the first pixel data (u1,v1) using a first Plane Projection Matrix.
- 15. The method of generating a virtual image as set forth in claim 14 wherein the data processor projects the point data (x,y,z) to generate the second pixel data (u2,v2) using a second Plane Projection Matrix.
- 16. The method of generating a virtual image as set forth in claim 15 wherein the data processor is further capable of adjusting a focus of the compound camera system by projecting the virtual pixel data (u,v) to generate the point data (x,y,z) located at a depth, z=Z2, associated with a second object plane of the object and projecting the point data (x,y,z) to generate the first pixel data (u1,v1) located at the image plane of the first image.
- 17. The method of generating a virtual image as set forth in claim 16 wherein the data processor is further capable of adjusting the focus of the compound camera system by projecting the point data (x,y,z) located at the depth, z=Z2, associated with the second object plane of the object to generate second pixel data (u2,v2) located at the image plane of the second image.
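The rendering pipeline recited in claims 1-5 and 10-13 can be sketched in code: back-project a virtual pixel (u,v) to a point (x,y,z) on the object plane z=Z, project that point into each component camera's image plane, and blend the sampled colors with weights proportional to cos(φ). This is a minimal illustration, not the patented implementation; the function and variable names, the nearest-neighbor sampling, and the use of a 3x3 plane homography to represent the Plane Projection Matrix are all assumptions made for the sketch.

```python
import numpy as np

def plane_projection_matrix(K, R, t, Z):
    # Assumed form of the Plane Projection Matrix: a 3x3 homography
    # mapping plane coordinates (x, y, 1) on the world plane z = Z to
    # homogeneous pixel coordinates, derived from K [R | t].
    return K @ np.column_stack((R[:, 0], R[:, 1], R[:, 2] * Z + t))

def sample(image, u, v):
    # Nearest-neighbor color lookup with clamping (illustrative only).
    h, w = image.shape[:2]
    iu = int(np.clip(round(u), 0, w - 1))
    iv = int(np.clip(round(v), 0, h - 1))
    return image[iv, iu].astype(float)

def render_virtual_pixel(u, v, Z, virtual_cam, component_cams):
    # virtual_cam: (K, R, t); component_cams: list of (K, R, t, image).
    K_v, R_v, t_v = virtual_cam
    H_v = plane_projection_matrix(K_v, R_v, t_v, Z)

    # Back-project virtual pixel (u, v) through the inverse Plane
    # Projection Matrix to the point (x, y, Z) on the object plane.
    xy1 = np.linalg.inv(H_v) @ np.array([u, v, 1.0])
    x, y = xy1[:2] / xy1[2]
    point = np.array([x, y, Z])

    # Virtual ray from the virtual camera center C = -R^T t to the point.
    C_v = -R_v.T @ t_v
    ray_v = (point - C_v) / np.linalg.norm(point - C_v)

    color_sum = np.zeros(3)
    weight_sum = 0.0
    for K_i, R_i, t_i, image in component_cams:
        # Project the point into this component camera's image plane.
        H_i = plane_projection_matrix(K_i, R_i, t_i, Z)
        uv1 = H_i @ np.array([x, y, 1.0])
        ui, vi = uv1[:2] / uv1[2]

        # Weight proportional to cos(phi), the angle between the
        # virtual ray and the corresponding component-camera ray.
        C_i = -R_i.T @ t_i
        ray_i = (point - C_i) / np.linalg.norm(point - C_i)
        w = max(float(np.dot(ray_v, ray_i)), 0.0)

        color_sum += w * sample(image, ui, vi)
        weight_sum += w

    # Divide the summation of weighted colors by the summation of weights.
    return color_sum / weight_sum if weight_sum > 0 else np.zeros(3)
```

Sweeping Z over candidate depths (z=Z1, z=Z2, ...) and re-rendering, as in claims 8-9 and 16-17, is what lets the system move the plane of best focus without any physical lens adjustment.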
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present invention is related to those disclosed in U.S. patent application Ser. No. [Attorney Docket No. 02-LJ-080], filed concurrently herewith, entitled “Compound Camera And Methods For Implementing Auto-Focus, Depth-Of-Field And High-Resolution Functions”. Application Ser. No. [Attorney Docket No. 02-LJ-080] is commonly assigned to the assignee of the present invention. The disclosure of this related patent application is hereby incorporated by reference for all purposes as if fully set forth herein.