Claims
- 1. A three-dimensional digital scanner comprising:
a multiple view detector responsive to a broad spectrum of light, said detector being operative to develop a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative positions and orientations with respect to said three dimensional object, said plurality of images depicting a plurality of surface portions of said three dimensional object to be scanned; a digital processor including a computational unit, said digital processor being coupled to said detector; a calibration ring comprising calibration objects integrated with an object support for inclusion in said plurality of images; said digital processor being operative to use imagery of said calibration objects to determine camera geometry relative to a reference frame; said object support being an apparatus designed to provide rotational view images of said object placed on said object support and said rotational view images being defined as images taken by relative rotation between the object and the detector; at least one source of white light for illuminating said three dimensional object for purposes of color image acquisition; at least one source of active light being projected onto the object; said digital processor being operative to use active range finding techniques to derive 3D spatial data describing the surface of said three dimensional object; said digital processor being operative to analyze said calibration objects to perform a coordinate transformation between reference coordinates and image space coordinates, said coordinate transformation being used to correlate color image data, viewing angle data and 3D spatial data; such that a digital representation of said object that includes both shape and surface coloration may be developed from said data.
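As an illustrative sketch (not part of the claim language), the coordinate transformation between reference coordinates and image space coordinates recited above can be modeled as a pinhole-camera projection; the function name, the extrinsics `(R, t)`, and the intrinsic matrix `K` are assumptions chosen for this example.

```python
import numpy as np

def project_to_image(points_ref, R, t, K):
    """Transform 3D points from reference (world) coordinates into image
    space coordinates using extrinsic rotation R, translation t, and a
    pinhole intrinsic matrix K. Returns pixel coordinates and the
    image-space depth used later for visibility computations."""
    cam = points_ref @ R.T + t          # reference frame -> camera frame
    uvw = cam @ K.T                     # camera frame -> homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
    return uv, cam[:, 2]                # depth = z coordinate in camera frame
```

Given camera geometry recovered from the calibration objects, such a transform lets color image data, viewing angle data and 3D spatial data be correlated per surface point.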
- 2. The apparatus of claim 1 wherein said digital processor is operative to partition said 3D spatial data into a plurality of 3D surface elements.
- 3. The apparatus of claim 1 further comprising at least one frontalness setting for selecting data for an archive of viewing angles and corresponding color image data to be processed for color mapping.
- 4. The apparatus of claim 1 wherein said digital processor is operative to perform a visibility computation to create an archive of viewing angle data and nonoccluded color image data and 3D spatial data corresponding to said 3D surface element.
- 5. The apparatus of claim 2 wherein said digital processor is operative to identify individual 3D surface elements.
- 6. The apparatus of claim 5 wherein said digital processor is operative to develop a viewing history that stores viewing angles, 3D spatial data and multiple color images of a surface patch corresponding to an identified 3D surface element.
- 7. The apparatus of claim 6 further comprising at least one frontalness setting for selecting data for an archive of viewing angles and corresponding color image data to be processed for color mapping.
- 8. The apparatus of claim 6 wherein said digital processor is operative to perform a visibility computation to create an archive of viewing angle data and nonoccluded color image data and 3D spatial data corresponding to said 3D surface element.
- 9. A three-dimensional digital scanner comprising:
a multiple view detector responsive to a broad spectrum of light, said detector being operative to develop a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative positions and orientations with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; a digital processor including a computational unit said digital processor being coupled to said image detector; said digital processor being operative to measure and record for each of said plurality of images, position information including the relative angular position of the detector with respect to the object; said digital processor being operative to use said position information to develop with said computational unit 3D coordinate positions and related image information of said plurality of surface portions of said object; a calibration ring comprising calibration objects integrated with an object support for inclusion in said plurality of images; said digital processor being operative to use imagery of said calibration object to determine camera geometry relative to a reference frame; said object support being an apparatus designed to provide rotational view images of said object placed on the object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; said digital processor being operative to use active range finding techniques to derive surface geometry data describing the surface of said object said digital processor being operative to partition said surface geometry data into a plurality of 3D surface elements; said digital processor being operative to identify individual 3D surface elements; said digital processor being operative to develop a viewing history that stores viewing angles and 3D spatial data related to multiple views of a surface patch corresponding to an identified 3D surface 
element; said digital processor being operative to have at least one frontalness setting for selecting data for an archive of viewing angles and corresponding color image data from said viewing history to be processed for color mapping purposes related to a view dependent representation of the surface image corresponding to said 3D surface element; said digital processor being operative to perform a visibility computation to create an archive of visible cameras related to the acquired surface imagery corresponding to said 3D surface elements, said archive of visible cameras including reference to 3D spatial data describing said 3D surface elements; said digital processor being operative to cross correlate said archive of visible cameras and said archive of viewing angles and corresponding image data to produce data which is incorporated into a functional expression such that a view dependent representation of said object that includes both shape and surface coloration may be developed.
- 10. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; providing a calibration ring comprised of patterned 3D calibration objects to be included in said plurality of images; providing incident light to illuminate said calibration ring and said object during the taking of said images; determining from said images of said calibration ring the viewing geometry and lighting geometry relative to a 3D surface element corresponding to a surface patch of a scanned object, said lighting geometry being determined by said incident lighting casting a shadow of said 3D calibration objects, the pattern of each 3D calibration object providing feature points at precisely known locations with respect to the 3D surface geometry of said calibration objects and including identifiable features to enhance the identification of said feature points over a range of viewing angles; deriving data by analyzing the images of said calibration objects and performing with said data a transformation between reference coordinates and image space coordinates; using said transformation, registering view dependent imagery of said surface patch with said 3D surface element; using said calibration ring to develop a viewing history, where said viewing history includes the viewing geometry, the corresponding angle of incident light and said view dependent imagery relative to said 3D surface element.
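As an illustrative sketch (not claim language), the lighting geometry determined from the shadow cast by a 3D calibration object can be computed from a vertical calibration post of known height: the incident light ray runs from the post tip to the detected shadow tip. The function name and the flat-support-plane assumption are illustrative choices.

```python
import numpy as np

def light_direction_from_shadow(base, height, shadow_tip):
    """Estimate the unit direction of incident light from the shadow cast
    by a vertical calibration post of known height standing at `base` on
    the object-support plane; `shadow_tip` is the detected tip of the
    shadow on that plane."""
    tip = np.asarray(base, dtype=float) + np.array([0.0, 0.0, height])
    ray = np.asarray(shadow_tip, dtype=float) - tip   # tip -> shadow tip
    return ray / np.linalg.norm(ray)                  # unit light direction
```

The recovered direction, together with the camera geometry from the calibration ring, supplies the lighting-geometry entries of the viewing history.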
- 11. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; providing a calibration ring comprised of patterned 3D calibration objects to be included in said plurality of images; the pattern of said calibration objects and/or of an adjacent object including identifiable features that will enhance the identification of said feature points over a range of viewing angles;
using a patterned 3D calibration object to determine the viewing geometry and angle of incident light relative to a 3D surface element corresponding to a surface patch of a scanned object; using said 3D calibration object to determine lighting geometry by casting a shadow of said incident light; providing from the pattern of said patterned 3D calibration object feature points at precisely known locations with respect to the 3D surface geometry of said calibration object; using data derived by analyzing said calibration objects, performing a transformation between reference coordinates and image space coordinates; said transformation being used to register view dependent imagery of said surface patch with said 3D surface element; using said calibration ring to develop a viewing history, where said history includes the viewing geometry, the corresponding angle of incident light and said view dependent imagery relative to said 3D surface element.
- 12. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising by use of said detector and said digital processor:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; providing a calibration ring comprised of patterned 3D calibration objects to be included in said plurality of images; said digital processor being operative to use imagery of said calibration objects to determine camera geometry data including viewing direction of said detector relative to a reference frame; acquiring said object imagery under a plurality of lighting scenarios, said lighting scenarios including at least white light used for acquiring color image data and active projection lighting used to provide imagery to be analyzed by an active range finding procedure; using said digital processor to develop 3D data and related color image data of said plurality of surface portions of said object; using active range finding techniques to derive 3D spatial data located in reference coordinate space; processing said 3D spatial data into a 3D surface representation comprised of identifiable 3D surface elements of said surface portions; performing a coordinate transformation to transform said 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation that compares image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using the color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object.
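As an illustrative sketch (not claim language), the visibility computation that compares image-space depth coordinates can be realized as a Z-buffer test: a transformed 3D surface element is non-occluded from a given viewing direction if and only if it holds the smallest depth at its pixel. The function name and the depth tolerance `eps` are assumptions of this example.

```python
import numpy as np

def visible_elements(uv, depth, image_shape, eps=1e-3):
    """Z-buffer visibility test over transformed 3D surface elements.
    `uv` holds pixel coordinates, `depth` the image-space depth of each
    element; an element is visible iff it matches the nearest depth
    recorded at its pixel (within `eps`)."""
    h, w = image_shape
    px = np.clip(uv.round().astype(int), [0, 0], [w - 1, h - 1])
    zbuf = np.full((h, w), np.inf)
    for (u, v), z in zip(px, depth):        # pass 1: nearest depth per pixel
        zbuf[v, u] = min(zbuf[v, u], z)
    # pass 2: an element is non-occluded if it ties the winning depth
    return np.array([z <= zbuf[v, u] + eps for (u, v), z in zip(px, depth)])
```

Color image data is then taken only from the non-occluded elements when building the shape-and-color representation.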
- 13. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising by use of said detector and said digital processor:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of positions and orientations including viewing angles relative to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; providing a calibration ring comprised of calibration objects to be included in said plurality of images, wherein the geometric location of calibration features with respect to the geometry of the calibration objects is known or is determined; said digital processor being operative to use imagery of said calibration objects to determine camera geometry data including angles of said detector relative to a reference frame; acquiring said plurality of images of said object under a plurality of lighting scenarios, said lighting scenarios including at least white light used for acquiring color of said images and active projection lighting used to provide imagery to be analyzed by an active range finding procedure; said images being located in image coordinate space; determining camera geometry data corresponding to the different positions and orientations of said detector used to capture the color image data; recording said color image data with corresponding camera geometry data to a point on a colormap image; whereby a three dimensional representation of said object to be scanned can be developed by said digital processor.
- 14. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising by use of said detector and said digital processor:
providing a calibration ring integrated with an object support for inclusion in said plurality of images, said calibration ring comprising calibration objects, said calibration objects being of known geometry and having calibration features of known geometry, the geometric location of the calibration features with respect to the geometry of the calibration objects being known, and the geometric configuration of the calibration objects relative to each other being known; using non-calibration imagery to enhance the identification of said calibration features; said digital processor being operative to use imagery of said calibration objects to determine detector location and orientation relative to a reference; acquiring said plurality of images of said object under a plurality of lighting scenarios including white light used for acquiring color image data and active projection lighting used to provide imagery to be analyzed by an active range finding procedure to determine the surface geometry of object portions; said object support being an apparatus designed to provide rotational view images of said object placed on said object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 15. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
providing a calibration ring comprising calibration objects integrated with an object support for inclusion in said plurality of images; said digital processor being operative to use imagery of said calibration objects to determine detector location and orientation relative to a reference frame; acquiring said plurality of images of said object under a plurality of lighting scenarios including white light used for acquiring color image data and active projection lighting used to provide imagery to be analyzed by an active range finding procedure to determine the surface geometry of said object portions; said object support being an apparatus designed to provide rotational view images of said object placed on said object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 16. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
providing a calibration ring comprising 3D calibration objects integrated with an object support for inclusion in said plurality of images; said digital processor being operative to use imagery of said 3D calibration objects to determine detector location and orientation relative to a reference frame; determining the angle of incident lighting by analyzing shadows cast by said 3D calibration objects relative to a reference frame; acquiring said plurality of images of said object under a plurality of lighting scenarios including white light used for acquiring color image data and active projection lighting used to provide imagery to be analyzed by an active range finding procedure to determine the surface geometry of said object portions; said object support being an apparatus designed to provide rotational view images of said object placed on said object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 17. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; said digital processor being operative to perform a coordinate transformation to transform 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation that compares image space depth coordinates between transformed 3D surface elements from a given viewing angle of said detector to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing angle; using color image data corresponding to said non-occluded 3D surface elements to develop a 3D shape and color representation of said object; recording pairings of said object's 3D surface elements with the viewing angles at which they are visible; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 18. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; said digital processor being operative to perform a coordinate transformation to transform 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation that compares image space depth coordinates between transformed 3D surface elements, from a given viewing angle of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing angle; using said visibility computation to develop a representation of the surface image; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 19. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; said digital processor being operative to perform a coordinate transformation to transform 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation that compares image space depth coordinates between transformed 3D surface elements, from a given viewing angle of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing angle; recording pairings of said object's non-occluded 3D surface elements with the image viewing angles at which they are visible; using said visibility computation to develop a representation of the surface image; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 20. A method for scanning a three dimensional object using a multiple view detector responsive to a broad spectrum of light and a digital processor having a computation unit, said digital processor being coupled to said multiple view detector, comprising:
developing a plurality of images of a three dimensional object to be scanned, said plurality of images being taken from a plurality of relative angles with respect to said object, said plurality of images depicting a plurality of surface portions of said object to be scanned; said digital processor being operative to perform a coordinate transformation to transform 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation that compares image space depth coordinates between transformed 3D surface elements, from a given viewing angle of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing angle; recording pairings of said object's non-occluded 3D surface elements with the image viewing angles at which they are visible; using said visibility computation to develop a representation of the surface image; whereby a three dimensional image of said object to be scanned can be developed by said digital processor that includes both shape and surface image.
- 21. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector, comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images, said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; recording data related to the plurality of images acquired from a plurality of views of said 3D surface element into a viewing history; using said viewing history to provide a structure by which independent computations related to surface image may be performed for each surface element of said plurality of 3D surface elements; said digital processor being operative to perform a coordinate transformation to transform said 3D spatial data and corresponding identifiable 3D surface elements from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; said color image data corresponding to said non-occluded 3D surface element being used to develop a 3D shape and color representation of said object; using reference data
provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 22. The method of claim 17 further comprising recording pairings of said object's non-occluded 3D surface elements with the image viewing angles at which they are visible.
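As an illustrative sketch (not claim language), the viewing history of claim 21 can be modeled as a per-element accumulator: each identified 3D surface element collects one record per view, so that later per-element computations (color mapping, reflectance estimates) can run independently. The class and field names here are assumptions of this example.

```python
from collections import defaultdict

class ViewingHistory:
    """Minimal viewing-history structure: for each identified 3D surface
    element, accumulate one record per view (viewing angle, camera
    geometry, sampled color) for independent per-element processing."""

    def __init__(self):
        self._records = defaultdict(list)

    def record(self, element_id, viewing_angle, camera_geometry, color_sample):
        # Append one view's data to the history of this surface element.
        self._records[element_id].append({
            "viewing_angle": viewing_angle,
            "camera_geometry": camera_geometry,
            "color": color_sample,
        })

    def views_of(self, element_id):
        # All recorded views of one surface element (empty list if none).
        return self._records[element_id]
```

A table, list, array, or matrix (claims 23 to 25) could equally back this structure; the dictionary of lists is simply the most direct Python rendering.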
- 23. The method of claim 17 further comprising storing said pairings as a list.
- 24. The method of claim 17 further comprising storing said pairings as an array.
- 25. The method of claim 17 further comprising storing said pairings as a matrix.
- 26. The method of claim 17 further comprising using said visibility computation to reconstruct the surface coloration of said object.
- 27. The method of claim 17 further comprising using said visibility computation to reconstruct at least one surface property of said object.
- 28. The method of claim 17 further comprising selecting visible 3D surface elements using the angle between the surface normal and the viewing angle to choose the visibility pairs likely to have the best-quality image data for a particular 3D surface element.
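As an illustrative sketch (not claim language), the frontalness selection of claim 28 can be computed as the cosine between the surface normal and the direction back toward the camera; the most frontal view wins. The function name and the convention that `view_dirs` point from the surface toward the camera are assumptions of this example.

```python
import numpy as np

def best_view(normal, view_dirs):
    """Among candidate viewing directions (unit or non-unit vectors from
    the surface element toward each camera), pick the one most frontal
    to the surface normal, i.e. maximizing cos(normal, view_dir)."""
    n = normal / np.linalg.norm(normal)
    v = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    frontalness = v @ n                     # cosine per candidate view
    return int(np.argmax(frontalness)), float(frontalness.max())
```

A frontalness setting then amounts to a threshold on this cosine when admitting views into the color-mapping archive.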
- 29. The method of claim 17 further comprising creating a view dependent representational format.
- 29. The method of claim 17 further comprising analyzing the immediate neighborhood of a Z-buffer 3D surface element to decide whether a point is close to being obscured, and so its visible color would be contaminated by the surface coloration of another surface element.
- 30. The method of claim 21 includes using multiple view dependent measurements of color images of each surface patch related to a corresponding 3D surface element to develop a view dependent representation of said color images corresponding to said surface patch and said 3D surface element.
- 31. The method of claim 21 includes using multiple view dependent measurements of surface properties for each surface patch related to a corresponding 3D surface element to develop a view dependent representation of said surface properties corresponding to said surface patch and said 3D surface element.
- 32. The method of claim 21 includes determining lighting geometry relative to a 3D surface element corresponding to a surface patch of a scanned object.
- 33. The method of claim 21 includes using a coordinate transformation to register view dependent imagery of said surface patch with said 3D surface element.
- 34. The method of claim 21 including camera geometry data in said viewing history.
- 35. The method of claim 21 using a table to store the viewing history data.
- 36. The method of claim 21 cross-correlating said data for processing purposes.
- 37. The method of claim 21 integrating color and reflectance data to derive a representation of surface image of a surface patch corresponding to a 3d surface element.
- 38. The method of claim 21 using said viewing history to store lighting geometry data.
- 39. The method of claim 21 using said viewing history to store data describing the angle of incident lighting.
- 40. The method of claim 21 using said viewing history to store data describing viewing geometry.
- 41. The method of claim 21 using said viewing history to store data describing surface reflectance estimates.
- 42. The method of claim 21 using said viewing history to store data describing lighting intensity.
- 43. The method of claim 21 using said viewing history to store data describing the brightness of ingoing light.
- 44. The method of claim 21 using said viewing history to store data describing the brightness of outgoing light.
- 45. The method of claim 21 using said viewing history to store data describing change in hue.
- 46. The method of claim 21 using said viewing history to store data describing change in chrominance.
- 47. The method of claim 21 where said viewing history stores change in saturation.
- 48. The method of claim 21 further comprising using said viewing history to store data describing change in intensity.
- 49. The method of claim 21 further comprising using said viewing history to store data describing change in brightness.
- 50. The method of claim 21 further comprising using said viewing history to store data describing the change in hue relative to lighting and viewing geometries.
- 51. The method of claim 21 further comprising using said viewing history to store data describing the change in chrominance relative to lighting and viewing geometries.
- 52. The method of claim 21 further comprising using said viewing history to store data describing the change in saturation relative to lighting and viewing geometries.
- 53. The method of claim 21 further comprising using said viewing history to store data describing the change in intensity relative to lighting and viewing geometries.
- 54. The method of claim 21 further comprising using said viewing history to store data describing the change in brightness relative to lighting and viewing geometries.
- 55. The method of claim 21 further comprising using said viewing history to store data describing the change in hue relative to lighting geometry.
- 56. The method of claim 21 further comprising using said viewing history to store data describing the change in chrominance relative to lighting geometry.
- 57. The method of claim 21 further comprising using said viewing history to store data describing the change in saturation relative to lighting geometry.
- 58. The method of claim 21 further comprising using said viewing history to store data describing the change in intensity relative to lighting geometry.
- 59. The method of claim 21 further comprising using said viewing history to store data describing the change in brightness relative to lighting geometry.
- 60. The method of claim 21 further comprising using said viewing history to store data describing the change in hue relative to viewing geometry.
- 61. The method of claim 21 further comprising using said viewing history to store data describing the change in chrominance relative to viewing geometry.
- 62. The method of claim 21 further comprising using said viewing history to store data describing the change in saturation relative to viewing geometry.
- 63. The method of claim 21 further comprising using said viewing history to store data describing the change in intensity relative to viewing geometry.
- 64. The method of claim 21 further comprising using said viewing history to store data describing the change in brightness relative to viewing geometry.
- 65. The method of claim 21 further comprising correlating the viewing geometry with the lighting geometry to produce a surface reflectance estimate.
- 66. The method of claim 21 further comprising calculating the surface reflectance estimate using a known value representing the lighting intensity.
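Outside the claim language, and purely as a non-normative illustration of claims 65 and 66: correlating viewing geometry with lighting geometry and a known lighting intensity permits a per-element reflectance estimate. The sketch below assumes a Lambertian model (observed = reflectance × light intensity × cos of the incidence angle); the function name and tuple representation are hypothetical, not part of the claims.

```python
import math

def estimate_reflectance(observed, light_intensity, normal, light_dir):
    # Lambertian model: observed = reflectance * L * (n . l), so for a
    # front-lit surface element the reflectance (albedo) estimate is
    # observed / (L * cos(incidence angle)).
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    n = [a / norm(normal) for a in normal]
    l = [a / norm(light_dir) for a in light_dir]
    cos_theta = sum(a * b for a, b in zip(n, l))
    if cos_theta <= 0.0:
        return None  # element faces away from the light; no estimate
    return observed / (light_intensity * cos_theta)
```

Averaging such estimates over multiple views and lighting conditions (claim 67) would reduce sensitivity to specular highlights and noise.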
- 67. The method of claim 21 further comprising acquiring multiple views of a surface patch corresponding to a 3D surface element under multiple lighting conditions.
- 68. The method of claim 21 further comprising correlating the angle of incident lighting with the surface normal corresponding to a 3D surface element to produce the lighting geometry.
- 69. The method of claim 21 further comprising correlating the camera geometry with the surface normal corresponding to a 3D surface element to produce the viewing geometry.
- 70. The method of claim 21 further comprising estimating the surface normal of a surface patch corresponding to a 3D surface element.
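For illustration only (the claims do not specify an estimation procedure), the surface normal of claim 70 can be sketched as the normalized cross product of two edge vectors of a triangular surface patch sampled from the 3D range data. The function name and point representation below are assumptions.

```python
def surface_normal(p0, p1, p2):
    # Unit normal of the triangle (p0, p1, p2): cross product of two
    # edge vectors, normalized to unit length. Any planar surface patch
    # sampled at three non-collinear range points can be handled this way.
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

In practice a least-squares plane fit over a larger neighborhood of range points would be more robust to sensor noise.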
- 71. The method of claim 21 further comprising producing a view dependent three-dimensional representation of said object that includes both shape and surface coloration information.
- 72. The method of claim 21 further comprising producing a view dependent three-dimensional representation of said object that includes shape, surface coloration, and surface property information.
- 73. The method of claim 21 further comprising producing a lighting dependent three-dimensional representation of said object that includes both shape and surface coloration information.
- 74. The method of claim 21 further comprising producing a lighting dependent three-dimensional representation of said object that includes shape, surface coloration, and surface property information.
- 75. The method of claim 21 further comprising producing a lighting dependent and viewing dependent three-dimensional representation of said object that includes both shape and surface coloration information.
- 76. The method of claim 21 further comprising producing a lighting dependent and viewing dependent three-dimensional representation of said object that includes shape, surface coloration, and surface property information.
- 77. The method of claim 21 further comprising producing a mathematical functional expression representing a view dependent representation of said object.
- 78. The method of claim 21 further comprising producing a mathematical functional expression representing a lighting dependent representation of said object.
- 79. The method of claim 21 further comprising producing a mathematical functional expression representing a lighting dependent and view dependent representation of said object.
- 80. The method of claim 21 further comprising producing a frequency based mathematical functional expression representing said object.
- 81. The method of claim 21 further comprising producing a frequency based mathematical functional expression representing said view dependent representation of said object.
- 82. The method of claim 21 further comprising producing a frequency based mathematical functional expression representing said lighting dependent representation of said object.
- 83. The method of claim 21 further comprising producing a frequency based mathematical functional expression representing said view dependent and lighting dependent representation of said object.
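Claims 80 through 83 recite frequency based functional expressions. As one hedged illustration (the claims do not mandate any particular basis), intensity samples taken at evenly spaced viewing angles around a surface element could be compressed into a short list of Fourier coefficients via a direct discrete Fourier transform; the function name and truncation scheme below are assumptions.

```python
import cmath

def fourier_coefficients(samples, num_terms):
    # Direct DFT of intensity samples taken at evenly spaced viewing
    # angles. Keeping only the first few coefficients gives a compact
    # frequency based stand-in for the full view dependent measurements.
    n = len(samples)
    return [
        sum(samples[k] * cmath.exp(-2j * cmath.pi * m * k / n)
            for k in range(n)) / n
        for m in range(num_terms)
    ]
```

A constant (view independent) intensity yields a single nonzero DC coefficient, while strongly view dependent effects such as specularity spread energy into higher frequencies.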
- 84. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; using active light projection and a corresponding active range finding procedure to produce 3D data describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D representation of said object may be developed to include shape and image.
- 85. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; using active light projection and a corresponding active range finding procedure to produce 3D data describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using said reference data and camera geometry data to synthesize 3D data that corresponds to a given view and corresponding image data describing said object; aligning surface patch image data that corresponds to a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data such that a 3D representation of said object may be developed to include shape and image.
- 86. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; using active light projection and a corresponding active range finding procedure to produce 3D data describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using said reference data and said camera geometry data to synthesize 3D data that corresponds to said set of multiple views and corresponding color image data describing said object; aligning viewing angle data and local color image data corresponding to said set of multiple views of a surface patch to a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data; such that view dependent 3D data may be mapped to said given surface element whereby a view dependent 3D representation of said object may be developed to include shape and image.
- 87. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; using active light projection and a corresponding active range finding procedure to produce 3D data describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using said reference data and said camera geometry data to synthesize 3D data that corresponds to said set of multiple views and related color image data describing said object; correlating viewing angle data and local color image data corresponding to said set of multiple views of a surface patch with a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data such that view dependent data correlated with said given surface element may be used to create a view dependent representation that correlates to said given 3D surface element such that a view dependent representation of said object may be developed to include shape and image.
- 88. The method of claim 84 further comprising recording data related to the plurality of images acquired from a plurality of views of said 3D surface element in a viewing history.
- 89. The method of claim 88 further comprising using said viewing history to provide a structure by which independent computations related to a surface element may be performed for each surface element of said plurality of 3D surface elements.
- 90. The method of claim 84 further comprising using at least one frontalness setting to select data from said viewing history for processing.
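The frontalness setting of claim 90 can be read as a threshold on how directly a recorded view faces the surface element, i.e. the cosine of the angle between the surface normal and the viewing direction. The sketch below is a non-normative illustration; the dictionary key, function name, and default threshold are hypothetical.

```python
def frontal_views(viewing_history, normal, threshold=0.7):
    # Select recorded views whose viewing direction is sufficiently
    # "frontal" to the surface element: the cosine of the angle between
    # the (unit) surface normal and the (unit) direction toward the
    # camera must meet the frontalness threshold.
    selected = []
    for view in viewing_history:
        d = view["view_dir"]  # assumed: unit vector, element -> camera
        cos_angle = sum(a * b for a, b in zip(normal, d))
        if cos_angle >= threshold:
            selected.append(view)
    return selected
```

Views that see the patch nearly edge-on carry little reliable color information, so filtering them out before per-element computations is a natural use of the viewing history.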
- 91. The method of claim 84 further comprising using said reference data and said camera geometry data to synthesize 3D data that corresponds to said set of multiple views and related color image data describing said object.
- 92. The method of claim 84 further comprising performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space.
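The coordinate transformation of claim 92 is conventionally a rigid transform from reference coordinates into camera coordinates, p_cam = R·p_ref + t, with R and t recovered from the calibration feature points. The following minimal sketch uses hypothetical names and a plain row-major 3×3 rotation matrix.

```python
def to_camera(point, rotation, translation):
    # Rigid transform from reference coordinate space to camera
    # coordinate space: p_cam = R * p_ref + t, where `rotation` is a
    # 3x3 row-major matrix and `translation` a 3-vector.
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
```

Projecting the resulting camera-space point through the lens model then yields the image space coordinates used by the visibility computation of claim 93.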
- 93. The method of claim 84 further comprising performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction.
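The visibility computation of claim 93, comparing image space depth coordinates of transformed surface elements, is essentially a z-buffer test. The sketch below assumes a pinhole projection and a hypothetical pixel-bucketing scheme; none of these names come from the claims.

```python
def project(point, focal=1.0):
    # Pinhole projection from camera coordinates to image coordinates;
    # the depth z is returned for the visibility comparison below.
    x, y, z = point
    return (focal * x / z, focal * y / z, z)

def visible(points, focal=1.0, pixel_size=0.1):
    # Simple z-buffer: for each pixel bucket keep only the surface
    # element nearest to the camera; every other element projecting to
    # that pixel is occluded from this viewing direction.
    zbuf = {}
    for idx, p in enumerate(points):
        u, v, depth = project(p, focal)
        key = (round(u / pixel_size), round(v / pixel_size))
        if key not in zbuf or depth < zbuf[key][0]:
            zbuf[key] = (depth, idx)
    return {idx for depth, idx in zbuf.values()}
```

Per claim 94, only color image data from the elements this test marks as non-occluded would be mapped onto the 3D shape.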
- 94. The method of claim 84 further comprising using color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object.
- 95. The method of claim 84 further comprising correlating viewing angle data and local color image data corresponding to said set of multiple views of a surface patch with a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data such that view dependent data correlated with said given surface element may be used to create a view dependent representation that correlates to said given 3D surface element.
- 96. The method of claim 84 further comprising aligning viewing angle data and local color image data corresponding to said set of multiple views of a surface patch with a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data such that view dependent data aligned with said given surface element may be used to create a view dependent representation that correlates to said given 3D surface element.
- 97. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; determining the surface normal of said 3D surface element; recording data related to the plurality of images acquired from a plurality of views of said 3D surface element in a viewing history; using said viewing history to provide a structure by which independent computations related to surface image may be performed for each surface element of said plurality of 3D surface elements; using at least one frontalness setting to select data for processing; using said reference data and said camera geometry data to synthesize 3D data that corresponds to said set of multiple views and related color image data describing said object; performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object; correlating viewing angle data and local color image data corresponding to said set of multiple views of a surface patch with a given 3D surface element using said reference data and camera geometry data common to both captured data and synthesized 3D data such that view dependent data correlated with said given surface element may be used to create a view dependent representation that correlates to said given 3D surface element.
- 98. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing detectable calibration feature points; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using the color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 99. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; analyzing detectable calibration feature points; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using the color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 100. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using the color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed using a viewing history, a frontalness setting, and a visibility computation.
- 101. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; computing the 3D surface normal of a plurality of 3D surface elements; recording data related to the plurality of images acquired from a plurality of views of said 3D surface element in a viewing history; using said viewing history to provide a structure by which independent computations related to surface image may be performed for each surface element of said plurality of 3D surface elements; performing a coordinate transformation to transform said 3D data from reference coordinate space to image coordinate space; performing a visibility computation to compare image space depth coordinates between transformed 3D surface elements, from a given viewing direction of said detector, to determine whether a 3D surface element is non-occluded with respect to all other 3D surface elements from said viewing direction; using the color image data corresponding to said non-occluded 3D surface element to develop a 3D shape and color representation of said object; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 102. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising 3D calibration objects located at known locations relative to each other, each 3D calibration object comprising detectable calibration feature points at known locations with respect to the geometry of each 3D calibration object; providing incident lighting to illuminate said 3D calibration object; using said 3D calibration object to determine the direction of incident lighting by casting a shadow by obstructing said incident light; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 103. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; said calibration ring being integrated with an object support; said object support being an apparatus designed to provide rotational view images of said object placed on said object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 104. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
- 105. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; said calibration ring being integrated with an object support; said object support being an apparatus designed to provide rotational view images of said object placed on said object support, and said rotational view images being defined as images taken by relative rotation between the object and the detector; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; using active light projection and a corresponding active range finding procedure to produce 3D data describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D representation of said object may be developed to include shape and image.
- 106. A method for scanning a three dimensional object using an image detector and a digital processing system coupled to the image detector comprising:
using white light to acquire a plurality of images of a three dimensional object; providing a calibration ring for inclusion in said plurality of images said calibration ring comprising detectable calibration feature points located at known locations relative to each other; using reference data provided by said calibration feature points to determine camera geometry relative to a reference frame; using said reference data to determine camera geometry data corresponding to each image of at least one set of multiple color images of an object; said multiple color images being located in image coordinate space; using active light projection and a corresponding active range finding procedure to produce 3D data located in reference coordinate space describing at least one surface portion of said object; processing said 3D data to produce a 3D surface description that is comprised of 3D surface elements each corresponding to a surface patch and related color image data; recording data related to the plurality of images acquired from a plurality of views of said 3D surface element in a viewing history; using said viewing history to provide a structure by which independent computations related to surface image may be performed for each surface element of said plurality of 3D surface elements; using reference data provided by said calibration feature points to align surface patch image data with 3D surface element data such that a 3D shape and color representation of said object can be developed.
REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of pending U.S. application Ser. No. __/______ filed on Aug. 10, 2002 and is a continuation-in-part of pending U.S. application Ser. No. 09/945,133 filed on Aug. 31, 2001, which is a continuation of U.S. application Ser. No. 09/236,727 filed on Jan. 25, 1999, now U.S. Pat. No. 6,288,385 issued on Sep. 11, 2001, which is a continuation of U.S. application Ser. No. 08/738,437 filed on Oct. 25, 1996, now U.S. Pat. No. 5,864,640 issued on Jan. 26, 1999, all of which are incorporated herein by reference.
[0002] Appendix A entitled SIGGRAPH 2001 Course: Acquisition and Visualization of Surface Light Fields is attached hereto and made a part hereof by reference.
Continuations (2)

| Relation | Number   | Date     | Country |
|----------|----------|----------|---------|
| Parent   | 09236727 | Jan 1999 | US      |
| Child    | 09945133 | Aug 2001 | US      |
| Parent   | 08738437 | Oct 1996 | US      |
| Child    | 09236727 | Jan 1999 | US      |

Continuation in Parts (1)

| Relation | Number   | Date     | Country |
|----------|----------|----------|---------|
| Parent   | 09945133 | Aug 2001 | US      |
| Child    | 10217944 | Aug 2002 | US      |