1. Field of the Invention
The disclosure relates to the three-dimensional (“3D”) surface measurement of material objects.
2. Background Discussion
There are known devices and methods for performing non-contact measurement of the 3D surface shape of a material object, such as through the use of a structured-light or stereoscopic triangulation method. The structured-light triangulation method of measuring the surface shape of material objects utilizes the projection of light that is, generally, amplitude-modulated, time-modulated and/or wavelength-modulated ("structured light") onto the surface of the object. An image of the structured light projected onto the surface of the object (hereinafter referred to as "the image") is captured by a camera in a direction different from the direction in which the structured light is projected. The image is then analyzed to calculate the shape of the object's surface.
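By way of illustration, the depth of a surface point can be recovered from the known camera/projector geometry. The following is a minimal sketch, assuming a single projected stripe and a simplified rectified geometry in which the projector is offset from the camera along the x axis; the function name and parameters are illustrative and not taken from the disclosure:

```python
import numpy as np

def triangulate_depth(x_px, f_px, baseline, proj_angle_rad):
    """Recover depth from one structured-light stripe (illustrative model).

    x_px           -- horizontal image coordinate of the stripe, in pixels,
                      measured from the principal point
    f_px           -- camera focal length in pixels
    baseline       -- camera-to-projector offset along the x axis
    proj_angle_rad -- angle of the projected plane relative to the optical axis
    """
    # The camera ray satisfies X = x_px * Z / f_px; the projected plane
    # satisfies X = baseline - Z * tan(proj_angle_rad). Intersecting the
    # two constraints and solving for Z gives:
    return baseline * f_px / (x_px + f_px * np.tan(proj_angle_rad))
```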
3D scanners use such triangulation methods to measure the surface shape of an entire object. However, a 3D scanner can typically only measure a portion of the surface of the object at a time, so it is typically necessary to make a series of scans from various angles and then merge the resulting 3D images together in order to measure the shape of the entire surface. To avoid noticeable errors when merging the series of scans together, it is necessary to know the point and direction from which each scan was made with an accuracy no less than the accuracy of each individual scan.
A number of solutions have previously been attempted to achieve this level of accuracy, including: 1) fixing the 3D scanner in place and using a precise rotating table to mount the object, 2) using a precision device to move the scanner around the object, 3) identifying the position and orientation of the scanner by using an array of sensors (e.g., radio, optical, or magnetic sensors) positioned around the vicinity of the object to determine the position of an emitter installed inside the 3D scanner, and 4) using several 3D scanners mounted on a rigid structure distributed about the object.
However, each of the prior attempted solutions suffers from a number of shortcomings: high cost, bulkiness, non-portability, long scanning times, an inability to scan moving objects, and certain application limitations (e.g., they cannot be used to scan sensitive objects, such as museum exhibits, that cannot be moved or touched).
In accordance with one or more embodiments, a system and method are provided for the multiframe surface measurement of the shape of material objects. The system and method include capturing a plurality of images of portions of the surface of the object being measured and merging the captured images together in a common reference system. In one aspect, the shape and/or texture of a complex-shaped object can be measured using a 3D scanner by capturing multiple images from different positions and perspectives and subsequently merging the images in a common coordinate system. Alignment is achieved by capturing images of both a portion of the surface of the object and a reference surface having known characteristics (e.g., shape and/or texture). This allows the position and orientation of the scanner to be determined in the coordinate system of the reference object. The position of the device capturing an image of the object can further be controlled with respect to the device capturing the image of the reference object to ensure consistent precision between captured images of the object.
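The determination of the scanner's position and orientation from an image of the reference object can be sketched as a standard perspective-n-point (PnP) problem. The sketch below assumes OpenCV is available and that the reference object carries detectable features with known 3D coordinates; the function name and data layout are illustrative, not from the disclosure:

```python
import numpy as np
import cv2  # assumption: OpenCV is used for the PnP solve

def scanner_pose_in_reference_frame(ref_pts_3d, ref_pts_2d, K):
    """Estimate the scanner pose in the reference object's coordinate system.

    ref_pts_3d -- (N,3) known feature coordinates on the reference object
    ref_pts_2d -- (N,2) detected pixel locations of those features
    K          -- 3x3 camera intrinsic matrix
    """
    ok, rvec, tvec = cv2.solvePnP(ref_pts_3d.astype(np.float64),
                                  ref_pts_2d.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation taking reference coords to camera coords
    T = np.eye(4)                # 4x4 rigid transform: camera <- reference
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return np.linalg.inv(T)      # reference <- camera: the scanner's pose
```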
The above-mentioned features and objects of the present disclosure will become more apparent with reference to the following description taken in conjunction with the accompanying drawings wherein like reference numerals denote like elements and in which:
In general, the present disclosure includes a system and method for the multiframe surface measurement of the shape of material objects in accordance with one or more embodiments of the present disclosure. Certain embodiments of the present disclosure will now be discussed with reference to the aforementioned figures, wherein like reference numerals refer to like components.
Referring now to
With further reference to
It is then determined in step 206 whether the scan is complete and the desired image of the object 104 has been captured in its entirety. If the image capture is not complete, then the field of view 108 of the scanning device 102 is adjusted with respect to the object 104 in step 208 in order to capture an image of another portion of the object 104. For example, the entire scanning device 102 can be moved, or the camera field of view 108 can be adjusted. The process then returns to steps 202/204 to perform another image capture of at least a portion of the object 104 and at least a portion of the reference object 106. The above steps are successively repeated until enough images (e.g., frames) of the desired portion of the object 104 have been captured. In each position of the scanning device 102, the field of view 108 must contain at least a portion of the reference object 106 sufficient for determining the position and orientation of the scanning device 102 in the coordinate system of the reference object 106. In step 210, the multiple captured images or frames are merged into the common coordinate system of the reference object 106 using any known coordinate transformation technique. Each captured image can be considered a frame, and the frames are merged to form the multiframe image of the object 104 from which a surface measurement of the object 104 can be calculated.
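The capture-and-merge loop of steps 202 through 210 can be summarized in code form. In the sketch below, `scanner` is a hypothetical device interface invented for illustration (none of its methods are named in the disclosure); each frame's pose is recovered from the imaged portion of the reference object and used to map the frame's surface points into the reference object's coordinate system:

```python
import numpy as np

def scan_object(scanner):
    """Capture frames and merge them in the reference object's frame."""
    merged = []                                    # surface points, reference frame
    while not scanner.coverage_complete():         # step 206: scan done?
        frame = scanner.capture()                  # steps 202/204: object + reference
        T_ref_from_cam = scanner.pose_from_reference(frame)  # pose via reference object
        pts_h = np.c_[frame.object_points, np.ones(len(frame.object_points))]
        merged.append((T_ref_from_cam @ pts_h.T).T[:, :3])   # step 210: transform + merge
        scanner.adjust_field_of_view()             # step 208: move to the next portion
    return np.vstack(merged)                       # one multiframe point set
```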
In this manner, and in one aspect, the shape and/or texture of a complex-shaped object 104 can be measured using a scanning device 102 by capturing multiple images of the object 104 from different points of view and directions and subsequently merging the captured images or frames in a common coordinate system. Alignment is achieved by capturing images of both a portion of the surface of the object 104 and a reference object 106 having known characteristics (e.g., shape and texture). This allows the position and orientation of the scanning device 102 to be determined in the coordinate system of the reference object 106.
Referring now to
In one or more embodiments, one of the fields of view 112 and 114 can be used for a reference image, such that the position and orientation of the scanning device 102 associated with the other of the fields of view 112 and 114 can be converted into the coordinate system of the reference object 106.
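Where the two fields of view belong to rigidly coupled cameras, this conversion reduces to composing rigid transforms. A minimal sketch, assuming the fixed transform between the two cameras is known from a prior calibration (an assumption, not stated in the text):

```python
import numpy as np

def pose_in_reference_frame(T_ref_from_camA, T_camA_from_camB):
    """Express camera B's pose in the reference object's coordinate system.

    T_ref_from_camA  -- 4x4 pose of camera A (which sees the reference object)
    T_camA_from_camB -- 4x4 fixed mounting transform between the two cameras,
                        assumed known from calibration
    """
    return T_ref_from_camA @ T_camA_from_camB   # reference <- camB
```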
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
In one or more embodiments, each step of moving the scanning devices 150a and 150b to another position around the object 104 comprises: a) moving one of the scanning devices 150a and 150b to another position such that the moved scanning device has a respective field of view 108 of at least a portion of the surface of the object 104 being captured, and such that the scanning device 150 that was not moved has a respective field of view 132 of at least a portion of the surface of the reference object 106 mounted on the scanning device 150 that was moved; b) capturing images of the respective portions of the surface of the object 104 in the fields of view 108a and 108b of each of the scanning devices 150a and 150b, and capturing an image of the respective portion of the surface of the reference object 106 in the field of view 132 of the scanning device 150 that was not moved; c) transforming the captured images of the respective portions of the surface of the object 104 to be measured, as well as the images of the portions of the object 104 captured and merged from prior configurations, into a common reference coordinate system of one of the reference objects 106 that, in one or more embodiments, can be the reference object 106 mounted on the scanning device 150 that was moved; and d) merging the images of the respective portions of the surface of the object 104 in the coordinate system of that reference object 106 with the images of the portions of the object 104 captured and merged from prior configurations, as sketched in the example below. In one or more embodiments, at the first step of moving one of the scanning devices 150a and 150b to another position, either of the scanning devices 150a and 150b can be moved. In one or more embodiments, at each step except the first positioning step, the scanning device 150 that is moved should be the scanning device 150 that was not moved at the previous repositioning step.
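The alternating ("leapfrog") procedure of steps a) through d) might be summarized as follows. The device interface and its methods are hypothetical, invented for illustration; `capture_object_points()` is assumed to return surface points already expressed in the current common reference frame:

```python
import numpy as np

def transform(T, pts):
    """Apply a 4x4 rigid transform to an (N,3) point array."""
    return (T @ np.c_[pts, np.ones(len(pts))].T).T[:, :3]

def leapfrog_scan(dev_a, dev_b, n_steps):
    """Alternating two-scanner capture; each device carries a reference object
    and can image the other's reference object (field of view 132)."""
    merged = np.empty((0, 3))            # merged surface points, common frame
    mover, watcher = dev_a, dev_b        # which device moves next
    for _ in range(n_steps):
        mover.move()                                          # step a)
        pts = np.vstack([mover.capture_object_points(),       # step b)
                         watcher.capture_object_points()])
        # Step c): the stationary device images the mover's reference object,
        # yielding the transform from the old common frame to the new one.
        T_new_from_old = watcher.observe_reference_of(mover)
        merged = transform(T_new_from_old, merged)
        merged = np.vstack([merged, pts])                     # step d): merge
        mover, watcher = watcher, mover  # alternate the moved device
    return merged
```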
An example of the foregoing operation of the system 100 of
One of the scanning devices 150, e.g., scanning device 150a, is then moved to a different position in relation to the object 104, so that the reference object field of view 132b of the scanning device 150b observes the reference object 106a mounted on the scanning device 150a. For instance, the scanning device 150a can be moved as indicated by directional arrow 152.
The scanning device 150a then captures an image of a portion of the object 104 from field of view 108a at new position 154, and the captured image is transformed into the coordinate system of the reference object 106a. Scanning device 150b also captures an image of the reference object 106a mounted on the scanning device 150a and determines the position of the scanning device 150b in the coordinate system of the reference object 106a. The images of the portions of the object 104 previously captured by the scanning devices 150a and 150b and previously merged together are then converted to the coordinate system of the reference object 106a and merged with the image of the portion of the object 104 captured from field of view 108a at new position 154.
The other scanning device 150, e.g., scanning device 150b, which was not the device just previously moved, is then moved to a different position in relation to the object 104, so that the reference object field of view 132a of the scanning device 150a observes the reference object 106b mounted on the scanning device 150b. For instance, the scanning device 150b can be moved as indicated by directional arrow 156 to new position 158. The scanning device 150b then captures a new image of the object 104 at position 158. Scanning device 150a captures an image of the reference object 106b and determines the position of the scanning device 150a in the coordinate system of the reference object 106b. The portion of the surface of the object 104 previously scanned and merged in the coordinate system of the reference object 106a is converted to the coordinate system of the reference object 106b and merged with the object image captured by the scanning device 150b at position 158. This process of alternately repositioning the scanning devices 150a and 150b is repeated until the entire surface of the object 104 has been captured in a frame-by-frame format and merged into a multiframe image in a common coordinate system.
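Each repositioning step thus re-expresses the accumulated surface data in the new common frame by a single rigid transform. A minimal sketch with illustrative names and placeholder values (the real transform would be recovered by imaging the other scanner's reference object):

```python
import numpy as np

# merged_in_106a: surface points previously merged in reference object 106a's frame
merged_in_106a = np.zeros((1000, 3))     # placeholder data for illustration
# T_106b_from_106a: 4x4 transform mapping 106a coordinates into 106b's frame,
# recovered when scanning device 150a images reference object 106b
T_106b_from_106a = np.eye(4)             # placeholder for the recovered transform

pts_h = np.c_[merged_in_106a, np.ones(len(merged_in_106a))]
merged_in_106b = (T_106b_from_106a @ pts_h.T).T[:, :3]
# merged_in_106b can now be stacked with the new frame captured at position 158.
```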
In one or more embodiments, the scanning devices 102, 130, 140 and 150 can be connected to a computing system (not shown) for controlling the operation of the scanning devices 102, 130, 140 and 150 and for performing the necessary calculations for coordinate conversion, merging and other image processing. The computing system may comprise a general-purpose computer system which is suitable for implementing the method for the multiframe surface measurement of the shape of material objects in accordance with the present disclosure. The computing system is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. In various embodiments, the present system and method for the multiframe surface measurement of the shape of material objects is operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, networked PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
In various embodiments, the triangulation algorithms and the method for the multiframe surface measurement of the shape of material objects may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. These algorithms and methods may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. In one embodiment, the computing system implements multiframe surface measurement of the shape of material objects by executing one or more computer programs. The computer programs may be stored in a memory medium or storage medium such as a memory and/or ROM, or they may be provided to a CPU through a network connection or other I/O connection.
The system and method formed in accordance with the embodiments described herein provide for the accurate measurement of the surface shape and/or texture of large and/or complex-shaped objects by allowing multiple frames of images to be merged together in a common coordinate system. These teachings can be applied to a whole range of scientific and engineering problems that require accurate data about the surface shape of an object, the distance to the surface, or its spatial orientation. The present system and method has useful applications in many fields, including but not limited to digital imaging, the control of part shapes, computer animation, capturing the shape of objects that have cultural, historical or scientific value, shape recognition, topography, machine vision, medical procedures, the spatial positioning of devices and robots, etc.