The subject matter disclosed herein relates to a triangulation scanner. The triangulation scanner projects uncoded spots onto an object and in response determines three-dimensional (3D) coordinates of points on the object.
Triangulation scanners generally include at least one projector and at least two cameras, the projector and camera separated by a baseline distance. Such scanners use a triangulation calculation to determine 3D coordinates of points on an object based at least in part on the projected pattern of light and the captured camera image. One category of triangulation scanner, referred to herein as a single-shot scanner, obtains 3D coordinates of the object points based on a single projected pattern of light. Another category of triangulation scanner, referred to herein as a sequential scanner, obtains 3D coordinates of the object points based on a sequence of projected patterns from a stationary projector onto the object.
In the case of a single-shot triangulation scanner, the triangulation calculation is based at least in part on a determined correspondence among elements in each of two patterns. The two patterns may include a pattern projected by the projector and a pattern captured by the camera. Alternatively, the two patterns may include a first pattern captured by a first camera and a second pattern captured by a second camera. In either case, the determination of 3D coordinates by the triangulation calculation provides that a correspondence be determined between pattern elements in each of the two patterns. In most cases, the correspondence is obtained by matching pattern elements in the projected or captured pattern. An alternative approach is described in U.S. Pat. No. 9,599,455 ('455) to Heidemann, et al., the contents of which are incorporated by reference herein. In this approach, the correspondence is determined, not by matching pattern elements, but by identifying spots (e.g. points or circles of light) at the intersection of epipolar lines from two cameras and a projector or from two projectors and a camera. In an embodiment, supplementary 2D camera images may further be used to register multiple collected point clouds together in a common frame of reference. For the system described in Patent '455, the three camera and projector elements are arranged in a triangle, which enables the intersection of the epipolar lines.
In some cases, it is desirable to make the triangulation scanner more compact than is possible in the triangular arrangement of projector and camera elements. Accordingly, while existing triangulation systems are suitable for their intended purpose, the need for improvement remains, particularly in providing a compact triangulation scanner that projects uncoded spots to determine three-dimensional (3D) coordinates of points on the object.
According to an aspect of the disclosure, a device for measuring three-dimensional (3D) coordinates is provided. The device includes a projector having a projector optical axis on a first plane, the projector operable to project a collection of laser beams on a surface of an object; a first camera having a first-camera optical axis on the first plane, the first camera operable to capture a first image of the collection of laser beams on the surface of the object; and one or more processors, wherein the one or more processors are operable to: generate a first distance profile for the object using a first laser beam of the collection of laser beams and generate a second distance profile for the object using a second laser beam of the collection of laser beams; estimate a velocity of the object based on the first distance profile and the second distance profile; and provide the estimated velocity.
In accordance with one or more embodiments, or in the alternative, the one or more processors are further operable to perform a shift analysis using the first distance profile and the second distance profile.
In accordance with one or more embodiments, or in the alternative, the one or more processors are further operable to determine a time-shift between the first distance profile and the second distance profile by performing a comparison of the first distance profile and the second distance profile.
In accordance with one or more embodiments, or in the alternative, the one or more processors are operable to filter laser beams of the collection of laser beams; and assign laser beams of the collection of laser beams to the object.
In accordance with one or more embodiments, or in the alternative, the filtering of the laser beams is performed based on at least one of a direction or a similarity in the generated distance profiles of the laser beams of the collection of laser beams.
In accordance with one or more embodiments, or in the alternative, the one or more processors are operable to determine a set of time-shifts for the object using a plurality of laser beam pairs of the collection of laser beams.
In accordance with one or more embodiments, or in the alternative, estimating the velocity is performed by averaging the set of time-shifts for the object.
In accordance with one or more embodiments, or in the alternative, a distance profile is generated by obtaining 3D points of the object, calculating a distance between the 3D points obtained with each laser beam, and using the distance and timing information to estimate the velocity of the object.
In accordance with one or more embodiments, or in the alternative, the one or more processors are further operable to receive input velocity information associated with a device that moves the object; and compare the estimated velocity of the object to the input velocity information.
In accordance with one or more embodiments, or in the alternative, the input velocity information is determined from at least one of time stamp information or position information of the device that moves the object, wherein the device is at least one of a mover or a conveyor belt.
According to another aspect of the disclosure, a method for measuring three-dimensional (3D) coordinates is provided. The method includes projecting, with a projector, a collection of laser beams on a surface of an object; capturing, with a camera, a first image of the collection of laser beams on the surface of the object; generating a first distance profile for the object using a first laser beam of the collection of laser beams and generating a second distance profile for the object using a second laser beam of the collection of laser beams; estimating, using one or more processors, a velocity of the object based at least in part on the first distance profile and the second distance profile; and providing, using the one or more processors, the estimated velocity of the object.
In accordance with one or more embodiments, or in the alternative, a shift analysis is performed using the first distance profile and the second distance profile.
In accordance with one or more embodiments, or in the alternative, a time-shift analysis is performed between the first distance profile and the second distance profile by performing a comparison of the first profile and the second profile.
In accordance with one or more embodiments, or in the alternative, the laser beams of the collection of laser beams are filtered; and the laser beams of the collection of laser beams are assigned to the object.
In accordance with one or more embodiments, or in the alternative, the laser beams are filtered based on at least one of a direction or a similarity in the generated distance profiles of the laser beams of the collection of laser beams.
In accordance with one or more embodiments, or in the alternative, a set of time-shifts are determined for the object using a plurality of laser beam pairs of the collection of laser beams.
In accordance with one or more embodiments, or in the alternative, the velocity is estimated by averaging the set of time-shifts for the object using the plurality of laser beam pairs.
In accordance with one or more embodiments, or in the alternative, a distance profile is generated by obtaining 3D points of the object, calculating a distance between the 3D points obtained with each laser beam, and using the distance and timing information to estimate the velocity of the object.
In accordance with one or more embodiments, or in the alternative, input information associated with a device that moves the object is received, wherein the input information includes at least one of velocity information, time stamp information, or position information of the device that moves the object; and the estimated velocity of the object is compared to the input information.
In accordance with one or more embodiments, or in the alternative, a conveyor system moving the object is calibrated to a configured velocity.
These and other advantages and features will become more apparent from the following description taken in conjunction with the drawings.
The subject matter, which is regarded as the disclosure, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the disclosure are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The detailed description explains embodiments of the disclosure, together with advantages and features, by way of example with reference to the drawings.
In today's environment, 3D scanners are used to perform a variety of measurements for different types of architecture, spaces, and objects. In some embodiments, the 3D scanners can obtain scan data and measurements for moving objects. For example, an object may be moved along a conveyor and scanned by the 3D scanner. However, in order to calculate the speed of the object, additional equipment is generally required to supplement the 3D scanner. This increases the cost and complexity of the system. Also, the size of the 3D scanner or 3D scanning system may be increased to accommodate the additional equipment. The position information obtained by the external equipment may be needed to stitch or join the 3D frames together during the registration process. In some systems, for example in a production environment, it may be undesirable to connect to an external position system to obtain the speed data of the object.
The techniques described herein operate the scanning device as a light barrier or curtain to track the movement of an object through its field of view. The projection or scanning device uses a diffractive optical element (DOE) which can emit a plurality of beams. In some embodiments, about 11,665 laser beams can be used. Because the projection and/or scanning device is used as a light barrier, each beam of the light barrier can be used to generate a distance profile of the object as it travels through the field of view. The distance profiles of neighboring beams can be cross-correlated to determine the time the object takes to travel between the beams; this time, together with the distance between the beams, can be used to calculate the speed of the object. In some embodiments, various filtering techniques and optimization techniques, as described below, can be used to increase the accuracy of the estimation of the object's velocity.
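By way of a non-limiting illustration, the following sketch (in Python) shows one way a per-beam distance profile could be accumulated from successive scan frames. The frame format, the beam identifiers, and the function name build_distance_profiles are assumptions made only for this example and are not part of any particular embodiment.

```python
from collections import defaultdict

def build_distance_profiles(frames):
    """Accumulate a distance-versus-time profile for each beam.

    `frames` is assumed to be an iterable of (timestamp, measurements) pairs,
    where `measurements` maps a beam identifier to the measured distance (in
    meters) along that beam, or None when the beam does not strike the object.
    """
    profiles = defaultdict(list)  # beam_id -> list of (timestamp, distance)
    for timestamp, measurements in frames:
        for beam_id, distance in measurements.items():
            if distance is not None:
                profiles[beam_id].append((timestamp, distance))
    return dict(profiles)

# Hypothetical frames with two beams sampled at three instants.
frames = [
    (0.00, {"beam_a": 1.50, "beam_b": None}),
    (0.05, {"beam_a": 1.45, "beam_b": 1.52}),
    (0.10, {"beam_a": None, "beam_b": 1.47}),
]
print(build_distance_profiles(frames)["beam_a"])  # [(0.0, 1.5), (0.05, 1.45)]
```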
Embodiments of the present disclosure provide advantages in enabling 3D measurements to be obtained using a relatively compact, low-cost, and accurate triangulation scanner, also referred to herein as a 3D imager or 3D scanner. They further provide advantages in enabling rapid registration, extraction of six degree-of-freedom pose information, and control of robotic mechanisms. Other embodiments enable further improvements through combined use of scanning technologies with laser trackers or articulated arm coordinate measuring machines.
In an embodiment of the present disclosure illustrated in
In an embodiment, the body 5 includes a bottom support structure 6, a top support structure 7, spacers 8, camera mounting plates 9, bottom mounts 10, dress cover 11, windows 12 for the projector and cameras, Ethernet connectors 13, and GPIO connector 14. In addition, the body includes a front side 15 and a back side 16. In an embodiment, the bottom support structure 6 and the top support structure 7 are flat plates made of carbon-fiber composite material. In an embodiment, the carbon-fiber composite material has a low coefficient of thermal expansion (CTE). In an embodiment, the spacers 8 are made of aluminum and are sized to provide a common separation between the bottom support structure 6 and the top support structure 7.
In an embodiment, the projector 20 includes a projector body 24 and a projector front surface 26. In an embodiment, the projector 20 includes a light source 25 that attaches to the projector body 24 that includes a turning mirror and a DOE, as explained herein below with respect to
In an embodiment, the first camera 30 includes a first-camera body 34 and a first-camera front surface 36. In an embodiment, the first camera includes a lens, a photosensitive array, and camera electronics. The first camera 30 forms on the photosensitive array a first image of the uncoded spots projected onto an object by the projector 20. In an embodiment, the first camera responds to near-infrared light.
In an embodiment, the second camera 40 includes a second-camera body 44 and a second-camera front surface 46. In an embodiment, the second camera includes a lens, a photosensitive array, and camera electronics. The second camera 40 forms a second image of the uncoded spots projected onto an object by the projector 20. In an embodiment, the second camera responds to light in the near-infrared spectrum. In an embodiment, a processor 2 is used to determine 3D coordinates of points on an object according to methods described herein below. The processor 2 may be included inside the body 5 or may be external to the body. In further embodiments, more than one processor is used. In still further embodiments, the processor 2 may be remotely located from the triangulation scanner.
In an embodiment where the triangulation scanner 200 of
After a correspondence is determined among the projected elements, a triangulation calculation is performed to determine 3D coordinates of the projected element on an object. For
The term “uncoded element” or “uncoded spot” as used herein refers to a projected or imaged element that includes no internal structure that enables it to be distinguished from other uncoded elements that are projected or imaged. The term “uncoded pattern” as used herein refers to a pattern in which information is not encoded in the relative positions of projected or imaged elements. For example, one method for encoding information into a projected pattern is to project a quasi-random pattern of “dots.” Such a quasi-random pattern contains information that may be used to establish correspondence among points and hence is not an example of an uncoded pattern. An example of an uncoded pattern is a rectilinear pattern of projected pattern elements.
In an embodiment, uncoded spots are projected in an uncoded pattern as illustrated in the scanner system 100 of
In an embodiment, the illuminated object spot 122 produces a first image spot 134 on the first image plane 136 of the first camera 130. The direction from the first image spot to the illuminated object spot 122 may be found by drawing a straight line 126 from the first image spot 134 through the first camera perspective center 132. The location of the first camera perspective center 132 is determined by the characteristics of the first camera optical system.
In an embodiment, the illuminated object spot 122 produces a second image spot 144 on the second image plane 146 of the second camera 140. The direction from the second image spot 144 to the illuminated object spot 122 may be found by drawing a straight line 128 from the second image spot 144 through the second camera perspective center 142. The location of the second camera perspective center 142 is determined by the characteristics of the second camera optical system.
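For illustration only, the following sketch shows one way the straight lines 126 and 128 could be represented computationally, namely as rays from an image spot through the corresponding perspective center under a simple pinhole-camera assumption. The focal length, spot coordinates, and camera pose used here are hypothetical values.

```python
import numpy as np

def ray_through_perspective_center(image_spot_xy, focal_length, center_world, rotation):
    """Return (origin, unit direction) of the line from an image spot through
    the camera perspective center, expressed in world coordinates.

    A simple pinhole model is assumed: `rotation` maps camera coordinates to
    world coordinates, and the image plane lies at `focal_length` from the
    perspective center.
    """
    x, y = image_spot_xy
    direction_cam = np.array([x, y, focal_length])
    direction_world = rotation @ direction_cam
    direction_world /= np.linalg.norm(direction_world)
    return center_world, direction_world

# Hypothetical parameters for the first camera.
origin, direction = ray_through_perspective_center(
    image_spot_xy=(0.002, -0.001),           # meters on the image plane (assumed)
    focal_length=0.008,                      # 8 mm lens (assumed)
    center_world=np.array([0.1, 0.0, 0.0]),  # perspective center position (assumed)
    rotation=np.eye(3),
)
print(origin, direction)
```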
In an embodiment, a processor 150 is in communication with the projector 110, the first camera 130, and the second camera 140. Either wired or wireless channels 151 may be used to establish connection among the processor 150, the projector 110, the first camera 130, and the second camera 140. The processor may include a single processing unit or multiple processing units and may include components such as microprocessors, field programmable gate arrays (FPGAs), digital signal processors (DSPs), and other electrical components. The processor may be local to a scanner system that includes the projector, first camera, and second camera, or it may be distributed and may include networked processors. The term processor encompasses any type of computational electronics and may include memory storage elements.
A method element 184 includes capturing with a first camera the illuminated object spots as first-image spots in a first image. This element is illustrated in
A first aspect of method element 188 includes determining with a processor 3D coordinates of a first collection of points on the object based at least in part on the first uncoded pattern of uncoded spots, the first image, the second image, the relative positions of the projector, the first camera, and the second camera, and a selected plurality of intersection sets. This aspect of the element 188 is illustrated in
A second aspect of the method element 188 includes selecting with the processor a plurality of intersection sets, each intersection set including a first spot, a second spot, and a third spot, the first spot being one of the uncoded spots in the projector reference plane, the second spot being one of the first-image spots, the third spot being one of the second-image spots, the selecting of each intersection set based at least in part on the nearness of intersection of a first line, a second line, and a third line, the first line being a line drawn from the first spot through the projector perspective center, the second line being a line drawn from the second spot through the first-camera perspective center, the third line being a line drawn from the third spot through the second-camera perspective center. This aspect of the element 188 is illustrated in
The processor 150 may determine the nearness of intersection of the first line, the second line, and the third line based on any of a variety of criteria. For example, in an embodiment, the criterion for the nearness of intersection is based on a distance between a first 3D point and a second 3D point. In an embodiment, the first 3D point is found by performing a triangulation calculation using the first image point 134 and the second image point 144, with the baseline distance used in the triangulation calculation being the distance between the perspective centers 132 and 142. In the embodiment, the second 3D point is found by performing a triangulation calculation using the first image point 134 and the projector point 112, with the baseline distance used in the triangulation calculation being the distance between the perspective centers 132 and 116. If the three lines 124, 126, and 128 nearly intersect at the object point 122, then the calculation of the distance between the first 3D point and the second 3D point will result in a relatively small distance. On the other hand, a relatively large distance between the first 3D point and the second 3D point would indicate that the points 112, 134, and 144 did not all correspond to the object point 122.
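A minimal sketch of this first criterion is given below, assuming each ray (projector or camera) is available as an origin and a unit direction in a common frame of reference. The midpoint-of-closest-approach triangulation used here is a standard simplification chosen for the example and is not necessarily the exact triangulation calculation of the embodiments.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment joining two rays.

    Each ray is given by an origin `o` and a unit direction `d`; the rays are
    assumed not to be parallel.
    """
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom   # parameter along ray 1
    t = (a * e - b * d) / denom   # parameter along ray 2
    return (o1 + s * d1 + o2 + t * d2) / 2.0

def nearness_by_point_distance(cam1_ray, cam2_ray, projector_ray):
    """Distance between the camera-camera 3D point and the camera-projector
    3D point; a small value suggests the three spots correspond to the same
    object point."""
    point_a = triangulate_midpoint(*cam1_ray, *cam2_ray)
    point_b = triangulate_midpoint(*cam1_ray, *projector_ray)
    return np.linalg.norm(point_a - point_b)
```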
As another example, in an embodiment, the criterion for the nearness of the intersection is based on a maximum of closest-approach distances between each of the three pairs of lines. This situation is illustrated in
The processor 150 may use many other criteria to establish the nearness of intersection. For example, for the case in which the three lines were coplanar, a circle inscribed in a triangle formed from the intersecting lines would be expected to have a relatively small radius if the three points 112, 134, 144 corresponded to the object point 122. For the case in which the three lines were not coplanar, a sphere having tangent points contacting the three lines would be expected to have a relatively small radius.
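By way of example, the maximum-of-closest-approach criterion described above might be computed as in the sketch below, where each line is represented by an origin point and a unit direction; these representations and the helper names are assumptions for illustration.

```python
import numpy as np
from itertools import combinations

def closest_approach_distance(line_a, line_b):
    """Smallest distance between two lines, each given as (origin, unit direction)."""
    (o1, d1), (o2, d2) = line_a, line_b
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:                 # nearly parallel lines
        return np.linalg.norm(np.cross(o2 - o1, d1))
    return abs((o2 - o1) @ n) / np.linalg.norm(n)

def max_pairwise_closest_approach(lines):
    """Maximum closest-approach distance over all pairs of the given lines.

    A small maximum indicates that the three lines (projector, first camera,
    second camera) nearly intersect at a common object point.
    """
    return max(closest_approach_distance(a, b) for a, b in combinations(lines, 2))
```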
It should be noted that the selecting of intersection sets based at least in part on a nearness of intersection of the first line, the second line, and the third line is not used in most other projector-camera methods based on triangulation. For example, for the case in which the projected points are coded points, which is to say, recognizable as corresponding when compared on projection and image planes, there is no need to determine a nearness of intersection of the projected and imaged elements. Likewise, when a sequential method is used, such as the sequential projection of phase-shifted sinusoidal patterns, there is no need to determine the nearness of intersection as the correspondence among projected and imaged points is determined based on a pixel-by-pixel comparison of phase determined based on sequential readings of optical power projected by the projector and received by the camera(s). The method element 190 includes storing 3D coordinates of the first collection of points.
An alternative method that uses the intersection of epipolar lines on epipolar planes to establish correspondence among uncoded points projected in an uncoded pattern is described in Patent '455, referenced herein above. In an embodiment of the method described in Patent '455, a triangulation scanner places a projector and two cameras in a triangular pattern. An example of a triangulation scanner 300 having such a triangular pattern is shown in
Referring now to
In an embodiment, the device 3 is a projector 493, the device 1 is a first camera 491, and the device 2 is a second camera 492. Suppose that a projection point P3, a first image point P1, and a second image point P2 are obtained in a measurement. These results can be checked for consistency in the following way.
To check the consistency of the image point P1, intersect the plane P3-E31-E13 with the reference plane 460 to obtain the epipolar line 464. Intersect the plane P2-E21-E12 to obtain the epipolar line 462. If the image point P1 has been determined consistently, the observed image point P1 will lie on the intersection of the determined epipolar lines 462 and 464.
To check the consistency of the image point P2, intersect the plane P3-E32-E23 with the reference plane 470 to obtain the epipolar line 474. Intersect the plane P1-E12-E21 to obtain the epipolar line 472. If the image point P2 has been determined consistently, the observed image point P2 will lie on the intersection of the determined epipolar lines 472 and 474.
To check the consistency of the projection point P3, intersect the plane P2-E23-E32 with the reference plane 480 to obtain the epipolar line 484. Intersect the plane P1-E13-E31 to obtain the epipolar line 482. If the projection point P3 has been determined consistently, the projection point P3 will lie on the intersection of the determined epipolar lines 482 and 484.
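For illustration, the sketch below performs this kind of consistency check numerically: each epipolar plane is intersected with the relevant reference plane to obtain an epipolar line, and an observed point is accepted if it lies within a tolerance of both epipolar lines. The plane, line, and point representations, as well as the tolerance value, are assumptions for this example.

```python
import numpy as np

def plane_from_points(p, q, r):
    """Plane through three points, returned as (point_on_plane, unit_normal)."""
    normal = np.cross(q - p, r - p)
    return p, normal / np.linalg.norm(normal)

def intersect_planes(plane_a, plane_b):
    """Line of intersection of two planes, returned as (point_on_line, unit_direction)."""
    (pa, na), (pb, nb) = plane_a, plane_b
    direction = np.cross(na, nb)
    direction /= np.linalg.norm(direction)
    # Find a point satisfying both plane equations (minimum-norm solution).
    A = np.vstack([na, nb])
    b = np.array([na @ pa, nb @ pb])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, direction

def distance_point_to_line(point, line):
    origin, direction = line
    return np.linalg.norm(np.cross(point - origin, direction))

def lies_on_both_epipolar_lines(observed_point, epipolar_line_1, epipolar_line_2, tol=1e-4):
    """True if the observed point lies within `tol` of both epipolar lines."""
    return (distance_point_to_line(observed_point, epipolar_line_1) < tol
            and distance_point_to_line(observed_point, epipolar_line_2) < tol)
```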
It should be appreciated that since the geometric configuration of device 1, device 2 and device 3 are known, when the projector 493 emits a point of light onto a point on an object that is imaged by cameras 491, 492, the 3D coordinates of the point in the frame of reference of the 3D imager 490 may be determined using triangulation methods.
Note that the approach described herein above with respect to
In the system 540 of
The actuators 522, 534, also referred to as beam steering mechanisms, may be any of several types such as a piezo actuator, a microelectromechanical system (MEMS) device, a magnetic coil, or a solid-state deflector.
Now referring to
Now referring to
For example, the peak of the distance profile 910 for the first beam is shown at position 930 and the peak of the distance profile 920 for the second beam is shown at position 940. By comparing the two profiles, a time-shift 950 between the first distance profile 910 and the second distance profile 920 can be determined. In one or more embodiments, a cross-correlation analysis can be performed between the distance profiles 910, 920 to determine the time-shift 950. After the time-shift 950 has been determined, the velocity can be calculated using the distance traveled between the first and second beams and the time-shift 950. That is, the measured points at the peak or other identifiable points of the beams are used. The speed of the object is assumed to be constant over at least the period of time used to generate the distance profiles.
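A minimal sketch of this cross-correlation approach, assuming two equally sampled distance profiles and a known spacing between the beams where they strike the object, is shown below; the numeric values are hypothetical.

```python
import numpy as np

def estimate_time_shift(profile_1, profile_2, sample_period):
    """Time-shift (seconds) at which the cross-correlation of two equally
    sampled distance profiles is highest; positive when profile_2 lags."""
    a = np.asarray(profile_1, dtype=float) - np.mean(profile_1)
    b = np.asarray(profile_2, dtype=float) - np.mean(profile_2)
    correlation = np.correlate(b, a, mode="full")
    lag = np.argmax(correlation) - (len(a) - 1)
    return lag * sample_period

def estimate_velocity(profile_1, profile_2, sample_period, beam_spacing):
    """Object velocity from the spacing between two beams and the time-shift."""
    return beam_spacing / estimate_time_shift(profile_1, profile_2, sample_period)

# Hypothetical profiles: the second beam sees the same dip 3 samples later.
p1 = [1.5, 1.5, 1.2, 0.8, 1.2, 1.5, 1.5, 1.5, 1.5, 1.5]
p2 = [1.5, 1.5, 1.5, 1.5, 1.5, 1.2, 0.8, 1.2, 1.5, 1.5]
print(estimate_velocity(p1, p2, sample_period=0.01, beam_spacing=0.006))  # 0.2 m/s
```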
In one or more embodiments of the disclosure, an object 820 can be traveling on a conveyor and can be tracked by the system 800. The conveyor 830 can be calibrated with the system 800 to improve the measurements between the distance profiles of each object. In other embodiments, the position information and timestamp information can be used to calculate or estimate the velocity information. For example, a device that moves the object can provide information to measure the speed of the device, conveyor belt, mover, etc. The device can include a roll or other components that can be associated with a position and time stamp to determine the velocity.
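As a simple illustration of using position and time stamp information from the device that moves the object, the velocity could be derived as the change in reported position divided by the elapsed time; the sample format below is an assumption for this example.

```python
def velocity_from_positions(samples):
    """Average velocity of a mover or conveyor from (timestamp_s, position_m) samples,
    assumed to be reported in time order by the device that moves the object."""
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    return (p1 - p0) / (t1 - t0)

# Hypothetical conveyor readings; the result can be compared with the
# velocity estimated from the beam distance profiles.
print(velocity_from_positions([(0.0, 0.00), (0.5, 0.10), (1.0, 0.21)]))  # 0.21 m/s
```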
In one or more embodiments of the disclosure, a processor coupled to the scanner 810 is configured to perform a filtering operation to remove unrelated distance profile information. For example, the distance profile information can be used to detect the direction of the moving object, and beams whose profiles indicate movement in the opposite direction can be filtered out from further processing.
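One possible form of such a direction filter is sketched below; it uses the sign of a least-squares slope of each beam's distance profile as a proxy for direction, which is an assumption chosen only for this example.

```python
import numpy as np

def filter_beams_by_direction(profiles, expected_sign=+1):
    """Keep only beams whose distance profile trends in the expected direction.

    `profiles` maps a beam identifier to a sequence of distance samples; the
    sign convention (+1/-1) for the direction of motion is an assumption.
    """
    kept = {}
    for beam_id, distances in profiles.items():
        t = np.arange(len(distances))
        slope = np.polyfit(t, np.asarray(distances, dtype=float), 1)[0]
        if np.sign(slope) == expected_sign:
            kept[beam_id] = distances
    return kept
```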
A processor coupled to the scanner 810 can be configured to assign to the same object those beams whose distance profiles are similar to one another, that is, within a tolerance or margin of error of each other. Beams that fall outside of the tolerance or margin of error can be filtered out of the set of beams. The outlier data can be filtered using known techniques such as linear regression.
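A minimal sketch of such an assignment step is shown below; it compares each profile with the per-sample median profile and keeps beams within a tolerance, which is a simpler stand-in for the regression-based outlier filtering mentioned above. Equal-length, equally sampled profiles are assumed.

```python
import numpy as np

def assign_beams_to_object(profiles, tolerance):
    """Return the beam identifiers whose distance profiles agree with the
    per-sample median profile to within `tolerance` (RMS deviation)."""
    ids = list(profiles)
    stack = np.array([profiles[i] for i in ids], dtype=float)
    median_profile = np.median(stack, axis=0)
    rms = np.sqrt(np.mean((stack - median_profile) ** 2, axis=1))
    return [beam_id for beam_id, err in zip(ids, rms) if err <= tolerance]
```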
In one or more embodiments of the disclosure, beam pairs from the beams that were assigned to the object can be selected for time-shift analysis. Several beam pairs can be analyzed to obtain a more accurate velocity estimation for the object. The analysis of multiple beam pairs may be desired because, in some scenarios, different beams may not hit the same part of the object, since the beams may not be parallel, and because the movement direction of the object may not necessarily be parallel to the beam pattern. In some embodiments of the disclosure, an average of the plurality of velocity calculations can be computed to obtain a representative velocity of the object.
In some embodiments, a configurable threshold number of beam pairs may be analyzed prior to providing the velocity estimation. In other embodiments, a statistical analysis can be performed to generate a confidence value for the velocity estimation, and the velocity estimation may not be provided until a configurable confidence value is achieved.
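The combination of several beam-pair estimates, together with a configurable minimum number of pairs and a simple spread-based confidence check, might look like the following sketch; the threshold values and the choice to average velocities (rather than time-shifts) are assumptions for illustration.

```python
import numpy as np

def velocity_from_beam_pairs(time_shifts, beam_spacing, min_pairs=5, max_rel_spread=0.2):
    """Representative velocity from time-shifts of several beam pairs.

    `time_shifts` are the per-pair shifts (seconds) for beams separated by
    `beam_spacing` (meters). None is returned until at least `min_pairs`
    estimates are available and their relative spread is below `max_rel_spread`.
    """
    if len(time_shifts) < min_pairs:
        return None
    velocities = beam_spacing / np.asarray(time_shifts, dtype=float)
    mean_velocity = float(np.mean(velocities))
    if np.std(velocities) / abs(mean_velocity) > max_rel_spread:
        return None   # confidence not yet sufficient
    return mean_velocity
```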
Now referring to
At block 1008, a processor is configured to generate a first distance profile for the object using a first laser beam of the collection of laser beams and to generate a second distance profile for the object using a second laser beam of the collection of laser beams. The distance profile provides the distance to the object and associated time information.
At block 1010, the velocity of the object is estimated based on the first distance profile and the second distance profile. In one or more embodiments of the disclosure, a shift analysis can be performed to compare the first and second distance profiles. For example, the analysis can include calculating the sum of absolute differences, using feature-based methods, or using any other variant of cross-correlation calculation. In addition, the time-shift can be determined by performing a cross-correlation analysis of the distance profiles, where the time-shift is the shift at which the correlation of the two profiles is highest.
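As an example of the sum-of-absolute-differences variant of the shift analysis, the sketch below searches a bounded range of lags for the one that minimizes the mean absolute difference between the overlapping portions of two equal-length profiles; the search bound and sampling assumptions are illustrative only.

```python
import numpy as np

def time_shift_by_sad(profile_1, profile_2, max_lag, sample_period):
    """Time-shift (seconds) minimizing the sum of absolute differences (SAD)
    between two equal-length, equally sampled distance profiles.
    `max_lag` is assumed to be smaller than the profile length."""
    a = np.asarray(profile_1, dtype=float)
    b = np.asarray(profile_2, dtype=float)
    best_lag, best_sad = 0, np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            overlap_a, overlap_b = a[: len(a) - lag], b[lag:]
        else:
            overlap_a, overlap_b = a[-lag:], b[: len(b) + lag]
        sad = np.mean(np.abs(overlap_a - overlap_b))
        if sad < best_sad:
            best_lag, best_sad = lag, sad
    return best_lag * sample_period
```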
At block 1012, the estimated velocity is provided. The velocity information can be provided in a numerical or graphical format on a display. In addition, the estimated velocity can be transmitted to another internal/external device or system over a network. It should be understood that the method 1000 described herein is not intended to be limited by the steps shown in
While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.
This application claims the benefit of U.S. Patent Application Ser. No. 62/940,317 filed Nov. 26, 2019, which is incorporated herein by reference in its entirety.