The subject matter disclosed herein relates in general to devices such as three-dimensional (3D) imagers and stereo cameras that use triangulation to determine 3D coordinates.
Three-dimensional imagers and stereo cameras use a triangulation method to measure the 3D coordinates of points on an object. A 3D imager usually includes a projector that projects onto a surface of the object either a pattern of light as a line or a pattern of light covering an area. A camera is coupled to the projector in a fixed relationship. The light emitted from the projector is reflected off the object surface and detected by the camera. A correspondence is determined between points on a projector plane and points on a camera plane. Since the camera and projector are arranged in a fixed relationship, the distance to the object may be determined using trigonometric principles. A correspondence between points observed by two stereo cameras may likewise be used with a triangulation method to determine 3D coordinates. Compared to coordinate measurement devices that use tactile probes, triangulation systems provide the advantage of quickly acquiring coordinate data over a large area. As used herein, the resulting collection of 3D coordinate values or data points of the object being measured by the triangulation system is referred to as point-cloud data or simply a point cloud.
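As an illustration of the trigonometric principle, the following sketch (in Python, with illustrative variable names and an assumed planar geometry) computes the perpendicular distance from the baseline to an object point, given the baseline length and the two angles measured at the projector and camera perspective centers; it is a minimal sketch rather than the calculation of any particular device described herein.

```python
import math

def triangulate_distance(baseline_m, angle_a_rad, angle_b_rad):
    """Perpendicular distance from the baseline to an object point, given the
    baseline length and the two angles (measured from the baseline) at the
    projector and camera perspective centers."""
    # The third angle of the triangle follows from the angle sum.
    angle_c = math.pi - angle_a_rad - angle_b_rad
    # Law of sines gives the side from the camera to the object point.
    side_camera_to_point = baseline_m * math.sin(angle_a_rad) / math.sin(angle_c)
    # Perpendicular distance from the baseline to the object point.
    return side_camera_to_point * math.sin(angle_b_rad)

# Example: 300 mm baseline, both angles 75 degrees -> roughly 0.56 m standoff.
print(triangulate_distance(0.300, math.radians(75), math.radians(75)))
```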
There are a number of areas in which existing triangulation devices may be improved: combining 3D and color information, capturing 3D and motion information from multiple perspectives and over a wide field-of-view, calibrating/compensating 3D imagers, and registering 3D imagers.
Accordingly, while existing triangulation-based 3D imager devices that use photogrammetry methods are suitable for their intended purpose, the need for improvement remains.
According to an embodiment of the present invention, a three-dimensional (3D) measuring system includes: a body; an internal projector fixedly attached to the body, the internal projector configured to project an illuminated pattern of light onto an object; and a first dichroic camera assembly fixedly attached to the body, the first dichroic camera assembly having a first beam splitter configured to direct a first portion of incoming light into a first channel leading to a first photosensitive array and to direct a second portion of the incoming light into a second channel leading to a second photosensitive array, the first photosensitive array being configured to capture a first channel image of the illuminated pattern on the object, the second photosensitive array being configured to capture a second channel image of the illuminated pattern on the object, the first dichroic camera assembly having a first pose relative to the internal projector, wherein the 3D measuring system is configured to determine 3D coordinates of a first point on the object based at least in part on the illuminated pattern, the second channel image, and the first pose.
These and other advantages and features will become more apparent from the following description taken in conjunction with the drawings.
The subject matter, which is regarded as the invention, is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
The detailed description explains embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.
Embodiments of the present invention provide advantages in combining 3D and color information, capturing 3D and motion information from multiple perspectives and over a wide field-of-view, calibrating/compensating 3D imagers, and registering 3D imagers.
In an alternative embodiment shown in
The ray of light 111A intersects the surface 130A at a point 132A, from which light is reflected (scattered) and passes through the camera lens 124A to form a clear image of the pattern on the surface 130A onto the surface of the photosensitive array 122A. The light from the point 132A passes in a ray 121A through the camera perspective center 128A to form an image spot at the corrected point 126A. The position of the image spot is mathematically adjusted to correct for aberrations in the camera lens. A correspondence is obtained between the point 126A on the photosensitive array 122A and the point 116A on the illuminated projector pattern generator 112A. As explained herein below, the correspondence may be obtained by using a coded or an uncoded pattern of projected light. In some cases, the pattern of light may be projected sequentially. Once the correspondence is known, the angles a and b in
The second camera 170B includes a second camera lens 174B and a second photosensitive array 172B. The second camera 170B has a second camera perspective center 178B through which a ray of light 171B passes from the point 132B onto the second photosensitive array 172B as a corrected image spot 176B. The position of the image spot is mathematically adjusted to correct for aberrations in the camera lens.
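The mathematical adjustment of a measured image spot to a corrected point may be illustrated by the following sketch, which assumes a simple two-coefficient radial distortion model and hypothetical intrinsic parameters; it is not the specific correction used for any particular lens described herein.

```python
def undistort_point(u, v, fx, fy, cx, cy, k1, k2):
    """Adjust a measured pixel (u, v) toward the ideal straight-line (pinhole)
    model through the perspective center, assuming a radial distortion model
    x_d = x * (1 + k1*r^2 + k2*r^4) in normalized image coordinates."""
    # Normalized coordinates of the measured (distorted) point.
    xd, yd = (u - cx) / fx, (v - cy) / fy
    # Fixed-point iteration to invert the distortion model.
    x, y = xd, yd
    for _ in range(5):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    # Return the corrected pixel position.
    return fx * x + cx, fy * y + cy

# Example with illustrative intrinsics and mild barrel distortion.
print(undistort_point(900.0, 650.0, 1800.0, 1800.0, 960.0, 600.0, -0.08, 0.01))
```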
A correspondence is obtained between the point 126B on the first photosensitive array 122B and the point 176B on the second photosensitive array 172B. As explained herein below, the correspondence may be obtained, for example, using “active triangulation” based on projected patterns or fiducial markers or on “passive triangulation” in which natural features are matched on each of the camera images. Once the correspondence is known, the angles a and b in
The inclusion of two cameras 210 and 230 in the system 200 provides advantages over the device of
This triangular arrangement provides additional information beyond that available for two cameras and a projector arranged in a straight line as illustrated in
In
Consider the situation of
To check the consistency of the image point P1, intersect the plane P3-E31-E13 with the reference plane 460 to obtain the epipolar line 464. Intersect the plane P2-E21-E12 with the reference plane 460 to obtain the epipolar line 462. If the image point P1 has been determined consistently, the observed image point P1 will lie on the intersection of the calculated epipolar lines 462 and 464.
To check the consistency of the image point P2, intersect the plane P3-E32-E23 with the reference plane 470 to obtain the epipolar line 474. Intersect the plane P1-E12-E21 with the reference plane 470 to obtain the epipolar line 472. If the image point P2 has been determined consistently, the observed image point P2 will lie on the intersection of the calculated epipolar lines 472 and 474.
To check the consistency of the projection point P3, intersect the plane P2-E23-E32 with the reference plane 480 to obtain the epipolar line 484. Intersect the plane P1-E13-E31 with the reference plane 480 to obtain the epipolar line 482. If the projection point P3 has been determined consistently, the projection point P3 will lie on the intersection of the calculated epipolar lines 482 and 484.
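The consistency checks described above may also be expressed computationally. The following sketch assumes that fundamental matrices relating each pair of devices (the names F_21, F_31 and the mapping convention are illustrative assumptions) are available from the compensation parameters; it reports how far an observed point lies from the intersection of the two epipolar lines predicted by the other two devices.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line induced in the target image by the homogeneous point x,
    where F is defined here so that l = F @ x is that line."""
    l = F @ x
    return l / np.linalg.norm(l[:2])   # unit normal so distances come out in pixels

def consistency_residual(p1, p2, p3, F_21, F_31):
    """Distance in pixels of the observed point p1 (homogeneous, image 1) from
    the intersection of the epipolar lines induced on image 1 by point p2 of
    device 2 (via F_21) and point p3 of device 3 (via F_31)."""
    l2 = epipolar_line(F_21, p2)
    l3 = epipolar_line(F_31, p3)
    x_int = np.cross(l2, l3)           # intersection of two homogeneous lines
    x_int = x_int[:2] / x_int[2]
    return np.linalg.norm(p1[:2] / p1[2] - x_int)

# Points are homogeneous pixel coordinates, e.g. np.array([u, v, 1.0]);
# a large residual flags an inconsistency of the kind described above.
```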
The redundancy of information provided by using a 3D imager 300 having a triangular arrangement of projector and cameras may be used to reduce measurement time, to identify errors, and to automatically update compensation/calibration parameters.
An example is now given of a way to reduce measurement time. As explained herein below in reference to
The triangular arrangement of 3D imager 300 may also be used to help identify errors. For example, a projector 493 in a 3D imager 490 of
The triangular arrangement of the 3D imager 300 may also be used to automatically update compensation/calibration parameters. Compensation parameters are numerical values stored in memory, for example, in an internal electrical system of a 3D measurement device or in another external computing unit. Such parameters may include the relative positions and orientations of the cameras and projector in the 3D imager. The compensation parameters may relate to lens characteristics such as lens focal length and lens aberrations. They may also relate to changes in environmental conditions such as temperature. Sometimes the term calibration is used in place of the term compensation. Often compensation procedures are performed by the manufacturer to obtain compensation parameters for a 3D imager. In addition, compensation procedures are often performed by a user. User compensation procedures may be performed when there are changes in environmental conditions such as temperature. User compensation procedures may also be performed when projector or camera lenses are changed or after the instrument is subjected to a mechanical shock. A typical user compensation procedure may include imaging a collection of marks on a calibration plate. A further discussion of compensation procedures is given herein below in reference to
Inconsistencies in results based on epipolar calculations for a 3D imager 490 may indicate a problem in compensation parameters, which are numerical values stored in memory. Compensation parameters are used to correct imperfections or nonlinearities in the mechanical, optical, or electrical system to improve measurement accuracy. In some cases, a pattern of inconsistencies may suggest an automatic correction that can be applied to the compensation parameters. In other cases, the inconsistencies may indicate a need to perform user compensation procedures.
It is often desirable to integrate color information into 3D coordinates obtained from a triangulation scanner (3D imager). Such color information is sometimes referred to as “texture” information since it may suggest the materials being imaged or reveal additional aspects of the scene such as shadows. Usually such color (texture) information is provided by a color camera separate from the camera in the triangulation scanner (i.e., the triangulation camera). An example of a separate color camera is the camera 390 in the 3D imager 300 of
In some cases, it is desirable to supplement 3D coordinates obtained from a triangulation scanner with information from a two-dimensional (2D) camera covering a wider field-of-view (FOV) than the 3D imager. Such wide-FOV information may be used for example to assist in registration. For example, the wide-FOV camera may assist in registering together multiple images obtained with the triangulation camera by identifying natural features or artificial targets outside the FOV of the triangulation camera. For example, the camera 390 in the 3D imager 300 may serve as both a wide-FOV camera and a color camera.
If a triangulation camera and a color camera are connected together in a fixed relationship, for example, by being mounted onto a common base, then the position and orientation of the two cameras may be found in a common frame of reference. Position of each of the cameras may be characterized by three translational degrees-of-freedom (DOF), which might be for example x-y-z coordinates of the camera perspective center. Orientation of each of the cameras may be characterized by three orientational DOF, which might be for example roll-pitch-yaw angles. Position and orientation together yield the pose of an object. In this case, the three translational DOF and the three orientational DOF together yield the six DOF of the pose for each camera. A compensation procedure may be carried out by a manufacturer or by a user to determine the pose of a triangulation scanner and a color camera mounted on a common base, the pose of each referenced to a common frame of reference.
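A pose having three translational and three orientational DOF may, for example, be represented as a 4x4 homogeneous transform. The sketch below, using purely illustrative numerical values, shows how the poses of a triangulation camera and a color camera expressed in a common base frame yield the transform between the two camera frames; it is a minimal sketch, not the compensation procedure itself.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_matrix(x, y, z, roll_deg, pitch_deg, yaw_deg):
    """4x4 homogeneous transform from a camera frame to the common base frame,
    built from three translational and three orientational DOF."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll_deg, pitch_deg, yaw_deg],
                                    degrees=True).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

# Illustrative poses of the triangulation camera and the color camera
# in the frame of the common base.
T_base_tri = pose_matrix(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
T_base_color = pose_matrix(0.150, 0.0, 0.0, 0.0, 0.0, 2.0)

# Transform that maps points expressed in the triangulation-camera frame
# into the color-camera frame.
T_color_tri = np.linalg.inv(T_base_color) @ T_base_tri
print(T_color_tri)
```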
If the poses of a color camera and a triangulation camera are known in a common frame of reference, then it is possible in principle to project colors obtained from the color camera onto the 3D image obtained from the triangulation scanner. However, increased separation distance between the two cameras may reduce accuracy in juxtaposing the color information onto the 3D image. Increased separation distance may also increase the complexity of the mathematics required to perform the juxtaposition. Inaccuracy in the projection of color may be seen, for example, as a misalignment of color pixels and 3D image pixels, particularly at object edges.
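The following sketch illustrates, under a simple pinhole model with assumed intrinsic and pose parameters, how a 3D point from the triangulation scanner might be projected into the separate color camera to pick up a color value; it ignores lens distortion and occlusion, which are among the effects that grow with increased camera separation.

```python
import numpy as np

def colorize_point(X_tri, T_color_tri, K_color, color_image):
    """Assign a color to a 3D point X_tri (expressed in the triangulation-camera
    frame) by projecting it into the color camera. T_color_tri is the 4x4
    transform from the triangulation-camera frame to the color-camera frame and
    K_color is the 3x3 intrinsic matrix of the color camera (both assumed known)."""
    X_color = T_color_tri[:3, :3] @ X_tri + T_color_tri[:3, 3]
    uvw = K_color @ X_color
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Round to the nearest pixel; a fuller implementation would interpolate
    # and check image bounds and occlusion.
    return color_image[int(round(v)), int(round(u))]
```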
A way around increased error and complication caused by increased distance between a color camera and a triangulation camera is now described with reference to
Although the lens 505 in
The dichroic beamsplitter 510 may be of any type that separates light into two different beam paths based on wavelength. In the example of
In an alternative embodiment, a dichroic beamsplitter is constructed of prismatic elements that direct the light to travel in two directions that are not mutually perpendicular. In another embodiment, a dichroic beamsplitter is made using a plate (flat window) of glass rather than a collection of larger prismatic elements. In this case, a surface of the plate is coated to reflect one range of wavelengths and transmit another range of wavelengths.
In an embodiment, the dichroic beamsplitter 510 is configured to pass color (texture) information to one of the two photosensitive arrays and to pass 3D information to the other of the two photosensitive arrays. For example, the dielectric coating 512 may be selected to transmit infrared (IR) light along the path 532 for use in determining 3D coordinates and to reflect visible (color) light along the path 534. In another embodiment, the dielectric coating 512 reflects IR light along the path 534 while transmitting color information along the path 532.
In other embodiments, other wavelengths of light are transmitted or reflected by the dichroic beamsplitter. For example, in an embodiment, the dichroic beamsplitter may be selected to pass infrared wavelengths of light that may be used, for example, to indicate the heat of objects (based on characteristic emitted IR wavelengths) or to pass to a spectroscopic energy detector for analysis of background wavelengths. Likewise a variety of wavelengths may be used to determine distance. For example, a popular wavelength for use in triangulation scanners is a short visible wavelength near 400 nm (blue light). In an embodiment, the dichroic beamsplitter is configured to pass blue light onto one photosensitive array to determine 3D coordinates while passing visible (color) wavelengths except the selected blue wavelengths onto the other photosensitive array.
In other embodiments, individual pixels in one of the photosensitive arrays 520, 525 are configured to determine distance to points on an object, the distance based on a time-of-flight calculation. In other words, with this type of array, distance to points on an object may be determined for individual pixels on the array. A camera that includes such an array is typically referred to as a range camera, a 3D camera, or an RGB-D (red-green-blue-depth) camera. Notice that this type of photosensitive array does not rely on triangulation but rather calculates distance based on another physical principle, most often the time-of-flight of light to a point on an object. In many cases, an accessory light source is configured to cooperate with the photosensitive array by modulating the projected light, which is later demodulated by the pixels to determine distance to a target.
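For a pixel of such a range (RGB-D) array, distance is commonly recovered from the phase of the modulated light. The sketch below shows one common demodulation scheme using four samples per modulation period; the sample values and the 30 MHz modulation frequency are illustrative assumptions, not parameters of any device described herein.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod_hz):
    """Distance for one pixel of a time-of-flight array, from four samples of
    the demodulated signal taken at 0, 90, 180 and 270 degrees of the
    modulation period."""
    phase = math.atan2(a3 - a1, a0 - a2)
    if phase < 0:
        phase += 2 * math.pi
    # Round-trip phase delay converted to a one-way distance.
    return C * phase / (4 * math.pi * f_mod_hz)

# Example: 30 MHz modulation gives an unambiguous range of about 5 m.
print(tof_distance(80, 120, 60, 100, 30e6))
```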
In most cases, the focal length of the lens 505 is nearly the same for the wavelengths of light that pass through the two paths to the photosensitive arrays 520 and 525. Because of this, the FOV is nearly the same for the two paths. Furthermore, the image area is nearly the same for the photosensitive arrays 520 and 525.
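The relationship among focal length, sensor size, and FOV can be illustrated as follows; the sensor widths and focal lengths in the example are hypothetical and simply show how two channels may be given matched or deliberately different FOVs, a point that is relevant to the two-lens assembly discussed next.

```python
import math

def fov_deg(sensor_width_mm, focal_length_mm):
    """Full horizontal field of view of one channel under a pinhole model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Channel 1: small color array behind a short-focal-length lens.
# Channel 2: larger 3D array; a longer focal length keeps the FOV matched,
# while a shorter one would give the color channel a deliberately wider FOV.
print(fov_deg(6.0, 8.0))    # ~41 degrees
print(fov_deg(11.0, 14.7))  # ~41 degrees (matched)
print(fov_deg(6.0, 5.0))    # ~62 degrees (wider channel)
```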
Although the lenses 554 and 564 in
A second potential advantage of the dichroic camera assembly 540 over the dichroic camera assembly 500 is that one of the two photosensitive arrays 556 and 566 may be selected to have a larger sensor area than the other array. In the example of
A third potential advantage of the dichroic camera assembly 540 over the dichroic camera assembly 500 is that aberrations, especially chromatic aberrations, may be more simply and completely corrected using two separate lens assemblies 554, 564 than using a single lens assembly 505 as in
On the other hand, a potential advantage of the dichroic camera assembly 500 over the dichroic camera assembly 540 is the smaller size of the overall assembly. Another potential advantage is the ability to use a single off-the-shelf lens—for example, a C-mount lens.
The pattern of light projected from the auxiliary projector 710A may be configured to convey information to the operator. In an embodiment, the pattern may convey written information such as numerical values of a measured quantity or deviation of a measured quantity in relation to an allowed tolerance. In an embodiment, deviations of measured values in relation to specified quantities may be projected directly onto the surface of an object. In some cases, the information conveyed may be indicated by projected colors or by “whisker marks,” which are small lines that convey scale according to their lengths. In other embodiments, the projected light may indicate where assembly operations are to be performed, for example, where a hole is to be drilled or a screw is to be attached. In other embodiments, the projected light may indicate where a measurement is to be performed, for example, by a tactile probe attached to the end of an articulated arm CMM or a tactile probe attached to a six-DOF accessory of a six-DOF laser tracker. In other embodiments, the projected light may be a part of the 3D measurement system. For example, a projected spot or patch of light may be used to determine whether certain locations on the object produce significant reflections that would result in multi-path interference. In other cases, the additional projected light pattern may be used to provide additional triangulation information to be imaged by the camera having the perspective center 628A.
In an embodiment, the external projector 810A is fixed in place and projects a pattern over a relatively wide FOV while the 3D imager 600A is moved to a plurality of different locations. The dichroic camera assembly 620A captures a portion of the pattern of light projected by the external projector 810A in each of the plurality of different locations to register the multiple 3D images together. In an embodiment, the projector 110A projects a first pattern of light at a first wavelength, while the projector 810A projects a second pattern of light at a second wavelength. In an embodiment, a first of the two cameras in the dichroic camera assembly 620A captures the first wavelength of light, while the second of the two cameras captures the second wavelength of light. In this manner interference between the first and second projected patterns can be avoided. In other embodiments, an additional color camera such as the camera 390 in
The device in the upper left of
The dichroic camera unit 1220A includes a camera base 1222A, a dichroic camera assembly 620A, a camera perspective center 628A, and a processor 1224A. Light reflected (scattered) off the object surface 130A from the point 132A passes through the camera perspective center 628A of the dichroic camera assembly 620A. The dichroic camera assembly was discussed herein above in reference to
The second dichroic camera unit 1220B includes a camera base 1222B, a first dichroic camera assembly 620B, a second perspective center 628B, and a processor 1224B. A ray of light 121B travels from the object point 132A on the object surface 130A through the second perspective center 628B. The processor 1224B cooperates with the dichroic camera assembly 620B to capture the image of the illuminated pattern on the object surface 130A. The 3D coordinates of points on the object surface 130A may be determined by any combination of the processors 1224A, 1224B, and 1250B. Likewise, any of the processors 1224A, 1224B, and 1250B may provide support to obtain color 3D images, to register multiple images, and so forth. The distance between the first perspective center 628A and the second perspective center 628B is the baseline distance 1240B. Because the projection base 1222A and the camera base 1222B are not fixedly attached but may each be moved relative to the other, the baseline distance 1240B varies according to the setup.
The 3D coordinates of points on the object surface 130A may be determined by the camera internal processor 1224A or by the processor 1250A. Likewise, either the internal processor 1224A or the external processor 1250A may provide support to obtain color 3D images, to register multiple images, and so forth.
The dichroic camera unit 1220A includes a camera base 1222A, a dichroic camera assembly 620A, a camera perspective center 628A, and a processor 1224A. Light reflected (scattered) off the object surface 130A from the point 132A passes through the camera perspective center 628A of the dichroic camera assembly 620A. The dichroic camera assembly was discussed herein above in reference to
The auxiliary projection unit 1210C includes an auxiliary projector base 1222C, an auxiliary projector 710A, and a processor 1224C. The auxiliary projector 710A was discussed herein above in reference to
The pattern of light projected from the auxiliary projector unit 1210C may be configured to convey information to the operator. In an embodiment, the pattern may convey written information such as numerical values of a measured quantity or deviation of a measured quantity in relation to an allowed tolerance. In an embodiment, deviations of measured values in relation to specified quantities may be projected directly onto the surface of an object. In some cases, the information conveyed may be indicated by projected colors or by whisker marks. In other embodiments, the projected light may indicate where assembly operations are to be performed, for example, where a hole is to be drilled or a screw is to be attached. In other embodiments, the projected light may indicate where a measurement is to be performed, for example, by a tactile probe attached to the end of an articulated arm CMM or a tactile probe attached to a six-DOF accessory of a six-DOF laser tracker. In other embodiments, the projected light may be a part of the 3D measurement system. For example, a projected spot or patch of light may be used to determine whether certain locations on the object produce significant reflections that would result in multi-path interference. In other cases, the additional projected light pattern may be used to provide additional triangulation information to be imaged by the camera having the perspective center 628A. The processor 1224C may cooperate with the auxiliary projector 710A and with the processor 1250C to obtain the desired projection pattern.
The first dichroic camera unit 1220A includes a camera base 1222A, a dichroic camera assembly 620A, a first perspective center 628A, and a processor 1224A. Light reflected (scattered) off the object surface 130A from the point 132A passes through the camera perspective center 628A of the dichroic camera assembly 620A. The dichroic camera assembly was discussed herein above in reference to
The second dichroic camera unit 1220B includes a camera base 1222B, a first dichroic camera assembly 620B, a second perspective center 628B, and a processor 1224B. A ray of light 121B travels from the object point 132A on the object surface 130A through the second perspective center 628B. The processor 1224B cooperates with the dichroic camera assembly 620B to capture the image of the illuminated pattern on the object surface 130A.
Because the projection base 1212A and the camera bases 1222A, 1222B are not fixedly attached but may each be moved relative to the others, the baseline distances between these components vary according to the setup. The processors 1224A, 1224B cooperate with the dichroic camera assemblies 620A, 620B, respectively, to capture images of the illuminated pattern on the object surface 130A. The 3D coordinates of points on the object surface 130A are determined by a combination of the processors 1214A, 1224A, 1224B, and 1250D. Likewise, some combination of these processors may provide support to obtain color 3D images, to register multiple images, and so forth.
The one or more dichroic cameras 1320A, 1320B may be for example the dichroic camera 500 described with reference to
In an embodiment, a plurality of projectors such as 1310A, 1310B are used. In an embodiment, the plurality of projectors project patterns at the same time. This approach is useful when the spots are used primarily to assist in registration or when there is little chance of confusing overlapping projection patterns. In another embodiment, the plurality of projectors project light at different times so as to enable unambiguous identification of the projector that emits a particular pattern. In an alternative embodiment, each projector projects a slightly different wavelength. In one approach, each camera is configured to respond only to wavelengths from selected projectors. In another approach, each camera is configured to separate multiple wavelengths of light, thereby enabling identification of the pattern associated with a particular projector that emits light of a particular wavelength. In a different embodiment, all of the projectors project light at the same wavelength so that each camera responds to any light within its FOV.
In an embodiment, 3D coordinates are determined based at least in part on triangulation. A triangulation calculation requires knowledge of the relative position and orientation of at least one projector such as 1310A and one camera such as 1320A. Compensation (calibration) methods for obtaining such knowledge are described herein below, especially in reference to
In another embodiment, 3D coordinates are obtained by identifying features or targets on an object and noting changes in those features or targets as the object 1330 moves. The process of identifying natural features of an object 1330 in a plurality of images is sometimes referred to as videogrammetry. There is a well-developed collection of techniques, generally referred to as image processing or feature detection, that may be used to determine points associated with features of objects as seen from multiple perspectives. Such techniques, when applied to the determination of 3D coordinates based on relative movement between the measuring device and the measured object, are sometimes referred to as videogrammetry techniques.
The common points identified by the well-developed collection of techniques described above may be referred to as cardinal points. A commonly used general category of methods for finding cardinal points is interest point detection, with the detected points referred to as interest points. According to the usual definition, an interest point has a mathematically well-founded definition, a well-defined position in space, and an image structure around the interest point that is rich in local information content and relatively stable under variations in illumination level. A particular example of an interest point is a corner point, which might be a point corresponding to an intersection of three planes, for example. Another example of a feature detection method is the scale invariant feature transform (SIFT), which is a method well known in the art and described in U.S. Pat. No. 6,711,293 to Lowe. Other common feature detection methods for finding cardinal points include edge detection, blob detection, and ridge detection.
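As an illustration only, the sketch below uses the OpenCV implementation of SIFT to detect interest points in two images and retain matches that pass a ratio test; the file names are placeholders, and any of the other detectors mentioned above could be substituted.

```python
import cv2

# Detect interest points (cardinal-point candidates) in two images of the
# scene taken from different perspectives, and match them.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep matches that pass a ratio test.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# The matched keypoint pairs serve as cardinal-point correspondences
# for videogrammetry or image registration.
pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```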
In a method of videogrammetry applied to
A way to improve the registration of multiple 2D images or multiple 3D images using videogrammetry is to provide additional object features by projecting an illuminated pattern onto the object. If the object 1330 and the projector(s) are stationary, the pattern on the object remains stationary even if the one or more cameras 1320A, 1320B are moving. If the object 1330 is moving while the one or more cameras 1320A, 1320B and the one or more projectors 1310A, 1310B remain stationary, the pattern on the object changes over time. In either case, whether the pattern is stationary or moving on the object, it can be used to assist in registering the multiple 2D or 3D images.
The use of videogrammetry techniques is particularly powerful when combined with triangulation methods for determining 3D coordinates. For example, if the pose of a first camera is known in relation to a second camera (in other words, the baseline between the cameras and the relative orientation of the cameras to the baseline are known), then common elements of a pattern of light from one or more projectors 1310A, 1310B may be identified and triangulation calculations performed to determine the 3D coordinates of the moving object.
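A sketch of such a triangulation calculation is given below, assuming the intrinsic matrices K1, K2 and the relative pose (R, t) of the two cameras are known from compensation; it uses the OpenCV triangulation routine on matched pixel coordinates and is a minimal sketch rather than the complete processing chain.

```python
import numpy as np
import cv2

def triangulate_pair(K1, K2, R, t, pts1, pts2):
    """3D coordinates (in the camera-1 frame) of matched points observed by two
    cameras whose relative pose (R, t: camera-1 frame to camera-2 frame) is
    known. pts1 and pts2 are 2xN arrays of corresponding pixel coordinates."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1.astype(float), pts2.astype(float))
    return (X_h[:3] / X_h[3]).T  # N x 3 Euclidean points
```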
Likewise if the pose of a first projector 1310A is known in relation to a first camera 1320A and if a processor is able to determine a correspondence among elements of the projected pattern and the captured 2D image, then 3D coordinates may be calculated in the frame of reference of the projector 1310A and the camera 1320A. The obtaining of a correspondence between cardinal points or projected pattern elements is enhanced if a second camera is added, especially if an advantageous geometry of the two cameras and the one projector, such as that illustrated in
As explained herein above with reference to
A useful capability of the one or more dichroic cameras 1320A, 1320B is in capturing object color (texture) and projecting this color onto a 3D image. It is also possible to capture color information with a separate camera that is not a dichroic camera. If the relative pose of the separate camera is known in relation to the dichroic camera, it may be possible to determine the colors for a 3D image. However, as explained herein above, such a mathematical determination from a separate camera is generally more complex and less accurate than a determination based on images from a dichroic camera. The use of one or more dichroic cameras 1320A, 1320B as opposed to single-channel cameras provides potential advantages in improving accuracy in determining 3D coordinates and in applying color (texture) to the 3D image.
In an embodiment, one or more artificial targets are mounted on the object 1330. In an embodiment, the one or more artificial targets are reflective spots that are illuminated by the one or more projectors 1310A, 1310B. In an alternative embodiment, the one or more artificial targets are illuminated points of light such as LEDs. In an embodiment, one of the two channels of the one or more dichroic cameras 1320A, 1320B is configured to receive light from the LEDs, while the other of the two channels is configured to receive a color image of the object. The channel that receives the signals from the reflective dots or LEDs may be optimized to block light having wavelengths different than those returned by the reflective dots or the LEDs, thus simplifying calculation of 3D coordinates of the object surface. In an embodiment, a first channel of the one or more dichroic cameras 1320A, 1320B is configured to pass infrared light from the reflective dots or LEDs, while the second channel is configured to block infrared light while passing visible (colored) light.
In
In an embodiment, the motors are configured to track the object 1330. In the event that multiple objects become separated, different projectors and cameras may be assigned to follow different ones of the multiple objects. Such an approach may enable tracking of both the ball 1304 and the player 1302 following the kick of the ball by the player.
Another potential advantage of having motorized rotation mechanisms 1402, 1404 for the projectors and cameras is the possibility of reducing the FOV of the projectors and cameras to obtain higher resolutions. This will provide, for example, more accurate and detailed 3D and color representations. The angular accuracy of steering mechanisms of the sort shown in
A number of different steering mechanisms and angle transducers may be used. The steering mechanisms 1402, 1404 illustrated in
Triangulation devices such as 3D imagers and stereo cameras have a measurement error approximately proportional to Z²/B, where B is the baseline distance and Z is the perpendicular distance from the baseline to the object point being measured. This formula indicates that the error varies as the perpendicular distance Z times the ratio of the perpendicular distance to the baseline distance. It follows that it is difficult to obtain good accuracy when measuring a relatively distant object with a triangulation device having a relatively small baseline. To measure a relatively distant object with relatively high accuracy, it is advantageous to position the projector and camera of a 3D imager relatively far apart or, similarly, to position the two cameras of a stereo camera relatively far apart. It can be difficult to achieve the desired large baseline in an integrated triangulation device in which projectors and cameras are attached fixedly to a base structure.
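A short numerical illustration of the Z²/B scaling follows; the proportionality constant k is purely illustrative and lumps together the angular measurement uncertainty of a hypothetical device.

```python
def triangulation_error(z_m, baseline_m, k=1.0e-4):
    """Approximate measurement error, proportional to Z**2 / B. The constant k
    is an assumed device-dependent factor for illustration only."""
    return k * z_m**2 / baseline_m

# Doubling the standoff quadruples the error; doubling the baseline halves it.
print(triangulation_error(2.0, 0.3))   # ~1.3 mm
print(triangulation_error(4.0, 0.3))   # ~5.3 mm
print(triangulation_error(4.0, 0.6))   # ~2.7 mm
```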
A triangulation system that supports flexible configuration for measuring objects at different distances, including relatively long distances, is now described with reference to
In an embodiment, the projector 1610 and the cameras 1620A, 1620B are not arranged in a straight line but are rather arranged in a triangular pattern so as to produce two epipoles on each reference plane, as illustrated in
In an embodiment, the two steering angles of the projector 1610 and the cameras 1620A, 1620B are known to high accuracy. For example, angular encoders used with shafts and bearings as described herein above in reference to
To make a triangulation calculation based on measurements performed by a plurality of cameras in a stereo configuration or by a camera and a projector in a 3D imager configuration, it is important to know the relative pose of the cameras and projectors in a given arrangement.
In some cases, the separated cameras and projectors of a 3D triangulation measurement system may be mounted on a fixed stand. In this case, it may be convenient to fix the calibration artifact (for example, the calibration plate 1710) in place, for example, on a wall.
The exit pupil is defined as the optical image of the physical aperture stop as seen through the back of the lens system. The point 2118B is the center of the exit pupil. The chief ray travels from the point 2118B to a point on the photosensitive array 2120B. In general, the angle of the chief ray as it leaves the exit pupil is different than the angle of the chief ray as it enters the perspective center (the entrance pupil). To simplify analysis, the ray path following the entrance pupil is adjusted to enable the beam to travel in a straight line through the perspective center 2112B to the photosensitive array 2120B as shown in
As explained herein above, a videogrammetry system that includes a camera may be used in combination with a 3D imager that includes at least one camera and one projector. The projector may project a variety of patterns, as described herein above.
In the measurement scenario 1300 of
The accuracy of the composite 3D image of the object 1330 is improved if the pose of each of the 3D triangulation systems 900 in the measurement scenario 2300 is known within a common frame of reference 2310. A way to determine the pose of each system 900 is now described.
Although this sort of monitoring enables continual movement of the 3D triangulation system 1100B, use of the phase-shift method requires that the 3D measuring device 2500A be held stationary until a complete sequence of phase measurements is completed.
In an embodiment illustrated in
In an alternative embodiment illustrated in
In another embodiment illustrated in
Although
To determine 3D coordinates based on stereo triangulation calculations such as those illustrated in
To establish the pose within a frame of reference of the environment, it is also necessary to measure a known reference length with the cameras 1620A, 1620B to provide a length scale for the captured images. Such a reference length may be provided, for example, by a scale bar having a known length between two reference targets. In another embodiment, a scale may be provided by two reference targets measured by another method. For example, a laser tracker may be used to measure the distance between an SMR placed in each of two kinematic nests. The SMR may then be replaced by a reference target placed in each of the two kinematic nests. Each reference target in this case may include a spherical surface element that rotates within the kinematic nest and, in addition, a reflective or illuminated element centered on the sphere.
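Applying the reference length amounts to rescaling the up-to-scale reconstruction so that the reconstructed distance between the two reference targets equals the known length, as in the following sketch (function and variable names are illustrative).

```python
import numpy as np

def apply_reference_scale(points_xyz, target_a, target_b, known_length_m):
    """Rescale an up-to-scale stereo reconstruction so that the distance between
    the two reconstructed reference targets equals the known reference length."""
    measured = np.linalg.norm(np.asarray(target_a) - np.asarray(target_b))
    scale = known_length_m / measured
    return np.asarray(points_xyz) * scale
```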
An explanation is now given for a known method of determining 3D coordinates on an object surface using a sequential sinusoidal phase-shift method, as described with reference to
In
In
In a phase-shift method of determining distance to an object, a sinusoidal pattern is shifted side-to-side in a sequence of at least three phase shifts. For example, consider the situation illustrated in
By measuring the amount of light received by the pixels in the cameras 70 and 60, the initial phase shift of the light pattern 2712 can be determined. As suggested by
The phase shift method of
An alternative method of determining 3D coordinates using triangulation methods is by projecting coded patterns. If a coded pattern projected by the projector is recognized by the camera(s), then a correspondence between the projected and imaged points can be made. Because the baseline and two angles are known for this case, the 3D coordinates for the object point can be calculated.
An advantage of projecting coded patterns is that 3D coordinates may be obtained from a single projected pattern, thereby enabling rapid measurement, which is usually needed for example in handheld scanners. One disadvantage of projecting coded patterns is that background light can contaminate measurements, reducing accuracy. The problem of background light is avoided in the sinusoidal phase-shift method since background light, if constant, cancels out in the calculation of phase.
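The cancellation of constant background light in the phase calculation can be seen in the following sketch of an N-step phase-shift evaluation at a single pixel; the sample values are synthetic, and the formula is the standard least-squares phase estimate for equally spaced shifts.

```python
import math

def phase_from_shifts(intensities):
    """Initial phase of a sinusoidal fringe at one pixel, recovered from N >= 3
    images in which the pattern is shifted by equal steps of 2*pi/N. A constant
    background level cancels in the two sums."""
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Three shifts of a synthetic pattern I = 100 + 50*cos(phi + delta);
# the recovered phase is the true value of 0.8 rad, independent of the 100-count background.
phi_true = 0.8
samples = [100 + 50 * math.cos(phi_true + 2 * math.pi * k / 3) for k in range(3)]
print(phase_from_shifts(samples))
```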
One way to preserve accuracy using the phase-shift method while minimizing measurement time is to use a scanner having a triangular geometry, as in
In an embodiment, the rotating camera assemblies 1420A, 1420B have a FOV large enough to capture the light marks 2822 on the handheld measuring device 2820. By rotating the camera assemblies 1420A, 1420B to track the handheld measuring device 2820, the system 2800A is made capable of measuring 3D coordinates over a relatively large measuring environment 2850 even though the FOV is relatively small for each of the camera assemblies 1420A, 1420B. The consequence of this approach is improved measurement accuracy over a relatively large measurement volume. In an embodiment, the rotating cameras 1420A, 1420B are mounted in a raised, fixed position, for example, on pillars 2802.
In an embodiment illustrated in
The operation of the laser line scanner (also known as a laser line probe or simply line scanner) such as the line scanner 2832 of
The methods described in
In an embodiment illustrated in
The angular values of the light marks 2822 are determined from the knowledge of the relative pose of the two cameras 1420A, 1420B and the projector 3020 as explained herein above. The cameras 1420A, 1420B may measure a large number of projected pattern elements 3010 over the measurement volume to determine an accurate value for the baseline distances between the cameras 1420A, 1420B and between each of the cameras and the projector 3020. The angles of rotation of the cameras 1420A, 1420B are recalculated following each rotation of one or both of the cameras 1420A, 1420B based on the need for self-consistency in the triangulation calculations. The accuracy of the calculated angular values is strengthened if the two cameras 1420A, 1420B and the projector 3020 are in a triangular configuration as illustrated in
In an embodiment of
In an embodiment, the system determines the 3D coordinates of the object 1030 based at least in part on the images of the projected pattern obtained by the two cameras. The cameras 1420A, 1420B are able to match the patterns of light marks 2822 and, based on that initial orientation, are further able to match the projected spots 3010 near the probe 2820, 2830, or 2840 that are in the FOV of the two cameras 1420A, 1420B. Additional natural features on the object 1030 or on nearby stationary objects enable the system to use the images from the two cameras to determine 3D coordinates of the object 1030 within the frame of reference 2810.
In an alternative embodiment of
In another embodiment of
In an embodiment of
In an embodiment, the first projected pattern of light is a relatively fine pattern of light that provides relatively fine resolution when imaged by the cameras 3124 and 1420B. The projected pattern of light may be any of the types of light patterns discussed herein above, for example, sequential phase-shift patterns or single-shot coded patterns. In an embodiment, the triangulation calculation is performed based at least in part on the images obtained by the cameras 3124 and 1420B and by the relative pose of the cameras 3124 and 1420B. In another embodiment, the calculation is performed based at least in part on the image obtained by the camera 1420B, the first pattern projected by the projector 3122, and by the relative pose of the projector 3122 and the camera 1420B.
In one embodiment, the rotation angles of the rotating camera-projector 3120 and the rotating camera 1420B are not known to high accuracy. In this case, the method described with respect to
In another embodiment of
In an embodiment of
A potential disadvantage with such angular encoders or other high accuracy angular transducers is relatively high cost. A possible way around this problem is illustrated in
In an embodiment, a three-dimensional (3D) measuring system includes: a body; an internal projector fixedly attached to the body, the internal projector configured to project an illuminated pattern of light onto an object; and a first dichroic camera assembly fixedly attached to the body, the first dichroic camera assembly having a first beam splitter configured to direct a first portion of incoming light into a first channel leading to a first photosensitive array and to direct a second portion of the incoming light into a second channel leading to a second photosensitive array, the first photosensitive array being configured to capture a first channel image of the illuminated pattern on the object, the second photosensitive array being configured to capture a second channel image of the illuminated pattern on the object, the first dichroic camera assembly having a first pose relative to the internal projector, wherein the 3D measuring system is configured to determine 3D coordinates of a first point on the object based at least in part on the illuminated pattern, the second channel image, and the first pose.
In a further embodiment, the first portion and the second portion are directed into the first channel and the second channel, respectively, based at least in part on the wavelengths present in the first portion and the wavelengths present in the second portion.
In a further embodiment, further including a first lens between the first beam splitter and the first photosensitive array and a second lens between the first beam splitter and the second photosensitive array.
In a further embodiment, the focal length of the first lens is different than the focal length of the second lens.
In a further embodiment, the field-of-view (FOV) of the first channel is different than the FOV of the second channel.
In a further embodiment, the 3D measuring system is configured to identify a first cardinal point in a first instance of the first channel image and to further identify the first cardinal point in a second instance of the first channel image, the second instance of the first channel image being different than the first instance of the first channel image.
In a further embodiment, the first cardinal point is based on a feature selected from the group consisting of: a natural feature on or near the object, a spot of light projected onto or near to the object from a light source not attached to the body, a marker placed on or near the object, and a light source placed on or near the object.
In a further embodiment, the 3D measuring system is further configured to register the first instance of the first channel image to the second instance of the first channel image.
In a further embodiment, the 3D measuring system is configured to determine a first pose of the 3D measuring system in the second instance relative to a first pose of the 3D measuring system in the first instance.
In a further embodiment, the first channel has a larger field-of-view (FOV) than the second channel.
In a further embodiment, the first photosensitive array is configured to capture a color image.
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the illuminated pattern includes an infrared wavelength.
In a further embodiment, the illuminated pattern includes a blue wavelength.
In a further embodiment, the illuminated pattern is a coded pattern.
In a further embodiment, the 3D measuring system is configured to emit a first instance of the illuminated pattern, a second instance of the illuminated pattern, and a third instance of the illuminated pattern, the 3D measuring system being further configured to capture a first instance of the second channel image, a second instance of the second channel image, and a third instance of the second channel image.
In a further embodiment, the 3D measuring system is further configured to determine the 3D coordinates of a point on the object based at least in part on the first instance of the illuminated pattern, the second instance of the illuminated pattern, the third instance of the illuminated pattern, the first instance of the second channel image, the second instance of the second channel image, and the third instance of the second channel image.
In a further embodiment, the first illuminated pattern, the second illuminated pattern, and the third illuminated pattern are all sinusoidal patterns, each of the first illuminated pattern, the second illuminated pattern, and the third illuminated pattern being shifted side-to-side relative to the other two illuminated patterns.
In a further embodiment, further including a second camera assembly fixedly attached to the body, the second camera assembly receiving a third portion of incoming light in a third channel leading to a third photosensitive array, the third photosensitive array configured to capture a third channel image of the illuminated pattern on the object, the second camera assembly having a second pose relative to the internal projector, wherein the 3D measuring system is further configured to determine the 3D coordinates of the object based on the third channel image.
In a further embodiment, the 3D measuring system is further configured to determine the 3D coordinates of the object based on epipolar constraints, the epipolar constraints based at least in part on the first pose and the second pose.
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the 3D measuring system is configured to assign a color to the first point based at least in part on the first channel image.
In a further embodiment, the illuminated pattern is an uncoded pattern.
In a further embodiment, the illuminated pattern includes a grid of spots.
In a further embodiment, the internal projector further includes a laser light source and a diffractive optical element, the laser light source configured to shine through the diffractive optical element.
In a further embodiment, the second camera assembly further includes a second beam splitter configured to direct the third portion into the third channel and to direct a fourth portion of the incoming light into a fourth channel leading to a fourth photosensitive array.
In a further embodiment, further including an external projector detached from the body, the external projector configured to project an external pattern of light on the object.
In a further embodiment, the 3D measuring system is further configured to register a first instance of the first channel image to a second instance of the first channel image.
In a further embodiment, the external projector is further attached to a second mobile platform.
In a further embodiment, the second mobile platform further includes second motorized wheels.
In a further embodiment, the external projector is attached to a second motorized rotation mechanism configured to rotate the direction of the external pattern of light.
In a further embodiment, the body is attached to a first mobile platform.
In a further embodiment, the first mobile platform further includes first motorized wheels.
In a further embodiment, the first mobile platform further includes a robotic arm configured to move and rotate the body.
In a further embodiment, further including an external projector detached from the body, the external projector configured to project an external pattern of light on the object, the external projector including a second mobile platform having second motorized wheels.
In a further embodiment, the 3D measuring system is configured to adjust a pose of the body under computer control.
In a further embodiment, the 3D measuring system is further configured to adjust a pose of the external projector under the computer control.
In a further embodiment, further including an auxiliary projector fixedly attached to the body, the auxiliary projector configured to project an auxiliary pattern of light onto or near to the object.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a numerical value of a measured quantity, a deviation of a measured quantity in relation to an allowed tolerance, information conveyed by a pattern of color, and whisker marks.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a location at which an assembly operation is to be performed and a location at which a measurement is to be performed.
In a further embodiment, the auxiliary pattern is projected to provide additional triangulation information.
In a further embodiment, the 3D measuring system is configured to produce a 3D color representation of the object.
In a further embodiment, further including a first lens placed to intercept the incoming light before reaching the first beam splitter.
In a further embodiment, the internal projector further includes a pattern generator, an internal projector lens, and an internal projector lens perspective center.
In a further embodiment, the internal projector further includes a light source and a diffractive optical element.
In a further embodiment, the auxiliary projector further includes an auxiliary picture generator, an auxiliary projector lens, and an auxiliary projector lens perspective center.
In a further embodiment, the auxiliary projector further includes an auxiliary light source and an auxiliary diffractive optical element.
In an embodiment, a three-dimensional (3D) measuring system includes: a body; a first dichroic camera assembly fixedly attached to the body, the first dichroic camera assembly having a first beam splitter configured to direct a first portion of incoming light into a first channel leading to a first photosensitive array and to direct a second portion of the incoming light into a second channel leading to a second photosensitive array, the first photosensitive array being configured to capture a first channel image of the object, the second photosensitive array being configured to capture a second channel image of the object; and a second camera assembly fixedly attached to the body, the second camera assembly configured to direct a third portion of the incoming light into a third channel leading to a third photosensitive array, the third photosensitive array being configured to capture a third channel image of the object, the second camera assembly having a first pose relative to the first dichroic camera assembly, wherein the 3D measuring system is configured to determine 3D coordinates of a first point on the object based at least in part on the second channel image, the third channel image, and the first pose.
In a further embodiment, the first portion and the second portion are directed into the first channel and the second channel, respectively, based at least in part on wavelengths present in the first portion and on wavelengths present in the second portion.
In a further embodiment, further including a first lens between the first beam splitter and the first photosensitive array and a second lens between the first beam splitter and the second photosensitive array.
In a further embodiment, the focal length of the first lens is different than the focal length of the second lens.
In a further embodiment, the field-of-view (FOV) of the first channel is different than the FOV of the second channel.
In a further embodiment, the 3D measuring system is configured to identify a first cardinal point in a first instance of the first channel image and to further identify the first cardinal point in a second instance of the first channel image, the second instance of the first channel image being different than the first instance of the first channel image.
In a further embodiment, the first cardinal point is based on a feature selected from the group consisting of: a natural feature on or near the object, a spot of light projected onto or near to the object from a light source not attached to the body, a marker placed on or near the object, and a light source placed on or near the object.
In a further embodiment, the 3D measuring system is further configured to register the first instance of the first channel image to the second instance of the first channel image.
In a further embodiment, the 3D measuring system is configured to determine a first pose of the 3D measuring system in the second instance relative to a first pose of the 3D measuring system in the first instance.
In a further embodiment, the first channel has a larger field-of-view (FOV) than the second channel.
In a further embodiment, the first photosensitive array is configured to capture a color image.
In a further embodiment, the first photosensitive array is configured to capture an infrared image.
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the 3D measuring system is configured to assign a color to the first point based at least in part on the first channel image.
In a further embodiment, the second camera assembly further includes a second beam splitter configured to direct the third portion into the third channel and to direct a fourth portion of the incoming light into a fourth channel leading to a fourth photosensitive array.
In a further embodiment, further including an external projector detached from the body, the external projector configured to project an external pattern of light on the object.
In a further embodiment, the external projector is further attached to a second mobile platform.
In a further embodiment, the second mobile platform further includes second motorized wheels.
In a further embodiment, the external projector is attached to a second motorized rotation mechanism configured to rotate the direction of the external pattern of light.
In a further embodiment, the body is attached to a first mobile platform.
In a further embodiment, the first mobile platform further includes first motorized wheels.
In a further embodiment, the first mobile platform further includes a robotic arm configured to move and rotate the body.
In a further embodiment, further including an external projector detached from the body, the external projector configured to project an external pattern of light on the object, the external projector including a second mobile platform having second motorized wheels.
In a further embodiment, the 3D measuring system is configured to adjust a pose of the body under computer control.
In a further embodiment, the 3D measuring system is further configured to adjust a pose of the external projector under the computer control.
In a further embodiment, further including an auxiliary projector configured to project an auxiliary pattern of light onto or near to the object.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a numerical value of a measured quantity, a deviation of a measured quantity in relation to an allowed tolerance, information conveyed by a pattern of color, and whisker marks.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a location at which an assembly operation is to be performed and a location at which a measurement is to be performed.
In a further embodiment, the auxiliary pattern is projected to provide additional triangulation information.
In a further embodiment, the 3D measuring system is configured to produce a 3D color representation of the object.
In a further embodiment, further including a first lens placed to intercept the incoming light before reaching the first beam splitter.
In a further embodiment, the auxiliary projector further includes an auxiliary picture generator, an auxiliary projector lens, and an auxiliary projector lens perspective center.
In a further embodiment, the auxiliary projector further includes an auxiliary light source and an auxiliary diffractive optical element.
In an embodiment, a three-dimensional (3D) measuring system includes: a first body and a second body independent of the first body; an internal projector configured to project an illuminated pattern of light onto an object; and a first dichroic camera assembly fixedly attached to the second body, the first dichroic camera assembly having a first beam splitter configured to direct a first portion of incoming light into a first channel leading to a first photosensitive array and to direct a second portion of the incoming light into a second channel leading to a second photosensitive array, the first photosensitive array being configured to capture a first channel image of the illuminated pattern on the object, the second photosensitive array being configured to capture a second channel image of the illuminated pattern on the object, the first dichroic camera assembly having a first pose relative to the internal projector, wherein the 3D measuring system is configured to determine 3D coordinates of a first point on the object based at least in part on the illuminated pattern, the second channel image, and the first pose.
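A minimal sketch of the triangulation implied by this embodiment, assuming the correspondence between a pattern element of the internal projector and its image in the second channel has already been established and that both rays are expressed in a common frame by means of the first pose, is given below; the names are illustrative only.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the closest approach of two rays (origins o, directions d).

    o1/d1 may be the ray from the internal projector through a pattern element,
    o2/d2 the ray from the second channel perspective center through the
    corresponding image point, both expressed in one frame via the first pose.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Solve for the ray parameters (s, t) minimizing |(o1 + s d1) - (o2 + t d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [-(d1 @ d2), d2 @ d2]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, -(d2 @ b)]))
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))  # 3D coordinates of the point
```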
In a further embodiment, the first portion and the second portion are directed into the first channel and the second channel, respectively, based at least in part on wavelengths present in the first portion and on wavelengths present in the second portion.
In a further embodiment, further including a first lens between the first beam splitter and the first photosensitive array and a second lens between the first beam splitter and the second photosensitive array.
In a further embodiment, the focal length of the first lens is different than the focal length of the second lens.
In a further embodiment, the field-of-view (FOV) of the first channel is different than the FOV of the second channel.
In a further embodiment, the 3D measuring system is configured to identify a first cardinal point in a first instance of the first channel image and to further identify the first cardinal point in a second instance of the first channel image, the second instance of the first channel image being different than the first instance of the first channel image.
In a further embodiment, the first cardinal point is based on a feature selected from the group consisting of: a natural feature on or near the object, a spot of light projected onto or near to the object from a light source not attached to the first body or the second body, a marker placed on or near the object, and a light source placed on or near the object.
In a further embodiment, the 3D measuring system is further configured to register the first instance of the first channel image to the second instance of the first channel image.
In a further embodiment, the 3D measuring system is configured to determine a first pose of the 3D measuring system in the second instance relative to a first pose of the 3D measuring system in the first instance.
In a further embodiment, the first channel has a larger field-of-view (FOV) than the second channel.
In a further embodiment, the first photosensitive array is configured to capture a color image.
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the illuminated pattern includes an infrared wavelength.
In a further embodiment, the illuminated pattern includes a blue wavelength.
In a further embodiment, the illuminated pattern is a coded pattern.
In a further embodiment, the 3D measuring system is configured to emit a first instance of the illuminated pattern, a second instance of the illuminated pattern, and a third instance of the illuminated pattern, the 3D measuring system being further configured to capture a first instance of the second channel image, a second instance of the second channel image, and a third instance of the second channel image.
In a further embodiment, the 3D measuring system is further configured to determine the 3D coordinates of a point on the object based at least in part on the first instance of the illuminated pattern, the second instance of the illuminated pattern, the third instance of the illuminated pattern, the first instance of the second channel image, the second instance of the second channel image, and the third instance of the second channel image.
In a further embodiment, the first instance of the illuminated pattern, the second instance of the illuminated pattern, and the third instance of the illuminated pattern are all sinusoidal patterns, each instance being shifted sideways relative to the other two.
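For the three sideways-shifted sinusoidal patterns of this embodiment, the wrapped phase at each pixel of the second channel image can be recovered with the classical three-step phase-shift relation, sketched below under the assumption of equal 120-degree shifts; phase unwrapping and conversion to 3D coordinates would follow as in the embodiments above. Names are illustrative.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three sinusoidal patterns shifted by 120 degrees.

    i1, i2, i3 are the three instances of the second channel image as
    floating-point arrays.  The phase, in (-pi, pi], encodes the projector
    coordinate at each pixel up to an integer number of fringes, which a
    subsequent unwrapping step resolves before triangulation.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```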
In a further embodiment, further including a second camera assembly fixedly attached to a third body, the second camera assembly receiving a third portion of incoming light in a third channel leading to a third photosensitive array, the third photosensitive array configured to capture a third channel image of the illuminated pattern on the object, the second camera assembly having a second pose relative to the internal projector, wherein the 3D measuring system is further configured to determine the 3D coordinates of the object based on the third channel image.
In a further embodiment, the 3D measuring system is further configured to determine the 3D coordinates of the object based on epipolar constraints, the epipolar constraints based at least in part on the first pose and the second pose.
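The epipolar constraints of this embodiment may be illustrated, for normalized image coordinates and a known relative pose (R, t) between the two camera assemblies, by the residual of the essential-matrix relation sketched below; the names are illustrative only.

```python
import numpy as np

def skew(v):
    """Matrix such that skew(v) @ w equals the cross product v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residual(x1, x2, R, t):
    """Residual of the epipolar constraint for normalized image points.

    (R, t) is the pose of the second camera assembly relative to the first
    dichroic camera assembly; x1 and x2 are homogeneous normalized points in
    the two views.  Correspondences with a residual near zero satisfy the
    constraint x2^T E x1 = 0 and are retained for triangulation.
    """
    E = skew(t) @ R  # essential matrix built from the relative pose
    return float(x2 @ E @ x1)
```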
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the 3D measuring system is configured to assign a color to the first point based at least in part on the first channel image.
In a further embodiment, the illuminated pattern is an uncoded pattern.
In a further embodiment, the illuminated pattern includes a grid of spots.
In a further embodiment, the internal projector further includes a laser light source and a diffractive optical element, the laser light source configured to shine through the diffractive optical element.
In a further embodiment, the second camera assembly further includes a second beam splitter configured to direct the third portion into the third channel and to direct a fourth portion of the incoming light into a fourth channel leading to a fourth photosensitive array.
In a further embodiment, further including an external projector detached from the first body, the second body, and the third body, the external projector configured to project an external pattern of light on the object.
In a further embodiment, the 3D measuring system is further configured to register a first instance of the first channel image to a second instance of the first channel image.
In a further embodiment, the external projector is further attached to a second mobile platform.
In a further embodiment, the second mobile platform further includes second motorized wheels.
In a further embodiment, the external projector is attached to a second motorized rotation mechanism configured to rotate the direction of the external pattern of light.
In a further embodiment, the first body and the second body are attached to a first mobile platform and a second mobile platform, respectively.
In a further embodiment, the first mobile platform and the second mobile platform further include first motorized wheels and second motorized wheels, respectively.
In a further embodiment, further including an external projector detached from the first body and the second body, the external projector configured to project an external pattern of light on the object, the external projector including a third mobile platform having third motorized wheels.
In a further embodiment, the 3D measuring system is configured to adjust a pose of the first body and the second body under computer control.
In a further embodiment, the 3D measuring system is further configured to adjust a pose of the external projector under the computer control.
In a further embodiment, further including an auxiliary projector configured to project an auxiliary pattern of light onto or near to the object.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a numerical value of a measured quantity, a deviation of a measured quantity in relation to an allowed tolerance, information conveyed by a pattern of color, and whisker marks.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a location at which an assembly operation is to be performed and a location at which a measurement is to be performed.
In a further embodiment, the auxiliary pattern is projected to provide additional triangulation information.
In a further embodiment, the 3D measuring system is configured to produce a 3D color representation of the object.
In a further embodiment, further including a first lens placed to intercept the incoming light before reaching the first beam splitter.
In a further embodiment, the internal projector further includes a pattern generator, an internal projector lens, and an internal lens perspective center.
In a further embodiment, the internal projector further includes a light source and a diffractive optical element.
In a further embodiment, the auxiliary projector further includes an auxiliary picture generator, an auxiliary projector lens, and an auxiliary projector lens perspective center.
In a further embodiment, the auxiliary projector further includes an auxiliary light source and an auxiliary diffractive optical element.
In an embodiment, a three-dimensional (3D) measuring system includes: a first body and a second body independent of the first body; a first dichroic camera assembly fixedly attached to the first body, the first dichroic camera assembly having a first beam splitter configured to direct a first portion of incoming light into a first channel leading to a first photosensitive array and to direct a second portion of the incoming light into a second channel leading to a second photosensitive array, the first photosensitive array being configured to capture a first channel image of the object, the second photosensitive array being configured to capture a second channel image of the object; and a second camera assembly fixedly attached to the second body, the second camera assembly configured to direct a third portion of the incoming light into a third channel leading to a third photosensitive array, the third photosensitive array being configured to capture a third channel image of the object, the second camera assembly having a first pose relative to the first dichroic camera assembly, wherein the 3D measuring system is configured to determine 3D coordinates of a first point on the object based at least in part on the second channel image, the third channel image, and the first pose.
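A non-limiting sketch of determining the 3D coordinates of the first point from the second channel image and the third channel image, assuming 3x4 projection matrices built from each channel's intrinsics and the first pose, is the linear (direct linear transform) triangulation below; the names are illustrative only.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1: assumed 3x4 projection matrix of the second channel of the dichroic
        camera assembly; P2: 3x4 projection matrix of the third channel of
        the second camera assembly, built from its intrinsics and the first
        pose.  uv1 and uv2 are the corresponding pixel coordinates.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # 3D coordinates of the first point
```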
In a further embodiment, the first portion and the second portion are directed into the first channel and the second channel, respectively, based at least in part on wavelengths present in the first portion and on wavelengths present in the second portion.
In a further embodiment, further including a first lens between the first beam splitter and the first photosensitive array and a second lens between the first beam splitter and the second photosensitive array.
In a further embodiment, the focal length of the first lens is different than the focal length of the second lens.
In a further embodiment, the field-of-view (FOV) of the first channel is different than the FOV of the second channel.
In a further embodiment, the 3D measuring system is configured to identify a first cardinal point in a first instance of the first channel image and to further identify the first cardinal point in a second instance of the first channel image, the second instance of the first channel image being different than the first instance of the first channel image.
In a further embodiment, the first cardinal point is based on a feature selected from the group consisting of: a natural feature on or near the object, a spot of light projected onto or near to the object from a light source not attached to the first body or the second body, a marker placed on or near the object, and a light source placed on or near the object.
In a further embodiment, the 3D measuring system is further configured to register the first instance of the first channel image to the second instance of the first channel image.
In a further embodiment, the 3D measuring system is configured to determine a first pose of the 3D measuring system in the second instance relative to a first pose of the 3D measuring system in the first instance.
In a further embodiment, the first channel has a larger field-of-view (FOV) than the second channel.
In a further embodiment, the first photosensitive array is configured to capture a color image.
In a further embodiment, the first photosensitive array is configured to capture an infrared image.
In a further embodiment, the 3D measuring system is further configured to determine 3D coordinates of the first point on the object based at least in part on the first channel image.
In a further embodiment, the 3D measuring system is configured to assign a color to the first point based at least in part on the first channel image.
In a further embodiment, the second camera assembly further includes a second beam splitter configured to direct the third portion into the third channel and to direct a fourth portion of the incoming light into a fourth channel leading to a fourth photosensitive array.
In a further embodiment, further including an external projector detached from the first body and the second body, the external projector configured to project an external pattern of light on the object.
In a further embodiment, the external projector is further attached to a third mobile platform.
In a further embodiment, the third mobile platform further includes third motorized wheels.
In a further embodiment, the external projector is attached to a second motorized rotation mechanism configured to rotate the direction of the external pattern of light.
In a further embodiment, the first body is attached to a first mobile platform and the second body is attached to a second mobile platform.
In a further embodiment, the first mobile platform further includes first motorized wheels and the second mobile platform further includes second motorized wheels.
In a further embodiment, the first mobile platform further includes a first motorized rotation mechanism configured to rotate the first body and a second motorized rotation mechanism configured to rotate the second body.
In a further embodiment, further including an external projector detached from the first body and the second body, the external projector configured to project an external pattern of light on the object, the external projector including a third mobile platform having third motorized wheels.
In a further embodiment, the 3D measuring system is configured to adjust a pose of the first body and the pose of the second body under computer control.
In a further embodiment, the 3D measuring system is further configured to adjust a pose of the external projector under the computer control.
In a further embodiment, further including an auxiliary projector configured to project an auxiliary pattern of light onto or near to the object.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a numerical value of a measured quantity, a deviation of a measured quantity in relation to an allowed tolerance, information conveyed by a pattern of color, and whisker marks.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a location at which an assembly operation is to be performed and a location at which a measurement is to be performed.
In a further embodiment, the auxiliary pattern is projected to provide additional triangulation information.
In a further embodiment, the 3D measuring system is configured to produce a 3D color representation of the object.
In a further embodiment, further including a first lens placed to intercept the incoming light before reaching the first beam splitter.
In a further embodiment, the auxiliary projector further includes an auxiliary picture generator, an auxiliary projector lens, and an auxiliary projector lens perspective center.
In a further embodiment, the auxiliary projector further includes an auxiliary light source and an auxiliary diffractive optical element.
In an embodiment, a measurement method includes: placing a first rotating camera assembly at a first environment location in an environment, the first rotating camera assembly including a first camera body, a first camera, a first camera rotation mechanism, and a first camera angle-measuring system; placing a second rotating camera assembly at a second environment location in the environment, the second rotating camera assembly including a second camera body, a second camera, a second camera rotation mechanism, and a second camera angle-measuring system; in a first instance: moving a three-dimensional (3D) measuring device to a first device location in the environment, the 3D measuring device having a device frame of reference, the 3D measuring device fixedly attached to a first target and a second target; rotating with the first camera rotation mechanism the first rotating camera assembly to a first angle to face the first target and the second target; measuring the first angle with the first camera angle-measuring system; capturing a first image of the first target and the second target with the first camera; rotating with the second camera rotation mechanism the second rotating camera assembly to a second angle to face the first target and the second target; measuring the second angle with the second camera angle-measuring system; capturing a second image of the first target and the second target with the second camera; measuring, with the 3D measuring device, first 3D coordinates in the device frame of reference of a first object point on an object; determining 3D coordinates of the first object point in a first frame of reference based at least in part on the first image, the second image, the measured first angle, the measured second angle, and the measured first 3D coordinates, the first frame of reference being different than the device frame of reference; in a second instance: moving the 3D measuring device to a second device location in the environment; capturing a third image of the first target and the second target with the first camera; capturing a fourth image of the first target and the second target with the second camera; measuring, with the 3D measuring device, second 3D coordinates in the device frame of reference of a second object point on the object; determining 3D coordinates of the second object point in the first frame of reference based at least in part on the third image, the fourth image, and the measured second 3D coordinates; and storing the 3D coordinates of the first object point and the second object point in the first frame of reference.
In a further embodiment, in the step of moving a three-dimensional (3D) measuring device to a first device location in the environment, the 3D measuring device is further fixedly attached to a third target; in the first instance: the step of rotating with the first camera rotation mechanism further includes rotating the first rotating camera assembly to face the third target; in the step of capturing a first image of the first target and the second target with the first camera, the first image further includes the third target; the step of rotating with the second camera rotation mechanism further includes rotating the second rotating camera assembly to face the third target; in the step of capturing a second image of the first target and the second target with the second camera, the second image further includes the third target; in the second instance: in the step of capturing a third image of the first target and the second target with the first camera, the third image further includes the third target; and in the step of capturing a fourth image of the first target and the second target with the second camera, the fourth image further includes the third target.
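With at least three non-collinear targets fixed to the 3D measuring device, as in the embodiment above, the pose of the device in the first frame of reference may be estimated from the targets' coordinates triangulated by the two rotating camera assemblies and their known coordinates in the device frame of reference. A minimal sketch, assuming both point sets are already available as NumPy arrays, follows; the function and variable names are illustrative only.

```python
import numpy as np

def device_pose(device_pts, env_pts):
    """Best-fit rotation R and translation t with env_pts ~ R @ device_pts + t.

    device_pts: Nx3 coordinates of the targets fixed to the 3D measuring
                device, known in the device frame of reference.
    env_pts:    Nx3 coordinates of the same targets triangulated from the
                first and second rotating camera assemblies (N >= 3,
                non-collinear, e.g. the first, second, and third targets).
    """
    cd, ce = device_pts.mean(axis=0), env_pts.mean(axis=0)
    H = (device_pts - cd).T @ (env_pts - ce)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ce - R @ cd
    return R, t

# An object point measured in the device frame of reference then maps into
# the first frame of reference as p_first = R @ p_device + t.
```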
In a further embodiment, in the second instance: a further step includes rotating with the first camera rotation mechanism the first rotating camera assembly to a third angle to face the first target and the second target; a further step includes rotating with the second camera rotation mechanism the second rotating camera assembly to a fourth angle to face the first target and the second target; in the step of determining 3D coordinates of the second object point in the first frame of reference, the 3D coordinates of the second object point in the first frame of reference are further based on the third angle and the fourth angle.
In a further embodiment, in the step of moving a three-dimensional (3D) measuring device to a first device location in the environment, the 3D measuring device further includes a two-axis inclinometer; in the first instance: a further step includes measuring a first inclination with the two-axis inclinometer; the step of determining 3D coordinates of the first object point in a first frame of reference is further based on the measured first inclination; in the second instance: a further step includes measuring a second inclination with the two-axis inclinometer; and the step of determining 3D coordinates of the second object point in the first frame of reference is further based on the measured second inclination.
In a further embodiment, in the step of placing a first rotating camera assembly at a first environment location in an environment, the first camera includes a first camera lens, a first photosensitive array, and a first camera perspective center; in the step of placing a first rotating camera assembly at a first environment location in an environment, the first camera rotation mechanism is configured to rotate the first rotating camera assembly about a first axis by a first rotation angle and about a second axis by a second rotation angle; and in the step of placing a first rotating camera assembly at a first environment location in an environment, the first camera angle-measuring system further includes a first angle transducer configured to measure the first rotation angle and a second angle transducer configured to measure the second rotation angle.
In a further embodiment, in the step of measuring the first angle with the first camera angle-measuring system, the first angle is based at least in part on the measured first rotation angle and the measured second rotation angle.
In a further embodiment, further including steps of: capturing with the first camera one or more first reference images of a plurality of reference points in the environment, there being a known distance between two of the plurality of reference points; capturing with the second camera one or more second reference images of the plurality of reference points; determining a first reference pose of the first rotating camera assembly in an environment frame of reference based at least in part on the one or more first reference images and on the known distance; and determining a second reference pose of the second rotating camera assembly in an environment frame of reference based at least in part on the one or more second reference images and on the known distance.
In a further embodiment, further including determining 3D coordinates of the first object point and the second object point in the first frame of reference further based on the first reference pose and the second reference pose.
In a further embodiment, in the step of moving a three-dimensional (3D) measuring device to a first device location in the environment, the 3D measuring device is attached to a first mobile platform.
In a further embodiment, the first mobile platform further includes first motorized wheels.
In a further embodiment, the first mobile platform further includes a robotic arm configured to move and rotate the 3D measuring device.
In a further embodiment, in the second instance the step of moving the 3D measuring device to a second location in the environment includes moving the first motorized wheels.
In a further embodiment, the step of moving the 3D measuring device to a second device location in the environment further includes moving the robotic arm.
In a further embodiment, in the step of moving the first motorized wheels, the motorized wheels are moved under computer control.
In a further embodiment, the step of moving the 3D measuring device to a second device location in the environment further includes moving the robotic arm under computer control.
In a further embodiment, the 3D measuring device is a 3D imager having an imager camera and a first projector, the first projector configured to project a pattern of light onto an object, the imager camera configured to obtain a first pattern image of the pattern of light on the object, the 3D imager configured to determine 3D coordinates of the first object point based at least in part on the pattern of light, the first pattern image, and on a relative pose between the imager camera and the first projector.
In a further embodiment, in a third instance: moving the first rotating camera assembly to a third environment location in the environment; capturing with the first camera one or more third reference images of the plurality of reference points in the environment, the one or more third reference images including the first reference point and the second reference point; and determining a third pose of the first rotating camera assembly in the environment frame of reference based at least in part on the one or more third reference images.
In a further embodiment, further including determining 3D coordinates of the first object point and the second object point in the first frame of reference further based on the third pose.
In a further embodiment, further including projecting an auxiliary pattern of light onto or near to the object from an auxiliary projector.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a numerical value of a measured quantity, a deviation of a measured quantity in relation to an allowed tolerance, information conveyed by a pattern of color, and whisker marks.
In a further embodiment, the auxiliary pattern is selected from the group consisting of: a location at which an assembly operation is to be performed and a location at which a measurement is to be performed.
In a further embodiment, the auxiliary pattern is projected to provide additional triangulation information.
In an embodiment, a method includes: placing a first rotating camera assembly and a rotating projector assembly in an environment, the first rotating camera assembly including a first camera body, a first camera, a first camera rotation mechanism, and a first camera angle-measuring system, the rotating projector assembly including a projector body, a projector, a projector rotation mechanism, and a projector angle-measuring system, the projector body independent of the first camera body, the projector configured to project a first illuminated pattern onto an object; placing a calibration artifact in the environment, the calibration artifact having a collection of calibration marks at calibrated positions; rotating with the first camera rotation mechanism the first rotating camera assembly to a first angle to face the calibration artifact; measuring the first angle with the first camera angle-measuring system; capturing a first image of the calibration artifact with the first camera; rotating with the projector rotation mechanism the rotating projector assembly to a second angle to face the calibration artifact; projecting with the projector the first illuminated pattern of light onto the object; measuring the second angle with the projector angle-measuring system; capturing with the first camera a second image of the calibration artifact illuminated by the first illuminated pattern; determining a first relative pose of the rotating projector assembly to the first rotating camera assembly based at least in part on the first image, the second image, the first angle, the second angle, and the calibrated positions of the calibration marks; and storing the first relative pose.
In a further embodiment, in the step of placing a first rotating camera assembly and a rotating projector assembly in an environment, the first camera includes a first camera lens, a first photosensitive array, and a first camera perspective center.
In a further embodiment, in the step of placing a first rotating camera assembly and a rotating projector assembly in an environment, the rotating projector assembly includes a pattern generator, a projector lens, and a projector lens perspective center.
In a further embodiment, in the step of placing a first rotating camera assembly and a rotating projector assembly in an environment, the projector includes a light source and a diffractive optical element, the light source configured to send light through the diffractive optical element.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks are a collection of dots arranged on a calibration plate in a two-dimensional pattern.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration artifact is attached to a first mobile platform having first motorized wheels.
In a further embodiment, in the step of placing a calibration artifact in the environment, the first mobile platform is placed in the environment under computer control.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks are a collection of dots arranged on a calibration bar in a one-dimensional pattern.
In an embodiment, a method includes: placing a first rotating camera assembly and a second rotating camera assembly in an environment, the first rotating camera assembly including a first camera body, a first camera, a first camera rotation mechanism, and a first camera angle-measuring system, the second rotating camera assembly including a second camera body, a second camera, a second camera rotation mechanism, and a second camera angle-measuring system; placing a calibration artifact in the environment, the calibration artifact having a collection of calibration marks at calibrated positions; rotating with the first camera rotation mechanism the first rotating camera assembly to a first angle to face the calibration artifact; measuring the first angle with the first camera angle-measuring system; capturing a first image of the calibration artifact with the first camera; rotating with the second camera rotation mechanism the second rotating camera assembly to a second angle to face the calibration artifact; measuring the second angle with the second camera angle-measuring system; capturing a second image of the calibration artifact with the second camera; determining a first relative pose of the second rotating camera assembly to the first rotating camera assembly based at least in part on the first image, the second image, the first angle, the second angle, and the calibrated positions of the calibration marks; and storing the first relative pose.
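A non-limiting software sketch of the pose determination in this method, assuming the OpenCV library, known camera intrinsics, and already-extracted pixel coordinates of the calibration marks, is given below; in practice the measured first and second angles would additionally be applied to refer the result to the assemblies' zero-angle orientations. All names are illustrative.

```python
import cv2
import numpy as np

def relative_pose_from_artifact(obj_pts, img_pts1, K1, dist1,
                                img_pts2, K2, dist2):
    """Pose of the second rotating camera relative to the first rotating camera.

    obj_pts:  Nx3 calibrated positions of the calibration marks
    img_ptsX: Nx2 pixel coordinates of the marks seen by camera X
    KX/distX: intrinsics and distortion coefficients of camera X (assumed known)
    """
    # Pose of the calibration artifact in each camera frame.
    _, rvec1, tvec1 = cv2.solvePnP(obj_pts, img_pts1, K1, dist1)
    _, rvec2, tvec2 = cv2.solvePnP(obj_pts, img_pts2, K2, dist2)
    R1, _ = cv2.Rodrigues(rvec1)
    R2, _ = cv2.Rodrigues(rvec2)

    # Compose the two poses: a point in the camera-2 frame expressed in camera 1.
    R_12 = R1 @ R2.T
    t_12 = tvec1.ravel() - R_12 @ tvec2.ravel()
    return R_12, t_12  # the first relative pose to be stored
```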
In a further embodiment, in the step of placing a first rotating camera assembly and a second rotating camera assembly in an environment, the first camera includes a first camera lens, a first photosensitive array, and a first camera perspective center and the second camera includes a second camera lens, a second photosensitive array, and a second camera perspective center.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks are a collection of dots arranged on a calibration plate in a two-dimensional pattern.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration artifact is attached to a first mobile platform having first motorized wheels.
In a further embodiment, in the step of placing a calibration artifact in the environment, the first mobile platform is placed in the environment under computer control.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks are arranged on a calibration bar in a one-dimensional pattern.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks include light emitting diodes (LEDs).
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration marks include reflective dots.
In a further embodiment, in the step of placing a calibration artifact in the environment, the calibration artifact is attached to a first mobile platform having motorized wheels and a robotic mechanism; and in the step of placing a calibration artifact in the environment, the calibration artifact is moved by the motorized wheels to a plurality of locations and by the robotic mechanism to a plurality of rotation angles.
In an embodiment, a method includes: placing a first camera platform in an environment, the first camera platform including a first platform base, a first rotating camera assembly, and a first collection of calibration marks having first calibration positions, the first rotating camera assembly including a first camera body, a first camera, a first camera rotation mechanism, and a first camera angle-measuring system; placing a second camera platform in the environment, the second camera platform including a second platform base, a second rotating camera assembly, and a second collection of calibration marks having second calibration positions, the second rotating camera assembly including a second camera body, a second camera, a second camera rotation mechanism, and a second camera angle-measuring system; rotating the first rotating camera assembly with the first camera rotation mechanism to a first angle to face the second collection of calibration marks; measuring the first angle with the first camera angle-measuring system; capturing a first image of the second collection of calibration marks with the first camera; rotating the second rotating camera assembly with the second camera rotation mechanism to a second angle to face the first collection of calibration marks; measuring the second angle with the second camera angle-measuring system; capturing a second image of the first collection of calibration marks with the second camera; and determining a first pose of the second rotating camera assembly relative to the first rotating camera assembly based at least in part on the measured first angle, the first image, the measured second angle, the second image, the first calibration positions, and the second calibration positions.
In a further embodiment, in the step of placing a first camera platform in an environment, the first calibration marks include light-emitting diodes (LEDs).
In a further embodiment, in the step of placing a first camera platform in an environment, the first calibration marks include reflective dots.
In an embodiment, a measurement method includes: providing a three-dimensional (3D) measuring system in a device frame of reference, the 3D measuring system including a 3D measuring device, a first rotating camera assembly, and a second rotating camera assembly, the 3D measuring device including a body, a collection of light marks, and a measuring probe, the collection of light marks and the measuring probe attached to the body, the light marks having calibrated 3D coordinates in the device frame of reference, the measuring probe configured to determine 3D coordinates of points on an object in the device frame of reference; the first rotating camera assembly having a first camera, a first rotation mechanism, and a first angle-measuring system; the second rotating camera assembly having a second camera, a second rotation mechanism, and a second angle-measuring system; in a first instance: rotating the first camera with the first rotation mechanism to face the collection of light marks; rotating the second camera with the second rotation mechanism to face the collection of light marks; measuring with the first angle-measuring system a first angle of rotation of the first camera; measuring with the second angle-measuring system a second angle of rotation of the second camera; capturing with the first camera a first image of the collection of light marks; capturing with the second camera a second image of the collection of light marks; determining 3D coordinates of a first object point on the object in the device frame of reference; and determining 3D coordinates of the first object point in an environment frame of reference based at least in part on the first angle of rotation in the first instance, the second angle of rotation in the first instance, the first image in the first instance, the second image in the first instance, and the 3D coordinates of the first object point in the device frame of reference in the first instance.
In a further embodiment, the measurement method further includes: in a second instance: moving the 3D measuring device; rotating the first camera with the first rotation mechanism to face the collection of light marks; rotating the second camera with the second rotation mechanism to face the collection of light marks; measuring with the first angle-measuring system the first angle of rotation of the first camera; measuring with the second angle-measuring system the second angle of rotation of the second camera; capturing with the first camera a first image of the collection of light marks; capturing with the second camera a second image of the collection of light marks; determining 3D coordinates of a second object point on the object in the device frame of reference; and determining 3D coordinates of the second object point in the environment frame of reference based at least in part on the first angle of rotation in the second instance, the second angle of rotation in the second instance, the first image in the second instance, the second image in the second instance, and the 3D coordinates of the second object point in the device frame of reference in the second instance.
In a further embodiment, in the step of providing a 3D measuring system in a device frame of reference, the measuring probe is a tactile probe.
In a further embodiment, in the step of providing a 3D measuring system in a device frame of reference, the measuring probe includes a spherical probe tip.
In a further embodiment, in the step of providing a 3D measuring system in a device frame of reference, the measuring probe is a line scanner that measures 3D coordinates.
In a further embodiment, in the step of providing a 3D measuring system in a device frame of reference, the 3D measuring device is a handheld device.
In a further embodiment, in the step of providing a 3D measuring system in a device frame of reference, the 3D measuring device is attached to a motorized apparatus.
In an embodiment, a three-dimensional (3D) measuring system includes: a rotating camera-projector assembly including a camera-projector body, a projector, a first camera, a camera-projector rotation mechanism, and a camera-projector angle-measuring system, the projector configured to project a first illuminated pattern onto an object, the first camera including a first camera lens, a first photosensitive array, and a first camera perspective center, the first camera configured to capture a first image of the first illuminated pattern on the object, the camera-projector rotation mechanism configured to rotate the first camera and the projector about a camera-projector first axis by a camera-projector first rotation angle and about a camera-projector second axis by a camera-projector second rotation angle, the camera-projector angle-measuring system configured to measure the camera-projector first rotation angle and the camera-projector second rotation angle; and a second rotating camera assembly including a second camera body, a second camera, a second camera rotation mechanism, and a second camera angle-measuring system, the second camera including a second camera lens, a second photosensitive array, and a second camera perspective center, the second camera configured to capture a second image of the first illuminated pattern on the object, the second camera rotation mechanism configured to rotate the second camera about a second camera first axis by a second camera first rotation angle and about a second camera second axis by a second camera second rotation angle, the second camera angle-measuring system configured to measure the second camera first rotation angle and the second camera second rotation angle, wherein the 3D measuring system is configured to determine 3D coordinates of the object based at least in part on the first illuminated pattern, the first image, the second image, the camera-projector first rotation angle, the camera-projector second rotation angle, the second camera first rotation angle, the second camera second rotation angle, and a pose of the second camera relative to the first camera.
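An idealized sketch of how the measured rotation angles of either rotating assembly might be folded into its orientation before triangulation is shown below; it assumes two orthogonal rotation axes intersecting at the perspective center, which is a simplifying assumption for illustration rather than a limitation of the embodiments, and all names are illustrative.

```python
import numpy as np

def rot_vertical(angle):
    """Rotation about the assembly's first (vertical) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_horizontal(angle):
    """Rotation about the assembly's second (horizontal) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rotated_orientation(R_home, first_angle, second_angle):
    """Orientation of a rotating assembly after turning about its two axes.

    R_home is the orientation at zero rotation angles; the measured first and
    second rotation angles are applied about the first and second axes.  The
    resulting orientation feeds the extrinsics used for triangulation.
    """
    return R_home @ rot_vertical(first_angle) @ rot_horizontal(second_angle)
```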
While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/234,739, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,796, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,869, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,914, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,951, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,973, filed on Sep. 30, 2015, U.S. Provisional Patent Application No. 62/234,987, filed on Sep. 30, 2015, and U.S. Provisional Patent Application No. 62/235,011, filed on Sep. 30, 2015, the entire contents all of which are incorporated herein by reference.