The invention relates to measuring equipment and can be used for accurate 3D measurement and visualization of three-dimensional profiles of objects by observing a projected, previously known pattern at different triangulation angles.
There is a known method of measurement of linear dimensions of three-dimensional objects along three coordinate axes. It includes forming probing structured lighting on the surface of the measured object by illuminating that surface with optical radiation spatially modulated in intensity; recording the image of the structured lighting distorted by the surface relief of the measured object; and determining, by a digital electronic calculator, the height of the surface relief of the measured object based on the distortion of the structure of the probing lighting, and the other two coordinates based on the position of the distortions of the structured lighting in the recorded image (WO 99/58930).
The main disadvantage of this method is its high error level: when the surface of the measured object is illuminated with optical radiation modulated along a single coordinate by a transparency with a constant periodic structure, it is impossible to foresee or provide for distortions of the pattern caused by different reflective properties of the surface and by deep depressions, which cannot be identified without prior information on the macrostructure of the surface of the measured object.
There is a known method, and a device implementing it, for measurement of linear dimensions of objects in three-dimensional Cartesian coordinates. A system of multi-colored stripes, created by spatial modulation of the intensity of the probing optical radiation along one coordinate axis, is projected onto the object. The system of multi-colored stripes is periodic and creates structured lighting. As a result, the whole part of the surface of the measured object within sight of the light detector, with the distorted image of the structured illumination overlaid on it, is captured in a single image. The dimensions of the object are then determined from the degree of distortion of the set of stripes in the image and from the position of the stripes in the Cartesian coordinate system (WO 00/70303).
The disadvantage of this method, and of the devices implementing it, is low precision due to the inability to unambiguously interpret disruptions of the stripes in the image caused by the surface relief of the measured object, by open-end holes, or by a low spectral reflectance that depends on the color of a portion of the surface of the measured object. If the measured object is a set of local components, such as a set of turbine blades, this method cannot reconstruct the topology of the object and subsequently measure its linear dimensions.
There is a known method of optical measurement of surface shape that includes placing the surface under illumination by a projection optical system and in the field of view of a recording device imaging said surface; projecting a set of images with a predetermined lighting structure onto the measured surface by the projection optical system; recording respective sets of images of the surface at an angle different from the angle of projection of the set of images; and determining the shape of the measured surface from the recorded images. In this method, at least three alternating periodic distributions of light intensity are projected onto the surface. These distributions are sets of stripes whose intensity varies in the transverse direction in a sinusoidal manner, and said periodic distributions differ in the shift of the sets of stripes, in the direction perpendicular to the stripes, by a controlled amount within one stripe. The captured images are processed to obtain a preliminary phase distribution containing phases corresponding to points on the surface. Furthermore, an additional light intensity distribution is projected onto the surface once, allowing the number of the stripe of said set of stripes to be determined for each point of said surface, and an additional image of said surface is recorded; a resultant phase distribution is obtained for each visible point of said surface from the image of the object illuminated with the preliminary phase distributions and the image of the object illuminated by the additional light intensity distribution. The resultant phase distribution is used to obtain absolute coordinates of said surface using pre-calibration data. Measurement using the above method assumes that the image of each point of the surface is registered in an environment where the point is illuminated with direct light from the projector, and the illumination of the image of an object point captured by the detector is considered proportional to the brightness of the beam falling on that point directly from the projector (RU No. 2148793).
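By way of illustration only, such phase-shifting processing is commonly performed with the standard three-step relation sketched below; the cited patent does not give its exact formula, so the 120-degree shift between patterns, the function names, and the NumPy usage here are assumptions.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three sinusoidal fringe images shifted by
    120 degrees (standard three-step phase-shifting relation)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def absolute_phase(wrapped, stripe_number):
    """Unwrap using the stripe number recovered from the additional
    projected intensity distribution (one integer per pixel)."""
    return wrapped + 2.0 * np.pi * stripe_number
```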
The disadvantages of this method are the complexity of its implementation and the long duration of the process, which requires considerable time for measurements and leads to errors caused by mechanical vibrations of the projector and camera equipment.
There is a known method, and a device implementing it, for non-contact measurement and recognition of surfaces of three-dimensional objects by means of structured illumination. The device includes a source of optical radiation and a set of transparencies positioned in sequence along the radiation beam and configured to generate an aperiodic line structure of strips; an afocal optical system for projecting the transparency images onto the measured surface; a receiving lens forming the image of the line structure occurring on the surface of the measured object and distorted by its surface relief; a light detector that converts the image generated by the receiving lens into digital form; and a digital electronic computing unit that converts the digital images recorded by the light detector into coordinate values of the measured surface. The device is provided with an additional N−1 light sources, each differing from the rest in the spectral range of its radiation; N−1 transparencies, each differing from the others in at least one strip; N−1 receiving lenses installed behind the transparencies; N−1 mirrors set at 45 degrees to the optical axis of each of the N−1 lenses before the second component of the afocal optical system; another N−1 mirrors installed behind the receiving lens at 45 degrees to its optical axis; N−1 secondary receiving lenses, each installed behind one of the secondary N−1 mirrors and, together with the receiving lens, generating an image of the line structure formed on the surface of the measured object and distorted by its surface relief; N−1 light detectors, each with a spectral sensitivity region coinciding with the spectral range of one of the N−1 radiation sources; and N−1 digital electronic computing units. The image-combining electronic unit has a number of inputs equal to the number of digital electronic computing units, each input being connected to the output of one digital electronic computing unit, and the number N is determined by the formula N = log2(L), where L is the number of pairs of elements of the spatial resolution of the light detector (RU No. 2199718).
The disadvantages of this method are likewise the complexity of its implementation and the long duration of the process, which requires considerable time for measurements and leads to errors caused by mechanical vibrations of the projector and camera equipment.
There is a known method (and a device for its implementation) of 3D measurement using structured light. This method uses a projector to project onto the investigated object a known image having at least two non-intersecting continuous lines along one of the longitudinal axes. The light reflected from the object is captured with at least two cameras placed at different distances from the projector, forming different triangulation angles between the central beam of the projector and the central beams of the cameras. Every continuous line projected by the projector and formed by the reflected light captured by each camera is then identified by comparing the coordinates of the lines captured by the cameras, where the triangulation angle between the central beam of the projector and the central beam of the first camera, located at the minimum distance from the projector, is equal to the arctangent of the ratio of the distance between the projected lines to the depth of field of the camera lens. It is then possible to identify in the image of the first camera the longitudinal coordinates of the centers of the lines, and the vertical coordinates as the quotient of the longitudinal coordinate divided by the tangent of the triangulation angle between the central beam of the projector and the central beam of the first camera. To define the vertical position more precisely, the value obtained by the second camera is used; the second camera is located at a greater triangulation angle than the first. To obtain the more precise value, positions of the same line are identified in the image received from the second camera, being closest to the longitudinal coordinates calculated as the product of said vertical coordinates defined by the first camera and the tangent of the triangulation angle of the second camera, and refined values of the longitudinal and vertical coordinates are then determined for these lines (PCT/RU2012/000909, publication WO 2014/074003, prototype).
The disadvantage of this method is the non-uniformity of the measurements obtained along the Y-axis and insufficient sensitivity along the Z-axis, resulting in a possibility of significant measurement error, especially for small objects. These drawbacks are due to the fact that, when continuous solid lines are projected onto the object, they are projected with a certain period between them, resulting in non-uniformity of the measurements obtained along the Y-axis. Moreover, this method does not use the area of the detector or camera receiver efficiently, and the sensitivity of the 3D scanner is limited along the Z-axis. In this respect, measurements along the Y-axis are obtained with some spacing, typically every 5-10 pixels in the camera image, whereas along the X-axis measurements can be obtained in each pixel through which a line passes. In other words, the resolution along the X-axis is 5-10 times higher than along the Y-axis. This non-uniformity also degrades the three-dimensional surface when it is constructed in the form of a polygonal mesh (polygons 20).
The purpose of the invention is to provide an efficient and convenient method of measurement of linear dimensions of three-dimensional objects, as well as to expand the range of such methods.
The technical result that solves the problem in question consists in reduced distortion of measurements along the Y-axis, enhanced sensitivity along the Z-axis, and almost complete elimination of errors, i.e. improved accuracy of measurements.
The essence of the invention is a method of performing three-dimensional measurements that includes: projecting with a projector preset non-overlapping images oriented along one of the longitudinal axes with a constant distance between them; registering the light from the projector reflected from the object using at least one camera placed so as to form a triangulation angle between the central beam of the projector and the central beams of the cameras; and then identifying the images projected by the projector and formed by the reflected light received by the camera. The triangulation angle between the central beam of the projector and the central beam of the camera is selected to be equal to the arctangent of the ratio of the distance between the projected images to the depth of field of the camera lens. The longitudinal coordinates of the centers of the images are determined in the camera image, and the vertical coordinates are determined as the quotient of the longitudinal coordinate divided by the tangent of the triangulation angle between the central beam of the projector and the central beam of the first camera. Each of the images projected by the projector onto the measured object is a discrete sequence of geometric elements evenly spaced along a linear path parallel to the path of another image, and identification of the images projected by the projector and formed by the reflected light received by the camera is provided by identifying a fragment of the shift of each of the projected geometric elements, while the light of the projector reflected by the object is recorded by at least one camera placed at an angle to the projector in both the vertical and horizontal planes.
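By way of illustration, the two relations at the heart of the method can be sketched as follows; this is a minimal sketch, and the function and parameter names are illustrative rather than taken from the source.

```python
import math

def triangulation_angle(spacing, depth_of_field):
    # Angle between the central beams of projector and camera, chosen as
    # arctan(distance between projected images / depth of field of the lens).
    return math.atan(spacing / depth_of_field)

def vertical_coordinate(x_center, angle):
    # Z coordinate of an element: longitudinal coordinate of its centre
    # divided by the tangent of the triangulation angle.
    return x_center / math.tan(angle)
```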
In general practice, the following shapes are used as the discrete sequence of image elements to be projected: dots, dashes, strokes, or intervals between geometric elements.
As a preferable option, the distance between the camera and the projector is chosen as the product of the distance from the projector to the point of intersection of the central beams of the projector and the camera and the tangent of the triangulation angle between the central beam of the projector and the central beam of the camera.
As a preferable option, the light reflected from the object is registered by at least one additional improving camera; the first camera and the improving camera are installed at different distances from the projector, forming different triangulation angles between the central beam of the projector and the central beams of the cameras, and an improvement of the vertical coordinate is performed. The method uses the value of the vertical coordinate obtained by the improving camera, which is positioned at a larger triangulation angle than the first camera. To achieve this, the locations of the same geometric elements are identified in the improving camera image, being closest to the longitudinal coordinates calculated as the product of said vertical coordinates defined by the first camera and the tangent of the triangulation angle of the improving camera. After that, the improved values of the longitudinal and vertical coordinates are determined for these geometric elements.
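A minimal sketch of this refinement step, assuming the elements detected in the improving camera image are available as a list of longitudinal coordinates; all names are illustrative.

```python
import math

def refine_vertical(z_first, alpha_improving, detected_x):
    """Predict the longitudinal position of the element in the improving
    camera image as Z * tan(alpha), take the detected element nearest to
    that prediction, and recompute Z from its actual position."""
    x_predicted = z_first * math.tan(alpha_improving)
    x_nearest = min(detected_x, key=lambda x: abs(x - x_predicted))
    return x_nearest / math.tan(alpha_improving)
```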
As a preferable option, the reflected light is additionally recorded by two additional improving cameras.
In particular cases, these cameras can be placed on one side of the projector or on both sides of the projector.
As a preferable option, the texture of the object is additionally recorded with an additional color camera, and the image captured with it is displayed on a computer screen and overlaid with an image of the three-dimensional polygonal mesh calculated from the light reflected by the object and captured by the first camera, improved with at least one improving camera.
In this case, measurement and determination of coordinates, as well as drawing of the three-dimensional polygonal mesh, are carried out with the help of an additionally installed computer, and the 3D image is formed on the computer screen. Transmission of measurements is performed by an additional wireless communication means from the following group: Bluetooth, Wi-Fi, NFC.
In the drawings, the following parts are indicated with numbers: radiation source 1; condenser lens 2; transparency 3 with the projected image of geometric elements, such as dots 8 and 9; lens 4 of the projector 5 and lens 4 of the camera 6; projector 5; scanning area 7 of the object 16; projected dots 8 and 9; central beam 10 of the projector 5; central beam 11 of the camera 6; receiving array 12 of the camera 6; far plane 13 of the object 16; nearest plane 14 of the object 16; median plane 15 of the object 16; object 16, on which the dotted image is projected; shift length 17 for the dot 8; shift length 18 for the dot 9; shift length 19 for the dot 8 for the location of the camera 6 shifted with respect to the projector 5 only in the vertical plane; polygons 20 for construction of the polygonal mesh; projected line 21; dots 22, 23 with intersecting shift lengths.
Display 31 of the scanner 30 is directed toward the user.
A method for measurement of linear dimensions of three-dimensional objects is as follows.
Each image projected by the projector consists of periodically (uniformly) spaced discrete elements: dots, dashes (i.e. segments of the image), or intervals between these elements, arranged along an imaginary straight path (an imaginary straight line). These elements are arranged with a spacing Tx along the X-axis of the image and a spacing Ty along the Y-axis.
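By way of illustration, the centres of such a pattern of elements could be generated as follows; this is a sketch, and the names and pixel units are assumptions.

```python
def dot_grid(width_px, height_px, tx, ty):
    # Centres of the projected discrete elements: a grid of dots with
    # spacing tx along the X-axis and ty along the Y-axis, each row lying
    # on a straight path parallel to the others.
    return [(x, y) for y in range(0, height_px, ty)
                   for x in range(0, width_px, tx)]
```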
In the proposed method, when images are projected as sequences (along two non-intersecting paths) of discrete geometric elements such as dots, positioning the camera 6 at an angle to the projector 5 not only in the vertical plane but also in the horizontal one can be used to increase the usable shift length 17, i.e. the value of a possible shift of the dot 8 until it crosses the possible adjacent shift lengths, i.e. the positions of other dots in the image.
For the 3D scanner 30 shown in the drawings, the parameters are selected as follows.
The depth of field of the camera lens, Tz, in each particular case can be determined, for example, as

Tz = 2·D·C/(f/S)²,

where D is the area of the camera aperture (m²), C is the size of a pixel of the camera (m), f is the focal length of the camera lens (m), and S is the distance from the projector 5 to the point of intersection of the central beams 10, 11 of the projector 5 and the camera 6 (m).
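A minimal sketch of this formula, with illustrative variable names:

```python
def depth_of_field_tz(aperture_area, pixel_size, focal_length, distance_s):
    # Tz = 2 * D * C / (f / S)^2, with the quantities as defined above.
    return 2.0 * aperture_area * pixel_size / (focal_length / distance_s) ** 2
```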
Coordinates Z1 and Z2 are the boundaries of the working area, in which the object 16 is measured by three coordinates; it is assumed that the scanner 30 does not make any measurements outside this area. Geometrically, the working area is usually the spatial region formed by the intersection of the beams from the projector 5 that form the image and the beams limiting the field of view of the camera 6. In order to increase the working area in depth, it may also include a region in which, at close range, the camera 6 can capture the object 16 only partially and, at long range, the projector 5 cannot light the entire surface of the object 16 to be captured by the camera 6. The point of intersection of the central beam 11 of the optical system of the camera 6 and the central beam 10 of the optical system of the projector 5 is located in the middle of the working area. The focusing distance is measured from the light source 1 of the scanner 30 to the mid-line of the working area.
The object 16 is shown in the drawings.
The projected images of the dots 8 and 9 can be seen on the receiving array 12 of the camera 6. Depending on the distance between the scanner 30 and one or another part of the object, the dots 8 and 9 may be captured by different pixels of the receiving array 12 of the camera. For example, if we project a dot 8 onto the object 16, we will observe the dot 8 at different locations on the receiving array 12 of the camera 6 depending on which plane the dot is reflected from: the median plane 15 or the plane 14. The areas on the receiving array 12 onto which the dots 8, 9 can be projected constitute the shift lengths 17, 18 of these dots.
Before scanning the object 16, calibration of the scanner 30 must be performed. During calibration, the system is positioned in place and all possible positions of the dots are recorded and compared, i.e. the individual shift lengths of the dots 8, 9 are recorded on an image obtained from the camera 6 to select the optimal distance to the object 16. This information is subsequently used when working with the object 16. For this purpose, a plane (e.g. a screen) is installed in front of the system consisting of the camera 6 and the projector 5, perpendicular to the optical axis of the projector 5 or the camera 6, and this screen is moved along the axis of the projector 5 or the camera 6. Movement of the screen plane is provided by high-precision handling equipment, such as a numerically controlled machine tool, that allows obtaining coordinates with a high accuracy of a few microns. This process includes recording the dependence of the shift, or shift length, of the dots in the image of the camera 6 on the distance from the scanner 30, which includes the camera 6 and the projector 5. The calibration process also takes into account the distortion (violation of geometric similarity between the object and its image) and other aberrations of the lenses of the camera 6 and the projector 5.
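By way of illustration, the calibration records could be stored and queried as follows; this is a sketch assuming a simple one-dimensional lookup by pixel shift, while the actual device also models lens distortion, which is omitted here.

```python
import numpy as np

def build_calibration(screen_distances_mm, dot_shifts_px):
    """Calibration records: known screen distance vs. measured pixel shift
    of a dot in the camera 6 image (one pair per screen position),
    sorted by shift so the table can be interpolated."""
    shifts = np.asarray(dot_shifts_px, dtype=float)
    dists = np.asarray(screen_distances_mm, dtype=float)
    order = np.argsort(shifts)
    return shifts[order], dists[order]

def distance_from_shift(shift_px, calibration):
    """Distance to the object for a measured dot shift, by linear
    interpolation between calibration records."""
    shifts, dists = calibration
    return float(np.interp(shift_px, shifts, dists))
```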
In the two-dimensional case, the dot shift length 19 in the image would look like the shift length shown in the drawings.
In order to accurately measure the object 16 in three coordinates, it is necessary that the shift lengths 17, 18 do not overlap in the image created by the camera 6, regardless of where the object 16, or a part of it, is located within the working area of the scanner 30. To fulfill this condition, it is necessary to choose the right spacings Tx and Ty between the dots along the X- and Y-axes of the image of the projector 5, and the angles and base distances between the projector 5 and the camera 6 along the X- and Y-axes. These parameters can be selected using the relations

tan αy = Ty/(z1 − z2), tan αx = Tx/(z1 − z2),

and the base distances

Ly = S·tan αy and Lx = S·tan αx,

where S is the focusing distance from the light source 1 of the scanner 30 to the mid-line of the working area, or the distance from the light source 1 of the scanner 30 to the intersection of the central beams 10, 11 of the camera 6 and the projector 5.
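These relations can be sketched as follows; the names are illustrative and the angles are in radians.

```python
import math

def pattern_layout(tx, ty, z1, z2, s_focus):
    """Triangulation angles from tan(alpha) = T / (z1 - z2) and base
    distances L = S * tan(alpha) for the X- and Y-axes."""
    depth = abs(z1 - z2)
    alpha_x = math.atan(tx / depth)
    alpha_y = math.atan(ty / depth)
    return {"alpha_x": alpha_x, "alpha_y": alpha_y,
            "Lx": s_focus * math.tan(alpha_x),
            "Ly": s_focus * math.tan(alpha_y)}
```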
To increase the dot density, and thereby the accuracy of measurement of small-sized items, it is possible to use a second camera 28 positioned at a different angle relative to the projector 5 than the first camera 6. In this way it is possible to increase the density of the dots, for example, twofold; the shift lengths of the dots will then intersect, but the second camera 28 allows resolving the uncertainty at the points of intersection.
The image projected by the projector 5 can also be produced with strokes or stripes (lines) between which light dots 24 are arranged, as shown in the drawings.
Proposed layout diagrams for the cameras 6, 26, 28, 29 and the projector 5 in the scanner 30 are shown in the drawings.
The camera 26 does not capture the image projected by the projector 5, but captures the texture, i.e. the colors, of the object 16. The light source in the projector 5 can be of pulsed type, with a pulse period of a fraction of a second. The camera 26 captures the texture with a time delay of a few fractions of a second and therefore does not capture the light from the source of the projector 5. In order to obtain a quality color image of the object 16, a circular flash 27 is installed around the camera 26, made of pulsed white light sources that are triggered in sync with the camera 26, i.e. with a certain delay relative to the light source of the projector 5. Synchronization of the cameras 6, 26, 28, 29 and the light sources of the projector 5, as well as their delay values, is controlled by the controller 36.
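By way of illustration only, the trigger timing described above might be laid out as follows; this schedule is purely hypothetical, since the actual interface of the controller 36 is not described in the source.

```python
def trigger_schedule(pulse_s, texture_delay_s):
    """Hypothetical schedule for the controller 36: the structured-light
    cameras expose during the projector pulse; the texture camera 26 and
    flash 27 fire after a delay so they see only white light."""
    return [
        (0.0, "projector 5: pulse on"),
        (0.0, "cameras 6, 28, 29: expose"),
        (pulse_s, "projector 5: pulse off"),
        (pulse_s + texture_delay_s, "flash 27 on + camera 26: expose"),
    ]
```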
Each polygonal 3D surface is recorded by the built-in computer 33 in the coordinate system of the object 16 using the ICP algorithm.
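By way of illustration, one point-to-point ICP iteration in the spirit of Besl and McKay [1] (see the references below) can be sketched as follows; this is a generic textbook formulation, not the scanner's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest neighbour in the destination cloud, then solve for the rigid
    transform with the SVD (Kabsch) method and apply it."""
    nearest = dst[cKDTree(dst).query(src)[1]]   # correspondences
    mu_s, mu_d = src.mean(axis=0), nearest.mean(axis=0)
    h = (src - mu_s).T @ (nearest - mu_d)       # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:                    # guard against reflection
        vt[-1, :] *= -1
        r = (u @ vt).T
    t = mu_d - r @ mu_s
    return src @ r.T + t, r, t
```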
The projector 5 and the cameras 6, 26, 28 and 29 are rigidly fixed to the optical mount 34. The optical mount 34 should be made of a sufficiently durable material with a relatively low coefficient of linear expansion, such as aluminum or steel, because preserving the mutual position of the cameras 6, 26, 28 and 29 with respect to the projector 5 is very important and influences the accuracy of surface scanning. This position is measured during the calibration of the device (scanner 30). Any small, micron-sized movement of the cameras 6, 26, 28, 29 with respect to the projector 5 could distort measurements, which are expressed in millimeters. During scanning, at the stage of recording the surfaces in the coordinate system of the object using the ICP algorithm, errors resulting from movements of the cameras 6, 26, 28, 29 are summed over each surface, which can lead to centimeter-sized distortions of the measurements of the object 16.
Saving and transmission of data can be performed through a connector for external removable storage devices. Furthermore, a wireless communication unit from the following group: Bluetooth, Wi-Fi, or NFC provides, if necessary, wireless transmission of data to another computer (PC).
To carry out measurements using the proposed method and the scanner 30, the scanner is taken in hand and the measured object 16 is placed in the field of view of the cameras 6, 26, 28, 29 so that it can be observed on the screen 31, since the color image from the camera 26 is displayed on the display 31 immediately (without processing). The measured object 16 is then positioned at the correct distance from the scanner 30, i.e. so that it is within the working area 7. During operation of the scanner 30, the projector 5 projects the image of the transparency 3 onto the object. The camera 6 captures the reflected light and records the image of the illuminated object 16. The built-in computer 33 then processes the image captured by the camera 6. If there are any ambiguities or uncertainties in the calculation, the program uses the images obtained from the cameras 28 and 29 to refine and check the positions of the elements of the projected image of the transparency 3. After processing the images from the cameras 6, 26, 28, 29, the computer 33 displays on the display 31 a calculated image of a 3D model of the object 16 with the calculated dimensions. If necessary, the user can walk around the object 16 with the scanner 30 in hand, constantly keeping the object 16 in the working area 7 of the scanner 30 and capturing images of the object 16 at different angles or different positions of the scanner relative to the object. The computer 33 processes the images obtained from the cameras 6, 28, 29 at each angle and uses the ICP algorithm to transform each new 3D model into the coordinate system of the first 3D model obtained. As a result, the user obtains a 3D model of the object 16 with its calculated dimensions, i.e. a 3D measurement of the object 16 from all sides.
Therefore, the proposed method provides increased uniformity of the measurements obtained along the Y-axis, increased sensitivity along the Z-axis, and almost complete elimination of errors, i.e. improved accuracy of measurements, and allows creation of a single-piece mobile device (a 3D scanner) implementing this method.
The present invention is embodied with multipurpose equipment extensively employed by the industry.
The following ICP algorithms were used:
[1] Besl, P. J. and McKay, N. D., "A Method for Registration of 3-D Shapes," IEEE Trans. PAMI, Vol. 14, No. 2, 1992.
[2] Chen, Y. and Medioni, G., "Object Modeling by Registration of Multiple Range Images," Proc. IEEE Conf. on Robotics and Automation, 1991.
[3] Rusinkiewicz, S. and Levoy, M., "Efficient Variants of the ICP Algorithm," Proc. 3DIM, pp. 145-152, 2001.
This application is a National phase application from PCT application No. PCT/RU2014/000962 filed on Dec. 19, 2014.