The present disclosure generally relates to volume reconstruction of an object, and more particularly to volume reconstruction of an object using a 3D sensor and robotic coordinates.
Volume reconstruction is a technique to create a virtual volume of an object or complex scene using 3D information from several views. For the volume to be reconstructed, the 3D data from each view must be added to the volume using a common coordinate system. A common coordinate system can be created if the position and orientation of each 3D data set is known. When a single 3D sensor is used and moved around an object, the position and orientation of the 3D sensor must therefore be known for each data set. In the past, the position and orientation of the 3D sensor have been obtained by using a reference position for the first 3D data set and by calculating the relative change of position and orientation for each subsequent data set. The relative change of position and orientation between two views is calculated by identifying features in the 3D data common to both views and by determining what change in position and orientation would correspond to the observed change of position of the identified features in the 3D data. This technique is called depth tracking or 3D data tracking. As the number of views increases, the errors in position and orientation accumulate and the total error may grow. The total error can be kept relatively low with 3D data tracking for the irregular shapes found in nature, for which several 3D data features can typically be found. However, for industrial applications, the objects to be reconstructed in 3D tend to have smooth and regular shapes with few significant features in the 3D data. Those shapes are not easily tracked by 3D-feature tracking algorithms, and such algorithms can lead to major errors in the reconstructed volume.
Embodiments of the present disclosure may provide a method for volume reconstruction of an object comprising: using a robot, positioning a three-dimensional sensor around the object; obtaining three-dimensional data from the object; and generating a three-dimensional representation of the object using the three-dimensional data, and position and orientation information provided by the robot. The three-dimensional data may be used to determine the exact position where an industrial process is performed on the object. Different three-dimensional data of the object obtained from several orientations and positions of the robot may be integrated into a common coordinate system to generate the three-dimensional representation of the object. The integrating step may further comprise using the position and orientation information provided by the robot to calculate the change in position and orientation relative to a position and orientation reference; and using the calculated change in position and orientation to integrate the three-dimensional data into a common coordinate system.
Embodiments of the present disclosure also may comprise a system for volume reconstruction of an object comprising: a three-dimensional sensor mounted on a robot; and a processing unit to acquire and process depth information to integrate three-dimensional information of the object into a virtual volume. The processing unit may integrate the three-dimensional information of the object into the virtual volume by using position and orientation information provided by the robot. The three-dimensional sensor, the robot and the processing unit may be connected through communication links. The processing unit may be located on the robot. The processing unit may comprise a three-dimensional sensor processing unit; and an industrial process processing unit, wherein the three-dimensional sensor processing unit provides three-dimensional data from the three-dimensional sensor to the industrial process processing unit through a communication link. The three-dimensional sensor may use one or more spatial and temporal techniques selected from the group comprising: single point illumination, line illumination, multiple line illumination, 2D pattern illumination, and wide-area illumination. The three-dimensional sensor may be a camera combined with an illuminator. The three-dimensional sensor may comprise a camera combined with an illuminator and using a time-of-flight technique.
Other embodiments of the present disclosure may comprise a method for volume reconstruction of an object comprising: defining a position and orientation reference provided by a robot controller for a tool; converting the position and orientation reference into a rotation-translation matrix; inverting the rotation-translation matrix to create a reference matrix; converting the current position and orientation into a current rotation-translation matrix; calculating a difference matrix representative of the change in position and orientation between the position and orientation reference and the current position and orientation by multiplying the reference matrix by the current rotation-translation matrix; and using the difference matrix to integrate three-dimensional information into a single virtual volume. The calculating step may integrate information acquired by a three-dimensional sensor at a current position with information already accumulated in the virtual volume.
Further embodiments of the present disclosure may provide a method to calibrate the position and orientation of a three-dimensional sensor relative to a tool on which it is mounted, the tool being mounted on a robotic system, the method comprising: translating the tool near a first object and acquiring three-dimensional data from the first object; and rotating the tool near the first object or a second object and acquiring three-dimensional data from the first or second object. The method may further comprise using the three-dimensional data sets to determine the position and orientation of the three-dimensional sensor relative to the tool that minimizes differences between various three-dimensional data representative of common areas of the first object, or of the first and second objects.
For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
Three-dimensional (3D) volume information can be very useful for industrial processes using robots. Industrial processes may be any operation made during the industrial production of an object to characterize, produce, or modify the object. Drilling, material depositing, cutting, welding, and inspecting, to name a few, are examples of industrial processes. 3D information can be used to help the process or to gather data about how and where the process was applied to an object. A 3D mapping system can provide an instantaneous 3D representation of an area of an object. However, instead of using the raw 3D information from a single perspective, the accuracy and precision of that information may be improved by acquiring the information from several different points of view and then constructing a single volume. By using position information of points at the surface of an object from several points of view, it is possible to significantly improve the accuracy and precision of the three-dimensional information of an object. This operation may be referred to as volume reconstruction. Volume reconstruction may allow 3D data to be gathered about a volume significantly larger than what may be covered by a single view of the 3D sensor. Volume reconstruction also may compensate for the poor calibration often found in a 3D sensor at the edges of a single view area: when averaging over several views, any given point at the surface of an object is unlikely to be measured repeatedly from near the edges of the 3D sensor acquisition volume, because the edges represent only a small fraction of the total view area.
Embodiments of the present disclosure may provide systems and methods to improve volume reconstruction by using a robot to position a 3D sensor around an object and use the position and orientation of the sensor provided by a robot controller to reconstruct the volume. Additionally, embodiments of the present disclosure may provide methods to calibrate the position and orientation of the 3D sensor mounted in a robotic system.
In embodiments of the present disclosure, a 3D sensor may be mounted on a robot to measure 3D data from an object. The data generated by the 3D sensor can be used to generate a 3D model of the object being processed. 3D information for a point at the surface of the object may be acquired from several points of view by moving the sensor with the robot. The position information (x, y, z) for each point may be averaged over several points of view. The information about the position of the point at the surface of the object may be calculated using the 3D information from the sensor combined with position and orientation information provided by the robot.
The 3D data provided by the sensor may be converted to the robot coordinate system. Embodiments of the present disclosure also may provide methods to determine the position and orientation of the sensor relative to the mounting position on the robot tool device. These methods may include taking 3D data from several different positions and orientations of one or several objects to extract the values defining the orientation and position of the 3D sensor relative to the tool from the 3D data sets.
Real-time 3D information may be collected about the shape of an object on which an industrial process is applied. By using the position and orientation information provided by the robot controller to integrate the 3D information provided by the sensor, embodiments of the present disclosure may provide improved volume information for shapes presenting few features, for example, a slowly varying wall. In addition, the total error in the reconstructed volume may be independent of the number of views because the position and orientation information for any view need not rely on the position and orientation information of the previous views.
Robotic system 100 can be an articulated robot, as shown in
3D sensor 120 may use a variety of different spatial, temporal, and coherent illumination technologies, including but not limited to, single point illumination, line illumination, multiple line illumination, 2D pattern illumination, and wide-area illumination. However, it should be appreciated that there may be some embodiments where there may be no illumination. 3D sensor 120 can include a single or multiple detectors. The single or multiple detectors may include single detection elements, linear arrays of detection elements, or two-dimensional arrays of detection elements, such as a camera.
In an embodiment of the present disclosure, 3D sensor 120 may be a camera combined with a 2D pattern illuminator. In another embodiment of the present disclosure, 3D sensor 120 may be a camera combined with a line illuminator. In yet another embodiment of the present disclosure, 3D sensor 120 may be a camera combined with a wide-area illuminator that illuminates an area of the object with a light beam without any particular pattern. It should be appreciated that the 3D sensor may be mounted on a motorized unit independent of the robot so that the line can be moved across the surface of the object to cover an area. The position and orientation of the motorized unit independent of the robot may be taken into account when calculating the position and orientation of the sensor using the position and orientation information provided by the robot controller. In yet another embodiment, 3D sensor 120 may be a camera with an illuminator using a time-of-flight technique to measure 3D shapes. The time-of-flight technique works by measuring the time light takes to travel from the illuminator to the object and back to each of the detection elements of the 3D sensor. Time-of-flight techniques can be based on several approaches, including short optical pulses, incoherent modulation, and coherent modulation.
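As a simple illustration of the time-of-flight principle (not part of the disclosure, with illustrative names and values), the distance to a surface follows directly from the measured round-trip travel time of the light:

```python
# Illustrative sketch only: distance from a time-of-flight measurement.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the surface for a measured round-trip time, in meters."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0  # halve the round trip

# Example: a round trip of about 6.67 nanoseconds corresponds to roughly 1 m.
print(tof_distance(6.67e-9))
```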
In another embodiment of the present disclosure, it should be appreciated that 3D sensor 120 may be a stereo camera. The stereo camera may include two or more cameras without departing from the present disclosure. A stereo camera works by having two or more cameras looking at the same area of an object. The difference in position of the same object features in the image of each camera, along with the known positions of the cameras relative to each other, may be used to calculate the 3D information of the object. In such an embodiment, it should be appreciated that 3D sensor 120 may be equipped with a 2D pattern illuminator. Such an illuminator can provide features recognizable by the cameras for objects that would otherwise be featureless.
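As an illustration of the stereo principle described above (the variable names and numbers are assumptions for a rectified camera pair, not values from the disclosure), the depth of a feature follows from its disparity, the focal length, and the camera baseline:

```python
# Illustrative sketch only: depth from stereo disparity for rectified cameras.
def stereo_depth(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
    """Depth of a feature seen by two rectified cameras.

    disparity_px: shift of the same feature between the two images, in pixels.
    focal_length_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centers, in meters.
    """
    return focal_length_px * baseline_m / disparity_px

# A 40-pixel disparity with an 800-pixel focal length and a 10 cm baseline
# places the feature about 2 m away.
print(stereo_depth(40.0, 800.0, 0.10))
```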
A virtual volume may be defined by its position, orientation, and dimensions. The position and orientation of the volume may be defined relative to a position and orientation reference. This reference can be any position and orientation in the robot coordinate space. One example of reference is the position and orientation of the 3D sensor at the first data acquisition of a given volume, but it should be appreciated that any position and orientation in the robot coordinate space can be used as a reference without departing from the present disclosure. This position and orientation reference may be defined by a 4×4 matrix, where the first 3 rows and columns may correspond to the orientation and the first 3 rows of the 4th column may correspond to the position. The first three values of the fourth row may always be 0's, and the fourth value may always be 1. This type of matrix may be referred to as a rotation-translation matrix. The position and orientation reference matrix corresponds to a mathematical rotation and translation operation where an object is rotated and translated from the 0,0,0 orientation and from the origin of the coordinate system into the position and orientation reference. After each 3D data acquisition, the matrix [N] corresponding to the rotation-translation operation from the position and orientation reference to the current position and orientation of the 3D sensor may be used to integrate the 3D data into a single common volume. This matrix may be calculated by multiplying the current position and orientation of the 3D sensor in the robot coordinate system by the inverse of the reference matrix. In the past, the matrix [N] would be calculated using the differences between 3D points acquired from at least two different robot positions.
In an embodiment of the present disclosure, the virtual volume can be defined in smaller volumes, called voxels. Each voxel may correspond to a position inside the virtual volume and may have predefined dimensions. Typically, all voxels defining the virtual volume will have the same dimensions. Each 3D point in the virtual volume belongs to a single voxel. When 3D information is added to the volume, each element of information corresponds to a single 3D point of the virtual volume and therefore belongs to a single voxel. When information is added to a voxel where information is already present, the new information is averaged with the information already present. If more than one element of new information belongs to the same voxel, those elements are combined together. Several algorithms exist to add and combine 3D data to voxels and to virtual volumes.
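As an illustration of the voxel averaging described above, a minimal running-average voxel grid might look like the following sketch; the class and method names are assumptions, and practical systems often use more elaborate fusion schemes such as truncated signed distance functions:

```python
import numpy as np

class VoxelVolume:
    """Minimal running-average voxel grid used as a stand-in for the virtual volume."""

    def __init__(self, voxel_size: float):
        self.voxel_size = voxel_size
        self.sums = {}    # voxel index -> sum of the point coordinates added so far
        self.counts = {}  # voxel index -> number of points added so far

    def add_points(self, points: np.ndarray) -> None:
        """Add an (N, 3) array of 3D points, averaging new data into each voxel."""
        for p in points:
            key = tuple(np.floor(p / self.voxel_size).astype(int))
            self.sums[key] = self.sums.get(key, np.zeros(3)) + p
            self.counts[key] = self.counts.get(key, 0) + 1

    def averaged_points(self) -> np.ndarray:
        """Return one averaged 3D point per occupied voxel."""
        return np.array([self.sums[k] / self.counts[k] for k in self.sums])
```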
In a method for volume reconstruction according to an embodiment of the present disclosure, at the beginning of the volume reconstruction, a position and orientation reference may be defined. The reference can be the position and orientation of the robot tool where the 3D sensor first acquired a set of 3D data, but other positions and rotations can be used without departing from the present disclosure. After obtaining the reference position and orientation, the reference position and orientation may be converted into a rotation-translation matrix, using equations shown in
[Ni]=[Ai][B] Equation (1)
Matrix [Ni] therefore may be the rotation-translation matrix giving the rotation and translation of the 3D sensor from the position and orientation reference to the current position and orientation. Matrix [Ni] may be used to integrate the information acquired by the 3D sensor at the current position with the information already accumulated in the virtual volume.
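As a minimal sketch of how these matrices might be computed, assuming [Ai] is the rotation-translation matrix of the current pose reported by the robot controller, [B] is the inverted reference matrix, and the a, b, c angles follow a Z-Y-X Euler convention (the actual convention depends on the robot controller and on the equations referenced above):

```python
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """Build a 4x4 rotation-translation matrix from a position and orientation.

    The a, b, c angles are assumed here to be Z-Y-X Euler angles in radians;
    the real convention is defined by the robot controller.
    """
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cc, -sc], [0.0, sc, cc]])
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx   # first three rows and columns: orientation
    m[:3, 3] = [x, y, z]       # first three rows of the fourth column: position
    return m                   # last row remains [0, 0, 0, 1]

# Illustrative poses (units and values are arbitrary examples).
reference = pose_to_matrix(0.50, 0.10, 0.30, 0.0, 0.0, 0.0)
b = np.linalg.inv(reference)                                # matrix [B]
current = pose_to_matrix(0.55, 0.12, 0.30, 0.05, 0.0, 0.0)  # matrix [Ai]
n_i = current @ b  # matrix [Ni] of Equation (1): motion from reference to current pose
```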
[Dij]=[Ti][S][Vj] Equation (2)
In this equation, i is the index for each view and j is the index for the 3D data points acquired by the 3D sensor in each view. The rotation-translation matrix [Ti] may be calculated using the orientation and translation provided by the robot controller xi, yi, zi, ai, bi, ci and the equation provided in
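A minimal sketch of how Equation (2) might be applied to one view, assuming the 4×4 matrices [Ti] and [S] have already been built (for instance with the hypothetical pose_to_matrix helper sketched above, from xi, yi, zi, ai, bi, ci and xs, ys, zs, as, bs, cs respectively):

```python
import numpy as np

def integrate_view(points_sensor: np.ndarray, t_i: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Apply Equation (2), [Dij] = [Ti][S][Vj], to every data point of one view.

    points_sensor: (N, 3) points [Vj] in the 3D sensor coordinate system.
    t_i: 4x4 pose of the tool in the robot coordinate system for view i.
    s:   4x4 pose of the 3D sensor relative to the tool.
    Returns the (N, 3) points expressed in the robot coordinate system.
    """
    # Homogeneous column vectors [x, y, z, 1] for each data point.
    v = np.hstack([points_sensor, np.ones((len(points_sensor), 1))]).T
    d = t_i @ s @ v
    return d[:3].T
```

The resulting points may then be accumulated into the virtual volume, for example with the VoxelVolume sketch given earlier.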
The rotation-translation matrix [S] may be calculated by the equations of
Another approach to evaluate the values xs, ys, zs, as, bs, cs may include using data from 3D sensor 120 after it is mounted on tool 110. The 3D data may then be used to calculate the xs, ys, zs, as, bs, cs. This approach is called calibration. In one calibration technique, an approximation may be used for the xs, ys, zs, as, bs, cs values based, for example, on design values, measured values, or common sense. The sensor may be oriented by moving the tool such that the sensor coordinate system 410 is as parallel as possible to robot coordinate system 430. Then 3D data may be acquired from an object that has a flat surface parallel to an axis of robot coordinate system 430, the X axis for example, while moving the tool along that axis of robot coordinate system 430. The mismatch between the two sets of 3D data of the flat surface acquired from two different tool positions, together with the distance traveled by the tool, provides a good estimate of the corresponding rotation value, bs for example. The same approach may be used for the two other axes to determine the other angles. Then, the sensor may be positioned again with its coordinate system 410 parallel to the coordinate system of the robot. The tool is then rotated around the main axes and 3D data may be acquired from at least two different rotation angles. The mismatch between the two sets of 3D data relative to an object point or surface, together with the rotation, can be used to evaluate one of the translation values xs, ys, or zs. For example, if the tool is rotated by 180° around axis Ys and data from the same point on the object can be obtained from the two positions, the mismatch between the y values of the two data sets will be equal to twice the ys value. More than two sets of 3D data from more than two angles may be necessary. Making those measurements by rotating around the three axes of the robot coordinate system will provide a first approximation of the xs, ys, zs, as, bs, cs values. By repeating the process from the beginning using the new set of xs, ys, zs, as, bs, cs values, those values will converge towards the actual values. It is not necessary to acquire the data again if the 3D data were saved in sensor coordinate system 410; the process can be repeated several times using the same data to iteratively calculate the xs, ys, zs, as, bs, cs values with the required accuracy.
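The two estimates described above might be captured along the following lines; the function names and numeric values are illustrative assumptions, and the formulas simply encode the relations stated in the text (an angle estimated from the plane mismatch and the travel distance, and half of the coordinate mismatch observed after a 180° rotation):

```python
import numpy as np

def rotation_offset_from_translation(plane_mismatch_m: float, travel_m: float) -> float:
    """Angular mounting error (radians) estimated from the mismatch between two
    scans of a flat surface taken before and after translating the tool by
    travel_m along a robot axis."""
    return np.arctan2(plane_mismatch_m, travel_m)

def translation_offset_from_half_turn(coordinate_mismatch_m: float) -> float:
    """Translational mounting error estimated from the mismatch observed on the
    same object point before and after a 180 degree rotation of the tool; the
    text above states the mismatch equals twice the offset."""
    return coordinate_mismatch_m / 2.0

# Example: a 2 mm mismatch over a 200 mm translation suggests a ~0.57 degree
# angular error; a 3 mm mismatch after a half turn suggests a 1.5 mm offset.
print(np.degrees(rotation_offset_from_translation(0.002, 0.200)))
print(translation_offset_from_half_turn(0.003))
```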
If an object has 3D features that can be identified automatically by a computer algorithm, another calibration technique can be used. In this calibration technique, 3D data sets of the object may be obtained from multiple views corresponding to several orientations and positions of the tool. The xs, ys, zs, as, bs, cs may then be set as variables in an error minimization algorithm such as the Levenberg-Marquardt algorithm. The variations between the 3D data sets for each 3D feature of the object are minimized using the chosen algorithm. The xs, ys, zs, as, bs, cs values corresponding to the smallest variation then correspond to the best estimate of those values.
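A hedged sketch of such a minimization using the Levenberg-Marquardt method of scipy is given below; the data layout, the reuse of the hypothetical pose_to_matrix helper, and the choice of residual (the spread of each feature's robot-frame position across views) are illustrative assumptions rather than the disclosure's exact formulation:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_sensor_offset(tool_poses, feature_obs, initial_guess):
    """Estimate xs, ys, zs, as, bs, cs by minimizing feature spread across views.

    tool_poses:  list of 4x4 matrices [Ti] reported by the robot controller.
    feature_obs: list of (F, 3) arrays, the same F object features measured in
                 the sensor coordinate system from each view.
    initial_guess: starting values (xs, ys, zs, as, bs, cs).
    """
    def residuals(params):
        s = pose_to_matrix(*params)   # hypothetical helper from the earlier sketch
        mapped = []
        for t_i, obs in zip(tool_poses, feature_obs):
            v = np.hstack([obs, np.ones((len(obs), 1))]).T
            mapped.append((t_i @ s @ v)[:3].T)   # features in the robot frame
        mapped = np.stack(mapped)                # shape (views, F, 3)
        # Deviation of each feature, in each view, from its mean position.
        return (mapped - mapped.mean(axis=0)).ravel()

    result = least_squares(residuals, initial_guess, method="lm")
    return result.x  # best estimate of xs, ys, zs, as, bs, cs
```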
After being reflected by optical element 210, optical beam 202 may go to second optical element 212. The orientation of optical beam section 242 is not fixed relative to the origin of coordinate system 230 and may depend on the orientation of optical element 210. After being reflected by second optical element 212, optical beam section 244 may go to object 150. The orientation of optical beam section 244 may not be pre-determined relative to the origin of coordinate system 230 and may depend on the orientations of both optical elements 210 and 212. Optical beam section 244 may hit the surface of object 150 at point 270. The position of point 270 on object 150 may depend on the orientations of both first and second optical elements 210 and 212 and on the position of object 150. The position of object 150 may be measured by 3D sensor 120 relative to the origin of coordinate system 230, and the orientations of both first and second optical elements 210 and 212 are known because they are controlled by a remote processing unit. For any given orientations of first and second optical elements 210 and 212, there may be a single point in space corresponding to any specific distance or depth relative to the origin of coordinate system 230. Therefore, using the orientations of first and second optical elements 210 and 212, and using the distance information provided by 3D sensor 120, the position of point 270 at the surface of object 150 can be calculated. For example, if tool 110 is a head for optical inspection of parts, optical beam 204 could substantially correspond to a laser beam. The laser beam would substantially follow the path shown by optical beam 204, including optical beam sections 242 and 244, and hit object 150 at point 270.
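As a rough illustration only, the calculation described above can be sketched by collapsing the two mirror orientations into an azimuth and an elevation of the outgoing beam; the true two-mirror geometry is more involved, and the names and parameters below are assumptions rather than anything defined in the disclosure:

```python
import numpy as np

def beam_point(azimuth_rad: float, elevation_rad: float, distance_m: float) -> np.ndarray:
    """Position of the illuminated point in coordinate system 230, in a simplified
    model where the two mirror orientations reduce to the azimuth and elevation of
    the outgoing beam and the 3D sensor supplies the distance along that beam."""
    direction = np.array([
        np.cos(elevation_rad) * np.cos(azimuth_rad),
        np.cos(elevation_rad) * np.sin(azimuth_rad),
        np.sin(elevation_rad),
    ])
    return distance_m * direction  # (x, y, z) of the point hit by the beam
```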
System 200 of
Rotation axis 260 can be independent of the robot controller, and the position and orientation information provided by the robot controller might not take into account the position and orientation of rotation axis 260. To convert the position information from 3D sensor 120 and from measurements into the coordinate system of the robot, the information may be multiplied by the rotation-translation matrix [Mk] representative of the position and orientation of coordinate system 290 of tool section 264 relative to coordinate system 420 of the tool. In the present case, the index k would indicate a specific orientation of tool section 264 relative to tool section 262. For 3D sensor 120:
[Dijk]=[Ti][Mk][S][Vj] Equation (3)
[S] is the rotation-translation matrix representative of the position and orientation of coordinate system 410 of 3D sensor 120 relative to coordinate system 290 of tool section 264.
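For completeness, the Equation (2) sketch given earlier might be extended with the extra matrix [Mk] as follows; this is again an illustrative assumption about how the matrices would be composed in code:

```python
import numpy as np

def integrate_view_with_axis(points_sensor, t_i, m_k, s):
    """Apply Equation (3), [Dijk] = [Ti][Mk][S][Vj]: the same mapping as Equation (2)
    with the additional matrix [Mk] for the orientation of tool section 264
    relative to the tool coordinate system."""
    v = np.hstack([points_sensor, np.ones((len(points_sensor), 1))]).T
    return (t_i @ m_k @ s @ v)[:3].T
```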
It should be noted that some industrial processes do not require several robotic movements. For measurements, the robot may remain immobile while mirrors 210 and 212 move laser beams 244 across the surface of object 150. During this process, 3D sensor 120 has the same view and cannot acquire more data to improve the accuracy of the reconstructed volume. However, it is possible for 3D sensor 120 to acquire data while tool 110 is moved into position by the robot for the actual industrial process. It is also possible that, prior to and after the actual industrial process, small robotic movements may be added to provide more views to 3D sensor 120 in order to further improve the accuracy of the reconstructed volume.
The reconstructed volume can be used to position the data from the measurements into 3D space. The system might not know the exact (x, y, z) coordinates of point 270 on object 150 in its own coordinate system 230 because it might lack the distance between point 270 and the system. However, because the angular positions of mirrors 210 and 212 are known, the orientation of laser beams 244 is also known. Therefore, the orientation of laser beams 244, in combination with the 3D data from 3D sensor 120, may provide the full information about the position of point 270 at the surface of the object. Once the 3D information for all data points of the system is known, all data can be put in the same coordinate system and presented in an integrated manner to the operator evaluating the data from the industrial process.
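One possible way to combine the known beam orientation with the reconstructed volume is to step along the beam through the voxel grid until an occupied voxel is reached; this sketch is purely illustrative, reuses the hypothetical VoxelVolume class from earlier, and its parameters are assumptions rather than values from the disclosure:

```python
import numpy as np

def cast_beam_into_volume(origin, direction, volume, step=0.001, max_range=2.0):
    """Walk along the known beam direction and return the averaged point of the
    first occupied voxel of the reconstructed volume, or None if nothing is hit.

    origin and direction are 3-element arrays expressed in the coordinate system
    of the reconstructed volume; volume is assumed to be the VoxelVolume sketched
    earlier; step and max_range are in the same units as the volume."""
    direction = direction / np.linalg.norm(direction)
    for r in np.arange(0.0, max_range, step):
        p = origin + r * direction
        key = tuple(np.floor(p / volume.voxel_size).astype(int))
        if key in volume.counts:   # the beam has reached a reconstructed surface
            return volume.sums[key] / volume.counts[key]
    return None
```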
Additional parameters, such as the angular positions of the mirrors and the angular position of a rotation axis, even though not shown in
It also should be appreciated that communication between robot controller 932 and industrial process processing unit 922 should be fast enough so that the position and orientation information provided by robot controller 932 may correspond to the actual position and orientation of the tool.
Various benefits may be provided by embodiments of the present disclosure, including but not limited to: integrating 3D data from a 3D sensor into a virtual volume using position and orientation information provided by a robotic system; using a position and an orientation in a robotic system coordinate system as the reference position of a virtual volume for surface volume reconstruction of an object; using the multiplication of a rotation-translation matrix of the current position and orientation of a 3D sensor by the inverse of a rotation-translation matrix of a position and orientation reference to calculate the change of orientation and position of the 3D sensor relative to the reference position and orientation; calibrating the position and orientation of a 3D sensor relative to the tool of a robotic system on which the sensor is mounted by moving and rotating the tool using the robotic system along defined axes and using defined orientations of the tool; calibrating the position and orientation of a 3D sensor relative to the tool of a robotic system on which the sensor is mounted by acquiring 3D data sets of an object from several views and using the 3D data sets to find the position and orientation of the 3D sensor relative to the tool that minimize the variations in the 3D positions; continuous 3D mapping while the robot is in motion for an industrial process (providing improved reconstruction accuracy without adding any delays to the robotic process); adding small robotic movements not necessary to the industrial process but that improve volume reconstruction accuracy without increasing the time of the industrial process; using 3D information from the 3D sensor to determine the exact position where an industrial process is performed on an object; and using 3D information from the 3D sensor to integrate data from an industrial process into a common coordinate system.
Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application Ser. No. 61/819,972 filed on May 6, 2013, entitled “Volume Reconstruction of an Object Using a 3D Sensor and Robotic Coordinates,” which is incorporated by reference in its entirety.