The present invention relates generally to a method and apparatus for dynamically measuring three-dimensional (3D) parameters of a tire with laser vision technology, and relates more particularly to a measurement method and apparatus for three parameters including the section width, inner radius and outer radius with non-contact laser vision technology when the measured tire is moving on the product line, the three parameters being representative of the dimensions of the measured tire.
Dynamic balance is one of the important factors in tire production. When a tire has poor dynamic balance, its performance is degraded because the centrifugal force generated is too large or too small. An improperly installed tire can lead to unpredictable, unavoidable accidents. There is therefore a great need to be able to measure the dynamic balance of tires on a production line, anywhere from spot-checking to checking 100% of tires, across all specifications. There is a great need for a machine that automatically inflates tires and automatically measures their dynamic balance with high efficiency. Before such automated measurement of dynamic balance can take place, the tire specification and the 3D parameters of the tire need to be known. Thus it is necessary to measure the 3D parameters of a tire—including the section width, inner radius and outer radius—to obtain the specification of the tire so as to provide known parameters for the measurement of dynamic balance. In addition, during the tire manufacturing process, many factors, such as rubber quality, extrusion control parameters and equipment conditions, can lead to malformation of a tire, causing an incorrect section width, inner radius or outer radius. If such 3D parameters could be measured in an automated way during manufacture, the manufacturing equipment could be adjusted automatically and the quantity of waste or substandard tires could be decreased. It will be appreciated that the ability to measure the 3D parameters of tires plays an important role in the development and manufacture of high-performance tires, and such a measurement capability has been widely applied to quality control in tire manufacturing.
Two methods have recently been developed for measuring the 3D parameters of a tire—one is manual measurement and the other is photoelectric measurement. Manual measurement is unsuitable for 100% on-line inspection in situ because it has many disadvantages: it is time-consuming, has low efficiency, and has poor accuracy. Photoelectric measurement takes two concrete forms. One method utilizes at least two laser position sensors to perform thickness and width measurement. The other method is based on casting a shadow, for example utilizing optoCONTROL sensors produced by MICRO-EPSILON MESSTECHNIK GmbH & Co. KG to measure the parameters of a tire. Each set of photoelectric equipment used is only able to measure one parameter, so more than one set of equipment is needed to measure all the parameters of a tire. Moreover, the photoelectric measurement methods presented above have a narrow measuring range, the apparatus is expensive, and the setup is elaborate. Yet another drawback is that such systems require that the tire be static (unmoving) at the time of measurement and that the accurate center of the tire be known, which makes the mechanical construction of such a system very complicated. There is thus a great need for a system for measuring the 3D parameters of a tire (for example, section width, inner radius, and outer radius) that does not require that the tire be motionless, that does not need accurate knowledge of the center of the tire, that is not time-consuming, that is efficient, that is accurate, that does not need to be replicated for each parameter to be measured, that has a wide measuring range, that is not expensive, that is not too big, that does not have a complicated mechanical construction, and the setup of which is not elaborate.
A non-contact and dynamic measuring method and apparatus provides measurement of the 3D parameters of a tire. Laser technology and computer vision technology are combined to complete dynamic measurement of the 3D parameters of a tire, using two laser vision sensors. According to the present invention, the method for dynamically measuring 3D parameters of a tire is divided into two stages: calibration and measurement. The calibration only needs to be performed once and the measurement can be automatically performed.
The procedures of calibrating the model parameters of the measurement system are as follows:
Choose the tangent plane on the transmitting roller as a datum plane and place a measured tire on the datum plane.
Two laser vision sensors are installed on the measuring apparatus on site. The two projective light planes of the two sensors are each perpendicular to the datum plane and are perpendicular to each other. The optical axis of the laser projector of the first sensor passes approximately through (near) the geometric center of the measured tire, and its light plane hits the surface of the measured tire to form two feature-contours on the surface. The optical axis of the laser projector of the second sensor makes an angle of 30° to 60° with the datum plane, and one feature-contour is thereby formed on the surface of the measured tire. Next, the two sensors are fixed and the measured tire is removed from the datum plane.
A planar target with pre-established feature points for camera calibration in the present invention is provided. The planar target is one of the following structures: (1) A chessboard-like planar target. The side length of the grid of the target is from 3 to 50 mm with an accuracy of 0.001 to 0.01 mm. The common vertices of two grids are chosen as the calibration feature points; the number of vertices is from 16 to 400. (2) A planar target with a square array. The side length of each square is from 3 to 50 mm with an accuracy of 0.001 to 0.01 mm, and the spacing between two adjacent squares is from 3 to 50 mm with an accuracy of 0.001 to 0.01 mm. The four vertices of each square are chosen as calibration feature points; the number of squares is from 4 to 100.
A 3D local world coordinate frame is established on the target plane as follows: the upper-left corner is defined as the origin; the x-axis points rightward; the y-axis points downward; and the z-axis is perpendicular to the target plane. Meanwhile, the local world coordinates of the calibration feature points are determined and saved to the computer.
The planar target is then freely moved, in non-parallel orientations, to at least three positions in the field of view of the first sensor; the sensor takes one image at each position and saves each image to the computer. (All the feature points must be contained in the images taken.) The image coordinates of the feature points are extracted, and the detected image coordinates and the corresponding local world coordinates of the feature points are saved to the computer. Next, the intrinsic parameters of the camera of the first sensor are calibrated with the image coordinates and the corresponding local world coordinates of the feature points.
The intrinsic parameters of the camera of the second sensor are calibrated by the same procedures as the first sensor.
A planar target that is simply a square (named the square planar target) is also provided in the present invention. The side length of the square is from 50 to 500 mm and its accuracy is ±0.01 mm. The four vertices and the center of the square are chosen as the principal feature points.
The square planar target is laid on the datum plane where the two sensors can simultaneously observe the square on the target. A global world coordinate frame is established on the target plane as follows: the center of the square is defined as the origin; the x-axis and y-axis are parallel to two sides of the square respectively; and the z-axis is upward and perpendicular to the target plane.
Keeping the target unmoved, the first sensor takes one image and saves it to the computer. According to the distortion model of the camera, the distortion of the taken image is corrected and the distortion-free image is obtained. After the image coordinates of the five principal feature points of the square planar target are extracted in the distortion-free image, more secondary feature points with known image coordinates and corresponding global world coordinates on the target are obtained by the “invariance of cross-ratio” principle. Next, the transformation from the first camera coordinate frame related to the first sensor to the global world coordinate frame can be calculated with those known feature points on the square planar target.
Keeping the target unmoved, the transformation from the second camera coordinate frame related to the second sensor to the global world coordinate frame is calculated with the same procedures as with the first sensor.
Multiple local world coordinate frames are established on the target plane, each with the same method as the global world coordinate frame, as the square planar target is moved to different positions.
In the field of view of the first sensor, the square planar target is freely moved to at least two positions; the first sensor takes one image at each position and saves it to the computer. The square on the target plane and the feature light stripe formed by the intersection line between the projective light plane and the target plane should be completely contained in the taken images. According to the distortion model of the first sensor, the distortion of the taken images is corrected and the distortion-free images are obtained. After the image coordinates of the five principal feature points of the square planar target are extracted in each distortion-free image, more secondary feature points with measured image coordinates and corresponding local world coordinates on the target are obtained by the “invariance of cross-ratio” principle. Next, the transformation from the local world coordinate frame to the first camera coordinate frame related to the first sensor can be calculated with those known feature points on the square planar target. The intersection points between the feature light stripe and the diagonals of the square are named control points. After the image coordinates of the control points lying on the light plane are extracted in the distortion-free images, the local world coordinates of the control points are calculated by the “invariance of cross-ratio” principle. Next, according to the transformation from the local world coordinate frame to the first camera coordinate frame and the transformation from the camera coordinate frame to the global world coordinate frame, the camera coordinates and global world coordinates of the control points are obtained from the image coordinates. Finally, the light plane equation of the first sensor in the global world coordinate frame can be obtained by fitting the known control points lying on the light plane with a nonlinear least-squares method.
The equation of the second light plane related to the second sensor in the global world coordinate frame can be obtained with the same procedures as the first light plane related to the first sensor.
The equation of the datum line of each sensor in the global world coordinate frame can be obtained by calculating the intersection line between the respective light plane and the datum plane.
The model parameters of the measurement system, including the camera intrinsic parameters, the equation of the light plane, the equation of the datum line and the transformation from the camera coordinate frame to the global world coordinate frame, are all saved to one data file in the computer.
The procedures of practical measurement of 3D parameters of a tire are as follows:
Move a measured tire to the measuring place and simultaneously take two images of the tire using the two laser vision sensors; the images so taken are named the measuring images.
According to the distortion model of the camera, correct the distortion of the measuring images of the respective two sensors to obtain distortion-free measuring images. Extract the image coordinates of the feature light stripes from the distortion-free measuring images.
According to the mathematical model of the laser vision sensor and the transformation from the camera coordinate frame to the global world coordinate frame, calculate the world coordinates of center points on the feature light stripes from the corresponding image coordinates.
According to the measuring principle of the present invention, calculate the parameters of the measured tire including the section width, inner radius and outer radius from the world coordinates of center points on the feature light stripes.
Repeat the procedures as described above to measure the 3D parameters of a new tire.
The present invention thus provides non-contact, dynamic measurement of the section width, inner radius and outer radius of a tire with a single system, without requiring the tire to be motionless or its center to be accurately known.
Additional features and advantages of the present invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The goals and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate an exemplary embodiment of the present invention and, together with the general description given above and the detailed description of the preferred embodiment given below, serve to explain the principles of the present invention.
In the invention, modern laser technology is combined with computer vision to implement dynamic measurement of the 3D parameters of a tire using two laser vision sensors while the tire is moving on the production line. The exemplary embodiment of the measuring apparatus is based on the optical triangulation principle and is illustrated in the accompanying drawings.
In practical measuring, the camera 105 of the sensor 103 takes one image of the feature-contours 201 and 202, and the camera 108 of the sensor 107 takes one image of the feature-contour 203. The sub-pixel image coordinates of the three feature-contours 201, 202 and 203 are extracted by image processing; the camera coordinates of the three feature-contours are calculated with the mathematical model of the laser vision sensor, and the global world coordinates of the three feature-contours are then calculated according to the transformation from the camera coordinate frame to the global world coordinate frame. Next, the three vertical distances from the feature points 115, 116 and 117 to the datum plane 101 are calculated, and their mean is chosen as the section width of the measured tire 102. The global world coordinates of the center 119 of the measured tire 102 are obtained from the projections of the feature points 115, 116 and 117 on the datum plane 101 by circle fitting. The three distances between the projections of the feature points 112, 113 and 114 on the datum plane 101 and the center 119 of the measured tire 102 are then calculated, and their mean is chosen as the inner radius of the measured tire 102. Finally, the outer radius is obtained by calculating the distance between the projection of the feature point 118 on the datum plane 101 and the center 119 of the measured tire 102.
The mathematical model of a laser vision sensor 301 is established as follows. The sensor 301 comprises a camera 302 and a laser projector; the laser projector casts the light plane 305 onto the object surface, and the camera 302 images points of the light plane 305 onto its image plane 304.
Let p be the projection on the image plane 304 of a given point P on the light plane 305. The world coordinates and camera coordinates of the point P are defined as (xw, yw, zw) and (xc, yc, zc) respectively. The ideal image coordinates and real image coordinates of the point p are defined as (xu, yu) and (xd, yd) respectively. Then the camera model of the sensor 301 is as follows:

ρ·(xu, yu, 1)T = A·(Rcw·(xw, yw, zw)T + Tcw), A = [fx 0 u0; 0 fy v0; 0 0 1]  (1)
where ρ is an arbitrary scale factor, Rcw and Tcw are, respectively, the 3×3 rotation matrix and 3×1 translation vector relating the camera coordinate frame Oc−xcyczc and world coordinate frame Ow−xwywzw, A is the camera 302 intrinsic matrix, fx, fy represent the focal length in terms of pixel dimensions in the x, y direction, respectively, and (u0, v0) are the principal point pixel coordinates.
From expression (1), the point P has a unique projection p on the image plane 304, whereas the point p corresponds to a unique ray Ocp, and the point P lies on Ocp.
The distortion of the camera lens can be taken into account, in which case the real image coordinates of the point p can be denoted by:

xd = xu·(1 + k1·r^2), yd = yu·(1 + k1·r^2), r^2 = xu^2 + yu^2  (2)
where k1 is the radial distortion coefficient.
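By way of illustration, the following Python/NumPy sketch applies the one-coefficient radial model of expression (2) and inverts it by fixed-point iteration, a common way of recovering distortion-free coordinates. The function names are illustrative, not part of the invention, and the sketch is agnostic as to whether the model is applied in pixel or normalized coordinates:

```python
import numpy as np

def distort(xu, yu, k1):
    """Apply the one-coefficient radial model of expression (2):
    real (distorted) coordinates from ideal (undistorted) ones."""
    r2 = xu**2 + yu**2
    return xu * (1.0 + k1 * r2), yu * (1.0 + k1 * r2)

def undistort(xd, yd, k1, iters=10):
    """Invert the model by fixed-point iteration: start from the
    distorted point and repeatedly divide out the radial factor."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu**2 + yu**2
        xu = xd / (1.0 + k1 * r2)
        yu = yd / (1.0 + k1 * r2)
    return xu, yu
```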
The equation of the light plane 305 in the world coordinate frame is denoted by:
awxw + bwyw + cwzw + dw = 0  (3)
Expressions (1), (2), and (3) completely describe the real measurement mathematical model of the laser vision sensor. The equation of the line Ocp can be obtained from the camera model, the equation of the light plane 305 is given by expression (3), and the global world coordinates of the point P are then obtained as the intersection point between the line Ocp and the light plane 305.
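This intersection can be sketched as follows in Python/NumPy. The convention xc = Rcw·xw + Tcw for the transformation in expression (1) is an assumption, and the function name is illustrative:

```python
import numpy as np

def point_on_light_plane(xu, yu, fx, fy, u0, v0, R_cw, T_cw, plane):
    """Intersect the viewing ray Ocp with the light plane of expression (3).
    (xu, yu): ideal (distortion-corrected) pixel coordinates of p.
    R_cw, T_cw: assumed convention x_c = R_cw @ x_w + T_cw.
    plane: coefficients (a_w, b_w, c_w, d_w) in world coordinates."""
    # Ray direction through the camera centre, in camera coordinates.
    d_cam = np.array([(xu - u0) / fx, (yu - v0) / fy, 1.0])
    # Express the ray in world coordinates.
    origin = -R_cw.T @ T_cw          # camera centre in the world frame
    d_world = R_cw.T @ d_cam         # ray direction in the world frame
    a, b, c, d = plane
    n = np.array([a, b, c])
    t = -(n @ origin + d) / (n @ d_world)
    return origin + t * d_world      # world coordinates of the point P
```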
For calibration, a tangent plane is chosen on the transmitting roller as the datum plane 101 and a measured tire 102 is placed on the datum plane 101.
Two laser vision sensors 103 and 107 are installed on the measuring apparatus on site. The two projective light planes 110 and 111 are each perpendicular to the datum plane 101 and are perpendicular to each other. The optical axis of the laser projector of the sensor 103 passes approximately through the geometric center of the measured tire 102, and its light plane 110 forms the two feature-contours 201 and 202 on the surface of the measured tire; the optical axis of the laser projector of the sensor 107 makes an angle of 30° to 60° with the datum plane 101, and its light plane 111 forms the feature-contour 203. The two sensors are then fixed and the measured tire 102 is removed from the datum plane 101.
A planar target with pre-established feature points for camera calibration is provided, having one of the two structures described above: a chessboard target or a target with a square array.
The 3D local world coordinate frame is established on the target plane as follows:
the upper-left corner is defined as the origin;
the x-axis is rightward;
the y-axis is downward and
the z-axis is perpendicular to the target plane.
Meanwhile, the local world coordinates of the calibration feature points are known and saved to the computer.
The camera model parameters are calibrated as follows: the intrinsic parameters of the camera can be calibrated with multiple views of the planar target by applying the calibration algorithm described in “A flexible new technique for camera calibration”, Zhengyou Zhang, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 2000, 1330–1334. All the views of the planar target are acquired by one camera in different positions, thus each view has separate extrinsic parameters, but common intrinsic parameters.
Due to the nonlinear nature of the expression (2), simultaneous estimation of the parameters involves using an iterative algorithm to minimize the residual between the model and N observations. Typically, this procedure is performed with least-squares fitting, in which the sum of squared residuals is minimized. The optimized objective function is then expressed as

f = Σi=1..N [(Ui − ui)^2 + (Vi − vi)^2]  (4)
where (Ui, Vi) are the observed image coordinates of the projection of a 3D world point and (ui, vi) are the corresponding real image coordinates predicted by the model. All the intrinsic and extrinsic camera parameters, including (fx, fy), (u0, v0), k1 and (R, T), can be estimated from the expression (4). If the camera intrinsic parameters are known, the optimized extrinsic parameters (R, T) of the camera can be directly estimated by the expression (4).
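A sketch of this estimation for the known-intrinsics case, using a general-purpose least-squares solver (SciPy); the parameter packing and function names are our own, and the assumption that distortion acts on normalized image coordinates is illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_w, rvec, t, fx, fy, u0, v0, k1):
    """Project world points through expressions (1) and (2)."""
    pc = Rotation.from_rotvec(rvec).apply(points_w) + t   # world -> camera
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]       # normalized coords
    r2 = x**2 + y**2
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)           # radial distortion
    return np.column_stack([fx * x + u0, fy * y + v0])

def estimate_extrinsics(points_w, uv_obs, intrinsics, rvec0, t0):
    """Minimize the reprojection error of expression (4) over (R, T),
    assuming the intrinsics (fx, fy, u0, v0, k1) are already calibrated."""
    def residuals(p):
        return (project(points_w, p[:3], p[3:], *intrinsics) - uv_obs).ravel()
    sol = least_squares(residuals, np.hstack([rvec0, t0]))
    return sol.x[:3], sol.x[3:]   # optimized rotation vector and translation
```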
A further step in calibration is to freely move the planar target, in non-parallel orientations, to at least three positions in the field of view of the sensor 103, taking one image at each position with the sensor and saving each respective image to the computer. (All the feature points must be contained in the taken images.) The image coordinates of the feature points are extracted, and the detected image coordinates and the corresponding local world coordinates of the feature points are saved to the computer. Next, the camera intrinsic parameters of the sensor 103 are calibrated with the image coordinates and the corresponding local world coordinates of the feature points.
The camera intrinsic parameters of the sensor 109 are calibrated with the same procedures as the sensor 103.
As shown in
The world coordinates of the five principal feature points on the square planar target are easy to obtain from the side length of the square, and the corresponding image coordinates are obtained by image processing, so one square provides five known principal feature points P1–P5.
The cross-ratio of four collinear points is invariant under perspective projection. For the collinear points P1, P5, P3 and Pj on the diagonal P1P3 of the square (P5 being the center) and their projections p1, p5, p3 and pj in the image, this invariance gives:

CR(P1, P5; P3, Pj) = CR(p1, p5; p3, pj)  (5)
The image coordinates of pj can be computed from the world coordinates of P1, P5, P3 and Pj and the image coordinates of p1, p5 and p3 by the expression (5). Likewise, the image coordinates of any feature point on the straight line P2P4 can be obtained. Theoretically, an arbitrary number of secondary feature points with known world coordinates and corresponding image coordinates can be obtained. Because the world coordinates of the secondary feature points can be chosen arbitrarily, additional feature points, reasonably distributed in the image, can be generated to suit the various calibrations.
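The transfer of a point along a diagonal can be sketched as follows; the parametrization by signed distance along each line and the function names are our own:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio CR(a, b; c, d) of four collinear points given by
    scalar parameters along their common line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def transfer_point(P1, P5, P3, Pj, p1, p5, p3):
    """Image coordinates of pj via the cross-ratio invariance of
    expression (5). P* are collinear world points on one diagonal,
    p* their known image projections; all are NumPy vectors."""
    # Signed-distance parameters along the world line and the image line.
    dw = (P3 - P1) / np.linalg.norm(P3 - P1)
    w1, w5, w3, wj = (float((P - P1) @ dw) for P in (P1, P5, P3, Pj))
    di = (p3 - p1) / np.linalg.norm(p3 - p1)
    s1, s5, s3 = (float((p - p1) @ di) for p in (p1, p5, p3))
    k = cross_ratio(w1, w5, w3, wj)
    # Solve cross_ratio(s1, s5, s3, sj) = k for the image parameter sj.
    sj = ((s3 - s1) * s5 - k * (s3 - s5) * s1) / ((s3 - s1) - k * (s3 - s5))
    return p1 + sj * di
```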
The square planar target is laid on the datum plane 101 where the two sensors 103 and 107 can simultaneously observe the square on the target. A global world coordinate frame Ow−xwywzw is established on the target plane: the center of the square is defined as the origin; the x-axis and y-axis are parallel to two sides of the square respectively; and the z-axis is upward and perpendicular to the target plane.
Keeping the target motionless, the camera 105 of the sensor 103 takes one image and saves it to the computer. According to the distortion model of the camera 105, the distortion of the taken image is corrected and the distortion-free image is obtained. After the image coordinates of the five principal feature points of the square planar target are extracted in the distortion-free image, more secondary feature points with measured image coordinates and corresponding global world coordinates on the target are obtained by the “invariance of cross-ratio” principle. Next, the transformation from the frame Oc1−xc1yc1zc1 to the frame Ow−xwywzw can be calculated with those known feature points on the square planar target according to the expression (4). Keeping the target motionless, the transformation from the frame Oc2−xc2yc2zc2 to the frame Ow−xwywzw is calculated with the same procedures for the sensor 107.
In the field of view of the sensor 103, the square planar target is freely moved to at least two positions. At each position, a local world coordinate frame Oi−xiyizi is established on the target plane with the same method as the global world coordinate frame. The intersection points between the feature light stripe and the diagonals of the square are named control points, and their local world coordinates are calculated by the “invariance of cross-ratio” principle.
Letting (xc, yc, zc) be the camera coordinates of the control point Qi, and letting (xi, yi, zi) be its local world coordinates and (xw, yw, zw) be its global world coordinates, we have:

(xc, yc, zc)T = Rci·(xi, yi, zi)T + Tci, (xw, yw, zw)T = Rwc·(xc, yc, zc)T + Twc = Rwi·(xi, yi, zi)T + Twi  (6)
where Rci, Rwc and Rwi are 3×3 rotation matrices and Tci, Twc and Twi are 3×1 translation vectors; Rci and Tci depict the transformation from the frame Oi−xiyizi to the frame Oc−xcyczc; Rwc and Twc depict the transformation from the frame Oc−xcyczc to the frame Ow−xwywzw; and Rwi and Twi depict the transformation from the frame Oi−xiyizi to the frame Ow−xwywzw. According to the expression (4), Rci and Tci are easy to calculate with the five principal feature points and the secondary feature points. Rwc and Twc have been calculated in advance, so Rwi and Twi are also obtained.
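The chaining of the transformations in expression (6) is direct; a minimal sketch:

```python
import numpy as np

def compose(R_wc, T_wc, R_ci, T_ci):
    """Chain the two transformations of expression (6):
    local target frame -> camera frame -> global world frame."""
    R_wi = R_wc @ R_ci
    T_wi = R_wc @ T_ci + T_wc
    return R_wi, T_wi
```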
When the square planar target is moved freely to m positions, the global world coordinates of 2m control points can be obtained as described above. Let (xwk, ywk, zwk), k = 1 . . . 2m, be the global world coordinates of the 2m control points; then the equation of the light plane can be obtained by fitting the 2m control points with nonlinear least squares. The objective function is the sum of the squared distances from the control points to the fitted plane:

f(aw, bw, cw, dw) = Σk=1..2m (aw·xwk + bw·ywk + cw·zwk + dw)^2 / (aw^2 + bw^2 + cw^2)  (7)
where (aw, bw, cw, dw) are the coefficients of the fitted light plane. The equation of the intersection line between the light plane and the datum plane in the frame Ow−xwywzw, that is, the equation of the datum line, can then be estimated.
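The objective of expression (7) can be minimized exactly by a total least-squares fit via singular value decomposition, a standard alternative to an iterative solver; a sketch follows, in which the default datum plane zw = 0 in the helper is an illustrative assumption:

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares plane through the control points: minimizes the
    sum of squared point-to-plane distances of expression (7).
    points: (2m, 3) array of global world coordinates of control points."""
    centroid = points.mean(axis=0)
    # The best-fit normal is the right singular vector of the centred
    # data associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    a, b, c = vt[-1]
    d = -np.array([a, b, c]) @ centroid
    return a, b, c, d

def datum_line_direction(plane, datum=(0.0, 0.0, 1.0, 0.0)):
    """Unit direction of the intersection line between the light plane
    and the datum plane (defaults assume the datum plane is z_w = 0)."""
    n1, n2 = np.asarray(plane[:3]), np.asarray(datum[:3])
    t = np.cross(n1, n2)
    return t / np.linalg.norm(t)
```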
In the scheme described above, the light plane 110 of the sensor 103, denoted by aw1, bw1, cw1, dw1, can be obtained and the datum line 204, denoted by Q(x01, y01, z01), t1, can also be obtained. Likewise, the light plane 111 of the sensor 107, denoted by aw2, bw2, cw2, dw2, can be obtained and the datum line 205, denoted by Q(x02, y02, z02), t2, can also be obtained.
In this way, the calibration of the measurement system is completed, and the calibrated parameters, including the camera intrinsic parameters, the equation of the light plane, the equation of the datum line and the transformation from the camera coordinate frame to the global world coordinate frame, are saved to one data file in the computer.
The procedures for practical measurement of the 3D parameters of a tire are as follows:
A tire to be measured is moved to the measuring place, and the two laser vision sensors 103 and 107 each simultaneously take one image of the measured tire 102; the images so taken are named the measuring images.
According to the distortion model of the camera, the distortion of the measuring images of the two sensors is corrected to obtain the distortion-free measuring images. Then, the image coordinates of the feature light stripes 201, 202 and 203 are extracted from the distortion-free measuring images using the algorithm described in “An Unbiased Detector of Curvilinear Structures”, Carsten Steger, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(2), 1998, 113–125 (which is incorporated herein by reference).
According to the mathematical model of the laser vision sensor and the transformation from the camera coordinate frame to the global world coordinate frame, the global world coordinates of center points on the feature light stripes 201, 202 and 203 are calculated from the corresponding image coordinates.
The measuring principle for the section width is as follows: the feature-contour 701 is formed on the surface of the measured tire, and the datum line 702 is the intersection line between the light plane and the datum plane.
The cameras of the two laser vision sensors 103 and 107 take images of the measured tire which include the feature-contours. The 2D image coordinates of the center points Pi (i = 1 . . . N) on the feature-contours are extracted by image processing, the 3D camera coordinates of the points Pi are calculated according to the mathematical model of the laser vision sensor, and the world coordinates of the points Pi are then calculated by the expression (6). The equation of the datum line 702 in the frame Ow−xwywzw can be denoted by:
L(t) = Q + t·n  (8)
where the point Q is an arbitrary known point on the datum line 702 and n is the unit direction vector of the line. The distance from the point Pi to the datum line 702 is given by:
di = ∥Pi − (Q + t0·n)∥, where t0 = n·(Pi − Q)  (9)
The section width of the measured tire is the maximum distance from the feature-contour 701 to the datum line 702, and we have
h = max(di), i = 1 . . . N  (10)
Accordingly, the 3D world coordinates of the tire surface point 704 in the frame Ow−xwywzw can be obtained. The point 706 is the projection of the point 704 onto the datum line 702.
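Expressions (8) to (10) can be sketched as follows; the array names are illustrative:

```python
import numpy as np

def section_width(points, Q, n):
    """Expressions (8)-(10): section width as the maximum distance from
    the feature-contour centre points P_i to the datum line L(t) = Q + t*n.
    points: (N, 3) world coordinates of the contour centre points;
    Q: a known point on the datum line; n: its unit direction vector."""
    t0 = (points - Q) @ n                      # foot-of-perpendicular parameter
    feet = Q + np.outer(t0, n)                 # projections onto the datum line
    d = np.linalg.norm(points - feet, axis=1)  # expression (9)
    i = int(np.argmax(d))
    return d[i], points[i]                     # width h and the surface point
```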
The global world coordinates of the center 119 of the measured tire 102 are obtained by circle fitting of the projections of the feature points 115, 116 and 117 onto the datum plane 101, and the inner radius is the mean of the three distances between the projections of the feature points 112, 113 and 114 onto the datum plane 101 and the center 119.
The point 118 is the outer radius feature point on the feature-contour 203. The outer radius is obtained by calculating the distance between the projection of the point 118 onto the datum plane 101 and the center 119 of the measured tire 102.
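The circle fitting can be performed, for example, with the algebraic (Kasa) least-squares fit sketched below, assuming the projected feature points have been expressed as 2D coordinates in the datum plane; this choice of fitting method is an illustration, not prescribed by the invention:

```python
import numpy as np

def fit_circle_2d(xy):
    """Algebraic (Kasa) least-squares circle fit to the projected feature
    points in the datum plane: solves x^2 + y^2 + A*x + B*y + C = 0."""
    x, y = xy[:, 0], xy[:, 1]
    M = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (A, B, C), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    cx, cy = -A / 2.0, -B / 2.0
    r = np.sqrt(cx**2 + cy**2 - C)
    return np.array([cx, cy]), r               # tire centre and fitted radius

def mean_radius(center, xy):
    """Mean distance from projected feature points to the fitted centre,
    used for the inner radius (three points) or outer radius (one point)."""
    return float(np.mean(np.linalg.norm(xy - center, axis=1)))
```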
As described above, the parameters of the measured tire, including the section width, inner radius and outer radius, can be calculated from the world coordinates of the center points on the feature light stripes. The procedures described above can be repeated to measure the 3D parameters of a new tire.
While an exemplary embodiment has been described above, it should be readily apparent to those of ordinary skill in the art that the above-described embodiment is exemplary in nature since various changes may be made thereto without departing from the teachings of the invention, and the embodiments described should not be construed as limiting the scope of protection for the invention as set forth in the appended claims. Indeed, it will be appreciated that one skilled in the art can easily devise myriad variations and improvements upon the invention, all of which are intended to be encompassed within the claims which follow.