This application claims priority to China Application Serial Number 202310046813.3, filed Jan. 31, 2023, which is herein incorporated by reference in its entirety.
The disclosure relates to a measurement method. More particularly, the disclosure relates to a measurement method for a non-contact displacement detection apparatus to measure a position and a tilt angle of a target object.
In non-contact displacement detection techniques, when a laser beam emitted from a non-contact displacement detection device hits a surface of a target object, the laser beam is reflected from the surface of the target object to a sensor, which captures a light spot. As the position of the surface of the target object varies, the light spots captured by the sensor have different positions and light quantity distributions; the centers of the light spots can be calculated from these positions and light quantity distributions and then converted into position information of the target object.
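As an illustrative aside, the spot-center calculation described above can be sketched as an intensity-weighted centroid over the sensor pixels; the following minimal example assumes the light quantity distribution is available as a one-dimensional array, and the function name and sample data are hypothetical rather than taken from the disclosure.

```python
import numpy as np

def spot_center(intensity):
    """Estimate the light-spot center (in pixel units) as the
    intensity-weighted centroid of a 1-D light quantity distribution."""
    intensity = np.asarray(intensity, dtype=float)
    pixels = np.arange(intensity.size)
    return np.sum(pixels * intensity) / np.sum(intensity)

# Example: a spot whose light quantity peaks near pixel 4 of a short line sensor.
print(spot_center([0, 1, 4, 9, 12, 9, 4, 1, 0]))  # prints 4.0
```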
However, in actual situations a target object exhibits not only displacement but also tilt. When a laser beam hits a first target object with a tilt angle, the beam is reflected to the sensor and generates a first waveform (energy distribution); when the laser beam hits a second target object with a displacement but no tilt, the beam is reflected to the sensor and generates a second waveform, and the second waveform overlaps with the first waveform. In this case, the sensor cannot distinguish whether the target object actually has the tilt angle, the displacement, or both, which makes the measurement outcomes of the sensor inconclusive.
Therefore, how to provide a non-contact displacement detection apparatus to solve the above-mentioned problems is an important issue in this field.
An embodiment of the disclosure provides a measurement method for measuring a position and a tilt angle of a target object. The measurement method in a measurement mode includes the following steps. A pixel deviation between a first pixel and a second pixel of the target object is calculated. The pixel deviation is substituted into a curve of tilt angle versus pixel deviation to obtain the tilt angle of the target object. A first target curve is selected from N first tilt angle curves according to the tilt angle and the first pixel, and a second target curve is selected from N second tilt angle curves according to the tilt angle and the second pixel. A zero-tilt angle is substituted into the first target curve and the second target curve, respectively, to obtain a pixel at the zero-tilt angle. The pixel at the zero-tilt angle is substituted into a position curve to obtain the position of the target object.
An embodiment of the disclosure provides a non-contact displacement detection apparatus. The non-contact displacement detection apparatus includes a light source, a beam splitter, a first sensor, a second sensor and a processing circuit. The light source is configured to provide a laser light to a surface of a target object. The beam splitter is configured to split a reflected light reflected from the surface into a first split light and a second split light. The first sensor is configured to receive the first split light to measure at least one first pixel. The second sensor is configured to receive the second split light to measure at least one second pixel. The processing circuit is coupled to the first sensor and the second sensor and is configured to execute the measurement method as mentioned above in a measurement mode to measure a position and a tilt angle of the target object.
The measurement method and the non-contact displacement detection apparatus of the present disclosure obtain, in a calibration mode, multiple linear functions that describe positions and tilt angles, so that the tilt angle and the position of the target object can be derived from these linear functions in a measurement mode.
The disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings.
In structure, the cover 15, the aperture 14, the beam splitter 13 and the sensor 11 are arranged along P axis. The beam splitter 13 and the sensor 12 are arranged along Q axis, wherein the P axis is perpendicular to the Q axis.
In operation, a laser light Li emitted from the light source 110 hits a surface (i.e., a reflected point Pt) of the target object 17, and a reflected light Lr is incident directly on the cover 15 and the aperture 14. The aperture 14 is coupled to and controlled by the processing circuit 16 to allow the reflected light Lr to enter the beam splitter 13. The beam splitter 13 splits the reflected light Lr into a first split light L1 directed to the sensor 11 and a second split light L2 reflected to the sensor 12.
In response to the first split light L1 being incident on the sensor 11, the processing circuit 16 obtains a pixel position X1t of the sensor 11. In response to the second split light L2 being incident on the sensor 12, the processing circuit 16 obtains a pixel position X2t of the sensor 12. Moreover, the processing circuit 16 calculates a position WDt along Z axis and a tilt angle θt relative to X axis according to the pixel position X1t of the sensor 11 and the pixel position X2t of the sensor 12, wherein the Z axis is perpendicular to the X axis, and the angle between the Z axis and the Q axis is 45 degrees.
In step S21, a calibration plate is controlled to move within a measurement range, and sensors are controlled to perform measurement to obtain a position curve.
In step S22, the calibration plate is controlled to rotate within a tilt angle range, and the sensors are controlled to perform measurement to obtain N first tilt angle curves, N second tilt angle curves and a curve of tilt angle versus pixel deviation.
In step S23, a pixel deviation between a first pixel and a second pixel detected from a target object is calculated.
In step S24, the pixel deviation is substituted into a curve of tilt angle versus pixel deviation to obtain the tilt angle of the target object.
In step S25, a first target curve is selected from N first tilt angle curves according to the tilt angle and the first pixel, and a second target curve is selected from N second tilt angle curves according to the tilt angle and the second pixel.
In step S26, a zero-tilt angle is substituted into the first target curve and the second target curve, respectively, to obtain a pixel at the zero-tilt angle.
In step S27, the pixel at the zero-tilt angle is substituted into a position curve to obtain the position of the target object.
Based on the method 20, the present disclosure can obtain multiple linear functions (i.e., the position curve, the N first tilt angle curves, the N second tilt angle curves and the curve of tilt angle versus pixel deviation) for describing positions and tilt angles in the calibration mode, so as to derive the tilt angle and the position of the target object according to these linear functions in the measurement mode. Steps S21˜S27 can be executed by the processing circuit 16: the processing circuit 16 executes steps S21˜S22 in the calibration mode and steps S23˜S27 in the measurement mode.
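The measurement-mode flow of steps S23 to S27 can be sketched as follows, assuming every calibration curve is a linear function stored as a (slope, intercept) pair, that each target curve is selected as the curve whose predicted pixel at the derived tilt angle is closest to the measured pixel, and that the zero-tilt pixels from the two target curves are averaged; these choices and all names are illustrative assumptions, not details taken from the disclosure.

```python
def measure(x1, x2, tilt_vs_dev, first_curves, second_curves, position_curve):
    """Steps S23-S27: derive the tilt angle and position from one pixel
    reading of each sensor; every curve is a (slope, intercept) pair."""
    # S23: pixel deviation between the first pixel and the second pixel
    # (the sign convention is an assumption).
    deviation = x1 - x2
    # S24: substitute the deviation into the tilt-angle-versus-deviation curve.
    a, b = tilt_vs_dev
    tilt = a * deviation + b
    # S25: select the tilt angle curve whose predicted pixel at this tilt
    # angle is closest to the measured pixel (one possible criterion).
    def closest(curves, pixel):
        return min(curves, key=lambda c: abs(c[0] * tilt + c[1] - pixel))
    first_target = closest(first_curves, x1)
    second_target = closest(second_curves, x2)
    # S26: substitute the zero-tilt angle into both target curves; their
    # intercepts should nearly coincide, so they are averaged here.
    upc = (first_target[1] + second_target[1]) / 2.0
    # S27: substitute the zero-tilt pixel into the position curve.
    p1, p0 = position_curve
    return tilt, p1 * upc + p0
```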
In step S31, when the calibration plate moves to the i-th measured position without tilt, a pixel position corresponding to the i-th measured position is obtained, wherein 1≤i≤M.
In step S32, the processing circuit 16 determines whether i is equal to M or not. If YES, step S33 is executed; if NO, step S31 is executed.
In step S33, the position curve is obtained according to the M pixel positions.
In step S34, when the calibration plate moves to the j-th measured position, the calibration plate is controlled to rotate to K angles to obtain K first pixel positions corresponding to the j-th measured position from the first sensor and K second pixel positions corresponding to the j-th measured position from the second sensor, wherein 1≤j≤N.
In step S35, the processing circuit 16 determines whether j is equal to N or not. If YES, step S36 is executed; if NO, step S34 is executed.
In step S36, the N first tilt angle curves are obtained according to the N groups of K first pixel positions, and the N second tilt angle curves are obtained according to the N groups of K second pixel positions.
In step S37, the N first tilt angle curves are averaged to obtain a first averaged curve, and the N second tilt angle curves are averaged to obtain a second averaged curve.
In step S38, the second averaged curve is subtracted from the first averaged curve to obtain the curve of tilt angle versus pixel deviation.
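As an illustration only, the calibration-mode flow of steps S31 to S38 might be implemented along the following lines, assuming ordinary least-squares line fitting yields the linear curves and that the subtracted curve is inverted so it maps pixel deviation to tilt angle, matching its use in step S24; the function names, the data layout and the use of numpy are assumptions.

```python
import numpy as np

def calibrate(positions, pixels_no_tilt, angles, first_pixels, second_pixels):
    """Steps S31-S38; every returned curve is a (slope, intercept) pair.
    positions      : M measured positions of the calibration plate (no tilt).
    pixels_no_tilt : M pixel positions measured at those positions.
    angles         : K tilt angles applied at each of the N measured positions.
    first_pixels   : N rows of K pixel positions from the first sensor.
    second_pixels  : N rows of K pixel positions from the second sensor."""
    # S31-S33: fit the position curve (position as a function of pixel).
    position_curve = np.polyfit(pixels_no_tilt, positions, 1)
    # S34-S36: fit one tilt angle curve (pixel as a function of tilt angle)
    # per measured position, for each sensor.
    first_curves = [np.polyfit(angles, row, 1) for row in first_pixels]
    second_curves = [np.polyfit(angles, row, 1) for row in second_pixels]
    # S37: average the N curves of each sensor coefficient-wise.
    y1_avg = np.mean(first_curves, axis=0)
    y2_avg = np.mean(second_curves, axis=0)
    # S38: subtract the second averaged curve from the first; the result gives
    # the pixel deviation as a function of tilt, inverted here so that it maps
    # pixel deviation to tilt angle (an interpretation, for use in step S24).
    m, c = y1_avg - y2_avg
    tilt_vs_dev = (1.0 / m, -c / m)
    return position_curve, first_curves, second_curves, tilt_vs_dev
```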
It should be noted that, considering that the laser lights received by the sensors 11 and 12 are analog energy distributions and that hardware errors may exist, the present disclosure calculates the curve of tilt angle versus pixel deviation from averaged curves in order to increase the stability of the detection apparatus 10 in determining the tilt angle and the position.
In step S32, if the processing circuit 16 determines that i is equal to M, it means that step S31 has been repeated M times and the position measurements without tilt have been completed, and then step S33 is executed. In step S33, the processing circuit 16 obtains the points (X1, WD1)˜(Xm, WDm) according to the M pixel positions measured at the M measured positions so as to establish the position curve Yp, as shown in the corresponding drawing.
Based on optical theory, if the calibration plate 47 is not tilted, the measurement results of the sensors 11 and 12 are substantially the same or similar, and thus the sensors 11 and 12 should have the same position curve Yp.
In step S34, when the calibration plate 47 moves to the j-th measured position (e.g., one of the measured positions WR1˜WRn), the processing circuit 16 controls the calibration plate 47 to tilt to K angles (e.g., within −0.5°˜0.5°) to obtain K first pixel positions (e.g., the first pixel positions X11˜X1k) corresponding to the j-th measured position and K second pixel positions (e.g., the second pixel positions X21˜X2k) corresponding to the j-th measured position, in which 1≤j≤N, j and N are positive integers, and the j-th measured position refers to any one of the N measured positions.
In some embodiments, the said N can be any positive integer. In this embodiment, N is 3: the first measured position can be the farthest position (e.g., 10.5 mm) or the three-quarters point (e.g., 10.3 mm) of the measurement range, the second measured position can be the middle position (e.g., 10 mm), and the third measured position (or the final position) can be the nearest position (e.g., 9.5 mm) or the one-quarter point (e.g., 9.7 mm) of the measurement range. In some embodiments, the said K can be 11, in which case the processing circuit 16 samples one pixel position every 0.1 degrees within the tilt range θr of −0.5°˜0.5° so as to obtain 11 corresponding pixel positions.
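For the concrete numbers quoted above (three measured positions and K = 11 tilt angles sampled every 0.1 degrees across −0.5°˜0.5°), the sampling grid could be generated as in the short sketch below; the variable names are illustrative.

```python
import numpy as np

measured_positions_mm = [10.5, 10.0, 9.5]      # N = 3 positions in the measurement range
tilt_angles_deg = np.linspace(-0.5, 0.5, 11)   # K = 11 angles, 0.1 degree apart
print(tilt_angles_deg)                         # [-0.5 -0.4 ... 0.4 0.5]
```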
In step S35, if the processing circuit 16 determines that j is equal to N, it means that step S34 has been repeated N times and the position measurements with tilt have been completed, and then step S36 is executed. In step S36, the processing circuit 16 obtains the N first tilt angle curves (e.g., the first tilt angle curves y1˜y3) according to the N groups of K first pixel positions, and obtains the N second tilt angle curves (e.g., the second tilt angle curves y1′˜y3′) according to the N groups of K second pixel positions, as shown in the corresponding drawings.
In this embodiment, the second tilt angle curves y1′˜y3′ can be expressed by the following functions.
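In terms of the slope and intercept notation used in the next paragraph, these functions have the general linear form y1′ = a1′·θ + b1′, y2′ = a2′·θ + b2′ and y3′ = a3′·θ + b3′, where θ denotes the tilt angle; the specific numeric coefficients of the original equations are not reproduced here, so this form is inferred rather than quoted.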
Ideally, the first tilt angle curves y1˜y3 are linear functions parallel to each other, so their slopes should be the same. In practice, however, the laser lights received by the sensors 11 and 12 are light spots with areas or light quantity distributions, so the slopes a1, a2, a3 of the first tilt angle curves y1˜y3 obtained at different measured positions are approximately but not exactly the same. Similarly, the slopes a1′, a2′, a3′ of the second tilt angle curves y1′˜y3′ are approximately the same. In addition, the intercepts are equal (b1=b1′, b2=b2′, b3=b3′); therefore, the first and second tilt angle curves corresponding to the same measured position have the same pixel position when the tilt angle is equal to zero.
In step S37, the processing circuit 16 averages N first tilt angle curves to obtain a first averaged curve, and averages N second tilt angle curves to obtain a second averaged curve. For example, the processing circuit 16 averages 3 first tilt angle curves y1˜y3 to obtain a first averaged curve Y1, and averages 3 second tilt angle curves y1′˜y3′ to obtain a second averaged curve Y2, as shown in the following functions.
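Using the same notation, the averaged curves have the general form Y1 = ((a1 + a2 + a3)/3)·θ + (b1 + b2 + b3)/3 and Y2 = ((a1′ + a2′ + a3′)/3)·θ + (b1′ + b2′ + b3′)/3; again, the specific numeric coefficients of the original equations are not reproduced here, so this form is inferred from the averaging described above.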
In the measurement mode, the processing circuit 16 executes steps S23˜S27 to calculate a tilt angle θt and a position WDt of the target object 17 according to the curve of tilt angle versus pixel deviation Ys, the first tilt angle curves y1˜y3, the second tilt angle curves y1′˜y3′ and the position curve Yp, which are obtained in the calibration mode.
The processing circuit 16 substitutes a pixel deviation, −17.643, into the curve Ys of tilt angle versus pixel deviation to obtain a tilt angle, −0.321°, of the target object 17.
In step S26, the processing circuit 16 substitutes the zero-tilt angle into the first tilt angle curve yt2, which is selected as the first target curve, to obtain a pixel UPC at the zero-tilt angle. Moreover, the processing circuit 16 substitutes the zero-tilt angle into the second tilt angle curve yt2′ to obtain the pixel UPC at the zero-tilt angle; for example, the pixel UPC at the zero-tilt angle can be 160.514.
In step S27, the processing circuit 16 substitutes the pixel UPC at the zero-tilt angle into the position curve Yp to obtain the position WDt of the target object 17. For example, the position curve Yp can be expressed by the following equation.
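Consistent with the earlier statement that the position curve is one of the linear functions obtained in the calibration mode, Yp can be written in the general form WD = p1·x + p0, where x is the pixel position and p1 and p0 are coefficients determined in the calibration mode; the specific numeric coefficients of the original equation are not reproduced here, so this form is illustrative.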
The processing circuit 16 substitutes the pixel UPC at the zero-tilt angle, 160.514, into the variable x of the position curve Yp to obtain the position WDt of 10 mm of the target object 17, as shown in the corresponding drawing.
Simply speaking, this embodiment includes the following steps.
Step 1), in a calibration mode, the calibration plate 47 is controlled to be moved along Z axis in a measurement range, and then steps S31 to S33 are executed to obtain a position curve along Z axis.
Step 2), the calibration plate 47 is controlled to rotate within a tilt angle range, and then steps S34 to S38 are executed to obtain N first tilt angle curves, N second tilt angle curves and a curve of tilt angle versus pixel difference corresponding to Z axis.
Step 3), the calibration plate 47 is controlled to move along Y axis in the measurement range, and then steps S31 to S33 are executed to obtain an auxiliary position curve along Y axis, wherein Y axis is perpendicular to axes X, Z, P and Q.
Step 4), the calibration plate 47 is controlled to rotate within the tilt angle range, and then steps S34 to S38 are executed to obtain N first auxiliary tilt angle curves, N second auxiliary tilt angle curves and an auxiliary curve of tilt angle versus pixel difference corresponding to Y axis.
Step 5), in a measurement mode, step S23 to step S27 are executed to obtain a first tilt angle and a position of a target object according to the position curve, the N first tilt angle curves, the N second tilt angle curves and the curve of tilt angle versus pixel difference corresponding to Z axis.
Step 6), step S23 to step S27 are executed to obtain a second tilt angle and the position of the target object 17 according to the auxiliary position curve, the N first auxiliary tilt angle curves, the N second auxiliary tilt angle curves and the auxiliary curve of tilt angle versus pixel difference corresponding to Y axis.
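Building on the calibrate() and measure() sketches given earlier, Steps 1) to 6) could be orchestrated roughly as follows; the motion control of the calibration plate is abstracted into data-collection callbacks, the same pair of pixel readings is reused for both axes as a simplification, and every name here is an illustrative assumption.

```python
def calibrate_two_axes(collect_z_data, collect_y_data):
    """Steps 1)-4): run the calibration flow once per axis. Each callback
    returns the data set expected by calibrate() for its axis."""
    curves_z = calibrate(*collect_z_data())  # position curve, tilt curves, Ys for Z axis
    curves_y = calibrate(*collect_y_data())  # auxiliary curves for Y axis
    return curves_z, curves_y

def measure_two_axes(x1, x2, curves_z, curves_y):
    """Steps 5)-6): derive a tilt angle about each axis plus the position."""
    pos_z, first_z, second_z, tilt_dev_z = curves_z
    pos_y, first_y, second_y, tilt_dev_y = curves_y
    first_tilt, position = measure(x1, x2, tilt_dev_z, first_z, second_z, pos_z)
    second_tilt, _ = measure(x1, x2, tilt_dev_y, first_y, second_y, pos_y)
    return first_tilt, second_tilt, position
```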
In summary, the detection apparatus 100 of the present disclosure utilizes the sensors 11 and 12 to receive the lights reflected from the target object 17 along two optical paths with different distances, in order to calculate the tilt angle and the position of the target object 17 according to the pixel deviation between the two sensors 11 and 12. Furthermore, the curve of tilt angle versus pixel deviation Ys of the detection apparatus 100 is obtained by averaging multiple first tilt angle curves and multiple second tilt angle curves, so as to increase the detection accuracy.
It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims.
Number | Date | Country | Kind |
---|---|---|---|
202310046813.3 | Jan 2023 | CN | national |