The present invention relates to a measurement apparatus, a storage medium, a system, and a method of manufacturing an article and the like.
As a conventional non-contact measurement apparatus, there is a measurement apparatus disclosed in Japanese Examined Patent Application Publication No. 59-52963. This measurement apparatus generates speckles by irradiating a measurement target with a laser, obtains two image capturing signals by photoelectrically converting speckle distribution before and after motion, and calculates the deformation amount of the measurement target based on the position of the extreme value of the correlation function of the two signals. In Japanese Examined Patent Application Publication No. 59-52963, the measurement is performed by using speckles, but a similar measurement can be performed even with normal image information by using incoherent illumination.
In Japanese Patent Laid-Open No. 2003-222504, an optical magnification is determined in accordance with a measured displacement amount in order to achieve high accuracy in a displacement measurement apparatus.
Japanese Patent Laid-Open No. 2003-222504 discloses a method for correcting distortion caused by a light-receiving optical system of a displacement measurement apparatus. In a light-receiving optical system having a large distortion, the influence of the distortion differs depending on the displacement amount. In a case in which the displacement amount is small, the optical magnification is equal to the design value, but as the displacement amount becomes large, the deviation between the optical magnification and the design value becomes large. Thus, by measuring the distortion amount of the optical system in advance, the optical magnification is corrected in accordance with the displacement amount.
However, errors that occur in a displacement measurement apparatus are not limited to those caused by the optical magnification, such as distortion, and the method of correcting the optical magnification disclosed in Japanese Patent Laid-Open No. 2003-222504 has a problem in that such other errors cannot be sufficiently corrected.
Accordingly, it is one object of the present invention to provide a measurement apparatus and the like that suppresses errors caused by the images that are used for measurement, and that enables measurement with high accuracy.
In order to achieve the above object, a measurement apparatus according to one aspect of the present invention includes a circuit configured to function as a measurement unit configured to calculate a measurement value with respect to a measurement target by using a cross-correlation function of two images of the measurement target acquired by an image capturing element, and a correction unit configured to correct the measurement value according to a configuration of a spatial frequency component of the two images.
Further features of the present invention will become apparent from the following description of embodiments with reference to the attached drawings.
Hereinafter, with reference to the accompanying drawings, favorable modes of the present invention will be described using embodiments. In each diagram, the same reference signs are applied to the same members or elements, and duplicate descriptions will be omitted or simplified.
The present inventor has found a tendency that a measurement error becomes large in a case in which a low-frequency component is dominant as a configuration of a spatial frequency component included in an image to be used. For example, in the measurement of a measurement target having a significantly varying surface roughness, the measurement error becomes large when the low-frequency component becomes dominant as a configuration of the spatial frequency component of the measurement target surface pattern.
Further, for example, in a case in which the distance to the measurement target changes and the image becomes defocused, or in a case in which motion blur occurs in the image because the exposure time is constant and the speed of the measurement target becomes faster, the measurement error similarly becomes large. That is, the measurement error becomes large in a case in which the ratio of the frequency components that are lower than a predetermined frequency in the distribution of the spatial frequency components of the image of the measurement target is equal to or greater than a predetermined ratio. Therefore, it has been found that, in such a case, it is desirable to correct the measurement value.
Accordingly, the measurement apparatus 1 of the first embodiment is configured so that the measurement error is corrected according to the configuration of the spatial frequency component.
The light flux emitted from a light source 3 is condensed on the measurement target 2 by a light condensing member 4, and illuminates the measurement target 2.
The light source 3 can be appropriately selected from a laser diode, an LED, a halogen lamp, or the like. An image obtained in a case in which a laser diode is selected is an image configured by speckles, and in a case in which an incoherent light source such as an LED or halogen lamp is selected, an image reflecting the pattern of the surface of the measurement target 2 is obtained.
The light condensing member 4 is configured by a single lens or a lens group. In a case in which a laser diode is used, it is desirable to perform aberration correction so that the measurement target can be illuminated with a plane wave. In addition, in a case in which the distance between the measurement apparatus 1 and the measurement target 2 changes, it is desirable to adopt coaxial epi-illumination, because oblique-incidence illumination causes the speckle pattern to shift.
In contrast, in a case in which an incoherent light source such as an LED or a halogen lamp is selected, it is sufficient that the light receiving region can be illuminated, and because aberration and the like are not particularly problematic, the light condensing member 4 can be appropriately selected according to the size of the region to be illuminated.
A part of the light flux that is diffusely reflected from the illuminated measurement target 2 is condensed on, for example, a sensor 6 serving as an image capturing element via a light receiving optical system that is configured by a light condensing member 5, an aperture diaphragm 7, and a light condensing member 8. In the first embodiment, a double-sided telecentric optical system is adopted as the light-receiving optical system.
The light condensing members 5 and 8 are arranged so that their focal points coincide with each other, and the aperture diaphragm 7 is disposed at that position. By adopting a double-sided telecentric optical system, the magnification of the optical system does not easily change even if the distance between the measurement apparatus 1 and the measurement target 2 changes, and it is possible to implement a configuration that is less susceptible to effects such as positional deviation due to a change in the installation environment temperature.
Hereinafter, the distance between the attachment reference surface of the measurement apparatus 1 and the measurement target 2 is referred to as “WD” (Working Distance). The light condensing members 5 and 8 are configured by a single lens or a group of lenses. The magnification of the optical system is determined by the ratio of the focal lengths of the light condensing members 5 and 8.
A desired resolution can be selected as appropriate. In a case in which the change of the WD and the change in the installation position of the sensor 6 is negligible, an ordinary image forming optical system may be selected. Further, in a case in which a change in the WD is not negligible, but a change in the installation position of the sensor 6 is negligible, it is possible to select an object-side telecentric optical system.
The sensor 6 is configured by a photoelectric conversion element array such as a CCD element or a CMOS element. The sensor 6 is a line sensor or an area sensor, and in a case of an area sensor, it is possible to detect a two-dimensional displacement, and in a case in which a line sensor is selected, it is possible to detect a one-dimensional displacement. Here, a one-dimensional length measurement (displacement amount) by using a line sensor will be explained. However, the measurement in the first embodiment is not limited to a length measurement (displacement amount).
The light flux imaged on the sensor 6 is photoelectrically converted, and the resulting signal is output to the signal processing unit 9 and processed, for example, for dynamic range correction and gamma correction to generate image data (data for each image). The image data that is generated by the signal processing unit 9 is supplied to a control unit 10. The control unit 10 is configured to include a CPU serving as a computer, a memory serving as a storage medium storing a computer program, and the like. The signal processing unit 9 and the control unit 10 include electrical circuits to perform the various functions mentioned above.
The control unit 10 calculates the displacement amount of the measurement target 2 based on the image data according to the computer program, outputs the length measurement value (displacement amount) to an external device, and controls the operation of each part of the entire length-measuring instrument as the measurement apparatus 1.
When a measurement is started, in step S10, the measurement apparatus 1 sequentially acquires images at a set sampling rate by the sensor 6. In step S11, the first acquired image is set as a reference image; in step S12, images are sequentially acquired; and in step S13, the image obtained in step S12 is set as a measurement image.
Then, in step S14, the displacement amount is calculated by computing the correlation between the reference image and the measurement image, and in step S15, the displacement amount is output as a length measurement value (displacement amount). In step S16, based on an operation output from an operation unit (not shown) or the like, it is determined whether or not to terminate the measurement operation, and in a case in which it is not terminated, the processing returns to step S12, and steps S12 to S16 are repeated. When it is determined in step S16 that the processing ends, the measurement flow ends.
Note that in a case in which the sampling proceeds and the reference image deviates from the measurement region, processing such as updating the reference image may be performed. In this manner, in the measurement apparatus 1 of the first embodiment, light from the measurement target is received by an image capturing element, and a measurement value with respect to the measurement target is calculated by using the cross-correlation function of the two images that were acquired by the image capturing element, for example, as a measurement length value (displacement amount).
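The measurement flow of steps S10 to S16 can be illustrated by the following minimal sketch. The helper functions acquire_image, compute_displacement, output, and should_stop are hypothetical placeholders used only for illustration and are not part of the disclosed apparatus.

```python
# Minimal sketch of the measurement flow (steps S10 to S16); all helper
# functions are hypothetical placeholders supplied by the caller.
def measurement_loop(acquire_image, compute_displacement, output, should_stop):
    reference = acquire_image()           # S10, S11: first image becomes the reference
    while not should_stop():              # S16: check whether to terminate
        measurement = acquire_image()     # S12, S13: acquire the measurement image
        displacement = compute_displacement(reference, measurement)  # S14
        output(displacement)              # S15: output the length measurement value
```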
The displacement amount computation calculates the cross-correlation function between the reference image and the measurement image, and determines the displacement from the position of an extreme value. Further, the calculation of the cross-correlation function is performed, for example, in a frequency space. That is, the reference image is Fourier transformed in step S101, and in step S102, is limited to a predetermined frequency band by a band-pass filter.
Next, in step S103, the measurement image is Fourier transformed, and in step S104, is limited to the same frequency band as in step S102 by a band-pass filter. Note that a window function may be applied when the Fourier transform is performed. Further, the band-pass filters in steps S102 and S104 are configured to be capable of setting transmission/non-transmission for each frequency component with respect to the Fourier-transformed data.
Next, in step S105, the two Fourier-transformed images are multiplied together, with the complex conjugate taken of one of them, and in step S106, an inverse Fourier transform is performed to obtain the cross-correlation function. In step S107, the maximum value (extreme value) of the cross-correlation function is detected, and in step S108, the correlation position at the maximum value (extreme value) is detected. Note that the maximum value (extreme value) is determined with a resolution of one pixel at this stage.
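The frequency-space computation of steps S101 to S108 can be sketched as follows, assuming one-dimensional line sensor outputs and a simple rectangular band-pass mask; the cut-off frequencies and the omission of a window function are illustrative assumptions rather than features of the disclosed apparatus.

```python
import numpy as np

def cross_correlation(reference, measurement, f_low=0.01, f_high=0.25):
    # f_low and f_high are illustrative cut-offs in cycles per pixel.
    n = len(reference)
    freqs = np.fft.fftfreq(n)
    band = (np.abs(freqs) >= f_low) & (np.abs(freqs) <= f_high)
    ref_f = np.fft.fft(reference) * band                  # S101, S102
    mea_f = np.fft.fft(measurement) * band                # S103, S104
    corr = np.real(np.fft.ifft(ref_f * np.conj(mea_f)))   # S105, S106
    peak = int(np.argmax(corr))                           # S107, S108 (pixel resolution)
    # Because the inverse FFT is circular, a shift in the opposite direction
    # appears as a peak index close to n.
    return corr, peak
```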
Furthermore, in step S109, a sub-pixel estimation computation is performed in order to calculate the position with a resolution finer than one pixel and to improve the accuracy. In the first embodiment, at the time of the sub-pixel estimation computation, the extreme value of the cross-correlation function and the values before and after it are used to approximate the peak by, for example, a quadratic function, and the extreme position of the approximating function is calculated as the sub-pixel estimation value.
In addition to a quadratic function, other approximation methods, such as a method of approximating the peak as the intersection of two straight lines or a method of approximating it by a Gaussian distribution, may also be used. In this manner, the measurement value (displacement amount) is calculated based on the sub-pixel estimation value that is calculated based on the cross-correlation function.
Further, in the first embodiment, in step S110, a correction coefficient is calculated based on the spread of the peak of the cross-correlation function. Because the spread of the peak shape of the cross-correlation function changes according to the configuration of the spatial frequency components, in the first embodiment, the correction coefficient is acquired based on the spread of the peak of the cross-correlation function. That is, it can be said that, in step S110, the correction coefficient is calculated in accordance with the configuration of the spatial frequency components.
In addition, in step S111, the measurement value (length measurement value) that is the result of the sub-pixel estimation computation is corrected by using the above-described correction coefficient to calculate the displacement amount of the measurement target 2. In this context, step S111 functions as a correction step that corrects the measurement value. Note that, in a case in which the spread of the peak shape is equal to or greater than a predetermined value, the above-described measurement value is corrected, and in a case in which the spread of the peak shape is smaller than the predetermined value, the correction is not performed because the error is assumed to be negligible.
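A minimal sketch of the correction decision in step S111 is shown below. The threshold value and the assumption that the correction is applied as a multiplicative coefficient are illustrative; correction_coefficient stands for the lookup that is described later with reference to the approximate expression or the table.

```python
def corrected_measurement(subpixel_value, peak_spread, correction_coefficient,
                          spread_threshold=3.0):
    # spread_threshold is an illustrative placeholder; the correction is assumed
    # here to be multiplicative.
    if peak_spread >= spread_threshold:
        return subpixel_value * correction_coefficient(peak_spread)
    return subpixel_value  # error treated as negligible when the peak is narrow
```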
The above-described correction coefficient will be explained below.
In a case in which a double-sided telecentric optical system is adopted, the change in optical magnification is small even when the WD is changed. Nevertheless, an error occurs in the length measurement value when the WD changes. This error is larger than an error that could be attributed to a change of the optical magnification, which indicates that an error factor other than the change of the optical magnification exists.
Next, a case in which the speed of the measurement target 2 changes will be considered. In this case as well, a length measurement error occurs. The error that is generated in this case is larger than an error that could be attributed to distortion, which indicates that the speed of the measurement target, rather than the optical system, is an error factor.
As shown above, in the case in which the WD changes, the image becomes defocused, and in the case in which the speed of the measurement target 2 increases, motion blur occurs in the image. In both cases, the low-frequency component becomes dominant in the spatial frequency components of the image that is used for the measurement, and the measurement error becomes large.
Such a tendency occurs because, in a case in which the low-frequency component becomes dominant in the configuration of the spatial frequency components included in the line sensor output that is used in the measurement, the spread of the peak of the cross-correlation function becomes large, and consequently the influence of the sub-pixel estimation error becomes large.
Hereinafter, the relationship between the spread of the peak shape of the cross-correlation function and the sub-pixel estimation error that is generated as a result will be explained.
Consider the reference line sensor output and the measurement line sensor output, each consisting of N pixels, and assume that a displacement of m pixels (wherein m is a non-zero integer) is measured. The overlap region of the two line sensor output data is indicated by oblique lines in the drawing.
In a case in which the displacement amount corresponds to an integer number of pixels, it is ideally desirable that the peak of the cross-correlation function takes a symmetrical shape. However, as explained below, the peak shape actually becomes asymmetric.
That is, in the computation of the cross-correlation function in real space, the multiplication is executed as is in the overlap portion, because the values of both line sensor outputs exist there. Outside the overlap portion, however, there is no partner to be multiplied, so those pixels do not contribute to the computation of the cross-correlation function. At the 0th pixel of the cross-correlation function, all N pixels of the two line sensor outputs become the overlap portion.
At the mth pixel, (N−m) pixels become the overlap portion, and at the 2mth pixel, (N−2m) pixels become the overlap portion. At the mth pixel, because the values of the two line sensor outputs in the overlap portion correspond to each other, the cross-correlation function takes a very large value. At the other pixels, the values of the two line sensor outputs in the overlap portion do not correspond to each other, so the cross-correlation function takes a comparatively small, random value in comparison with the value at the mth pixel.
Here, when the 0th pixel and the 2mth pixel of the cross-correlation function are compared, a difference of 2m pixels occurs in the overlap portion, as was described above. Because a difference occurs in the range that contributes to the computation of the cross-correlation function, in general, the value of the cross-correlation function of the 0th pixel becomes larger than the value of the cross-correlation function of the 2mth pixel.
In this manner, in the computation of the cross-correlation function, an asymmetry of the peak shape occurs. Then, the asymmetry of the peak shape of the cross-correlation function affects the sub-pixel estimation.
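The asymmetry described above can be checked numerically. The following sketch assumes non-negative line sensor outputs of N pixels displaced by m pixels; because the overlap at lag 0 is N pixels while the overlap at lag 2m is only N − 2m pixels, the values on the two sides of the peak at lag m differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 256, 5
base = rng.random(N + m)
ref, mea = base[:N], base[m:m + N]           # mea is ref displaced by m pixels
corr = np.correlate(ref, mea, mode="full")   # real-space cross-correlation
lags = np.arange(-(N - 1), N)
c = dict(zip(lags, corr))
print(c[m], c[0], c[2 * m])                  # peak at lag m, and the values at lags 0 and 2m
```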
The fitting function that is used in the sub-pixel estimation is generally symmetric, for example, a quadratic function or a pair of straight lines. That is, the above-described sub-pixel estimation value is calculated by performing a fitting with a linear or quadratic function based on the cross-correlation function.
However, when such a symmetric fitting function is applied to a peak having an asymmetric shape, the estimated position of the extreme value deviates from the true peak position. In contrast, when the peak shape is symmetric, such a deviation does not occur, and the sub-pixel estimation error remains small.
As described above, in a measurement method such as length measurement that uses a cross-correlation function, asymmetry occurs in the peak shape of the cross-correlation function. In addition, as the low-frequency component increases in the configuration of the spatial frequency components included in the sensor output, the spread of the peak shape of the cross-correlation function becomes large.
As the spread of the peak shape of the cross-correlation function becomes large, the influence of the asymmetry of the peak shape becomes large, and the sub-pixel estimation error, that is, the measurement error of the length measurement and the like, becomes large. In addition, the error occurs in a direction in which a measurement amount, such as a length measurement, is shortened.
When the ratio of the low-frequency component in the spatial frequency components of the two images, which are the sensor outputs, increases, the peak shape of the cross-correlation function spreads. Thus, the spread of the peak can be used as an index of the length measurement (displacement amount) error. In the first embodiment, the length measurement error is corrected by using this relationship between the spread of the peak shape of the cross-correlation function and the length measurement error.
At this time, the correction coefficient can be acquired by referring to an approximate expression such as a polynomial equation in which the length measurement (displacement amount) error and the spread of the peak shape of the cross-correlation function are made variables, or to a table stored in advance in the memory.
In a case in which the spread of the peak shape of the cross-correlation function is calculated by using an approximate expression, the computation can be performed by utilizing the quadratic function fitting that is used in the sub-pixel estimation computation of step S109.
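A sketch of acquiring the correction coefficient from the spread of the peak shape is shown below; the polynomial coefficients and the table values are placeholders for data that would be measured and stored in the memory in advance, not calibration data of an actual apparatus.

```python
import numpy as np

POLY = np.array([0.002, -0.01, 1.0])                # placeholder polynomial coefficients
SPREAD_TABLE = np.array([1.0, 2.0, 3.0, 4.0])        # placeholder peak-spread samples
COEFF_TABLE = np.array([1.00, 1.01, 1.03, 1.06])     # placeholder correction coefficients

def coefficient_from_polynomial(spread):
    return np.polyval(POLY, spread)                  # approximate expression

def coefficient_from_table(spread):
    return np.interp(spread, SPREAD_TABLE, COEFF_TABLE)  # table stored in the memory
```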
Note that the ratio of the low-frequency component may be used instead of the spread of the peak. In that case, the relationship between the ratio of the low-frequency component and the correction coefficient may be made into an approximate expression or a table, and the correction may be made thereby.
The quadratic function that is fitted in the sub-pixel estimation computation of step S109 can be expressed, for example, as the following Equation (1).
C(x) = a·x² + b·x + c  (1)
The three unknowns of Equation (1) are determined by using a total of three data points: the maximum value of the cross-correlation function and the values before and after it. When the position of the pixel having the maximum value is the mth pixel, the maximum value is C(m), and the values at the m±1th pixels before and after the maximum value are C(m±1), the unknowns can be determined by using the following Equations (2) to (4).

a = {C(m+1) − 2C(m) + C(m−1)}/2  (2)

b = {C(m+1) − C(m−1)}/2 − 2am  (3)

c = C(m) − am² − bm  (4)
Note that the sub-pixel estimation position can be determined, for example, by using Equations (5) and (6), as the point at which the derivative of the fitted function becomes zero. That is, when the sub-pixel estimation position is set to S, the following holds.

2a·S + b = 0  (5)

S = −b/(2a)  (6)
In this manner, the sub-pixel estimation position can be determined from the above-described a and b. In addition, the spread of the peak shape of the cross-correlation function can be defined as the width between the intersections of the fitted quadratic function with, for example, the x-axis. By using the solution formula of the quadratic equation, the spread of the peak shape of the cross-correlation function can be calculated by using Equation (7).

(peak spread) = √(b² − 4ac)/|a|  (7)
Note that, in order to omit the calculation of the square root, the squared value may be used as the spread of the peak shape of the cross-correlation function. Further, instead of defining the spread of the peak shape as the width of the intersections of the fitted quadratic function with the x-axis, the spread may be defined as the width (for example, the half-width) of the waveform at a threshold value that is set at a predetermined ratio with respect to the peak value. That is, the spread of the peak shape may be calculated based on the maximum value of the cross-correlation function, the position of the maximum value, and the values at the positions before and after it.
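The quadratic fit, the sub-pixel estimation, and the peak-spread calculation can be combined into the following sketch, which mirrors Equations (1) to (7) as reconstructed above; it is one straightforward reading of the description rather than a definitive implementation.

```python
import numpy as np

def fit_peak(corr, m):
    # Fit C(x) = a*x**2 + b*x + c through C(m - 1), C(m), and C(m + 1).
    cm1, c0, cp1 = corr[m - 1], corr[m], corr[m + 1]
    a = (cp1 - 2.0 * c0 + cm1) / 2.0                  # cf. Equation (2)
    b = (cp1 - cm1) / 2.0 - 2.0 * a * m               # cf. Equation (3)
    c = c0 - a * m * m - b * m                        # cf. Equation (4)
    subpixel = -b / (2.0 * a)                         # cf. Equations (5) and (6)
    spread = np.sqrt(b * b - 4.0 * a * c) / abs(a)    # cf. Equation (7)
    return subpixel, spread
```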
In this manner, it is possible to correct the length measurement value (displacement amount) by reusing information that is already obtained from the sensor output. Note that, although the first embodiment shows a method of estimating the length measurement error from the spread of the peak shape of the cross-correlation function, the configuration of the spatial frequency components included in the line sensor output can also be obtained by calculating the power spectrum based on the Fourier-transformed data.
In addition, as a method of evaluating the configuration of the spatial frequency components included in the line sensor output, it is also possible to determine that the high-frequency component is large if the number of crossing points of the output with respect to a threshold value is large. Further, for example, the ratio of the high-frequency component can be determined by using differential data. Conversely, the ratio of the low-frequency component may be determined by using a low-pass filter.
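The following sketch estimates the ratio of the low-frequency component from the power spectrum of a line sensor output; the cut-off frequency used to separate the low-frequency band is an illustrative assumption.

```python
import numpy as np

def low_frequency_ratio(line_output, cutoff=0.05):
    # The DC component is removed before the spectrum is evaluated (a design choice).
    spectrum = np.abs(np.fft.rfft(line_output - np.mean(line_output))) ** 2
    freqs = np.fft.rfftfreq(len(line_output))        # cycles per pixel
    total = np.sum(spectrum)
    return np.sum(spectrum[freqs <= cutoff]) / total if total > 0 else 0.0
```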
Note that the low-frequency component in the first embodiment may be a frequency component that is equal to or less than a predetermined frequency threshold such as an average frequency, for example, of the configuration (frequency spectrum distribution) of the spatial frequency component that is included in the line sensor output.
Alternatively, for example, it is sufficient that the deviation value on the low-frequency side in the configuration of the spatial frequency components (the frequency spectrum distribution) is equal to or less than a predetermined threshold value. In addition, in the first embodiment, the ratio of the low-frequency component refers to, for example, the proportion that frequency components lower than the average frequency occupy in the entire configuration of the spatial frequency components (the histogram of the frequency spectrum).
Next, a second embodiment of the measurement apparatus of the present invention will be explained.
In the second embodiment, the sub-pixel estimation is performed by fitting straight lines instead of a quadratic function. One linear function fitting is performed by combining the maximum value of the cross-correlation function and its position with the value having the third largest magnitude and its position. In addition, another linear function fitting is performed by combining the values having the second and fourth largest magnitudes and their positions. Then, the sub-pixel estimation can be performed by calculating the intersection point of the two linear function fitting results.
In addition, it becomes possible to calculate the spread of the peak shape of the cross-correlation function from the gradients of the two linear function fitting results. Then, in step S110, the length measurement error is acquired based on the spread of the peak shape of the cross-correlation function that is computed in this manner and, for example, the data stored in advance in the memory that represents the relationship between the spread of the peak shape and the length measurement error.
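A sketch of the linear-fitting variant of the second embodiment is shown below. The pairing of the first and third largest values and of the second and fourth largest values follows the description above, whereas defining the peak spread as the width between the x-axis crossings of the two lines is one possible reading of calculating the spread from the gradients.

```python
import numpy as np

def subpixel_by_lines(corr):
    order = np.argsort(corr)[::-1][:4]        # positions of the four largest values
    x1, x2, x3, x4 = order
    line_a = np.polyfit([x1, x3], [corr[x1], corr[x3]], 1)   # slope, intercept
    line_b = np.polyfit([x2, x4], [corr[x2], corr[x4]], 1)
    # Intersection of the two lines gives the sub-pixel estimation position.
    subpixel = (line_b[1] - line_a[1]) / (line_a[0] - line_b[0])
    # Width between the x-intercepts of the two lines, used as the peak spread.
    spread = abs((-line_a[1] / line_a[0]) - (-line_b[1] / line_b[0]))
    return subpixel, spread
```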
Note that, in the first embodiment and the second embodiment, a measurement apparatus 1 has been explained by using an example of a length-measuring instrument. However, the measurement apparatus 1 may be, for example, a distance measurement apparatus configured to measure a distance to a measurement target or a distance distribution based on a correlation function of two images. Alternatively, it may be, for example, a measurement apparatus configured to measure the shape, position and posture of a measurement target.
In addition, in the above embodiments, the measurement value is calculated based on the sub-pixel estimation value that is calculated based on the cross-correlation function, and the measurement value is corrected according to the configuration of the spatial frequency components of the two images.
Further, the above-described measurement value, which is calculated based on the sub-pixel estimation value that is calculated based on the cross-correlation function, may also be corrected in a case in which the surface roughness of the measurement target is equal to or greater than a predetermined roughness.
Alternatively, other control may be carried out based on the above-described correlation between the spread of the peak shape of the cross-correlation function and the measurement error.
In addition, the measurement apparatus 1 is used by being supported by a robot arm 20 serving as a support apparatus. The light flux that is emitted from the light source 3 housed in the measurement apparatus 1 is condensed by the light condensing member 4 on the measurement target 2, and illuminates the measurement target 2, which is conveyed in the arrow direction by the conveying unit 14.
The sensor of the measurement apparatus 1 captures an image of the measurement target 2, which is illuminated by the light source 3 and conveyed by the conveying unit 14, acquires image capturing data, and inputs the image data to the control unit 10 via the signal processing unit 9. Then, the control unit 10 executes a measurement step of measuring the length, the shape, the position, the posture, the distance, and the like of the measurement target 2 based on the image data, and calculates a measurement value.
Based on the measurement values obtained by the measurement step, such as length, shape, position and posture, and distance, the control unit 10 sends a drive command to the robot arm 20 so as to control the movement of the robot arm 20. In addition, the measurement data that is measured by the measurement apparatus 1 and the obtained image may be displayed on the display unit 11.
The robot arm 20 holds (grips) the measurement target 2 by a robot hand (gripping unit) 21 at the distal end based on the measurement value that was obtained by the measurement apparatus 1, and performs movement and posture control processing, such as translation and rotation.
Further, by performing a process such as assembling the measurement target 2 on another component by the robot arm 20, an article that is configured by a plurality of components, for example, an electronic circuit board or a machine, is manufactured. In addition, by further performing other processing treatment steps with respect to the measurement target 2 that has been moved, a process of manufacturing a final article can also be performed. Note that a control unit for controlling the robot arm 20 may be provided separately from the control unit 10.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation to encompass all such modifications and equivalent structures and functions. In addition, as a part or the whole of the control according to the embodiments, a computer program realizing the function of the embodiments described above may be supplied to the measurement apparatus through a network or various storage media. Then, a computer (or a CPU, an MPU, or the like) of the measurement apparatus may be configured to read and execute the program. In such a case, the program and the storage medium storing the program configure the present invention.
This application claims the benefit of Japanese Patent Application No. 2021-208219 filed on Dec. 22, 2021, which is hereby incorporated by reference herein in its entirety.