Aspects of the present invention generally relate to a measurement apparatus for measuring the shape of a target object, a system, and a manufacturing method.
Optical measurement is known as one technique for measuring the shape of a target object. There are various optical measurement methods. One of them is a method called pattern projection. In a pattern projection method, the shape of a target object is measured as follows. A predetermined pattern is projected onto the target object. An image of the target object is captured by an imaging section. The pattern in the captured image is detected. On the basis of the principle of triangulation, distance information is calculated at each pixel position, thereby obtaining information on the shape of the target object.
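As a hedged illustration of the triangulation principle mentioned above (the parallel-axis geometry, the function name, and the parameter names below are assumptions for illustration, not taken from the cited methods), the depth of a surface point can be computed from the detected pattern coordinate roughly as follows:

```python
import math

def depth_from_triangulation(theta_camera, theta_projector, baseline):
    """Generic triangulation sketch (not the specific coding scheme of any cited method).

    theta_camera: angle of the camera ray to the surface point, measured from the
                  camera optical axis toward the projector (radians).
    theta_projector: angle of the projected pattern line, measured from the
                     projector optical axis toward the camera (radians).
    baseline: camera-projector distance; both optical axes are assumed parallel.
    Returns the depth of the point in the same unit as the baseline.
    """
    return baseline / (math.tan(theta_camera) + math.tan(theta_projector))

# Example: a 100 mm baseline with rays at 10 and 15 degrees gives the depth in mm.
z = depth_from_triangulation(math.radians(10), math.radians(15), 100.0)
```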
In this measurement method, the coordinate of each line of the projected pattern is detected on the basis of the spatial distribution of pixel values (the amount of received light) in the captured image. The spatial distribution of the amount of received light is data that contains the effects of reflectivity distribution arising from the pattern, fine shape, and the like of the surface of the target object. Because of these effects, in some cases a detection error occurs in the detection of the pattern coordinates, or the detection cannot be performed at all. This results in low precision in the calculated information on the shape of the target object.
The following measurement method is disclosed in PTL 1. An image at the time of projection of pattern light (hereinafter referred to as “pattern projection image”) is acquired. After that, uniform light is applied to a target object by using a liquid crystal shutter, and an image under uniform illumination (hereinafter referred to as “grayscale image”) is acquired. With the use of the grayscale image as correction data, image correction is performed so as to remove the effects of reflectivity distribution on the surface of the target object from the pattern projection image.
The following measurement method is disclosed in PTL 2. Pattern light and uniform illumination light are applied to a target object. The direction of polarization of the pattern light and the direction of polarization of the uniform illumination light are different from each other by 90°. Imagers corresponding to the respective directions of polarization capture a pattern projection image and a grayscale image respectively. After that, image processing for obtaining distance information from a difference image, which is indicative of the difference between the two, is performed. In this measurement method, the timing of acquisition of the pattern projection image and the timing of acquisition of the grayscale image are the same as each other, and correction for removing the effects of reflectivity distribution on the surface of the target object from the pattern projection image is performed.
In the measurement method disclosed in PTL 1, the timing of acquisition of the pattern projection image and the timing of acquisition of the grayscale image are different from each other. In some applications of a measurement apparatus, distance information is acquired while the target object, the imaging section of the measurement apparatus, or both are moving. In such a case, their relative position changes over time, resulting in a difference between the point of view from which the pattern projection image is captured and the point of view from which the grayscale image is captured. An error will occur if correction is performed by using images based on such different points of view.
In the measurement method disclosed in PTL 2, the pattern projection image and the grayscale image are acquired at the same time by using polarized beams the directions of polarization of which are different from each other by 90°. The surface of a target object has local angular variations because of irregularities in the fine shape of the surface of the target object (surface roughness). Because of the local angular variations, reflectivity distribution on the surface of the target object differs depending on the direction of polarization. This is because the reflectivity of incident light in relation to the angle of incidence differs depending on the direction of polarization. An error will occur if correction is performed by using images containing information based on reflectivity distributions different from each other.
PTL 1: Japanese Patent Laid-Open No. 3-289505
PTL 2: Japanese Patent Laid-Open No. 2002-213931
Even in a case where the relative position of a target object and an imaging section changes, some aspects of the invention make it possible to reduce a measurement error arising from the surface roughness of the target object, thereby measuring the shape of the target object with high precision.
Regarding a measurement apparatus for measuring the shape of a target object, one aspect of the invention is as follows. The measurement apparatus comprises: a projection optical system, an illumination unit, an imaging unit, and a processing unit. The projection optical system is configured to project pattern light onto the target object. The illumination unit is configured to illuminate the target object. The imaging unit is configured to image the target object onto which the pattern light has been projected by the projection optical system, thereby capturing a first image of the target object by the pattern light reflected by the target object. The processing unit is configured to obtain information on the shape of the target object. The illumination unit includes plural light emitters arranged around an optical axis of the projection optical system symmetrically with respect to the optical axis of the projection optical system. The imaging unit images the target object illuminated by the plural light emitters to capture a second image by light emitted from the plural light emitters and reflected by the target object. The processing unit corrects the first image by using the second image of the target object and obtains the information on the shape of the target object on the basis of the corrected image.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
With reference to the accompanying drawings, some preferred embodiments of the invention will now be explained. In each of the drawings, the same reference numerals are assigned to the same members to avoid redundant description.
A relationship between the measurement apparatus 100 and the state of arrangement of the target objects 5 is illustrated in
The distance image illumination unit 1 includes a light source 6, an illumination optical system 8, a mask 9, and the projection optical system 10. The light source 6 is, for example, a lamp. The light source 6 emits non-polarized light that has a wavelength different from that of light sources 7 of the grayscale image illumination unit 2 described later. The wavelength of light emitted by the light source 6 is λ1. The wavelength of light emitted by the light source 7 is λ2. The illumination optical system 8 is an optical system for uniformly applying the beam of light emitted from the light source 6 to the mask 9 (pattern light forming section). The mask 9 has a pattern that is to be projected onto the target object 5. For example, a predetermined pattern is formed by chromium-plating a glass substrate. An example of the pattern of the mask 9 is a dot line pattern coded by means of dots (identification portion) as illustrated in
The grayscale image illumination unit 2 includes plural light sources 7 (light emitters), which are light sources 7a to 7l. Each of these light sources is, for example, an LED, and emits non-polarized light.
The imaging unit 3 includes an imaging optical system 11, a wavelength division element 12, and image sensors 13 and 14. The imaging unit 3 is shared for both distance image measurement and grayscale image measurement. The imaging optical system 11 is an optical system for forming an image of the target object on the image sensors 13 and 14 by means of light reflected by the target object 5. The wavelength division element 12 is an element for optically separating the light of the light source 6 (λ1) from the light of the light sources 7 (λ2). For example, the wavelength division element 12 is a dichroic mirror. The wavelength division element 12 allows the light of the light source 6 (λ1) to pass through itself toward the image sensor 13, and reflects the light of the light sources 7 (λ2) toward the image sensor 14. The image sensors 13 and 14 are, for example, CMOS sensors or CCD sensors. The image sensor 13 (first imaging unit) is an element for capturing a pattern projection image. The image sensor 14 (second imaging unit) is an element for capturing a grayscale image.
The arithmetic processing unit 4 is a general-purpose computer that functions as an information processing apparatus. The arithmetic processing unit 4 includes a processor such as a CPU, an MPU, a DSP, or an FPGA, and a memory such as a DRAM.
Next, the procedure for acquiring a grayscale image will be explained. In the present embodiment, edges corresponding to the contour and edge lines of the target object 5 are detected from a grayscale image, and the edges are used as image features for calculating the position and orientation of the target object 5. First, the grayscale image illumination unit 2 floodlights the target object 5 (S14). The light for illuminating the target object 5 has, for example, a uniform light intensity distribution. Next, the image sensor 14 of the imaging unit 3 images the target object 5 under uniform illumination by the grayscale image illumination unit 2, thereby acquiring a grayscale image (second image) (S15). For edge calculation (S16), the arithmetic processing unit 4 performs edge detection processing by using the acquired image.
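The edge detection algorithm used in S16 is not specified here; the following is a minimal sketch, assuming a NumPy gradient-magnitude detector rather than the actual processing of the arithmetic processing unit 4:

```python
import numpy as np

def detect_edges(gray_image, threshold=50.0):
    """Illustrative edge detection for S16 (a sketch, not the disclosed method).

    gray_image: 2-D array of pixel values from the grayscale image (second image).
    Returns a boolean map that is True where the brightness gradient magnitude
    exceeds the threshold, i.e. where contour and edge-line features are likely.
    """
    gy, gx = np.gradient(gray_image.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold
```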
In the present embodiment, the capturing operation for a distance image and the capturing operation for a grayscale image are performed in synchronization with each other. Therefore, the illumination of (the projection of pattern light onto) the target object 5 by the distance image illumination unit 1 and the uniform illumination of the target object 5 by the grayscale image illumination unit 2 are performed at the same time. The image sensor 13 captures the target object 5 onto which the pattern light has been projected by the projection optical system 10, thereby acquiring the first image of the target object 5 by means of the pattern light reflected by the target object 5. The image sensor 14 captures the target object 5 lit up by the plural light sources 7 to acquire the second image of the target object 5 by means of the light reflected by the target object 5 after coming from the plural light sources 7. Since the capturing operation for the distance image and the capturing operation for the grayscale image are performed in synchronization with each other, even in a situation in which the relative position of the target object 5 and the imaging unit 3 changes, it is possible to perform image acquisition based on the same point of view. The arithmetic processing unit 4 calculates the position and orientation of the target object 5 by using the calculation results of S13 and S16 (S17).
In the calculation of the distance image in S13, the arithmetic processing unit 4 detects the coordinate of each line of the projected pattern on the basis of the spatial distribution of the pixel values (the amount of received light) in the captured image. The spatial distribution of the amount of received light is data that contains the effects of reflectivity distribution arising from the pattern, fine shape, and the like of the surface of the target object. Because of these effects, in some cases a detection error occurs in the detection of the pattern coordinates, or the detection cannot be performed at all. This results in low precision in the calculated information on the shape of the target object. To avoid this, in S12, the arithmetic processing unit 4 corrects the acquired image, thereby reducing errors due to the effects of reflectivity distribution arising from the pattern, fine shape, and the like of the surface of the target object.
The reflectivity distribution of a target object will now be explained. First, with reference to
R(θ)=(R(θ+γ)+R(θ−γ))/2 (1).
That is, in the region where the angular characteristics of reflectivity are roughly linear, local reflectivity (reflectivity distribution) for a pattern projection image and local reflectivity for a grayscale image are roughly equal to each other. Therefore, with the use of the grayscale image acquired in S15, the arithmetic processing unit 4 corrects (S12) the pattern projection image acquired in S11 before the calculation of the distance image in S13. By this means, it is possible to remove, from the pattern projection image, the effects of reflectivity distribution arising from the fine shape of the surface of the target object. Next, in S13, the distance image is calculated using the corrected image. Therefore, in the calculation of the distance image in S13, it is possible to reduce an error due to the effects of reflectivity distribution arising from the pattern and/or fine shape, etc. of the surface of the target object. This makes it possible to obtain, with high precision, the information on the shape of the target object.
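The linear-region argument behind formula (1) can be checked numerically. The following sketch uses a purely hypothetical reflectivity curve (the model and all numbers are assumptions chosen only for illustration):

```python
import numpy as np

# Hypothetical angular reflectivity model: a roughly linear diffuse component plus
# a narrow peak near the regular-reflection angle (0 degrees).
def reflectivity(theta_deg):
    diffuse = 0.5 - 0.004 * np.abs(theta_deg)
    specular = 0.6 * np.exp(-(theta_deg / 5.0) ** 2)
    return diffuse + specular

gamma = 10.0  # assumed half-angle between a symmetric pair of light sources 7
for theta in (2.0, 30.0, 60.0):
    direct = reflectivity(theta)                                      # pattern light
    averaged = 0.5 * (reflectivity(theta + gamma) + reflectivity(theta - gamma))
    print(f"theta={theta:4.1f} deg  R(theta)={direct:.3f}  pair average={averaged:.3f}")
# Away from the peak (30 or 60 deg) the two values nearly coincide, so the grayscale
# image shares the reflectivity distribution of the pattern projection image; near
# the peak (2 deg) they differ, which is the near-regular-reflection condition
# discussed below.
```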
If the light sources 7 differ in wavelength, polarization, brightness, and/or light distribution characteristics from one another, reflectivity and the amount of reflected light differ because of the difference in these parameters, resulting in a difference between the reflectivity distribution of a pattern projection image and the reflectivity distribution of a grayscale image. For this reason, preferably, the light sources 7 should have equal wavelength, equal polarization, equal brightness, and equal light distribution characteristics. If the light distribution characteristics differ from one light source to another, the angular distribution of the amount of incident light coming toward the surface of a target object differs. Consequently, in such a case, the amount of reflected light differs from one light source to another due to the angle difference in reflectivity.
In general, in the angular characteristics of reflectivity, as illustrated in
In the present embodiment, since the measurement apparatus is significantly tilted with respect to the target object 5, the relative orientation θ of the target object and the measurement apparatus is greater than the angle threshold θth. Therefore, image correction is carried out. The image correction is performed by the arithmetic processing unit 4 with the use of a pattern projection image I1(x, y) and a grayscale image I2(x, y). A corrected pattern projection image I1′(x, y) is calculated using the following formula (2):
I1′(x, y) = I1(x, y)/I2(x, y) (2)

where x and y denote pixel coordinate values on the image sensor.
As expressed in the formula (2), the correction is based on division in the above example. However, the method of correction is not limited to division. For example, as expressed in the following formula (3), the correction may be based on subtraction.
I1′(x, y) = I1(x, y) − I2(x, y) (3).
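As one possible implementation of formulas (2) and (3) (a sketch assuming NumPy arrays for the two captured images; the function name and the small epsilon guard against division by zero are assumptions), the correction could be written as follows:

```python
import numpy as np

def correct_pattern_image(i1, i2, method="division", eps=1e-6):
    """Correct the pattern projection image I1 with the grayscale image I2.

    i1, i2: 2-D arrays of pixel values captured by the image sensors 13 and 14.
    method: "division" applies formula (2), "subtraction" applies formula (3).
    eps: guard against division by zero in dark pixels (an added assumption).
    """
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    if method == "division":
        return i1 / np.maximum(i2, eps)   # I1'(x, y) = I1(x, y) / I2(x, y)
    return i1 - i2                        # I1'(x, y) = I1(x, y) - I2(x, y)
```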
In the embodiment described above, since the light sources for grayscale image illumination are arranged symmetrically with respect to the optical axis of the projection optical system 10, light intensity distribution for a pattern projection image and light intensity distribution for a grayscale image are roughly equal to each other. Therefore, it is possible to correct the pattern projection image by using the grayscale image easily with high precision. For this reason, even in a case where the relative position of the target object and the imaging unit changes, it is possible to reduce a measurement error due to the effects of reflectivity distribution arising from the fine shape of the surface of the target object. Therefore, it is possible to obtain information on the shape of the target object with high precision.
Though the light sources 7 are arranged symmetrically with respect to the optical axis of the projection optical system 10, strict symmetry in the light-source layout is not required as long as the error introduced through image correction is within a predetermined tolerable range. The symmetric layout in the present embodiment encompasses such a layout that does not exceed the error tolerance. For example, the target object 5 may be illuminated from two directions that are asymmetric with respect to the optical axis of the projection optical system 10, within a range in which reflectivity in relation to the angle of the surface of the target object is roughly linear.
In the illustrated example of
A second embodiment will now be explained. The differences from the foregoing first embodiment are, firstly, the measurement scene and, secondly, the addition of determination processing regarding the correction of an error arising from the fine shape of the surface of a target object in the image correction step S12. In the first embodiment, it is assumed that the entire image is corrected in S12 by using an image captured under conditions in which the target objects 5 are substantially in an array state. In the measurement scene of the present embodiment, there is a pile of the target objects 5 in a non-array state inside a pallet as illustrated in
In view of the above, in the present embodiment, for each partial area in an image, it is determined in S12 whether correction is necessary or not. With reference to the flowchart of
The step 21 (S21) is a process in which the arithmetic processing unit 4 determines whether correction is necessary or not on the basis of the relative orientation of the measurement apparatus 100 and the target object 5 (measurement scene). In the measurement scene of the present embodiment, since there is a pile of the target objects 5 in a non-array state inside a pallet, the relative orientation of the target object and the measurement apparatus is unknown. Therefore, unlike the first embodiment, at this point in time, the arithmetic processing unit 4 determines that the correction of the entire area of the image should not be carried out.
The step 22 (S22) is a process in which the arithmetic processing unit 4 acquires the data of a table showing a relationship between pixel values (brightness values) in an image and the ratio of improvement in precision as a result of the correction of an error arising from the fine shape of a surface of a target object. The table data can be acquired by conducting a measurement while changing the angle of inclination of the target object in relation to the measurement apparatus. Specifically, the table is created by acquiring the relationship (data) between the pixel values in the pattern projection image or the grayscale image and the ratio of improvement in precision as a result of the correction of the error arising from the fine shape at the part where the approximate shape of the target object 5 is known. “The ratio of improvement in precision as a result of the correction of the error arising from the fine shape” is a value calculated by dividing measurement precision in the shape of the target object after the correction by measurement precision in the shape of the target object before the correction. According to the relationship between the angle of the surface of the target object and reflectivity in
The step 23 (S23) is a process in which the arithmetic processing unit 4 decides, out of the table prepared in step S22, a threshold of the pixel values (brightness values) for determining whether the correction is necessary or not. The brightness threshold Ith is, for example, a brightness value beyond which no improvement in precision can be expected as a result of the correction of an error arising from the fine shape of the surface of a target object; that is, it is the brightness value under angular conditions in which the ratio of improvement in precision is one. It is enough if steps 22 and 23 are carried out once for each kind of part (target object). They may be skipped in the second and subsequent executions in the case of repetitive measurement of the same kind of part.
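As a sketch of steps 22 and 23 (the table layout, function name, and example numbers below are assumptions), the brightness threshold Ith could be read off the table as the lowest brightness at which the improvement ratio reaches one:

```python
def decide_brightness_threshold(table):
    """Step-23 style threshold selection (a sketch under an assumed data layout).

    table: (brightness, improvement_ratio) pairs from the step-22 measurement,
           where the ratio is precision after correction divided by precision
           before correction. Assumption: a ratio of 1 or more means the
           correction no longer reduces the error.
    Returns the lowest brightness at which the ratio reaches 1, used as Ith.
    """
    for brightness, ratio in sorted(table):
        if ratio >= 1.0:
            return brightness
    return float("inf")  # correction helped at every measured brightness

# Hypothetical step-22 data: dark areas benefit from the correction, bright
# near-regular-reflection areas do not.
example_table = [(40, 0.6), (80, 0.7), (120, 0.85), (160, 1.0), (200, 1.05)]
ith = decide_brightness_threshold(example_table)  # Ith = 160 in this example
```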
The step 24 (S24) is a process in which the arithmetic processing unit 4 acquires the data of the grayscale image captured in S15 and the data of the pattern projection image captured in S11. The step 25 (S25) is a process in which the arithmetic processing unit 4 determines, for each partial area in the pattern projection image, whether the correction is necessary or not. In this process, first, the grayscale image or the pattern projection image is segmented into plural partial areas (for example, 2×2 pixels). Next, an average pixel value (average brightness value) is calculated for each of the partial areas and compared with the brightness threshold calculated in the step 23. Each partial area where the average pixel value is less than the brightness threshold is set as an area for which the correction is necessary (correction area). Each partial area where the average pixel value is greater than the brightness threshold is set as an area for which the correction is not necessary. Though a method that involves segmentation into partial areas for the purpose of smoothing noise is described in the present embodiment, it may be determined for each pixel whether the correction is necessary or not, without area segmentation.
The step 26 (S26) is a process in which the arithmetic processing unit 4 corrects the pattern projection image by using the grayscale image. The pattern projection image is corrected by using the grayscale image for the correction areas decided in the step 25. The correction is performed on the basis of the aforementioned formula (2) or (3).
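A sketch of steps 25 and 26 follows (the function and parameter names are assumptions; the 2×2-pixel block size follows the example in step 25). Each partial area's average grayscale brightness is compared with Ith, and formula (2) is applied only in the areas below the threshold:

```python
import numpy as np

def correct_by_area(pattern_img, gray_img, ith, block=2, eps=1e-6):
    """Selective correction in the manner of steps 25-26 (illustrative sketch only).

    pattern_img: pattern projection image (S11); gray_img: grayscale image (S15).
    ith: brightness threshold from step 23. block: partial-area size in pixels.
    Areas whose average grayscale brightness is below ith are corrected with
    formula (2); brighter (near-regular-reflection) areas are left untouched.
    """
    corrected = pattern_img.astype(np.float64).copy()
    gray = gray_img.astype(np.float64)
    h, w = gray.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            area = np.s_[y:y + block, x:x + block]
            if gray[area].mean() < ith:  # correction area
                corrected[area] = corrected[area] / np.maximum(gray[area], eps)
    return corrected
```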
The foregoing is a description of the procedure of correction processing according to the present embodiment. With the present embodiment, for each partial area of the target object except those under near-regular-reflection conditions, it is possible to correct the error due to the effects of reflectivity distribution arising from the fine shape of the surface of the target object as in the first embodiment, resulting in improved measurement precision. Moreover, since the correction based on the aforementioned formula (2) or (3) is not applied to partial areas of the target object under near-regular-reflection conditions, it is possible to prevent a decrease in precision due to the correction. Since image correction is applied not to the whole captured pattern projection image but only to part of it, specifically, only to areas where an improvement can be expected as a result of the correction, it is possible to calculate the shape of the target object in its entirety with higher precision.
A third embodiment will now be explained. The difference from the foregoing second embodiment lies in the procedure of correction of an error arising from the fine shape of the surface of a target object. Therefore, only the point of difference is explained here. In the second embodiment, the determination for each partial area in the image as to whether the correction is necessary or not is performed on the basis of the pixel values of the pattern projection image or of the grayscale image. In the present embodiment, this determination is performed on the basis of the rough orientation of the target object calculated from the image before the correction.
Procedure according to the present embodiment is illustrated in
The step 32 (S32) is a process in which the arithmetic processing unit 4 acquires the data of a table showing a relationship between the angle of inclination of a surface of a target object and the ratio of improvement in precision as a result of the correction of an error arising from the fine shape of the surface of the target object. The table is created by conducting a measurement while changing the angle of inclination of the target object in relation to the measurement apparatus and by acquiring the relationship between the angle of inclination of the surface of the target object and the ratio of improvement in precision as a result of the correction of the error arising from the fine shape, at the part where the approximate shape of the target object 5 is known. The ratio of improvement in precision as a result of the correction of the error arising from the fine shape of the surface of the target object is, as in the second embodiment, a value calculated by dividing measurement precision after the correction by measurement precision before the correction. According to the relationship between the angle of the surface of the target object and reflectivity in
The step 33 (S33) is a process in which the arithmetic processing unit 4 decides, out of the table prepared in step S32, a threshold of orientation (angle of inclination) for determining whether the correction is necessary or not. The orientation threshold θth is, for example, an orientation value (angle of inclination) at or below which no improvement in precision can be expected as a result of the correction of an error arising from the fine shape of the surface of a target object; that is, it is the orientation value at which the ratio of improvement in precision is one. It is enough if steps 32 and 33 are carried out once for each kind of part, as in the second embodiment. They may be skipped in the second and subsequent executions in the case of repetitive measurement of the same kind of part.
The step 35 (S35) is a process in which the approximate orientation of the target object is calculated. In this process, a group of distance points and edges are calculated from the pattern projection image and the grayscale image acquired in step 34, and model fitting to a CAD model of the target object prepared in advance is performed, thereby calculating the approximate orientation (approximate angle of inclination) of the target object. This approximate orientation of the target object is used as information on the shape of the target object acquired in advance. The step 36 (S36) is a process in which, with the use of this information acquired in advance, it is determined for each partial area in the pattern projection image whether the correction is necessary or not. In this process, the orientation (angle of inclination) acquired in step 35 for each pixel of the pattern projection image is compared with the orientation threshold decided in step 33. In the pattern projection image, each partial area where the approximate orientation calculated in S35 is greater than the threshold is set as an area for which the correction is necessary (correction area), and each partial area where the approximate orientation calculated in S35 is less than the threshold is set as an area for which the correction is not necessary.
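For the present embodiment, the determination of step 36 could look like the following sketch (the names are assumptions, and the per-pixel angle of inclination is assumed to come from the model fitting of step 35); the resulting mask can then be combined with the correction of formula (2) in the same way as in the second embodiment:

```python
import numpy as np

def correction_mask_from_orientation(tilt_map, theta_th, block=2):
    """Step-36 style determination (a sketch; tilt_map is assumed to hold the
    approximate angle of inclination per pixel, obtained by the model fitting of S35).

    Returns a boolean mask that is True for partial areas whose average tilt
    exceeds theta_th, i.e. areas far from regular reflection that need correction.
    """
    mask = np.zeros(tilt_map.shape, dtype=bool)
    h, w = tilt_map.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            area = np.s_[y:y + block, x:x + block]
            mask[area] = tilt_map[area].mean() > theta_th
    return mask
```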
With the embodiment described above, as in the second embodiment, it is possible to correct a measurement error arising from the fine shape of the surface of a target object with high precision while preventing a decrease in precision at the near-regular-reflection region.
A fourth embodiment will now be explained. The difference from the foregoing first embodiment lies in the grayscale image illumination unit 2. Therefore, the point of difference only is explained here. In the first embodiment, the grayscale image illumination unit 2 floodlights the target object 5 by means of direct light coming from the light sources 7. In the foregoing structure, the characteristics of the light sources 7 have a significant influence on the characteristics of the light for illuminating the target object 5 (wavelength, polarization, brightness, light distribution characteristics).
In view of the above, as illustrated in
With the embodiment described above, as in the first embodiment, it is possible to correct a measurement error due to the effects of reflectivity distribution on the surface of a target object with high precision even in a case where the relative position of the target object and the imaging unit changes.
Though exemplary embodiments are described above, the scope of the invention is not restricted to the exemplary embodiments. It may be modified in various ways within a range not departing from the gist of the invention. For example, though the two image sensors 13 and 14 are provided for imaging in the foregoing embodiments, a single sensor capable of acquiring both a distance image and a grayscale image may be provided instead. In such a case, the wavelength division element 12 is unnecessary. The foregoing embodiments may be combined with one another. Though the light emitted by the light source 6 and the light sources 7 is explained as non-polarized light, the scope of the invention is not restricted thereto; the light may be polarized, for example linearly polarized in the same polarization direction, as long as the state of polarization is the same. The plural light emitters may be mechanically coupled by means of a coupling member, a supporting member, or the like. A single ring-shaped light source may be adopted instead of the plural light sources 7. The disclosed measurement apparatus may be applied to a measurement apparatus that performs measurement by using a plurality of robot arms with imagers, or to a measurement apparatus whose imaging unit is provided on a fixed supporting member; that is, the measurement apparatus may be mounted on a fixed structure rather than on a robot arm. With the use of data on the shape of a target object measured by the disclosed measurement apparatus, the object may be processed, for example, machined, deformed, or assembled, to manufacture an article, for example, an optical part or a device unit.
With some aspects of the invention, even in a case where the relative position of a target object and an imaging unit changes, it is possible to reduce a measurement error arising from the surface roughness of the target object, thereby measuring the shape of the target object with high precision.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2015-138158, filed Jul. 9, 2015, which is hereby incorporated by reference herein in its entirety.