The present invention relates to a shape reconstruction method and an image measurement device, and particularly relates to a shape reconstruction method and an image measurement device capable of quickly reconstructing information for each point of a measurement object in a captured image of the measurement object.
An image measurement device that reconstructs shape information of a measurement object by applying illumination light to the measurement object and processing a captured image is conventionally known. For example, an image measurement device that captures an image of a measurement object with a telecentric imaging optical system and measures the shape of the measurement object falls under this category. The telecentric imaging optical system is suitable mainly for measuring the two-dimensional shape of a surface of the measurement object because its deep depth of field causes little blurring of the image even when there is a level difference in the optical axis direction. However, with the telecentric imaging optical system, it is difficult to detect information in the height direction of the measurement object, and thus the telecentric imaging optical system is not appropriate for measuring the three-dimensional shape of the measurement object.
In recent years, as described in Patent Literature 1, an inspection system capable of obtaining inclination information for each point of a measurement object on the basis of a single captured image, by using a specific illumination device for testing, has been developed. This invention makes it possible to extract information on defects such as minute irregularities and foreign matter in the measurement object.
[Patent Literature 1] Japanese Patent No. 6451821
However, although Patent Literature 1 describes that the inspection system can obtain the inclination information for each point of the measurement object with a single imaging operation, it does not clarify specific steps and configurations for reconstructing information for each point of the measurement object in a way that takes advantage of this simplicity and speed.
The present invention has been made to solve the above-described conventional problems, and aims at providing a shape reconstruction method and an image measurement device that are capable of quickly reconstructing information for each point of a measurement object in a captured image of the measurement object.
To solve the above-described problems, the invention according to claim 1 of the present application is a shape reconstruction method for reconstructing a shape of a measurement object by applying illumination light to the measurement object and processing an image that has been captured, the shape reconstruction method including: an illumination step of applying, to the measurement object, the illumination light having a specific irradiation solid angle including a plurality of solid angle regions with optical attributes, each different from each other; an imaging step of receiving object light, generated by the illumination light, from the measurement object at a predetermined observation solid angle and capturing the image; a calculation step of obtaining a normal vector at each point of the measurement object corresponding to each pixel from inclusion relation between the plurality of solid angle regions, constituting the object light, and the predetermined observation solid angle, on the basis of the optical attributes identified at each pixel of the image; and a shape reconstruction step of obtaining, from the normal vector, inclination information for each point of the measurement object and reconstructing the shape of the measurement object.
In the invention according to claim 2 of the present application, the irradiation solid angle is allowed to be uniform at each point of the measurement object.
In the invention according to claim 3 of the present application, the plurality of solid angle regions are provided around an irradiation optical axis of the irradiation solid angle of the illumination light.
In the invention according to claim 4 of the present application, the optical attributes are light wavelength ranges.
The invention according to claim 5 of the present application further includes a preliminary step before the illumination step, wherein in the preliminary step, the illumination step and the imaging step are performed using the measurement object itself or a specific jig in place of the measurement object, and a correspondence relation generation step of obtaining correspondence relations between the optical attributes and the normal vector is performed.
In the invention according to claim 6 of the present application, the specific jig is a reference sphere or a reference plane.
In the invention according to claim 7 of the present application, the correspondence relations are configured as a correspondence table.
In the invention according to claim 8 of the present application, the correspondence relations are configured as a complementary function.
In the invention according to claim 9 of the present application, the normal vector is normalized.
In the invention according to claim 10 of the present application, in a case in which the plurality of solid angle regions are not rotationally symmetrical with respect to an observation optical axis of the observation solid angle, a rotation step of rotating the measurement object around the observation optical axis at a predetermined angle is performed after the imaging step, and the calculation step is performed after the illumination step and the imaging step are performed a predetermined number of times.
The invention according to claim 11 of the present application is an image measurement device for measuring a shape of a measurement object, the image measurement device including: an illumination device configured to apply illumination light to the measurement object; an imaging device configured to capture an image of the measurement object and output the image; and a processing device configured to process the image, wherein the illumination device has a light source unit configured to emit the illumination light, a lens unit configured to apply the illumination light to the measurement object at a specific irradiation solid angle, and a filter unit, which is disposed between the light source unit and the lens unit, that is configured to separate the inside of the specific irradiation solid angle into a plurality of solid angle regions with optical attributes, each different from each other; the imaging device receives object light, generated by the illumination light, from the measurement object at a predetermined observation solid angle, and pixels of the imaging device can each identify the different optical attributes; and the processing device includes an arithmetic unit configured to obtain a normal vector at each point of the measurement object corresponding to each pixel from inclusion relation between the plurality of solid angle regions, constituting the object light, and the predetermined observation solid angle, and a shape reconstruction unit configured to reconstruct, from the normal vector, the shape of the measurement object by obtaining inclination information for each point of the measurement object.
In the invention according to claim 12 of the present application, the filter unit is disposed on an irradiation optical axis of the illumination light in the vicinity of a position determined by a focal length of the lens unit.
In the invention according to claim 13 of the present application, the filter unit includes filter regions, each different from each other, around the irradiation optical axis so that the plurality of solid angle regions are provided around the irradiation optical axis of the illumination light.
In the invention according to claim 14 of the present application, the filter unit is configured to allow the light wavelength ranges, as the optical attributes, to be different from each other.
In the invention according to claim 15 of the present application, the processing device includes a memory unit configured to store correspondence relations between the optical attributes and the normal vector, and the arithmetic unit is configured to obtain the normal vector on the basis of the correspondence relations.
In the invention according to claim 16 of the present application, the processing device normalizes the normal vector.
The invention according to claim 17 of the present application includes a rotary table configured to be capable of rotating the measurement object around an observation optical axis.
In the invention according to claim 18 of the present application, the arithmetic unit includes a consistency determination unit configured to compare the normal vector at each point of the measurement object stored in advance with the normal vector at each point obtained from the measurement object newly imaged, and to extract portions, each different from each other.
According to the present invention, it is possible to quickly reconstruct the information for each point of the measurement object in the captured image of the measurement object.
A first embodiment of the present invention will be described below using
As illustrated in
Each component will be described below in detail.
As illustrated in
The light source unit 112 may have one or more arranged chip-type LEDs, an organic EL panel, or a light guide plate lit by a sidelight. The light source unit 112 is movable along an irradiation optical axis L1.
As illustrated in
As illustrated in
As illustrated in
As described above, the light source unit 112, the filter unit 114, and the lens unit 116 can be moved and adjusted and the filter regions of the filter unit 114 can be changed, so that it is possible to form the irradiation solid angle IS of any desired shape with respect to the measurement object W, while arbitrarily changing the light wavelength ranges. Furthermore, since the filter unit 114 is disposed in the vicinity of the position determined by the focal length f of the lens unit 116, the irradiation light can be applied under the same conditions to every position through the entire field of view of the measurement object W to be imaged by the imaging device CM. Here,
As illustrated in
As illustrated in
The image retention unit 122 is a circuit inside an image capture IMC, and is capable of retaining the images from the imaging device CM in frame units. In the present embodiment, the image retention unit 122 can retain images of the respective light wavelength regions R, G, and B.
The arithmetic unit 124 calculates (obtains) a normal vector Vn at each point of the measurement object W corresponding to each pixel from the inclusion relation between a plurality of solid angle regions RS1, RS2, and RS3, constituting the object light from the measurement object W, and the predetermined observation solid angle DS. The principle thereof will be described using
First, in a case in which there is no inclination in the measurement object W, as illustrated in
On the other hand, in a case in which there is an inclination (angle ϕ) in the measurement object W, as illustrated in
In other words, the arithmetic unit 124 can calculate the normal vector Vn on the basis of the correspondence relation between the optical attributes (in the present embodiment, each of the light wavelength regions R, G, and B) and the normal vector Vn.
Note that the normal vector Vn is represented as (Vnx, Vny, Vnz), and is normalized by the arithmetic unit 124. That is, the relation between the values Vnx, Vny, and Vnz is as follows.
Vnx*Vnx+Vny*Vny+Vnz*Vnz=1 (1)
In the present embodiment, the correspondence relation between the light wavelength ranges R, G, and B and the normal vector Vn is also obtained by the arithmetic unit 124. The correspondence relation can be obtained using a correspondence table and complementary functions fx and fy. In the present embodiment, the complementary functions fx and fy are defined to obtain the normal vector Vn between discrete values of the correspondence table.
The memory unit 126 can store various initial values, various programs, various tables, various functions, and various types of data. For example, the memory unit 126 stores the correspondence relation between the light wavelength ranges R, G, and B and the normal vector Vn of the measurement object W. In the present embodiment, the correspondence relation between the light wavelength regions R, G, and B and the normal vector Vn is configured as illustrated in
The shape reconstruction unit 128 calculates inclination information for each point of the measurement object W from the normal vector Vn obtained for each pixel, and reconstructs the shape of the measurement object W. Specifically, the shape reconstruction unit 128 reconstructs the shape of the measurement object W by converting the normal vector Vn into the inclination information of each pixel and connecting the inclination information at pixel intervals. The inclination information and the shape information are output to the display device DD and stored in the memory unit 126.
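As a concrete illustration of this pixel-by-pixel integration, a minimal Python sketch is given below. It is not part of the claimed device: the function name, the pixel pitch parameter, and the simple cumulative summation order (first down the left column, then across each row) are all assumptions made for illustration. It relies only on the standard relation that a unit surface normal (nx, ny, nz) corresponds to slopes -nx/nz and -ny/nz of the surface height.

```python
import math

def reconstruct_heights(normals, pitch=1.0):
    """Integrate per-pixel slopes into a relative height map (sketch).

    normals: 2-D grid of unit normal vectors (nx, ny, nz), one per pixel.
    pitch:   assumed physical distance between adjacent pixels.
    For a surface z = f(x, y), the slope along x is -nx/nz and along y is -ny/nz.
    """
    rows = len(normals)
    cols = len(normals[0])
    z = [[0.0] * cols for _ in range(rows)]
    # Accumulate down the first column, then across each row.
    for y in range(1, rows):
        nx, ny, nz = normals[y][0]
        z[y][0] = z[y - 1][0] + (-ny / nz) * pitch
    for y in range(rows):
        for x in range(1, cols):
            nx, ny, nz = normals[y][x]
            z[y][x] = z[y][x - 1] + (-nx / nz) * pitch
    return z
```

For a flat, non-inclined object (all normals (0, 0, 1)) the sketch returns a zero height map; for a uniform 45-degree tilt it returns heights increasing by one pitch per pixel.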
Next, a procedure for reconstructing the shape of the measurement object W by the image measurement device 100 will be described below using
First, a preliminary step (
Here, the preliminary step will be described in detail using
The preliminary step is a step of calculating in advance the correspondence relations between the light wavelength ranges R, G, and B and the normal vector Vn to reconstruct the shape of the measurement object W. As illustrated in
First, the preliminary illumination step (
Next, the preliminary imaging step (
Next, the preliminary correspondence relation generation step (
Specific procedures will be described below using
First, the range setting step (
9, a range in which the direction of the normal vector Vn can be determined is calculated from a captured image JG_IMG of the reference sphere. For example, a pixel region whose luminance exceeds the noise level is extracted from the image JG_IMG of the reference sphere, or a pixel region is extracted from the image JG_IMG by differential processing between images captured with the illumination device 110 turned on and off, to obtain a range L over which the object light is reflected from the reference sphere. The maximum surface inclination angle θ on the reference sphere (of radius r) can then be obtained from the range L as follows.
θ=asin((L/2)/r) (2)
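The range-setting computation can be sketched as follows, assuming the bright pixel region has already been extracted by one of the two methods above; the helper name and its arguments are hypothetical. The geometric relation assumed here is sin θ = (L/2)/r, since a sphere point horizontally offset by d from the apex has a surface tilt whose sine is d/r.

```python
import math

def max_inclination_angle(bright_pixel_xs, pixel_pitch, radius):
    """Estimate the maximum measurable surface inclination (radians) from
    the lit region of a reference-sphere image (hypothetical helper).

    bright_pixel_xs: x-coordinates (pixels) of pixels above the noise level.
    pixel_pitch:     assumed physical size of one pixel at the sphere.
    radius:          radius r of the reference sphere, same units.
    """
    span = (max(bright_pixel_xs) - min(bright_pixel_xs)) * pixel_pitch  # range L
    # A sphere point offset d from the apex has surface tilt asin(d / r).
    return math.asin((span / 2.0) / radius)
```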
Next, the correspondence table generation step (
Vx=(X−Cx)*Px (3)
Vy=(Y−Cy)*Py (4)
Vz=sqrt(r*r−Vx*Vx−Vy*Vy) (5)
By normalizing these, the normal vector Vn is obtained as follows.
Vnx=Vx/r (6)
Vny=Vy/r (7)
Vnz=sqrt(1−Vnx*Vnx−Vny*Vny) (8)
Therefore, the correspondence table illustrated in
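The table construction of equations (3) to (8) can be sketched in Python as follows. This is an illustrative sketch only: the function names, the apex-pixel arguments (Cx, Cy), and the in-sphere test are assumptions, not the claimed implementation.

```python
import math

def sphere_normal(X, Y, Cx, Cy, Px, Py, r):
    """Normal vector at sphere-image pixel (X, Y), per equations (3)-(8).
    (Cx, Cy): pixel of the sphere apex; (Px, Py): pixel-to-length scales."""
    Vx = (X - Cx) * Px                              # equation (3)
    Vy = (Y - Cy) * Py                              # equation (4)
    Vnx = Vx / r                                    # equation (6)
    Vny = Vy / r                                    # equation (7)
    Vnz = math.sqrt(1.0 - Vnx * Vnx - Vny * Vny)    # equation (8)
    return (Vnx, Vny, Vnz)

def build_correspondence_table(image, Cx, Cy, Px, Py, r):
    """Pair the (R, G, B) luminances of each sphere pixel with the normal
    computed from the sphere geometry. image[y][x] is an (R, G, B) triple."""
    table = []
    for Y, row in enumerate(image):
        for X, rgb in enumerate(row):
            Vx = (X - Cx) * Px
            Vy = (Y - Cy) * Py
            if Vx * Vx + Vy * Vy < r * r:  # keep pixels inside the lit sphere
                table.append((rgb, sphere_normal(X, Y, Cx, Cy, Px, Py, r)))
    return table
```

At the apex pixel the computed normal is (0, 0, 1), i.e. parallel to the observation optical axis, which matches the non-inclined case described earlier.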
Next, the complementary function calculation step (
Rn=Rt/sqrt(Rt*Rt+Gt*Gt+Bt*Bt) (9)
Gn=Gt/sqrt(Rt*Rt+Gt*Gt+Bt*Bt) (10)
Bn=sqrt(1−(Rt*Rt)/(Rt*Rt+Gt*Gt+Bt*Bt)−(Gt*Gt)/(Rt*Rt+Gt*Gt+Bt*Bt)) (11)
The Z component Vnz of the normal vector Vn is then assumed to be only positive. Under these conditions, the complementary function fx (or fy) with the luminance rates Rn and Gn as variables is obtained so that the X component Vtnx (for fy, the Y component Vtny) of the normal vector Vn in the correspondence table is obtained. The complementary functions fx and fy can be obtained, for example, by using spline interpolation for fitting freeform surfaces. Note that, to obtain the complementary functions fx and fy, N (N≥4) correspondence relations are used. The obtained complementary functions fx and fy are stored in the memory unit 126.
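Equations (9) to (11) amount to dividing each table luminance by the common magnitude sqrt(Rt*Rt+Gt*Gt+Bt*Bt), so that the squared luminance rates sum to one (Bn in equation (11) equals Bt divided by the same magnitude). A minimal sketch, with a hypothetical function name:

```python
import math

def normalize_luminance(Rt, Gt, Bt):
    """Luminance rates per equations (9)-(11): each luminance divided by the
    common magnitude, so that Rn**2 + Gn**2 + Bn**2 == 1."""
    mag = math.sqrt(Rt * Rt + Gt * Gt + Bt * Bt)
    return Rt / mag, Gt / mag, Bt / mag
```

The complementary functions fx and fy then map a pair of rates (Rn, Gn) to Vnx or Vny; any two-variable scattered-data interpolator (such as the spline fitting mentioned above) could serve in this role.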
This completes the preliminary correspondence relation generation step, and the preliminary step is also completed.
Next, returning to
Next, an imaging step (
Next, a calculation step (
Specifically, the correspondence table is read out of the memory unit 126. In a case in which the luminances Rc, Gc, and Bc of the identified light wavelength ranges R, G, and B coincide with the luminances Rt, Gt, and Bt of the light wavelength ranges R, G, and B of the correspondence table, the corresponding normal vector Vn, as is, becomes the normal vector to be obtained. In a case in which the luminances Rc, Gc, and Bc of the identified light wavelength ranges R, G, and B do not coincide with the luminances Rt, Gt, and Bt of the light wavelength ranges R, G, and B of the correspondence table, the luminance rates Rn and Gn are obtained by normalizing the luminances Rc, Gc, and Bc of the identified light wavelength ranges R, G, and B. Then, the complementary functions fx and fy are read out of the memory unit 126, and the corresponding normal vector Vn is calculated.
Note that the normal vector Vn may also be calculated without using the correspondence table: the luminance rates Rn and Gn are obtained by directly normalizing the luminances Rc, Gc, and Bc of the identified light wavelength regions R, G, and B, and then the complementary functions fx and fy are read out of the memory unit 126 to calculate the corresponding normal vector Vn.
Alternatively, even in a case in which the luminances Rc, Gc, and Bc of the identified light wavelength ranges R, G, and B do not coincide with the luminance Rt, Gt, and Bt of the light wavelength ranges R, G, and B of the correspondence table, the corresponding normal vector Vn may be calculated approximately using multiple correspondence relations in the correspondence table without using the complementary functions fx and fy. This will be described below.
For example, first, the sum of squared luminance differences SUM between the luminances Rt, Gt, and Bt and the luminances Rc, Gc, and Bc is obtained for M sets in the correspondence table whose values can be determined to be close to the luminances Rc, Gc, and Bc of the identified light wavelength ranges R, G, and B (M≥N≥4, where M may be the total number of sets in the correspondence table).
SUM=(Rc−Rt)*(Rc−Rt)+(Gc−Gt)*(Gc−Gt)+(Bc−Bt)*(Bc−Bt) (12)
Next, in the order in which the sum of squares of luminance difference SUM is closest to zero, N (N sets) of luminances Rt, Gt, and Bt are selected. Then, N normal vectors Vn corresponding to these are obtained from the correspondence table.
Then, by averaging the obtained N normal vectors Vn, the normal vector for the luminances Rc, Gc, and Bc of the identified light wavelength regions R, G, and B may be obtained.
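The approximation of equation (12) followed by nearest-N averaging can be sketched as follows; the function signature, the default N, and the optional candidate limit M are illustrative assumptions, and the averaged vector is re-normalized so that it satisfies equation (1).

```python
import math

def lookup_normal(Rc, Gc, Bc, table, N=4, M=None):
    """Approximate the normal for measured luminances (Rc, Gc, Bc): compute
    the sum of squared luminance differences (equation (12)) against the
    correspondence table, keep the N closest entries, and average them.

    table: list of ((Rt, Gt, Bt), (Vnx, Vny, Vnz)) pairs.
    """
    candidates = table if M is None else table[:M]

    def sum_sq(entry):
        (Rt, Gt, Bt), _ = entry
        return (Rc - Rt) ** 2 + (Gc - Gt) ** 2 + (Bc - Bt) ** 2

    nearest = sorted(candidates, key=sum_sq)[:N]
    ax = sum(v[0] for _, v in nearest) / len(nearest)
    ay = sum(v[1] for _, v in nearest) / len(nearest)
    az = sum(v[2] for _, v in nearest) / len(nearest)
    mag = math.sqrt(ax * ax + ay * ay + az * az)  # re-normalize the average
    return (ax / mag, ay / mag, az / mag)
```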
Then, a shape reconstruction step (
In this way, in the present embodiment, the illumination light having the specific irradiation solid angle IS including the plurality (three) of solid angle regions IS1, IS2, and IS3 with light wavelength ranges R, G, and B, each different from each other, is applied to the measurement object W. Then, on the basis of the light wavelength regions R, G, and B identified at each pixel of the image, the normal vector Vn at each point of the measurement object W corresponding to each pixel is obtained from the inclusion relation between the plurality of solid angle regions RS1, RS2, and RS3 constituting the object light and the predetermined observation solid angle DS. Therefore, it is possible to detect each of the wavelength regions R, G, and B with appropriate luminance at each pixel, and to stably obtain the normal vector Vn with high accuracy. At the same time, since the shape of the measurement object W is reconstructed from the normal vectors Vn, the shape can be quickly reconstructed with high accuracy.
In addition, in the present embodiment, the filter unit 114 is disposed in the vicinity of the position determined by the focal length f of the lens unit 116 on the irradiation optical axis L1, and the irradiation solid angle IS is made uniform at each point of the measurement object W. Therefore, homogeneous information can be taken from every point of the measurement object W into the image to be captured. In other words, information on the surface of the measurement object W can be equally quantified, regardless of location, to reconstruct and evaluate the shape. Not limited to this, the filter unit does not have to be disposed in the vicinity of the position determined by the focal length f of the lens unit on the irradiation optical axis L1. This is because, depending on the measurement object W, it may be sufficient to obtain highly accurate information only for each point of the measurement object W in the extreme vicinity of the irradiation optical axis L1.
In the present embodiment, the filter unit 114 includes the filter regions CF1, CF2, and CF3, each different from each other, around the irradiation optical axis L1 so that the plurality of solid angle regions IS1, IS2, and IS3 are provided around the irradiation optical axis L1 of the illumination light. Therefore, when there are a plurality of normal vectors Vn that have the same inclination angle with the irradiation optical axis L1 as the rotation axis, the plurality of normal vectors Vn can be obtained distinctly. In other words, the inclination of the surface of the measurement object (the direction of the inclination angle with the irradiation optical axis L1 as the rotation axis) can be faithfully reproduced from the normal vector Vn.
Specifically, the filter unit 114 illustrated in
Alternatively, the filter unit 114 can be configured as illustrated in
As a matter of course, the filter unit 114 may be configured as illustrated in
In the present embodiment, the filter unit 114 is also used to allow the light wavelength regions R, G, and B, as optical attributes, to be different from each other. Therefore, when the normal vector Vn is not inclined (there is no inclination in the measurement object W), the light is white, and it is easy to intuitively and visually recognize that the measurement object W is not inclined. In addition, since the light is white when there is no inclination, the color of the measurement object W itself, which is facing forward, can be easily determined. At the same time, as the imaging device CM, an ordinary color CCD camera or color CMOS camera can be used as is. Therefore, identification of the optical attributes can be achieved easily and at low cost. Not limited to this, the light wavelength ranges do not have to be the three of R, G, and B, but may be at least two. The colors of the light wavelength ranges do not have to be the red wavelength range, the green wavelength range, and the blue wavelength range, but may be a combination of wavelength regions of different colors.
Note that the optical attributes include polarization states, luminance, or the like, other than the light wavelength regions R, G, and B. That is, for example, the optical attributes may be polarization states. In this case, for example, a polarizer or the like that changes the polarization states of light is used in the filter unit. The imaging device CM may then identify the optical attributes by using a corresponding polarizer.
Also, the present embodiment has the preliminary step before the illumination step. In the preliminary step, the preliminary illumination step and the preliminary imaging step are performed while the reference sphere is used as a specific jig instead of the measurement object W. In addition to these, the preliminary correspondence relation generation step to obtain the correspondence relations between the light wavelength ranges R, G, and B and the normal vector Vn is performed. In other words, since the correspondence relations between the light wavelength regions R, G, and B and the normal vector Vn are obtained in advance, it is possible to image the measurement object W and to measure and reconstruct its shape quickly and stably. At the same time, when determining the correspondence relations between the light wavelength regions R, G, and B and the normal vector Vn, arrangement and configuration in measurement of the measurement object W by the image measurement device 100 can be used as is, except for replacing the measurement object W with the specific jig. Therefore, the steps from the preliminary step to the shape reconstruction step can be performed efficiently and quickly. Furthermore, since the specific jig is the reference sphere, it is sufficient to perform the preliminary imaging step only once, and the correspondence relations between the light wavelength ranges R, G, and B and the normal vector Vn can be easily and quickly obtained.
Not limited to this, the preliminary step may be omitted. In that case, in the calculation step, the correspondence relations between the light wavelength regions R, G, and B and the normal vector Vn may first be obtained, and then the normal vector may be obtained. Alternatively, once the light wavelength regions R, G, and B are identified, an operator can directly specify the normal vector from the most dominant light wavelength region using an input device not illustrated in the drawing, or, for example, the operator can specify the normal vector using a simulation such as a ray tracing method. Alternatively, the preliminary step may be performed in a different configuration or by a different method. For example, an apparatus different from the image measurement device 100 may be used, or a different illumination device and imaging device CM may be used in the image measurement device 100.
Alternatively, a reference plane, rather than the reference sphere, may be used as the specific jig. (Note that the reference plane used herein is a plane having a surface whose undulation or roughness is negligible with respect to the inclination of a normal vector to be measured. The measurement object W may be exactly what is about to be measured, or may be another object of the same shape, or of a completely different shape).
For example, when the reference plane is used as the specific jig, the following steps are performed.
First, the illumination device 110 applies light to the reference plane and images of the reference plane are captured. At this time, the reference plane is imaged multiple times (N≥4) at different inclination angles with respect to the observation optical axis L2. Then, normal vectors Vn corresponding to the inclination angles are obtained. Then, the luminances Rc, Gc, and Bc of the light wavelength regions R, G, and B corresponding to the respective normal vectors Vn are calculated. The luminances Rc, Gc, and Bc are obtained by averaging over only the portions of each captured image showing the reference plane. Thereby, a correspondence table that represents the correspondence relations between the light wavelength regions R, G, and B and the normal vector, as illustrated in
As a matter of course, the measurement object W itself may be used as is. In that case, the following steps are performed.
First, the illumination device 110 applies light to the measurement object W to determine a temporary reference plane. For example, this temporary reference plane can be determined by calculating the amount of change in luminance Rc, Gc, and
Bc of light wavelength ranges R, G, and B in portions of the measurement object W in the images and finding an area with the least amount of change. Once this temporary reference plane is determined, the remaining steps are identical to those in the case of using the reference plane described above. Therefore, further explanation is omitted.
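The search for the area with the least amount of luminance change can be sketched as a sliding-window minimization; the window size and the spread-based score below are assumptions made for illustration, not part of the described device.

```python
def find_temporary_reference_plane(image, win=2):
    """Locate the window with the least luminance variation, used as a
    temporary reference plane (sketch; `win` is an assumed window size).
    image[y][x] is an (R, G, B) triple; returns the window's top-left (x, y)."""
    rows, cols = len(image), len(image[0])
    best, best_score = None, None
    for y in range(rows - win + 1):
        for x in range(cols - win + 1):
            pix = [image[y + j][x + i] for j in range(win) for i in range(win)]
            score = 0.0
            for c in range(3):  # total spread of R, G, and B in the window
                vals = [p[c] for p in pix]
                score += max(vals) - min(vals)
            if best_score is None or score < best_score:
                best, best_score = (x, y), score
    return best
```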
In the present embodiment, the processing device 120 includes the memory unit 126 that stores the correspondence relations between the light wavelength ranges R, G, and B and the normal vector Vn, and the arithmetic unit 124 calculates the normal vector Vn on the basis of the correspondence relations. Therefore, even when the correspondence relations are complex, the correspondence relations can be read out and used appropriately in the arithmetic unit 124. In addition, the correspondence relations are configured as the correspondence table. Therefore, the amount of calculation in the arithmetic unit 124 can be reduced, and the normal vector Vn can be quickly obtained. At the same time, the correspondence relations are also configured as the complementary functions fx and fy. Therefore, by using the complementary functions fx and fy, the normal vector Vn can be quickly obtained for the luminances Rc, Gc, and Bc of the light wavelength ranges R, G, and B the correspondence of which is not in the correspondence table.
Not limited to this, it is not necessary for the processing device to have a memory unit. In such a case, the above-described correspondence relations may be read directly from the outside into the arithmetic unit. Alternatively, it may be configured such that the correspondence relations are obtained each time the normal vector Vn is obtained. Alternatively, only the complementary functions fx and fy may be configured without configuring the correspondence table. Alternatively, the correspondence table may be configured and the complementary functions fx and fy may not be configured. Alternatively, neither the correspondence table nor the complementary functions fx and fy may be configured. In this case, the operator may directly determine the normal vector for the luminances Rc, Gc, and Bc of the obtained light wavelength regions R, G, and B.
In the present embodiment, the normal vector Vn is normalized. Therefore, it is possible to reduce the number of parameters for obtaining the correspondence table and the complementary functions fx and fy that define the correspondence relations between the light wavelength regions R, G, and B and the normal vector Vn. Therefore, the storage capacity required for the correspondence table can be reduced, and the amount of calculation for the complementary functions fx and fy can be reduced. Not limited to this, un-normalized normal vectors V may also be used.
In other words, in the present embodiment, it is possible to quickly reconstruct the information for each point of the measurement object W in the captured image of the measurement object W.
In the first embodiment, the illumination device 110 includes the light source unit 112, the filter unit 114, the lens unit 116, and the half mirror 118, but the present invention is not limited to this. For example, the illumination device may be configured as in a second embodiment illustrated in
In the present embodiment, the second filter unit 213 is disposed, on the irradiation optical axis L1, between the light source unit 212 and the filter unit 214. The second filter unit 213, as with the filter unit 214, has an aperture for blocking illumination light and filter regions for changing optical attributes. The second filter unit 213 is disposed in the vicinity of a position determined by a focal point such that its image is formed on the surface of the measurement object W. Therefore, the second filter unit 213 can homogenize the illumination light, change complex optical attributes, and the like, and can also prevent stray light.
In the above-described embodiment, the image measurement device receives the reflected light of the measurement object W as the object light to measure the measurement object W, but the present invention is not limited to this. For example, the image measurement device may be configured as in a third embodiment illustrated in
Note that, in the above-described embodiment, the irradiation optical axis L1 and the observation optical axis L2 are coaxial, but the present invention is not limited to this. For example, the image measurement device may be configured as in a fourth embodiment illustrated in
In the present embodiment, a rotary table RT that can rotate the measurement object W around the observation optical axis L2 is provided. The processing device 420 includes an image retention unit 422, an arithmetic unit 424, a control unit 425, a memory unit 426, and a shape reconstruction unit 428. In the present embodiment, in the processing device 420, only the control unit 425 differs from the above-described embodiment, and thus, only the control unit 425 will be described. The control unit 425 outputs, to the rotary table RT, a signal for controlling the rotary drive of the rotary table RT. Note that a rotation angle is designated by a not-illustrated input device or a program stored in the memory unit 426. The control unit 425 also outputs a rotation angle signal of the rotary table RT to the arithmetic unit 424. The arithmetic unit 424 establishes correspondence between the rotation angle signal of the rotary table RT and an image obtained at that time, and obtains a normal vector at each point of the measurement object corresponding to each pixel from the inclusion relation between the plurality of solid angle regions IS1, IS2, and IS3 and the predetermined observation solid angle DS.
Next, a procedure for reconstructing the shape of the measurement object W by an image measurement device 400 will be described below using
First, the preliminary step (
Next, a rotation step (
NN = 360/θ1 (13)
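Equation (13) simply fixes the number of captures NN for one full revolution of the rotary table RT at the rotation increment θ1. A hedged one-line sketch (the function name is hypothetical):

```python
def num_rotations(theta1_deg: float) -> int:
    """NN = 360 / theta1 per equation (13).
    theta1 is assumed to divide 360 evenly, so NN is an integer."""
    nn = 360 / theta1_deg
    if nn != int(nn):
        raise ValueError("theta1 must divide 360 degrees evenly")
    return int(nn)
```

For example, a 1 deg increment gives 360 captures, and a 15 deg increment gives 24.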
Next, the calculation step (
In this way, according to the present embodiment, even when there is a large inclination on the surface of the measurement object W, the inclination can be isotropically measured and reconstructed without depending on the direction of measurement.
Note that the rotary table RT is also effective with coaxial incident illumination, in which the irradiation optical axis L1 and the observation optical axis L2 coincide with each other. For example, in a case in which the filter regions of the filter unit are not rotationally symmetrical about the irradiation optical axis L1, the measurement accuracy of the normal vector Vn may depend on direction. Therefore, by using such a rotary table RT in an image measurement device as in the first embodiment, it is possible to reduce this directional dependence of the measurement accuracy of the normal vector Vn.
In the image measurement device of the above-described embodiment, the image of the measurement object W is processed to measure and reconstruct the shape of the measurement object, but the present invention is not limited to this. For example, the image measurement device may be configured as in a fifth embodiment illustrated in
The arithmetic unit 524 first calculates all normal vectors for the measurement object W and establishes correspondence between each normal vector and each pixel in two dimensions (the XY plane); the result is called a normal vector group. Next, this normal vector group is rotated, for example, 360 times in 1 deg increments, and each rotated group is stored in the memory unit 526. In other words, 360 normal vector groups are stored in the memory unit 526 (the normal vectors Vn are normalized in advance). These are the normal vectors Vn at each point of the measurement object W that are stored in advance.
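The precomputation above can be sketched as follows, representing a normal vector group as a mapping from pixel coordinates to unit normals; the data structure, function names, and integer rounding of rotated pixel positions are illustrative assumptions (a real implementation would resample onto the pixel grid).

```python
import math

def rotate_group(group, angle_deg):
    """Rotate a normal vector group by angle_deg about the observation
    axis (Z). Both the pixel positions (x, y) and the in-plane vector
    components are rotated; rotated positions are rounded to ints as a
    simplification."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    out = {}
    for (x, y), (vx, vy, vz) in group.items():
        px, py = c * x - s * y, s * x + c * y
        out[(round(px), round(py))] = (c * vx - s * vy, s * vx + c * vy, vz)
    return out

def precompute_groups(reference_group):
    """Build the 360 rotated normal vector groups (1 deg increments)
    that the embodiment stores in the memory unit 526."""
    return [rotate_group(reference_group, a) for a in range(360)]
```

Storing all 360 rotated copies trades memory for speed: at match time no rotation needs to be performed, only comparison.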
When an image of the measurement object W is newly captured, the arithmetic unit 524 obtains a normal vector Vn at each point of the measurement object W and establishes correspondence between each normal vector Vn and each pixel in two dimensions (the XY plane) to constitute a normal vector group. The arithmetic unit 524 then calculates the sum of squares of the differences between this normal vector group and each of the 360 normal vector groups stored in advance in the memory unit 526 (pattern matching), and reads out the stored group with the smallest value (the best pattern match) into the consistency determination unit 524A. The consistency determination unit 524A compares the best-matching normal vector group read out of the memory unit 526 with the newly calculated normal vector group, obtains the portions where the normal vectors Vn differ from each other, and calculates the difference of the normal vectors at those portions. In a case in which the difference is equal to or greater than a certain threshold value, the consistency determination unit 524A adds information indicating that the position is a defect (referred to as defect information). The consistency determination unit 524A then outputs the defect information and the newly calculated normal vector group to the shape reconstruction unit 528.
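The matching and thresholding steps can be sketched as below, again treating a normal vector group as a dict from pixel coordinates to normals; the function names and the choice of Euclidean vector distance for the per-pixel difference are assumptions for illustration.

```python
import math

def ssd(group_a, group_b):
    """Sum of squared differences between two normal vector groups,
    evaluated over the pixels of group_a."""
    total = 0.0
    for key, va in group_a.items():
        vb = group_b.get(key, (0.0, 0.0, 0.0))
        total += sum((a - b) ** 2 for a, b in zip(va, vb))
    return total

def find_defects(new_group, stored_groups, threshold):
    """Pick the stored group with the smallest SSD (best pattern match),
    then flag pixels whose normals differ from it by at least
    `threshold` as defect positions."""
    best = min(stored_groups, key=lambda g: ssd(new_group, g))
    defects = []
    for key, vn in new_group.items():
        vb = best.get(key, (0.0, 0.0, 0.0))
        diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(vn, vb)))
        if diff >= threshold:
            defects.append(key)
    return best, defects
```

The returned defect positions correspond to the defect information output to the shape reconstruction unit 528, alongside the newly calculated normal vector group.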
On the basis of the output from the consistency determination unit 524A, the shape reconstruction unit 528 reconstructs the shape of the measurement object W together with the defect information. Alternatively, the shape reconstruction unit 528 reconstructs only the portion indicated by the defect information, together with the defect information.
In this way, in the present embodiment, the provision of the consistency determination unit 524A makes it possible to discriminate the portions that differ between measurement objects W and to easily detect defects.
This invention can be widely applied to a shape reconstruction method that reconstructs the shape of a measurement object by applying illumination light to the measurement object and processing an image, and an image measurement device using the shape reconstruction method.
100, 400, 500 . . . image measurement device
110, 210, 310, 410, 510 . . . illumination device
112, 212, 312 . . . light source unit
114, 214, 314 . . . filter unit
116, 216, 316 . . . lens unit
118, 218 . . . half mirror
120, 420, 520 . . . processing device
122, 422, 522 . . . image retention unit
124, 424, 524 . . . arithmetic unit
126, 426, 526 . . . memory unit
128, 428, 528 . . . shape reconstruction unit
213 . . . second filter unit
425 . . . control unit
524A . . . consistency determination unit
B, G, R . . . wavelength range
Bc, Bt, Gc, Gt, Rc, Rt . . . luminance
Bn, Gn, Rn . . . luminance rate
CF1, CF2, CF3, CF4, CF11, CF12, CF13, CF21, CF22, CF23 . . . filter region
CM . . . imaging device
Cx, Cy . . . center of sphere projection image
DD . . . display device
DS . . . observation solid angle
DS1, DS2, DS3, IS1, IS2, IS3, IS4, IS5, IS11, IS12, IS13, IS21, IS22, IS23, RS1, RS2, RS3 . . . solid angle region
f . . . focal length
fx, fy . . . complementary function
IMC . . . image capture
IMP . . . image processing device
IS, IS' . . . irradiation solid angle
JG . . . reference sphere
JG IMG . . . image of reference sphere
L . . . range
L1 . . . irradiation optical axis
L2 . . . observation optical axis
L3 . . . reflection optical axis
LS . . . conventional illumination
M, NN, N . . . number of times
P, P′ . . . position
r, R0 . . . radius
RS . . . reflected solid angle
RT . . . rotary table
V, Vn, Vnb, Vtn . . . normal vector
Vnx, Vtnx, Vx . . . X component
Vny, Vtny, Vy . . . Y component
Vnz, Vtnz, Vz . . . Z component
W . . . measurement object
θ, θ1, ϕ, ω . . . angle
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2019-217429 | Nov 2019 | JP | national |
| Filing Document | Filing Date | Country | Kind |
| --- | --- | --- | --- |
| PCT/JP2020/044058 | 11/26/2020 | WO | |