This application is based on Japanese Patent Application No. 2019-024369, the contents of which are incorporated herein by reference.
The present disclosure relates to a flaw inspection apparatus and a flaw inspection method.
An automatic inspection apparatus that inspects an inspection object by using deep learning in a neural network is known (for example, see Japanese Unexamined Patent Application, Publication No. 2003-76991).
According to an aspect of the present disclosure, there is provided a flaw inspection apparatus including: a deep learning unit to which an image obtained by photographing a surface of an inspection object is input, in which, on the basis of the input image, the deep learning unit judges absence or presence of a flaw on a surface of the inspection object and specifies a site judged as being the flaw; a dimension measuring unit that measures a dimension of the flaw on the basis of the image of the site specified by the deep learning unit; and a flaw classifying unit that classifies the flaw on the basis of the dimension of the flaw measured by the dimension measuring unit.
A flaw inspection apparatus 1 and a flaw inspection method according to one embodiment of the present disclosure will now be described with reference to the drawings.
As illustrated in
The image processing device 3 is equipped with a deep learning unit 31 to which the image P1 obtained by the camera 2 is input; a flaw dimension measuring unit (dimension measuring unit) 32 that calculates a dimension of a flaw from the image output from the deep learning unit 31; a flaw classifying unit 33 that classifies the flaw on the basis of whether or not the calculated dimension of the flaw exceeds a predetermined threshold; and a storage unit 34 that stores the flaw and the dimension in association with each other. The deep learning unit 31, the flaw dimension measuring unit 32, and the flaw classifying unit 33 are each constituted by a processor, and the storage unit 34 is constituted by a memory.
The deep learning unit 31 is trained in advance: images P1 of a large number of inspection objects O and information about the absence or presence of flaws in those images P1 are input to perform deep learning and build a learning model.
Since the information used to build the learning model only needs to indicate whether or not flaws are present, there is no need to measure the dimensions of the flaws, and the learning task is therefore simple.
When an image P1 obtained by the camera 2 is input to the learning model in the deep learning unit 31, absence or presence of flaws on the surface of the inspection object O in the image P1 is judged.
The deep learning unit 31 then outputs information about the probability of there being a flaw for each of the pixels in the rectangular regions A and B.
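As an illustrative sketch (not part of the embodiment), the per-pixel probability information output by the deep learning unit 31 can be turned into a binary image by a simple cutoff. The probability values, the 0.5 cutoff, and all names below are assumptions for illustration.

```python
# Sketch: convert a per-pixel flaw-probability map for one rectangular
# region into a binary image. The map and the 0.5 cutoff are illustrative
# assumptions, not values specified in the embodiment.

def binarize(prob_map, cutoff=0.5):
    """Return a binary image: 1 where the flaw probability reaches the cutoff."""
    return [[1 if p >= cutoff else 0 for p in row] for row in prob_map]

# Example: a 3x4 probability map for one rectangular region.
probs = [
    [0.1, 0.2, 0.1, 0.0],
    [0.1, 0.9, 0.8, 0.1],
    [0.0, 0.7, 0.2, 0.0],
]
mask = binarize(probs)
```

Pixels at or above the cutoff become white (1) and form the regions that are measured next.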
As illustrated in
For each of white pixel regions in the generated binary images P4 and P5, the flaw dimension measuring unit 32 calculates at least one of the length in the longitudinal direction, the length in a direction orthogonal to the longitudinal direction, the perimeter, and the area.
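One way to compute such dimensions is sketched below, under assumptions not fixed by the embodiment: white regions are found by 4-connected flood fill, the longitudinal and orthogonal lengths are taken from the bounding box, the perimeter is counted as white-to-background pixel edges, and the area is the pixel count.

```python
# Sketch of measuring each white region in a binary image. The metrics are
# pixel-count approximations (bounding-box lengths, 4-neighbour boundary
# perimeter, pixel-count area); the embodiment does not prescribe a
# particular algorithm, so these are illustrative choices.
from collections import deque

def region_metrics(binary):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                # Flood-fill one connected white region (4-connectivity).
                queue, pixels = deque([(sy, sx)]), []
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                height, width = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
                # Perimeter: count white-to-non-white edges (incl. image border).
                perim = 0
                for y, x in pixels:
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < h and 0 <= nx < w) or not binary[ny][nx]:
                            perim += 1
                regions.append({
                    "length": max(height, width),   # longitudinal direction
                    "width": min(height, width),    # orthogonal direction
                    "perimeter": perim,
                    "area": len(pixels),
                })
    return regions
```

Any subset of these four values can then be passed to the threshold comparison.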
The flaw classifying unit 33 compares the threshold and the dimension calculated in the flaw dimension measuring unit 32. The threshold is a value for judging, for example, whether repair is possible or whether shipment is possible, and can be set as desired.
In other words, flaws that have lengths, perimeters, or areas that are less than a predetermined threshold can be classified as repairable or acceptable for shipment, and flaws that have lengths, perimeters, or areas that are equal to or greater than the threshold can be classified as unrepairable or unacceptable for shipment.
The storage unit 34 stores the classified flaws in association with the dimensions measured by the flaw dimension measuring unit 32.
A flaw inspection method that uses the flaw inspection apparatus 1 of the present embodiment having the above-described features will now be described.
As illustrated in
When there are no sites judged as being flaws in step S2, the process is ended. When there are sites judged as being flaws, information regarding the probability of a flaw being present is output to the flaw dimension measuring unit 32 for each of the rectangular regions A and B containing these sites, and, for each flaw, at least one of the length, perimeter, and area is calculated in the flaw dimension measuring unit 32 (step S4).
Next, whether the calculated dimension is equal to or greater than a threshold is judged in the flaw classifying unit 33 (step S5), and the flaws are classified into two categories, X and Y, respectively corresponding to flaws that have a dimension less than the threshold and flaws that have a dimension equal to or greater than the threshold (steps S6 and S7). The classified flaws are stored in the storage unit 34 in association with the dimensions of the flaws measured in the flaw dimension measuring unit 32 (step S8).
When not all of the rectangular regions A and B are classified, the steps from step S4 are repeated for the next rectangular regions A and B (step S9).
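Steps S4 to S9 above can be sketched compactly as a loop over the rectangular regions; the measurement function, threshold value, and storage format below are stand-ins, since the embodiment does not prescribe them.

```python
# Compact sketch of steps S4 to S9: for every rectangular region judged to
# contain a flaw, measure a dimension (S4), classify it against the
# threshold (S5-S7), and store the flaw with its dimension (S8), repeating
# for all regions (S9). Names and values are illustrative assumptions.

def inspect_regions(regions, measure, threshold):
    storage = []  # plays the role of the storage unit 34
    for region in regions:            # step S9: repeat for each region
        dimension = measure(region)   # step S4: measure the flaw
        # Steps S5-S7: category X below the threshold, Y at or above it.
        category = "X" if dimension < threshold else "Y"
        storage.append({"category": category, "dimension": dimension})  # step S8
    return storage

# Example with a dummy measurement (area = number of flaw pixels).
records = inspect_regions(
    regions=[{"pixels": 3}, {"pixels": 12}],
    measure=lambda r: r["pixels"],
    threshold=10,
)
```

Storing the category together with the measured dimension is what supports the traceability discussed below.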
According to the flaw inspection apparatus 1 and the flaw inspection method of this embodiment, the absence or presence of flaws is judged by deep learning from an image P1 obtained by the camera 2. Thus, the absence or presence of flaws can be judged and the sites thereof can be specified without clearly defining what flaws are. In other words, with deep learning, whether an object in question is a flaw or attached matter such as dust can be easily learned, and the absence or presence of flaws in the input image P1 can be easily judged.
The sites judged as being flaws are subjected to image processing in the flaw dimension measuring unit 32 to measure at least one of the length, perimeter, and area for each flaw. Thus, the flaws can be easily classified in the flaw classifying unit 33.
In this case, a site judged as being a flaw by deep learning is measured by image processing to determine the dimension of the flaw. Thus, there is an advantage in that dimensions of flaws need not be used in the learning stage of deep learning, and learning can therefore be completed easily and in a short time. Another advantage is that when the measured dimension is less than the threshold and the flaw is classified, for example, as acceptable for shipment, the flaw and its dimension are stored in association with each other in the storage unit 34, and thus traceability after shipment can be improved. Yet another advantage is that the shipment standard can be adjusted by changing the threshold for the dimension of the flaw without having to perform the learning again.
In this embodiment, in the flaw dimension measuring unit 32, at least one of the length, perimeter, and area of a flaw is calculated, and the calculated dimension is compared with a threshold to classify the flaw into two categories, X and Y. Alternatively, all of the length, perimeter, and area of a flaw may be calculated, and the flaw may be classified into two categories, X and Y, on the basis of whether any one of these values is equal to or greater than the corresponding threshold. Depending on the types of dimensions that exceed the thresholds, the flaw may be classified into three or more categories.
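The variant with three or more categories can be sketched by recording which dimension types reach their thresholds; each distinct combination can then serve as one category. The threshold values and names below are arbitrary examples, not values from the embodiment.

```python
# Sketch of the variant in which the length, perimeter, and area are all
# computed and the flaw is categorized by which of them reach their
# thresholds. An empty result means no threshold is exceeded; each distinct
# combination can define its own category, giving three or more categories.
# The threshold values are arbitrary examples.

def exceeded_dimensions(dims, thresholds):
    """Return the tuple of dimension types at or above their thresholds."""
    return tuple(k for k in ("length", "perimeter", "area")
                 if dims[k] >= thresholds[k])

limits = {"length": 5.0, "perimeter": 12.0, "area": 8.0}
small = exceeded_dimensions({"length": 3.0, "perimeter": 9.0, "area": 6.0}, limits)
long_thin = exceeded_dimensions({"length": 7.0, "perimeter": 15.0, "area": 6.0}, limits)
```

Here `small` exceeds nothing and could be classified as acceptable, while `long_thin` exceeds the length and perimeter thresholds and would fall into a different category than a flaw that exceeds only the area threshold.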
In this embodiment, images P2 and P3 generated from the information indicating the probability of being a flaw output from the deep learning unit 31 are binarized, and the dimensions are measured from these binary images; alternatively, the image P1 obtained by the camera 2 may be directly subjected to image processing so as to extract the edge of a flaw and calculate at least one of the length, perimeter, and area of the flaw by using the extracted edge.
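This alternative edge-extraction route can be sketched with a simple gradient test on the grayscale camera image; the gradient operator, the threshold, and the example pixel values below are illustrative stand-ins for whatever edge-extraction processing is actually used.

```python
# Sketch of the alternative route: extract flaw-edge pixels directly from
# the camera image with a simple central-difference gradient test. The
# operator and threshold are illustrative assumptions, not the embodiment's
# prescribed processing.

def edge_pixels(gray, thresh=50):
    h, w = len(gray), len(gray[0])
    edges = []
    for y in range(h):
        for x in range(w):
            # Horizontal and vertical intensity differences (clamped at borders).
            gx = gray[y][min(x + 1, w - 1)] - gray[y][max(x - 1, 0)]
            gy = gray[min(y + 1, h - 1)][x] - gray[max(y - 1, 0)][x]
            if abs(gx) + abs(gy) >= thresh:
                edges.append((y, x))
    return edges

# Example: a dark flaw pixel on a bright background.
gray = [[200, 200, 200], [200, 40, 200], [200, 200, 200]]
edges = edge_pixels(gray)
```

The extracted edge pixels could then be traced to obtain a length, perimeter, or enclosed area, as described above.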
In this embodiment, the camera 2 captures a two-dimensional image P1; alternatively, the camera 2 may capture a two-dimensional image and a three-dimensional image. As illustrated in
In such a case, as illustrated in
In this manner, the depth can be used as a standard for classifying the flaw. In the flaw dimension measuring unit 32, both a two-dimensional image and a three-dimensional image may be used, and the flaw may be classified by using at least one of the length, perimeter, area, and depth of the flaw.
In this embodiment, a moving mechanism that moves the three-dimensional camera 22 may be provided. In this manner, the three-dimensional camera 22 can be moved by the moving mechanism on the basis of the position information of the flaw obtained from the two-dimensional image, and thus a three-dimensional camera 22 having a narrower field of view (for example, a camera with a higher resolution but with a narrower field of view) than the two-dimensional camera 21 can be used.
Number | Date | Country | Kind
---|---|---|---
2019-024369 | Feb 2019 | JP | national