The present disclosure relates to a defect analyzer, a method for analyzing a defect, and a program.
Techniques are used in which images are classified based on inter-image distance metrics that are learned using deep learning. For example, Patent Document 1 discloses a user interface through which interaction is performed according to a user preference by differentiating between users based on a distance metric obtained by learning from face images of the users.
Patent Document 1: WO2012/88627
However, in conventional image classification techniques, there is a problem in that it is difficult to classify defects that occur on a surface of an object. For example, the parts that constitute a human face are largely common to all faces, and thus it is relatively easy to improve accuracy in classifying face images. On the other hand, a wide variety of defects occur on the surface of an object, and in many cases it is difficult to classify the defects even by human judgment. Further, when only a classification result in which the defects are classified is presented, it is difficult to recognize a similarity relationship between the classifications.
In view of the above technical problem, an object of the present disclosure is to analyze a classification of a defect based on an image that is obtained by imaging the defect.
The present disclosure provides the following configurations.
[8] In the defect analyzer described in [6], when there is no classification for which the distance is less than or equal to the threshold, the classification estimation unit may be configured to estimate that the verification image matches a new classification different from the existing classifications.
In one aspect of the present disclosure, a classification of a defect can be analyzed based on an image that is obtained by imaging the defect.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the specification and the drawings, components having substantially the same functional configuration are denoted by the same numerals, and redundant description thereof will be omitted.
Defects may occur on a surface of an object due to various causes. In order to identify the causes of the defects, images that are obtained by imaging the defects are analyzed. For example, when a defect occurs in a manufacturing process of a product, a failure may potentially occur in the manufacturing process. In such a situation, when the defect is found in the product in an inspection process, the defect of the product is imaged, and the imaged defect serves as a clue for identifying the failure in the manufacturing process.
In a conventional image classification technique, each image is classified by assigning a single class to the image. However, there are a variety of defects that occur on the surface of an object, and in many cases it is difficult even for a person to classify these defects. In the conventional image classification technique, for example, accuracy may be reduced when classifying an intermediate defect that is similar to a plurality of classes, or a new defect that is different from all predefined classes. On the other hand, a subtle difference between defects may serve as important information for identifying what has caused the defects. In view of the above situation, by analyzing the relation between classifications of defects, useful information related to handling of the object can be obtained.
One embodiment of the present disclosure will be described using a defect analysis system that classifies an image (hereinafter also referred to as an "image of a defect") obtained by imaging each defect that occurs on a surface of an object, and that analyzes the relation between classifications of defects. An example of the object according to the present embodiment is an article having a mirror surface, such as an optical product. Another example of the object according to the present embodiment is a single crystal substrate such as a semiconductor wafer. The object according to the present embodiment is not limited to these examples, and a mirror-finished article, such as an aluminum substrate used in a photosensitive drum of a laser printer, may be adopted.
The defect in the present embodiment includes a linear or planar flaw that occurs on the surface of the object, or includes dirt or a foreign substance or the like that is attached to the surface of the object.
In the defect analysis system according to the present embodiment, a feature extraction model that extracts one or more image features from an input image is trained using learned data that is collected in advance. The learned data is data in which information (hereinafter may be also referred to as a “defect label”) indicating a classification of the defect is assigned to an image (hereinafter may be also referred to as a “learned image”) of the defect that is used for model training.
The feature extraction model in the present embodiment is trained with a metric learning approach. In metric learning, images are each arranged at one point in a high-dimensional embedding space such that a distance between similar images decreases.
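As a concrete illustration of this idea, the following is a minimal metric-learning sketch using a triplet loss, assuming PyTorch; the backbone, input size, batch size, and margin are illustrative placeholders rather than details taken from the present disclosure.

```python
import torch
import torch.nn as nn

# Stand-in for a CNN backbone: maps a 3x64x64 image to a 256-dimensional
# embedding (the architecture here is only a placeholder).
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))
loss_fn = nn.TripletMarginLoss(margin=1.0)

# anchor/positive share a defect label; negative carries a different one.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = loss_fn(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()  # pulls same-class embeddings together, pushes others apart
```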
The defect analysis system according to the present embodiment extracts image features of learned images by using the trained feature extraction model, and then determines a representative point for each classification of a defect that is indicated by the defect label. The defect analysis system outputs information indicating the relation between classifications, based on distances between representative points for the classifications. An example of the information indicating the relation between the classifications includes a heat map, a scatter plot, a dendrogram, or the like that expresses the relation between the classifications. The information indicating the relation between the classifications is not limited to these examples, and various representation forms that are easy to understand for a user of the defect analysis system can be used.
Also, the defect analysis system according to the present embodiment uses the trained feature extraction model to extract an image feature of a given image (hereinafter also referred to as a "verification image") of a defect whose classification is not specified. Then, the defect analysis system estimates the classification of the defect that is imaged in the verification image, based on distances from the representative points for the respective classifications. At this time, an image (hereinafter also referred to as a "similar image") similar to the verification image may be extracted from the learned images to which a defect label indicating the same classification as the estimated classification is assigned, and the similar image may be output together with an estimation result.
Further, when it is determined that the verification image does not match any classification, the defect analysis system according to the present embodiment creates a new classification (hereinafter may be also referred to as an “additional classification”). Also, when there are multiple classifications for which a short distance between representative points is obtained, the defect analysis system aggregates these classifications into the new classification (hereinafter may be also referred to as a “consolidated classification”). Then, the defect analysis system retrains the feature extraction model using learned data to which the new classification or the consolidated classification is assigned.
First, the overall configuration of the defect analysis system according to the present embodiment will be described with reference to the drawings.
As shown in the drawings, the defect analysis system 1 includes a defect analyzer 10, an image acquisition device 20, and a user terminal 30.
The defect analyzer 10 is an information processing device, such as a personal computer, a workstation, or a server, that analyzes, in response to a request from the user terminal 30, an image of a defect that is obtained by imaging the defect that occurs on the surface of the object. The defect analyzer 10 receives the image of the defect to be analyzed, from the user terminal 30. The defect analyzer 10 analyzes the received image of the defect, and transmits an analysis result to the user terminal 30.
The image acquisition device 20 is an optical device that acquires the image of the defect by imaging the defect occurring on the surface of the object. The image acquisition device 20 may include a digital camera that captures a still image, or may include a video camera that captures a moving image. The image acquisition device 20 may be an information processing device such as a personal computer that is connected to various cameras, or may be a surface inspection device equipped with various cameras.
The user terminal 30 is an information processing terminal such as a personal computer, a tablet computer, or a smartphone that is operated by a user. The user terminal 30 acquires the image of the defect to be analyzed, from the image acquisition device 20, in response to a user operation, and then transmits the image of the defect to the defect analyzer 10. The user terminal 30 receives an analysis result from the defect analyzer 10, and outputs the analysis result to the user.
The entire configuration of the defect analysis system 1 described above is an example and is not limiting.
Hereinafter, a hardware configuration of the defect analysis system 1 according to the present embodiment will be described with reference to the drawings.
The defect analyzer 10, the image acquisition device 20, and the user terminal 30 according to the present embodiment are each implemented by, for example, a computer.
As illustrated in the drawings, the computer 500 includes a CPU 501, a ROM 502, a RAM 503, an HDD 504, an input device 505, a display device 506, a communication I/F 507, and an external I/F 508.
The CPU 501 is an arithmetic device that controls the entire computer 500 and implements its functions, by reading a program or data from a storage device such as the ROM 502 or the HDD 504 into the RAM 503, and by performing processing.
The ROM 502 is an example of a non-volatile semiconductor memory (storage device) that can retain programs and data even when power is turned off. The ROM 502 stores various programs, data, and the like that are necessary for the CPU 501 to execute the programs installed in the HDD 504. Specifically, the ROM 502 stores boot programs such as a BIOS (basic input/output system) and an EFI (extensible firmware interface) that are executed at start-up of the computer 500, as well as data relating to OS (operating system) settings, network settings, and the like.
The RAM 503 is an example of a volatile semiconductor memory (storage device) whose programs and data are erased when power is turned off. The RAM 503 includes, for example, a dynamic random access memory (DRAM), a static random access memory (SRAM), or the like. The RAM 503 provides a work area into which programs are loaded when the CPU 501 executes the various programs installed in the HDD 504.
The HDD 504 is an example of a non-volatile storage device that stores one or more programs and data. The programs and data stored in the HDD 504 include the OS that is basic software for controlling the entire computer 500, and also include applications or the like for providing various functions on the OS. Instead of the HDD 504, the computer 500 may use a drive (for example, a solid state drive (SSD)) that utilizes a flash memory as a storage medium.
The input device 505 may include a touch panel, operation keys, buttons, a keyboard, a mouse, a microphone, and the like that are used by the user to enter various signals.
The display device 506 may include a display, such as a liquid crystal display or an organic electro-luminescence (EL) display, that displays a screen, and may also include a speaker or the like that outputs sound data such as voice.
The communication I/F 507 is an interface that is connected to a communication network and is used for the computer 500 to perform data communications.
The external I/F 508 is an interface with an external device. The external device may include a drive device 510 or the like.
The drive device 510 is a device in which a recording medium 511 is set. The recording medium 511 in this description may include a medium that optically, electrically, or magnetically records information, such as a CD-ROM, a flexible disk, or a magneto-optical disk. The recording medium 511 may include a semiconductor medium or the like that electrically records information, such as a ROM or a flash memory. With this arrangement, the computer 500 can perform reading and/or writing with respect to the recording medium 511, by using the external I/F 508.
Various programs to be installed in the HDD 504 are installed, for example, by setting a distributed recording medium 511 in the drive device 510 that is connected to the external I/F 508, and by reading the various programs recorded in the recording medium 511 through the drive device 510. Alternatively, the various programs to be installed in the HDD 504 may be installed by downloading the programs through the communication I/F 507 via another network different from the communication network.
Hereinafter, the functional configuration of the defect analysis system according to the present embodiment will be described with reference to the drawings.
As shown in the drawings, the defect analyzer 10 includes an image receiver 101, a learned data storage 102, a model training unit 103, a model storage 104, a feature extraction unit 105, a representative-point determination unit 106, a relation visualization unit 107, a classification estimation unit 108, a similar-image extraction unit 109, and a classification aggregator 110.
The image receiver 101, the model training unit 103, the feature extraction unit 105, the representative-point determination unit 106, the relation visualization unit 107, the classification estimation unit 108, the similar-image extraction unit 109, and the classification aggregator 110 are implemented by processes that the program, which is read from the HDD 504, causes the CPU 501 to execute.
The image receiver 101 receives one or more images of defects from the user terminal 30. The images of the defects received by the image receiver 101 include one or more learned images to which respective defect labels are assigned, and one or more verification images to which no defect label is assigned.
The learned data storage 102 stores the one or more learned images received by the image receiver 101, as learned data. The learned data is data to which a given defect label is assigned. A predetermined number of pieces of learned data are sufficient for training the feature extraction model. The number of pieces required varies depending on the model type, but is, for example, about 100 pieces per classification.
The model training unit 103 trains the feature extraction model by using the learned data stored in the learned data storage 102. The feature extraction model is a machine-learned model that extracts, from an input image, an image feature such that the distance between similar images is small. The model training unit 103 may train the feature extraction model by applying an approach such as transfer learning or fine-tuning to a pretrained image classification model. An example of the feature extraction model is an image classification model using deep learning, such as AlexNet or ResNet. The image feature is defined, for example, by a 256-dimensional feature vector.
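As one possible realization of the model training unit 103, the sketch below applies transfer learning to a pretrained ResNet, assuming PyTorch/torchvision; the choice of ResNet-18 and the frozen-backbone strategy are assumptions, while the 256-dimensional feature vector follows the text above.

```python
import torch.nn as nn
from torchvision import models

# Start from a pretrained image classification model and replace its head
# with a 256-dimensional embedding layer.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 256)

# One common transfer-learning strategy: freeze the pretrained layers and
# fine-tune only the newly added head.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc")
```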
The model storage 104 stores the feature extraction model that is trained by the model training unit 103.
The feature extraction unit 105 extracts an image feature from the learned image or the verification image, by using the feature extraction model stored in the model storage 104. In the following description, the image feature extracted from the learned image may be referred to as a “learned feature.” The image feature extracted from the verification image may be referred to as a “verification feature.”
The representative-point determination unit 106 determines a representative point for each classification of the defect, based on the learned features extracted by the feature extraction unit 105. The representative point for a classification is defined by coordinates that represent, in the embedding space in which the learned features are arranged, the set of learned features corresponding to the learned images to which the same defect label is assigned. An example of the representative point is an average of the feature vectors. Another example of the representative point is a centroid of the feature vector set. The representative point is not limited to these examples, and any point may be determined as long as the point is representative of the learned feature set for each classification.
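For instance, with the average of feature vectors as the representative point, the determination could look like the following sketch (NumPy assumed; array shapes are illustrative):

```python
import numpy as np

def representative_points(features: np.ndarray, labels: np.ndarray) -> dict:
    """features: (N, 256) learned features; labels: (N,) defect label codes.
    Returns a mapping from each classification to its mean feature vector."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}
```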
The relation visualization unit 107 generates information indicating the relation between classifications, based on the distances between the representative points for the classifications, and transmits the information to the user terminal 30. An example of the distance is a cosine-similarity-based distance. Other examples include the Euclidean distance and the Mahalanobis distance. The distance is not limited to these metrics, and any distance metric applicable to the feature vectors can be used.
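The distance metrics named above could be computed as in the following sketch, assuming SciPy; the feature dimensionality and sample count are placeholders:

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean, mahalanobis

a, b = np.random.rand(256), np.random.rand(256)
d_cos = cosine(a, b)     # 1 - cosine similarity
d_euc = euclidean(a, b)  # Euclidean distance

# The Mahalanobis distance additionally needs the inverse covariance of the
# feature set (here estimated from 300 placeholder feature vectors).
cov_inv = np.linalg.inv(np.cov(np.random.rand(300, 256), rowvar=False))
d_mah = mahalanobis(a, b, cov_inv)
```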
A heat map is, for example, a matrix in which the distances between representative points are color-coded according to their magnitudes. A scatter plot is a graph in which the representative points for the classifications are arranged on a two-dimensional plane based on, for example, multi-dimensional scaling (MDS). A dendrogram is, for example, a tree diagram in which the representative points of the classifications are organized into a tree structure based on hierarchical clustering.
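A sketch of the three visualizations, assuming matplotlib, scikit-learn, and SciPy; `points` stands in for the representative points (15 classifications, matching the example later in this description):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist, squareform

points = np.random.rand(15, 256)   # placeholder representative points
dists = squareform(pdist(points))  # pairwise distance matrix between classes

plt.matshow(dists)                 # heat map of inter-class distances

# Scatter plot: project the representative points to 2-D via MDS.
xy = MDS(n_components=2, dissimilarity="precomputed").fit_transform(dists)
plt.figure(); plt.scatter(xy[:, 0], xy[:, 1])

# Dendrogram: hierarchical clustering of the representative points.
plt.figure(); dendrogram(linkage(pdist(points), method="average"))
plt.show()
```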
The classification estimation unit 108 estimates a classification of the defect that is imaged in the verification image, based on a distance between the verification feature extracted by the feature extraction unit 105 and the representative point for each classification. The classification estimation unit 108 estimates the classification of the defect imaged in the verification image, by comparing the distance between the verification feature and the representative point, with a predetermined threshold.
The classification estimation unit 108 assigns a defect label indicating the estimated classification, to the verification image, and may store the verification image in the learned data storage 102. With this arrangement, the verification image for which the classification of the defect is estimated is added as new learned data.
The similar-image extraction unit 109 extracts one or more similar images similar to the verification image, from learned images stored in the learned data storage 102, based on the distance between each of the learned features and the verification feature.
When there are a plurality of classifications for which a distance between representative points is less than or equal to the predetermined threshold, the classification aggregator 110 determines a consolidated classification that includes the plurality of classifications. When the classification aggregator 110 determines the consolidated classification, the classification aggregator 110 updates, in the learned data stored in the learned data storage 102, the defect labels representing the classifications included in the consolidated classification to a defect label representing the consolidated classification.
As shown in the drawings, the image acquisition device 20 includes an imaging unit 201 and an image storage 202.
The imaging unit 201 is implemented by a camera that is connected to the external I/F 508 described above.
The imaging unit 201 images a defect occurring on the surface of the object, and generates an image of the defect. The imaging unit 201 may capture a still image, or may capture a moving image and then extract an image in which a defect is reflected.
The image storage 202 stores one or more images of defects that are imaged by the imaging unit 201.
As shown in the drawings, the user terminal 30 includes an image acquisition unit 301, a classification assignment unit 302, an image transmitter 303, and a result display unit 304.
The image acquisition unit 301, the classification assignment unit 302, and the image transmitter 303 are implemented by processes that the program, which is read from the HDD 504, causes the CPU 501 to execute.
The image acquisition unit 301 acquires the image of each defect from the image acquisition device 20, in response to a user request.
The classification assignment unit 302 generates a learned image by assigning a defect label to an image of a defect in response to a user operation. Only images of defects whose classification the user can specify may be selected as the images to which defect labels are assigned.
The image transmitter 303 transmits the learned image or the verification image to the defect analyzer 10, in response to a user operation.
The result display unit 304 receives information indicating the relation between classifications, from the defect analyzer 10, and outputs the information to the display device 506 or the like.
Hereinafter, a process procedure of a defect analysis method executed by the defect analysis system 1 according to the present embodiment will be described with reference to the drawings.
In step S1, the imaging unit 201 included in the image acquisition device 20 images a defect on the object, and generates an image of the defect. Next, the imaging unit 201 stores the generated image of the defect in the image storage 202. The image storage 202 stores a plurality of images of defects.
In step S2, the image acquisition unit 301 included in the user terminal 30 transmits a request to acquire learned images to the image acquisition device 20, in response to a user operation. The image acquisition device 20 transmits a plurality of images of defects that are accumulated in the image storage 202, to the user terminal 30, in response to the request to acquire the learned images. The image acquisition unit 301 transmits the plurality of images of defects that are received from the image acquisition device 20, to the classification assignment unit 302.
In step S3, the classification assignment unit 302 included in the user terminal 30 receives the plurality of images of defects from the image acquisition unit 301. Next, the classification assignment unit 302 assigns a defect label to each of the plurality of images of defects, in accordance with a user operation, and thereby generates learned images. Subsequently, the classification assignment unit 302 transmits the generated learned images to the image transmitter 303.
The classification of each defect may be set arbitrarily by the user. The defect label is data indicating a code value that is assigned to the classification of each defect. The types of classifications of defects are not limited, and the user may set as many classifications as are considered necessary for the analysis.
The classifications according to the present embodiment include 0: scratch, 1: residue of marker, 2: vertical line, 3: surface stain, 4: blurry, 5: mottle, 6: short fiber, 7: long fiber, 8: sagging, 9: thin horizontal line, 10: thick horizontal line, 11: ellipse point, 12: thick black point, 13: small haze, and 14: small black point. The numerical value before ":" is a code value assigned to a given classification, and the character string after ":" is a classification name.
As the code value of each classification, any value may be adopted as long as the code value does not overlap with code values of other classifications. As a classification name, any name that is easily understood by the user may be adopted.
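For reference, the classification table above maps directly to a simple code-value-to-name dictionary; the structure below is a sketch, not a format prescribed by the disclosure:

```python
DEFECT_CLASSES = {
    0: "scratch", 1: "residue of marker", 2: "vertical line",
    3: "surface stain", 4: "blurry", 5: "mottle", 6: "short fiber",
    7: "long fiber", 8: "sagging", 9: "thin horizontal line",
    10: "thick horizontal line", 11: "ellipse point",
    12: "thick black point", 13: "small haze", 14: "small black point",
}
```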
From the acquired images of defects, the user may select only those images that clearly match a specific classification, and assign defect labels to the selected images. Any image of a defect to which the user does not assign a defect label may be used as a verification image in the image classification process.
Referring back to the sequence of the learning process, in step S4, the image transmitter 303 included in the user terminal 30 transmits the learned images, which are received from the classification assignment unit 302, to the defect analyzer 10.
In step S5, the image receiver 101 included in the defect analyzer 10 receives the plurality of learned images from the user terminal 30. Next, the image receiver 101 stores the received plurality of learned images as learned data in the learned data storage 102.
In step S6, the model training unit 103 included in the defect analyzer 10 retrieves learned data from the learned data storage 102. Next, the model training unit 103 trains the feature extraction model using the retrieved learned data. Subsequently, the model training unit 103 stores the trained feature extraction model in the model storage 104.
In step S7, the feature extraction unit 105 included in the defect analyzer 10 reads out the feature extraction model stored in the model storage 104. Next, for each classification of the defect indicated by a given defect label, the feature extraction unit 105 retrieves one or more learned images from the learned data storage 102.
Subsequently, the feature extraction unit 105 generates one or more learned features by inputting the one or more learned images to the feature extraction model for each classification of the given defect. The feature extraction unit 105 transmits the generated learned features to the representative-point determination unit 106, for each classification of the given defect.
In step S8, the representative-point determination unit 106 included in the defect analyzer 10 receives the learned features for each classification of the given defect, from the feature extraction unit 105. Next, the representative-point determination unit 106 arranges the learned features in the embedding space. Subsequently, the representative-point determination unit 106 determines a representative point for each classification of the defects, based on the embedding space in which the learned features are arranged. The representative-point determination unit 106 transmits information indicating determined representative points to the relation visualization unit 107.
In step S9, the relation visualization unit 107 included in the defect analyzer 10 receives the information indicating the representative points, from the representative-point determination unit 106. Next, the relation visualization unit 107 calculates distances that are each between representative points for the respective classifications. Subsequently, the relation visualization unit 107 generates information indicating the relation between classifications, based on the distances between the representative points. The relation visualization unit 107 transmits the information indicating the relation between the classifications, to the user terminal 30.
The relation visualization unit 107 can provide the relation between the classifications, with various approaches. A first example of the information indicating the relation between the classifications is a heat map expressing the relation between the classifications. A second example of the information indicating the relation between the classifications is a scatter plot expressing the relation between the classifications. A third example of the information indicating the relation between the classifications is a dendrogram expressing the relation between the classifications.
Referring back to the sequence of the learning process, in step S10, the result display unit 304 included in the user terminal 30 receives the information indicating the relation between the classifications from the defect analyzer 10, and outputs the information to the display device 506 or the like.
In step S11, the imaging unit 201 included in the image acquisition device 20 images a given defect on the object, and generates an image of the given defect. Then, the imaging unit 201 stores the generated image of the given defect in the image storage 202. The image storage 202 stores a plurality of images of defects.
In step S12, the image acquisition unit 301 included in the user terminal 30 transmits a request to acquire a verification image, to the image acquisition device 20, in response to a user operation. The image acquisition device 20 transmits one or more images of defects stored in the image storage 202 to the user terminal 30, in response to the request to acquire the verification image. The image acquisition unit 301 transmits the images of the defects that are received from the image acquisition device 20, to the image transmitter 303.
In step S13, the image transmitter 303 included in the user terminal 30 receives the images of the defects from the image acquisition unit 301. Next, the image transmitter 303 transmits a verification image, selected from the images of the defects, to the defect analyzer 10, in response to a user operation.
In step S14, the image receiver 101 included in the defect analyzer 10 receives the verification image from the user terminal 30. Next, the image receiver 101 transmits the received verification image to the feature extraction unit 105.
In step S15, the feature extraction unit 105 included in the defect analyzer 10 receives the verification image from the image receiver 101. Next, the feature extraction unit 105 reads out the feature extraction model stored in the model storage 104. Subsequently, the feature extraction unit 105 generates a verification feature by inputting the verification image to the feature extraction model. The feature extraction unit 105 transmits the generated verification feature to the classification estimation unit 108.
In step S16, the classification estimation unit 108 included in the defect analyzer 10 receives the verification feature from the feature extraction unit 105. Next, the classification estimation unit 108 acquires a representative point of each classification, from the representative-point determination unit 106. Subsequently, the classification estimation unit 108 arranges the verification feature and the representative point of each classification, in the embedding space. Next, the classification estimation unit 108 calculates a distance between the verification feature and the representative point of each classification. Next, the classification estimation unit 108 estimates a classification of a given defect that is imaged in the verification image, based on the distance between the verification feature and the representative point of each classification.
The classification estimation unit 108 estimates the classification of the given defect imaged in the verification image, by comparing the distance between the verification feature and the representative point of each classification, with a predetermined threshold. The threshold used in the estimating may be set to any value that is obtained based on a user operation.
When there is one representative point for which a distance from the verification feature is less than or equal to the threshold, the classification estimation unit 108 determines that the verification image matches the classification corresponding to the one representative point. When there is a plurality of representative points for each of which a distance from the verification feature is less than or equal to the threshold, the classification estimation unit 108 determines that the verification image matches classifications corresponding to the plurality of representative points. When there is no representative point for which a distance from the verification feature is less than or equal to the threshold, the classification estimation unit 108 determines that the verification image matches a new classification.
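The three-way rule above could be expressed as in the following sketch (NumPy assumed; the Euclidean distance is used here for illustration, though any of the metrics discussed earlier would fit):

```python
import numpy as np

def estimate_classification(verification_feature, reps: dict, threshold: float):
    """reps maps each classification code to its representative point."""
    near = [c for c, p in reps.items()
            if np.linalg.norm(verification_feature - p) <= threshold]
    if not near:
        return None      # no match: treat as a new classification
    return near          # one or more matching classifications
```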
The classification estimation unit 108 transmits the verification feature and an estimation result indicating the estimated classification to the similar-image extraction unit 109. The classification estimation unit 108 may also assign a defect label indicating the estimated classification to the verification image, and add the labeled verification image to the learned data stored in the learned data storage 102.
In step S17, the similar-image extraction unit 109 included in the defect analyzer 10 receives the verification feature and the estimation result, from the classification estimation unit 108. Next, the similar-image extraction unit 109 retrieves, from the learned data storage 102, one or more learned images to which a defect label, indicating the same classification as indicated in the estimation result, is assigned. Subsequently, the similar-image extraction unit 109 calculates a distance between the learned feature, corresponding to each of the retrieved one or more learned images, and the verification feature.
The similar-image extraction unit 109 extracts a similar image from the retrieved one or more learned images, based on calculated distances between the learned features and the verification feature. The similar-image extraction unit 109 may extract all learned images for which distances from the verification feature are each less than or equal to a predetermined threshold, or may extract a predetermined number of learned images in ascending order of distance from the verification feature. The similar-image extraction unit 109 transmits the verification feature, the estimation result, and one or more similar images, to the relation visualization unit 107.
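Both extraction policies described above fit in a short sketch (NumPy assumed; function and parameter names are illustrative):

```python
import numpy as np

def similar_images(verification_feature, learned_features,
                   threshold=None, k=None):
    """Returns indices of learned images similar to the verification image."""
    d = np.linalg.norm(learned_features - verification_feature, axis=1)
    if threshold is not None:
        return np.where(d <= threshold)[0]  # all images within the threshold
    return np.argsort(d)[:k]                # k images in ascending distance
```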
Referring back to the sequence of the image classification process, in step S18, the relation visualization unit 107 included in the defect analyzer 10 receives the verification feature, the estimation result, and the one or more similar images from the similar-image extraction unit 109. Next, the relation visualization unit 107 generates information indicating the relation between the verification image and the classifications, and transmits the information to the user terminal 30.
The relation visualization unit 107 can provide the relation between the verification image and the classifications, in the same manner as the approach to provide the relation between classifications in the relation visualization process. That is, the relation visualization unit 107 can provide the relation between the verification image and the classifications, by using the heat map, the scatter plot, or the dendrogram.
Referring back to the sequence of the image classification process, the result display unit 304 included in the user terminal 30 then receives the estimation result and the information indicating the relation between the verification image and the classifications from the defect analyzer 10, and outputs them to the display device 506 or the like. Hereinafter, the classification reorganization process executed by the defect analysis system 1 will be described.
In step S21, if it is determined in the image classification process that the verification image matches a new classification, the classification estimation unit 108 included in the defect analyzer 10 assigns a defect label indicating the new classification to the verification image, and adds the labeled verification image to the learned data stored in the learned data storage 102.
In step S22, the classification aggregator 110 included in the defect analyzer 10 acquires the representative point of each classification, from the representative-point determination unit 106. Next, the classification aggregator 110 arranges representative points for classifications, in the embedding space. Subsequently, the classification aggregator 110 determines whether there are a plurality of classifications for which a distance between representative points for the respective classifications is less than or equal to a predetermined threshold, based on the embedding space in which the representative points for the respective classifications are arranged. If there are a plurality of classifications for which a distance between the representative points is less than or equal to the threshold, the classification aggregator 110 determines a consolidated classification that includes the plurality of classifications. For the consolidated classification, one classification among the plurality of classifications may be selected, or a new classification may be adopted.
The threshold used by the classification aggregator 110 in this determination may be the same value as the threshold used by the classification estimation unit 108 in the estimation, or a different value may be adopted. As with the threshold used in the estimation, any value obtained based on a user operation may be set as the threshold used in the determination.
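One way to realize the aggregation test, assuming SciPy: single-linkage clustering cut at the threshold groups every set of classifications whose representative points fall within the threshold of one another. This particular grouping rule is an assumption, not a detail fixed by the disclosure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def consolidated_groups(points: np.ndarray, threshold: float) -> np.ndarray:
    """points: (C, 256) representative points, one row per classification.
    Returns a consolidated-classification id for each classification."""
    return fcluster(linkage(pdist(points), method="single"),
                    t=threshold, criterion="distance")
```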
In step S23, the classification aggregator 110 included in the defect analyzer 10 retrieves, from the learned data storage 102, pieces of learned data to which respective defect labels representing classifications included in the consolidated classification are assigned. Next, the classification aggregator 110 changes each defect label assigned to the retrieved learned data, to a defect label indicating the consolidated classification. Subsequently, the classification aggregator 110 stores the learned data to which the defect label indicating the consolidated classification is assigned, in the learned data storage 102.
In step S24, the model training unit 103 included in the defect analyzer 10 retrieves learned data from the learned data storage 102. Next, the model training unit 103 reads out the feature extraction model stored in the model storage 104. Subsequently, the model training unit 103 retrains the feature extraction model, using the retrieved learned data. Next, the model training unit 103 stores the retrained feature extraction model in the model storage 104.
The defect analysis system according to the present embodiment trains a feature extraction model based on learned data in which a classification of a given defect is assigned to the image of the given defect. The feature extraction model is trained so as to extract, from an input image, image features that make the distance between similar images small. The defect analysis system outputs information representing the relation between classifications of defects, based on distances between representative points that are determined from the image features of the images of the defects. With this arrangement, the defect analysis system of the present embodiment can analyze a classification of a defect based on an image obtained by imaging the defect.
In particular, the defect analysis system according to the present embodiment can visualize the relation between classifications of defects in various representation forms, such as a heat map, a scatter plot, and a dendrogram in which the distance between representative points is visualized. With this arrangement, the defect analysis system according to the present embodiment can output an analysis result that is highly satisfactory to the user.
The defect analysis system according to the present embodiment estimates the classification of a given defect imaged in a verification image, in which the classification of the defect is not specified, based on the distance between an image feature of the verification image and a representative point for each classification. At this time, one or more images similar to the verification image may be extracted from the images of defects belonging to the estimated classification, and output together with an estimation result. With this arrangement, the defect analysis system of the present embodiment can output a highly satisfactory estimation result even for an intermediate defect similar to a plurality of classifications, or for a new defect different from all predetermined classifications.
Further, the defect analysis system according to the present embodiment retrains the feature extraction model by using learned data to which a new classification (created when a verification image does not match any existing classification) or a consolidated classification (combining a plurality of classifications whose representative points are a short distance apart) is assigned. The retrained feature extraction model can be used to analyze the relation between classifications of defects that have been reorganized based on the analysis result. With this arrangement, the defect analysis system of the present embodiment can acquire classifications of defects suitable for the user's environment, and can output analysis results that are more satisfactory to the user.
Each function described in the above embodiments can be implemented by one or more processing circuits. Here, a “processing circuit” in the present specification includes a processor programmed to implement each function by software, such as a processor implemented by an electronic circuit, and also includes a device such as an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), or a conventional circuit module that is designed to implement each function described above.
Although the embodiments of the present invention are described above in detail, the present invention is not limited to these embodiments, and various modifications or changes can be made within the scope of the gist of the present invention set forth in the claims.
This application claims priority to Japanese Patent Application No. 2022-111662, filed on Jul. 12, 2022, with the Japan Patent Office, the entire contents of which are incorporated herein by reference.
Priority application: Japanese Patent Application No. 2022-111662, filed July 2022, Japan (national).
International filing: PCT/JP2023/017967, filed May 12, 2023 (WO).