The disclosure relates in general to a training method and training system for a resolution improvement model and a boundary detection method using the resolution improvement model.
Over the years, the required precision of boundary detection in the fields of image measurement, satellite telemetry, and medical image analysis has increased from 1 pixel to 1/10 or even 1/100 of a pixel. Conventional boundary detection with a precision of 1 pixel can no longer meet current requirements for high-precision measurement and detection.
Currently disclosed sub-pixel boundary detection techniques include moment-based estimation, interior and exterior interpolation reconstruction, and curve fitting. These estimation methods are all based on approximation, and the boundary position within a pixel is calculated approximately. Therefore, boundary detection still suffers from uncertainty and error.
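For illustration, a typical curve-fitting approach of the kind mentioned above locates a sub-pixel edge by fitting a parabola to the gradient peak of an intensity profile. The sketch below is a minimal example of that general technique; the function name and interface are illustrative, not part of this disclosure.

```python
import numpy as np

def subpixel_edge_parabolic(profile):
    """Estimate a sub-pixel edge location along a 1-D intensity profile
    by fitting a parabola through the gradient peak and its neighbours."""
    grad = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(grad))            # integer-pixel gradient peak
    if i == 0 or i == len(grad) - 1:
        return float(i)                 # no neighbours to fit against
    g0, g1, g2 = grad[i - 1], grad[i], grad[i + 1]
    denom = g0 - 2.0 * g1 + g2
    if denom == 0:
        return float(i)
    # Vertex of the parabola through the three gradient samples.
    return i + 0.5 * (g0 - g2) / denom

# A blurred step edge lying between pixels 4 and 5:
profile = np.array([0, 0, 0, 0, 0.2, 0.8, 1, 1, 1, 1])
print(subpixel_edge_parabolic(profile))
```

Because the result is interpolated rather than measured, any noise or ramp texture in the profile shifts the fitted vertex, which is exactly the uncertainty described above.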
Particularly, in the measurement of line width and space of semiconductor redistribution layer (RDL) and the tumor diagnosis using medical images, if the picture files contain textures, such as colors, intensities and ramp texture, generated during the image capturing process, discontinuous boundary changes at best stage cannot be obtained easily. Under such circumstances, in the application of tumor diagnosis using medical images, doctors may mis-diagnose the position and size of the tumor. In the application of high-precision measurement of line width and space of semiconductor redistribution layer, the accuracy of the position of semiconductor circuit contact and its size will be affected.
The disclosure is directed to a training method and training system for a resolution improvement model and a boundary detection method using the resolution improvement model.
According to one embodiment, a training method for a resolution improvement model is provided. The training method for the resolution improvement model includes the following steps. A low-resolution image is inputted. Pixels of the low-resolution image are captured and reorganized to generate a high-resolution image according to convolutional features. The resolution of the high-resolution image is higher than that of the low-resolution image. When capturing the low-resolution image, a condition mask is used to filter off the noise content as well as sharpen the edge. The high-resolution image is compared with a ground-truth target image to output a discrimination result. The convolutional features are updated according to the discrimination result.
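The training steps above can be sketched as a simple iterative loop. The stand-in below uses a toy 1-D "generator" (pixel repetition gated by a condition mask) and a pixel-wise "discriminator" (error against the ground truth); the actual network architecture is not specified here, so every internal detail of this sketch is an assumption for illustration only.

```python
import numpy as np

def train_step(low_res, target, conv_features, condition_mask, lr=0.1):
    """One iteration of the described loop (toy 1-D stand-in):
    generate a high-resolution image, discriminate it against the
    ground truth, then update the convolutional features."""
    # Generator: condition mask keeps only the mask==1 feature responses,
    # then pixels are reorganized (here: 2x repetition) into high resolution.
    masked = conv_features * condition_mask
    high_res = np.repeat(low_res * masked, 2)
    # Discriminator stand-in: pixel-wise discrimination result.
    err = high_res - target
    # Update the convolutional features from the discrimination result
    # (gradient of the mean-squared error w.r.t. each feature).
    grad = 2.0 * low_res * condition_mask * err.reshape(-1, 2).sum(axis=1) / err.size
    conv_features = conv_features - lr * grad
    return conv_features, float(np.mean(err ** 2))

low_res = np.array([1.0, 2.0, 3.0, 4.0])
target = np.repeat(low_res, 2)      # ground-truth "high-resolution" image
mask = np.ones(4)                   # condition mask: keep every position here
features = np.zeros(4)              # convolutional features to be learned
for step in range(300):
    features, loss = train_step(low_res, target, features, mask)
# The features converge so that the generated output matches the target.
```

The loop mirrors the repetition of steps described below: generate, discriminate, update, until the generated image approaches the ground-truth target.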
According to another embodiment, a boundary detection method using the resolution improvement model is provided. The boundary detection method using the resolution improvement model includes the following steps. A low-resolution image is inputted to a resolution improvement model to obtain a high-resolution image whose noise content has been filtered off and edge has been sharpened. The resolution of the high-resolution image is higher than that of the low-resolution image. The high-resolution image is used for boundary detection.
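The detection flow just described can be sketched as follows. The trained model is represented by a hypothetical stand-in (a 2x nearest-neighbour upscaler); the boundary locator shown is a generic strongest-transition rule, not the specific detector of this disclosure.

```python
import numpy as np

def detect_boundary(low_res, upscale_model):
    """Sketch of the described flow: the (already trained) resolution
    improvement model produces a denoised, edge-sharpened high-resolution
    image, on which the boundary is then located row by row."""
    high_res = upscale_model(low_res)
    grad = np.abs(np.diff(high_res.astype(float), axis=1))
    # Boundary column per row: position of the strongest transition;
    # the edge lies between the two adjacent pixels.
    return grad.argmax(axis=1) + 0.5

# Hypothetical stand-in for the trained model: 2x nearest-neighbour upscale.
toy_model = lambda img: np.kron(img, np.ones((2, 2)))

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1]])
print(detect_boundary(image, toy_model))
```

Because detection runs on the higher-resolution output, the located boundary is expressed on a finer pixel grid than the input image.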
According to an alternative embodiment, a training system for a resolution improvement model is provided. The training system for the resolution improvement model includes an input unit, a generator and a discriminator. The input unit is configured to input a low-resolution image. The generator is configured to capture and reorganize pixels of the low-resolution image to generate a high-resolution image according to convolutional features. The resolution of the high-resolution image is higher than that of the low-resolution image. When capturing the low-resolution image, a condition mask is used to filter off the noise content, as well as sharpen the edge. The discriminator is configured to compare the high-resolution image with a ground-truth target image to output a discrimination result, and the convolutional features are updated according to the discrimination result.
The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
Referring to
Referring to
Referring to
Referring to
Referring to
As indicated in
In the present disclosure, the resolution improvement model 100 plays an important role in the boundary detection process. The resolution improvement model 100 of the present disclosure is a generative adversarial network (GAN) whose convolution is gated by a condition mask. The resolution improvement model 100 not only increases the resolution, but further filters off the noise content and sharpens the edge during the resolution increasing process, greatly increasing the accuracy of the boundary detection. Detailed descriptions of a training method and a training system for the resolution improvement model 100 are disclosed below.
Referring to
Referring to
Next, the method proceeds to step S720: the low-resolution image IM60 is captured and reorganized by the generator 120 to generate a high-resolution image IM61 according to convolutional features, wherein the resolution of the high-resolution image IM61 is higher than that of the low-resolution image IM60. When the generator 120 captures the low-resolution image IM60, a condition mask MS is used to filter off the noise content and sharpen the edge. In this embodiment, a single physical condition mask is applied both to filter off the noise content and to sharpen the edge. Those having ordinary skill in the art should understand that, under other suitable conditions, two or more physical condition masks may be applied to filter off the noise content and to sharpen the edge, respectively.
Referring to
The convolutional features are gated by the condition mask MS so that only the part corresponding to the numeric value 1 of the condition mask MS is kept. That is, capturing the part corresponding to the numeric value 0 of the condition mask MS is prohibited.
The numeric values kept in the convolutional features are “−0.3, 0.8, 0.76, −0.3”. Since “0.8” is the largest numeric value, the pixel corresponding to the position of “0.8” is captured from the low-resolution image IM60 to form a pixel of the high-resolution image IM61. By the same analogy, the generator 120 captures pixels from the low-resolution image IM60 and reorganizes them to generate the high-resolution image IM61.
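The selection rule just described can be sketched numerically. The feature values below are the ones from the example; the window shape, the mask layout, and the low-resolution pixel values are illustrative assumptions.

```python
import numpy as np

# Feature responses inside one sliding window; the condition mask keeps
# four of the six entries (the value-1 positions).
features = np.array([[-0.3, 0.8, 0.5],
                     [0.76, -0.3, 0.2]])
mask = np.array([[1, 1, 0],
                 [1, 1, 0]])

# Value-0 positions of the mask are prohibited from being captured.
masked = np.where(mask == 1, features, -np.inf)
idx = np.unravel_index(np.argmax(masked), masked.shape)
print(idx)   # position of the largest kept value, 0.8

# The generator would then copy the low-resolution pixel at this position
# into the corresponding pixel of the high-resolution image.
low_res_window = np.array([[10, 20, 30],
                           [40, 50, 60]])
print(low_res_window[idx])
```

Replacing the prohibited positions with negative infinity is one simple way to guarantee the arg-max never selects a masked-off response.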
That is, when the generator 120 generates the high-resolution image IM61, the generator 120 uses the condition mask MS to weaken the noise content NS60 and to enhance the edge as well.
Then, the method proceeds to step S730, as indicated in
Then, the generator 120 updates the convolutional features according to the discrimination result RS, such that the high-resolution image IM61 generated in the next iteration is closer to the ground-truth target image IM61′. Steps S720 to S740 are performed repeatedly to continuously optimize the output.
According to the above embodiments, the resolution improvement model 100 trained by the training system 1000 not only increases the resolution, but also filters off the noise content and sharpens the edge during the resolution increasing process. The high-resolution image IM61 has a higher resolution and significantly increases the accuracy of boundary detection. Besides, since the noise content of the high-resolution image IM61 has been filtered off and the edge has been sharpened, the boundary detection will not be affected by noise content or ramp texture.
Moreover, in steps S710 to S740 of the boundary detection method, the boundary detection speed is optimized using a multi-task batch processing technology. Referring to
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.