The present invention relates to a method for enhancing the optical features of a workpiece, a method for enhancing the optical features of a workpiece through deep learning, and a non-transitory computer-readable recording medium. More particularly, the invention relates to a method for enhancing the optical features of a workpiece by intensifying the defects or flaws detected from the workpiece, a method for achieving such enhancement through deep learning, and a non-transitory computer-readable recording medium for implementing the method.
Artificial intelligence (AI), also known as machine intelligence, refers to human-like intelligence demonstrated by a man-made machine by simulating such human abilities as reasoning, comprehension, planning, learning, interaction, perception, movement, and object manipulation. With the development of technology, AI-related research has produced preliminary results, and AI nowadays is capable of better performance than humans, particularly in areas involving a finite set of human abilities, such as image recognition, speech recognition, and chess playing.
Formerly, AI-based image analysis was carried out by machine learning, which involves analyzing image data and learning from the data in order to determine or predict the state of a target object. Later, the advancement of algorithms and the improvement of hardware performance brought about major breakthroughs in deep learning. For instance, with the help of artificial neural networks, human selection is no longer required in the machine training process of machine learning. Strong hardware performance and powerful algorithms make it possible to input images directly into an artificial neural network so that a machine can learn on its own. Deep learning is expected to gradually supersede machine learning and become the mainstream technique in machine vision and image recognition.
It is an objective of the present invention to increase the rate at which a convolutional neural network can recognize the defects of a workpiece. To this end, the defect features of images taken of a workpiece are optically enhanced, and the enhanced images are transferred to a deep-learning module to train the deep-learning module.
In order to achieve the above objective, the present invention provides a method for enhancing an optical feature of a workpiece, comprising the steps of: receiving the workpiece and corresponding defect image information from outside; moving the workpiece to a working area; generating feature enhancement information according to the defect image information; adjusting optical properties of a variable light source device according to the feature enhancement information, and then providing a light source to the workpiece in the working area by the variable light source device; and adjusting an external parameter and an internal parameter of a variable image-taking device according to the feature enhancement information, and then capturing images of the workpiece in the working area by the variable image-taking device to obtain feature-enhanced images of the workpiece.
Another objective of the present invention is to provide a method for enhancing an optical feature of a workpiece through deep learning, comprising the steps of: receiving the workpiece and corresponding defect image information from outside; moving the workpiece to a working area; generating feature enhancement information according to the defect image information; adjusting optical properties of a variable light source device according to the feature enhancement information, and then providing a light source to the workpiece in the working area by the variable light source device; adjusting an external parameter and an internal parameter of a variable image-taking device according to the feature enhancement information, and then capturing images of the workpiece in the working area by the variable image-taking device to obtain feature-enhanced images of the workpiece; normalizing the feature-enhanced images to form training samples; and providing the training samples to a deep-learning model and thereby training the deep-learning model to identify the defect image information.
Furthermore, another objective of the present invention is to provide a non-transitory computer-readable recording medium, comprising a computer program, wherein the computer program performs the above methods after being loaded into and executed by a controller.
The present invention can effectively enhance the presentation of defects or flaws in the images of a workpiece, thereby increasing the rate at which a deep-learning model can recognize the defect or flaw features.
According to the present invention, images can be taken of a workpiece under different lighting conditions and then input into a deep-learning model in order for the model to learn from the images. This also helps increase the defect or flaw feature recognition rate of the deep-learning model.
The details and technical solution of the present invention are hereunder described with reference to the accompanying drawings. For the sake of illustration, the accompanying drawings are not drawn to scale. The accompanying drawings and the scale thereof are not restrictive of the present invention.
A preferred embodiment of the present invention is described below with reference to
The invention essentially includes an automated optical inspection apparatus 10, at least one carrying device 20, and at least one optical feature enhancement apparatus 30. The carrying device 20 and the optical feature enhancement apparatus 30 are provided downstream of the automated optical inspection apparatus 10. A workpiece that has been inspected by the automated optical inspection apparatus 10 is carried by the carrying device 20 to the working area of the optical feature enhancement apparatus 30. The optical feature enhancement apparatus 30 provides additional lighting to enhance the defect features of the workpiece, and images thus obtained are output to a convolutional neural network (CNN) system to conduct a training process.
The automated optical inspection apparatus 10 includes an image taking device 11 and an image processing device 12 connected to the image taking device 11. The image taking device 11 photographs a workpiece to obtain images of the workpiece. In a preferred embodiment, the image taking device 11 may be an area scan camera or a line scan camera; the present invention has no limitation in this regard. The image processing device 12 is configured to generate defect image information by analyzing and processing images. The defect image information includes such information as the types and/or locations of defects.
The carrying device 20 is provided downstream of the automated optical inspection apparatus 10 and is configured to carry a workpiece that has been inspected by the automated optical inspection apparatus 10 to the working area of the optical feature enhancement apparatus 30 in an automatic or semi-automatic manner. In a preferred embodiment, the carrying device 20 is composed of a plurality of working devices, and the working devices work in concert with one another to transfer workpieces along a relatively short or relatively good path, keeping the workpieces from collision or damage during the transferring or carrying process. More specifically, the carrying device 20 may be a conveyor belt, a linearly movable platform, a vacuum suction device, a multi-axis carrier, a multi-axis robotic arm, a flipping device, or the like, or any combination of the foregoing; the present invention has no limitation in this regard.
The optical feature enhancement apparatus 30 is also provided downstream of the automated optical inspection apparatus 10 and receives inspected workpieces from the carrying device 20. The optical feature enhancement apparatus 30 includes at least one variable image-taking device 31; at least one variable light source device 32; an image processing module 33; a control device 34 connected to the variable image-taking device 31, the variable light source device 32, and the image processing module 33; and a computation device 35 coupled to the control device 34. The variable light source device 32 and the variable image-taking device 31 are provided in a working area in order to provide auxiliary lighting to and take further images of a workpiece respectively.
The variable light source device 32 is configured to provide a light source to a workpiece and has adjustable optical properties. More specifically, the adjustable optical properties of the variable light source device 32 may include the intensity, projection angle, or wavelength of the output light.
In a preferred embodiment, the variable light source device 32 can provide uniform light, collimated light, annular light, a point source of light, a spotlight, area light, volume light, and so on. In another preferred embodiment, the variable light source device 32 includes a plurality of lamp units provided respectively at different positions and angles (e.g., one at the front, one at the back, and several lateral light sources positioned at different angles respectively), wherein the lamp units at the corresponding angles can be selectively activated by instructions of the control device 34 in order to obtain images of a workpiece illuminated by different light sources, or wherein the lamp units can be moved by movable platforms to different positions in order to provide multi-angle or partial lighting.
In yet another preferred embodiment, the variable light source device 32 can provide light of different wavelengths, such as white light, red light, blue light, green light, yellow light, ultraviolet (UV) light, and laser light, so that the defect features of a workpiece can be rendered more distinguishable by illuminating the workpiece with light of one of the wavelengths.
In still another preferred embodiment, and by way of example only, the variable light source device 32 can provide partial lighting to the defects of a workpiece according to instructions of the control device 34.
The variable image-taking device 31 is configured to obtain images of a workpiece and has external parameters and internal parameters, which are adjustable. The internal parameters include, for example, the focal length, the image distance, the position where a camera's center of projection lies on the images taken, the aspect ratio of the images taken (expressed in numbers of pixels), and a camera's image distortion parameters. The external parameters include, for example, the location and shooting direction of a camera in a three-dimensional coordinate system, such as a rotation matrix and a displacement matrix.
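The adjustable internal and external parameters described above correspond to the standard pinhole-camera model. The following sketch is illustrative only (the function name and numeric values are assumptions, not taken from the specification): the intrinsic matrix K holds the internal parameters (focal lengths, principal point), while the rotation R and translation t hold the external parameters.

```python
# Illustrative sketch (not from the specification): projecting a 3D point
# with a pinhole-camera model whose internal parameters (focal length,
# principal point) and external parameters (rotation R, translation t)
# correspond to the adjustable parameters described above.

def project_point(point_3d, K, R, t):
    """Map a world-space point to pixel coordinates via x = K [R|t] X."""
    # External parameters: rotate and translate into camera coordinates.
    cam = [sum(R[i][j] * point_3d[j] for j in range(3)) + t[i] for i in range(3)]
    # Internal parameters: perspective division and the intrinsic matrix K.
    u = K[0][0] * cam[0] / cam[2] + K[0][2]
    v = K[1][1] * cam[1] / cam[2] + K[1][2]
    return (u, v)

# Identity rotation, camera 2 units from the workpiece plane.
K = [[800.0, 0.0, 320.0],   # fx, skew, cx
     [0.0, 800.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
print(project_point((0.0, 0.0, 0.0), K, R, t))  # principal point: (320.0, 240.0)
```

Adjusting the external parameters (R, t) corresponds to repositioning the camera, while adjusting K corresponds to changing focal length or sensor mapping.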
In a preferred embodiment, the variable image-taking device 31 may be an area scan camera or a line scan camera, depending on equipment layout requirements; the present invention has no limitation in this regard.
The image processing module 33 is configured to generate feature enhancement information based on the defect image information. More specifically, the feature enhancement information may be a combination of a series of control parameters, wherein the control parameters are generated according to the types and locations of defects and may be, for example, specific coordinates, a lighting strategy, or a process flow. In a preferred embodiment, a database of control parameters is established, and the desired control parameters can be found according to the types and locations of defects. The control parameters are output to the control device 34 in order for the control device 34 to adjust the output of the variable image-taking device 31 and of the variable light source device 32 in advance and/or in real time.
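The control-parameter database lookup described above can be sketched as follows. The table contents, field names, and defect-type keys are illustrative assumptions, not values taken from the specification.

```python
# A minimal sketch of the control-parameter database: defect type selects a
# lighting strategy and image-taking parameters, and the defect location is
# attached so the control device can aim the light source and camera.

CONTROL_PARAMS = {
    # defect type -> lighting strategy and image-taking parameters (illustrative)
    "scratch":      {"light": "uniform",    "wavelength_nm": 550, "zoom": 1.0},
    "sanding_mark": {"light": "collimated", "angle_deg": 15,      "zoom": 2.0},
    "mura":         {"light": "backlight",  "wavelength_nm": 450, "zoom": 1.5},
}

def feature_enhancement_info(defect_type, location):
    """Look up control parameters for a defect and attach its coordinates."""
    params = dict(CONTROL_PARAMS[defect_type])  # copy so the table stays intact
    params["target_xy"] = location              # where to aim light and camera
    return params

info = feature_enhancement_info("mura", (120, 45))
print(info["light"], info["target_xy"])  # backlight (120, 45)
```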
The control device 34 is configured to adjust the aforesaid external parameters, internal parameters, and/or optical properties according to the feature enhancement information and control the operation of the variable image-taking device 31 and/or of the variable light source device 32 so that feature-enhanced images can be obtained of a workpiece.
In a preferred embodiment, the control device 34 essentially includes a processor and a storage unit connected to the processor. In this embodiment, the processor and the storage unit may jointly form a computer or processor, such as a personal computer, a workstation, a mainframe computer, or a computer or processor of any other form; the present invention has no limitation in this regard. Also, the processor in this embodiment may be coupled to the storage unit. The processor may be, for example, a central processing unit (CPU), a programmable general-purpose or application-specific microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or any other similar device, or a combination of the above.
The computation device 35 is configured to load a deep-learning model from the storage unit, execute the model, and then train it with feature-enhanced images so that the deep-learning model can identify defect image information. The deep-learning model may be, but is not limited to, a LeNet model, an AlexNet model, a GoogLeNet model, a Visual Geometry Group (VGG) model, or a convolutional neural network based on (e.g., expanded from and with modifications made to) any of the aforementioned models.
Reference is now made to
The automated optical inspection apparatus 10 takes images of a workpiece, marks the defect features of the images taken, and sends the defect image information to the image processing module 33 in order for the image processing module 33 to output feature enhancement information to the control device 34, thereby allowing the control device 34 to control the operation of the variable image-taking device 31 and/or of the variable light source device 32. The image processing module 33 includes the following parts, named after their respective functions: an image analysis module 33A, a defect locating module 33B, and a defect area calculating module 33C.
The image analysis module 33A is configured to verify the defect features and defect types by analyzing the defect image information. More specifically, the image analysis module 33A performs pre-processing (e.g., image enhancement, noise elimination, contrast enhancement, border enhancement, feature extraction, image compression, and image transformation) on an image obtained, and applies vision software tools and algorithms to the to-be-output image to accentuate the presentation of the defect features in the image. The module then compares the processed image of the workpiece with an image of a master slice to determine the differences therebetween and verify the existence of the defects, and preferably also identifies the defect features and the defect types according to the presentation of the defects.
The defect locating module 33B is configured to locate the defect features of a workpiece, or more particularly to find the positions of the defect features in the workpiece. More specifically, after the image analysis module 33A verifies the existence of defects, the defect locating module 33B assigns coordinates to the location of each defect feature in the image, correlates each set of coordinates with the item number of the workpiece and the corresponding defect type, and stores the aforesaid information into a database for future retrieval and access. It is worth mentioning that distinct features of the workpiece or of the workpiece carrier can be marked as reference points for the coordinate system, or the boundary of the workpiece (in cases where the workpiece is a flat object such as a panel or circuit board) can be directly used to define the coordinate system; the present invention has no limitation in this regard.
The defect area calculating module 33C is configured to analyze the covering area of each defect feature in the workpiece. More specifically, once the type and location of a defect are known, it is necessary to determine the extent of the defect feature in the workpiece so that the backend optical feature enhancement apparatus 30 can take images covering the entire defect feature in the workpiece and determine the covering area to be enhanced. The defect area calculating module 33C can identify the extent of each defect feature by searching for the boundary values of connected sections and then calculate the area of the defect feature in the workpiece.
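The search for the boundary values of a connected section can be sketched with a simple flood fill over a binary defect map. This is a pure-Python stand-in for the defect area calculating module 33C; the function name and the toy defect map are illustrative assumptions.

```python
# Illustrative sketch: label the connected section that contains a seed pixel
# in a binary defect map, then report its pixel area and bounding box — the
# "boundary values of connected sections" searched for by module 33C.
from collections import deque

def defect_extent(binary, seed):
    """Return (area, bounding box) of the 4-connected region around seed."""
    rows, cols = len(binary), len(binary[0])
    seen, queue = {seed}, deque([seed])
    min_r = max_r = seed[0]
    min_c = max_c = seed[1]
    while queue:
        r, c = queue.popleft()
        min_r, max_r = min(min_r, r), max(max_r, r)
        min_c, max_c = min(min_c, c), max(max_c, c)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and binary[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen), (min_r, min_c, max_r, max_c)

defect_map = [[0, 1, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]]
print(defect_extent(defect_map, (0, 1)))  # (3, (0, 1, 1, 2))
```

The bounding box tells the backend apparatus the covering area that the enhanced images must span.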
Any defect feature obtained through the foregoing procedure by the image processing module 33 includes such information as the type and/or location of the defect.
As defect features present themselves better with certain types of light sources than with others, the control device 34 of the optical feature enhancement apparatus 30 refers to the types of the defect features detected (as can be found in the feature enhancement information obtained, which includes such information as the types and locations of the defects) in order to determine which type of light source should be provided to the workpiece in the working area.
The storage unit of the control device 34 is prestored with a database that includes indices and output values corresponding respectively to the indices. After obtaining the feature enhancement information from the image processing module 33, the control device 34 uses the feature enhancement information as an index to find the corresponding output value, which is subsequently used to adjust the optical properties of the variable light source device 32.
The relationship between defect types and the optical properties of the variable light source device 32 is described below by way of example. Please note that the following examples demonstrate only certain ways of implementing the present invention and are not intended to be restrictive of the scope of the invention.
If a defect feature provides a marked contrast in hue, color saturation, or brightness to the surrounding area and can be easily identified through an image processing procedure (e.g., binarization), it is feasible to provide the workpiece surface with uniform light (or ambient light) so that every part of the visible surface of the workpiece has the same brightness. Such defect features include, for example, metal discoloration, discoloration of the workpiece surface, black lines, accumulation of ink, inadvertently exposed substrate areas, bright dots, variegation, dirt, and scratches.
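The binarization mentioned above can be sketched as a single global threshold; under uniform lighting, a defect that contrasts in brightness with its surroundings separates cleanly. The threshold and pixel values below are illustrative assumptions.

```python
# Sketch of binarization: map a grayscale image (0-255) to a 0/1 defect mask
# with one global threshold, which suffices when the surface is uniformly lit.

def binarize(gray, threshold=128):
    """Mark every pixel at or above the threshold as a defect candidate."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

# A bright scratch (200) on an otherwise uniform surface (90).
image = [[90, 90, 200],
         [90, 200, 90]]
print(binarize(image))  # [[0, 0, 1], [0, 1, 0]]
```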
If a defect feature is an uneven area in the image, it is feasible to provide the workpiece surface with collimated light from the side so that an included angle is formed between the optical path and the visible surface of the workpiece, allowing the uneven area in the image to cast a shadow. Such defect features include vertical lines, blade streaks, sanding marks, and other uneven workpiece surface portions.
If a defect feature is a flaw inside the workpiece or can reflect light of a particular wavelength, it is feasible to provide a backlight at the back of the workpiece or illuminate the workpiece with a light source whose wavelength can be adjusted to accentuate the defect in the image. Such defect features include, for example, mura, bright dots, and bright sub-pixels.
Aside from the above, different light source combinations can be used to highlight different defect features in an image. The resulting feature-enhanced images (i.e., images in which the defect features have been accentuated) are sent to the deep-learning model in the computation device 35 to train the model and thereby raise the recognition rate of the model.
The following paragraphs describe various embodiments of the variable light source device 32 with reference to
According to a preferred embodiment as shown in
The light intensity control unit 32A is configured to control the output power of one or a plurality of lamp units. The optical feature enhancement apparatus 30 can detect the state of ambient light and then control the output power of the lamp units of the variable light source device 32 through the light intensity control unit 32A according to the detection result.
The light angle control unit 32B is configured to control the light projection angles of the lamp units. In a preferred embodiment, the lamp units are directly set at different angles to target the working area, and the light angle control unit 32B will turn on the lamp units whose positions correspond to instructions received from the control device 34. In another preferred embodiment, carrying devices are provided to carry the lamp units of the variable light source device 32 to the desired positions to shed additional light on a workpiece. In yet another preferred embodiment, the polarization property of each lamp unit can be changed via an electromagnetic transducer module provided on an optical propagation medium, with a view to outputting light of different phases or polarization directions. The present invention has no limitation on how the light angle control unit 32B is implemented.
The light wavelength control unit 32C is configured to control the variable light source device 32 to output light so that the defects on the surface of a workpiece can be accentuated by switching to a certain wavelength. Light provided by the variable light source device 32 includes, for example, white light, red light, blue light, green light, yellow light, UV light, and laser light. The aforementioned light can be used to accentuate mura defects of a panel and defects that are hidden in a workpiece but easily identifiable with particular light.
Please refer to
As shown in
Please refer to
As shown in
The first movable platform 322 in this preferred embodiment may be a multidimensional linearly movable platform, a multi-axis robotic arm, or the like; the present invention has no limitation in this regard.
The following paragraphs describe various embodiments of the variable image-taking device 31 with reference to
In the preferred embodiment shown in
As shown in
In addition to moving the variable image-taking device 31 in the X and Y directions, the linearly movable platform can control the position and image-taking angle of the variable image-taking device 31 in the Z direction. As shown in
Other than the foregoing methods, the control device 34 may adjust the focus and image-taking position of the variable image-taking device 31 via software or by an optical means in order to obtain feature-enhanced images; the present invention has no limitation on the control method of the control device 34.
The apparatus described above will eventually obtain feature-enhanced images, i.e., images in which the defect features are enhanced. The feature-enhanced images obtained will be normalized and then output to the deep-learning model in the computation device 35 to train the model. Structurally speaking, the deep-learning model may be a LeNet model, an AlexNet model, a GoogLeNet model, or a VGG model; the present invention has no limitation in this regard.
The training method of a convolutional neural network is described below with reference to
As shown in
The aforesaid process not only effectively increases the defect or flaw feature recognition rate of the convolutional neural network, but also repeatedly verifies the performance of the network during the inspection process so that the trained model eventually achieves a high degree of completeness and a high recognition rate.
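The train-then-verify loop described above can be sketched as follows, with a deliberately trivial one-parameter threshold model standing in for the convolutional neural network (an assumption for brevity; the real system would train a CNN such as LeNet or VGG). Training nudges the parameter toward the labels, and each epoch ends with a verification pass that stops the loop once a target recognition rate is reached.

```python
# Sketch of the repeated train/verify cycle: adjust model parameters on the
# training samples, then verify accuracy; stop once the target rate is met.
import random

def train_until(samples, target_accuracy=0.95, max_epochs=100):
    """Tune a scalar decision threshold until verification hits the target."""
    threshold = 0.0
    for epoch in range(max_epochs):
        random.shuffle(samples)
        for x, label in samples:                        # training pass
            predicted = 1 if x > threshold else 0
            threshold += 0.1 * (predicted - label)      # nudge toward labels
        correct = sum((1 if x > threshold else 0) == label
                      for x, label in samples)          # verification pass
        if correct / len(samples) >= target_accuracy:
            return threshold, epoch + 1                 # target rate reached
    return threshold, max_epochs                        # epoch budget spent

# Feature value per sample, labeled 1 (defect) or 0 (no defect).
samples = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
threshold, epochs = train_until(samples)
print(round(threshold, 2), epochs)
```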
The method of the present invention for enhancing the optical features of a workpiece is described below with reference to
As shown in
To begin with, the workpiece is carried to the inspection area of the automated optical inspection apparatus 10 for defect/flaw detection (step S11).
Then, the automated optical inspection apparatus 10 photographs the workpiece with the image taking device 11 to obtain images of the workpiece (step S12).
After obtaining the images of the workpiece, the image processing device 12 of the automated optical inspection apparatus 10 processes the images to obtain defect image information of the images (step S13). The defect image information includes such information as the types and/or locations of defects.
The workpiece having completed the inspection is carried from the inspection area of the automated optical inspection apparatus 10 to the working area of the optical feature enhancement apparatus 30 by the carrying device 20, and the image processing module 33 receives the defect image information from the image processing device 12 (step S14).
Feature enhancement information is subsequently derived from the defect image information (step S15). The feature enhancement information may be a combination of a series of control parameters, wherein the control parameters are generated according to the types and locations of the defects.
After that, the optical properties of the variable light source device 32 are adjusted according to the feature enhancement information, and the variable light source device 32 projects light on the workpiece in the working area accordingly to enhance the defect features of the workpiece (step S16). More specifically, the optical properties of the variable light source device 32 are adjusted according to the types of the defects, and the adjustable optical properties of the variable light source device 32 include the intensity, projection angle, or wavelength of the light source.
Following that, the control device 34 controls the external parameters and internal parameters of the variable image-taking device 31 according to the feature enhancement information, and images are taken of the workpiece in the working area to obtain feature-enhanced images of the workpiece (step S17). More specifically, the control device 34 can adjust, among others, the position, angle, or focal length of the variable image-taking device 31 according to the types of the defects.
Then, the control device 34 normalizes the feature-enhanced images to form training samples (step S18). Each training sample at least includes input values and an anticipated output corresponding to the input values.
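Step S18 can be sketched under the assumption, common in practice but not stated in the specification, that normalization scales pixel values into [0, 1] and pairs each image with its anticipated output.

```python
# Illustrative sketch of forming a training sample: scale 0-255 pixels to
# [0, 1] and attach the anticipated output (here a hypothetical defect label).

def make_training_sample(image, defect_label):
    """Normalize a grayscale image and pair it with its anticipated output."""
    inputs = [[px / 255.0 for px in row] for row in image]
    return {"inputs": inputs, "anticipated_output": defect_label}

sample = make_training_sample([[0, 255], [128, 64]], defect_label="scratch")
print(sample["inputs"][0])  # [0.0, 1.0]
```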
The training samples are sent to a computer device (e.g., the computation device 35) and are input through the computer device into a deep-learning model, thereby training the deep-learning model to identify the defect image information (step S19).
The steps stated above can be carried out by way of a non-transitory computer-readable recording medium. Such a computer-readable recording medium may be, for example, a read-only memory (ROM), a flash memory, a floppy disk, a hard disk drive, an optical disc, a USB flash drive, a magnetic tape, a database accessible through a network, or any other storage medium that a person skilled in the art can easily think of as having similar functions.
In summary, the present invention can effectively enhance the presentation of defects or flaws in the images of a workpiece, thereby increasing the rate at which a deep-learning model can recognize the defect or flaw features. In addition, according to the present invention, images can be taken of a workpiece under different lighting conditions and then input into a deep-learning model in order for the model to learn from the images. This also helps increase the defect or flaw feature recognition rate of the deep-learning model.
The above is the detailed description of the present invention. However, the above is merely a preferred embodiment of the present invention and is not intended to limit the scope of the invention; variations and modifications made in accordance with the present invention still fall within the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
107106952 | Mar 2018 | TW | national |
Number | Date | Country | |
---|---|---|---|
Parent | 16265334 | Feb 2019 | US |
Child | 17082893 | US |