SYSTEM FOR DETECTING SURFACE PATTERN OF OBJECT AND ARTIFICIAL NEURAL NETWORK-BASED METHOD FOR DETECTING SURFACE PATTERN OF OBJECT

Information

  • Patent Application
  • Publication Number
    20200364442
  • Date Filed
    April 14, 2020
  • Date Published
    November 19, 2020
Abstract
An artificial neural network-based method for detecting a surface pattern of an object includes: receiving a plurality of object images, wherein the object images are obtained by capturing images of an object under light of different lighting directions, and a light incident angle of the light is less than or equal to 90 degrees; superimposing the object images into initial images; and performing deep learning by using the initial images to build a predictive model for identifying the surface pattern of the object. Accordingly, the speed of identifying a surface pattern of an object is increased, further improving the product yield of the object.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to a system for detecting a surface pattern of an object and an artificial neural network-based method for detecting a surface pattern of an object, and particularly, to a system capable of learning and automatically detecting various surface types such as slots, cracks, bumps and textures of an object, and a neural network training system thereof.


Description of the Prior Art

Various safety protection measures, such as safety belts, consist of numerous small structural elements. If these small structural elements have insufficient strength or other defects, safety concerns with the safety protection measures can result.


Due to various causes during a manufacturing process, such as unintentional impacts or mold defects, minute slots, cracks, bumps and patterns can result on the surfaces of these small or miniature structural elements, and these minute defects cannot be easily observed. In one conventional defect detection method, a product under detection is observed by the naked eye or touched by hand. However, such manual inspection to determine whether a product is defective has poor efficiency and is susceptible to misjudgment.


SUMMARY OF THE INVENTION

In view of the above, a system for detecting a surface pattern of an object and an artificial neural network-based method for detecting a surface pattern of an object of the present invention perform training by multi-angle imaging (i.e., different lighting directions) and preprocessing of multi-dimensional superimposition, so as to enhance distinguishability of a stereoscopic structural feature of an object without increasing computation time.


In one embodiment, an artificial neural network-based method for detecting a surface pattern of an object includes: receiving a plurality of object images of a plurality of objects, wherein the plurality of object images of each of the objects are images of the object captured based on light of a plurality of lighting directions and the plurality of lighting directions are different from one another; superimposing the plurality of object images of each of the objects into an initial image; and performing deep learning by using the plurality of initial images of the plurality of objects to build a predictive model for identifying the surface pattern of the object.
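As an illustrative sketch only (hypothetical Python, not part of the claimed method), the superimposition step described above can be pictured as stacking the per-lighting-direction captures of one object into a single multi-channel initial image, with one channel per lighting direction:

```python
def superimpose(object_images):
    """Superimpose object images of one object captured under different
    lighting directions into one multi-channel initial image: each pixel
    of the initial image holds one value per lighting direction."""
    # object_images: list of H x W grayscale images (nested lists).
    height, width = len(object_images[0]), len(object_images[0][0])
    return [[[img[y][x] for img in object_images]
             for x in range(width)]
            for y in range(height)]

# Hypothetical example: four 4x4 captures, one per lighting direction.
captures = [[[d] * 4 for _ in range(4)] for d in range(4)]
initial = superimpose(captures)
print(initial[0][0])  # [0, 1, 2, 3]
```

An initial image of this form, built per object, is what the deep-learning step would consume when building the predictive model.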


In one embodiment, a system for detecting a surface pattern of an object includes a driver component, a plurality of light source components and a photosensitive element. The driver component carries an object, a surface of the object is divided along a first direction into a plurality of areas, and the driver component is further for sequentially moving each of the plurality of areas to a detection position. The plurality of light source components are configured to face the detection position in a plurality of different lighting directions, respectively, and respectively provide light to illuminate the detection position, wherein a light incident angle of the light provided by each of the light source components relative to a normal line of the area located at the detection position is less than or equal to 90 degrees. The photosensitive element is configured to face the detection position, and sequentially captures a detection image of each of the areas when the light illuminates the detection position in the lighting directions.


In conclusion, the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention are capable of providing object images of different imaging effects for the same object by controlling various different incident angles of the imaging light sources, thereby enhancing stereoscopic distinguishability in space for various surface patterns of an object under image detection. Object images under different lighting directions can be integrated by performing multi-dimensional superimposition on the object images, so as to improve identification of a surface pattern of an object and to further obtain an optimal resolution of the surface pattern of the object. Surface images of multiple spectra can also be integrated, so as to further improve identification of a surface pattern of an object and to obtain an optimal resolution of the surface pattern of the object.
In the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention, a surface pattern of an object can be independently determined by an artificial neural network system such that an inspector is not required to observe the object with the naked eye or touch it by hand, hence significantly improving the efficiency of identifying surface patterns and reducing human misjudgment.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a system for detecting a surface pattern of an object according to an embodiment of the present invention;

FIG. 2 is a block diagram of a system for detecting a surface pattern of an object according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of an implementation form of relative optical positions of the object, the light source component and the photosensitive element in FIG. 1;

FIG. 4 is a schematic diagram of a system for detecting a surface pattern of an object according to another embodiment of the present invention;

FIG. 5 is a schematic diagram of an implementation form of relative optical positions of the object, the light source component and the photosensitive element in FIG. 4;

FIG. 6 is a schematic diagram of another implementation form of relative optical positions of the object, the light source component and the photosensitive element in FIG. 4;

FIG. 7 is a schematic diagram of an implementation form of an object image;

FIG. 8 is a schematic diagram of another implementation form of an object image;

FIG. 9 is a flowchart of an artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of an implementation form of a surface pattern of an area on a surface;

FIG. 11 is a schematic diagram of an object image of an area in FIG. 10 under a lighting direction of a light source component 502;

FIG. 12 is a schematic diagram of an object image of an area in FIG. 10 under a lighting direction of a light source component 501;

FIG. 13 is a schematic diagram of an object image of an area in FIG. 10 under a lighting direction of a light source component 503;

FIG. 14 is a schematic diagram of an object image of an area in FIG. 10 under a lighting direction of a light source component 504;

FIG. 15 is a schematic diagram of an implementation form of an initial image;

FIG. 16 is a flowchart of an artificial neural network-based method for detecting a surface pattern of an object according to another embodiment of the present invention; and

FIG. 17 is a flowchart of an artificial neural network-based method for detecting a surface pattern of an object according to yet another embodiment of the present invention.





DETAILED DESCRIPTION OF THE EMBODIMENTS

Refer to FIG. 1 and FIG. 2. In one embodiment, a system 1 for detecting a surface pattern of an object is suitable for scanning an object 10 to obtain object images of the object 10 under different lighting directions. In some embodiments, the surface of the object 10 can include at least one surface pattern (for example, surface structures such as slots, cracks, bumps, ridges and dents, edges, surface defects, surface roughness, and minute patterns), and images of the surface patterns are also presented in a corresponding object image. Here, the surface defect is a three-dimensional fine structure in a scale of submicron (<1 μm, e.g., 0.1 μm to 1 μm) to micron (μm); that is, the longest side or the longest diameter of the three-dimensional structure is in a scale between submicron and micron. For example, the three-dimensional structure is a micro-structure of 300 nm to 6 μm.


The system 1 for detecting a surface pattern of an object includes a processor 30, a driver component 20, a plurality of light source components 501, 502, 503 and 504, and a photosensitive element 40. The processor 30 is coupled to the driver component 20, the plurality of light source components 501, 502, 503 and 504, and the photosensitive element 40. The driver component 20 is for carrying an object 10, and is configured with a detection position. The plurality of light source components 501, 502, 503 and 504 and the photosensitive element 40 are configured to face the detection position from different angles, and the plurality of light source components 501, 502, 503 and 504 provide light from different lighting directions toward an imaging target position (i.e., the detection position) to be detected. In other words, the plurality of light source components 501, 502, 503 and 504 face the detection position and are configured in a plurality of different lighting directions toward the detection position. Thus, the system 1 for detecting a surface pattern of an object can obtain object images having optimal spatial information of surface features. In one embodiment, the plurality of lighting directions at least include the front side of the detection position, the rear side of the detection position, the left side of the detection position and the right side of the detection position, as shown in FIG. 1. That is to say, the light source component 502 is provided on the front side of the detection position, the light source component 504 is provided on the rear side of the detection position, the light source component 501 is provided on the left side of the detection position, and the light source component 503 is provided on the right side of the detection position. Herein, the plurality of light source components sequentially provide light to illuminate the detection position.


In one embodiment, light L provided by the light source components is visible light, so as to form, in the detection image, an image of a surface pattern in a scale of submicron on the surface of the object 10. In one embodiment, the optical wavelength of the light L can range between 380 nm and 780 nm, and can be determined according to requirements of material properties and spectral reflectance of the surface of an inspected object. In some embodiments, the visible light is, for example, any one of white light, violet light, blue light, green light, yellow light, orange light and red light. For example, the light L can be white light having a wavelength ranging between 380 nm and 780 nm, blue light having a wavelength ranging between 450 nm and 475 nm, green light having a wavelength ranging between 495 nm and 570 nm, or red light having a wavelength ranging between 620 nm and 750 nm.


Refer to FIG. 3. In one embodiment, a light incident angle θ of the light L relative to a normal line C of an area 10a, 10b or 10c on the surface of the object 10 located at the detection position is less than or equal to 90 degrees. Herein, the light incident angle θ refers to the included angle between the incident direction of the detection light L and the normal line C of the area located at the detection position; when the light incident angle θ is more than 0 degrees and less than or equal to 90 degrees, the detection light L illuminates the area located at the detection position at an angle of more than 0 degrees and less than or equal to 90 degrees relative to the normal line C. In some embodiments, the light incident angle θ of the light L can be more than or equal to a critical angle and less than or equal to 90 degrees. Herein, the critical angle can be associated with the surface pattern expected to be detected, which can be the surface pattern having the minimum size among the surface types a user expects to detect. In some embodiments, the light incident angle θ is associated with an aspect ratio of the surface pattern expected to be detected. In some embodiments, the critical angle can be arctan(r/d), where d is the hole depth of the surface pattern expected to be detected and r is its hole radius. That is, the light incident angle θ can be more than or equal to arctan(r/d).
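The critical-angle relation above can be sketched numerically (a hypothetical Python illustration; the function name and the example radius and depth are assumptions, not taken from the specification):

```python
import math

def critical_angle_deg(hole_radius, hole_depth):
    """Critical light incident angle in degrees for a hole-like surface
    pattern: the arctangent of hole radius over hole depth."""
    return math.degrees(math.atan2(hole_radius, hole_depth))

# Hypothetical pattern: 1 um hole radius, 2 um hole depth.
theta_c = critical_angle_deg(1.0, 2.0)
print(round(theta_c, 2))  # 26.57
```

Any incident angle θ between this critical angle and 90 degrees would then satisfy the condition described in the embodiment.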


In one embodiment, the plurality of light source components of the system 1 for detecting a surface pattern of an object provide light with the same light incident angle. In one embodiment, the photosensitive axis of the photosensitive element 40 is parallel to the normal line C.


Again refer to FIG. 1. During the operation of the system 1 for detecting a surface pattern of an object, the surface of the object 10 can be divided along a first direction E into a plurality of areas 10a, 10b and 10c, and the driver component 20 sequentially moves one of the plurality of areas 10a, 10b and 10c to the detection position. It should be noted that, three areas 10a, 10b and 10c are denoted as an example in FIG. 1. However, the present invention is not limited to the above example. The surface of the object 10 can also be divided into areas of other quantities according to actual requirements, for example, any quantity such as 3, 5, 11, 15 or 20.


In one embodiment, the photosensitive element 40 is configured to face the detection position, and sequentially captures a detection image of each of the areas 10a, 10b and 10c when the light L sequentially illuminates the detection position with each of the lighting directions. For example, during a detection procedure, the driver component 20 first moves the area 10a to the detection position, and as the area 10a is illuminated by the detection light provided by the light source component 501, the photosensitive element 40 captures the detection image of the area 10a. Next, as the area 10a is illuminated by the detection light provided by the light source component 502, the photosensitive element 40 captures the detection image of the area 10a. Then, as the area 10a is illuminated by the detection light provided by the light source component 503, the photosensitive element 40 captures the detection image of the area 10a. The above is repeated similarly until the detection images of the area 10a under all the different lighting directions have been captured. The driver component 20 then moves the object 10 such that the area 10b becomes located at the detection position, and as the area 10b is illuminated by the detection light provided by the light source component 501, the photosensitive element 40 then captures the detection image of the area 10b. The above is repeated similarly until the detection images of the area 10b under all the different lighting directions have been captured. Thus, the detection images of all the areas under all the different lighting directions can be obtained.
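The per-area capture order described above can be outlined as nested loops (a hypothetical Python sketch, not the claimed system; the `capture` callable stands in for the driver component, light source switching and photosensitive element):

```python
def scan(areas, lighting_directions, capture):
    """Per-area scan order: the driver component parks one area at the
    detection position, then every lighting direction is applied in turn
    and one detection image is captured per direction."""
    images = {}
    for area in areas:                         # driver moves this area into place
        for direction in lighting_directions:  # light sources fire sequentially
            images[(area, direction)] = capture(area, direction)
    return images

# Hypothetical stand-ins for areas 10a-10c and light source components 501-504.
areas = ["10a", "10b", "10c"]
directions = ["left", "front", "right", "rear"]
shots = scan(areas, directions, capture=lambda a, d: f"img({a},{d})")
print(len(shots))  # 12
```

The alternative order described next (all areas under one lighting direction before switching) simply swaps the two loops.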


In one embodiment, the photosensitive element 40 is configured to face the detection position; when the areas 10a, 10b and 10c of the object 10 sequentially arrive at the detection position as the light L in one lighting direction illuminates the detection position, the photosensitive element 40 sequentially captures the detection images of the areas 10a, 10b and 10c. For example, as the light source component 501 provides the light L to the detection position, when the areas 10a, 10b and 10c sequentially arrive at the detection position, the photosensitive element 40 also sequentially captures the detection images of the area 10a, the area 10b and the area 10c of the object 10 located at the detection position. As the light source component 502 provides the light L to the detection position, when the areas 10a, 10b and 10c sequentially arrive at the detection position, the photosensitive element 40 also sequentially captures the detection images of the area 10a, the area 10b and the area 10c of the object 10 located at the detection position. The above is repeated similarly so as to obtain the detection images of each of the areas under different lighting directions.


In one embodiment, the optical axes of any two adjacent light source components among the plurality of light source components form the same predetermined included angle. As shown in FIG. 1, in one example, the predetermined included angle between the light source component 501 and the light source component 502 is 90 degrees, the predetermined included angle between the light source component 502 and the light source component 503 is 90 degrees, the predetermined included angle between the light source component 503 and the light source component 504 is 90 degrees, and the predetermined included angle between the light source component 504 and the light source component 501 is 90 degrees. In one embodiment, the predetermined included angles between the plurality of light source components total 360 degrees. Herein, the predetermined included angle can refer to the included angle between the incident directions of the light L (also referred to as optical axes) of two adjacent light source components.


Refer to FIG. 4 and FIG. 5. In one embodiment, the system 1 for detecting a surface pattern of an object includes a photosensitive module 41 consisting of a photosensitive element 40 and a spectroscopic component 46. The spectroscopic component 46 is located between the photosensitive element 40 and the detection position; in other words, the spectroscopic component 46 is located between the photosensitive element 40 and the object 10. The spectroscopic component 46 has a plurality of filter regions 462, 464 and 466 respectively corresponding to a plurality of spectra, and a movement component 460. At this point, a light source component 50 (i.e., any one of the light source components 501, 502, 503 and 504) provides multi-spectrum light to illuminate the detection position. Herein, the multi-spectrum light has sub-light of a plurality of spectra. Thus, by switching the filter regions 462, 464 and 466 of the spectroscopic component 46 by using the movement component 460, the filter regions 462, 464 and 466 are individually driven and moved to a photosensitive axis D of the photosensitive element 40, hence enabling the photosensitive element 40 to capture, through the filter regions 462, 464 and 466 under the sub-light of the individual spectra, detection images of the area located at the detection position with respect to multiple different spectra. 
That is to say, when the multi-spectrum light from the light source component 50 illuminates the object 10 at the detection position, the multi-spectrum light is reflected by the surface of the object 10, the reflected light is filtered by one of the filter regions 462, 464 and 466 of the spectroscopic component 46 to become the sub-light of the spectrum of the corresponding filter region, and this sub-light then enters a sensing region of the photosensitive element 40; at this point, only one single spectrum (an intermediate value of an optical waveband) remains in the sub-light arriving at the photosensitive element 40. While the same filter region 462, 464 or 466 is aligned with the photosensitive axis D of the photosensitive element 40, the driver component 20 moves one area 10a, 10b or 10c at a time to the detection position, and the photosensitive element 40 captures, after each movement, the detection image of the area currently located at the detection position, so as to obtain the detection images of all the areas 10a, 10b and 10c under the same spectrum. Then, the spectroscopic component 46 switches another filter region 462, 464 or 466 into alignment with the photosensitive axis D of the photosensitive element 40, and the areas 10a, 10b and 10c are again sequentially moved and their detection images captured. The steps above are repeated to obtain the detection images of the spectra respectively corresponding to the filter regions 462, 464 and 466.
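The spectral capture order described above, with the filter loop outermost and the area loop innermost, can be sketched as follows (hypothetical Python, not part of the claimed system; `capture` stands in for the movement component, driver component and photosensitive element):

```python
def spectral_scan(filter_regions, areas, capture):
    """Capture order: align one filter region with the photosensitive
    axis, image every area under that spectrum, then switch to the next
    filter region and repeat."""
    images = {}
    for region in filter_regions:  # movement component aligns this filter
        for area in areas:         # driver component parks each area in turn
            images[(region, area)] = capture(region, area)
    return images

# Hypothetical stand-ins for filter regions 462/464/466 and areas 10a-10c.
regions = ["462", "464", "466"]
areas = ["10a", "10b", "10c"]
shots = spectral_scan(regions, areas, capture=lambda r, a: f"img({r},{a})")
print(len(shots))  # 9
```

Keeping the filter loop outermost minimizes filter switches, since the movement component repositions the spectroscopic component only once per spectrum.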


In some embodiments, the photosensitive module 41 can include the movement component 460, and the movement component 460 is coupled to the spectroscopic component 46 and the processor 30. During the operation of the system 1 for detecting a surface pattern of an object, under the control of the processor 30, the movement component 460 sequentially moves one of the filter regions 462, 464 and 466 of the spectroscopic component 46 to the photosensitive axis D of the photosensitive element 40.


Refer to FIG. 3. In one embodiment, the light source component 501 uses light emitting elements (not shown) of multiple different spectra to provide light of a plurality of spectra, and the light emitting elements of the different spectra are sequentially actuated, so as to enable the photosensitive element 40 to obtain detection images of multiple different spectra. The plurality of mutually different spectra can be those of any at least two of white light, violet light, blue light, green light, yellow light, orange light and red light. Herein, the light emitting elements respectively correspond to multiple non-overlapping optical wavebands, and these optical wavebands can be continuous or discontinuous. In some embodiments, each of the light emitting elements can be implemented by one or more light emitting diodes (LEDs); in some embodiments, each light emitting element can be implemented by a laser source. For example, the light source component 50 includes a red LED, a blue LED and a green LED, and causes the different LEDs to sequentially emit light, so as to obtain a detection image of a red spectrum, a detection image of a blue spectrum, and a detection image of a green spectrum.


Refer to FIG. 6. In one embodiment, the light source component 50 includes a light emitting element 52 and a spectroscopic component 56. The spectroscopic component 56 is located between the light emitting element 52 and the detection position; in other words, the spectroscopic component 56 is located between the light emitting element 52 and the object 10. The spectroscopic component 56 has a plurality of filter regions 562, 564 and 566 respectively corresponding to a plurality of spectra, and a movement component 560. At this point, the light emitting element 52 provides multi-spectrum light to illuminate the detection position. Herein, the multi-spectrum light has sub-light of a plurality of spectra. Thus, by switching the filter regions 562, 564 and 566 of the spectroscopic component 56 by using the movement component 560, one of the filter regions 562, 564 and 566 is driven and moved to the front of the light emitting element 52, hence enabling the light emitting element 52 to illuminate the area on the surface of the object 10 located at the detection position with the sub-light of the various spectra through the filter regions 562, 564 and 566, and allowing the photosensitive element 40 to capture detection images of multiple different spectra. That is to say, the multi-spectrum light emitted by the light emitting element 52 is filtered by one of the filter regions 562, 564 and 566 of the spectroscopic component 56 to become the sub-light of the spectrum of the corresponding filter region, and the sub-light is reflected, upon illuminating the object 10 at the detection position, by the surface of the object 10 and further arrives at the photosensitive element 40. 
While the same filter region 562, 564 or 566 is aligned with the light emitting element 52, the driver component 20 moves one area 10a, 10b or 10c at a time to the detection position, and the photosensitive element 40 captures, after each movement, the detection image of the area currently located at the detection position, so as to obtain the detection images of all the areas 10a, 10b and 10c under the same spectrum. Then, the spectroscopic component 56 switches another filter region 562, 564 or 566 into alignment with the light emitting element 52, and the areas 10a, 10b and 10c are again sequentially moved and their detection images captured. The steps above are repeated to obtain the detection images of the spectra respectively corresponding to the filter regions 562, 564 and 566. In other words, the light source component 50 uses the spectroscopic component 56 to form sub-light of one single spectrum from the multi-spectrum light emitted by one light emitting element and illuminates the detection position with that sub-light, hence enabling the photosensitive element 40 to obtain the detection images of multiple different spectra.


In some embodiments, the optical waveband of the multi-spectrum light provided by the light source component 50 can be between 380 nm and 750 nm, and the optical wavebands that the filter regions 462, 464, 466, 562, 564 and 566 of the spectroscopic components 46 and 56 individually allow to pass are respectively non-overlapping sections between 380 nm and 750 nm. Herein, these pass wavebands can be continuous or discontinuous. For example, assuming that the optical waveband of the multi-spectrum light is between 380 nm and 750 nm, the wavebands individually passed by the filter regions of the spectroscopic components 46 and 56 can be 380 nm to 450 nm, 450 nm to 475 nm, 475 nm to 495 nm, 495 nm to 570 nm, 570 nm to 590 nm, 590 nm to 620 nm, and 620 nm to 750 nm, respectively. In another example, assuming that the optical waveband of the multi-spectrum light is between 380 nm and 750 nm, the wavebands individually passed by the filter regions 462, 464, 466, 562, 564 and 566 of the spectroscopic components 46 and 56 can be 380 nm to 450 nm, 495 nm to 570 nm, and 620 nm to 750 nm, respectively.
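As an illustrative check (hypothetical Python, not from the specification), the requirement that the filter pass-bands be non-overlapping sections of the waveband can be expressed as a simple predicate over (low, high) pairs in nanometers:

```python
def non_overlapping(bands):
    """Check that filter pass-bands (lo, hi) in nm are non-overlapping
    sections; adjacent bands may share a boundary wavelength."""
    ordered = sorted(bands)
    return all(prev_hi <= lo
               for (_, prev_hi), (lo, _) in zip(ordered, ordered[1:]))

# The seven continuous sections from the first example above (nm).
bands = [(380, 450), (450, 475), (475, 495), (495, 570),
         (570, 590), (590, 620), (620, 750)]
print(non_overlapping(bands))  # True
```

The three discontinuous sections of the second example (380-450, 495-570 and 620-750 nm) pass the same check, since the predicate allows gaps between bands.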


In one embodiment, the plurality of object images of each of the objects 10 can be images of the object 10 captured based on another light of the plurality of lighting directions, wherein the spectrum of this other light is different from the spectrum of the original light.


In a first example, during the operation of the system 1 for detecting a surface pattern of an object, as the light source components of different lighting directions sequentially emit first light to illuminate the detection position, the photosensitive element 40 sequentially captures the detection images of the areas 10a, 10b and 10c under the different lighting directions; as the light source components of different lighting directions sequentially emit second light to illuminate the detection position, the photosensitive element 40 sequentially captures the detection images of the areas 10a, 10b and 10c, wherein the first light and the second light have different spectra.


In a second example, as the light source components 50 of different lighting directions sequentially emit multi-spectrum light to illuminate the detection position, the photosensitive element 40 captures the detection images of the areas 10a, 10b and 10c while the filter region 562 is aligned with the light emitting element 52, and captures the detection images of the areas 10a, 10b and 10c again while the filter region 564 is aligned with the light emitting element 52, thus obtaining multiple detection images corresponding to the spectra of the filter regions 562 and 564.


In the first example and the second example, the system 1 for detecting a surface pattern of an object is similarly capable of obtaining detection images of different spectra under different lighting directions by using different operation processes, thereby enhancing stereoscopic distinguishability in space for various surface patterns of an object under image detection.


Again refer to FIG. 1. In one embodiment, during the operation of the system 1 for detecting a surface pattern of an object, a carrier element 22 carries the object 10, and a driver motor 24 drives the carrier element 22 so as to move the plurality of areas of the object 10 along the first direction to the detection position. In one embodiment, assuming that the object 10 is a plate, the surface of the object 10 is equivalently a non-curved surface having a curvature equal to or approximately equal to zero. The object 10 is moved along a first direction E by the driver component 20, so as to sequentially move the areas 10a, 10b and 10c on the surface of the object 10 to the detection position for the photosensitive element 40 to obtain detection images. Herein, the first direction E can be an extension direction of any side (e.g., the long side) of the surface of the object 10. In one example, the carrier element 22 can be a planar carrier plate, and the driver motor 24 is coupled to one side of the planar carrier plate. At this point, in the detection procedure, the object 10 is detachably arranged on the planar carrier plate. The driver motor 24 drives the planar carrier plate to move along the first direction E, driving the object 10 to move so as to align an area with the detection position. Herein, the driver motor 24 drives the planar carrier plate to move by a predetermined distance each time, and repeatedly does so to sequentially move the areas 10a, 10b and 10c on the surface to the detection position. Herein, the predetermined distance is substantially equal to the width of the areas 10a, 10b and 10c along the first direction E.
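The stepping geometry described above can be sketched numerically (hypothetical Python; the 30 mm surface length and three areas are assumed values for illustration, not from the specification):

```python
def step_positions(surface_length, num_areas):
    """Predetermined step distance for the planar carrier plate: the
    surface is divided into equal areas along the first direction E, and
    each drive step advances the plate by one area width. Returns the
    step distance and the center of each area along the surface."""
    step = surface_length / num_areas
    centers = [step * i + step / 2 for i in range(num_areas)]
    return step, centers

step, centers = step_positions(30.0, 3)  # hypothetical 30 mm surface, 3 areas
print(step)     # 10.0
print(centers)  # [5.0, 15.0, 25.0]
```

Advancing by exactly one area width per step brings each successive area center to the detection position without gaps or overlap between captures.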


Again refer to FIG. 4. In one embodiment, assuming that the object 10 is cylindrical in shape, by rotating the object 10 along the first direction A by using the driver component 20, the areas 10a, 10b and 10c of the object 10 are sequentially moved to the detection position for the photosensitive module 41 to obtain the detection images. In some embodiments, the first direction A can be a clockwise direction or a counterclockwise direction. In some embodiments, the surface of the object 10 is divided, for example but not limited to, into nine areas.


In one embodiment, the driver component 20 includes the carrier element 22 and the driver motor 24. The driver motor 24 is connected to the carrier element 22. During the operation of the system for detecting a surface pattern of an object, the carrier element 22 carries the object 10, and the driver motor 24 rotates the carrier element 22 so as to drive the object 10 to rotate and sequentially move the plurality of areas to the detection position.


In one example, the carrier element 22 can be two rollers spaced by a predetermined distance, and the driver motor 24 is coupled to rotating shafts of the two rollers. Herein, the predetermined distance is less than the diameter of the object 10 (the minimum diameter of the body). Thus, the object 10 can be movably arranged between the two rollers. Furthermore, while the driver motor 24 rotates the two rollers, the object 10 is driven and hence rotated due to the surface frictional force between the object 10 and the two rollers.


In another example, the carrier element 22 can be a rotating shaft, and the driver motor 24 is coupled to one end of the rotating shaft. At this point, the other end of the rotating shaft is provided with an embedding member (e.g., an insertion slot). At this point, the object 10 can be detachably embedded in the embedding member. Furthermore, while the driver motor 24 rotates the rotating shaft, the object 10 is driven and hence rotated by the rotating shaft.


Refer to FIG. 7. In one embodiment, the detection image can directly serve as an object image M for performing subsequent steps. In another embodiment, each of the object images of each of the objects 10 is formed by combining a plurality of detection images of the object. In some embodiments, upon capturing the detection images 100 of all the areas 10a to 10c of the same lighting direction by the photosensitive element 40, the processor 30 can further combine the detection images 100 captured according to the capturing sequence into an object image M.


In one embodiment, the photosensitive element 40 can be a linear photosensitive element; the linear photosensitive element can be implemented by a linear image sensor. At this point, the detection images 100 captured by the photosensitive element 40 can be combined without cropping by the processor 30.


In another embodiment, the photosensitive element 40 is a two-dimensional photosensitive element; the two-dimensional photosensitive element can be implemented by a planar image sensor. At this point, upon capturing the detection images 100 of all the areas 10a to 10c of the same lighting direction by the photosensitive element 40, the processor 30 captures, based on the short sides of the detection images 100, middle regions of the detection images 100. Then, the processor 30 combines the middle regions corresponding to all the areas 10a to 10c into the object image M.
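As an illustrative sketch of this combining step (the band fraction and function name here are hypothetical and not part of the disclosure), cropping the middle region of each two-dimensional detection image along its short side and stitching the regions into one object image could look like:

```python
import numpy as np

def combine_middle_regions(detection_images, band_fraction=0.5):
    """Crop the middle band (along the short side) of each detection
    image and stack the bands into one object image.

    `band_fraction` is a hypothetical parameter: the fraction of the
    short side kept around the center of each detection image.
    """
    bands = []
    for img in detection_images:
        h, w = img.shape[:2]
        short = min(h, w)
        keep = max(1, int(short * band_fraction))
        start = (short - keep) // 2
        if h <= w:  # short side is the height: keep a horizontal band
            bands.append(img[start:start + keep, :])
        else:       # short side is the width: keep a vertical band
            bands.append(img[:, start:start + keep])
    # stitch along the axis that was cropped
    axis = 0 if detection_images[0].shape[0] <= detection_images[0].shape[1] else 1
    return np.concatenate(bands, axis=axis)
```

The middle band is chosen here on the assumption that the center of a planar image sensor suffers the least perspective distortion at the detection position.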


In one embodiment, referring to FIG. 1, the system 1 for detecting a surface pattern of an object can be provided with one single photosensitive element 40, and the photosensitive element 40 performs image capturing on multiple areas 10a to 10c to obtain multiple detection images respectively corresponding to the areas 10a to 10c. In another embodiment, referring to FIG. 4, the system 1 for detecting a surface pattern of an object can be provided with multiple photosensitive elements 40, and each of the photosensitive elements 40 faces the detection position and is arranged on a frame 45 along the long axis (a second direction B) of the object 10. The photosensitive elements 40 respectively capture detection images, for different sections of the object 10, of the area located at the detection position. In one embodiment, the second direction B is substantially perpendicular to the first direction A. Moreover, the photosensitive axes D of the photosensitive elements 40 are parallel to the normal line C.


Refer to FIG. 8. In an example, assume that the object 10 is cylindrical in shape and the system 1 for detecting a surface pattern of an object is provided with three photosensitive elements 40, as shown in FIG. 4. These photosensitive elements 40 respectively capture detection images of different section positions of the surface of the object 10 located at the detection position, e.g., a detection image 101 at a head section of the object 10, a detection image 102 at a middle section of the object 10, and a detection image 103 at a tail section of the object 10. The processor 30 combines all the detection images obtained from the same section into images 110, 120 and 130, and finally combines the images 110, 120 and 130 into the object image M.


Refer to FIG. 9. An artificial neural network-based method for detecting a surface pattern of an object is suitable for an artificial neural network system. Herein, the artificial neural network system can be implemented on the processor 30. The artificial neural network system has a learning (i.e., training) phase and a prediction phase.


In the learning phase, the artificial neural network system receives a plurality of object images of a plurality of objects (step S01). Herein, the plurality of object images of each of the objects are images of the objects captured based on the light from a plurality of lighting directions, wherein the lighting directions are different from one another. For example, the plurality of object images can be a plurality of the object images M obtained by the system 1 for detecting a surface pattern of an object, as illustrated in FIG. 7 and FIG. 8.


Next, the artificial neural network system superimposes the plurality of object images of each of the objects into an initial image (step S02). Then, the artificial neural network system performs deep learning by using the plurality of initial images of the plurality of objects to build a predictive model for identifying a surface pattern of an object (step S03). In some embodiments, the deep learning can be implemented by a convolutional neural network (CNN) algorithm; however, the present invention is not limited to the above example.
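As a hedged illustration of step S03, a minimal forward pass of a convolutional network over one initial image can be sketched in plain NumPy; the kernels, weights, and function names shown are hypothetical placeholders, and a practical system would instead train such parameters with a deep learning framework:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_forward(initial_image, kernels, dense_w, dense_b):
    """One forward pass of a toy CNN over one initial image:
    convolution -> ReLU -> global average pooling -> dense layer ->
    softmax over surface-pattern categories."""
    features = []
    for k in kernels:
        fmap = np.maximum(conv2d(initial_image, k), 0.0)  # ReLU
        features.append(fmap.mean())                      # global avg pool
    logits = np.asarray(features) @ dense_w + dense_b
    exp = np.exp(logits - logits.max())                   # stable softmax
    return exp / exp.sum()                                # class probabilities
```

Training such a model (adjusting the kernels and dense weights from labeled initial images) is what step S03 performs; the sketch shows only the inference structure of a CNN.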


Refer to FIG. 10. In one example, there are several surface patterns on the areas of the object 10, including a recessed slot 14, a planar pattern 12, and a common surface 16 without defects. When the light source component 502 located on the front side of the detection position emits light toward the object 10, the recess of the slot 14 being closer to the front side appears darker and produces a shadow while the rear side is brighter, the pattern 12 is brighter, and the common surface 16 is darker; thus, an object image M01 captured by the photosensitive element 40 is as shown in FIG. 11. When the light source component 501 located on the left side of the detection position emits light toward the object 10, the recess of the slot 14 being closer to the left side appears darker and produces a shadow while the right side is brighter, the pattern 12 is brighter, and the common surface 16 is darker; thus, an object image M02 captured by the photosensitive element 40 is as shown in FIG. 12. Similarly, when the light source component 503 located on the right side of the detection position emits light toward the object 10, an object image M03 captured by the photosensitive element 40 is as shown in FIG. 13; when the light source component 504 located on the rear side of the detection position emits light toward the object 10, an object image M04 captured by the photosensitive element 40 is as shown in FIG. 14. Referring to FIG. 11 to FIG. 14, in the object images M01 to M04, no shadow due to the different lighting directions is produced in the image of the pattern 12; however, a shadow is produced in the image of the slot 14 as a result of the different lighting directions.


Refer to FIG. 15. In one example, the artificial neural network system superimposes the object image M01, the object image M02, the object image M03 and the object image M04 of the object 10 into an initial image MF. In one embodiment, the superimposition means superimposing brightness values of pixels in the object images. Taking the object images M01 to M04 for instance, the brightness value of the pattern 12 is high in all the object images M01 to M04, and hence also has a high brightness value after the superimposition. The brightness value of the surface 16 is low in all the object images M01 to M04, and hence also has a low brightness value after the superimposition. The brightness value of the front side of the slot 14 is slightly low in the object image M01, the brightness values of the front side of the slot 14 are high on the left and low on the right in the object images M02 and M03, and the brightness value of the front side of the slot 14 is slightly high in the object image M04; however, the overall brightness value of the circumference of the slot 14 after the superimposition is low compared to that of the surface 16, and the slot 14 can thus be emphasized in the initial image MF. Hence, when a defect to be detected is too minute, as a result of the shape or depth of the defect, the defect may not be presented in an obvious way by imaging with one single light source, such that the defect is unlikely to be detected and misjudgment is thus caused. However, by superimposing images of different lighting directions, the difference of brightness values between the circumference of a defect and a surface can be accentuated.
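A minimal sketch of this brightness-value superimposition, assuming 8-bit grayscale object images (the rescaling back to the 0 to 255 range is an added assumption so the result stays a displayable image):

```python
import numpy as np

def superimpose(object_images):
    """Superimpose the object images of one object (one image per
    lighting direction) by summing pixel brightness values, then rescale
    the sum to the 8-bit range."""
    stack = np.stack([img.astype(np.float64) for img in object_images])
    summed = stack.sum(axis=0)
    lo, hi = summed.min(), summed.max()
    if hi > lo:
        summed = (summed - lo) / (hi - lo) * 255.0
    return summed.astype(np.uint8)
```

Pixels that are bright or dark under every lighting direction (the pattern 12 and the surface 16) keep their relative level, while a recess that casts a shadow from several directions (the slot 14) accumulates a consistently low sum and stands out against the surface.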


That is to say, with respect to the artificial neural network system stored in the processor 30, the artificial neural network system in the learning phase can receive initial images obtained by superimposing object images of different lighting directions of multiple objects. Taking the foregoing system for detecting a surface pattern of an object for example, images of different surface patterns can be images having different defects, images without defects, images having different levels of surface roughness, or images of defects presenting different levels of brightness and contrast produced by illuminating areas on a surface with light of different lighting directions, and the artificial neural network system can perform deep learning according to the images of various surface patterns so as to build a predictive model for identifying various surface patterns. In other words, by using imaging with multi-angle light sources and preprocessing of superimposing images, the distinguishability of features of stereoscopic defects can be significantly enhanced without vastly increasing the computation time of the CNN algorithm, providing a better outcome than conventional optical algorithms.


Refer to FIG. 16. In one embodiment, the surface pattern can be at least two of slots, cracks, bumps and patterns; more specifically, the predictive model is capable of identifying whether the surface pattern contains at least two of slots, cracks, bumps, sand holes, air holes, impact marks, scratches, edges and patterns, or does not contain any defect. In one embodiment, the step of performing the deep learning by using the initial images (step S03) includes categorizing each of the initial images according to a plurality of predetermined surface pattern categories (step S33), and performing the deep learning according to the plurality of categorized initial images of the plurality of objects to build the predictive model for identifying a surface pattern of an object (step S34).


Refer to FIG. 17. In one embodiment, the artificial neural network-based method for detecting a surface pattern of an object further includes a step of normalizing the plurality of object images of the plurality of objects (step S11), and superimposing the plurality of normalized object images of each of the objects into an initial image (step S21). Thus, asymmetry among learning data is reduced and learning efficiency is at the same time improved.
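Steps S11 and S21 can be sketched as follows, assuming min-max normalization to the [0, 1] range (the disclosure does not fix a particular normalization scheme, so this choice is illustrative):

```python
import numpy as np

def normalize(img):
    """Min-max normalize one object image to [0, 1] so that images taken
    under differently powered light sources contribute comparably
    (step S11)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def superimpose_normalized(object_images):
    """Normalize each object image, then superimpose the normalized
    images into one initial image (step S21)."""
    return sum(normalize(img) for img in object_images)
```

Normalizing before superimposing is what reduces the asymmetry among learning data: no single lighting direction can dominate the sum merely because its exposure was brighter.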


In one embodiment, the artificial neural network-based method for detecting a surface pattern of an object further includes transforming the initial image of each of the objects into a matrix (step S31), and performing the deep learning by using the plurality of matrices to build the predictive model for identifying a surface pattern of an object (step S32). That is to say, different initial images are transformed into information such as length, width, pixel type, pixel depth and channel value in data matrices, so as to facilitate subsequent processing, wherein the channel value represents an imaging condition of a corresponding object image. Herein, an artificial neural network (e.g., implemented by a deep learning program) in the artificial neural network system includes a plurality of image matrix input channels for inputting corresponding matrices, and the image matrix input channels respectively represent imaging conditions of a plurality of spectra. In other words, in step S31, a data format of the initial image is transformed to a format (e.g., an image matrix) supported by the input channel of the artificial neural network.
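A minimal sketch of step S31, assuming that each imaging condition (e.g., each spectrum) maps to one channel of an H x W x C matrix and that a batch of objects is stacked into an N x H x W x C array (the channel layout and function names are assumptions for illustration):

```python
import numpy as np

def to_matrix(initial_images_per_spectrum):
    """Stack the initial images of one object (one image per spectrum)
    into an H x W x C matrix whose channel index encodes the imaging
    condition; length, width, pixel depth and channel count are then
    implicit in the array's shape and dtype."""
    return np.stack(initial_images_per_spectrum, axis=-1)

def batch_matrices(objects_images):
    """Batch the per-object matrices into an N x H x W x C array suitable
    for the image matrix input channels of an artificial neural network."""
    return np.stack([to_matrix(imgs) for imgs in objects_images], axis=0)
```
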


In some embodiments, in the learning phase, the object images received by the artificial neural network system are images of known surface patterns, and the surface defect types outputted by the artificial neural network system are also set in advance. In other words, the object images used for deep learning are all marked with existing object types. For instance, in one example, if an object is an unqualified object, the surface of the object has one or more surface patterns that the artificial neural network system has already learned and is configured to capture, such that the artificial neural network system selects these surface patterns; conversely, if an object is a qualified object, the surface of the object does not possess any surface patterns that have been recorded and are used for triggering the selection action of the artificial neural network. At this point, some of the object images received by the artificial neural network system have labels of one or more surface patterns, and others have labels without any surface patterns. Furthermore, the output of the artificial neural network system sets in advance a plurality of surface pattern categories according to these surface patterns. In another example, if an object is an unqualified object, the surface of the object has one or more first-type surface patterns; conversely, if an object is a qualified object, the surface of the object has one or more second-type surface patterns. At this point, some of the object images received by the artificial neural network system have labels of one or more first-type surface patterns, and others have labels of one or more second-type surface patterns. Furthermore, the output of the artificial neural network system sets in advance a plurality of surface pattern categories according to these surface patterns.


In some embodiments, in the learning phase, the artificial neural network system performs training by using object images with known surface defects so as to generate determination items of the neurons in a predictive model and/or to adjust a weighting connecting any two neurons, such that a prediction result (i.e., the surface defect type outputted) of each object image conforms to the known, labeled surface defects, and a predictive model for identifying a surface pattern of an object can be built. In the prediction phase, the artificial neural network system performs category prediction on object images of unknown surface patterns by using the predictive model built. In some embodiments, the artificial neural network system performs percentile prediction on the object images according to the surface pattern categories, i.e., determining the percentage of possibility that each object image falls within the individual surface pattern categories.
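The percentile prediction described above can be sketched as a softmax over the predictive model's raw category scores, scaled to percentages (the category names and scores used below are hypothetical examples, not categories fixed by the disclosure):

```python
import numpy as np

def percentile_prediction(logits, categories):
    """Convert the predictive model's raw scores into the percentage of
    possibility that an object image falls within each surface-pattern
    category (softmax rescaled to percent)."""
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())  # subtract max for stability
    percents = exp / exp.sum() * 100.0
    return dict(zip(categories, percents))
```

For example, scores of (2.0, 1.0, 0.0) over hypothetical categories "slot", "crack" and "no defect" would yield percentages summing to 100 with "slot" ranked highest.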


In some embodiments, the artificial neural network system includes an input layer and multiple hidden layers. The input layer is coupled to the hidden layers. The input layer is for performing operations of steps S01 and S02 (and steps S11 and S21) above. The hidden layers are for performing step S03 above.


In some other embodiments, the artificial neural network system includes a preprocessing unit and a neural network unit. The preprocessing unit is coupled to the neural network unit. The preprocessing unit is for performing steps S01 and S02 (and steps S11 and S21) above. The neural network unit is for performing step S03 above. The neural network unit includes an input layer and multiple hidden layers, and the input layer is coupled to the hidden layers.


Refer to FIG. 9. In one embodiment, in the prediction phase, the artificial neural network system performs the prediction procedure according to a plurality of object images M corresponding to a plurality of different lighting directions so as to identify, in the object image M, an area image presenting a surface pattern of the object (step S04). In other words, in the prediction procedure, after the object image M is inputted into the artificial neural network system, the artificial neural network system executes the predictive model according to the combined object image M generated, so as to identify, in the object image M, an area image presenting the surface pattern of the object 10. In one embodiment, the object image M is categorized by the predictive model; that is, the artificial neural network system categorizes the object image of the surface pattern of the object according to a plurality of predetermined surface defect types. At an output end, percentile prediction is performed on the object image M according to the predetermined surface defect types in step S04 to determine the percentage of possibility that the object image M falls within the individual defect types.


In some embodiments, the processor 30 can include the foregoing artificial neural network system, so as to automatically perform surface pattern categorization according to a combined image, thereby automatically determining the surface pattern of the surface of the object 10. In other words, in the learning phase, the object image generated by the processor 30 can be subsequently used for training by the artificial neural network system, so as to build a predictive model for identifying a surface pattern of an object. In the prediction phase, the object image generated by the processor 30 can be subsequently used for prediction by the artificial neural network system, so as to perform category prediction of an object image by the predictive model.


In some embodiments, the object image generated by the processor 30 can be fed into another processor having the foregoing artificial neural network system, so as to have the artificial neural network system automatically categorize a surface pattern according to the combined object image, thereby automatically determining the surface pattern of the surface of the object 10. In other words, the artificial neural network system automatically performs training or prediction with respect to the object image fed thereto.


In one example of step S02 or S21, the object images of the same object can have the same spectrum. In another example of step S02 or S21, the object images of the same object can have different spectra. That is to say, multiple object images of the same object include an image of the object captured based on light of a spectrum of different lighting directions, and an image of the object captured based on light of another spectrum of different lighting directions. Furthermore, the two spectra are different from each other.


In some embodiments, the artificial neural network-based method for detecting a surface pattern of an object of the present invention can be implemented by a computer program product, such that the artificial neural network-based method for detecting a surface pattern of an object according to any one of the embodiments of the present invention can be completed when a computer (i.e., a processor) loads and executes the program. In some embodiments, the computer program product is a non-transitory computer-readable recording medium, and the program above is stored in the non-transitory computer-readable recording medium and to be loaded by a computer (i.e., a processor). In some embodiments, the program above itself can be a computer program product, and is transmitted by a wired or wireless means into a computer.


In conclusion of the above description, the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention are capable of providing object images of different imaging effects for the same object by controlling various different incident angles of imaging light sources so as to perform image capturing, thereby enhancing stereoscopic distinguishability in space for various surface patterns of an object under image detection. In the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention, object images under different lighting directions can be integrated by performing multi-dimensional superimposition on the object images, so as to improve identification of a surface pattern of an object and to further obtain an optimal resolution of the surface pattern of the object. In the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention, surface images of multiple spectra can also be integrated, so as to improve identification of a surface pattern of an object. In the system for detecting a surface pattern of an object and the artificial neural network-based method for detecting a surface pattern of an object according to an embodiment of the present invention, a surface pattern of an object can be independently determined by an artificial neural network system such that an inspector is not required to observe the object with the naked eye or touch the object by hand, hence significantly improving the efficiency of identifying surface patterns as well as reducing human misjudgment.


The present disclosure is explained by way of the disclosed embodiments that are not to be construed as limitations to the present disclosure. Without departing from the spirit and purview of the present disclosure, a person of ordinary skill in the art could make slight modifications and changes. Therefore, the legal protection of the present disclosure shall be defined by the appended claims.

Claims
  • 1. An artificial neural network-based method for detecting a surface pattern of an object, comprising: receiving a plurality of object images of a plurality of objects, wherein the plurality of object images of each of the objects comprise images of the object captured based on a light of a plurality of lighting directions, and the plurality of lighting directions are different from one another; superimposing the plurality of the object images of each of the objects into an initial image; and performing deep learning by using the plurality of initial images of the plurality of objects to build a predictive model for identifying the surface pattern of the object.
  • 2. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, further comprising: normalizing the plurality of object images; wherein the step of superimposing the plurality of object images of each of the objects into the initial image comprises: superimposing the plurality of normalized object images of each of the objects into the initial image.
  • 3. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, further comprising: transforming the plurality of initial images into matrices; wherein the step of performing the deep learning by using the plurality of initial images of the plurality of objects comprises: performing the deep learning by using the plurality of matrices.
  • 4. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein the light of the plurality of lighting directions has different spectra.
  • 5. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein the light of the plurality of lighting directions has the same spectrum.
  • 6. The artificial neural network-based method for detecting a surface pattern of an object of claim 5, wherein the plurality of object images of each of the objects further comprise an image of the object captured based on another light of the plurality of lighting directions, and the spectrum of the another light is different from that of the light.
  • 7. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein the surface pattern identified by the predictive model is at least two of slots, cracks, bumps, sand holes, air holes, impact marks, scratches, edges and patterns.
  • 8. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein the deep learning is implemented by a convolutional neural network (CNN) algorithm.
  • 9. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein each of the object images of each of the objects is formed by combining a plurality of detection images of the object.
  • 10. The artificial neural network-based method for detecting a surface pattern of an object of claim 1, wherein the step of performing the deep learning by using the initial image comprises categorizing each of the initial images according to a plurality of predetermined surface pattern categories.
  • 11. A system for detecting a surface pattern of an object, comprising: a driver component, carrying the object, wherein a surface of the object is divided along a first direction into a plurality of areas, the driver component being further for sequentially moving the plurality of areas to a detection position; a plurality of light source components, configured to face the detection position and arranged in a plurality of different lighting directions with respect to the detection position, providing light to illuminate the detection position, respectively, wherein a light incident angle of the light provided by each of the light source components relative to a normal line of the area located at the detection position is less than or equal to 90 degrees; and a photosensitive element, configured to face the detection position, capturing a detection image of each of the areas sequentially located at the detection position when the light illuminates the detection position in each of the lighting directions.
  • 12. The system for detecting a surface pattern of an object according to claim 11, wherein the plurality of lighting directions at least comprise a front side of the detection position, a rear side of the detection position, a left side of the detection position and a right side of the detection position.
  • 13. The system for detecting a surface pattern of an object according to claim 11, wherein between optical axes of any two adjacent light source components among the plurality of light source components is a same predetermined included angle.
  • 14. The system for detecting a surface pattern of an object according to claim 11, wherein the plurality of light source components provide the light with the same light incident angle.
  • 15. The system for detecting a surface pattern of an object according to claim 11, wherein the light is multi-spectrum light, each of the light source components comprises a light emitting component and a spectroscopic component, the light emitting component is for generating the multi-spectrum light, the spectroscopic component is located between the light emitting component and the detection position and has a plurality of filter regions respectively corresponding to the plurality of spectra, each of the filter regions is for separating the multi-spectrum light into the light of the corresponding spectrum, and the photosensitive element captures, under the light of each of the spectra illuminating the detection position in each of the lighting directions, the detection image of each of the areas sequentially located at the detection position.
  • 16. The system for detecting a surface pattern of an object according to claim 11, further comprising a spectroscopic component, wherein the spectroscopic component is located between the photosensitive element and the detection position and has a plurality of filter regions respectively corresponding to the plurality of spectra, the light is multi-spectrum light, each of the filter regions is for separating the multi-spectrum light into the light of the corresponding spectrum, and the photosensitive element captures, through the plurality of filter regions and under the light illuminating the detection position in each of the lighting directions, the detection image of each of the areas sequentially located at the detection position.
  • 17. The system for detecting a surface pattern of an object according to claim 11, wherein a photosensitive axis of the photosensitive element is parallel to the normal line.
  • 18. The system for detecting a surface pattern of an object according to claim 11, wherein the surface of the object is a cylindrical surface, and the driver component rotates the object in a clockwise direction or a counterclockwise direction relative to the photosensitive element to move the plurality of areas to the detection position.
  • 19. The system for detecting a surface pattern of an object according to claim 11, wherein the object is a plate, and the driver component horizontally moves the object along the first direction relative to the photosensitive element to move the plurality of areas to the detection position.
  • 20. The system for detecting a surface pattern of an object according to claim 11, further comprising: a processor, coupled to the photosensitive element, combining the plurality of detection images corresponding to the same lighting direction into an object image.
  • 21. The system for detecting a surface pattern of an object according to claim 20, wherein the processor further comprises an artificial neural network system for performing a prediction procedure according to the plurality of object images corresponding to the plurality of different lighting directions so as to identify in the object image an area image presenting the surface pattern of the object.
  • 22. The system for detecting a surface pattern of an object according to claim 20, wherein the photosensitive element is a linear photosensitive element.
  • 23. The system for detecting a surface pattern of an object according to claim 11, further comprising: a processor, coupled to the photosensitive element, capturing, based on a short side of each of the detection images, a middle region of each of the detection images, and combining the plurality of middle regions corresponding to the same lighting direction into an object image.
  • 24. The system for detecting a surface pattern of an object according to claim 23, wherein the processor further comprises an artificial neural network system for performing a prediction procedure according to the plurality of object images corresponding to the plurality of different lighting directions so as to identify in the object image an area image presenting the surface pattern of the object.
  • 25. The system for detecting a surface pattern of an object according to claim 23, wherein the photosensitive element is a two-dimensional photosensitive element.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Patent Application Ser. No. 62/848,216, filed on May 15, 2019, the entire disclosure of which is hereby incorporated by reference.

Provisional Applications (1)
Number Date Country
62848216 May 2019 US