This application is based on and claims priority under 35 U.S.C. §119 to Japanese Patent Application 2010-026766, filed on Feb. 9, 2010, the entire content of which is incorporated herein by reference.
This disclosure generally relates to a defect detection apparatus and a defect detection method for detecting a defect on a surface of an inspection object.
It is known that various defects are generated on a casting (which will be hereinafter referred to as a workpiece) made of aluminum or the like and used as a component of a vehicle and the like. The defects include, for example, a hollow blowhole (a pinhole), a stamp formed on a surface of the workpiece when swarf (e.g., turnings, cutting chips and the like) is pressed thereon as the workpiece is set on and held by a chuck, chatter marks formed on the surface of the workpiece because of uneven rigidity of the workpiece, and the like. The aforementioned defects on the workpiece may degrade the quality of a product.
Therefore, generally, a visual inspection is conducted on the workpiece in order to check for defects on the surface of the workpiece and in order to ensure the quality of the product. However, the visual inspection of the workpiece may impose a burden on an inspector. Furthermore, because the visual inspection conducted by the inspector is based on a subjective assessment, an inspection result (an assessment) may vary between inspectors.
In order to address the drawback mentioned above, there exists a defect inspection method disclosed in, for example, JP2006-208259A. According to the defect inspection method disclosed in JP2006-208259A, light is radiated onto an inspection surface of a workpiece, and then the light reflected by the inspection surface of the workpiece (i.e. the reflected light) is captured by a camera, so that a defect on the inspection surface of the workpiece is detected on the basis of an image captured by the camera. More specifically, according to the defect inspection method disclosed in JP2006-208259A, an intensity of the light when the camera captures the inspection surface of the workpiece is adjusted so that a voltage value generated by an image sensor, which corresponds to a portion of the inspection surface of the workpiece other than the defects, is saturated.
According to the defect inspection method disclosed in JP2006-208259A, the defect such as a blowhole, a stamp and the like may be detected on the surface of the workpiece having an uneven cutting surface.
However, according to the defect inspection method disclosed in JP2006-208259A, because the portion of the image sensor corresponding to the normal portion of the inspection surface is set to be saturated by adjusting the intensity of the light, the portion of the image sensor corresponding to a shallow defect that strongly reflects the light, such as the chatter marks, may also be saturated, as in the case of the portion of the image sensor corresponding to the normal, non-defective inspection surface. Therefore, according to the defect inspection method disclosed in JP2006-208259A, a shallow defect such as the chatter marks and the like may not be accurately detected.
A need thus exists for a defect detection apparatus and a defect detection method which are not susceptible to the drawback mentioned above.
According to an aspect of this disclosure, a defect detection apparatus for detecting a defect formed on an inspection surface of an inspection object, the defect detection apparatus includes a table, on which the inspection object is placed, including a table surface having a flat surface, a lighting device emitting a light to the inspection surface of the inspection object, an image capturing device capturing an image of the inspection surface of the inspection object, a displacement mechanism changing at least one of a relative direction of an optical axis of the lighting device relative to the table surface and a relative direction of an optical axis of the image capturing device relative to the table surface, an image data obtaining portion obtaining an image data from images, which are captured by the image capturing device while the relative direction is changed by the displacement mechanism, a feature extracting portion extracting a feature representing a reflection characteristic of the inspection surface on the basis of the image data, and a defect specification portion specifying a type of the defect formed on the inspection surface of the inspection object on the basis of the extracted feature.
According to another aspect of this disclosure, a defect detection method for detecting a defect formed on an inspection surface of an inspection object, which is placed on a table surface having a flat surface of a table and to which a light is emitted from a lighting device, the defect detection method includes an image data obtaining step of obtaining an image data from a data captured by an image capturing device, which captures images of the inspection surface of the inspection object, while changing at least one of a relative direction of an optical axis of the lighting device relative to the table surface and a relative direction of an optical axis of the image capturing device relative to the table surface, a feature extracting step of extracting a feature representing a reflection characteristic of the inspection surface on the basis of the image data, and a determining step of determining whether or not the defect is formed on the inspection surface of the inspection object on the basis of the extracted feature.
The foregoing and additional features and characteristics of this disclosure will become more apparent from the following detailed description considered with reference to the accompanying drawings, wherein:
Embodiments of a defect detection apparatus and a defect detection method will be described below with reference to the attached drawings. Illustrated in
<Lighting Device>
Illustrated in
<Camera>
In the embodiments, a monochrome digital camera using an image sensor such as a charge coupled device (CCD) and the like is adapted as the camera 2 (the image capturing device). Furthermore, in the embodiments, an output of the camera 2 is set to eight bits (8 bits) and a pixel value is set in a range between zero (0) and 255, inclusive. In the embodiments, the lighting device 1 and the like are adjusted so that a portion of the inspection surface Wf of the workpiece W having no defect has a pixel value of approximately 150 on the captured image. Additionally, for example, a color digital camera, an analogue camera or the like may be adapted as the camera 2. In a case where the analogue camera is adapted as the camera 2, an analogue-to-digital converter (an A/D converter) may need to be connected to an image data obtaining portion 53 of the calculation device 5.
<Displacement Mechanism>
As briefly described above, the defect detection apparatus is configured so that the inclination of the table surface 3f is changeable by the displacement mechanism 4. As illustrated in
An example of changes in the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f and the relative direction of the optical axis of the camera 2 relative to the table surface 3f is indicated in
<Calculation Device>
Illustrated in
The actuator control portion 52 calculates a deformation of each of the actuators 41 of the displacement mechanism 4 in order to incline the table surface 3f of the table 3 in a target direction and in order to control an inclination degree of the table surface 3f of the table 3. Furthermore, the actuator control portion 52 actuates each actuator 41 through a signal line.
The image data obtaining portion 53 sends a signal to the camera 2 via the signal line in order to command the camera 2 to capture the image(s) of the inspection surface Wf of the workpiece W and obtains the image data from the image(s) captured by the camera 2. In a case where a digital still camera is adapted as the camera 2, the image data obtaining portion 53 obtains the captured image of the camera 2 as the image data. On the other hand, in a case where a digital movie camera is adapted as the camera 2, the image data obtaining portion 53 does not need to send the signal to order the camera 2 to capture the image of the inspection surface Wf of the workpiece W. In this case, the image data obtaining portion 53 extracts one frame from images, which are captured by the camera 2 as a movie data, as the image data.
The corresponding point searching portion 54 searches for the corresponding point between the reference image data and the angle-changed image data. In the embodiments, the camera 2 captures images of the workpiece W while the inclination of the table surface 3f is changed. Therefore, any desired point (which will be hereinafter referred to as an inspection point) on the inspection surface Wf of the workpiece W is not positioned at the same coordinate on each image data. In other words, the inspection point is found at different coordinates on each image data. Therefore, in order to detect and specify a defect at the inspection point, the correspondence of pixels between the plural image data of the inspection surface Wf captured at different angle positions needs to be obtained. For example, one pixel on an image data (i.e. a reference image data), which is captured by the camera 2 while the table surface 3f is at the reference position, i.e. while the table surface 3f is in a horizontal state, is set as the inspection point. Then, a window (e.g. a frame, an area) of a predetermined size is set centering on the inspection point in order to search for the corresponding point on the other image data (i.e. the angle-changed image data), which are captured by the camera 2 while the inclination of the table surface 3f is changed, on the basis of a known matching method. In this case, any desired known matching method, such as a phase-only correlation method, a normalized correlation method, a geometric correlation method and the like, may be selected and used. Additionally, one or more inspection points may be set on the inspection surface Wf on the image data. In the embodiments, further explanation of the defect detection process (the defect specification process) including image processing is given based on the inspection surface Wf having plural inspection points.
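As a minimal illustration of the window-based search described above, the normalized correlation method may be sketched as follows. The function names, the window radius, the search range and the nested-list image representation are all illustrative assumptions and are not part of the disclosed apparatus:

```python
import math

def ncc(a, b):
    """Normalized correlation between two equally sized windows (nested lists)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) * sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0

def window(img, cx, cy, r):
    """Square window of radius r centered on (cx, cy)."""
    return [row[cx - r:cx + r + 1] for row in img[cy - r:cy + r + 1]]

def search_corresponding_point(ref, changed, cx, cy, r=1, search=2):
    """Find the point on the angle-changed image that best matches the
    window around inspection point (cx, cy) on the reference image."""
    template = window(ref, cx, cy, r)
    best, best_pt = -2.0, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            score = ncc(template, window(changed, x, y, r))
            if score > best:
                best, best_pt = score, (x, y)
    return best_pt
```

In practice the search window and range would be chosen from the expected displacement between angle positions; a phase-only or geometric correlation method could be substituted without changing the surrounding flow.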
The viewpoint shifting portion 55 converts the angle-changed image data into an image data corresponding to an image data to be captured while the table surface 3f is at the reference position (i.e. a viewpoint-shifted image data). The viewpoint shifting portion 55 obtains the inclination (the inclination degree) of the table surface 3f when each angle-changed image data is obtained, i.e. the relative direction vector, from the control portion 51. Furthermore, the viewpoint shifting portion 55 obtains a coordinate of the corresponding point of each of the inspection points for each angle-changed image data from the corresponding point searching portion 54. Then, the viewpoint shifting portion 55 generates the image data in which a viewpoint of the angle-changed image data is shifted to a position where the second deviation angle φ is zero degrees (0°) (i.e. φ=0°) on the basis of the relative direction vector and the coordinate of the corresponding point of each of the inspection points for each angle-changed image data by using a known method such as an affine transformation and the like.
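As a deliberately simplified stand-in for the viewpoint shift, the sketch below assumes that tilting by the second deviation angle φ merely foreshortens coordinates along one axis, so shifting back to φ=0° divides by cos φ; the actual apparatus applies a full affine transformation derived from the relative direction vector:

```python
import math

def viewpoint_shift(points, phi_deg):
    """Map pixel coordinates observed at second deviation angle phi back to
    the frontal (phi = 0) view, assuming foreshortening along x only."""
    c = math.cos(math.radians(phi_deg))
    return [(x / c, y) for x, y in points]
```

A full affine transformation would also account for the first deviation angle θ and any in-plane rotation; this one-axis model is only for intuition.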
The feature extracting portion 56 extracts the feature of each of the inspection points based on the viewpoint-shifted image data. The inspection point(s) represents any desired point(s) on the inspection surface Wf. However, in the embodiments, each predetermined pixel on the reference image data is considered as each of the inspection points in the processing.
According to the defect detection apparatus having the above-described configuration, because images of the inspection surface Wf are captured by the camera 2 while changing the relative direction of the optical axis of the camera 2 relative to the table surface 3f, the reflection intensity of each of the inspection points in each different relative direction (i.e. an example of a reflection distribution, which will be hereinafter referred to as a reflection intensity distribution) may be obtained.
In a first embodiment, the defect detection apparatus is configured so as to control the displacement mechanism 4 so that the normal line of the table surface 3f rotates about the X and Y-axes. In other words, the camera 2 captures the images of the inspection surface Wf of the workpiece W while changing both of the first deviation angle θ and the second deviation angle φ defining the relative direction vector. Furthermore, in the first embodiment, the feature of each of the inspection points is defined on the basis of the maximum/minimum pixel value of each of the inspection points at each different angle position and the value of the second deviation angle φ formed when the pixel value of the inspection point reaches the maximum/minimum pixel value. Therefore, when each of the inspection points on the reference image data is set as P (x, y), where each of “x” and “y” represents a coordinate value, a feature F (x, y) of each of the inspection points is defined as follows: F (x, y)=((Imin, φmin), (Imax, φmax)), where “Imin” and “Imax” represent the minimum pixel value and the maximum pixel value, respectively, in the case where the images of each of the inspection points are captured from various directions (angles), and “φmin” and “φmax” represent the second deviation angles φ in the case where each of the inspection points reaches the minimum pixel value and the maximum pixel value, respectively.
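The feature F (x, y)=((Imin, φmin), (Imax, φmax)) may be sketched as follows; a minimal illustration assuming the pairs of each second deviation angle φ and the pixel value observed there are already collected for one inspection point:

```python
def extract_feature(samples):
    """samples: list of (phi_deg, pixel_value) pairs for one inspection point,
    one entry per relative direction.  Returns ((Imin, phi_min), (Imax, phi_max))."""
    phi_min, i_min = min(samples, key=lambda s: s[1])
    phi_max, i_max = max(samples, key=lambda s: s[1])
    return ((i_min, phi_min), (i_max, phi_max))
```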
Processes of the control executed by the defect detection apparatus and the defect detection method according to the first embodiment will be described below with reference to a flowchart in
Then, the control portion 51 actuates the actuators 41 in order to incline the table surface 3f in the predetermined direction (step S04). In the first embodiment, the control portion 51 memorizes (stores) plural relative direction vectors (θ, φ), so that the control portion 51 actuates the actuators 41 in response to each relative direction vector. More specifically, the control portion 51 memorizes the relative direction vectors dk=(θi, φj), where (i=1, . . . , m, j=1, . . . , n, k=1, . . . , m*n), and the control portion 51 selects one of the relative direction vectors dk.
In the first embodiment, firstly, an index “k” in the relative direction vector dk is initialized by one (1) in order to obtain a relative direction vector d1. Then, the control portion 51 actuates the actuators 41 in response to a deviation angle represented by the relative direction vector d1 (step S04).
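The memorized relative direction vectors dk=(θi, φj), with k running over 1 to m*n, may be enumerated as in the following sketch (the function name and the dictionary keyed by k are illustrative):

```python
def direction_vectors(thetas, phis):
    """Enumerate d_k = (theta_i, phi_j) with k = 1..m*n, in the order described."""
    return {k + 1: d
            for k, d in enumerate((t, p) for t in thetas for p in phis)}
```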
After the actuation of the actuators 41 is completed, the control portion 51 sends the command to the image data obtaining portion 53 to obtain the image data while the table surface 3f is inclined. The image data obtaining portion 53, which receives the command from the control portion 51, obtains the image(s) as the angle-changed image data (step S05). The obtained angle-changed image data is stored in the memory.
Following the process in step S05, the control portion 51 outputs a command to the corresponding point searching portion 54 to search the corresponding point of each pixel of the reference image data, which is stored within the memory, on the angle-changed image data, which is also stored within the memory. The corresponding point searching portion 54, which receives the command from the control portion 51, obtains the reference image data and the angle-changed image data from the memory, and then, searches the corresponding point by using the correlation method and the like, as described above (step S06). More specifically, the corresponding point searching portion 54 searches a point (corresponding point) on the angle-changed image data corresponding to the inspection point (each pixel). Information of the corresponding point obtained by the corresponding point searching portion 54 is temporarily stored within the memory. More specifically, the coordinate (x, y) of each pixel on the reference image data and a coordinate (xd, yd) of each corresponding point on the angle-changed image data are stored within the memory while relating the coordinate (x, y) and the coordinate (xd, yd). Additionally, displacement (i.e. a displacement amount) (xd−x, yd−y) from the coordinate on the reference image data may be used instead of the coordinate of the corresponding point on the angle-changed image data.
After the search of the corresponding point(s) is completed, the control portion 51 outputs a command to the viewpoint shifting portion 55 to generate the viewpoint-shifted image data based on the angle-changed image data, which is stored within the memory. Simultaneously, the control portion 51 transmits the current relative direction vector dk to the viewpoint shifting portion 55. The viewpoint shifting portion 55, which receives the command from the control portion 51, generates the viewpoint-shifted image data from the angle-changed image data based on the relative direction vector dk and a positional relationship between each of the inspection points on the reference image data and each of the corresponding points on the angle-changed image data (step S07). The generated viewpoint-shifted image data is stored within the memory.
After the generation of the viewpoint-shifted image data is completed, the control portion 51 outputs a command to the feature extracting portion 56 to execute a feature extracting process. The feature extracting portion 56, which receives the command from the control portion 51, extracts the feature (i.e. feature quantity, feature amount) based on the viewpoint-shifted image data, which is stored within the memory (step S08, a feature extracting step). As described above, in the first embodiment, the maximum/minimum pixel values of each of the inspection points viewed from various angles are used as the feature. Therefore, the feature extracting portion 56 compares each pixel value on the reference image data with the pixel value of each corresponding point on the viewpoint-shifted image data in order to obtain the maximum/minimum pixel value of each inspection point. The detailed explanation about the process executed in step S08 will be described below with reference to a flowchart in
In the feature extracting process, firstly, one pixel (one inspection point) P (x, y) on the reference image data is selected (step S21), and then, the feature extracting portion 56 obtains a pixel value I (x, y) of the pixel P (step S22). Then, the feature extracting portion 56 compares the obtained pixel value I (x, y) with the current feature F (x, y)=((Imin, φmin), (Imax, φmax)) and updates the feature F. More specifically, in a case where the obtained pixel value I (x, y) is greater than the maximum pixel value Imax (i.e. I (x, y)>Imax) (Yes in step S23), the feature extracting portion 56 updates the maximum pixel value Imax to the pixel value I (x, y) (i.e. Imax=I (x, y)) and sets the value of the current second deviation angle φj as the deviation angle formed when the pixel value of the inspection point reaches the maximum pixel value (step S24). Furthermore, in a case where the obtained pixel value I (x, y) is smaller than the minimum pixel value Imin (i.e. I (x, y)<Imin) (Yes in step S25), the feature extracting portion 56 updates the minimum pixel value Imin to the pixel value I (x, y) (i.e. Imin=I (x, y)) and sets the value of the current second deviation angle φj as the deviation angle formed when the pixel value of the inspection point reaches the minimum pixel value (step S26).
The above-mentioned feature extracting process (steps S21 to S26) is repeated until no unprocessed pixel remains (No in step S27).
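The update in steps S23 to S26 amounts to a running minimum/maximum; a minimal sketch, assuming the feature is seeded with 255 for Imin and 0 for Imax, consistent with the 8-bit pixel range described earlier:

```python
def update_feature(feature, pixel_value, phi_j):
    """One pass of steps S23-S26: feature = ((Imin, phi_min), (Imax, phi_max))."""
    (i_min, phi_min), (i_max, phi_max) = feature
    if pixel_value > i_max:          # step S23 -> S24
        i_max, phi_max = pixel_value, phi_j
    if pixel_value < i_min:          # step S25 -> S26
        i_min, phi_min = pixel_value, phi_j
    return ((i_min, phi_min), (i_max, phi_max))
```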
After the feature extraction relative to one angle-changed image data is completed, the control portion 51 checks whether or not an unprocessed relative direction vector dk exists (step S09). In a case where the unprocessed relative direction vector dk exists (Yes in step S09), the control portion 51 increments the index “k” and specifies a next unprocessed relative direction vector dk. Then, the process proceeds to step S04.
Accordingly, the extraction of the feature F (x, y) of each of the inspection points P (x, y) is completed. Additionally, the extraction of the features F of plural inspection points P may be completed simultaneously; however, in order to facilitate the explanation of the process executed by the defect detection apparatus, the explanation of the feature extraction and the following processes will be given with the feature F of a single inspection point P as an example in this embodiment, unless otherwise mentioned. The control portion 51 then transmits a command to the defect specification portion 57 to execute a defect specification process. The defect specification portion 57 specifies the defect formed on the inspection surface Wf of the workpiece W in response to the command outputted thereto from the control portion 51 (step S10, a determining step).
Illustrated in
Illustrated in
Illustrated in
As is evident from
On the other hand, a reflection intensity of the blowhole reaches a maximum value when the second deviation angle φ is about zero degrees (0°), including zero degrees (0°), as illustrated in
Furthermore, a significant difference is not found between a minimum reflection intensity at the normal portion of the inspection surface Wf of the workpiece W and a minimum reflection intensity at the blowhole of the inspection surface Wf of the workpiece W. Therefore, the calculation device 5 may determine (specify) whether each of the inspection points corresponds to the normal portion or the blowhole by comparing a ratio of the maximum reflection intensity Imax to the minimum reflection intensity Imin of the feature F (x, y) with the predetermined threshold value TH. In other words, in a case where the ratio of the maximum reflection intensity Imax to the minimum reflection intensity Imin is greater than the predetermined threshold value TH (i.e. Imax/Imin>TH), the calculation device 5 determines that the inspection point P (x, y) corresponds to the normal portion. On the other hand, in a case where the ratio is equal to or smaller than the predetermined threshold value TH (i.e. Imax/Imin≦TH), the calculation device 5 determines that the inspection point P (x, y) corresponds to the blowhole.
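The ratio test against the threshold value TH may be sketched as follows; the guard for a zero minimum intensity is an added assumption, not part of the disclosure:

```python
def classify_point(i_min, i_max, th):
    """Normal portion if Imax/Imin > TH, blowhole if Imax/Imin <= TH."""
    if i_min == 0:   # assumed guard: a fully dark minimum makes the ratio infinite
        return "normal"
    return "normal" if i_max / i_min > th else "blowhole"
```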
As is evident from
The determination results obtained in the above-described manner are substituted (added) into the memory as elements L [y][x] of a two-dimensional array L having the same dimensions as the reference image data, which is stored within the memory (step S32). In other words, the determination results are inputted into the memory while being related, through the two-dimensional array L, to the corresponding reference image data. More specifically, when the normal portion on the inspection surface Wf of the workpiece W is represented by zero (0) and the blowhole on the inspection surface Wf of the workpiece W is represented by one (1), zero (0) is substituted into [y] and [x] of the two-dimensional array L (i.e. L [y][x]=0) in the case where the inspection point P (x, y) is determined to correspond to the normal portion. On the other hand, in the case where the inspection point P (x, y) is determined to correspond to the blowhole, one (1) is substituted into [y] and [x] of the two-dimensional array L (i.e. L [y][x]=1). Accordingly, two-dimensional data indicating the different defect types may be generated. The two-dimensional data may be considered as an image data in which each different defect type is represented by a pixel value. Hereinafter, the two-dimensional data is referred to as a defect image data.
The defect specification portion 57 repeats the above-described processes (steps S31 to S32) until the defect specification of all of the inspection points P (x, y) is completed (step S33).
After the defect specification of all of the inspection points P (x, y) is completed (No in step S33), the defect specification portion 57 executes a process of labeling neighboring pixels having the same pixel value on the defect image data as one area (which will be hereinafter referred to as a defect area) (step S34). For example, the defect specification portion 57 executes a multiple value labeling process to the defect image data. As a result, pixels having the same label belong to the same defect area.
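The labeling in step S34 groups neighboring pixels having the same value on the defect image data into one defect area. A minimal 4-connected flood-fill sketch, which is only an illustration of such a multiple value labeling process and not the actual method used:

```python
def label_defect_image(defect, value):
    """4-connected labeling of pixels equal to `value` in the defect image data
    (a list of lists); returns a same-size array of labels, 0 = unlabeled."""
    h, w = len(defect), len(defect[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if defect[y][x] == value and labels[y][x] == 0:
                next_label += 1
                stack = [(x, y)]      # flood-fill one connected defect area
                while stack:
                    cx, cy = stack.pop()
                    if (0 <= cx < w and 0 <= cy < h
                            and defect[cy][cx] == value and labels[cy][cx] == 0):
                        labels[cy][cx] = next_label
                        stack += [(cx + 1, cy), (cx - 1, cy),
                                  (cx, cy + 1), (cx, cy - 1)]
    return labels
```

Pixels sharing a label then belong to the same defect area, as stated above.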
The defect specification portion 57 extracts the feature of each defect area based on a labeling data (step S35). In this case, dimensions of the defect area, a size of a circumscribed rectangle, a ratio between each side of the circumscribed rectangle, a fillet radius and the like may be used as the feature F.
Then, the defect specification portion 57 examines whether or not the defect specification result is appropriate on the basis of the defect type and the feature of each defect area (step S36). For example, because an upper limit of the size of a blowhole is predictable, in a case where the dimensions of a defect area determined (specified) as the blowhole are greater than a predetermined threshold, the calculation device 5 re-determines that the defect area does not correspond to the blowhole. Furthermore, because a product hole formed on the workpiece W has a regulated diameter while the blowhole has various shapes, the product hole and the blowhole may be distinguished from each other by the dimensions, diameters, circularity and the like of the product hole and the blowhole. Additionally, the feature of the defect area may be obtained by any other desired method, and the determination (examination) of the feature F of the defect area may be executed by any other desired method.
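The examination in step S36 may be sketched as follows, combining an area upper bound for the blowhole with a circularity test to separate the regulated product hole; the threshold values, labels and the specific circularity rule are illustrative assumptions:

```python
import math

def reexamine(defect_type, area_px, perimeter_px, max_blowhole_area, circ_th=0.85):
    """Illustrative sketch of step S36: reject oversize 'blowhole' regions and
    reclassify near-circular holes as product holes (thresholds assumed)."""
    if defect_type != "blowhole":
        return defect_type
    if area_px > max_blowhole_area:         # larger than any predictable blowhole
        return "not_blowhole"
    # isoperimetric circularity: 1.0 for a perfect circle, smaller otherwise
    circularity = 4 * math.pi * area_px / (perimeter_px ** 2)
    return "product_hole" if circularity >= circ_th else "blowhole"
```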
After the defect type of each inspection point P is specified on the basis of the reflection intensity distribution as in the above-described manner, plural inspection points having the same defect type are labeled as one defect area in order to determine whether or not the defect area is considered as the defect. Accordingly, determination (specification) accuracy in the defect specification may be increased.
A second embodiment of the defect detection apparatus and the defect detection method will be described below. In the second embodiment, the first deviation angle θ in the relative direction vector is set (limited) to zero degrees (0°) and 180 degrees (180°). Therefore, in the second embodiment, the control portion 51 memorizes the relative direction vectors dk=(θi, φj) (where i=1, 2, j=1, . . . , n, k=1, . . . , 2*n). Additionally, the index “i” being one (1) indicates that the first deviation angle θ is zero degrees (0°) (i.e. θ1=0°) and the index “i” being two (2) indicates that the first deviation angle θ is 180 degrees (180°) (i.e. θ2=180°). Hence, according to the second embodiment, because the first deviation angle θ is set (limited), the relative direction vector dk is set on an XZ-plane, which is orthogonal to the table surface 3f. In other words, in the second embodiment, the lighting device 1 and the camera 2 are displaced relative to the table surface 3f along the directions of zero degrees (0°) and 180 degrees (180°) in latitude on the hemisphere over the table surface 3f.
Illustrated in
Then, the control portion 51 actuates the actuators 41 in order to incline the table surface 3f in the predetermined direction (step S42). As described above, plural relative direction vectors dk=(θi, φj) are memorized within the memory according to the second embodiment, so that the control portion 51 selects one of the relative direction vectors dk. In this case, firstly, the index “k” is initialized by one (1) in order to obtain the relative direction vector d1. Then, the control portion 51 actuates the actuators 41 on the basis of the deviation angle formed when the relative direction vector is d1 (step S42).
After the actuation of the actuators 41 is completed, the control portion 51 transmits the command to the image data obtaining portion 53 to obtain the image data. Accordingly, the image data obtaining portion 53, which receives the command from the control portion 51, obtains the image(s), which are obtained by the camera 2 while the table surface 3f is inclined, as the angle-changed image data (step S43). The obtained angle-changed image data are stored within the memory.
When the reference image data and the angle-changed image data, which is obtained at a single image capturing angle (position), are obtained, processes similar to the processes from step S06 to step S07 in the first embodiment are executed, thereby generating the viewpoint-shifted image data (steps S44 to S45).
After the viewpoint-shifted image data is generated, the control portion 51 outputs the command to the feature extracting portion 56 to extract the feature F. Accordingly, the feature extracting portion 56, which receives the command from the control portion 51, extracts the feature F (the characteristic amount, the amount of feature) on the basis of the viewpoint-shifted image data (step S46, the feature extracting step). In the second embodiment, the reflection intensity distribution is used as the feature F. Accordingly, the feature F of each of the inspection points P on the reference image data may be represented as follows: F (x, y, dk)=I (G (x, y)), where G (x, y) represents a function of coordinate conversion for returning (converting) the coordinate of the corresponding point of the inspection point P (x, y) on the viewpoint-shifted image data to the coordinate on the reference image data. The function G (x, y) may be obtained in a process executed by the corresponding point searching portion 54. Furthermore, “I ( )” represents the pixel value on the viewpoint-shifted image data.
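Collecting the reflection intensity distribution F (x, y, dk)=I (G (x, y)) for one inspection point may be sketched as follows, assuming the coordinate-conversion functions G for each relative direction vector are provided by the corresponding point searching portion:

```python
def reflection_distribution(shifted_images, correspondences, x, y):
    """For one inspection point (x, y), collect F(x, y, d_k) = I(G(x, y)) over
    all relative direction vectors.  `shifted_images[k]` is the viewpoint-shifted
    image for d_k; `correspondences[k]` maps a reference coordinate to the
    corresponding coordinate on that image (the function G in the text)."""
    dist = []
    for img, g in zip(shifted_images, correspondences):
        xd, yd = g(x, y)
        dist.append(img[yd][xd])   # I( ) is the pixel value on that image
    return dist
```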
After the process executed to one (single) angle-changed image data is completed, the control portion 51 checks whether or not the unprocessed relative direction vector dk exists (step S47). In a case where the unprocessed relative direction vector dk exists (Yes in step S47), the control portion 51 increments the index k. Then, the process proceeds to step S42.
After the processes executed for all of the predetermined relative direction vectors dk are completed in the above-described manner (No in step S47), the control portion 51 outputs the command to the defect specification portion 57 to execute the defect specification process. Accordingly, the defect specification portion 57 executes the defect specification on the basis of the feature F, which is extracted by the feature extracting portion 56 (step S48, the determining step).
Illustrated in
As illustrated in
As described above, in the case where the defect type corresponds to the chatter marks, an angle at which the maximum reflection intensity is found is displaced from the second deviation angle φ (=0°). On the other hand, as illustrated in
Accordingly, the defect type may be specified in a manner in which a value obtained by quantifying the characteristic of the reflection intensity distribution of each defect type (which will be hereinafter referred to as a distribution characteristic) is extracted as the feature F. For example, a shape of an envelope of the reflection intensity distribution may be used as the distribution characteristic. In this case, the distribution characteristic of the feature F (x, y, dk) may be obtained in a manner in which points of the deviation angle φ and points of the radius vector F (x, y, dk) are plotted on the XZ-plane and those points are connected to form the envelope. The distribution characteristic is not limited to the above-mentioned value and the shape of the envelope. For example, various values may be used as the distribution characteristic as long as they are adaptable to the defect determination process and calculable from the reflection intensity distribution, such as the relative direction vector at which the reflection intensity reaches the maximum value, an area defined (surrounded) by the envelope in a case where the above-mentioned envelope forms a closed curve, a density distribution of the points of the deviation angle φ and the points of the radius vector F (x, y, dk) plotted on the XZ-plane, and the like.
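One simple distribution characteristic, the deviation angle at which the reflection intensity peaks, may be sketched as follows; the interpretation that a peak displaced from φ=0° suggests chatter marks mirrors the observation above, but the function itself is only an illustration:

```python
def peak_deviation_angle(phis, intensities):
    """Deviation angle phi at which the reflection intensity distribution peaks.
    A peak displaced from phi = 0 suggests, e.g., chatter marks."""
    return max(zip(intensities, phis))[1]
```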
Accordingly, as is the case with the first embodiment, after the defect type of each of the inspection points P (x, y) on the reference image data is specified, the defect specification result is substituted into the defect image data (step S52), in other words, the defect specification result is added to the defect image data so as to be related thereto. In the second embodiment, the number indicating the defect type (e.g., the normal portion: zero (0), the blowhole: one (1), the stamp: two (2), the chatter marks: three (3), and the water drop: zero (0)) is substituted into the two dimensional array.
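The defect image data described above might be represented as a simple two-dimensional array. In this sketch the numeric codes follow the text, while the type names and helper functions are illustrative assumptions.

```python
# Numeric defect-type codes as given in the text: normal portion and
# water drop -> 0, blowhole -> 1, stamp -> 2, chatter marks -> 3.
# The names themselves are illustrative.
DEFECT_CODES = {"normal": 0, "blowhole": 1, "stamp": 2,
                "chatter_marks": 3, "water_drop": 0}

def make_defect_image(width, height):
    # All inspection points P(x, y) start as "normal" (code 0).
    return [[DEFECT_CODES["normal"]] * width for _ in range(height)]

def record_result(defect_image, x, y, defect_type):
    # Substitute the specification result for P(x, y) into the array.
    defect_image[y][x] = DEFECT_CODES[defect_type]

img = make_defect_image(4, 3)
record_result(img, 2, 1, "blowhole")
```

Note that a water drop receives the same code as the normal portion, so it is deliberately not carried forward as a defect.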
The defect specification portion 57 repeats the above-described processes in steps S51 to S52 until the defect specification of all of the inspection points P (x, y) is completed (No in step S53).
After the defect specification of all of the inspection points P (x, y) is completed (Yes in step S53), the defect specification portion 57 groups the inspection points according to the defect type based on the defect image data in order to obtain the defect area(s) (step S54). For example, a known multiple-value labeling process may be used for the grouping of the inspection points.
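The grouping step could be sketched as a flood fill that joins 4-connected inspection points sharing the same nonzero defect code. The text only refers to a known multiple-value labeling process, so the function name, the 4-connectivity, and the output format here are assumptions.

```python
from collections import deque

def multi_value_label(defect_image):
    """Hypothetical sketch of multiple-value labeling: 4-connected
    inspection points sharing the same nonzero defect code are grouped
    into one defect area."""
    h, w = len(defect_image), len(defect_image[0])
    labels = [[0] * w for _ in range(h)]
    areas = []  # one entry per defect area: (defect_code, [(x, y), ...])
    for y in range(h):
        for x in range(w):
            code = defect_image[y][x]
            if code == 0 or labels[y][x]:
                continue  # normal portion, or already grouped
            # Breadth-first flood fill over same-code neighbours.
            queue, points = deque([(x, y)]), []
            labels[y][x] = len(areas) + 1
            while queue:
                cx, cy = queue.popleft()
                points.append((cx, cy))
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if (0 <= nx < w and 0 <= ny < h and not labels[ny][nx]
                            and defect_image[ny][nx] == code):
                        labels[ny][nx] = len(areas) + 1
                        queue.append((nx, ny))
            areas.append((code, points))
    return areas

# A blowhole area (code 1) and a chatter-mark area (code 3).
areas = multi_value_label([[0, 1, 1],
                           [3, 0, 1],
                           [3, 0, 0]])
```

Each resulting area can then be passed on to the feature extraction of step S55.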
Then, as is the case with the first embodiment, the defect specification portion 57 extracts the feature of each defect area (step S55). Following the process executed in step S55, the defect specification portion 57 determines whether or not the result of the defect specification is appropriate on the basis of the defect type of each defect area and the feature F (step S56).
Accordingly, in the second embodiment, the distinction between the normal portion, the blowhole, the stamp, the chatter marks, the water drop and the like may be achievable. A known defect detection apparatus and a defect determination method are likely to determine a water drop on an inspection surface of a workpiece as a defect. However, according to the second embodiment, the defect detection apparatus and the defect detection method may appropriately specify the water drop on the inspection surface Wf of the workpiece W, so that a decrease in a yield rate may be avoided. Furthermore, the defect detection apparatus and the defect detection method according to the second embodiment may properly specify a shallow stamp formed on the inspection surface Wf of the workpiece W, which may result in increasing precision of the product.
A third embodiment of the defect detection apparatus and the defect detection method will be described below. The defect detection apparatus and the defect detection method according to the third embodiment differ from the defect detection apparatus and the defect detection method according to the second embodiment in that the relative direction vector dk=(θi, φj) (where i=1, . . . , m, j=1, . . . , n, and k=1, . . . , m*n) is used instead of the relative direction vector dk=(θi, φj) (where i=1, 2, j=1, . . . , n, and k=1, . . . , 2*n) used in the second embodiment. In other words, in the third embodiment, the camera 2 captures the images of the inspection surface Wf of the workpiece W from directions along the hemisphere on the table surface 3f.
The processes executed by the defect detection apparatus and the defect detection method according to the third embodiment are similar to the processes executed by the defect detection apparatus and the defect detection method according to the second embodiment. However, because the above-described relative direction vector dk is used, the feature F (x, y, dk) of each inspection point P (x, y) appears as a three-dimensional distribution, although the feature F (x, y, dk) of each inspection point P (x, y) appears as the two-dimensional distribution in the second embodiment. Therefore, in the third embodiment, for example, a three-dimensional feature (i.e. a three-dimensional feature amount) such as a surface area of an envelope surface and the like may be used as the distribution characteristic of the feature F (x, y, dk). Alternatively, the feature of the first deviation angle θ and the feature of the first deviation angle θ+180° may be extracted from the feature F (x, y, dk) and the two-dimensional distribution characteristic may be obtained on the basis of the extracted features, as is the case with the second embodiment. In this case, the distribution characteristics in plural directions may be obtained by changing the first deviation angle θ.
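The extraction of a two-dimensional slice at the first deviation angle θ and θ+180° from the three-dimensional feature might look like the following sketch. The dictionary representation of F(x, y, dk), the mirroring convention, and the sample values are all assumptions for illustration.

```python
def planar_slice(feature_3d, theta_deg):
    """Hypothetical sketch: from the three-dimensional feature F(x, y, dk)
    with dk = (theta_i, phi_j), keep only the samples whose first deviation
    angle equals theta or theta + 180 degrees, yielding a two-dimensional
    distribution as in the second embodiment.
    feature_3d maps (theta, phi) -> reflection intensity."""
    opposite = (theta_deg + 180) % 360
    slice_2d = {}
    for (theta, phi), intensity in feature_3d.items():
        if theta == theta_deg:
            slice_2d[phi] = intensity       # positive side of the plane
        elif theta == opposite:
            slice_2d[-phi] = intensity      # mirrored onto the same plane
    return slice_2d

# Toy distribution sampled on a hemisphere grid (values are illustrative).
F = {(0, 0): 9.0, (0, 15): 5.0, (180, 15): 4.0, (90, 15): 2.0}
plane = planar_slice(F, 0)
```

Repeating this with different θ values gives the plural-direction distribution characteristics mentioned above.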
As described above, the distribution characteristic of the chatter marks on the inspection surface Wf of the workpiece W appears prominently in the cutting direction. Therefore, in the case where the feature is two-dimensionally extracted as in the second embodiment, if the capturing direction and the cutting direction are displaced from each other, the feature (the characteristic) of the chatter marks is not likely to appear in the distribution characteristic. Hence, the chatter marks on the inspection surface Wf of the workpiece W may not be detected depending on the setting of the relative direction vector. However, according to the third embodiment, because the three-dimensional characteristic is used, the defect such as the chatter marks, which has a prominent characteristic in a specific direction, may be properly and appropriately detected (specified).
For example, as illustrated in
Other embodiments and modified examples of the above-mentioned embodiments will be described below. In the above-mentioned first embodiment, the camera 2 is fixed. However, in a case where the images of the entire inspection surface Wf of the workpiece W are not obtainable by a single shooting, the defect detection apparatus may be modified so as to horizontally displace the camera 2 or the table 3.
In the above-described embodiments, the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f and the relative direction of the optical axis of the camera 2 relative to the table surface 3f are changeable by inclining the table 3. However, the defect detection apparatus may be modified so as to change the relative directions by using a different mechanism. For example, the defect detection apparatus may be modified so as to displace the lighting device 1 and/or the camera 2 while the table 3 is fixed at one position, so that the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f and the relative direction of the optical axis of the camera 2 relative to the table surface 3f are changed. Furthermore, the defect detection apparatus may be modified so that only the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f is changeable in a manner where the lighting device 1 is moved while the table 3 and the camera 2 are fixed at one position. Still further, the defect detection apparatus may be modified so as to displace the lighting device 1 and the camera 2 separately and independently of each other, so that the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f, the relative direction of the optical axis of the camera 2 relative to the table surface 3f and the relative direction between the optical axis of the lighting device 1 and the optical axis of the camera 2 are changeable. In this case, the feature appears as a bidirectional reflectance distribution function (BRDF).
In the above-described embodiments, the pixel value of the viewpoint-shifted image data is used for the calculation of the feature F. However, the defect detection apparatus and the defect detection method may be modified so as to use a value, which is obtained by normalizing the pixel value of the viewpoint-shifted image data, for the calculation of the feature. In this case, because the pixel value is normalized, an illuminance level of the lighting device 1 and a reflection characteristic of a material used for the workpiece W may not need to be considered. Accordingly, the defect detection apparatus and the defect detection method may be used for various types of the workpiece W without being influenced by, for example, colors applied to the inspection surfaces Wf of the workpieces W, as long as the workpieces W are formed to have the same shape.
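The normalization described above can be sketched as scaling by the peak pixel value over the viewpoint-shifted images. The choice of max-scaling is an assumption, since the text does not specify the normalization formula.

```python
def normalize_pixels(pixel_values):
    """Hypothetical sketch: scale each pixel value of the viewpoint-shifted
    images by the maximum over all capturing directions, removing the
    dependence on the illuminance level of the lighting device and on the
    base reflectance of the workpiece material."""
    peak = max(pixel_values)
    if peak == 0:
        return [0.0] * len(pixel_values)
    return [v / peak for v in pixel_values]

# The same surface imaged under a dim and a bright lamp yields the same
# normalized reflection distribution.
dim = normalize_pixels([10, 50, 90])
bright = normalize_pixels([20, 100, 180])
```

After normalization, the two acquisitions are directly comparable, which is what makes the method material- and color-independent for identically shaped workpieces.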
Accordingly, the defect detection apparatus and the defect detection method for detecting and specifying the defect on the inspection surface Wf of the workpiece W are achieved.
Accordingly, the images of the inspection surface Wf of the workpiece W are captured while changing at least one of the relative direction of the optical axis of the lighting device 1 relative to the table surface 3f and the relative direction of the optical axis of the camera 2 relative to the table surface 3f. Accordingly, plural image data, which are obtained by capturing the inspection surface Wf of the workpiece W by the camera 2 from different angles, are obtained. The pixel values on the image data represent values in which the reflection characteristics (i.e. the reflection intensities) of the inspection surface Wf of the workpiece W are reflected. Therefore, in the case where the image data, which are obtained by capturing the inspection surface Wf of the workpiece W from different directions, are used, the reflection characteristic in each image capturing direction may be obtained. Furthermore, the defect has a specific reflection characteristic depending on a type of the defect. Therefore, the type of the defect formed on the inspection surface Wf of the workpiece W may be specified by comparing the reflection characteristic, which is obtained as mentioned above, and the reflection characteristic of each defect type. Additionally, the defect type includes the normal portion where no defect is formed on the inspection surface Wf of the workpiece W. Hence, the specification of the defect type in the embodiments includes the determination of whether or not the defect exists on the inspection surface Wf of the workpiece W and the specification of the type of the defect in the case where the defect is found.
According to the embodiments, the feature extracting portion 56 extracts the feature F on the basis of at least one of the minimum pixel value of the inspection point P included in the inspection surface Wf on the image data and the maximum pixel value of the inspection point P included in the inspection surface Wf on the image data.
Experiments have proved that the reflection intensity at the blowhole is smaller than the reflection intensity at the normal portion or at the other defects. Therefore, the blowhole may be distinguished from the other defect types on the basis of the maximum value of the reflection intensity. Alternatively, the blowhole and the other defect types may be distinguishable on the basis of the ratio between the maximum value of the reflection intensity and the minimum value of the reflection intensity. Still further, experiments have proved that the reflection intensity of each of the defects other than the chatter marks reaches the maximum value in the direction orthogonal to the inspection surface Wf, whereas the reflection intensity at the chatter marks does not reach the maximum value in that direction. Therefore, the chatter marks may be detected on the basis of the image capturing direction at which the reflection intensity reaches the maximum value.
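These experimental observations suggest simple determination rules, sketched below. The threshold value, the rule ordering, and the normalized-intensity representation are illustrative assumptions, not taken from the source.

```python
def classify_point(distribution, blowhole_threshold=0.3):
    """Hypothetical sketch of the determination rules described above.
    distribution maps a capturing direction (second deviation angle phi,
    in degrees; phi = 0 is orthogonal to the inspection surface) to a
    normalized reflection intensity. The threshold is illustrative."""
    peak = max(distribution.values())
    phi_at_peak = max(distribution, key=distribution.get)
    if peak < blowhole_threshold:
        # The reflection intensity at a blowhole stays small in every
        # capturing direction.
        return "blowhole"
    if phi_at_peak != 0:
        # For chatter marks the maximum is displaced from the direction
        # orthogonal to the inspection surface (phi = 0).
        return "chatter_marks"
    return "normal_or_other"

# Uniformly low intensity in every direction -> "blowhole".
result = classify_point({-15: 0.05, 0: 0.1, 15: 0.08})
```

A real implementation would calibrate the threshold against measured distributions rather than using a fixed constant.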
According to the embodiments, the displacement mechanism 4 changes the relative direction so that the direction vector indicating the redirected relative direction falls within the predetermined plane surface, which is orthogonal to the table surface 3f. Furthermore, the feature extracting portion 56 obtains the two-dimensional reflection distribution of each of the inspection points on the basis of the pixel value of the corresponding inspection point included in the inspection surface Wf on the image data and extracts the feature F on the basis of the two-dimensional reflection distribution.
Accordingly, the two-dimensional reflection characteristic of the inspection surface Wf of the workpiece W may be obtained. Therefore, a feature indicating a more detailed reflection characteristic may be obtained, so that the defect type may be specified with higher accuracy. Still further, according to the embodiments, the stamp, the water drop and the like, which are difficult to specify with a known defect detection apparatus and defect detection method, may be specified.
Experiments have proved that, in a case where the image of the inspection surface Wf of the workpiece W having the chatter marks is captured in a direction parallel to the cutting direction, a prominent reflection characteristic appears, but in a case where the image is captured in a direction orthogonal to the cutting direction, no significant difference between the reflection characteristic of the chatter marks and the reflection characteristics of the other defect types is found. Therefore, in the second embodiment, the displacement mechanism 4 is controlled to change the relative direction so that the direction vector indicating the relative direction (i.e. a redirected relative direction), to which the relative direction is redirected, falls within a range corresponding to a predetermined hemisphere on the table surface 3f. Furthermore, the feature extracting portion 56 is configured so as to obtain the three-dimensional reflection distribution of each of the inspection points on the basis of the pixel values of each of the inspection points, which is included in the inspection surface Wf on the image data. Then, the feature extracting portion 56 extracts the feature F on the basis of the three-dimensional reflection distribution.
Accordingly, the three-dimensional reflection characteristic of the inspection surface Wf of the workpiece W may be obtained. Therefore, the defect having a prominent characteristic in a specific direction, such as the chatter marks and the like, may also be accurately and appropriately detected.
The reflection characteristic may be quantified by various values. Therefore, according to the second embodiment, the feature extracting portion 56 extracts at least one of the reflection intensity of the reflection distribution, the relative direction at which the reflection intensity reaches the maximum value or the minimum value, and a dispersion of the reflection intensity as the feature F. Accordingly, because the defect is detected on the basis of the above-mentioned values as the feature F, the defect may be detected (specified) by a simple calculation.
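The scalar features named above (maximum and minimum reflection intensity, and the dispersion of the intensity over the capturing directions) could be computed as in this sketch; treating the dispersion as the population variance is an assumption.

```python
def intensity_features(intensities):
    """Hypothetical sketch: compute the simple scalar features named in
    the text -- the maximum and minimum reflection intensity, and the
    dispersion (here, population variance) of the intensity over the
    capturing directions."""
    n = len(intensities)
    mean = sum(intensities) / n
    dispersion = sum((v - mean) ** 2 for v in intensities) / n
    return max(intensities), min(intensities), dispersion

# Intensities of one inspection point over five capturing directions.
fmax, fmin, disp = intensity_features([2.0, 5.0, 9.0, 5.0, 2.0])
```

Because each of these is a single arithmetic pass over the distribution, the defect determination based on them remains computationally cheap, as the text notes.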
According to the embodiments, the lighting device 1 is set so that the optical axis thereof corresponds to the optical axis of the camera 2. Accordingly, because the lighting device 1 is configured as the coaxial epi-illuminating device, a size of the defect detection apparatus may be reduced while the lighting device 1 accurately emits the light to a target point.
The principles, preferred embodiment and mode of operation of the present invention have been described in the foregoing specification. However, the invention which is intended to be protected is not to be construed as limited to the particular embodiments disclosed. Further, the embodiments described herein are to be regarded as illustrative rather than restrictive. Variations and changes may be made by others, and equivalents employed, without departing from the spirit of the present invention. Accordingly, it is expressly intended that all such variations, changes and equivalents which fall within the spirit and scope of the present invention as defined in the claims, be embraced thereby.
Number | Date | Country | Kind |
---|---|---|---|
2010-026766 | Feb 2010 | JP | national |