The present invention relates to a tire inspection apparatus and more particularly to a tire inspection apparatus for inspecting molding defects by applying image processing to an image of a tire side captured along the circumferential direction of the tire.
In a conventional tire appearance inspection, a tire is placed on its side on a rotating table, and as the tire is rotated, an image of the upward-facing tire side is captured along the circumferential direction of the tire. An inspection image covering one full round of the tire side is thus obtained and subjected to image processing, and an inspection is performed to detect molding defects, such as raised or recessed flaws, on the tire side (see Patent Document 1).
However, in addition to the molded objects that are intended to be formed, the tire side to be inspected carries rubber burrs, that is, protrusions of rubber that has spilled into the air bleeder holes provided in the mold for molding the tire or into gaps between the molds. As a result, the rubber burrs appear in the obtained inspection image. Because they cannot be distinguished from molding defects in an appearance inspection by image processing, the rubber burrs captured in the inspection image are detected as defects. Hence, the parts detected as defects must be reinspected visually by a worker to determine whether they are molding defects or rubber burrs. This reinspection takes extra labor and time and thus obstructs any improvement in inspection efficiency.
The object of the present invention is therefore to provide a tire inspection apparatus that improves inspection efficiency by eliminating the labor and time required for reinspection by a worker. This is achieved by identifying the locations of rubber burrs in an image of the tire side captured in a tire appearance inspection, so that the tire side can be inspected in a subsequent inspection step without the rubber burrs being detected as defects.
A tire inspection apparatus to solve the above-described problem includes a storage means for storing a model pattern of a molded object to be molded on a tire side of a tire to be inspected and a master image of the tire side, the master image being prepared when the molded object and an unnecessary molded object are molded together as a conforming article, a model pattern locating means for locating the model pattern on a captured image of the tire side to be inspected at the position of the captured image showing the highest degree of agreement with the model pattern, and an unnecessary molded object location identifying means for identifying a location of the unnecessary molded object from the captured image by comparing the position of the model pattern located on the captured image against the position of the model pattern on the master image.
This makes it possible to identify the location of the unnecessary molded object in the captured image, and the tire side can then be inspected in a subsequent inspection step without the unnecessary molded object being erroneously detected as a defect. As a result, inspection efficiency is improved because the labor and time of reinspection by a worker are no longer needed.
Hereinafter, a description is given of the inspection apparatus 1 for the tire T with reference to
The inspection apparatus 1 according to the present invention identifies the location of an unnecessary molded object in an image captured of a tire side S of the tire T, so that an appearance inspection of the tire side S can be performed in a subsequent image-processing inspection step without the unnecessary molded object being detected as a defect. The inspection apparatus 1 includes an image acquiring unit 2 for capturing an image of the tire side S of the tire T to be inspected and an image processing unit 3.
With the tire T to be inspected placed on its side, without internal pressure applied, on a table 5 rotatable by a rotating mechanism 4, the image acquiring unit 2 captures an image of the tire side S using a camera 6 and a light source 7 disposed above the table 5. The image is produced by casting slit-shaped two-dimensional laser light (slit light) from the light source 7 onto the tire side S so that the light extends in the radial direction of the tire, and capturing with the camera 6 the trace of the laser light cast on the tire side S. In this way, the color tones on the surface of the tire side are acquired as brightness values, and the raised and recessed portions are acquired as height information. The image is captured while the tire T rotates, so that a captured image K extending continuously along the circumferential direction over one full round of the tire T is acquired. That is, the image acquiring unit 2 in this embodiment acquires the captured image K of the tire side S by the so-called light-section method. Each pixel constituting the captured image K therefore contains both the height information representing the raised and recessed portions of the tire side S and the brightness information. It is to be noted that one full rotation of the tire T is detected by an encoder 8.
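The light-section acquisition described above can be pictured as stacking one radial slit profile per shot into a two-channel circumferential image. The following is a minimal sketch of that idea; the array shapes, shot count, and function name are assumptions chosen for illustration and are not taken from the patent.

```python
import numpy as np

def assemble_captured_image(profiles):
    """Stack per-shot slit profiles into one circumferential image K.

    Each profile is assumed to be an array of shape (radial_pixels, 2):
    column 0 holds the height information from the light-section method,
    column 1 holds the brightness (color tone) information.  The result has
    shape (radial_pixels, shots, 2), with axis 1 running along the tire
    circumference for one encoder-gated revolution.
    """
    return np.stack(profiles, axis=1)

# Illustrative numbers only: 480 radial pixels, 3600 shots per revolution.
rng = np.random.default_rng(0)
shots = [rng.random((480, 2)) for _ in range(3600)]
K = assemble_captured_image(shots)
height_channel, brightness_channel = K[..., 0], K[..., 1]
print(K.shape)  # (480, 3600, 2)
```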
The captured image K is generated by the image processing unit 3, described below, which arranges the images of the individual shots captured by the camera 6 in a series, as shown in
The image processing unit 3, which is a so-called computer, is equipped with a CPU as calculating means, a ROM and a RAM as storage means, and an I/O interface as communication means. The storage means 9 stores model patterns M of the molded objects to be molded on the tire side S of the tire T to be inspected, and a master image N of the tire side S prepared when the molded objects and rubber burrs U, which are unnecessary molded objects molded unnecessarily on the tire side S, are molded as a conforming article. The image processing unit 3 is further equipped with a model pattern locating means 10 for locating the model patterns M on a captured image K of the tire side S to be inspected at the positions showing the highest degree of agreement with the model patterns M, and an unnecessary molded object location identifying means 11 for identifying the locations of the rubber burrs U, molded as unnecessary molded objects, from the captured image K by comparing the locations of the model patterns M located on the captured image K against the locations of the model patterns M on the master image N. These two means perform their respective processing according to the program stored in the storage means 9.
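As one way of picturing what the storage means 9 holds, the sketch below lays out plausible data records for the model patterns M, the model pattern elements J, and the rubber burr entries of the master image N. The field names and types are assumptions for illustration only, not the patent's own data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np

@dataclass
class ModelPatternElement:
    """One character ("A", "B", ...) of a character string on the tire side."""
    name: str
    template: np.ndarray                   # small template image of the character
    master_center: Tuple[float, float]     # matching center on the master image N

@dataclass
class ModelPattern:
    """One character string such as "ABC-TIRE" (maker name, tire size, ...)."""
    name: str
    template: np.ndarray
    master_center: Tuple[float, float]
    elements: List[ModelPatternElement] = field(default_factory=list)

@dataclass
class RubberBurrRecord:
    """A rubber burr U linked to its closest model pattern element."""
    linked_element: str                    # e.g. "ABC-TIRE/E"
    circumferential_offset: float          # distance on the master image N

@dataclass
class MasterData:
    master_image: np.ndarray               # master image N of a conforming article
    patterns: Dict[str, ModelPattern]
    burrs: Dict[str, RubberBurrRecord]
```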
Molded objects, as shown in
Recorded in the master image N are the molded objects, such as the maker name, the tire name (nickname), the tire size, the manufacturing country name, the radial marking, the tubeless marking, the slip sign marking, and the rotation marking, together with the unnecessary molded objects, such as the rubber burrs U, at their respective locations on the tire side S of a conforming article.
The character strings of the above-mentioned molded objects, such as the maker name, the tire name (nickname), the tire size, the manufacturing country name, the radial marking, the tubeless marking, the slip sign marking, and the rotation marking, are recorded as model patterns M in the storage means 9 of the image processing unit 3. At the same time, the characters constituting these character strings are individually recorded as model pattern elements J in the storage means 9. Because the characters are scattered over almost the entire region of the tire side, they make it possible to identify the locations of the unnecessary molded objects in the captured image with greater certainty and accuracy.
The above-mentioned model patterns M, model pattern elements J, and master image N are prepared based on the CAD data used in designing the mold. The molded objects of a conforming article are formed by the grooves carved in the mold, so using the CAD data from the mold design makes it possible to obtain the model patterns M, the model pattern elements J, and the master image N with excellent accuracy.
Also recorded in the master image N are the rubber burrs U to be molded as the unnecessary molded objects. The rubber burrs U are located at the air bleeder holes in the CAD data of the mold. Hence, the locations of the air bleeder holes contained in the CAD data are linked, for instance, to the model patterns M or model pattern elements J closest to them, and are recorded as location identification data and the like in the storage means 9.
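A minimal sketch of this linking step is given below: each air-bleeder (burr) position taken from the mold CAD data is tied to the closest model pattern element together with the circumferential offset to it (the distance later referred to as L5). The coordinate values and identifiers are hypothetical.

```python
import math

def link_burrs_to_nearest_elements(burr_positions, element_positions):
    """Link each rubber burr position from the mold CAD data to the closest
    model pattern element and record the circumferential offset to it.

    burr_positions:    {burr_id: (circumferential, radial)} in master-image coordinates
    element_positions: {element_id: (circumferential, radial)} matching centers on N
    Returns {burr_id: (nearest_element_id, circumferential_offset)}.
    """
    links = {}
    for burr_id, (bc, br) in burr_positions.items():
        element_id, (ec, _er) = min(
            element_positions.items(),
            key=lambda item: math.hypot(item[1][0] - bc, item[1][1] - br),
        )
        links[burr_id] = (element_id, bc - ec)   # offset corresponds to distance L5
    return links

# Hypothetical coordinates for illustration only.
burrs = {"U1": (1250.0, 310.0)}
elements = {"ABC-TIRE/E": (1180.0, 300.0), "ABC-TIRE/R": (1100.0, 300.0)}
print(link_burrs_to_nearest_elements(burrs, elements))
# {'U1': ('ABC-TIRE/E', 70.0)}
```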
The model pattern locating means 10 is equipped with the rough model pattern locating means 12 and the precise model pattern locating means 13.
As shown in
As shown in
Hereinbelow, a description is given of the operation of the model pattern locating means 10, using the model pattern of the maker name “ABC-TIRE” as an example.
First, the model pattern “ABC-TIRE” is located over the captured image K by the rough model pattern locating means 12.
Next, with reference to the location at which the model pattern “ABC-TIRE” was matched to the captured image K, the precise model pattern locating means 13 executes pattern matching again on the “ABC-TIRE” in the captured image K, this time using the individual model pattern elements J that constitute the model pattern “ABC-TIRE”, namely “A”, “B”, “C”, “-”, “T”, “I”, “R”, and “E”, thereby locating each model pattern element J on the captured image K.
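The patent does not name a particular matching algorithm; the sketch below uses plain OpenCV template matching as one possible realisation of this two-stage (rough, then precise) locating by the means 12 and 13. The search margin and all identifiers are assumptions.

```python
import cv2

def best_match(image, template):
    """Return (top-left location, matching rate) of the best template match."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _min_val, max_val, _min_loc, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

def locate_pattern_and_elements(captured, pattern_template, element_templates,
                                margin=20):
    """Rough step: place the whole string template (e.g. "ABC-TIRE") on the
    captured image K.  Precise step: re-match each character template only in
    a window around the rough location, mirroring the rough and precise model
    pattern locating means 12 and 13."""
    (px, py), _rate = best_match(captured, pattern_template)   # rough location of M
    h, w = pattern_template.shape[:2]
    y0, x0 = max(0, py - margin), max(0, px - margin)
    window = captured[y0:py + h + margin, x0:px + w + margin]
    element_locations = {}
    for name, template in element_templates.items():           # precise location of J
        (ex, ey), rate = best_match(window, template)
        element_locations[name] = ((x0 + ex, y0 + ey), rate)   # back to K coordinates
    return (px, py), element_locations
```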
The unnecessary molded object location identifying means 11 is equipped with a dislocation amount calculating means 14 for identifying the location of an unnecessary molded object from the captured image K by comparing the locations on the captured image K of the model patterns M and of the model pattern elements J located thereon against the locations of the model patterns M on the master image N. It is further equipped with an unnecessary molded object location identifying function acquiring means 15 and an unnecessary molded object location calculating means 16.
The dislocation amount calculating means 14 calculates the dislocation amounts δL between the locations that the model pattern elements J occupy on the captured image K when the model patterns M as a whole are matched to the captured image K by the rough model pattern locating means 12 and the locations of the individual model pattern elements J when they are matched to the captured image K by the precise model pattern locating means 13.
This process is hereby explained taking “E” of the model pattern elements J of “ABC-TIRE” as an example. As shown in
It is to be noted here that the centers C1, C2, and C3 are center-of-gravity positions used as matching centers in the pattern matching.
This calculation is performed for each of the remaining model pattern elements J constituting the model pattern M matched to the captured image K, namely “A”, “B”, “C”, “-”, “T”, “I”, and “R”. In this way, the dislocation amounts δL are calculated between the characters that form the molded object “ABC-TIRE” on the captured image K and the model pattern elements J “A”, “B”, “C”, “-”, “T”, “I”, “R”, and “E” that form “ABC-TIRE” of the model pattern M on the master image N.
The above calculation of the dislocation amount δL is done for all the model pattern elements J of all the model patterns M. Each calculated dislocation amount δL is linked to the model pattern element J used in its calculation and stored in the storage means 9.
As shown in
It is to be noted that when the matching rate obtained in matching a character is at or below a threshold value, that match is not used, and the dislocation amount δL of the character is ignored in the subsequent processing. The dislocation amount δL is expressed in units of pixels of the captured image K.
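As a sketch of the δL computation and the threshold filtering just described, the function below takes reference character centers, the precisely matched centers on the captured image K, and the matching rates, and keeps only the circumferential components of reliable matches. The threshold value is an assumption, and the reference centers are taken here from the master image N (the description can also be read as comparing against the positions implied by the rough match of the whole pattern).

```python
def circumferential_dislocations(reference_centers, precise_centers, rates,
                                 rate_threshold=0.6):
    """Compute the dislocation amount deltaL (the document's δL), in pixels
    along the tire circumference, for every model pattern element J.

    reference_centers: {element_id: (circumferential, radial)} centers on the
                       master image N
    precise_centers:   {element_id: (circumferential, radial)} centers found by
                       the precise per-character match on the captured image K
    rates:             matching rates of the precise matches; characters whose
                       rate is at or below the threshold are ignored
    Returns {element_id: deltaL}.
    """
    deltas = {}
    for name, (ref_c, _ref_r) in reference_centers.items():
        if rates.get(name, 0.0) <= rate_threshold:
            continue                      # unreliable match: skip this character
        cap_c, _cap_r = precise_centers[name]
        deltas[name] = cap_c - ref_c      # circumferential component only
    return deltas
```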
The unnecessary molded object location identifying function acquiring means 15 executes processing to acquire a location identifying function F for identifying the locations of the rubber burrs U in the captured image K, using the dislocation amounts δL of the characters derived by the dislocation amount calculating means 14.
The location identifying function F represents the dislocation amounts δL, in the circumferential direction of the tire, between the locations of the model pattern elements J contained in the master image N and the corresponding characters at their respective locations in the captured image K.
More specifically, as shown in
That is, the molded objects are scattered over almost the entire circumferential region of the tire side S. Therefore, by deriving the dislocation amounts δL along the circumferential direction at the respective locations of the model pattern elements J corresponding to the molded objects, the location identifying function F, which represents the state or tendency of dislocation of the molded objects and unnecessary molded objects at the various locations of the captured image K, can be acquired with excellent accuracy. Using this location identifying function F, the locations of the unnecessary molded objects can then be identified from the captured image K with certainty and accuracy.
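One simple way to realise the location identifying function F, shown below as an assumption, is to interpolate the measured δL values between the circumferential positions of the model pattern elements on the master image N; the patent only states that F represents the dislocation over the circumference, not how it is constructed.

```python
import numpy as np

def build_location_identifying_function(master_circ, deltas, circumference=None):
    """Build F: circumferential position on the master image N -> deltaL.

    master_circ:   {element_id: circumferential coordinate on N}
    deltas:        {element_id: deltaL measured between N and the captured image K}
    circumference: optional length of one tire round in pixels; if given, the
                   interpolation wraps around the circumference.
    """
    names = [n for n in master_circ if n in deltas]
    xs = np.array([master_circ[n] for n in names], dtype=float)
    ys = np.array([deltas[n] for n in names], dtype=float)
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    if circumference is not None:
        return lambda pos: float(np.interp(pos, xs, ys, period=circumference))
    return lambda pos: float(np.interp(pos, xs, ys))

# Illustrative values: three elements at assumed circumferential positions on N.
F = build_location_identifying_function(
    {"A": 100.0, "E": 1180.0, "R": 2500.0},
    {"A": 2.0, "E": 5.0, "R": 3.5},
)
print(F(1840.0))  # deltaL interpolated between "E" and "R": 4.25
```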
The unnecessary molded object location calculating means 16 executes processing to identify the locations of the rubber burrs U from the captured image K using the above-described location identifying function F.
As shown in
For example, let us assume a case where a rubber burr U is recorded at the location dislocated in the circumferential direction by a distance L5 from the center C2 of the model pattern element “E” in the master image N as shown in
This calculation of dislocation amount L4 is performed for all the rubber burrs U. Thus, the locations of all the rubber burrs U′ captured in the captured image K can be identified with certainty and accuracy.
The identification of the locations of the rubber burrs U′ on the captured image K by the unnecessary molded object location identifying means 11 in this manner can be processed easily by linking the rubber burrs U to the characters closest to them and recording these relations in the storage means 9 beforehand. In other words, the distance L5 in the tire circumferential direction between the matching center C2 of the character “E” in the master image N and the matching center C4 of the rubber burr U is linked to the character “E” beforehand, as shown in
Therefore, the positional relationships in the master image N between the rubber burrs U and the model pattern elements J constituting the character strings of the model patterns M to be molded on the tire side S must be made known in advance. The locations of the rubber burrs U′ contained in the captured image K can then be identified easily and accurately from the molded objects corresponding to the model pattern elements J.
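Given the pre-recorded distance L5 and the function F, the position of a burr U′ on the captured image K can be estimated as in the sketch below. This is one reading of the described procedure; the symbol names follow the description, with C2 as the matching center of the linked character on N, L5 the stored offset, and L4 the dislocation applied to the burr.

```python
def locate_burr_on_captured_image(master_center_C2, offset_L5, F):
    """Estimate the circumferential position of a rubber burr U' on the
    captured image K.

    master_center_C2: circumferential matching center of the linked character
                      (e.g. "E") on the master image N
    offset_L5:        recorded circumferential distance from that character to
                      the burr U on the master image N
    F:                location identifying function (position on N -> deltaL)
    """
    burr_position_on_master = master_center_C2 + offset_L5   # where U sits on N
    dislocation_L4 = F(burr_position_on_master)               # local shift N -> K
    return burr_position_on_master + dislocation_L4           # estimated U' on K

# Continuing the illustrative values used above ("E" at 1180 px, L5 = 70 px):
# locate_burr_on_captured_image(1180.0, 70.0, F)
```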
The rubber burrs U′ identified in the captured image K as described above are mask-processed by a processing means in a subsequent step: the height information contained in the captured image K at those locations is equalized to the height of the neighboring tire surface, so that the burrs are not detected as molding defects by an inspecting means.
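The masking could look like the sketch below: the height channel of K in a small band around each identified U′ is replaced by the median height of the neighboring surface, so a later raised/recessed-flaw check sees no step there. The band widths, the use of a median, and the restriction to a full radial band are simplifying assumptions.

```python
import numpy as np

def mask_burr_region(height_map, burr_circ_pos, half_width=5, context=15):
    """Equalize the height information around an identified rubber burr U'
    to the height of the neighboring tire surface.

    height_map:    2-D array (radial x circumferential) of heights from K
    burr_circ_pos: circumferential pixel index of the identified burr U'
    """
    c0 = max(0, burr_circ_pos - half_width)
    c1 = min(height_map.shape[1], burr_circ_pos + half_width + 1)
    left = height_map[:, max(0, c0 - context):c0]
    right = height_map[:, c1:c1 + context]
    neighborhood = np.concatenate([left, right], axis=1)
    height_map[:, c0:c1] = np.median(neighborhood, axis=1, keepdims=True)
    return height_map

# Illustrative: a flat surface of height 1.0 with a burr bump at column 50.
heights = np.ones((480, 3600))
heights[200:210, 48:53] += 0.8
masked = mask_burr_region(heights, burr_circ_pos=50)
print(float(masked[205, 50]))  # back to about 1.0
```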
Hereinbelow, a description is given of the operation of the above-mentioned inspection apparatus 1.
First, the image acquiring unit 2 acquires the captured image K by capturing images of the tire side S. The captured image K is captured as the table 5 rotates, with the tire T placed on its side on the table 5 and no internal pressure applied to the tire. As a result, the captured image K is obtained under conditions in which the rotation center of the tire T and the rotation center of the table 5 do not coincide and the tire T itself is not taut because of its pliability. That is, the images constituting the captured image K are captured with different pixel widths in the circumferential direction. Consequently, as shown in
Next, the locations of rubber burrs U′, which are unnecessary molded objects, are identified from the captured image K by the image processing unit 3.
Firstly, the image processing unit 3 locates the model patterns M on the captured image K by pattern-matching against the captured image K the model patterns M contained in the master image N, such as the maker name, the tire name (nickname), the tire size, the manufacturing country name, the radial marking, the tubeless marking, the slip sign marking, and the rotation marking, as shown in
Next, the precise model pattern locating means 13 locates on the captured image K the model pattern elements J of the model patterns M matched by the rough model pattern locating means 12, by pattern-matching them within predetermined limits on the captured image K with reference to the locations matched by the rough model pattern locating means 12.
Then the dislocation amount calculating means 14 of the unnecessary molded object location identifying means 11 calculates the dislocation amounts δL between the locations of the characters located on the captured image K captured for inspection and the corresponding locations of the characters on the master image N.
This calculation of dislocation amount δL is done for all the model pattern elements J constituting all the model patterns M.
Then the unnecessary molded object location identifying function acquiring means 15 obtains the location identifying function F for identifying the locations of the rubber burrs U from the dislocation amounts δL of the respective characters on the captured image K.
Then the unnecessary molded object location calculating means 16 identifies, using the location identifying function F, the locations of the rubber burrs U′ as unnecessary molded objects. This identification is executed for all the rubber burrs U contained in the master image N, so that the locations of all the rubber burrs U′ contained in the captured image K are identified accurately.
By implementing the structure described above, it is possible to identify the locations of the rubber burrs U′ from the captured image K with certainty and accuracy even when the tire T rotates eccentrically relative to the camera 6 or the tire side S is not taut at the time the inspection image is captured. An inspection can then be conducted without detecting the rubber burrs, for example by ignoring the rubber burrs U′ identified as described above in a subsequent inspection step using image processing. As a result, the worker no longer needs to perform a reinspection to distinguish visually between molding defects and rubber burrs, so the inspection time is shortened and the inspection efficiency improved.
In the present embodiment, in particular, the captured image K of the tire side S is captured without internal pressure applied to the tire. In this case, the circumferential lengths in the captured image K vary with the capturing positions because of the eccentricity and deformation of the tire at the time of image capturing. Hence, the locations of the rubber burrs U cannot be identified by directly comparing the captured image K against the CAD data from the mold design. According to the present invention, however, the rubber burrs U′ can be identified from the captured image K with certainty without being affected by the eccentricity or deformation at the time of image capturing.
As another configuration of the inspection apparatus 1, the precise model pattern locating means 13 may be omitted from the above-described processing when the tire T to be inspected can be fitted to a rim and internal pressure can be applied to the tire for the acquisition of the captured image K by the image acquiring unit 2. That is, with the tire T fitted to the rim and internal pressure applied, the distance of the tire side S from the camera 6 hardly changes as the tire rotates (there is no effect of deformation of the tire T or of eccentricity at the time of image capturing). The only remaining differences from the CAD data are those due to the deformation of the tire side S when the internal pressure is applied; that is, the locations of the model patterns M relative to one another change only isotropically in the circumferential and radial directions compared with the master image N. Therefore, the locations of the rubber burrs U may be identified from the captured image K by locating the model patterns M on the captured image K by pattern matching with the rough model pattern locating means 12, calculating the dislocation amounts δL between the model patterns M and the captured image K with the dislocation amount calculating means 14, and then obtaining the location identifying function F.
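Under the rimmed-and-inflated condition described above, the differences from the CAD-based master image are stated to be only isotropic shifts of the pattern locations. As an illustration of that point (not the patent's own procedure, which still obtains the function F from the dislocation amounts of the patterns M), the sketch below fits a single least-squares scale factor from the rough-matched pattern centers; the coordinates are hypothetical and assumed to be measured from the tire rotation center.

```python
import numpy as np

def fit_isotropic_scale(master_centers, captured_centers):
    """Fit one scale factor s such that captured ~= s * master, reflecting a
    purely isotropic expansion of the inflated tire side relative to the
    CAD-based master image N (coordinates measured from the tire center)."""
    m = np.asarray(master_centers, dtype=float).ravel()
    c = np.asarray(captured_centers, dtype=float).ravel()
    return float(np.dot(m, c) / np.dot(m, m))   # least-squares solution of c = s*m

# Hypothetical pattern centers (circumferential, radial) for illustration.
master = [(100.0, 50.0), (400.0, 60.0), (900.0, 55.0)]
captured = [(101.0, 50.5), (404.0, 60.6), (909.0, 55.6)]
print(round(fit_isotropic_scale(master, captured), 4))  # about 1.01
```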
Number | Date | Country | Kind |
---|---|---|---|
2013-228699 | Nov 2013 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2014/077540 | 10/16/2014 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/064369 | 5/7/2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6124925 | Kaneko | Sep 2000 | A |
7295948 | Jetter | Nov 2007 | B2 |
7512260 | Murakami | Mar 2009 | B2 |
8305436 | Fujisawa | Nov 2012 | B2 |
8498467 | Joly | Jul 2013 | B2 |
8948491 | Sekiguchi | Feb 2015 | B2 |
9097514 | Takahashi | Aug 2015 | B2 |
9113046 | Fujii | Aug 2015 | B2 |
9123112 | Vinciguerra | Sep 2015 | B2 |
9291527 | Zoken | Mar 2016 | B2 |
9310278 | Sukegawa | Apr 2016 | B2 |
9677879 | Mizutani | Jun 2017 | B2 |
20050058333 | Kaneko | Mar 2005 | A1 |
20070209431 | Fujisawa | Sep 2007 | A1 |
20090226073 | Honda et al. | Sep 2009 | A1 |
20110019903 | Joly et al. | Jan 2011 | A1 |
20130266189 | Vinciguerra et al. | Oct 2013 | A1 |
20160086320 | Fujisawa | Mar 2016 | A1 |
20160320265 | Regoli | Nov 2016 | A1 |
Number | Date | Country |
---|---|---|
101163941 | Apr 2008 | CN |
101896937 | Nov 2010 | CN |
102460132 | May 2012 | CN |
103210417 | Jul 2013 | CN |
1 750 089 | Feb 2007 | EP |
2500686 | Sep 2012 | EP |
2005-331274 | Dec 2005 | JP |
2007-011462 | Jan 2007 | JP |
2011-127930 | Jun 2011 | JP |
2011-226971 | Nov 2011 | JP |
2011-247646 | Dec 2011 | JP |
2013-096972 | May 2013 | JP |
2009077539 | Jun 2009 | WO |
2012055752 | May 2012 | WO |
Entry |
---|
Oct. 10, 2016 Extended Search Report issued in European Patent Application No. 14857345.4. |
May 12, 2016 Written Opinion issued in International Patent Application No. PCT/JP2014/077540. |
Dec. 16, 2014 Search Report issued in International Patent Application No. PCT/JP2014/077540. |
Oct. 23, 2017 Search Report issued in Chinese Patent Application No. 201480060295. |
Number | Date | Country | Kind
---|---|---|---
20160263952 | Sep 2016 | US | A1