This invention relates in general to the field of the non-contact inspection of small manufactured objects and sorting the inspected objects and, more particularly, to methods and systems for inspecting small manufactured objects, such as pharmaceutical tablets, pills, tokens, coins, medals, etc. and planchets for such tokens, coins, medals, etc.
Traditional manual inspecting devices and techniques have been replaced to some extent by automated inspection methods and systems. However, such automated inspection methods and systems still have a number of shortcomings associated with them.
Rapid inspection of defects on and in a variety of small, mass-produced objects is a vital aspect of the manufacturing process, allowing for maintenance of a high level of quality and reliability in a number of industries. For example, traditionally, quality control in the pharmaceutical industry has related to the type, purity, and amount of tablet ingredients. However, quality also relates to defects which can be detected by visual inspection such as dirt, surface blemishes, and surface chips. Although many visual inspections can be performed by operators, manual inspection can be slow, expensive, and subject to operator error. Also, many types of inspections cannot be done visually. Thus, automated inspection systems for quality control in the pharmaceutical industry are extremely important. The following U.S. patents are related to these types of systems: U.S. Pat. Nos. 5,085,510; 4,319,269; 4,354,602; 4,644,150; 4,757,382; 5,661,249; 3,709,598; 5,695,043; 6,741,731; and 6,079,284.
The making of medicinal tablets by compression of powders, dry or treated, is an old art and satisfactory machinery for making such tablets has long been available.
It is highly desirable that all tablets prepared by rotary tablet press mechanisms be of uniform and precisely controlled size and weight. This is especially true for medicinal tablets because carefully prescribed dosage amounts are difficult to achieve without accurate tablet size and weight control. Inaccuracies in tablet size and weight stem from a variety of different circumstances. Various different failure modes of tablets are illustrated in
The following terms and phrases are used herein in accordance with the following meanings:
Coins—pieces, including metallic money (i.e.,
Obverse/Reverse—obverse is the side of a coin bearing the more important legends or types; its opposite side is the reverse.
Mint Luster—the sheen or “bloom” on the surface of a coin created by radial die marks, which are produced by minute imperfections or rough spots on the surface of the dies used to form the coin and by the centrifugal flow of metal when struck by those dies;
Strength of Strike—refers to the sharpness of design details within an object such as a coin. A sharp strike or strong strike is one in which all the details of the die are impressed clearly into the coin; a weak strike has the details only lightly impressed at the time of coining.
In minting, coining is the process of manufacturing coins using a kind of stamping which is now generically known in metalworking as “coining”.
A coin die is one of the two metallic pieces that are used to strike one side of a coin. A die contains an inverse version of the image to be struck on the coin. To imagine what the inverse version looks like, one can press a coin into clay or wax and look at the resulting inverted image. Modern dies made out of hardened steel are capable of producing many hundreds of thousands of coins before they are retired and defaced.
On the edge of the US dime, quarter and half dollar, and many world coins there are ridges, similar to knurling, called reeds. Some older US coins, and many world coins have other designs on the edge of the coin. Sometimes these are simple designs like vines, more complex bar patterns or perhaps a phrase. These kinds of designs are imparted into the coin through a third die called a collar. The collar is the final size of the coin, and the planchet expands to fill the collar when struck. When the collar is missing, it results in a type of error called a broadstrike. A broadstruck coin is generally a bit flatter and quite a bit bigger around than the regular non-error coin of the same denomination.
The terminal die state is the last state in which any die is used. This state refers to a die that is starting to develop serious structural failures through cracks. A die in such a state would, if not removed from service, become unserviceable by breaking apart. Like any metallic part, dies are subject to failure from the enormous pressures used to impress the image of the dies onto the blank planchet. Some dies are removed when even a microscopic defect is observed.
More typically, a terminal die state will result in crack-like structures appearing on the coin. Crack-like structures appear like material that is overlaid onto the surface of the coin; this is because the crack on the die allowed the planchet material to flow into it during stamping, just like a deliberate design feature. Some coins exhibit multiple crack-like features, indicating a die that is very close to the end of its serviceable life.
WO 2009/130062 discloses a method and a device for the optical viewing of objects. The method includes the stages of illuminating an object with ultraviolet radiation, and acquiring an image of the object thereby illuminated using a lens comprising at least a forward optical group and an aperture diaphragm exhibiting a transparent window located at a focal point of the forward optical group defined for the ultraviolet radiation. WO 2005/022076 is also related to the present application.
U.S. patent documents related to the invention include: U.S. Pat. Nos. 4,315,688; 4,598,998; 4,644,394; 4,831,251; 4,852,983; 4,906,098; 4,923,066; 5,383,021; 5,521,707; 5,568,263; 5,608,530; 5,646,724; 5,291,272; 6,055,329; 4,983,043; 3,924,953; 5,164,995; 4,721,388; 4,969,746; 5,012,117; 6,313,948; 6,285,034; 6,252,661; 6,959,108; 7,684,054; 7,403,872; 7,633,635; 7,312,607; 7,777,900; 7,633,046; 7,633,634; 7,738,121; 7,755,754; 7,738,088; 7,796,278; 7,802,699; and 7,812,970; and U.S. published patent applications 2005/0174567; 2006/0236792; 2010/0245850; and 2010/0201806.
In one example method embodiment, a method of inspecting small, manufactured objects and sorting the inspected objects is provided. Each of the objects has top, bottom and side surfaces and an axis. The method includes consecutively feeding and transferring the objects so that the objects travel along a path which extends from an object loading station and through a plurality of inspection stations including a first vision station. Each object to be inspected at the first vision station has an unknown orientation. Only one of the top and bottom surfaces of each object is viewable at the first vision station. The method further includes the steps of imaging the viewable surface of each object at the first vision station to obtain a first set of images of the objects, determining orientation of each object at the first vision station based on the first set of images of the objects, and processing each image of the first set of images with one of a top surface vision algorithm and a bottom surface vision algorithm depending on the determined orientations at the first vision station to identify objects having unacceptable defects. The method still further includes consecutively transferring objects from the first vision station to a second vision station. Each object to be inspected at the second vision station has an orientation opposite its unknown orientation at the first vision station. Only the other one of the top and bottom surfaces of each object is viewable at the second vision station. The method further includes imaging the viewable surface of each object at the second vision station to obtain a second set of images of the objects, determining orientation of each object at the second vision station, and processing each image of the second set of images with the other one of the top surface vision algorithm and the bottom surface vision algorithm depending on the determined orientations at the second vision station to identify objects having unacceptable defects. The method finally includes directing objects identified as having an unacceptable defect to at least one defective object area.
The step of determining orientation of each object at the second vision station may be based on the second set of images.
The step of consecutively transferring from the first vision station to the second vision station may include the step of applying a vacuum to the objects to obtain the opposite orientation of each of the objects.
The inspection stations may include a circumference vision station wherein all of the side surfaces of each of the objects are viewable at the circumference vision station.
The method may further include simultaneously illuminating all of the side surfaces of each object with a plurality of separate beams of radiation when the object is located at the circumference vision station to generate corresponding reflected radiation signals, imaging the reflected radiation signals to generate a plurality of side images and processing the side images of each object with a side surface vision algorithm to identify objects having unacceptable defects.
The step of illuminating may include the step of generating a single beam of radiation and dividing or splitting the single beam of radiation into the separate beams of radiation. Each of the separate beams of radiation may be a reflected beam of radiation.
The objects may be tablets. The inspection stations may include an eddy current station. The method may further include generating an electromagnetic signature of each tablet located at the eddy current station and processing the signatures to identify tablets having unacceptable defects in the form of metallic debris.
At least one of the steps of imaging may be performed with a three-dimensional sensor to obtain three-dimensional information about the imaged surface.
In one example system embodiment, a system for inspecting small, manufactured objects and sorting the inspected objects is provided. Each of the objects has top, bottom and side surfaces and an axis. The system includes a feeder and a transfer subsystem to consecutively feed and convey the objects so that the objects travel along a path which extends through a plurality of inspection stations including a first vision station. Each object to be inspected at the first vision station has an unknown orientation. Only one of the top and bottom surfaces of each object is viewable at the first vision station. The system further includes a first imaging assembly to image the viewable surface of each object when the objects are located at the first vision station to obtain a first set of images of the objects. The system still further includes at least one processor for processing the first set of images to determine orientation of each object at the first vision station and to identify objects having an unacceptable defect based on the determined orientations. The transfer subsystem consecutively conveys objects from the first vision station to a second vision station of the inspection stations. Each object to be inspected at the second vision station has an orientation opposite the unknown orientation at the first vision station. Only the other one of the top and bottom surfaces of each object is viewable at the second vision station. The system further includes a second imaging assembly to image the viewable surface of each object when the objects are located at the second vision station to obtain a second set of images of the objects. The system still further includes means for determining orientation of each object at the second vision station. The at least one processor processes the second set of images to identify objects having an unacceptable defect based on the determined orientations at the second vision station. The system further includes at least one object sorter for directing objects identified as having an unacceptable defect to at least one defective object area and a system controller coupled to the transfer subsystem, each of the imaging assemblies, the at least one processor and the at least one object sorter for controlling the sorting based on the inspections.
The at least one processor may determine orientation of each object at the second vision station based on the second set of images.
The transfer subsystem may include a vacuum transfer conveyor including a perforated conveyor belt. A top or bottom surface of each of the objects is held against a surface of the belt to obtain the opposite orientation.
The transfer subsystem may include first and second vacuum transfer drums and a mechanism for synchronously rotating the drums. The first rotating drum may convey objects at equal intervals to the first vision station. The second rotating drum may convey the objects supplied by the first drum at equal intervals to the second vision station.
The inspection stations may include a circumference vision station at which a third imaging assembly may be located. All of the side surfaces of each of the objects are viewable at the circumference vision station by the third imaging assembly.
The third imaging assembly may include a side illumination assembly to simultaneously illuminate a plurality of side surfaces of the object which are angularly spaced about the axis of the object with a plurality of separate beams of radiation when the object is located at the circumference vision station. The third imaging assembly may further include a telecentric lens and detector assembly to form an optical image of at least a portion of each of the illuminated side surfaces of the object and to detect the optical images. The at least one processor processes the detected optical images to obtain a plurality of views of the object which are angularly spaced about the axis of the object.
The telecentric lens may include a forward set of optical elements having an optical axis and an aperture diaphragm. The diaphragm may be provided with a transparent window substantially centered on the optical axis and located at a focal point along the optical axis of the forward set of optical elements.
The telecentric lens may further include a rear set of optical elements having a focal point. The diaphragm may be interposed between the forward and rear sets of optical elements with the transparent window located at the focal points of the forward and rear sets of optical elements.
The detector may include an image sensor having an image plane to detect the optical images.
The side illumination assembly may include a source of radiation and a mirror subassembly to receive and divide the radiation into the plurality of separate beams of radiation.
The objects may be tablets. The inspection stations may include an eddy current station. The system may further include an eddy current subsystem for generating an electromagnetic signature of a tablet when the tablet is located at the eddy current station and a signature processor for processing the signatures to identify tablets having an unacceptable defect in the form of metallic debris.
The first imaging assembly may include a three-dimensional sensor and the second imaging assembly may also include a three-dimensional sensor.
Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions and claims. Moreover, while specific advantages have been enumerated, various embodiments may include all, some, or none of the enumerated advantages.
For a more complete understanding of the present invention, and for further features and advantages thereof, reference is made to the following description taken in conjunction with the accompanying drawings, in which:
a is a schematic perspective view of a plurality of common disk-shaped tablets which can be inspected and sorted with at least one embodiment of the present invention;
b is a schematic perspective view of a plurality of different tablets which can be distinguished and sorted by both color and shape utilizing at least one embodiment of the present invention;
c is a schematic perspective view of a plurality of defective tablets wherein the top tablet has a “capping” failure and the right tablet has a lamination failure;
d is a schematic perspective view of a plurality of coins some of which form a stack and which can be inspected and sorted utilizing at least one embodiment of the present invention;
e is a top plan view of a coin blank or planchet which can be inspected and sorted in accordance with at least one embodiment of the present invention;
a is a side schematic view of a feeder and conveyors of a transfer subsystem constructed in accordance with at least one embodiment of the present invention;
b is a top plan view of the feeder and conveyors shown in
a is a side view, partially broken away and in cross section, of a conveyor belt of the vacuum transfer conveyor of
b is a side view, partially broken away and in cross section, of the conveyor belt of the vacuum transfer conveyor of
a is a side schematic view of a feeder and conveyors of a transfer subsystem constructed in accordance with a second embodiment of the present invention;
b is a top plan view of the feeder and conveyors shown in
In general, one embodiment of the method and system of the present invention inspects manufactured objects such as pharmaceutical tablets, pills, tokens, coins, medals and planchets, some of which are illustrated in
Referring now to
Referring again to
Objects are provided to the inspection machine subsystem by the transfer or conveyor subsystem at controlled regular and preferably equal intervals. The inspection machine subsystem of the first embodiment includes several inspection stations as shown in
With respect to the circumference vision station, a telecentric subsystem or imaging assembly provides multiple side imaging of the objects. One aspect of one embodiment of the present invention relates to a novel method and configuration which uses a telecentric subsystem including a telecentric or bi-telecentric lens to optically inspect objects which are received and suspended on a conveyor such as a vacuum conveyor which moves the objects between the inspection stations. At the circumference vision inspection station the objects have a predetermined position for optical inspection of the side surfaces of the objects.
Referring again to
As further illustrated in
As illustrated in
Referring now to
The illumination assembly 60 includes a diffusive source 61 of radiation and a mirror subassembly, generally indicated at 62, to receive and divide the radiation into the plurality of separate beams of radiation as shown in
The beam splitter 65 is located within the optical path to direct light energy reflected back along the optical path from the coin to a telecentric lens 92 and a detection device 94 (shown in detail in
The mirror subassembly 62 includes at least one mirror and preferably two mirrors 66 disposed on one side of the path along which the coins travel and at least one mirror and preferably four mirrors 67 disposed on the opposite side of the path as shown in
The detected optical images are processed by the processor to determine defects located at the side surfaces of the coins. Text recognition may also be implemented by the processor.
As described in greater detail hereinbelow, defect detection in each region of each side surface can be conducted by first running several image processing algorithms and then analyzing the resultant pixel brightness values. Groups of pixels whose brightness values exceed a preset threshold are flagged as a “bright defect”, while groups of pixels whose brightness values lie below a preset threshold are flagged as a “dark defect”. Different image processing techniques and threshold values are often needed to inspect for bright and dark defects, even within the same side surface region.
The system of
Four orthogonal, partially overlapping views of the object are simultaneously provided to the device 94 by the telecentric lens 92 through the array of mirrors 62. The optical path is designed so that the displacement angle between the views is almost exactly 90°. This optical layout ensures complete coverage of the coin's lateral surfaces. The optical path is the same for all four viewpoints. Furthermore, telecentric imaging makes the system insensitive to coin (or tablet) decentering and therefore suitable for measurement applications. The subsystem is a solution for inspecting objects, such as tablets and coins, whose features would be hidden when looked at from the top or the bottom and for all those applications where an object is to be inspected or measured from different sides without object rotation.
Referring to
The illumination assembly 90 is provided for illustrative purposes in
Such an optical or optoelectronic device for the acquisition of images (for example the camera or telecamera 94) has the image plane 93 which can be, for example, an electronic sensor (CCD, CMOS). Preferably the device 94 is a high resolution digital telecamera, having the electronic sensor 93 with individual pixels of lateral dimensions equal to or less than one or more microns.
The lens 92 schematically comprises a forward set of optical elements 95 proximal to the coin, a rear set of optical elements 96 proximal to the acquisition device 94 and an aperture diaphragm 97 interposed between the forward set and the rear set of optical elements 95 and 96, respectively. The aperture diaphragm 97 comprises a circular window 98 transparent to the radiation, which is referred to as a diaphragm aperture. For example, the aperture diaphragm 97 can comprise an opaque plate preferably of thickness of a few tenths of a millimeter, and the diaphragm aperture can be defined as a simple hole in the plate.
The diaphragm aperture or window 98 is coaxial to the optical axis 99 of the forward set of optical elements 95, and positioned on the focal plane of the forward set 95 defined for the wavelength range of radiation emitted by the radiant source 90. The position of the focal plane of a set of optical elements mostly depends on the refraction index of the material from which the lenses are made, which, in turn, depends on the wavelength of the electromagnetic radiation passing through the lenses.
The lens 92 only accepts ray cones 100 exhibiting a main (barycentric) axis that is parallel to the optical axis 99 of the forward set 95. Thereby, the lens 92 is a telecentric lens configured for the particular radiation. The rear set of optical elements 96 serves to compensate and correct the residual chromatic dispersion generated by the forward set of optical elements 95 for the wavelength in question.
The optical axis of the rear set 96 coincides with the optical axis 99 of the forward set 95 and the focal plane of the rear set 96 defined for the wavelength cited above, coincides with the plane on which the aperture diaphragm 97 is located. Consequently, rays of radiation 101 conveyed by the rear set 96 towards the image plane 93 form light cones, the main (barycentric) axis of which is parallel to the optical axis 99 of the lens 92.
The lens 92 is therefore both telecentric on the object side and telecentric on the image side, and overall the lens 92 is a bi-telecentric lens configured for light such as visible light or ultraviolet light. It may be preferable that the lens 92 is optimized for operation with radiation in the ultraviolet range, such that the choice of materials from which the lenses are composed, and the characteristics of the lenses, including for example the curvature radius, thickness and spatial position, permit the lens 92 to operate in the above indicated wavelength range exhibiting very high contrast and with performance close to the diffraction limit.
Referring still to
The images obtained with the bi-telecentric lens 92 are images substantially without errors of perspective and wherein the image size of the observed coin or tablet is independent of the distance from the coin. The use of the bi-telecentric lens 92 with radiation in the preferred range also provides a high resolution image, exhibiting a level of detail of less than ten microns, compatible with the maximum resolution of the electronic sensor 93 of the telecamera 94.
The lens 92 used in the wavelength range is therefore particularly suited for use with devices 94 capable of high resolution image acquisition, wherein the individual image point (pixel) is very small, and wherein the density of these pixels is very high, thereby enabling acquisition of highly detailed images.
An image acquired in this way will comprise a high number of pixels, each of which contains a significant geometric datum based on the high performance of the lens 92 operating in the wavelength range, thereby being particularly useful for assessing the dimensions of the object viewed by the lens 92. The high level of detail provided by the individual pixels of the device 94 enables, after suitable processing of the image, an accurate determination of the outline of the object to be made, improving the efficiency of “edge detection” machine vision algorithms, which select, from a set of pixels making up an image, those pixels that define the border of the objects depicted, and thereby establish the spatial positioning and the size of the objects as well as features on the side surfaces.
Consequently, the assembly of
An eddy current station of
Pencil light beams from emitters and associated sensors may be provided to monitor the progress of tablets or coins as they are conveyed. Also, feedback signals from sensors associated with the various drivers of the system may be used to monitor the progress of tablets or coins as they are being conveyed. Each pencil light beam is associated with a small control unit or hardware trigger or sensor that produces an electrical pulse when a light beam is blocked. The pulse is referred to as a “trigger”. Two of these are typically associated with the eddy current hardware. For eddy current, these essentially provide a “get ready”, then a “get set” signal to the hardware which then controls the induced eddy current. The eddy current subsystem is typically a commercially available subsystem.
The software for the eddy current subsystem displays the electromagnetic signature of a tablet on the complex impedance plane. The software is a purely comparative tool, generating no quantitative data. Several coil sizes are commercially available. Additionally, coil frequency, AC gain and DC gain can be adjusted to generate a signature plot which is as large as possible without saturating the sensor.
In general, when setting up for inspecting a new object, whether a tablet or a coin, the user chooses surface “features” of the object to be measured via the user interface. The types of features include design dimensions and eddy current signature. For most features, the user chooses a region of the object where the measurement will be made, a nominal value of the measurement, and plus and minus tolerances. For some features, such as eddy current, the measurement region is the whole object. Also, for eddy current the user chooses a rectangle on an eddy screen of the display instead of a nominal value and tolerances. If the eddy signature hits the rectangle, then the object is good.
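For illustration only, the pass/fail criterion just described can be expressed as a short sketch. The following Python fragment is an assumption-laden illustration (the function name, the rectangle format and the use of complex impedance samples are not part of the described system); it simply checks whether any point of the electromagnetic signature on the complex impedance plane falls inside the user-drawn rectangle:

```python
def signature_hits_rectangle(signature, rect):
    """signature: iterable of complex impedance samples.
    rect: (re_min, re_max, im_min, im_max) as drawn by the user."""
    re_min, re_max, im_min, im_max = rect
    # The object passes if any sample of the signature enters the rectangle.
    return any(re_min <= z.real <= re_max and im_min <= z.imag <= im_max
               for z in signature)
```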
More particularly, in creating a template, a gold or master object with known good dimensions and surface features and without defects is conveyed through the system, after which the particular object is named. After the object has traveled the length of the path, one or more images of the object are displayed on the display.
Software locates and defines several regions of interest on the object and inspects those regions using any number of customizable tools for user-defined defects. In order to allow the system to be able to locate and recognize a wider variety of defects, exterior side surfaces of the part are illuminated from a variety of angles as previously described.
In view of the above, the following are important considerations in the design of the illumination and telecentric lens and detector assemblies of the third imaging or camera assembly:
Standard telecentric lenses operate in the visible range;
In order to use an ultraviolet (UV) illuminator, it would be necessary to replace both the LED illuminator and the telecentric (TC) lens with the equivalent UV structures.
UV telecentric setups offer more contrast information at higher spatial frequencies compared to lenses operating in the visible range.
Telecentric lenses that are telecentric only in object space accept incoming rays that are parallel to the main optical axis. However, when those rays exit the optical system, they are not parallel anymore and would strike the detector at different angles. This results in:
The vision subsystems for the first embodiment described above and for the second embodiment described below are especially designed for the inspection of the top, bottom and side surfaces of relatively small manufactured objects such as pharmaceutical tablets and coins. The processing of object images or resulting data to detect defective objects in each of the embodiments can be performed as follows.
The detection of surface dents, chips or cracks typically relies on the alteration of the angle of reflected light caused by a surface deformation on the inspected object. Light which is incident on a surface dent will reflect along a different axis than light which is incident on a non-deformed section.
There are generally two ways to detect dents using this theory. One option is to orient the light source so that light reflected off the object exterior is aimed directly into the camera aperture. Light which reflects off a dented or cracked region will not reach the camera, so the defect appears dark against the bright background. Alternatively, the light source can be positioned at a shallower angle to the object. This will result in a low background illumination level, with dents appearing as well-defined bright spots on the image.
Detecting perforations uses both of the principles outlined above. The task is much simpler, however, as the region containing the defect is completely non-reflective. Therefore, perforations are visible as dark spots on surfaces illuminated by either shallow or steep angle illumination.
Because the object to be viewed is essentially at a pre-defined location but unknown orientation when the images are acquired, the software to locate objects and their orientation and to identify regions of interest uses preset visual cues.
Defect detection in each region of interest is typically conducted by first running several image processing algorithms and then analyzing the resultant pixel brightness values. Groups of pixels whose brightness values exceed a preset threshold are flagged as a “bright defect,” while groups of pixels whose brightness values lie below a preset threshold are flagged as a “dark defect.” Different image processing techniques and threshold values are often needed to inspect for bright and dark defects, even within the same object region.
Locating the object in the image may be accomplished by running a series of linear edge detection algorithms. These algorithms use variable threshold, smoothing and size settings to determine the boundary between a light and dark region along a defined line. These variables are not generally available to the user, but are hard-coded into the software, as the only time they will generally need to change is in the event of large-scale lighting adjustments.
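As an illustration of such a linear edge search, the following Python sketch smooths the brightness profile along one row and reports the first light-to-dark transition; the function name, the smoothing width and the threshold are illustrative assumptions standing in for the hard-coded settings referred to above:

```python
import numpy as np

def find_edge_along_line(gray, row, threshold=128, smooth=5):
    """Return the column of the first light-to-dark transition on a row
    of the greyscale image, or None if no boundary is found."""
    profile = gray[row, :].astype(np.float64)
    kernel = np.ones(smooth) / smooth
    profile = np.convolve(profile, kernel, mode='same')      # smoothing
    above = profile >= threshold                              # light region
    transitions = np.flatnonzero(above[:-1] & ~above[1:])     # light -> dark
    return int(transitions[0]) if transitions.size else None
```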
Once the object has been located in the image, a framework of part regions is defined using a hard-coded model of the anticipated part shape and surface designs. Each of these regions can be varied in length and width through the user interface in order to adapt the software to varying object sizes.
Once the regions have been defined, a buffer distance is applied to the inside edges of each region. These buffered regions define the area within which the defect searches will be conducted. By buffering the inspection regions, edge anomalies and non-ideal lighting frequently found near the boundaries are ignored. The size of the buffers can be independently adjusted for each region as part of the standard user interface and is saved in an object profile.
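A minimal sketch of applying a buffer distance to the inside edges of a rectangular region might look as follows; the Region structure and its field names are assumptions introduced only for illustration and are not defined by the system described above:

```python
from collections import namedtuple

# Rectangular inspection region in image pixel coordinates (illustrative).
Region = namedtuple("Region", "top bottom left right")

def apply_buffer(region, buffer_px):
    """Shrink the region inward by buffer_px on every side, so that edge
    anomalies near the region boundaries are ignored during defect search."""
    return Region(region.top + buffer_px, region.bottom - buffer_px,
                  region.left + buffer_px, region.right - buffer_px)
```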
There are two general defect detection algorithms that can be conducted in each region. These two algorithms are closely tied to the detection of dents and perforations respectively as discussed above. More generally however, they correspond to the recognition of a group of dark pixels on a bright background or a group of bright pixels on a dark background.
Although there may be only two defect detection algorithms used across all the regions on the object, the parameters associated with the algorithm can be modified from region to region. Additionally, the detection of dark and/or bright defects can be disabled for specific regions. This information is saved in the object profile.
The detection of dark defects may be a six-step process.
1. Logarithm: Each pixel brightness value (0-255) is replaced with the log of its brightness value. This serves to expand the brightness values of darker regions while compressing the values of brighter regions, thereby making it easier to find dark defects on a dim background.
2. Sobel Magnitude Operator: The Sobel operator approximates the spatial derivative (gradient) of the image, producing a horizontal component Gx and a vertical component Gy. The Sobel magnitude is therefore G = √(Gx² + Gy²), although it is frequently approximated as G ≈ |Gx| + |Gy|.
The Sobel Magnitude Operator highlights pixels according to the difference between their brightness and the brightness of their neighbors. Since this operator is performed after the Logarithm filter applied in step 1, the resulting image will emphasize dark pockets on an otherwise dim background. After the Sobel Magnitude Operator is applied, the image will contain a number of bright ‘rings’ around the identified dark defects.
3. Invert Original Image: The original image captured by the camera is inverted so that bright pixels appear dark and dark pixels appear bright. This results in an image with dark defect areas appearing as bright spots.
4. Multiplication: The image obtained after step 2 is multiplied with the image obtained after step 3. Multiplication of two images like this is functionally equivalent to performing an AND operation on them. Only pixels which are bright in both images appear bright in the resultant image. In this case, the multiplication of these two images will result in the highlighting of the rings found in step 2, but only if these rings surround a dark spot.
5. Threshold: All pixels with a brightness below a specified value are set to OFF while all pixels greater than or equal to the specified value are set to ON.
6. Fill in Holes: The image obtained after the completion of steps 1-5 appears as a series of ON-pixel rings. The final step is to fill in all enclosed contours with ON pixels.
After completing these steps, the resultant image should consist of pixels corresponding to potential defects. These bright blobs are superimposed on areas that originally contained dark defects.
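For illustration, the six steps above might be implemented along the following lines using NumPy and SciPy; the function name, the rescaling before thresholding and the default parameter values are assumptions, not the actual production software:

```python
import numpy as np
from scipy import ndimage

def detect_dark_defects(gray, threshold=40):
    """gray: 2-D uint8 image of one inspection region (0-255).
    Returns a binary mask of candidate dark-defect pixels."""
    # 1. Logarithm: expand dark brightness values, compress bright ones.
    log_img = np.log1p(gray.astype(np.float64))

    # 2. Sobel magnitude of the log image: bright rings around dark pockets.
    gx = ndimage.sobel(log_img, axis=1)
    gy = ndimage.sobel(log_img, axis=0)
    sobel_mag = np.hypot(gx, gy)

    # 3. Invert the original image so dark defects become bright.
    inverted = 255.0 - gray.astype(np.float64)

    # 4. Multiply (acts like an AND): keep only rings surrounding dark spots.
    combined = sobel_mag * inverted
    if combined.max() > 0:
        combined *= 255.0 / combined.max()   # rescale before thresholding

    # 5. Threshold: pixels at or above the value are ON, the rest OFF.
    rings = combined >= threshold

    # 6. Fill in holes: turn each closed ring into a solid blob of ON pixels.
    return ndimage.binary_fill_holes(rings)
```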
The detection of bright defects may be a two-step process.
1. Threshold: A pixel brightness threshold filter may be applied to pick out all saturated pixels (greyscale value 255). A user-definable threshold may be provided so that values lower than 255 can also be detected.
2. Count Filter: A count filter is a technique for filtering small pixel noise. A size parameter is set (2, 3, 4, etc.) and a square box is constructed whose sides are this number of pixels in length. Therefore, if the size parameter is set to 3, the box will be 3 pixels by 3 pixels. This box is then centered on every pixel picked out by the threshold filter applied in step 1. The filter then counts the number of additional pixels contained within the box which have been flagged by the threshold filter and verifies that there is at least one other saturated pixel present. Any pixel which fails this test has its brightness set to 0. The effect of this filter operation is to blank out isolated noise pixels.
Once these two steps have been completed, the resultant binary image will consist of ON pixels corresponding to potential defects. Furthermore, any “speckling” type noise in the original image which would have resulted in an ON pixel will have been eliminated, leaving only those pixels which are in close proximity to other pixels which are ON.
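As an illustration, the two-step bright-defect search might be sketched as follows in Python; the function name and the default values are assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_bright_defects(gray, threshold=255, box_size=3):
    """gray: 2-D uint8 image of one inspection region (0-255).
    Returns a binary mask of candidate bright-defect pixels."""
    # 1. Threshold: flag saturated pixels (or a lower user-defined value).
    flagged = gray >= threshold

    # 2. Count filter: a flagged pixel survives only if at least one other
    #    flagged pixel lies inside the box_size x box_size box around it.
    counts = ndimage.convolve(flagged.astype(int),
                              np.ones((box_size, box_size), dtype=int),
                              mode='constant', cval=0)
    return flagged & (counts >= 2)   # the pixel itself plus one neighbor
```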
After bright and/or dark defect detection algorithms have been run in a given region, the resultant processed images are binary. These two images are then OR'ed together. This results in a single image with both bright and dark defects.
The software now counts the number of ON pixels in each detected defect. Finally, the part may be flagged as defective if either the quantity of defect pixels within a given connected region is above a user-defined threshold, or if the total quantity of defect pixels across the entire object is above a user-defined threshold.
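A minimal sketch of this final combination and counting stage, assuming NumPy/SciPy and with illustrative names for the user-defined limits, is shown below:

```python
import numpy as np
from scipy import ndimage

def is_defective(dark_mask, bright_mask,
                 max_pixels_per_defect=20, max_pixels_total=50):
    """dark_mask, bright_mask: binary results of the two defect searches."""
    defects = dark_mask | bright_mask            # OR the two binary images
    labels, count = ndimage.label(defects)       # connected defect regions
    if count == 0:
        return False
    sizes = np.bincount(labels.ravel())[1:]      # ON-pixel count per region
    return (sizes.max() > max_pixels_per_defect or
            defects.sum() > max_pixels_total)
```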
Each of the first and second vision stations may include a three-dimensional imaging subsystem or sensor such as a confocal or triangulation-based subsystem or sensor to obtain 3D images, information or data. The processor processes the 3D data to obtain dimensional or design information related to the object. The image data is both acquired and processed under control of the system controller in accordance with one or more control algorithms. The data from the sensors are processed for use with one or more measurement algorithms to thereby obtain dimensional or design information about the top and bottom surfaces of the object.
Each confocal or triangulation-based subsystem or assembly typically includes a confocal or triangulation-based sensor, respectively, having a laser for transmitting a laser beam incident on the object from a first direction to obtain reflected laser beams and at least one detector (and preferably two detectors) positioned with respect to the laser beam incident on the object. The sensor is disposed adjacent the object to illuminate the object with the beam of laser energy. Analog signals from the detectors are processed to obtain digital signals or data which can be processed by the processor.
Referring now to
As further illustrated in
Referring now to
A stationary metal sheet 162 is secured to the shaft 148 and prevents the vacuum within the cylinder member 156 from communicating with certain holes 159 formed through the cylindrical side wall of the member 156, which, in turn, communicate with aligned holes 160 formed through strips 157 and into object receiving depressions 158 in the strips 157. The holes 159 blocked by the metal sheet 162 are those holes 159 which communicate with the empty depressions 158 of the drums 130 and 132 extending from their 6 o'clock position to their 12 o'clock position at which the drums 130 and 132 pick up more objects.
Objects are provided to the inspection machine subsystem by the feeder and the transfer subsystem at controlled regular and, preferably, equal intervals. The inspection machine subsystem includes several visual inspection stations, each of which includes an imaging assembly such as the camera assemblies 110 and 112 as shown in
Referring again to
As illustrated in
While embodiments of the invention have been illustrated and described, it is not intended that these embodiments illustrate and describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention.
This application is related to U.S. patent application entitled “Method and System for Optically Inspecting Parts” filed on the same day as this application and having Attorney Docket No. GINS 0149 PUS.