The present teachings generally relate to a method and apparatus for mapping and analyzing a gradient of a surface. More particularly, the present teachings relate to various methods and apparatus for using light rays reflected from a surface to construct an image that represents variations in surface angle with a high spatial resolution. The present teachings include the algorithmic analysis of the image to determine one or more characteristics, features, anomalies or defects of the surface or of particles that form a portion of the surface.
This section provides background information related to the present disclosure which is not necessarily prior art.
Known methods for optically inspecting a surface for defects or topographical variations include aiming diverging or converging light rays from a conventional (non-collimated) light source at a working surface. Some portion of that light is directly reflected while some portion is scattered at other angles due to microscopic surface roughness. An image capturing device is then commonly positioned at an angle close, but not equal to, the nominal angle of the reflected light, such that the camera nominally captures the lower intensity scattered light. Gross changes in the local surface angle, as represented by the surface normal vector, can then cause the higher intensity reflected light to enter and be captured by the image capturing device. However, because the light source has some finite width, a multitude of light rays emitted from the non-collimated source (i.e., non-parallel light rays) will strike a given point on the working surface, each ray from a different angle. Therefore, there is also a multitude of angles of reflected light, each ray with a similar intensity. As such, areas of the inspection surface with minimally different surface angles reflect light of sufficiently equal intensity into the image capturing device, preventing these changes in surface angle from being detectable in the image. Only more drastic changes to the surface angle cause changes in the light intensity and are detected in the image.
This method of surface inspection is also highly sensitive to possible variations in the position of the working surface with respect to the camera and light. A positional translation of the working surface with respect to the light and camera changes the angular relationship between these three elements and consequently changes the intensity of the light captured by the camera. These changes in intensity can be indistinguishable from the changes caused by variation in the surface angle. This can mask or significantly hinder the detection of features or defects in the surface being analyzed.
This method of surface inspection has been previously employed in automated particle grind measurement equipment. Particle grind analysis is an important part of various manufacturing and testing processes. The size (or the fineness of grind) of particles in a ground material, such as pigment particles within a liquid, can affect numerous surface finish characteristics such as color uniformity, gloss, opacity and tint. The existing automated particle grind measurement equipment utilizes a solid rectangular gauge block with a flat top surface having at least one channel or groove of tapered depth machined therein, and is commonly referred to as a “Hegman gauge.” To perform an inspection, an operator puddles material samples into the deep end of the channels formed in the top surface of the gauge. The machine then draws the samples down with a flat edge toward the shallow end of the channels of the gauge. The material fills the channels and the machine optically inspects the gauge in order to identify the location where a regular, significant “pepperiness” in the appearance of the coating can be found, using the optical inspection method previously described. This location corresponds to the coarsest dispersed particles in the material sample. The shortcomings of the optical inspection method utilized can lead to inaccuracies in the calculated reading of fineness of grind.
This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
In one aspect the present disclosure relates to an apparatus for analyzing a surface. The apparatus may have an image capturing device and a collimated light source supported on a frame-like structure in fixed relation to each other. The light source may direct substantially parallel light rays at the surface at an angle β relative to the surface, which are reflected off of the surface as reflected light rays. The image capturing device has a view axis disposed at an angle α relative to the surface. The image capturing device captures substantially only those reflected light rays that are reflected in accordance with the angle α, and which form an image. The image provides an indication of a characteristic of the surface.
In another aspect the present disclosure relates to an apparatus for analyzing a distribution of particles contained in a composition. The apparatus may comprise a body having a working surface upon which the composition to be analyzed is applied. A moveable frame-like structure may include an image capturing device and a light source for reflecting light off of the composition. The image capturing device has a view axis disposed at an angle α relative to the working surface. The light source directs a plurality of substantially parallel light rays at an angle β relative to the working surface onto the composition. Light reflected off of the composition is captured by the image capturing device to form an image, which is useable to create a histogram indicative of a fineness of a grind of the composition.
In still another aspect the present disclosure relates to a method of analyzing a surface. The method may comprise moving a collimated light source from a first position to a second position, at an angle β relative to the surface, to illuminate the surface with a plurality of parallel light rays. The method may further involve simultaneously moving an image capturing device, arranged with a view axis at an angle α which is different from the angle β, over the surface to capture light rays which are reflected from the surface, the light rays forming an image. The image may be used to analyze the gradient of the surface.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
FIG. 6a is a high level diagram illustrating how light rays generated by the light generating system are reflected from the surface at generally the same angle (α), relative to the surface, at which they impinge the surface (angle β), except for those light rays that are reflected by microscopic or larger surface features that project from the surface, which are reflected at an angle that differs from the angle α, and which may be reflected coincident with the angle δ, which is the angle at which the image capturing device is aligned relative to the surface.
FIG. 6b shows how a change in the angle of reflection of the light rays of the system of
FIG. 6c is a schematic representation that illustrates the system of FIG. 6a and how minor changes in the distance “D” between the working surface and the light source do not cause a change in the angle of the reflected rays, and thus do not substantially change the percentage of reflected rays that are received by the image capturing device.
FIG. 6d is a high level block diagram of one example of a system for use with the apparatus of
FIG. 6e is an example histogram which may be produced from the images obtained by the image capturing device.
a is a block diagram showing the general steps of a method for detecting particle dispersion in accordance with the teachings of the present disclosure.
b is a block diagram further detailing the process image operation of the method for detecting particle dispersion of
c is a block diagram further detailing the process channel of
d is a block diagram further detailing the compute Hegman Reading from remaining blobs of
e is a block diagram further detailing an operation called for in
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
Example embodiments will now be described more fully with reference to the accompanying drawings.
With general reference to
As illustrated in
The apparatus 10 may generally include an enclosure or housing 12 to facilitate portability and otherwise protect the apparatus 10 during transportation. The apparatus 10 may also generally include an analysis assembly 14. The housing 12 may be a generally hollow construct having at least one access location or door (not shown) for accessing an interior portion of the housing 12 (including the analysis assembly 14). The housing 12 may also include at least one handle 18 for transporting or otherwise moving the apparatus 10. In one configuration the housing 12 may be a rectangular box or cuboid having two (2) handles 18 and a hinged door.
With continued reference to
The base subassembly 20 may generally include a base 28, an inspection block 30, at least one rail member 32, a body or gauge block 34, and a holder 36. The base 28 may be mounted within the housing 12. The base 28 may include a first support surface 37 and at least one slot 38 extending across the surface 37. As illustrated in
The inspection block 30 may be a generally rectangular member having a second support surface 42. The inspection block 30 may be mounted to the base 28 such that the second support surface 42 is generally parallel to the first support surface 37 of the base 28. While the inspection block 30 is generally shown as a unitary piece, it is also understood that the inspection block may be formed from a plurality of distinct layers of material, such as a stack of shims. The inspection block 30 may be mounted to the base 28 using mechanical fasteners (e.g., screws), adhesive or other suitable fastening techniques. As illustrated, the inspection block 30 may be mounted between the slots 38.
The rail member 32 may include a first surface 44, a second surface 46 and a third surface 48. The second surface 46 may angularly extend from and between the first surface 44 and the third surface 48 to form a ramp between the first and third surfaces. The first, second and third surfaces 44, 46 and 48, respectively, may be substantially planar. The first surface 44 and the third surface 48 may be substantially parallel to the second support surface 42 of the inspection block 30. In one configuration, the base subassembly 20 may include two (2) substantially parallel rail members 32 located between the slots 38. The rail members 32 may be mounted to the inspection block 30 using mechanical fasteners (e.g., screws), adhesive, or other suitable fastening techniques, or may even be machined from material making up the inspection block such that the rail members 32 form an integral part of the inspection block 30.
The gauge block 34 may be a solid rectangular block of material having a working surface 50, and is commonly known as a grindometer block or a “Hegman gauge block”. It will also be appreciated that the gauge block 34 may be any other type of body or structure having a working surface that is subject to surface inspection. The working surface 50 may be a substantially planar upper surface (relative to
The planar working surface 50 may include at least one linear channel 52 machined or otherwise formed therein and generally tapered in depth along its length such that the depth changes uniformly from one end of the channel 52 to the other. The gauge block 34 may optionally include metering or calibration marks 54 along the length of the channel 52. In one configuration the gauge block 34 includes two (2) substantially parallel channels 52. The gauge block 34 may be removably located on the inspection block 30 between the rail members 32.
The holder 36 may include at least one leg portion 56 and a blade portion 58. In one specific configuration the holder 36 may include two leg portions 56, with the blade portion 58 extending therebetween. The holder 36 may be removably assembled or placed on the base subassembly 20 such that the leg portions 56 are supported on the rail members 32. The carriage subassembly 22 is conventionally mounted for linear movement relative to the base subassembly 20. In this regard the carriage subassembly 22 is moveable linearly from a first position to a second position. The first position is shown in
The carriage subassembly 22 may include a bracket 60, a light assembly 62 and an image capturing device 64. The bracket 60 may include at least one leg 66 and at least one bumper 68. In one configuration the bracket 60 includes two substantially parallel legs 66.
With particular reference to
The light assembly 62 may include at least one mount portion 80 and a light source 82. The light assembly 62 may be mounted to the bracket 60. Specifically, the mount portion 80 of the light assembly 62 may be mounted within the arcuate slot 74 such that the mount portion 80 is operable to slide, or otherwise move within, the arcuate slot 74 from the first end 76 to the second end 78. In this regard the mount portion 80 may be a rod, pin or other suitable structure for operably engaging in and traversing the arcuate slot 74. This enables the angle of the light rays emitted from the light assembly 62 to be adjustably positioned relative to the working surface 50.
The mount portion 80 may be fastened to the light source 82. The light source 82 may be generally located between the legs 66 of the bracket 60 and above the gauge block 34. The light source 82 may be operable to project a plurality of parallel light rays that cooperatively form a beam or “light profile.” The light profile may be a substantially uniaxially collimated light profile generating approximately parallel light rays 86 (
When the mount portion 80 of the light assembly 62 is located at the first end 76 of the arcuate slot 74, the angle β between the light rays 86 and the working surface 50 may be substantially equal to 58.8 degrees, for example. When the mount portion 80 of the light assembly 62 is located at the second end 78 of the arcuate slot 74, the angle β may be substantially equal to 78.8 degrees, for example. As illustrated in
The image capturing device 64 may be a video camera, a still frame camera, or any other suitable device for capturing and transmitting images. In one particular configuration the image capturing device 64 is a line scan video camera designed to accept incoming light rays only at a single angle δ in the x-y plane. The image capturing device 64 may be mounted to and carried by the bracket 60. In one configuration the image capturing device 64 may be mounted proximate to the second end 72 of the bracket 60.
With brief reference to
The image capturing device 64 may be operable to send images comprising image data to, and to receive data from, a computing device (not shown) via a wired or wireless data transmission method. In this regard the computing device may include an output device (e.g., a display or monitor), an input device (e.g., a keyboard, mouse, USB port, Bluetooth receiver), and a memory system (e.g., hard drive or RAM), and may be integrated into the apparatus 10. In another configuration the apparatus 10 may be a stand-alone apparatus for detecting particle dispersion which is operable to communicate with a separate, stand-alone computing device via software or another program running on the computing device.
Referring now to
In this example a liquid composition making up a test sample, such as pigment suspended in a carrier liquid, may be added to the working surface 50 and/or to the at least one channel 52 of the gauge block 34. The composition will typically include particles of various sizes that are suspended within the liquid of the composition. An electric motor (not shown) or other suitable power source may cause the carriage subassembly 22 to move from a first position (
After the carriage subassembly 22 reaches the second position (
As illustrated in
As another example, in
Another advantage of the apparatus 10 is that it can be configured to be substantially insensitive to small changes in overall thickness or elevation of the working surface 50. This is illustrated in
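Merely by way of illustration of this geometric insensitivity, and not as a description of any software forming part of the apparatus 10, the short Python sketch below compares a collimated beam with a hypothetical non-collimated point source. The numeric values, the point-source position and the function name are arbitrary assumptions chosen only to show that, with parallel rays, a small change in the elevation of the working surface leaves the specular reflection angle unchanged, whereas with a diverging source the reflection angle at a given inspection point shifts with elevation.

import math

def point_source_incidence_deg(source_xy, surface_x, surface_h):
    # Incidence angle (degrees, measured up from the surface) of a ray
    # travelling from a finite point source down to (surface_x, surface_h).
    sx, sy = source_xy
    return math.degrees(math.atan2(sy - surface_h, surface_x - sx))

SURFACE_X = 50.0             # illustrative inspection point (arbitrary units)
COLLIMATED_BETA = 70.0       # illustrative collimated beam angle (degrees)
POINT_SOURCE = (0.0, 100.0)  # illustrative non-collimated point source

for h in (0.0, 1.0, 2.0):    # small changes in working surface elevation
    diverging = point_source_incidence_deg(POINT_SOURCE, SURFACE_X, h)
    # Specular reflection leaves the surface at the same angle it arrives,
    # so the reflected ray angle tracks the incidence angle exactly.
    print(f"elevation {h:.1f}: collimated reflection angle = {COLLIMATED_BETA:.2f} deg, "
          f"point-source reflection angle = {diverging:.2f} deg")

Because the collimated reflection angle is constant, the proportion of reflected rays accepted by an image capturing device fixed at a single view angle is likewise unchanged by such elevation variations.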
The apparatus 10 is further shown in one specific configuration in
With continued reference to
At operation 202 the computing device checks the product identification entered by the user, and proceeds to decision block 204. If the product identification is a new product identification, the method proceeds to operation 206 at which the exposure is tuned or calibrated. By “tuned” it is meant that an optimal amount of exposure time for the image capturing device 64 is obtained by an iterative process involving increasing or decreasing the exposure time based on the deviation of the current average pixel intensity value from a desired pixel intensity value. The purpose of the tuning process is to ensure that the sensors of the image capturing device 64 operate within a desirable range for samples of varying reflectance, and therefore maximize their signal-to-noise ratio. At operation 208 a check is made as to whether the tuning operation was successful and, if not, the method proceeds to operation 210 and reports an exposure error. Upon such a failure, the method then ends at operation 212.
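Merely for purposes of illustration, the following Python sketch outlines one way the iterative exposure tuning described above could be carried out; the target intensity, tolerance, iteration limit and the capture_frame() callback are hypothetical placeholders rather than values or interfaces specified by the present disclosure.

def tune_exposure(capture_frame, target_intensity=128.0, tolerance=4.0,
                  initial_exposure_ms=10.0, max_iterations=20):
    # Iteratively adjust the exposure time until the average pixel intensity
    # of a captured frame is within 'tolerance' of 'target_intensity'.
    # capture_frame(exposure_ms) is assumed to return an iterable of pixel
    # intensity values in the range 0-255.
    exposure_ms = initial_exposure_ms
    for _ in range(max_iterations):
        pixels = list(capture_frame(exposure_ms))
        average = sum(pixels) / len(pixels)
        if abs(target_intensity - average) <= tolerance:
            return exposure_ms, True   # tuning succeeded (operation 208, "yes")
        # A frame brighter than the target shortens the exposure for the next
        # attempt; a darker frame lengthens it, in proportion to the deviation.
        exposure_ms *= target_intensity / max(average, 1.0)
    return exposure_ms, False          # tuning failed (operations 210 and 212)

An unsuccessful return value corresponds to the error reporting and termination path of operations 210 and 212.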
If the tune exposure operation is detected at operation 208 as having been successful, then the method advances to operation 214. Similarly, if it is determined at operation 204 that the product identification is not new, the method advances to operation 214. In this case the computing device defers to previously saved exposure tuning data for the existing product identification.
After the image is acquired at operation 214 it is processed at operation 216. Acquiring the image at operation 214 may involve a pass of the carriage subassembly 22 in one direction or it may involve movement of the carriage fully in one direction and then fully in the opposite (i.e., return) direction. The image processing of operation 216 is further detailed at
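Again merely for purposes of illustration, and building on the tune_exposure() sketch above, the following Python outline mirrors the control flow of operations 202 through 216: a saved exposure is reused for a known product identification, a new exposure is tuned otherwise, and the acquired image is then handed off for processing. The dictionary of saved exposures and the acquire_image() and process_image() callbacks are hypothetical placeholders, not interfaces of the disclosed apparatus.

def run_inspection(product_id, saved_exposures, capture_frame,
                   acquire_image, process_image):
    # Top-level flow: reuse a saved exposure for a known product
    # identification, otherwise tune a new one and save it, then
    # acquire the image and hand it off for processing.
    if product_id in saved_exposures:              # operation 204: not new
        exposure_ms = saved_exposures[product_id]
    else:                                          # operation 206: tune exposure
        exposure_ms, ok = tune_exposure(capture_frame)
        if not ok:
            raise RuntimeError("exposure error")   # operations 210 and 212
        saved_exposures[product_id] = exposure_ms
    image = acquire_image(exposure_ms)             # operation 214: acquire image
    return process_image(image)                    # operation 216: process image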
The image processing indicated at operation 216 in
Referring to
After the blobs identified in operation 406 are filtered in operation 408, the method proceeds to compute Hegman-type readings for the remaining blobs at operation 412. Operation 412 is further detailed in
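The specific detection and filtering criteria of operations 406 and 408 are not reproduced above, so the following Python sketch is only a generic stand-in offered for illustration: it labels bright connected regions of the acquired image as blobs, discards blobs whose area falls outside an assumed range, and returns the position of each surviving blob along the channel. The intensity threshold, the area limits and the use of scipy.ndimage are assumptions, not features of the disclosed method.

import numpy as np
from scipy import ndimage

def blob_positions(image, intensity_threshold=200, min_area=2, max_area=500):
    # Label bright connected regions of a grayscale image as blobs, discard
    # those whose pixel area falls outside [min_area, max_area], and return
    # the column (x) coordinate of each surviving blob's centroid.
    mask = image >= intensity_threshold
    labels, count = ndimage.label(mask)
    positions = []
    for index in range(1, count + 1):
        area = int(np.count_nonzero(labels == index))
        if min_area <= area <= max_area:
            _, x_centroid = ndimage.center_of_mass(mask, labels, index)
            positions.append(float(x_centroid))
    return positions

Each returned position can then be mapped, through the known taper of the channel 52, to a corresponding particle size before the histogram discussed below is generated.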
Referring to
Once the histogram has been generated and smoothed, at operation 506 the computing device can determine the relative location of the particle size P1 with the highest count (denoted as “maxV” (@maxL)) and the particle size P2 (P2>P1) with the lowest count (denoted as “minV” (@minL)) in the histogram. Merely for purposes of illustration, an example histogram is shown in
If the difference Δ is greater than a predetermined range (e.g., 5), the method proceeds to operation 510 at which the computing device can analyze the histogram. The analysis involves scanning the histogram in the direction of increasing particle size for the first encountered location X3, where a frequency of occurrence V3 of a particle size P3 is less than or equal to a predetermined factor or percentage (e.g., 30%) of the frequency of occurrence maxV of the highest count particle P1. In
At operation 512 a check is made as to whether a location was found where the particle size meets the constraints imposed at operation 510. If this inquiry produces a “Yes” answer, then at operation 514 the computing device outputs the location (x), also referred to as the Hegman reading, to the output device. In the example histogram of
If the difference Δ is determined at decision block 508 to be less than the predetermined range (e.g., 5), or if the computing device is unable to determine the location of a particle size that meets the conditions imposed at operation 510, then the computing device may proceed to operation 518 for the handling of abnormal conditions.
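The description above leaves some details open, for example exactly which quantities the difference Δ compares and where the scan of operation 510 begins, so the following Python sketch is offered merely as one plausible illustration of operations 506 through 518: it treats Δ as the difference between the peak and trough counts and scans from the peak toward larger particle sizes. The function name and default values are hypothetical.

def hegman_location_from_histogram(counts, delta_threshold=5, drop_fraction=0.30):
    # 'counts' is a smoothed histogram (list) indexed by increasing particle
    # size.  Locate the peak count maxV (at index maxL) and the lowest count
    # minV at a larger particle size (at index minL).  If the peak-to-trough
    # difference exceeds delta_threshold, scan from the peak toward larger
    # particle sizes for the first bin whose count drops to drop_fraction of
    # the peak and return its index; otherwise return None.
    if len(counts) < 2:
        return None
    max_l = max(range(len(counts)), key=lambda i: counts[i])
    max_v = counts[max_l]
    tail = counts[max_l + 1:]
    if not tail:
        return None
    min_l = max_l + 1 + min(range(len(tail)), key=lambda i: tail[i])
    min_v = counts[min_l]
    if max_v - min_v <= delta_threshold:          # assumed meaning of the difference
        return None
    for x in range(max_l + 1, len(counts)):
        if counts[x] <= drop_fraction * max_v:
            return x                              # location reported as the reading
    return None

A returned index would still need to be converted from a histogram bin location to a value on the Hegman scale, and a return of None corresponds to deferring to the abnormal-condition handling of operation 518.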
Referring to
If it is determined at operation 602 that the total number of particles in the histogram is greater than or equal to the first predetermined quantity (e.g., ThreshLow=30), then at operation 608 the computing device may determine whether the total number of particles in the histogram is greater than a second predetermined quantity (i.e., “ThreshHeight”=1000). If the total number of particles (e.g., blobs) in the histogram is greater than the second predetermined quantity (e.g., greater than ThreshHeight=1000), then at operation 610 the computing device may set the Hegman reading to a predetermined default value (e.g., “worstReading”=4). If the total number of particles in the histogram is less than or equal to the second predetermined quantity (e.g., ThreshHeight=1000), as determined at operation 608, then at operation 612 the computing device may communicate to the output device that a Hegman reading cannot be determined. Handling of the abnormal condition may then conclude at operation 606.
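Merely for purposes of illustration, the branch structure of operations 602 through 612 described above can be summarized by the following Python sketch; the branch taken when fewer than ThreshLow blobs are present is detailed with reference to a figure not reproduced here, so it is represented only by a placeholder, and the function name is hypothetical.

def handle_abnormal_condition(total_blobs, thresh_low=30, thresh_high=1000,
                              worst_reading=4):
    # Abnormal-condition handling along the lines of operations 602-612.
    # Returns a Hegman reading to report, or None when no reading can be
    # determined by this sketch.
    if total_blobs < thresh_low:
        return None                    # branch not reproduced in the text above
    if total_blobs > thresh_high:      # operation 610: far too many blobs
        return worst_reading           # predetermined default ("worstReading")
    return None                        # operation 612: reading cannot be determined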
After the Hegman readings for the first channel are completed at operation 412 (see
The foregoing description of the embodiments and method of the present disclosure has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the present disclosure.
The example embodiments discussed above are not intended to be limiting, and have been provided so that this disclosure will be thorough and will fully convey the scope of the present disclosure to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures and well-known technologies are not described in detail.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.