The present invention is generally directed to a test system and, more specifically, to an indentation hardness test system and to an auto-learning shading corrector.
Hardness testing has been found to be useful for material evaluation, for quality control of manufacturing processes, and for research and development. The hardness of an object, although empirical in nature, can be correlated to tensile strength for many metals and provides an indicator of the wear resistance and ductility of a material. A typical indentation hardness tester uses a calibrated machine to force a diamond indenter (of a desired geometry) into the surface of the material being evaluated. The dimensions of the indentation are then measured with a light microscope after the load is removed. The hardness of the material under test may then be determined by dividing the force applied to the indenter by the projected area of the permanent impression made by the indenter.
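By way of illustration only, the following Python sketch performs this force-over-projected-area computation; the function name and numeric values are hypothetical.

    def indentation_hardness(force_kgf, projected_area_mm2):
        """Hardness as force per unit projected area (kgf/mm^2)."""
        return force_kgf / projected_area_mm2

    # Hypothetical example: a 0.5 kgf load leaving an impression with a
    # projected area of 0.00125 mm^2 yields a hardness of 400 kgf/mm^2.
    print(indentation_hardness(0.5, 0.00125))  # 400.0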
In general, it is advantageous to be able to view an overview of the test object so that it is possible to determine the best places to perform hardness tests, and thus where to form indentations. It is also advantageous to be able to view a close-up of the indentations. Accordingly, the microscopes used on such indentation hardness testers may utilize objective lenses of various magnifications. Examples of such hardness testers are disclosed in commonly-assigned U.S. Pat. Nos. 6,996,264 and 7,139,422. These particular hardness testers offer the advantage of forming a mosaic image of the test object from images captured through a high-magnification objective, such that the mosaic image has a much higher resolution than would otherwise be obtainable using a low-magnification objective through which the entire test object may be viewed.
When capturing an image with any one of the objective lenses, the image may have some inherent shading in its corners and possibly along its edges as well. Such shading, also known as vignetting, may be caused by variations in the position of the objective lens relative to the center of illumination. Other sources of shading include imperfections, distortions, and contaminants in the optical path. For most operations, such shading does not present any particular problem. However, when a user wishes to incorporate images in a report, particularly a mosaic (or panoptic) image, such shading can introduce artifacts such as stripes into the mosaic image.
One approach that has been used to correct for such shading is to manually place a mirror on the stage of the hardness tester and instruct the hardness tester to perform shading correction. In this process, the mirror is assumed to have uniform brightness over its whole surface, such that a processor of the hardness tester may capture an image of the mirror surface and then determine, for each pixel location, a correction factor that results in images of uniform intensity. This approach requires periodic manual operation for each objective lens, which takes valuable time from the operator. Events that would require a new shading correction include replacement of a light bulb and the resulting change in illumination, changes in the alignment of the light source or of the lenses, and new contaminants (dust) in the optical path.
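By way of illustration, this mirror-based approach may be sketched in Python (using NumPy) as follows; this is a minimal sketch assuming an 8-bit monochrome camera, not the actual firmware of any tester.

    import numpy as np

    def factors_from_mirror(mirror_image):
        """Per-pixel gains that flatten an image of a uniformly bright
        mirror: darker (vignetted) pixels receive gains greater than 1."""
        mirror = np.maximum(mirror_image.astype(np.float64), 1.0)
        return mirror.mean() / mirror

    def apply_factors(raw_image, factors):
        corrected = raw_image.astype(np.float64) * factors
        return np.clip(corrected, 0, 255).astype(np.uint8)  # 8-bit assumed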
According to one aspect of the present invention, an indentation hardness test system is provided for testing the hardness of a test object, the system comprising: a frame including an attached indenter; a movable stage, attached to the frame, for receiving a part; a camera for capturing images of the part; a display; a processor electrically coupled to the movable stage, the camera and the display; and a memory subsystem coupled to the processor. The memory subsystem stores code that, when executed, instructs the processor to perform the steps of: (a) causing the camera to capture a series of magnified image frames of different portions of the part, where each image frame includes raw pixel data for each pixel; (b) for each captured image frame, computing an average pixel intensity of the image frame; (c) computing an average pixel intensity across all captured image frames; (d) for each pixel, computing PixelAverageDeltan(x,y) as a function of the raw pixel data and the average pixel intensity of the image frame; (e) for each pixel, computing and storing a correction factor using the average pixel intensity across all captured image frames as determined in step (c) and PixelAverageDeltan(x,y) as determined in step (d); (f) for each pixel, performing shading correction by adjusting the raw pixel value by a corresponding correction factor as determined in step (e); and (g) generating a composite image of the part on the display, wherein the composite image includes the series of image frames as corrected for shading in step (f) and assembled according to relative positions of the image frames.
According to another aspect of the present invention, a method is provided for generating a composite image of a part with shading correction, comprising the steps of: (a) capturing a series of magnified image frames of different portions of the part, where each image frame includes raw pixel data for each pixel; (b) for each captured image frame, computing an average pixel intensity of the image frame; (c) computing an average pixel intensity across all captured image frames; (d) for each pixel, computing PixelAverageDeltan(x,y) as a function of the raw pixel data and the average pixel intensity of the image frame; (e) for each pixel, computing and storing a correction factor using the average pixel intensity across all captured image frames as determined in step (c) and PixelAverageDeltan(x,y) as determined in step (d); (f) for each pixel, performing shading correction by adjusting the raw pixel value by a corresponding correction factor as determined in step (e); and (g) generating a composite image of the part, wherein the composite image includes the series of image frames as corrected for shading in step (f) and assembled according to relative positions of the image frames.
According to another aspect of the present invention, a non-transitory computer readable medium is provided having stored thereon software instructions that, when executed by a processor, cause the processor to provide a composite image of a part with shading correction, by executing the steps comprising: (a) capturing a series of magnified image frames of different portions of the part, where each image frame includes raw pixel data for each pixel; (b) for each captured image frame, computing an average pixel intensity of the image frame; (c) computing an average pixel intensity across all captured image frames; (d) for each pixel, computing PixelAverageDeltan(x,y) as a function of the raw pixel data and the average pixel intensity of the image frame; (e) for each pixel, computing and storing a correction factor using the average pixel intensity across all captured image frames as determined in step (c) and PixelAverageDeltan(x,y) as determined in step (d); (f) for each pixel, performing shading correction by adjusting the raw pixel value by a corresponding correction factor as determined in step (e); and (g) generating a composite image of the part, wherein the composite image includes the series of image frames as corrected for shading in step (f) and assembled according to relative positions of the image frames.
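By way of illustration only, the following Python (NumPy) sketch carries out steps (a) through (g) in batch form on a completed set of frames. It assumes a ratio form for the correction factor of step (e), consistent with the equations developed in the detailed description below, and assumes the frames tile a regular grid; all names are illustrative.

    import numpy as np

    def shading_corrected_composite(frames, positions, tile_h, tile_w):
        """frames: list of 2-D raw image arrays (step (a));
        positions: list of (row, col) tile indices giving each frame's
        relative location on the part."""
        stack = np.stack([f.astype(np.float64) for f in frames])
        frame_averages = stack.mean(axis=(1, 2))               # step (b)
        historic_average = frame_averages.mean()               # step (c)
        # Step (d): each pixel's average deviation from its own frame's mean.
        deltas = stack - frame_averages[:, None, None]
        pixel_average_delta = deltas.mean(axis=0)
        # Step (e): ratio-form correction factor (an offset form also works).
        factors = historic_average / (historic_average + pixel_average_delta)
        # Step (f): shading-correct every frame.
        corrected = [f.astype(np.float64) * factors for f in frames]
        # Step (g): assemble the composite from the frames' relative positions.
        rows = 1 + max(r for r, _ in positions)
        cols = 1 + max(c for _, c in positions)
        composite = np.zeros((rows * tile_h, cols * tile_w))
        for img, (r, c) in zip(corrected, positions):
            composite[r * tile_h:(r + 1) * tile_h,
                      c * tile_w:(c + 1) * tile_w] = img
        return composite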
These and other features, advantages, and objects of the present invention will be further understood and appreciated by those skilled in the art by reference to the following specification, claims, and appended drawings.
The present invention will be more fully understood from the detailed description and the accompanying drawings, wherein:
Reference will now be made in detail to the present preferred embodiments, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals will be used throughout the drawings to refer to the same or like parts. In the drawings, the depicted structural elements are not to scale and certain components are enlarged relative to the other components for purposes of emphasis and understanding.
The embodiments described herein relate to an indentation hardness test system and a method for performing shading correction. An exemplary indentation hardness test system is first described with reference to
A system according to the various embodiments described herein can be implemented using an indentation hardness test system that includes: a light microscope; a digital camera positioned to collect images through the microscope; an electrically controlled stage capable of moving a test assembly (i.e., an associated part to be tested, or a portion of a part mounted in plastic) in at least two dimensions in a plane perpendicular to the optical axis of the light microscope; and a processor (or computer system) connected to both the camera and the stage such that the processor can display images acquired by the camera while monitoring and controlling the movements of the stage and its associated part.
Additional details of the system described above and the general operation in capturing images and performing indentation hardness testing are described in commonly-assigned U.S. Pat. Nos. 6,996,264 and 7,139,422, the entire disclosures of which are incorporated herein by reference.
The method for performing automatic shading correction is described herein as being implemented by processor 40 using images captured by camera 52. This method may be a subroutine executed by any processor, and thus this method may be embodied in a non-transitory computer readable medium having stored thereon software instructions that, when executed by a processor, cause the processor to provide a composite image of a part with shading correction, by executing the steps of the method described below. In other words, aspects of the inventive method may be achieved by software stored on a non-transitory computer readable medium or software modifications or updates to existing software residing in a non-transitory computer readable medium. Such a non-transitory computer readable medium may include, but is not limited to, any form of computer disk, tape, integrated circuit, ROM, RAM, flash memory, etc., regardless of whether it is located on a portable memory device, a personal computer, tablet, laptop, smartphone, Internet or network server, or dedicated device such as the above-described indentation hardness tester.
The method for performing automatic shading correction is described below with respect to
This method improves upon the prior mirror-based method in that it eliminates the need to place a mirror and manually initiate a shading correction routine, and it allows shading correction to be performed on the fly. The premise is that, but for any shading to be corrected, all of the pixels of the imager will be exposed to the same levels of light when averaged over many different images. Thus, given a sufficiently large sample of images over which to take the averages, shading corrections may be obtained that are similar to those obtained using a mirror, but without requiring the mirror or a separate shading correction routine.
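This premise can be checked with a toy simulation (illustrative only): apply a synthetic vignette to many unrelated random scenes, maintain the running averages described below, and observe that the recovered per-pixel gains approximate those a mirror-based calibration would have produced.

    import numpy as np

    rng = np.random.default_rng(0)
    h, w, n_frames = 64, 64, 500

    # Synthetic vignette: darker toward the corners of the imager.
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
    vignette = 1.0 - 0.3 * r2

    historic = 0.0
    delta = np.zeros((h, w))
    for n in range(1, n_frames + 1):
        scene = rng.uniform(50, 200, size=(h, w))   # unrelated content
        frame = scene * vignette                    # shaded observation
        frame_avg = frame.mean()
        historic += (frame_avg - historic) / n      # running frame average
        delta += ((frame - frame_avg) - delta) / n  # running per-pixel delta

    gains = historic / (historic + delta)
    # 'gains' is approximately proportional to 1/vignette, i.e., the same
    # flattening a mirror-based calibration would have produced.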
As shown in
If the system has not been trained for the selected lens, processor 40 executes a training routine in step 106. Otherwise, if the system has been trained for the selected lens, processor 40 performs shading correction in step 108 by multiplying the raw pixel values of the captured image frame by the previously computed correction factors corresponding to each pixel location of the captured image. The correction factors are computed during the training routine of step 106, which is described below with reference to
Once processor 40 has performed shading correction on each pixel of the captured image frame, it adds the corrected image to a composite panoptic image in step 110. Processor 40 then returns to step 100 to process the next captured image frame.
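By way of illustration, this per-frame flow may be sketched as follows, assuming the correction factors have already been stored by the training routine; the helper and variable names are hypothetical.

    import numpy as np

    def process_frame(raw, factors, trained, composite_tiles):
        """Apply stored per-pixel correction factors when trained (step 108),
        then add the frame to the growing composite image (step 110)."""
        frame = raw.astype(np.float64)
        if trained:
            frame = frame * factors   # per-pixel shading correction
        composite_tiles.append(frame)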
In the example shown in
Having described the general routine shown in
For each captured image frame n, an average intensity FrameAveragen of all pixels in the frame is first computed. A running average pixel intensity across all captured images, FrameHistoricAveragen, may then be computed (step 122) using the following equation:
FrameHistoricAveragen=(FrameAverage1+FrameAverage2+ . . . +FrameAveragen)/n.
It should be noted, however, that a weighted average could also be used to compute FrameHistoricAveragen as could an infinite impulse response (IIR) filter.
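By way of illustration, a first-order IIR filter would replace the equal-weight running average with an exponential moving average; the smoothing constant alpha below is an arbitrary illustrative value.

    def iir_update(historic_average, frame_average, alpha=0.01):
        """First-order IIR (exponential moving average) alternative to the
        equal-weight running average of step 122; a larger alpha weights
        recent frames more heavily."""
        return (1.0 - alpha) * historic_average + alpha * frame_average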
Next, the difference PixelDeltan(x,y) between each pixel of the nth frame and the average pixel intensity FrameAveragen for that frame is computed (step 124) as follows:
PixelDeltan(x,y)=Pixeln(x,y)−FrameAveragen.
Then, for each pixel, a running average PixelAverageDeltan(x,y) of the differences computed above is computed (step 126) as follows:
PixelAverageDeltan(x,y)=(PixelDelta1(x,y)+PixelDelta2(x,y)+ . . . +PixelDeltan(x,y))/n.
It should be noted that, in lieu of steps 124 and 126, PixelAverageDeltan(x,y) could alternatively be computed by averaging the value of each corresponding raw pixel Pixeln(x,y) across all n frames to obtain a value PixelAveragen(x,y) for each pixel and then subtracting FrameHistoricAveragen from that value as follows:
PixelAverageDeltan(x,y)=PixelAveragen(x,y)−FrameHistoricAveragen.
Thus, either way, processor 40 computes PixelAverageDeltan(x,y) as a function of the raw pixel data Pixeln(x,y) and the average pixel intensity of the image frame FrameAveragen.
A correction factor CorrectionFactorn(x,y) for each pixel of the nth frame may then be computed (step 128) as follows:
CorrectionFactorn(x,y)=FrameHistoricAveragen/[FrameHistoricAveragen+PixelAverageDeltan(x,y)].
Because the denominator is simply that pixel's long-run average intensity, a shaded pixel with PixelAverageDeltan(x,y)=−10 when FrameHistoricAveragen=100, for example, receives a correction factor of 100/90, or approximately 1.11.
Processor 40 then determines in step 130 whether the number n of image frames has reached a threshold number N representing enough processed image frames to provide a sufficient level of confidence in the correction factors. As an example, N may be about 500 frames. Thus, if n>N, processor 40 sets a "trained" flag for the selected objective lens in step 132. Otherwise, processor 40 ends the training routine without changing the status of the "trained" flag. It should be appreciated that the number of frames N may be varied depending upon operator preference.
If the training is complete, processor 40 may, upon exiting the training routine, return to step 108 in
CorrectedPixeln(x,y)=Pixeln(x,y)×CorrectionFactorn(x,y).
It should be noted that the correction factor may alternatively be computed as an offset for each pixel that is added to or subtracted from the raw pixel data.
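By way of illustration, such an offset-based variant might be sketched as follows, assuming the offset for each pixel is simply its long-run deviation PixelAverageDeltan(x,y):

    import numpy as np

    def offset_correction(raw, pixel_average_delta):
        """Additive alternative: subtract each pixel's long-run deviation
        from the raw data instead of multiplying by a gain."""
        return raw.astype(np.float64) - pixel_average_delta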
As noted above, processor 40 may begin to build the panoptic image using captured images that have not been corrected for shading and then use corrected images once training is complete. To determine the relative location of each image, processor 40 may obtain stage coordinates for each captured image frame and assemble the image frames based upon the associated stage coordinates.
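By way of illustration, such assembly might be sketched as follows; the coordinate convention (stage x, y in millimeters) and the scale factor are assumptions made for this sketch.

    import numpy as np

    def assemble_by_stage_coordinates(frames, stage_xy_mm, um_per_pixel):
        """Place each frame into the composite at the pixel offset implied
        by its stage coordinates; stage_xy_mm holds (x, y) pairs in mm."""
        scale = 1000.0 / um_per_pixel  # stage millimeters -> image pixels
        offsets = [(int(round(y * scale)), int(round(x * scale)))
                   for x, y in stage_xy_mm]
        # Normalize so the top-left frame lands at the origin.
        min_r = min(r for r, _ in offsets)
        min_c = min(c for _, c in offsets)
        offsets = [(r - min_r, c - min_c) for r, c in offsets]
        h, w = frames[0].shape
        composite = np.zeros((max(r for r, _ in offsets) + h,
                              max(c for _, c in offsets) + w))
        for img, (r, c) in zip(frames, offsets):
            composite[r:r + h, c:c + w] = img
        return composite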
Processor 40 may alternatively be configured to use only images that have undergone shading correction. As another alternative, processor 40 may begin to build the panoptic image with uncorrected images and then, once training is complete, go back and correct those images using the correction factors. If processor 40 uses some uncorrected images, it may nevertheless go back and replace them with corrected images that are subsequently captured of the same portion of the part and subjected to shading correction. Alternatively, processor 40 could superimpose corrected images over uncorrected images regardless of whether the images are of the exact same location on the part. As a further alternative, processor 40 may be configured to allow training to continue indefinitely. Training can also be cleared at any time, which causes n to be reset to zero.
In addition to the foregoing, an additional camera or an additional objective lens may be provided in the indentation hardness test system to capture an overview image of the entire part. This overview image may be used as the starting point of the panoptic image. Previously, a gray shaded area was used to represent the part in an enlarged simulated image, with the magnified images added (i.e., superimposed) as they were captured. By starting with an actual overview image of the part and superimposing magnified images as they are captured, detail may be added to the overview image that otherwise would not have been present, and the starting image is far more informative than the previously used simulated image.
Although the above embodiments have been described with reference to an indentation hardness test system, it will be appreciated by those skilled in the art that the method of shading correction may be used for any panoptic mosaic image, even one formed from images captured with a conventional still or video camera.
The above description is considered that of the preferred embodiments only. Modifications of the invention will occur to those skilled in the art and to those who make or use the invention. Therefore, it is understood that the embodiments shown in the drawings and described above are merely for illustrative purposes and not intended to limit the scope of the invention, which is defined by the claims as interpreted according to the principles of patent law, including the doctrine of equivalents.