The invention relates to a method for optically measuring technical surfaces using a confocal sensor. Light from a light source (11) is directed onto a sample surface to be measured via an optical system that contains an illumination mask (13), a sensor matrix (15), a beam splitter (14) for combining an illumination beam path and a detection beam path, and imaging optics (6). The illumination mask (13) consists of transparent regions (1) and non-transparent or slightly transparent regions (2) arranged in a checkerboard pattern, and the pitch (3) of the pattern on the illumination mask (13) corresponds to the pixel pitch (24) of the sensor matrix (15). The illumination mask (13) and the sensor matrix (15) are adjusted relative to each other such that the transparent regions (1) and the pixels of the sensor matrix (15) are simultaneously sharply imaged onto the sample (7) by the imaging optics (6); the sharp image of the illumination mask (13) on the sample is then in turn sharply imaged onto the sensor matrix (15), so that a checkerboard pattern of brightly and darkly illuminated pixels is produced on the sensor matrix.
The invention also relates to a device for carrying out the method.
Until now, it has been common practice to realize planar confocal imaging by combining illumination and detection using a beam splitter and then inserting the confocal filter. This filter is, for example, a rotating multipinhole disk (Nipkow disk), a fixed pinhole pattern, a microlens array, or a combination of a microlens array with a pinhole pattern or a rotating multipinhole disk. The transparency of the confocal filter can be specifically increased using the microlenses, but the manufacturing process for this solution is technologically demanding, and the microlenses can have a negative impact on the optical imaging quality of the overall system.
Confocal filters without microlenses usually have a transparency of less than 5%, which is why very powerful light sources must be used and disturbing light reflections often occur in the device in front of the confocal filter. In addition to the frame rate of the camera, the speed of the system is limited by the required illumination intensity for sufficient modulation of the camera signal and the maximum speed of a synchronously running multipinhole disk.
Methods according to the preamble of claim 1 and corresponding devices are known from the following publications:
In this prior art, the transparent regions of the illumination mask are angular.
If a pinhole pattern with rectangular transparent regions according to
The object of the invention is that of eliminating the above-mentioned direction-dependent diffraction effects.
This object is achieved in the method of the type mentioned at the outset, in accordance with the invention, in that the transparent regions (1) of the illumination mask (13) are round.
The following advantages, inter alia, are achieved:
In order to eliminate the direction-dependent artifacts, pinhole patterns with round transparent regions according to
This invention reduces the implementation of confocal surface detection to the essential elements, significantly increases the optical transparency of the overall system, and removes the previous limitation of the practically realizable measuring speed.
Advantageous embodiments of the invention are specified in the dependent claims.
It is therefore proposed that there are no imaging optics either between the illumination mask (13) and the beam splitter (14) or between the camera sensor and the beam splitter (14).
It is further proposed that the imaging optics (6) focus through the sample (7) during the acquisition of an image stack of confocal images, the position of the respective focus position being included in the determination of the z-positions of the intensity maxima.
In order to capture a 3D image, an image stack typically containing 20 to 1,000 images, depending on the desired resolution, is captured while the focus is moved continuously in the Z-direction through the sample or, correspondingly, the sample is moved through the focus. The next image stack is then captured either in the reverse scanning direction or in the same direction, the focus in the latter case first being moved back to its starting position as quickly as possible. The intensity curve for each pixel is evaluated from the recorded image stacks.
It is further proposed that the camera sensor is a monochromatic sensor, the intensity values of the “dark” pixels corresponding to the non-transparent or slightly transparent regions (2) of the illumination mask (13) being first inverted and then the Z-position of the intensity maxima being determined, or the height values for the slightly transparent regions of the illumination mask (13) being interpolated from the height values of the neighboring pixels.
If a monochromatic sensor matrix (black-and-white camera) is used, neighboring pixels generate, in the recorded image stack, intensity signals with either an intensity maximum or an intensity minimum in the focus. In both cases, the height position z0 can be determined algorithmically, e.g. using the center-of-gravity algorithm. For pixels with an intensity minimum in the focus, the intensity values are first inverted and the height position z0 is then determined analogously to the pixels with an intensity maximum.
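The center-of-gravity evaluation described above can be sketched as follows (illustrative Python, not part of the claimed subject matter; the function name, array layout, and background handling are assumptions):

```python
import numpy as np

def centroid_z(intensity, z_positions, invert=False):
    """Estimate the focus height z0 of one pixel from its intensity
    curve over the image stack using the center-of-gravity algorithm.

    intensity   : 1-D array, one intensity value per stack image
    z_positions : 1-D array, focus position of each stack image
    invert      : True for "dark" pixels, whose curve has a minimum
                  in the focus; the curve is inverted first
    """
    curve = np.asarray(intensity, dtype=float)
    if invert:
        curve = curve.max() - curve   # turn the minimum into a maximum
    weights = curve - curve.min()     # suppress the background offset
    if weights.sum() == 0:
        return np.nan                 # flat curve: no valid focus found
    return float(np.dot(weights, z_positions) / weights.sum())
```

A "bright" pixel with its intensity maximum mid-stack and a neighboring "dark" pixel with its minimum at the same focus position then yield the same height z0, as exploited in the combined evaluation.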
The additional evaluation of the pixels with an intensity minimum in the focus has the advantage that data that are normally discarded are also used to calculate the 3D result. Twice as much raw data is thus included in the calculation of the overall 3D result as in the usual evaluation of only the pixels with maximum intensity. With suitable post-evaluation, the noise can in this way be reduced to 70.7% of the original value (1/√N with N = 2) using identical hardware. Since the two partial images, determined from intensity maxima and minima respectively, are based on fundamentally different underlying information owing to their different origin, more valid measurement data can be obtained by a suitable combination of the two results of neighboring pixels, especially in sample regions with low reflection. This improves both the quality and the data density of the 3D results.
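The 70.7% figure follows directly from the 1/√N law for averaging independent estimates of equal noise (a minimal illustration; the function name is an assumption):

```python
import math

def combined_noise(sigma, n_estimates):
    """Standard deviation remaining after averaging n_estimates
    independent measurements with equal noise sigma (1/sqrt(N) law).
    For N = 2 (maxima and minima evaluated separately, then combined)
    the noise drops to 1/sqrt(2), i.e. about 70.7 % of its original value."""
    return sigma / math.sqrt(n_estimates)
```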
Alternatively, the height positions z0 of the pixels with intensity minimum can also be determined by interpolating the height values of neighboring pixels. The determined height position is output for each individual pixel.
The speed of data evaluation can be almost doubled as a result.
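The alternative interpolation of the dark-pixel heights can be sketched as follows (illustrative Python under the assumption of a checkerboard arrangement, so that the four direct neighbours of every dark pixel are bright pixels with directly measured heights; edge handling via replication is an assumption):

```python
import numpy as np

def fill_dark_pixels(z_map, dark_mask):
    """Fill the height values of the "dark" pixels by averaging the
    four direct neighbours, which on the checkerboard pattern are all
    "bright" pixels with directly measured heights.

    z_map     : 2-D height map, valid values at the bright pixels
    dark_mask : boolean 2-D array, True where the pixel was dark
    """
    z = np.asarray(z_map, dtype=float)
    padded = np.pad(z, 1, mode="edge")          # replicate the border
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    filled = z.copy()
    filled[dark_mask] = neighbours[dark_mask] / 4.0
    return filled
```

Because the interpolation replaces a full center-of-gravity evaluation per dark pixel with a simple four-neighbour average, roughly half of the per-pixel evaluations are avoided, which is consistent with the near doubling of the evaluation speed mentioned above.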
It is further proposed that the camera sensor is a color sensor with a Bayer pattern, the “bright” pixels corresponding to the transparent regions (1) of the illumination mask (13) being the green pixels (21) and the z-position of the intensity maximum being determined for these pixels, and that the height values for the red pixels (23) and blue pixels (22) are interpolated from the height values of the neighboring green pixels (21).
If a white light source, e.g. a white LED, is used as the light source, and a color sensor matrix with the Bayer pattern shown in
A known problem in the 3D evaluation of adjacent pixels of different colors is a vertical shift of the determined height position depending on the wavelength. This leads to a checkerboard-like pattern in the 3D result when all pixels are displayed.
The exclusive use of the “green” pixels for 3D evaluation has the advantage that neither wavelength-dependent interactions between light and the sample surface, nor chromatic aberrations of the imaging optics, lead to visible artifacts in the 3D result.
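The green-only evaluation can be sketched as follows (illustrative Python; an RGGB Bayer layout and the function names are assumptions). On the Bayer pattern the green pixels themselves form a checkerboard, so every red or blue pixel has four green direct neighbours from which its height can be interpolated:

```python
import numpy as np

def bayer_green_mask(shape):
    """Boolean mask of the green pixels for an assumed RGGB layout
    (R at (0,0)): green pixels sit where row + col is odd, forming
    a checkerboard."""
    rows, cols = np.indices(shape)
    return (rows + cols) % 2 == 1

def interpolate_red_blue(z_green, shape):
    """Height map in which red/blue pixel heights are the mean of the
    four neighbouring green pixels; z_green holds the heights measured
    at the green pixels (values elsewhere are ignored)."""
    g = bayer_green_mask(shape)
    z = np.where(g, np.asarray(z_green, dtype=float), 0.0)
    padded = np.pad(z, 1, mode="edge")
    nb = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
          padded[1:-1, :-2] + padded[1:-1, 2:])
    out = z.copy()
    out[~g] = nb[~g] / 4.0
    return out
```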
It is further proposed that, when generating the colored intensity image, the color information for the red pixels (23) and blue pixels (22) is determined from the intensity values just outside the focus.
In the focus, the intensity of the red and blue pixels is reduced due to the confocal effect according to
The intensity value in the respective focus is used for the green pixels.
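Sampling the color information in and just outside the focus can be sketched as follows (illustrative Python; the function name, the array layout, and the fixed slice offset for "just outside the focus" are assumptions):

```python
import numpy as np

def color_from_stack(stack, z_index, offset=3):
    """Sample per-pixel intensities from a confocal image stack.

    stack   : 3-D array (n_images, height, width)
    z_index : 2-D integer array, stack index of the focus per pixel
    offset  : assumed number of stack slices "just outside" the focus

    Returns the intensity in the focus (used for the green pixels) and
    the intensity just outside the focus (used for the red and blue
    pixels, where the confocal effect reduces the in-focus intensity).
    """
    n = stack.shape[0]
    rows, cols = np.indices(z_index.shape)
    in_focus = stack[z_index, rows, cols]
    off_idx = np.clip(z_index + offset, 0, n - 1)  # stay inside the stack
    out_of_focus = stack[off_idx, rows, cols]
    return in_focus, out_of_focus
```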
It is further proposed that the calculation of the Z-position of the intensity maxima already begins during the measurement data acquisition, this calculation being carried out using parallelized algorithms.
The confocal evaluation is carried out during the acquisition of the image stack. In this case, each image is transferred to the graphics card and evaluated there in parallel. This means that the resulting 3D image is already available, displayable and storable during the acquisition of the ensuing image stack, as a result of which the latency time for obtaining the result after the image stack has been acquired is typically less than the time required to acquire it.
Likewise, the resources used for data evaluation are free again once the 3D image has been calculated, so that the next 3D calculation can be started immediately afterwards and the image acquisition does not have to be interrupted.
This means that a 3D frame rate of 20 Hz can be achieved when using a fast camera system, for example one having a frame rate of 800 Hz and recording 40 images per image stack. This allows 3D images to be continuously displayed and saved at video frequency without any waiting time between the recording of two 3D images. If the sample surface to be measured is moved continuously at a constant speed during measurement data acquisition, a 3D image with a defined distortion is created, which can be corrected by a speed-dependent 3D calibration in a post-processing algorithm.
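The frame-rate arithmetic in the example above is simply the camera rate divided by the stack depth (a trivial helper; the function name is an assumption):

```python
def max_3d_rate(camera_fps, images_per_stack):
    """Achievable 3D frame rate when image stacks are acquired
    back-to-back and the confocal evaluation runs in parallel:
    e.g. an 800 Hz camera with 40 images per stack yields 20 Hz."""
    return camera_fps / images_per_stack
```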
An embodiment of the invention is described in more detail below with reference to drawings. In all the drawings, the same reference signs have the same meaning and are therefore only explained once where appropriate.
In the figures:
The mode of operation of the method according to the invention for optically measuring technical surfaces is explained in detail below:
This structure can also be realized in such a way that the illumination optics (11) to (13) and sensor matrix (15) are swapped, i.e. the illumination takes place in transmission through the beam splitter, while the sensor matrix is arranged in reflection.
Number | Date | Country | Kind
---|---|---|---
10 2021 128 444.9 | Nov 2021 | DE | national
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/079294 | 10/20/2022 | WO |