METHOD FOR OPTICALLY MEASURING TECHNICAL SURFACES, AND DEVICE FOR CARRYING OUT THE METHOD

Information

  • Patent Application
  • Publication Number
    20250012564
  • Date Filed
    October 20, 2022
  • Date Published
    January 09, 2025
Abstract
In a method for optically measuring technical surfaces using a confocal sensor, light is directed onto a sample surface to be measured via an optical system that contains an illumination mask, a sensor matrix, a beam splitter for combining the illumination and detection beam paths, and imaging optics. The mask has transparent and non-transparent or slightly transparent regions arranged in a checkerboard pattern. The pitch of the mask pattern corresponds to the pixel pitch of the matrix. The mask and the matrix are adjusted relative to each other such that the transparent regions and the pixels of the matrix are simultaneously sharply imaged onto the sample by the imaging optics, so that the sharp image of the mask is in turn sharply imaged onto the matrix and a checkerboard pattern of light and dark illuminated pixels is produced on the matrix. The transparent regions of the mask are round.
Description

The invention relates to a method for optically measuring technical surfaces using a confocal sensor. Light from a light source (11) is directed onto a sample surface to be measured via an optical system, said optical system containing an illumination mask (13), a sensor matrix (15), a beam splitter (14) for combining an illumination beam path and a detection beam path, and imaging optics (6), wherein the illumination mask (13) consists of transparent regions (1) and non-transparent or slightly transparent regions (2) arranged in a checkerboard pattern, and the pitch (3) of the pattern on the illumination mask (13) corresponds to the pixel pitch (24) of the sensor matrix (15). The illumination mask (13) and the sensor matrix (15) are adjusted relative to each other such that the transparent regions (1) and the pixels of the sensor matrix (15) are simultaneously sharply imaged onto the sample (7) by means of the imaging optics (6), whereby the sharp image of the illumination mask (13) is then sharply imaged onto the sensor matrix (15) so that a checkerboard pattern of light and dark illuminated pixels is produced on the sensor matrix.


The invention also relates to a device for carrying out the method.


BACKGROUND

Until now, it has been common practice to realize planar confocal imaging by combining illumination and detection using a beam splitter and then inserting the confocal filter. The confocal filter is, for example, a rotating multipinhole disk (Nipkow disk), a fixed pinhole mask, a microlens array, or a combination of a microlens array with a pinhole mask or a rotating multipinhole disk. The transparency of the confocal filter can be specifically increased using the microlenses, but the manufacturing process for this solution is technologically demanding and the microlenses can have a negative impact on the optical imaging quality of the overall system.


Confocal filters without microlenses usually have a transparency of less than 5%, which is why very powerful light sources must be used and disturbing light reflections often occur in the device in front of the confocal filter. In addition to the frame rate of the camera, the speed of the system is limited by the required illumination intensity for sufficient modulation of the camera signal and the maximum speed of a synchronously running multipinhole disk.


PRIOR ART

Methods according to the preamble of claim 1 and corresponding devices are known from the following publications:

  • WO 2014/125 037 A1
  • WO 2012/083 967 A1
  • WO 2010/145 669 A1
  • M. Noguchi, S. K. Nayar, “Microscopic shape from focus using a projected illumination pattern”, Mathematical and Computer Modelling, Volume 24, Issues 5-6 (1996), Pages 31-48, ISSN 0895-7177, https://doi.org/10.1016/0895-7177(96)00114-8
  • U.S. Pat. No. 6,229,913 B1
  • DE 10 2015 209 410 A1


In this prior art, the transparent regions of the illumination mask are angular.


If a pinhole pattern with rectangular transparent regions according to FIG. 2 is used as a confocal filter as an illumination mask, direction-dependent diffraction effects occur in the illumination. When capturing images of textured surfaces, these lead to local asymmetries in light intensity when focusing through the surface. Depending on the orientation, this favors the formation of spikes and excessive roughness of the determined 3D surface. This can be demonstrated experimentally by carrying out several measurements of the same surface region at different angles of rotation around the optical axis and comparing the results.


Object and Solution of the Invention

The object of the invention is to eliminate the above-mentioned direction-dependent diffraction effects.


This object is achieved in the method of the type mentioned at the outset, in accordance with the invention, in that the transparent regions (1) of the illumination mask (13) are round.


Advantages of the Invention

The following advantages, inter alia, are achieved:


In order to eliminate the direction-dependent artifacts, pinhole patterns with round transparent regions according to FIG. 3 are proposed. This removes the directional dependence of the diffraction at the individual pinholes, resulting in the desired rotationally symmetric Airy patterns on the sample.
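The direction dependence can be made plausible numerically. The following Python sketch (an illustration only, using a Fraunhofer far-field approximation and hypothetical pinhole sizes on a pixel grid) compares the azimuthal variation of the diffraction pattern of a square pinhole with that of a round pinhole:

```python
import numpy as np

def far_field_intensity(aperture):
    """Fraunhofer (far-field) diffraction pattern as |FFT|^2 of the aperture."""
    return np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

def azimuthal_asymmetry(pattern, radius, n_angles=64):
    """Relative spread of the pattern sampled on a circle around the center
    (bilinear interpolation between pixels); zero for a perfectly
    rotationally symmetric pattern."""
    c = pattern.shape[0] // 2
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    vals = []
    for a in angles:
        fy, fx = c + radius * np.sin(a), c + radius * np.cos(a)
        y0, x0 = int(np.floor(fy)), int(np.floor(fx))
        dy, dx = fy - y0, fx - x0
        vals.append((1 - dy) * (1 - dx) * pattern[y0, x0]
                    + (1 - dy) * dx * pattern[y0, x0 + 1]
                    + dy * (1 - dx) * pattern[y0 + 1, x0]
                    + dy * dx * pattern[y0 + 1, x0 + 1])
    vals = np.asarray(vals)
    return vals.std() / vals.mean()

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
square = ((np.abs(x) <= 8) & (np.abs(y) <= 8)).astype(float)  # square pinhole
disk = (x * x + y * y <= 8 * 8).astype(float)                 # round pinhole

# The square pinhole's sinc-shaped pattern varies strongly with direction;
# the round pinhole's Airy pattern is, up to pixelation, rotationally symmetric.
aniso_square = azimuthal_asymmetry(far_field_intensity(square), radius=12)
aniso_disk = azimuthal_asymmetry(far_field_intensity(disk), radius=12)
```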


This invention reduces the implementation of confocal surface detection to the essential elements, significantly increases the optical transparency of the overall system, and removes the previous limitation of the practically realizable measuring speed.


Advantageous embodiments of the invention are specified in the dependent claims.


It is therefore proposed that there are no imaging optics either between the illumination mask (13) and the beam splitter (14) or between the camera sensor and the beam splitter (14).


It is further proposed that the imaging optics (6) focus through the sample (7) during the acquisition of an image stack of confocal images, the respective focus position being included in the determination of the z-positions of the intensity maxima.


In order to capture a 3D image, depending on the desired resolution, an image stack typically containing 20 to 1,000 images is captured while the focus is moved continuously in the Z-direction through the sample or, correspondingly, the sample is moved through the focus. The next image stack is then captured either in the reverse scanning direction or in the same direction, in which case the focus is first moved back to its starting position as quickly as possible. The intensity curve for each pixel is evaluated from the recorded image stacks.
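The scanning scheme described above can be sketched as follows (Python; `grab_frame` is a hypothetical stand-in for the camera and focus drive, not part of the disclosed device):

```python
import numpy as np

def acquire_stack(grab_frame, z_start, z_end, n_images):
    """Record n_images camera frames while the focus moves through Z.

    grab_frame is a hypothetical callable returning a 2-D frame for a given
    focus position; the real device triggers frames during a motorized sweep.
    """
    z_positions = np.linspace(z_start, z_end, n_images)
    frames = np.stack([grab_frame(z) for z in z_positions])
    return z_positions, frames

def acquire_stacks(grab_frame, z_lo, z_hi, n_images, n_stacks):
    """Capture several stacks, reversing the scan direction each time so the
    focus never has to fly back between stacks."""
    stacks = []
    for k in range(n_stacks):
        a, b = (z_lo, z_hi) if k % 2 == 0 else (z_hi, z_lo)
        stacks.append(acquire_stack(grab_frame, a, b, n_images))
    return stacks
```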


It is further proposed that the camera sensor is a monochromatic sensor, the intensity values of the “dark” pixels corresponding to the non-transparent or slightly transparent regions (2) of the illumination mask (13) being first inverted and then the Z-position of the intensity maxima being determined, or the height values for the slightly transparent regions of the illumination mask (13) being interpolated from the height values of the neighboring pixels.


If a monochromatic sensor matrix (black-and-white camera) is used, neighboring pixels produce, in the recorded image stack, intensity signals with an intensity maximum or an intensity minimum in the focus, respectively. In both cases, the height position z0 can be determined algorithmically, e.g. using the center-of-gravity algorithm. For pixels with an intensity minimum in the focus, the intensity values are first inverted and the height position z0 is then determined analogously to the pixels with an intensity maximum.
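The evaluation of a single pixel's intensity curve can be sketched as follows (Python; a minimal center-of-gravity estimator with an illustrative offset subtraction, not the exact algorithm of the embodiment):

```python
import numpy as np

def height_from_curve(z, intensity, invert=False):
    """Height position z0 of one pixel from its intensity curve.

    Pixels with an intensity minimum in the focus are inverted first and then
    evaluated like the maximum pixels; the center of gravity of the
    background-subtracted curve serves as a simple z0 estimator.
    """
    signal = -np.asarray(intensity, float) if invert else np.asarray(intensity, float)
    signal = signal - signal.min()      # offset removal -> non-negative weights
    if signal.sum() == 0.0:
        return float("nan")             # flat curve: no usable modulation
    return float((z * signal).sum() / signal.sum())
```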


The additional evaluation of the pixels with an intensity minimum in the focus has the advantage that the normally discarded data are also used to calculate the 3D result. This means that twice as much raw data are included in the calculation of the overall 3D result as in the usual evaluation of only the pixels with maximum intensity. With suitable post-evaluation, the noise figure can in this way be reduced to 70.7% of the original value (1/√N, here with N=2), with identical hardware. Since the two partial images determined from the intensity maxima and minima are based on different underlying information owing to their different origin, a suitable combination of the results of neighboring pixels yields more valid measurement data, especially in sample regions with low reflection. This improves the quality and the data density of the 3D results.
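The 1/√N relationship can be checked numerically (Python sketch with synthetic Gaussian noise; the noise level and sample count are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(42)
true_z = 5.0
n_points = 200_000

# Two statistically independent height estimates per surface point, one from
# the maximum pixels and one from the inverted minimum pixels (same noise level).
z_from_max = true_z + rng.normal(0.0, 0.1, n_points)
z_from_min = true_z + rng.normal(0.0, 0.1, n_points)

z_combined = 0.5 * (z_from_max + z_from_min)

# Averaging N = 2 independent estimates scales the noise by 1/sqrt(N),
# i.e. to about 70.7 % of the single-estimate value.
noise_ratio = z_combined.std() / z_from_max.std()
```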


Alternatively, the height positions z0 of the pixels with intensity minimum can also be determined by interpolating the height values of neighboring pixels. The determined height position is output for each individual pixel.
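The interpolation alternative can be sketched as follows (Python; `interpolate_dark_pixels` is an illustrative name, and the simple four-neighbour mean is only one possible interpolation):

```python
import numpy as np

def interpolate_dark_pixels(heights, dark_mask):
    """Replace the height of every 'dark' checkerboard pixel by the mean of
    its four direct neighbours, which on the checkerboard are all 'bright'
    pixels with directly measured heights. Edge pixels reuse their own
    row/column via padding, so this sketch is exact only in the interior."""
    padded = np.pad(heights, 1, mode="edge")
    neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    filled = heights.copy()
    filled[dark_mask] = neighbour_mean[dark_mask]
    return filled
```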


The speed of data evaluation can be almost doubled as a result.



FIG. 4a shows the intensity signal of a pixel onto which a highly transparent region of the illumination mask is imaged, and FIG. 4b shows the intensity signal of a pixel onto which a slightly transparent region of the illumination mask is imaged.


It is further proposed that the camera sensor is a color sensor with a Bayer pattern, the “bright” pixels corresponding to the transparent regions (1) of the illumination mask (13) being the green pixels (21) and the z-position of the intensity maximum being determined for these pixels, and that the height values for the red pixels (23) and blue pixels (22) are interpolated from the height values of the neighboring green pixels (21).


If a white light source, e.g. a white LED, and a color sensor matrix with the Bayer pattern shown in FIG. 5 are used, the device is adjusted so that the transparent regions of the illumination mask correspond to the green pixels of the sensor matrix. This means that only the green pixels are illuminated in the focus, while the differently colored pixels are “dark”. The height position of the intensity maxima is then determined in each case for the green pixels and interpolated for the differently colored pixels in between.
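The assignment of pixels to the Bayer checkerboard can be sketched as follows (Python; the RGGB phase chosen here is an assumption, real sensors may start with a different corner colour):

```python
import numpy as np

def bayer_masks(shape):
    """Boolean pixel masks for an RGGB Bayer layout.

    The greens form the diagonal checkerboard that is aligned with the
    transparent regions of the illumination mask; every red and blue pixel
    is surrounded by four green neighbours and can be interpolated from them.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    green = (yy + xx) % 2 == 1          # diagonal checkerboard of green pixels
    red = (yy % 2 == 0) & (xx % 2 == 0)
    blue = (yy % 2 == 1) & (xx % 2 == 1)
    return green, red, blue

green, red, blue = bayer_masks((6, 6))  # half green, a quarter red, a quarter blue
```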


A known problem in the 3D evaluation of adjacent pixels of different colors is a vertical shift of the determined height position depending on the wavelength. This leads to a checkerboard-like pattern in the 3D result when all pixels are displayed.


The exclusive use of the “green” pixels for 3D evaluation has the advantage that neither wavelength-dependent interactions between light and the sample surface, nor chromatic aberrations of the imaging optics, lead to visible artifacts in the 3D result.


It is further proposed that, when generating the colored intensity image, the color information for the red pixels (23) and blue pixels (22) is determined from the intensity values just outside the focus.


In the focus, the intensity of the red and blue pixels is reduced due to the confocal effect according to FIG. 4b. In order to achieve a correct color determination despite this, the intensity is measured just outside the focus, where the blur is a few pixels and the intensity signal is not yet reduced by the confocal effect.
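Reading the colour intensity just outside the focus can be sketched as follows (Python; the nearest-frame lookup and the fixed `offset` are illustrative simplifications):

```python
import numpy as np

def intensity_outside_focus(z, stack, z0, offset):
    """Read each pixel's intensity a little outside its focus position z0,
    where the confocal dip (FIG. 4b) no longer suppresses the signal.

    stack has shape (n_z, H, W); z0 and the returned image have shape (H, W);
    offset is a hypothetical, empirically chosen Z distance.
    """
    # Per pixel, pick the stack frame whose z is closest to z0 + offset.
    idx = np.abs(z[:, None, None] - (z0 + offset)).argmin(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```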


The intensity value in the respective focus is used for the green pixels.


It is further proposed that the calculation of the Z-position of the intensity maxima already begins during the measurement data acquisition, this calculation being carried out using parallelized algorithms.


The confocal evaluation is carried out during the acquisition of the image stack. In this case, each image is transferred to the graphics card and evaluated there in parallel. This means that the resulting 3D image is already available, displayable and storable during the acquisition of the ensuing image stack, as a result of which the latency time for obtaining the result after the image stack has been acquired is typically less than the time required to acquire it.
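The streaming character of the evaluation can be sketched as follows (a Python stand-in for the per-pixel GPU kernels; for brevity a running argmax is tracked instead of the centre-of-gravity evaluation):

```python
import numpy as np

class StreamingPeakTracker:
    """Frame-by-frame stand-in for the parallel GPU evaluation: each incoming
    image immediately updates a per-pixel running intensity maximum and its Z
    position, so a height map exists the moment the stack is complete."""

    def __init__(self, shape):
        self.best_intensity = np.full(shape, -np.inf)
        self.height = np.full(shape, np.nan)

    def add_frame(self, frame, z):
        # Per-pixel comparison: independent for every pixel, hence trivially
        # parallelizable on a graphics card.
        improved = frame > self.best_intensity
        self.best_intensity[improved] = frame[improved]
        self.height[improved] = z
```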


Likewise, the resources used for data evaluation are free again once the 3D image has been calculated, so that the next 3D calculation can be started immediately afterwards and the image acquisition does not have to be interrupted.


This means that a 3D frame rate of 20 Hz can be achieved when using a fast camera system, for example having a frame rate of 800 Hz and recording 40 images per image stack. This allows 3D images to be continuously displayed and saved at video frequency without any waiting time between the recording of two 3D images. If the sample surface to be measured is moved continuously at a constant speed during measurement data acquisition, a 3D image with a defined distortion is created, which can be corrected by a speed-related 3D calibration in a post-processing algorithm.
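The frame-rate figure follows directly from the camera rate and the stack size, assuming the evaluation fully overlaps the acquisition so that no pause occurs between stacks:

```python
def frame_rate_3d(camera_fps, images_per_stack):
    """Continuous 3-D frame rate when evaluation overlaps acquisition:
    one 3-D image per completed image stack."""
    return camera_fps / images_per_stack

# The example from the text: an 800 Hz camera recording 40 images per stack.
rate = frame_rate_3d(800, 40)   # 20 three-dimensional images per second
```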





EMBODIMENT

An embodiment of the invention is described in more detail below with reference to drawings. In all the drawings, the same reference signs have the same meaning and are therefore only explained once where appropriate.


In the figures:



FIG. 1 shows the basic confocal beam path,



FIG. 2 shows an illumination mask having rectangular transparent regions arranged in a checkerboard pattern (comparative example),



FIG. 3 shows an illumination mask having round transparent regions arranged in a checkerboard pattern, according to the invention,



FIG. 4 shows the intensity when focusing through the sample, and



FIG. 5 shows the Bayer pattern of a color sensor matrix.






FIG. 1 shows the basic confocal beam path: A light source (11) illuminates an illumination mask (13) via a collimator (12). Said mask is imaged through the beam splitter (14) onto a sample (7) to be measured, using imaging optics (6). The sample is imaged through the beam splitter onto a sensor matrix (15), using the imaging optics. When the sample is in focus, the pattern is imaged sharply onto the sample and this image is imaged sharply onto the sensor matrix.



FIG. 2 shows an illumination mask having rectangular transparent regions arranged in a checkerboard pattern: The transparent regions (1) with the edge length (4) are arranged between the slightly transparent regions (2) with the edge length (5) as in a checkerboard pattern with a pitch (3) corresponding to the pixel pitch (24) of the sensor matrix.



FIG. 3 shows an illumination mask having round transparent regions arranged in a checkerboard pattern: The transparent regions (1) with the diameter (25) are arranged between the slightly transparent regions (2) as in a checkerboard pattern with a pitch (3) corresponding to the pixel pitch (24) of the sensor matrix.



FIG. 4 shows the intensity when focusing through the sample: a) For the “bright” pixels, there is an increase in intensity in the region of the focus (z0) during the height scan, and b) for the other pixels, there is a reduction in intensity in the focus (z0). The Z-position z0 for the respective extreme value is determined.



FIG. 5 shows the Bayer pattern of a color sensor matrix: The blue pixels (22) and the red pixels (23) are located between the green pixels (21), which are arranged diagonally in a checkerboard manner and have a pixel pitch (24) that corresponds to the pitch (3) of the illumination mask.


Mode of Operation of the Method According to the Invention for Optically Measuring Technical Surfaces

The mode of operation of the method according to the invention for optically measuring technical surfaces is explained in detail below:



FIG. 1 shows the basic structure of the invention. Light from a light source (11) is collimated by means of a collimator (12) and illuminates a fixedly installed illumination mask (13) that serves as a confocal filter and has a pinhole pattern (see FIG. 2 and FIG. 3). The pitch of the pinhole pattern corresponds to the pixel pitch of the sensor matrix (15). The illumination light is reflected in the direction of the sample by means of a beam splitter (14) and focused on the surface of a sample (7) by means of imaging optics (6). When the sample is in focus, the illumination mask (13) is imaged sharply on the sample. In the detection beam path, the surface of the sample (7) is imaged onto the sensor matrix (15) by means of the imaging optics (6), through the beam splitter (14). When the sample is in focus, the pinhole pattern of the illumination mask that is sharply imaged onto the sample is sharply imaged onto the sensor matrix. If the sample surface is outside the focus region, the pinhole pattern becomes so blurred that it is no longer recognizable there.


This structure can also be realized in such a way that the illumination optics (11) to (13) and sensor matrix (15) are swapped, i.e. the illumination takes place in transmission through the beam splitter, while the sensor matrix is arranged in reflection.


LIST OF REFERENCE SIGNS

    • 1 transparent region
    • 2 slightly transparent region
    • 3 pitch of the illumination mask
    • 4 edge length of the transparent regions
    • 5 edge length of the slightly transparent regions
    • 6 imaging optics
    • 7 sample
    • 11 light source
    • 12 collimator
    • 13 illumination mask
    • 14 beam splitter
    • 15 sensor matrix
    • 21 green pixels
    • 22 blue pixels
    • 23 red pixels
    • 24 pixel pitch of the sensor matrix
    • 25 diameter of the transparent regions


Claims
  • 1. A method for optically measuring technical surfaces using a confocal sensor, wherein light of a light source (11) is directed onto a sample surface to be measured via an optical system, said optical system containing an illumination mask (13), a sensor matrix (15), a beam splitter (14) for combining an illumination beam path and a detection beam path, and imaging optics (6), wherein the illumination mask (13) consists of transparent regions (1) and non-transparent or slightly transparent regions (2) arranged in a checkerboard pattern, and the pitch (3) of the pattern on the illumination mask (13) corresponds to the pixel pitch (24) of the sensor matrix (15), wherein the illumination mask (13) and the sensor matrix (15) are adjusted relative to each other such that the transparent regions (1) and the pixels of the sensor matrix (15) are simultaneously sharply imaged onto the sample (7) by means of the imaging optics (6), whereby the sharp image of the illumination mask (13) is then sharply imaged onto the sensor matrix (15) so that a checkerboard pattern of light and dark illuminated pixels is produced on the sensor matrix, wherein the transparent regions (1) of the illumination mask (13) are round.
  • 2. The method for optically measuring technical surfaces according to claim 1, wherein there are no imaging optics either between the illumination mask (13) and the beam splitter (14) or between the camera sensor and the beam splitter (14).
  • 3. The method for optically measuring technical surfaces according to claim 1, wherein the imaging optics (6) focus through the sample (7) during the acquisition of an image stack of confocal images, the respective focus position being included in the determination of the z-positions of the intensity maxima.
  • 4. The method for optically measuring technical surfaces according to claim 1, wherein the camera sensor is a monochromatic sensor, the intensity values of the “dark” pixels corresponding to the non-transparent or slightly transparent regions (2) of the illumination mask (13) being first inverted and then the Z-position of the intensity maxima being determined, or the height values for the slightly transparent regions of the illumination mask (13) being interpolated from the height values of the neighboring pixels.
  • 5. The method for optically measuring technical surfaces according to claim 1, wherein the camera sensor is a color sensor with a Bayer pattern, the “bright” pixels corresponding to the transparent regions (1) of the illumination mask (13) being the green pixels (21), and the z-position of the intensity maximum being determined for these pixels.
  • 6. The method for optically measuring technical surfaces according to claim 1, wherein the height values for the red pixels (23) and blue pixels (22) are interpolated from the height values of the neighboring green pixels (21).
  • 7. The method for optically measuring technical surfaces according to claim 1, wherein, when generating the colored intensity image, the color information for the red pixels (23) and blue pixels (22) is determined from the intensity values just outside the focus.
  • 8. The method for optically measuring technical surfaces according to claim 1, wherein the calculation of the Z-position of the intensity maxima already begins during the measurement data acquisition, the calculation of the Z-position of the intensity maxima being carried out using parallelized algorithms.
  • 9. A device for carrying out the method according to claim 1, said device containing a beam splitter plate, a beam splitter cuboid or a beam splitter cube as a beam splitter (14), wherein the beam-splitting coating has a polarization-neutral splitting ratio or has a polarizing effect, a lambda-quarter retardation plate being located between the beam splitter (14) and the sample (7) in the case of a polarizing effect, which plate rotates the polarization direction of the reflected light by 90 degrees.
Priority Claims (1)
  • Number: 10 2021 128 444.9, Date: Nov 2021, Country: DE, Kind: national
PCT Information
  • Filing Document: PCT/EP2022/079294, Filing Date: 10/20/2022, Country Kind: WO