The present invention is directed to a method or a processing device. Moreover, the subject matter of the present invention relates to a computer program.
In conventional optical recording systems, a problem often arises in achieving sufficiently precise imaging by an image sensor, due to the fact that, for example, imaging errors of optical components cause a real object to be depicted with a different shape at the center of the image sensor than in the edge area of the image sensor. At the same time, different imaging properties of colors or color patterns may occur at different positions of the image sensor, which result in a suboptimal representation or depiction of the real object by the image sensor. In particular, due to the color filter mask, not all colors of the color filter mask are available at every location on the image sensor.
In accordance with example embodiments of the present invention, a method, a processing device that uses this method, and a corresponding computer program are provided. Advantageous refinements and enhancements of the processing device are possible by use of the measures set forth herein.
In accordance with the present invention, a method for processing measured data of an image sensor is provided. In an example embodiment of the present invention, the method includes the following steps:
Measured data may be understood to include data that have been recorded by a light sensor or other measuring units of an image sensor, and that represent a depiction of a real object on the image sensor. A reference position may be understood to mean, for example, a position of a light property (for example, a red filtered spectral range) on a light sensor for which other light properties (green and blue, for example) are to be computed, or whose measured value is to be processed or corrected. The reference positions form, for example, a uniform point grid which allows the generated measured data to be represented, without further postprocessing, as an image on a system using an orthogonal display grid (a digital computer display, for example). The reference position may match a measuring position or the position of an existing light sensor, or may be situated at an arbitrary location of the sensor array spanned by the x and y axes, as described in greater detail below. The surroundings around a reference position of an image sensor may be understood to mean the light sensors that are adjacent to the reference position in the adjoining rows and/or columns of the light sensor raster of an image sensor. For example, the surroundings around the reference position form a rectangular two-dimensional structure in which N×M light sensors having different properties are situated.
A weighting value may be understood to mean, for example, a factor with which the measured values of the light sensors in the surroundings of the reference position are linked or weighted, for example multiplied, the results subsequently being summed to obtain the image data for the reference position. The weighting values may differ as a function of the position on the sensor, even for identical light sensor types, i.e., light sensors that are designed to record the same physical parameters. This means that the sensor values or measured data values of light sensors situated in an edge area of the image sensor are weighted differently than the measured data values of light sensors situated in a central area of the image sensor. Linking may be understood to mean, for example, multiplication of the measured values of the light sensors (i.e., the light sensors in the surroundings of the reference position) by the particular associated weighting values, followed, for example, by addition of the particular weighted measured data values of these light sensors.
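Purely as an illustration of this weighting and linking step (not a normative implementation of the claimed method), the weighted summation of a neighborhood of measured values might be sketched as follows; the function name, the 3×3 neighborhood shape, and the array layout are assumptions made for the sketch:

```python
import numpy as np

def link_measured_data(measured, weights, ref_row, ref_col, n=3, m=3):
    """Link the measured values of the n x m light sensors in the
    surroundings of a reference position with their associated,
    position-dependent weighting values.

    measured: 2D array of raw sensor values (one value per light sensor).
    weights:  (n, m) array of weighting values trained for this
              reference position.
    Returns the image datum obtained for the reference position.
    """
    r0, c0 = ref_row - n // 2, ref_col - m // 2
    neighborhood = measured[r0:r0 + n, c0:c0 + m]
    # Weighting: multiply each measured value by its associated
    # weighting value; linking: sum the weighted values to one image datum.
    return float(np.sum(neighborhood * weights))
```

In a full pipeline, `weights` would itself be selected as a function of the reference position, reflecting the position dependence described above.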
The approach presented here is based on the finding that, due to the weighting of the measured data of light sensors as a function of the particular position on the image sensor, a technically very simple and elegant option results for allowing compensation for unfavorable imaging properties (such as location-dependent or thermal changes in the point response (point spread function)) of optical components (such as lenses, mirrors or the like) or the conventional image sensor itself, without the need for a new, higher-resolution, costly image sensor that operates with high precision, or a costly optical system that images without errors. This unfavorable imaging property may thus be corrected by weighting the measured data with weighting factors or weighting values that are a function of a position of the light sensor in question on the image sensor, the weighting values being trained or ascertained, for example, in a preceding method or during runtime. This training may be carried out, for example, for an appropriate combination of an image sensor and optical components, i.e., for a specific optical system, or for groups of systems having similar properties. The trained weighting values may be subsequently stored in a memory and read out at a later point in time for the method provided here.
One specific embodiment of the present invention is advantageous in which in the step of reading in, measured data of light sensors are read in, which are respectively situated in a different row and/or a different column on the image sensor in relation to the reference position, in particular the light sensors completely surrounding the reference position. In the present case, a row may be understood to mean an area having a predetermined distance from an edge of the image sensor. In the present case, a column may be understood to mean an area having a predetermined distance from another edge of the image sensor, the edge that defines the column being different from an edge that defines the row. In particular, the edge, by which the rows are defined, may extend in a different direction or perpendicularly with respect to the edge, by which the columns are defined. As a result, areas on the image sensor may be distinguished without the light sensors themselves being positioned symmetrically in rows and columns on the image sensor (image sensor built up in the form of a matrix). Rather, the aim is merely to ensure that the light sensors in the surroundings of the reference position are situated around the reference position at multiple different sides. Such a specific embodiment of the approach presented here offers the advantage that the image data for the reference position may be corrected, taking into account effects or measured values of light sensors that are to be observed in the immediate vicinity around the reference position. For example, a continuously increasing change in the point imaging of a real object from a central area of the image sensor toward an edge area may thus be taken into account or compensated for very precisely.
According to a further specific embodiment of the present invention, in the step of reading in, measured data may be read in from light sensors that are respectively designed to record different physical parameters, in particular colors, exposure times, brightnesses, or other photometric parameters. Such a specific embodiment of the approach presented here allows the correction of different physical parameters such as the imaging of colors, the exposure times, and/or the brightnesses at the light sensors in the different positions of the image sensor.
In addition, one specific embodiment of the approach presented here is advantageous in which a step of ascertaining the weighting values is carried out using an interpolation of weighting reference values, in particular the weighting reference values being associated with light sensors situated at a predefined distance from one another on the image sensor. The weighting reference values may thus be understood to mean supporting weighting values that represent the weighting values for individual light sensors situated at the predetermined distance and/or position from one another on the image sensor. Such a specific embodiment of the approach presented here may offer the advantage that a correspondingly associated weighting value does not have to be provided for each light sensor on the image sensor, so that the memory space to be provided for implementing the approach presented here may be reduced. The weighting values for light sensors that are situated between those light sensors with which weighting reference values are associated may then be ascertained, via an interpolation that is technically easy to implement, as soon as these weighting values are needed.
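The interpolation of weighting reference values stored on a coarse grid can be illustrated, for example, with a bilinear scheme; this sketch assumes a regular grid with spacing `stride` between supporting points, which is one possible choice and not prescribed by the embodiment above:

```python
import numpy as np

def interpolate_weight(ref_weights, stride, row, col):
    """Bilinearly interpolate a weighting value for the light sensor at
    (row, col) from weighting reference values that are stored only for
    every `stride`-th light sensor on the image sensor."""
    gr, gc = row / stride, col / stride
    r0, c0 = int(gr), int(gc)
    # Clamp at the sensor edge so the last supporting point is reused.
    r1 = min(r0 + 1, ref_weights.shape[0] - 1)
    c1 = min(c0 + 1, ref_weights.shape[1] - 1)
    fr, fc = gr - r0, gc - c0
    top = (1 - fc) * ref_weights[r0, c0] + fc * ref_weights[r0, c1]
    bot = (1 - fc) * ref_weights[r1, c0] + fc * ref_weights[r1, c1]
    return (1 - fr) * top + fr * bot
```

Only the coarse grid `ref_weights` needs to reside in memory; the per-sensor weighting values are computed on demand.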
Furthermore, one specific embodiment of the approach presented here is advantageous in which the steps of reading in and of linking are carried out repeatedly, in the repeatedly carried out step of reading in, measured data being read in from light sensors that are situated at a different position on the image sensor than the light sensors from which measured data were read in in a preceding step of reading in. Such a specific embodiment of the approach presented here allows the stepwise optimization or correction of measured data for as many reference positions of the image sensor as possible, optionally almost all reference positions that are to be meaningfully considered, so that an improvement in the imaging of the real object represented by the measured data of the image sensor is made possible.
According to a further specific embodiment of the present invention, the steps of reading in and of linking may be carried out repeatedly, in the repeatedly carried out step of reading in, measured data of the light sensors in the surroundings of the reference position being read in which were also read in in the preceding step of reading in; in addition, in the repeatedly carried out step of reading in, weighting values being read in for these measured data that differ from the weighting values read in in the preceding step of reading in. These different weighting values may be designed, for example, for reconstructing different color properties than the color properties reconstructed in the preceding step of reading in. It is also possible to use different weighting factors for the same measured values in order to obtain a different physical property of the light from the measured values. In addition, certain weighting factors may also equal zero. This may be advantageous, for example, when the red color component is to be determined from the measured values of green, red, and blue light sensors. In this case, it may be appropriate to weight the green and blue measured data with a factor of zero, and thus ignore them.
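As a purely illustrative numerical example of using different weighting value sets, including zero-valued weighting factors, for different reconstruction objectives on the same neighborhood (the 3×3 weight sets and the sensor values below are invented for the illustration and do not correspond to trained values):

```python
import numpy as np

# To reconstruct red at the reference position, the green and blue
# sensor positions receive a weighting value of zero and are ignored.
weights_red = np.array([[0.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 0.0]])

# To reconstruct green from the same neighborhood, the red/blue
# positions are zero-weighted instead and the four green neighbors
# are averaged.
weights_green = np.array([[0.0,  0.25, 0.0],
                          [0.25, 0.0,  0.25],
                          [0.0,  0.25, 0.0]])

# One and the same neighborhood of measured values...
neighborhood = np.array([[1.0, 2.0, 3.0],
                         [4.0, 5.0, 6.0],
                         [7.0, 8.0, 9.0]])

# ...yields different image data depending on the weighting value set.
red = float(np.sum(neighborhood * weights_red))
green = float(np.sum(neighborhood * weights_green))
```

The repeated linking thus reuses the identical measured data while only the weighting values change between passes.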
The multiply repeated carrying out of the above-described method, in each case with different weights, represents a special form that allows signals with different signal reconstruction objectives to be computed for each reference position. (For example, the reconstruction for the light feature of brightness with maximum resolution may require a different reconstruction than the reconstruction of the feature of color, etc.)
A light sensor type may be understood to mean, for example, the property of the light sensor for imaging a certain physical parameter of the light. For example, a light sensor of a first light sensor type may be designed to detect particularly well certain color properties of the light striking the light sensor, such as red light, green light, or white light, whereas a light sensor of another light sensor type is designed to detect particularly well the brightness or a polarization direction of the light striking this light sensor. Such a specific embodiment of the present invention may offer the advantage that the measured data detected by the image sensor may be corrected very effectively for different physical parameters, and multiple physical parameters may thus be jointly taken into account via the corresponding correction for these parameters in each case.
According to a further specific embodiment of the present invention, measured data from the light sensors of different light sensor types may also be read in in the step of reading in. Such a specific embodiment of the present invention may offer the advantage that, in the correction of the measured data for the image data at the reference position, only measured data from surroundings light sensors that correspond to different light sensor types are used. In this way, a reconstruction of the image data desired in each case at the reference position may be ensured in a very reliable and robust manner, since measured data or weighted image data from different light sensor types are linked together, and the best possible compensation may thus be made for possible errors in the measurement of light by a light sensor type.
According to a further specific embodiment of the present invention, in the step of reading in it is possible to read in the measured data from light sensors of an image sensor having, at least in part, a cyclic arrangement of light sensor types as light sensors, and/or to read in measured data from light sensors having different sizes on the image sensor, and/or to read in measured data from light sensors that each include different light sensor types that occupy a different area on the image sensor. Such a specific embodiment of the present invention may offer the advantage of being able to process or link measured data from corresponding light sensors of the light sensor types in question in a technically simple and rapid manner, without having to scale these measured data from the light sensor types in question beforehand, or prepare the measured data in some other way for a linkage.
One specific embodiment of the present invention may be implemented in a particularly technically simple manner, in which in the step of linking, the measured data of the light sensors, weighted by being multiplied by the associated weighting values, are summed in order to obtain the image data for the reference position.
One specific embodiment of the present invention is advantageous as a method for generating a weighting value matrix for weighting measured data of an image sensor, the method including the following steps:
Reference image data of a reference image may be understood to mean measured data that represent an image that is regarded as optimal. Training measured data of a training image may be understood to mean measured data that represent an image that has been recorded by light sensors of an image sensor, so that, for example, the spatial variations of the imaging properties of the optical components or of the imaging properties of the image sensor or its interaction (vignetting, for example) have not yet been compensated for. A starting weighting value matrix may be understood to mean a matrix of weighting values that is initially provided, the weighting values being changed or adapted via a training in order to adapt the image data from light sensors of the training image, obtained according to one variant of the above-described approach for a method for processing measured data, to the measured data from light sensors of the reference image.
Thus, by use of the method for generating the weighting value matrix, weighting values may be generated that may subsequently be used for correcting or processing measured data of an imaging of an object by the image sensor. Specific properties may be corrected during the imaging of the real object in the measured data of the image sensor, so that the image data subsequently describe the real object in the representation form selected by the image data more favorably than the measured data that may be read out directly from the image sensor. For example, for each image sensor, each optical system, or each combination of image sensor and optical system, an individual weighting value matrix may be created in order to adequately take into account the individual manufacturing situation of the image sensor, the optical system, or the combination of image sensor and optical system.
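One conceivable way to adapt a starting weighting value matrix to reference image data is a linear least-squares fit per reference position; this is a sketch under the assumption that each position is fitted independently from flattened training neighborhoods, and neither the function name nor the use of least squares is prescribed by the approach described above:

```python
import numpy as np

def train_weights(training_patches, reference_values):
    """Fit weighting values so that the weighted sum of each training
    neighborhood approximates the corresponding reference image value.

    training_patches: (K, N*M) matrix, one flattened neighborhood from
                      the training image per row, all taken at the same
                      sensor position.
    reference_values: (K,) vector of reference image data for the same
                      reference position.
    Returns the (N*M,) weighting values for this position.
    """
    w, *_ = np.linalg.lstsq(training_patches, reference_values, rcond=None)
    return w
```

Repeating this fit for a grid of supporting positions would yield the weighting reference values described above, from which per-sensor weights can be interpolated.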
One specific embodiment of the present invention is particularly advantageous in which in the step of reading in, an image that represents an image detail that is smaller than an image that is detectable by the image sensor is read in, in each case as a reference image and as a training image. Such a specific embodiment of the present invention may offer the advantage of a determination of the weighting value matrix that is much simpler technically or numerically, since it is not necessary to use the measured data of the entire reference image or of the training image; rather, only individual light sensor areas in the form of supporting point details at certain positions on the image sensor are used in order to compute the weighting value matrix. This may be based on the fact, for example, that a change in imaging properties of the image sensor from a central area of the image sensor toward an edge area may often be linearly approximated in sections, so that by interpolation, for example, the weighting values may be ascertained for those light sensors not situated in the area of the image detail in question of the reference image or of the training image.
Variants of the method in accordance with the present invention may be implemented, for example, in software or hardware or in a mixed form of software and hardware, for example in a processing device.
Moreover, the present invention provides a processing device that is designed to carry out, control, or implement the steps of one variant of a method provided here in appropriate units. By use of this embodiment variant of the present invention in the form of a processing device, the object underlying the present invention may also be achieved quickly and efficiently.
For this purpose, the processing device may include at least one processing unit for processing signals or data, at least one memory unit for storing signals or data, and at least one interface to a sensor or an actuator for reading in sensor signals from the sensor or for outputting data signals or control signals to the actuator and/or at least one communication interface for reading in or outputting data that are embedded in a communication protocol. The processing unit may be, for example, a signal processor, a microcontroller, or the like, it being possible for the memory unit to be a flash memory, an EEPROM, or a magnetic memory unit. The communication interface may be designed for reading in or outputting data wirelessly and/or in a hard-wired manner, it being possible for a communication interface to read in or output the hard-wired data electrically or optically, for example, from a corresponding data transmission line or output these data into a corresponding data transmission line.
In the present context, a processing device may be understood to mean an electrical device that processes sensor signals and outputs control and/or data signals as a function thereof. The processing device may include an interface that may have a hardware and/or software design. In a hardware design, the interfaces may be part of a so-called system ASIC, for example, which contains various functions of the device. However, it is also possible for the interfaces to be dedicated, integrated circuits, or to be at least partially made up of discrete components. In a software design, the interfaces may be software modules that are present on a microcontroller, for example, in addition to other software modules.
Also advantageous is a computer program product or computer program including program code that may be stored on a machine-readable medium or memory medium such as a semiconductor memory, a hard disk, or an optical memory, and used for carrying out, implementing, and/or controlling the steps of the method according to one of the specific embodiments described above, in particular when the program product or program is executed on a computer or a device.
Exemplary embodiments of the present invention are illustrated in the figures and explained in greater detail below.
In the following description of advantageous exemplary embodiments of the present invention, identical or similar reference numerals are used for the elements having a similar action which are illustrated in the various figures, and a repeated description of these elements is dispensed with.
These measured data 310 may (optionally) initially be preprocessed in a unit 320. Depending on the design, the preprocessed image data, which may also be referred to as measured data 310′ for the sake of simplicity, may be supplied to a processing unit 325 in which, for example, the approach described in even greater detail below in the form of a grid base correction is implemented. For this purpose, measured data 310′ are read in via a read-in interface 330 and supplied to a linkage unit 335. At the same time, weighting values 340 may be read out from a weighting value memory 345 and likewise supplied to linkage unit 335 via read-in interface 330. For example, according to the even more detailed description below, measured data 310′ from the individual light sensors are then linked to weighting values 340 in linkage unit 335, and correspondingly obtained image data 350 may be further processed in one or multiple parallel or sequential processing units.
Light sensors 400 may also be built up as sensor cells S1, S2, S3, or S4, as is apparent in
Individual light sensors 400 in
In
In order to now improve a correction of the imaging properties of optical system 100 according to
In order to now be able to make the best correction possible of the imaging errors in the measured data via this weighting, weighting values 340 should be used that have been determined or trained as a function of the position of light sensor 400 on image sensor 115, with which particular weighting values 340 are associated. For example, weighting values 340 associated with light sensors 400 that are situated in edge area 125 of image sensor 115 have a higher value than weighting values 340 associated with light sensors 400 that are situated in central area 120 of image sensor 115. In this way, for example a higher attenuation, which is caused by a light beam 122 passing over a fairly long path through a material of an optical component such as lens 105, may be compensated for. In the subsequent linking of the weighted measured data for light sensors 400 or 510 in edge area 125 of image sensor 115, a state that would be obtained by the optical system or image sensor 115 without an imaging error may thus be back-calculated, when possible. In particular, deviations in the point imaging and/or color effects and/or luminance effects and/or moiré effects may thus be reduced with a skillful selection of the weights.
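The position dependence just described, with higher weighting values in the edge area 125 than in the central area 120, might be modeled, purely for illustration, by a radial gain; the quadratic falloff model and the edge gain value of 1.5 are invented assumptions, since the actual values would be trained per optical system:

```python
import numpy as np

def shading_gain(row, col, n_rows, n_cols, edge_gain=1.5):
    """Illustrative position-dependent gain: the weighting scale grows
    from 1.0 at the image-sensor center toward `edge_gain` at the
    corners, compensating the stronger attenuation of light beams that
    pass a longer path through the optical components (shading)."""
    cy, cx = (n_rows - 1) / 2, (n_cols - 1) / 2
    r = np.hypot(row - cy, col - cx)       # distance from sensor center
    r_max = np.hypot(cy, cx)               # distance to the corner
    return 1.0 + (edge_gain - 1.0) * (r / r_max) ** 2
```

A trained weighting value matrix would implicitly contain such a gain profile rather than an explicit analytic model.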
Weighting values 340, which may be used for such processing or weighting, are determined in advance in a training mode described in even greater detail below, and may be stored, for example, in memory 345 illustrated in
It may also be noted that, to obtain different light properties at reference position 500, it is possible to use different weighting values 340 for the same surroundings light sensor 510. This means, for example, that for a light sensor that is regarded as a surroundings light sensor 510, a first weighting value 340 may be used when the objective is to reconstruct a first light property at reference position 500, and for the same surroundings light sensor 510, a second weighting value 340 that is different from the first weighting value is used when a different light property is to be represented at reference position 500.
To reduce as far as possible the size of memory 345 (which may be a cache memory, for example) necessary for carrying out the approach presented here, according to a further exemplary embodiment it is possible not to store a corresponding weighting value 340 in memory 345 for each of light sensors 400. Rather, for example for every nth light sensor 400 of a corresponding light sensor type on image sensor 115, a weighting value 340 associated with this position of light sensor 400 may be stored as a weighting reference value in memory 345.
In processing unit 335, measured data 310 or 310′, each weighted with associated weighting values 340, or sensor data 900 weighted with associated weighting values 340, are initially collected in a collection unit 920 and sorted according to their reference positions and reconstruction tasks. The collected and sorted weighted measured data are subsequently summed within their group in an addition unit 925, and the obtained result is associated, as weighted image data 350, with the respective underlying reference position and reconstruction task 500.
The lower portion of
The values ascertained from output buffer 930 may then be further processed in one or multiple units, such as units 940 and 950 illustrated in
By use of such an approach, a weighting value matrix may be obtained that provides in each case corresponding, different weighting values for a light sensor at different positions on the image sensor to allow the best correction possible of distortions or imaging errors in the measured data of the image sensor, as may be implemented by the above-described approach for processing measured data of an image sensor.
In order to minimize numerical and/or circuitry-related complexity, it is also possible for an image that represents an image detail that is smaller than an image that is detectable by image sensor 115 to be read in, in each case as a reference image and as a training image, as illustrated in
In summary, it is noted that the approach presented here in accordance with example embodiments of the present invention provides a method and its possible implementation in hardware. The method is used for the comprehensive correction of multiple classes of image errors that result from the physical image processing chain (optical system and imager, atmosphere, windshield, motion blur). In particular, the method is provided for correcting wavelength-dependent errors that arise during sampling of the light signal by the image sensor, the so-called "demosaicing." In addition, errors that arise via the optical system are corrected. This applies for manufacturing tolerance-related errors as well as for changes in the imaging behavior which during operation are induced thermally or caused by air pressure. Thus, for example, the red-blue error in the center of the image is generally to be corrected differently than that at the edge of the image, and differently at high temperatures than at low temperatures. The same applies for an attenuation of the image signal at the edge ("shading").
The “grid based demosaicing” hardware block, provided by way of example for the correction, in the form of the processing unit may simultaneously correct all of these errors, and in addition, with a suitable light sensor structure may also maintain the quality of the geometric resolution and of the contrast more satisfactorily than conventional methods.
In addition, an explanation is provided for how a training method for determining the parameters might look. The method makes use of the fact that the optical system has a point response whose action takes place primarily in limited spatial surroundings. It may thus be deduced that a first-approximation correction may take place via a linear combination of the measured values of the surroundings. This first, linear approximation requires comparatively little computing power, and is similar to the preprocessing layers of present-day neural networks.
Particular advantages may be achieved for present and future systems via a direct correction of image errors directly in the imager unit. Depending on the processing logic system situated downstream, this may have a superlinear positive effect on the downstream algorithms, since due to the correction, image errors no longer have to be considered in the algorithms, which is a major advantage in particular for learning methods. The approach presented here shows how this method may be implemented in a more general form as a hardware block diagram. This more general methodology also allows features other than the visual image quality to be enhanced. For example, the edge features, important for machine vision, could be directly highlighted when the measured data stream is not provided for a displaying system.
If an exemplary embodiment includes an “and/or” linkage between a first feature and a second feature, this may be construed in such a way that according to one specific embodiment, the exemplary embodiment has the first feature as well as the second feature, and according to another specific embodiment, the exemplary embodiment either has only the first feature or only the second feature.
Number | Date | Country | Kind
---|---|---|---
10 2018 222 903.1 | Dec 2018 | DE | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2019/085555 | 12/17/2019 | WO | 00