The present application claims priority to European Patent Application No. 21191349.6, filed Aug. 13, 2021, the disclosure of which is hereby incorporated by reference herein in its entirety.
The present invention pertains to the field of image processing and, in particular, concerns a method for processing image data, wherein said image data comprises noise and information, the method comprising the steps of acquiring input data comprising input raw image data to be processed for storage and/or transmission, said input raw image data comprising values of pixels of an image sensor used to take the image data; processing said input raw image data; and outputting the processed image data by providing output data.
In general, the present invention is situated in the context of image sensors, which are finding their way into more and more devices. Image data can hence be produced readily and cheaply in large amounts, and sophisticated processing algorithms enable a vast range of applications.
The overall performance of the processing algorithms generally depends heavily on the quality of the image data. It is often best to perform any kind of processing and analysis starting from the raw image data, that is, the unaltered image data from the acquisition device. Operations including but not limited to converting image data from color filter arrays into RGB data or applying standard lossy compression can lead to significant information and quality loss, and should be avoided. For this reason, the applicant has previously developed an image compression technique where the decompressed image data is statistically equivalent to the raw uncompressed sensor data, as disclosed in the European patent application EP 3 820 150.
Raw image data often suffers from imperfections caused by, for example, a non-uniform response or bad pixels of the image sensor, imperfect illumination, or imperfect optical elements that lead to distortion and vignetting. To achieve better results from image processing and analysis, and in addition subjectively better-looking images, numerous methods have been developed in the prior art to address such issues. While these methods are quite successful at improving the appearance of individual images, they fail to maintain the statistical properties of the image data because, as is generally recognized, the noise of raw image data contains a signal-dependent component with non-linear behavior. For this reason, the application of linear corrections, such as scaling pixel values or substituting pixel values by averages of a neighborhood, modifies the local characteristics of the noise and makes the noise inconsistent across the image.
U.S. Pat. No. 7,683,948, for example, discloses a method for bad pixel replacement in image processing, where corrupted pixels are detected and replaced by averages, means, maximums or other statistical functions of selected nearby pixels. The application of averages, means or medians produces replaced pixel values with a noise level below the expectation, which will become apparent in the simultaneous processing of several images in a sequence.
The document with the title “Flat-field correction technique for digital detectors” by Seibert et al. discloses a method for correcting a non-uniform response of the sensor. The method is based on measuring response curves for each individual pixel. The curves are then approximated by a linear model, generating two parameters per pixel. The flat-field correction is then achieved using normalization to the pixel-specific response curves. While this approach helps to improve the appearance of the image, it almost certainly worsens the statistical characteristics of the image data. More recently, U.S. Pat. No. 9,143,709 adapts the method to make it suitable for image sensors with a non-linear response, but the above mentioned issue with inconsistent noise remains.
The Digital Negative (DNG) Specification by Adobe Inc. also recognizes the need to correct for bad pixels and vignetting. This is addressed by providing specially parametrized operations, represented by so-called Opcodes, for those particular purposes. The Opcodes in question are FixVignetteRadial, FixBadPixelsConstant and FixBadPixelsList. However, similar to the prior art cited above, the implementations do not seek to preserve consistent noise in the sense set out above. Specifically, FixVignetteRadial works by applying a radially varying normalization, equivalent to the prior-art methods above, while FixBadPixelsConstant and FixBadPixelsList are aimed at the correction of bad pixels using interpolation, which is equivalent to replacing the pixel value by a local average.
Some prior art also uses noise models for image processing. For example, said DNG specification permits storing information about the noise profile in the DNG file format. Here, the main purpose of the noise profile is denoising. However, the majority of image data does not come with any information about the noise characteristics, and owing to a varying amount of correction and preprocessing, a suitable noise model may not even exist. Denoising methods have been developed to estimate the noise parameters directly from the image data, as proposed for example in the article “Variance Stabilization for Noisy+Estimate Combination in Iterative Poisson Denoising” by Azzari and Foi. However, the estimation of the noise model directly from the image data is often poor, and such techniques cannot be reliably used for general image processing purposes.
Noise models also provide benefits to applications other than denoising, for example to the field of medical imaging with x-rays, where the possibility of a reduced radiation exposure for patients for different imaging modalities can be studied through simulations. Here, a lower x-ray dose comes at the cost of a decreased signal-to-noise ratio (SNR), which in turn can only be realistically simulated by using a noise model. A method for simulating dose reduction through a combination of scaling and noise injection is disclosed in the article “A Technique for Simulating the Effect of Dose Reduction on Image Quality in Digital Chest Radiography” by Veldkamp et al., wherein a noise model is used for said noise injection. However, like in the applications mentioned above, the use of noise models remains quite limited also in this prior art application.
Finally, more and more applications rely on machine learning techniques for the processing and/or interpretation of image data. Whereas in more classical image processing the core challenge was typically the invention and development of a suitable algorithm, an additional challenge in the context of machine learning is the availability of large amounts of high-quality training data.
To obtain reliable results with machine learning, the training image set must contain samples for all the variations that are expected in the target environment, and the amount of training data grows exponentially with the parameter space. This issue has been partially alleviated by the introduction of data augmentation, that is, by adding to the training set synthetic images that are modified versions of existing data. Examples of such modifications include geometric transformations (rotation, translation, scaling and flipping) as well as manipulation of contrast and color, or the injection of noise. Such prior-art data augmentation does not, however, make use of noise models.
In summary, the correction of imaging imperfections and the generation of synthetic images according to prior art have the drawback that, by focusing on the appearance of the image, they neglect one of the main characteristics of the raw image data, namely the very specific relationship between signal and noise originating from fundamental physical principles and technical properties of the acquisition device. Breaking this relationship prevents the reliable use of the noise contained in the image raw data and/or of measurement uncertainties in processing algorithms, introduces confusing elements in training sets for machine learning, and makes it practically impossible to find a suitable and consistent normalization of image data stemming from different sensors.
It is the object of the present invention to overcome the above mentioned drawbacks. In general, it is an object of the present invention to provide methods of image processing that maintain a consistent noise model across the entire image. It is a further object of the present invention to enable users of image data to obtain image data with well-defined statistical properties and to make the statistical properties available for their own calculations and analyses based on these image data.
It is a further object of the present invention to provide methods of image preprocessing that correct for various imperfections present in the acquisition hardware or conditions.
It is another object of the present invention to provide methods for the generation of synthetic image data from existing image data, with the aim that the generated synthetic image data is well-suited for the training of machine-learning algorithms, the validation of image processing under challenging conditions, and other similar purposes.
To this effect, the present invention proposes a method which is characterized by the features enumerated in claim 1 and which allows the objectives identified above to be achieved. In particular, a method according to the present invention is distinguished from the prior art by the fact that the step of acquiring input data comprises the step of acquiring, together with said input raw image data, a digital representation of an input noise model adapted to reflect the noise present in said input raw image data.
In this manner, image data and noise model are considered as inseparable entities that pass through the image processing pipeline together. The individual processing steps in the pipeline may modify either the image data or the noise model, or both. Therefore, the processed image data in any case have statistical characteristics which are consistent with the output noise model. The noise model further enables users of the image data to obtain uncertainties for the individual pixel values, and hence for their own calculations and analyses based on these pixel values.
Furthermore, the invention provides, by means of at least some of said processing operations, methods for replacement of dead pixel values, correction of photo-response non-uniformity, and flat-field correction, including correction of inhomogeneous illumination and vignetting. Such image preprocessing is performed in a manner that allows the noise model of the input data to remain valid after the preprocessing, i.e. to remain applicable to the output data.
The invention also provides methods allowing the generation of synthetic image data, in particular synthetic image data that follows the same noise model as the input data. Specifically, methods are provided to generate synthetic images that simulate a reduced exposure time or light collection.
A method according to the present invention may be realized in various embodiments.
In a particularly preferred embodiment, said input noise model and/or said output noise model is/are represented by a mapping of mean pixel values, i.e. mean values of the values of a given pixel of the image sensor used to take the image data, to pixel value standard deviations, a mapping of mean pixel values to pixel value variances, or a mapping of mean pixel values to signal-to-noise ratios.
Even more preferably, said input noise model and/or said output noise model is/are adapted to represent noise in said input raw image data and/or said output raw image data which follows a Poisson-Gaussian distribution.
In another particularly preferred embodiment, producing output raw image data which is statistically equivalent to said input raw image data as well as consistent with said output noise model is assured by applying a transformation representing a noise-consistent scaling operation to the values of a given pixel of an image sensor used to take the image data, wherein said transformation may perform a pixel value reduction.
Other features and advantages of the present invention are mentioned in the dependent claims as well as in the description disclosing in the following, with reference to the figures, the invention in more detail.
The attached figures exemplarily and schematically illustrate the principles as well as several embodiments of the present invention.
In the following, the invention shall be described in detail with reference to the above mentioned figures.
The concept of this invention is illustrated in the attached figures: input data 100, comprising input raw image data 110 and a digital representation of an input noise model 120, is subjected to noise-consistent processing 130, which yields output data 140 comprising output raw image data 150 and an output noise model 160.
The reason that the output image data 150 is labelled as raw is that it is statistically equivalent to actual raw image data coming from a sensor for which the output noise model 160 applies. Hence, the output data 140 is adapted to serve as input data 100 for another step of noise-consistent processing 130, such that a whole processing pipeline can be constructed based thereupon.
Noise models are an indispensable building block of the current invention. According to, e.g., the EMVA1288 standard, the following considers as a noise model a mapping σy(μy) that relates the standard deviation σy of the values y of a given pixel to its mean value μy. In general, the mean value μy depends on the flux and wavelength of the photons hitting the pixel, but the standard deviation σy only depends on the mean value μy. While the following will focus on the above introduced mapping σy(μy), equivalent mappings exist which can be used to extract σy(μy), such as the dependence of the pixel variance σy² on the mean value μy, or the signal-to-noise ratio

SNR(μy) = (μy − μy.dark) / σy(μy),

where μy.dark is the black level, i.e. the mean pixel value in the absence of illumination.
The above mentioned mapping/model is related to so-called temporal noise which quantifies the pixel value fluctuations for repeated acquisitions under identical conditions. In principle, each pixel of a given image sensor can have its own characteristic mapping σy(μy), but modern image sensors have reached an extremely good uniformity, such that in many cases using the same model for all pixels of the sensor will form a good approximation. Nevertheless, an image sensor can exhibit spatial noise that quantifies the differences of the mean values of different pixels exposed to identical illumination. Spatial noise can often be reduced or even removed with the help of preprocessing. Bad pixels can be considered as an extreme form of spatial noise.
Depending on the type of image data to be processed, a suitable parametrization of the mapping σy(μy) may serve as digital representation for the noise model. Examples hereof, usable as input noise model 120 or output noise model 160, include the parametrized models discussed in the following.
For the purpose of noise-consistent processing 130, the input raw image data 110 is combined with the digital representation of the input noise model 120 to form the input data 100. In various embodiments of the invention which will be described in the following, this combination can be achieved in different ways, or in any combination thereof.
In any case, according to the present invention, storing the noise model means storing a digital representation of the parameters for the model. In some embodiments of the invention, these parameters are passed to a function with fixed definition. In other embodiments, the implementation consists of a catalog of functions, and the specific function to be used is selected by an additional identifying parameter. While some embodiments of the invention make use of noise models in the very general sense discussed so far, other embodiments specialize in the Poisson-Gaussian noise model typically used for linear image sensors, where the sensor output is directly proportional to the amount of incoming light. The Poisson-Gaussian noise model is given by

σy(μy) = √(σy.dark² + K·(μy − μy.dark)),

where σy.dark represents the standard deviation of the pixel values in the absence of light and is often termed readout noise, μy.dark is the mean of the pixel values in the absence of light, and K is a gain factor. Hence, the Poisson-Gaussian noise model is described by the three parameters {K, μy.dark, σy.dark}.
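As an illustration, the model is straightforward to evaluate numerically. The following sketch (Python/NumPy; the function name, the clipping of the Poisson term at the black level, and all parameter values are illustrative assumptions, not taken from the specification) computes the predicted standard deviation for given mean pixel values:

```python
import numpy as np

def poisson_gaussian_sigma(mu, K, mu_dark, sigma_dark):
    """Standard deviation predicted by the Poisson-Gaussian noise model:
    sigma_y(mu_y) = sqrt(sigma_dark**2 + K * (mu_y - mu_dark))."""
    mu = np.asarray(mu, dtype=float)
    # The signal-dependent (Poisson) term vanishes at or below the black level.
    poisson_var = K * np.clip(mu - mu_dark, 0.0, None)
    return np.sqrt(sigma_dark**2 + poisson_var)

# Example: assumed gain K = 2.0 DN/e-, black level 64 DN, readout noise 1.5 DN.
print(poisson_gaussian_sigma([64, 500, 4000], K=2.0, mu_dark=64.0, sigma_dark=1.5))
```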
It should be noted that the Poisson-Gaussian noise model is applicable to raw image data output by the vast majority of image sensors. It is a formal expression of the statistics of the fluctuations of pixel values that naturally occur in image acquisition with linear sensors. Despite its ubiquity, its importance is generally ignored in prior art concerning image data correction and processing.
The image data processing that is the subject of this invention can be divided into two categories. The first category is image preprocessing, mainly with the purpose of correcting imperfections in acquisition hardware or conditions, or of standardizing the image data in a specific way. The second category is concerned with the synthesis of image data, which can then be used to test and improve the reliability of image processing algorithms, or to enrich the training data set in a machine-learning context.
While both categories of processing exist in the prior art, no attention is paid in prior art to the underlying noise model, and no effort is made to produce output data that is consistent with the noise model or to consider how the processing affects the image statistics, and hence the noise model. In prior art, only processing algorithms specifically concerned with noise, like denoising and deconvolution, occasionally use noise models in a specific and limited manner, such as mentioned in the introduction. Even in these cases, the focus of prior art has always been on the single operation concerned, without attempting to generalize the methods to a whole processing pipeline and without producing output data that remains consistent with some noise model.
Nevertheless, images with a consistent and accurate noise model across the entire image allow for much more reliable processing, in particular if the noise model is taken into account in the processing algorithm. Conversely, images that do not follow any consistent noise model have the tendency to confuse algorithms with their unexpected statistics. This is in particular true for algorithms based on deep learning and neural networks which consider all aspects of the image data in their analysis, including noise and noise correlations.
Two important remarks in this context are that a) if the input is only a single pixel value and its noise model, no operation can reliably increase the signal-to-noise ratio for that pixel, but a reduction is possible; and b) the signal-to-noise ratio typically increases for higher illumination (and hence larger pixel values), but the improvement is not linear or proportional. On this basis, reducing the illumination, corresponding to a multiplication of the initial mean pixel values by a factor q, where 0 < q < 1, may be simulated or, respectively, actually achieved. To do so, the initial pixel value y is replaced by a corrected pixel value y′ = q·y + δ(y, q), where δ is a (pseudo-)random number sampled from a normal distribution with mean value 0 and variance
σδ² = σy²·(q′² − q²),
where q′, with q < q′ < 1, is the factor by which the standard deviation σy(q, y) of the noise of the output raw image data 150 is reduced for a real pixel value reduction by a factor q. This transformation represents a noise-consistent scaling operation, which will be denoted by the symbol S(y, q), that is, S(y, q) = q·y + δ(y, q), and the use of which in the context of the present invention will become clearer in the following. In fact, when applied correctly, this transformation can be used to perform more sophisticated operations in the context of image correction.
In the specific case of a Poisson-Gaussian noise model, the variance to be used for the pseudorandom number δ(y, q) follows directly from the model parameters {K, μy.dark, σy.dark}.
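A sketch of the scaling operation follows (Python/NumPy). Since the closed-form variance expression is not reproduced above, the sketch derives σδ² from the general relation σδ² = σy²·(q′² − q²) by taking as target the standard deviation that the noise model predicts at the scaled value; this choice is our assumption, consistent with the requirement that the output remain consistent with the input noise model:

```python
import numpy as np

rng = np.random.default_rng()

def noise_consistent_scale(y, q, sigma):
    """Noise-consistent scaling S(y, q) = q*y + delta(y, q).

    sigma: callable mapping mean pixel values to standard deviations
    (the noise model). delta is drawn such that the scaled value exhibits
    the standard deviation the model predicts at q*y, i.e.
    var(delta) = sigma(q*y)**2 - q**2 * sigma(y)**2
    (an assumed reconstruction of the elided formula, clipped at zero)."""
    y = np.asarray(y, dtype=float)
    var_delta = np.clip(sigma(q * y)**2 - q**2 * sigma(y)**2, 0.0, None)
    return q * y + rng.normal(0.0, np.sqrt(var_delta))

# Illustrative Poisson-Gaussian model (K=2.0, mu_dark=64, sigma_dark=1.5):
model = lambda mu: np.sqrt(1.5**2 + 2.0 * np.clip(mu - 64.0, 0.0, None))
y_half = noise_consistent_scale(np.array([4000.0, 500.0]), q=0.5, sigma=model)
```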
In prior art, bad pixels are typically corrected in the hardware of the imaging device, or in software as part of the processing pipeline, by interpolation with neighboring pixels which are working correctly. As a result of the interpolation, the resulting pixel value fluctuates less than it would for a working pixel, and interpolated pixels can be clearly identified by a different photo-response curve.
Since the noise of the bad pixels is below the expectation, their statistics can be made consistent with the noise model by adding noise. This is achieved with the help of noise-consistent scaling. Consider, for example, a bad pixel whose value yC should be replaced by using the left and right neighbors with values yL and yR. One may assign the corrected value

y′C = S(yL + yR − μy.dark, q = 1/2),

which on average reproduces the same value as interpolation and which in addition produces a standard deviation of y′C that is consistent with the noise model.
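Expressed in code, reusing the noise_consistent_scale and model sketches above (all values illustrative):

```python
# Dead-pixel replacement from the left/right neighbours, reusing the
# noise_consistent_scale() and model() sketches above (illustrative values).
mu_dark = 64.0
y_left, y_right = 3980.0, 4020.0
y_center = noise_consistent_scale(y_left + y_right - mu_dark, q=0.5, sigma=model)
```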
Apertures, lenses and filters often result in a spatially inhomogeneous “capturing” of incoming photons, as for example in vignetting: A uniformly illuminated scene appears darker towards the edges when imaged. In the prior art, a standard way to correct for this is by normalizing to a reference flat-field image N(i, j), where i and j denote pixel coordinates, through
y′(i, j)=y(i, j)/N(i, j).
Here, the flat-field image should be as noise-free as possible, which can be achieved by fitting a mathematically smooth function to the reference image, or by averaging a large number of reference images. In addition one usually wants to preserve the brighter parts of the original image (usually the center), such that N(i, j) should be normalized to the range 0<N(i, j)≤1.
While this method can be used to normalize the brightness of the image, it will no longer be possible to use the same noise model for the entire image: The previously darker parts now have the same brightness, but they have larger relative noise than the previously brighter parts of the image.
In the present invention, a noise-consistent flat-field correction is achieved by using noise-consistent scaling with a pixel-dependent factor q(i, j) to darken the brighter parts of the image. All the considerations of noise-consistent scaling still apply, namely with respect to the use of the equation y′ = q·y + δ(y, q) for determining corrected pixel values, with q(i, j) chosen such as to ensure a noise-consistent flat-field correction.
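A minimal sketch of this correction, under the assumption that q(i, j) = Nmin/N(i, j), i.e. that every pixel is darkened down to the response of the dimmest region (the exact expression for q(i, j) is not reproduced here, so this choice is our assumption):

```python
import numpy as np

def noise_consistent_flat_field(img, N, sigma):
    """Darken brighter regions down to the response of the dimmest region,
    reusing noise_consistent_scale() from the sketch above. N is the
    reference flat-field image, normalized to 0 < N <= 1; the choice
    q(i, j) = N.min() / N(i, j) is an assumption, not taken from the text."""
    q = N.min() / N               # per-pixel reduction factor, q <= 1
    return noise_consistent_scale(img, q, sigma)
```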
Photo-response non-uniformity (PRNU) is a special kind of inhomogeneous illumination caused by the varying detection efficiency of the individual pixels. The differences are usually on the order of 1% to 2% and can, among other things, be caused by differences of absorption in the active area of the pixel.
The varying detection efficiency is equivalent to inhomogeneous illumination or exposure and can be corrected in the same way as flat-field correction by using as N(i,j) the spatially varying relative detection efficiency, determined through careful calibration, of an image sensor used to take the image data.
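The flat-field sketch above applies directly, with the calibrated relative detection efficiency playing the role of the flat-field image (variable names are hypothetical):

```python
# prnu_map: calibrated relative detection efficiency, 0 < prnu_map <= 1
corrected = noise_consistent_flat_field(img, prnu_map, sigma=model)
```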
The methods of noise-consistent processing presented above ensure that the output raw image data 150 remain consistent with the input noise model 120. In many use cases, it is required or of interest to perform processing that leads to a modified output noise model 160, which, however, remains applicable to the entirety of the output raw image data 150. One may, for example, sacrifice spatial resolution with a binning operation and in turn gain in signal-to-noise ratio. It is important here to update the noise model, such that a subsequent processing step can distinguish binned data, which requires a consistently modified noise model, from, e.g., cropped data, for which the noise model remains the same; an unchanged model would be inconsistent with the binned data.
One realization of a binning operation consists of forming a sum of groups of N=n·m pixels, such that for each group there is an output pixel with value
Y = Σ_{i=1}^{N} y_i,
where the index i labels the spatial position in the block to be binned.
The output noise model σY(μY) must be updated to take into account the new dark conditions. Specifically, Y has N times the contribution of the black level, i.e. μY.dark = N·μy.dark, and the readout noise in the output model is σY.dark = √N·σy.dark.
In a different realization of the binning operation, the output pixel value is calculated as
Y = (Σ_{i=1}^{N} y_i) − (N − 1)·μy.dark.
This has the advantage that the black level of the output noise model is identical to that of the input noise model, such that the range of scarcely used pixel values between 0 and μy.dark is not unnecessarily enlarged.
In contrast to the binning operation based on averaging described below, the above two binning operations map integers to integers, and there is no risk of precision loss associated with rounding or discarding fractional parts.
In another version of the binning operation, the output pixel value Ȳ is obtained by forming the average of the input pixel values,

Ȳ = (1/N)·Σ_{i=1}^{N} y_i.

This is particularly useful when the digital representation of the output raw image data 150 is a bounded data type, such as an integer value with fixed bit depth, because it avoids clipping of sums of large values. In the case of averaging, the black level of the data remains unchanged, but the output noise is smaller than the input noise, σȲ(μ) ≤ σy(μ). In particular, σȲ.dark = σy.dark/√N, and in the case of the Poisson-Gaussian noise model, the gain parameter of the output model is equal to K/N.
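The three variants and the associated noise-model bookkeeping can be sketched as follows (Python/NumPy; function names are ours, and the image dimensions are assumed to be multiples of the block size):

```python
import numpy as np

def _blocks(img, n, m):
    h, w = img.shape
    return img.reshape(h // n, n, w // m, m)

def bin_sum(img, n, m):
    """First realization: Y = sum of the N = n*m pixel values per block.
    Model update: mu_dark -> N*mu_dark, sigma_dark -> sqrt(N)*sigma_dark."""
    return _blocks(img, n, m).sum(axis=(1, 3))

def bin_sum_black_compensated(img, n, m, mu_dark):
    """Second realization: Y = sum(y_i) - (N-1)*mu_dark, so the black level
    of the output model equals that of the input model; the readout noise
    still grows by sqrt(N), as for the plain sum."""
    N = n * m
    return bin_sum(img, n, m) - (N - 1) * mu_dark

def bin_average(img, n, m):
    """Averaging variant: black level unchanged,
    sigma_dark -> sigma_dark/sqrt(N), and gain K -> K/N
    for a Poisson-Gaussian model."""
    return _blocks(img, n, m).mean(axis=(1, 3))
```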
One can similarly sum or average pixels at the same position but of different exposures. In this case, the output raw image data 150 will have the same dimensions as the input raw image data 110, but the output noise model 160 must be modified in the same way as for spatial sums or averages. For example, N exposures can be averaged to obtain an output image with pixel values

Ȳ = (1/N)·Σ_{i=1}^{N} y_i,

where the index i now labels the exposure number, and the values yi represent the same position in the image data as the output pixel value Ȳ. In the output noise model of the exposure average, σȲ.dark = σy.dark/√N, and in the case of a Poisson-Gaussian noise model, the output gain parameter is equal to K/N.
Some embodiments for noise-consistent processing will have as input raw image data 110 a sequence of images coming from a number M of different devices with Poisson-Gaussian noise models having different parameters {(Ki, μy.dark(i), σy.dark(i)) | i = 1, 2, …, M}, and the goal of the processing is to produce output raw image data 150 formed by a sequence of output images following a single output noise model 160. This can be achieved by determining which noise model has the highest normalized readout noise,

σε = max_i σy.dark(i)/Ki,

and assigning to each pixel value yi of input image i an output pixel value εi as

εi = (yi − μy.dark(i))/Ki + δi,

where δi is a (pseudo-)random number sampled from a normal distribution with mean value 0 and variance

σδi² = σε² − (σy.dark(i)/Ki)².

The parameters for the common output noise model 160 will then be {K = 1, με.dark = 0, σε.dark = σε}.
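A sketch of this normalization (Python/NumPy; the function name is ours, and the formulas are the reconstructions given above):

```python
import numpy as np

rng = np.random.default_rng()

def normalize_to_common_model(images, params):
    """Map images from M devices with Poisson-Gaussian parameters
    (K_i, mu_dark_i, sigma_dark_i) onto a single common noise model
    {K = 1, mu_dark = 0, sigma_dark = sigma_eps}."""
    # Highest normalized readout noise across all devices.
    sigma_eps = max(s / k for k, m, s in params)
    out = []
    for img, (k, mu_dark, sigma_dark) in zip(images, params):
        # Top up each image's readout noise to the common level.
        var_delta = sigma_eps**2 - (sigma_dark / k)**2
        delta = rng.normal(0.0, np.sqrt(var_delta), size=np.shape(img))
        out.append((np.asarray(img, dtype=float) - mu_dark) / k + delta)
    return out, (1.0, 0.0, sigma_eps)   # images and common model parameters
```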
In several of the above-mentioned operations, corrections use floating-point operations or floating-point random numbers, and the changes with respect to the input pixel value may be small. The overall correction may in fact be smaller than unity: for example, if a reduction of 2% is applied to a pixel value of 10, the effective corrected pixel value is 9.8. In general, such small corrections may be applied by saving the corrected value in a non-integer representation (e.g. fixed-point, floating-point or rational number). However, in many use cases, an output in integer representation is desirable. Naïve implementations, such as truncating or rounding, may result in a difference between the desired correction coefficient and the effectively applied correction. For example, truncating or rounding may introduce an unintended shift of the statistical mean of the image data, also known as bias. To avoid this, a preferred embodiment of a method according to the present invention provides a noise-consistent rounding operation that involves dithering, i.e. the addition of a suitably chosen random number Δ, which is different for each pixel, before the actual rounding operation, i.e.
y′ = round(y + Δ).
Of particular interest are random numbers that follow a uniform distribution between −½ and +½, and random numbers that follow a triangular distribution between −1 and +1, i.e. the distribution that is formed by adding two independent samples of said uniform distribution, because both distributions introduce zero bias. Moreover, the latter gives quantization noise that is uncorrelated with Δ. Some applications may benefit from the use of other random distributions for Δ.
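As a sketch (Python/NumPy; the function name and the returned variance bookkeeping are our assumptions), dithered rounding with the two zero-bias dither distributions mentioned above could look as follows; the returned σΔ² is what the next paragraph uses to update the output noise model:

```python
import numpy as np

rng = np.random.default_rng()

def dithered_round(y, dist="triangular"):
    """Noise-consistent rounding via dithering: y' = round(y + delta).

    'uniform' draws delta from U(-1/2, 1/2) (variance 1/12); 'triangular'
    draws from the triangular distribution on (-1, 1) formed by the sum of
    two such uniform samples (variance 1/6). Both introduce zero bias."""
    y = np.asarray(y, dtype=float)
    if dist == "uniform":
        delta = rng.uniform(-0.5, 0.5, size=y.shape)
        var_delta = 1.0 / 12.0
    else:
        delta = rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
        var_delta = 1.0 / 6.0
    # var_delta must be added to the readout-noise variance of the output model.
    return np.round(y + delta).astype(int), var_delta
```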
The quantization operation introduces a slight amount of extra noise, which requires adapting the readout noise of the output noise model according to
σ′y.dark² = σy.dark² + σΔ²,
where σΔ2 is the variance of the distribution for the random number Δ. Because of this additional noise and to minimize the introduction of processing errors, preferred implementations of processing pipelines according to the present invention perform quantization only once and as the last operation of the pipeline.
The normalization procedure presented above is well-suited for machine learning and algorithm testing, because it allows gathering data from a multitude of sources without increasing the number of noise models that the algorithm must be able to handle or be tolerant against. To further increase the amount of training or testing data, one can simulate images that have a reduced signal-to-noise ratio within the same noise model. This noise model can either be the model of a single set of original input data 100, or of a set of normalized input data generated by following the procedure set out above. Synthetic images with properties equivalent to a reduction of exposure time or of detection efficiency by a factor q can then be generated by applying to each pixel with value y of the input raw image data 110 the noise-consistent scaling S(y, q) as defined in more detail above.
In other situations, it is desirable to increase the noise level while keeping the signal constant. For a Poisson-Gaussian input noise model, this can be achieved in two ways. The trivial way is to add (pseudo-)random values δ with mean 0 and the desired standard deviation σδ to all pixel values of the input image data 110. This effectively corresponds to increasing the readout noise parameter, which in the output noise model should be changed to √(σy.dark² + σδ²). Another approach consists of the combination of normalization and exposure scaling by a factor q, followed by the application of a new gain factor K′ = K/q and adding back the input black level μy.dark,

y′ = μy.dark + K′·S((y − μy.dark)/K, q).
With this transformation, the output raw image data 150 will statistically have the same mean values as the input raw image data 110, and the output noise model has the new parameters {K′=K/q, μy.dark, σy.dark′=σy.dark/q}.
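A sketch of this second approach, combining the pieces introduced above (Python/NumPy; the closed form used for the scaling noise reuses the reconstruction assumed earlier, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng()

def increase_noise(img, q, K, mu_dark, sigma_dark):
    """Increase noise at constant signal: normalize to {K=1, dark=0},
    apply noise-consistent exposure scaling by q, re-apply the gain
    K' = K/q and add back the black level. The output model parameters
    become {K/q, mu_dark, sigma_dark/q}."""
    y = np.asarray(img, dtype=float)
    e = (y - mu_dark) / K                      # normalized data
    s = lambda x: np.sqrt((sigma_dark / K)**2 + np.clip(x, 0.0, None))
    # Assumed reconstruction: var(delta) = s(q*e)^2 - q^2 * s(e)^2.
    var_delta = np.clip(s(q * e)**2 - q**2 * s(e)**2, 0.0, None)
    e_scaled = q * e + rng.normal(0.0, np.sqrt(var_delta))
    return mu_dark + (K / q) * e_scaled
```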
It is clear from the above description that said processing step 130 of a method according to the present invention in general, i.e. in all cases, comprises, on the one hand, determining, based upon said input noise model 120 and depending on said processing operation(s) applied to the input raw image data 110, an output noise model 160 adapted to reflect noise present in said output data 140, as well as, on the other hand, producing, based upon said processing operation(s) applied to the input raw image data 110 as well as on said input noise model 120 and/or said output noise model 160, output raw image data 150 which is statistically consistent with said output noise model 160, these two steps of course depending on one another.
In practice, either the processing consists in an operation (or series of operations) which is noise-consistent and thus happens within the input noise model, i.e. by using the input noise model as the output noise model, or the processing consists in an operation (or series of operations) which is not noise-consistent and thus simultaneously modifies the input noise model so as to produce an output noise model that is consistent with the processed image data.
In case of the first solution mentioned here above, said processing step 130 preferably comprises applying a transformation S(y, q) representing a noise-consistent operation to the values y of a given pixel of an image sensor used to take the image data, where q is a pixel value reduction factor, with 0<q<1, said processing step 130 using an output noise model 160 which is identical to the input noise model 120.
In case of the second solution mentioned here above, said processing step 130 preferably comprises applying a noise-inconsistent operation to the values y of a given pixel of an image sensor used to take the image data, said processing step 130 using an output noise model 160 which is different as compared to the input noise model 120 and statistically consistent with the noise of the output raw image data 150 of the output data 140.
A way to verify that acquired image data is consistent with an associated noise model is to acquire a set of images under identical conditions, such that all differences between pairs of individual images can be attributed to noise. Preferably, the pixel values of a single image should cover a large fraction of all possible pixel values. For each pixel position i, the mean μi and standard deviation σi are then calculated using all images in the set. The larger the number of images in the set, the better the estimates of the mean μi and standard deviation σi. Finally, it is checked whether the points (μi, σi) are well approximated by the function σy(μy) of the noise model. Similarly, it can be verified that the output image data of a processing operation is consistent with the output noise model by passing each individual image of said set through the processing operation and extracting means and standard deviations (μ′i, σ′i) for each pixel position in the output image data. The output noise model is considered to be consistent with the output image data if all points (μ′i, σ′i) are well approximated by the function σ′y(μ′y) of the output noise model.
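A minimal sketch of such a verification (Python/NumPy; the shape convention, the use of the sample standard deviation, and the tolerance criterion are our assumptions, since the text only requires the points to be "well approximated"):

```python
import numpy as np

def verify_noise_model(stack, sigma, rtol=0.1):
    """Check a set of images acquired under identical conditions against a
    noise model sigma(mu), given as a callable.

    stack: array of shape (num_images, height, width). For each pixel
    position, the mean and standard deviation over the set are computed
    and compared with the model prediction; the relative tolerance rtol
    is an assumed acceptance criterion."""
    mu = stack.mean(axis=0)
    sd = stack.std(axis=0, ddof=1)
    predicted = sigma(mu)
    return np.abs(sd - predicted) <= rtol * predicted   # per-pixel pass/fail
```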
Finally, the present invention is also related to computer program means stored in a computer readable medium adapted to implement the method set out above as well as to a device equipped with such computer program means. For example, such device may consist in a microprocessor, a field-programmable gate array, an image sensor, a mobile phone, in particular a smart phone equipped with a digital camera, a digital photo apparatus, a digital video camera, a scanning device, a tablet, a personal computer, a server, a microscope, a telescope, or a satellite.
In light of the above description of various embodiments of a method according to the present invention, its advantages are clear.
First, access to a consistent noise model according to the present invention provides strong benefits to a variety of image processing applications. In fact, as the noise model describes the statistical uncertainty of the image data, it provides valuable insights in terms of tolerance limits and reproducibility in particular for scientific and metrological applications. This is enabled by a method according to the present invention. Conversely, the absence of a consistent noise model reduces the reliability of image processing and analysis, where the outcome may depend on the degree of correction or preprocessing applied to the image data. This may be avoided by use of a method according to the present invention.
Secondly, access to a consistent noise model according to the present invention also provides strong benefits to applications relying on machine learning techniques for the processing and/or interpretation of image data, because it supports the availability of large amounts of high-quality training data, due to the fact that noise models may be used to enhance the efficiency and to eliminate a possible point of confusion of the algorithm. In the context of machine learning, this is even more important because machine learning algorithms make much stronger use of (statistical) properties of the data, such as noise and noise correlations, than human observers, who tend to ignore such properties.