This application claims benefit to European Patent Application No. EP 22153009.0, filed on Jan. 24, 2022, which is hereby incorporated by reference herein.
The invention relates to a digital image processing apparatus and method for computing a baseline estimate of a digital input image. The invention also relates to an observation device, in particular to a medical observation device, such as a microscope with such a digital image processing apparatus.
When digital images of objects are recorded through an optical device, such as a camera or a microscope, they often comprise in-focus components and out-of-focus components. That is to say, the features of the recorded object that are in the focal region of the optical device at the time of recording are rendered sharply in the digital image (i.e. the in-focus component), while the remaining object features outside of the focal region appear as blurred background of the digital image (i.e. the out-of-focus component). Conversely, in some digital images, the content of interest may be present in the background while being overlaid with, for example, image noise. In both scenarios, an estimation of the background is useful during image processing, be it for merely evaluating image quality or for actively enhancing it.
For image processing in the field of medicine and other scientific disciplines, the available computation time and/or equipment performance (especially regarding memory and memory bandwidth) are often constraints.
In an embodiment, the present disclosure provides a digital image processing apparatus for computing a baseline estimate of a digital input image. The digital image processing apparatus is configured to obtain a digital intermediate image by downsampling the digital input image by a predetermined downsampling factor, and compute the baseline estimate based on the digital intermediate image.
Subject matter of the present disclosure will be described in even greater detail below based on the exemplary figures. All features described and/or illustrated herein can be used alone or combined in different combinations. The features and advantages of various embodiments will become apparent by reading the following detailed description with reference to the attached drawings, which illustrate the following:
Embodiments of the present invention relate to a digital image processing apparatus and method for computing a baseline estimate of a digital input image. Embodiments relate to an observation device, in particular to a medical observation device, such as a microscope with such a digital image processing apparatus. Embodiments relate to a computer-implemented image processing method for computing a baseline estimate of a digital input image. Embodiments relate to a computer program product and a computer-readable medium. Embodiments relate to a machine learning product for processing a digital input image and a method of training such a machine learning product.
Embodiments of the present invention can provide an accelerated background estimation, which only requires low memory and low memory bandwidth.
This can be achieved by a digital image processing apparatus for computing a baseline estimate of a digital input image, wherein the digital image processing apparatus is configured to obtain a digital intermediate image by downsampling the digital input image by at least one predetermined downsampling factor and to compute the baseline estimate based on the digital intermediate image.
This apparatus is advantageous and achieves the above advantages for the following reasons:
As will be described below in further detail, the baseline estimate computed from the digital intermediate image can be considered as primarily representing an approximation of the background of the digital intermediate image. After all, said baseline estimate is computed based on the digital intermediate image. However, for numerous applications, said baseline estimate is also sufficiently representative for the background of the digital input image and thus can be considered equivalent to a baseline estimate of the digital input image.
Downsampling the digital input image to the digital intermediate image has little to no impact on the qualitative accuracy of the resulting baseline estimate. In other words, the baseline estimate computed from the digital intermediate image and a comparative baseline estimate computed directly from the digital input image are substantially geometrically similar. Thus, both adequately allow at least some qualitative evaluations of the background of the full-size digital input image (i.e. the digital input image in full-resolution).
Consequently, an embodiment of the apparatus is suited for (certain) background estimation purposes. At the same time, the amount of memory usage and bandwidth consumption is significantly reduced during image processing, when the baseline estimate is computed from the downsampled digital input image, instead of the full-size digital input image. Further, in comparison to computing the baseline estimate directly from the digital input image, the computational burden and computation time are reduced due to the prior downsampling. The above apparatus thus provides accelerated background estimation, while requiring less memory and less memory bandwidth.
The digital input image may be for example a one-dimensional, two-dimensional or three-dimensional image, or an image of higher dimensionality. In particular, the digital input image may derive from a line scan camera, an area scan camera, a range camera, a stereo camera, a microscope camera, an endoscope camera or the like. The digital input image may comprise input image data at a number of input image locations, such as pixels or voxels. The digital intermediate image may likewise comprise intermediate image data at a number of intermediate image locations, such as pixels or voxels. The input image data and the intermediate image data may be discrete intensity data and/or discrete color data for example.
The input image data are, in a preferred embodiment, represented by an N-dimensional array I(Xi), where N is an integer equal to or larger than 1. Likewise, the intermediate image data are represented by an n-dimensional array Id(xi), where n is an integer equal to or larger than 1. In a preferred embodiment, N and n are equal.
I(Xi) and Id(xi) can be any value or combination of values at the location Xi or xi, respectively, such as a value representing an intensity of a color or "channel" in a color space, e.g. the intensity of the color R in RGB space, or a combined intensity of more than one color, e.g. the sum (R+G+B) in RGB color space.
The term Xi is a shortcut notation for a tuple {X1; . . . ; XM} containing M location values and representing a discrete location Xi (or the position vector to that location) in the array representing the input image data. The location Xi may be represented by a pixel or a preferably coherent set of pixels in the input image data. The discrete location Xi denotes e.g. a location variable {X1} in the case of one-dimensional input image data, a pair of discrete location variables {X1; X2} in the case of two-dimensional input image data and a triplet of discrete location variables {X1; X2; X3} in the case of three-dimensional input image data. In the i-th dimension, the array may contain Mi locations, i.e. Xi={Xi,1, . . . , Xi,Mi}.
Analogously, the term xi is a shortcut notation for a tuple {x1; . . . ; xm} containing m location values and representing a discrete location xi (or the position vector to that location) in the array representing the intermediate image data. Due to the downsampling, the number of intermediate image locations m is lower than the number of input image locations M. The location xi may be represented by a pixel or a preferably coherent set of pixels in the intermediate image data. The discrete location xi denotes e.g. a location variable {x1} in the case of one-dimensional intermediate image data, a pair of discrete location variables {x1; x2} in the case of two-dimensional intermediate image data and a triplet of discrete location variables {x1; x2; x3} in the case of three-dimensional intermediate image data. In the i-th dimension, the array may contain mi locations, i.e. xi={xi,1, . . . , xi,mi}.
As will be described in further detail below, the baseline estimate can be computed using a fit to at least a part of the intermediate image data. Computationally, the result of the fit, i.e. the baseline estimate, is represented by discrete baseline estimation data at a number of baseline estimate locations, such as pixels or voxels. The baseline estimation data may be an array fd(xi) having the same dimensionality as the input image data and intermediate image data. Thus, for a one-dimensional digital intermediate image, the baseline estimate may be presented as a discrete curve graph, while for a two-dimensional digital intermediate image, the baseline estimate may be presented as a discrete graph of a curved surface.
As was described at the outset, the digital input image may be composed of in-focus components and out-of-focus components. The in-focus components may also be referred to as in-focus contributions, as their presence in the digital input image contributes to an overall sharp appearance. The out-of-focus components may also be referred to as out-of-focus contributions, since they contribute to making the digital input image appear blurred.
The baseline estimate represents the out-of-focus contributions. That is, the above apparatus functions particularly well for applications where the in-focus contributions can be assumed to have a high spatial frequency e.g. being responsible for intensity and/or color changes that take place over a short distance in the digital input image, while the out-of-focus contributions are assumed to be blurred background having low spatial frequency, i.e. leading to predominantly gradual intensity and/or color changes that extend over comparatively large regions of the digital input image.
An embodiment of the apparatus also works for applications where the content of interest is in the background having a comparatively low spatial frequency and appears in the digital input image together with image noise that has a relatively high spatial frequency.
Due to its low spatial frequency, the background can be considered as a more or less smooth baseline on which the in-focus components and/or the image noise are superimposed as features having high spatial frequency. The baseline estimate is an estimate of said baseline in the intensity data and/or color data. Therefore, the baseline estimate represents an approximation of the background.
Further, owing to its low spatial frequency, the information content of the background originally present in the digital input image is not lost or at least not significantly lost during downsampling. Therefore, the information content of the background is carried over from the input image data to the intermediate image data during downsampling without being altered (except for scaling).
The initial advantages can also be achieved by a computer-implemented image processing method for computing a baseline estimate of a digital input image, wherein an embodiment of the method comprises the steps of downsampling the digital input image by a predetermined downsampling factor to obtain a digital intermediate image and computing the baseline estimate based on the digital intermediate image.
This method benefits from the same advantages as already explained for an embodiment of the apparatus. In particular, it can be carried out in a comparatively short computation time using only relatively little memory and little memory bandwidth due to the downsampling step.
The above apparatus and method may be further improved by adding one or more of the features described in the following. Each of these features may be added to an embodiment of the method and/or an embodiment of the apparatus independently of the other features. In particular, a person skilled in the art—with knowledge of an embodiment of the apparatus—is capable of configuring an embodiment of the method such that the embodiment of the method is capable of operating the embodiment of the apparatus. Moreover, each feature has its own advantageous technical effect, as explained hereinafter.
In a preferred embodiment, the predetermined downsampling factor d is an integer factor. In other words, the number of input image locations is an integer multiple of the number of intermediate image locations. In particular, in the case of a one-dimensional digital input image and digital intermediate image, the number of input image locations divided by the number of intermediate image locations equals the downsampling factor d. If the digital input image and the digital intermediate image are two-dimensional, the ratio between the number of input image locations and the number of intermediate image locations equals the square of the downsampling factor d. Likewise, in the case of a three-dimensional digital input image and digital intermediate image, the number of input image locations divided by the number of intermediate image locations equals the cube of the downsampling factor d. The downsampling factor may be different for each dimension of the digital input image. In this case, the downsampling factor assumes the more general form di, where i represents the dimension. For example, the downsampling factor may be different in each of the directions of a two-dimensional image, d1≠d2. If the digital input image has three dimensions, e.g. represents a volume, two or more of the downsampling factors for the different dimensions may be different. Of course, in the simplest case, the downsampling factor is the same for each dimension. In this context, a dimension does not necessarily need to represent a spatial dimension. For example, the third dimension may represent a color band or primary of a color space, in which the digital input image is represented.
Consequently, the digital image processing apparatus may be configured to generate the digital intermediate image through a rate reduction by the downsampling factor di. In particular, the digital image processing apparatus may be configured to fill the array representing the intermediate image data Id(xi) with every di-th value from the array representing the input image data. In this case, the intensity value at the location xi of the digital intermediate image is equal to the intensity value at the location di·xi of the digital input image:
Id(xi) = I(di·xi)
Using the downsampling factor di, the location values xi of the digital intermediate image and the location values Xi of the digital input image may thus be converted back and forth: Xi = di·xi and xi = Xi/di.
Optionally, the digital image processing apparatus may be configured to select the first value and then every di-th value from the array representing the input image data.
If the digital input image is a three-dimensional image, in particular an XYZ image stack, and under the assumption that the background has a low spatial frequency in all dimensions, the number of Z planes can be reduced during downsampling by filling the array representing the intermediate image data with every d3-th plane or with the first plane and then every d3-th plane.
Alternatively, the digital image processing apparatus may be configured to compute the digital intermediate image based on the digital input image through average pooling, maximum pooling, minimum pooling or other types of downsampling techniques.
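By way of illustration, the following is a minimal sketch of the two downsampling variants described above, i.e. stride-based rate reduction and average pooling, assuming a two-dimensional single-channel image and an integer downsampling factor d; the function names are illustrative only:

import numpy as np

def downsample_stride(image: np.ndarray, d: int) -> np.ndarray:
    """Rate reduction: keep the first value and then every d-th value per axis."""
    return image[::d, ::d]

def downsample_average(image: np.ndarray, d: int) -> np.ndarray:
    """Average pooling over non-overlapping d x d blocks (image cropped to a multiple of d)."""
    h, w = (image.shape[0] // d) * d, (image.shape[1] // d) * d
    blocks = image[:h, :w].reshape(h // d, d, w // d, d)
    return blocks.mean(axis=(1, 3))

I = np.random.rand(2048, 2048)   # full-size input image I(Xi)
Id = downsample_stride(I, 2)     # intermediate image Id(xi), 1024 x 1024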
The digital image processing apparatus may be configured to generate a digital output image solely based on the baseline estimate. The digital output image may comprise output image data O(xi) at a number of output image locations, such as pixels or voxels. Depending on the application, it may be sufficient if the number of output image locations is equal to the number of intermediate image locations. In this case, the number of baseline estimate locations can remain unchanged and the output image data O(xi) may be intensity data corresponding to the baseline estimation data fd(xi):
O(xi)=fd(xi)
Consequently, the digital output image, the digital intermediate image and the baseline estimate may have the same image resolution. In applications where the content of interest is particularly present in the background, this embodiment allows an extraction of the background and thus provides an enhanced digital output image.
In certain situations, however, the number of output image locations needs to be higher than the number of intermediate image locations. For such situations, the digital image processing apparatus may be configured to upsample the baseline estimate by a predetermined sampling factor and to compute the digital output image based on the upsampled baseline estimate.
In particular, the digital image processing apparatus may be configured to compute upsampled baseline estimation data fu(Xi) from the baseline estimation data fd(xi) by means of a nearest-neighbor interpolation, a linear interpolation, a bilinear interpolation, a Lanczos interpolation, AI-based interpolation algorithms or other types of upsampling techniques. The output image data O(Xi) may then be intensity data corresponding to the upsampled baseline estimation data fu(Xi).
O(Xi)=fu(Xi)
In a preferred embodiment, the number of output image locations is equal to the number of input image locations. In particular, the number of baseline estimate locations may be increased to the number of input image locations during upsampling. In other words, the upsampling factor may be complementary or, more specifically, reciprocal to the downsampling factor di. Thus, a full-size baseline estimate can be obtained without having to compute any baseline estimate from the full-size digital input image. Of course, it is also possible to use upsampling factors that are not the reciprocal of the downsampling factor di in the respective dimension.
Due to the already low spatial frequency of the background, upsampling the baseline estimate will not lead to the introduction of upsampling artifacts. For small downsampling factors (d ≲ 3), nearest-neighbor interpolation results in satisfactory upsampling to full size. In the case of high downsampling factors (d ≳ 3), more sophisticated interpolation algorithms, such as bilinear interpolation, Lanczos interpolation and AI-based interpolation algorithms, are preferred for upsampling to full size.
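As a hedged sketch of this upsampling step, the following uses scipy.ndimage.zoom, where order=0 corresponds to nearest-neighbor and order=1 to (bi)linear interpolation; the switch at d = 3 follows the rule of thumb above, and all names are illustrative:

import numpy as np
from scipy.ndimage import zoom

def upsample_baseline(fd: np.ndarray, d: int) -> np.ndarray:
    """Upsample the baseline estimation data fd(xi) to full-size data fu(Xi)."""
    order = 0 if d <= 3 else 1   # nearest-neighbor for small d, (bi)linear for larger d
    return zoom(fd, d, order=order)

fu = upsample_baseline(np.random.rand(512, 512), 2)   # -> 1024 x 1024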
In an embodiment of the method, the step of computing the baseline estimate may thus also comprise the step of upsampling the baseline estimate.
Optionally, the digital image processing apparatus may be configured to compute the digital output image based on the digital input image, from which the baseline estimate has been removed. In a preferred embodiment, the baseline estimate is upsampled to full-size prior to removal from the digital input image. In particular, the digital image processing apparatus may be configured to subtract the upsampled baseline estimation data fu(Xi) from the input image data I(Xi) in order to obtain the output image data O(Xi):
O(Xi) = I(Xi) − fu(Xi)
The functionality of this embodiment is based on the assumption that the intensity and/or color changes across the digital input image may be separated additively into a high spatial frequency in-focus component I1(Xi) and a low spatial frequency out-of-focus component I2(Xi) according to the following equation:
I(Xi)=I1(Xi)+I2(Xi)
By subtracting the upsampled baseline estimation data fu(Xi) from the input image data I(Xi), the in-focus component I1(Xi) is maintained in the output image data O(Xi), while the out-of-focus component I2(Xi) represented by the upsampled baseline estimation data fu(Xi) is removed. This results in an enhanced digital output image, in which the amount of blurred background is reduced.
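Putting the two steps together, a sketch of this background removal under the additive separation assumption; compute_baseline is a placeholder for whichever baseline fit is used and is not part of the source:

import numpy as np
from scipy.ndimage import zoom

def remove_background(I: np.ndarray, d: int, compute_baseline) -> np.ndarray:
    """O(Xi) = I(Xi) - fu(Xi): subtract the upsampled baseline estimate from the input."""
    Id = I[::d, ::d]                        # digital intermediate image (stride downsampling)
    fd = compute_baseline(Id)               # baseline estimate of the intermediate image
    fu = zoom(fd, d, order=0)               # nearest-neighbor upsampling toward full size
    fu = fu[: I.shape[0], : I.shape[1]]     # crop in case of rounding differences
    return I - fu

# usage with a trivial stand-in baseline (hypothetical; a real fit goes here)
O = remove_background(np.random.rand(512, 512), 2, lambda Id: np.full_like(Id, Id.mean()))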
In order to avoid aliasing effects caused e.g. by downsampling the high spatial frequency in-focus components of the digital input image, the digital image processing apparatus may be configured to apply an anti-aliasing filter, such as a low-pass filter to the digital input image prior to downsampling. In particular, the digital image processing apparatus may be configured to compute a convolution of the input image data with a suitable filter kernel before carrying out the downsampling described above. The digital intermediate image may then be obtained in a subsequent step by downsampling the filtered digital input image.
According to an embodiment with improved computational efficiency, the digital image processing apparatus may be configured to carry out the downsampling simultaneously with filtering the digital input image. For example, the filtering may be carried out on the full-size digital input image, wherein the filter output is only computed at a reduced number of input image locations. In a preferred embodiment, the filter output is only computed at every di-th input image location or at the first input image location and then at every di-th input image location. A separate downsampling step can thus be dispensed with.
Consequently, in an embodiment of the method, the step of downsampling the digital input image may comprise the step of filtering the digital input image.
Optionally, the filtering may also be carried out on the intermediate image data. Moreover, the low-pass filtering may be implemented in Fourier space by multiplication with a weighing function that suppresses high-frequency components. Besides a low-pass filter, the digital image processing apparatus may be configured to apply any other linear and/or time invariant filter to at least one of the digital input image and the digital intermediate image.
In applications where no aliasing effects are to be expected during downsampling, it is preferred not to apply any filter to the digital input image or the digital intermediate image.
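The fused filter-and-downsample variant can be sketched as a strided convolution, i.e. the low-pass output is evaluated only at every d-th input location; the box kernel and all names are assumptions:

import numpy as np

def filter_and_downsample(image: np.ndarray, kernel: np.ndarray, d: int) -> np.ndarray:
    """Evaluate a low-pass filter only at every d-th location (fused anti-aliasing + downsampling)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="reflect")
    out_h = (image.shape[0] + d - 1) // d
    out_w = (image.shape[1] + d - 1) // d
    out = np.empty((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            patch = padded[y * d : y * d + kh, x * d : x * d + kw]
            out[y, x] = np.sum(patch * kernel)   # filter response at input location (y*d, x*d)
    return out

kernel = np.full((5, 5), 1 / 25)                       # simple 5x5 box filter as anti-aliasing filter
Id = filter_and_downsample(np.random.rand(64, 64), kernel, 2)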
As briefly mentioned above, the baseline estimate can be computed using a fit to the intermediate image data, e.g. a polynomial fit or a spline fit. Thus, the digital image processing apparatus may be configured to compute the baseline estimation data using a least-square minimization criterion, which is to be minimized for the fit. Said least-square minimization criterion M(fd(xi)) may have the following form:
M(fd(xi))=P(fd(xi))+C(fd(xi))
where P(fd(xi)) is a penalty term and C(fd(xi)) is a cost function, as will be explained below.
The least-square minimization criterion may comprise the penalty term P(fd(xi)) in order to ensure that the baseline estimation data fd(xi) are an accurate representation of only the out-of-focus contributions Id,2(xi) in the intermediate image data Id(xi) and to prevent the baseline estimation data from being fitted to the in-focus contributions Id,1(xi) of the intermediate image data Id(xi). In particular, the penalty term may take any form that introduces a penalty if the baseline estimate is fitted to data that are considered to belong to the in-focus component. Such a penalty may be created by increasing the penalty term in value if the in-focus component of the intermediate image data is represented in the baseline estimation data.
Under the assumption that the out-of-focus component has a low spatial frequency, the penalty term may be a term that becomes large if the spatial frequency of the baseline estimate becomes large. Such a term may be in one embodiment a roughness penalty term which penalizes non-smooth baseline estimation data that deviate from a smooth baseline. Such a roughness penalty term effectively penalizes the fitting of data having high spatial frequency.
For example, a deviation from a smooth baseline may lead to large values in at least one of the first derivative, i.e. the steepness or gradient, and the second derivative, i.e. the curvature, of the baseline estimation data. Therefore, the roughness penalty term may contain at least one of a first spatial derivative of the baseline estimation data, in particular the square and/or absolute value of the first spatial derivative, and a second derivative of the baseline estimation data, in particular the square and/or absolute value of the second spatial derivative. More generally, the penalty term may contain a spatial derivative of any arbitrary order of the baseline estimation data, or any linear combination of spatial derivatives of the baseline estimation data.
Without loss of generality, the penalty term P(fd(xi)) may have the following form for the one-dimensional case for example:
P(fd(xi)) = r · Σi=1..m (∂fd(xi))²
where r is a regularization parameter and ∂ is a discrete operator for computing the first derivative. The utility of the regularization parameter r is explained further below. More generally, the penalty term P(fd(xi)) may have the form:
P(fd(xi)) = r · Σi=1..m (∂j fd(xi))²
where ∂j is a discrete operator for computing the j-th derivative. For multidimensional digital intermediate images, different regularization parameters ri may be used in the different dimensions. The least-square minimization criterion may further comprise the cost function C(fd(xi)), which represents a difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi). An example of a cost function for the one-dimensional case is:
C(fd(xi)) = ∥Id(xi) − fd(xi)∥
where ∥ . . . ∥ denotes the L1 norm, i.e. the sum of absolute values. For the multidimensional case, the sum of the root-mean-square values across all dimensions of the sum of squared differences between the intermediate image data and the baseline estimation data in the i-th dimension may be used instead of the L1 norm.
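To make the structure of the criterion concrete, a one-dimensional sketch that merely evaluates M(fd(xi)) = P(fd(xi)) + C(fd(xi)) with the first-derivative roughness penalty and the L1 cost given above (an actual implementation would minimize this criterion over fd rather than just evaluate it):

import numpy as np

def minimization_criterion(fd: np.ndarray, Id: np.ndarray, r: float) -> float:
    """M(fd) = P(fd) + C(fd) for a one-dimensional baseline estimate."""
    penalty = r * np.sum(np.diff(fd) ** 2)   # P(fd) = r * sum((d fd)^2), roughness penalty
    cost = np.sum(np.abs(Id - fd))           # C(fd) = ||Id - fd||, L1 norm
    return float(penalty + cost)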
In certain applications, the digital intermediate image may comprise both high-frequency in-focus components and high-frequency image noise overlaid with low frequency out-of-focus components. In this situation, it is preferred to distinguish between the in-focus components and image noise when computing the baseline estimate. That is, the baseline estimate should not be influenced by the in-focus components, while image noise is allowed to influence the baseline estimate, in order to be reflected in the baseline estimate.
To achieve this, the digital image processing apparatus may be configured to obtain, e.g. from external user input, a predetermined threshold value s, which is larger than zero and allows a differentiation between the high-frequency in-focus components and the high-frequency image noise.
Assuming that the image noise follows Poisson statistics, any intensity peak that surpasses the baseline estimation data by more than the predetermined threshold value s can be considered as belonging to the in-focus components. Conversely, intensity peaks that are closer to the baseline estimation data than the predetermined threshold value s can be considered as image noise.
If a low-pass filter has been applied to the digital input image prior to downsampling, the digital image processing apparatus may be configured to compute an adapted threshold value sd based on the predetermined threshold value s and the downsampling factor d e.g. for a two-dimensional image as follows:
or, in the general case where the downsampling factor, indicated by di, differs between the N dimensions i:
The predetermined threshold value s or, if computed, the adapted threshold value sd may be included in a truncated difference term of the cost function. The truncated difference term may be symmetric or asymmetric. In a preferred embodiment, the truncated difference term may be a truncated quadratic term representing the difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi), wherein the output value of the truncated quadratic term is limited to a constant value if the difference between the intermediate image data and the baseline estimation data is larger than the respective threshold value s or sd. Otherwise, the value of the truncated quadratic term is equal to the square of the difference between the intermediate image data and the baseline estimation data. Thus, the baseline estimation data will follow intensity peaks of the in-focus components only to a limited amount and intensity peaks of the image noise all the more. The truncated quadratic term φ(fd(xi)) may be of the form: φ(fd(xi)) = (Id(xi) − fd(xi))² if Id(xi) − fd(xi) ≤ s, and φ(fd(xi)) = s² otherwise (with sd in place of s where the adapted threshold value is used).
Using the truncated quadratic term φ(fd(xi)), the cost function C(fd(xi)) may be expressed as:
C(fd(xi)) = Σi=1..m φ(fd(xi))
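A sketch of this truncated quadratic cost; the asymmetric reading, i.e. truncating only peaks that rise above the baseline by more than s, is an assumption consistent with the description above:

import numpy as np

def truncated_quadratic_cost(fd: np.ndarray, Id: np.ndarray, s: float) -> float:
    """C(fd) = sum(phi(fd)): quadratic below the threshold, constant s^2 above it."""
    diff = Id - fd
    phi = np.where(diff > s, s ** 2, diff ** 2)   # truncate only positive outliers (in-focus peaks)
    return float(np.sum(phi))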
The digital image processing apparatus may further be configured to compute the baseline estimate, in particular the fit to the intermediate image data using an iterative algorithm. This embodiment is preferred, since it may make use of the advantages of the present invention in every one of its iteration steps. Optionally, the iterative algorithm may comprise a first iterative stage and a second iterative stage, the first and second iterative stages together representing one iteration step or iteration cycle.
For example, the digital image processing apparatus may be configured to compute the baseline estimate using an iterative half-quadratic minimization scheme aimed at minimizing the least-square minimization criterion. The half-quadratic minimization scheme may e.g. comprise at least part of the LEGEND algorithm, which is computationally efficient. The LEGEND algorithm is described in Idier, J. (2001): Convex Half-Quadratic Criteria and Interacting Variables for Image Restoration, IEEE Transactions on Image Processing, 10(7), p. 1001-1009, and in Mazet, V., Carteret, C., Bire, D., Idier, J., and Humbert, B. (2005): Background Removal from Spectra by Designing and Minimizing a Non-Quadratic Cost Function, Chemometrics and Intelligent Laboratory Systems, 76, p. 121-133. Both articles are herewith incorporated by reference in their entirety.
The LEGEND algorithm introduces discrete auxiliary data Dl(xi) that are preferably of the same dimensionality as the intermediate image data Id(xi). These auxiliary data are updated in each iteration cycle, preferably in the first iterative stage, based on the latest baseline estimation data and the intermediate image data. In particular, the auxiliary data are updated as follows:
where l = 1, . . . , L is the index of the current iteration cycle and α is a constant value, e.g. 0.493. In the second iterative stage, the baseline estimation data may be updated based on the previously calculated, updated auxiliary data, the baseline estimation data from the previous iteration cycle and the penalty term, using the following formula:
Alternatively, the baseline estimation data are computed using a convolution of a discrete Green's function G(xi) with a sum of the intermediate image data Id(xi) and the updated auxiliary data Dl(xi), in this second iterative stage. In other words, the second iterative step of the LEGEND algorithm may be replaced by the following iterative step, where the updated baseline estimation data fd,l(xi) is computed in the l-th iteration cycle using the Green's function G(xi):
fd,l(xi) = G(xi) * (Id(xi) + Dl(xi))
Without loss of generality, the Green's function G(xi) may have the following form for the one-dimensional case:
where F[ . . . ] is the discrete Fourier transform, F−1[ . . . ] is the inverse discrete Fourier transform, r is the regularization parameter, and ∂ denotes the functional derivative. Again, the regularization parameter may be different for each dimension, i.e. r=rj, where j represents a dimension.
This step reduces the computational burden significantly as compared to the traditional LEGEND algorithm. The reduced computational burden results from the fact that a convolution is computed. This computation can be efficiently carried out using an FFT algorithm. Moreover, the second iterative step may make full use of an array processor, such as a graphics processing unit (GPU) or an FPGA due to the FFT algorithm.
As a starting step of the iterative algorithm, an initial set of baseline estimation data (e.g. fd(xi)=Id(xi)) and an initial set of auxiliary data (e.g. D1(xi)=0) can be defined. In a preferred embodiment, the first and second iterative stages are repeated until a convergence criterion is met. A suitable convergence criterion may be, for example, that the sum of the differences between the current baseline estimation data and the previous baseline estimation data across all locations xi is smaller than a predetermined threshold.
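Since the auxiliary-data update and the Green's function formula are given in the cited articles rather than reproduced above, the following one-dimensional sketch covers only the second iterative stage; the transfer function is derived from the first-derivative penalty under periodic boundary conditions and should be treated as an assumption, not as the formula of the source:

import numpy as np

def greens_update(Id: np.ndarray, D: np.ndarray, r: float) -> np.ndarray:
    """Second iterative stage: fd,l(xi) = G(xi) * (Id(xi) + Dl(xi)), evaluated via FFT.

    Assumed transfer function F[G] = 1 / (1 + r * (2 - 2*cos(w))): the standard
    least-squares solution for the first-derivative roughness penalty, where
    2 - 2*cos(w) is the squared magnitude of the discrete forward difference.
    """
    w = 2.0 * np.pi * np.fft.fftfreq(Id.shape[0])
    transfer = 1.0 / (1.0 + r * (2.0 - 2.0 * np.cos(w)))
    return np.real(np.fft.ifft(transfer * np.fft.fft(Id + D)))

# one iteration cycle; the auxiliary update for D_l is given in the cited articles
Id = np.random.rand(1024)
fd = greens_update(Id, np.zeros_like(Id), r=100.0)   # first cycle with D_1 = 0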
According to another aspect, the digital image processing apparatus may be configured to obtain a predetermined feature size, which is larger than zero and representative for features of interest contained in the digital input image. In particular, the predetermined feature size can be used to differentiate whether an image feature is to be considered as constituting the in-focus contribution or the out-of-focus contribution.
For example, the feature size may be expressed as a cutoff frequency ωcutoff, wherein image features having a spatial frequency higher than the cutoff frequency ωcutoff are considered to constitute the in-focus contribution, whereas image features with a spatial frequency lower than the cutoff frequency ωcutoff are considered to belong to the background, i.e. the out-of-focus contribution.
The feature size may also be expressed as a predetermined feature scale γ denoting the number of pixels, in particular the square number of pixels, which the biggest or average feature of interest takes up in the digital input image. All image features that are smaller than the predetermined feature scale γ are considered to constitute the in-focus contribution, while image features larger than the predetermined feature scale γ are considered to be part of the out-of-focus contribution, i.e. the background.
Alternatively, the feature size may be given as a predetermined feature length λ, which denotes the spatial extent of the biggest or average feature of interest along a single dimension. The predetermined feature length λ may be the square root of the predetermined feature scale γ given in pixels:
λ = √γ,
or, if the feature length is different in the different dimensions i:
λi = √γi.
The predetermined feature length λ may also be expressed as a number of pixels or in length units.
The feature size, in particular the cutoff frequency, the feature scale or the feature length, may be obtained by external user input. Alternatively or additionally, the digital image processing apparatus may be configured to automatically compute the feature size, in particular the cutoff frequency, the feature scale or the feature length, from the input image data. For example, a Fourier Ring Correlation may be used for this purpose. A suitable Fourier Ring Correlation is disclosed, e.g., in Koho, Sami; Tortarolo, Giorgio; Castello, Marco; Deguchi, Takahiro; Diaspro, Alberto; Vicidomini, Giuseppe (2019): Fourier Ring Correlation Simplifies Image Restoration in Fluorescence Microscopy, https://www.nature.com/articles/s41467-019-11024-z. This article is herewith incorporated by reference in its entirety.
In general, the predetermined feature size may be used to improve the accuracy of the estimated baseline. This will be described in further detail below. The baseline estimate may then be computed in such a way that it represents those image features being larger than the predetermined feature size.
When computing the baseline estimate from the digital intermediate image, consideration should be given to the effect downsampling has on the feature size, if the predetermined feature size is given in respect to the full-size digital input image. After all, the digital intermediate image is effectively a downscaled version of the digital input image.
For this purpose, the digital image processing apparatus may be configured to compute an adapted feature size based on the predetermined feature size and the downsampling factor d. In particular, the digital image processing apparatus may be configured to compute an adapted feature scale γd based on the predetermined feature scale γ and the downsampling factor d as follows:
or, in the general case where the downsampling factor is different for different dimensions, considering that the feature scale may also be different for each dimension:
Likewise, the digital image processing apparatus may be configured to compute an adapted feature length λd based on the predetermined feature length λ and the downsampling factor d as follows:
or, in a more general form:
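The adaptation formulas themselves are elided above; as a hedged stand-in, the sketch below assumes that downsampling by d shrinks every spatial extent by the factor d, so that the adapted feature length becomes λ/d (equivalently, γ/d² for a two-dimensional feature scale):

def adapt_feature_size(feature_length: float, d: float) -> float:
    """Adapted feature length for the downsampled image.

    Assumption (the source's formula is not reproduced here): lengths scale
    with 1/d under downsampling, i.e. lambda_d = lambda / d, which corresponds
    to gamma_d = gamma / d**2 for a two-dimensional feature scale.
    """
    return feature_length / d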
The digital image processing apparatus may be configured to compute the baseline estimate based on the digital intermediate image and the adapted feature size. For example, the above-mentioned penalty term in the least-square minimization criterion may include the adapted feature size. In particular, the adapted feature size, and in a preferred embodiment the adapted feature scale or the adapted feature length may be included in the penalty term as a multiplicative weighing factor, e.g. as the regularization parameter r. The regularization parameter r may be chosen such that the penalty term results in a scalar, dimensionless quantity.
That is, for digital intermediate images having a large adapted feature scale or adapted feature length, the penalty term is amplified by the weighing factor. Conversely, for digital intermediate images having a small adapted feature scale or adapted feature length, the effect of the penalty term is attenuated by the weighing factor. Due to this amplification and attenuation of the penalty term, the resulting baseline estimate is forced to be smoother in the former case, while being allowed to have a certain roughness in the latter case.
Additionally or alternatively, the adapted feature size, and in a preferred embodiment the adapted feature scale or the adapted feature length may be included in the penalty term as an additive weighing constant.
Instead of the multiplicative weighing factor or the additive weighing constant, the adapted feature size may also be used for selecting an optimal polynomial order K that is used for the above-mentioned polynomial fit. That is, the baseline estimation data may be represented by the K-order polynomial in any of the N dimensions i.
f(xi) = Σk=0..K ai,k · xi^k
where ai,k are the coefficients of the polynomial in the i-th dimension. For each dimension i=1, . . . , n, a separate polynomial may be computed. The optimum value for the maximum polynomial order K depends on the required smoothness of the baseline estimation data. For a smooth baseline (i.e. large adapted feature size), the polynomial order K should be set as low as possible, whereas fitting a highly irregular background (i.e. small adapted feature size) may require a higher order K.
In the case of a polynomial fit, the baseline estimation data may consist only of the polynomial coefficients ai,k or of intensity data representing the graph of the K-order polynomial.
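A one-dimensional sketch of such a polynomial baseline fit using numpy; the rule mapping the adapted feature size to the order K is a heuristic assumption, since no explicit rule is given above:

import numpy as np

def polynomial_baseline(Id: np.ndarray, K: int) -> np.ndarray:
    """Fit a K-th order polynomial baseline to one-dimensional intermediate image data."""
    x = np.arange(Id.size)
    coeffs = np.polynomial.polynomial.polyfit(x, Id, deg=K)   # coefficients a_k, ascending order
    return np.polynomial.polynomial.polyval(x, coeffs)

def select_order(adapted_feature_length: float, n_locations: int) -> int:
    """Heuristic: smoother baselines (large adapted feature size) get a lower order K."""
    return max(1, min(10, int(n_locations / (4 * adapted_feature_length))))

baseline = polynomial_baseline(np.random.rand(500), select_order(20.0, 500))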
Instead of obtaining the predetermined feature size and then computing the adapted feature size, the digital image processing apparatus may also be configured to automatically compute the feature size from the intermediate image data and utilize it during the computation of the baseline estimate.
In order to make full use of the downsampling, while preventing excessive downsampling, the digital image processing apparatus may be configured to limit the downsampling factor d to a maximum downsampling factor dmax based on the predetermined feature size. In particular, the digital image processing apparatus may be configured to limit the downsampling factor d, such that the Nyquist condition is fulfilled. For example, the maximum downsampling factor dmax may be less than the square root of the predetermined feature scale γ or less than the predetermined feature length λ:
dmax < √γ = λ
or, in a more general form:
dmax,i < √γi = λi
In a preferred embodiment, the maximum downsampling factor dmax is less than or equal to one fourth of the predetermined feature length λ. Alternatively, the maximum downsampling factor dmax is not larger than one fourth of the square root of the predetermined feature scale γ.
Or, more generally: dmax,i ≤ √γi/4 = λi/4.
Thus, for a typical range of the feature scale γ between 100 and 400 pixels, the maximum downsampling factor dmax,i ranges from 2.5 to 5.
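A sketch of this limit; the numbers reproduce the stated example range:

import math

def max_downsampling_factor(feature_scale: float) -> float:
    """dmax,i = sqrt(gamma_i) / 4, i.e. one fourth of the feature length (Nyquist margin)."""
    return math.sqrt(feature_scale) / 4.0

# typical range: gamma between 100 and 400 pixels gives dmax between 2.5 and 5
assert max_downsampling_factor(100.0) == 2.5
assert max_downsampling_factor(400.0) == 5.0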
According to another aspect, the digital image processing apparatus may comprise a microprocessor or may be configured as a microprocessor or as part of such a microprocessor. The digital image processing apparatus and/or the microprocessor may comprise a memory as a combination of read-only memory (ROM) with an arbitrary amount of random access memory (RAM). For a 2D 2048×2048 16-bit single-channel image and d=2, the memory may have a memory capacity of less than 32 MiB, in particular less than 27 MiB. When considering real-time applications, the memory may have a memory bandwidth of less than 50 GiB/s. Owing to the downsampling, the computation of the baseline estimate can be achieved on hardware having such comparatively low performance specifications.
In a preferred embodiment, the digital image processing apparatus is an embedded processor. Embedded processors, being a class of inexpensive computers or computer chips, can be embedded in various machines and devices to control electrical and mechanical functions therein. Embedded processors are generally small in size, use simple microprocessors and do not have to execute elaborate computations or be extremely fast, nor do they need great input/output capability. Owing to the downsampling, these conditions apply to the digital image processing apparatus.
Alternatively, the digital image processing apparatus may be only part of such an embedded processor. This also allows the digital image processing apparatus to be integrated into a device requiring the computation of baseline estimates.
In particular, the embedded processor may be an embedded processor for/of a microscope. Advantageously, embodiments of the present invention may thus be utilized in the field of microscopy.
Consequently, the initial object is also achieved by a microscope comprising an embedded processor, wherein the embedded processor comprises or is a digital image processing apparatus according to any one of the above embodiments. The microscope thus benefits from the above-described advantages of the digital image processing apparatus. In particular, the computation of the baseline estimate may be performed in real time and the enhanced digital output images may be output as a live image feed due to the shortened computation time.
According to another aspect, an embodiment of the method may be adapted to operate a digital image processing apparatus according to any one of the above described embodiments. Advantageously, this embodiment allows the method to be carried out on hardware that is specifically dedicated to a given particular purpose. In particular, an embodiment of the method may be executed on an embedded processor of a microscope.
Alternatively, an embodiment of the method may also be adapted to operate a general-purpose computer, thereby improving the applicability of the method. Accordingly, the initial object is achieved by a computer-program product comprising a program code which, when executed by a computer, such as a general-purpose computer, causes the computer to carry out an embodiment of the method. Likewise, the initial object is also achieved by a computer-readable medium comprising a program code which, when executed by the computer, causes the computer to carry out an embodiment of the method. Both the computer-program product and the computer-readable medium are advantageous, since they represent means for carrying out an embodiment of the method.
The computation of the baseline estimate itself is not limited to the procedures described above and can be accomplished by virtually any known method for estimating a baseline that correctly represents the background of a given digital image.
The advantages of embodiments of the present invention are further achieved by a machine learning product for processing a digital input image. The machine learning product is configured to compute a baseline estimate of the digital input image and has been trained by pairs of different digital input images and baseline estimates, each baseline estimate of a pair computed from the digital input image of the pair using at least one of a) an embodiment of the method and b) an embodiment of the apparatus.
This machine learning product is advantageous, since it is capable of emulating the computations of an embodiment of the method and/or an embodiment of the apparatus due to its training by the pairs of different digital input images and baseline estimates.
The respective baseline estimate of the pairs may be the upsampled baseline estimate. Consequently, the machine learning product may be configured to compute a full-size baseline estimate.
The initial advantages are also achieved by a training method that trains a machine learning product on pairs of different digital input images and baseline estimates, wherein each baseline estimate of a pair is computed from the digital input image of the pair using at least one of a) an embodiment of the method and b) an embodiment of the apparatus.
The training method is advantageous, as it allows the creation of machine learning products having the characteristics mentioned above.
The following examples illustrate how the baseline estimate computed from the digital intermediate image can be utilized for a qualitative evaluation of the digital input image:
For example, the baseline estimate computed from the digital intermediate image may be compared with a predefined reference baseline estimate, which has been computed by downsampling a comparative baseline estimate of a reference digital input image. The predefined reference baseline estimate may derive from external user input and may be representative for an ideal state. This comparison allows for the evaluation of whether the current digital input image satisfies said ideal state. Advantageously, the reference baseline estimate can be computed only once in preparation for the respective application and can then be used repeatedly for the above-mentioned comparison.
In another example, the computed baseline estimate can be used for determining the size of an out-of-focus region in the digital intermediate image. The determined size can then be referenced to the overall image size of the digital intermediate image to arrive at a relative size of the out-of-focus region. Due to the geometric similarity between the baseline estimate and the comparative baseline estimate mentioned above, the corresponding out-of-focus region in the digital input image has substantially the same relative size. The relative size can then be used for determining a sharpness score of the digital input image for example. Therefore, the sharpness score of the digital input image can be estimated without having to compute any baseline estimate of the full-size image.
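As a hedged sketch of this sharpness-score idea, the classification rule (a location counts as out-of-focus where the baseline accounts for most of the local intensity) and the score definition are illustrative assumptions, not taken from the source:

import numpy as np

def sharpness_score(Id: np.ndarray, fd: np.ndarray, ratio: float = 0.9) -> float:
    """Map the relative size of the out-of-focus region to a score in [0, 1]."""
    out_of_focus = fd >= ratio * Id                 # baseline dominates the local intensity
    relative_size = np.count_nonzero(out_of_focus) / Id.size
    return 1.0 - relative_size

Id = np.random.rand(256, 256) + 1.0                 # synthetic intermediate image data
print(sharpness_score(Id, np.full_like(Id, 1.0)))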
In applications where two or more digital input images of the same object are recorded at full size but with various focal settings, the baseline estimates computed from the respective downsampled digital input images can be mutually compared in order to assess which focal setting is most suitable for recording the object. For example, the focal setting resulting in the smallest out-of-focus region can be chosen. Advantageously, this choice can be made without having to compute any baseline estimate from the full-size digital input images.
These examples are not restrictive, and are intended to illustrate that the baseline estimate computed based on the digital intermediate image is in itself qualitatively equivalent to the comparative baseline estimate.
The digital image processing apparatus and each of its functions may be implemented in hardware, in software or as a combination of hardware and software. For example, at least one function of the digital image processing apparatus may at least partly be implemented by a subroutine, a section of a general-purpose processor, such as a CPU, and/or a dedicated processor, such as a GPU, FPGA, vector processor and/or ASIC.
Embodiments of the present invention will now be described by way of example using a sample embodiment, which is also shown in the drawings. In the drawings, the same reference numerals are used for features which correspond to each other with respect to at least function and/or design.
The combination of features shown in the enclosed embodiment is for explanatory purposes only and can be modified. For example, a feature of the embodiment having a technical effect that is not needed for a specific application may be omitted. Likewise, a feature which is not shown to be part of the embodiment may be added if the technical effect associated with this feature is needed for a particular application.
First, the structure and functionality of the digital image processing apparatus 100 are explained with reference to the drawings.
The microscope 120 may comprise an image-forming section 138, which is adapted to capture, with a camera 140, a digital input image 104 in the form of input image data 200. Optionally, the camera 140 may produce a time series of subsequent sets of input image data 200.
The camera 140 may be a CCD, multispectral or hyperspectral camera, which records the input image data 200 in a plurality of channels 142, wherein each channel 142, in a preferred embodiment, represents a different light spectrum range from the infrared to the ultraviolet. Alternatively, the camera 140 may record the input image data 200 in monochrome.
In the case of a CCD camera, three channels 142, e.g. an R-channel, a G-channel and a B-channel, may be provided to represent a visible light image of an object 144. In the case of a multispectral or hyperspectral camera, a total of more than three channels 142 may be used in at least one of the visible light range, the IR light range, the NIR light range and the ultraviolet light range.
The object 144 may comprise animate and/or inanimate matter. The object 144 may further comprise one or more fluorescent materials, such as at least one fluorophore 146. A multispectral or hyperspectral camera may have one channel 142 for each different fluorescence spectrum of the fluorescent materials in the object 144. For example, each fluorophore 146 may be represented by at least one channel 142, which is matched to the fluorescence spectrum triggered by an illumination 148. The microscope 120 may be adapted to excite fluorescence e.g. of fluorophores 146 within the object 144 with light having a suitable fluorescence excitation wavelength by the illumination 148. Alternatively or additionally, channels 142 may be provided for auto-fluorescence spectra or for spectra of secondary fluorescence, which is triggered by fluorescence excited by the illumination 148, or for lifetime fluorescence data. Of course, the illumination 148 may also or solely emit white light or any other composition of light without triggering fluorescence in the object 144.
The illumination 148 may be guided through a lens 150, through which the input image data 200 are acquired. The illumination 148 may comprise or consist of one or more flexible light guides to direct light onto the object 144 from one or more different directions. A suitable blocking filter may be arranged in the light path in front of the camera 140, e.g. to suppress glare. For fluorescence imaging, a blocking filter preferably blocks only the illumination wavelength and allows the fluorescent light of the fluorophores 146 in the object 144 to pass to the camera 140.
It is apparent—without loss of generality—that the input image data 200 can be captured by any kind of microscope, in particular with a fluorescence light microscope operable in a widefield mode and/or using a confocal laser scanning microscope.
The input image data 200 may be one-dimensional if a line camera is used for the recording. Alternatively, the input image data 200 are two-dimensional if a single channel 142 is contained in a two-dimensional image. The digital input image may have a higher dimensionality than two if more than one channel 142 is comprised and/or if the input image data 200 represent a three-dimensional image.
Three-dimensional input image data 200 may be recorded by the microscope 120 by e.g. using light-field technology, Z-stacking in microscopes, images obtained by a SCAPE microscope and/or a three-dimensional reconstruction of images obtained by a SPIM microscope. In the case of a three-dimensional image, each plane of the three-dimensional input image data 200 may be considered as a two-dimensional input image 104. Again, each plane may comprise several channels 142. Each channel 142 may be regarded as a separate two-dimensional image. Alternatively, a plurality of channels 142 may be interpreted together as a multi-dimensional array.
The microscope 120, in particular the digital image processing apparatus 100 may further comprise an image storage section 152 adapted to contain, at least temporarily, the input image data 200. The digital image processing apparatus 100 may also be part of a computer 130, such as a general-purpose computer, in particular a workstation 154 of the microscope 120 comprising the image storage section 152. The image storage section 152 may comprise a volatile or non-volatile memory, such as a cache memory of a CPU 182 of the computer 130, and/or of a GPU 184 of the computer 130. The image storage section 152 may further comprise RAM, a hard disk drive or an exchangeable storage system, such as a USB stick or an SD card. The image storage section 152 may comprise any combination of these types of memory.
The microscope 120 as shown in
The image processing method may be a computer-implemented image processing method. Therefore, a computer-program product 126 comprising a program code 128 which, when executed by the computer 130, causes the computer 130 to carry out an embodiment of the method may be provided. Accordingly, a computer-readable medium 132 comprising the program code 128 may also be provided. In this case, the digital image processing apparatus 100 may serve as an image processor, which is configured to read out the input image data 200 from the image storage section 152 or from an external image-forming section (e.g. the image-forming section 138 of the microscope 120). For this, the computer 130 may comprise an image input section 178.
As is indicated in the upper left half of
In order to save computation time when computing the baseline estimate 102 of the digital input image 104, the digital image processing apparatus 100 is configured to first obtain a digital intermediate image 106 by downsampling the digital input image 104 by a predetermined downsampling factor 108. In the following, the downsampling factor 108 will be denoted as d.
The digital input image 104 may comprise the input image data 200 at a number of input image locations M, such as pixels or voxels. The digital intermediate image 106 may likewise comprise intermediate image data 202 at a number of intermediate image locations m, such as pixels or voxels. Due to the downsampling, the number of intermediate image locations m is lower than the number of input image locations M. The input image data 200 and the intermediate image data 202 may be discrete intensity data and/or discrete color data for example. These input image data 200 are, in a preferred embodiment, represented by an N-dimensional array I(Xi), where N is an integer equal to or larger than 1. Likewise, the intermediate image data 202 are represented by an n-dimensional array Id(xi), where n is an integer equal to or larger than 1. In a preferred embodiment, N and n are equal.
In a preferred embodiment, the predetermined downsampling factor d is an integer factor. In other words, the number of input image locations M is an integer multiple of the number of intermediate image locations m. In particular, in the case of a one-dimensional digital input image 104 and digital intermediate image 106, the number of input image locations M divided by the number of intermediate image locations m equals the downsampling factor d. If the digital input image 104 and the digital intermediate image 106 are two-dimensional, the ratio between the number of input image locations M and the number of intermediate image locations m equals the square of the downsampling factor d. Likewise, with a three-dimensional digital input image 104 and digital intermediate image 106, the number of input image locations M divided by the number of intermediate image locations m equals the cube of the downsampling factor d.
Due to their low spatial frequency, the out-of-focus components 170 (i.e. the background) can be considered as a more or less smooth baseline on which the in-focus components 168 are superimposed as features having high spatial frequency.
Further, owing to their low spatial frequency, the information content of the out-of-focus components I2(Xi) originally present in the digital input image 104 is not, or at least not significantly, lost during downsampling. Therefore, the information content of the out-of-focus components I2(Xi) is carried over from the input image data I(Xi) to the intermediate image data Id(xi) during downsampling without being altered (except for scaling).
The digital image processing apparatus 100 is further configured to compute the baseline estimate 102 based on the digital intermediate image 106. As will be described in more detail below, the baseline estimate 102 can be computed using a fit to at least part of the intermediate image data Id(xi). Computationally, the result of the fit, i.e. the baseline estimate 102, is represented by discrete baseline estimation data 204 at a number of baseline estimate locations, such as pixels or voxels. The baseline estimation data 204 may be an array fd(xi) having the same dimensionality as the input image data I(Xi) and intermediate image data Id(xi). For a one-dimensional digital intermediate image 106, the baseline estimate 102 may be presented as a curve graph.
Further details of the aspects are described below with respect to the image processing method and
As can be seen in
From the detail shown in
$$I_d(x_i) = I(d_i \cdot x_i)$$
Optionally, the array representing the intermediate image data Id(xi) may be filled with the first value and then every d-th value of the array representing the input image data I(Xi).
If the digital input image is a three-dimensional image, in particular an XYZ image stack, and under the assumption that the background has a low spatial frequency in all dimensions, the number of Z planes can be reduced during downsampling by filling the array representing the intermediate image data Id(xi) with every d3-th plane, or with the first plane and then every d3-th plane, of the input image data I(Xi), d3 being the downsampling factor in the third dimension.
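A minimal numpy sketch of such strided (nearest-neighbor) downsampling; the helper name, array sizes and factors are illustrative assumptions:

```python
import numpy as np

def downsample_stride(I: np.ndarray, d: int) -> np.ndarray:
    """Keep the first location and then every d-th location along each axis."""
    return I[(slice(None, None, d),) * I.ndim]

I = np.random.rand(512, 512)            # 2-D input image data I(Xi)
Id = downsample_stride(I, 4)            # intermediate image Id(xi), shape (128, 128)

stack = np.random.rand(32, 512, 512)    # XYZ stack stored as (Z, Y, X)
reduced = stack[::4]                    # keep the first and then every d3-th Z plane
```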
Alternatively, step 122 may comprise step 302 of computing the digital intermediate image based on the digital input image through average pooling, maximum pooling, minimum pooling or other types of downsampling techniques.
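Average pooling, as one of the alternatives named above, may be sketched as follows, assuming an integer factor that divides the image size exactly:

```python
import numpy as np

def average_pool(I: np.ndarray, d: int) -> np.ndarray:
    """Average pooling over non-overlapping d x d blocks (2-D case)."""
    h, w = I.shape
    assert h % d == 0 and w % d == 0, "integer factor assumed to divide the size"
    return I.reshape(h // d, d, w // d, d).mean(axis=(1, 3))

# Max or min pooling: replace .mean(...) with .max(...) or .min(...).
Id = average_pool(np.random.rand(512, 512), 4)    # shape (128, 128)
```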
Optionally, step 122 may comprise the step 304 of applying an anti-aliasing filter, such as a low-pass filter, to the digital input image 104 prior to step 300 or step 302. Instead of filtering and downsampling in separate steps, step 122 may comprise the step 306 of simultaneously carrying out the downsampling and filtering of the digital input image 104. For example, the filtering may be carried out on the full-size digital input image 104, wherein the filter output is only computed at a reduced number of input image locations. In a preferred embodiment, the filter output is only computed at every di-th input image location or at the first input image location and then at every di-th input image location.
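Step 306 may be read as evaluating the low-pass kernel only at the strided locations, so that no full-size filtered image is ever materialized. A one-dimensional sketch of this reading (the moving-average kernel and the window width w are assumptions):

```python
import numpy as np

def filter_and_downsample(I: np.ndarray, d: int, w: int = 4) -> np.ndarray:
    """Evaluate a w-wide moving-average filter only at every d-th location (1-D)."""
    m = (I.size - w) // d + 1
    return np.array([I[k * d : k * d + w].mean() for k in range(m)])

Id = filter_and_downsample(np.random.rand(1024), d=4)   # 256 output locations
```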
In applications where no aliasing effects are to be expected during downsampling, it is preferred to directly downsample the digital input image to the digital intermediate image in step 122 without applying any filter.
Next, an embodiment of the method comprises the step 124 of computing the baseline estimate 102 based on the digital intermediate image 106. Just like the digital input image 104, the digital intermediate image 106 also contains, in the intermediate image data Id(xi), in-focus components 172 denoted by Id,1(xi) and out-of-focus components 174 denoted by Id,2(xi). The baseline estimation data fd(xi) may then be computed as a fit to the intermediate image data Id(xi) by minimizing a least-square minimization criterion M(fd(xi)) of the form:
$$M(f_d(x_i)) = P(f_d(x_i)) + C(f_d(x_i))$$
where P(fd(xi)) is a penalty term and C(fd(xi)) is a cost function.
Under the assumption that the out-of-focus components Id,2(xi) have a low spatial frequency, the penalty term P(fd(xi)) may be a term that becomes large if the spatial frequency of the baseline estimate 102 becomes large, i.e. if the baseline estimation data fd(xi) exhibit large values in at least one of the first derivative and the second derivative. Consequently, the penalty term P(fd(xi)) may contain at least one of a first spatial derivative of the baseline estimation data fd(xi), in particular the square and/or absolute value of the first spatial derivative, and a second spatial derivative of the baseline estimation data fd(xi), in particular the square and/or absolute value of the second spatial derivative. More generally, the penalty term P(fd(xi)) may contain a spatial derivative of any arbitrary order and have the following general form for the one-dimensional case:
$$P(f_d(x_i)) = r \sum_{i=1}^{m} \left( \partial^j f_d(x_i) \right)^2$$
where r is a regularization parameter and ∂j is a discrete operator for computing the j-th derivative. The regularization parameter may be different for each dimension j of the digital input image, i.e. r=rj in the above equation. The penalty term P(fd(xi)) may also contain a linear combination of spatial derivatives of the baseline estimation data fd(xi).
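For the one-dimensional case with j = 2, the penalty term may be sketched as follows; the finite-difference operator and its boundary handling are assumptions:

```python
import numpy as np

def penalty(f: np.ndarray, r: float, j: int = 2) -> float:
    """P(f) = r * sum over locations of the squared j-th discrete derivative of f."""
    return float(r * np.sum(np.diff(f, n=j) ** 2))

f = np.linspace(0.0, 1.0, 256) ** 2    # some candidate baseline
p = penalty(f, r=25.0)                 # large r penalizes wiggly baselines more strongly
```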
The regularization parameter r can be understood as a multiplicative weighing factor, which depends on the structure of the image. That is, the regularization parameter r roughly represents the spatial size of the features in the in-focus component Id,1(xi) of the intermediate image data Id(xi). In general, the information on said spatial size may be used to improve the accuracy of the estimated baseline. This will be described in further detail below. The baseline estimate 102 may then be computed in such a way that it represents only those image features larger than said spatial size.
The information on said spatial size may be obtained, e.g., from external user input 176.
For example, the feature size may be expressed as a predetermined feature scale γ denoting a number of pixels, in particular a square number of pixels. The biggest or average feature of interest takes up that many pixels in the digital input image 104. All image features smaller than the predetermined feature scale γ are considered to constitute the in-focus components I1(Xi), while image features larger than the predetermined feature scale γ are considered to be part of the out-of-focus components I2(Xi), i.e. the background.
Alternatively, the feature size may be given as a predetermined feature length 112 denoted by λi, which represents the spatial extent of the biggest or average feature of interest along a single dimension i. The predetermined feature length λi may be the square root of the predetermined feature scale γi given in pixels:
$$\lambda_i = \sqrt{\gamma_i}$$
The predetermined feature length λi may also be expressed as a number of pixels or in length units.
When computing the baseline estimate 102 from the digital intermediate image 106, it is preferred to take into consideration the effect downsampling has on the feature size, if the predetermined feature size is given with respect to the full-size digital input image 104. After all, the digital intermediate image 106 is effectively a downscaled version of the digital input image 104.
For this purpose, an adapted feature size may be computed based on the predetermined feature size and the downsampling factor di in an optional step 308. In particular, an adapted feature scale γd,i may be computed based on the predetermined feature scale γi and the downsampling factor di as follows:

$$\gamma_{d,i} = \frac{\gamma_i}{d_i^2}$$
Likewise, an adapted feature length 116 denoted by λd,i may be computed based on the predetermined feature length λi and the downsampling factor di as follows:

$$\lambda_{d,i} = \frac{\lambda_i}{d_i}$$
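A short numerical check of this adaptation (the values are illustrative; the scaling follows from the downscaling of the image described above):

```python
gamma, d = 400.0, 4        # predetermined feature scale and downsampling factor
lam = gamma ** 0.5         # feature length: 20 pixels in the full-size input image
lam_d = lam / d            # adapted feature length: 5 pixels in the intermediate image
gamma_d = gamma / d ** 2   # adapted feature scale: 25 pixels
assert lam_d ** 2 == gamma_d
```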
For example, the penalty term in the least-square minimization criterion may include the adapted feature size. In particular, the adapted feature size, and in a preferred embodiment the adapted feature scale γd,i or the adapted feature length λd,i may be included in the penalty term as the regularization parameter r.
$$r_i = \gamma_{d,i} \quad \text{or} \quad r_i = \lambda_{d,i}^2$$
Depending on the order j of the spatial derivative in the penalty term P(fd(xi)), the adapted feature scale γd,i or the adapted feature length λd,i may be chosen as the regularization parameter ri such that the penalty term P(fd(xi)) results in a scalar, dimensionless quantity.
For illustrative purposes, the influence the adapted feature length λd,i has on the baseline estimate 102 can be seen from a juxtaposition of
The least-square minimization criterion may further comprise the cost function C(fd(xi)), which represents a difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi). An example of a cost function for the one-dimensional case is:
$$C(f_d(x_i)) = \left\| I_d(x_i) - f_d(x_i) \right\|$$
where ‖·‖ denotes the L1 norm, i.e. the sum of absolute values. For the multidimensional case, the sum across all dimensions of the root-mean-square values of the squared differences between the intermediate image data Id(xi) and the baseline estimation data fd(xi) in the i-th dimension may be used instead of the L1 norm.
As can be seen in the detail shown in
To implement this distinction, the cost function C(fd(xi)) may comprise a truncated difference term, which includes the predetermined threshold value s. The truncated difference term may be symmetric or asymmetric. In a preferred embodiment, the truncated difference term may be a truncated quadratic term representing the difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi), wherein the output value of the truncated quadratic term is limited to a constant value if the difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi) is larger than the respective threshold value s. Otherwise, the value of the truncated quadratic term is equal to the square of the difference between the intermediate image data Id(xi) and the baseline estimation data fd(xi). The truncated quadratic term φ(fd(xi)) may be of the form:

$$\varphi(f_d(x_i)) = \begin{cases} \left( I_d(x_i) - f_d(x_i) \right)^2 & \text{if } I_d(x_i) - f_d(x_i) \le s \\ s^2 & \text{otherwise} \end{cases}$$
Using the truncated quadratic term φ(fd(xi)), the cost function C(fd(xi)) may be expressed as:
$$C(f_d(x_i)) = \sum_{i=1}^{m} \varphi(f_d(x_i))$$
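A numpy sketch of this truncated cost function, following the prose definition above (the cap value s² and the signed, asymmetric condition are assumptions drawn from that description):

```python
import numpy as np

def truncated_cost(Id: np.ndarray, f: np.ndarray, s: float) -> float:
    """C(f): quadratic in the residual, capped at s**2 above the threshold s."""
    residual = Id - f
    phi = np.where(residual > s, s ** 2, residual ** 2)   # truncated quadratic term
    return float(phi.sum())

Id = np.random.rand(256)
cost = truncated_cost(Id, f=np.zeros_like(Id), s=0.5)
```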
If a low-pass filter has been applied to the digital input image 104 in step 304, an adapted threshold value sd is computed in step 310 based on the predetermined threshold value s and the downsampling factor di as follows:
The adapted threshold value sd is then used instead of the threshold value s in the truncated quadratic term φ(fd(xi)).
As can be seen in the detail shown in
The first iterative stage 312 and the second iterative stage 314 together represent one iteration step, which is repeated until a convergence criterion 316 is met. A suitable convergence criterion may be that the sum, across all locations xi, of the differences between the baseline estimation data fd,l(xi) computed in the current iteration step and the baseline estimation data fd,l-1(xi) computed in the previous iteration step is smaller than a predetermined convergence threshold c, which may be obtained from external user input 176.
For example, the baseline estimation data fd(xi) may be computed using an iterative half-quadratic minimization scheme 318 aimed at minimizing the least-square minimization criterion M(fd(xi)). The iterative half-quadratic minimization scheme 318 may e.g. comprise at least part of the LEGEND algorithm, which is described in Idier, J. (2001): Convex Half-Quadratic Criteria and Interacting Variables for Image Restoration, IEEE Transactions on Image Processing, 10(7), p. 1001-1009, and in Mazet, V., Carteret, C., Brie, D., Idier, J., and Humbert, B. (2005): Background Removal from Spectra by Designing and Minimizing a Non-Quadratic Cost Function, Chemometrics and Intelligent Laboratory Systems, 76, p. 121-133. Both articles are herewith incorporated by reference in their entirety.
The LEGEND algorithm introduces discrete auxiliary data Dl(xi) that are preferably of the same dimensionality as the intermediate image data Id(xi). These auxiliary data are updated in each iteration step, preferably in the first iterative stage 312, based on the latest baseline estimation data and the intermediate image data. In particular, the auxiliary data Dl(xi) are updated as follows:
where l = 1 . . . L is the index of the current iteration step and α is a constant value, e.g. 0.493. In the second iterative stage 314, the baseline estimation data fd,l(xi) may be updated based on the previously calculated, updated auxiliary data Dl(xi), the baseline estimation data fd,l-1(xi) from the previous iteration step and the penalty term P(fd,l-1(xi)) using the following formula:
Alternatively, the baseline estimation data fd,l(xi) may be computed in the second iterative stage 314 using a convolution of a discrete Green's function G(xi) with the sum of the intermediate image data Id(xi) and the updated auxiliary data Dl(xi) as follows:
$$f_{d,l}(x_i) = G(x_i) * \left( I_d(x_i) + D_l(x_i) \right)$$
Without loss of generality, the Green's function G(xi) may have the following form for the one-dimensional case:
where F[·] is the discrete Fourier transform, F−1[·] is the inverse discrete Fourier transform, r is the regularization parameter and ∂/∂f denotes the functional derivative with respect to the baseline estimation data. In a multidimensional case, the regularization parameter may be different for each dimension: r = rj.
As a starting point, an initial set of baseline estimation data (e.g. fd,l=1(xi)=Id(xi)) and an initial set of auxiliary data (e.g. Dl=1(xi)=0) can be defined.
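The following numpy sketch assembles these pieces for the one-dimensional case. The exact update formulas are given in the cited articles and are not reproduced above, so this is an illustrative reconstruction, not the claimed implementation: the auxiliary-data update is one plausible reading of a half-quadratic scheme for the truncated quadratic (residuals above the threshold s are neutralized, residuals below it are rescaled by 2α − 1), and the second stage solves the resulting quadratic problem in the Fourier domain with a second-order penalty and periodic boundaries. The values of r, s and c are illustrative.

```python
import numpy as np

def baseline_legend_like(Id, r, s, alpha=0.493, c=1e-6, max_iter=100):
    """Iterative baseline estimate on 1-D intermediate image data Id (a sketch)."""
    n = Id.size
    # Fourier response of the circular second-derivative kernel [1, -2, 1].
    k = np.fft.fftfreq(n)
    d2 = 2.0 * np.cos(2.0 * np.pi * k) - 2.0
    green = 1.0 / (1.0 + r * d2 ** 2)     # F[G] for min ||y - f||^2 + r ||f''||^2

    f = Id.copy()                          # initial baseline estimate f_{d,l=1}
    D = np.zeros(n)                        # initial auxiliary data D_{l=1}
    for _ in range(max_iter):
        f_prev = f
        # First stage: update auxiliary data (assumed form, see lead-in above).
        residual = Id - f
        D = np.where(residual > s, -residual, (2.0 * alpha - 1.0) * residual)
        # Second stage: f = G * (Id + D), evaluated as a product in Fourier space.
        f = np.real(np.fft.ifft(green * np.fft.fft(Id + D)))
        # Convergence criterion 316: summed absolute change below threshold c.
        if np.abs(f - f_prev).sum() < c:
            break
    return f

# Smooth background plus a few sharp in-focus spikes as a toy example.
Id = np.sin(np.linspace(0, np.pi, 256)) + 0.2 * (np.random.rand(256) > 0.95)
fd = baseline_legend_like(Id, r=1e4, s=0.1)
```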
Optionally, the step 156 of upsampling the computed baseline estimate 102 by a predetermined upsampling factor 158 may follow and result in an upsampled baseline estimate 160. In particular, upsampled baseline estimation data fu(Xi) may be computed from the baseline estimation data fd(xi) by means of a nearest-neighbor interpolation, a linear interpolation, a bilinear interpolation, a Lanczos interpolation, AI-based interpolation algorithms or other types of upsampling techniques.
In a preferred embodiment, the number of baseline estimate locations is increased to the number of input image locations M during upsampling. In other words, the upsampling factor 158 is complementary to the downsampling factor 108. Thus, a full-size baseline estimate 160 can be obtained without actually having to compute any baseline estimate from the full-size digital input image 104.
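A sketch of nearest-neighbor upsampling back to the input grid, assuming the complementary factor; scipy.ndimage.zoom with order=0 performs nearest-neighbor interpolation, while order=1 would give linear interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

d = 4
fd = np.random.rand(128, 128)       # baseline estimate on the intermediate grid
fu = zoom(fd, zoom=d, order=0)      # upsampled baseline estimate fu(Xi)
assert fu.shape == (128 * d, 128 * d)
```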
Depending on the application, the baseline estimation data fd(xi) or the upsampled baseline estimation data fu(Xi) may be outputted as output image data 206 of a digital output image 110 in a subsequent step 208. The output image data 206 may be represented by a discrete array O(xi) or O(Xi) of discrete values:
$$O(x_i) = f_d(x_i) \quad \text{or} \quad O(X_i) = f_u(X_i)$$
Alternatively, the upsampled baseline estimate 160 may be removed from the digital input image 104 in an optional step 210 before the step 208. In particular, the upsampled baseline estimation data fu(Xi) are subtracted from the input image data I(Xi) in order to obtain the output image data O(Xi):
$$O(X_i) = I(X_i) - f_u(X_i)$$
This step is based on the above-mentioned assumption that the intensity and/or color changes across the digital input image 104 can be separated additively into the high spatial frequency in-focus component I1(Xi) and the low spatial frequency out-of-focus component I2(Xi) according to the following equation:
$$I(X_i) = I_1(X_i) + I_2(X_i)$$
By subtracting the upsampled baseline estimation data fu(Xi) from the input image data I(Xi), the in-focus component I1(Xi) is maintained in the output image data O(Xi), while the out-of-focus component I2(Xi), represented by the upsampled baseline estimation data fu(Xi), is removed. This yields an enhanced digital output image 110, in which the amount of blurred background is reduced.
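Combining the upsampling and removal steps (names reused from the sketches above; clipping negative values to zero is an additional assumption, since the subtraction may undershoot):

```python
import numpy as np

I = np.random.rand(512, 512)        # input image data I(Xi)
fu = np.zeros_like(I)               # upsampled baseline estimate fu(Xi)
O = np.clip(I - fu, 0.0, None)      # output O(Xi) = I(Xi) - fu(Xi), floored at zero
```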
The digital image processing apparatus 100 may comprise an image output section 162, which may comprise standardized connection means 164, such as standardized data exchange protocols, hardware connectors and/or wireless connections, each configured to output the output image data O(xi) or O(Xi) to a display 166 of the computer 130 or to binoculars 180 of the microscope 120. The output image data O(xi) have, in a preferred embodiment, the same dimensionality as the intermediate image data Id(xi), while the output image data O(Xi) have, in a preferred embodiment, the same dimensionality as the input image data I(Xi).
In frames A and B of
$$d_{\max,i} < \sqrt{\gamma_i} = \lambda_i$$
In a preferred embodiment, the maximum downsampling factor dmax is less than or equal to one fourth of the predetermined feature length λ. Equivalently, the maximum downsampling factor dmax is not larger than one fourth of the square root of the predetermined feature scale γ.
Thus, for a typical range of the feature scale γi between 100 and 400 pixels, the maximum downsampling factor dmax,i is given by 2.5 to 5.
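This rule of thumb can be checked directly (a sketch; the one-fourth rule is taken from the preceding paragraph):

```python
import math

def max_downsampling_factor(gamma: float) -> float:
    """d_max = sqrt(gamma) / 4, per the one-fourth rule of thumb above."""
    return math.sqrt(gamma) / 4.0

assert max_downsampling_factor(100) == 2.5
assert max_downsampling_factor(400) == 5.0
```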
In frame C of
Frame F in
Pairs 136 of different digital input images 104 and baseline estimates 102 may be created, wherein each baseline estimate 102 of a pair 136 is computed from the digital input image 104 of said pair 136 using at least one of a) an embodiment of the method and b) an embodiment of the apparatus 100. A machine learning product 134 for processing a given digital input image 104 may then be trained by the pairs 136 of different digital input images 104 and baseline estimates 102, in order to arrive at a machine learning product 134 configured to compute the baseline estimate 102 of the given digital input image 104.
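A sketch of how such pairs 136 might be assembled for supervised training, reusing the hypothetical helpers baseline_legend_like and zoom from the sketches above; the choice of model trained on these pairs is left open:

```python
import numpy as np
from scipy.ndimage import zoom

def make_training_pair(I: np.ndarray, d: int, r: float, s: float):
    """Pair a digital input image with the baseline computed by the method."""
    Id = I[::d]                                  # 1-D strided downsampling
    fd = baseline_legend_like(Id, r=r, s=s)      # baseline on the intermediate grid
    fu = zoom(fd, zoom=d, order=0)               # upsample back to the input size
    return I, fu[: I.size]

pairs = [make_training_pair(np.random.rand(1024), d=4, r=1e4, s=0.1)
         for _ in range(100)]                    # pairs 136 for supervised training
```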
As used herein the term “and/or” includes any and all combinations of one or more of the associated listed items and may be abbreviated as “/”.
Although some aspects have been described in the context of an embodiment of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Some embodiments relate to a microscope comprising a system as described in connection with one or more of the
The computer system 930 may be a local computer device (e.g. personal computer, laptop, tablet computer or mobile phone) with one or more processors and one or more storage devices or may be a distributed computer system (e.g. a cloud computing system with one or more processors and one or more storage devices distributed at various locations, for example, at a local client and/or one or more remote server farms and/or data centers). The computer system 930 may comprise any circuit or combination of circuits. In one embodiment, the computer system 930 may include one or more processors which can be of any type. As used herein, processor may mean any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), multiple core processor, a field programmable gate array (FPGA), for example, of a microscope or a microscope component (e.g. camera) or any other type of processor or processing circuit. Other types of circuits that may be included in the computer system 930 may be a custom circuit, an application-specific integrated circuit (ASIC), or the like, such as, for example, one or more circuits (such as a communication circuit) for use in wireless devices like mobile telephones, tablet computers, laptop computers, two-way radios, and similar electronic systems. The computer system 930 may include one or more storage devices, which may include one or more memory elements suitable to the particular application, such as a main memory in the form of random access memory (RAM), one or more hard drives, and/or one or more drives that handle removable media such as compact disks (CD), flash memory cards, digital video disk (DVD), and the like. The computer system 930 may also include a display device, one or more speakers, and a keyboard and/or controller, which can include a mouse, trackball, touch screen, voice-recognition device, or any other device that permits a system user to input information into and receive information from the computer system 930.
Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a processor, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the present invention is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the present invention is, therefore, a storage medium (or a data carrier, or a computer-readable medium) comprising, stored thereon, the computer program for performing one of the methods described herein when it is performed by a processor. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the present invention is an apparatus as described herein comprising a processor and the storage medium.
A further embodiment of the invention is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
A further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. An embodiment of the apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are, in a preferred embodiment, performed by any hardware apparatus.
Embodiments may be based on using a machine-learning model or machine-learning algorithm. Machine learning may refer to algorithms and statistical models that computer systems may use to perform a specific task without using explicit instructions, instead relying on models and inference. For example, in machine-learning, instead of a rule-based transformation of data, a transformation of data may be used that is inferred from an analysis of historical and/or training data. For example, the content of images may be analyzed using a machine-learning model or using a machine-learning algorithm. In order for the machine-learning model to analyze the content of an image, the machine-learning model may be trained using training images as input and training content information as output. By training the machine-learning model with a large number of training images and/or training sequences (e.g. words or sentences) and associated training content information (e.g. labels or annotations), the machine-learning model "learns" to recognize the content of the images, so the content of images that are not included in the training data can be recognized using the machine-learning model. The same principle may be used for other kinds of sensor data as well: by training a machine-learning model using training sensor data and a desired output, the machine-learning model "learns" a transformation between the sensor data and the output, which can be used to provide an output based on non-training sensor data provided to the machine-learning model. The provided data (e.g. sensor data, meta data and/or image data) may be preprocessed to obtain a feature vector, which is used as input to the machine-learning model.
Machine-learning models may be trained using training input data. The examples specified above use a training method called "supervised learning". In supervised learning, the machine-learning model is trained using a plurality of training samples, wherein each sample may comprise a plurality of input data values, and a plurality of desired output values, i.e. each training sample is associated with a desired output value. By specifying both training samples and desired output values, the machine-learning model "learns" which output value to provide based on an input sample that is similar to the samples provided during the training. Apart from supervised learning, semi-supervised learning may be used. In semi-supervised learning, some of the training samples lack a corresponding desired output value. Supervised learning may be based on a supervised learning algorithm (e.g. a classification algorithm, a regression algorithm or a similarity learning algorithm). Classification algorithms may be used when the outputs are restricted to a limited set of values (categorical variables), i.e. the input is classified to one of the limited set of values. Regression algorithms may be used when the outputs may have any numerical value (within a range). Similarity learning algorithms may be similar to both classification and regression algorithms but are based on learning from examples using a similarity function that measures how similar or related two objects are. Apart from supervised or semi-supervised learning, unsupervised learning may be used to train the machine-learning model. In unsupervised learning, (only) input data might be supplied and an unsupervised learning algorithm may be used to find structure in the input data (e.g. by grouping or clustering the input data, finding commonalities in the data). Clustering is the assignment of input data comprising a plurality of input values into subsets (clusters) so that input values within the same cluster are similar according to one or more (pre-defined) similarity criteria, while being dissimilar to input values that are included in other clusters.
Reinforcement learning is a third group of machine-learning algorithms. In other words, reinforcement learning may be used to train the machine-learning model. In reinforcement learning, one or more software actors (called “software agents”) are trained to take actions in an environment. Based on the taken actions, a reward is calculated. Reinforcement learning is based on training the one or more software agents to choose the actions such, that the cumulative reward is increased, leading to software agents that become better at the task they are given (as evidenced by increasing rewards).
Furthermore, some techniques may be applied to some of the machine-learning algorithms. For example, feature learning may be used. In other words, the machine-learning model may at least partially be trained using feature learning, and/or the machine-learning algorithm may comprise a feature learning component. Feature learning algorithms, which may be called representation learning algorithms, may preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. Feature learning may be based on principal components analysis or cluster analysis, for example.
In some examples, anomaly detection (i.e. outlier detection) may be used, which is aimed at providing an identification of input values that raise suspicions by differing significantly from the majority of input or training data. In other words, the machine-learning model may at least partially be trained using anomaly detection, and/or the machine-learning algorithm may comprise an anomaly detection component.
In some examples, the machine-learning algorithm may use a decision tree as a predictive model. In other words, the machine-learning model may be based on a decision tree. In a decision tree, observations about an item (e.g. a set of input values) may be represented by the branches of the decision tree, and an output value corresponding to the item may be represented by the leaves of the decision tree. Decision trees may support both discrete values and continuous values as output values. If discrete values are used, the decision tree may be denoted a classification tree; if continuous values are used, the decision tree may be denoted a regression tree.
Association rules are a further technique that may be used in machine-learning algorithms. In other words, the machine-learning model may be based on one or more association rules. Association rules are created by identifying relationships between variables in large amounts of data. The machine-learning algorithm may identify and/or utilize one or more relational rules that represent the knowledge that is derived from the data. The rules may e.g. be used to store, manipulate or apply the knowledge.
Machine-learning algorithms are usually based on a machine-learning model. In other words, the term “machine-learning algorithm” may denote a set of instructions that may be used to create, train or use a machine-learning model. The term “machine-learning model” may denote a data structure and/or set of rules that represents the learned knowledge (e.g. based on the training performed by the machine-learning algorithm). In embodiments, the usage of a machine-learning algorithm may imply the usage of an underlying machine-learning model (or of a plurality of underlying machine-learning models). The usage of a machine-learning model may imply that the machine-learning model and/or the data structure/set of rules that is the machine-learning model is trained by a machine-learning algorithm.
For example, the machine-learning model may be an artificial neural network (ANN). ANNs are systems that are inspired by biological neural networks, such as can be found in a retina or a brain. ANNs comprise a plurality of interconnected nodes and a plurality of connections, so-called edges, between the nodes. There are usually three types of nodes: input nodes that receive input values, hidden nodes that are (only) connected to other nodes, and output nodes that provide output values. Each node may represent an artificial neuron. Each edge may transmit information from one node to another. The output of a node may be defined as a (non-linear) function of its inputs (e.g. of the sum of its inputs). The inputs of a node may be used in the function based on a "weight" of the edge or of the node that provides the input. The weight of nodes and/or of edges may be adjusted in the learning process. In other words, the training of an artificial neural network may comprise adjusting the weights of the nodes and/or edges of the artificial neural network, i.e. to achieve a desired output for a given input.
Alternatively, the machine-learning model may be a support vector machine, a random forest model or a gradient boosting model. Support vector machines (i.e. support vector networks) are supervised learning models with associated learning algorithms that may be used to analyze data (e.g. in classification or regression analysis). Support vector machines may be trained by providing an input with a plurality of training input values that belong to one of two categories. The support vector machine may be trained to assign a new input value to one of the two categories. Alternatively, the machine-learning model may be a Bayesian network, which is a probabilistic directed acyclic graphical model. A Bayesian network may represent a set of random variables and their conditional dependencies using a directed acyclic graph. Alternatively, the machine-learning model may be based on a genetic algorithm, which is a search algorithm and heuristic technique that mimics the process of natural selection.
While subject matter of the present disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. Any statement made herein characterizing the invention is also to be considered illustrative or exemplary and not restrictive as the invention is defined by the claims. It will be understood that changes and modifications may be made, by those of ordinary skill in the art, within the scope of the following claims, which may include any combination of features from different embodiments described above.
The terms used in the claims should be construed to have the broadest reasonable interpretation consistent with the foregoing description. For example, the use of the article “a” or “the” in introducing an element should not be interpreted as being exclusive of a plurality of elements. Likewise, the recitation of “or” should be interpreted as being inclusive, such that the recitation of “A or B” is not exclusive of “A and B,” unless it is clear from the context or the foregoing description that only one of A and B is intended. Further, the recitation of “at least one of A, B and C” should be interpreted as one or more of a group of elements consisting of A, B and C, and should not be interpreted as requiring at least one of each of the listed elements A, B and C, regardless of whether A, B and C are related as categories or otherwise. Moreover, the recitation of “A, B and/or C” or “at least one of A, B or C” should be interpreted as including any singular entity from the listed elements, e.g., A, any subset from the listed elements, e.g., A and B, or the entire list of elements A, B and C.