A decoder is normally used to decode a bitstream (e.g., received or stored on a storage device). The signal may nevertheless be subjected to noise, such as, for example, quantization noise. Attenuation of this noise is therefore an important goal.
According to an embodiment, a decoder for decoding a frequency-domain input signal defined in a bitstream, the frequency-domain input signal being subjected to noise, may have:
According to another embodiment, a decoder for decoding a frequency-domain input signal defined in a bitstream, the frequency-domain input signal being subjected to noise, may have:
According to another embodiment, a method for decoding a frequency-domain input signal defined in a bitstream, the frequency-domain input signal being subjected to noise, may have the steps of:
According to yet another embodiment, a method for decoding a frequency-domain input signal defined in a bitstream, the frequency-domain input signal being subjected to noise, may have the steps of:
According to yet another embodiment, a non-transitory digital storage medium may have a computer program stored thereon to perform the inventive methods, when said computer program is run by a computer.
In accordance with an aspect, there is provided here a decoder for decoding a frequency-domain signal defined in a bitstream, the frequency-domain input signal being subjected to quantization noise, the decoder comprising:
In accordance with an aspect, there is disclosed here a decoder for decoding a frequency-domain signal defined in a bitstream, the frequency-domain input signal being subjected to noise, the decoder comprising:
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
According to an aspect, the noise is noise which is not quantization noise. According to an aspect, the noise is quantization noise.
According to an aspect, the context definer is configured to choose the at least one additional bin among previously processed bins.
According to an aspect, the context definer is configured to choose the at least one additional bin based on the band of the bin.
According to an aspect, the context definer is configured to choose the at least one additional bin, within a predetermined threshold, among those which have already been processed.
According to an aspect, the context definer is configured to choose different contexts for bins at different bands.
According to an aspect, the value estimator is configured to operate as a Wiener filter to provide an optimal estimation of the input signal.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process from at least one sampled value of the at least one additional bin.
According to an aspect, the decoder further comprises a measurer configured to provide a measured value associated to the previously performed estimate(s) of the at least one additional bin of the context,
According to an aspect, the measured value is a value associated to the energy of the at least one additional bin of the context.
According to an aspect, the measured value is a gain associated to the at least one additional bin of the context.
According to an aspect, the measurer is configured to obtain the gain as the scalar product of vectors, wherein a first vector contains value(s) of the at least one additional bin of the context, and the second vector is the transpose conjugate of the first vector.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information as pre-defined estimates and/or expected statistical relationships between the bin under process and the at least one additional bin of the context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information as relationships based on positional relationships between the bin under process and the at least one additional bin of the context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information irrespective of the values of the bin under process and/or the at least one additional bin of the context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information in the form of variance, covariance, correlation and/or autocorrelation values.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information in the form of a matrix establishing relationships of variance, covariance, correlation and/or autocorrelation values between the bin under process and/or the at least one additional bin of the context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information in the form of a normalized matrix establishing relationships of variance, covariance, correlation and/or autocorrelation values between the bin under process and/or the at least one additional bin of the context.
According to an aspect, the matrix is obtained by offline training.
According to an aspect, the value estimator is configured to scale elements of the matrix by an energy-related or gain value, so as to take into account the energy and/or gain variations of the bin under process and/or the at least one additional bin of the context.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of a relationship
x̂ = Λ_X(Λ_X + Λ_N)^{-1} y,
where Λ_X and Λ_N are (c+1)×(c+1) speech and noise covariance matrices, respectively, and y is a noisy observation vector with c+1 dimensions, c being the context length.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin (123) under process on the basis of a relationship
x̂ = γΛ_X(γΛ_X + Λ_N)^{-1} y,
where Λ_X is a (c+1)×(c+1) normalized covariance matrix, Λ_N is the (c+1)×(c+1) noise covariance matrix, y is a noisy observation vector with c+1 dimensions associated to the bin under process and the additional bins of the context, c being the context length, and γ is a scaling gain.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process provided that the sampled values of each of the additional bins of the context correspond to the estimated value of the additional bins of the context.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process provided that the sampled value of the bin under process is expected to be between a ceiling value and a floor value.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of a maximum of a likelihood function.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of an expected value.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of the expectation of a multivariate Gaussian random variable.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of the expectation of a conditional multivariate Gaussian random variable.
According to an aspect, the sampled values are in the Log-magnitude domain.
According to an aspect, the sampled values are in the perceptual domain.
According to an aspect, the statistical relationship and/or information estimator is configured to provide an average value of the signal to the value estimator.
According to an aspect, the statistical relationship and/or information estimator is configured to provide an average value of the clean signal on the basis of variance-related and/or covariance-related relationships between the bin under process and at least one additional bin of the context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide an average value of the clean signal on the basis of the expected value of the bin (123) under process.
According to an aspect, the statistical relationship and/or information estimator is configured to update an average value of the signal based on the estimated context.
According to an aspect, the statistical relationship and/or information estimator is configured to provide a variance-related and/or standard-deviation-value-related value to the value estimator.
According to an aspect, the statistical relationship and/or information estimator is configured to provide a variance-related and/or standard-deviation-value-related value on the basis of variance-related and/or covariance-related relationships between the bin under process and at least one additional bin of the context to the value estimator.
According to an aspect, the noise relationship and/or information estimator is configured to provide, for each bin, a ceiling value and a floor value for estimating the signal on the basis of the expectation of the signal to be between the ceiling and the floor value.
According to an aspect, the version of the input signal has a quantized value which is a quantization level, the quantization level being a value chosen from a discrete number of quantization levels.
According to an aspect, the number and/or values and/or scales of the quantization levels are signaled by the encoder and/or signaled in the bitstream.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process in terms of
x̂ = E[P(X | X_c = x̂_c)], subject to l ≤ X ≤ u,
where x̂ is the estimate of the bin under process, l and u are the lower and upper limits of the current quantization bins, respectively, and P(a1|a2) is the conditional probability of a1, given a2, x̂_c being an estimated context vector.
According to an aspect, the value estimator is configured to obtain the estimate of the value of the bin under process on the basis of the expectation
wherein X is a particular value of the bin under process, expressed as a truncated Gaussian random variable with l < X < u, where l is the floor value and u is the ceiling value,
μ = E(X), and μ and σ are the mean and standard deviation of the distribution.
According to an aspect, the predetermined positional relationship is obtained by offline training.
According to an aspect, at least one of the statistical relationships and/or information between the bin under process and the at least one additional bin, and/or information regarding them, is obtained by offline training.
According to an aspect, at least one of the quantization noise relationships and/or information are obtained by offline training.
According to an aspect, the input signal is an audio signal.
According to an aspect, the input signal is a speech signal.
According to an aspect, at least one among the context definer, the statistical relationship and/or information estimator, the noise relationship and/or information estimator, and the value estimator is configured to perform a post-filtering operation to obtain a clean estimation of the input signal.
According to an aspect, the context definer is configured to define the context with a plurality of additional bins.
According to an aspect, the context definer is configured to define the context as a simply connected neighbourhood of bins in a frequency/time graph.
According to an aspect, the bitstream reader is configured to avoid the decoding of inter-frame information from the bitstream.
According to an aspect, the decoder is further configured to determine the bitrate of the signal, and, in case the bitrate is above a predetermined bitrate threshold, to bypass at least one among the context definer, the statistical relationship and/or information estimator, the noise relationship and/or information estimator, the value estimator.
According to an aspect, the decoder further comprises a processed bins storage unit storing information regarding the previously processed bins,
According to an aspect, the context definer is configured to define the context using at least one non-processed bin as at least one of the additional bins.
According to an aspect, the statistical relationship and/or information estimator is configured to provide the statistical relationships and/or information in the form of a matrix establishing relationships of variance, covariance, correlation and/or autocorrelation values between the bin under process and/or the at least one additional bin of the context,
According to an aspect, the noise relationship and/or information estimator is configured to provide the statistical relationships and/or information regarding noise in the form of a matrix establishing relationships of variance, covariance, correlation and/or autocorrelation values associated to the noise,
There is also provided a system comprising an encoder and a decoder according to any of the aspects above and/or below, the encoder being configured to provide the bitstream with the encoded input signal.
In examples, there is provided a method comprising:
In examples, there is provided a method comprising:
One of the methods above may use the equipment of any of the aspects above and/or below.
In examples, there is provided a non-transitory storage unit storing instructions which, when executed by a processor, cause the processor to perform any of the methods of any of the aspects above and/or below.
The decoder 110 may decode a frequency-domain input signal encoded in a bitstream 111 (digital data stream) which has been generated by an encoder. The bitstream 111 may have been stored, for example, in a memory, or transmitted to a receiver device associated to the decoder 110.
When generating the bitstream, the frequency-domain input signal may have been subjected to quantization noise. In other examples, the frequency-domain input signal may be subjected to other types of noise. Hereinbelow, techniques are described which permit avoiding, limiting or reducing the noise.
The decoder 110 may comprise a bitstream reader 113 (communication receiver, mass memory reader, etc.). The bitstream reader 113 may provide, from the bitstream 111, a version 113′ of the original input signal (represented with 120 in
The bitstream 111 (and the signal 113′, 120, consequently) may be provided in such a way that each time/frequency bin is associated to a particular value (e.g., sampled value). The sampled value is in general expressed as Y(k, t) and may be, in some cases, a complex value. In some examples, the sampled value Y(k, t) may be the only knowledge that the decoder 110 has regarding the original signal at the time slot t and the band k. Accordingly, the sampled value Y(k, t) is in general impaired by quantization noise, as the necessity of quantizing the original input signal at the encoder has introduced errors of approximation when generating the bitstream and/or when digitizing the original analog signal. (Other types of noise may also be modelled in other examples.) The sampled value Y(k, t) (noisy speech) may be understood as being expressed in terms of
Y(k,t)=X(k,t)+V(k,t),
with X(k, t) being the clean signal (which would advantageously be obtained) and V(k, t) being the quantization noise signal (or another type of noise signal). It has been noted that it is possible to arrive at an appropriate, optimal estimate of the clean signal with the techniques described here.
Operations may provide that each bin is processed at one particular time, e.g. recursively. At each iteration, a bin to be processed is identified (e.g., bin 123 or C0, in
It is possible to obtain, for one bin 123 under process, an optimal estimate on the basis of at least one additional bin (which may be one of the squared bins in
The decoder 110 may comprise a context definer 114 which defines a context 114′ (or context block) for one bin 123 (C0) under process. The context 114′ includes at least one additional bin (e.g., a group of bins) in a predetermined positional relationship with the bin 123 under process. In the example of
(In the subsequent parts of the present document, “context bin” may be used to indicate an “additional bin” 124 of the context.)
In examples, after having processed all the bins of a generic tth frame, all the bins of the subsequent (t+1)th frame may be processed. For each generic tth frame, all the bins of the tth frame may be iteratively processed. Other sequences and/or paths may notwithstanding be provided.
For each tth frame, the positional relationships between the bin 123 (C0) under process and the additional bins 124 forming the context 114′ (120) may therefore be defined on the basis of the particular band k of the bin 123 (C0) under process. When, during a previous iteration, the under-process bin was the bin currently indicated as C6 (t=4, k=1), a different shape of the context had been chosen, as there are no bands defined under k=1. However, when the under-process bin was the bin at t=3, k=3 (currently indicated as C1), the context had the same shape as the context of
Therefore, the context definer 114 may be a unit which iteratively, for each bin 123 (C0) under process, retrieves additional bins 124 (118′, C1-C10) to form a context 114′ containing already-processed bins having an expected high correlation with the bin 123 (C0) under process (in particular, the shape of the context may be based on the particular frequency of the bin 123 under process).
The decoder 110 may comprise a statistical relationship and/or information estimator 115 to provide statistical relationships and/or information 115′, 119′ between the bin 123 (C0) under process and the context bins 118′, 124. The statistical relationship and/or information estimator 115 may include a quantization noise relationship and/or information estimator 119 to estimate relationships and/or information regarding the quantization noise 119′ and/or statistical noise-related relationships between the noise affecting each bin 124 (C1-C10) of the context 114′ and/or the bin 123 (C0) under process.
In examples, an expected relationship 115′ may comprise a matrix (e.g., a covariance matrix) containing expected covariance relationships (or other expected statistical relationships) between bins (e.g., the bin C0 under process and the additional bins of the context C1-C10). The matrix may be a square matrix for which each row and each column is associated to a bin. Therefore, the dimensions of the matrix may be (c+1)×(c+1) (e.g., 11 in the example of
In examples, an expected noise relationship and/or information 119′ may be formed by a statistical relationship. In this case, however, the statistical relationship may refer to the quantization noise. Different covariances may be used for different frequency bands.
In examples, the quantization noise relationship and/or information 119′ may comprise a matrix (e.g., a covariance matrix) containing expected covariance relationships (or other expected statistical relationships) between the quantization noise affecting the bins. The matrix may be a square matrix for which each row and each column is associated to a bin. Therefore, the dimensions of the matrix may be (c+1)×(c+1) (e.g., 11). In examples, each element of the matrix may indicate an expected covariance (and/or correlation, and/or another statistical relationship) between the quantization noise impairing the bin associated to the row and the bin associated to the column. The covariance matrix may be Hermitian (symmetric in the case of real coefficients). The matrix may comprise, in the diagonal, a variance value associated to each bin. In examples, instead of a matrix, other forms of mappings may be used.
It has been noted that, by processing the sampled value Y(k, t) using expected statistical relationships between the bins, a better estimation of the clean value X(k, t) may be obtained.
The decoder 110 may comprise a value estimator 116 to process and obtain an estimate 116′ of the value X(k, t) (at the bin 123 under process, C0) of the signal 113′ on the basis of the expected statistical relationships and/or information 115′ and/or the statistical relationships and/or information 119′ regarding quantization noise.
The estimate 116′, which is a good estimate of the clean value X(k, t), may therefore be provided to an FD-to-TD transformer 117, to obtain an enhanced TD output signal 112.
The estimate 116′ may be stored onto a processed bins storage unit 118 (e.g., in association with the time instant t and/or the band k). The stored value of the estimate 116′ may, in subsequent iterations, provide the already processed estimate 116′ to the context definer 114 as additional bin 118′ (see above), so as to define the context bins 124.
In examples, the estimated statistical relationship and/or information 115′ may comprise a normalized matrix Λx. The normalized matrix may be a normalized correlation matrix and may be independent from the particular sampled value Y(k, t). The normalized matrix Λx may be a matrix which contains relationships among the bins C0-C10, for example. The normalized matrix Λx may be static and may be stored, for example, in a memory.
In examples, the estimated statistical relationship and/or information regarding quantization noise 119′ may comprise a noise matrix ΛN. This matrix may be a correlation matrix and may represent relationships regarding the noise signal V(k, t), independent of the particular sampled value Y(k, t). The noise matrix ΛN may be a matrix which estimates relationships among the noise signals affecting the bins C0-C10, for example, independently of the clean speech value X(k, t).
In examples, a measurer 131 (e.g., gain estimator) may provide a measured value 131′ of the previously performed estimate(s) 116′. The measured value 131′ may be, for example, an energy value and/or a gain γ of the previously performed estimate(s) 116′ (the energy value and/or gain γ may therefore be dependent on the context 114′). In general terms, the estimate 116′ and the value 113′ of the bin 123 under process may be seen as forming a vector u_{k,t} whose components are associated to the bin under process and to the previously obtained estimates of the context bins; this vector may be normalized to obtain a normalized vector z_{k,t}.
It is also possible to obtain the gain γ as the scalar product of the normalized vector by its transpose conjugate, e.g., to obtain γ = z_{k,t} z_{k,t}^H (where z_{k,t}^H is the transpose conjugate of z_{k,t}, so that γ is a real scalar).
A scaler 132 may be used to scale the normalized matrix Λ_X by the gain γ, to obtain a scaled matrix 132′ which takes into account the energy measurement (and/or gain γ) associated to the context of the bin 123 under process. This takes into account the fact that speech signals have large fluctuations in gain. A new matrix Λ̂_X, which takes the energy into account, may therefore be obtained. Notably, while the matrix Λ_X and the matrix Λ_N may be predefined (and/or contain elements pre-stored in a memory), the matrix Λ̂_X is actually calculated during processing. In alternative examples, instead of calculating the matrix Λ̂_X, a matrix Λ̂_X may be chosen from a plurality of pre-stored matrices, each pre-stored matrix being associated to a particular range of measured gain and/or energy values.
After having calculated or chosen the matrix Λ̂_X, an adder 133 may be used to add, element by element, the elements of the matrix Λ̂_X to the elements of the noise matrix Λ_N, to obtain an added value 133′ (summed matrix Λ̂_X + Λ_N). In alternative examples, instead of being calculated, the summed matrix Λ̂_X + Λ_N may be chosen, on the basis of the measured gain and/or energy values, among a plurality of pre-stored summed matrices.
At inversion block 134, the summed matrix Λ̂_X + Λ_N may be inverted to obtain (Λ̂_X + Λ_N)^{-1} as value 134′. In alternative examples, instead of being calculated, the inverted matrix (Λ̂_X + Λ_N)^{-1} may be chosen, on the basis of the measured gain and/or energy values, among a plurality of pre-stored inverted matrices.
The inverted matrix (Λ̂_X + Λ_N)^{-1} (value 134′) may be multiplied by Λ̂_X to obtain a value 135′ as Λ̂_X(Λ̂_X + Λ_N)^{-1}. In alternative examples, instead of being calculated, the matrix Λ̂_X(Λ̂_X + Λ_N)^{-1} may be chosen, on the basis of the measured gain and/or energy values, among a plurality of pre-stored matrices.
At this point, at a multiplier 136, the value 135′ may be multiplied by the input signal vector y. The input signal vector y may be seen as a (c+1)-dimensional vector whose components are associated to the bin 123 under process (e.g., its noisy sampled value) and to the additional bins 124 of the context (e.g., their previously obtained estimates 118′).
The output 136′ of the multiplier 136 may therefore be x̂ = Λ̂_X(Λ̂_X + Λ_N)^{-1} y, as for a Wiener filter.
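As an illustration of the chain formed by the measurer 131, the scaler 132, the adder 133, the inversion block 134 and the multiplier 136, a minimal Python/NumPy sketch is given below. The function and variable names are illustrative, and the exact normalization used by the measurer in a real decoder may differ; the matrices Λ_X and Λ_N are assumed to be available from offline training.

    import numpy as np

    def estimate_bin(y_vec, Lambda_X, Lambda_N):
        # y_vec: (c+1)-dimensional vector; y_vec[0] is the noisy value of the bin under
        # process, y_vec[1:] are the previously obtained estimates of the context bins.
        # Lambda_X: pre-stored normalized speech covariance; Lambda_N: noise covariance.
        context = y_vec[1:]
        # Measurer 131: energy/gain of the context (scalar product with the conjugate
        # transpose); a small floor avoids a degenerate all-zero context.
        gamma = max(float(np.real(np.vdot(context, context))), 1e-12)
        # Scaler 132: scale the normalized covariance by the measured gain.
        Lambda_X_scaled = gamma * Lambda_X
        # Adder 133 and inversion block 134, applied to y via a linear solve.
        filtered = np.linalg.solve(Lambda_X_scaled + Lambda_N, y_vec)
        # Multiplier 136: x_hat = Lambda_X_scaled (Lambda_X_scaled + Lambda_N)^{-1} y.
        x_hat = Lambda_X_scaled @ filtered
        return x_hat[0]  # estimate 116' of the bin under process

In a real decoder, the scaled, summed and/or inverted matrices may instead be selected from pre-stored tables indexed by the measured gain and/or energy values, as described above.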
In
Reference is made to
It is possible to establish an optimal estimation of the value 116′ of each bin as the expectation of the conditional likelihood of the value X being between the ceiling value u and the floor value l, provided that the quantized sampled values of the bin 123 (C0) under process and of the context bins 124 are equal to the estimated values of the bin under process and of the additional bins of the context, respectively. In this way, it is possible to estimate the magnitude of the bin 123 (C0) under process. It is possible to obtain the expectation value on the basis of the mean values (μ) of the clean values X and the standard deviation value (σ), which may be provided by the statistical relationship and/or information estimator, for example.
It is possible to obtain the mean values (μ) of the clean values X and the standard deviation values (σ) on the basis of a procedure, discussed in detail below, which may be iterative.
For example (see also 4.1.3 and its subsections), the mean value of the clean signal X may be obtained by updating a non-conditional average value (μ1), calculated for the bin 123 under process without considering any context, to obtain a new average value (μup) which considers the context bins 124 (C1-C10). At each iteration, the non-conditional average value (μ1) may be modified using the difference between the estimated values (expressed with the vector x̂_c) of the context bins and the average values (expressed with the vector μ2) of the context bins 124. This difference may be multiplied by values associated to the covariance and/or variance between the bin 123 (C0) under process and the context bins 124 (C1-C10).
The standard deviation value (σ) may be obtained from variance and covariance relationships (e.g., the covariance matrix Σ ∈ ℝ^{(C+1)×(C+1)}) between the bin 123 (C0) under process and the context bins 124 (C1-C10).
An example of a method for obtaining the expectation (and therefore for estimating the X value 116′) may be provided by the following pseudocode:
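The pseudocode itself is not reproduced here; the following Python sketch illustrates one possible implementation, under the assumption that the statistics are given as a mean vector mu and covariance Sigma of [bin under process, context bins] in the log-magnitude domain, that x_hat_c holds the previously obtained estimates of the context bins, and that [l, u] is the quantization interval of the bin under process. Helper names are illustrative only.

    import math
    import numpy as np

    def normal_pdf(t):
        return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

    def normal_cdf(t):
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    def truncated_mean(mu, sigma, l, u):
        # Expectation of X ~ N(mu, sigma^2) restricted to l < X < u.
        a, b = (l - mu) / sigma, (u - mu) / sigma
        denom = normal_cdf(b) - normal_cdf(a)
        if denom <= 0.0:
            return min(max(mu, l), u)  # degenerate interval: clamp
        return mu + sigma * (normal_pdf(a) - normal_pdf(b)) / denom

    def estimate_value(mu, Sigma, x_hat_c, l, u):
        # Conditional statistics of the bin under process given the estimated context
        # (cf. Eqs. 3.7 and 3.8 below), then the expectation of the truncated Gaussian.
        mu1, mu2 = mu[0], mu[1:]
        S12, S21, S22 = Sigma[0, 1:], Sigma[1:, 0], Sigma[1:, 1:]
        mu_up = mu1 + S12 @ np.linalg.solve(S22, x_hat_c - mu2)
        var_up = Sigma[0, 0] - S12 @ np.linalg.solve(S22, S21)
        return truncated_mean(float(mu_up), math.sqrt(max(float(var_up), 1e-12)), l, u)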
Examples in this section and in its subsections mainly relate to techniques for postfiltering with complex spectral correlations for speech and audio coding.
In the present examples, the following figures are mentioned:
Examples in this section and in the subsection may also refer to and/or explain in detail examples of
Present speech codecs achieve a good compromise between quality, bitrate and complexity. However, retaining performance outside the target bitrate range remains challenging. To improve performance, many codecs use pre- and post-filtering techniques to reduce the perceptual effect of quantization noise. Here, we propose a postfiltering method to attenuate quantization noise which uses the complex spectral correlations of speech signals. Since conventional speech codecs cannot transmit information with temporal dependencies, as transmission errors could result in severe error propagation, we model the correlations offline and employ them at the decoder, hence removing the need to transmit any side information. Objective evaluation indicates an average 4 dB improvement in the perceptual SNR of signals using the context-based post-filter, with respect to the noisy signal, and an average 2 dB improvement relative to the conventional Wiener filter. These results are confirmed by an improvement of up to 30 MUSHRA points in a subjective listening test.
Speech coding, the process of compressing speech signals for efficient transmission and storage, is an essential component in speech processing technologies. It is employed in almost all devices involved in the transmission, storage or rendering of speech signals. While standard speech codecs achieve transparent performance around target bitrates, the performance of codecs suffers in terms of efficiency and complexity outside the target bitrate range [5].
Specifically at lower bitrates the degradation in performance is because large parts of the signal are quantized to zero, yielding a sparse signal which frequently toggles between zero and non-zero. This gives a distorted quality to the signal, which is perceptually characterized as musical noise. Modern codecs like EVS, USAC [3, 15] reduce the effect of quantization noise by implementing postprocessing methods [5, 14]. Many of these methods have to be implemented both at the encoder and decoder, hence involving changes to the core structure of the codec, and sometimes also the transmission of additional side information. Moreover, most of these methods focus on alleviating the effect of distortions rather than the cause for distortions.
The noise reduction techniques widely adopted in speech processing are often employed as pre-filters to reduce background noise in speech coding. However, the application of these methods to the attenuation of quantization noise has not been fully explored yet. The reasons for this are (i) information from zero-quantized bins cannot be restored by using conventional filtering techniques alone, and (ii) quantization noise is highly correlated to speech at low bitrates, thus discriminating between speech and quantization-noise distributions for noise reduction is difficult; these are further discussed in Sec. 4.1.2.2.
Fundamentally, speech is a slowly varying signal, whereby it has a high temporal correlation [9]. Recently, MVDR and Wiener filters using the intrinsic temporal and frequency correlation in speech were proposed and showed significant noise reduction potential [1, 9, 13]. However, speech codecs refrain from transmitting information with such temporal dependency to avoid error propagation as a consequence of information loss. Therefore, application of speech correlation for speech coding or the attenuation of quantization noise has not been sufficiently studied, until recently; an accompanying paper [10] presents the advantages of incorporating the correlations in the speech magnitude spectrum for quantization noise reduction.
The contributions of this work are as follows: (i) modeling the complex speech spectrum to incorporate the contextual information intrinsic in speech, (ii) formulating the problem such that the models are independent of the large fluctuations in speech signals and the correlation recurrence between samples enables us to incorporate much larger contextual information, (iii) obtaining an analytical solution such that the filter is optimal in minimum mean square error sense. We begin by examining the possibility of applying conventional noise reduction techniques for the attenuation of quantization noise, and then model the complex speech spectrum and use it at the decoder to estimate speech from an observation of the corrupted signal. This approach removes the need for the transmission of any additional side information.
At low bitrates conventional entropy coding methods yield a sparse signal, which often causes a perceptual artifact known as musical noise. Information from such spectral holes cannot be recovered by conventional approaches like Wiener filtering, because they mostly modify the gain. Moreover, common noise reduction techniques used in speech processing model the speech and noise characteristics and perform reduction by discriminating between them. However, at low bitrates quantization noise is highly correlated with the underlying speech signal, hence making it difficult to discriminate between them.
To mitigate these problems, we can apply randomization before encoding the signal [2, 7, 18]. Randomization is a type of dithering [11] which has been previously used in speech codecs [19] to improve perceptual signal quality, and recent works [6, 18] enable us to apply randomization without increase in bitrate. The effect of applying randomization in coding is demonstrated in
Due to dithering, we can assume that the quantization noise is an additive and uncorrelated normally distributed process,
Y_{k,t} = X_{k,t} + V_{k,t},   (2.1)
where Y, X and V are the complex-valued short-time frequency domain values of the noisy, clean-speech and noise signals, respectively. k denotes the frequency bin in the time-frame t. In addition, we assume that X and V are zero-mean Gaussian random variables. Our objective is to estimate X_{k,t} from an observation Y_{k,t} as well as using previously estimated samples x̂_c. We call x̂_c the context of X_{k,t}.
The estimate of the clean speech signal, x̂, known as the Wiener filter [8], is defined as:
x̂ = Λ_X(Λ_X + Λ_N)^{-1} y,   (2.2)
where Λ_X and Λ_N are the (c+1)×(c+1) speech and noise covariance matrices, respectively, and y is the noisy observation vector with c+1 dimensions, c being the context length. The covariances in Eq. 2.2 represent the correlation between time-frequency bins, which we call the context neighborhood. The covariance matrices are trained off-line from a database of speech signals. Information regarding the noise characteristics is also incorporated in the process, by modeling the target noise-type (quantization noise), similar to the speech signals. Since we know the design of the encoder, we know exactly the quantization characteristics, hence it is a straightforward task to construct the noise covariance Λ_N.
Context Neighborhood:
An example of the context neighborhood of size 10 is presented in
Normalized Covariance and Gain Modeling:
Speech signals have large fluctuations in gain and spectral envelope structure. To model the spectral fine structure efficiently [4], we use normalization to remove the effect of this fluctuation. The gain is computed during noise attenuation from the Wiener gain in the current bin and the estimates in the previous frequency bins. The normalized covariance and the estimated gain are employed together to obtain the estimate of the current frequency sample. This step is important as it enables us to use the actual speech statistics for noise reduction despite the large fluctuations.
Define the context vector as u_{k,t} = [X_{k,t}, X_{C_1}, …, X_{C_c}]^T, where X_{C_1}, …, X_{C_c} are the context bins of X_{k,t}.
From Eq. 2.3, we observe that this approach enables us to incorporate correlation from a neighborhood much larger than the context size and more information, consequently saving computational resources. The noise statistics are computed as follows:
where n_{k,t} = [N_{k,t}, N_{C_1}, …, N_{C_c}]^T is the corresponding noise context vector.
x̂ = γΛ_X[(γΛ_X) + Λ_N]^{-1} y   (2.5)
Owing to the formulation, the complexity of the method is linearly proportional to the context size. The proposed method differs from the 2D Wiener filtering in [17] in that it operates on the complex spectrum, whereby there is no need to use the noisy phase to reconstruct the signal, unlike conventional methods. Additionally, in contrast to 1D and 2D Wiener filters, which apply a scalar gain to the noisy magnitude spectrum, the proposed filter incorporates information from the previous estimates to compute the vector gain. Therefore, with respect to previous work the novelty of this method lies in the way the contextual information is incorporated in the filter, thus making the system adaptive to the variations in the speech signal.
The proposed method was evaluated using both objective and subjective tests. We used the perceptual SNR (pSNR) [3, 5] as the objective measure, because it approximates human perception and it is already available in a typical speech codec. For subjective evaluation, we conducted a MUSHRA listening test.
A system structure is illustrated in
To ensure that the coding noise has the least perceptual effect, the frequency domain signal 241′ is perceptually weighted at block 242 to obtain a weighted signal 242′. After a pre-process block 243, we compute the perceptual model at block 244 (e.g., as used in the EVS codec [3]), based on the linear prediction coefficients (LPCs). After weighting the signal with the perceptual envelope, the signal is normalized and entropy coded (not shown). For straightforward reproducibility, we simulated quantization noise at block 244 (which is not necessarily part of a marketed product) by perceptually weighted Gaussian noise, following the discussion in Sec. 4.1.2.2. A coded signal 242″ (which may correspond to the bitstream 111) may therefore be generated.
Thus, the output 244′ of the codec/quantization noise (QN) simulation block 244, in
Experimental Setup:
The process is divided into training and testing phases. In the training phase, we estimate the static normalized speech covariances for context sizes L ∈ {1, 2, …, 14} from the speech data. For training, we chose 50 random samples from the training set of the TIMIT database [20]. All signals are resampled to 12.8 kHz, and a sine window is applied on frames of size 20 ms with 50% overlap. The windowed signals are then transformed to the frequency domain. Since the enhancement is applied in the perceptual domain, we also model the speech in the perceptual domain. For each bin sample in the perceptual domain, the context neighborhoods are composed into matrices, as described in section 4.1.2.2, and the covariances are computed. We similarly obtain the noise models using perceptually weighted Gaussian noise.
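A rough Python/NumPy sketch of this training step follows. The perceptual weighting and the exact normalization of the covariances are omitted, and the context offsets are illustrative only; they are not the neighborhoods used in the reported experiments.

    import numpy as np

    def stft(x, frame_len=256, hop=128):          # 20 ms frames, 50% overlap at 12.8 kHz
        win = np.sin(np.pi * (np.arange(frame_len) + 0.5) / frame_len)  # sine window
        n_frames = 1 + (len(x) - frame_len) // hop
        frames = np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n_frames)])
        return np.fft.rfft(frames, axis=1)        # shape: (frames, bins)

    # Offsets (dt, dk) of the context bins relative to the bin under process; only
    # already-processed positions (previous frames / lower bins) are used here.
    CONTEXT_OFFSETS = [(0, -1), (0, -2), (-1, -1), (-1, 0), (-1, 1), (-2, 0)]

    def train_covariance(spectra, offsets=CONTEXT_OFFSETS):
        # spectra: (perceptually weighted) spectrogram of a training signal.
        T, K = spectra.shape
        vectors = []
        for t in range(2, T):
            for k in range(2, K - 1):
                u = np.array([spectra[t, k]] +
                             [spectra[t + dt, k + dk] for dt, dk in offsets])
                vectors.append(u / (np.linalg.norm(u) + 1e-12))   # remove gain fluctuation
        U = np.stack(vectors)
        return (U.conj().T @ U) / len(U)          # (c+1) x (c+1) normalized covariance

The noise model Λ_N may be obtained in the same way by running the procedure on the perceptually weighted Gaussian noise instead of the speech frames.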
For testing, 105 speech samples are randomly selected from the database. The noisy samples are generated as the additive sum of the speech and the simulated noise. The levels of speech and noise are controlled such that we test the method for pSNR ranging from 0-20 dB with 5 samples for each pSNR level, to conform to the typical operating range of codecs. For each sample, 14 context sizes were tested. For reference, the noisy samples were enhanced using an oracle filter, wherein the conventional Wiener filter employs the true noise as the noise estimate, i.e., the optimal Wiener gain is known.
Evaluation Results:
The results are depicted in
We evaluated the quality of the proposed method with a subjective MUSHRA listening test [16]. The test comprised six items, and each item consisted of 8 test conditions. Listeners, both experts and non-experts, between the ages of 20 and 43 participated. However, only the ratings of those participants who scored the hidden reference higher than 90 MUSHRA points were selected, resulting in 15 listeners whose scores were included in this evaluation.
Six sentences were randomly chosen from the TIMIT database to generate the test items. The items were generated by adding perceptual noise, to simulate coding noise, such that the resulting signals' pSNR was fixed at 2, 5 and 8 dB. For each pSNR, one male and one female item were generated. Each item consisted of 8 conditions: noisy (no enhancement), ideal enhancement with the noise known (oracle), the conventional Wiener filter, samples from the proposed method with context sizes one (L=1), six (L=6) and fourteen (L=14), in addition to the 3.5 kHz low-pass signal as the lower anchor and the hidden reference, as per the MUSHRA standard.
The results are presented in
We propose a time-frequency based filtering method for the attenuation of quantization noise in speech and audio coding, wherein the correlation is statistically modeled and used at the decoder. Therefore, the method does not require the transmission of any additional temporal information, thus eliminating chances of error propagation due to transmission loss. By incorporating the contextual information, we observe pSNR improvement of 6 dB in the best case and 2 dB in a typical application; subjectively, an improvement of 10 to 30 MUSHRA points is observed.
In this section, we fixed the choice of the context neighborhood for a certain context size. While this provides a baseline for the expected improvement based on context size, it is interesting to examine the impact of choosing an optimal context neighborhood. Additionally, since the MVDR filter showed significant improvement in background noise reduction, a comparison between MVDR and the proposed MMSE method should be considered for this application.
In summary, we have shown that the proposed method improves both subjective and objective quality, and it can be used to improve the quality of any speech and audio codecs.
Examples in this section and in the subsections mainly refer to techniques for postfiltering using log-magnitude spectrum for speech and audio coding.
Examples in this section and in the subsections may better specify particular cases of
In the present example, the following figures are mentioned:
Advanced coding algorithms yield high quality signals with good coding efficiency within their target bit-rate ranges, but their performance suffers outside the target range. At lower bitrates, the degradation in performance occurs because the decoded signals are sparse, which gives a perceptually muffled and distorted characteristic to the signal. Standard codecs reduce such distortions by applying noise filling and post-filtering methods. Here, we propose a post-processing method based on modeling the inherent time-frequency correlation in the log-magnitude spectrum. A goal is to improve the perceptual SNR of the decoded signals and to reduce the distortions caused by signal sparsity. Objective measures show an average improvement of 1.5 dB for input perceptual SNRs in the range of 4 to 18 dB. The improvement is especially prominent in components which had been quantized to zero.
Speech and audio codecs are integral parts of most audio processing applications, and recently we have seen rapid development in coding standards, such as MPEG USAC [18, 16] and 3GPP EVS [13]. These standards have moved towards unifying audio and speech coding, enabled the coding of super wide band and full band speech signals, and added support for voice over IP. The core coding algorithms within these codecs, ACELP and TCX, yield perceptually transparent quality at moderate to high bitrates within their target bitrate ranges. However, the performance degrades when the codecs operate outside this range. Specifically, for low-bitrate coding in the frequency domain, the decline in performance occurs because fewer bits are available for encoding, whereby areas with lower energy are quantized to zero. Such spectral holes in the decoded signal render a perceptually distorted and muffled characteristic to the signal, which can be annoying for the listener.
To obtain satisfactory performance outside target bitrate ranges, standard codecs like CELP employ pre- and post-processing methods, which are largely based on heuristics. In particular, to reduce the distortion caused by quantization noise at low bitrates, codecs implement methods either in the coding process or strictly as a post-filter at the decoder. Formant enhancement and bass post-filters are common methods [9] which modify the decoded signal based on the knowledge of how and where quantization noise perceptually distorts the signal. Formant enhancement shapes the codebook to intrinsically have less energy in areas prone to noise and is applied both at the encoder and decoder. In contrast, the bass post-filter removes the noise-like component between harmonic lines and is implemented only in the decoder.
Another commonly used method is noise filling, where pseudo-random noise is added to the signal [16], since accurate encoding of noise-like components is not essential for perception. In addition, the approach aids in reducing the perceptual effect of distortions caused by sparsity on the signal. The quality of noise-filling can be improved by parameterizing the noise-like signal, for example, by its gain, at the encoder and transmitting the gain to the decoder.
The advantage of post-filtering methods over the other methods is that they are only implemented in the decoder, whereby they do not require any modifications to the encoder-decoder structure, nor do they need any side information to be transmitted. However, most of these methods focus on solving the effect of the problem, rather than address the cause.
Here, we propose a post-processing method to improve signal quality at low bitrates, by modeling the inherent time-frequency correlation in the speech magnitude spectrum and investigating the potential of using this information to reduce quantization noise. The advantages of this approach are that it does not require the transmission of any side information and operates using solely the quantized signal as the observation and the speech models trained offline; since it is applied at the decoder after the decoding process, it does not require any changes to the core structure of the codec; the approach addresses the signal distortions by estimating the information lost during the coding process using a source model. The novelties of this work lie in (i) incorporating the formant information in speech signals using log-magnitude modeling, (ii) representing the inherent contextual information in the spectral magnitude of speech in the log-domain as a multivariate Gaussian distribution, and (iii) finding the optimum, for the estimation of true speech, as the expected likelihood of a truncated Gaussian distribution.
Formants are the fundamental indicator of linguistic content in speech and are manifested by the spectral magnitude envelope of speech, therefore the magnitude spectrum is an important part of source modeling [10, 21]. Prior research has shown that frequency coefficients of speech are best represented by a Laplacian or Gamma distribution [1, 4, 2, 3]. Hence, the magnitude-spectrum of speech is an exponential distribution, as shown in
In recent years, contextual information in speech has attracted a growing interest [11]. The inter-frame and inter-frequency correlation information have been explored previously in acoustic signal processing, for noise reduction [11, 5, 14]. The MVDR and Wiener filtering techniques employ the previous time- or frequency-frames to obtain an estimate of the signal in the current time-frequency bin. The results indicate a significant improvement in the quality of the output signal. In this work, we use similar contextual information to model speech. Specifically, we explore the plausibility of using the log-magnitude to model the context and, representing it using multivariate Gaussian distributions. The context neighborhood is chosen based on the distance of the context bin to the bin under consideration.
The overview of the modeling (training) process 330 is presented in
In other words, the trained models 336′ comprise:
We explored context sizes up to 40, which includes approximately four previous time frames, and lower and upper frequency bins, each. Note that we operate with the STFT instead of the MDCT used in standard codecs, in order to keep this work extensible to enhancement applications. Expansion of this work to MDCT is ongoing, and informal tests provide insights similar to those reported in this document.
Our objective is to estimate the clean speech signal from the observation of the noisy decoded signal using the statistical priors. To this end, we formulate the problem as the maximum likelihood (ML) of the current sample given the observation and the previous estimates. Assume a sample x has been quantized to a quantization level Q ∈ [l, u]. We can then express our optimization problem as:
where x̂ is the estimate of the current sample, l and u are the lower and upper limits of the current quantization bins, respectively, and P(a1|a2) is the conditional probability of a1, given a2. x̂_c is the estimated context vector.
To illustrate the performance of Eq. 3.1, we solved it using generic numerical methods.
The resulting speech distribution using EL is demonstrated in
Our objective is to evaluate the advantage of modeling the log-magnitude spectrum. Since envelope models are the main method for modeling the magnitude spectrum in conventional codecs, we evaluate the effect of statistical priors both in terms of the whole spectrum and for the envelope only. Therefore, besides evaluating the proposed method for the estimation of speech from the noisy magnitude spectrum of speech, we also test it for the estimation of the spectral envelope from an observation of the noisy envelope. To obtain the spectral envelope, after transforming the signal to the frequency domain, we compute the cepstrum, retain the 20 lowest coefficients and transform it back to the frequency domain. The next steps of envelope modeling are the same as the spectral magnitude modeling presented in Sec. 4.1.3.2 and
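A minimal sketch of the envelope computation just described is given below (Python/NumPy). Windowing and perceptual weighting are omitted, and the function name is illustrative.

    import numpy as np

    def spectral_envelope(frame, n_ceps=20, eps=1e-12):
        # Log-magnitude spectrum of the frame.
        log_mag = np.log(np.abs(np.fft.rfft(frame)) + eps)
        # Cepstrum: inverse transform of the log-magnitude spectrum.
        cepstrum = np.fft.irfft(log_mag, n=2 * (len(log_mag) - 1))
        # Retain the 20 lowest-quefrency coefficients (and their symmetric counterparts).
        cepstrum[n_ceps:-n_ceps] = 0.0
        # Back to the frequency domain: smoothed magnitude envelope.
        return np.exp(np.fft.rfft(cepstrum).real)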
A general block diagram of a system 360 is presented in
At the decoder 360b, the reverse process is implemented at block 367 (which may be an example of the bitstream reader 113) to decode the encoded signal 366′. The decoded signal 366′ may be corrupted by quantization noise and our purpose is to use the proposed post-processing method to improve output quality. Note that we apply the method in the perceptually weighted domain. A Log-transform block 368 is provided.
A post-filtering block 369 (which may implement the elements 114, 115, 119, 116 and/or 130 discussed above) permits reducing the effects of the quantization noise as discussed above, on the basis of speech models which may be, for example, the trained models 336′ and/or rules for defining the context (e.g., on the basis of the frequency band k), and/or statistical relationships and/or information 115′ (e.g., the normalized covariance matrix Λ_X) between, and/or regarding, the bin under process and at least one additional bin forming the context, and/or statistical relationships and/or information 119′ (e.g., the matrix Λ_N) regarding noise (e.g., quantization noise).
After post-processing, the estimated speech is transformed back to the temporal domain by applying the inverse perceptual weights at block 369a and the inverse frequency transform at block 369b. We use the true phase to reconstruct the signal back to the temporal domain.
For training we used 250 speech samples from the training set of the TIMIT database [22]. The block diagram of the training process is presented in
The average of the qualitative measures over the 10 speech samples are plotted in
For the magnitude spectrum, the improvement in quality between context size 1 and 4 is significantly large, approximately 0.5 dB over all input pSNRs. By increasing the context size we can further improve the pSNR, but the rate of improvement is relatively lower for sizes from 4 to 40. Also, the improvement is considerably lower at higher input pSNRs. We conclude that a context size around 10 samples is a good compromise between accuracy and complexity. However, the choice of context size can also depend on the target device for processing. For instance, if the device has computational resources at disposal, a high context size can be employed for maximum improvement.
Performance of the proposed method is further illustrated in
The scatter plots in
In this section, we investigated the use of the contextual information inherent in speech for the reduction of quantization noise. We propose a post-processing method with a focus on estimating speech samples at the decoder, from the quantized signal, using statistical priors. Results indicate that including speech correlations not only improves the pSNR, but also provides spectral magnitude estimates for noise filling algorithms. While a focus of this paper was modeling the spectral magnitude, a joint magnitude-phase modeling method, based on current insights and the results from an accompanying paper [20], is the natural next step.
This section also takes first steps toward spectral envelope restoration from highly quantized noisy envelopes by incorporating information from the context neighborhood.
Let us define
where μ, σ are the statistical parameters of the distribution and erf is the error function. Then, expectation of a univariate Gaussian random variable X is computed as:
Conventionally, when X ∈ (−∞, ∞), solving Eq. 3.3 results in E(X) = μ. However, for a truncated Gaussian random variable, with l < X < u, the relation is
which yields the following equation to compute the expectation of a truncated univariate Gaussian random variable:
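The equation itself is not reproduced above; for reference, the standard closed form of this expectation, consistent with the surrounding description, is
E[X | l < X < u] = μ + σ · (φ(α) − φ(β)) / (Φ(β) − Φ(α)), with α = (l − μ)/σ and β = (u − μ)/σ,
where φ is the standard normal density and Φ its cumulative distribution function, Φ(t) = (1 + erf(t/√2))/2.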
Let the context vector be defined as x = [x_1, x_2]^T, wherein x_1 ∈ ℝ^{1×1} represents the current bin under consideration, and x_2 ∈ ℝ^{C×1} is the context. Then, x ∈ ℝ^{(C+1)×1}, where C is the context size. The statistical models are represented by the mean vector μ ∈ ℝ^{(C+1)×1} and the covariance matrix Σ ∈ ℝ^{(C+1)×(C+1)}, such that μ = [μ_1, μ_2]^T with dimensions the same as x_1 and x_2, and the covariance as
Σ_{ij} are partitions of Σ with dimensions Σ_{11} ∈ ℝ^{1×1}, Σ_{22} ∈ ℝ^{C×C}, Σ_{12} ∈ ℝ^{1×C} and Σ_{21} ∈ ℝ^{C×1}. Thus, the updated statistics of the distribution of the current bin based on the estimated context are [15]:
μ_up = μ_1 + Σ_12 Σ_22^{-1}(x̂_c − μ_2)   (3.7)
σ_up = Σ_11 − Σ_12 Σ_22^{-1} Σ_21.   (3.8)
The proposed method applies filtering in the time-frequency domain, to reduce noise. It is designed especially for attenuation of quantization noise of a speech and audio codec, but it is applicable to any noise reduction task.
The noise attenuation algorithm is based on optimal filtering in a normalized time-frequency domain. This contains the following important details:
The benefits are two-fold:
A central novelty of the proposed method is that it takes into account statistical properties of the speech signal, in a time-frequency representation, over time. Conventional communication codecs, such as 3GPP EVS, use statistics of the signal in the entropy coder and source modeling only over frequencies within the current frame [1]. Broadcast codecs such as MPEG USAC do use some time-frequency information in their entropy coders also over time, but only to a limited extent [2].
The reason for the aversion to using inter-frame information is that, if information is lost in transmission, we would be unable to correctly reconstruct the signal. Specifically, we do not lose only the frame which is lost: because the following frames depend on the lost frame, the following frames would also be either incorrectly reconstructed or completely lost. Using inter-frame information in coding thus leads to significant error propagation in case of frame loss.
In contrast, the current proposal does not require transmission of inter-frame information. The statistics of the signal are determined off-line in the form of covariance matrices of the context for both the desired signal and the quantization noise. We can therefore use inter-frame information at the decoder, without risking error propagation, since the inter-frame statistics are estimated off-line.
The proposed method is applicable as a post-processing method for any codec. The main limitation is that if a conventional codec operates on a very low bitrate, then significant portions of the signal are quantized to zero, which reduces the efficiency of the proposed method considerably. At low rates, it is however possible to use randomized quantization methods to make the quantization error better resemble Gaussian noise [3,4]. That makes the proposed method applicable at least
1. at medium and high bitrates with conventional codec designs and
2. at low bitrates when using randomized quantization.
The proposed approach therefore uses statistical models of the signal in two ways; the intra-frame information is encoded using conventional entropy coding methods, and inter-frame information is used for noise attenuation in the decoder in a post-processing step. Such application of source modeling at the decoder side is familiar from distributed coding methods, where it has been demonstrated that it does not matter whether statistical modeling is applied at both the encoder and decoder, or only at the decoder [5]. As far as we know, our approach is the first application of this feature in speech and audio coding, outside the distributed coding applications.
It has been demonstrated relatively recently that noise attenuation applications benefit greatly from incorporating statistical information over time in the time-frequency domain. Specifically, Benesty et al. have applied conventional optimal filters such as MVDR in the time-frequency domain to reduce background noise [6, 7]. While a primary application of the proposed method is the attenuation of quantization noise, it can naturally also be applied to the generic noise attenuation problem, as Benesty et al. do. A difference, however, is that we explicitly choose for our context those time-frequency bins which have the highest correlation with the current bin, whereas Benesty et al. apply filtering over time only, not over neighbouring frequencies. By choosing more freely among the time-frequency bins, we can select those bins which give the highest improvement in quality with the smallest context size, whereby the computational complexity is reduced. A possible realization of this selection is sketched below.
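The sketch assumes that a covariance matrix has been estimated off-line over a candidate neighbourhood of time-frequency bins; the C bins with the highest absolute correlation to the current bin are then kept as the context. The neighbourhood layout and names are illustrative assumptions.

```python
import numpy as np

def select_context_bins(Sigma_full, C):
    """Pick the C candidate bins most correlated with the current bin.

    Sigma_full : (K+1, K+1) covariance over the current bin (index 0)
                 and K candidate neighbour bins (indices 1..K).
    C          : desired context size (C <= K).
    Returns the indices (into 1..K) of the selected context bins.
    """
    std = np.sqrt(np.diag(Sigma_full))
    corr_to_current = Sigma_full[0, 1:] / (std[0] * std[1:])
    order = np.argsort(-np.abs(corr_to_current))  # strongest correlation first
    return order[:C] + 1
```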
There are a number of extensions which follow naturally from the proposed method and which may be applied to the aspects and examples disclosed above and below:
1. Above, the context contains only the noisy current sample and past estimates of the clean signal. However, the context could also include time-frequency neighbours which have not yet been processed. That is, we could use a context where we include the most useful neighbours and, when available, use the estimated clean samples, but otherwise the noisy ones. The noisy neighbours would then naturally have a noise covariance similar to that of the current sample.
2. Estimates of the clean signal are naturally not perfect but contain some error; above, however, we assume that the estimates of the past signal are error-free. To improve quality, we could include an estimate of the residual noise also for the past signal.
3. The current work focuses on attenuation of quantization noise, but clearly, we can include background noises as well. We would then only have to include the appropriate noise covariance in the minimization process [8].
4. The method was presented here as applied to single-channel signals only, but it can clearly be extended to multi-channel signals using conventional methods [8].
5. The current implementation uses covariances which are estimated off-line, and only the scaling of the desired source covariance is adapted to the signal. It is clear that adaptive covariance models would be useful if we had further information about the signal. For example, if we had an indicator of the amount of voicing of a speech signal, or an estimate of the harmonics-to-noise ratio (HNR), we could adapt the desired source covariance to match the voicing or HNR, respectively. Similarly, if the quantizer type or mode changes from frame to frame, we could use that to adapt the quantization noise covariance. By making sure that the covariances match the statistics of the observed signal, we will obviously obtain better estimates of the desired signal.
6. The context in the current implementation is chosen among the closest neighbours in the time-frequency grid. There is, however, no limitation to using only these samples; we are free to choose any useful information which is available. For example, we could use information about the harmonic structure of the signal to choose samples into the context which correspond to the comb structure of the harmonic signal (a sketch of such a harmonic context selection follows this list). In addition, if we have access to an envelope model, we could use it to estimate the statistics of spectral frequency bins, similar to [9]. Generalizing, we can use any available information which is correlated with the current sample to improve the estimate of the clean signal.
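As an illustration of the last point, the sketch below selects frequency bins at multiples of an estimated fundamental frequency, so that the context follows the comb structure of a harmonic signal; the bin-spacing convention and names are assumed purely for illustration.

```python
def harmonic_context_indices(f0_bin, num_bins, num_harmonics):
    """Frequency-bin indices following the comb structure of a harmonic signal.

    f0_bin        : (possibly fractional) bin index of the fundamental frequency
    num_bins      : number of frequency bins in the frame
    num_harmonics : how many harmonics to include in the context
    """
    indices = []
    for h in range(1, num_harmonics + 1):
        k = int(round(h * f0_bin))
        if k < num_bins:
            indices.append(k)
    return indices
```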
In examples above, there is no need for inter-frame information to be encoded in the bitstream 111. Therefore, in examples, at least one among the context definer 114, the statistical relationship and/or information estimator 115, the quantization noise relationship and/or information estimator 119, and the value estimator 116 exploits inter-frame information at the decoder . . . , hence reducing the payload and the risk of error propagation in case of packet or bit loss.
In examples above, reference has been mainly made to quantization noise. However, other kinds of noise may be coped with in other examples.
It has been noted that most of the techniques described above are particularly effective for low bitrates. Therefore, it may be possible to implement a technique of selecting between:
In examples, the context definer 114 may form the context 114′ using at least one non-processed bin 126. With reference to
In examples above, the statistical relationship and/or information estimator 115 and/or the noise relationship and/or information estimator 119 may store a plurality of matrices (Λx, ΛN, for example). The choice of the matrix to be used may be performed on the basis of a metric on the input signal (e.g., in the context 114′ and/or in the bin 123 under process). Different harmonicities (e.g., determined with different harmonicity-to-noise ratios or other metrics) may therefore be associated with different matrices Λx, ΛN, for example.
Alternatively, different norms of the context (e.g., determined by measuring the norm of the unprocessed bin values of the context, or other metrics) may be associated with different matrices Λx, ΛN, for example.
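As a purely illustrative sketch of such a selection, a measured metric (here, the norm of the context) is mapped to one of several pre-stored matrix pairs; the thresholds, the bank layout, and the function name are assumptions, not part of the examples above.

```python
import numpy as np

def select_covariance_matrices(context_values, matrix_bank, thresholds):
    """Pick a pre-stored (Lambda_x, Lambda_n) pair based on the context norm.

    context_values : array of (unprocessed) bin values forming the context
    matrix_bank    : list of (Lambda_x, Lambda_n) pairs, one more entry than
                     there are thresholds, ordered by increasing norm
    thresholds     : ascending norm thresholds separating the bank entries
    """
    metric = np.linalg.norm(context_values)
    index = int(np.searchsorted(thresholds, metric))
    return matrix_bank[index]
```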
Operations of the equipment disclosed above may be methods according to the present disclosure.
A general example of a method is shown in
In examples, the method may be reiterated: e.g., after step 522, step 521 is invoked again, e.g., by updating the bin under process and by choosing a new context.
Methods such as method 520 may be supplemented by the operations discussed above.
As shown in
Generally, examples may be implemented as a computer program product with program instructions, the program instructions being operative for performing one of the methods when the computer program product runs on a computer. The program instructions may for example be stored on a machine readable medium.
Other examples comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an example of a method is, therefore, a computer program having program instructions for performing one of the methods described herein, when the computer program runs on a computer.
A further example of the methods is, therefore, a data carrier medium (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier medium, the digital storage medium or the recorded medium are tangible and/or non-transitory, rather than signals which are intangible and transitory.
A further example of the method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be transferred via a data communication connection, for example via the Internet.
A further example comprises a processing means, for example a computer, or a programmable logic device performing one of the methods described herein.
A further example comprises a computer having installed thereon the computer program for performing one of the methods described herein.
A further example comprises an apparatus or a system transferring (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
In some examples, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some examples, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals even if occurring in different figures.
Number | Date | Country | Kind |
---|---|---|---|
17198991.6 | Oct 2017 | EP | regional |
This application is a continuation of copending International Application No. PCT/EP2018/071943, filed Aug. 13, 2018, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 17198991.6, filed Oct. 27, 2017, which is incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2018/071943 | Aug 2018 | US |
Child | 16856537 | US |