The disclosure relates to a system and method (both generally referred to as a “structure”) for noise reduction applicable in speech enhancement.
Speech contains different articulations such as vowels, fricatives, nasals, etc. These articulations and other speech properties, such as short-term power, can be exploited to assist speech enhancement in systems such as noise reduction systems. A critical noise case is, for example, the reduction of so-called "babble noise". Babble noise is defined as a constant chatter in the background of a conversation. This constant chatter is extremely hard to suppress because it is speech-like, and traditional voice activity detectors (VADs) fail on it. The use of microphones of different types aggravates this drawback, particularly in the context of far-field microphone applications, because the speaker can potentially talk from any distance to the device (from other rooms of a house, large office spaces, etc.). There is a desire to improve the behavior of voice activity detectors in connection with babble noise.
A noise suppression method includes transforming a time-domain input signal into an input spectrum that is the spectrum of the input signal, the input signal comprising speech components and noise components, and the input spectrum comprising a speech spectrum that is the spectrum of the speech components and a noise spectrum that is the spectrum of the noise components; smoothing magnitudes of the input spectrum to provide a smoothed-magnitude input spectrum; and estimating basic suppression filter coefficients from the input spectrum and the smoothed-magnitude input spectrum. The method further includes determining noise suppression filter coefficients from the estimated basic suppression filter coefficients and a spectral correlation factor, the spectral correlation factor indicating whether speech is present in the input signal or not; filtering the input spectrum based on the noise suppression filter coefficients to generate an output spectrum; and transforming the output spectrum into a time-domain output signal. The spectral correlation factor is determined from a scaling factor and the smoothed-magnitude input spectrum, the scaling factor being determined iteratively starting from a start correlation factor.
An example noise suppression structure includes a processor and a memory, the memory storing instructions of a program and the processor configured to execute the instructions of the program, carrying out the above-described method.
An example computer program product includes instructions which, when the program is executed by a computer, cause the computer to carry out the above-described method.
Other systems, methods, features and advantages will be, or will become, apparent to one with skill in the art upon examination of the following detailed description and appended figures. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the following claims.
The system may be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like referenced numerals designate corresponding parts throughout the different views.
A voice activity detector outputs a detection signal that, when binary, assumes, for example, 1 or 0 indicating the presence or absence of speech, respectively. In some cases, the output signal of the voice activity detector may be between and including 0 and 1, which may indicate a certain measure or a certain probability for the presence of the speech in the signal under investigation. The detection signal may be used in different parts of speech enhancement systems such as echo cancellers, beamformers, noise estimators, noise reduction systems, etc.
One way to detect a formant in speech is to evaluate the presence of a harmonic structure in a speech segment. The harmonic structure has a fundamental frequency, referred to as the first formant, and its harmonics. Due to the anatomical structure of the human speech generation system, harmonics are inevitably present in most human speech articulations. If the formants of a speech signal are correctly detected, a majority of the speech present in recorded signals can be identified. Although this does not cover cases such as fricatives, when used intelligently, this approach can replace traditional voice activity detectors or work in tandem with them.
Expanding further on the above-described approach, a formant may be detected in speech by searching for peaks which are periodically present in the spectral content of the speech segment. Although this can be implemented easily, it is not computationally attractive to perform search operations on every spectral frame. Another way to detect formants in a signal is to compute a normalized spectral correlation Corr between the magnitude spectra of consecutive frames according to Equation (1), wherein μ denotes the subband index and k denotes the frame index.
To make the detection more robust against background noise, the first modification to the primary detection method outlined above is to band-limit the normalized correlation with a lower frequency (μmin) and an upper frequency (μmax) applied in the subband domain. The lower frequency may be set, e.g., to around 100 Hz and the upper frequency may be set, e.g., to around 3000 Hz. This limitation allows: (1) early detection of formants at the beginning of syllables, (2) a higher spectral signal-to-noise ratio (SNR), i.e., signal-to-noise ratio per band, in the chosen frequency range, which increases the detection chances, and (3) robustness in a wide range of noisy environments. The band-limited normalized spectral correlation NormSpecCorr may be computed according to Equation (2).
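For illustration, the band-limited correlation may be sketched in Python as follows. Since Equations (1) and (2) are not reproduced above, this sketch assumes a normalized correlation between the (smoothed) magnitude spectra of consecutive frames; all names are illustrative:

```python
import numpy as np

def norm_spec_corr(y_mag_curr, y_mag_prev, mu_min, mu_max):
    """Band-limited normalized spectral correlation in the spirit of
    Equation (2). y_mag_curr / y_mag_prev are (smoothed) magnitude
    spectra of frames k and k-1; mu_min / mu_max are subband indices."""
    a = y_mag_curr[mu_min:mu_max + 1]
    b = y_mag_prev[mu_min:mu_max + 1]
    den = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return float(np.sum(a * b) / den) if den > 0.0 else 0.0
```

For a 16 kHz sampling rate and a 512-point DFT, μmin = 3 and μmax = 96 roughly correspond to the suggested band limits of 100 Hz and 3000 Hz.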
As mentioned before, the input spectrum is not normalized. One reason for this is that, like speech signals, noise signals may also have a harmonic structure. When the noisy input spectrum is normalized in practical situations, it is difficult to adjust the detection threshold parameter Kthr for accurate detection of speech formants as opposed to harmonics which may be present in the background noise. Further, due to the known Lombard effect, a speaker usually makes an intrinsic effort to speak louder than the background noise. Keeping these factors in mind, instead of directly using the primary detection approach as described in Equation (1) or the band-limited detection as described in Equation (2), a so-called scaling factor y_scaling(k) is introduced into the detection, which results in Equation (3).
The scaling factor y_scaling(k) is multiplied with the smoothed magnitudes of the input spectrum, resulting in a scaled input spectrum.
Equation (5) is evaluated for every subband μ, at the end of which the total number of subbands that satisfy the condition of speech-like activity is given by summing a bin counter kμ. This counter and the instantaneous level are reset to 0 before the level is estimated. The normalized instantaneous level estimate is then given by Equation (6).
The long-term average of the level can be obtained by time-window averaging over L frames in combination with infinite impulse response (IIR) filter based smoothing of the time-window average. In place of this two-stage filtering, a single IIR-based smoothing filter could be used, but it would have to be longer and would require more tuning coefficients. The two-stage filtering achieves the same smoothing results with reduced computational complexity. In the two-stage filter, the time-window average is obtained by simply storing the L previous values of the instantaneous estimate and computing the average Ytime-window(k) according to Equation (7).
Given that the scaling value does not need to react to the dynamics of the varying level estimates, an IIR-based smoothing is further applied to the time-window estimate, given by

Ȳlev(k) = αlev · Ytime-window(k) + (1 − αlev) · Ȳlev(k − 1),   (8)

where Ȳlev(k) is the smoothed long-term level estimate and αlev is a smoothing constant.
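A minimal Python sketch of this two-stage level smoothing, assuming the normalized instantaneous level of Equation (6) is already available per frame (window length and smoothing constant are illustrative tuning values):

```python
import numpy as np
from collections import deque

class TwoStageLevelSmoother:
    """Two-stage smoothing of a per-frame level estimate: a length-L
    moving average (Equation (7)) followed by first-order IIR
    smoothing (Equation (8))."""

    def __init__(self, L=20, alpha_lev=0.1):
        self.window = deque(maxlen=L)
        self.alpha_lev = alpha_lev
        self.level = 0.0  # smoothed long-term level, Ybar_lev(k)

    def update(self, y_inst):
        # Stage 1: time-window average over the last L frames (Eq. (7)).
        self.window.append(y_inst)
        y_tw = float(np.mean(self.window))
        # Stage 2: IIR smoothing of the windowed average (Eq. (8)).
        self.level = (self.alpha_lev * y_tw
                      + (1.0 - self.alpha_lev) * self.level)
        return self.level
```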
The formants in speech signals can thus be used as a speech presence detector which, when supported by other voice activity detection algorithms, can be utilized in noise reduction systems. The approach described above allows detecting formants in noisy speech frames, and the detector outputs a soft decision. Although the primary detection approach is very simple, it may be enhanced with three robustness features: (1) band-limited formant detection, (2) scaling based on an estimation of the varying speech levels of the input signal, and (3) reference-signal-masked scaling (or level estimation) for protection against echo-like scenarios. In the noise processing structure presented below, the output of the interframe formant detection procedure is a detection signal Kcorr(k). While the approach described above mitigates the babble noise drawback in some cases, the different kinds of microphones used make a so-called "optimal scaling" necessary to exactly determine the onset/offset of such background noise scenarios. The drawback is exacerbated in far-field microphone applications, as the speaker can potentially talk from any distance to the device (e.g., from other rooms in a house, large office spaces, etc.). To overcome this drawback, an automatically computed "scaling factor" is utilized.
A suppression filter controller 105 operatively coupled to the Wiener filter coefficient estimator 104 estimates (dynamic) suppression filter coefficients Hdyn(μ,k), based on the estimated Wiener filter coefficients Hw(μ,k) and optionally at least one of a correlation factor Kcorr(μ,k) for formant based detection and estimated noise suppression filter coefficients Hw_dyn(μ,k). A noise suppression filter 106, which is operatively coupled to the suppression filter controller 105 and the analysis filter bank 101, filters the input spectrum Y(μ,k) according to the estimated (dynamic) suppression filter coefficients Hdyn(μ,k) to provide a clean estimated speech spectrum Ŝclean(μ,k). An output (frequency-to-time) domain transformer, e.g., a synthesis filter bank 107, which is operatively coupled to the noise suppression filter 106, transforms the clean estimated speech spectrum Ŝclean(μ,k) or a corresponding spectrum such as a spectrum Ŝ(μ,k) into a time-domain output signal ŝ(n) representative of the speech components of the input signal y(n).
The estimated noise suppression filter coefficients Hw_dyn(μ,k) may be derived from the input spectrum Y(μ,k) as described below.
The input signal y(n) and the reference signal x(n) may be transformed from the time domain to the frequency (spectral) domain, i.e., into the input spectrum Y(μ,k) and a corresponding reference spectrum, by the analysis filter bank 101 employing an appropriate domain transform algorithm such as, e.g., a short term Fourier transform (STFT). STFT may also be used in the synthesis filter bank 107 to transform the clean estimated speech spectrum Ŝclean(μ,k) or the spectrum Ŝ(μ,k) into the time-domain output signal ŝ(n). For example, the analysis may be performed in frames by a sliding low-pass filter window and a discrete Fourier transformation (DFT), a frame being defined by the Nyquist period of the band-limited window. The synthesis may be similar to an overlap-add process, and may employ an inverse DFT and a vector addition in each frame. Spectral modifications may be included if zeros are appended to the window function prior to the analysis, the number of zeros being equal to the characteristic time length of the modification.
In the following examples, a frame k of the noisy input spectrum STFT(y(n)) forms the basis for further processing. By way of magnitude smoothing, the instantaneous fluctuations are removed while the long-term dynamicity of the noise is retained, as detailed in connection with the smoothed input spectrum below.
For example, the noisy speech signal in the discrete-time domain may be described as y(n) = s(n) + b(n), where n is again the discrete time index, y(n) is the (noisy speech) signal recorded by a microphone, s(n) is the clean speech signal and b(n) is the noise component. The processing of the signals is performed in the subband domain. An STFT-based analysis-synthesis filter bank is used to transform the signal into its subbands and back to the time domain. The output of the analysis filter bank is the short-term spectrum of the input signal Y(μ, k) where, again, μ is the subband index and k is the frame index. The estimated background noise B̂(μ, k) is used by a noise suppression filter such as the Wiener filter to obtain an estimate of the clean speech.
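A minimal sketch of such an STFT-based analysis-synthesis filter bank with overlap-add follows; frame length, hop size, and window are illustrative choices:

```python
import numpy as np

def stft(y, frame_len=512, hop=256, window=None):
    """Analysis filter bank: short-term Fourier transform of y(n).
    Returns Y[mu, k] with subband index mu and frame index k."""
    if window is None:
        window = np.hanning(frame_len)
    n_frames = 1 + (len(y) - frame_len) // hop
    Y = np.empty((frame_len // 2 + 1, n_frames), dtype=complex)
    for k in range(n_frames):
        frame = y[k * hop:k * hop + frame_len] * window
        Y[:, k] = np.fft.rfft(frame)
    return Y

def istft(Y, frame_len=512, hop=256, window=None):
    """Synthesis filter bank: inverse STFT with overlap-add."""
    if window is None:
        window = np.hanning(frame_len)
    n_frames = Y.shape[1]
    out = np.zeros(frame_len + (n_frames - 1) * hop)
    wsum = np.zeros_like(out)
    for k in range(n_frames):
        frame = np.fft.irfft(Y[:, k], n=frame_len) * window
        out[k * hop:k * hop + frame_len] += frame
        wsum[k * hop:k * hop + frame_len] += window ** 2
    wsum[wsum < 1e-12] = 1.0  # avoid division by zero at the edges
    return out / wsum
```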
Noise present in the input spectrum can be estimated by accurately tracking the segments of the spectrum in which speech is absent. The behavior of this spectrum is dependent on the environment in which the microphone is placed. In an automobile environment, for example, there are many factors that contribute to the noise spectrum being/becoming non-stationary. For such environments, the noise spectrum can be described as non-flat with a low-pass characteristic dominating below 500 Hz. Apart from this low-pass characteristic, changes in speed, the opening and closing of windows, passing cars, etc. may also cause the noise floor to vary with time. A close look at one frequency bin of the noise spectrum reveals the following properties: (a) Instantaneous power can vary from the mean power to a large extent even under steady conditions, and (b) a steady increase or a steady decrease of power is observed in certain situations (e.g., during acceleration). A simple estimator, which can be used to track these magnitude changes for each frequency bin, is described in Equation (10)
This estimator follows a smoothed input spectrum by multiplying the previous noise estimate with a multiplicative constant in each frame, thereby increasing or decreasing the estimate.
Starting from this simple estimator, a noise estimation scheme may be employed that keeps the computational complexity low while offering fast, accurate tracking. The task of the estimator is to choose the "right" multiplicative constant for a given situation, such as a speech passage, a consistent background noise, an increasing background noise, a decreasing background noise, etc. In addition, a value referred to as "trend" is computed, which indicates whether the long-term direction of the input signal is going up or down. The increment and decrement time-constants along with the trend are applied together in Equation (11).
Tracking of the noise estimator is dependent on the smoothed input spectrum Ȳ(μ, k), given by

Ȳ(μ, k) = γsmth · |Y(μ, k)| + (1 − γsmth) · Ȳ(μ, k − 1),

in which γsmth is a smoothing constant. The smoothing constant γsmth is chosen in such a way that it retains the fine variations of the input spectrum Y(μ,k) while eliminating the high variation of the instantaneous spectrum. Optionally, additional frequency-domain smoothing can be applied.
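This recursive smoothing may be sketched as follows; the value of γsmth is an illustrative assumption:

```python
import numpy as np

def smooth_magnitudes(Y_mag, gamma_smth=0.3):
    """First-order recursive smoothing of the magnitude spectrum per
    subband: Ybar(mu,k) = gamma*|Y(mu,k)| + (1-gamma)*Ybar(mu,k-1).
    Y_mag: array of shape (n_subbands, n_frames) holding |Y(mu, k)|."""
    Y_smooth = np.empty_like(Y_mag)
    Y_smooth[:, 0] = Y_mag[:, 0]  # initialize with the first frame
    for k in range(1, Y_mag.shape[1]):
        Y_smooth[:, k] = (gamma_smth * Y_mag[:, k]
                          + (1.0 - gamma_smth) * Y_smooth[:, k - 1])
    return Y_smooth
```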
One of the difficulties with noise estimators in non-stationary environments is differentiating between a speech part of the spectrum and a change in the spectral floor. This can be at least partially overcome by measuring the duration of a power increase. If the increase is due to a speech source, the power will drop after the utterance of a syllable, whereas, if the power stays high for a longer duration, this indicates increased background noise. It is these dynamics of the input spectrum that the trend factor measures in the processing scheme. By observing the direction of the trend (going up or down), the spectral floor changes can be tracked while avoiding the tracking of the speech-like parts of the spectrum. The decision as to the current state of the frame is made by determining whether the estimated noise of the previous frame is smaller than the smoothed input spectrum of the current frame, by which a set of values Acurr(μ, k) is obtained. A positive value indicates that the direction is going up, and a negative value indicates that the direction is going down as, for example,

Acurr(μ, k) = 1 if B̂(μ, k − 1) < Ȳ(μ, k), and Acurr(μ, k) = −4 otherwise,

where B̂(μ, k − 1) represents the estimated noise of the previous frame. The values 1 and −4 are exemplary and any other appropriate values can be applied. The trend can be smoothed along both the time and the frequency axis. A zero-phase forward-backward filter may be used to smooth along the frequency axis; smoothing along the frequency axis ensures that isolated peaks caused by non-speech-like activities are suppressed. Smoothing is applied according to
Âtrnd(μ, k) = γtrnd-fq · Acurr(μ, k) + (1 − γtrnd-fq) · Âtrnd(μ − 1, k),   (13)
for μ = 1, . . . , NSbb, and backward smoothing is applied similarly. The time-smoothed trend factor Ātrnd(μ, k) is then obtained by

Ātrnd(μ, k) = γtrnd-tm · Âtrnd(μ, k) + (1 − γtrnd-tm) · Ātrnd(μ, k − 1),

where γtrnd-tm is a smoothing constant and Âtrnd(μ, k) is the frequency-smoothed trend factor. The resulting double-smoothed trend factor Ātrnd(μ, k) indicates whether the spectral floor is rising or falling.
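The trend computation for one frame may be sketched as follows; the up/down values follow the exemplary 1 and −4 above, while the smoothing constants are illustrative:

```python
import numpy as np

def trend_factor(B_prev, Y_smooth, A_time_prev,
                 gamma_fq=0.3, gamma_tm=0.2, up=1.0, down=-4.0):
    """Sketch of the trend computation: raw up/down decision per
    subband, forward-backward (zero-phase) smoothing along frequency
    (Equation (13)), then IIR smoothing along time.

    B_prev:      previous noise estimate B(mu, k-1), shape (n_subbands,)
    Y_smooth:    smoothed input spectrum Ybar(mu, k)
    A_time_prev: time-smoothed trend factor of the previous frame
    """
    # Raw trend: positive where the smoothed input rose above the
    # previous noise estimate, negative (exemplary -4) where it fell.
    A_curr = np.where(B_prev < Y_smooth, up, down)

    # Forward pass along the frequency axis (Equation (13)).
    A_fq = np.empty_like(A_curr)
    A_fq[0] = A_curr[0]
    for mu in range(1, len(A_curr)):
        A_fq[mu] = gamma_fq * A_curr[mu] + (1 - gamma_fq) * A_fq[mu - 1]
    # Backward pass, yielding zero-phase smoothing overall.
    for mu in range(len(A_curr) - 2, -1, -1):
        A_fq[mu] = gamma_fq * A_fq[mu] + (1 - gamma_fq) * A_fq[mu + 1]

    # Time smoothing of the frequency-smoothed trend.
    return gamma_tm * A_fq + (1 - gamma_tm) * A_time_prev
```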
Tracking of the noise estimation is performed for two cases: one in which the smoothed input is greater than the estimated noise, and one in which it is smaller. The input spectrum can be greater than the estimated noise for three reasons: first, when there is speech activity; second, when the previous noise estimate has dipped too low and must rise; and third, when there is a continuous increase in the true background noise. The first case is addressed by checking whether the level of the input spectrum Y(μ,k) is greater than a certain signal-to-noise ratio (SNR) threshold Tsnr, in which case the chosen increment constant Δspeech has to be very slow because speech should not be tracked. For the second case, the increment constant is set to Δnoise, which corresponds to the normal rise and fall during tracking. In the case of a continuous increase in the true background noise, the estimate must catch up with this increase as fast as possible. For this, a counter providing counts kcnt(μ,k) is utilized. The counter counts the duration over which the input spectrum has stayed above the estimated noise. If the count reaches a threshold Kinc-max, a fast increment constant Δinc-fast may be chosen. The counter is incremented by 1 every time the input spectrum Y(μ,k) becomes greater than the estimated noise spectrum B̂(μ, k−1) and is reset to 0 otherwise. Equation (16) captures these conditions.
The choice of a decrement constant Δdec does not have to be as explicit as in the case of the increment constant, because there is less ambiguity when the input spectrum Y(μ,k) is smaller than the estimated noise spectrum B̂(μ, k−1). Here the noise estimator chooses the decrement constant Δdec by default. For a given subband, only one of the two above-stated conditions applies. From either of the two conditions, a final multiplicative constant Δfinal(μ, k) is determined according to Equation (17).
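The following Python sketch captures this selection of the multiplicative constant for a single subband, in the spirit of Equations (16) and (17). The concrete threshold and constant values are illustrative assumptions, as the text does not disclose numeric values:

```python
def multiplicative_constant(Y_level, B_prev, counter, snr_thr=3.0,
                            k_inc_max=50, d_speech=1.0005, d_noise=1.005,
                            d_inc_fast=1.02, d_dec=0.995):
    """Choose the final multiplicative constant for one subband.
    Returns (delta_final, updated_counter)."""
    if Y_level > B_prev:
        counter += 1  # duration the input has stayed above the estimate
        if Y_level > snr_thr * B_prev:
            delta = d_speech      # likely speech: track very slowly
        elif counter >= k_inc_max:
            delta = d_inc_fast    # persistent rise: true noise increase
        else:
            delta = d_noise       # normal rise during tracking
    else:
        counter = 0
        delta = d_dec             # input below estimate: decrease
    return delta, counter
```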
The input spectrum includes only background noise when no speech-like activity is present. At such times, the best estimate is achieved by setting the noise estimate equal to the input spectrum. When the estimated noise is lower than the input spectrum, the noise estimate and the input spectrum are combined with a certain weight, the weights being computed according to Equation (19). To compute the weights, a pre-estimate B̂pre(μ, k) is obtained, which is used in combination with the input spectrum. It is obtained by multiplying the previous noise estimate with the multiplicative constant Δfinal(μ, k) and the trend constant ΔTrend(μ,k) according to
B̂pre(μ, k) = Δfinal(μ, k) · ΔTrend(μ, k) · B̂(μ, k − 1).   (18)
A weighting factor WB̂(μ, k) for combining the input spectrum Y(μ,k) and the pre-estimate B̂pre(μ, k) is given by Equation (19). The final noise estimate is determined by applying this weighting factor according to

B̂(μ, k) = WB̂(μ, k) · |Y(μ, k)| + (1 − WB̂(μ, k)) · B̂pre(μ, k).   (20)
During the first few frames of the noise estimation process, the input spectrum itself is directly chosen as the noise estimate for faster convergence.
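The noise estimate update of Equations (18) to (20) may be sketched as follows. Because Equation (19) is not reproduced above, the weighting rule used here is an illustrative assumption:

```python
import numpy as np

def update_noise_estimate(Y_mag, B_prev, delta_final, delta_trend):
    """One frame of the noise estimate update (Equations (18)-(20)).

    Y_mag:       |Y(mu, k)|, magnitude of the input spectrum
    B_prev:      previous noise estimate B(mu, k-1)
    delta_final: per-subband multiplicative constants (Equation (17))
    delta_trend: per-subband trend constants
    """
    # Pre-estimate, Equation (18).
    B_pre = delta_final * delta_trend * B_prev
    # Assumed weight (Equation (19) is not given): favor the input
    # spectrum when it is close to the pre-estimate (noise-only),
    # favor the pre-estimate when the input is much larger (speech).
    w = np.clip(B_pre / np.maximum(Y_mag, 1e-12), 0.0, 1.0)
    # Final estimate, Equation (20).
    return w * Y_mag + (1.0 - w) * B_pre
```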
The estimated background noise B̂(μ, k) and the magnitude of the input spectrum |Y(μ,k)| are combined to compute basic noise suppression filter coefficients Hw(μ,k), also referred to as the Wiener filter coefficients, according to Equation (21).
The Wiener filter coefficients Hw(μ,k) are applied to the complex input spectrum Y(μ,k) to obtain an estimate of the clean speech spectrum Ŝ(μ,k), which is
Ŝ(μ,k)=Hw(μ,k)·Y(μ,k). (22)
The estimated clean speech spectrum Ŝ(μ,k) is transformed into the discrete-time domain by the synthesis filter bank to obtain the estimated clean speech signal ŝ(n)=ISTFT(Ŝ(μ,k)), where ISTFT is the application of the synthesis filter bank, e.g., an inverse short term Fourier transform.
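Since Equation (21) is not reproduced above, the following sketch assumes the classical magnitude-domain Wiener-type rule with an overestimation factor and a spectral floor; the parameters beta and h_min are illustrative:

```python
import numpy as np

def wiener_gain(Y_mag, B_est, beta=1.0, h_min=0.1):
    """Basic noise suppression (Wiener) coefficients, assuming the
    classical rule H = max(1 - beta * B/|Y|, h_min)."""
    H = 1.0 - beta * B_est / np.maximum(Y_mag, 1e-12)
    return np.maximum(H, h_min)

# For spectra of shape (n_subbands, n_frames):
# H = wiener_gain(np.abs(Y), B_est)
# S_hat = H * Y        # Equation (22)
# s_hat = istft(S_hat) # synthesis filter bank sketched earlier
```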
In order to control highly non-stationary (i.e., dynamic) noise, the noisy input signal (i.e., the input spectrum) is suppressed in a time-frequency controlled manner, and the applied suppression is not constant. The amount of suppression to be applied is determined by the "dynamicity" of the noise in the noisy input signal. The output of the dynamic suppression scheme is a set of filter coefficients Hdyn(μ,k) which determine the amount of suppression to be applied to "dynamic noise parts", given by
Hdyn(μ, k) = DynSupp(Y(μ, k), Ȳ(μ, k)).   (23)

The output of the dynamic suppression estimator 108 is denoted as dynamic suppression filter coefficients Hdyn(μ,k). The dynamic suppression estimator 108 may, e.g., compare the input spectrum Y(μ,k) and the smoothed input spectrum Ȳ(μ, k) to determine the dynamicity of the noise.
Interframe formant detection is performed in the interframe formant detector 109, which detects formants present in the noisy input speech signal y(n). This detection outputs a signal which is a time-varying signal or a time-frequency varying signal. The output of the interframe formant detector 109 is a spectral correlation factor Kcorr(μ,k) given by
Kcorr(μ, k) = FormantDetection(y_scaling(k), Hdyn(μ, k)).   (24)
The spectral correlation factor Kcorr(μ,k) provided by the interframe formant detector 109 is a signal which may assume values between 0 and 1, indicating whether formants are present or not. By choosing an adequate threshold, this signal allows determining which parts of the time-frequency noisy input spectrum are to be suppressed.
Fricative detection is performed in the fricative detector, which detects white-noise-like sounds (fricatives) present in the noisy input speech signal y(n). The output F(k) of the fricative detector is a binary signal indicating whether the given speech frame is a fricative frame or not. This binary output signal is input into the interframe formant detector, where it is combined with the formant detection so that both collectively influence the correlation factor Kcorr(μ,k). A multiplicity of methods for detecting fricatives is known in the art.
Noise suppression filter coefficients are determined in the suppression filter controller 105 based on the Wiener filter coefficients, dynamic suppression coefficients, and the formant detection signal and supplied as final noise suppression filter coefficients to the noise suppression filter 106. The three components mentioned above are combined to obtain the final suppression filter coefficients Hw_dyn(μ,k) which are given by
Hw_dyn(μ, k) = FinalSuppCoeffs(Kcorr(μ, k), Hdyn(μ, k), Hw(μ, k)).   (25)
During operation of the example noise reduction structure described above, the speaker can be standing at any unknown distance from the microphones, and the speech level needs to be estimated. Conventional noise reduction systems and methods estimate the scaling factor through a pre-tuned value, e.g., based on a system engineer's tuning. One drawback of this approach is that the estimations and tunings cannot easily be ported to different devices and systems without extensive tests and re-tuning. To overcome this drawback, the scaling is automatically estimated in the systems and methods presented herein, so that dynamic suppression can be applied without any substantial limitations. The systems and methods described herein automatically choose which acoustic scenario to operate in and, in turn, scale the incoming noisy input signal y(n) accordingly, so that most devices in which such systems and methods are implemented are enabled to support human communication and speech recognition.
The autoscaling structure can be considered as an independent system or method which can be plugged into any larger system or method, as outlined in the following.
In a spectral correlator 204, further correlation values Kcorriter(μ, k) are computed from the initial estimate of the scaling factor y_scalingest1. In a subsequent decision 205, the further correlation values Kcorriter(μ, k) are evaluated as to whether they are too high or too low. If they are too low, an ith scaling factor y_scalingesti is output upon expanding the scaling factor estimate 206, and this ith scaling factor forms the basis for a new iteration. However, if the further correlation values Kcorriter(μ, k) are too high, then, upon diminishing the scaling factor estimate 207, a decision 208 is made as to whether the target iteration has been reached or not. If it has been reached, a scaling factor y_scaling(k) is output. If it has not been reached, the ith scaling factor y_scalingesti forms the basis for a new iteration.
Different kinds of scenarios can exist for a given acoustic environment. In one scenario, the application of a dynamic suppression enhances the noisy signal. Here, the signal-to-noise ratio of the targeted speaker plays a vital role. Hence, a given speech scenario is classified into one of two scenarios: a classical approach scenario and a dynamic approach scenario. The classical approach scenario is chosen in extremely low signal-to-noise ratio situations in which the application of the dynamic approach would deteriorate the speech quality rather than enhance it. This approach is not discussed further here. The dynamic approach scenario is chosen for all other situations, where the suppression results in an enhanced speech quality and, thus, a better subjective experience for the listener. To arrive at the decision of classical or dynamic, two measures are computed and considered: an instantaneous signal-to-noise ratio and a long-term signal-to-noise ratio. Before computing the signal-to-noise ratio, it is first determined whether the current frame is a speech frame or not. This can be done with a simple voice activity detector based on a threshold comparison.
A simple voice activity detector would suffice here since the goal is to estimate the scaling and the estimate has to be based on a frame which has a high probability of being speech. This would ensure that the scaling estimate is of good quality. Once a signal ksum(k) meets a threshold condition Kthr-sum, resulting in a voice activity detector output Vad,
the instantaneous and the long-term signal-to-noise ratios can be computed. The instantaneous signal-to-noise ratio is denoted ξinst(k).
The long-term signal-to-noise ratio ξlt(k) is computed from the instantaneous signal-to-noise ratio ξinst(k) through a time-window averaging approach, wherein L is the length of the time window for averaging. The decision about the speech scenario SpSc is made by comparing the instantaneous and the long-term signal-to-noise ratios with respective thresholds.
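The scenario decision may be sketched as follows; the SNR definitions follow the structure described above, while the threshold values and window length are illustrative assumptions:

```python
import numpy as np
from collections import deque

class ScenarioClassifier:
    """Classify each frame as 'classical' or 'dynamic' from the
    instantaneous and long-term SNR. Threshold values and the exact
    SNR definitions are illustrative; the text only describes the
    structure of the decision."""

    def __init__(self, L=50, thr_inst=2.0, thr_lt=3.0):
        self.history = deque(maxlen=L)  # for time-window averaging
        self.thr_inst = thr_inst
        self.thr_lt = thr_lt

    def classify(self, Y_mag, B_est, is_speech_frame):
        if not is_speech_frame:
            return None  # SNR is only evaluated on likely speech frames
        # Instantaneous SNR: input power relative to estimated noise power.
        xi_inst = np.sum(Y_mag ** 2) / max(np.sum(B_est ** 2), 1e-12)
        # Long-term SNR: time-window average over the last L speech frames.
        self.history.append(xi_inst)
        xi_lt = float(np.mean(self.history))
        return ("dynamic" if xi_inst > self.thr_inst and xi_lt > self.thr_lt
                else "classical")
```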
The following considerations are based on the assumption that the given scenario is a dynamic approach scenario. Given a known scaling, the (scaled) spectral correlation factor Kcorr(k) is computed by Equation (32). Here it is desired to estimate the scaling, given the fact that it is a speech frame. The scaling factor y_scaling(k) can be computed by rearranging Equation (32).
However, the spectral correlation factor Kcorr is also unknown. Therefore, the approach is to start with an assumed correlation value, which can be any appropriate value. The spectral correlation factor Kcorr is thus set to a positive integer factor Kfactor of the later-used threshold Kthr, through which the start correlation value Kcorrstart is computed,
Kcorrstart = Kthr · Kfactor,   (34)
and the initial estimate of the scaling y_scalingest1 can be computed according to Equation (35).
Now a basis is established for an iterative search for the “optimal” scaling. The search is performed, for example, according to the following steps:
1. Compute the spectral correlation Kcorri based on the current estimate of the scaling factor y_scalingesti (starting with y_scalingest1) according to Equation (36).
2. The spectral correlation value Kcorri is compared to the threshold Kthr to evaluate if the estimated scaling is too high or too low.
3. If the value is too high, a simple diminishing rule is applied to re-estimate a new scaling factor.
4. If the value is too low, a simple expanding rule is applied to re-estimate a new scaling factor.
5. Repeat steps 1 to 4 until iteration i reaches the target iteration Niter.
6. Upon reaching the target iteration Niter, the search algorithm is stopped and the current frame's scaling factor is set to the last computed value
y_scaling(k) = y_scalingestNiter.
The computed scaling value may be sub-optimal or pseudo-optimal since the precision of the estimate depends on the number of iterations in the search algorithm.
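The iterative search may be sketched as follows; the expanding and diminishing rules are not reproduced above, so simple multiplicative steps are assumed for illustration:

```python
def estimate_scaling(spec_corr, y_scaling_init, k_thr,
                     n_iter=8, step=1.5):
    """Iterative search for the frame's scaling factor (steps 1 to 6).
    spec_corr(y_scaling) evaluates the scaled spectral correlation
    (Equation (32)); y_scaling_init corresponds to y_scaling_est1 from
    Equation (35). The step size is an illustrative assumption."""
    y_scaling = y_scaling_init
    for _ in range(n_iter):           # stop at the target iteration N_iter
        k_corr = spec_corr(y_scaling)
        if k_corr > k_thr:
            y_scaling /= step         # correlation too high: diminish
        else:
            y_scaling *= step         # correlation too low: expand
    return y_scaling                  # sub-/pseudo-optimal estimate
```

As noted above, the precision of the returned estimate depends directly on the number of iterations n_iter.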
Accordingly, the method includes detecting a frame which is a speech frame with high probability, and, based on this frame, computing the instantaneous and long-term SNR. The method allows choosing automatically which acoustic scenario to operate in and scaling the incoming noisy signal accordingly.
The method may be implemented in dedicated logic or, as described above, by way of a structure including a processor and a memory, the memory storing instructions of a program and the processor executing the instructions of the program.
The method described above may be encoded in a computer-readable medium such as a CD ROM, disk, flash memory, RAM or ROM, an electromagnetic signal, or other machine-readable medium as instructions for execution by a processor. Alternatively or additionally, any type of logic may be utilized and may be implemented as analog or digital logic using hardware, such as one or more integrated circuits (including amplifiers, adders, delays, and filters), or one or more processors executing amplification, adding, delaying, and filtering instructions; or in software in an application programming interface (API) or in a Dynamic Link Library (DLL), functions available in a shared memory or defined as local or remote procedure calls; or as a combination of hardware and software.
The method may be implemented by software and/or firmware stored on or in a computer-readable medium, machine-readable medium, propagated-signal medium, and/or signal-bearing medium. The media may comprise any device that contains, stores, communicates, propagates, or transports executable instructions for use by or in connection with an instruction executable system, apparatus, or device. The machine-readable medium may selectively be, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared signal or a semiconductor system, apparatus, device, or propagation medium. A non-exhaustive list of examples of a machine-readable medium includes: a magnetic or optical disk, a volatile memory such as a Random Access Memory “RAM,” a Read-Only Memory “ROM,” an Erasable Programmable Read-Only Memory (i.e., EPROM) or Flash memory, or an optical fiber. A machine-readable medium may also include a tangible medium upon which executable instructions are printed, as the logic may be electronically stored as an image or in another format (e.g., through an optical scan), then compiled, and/or interpreted or otherwise processed. The processed medium may then be stored in a computer and/or machine memory.
The systems may include additional or different logic and may be implemented in many different ways. A controller may be implemented as a microprocessor, microcontroller, application specific integrated circuit (ASIC), discrete logic, or a combination of other types of circuits or logic. Similarly, memories may be DRAM, SRAM, Flash, or other types of memory. Parameters (e.g., conditions and thresholds) and other data structures may be separately stored and managed, may be incorporated into a single memory or database, or may be logically and physically organized in many different ways. Programs and instruction sets may be parts of a single program, separate programs, or distributed across several memories and processors.
The description of embodiments has been presented for purposes of illustration and description. Suitable modifications and variations to the embodiments may be performed in light of the above description or may be acquired from practicing the methods. For example, unless otherwise noted, one or more of the described methods may be performed by a suitable device and/or combination of devices. The described methods and associated actions may also be performed in various orders in addition to the order described in this application, in parallel, and/or simultaneously. The described systems are exemplary in nature, and may include additional elements and/or omit elements.
As used in this application, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding a plurality of said elements or steps, unless such exclusion is stated. Furthermore, references to "one embodiment" or "one example" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects.
While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the invention. In particular, the skilled person will recognize the interchangeability of various features from different embodiments. Although these techniques and systems have been disclosed in the context of certain embodiments and examples, it will be understood that these techniques and systems may be extended beyond the specifically disclosed embodiments to other embodiments and/or uses and obvious modifications thereof.
This application is the U.S. national phase of PCT Application No. PCT/EP2020/058944 filed on Mar. 30, 2020, the disclosure of which is incorporated in its entirety by reference herein.
| Filing Document | Filing Date | Country | Kind |
| --- | --- | --- | --- |
| PCT/EP2020/058944 | Mar. 30, 2020 | WO | |