The present document relates to the separation of one or more audio sources from a multi-channel audio signal.
A mixture of audio signals, notably a multi-channel audio signal such as a stereo, 5.1 or 7.1 audio signal, is typically created by mixing different audio sources in a studio, or generated by recording acoustic signals simultaneously in a real environment. The different audio channels of a multi-channel audio signal may be described as different sums of a plurality of audio sources. The task of source separation is to identify the mixing parameters which lead to the different audio channels and possibly to invert the mixing parameters to obtain estimates of the underlying audio sources.
When no prior information on the audio sources that are involved in a multi-channel audio signal is available, the process of source separation may be referred to as blind source separation (BSS). In the case of spatial audio captures, BSS includes the steps of decomposing a multi-channel audio signal into different source signals and of providing information on the mixing parameters, on the spatial position and/or on the acoustic channel response between the originating location of the audio sources and the one or more receiving microphones.
The problem of blind source separation and/or of informed source separation is relevant in various different application areas, such as speech enhancement with multiple microphones, crosstalk removal in multi-channel communications, multi-path channel identification and equalization, direction of arrival (DOA) estimation in sensor arrays, improvement over beam-forming microphones for audio and passive sonar, movie audio up-mixing and re-authoring, music re-authoring, transcription and/or object-based coding.
Real-time online processing is typically important for many of the above-mentioned applications, such as those for communications and those for re-authoring. Hence, there is a need in the art for a solution for separating audio sources in real-time, which raises requirements with regard to a low system delay and a low analysis delay of the source separation system. A low system delay requires that the system supports sequential real-time processing (clip-in/clip-out) without requiring substantial look-ahead data. A low analysis delay requires that the complexity of the algorithm is sufficiently low to allow for real-time processing given practical computation resources.
The present document addresses the technical problem of providing a real-time method for source separation. It should be noted that the method described in the present document is applicable to blind source separation, as well as to semi-supervised or supervised source separation, for which information about the sources and/or about the noise is available.
According to an aspect, a method for extracting J audio sources from I audio channels, with I, J>1, is described. The audio channels may for example be captured by microphones or may correspond to the channels of a multi-channel audio signal. The audio channels include a plurality of clips, each clip including N frames, with N>1. In other words, the audio channels may be subdivided into clips, wherein each clip includes a plurality of frames. A frame of the audio channel typically corresponds to an excerpt of an audio signal (for example, to a 20 ms excerpt) and typically includes a sequence of samples.
The I audio channels are representable as a channel matrix in a frequency domain, and the J audio sources are representable as a source matrix in the frequency domain. In particular, the audio channels may be transformed from the time domain into the frequency domain using a time domain to frequency domain transform, such as a short-term Fourier transform (STFT).
The method includes, for a frame n of a current clip, for at least one frequency bin f, and for a current iteration, updating a Wiener filter matrix based on a mixing matrix, which is adapted to provide an estimate of the channel matrix from the source matrix, and based on a power matrix of the J audio sources, which is indicative of a spectral power of the J audio sources. In particular, the method may be directed at determining a Wiener filter matrix for all the frames n of a current clip and for all the frequency bins f or for all frequency bands f̄.
The Wiener filter matrix is adapted to provide an estimate of the source matrix from the channel matrix. In particular, an estimate of the source matrix Sfn for the frame n of the current clip and for a frequency bin f may be determined as Sfn=ΩfnXfn, wherein Ωfn is the Wiener filter matrix for the frame n of the current clip and for the frequency bin f and wherein Xfn is the channel matrix for the frame n of the current clip and for the frequency bin f. Hence, subsequently to the iterative process for determining the Wiener filter matrix for a frame n and for a frequency bin f, the source matrix may be estimated using the Wiener filter matrix. Furthermore, using an inverse transform, the source matrix may be transformed from the frequency domain to the time domain to provide the J source signals, notably to provide a frame of the J source signals.
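As an illustration, a minimal NumPy sketch of this reconstruction step is shown below; the array shapes and the function name apply_wiener_filter are assumptions made for the example, not part of the described method.

```python
import numpy as np

def apply_wiener_filter(Omega, X):
    """Estimate the sources per time-frequency tile: Sfn = Omega_fn Xfn.

    Omega: (F, N, J, I) complex array of Wiener filter matrices
    X:     (F, N, I)    complex STFT of the I audio channels
    returns S: (F, N, J) complex STFT of the J estimated audio sources
    """
    # J x I matrix times I x 1 vector for every tile (f, n)
    return np.einsum('fnji,fni->fnj', Omega, X)
```

An inverse STFT of the estimated source matrix then yields a frame of the J time-domain source signals.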
Furthermore, the method includes, as part of the iterative process, updating a cross-covariance matrix of the I audio channels and of the J audio sources and updating an auto-covariance matrix of the J audio sources, based on the updated Wiener filter matrix and based on an auto-covariance matrix of the I audio channels. The auto-covariance matrix of the I audio channels for frame n of the current clip may be determined from frames of the current clip and from frames of one or more previous clips and from frames of one or more future clips. For this purpose a buffer including a history buffer and a look-ahead buffer for the audio channels may be provided. The number of future clips may be limited (for example, to one future clip), thereby limiting the processing delay of the source separation method.
In addition, the method includes updating the mixing matrix and the power matrix based on the updated cross-covariance matrix of the I audio channels and of the J audio sources and/or based on the updated auto-covariance matrix of the J audio sources.
The updating steps may be repeated or iterated to determine the Wiener filter matrix, until a maximum number of iterations has been reached or until a convergence criterion with respect to the mixing matrix has been met. As a result of such an iterative process, a precise Wiener filter matrix may be determined, thereby providing a precise separation between the different audio sources.
The frequency domain may be subdivided into F frequency bins. Furthermore, the F frequency bins may be grouped or banded into F̄ frequency bands f̄, with F̄<F, wherein each frequency band f̄ includes one or more frequency bins f.
As such, the frequency resolution of the Wiener filter matrix may be higher than the frequency resolution of one or more other matrices used within the iterative method for extracting the J audio sources. By doing this, an improved tradeoff between precision and computational complexity may be provided. In a particular example, the Wiener filter matrix may be updated at the resolution of frequency bins f using a mixing matrix at the resolution of frequency bins f and using a power matrix of the J audio sources at the reduced resolution of frequency bands f̄.
Ωfn=ΣS,f̄nAfnH(AfnΣS,f̄nAfnH+ΣB)−1, wherein AfnH is the Hermitian transpose of the mixing matrix and wherein ΣB is a noise power matrix.
Furthermore, the cross-covariance matrix RXS,f̄n of the I audio channels and of the J audio sources and the auto-covariance matrix RSS,f̄n of the J audio sources may be updated at the reduced resolution of frequency bands f̄, based on the updated Wiener filter matrix and based on the auto-covariance matrix RXX,f̄n of the I audio channels.
Furthermore, the mixing matrix Afn and the power matrix ΣS,f̄n may be updated based on the updated cross-covariance matrix RXS,f̄n and/or based on the updated auto-covariance matrix RSS,f̄n.
The Wiener filter matrix may be updated based on a noise power matrix comprising noise power terms, wherein the noise power terms may decrease with an increasing number of iterations. In other words, artificial noise may be inserted within the Wiener filter matrix and may be progressively reduced during the iterative process. As a result of this, the quality of the determined Wiener filter matrix may be increased.
For the frame n of the current clip and for the frequency bin f lying within a frequency band f̄, the Wiener filter matrix may be updated based on or using
Ωfn=ΣS,f̄nAfnH(AfnΣS,f̄nAfnH+ΣB)−1
wherein Ωfn is the updated Wiener filter matrix, wherein ΣS,f̄n is the power matrix of the J audio sources for the frequency band f̄ and for the frame n, wherein Afn is the mixing matrix for the frequency bin f and for the frame n, wherein AfnH is the Hermitian transpose of the mixing matrix, and wherein ΣB is the noise power matrix.
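A per-tile sketch of this update in NumPy may look as follows, assuming a diagonal source power matrix and an isotropic noise power matrix as described above; shapes and names are illustrative.

```python
import numpy as np

def update_wiener_filter(A, sigma_S, sigma_B):
    """Wiener update for one TF tile: Omega = Sigma_S A^H (A Sigma_S A^H + Sigma_B)^-1.

    A:       (I, J) mixing matrix for the tile
    sigma_S: (J,)   source spectral powers (diagonal of Sigma_S)
    sigma_B: scalar noise power, annealed towards zero over the iterations
    """
    I = A.shape[0]
    Sigma_S = np.diag(sigma_S.astype(complex))
    mix_cov = A @ Sigma_S @ A.conj().T + sigma_B * np.eye(I)  # I x I matrix
    return Sigma_S @ A.conj().T @ np.linalg.inv(mix_cov)      # J x I matrix
```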
The Wiener filter matrix may be updated by applying an orthogonal constraint with regards to the J audio sources. By way of example, the Wiener filter matrix may be updated iteratively to reduce the power of non-diagonal terms of the auto-covariance matrix of the J audio sources, in order to render the estimated audio sources more orthogonal with respect to one another. In particular, the Wiener filter matrix may be updated iteratively using a gradient (notably, by iteratively reducing the gradient of the power of the non-diagonal terms of the auto-covariance matrix RSS,f̄n of the J audio sources), wherein Ωf̄n denotes the Wiener filter matrix which is being updated.
The cross-covariance matrix of the I audio channels and of the J audio sources may be updated based on or using RXS,f̄n=RXX,f̄nΩf̄nH, and the auto-covariance matrix of the J audio sources may be updated based on or using RSS,f̄n=Ωf̄nRXX,f̄nΩf̄nH, wherein RXX,f̄n is the auto-covariance matrix of the I audio channels for the frequency band f̄ and for the frame n, and wherein Ωf̄nH is the Hermitian transpose of the updated Wiener filter matrix.
Updating the mixing matrix may include determining a frequency-independent cross-covariance matrix RXS,n of the I audio channels and of the J audio sources and a frequency-independent auto-covariance matrix RSS,n of the J audio sources, by summing the band-based covariance matrices over the frequency bands f̄. The updated mixing matrix An may then be determined based on or using An=RXS,n(RSS,n)−1.
The method may include determining a frequency-dependent weighting term efn based on the auto-covariance matrix RXX,fn of the I audio channels. The band-based covariance matrices may then be weighted using the weighting term efn when summing over the frequency bands f̄, such that time-frequency tiles exhibiting a relatively high energy are given a relatively high importance when updating the mixing matrix.
Updating the power matrix may include determining an updated power matrix term (ΣS)jj,fn for the jth audio source, for the frequency bin f and for the frame n, based on or using (ΣS)jj,fn=(RSS,f̄n)jj, wherein (RSS,f̄n)jj is the jth diagonal term of the updated auto-covariance matrix of the J audio sources for the frequency band f̄ which includes the frequency bin f.
Furthermore, updating the power matrix may include determining a spectral signature W and a temporal signature H for the J audio sources using a non-negative matrix factorization of the power matrix. The spectral signature W and the temporal signature H for the jth audio source may be determined based on the updated power matrix term (ΣS)jj,fn for the jth audio source. A further updated power matrix term (ΣS)jj,fn for the jth audio source may be determined based on (ΣS)jj,fn=ΣkWj,fkHj,kn, wherein k is the number or index of signatures. The power matrix may then be updated using the further updated power matrix terms for the J audio sources. The factorization of the power matrix may be used to impose one or more constraints (notably with regards to spectrum permutation) on the power matrix, thereby further increasing the quality of the source separation method.
The method may include initializing the mixing matrix (at the beginning of the iterative process for determining the Wiener filter matrix) using a mixing matrix determined for a frame (notably the last frame) of a clip directly preceding the current clip. Furthermore, the method may include initializing the power matrix based on the auto-covariance matrix of the I audio channels for frame n of the current clip and based on the Wiener filter matrix determined for a frame (notably the last frame) of the clip directly preceding the current clip. By making use of the results obtained for a previous clip for initializing the iterative process for the frames of the current clip, the convergence speed and quality of the iterative method may be increased.
According to a further aspect, a system for extracting J audio sources from I audio channels, with I, J>1, is described, wherein the audio channels include a plurality of clips, each clip comprising N frames, with N>1. The I audio channels are representable as a channel matrix in a frequency domain and the J audio sources are representable as a source matrix in the frequency domain. For a frame n of a current clip, for at least one frequency bin f, and for a current iteration, the system is adapted to update a Wiener filter matrix based on a mixing matrix, which is adapted to provide an estimate of the channel matrix from the source matrix, and based on a power matrix of the J audio sources, which is indicative of a spectral power of the J audio sources. The Wiener filter matrix is adapted to provide an estimate of the source matrix from the channel matrix. Furthermore, the system is adapted to update a cross-covariance matrix of the I audio channels and of the J audio sources and to update an auto-covariance matrix of the J audio sources, based on the updated Wiener filter matrix and based on an auto-covariance matrix of the I audio channels. In addition, the system is adapted to update the mixing matrix and the power matrix based on the updated cross-covariance matrix of the I audio channels and of the J audio sources, and/or based on the updated auto-covariance matrix of the J audio sources.
According to a further aspect, a software program is described. The software program may be adapted for execution on a processor and for performing the method steps outlined in the present document when carried out on the processor.
According to another aspect, a storage medium is described. The storage medium may include a software program adapted for execution on a processor and for performing the method steps outlined in the present document when carried out on the processor.
According to a further aspect, a computer program product is described. The computer program may include executable instructions for performing the method steps outlined in the present document when executed on a computer.
It should be noted that the methods and systems including their preferred embodiments as outlined in the present patent application may be used stand-alone or in combination with the other methods and systems disclosed in this document. Furthermore, all aspects of the methods and systems outlined in the present patent application may be arbitrarily combined. In particular, the features of the claims may be combined with one another in an arbitrary manner.
The invention is explained below in an exemplary manner with reference to the accompanying drawings, wherein:
As outlined above, the present document is directed at the separation of audio sources from a multi-channel audio signal, notably for real-time applications.
The document uses the nomenclature described in Table 1.
Furthermore, the present document makes use of the following notation: a division between matrices may denote the element-wise division, and the expression B−1 may denote a matrix inversion.
An I-channel multi-channel audio signal includes I different audio channels 302, each being a convolutive mixture of J audio sources 301 plus ambience and noise,
xi(t)=Σj=1, . . . , JΣτ=0, . . . , L−1aij(τ)sj(t−τ)+bi(t) (1)
where xi(t) is the i-th time-domain audio channel 302, with i=1, . . . , I and t=1, . . . , T; sj(t) is the j-th audio source 301, with j=1, . . . , J, and it is assumed that the audio sources 301 are uncorrelated with each other; bi(t) is the sum of ambience signals and noise (which may be referred to jointly as noise for simplicity), wherein the ambience and noise signals are uncorrelated with the audio sources 301; and aij(τ) are mixing parameters, which may be considered as finite impulse responses of filters with path length L.
If the STFT (short-term Fourier transform) frame size ωlen is substantially larger than the filter path length L, the linear convolution may be approximated by a circular convolution, such that the mixing model becomes multiplicative in the frequency domain, as
Xfn=AfnSfn+Bfn (2)
where Xfn and Bfn are I×1 matrices, Afn are I×J matrices, and Sfn are J×1 matrices, being the STFTs of the audio channels 302, the noise, the mixing parameters and the audio sources 301, respectively. Xfn may be referred to as the channel matrix, Sfn may be referred to as the source matrix and Afn may be referred to as the mixing matrix.
A special case of the convolution mixing model is an instantaneous mixing type, where the filter path length L=1, such that:
aij(τ)=0, (∀τ≠0) (3)
In the frequency domain, the mixing parameters A are then frequency-independent and real-valued, meaning that equation (3) is equivalent to Afn=An, ∀f=1, . . . , F. Without loss of generality and extendibility, the instantaneous mixing type will be described in the following.
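As a toy illustration of the instantaneous model, the following sketch generates a frequency-domain mixture Xfn=AnSfn+Bfn with a frequency-independent, real-valued mixing matrix; all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
F, N, I, J = 64, 10, 2, 3  # bins, frames, channels, sources (hypothetical)

S = rng.standard_normal((F, N, J)) + 1j * rng.standard_normal((F, N, J))
B = 0.01 * (rng.standard_normal((F, N, I)) + 1j * rng.standard_normal((F, N, I)))
A = np.abs(rng.standard_normal((I, J)))            # instantaneous, real-valued
A /= np.sqrt((A ** 2).sum(axis=0, keepdims=True))  # energy-preserving columns

X = np.einsum('ij,fnj->fni', A, S) + B             # Xfn = A Sfn + Bfn
```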
The initial values may be used to initialize an iterative scheme for updating parameters until convergence of the parameters or until reaching the maximum allowed number of iterations ITR. A Wiener filter Sfn=ΩfnXfn may be used to determine the audio sources 301 from the audio channels 302, wherein Ωfn are the Wiener filter parameters or the un-mixing parameters (included within a Wiener filter matrix). The Wiener filter parameters Ωfn within a particular iteration may be calculated or updated using the values of the mixing parameters Aij,fn and of the spectral power matrices (ΣS)jj,fn, which have been determined within the previous iteration (step 102). The updated Wiener filter parameters Ωfn may be used to update (step 103) the auto-covariance matrices RSS of the audio sources 301 and the cross-covariance matrix RXS of the audio sources and the audio channels. The updated covariance matrices may be used to update the mixing parameters Aij,fn and the spectral power matrices (ΣS)jj,fn (step 104). If a convergence criterion is met (step 105), the audio sources may be reconstructed (step 106) using the converged Wiener filter Ωfn. If the convergence criterion is not met (step 105), the Wiener filter parameters Ωfn may be updated in step 102 for a further iteration of the iterative process.
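The control flow of this iterative scheme (steps 102 to 106) may be sketched as follows. The helper callables are placeholders for the updates described below, and the convergence test on the mixing matrix is an assumption for illustration; the exact stop criterion of step 105 is not reproduced here.

```python
import numpy as np

def run_iterations(R_XX, A, Sigma_S, update_wiener, update_covariances,
                   update_source_params, max_iter=40, tol=1e-4):
    """Skeleton of the iterative parameter updates (steps 102-106)."""
    for it in range(max_iter):                             # bounded by ITR
        Omega = update_wiener(A, Sigma_S, it)              # step 102
        R_XS, R_SS = update_covariances(Omega, R_XX)       # step 103
        A_new, Sigma_S = update_source_params(R_XS, R_SS)  # step 104
        converged = np.linalg.norm(A_new - A) < tol * np.linalg.norm(A_new)
        A = A_new
        if converged:                                      # step 105
            break
    return Omega, A, Sigma_S                               # used in step 106
```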
The method 100 may be applied to a clip of frames of a multi-channel audio signal, wherein a clip includes N frames. As illustrated in the drawings, a buffer 200 including frames of one or more previous clips (as history buffer 201) and frames of one or more future clips (as look-ahead buffer 202) is maintained for determining the covariance matrices.
In the following, a scheme for initializing the source parameters is described. The time-domain audio channels 302 are available and a relatively small random noise may be added to the input in the time-domain to obtain (possibly noisy) audio channels xi(t). A time-domain to frequency-domain transform is applied (for example, an STFT) to obtain Xfn. The instantaneous covariance matrices of the audio channels may be calculated as
RXX,fninst=XfnXfnH, n=1, . . . , N+TR−1 (4)
The covariance matrices for different frequency bins and for different frames may be calculated by averaging over TR frames:
RXX,fn=(1/TR)Σm=n, . . . , n+TR−1RXX,fminst (5)
A weighting window may be applied optionally to the summing in equation (5) so that information which is closer to the current frame is given more importance.
RXX,fn may be grouped into band-based covariance matrices RXX,f̄n by summing the bin-based covariance matrices RXX,fn over the frequency bins f which fall into the frequency band f̄.
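A sketch of equations (4) and (5) together with the band grouping may look as follows; the band_edges representation and the choice of summing over the bins of a band are assumptions of the example, and the optional weighting window is omitted.

```python
import numpy as np

def channel_covariances(X, TR, band_edges):
    """Bin-based covariances averaged over TR frames, then grouped into bands.

    X:          (F, N + TR - 1, I) STFT buffer including look-ahead frames
    TR:         number of frames averaged per covariance estimate
    band_edges: list of (lo, hi) bin ranges defining the frequency bands
    """
    F, N_buf, I = X.shape
    N = N_buf - TR + 1
    inst = np.einsum('fni,fnk->fnik', X, X.conj())  # R_inst = Xfn Xfn^H per tile
    R = np.stack([inst[:, n:n + TR].mean(axis=1) for n in range(N)], axis=1)
    R_band = np.stack([R[lo:hi].sum(axis=0) for lo, hi in band_edges], axis=0)
    return R, R_band  # (F, N, I, I) and (F_bands, N, I, I)
```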
Using the input covariance matrices RXX,fn, logarithmic energy values may be determined for each time-frequency (TF) tile, meaning for each combination of frequency bin f and frame n. The logarithmic energy values may then be normalized or mapped to a [0, 1] interval:
where α may be set to 2.5, and typically ranges from 1 to 2.5. The normalized logarithmic energy values efn may be used within the method 100 as the weighting factor for the corresponding TF tile for updating the mixing matrix A (see equation 18).
The covariance matrices of the audio channels 302 may be normalized by the energy of the mix channels per TF tile, so that the sum of all normalized energies of the audio channels 302 for a given TF tile is one:
RXX,fn←RXX,fn/(trace(RXX,fn)+ε1) (7)
where ε1 is a relatively small value (for example, 10−6) to avoid division by zero, and trace(·) returns the sum of the diagonal entries of the matrix within the bracket.
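Since the exact mapping of equation (6) is not reproduced here, the following sketch uses an assumed min-max normalization of the logarithmic energies, with α controlling the compression; the trace normalization follows equation (7).

```python
import numpy as np

def tile_weights_and_normalize(R, alpha=2.5, eps1=1e-6):
    """Per-tile weights efn in [0, 1] (assumed mapping) and trace normalization.

    R: (F, N, I, I) covariance matrices of the audio channels
    """
    energy = np.trace(R, axis1=-2, axis2=-1).real             # per-tile energy
    log_e = np.log10(energy + eps1)
    e = (log_e - log_e.min()) / max(log_e.max() - log_e.min(), eps1)
    e = e ** (1.0 / alpha)                                    # assumed compression
    R_norm = R / (energy + eps1)[..., None, None]             # equation (7)
    return e, R_norm
```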
Initialization for the sources' spectral power matrices differs from the first clip of a multi-channel audio signal to other following clips of the multi-channel audio signal:
For the first clip, the sources' spectral power matrices (for which only diagonal elements are non-zero) may be initialized with random Non-negative Matrix Factorization (NMF) matrices W, H (or pre-learned values for W, H, if available):
(ΣS)jj,fn=ΣkWj,fkHj,kn (8)
where, by way of example, Wj,fk=0.75|rand(j, fk)|+0.25 and Hj,kn=0.75|rand(j, kn)|+0.25. The two matrices for updating Wj,fk in equation (22) may also be initialized with random values: (WA)j,fk=0.75|rand(j, fk)|+0.25 and (WB)j,fk=0.75|rand(j, fk)|+0.25.
For any following clips, the sources' spectral power matrices may be initialized by applying the previously estimated Wiener filter parameters Ω for the previous clip to the covariance matrices of the audio channels 302:
(ΣS)jj,fn=(ΩRXXΩH)jj,fn+ε2|rand(j)| (9)
where Ω may be the estimated Wiener filter parameters for the last frame of the previous clip. ε2 may be a relatively small value (for example, 10−6) and rand(j)˜N(1.0, 0.5) may be a Gaussian random value. By adding a small random value, a cold start issue may be overcome in case of very small values of (ΩRXXΩH)jj,fn. Furthermore, global optimization may be favored.
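A sketch of this initialization per equation (9) may look as follows; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def init_source_powers(Omega_prev, R_XX, eps2=1e-6, rng=None):
    """(Sigma_S)jj = (Omega R_XX Omega^H)jj + eps2 |rand(j)|, per equation (9).

    Omega_prev: (J, I) Wiener filter of the last frame of the previous clip
    R_XX:       (F, N, I, I) covariance matrices of the audio channels
    """
    rng = rng or np.random.default_rng()
    R_SS = np.einsum('ji,fnik,lk->fnjl', Omega_prev, R_XX, Omega_prev.conj())
    diag = np.einsum('fnjj->fnj', R_SS).real
    jitter = eps2 * np.abs(rng.normal(1.0, 0.5, size=Omega_prev.shape[0]))
    return diag + jitter  # (F, N, J), with rand(j) ~ N(1.0, 0.5)
```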
Initialization for the mixing parameters A may be done as follows: For the first clip, for the multi-channel instantaneous mixing type, the mixing parameters may be initialized:
Aij,fn=|rand(i, j)|, ∀f, n (10)
and then normalized, such that ΣiAij,fn2=1:
Aij,fn←Aij,fn/(Σi′Ai′j,fn2)1/2 (11)
For the stereo case, meaning for a multi-channel audio signal including I=2 audio channels, with the left channel L being i=1 and with the right channel R being i=2, one may explicitly apply the formulas below
For the subsequent clips of the multi-channel audio signal, the mixing parameters may be initialized with the estimated values from the last frame of the previous clip of the multi-channel audio signal.
In the following, updating the Wiener filter parameters is outlined. The Wiener filter parameters may be calculated:
Ωf̄n=ΣS,f̄nAnH(AnΣS,f̄nAnH+ΣB)−1 (13)
where the ΣS,f̄n are the band-based spectral power matrices of the audio sources 301 (diagonal matrices with the diagonal entries (ΣS)jj,f̄n) and where ΣB is the covariance matrix of the noise.
The noise covariance parameters ΣB may be set to iteration-dependent common values, which do not exhibit frequency dependency or time dependency, as the noise is assumed to be white and stationary. The values change in each iteration iter, from an initial value of 1/100I to a final, smaller value of 1/10000I. This operation is similar to simulated annealing, which favors fast and global convergence.
The matrix inversion for calculating the Wiener filter parameters in equation (13) is applied to an I×I matrix. In order to avoid the computations for such matrix inversions, in the case J≤I, instead of equation (13), the Woodbury matrix identity may be used for calculating the Wiener filter parameters using
Ωf̄n=(ΣS,f̄n−1+AnHΣB−1An)−1AnHΣB−1 (15)
It may be shown that equation (15) is mathematically equivalent to equation (13).
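This equivalence is straightforward to verify numerically; the sketch below compares both forms for random, illustrative parameters (real-valued A, diagonal power matrices).

```python
import numpy as np

rng = np.random.default_rng(1)
I, J = 4, 2
A = rng.standard_normal((I, J))
Sigma_S = np.diag(rng.uniform(0.5, 2.0, J))
Sigma_B = 1e-2 * np.eye(I)  # illustrative isotropic noise power

omega_13 = Sigma_S @ A.T @ np.linalg.inv(A @ Sigma_S @ A.T + Sigma_B)
omega_15 = (np.linalg.inv(np.linalg.inv(Sigma_S) + A.T @ np.linalg.inv(Sigma_B) @ A)
            @ A.T @ np.linalg.inv(Sigma_B))  # J x J inversion; Sigma_S, Sigma_B diagonal
assert np.allclose(omega_13, omega_15)       # equations (13) and (15) agree
```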
Under the assumption of uncorrelated audio sources, the Wiener filter parameters may be further regulated by iteratively applying the orthogonal constraints between the sources:
where the expression [·]D indicates the diagonal matrix which is obtained by setting all non-diagonal entries to zero, and where ε may be ε=10−12 or less. The gradient update is repeated until convergence is achieved or until a maximum allowed number ITRortho of iterations is reached. Equation (16) implements an adaptive decorrelation method.
The covariance matrices may be updated (step 103) using the following equations
RXS,f̄n=RXX,f̄nΩf̄nH, RSS,f̄n=Ωf̄nRXX,f̄nΩf̄nH (17)
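In NumPy, these updates may be sketched per band and frame as follows; the array shapes are assumptions of the example.

```python
import numpy as np

def update_covariances(Omega, R_XX):
    """Equation (17): R_XS = R_XX Omega^H and R_SS = Omega R_XX Omega^H.

    Omega: (F_bands, N, J, I), R_XX: (F_bands, N, I, I)
    """
    R_XS = np.einsum('fnik,fnjk->fnij', R_XX, Omega.conj())  # (F_bands, N, I, J)
    R_SS = np.einsum('fnji,fnik->fnjk', Omega, R_XS)         # (F_bands, N, J, J)
    return R_XS, R_SS
```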
In the following, a scheme for updating the source parameters is described (step 104). Since the instantaneous mixing type is assumed, the covariance matrices can be summed over frequency bins or frequency bands for calculating the mixing parameters. Moreover, weighting factors as calculated in equation (6) may be used to scale the TF tiles so that louder components within the audio channels 302 are given more importance:
RXS,n=Σf̄ ef̄nRXS,f̄n, RSS,n=Σf̄ ef̄nRSS,f̄n (18)
Given an unconstrained problem, the mixing parameters can be determined by matrix inversion:
An=RXS,n(RSS,n)−1 (19)
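A sketch of the weighted sums of equation (18) and the inversion of equation (19) for a single frame n, with assumed shapes:

```python
import numpy as np

def update_mixing_matrix(R_XS, R_SS, e):
    """Equations (18)-(19): weighted frequency sums, then A_n = R_XS,n R_SS,n^-1.

    R_XS: (F_bands, I, J), R_SS: (F_bands, J, J), e: (F_bands,) weights (eq. (6))
    """
    R_XS_n = np.einsum('f,fij->ij', e, R_XS)  # frequency-independent I x J
    R_SS_n = np.einsum('f,fjk->jk', e, R_SS)  # frequency-independent J x J
    return R_XS_n @ np.linalg.inv(R_SS_n)
```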
Furthermore, the spectral power of the audio sources 301 may be updated. In this context, the application of a non-negative matrix factorization (NMF) scheme may be beneficial to take into account certain constraints or properties of the audio sources 301 (notably with regards to the spectrum of the audio sources 301). As such, spectrum constraints may be imposed through NMF when updating the spectral power. NMF is particularly beneficial when prior knowledge about the audio sources' spectral signature (W) and/or temporal signature (H) is available. In cases of blind source separation (BSS), NMF may also have the effect of imposing certain spectrum constraints, such that spectrum permutation (meaning that spectral components of one audio source are split into multiple audio sources) is avoided and such that a more pleasing sound with fewer artifacts is obtained.
The audio sources' spectral power ΣS may be updated using
(ΣS)jj,fn=(RSS,f̄n)jj (20)
Subsequently, the audio sources' spectral signature Wj,fk and the audio sources' temporal signature Hj,kn may be updated for each audio source j based on (ΣS)jj,fn. For simplicity, the terms are denoted as W, H, and ΣS in the following (meaning without indexes). The audio sources' spectral signature W may be updated only once every clip for stabilizing the updates and for reducing computation complexity compared to updating W for every frame of a clip.
As an input to the NMF scheme, ΣS, W, WA, WB and H are provided. The following equations (21) up to (24) may then be repeated until convergence or until a maximum number of iterations is achieved. First the temporal signature may be updated:
with ε4 being small, for example 10−12. Then, WA, WB may be updated
and W may be updated
and W, WA, WB may be re-normalized
As such, updated W, WA, WB and H may be determined in an iterative manner, thereby imposing certain constraints regarding the audio sources. The updated W, WA, WB and H may then be used to refine the audio sources' spectral power ΣS using equation (8).
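Since the exact update rules (21) to (24), including the WA, WB accumulators, are not reproduced here, the following sketch uses standard Itakura-Saito NMF multiplicative updates, combined with the re-normalization described above, as a stand-in for the described scheme.

```python
import numpy as np

def is_nmf(V, K, n_iter=50, eps=1e-12, rng=None):
    """Factorize a source power matrix V (F x N) as V ~= W H (IS divergence)."""
    rng = rng or np.random.default_rng()
    F, N = V.shape
    W = 0.75 * np.abs(rng.standard_normal((F, K))) + 0.25
    H = 0.75 * np.abs(rng.standard_normal((K, N))) + 0.25
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH ** 2)) / (W.T @ (1.0 / WH) + eps)  # temporal signature
        WH = W @ H + eps
        W *= ((V / WH ** 2) @ H.T) / ((1.0 / WH) @ H.T + eps)  # spectral signature
        norms = np.sqrt((W ** 2).sum(axis=0, keepdims=True)) + eps
        W /= norms         # normalized spectral signatures
        H *= norms.T       # energy-related information moves into H
    return W, H
```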
In order to remove scale ambiguity, A, W and H (or A and ΣS) may be re-normalized:
Through re-normalization, A conveys energy-preserving mixing gains among channels (ΣiAij,n2=1), and W is also energy-independent and conveys normalized spectral signatures. Meanwhile, the overall energy is preserved, as all energy-related information is relegated into the temporal signature H. It should be noted that this re-normalization process preserves the quantity that scales the signal: A√(WH). The sources' spectral power matrices ΣS may be refined with the NMF matrices W and H using equation (8).
The stop criterion used in step 105 may be given by a convergence criterion with respect to the mixing parameters A.
The individual audio sources 301 may be reconstructed using the Wiener filter:
Sfn=ΩfnXfn (27)
where Ωfn may be re-calculated for each frequency bin using equation (13) (or equation (15)). For source reconstruction, it is typically beneficial to use a relatively fine frequency resolution, so it is typically preferable to determine Ωfn based on individual frequency bins f instead of frequency bands f̄.
Multi-channel (I-channel) source images may then be reconstructed by panning the estimated audio sources with the mixing parameters:
(Sj)i,fn=Aij,fnSj,fn (28)
where (Sj)i,fn is the STFT of the image of the j-th audio source 301 in the i-th audio channel 302. This reconstruction is conservative in the frequency domain, meaning that the sum of the reconstructed source images substantially corresponds to the original audio channels 302. Due to the linearity of the inverse STFT, the conservativity also holds in the time domain.
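A sketch of this reconstruction, combining equations (27) and (28) for the instantaneous mixing type; shapes and names are assumptions of the example.

```python
import numpy as np

def reconstruct_source_images(Omega, A, X):
    """Per-source multi-channel images: (Sj)i,fn = Aij Sj,fn with Sfn = Omega_fn Xfn.

    Omega: (F, N, J, I), A: (I, J) instantaneous mixing matrix, X: (F, N, I)
    returns images: (J, F, N, I)
    """
    S = np.einsum('fnji,fni->fnj', Omega, X)                  # equation (27)
    images = np.einsum('ij,fnj->jfni', A.astype(complex), S)  # pan source j by A[:, j]
    return images

# Conservativity can be checked via images.sum(axis=0), which should
# substantially restore X (up to the residual noise component).
```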
The methods and systems described in the present document may be implemented as software, firmware and/or hardware. Certain components may for example be implemented as software running on a digital signal processor or microprocessor. Other components may for example be implemented as hardware and/or as application-specific integrated circuits. The signals encountered in the described methods and systems may be stored on media such as random access memory or optical storage media. They may be transferred via networks, such as radio networks, satellite networks, wireless networks or wireline networks, for example the Internet. Typical devices making use of the methods and systems described in the present document are portable electronic devices or other consumer equipment which are used to store and/or render audio signals.