METHOD FOR PROCESSING A MULTICHANNEL SOUND IN A MULTICHANNEL SOUND SYSTEM

Information

  • Publication Number
    20150382125
  • Date Filed
    February 04, 2013
  • Date Published
    December 31, 2015
Abstract
The invention relates to a method for processing a multichannel sound in a multichannel sound system, wherein the input signals L and R are decoded, preferably as stereo signals. The aim of the invention is to develop the method such that a further improvement of the spatial reproduction of the input signals L and R is achieved on the basis of an extraction of direction components. According to the invention, this is achieved in that the signals R and L are decoded at least into two signals of the form nL-mR, in which n, m=1, 2, 3, 4.
Description

The invention relates to a method for processing a multichannel sound in a multichannel sound system, wherein the input signals L and R are decoded, preferably as stereo signals.


Methods of the initially named type are known and familiar to a person skilled in the art.


In the previously known method disclosed in publication U.S. Pat. No. 5,046,098, the front signals L′ and R′ as well as the center signal C and the surround signal S are generated from the two input signals L and R through weighted sum and difference formation: the center signal as C = a1·L + a2·R, the surround signal as S = a3·L − a4·R, and the front signals as L′ = a5·L − a6·C and R′ = a7·R − a8·C. The coefficients a1 . . . a8 of these weighted summations are derived from level measurements. In order to control this difference formation, two control signals are calculated: one from the level difference of the left and right channels, DLR, and one from the level difference of a sum and a difference signal, DCS. These two control signals are varied with time-variant response times. From these two time-variant control signals, four individual weighting factors EC, ES, EL and ER are then derived, which form a time-variant output matrix for calculating the front signals L′ and R′ as well as the center signal C and the surround signal S.


The publication US 2004/0125960 A1, which adds an enhancement of the decoding with time-variant control signals, discloses a further method of the initially named type. The two front signals Lout and Rout are obtained from the two input signals L and R by subtracting a weighted sum signal (L+R) and a weighted difference signal (L−R). The center signal C results from the sum (L+R) after subtraction of the weighted input signals L and R. The surround signal S results from the difference (L−R) after subtraction of the weighted input signals L and R. The weight coefficients gl, gr, gc and gs are obtained from a level adjustment of the signals L and R, or respectively L+R and L−R, in a recursive structure.


In publication U.S. Pat. No. 6,697,491 B1, the level difference calculation for L/R and (L+R)/(L−R) also serves to derive control signals for the weighted matrix decoding in the processing of multichannel sound.


In the multichannel sound method described in publication U.S. Pat. No. 5,771,295, the front signals Lo and Ro, the center signal Co and the surround signals LRO and RRO are derived from stereo signals, i.e. from the input signals L and R. For each output signal, the respective other signals are subtracted, with a weighting, from the signals L, R, L+R and L−R. Within the framework of this previously known method for processing a multichannel sound, frequency-dependent weighting factors are derived in addition to level-ratio calculations. The center signal thereby varies only in level, whereas the two surround signals LRO and RRO are derived in two frequency bands and in a phase-inverted manner.


The described methods for processing a multichannel sound in a multichannel sound system were developed mainly for the processing of movie sound signals. There, it is important to reproduce dynamically occurring signal directions, usually in the form of voice and effect signals, spatially and in a directionally accurate manner over several speakers. The dynamic activation of these multichannel signals supports the directional perception of such signals. In contrast, the direction information in musical stereo recordings is not dynamic to a high degree, but rather static, and changes only slightly for special spatial effects. Acoustic examinations within the framework of the method disclosed in publication US 2004/0125960 A1 show minimal control of the direction information, since dominant directions seldom occur within a stereo mix. Moreover, this time-variant multichannel control leads to a spatial shift of the signal when a stereo encoding is subsequently performed again.


In contrast, the extraction of direction signal components and their static or frequency-dependent weighting is considerably more important for improving the spatial resolution of stereo signals. The publication WO 2010/015275 A1 thus represents an important advancement of the method of the initially named type, since the stereo signals are split here into spatial components in order to evaluate them with different level regulators. The evaluated spatial signals are then recombined into a stereo signal. Due to the weighting of the spatial signal components, the spatial reproduction of the stereo signal is improved.


The object of the invention is thus to further develop a method of the initially named type such that a further improvement in the spatial reproduction of the input signals L and R is achieved based on an extraction of direction signal components.


This object is achieved with the features of claim 1. Advantageous embodiments of the invention result from the dependent claims.


According to the invention, the signals R and L are decoded at least into two signals of the form nL − mR, in which n, m = 1, 2, 3, 4. An improvement in the spatial reproduction and transparency of the input signals L and R is thereby advantageously achieved. For this, the signals L − R (i.e. with n = m = 1) and 2L − R (i.e. with n = 2 and m = 1) are preferably formed during the decoding.


The signals L and R are preferably decoded into a spatial signal R and into a center signal. The spatial signal is thereby formed from the difference of the signals L and R (component RL) and/or from the difference of the signals R and L (component RR).


In contrast to conventional methods, which provide for a splitting of the signals L and R into the front signals Lfront and Rfront, the center signal C and the surround signals SL and SR, the method according to the invention achieves a spatial and stereo expansion of a stereo signal by expanding the stereo splitting itself. For this, the spatial signals RL = L − R and RR = R − L are additionally calculated from the input channels R and L.


These properties have been verified for the following systems:

    • Behringer MS40 monitor speakers
    • Toshiba notebook
    • IMAC27 computer
    • LG GM 205 mobile telephone with DolbyMobile
    • Philips 42PFL9703D flatscreen television with BBE Surround
    • JBL On Stage 400p docking station


Comparisons to DolbyMobile, Virtual Dolby Surround and other stereo spatializers show that the method according to the invention generates a mainly neutral improvement of the stereo sound pattern.


Within the framework of psychoacoustic examinations, the derivation of the surround signals from the difference L − R also proved to be another important step toward an improved stereo and spatial expansion. After intensive audiometric testing, the 2:1 ratio used in the surround signals SL = 2L − R and SR = 2R − L proved to be beneficial. An advantageous embodiment of the invention thus provides that the surround signal SL is formed from the difference SL = 2L − R and the surround signal SR from the difference SR = 2R − L.


A frequency-dependent weighting of the surround signals SL and SR is thereby advantageous and thus expediently takes place, preferably by means of a high-shelving filter.


The signals L and R are expediently added to the signals LP and RP.


An audio system for performing the method is the object of claim 13, wherein the audio system comprises a signal processor, preferably in the form of an audio processor.


Software that is located on a signal processor, i.e. is loaded onto the signal processor, is also provided within the framework of the invention. The software contains an algorithm, which is executed by the signal processor, wherein the algorithm implements the method.


Moreover, the invention includes a signal processor for performing the method.





The invention is described in greater detail below based on the drawing. It shows in a schematic representation:



FIG. 1 a method according to the invention.






FIG. 1 shows the method according to the invention, which comprises four method sections A, B, C, D. Individually, the method sections concern the following:

    • the decoding (method section A),
    • the processing of the decoded signals (method section B),
    • the encoding (method section C),
    • the processing of the encoded signals (method section D).


The method begins in that, within the framework of the decoding, the input signals L and R, which are present as stereo signals, are split into three signal components, wherein the signals L and R can remain intact. These signal components are the center signal C, the spatial signal R and the surround signal S. The center signal C is single-channel, i.e. it contains only the channel C, while the spatial signal R and the surround signal S are dual-channel, i.e. they contain the signals RL and RR or SL and SR respectively. The surround and spatial signals SL, SR, RL and RR thereby carry the direction and spatial information of the stereo signals L and R.


In method section A, the signals, i.e.

    • the single-channel center signal C = L + R, also called the mono signal,
    • the stereo components RL = L − R and RR = R − L of the dual-channel spatial signal R, as well as
    • the two channels SL = 2L − R and SR = 2R − L of the dual-channel surround signal S,


are decoded from the stereo signals L and R in five parallel stages.
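
A minimal sketch of this decoding step in Python is given below; the function name and the assumption that L and R are available as NumPy sample arrays (or plain numbers) are illustrative choices, not part of the patent.

    def decode(L, R):
        # Method section A: split the stereo input into five parallel stages.
        C  = L + R       # single-channel center (mono) signal
        RL = L - R       # stereo component of the dual-channel spatial signal
        RR = R - L       # stereo component of the dual-channel spatial signal
        SL = 2 * L - R   # left channel of the dual-channel surround signal
        SR = 2 * R - L   # right channel of the dual-channel surround signal
        return C, RL, RR, SL, SR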


Method section A is followed by method section B, in which the processing of the channels C, RL, RR, SL and SR takes place. In order to adjust the volume of the center signal C and of the spatial signals RL = L − R and RR = R − L, these signals are given a level weighting by the first level regulators 1, 2, which manifests itself in the factor 1.5. After this first level weighting, a further, variable level weighting, which adjusts the sound characteristics of the decoded signals relative to L, R, is performed by the further level regulators 3, 4.


In contrast, the two surround signals SL = 2L − R and SR = 2R − L are delivered to the high-shelving filters 5, 6, through which the frequency response of the surround signals SL and SR is set. A frequency-dependent weighting of the signals SL and SR thus takes place, wherein the filters 5, 6 exhibit only a minimal phase shift in the frequency range around preferably 2 kHz, so that cancellation effects during the encoding in method section C are minimized while the actual amplifying effect is simultaneously emphasized, namely with a high-shelving frequency response of e.g. 3 dB at preferably 2 kHz. The surround signals SL, SR are then delivered to the level regulators 7, 8, which weight the sound characteristics of the decoded signals SL, SR.
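
One possible realization of the filters 5, 6 is sketched below in Python. The biquad high-shelf formulation (RBJ audio EQ cookbook) and the sampling rate are assumptions; only the corner frequency of about 2 kHz and the boost of about 3 dB follow the values named above.

    import numpy as np
    from scipy.signal import lfilter

    def high_shelf_coeffs(fs, f0=2000.0, gain_db=3.0):
        # Biquad high-shelf (RBJ audio EQ cookbook) boosting the band above f0.
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)   # shelf slope S = 1
        cosw = np.cos(w0)
        b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
                      -2 * A * ((A - 1) + (A + 1) * cosw),
                      A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha)])
        a = np.array([(A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
                      2 * ((A - 1) - (A + 1) * cosw),
                      (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha])
        return b / a[0], a / a[0]

    def shelve_surround(SL, SR, fs=48000.0):
        # Frequency-dependent weighting of the surround signals SL, SR (filters 5, 6).
        b, a = high_shelf_coeffs(fs)
        return lfilter(b, a, SL), lfilter(b, a, SR)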


During the encoding, i.e. in method section C, after the summation of the signals C, RL, RR, SL, SR already prepared in method section A, which in its basic form reads






LP = C + RL + SL = (L + R) + (L − R) + (2L − R) = 4L − R

RP = C + RR + SR = (L + R) + (R − L) + (2R − L) = 4R − L


the encoded stereo signals LP, RP thus result according to






LP = VC·C + VR·RL + VS·SL = VC(L + R) + VR(L − R) + VS(2L − R)

RP = VC·C + VR·RR + VS·SR = VC(L + R) + VR(R − L) + VS(2R − L)


or respectively after filtering of the surround signals SL, SR






LP = VC·C + VR·RL + VS·(SL)Filtered = VC(L + R) + VR(L − R) + VS·(2L − R)Filtered

RP = VC·C + VR·RR + VS·(SR)Filtered = VC(L + R) + VR(R − L) + VS·(2R − L)Filtered
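
The weighted summation of method section C can be illustrated with the following Python sketch; the function name and the default weight values are placeholders for illustration, not values prescribed by the patent.

    def encode(C, RL, RR, SL_filt, SR_filt, VC=1.0, VR=1.0, VS=1.0):
        # Method section C: weighted summation of the decoded channels into LP, RP.
        # SL_filt, SR_filt are the surround signals after the high-shelving filter;
        # VC, VR, VS are the level weightings of the center, spatial and surround paths.
        LP = VC * C + VR * RL + VS * SL_filt
        RP = VC * C + VR * RR + VS * SR_filt
        return LP, RP

With VC = VR = VS = 1 and unfiltered surround signals, this reduces to the basic form LP = 4L − R and RP = 4R − L given above.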


In the last method section D, the encoded, weighted signals LP, RP are post-processed by the stereo equalizers 9, 10. A special non-linear characteristic line NL is used for further enhancement of the sound pattern. This non-linear characteristic line maps an input amplitude x to an output amplitude y. The non-linear characteristic line y = f(x) used is






y = tanh(((1/7.522)·atan(7.522·x)·(sign(x) + 1)/2 + x·(sign(−x) + 1)/2)/0.5)·0.5


Harmonic overtones are added to the direct music signal via this characteristic line. The signals LP, RP are then post-processed further in method section D, with the level regulators 11, 12 determining the degree of overtone admixing to the direct signal. Final processing takes place by means of the level regulators 13, 14, which make the overall level of the method result adjustable.
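
The characteristic line and the admixing controlled by the level regulators 11, 12 can be sketched as follows in Python; only the formula y = f(x) is taken from the description, while the mix parameter and the additive blending of the overtone signal with the direct signal are illustrative assumptions.

    import numpy as np

    def characteristic(x):
        # Non-linear characteristic line y = f(x): the positive half-wave is softly
        # compressed via arctan, the negative half-wave is passed through linearly,
        # and the outer tanh(.../0.5)*0.5 limits the result.
        pos = (np.sign(x) + 1) / 2    # 1 for x > 0, 0 for x < 0
        neg = (np.sign(-x) + 1) / 2   # 1 for x < 0, 0 for x > 0
        shaped = (1 / 7.522) * np.arctan(7.522 * x) * pos + x * neg
        return np.tanh(shaped / 0.5) * 0.5

    def add_overtones(x, mix=0.3):
        # Level regulators 11, 12: degree of overtone admixing to the direct signal
        # (the value 0.3 is an assumption for illustration).
        return x + mix * characteristic(x)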


The present invention is not restricted in its design to the exemplary embodiment specified above. Rather, a plurality of variants is conceivable which make use of the presented solution even in fundamentally different designs. For example, within the framework of method section D, maximizers, i.e. compressors/limiters, can be used to further enhance the sound pattern.


LIST OF REFERENCE NUMBERS




  • 1, 2 First level regulators


  • 3, 4 Further level regulators


  • 5, 6 High-shelving filters


  • 7, 8 Level regulators


  • 9, 10 Stereo equalizers


  • 11, 12 Level regulators (overtone admixing)


  • 13, 14 Level regulators (overall level)


Claims
  • 1. A method for processing a multichannel sound in a multichannel sound system, in which the input signals L and R are decoded, preferably as stereo signals, characterized in that the signals R and L are decoded at least into two signals of the form nL − mR with n, m = 1, 2, 3, 4.
  • 2. The method according to claim 1, characterized in that the signals L and R are decoded into a spatial signal R and into a center signal, wherein a spatial signal RL is formed from the difference of the signals L and R and/or a spatial signal RR from the difference of the signals R and L.
  • 3. The method according to claim 1 or 2, characterized in that a surround signal SL is formed from the difference SL = 2L − R and a surround signal SR from the difference SR = 2R − L.
  • 4. The method according to one of claims 2 to 3, characterized in that an encoding to signals LP, RP takes place in the form LP = C + RL + SL = (L + R) + (L − R) + (2L − R) = 4L − R and RP = C + RR + SR = (L + R) + (R − L) + (2R − L) = 4R − L.
  • 5. The method according to one of claims 3 to 4, characterized in that the signals RL, RR, C, SL and SR contain a level weighting VC, VR, VS.
  • 6. The method according to claim 4, characterized in that an encoding to signals LP, RP takes place in the form LP = VC·C + VR·RL + VS·SL = VC(L + R) + VR(L − R) + VS(2L − R) and RP = VC·C + VR·RR + VS·SR = VC(L + R) + VR(R − L) + VS(2R − L).
  • 7. The method according to one of claims 3 to 6, characterized in that a frequency-dependent weighting of the signals SL and SR takes place.
  • 8. The method according to claim 7, characterized in that the frequency-dependent weighting takes place by means of a high-shelving filter (5, 6).
  • 9. The method according to one of claims 4 to 7, characterized in that the signals LP, RP are filtered by means of an equalizer (9, 10).
  • 10. The method according to one of claims 4 to 8, characterized in that harmonic overtones are added to the signals LP, RP.
  • 11. The method according to claim 10, characterized in that the addition of the harmonic overtones takes place by means of a maximizer or a non-linear characteristic line NL.
  • 12. The method according to one of claims 3 to 11, characterized in that the signals L and R are added to the signals LP and RP.
  • 13. An audio system for performing the method according to one of claims 1 to 12, characterized in that it comprises a signal processor.
  • 14. Software which is imported onto a signal processor, characterized in that the software contains an algorithm which is executed by the signal processor, wherein the algorithm includes the method according to one of claims 1 to 12.
  • 15. A signal processor for performing the method according to one of claims 1 to 12.
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2013/052127 2/4/2013 WO 00