The present disclosure relates to a technique for obtaining monaural sound signals from 2-channel sound signals in order to code sound signals in a monaural manner, to code sound signals in conjunction with monaural coding and stereo coding, to perform signal processing on sound signals in a monaural manner, or to perform signal processing on stereo sound signals by using monaural sound signals.
The technique of PTL 1 is a technique for obtaining monaural sound signals from 2-channel sound signals and embedded coding/decoding the 2-channel sound signals and the monaural sound signals. PTL 1 discloses a technique for obtaining monaural signals by averaging sound signals of the left channel input and sound signals of the right channel input for each corresponding sample, coding the monaural signals (monaural coding) to obtain a monaural code, decoding the monaural code (monaural decoding) to obtain monaural local decoded signals, and coding, for each of the left channel and the right channel, the difference (prediction residue signals) between the input sound signals and prediction signals obtained from the monaural local decoded signals. In the technique of PTL 1, for each channel, signals obtained by giving a latency and an amplitude ratio to the monaural local decoded signals are used as prediction signals, and prediction residue signals are obtained by subtracting the prediction signals from the input sound signals; the prediction signals are selected so as to have a latency and an amplitude ratio that minimize the errors between the input sound signals and the prediction signals, or so as to have a latency difference and an amplitude ratio that maximize the cross-correlation between the input sound signals and the monaural local decoded signals. By targeting the prediction residue signals for coding/decoding, the deterioration of the sound quality of the decoded sound signals of each channel is suppressed.
In the technique of PTL 1, the coding efficiency of each channel can be increased by optimizing the latency and the amplitude ratio given to the monaural local decoded signals when obtaining the prediction signals. However, in the technique of PTL 1, the monaural local decoded signals are obtained by coding/decoding monaural signals obtained by averaging the sound signals of the left channel and the sound signals of the right channel. In other words, there is a problem that the technique of PTL 1 is not devised to obtain monaural signals useful for signal processing such as coding processing from 2-channel sound signals.
An object of the present disclosure is to provide a technique for obtaining monaural signals useful for signal processing such as coding processing from 2-channel sound signals.
One aspect of the present disclosure is a sound signal downmix method for obtaining a downmix signal that is a signal obtained by mixing a left channel input sound signal and a right channel input sound signal, the sound signal downmix method including obtaining preceding channel information that is information indicating which of the left channel input sound signal and the right channel input sound signal is preceding and a left-right correlation coefficient that is a correlation coefficient between the left channel input sound signal and the right channel input sound signal, and obtaining the downmix signal by weighted averaging the left channel input sound signal and the right channel input sound signal to include a larger amount of an input sound signal of a preceding channel among the left channel input sound signal and the right channel input sound signal as the left-right correlation coefficient is greater, based on the preceding channel information and the left-right correlation coefficient.
One aspect of the present disclosure is the sound signal downmix method, in which assuming that a sample number is t, the left channel input sound signal is xL(t), the right channel input sound signal is xR(t), the downmix signal is xM(t), and the left-right correlation coefficient is γ, the obtaining of the downmix signal by weighted averaging the left channel input sound signal and the right channel input sound signal includes obtaining, in a case where the preceding channel information indicates that a left channel is preceding, the downmix signal by xM(t)=((1+γ)/2)×xL(t)+((1−γ)/2)×xR(t) per sample number t, obtaining, in a case where the preceding channel information indicates that a right channel is preceding, the downmix signal by xM(t)=((1−γ)/2)×xL(t)+((1+γ)/2)×xR(t) per sample number t, and obtaining, in a case where the preceding channel information indicates that neither the left channel nor the right channel is preceding, the downmix signal by xM(t)=(xL(t)+xR(t))/2 per sample number t.
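The weighted averaging above can be illustrated with a short sketch. The following Python function is an illustrative assumption (the function name, the string-valued preceding channel information, and the use of NumPy arrays are not part of the disclosure); it simply applies the three cases per sample.

    import numpy as np

    def downmix(x_l, x_r, preceding, gamma):
        """One-frame downmix of the left/right channel input sound signals.

        x_l, x_r  : arrays of length T (left / right channel input sound signals)
        preceding : "left", "right", or "none" (preceding channel information)
        gamma     : left-right correlation coefficient, 0 <= gamma <= 1
        """
        if preceding == "left":
            # the left channel precedes: weight it more heavily as gamma grows
            return ((1 + gamma) / 2) * x_l + ((1 - gamma) / 2) * x_r
        if preceding == "right":
            # the right channel precedes
            return ((1 - gamma) / 2) * x_l + ((1 + gamma) / 2) * x_r
        # neither channel precedes: plain average
        return (x_l + x_r) / 2

When gamma is 0 (no left-right correlation) all three cases reduce to the plain average, and when gamma is 1 the downmix signal coincides with the input sound signal of the preceding channel.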
One aspect of the present disclosure includes the aforementioned sound signal downmix method, and further includes coding the downmix signal obtained by the obtaining of the downmix signal by weighted averaging the left channel input sound signal and the right channel input sound signal to obtain a monaural code, and coding the left channel input sound signal and the right channel input sound signal to obtain a stereo code.
According to the present disclosure, monaural signals useful for signal processing such as coding processing can be obtained from 2-channel sound signals.
First, a notation method in the specification will be described. The superscript "^", such as ^x for a character x, is originally written directly above the "x". However, due to restrictions on the description notation in the specification, it may be described as ^x.
Prior to describing embodiments of the disclosure, a coding device and a decoding device in an original form for carrying out the disclosure of a second embodiment and the disclosure of a first embodiment will be described as a first reference embodiment and a second reference embodiment. Note that, in the specification and the claims, a coding device may be referred to as a sound signal coding device, a coding method may be referred to as a sound signal coding method, a decoding device may be referred to as a sound signal decoding device, and a decoding method may be referred to as a sound signal decoding method.
As illustrated in
The input sound signals of the left channel input to the coding device 100 and the input sound signals of the right channel input to the coding device 100 are input to the downmix unit 110. The downmix unit 110 obtains and outputs downmix signals which are signals obtained by mixing the input sound signals of the left channel and the input sound signals of the right channel, from the input sound signals of the left channel and the input sound signals of the right channel that are input (step S110).
For example, assuming that the number of samples per frame is T, input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and input sound signals xR(1), xR(2), . . . , xR(T) of the right channel input to the coding device 100 in frame units are input to the downmix unit 110. Here, T is a positive integer, and, for example, if the frame length is 20 ms and the sampling frequency is 32 kHz, then T is 640. The downmix unit 110 obtains and outputs a sequence of average values of the respective sample values for corresponding samples of the input sound signals of the left channel and the input sound signals of the right channel input, as downmix signals xM(1), xM(2), . . . , xM(T). In other words, with t denoting the sample number, xM(t)=(xL(t)+xR(t))/2.
The input sound signals xL(1), xL(2), . . . , xL(T) of the left channel input to the coding device 100, and the downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110 are input to the left channel subtraction gain estimation unit 120. The left channel subtraction gain estimation unit 120 obtains and outputs the left channel subtraction gain α and the left channel subtraction gain code Cα, which is the code representing the left channel subtraction gain α, from the input sound signals of the left channel and the downmix signals input (step S120). The left channel subtraction gain estimation unit 120 determines the left channel subtraction gain α and the left channel subtraction gain code Cα by a well-known method such as the method of obtaining the amplitude ratio g or the method of coding the amplitude ratio g illustrated in PTL 1, or by a newly proposed method based on the principle for minimizing quantization errors. The principle for minimizing quantization errors and the method based on this principle are described below.
The input sound signals xL(1), xL(2), . . . , xL(T) of the left channel input to the coding device 100, the downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110, and the left channel subtraction gain α output by the left channel subtraction gain estimation unit 120 are input to the left channel signal subtraction unit 130. The left channel signal subtraction unit 130 obtains and outputs a sequence of values xL(t)−α×xM(t) obtained by subtracting the value α×xM(t), obtained by multiplying the sample value xM(t) of the downmix signal and the left channel subtraction gain α, from the sample value xL(t) of the input sound signal of the left channel, for each corresponding sample t, as left channel difference signals yL(1), yL(2), . . . , yL(T) (step S130). In other words, yL(t)=xL(t)−α×xM(t). In the coding device 100, in order to avoid requiring latency or an arithmetic processing amount for obtaining a local decoded signal, the left channel signal subtraction unit 130 only needs to use the unquantized downmix signal xM(t) obtained by the downmix unit 110 rather than a quantized downmix signal that is a local decoded signal of monaural coding. However, in a case where the left channel subtraction gain estimation unit 120 obtains the left channel subtraction gain α by a well-known method such as that illustrated in PTL 1 rather than by the method based on the principle for minimizing quantization errors, a means for obtaining a local decoded signal corresponding to the monaural code CM may be provided in the subsequent stage of the monaural coding unit 160 of the coding device 100 or in the monaural coding unit 160, and in the left channel signal subtraction unit 130, quantized downmix signals ^xM(1), ^xM(2), . . . , ^xM(T), which are local decoded signals of monaural coding, may be used to obtain the left channel difference signals in place of the downmix signals xM(1), xM(2), . . . , xM(T), as in the case of a conventional coding device such as that of PTL 1.
The input sound signals xR(1), xR(2), . . . , xR(T) of the right channel input to the coding device 100, and the downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110 are input to the right channel subtraction gain estimation unit 140. The right channel subtraction gain estimation unit 140 obtains and outputs the right channel subtraction gain β and the right channel subtraction gain code Cβ, which is the code representing the right channel subtraction gain β, from the input sound signals of the right channel and the downmix signals input (step S140). The right channel subtraction gain estimation unit 140 determines the right channel subtraction gain β and the right channel subtraction gain code Cβ by a well-known method such as the method of obtaining the amplitude ratio g or the method of coding the amplitude ratio g illustrated in PTL 1, or by a newly proposed method based on the principle for minimizing quantization errors. The principle for minimizing quantization errors and the method based on this principle are described below.
The input sound signals xR(1), xR(2), . . . , xR(T) of the right channel input to the coding device 100, the downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110, and the right channel subtraction gain β output by the right channel subtraction gain estimation unit 140 are input to the right channel signal subtraction unit 150. The right channel signal subtraction unit 150 obtains and outputs a sequence of values xR(t)−β×xM(t) obtained by subtracting the value β×xM(t), obtained by multiplying the sample value xM(t) of the downmix signal and the right channel subtraction gain β, from the sample value xR(t) of the input sound signal of the right channel, for each corresponding sample t, as right channel difference signals yR(1), yR(2), . . . , yR(T) (step S150). In other words, yR(t)=xR(t)−β×xM(t). Similar to the left channel signal subtraction unit 130, in the coding device 100, in order to avoid requiring latency or an arithmetic processing amount for obtaining a local decoded signal, the right channel signal subtraction unit 150 only needs to use the unquantized downmix signal xM(t) obtained by the downmix unit 110 rather than a quantized downmix signal that is a local decoded signal of monaural coding. However, in a case where the right channel subtraction gain estimation unit 140 obtains the right channel subtraction gain β by a well-known method such as that illustrated in PTL 1 rather than by the method based on the principle for minimizing quantization errors, a means for obtaining a local decoded signal corresponding to the monaural code CM may be provided in the subsequent stage of the monaural coding unit 160 of the coding device 100 or in the monaural coding unit 160, and in the right channel signal subtraction unit 150, similar to the left channel signal subtraction unit 130, quantized downmix signals ^xM(1), ^xM(2), . . . , ^xM(T), which are local decoded signals of monaural coding, may be used to obtain the right channel difference signals in place of the downmix signals xM(1), xM(2), . . . , xM(T), as in the case of a conventional coding device such as that of PTL 1.
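Steps S130 and S150 amount to simple per-sample operations on the frame. The sketch below is a minimal illustration, assuming NumPy arrays for the frame signals and subtraction gains α and β already determined by the estimation units (the function name is hypothetical):

    import numpy as np

    def difference_signals(x_l, x_r, x_m, alpha, beta):
        """Per-frame difference signals computed from the unquantized downmix signals.

        y_l(t) = x_l(t) - alpha * x_m(t)   (step S130)
        y_r(t) = x_r(t) - beta  * x_m(t)   (step S150)
        """
        y_l = x_l - alpha * x_m  # left channel difference signals
        y_r = x_r - beta * x_m   # right channel difference signals
        return y_l, y_r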
The downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110 are input to the monaural coding unit 160. The monaural coding unit 160 codes the input downmix signals with bM bits in a prescribed coding scheme to obtain and output the monaural code CM (step S160). In other words, the monaural code CM with bM bits is obtained and output from the downmix signals xM(1), xM(2), . . . , xM(T) of the input T samples. Any coding scheme may be used as the coding scheme, for example, a coding scheme such as the 3GPP EVS standard is used.
The left channel difference signals yL(1), yL(2), . . . , yL(T) output by the left channel signal subtraction unit 130, and the right channel difference signals yR(1), yR(2), . . . , yR(T) output by the right channel signal subtraction unit 150 are input to the stereo coding unit 170. The stereo coding unit 170 codes the input left channel difference signals and the right channel difference signals in a prescribed coding scheme with a total of bs bits to obtain and output the stereo code CS (step S170). In other words, the stereo coding unit 170 obtains and outputs the stereo code CS with the total of bs bits from the left channel difference signals yL(1), yL(2), . . . , yL(T) of the input T samples and the right channel difference signals yR(1), yR(2), . . . , yR(T) of the input T samples. Any coding scheme may be used as the coding scheme, for example, a stereo coding scheme corresponding to the stereo decoding scheme of the MPEG-4 AAC standard may be used, or a coding scheme of independently coding input left channel difference signals and input right channel difference signals may be used, and a combination of all the codes obtained by the coding is used as a “stereo code CS”.
In a case where the input left channel difference signals and the input right channel difference signals are coded independently, the stereo coding unit 170 codes the left channel difference signals with bL bits and codes the right channel difference signals with bR bits. In other words, the stereo coding unit 170 obtains the left channel difference code CL with bL bits from the left channel difference signals yL(1), yL(2), . . . , yL(T) of the input T samples, obtains the right channel difference code CR with bR bits from the right channel difference signals yR(1), yR(2), . . . , yR(T) of the input T samples, and outputs the combination of the left channel difference code CL and the right channel difference code CR as the stereo code CS. Here, the sum of bL bits and bR bits is bs bits.
In a case where the input left channel difference signals and the right channel difference signals are coded together in one coding scheme, the stereo coding unit 170 codes the left channel difference signals and the right channel difference signals with a total of bs bits. In other words, the stereo coding unit 170 obtains and outputs the stereo code CS with bs bits from the left channel difference signals yL(1), yL(2), . . . , yL(T) of the input T samples and the right channel difference signals yR(1), yR(2), . . . , yR(T) of the input T samples.
As illustrated in
The monaural code CM input to the decoding device 200 is input to the monaural decoding unit 210. The monaural decoding unit 210 decodes the input monaural code CM in a prescribed decoding scheme to obtain and output monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) (step S210). A decoding scheme corresponding to the coding scheme used by the monaural coding unit 160 of the corresponding coding device 100 is used as the prescribed decoding scheme. The number of bits of the monaural code CM is bM.
The stereo code CS input to the decoding device 200 is input to the stereo decoding unit 220. The stereo decoding unit 220 decodes the input stereo code CS in a prescribed decoding scheme to obtain and output left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T), and right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) (step S220). A decoding scheme corresponding to the coding scheme used by the stereo coding unit 170 of the corresponding coding device 100 is used as the prescribed decoding scheme. The total number of bits of the stereo code CS is bs.
The left channel subtraction gain code Cα input to the decoding device 200 is input to the left channel subtraction gain decoding unit 230. The left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα to obtain and output the left channel subtraction gain α (step S230). The left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα in a decoding method corresponding to the method used by the left channel subtraction gain estimation unit 120 of the corresponding coding device 100 to obtain the left channel subtraction gain α. A method in which the left channel subtraction gain decoding unit 230 decodes the left channel subtraction gain code Cα and obtains the left channel subtraction gain α in the case where the left channel subtraction gain estimation unit 120 of the corresponding coding device 100 obtains the left channel subtraction gain α and the left channel subtraction gain code Cα by the method based on the principle for minimizing the quantization errors will be described later.
The monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) output by the monaural decoding unit 210, the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) output by the stereo decoding unit 220, and the left channel subtraction gain α output by the left channel subtraction gain decoding unit 230 are input to the left channel signal addition unit 240. The left channel signal addition unit 240 obtains and outputs a sequence of values ^yL(t)+α×^xM(t) obtained by adding the sample value ^yL(t) of the left channel decoded difference signal and the value α×^xM(t) obtained by multiplying the sample value ^xM(t) of the monaural decoded sound signal and the left channel subtraction gain α, for each corresponding sample t, as left channel decoded sound signals ^xL(1), ^xL(2), . . . , ^xL(T) (step S240). In other words, ^xL(t)=^yL(t)+α×^xM(t).
The right channel subtraction gain code Cβ input to the decoding device 200 is input to the right channel subtraction gain decoding unit 250. The right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ to obtain and output the right channel subtraction gain β (step S250). The right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ in a decoding method corresponding to the method used by the right channel subtraction gain estimation unit 140 of the corresponding coding device 100 to obtain the right channel subtraction gain β. A method in which the right channel subtraction gain decoding unit 250 decodes the right channel subtraction gain code Cβ and obtains the right channel subtraction gain β in the case where the right channel subtraction gain estimation unit 140 of the corresponding coding device 100 obtains the right channel subtraction gain β and the right channel subtraction gain code Cβ by the method based on the principle for minimizing the quantization errors will be described later.
The monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) output by the monaural decoding unit 210, the right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) output by the stereo decoding unit 220, and the right channel subtraction gain β output by the right channel subtraction gain decoding unit 250 are input to the right channel signal addition unit 260. The right channel signal addition unit 260 obtains and outputs a sequence of values ^yR(t)+β×^xM(t) obtained by adding the sample value ^yR(t) of the right channel decoded difference signal and the value β×^xM(t) obtained by multiplying the sample value ^xM(t) of the monaural decoded sound signal and the right channel subtraction gain β, for each corresponding sample t, as right channel decoded sound signals ^xR(1), ^xR(2), . . . , ^xR(T) (step S260). In other words, ^xR(t)=^yR(t)+β×^xM(t).
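The decoder-side reconstruction of steps S240 and S260 mirrors the subtraction on the coding side. A minimal sketch, assuming the signals are NumPy arrays (or any type supporting element-wise arithmetic) and a hypothetical function name:

    def decoded_sound_signals(y_l_hat, y_r_hat, x_m_hat, alpha, beta):
        """Reconstruct the left/right channel decoded sound signals.

        ^x_l(t) = ^y_l(t) + alpha * ^x_m(t)   (step S240)
        ^x_r(t) = ^y_r(t) + beta  * ^x_m(t)   (step S260)
        """
        x_l_hat = y_l_hat + alpha * x_m_hat  # left channel decoded sound signals
        x_r_hat = y_r_hat + beta * x_m_hat   # right channel decoded sound signals
        return x_l_hat, x_r_hat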
The principle for minimizing quantization errors will be described below. In a case where the left channel difference signals and the right channel difference signals input in the stereo coding unit 170 are coded together in one coding scheme, the number of bits bL used for the coding of the left channel difference signals and the number of bits bR used for the coding of the right channel difference signals may not be explicitly determined, but in the following, the description is made assuming that the number of bits used for the coding of the left channel difference signals is bL, and the number of bits used for the coding of the right channel difference signal is bR. In the following, mainly the left channel will be described, but the description similarly applies to the right channel.
The coding device 100 described above codes the left channel difference signals yL(1), yL(2), . . . , yL(T) having values obtained by subtracting the value obtained by multiplying each sample value of the downmix signals xM(1), xM(2), . . . , xM(T) and the left channel subtraction gain α, from each sample value of the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel, with bL bits, and codes the downmix signals xM(1), xM(2), . . . , xM(T) with bM bits. The decoding device 200 described above decodes the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) from the bL bit code (hereinafter also referred to as "quantized left channel difference signals") and decodes the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) from the bM bit code (hereinafter also referred to as "quantized downmix signals"), and then adds the value obtained by multiplying each sample value of the quantized downmix signals ^xM(1), ^xM(2), . . . , ^xM(T) obtained by the decoding by the left channel subtraction gain α, to each sample value of the quantized left channel difference signals ^yL(1), ^yL(2), . . . , ^yL(T) obtained by the decoding, to obtain the left channel decoded sound signals ^xL(1), ^xL(2), . . . , ^xL(T), which are the decoded sound signals of the left channel. The coding device 100 and the decoding device 200 should be designed such that the energy of the quantization errors possessed by the decoded sound signals of the left channel obtained in the processes described above is reduced.
The energy of the quantization errors (hereinafter referred to as “quantization errors generated by coding” for convenience) possessed by the decoded signals obtained by coding and decoding input signals is roughly proportional to the energy of the input signals in many cases, and tends to be exponentially smaller with respect to the value of the number of bits per sample used for the coding. Thus, the average energy of the quantization errors per sample resulting from the coding of the left channel difference signals can be estimated using a positive number σL2 as in Expression (1-0-1) below, and the average energy of the quantization errors per sample resulting from the coding of the downmix signals can be estimated using a positive number σM2 as in Expression (1-0-2) below.
Here, suppose that the sample values of the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) are close values such that the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) can be regarded as the same sequence. For example, a case in which the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel are obtained by collecting sounds originating from a sound source that is equidistant from two microphones in an environment with little background noise or reflection corresponds to this condition. Under this condition, each sample value of the left channel difference signals yL(1), yL(2), . . . , yL(T) is equivalent to the value obtained by multiplying a corresponding sample value of the downmix signals xM(1), xM(2), . . . , xM(T) by (1-α). Thus, because the energy of the left channel difference signals can be expressed by (1-α)2 times the energy of the downmix signals, σL2 described above can be replaced with (1-α)2×σM2 using σM2 described above, so the average energy of the quantization errors per sample resulting from the coding of the left channel difference signals can be estimated as in Expression (1-1) below.
The average energy of the quantization errors per sample possessed by the signals added to the quantized left channel difference signals in the decoding device, that is, the average energy of the quantization errors per sample possessed by a sequence of values obtained by multiplying each sample value of the quantized downmix signals obtained by the decoding and the left channel subtraction gain α can be estimated as in Expression (1-2) below.
Assuming that there is no correlation between the quantization errors resulting from the coding of the left channel difference signals and the quantization errors possessed by the sequence of values obtained by multiplying each sample value of the quantized downmix signals obtained by the decoding by the left channel subtraction gain α, the average energy of the quantization errors per sample possessed by the decoded sound signals of the left channel is estimated by the sum of Expressions (1-1) and (1-2). The left channel subtraction gain α which minimizes the energy of the quantization errors possessed by the decoded sound signals of the left channel is determined as in Equation (1-3) below.
In other words, in order to minimize the quantization errors possessed by the decoded sound signals of the left channel in a condition where the sample values of the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) are close values such that the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) can be regarded as the same sequence, the left channel subtraction gain estimation unit 120 only needs to calculate the left channel subtraction gain α by Equation (1-3). The left channel subtraction gain α obtained in Equation (1-3) is a value greater than 0 and less than 1, is 0.5 when bL and bM, which are the two numbers of bits used for the coding, are equal, is a value closer to 0 than 0.5 as the number of bits bL for coding the left channel difference signals is greater than the number of bits bM for coding the downmix signals, and is a value closer to 1 than 0.5 as the number of bits bM for coding the downmix signals is greater than the number of bits bL for coding the left channel difference signals.
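Expressions (1-1) and (1-2) and Equation (1-3) are referred to above but not reproduced in this text. Under the assumption stated earlier, namely that the quantization error energy is roughly proportional to the signal energy and decays exponentially in the number of bits per sample (here taken as 2^(-2b/T)), they would take roughly the following form; this reconstruction is an assumption consistent with the stated properties of α, not a verbatim copy of the equations.

    % Assumed error model: error energy proportional to signal energy times 2^{-2b/T}
    % (cf. Expression (1-1)) error resulting from coding the left channel difference signals:
    E_{\mathrm{diff}}(\alpha) \approx (1-\alpha)^{2}\,\sigma_{M}^{2}\,2^{-2 b_{L}/T}
    % (cf. Expression (1-2)) error carried by alpha times the quantized downmix signals:
    E_{\mathrm{mix}}(\alpha) \approx \alpha^{2}\,\sigma_{M}^{2}\,2^{-2 b_{M}/T}
    % Setting d(E_diff + E_mix)/d\alpha = 0 gives (cf. Equation (1-3)):
    \alpha = \frac{2^{-2 b_{L}/T}}{2^{-2 b_{L}/T} + 2^{-2 b_{M}/T}}

This α is 0.5 when bL and bM are equal, approaches 0 as bL exceeds bM, and approaches 1 as bM exceeds bL, matching the behavior described above.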
This similarly applies to the right channel, and in order to minimize the quantization errors possessed by the decoded sound signals of the right channel in a condition where the sample values of the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . , xM(T) are close values such that the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . , xM(T) can be regarded as the same sequence, the right channel subtraction gain estimation unit 140 only needs to calculate the right channel subtraction gain β by Equation (1-3-2) below.
The right channel subtraction gain β obtained in Equation (1-3-2) is a value greater than 0 and less than 1, is 0.5 when bR and bM, which are the two numbers of bits used for the coding, are equal, is a value closer to 0 than 0.5 as the number of bits bR for coding the right channel difference signals is greater than the number of bits bM for coding the downmix signals, and is a value closer to 1 than 0.5 as the number of bits bM for coding the downmix signals is greater than the number of bits bR for coding the right channel difference signals.
Next, a principle for minimizing the energy of the quantization errors possessed by the decoded sound signals of the left channel will be described, including a case in which the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) are not regarded as the same sequence.
The normalized inner product value rL of the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) is represented by Equation (1-4) below.
The normalized inner product value rL obtained by Equation (1-4) is a real value, and it is the same value as the real value rL′ that minimizes the energy of the sequence xL(1)−rL′×xM(1), xL(2)−rL′×xM(2), . . . , xL(T)−rL′×xM(T) obtained as the differences between each sample value of the input sound signals of the left channel and the sequence of sample values rL′×xM(1), rL′×xM(2), . . . , rL′×xM(T) obtained by multiplying each sample value of the downmix signals xM(1), xM(2), . . . , xM(T) by the real value rL′.
The input sound signals xL(1), xL(2), . . . , xL(T) of the left channel can be decomposed as xL(t)=rL×xM(t)+(xL(t)−rL×xM(t)) for each sample number t. Here, assuming that a sequence constituted by the values of xL(t)−rL×xM(t) is orthogonal signals xL′(1), xL′(2), . . . , xL′(T), according to the decomposition, each sample value yL(t)=xL(t)−α×xM(t) of the left channel difference signals is equivalent to the sum (rL−α)×xM(t)+xL′(t) of the value (rL−α)×xM(t) obtained by multiplying each sample value xM(t) of the downmix signals xM(1), xM(2), . . . , xM(T) by (rL−α) using the normalized inner product value rL and the left channel subtraction gain α, and each sample value xL′(t) of the orthogonal signals. Because the orthogonal signals xL′(1), xL′(2), . . . , xL′(T) have orthogonality with respect to the downmix signals xM(1), xM(2), . . . , xM(T), in other words, the property that the inner product is 0, the energy of the left channel difference signals is expressed as the sum of the energy of the downmix signals multiplied by (rL−α)2 and the energy of the orthogonal signals. Thus, the average energy of the quantization errors per sample resulting from coding the left channel difference signals with bL bits can be estimated using a positive number σ2 as in Expression (1-5) below.
Assuming that there is no correlation between the quantization errors resulting from the coding of the left channel difference signals and the quantization errors possessed by the sequence of values obtained by multiplying each sample value of the quantized downmix signals obtained by the decoding by the left channel subtraction gain α, the average energy of the quantization errors per sample possessed by the decoded sound signals of the left channel is estimated by the sum of Expressions (1-5) and (1-2).
The left channel subtraction gain α which minimizes the energy of the quantization errors possessed by the decoded sound signals of the left channel is determined as in Equation (1-6) below.
In other words, in order to minimize the quantization errors of the decoded sound signals of the left channel, the left channel subtraction gain estimation unit 120 only needs to calculate the left channel subtraction gain α by Equation (1-6). In other words, considering this principle for minimizing the energy of the quantization errors, the left channel subtraction gain α should use a value obtained by multiplying the normalized inner product value rL and a correction coefficient that is a value determined by bL and bM, which are the numbers of bits used for the coding. The correction coefficient is a value greater than 0 and less than 1, is 0.5 when the number of bits bL for coding the left channel difference signals and the number of bits bM for coding the downmix signals are the same, is closer to 0 than 0.5 as the number of bits bL for coding the left channel difference signals is greater than the number of bits bM for coding the downmix signals, and is closer to 1 than 0.5 as the number of bits bL for coding the left channel difference signals is less than the number of bits bM for coding the downmix signals.
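Equations (1-4), (1-6), and (1-7) are likewise referred to but not reproduced in this text. From the description of rL as the least-squares coefficient and from the stated behavior of the correction coefficient, a plausible reconstruction, given only as an assumption, is:

    r_{L} = \frac{\sum_{t=1}^{T} x_{L}(t)\, x_{M}(t)}{\sum_{t=1}^{T} x_{M}(t)^{2}}
    \qquad \text{(cf. Equation (1-4))}

    \alpha = c_{L} \times r_{L},
    \qquad
    c_{L} = \frac{2^{-2 b_{L}/T}}{2^{-2 b_{L}/T} + 2^{-2 b_{M}/T}}
    \qquad \text{(cf. Equations (1-6) and (1-7))}

With this form, the correction coefficient takes the same shape as the reconstruction of Equation (1-3) given earlier, and α reduces to that value when rL is 1, that is, when the two sequences can be regarded as the same; it is 0.5 for bL equal to bM, closer to 0 than 0.5 for bL greater than bM, and closer to 1 than 0.5 for bL less than bM, as described above.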
This similarly applies to the right channel, and in order to minimize the quantization errors of the decoded sound signals of the right channel, the right channel subtraction gain estimation unit 140 calculates the right channel subtraction gain β by Equation (1-6-2) below.
Here, rR is a normalized inner product value of the input sound signals xR(1), xR(2), . . . xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . , xM(T), which is expressed by Equation (1-4-2) below.
In other words, considering this principle for minimizing the energy of the quantization errors, the right channel subtraction gain β should use a value obtained by multiplying the normalized inner product value rR and a correction coefficient that is a value determined by bR and bM, which are the numbers of bits used for the coding. The correction coefficient is a value greater than 0 and less than 1, is a value closer to 0 than 0.5 as the number of bits bR for coding the right channel difference signals is greater than the number of bits bM for coding the downmix signals, and closer to 1 than 0.5 as the number of bits for coding the right channel difference signals is less than the number of bits for coding the downmix signals.
Specific examples of the estimation and decoding of the subtraction gain based on the principle for minimizing the quantization errors described above will be described. In each example, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 configured to estimate a subtraction gain in the coding device 100 and the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250 configured to decode a subtraction gain in the decoding device 200 will be described.
Example 1 is an example based on the principle for minimizing the energy of the quantization errors possessed by the decoded sound signals of the left channel, including a case in which the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) are not regarded as the same sequence, and the principle for minimizing the energy of the quantization errors possessed by the decoded sound signals of the right channel, including a case in which the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . xM(T) are not regarded as the same sequence.
The left channel subtraction gain estimation unit 120 stores in advance a plurality of sets (A sets, a=1, . . . , A) of candidates of the left channel subtraction gain αcand(a) and the codes Cαcand(a) corresponding to the candidates. The left channel subtraction gain estimation unit 120 performs steps S120-11 to S120-14 below illustrated in
The left channel subtraction gain estimation unit 120 first obtains the normalized inner product value rL of the input sound signals of the left channel and the downmix signals by Equation (1-4) from the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) input (step S120-11). The left channel subtraction gain estimation unit 120 obtains the left channel correction coefficient cL by Equation (1-7) below by using the number of bits bL used for the coding of the left channel difference signals yL(1), yL(2), . . . , yL(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S120-12).
The left channel subtraction gain estimation unit 120 then obtains a value obtained by multiplying the normalized inner product value rL obtained in step S120-11 and the left channel correction coefficient cL obtained in step S120-12 (step S120-13). The left channel subtraction gain estimation unit 120 then obtains, as the left channel subtraction gain α, the candidate closest to the multiplication value cL×rL obtained in step S120-13 (that is, the quantized value of the multiplication value cL×rL) among the stored candidates αcand(1), . . . , αcand(A) of the left channel subtraction gain, and obtains, as the left channel subtraction gain code Cα, the code corresponding to the left channel subtraction gain α among the stored codes Cαcand(1), . . . , Cαcand(A) (step S120-14).
Note that in a case where the number of bits bL used for the coding of the left channel difference signals yL(1), yL(2), . . . , yL(T) in the stereo coding unit 170 is not explicitly determined, it is only needed to use half of the number of bits bs of the stereo code CS output by the stereo coding unit 170 (that is, bs/2) as the number of bits bL. Instead of the value obtained by Equation (1-7) itself, the left channel correction coefficient cL may be a value greater than 0 and less than 1, may be 0.5 when the number of bits bL used for the coding of the left channel difference signals yL(1), yL(2), . . . , yL(T) and the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . xM(T) are the same, and may be a value closer to 0 than 0.5 as the number of bits bL is greater than the number of bits bM and closer to 1 than 0.5 as the number of bits bL is less than the number of bits bM. These similarly apply to each example described later.
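Steps S120-11 to S120-14 can be summarized in a short sketch. The candidate table, the function name, and the assumed form of Equation (1-7) (the same bit-dependent ratio reconstructed above) are illustrative assumptions, not part of the disclosure:

    import numpy as np

    def estimate_left_subtraction_gain(x_l, x_m, b_l, b_m, T, alpha_cand, c_alpha_cand):
        # step S120-11: normalized inner product value r_L (assumed form of Equation (1-4))
        r_l = float(np.dot(x_l, x_m)) / float(np.dot(x_m, x_m))
        # step S120-12: left channel correction coefficient c_L (assumed form of Equation (1-7))
        c_l = 2.0 ** (-2 * b_l / T) / (2.0 ** (-2 * b_l / T) + 2.0 ** (-2 * b_m / T))
        # step S120-13: multiplication value c_L x r_L
        target = c_l * r_l
        # step S120-14: the nearest stored candidate becomes alpha; its code becomes C_alpha
        a = int(np.argmin(np.abs(np.asarray(alpha_cand) - target)))
        return alpha_cand[a], c_alpha_cand[a]

The right channel processing of steps S140-11 to S140-14 is identical, with the right channel signals, bR, and the βcand/Cβcand tables substituted.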
The right channel subtraction gain estimation unit 140 stores in advance a plurality of sets (B sets, b=1, . . . , B) of candidates of the right channel subtraction gain βcand(b) and the codes Cβcand(b) corresponding to the candidates. The right channel subtraction gain estimation unit 140 performs steps S140-11 to S140-14 below illustrated in
The right channel subtraction gain estimation unit 140 first obtains the normalized inner product value rR of the input sound signals of the right channel and the downmix signals by Equation (1-4-2) from the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . , xM(T) input (step S140-11). The right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient cR by Equation (1-7-2) below by using the number of bits bR used for the coding of the right channel difference signals yR(1), yR(2), . . . , yR(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S140-12).
The right channel subtraction gain estimation unit 140 then obtains a value obtained by multiplying the normalized inner product value rR obtained in step S140-11 and the right channel correction coefficient cR obtained in step S140-12 (step S140-13). The right channel subtraction gain estimation unit 140 then obtains, as the right channel subtraction gain β, the candidate closest to the multiplication value cR×rR obtained in step S140-13 (that is, the quantized value of the multiplication value cR×rR) among the stored candidates βcand(1), . . . , βcand(B) of the right channel subtraction gain, and obtains, as the right channel subtraction gain code Cβ, the code corresponding to the right channel subtraction gain β among the stored codes Cβcand(1), . . . , Cβcand(B) (step S140-14).
Note that in a case where the number of bits bR used for the coding of the right channel difference signals yR(1), yR(2), . . . , yR(T) in the stereo coding unit 170 is not explicitly determined, it is only needed to use half of the number of bits bs of the stereo code CS output by the stereo coding unit 170 (that is, bs/2), as the number of bits bR. Instead of the value obtained by Equation (1-7-2) itself, the right channel correction coefficient cR may be a value greater than 0 and less than 1, may be 0.5 when the number of bits bR used for the coding of the right channel difference signals yR(1), yR(2), . . . , yR(T) and the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) are the same, and may be a value closer to 0 than 0.5 as the number of bits bR is greater than the number of bits bM and closer to 1 than 0.5 as the number of bits bR is less than the number of bits bM. These similarly apply to each example described later.
The left channel subtraction gain decoding unit 230 stores in advance a plurality of sets (A sets, a=1, . . . , A) of candidates of the left channel subtraction gain αcand(a) and the codes Cαcand(a) corresponding to the candidates, which are the same as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100. The left channel subtraction gain decoding unit 230 obtains a candidate of the left channel subtraction gain corresponding to an input left channel subtraction gain code Cα of the stored codes Cαcand(1), . . . , Cαcand(A) as the left channel subtraction gain α (step S230-11).
The right channel subtraction gain decoding unit 250 stores in advance a plurality of sets (B sets, b=1, . . . , B) of candidates of the right channel subtraction gain βcand(b) and the codes Cβcand(b) corresponding to the candidates, which are the same as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100. The right channel subtraction gain decoding unit 250 obtains a candidate of the right channel subtraction gain corresponding to an input right channel subtraction gain code Cβ of the stored codes Cβcand(1), . . . , Cβcand(B) as the right channel subtraction gain β (step S250-11).
Note that the left channel and the right channel need only use the same candidates and codes of the subtraction gain, and by using the same value for the above-described A and B, the set of the candidates of the left channel subtraction gain αcand(a) and the codes Cαcand(a) corresponding to the candidates stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 and the set of the candidates of the right channel subtraction gain βcand(b) and the codes Cβcand(b) corresponding to the candidates stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may be the same.
Because the number of bits bL used for the coding of the left channel difference signals by the coding device 100 is the number of bits used for the decoding of the left channel difference signals by the decoding device 200, and the number of bits bM used for the coding of the downmix signals by the coding device 100 is the number of bits used for the decoding of the downmix signals by the decoding device 200, the correction coefficient cL can be calculated as the same value in both the coding device 100 and the decoding device 200. Thus, with the normalized inner product value rL as the target of coding and decoding, the left channel subtraction gain α may be obtained in each of the coding device 100 and the decoding device 200 by multiplying the quantized value ^rL of the normalized inner product value by the correction coefficient cL. This similarly applies to the right channel. This mode will be described as a modified example of Example 1.
The left channel subtraction gain estimation unit 120 stores in advance a plurality of sets (A sets, a=1, . . . , A) of candidates of the normalized inner product value of the left channel rLcand(a) and the codes Cαcand(a) corresponding to the candidates. As illustrated in
Similarly to step S120-11 of the left channel subtraction gain estimation unit 120 of Example 1, the left channel subtraction gain estimation unit 120 first obtains the normalized inner product value rL of the input sound signals of the left channel and the downmix signals by Equation (1-4) from the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the downmix signals xM(1), xM(2), . . . , xM(T) input (step S120-11). The left channel subtraction gain estimation unit 120 then obtains, among the stored candidates rLcand(1), . . . , rLcand(A) of the normalized inner product value of the left channel, the candidate closest to the normalized inner product value rL obtained in step S120-11 (that is, the quantized value ^rL of the normalized inner product value rL), and obtains, as the left channel subtraction gain code Cα, the code corresponding to the closest candidate ^rL among the stored codes Cαcand(1), . . . , Cαcand(A) (step S120-15). Similarly to step S120-12 of the left channel subtraction gain estimation unit 120 of Example 1, the left channel subtraction gain estimation unit 120 obtains the left channel correction coefficient cL by Equation (1-7) by using the number of bits bL used for the coding of the left channel difference signals yL(1), yL(2), . . . , yL(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S120-12). The left channel subtraction gain estimation unit 120 then obtains a value obtained by multiplying the quantized value ^rL of the normalized inner product value obtained in step S120-15 and the left channel correction coefficient cL obtained in step S120-12, as the left channel subtraction gain α (step S120-16).
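In this modified example the quantity that is coded is the normalized inner product value itself, so the coding device and the decoding device arrive at the same α as long as both compute the same correction coefficient cL from bL, bM, and T. A minimal sketch of steps S120-15 and S120-16 follows; the candidate table and names are assumptions, and correction_coefficient stands in for the assumed form of Equation (1-7) reconstructed above:

    import numpy as np

    def correction_coefficient(b_l, b_m, T):
        # assumed form of Equation (1-7); depends only on values known to both devices
        return 2.0 ** (-2 * b_l / T) / (2.0 ** (-2 * b_l / T) + 2.0 ** (-2 * b_m / T))

    def quantize_inner_product(r_l, r_l_cand, c_alpha_cand):
        # step S120-15: nearest stored candidate of r_L and its corresponding code
        a = int(np.argmin(np.abs(np.asarray(r_l_cand) - r_l)))
        return r_l_cand[a], c_alpha_cand[a]

    # Step S120-16 on the coding side and steps S230-12 to S230-14 on the decoding side
    # then both form alpha = correction_coefficient(b_l, b_m, T) * r_l_hat, so the
    # subtraction gain used for subtraction and for addition is identical on both sides.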
The right channel subtraction gain estimation unit 140 stores in advance a plurality of sets (B sets, b=1, . . . , B) of a candidate of the normalized inner product value of the right channel rRcand(b) and the code Cβcand(b) corresponding to the candidate. As illustrated in
Similarly to step S140-11 of the right channel subtraction gain estimation unit 140 of Example 1, the right channel subtraction gain estimation unit 140 first obtains the normalized inner product value rR of the input sound signals of the right channel and the downmix signals by Equation (1-4-2) from the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel and the downmix signals xM(1), xM(2), . . . , xM(T) input (step S140-11). The right channel subtraction gain estimation unit 140 then obtains, among the stored candidates rRcand(1), . . . , rRcand(B) of the normalized inner product value of the right channel, the candidate closest to the normalized inner product value rR obtained in step S140-11 (that is, the quantized value ^rR of the normalized inner product value rR), and obtains, as the right channel subtraction gain code Cβ, the code corresponding to the closest candidate ^rR among the stored codes Cβcand(1), . . . , Cβcand(B) (step S140-15). Similarly to step S140-12 of the right channel subtraction gain estimation unit 140 of Example 1, the right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient cR by Equation (1-7-2) by using the number of bits bR used for the coding of the right channel difference signals yR(1), yR(2), . . . , yR(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S140-12). The right channel subtraction gain estimation unit 140 then obtains a value obtained by multiplying the quantized value ^rR of the normalized inner product value obtained in step S140-15 and the right channel correction coefficient cR obtained in step S140-12, as the right channel subtraction gain β (step S140-16).
The left channel subtraction gain decoding unit 230 stores in advance a plurality of sets (A sets, a=1, . . . , A) of a candidate of the normalized inner product value of the left channel rLcand(a) and the code Cαcand(a) corresponding to the candidate, which are the same as those stored in the left channel subtraction gain estimation unit 120 of the corresponding coding device 100. The left channel subtraction gain decoding unit 230 performs steps S230-12 to S230-14 below illustrated in
The left channel subtraction gain decoding unit 230 obtains a candidate of the normalized inner product value of the left channel corresponding to an input left channel subtraction gain code Cα of the stored codes Cαcand(1), . . . , Cαcand(A) as the decoded value ^rL of the normalized inner product value of the left channel (step S230-12). The left channel subtraction gain decoding unit 230 obtains the left channel correction coefficient cL by Equation (1-7) by using the number of bits bL used for the decoding of the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) in the stereo decoding unit 220, the number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) in the monaural decoding unit 210, and the number of samples T per frame (step S230-13). The left channel subtraction gain decoding unit 230 then obtains a value obtained by multiplying the decoded value ^rL of the normalized inner product value obtained in step S230-12 and the left channel correction coefficient cL obtained in step S230-13, as the left channel subtraction gain α (step S230-14).
Note that in a case where the stereo code CS is a combination of the left channel difference code CL and the right channel difference code CR, the number of bits bL used for the decoding of the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) in the stereo decoding unit 220 is the number of bits of the left channel difference code CL. In a case where the number of bits bL used for the decoding of the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) in the stereo decoding unit 220 is not explicitly determined, it is only needed to use half of the number of bits bs of the stereo code CS input to the stereo decoding unit 220 (that is, bs/2), as the number of bits bL. The number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) in the monaural decoding unit 210 is the number of bits of the monaural code CM. Instead of the value obtained by Equation (1-7) itself, the left channel correction coefficient cL may be a value greater than 0 and less than 1, may be 0.5 when the number of bits bL used for the decoding of the left channel decoded difference signals ^yL(1), ^yL(2), . . . , ^yL(T) and the number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) are the same, and may be a value closer to 0 than 0.5 as the number of bits bL is greater than the number of bits bM and closer to 1 than 0.5 as the number of bits bL is less than the number of bits bM.
The right channel subtraction gain decoding unit 250 stores in advance a plurality of sets (B sets, b=1, . . . , B) of a candidate of the normalized inner product value of the right channel rRcand(b) and the code Cβcand(b) corresponding to the candidate, which are the same as those stored in the right channel subtraction gain estimation unit 140 of the corresponding coding device 100. The right channel subtraction gain decoding unit 250 performs steps S250-12 to S250-14 below illustrated in
The right channel subtraction gain decoding unit 250 obtains a candidate of the normalized inner product value of the right channel corresponding to an input right channel subtraction gain code Cβ of the stored codes Cβcand(1), . . . , Cβcand(B) as the decoded value ^rR of the normalized inner product value of the right channel (step S250-12). The right channel subtraction gain decoding unit 250 obtains the right channel correction coefficient cR by Equation (1-7-2) by using the number of bits bR used for the decoding of the right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) in the stereo decoding unit 220, the number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) in the monaural decoding unit 210, and the number of samples T per frame (step S250-13). The right channel subtraction gain decoding unit 250 then obtains a value obtained by multiplying the decoded value ^rR of the normalized inner product value obtained in step S250-12 and the right channel correction coefficient cR obtained in step S250-13, as the right channel subtraction gain β (step S250-14).
Note that in a case where the stereo code CS is a combination of the left channel difference code CL and the right channel difference code CR, the number of bits bR used for the decoding of the right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) in the stereo decoding unit 220 is the number of bits of the right channel difference code CR. In a case where the number of bits bR used for the decoding of the right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) in the stereo decoding unit 220 is not explicitly determined, it is only needed to use half of the number of bits bs of the stereo code CS input to the stereo decoding unit 220 (that is, bs/2), as the number of bits bR. The number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) in the monaural decoding unit 210 is the number of bits of the monaural code CM. Instead of the value obtained by Equation (1-7-2) itself, the right channel correction coefficient cR may be a value greater than 0 and less than 1, may be 0.5 when the number of bits bR used for the decoding of the right channel decoded difference signals ^yR(1), ^yR(2), . . . , ^yR(T) and the number of bits bM used for the decoding of the monaural decoded sound signals ^xM(1), ^xM(2), . . . , ^xM(T) are the same, and may be a value closer to 0 than 0.5 as the number of bits bR is greater than the number of bits bM and closer to 1 than 0.5 as the number of bits bR is less than the number of bits bM.
Note that the left channel and the right channel only need to use the same candidates and codes of the normalized inner product value, and by using the same value for the above-described A and B, the set of the candidate of the normalized inner product value of the left channel rLcand(a) and the code Cαcand(a) corresponding to the candidate stored in the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 and the set of the candidate of the normalized inner product value of the right channel rRcand(b) and the code Cβcand(b) corresponding to the candidate stored in the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may be the same.
Note that the code Cα is referred to as a left channel subtraction gain code because the code Cα is substantially a code corresponding to the left channel subtraction gain α, for the purpose of matching the wording in the descriptions of the coding device 100 and the decoding device 200, and the like, but the code Cα may also be referred to as a left channel inner product code or the like because the code Cα represents a normalized inner product value. This similarly applies to the code Cβ, and the code Cβ may be referred to as a right channel inner product code or the like.
An example of using a value considering input values of past frames as the normalized inner product value will be described as Example 2. Example 2 does not strictly guarantee the optimization within the frame, that is, the minimization of the energy of the quantization errors possessed by the decoded sound signals of the left channel and the minimization of the energy of the quantization errors possessed by the decoded sound signals of the right channel, but reduces abrupt fluctuation of the left channel subtraction gain α between frames and abrupt fluctuation of the right channel subtraction gain β between frames, and reduces noise generated in the decoded sound signals due to the fluctuation. In other words, Example 2 considers the auditory quality of the decoded sound signals in addition to reducing the energy of the quantization errors possessed by the decoded sound signals.
In Example 2, the coding side, that is, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 are different from those in Example 1, but the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250 are the same as those in Example 1. Hereinafter, the differences of Example 2 from Example 1 will be mainly described.
As illustrated in
The left channel subtraction gain estimation unit 120 first obtains the inner product value EL(0) used in the current frame by Equation (1-8) below by using the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel input, the downmix signals xM(1), xM(2), . . . , xM(T) input, and the inner product value EL(−1) used in the previous frame (step S120-111).
Here, εL is a predetermined value greater than 0 and less than 1, and is stored in advance in the left channel subtraction gain estimation unit 120. Note that the left channel subtraction gain estimation unit 120 stores the obtained inner product value EL(0) in the left channel subtraction gain estimation unit 120 for use in the next frame as “the inner product value EL(−1) used in the previous frame”.
The left channel subtraction gain estimation unit 120 obtains the energy EM(0) of the downmix signals used in the current frame by Equation (1-9) below by using the input downmix signals xM(1), xM(2), . . . , xM(T) and the energy EM(−1) of the downmix signals used in the previous frame (step S120-112).
Here, εM is a predetermined value greater than 0 and less than 1, and is stored in advance in the left channel subtraction gain estimation unit 120. Note that the left channel subtraction gain estimation unit 120 stores the obtained energy EM(0) of the downmix signals in the left channel subtraction gain estimation unit 120 for use in the next frame as “the energy EM(−1) of the downmix signals used in the previous frame”.
The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value rL by Equation (1-10) below by using the inner product value EL(0) used in the current frame obtained in step S120-111 and the energy EM(0) of the downmix signals used in the current frame obtained in step S120-112 (step S120-113).
The left channel subtraction gain estimation unit 120 also performs step S120-12, then performs step S120-13 by using the normalized inner product value rL obtained in step S120-113 described above instead of the normalized inner product value rL obtained in step S120-11, and further performs step S120-14.
Note that, as εL and εM described above get closer to 1, the normalized inner product value rL is more likely to include the influence of the input sound signals of the left channel and the downmix signals of the past frames, and the fluctuation between frames of the normalized inner product value rL, and of the left channel subtraction gain α obtained from the normalized inner product value rL, gets smaller.
As illustrated in
The right channel subtraction gain estimation unit 140 first obtains the inner product value ER(0) used in the current frame by Equation (1-8-2) below by using the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel input, the downmix signals xM(1), xM(2), . . . , xM(T) input, and the inner product value ER(−1) used in the previous frame (step S140-111).
Here, εR is a predetermined value greater than 0 and less than 1, and is stored in advance in the right channel subtraction gain estimation unit 140. Note that the right channel subtraction gain estimation unit 140 stores the obtained inner product value ER(0) in the right channel subtraction gain estimation unit 140 for use in the next frame as “the inner product value ER(−1) used in the previous frame”.
The right channel subtraction gain estimation unit 140 obtains the energy EM(0) of the downmix signals used in the current frame by Equation (1-9) by using the input downmix signals xM(1), xM(2), . . . , xM(T) and the energy EM(−1) of the downmix signals used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 stores the obtained energy EM(0) of the downmix signals in the right channel subtraction gain estimation unit 140 for use in the next frame as “the energy EM(−1) of the downmix signals used in the previous frame”. Note that because the left channel subtraction gain estimation unit 120 also obtains the energy EM(0) of the downmix signals used in the current frame by Equation (1-9), only one of the steps of step S120-112 performed by the left channel subtraction gain estimation unit 120 and step S140-112 performed by the right channel subtraction gain estimation unit 140 may be performed.
The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value rR by Equation (1-10-2) below by using the inner product value ER(0) used in the current frame obtained in step S140-111 and the energy EM(0) of the downmix signals used in the current frame obtained in step S140-112 (step S140-113).
The right channel subtraction gain estimation unit 140 also performs step S140-12, then performs step S140-13 by using the normalized inner product value rR obtained in step S140-113 described above instead of the normalized inner product value rR obtained in step S140-11, and further performs step S140-14.
Note that, as εR and εM described above get closer to 1, the normalized inner product value rR is more likely to include the influence of the input sound signals of the right channel and the downmix signals of the past frames, and the fluctuation between frames of the normalized inner product value rR, and of the right channel subtraction gain β obtained from the normalized inner product value rR, gets smaller.
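As a rough illustration of Example 2, the sketch below assumes that Equations (1-8) to (1-10), which are not reproduced in this excerpt, take the form of an exponential smoothing of the per-frame inner product and of the downmix energy with the forgetting factors εL and εM; this assumed form is consistent with the notes above that values closer to 1 give the past frames more influence. Only the left channel is shown (the right channel is analogous, with ER(0) and εR), and the variable names and state dictionary are illustrative only.

```python
import numpy as np

def smoothed_normalized_inner_product(x_L, x_M, state, eps_L=0.9, eps_M=0.9):
    # Assumed form of Equations (1-8)-(1-10): exponential smoothing of the per-frame
    # inner product and of the downmix energy with forgetting factors eps_L and eps_M
    # in (0, 1). state holds E_L_prev ("the inner product value used in the previous
    # frame") and E_M_prev ("the energy of the downmix signals used in the previous
    # frame"), corresponding to steps S120-111 and S120-112.
    inner = float(np.dot(x_L, x_M))            # current-frame inner product
    energy = float(np.dot(x_M, x_M))           # current-frame downmix energy
    E_L = eps_L * state["E_L_prev"] + (1.0 - eps_L) * inner   # assumed Eq. (1-8)
    E_M = eps_M * state["E_M_prev"] + (1.0 - eps_M) * energy  # assumed Eq. (1-9)
    state["E_L_prev"], state["E_M_prev"] = E_L, E_M           # stored for the next frame
    r_L = E_L / E_M if E_M > 0.0 else 0.0      # Eq. (1-10): normalized inner product
    return r_L
```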
Example 2 can be modified in the same manner in which the modified example of Example 1 modifies Example 1. This modification will be described as a modified example of Example 2. In the modified example of Example 2, the coding side, that is, the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 are different from those in the modified example of Example 1, but the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250 are the same as those in the modified example of Example 1. The differences of the modified example of Example 2 from the modified example of Example 1 are the same as the differences of Example 2 from Example 1, and thus the modified example of Example 2 will be described below with reference to the modified example of Example 1 and Example 2 as appropriate.
Similar to the left channel subtraction gain estimation unit 120 of the modified example of Example 1, the left channel subtraction gain estimation unit 120 stores in advance a plurality of sets (A sets, a=1, . . . , A) of a candidate of the normalized inner product value of the left channel rLcand(a) and the code Cαcand(a) corresponding to the candidate. As illustrated in
The left channel subtraction gain estimation unit 120 first obtains the inner product value EL(0) used in the current frame by Equation (1-8) by using the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel input, the downmix signals xM(1), xM(2), . . . , xM(T) input, and the inner product value EL(−1) used in the previous frame (step S120-111). The left channel subtraction gain estimation unit 120 obtains the energy EM(0) of the downmix signals used in the current frame by Equation (1-9) by using the input downmix signals xM(1), xM(2), . . . , xM(T) and the energy EM(−1) of the downmix signals used in the previous frame (step S120-112). The left channel subtraction gain estimation unit 120 then obtains the normalized inner product value rL by Equation (1-10) by using the inner product value EL(0) used in the current frame obtained in step S120-111 and the energy EM(0) of the downmix signals used in the current frame obtained in step S120-112 (step S120-113). The left channel subtraction gain estimation unit 120 then obtains a candidate rL closest to the normalized inner product value rL (quantized value of the normalized inner product value rL) obtained in step S120-113 of the stored candidates rLcand(1), . . . , rLcand(A) of the normalized inner product value of the left channel, and obtains the code corresponding to the closest candidate rL of the stored codes Cαcand(1), . . . , Cαcand(A) as the left channel subtraction gain code Cα (step S120-15).
The left channel subtraction gain estimation unit 120 obtains the left channel correction coefficient cL by Equation (1-7) by using the number of bits bL used for the coding of the left channel difference signals yL(1), yL(2), . . . , yL(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S120-12).
The left channel subtraction gain estimation unit 120 then obtains a value obtained by multiplying the quantized value of the normalized inner product value rL obtained in step S120-15 and the left channel correction coefficient cL obtained in step S120-12 as the left channel subtraction gain α (step S120-16).
Similar to the right channel subtraction gain estimation unit 140 in the modified example of Example 1, the right channel subtraction gain estimation unit 140 stores in advance a plurality of sets (B sets, b=1, . . . , B) of a candidate of the normalized inner product value of the right channel rRcand(b) and the code Cβcand(b) corresponding to the candidate. As illustrated in
The right channel subtraction gain estimation unit 140 first obtains the inner product value ER(0) used in the current frame by Equation (1-8-2) by using the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel input, the downmix signals xM(1), xM(2), . . . , xM(T) input, and the inner product value ER(−1) used in the previous frame (step S140-111). The right channel subtraction gain estimation unit 140 obtains the energy EM(0) of the downmix signals used in the current frame by Equation (1-9) by using the input downmix signals xM(1), xM(2), . . . , xM(T) and the energy EM(−1) of the downmix signals used in the previous frame (step S140-112). The right channel subtraction gain estimation unit 140 then obtains the normalized inner product value rR by Equation (1-10-2) by using the inner product value ER(0) used in the current frame obtained in step S140-111 and the energy EM(0) of the downmix signals used in the current frame obtained in step S140-112 (step S140-113). The right channel subtraction gain estimation unit 140 then obtains a candidate rR closest to the normalized inner product value rR (quantized value of the normalized inner product value rR) obtained in step S140-113 of the stored candidates rRcand(1), . . . , rRcand(B) of the normalized inner product value of the right channel, and obtains the code corresponding to the closest candidate rR of the stored codes Cβcand(1), . . . , Cβcand(B) as the right channel subtraction gain code Cβ (step S140-15). The right channel subtraction gain estimation unit 140 obtains the right channel correction coefficient cR by Equation (1-7-2) by using the number of bits bR used for the coding of the right channel difference signals yR(1), yR(2), . . . , yR(T) in the stereo coding unit 170, the number of bits bM used for the coding of the downmix signals xM(1), xM(2), . . . , xM(T) in the monaural coding unit 160, and the number of samples T per frame (step S140-12). The right channel subtraction gain estimation unit 140 then obtains a value obtained by multiplying the quantized value of the normalized inner product value rR obtained in step S140-15 and the right channel correction coefficient cR obtained in step S140-12, as the right channel subtraction gain β(step S140-16).
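The quantization performed in steps S120-15/S120-16 and S140-15/S140-16 can be sketched as follows; the representation of the stored candidates and codes as a list of pairs is an assumption made for illustration, and the correction coefficient c comes from Equation (1-7) or (1-7-2), which is not reproduced here.

```python
def encode_subtraction_gain(r, codebook, c):
    # codebook: assumed list of (candidate_value, code) pairs such as
    # (r_Lcand(a), C_alpha_cand(a)); c is the correction coefficient c_L or c_R.
    # Steps S120-15 / S140-15: pick the stored candidate closest to r and its code.
    q, code = min(codebook, key=lambda pair: abs(pair[0] - r))
    # Steps S120-16 / S140-16: the subtraction gain is the quantized candidate
    # multiplied by the correction coefficient.
    gain = c * q
    return gain, code
```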
For example, in a case where sounds such as voice or music included in the input sound signals of the left channel and sounds such as voice and music included in the input sound signals of the right channel are different from each other, the downmix signals may include both the components of the input sound signals of the left channel and the components of the input sound signals of the right channel. Thus, as a greater value is used as the left channel subtraction gain α, there is a problem in that sounds originating from the input sound signals of the right channel that should not naturally be heard are included in the left channel decoded sound signals, and as a greater value is used as the right channel subtraction gain β, there is a problem in that sounds originating from the input sound signals of the left channel that should not naturally be heard are included in the right channel decoded sound signals. Thus, while the minimization of the energy of the quantization errors possessed by the decoded sound signals is not strictly guaranteed, the left channel subtraction gain α and the right channel subtraction gain β may be smaller values than the values determined in Example 1, in consideration of the auditory quality. Similarly, the left channel subtraction gain α and the right channel subtraction gain β may be smaller values than the values determined in Example 2.
Specifically, for the left channel, in Example 1 and Example 2, the quantized value of the multiplication value cL×rL of the normalized inner product value rL and the left channel correction coefficient cL is set as the left channel subtraction gain α, but in Example 3, the quantized value of the multiplication value λL×cL×rL of the normalized inner product value rL, the left channel correction coefficient cL, and λL that is a predetermined value greater than 0 and less than 1 is set as the left channel subtraction gain α. Thus, in a similar manner to those in Example 1 and Example 2, assuming that the multiplication value cL×rL is a target of coding in the left channel subtraction gain estimation unit 120 and decoding in the left channel subtraction gain decoding unit 230, and the left channel subtraction gain code Cα represents the quantized value of the multiplication value cL×rL, the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may multiply the quantized value of the multiplication value cL×rL by λL to obtain the left channel subtraction gain α. Alternatively, the multiplication value λL×cL×rL of the normalized inner product value rL, the left channel correction coefficient cL, and the predetermined value λL may be a target of coding in the left channel subtraction gain estimation unit 120 and decoding in the left channel subtraction gain decoding unit 230, and the left channel subtraction gain code Cα may represent the quantized value of the multiplication value λL×cL×rL.
Similarly, for the right channel, in Example 1 and Example 2, the quantized value of the multiplication value cR×rR of the normalized inner product value rR and the right channel correction coefficient cR is set as the right channel subtraction gain β, but in Example 3, the quantized value of the multiplication value λR×cR×rR of the normalized inner product value rR, the right channel correction coefficient cR, and λR that is a predetermined value greater than 0 and less than 1 is set as the right channel subtraction gain β. Thus, in a similar manner to those in Example 1 and Example 2, assuming that the multiplication value cR×rR is a target of coding in the right channel subtraction gain estimation unit 140 and decoding in the right channel subtraction gain decoding unit 250, and the right channel subtraction gain code Cβ represents the quantized value of the multiplication value cR×rR, the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may multiply the quantized value of the multiplication value cR×rR by λR to obtain the right channel subtraction gain β. Alternatively, the multiplication value λR×cR×rR of the normalized inner product value rR, the right channel correction coefficient cR, and the predetermined value λR may be a target of coding in the right channel subtraction gain estimation unit 140 and decoding in the right channel subtraction gain decoding unit 250, and the right channel subtraction gain code Cβ may represent the quantized value of the multiplication value λR×cR×rR. Note that λR may be the same value as λL.
As described above, the correction coefficient cL can be calculated as the same value for the coding device 100 and the decoding device 200. Thus, in a similar manner to those in the modified example of Example 1 and the modified example of Example 2, assuming that the normalized inner product value rL is a target of coding in the left channel subtraction gain estimation unit 120 and decoding in the left channel subtraction gain decoding unit 230, and the left channel subtraction gain code Cα represents the quantized value of the normalized inner product value rL, the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may multiply the quantized value of the normalized inner product value rL, the left channel correction coefficient cL, and λL that is a predetermined value greater than 0 and less than 1 to obtain the left channel subtraction gain α. Alternatively, assuming that the multiplication value λL×rL of the normalized inner product value rL and λL that is a predetermined value greater than 0 and less than 1 is a target of coding in the left channel subtraction gain estimation unit 120 and decoding in the left channel subtraction gain decoding unit 230, and the left channel subtraction gain code Cα represents the quantized value of the multiplication value λL×rL, the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may multiply the quantized value of the multiplication value λL×rL by the left channel correction coefficient cL to obtain the left channel subtraction gain α.
This similarly applies to the right channel, and the correction coefficient cR can be calculated as the same value for the coding device 100 and the decoding device 200. Thus, in a similar manner to those in the modified example of Example 1 and the modified example of Example 2, assuming that the normalized inner product value rR is a target of coding in the right channel subtraction gain estimation unit 140 and decoding in the right channel subtraction gain decoding unit 250, and the right channel subtraction gain code Cβ represents the quantized value of the normalized inner product value rR, the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may multiply the quantized value of the normalized inner product value rR, the right channel correction coefficient cR, and λR that is a predetermined value greater than 0 and less than 1 to obtain the right channel subtraction gain β. Alternatively, assuming that the multiplication value λR×rR of the normalized inner product value rR and λR that is a predetermined value greater than 0 and less than 1 is a target of coding in the right channel subtraction gain estimation unit 140 and decoding in the right channel subtraction gain decoding unit 250, and the right channel subtraction gain code Cβ represents the quantized value of the multiplication value λR×rR, the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may multiply the quantized value of the multiplication value λR×rR by the right channel correction coefficient cR to obtain the right channel subtraction gain β.
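The two placements of the predetermined values λL and λR described for Example 3 and its modified examples can be summarized by the following sketch; the function names are illustrative only, and which quantity is actually quantized and represented by the subtraction gain code depends on the variant chosen above.

```python
def subtraction_gain_scaled_after_decoding(quantized_c_r, lam):
    # First variant: the code C_alpha / C_beta represents the quantized value of c*r,
    # and both the estimation unit and the decoding unit multiply that quantized value
    # by the predetermined lambda (0 < lambda < 1) to obtain the subtraction gain.
    return lam * quantized_c_r

def coding_target_scaled_before_quantization(r, c, lam):
    # Alternative variant: lambda*c*r itself is the target of coding/decoding, so this
    # product is what gets quantized and represented by the subtraction gain code.
    return lam * c * r
```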
The problem of auditory quality described at the beginning of Example 3 occurs when the correlation between the input sound signals of the left channel and the input sound signals of the right channel is small, and hardly occurs when the correlation between the input sound signals of the left channel and the input sound signals of the right channel is large. Thus, in Example 4, by using a left-right correlation coefficient γ that is a correlation coefficient of the input sound signals of the left channel and the input sound signals of the right channel instead of the predetermined values λL and λR in Example 3, priority is given to reducing the energy of the quantization errors possessed by the decoded sound signals as the correlation between the input sound signals of the left channel and the input sound signals of the right channel is larger, and priority is given to suppressing the deterioration of the auditory quality as the correlation between the input sound signals of the left channel and the input sound signals of the right channel is smaller.
In Example 4, the coding side is different from those in Example 1 and Example 2, but the decoding side, that is, the left channel subtraction gain decoding unit 230 and the right channel subtraction gain decoding unit 250 are the same as those in Example 1 and Example 2. Hereinafter, the differences of Example 4 from Example 1 and Example 2 will be described.
The coding device 100 of Example 4 also includes a left-right relationship information estimation unit 180 as illustrated by the dashed lines in
The left-right correlation coefficient γ is a correlation coefficient of the input sound signals of the left channel and the input sound signals of the right channel, and may be a correlation coefficient γ0 between a sample sequence of the input sound signals of the left channel xL(1), xL(2), . . . , xL(T) and a sample sequence of the input sound signals of the right channel xR(1), xR(2), . . . , xR(T), or may be a correlation coefficient taking into account the time difference, for example, a correlation coefficient γτ between a sample sequence of the input sound signals of the left channel and a sample sequence of the input sound signals of the right channel at a position shifted to a later position than that of the sample sequence by τ samples.
Assuming that sound signals obtained by AD conversion of sounds collected by the microphone for the left channel disposed in a certain space are the input sound signals of the left channel, and sound signals obtained by AD conversion of sounds collected by the microphone for the right channel disposed in the certain space are the input sound signals of the right channel, this τ is information corresponding to the difference (so-called time difference of arrival) between the arrival time from the sound source that mainly emits sound in the space to the microphone for the left channel and the arrival time from the sound source to the microphone for the right channel, and is hereinafter referred to as the left-right time difference. The left-right time difference τ may be determined by any known method, and is obtained, for example, by the method described for the left-right relationship information estimation unit 181 of the second reference embodiment. In other words, the correlation coefficient γτ described above is information corresponding to the correlation coefficient between the sound signals reaching the microphone for the left channel from the sound source and collected and the sound signals reaching the microphone for the right channel from the sound source and collected.
Instead of step S120-13, the left channel subtraction gain estimation unit 120 obtains a value obtained by multiplying the normalized inner product value rL obtained in step S120-11 or step S120-113, the left channel correction coefficient cL obtained in step S120-12, and the left-right correlation coefficient γ obtained in step S180 (step S120-13″). Instead of step S120-14, the left channel subtraction gain estimation unit 120 then obtains a candidate closest to the multiplication value γ×cL×rL obtained in step S120-13″ (quantized value of the multiplication value γ×cL×rL) of the stored candidates αcand(1), . . . , αcand(A) of the left channel subtraction gain as the left channel subtraction gain α, and obtains the code corresponding to the left channel subtraction gain α of the stored codes Cαcand(1), . . . , Cαcand(A) as the left channel subtraction gain code Cα (step S120-14″).
Instead of step S140-13, the right channel subtraction gain estimation unit 140 obtains a value obtained by multiplying the normalized inner product value rR obtained in step S140-11 or step S140-113, the right channel correction coefficient cR obtained in step S140-12, and the left-right correlation coefficient γ obtained in step S180 (step S140-13″). Instead of step S140-14, the right channel subtraction gain estimation unit 140 then obtains a candidate closest to the multiplication value γ×cR×rR obtained in step S140-13″ (quantized value of the multiplication value γ×cR×rR) of the stored candidates βcand(1), . . . , βcand(B) of the right channel subtraction gain as the right channel subtraction gain β, and obtains the code corresponding to the right channel subtraction gain β of the stored codes Cβcand(1), . . . , Cβcand(B) as the right channel subtraction gain code Cβ (step S140-14″).
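A minimal sketch of steps S120-13″/S120-14″ (and the analogous steps S140-13″/S140-14″) follows; the list-of-pairs representation of the stored subtraction-gain candidates and their codes is an assumption made for illustration.

```python
def encode_gain_example4(r, c, gamma, gain_codebook):
    # Steps S120-13'' / S140-13'': scale the normalized inner product by the correction
    # coefficient c and by the left-right correlation coefficient gamma.
    target = gamma * c * r
    # Steps S120-14'' / S140-14'': choose the closest stored subtraction-gain candidate
    # (alpha_cand / beta_cand) and output its code as C_alpha / C_beta.
    # gain_codebook: assumed list of (candidate_gain, code) pairs.
    gain, code = min(gain_codebook, key=lambda pair: abs(pair[0] - target))
    return gain, code
```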
As described above, the correction coefficient cL can be calculated as the same value for the coding device 100 and the decoding device 200. Thus, assuming that the multiplication value γ×rL of the normalized inner product value rL and the left-right correlation coefficient γ is a target of coding in the left channel subtraction gain estimation unit 120 and decoding in the left channel subtraction gain decoding unit 230, and the left channel subtraction gain code Cα represents the quantized value of the multiplication value γ×rL, the left channel subtraction gain estimation unit 120 and the left channel subtraction gain decoding unit 230 may multiply the quantized value of the multiplication value γ×rL by the left channel correction coefficient cL to obtain the left channel subtraction gain α.
This similarly applies to the right channel, and the correction coefficient cR can be calculated as the same value for the coding device 100 and the decoding device 200. Thus, assuming that the multiplication value γ×rR of the normalized inner product value rR and the left-right correlation coefficient γ is a target of coding in the right channel subtraction gain estimation unit 140 and decoding in the right channel subtraction gain decoding unit 250, and the right channel subtraction gain code Cβ represents the quantized value of the multiplication value γ×rR, the right channel subtraction gain estimation unit 140 and the right channel subtraction gain decoding unit 250 may multiply the quantized value of the multiplication value γ×rR by the right channel correction coefficient cR to obtain the right channel subtraction gain β.
A coding device and a decoding device according to a second reference embodiment will be described.
As illustrated in
The input sound signals of the left channel input to the coding device 101 and the input sound signals of the right channel input to the coding device 101 are input to the left-right relationship information estimation unit 181. The left-right relationship information estimation unit 181 obtains and outputs a left-right time difference τ and a left-right time difference code Cτ, which is the code representing the left-right time difference τ, from the input sound signals of the left channel and the input sound signals of the right channel input (step S181).
Assuming that sound signals obtained by AD conversion of sounds collected by the microphone for the left channel disposed in a certain space are the input sound signals of the left channel, and sound signals obtained by AD conversion of sounds collected by the microphone for the right channel disposed in the certain space are the input sound signals of the right channel, the left-right time difference τ is information corresponding to the difference (so-called time difference of arrival) between the arrival time from the sound source that mainly emits sound in the space to the microphone for the left channel and the arrival time from the sound source to the microphone for the right channel. Note that, in order to include not only the time difference of arrival, but also the information on which microphone sound has reached earlier in the left-right time difference τ, the left-right time difference τ can take a positive value or a negative value, based on the input sound signals of one of the sides. In other words, the left-right time difference τ is information indicating how far ahead the same sound signal is included in the input sound signals of the left channel or the input sound signals of the right channel. In the following, in a case where the same sound signal is included in the input sound signals of the left channel before the input sound signals of the right channel, it is also said that the left channel is preceding, and in a case where the same sound signal is included in the input sound signals of the right channel before the input sound signals of the left channel, it is also said that the right channel is preceding.
The left-right time difference τ may be determined by any known method. For example, the left-right relationship information estimation unit 181 calculates a value γcand representing the magnitude of the correlation (hereinafter referred to as a correlation value) between a sample sequence of the input sound signals of the left channel and a sample sequence of the input sound signals of the right channel at a position shifted to a later position than that of the sample sequence by the number of candidate samples τcand for each number of candidate samples τcand from the predetermined τmax to τmin (e.g., τmax is a positive number and τmin is a negative number), to obtain the number of candidate samples τcand at which the correlation value γcand is maximized, as the left-right time difference τ. In other words, in this example, in the case where the left channel is preceding, the left-right time difference τ is a positive value, in the case where the right channel is preceding, the left-right time difference τ is a negative value, and the absolute value of the left-right time difference τ is the value representing how far the preceding channel precedes the other channel (the number of samples preceding). For example, in a case where the correlation value γcand is calculated using only the samples in the frame, if τcand is a positive value, the absolute value of the correlation coefficient between a partial sample sequence xR(1+τcand), xR(2+τcand), . . . , xR(T) of the input sound signals of the right channel and a partial sample sequence xL(1), xL(2), . . . , xL(T−τcand) of the input sound signals of the left channel at a position shifted before the partial sample sequence by the number of candidate samples of τcand may be calculated as the correlation value γcand, and if τcand is a negative value, the absolute value of the correlation coefficient between a partial sample sequence xL(1−τcand), xL(2−τcand), . . . , xL(T) of the input sound signals of the left channel and a partial sample sequence xR(1), xR(2), . . . , xR(T+τcand) of the input sound signals of the right channel at a position shifted before the partial sample sequence by the number of candidate samples −τcand may be calculated as the correlation value γcand. Of course, one or more samples of past input sound signals that are continuous with the sample sequence of the input sound signals of the current frame may also be used to calculate the correlation value γcand, and in this case, the sample sequence of the input sound signals of the past frames only needs to be stored in a storage unit (not illustrated) in the left-right relationship information estimation unit 181 for a predetermined number of frames.
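The in-frame correlation search described above can be sketched as follows; the function name and the use of NumPy are illustrative, and past-frame samples are not used in this sketch.

```python
import numpy as np

def estimate_time_difference(x_L, x_R, tau_min, tau_max):
    # For each candidate number of samples tau_cand from tau_min to tau_max, correlate
    # the preceding channel's samples with the other channel's samples shifted by
    # tau_cand, and keep the candidate that gives the largest absolute correlation
    # coefficient (the correlation value gamma_cand).
    T = len(x_L)
    best_tau, best_corr = 0, -1.0
    for tau in range(tau_min, tau_max + 1):
        if tau >= 0:   # left channel preceding by tau samples
            a, b = x_L[:T - tau], x_R[tau:]
        else:          # right channel preceding by |tau| samples
            a, b = x_R[:T + tau], x_L[-tau:]
        if len(a) < 2:
            continue
        corr = abs(np.corrcoef(a, b)[0, 1])    # correlation value gamma_cand
        if corr > best_corr:
            best_tau, best_corr = tau, corr
    return best_tau, best_corr                 # left-right time difference tau and gamma
```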
For example, instead of the absolute value of the correlation coefficient, the correlation value γcand may be calculated by using the information on the phases of the signals as described below. In this example, the left-right relationship information estimation unit 181 first performs Fourier transform on each of the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the input sound signals xR(1), xR(2), . . . xR(T) of the right channel as in Equations (3-1) and (3-2) below to obtain the frequency spectra XL(k) and XR(k) at each frequency k from 0 to T−1.
The left-right relationship information estimation unit 181 obtains the spectrum φ(k) of the phase difference at each frequency k by Equation (3-3) below using the obtained frequency spectra XL(k) and XR(k).
The left-right relationship information estimation unit 181 performs an inverse Fourier transform on the obtained spectrum of the phase difference to obtain a phase difference signal ω(τcand) for each number of candidate samples τcand from τmax to τmin as in Equation (3-4) below.
Because the absolute value of the obtained phase difference signal ω(τcand) represents a certain correlation corresponding to the plausibility of the time difference between the input sound signals xL(1), xL(2), . . . , xL(T) of the left channel and the input sound signals xR(1), xR(2), . . . , xR(T) of the right channel, the absolute value of this phase difference signal ω(τcand) for each number of candidate samples τcand is used as the correlation value γcand. The left-right relationship information estimation unit 181 obtains the number of candidate samples τcand at which the correlation value γcand, which is the absolute value of the phase difference signal ω(τcand), is maximized, as the left-right time difference τ. Note that instead of using the absolute value of the phase difference signal ω(τcand) as the correlation value γcand as it is, a normalized value may be used, such as the relative difference between the absolute value of the phase difference signal ω(τcand) and the average of the absolute values of the phase difference signals obtained for a plurality of numbers of candidate samples before and after τcand. In other words, for each τcand, the average value may be obtained by Equation (3-5) below using a predetermined positive number τrange, and the normalized correlation value obtained by Expression (3-6) below using the obtained average value ωc(τcand) and the phase difference signal ω(τcand) may be used as γcand.
Note that the normalized correlation value obtained by Expression (3-6) is a value of 0 or greater and 1 or less, and is a value indicating a property where the normalized correlation value is close to 1 as τcand is plausible as the left-right time difference, and the normalized correlation value is close to 0 as τcand is not plausible as the left-right time difference.
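The phase-based computation of Equations (3-1) to (3-4) can be sketched as follows. Because the equations themselves are not reproduced in this excerpt, details such as the direction of the conjugation and the handling of zero-magnitude frequency bins are assumptions, and the normalization of Equation (3-5) and Expression (3-6) is omitted for brevity.

```python
import numpy as np

def phase_based_time_difference(x_L, x_R, tau_min, tau_max):
    # Form the unit-magnitude phase-difference spectrum of the two channels and use the
    # magnitude of its inverse transform as the correlation value for each candidate.
    T = len(x_L)
    X_L = np.fft.fft(x_L)                            # Equation (3-1)
    X_R = np.fft.fft(x_R)                            # Equation (3-2)
    cross = X_L * np.conj(X_R)
    phi = cross / np.maximum(np.abs(cross), 1e-12)   # Equation (3-3): phase difference spectrum
    omega = np.fft.ifft(phi)                         # Equation (3-4): phase difference signal
    taus = np.arange(tau_min, tau_max + 1)
    gamma_cand = np.abs(omega[taus % T])             # correlation value for each tau_cand
    best = int(np.argmax(gamma_cand))
    return int(taus[best]), float(gamma_cand[best])  # left-right time difference and gamma
```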
The left-right relationship information estimation unit 181 only needs to code the left-right time difference τ in a prescribed coding scheme to obtain a left-right time difference code Cτ that is a code capable of uniquely identifying the left-right time difference τ. A known coding scheme such as scalar quantization is used as the prescribed coding scheme. Note that the predetermined numbers of candidate samples may be the integer values from τmax to τmin, or may include fractions and decimals between τmax and τmin, and need not necessarily include all integer values between τmax and τmin. τmax = −τmin may hold, but this is not necessarily required. In a case of targeting special input sound signals in which one particular channel always precedes, both τmax and τmin may be positive numbers, or both τmax and τmin may be negative numbers.
Note that, in a case where the coding device 101 estimates the subtraction gain based on the principle for minimizing the quantization errors of Example 4 or the modified example of Example 4 described in the first reference embodiment, the left-right relationship information estimation unit 181 further outputs the correlation value between the sample sequence of the input sound signals of the left channel and the sample sequence of the input sound signals of the right channel at a position shifted to a later position than that of the sample sequence by the left-right time difference τ, that is, the maximum value of the correlation values γcand calculated for each number of candidate samples τcand from τmax to τmin, as the left-right correlation coefficient γ (step S180).
The downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110 and the left-right time difference τ output by the left-right relationship information estimation unit 181 are input into the time shift unit 191. In a case where the left-right time difference τ is a positive value (i.e., in a case where the left-right time difference τ indicates that the left channel is preceding), the time shift unit 191 outputs the downmix signals xM(1), xM(2), . . . , xM(T) to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130 as is (i.e., determined to be used in the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130), and outputs delayed downmix signals xM′(1), xM′(2), . . . , xM′(T) which are signals xM(1−|τ|), xM(2−|τ|), . . . , xM(T−|τ|) obtained by delaying the downmix signals by |τ| samples (the number of samples in the absolute value of the left-right time difference τ, the number of samples for the magnitude represented by the left-right time difference τ) to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150 (i.e., determined to be used in the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150). In a case where the left-right time difference τ is a negative value (i.e., in a case where the left-right time difference τ indicates that the right channel is preceding), the time shift unit 191 outputs delayed downmix signals xM′(1), xM′(2), . . . , xM′(T) which are signals xM(1−|τ|), xM(2−|τ|), . . . , xM(T−|τ|) obtained by delaying the downmix signals by |τ| samples to the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130 (i.e., determined to be used in the left channel subtraction gain estimation unit 120 and the left channel signal subtraction unit 130), and outputs the downmix signals xM(1), xM(2), . . . , xM(T) to the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150 as is (i.e., determined to be used in the right channel subtraction gain estimation unit 140 and the right channel signal subtraction unit 150). In a case where the left-right time difference τ is 0 (i.e., in a case where the left-right time difference τ indicates that none of the channels is preceding), the time shift unit 191 outputs the downmix signals xM(1), xM(2), . . . , xM(T) to the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 as is (i.e., determined to be used in the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150) (step S191).
In other words, for the channel with the shorter arrival time described above of the left channel and the right channel, the input downmix signals are output as is to the subtraction gain estimation unit of the channel and the signal subtraction unit of the channel, and for the channel with the longer arrival time of the left channel and the right channel, signals obtained by delaying the input downmix signals by the absolute value |τ| of the left-right time difference τ are output to the subtraction gain estimation unit of the channel and the signal subtraction unit of the channel. Note that because the downmix signals of the past frames are used in the time shift unit 191 to obtain the delayed downmix signals, the storage unit (not illustrated) in the time shift unit 191 stores the downmix signals input in the past frames for a predetermined number of frames. In a case where the left channel subtraction gain estimation unit 120 and the right channel subtraction gain estimation unit 140 obtain the left channel subtraction gain α and the right channel subtraction gain β in a well-known method such as that illustrated in PTL 1 rather than the method based on the principle for minimizing quantization errors, a means for obtaining a local decoded signal corresponding to the monaural code CM may be provided in the subsequent stage of the monaural coding unit 160 of the coding device 101 or in the monaural coding unit 160, and in the time shift unit 191, the processing described above may be performed by using the quantized downmix signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) which are local decoded signals for monaural coding in place of the downmix signals xM(1), xM(2), . . . , xM(T). In this case, the time shift unit 191 outputs the quantized downmix signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) instead of the downmix signals xM(1), xM(2), . . . , xM(T), and outputs delayed quantized downmix signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) instead of the delayed downmix signals xM′(1), xM′(2), . . . , xM′(T).
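The behavior of the time shift unit 191 in step S191 can be sketched as follows; past_x_M stands for the downmix samples of past frames held in the storage unit of the time shift unit 191 and is assumed to contain at least |τ| samples, and the function name is illustrative only.

```python
import numpy as np

def time_shift_downmix(x_M, past_x_M, tau):
    # The preceding channel's side uses the downmix signals as is; the other side uses
    # signals delayed by |tau| samples, i.e. x_M'(t) = x_M(t - |tau|), reaching back
    # across the frame boundary via the stored past samples.
    T = len(x_M)
    buf = np.concatenate([past_x_M, x_M])
    delayed = buf[len(buf) - T - abs(tau): len(buf) - abs(tau)]  # x_M(1-|tau|)..x_M(T-|tau|)
    if tau > 0:          # left channel preceding
        for_left, for_right = x_M, delayed
    elif tau < 0:        # right channel preceding
        for_left, for_right = delayed, x_M
    else:                # neither channel preceding
        for_left, for_right = x_M, x_M
    return for_left, for_right
```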
The left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the same operations as those described in the first reference embodiment, by using the downmix signals xM(1), xM(2), . . . , xM(T) or the delayed downmix signals xM′(1), xM′(2), . . . , xM′(T) input from the time shift unit 191, instead of the downmix signals xM(1), xM(2), . . . , xM(T) output by the downmix unit 110 (steps S120, S130, S140, and S150). In other words, the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the same operations as those described in the first reference embodiment, by using the downmix signals xM(1), xM(2), . . . , xM(T) or the delayed downmix signals xM′(1), xM′(2), . . . , xM′(T) determined by the time shift unit 191. Note that, in the case where the time shift unit 191 outputs the quantized downmix signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) instead of the downmix signals xM(1), xM(2), . . . , xM(T), and outputs delayed quantized downmix signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) instead of the delayed downmix signals xM′(1), xM′(2), . . . , xM′(T), the left channel subtraction gain estimation unit 120, the left channel signal subtraction unit 130, the right channel subtraction gain estimation unit 140, and the right channel signal subtraction unit 150 perform the processing described above by using the quantized downmix signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) or the delayed quantized downmix signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) input from the time shift unit 191.
As illustrated in
The left-right time difference code Cτ input to the decoding device 201 is input to the left-right time difference decoding unit 271. The left-right time difference decoding unit 271 decodes the left-right time difference code Cτ in a prescribed decoding scheme to obtain and output the left-right time difference τ (step S271). A decoding scheme corresponding to the coding scheme used by the left-right relationship information estimation unit 181 of the corresponding coding device 101 is used as the prescribed decoding scheme. The left-right time difference τ obtained by the left-right time difference decoding unit 271 is the same value as the left-right time difference τ obtained by the left-right relationship information estimation unit 181 of the corresponding coding device 101, and is any value within a range from τmax to τmin.
The monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) output by the monaural decoding unit 210 and the left-right time difference τ output by the left-right time difference decoding unit 271 are input to the time shift unit 281. In a case where the left-right time difference τ is a positive value (i.e., in a case where the left-right time difference τ indicates that the left channel is preceding), the time shift unit 281 outputs the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) to the left channel signal addition unit 240 as is (i.e., determined to be used in the left channel signal addition unit 240), and outputs delayed monaural decoded sound signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) which are signals {circumflex over ( )}xM(1−|τ|), {circumflex over ( )}xM(2−|τ|), . . . , {circumflex over ( )}xM(T−|τ|) obtained by delaying the monaural decoded sound signals by |τ| samples, to the right channel signal addition unit 260 (i.e., determined to be used in the right channel signal addition unit 260). In a case where the left-right time difference τ is a negative value (i.e., in a case where the left-right time difference τ indicates that the right channel is preceding), the time shift unit 281 outputs delayed monaural decoded sound signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) which are signals {circumflex over ( )}xM(1−|τ|), {circumflex over ( )}xM(2−|τ|), . . . , {circumflex over ( )}xM(T−|τ|) obtained by delaying the monaural decoded sound signals by |τ| samples to the left channel signal addition unit 240 (i.e., determined to be used in the left channel signal addition unit 240), and outputs the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) to the right channel signal addition unit 260 as is (i.e., determined to be used in the right channel signal addition unit 260). In a case where the left-right time difference τ is 0 (i.e., in a case where the left-right time difference τ indicates that none of the channels is preceding), the time shift unit 281 outputs the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) to the left channel signal addition unit 240 and the right channel signal addition unit 260 as is (i.e., determined to be used in the left channel signal addition unit 240 and the right channel signal addition unit 260) (step S281). Note that because the monaural decoded sound signals of the past frames are used in the time shift unit 281 to obtain the delayed monaural decoded sound signals, the storage unit (not illustrated) in the time shift unit 281 stores the monaural decoded sound signals input in the past frames for a predetermined number of frames.
The left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as those described in the first reference embodiment, by using the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) or the delayed monaural decoded sound signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) input from the time shift unit 281, instead of the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) output by the monaural decoding unit 210 (steps S240 and S260). In other words, the left channel signal addition unit 240 and the right channel signal addition unit 260 perform the same operations as those described in the first reference embodiment, by using the monaural decoded sound signals {circumflex over ( )}xM(1), {circumflex over ( )}xM(2), . . . , {circumflex over ( )}xM(T) or the delayed monaural decoded sound signals {circumflex over ( )}xM′(1), {circumflex over ( )}xM′(2), . . . , {circumflex over ( )}xM′(T) determined by the time shift unit 281.
An embodiment in which the coding device 101 according to the second reference embodiment is modified to generate downmix signals in consideration of the relationship between the input sound signals of the left channel and the input sound signals of the right channel is a first embodiment. A coding device according to the first embodiment will be described below. Note that the codes obtained by the coding device according to the first embodiment can be decoded by the decoding device 201 according to the second reference embodiment, and thus description of the decoding device is omitted.
As illustrated in
The input sound signals of the left channel input to the coding device 102 and the input sound signals of the right channel input to the coding device 102 are input to the left-right relationship information estimation unit 182. The left-right relationship information estimation unit 182 obtains and outputs a left-right time difference τ, a left-right time difference code Cτ, which is the code representing the left-right time difference τ, a left-right correlation coefficient γ, and preceding channel information, from the input sound signals of the left channel and the input sound signals of the right channel input (step S182). The process in which the left-right relationship information estimation unit 182 obtains the left-right time difference τ and the left-right time difference code Cτ is similar to that of the left-right relationship information estimation unit 181 according to the second reference embodiment.
The left-right correlation coefficient γ is information corresponding to the correlation coefficient between the sound signals reaching the microphone for the left channel from the sound source and collected and the sound signals reaching the microphone for the right channel from the sound source and collected, in the above-mentioned assumption in the description of the left-right relationship information estimation unit 181 according to the second reference embodiment. The preceding channel information is information corresponding to which microphone the sound emitted by the sound source reaches earlier, is information indicating in which of the input sound signals of the left channel and the input sound signals of the right channel the same sound signal is included earlier, and is information indicating which channel of the left channel and the right channel is preceding.
In the case of the example described above in the description of the left-right relationship information estimation unit 181 according to the second reference embodiment, the left-right relationship information estimation unit 182 obtains and outputs the correlation value between the sample sequence of the input sound signals of the left channel and the sample sequence of the input sound signals of the right channel at a position shifted to a later position than that of the sample sequence by the left-right time difference τ, that is, the maximum value of the correlation values γcand calculated for each number of candidate samples τcand from τmax to τmin, as the left-right correlation coefficient γ. In a case where the left-right time difference τ is a positive value, the left-right relationship information estimation unit 182 obtains and outputs information indicating that the left channel is preceding as the preceding channel information, and in a case where the left-right time difference τ is a negative value, the left-right relationship information estimation unit 182 obtains and outputs information indicating that the right channel is preceding as the preceding channel information. In a case where the left-right time difference τ is 0, the left-right relationship information estimation unit 182 may obtain and output information indicating that the left channel is preceding as the preceding channel information, may obtain and output information indicating that the right channel is preceding as the preceding channel information, or may obtain and output information indicating that none of the channels is preceding as the preceding channel information.
The input sound signals of the left channel input to the coding device 102, the input sound signals of the right channel input to the coding device 102, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 182, and the preceding channel information output by the left-right relationship information estimation unit 182 are input to the downmix unit 112. The downmix unit 112 obtains and outputs the downmix signals by weighted averaging the input sound signals of the left channel and the input sound signals of the right channel such that the downmix signals include a larger amount of the input sound signals of the preceding channel of the input sound signals of the left channel and the input sound signals of the right channel as the left-right correlation coefficient γ is greater (step S112).
For example, if an absolute value or a normalized value of the correlation coefficient is used for the correlation value as in the example described above in the description of the left-right relationship information estimation unit 181 according to the second reference embodiment, the obtained left-right correlation coefficient γ is a value of 0 or greater and 1 or less, and thus the downmix unit 112 uses a signal obtained by weighted addition of the input sound signal xL(t) of the left channel and the input sound signal xR(t) of the right channel by using the weight determined by the left-right correlation coefficient γ for each corresponding sample number t, as the downmix signal xM(t). Specifically, in the case where the preceding channel information is information indicating that the left channel is preceding, that is, in the case where the left channel is preceding, the downmix unit 112 obtains the downmix signal xM(t) as xM(t)=((1+γ)/2)×xL(t)+((1−γ)/2)×xR(t), and in the case where the preceding channel information is information indicating that the right channel is preceding, that is, in the case where the right channel is preceding, the downmix unit 112 obtains the downmix signal xM(t) as xM(t)=((1−γ)/2)×xL(t)+((1+γ)/2)×xR(t). By the downmix unit 112 obtaining the downmix signal in this way, the downmix signal is closer to the signal obtained by the average of the input sound signals of the left channel and the input sound signals of the right channel, as the left-right correlation coefficient γ is smaller, that is, the correlation between the input sound signals of the left channel and the input sound signals of the right channel is smaller, and the downmix signal is closer to the input sound signal of the preceding channel of the input sound signals of the left channel and the input sound signals of the right channel, as the left-right correlation coefficient γ is greater, that is, the correlation between the input sound signals of the left channel and the input sound signals of the right channel is greater.
Note that in the case where none of the channels is preceding, the downmix unit 112 may obtain and output the downmix signals by averaging the input sound signals of the left channel and the input sound signals of the right channel such that the input sound signals of the left channel and the input sound signals of the right channel are included in the downmix signals with the same weight. That is, in the case where the preceding channel information indicates that none of the channels is preceding, the downmix unit 112 uses xM(t)=(xL(t)+xR(t))/2, obtained by averaging the input sound signal xL(t) of the left channel and the input sound signal xR(t) of the right channel for each sample number t, as the downmix signal xM(t).
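As a concrete illustration of the weighting just described, the following is a minimal sketch in Python; the function and argument names are illustrative and not part of the disclosure. It assumes the left-right correlation coefficient γ is already available as a value of 0 or greater and 1 or less and that the preceding channel information is represented as one of "left", "right", or "none".

```python
import numpy as np

def downmix(x_left, x_right, gamma, preceding):
    """Weighted averaging of the left and right input sound signals.

    gamma is the left-right correlation coefficient in [0, 1];
    preceding is "left", "right", or "none".
    """
    x_left = np.asarray(x_left, dtype=float)
    x_right = np.asarray(x_right, dtype=float)
    if preceding == "left":
        # xM(t) = ((1+gamma)/2) * xL(t) + ((1-gamma)/2) * xR(t)
        return ((1.0 + gamma) / 2.0) * x_left + ((1.0 - gamma) / 2.0) * x_right
    if preceding == "right":
        # xM(t) = ((1-gamma)/2) * xL(t) + ((1+gamma)/2) * xR(t)
        return ((1.0 - gamma) / 2.0) * x_left + ((1.0 + gamma) / 2.0) * x_right
    # Neither channel preceding: plain average with equal weights.
    return (x_left + x_right) / 2.0
```

With γ = 0 both weighted branches reduce to the plain average, and with γ = 1 the downmix signal equals the input sound signal of the preceding channel, which matches the behavior described above.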
The coding device 100 according to the first reference embodiment may also be modified to generate downmix signals in consideration of the relationship between the input sound signals of the left channel and the input sound signals of the right channel, and this embodiment will be described as a second embodiment. Note that the codes obtained by the coding device according to the second embodiment can be decoded by the decoding device 200 according to the first reference embodiment, and thus description of the decoding device is omitted.
As illustrated in
The input sound signals of the left channel input to the coding device 103 and the input sound signals of the right channel input to the coding device 103 are input to the left-right relationship information estimation unit 183. The left-right relationship information estimation unit 183 obtains and outputs the left-right correlation coefficient γ and the preceding channel information from the input sound signals of the left channel and the input sound signals of the right channel input (step S183).
The left-right correlation coefficient γ and the preceding channel information obtained and output by the left-right relationship information estimation unit 183 are the same as those described in the first embodiment. In other words, the left-right relationship information estimation unit 183 may be the same as the left-right relationship information estimation unit 182 except that the left-right relationship information estimation unit 183 need not necessarily obtain and output the left-right time difference τ and the left-right time difference code Ci.
For example, the left-right relationship information estimation unit 183 obtains and outputs, as the left-right correlation coefficient γ, the maximum value of the correlation values γcand, each calculated for a number of candidate samples τcand from τmax to τmin between a sample sequence of the input sound signals of the left channel and a sample sequence of the input sound signals of the right channel at a position shifted to a later position than that of the sample sequence by τcand. In a case where the τcand that gives the maximum correlation value is a positive value, the left-right relationship information estimation unit 183 obtains and outputs information indicating that the left channel is preceding as the preceding channel information, and in a case where the τcand that gives the maximum correlation value is a negative value, the left-right relationship information estimation unit 183 obtains and outputs information indicating that the right channel is preceding as the preceding channel information. In a case where the τcand that gives the maximum correlation value is 0, the left-right relationship information estimation unit 183 may obtain and output information indicating that the left channel is preceding as the preceding channel information, may obtain and output information indicating that the right channel is preceding as the preceding channel information, or may obtain and output information indicating that none of the channels is preceding as the preceding channel information.
A configuration in which downmix signals are obtained in consideration of the relationship between the input sound signals of the left channel and the input sound signals of the right channel may be adopted even in a coding device that performs stereo coding on the input sound signals of each channel instead of the difference signals of each channel, and such an embodiment will be described as a third embodiment.
As illustrated in
The left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 according to the second embodiment. The input sound signals of the left channel input to the coding device 104 and the input sound signals of the right channel input to the coding device 104 are input to the left-right relationship information estimation unit 183. The left-right relationship information estimation unit 183 obtains the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signals of the left channel and the input sound signals of the right channel, and the preceding channel information, which is information indicating which of the input sound signals of the left channel and the input sound signals of the right channel is preceding, from the input sound signals of the left channel and the input sound signals of the right channel that are input and outputs the left-right correlation coefficient γ and the preceding channel information (step S183).
The downmix unit 112 is the same as the downmix unit 112 according to the second embodiment. The input sound signals of the left channel input to the coding device 104, the input sound signals of the right channel input to the coding device 104, the left-right correlation coefficient γ output by the left-right relationship information estimation unit 183, and the preceding channel information output by the left-right relationship information estimation unit 183 are input to the downmix unit 112. The downmix unit 112 obtains and outputs the downmix signals by weighted averaging the input sound signals of the left channel and the input sound signals of the right channel such that the downmix signals include a larger amount of the input sound signals of the preceding channel among the input sound signals of the left channel and the input sound signals of the right channel as the left-right correlation coefficient γ is greater (step S112).
For example, assuming that the sample number is t, the input sound signal of the left channel is xL(t), the input sound signal of the right channel is xR(t), and the downmix signal is xM(t), the downmix unit 112 obtains the downmix signal by xM(t)=((1+γ)/2)×xL(t)+((1−γ)/2)×xR(t) for each sample number t in a case where the preceding channel information indicates that the left channel is preceding, obtains the downmix signal by xM(t)=((1−γ)/2)×xL(t)+((1+γ)/2)×xR(t) for each sample number t in a case where the preceding channel information indicates that the right channel is preceding, and obtains the downmix signal by xM(t)=(xL(t)+xR(t))/2 for each sample number t in a case where the preceding channel information indicates that none of the channels is preceding.
The monaural coding unit 160 is the same as the monaural coding unit 160 according to the second embodiment. The downmix signals output by the downmix unit 112 are input to the monaural coding unit 160. The monaural coding unit 160 codes the input downmix signals to obtain and output the monaural code CM (step S160). The monaural coding unit 160 may use any coding scheme; for example, a coding scheme such as the 3GPP EVS standard may be used. The coding scheme may be a coding scheme that performs coding processing independent of the stereo coding unit 174 described below, specifically, a coding scheme that performs coding processing without using the stereo code CS' obtained by the stereo coding unit 174 or information obtained in the coding processing performed by the stereo coding unit 174, or may be a coding scheme that performs coding processing using the stereo code CS' obtained by the stereo coding unit 174 or information obtained in the coding processing performed by the stereo coding unit 174.
The input sound signals of the left channel input to the coding device 104 and the input sound signals of the right channel input to the coding device 104 are input to the stereo coding unit 174. The stereo coding unit 174 codes the input sound signals of the left channel and the input sound signals of the right channel that are input, to obtain and output the stereo code CS' (step S174). The stereo coding unit 174 may use any coding scheme; for example, a stereo coding scheme corresponding to the stereo decoding scheme of the MPEG-4 AAC standard may be used, or a coding scheme that independently codes the input sound signals of the left channel and the input sound signals of the right channel may be used, in which case a combination of all the codes obtained by the coding is used as the stereo code CS'. The coding scheme may be a coding scheme that performs coding processing independent of the monaural coding unit 160, specifically, a coding scheme that performs coding processing without using the monaural code CM obtained by the monaural coding unit 160 or information obtained in the coding processing performed by the monaural coding unit 160, or may be a coding scheme that performs coding processing using the monaural code CM obtained by the monaural coding unit 160 or information obtained in the coding processing performed by the monaural coding unit 160.
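To make the frame-level flow of the coding device 104 concrete, the following hypothetical sketch in Python shows only the order of the steps (S112, S160, S174) when the monaural coding and the stereo coding operate independently of each other; the function names and the encoder callables are placeholders, and no actual EVS- or AAC-family codec is implemented here.

```python
import struct
from typing import Callable, Dict, Sequence

def encode_frame(
    x_left: Sequence[float],
    x_right: Sequence[float],
    downmix_fn: Callable[[Sequence[float], Sequence[float]], Sequence[float]],
    mono_encoder: Callable[[Sequence[float]], bytes],
    stereo_encoder: Callable[[Sequence[float], Sequence[float]], bytes],
) -> Dict[str, bytes]:
    """Independent monaural and stereo coding of one frame: the downmix
    signals are coded into the monaural code CM and the left/right input
    sound signals are coded into the stereo code CS'."""
    x_mono = downmix_fn(x_left, x_right)   # step S112 (downmix unit 112)
    cm = mono_encoder(x_mono)              # step S160 (monaural coding unit 160)
    cs = stereo_encoder(x_left, x_right)   # step S174 (stereo coding unit 174)
    return {"CM": cm, "CS'": cs}

# Illustrative use with trivial stand-ins for the codecs.
def _to_bytes(samples: Sequence[float]) -> bytes:
    return b"".join(struct.pack("<f", float(s)) for s in samples)

codes = encode_frame(
    [0.1, 0.2, 0.3], [0.0, 0.1, 0.2],
    downmix_fn=lambda l, r: [(a + b) / 2 for a, b in zip(l, r)],  # placeholder downmix
    mono_encoder=_to_bytes,                                        # placeholder "codec"
    stereo_encoder=lambda l, r: _to_bytes(l) + _to_bytes(r),       # placeholder "codec"
)
```

If a dependent coding scheme is used instead, the monaural encoder would additionally take the stereo code or information obtained in the stereo coding processing as an input, or vice versa, as described above.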
As can be seen from the description in the above embodiments, the configuration in which downmix signals are obtained in consideration of the relationship between the input sound signals of the left channel and the input sound signals of the right channel may be adopted in any coding device that at least codes the downmix signals obtained from the input sound signals of the left channel and the input sound signals of the right channel to obtain a code. The configuration is not limited to a coding device, and may be adopted in any signal processing device that at least performs signal processing on the downmix signals obtained from the input sound signals of the left channel and the input sound signals of the right channel to obtain a signal processing result. Furthermore, the configuration in which downmix signals are obtained in consideration of the relationship between the input sound signals of the left channel and the input sound signals of the right channel may be implemented as a downmix device used in a stage preceding the coding device or the signal processing device. These embodiments will be described as a fourth embodiment.
As illustrated in
The left-right relationship information estimation unit 183 is the same as the left-right relationship information estimation unit 183 according to the second embodiment, and obtains the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signals of the left channel and the input sound signals of the right channel, and the preceding channel information, which is information indicating which of the input sound signals of the left channel and the input sound signals of the right channel is preceding, from the input sound signals of the left channel and the input sound signals of the right channel that are input and outputs the left-right correlation coefficient γ and the preceding channel information (step S183).
The downmix unit 112 is the same as the downmix unit 112 according to the second embodiment, and obtains and outputs the downmix signals by weighted averaging the input sound signals of the left channel and the input sound signals of the right channel such that the downmix signals include a larger amount of the input sound signals of the preceding channel among the input sound signals of the left channel and the input sound signals of the right channel as the left-right correlation coefficient γ is greater (step S112).
The downmix signals output by the downmix unit 112 are at least input to the coding unit 195. The coding unit 195 at least codes the input downmix signals to obtain and output a sound signal code (step S195). The coding unit 195 may also code the input sound signals of the left channel and the input sound signals of the right channel, and the code obtained by this coding may also be output while being included in the sound signal code. In this case, as illustrated by the dashed lines in
As illustrated in
The downmix signals output by the downmix unit 112 are at least input to the signal processing unit 315. The signal processing unit 315 at least performs signal processing on the input downmix signals to obtain and output the signal processing result (step S315). The signal processing unit 315 may also perform signal processing on the input sound signals of the left channel and the input sound signals of the right channel to obtain the signal processing result, and in this case, as illustrated by the dashed lines in
In a case where the input sound signals of the left channel and the input sound signals of the right channel input to the sound signal processing device 305 are decoded sound signals of the left channel and decoded sound signals of the right channel obtained by decoding the code with another device, one or both of the left-right correlation coefficient γ and the preceding channel information, which are the same as those obtained by the left-right relationship information estimation unit 183, may have been obtained by the other device. In a case where one or both of the left-right correlation coefficient γ and the preceding channel information are obtained by the other device, as illustrated by the dot-dash lines in
As illustrated in
The left-right relationship information acquisition unit 185 obtains and outputs the left-right correlation coefficient γ, which is the correlation coefficient between the input sound signals of the left channel and the input sound signals of the right channel, and the preceding channel information, which is information indicating which of the input sound signals of the left channel and the input sound signals of the right channel is preceding (step S185).
In a case where both the left-right correlation coefficient γ and the preceding channel information are obtained by another device, as illustrated by the dot-dash lines in
In a case where neither the left-right correlation coefficient γ nor the preceding channel information is obtained by another device, as illustrated by the dashed line in
In a case where either one of the left-right correlation coefficient γ and the preceding channel information is not obtained by another device, as illustrated by the dashed line in
The downmix unit 112 is the same as the downmix unit 112 according to the second embodiment, and obtains and outputs the downmix signals by weighted averaging the input sound signals of the left channel and the input sound signals of the right channel such that the downmix signals include a larger amount of the input sound signals of the preceding channel among the input sound signals of the left channel and the input sound signals of the right channel as the left-right correlation coefficient γ is greater, based on the preceding channel information and the left-right correlation coefficient acquired by the left-right relationship information acquisition unit 185 (step S112).
For example, assuming that the sample number is t, the input sound signal of the left channel is xL(t), the input sound signal of the right channel is xR(t), and the downmix signal is xM(t), the downmix unit 112 obtains the downmix signal by xM(t)=((1+γ)/2)×xL(t)+((1−γ)/2)×xR(t) for each sample number t in a case where the preceding channel information indicates that the left channel is preceding, obtains the downmix signal by xM(t)=((1−γ)/2)×xL(t)+((1+γ)/2)×xR(t) for each sample number t in a case where the preceding channel information indicates that the right channel is preceding, and obtains the downmix signal by xM(t)=(xL(t)+xR(t))/2 for each sample number t in a case where the preceding channel information indicates that none of the channels is preceding.
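The following is one plausible sketch, in Python, of the left-right relationship information acquisition unit 185, under the assumption that values already obtained by another device are passed through unchanged and that any value not supplied is estimated from the input sound signals in the same manner as the left-right relationship information estimation unit 183; the function name, the arguments, and the shift range are illustrative and not taken from the disclosure.

```python
from typing import Optional, Tuple
import numpy as np

def acquire_left_right_relationship(
    x_left: np.ndarray,
    x_right: np.ndarray,
    gamma_from_other_device: Optional[float] = None,
    preceding_from_other_device: Optional[str] = None,
    tau_min: int = -32,
    tau_max: int = 32,
) -> Tuple[float, str]:
    """Return (gamma, preceding_channel_information), preferring values
    obtained by another device and estimating only what is missing."""
    if gamma_from_other_device is not None and preceding_from_other_device is not None:
        return gamma_from_other_device, preceding_from_other_device

    # Estimate the missing value(s) from the input sound signals.
    x_left = np.asarray(x_left, dtype=float)
    x_right = np.asarray(x_right, dtype=float)
    n = len(x_left)
    best_gamma, best_tau = 0.0, 0
    for tau in range(tau_min, tau_max + 1):
        # Overlap xL(t) with xR(t + tau).
        l_seg = x_left[max(0, -tau): n - max(0, tau)]
        r_seg = x_right[max(0, tau): n + min(0, tau)]
        denom = np.sqrt(np.sum(l_seg ** 2) * np.sum(r_seg ** 2))
        if denom > 0.0:
            g = abs(np.sum(l_seg * r_seg)) / denom
            if g > best_gamma:
                best_gamma, best_tau = g, tau

    gamma = gamma_from_other_device if gamma_from_other_device is not None else best_gamma
    if preceding_from_other_device is not None:
        preceding = preceding_from_other_device
    else:
        preceding = "left" if best_tau > 0 else "right" if best_tau < 0 else "none"
    return gamma, preceding
```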
The processing of each unit of each coding device, each decoding device, the sound signal coding device, the sound signal processing device, and the sound signal downmix device described above may be realized by a computer, and in this case, the processing contents of the functions that each device should have are described by a program. Then, by causing this program to be read into a storage unit 1020 of the computer 1000 illustrated in
A program in which the processing content is described can be recorded on a computer-readable recording medium. The computer-readable recording medium is, for example, a non-transitory recording medium, specifically, a magnetic recording device, an optical disk, or the like.
Distribution of this program is performed, for example, by selling, transferring, or renting a portable recording medium such as a DVD or CD-ROM on which the program has been recorded. Further, the program may be distributed by being stored in a storage device of a server computer and transferred from the server computer to another computer via a network.
For example, a computer executing such a program first temporarily stores the program recorded on the portable recording medium or the program transferred from the server computer in an auxiliary recording unit 1050, which is its own non-transitory storage device. Then, when executing the processing, the computer reads the program stored in the auxiliary recording unit 1050, which is its own storage device, into the storage unit 1020 and executes the processing in accordance with the read program. As another execution mode of this program, the computer may directly read the program from the portable recording medium into the storage unit 1020 and execute processing in accordance with the program, or may sequentially execute the processing in accordance with the received program each time the program is transferred from the server computer to the computer. A configuration may be adopted in which the above-described processing is executed by a so-called application service provider (ASP) type service that realizes a processing function only by an execution instruction and result acquisition, without transferring the program from the server computer to the computer. It is assumed that the program in the present embodiment includes information that is provided for processing by an electronic computer and is equivalent to a program (such as data that is not a direct command to the computer but has properties defining the processing of the computer).
In this embodiment, although the present device is configured by a prescribed program being executed on the computer, at least a part of the processing content thereof may be realized by hardware.
It is needless to say that the present disclosure can appropriately be modified without departing from the gist of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
PCT/JP2020/010080 | Mar 2020 | WO | international
PCT/JP2020/010081 | Mar 2020 | WO | international

 | Number | Date | Country
---|---|---|---
Parent | 17909666 | Sep 2022 | US
Child | 18812833 | | US