Multichannel Audio Signal Processing Method, Apparatus, and System

Abstract
A multichannel audio signal processing method, an apparatus, and a system resolve a problem that an audio signal cannot be discontinuously transmitted in a multichannel audio communications system. An encoder includes a signal detection circuit and a signal encoding circuit. The signal encoding circuit is configured to encode an Nth-frame downmixed signal when the signal detection circuit detects that the Nth-frame downmixed signal includes a speech signal, or, when the signal detection circuit detects that the Nth-frame downmixed signal does not include a speech signal, encode the Nth-frame downmixed signal when the signal detection circuit determines that the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, or skip encoding the Nth-frame downmixed signal when the signal detection circuit determines that the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of International Patent Application No. PCT/CN2016/100617 filed on Sep. 28, 2016, which is hereby incorporated by reference in its entirety.


TECHNICAL FIELD

The present disclosure relates to the field of audio encoding and decoding technologies, and in particular, to a multichannel audio signal processing method, an apparatus, and a system.


BACKGROUND

During audio communication, to increase a capacity of a communications system, a transmit end usually first encodes each frame of an original audio signal to be transmitted, and then transmits the encoded audio signal. The audio signal is compressed by means of encoding. After receiving the signal, a receive end decodes the received signal and restores the original audio signal. To implement maximum compression of an audio signal, different encoding manners are used for different types of audio signals. In other approaches, when an audio signal is a speech signal, a continuous encoding manner is usually used, that is, each frame of the speech signal is encoded; when an audio signal is a noise signal, a discontinuous encoding manner is usually used, that is, one frame of the noise signal is encoded every several frames of noise signals. For example, a noise signal is encoded every six frames. After the first frame of the noise signal is encoded, the second frame to the seventh frame of the noise signal are not encoded, and the eighth frame of the noise signal is encoded. The second frame to the seventh frame are six No_Data frames. Further, the audio signal in this approach is a mono audio signal.


With the development of audio communications technologies, an audio communications system further supports a special communication manner, stereo communication. Dual-channel communication is used as an example of the stereo communication. The two channels include a first channel and a second channel. A transmit end obtains, according to an nth-frame speech signal on the first channel and an nth-frame speech signal on the second channel, a stereo parameter used to mix the nth-frame speech signal on the first channel and the nth-frame speech signal on the second channel into one frame of downmixed signal, where the downmixed signal is a mono signal and n is a positive integer greater than 0. Then, the transmit end mixes the nth-frame speech signals on the two channels into one frame of downmixed signal, encodes the frame of downmixed signal, and finally sends the encoded downmixed signal and the stereo parameter to a receive end. After receiving the encoded downmixed signal and the stereo parameter, the receive end decodes the encoded downmixed signal, and restores the downmixed signal to a dual-channel signal according to the stereo parameter. Compared with a transmission manner in which each frame of speech signal on the two channels is encoded, this transmission manner greatly reduces a quantity of transmitted bits, implementing compression.


However, when a noise signal is transmitted during the stereo communication, if the same encoding manner as that for a speech signal is used, and the discontinuous encoding manner used for mono communication is directly applied to the stereo communication, the receive end cannot restore the noise signal, leading to poor subjective experience for a user at the receive end.


SUMMARY

The present disclosure provides a multichannel audio signal processing method, an apparatus, and a system, to resolve a problem in the other approaches that an audio signal cannot be discontinuously transmitted in a multichannel audio communications system.


According to a first aspect, a multichannel audio signal processing method is provided, including detecting, by an encoder, whether an Nth-frame downmixed signal includes a speech signal, and encoding the Nth-frame downmixed signal when detecting that the Nth-frame downmixed signal includes the speech signal, or, when detecting that the Nth-frame downmixed signal does not include the speech signal, encoding the Nth-frame downmixed signal if the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, or skipping encoding the Nth-frame downmixed signal if the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition, where the Nth-frame downmixed signal is obtained after Nth-frame audio signals on two of multiple channels are mixed based on a predetermined first algorithm, and N is a positive integer greater than 0.


The encoder encodes the downmixed signal only when the downmixed signal includes the speech signal or the downmixed signal satisfies the preset audio frame encoding condition; otherwise, the encoder does not encode the downmixed signal. In this way, the encoder implements discontinuous encoding on the downmixed signal, and downmixed signal compression efficiency is improved.
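For illustration only, the frame-by-frame decision described above can be sketched in Python as follows; `contains_speech` and `satisfies_audio_frame_condition` are hypothetical stand-ins for the detection step and the preset audio frame encoding condition, not part of any real codec:

```python
def contains_speech(frame, threshold=0.01):
    """Hypothetical stand-in for the speech detector: flags the frame as
    speech when its mean energy exceeds a fixed threshold."""
    return sum(s * s for s in frame) / max(len(frame), 1) > threshold


def satisfies_audio_frame_condition(frame_index, sid_interval=8):
    """Hypothetical stand-in for the preset audio frame encoding condition,
    modelled here as 'first frame, or every sid_interval-th frame'."""
    return frame_index == 0 or frame_index % sid_interval == 0


def should_encode(frame, frame_index):
    """Per-frame decision from the first aspect: speech frames are always
    encoded; non-speech frames are encoded only when the preset audio frame
    encoding condition is met, and are otherwise skipped."""
    if contains_speech(frame):
        return True
    return satisfies_audio_frame_condition(frame_index)
```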


It should be noted that in the embodiments of the present disclosure, the preset audio frame encoding condition includes that the downmixed signal is a first-frame downmixed signal. That is, when the first-frame downmixed signal does not include the speech signal but satisfies the preset audio frame encoding condition, the first-frame downmixed signal is encoded.


Based on the first aspect, to improve the downmixed signal compression efficiency to a greater extent, optionally, the encoder encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate when detecting that the Nth-frame downmixed signal includes the speech signal, or, when detecting that the Nth-frame downmixed signal does not include the speech signal, encodes the Nth-frame downmixed signal according to the preset speech frame encoding rate if determining that the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, or encodes the Nth-frame downmixed signal according to a preset silence insertion descriptor (SID) encoding rate if determining that the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition but satisfies a preset SID encoding condition, where the SID encoding rate is less than the speech frame encoding rate.


It should be understood that during specific implementation, if the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition but satisfies the preset SID encoding condition, SID encoding is performed on the Nth-frame downmixed signal according to the preset SID encoding rate. Compared with speech signal encoding, this further improves the downmixed signal compression efficiency. In addition, it should be noted that in the first aspect and the foregoing technical solution, to ensure that a decoder can restore the downmixed signal, a stereo parameter set further needs to be encoded.
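As a minimal illustration of the rate selection described above, the following sketch shows the three possible outcomes for one frame; the boolean flags are assumed to come from the signal detection step, and the naming is illustrative only:

```python
from enum import Enum


class FrameCoding(Enum):
    SPEECH_RATE = "speech_frame_rate"  # preset speech frame encoding rate
    SID_RATE = "sid_rate"              # preset, lower SID encoding rate
    NO_DATA = "no_data"                # frame is not encoded at all


def select_coding(has_speech, meets_speech_condition, meets_sid_condition):
    """Three-way choice: speech-rate encoding, SID encoding, or no encoding."""
    if has_speech or meets_speech_condition:
        return FrameCoding.SPEECH_RATE
    if meets_sid_condition:
        return FrameCoding.SID_RATE
    return FrameCoding.NO_DATA
```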


Based on the first aspect, to further improve compression efficiency of a multichannel communications system, optionally, the encoder performs discontinuous encoding on a stereo parameter set. Further, the encoder obtains an Nth-frame stereo parameter set according to the Nth-frame audio signals, and encodes the Nth-frame stereo parameter set when detecting that the Nth-frame downmixed signal includes the speech signal, or, when detecting that the Nth-frame downmixed signal does not include the speech signal, encodes at least one stereo parameter in the Nth-frame stereo parameter set if the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, or skips encoding the stereo parameter set if determining that the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, where the Nth-frame stereo parameter set includes Z stereo parameters, the Z stereo parameters include a parameter that is used when the encoder mixes the Nth-frame audio signals based on the predetermined first algorithm, and Z is a positive integer greater than 0.


Based on the first aspect, optionally, to further improve the compression efficiency of the multichannel communications system, before the encoding at least one stereo parameter in the Nth-frame stereo parameter set, the encoder obtains X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule, and then encodes the X target stereo parameters, where X is a positive integer greater than 0 and less than or equal to Z.


The preset stereo parameter dimension reduction rule may be a preset stereo parameter type, that is, the X target stereo parameters satisfying the preset stereo parameter type are selected from the Nth-frame stereo parameter set. Alternatively, the preset stereo parameter dimension reduction rule is a preset quantity of stereo parameters, that is, X target stereo parameters are selected from the Nth-frame stereo parameter set. Alternatively, the preset stereo parameter dimension reduction rule is reducing time-domain or frequency-domain resolution of at least one stereo parameter in the Nth-frame stereo parameter set, that is, the X target stereo parameters are determined from the Z stereo parameters according to the reduced time-domain or frequency-domain resolution of the at least one stereo parameter.
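A minimal sketch of these three reduction rules follows; the tuple layout of the parameter set is an assumption made only for illustration:

```python
def reduce_stereo_parameters(param_set, allowed_types=None, max_count=None,
                             subband_stride=1):
    """Select X target stereo parameters from the Z parameters of one frame.

    param_set is assumed to be a list of (type, subband, value) tuples such as
    ("ILD", 3, 1.7); this layout is illustrative only.  The three branches
    mirror the three preset dimension reduction rules described in the text.
    """
    params = list(param_set)
    if allowed_types is not None:        # rule 1: keep only preset parameter types
        params = [p for p in params if p[0] in allowed_types]
    if subband_stride > 1:               # rule 3: reduce frequency-domain resolution
        params = [p for p in params if p[1] % subband_stride == 0]
    if max_count is not None:            # rule 2: keep a preset quantity of parameters
        params = params[:max_count]
    return params
```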


Based on the first aspect, optionally, the following method may be further used to improve the compression efficiency of the multichannel communications system. When detecting that the Nth-frame audio signals include the speech signal, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and encodes the Nth-frame stereo parameter set. Alternatively, when detecting that the Nth-frame audio signals do not include the speech signal, if the Nth-frame audio signals satisfy the preset speech frame encoding condition, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on the first stereo parameter set generation manner, and encodes the Nth-frame stereo parameter set; or, if determining that the Nth-frame audio signals do not satisfy the preset speech frame encoding condition, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner, and encodes at least one stereo parameter in the Nth-frame stereo parameter set when the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, or does not encode the stereo parameter set when the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition. The first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions: a quantity of types of stereo parameters included in a stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of types of stereo parameters included in a stereo parameter set stipulated in the second stereo parameter set generation manner; a quantity of stereo parameters included in a stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of stereo parameters included in a stereo parameter set stipulated in the second stereo parameter set generation manner; time-domain resolution of a stereo parameter stipulated in the first stereo parameter set generation manner is not lower than time-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner; or frequency-domain resolution of a stereo parameter stipulated in the first stereo parameter set generation manner is not lower than frequency-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner.


Based on the first aspect, optionally, when the Nth-frame downmixed signal includes the speech signal, the encoder encodes the Nth-frame stereo parameter set according to a first encoding manner; when the Nth-frame downmixed signal satisfies the speech frame encoding condition, the encoder encodes at least one stereo parameter in the Nth-frame stereo parameter set according to the first encoding manner; or when the Nth-frame downmixed signal does not satisfy the speech frame encoding condition, the encoder encodes the at least one stereo parameter in the Nth-frame stereo parameter set according to a second encoding manner, where an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner.


For example, the Nth-frame stereo parameter set includes an inter-channel phase difference (IPD) and an inter-channel time difference (ITD). IPD quantization precision stipulated in the first encoding manner is not lower than IPD quantization precision stipulated in the second encoding manner, and ITD quantization precision stipulated in the first encoding manner is not lower than ITD quantization precision stipulated in the second encoding manner.
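To make the difference in quantization precision concrete, a minimal sketch with a uniform scalar quantizer follows; the step sizes are illustrative only and are not taken from any codec specification:

```python
def quantize(value, step):
    """Uniform scalar quantizer; a smaller step means higher quantization precision."""
    return round(value / step) * step


# Illustrative step sizes only; a real codec defines its own quantization tables.
FIRST_MANNER_STEP = {"IPD": 0.05, "ITD": 1.0}   # finer steps (first encoding manner)
SECOND_MANNER_STEP = {"IPD": 0.20, "ITD": 4.0}  # coarser steps (second encoding manner)

ipd_fine = quantize(0.37, FIRST_MANNER_STEP["IPD"])     # ≈ 0.35
ipd_coarse = quantize(0.37, SECOND_MANNER_STEP["IPD"])  # ≈ 0.40
```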


Based on the first aspect, optionally, if the at least one stereo parameter in the Nth-frame stereo parameter set includes an inter-channel level difference (ILD), the preset stereo parameter encoding condition includes DL≥D0, where DL represents a degree by which the ILD deviates from a first standard, the first standard is determined based on a predetermined second algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0; if the at least one stereo parameter in the Nth-frame stereo parameter set includes an ITD, the preset stereo parameter encoding condition includes DT≥D1, where DT represents a degree by which the ITD deviates from a second standard, the second standard is determined based on a predetermined third algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0; or if the at least one stereo parameter in the Nth-frame stereo parameter set includes an IPD, the preset stereo parameter encoding condition includes DP≥D2, where DP represents a degree by which the IPD deviates from a third standard, the third standard is determined based on a predetermined fourth algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


The second algorithm, the third algorithm, and the fourth algorithm need to be preset according to an actual situation.


Optionally, DL, DT, and DP respectively satisfy the following expressions:








D_L = \sum_{m=0}^{M-1}\left(\mathrm{ILD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m)\right);

D_T = \mathrm{ITD}-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]}; and

D_P = \sum_{m=0}^{M-1}\left(\mathrm{IPD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m)\right),

where ILD(m) is a level difference generated when the Nth-frame audio signals are respectively transmitted on the two channels in an mth sub frequency band, M is a total quantity of sub frequency bands occupied for transmitting the Nth-frame audio signals, \frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m) is an average value of ILDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, T is a positive integer greater than 0, ILD[−t](m) is a level difference generated when tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, the ITD is a time difference generated when the Nth-frame audio signals are respectively transmitted on the two channels, \frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]} is an average value of ITDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, ITD[−t] is a time difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels, IPD(m) is a phase difference generated when some of the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, \frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m) is an average value of IPDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, and IPD[−t](m) is a phase difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band.
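As a rough illustration of how these deviations might be computed and compared against the thresholds D0, D1, and D2, a Python sketch follows; the threshold values are arbitrary illustrative numbers, and DP is computed analogously to DL:

```python
def ild_deviation(ild_current, ild_history):
    """D_L: deviation of the current ILD from the mean ILD of the preceding
    T frames, accumulated over the M sub frequency bands (per the formula above)."""
    T = len(ild_history)
    M = len(ild_current)
    return sum(ild_current[m] - sum(frame[m] for frame in ild_history) / T
               for m in range(M))


def itd_deviation(itd_current, itd_history):
    """D_T: deviation of the current ITD from the mean ITD of the preceding T frames."""
    return itd_current - sum(itd_history) / len(itd_history)


def stereo_params_need_encoding(d_l, d_t, d_p, d0=3.0, d1=2.0, d2=0.5):
    """Preset stereo parameter encoding condition: encode when any deviation
    reaches its threshold.  D0, D1, and D2 here are illustrative values only."""
    return d_l >= d0 or d_t >= d1 or d_p >= d2
```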


According to a second aspect, a multichannel audio signal processing method is provided, including receiving, by a decoder, a bitstream, where the bitstream includes at least two frames, the at least two frames include at least one first-type frame and at least one second-type frame, the first-type frame includes a downmixed signal, and the second-type frame does not include a downmixed signal, and for an Nth-frame bitstream, where N is a positive integer greater than 1, decoding, by the decoder, the Nth-frame bitstream if the Nth-frame bitstream is the first-type frame to obtain an Nth-frame downmixed signal, or if the Nth-frame bitstream is the second-type frame, determining, by the decoder according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, and obtaining the Nth-frame downmixed signal according to the m-frame downmixed signals based on a predetermined first algorithm, where m is a positive integer greater than 0, and the Nth-frame downmixed signal is obtained by an encoder by mixing Nth-frame audio signals on two of multiple channels based on a predetermined second algorithm.


The bitstream received by the decoder includes the first-type frame and the second-type frame, the first-type frame includes the downmixed signal, and the second-type frame does not include the downmixed signal. That is, the encoder does not encode each frame of downmixed signal. Therefore, discontinuous transmission on the downmixed signal is implemented, and downmixed signal compression efficiency of a multichannel audio communications system is improved.


It should be noted that in embodiments of the present disclosure, the first-frame bitstream is the first-type frame. Further, to restore the obtained downmixed signal to audio signals on the two channels after the first-frame bitstream is decoded, the first-frame bitstream further needs to include a stereo parameter set. Further, because the first-type frame includes the downmixed signal and the second-type frame does not include the downmixed signal, a size of the first-type frame is greater than a size of the second-type frame. The decoder may determine, according to a size of the Nth-frame bitstream, whether the Nth-frame bitstream is the first-type frame or the second-type frame. In addition, a flag bit may be further encapsulated in the Nth-frame bitstream. The decoder partially decodes the Nth-frame bitstream, to obtain the flag bit. If the flag bit indicates that the Nth-frame bitstream is the first-type frame, the decoder decodes the Nth-frame bitstream, to obtain the Nth-frame downmixed signal. If the flag bit indicates that the Nth-frame bitstream is the second-type frame, the decoder obtains the Nth-frame downmixed signal according to the predetermined first algorithm.
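As an illustration of how a decoder might tell the two frame types apart, here is a sketch assuming a hypothetical frame layout; the size threshold and the flag-bit position are assumptions, not taken from the text:

```python
# Illustrative frame sizes only; a real codec defines its own frame formats.
FIRST_TYPE_MIN_BYTES = 40   # first-type frame: carries an encoded downmixed signal


def is_first_type_frame(frame_bytes):
    """Decide the frame type either from the frame size or, failing that, from
    a flag bit assumed to sit in the most significant bit of the first byte.
    Both options are described in the text; the exact layout is hypothetical."""
    if len(frame_bytes) >= FIRST_TYPE_MIN_BYTES:
        return True
    return bool(frame_bytes) and bool(frame_bytes[0] & 0x80)
```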


Based on the second aspect, to restore the downmixed signal to the audio signals on the two channels, and ensure communication quality of the audio signals, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes a stereo parameter set, but does not include a downmixed signal, and if the Nth-frame bitstream is the first-type frame, after decoding the Nth-frame bitstream, the decoder obtains both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a predetermined third algorithm, or if the Nth-frame bitstream is the second-type frame, the decoder decodes the Nth-frame bitstream to obtain an Nth-frame stereo parameter set, and obtains the Nth-frame downmixed signal based on the predetermined first algorithm. Then, the decoder restores the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the predetermined third algorithm.


Based on the second aspect, to restore the downmixed signal to the audio signals on the two channels, and ensure communication quality of the audio signals, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes neither a downmixed signal nor a stereo parameter set, and if the Nth-frame bitstream is the first-type frame, the decoder decodes the Nth-frame bitstream to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, and then restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or if the Nth-frame bitstream is the second-type frame, the decoder obtains the Nth-frame downmixed signal based on the predetermined first algorithm, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, and then restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, where k is a positive integer greater than 0.
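Purely as an illustration of the case in which the bitstream carries no stereo parameter set, the following sketch derives the Nth-frame set from the k preceding sets; the averaging rule and the data layout are assumptions, not the predetermined fourth algorithm itself:

```python
def derive_stereo_parameter_set(history, k=3):
    """Derive the Nth-frame stereo parameter set when the bitstream carries none.

    history is assumed to hold the previously decoded parameter sets, each a
    list of numbers of equal length.  Averaging the k most recent sets is only
    one possible 'predetermined fourth algorithm'; simply repeating the most
    recent set would be another.
    """
    recent = history[-k:]
    length = len(recent[0])
    return [sum(ps[i] for ps in recent) / len(recent) for i in range(length)]
```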


Based on the second aspect, to restore the downmixed signal to the audio signals on the two channels, and ensure communication quality of the audio signals, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame, and if the Nth-frame bitstream is the first-type frame, the decoder decodes the Nth-frame bitstream to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or if the decoder determines that the Nth-frame bitstream is the second-type frame, the following two cases are included, when the Nth-frame bitstream is the third-type frame, the decoder decodes the Nth-frame bitstream, to obtain an Nth-frame stereo parameter set, obtains the Nth-frame downmixed signal based on the predetermined first algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or when the Nth-frame bitstream is the fourth-type frame, the decoder determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where k is a positive integer greater than 0, obtains the Nth-frame downmixed signal based on the predetermined first algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.


Based on the second aspect, to restore the downmixed signal to the audio signals on the two channels, and ensure communication quality of the audio signals, optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, and the second-type frame includes neither a downmixed signal nor a stereo parameter set, and if the decoder determines that the Nth-frame bitstream is the first-type frame, the following two cases are included, when the Nth-frame bitstream is the fifth-type frame, the decoder decodes the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or when the Nth-frame bitstream is the sixth-type frame, the decoder decodes the Nth-frame bitstream to obtain the Nth-frame downmixed signal, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or if the Nth-frame bitstream is the second-type frame, the decoder obtains the Nth-frame downmixed signal based on the predetermined first algorithm, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.


Based on the second aspect, to restore the downmixed signal to the audio signals on the two channels, and ensure communication quality of the audio signals, optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame, and if the decoder determines that the Nth-frame bitstream is the first-type frame, the following two cases are included when the Nth-frame bitstream is the fifth-type frame, after decoding the Nth-frame bitstream, the decoder obtains both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or when the Nth-frame bitstream is the sixth-type frame, after decoding the Nth-frame bitstream, the decoder obtains the Nth-frame downmixed signal, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or if the decoder determines that the Nth-frame bitstream is the second-type frame, the following two cases are included, when the Nth-frame bitstream is the third-type frame, the decoder decodes the Nth-frame bitstream, to obtain an Nth-frame stereo parameter set, obtains the Nth-frame downmixed signal based on the predetermined first algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm, or when the Nth-frame bitstream is the fourth-type frame, the decoder determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where k is a positive integer greater than 0, obtains the Nth-frame downmixed signal based on the predetermined first algorithm, and restores the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.


According to a third aspect, an encoder is provided, including a signal detection unit and a signal encoding unit. The signal detection unit is configured to detect whether an Nth-frame downmixed signal includes a speech signal, where the Nth-frame downmixed signal is obtained after Nth-frame audio signals on two of multiple channels are mixed based on a predetermined first algorithm, and N is a positive integer greater than 0. The signal encoding unit is configured to encode the Nth-frame downmixed signal when the signal detection unit detects that the Nth-frame downmixed signal includes the speech signal, or, when the signal detection unit detects that the Nth-frame downmixed signal does not include the speech signal, encode the Nth-frame downmixed signal if the signal detection unit determines that the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, or skip encoding the Nth-frame downmixed signal if the signal detection unit determines that the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition.


Based on the third aspect, optionally, the signal encoding unit includes a first signal encoding unit and a second signal encoding unit. When the signal detection unit detects that the Nth-frame downmixed signal includes the speech signal, the signal detection unit instructs the first signal encoding unit to encode the Nth-frame downmixed signal. Alternatively, if determining that the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, the signal detection unit instructs the first signal encoding unit to encode the Nth-frame downmixed signal. Further, the first signal encoding unit encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate. If the Nth-frame downmixed signal does not satisfy a preset speech frame encoding condition, but satisfies a preset SID frame encoding condition, the signal detection unit instructs the second signal encoding unit to encode the Nth-frame downmixed signal. Further, the second signal encoding unit encodes the Nth-frame downmixed signal according to a preset SID encoding rate, where the SID encoding rate is not greater than the speech frame encoding rate.


Based on the third aspect, optionally, the encoder further includes a parameter generation unit, a parameter encoding unit, and a parameter detection unit. The parameter generation unit is configured to obtain an Nth-frame stereo parameter set according to the Nth-frame audio signals, where the Nth-frame stereo parameter set includes Z stereo parameters, the Z stereo parameters include a parameter that is used when the encoder mixes the Nth-frame audio signals based on the predetermined first algorithm, and Z is a positive integer greater than 0. The parameter encoding unit is configured to encode the Nth-frame stereo parameter set when the signal detection unit detects that the Nth-frame downmixed signal includes the speech signal, or when the signal detection unit detects that the Nth-frame downmixed signal does not include the speech signal, encode at least one stereo parameter in the Nth-frame stereo parameter set if the parameter detection unit determines that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, or skip encoding the stereo parameter set if the parameter detection unit determines that the Nth-frame stereo parameter set does not satisfy a preset stereo parameter encoding condition.


Based on the third aspect, optionally, the parameter encoding unit is configured to obtain X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule, and encode the X target stereo parameters, where X is a positive integer greater than 0 and less than or equal to Z.


Based on the third aspect, optionally, the parameter generation unit includes a first parameter generation unit and a second parameter generation unit. When the signal detection unit detects that the Nth-frame audio signals include the speech signal, or when the signal detection unit detects that the Nth-frame audio signals do not include the speech signal but the Nth-frame audio signals satisfy the preset speech frame encoding condition, the signal detection unit instructs the first parameter generation unit to generate an Nth-frame stereo parameter set, the first parameter generation unit obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and the parameter encoding unit encodes the Nth-frame stereo parameter set; when the parameter encoding unit includes a first parameter encoding unit and a second parameter encoding unit, the first parameter encoding unit encodes the Nth-frame stereo parameter set, where an encoding manner stipulated by the first parameter encoding unit is a first encoding manner, an encoding manner stipulated by the second parameter encoding unit is a second encoding manner, an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner. When the signal detection unit detects that the Nth-frame audio signals do not include the speech signal, the second parameter generation unit obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner; when the parameter detection unit determines that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, the parameter encoding unit encodes at least one stereo parameter in the Nth-frame stereo parameter set, and when the parameter encoding unit includes the first parameter encoding unit and the second parameter encoding unit, the second parameter encoding unit encodes the at least one stereo parameter in the Nth-frame stereo parameter set; or the parameter encoding unit skips encoding the stereo parameter set when the parameter detection unit determines that the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition. The first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions: a quantity of types of stereo parameters included in a stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of types of stereo parameters included in a stereo parameter set stipulated in the second stereo parameter set generation manner; a quantity of stereo parameters included in a stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of stereo parameters included in a stereo parameter set stipulated in the second stereo parameter set generation manner; time-domain resolution of a stereo parameter stipulated in the first stereo parameter set generation manner is not lower than time-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner; or frequency-domain resolution of a stereo parameter stipulated in the first stereo parameter set generation manner is not lower than frequency-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner.


Based on the third aspect, optionally, the parameter encoding unit includes a first parameter encoding unit and a second parameter encoding unit. Further, the first parameter encoding unit is configured to encode the Nth-frame stereo parameter set according to a first encoding manner when the Nth-frame downmixed signal includes the speech signal and when the Nth-frame downmixed signal does not include the speech signal, but satisfies the speech frame encoding condition, and the second parameter encoding unit is configured to encode at least one stereo parameter in the Nth-frame stereo parameter set according to a second encoding manner when the Nth-frame downmixed signal does not satisfy the speech frame encoding condition, where an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner.


Based on the third aspect, optionally, if the at least one stereo parameter in the Nth-frame stereo parameter set includes an ILD, the preset stereo parameter encoding condition includes DL≥D0, where DL represents a degree by which the ILD deviates from a first standard, the first standard is determined based on a predetermined second algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0; if the at least one stereo parameter in the Nth-frame stereo parameter set includes an ITD, the preset stereo parameter encoding condition includes DT≥D1, where DT represents a degree by which the ITD deviates from a second standard, the second standard is determined based on a predetermined third algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0; or if the at least one stereo parameter in the Nth-frame stereo parameter set includes an IPD, the preset stereo parameter encoding condition includes DP≥D2, where DP represents a degree by which the IPD deviates from a third standard, the third standard is determined based on a predetermined fourth algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


Based on the third aspect, optionally, DL, DT, and DP respectively satisfy the following expressions:








D_L = \sum_{m=0}^{M-1}\left(\mathrm{ILD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m)\right);

D_T = \mathrm{ITD}-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]}; and

D_P = \sum_{m=0}^{M-1}\left(\mathrm{IPD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m)\right),

where ILD(m) is a level difference generated when the Nth-frame audio signals are respectively transmitted on the two channels in an mth sub frequency band, M is a total quantity of sub frequency bands occupied for transmitting the Nth-frame audio signals, \frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m) is an average value of ILDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, T is a positive integer greater than 0, ILD[−t](m) is a level difference generated when tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, the ITD is a time difference generated when the Nth-frame audio signals are respectively transmitted on the two channels, \frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]} is an average value of ITDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, ITD[−t] is a time difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels, IPD(m) is a phase difference generated when some of the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, \frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m) is an average value of IPDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, and IPD[−t](m) is a phase difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band.


According to a fourth aspect, a decoder is provided, including a receiving unit and a decoding unit. The receiving unit is configured to receive a bitstream, where the bitstream includes at least two frames, the at least two frames include at least one first-type frame and at least one second-type frame, the first-type frame includes a downmixed signal, and the second-type frame does not include a downmixed signal, and the decoding unit is configured to for an Nth-frame bitstream, where N is a positive integer greater than 1, decode the Nth-frame bitstream if the Nth-frame bitstream is the first-type frame, to obtain an Nth-frame downmixed signal, or if the Nth-frame bitstream is the second-type frame, determine, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding an Nth-frame downmixed signal, and obtain the Nth-frame downmixed signal according to the m-frame downmixed signals based on a predetermined first algorithm, where m is a positive integer greater than 0, and the Nth-frame downmixed signal is obtained by an encoder by mixing Nth-frame audio signals on two of multiple channels based on a predetermined second algorithm.


Based on the fourth aspect, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes a stereo parameter set, but does not include a downmixed signal, the decoding unit is further configured to if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, decode the Nth-frame bitstream, to obtain an Nth-frame stereo parameter set, where at least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and a signal restoration unit is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Based on the fourth aspect, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes neither a downmixed signal nor a stereo parameter set, the decoding unit is further configured to if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where k is a positive integer greater than 0, and at least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and a signal restoration unit is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Based on the fourth aspect, optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame, the decoding unit is further configured to, if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, when the Nth-frame bitstream is the third-type frame, decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the fourth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where k is a positive integer greater than 0, and at least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and a signal restoration unit is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Based on the fourth aspect, optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, and the second-type frame includes neither a downmixed signal nor a stereo parameter set, the decoding unit is further configured to , if the Nth-frame bitstream is the first-type frame, when the Nth-frame bitstream is the fifth-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the sixth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, or if the Nth-frame bitstream is the second-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where at least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and k is a positive integer greater than 0, and a signal restoration unit is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Based on the fourth aspect, optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame, the decoding unit is further configured to, if the Nth-frame bitstream is the first-type frame, when the Nth-frame bitstream is the fifth-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the sixth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, or the decoding unit is further configured to, if the Nth-frame bitstream is the second-type frame, when the Nth-frame bitstream is the third-type frame, decode the Nth-frame bitstream, to obtain an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the fourth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where at least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and k is a positive integer greater than 0, and the decoder further includes a signal restoration unit, where the signal restoration unit is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


According to a fifth aspect, an encoding and decoding system is provided, including any encoder provided in the third aspect and any decoder provided in the fourth aspect.


According to a sixth aspect, an embodiment of the present disclosure further provides a terminal device. The terminal device includes a processor and a memory. The memory is configured to store a software program, and the processor is configured to read the software program stored in the memory and implement the method provided in the first aspect or any implementation of the first aspect.


According to a seventh aspect, an embodiment of the present disclosure further provides a computer storage medium. The storage medium may be non-volatile. That is, content is not lost after power-off. The storage medium stores a software program, and when the software program is read and executed by one or more processors, the method provided in the first aspect or any implementation of the first aspect can be implemented.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic flowchart of a multichannel audio signal processing method according to Embodiment 1 of the present disclosure;



FIG. 2A, FIG. 2B, and FIG. 2C are a schematic flowchart of a multichannel audio signal processing method according to Embodiment 2 of the present disclosure;



FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D are schematic diagrams of an encoder according to an embodiment of the present disclosure;



FIG. 4 is a schematic diagram of a decoder according to an embodiment of the present disclosure; and



FIG. 5 is a schematic diagram of an encoding and decoding system according to an embodiment of the present disclosure.





DESCRIPTION OF EMBODIMENTS

To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes the present disclosure in detail with reference to the accompanying drawings.


It should be understood that, in an audio encoding and decoding technology, an audio signal is encoded or decoded in a unit of frame. Further, an Nth-frame audio signal is an Nth audio frame. When the Nth-frame audio signal includes a speech signal, the Nth audio frame is a speech frame. When the Nth-frame audio signal does not include a speech signal, but includes a background noise signal, the Nth audio frame is a noise frame. Herein, N is a positive integer greater than 0.


In addition, in a mono communications system, when a discontinuous encoding manner is used, encoding is performed once every several noise frames to obtain a SID frame.


An encoder and a decoder in the embodiments of the present disclosure are packages used to process a multichannel audio signal. The packages may be installed on a device supporting multichannel audio signal processing, such as a terminal (for example, a mobile phone, a notebook computer, or a tablet computer) or a server, such that the device such as the terminal or the server has a function of processing the multichannel audio signal in the embodiments of the present disclosure.


In the embodiments of the present disclosure, because an audio signal can be encoded using a discontinuous encoding mechanism in a multichannel communications system, audio signal compression efficiency is greatly improved.


The following describes in detail a multichannel audio signal processing method in the embodiments of the present disclosure using an Nth-frame downmixed signal as an example, and N is a positive integer greater than 0. It is assumed that the Nth-frame downmixed signal is obtained after Nth-frame audio signals on two of multiple channels are mixed.


When the multiple channels are two channels, and the two channels are respectively a first channel and a second channel, the two of the multiple channels are the first channel and the second channel, and an Nth-frame downmixed signal is obtained by mixing an Nth-frame audio signal on the first channel and an Nth-frame audio signal on the second channel. When the multiple channels are at least three channels, a downmixed signal is obtained by mixing audio signals on two paired channels in the multiple channels. Further, three channels are used as an example, and the three channels are a first channel, a second channel, and a third channel. Assuming that only the first channel and the second channel are paired according to a specified rule, the two of the multiple channels are the first channel and the second channel, and an Nth-frame downmixed signal is obtained after downmixing is performed on an Nth-frame audio signal on the first channel and an Nth-frame audio signal on the second channel. Assuming that, in the three channels, the first channel and the second channel are paired and the second channel and the third channel are paired, the two of the multiple channels may be the first channel and the second channel, or may be the second channel and the third channel.
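As a small illustration of the channel pairing described above, the following sketch assumes that the pairing rule is given as a fixed list of channel-index pairs; this representation is illustrative only:

```python
def paired_channels(num_channels, pairing_rule):
    """Return the channel pairs whose audio signals are mixed into downmixed
    signals.  pairing_rule is assumed to be a fixed list of index pairs set by
    the system configuration."""
    return [(i, j) for i, j in pairing_rule if i < num_channels and j < num_channels]


# Three-channel example from the text: the first and second channels are paired,
# and the second and third channels are paired.
pairs = paired_channels(3, [(0, 1), (1, 2)])   # -> [(0, 1), (1, 2)]
```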


As shown in FIG. 1, a multichannel audio signal processing method in Embodiment 1 of the present disclosure includes the following steps.


Step 100: An encoder generates an Nth-frame stereo parameter set according to Nth-frame audio signals on two of multiple channels, where the stereo parameter set includes Z stereo parameters.


Further, the Z stereo parameters include a parameter that is used when the encoder mixes the Nth-frame audio signals based on a predetermined first algorithm, and Z is a positive integer greater than 0. It should be understood that the predetermined first algorithm is a downmixed signal generation algorithm preset in the encoder.


It should be noted that stereo parameters included in the Nth-frame stereo parameter set are determined using a preset stereo parameter generation algorithm. Assuming that one of the two channels is a left channel, and the other is a right channel, the preset stereo parameter generation algorithm is as follows, and a stereo parameter obtained according to the Nth-frame audio signals is an inter-channel level difference (ILD):








$$PL(i) = \mathrm{Re}\big[L(i)\big]^2 + \mathrm{Im}\big[L(i)\big]^2, \quad i = 1, 2, \ldots, \tfrac{N}{2}-2,$$

$$PR(i) = \mathrm{Re}\big[R(i)\big]^2 + \mathrm{Im}\big[R(i)\big]^2, \quad i = 1, 2, \ldots, \tfrac{N}{2}-2,$$

$$EL(m) = \sum_{i=bl(m)}^{bh(m)} PL(i), \quad m = 0, 1, \ldots, M-1,$$

$$ER(m) = \sum_{i=bl(m)}^{bh(m)} PR(i), \quad m = 0, 1, \ldots, M-1, \text{ and}$$

$$ILD(m) = 10 \cdot \log\left(\frac{EL(m)}{ER(m)}\right), \quad m = 0, 1, \ldots, M-1,$$
where L(i) is a discrete Fourier transform (DFT) coefficient of an Nth-frame audio signal on the left channel in an ith frequency bin, R(i) is a DFT coefficient of an Nth-frame audio signal on the right channel in the ith frequency bin, ReL(i) is a real part of L(i), ImL(i) is an imaginary part of L(i), ReR(i) is a real part of R(i), ImR(i) is an imaginary part of R(i), PL(i) is an energy spectrum of the Nth-frame audio signal on the left channel in the ith frequency bin, PR(i) is an energy spectrum of the Nth-frame audio signal on the right channel in the ith frequency bin, EL(m) is energy of an Nth-frame audio signal in an mth sub frequency band of the left channel, ER(m) is energy of an Nth-frame audio signal in an mth sub frequency band of the right channel, and a total quantity of sub frequency bands for transmitting the Nth-frame audio signals is M.


In the stereo parameter generation algorithm, a case in which the Nth-frame audio signal is a direct component or a Nyquist component, respectively, in the frequency bin i = 0 or i = N/2 − 1 is not considered.
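For illustration only, the following Python sketch computes per-sub-band ILDs in the manner the foregoing expressions describe; the band-edge arrays bl and bh, the frame length, and the use of a base-10 logarithm are assumptions rather than values fixed by this disclosure.

import numpy as np

def ild_per_subband(L, R, bl, bh, eps=1e-12):
    # L, R: complex DFT coefficients of the Nth-frame audio signals on the left/right channel.
    # bl, bh: lower/upper frequency-bin index of each of the M sub frequency bands.
    PL = L.real ** 2 + L.imag ** 2                 # energy spectrum of the left channel
    PR = R.real ** 2 + R.imag ** 2                 # energy spectrum of the right channel
    ild = np.empty(len(bl))
    for m, (lo, hi) in enumerate(zip(bl, bh)):
        EL = PL[lo:hi + 1].sum()                   # EL(m): left-channel energy in sub band m
        ER = PR[lo:hi + 1].sum()                   # ER(m): right-channel energy in sub band m
        ild[m] = 10.0 * np.log10((EL + eps) / (ER + eps))
    return ild

# Hypothetical usage with 10 sub bands over a 640-point frame (values are assumptions).
rng = np.random.default_rng(0)
half = 320
L = rng.standard_normal(half) + 1j * rng.standard_normal(half)
R = rng.standard_normal(half) + 1j * rng.standard_normal(half)
edges = np.linspace(1, half - 2, 11, dtype=int)
print(ild_per_subband(L, R, edges[:-1], edges[1:] - 1))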


When the preset stereo parameter generation algorithm further includes an algorithm for calculating other stereo parameters such as an inter-channel time difference (ITD), an inter-channel phase difference (IPD), and inter-channel coherence (IC), the encoder can further obtain the stereo parameters such as the ITD, the IPD, and the IC according to the audio signal based on the preset stereo parameter generation algorithm.


It should be understood that the Nth-frame stereo parameter set includes at least one stereo parameter. For example, the IPD, the ITD, the ILD, and the IC are obtained according to the Nth-frame audio signals on the two channels based on the preset stereo parameter generation algorithm, and the IPD, the ITD, the ILD, and the IC form the Nth-frame stereo parameter set.


Step 101: The encoder mixes the Nth-frame audio signals on the two channels into an Nth-frame downmixed signal according to at least one stereo parameter in the Nth-frame stereo parameter set based on a predetermined first algorithm.


For example, the Nth-frame stereo parameter set includes the ITD, the ILD, the IPD, and the IC. The Nth-frame downmixed signal is obtained according to the ILD and the IPD based on the predetermined first algorithm. Further, the Nth-frame downmixed signal DMX(k) satisfies the following expression in a kth frequency bin:








$$DMX(k) = \frac{\big|L(k)\big| + \big|R(k)\big|}{2} \cdot e^{\,j\left(\angle L(k) - \frac{IPD(k)}{1 + 10^{ILD(k)/2}}\right)}, \quad k = 0, 1, \ldots, N/2,$$
where DMX(k) represents the Nth-frame downmixed signal in the kth frequency bin, |L(k)| represents an amplitude of an Nth-frame audio signal on a left channel in a Kth pair of channels in the kth frequency bin, |R(k)| represents an amplitude of an Nth-frame audio signal on a right channel in the Kth pair of channels in the kth frequency bin, ∠L(k) represents a phase angle of the Nth-frame audio signal on the left channel in the kth frequency bin, ILD(k) represents an ILD of the Nth-frame audio signals in the kth frequency bin, and IPD(k) represents an IPD of the Nth-frame audio signals in the kth frequency bin.
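As a rough illustration only, the following Python sketch forms a downmixed spectrum of the kind described by the expression above; the reading of the exponent (the IPD term scaled by 1/(1 + 10^(ILD(k)/2))) and the input arrays are assumptions, not a definitive statement of the algorithm.

import numpy as np

def downmix_spectrum(Lk, Rk, ild_k, ipd_k):
    # Lk, Rk: complex DFT coefficients of the left/right channel per frequency bin k.
    # ild_k, ipd_k: per-bin ILD and IPD values used for the downmix.
    amplitude = (np.abs(Lk) + np.abs(Rk)) / 2.0                      # average of |L(k)| and |R(k)|
    phase = np.angle(Lk) - ipd_k / (1.0 + 10.0 ** (ild_k / 2.0))     # assumed reading of the exponent
    return amplitude * np.exp(1j * phase)                            # DMX(k)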


It should be noted that this embodiment of the present disclosure is not limited to the foregoing algorithm for obtaining the downmixed signal; another algorithm for obtaining the downmixed signal may also be used.


In Embodiment 1 of the present disclosure, the Nth-frame stereo parameter set is encoded such that a decoder can restore the Nth-frame downmixed signal. Optionally, to improve compression efficiency during encoding, the encoder encodes a stereo parameter used for obtaining the Nth-frame downmixed signal in the Nth-frame stereo parameter set. For example, the generated Nth-frame stereo parameter set includes the ITD, the ILD, the IPD, and the IC. If the encoder mixes the Nth-frame audio signals on the two channels into the Nth-frame downmixed signal according to only the ILD and the IPD in the Nth-frame stereo parameter set based on the predetermined first algorithm, to improve the compression efficiency, the encoder may encode only the ILD and the IPD in the Nth-frame stereo parameter set.


Step 102: The encoder detects whether the Nth-frame downmixed signal includes a speech signal, and if the Nth-frame downmixed signal includes the speech signal, performs step 103, or if the Nth-frame downmixed signal does not include the speech signal, performs step 104.


For ease of detecting, by the encoder, whether the Nth-frame downmixed signal includes the speech signal, optionally, the encoder directly detects, by means of voice activity detection (VAD), whether the Nth-frame downmixed signal includes the speech signal.


Optionally, a method for indirectly detecting, by the encoder, whether the Nth-frame downmixed signal includes the speech signal includes that the encoder directly detects, by means of VAD, whether the Nth-frame audio signals include the speech signal. Further, if detecting that an audio signal on one of the two channels includes the speech signal, the encoder determines that a downmixed signal obtained by mixing audio signals on the two channels includes the speech signal. Only when neither of the audio signals on the two channels includes the speech signal does the encoder determine that the downmixed signal obtained by mixing the audio signals on the two channels does not include the speech signal. It should be noted that in such an indirect detection manner, a sequence between step 102 and step 100 or step 101 is not limited, provided that step 100 precedes step 101.
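A minimal sketch of this indirect detection rule, assuming a per-channel VAD decision is already available (the VAD itself is outside the scope of this illustration):

def downmix_contains_speech(vad_left, vad_right):
    # The downmixed frame is treated as containing speech if the VAD flags speech on either
    # channel; it is treated as containing no speech only when neither channel contains speech.
    return bool(vad_left) or bool(vad_right)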


Step 103: The encoder encodes the Nth-frame downmixed signal, and performs step 107.


The encoder encodes the Nth-frame downmixed signal to obtain an Nth-frame bitstream.


Because discontinuous encoding is performed on the downmixed signal in Embodiment 1 of the present disclosure, a bitstream includes two frame types: a first-type frame and a second-type frame. The first-type frame includes a downmixed signal, and the second-type frame does not include a downmixed signal. The Nth-frame bitstream obtained in step 103 is the first-type frame.


In step 103, because the Nth-frame downmixed signal includes the speech signal, optionally, the encoder encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate. The preset speech frame encoding rate may be set to 13.2 kilobits per second (kbps).


In addition, optionally, if encoding the Nth-frame downmixed signal, the encoder encodes the Nth-frame stereo parameter set.


Step 104: The encoder determines whether the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, and if the Nth-frame downmixed signal satisfies the preset audio frame encoding condition, performs step 105, or if the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition, performs step 106.


The preset audio frame encoding condition is a condition that is preconfigured in the encoder and that is used to determine whether to encode the Nth-frame downmixed signal.


It should be noted that for a first-frame downmixed signal, if the first-frame downmixed signal does not include the speech signal, the first-frame downmixed signal satisfies the preset audio frame encoding condition. That is, the first-frame downmixed signal is encoded regardless of whether the first-frame downmixed signal includes the speech signal.


Step 105: The encoder encodes the Nth-frame downmixed signal, and performs step 107.


Further, the Nth-frame bitstream obtained in step 105 is also the first-type frame.


It should be noted that, optionally, if encoding the Nth-frame downmixed signal, the encoder encodes the Nth-frame stereo parameter set.


Optionally, for ease of simplifying an implementation of encoding the downmixed signal, in Embodiment 1 of the present disclosure, the Nth-frame downmixed signal is encoded in a same manner in step 103 and step 105.


Optionally, because the Nth-frame downmixed signal in step 105 does not include the speech signal, when the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, the encoder encodes the Nth-frame downmixed signal according to the preset speech frame encoding rate. Alternatively, when the Nth-frame downmixed signal does not satisfy a preset speech frame encoding condition, but satisfies a preset SID encoding condition, the encoder encodes the Nth-frame downmixed signal according to a preset SID encoding rate. The preset SID encoding rate may be set to 2.8 kbps.


It should be noted that when the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition, but satisfies the preset SID encoding condition, the encoder encodes the Nth-frame downmixed signal according to an SID encoding manner. The SID encoding manner stipulates that an encoding rate is the preset SID encoding rate, and stipulates an algorithm used for the encoding and a parameter used for the encoding.


The preset speech frame encoding condition may be that duration between the Nth-frame downmixed signal and an Mth-frame downmixed signal is not greater than preset duration. The Mth-frame downmixed signal includes the speech signal, and the Mth-frame downmixed signal is a frame of downmixed signal that includes the speech signal and that is closest to the Nth-frame downmixed signal. The preset SID encoding condition may be that an odd-numbered frame is encoded. When N of the Nth-frame downmixed signal is an odd number, the encoder determines that the Nth-frame downmixed signal satisfies the preset SID encoding condition.
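Purely as an illustration of the decision logic in this paragraph, the following Python sketch combines the speech detection result, the example hangover condition, and the odd-frame SID condition; the hangover length and the helper names are assumptions.

def choose_encoding_mode(n, contains_speech, last_speech_frame, hangover_frames=8):
    # Returns 'SPEECH' (encode at the preset speech frame encoding rate, e.g. 13.2 kbps),
    # 'SID' (encode at the preset SID encoding rate, e.g. 2.8 kbps), or 'NO_DATA' (skip encoding).
    if contains_speech:
        return 'SPEECH'
    # Preset speech frame encoding condition: the distance to the closest frame that
    # contains speech is not greater than the preset duration (hangover_frames is assumed).
    if last_speech_frame is not None and (n - last_speech_frame) <= hangover_frames:
        return 'SPEECH'
    # Example preset SID encoding condition from the text: odd-numbered frames are encoded.
    if n % 2 == 1:
        return 'SID'
    return 'NO_DATA'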


Step 106: The encoder skips encoding the Nth-frame downmixed signal, and performs step 109.


Further, the Nth-frame bitstream obtained in step 106 is the second-type frame.


The encoder determines that the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition. Further, the encoder determines that the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition, and does not satisfy the preset SID encoding condition.


In this embodiment of the present disclosure, the encoder does not encode the Nth-frame downmixed signal. Further, the Nth-frame bitstream does not include the Nth-frame downmixed signal.


When the encoder does not encode the Nth-frame downmixed signal, the encoder may encode the Nth-frame stereo parameter set, or may not encode the Nth-frame stereo parameter set.


In Embodiment 1 of the present disclosure, a description is made using an example in which the encoder does not encode the Nth-frame downmixed signal, but encodes the Nth-frame stereo parameter set. However, optionally, when the encoder does not encode the Nth-frame downmixed signal, the encoder may not encode the Nth-frame stereo parameter set either. Further, when the encoder encodes neither the Nth-frame stereo parameter set nor the Nth-frame downmixed signal, for a manner of obtaining the Nth-frame downmixed signal and the Nth-frame stereo parameter set by the decoder, refer to Embodiment 2 of the present disclosure.


Step 107: The encoder sends an Nth-frame bitstream to a decoder.


In order that the decoder can restore the Nth-frame downmixed signal to the Nth-frame audio signals on the two channels after obtaining, by means of decoding, the Nth-frame downmixed signal, the Nth-frame bitstream includes both the Nth-frame stereo parameter set and the Nth-frame downmixed signal.


Step 108: If the Nth-frame bitstream is a first-type frame, the decoder decodes the Nth-frame bitstream to obtain the Nth-frame downmixed signal and the Nth-frame stereo parameter set, and performs step 111.


It should be noted that, because the first-type frame includes a downmixed signal and the second-type frame does not include a downmixed signal, a size of the first-type frame is greater than a size of the second-type frame. The decoder may determine, according to a size of the Nth-frame bitstream, whether the Nth-frame bitstream is the first-type frame or the second-type frame. In addition, optionally, a flag bit may be further encapsulated in the Nth-frame bitstream. The decoder partially decodes the Nth-frame bitstream to obtain the flag bit, and determines, according to the flag bit, whether the Nth-frame bitstream is the first-type frame or the second-type frame. For example, when the flag bit is 1, it indicates that the Nth-frame bitstream is the first-type frame; when the flag bit is 0, it indicates that the Nth-frame bitstream is the second-type frame.
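The two discrimination options mentioned above can be sketched as follows in Python; the size threshold and the position of the flag bit are illustrative assumptions only.

def classify_by_size(frame_bytes, first_type_min_bytes=10):
    # First-type frames carry the downmixed signal and are therefore larger than
    # second-type frames; the byte threshold here is an assumed value.
    return 'FIRST_TYPE' if len(frame_bytes) >= first_type_min_bytes else 'SECOND_TYPE'

def classify_by_flag(frame_bytes):
    # Flag bit 1 indicates a first-type frame, 0 a second-type frame; reading the flag
    # from the most significant bit of the first byte is an assumption.
    return 'FIRST_TYPE' if (frame_bytes[0] >> 7) & 0x1 else 'SECOND_TYPE'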


In addition, optionally, the decoder determines a decoding manner according to a rate corresponding to the Nth-frame bitstream. For example, if the rate of the Nth-frame bitstream is 17.4 kbps, a rate of a bitstream corresponding to a downmixed signal is 13.2 kbps, and a rate of a bitstream corresponding to a stereo parameter set is 4.2 kbps, the decoder decodes, according to a decoding manner corresponding to 13.2 kbps, the bitstream corresponding to the downmixed signal, and decodes, according to a decoding manner corresponding to 4.2 kbps, the bitstream corresponding to the stereo parameter set.


Alternatively, the decoder determines an encoding manner of the Nth-frame bitstream according to an encoding manner flag bit in the Nth-frame bitstream, and decodes the Nth-frame bitstream according to a decoding manner corresponding to the encoding manner.


Step 109: The encoder sends an Nth-frame bitstream to a decoder, where the Nth-frame bitstream includes the Nth-frame stereo parameter set.


Step 110: If the Nth-frame bitstream is a second-type frame, the decoder decodes the Nth-frame bitstream to obtain the Nth-frame stereo parameter set, determines, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, and obtains the Nth-frame downmixed signal according to the m-frame downmixed signals based on the predetermined first algorithm, where m is a positive integer greater than 0.


Further, an average value of an (N−3)th-frame downmixed signal, an (N−2)th-frame downmixed signal, and an (N−1)th-frame downmixed signal is used as the Nth-frame downmixed signal, or an (N−1)th-frame downmixed signal is directly used as the Nth-frame downmixed signal, or the Nth-frame downmixed signal is estimated according to another algorithm.


In addition, the (N−1)th-frame downmixed signal may be directly used as the Nth-frame downmixed signal, or the Nth-frame downmixed signal is calculated according to the (N−1)th-frame downmixed signal and a preset offset value based on a preset algorithm.
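A minimal sketch of the two estimation options mentioned above (averaging the three preceding frames, or repeating the previous frame with a preset offset); the offset value is an assumption.

import numpy as np

def estimate_missing_downmix(previous_frames, offset=0.0):
    # previous_frames: most recent decoded downmixed frames, oldest first, each a NumPy array.
    if len(previous_frames) >= 3:
        return np.mean(np.stack(previous_frames[-3:]), axis=0)   # average of frames N-3, N-2, N-1
    return previous_frames[-1] + offset                          # repeat frame N-1, optionally offset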


Step 111: The decoder restores the Nth-frame downmixed signal to the Nth-frame audio signals on the two channels according to a target stereo parameter in the Nth-frame stereo parameter set based on a predetermined second algorithm.


It should be understood that the target stereo parameter is at least one stereo parameter in the Nth-frame stereo parameter set.


Further, a process of restoring, by the decoder, the Nth-frame downmixed signal to the Nth-frame audio signals on the two channels is an inverse process of mixing, by the encoder, the Nth-frame audio signals on the two channels into the Nth-frame downmixed signal. Assuming that the encoder obtains the Nth-frame downmixed signal according to the IPD and the ILD in the Nth-frame stereo parameter set, the decoder restores the Nth-frame downmixed signal to Nth-frame signals on the channels in the Kth pair of channels according to the IPD and the ILD in the Nth-frame stereo parameter set. In addition, it should be noted that an algorithm that is preset in the decoder and that is used to restore a downmixed signal may be an inverse algorithm of a downmixed signal generation algorithm in the encoder, or may be an algorithm independent of a downmixed signal generation algorithm in the encoder.


In addition, to improve compression efficiency during encoding in a multichannel communications system, when implementing discontinuous encoding on a downmixed signal, an encoder may further implement discontinuous encoding on a stereo parameter set. An Nth-frame downmixed signal is used as an example below. As shown in FIG. 2A, FIG. 2B, and FIG. 2C, a multichannel audio signal processing method in Embodiment 2 of the present disclosure includes the following steps.


Step 200: An encoder generates an Nth-frame stereo parameter set according to Nth-frame audio signals on two of multiple channels, where the stereo parameter set includes Z stereo parameters.


Further, the Z stereo parameters include a parameter that is used when the encoder mixes the Nth-frame audio signals based on a predetermined first algorithm, and Z is a positive integer greater than 0. It should be understood that the predetermined first algorithm is a downmixed signal generation algorithm preset in the encoder.


It should be noted that stereo parameters included in the Nth-frame stereo parameter set are determined using a preset stereo parameter generation algorithm. Assuming that one of the two channels is a left channel, and the other is a right channel, the preset stereo parameter generation algorithm is as follows, and a stereo parameter obtained according to the Nth-frame audio signals is an ITD:









$$c_n(i) = \sum_{j=0}^{N-1-i} r(j) \cdot l(j+i), \text{ and}$$

$$c_p(i) = \sum_{j=0}^{N-1-i} l(j) \cdot r(j+i),$$




where 0 ≤ i ≤ Tmax, N is a frame length, l(j) represents a time-domain signal frame on the left channel at a moment j, r(j) represents a time-domain signal frame on the right channel at the moment j, and if $\max_{0 \le i \le T_{max}}\big(c_n(i)\big) > \max_{0 \le i \le T_{max}}\big(c_p(i)\big)$, the ITD is an opposite number of an index value corresponding to $\max_{0 \le i \le T_{max}}\big(c_n(i)\big)$; otherwise, the ITD is an opposite number of an index value corresponding to $\max_{0 \le i \le T_{max}}\big(c_p(i)\big)$.





Another algorithm for obtaining the ITD is also applicable to this embodiment of the present disclosure.
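For illustration, the cross-correlation search above can be sketched in Python as follows; the value of Tmax is an assumption, and the sign handling simply follows the wording of the text.

import numpy as np

def estimate_itd(l, r, t_max):
    # l, r: time-domain frames of the left/right channel, each of length N.
    N = len(l)
    lags = range(t_max + 1)
    c_n = np.array([np.sum(r[:N - i] * l[i:N]) for i in lags])   # c_n(i) = sum_j r(j) * l(j+i)
    c_p = np.array([np.sum(l[:N - i] * r[i:N]) for i in lags])   # c_p(i) = sum_j l(j) * r(j+i)
    if c_n.max() > c_p.max():
        return -int(np.argmax(c_n))   # opposite number of the index of the c_n maximum
    return -int(np.argmax(c_p))       # opposite number of the index of the c_p maximum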


If the preset stereo parameter generation algorithm further includes an IPD generation algorithm, an IPD may be further obtained according to that algorithm. Further, an IPD in a bth sub frequency band satisfies the following expression:








$$IPD(b) = \arg\left(\sum_{k=A_{b-1}}^{A_b - 1} L(k) \cdot R^{*}(k)\right), \quad 0 \le b < B,$$




where B is a total quantity of sub frequency bands occupied by an audio signal in a frequency domain, L(k) is the signal of the Nth-frame audio signal on the left channel in a kth frequency bin, and R*(k) is the conjugate of the signal of the Nth-frame audio signal on the right channel in the kth frequency bin.
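As an illustration only, the expression above can be evaluated as follows in Python, assuming the sub-band edges are given as an index array A such that sub band b covers the bins from A[b] to A[b+1]−1 (this edge layout is an assumption):

import numpy as np

def ipd_per_subband(L, R, A):
    # L, R: complex DFT coefficients of the left/right channel per frequency bin.
    # A: sub-band edge indexes; sub band b covers bins A[b] .. A[b+1]-1.
    B = len(A) - 1
    ipd = np.empty(B)
    for b in range(B):
        cross = np.sum(L[A[b]:A[b + 1]] * np.conj(R[A[b]:A[b + 1]]))   # sum of L(k) * R*(k)
        ipd[b] = np.angle(cross)                                        # arg(...) of the sum
    return ipd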


In addition, when the preset stereo parameter generation algorithm further includes an ILD generation algorithm in Embodiment 1 of the present disclosure, an ILD may be further obtained.


Step 201: The encoder mixes the Nth-frame audio signals on the two channels into an Nth-frame downmixed signal according to at least one stereo parameter in the Nth-frame stereo parameter set based on a predetermined algorithm.


Further, for the predetermined first algorithm, refer to the method for obtaining an Nth-frame downmixed signal in Embodiment 1 of the present disclosure. However, the predetermined first algorithm is not limited to the method for obtaining an Nth-frame downmixed signal in Embodiment 1 of the present disclosure.


Step 202: The encoder detects whether the Nth-frame downmixed signal includes a speech signal, and if the Nth-frame downmixed signal includes the speech signal, performs step 203, or if the Nth-frame downmixed signal does not include the speech signal, performs step 204.


In Embodiment 2 of the present disclosure, for a specific implementation of detecting, by the encoder, whether the Nth-frame downmixed signal includes the speech signal, refer to the manner of detecting, by the encoder, whether the Nth-frame downmixed signal includes the speech signal in Embodiment 1 of the present disclosure.


Step 203: The encoder encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate, encodes the Nth-frame stereo parameter set, and performs step 211.


Further, when the encoder includes two manners of encoding a stereo parameter set, a first encoding manner and a second encoding manner, an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner. In step 203, the encoder encodes the Nth-frame stereo parameter set according to the first encoding manner.


For example, the Nth-frame stereo parameter set includes an IPD and an ITD. IPD quantization precision stipulated in the first encoding manner is not lower than IPD quantization precision stipulated in the second encoding manner, and ITD quantization precision stipulated in the first encoding manner is not lower than ITD quantization precision stipulated in the second encoding manner.


The speech frame encoding rate may be set to 13.2 kbps.


Step 204: The encoder determines whether the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, and if the Nth-frame downmixed signal satisfies the preset speech frame encoding condition, performs step 205, or if the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition, performs step 206.


Step 205: The encoder encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate, encodes the Nth-frame stereo parameter set, and performs step 211.


Further, when the encoder includes two manners of encoding a stereo parameter set, a first encoding manner and a second encoding manner, an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner. In step 205, the encoder encodes the Nth-frame stereo parameter set according to the first encoding manner.


Step 206: The encoder determines whether the Nth-frame downmixed signal satisfies a preset SID encoding condition, and determines whether the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, and if the Nth-frame downmixed signal satisfies the preset SID encoding condition and the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition, performs step 207, or if the Nth-frame downmixed signal satisfies the preset SID encoding condition, but the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, performs step 208, or if the Nth-frame downmixed signal does not satisfy the preset SID encoding condition, but the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition, performs step 209, or if the Nth-frame downmixed signal does not satisfy the preset SID encoding condition and the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, performs step 210.


Further, before encoding the at least one stereo parameter in the Nth-frame stereo parameter set, the encoder determines whether a stereo parameter in the at least one stereo parameter satisfies a preset corresponding stereo parameter encoding condition. Further, if the at least one stereo parameter in the Nth-frame stereo parameter set includes an ILD, the preset stereo parameter encoding condition includes DL≥D0, where DL represents a degree by which the ILD deviates from a first standard, the first standard is determined based on a predetermined third algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


If the at least one stereo parameter in the Nth-frame stereo parameter set includes an ITD, the preset stereo parameter encoding condition includes DT≥D1, where DT represents a degree by which the ITD deviates from a second standard, the second standard is determined based on a predetermined fourth algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


If the at least one stereo parameter in the Nth-frame stereo parameter set includes an IPD, the preset stereo parameter encoding condition includes DP≥D2, where DP represents a degree by which the IPD deviates from a third standard, the third standard is determined based on a predetermined fifth algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


The third algorithm, the fourth algorithm, and the fifth algorithm need to be preset according to an actual situation.


Further, when the at least one stereo parameter in the Nth-frame stereo parameter set includes only the ITD, the preset stereo parameter encoding condition includes only DT≥D1, and when the ITD included in the at least one stereo parameter in the Nth-frame stereo parameter set satisfies DT≥D1, the at least one stereo parameter in the Nth-frame stereo parameter set is encoded. When the at least one stereo parameter in the Nth-frame stereo parameter set includes only the ITD and the IPD, the preset stereo parameter encoding condition includes only DT≥D1, and when the ITD included in the at least one stereo parameter in the Nth-frame stereo parameter set satisfies DT≥D1, the at least one stereo parameter in the Nth-frame stereo parameter set is encoded. However, when the at least one stereo parameter in the Nth-frame stereo parameter set includes only the ITD and the ILD, the preset stereo parameter encoding condition includes DT≥D1 and DL≥D0, and the encoder encodes the ITD and the ILD only when the ITD included in the at least one stereo parameter in the Nth-frame stereo parameter set satisfies DT≥D1 and the ILD satisfies DL≥D0.


Optionally, DL, DT, and DP respectively satisfy the following expressions:








$$D_L = \sum_{m=0}^{M-1}\left(ILD(m) - \frac{1}{T}\sum_{t=1}^{T} ILD^{[-t]}(m)\right);$$

$$D_T = ITD - \frac{1}{T}\sum_{t=1}^{T} ITD^{[-t]}; \text{ and}$$

$$D_P = \sum_{m=0}^{M-1}\left(IPD(m) - \frac{1}{T}\sum_{t=1}^{T} IPD^{[-t]}(m)\right),$$




where ILD(m) is a level difference generated when the Nth-frame audio signals are respectively transmitted on the two channels in an mth sub frequency band, M is a total quantity of sub frequency bands occupied for transmitting the Nth-frame audio signals, $\frac{1}{T}\sum_{t=1}^{T} ILD^{[-t]}(m)$ is an average value of ILDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, T is a positive integer greater than 0, ILD[−t](m) is a level difference generated when tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, the ITD is a time difference generated when the Nth-frame audio signals are respectively transmitted on the two channels, $\frac{1}{T}\sum_{t=1}^{T} ITD^{[-t]}$ is an average value of ITDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, ITD[−t] is a time difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels, IPD(m) is a phase difference generated when some of the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, $\frac{1}{T}\sum_{t=1}^{T} IPD^{[-t]}(m)$ is an average value of IPDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, and IPD[−t](m) is a phase difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band.
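The deviation measures above can be sketched in Python as follows; the history buffers, the simultaneous threshold check, and the threshold values D0, D1, and D2 are assumptions for illustration.

import numpy as np

def stereo_parameter_deviations(ild, itd, ipd, ild_hist, itd_hist, ipd_hist):
    # ild, ipd: current per-sub-band ILD(m)/IPD(m) arrays of length M; itd: current ITD scalar.
    # *_hist: values from the T preceding frames (shape (T, M) for ILD/IPD, (T,) for ITD).
    d_l = float(np.sum(ild - np.mean(ild_hist, axis=0)))   # D_L
    d_t = float(itd - np.mean(itd_hist))                   # D_T
    d_p = float(np.sum(ipd - np.mean(ipd_hist, axis=0)))   # D_P
    return d_l, d_t, d_p

def satisfies_parameter_condition(d_l, d_t, d_p, D0=1.0, D1=1.0, D2=0.1):
    # Example check of DL >= D0, DT >= D1 and DP >= D2 when all three parameters are present;
    # the threshold values are assumptions.
    return d_l >= D0 and d_t >= D1 and d_p >= D2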


Step 207: The encoder encodes the Nth-frame downmixed signal according to a preset SID encoding rate, encodes the at least one stereo parameter in the Nth-frame stereo parameter set, and performs step 211.


Further, when the encoder includes two manners of encoding a stereo parameter set, a first encoding manner and a second encoding manner, an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner. The encoder encodes the at least one stereo parameter in the Nth-frame stereo parameter set according to the second encoding manner.


For example, in the first encoding manner, the encoder encodes the Nth-frame stereo parameter set according to 4.2 kbps, and in the second encoding manner, the encoder encodes the Nth-frame stereo parameter set according to 1.2 kbps.


To improve efficiency of compressing the stereo parameter set by the encoder, optionally, the encoder obtains X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule, and encodes the X target stereo parameters. X is a positive integer greater than 0 and less than or equal to Z.


Further, the Nth-frame stereo parameter set includes three types of stereo parameters: an IPD, an ITD, and an ILD. The ILD includes ILDs in 10 sub frequency bands: an ILD(0), . . . , and an ILD(9), the IPD includes IPDs in 10 sub frequency bands: an IPD(0), . . . , and an IPD(9), and the ITD includes ITDs in two time-domain subbands: an ITD(0) and an ITD(1). Assuming that the preset stereo parameter dimension reduction rule is that the stereo parameter set includes only two types of stereo parameters, the encoder selects any two types of stereo parameters from the IPD, the ITD, and the ILD. Assuming that the IPD and the ILD are selected, the encoder encodes the IPD and the ILD. Alternatively, if the preset stereo parameter dimension reduction rule is that only a half of each type of stereo parameters is reserved, five ILDs are selected from the ILD(0), . . . , and the ILD(9), five IPDs are selected from the IPD(0), . . . , and the IPD(9), one ITD is selected from the ITD(0) and the ITD(1), and the selected parameters are encoded. Alternatively, the preset stereo parameter dimension reduction rule is that five ILDs and five IPDs are selected. Alternatively, if the preset stereo parameter dimension reduction rule is that frequency-domain resolution of the ILDs, frequency-domain resolution of the IPDs, and time-domain resolution of the ITDs are reduced, ILDs in neighboring sub frequency bands in the ILD(0), . . . , and the ILD(9) are combined. For example, an average value of the ILD(0) and the ILD(1) is calculated to obtain a new ILD(0), an average value of the ILD(2) and the ILD(3) is calculated to obtain a new ILD(1), . . . , and an average value of the ILD(8) and the ILD(9) is calculated to obtain a new ILD(4). A sub frequency band corresponding to the new ILD(0) is obtained by combining sub frequency bands corresponding to the original ILD(0) and the original ILD(1), . . . , and a sub frequency band corresponding to the new ILD(4) is obtained by combining sub frequency bands corresponding to the original ILD(8) and the original ILD(9). According to the same method, IPDs in neighboring sub frequency bands in the IPD(0), . . . , and the IPD(9) are combined, to obtain a new IPD(0), . . . , and a new IPD(4), and an average value of the ITD(0) and the ITD(1) is also calculated to obtain a new ITD(0). A time-domain signal corresponding to the new ITD(0) is obtained by combining time-domain signals corresponding to the original ITD(0) and the original ITD(1). The new ILD(0), . . . , and the new ILD(4), the new IPD(0), . . . , and the new IPD(4), and the new ITD(0) are encoded. Alternatively, if the preset stereo parameter dimension reduction rule is that frequency-domain resolution of the ILDs is reduced, ILDs in neighboring sub frequency bands in the ILD(0), . . . , and the ILD(9) are combined. For example, an average value of the ILD(0) and the ILD(1) is calculated to obtain a new ILD(0), an average value of the ILD(2) and the ILD(3) is calculated to obtain a new ILD(1), . . . , and an average value of the ILD(8) and the ILD(9) is calculated to obtain a new ILD(4). A sub frequency band corresponding to the new ILD(0) is obtained by combining sub frequency bands corresponding to the original ILD(0) and the original ILD(1), . . . , and a sub frequency band corresponding to the new ILD(4) is obtained by combining sub frequency bands corresponding to the original ILD(8) and the original ILD(9). Then, the new ILD(0), . . . , and the new ILD(4) are encoded.
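A minimal sketch of the neighboring-sub-band merging used in the example above, assuming an even number of sub bands:

import numpy as np

def halve_frequency_resolution(params):
    # Average neighboring sub bands: params(0) and params(1) give a new value 0,
    # params(2) and params(3) give a new value 1, and so on.
    p = np.asarray(params, dtype=float)
    return (p[0::2] + p[1::2]) / 2.0

# Example: ten ILDs are merged into five target parameters before encoding.
print(halve_frequency_resolution([1, 3, 2, 4, 0, 2, 5, 5, -1, 1]))   # [2. 3. 1. 5. 0.]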


Step 208: The encoder encodes the Nth-frame downmixed signal according to a preset SID encoding rate, but skips encoding the at least one stereo parameter in the Nth-frame stereo parameter set, and performs step 211.


Step 209: The encoder encodes the at least one stereo parameter in the Nth-frame stereo parameter set, but skips encoding the Nth-frame downmixed signal, and performs step 215.


Step 210: The encoder encodes neither the Nth-frame downmixed signal nor the Nth-frame stereo parameter set, and performs step 217.


In Embodiment 2 of the present disclosure, the encoder performs encoding to obtain a bitstream. The bitstream includes four different types of frames, that is, a third-type frame, a fourth-type frame, a fifth-type frame, and a sixth-type frame. The third-type frame includes a stereo parameter set, but does not include a downmixed signal, the fourth-type frame includes neither a downmixed signal nor a stereo parameter set, the fifth-type frame includes both a downmixed signal and a stereo parameter set, and the sixth-type frame includes a downmixed signal, but does not include a stereo parameter set. Each of the fifth-type frame and the sixth-type frame is one case of a type frame including a downmixed signal, and each of the third-type frame and the fourth-type frame is one case of a type frame including no downmixed signal.


Further, an Nth-frame bitstream obtained in step 203, step 205, or step 207 is the fifth-type frame, an Nth-frame bitstream obtained in step 208 is the sixth-type frame, an Nth-frame bitstream obtained in step 209 is the third-type frame, and an Nth-frame bitstream obtained in step 210 is the fourth-type frame.


Step 211: The encoder sends an Nth-frame bitstream to a decoder, where the Nth-frame bitstream includes the Nth-frame downmixed signal and the Nth-frame stereo parameter set.


Step 212: The decoder receives the Nth-frame bitstream, decodes the Nth-frame bitstream if determining that the Nth-frame bitstream is a fifth-type frame to obtain the Nth-frame downmixed signal and the Nth-frame stereo parameter set, and performs step 218.


For a specific implementation of determining, by the decoder, which type frame the Nth-frame bitstream is, refer to Embodiment 1 of the present disclosure.


Further, the decoder decodes the Nth-frame bitstream according to a rate corresponding to the Nth-frame bitstream. Further, if the encoder encodes the Nth-frame downmixed signal according to 13.2 kbps, the decoder decodes a bitstream of the Nth-frame downmixed signal in the Nth-frame bitstream according to 13.2 kbps. If the encoder encodes the Nth-frame stereo parameter set according to 4.2 kbps, the decoder decodes a bitstream of the Nth-frame stereo parameter set in the Nth-frame bitstream according to 4.2 kbps.


Step 213: The encoder sends an Nth-frame bitstream to a decoder, where the Nth-frame bitstream includes the Nth-frame downmixed signal.


Step 214: The decoder decodes the Nth-frame bitstream if the Nth-frame bitstream is a sixth-type frame to obtain the Nth-frame downmixed signal, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined sixth algorithm, and performs step 218.


Further, using a stereo parameter P in the Nth-frame stereo parameter set as an example, the stereo parameter set stipulated in the preset second rule is a frame of stereo parameter set that is closest to P and that is obtained by means of decoding, and the Nth-frame stereo parameter P is obtained according to the following algorithm:






$$P = \tilde{P}^{[-1]} + \delta,$$

where P represents the Nth-frame stereo parameter, $\tilde{P}^{[-1]}$ represents a frame of stereo parameter that is closest to P and that is obtained by means of decoding, and δ represents a random number whose absolute value is relatively small. For example, δ may be a random number between $-\tilde{P}^{[-1]} \times 5\%$ and $+\tilde{P}^{[-1]} \times 5\%$.
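A minimal sketch of this estimation, assuming the ±5% range given in the example:

import random

def estimate_stereo_parameter(p_prev, rel_range=0.05):
    # p_prev: the closest decoded value of the stereo parameter.
    delta = random.uniform(-abs(p_prev) * rel_range, abs(p_prev) * rel_range)
    return p_prev + delta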


It should be noted that this embodiment of the present disclosure imposes no limitation on the method for estimating stereo parameters in the Nth-frame stereo parameter set.


Step 215: The encoder sends an Nth-frame bitstream to a decoder, where the Nth-frame bitstream includes the at least one stereo parameter in the Nth-frame stereo parameter set.


Step 216: The decoder decodes the Nth-frame bitstream if the Nth-frame bitstream is a third-type frame to obtain the at least one stereo parameter in the Nth-frame stereo parameter set, determines, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, obtains the Nth-frame downmixed signal according to the m-frame downmixed signals based on a predetermined second algorithm, where m is a positive integer greater than 0, and performs step 218.


Further, an average value of an (N−3)th-frame downmixed signal, an (N−2)th-frame downmixed signal, and an (N−1)th-frame downmixed signal is used as the Nth-frame downmixed signal, or an (N−1)th-frame downmixed signal is directly used as the Nth-frame downmixed signal, or the Nth-frame downmixed signal is estimated according to another algorithm.


In addition, the (N−1)th-frame downmixed signal may be directly used as the Nth-frame downmixed signal, or the Nth-frame downmixed signal is calculated according to the (N−1)th-frame downmixed signal and a preset offset value based on a preset algorithm.


Step 217: After receiving an Nth-frame bitstream, a decoder determines that the Nth-frame bitstream is a fourth-type frame, determines, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtains the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined sixth algorithm, and determines, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, and obtains the Nth-frame downmixed signal according to the m-frame downmixed signals based on a predetermined second algorithm, where m is a positive integer greater than 0.


Step 218: The decoder restores the Nth-frame downmixed signal to the Nth-frame audio signals on the two channels according to a target stereo parameter in the Nth-frame stereo parameter set based on a predetermined seventh algorithm.


In addition, based on this embodiment of the present disclosure, if the encoder detects, using the Nth-frame audio signals on the two channels, whether the Nth-frame downmixed signal includes the speech signal, another manner of encoding a stereo parameter set is further provided. Further, if detecting that either of the Nth-frame audio signals on the two channels includes the speech signal, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and encodes the Nth-frame stereo parameter set.


When the encoder determines that neither of the Nth-frame audio signals on the two channels includes the speech signal, if the Nth-frame audio signals satisfy a preset speech frame encoding condition, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and encodes the Nth-frame stereo parameter set, or if the Nth-frame audio signals do not satisfy a preset speech frame encoding condition, the encoder obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner, and encodes at least one stereo parameter in the Nth-frame stereo parameter set when determining that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, or skips encoding the stereo parameter set when determining that the Nth-frame stereo parameter set does not satisfy a preset stereo parameter encoding condition.


The first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions.


A quantity that is of types of stereo parameters included in a stereo parameter set and that is stipulated in the first stereo parameter set generation manner is not less than a quantity that is of types of stereo parameters included in a stereo parameter set and that is stipulated in the second stereo parameter set generation manner, a quantity that is of stereo parameters included in a stereo parameter set and that is stipulated in the first stereo parameter set generation manner is not less than a quantity that is of stereo parameters included in a stereo parameter set and that is stipulated in the second stereo parameter set generation manner, time-domain resolution that is of a stereo parameter and that is stipulated in the first stereo parameter set generation manner is not lower than time-domain resolution that is of a corresponding stereo parameter and that is stipulated in the second stereo parameter set generation manner, or frequency-domain resolution that is of a stereo parameter and that is stipulated in the first stereo parameter set generation manner is not lower than frequency-domain resolution that is of a corresponding stereo parameter and that is stipulated in the second stereo parameter set generation manner.


Further, frequency-domain precision or time-domain precision of a stereo parameter set obtained in the first stereo parameter set generation manner is higher than that of a stereo parameter set obtained in the second stereo parameter set generation manner.


In addition, in a multichannel audio signal processing method in Embodiment 3 of the present disclosure, when detecting that an Nth-frame downmixed signal includes a speech signal, an encoder encodes the Nth-frame downmixed signal according to a speech encoding rate, and encodes an Nth-frame stereo parameter set, or when an encoder detects that an Nth-frame downmixed signal does not include a speech signal, if the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, the encoder encodes the Nth-frame downmixed signal according to a speech encoding rate, and encodes an Nth-frame stereo parameter set, or if the Nth-frame downmixed signal does not satisfy a preset speech frame encoding condition, but satisfies a preset SID encoding condition, the encoder encodes the Nth-frame downmixed signal according to an SID encoding rate, and encodes at least one stereo parameter in an Nth-frame stereo parameter set, or if the Nth-frame downmixed signal satisfies neither a preset speech frame encoding condition nor a preset SID encoding condition, the encoder encodes neither the Nth-frame downmixed signal nor an Nth-frame stereo parameter set.


It should be understood that a difference between Embodiment 3 of the present disclosure and Embodiment 1 of the present disclosure or between Embodiment 3 of the present disclosure and Embodiment 2 of the present disclosure lies in that the encoder does not perform determining on a stereo parameter set, and encodes the stereo parameter set regardless of which manner is used to encode a downmixed signal.


In Embodiment 3 of the present disclosure, a bitstream obtained after the encoder encodes the downmixed signal includes two types of frames, a first-type frame and a second-type frame. The first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes neither a downmixed signal nor a stereo parameter set. Further, for a method for restoring the bitstream to audio signals on two channels by a decoder after receiving the bitstream, refer to Embodiment 2 of the present disclosure and Embodiment 1 of the present disclosure.


Based on Embodiment 3 of the present disclosure, optionally, when the Nth-frame downmixed signal satisfies neither the preset speech frame encoding condition nor the preset SID encoding condition, the encoder determines whether the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, and if the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition, the encoder does not encode the Nth-frame downmixed signal, but encodes at least one stereo parameter in the Nth-frame stereo parameter set, or if the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, the encoder encodes neither the Nth-frame downmixed signal nor the Nth-frame stereo parameter set.


A bitstream obtained based on the foregoing encoding method includes three types of frames, a first-type frame, a third-type frame, and a fourth-type frame. The first-type frame includes both a downmixed signal and a stereo parameter set, the third-type frame does not include a downmixed signal, but includes a stereo parameter set, and the fourth-type frame includes neither a downmixed signal nor a stereo parameter set. Further, for a method for restoring the bitstream to audio signals on two channels by a decoder after receiving the bitstream, refer to Embodiment 2 of the present disclosure and Embodiment 1 of the present disclosure.


A difference between the foregoing technical solution and Embodiment 2 of the present disclosure lies in that, when the Nth-frame downmixed signal satisfies neither the preset speech frame encoding condition nor the preset SID encoding condition, the encoder determines whether the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition.


Optionally, in a multichannel audio signal processing method in Embodiment 4 of the present disclosure, when detecting that an Nth-frame downmixed signal includes a speech signal, an encoder encodes the Nth-frame downmixed signal according to a speech encoding rate, and encodes an Nth-frame stereo parameter set, or when an encoder detects that an Nth-frame downmixed signal does not include a speech signal, if the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, the encoder encodes the Nth-frame downmixed signal according to a speech encoding rate, and encodes an Nth-frame stereo parameter set, or if the Nth-frame downmixed signal does not satisfy a preset speech frame encoding condition, but satisfies a preset SID encoding condition, the encoder determines whether an Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, and when the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition, the encoder encodes the Nth-frame downmixed signal according to an SID encoding rate, and encodes at least one stereo parameter in the Nth-frame stereo parameter set, or when the Nth-frame stereo parameter set does not satisfy a preset stereo parameter encoding condition, the encoder encodes the Nth-frame downmixed signal according to an SID encoding rate, but does not encode the Nth-frame stereo parameter set, or if the Nth-frame downmixed signal satisfies neither a preset speech frame encoding condition nor a preset SID encoding condition, the encoder encodes neither the Nth-frame downmixed signal nor an Nth-frame stereo parameter set.


A bitstream obtained based on an encoding manner in Embodiment 4 of the present disclosure includes three types of frames, a fifth-type frame, a sixth-type frame, and a second-type frame. The fifth-type frame includes both a downmixed signal and a stereo parameter set, the sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, and the second-type frame includes neither a downmixed signal nor a stereo parameter set. Further, for a method for restoring the bitstream to audio signals on two channels by a decoder after receiving the bitstream, refer to Embodiment 2 of the present disclosure and Embodiment 1 of the present disclosure.


A difference between Embodiment 4 of the present disclosure and Embodiment 2 of the present disclosure lies in that, when the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition, but satisfies the preset SID encoding condition, the encoder determines whether to encode the at least one stereo parameter in the Nth-frame stereo parameter set, and when the Nth-frame downmixed signal satisfies neither the preset speech frame encoding condition nor the preset SID encoding condition, skips encoding the Nth-frame stereo parameter set.


In Embodiment 3 of the present disclosure and Embodiment 4 of the present disclosure, further, for a manner of obtaining the Nth-frame downmixed signal and the Nth-frame stereo parameter set by the decoder, refer to Embodiment 2 of the present disclosure and Embodiment 1 of the present disclosure, and for a specific implementation of encoding a stereo parameter and a downmixed signal, refer to Embodiment 2 of the present disclosure and Embodiment 1 of the present disclosure.


In any embodiment of the present disclosure, first and second in the predetermined first algorithm and the predetermined second algorithm have no special meanings, and are merely used to distinguish between different algorithms, third, fourth, fifth, sixth, seventh, and the like are similar thereto, and details are not described herein.


Based on a same inventive concept, the embodiments of the present disclosure further provide an encoder, a decoder, and an encoding and decoding system. Because methods corresponding to the encoder, the decoder, and the encoding and decoding system in the embodiments of the present disclosure are the multichannel audio signal processing method in the embodiments of the present disclosure, for implementations of the encoder, the decoder, and the encoding and decoding system in the embodiments of the present disclosure, refer to the implementation of the method, and details are not repeated herein.


As shown in FIG. 3A, an encoder in an embodiment of the present disclosure includes a signal detection unit 300 and a signal encoding unit 310. The signal detection unit 300 is configured to detect whether an Nth-frame downmixed signal includes a speech signal. The Nth-frame downmixed signal is obtained after Nth-frame audio signals on two of multiple channels are mixed based on a predetermined first algorithm, and N is a positive integer greater than 0. The signal encoding unit 310 is configured to encode the Nth-frame downmixed signal when the signal detection unit 300 detects that the Nth-frame downmixed signal includes the speech signal, or when the signal detection unit 300 detects that the Nth-frame downmixed signal does not include the speech signal, encode the Nth-frame downmixed signal if the signal detection unit 300 determines that the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, or skip encoding the Nth-frame downmixed signal if the signal detection unit 300 determines that the Nth-frame downmixed signal does not satisfy a preset audio frame encoding condition.


Optionally, as shown in FIG. 3B, the signal encoding unit 310 includes a first signal encoding unit 311 and a second signal encoding unit 312. When the signal detection unit 300 detects that the Nth-frame downmixed signal includes the speech signal, the signal detection unit 300 instructs the first signal encoding unit 311 to encode the Nth-frame downmixed signal.


If the Nth-frame downmixed signal satisfies a preset speech frame encoding condition, the signal detection unit 300 instructs the first signal encoding unit 311 to encode the Nth-frame downmixed signal.


Further, it is stipulated that the first signal encoding unit 311 encodes the Nth-frame downmixed signal according to a preset speech frame encoding rate.


If the Nth-frame downmixed signal does not satisfy a preset speech frame encoding condition, but satisfies a preset SID frame encoding condition, the signal detection unit 300 instructs the second signal encoding unit 312 to encode the Nth-frame downmixed signal. Further, it is stipulated that the second signal encoding unit 312 encodes the Nth-frame downmixed signal according to a preset SID encoding rate. The SID encoding rate is not greater than the speech frame encoding rate.


Optionally, as shown in FIG. 3A and FIG. 3B, the encoder further includes a parameter generation unit 320, a parameter encoding unit 330, and a parameter detection unit 340. The parameter generation unit 320 is configured to obtain an Nth-frame stereo parameter set according to the Nth-frame audio signals. The Nth-frame stereo parameter set includes Z stereo parameters, the Z stereo parameters include a parameter that is used when the encoder mixes the Nth-frame audio signals based on the predetermined first algorithm, and Z is a positive integer greater than 0. The parameter encoding unit 330 is configured to encode the Nth-frame stereo parameter set when the signal detection unit 300 detects that the Nth-frame downmixed signal includes the speech signal, or when the signal detection unit 300 detects that the Nth-frame downmixed signal does not include the speech signal, encode at least one stereo parameter in the Nth-frame stereo parameter set if the parameter detection unit 340 determines that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition, or skip encoding the stereo parameter set if the parameter detection unit 340 determines that the Nth-frame stereo parameter set does not satisfy a preset stereo parameter encoding condition.


Optionally, the parameter encoding unit 330 is configured to obtain X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule, and encode the X target stereo parameters. X is a positive integer greater than 0 and less than or equal to Z.
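
One possible dimension reduction rule, shown purely as an example (the disclosure does not fix the rule), is to average groups of adjacent sub-band parameters so that Z per-band values become X target values:

```python
import numpy as np

# Assumed dimension reduction rule for illustration: group adjacent sub-band
# parameters and keep one average value per group, so Z parameters become X.

def reduce_stereo_parameters(z_params, x):
    """Map Z per-sub-band stereo parameters onto X target parameters by group averaging."""
    z_params = np.asarray(z_params, dtype=float)
    groups = np.array_split(z_params, x)            # X groups of adjacent sub-bands
    return np.array([g.mean() for g in groups])     # one target parameter per group

# Example: 12 per-band ILD values reduced to 4 target parameters.
print(reduce_stereo_parameters(range(12), 4))       # prints [ 1.  4.  7. 10.]
```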


Further, when the parameter encoding unit 330 includes a first parameter encoding unit 331 and a second parameter encoding unit 332, the second parameter encoding unit 332 is configured to obtain the X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on the preset stereo parameter dimension reduction rule, and encode the X target stereo parameters.


Optionally, based on FIG. 3A and FIG. 3B, as shown in FIG. 3C, the parameter generation unit 320 of the encoder includes a first parameter generation unit 321 and a second parameter generation unit 322. When the signal detection unit 300 detects that the Nth-frame audio signals include the speech signal, or the signal detection unit 300 detects that the Nth-frame audio signals do not include the speech signal and the Nth-frame audio signals satisfy the preset speech frame encoding condition, the signal detection unit 300 instructs the first parameter generation unit 321 to generate the Nth-frame stereo parameter set. When the signal detection unit 300 detects that the Nth-frame audio signals do not include the speech signal, and the Nth-frame audio signals do not satisfy the preset speech frame encoding condition, the signal detection unit 300 instructs the second parameter generation unit 322 to generate the Nth-frame stereo parameter set. Further, it is pre-stipulated that the first parameter generation unit 321 obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and the second parameter generation unit 322 obtains the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner.


The first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions.


A quantity of types of stereo parameters included in a stereo parameter set as stipulated in the first stereo parameter set generation manner is not less than a quantity of types of stereo parameters included in a stereo parameter set as stipulated in the second stereo parameter set generation manner; a quantity of stereo parameters included in a stereo parameter set as stipulated in the first stereo parameter set generation manner is not less than a quantity of stereo parameters included in a stereo parameter set as stipulated in the second stereo parameter set generation manner; time-domain resolution of a stereo parameter as stipulated in the first stereo parameter set generation manner is not lower than time-domain resolution of a corresponding stereo parameter as stipulated in the second stereo parameter set generation manner; or frequency-domain resolution of a stereo parameter as stipulated in the first stereo parameter set generation manner is not lower than frequency-domain resolution of a corresponding stereo parameter as stipulated in the second stereo parameter set generation manner.
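
The sketch below expresses these constraints with hypothetical configuration values; all of the numbers are assumptions, only the inequalities come from the conditions above (of which at least one must hold, although this example happens to satisfy all four):

```python
from dataclasses import dataclass

# Hypothetical configurations contrasting the two stereo parameter set
# generation manners; the concrete numbers are assumptions.

@dataclass
class StereoParamConfig:
    parameter_types: int       # e.g. ILD, ITD, IPD -> 3 types
    parameters_per_frame: int  # total stereo parameters in one set
    time_slots: int            # time-domain resolution (subframes per frame)
    sub_bands: int             # frequency-domain resolution

FIRST_MANNER = StereoParamConfig(parameter_types=3, parameters_per_frame=24,
                                 time_slots=4, sub_bands=12)
SECOND_MANNER = StereoParamConfig(parameter_types=2, parameters_per_frame=8,
                                  time_slots=1, sub_bands=4)

# The stated constraints, written as checks (at least one must hold):
assert FIRST_MANNER.parameter_types >= SECOND_MANNER.parameter_types
assert FIRST_MANNER.parameters_per_frame >= SECOND_MANNER.parameters_per_frame
assert FIRST_MANNER.time_slots >= SECOND_MANNER.time_slots
assert FIRST_MANNER.sub_bands >= SECOND_MANNER.sub_bands
```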


After the second parameter generation unit 322 obtains the Nth-frame stereo parameter set, the parameter encoding unit 330 encodes the Nth-frame stereo parameter set. Further, as shown in FIG. 3D, when the parameter encoding unit 330 includes a first parameter encoding unit 331 and a second parameter encoding unit 332, the first parameter encoding unit 331 encodes the Nth-frame stereo parameter set generated by the first parameter generation unit 321, and the second parameter encoding unit 332 encodes the Nth-frame stereo parameter set generated by the second parameter generation unit 322. It is pre-stipulated that an encoding manner of the first parameter encoding unit 331 is a first encoding manner, and that an encoding manner of the second parameter encoding unit 332 is a second encoding manner. Further, an encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or, for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner.
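
As an illustration of the precision difference between the two encoding manners, a uniform scalar quantizer with two assumed step sizes could look as follows (the step sizes are assumptions, not values from the disclosure):

```python
# Sketch contrasting the two parameter encoding manners: the first manner may
# spend more bits (finer quantization) than the second. Step sizes are assumed.

def quantize(value, step):
    """Uniform scalar quantization; returns the index and the reconstructed value."""
    index = round(value / step)
    return index, index * step

FIRST_MANNER_STEP = 0.5    # finer step -> higher quantization precision / rate
SECOND_MANNER_STEP = 2.0   # coarser step -> lower precision, fewer bits

ild_db = 3.7
print(quantize(ild_db, FIRST_MANNER_STEP))    # (7, 3.5)  - first encoding manner
print(quantize(ild_db, SECOND_MANNER_STEP))   # (2, 4.0)  - second encoding manner
```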


The stereo parameter set is not encoded when the parameter detection unit 340 determines that the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition.


Optionally, the parameter encoding unit 330 includes a first parameter encoding unit 331 and a second parameter encoding unit 332. Further, the first parameter encoding unit 331 is configured to encode the Nth-frame stereo parameter set according to a first encoding manner when the Nth-frame downmixed signal includes the speech signal, or when the Nth-frame downmixed signal does not include the speech signal but satisfies the speech frame encoding condition. The second parameter encoding unit 332 is configured to encode at least one stereo parameter in the Nth-frame stereo parameter set according to a second encoding manner when the Nth-frame downmixed signal does not satisfy the speech frame encoding condition.


An encoding rate stipulated in the first encoding manner is not less than an encoding rate stipulated in the second encoding manner, and/or for any stereo parameter in the Nth-frame stereo parameter set, quantization precision stipulated in the first encoding manner is not lower than quantization precision stipulated in the second encoding manner.


Optionally, if the at least one stereo parameter in the Nth-frame stereo parameter set includes an ILD, the preset stereo parameter encoding condition includes DL≥D0, where DL represents a degree by which the ILD deviates from a first standard, the first standard is determined based on a predetermined second algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


If the at least one stereo parameter in the Nth-frame stereo parameter set includes an ITD, the preset stereo parameter encoding condition includes DT≥D1, where DT represents a degree by which the ITD deviates from a second standard, the second standard is determined based on a predetermined third algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


If the at least one stereo parameter in the Nth-frame stereo parameter set includes an IPD, the preset stereo parameter encoding condition includes DP≥D2, where DP represents a degree by which the IPD deviates from a third standard, the third standard is determined based on a predetermined fourth algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and T is a positive integer greater than 0.


Optionally, DL, DT, and DP respectively satisfy the following expressions:

$$D_L=\sum_{m=0}^{M-1}\left|\mathrm{ILD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m)\right|;$$

$$D_T=\left|\mathrm{ITD}-\frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]}\right|;\quad\text{and}$$

$$D_P=\sum_{m=0}^{M-1}\left|\mathrm{IPD}(m)-\frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m)\right|,$$
where ILD(m) is a level difference generated when the Nth-frame audio signals are respectively transmitted on the two channels in an mth sub frequency band, M is a total quantity of sub frequency bands occupied for transmitting the Nth-frame audio signals, $\frac{1}{T}\sum_{t=1}^{T}\mathrm{ILD}^{[-t]}(m)$ is an average value of ILDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, T is a positive integer greater than 0, $\mathrm{ILD}^{[-t]}(m)$ is a level difference generated when tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, the ITD is a time difference generated when the Nth-frame audio signals are respectively transmitted on the two channels, $\frac{1}{T}\sum_{t=1}^{T}\mathrm{ITD}^{[-t]}$ is an average value of ITDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, $\mathrm{ITD}^{[-t]}$ is a time difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels, IPD(m) is a phase difference generated when some of the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band, $\frac{1}{T}\sum_{t=1}^{T}\mathrm{IPD}^{[-t]}(m)$ is an average value of IPDs in the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set in the mth sub frequency band, and $\mathrm{IPD}^{[-t]}(m)$ is a phase difference generated when the tth-frame audio signals preceding the Nth-frame audio signals are respectively transmitted on the two channels in the mth sub frequency band.
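
The deviation measures above translate directly into code; the sketch below assumes the stereo parameters of the preceding T frames are available as arrays, and the thresholds D0, D1, and D2 are placeholder tuning constants rather than values from this disclosure:

```python
import numpy as np

# Direct transcription of the deviation measures DL, DT, and DP. The history
# arguments hold the stereo parameters of the T frames preceding frame N
# (shape (T, M) for the per-sub-band parameters, length T for ITD).

def deviation_ild(ild, ild_history):
    """D_L: sum over sub-bands of |ILD(m) - mean of the preceding T frames' ILD(m)|."""
    return np.sum(np.abs(np.asarray(ild) - np.mean(ild_history, axis=0)))

def deviation_itd(itd, itd_history):
    """D_T: |ITD - mean of the preceding T frames' ITD|."""
    return abs(itd - np.mean(itd_history))

def deviation_ipd(ipd, ipd_history):
    """D_P: sum over sub-bands of |IPD(m) - mean of the preceding T frames' IPD(m)|."""
    return np.sum(np.abs(np.asarray(ipd) - np.mean(ipd_history, axis=0)))

def should_encode_parameters(d_l, d_t, d_p, d0=1.0, d1=1.0, d2=1.0):
    """Encode the parameter set when any deviation reaches its (assumed) threshold."""
    return d_l >= d0 or d_t >= d1 or d_p >= d2
```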


It should be noted that the parameter detection unit 340 in FIG. 3A to FIG. 3D is optional. That is, the encoder may include the parameter detection unit 340 or may not include the parameter detection unit 340.


When the encoder does not include the parameter detection unit 340, the parameter encoding unit 330 directly encodes each frame of stereo parameter set generated by the parameter generation unit 320, and the stereo parameter set does not need to be detected.


As shown in FIG. 4, a decoder in an embodiment of the present disclosure includes a receiving unit 400 and a decoding unit 410. The receiving unit 400 is configured to receive a bitstream. The bitstream includes at least two frames, the at least two frames include at least one first-type frame and at least one second-type frame, the first-type frame includes a downmixed signal, and the second-type frame does not include a downmixed signal. For an Nth-frame bitstream, where N is a positive integer greater than 1, the decoding unit 410 is configured to, if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream, to obtain an Nth-frame downmixed signal, or if the Nth-frame bitstream is the second-type frame, determine, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding an Nth-frame downmixed signal, and obtain the Nth-frame downmixed signal according to the m-frame downmixed signals based on a predetermined first algorithm. m is a positive integer greater than 0.


The Nth-frame downmixed signal is obtained by an encoder by mixing Nth-frame audio signals on two of multiple channels based on a predetermined second algorithm.
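
A minimal decoder-side sketch of this behavior follows; averaging the m most recent downmixed frames is only one assumed choice of the preset first rule and the first algorithm, which the disclosure does not fix:

```python
import numpy as np

# Decoder-side sketch: first-type frames carry an encoded downmixed signal;
# for second-type frames the Nth-frame downmix is synthesized from m preceding
# downmixed frames. `decode_downmix` is a hypothetical payload decoder, and the
# sketch assumes at least one decoded frame is already in `history`.

def reconstruct_downmix(frame, history, decode_downmix, m=2):
    """Return the Nth-frame downmixed signal and keep `history` up to date."""
    if frame["type"] == "first":
        downmix = decode_downmix(frame["payload"])   # decode the received downmix
    else:
        recent = history[-m:]                        # m preceding frames (preset first rule)
        downmix = np.mean(recent, axis=0)            # assumed first algorithm: sample-wise average
    history.append(downmix)
    return downmix
```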


Optionally, as shown in FIG. 4, the decoder further includes a signal restoration unit 420. The first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes a stereo parameter set, but does not include a downmixed signal.


If the Nth-frame bitstream is the first-type frame, the decoding unit 410 decodes the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, the decoding unit 410 decodes the Nth-frame bitstream to obtain an Nth-frame stereo parameter set. At least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm.


The signal restoration unit 420 is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.
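
As a simplified illustration of the restoration step, the sketch below splits the mono downmix into two channels using only an ILD value in decibels; a complete third algorithm would also apply ITD and IPD parameters, which are omitted here for brevity:

```python
import numpy as np

# Sketch of the signal restoration step for two channels, assuming the downmix
# is the average of the two channels and using only an ILD (in dB).

def restore_two_channels(downmix, ild_db):
    """Split a mono downmix into two channels whose level ratio is ild_db."""
    g = 10.0 ** (ild_db / 20.0)              # linear level ratio between the channels
    left = downmix * (2.0 * g / (1.0 + g))   # scaled so that (left + right) / 2 == downmix
    right = downmix * (2.0 / (1.0 + g))      # and left / right == g
    return left, right

left, right = restore_two_channels(np.ones(4), ild_db=6.0)
```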


Optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, and the second-type frame includes neither a stereo parameter set nor a downmixed signal.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm. k is a positive integer greater than 0.


At least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm.


The signal restoration unit 420 is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Optionally, the first-type frame includes both a downmixed signal and a stereo parameter set, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the first-type frame, decode the Nth-frame bitstream, to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or if the Nth-frame bitstream is the second-type frame, when the Nth-frame bitstream is the third-type frame, decode the Nth-frame bitstream, to obtain an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the fourth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm, where k is a positive integer greater than 0.
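
A compact sketch of this frame-type dispatch is shown below (the fifth-type and sixth-type handling described later is analogous); the helper callables are assumptions used only to keep the example self-contained:

```python
# Sketch of the three-way dispatch for first-, third-, and fourth-type frames.
# `synthesize_downmix` and `synthesize_params` stand in for deriving the
# missing data from preceding frames according to the preset rules.

def decode_frame(frame, decode_downmix, decode_params,
                 synthesize_downmix, synthesize_params):
    """Return (downmixed signal, stereo parameter set) for one received frame."""
    if frame["type"] == "first":           # downmix and stereo parameters present
        return decode_downmix(frame), decode_params(frame)
    if frame["type"] == "third":           # stereo parameters only
        return synthesize_downmix(), decode_params(frame)
    # fourth-type frame: neither downmix nor stereo parameters were sent
    return synthesize_downmix(), synthesize_params()
```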


At least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm.


The signal restoration unit 420 is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, and the second-type frame includes neither a downmixed signal nor a stereo parameter set.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the first-type frame, when the Nth-frame bitstream is the fifth-type frame, decode the Nth-frame bitstream to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the sixth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the second-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm.


At least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and k is a positive integer greater than 0.


The signal restoration unit 420 is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


Optionally, a fifth-type frame includes both a downmixed signal and a stereo parameter set, a sixth-type frame includes a downmixed signal, but does not include a stereo parameter set, each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, a third-type frame includes a stereo parameter set, but does not include a downmixed signal, a fourth-type frame includes neither a downmixed signal nor a stereo parameter set, and each of the third-type frame and the fourth-type frame is one case of the second-type frame.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the first-type frame, when the Nth-frame bitstream is the fifth-type frame, decode the Nth-frame bitstream to obtain both the Nth-frame downmixed signal and an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the sixth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm.


The decoding unit 410 is further configured to, if the Nth-frame bitstream is the second-type frame, when the Nth-frame bitstream is the third-type frame, decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set, or when the Nth-frame bitstream is the fourth-type frame, determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding an Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a predetermined fourth algorithm.


At least one stereo parameter in the Nth-frame stereo parameter set is used by the decoder to restore the Nth-frame downmixed signal to the Nth-frame audio signals based on a predetermined third algorithm, and k is a positive integer greater than 0.


The signal restoration unit 420 is configured to restore the Nth-frame downmixed signal to the Nth-frame audio signals according to the at least one stereo parameter in the Nth-frame stereo parameter set based on the third algorithm.


As shown in FIG. 5, an embodiment of the present disclosure provides an encoding and decoding system, including any encoder 500 shown in FIG. 3A and FIG. 3B and the decoder 510 shown in FIG. 4.


Persons skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. Moreover, the present disclosure may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a compact disc read-only memory (CD-ROM), an optical memory, and the like) that include computer-usable program code.


The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and implement a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine such that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.


These computer program instructions may be stored in a computer readable memory that can instruct the computer or the other programmable data processing device to work in a specific manner such that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.


These computer program instructions may be loaded onto the computer or the other programmable data processing device such that a series of operations and steps are performed on the computer or the other programmable device, to generate computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.


Although some embodiments of the present disclosure have been described, persons skilled in the art can make changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the following claims are intended to be construed to cover the described embodiments and all changes and modifications falling within the scope of the present disclosure.


Obviously, persons skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. The present disclosure is intended to cover these modifications and variations provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.


Claims
  • 1. A multichannel audio signal processing method implemented by an encoder, comprising: mixing Nth-frame audio signals on two of a plurality of channels based on a first algorithm to obtain an Nth-frame downmixed signal; detecting whether the Nth-frame downmixed signal comprises a speech signal, wherein N is a positive integer greater than zero; encoding the Nth-frame downmixed signal when detecting that the Nth-frame downmixed signal comprises the speech signal; encoding the Nth-frame downmixed signal when the encoder detects that the Nth-frame downmixed signal does not comprise the speech signal and when determining that the Nth-frame downmixed signal satisfies a preset audio frame encoding condition; and skipping encoding the Nth-frame downmixed signal when determining that the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition.
  • 2. The multichannel audio signal processing method of claim 1, wherein encoding the Nth-frame downmixed signal comprises: encoding the Nth-frame downmixed signal according to a preset speech frame encoding rate when detecting that the Nth-frame downmixed signal comprises the speech signal; or encoding the Nth-frame downmixed signal according to the preset speech frame encoding rate when determining that the Nth-frame downmixed signal satisfies a preset speech frame encoding condition; and encoding the Nth-frame downmixed signal according to a preset silence insertion descriptor (SID) frame encoding rate when determining that the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition and satisfies a preset SID encoding condition, wherein the preset SID frame encoding rate is less than or equal to the preset speech frame encoding rate.
  • 3. The multichannel audio signal processing method of claim 1, comprising: obtaining an Nth-frame stereo parameter set according to the Nth-frame audio signals, wherein the Nth-frame stereo parameter set comprises Z stereo parameters, wherein the Z stereo parameters comprise a parameter used to mix the Nth-frame audio signals, and wherein Z is a positive integer greater than zero; encoding the Nth-frame stereo parameter set when detecting that the Nth-frame downmixed signal comprises the speech signal; determining that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition; encoding at least one stereo parameter in the Nth-frame stereo parameter set when detecting that the Nth-frame downmixed signal does not comprise the speech signal and when determining that the Nth-frame stereo parameter set satisfies the preset stereo parameter encoding condition; and skipping encoding the stereo parameter set when detecting that the Nth-frame downmixed signal does not comprise the speech signal and when determining that the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition.
  • 4. The multichannel audio signal processing method of claim 3, wherein encoding the at least one stereo parameter in the Nth-frame stereo parameter set comprises: obtaining X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule, wherein X is a positive integer greater than zero and less than or equal to Z; and encoding the X target stereo parameters.
  • 5. The multichannel audio signal processing method of claim 2, further comprising: detecting that the Nth-frame audio signals comprise the speech signal; obtaining an Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and encoding the Nth-frame stereo parameter set when detecting that the Nth-frame audio signals comprise the speech signal; determining that the Nth-frame audio signals satisfy the preset speech frame encoding condition; obtaining the Nth-frame stereo parameter set according to the Nth-frame audio signals based on the first stereo parameter set generation manner, and encoding the Nth-frame stereo parameter set when detecting that the Nth-frame audio signals do not comprise the speech signal and when determining that the Nth-frame audio signals satisfy the preset speech frame encoding condition; obtaining the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner when detecting that the Nth-frame audio signals do not comprise the speech signal and when determining that the Nth-frame audio signals do not satisfy the preset speech frame encoding condition; encoding at least one stereo parameter in the Nth-frame stereo parameter set when determining that the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition; and skipping encoding the stereo parameter set when determining that the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, wherein the first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions: a quantity of types of stereo parameters comprised in a stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of types of stereo parameters comprised in a stereo parameter set stipulated in the second stereo parameter set generation manner; a quantity of stereo parameters comprised in the stereo parameter set stipulated in the first stereo parameter set generation manner is not less than a quantity of stereo parameters comprised in the stereo parameter set stipulated in the second stereo parameter set generation manner; a time-domain resolution of a stereo parameter stipulated in the first stereo parameter set generation manner is higher than or equal to a time-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner; or a frequency-domain resolution of the stereo parameter stipulated in the first stereo parameter set generation manner is higher than or equal to a frequency-domain resolution of the corresponding stereo parameter stipulated in the second stereo parameter set generation manner.
  • 6. The multichannel audio signal processing method of claim 3, wherein encoding the Nth-frame stereo parameter set comprises encoding the Nth-frame stereo parameter set according to a first encoding manner, and wherein encoding the at least one stereo parameter in the Nth-frame stereo parameter set comprises: encoding the at least one stereo parameter in the Nth-frame stereo parameter set according to the first encoding manner when the Nth-frame downmixed signal satisfies the preset audio frame encoding condition; and encoding the at least one stereo parameter in the Nth-frame stereo parameter set according to a second encoding manner when the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition, wherein an encoding rate stipulated in the first encoding manner is greater than or equal to an encoding rate stipulated in the second encoding manner, or wherein a quantization precision stipulated in the first encoding manner is higher than or equal to a quantization precision stipulated in the second encoding manner for any stereo parameter in the Nth-frame stereo parameter set.
  • 7. The multichannel audio signal processing method of claim 3, further comprising: determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel level difference (ILD), wherein the preset stereo parameter encoding condition comprises DL≥D0 when determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises the ILD, wherein DL represents a degree by which the ILD deviates from a first standard, wherein the first standard is determined based on a second algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and wherein T is a positive integer greater than zero; determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel time difference (ITD), wherein the preset stereo parameter encoding condition comprises DT≥D1 when determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises the ITD, wherein DT represents a degree by which the ITD deviates from a second standard, and wherein the second standard is determined based on a third algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set; and determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel phase difference (IPD), wherein the preset stereo parameter encoding condition comprises DP≥D2 when determining that the at least one stereo parameter in the Nth-frame stereo parameter set comprises the IPD, wherein DP represents a degree by which the IPD deviates from a third standard, and wherein the third standard is determined based on a fourth algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set.
  • 8. The multichannel audio signal processing method of claim 7, wherein DL, DT, and DP respectively satisfy the following expressions:
  • 9. A multichannel audio signal processing method implemented by a decoder, comprising: receiving a bitstream, wherein the bitstream comprises at least two frames, wherein the at least two frames comprise at least one first-type frame or at least one second-type frame, wherein the first-type frame comprises a downmixed signal, and wherein the second-type frame does not comprise the downmixed signal; decoding the Nth-frame bitstream when determining that the Nth-frame bitstream is the first-type frame to obtain an Nth-frame downmixed signal; determining, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, and obtaining the Nth-frame downmixed signal according to the m-frame downmixed signals based on a first algorithm when determining that the Nth-frame bitstream is the second-type frame, wherein m is a positive integer greater than zero, wherein N is a positive integer greater than one, and wherein the Nth-frame downmixed signal is received from an encoder after mixing Nth-frame audio signals on two of a plurality of channels based on a second algorithm.
  • 10. The multichannel audio signal processing method of claim 9, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein the second-type frame comprises the stereo parameter set and does not comprise the downmixed signal, and wherein the multichannel audio signal processing method further comprises: obtaining an Nth-frame stereo parameter set after decoding the Nth-frame bitstream when determining that the Nth-frame bitstream is the first-type frame; decoding the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when determining that the Nth-frame bitstream is the second-type frame; and restoring the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 11. The multichannel audio signal processing method of claim 9, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein the second-type frame comprises neither the downmixed signal nor the stereo parameter set, and wherein the multichannel audio signal processing method further comprises: obtaining an Nth-frame stereo parameter set after decoding the Nth-frame bitstream when determining that the Nth-frame bitstream is the first-type frame; determining, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm after determining that the Nth-frame bitstream is the second-type frame, wherein k is a positive integer greater than zero; and restoring the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 12. The multichannel audio signal processing method of claim 9, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein a third-type frame comprises the stereo parameter set and does not comprise the downmixed signal, wherein a fourth-type frame comprises neither the downmixed signal nor the stereo parameter set, wherein each of the third-type frame and the fourth-type frame is one case of the second-type frame, and wherein the multichannel audio signal processing method further comprises: obtaining an Nth-frame stereo parameter set after decoding the Nth-frame bitstream when determining that the Nth-frame bitstream is the first-type frame; decoding the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when determining that the Nth-frame bitstream is the third-type frame; determining, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when determining that the Nth-frame bitstream is the fourth-type frame, wherein k is a positive integer greater than zero; and restoring the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 13. The multichannel audio signal processing method of claim 9, wherein a fifth-type frame comprises the downmixed signal and a stereo parameter set, wherein a sixth-type frame comprises the downmixed signal and does not comprise the stereo parameter set, wherein each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, wherein the second-type frame comprises neither the downmixed signal nor the stereo parameter set, and wherein the multichannel audio signal processing method further comprises: decoding the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when determining that the Nth-frame bitstream is the fifth-type frame; determining, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when determining that the Nth-frame bitstream is the sixth-type frame; determining, according to the preset second rule, the k-frame stereo parameter sets in the at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on the fourth algorithm after determining that the Nth-frame bitstream is the second-type frame, wherein k is a positive integer greater than zero; and restoring the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 14. The multichannel audio signal processing method of claim 9, wherein a fifth-type frame comprises the downmixed signal and a stereo parameter set, wherein a sixth-type frame comprises the downmixed signal and does not comprise the stereo parameter set, wherein each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, wherein a third-type frame comprises the stereo parameter set and does not comprise the downmixed signal, wherein a fourth-type frame comprises neither the downmixed signal nor the stereo parameter set, wherein each of the third-type frame and the fourth-type frame is one case of the second-type frame, and wherein the multichannel audio signal processing method further comprises: decoding the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when determining that the Nth-frame bitstream is the fifth-type frame; determining, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when determining that the Nth-frame bitstream is the sixth-type frame; decoding the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when determining that the Nth-frame bitstream is the third-type frame; determining, according to the preset second rule, the k-frame stereo parameter sets in the at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtaining the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on the fourth algorithm when determining that the Nth-frame bitstream is the fourth-type frame, wherein k is a positive integer greater than zero; and restoring the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 15. An encoder, comprising: a memory configured to store instructions; and a processor coupled to the memory, wherein the instructions cause the processor to be configured to: mix Nth-frame audio signals on two of a plurality of channels based on a first algorithm to obtain an Nth-frame downmixed signal; detect whether the Nth-frame downmixed signal comprises a speech signal, wherein N is a positive integer greater than zero; encode the Nth-frame downmixed signal when the Nth-frame downmixed signal comprises the speech signal; encode the Nth-frame downmixed signal when the Nth-frame downmixed signal satisfies a preset audio frame encoding condition, and when detecting that the Nth-frame downmixed signal does not comprise the speech signal; and skip encoding the Nth-frame downmixed signal when the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition and when detecting that the Nth-frame downmixed signal does not comprise the speech signal.
  • 16. The encoder of claim 15, wherein the instructions further cause the processor to be configured to: encode the Nth-frame downmixed signal according to a preset speech frame encoding rate when detecting that the Nth-frame downmixed signal comprises the speech signal; encode the Nth-frame downmixed signal according to the preset speech frame encoding rate when the Nth-frame downmixed signal satisfies a preset speech frame encoding condition; and encode the Nth-frame downmixed signal according to a preset silence insertion descriptor (SID) frame encoding rate when the Nth-frame downmixed signal does not satisfy the preset speech frame encoding condition and satisfies a preset SID encoding condition, wherein the preset SID frame encoding rate is less than or equal to the preset speech frame encoding rate.
  • 17. The encoder of claim 15, wherein the instructions further cause the processor to be configured to: obtain an Nth-frame stereo parameter set according to the Nth-frame audio signals, wherein the Nth-frame stereo parameter set comprises Z stereo parameters, wherein the Z stereo parameters comprise a parameter used when mixing the Nth-frame audio signals, and wherein Z is a positive integer greater than zero; encode the Nth-frame stereo parameter set when detecting that the Nth-frame downmixed signal comprises the speech signal; encode at least one stereo parameter in the Nth-frame stereo parameter set when the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition and when detecting that the Nth-frame downmixed signal does not comprise the speech signal; and skip encoding the stereo parameter set when the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition and when detecting that the Nth-frame downmixed signal does not comprise the speech signal.
  • 18. The encoder of claim 17, wherein the instructions further cause the processor to be configured to: obtain X target stereo parameters according to the Z stereo parameters in the Nth-frame stereo parameter set based on a preset stereo parameter dimension reduction rule; and encode the X target stereo parameters, wherein X is a positive integer greater than zero and less than or equal to Z.
  • 19. The encoder of claim 16, wherein the instructions further cause the processor to be configured to: obtain an Nth-frame stereo parameter set according to the Nth-frame audio signals based on a first stereo parameter set generation manner, and encode the Nth-frame stereo parameter set when detecting that the Nth-frame audio signals comprise the speech signal, or when detecting that the Nth-frame audio signals do not comprise the speech signal, and when the Nth-frame audio signals satisfy the preset speech frame encoding condition; obtain the Nth-frame stereo parameter set according to the Nth-frame audio signals based on a second stereo parameter set generation manner when detecting that the Nth-frame audio signals do not comprise the speech signal and when the Nth-frame audio signals do not satisfy the preset speech frame encoding condition; encode at least one stereo parameter in the Nth-frame stereo parameter set when the Nth-frame stereo parameter set satisfies a preset stereo parameter encoding condition; and skip encoding the stereo parameter set when the Nth-frame stereo parameter set does not satisfy the preset stereo parameter encoding condition, wherein the first stereo parameter set generation manner and the second stereo parameter set generation manner satisfy at least one of the following conditions: a quantity of types of stereo parameters comprised in a stereo parameter set stipulated in the first stereo parameter set generation manner is greater than or equal to a quantity of types of stereo parameters comprised in a stereo parameter set stipulated in the second stereo parameter set generation manner; a quantity of stereo parameters comprised in the stereo parameter set stipulated in the first stereo parameter set generation manner is greater than or equal to a quantity of stereo parameters comprised in the stereo parameter set stipulated in the second stereo parameter set generation manner; a time-domain resolution of the stereo parameter stipulated in the first stereo parameter set generation manner is higher than or equal to a time-domain resolution of a corresponding stereo parameter stipulated in the second stereo parameter set generation manner; or a frequency-domain resolution of the stereo parameter stipulated in the first stereo parameter set generation manner is higher than or equal to a frequency-domain resolution of the corresponding stereo parameter stipulated in the second stereo parameter set generation manner.
  • 20. The encoder of claim 17, wherein the instructions further cause the processor to be configured to: encode the Nth-frame stereo parameter set according to a first encoding manner when detecting that the Nth-frame downmixed signal comprises the speech signal and the Nth-frame downmixed signal satisfies the preset audio frame encoding condition; and encode the at least one stereo parameter in the Nth-frame stereo parameter set according to a second encoding manner when the Nth-frame downmixed signal does not satisfy the preset audio frame encoding condition, wherein an encoding rate stipulated in the first encoding manner is greater than or equal to an encoding rate stipulated in the second encoding manner, or wherein a quantization precision stipulated in the first encoding manner is higher than or equal to a quantization precision stipulated in the second encoding manner for any stereo parameter in the Nth-frame stereo parameter set.
  • 21. The encoder of claim 17, wherein the instructions further cause the processor to be configured to: determine that the preset stereo parameter encoding condition comprises DL≥D0 when the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel level difference (ILD), wherein DL represents a degree by which the ILD deviates from a first standard, wherein the first standard is determined based on a second algorithm according to T-frame stereo parameter sets preceding the Nth-frame stereo parameter set, and wherein T is a positive integer greater than zero; determine that the preset stereo parameter encoding condition comprises DT≥D1 when the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel time difference (ITD), wherein DT represents a degree by which the ITD deviates from a second standard, and wherein the second standard is determined based on a third algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set; and determine that the preset stereo parameter encoding condition comprises DP≥D2 when the at least one stereo parameter in the Nth-frame stereo parameter set comprises an inter-channel phase difference (IPD), wherein DP represents a degree by which the IPD deviates from a third standard, and wherein the third standard is determined based on a fourth algorithm according to the T-frame stereo parameter sets preceding the Nth-frame stereo parameter set.
  • 22. The encoder of claim 21, wherein DL, DT, and DP respectively satisfy the following expressions:
  • 23. A decoder, comprising: a memory configured to store instructions; and a processor coupled to the memory, wherein the instructions cause the processor to be configured to: receive a bitstream, wherein the bitstream comprises at least two frames, wherein the at least two frames comprise at least one first-type frame or at least one second-type frame, wherein the first-type frame comprises a downmixed signal, and wherein the second-type frame does not comprise the downmixed signal; decode an Nth-frame bitstream to obtain an Nth-frame downmixed signal when the Nth-frame bitstream is the first-type frame; and determine, according to a preset first rule, m-frame downmixed signals in at least one-frame downmixed signal preceding the Nth-frame downmixed signal, and obtain the Nth-frame downmixed signal according to the m-frame downmixed signals based on a first algorithm when the Nth-frame bitstream is the second-type frame, wherein m is a positive integer greater than zero, wherein N is a positive integer greater than one, and wherein the Nth-frame downmixed signal is received from an encoder after mixing Nth-frame audio signals on two of a plurality of channels based on a second algorithm.
  • 24. The decoder of claim 23, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein the second-type frame comprises the stereo parameter set and does not comprise the downmixed signal, and wherein the instructions further cause the processor to be configured to: decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when the Nth-frame bitstream is the first-type frame; decode the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when the Nth-frame bitstream is the second-type frame; and restore the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 25. The decoder of claim 23, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein the second-type frame comprises neither the downmixed signal nor the stereo parameter set, and wherein the instructions further cause the processor to be configured to: decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when the Nth-frame bitstream is the first-type frame; determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when the Nth-frame bitstream is the second-type frame, wherein k is a positive integer greater than zero; and restore the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 26. The decoder of claim 23, wherein the first-type frame comprises the downmixed signal and a stereo parameter set, wherein a third-type frame comprises the stereo parameter set and does not comprise the downmixed signal, wherein a fourth-type frame comprises neither the downmixed signal nor the stereo parameter set, wherein each of the third-type frame and the fourth-type frame is one case of the second-type frame, and wherein the instructions further cause the processor to be configured to: decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when the Nth-frame bitstream is the first-type frame; decode the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when the Nth-frame bitstream is the third-type frame; determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when the Nth-frame bitstream is the fourth-type frame, wherein k is a positive integer greater than zero; and restore the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 27. The decoder of claim 23, wherein a fifth-type frame comprises both the downmixed signal and a stereo parameter set, wherein a sixth-type frame comprises the downmixed signal and does not comprise the stereo parameter set, wherein each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, wherein the second-type frame comprises neither the downmixed signal nor the stereo parameter set, and wherein the instructions further cause the processor to be configured to: decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when the Nth-frame bitstream is the fifth-type frame; determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when the Nth-frame bitstream is the sixth-type frame; determine, according to the preset second rule, the k-frame stereo parameter sets in the at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on the fourth algorithm when the Nth-frame bitstream is the second-type frame, wherein k is a positive integer greater than zero; and restore the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
  • 28. The decoder of claim 23, wherein a fifth-type frame comprises both the downmixed signal and a stereo parameter set, wherein a sixth-type frame comprises the downmixed signal and does not comprise the stereo parameter set, wherein each of the fifth-type frame and the sixth-type frame is one case of the first-type frame, wherein a third-type frame comprises the stereo parameter set and does not comprise the downmixed signal, wherein a fourth-type frame comprises neither the downmixed signal nor the stereo parameter set, wherein each of the third-type frame and the fourth-type frame is one case of the second-type frame, and wherein the instructions further cause the processor to be configured to: decode the Nth-frame bitstream to obtain an Nth-frame stereo parameter set when the Nth-frame bitstream is the fifth-type frame; determine, according to a preset second rule, k-frame stereo parameter sets in at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on a fourth algorithm when the Nth-frame bitstream is the sixth-type frame; decode the Nth-frame bitstream to obtain the Nth-frame stereo parameter set when the Nth-frame bitstream is the third-type frame; determine, according to the preset second rule, the k-frame stereo parameter sets in the at least one-frame stereo parameter set preceding the Nth-frame stereo parameter set, and obtain the Nth-frame stereo parameter set according to the k-frame stereo parameter sets based on the fourth algorithm when the Nth-frame bitstream is the fourth-type frame, wherein k is a positive integer greater than zero; and restore the Nth-frame downmixed signal to the Nth-frame audio signals according to at least one stereo parameter in the Nth-frame stereo parameter set based on a third algorithm.
Continuations (1)
Number Date Country
Parent PCT/CN2016/100617 Sep 2016 US
Child 16368208 US