Audio coding method and apparatus

Information

  • Patent Grant
  • Patent Number
    12,136,430
  • Date Filed
    Friday, August 27, 2021
  • Date Issued
    Tuesday, November 5, 2024
  • Inventors
  • Original Assignees
    • Top Quality Telephony, LLC (Austin, TX, US)
  • Examiners
    • Shin; Seong-Ah A
  • Agents
    • Conley Rose, P.C.
    • Rodolph; Grant
    • Beaulieu; Nicholas K.
Abstract
A method comprises determining a first modification weight according to linear spectral frequency (LSF) differences of a current frame and LSF differences of a previous frame of the current frame when a signal characteristic of the current frame meets a preset modification condition, modifying a linear predictive parameter of the current frame according to the determined first modification weight, and coding the current frame according to the modified linear predictive parameter.
Description
TECHNICAL FIELD

The present application relates to the communications field, and in particular, to an audio coding method and apparatus.


BACKGROUND

With the constant development of technologies, users have increasingly higher requirements on the audio quality of electronic devices. A main method for improving audio quality is to increase the bandwidth of the audio. If an electronic device codes the audio in a conventional coding manner to increase the bandwidth, the bit rate of the coded information of the audio greatly increases, and transmitting that coded information between two electronic devices therefore occupies a relatively wide network transmission bandwidth. The issue to be addressed is thus to code audio having a wider bandwidth while the bit rate of the coded information remains unchanged or changes only slightly. A proposed solution to this issue is the bandwidth extension technology, which is divided into a time domain bandwidth extension technology and a frequency domain bandwidth extension technology. The present disclosure relates to the time domain bandwidth extension technology.


In the time domain bandwidth extension technology, a linear predictive parameter, such as a linear predictive coding (LPC) coefficient, a linear spectral pair (LSP) coefficient, an immittance spectral pair (ISP) coefficient, or a linear spectral frequency (LSF) coefficient, of each audio frame in the audio is generally calculated using a linear predictive algorithm. When the audio is coded for transmission, it is coded according to the linear predictive parameter of each audio frame. However, when the codec error precision requirement is relatively high, this coding manner causes discontinuity of the spectrum between audio frames.


SUMMARY

Embodiments of the present disclosure provide an audio coding method and apparatus, with which audio having a wider bandwidth can be coded while the bit rate remains unchanged or changes only slightly, and the spectrum between audio frames is steadier.


According to a first aspect, an embodiment of the present disclosure provides an audio coding method, including, for each audio frame, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, determining a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame, or when the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, determining a second modification weight, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame, modifying a linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight, and coding the audio frame according to a modified linear predictive parameter of the audio frame.


With reference to the first aspect, in a first possible implementation manner of the first aspect, determining a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame includes determining the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula:







w[i] = lsf_new_diff[i] / lsf_old_diff[i], when lsf_new_diff[i] < lsf_old_diff[i]
w[i] = lsf_old_diff[i] / lsf_new_diff[i], when lsf_new_diff[i] ≥ lsf_old_diff[i]


where w[i] is the first modification weight, lsf_new_diff[i] is the LSF differences of the audio frame, lsf_old_diff[i] is the LSF differences of the previous audio frame, i is an order of the LSF differences, a value of i ranges from 0 to M−1, and M is an order of the linear predictive parameter.
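Under the stated definition, w[i] is the ratio of the smaller of the two LSF differences to the larger one, so that 0 < w[i] ≤ 1. A minimal Python sketch of this piecewise rule (the function name is illustrative, and nonzero differences are assumed):

```python
def first_modification_weight(lsf_new_diff, lsf_old_diff):
    """Compute w[i] per the piecewise formula: the smaller LSF difference
    divided by the larger one, order by order (assumes nonzero values)."""
    w = []
    for new_d, old_d in zip(lsf_new_diff, lsf_old_diff):
        if new_d < old_d:
            w.append(new_d / old_d)
        else:
            w.append(old_d / new_d)
    return w
```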


With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, determining a second modification weight includes determining the second modification weight as a preset modification weight value, where the preset modification weight value is greater than 0, and is less than or equal to 1.


With reference to the first aspect, the first possible implementation manner of the first aspect, or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, modifying a linear predictive parameter of the audio frame according to the determined first modification weight includes modifying the linear predictive parameter of the audio frame according to the first modification weight using the following formula: L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i], where w[i] is the first modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is a linear predictive parameter of the previous audio frame, i is an order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


With reference to the first aspect, the first possible implementation manner of the first aspect, the second possible implementation manner of the first aspect, or the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, modifying a linear predictive parameter of the audio frame according to the determined second modification weight includes modifying the linear predictive parameter of the audio frame according to the second modification weight using the following formula: L[i]=(1−y)*L_old[i]+y*L_new[i], where y is the second modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is the linear predictive parameter of the previous audio frame, i is the order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.
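Both modification formulas above are the same weighted interpolation between the previous frame's and the current frame's linear predictive parameters; only the weight differs. A minimal sketch (names are illustrative; the weight may be the per-order list w[i] or the scalar y):

```python
def modify_lp_parameters(l_new, l_old, w):
    """L[i] = (1 - w[i]) * L_old[i] + w[i] * L_new[i].

    w may be a scalar (the second modification weight y) or a per-order
    sequence (the first modification weight w[i])."""
    if isinstance(w, (int, float)):
        w = [w] * len(l_new)  # broadcast the scalar weight to every order
    return [(1 - wi) * lo + wi * ln for wi, lo, ln in zip(w, l_old, l_new)]
```

A weight close to 1 keeps the current frame's parameters nearly unchanged, while a smaller weight pulls them toward the previous frame's parameters, smoothing the spectrum.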


With reference to the first aspect, the first possible implementation manner of the first aspect, the second possible implementation manner of the first aspect, the third possible implementation manner of the first aspect, or the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition includes the audio frame is not a transition frame, where the transition frame includes a transition frame from a non-fricative to a fricative or a transition frame from a fricative to a non-fricative, and a signal characteristic of the audio frame and a signal characteristic of a previous audio frame do not meet a preset modification condition includes the audio frame is a transition frame.


With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, that the audio frame is a transition frame from a fricative to a non-fricative includes that a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient, and that the audio frame is not a transition frame from a fricative to a non-fricative includes that the spectrum tilt frequency of the previous audio frame is not greater than the first spectrum tilt frequency threshold and/or the coding type of the audio frame is not transient.


With reference to the fifth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, the audio frame is a transition frame from a fricative to a non-fricative includes a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold, and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold, and the audio frame is not a transition frame from a fricative to a non-fricative includes the spectrum tilt frequency of the previous audio frame is not greater than the first spectrum tilt frequency threshold, and/or the spectrum tilt frequency of the audio frame is not less than the second spectrum tilt frequency threshold.


With reference to the fifth possible implementation manner of the first aspect, in an eighth possible implementation manner of the first aspect, the audio frame is a transition frame from a non-fricative to a fricative includes a spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold, a coding type of the previous audio frame is one of the four types, voiced, generic, transient, and audio, and a spectrum tilt frequency of the audio frame is greater than a fourth spectrum tilt frequency threshold, and the audio frame is not a transition frame from a non-fricative to a fricative includes the spectrum tilt frequency of the previous audio frame is not less than the third spectrum tilt frequency threshold, and/or the coding type of the previous audio frame is not one of the four types, voiced, generic, transient, and audio, and/or the spectrum tilt frequency of the audio frame is not greater than the fourth spectrum tilt frequency threshold.


With reference to the fifth possible implementation manner of the first aspect, in a ninth possible implementation manner of the first aspect, the audio frame is a transition frame from a fricative to a non-fricative includes a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient.


With reference to the fifth possible implementation manner of the first aspect, in a tenth possible implementation manner of the first aspect, the audio frame is a transition frame from a fricative to a non-fricative includes a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold.


With reference to the fifth possible implementation manner of the first aspect, in an eleventh possible implementation manner of the first aspect, the audio frame is a transition frame from a non-fricative to a fricative includes a spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold, a coding type of the previous audio frame is one of four types, voiced, generic, transient, and audio, and a spectrum tilt frequency of the audio frame is greater than a fourth spectrum tilt frequency threshold.


According to a second aspect, an embodiment of the present disclosure provides an audio coding apparatus, including a determining unit, a modification unit, and a coding unit, where the determining unit is configured to, for each audio frame, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, determine a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame, or when the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, determine a second modification weight, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame, the modification unit is configured to modify a linear predictive parameter of the audio frame according to the first modification weight or the second modification weight determined by the determining unit, and the coding unit is configured to code the audio frame according to a modified linear predictive parameter of the audio frame, where the modified linear predictive parameter is obtained after modification by the modification unit.


With reference to the second aspect, in a first possible implementation manner of the second aspect, the determining unit is configured to determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula:







w[i] = lsf_new_diff[i] / lsf_old_diff[i], when lsf_new_diff[i] < lsf_old_diff[i]
w[i] = lsf_old_diff[i] / lsf_new_diff[i], when lsf_new_diff[i] ≥ lsf_old_diff[i]







where w[i] is the first modification weight, lsf_new_diff[i] is the LSF differences of the audio frame, lsf_old_diff[i] is the LSF differences of the previous audio frame, i is an order of the LSF differences, a value of i ranges from 0 to M−1, and M is an order of the linear predictive parameter.


With reference to the second aspect or the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the determining unit is configured to determine the second modification weight as a preset modification weight value, where the preset modification weight value is greater than 0, and is less than or equal to 1.


With reference to the second aspect, the first possible implementation manner of the second aspect, or the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the modification unit is configured to modify the linear predictive parameter of the audio frame according to the first modification weight using the following formula: L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i], where w[i] is the first modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is a linear predictive parameter of the previous audio frame, i is an order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


With reference to the second aspect, the first possible implementation manner of the second aspect, the second possible implementation manner of the second aspect, or the third possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the modification unit is configured to modify the linear predictive parameter of the audio frame according to the second modification weight using the following formula: L[i]=(1−y)*L_old[i]+y*L_new[i], where y is the second modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is the linear predictive parameter of the previous audio frame, i is the order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


With reference to the second aspect, the first possible implementation manner of the second aspect, the second possible implementation manner of the second aspect, the third possible implementation manner of the second aspect, or the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the determining unit is configured to, for each audio frame in audio, when the audio frame is not a transition frame, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame, and when the audio frame is a transition frame, determine the second modification weight, where the transition frame includes a transition frame from a non-fricative to a fricative, or a transition frame from a fricative to a non-fricative.


With reference to the fifth possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, the determining unit is configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a coding type of the audio frame is not transient, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame, and when the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the coding type of the audio frame is transient, determine the second modification weight.


With reference to the fifth possible implementation manner of the second aspect, in a seventh possible implementation manner of the second aspect, the determining unit is configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a spectrum tilt frequency of the audio frame is not less than a second spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame, and when the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the spectrum tilt frequency of the audio frame is less than the second spectrum tilt frequency threshold, determine the second modification weight.


With reference to the fifth possible implementation manner of the second aspect, in an eighth possible implementation manner of the second aspect, the determining unit is configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not less than a third spectrum tilt frequency threshold, and/or a coding type of the previous audio frame is not one of four types, voiced, generic, transient, and audio, and/or a spectrum tilt frequency of the audio frame is not greater than a fourth spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame, and when the spectrum tilt frequency of the previous audio frame is less than the third spectrum tilt frequency threshold, the coding type of the previous audio frame is one of the four types, voiced, generic, transient, and audio, and the spectrum tilt frequency of the audio frame is greater than the fourth spectrum tilt frequency threshold, determine the second modification weight.


In the embodiments of the present disclosure, for each audio frame in audio, when it is determined that a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, a first modification weight is determined according to LSF differences of the audio frame and LSF differences of the previous audio frame, or when it is determined that the signal characteristic of the audio frame and the signal characteristic of a previous audio frame do not meet the preset modification condition, a second modification weight is determined, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame. A linear predictive parameter of the audio frame is modified according to the determined first modification weight or the determined second modification weight and the audio frame is coded according to a modified linear predictive parameter of the audio frame. In this way, different modification weights are determined according to whether the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame and the linear predictive parameter of the audio frame is modified so that a spectrum between audio frames is steadier. Moreover, the audio frame is coded according to the modified linear predictive parameter of the audio frame so that inter-frame continuity of a spectrum recovered by decoding is enhanced while a bit rate remains unchanged, and therefore, the spectrum recovered by decoding is closer to an original spectrum and coding performance is improved.





BRIEF DESCRIPTION OF DRAWINGS

To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. The accompanying drawings in the following description show merely some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.



FIG. 1A is a schematic flowchart of an audio coding method according to an embodiment of the present disclosure.



FIG. 1B is a diagram of a comparison between an actual spectrum and LSF differences according to an embodiment of the present disclosure.



FIG. 2 is an example of an application scenario of an audio coding method according to an embodiment of the present disclosure.



FIG. 3 is a schematic structural diagram of an audio coding apparatus according to an embodiment of the present disclosure.



FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.





DESCRIPTION OF EMBODIMENTS

The following clearly describes the technical solutions in the embodiments of the present disclosure with reference to the accompanying drawings in the embodiments of the present disclosure. The described embodiments are merely a part rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.


Referring to FIG. 1A, a flowchart of an audio coding method according to an embodiment of the present disclosure is shown and includes the following steps.


Step 101: For each audio frame in audio, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, an electronic device determines a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame. When the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, the electronic device determines a second modification weight, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame.


Step 102: The electronic device modifies a linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight.


The linear predictive parameter may include an LPC, an LSP, an ISP, an LSF, or the like.


Step 103: The electronic device codes the audio frame according to a modified linear predictive parameter of the audio frame.


In this embodiment, for each audio frame in the audio, when the signal characteristic of the audio frame and the signal characteristic of the previous audio frame meet the preset modification condition, the electronic device determines the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the signal characteristics do not meet the preset modification condition, the electronic device determines the second modification weight. The electronic device then modifies the linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight and codes the audio frame according to the modified linear predictive parameter. In this way, different modification weights are determined according to whether the signal characteristic of the audio frame is similar to that of the previous audio frame, and the linear predictive parameter is modified accordingly, so that the spectrum between audio frames is steadier. In addition, the second modification weight, determined when the signal characteristics are not similar, may be set as close to 1 as possible, so that the original spectrum feature of the audio frame is kept as much as possible when the signal characteristic of the audio frame is not similar to that of the previous audio frame; the auditory quality of the audio obtained after the coded information is decoded is therefore better.


Specific implementation of how the electronic device determines whether the signal characteristic of the audio frame and the signal characteristic of the previous audio frame meet the preset modification condition in step 101 is related to specific implementation of the modification condition. A description is provided below using an example.


In a possible implementation manner, determining, by the electronic device, that the signal characteristic of the audio frame and the signal characteristic of the previous audio frame meet the preset modification condition may include determining that the audio frame is not a transition frame, where the transition frame includes a transition frame from a non-fricative to a fricative or a transition frame from a fricative to a non-fricative. Determining, by the electronic device, that the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition may include determining that the audio frame is a transition frame.


In a possible implementation manner, determining whether the audio frame is the transition frame from a fricative to a non-fricative may be implemented by determining whether a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold, and whether a coding type of the audio frame is transient. Determining that the audio frame is a transition frame from a fricative to a non-fricative may include determining that the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the coding type of the audio frame is transient. Determining that the audio frame is not a transition frame from a fricative to a non-fricative may include determining that the spectrum tilt frequency of the previous audio frame is not greater than the first spectrum tilt frequency threshold and/or the coding type of the audio frame is not transient.


In another possible implementation manner, determining whether the audio frame is the transition frame from a fricative to a non-fricative may be implemented by determining whether a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and determining whether a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold. Determining that the audio frame is the transition frame from a fricative to a non-fricative may include determining that the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the spectrum tilt frequency of the audio frame is less than the second spectrum tilt frequency threshold. Determining that the audio frame is not the transition frame from a fricative to a non-fricative may include determining that the spectrum tilt frequency of the previous audio frame is not greater than the first spectrum tilt frequency threshold and/or the spectrum tilt frequency of the audio frame is not less than the second spectrum tilt frequency threshold. Specific values of the first spectrum tilt frequency threshold and the second spectrum tilt frequency threshold are not limited in this embodiment of the present disclosure, and a relationship between the two values is not limited either. Optionally, in an embodiment of the present disclosure, the value of the first spectrum tilt frequency threshold may be 5.0; in another embodiment of the present disclosure, the value of the second spectrum tilt frequency threshold may be 1.0.


In a possible implementation manner, determining whether the audio frame is the transition frame from a non-fricative to a fricative may be implemented by determining whether a spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold, determining whether a coding type of the previous audio frame is one of four types, voiced, generic, transient, and audio, and determining whether a spectrum tilt frequency of the audio frame is greater than a fourth spectrum tilt frequency threshold. Determining that the audio frame is the transition frame from a non-fricative to a fricative may include determining that the spectrum tilt frequency of the previous audio frame is less than the third spectrum tilt frequency threshold, the coding type of the previous audio frame is one of the four types, and the spectrum tilt frequency of the audio frame is greater than the fourth spectrum tilt frequency threshold. Determining that the audio frame is not the transition frame from a non-fricative to a fricative may include determining that the spectrum tilt frequency of the previous audio frame is not less than the third spectrum tilt frequency threshold, and/or the coding type of the previous audio frame is not one of the four types, and/or the spectrum tilt frequency of the audio frame is not greater than the fourth spectrum tilt frequency threshold. Specific values of the third spectrum tilt frequency threshold and the fourth spectrum tilt frequency threshold are not limited in this embodiment of the present disclosure, and a relationship between the two values is not limited either. In an embodiment of the present disclosure, the value of the third spectrum tilt frequency threshold may be 3.0; in another embodiment of the present disclosure, the value of the fourth spectrum tilt frequency threshold may be 5.0.
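Putting the example thresholds above together, a transition-frame test could be sketched as follows. The function and parameter names are assumptions, as is combining the two alternative fricative-to-non-fricative criteria with a logical OR; the patent presents them as separate implementation manners:

```python
def is_transition_frame(prev_tilt, cur_tilt, cur_type, prev_type):
    """Illustrative transition-frame test using the example threshold
    values from the text (5.0 and 1.0 for fricative -> non-fricative;
    3.0 and 5.0 for non-fricative -> fricative)."""
    FIRST_THRESH, SECOND_THRESH = 5.0, 1.0   # fricative -> non-fricative
    THIRD_THRESH, FOURTH_THRESH = 3.0, 5.0   # non-fricative -> fricative

    # Fricative -> non-fricative: previous frame has a high spectrum tilt
    # frequency, and the current frame is transient or has a low tilt.
    fric_to_nonfric = prev_tilt > FIRST_THRESH and (
        cur_type == "transient" or cur_tilt < SECOND_THRESH)

    # Non-fricative -> fricative: previous frame has a low tilt and one of
    # the four coding types, and the current frame has a high tilt.
    nonfric_to_fric = (prev_tilt < THIRD_THRESH
                       and prev_type in ("voiced", "generic", "transient", "audio")
                       and cur_tilt > FOURTH_THRESH)

    return fric_to_nonfric or nonfric_to_fric
```

When this test returns False, the frame satisfies the preset modification condition and the first modification weight is used; when it returns True, the second modification weight is used instead.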


In step 101, the determining, by an electronic device, a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame may include determining, by the electronic device, the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula:










w[i]=lsf_new_diff[i]/lsf_old_diff[i], when lsf_new_diff[i]&lt;lsf_old_diff[i]
w[i]=lsf_old_diff[i]/lsf_new_diff[i], when lsf_new_diff[i]≥lsf_old_diff[i],  (1)








where w[i] is the first modification weight, lsf_new_diff[i] is the LSF differences of the audio frame, lsf_new_diff[i]=lsf_new[i]−lsf_new[i−1], lsf_new[i] is the ith-order LSF parameter of the audio frame, lsf_new[i−1] is the (i−1)th-order LSF parameter of the audio frame, lsf_old_diff[i] is the LSF differences of the previous audio frame, lsf_old_diff[i]=lsf_old[i]−lsf_old[i−1], lsf_old[i] is the ith-order LSF parameter of the previous audio frame, lsf_old[i−1] is the (i−1)th-order LSF parameter of the previous audio frame, i is an order of the LSF parameter and an order of the LSF differences, a value of i ranges from 0 to M−1, and M is an order of the linear predictive parameter.


A principle of the foregoing formula is as follows.


Refer to FIG. 1B, which is a diagram of a comparison between an actual spectrum and LSF differences according to an embodiment of the present disclosure. As can be seen from the figure, the LSF differences lsf_new_diff[i] of the audio frame reflect a spectrum energy trend of the audio frame. Smaller lsf_new_diff[i] indicates larger spectrum energy at the corresponding frequency point.


Smaller w[i]=lsf_new_diff[i]/lsf_old_diff[i] indicates a greater spectrum energy difference between the previous frame and the current frame at the frequency point corresponding to lsf_new[i], and that the spectrum energy of the audio frame is much greater than the spectrum energy of the corresponding frequency point of the previous audio frame.


Smaller w[i]=lsf_old_diff[i]/lsf_new_diff[i] likewise indicates a greater spectrum energy difference between the previous frame and the current frame at the frequency point corresponding to lsf_new[i], but that the spectrum energy of the audio frame is much smaller than the spectrum energy of the corresponding frequency point of the previous audio frame.


Therefore, to make the spectrum steadier between the previous frame and the current frame, w[i] may be used as the weight of lsf_new[i] of the audio frame and 1−w[i] may be used as the weight of the corresponding frequency point of the previous audio frame. Details are shown in formula (2).
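The weight computation of formula (1) can be sketched as below. This is an illustrative sketch only: the function name is hypothetical, and the handling of the 0th-order difference (where no (i−1)th-order LSF exists) is an assumption, since the text defines lsf_*_diff[i] only as lsf_*[i]−lsf_*[i−1].

```python
def first_modification_weight(lsf_new, lsf_old):
    """Compute w[i] per formula (1): the ratio of the smaller LSF
    difference to the larger one, so that 0 < w[i] <= 1 and w[i] is
    small exactly when the two frames' LSF differences diverge.
    lsf_new/lsf_old are the M-order LSF vectors of the current and
    previous audio frames."""
    M = len(lsf_new)
    w = [0.0] * M
    for i in range(M):
        # lsf_*_diff[i] = lsf_*[i] - lsf_*[i-1]; for i == 0 we take the
        # 0th-order LSF value itself (a boundary assumption).
        new_diff = lsf_new[i] - lsf_new[i - 1] if i > 0 else lsf_new[0]
        old_diff = lsf_old[i] - lsf_old[i - 1] if i > 0 else lsf_old[0]
        if new_diff < old_diff:
            w[i] = new_diff / old_diff
        else:
            w[i] = old_diff / new_diff
    return w
```

For example, with lsf_new = [1.0, 3.0] and lsf_old = [2.0, 3.0], the 0th-order differences are 1.0 and 2.0, giving w[0] = 0.5, and the 1st-order differences are 2.0 and 1.0, giving w[1] = 0.5.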


In step 101, determining, by the electronic device, the second modification weight may include determining, by the electronic device, the second modification weight as a preset modification weight value, where the preset modification weight value is greater than 0 and is less than or equal to 1.


Preferably, the preset modification weight value is a value close to 1.


In step 102, modifying, by the electronic device, the linear predictive parameter of the audio frame according to the determined first modification weight may include modifying the linear predictive parameter of the audio frame according to the first modification weight using the following formula:

L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i],  (2)

where w[i] is the first modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is a linear predictive parameter of the previous audio frame, i is an order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


In step 102, modifying, by the electronic device, the linear predictive parameter of the audio frame according to the determined second modification weight may include modifying the linear predictive parameter of the audio frame according to the second modification weight using the following formula:

L[i]=(1−y)*L_old[i]+y*L_new[i],  (3)

where y is the second modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is the linear predictive parameter of the previous audio frame, i is the order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.
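Formulas (2) and (3) share the same blending structure, differing only in whether the weight is the per-order w[i] or the scalar y. A minimal sketch, with a hypothetical helper name, follows; it accepts either a per-order weight list (formula (2)) or a single scalar (formula (3)).

```python
def modify_linear_predictive_parameter(l_new, l_old, weight):
    """Blend per formula (2)/(3): L[i] = (1 - w)*L_old[i] + w*L_new[i].
    `weight` is either a list of per-order first modification weights
    w[i] or a scalar second modification weight y in (0, 1]."""
    M = len(l_new)
    modified = []
    for i in range(M):
        w = weight[i] if isinstance(weight, list) else weight
        modified.append((1.0 - w) * l_old[i] + w * l_new[i])
    return modified
```

A weight of 1 keeps the current frame's parameter unchanged, while smaller weights pull the parameter toward the previous frame's value, which is what steadies the spectrum across frames.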


In step 103, for how the electronic device codes the audio frame according to the modified linear predictive parameter of the audio frame, refer to a related time domain bandwidth extension technology, and details are not described in the present disclosure.


The audio coding method in this embodiment of the present disclosure may be applied to a time domain bandwidth extension method shown in FIG. 2. In the time domain bandwidth extension method, an original audio signal is divided into a low-band signal and a high-band signal. For the low-band signal, processing such as low-band signal coding, low-band excitation signal preprocessing, linear prediction (LP) synthesis, and time-domain envelope calculation and quantization is performed in sequence. For the high-band signal, processing such as high-band signal preprocessing, LP analysis, and LPC quantization is performed in sequence, and multiplexing (MUX) is performed on the audio signal according to a result of the low-band signal coding, a result of the LPC quantization, and a result of the time-domain envelope calculation and quantization.


The LPC quantization corresponds to step 101 and step 102 in this embodiment of the present disclosure, and the MUX performed on the audio signal corresponds to step 103 in this embodiment of the present disclosure.


Refer to FIG. 3, which is a schematic structural diagram of an audio coding apparatus according to an embodiment of the present disclosure. The apparatus 300 may be disposed in an electronic device. The apparatus 300 may include a determining unit 310, a modification unit 320, and a coding unit 330.


The determining unit 310 is configured to, for each audio frame in audio, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, determine a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame. When the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, determine a second modification weight, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame.


The modification unit 320 is configured to modify a linear predictive parameter of the audio frame according to the first modification weight or the second modification weight determined by the determining unit 310.


The coding unit 330 is configured to code the audio frame according to a modified linear predictive parameter of the audio frame, where the modified linear predictive parameter is obtained after modification by the modification unit 320.


Optionally, the determining unit 310 may be configured to determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula, which may be substantially similar to formula 1:







w[i]=lsf_new_diff[i]/lsf_old_diff[i], when lsf_new_diff[i]&lt;lsf_old_diff[i]
w[i]=lsf_old_diff[i]/lsf_new_diff[i], when lsf_new_diff[i]≥lsf_old_diff[i],







where w[i] is the first modification weight, lsf_new_diff[i] is the LSF differences of the audio frame, lsf_old_diff[i] is the LSF differences of the previous audio frame, i is an order of the LSF differences, a value of i ranges from 0 to M−1, and M is an order of the linear predictive parameter.


Optionally, the determining unit 310 may be configured to determine the second modification weight as a preset modification weight value, where the preset modification weight value is greater than 0, and is less than or equal to 1.


Optionally, the modification unit 320 may be configured to modify the linear predictive parameter of the audio frame according to the first modification weight using the following formula, which may be substantially similar to formula 2:

L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i],

where w[i] is the first modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is a linear predictive parameter of the previous audio frame, i is an order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


Optionally, the modification unit 320 may be configured to modify the linear predictive parameter of the audio frame according to the second modification weight using the following formula, which may be substantially similar to formula 3:

L[i]=(1−y)*L_old[i]+y*L_new[i],

where y is the second modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is the linear predictive parameter of the previous audio frame, i is the order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


Optionally, the determining unit 310 may be configured to, for each audio frame in the audio, when the audio frame is not a transition frame, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the audio frame is a transition frame, determine the second modification weight, where the transition frame includes a transition frame from a non-fricative to a fricative, or a transition frame from a fricative to a non-fricative.


Optionally, the determining unit 310 may be configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a coding type of the audio frame is not transient, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the coding type of the audio frame is transient, determine the second modification weight.


Optionally, the determining unit 310 may be configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a spectrum tilt frequency of the audio frame is not less than a second spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the spectrum tilt frequency of the audio frame is less than the second spectrum tilt frequency threshold, determine the second modification weight.


Optionally, the determining unit 310 may be configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not less than a third spectrum tilt frequency threshold, and/or a coding type of the previous audio frame is not one of four types, voiced, generic, transient, and audio, and/or a spectrum tilt frequency of the audio frame is not greater than a fourth spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is less than the third spectrum tilt frequency threshold, the coding type of the previous audio frame is one of the four types, voiced, generic, transient, or audio, and the spectrum tilt frequency of the audio frame is greater than the fourth spectrum tilt frequency threshold, determine the second modification weight.


In this embodiment, for each audio frame in audio, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, an electronic device determines a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame. When a signal characteristic of the audio frame and a signal characteristic of a previous audio frame do not meet a preset modification condition, the electronic device determines a second modification weight. The electronic device modifies a linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight and codes the audio frame according to a modified linear predictive parameter of the audio frame. In this way, different modification weights are determined according to whether the signal characteristic of the audio frame and the signal characteristic of the previous audio frame meet the preset modification condition, and the linear predictive parameter of the audio frame is modified so that a spectrum between audio frames is steadier. Moreover, the electronic device codes the audio frame according to the modified linear predictive parameter of the audio frame, and therefore, audio having a wider bandwidth is coded while a bit rate remains unchanged or a bit rate slightly changes.


Refer to FIG. 4, which is a structural diagram of a first node according to an embodiment of the present disclosure. The first node 400 includes a processor 410, a memory 420, a transceiver 430, and a bus 440.


The processor 410, the memory 420, and the transceiver 430 are connected to each other using the bus 440, and the bus 440 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended ISA (EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of representation, the bus in FIG. 4 is represented using only one bold line, but it does not indicate that there is only one bus or only one type of bus.


The memory 420 is configured to store a program. The program may include program code, and the program code includes a computer operation instruction. The memory 420 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory.


The transceiver 430 is configured to connect to and communicate with other devices.


The processor 410 executes the program code and is configured to, for each audio frame in audio, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, determine a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame. When the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, determine a second modification weight, where the preset modification condition is used to determine that the signal characteristic of the audio frame is similar to the signal characteristic of the previous audio frame, modify a linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight, and code the audio frame according to a modified linear predictive parameter of the audio frame.


Optionally, the processor 410 may be configured to determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula, which may be substantially similar to formula 1:







w[i]=lsf_new_diff[i]/lsf_old_diff[i], when lsf_new_diff[i]&lt;lsf_old_diff[i]
w[i]=lsf_old_diff[i]/lsf_new_diff[i], when lsf_new_diff[i]≥lsf_old_diff[i],







where w[i] is the first modification weight, lsf_new_diff[i] is the LSF differences of the audio frame, lsf_old_diff[i] is the LSF differences of the previous audio frame, i is an order of the LSF differences, a value of i ranges from 0 to M−1, and M is an order of the linear predictive parameter.


Optionally, the processor 410 may be configured to determine the second modification weight as 1, or determine the second modification weight as a preset modification weight value, where the preset modification weight value is greater than 0, and is less than or equal to 1.


Optionally, the processor 410 may be configured to modify the linear predictive parameter of the audio frame according to the first modification weight using the following formula, which may be substantially similar to formula 2:

L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i],

where w[i] is the first modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is a linear predictive parameter of the previous audio frame, i is an order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


Optionally, the processor 410 may be configured to modify the linear predictive parameter of the audio frame according to the second modification weight using the following formula, which may be substantially similar to formula 3:

L[i]=(1−y)*L_old[i]+y*L_new[i],

where y is the second modification weight, L[i] is the modified linear predictive parameter of the audio frame, L_new[i] is the linear predictive parameter of the audio frame, L_old[i] is the linear predictive parameter of the previous audio frame, i is the order of the linear predictive parameter, the value of i ranges from 0 to M−1, and M is the order of the linear predictive parameter.


Optionally, the processor 410 may be configured to, for each audio frame in the audio, when the audio frame is not a transition frame, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the audio frame is a transition frame, determine the second modification weight, where the transition frame includes a transition frame from a non-fricative to a fricative, or a transition frame from a fricative to a non-fricative.


Optionally, the processor 410 may be configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a coding type of the audio frame is not transient, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the coding type of the audio frame is transient, determine the second modification weight, or for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not greater than a first spectrum tilt frequency threshold and/or a spectrum tilt frequency of the audio frame is not less than a second spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and the spectrum tilt frequency of the audio frame is less than the second spectrum tilt frequency threshold, determine the second modification weight.


Optionally, the processor 410 may be configured to, for each audio frame in the audio, when a spectrum tilt frequency of the previous audio frame is not less than a third spectrum tilt frequency threshold, and/or a coding type of the previous audio frame is not one of four types, voiced, generic, transient, and audio, and/or a spectrum tilt frequency of the audio frame is not greater than a fourth spectrum tilt frequency threshold, determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame. When the spectrum tilt frequency of the previous audio frame is less than the third spectrum tilt frequency threshold, the coding type of the previous audio frame is one of the four types, voiced, generic, transient, or audio, and the spectrum tilt frequency of the audio frame is greater than the fourth spectrum tilt frequency threshold, determine the second modification weight.


In this embodiment, for each audio frame in audio, when a signal characteristic of the audio frame and a signal characteristic of a previous audio frame meet a preset modification condition, an electronic device determines a first modification weight according to LSF differences of the audio frame and LSF differences of the previous audio frame. When the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not meet the preset modification condition, the electronic device determines a second modification weight. The electronic device modifies a linear predictive parameter of the audio frame according to the determined first modification weight or the determined second modification weight and codes the audio frame according to a modified linear predictive parameter of the audio frame. In this way, different modification weights are determined according to whether the signal characteristic of the audio frame and the signal characteristic of the previous audio frame meet the preset modification condition, and the linear predictive parameter of the audio frame is modified so that a spectrum between audio frames is steadier. Moreover, the electronic device codes the audio frame according to the modified linear predictive parameter of the audio frame, and therefore, audio having a wider bandwidth is coded while a bit rate remains unchanged or a bit rate slightly changes.


A person skilled in the art may clearly understand that, the technologies in the embodiments of the present disclosure may be implemented by software in addition to a necessary general hardware platform. Based on such an understanding, the technical solutions of the present disclosure essentially or the part contributing to the prior art may be implemented in a form of a software product. The software product is stored in a storage medium, such as a read only memory (ROM)/RAM, a hard disk, or an optical disc, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform the methods described in the embodiments or some parts of the embodiments of the present disclosure.


In this specification, the embodiments are described in a progressive manner. Reference may be made to each other for a same or similar part of the embodiments. Each embodiment focuses on a difference from other embodiments. Especially, the system embodiment is basically similar to the method embodiments, and therefore is briefly described. For a relevant part, reference may be made to the description in the part of the method embodiments.


The foregoing descriptions are implementation manners of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.

Claims
  • 1. An audio coding method comprising: determining a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame;modifying a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame, wherein the first modification weight satisfies the following formula:
  • 2. The audio coding method of claim 1, wherein the linear predictive parameter is a linear predictive coding (LPC) coefficient.
  • 3. The audio coding method of claim 1, wherein the linear predictive parameter is a linear spectral pair (LSP) coefficient.
  • 4. An audio coding method, comprising: determining a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame;modifying a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame, wherein modifying the linear predictive parameter of the audio frame comprises modifying the linear predictive parameter of the audio frame according to the following formula: L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i],
  • 5. An audio coding method, comprising: determining a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame;modifying a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame; andcoding the audio frame according to the modified linear predictive parameter,wherein the audio frame is not the transition frame when the following conditions are not satisfied: a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient; andthe spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold; andthe spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold and a coding type of the previous audio frame is voiced.
  • 6. The audio coding method of claim 5, wherein the first spectrum tilt frequency threshold, the second spectrum tilt frequency threshold, and the third spectrum tilt frequency threshold are preset values.
  • 7. The audio coding method of claim 5, wherein a first value of the first spectrum tilt frequency threshold is 5.0, wherein a second value of the second spectrum tilt frequency threshold is 1.0, and wherein a third value of the third spectrum tilt frequency threshold is 3.0.
  • 8. The audio coding method of claim 5, wherein a first value of the first spectrum tilt frequency threshold is greater than a second value of the second spectrum tilt frequency threshold.
  • 9. The audio coding method of claim 5, wherein a first value of the first spectrum tilt frequency threshold is greater than a second value of the third spectrum tilt frequency threshold.
  • 10. The audio coding method of claim 5, wherein a first value of the third spectrum tilt frequency threshold is greater than a second value of the second spectrum tilt frequency threshold.
  • 11. An audio coding method, comprising: determining, when a signal characteristic of an audio frame and a signal characteristic of a previous audio frame of the audio frame satisfy a preset modification condition, a first modification weight according to linear spectral frequency (LSF) differences of the audio frame and LSF differences of the previous audio frame;determining, when the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not satisfy the preset modification condition, a preset modification weight value as a second modification weight, wherein the preset modification weight value is greater than 0 and is less than or equal to 1;modifying a linear predictive parameter of the audio frame according to the first modification weight or the second modification weight to generate a modified linear predictive parameter of the audio frame; andcoding the audio frame according to the modified linear predictive parameter,wherein the signal characteristic of the audio frame and the signal characteristic of the previous audio frame satisfy the preset modification condition when the following conditions are not satisfied: a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient;the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold; andthe spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold and a coding type of the previous audio frame is voiced.
  • 12. The audio coding method of claim 11, wherein the linear predictive parameter is a linear predictive coding (LPC) coefficient.
  • 13. The audio coding method of claim 6, wherein the linear predictive parameter is a linear spectral pair (LSP) coefficient.
  • 14. An audio coding apparatus, comprising: a memory configured to store instructions; anda processor coupled to the memory and configured to execute the instructions to cause the audio coding apparatus to be configured to: determine a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame;modify a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame; andcode the audio frame according to the modified linear predictive parameter,wherein the processor is further configured to execute the instructions to cause the audio coding apparatus to be configured to determine the first modification weight according to the LSF differences of the audio frame and the LSF differences of the previous audio frame using the following formula:
  • 15. The audio coding apparatus of claim 14, wherein the linear predictive parameter is a linear predictive coding (LPC) coefficient.
  • 16. The audio coding apparatus of claim 14, wherein the linear predictive parameter is a linear spectral pair (LSP) coefficient.
  • 17. An audio coding apparatus, comprising: a memory configured to store instructions; anda processor coupled to the memory and configured to execute the instructions to cause the audio coding apparatus to be configured to: determine a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame;modify a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame; andcode the audio frame according to the modified linear predictive parameter,wherein the processor is further configured to execute the instructions to cause the audio coding apparatus to be configured to modify the linear predictive parameter of the audio frame to generate the modified linear predictive parameter using the following formula: L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i],
  • 18. An audio coding apparatus, comprising: a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions to cause the audio coding apparatus to be configured to: determine a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when the audio frame is not a transition frame; modify a linear predictive parameter of the audio frame according to the first modification weight to generate a modified linear predictive parameter of the audio frame; and code the audio frame according to the modified linear predictive parameter, wherein the audio frame is not the transition frame when the following conditions are not satisfied: a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient; and the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold; and the spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold and a coding type of the previous audio frame is voiced.
  • 19. An audio coding apparatus, comprising: a memory configured to store instructions; and a processor coupled to the memory and configured to execute the instructions to cause the audio coding apparatus to be configured to: determine a first modification weight according to linear spectral frequency (LSF) differences of an audio frame and LSF differences of a previous audio frame of the audio frame when a signal characteristic of the audio frame and a signal characteristic of the previous audio frame satisfy a preset modification condition; determine a preset modification weight value as a second modification weight when the signal characteristic of the audio frame and the signal characteristic of the previous audio frame do not satisfy the preset modification condition, wherein the preset modification weight value is greater than 0 and is less than or equal to 1; modify a linear predictive parameter of the audio frame according to the first modification weight or the second modification weight to generate a modified linear predictive parameter of the audio frame; and code the audio frame according to the modified linear predictive parameter, wherein the signal characteristic of the audio frame and the signal characteristic of the previous audio frame of the audio frame satisfy the preset modification condition when the following conditions are not satisfied: a spectrum tilt frequency of the previous audio frame is greater than a first spectrum tilt frequency threshold and a coding type of the audio frame is transient; the spectrum tilt frequency of the previous audio frame is greater than the first spectrum tilt frequency threshold and a spectrum tilt frequency of the audio frame is less than a second spectrum tilt frequency threshold; and the spectrum tilt frequency of the previous audio frame is less than a third spectrum tilt frequency threshold and a coding type of the previous audio frame is voiced.
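The claims above can be illustrated with a short, non-authoritative sketch. The interpolation step follows the formula recited in claim 17, L[i]=(1−w[i])*L_old[i]+w[i]*L_new[i], and the transition-frame test assembles the three conditions recited in claim 18 (the frame is a transition frame when any one of them holds). Function names, coding-type strings, and threshold values are illustrative only; the formula for computing the first modification weight itself is elided in claim 14 and is not reproduced here, so the weight vector `w` is taken as a given input.

```python
def modify_lp_parameters(l_old, l_new, w):
    """Interpolate previous and current linear predictive parameters
    per the formula in claim 17: L[i] = (1 - w[i]) * L_old[i] + w[i] * L_new[i]."""
    return [(1 - wi) * lo + wi * ln for lo, ln, wi in zip(l_old, l_new, w)]


def is_transition_frame(tilt_prev, tilt_cur, type_cur, type_prev,
                        thr1, thr2, thr3):
    """Return True when the current frame is a transition frame, i.e. when
    any of the three conditions listed in claim 18 is satisfied.
    Threshold values thr1-thr3 and the coding-type labels are hypothetical."""
    cond_a = tilt_prev > thr1 and type_cur == "transient"
    cond_b = tilt_prev > thr1 and tilt_cur < thr2
    cond_c = tilt_prev < thr3 and type_prev == "voiced"
    return cond_a or cond_b or cond_c
```

Per claims 14 and 18, the first modification weight (and hence this interpolation) applies only when `is_transition_frame(...)` returns False; otherwise claim 19 falls back to a preset modification weight in (0, 1].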
Priority Claims (2)
Number Date Country Kind
201410299590.2 Jun 2014 CN national
201410426046.X Aug 2014 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/588,064 filed on Sep. 30, 2019, which is a continuation of U.S. patent application Ser. No. 15/699,694 filed on Sep. 8, 2017, now U.S. Pat. No. 10,460,741, which is a continuation of U.S. patent application Ser. No. 15/362,443 filed on Nov. 28, 2016, now U.S. Pat. No. 9,812,143, which is a continuation of International Patent Application No. PCT/CN2015/074850 filed on Mar. 23, 2015, which claims priority to Chinese Patent Application No. 201410426046.X filed on Aug. 26, 2014, and Chinese Patent Application No. 201410299590.2 filed on Jun. 27, 2014. All of the afore-mentioned patent applications are hereby incorporated by reference in their entireties.

US Referenced Citations (36)
Number Name Date Kind
5600754 Gardner et al. Feb 1997 A
6104992 Gao Aug 2000 A
6188980 Thyssen Feb 2001 B1
6199040 Fette Mar 2001 B1
6233550 Gersho May 2001 B1
6330533 Su et al. Dec 2001 B2
6385573 Gao et al. May 2002 B1
6418408 Udaya Bhaskar Jul 2002 B1
6449590 Gao Sep 2002 B1
6493665 Su Dec 2002 B1
6636829 Benyassine Oct 2003 B1
6782360 Gao Aug 2004 B1
6931373 Bhaskar et al. Aug 2005 B1
7720683 Vermeulen et al. May 2010 B1
8532984 Rajendran et al. Sep 2013 B2
8744847 Paul Jun 2014 B2
8938390 Xu Jan 2015 B2
20030028386 Zinser, Jr. Feb 2003 A1
20040002856 Bhaskar Jan 2004 A1
20060277038 Vos Dec 2006 A1
20060277039 Vos Dec 2006 A1
20070094019 Nurminen Apr 2007 A1
20070223577 Ehara Sep 2007 A1
20080027711 Rajendran Jan 2008 A1
20080126904 Sung et al. May 2008 A1
20080249768 Ertan et al. Oct 2008 A1
20080294429 Su Nov 2008 A1
20090265167 Ehara Oct 2009 A1
20100114567 Bruhn May 2010 A1
20100174532 Vos et al. Jul 2010 A1
20110099018 Neuendorf Apr 2011 A1
20120095756 Sung et al. Apr 2012 A1
20120271629 Sung et al. Oct 2012 A1
20130226595 Liu Aug 2013 A1
20140236588 Subasingha Aug 2014 A1
20170076732 Liu et al. Mar 2017 A1
Foreign Referenced Citations (13)
Number Date Country
1081037 Jan 1994 CN
1420487 May 2003 CN
1677491 Oct 2005 CN
1815552 Aug 2006 CN
101114450 Jan 2008 CN
102664003 Sep 2012 CN
103262161 Aug 2013 CN
2466670 Jul 2010 GB
H1083200 Mar 1998 JP
2007212637 Aug 2007 JP
2010520512 Jun 2010 JP
101888030 Aug 2018 KR
101990538 Jun 2019 KR
Non-Patent Literature Citations (5)
Entry
“Digital cellular telecommunications system (Phase 2+); Universal Mobile Telecommunications System (UMTS); LTE; Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Error concealment of lost frames (3GPP TS 26.091 version 11.0.0 Release 11),” ETSI TS 126 091 V11.0.0, Oct. 2012, 15 pages.
Erzin, E., “Interframe Differential Coding of Line Spectrum Frequencies,” IEEE Transactions on Speech and Audio Processing, vol. 3, No. 2, Apr. 1994, pp. 350-352.
Marca, J., “An LSF Quantizer for the North-American Half-Rate Speech Coder,” XP000466781, IEEE Transactions on Vehicular Technology, Aug. 1994, pp. 413-419.
Kuo, C., et al., “Low Bit-rate Quantization of LSP Parameters Using Two-Dimensional Differential Coding,” XP010058707, Mar. 23, 1992, 4 pages.
Wang, T., et al., “Verification of MPEG-2/4 AAC Audio Encoder Module,” Computer Technology and Development, vol. 22, No. 7, Jul. 2012, 4 pages, with English abstract.
Related Publications (1)
Number Date Country
20210390968 A1 Dec 2021 US
Continuations (4)
Number Date Country
Parent 16588064 Sep 2019 US
Child 17458879 US
Parent 15699694 Sep 2017 US
Child 16588064 US
Parent 15362443 Nov 2016 US
Child 15699694 US
Parent PCT/CN2015/074850 Mar 2015 WO
Child 15362443 US