This description relates generally to the encoding and decoding of speech.
Speech encoding and decoding have a large number of applications. In general, speech encoding, which is also known as speech compression, seeks to reduce the data rate needed to represent a speech signal without substantially reducing the quality or intelligibility of the speech. Speech compression techniques may be implemented by a speech coder, which also may be referred to as a voice coder or vocoder.
A speech coder is generally viewed as including an encoder and a decoder. The encoder produces a compressed stream of bits from a digital representation of speech, such as may be generated at the output of an analog-to-digital converter having as an input an analog signal produced by a microphone. The decoder converts the compressed bit stream into a digital representation of speech that is suitable for playback through a digital-to-analog converter and a speaker. In many applications, the encoder and the decoder are physically separated, and the bit stream is transmitted between them using a communication channel.
A key parameter of a speech coder is the amount of compression the coder achieves, which is measured by the bit rate of the stream of bits produced by the encoder. The bit rate of the encoder is generally a function of the desired fidelity (i.e., speech quality) and the type of speech coder employed. Different types of speech coders have been designed to operate at different bit rates. For example, low to medium rate speech coders may be used in mobile communication applications. These applications typically require high quality speech and robustness to artifacts caused by acoustic noise and channel noise (e.g., bit errors).
Speech is generally considered to be a non-stationary signal having signal properties that change over time. This change in signal properties is generally linked to changes made in the properties of a person's vocal tract to produce different sounds. A sound is typically sustained for some short period, typically 10-100 ms, and then the vocal tract is changed again to produce the next sound. The transition between sounds may be slow and continuous or it may be rapid as in the case of a speech “onset.” This change in signal properties increases the difficulty of encoding speech at lower bit rates since some sounds are inherently more difficult to encode than others and the speech coder must be able to encode all sounds with reasonable fidelity while preserving the ability to adapt to a transition in the characteristics of the speech signals. Performance of a low to medium bit rate speech coder can be improved by allowing the bit rate to vary. In variable-bit-rate speech coders, the bit rate for each segment of speech is allowed to vary between two or more options depending on various factors, such as user input, system loading, terminal design or signal characteristics.
One approach for low to medium rate speech coding is a model-based speech coder or vocoder. A vocoder models speech as the response of a system to excitation over short time intervals. Examples of vocoder systems include linear prediction vocoders such as MELP, homomorphic vocoders, channel vocoders, sinusoidal transform coders (“STC”), harmonic vocoders and multiband excitation (“MBE”) vocoders. In these vocoders, speech is divided into short segments (typically 10-40 ms), with each segment being characterized by a set of model parameters. These parameters typically represent a few basic elements of each speech segment, such as the segment's pitch, voicing state, and spectral envelope. A vocoder may use one of a number of known representations for each of these parameters. For example, the pitch may be represented as a pitch period, a fundamental frequency or pitch frequency (which is the inverse of the pitch period), or a long-term prediction delay. Similarly, the voicing state may be represented by one or more voicing metrics, by a voicing probability measure, or by a set of voicing decisions. The spectral envelope may be represented by a set of spectral magnitudes or other spectral measurements. Since they permit a speech segment to be represented using only a small number of parameters, model-based speech coders, such as vocoders, typically are able to operate at medium to low data rates. However, the quality of a model-based system is dependent on the accuracy of the underlying model. Accordingly, a high fidelity model must be used if these speech coders are to achieve high speech quality.
An MBE vocoder is a harmonic vocoder based on the MBE speech model that has been shown to work well in many applications. The MBE vocoder combines a harmonic representation for voiced speech with a flexible, frequency-dependent voicing structure based on the MBE speech model. This allows the MBE vocoder to produce natural sounding unvoiced speech and makes the MBE vocoder robust to the presence of acoustic background noise. These properties allow the MBE vocoder to produce higher quality speech at low to medium data rates and have led to its use in a number of commercial mobile communication applications.
The MBE vocoder (like other vocoders) analyzes speech at fixed intervals, with typical intervals being 10 ms or 20 ms. The result of the MBE analysis is a set of MBE model parameters including a fundamental frequency, a set of voicing decisions, a gain value, and a set of spectral magnitudes. The model parameters are then quantized at a fixed interval, such as 20 ms, to produce quantizer bits at the vocoder bit rate. At the decoder, the model parameters are reconstructed from the received bits. For example, model parameters may be reconstructed at 20 ms intervals, and then overlapping speech segments may be synthesized and added together at 10 ms intervals.
Techniques are provided for reducing the bit rate, or improving the speech quality for a given bit rate, in a vocoder, such as an MBE vocoder. In such a vocoder, two ways to reduce the bit rate are reducing the number of bits per frame or increasing the quantization interval (or frame duration). In general, reducing the number of bits per frame decreases the ability to accurately convey the shape of the spectral formants because the quantizer step size resolution becomes insufficient. And increasing the quantization interval reduces the time resolution and tends to lead to smoothing and a muffled sound.
Using current techniques, it is difficult to quantize the spectral magnitudes using fewer than 30-32 bits. Too much quantization negatively affects the formant characteristics of the speech and does not provide enough granularity for the parameters which change over time. In view of this, the described techniques increase the average time between sets of quantized spectral magnitudes rather than reducing the number of bits used to represent a set of spectral magnitudes.
In particular, sets of log spectral magnitudes are estimated at a fixed interval, then magnitudes are downsampled in a data dependent fashion to reduce the data rate. The downsampled magnitudes then are quantized and reconstructed, and the omitted magnitudes are estimated using interpolation. The spectral error between the estimated magnitudes and the reconstructed/interpolated magnitudes is measured in order to refine which magnitudes are omitted and to refine parameters for the interpolation.
So, for example, speech may be analyzed at a fixed interval of 10 ms, but the corresponding spectral magnitudes may be quantized at varying intervals that are an integer multiple of the analysis period. Thus, rather than quantizing the spectral magnitudes at a fixed interval, the techniques seek optimal points in time at which to quantize the spectral magnitudes. These points in time are referred to as interpolation points.
The analysis algorithms generate MBE model parameters at a fixed interval (e.g., 10 ms or 5 ms), with the points in time for which analysis has been used to produce a set of MBE model parameters being referred to as “analysis points” or subframes. Analysis subframes are grouped into frames at a fixed interval that is an integer multiple of the analysis interval. A frame is defined to contain N subframes. Downsampling is used to find P subframes within each frame that can be used to most accurately code the model parameters. Selection of the interpolation points is determined by evaluating the total quantization error for the frame for many possible combinations of interpolation point locations.
In one general aspect, encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into frames including N subframes (where N is an integer greater than 1); computing model parameters for the subframes, with the model parameters including spectral parameters; and generating a representation of the frame. The representation includes information representing the spectral parameters of P subframes (where P is an integer and P<N) and information identifying the P subframes. The representation excludes information representing the spectral parameters of the N−P subframes not included in the P subframes. The representation is generated by selecting the P subframes by, for multiple combinations of P subframes, determining an error induced by representing the frame using the spectral parameters for the P subframes and using interpolated spectral parameter values for the N−P subframes, the interpolated spectral parameter values being generated by interpolating using the spectral parameters for the P subframes, and selecting a combination of P subframes as the selected P subframes based on the determined error for the combination of P subframes.
Implementations may include one or more of the following features. For example, the multiple combinations of P subframes may include less than all possible combinations of P subframes. The model parameters may be model parameters of a Multi-Band Excitation speech model, and the information identifying the P subframes may be an index.
Generating the interpolated spectral parameter values for the N−P subframes may include interpolating using the spectral parameters for the P subframes and spectral parameters from a subframe of a prior frame.
Determining an error for a combination of P subframes may include quantizing and reconstructing the spectral parameters for the P subframes, generating the interpolated spectral parameter values for the N−P subframes, and determining a difference between the spectral parameters for the frame including the P subframes and a combination of the reconstructed spectral parameters and the interpolated spectral parameters. Selecting the combination of P subframes may include selecting the combination that induces the smallest error.
In another general aspect, a method for decoding digital speech samples from a bit stream includes dividing the bit stream into frames of bits and extracting, from a frame of bits, information identifying, for which P of N subframes of a frame represented by the frame of bits (where N is an integer greater than 1, P is an integer, and P<N), spectral parameters are included in the frame of bits, and information representing spectral parameters of the P subframes. Spectral parameters of the P subframes are reconstructed using the information representing spectral parameters of the P subframes; and spectral parameters for the remaining N−P subframes of the frame of bits are generated by interpolating using the reconstructed spectral parameters of the P subframes.
Generating spectral parameters for the remaining N−P subframes of the frame of bits may include interpolating using the reconstructed spectral parameters of the P subframes and reconstructed spectral parameters of a subframe of a prior frame of bits.
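For illustration only, the decoding aspect may be sketched in Python as follows, with unpack, reconstruct, and interpolate as hypothetical stand-ins for the bit-stream parser, the inverse magnitude quantizer, and the same interpolator used by the encoder:

```python
def decode_frame(frame_bits, prior_recon, unpack, reconstruct,
                 interpolate, N=5, P=2):
    """Decoder-side sketch: recover the P coded subframes and fill in
    the remaining N-P subframes by interpolation. `prior_recon` holds
    reconstructed magnitudes from the prior frame, which serve as the
    left-hand interpolation context."""
    # Which P subframes were coded, and the quantized bits for each.
    points, mag_bits = unpack(frame_bits)
    # Reconstruct spectral parameters for the P coded subframes.
    recon = {n: reconstruct(bits) for n, bits in zip(points, mag_bits)}
    # Fill in the N-P remaining subframes by interpolating between the
    # reconstructed subframes (and the prior frame's last one).
    return interpolate(recon, prior_recon, N)
```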
In another general aspect, a speech coder is operable to encode a sequence of digital speech samples into a bit stream using the techniques described above. The speech coder may be incorporated in a communication device, such as a handheld communication device, that includes a transmitter for transmitting the bit stream.
In another general aspect, a speech decoder is operable to decode a sequence of digital speech samples from a bit stream using the techniques described above. The speech decoder may be incorporated in a communication device, such as a handheld communication device, that includes a receiver for receiving the bit stream and a speaker connected to the speech decoder to generate audible speech based on digital speech samples generated using the reconstructed spectral parameters and the interpolated spectral parameters.
Other features will be apparent from the following description, including the drawings, and the claims.
Using the MBE model parameters, a time-varying interpolation frame generator 210 then generates quantizer bits for a frame including a collection of N subframes, where N is an integer greater than one. For example, the frame generator 210 may generate quantizer bits for a 50 ms frame that includes five 10 ms subframes (N=5). However, rather than quantize the spectral magnitudes for all of the N subframes, the frame generator only quantizes the spectral magnitudes for P subframes, where P is an integer less than N. Thus, rather than quantizing the spectral magnitudes at a fixed interval, the frame generator 210 seeks optimal points in time at which to quantize the spectral magnitudes. These points in time may be referred to as interpolation points. The frame generator selects the interpolation points by evaluating the total quantization error for the frame for many possible combinations of interpolation point locations.
In general, the frame generator 210 can be used to produce P interpolation points per frame. If every frame includes N analysis subframes, then the number of combinations of interpolation points per frame is given by the binomial coefficient E=N!/((N−P)!·P!). The frame generator 210 evaluates, for each combination of interpolation point locations considered, the effects of downsampling the magnitudes at the N subframes to magnitudes at P subframes, quantizing the magnitudes for those P subframes, and then using interpolation to fill back in the magnitudes for the unquantized subframes.
For example, in a system where the parameter estimation unit 205 produces a set of MBE model parameters every 10 ms, with a 50 ms frame size (N=5), there are five subframes, or analysis points, per frame, and the frame generator 210 may identify two interpolation points per frame for spectral magnitude quantization (P=2).
The spectral magnitude information from N subframes can be conveyed by the spectral magnitude information at P subframes if interpolation is used to fill in the spectral magnitudes for the analysis points that were omitted. For this system, the average time between interpolation points is 25 ms, the minimum distance between interpolation points is 10 ms, and the maximum distance is 70 ms. In particular, if analysis points for which MBE model parameters are represented by quantized data are denoted by ‘x’ and analysis points for which the MBE model parameters are resolved by interpolation are denoted by ‘-’, then for this particular example there are 10 choices for the interpolation points:
x x - - -
x - x - -
x - - x -
x - - - x
- x x - -
- x - x -
- x - - x
- - x x -
- - x - x
- - - x x
Note that it would require four bits to code all ten of the possible interpolation point combinations. To reduce the number of bits required, some of the possibilities may be omitted from consideration. For example, the first and last cases (“x x - - -” and “- - - x x”) may be omitted, which would then reduce the number of possible combinations down to eight, which can be represented with three coding bits.
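For illustration, the candidate patterns and their index coding may be enumerated as in the following Python sketch (the enumeration order and the choice of which patterns to omit are illustrative):

```python
from itertools import combinations

N, P = 5, 2  # subframes per frame, interpolation points per frame

# All C(5,2) = 10 ways to place P interpolation points among N subframes.
patterns = list(combinations(range(N), P))

# Omit the first ("x x - - -") and last ("- - - x x") patterns so the
# remaining 8 choices fit in 3 coding bits.
coded_patterns = patterns[1:-1]
assert len(coded_patterns) <= 2 ** 3

for index, points in enumerate(coded_patterns):
    row = " ".join("x" if n in points else "-" for n in range(N))
    print(f"index {index:03b}: {row}")
```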
The frame generator 210 quantizes the spectral magnitudes at the interpolation points and combines them with the locations of the interpolation points, which are coded using, for example, three bits as noted above, and the other MBE parameters for the frame to produce the quantized MBE parameters for the frame.
An FEC encoder 215 receives the quantized MBE parameters and encodes them using error correction coding to produce the bit stream 220, which is transmitted and subsequently received as a received bit stream 225. The FEC encoder 215 combines the quantizer bits with redundant forward error correction (“FEC”) data to produce the bit stream 220. The addition of redundant FEC data enables the decoder to correct and/or detect bit errors caused by degradation in the transmission channel.
An MBE decoder unit 230 receives the bit stream 225 and uses an FEC decoder 235 to decode the received bit stream 225 and produce quantized MBE parameters.
A frame interpolator 240 uses the quantized MBE parameters and, in particular, the quantized spectral magnitudes at the interpolation points and the locations of the interpolation points to generate interpolated spectral magnitudes for the N−P subframes that were not encoded. In particular, the frame interpolator 240 reconstructs the MBE parameters from the quantized parameters, generates the interpolated spectral magnitudes, and combines the reconstructed parameters with the interpolated spectral magnitudes to produce a set of MBE parameters. To interpolate between the spectral magnitudes, the frame interpolator 240 uses the same interpolation technique that the frame generator 210 employed when finding the optimal interpolation points.
An MBE speech synthesizer 245 receives the MBE parameters and uses them to synthesize digital speech.
Referring to FIG. 3, the frame generator 210 selects the interpolation points using the following procedure.
In more detail, after receiving the spectral magnitudes, the frame generator 210 selects the first available combination of P subframes (e.g., “x - x - -”) (step 305) and quantizes the spectral magnitudes for that combination of P subframes (step 310). Thus, in the case of the combination “x - x - -”, the frame generator 210 would quantize the first and third subframes to generate quantized bits. The frame generator 210 then reconstructs the spectral magnitudes from the quantized bits (step 315) and generates representations of the spectral magnitudes of the other subframes (i.e., the second, fourth and fifth subframes in this example) by interpolating between the spectral magnitudes reconstructed from the quantized bits (step 320). The interpolation may involve, for example, linear interpolation of magnitudes, linear interpolation of log magnitudes, or linear interpolation of magnitudes squared. As one illustrative example of these techniques, suppose the reconstructed magnitudes at the endpoints for one particular harmonic are 8 and 16, and the interpolation subframe is halfway between the reconstructed subframes. With linear interpolation of magnitudes, the interpolated magnitude would be (8+16)/2=12. With linear interpolation of log magnitudes, the log2 magnitudes would be 3 and 4, the interpolated log2 magnitude would be 3.5, and the interpolated magnitude would be 2^3.5≈11.3. With linear interpolation of magnitudes squared, the squared magnitudes would be 64 and 256, the interpolated squared magnitude would be (64+256)/2=160, and the interpolated magnitude would be the square root of 160≈12.6. In each of these cases, the interpolated magnitude (12, 11.3, or 12.6) lies between the endpoints (8 and 16).
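The three interpolation variants in this worked example may be sketched as follows (Python; illustrative only):

```python
import math

def interp_linear(a, b, t):
    """Linear interpolation of the magnitudes themselves."""
    return (1 - t) * a + t * b

def interp_log(a, b, t):
    """Linear interpolation of the log2 magnitudes."""
    return 2 ** ((1 - t) * math.log2(a) + t * math.log2(b))

def interp_squared(a, b, t):
    """Linear interpolation of the squared magnitudes."""
    return math.sqrt((1 - t) * a ** 2 + t * b ** 2)

# Reproduce the worked example: endpoints 8 and 16, midpoint t = 0.5.
print(interp_linear(8, 16, 0.5))   # 12.0
print(interp_log(8, 16, 0.5))      # ~11.3
print(interp_squared(8, 16, 0.5))  # ~12.6
```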
In more detail, for this example, the frame generator 210 generates a representation of the second subframe by interpolating between the reconstructed spectral magnitudes of the first and third subframes, and generates a representation for each of the fourth and fifth subframes by interpolating between the reconstructed spectral magnitudes of the third subframe and reconstructed spectral magnitudes of the first subframe of the next frame. The frame generator 210 then compares the downsampled, quantized, reconstructed, and interpolated magnitudes with the original estimated magnitudes to generate an error measurement that captures the “closeness” of the two sets of magnitudes (step 325).
If there is another available combination of P subframes to be considered (step 330), the frame generator selects that combination of P subframes (step 335) and repeats steps 310-325. For example, after generating the error measurement for “x - x - -”, the frame generator 210 generates an error measurement for “x - - x -”.
If there are no more available combinations of P subframes to be considered (step 330), the frame generator 210 selects the combination of P subframes that has the lowest error measurement (step 340) and sends the quantized parameters for that combination of P subframes along with an index that identifies the combination of P subframes to the FEC encoder 215 (step 345).
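The selection loop of steps 305-345 may be summarized in the following sketch, where quantize, reconstruct, interpolate, and error are hypothetical stand-ins for the magnitude quantizer, the inverse quantizer, the interpolator, and the error measurement:

```python
from itertools import combinations

def select_interpolation_points(mags, quantize, reconstruct,
                                interpolate, error, N=5, P=2):
    """Exhaustive search over the candidate interpolation-point sets.

    `mags` holds the estimated spectral magnitudes for the frame's N
    subframes plus whatever context subframes the interpolator needs.
    """
    best = None
    for points in combinations(range(N), P):          # steps 305/335
        bits = quantize(mags, points)                 # step 310
        recon = reconstruct(bits, points)             # step 315
        full = interpolate(recon, points, N)          # step 320
        err = error(mags, full)                       # step 325
        if best is None or err < best[0]:
            best = (err, points, bits)
    err, points, bits = best                          # step 340
    return points, bits                               # step 345
```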
As shown in FIG. 4, the decoder extracts, from a received frame of bits, the index identifying the P subframes and the information representing the spectral magnitudes of those subframes, reconstructs the spectral magnitudes for the P subframes, and generates spectral magnitudes for the remaining N−P subframes by interpolating using the reconstructed spectral magnitudes.
Finally, the decoder uses the reconstructed and interpolated spectral magnitudes to synthesize speech (step 420).
While the example above describes a system that employs 50 ms frames, 10 ms subframes (such that N equals 5) and two interpolation points (P equals 2), these parameters may be varied. For example, the analysis interval between sets of estimated log spectral magnitudes can be increased or decreased, such as by increasing the length of a subframe from 10 ms to 20 ms or by decreasing it from 10 ms to 5 ms. In addition, the number of analysis points per frame (N) and the number of interpolation points per frame (P) may be varied. These parameters may be varied when the system is initially configured or they may be varied dynamically during operation based on changing operating conditions.
The techniques described above may be implemented in the context of an AMBE vocoder. A typical implementation of an AMBE vocoder using a 20 ms frame size without using time varying interpolation points has an overall encoding/decoding delay of 72 ms. A similar AMBE vocoder using a frame size of N*10 ms without using time varying interpolation points has a delay of N*10+52 ms. In general, the use of variable interpolation points adds (N−P)*10 ms of delay such that the delay becomes N*20−P*10+52 ms. Note that the N−P subframes of delay are added by the decoder. After receiving a frame of quantized bits, the decoder is only able to reconstruct subframes up through the last interpolation point. In the worst case, the decoder will only reconstruct P subframes (the remaining N−P subframes will be generated after receiving the next frame). Due to this delay, the decoder keeps model parameters from up to N−P subframes in a buffer. In a typical software implementation, the decoder will use model parameters from the buffer along with model parameters from the most recent subframe such that N or more subframes of model parameters are available for speech synthesis. Then it will synthesize speech for N subframes and place the model parameters for any remaining subframes in the buffer.
However, the delay may be reduced by one or two subframe intervals by adjusting the techniques such that the magnitudes for the most recent one or two subframes use the estimated fundamental frequency from a prior subframe. The delay, D, is therefore confined to a range:
(N*2−P)*I+32 ms < D < ((N+1)*2−P)*I+32 ms
where I is the subframe interval, which is typically 10 ms. The delay may be reduced further by restricting the interpolation point candidates, but this may result in reduced voice quality.
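For example, the delay bounds given by the expression above may be computed as follows (a small Python sketch):

```python
def delay_bounds_ms(N, P, I=10):
    """Exclusive lower and upper delay bounds, in milliseconds, for a
    frame of N subframes with P interpolation points and a subframe
    interval of I ms, per the expression above."""
    lower = (N * 2 - P) * I + 32
    upper = ((N + 1) * 2 - P) * I + 32
    return lower, upper

# Example: N=5, P=2, I=10 ms bounds the delay between 112 and 132 ms.
print(delay_bounds_ms(5, 2))  # (112, 132)
```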
Referring to FIG. 5, a procedure 500 may be used by the frame generator 210 to select the interpolation points and produce the quantized magnitude bits for each frame.
The procedure proceeds by evaluating an error for many possible combinations of interpolation point locations. The total number of possible interpolation point combinations is given by the binomial coefficient K=N!/((N−P)!·P!), where N is the number of subframes per frame and P is the number of interpolation points per frame. In some cases, it might be desirable to consider only a subset of the possible combinations.
In the discussion below, M(0) through M(N−1) denote the log2 spectral magnitudes for subframes 0 through N−1. In this context, 0 and N−1 are referred to as subframe indices. The spectral magnitudes are represented at L harmonics, where the number of harmonics is variable between 9 and 56 and is dependent upon the fundamental frequency of the subframe. When it is useful to denote the magnitude of a particular harmonic, a subscript is used. For example, Ml(0) denotes the magnitude of the lth harmonic of subframe 0. Estimated magnitudes from a prior frame are denoted using negative subframe indices. For example, subframes 0 through N−1 from the prior frame are denoted as subframes −N through −1 (i.e., N is subtracted from each subframe index).
After the magnitudes for subframe n have been quantized and reconstructed, they are denoted by M̂(n). Since an iterative procedure is used to evaluate an error for k different sets of interpolation points, it is necessary to distinguish the quantized/reconstructed/interpolated magnitudes of each candidate. To address this, the magnitudes associated with candidate k are denoted by M̂k(n).
The procedure 500 requires that MBE model parameters have been estimated for subframes −(N−P+1) through N. The total number of subframes is thus 2·N−P+2. M(1) through M(N) are the spectral magnitudes from the most recent N subframes.
The objective of the procedure 500 is to downsample the magnitudes and then quantize them so that the information can be conveyed using a lower data rate. Note that downsampling and quantization are each a method of reducing data rate. A proper combination of downsampling and quantization can be used to achieve the least impact on voice quality. A close representation of the original magnitudes can be obtained by reversing these steps. The quantized bits are used to reconstruct the spectral magnitudes for the subframes that they were sampled from. Then the magnitudes that were omitted during the downsampling process are reformed using interpolation. The objective is to choose a set of interpolation points such that when the magnitudes at those subframes are quantized and reconstructed and the magnitudes at the subframes that fall between the interpolation points are reconstructed by interpolation, the resulting magnitudes are “close” to the original estimated magnitudes.
The equation used for measuring “closeness” may be expressed as follows:

E(k) = Σn w(n)·Σl (Ml(n) − M̂kl(n))²

In this equation, Ml(n) represent the estimated spectral magnitudes for each subframe, M̂kl(n) represent the corresponding quantized/reconstructed or interpolated magnitudes for candidate k, and the outer sum is taken over the subframes of the current frame. The weight may be computed as:

w(n) = 0.25 + 0.75·(g(n) − g(min))/(g(max) − g(min))

where g(n) is the gain for the subframe and may be computed as the mean of the log2 spectral magnitudes of the subframe:

g(n) = (1/L)·Σl Ml(n)

And g(max) and g(min) are the maximum and minimum gains for Λ≤n≤N, so that w(n) represents a weight between 0.25 and 1.0 that gives more importance to subframes that have greater gain.
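A brief Python sketch of this weighted error computation, under the assumed gain and weight definitions given above (the exact definitions may differ in a given implementation):

```python
def subframe_gain(log_mags):
    """Gain of a subframe, taken here as the mean of its log2
    spectral magnitudes (an assumed definition)."""
    return sum(log_mags) / len(log_mags)

def weighted_error(est, recon):
    """Weighted squared error between estimated log2 magnitudes `est`
    and reconstructed/interpolated magnitudes `recon`; both are lists
    of per-subframe magnitude lists with matching shapes."""
    gains = [subframe_gain(m) for m in est]
    g_min, g_max = min(gains), max(gains)
    span = (g_max - g_min) or 1.0  # avoid division by zero
    total = 0.0
    for mags, rec, g in zip(est, recon, gains):
        w = 0.25 + 0.75 * (g - g_min) / span  # weight in [0.25, 1.0]
        total += w * sum((a - b) ** 2 for a, b in zip(mags, rec))
    return total
```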
The procedure 500 needs to evaluate the magnitudes, associated quantized magnitude data, reconstructed magnitudes, and the associated error for all permitted combinations of “sampling points,” where the sampling points correspond to the P subframes at which the spectral magnitudes will be quantized for every N subframes of spectral magnitudes that were estimated. Rather than being chosen arbitrarily, the sampling points are chosen in a manner that minimizes the error.
With k being the magnitude sampling index, the number of possible combinations of sampling points is K=N!/((N−P)!·P!).
For a system where N=5 and P=2 (such that K=10), the amount of magnitude data may be reduced by 60% (from 5 subframes down to 2). To reverse the downsampling process, interpolation is used to estimate the magnitudes at the unquantized subframes. The magnitude sampling index, k, must be transmitted from the encoder to the decoder so that the decoder will know the location of the sampling points. For N=5, P=2, and K=10, a 4-bit k-value would need to be transmitted to the decoder. The terms “magnitude sampling index” and “k-value” are used interchangeably below.
Ml(n), where 0≤n<N, denote the spectral magnitudes at N equidistant points in time. Ml(N) denotes the spectral magnitudes at the next interval. Also, Ml(n), where −(N−P+1)≤n<0, denote the prior spectral magnitudes.
The procedure 500 selects P points from subframes 0 . . . N−1 at which the magnitudes are sampled. The magnitudes at intervening points are filled in using interpolation. Each combination of interpolation points is denoted using a set of P elements:
Tk = {n0, n1, . . . , nP−1}

where 0≤np<N are the subframe indices of the interpolation points.
For example, when N=5, P=2, there are K=10 combinations of points to consider:
T0={0,1},
T1={0,2},
T2={0,3},
T3={0,4},
T4={1,2},
T5={1,3},
T6={1,4},
T7={2,3},
T8={2,4},
T9={3,4}.

Each of the above sets represents one possible combination of interpolation points to be evaluated in this example.
In this example, also assume that, in the prior frame, subframe index 3 was chosen as the second interpolation point. Since 3−N=−2, Λ=−2 is used as the subframe index for the prior frame, such that M(Λ)=M(−2) denotes the spectral magnitudes at the prior frame's final interpolation point. M(N)=M(5) are the spectral magnitudes for subframe 5 (which also may be referred to as the first subframe of the next frame). While subframe index N is not an eligible location for an interpolation point of the current frame, the magnitudes for that subframe are required by the algorithm that selects the interpolation points.
Each set Tk is extended to contain subframe indices Λ and N, so for this example, the extended sets are:
C0={−2,0,1,5},
C1={−2,0,2,5},
C2={−2,0,3,5},
C3={−2,0,4,5},
C4={−2,1,2,5},
C5={−2,1,3,5},
C6={−2,1,4,5},
C7={−2,2,3,5},
C8={−2,2,4,5},
C9={−2,3,4,5}.
C0-C9 in this example represent the K=10 combinations of interpolation points that need to be evaluated. Also note that since Λ can change every frame, the first element of Ck can change every frame.
To improve the notation, and allow it to be adapted for arbitrary N, P, and K, the set ΘN,Pk may be defined to be the kth combination of subframe indices where there are N subframes per frame with P interpolation subframes per frame. For N=5 and P=2, for example, the sets Θ5,2k are the sets T0 through T9 listed above.
The pattern can be continued to compute ΘN,Pk for any other values of N and P, where N>P.
Using this, the combinations of interpolation points that need to be evaluated can be defined as:
CN,Pk={Λ, ΘN,Pk, N} for 0≤k<K to be K sets of subframe indices, where each set has P+2 indices and the first index in each combination set is always Λ, which is derived from the final magnitude interpolation index (k-value) in the last frame. In addition, P−N−1≤Λ<0 and N>P. Since Λ varies from frame to frame, the first index in each CN,Pk will also vary. The last index in each combination set is always N.
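The construction of the candidate sets may be illustrated with the following Python sketch (candidate_sets is a hypothetical helper):

```python
from itertools import combinations

def candidate_sets(N, P, prior_lambda):
    """Build the K candidate index sets C[k] = {Λ, Θ[k], N}, where
    Θ[k] ranges over the combinations of P interpolation points
    within subframes 0..N-1 and prior_lambda (Λ) is the prior frame's
    final interpolation point shifted by -N."""
    return [(prior_lambda, *theta, N)
            for theta in combinations(range(N), P)]

# Example: N=5, P=2, Λ=-2 reproduces the sets C0 through C9 above.
for k, c in enumerate(candidate_sets(5, 2, -2)):
    print(f"C{k} = {c}")
```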
With this notation, the procedure 500 proceeds by setting k to 0 (step 510) and, for each point in CN,Pk, quantizing and reconstructing the magnitudes (step 512).
An exemplary implementation of a magnitude quantization and reconstruction technique is described in APCO Project 25 Half-Rate Vocoder Addendum, TIA-102.BABA-1, which is incorporated by reference. The quantization and reconstruction produces:
M̂l(n)=Quant−1(Quant(Ml(n))) for each n in the set Ck
The procedure 500 then interpolates the magnitudes for the intermediate subframes (i.e., n not in the set Ck) using a weighted sum of the magnitudes at the end points (step 515). The magnitudes for the starting subframe are denoted by M̂l(s) and the magnitudes for the ending subframe are denoted by M̂l(e), where s and e are the entries of Ck that bracket an intermediate subframe i (s<i<e). The interpolated magnitudes for subframe i are formed as a weighted sum of M1l(i) and M2l(i), where M1l(i) and M2l(i) are derived from M̂l(s) and M̂l(e), respectively.

For each harmonic, l, the interpolation equation is dependent on whether the voicing type for the first end point, intermediate point, and final end point is voiced (“v”), unvoiced (“u”), or pulsed (“p”). For example, “v-u-u” is applicable when the lth harmonic of the first subframe is voiced, the lth harmonic of the intermediate subframe is unvoiced, and the lth harmonic of the final subframe is unvoiced.

Since the number of harmonics, L, (and the fundamental frequency) at subframe index i may not be the same as those parameters at subframes s and e, the magnitudes at subframes s and e need to be resampled:

M1l(x) = M̂nl(x)(x)

where the integer indices nl(x) for each harmonic are computed from the ratio of the fundamental frequencies f(i)/f(x), with f(i) being the fundamental frequency for subframe i, f(x) being the fundamental frequency for subframe x, and x being either s or e. For each harmonic, weights, kl(x), are derived from the same frequency ratio.
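A minimal sketch of the resampling step, assuming a nearest-harmonic mapping derived from the ratio of fundamental frequencies (the actual index mapping and weights may combine adjacent harmonics):

```python
def resample_magnitudes(mags_x, f_x, f_i, L_i):
    """Resample the log2 magnitudes of endpoint subframe x (with
    fundamental frequency f_x and len(mags_x) harmonics) onto the
    L_i harmonics of intermediate subframe i (fundamental f_i)."""
    resampled = []
    for l in range(1, L_i + 1):
        # Harmonic l of subframe i lies near frequency l * f_i; map
        # it to the closest harmonic index of subframe x.
        n = round(l * f_i / f_x)
        n = min(max(n, 1), len(mags_x))  # clamp to a valid harmonic
        resampled.append(mags_x[n - 1])
    return resampled
```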
Continuing with the example in which N=5, P=2, and Λ=−2 to show how the equations are applied, the following sets of magnitudes may be formed by grouping the quantized and reconstructed magnitudes at the subframes denoted in each combination set:
{M̂(−2), M̂(0), M̂(1), M̂(5)},
{M̂(−2), M̂(0), M̂(2), M̂(5)},
. . .
{M̂(−2), M̂(2), M̂(4), M̂(5)},
{M̂(−2), M̂(3), M̂(4), M̂(5)}
The above sets of magnitudes are each produced by applying the quantizer and its inverse on the magnitudes at each of the interpolation points in the set.
The magnitudes for intermediate subframes (i.e., n not in the set Ck) are obtained using interpolation. In the first set above, for example, the magnitudes for subframes 2, 3, and 4 are obtained by interpolating between the reconstructed magnitudes at subframes 1 and 5.
After filling in the intermediate magnitudes for each combination, the procedure 500 evaluates the error for this combination of interpolation points (step 520).
The procedure 500 then increments k (step 525) and determines whether the maximum value of k has been exceeded (step 530). If not, the procedure 500 repeats the quantizing and reconstructing (step 512) for the new value of k and proceeds as discussed above.
If the maximum value of k has been exceeded, the procedure 500 selects the combination of interpolation points (kmin) that minimizes the error (step 535). The associated bits from the magnitude quantizer, Bmin, and the associated magnitude sampling index, kmin, are transmitted across the communication channel.
Additional implementations may select between multiple interpolation functions rather than using just a single interpolation function for interpolating between two interpolation points. With this variation, the interpolation/quantization error for each combination of interpolation points is evaluated for each permitted combination of interpolation functions. For each interpolation point, an index that selects the interpolation function is transmitted from the encoder to the decoder. If F is used to denote the number of interpolation function choices, then log2 F bits per interpolation point are required to represent the interpolation function choice.
For example, if N=5, P=2, and F=4, then two interpolation points are chosen in each frame containing five subframes, and log2 4=2 bits are used for each interpolation point to represent the interpolation function chosen for that interpolation point. Since there are two interpolation points per frame, a total of four bits are needed to represent the interpolation function choices in each frame.
Previously, a single interpolation function was used to interpolate between the magnitudes at the two end points; with this variation, one of F interpolation functions is selected for each interpolation point.
As one example of using multiple interpolation functions, three interpolation functions, I0,l(t), I1,l(t), and I2,l(t), may be defined to perform, for example, linear interpolation of the log magnitudes, linear interpolation of the magnitudes, and linear interpolation of the squared magnitudes, respectively, where t denotes the position of the intermediate subframe between the two interpolation points.
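Under the assumption that the three functions correspond to the log, linear, and squared variants discussed earlier, they might be sketched as follows for a single harmonic (illustrative only):

```python
import math

# Candidate interpolation functions indexed by F; a and b are the
# reconstructed magnitudes of one harmonic at the two interpolation
# points, and t (0 < t < 1) is the position of the intermediate
# subframe between them.
INTERP_FUNCS = [
    lambda a, b, t: 2 ** ((1 - t) * math.log2(a) + t * math.log2(b)),
    lambda a, b, t: (1 - t) * a + t * b,
    lambda a, b, t: math.sqrt((1 - t) * a ** 2 + t * b ** 2),
]

def interpolate_with(F, a, b, t):
    """Apply the interpolation function selected by index F."""
    return INTERP_FUNCS[F](a, b, t)
```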
The quantization/interpolation error for each combination of interpolation points is evaluated for each combination of interpolation functions and the combination of interpolation points and interpolation functions that produces the lowest error is selected. A parameter that quantifies the location of the interpolation points is generated for transmission to the decoder along with a parameter that quantifies the interpolation function choice for each interpolation point. For example, 0 is sent if I0,l(t) was chosen, 1 is sent if I1,l(t) was chosen, and 2 is sent if I2,l(t) was chosen.
Other interpolation techniques that may be employed include, for example, formant interpolation, parametric interpolation, and parabolic interpolation.
In formant interpolation, the magnitudes at the endpoints are analyzed to find formant peaks and troughs, and linear interpolation in frequency is used to shift the position of moving formants between the two end points. This interpolation method may also account for formants that split or merge.
In parametric interpolation, a parametric model, such as an all pole model, is fitted to the spectral magnitudes at the endpoints. The model parameters then are interpolated to produce interpolated magnitudes from the parameters at intermediate subframes.
Parabolic interpolation uses methods such as those discussed above, but with the magnitudes at three subframes rather than two subframes.
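For example, a parabolic variant might evaluate the parabola passing through the magnitudes of one harmonic at three subframe times (a sketch, not a prescribed implementation):

```python
def parabolic_interp(m0, m1, m2, t0, t1, t2, t):
    """Evaluate the parabola through (t0, m0), (t1, m1), (t2, m2) at
    time t, using Lagrange's form with magnitudes at three subframes
    rather than two."""
    return (m0 * (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
            + m1 * (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
            + m2 * (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1)))
```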
The decoder receives the interpolation function parameter for each interpolation point and uses the corresponding interpolation function to regenerate the same interpolated magnitudes that were chosen by the encoder.
Referring to FIG. 8, a procedure 800 may be used by the frame generator 210 to select both the combination of interpolation points and the interpolation function for each interpolation point.
The procedure 800 proceeds by setting k to 0 (step 810) and, for each point in CN,Pk, quantizing and reconstructing the magnitudes (step 812).
The procedure 800 then sets the interpolation function index “F” to 0 (step 814) and interpolates the magnitudes for the intermediate subframes (i.e., n not in the set Ck) using the interpolation function corresponding to F (step 815).
After filling in the intermediate magnitudes for each combination, the procedure 800 evaluates the error for this combination of interpolation points (step 820).
The procedure 800 then increments F (step 821) and determines whether the maximum value of F has been exceeded (step 823). If not, the procedure 800 repeats the interpolating step using the interpolation function corresponding to the new value of F (step 815) and proceeds as discussed above.
If the maximum value of F has been exceeded, the procedure 800 increments k (step 825) and determines whether the maximum value of k has been exceeded (step 830). If not, the procedure 800 repeats the quantizing and reconstructing (step 812) for the new value of k and proceeds as discussed above.
If the maximum value of k has been exceeded, the procedure 800 selects the combination of interpolation points and the interpolation function that minimize the error (step 835). The associated bits from the magnitude quantizer, the associated interpolation function index, and the associated magnitude sampling index are transmitted across the communication channel.
While the techniques are described largely in the context of an MBE vocoder, the described techniques may be readily applied to other systems and/or vocoders. For example, other MBE type vocoders may also benefit from the techniques regardless of the bit rate or frame size. In addition, the techniques described may be applicable to many other speech coding systems that use a different speech model with alternative parameters (such as STC, MELP, MB-HTC, CELP, HVXC or others) or that use different methods for analysis or quantization. Other implementations are within the scope of the following claims.