This application is related to U.S. patent application Ser. No. 11/383,506, filed on the same date as this application.
The present invention relates, in general, to signal compression systems and, more particularly, to Code Excited Linear Prediction (CELP)-type speech coding systems.
Compression of digital speech and audio signals is well known. Compression is generally required to efficiently transmit signals over a communications channel, or to store said compressed signals on a digital media device, such as a solid-state memory device or computer hard disk. Although there exist many compression (or “coding”) techniques, one method that has remained very popular for digital speech coding is known as Code Excited Linear Prediction (CELP), which is one of a family of “analysis-by-synthesis” coding algorithms. Analysis-by-synthesis generally refers to a coding process by which multiple parameters of a digital model are used to synthesize a set of candidate signals that are compared to an input signal and analyzed for distortion. A set of parameters that yield the lowest distortion is then either transmitted or stored, and eventually used to reconstruct an estimate of the original input signal. CELP is a particular analysis-by-synthesis method that uses one or more codebooks that each essentially comprises sets of code-vectors that are retrieved from the codebook in response to a codebook index.
In modern CELP coders, there is a problem with maintaining high quality speech reproduction. The problem arises because there are too few bits available to appropriately model the "excitation" sequences or "codevectors" which are used as the stimulus to a synthesis filter. An improved method for determining the codebook related parameters has been described in U.S. patent application Ser. No. 11/383,506, filed on the same date as this application, which is incorporated herein by reference. That application describes a low complexity, joint optimization process and method. However, there remains a need for improving performance of CELP type speech coders at low bit rates.
Embodiments of the invention concern a speech coder that varies a codebook configuration for efficiently coding a speech signal based on parameters extracted from the information signal. The codebook configuration determines the contribution of one or more codebooks used to code the speech signal. The codebook configuration can be associated with a codebook configuration parameter that describes a bit allocation between the one or more codebooks. For example, the codebook configuration parameter can identify an optimal number of bits in a pitch related codebook and a corresponding optimal number of bits in a fixed codebook. The speech coder can identify the optimal number of bits for the bit allocation between two or more codebooks based on one or more performance metrics during a coding of the speech signal. In one example, a first performance metric can be a squared error metric and a second performance metric can be a prediction gain metric.
Stated specifically, a method and system for adaptive bit allocation among a set of codebooks and codebook related parameters is provided. The method provides a low complexity, codebook optimization process to increase speech modeling performance of CELP type speech coders at low bit rates. In practice, a combination of fixed codebook and adaptive codebook contributions are determined based on one or more performance metrics. A codebook configuration is determined from the one or more performance metrics. Upon selection of the codebook configuration, multiple related codebook parameters are determined. The performance metrics identify a contribution of the adaptive codebook and a contribution of the fixed codebook that increases information modeling accuracy. That is, for certain types of speech, a bit-allocation for the adaptive codebooks and the fixed codebooks is adjusted to minimize an error criterion, wherein the bit-allocation establishes the contribution of each of the codebooks. The method and system can dynamically allocate bits to the adaptive codebook and fixed codebook components, such that an increase in overall performance is attained with reduced overhead in computational complexity and memory.
One example of the speech coder of the current invention implements a method for analysis-by-synthesis encoding of an information signal. The method can include the steps of generating a weighted reference signal based on the information signal, generating a first synthetic signal based on a first pitch-related codebook, generating a first performance metric between the reference signal and the first synthetic signal, generating a second synthetic signal based on a second pitch-related codebook, generating a second performance metric between the reference signal and the second synthetic signal, selecting a codebook configuration parameter based on the first and second performance metrics, and outputting the codebook configuration parameter for use in reconstructing an estimate of the input signal.
In another embodiment, one or more codebook configuration parameters can be determined for a speech frame and encoded in a variable length code word. For example, a codebook configuration can be determined for one or more subframes of the speech frame. Each subframe can have a corresponding configuration parameter associated with the subframe. In one example, the codebook configuration parameters for the subframes can be encoded using Huffman coding. The Huffman codeword can be sent to a decoder, which can identify the one or more codebook configuration parameters from the Huffman codeword. The configuration parameters describe the number of bits used in an adaptive codebook and the number of bits used in a fixed codebook for decoding.
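The per-subframe configuration signaling described above can be sketched with a small Huffman coder. This is an illustrative Python sketch, not the codec's actual bitstream format: the configuration indices, their frequencies, and the helper names (`huffman_code`, `encode`, `decode`) are hypothetical.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code {config_index: bitstring} from {config_index: frequency}.
    Frequent configurations receive shorter codewords."""
    heap = [[wt, [sym, ""]] for sym, wt in sorted(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single symbol still needs one bit
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def encode(seq, code):
    """Concatenate the codewords for a sequence of per-subframe configurations."""
    return "".join(code[m] for m in seq)

def decode(bits, code):
    """Walk the bitstring, emitting a configuration each time a codeword matches."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out
```

In this sketch the decoder recovers the per-subframe configuration parameters from the variable length codeword alone, mirroring the behavior described above.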
For example, the method can include the steps of receiving at least one parameter related to a codebook configuration, coding the codebook configuration to produce a variable length codeword, and conveying the variable length codeword to a decoder for interpreting the codebook parameter and reconstructing an estimate of the input signal. The one or more codebook configuration parameters corresponding to one or more subframes of a speech frame can be encoded in a variable length codeword. Each codebook parameter can identify an adaptive codebook having a first distribution of bits and a fixed codebook having a second distribution of bits.
Accordingly, a method for decoding parameters for use in reconstructing an estimate of an encoder input signal is provided. The method can include receiving a variable length codeword representing a codebook configuration parameter, receiving a first code related to an adaptive codebook, receiving a second code related to a fixed codebook, decoding the codes related to the adaptive codebook and the fixed codebook based on the codebook configuration parameter, and generating an estimate of the encoder input signal from the adaptive codebook and fixed codebook.
Another embodiment of the invention is a method for analysis-by-synthesis encoding of an information signal. The method can include the steps of generating a weighted reference signal based on the information signal, generating multiple synthetic signals using multiple pitch related codebooks, determining performance metrics based on the reference signal and the multiple synthetic signals, selecting at least one codebook configuration parameter based on the performance metrics, generating a second synthetic signal using a second pitch related codebook, encoding the at least one codebook configuration parameter in a variable length codeword, and conveying the variable length codeword for use in reconstructing an estimate of the input signal.
Referring to prior art encoder 100, operation proceeds as follows.
The quantized spectral, or LP, parameters are also conveyed locally to an LPC synthesis filter 105 that has a corresponding transfer function 1/Aq(z). LPC synthesis filter 105 also receives a combined excitation signal u(n) from a first combiner 110 and produces an estimate of the input signal ŝ(n) based on the quantized spectral parameters Aq and the combined excitation signal u(n). Combined excitation signal u(n) is produced as follows. An adaptive codebook code-vector cτ is selected from an adaptive codebook (ACB) 103 based on an index parameter τ. The adaptive codebook code-vector cτ is then weighted based on a gain parameter β 109 and the weighted adaptive codebook code-vector is conveyed to first combiner 110. A fixed codebook code-vector ck is selected from a fixed codebook (FCB) 104 based on an index parameter k. The fixed codebook code-vector ck is then weighted based on a gain parameter γ 108 and is also conveyed to first combiner 110. First combiner 110 then produces combined excitation signal u(n) by combining the weighted version of adaptive codebook code-vector cτ with the weighted version of fixed codebook code-vector ck. Contents of the ACB 103 are then updated using a version of signal u(n) delayed by the subframe length L.
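The excitation construction of encoder 100 can be sketched as follows. This is a minimal Python illustration of u(n) = β·cτ(n) + γ·ck(n) and the ACB memory update; the integer pitch delay, memory length, and vector values are hypothetical, and fractional pitch and gain quantization are omitted.

```python
def acb_codevector(acb_memory, tau, L):
    """Read an L-sample code-vector from the ACB memory at integer pitch
    delay tau (simplified: tau >= L is assumed so no wrap-around is needed)."""
    start = len(acb_memory) - tau
    return acb_memory[start:start + L]

def combined_excitation(c_tau, c_k, beta, gamma):
    """u(n) = beta*c_tau(n) + gamma*c_k(n), as formed by first combiner 110."""
    return [beta * a + gamma * b for a, b in zip(c_tau, c_k)]

def update_acb(acb_memory, u, max_len=146):
    """Append the new excitation and trim the memory to a fixed history
    length (146 is a typical maximum pitch delay, used here as an assumption)."""
    acb_memory.extend(u)
    del acb_memory[:-max_len]
```

A short usage example: with a subframe of L=4 and delay tau=4, the ACB code-vector is simply the last four excitation samples.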
LPC synthesis filter 105 conveys the input signal estimate ŝ(n) to a second combiner 112. Second combiner 112 also receives input signal s(n) and subtracts the estimate of the input signal ŝ(n) from the input signal s(n). The difference between input signal s(n) and input signal estimate ŝ(n) is applied to a perceptual error weighting filter 106, which filter produces a perceptually weighted error signal e(n) based on the difference between ŝ(n) and s(n) and a weighting function W(z). Perceptually weighted error signal e(n) is then conveyed to squared error minimization/parameter quantization block 107. Squared error minimization/parameter quantization block 107 uses the error signal e(n) to determine an optimal set of codebook-related parameters τ, β, k, and γ that produce the best estimate ŝ(n) of the input signal s(n).
The block diagram of decoder 200 of the prior art corresponds to encoder 100. As one of ordinary skill in the art realizes, the coded bit-stream produced by encoder 100 is used by a demultiplexer 202 in decoder 200 to decode the optimal set of codebook-related parameters, that is, τ, β, k, and γ, in a process that is reverse to the synthesis process performed by encoder 100. Thus, if the coded bit-stream produced by encoder 100 is received by decoder 200 without errors, the speech ŝ(n) output by decoder 200 can be reconstructed as an exact duplicate of the input speech estimate ŝ(n) produced by encoder 100.
While CELP encoder 100 is conceptually useful, it is not a practical implementation of an encoder where it is desirable to keep computational complexity as low as possible. As a result, a more practical encoder 300, which performs the error minimization in the weighted domain, is described below.
From the structure of encoder 300, the weighted error signal may be expressed in z-transform notation as:
E(z)=W(z)(S(z)−Ŝ(z)). (1)
From this expression, the weighting function W(z) can be distributed and the input signal estimate ŝ(n) can be decomposed into the filtered sum of the weighted codebook code-vectors:
E(z)=W(z)S(z)−(W(z)/Aq(z))(βCτ(z)+γCk(z)). (2)
The term W(z)S(z) corresponds to a weighted version of the input signal. By letting the weighted input signal W(z)S(z) be defined as Sw(z)=W(z)S(z) and by further letting synthesis filter 105 of encoder 100 now be defined by a transfer function H(z)=W(z)/Aq(z), Equation 2 can be rewritten as follows:
E(z)=Sw(z)−H(z)(βCτ(z)+γCk(z)). (3)
By using z-transform notation, the filter states need not be explicitly defined. Now proceeding using vector notation, where the vector length L is a length of a current subframe, Equation 3 can be rewritten as follows by using the superposition principle:
e=sw−H(βcτ+γck)−hzir, (4)
where:
hzir is the zero-input response of weighted synthesis filter H(z), which accounts for the filter memory, (5)
and the weighted target signal is defined as:
xw=sw−hzir. (6)
From the expression above, a formula can be derived for minimization of the perceptually weighted error, that is, ∥e∥2, by squared error minimization/parameter block 107. A norm of the squared error is given as:
ε=∥e∥2=∥xw−βHcτ−γHck∥2. (7)
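The error norm of Equation 7 can be evaluated numerically as follows. This is an illustrative Python sketch, assuming H is the lower-triangular zero-state convolution matrix built from the impulse response h of H(z); the impulse response, code-vectors, and gains used in the example are hypothetical.

```python
def conv_matrix(h, L):
    """Lower-triangular Toeplitz (zero-state) convolution matrix H formed
    from impulse response h, so that H @ c filters code-vector c."""
    return [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(L)]
            for i in range(L)]

def mat_vec(H, v):
    return [sum(Hij * vj for Hij, vj in zip(row, v)) for row in H]

def weighted_error_energy(xw, h, c_tau, c_k, beta, gamma):
    """epsilon = ||xw - beta*H*c_tau - gamma*H*c_k||^2  (Equation 7)."""
    L = len(xw)
    H = conv_matrix(h, L)
    y_t = mat_vec(H, c_tau)
    y_k = mat_vec(H, c_k)
    e = [x - beta * a - gamma * b for x, a, b in zip(xw, y_t, y_k)]
    return sum(ei * ei for ei in e)
```

As a sanity check, when the weighted target happens to equal the weighted sum of the two filtered code-vectors, the error energy is zero.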
Due to complexity limitations, practical implementations of speech coding systems typically minimize the squared error in a sequential fashion. That is, the ACB component may be optimized first (by assuming the FCB contribution is zero), and then the FCB component is optimized using the given (previously optimized) ACB component. The ACB/FCB gains, that is, codebook-related parameters β and γ, may or may not be re-optimized, that is, quantized, given the sequentially selected ACB/FCB code-vectors cτ and ck.
The theory for performing the sequential search is as follows. First, the norm of the squared error as provided in Equation 7 is modified by setting γ=0, and then expanded to produce:
ε=∥xw−βHcτ∥2=xwTxw−2βxwTHcτ+β2cτTHTHcτ. (8)
Minimization of the squared error is then determined by taking the partial derivative of ε with respect to β and setting the quantity to zero:
∂ε/∂β=−2xwTHcτ+2βcτTHTHcτ=0. (9)
This yields the optimal ACB gain:
β=xwTHcτ/(cτTHTHcτ). (10)
Substituting the optimal ACB gain back into Equation 8 gives:
ε=min over τ{xwTxw−(xwTHcτ)2/(cτTHTHcτ)}, (11)
where τ* is an optimal ACB index parameter, that is, an ACB index parameter that minimizes the value of the bracketed expression. Since xw is not dependent on τ, Equation 11 can be rewritten as follows:
τ*=arg max over τ{(xwTHcτ)2/(cτTHTHcτ)}. (12)
Now, by letting yτ equal the ACB code-vector cτ filtered by weighted synthesis filter 303, that is, yτ=Hcτ, Equation 12 can be simplified to:
τ*=arg max over τ{(xwTyτ)2/(yτTyτ)}, (13)
and likewise, Equation 10 can be simplified to:
β=xwTyτ*/(yτ*Tyτ*). (14)
Thus Equations 13 and 14 represent the two expressions necessary to determine the optimal ACB index τ and ACB gain β in a sequential manner. These expressions can now be used to determine the sequentially optimal FCB index and gain expressions. First, letting x2=xw−βHcτ* denote the remaining target after the optimal ACB contribution is removed, the squared error of Equation 7 becomes:
ε=∥x2−γHck∥2, (15)
where γHck is a filtered and weighted version of FCB code-vector ck, that is, FCB code-vector ck filtered by weighted synthesis filter 304 and then weighted based on FCB gain parameter γ. Similar to the above derivation of the optimal ACB index parameter τ*, it is apparent that:
k*=arg max over k{(x2THck)2/(ckTHTHck)}, (16)
where k* is a sequentially optimal FCB index parameter, that is, an FCB index parameter that maximizes the value in the bracketed expression. By grouping terms that are not dependent on k, that is, by letting d2T=x2TH and Φ=HTH, Equation 16 can be simplified to:
k*=arg max over k{(d2Tck)2/(ckTΦck)}, (17)
in which the sequentially optimal FCB gain γ is given as:
γ=d2Tck*/(ck*TΦck*). (18)
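The sequential search of Equations 13 through 18 can be sketched as follows. This is an illustrative Python sketch: the filtered code-vectors yτ=Hcτ and yk=Hck are assumed precomputed, codebooks are given as plain lists, and the small example vectors are hypothetical.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def best_index(target, filtered_vectors):
    """arg max over i of (target^T y_i)^2 / (y_i^T y_i), plus the matching
    optimal gain target^T y / (y^T y). Assumes at least one nonzero vector."""
    best_i, best_metric = None, float("-inf")
    for i, y in enumerate(filtered_vectors):
        energy = dot(y, y)
        if energy == 0.0:
            continue  # skip degenerate entries
        corr = dot(target, y)
        metric = corr * corr / energy
        if metric > best_metric:
            best_i, best_metric = i, metric
    y = filtered_vectors[best_i]
    return best_i, dot(target, y) / dot(y, y)

def sequential_search(xw, acb_filtered, fcb_filtered):
    """Sequential ACB-then-FCB search: the FCB contribution is assumed zero
    while selecting the ACB entry (Eqs. 13-14), then the FCB models the
    remaining target x2 = xw - beta*y_tau* (Eqs. 15-18)."""
    tau, beta = best_index(xw, acb_filtered)
    x2 = [x - beta * y for x, y in zip(xw, acb_filtered[tau])]
    k, gamma = best_index(x2, fcb_filtered)
    return tau, beta, k, gamma
```

Note that this procedure is exactly the sub-optimal sequential process criticized in the following paragraph: the ACB entry is chosen without regard to what the FCB can later model.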
Thus, encoder 300 provides a method and apparatus for determining the optimal excitation vector-related parameters τ, β, k, and γ, in a sequential manner. However, the sequential determination of parameters τ, β, k, and γ is actually sub-optimal since the optimization equations do not consider the effects that the selection of one codebook code-vector has on the selection of the other codebook code-vector.
Embodiments in accordance with the present invention may be more fully described with reference to encoder 400 and flowchart 500 below.
As can be seen in encoder 400, the single ACB/FCB pair of the prior art is replaced by a set of M adaptive codebooks 402 through 403 and M fixed codebooks 404 through 405, selectable by an ACB/FCB configuration parameter m.
The ACB/FCB configuration parameter m (1≦m≦M) selects a combination of ACB and FCB that trades off the bit allocation between the two codebooks based on the output of the error minimization/adaptive bit allocation unit 408. For example, the error minimization/adaptive bit allocation unit 408 can determine a configuration, m, that provides a compromise between the bits allocated to the ACB and the bits allocated to the FCB for optimally encoding the input speech signal, s. The configuration parameter, m, identifies the ACB and FCB codebooks that are to be employed during encoding. Notably, the configuration parameter, m, can change during the encoding process to accurately model the input speech signal.
In general, the phonetic content of speech can vary such that differing contributions of the codebooks can be warranted. For example, speech can be composed of voiced and unvoiced portions, and the contributions of the unvoiced portions and voiced portions can change over time. Whereas consonants are typical of unvoiced speech and have a more abrupt nature, vowels are typical of voiced speech and have a more periodic nature. Unvoiced speech and speech onsets can rely heavily on the FCB contribution, while periodic signals such as steady state voiced speech can rely heavily on the ACB contribution. As another example, transition voiced speech can rely on a more balanced contribution from both the ACB and FCB. Thus, an embodiment of the present invention selects an ACB/FCB configuration m that optimizes the allocation of bits to the respective ACB/FCB contributions, balancing the contribution of the ACB and the FCB based on the content of the speech. In practice, the error minimization/bit allocation unit 408 determines the bit allocations that result in a minimum error, e, to produce the best estimate ŝ(n) of the input signal s(n).
In the current invention, the derivation of the error expression is modified from that in the prior art as follows. In general terms, Equation 13 may be modified to take the form:
τm*=arg max over τm{(xwTyτm)2/(yτmTyτm)}, (19)
where τm is the ACB index parameter associated with the mth ACB, and τm* is the optimal ACB index parameter for ACB m. From this expression, it may be possible to then select an ACB/FCB configuration using the expression:
m=arg max{ε1′, …, εM′}, (20)
where εm′ is a form of the error expression which corresponds to:
εm′=(xwTyτm*)2/(yτm*Tyτm*), (21)
where yτm*=Hcτm* is the optimal ACB code-vector of the mth ACB filtered by the weighted synthesis filter.
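The configuration selection of Equation 20 can be sketched as follows. This is an illustrative Python sketch: each of the M adaptive codebooks is represented as a list of pre-filtered code-vectors (a larger bit allocation simply means more entries), and the example data is hypothetical.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def acb_merit(xw, filtered_codebook):
    """Best value of (xw^T y)^2 / (y^T y) over one ACB's filtered
    code-vectors -- the merit term eps_m' associated with that codebook."""
    return max(dot(xw, y) ** 2 / dot(y, y)
               for y in filtered_codebook if dot(y, y) > 0.0)

def select_configuration(xw, codebooks):
    """m = arg max{eps_1', ..., eps_M'}: evaluate the merit of each of the
    M candidate ACBs and return the 1-based configuration index m."""
    merits = [acb_merit(xw, cb) for cb in codebooks]
    return max(range(len(merits)), key=merits.__getitem__) + 1
```

In the usage below, a two-entry codebook (more allocated bits) contains a code-vector better aligned with the target, so configuration m=2 is selected.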
For example, referring to encoder 400, the M adaptive codebooks can be arranged in order of increasing codevector resolution.
Each of the codevectors in a first codebook can be represented by a certain number of bits, for example N bits. Moreover, each of the codevectors in the second codebook can be represented by a certain number of bits that is more than the number of bits in the preceding codebook, for example N+B bits. Similarly, the number of bits used to represent the codevectors in each codebook 1 to M can increase with each codebook to increase the codevector resolution. Increasing the bits can increase the modeling resolution of the codevectors. Notably, the set of codevectors in one codebook differs from a set of codevectors in another codebook by the number of bits assigned to the codevectors in the codebook.
For example, the first codebook, ACB 1 (402), may allocate 4 bits for the codevectors in that codebook. The second codebook ACB 2, may allocate 8 bits for the codevectors in that codebook. Understandably, increasing the number of bits can improve the modeling performance for certain portions of speech. For example, an adaptive codebook having codevectors with a high number of bits may accurately model voiced speech. However, a fixed codebook may not require that same number of bits to represent the voiced speech. In contrast, a fixed codebook having codevectors with a high number of bits may accurately model unvoiced speech. However, an adaptive codebook may not require that same number of bits to represent the unvoiced speech. Accordingly, the number of bits allocated to the codevectors of the codebooks can be disproportionately assigned to take advantage of the changing nature of speech.
Referring again to encoder 400, an initial first excitation vector cτm, preferably an adaptive codebook (ACB) code-vector, is generated by adaptive codebook 402 based on an initial first excitation vector-related index parameter τ.
The initial first excitation vector cτm is conveyed to a first zero state weighted synthesis filter 303 that has a corresponding transfer function Hzs(z), or in matrix notation H. Weighted synthesis filter 303 filters the initial first excitation vector cτm to produce a signal yτm(n) or, in vector notation, a vector yτm, wherein yτm=Hcτm. The filtered initial first excitation vector yτm is then weighted by a first gain 109 based on an initial first excitation vector-related gain parameter β, and the weighted, filtered initial first excitation vector, βHcτm, or first synthetic signal βyτm, is conveyed to second combiner 321.
Second combiner 321 subtracts the weighted, filtered initial first excitation vector βHcτm, or first synthetic signal βyτm, from the target input signal or vector xw to produce an intermediate signal x2(n), or in vector notation an intermediate vector x2, wherein x2=xw−βHcτm. Second combiner 321 then conveys intermediate signal x2(n), or vector x2, to a third combiner 307. Third combiner 307 also receives a weighted, filtered version of an initial second excitation vector ckm, preferably a fixed codebook (FCB) code-vector. The initial second excitation vector ckm is generated by fixed codebook 404 based on an initial second excitation vector-related index parameter k, preferably an FCB index parameter. The initial second excitation vector ckm is conveyed to a second zero state weighted synthesis filter 304 that also has a corresponding transfer function Hzs(z), or in matrix notation H. Weighted synthesis filter 304 filters the initial second excitation vector ckm to produce a signal ykm(n), or in vector notation a vector ykm, where ykm=Hckm. The filtered initial second excitation vector ykm is then weighted by a second gain 108 based on an initial second excitation vector-related gain parameter γ. The weighted, filtered initial second excitation vector γHckm, or second synthetic signal γykm, is then also conveyed to third combiner 307.
Third combiner 307 subtracts the weighted, filtered initial second excitation vector γHckm, or second synthetic signal γykm, from the intermediate signal x2(n), or intermediate vector x2, to produce a perceptually weighted error signal e(n), or e. Perceptually weighted error signal e(n) is then conveyed to the error minimization unit 408, preferably a squared error minimization/parameter quantization block that includes adaptive bit allocation. Notably, the error minimization unit 408 can adjust the gain elements β and γ to minimize the perceptually weighted error signal, or mean squared error criterion, e(n). Error minimization/bit allocation/parameter quantization unit 408 uses the error signal e(n) to jointly determine multiple excitation vector-related parameters τ, β, k and γ that optimize the performance of encoder 400 by minimizing a squared sum of the error signal e(n) 308. The optimization includes identifying the bit-allocations for the ACB and FCB that produce the optimal first and second excitation vectors. Thus, optimization of index parameters τ and k, that is, a determination of τ* and k*, with regard to the M bit-allocated codebooks respectively results in a generation (526) of the optimal first excitation vector cτm* by the adaptive codebook 402, and the optimal second excitation vector ckm* by the fixed codebook 404. Optimization of parameters β and γ, with regard to the M bit-allocated codebooks, respectively results in optimal weightings of the filtered versions of the optimal excitation vectors cτm* and ckm*, thereby producing a best estimate of the input signal s(n).
Unlike the squared error minimization/parameter quantization block of prior art encoder 300, which determines an optimal set of multiple codebook-related parameters τ, β, k and γ by performing a sequential optimization process, error minimization unit 408 of encoder 400 determines the optimal set of excitation vector-related parameters τm, β, km and γ by evaluating M codebook bit allocations and gain scalings in a non-sequential manner. By performing a bit allocation and gain scaling process during error minimization, the excitation vector-related parameters τm, β, km and γ can be optimized in a manner that accounts for their interdependence. That is, the effect that the selection of one excitation vector has on the selection of the other excitation vector is taken into consideration in the optimization of each parameter.
In particular, the parameters τm, β, km and γ are dependent on the bit-allocations for each of the M codebook configurations. The various bit-allocations produce excitation vectors cτm* and ckm* having resolutions dependent on the number of bits allocated to the codebook. Understandably, certain portions of speech may require more or fewer bits from the ACB and FCB codebooks to accurately model the speech. Error minimization/bit allocation/parameter quantization unit 408 can identify the optimal bit-allocations for producing the best estimate of speech.
The optimization process identifies the bit-allocations for the adaptive codebook and the bit-allocations for the fixed codebook that together produce the best estimate of the input signal s(n). Error minimization/adaptive bit allocation/parameter quantization unit 408 selects a codebook configuration parameter, m, based on a first and a second performance metric. The codebook configuration parameter, m, in effect, identifies a first distribution of bits for a first adaptive (pitch-related) codebook and a second distribution of bits for a second adaptive (pitch-related) codebook. The configuration parameter, m, identifies the codebook which corresponds to a particular bit-allocation. For example, error minimization/adaptive bit allocation/parameter quantization unit 408 can identify a distribution of bits (a codebook configuration m) for adaptive codebooks 402 through 403 and fixed codebooks 404 through 405 that minimizes the power of the weighted error signal e(n), that is, a bit-allocation that results in the minimum closed loop analysis-by-synthesis error.
Referring to flowchart 500, a method for selecting a codebook configuration based on a first and a second performance metric is described.
The error minimization unit/adaptive bit allocation unit 408 generates and evaluates the error metrics and the prediction gains for selecting the codebook configuration. For example, a first configuration can be evaluated against a second configuration, and the second configuration can be selected if the performance metrics of the second configuration exceed those of the first configuration with respect to the error bias and the prediction gain bias. The flowchart 500 describes the method steps for a configuration m=2; however, the evaluation can continue if more than two configurations are provided. Understandably, the method can be extended to multiple codebook configurations. For instance, if the second configuration is selected over the first configuration, a third configuration can be evaluated against the second configuration. In practice, the codebook configuration evaluation ceases when a new configuration does not exceed the performance metrics of the current configuration. For example, if the third configuration does not exceed the second configuration, the fourth and fifth configurations will not be evaluated, even if M=5 configurations are available.
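The early-stopping evaluation described above can be sketched as follows. This is an illustrative Python sketch: the per-configuration metric pairs and the bias values are hypothetical, and the two metrics stand in for the error merit and prediction gain of the text.

```python
def choose_configuration(metrics, err_bias=0.0, gain_bias=0.0):
    """Greedy configuration selection. metrics[i] = (merit, prediction_gain)
    for configurations evaluated in order m = 1, 2, ... Configuration m+1
    replaces m only if it beats the current configuration on BOTH biased
    comparisons; the search stops at the first configuration that fails,
    and any later configurations are never evaluated."""
    current = 0
    for nxt in range(1, len(metrics)):
        merit_ok = metrics[nxt][0] > metrics[current][0] + err_bias
        gain_ok = metrics[nxt][1] > metrics[current][1] + gain_bias
        if merit_ok and gain_ok:
            current = nxt
        else:
            break  # cease evaluation once a configuration fails to improve
    return current + 1  # 1-based configuration index m
```

The bias terms make the comparison conservative: a new configuration must exceed the current one by more than the bias before it is adopted.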
In summary, the error minimization/adaptive bit allocation/parameter quantization unit 408 (hereinafter error minimization unit) assesses the modeling error for each of the ACB and FCB codebooks and identifies the bit-allocation for these codebooks that provides the least error; that is, the contribution of each codebook that provides the highest modeling performance. For example, the error minimization unit 408 evaluates each of the M ACB codebooks to determine the list of codevectors, τm, producing the smallest error, and selects the codebook having the codevector producing the smallest error. The error minimization unit 408 also evaluates each of the M FCB codebooks, km, to determine the list of codevectors producing the smallest error, and selects the codebook having the codevector producing the smallest error; that is, the codebook that corresponds to the maximum value of the parameter εm′.
Upon determining a codebook configuration based on the evaluation of error performance metrics and prediction gain metrics, the codevectors and codebook gains can be determined. For example, upon determining m=2, the multiple codebook-related parameters τ, β, k and γ for m=2 can be determined by the methods used in the sequential optimization process presented in the discussion of prior art encoder 300 above.
In the aforementioned embodiment, each of the codebooks is assigned a different number of bits to represent the codevectors in the codebook. The number of bits assigned to each codebook is fixed, and the number of adaptive and fixed codebooks is fixed. The error minimization unit 408 identifies the codebook configuration providing the optimal bit-allocation prior to a determination of the multiple codebook related parameters τ, β, k and γ. Alternatively, in another embodiment of the invention, bits can be allocated dynamically (adaptively) to the codevectors during an encoding. Namely, the error minimization unit 408 can increase or decrease the number of bits in a codebook for one or more codevectors to maximize a performance metric. For example, bits can be allocated between the adaptive codebook 402 and the fixed codebook 404 to increase or decrease the codevector resolution in order to minimize the error criterion, e 308. The error minimization unit 408 can dynamically allocate the bits in a non-sequential order based on the first and second performance metrics. That is, the bit allocations for the adaptive codebooks and the fixed codebooks can occur dynamically within the same codebook. In practice, the error minimization unit 408 identifies a configuration, m, for a codebook which provides an optimal compromise between the quality of the first synthetic signals generated by the ACB and the quality of the second synthetic signals generated by the FCB; the optimal configuration produces the minimum error. The configuration can identify the number of bits assigned to the adaptive codebook and the number of bits assigned to the fixed codebook.
For example, Table 1 shows two bit assignment configurations available for an encoding implementation having two codebooks, ACB and FCB. The first configuration, m=1, reveals that 0 bits are assigned to the adaptive codebook, and 31 bits are assigned to the fixed codebook. The second configuration, m=2, reveals that 4 bits are assigned to the adaptive codebook, and 27 bits are assigned to the fixed codebook. The number of bits allocated is not limited to those shown in Table 1, which are provided only as an example. In practice, the configurations can be stored in a data memory and accessed by the error minimization unit/adaptive bit allocation unit 408. In this exemplary table, the total number of bits available to both the codebooks is 32. Notably, a configuration identifies the allocation of bits to each of the codebooks. Those of ordinary skill in the art realize that the arrangement of the codebooks and their respective codevectors may be varied without departing from the spirit and scope of the present invention. Embodiments of the invention are not limited to only two codebooks, and more than two codebooks are herein contemplated. For example, the first codebook may be a fixed codebook, and the second codebook may be an adaptive codebook.
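The configuration lookup described above can be sketched as a simple table. This is an illustrative Python sketch reconstructing Table 1 from the figures in the text; the one-bit configuration flag in the comment is an assumption (it would reconcile the 31 excitation bits per configuration with the stated 32-bit total), not something the text states.

```python
# Hypothetical reconstruction of Table 1: m -> (ACB bits, FCB bits).
BIT_ALLOCATION = {
    1: (0, 31),  # m=1: no ACB delay adjustment; all excitation bits to the FCB
    2: (4, 27),  # m=2: 4 bits refine the ACB delay; the FCB drops to 5 pulses
}

def allocation(m):
    """Return (acb_bits, fcb_bits) for configuration m, as unit 408 would
    read them from data memory."""
    return BIT_ALLOCATION[m]

# Each configuration spends the same 31 excitation bits. One additional bit
# to signal m itself (an assumption) would account for the 32-bit total
# cited in the text.
```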
In another aspect, the dynamic bit-allocation strategy of the invention can be applied to Factorial Pulse Coding. For example, in the IS-127 half rate case (4.0 kbps), the FCB uses a multi-pulse configuration in which the excitation vector ck contains only three non-zero values. Since there are very few non-zero elements within ck, the computational complexity involved with EQ (18) is relatively low. For the three "pulses," there are only 10 bits allocated for the pulse positions and associated signs for each of the three subframes (of lengths L=53, 53, 54). In this configuration, an associated "track" defines the allowable positions for each of the three pulses within ck (3 bits per pulse plus 1 bit for the composite sign of +, −, + or −, +, −). As shown in Table 4.5.7.4-1 of IS-127, pulse 1 can occupy positions 0, 7, 14, . . . , 49, pulse 2 can occupy positions 2, 9, 16, . . . , 51, and pulse 3 can occupy positions 4, 11, 18, . . . , 53. This is known as "interleaved pulse permutation," which is well known in the art.
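The track structure and bit count above can be verified with a short sketch. This is an illustrative Python sketch of the interleaved pulse permutation tracks cited from Table 4.5.7.4-1 of IS-127 (the function name and defaults are ours, not the standard's).

```python
def pulse_tracks(offsets=(0, 2, 4), step=7, length=54):
    """Interleaved pulse permutation tracks for the IS-127 half-rate FCB:
    pulse i may occupy positions offsets[i], offsets[i]+step, ... < length."""
    return [list(range(off, length, step)) for off in offsets]

tracks = pulse_tracks()
# 8 positions per track -> 3 bits per pulse; plus 1 composite sign bit = 10 bits,
# matching the 10-bit FCB allocation per subframe described in the text.
bits = sum(len(t).bit_length() - 1 for t in tracks) + 1
```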
However, the excitation codevector ck is generally not robust enough to model different phonetic aspects of the input speech. The primary reason for this is that there are too few pulses, which are constrained to too small a vector space. Each pulse takes a certain number of bits, for example, 4 bits per pulse. Accordingly, embodiments of the invention can assign more or fewer bits to the FCB, increasing or decreasing the number of pulses to adequately represent certain portions of speech. Similarly, the number of pulses can be decreased for certain portions of speech, and the bits used for the pulses in the FCB can be applied to the codevectors of the ACB. In this manner, bits can be allocated between the ACB and the FCB to produce codebook configurations optimized for certain types of speech that are encoded using factorial packing.
For example, referring again to
In the m=1 configuration, a 6-pulse FCB is used, comprising 31 bits for the FPC over a subframe length of 54. Understandably, more pulses can be assigned to represent the codevector of the FCB since the bits to represent these pulses are allocated away from the ACB. As is known to those skilled in the art, the number of pulses used in an FCB can be determined through a table look-up. For example, a 5-bit pulse corresponds to an index in an FCB table that determines the number of bits assigned to the FCB codeword for representing the 5-bit pulse. The index is equal to the order of the pulse configuration in the total order.
Referring again to Table 1, configuration m=2 reveals that 4 bits are assigned to the ACB delay adjustment parameter, thereby providing a refinement of the ACB shape over the m=1 configuration. Notably, configuration m=2 allocates more bits to the ACB for increasing the resolution of the ACB codevectors. Configuration m=2 can be selected when the pitch of the speech changes such that the delay contour is a value greater than zero. That is, the 4 bits assigned to the ACB allow a value to be assigned to the delay contour. However, these 4 bits reduce the number of bits available for representing the pulses in the Factorial Pulse Codebook. Configuration m=2 reduces the number of pulses from 6 to 5, thereby reducing the total to 27 bits for the FCB. The total number of bits assigned to both codebooks is a constant for this particular example. That is, the total number of bits is the same for each value of m. Those skilled in the art will appreciate that the number of bits shared between the codebooks does not need to remain fixed, and Table 1 is only an exemplary embodiment illustrating the principles of dynamic bit allocation between codebooks.
The selection of a configuration m can be performed in a manner that dedicates more bits to the ACB in cases where the improvement due to the increased resolution in the ACB parameters exceeds the relative degradation due to the FCB when reducing the number of pulses from 6 to 5. A comprehensive error minimization on all the codevectors of the codebooks can be conducted to determine the optimal bit-allocations. However, such an exhaustive procedure can be computationally demanding, and an alternate, more appropriate, solution can be employed. The lower complexity method uses a biased ACB error minimization process that justifies the reduction of bits in the FCB. In principle, more bits are allocated to the FCB when the performance is significantly greater than that using fewer bits. The performance can be measured with regard to minimizing the error. For example, a bias term (as shown in
to produce an error metric:
Similar processing may then be performed for configuration m=2 to produce an error metric ε2′ corresponding to ACB parameter τ2*. The long-term prediction gain may also be calculated to include in the selection of a configuration m, defined as:
In another embodiment of the present invention, the methods herein described are applied to subframe encoding. For example, a codebook configuration can be selected for each subframe of a frame of speech. The bits required to represent the coding configuration and the bits required to represent the codebooks can be combined into a single combined codeword. The single combined codeword can take advantage of coding redundancies when combining the bits of the subframes. Accordingly, an efficient coding method can be applied to the bits to minimize overhead related to the ACB/FCB configuration information. For example, a Huffman coding scheme can be applied to the bits to achieve higher data compression.
Consider a speech frame containing three subframes wherein 3 bits of information are required to convey the M=2 configurations per subframe to the respective decoding processor. Understandably, a subframe configuration requires a single bit for providing two states, and there are three subframes which require a minimum of 3 bits. However, the configurations can be coded using a variable rate code, such as a Huffman code, to reduce the overhead due to the coding of the M configurations. For example, Table 2 illustrates an exemplary coding configuration using Huffman coding wherein the number of bits varies as a function of the number of pulses per subframe. Table 2 identifies the Huffman code, the pulses per subframe, the number of Huffman bits, the allocation of bits between the ACB and FCB, and the total number of bits. In the particular example, the total number of bits is a constant that is the sum of the Huffman code bits, ACB bits, and FCB bits. The notation 6-6-5, under pulses per subframe, describes the number of pulses per subframe for a frame of speech. For example, 6-6-5 states that there are 6 pulses in subframe 1, 6 pulses in subframe 2, and 5 pulses in subframe 3.
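A variable-length configuration code of this kind might be sketched as follows. Only the codewords '0' (for 6-6-6) and '100' (for 6-6-5) are stated in the text; the remaining codewords below are a hypothetical completion in which the code length grows by 2 bits for each pulse removed from the FCB:

```python
# Hypothetical prefix code over the eight pulses-per-subframe patterns.
# Only '0' (6-6-6) and '100' (6-6-5) appear in the text above; the rest
# are an illustrative, prefix-free completion whose codeword lengths
# grow by 2 bits per removed pulse.
SUBFRAME_CODES = {
    (6, 6, 6): "0",
    (6, 6, 5): "100", (6, 5, 6): "101", (5, 6, 6): "110",
    (6, 5, 5): "11100", (5, 6, 5): "11101", (5, 5, 6): "11110",
    (5, 5, 5): "1111111",
}

def encode_frame_config(pulses):
    """Map a per-subframe pulse pattern, e.g. (6, 6, 5), to its codeword."""
    return SUBFRAME_CODES[tuple(pulses)]
```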
Referring to Table 2, Huffman code 0 states that 6 pulses will be used in each of the 3 subframes and that zero bits are allocated to the ACB. In this arrangement, all the bits are assigned to the FCB to represent the pulses. Notably, only 1 Huffman bit is required for the entire frame versus the 3 bits required without variable length coding. In effect, the overhead of coding M=2 configurations per subframe is captured by using only 1 bit for the entire frame for Huffman code 0. In contrast, Huffman code 100 states that 6 pulses are used in the first 2 subframes followed by 5 pulses in the third subframe. The Huffman code bit-length increases as the number of pulses in the subframes is reduced. Understandably, the proportion of voiced and unvoiced portions in speech is balanced more towards voiced content. That is, most speech is more voiced than unvoiced. Accordingly, a shorter Huffman code for unvoiced regions of speech provides more coding bits for the FCB, and longer Huffman codes corresponding to voiced speech provide more coding bits to the ACB.
For subframes with 5 pulses, the corresponding number of ACB parameter bits is 2 per subframe. That is, each subframe requiring 5 pulses allocates 2 bits to the ACB. For example, frame 6-6-5 allocates 2 bits to the ACB, frame 6-5-5 allocates 2+2 bits to the ACB, and so on. Understandably, embodiments of the invention are not restricted to only 5 and 6 pulses, or 2 bits per pulse. More or fewer bits than this can be employed for the purposes of variable length subframe coding. It should also be noted, in the particular example of Table 2, that when a pulse representing 4 bits is allocated from the FCB to the ACB, 2 of the bits are allocated to the ACB and the remaining 2 bits are allocated to the Huffman codeword. That is, when the bits representing a pulse in an FCB are removed from the FCB, the bits representing the pulse are distributed between the ACB and the variable length codeword. In this arrangement, pulses can be removed from the FCB codebook and applied to the codeword and ACB. This is a particularly beneficial approach for subframe encoding. For example, a speech frame can be represented by one or more subframes. A codebook configuration selector can determine a codebook configuration parameter for each subframe. The codebook configuration parameters of the subframes can be encoded into a single variable length codeword. Accordingly, increased compression can be achieved by taking advantage of the variable length coding scheme used to represent the number of pulses in the FCB.
Referring to
For example, each 6 pulse FCB subframe corresponds to m=1, whereas each 5 pulse FCB subframe corresponds to m=2. For instance, 6-5-6 pulses per subframe in Table 2 corresponds to a 1-2-1 codebook configuration in the three respective subframes. Accordingly, the number of bits for each subframe changes by 2 bits depending on the number of pulses. Recall that each pulse removed from the FCB frees 4 bits, of which 2 bits are distributed to the ACB and 2 bits are distributed to the Huffman code. For instance, a 5 pulse FCB subframe thus requires 2 ACB bits, whereas a 6 pulse FCB subframe requires 0 ACB bits. The number of bits distributed between the ACB and the codeword for each FCB pulse is not limited to this arrangement. More or fewer than 2 bits can be allocated to the ACB and the Huffman code.
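The pulse-count to configuration mapping and the 2/2 bit redistribution just described can be sketched as follows (illustrative names; a minimal sketch, not the claimed implementation):

```python
# Sketch of the mapping described above: a 6-pulse FCB subframe
# corresponds to m=1, a 5-pulse subframe to m=2, and each removed pulse
# frees 4 bits, split 2 to the ACB and 2 to the variable-length
# (Huffman) codeword.
def subframe_configs(pulses_per_subframe, full_pulses=6):
    return [1 if p == full_pulses else 2 for p in pulses_per_subframe]

def redistributed_bits(pulses_per_subframe, full_pulses=6,
                       bits_per_pulse=4, acb_share=2):
    """Return (ACB bits, extra Huffman bits beyond the 1-bit base) freed
    by the pulses removed from the FCB across the frame."""
    removed = sum(full_pulses - p for p in pulses_per_subframe)
    acb_bits = removed * acb_share
    huffman_bits = removed * (bits_per_pulse - acb_share)
    return acb_bits, huffman_bits
```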
Referring back to Table 2, the bits for each of the codebooks can be combined into a single codeword. That is, the FCB bits for all 3 subframes can be combined together to form a large composite codeword, the method of which is described in the related U.S. patent application Ser. No. 11/383,506, filed on the same day and incorporated herein by reference. For example, the FCB bits can be efficiently encoded using a combinatorial factorial packing algorithm. The combinatorial algorithm provides an information segregation property that is more robust to bit errors. For the present example of Table 2, the total number of bits required for the 3 subframes having lengths 53, 53, 54 is calculated using the formula:
FCB Bits = ⌈log2(N_FPC(53, m1) · N_FPC(53, m2) · N_FPC(54, m3))⌉
where m1, m2, m3 are the respective number of pulses per subframe, ⌈·⌉ denotes rounding up to the nearest integer, and N_FPC(L, m) is the number of combinations required for coding a Factorial Pulse Codebook of length L containing m pulses (described in U.S. Pat. No. 6,236,960), given as:
N_FPC(L, m) = Σ_{d=1}^{min(L,m)} C(L, d) · C(m−1, d−1) · 2^d,
where C(n, k) is the binomial coefficient and d counts the number of occupied (non-zero) positions.
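Under the formulas above, the combination count and the combined-codeword size can be sketched as follows (function names are illustrative; the binomial-sum form of N_FPC follows U.S. Pat. No. 6,236,960):

```python
from math import comb

def n_fpc(length, pulses):
    """Number of factorial-pulse codevectors of the given length containing
    `pulses` unit-magnitude pulses (signs included): the sum over d occupied
    positions of C(length, d) * C(pulses-1, d-1) * 2^d."""
    return sum(comb(length, d) * comb(pulses - 1, d - 1) * (1 << d)
               for d in range(1, min(length, pulses) + 1))

def combined_fcb_bits(pulse_counts, lengths=(53, 53, 54)):
    """Bits for one composite codeword over all subframes:
    ceil(log2(product of per-subframe combination counts))."""
    product = 1
    for m, n in zip(pulse_counts, lengths):
        product *= n_fpc(n, m)
    return (product - 1).bit_length()  # == ceil(log2(product))
```

For the 6-6-5 pattern over subframe lengths 53, 53, 54 this evaluates to 89 FCB bits, consistent with the allocation discussed for Table 2; packing the three subframes jointly avoids the fractional-bit loss of rounding each subframe up separately.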
The total number of bits for this example can then be observed in the last column of Table 2, where despite the variations in the number of Huffman bits, ACB parameter bits, and pulses per subframe, the total number of bits can be held virtually constant. Notably, referring back to
Referring to
At steps 602 through 604, for the first subframe, a set of M codebook configurations can be searched, and a first codebook configuration parameter m can be produced at step 606. For example, referring to
m=arg max{ε1′, . . . ,εM′},
That is, a performance metric can be generated for each of the M ACB codebooks, from which a configuration parameter m is selected.
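The selection step above can be sketched as picking the configuration with the largest performance metric. The bias term that favors the ACB refinement is defined in a figure not reproduced here, so a simple additive bias is assumed purely for illustration:

```python
# Sketch of selecting configuration m from per-configuration performance
# metrics (eps_prime[m-1] corresponds to eps'_m above). The bias that
# justifies moving bits toward the ACB is defined in a figure not
# reproduced here, so an additive bias is assumed for illustration.
def select_configuration(eps_prime, bias=None):
    """Return the 1-based configuration index with the largest metric."""
    bias = bias or [0.0] * len(eps_prime)
    scores = [e + b for e, b in zip(eps_prime, bias)]
    return 1 + max(range(len(scores)), key=scores.__getitem__)
```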
At step 606, the codebook configuration corresponding to the maximum performance metric for the first codebook and the second codebook can be selected. More than two codebooks can be provided, though only two are shown for exemplary illustration. The principles of operation can be equally applied to two or more codebook sets, which are herein contemplated. In one arrangement, the number of codebook sets can equal the number of codebook configurations, M.
At step 608, the method steps 602 to 606 can be repeated for each of the subframes. Upon identifying the codebook configurations yielding the highest performance metrics, the multiple codebook-related parameters τm, β, km and γ can be determined in the manner as described in accordance with
The demultiplexer 702 parses the codebook parameter from the coded bit-stream to determine the codebook selections. For example, the demultiplexer can parse the configuration parameter and determine m using Table 1 for identifying codebooks to use during decoding. The codebook-related parameters τm and km identify the indexes to the appropriate ACB and FCB codebook, respectively. The parameters β and γ identify the gain scaling applied to the ACB and FCB codevectors, respectively. Recall, the multiple codebook-related parameters were determined after the codebook configuration, m, was selected. The encoder 400 determined the multiple codebook-related parameters τm, β, km and γ through an error minimization process that included the optimal bit-allocation assignments and optimal gain scalings.
In another arrangement, the demultiplexer 702 parses the codebook parameter from the coded bit-stream to determine the bit allocations assigned to each codebook. For example, the demultiplexer 702 can identify the Huffman code from the received bit sequence and determine the number of bits used in the ACB and FCB codebooks according to Table 2.
For example, upon receiving a frame of N bits, the demultiplexer 702 can identify the Huffman code which inherently identifies the codebook configuration; that is, the bit-allocation to the respective ACB and FCB. For instance, if the Huffman code is 100, according to Table 2, 2 bits can be assigned to ACB 402, and 89 bits can be assigned to FCB 404. In this particular arrangement, the remaining M-1 ACB codebooks and M-1 FCB codebooks are not employed; this is because the number of bits used by each codebook is established by the demultiplexer in view of the codebook configuration. For example, the first subframe includes 6 pulses from FCB, the second subframe includes 6 pulses from FCB, and the third subframe includes 5 pulses from FCB. Notably, the pulse removed from the third subframe provides the 2 bits to the ACB. The demultiplexer 702 can select the codebook configuration, m, from the demultiplexed bit stream for each speech frame, or subframe, in order to generate the first synthetic signal and second synthetic signal. Combiner 210 can combine the first synthetic signal and second synthetic signal into the excitation signal u(n) which is input to the synthesis filter 205. The synthesis filter 205 can receive the filter coefficients, Aq, from the demultiplexer 702. The excitation sequence u(n) is passed through the synthesis filter 205 to eventually generate the output speech signal in accordance with the invention.
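The demultiplexer's configuration-parsing step described above can be sketched as reading a prefix-free codeword bit by bit. Only the '0' and '100' codewords are stated in the text; the rest of the table below is a hypothetical completion for illustration:

```python
# Sketch of the demultiplexer step described above: read bits until a
# valid prefix codeword is found, then look up the per-subframe pulse
# allocation. Only '0' and '100' are stated in the text; the remaining
# entries are an illustrative completion.
CODE_TO_PULSES = {
    "0": (6, 6, 6),
    "100": (6, 6, 5), "101": (6, 5, 6), "110": (5, 6, 6),
    "11100": (6, 5, 5), "11101": (5, 6, 5), "11110": (5, 5, 6),
    "1111111": (5, 5, 5),
}

def parse_frame_config(bitstream):
    """Consume one codeword from an iterable of '0'/'1' characters and
    return (pulses_per_subframe, bits_consumed)."""
    word = ""
    for bit in bitstream:
        word += bit
        if word in CODE_TO_PULSES:
            return CODE_TO_PULSES[word], len(word)
    raise ValueError("truncated or invalid configuration codeword")
```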
While the present invention has been particularly shown and described with reference to particular embodiments thereof, it will be understood by those skilled in the art that various changes may be made and equivalents substituted for elements thereof without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such changes and substitutions are intended to be included within the scope of the present invention. In addition, the invention has been shown to comprise a specific instance of Adaptive and Fixed Codebook types, when in fact any such Adaptive and/or Fixed Codebook structures may be used without departing from the spirit or scope of the present invention. The Adaptive Codebook may also fall into any class of pitch related codebooks often referred to by those of ordinary skill in the art as “Virtual Codebooks” or “Long-Term Predictors”.
Furthermore, while a specific example for selecting an ACB/FCB configuration has been described, many such selection mechanisms may be employed, and may depend on several factors in the design of the respective system, including codebook types, target bit rates, and number of configurations. While the codebook types presented imply separate physical elements, the actual implementation of such elements may be optimized to reduce computational complexity, physical memory size, and/or required hardware circuitry. For example, the ACB components are described in terms of separate physical elements; however, one of ordinary skill in the art will appreciate that the ACB memories across configurations may be common, and that the difference in codebook structure may be the meaning and interpretation (i.e., the encoding/decoding) of the respective input indices. The same may be true of the FCB components, which may utilize other scalable algebraic or fixed memory codebooks (such as VSELP) which may not occupy separate physical memories, but rather may share both codebook memory and/or program codes for execution and/or efficient implementation of the described method and apparatus. Additionally, the configuration selection criteria may be based purely on the final error signal, which may be based on the combined ACB/FCB contributions; however, it should be noted that the complexity of such an embodiment may be significantly higher than the example described in the preferred embodiment of the present invention.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. As used herein, the terms “comprises,” “comprising,” or any variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. It is further understood that the use of relational terms, if any, such as first and second, top and bottom, and the like are used solely to distinguish one from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Number | Name | Date | Kind |
---|---|---|---|
5513297 | Kleijn et al. | Apr 1996 | A |
5657418 | Gerson et al. | Aug 1997 | A |
5729655 | Kolesnik et al. | Mar 1998 | A |
5734789 | Swaminathan et al. | Mar 1998 | A |
5778335 | Ubale et al. | Jul 1998 | A |
5857168 | Ozawa | Jan 1999 | A |
5873060 | Ozawa | Feb 1999 | A |
6003001 | Maeda | Dec 1999 | A |
6141638 | Peng et al. | Oct 2000 | A |
6167375 | Miseki et al. | Dec 2000 | A |
6236960 | Peng et al. | May 2001 | B1 |
6240386 | Thyssen et al. | May 2001 | B1 |
6470313 | Ojala | Oct 2002 | B1 |
6594626 | Suzuki et al. | Jul 2003 | B2 |
6604070 | Gao et al. | Aug 2003 | B1 |
6662154 | Mittal et al. | Dec 2003 | B2 |
6714907 | Gao | Mar 2004 | B2 |
6810381 | Sasaki et al. | Oct 2004 | B1 |
7092885 | Yamaura | Aug 2006 | B1 |
7177804 | Wang et al. | Feb 2007 | B2 |
7266793 | Agmon | Sep 2007 | B1 |
7379865 | Kang et al. | May 2008 | B2 |
20040093207 | Ashley et al. | May 2004 | A1 |
20050096901 | Uvliden et al. | May 2005 | A1 |
20060190246 | Park | Aug 2006 | A1 |
20070271102 | Morii | Nov 2007 | A1 |
Number | Date | Country | |
---|---|---|---|
20070271094 A1 | Nov 2007 | US |