Described tools and techniques relate to audio codecs, and particularly to post-processing of decoded speech.
With the emergence of digital wireless telephone networks, streaming audio over the Internet, and Internet telephony, digital processing and delivery of speech has become commonplace. Engineers use a variety of techniques to process speech efficiently while still maintaining quality. To understand these techniques, it helps to understand how audio information is represented and processed in a computer.
I. Representation of Audio Information in a Computer
A computer processes audio information as a series of numbers representing the audio. A single number can represent an audio sample, which is an amplitude value at a particular time. Several factors affect the quality of the audio, including sample depth and sampling rate.
Sample depth (or precision) indicates the range of numbers used to represent a sample. More possible values for each sample typically yield higher quality output because more subtle variations in amplitude can be represented. An eight-bit sample has 256 possible values, while a sixteen-bit sample has 65,536 possible values.
The sampling rate (usually measured as the number of samples per second) also affects quality. The higher the sampling rate, the higher the quality because more frequencies of sound can be represented. Some common sampling rates are 8,000, 11,025, 22,050, 32,000, 44,100, 48,000, and 96,000 samples/second (Hz). Table 1 shows several formats of audio with different quality levels, along with corresponding raw bit rate costs.
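To make the relationship concrete, the raw bit rate of uncompressed audio is simply the product of the sampling rate, the sample depth, and the number of channels. The short sketch below (in Python, for illustration; the specific formats are examples rather than entries from Table 1) computes it for two common cases.

```python
def raw_bit_rate(sampling_rate_hz, bits_per_sample, channels=1):
    """Raw (uncompressed) PCM bit rate in bits per second."""
    return sampling_rate_hz * bits_per_sample * channels

# 8,000 Hz, 8-bit, mono telephone-quality speech: 64,000 bits/second
print(raw_bit_rate(8000, 8))
# 44,100 Hz, 16-bit, stereo CD-quality audio: 1,411,200 bits/second
print(raw_bit_rate(44100, 16, channels=2))
```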
As Table 1 shows, the cost of high quality audio is high bit rate. High quality audio information consumes large amounts of computer storage and transmission capacity. Many computers and computer networks lack the resources to process raw digital audio. Compression (also called encoding or coding) decreases the cost of storing and transmitting audio information by converting the information into a lower bit rate form. Compression can be lossless (in which quality does not suffer) or lossy (in which quality suffers but bit rate reduction from subsequent lossless compression is more dramatic). Decompression (also called decoding) extracts a reconstructed version of the original information from the compressed form. A codec is an encoder/decoder system.
II. Speech Encoders and Decoders
One goal of audio compression is to digitally represent audio signals to provide maximum signal quality for a given amount of bits. Stated differently, this goal is to represent the audio signals with the least bits for a given level of quality. Other goals such as resiliency to transmission errors and limiting the overall delay due to encoding/transmission/decoding apply in some scenarios.
Different kinds of audio signals have different characteristics. Music is characterized by large ranges of frequencies and amplitudes, and often includes two or more channels. On the other hand, speech is characterized by smaller ranges of frequencies and amplitudes, and is commonly represented in a single channel. Certain codecs and processing techniques are adapted for music and general audio; other codecs and processing techniques are adapted for speech.
One type of conventional speech codec uses linear prediction (“LP”) to achieve compression. The speech encoding includes several stages. The encoder finds and quantizes coefficients for a linear prediction filter, which is used to predict sample values as linear combinations of preceding sample values. A residual signal (represented as an “excitation” signal) indicates parts of the original signal not accurately predicted by the filtering. At some stages, the speech codec uses different compression techniques for voiced segments (characterized by vocal cord vibration), unvoiced segments, and silent segments, since different kinds of speech have different characteristics. Voiced segments typically exhibit highly repeating voicing patterns, even in the residual domain. For voiced segments, the encoder achieves further compression by comparing the current residual signal to previous residual cycles and encoding the current residual signal in terms of delay or lag information relative to the previous cycles. The encoder handles other discrepancies between the original signal and the predicted, encoded representation (from the linear prediction and delay information) using specially designed codebooks.
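As a rough illustration of the linear prediction step described above (not the actual encoder implementation), the sketch below predicts each sample from preceding samples and computes the residual that the codebooks would subsequently encode; the coefficient convention A(z) = 1 + a(1)z^-1 + ... + a(P)z^-P is assumed.

```python
import numpy as np

def lp_residual(signal, lpc):
    """Residual e(n) = s(n) + sum_i a(i) * s(n - i), where lpc = [a(1), ..., a(P)].
    The predicted sample is -sum_i a(i) * s(n - i)."""
    p = len(lpc)
    s = np.asarray(signal, dtype=float)
    e = s.copy()
    for n in range(len(s)):
        for i in range(1, min(p, n) + 1):
            e[n] += lpc[i - 1] * s[n - i]
    return e
```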
Although speech codecs as described above have good overall performance for many applications, they have several drawbacks. For example, lossy codecs typically reduce bit rate by reducing redundancy in a speech signal, which results in noise or other undesirable artifacts in decoded speech. Accordingly, some codecs filter decoded speech to improve its quality. Such post-filters have typically come in two types: time domain post-filters and frequency domain post-filters.
Given the importance of compression and decompression to representing speech signals in computer systems, it is not surprising that post-filtering of reconstructed speech has attracted research. Whatever the advantages of prior techniques for processing of reconstructed speech or other audio, they do not have the advantages of the techniques and tools described herein.
In summary, the detailed description is directed to various techniques and tools for audio codecs, and specifically to tools and techniques related to filtering decoded speech. Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:
In one aspect, a set of filter coefficients for application to a reconstructed audio signal is calculated. The calculation includes performing one or more frequency domain calculations. A filtered audio signal is produced by filtering at least a portion of the reconstructed audio signal in a time domain using the set of filter coefficients.
In another aspect, a set of filter coefficients for application to a reconstructed audio signal is produced. Production of the coefficients includes processing a set of coefficient values representing one or more peaks and one or more valleys. Processing the set of coefficient values includes clipping one or more of the peaks or valleys. At least a portion of the reconstructed audio signal is filtered using the filter coefficients.
In another aspect, a reconstructed composite signal synthesized from plural reconstructed frequency sub-band signals is received. The sub-band signals include a reconstructed first frequency sub-band signal for a first frequency band and a reconstructed second frequency sub-band signal for a second frequency band. At a frequency region around an intersection between the first frequency band and the second frequency band, the reconstructed composite signal is selectively enhanced.
The various techniques and tools can be used in combination or independently.
Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
Described embodiments are directed to techniques and tools for processing audio information in encoding and/or decoding. With these techniques the quality of speech derived from a speech codec, such as a real-time speech codec, is improved. Such improvements may result from the use of various techniques and tools separately or in combination.
Such techniques and tools may include a post-filter that is applied to a decoded audio signal in the time domain using coefficients that are designed or processed in the frequency domain. The techniques may also include clipping or capping filter coefficient values for use in such a filter, or in some other type of post-filter.
The techniques may also include a post-filter that enhances the magnitude of a decoded audio signal at frequency regions where energy may have been attenuated due to decomposition into frequency bands. As an example, the filter may enhance the signal at frequency regions near intersections of adjacent bands.
Although operations for the various techniques are described in a particular, sequential order for the sake of presentation, it should be understood that this manner of description encompasses minor rearrangements in the order of operations, unless a particular ordering is required. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, flowcharts may not show the various ways in which particular techniques can be used in conjunction with other techniques.
While particular computing environment features and audio codec features are described below, one or more of the tools and techniques may be used with various different types of computing environments and/or various different types of codecs. For example, one or more of the post-filter techniques may be used with codecs that do not use the CELP coding model, such as adaptive differential pulse code modulation codecs, transform codecs and/or other types of codecs. As another example, one or more of the post-filter techniques may be used with single band codecs or sub-band codecs. As another example, one or more of the post-filter techniques may be applied to a single band of a multi-band codec and/or to a synthesized or unencoded signal including contributions of multiple bands of a multi-band codec.
I. Computing Environment
A computing environment (100) may have additional features.
The storage (140) may be removable or non-removable, and may include magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (100). The storage (140) stores instructions for the software (180).
The input device(s) (150) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, network adapter, or another device that provides input to the computing environment (100). For audio, the input device(s) (150) may be a sound card, microphone or other device that accepts audio input in analog or digital form, or a CD/DVD reader that provides audio samples to the computing environment (100). The output device(s) (160) may be a display, printer, speaker, CD/DVD-writer, network adapter, or another device that provides output from the computing environment (100).
The communication connection(s) (170) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed speech information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The invention can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment (100), computer-readable media include memory (120), storage (140), communication media, and combinations of any of the above.
The invention can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description may use terms like “determine,” “generate,” “adjust,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Generalized Network Environment and Real-time Speech Codec
The primary functions of the encoder-side and decoder-side components are speech encoding and decoding, respectively. On the encoder side, an input buffer (210) accepts and stores speech input (202). The speech encoder (230) takes speech input (202) from the input buffer (210) and encodes it.
Specifically, a frame splitter (212) splits the samples of the speech input (202) into frames. In one implementation, the frames are uniformly twenty ms long: 160 samples for eight kHz input and 320 samples for sixteen kHz input. In other implementations, the frames have different durations, are non-uniform or overlapping, and/or the sampling rate of the input (202) is different. The frames may be organized in a super-frame/frame, frame/sub-frame, or other configuration for different stages of the encoding and decoding.
A frame classifier (214) classifies the frames according to one or more criteria, such as energy of the signal, zero crossing rate, long-term prediction gain, gain differential, and/or other criteria for sub-frames or the whole frames. Based upon the criteria, the frame classifier (214) classifies the different frames into classes such as silent, unvoiced, voiced, and transition (e.g., unvoiced to voiced). Additionally, the frames may be classified according to the type of redundant coding, if any, that is used for the frame. The frame class affects the parameters that will be computed to encode the frame. In addition, the frame class may affect the resolution and loss resiliency with which parameters are encoded, so as to provide more resolution and loss resiliency to more important frame classes and parameters. For example, silent frames typically are coded at very low rate, are very simple to recover by concealment if lost, and may not need protection against loss. Unvoiced frames typically are coded at slightly higher rate, are reasonably simple to recover by concealment if lost, and are not significantly protected against loss. Voiced and transition frames are usually encoded with more bits, depending on the complexity of the frame as well as the presence of transitions. Voiced and transition frames are also difficult to recover if lost, and so are more significantly protected against loss. Alternatively, the frame classifier (214) uses other and/or additional frame classes.
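For illustration, two of the classification criteria mentioned above, frame energy and zero crossing rate, can be computed as in the sketch below; the thresholds and the mapping to classes are hypothetical placeholders, not values from the described implementation.

```python
import numpy as np

def frame_energy(frame):
    """Sum of squared sample values for a frame."""
    return float(np.sum(np.asarray(frame, dtype=float) ** 2))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def rough_frame_class(frame, silence_energy=1e3, unvoiced_zcr=0.25):
    # Hypothetical thresholds, for illustration only.
    if frame_energy(frame) < silence_energy:
        return "silent"
    if zero_crossing_rate(frame) > unvoiced_zcr:
        return "unvoiced"
    return "voiced"
```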
The input speech signal may be divided into sub-band signals before applying an encoding model, such as the CELP encoding model, to the sub-band information for a frame. This may be done using a series of one or more analysis filter banks (such as QMF analysis filters) (216). For example, if a three-band structure is to be used, then the low frequency band can be split out by passing the signal through a low-pass filter. Likewise, the high band can be split out by passing the signal through a high pass filter. The middle band can be split out by passing the signal through a band pass filter, which can include a low pass filter and a high pass filter in series. Alternatively, other types of filter arrangements for sub-band decomposition and/or timing of filtering (e.g., before frame splitting) may be used. If only one band is to be decoded for a portion of the signal, that portion may bypass the analysis filter banks (216).
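A minimal two-band analysis split in the same spirit is sketched below; it uses the simplest possible two-tap quadrature mirror pair (averaging and differencing of neighboring samples) rather than the QMF analysis filters actually used, and a three-band structure could be obtained by splitting one of the resulting bands again.

```python
import numpy as np

def two_band_split(signal):
    """Split a signal into a low band and a high band, each decimated by two,
    using a Haar-style two-tap filter pair."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:
        x = np.append(x, 0.0)            # zero-pad to an even length
    low = (x[0::2] + x[1::2]) / 2.0      # low-pass, then downsample by two
    high = (x[0::2] - x[1::2]) / 2.0     # high-pass, then downsample by two
    return low, high
```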
The number of bands n may be determined by the sampling rate. For example, in one implementation, a single band structure is used for an eight kHz sampling rate. For sixteen kHz and 22.05 kHz sampling rates, a three-band structure is used.
The low frequency band is typically the most important band for speech signals because the signal energy typically decays towards the higher frequency ranges. Accordingly, the low frequency band is often encoded using more bits than the other bands. Compared to a single band coding structure, the sub-band structure is more flexible, and allows better control of quantization noise across the frequency band. Accordingly, it is believed that perceptual voice quality is improved significantly by using the sub-band structure. However, as discussed below, the decomposition of sub-bands may cause energy loss of the signal at the frequency regions near the intersection of adjacent bands. This energy loss can degrade the quality of the resulting decoded speech signal.
The network (250) is a wide area, packet-switched network such as the Internet. Alternatively, the network (250) is a local area network or other kind of network.
On the decoder side, software for one or more networking layers (260) receives and processes the transmitted data. The network, transport, and higher layer protocols and software in the decoder-side networking layer(s) (260) usually correspond to those in the encoder-side networking layer(s) (240). The networking layer(s) provide the encoded speech information to the speech decoder (270) through a demultiplexer (“DEMUX”) (276).
The decoder (270) decodes each of the sub-bands separately, as depicted by the band decoding components (272, 274). All the sub-bands may be decoded by a single decoder, or they may be decoded by separate band decoders.
The decoded sub-bands are then synthesized in a series of one or more synthesis filter banks (such as QMF synthesis filters) (280), which output decoded speech (292). Alternatively, other types of filter arrangements for sub-band synthesis are used. If only a single band is present, then the decoded band may bypass the filter banks (280). If multiple bands are present, the decoded speech output (292) may also be passed through a middle frequency enhancement post-filter (284), yielding enhanced speech output (294) of improved quality. An implementation of the middle frequency enhancement post-filter is discussed in more detail below.
One generalized real-time speech band decoder is described below.
Aside from these primary encoding and decoding functions, the components may also share information to control the rate, quality, and/or loss resiliency of the encoded speech.
The rate controller (220) directs the speech encoder (230) to change the rate, quality, and/or loss resiliency with which speech is encoded. The encoder (230) may change rate and quality by adjusting quantization factors for parameters or changing the resolution of entropy codes representing the parameters. Additionally, the encoder may change loss resiliency by adjusting the rate or type of redundant coding. Thus, the encoder (230) may change the allocation of bits between primary encoding functions and loss resiliency functions depending on network conditions.
The band encoder (400) accepts the band input (402) from the filter banks (or other filters) if the signal is split into multiple bands. If the signal is not split into multiple bands, then the band input (402) includes samples that represent the entire bandwidth. The band encoder produces encoded band output (492).
If a signal is split into multiple bands, then a downsampling component (420) can perform downsampling on each band. As an example, if the sampling rate is set at sixteen kHz and each frame is twenty ms in duration, then each frame includes 320 samples. If no downsampling were performed and the frame were split into the three-band structure described above, then the full number of samples would have to be encoded and decoded for each band. Downsampling each band therefore reduces the number of samples to be processed.
The LP analysis component (430) computes linear prediction coefficients (432). In one implementation, the LP filter uses ten coefficients for eight kHz input and sixteen coefficients for sixteen kHz input, and the LP analysis component (430) computes one set of linear prediction coefficients per frame for each band. Alternatively, the LP analysis component (430) computes two sets of coefficients per frame for each band, one for each of two windows centered at different locations, or computes a different number of coefficients per band and/or per frame.
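One standard way to obtain such coefficients from a frame of samples is the autocorrelation method with the Levinson-Durbin recursion, sketched below; this is offered as a generic illustration and is not necessarily the analysis performed by the LP analysis component (430).

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Return [a(1), ..., a(order)] such that s(n) is predicted by
    -sum_i a(i) * s(n - i), computed with the Levinson-Durbin recursion.
    Assumes a non-silent frame (autocorrelation at lag zero is nonzero)."""
    x = np.asarray(frame, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a[1:]
```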
The LPC processing component (435) receives and processes the linear prediction coefficients (432). Typically, the LPC processing component (435) converts LPC values to a different representation for more efficient quantization and encoding. For example, the LPC processing component (435) converts LPC values to a line spectral pair (LSP) representation, and the LSP values are quantized (such as by vector quantization) and encoded. The LSP values may be intra coded or predicted from other LSP values. Various representations, quantization techniques, and encoding techniques are possible for LPC values. The LPC values are provided in some form as part of the encoded band output (492) for packetization and transmission (along with any quantization parameters and other information needed for reconstruction). For subsequent use in the encoder (400), the LPC processing component (435) reconstructs the LPC values. The LPC processing component (435) may perform interpolation for LPC values (such as equivalently in LSP representation or another representation) to smooth the transitions between different sets of LPC coefficients, or between the LPC coefficients used for different sub-frames of frames.
The synthesis (or “short-term prediction”) filter (440) accepts reconstructed LPC values (438) and incorporates them into the filter. The synthesis filter (440) receives an excitation signal and produces an approximation of the original signal. For a given frame, the synthesis filter (440) may buffer a number of reconstructed samples (e.g., ten for a ten-tap filter) from the previous frame for the start of the prediction.
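A direct-form sketch of such an all-pole synthesis filter is shown below, using the same coefficient convention as the analysis sketch above; the per-frame buffering of reconstructed samples is passed in explicitly as `history`.

```python
def synthesize(excitation, lpc, history=None):
    """All-pole synthesis: s(n) = e(n) - sum_i a(i) * s(n - i),
    where lpc = [a(1), ..., a(P)]. `history` optionally supplies the last P
    reconstructed samples of the previous frame (zeros assumed otherwise)."""
    p = len(lpc)
    out = [0.0] * p if history is None else list(history)[-p:]
    for e in excitation:
        s = e
        for i in range(1, p + 1):
            s -= lpc[i - 1] * out[-i]
        out.append(s)
    return out[p:]
```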
The perceptual weighting components (450, 455) apply perceptual weighting to the original signal and the modeled output of the synthesis filter (440) so as to selectively de-emphasize the formant structure of speech signals, making the auditory system less sensitive to quantization errors. The perceptual weighting components (450, 455) exploit psychoacoustic phenomena such as masking. In one implementation, the perceptual weighting components (450, 455) apply weights based on the original LPC values (432) received from the LP analysis component (430). Alternatively, the perceptual weighting components (450, 455) apply other and/or additional weights.
Following the perceptual weighting components (450, 455), the encoder (400) computes the difference between the perceptually weighted original signal and perceptually weighted output of the synthesis filter (440) to produce a difference signal (434). Alternatively, the encoder (400) uses a different technique to compute the speech parameters.
The excitation parameterization component (460) seeks to find the best combination of adaptive codebook indices, fixed codebook indices and gain codebook indices in terms of minimizing the difference between the perceptually weighted original signal and synthesized signal (in terms of weighted mean square error or other criteria). Many parameters are computed per sub-frame, but more generally the parameters may be per super-frame, frame, or sub-frame. As discussed above, the parameters for different bands of a frame or sub-frame may be different. Table 2 shows the available types of parameters for different frame classes in one implementation.
If the component (460) determines (530) that the adaptive codebook is to be used, then the adaptive codebook parameters are signaled (540) in the bit stream. If not, then it is indicated that no adaptive codebook is used for the sub-frame (535), such as by setting a one-bit sub-frame level flag, as discussed above. This determination (530) may include determining whether the adaptive codebook contribution for the particular sub-frame is significant enough to be worth the number of bits required to signal the adaptive codebook parameters. Alternatively, some other basis may be used for the determination.
The excitation parameterization component (460) also determines (550) whether a pulse codebook is used. The use or non-use of the pulse codebook is indicated as part of an overall coding mode for the current frame, or it may be indicated or determined in other ways. A pulse codebook is a type of fixed codebook that specifies one or more pulses to be contributed to the excitation signal. The pulse codebook parameters include pairs of indices and signs (gains can be positive or negative). Each pair indicates a pulse to be included in the excitation signal, with the index indicating the position of the pulse and the sign indicating the polarity of the pulse. The number of pulses included in the pulse codebook and used to contribute to the excitation signal can vary depending on the coding mode. Additionally, the number of pulses may depend on whether or not an adaptive codebook is being used.
If the pulse codebook is used, then the pulse codebook parameters are optimized (555) to minimize error between the contribution of the indicated pulses and a target signal. If an adaptive codebook is not used, then the target signal is the weighted original signal. If an adaptive codebook is used, then the target signal is the difference between the weighted original signal and the contribution of the adaptive codebook to the weighted synthesized signal. At some point (not shown), the pulse codebook parameters are then signaled in the bit stream.
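A sketch of how a pulse codebook contribution might be assembled from index/sign pairs and a gain is given below; the parameter layout is a hypothetical illustration, not the codec's bit stream format.

```python
import numpy as np

def pulse_contribution(length, pulses, gain=1.0):
    """Build an excitation contribution from (index, sign) pairs, e.g.
    pulses = [(13, +1), (47, -1)], scaled by a single gain."""
    contribution = np.zeros(length)
    for index, sign in pulses:
        contribution[index] += sign
    return gain * contribution
```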
The excitation parameterization component (460) also determines (565) whether any random fixed codebook stages are to be used. The number (if any) of the random codebook stages is indicated as part of an overall coding mode for the current frame, or it may be determined in other ways. A random codebook is a type of fixed codebook that uses a pre-defined signal model for the values it encodes. The codebook parameters may include the starting point for an indicated segment of the signal model and a sign that can be positive or negative. The length or range of the indicated segment is typically fixed and is therefore not typically signaled, but alternatively a length or extent of the indicated segment is signaled. A gain is multiplied by the values in the indicated segment to produce the contribution of the random codebook to the excitation signal.
If at least one random codebook stage is used, then the codebook stage parameters for the codebook are optimized (570) to minimize the error between the contribution of the random codebook stage and a target signal. The target signal is the difference between the weighted original signal and the sum of the contribution to the weighted synthesized signal of the adaptive codebook (if any), the pulse codebook (if any), and the previously determined random codebook stages (if any). At some point (not shown), the random codebook parameters are then signaled in the bit stream.
The component (460) then determines (580) whether any more random codebook stages are to be used. If so, then the parameters of the next random codebook stage are optimized (570) and signaled as described above. This continues until all the parameters for the random codebook stages have been determined. All the random codebook stages can use the same signal model, although they will likely indicate different segments from the model and have different gain values. Alternatively, different signal models can be used for different random codebook stages.
Each excitation gain may be quantized independently or two or more gains may be quantized together, as determined by the rate controller and/or other components.
While a particular order has been set forth herein for optimizing the various codebook parameters, other orders and optimization techniques may be used. For example, all random codebooks could be optimized simultaneously.
The excitation signal in this implementation is the sum of any contributions of the adaptive codebook, the pulse codebook, and the random codebook stage(s).
The bit rate of the output (492) depends in part on the parameters used by the codebooks, and the encoder (400) may control bit rate and/or quality by switching between different sets of codebook indices, using embedded codes, or using other techniques. Different combinations of the codebook types and stages can yield different encoding modes for different frames, bands, and/or sub-frames. For example, an unvoiced frame may use only one random codebook stage. An adaptive codebook and a pulse codebook may be used for a low rate voiced frame. A high rate frame may be encoded using an adaptive codebook, a pulse codebook, and one or more random codebook stages. In one frame, the combination of all the encoding modes for all the sub-bands together may be called a mode set. There may be several pre-defined mode sets for each sampling rate, with different modes corresponding to different coding bit rates. The rate control module can determine or influence the mode set for each frame.
The MUX (236) provides feedback such as current buffer fullness for rate control purposes. More generally, various components of the encoder (230) (including the frame classifier (214) and MUX (236)) may provide information to a rate controller (220) such as the one shown in
The DEMUX (276) may receive multiple versions of parameters for a given segment, including a primary encoded version and one or more secondary error correction versions. When error correction fails, the decoder (270) uses concealment techniques such as parameter repetition or estimation based upon information that was correctly received.
The band decoder (600) accepts encoded speech information (692) for a band (which may be the complete band, or one of multiple sub-bands) as input and produces a filtered reconstructed output (604) after decoding and filtering. The components of the decoder (600) have corresponding components in the encoder (400), but overall the decoder (600) is simpler since it lacks components for perceptual weighting, the excitation processing loop and rate control.
The LPC processing component (635) receives information representing LPC values in the form provided by the band encoder (400) (as well as any quantization parameters and other information needed for reconstruction). The LPC processing component (635) reconstructs the LPC values (638) using the inverse of the conversion, quantization, encoding, etc. previously applied to the LPC values. The LPC processing component (635) may also perform interpolation for LPC values (in LPC representation or another representation such as LSP) to smooth the transitions between different sets of LPC coefficients.
The codebook stages (670, 672, 674, 676) and gain application components (680, 682, 684, 686) decode the parameters of any of the corresponding codebook stages used for the excitation signal and compute the contribution of each codebook stage that is used. Generally, the configuration and operations of the codebook stages (670, 672, 674, 676) and gain components (680, 682, 684, 686) correspond to the configuration and operations of the codebook stages (470, 472, 474, 476) and gain components (480, 482, 484, 486) in the encoder (400). The contributions of the used codebook stages are summed, and the resulting excitation signal (690) is fed into the synthesis filter (640). Delayed values of the excitation signal (690) are also used as an excitation history by the adaptive codebook (670) in computing the contribution of the adaptive codebook for subsequent portions of the excitation signal.
The synthesis filter (640) accepts reconstructed LPC values (638) and incorporates them into the filter. The synthesis filter (640) stores previously reconstructed samples for processing. The excitation signal (690) is passed through the synthesis filter to form an approximation of the original speech signal.
The reconstructed sub-band signal (602) is also fed into a short term post-filter (694). The short term post-filter produces a filtered sub-band output (604). Several techniques for computing coefficients for the short term post-filter (694) are described below. For adaptive post-filtering, the decoder (270) may compute the coefficients from parameters (e.g., LPC values) for the encoded speech. Alternatively, the coefficients are provided through some other technique.
III. Post-Filter Techniques
In some embodiments, a decoder or other tool applies a short-term post-filter to reconstructed audio, such as reconstructed speech, after it has been decoded. Such a filter can improve the perceptual quality of the reconstructed speech.
Post filters are typically either time domain post-filters or frequency domain post-filters. A conventional time domain post-filter for a CELP codec includes an all-pole linear prediction coefficient synthesis filter scaled by one constant factor and an all-zero linear prediction coefficient inverse filter scaled by another constant factor.
Additionally, a phenomenon known as “spectral tilt” occurs in many speech signals because the amplitudes of lower frequencies in normal speech are often higher than the amplitudes of higher frequencies. Thus, the frequency domain amplitude spectrum of a speech signal often includes a slope, or “tilt.” Accordingly, the spectral tilt from the original speech should be present in a reconstructed speech signal. However, if coefficients of a post-filter also incorporate such a tilt, then the effect of the tilt will be magnified in the post-filter output so that the filtered speech signal will be distorted. Thus, some time-domain post-filters also have a first-order high pass filter to compensate for spectral tilt.
The characteristics of time domain post-filters are therefore typically controlled by two or three parameters, which does not provide much flexibility.
A frequency domain post-filter, on the other hand, has a more flexible way of defining the post-filter characteristics. In a frequency domain post-filter, the filter coefficients are determined in the frequency domain. The decoded speech signal is transformed into the frequency domain, and is filtered in the frequency domain. The filtered signal is then transformed back into the time domain. However, the resulting filtered time domain signal typically has a different number of samples than the original unfiltered time domain signal. For example, a frame having 160 samples may be converted to the frequency domain using a 256-point transform, such as a 256-point fast Fourier transform (“FFT”), after padding or inclusion of later samples. When a 256-point inverse FFT is applied to convert the frame back to the time domain, it will yield 256 time domain samples. Therefore, it yields an extra ninety-six samples. The extra ninety-six samples can be overlapped with, and added to, respective samples in the first ninety-six samples of the next frame. This is often referred to as the overlap-add technique. The transformation of the speech signal, as well as the implementation of techniques such as the overlap add technique can significantly increase the complexity of the overall decoder, especially for codecs that do not already include frequency transform components. Accordingly, frequency domain post-filters are typically only used for sinusoidal-based speech codecs because the application of such filters to non-sinusoidal based codecs introduces too much delay and complexity. Frequency domain post-filters also typically have less flexibility to change frame size if the codec frame size varies during coding because the complexity of the overlap add technique discussed above may become prohibitive if a different size frame (such as a frame with 80 samples, rather than 160 samples) is encountered.
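The sample-count mismatch and overlap-add bookkeeping described above can be seen in the following sketch (illustrative only): a 160-sample frame is zero-padded to 256 points, filtered in the frequency domain, and the trailing ninety-six output samples are carried over and added to the beginning of the next frame's output.

```python
import numpy as np

FRAME = 160   # samples per frame
NFFT = 256    # transform length

def filter_frame_overlap_add(frame, freq_response, carry):
    """Filter one frame with a 256-point frequency response.
    `carry` holds the 96 overlap samples left over from the previous frame."""
    spectrum = np.fft.fft(frame, NFFT) * freq_response      # zero-pads to 256 points
    filtered = np.real(np.fft.ifft(spectrum))               # 256 time-domain samples
    output = filtered[:FRAME].copy()
    output[:NFFT - FRAME] += carry                           # overlap-add previous tail
    new_carry = filtered[FRAME:]                             # 96 samples for the next frame
    return output, new_carry
```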
While particular computing environment features and audio codec features are described above, one or more of the tools and techniques may be used with various different types of computing environments and/or various different types of codecs. For example, one or more of the post-filter techniques may be used with codecs that do not use the CELP coding model, such as adaptive differential pulse code modulation codecs, transform codecs and/or other types of codecs. As another example, one or more of the post-filter techniques may be used with single band codecs or sub-band codecs. As another example, one or more of the post-filter techniques may be applied to a single band of a multi-band codec and/or to a synthesized or unencoded signal including contributions of multiple bands of a multi-band codec.
A. Example Hybrid Short Term Post-Filters
In some embodiments, a decoder such as the decoder (600) described above applies a hybrid short-term post-filter (694) to the reconstructed speech signal.
In general, the post-filter (694) may be a finite impulse response (“FIR”) filter, whose frequency-response is the result of nonlinear processes performed on the logarithm of a magnitude spectrum of an LPC synthesis filter. The magnitude spectrum of the post-filter can be designed so that the filter (694) only attenuates at spectral valleys, and in some cases at least part of the magnitude spectrum is clipped to be flat around formant regions. As discussed below, the FIR post-filter coefficients can be obtained by truncating a normalized sequence that results from the inverse Fourier transform of the processed magnitude spectrum.
The filter (694) is applied to the reconstructed speech in the time-domain. The filter may be applied to the entire band or to a sub-band. Additionally, the filter may be used alone or in conjunction with other filters, such as long-term post filters and/or the middle frequency enhancement filter discussed in more detail below.
The described post-filter can be operated in conjunction with codecs using various bit-rates, different sampling rates and different coding algorithms. It is believed that the post-filter (694) is able to produce significant quality improvement over the use of voice codecs without the post-filter. Specifically, it is believed that the post-filter (694) reduces the perceptible quantization noise in frequency regions where the signal power is relatively low, i.e., in spectral valleys between formants. In these regions the signal-to-noise ratio is typically poor. In other words, due to the weak signal, the noise that is present is relatively stronger. It is believed that the post-filter enhances the overall speech quality by attenuating the noise level in these regions.
The reconstructed LPC coefficients (638) often contain formant information because the frequency response of the LPC synthesis filter typically follows the spectral envelope of the input speech. Accordingly, LPC coefficients (638) are used to derive the coefficients of the short-term post-filter. Because the LPC coefficients (638) change from one frame to the next or on some other basis, the post-filter coefficients derived from them also adapt from frame to frame or on some other basis.
A technique for computing the filter coefficients for the post-filter (694) is described below.
The decoder (600) obtains an LPC spectrum by zero-padding (715) a set of LPC coefficients (710) a(i), where i=0, 1, 2, . . . , P, and where a(0)=1. The set of LPC coefficients (710) can be obtained from a bit stream if a linear prediction codec, such as a CELP codec, is used. Alternatively, the set of LPC coefficients (710) can be obtained by analyzing a reconstructed speech signal. This can be done even if the codec is not a linear prediction codec. P is the LPC order of the LPC coefficients a(i) to be used in determining the post-filter coefficients. In general, zero padding involves extending a signal (or spectrum) with zeros to extend its time (or frequency band) limits. In the process, zero padding maps a signal of length P to a signal of length N, where N>P. In a full band codec implementation, P is ten for an eight kHz sampling rate, and sixteen for sampling rates higher than eight kHz. Alternatively, P is some other value. For sub-band codecs, P may be a different value for each sub-band.
The decoder (600) then performs an N-point transform, such as an FFT (720), on the zero-padded coefficients, yielding a magnitude spectrum A(k). A(k) is the spectrum of the zero-padded LPC inverse filter, for k=0, 1, 2, . . . , N−1. The inverse of the magnitude spectrum (namely, 1/|A(k)|) gives the magnitude spectrum of the LPC synthesis filter.
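Assuming numpy's FFT and an LPC coefficient array a = [a(0), a(1), ..., a(P)] with a(0)=1, these two steps can be sketched as:

```python
import numpy as np

def lpc_synthesis_magnitude(a, n_fft=128):
    """Zero-pad the LPC inverse-filter coefficients to n_fft points, take an
    n_fft-point FFT, and return the magnitude spectrum 1/|A(k)| of the LPC
    synthesis filter."""
    A = np.fft.fft(a, n_fft)        # np.fft.fft zero-pads a to n_fft points
    return 1.0 / np.abs(A)
```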
The magnitude spectrum of the LPC synthesis filter is optionally converted to the logarithmic domain (725) to decrease its magnitude range. In one implementation, this conversion is as follows:

H(k) = ln(1/|A(k)|)

where ln is the natural logarithm. However, other operations could be used to decrease the range. For example, a base ten logarithm operation could be used instead of a natural logarithm operation.
Three optional non-linear operations are based on the values of H(k): normalization (730), non-linear compression (735), and clipping (740).
Normalization (730) tends to make the range of H(k) more consistent from frame to frame and band to band. Normalization (730) and non-linear compression (735) both reduce the range of the non-linear magnitude spectrum so that the speech signal is not altered too much by the post-filter. Alternatively, additional and/or other techniques could be used to reduce the range of the magnitude spectrum.
In one implementation, initial normalization (730) is performed for each band of a multi-band codec as follows:
Ĥ(k) = H(k) − Hmin + 0.1
where Hmin is the minimum value of H(k), for k=0, 1, 2, . . . , N−1.
Normalization (730) may be performed for a full band codec as follows:
where Hmin is the minimum value of H(k), and Hmax is the maximum value of H(k), for k=0, 1, 2, . . . , N−1. In both the normalization equations above, a constant value of 0.1 is added to prevent the maximum and minimum values of Ĥ(k) from being 1 and 0, respectively, thereby making non-linear compression more effective. Other constant values, or other techniques, may alternatively be used to prevent zero values.
Nonlinear compression (735) is performed to further adjust the dynamic range of the non-linear spectrum as follows:
Hc(k) = β*|Ĥ(k)|^γ
where k=0, 1, . . . , N−1. Accordingly, if a 128-point FFT was used to convert the coefficients to the frequency domain, then k=0, 1, . . . , 127. Additionally, β=η*(Hmax−Hmin), with η and γ taken as appropriately chosen constant factors. The values of η and γ may be chosen according to the type of speech codec and the encoding rate. In one implementation, the η and γ parameters are chosen experimentally. For example, γ is chosen as a value from the range of 0.125 to 0.135, and η is chosen from the range of 0.5 to 1.0. The constant values can be adjusted based on preferences. For example, a range of constant values is obtained by analyzing the predicted spectrum distortion (mainly around peaks and valleys) resulting from various constant values. Typically, it is desirable to choose a range that does not exceed a predetermined level of predicted distortion. The final values are then chosen from among a set of values within the range using the results of subjective listening tests. For example, in a post-filter with an eight kHz sampling rate, η is 0.5 and γ is 0.125, and in a post-filter with a sixteen kHz sampling rate, η is 1.0 and γ is 0.135.
Clipping (740) can be applied to the compressed spectrum, Hc(k), by capping its values at a maximum, as follows:

Hpf(k) = min(Hc(k), λ*Hmean)
where Hmean is the mean value of Hc(k), and λ is a constant. The value of λ may be chosen differently according to the type of speech codec and the encoding rate. In some implementations, λ is chosen experimentally (such as a value from 0.95 to 1.1), and it can be adjusted based on preferences. For example, the final values of λ may be chosen using the results of subjective listening tests. For example, in a post-filter with an eight kHz sampling rate, λ is 1.1, and in a post-filter operating at a sixteen kHz sampling rate, λ is 0.95.
This clipping operation caps the values of Hpf(k) at a maximum, or ceiling. In the above equations, this maximum is represented as λ*Hmean. Alternatively, other operations are used to cap the values of the magnitude spectrum. For example, the ceiling could be based on the median value of Hc(k), rather than the mean value. Also, rather than clipping all the high Hc(k) values to a specific maximum value (such as λ*Hmean), the values could be clipped according to a more complex operation.
Clipping tends to result in filter coefficients that will attenuate the speech signal at its valleys without significantly changing the speech spectrum at other regions, such as formant regions. This can keep the post filter from distorting the speech formants, thereby yielding higher quality speech output. Additionally, clipping can reduce the effects of spectral tilt because clipping flattens the post-filter spectrum by reducing the large values to the capped value, while the values around the valleys remain substantially unchanged.
If conversion to the logarithmic domain was performed, the resulting clipped magnitude spectrum, Hpf(k), is converted (745) from the log domain back to the linear domain, for example, as follows:

Hpft(k) = exp(Hpf(k))

where exp is the exponential function, i.e., the inverse of the natural logarithm.
An N-point inverse fast Fourier transform (750) is performed on Hpft(k), yielding a time sequence of f(n), where n=0, 1, . . . , N−1, and N is the same as in the FFT operation (720) discussed above. Thus, f(n) is an N-point time sequence.
The time sequence f(n) is then truncated, with the retained values forming the short-term post-filter coefficients h(n), for n = 0, 1, . . . , M,
where M is the order of the short term post-filter. In general, a higher value of M yields higher quality filtered speech. However, the complexity of the post-filter increases as M increases. The value of M can be chosen, taking these trade-offs into consideration. In one implementation, M is seventeen.
The values of h(n) are optionally normalized (760) to avoid sudden changes between frames. For example, the coefficients may be scaled so that the coefficient at n=0 equals one:

hpf(n) = h(n)/h(0), for n = 0, 1, . . . , M

Alternatively, some other normalization operation is used.
In an implementation where normalization yields post-filter coefficients hpf(n) (765), an FIR filter with coefficients hpf(n) (765) is applied to the synthesized speech in the time domain. Thus, in this implementation, the post-filter coefficient at n=0 is set to a value of one for every frame to prevent significant deviations of the filter coefficients from one frame to the next.
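Pulling the above steps together, the sketch below shows one possible end-to-end computation and application of the short-term post-filter coefficients. It applies all of the optional steps, uses the per-band normalization form, and plugs in the example sixteen kHz constants (η = 1.0, γ = 0.135, λ = 0.95); it is an approximation of the described processing for illustration, not a reference implementation.

```python
import numpy as np

def short_term_postfilter_coeffs(a, n_fft=128, order=17,
                                 eta=1.0, gamma=0.135, lam=0.95):
    """Derive FIR post-filter coefficients from LPC coefficients a(0..P), a(0)=1."""
    H = np.log(1.0 / np.abs(np.fft.fft(a, n_fft)))   # log magnitude of synthesis filter
    H_hat = H - H.min() + 0.1                        # normalization (per-band form)
    beta = eta * (H.max() - H.min())
    Hc = beta * np.abs(H_hat) ** gamma               # non-linear compression
    Hpf = np.minimum(Hc, lam * Hc.mean())            # clip peaks at lambda * mean
    f = np.real(np.fft.ifft(np.exp(Hpf)))            # back to linear domain, then to time domain
    h = f[:order]                                    # truncate to the filter length
    return h / h[0]                                  # normalize so the n = 0 coefficient is one

def apply_short_term_postfilter(speech, coeffs):
    """Apply the FIR post-filter to reconstructed speech in the time domain."""
    return np.convolve(speech, coeffs)[:len(speech)]
```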
B. Example Middle Frequency Enhancement Filters
In some embodiments, a decoder such as the decoder (270) described above applies a middle frequency enhancement (“MFE”) post-filter to a synthesized signal that includes contributions from multiple frequency bands.
As discussed above, multi-band codecs decompose an input signal into channels of reduced bandwidths, typically because sub-bands are more manageable and flexible for coding. Band pass filters, such as the filter banks (216) described above, are typically used for this decomposition. However, the decomposition may attenuate the signal energy at frequency regions near the intersections of adjacent bands.
In some implementations, the MFE filter is a second-order band-pass FIR filter. It cascades a first-order low-pass filter and a first-order high-pass filter. Both first-order filters can have identical coefficients. The coefficients are typically chosen so that the MFE filter has a desirable gain at its pass-band (increasing the energy of the signal) and approximately unity gain at its stop-bands (passing the signal through unchanged or relatively unchanged). Alternatively, some other technique is used to enhance frequency regions that have been attenuated due to band decomposition.
The transfer function of the second-order MFE filter is the product of the transfer functions of a first-order low-pass filter and a first-order high-pass filter, both parameterized by a coefficient μ, and the MFE filter coefficients follow directly from this cascade.
The value of μ can be chosen by experiment. For example, a range of constant values is obtained by analyzing the predicted spectrum distortion resulting from various constant values. Typically, it is desirable to choose a range that does not exceed a predetermined level of predicted distortion. The final value is then chosen from among a set of values within the range using the results of subjective listening tests. In one implementation, when a sixteen kHz sampling rate is used, and the speech is broken into the following three bands (zero to eight kHz, eight to twelve kHz, and twelve to sixteen kHz), it can be desirable to enhance the region around eight kHz, and μ is chosen to be 0.45. Alternatively, other values of μ are chosen, especially if it is desirable to enhance some other frequency region. Alternatively, the MFE filter is implemented with one or more band pass filters of different design, or the MFE filter is implemented with one or more other filters.
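A sketch of such a cascade is shown below. The specific first-order sections used here, 1 + μz^-1 for the low-pass filter and 1 − μz^-1 for the high-pass filter with a gain chosen to give unity response at the band edges, are an assumption for illustration only and are not necessarily the transfer functions of the described MFE filter.

```python
import numpy as np

def mfe_filter_coeffs(mu=0.45):
    """Cascade an assumed first-order low-pass (1 + mu*z^-1) and an assumed
    first-order high-pass (1 - mu*z^-1) into a second-order band-pass FIR,
    scaled so that the gain at DC and at the Nyquist frequency is unity."""
    low = np.array([1.0, mu])
    high = np.array([1.0, -mu])
    cascade = np.convolve(low, high)     # [1, 0, -mu*mu]
    return cascade / (1.0 - mu * mu)     # unity gain at the band edges

def apply_mfe(signal, mu=0.45):
    """Apply the middle frequency enhancement filter to a synthesized signal."""
    return np.convolve(signal, mfe_filter_coeffs(mu))[:len(signal)]
```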
Having described and illustrated the principles of our invention with reference to described embodiments, it will be recognized that the described embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of the described embodiments shown in software may be implemented in hardware and vice versa.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.