The present invention relates to a communication system and, more particularly, to a Viterbi decoder and its branch metrics.
Typically a communication link includes three major elements: a transmitter, a communication channel, and a receiver. The transmitter and the receiver elements can be further subdivided into sub-systems, which include a data source, an encoder, a modulator, a demodulator, a decoder, and an original signal regenerator.
The data source generates the information signal that is intended to be sent to the destination. This signal may be digital or analog. Even for analog signals, it is often desirable to digitally encode the signal prior to its transmission. To provide error correction capability at the receiver, the encoder transforms the information sequence into an encoded sequence by adding redundancy to the digital data stream, in the form of additional data bits. The process of adding redundant information is known as “channel coding,” and the encoder is accordingly also known as a “channel encoder.”
Coding is an effective method for trading bandwidth and implementation complexity against transmitter power. In general, higher transmitter power results in a higher signal-to-noise ratio (SNR), which means the signal is less susceptible to noise and, consequently, to error at the receiving end. On the other hand, low transmission power, and the resulting low SNR, can render a signal unrecognizable and inseparable from the noise at the receiving end. Under such circumstances, where the probability of error is high, coding helps reduce the error probability and retrieve the original signal.
Different coding schemes are in use today, including convolutional coding and block coding. Convolutional coding produces a serial data stream, whereas block coding produces large message blocks with a fixed number of elements within each block. The encoded signals in both methods include redundant information, as mentioned above. After encoding, the modulator converts the encoded information into physically transmittable signals. The modulation technique depends on the type of information signal and the particular transmission medium.
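The serial nature of convolutional coding can be illustrated with a minimal encoder sketch. The rate, constraint length, and generator polynomials below (7 and 5 in octal) are assumptions chosen only for illustration and are not part of the described system:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Illustrative rate-1/2 convolutional encoder with constraint length 3.

    Each input bit produces two output bits, computed as the parity of the
    shift-register state masked by the two generator polynomials."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift the new bit in
        out.append(bin(state & g1).count("1") & 1)   # first parity bit
        out.append(bin(state & g2).count("1") & 1)   # second parity bit
    return out

# Four input bits become eight coded bits: the redundancy added by channel coding.
print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

The doubling of the data rate here is the bandwidth cost that coding trades against transmitter power, as discussed above.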
A channel is the medium for transmission of the modulated information. It can be a copper wire, a coaxial cable, or free space. To varying degrees, all channels introduce some form of distortion to the transmitted signal. The distortions introduced by different channels differ in their noise distribution. Some channels can be modeled as Additive White Gaussian Noise (AWGN) channels, in which a noise with uniform power spectral density is assumed to be added to the information signal. Others, including fading channels and multipath channels, introduce noise in bursts.
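The AWGN model can be sketched in a few lines; the symbol values, noise level, and seed below are illustrative only:

```python
import random

def awgn_channel(symbols, noise_std, seed=0):
    """Add independent, zero-mean Gaussian noise to each transmitted symbol,
    modeling a channel whose noise has uniform (white) power spectral density."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return [s + rng.gauss(0.0, noise_std) for s in symbols]

tx = [1.0, -1.0, 1.0, 1.0]          # BPSK symbols for bits 1, 0, 1, 1
rx = awgn_channel(tx, noise_std=0.3)
```

Burst-error channels differ precisely in that the noise samples are not independent in this way but arrive concentrated in time.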
At the receiving end, the demodulator extracts the encoded information from the modulated signal. To retrieve the original digital signal, the extracted encoded data, which has also been distorted to some degree by traveling through the channel, is subsequently decoded by the decoder. The decoding process is usually more complicated than the encoding process, and it can also be computationally more intensive. Efficient decoding schemes, such as the Viterbi decoding algorithm for recovery of binary data, have been developed over the years. These schemes detect the distorted parts of the demodulated information and correct them. Finally, the decoded data is used to produce an estimate of the original signal.
The Viterbi algorithm is a self-correcting decoder that employs a maximum likelihood decoding rule. It computes a probability measure for different “possible replacements” of the received data, based on the “actual” data received. Each possible string of data (up to the last received data symbol) is called a “data path,” and the probability measure associated with each data path is called a “cumulative metric.” Upon the arrival of a new data symbol, the algorithm adjusts each cumulative metric; this incremental adjustment is called a “branch metric.” In practice, this is a series of add-compare-select operations. Ultimately, the Viterbi algorithm uses the cumulative metrics to identify the maximum likelihood path. In other words, it finds the string of data that most accurately represents the original encoded symbols.
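One add-compare-select step of the recursion described above can be sketched as follows for a generic two-state trellis; the trellis connectivity and the metric values are illustrative and not taken from any embodiment:

```python
def acs_step(cum_metrics, branch_metrics, predecessors):
    """One add-compare-select step of the Viterbi algorithm.

    For each trellis state: ADD the branch metric of every incoming transition
    to the predecessor's cumulative metric, COMPARE the candidates, and SELECT
    the best one, remembering which predecessor survived."""
    new_metrics, survivors = [], []
    for state, preds in enumerate(predecessors):
        candidates = [(cum_metrics[p] + branch_metrics[p][state], p) for p in preds]
        best_metric, best_pred = max(candidates)
        new_metrics.append(best_metric)
        survivors.append(best_pred)
    return new_metrics, survivors

# Two states; each state is reachable from either state.
cum = [0.0, -1.0]
branch = [[0.5, -0.5],   # metrics for transitions out of state 0
          [-0.2, 0.7]]   # metrics for transitions out of state 1
print(acs_step(cum, branch, predecessors=[[0, 1], [0, 1]]))
```

Tracing the surviving predecessors backward through such steps yields the maximum likelihood data path.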
Most existing codes perform relatively well under uniform channel error conditions, such as those of the Gaussian channel mentioned above, provided the received data is correlated. However, to maximize the performance of the coding process, there is a need to improve the Viterbi algorithm for those channels in which errors tend to occur in bursts, such as fading channels, or when the received data is uncorrelated.
The foregoing aspects and many of the attendant advantages of the invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings.
The present invention relates to modifications of the branch metrics of the Viterbi decoder so that the decoder can decode uncorrelated incoming signals without the influence of either the signal power level or the noise variance. The proposed modifications noticeably improve the performance of the Viterbi algorithm in such cases. In the following description, several specific details are presented to provide a thorough understanding of the embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or in combination with other methods, components, materials, etc. In other instances, well-known implementations or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention.
Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, implementation, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, implementations, or characteristics may be combined in any suitable manner in one or more embodiments.
Encoded data 103 enters an interleaver 104 for rearrangement of its content. This shuffling of the encoded data 103 helps avoid the loss of a sizable portion of the data in case of error bursts, which by definition are concentrated in time and are not distributed over the data. Once the damaged interleaved data is deinterleaved, the locally corrupted data is broken up and distributed over the entire deinterleaved data, rendering the errors effectively independent and easier to correct.
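A minimal block interleaver illustrates how a burst of consecutive transmission errors is dispersed after deinterleaving. The 3×4 block dimensions and the burst position are illustrative only:

```python
def interleave(data, rows, cols):
    """Write data row by row into a rows x cols block; read it out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Invert interleave(): write column by column, read row by row."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
# A burst wipes out three consecutive transmitted symbols (marked None)...
rx = tx[:4] + [None, None, None] + tx[7:]
# ...but after deinterleaving the erasures are spread across the whole block.
print(deinterleave(rx, rows=3, cols=4))  # → [0, 1, None, 3, 4, None, 6, 7, 8, None, 10, 11]
```

The three consecutive erasures end up separated in the deinterleaved stream, which is exactly the independence of errors that the decoder relies on.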
It is important to note that fading channels fit the model of a bursty error channel. The term fading is used when the amplitude of the received signal varies drastically as a result of the phase difference between a signal and its reflections. Such signals, at times, can weaken or practically cancel each other, or can combine to form a stronger signal. Although fading is primarily a result of the time variation of phases, the fading channel is an accurate model for channels susceptible to error bursts.
Interleaved data 105 is segmented into several “bursts” and mapped into physical channel bursts 107 by a data segmentation and physical channel mapping module 106. While data is being transmitted, however, the characteristics of the channel can change because of physical changes in the structure of its medium. Channel characteristics influence the traveling signals and can decorrelate two otherwise correlated data bursts. Changing weather conditions or other factors, for example, can change the “channel response,” which is defined as the response of a channel to an impulse-like signal and which is an accurate representation of the channel characteristics.
In a typical communication system, the lengths of the bursts are chosen so that the channel response remains relatively constant during the transmission of each burst. If the time between two bursts is short enough, this assumption remains valid even for two consecutive bursts, and the correlation of the signals, of the noise, and of the interference is preserved. However, if the time lapse between two consecutive bursts is long enough, the signals, the noise, and the interference will each be uncorrelated, which will adversely affect the performance of the traditional Viterbi channel decoder.
If the time between transmitted physical channel bursts 107 has been long enough, the deinterleaver 404 will further distribute the data extracted from an already uncorrelated signal, noise, and interference. Such data 405, entering a channel decoder 406, will affect the branch metrics of the Viterbi algorithm and lower its performance. The modifications proposed in the present invention, among other advantages, drastically improve the performance of the channel decoder 406 in such situations. Decoded block data 407 is finally checked by a Cyclic Redundancy Check (CRC) module 408 to identify most of the possible remaining errors.
In one embodiment, the equations for the branch metric modifications are derived as follows. The aim of the Viterbi algorithm, as briefly mentioned above, is to find a set {α̂n} that maximizes the likelihood function P({yn}|{αn}), where {αn} is the original signal before coding and {yn} is the quantized received signal, which consists of the coded signal and the noise. In other words, given our observations {yn}, the algorithm tries to find a set of data that maximizes the probability of such observations. Ideally, the result of such a search would be the original data set {αn}, but, practically, {α̂n} will be the best result, considering the nondeterministic and random nature of the noise and the interference. Here P({yn}|{αn}) represents the “cumulative metric” previously described, and if it can be written in the following form:
Jn = Jn−1 + yn·s(αn)   Eq. 1
then yn·s(αn) will represent the “branch metric,” since it incrementally adjusts the cumulative metric for each received symbol. The above likelihood function can be written as:

P({yn}|{αn}) = Πn (1/√(2πσ²))·exp(−(yn − S(αn))²/(2σ²))   Eq. 2

where {S(αn)} is the matched filter reconstruction of {αn} based on the received signals {yn}, and σ² is the variance of the assumed normally distributed (Gaussian) noise. As a result, {yn} and {S(αn)} have the same power density. After taking the natural log of both sides of Eq. 2, the following manipulations are possible:

ln P({yn}|{αn}) = −(N/2)·ln(2πσ²) − (1/(2σ²))·Σn (yn − S(αn))²   Eq. 3
If {yn} is stationary and correlated and the noise has a Gaussian distribution, the metric in Eq. 3 can be written in the form of Eq. 1, and, as mentioned above, yn·s(αn) will represent its branch metric. Note that in such a case the branch metric has no dependency on the noise variance. But if {yn} is the concatenation of uncorrelated segments, such as in the case of data bursts with long delays between consecutive bursts, only the signal within each segment is stationary. Or there is a case in which the interference is stationary within each segment but uncorrelated between segments. Or yet another case in which both {yn} and the interference are stationary within each segment but uncorrelated between segments. In such cases, Eq. 3 can be rewritten in the following form, where σn² represents the noise variance at the n-th symbol:

ln P({yn}|{αn}) = −Σn (1/2)·ln(2πσn²) − Σn (yn − S(αn))²/(2σn²)   Eq. 4
Since the value of −Σn [(1/2)·ln(2πσn²) + (yn² + S(αn)²)/(2σn²)] is the same for all paths and will merely add the same constant to the cumulative metrics of all paths, it can be dropped, and Eq. 4 can be written in the following form:

Jn = Jn−1 + yn·s(αn)/σn²   Eq. 5
Note that in this case, unlike the case in which {yn} was stationary and correlated, the branch metric part of Eq. 5 is a function of the Signal to Interference and Noise Ratio (SINRn). This becomes clearer when we write Eq. 5 in the following form:
Jn = Jn−1 + sign(yn·s(αn))·SINRn   Eq. 6
where sign(x) = 1 for x > 0 and −1 for x < 0. In fact, even in situations in which the received signals are not uncorrelated symbol by symbol, Eq. 5 remains a good approximation of the likelihood function.
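The contrast between the conventional branch metric of Eq. 1 and the variance-weighted form of Eq. 5 can be sketched numerically; the soft symbol values and noise variances below are assumed for illustration only:

```python
def conventional_metric(y, s):
    """Branch metric of Eq. 1: correlation of the soft symbol with the hypothesis s."""
    return y * s

def weighted_metric(y, s, sigma2):
    """Branch metric of Eq. 5: the same correlation divided by the per-symbol
    noise variance, so symbols from noisier segments count for less."""
    return y * s / sigma2

# Two symbols with identical soft values but very different noise variances:
# the conventional metric treats them equally, the weighted one does not.
print(conventional_metric(0.8, +1), conventional_metric(0.8, +1))
print(weighted_metric(0.8, +1, sigma2=0.1), weighted_metric(0.8, +1, sigma2=2.0))
```

This is the essential effect of the modification: a symbol received during a noisy burst contributes far less to the cumulative metric than an equally strong symbol received during a quiet burst.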
To implement the latter set of equations, the present invention modifies Eq. 5 to the following equation (note that s(αn) will always be either +1 or −1, while {yn} is the soft output of the equalizer):

Jn = Jn−1 + wn·yn·s(αn)

where wn is a weighting factor proportional to Es,n/σn², and where Es,n is the total energy of the n-th received symbol.
The output of the noise and interference measurement module 509 is utilized by a Soft Output Bit Fetch Decision module 511 to simplify the required branch metric computations, as elaborated below. The noise and interference measurement module 509 also produces the estimated value of the energy of the noise per symbol, En, which will be shown to be the only value required for computing wn, i.e., the only additional computation compared to the computation of a traditional branch metric. If the energy per symbol of the information signal is Es, the energy per symbol of the information signal at the output of the matched filter 504 will be Es². Now, assuming that a demodulator 506 does not change the power ratio of the signal to the noise, wn can be written as:
where En, as mentioned above, is produced by the noise and interference measurement module 509. Therefore, to compute wn, there is no need to estimate the energy of the signal. Also, since En is estimated once for each burst, wn is only updated once for each burst.
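Under the reading above, in which wn depends only on the measured noise energy per symbol and is updated once per burst, the weighting can be sketched as follows. The closed form wn = 1/En, the burst length, and the energy values are illustrative assumptions, not a definitive statement of the embodiment:

```python
def burst_weights(noise_energy_per_burst, burst_len):
    """Compute one weight per burst under the illustrative assumption wn = 1/En,
    then repeat it for every symbol of that burst, so the weight is updated
    only once per burst as described in the text."""
    weights = []
    for En in noise_energy_per_burst:
        weights.extend([1.0 / En] * burst_len)
    return weights

# Two bursts of four symbols; the second burst was measured as four times noisier.
print(burst_weights([0.5, 2.0], burst_len=4))  # → [2.0, 2.0, 2.0, 2.0, 0.5, 0.5, 0.5, 0.5]
```

Every symbol of the noisier burst is then de-emphasized by the same factor, with no per-symbol estimation cost.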
In yet another implementation, to simplify the process, wn is not multiplied by every output yn of the demodulator 506; rather, a procedure is introduced to select a limited number of its output bits. This procedure is called fetching. It should be noted that fetching is not required for the generation of the modified branch metrics; it is merely an additional scheme to make decoder computations more efficient while using the modified branch metrics.
In this procedure, if the soft output yn of the demodulator 506 has K bits, (bK−1, bK−2, . . . , b1, b0), only L bits of it will be fetched to form y′n, (bi+L−1, bi+L−2, . . . , bi+1, bi). Here, i is calculated by the Soft Output Bit Fetch Decision module 511 using the following equation:
i = round(log2(wn)) + c,   0 ≤ i ≤ K − L
where c is a constant computed based on the value range of yn and the bit margin. In a fixed-point implementation, c is determined by the number of fixed-point bits of the soft equalizer output (B1), the number of fixed-point bits of the Viterbi decoder input (B2), and the range of round(log2(wn)). The constant c should satisfy
c + max(abs(round(log2(wn)))) + B2 ≥ B1
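The fetch procedure can be sketched as a bit-slice of the K-bit soft output. The word sizes, the constant c, and the sample value below are illustrative assumptions:

```python
import math

def fetch_index(wn, c, K, L):
    """Compute the fetch position i = round(log2(wn)) + c, clamped to [0, K - L]."""
    i = round(math.log2(wn)) + c
    return max(0, min(i, K - L))

def fetch_bits(y, i, L):
    """Extract L consecutive bits (b_{i+L-1} ... b_i) from a K-bit soft value y."""
    return (y >> i) & ((1 << L) - 1)

# An 8-bit soft output reduced to 4 bits; wn = 4 shifts the fetch window up by 2.
K, L, c = 8, 4, 0
i = fetch_index(wn=4.0, c=c, K=K, L=L)
print(i, bin(fetch_bits(0b10110110, i, L)))  # → 2 0b1101
```

The multiplication by wn is thus replaced by choosing which L-bit window of yn is passed to the decoder, which is considerably cheaper in hardware.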
At step 703, the transmitted signal is demodulated, and at step 704 the demodulated symbols are quantized to soft outputs. At step 705, the 0 or 1 binary value of each received symbol is estimated and, subsequently, at step 706, −1 or +1 is assigned to each binary value, respectively. At step 707, the noise is measured and the energy of the noise per data symbol is computed. At step 708, a branch metric is formed by multiplying the quantized value of a symbol by the assigned −1 or +1 and dividing by the noise energy of the same symbol. At step 709, this branch metric is used to update the cumulative metric.
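The per-symbol steps 705 through 709 can be sketched end to end as follows; the soft values and noise energies are illustrative:

```python
def update_cumulative(J_prev, y, noise_energy):
    """Steps 705-709 for one symbol: estimate the binary value from the soft
    output, map it to +1/-1, form the noise-weighted branch metric, and add
    it to the cumulative metric."""
    bit = 1 if y >= 0 else 0          # step 705: estimate the 0/1 binary value
    s = 1 if bit == 1 else -1         # step 706: assign +1 or -1
    branch = y * s / noise_energy     # steps 707-708: weighted branch metric
    return J_prev + branch            # step 709: update the cumulative metric

J = 0.0
for y, En in [(0.9, 0.5), (-0.7, 0.5), (0.4, 2.0)]:
    J = update_cumulative(J, y, En)
print(J)
```

In a full decoder this update is performed once per trellis branch, with s taken from the branch hypothesis rather than from a hard decision on y.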
The preferred and several alternate embodiments have thus been described. One of ordinary skill after reading the foregoing specification will be able to effect various changes, alterations, combinations, and substitutions of equivalents without departing from the broad concepts disclosed. It is therefore intended that the scope of the letters patent granted hereon be limited only by the definitions contained in the appended claims and equivalents thereof, and not by limitations of the embodiments described herein.
The present application claims the benefit of U.S. Provisional Application No. 60/533,193, filed 30 Dec. 2003 and entitled “Viterbi Decoder for Uncorrelated Signals,” the entirety of which is incorporated herein by reference.