1. Field of the Invention
The present invention generally relates to communication systems, and more particularly to a system and method for encoding DSL information streams having differing latencies.
2. Discussion of the Related Art
In recent years telecommunication systems have expanded from traditional POTS communications to include high-speed data communications as well. As is known, POTS communications include not only the transmission of voice information, but also PSTN (public switched telephone network) modem information, control signals, and other information that is transmitted in the POTS bandwidth, which extends from approximately 300 hertz to approximately 3.4 kilohertz.
High-speed data communications provided over digital subscriber lines (DSL), such as Asymmetric Digital Subscriber Line (ADSL), Rate Adaptive Digital Subscriber Line (RADSL), High-Speed Digital Subscriber Line (HDSL), etc. (more broadly denoted as xDSL), are commonly used in communicating over the Internet. As is known, the bandwidth for xDSL transmissions is generally defined by a lower cutoff frequency of approximately 30 kilohertz and a higher cutoff frequency that varies depending upon the particular technology. Since the POTS and xDSL signals occupy isolated frequency bands, both signals may be transmitted over the same two-wire loop.
Indeed, twisted pair public telephone lines are increasingly being used to carry relatively high-speed signals instead of, or in addition to, telephone signals. Examples of such signals are ADSL (asymmetric digital subscriber line), HDSL (high-speed digital subscriber line), T1 (1.544 Mb/s), and ISDN signals. There is a growing demand for increasing use of telephone lines for high-speed remote access to computer networks, and there have been various proposals to address this demand, including systems that communicate data signals via telephone lines at frequencies above the voice band.
As is known, different applications often demand (or at least lend themselves to) different latency requirements. For example, applications of pure data transfer are often not sensitive to latency delays, while real-time voice communications are sensitive to latency delays. As is also known, to accommodate maximum flexibility for providers and end users of ADSL services, forward error correction (FEC) may be selectively applied to the composite data streams to, or from, the central office ADSL modem. This permits FEC to be included or excluded on a data service by data service basis within the composite data stream.
As an example of the mixed requirements for FEC in an ADSL service, consider transmitting a one-way data stream from the central office to a remote unit. The end user may require high reliability on the one-way channel because the channel may contain highly compressed digital data with no possibility for requesting retransmission. For this application, FEC is highly desired. On the other hand, voice services and duplex data services with their own embedded protocols may require minimum latency. As noted above, in real-time voice communication applications, latency delays are undesirable, while small transmission errors may be tolerated (manifested as noise, which can be effectively filtered by the listener). Thus, in such an application, FEC may be optional.
FEC involves the addition of redundant information to the data to be transferred. The data to be transferred, together with the added redundant data, forms what are commonly known as codewords. FEC in ADSL employs Reed-Solomon codes based on 8-bit (one-byte) symbols. FEC in ADSL is rate adaptable, providing for various interleave depths and codeword lengths to support a range of data rates while maintaining constant interleave latency. An enhancement to FEC involves shuffling, or interleaving, the encoded data prior to transmission, then unshuffling, or deinterleaving, the data received at the remote DSL modem. Interleaving ensures that bursts of noise during data transmission do not adversely affect any individual codeword in the transmission. If noise affects a particular frame of data, only a minimum number of bytes of any particular codeword will be affected, because the individual codewords are distributed across multiple frames.
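By way of illustration only, the following sketch (in Python, with arbitrary toy sizes that are not taken from the ADSL standard) shows how writing codewords into a matrix and reading the matrix out column-by-column spreads a burst of corrupted bytes across several codewords, leaving each codeword with few enough errors for a Reed-Solomon decoder to correct.

```python
# Illustrative sketch (not the patent's implementation): a simple block
# interleaver that writes codeword bytes row-by-row and reads column-by-column,
# so a burst of consecutive corrupted bytes on the line touches each codeword
# only a few times.

CODEWORDS = 4        # hypothetical number of codewords per interleave block
CODEWORD_LEN = 8     # hypothetical codeword length in bytes

def interleave(codewords):
    """Read the matrix of codewords column-by-column for transmission."""
    return [codewords[r][c] for c in range(CODEWORD_LEN) for r in range(CODEWORDS)]

def deinterleave(stream):
    """Rebuild the codewords from the column-ordered stream."""
    out = [[None] * CODEWORD_LEN for _ in range(CODEWORDS)]
    i = 0
    for c in range(CODEWORD_LEN):
        for r in range(CODEWORDS):
            out[r][c] = stream[i]
            i += 1
    return out

if __name__ == "__main__":
    cws = [[(r, c) for c in range(CODEWORD_LEN)] for r in range(CODEWORDS)]
    tx = interleave(cws)
    for i in range(10, 14):          # corrupt a burst of 4 consecutive bytes
        tx[i] = "ERR"
    rx = deinterleave(tx)
    # Each codeword absorbs at most one erroneous byte.
    print([row.count("ERR") for row in rx])
```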
The combination of Reed-Solomon encoding with data interleaving is highly effective at correcting errors caused by impulse noise in the service subscriber's local loop. In convolutional interleaving, after writing a byte into interleave memory, a previously written byte is typically read from the same memory.
Standard T1.413, Interface Between Networks and Customer Installation—ADSL Metallic Interface, provides for convolutional interleaving/deinterleaving along with Reed-Solomon coding as part of forward error correction (FEC). The standard provides an effective method for dealing with burst error channels in modern telecommunication systems. In DMT systems, two latency channels are supported: interleave data and fast data (without interleaving). Convolutional interleaving/deinterleaving is typically implemented by processing the Reed-Solomon encoded digital data sequence through a linear finite state shift register. In high bit rate applications like DMT, a random access memory (RAM) device may be used as the data storage means. Convolutional interleaving/deinterleaving is computation intensive. In software approaches that use a single address pointer and several modulo and addition operations to update the address pointer, system level concurrency and performance are adversely affected. Conversely, hardware approaches that utilize multiple pointers for interleaving/deinterleaving operations increase the complexity of the overall DSL system. The system performance trade-off introduced by FEC in the form of Reed-Solomon coding and convolutional interleaving can be described as increased data transmission reliability at the expense of increased channel latency.
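As a purely software illustration of the convolutional interleaving rule used with Reed-Solomon FEC in ADSL-class systems, the sketch below applies the delay rule in which byte i of an N-byte codeword is delayed by (D−1)·i byte periods, D being the interleave depth. A dictionary stands in for the RAM, and the toy values N=5 and D=4 are assumptions chosen only so that delayed positions do not collide; a practical implementation would instead use the pointer arithmetic discussed above.

```python
# Minimal sketch of the T1.413-style convolutional interleaving rule:
# byte i of an N-byte Reed-Solomon codeword is delayed by (D - 1) * i byte
# periods, where D is the interleave depth.

def convolutional_interleave(data, n, depth, fill=0x00):
    ram = {}
    out = []
    for t, byte in enumerate(data):
        i = t % n                        # byte index within its codeword
        ram[t + (depth - 1) * i] = byte  # schedule the byte (D-1)*i bytes later
        out.append(ram.pop(t, fill))     # emit whatever is due now (fill at startup)
    return out

if __name__ == "__main__":
    stream = list(range(1, 21))          # four 5-byte codewords: 1..20
    print(convolutional_interleave(stream, n=5, depth=4))
```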
U.S. Pat. No. 5,764,649 to Tong discloses a system and method compliant with the T1.413 standard. As illustrated in the '649 patent, both a “fast path” and an interleave path are provided downstream of the FEC. As taught in the '649 patent, two frames are output from a multiplexer every frame period. One frame is sent through the ADSL transmitter along a “fast path,” while the other frame is sent along an “interleave path.” The fast path is so called simply because the data does not undergo the additional processing of interleaving, and therefore does not experience the additional delay imposed by de-interleaving at the receiving end of the communication system. However, all data from the incoming bit stream is passed through the FEC, and therefore encounters the latency delay associated therewith.
Accordingly, there is a need to provide an improved system and method for encoding DSL information streams to further minimize latency delays. Further, there is a desire to provide an improved system and method for encoding DSL information streams having differing latencies.
Certain objects, advantages and novel features of the invention will be set forth in part in the description that follows and in part will become apparent to those skilled in the art upon examination of the following or may be learned with the practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
To achieve the advantages and novel features, the present invention is generally directed to a system and method for encoding a DSL information bit stream and decoding a corresponding encoded DSL symbol. In accordance with one embodiment, an apparatus for encoding a DSL information bit stream is provided having a switch with an input configured to receive a DSL information bit stream and at least two outputs. An encoder is provided and coupled to a first output of the switch. A serial to parallel converter is provided and coupled to both an output of the encoder and a second output of the switch. Finally, a mapper is provided and coupled to an output of the serial to parallel converter through multiple paths. Preferably, a first coupling path between the serial to parallel converter and the mapper is a direct path and a second coupling path includes a second encoder.
In accordance with another embodiment, an apparatus is provided for decoding an encoded DSL symbol. The apparatus includes a soft demapper configured to generate multiple outputs, including an uncoded bit stream and at least one coded bit stream. The apparatus further includes at least one decoder configured to decode the at least one coded bit stream. Finally, the apparatus includes a circuit configured to perform a hard demapping of both the uncoded bit stream and an output of the at least one decoder.
In accordance with yet another embodiment of the invention, a method is provided for encoding a DSL information bit stream. Preferably, the method operates by providing at least two paths between an input configured to receive the DSL information bit stream and a serial to parallel converter. The method provides a first encoder in one of the at least two paths, and switches the DSL information bit stream through one of the at least two paths based upon a latency in the DSL information bit stream.
The accompanying drawings incorporated in and forming a part of the specification, illustrate several aspects of the present invention, and together with the description serve to explain the principles of the invention. In the drawings:
Having summarized various aspects of the present invention, reference will now be made in detail to the description of the invention as illustrated in the drawings. While the invention will be described in connection with these drawings, there is no intent to limit it to the embodiment or embodiments disclosed therein. On the contrary, the intent is to cover all alternatives, modifications and equivalents included within the spirit and scope of the invention as defined by the appended claims.
Reference is now made to
In this regard, a switching mechanism 110 is provided. In the diagram of
In a second position, the switching mechanism 110 may connect the DSL bit stream 102 directly to a serial to parallel converter 116. In the illustrated embodiment, an additional encoder 118 (an outer block code) may be included to provide a smaller, but additional measure of error correction to the transmitted symbol.
Of course, a mechanism 120 is also provided to control the operation of the switching mechanism 110. The switch control mechanism 120 controls the position of the switching mechanism 110 in accordance with the latency demands of the DSL bit stream 102. In this regard, the switch control mechanism 120 is provided with some indication as to the type of information being conveyed in the DSL bit stream 102. This information may be ascertained by the switch control mechanism 120 evaluating the signals on the DSL bit stream 102 themselves. Alternatively, this information may be provided through network layer transport mechanisms communicating with the switch control mechanism 120. The particular implementation of the switch control mechanism 120 is not considered to be a limitation upon the present invention.
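The following fragment is a hypothetical sketch (the names LatencyClass and select_path are not from the specification) of the kind of decision the switch control mechanism 120 makes: latency-sensitive traffic bypasses the interleaved FEC path, while latency-tolerant traffic receives the full encoding chain.

```python
from enum import Enum

class LatencyClass(Enum):
    LOW = "low"      # e.g., real-time voice, duplex data with embedded protocols
    HIGH = "high"    # e.g., one-way compressed data needing strong FEC

def select_path(latency_class):
    """Return the processing stages the bit stream is switched through."""
    if latency_class is LatencyClass.LOW:
        return ["serial_to_parallel", "mapper"]           # bypass interleaved FEC
    return ["encoder", "interleaver", "serial_to_parallel", "mapper"]

print(select_path(LatencyClass.LOW))
print(select_path(LatencyClass.HIGH))
```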
As illustrated, the output of the interleaver 114 is also directed to the serial to parallel converter 116. The parallel data that is output from the serial to parallel converter 116 is then directed to a mapper 130 (also referred to as a symbol selector). In a manner that is known and understood by persons skilled in the art, the mapper 130 operates to select or generate an output symbol (e.g., constellation point). Although various encoders may be utilized consistent with the scope of the invention, in the preferred embodiment, the encoder 140 is implemented as a parallel concatenated encoder (or turbo coder), which includes encoders 142 and 144 and an interleaver 146. Concatenated codes are now well known and need not be described herein. As will be appreciated, additional levels or degrees of coding may be added consistent with the invention.
Accordingly, what is generally illustrated in
Reference is made now to
As illustrated, multiple outputs are generated from the soft demapper 204. The decoder 210 may be configured to perform the inverse of the operation of the encoder 140 of
The circuit 220 also operates to determine whether the transmitted signal was generated through the low latency path or high latency path illustrated in
Reference is now made briefly to
Although not specifically shown, the implementation of a receiver for receiving a symbol generated by the circuitry of
In accordance with yet another embodiment of the invention, a method is provided for encoding a DSL information bit stream. As should be appreciated from the foregoing discussion, the method operates by providing at least two paths between an input configured to receive the DSL information bit stream and a serial to parallel converter. The method provides a first encoder in one of the at least two paths, and switches the DSL information bit stream through one of the at least two paths based upon a latency in the DSL information bit stream.
Having described the top-level architecture and operation of a system and method constructed in accordance with the invention, the discussion will now focus on implementation details of certain preferred embodiments of the invention. For example,
As is known, turbo codes cover channel coding techniques that combine at least two “light codes” at the transmitter side, separated by a decoupling device such as an interleaver, and jointly soft decode them iteratively at the receiver side, in order to benefit from the performance of the equivalent “longer code” at a reasonable cost. In this regard, the “turbo” effect takes place at the receiver side, insofar as performance is increased because the two soft-input soft-output (SISO) decoders (See
There are generally two categories or families of turbo codes: convolutional turbo codes and block turbo codes. As is known, convolutional turbo codes combine two convolutional codes C_b and C_a either in a serial (
In the preferred embodiment of the invention, the encoders are implemented as product block turbo codes. Product block codes have long been known; however, iterative hard decoding of such codes is tedious and yields poor performance. Product block turbo codes lift the intrinsic “hard iterative decoding” limitation by introducing a “soft iterative procedure”. Product block codes arrange the information bits in a matrix fashion (see
The SISO block decoder (
Most of the turbo decoders follow the paradigm illustrated in
The soft output, or extrinsic information, measures the reliability of the hard decision that could have been made if a hard decoder were used. Therefore, as the process goes through successive iterations, the extrinsic information changes according to the improvement of that hard decision. When the extrinsic information reaches a steady state, a highly reliable “hard decision” is performed to generate a hard output 410. Generally, a few (three to five) iterations are sufficient to get very close to the Shannon bound.
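A schematic sketch of this iterative exchange is given below. The two SISO decoders are left abstract (the identity placeholders are assumptions for demonstration only); the point is the repeated computation of extrinsic information as the difference between soft output and soft input, followed by a final hard decision after a few iterations.

```python
def turbo_decode(channel_llrs, siso_a, siso_b, iterations=4):
    """Exchange extrinsic information between two SISO decoders, then slice."""
    extrinsic = [0.0] * len(channel_llrs)
    for _ in range(iterations):                      # three to five passes usually suffice
        soft_in = [c + e for c, e in zip(channel_llrs, extrinsic)]
        soft_out = siso_a(soft_in)
        extrinsic = [o - i for o, i in zip(soft_out, soft_in)]
        soft_in = [c + e for c, e in zip(channel_llrs, extrinsic)]
        soft_out = siso_b(soft_in)
        extrinsic = [o - i for o, i in zip(soft_out, soft_in)]
    return [1 if c + e >= 0 else 0 for c, e in zip(channel_llrs, extrinsic)]

if __name__ == "__main__":
    identity = lambda soft: list(soft)               # placeholder SISO decoders
    print(turbo_decode([0.9, -0.3, 1.2, -1.1], identity, identity))
```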
Convolutional SISO algorithms generally rely either on the soft-output Viterbi algorithm (SOVA) or on the known BCJR (Bahl, Cocke, Jelinek, Raviv) algorithm. Both of them model the convolutional code as a finite state Markov chain. As is known, the Viterbi algorithm estimates the whole state path of the Markov chain. Although based on a MAP criterion, the Viterbi algorithm is suboptimal, since a certain number of transitions are dropped from time to time to reduce complexity. As is further known, the BCJR algorithm searches for the current and the future states of the Markov chain at any time according to a MAP criterion. The complexity of SISO schemes devoted to convolutional codes is generally quite high. Conversely, the SISO algorithms used for product block codes are simpler and are based on the CHASE algorithm.
General Properties of Product Codes
As illustrated in
n = n_a × n_b, k = k_a × k_b, d = d_a × d_b    Equation 1
Other interesting properties of product block codes are related to the “code spectrum”, i.e., the number A(w) of code words with a certain weight w (number of bits equal to one). The number of code words of the product block code with the minimum weight is the product of the numbers of minimum-weight code words of the two original codes:
A(d_min) = A_a(d_min,a) · A_b(d_min,b)    Equation 2
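As a worked illustration of Equations 1 and 2, consider the product of two (8,4,4) extended Hamming codes; the code parameters and the count of 14 minimum-weight code words per constituent are example figures, not values taken from the text.

```python
def product_code_params(code_a, code_b):
    """Equation 1: the product code's length, dimension, and minimum distance."""
    (na, ka, da), (nb, kb, db) = code_a, code_b
    return na * nb, ka * kb, da * db

n, k, d = product_code_params((8, 4, 4), (8, 4, 4))   # two extended Hamming codes
print(n, k, d, k / n)        # 64 16 16, rate 0.25 (product of the two rates)

A_min_a = A_min_b = 14       # weight-4 code words in the (8, 4, 4) code
print(A_min_a * A_min_b)     # Equation 2: 196 minimum-weight words in the product code
```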
Soft Decoding of Block Codes: CHASE Algorithm
Complexity of the Maximum Likelihood Approach
Although maximum likelihood decoding is optimum, it generally implies an unrealistic complexity when the unknowns take discrete values and their number is higher than 10. Assume, for example, that a row r (n_a components) of the matrix is transmitted. The associated received vector x contains additive noise:
x = r + n    Equation 3
If the noise is white and Gaussian, the maximum likelihood approach looks for the code word that minimizes the following Euclidean distance:
r_opt = ARG{Min_{1 ≤ i ≤ 2^{k_a}} ‖x − r_i‖²}    Equation 4
This requires a search amongst 2^{k_a} code words:
r_i = [r_1^i . . . r_m^i . . . r_{n_a}^i]    Equation 5
Each component takes the binary value 0 or 1.
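The following toy sketch makes the complexity point concrete: Equation 4 amounts to an exhaustive comparison of Euclidean distances over all 2^k candidate code words (here mapped to ±1 as in the BPSK case discussed below), which quickly becomes unrealistic as k grows. The (3,1) repetition code is an assumption used purely for brevity.

```python
from itertools import product

def ml_decode(x, k, encode):
    """Exhaustive search over all 2^k messages for the closest code word."""
    best, best_dist = None, float("inf")
    for bits in product((0, 1), repeat=k):
        r = [1.0 if b else -1.0 for b in encode(bits)]      # BPSK mapping of the code word
        dist = sum((xi - ri) ** 2 for xi, ri in zip(x, r))  # Euclidean distance of Equation 4
        if dist < best_dist:
            best, best_dist = bits, dist
    return best

repetition = lambda bits: bits * 3                          # toy (3, 1) repetition code
print(ml_decode([0.7, -0.2, 0.4], k=1, encode=repetition))  # -> (1,)
```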
CHASE proposed an algorithm that approximates the maximum likelihood decoding of block codes with low computational complexity and small performance degradation.
CHASE Algorithm
Instead of reviewing all the code words of C_a, the CHASE algorithm searches for those located in a subspace of the code. If a BPSK modulation is used, the received vector, still associated with a row r (for simplicity's sake), is:
x = r̃ + n    Equation 6
where:
r̃_m = +1, if r_m = 1
r̃_m = −1, if r_m = 0    Equation 7
We denote by y the binary vector deduced after hard slicing of x: y_m = 1 if x_m ≥ 0, and y_m = 0 otherwise.
The CHASE algorithm is based on four steps:
Step 1. Find the P least reliable bits of y.
Step 2. Elaborate the 2^P associated “test sequences” z_i, 1 ≤ i ≤ 2^P.
Step 3. Hard decode the 2^P test sequences and store the results, which span the decoding subspace.
Step 4. Decode x based on maximum likelihood in Ω̃, and compute the reliability of the decision to elaborate the soft output.
With regard to step 1, the P least reliable bits are associated with the P smallest values of |x_m|.
With regard to step 2, to build the test sequences, the most reliable bits are kept, and the P least reliable bits take either binary value, 0 or 1. This leads to 2^P possibilities.
With regard to step 4, the maximum likelihood approach, limited to Ω̃, leads to the decision d:
d = ARG{Min_{r̃ ∈ Ω̃} ‖x − r̃‖²}    Equation 9
where Ω̃ is the extension of the decoding subspace according to the BPSK modulation (see Equation 7). The reliability of decision d is evaluated per bit m. The reliability of bit m is based on the following Log-likelihood ratio:
After some calculation and manipulation, the Log-likelihood ratio becomes approximately:
where r̃^(−1)(m) is the closest code word to x in C̃_a with bit m equal to −1, and r̃^(+1)(m) is the closest code word to x in C̃_a with bit m equal to +1. It is particularly noted that if:
d = r̃^(−1)(m)    Equation 12
the Log-likelihood ratio is negative. If:
d = r̃^(+1)(m)    Equation 13
the Log-likelihood ratio is positive. Therefore, the Log-likelihood ratio of bit m bears its sign. The Log-likelihood ratio might be rewritten as:
Based on a straightforward calculation, it is possible to show that if σ² = 2, then:
Λ(d_m) ≈ x_m + w_m = x′_m    Equation 15
where w_m is the extrinsic information stemming from the likelihood evaluation. This extrinsic information is added to the soft input to elaborate the soft output x′_m, whose reliability is contained in its absolute value |x′_m|. As this absolute value grows, so does the reliability.
Simplified Soft Output.
The Log-likelihood given by Equation 14 is the “soft output” of the decoding. It becomes an additive correction of the input data for a particular normalization. Nevertheless, the search for both code words r̃^(+1)(m) and r̃^(−1)(m) in the whole code C̃_a is still unrealistic. This optimum soft output may be replaced by the following simplified version in the CHASE algorithm:
where d is the maximum likelihood decision in Ω̃, d_m is bit m of d, and a is the “challenger” of d: the closest code word to x in Ω̃, but with bit m equal to −d_m. According to Equation 15, the extrinsic information w_m is the difference:
If the challenger a does not exist, then the soft output is chosen, in coherence with Equations 15 and 16, to be equal to:
x′_m = x_m + β·d_m    Equation 18
As a summary, for BPSK modulation, the CHASE algorithm associates with a soft input x a soft output x′ (according to either Equation 15 or 16). Their difference is the extrinsic information w (see
As a compact useful notation, consider:
CHASE[x] = x′    Equation 19
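For illustration, a compact sketch of the CHASE soft-input soft-output decoding summarized by Equation 19 is given below, applied to a toy (7,4) Hamming code under BPSK. The generator matrix, the brute-force hard decoder, and the reliability coefficient β are assumptions made for the example; the soft output follows the simplified per-bit form discussed above.

```python
from itertools import product

# Generator matrix of a (7,4) Hamming code (systematic form), chosen for the example.
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

CODEWORDS = [tuple(sum(m[i]*G[i][j] for i in range(4)) % 2 for j in range(7))
             for m in product((0,1), repeat=4)]

def hard_decode(y):
    """Nearest code word in Hamming distance (brute force over 16 code words)."""
    return min(CODEWORDS, key=lambda c: sum(a != b for a, b in zip(c, y)))

def bpsk(c):
    return [1.0 if b else -1.0 for b in c]

def sqdist(x, r):
    return sum((xi - ri) ** 2 for xi, ri in zip(x, r))

def chase(x, p=2, beta=0.5):
    """Soft input x -> soft output x' (CHASE[x] in Equation 19)."""
    y = [1 if xi >= 0 else 0 for xi in x]                       # hard slicing
    weak = sorted(range(len(x)), key=lambda m: abs(x[m]))[:p]   # step 1
    candidates = set()
    for pattern in product((0, 1), repeat=p):                   # steps 2 and 3
        z = list(y)
        for pos, bit in zip(weak, pattern):
            z[pos] = bit
        candidates.add(hard_decode(z))
    d = min(candidates, key=lambda c: sqdist(x, bpsk(c)))       # step 4
    d_dist, soft = sqdist(x, bpsk(d)), []
    for m in range(len(x)):
        rivals = [c for c in candidates if c[m] != d[m]]
        sign = 1.0 if d[m] else -1.0
        if rivals:
            a = min(rivals, key=lambda c: sqdist(x, bpsk(c)))   # the "challenger"
            soft.append((sqdist(x, bpsk(a)) - d_dist) / 4.0 * sign)
        else:
            soft.append(x[m] + beta * sign)                     # no challenger found
    return soft

if __name__ == "__main__":
    received = [0.9, -1.1, 0.2, 1.0, -0.8, 0.1, -1.2]           # noisy BPSK samples
    print(chase(received))
```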
Iterative Soft Decoding of Product Block Codes: Block Product Turbo Codes for BPSK Modulation
Assume that the received data consists of a matrix X with n_a columns and n_b rows, arranged in the fashion described above. One iteration of the turbo decoding of X is split into two “half iterations”: the soft decoding of all the rows on the one hand and of all the columns on the other hand, both based on the CHASE algorithm.
The iterative scheme is as follows.
Iteration 0
First Half Iteration 0. Rows. Soft decode all the rows of matrix X. Half iteration 0 does not use any extrinsic information as input. Denote by SI_R[0,0] the soft input at half iteration 0 (the index R stands for rows, although the same operation could of course be performed on the columns). The soft output SO_R[0,0] after the first half iteration 0 is obtained by the soft decoding (based on the CHASE algorithm devoted to code C_a) of all the rows of X. In summary,
SI_R[0,0] = X
SO_R[0,0] = CHASE_R[X]
W_R[0,0] = SO_R[0,0] − SI_R[0,0]    Equation 20
Second Half Iteration 0. Columns. The soft input of the second half iteration 0, SI_C[0,1], includes part of the extrinsic information stemming from the first half iteration 0:
SI_C[0,1] = X + α[0,1]·W_R[0,0]    Equation 21
The soft output of the second half iteration, SO_C[0,1], results from the soft decoding (based on the CHASE algorithm devoted to code C_b) of all the columns of the soft input given by Equation 21:
SO_C[0,1] = CHASE_C[SI_C[0,1]]    Equation 22
The relevant extrinsic information W_C[0,1] is still the difference between the soft output and the soft input:
W_C[0,1] = SO_C[0,1] − SI_C[0,1]    Equation 23
At the first iteration, the extrinsic information is not highly reliable; the reliability increases with each iteration. The extrinsic information that feeds the next SISO block is thus weighted by a coefficient α tuned according to the iteration. This coefficient increases from 0 to 1.
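A structural sketch of the half-iteration schedule of Equations 20 through 23 follows. The row and column soft decoders are passed in as callables (for instance, the CHASE routine sketched earlier, applied row-wise or column-wise), and the sequence of α values is an assumed example of a coefficient increasing toward 1.

```python
def turbo_decode_product(X, soft_decode_rows, soft_decode_cols,
                         alphas=(0.2, 0.4, 0.6, 0.8, 1.0, 1.0)):
    """Six half iterations (three full iterations) over the received matrix X."""
    W = [[0.0] * len(X[0]) for _ in X]               # extrinsic information, zero at start
    for half, alpha in enumerate(alphas):
        SI = [[x + alpha * w for x, w in zip(xr, wr)] for xr, wr in zip(X, W)]
        decode = soft_decode_rows if half % 2 == 0 else soft_decode_cols
        SO = decode(SI)                              # e.g., CHASE over rows or columns
        W = [[so - si for so, si in zip(sor, sir)] for sor, sir in zip(SO, SI)]
    return [[1 if v >= 0 else 0 for v in row] for row in SO]   # final hard decision

if __name__ == "__main__":
    identity = lambda M: [row[:] for row in M]       # placeholder SISO, shape only
    X = [[0.8, -0.3], [-1.1, 0.4]]
    print(turbo_decode_product(X, identity, identity))
```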
Extension to the ADSL-DMT Constraints
The idea is to replace the existing coded QAM modulation, the interleaver, and the Reed-Solomon code with a soft demapper of QAM and a block product turbo BCH code. In what follows, the chain without turbo codes is referred to as the “classical chain”. The matrix arrangement inherent to block product codes in fact behaves as an interleaver. Introducing block product turbo codes requires satisfying the ADSL constraints.
Block Product Turbo BCH (BPTBCH) Scheme Suited to ADSL Framing Constraints
The Reed-Solomon code leads to 255-byte words. BCH codes are binary cyclic codes; BCH codes of 255 bits are thus convenient (8 BCH words fit one RS word). Since the rate of the product block code is the product of the original rates, to keep a reasonable rate, consider a (255,247)² BPTBCH and a (255,247)(255,239) BPTBCH, with respective rates approximately equal to 0.937 and 0.907. Both schemes are equivalent to an interleaver depth of at least 29 bytes.
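These rates follow directly from the product-of-rates property; a quick numerical check:

```python
r1 = (247 / 255) ** 2             # (255,247)^2 BPTBCH
r2 = (247 / 255) * (239 / 255)    # (255,247)(255,239) BPTBCH
print(round(r1, 3), round(r2, 3)) # close to the 0.937 and 0.907 figures quoted above
```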
Soft Demapping
The following presents a mathematical discussion (and illustration) of soft demapping. It should be appreciated that the following is presented purely for the purpose of illustration, and should not be viewed as limiting in any way upon the broader concepts of the present invention.
ML Partitioning of the Real Axis
For simplicity, consider an 8-PAM based on 3 bits, b_2 b_1 b_0, and a Gray labeling (see
where b^(−i) is the 2-bit word built from the bits other than bit i. Instead of considering the Max[Log(MAP)] argument and the Bayes rule to simplify Equation 25, we introduce the simple idea of “tangent BPSK” modulation.
Soft Bits Based on “Tangent BPSK” Modulation
To explain the idea of “tangent BPSK” modulation, we consider an example and assume that r=1.1 (see
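Since the worked “tangent BPSK” example is abbreviated above, the sketch below instead computes the exact per-bit max-log metric that such a shortcut approximates, for a Gray-labeled 8-PAM. The particular Gray labeling, the noise variance, and the sign convention (positive favors bit = 1) are assumptions made for illustration only.

```python
# Generic max-log soft demapper for a Gray-labeled 8-PAM constellation.
LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]
GRAY_LABELS = ["000", "001", "011", "010", "110", "111", "101", "100"]
CONSTELLATION = list(zip(LEVELS, GRAY_LABELS))

def soft_demap(r, noise_var=2.0):
    """Return one log-likelihood ratio per bit b2 b1 b0 (positive favours bit = 1)."""
    llrs = []
    for i in range(3):
        d0 = min((r - s) ** 2 for s, lab in CONSTELLATION if lab[i] == "0")
        d1 = min((r - s) ** 2 for s, lab in CONSTELLATION if lab[i] == "1")
        llrs.append((d0 - d1) / (2.0 * noise_var))
    return llrs

print(soft_demap(1.1))   # received sample value taken from the text's example
```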
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obvious modifications or variations are possible in light of the above teachings. The embodiment or embodiments discussed were chosen and described to provide the best illustration of the principles of the invention and its practical application to thereby enable one of ordinary skill in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly and legally entitled.
This application claims the benefit of U.S. provisional patent application Ser. 60/206,068, filed on May 22, 2000, and entitled “Block Product Turbo BCH Codes and QAM Soft Demapper for DSL DMT,” which is hereby incorporated by reference in its entirety. This application also claims the benefit of U.S. provisional patent application Ser. 60/226,114, filed on Aug. 18, 2000, and entitled “Multiple Latencies Multilevel Turbo Coding for DSL,” which is hereby incorporated by reference in its entirety.
Number | Date | Country
---|---|---
60/206,068 | May 2000 | US
60/226,114 | Aug 2000 | US