The present invention relates generally to wireless communications, and more particularly to multiplexed coding design for cooperative communications.
In wireless communications, a transmitter typically transmits information to a receiver over a communication channel. Statistically, a communication channel can be defined as a triple consisting of an input alphabet, an output alphabet, and for each pair (i,o) of input and output elements of each alphabet, a transition probability p(i,o). The transition probability is the probability that the receiver receives the symbol o given that the transmitter transmitted symbol i over the channel.
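As a concrete, purely illustrative instance of this triple, the binary symmetric channel with an assumed crossover probability p can be sketched as follows; the function names here are hypothetical and not part of this disclosure:

```python
import random

def make_bsc(p):
    """Binary symmetric channel: input and output alphabets are both {0, 1},
    and each transmitted bit is flipped independently with probability p."""
    def transition_prob(i, o):
        # p(i, o): probability the receiver sees symbol o given input symbol i.
        return 1 - p if i == o else p

    def transmit(bits, rng=random):
        # Flip each bit independently with probability p.
        return [b ^ (1 if rng.random() < p else 0) for b in bits]

    return transition_prob, transmit

prob, transmit = make_bsc(0.1)
print(prob(0, 0), prob(0, 1))  # 0.9 0.1
```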
Given a communication channel, there exists a number, called the capacity of the channel, such that reliable transmission is possible for rates arbitrarily close to the capacity, and reliable transmission is not possible for rates above the capacity.
In some circumstances, the distance separating the transmitter (i.e., source) and the receiver (i.e., destination) is large. Alternatively or additionally, the communication channel over which the source and destination communicate may be of poor quality. As a result, interference may be introduced in the communications between the source and the destination, which can result in distortion of the message. To reduce the effect of interference, the transmitter and receiver often transmit information over a communication channel using a coding scheme. The coding scheme provides redundancy so that the message can be detected (and decoded) by the receiver in the presence of interference.
The coding scheme uses codes, which are ensembles (i.e., groups) of vectors to be transmitted by the transmitter. The lengths of the vectors are assumed to be the same and are referred to as the block length of the code. If the number of vectors is K = 2^k, then every vector can be described with k bits.
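A simple repetition code, a generic textbook example with toy parameters chosen here only for illustration, makes the ensemble-of-vectors view concrete: k message bits index K = 2^k codewords of a common block length n.

```python
from itertools import product

def repetition_codebook(k, r):
    """Build the codebook of a toy code: each of the K = 2**k messages
    (k bits) maps to a vector in which every bit is repeated r times,
    so the block length is n = k * r."""
    codebook = {}
    for msg in product([0, 1], repeat=k):
        codeword = tuple(b for b in msg for _ in range(r))
        codebook[msg] = codeword
    return codebook

cb = repetition_codebook(2, 3)
print(len(cb))        # K = 2**2 = 4 vectors
print(cb[(1, 0)])     # (1, 1, 1, 0, 0, 0)
```

The repeated bits are the redundancy that lets the receiver detect and correct isolated bit flips.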
Employing multiple antennas in a mobile terminal to achieve transmit diversity or spatial diversity, known as multiple-input and multiple-output (MIMO), has become a promising solution to combat fading channels and potentially provide very high data rates. Recently, cooperative communication has drawn increasing interest in the wireless communication area due to user cooperation diversity or cooperative diversity gain, which is another form of spatial diversity created by using a collection of distributed antennas from multiple terminals in a network. User cooperation is a framework in which two users jointly transmit their signals, in coded cooperation, using both of their antennas. As shown in
Two coding design methods, multiplexed coding and superposition coding, perform very well in theory. In practice, however, it is difficult to build codes for these schemes, or to decode them, at the theoretically described performance; both schemes are very difficult to implement with practical codes.
Therefore, there remains a need to design a coding method which is easier to implement, but approaches the accuracy and rate of multiplexed coding.
In accordance with an embodiment of the invention, a method is provided for decoding a combination of a first message and a second message that were encoded using a generating matrix of a systematic linear block code. In one embodiment, the combination of the first message and the second message is decoded using a parity check matrix.
In another embodiment where the second message is known, the first message is decoded using a first component code parity check matrix.
In another embodiment where the first message is known, the second message is decoded using a second component code parity check matrix.
The parity check matrix can be derived from the generating matrix, and the first message or the second message can be decoded using the first or second component code parity check matrix, respectively.
The multiplexed component codes may be derived by generating a generator matrix for a multiplexed code, and obtaining the component codes, with their generator matrices and corresponding parity check matrices, from that generator matrix.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
Low-density parity check (LDPC) codes are a class of linear block codes that often approach the capacity of conventional single user communication channels. The name comes from the characteristic of their parity-check matrix, which contains only a few 1's in comparison to the number of 0's. LDPC codes may use linear-time-complexity algorithms for decoding.
In more detail, LDPC codes are linear codes obtained from sparse bipartite graphs.
Thus, a binary LDPC code is a linear block code with a sparse binary parity-check matrix. This m×n parity check matrix can be represented by a bipartite graph (e.g., as shown in
An LDPC code ensemble is characterized by its variable and check node degree distributions (or profiles) λ = [λ2 . . . λdv] and ρ = [ρ2 . . . ρdc], where λi (respectively ρi) denotes the fraction of edges connected to variable (respectively check) nodes of degree i. The profiles are commonly expressed as the polynomials λ(x) = Σi≥2 λi x^(i−1) and ρ(x) = Σi≥2 ρi x^(i−1).
The design rate of an ensemble can be given in terms of λ(x) and ρ(x) by

R = 1 − ( ∫0^1 ρ(x) dx ) / ( ∫0^1 λ(x) dx ).
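Using ∫0^1 x^(i−1) dx = 1/i, the design rate can be evaluated directly from the edge-degree fractions. The sketch below uses assumed toy profiles for the standard regular (3, 6) ensemble, not profiles from this disclosure:

```python
def design_rate(lam, rho):
    """Design rate R = 1 - (integral of rho) / (integral of lam), where
    lam[i] and rho[i] are the fractions of edges attached to degree-i
    variable and check nodes (indices 0 and 1 are unused).
    Since the integral of x**(i-1) over [0, 1] is 1/i, each integral
    is the sum of coeff_i / i."""
    int_lam = sum(c / i for i, c in enumerate(lam) if i >= 1)
    int_rho = sum(c / i for i, c in enumerate(rho) if i >= 1)
    return 1 - int_rho / int_lam

# Regular (3, 6) ensemble: lambda(x) = x^2, rho(x) = x^5.
lam = [0, 0, 0, 1.0]               # all edges meet degree-3 variable nodes
rho = [0, 0, 0, 0, 0, 0, 1.0]      # all edges meet degree-6 check nodes
print(design_rate(lam, rho))       # 0.5
```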
In a two-user cooperative coding system, as shown in
It is very difficult to build a fully multiplexed code that is able to decode two messages at or near the theoretical capacity rate. To build such a capacity-approaching fully multiplexed code, a code must be designed so that both the multiplexed codebook and the codebook subsets, or component codes, approach performance capacity. Although this might be possible in theory, it is very difficult to implement practically. LDPC codes, one class of random-like capacity-approaching linear block codes, may be used to build a fully multiplexed code. For example, the fully multiplexed code can be built as

x = uG = u1 G1 + u2 G2,

where G is the generator matrix for the multiplexed code, G1 and G2 are the generator matrices of the component codes for messages u1 and u2, respectively, and

G = [G1^T G2^T]^T, u = [u1 u2].
To design a multiplexed code that approaches full capacity, its parity check matrix H is designed so that the multiplexed code (with the generator matrix G) is an (n, n(R1+R2)) code, and each Hi (corresponding to Gi) defines an (n, nRi) code. This is very difficult to achieve practically.
Since it is difficult to practically implement a fully multiplexed code, the fully multiplexed code may be approximated. This approximation code is called a “partially multiplexed code”.
The partially multiplexed code is formed from a binary systematic code. In a binary systematic (n, k) code, k = k1 + k2 = n(R1 + R2), with the parity check matrix given by

H = [P_(n−k)×k I_(n−k)] = [P1 P2 I],

where P = [P1 P2] with the dimensions of P1 and P2 given by (n−k)×k1 and (n−k)×k2, respectively. The generator matrix is then given by

G = [I_k P^T].

G is used to construct the component codes in the following manner. Two component codes are constructed by separating the parity matrix P = [P1 P2]. After the separation, this yields an (n−k2, k1) code with the generator

G1 = [I_k1 P1^T],

and an (n−k1, k2) code with the generator

G2 = [I_k2 P2^T].

The parity check matrices are then given by

H1 = [P1 I_(n−k)], H2 = [P2 I_(n−k)].
With this method of partially multiplexed coding, a message of length k = k1 + k2 (which can be treated as a concatenation of two messages with lengths k1 and k2) can be decoded with the parity check matrix H at the full rate k/n = R1 + R2. Once one message is known, the receiver is then able to decode the message u1 of length k1 at a rate of k1/(n−k2), or decode message u2 at a rate of k2/(n−k1). Because

k1/(n−k2) > k1/n = R1 and k2/(n−k1) > k2/n = R2,

the rate of decoding the component codes cannot achieve the rate of the component codes in the fully multiplexed code.
An illustrative case is the single parity check (SPC) code, shown in
So in a cooperative system as shown in
There is a performance loss for the partially multiplexed code, since the effective rate of its component code is larger than that in the fully multiplexed code. However, when R1 + R2 is not high, the performance loss will be small. Assuming R1 = R2 = R, the total rate for the multiplexed code is 2R. For fully multiplexed codes, the rate for the component code is R. The effective rate of the component code in the partially multiplexed code is then given by

k1/(n−k2) = nR/(n−nR) = R/(1−R).
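The arithmetic in that last step can be spelled out with a short check; the values below are illustrative only:

```python
def effective_component_rate(R):
    """Effective rate of a component code in the partially multiplexed code,
    assuming R1 = R2 = R, so that k1 = k2 = n*R and
    k1 / (n - k2) = n*R / (n - n*R) = R / (1 - R)."""
    return R / (1 - R)

# The fully multiplexed component rate would remain R; partial multiplexing
# raises it, and the gap shrinks as R gets small.
for R in (0.1, 0.25, 0.45):
    print(R, effective_component_rate(R))
```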
A better performing partially multiplexed code may be built using a systematic Irregular Repeat Accumulate (IRA) code. An IRA code can be defined by a (n−k)×n parity-check matrix of the form H = [PA PB], where PA is a sparse (n−k)×k matrix and PB is the (n−k)×(n−k) dual-diagonal matrix

PB =
[ 1             ]
[ 1 1           ]
[   1 1         ]
[      .  .     ]
[         1 1   ],

where PB^−1 is the binary inverse of PB and can be realized by a differential encoder. Furthermore, the generator matrix is given by G = [I PA^T (PB^−1)^T], where I is a k×k identity matrix and ^T denotes matrix transpose.
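Because PB is dual-diagonal, solving PB p = PA u (mod 2) for the parity bits p is a running XOR, i.e., exactly the differential encoder (accumulator). A minimal encoding sketch follows, with assumed toy dimensions and a random PA; none of these values come from the disclosure:

```python
import numpy as np

def ira_encode(u, PA):
    """Systematic IRA encoding sketch.  With H = [PA PB] and PB the
    dual-diagonal accumulator matrix (ones on the main and first lower
    diagonals), the parity bits solve PB p = PA u (mod 2), which an
    accumulator (differential encoder) computes as a running XOR."""
    s = PA @ u % 2           # intermediate bits PA u (mod 2)
    p = np.cumsum(s) % 2     # running XOR = multiplication by PB^(-1)
    return np.concatenate([u, p])

# Assumed toy parameters for illustration.
rng = np.random.default_rng(1)
k, m = 6, 4
PA = rng.integers(0, 2, size=(m, k))
PB = np.eye(m, dtype=int) + np.eye(m, k=-1, dtype=int)  # dual-diagonal
u = rng.integers(0, 2, size=k)

x = ira_encode(u, PA)
H = np.hstack([PA, PB])
print((H @ x % 2).any())   # False: x is a valid codeword
```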
The multiplexed IRA code is built starting with the multiplexed code parity check matrix Hmp given by Hmp = [PA PB], where Hmp is m×n, m = n−k1−k2, PA is m×(k1+k2), and PB is m×m.
Then the generator matrix of the multiplexed code is written as

Gmp = [I_(k1+k2) PA^T (PB^−1)^T].
Then, PA is separated into two parts, i.e., PA = [PA,1 PA,2], where PA,1 is m×k1 and PA,2 is m×k2. The component codes may be obtained with the generator matrices given by

G1 = [I_k1 PA,1^T (PB^−1)^T], G2 = [I_k2 PA,2^T (PB^−1)^T],

with the corresponding parity check matrices given by

H1 = [PA,1 PB], H2 = [PA,2 PB].
Since PA is sparse, the matrices PA,1 and PA,2 split from PA are also sparse, as are the component code matrices H1 and H2. Therefore, sum-product iterative decoding can be employed for the component codes, so that low decoding complexity is achieved. The resulting parity check matrices also represent systematic IRA codes, so low-complexity encoding for the component codes is likewise achieved. Since the submatrices PA,1 and PA,2 in H1 and H2 are split from PA, they share a similar degree distribution with PA.
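The split construction can likewise be verified numerically. Over GF(2), the inverse of the dual-diagonal PB is the all-ones lower-triangular matrix, which the sketch below uses directly; the dimensions are assumed toy values for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
k1, k2, m = 4, 3, 5
PA = rng.integers(0, 2, size=(m, k1 + k2))
PB = np.eye(m, dtype=int) + np.eye(m, k=-1, dtype=int)  # dual-diagonal part
# GF(2) inverse of the accumulator: all-ones lower-triangular matrix.
PBinv = np.tril(np.ones((m, m), dtype=int))

PA1, PA2 = PA[:, :k1], PA[:, k1:]

# Component generator and parity check matrices from the split of PA.
G1 = np.hstack([np.eye(k1, dtype=int), (PA1.T @ PBinv.T) % 2])
G2 = np.hstack([np.eye(k2, dtype=int), (PA2.T @ PBinv.T) % 2])
H1 = np.hstack([PA1, PB])
H2 = np.hstack([PA2, PB])

print((H1 @ G1.T % 2).any(), (H2 @ G2.T % 2).any())   # False False
```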
Computers executing coding systems are well known in the art, and may be implemented, for example, using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is shown in
The computer program instructions may be stored in a storage device 512 (e.g., magnetic disk) and loaded into memory 510 when execution of the computer program instructions is desired. Thus, the coding application will be defined by computer program instructions stored in memory 510 and/or storage 512, and the coding application will be controlled by processor 504 executing the computer program instructions. Computer 502 also includes one or more network interfaces 506 for communicating with other devices via a network. Computer 502 also includes input/output 508, which represents devices that allow for user interaction with the computer 502 (e.g., display, keyboard, mouse, speakers, buttons, etc.).
One skilled in the art will recognize that an implementation of an actual computer will contain other elements as well, and that the above is a high-level representation of some of the components of such a computer for illustrative purposes.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 60/743,265 filed Feb. 9, 2006, which is incorporated herein by reference.
References Cited (U.S. Patent Documents): U.S. Pat. No. 7,530,002 B2, Lee et al., May 2009.

Prior Publication Data: US 2007/0186136 A1, Aug. 2007, US.

Related U.S. Application Data: U.S. Provisional Application No. 60/743,265, Feb. 2006, US.