Cayley-encodation of unitary matrices for differential communication

Information

  • Patent Grant
  • 7170954
  • Patent Number
    7,170,954
  • Date Filed
    Wednesday, February 20, 2002
  • Date Issued
    Tuesday, January 30, 2007
Abstract
Differential unitary space-time (DUST) communication is a technique for communicating via one or more antennas by encoding the transmitted data differentially using unitary matrices at the transmitter, and decoding differentially at the receiver without knowledge of the channel coefficients. Since channel knowledge is not required at the receiver, DUST is ideal for use on wireless links where channel tracking is undesirable or infeasible. Disclosed is a class of Cayley codes for DUST communication that can produce sets of unitary matrices for any number of antennas, with efficient encoding and decoding at any rate. The codes are named for their generation via the Cayley transform, which maps the highly nonlinear Stiefel manifold of unitary matrices to the linear space of skew-Hermitian matrices. The Cayley codes can be decoded using either successive nulling/cancelling or sphere decoding.
Description
FIELD OF THE INVENTION

The present invention is directed toward a method of communication in which the input data is encoded using unitary matrices, transmitted via one or more antennas and decoded differentially at the receiver without knowing the channel characteristics at the receiver, and more particularly to technology for producing a set of unitary matrices for such encoding/decoding using Cayley codes.


BACKGROUND OF THE INVENTION

Although reliable mobile wireless transmission of video, data, and speech at high rates to many users will be an important part of future telecommunications systems, there is considerable uncertainty as to what technologies will achieve this goal. One way to get high rates on a scattering-rich wireless channel is to use multiple transmit and/or receive antennas. Many of the practical schemes that achieve these high rates require the propagation environment or channel to be known to the receiver.


In practice, knowledge of the channel is often obtained via training: known signals are periodically transmitted for the receiver to learn the channel, and the channel parameters are tracked (using decision feedback or automatic-gain-control (AGC)) in between the transmission of the training signals. However, it is not always feasible or advantageous to use training-based schemes, especially when many antennas are used or either end of the link is moving so fast that the channel is changing very rapidly.


Hence, there is much interest in space-time transmission schemes that do not require either the transmitter or receiver to know the channel. A standard method used to combat fading in single-antenna wireless channels is differential phase-shift keying (DPSK). In DPSK, the transmitted signals are unit-modulus (typically chosen from an m-PSK set), information is encoded differentially on the phase of the transmitted signal, and as long as the phase of the channel remains approximately constant over two consecutive channel uses, the receiver can decode the data without having to know the channel coefficient.


Differential techniques for multi-antenna communications have been proposed, where, as long as the channel is approximately constant in consecutive uses, the receiver can decode the data without having to know the channel. The general differential techniques of the Background Art have good performance when the set of matrices used for transmission forms a group under matrix multiplication, which also leads to simple decoding rules.


But the number of groups available is rather limited, and the groups do not lend themselves to very high rates (such as tens of bits/sec/Hz) with many antennas. One of the Background Art techniques is based on orthogonal designs, and therefore has simple encoding/decoding and works well when there are two transmit and one receive antenna, but suffers otherwise from performance penalties at very high rates.


Part of the difficulty of designing large sets of unitary matrices is the lack of simple parameterizations of these matrices. To keep the transmitter and receiver complexity low in multiple antenna systems, linear processing is often preferred, whereas unitary matrices are often highly nonlinear in their parameters.


SUMMARY OF THE INVENTION

The invention, in part, provides a partial and yet robust solution to the general design problem for differential transmission, for rate R (in bits/channel use) with M transmit antennas for an unknown channel.


The invention, also in part, provides Cayley Differential (“CD”) codes that break the data stream into substreams, but instead of transmitting these substreams directly, these substreams are used to parameterize the unitary matrices that are transmitted. The codes work with any number of transmit and receive antennas and at any rate. The Cayley code advantages include that they:

    • 1. Are very simple to encode;
    • 2. Can be used for any number of transmit and receive antennas;
    • 3. Can be decoded in a variety of ways including simple polynomial-time linear-algebraic techniques such as: (a) Successive nulling and canceling and (b) Sphere decoding;
    • 4. Are designed with the numbers of both the transmit and receive antennas in mind; and
    • 5. Satisfy a probabilistic criterion, namely maximization of an expected distance between matrix pairs.


Additional features and advantages of the invention will be more fully apparent from the following detailed description of the preferred embodiments, the appended claims and the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a plot of the performance of an example CD code according to an embodiment of the invention for the example circumstances of M=2 transmit and N=2 receive antennas with rate R=6.



FIG. 2 is a plot of block error performance of an example CD code according to an embodiment of the invention for the example circumstances of M=4 transmit and N=1 receive antennas with rate R=4 compared with a nongroup code.



FIG. 3 is a plot of the performance of an example CD code according to an embodiment of the invention for the example circumstances of M=4 transmit and N=2 receive antennas with rate R=4.



FIG. 4 is a plot of the performance of an example CD code according to an embodiment of the invention for the example circumstances of M=4 transmit and N=4 receive antennas with rate R=8.



FIG. 5 is a plot of the performance of an example CD code according to an embodiment of the invention for the example circumstances of M=8 transmit and N=12 receive antennas with rate R=16.



FIG. 6 is a flow chart of steps included in a transmitting method according to an embodiment of the invention.


And FIG. 7 is a flow chart of steps included in a receiving method according to an embodiment of the invention.





The accompanying drawings are intended to depict example embodiments of the invention and should not be interpreted to limit its scope; they are not to be considered as drawn to scale unless explicitly noted.


DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Using Cayley Differential (“CD”) codes for differential unitary space-time (DUST) communication includes two aspects. The first is transmitting/receiving data with the CD codes, which necessarily involves calculating the CD codes. The second aspect is designing how the actual CD codes will be calculated. It should be noted that the first aspect assumes that the second aspect has been performed at least once.


The first aspect of Cayley-encoded DUST communication itself includes two aspects, namely (A) Cayley-encoding and transmitting, and (B) receiving and Cayley-decoding.


A few statements about notation will be made. The notation α1, . . . , αQ indicates a set of real-valued scalars, each member of which is referred to as a parameter αq, there being a total of Q such parameters. Another way to refer to the set α1, . . . , αQ is via the notation {αq}. Each αq takes its value from a set denoted A (in the original typography this is a script letter "A", which distinguishes it from a Hermitian basis matrix, also denoted "A", discussed below). Mention will also be made of a set of matrices referred to as {A}, as well as a subset having Q elements, referred to as {Aq}.


As depicted in FIG. 6, transmitting R*M bits of Cayley-encoded data (starting at step 602) can include: (step 604) breaking the R*M total bits of data to be transmitted into Q chunks of R*M/Q bits each; (step 606) mapping each of the Q chunks of R*M/Q bits to one of the scalar values found in the set A, where A has r=2^(R*M/Q) elements and has been previously determined (A and its role will be discussed below), i.e., assigning a specific value from A to each chunk, respectively, in order to get α1, . . . , αQ (also referred to as "{αq}"; below {αq} and its role will be discussed); (step 608) calculating the specific CD code according to (Eq. No. 9, introduced below) using {αq} and {Aq} (where {Aq} has been previously determined; see the discussion below of the second aspect); (step 610) calculating the matrix to be transmitted (the "present matrix" representing the R*M bits) based on the specific CD code and the previously-transmitted matrix according to the fundamental transmission equation (Eq. No. 4, introduced below); (step 612) modulating the present matrix on a carrier to form a carrier-level signal; and (step 616) transmitting the carrier-level signal (with flow ending at step 618). A sketch of this encoding flow appears below.
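The following minimal sketch (in Python with NumPy, which the patent itself does not use) illustrates the transmit flow just described, assuming the set A, the Hermitian basis matrices {Aq}, and the previously-transmitted matrix are already available. The function name, the placeholder basis matrices, and the bit-to-index mapping are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of the FIG. 6 transmit flow (steps 604-616).
import numpy as np

def cayley_encode_block(bits, A_set, A_basis, S_prev):
    """Map R*M bits to a unitary matrix and differentially encode it."""
    Q = len(A_basis)
    M = A_basis[0].shape[0]
    bits_per_chunk = len(bits) // Q                   # R*M/Q bits per chunk (step 604)
    alphas = []
    for q in range(Q):                                # step 606: chunk -> element of A
        chunk = bits[q*bits_per_chunk:(q+1)*bits_per_chunk]
        idx = int("".join(map(str, chunk)), 2)        # hypothetical bit-to-index map
        alphas.append(A_set[idx])
    A = sum(a*Aq for a, Aq in zip(alphas, A_basis))   # step 608, Eq. (9)
    I = np.eye(M)
    V = np.linalg.solve(I + 1j*A, I - 1j*A)           # Cayley transform, Eq. (1)
    S = V @ S_prev                                    # step 610, Eq. (4)
    return S                                          # steps 612/616: modulate and transmit S

# Example: M=2, Q=2, r=2 (so A = {-1, +1}), one block of R*M = 2 bits.
A_basis = [np.array([[1, 0], [0, -1]], dtype=complex),
           np.array([[0, 1], [1, 0]], dtype=complex)]  # placeholder Hermitian bases
S0 = np.eye(2, dtype=complex)
S1 = cayley_encode_block([1, 0], np.array([-1.0, 1.0]), A_basis, S0)
```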


As depicted in FIG. 7, receiving R*M bits of Cayley-encoded data (starting at step 702) can include: (step 704) receiving a carrier-level signal; (step 706) demodulating the carrier-level signal to form a matrix (representing R*M bits); (step 708) searching the set known as A to find Q specific scalar values, namely α1, . . . , αQ (or {αq}), that minimize (Eq. No. 12, introduced below) or (Eq. No. 13, introduced below, which can be solved more quickly albeit less accurately than (Eq. No. 12)); (step 710) mapping each element of {αq} into its corresponding R*M/Q bits using a predetermined mapping relation; and (step 712) reassembling Q chunks of R*M/Q bits to produce the R*M bits of data that were transmitted (with flow ending at step 714).


The second aspect of Cayley-encoded DUST communication, namely the design of the CD codes, is to determine the non-data parameters of the CD code (Eq. No. 9, introduced below). This includes: determining the set of Aq (“{Aq}”); and the A from which are selected α1, . . . , αQ (or {αq}). The {Aq} is independent of the data to be transmitted but is dependent upon the transmitter/receiver hardware. Both {Aq} and A will be known to the transmitter and to the receiver.


Determining A can include: (1) substituting the number of transmitting antennas, M, and the number of receiving antennas, N, into (Eq. No. 15, introduced below) to get the number of degrees of freedom, Q; (2) determining how many elements, r, will be in A according to the equation r=2^(RM/Q) (a rewriting of (Eq. No. 22, introduced below)); (3) establishing the set of θ ("{θ}") according to the equation {θ}={π/r, 3π/r, 5π/r, . . . , (2r−1)π/r}; and (4) establishing A by substituting {θ} into (Eq. No. 17, introduced below). A sketch of these design steps appears below.
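A minimal sketch of these design steps, assuming the target rate R and the antenna counts M and N are given; it simply applies (Eq. Nos. 15, 17, and 22) as described, and the names are illustrative only.

```python
# Sketch: determine Q, r, and the set A from M, N, R.
import numpy as np

def design_parameters(M, N, R):
    Q = min(N, M) * max(2*M - N, M)           # Eq. (15): degrees of freedom
    r = int(round(2 ** (R * M / Q)))          # r = 2^(RM/Q), a rewriting of Eq. (22)
    thetas = np.pi * (2*np.arange(r) + 1) / r # {theta} = {pi/r, 3pi/r, ..., (2r-1)pi/r}
    A_set = -np.tan(thetas / 2)               # Eq. (17): alpha = -tan(theta/2)
    return Q, r, A_set

# Example from Section 8.3: M=2, N=2, R=6 gives Q=4 and r=8.
Q, r, A_set = design_parameters(2, 2, 6)
```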


The {Aq} can be determined by solving (Eq. No. 18, introduced below) using the A determined above, e.g., via an iterative technique such as the gradient-ascent method. An iterative technique will find a set {Aq} that causes (Eq. No. 18) to yield a maximum value (see (Eq. No. 19, introduced below)). But an iterative technique will not necessarily find the optimal {Aq} among all possible {A}; rather, it will find a local optimum for the particular iterative technique being used.


Cayley codes, according to an embodiment of the invention, are briefly summarized as follows. To generate a unitary matrix V parameterized by the transmitted data, we break the data stream into Q substreams (we specify Q later) and use these substreams to choose α1, . . . , αQ, each from the set A with r real-valued elements (we also have more to say about this set later). A Cayley code of rate R, where R=(Q/M)log₂r, obeys the following equation (known as the Cayley transform)

V = (I + iA)^{−1}(I − iA)  (1)

where







A = Σ_{q=1}^{Q} α_q A_q,





I is the identity matrix of the relevant dimension and A1, . . . , AQ are pre-selected M×M complex Hermitian matrices.


The matrix V, as given by (1), is referred to as the Cayley transform of iA and is unitary.


The Cayley code is completely specified by A1, . . . , AQ and A. Each individual codeword is determined by the scalars α1, . . . , αQ.


The performance of a Cayley code according to an embodiment of the invention depends on the choices of the number of substreams Q, the Hermitian basis matrices {Aq}, and the set A from which each αq is chosen. Roughly speaking, Q is chosen so as to maximize the number of independent degrees of freedom observed at the output of the channel. To choose the {Aq}, a coding criterion can be optimized (to be discussed below) that resembles |det(Vl−Vl′) | but is more suitable for high rates and is amenable to analysis. The optimization need be done only once, during code design, and it is amenable to gradient-based methods. The Cayley transform (Eq. No. 1) is powerful because it generates the unitary matrix V from the Hermitian matrix A, and A is linear in the data α1, . . . , αQ.


For the purposes of the present application, it suffices to assume that the channel has a coherence interval (defined to be the number of samples at the sampling rate during which the channel is approximately constant) that is at least twice the number of transmit antennas.


1. Review of Differential Unitary Space-Time (“DUST”) Modulation


The following is a brief summary of the differential, unitary matrix signaling scheme disclosed in the copending application, Ser. No. 09/356,387, which has been incorporated by reference above in its entirety.


In a narrow-band, flat-fading, multi-antenna communication system with M transmit and N receive antennas, the transmitted and received signals are related by

x = √p sH + v,  (2)


where x ∈ C^{1×N} denotes the vector of complex received signals during any given channel use, p represents the signal-to-noise ratio ("SNR") at the receiver, s ∈ C^{1×M} denotes the vector of complex transmitted signals, H ∈ C^{M×N} denotes the channel matrix, and the additive noise v ∈ C^{1×N} is assumed to have independent CN(0, 1) (zero-mean, unit-variance, complex-Gaussian) entries that are temporally white.


The channel is used in blocks, with M channel uses per block. We can then aggregate the transmit row vectors s over each block of M channel uses into an M×M matrix S_τ, where τ=0, 1, . . . indexes the blocks. In this setting, the mth column of S_τ denotes what is transmitted on the mth antenna across the M time units within block τ, and the mth row denotes what is transmitted during the mth time unit of block τ by each of the M antennas. If we assume that the channel is constant over the M channel uses, i.e., over each block τ, the input and output row vectors are related through a common channel, so that we may represent the received matrix X_τ as a function of the transmitted matrix S_τ according to the following equation:

X_τ = √p S_τ H + W_τ,  (3)

where W_τ and H are M×N matrices of independent CN(0,1) random variables, X_τ is the M×N received complex signal matrix and S_τ is the M×M transmitted complex signal matrix.


In differential unitary space-time modulation, the transmitted matrix at block τ satisfies the following so-called fundamental transmission equation

S_τ = V_{Z_τ} S_{τ−1}  (4)

where Z_τ ∈ {0, . . . , L−1} is the data to be transmitted in the form of the code matrix V_{Z_τ} (assume S_0=I). Since the channel is used a total of M times per block, the corresponding transmission rate is R=(1/M)log₂L. If we further assume that the propagation environment is approximately constant for 2M consecutive channel uses, then we may write

X_τ = √p S_τ H + W_τ = √p V_{Z_τ} S_{τ−1} H + W_τ = V_{Z_τ}(X_{τ−1} − W_{τ−1}) + W_τ  (5)

which leads us to the fundamental differential receiver equation










X_τ = V_{Z_τ} X_{τ−1} + W_τ − V_{Z_τ} W_{τ−1} = V_{Z_τ} X_{τ−1} + W′_τ,  (6)

where W′_τ ≡ W_τ − V_{Z_τ} W_{τ−1}.
Note that the channel matrix H does not appear in Eq. No. (6); it was eliminated by the substitution in Eq. No. (5). This implies that, as long as the channel is approximately constant for 2M channel uses, differential transmission permits decoding without knowing the fading matrix H.
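The following small simulation sketch (illustrative only; the toy codebook and parameter values are assumptions, not the patent's) exercises Eq. Nos. (3)-(6) and shows a receiver selecting the transmitted code matrix without ever using H.

```python
# Differential detection without channel knowledge: a toy round trip of Eqs. (3)-(6).
import numpy as np

rng = np.random.default_rng(0)
M, N, p = 2, 2, 100.0                                   # 2 Tx, 2 Rx, SNR rho = 100
codebook = [np.diag([1j**l, 1j**l]) for l in range(4)]  # small illustrative unitary set (L=4)

H = (rng.normal(size=(M, N)) + 1j*rng.normal(size=(M, N))) / np.sqrt(2)
def noise():
    return (rng.normal(size=(M, N)) + 1j*rng.normal(size=(M, N))) / np.sqrt(2)

S_prev = np.eye(M, dtype=complex)
X_prev = np.sqrt(p) * S_prev @ H + noise()              # Eq. (3), block tau-1
z = 3                                                   # transmitted index
S_cur = codebook[z] @ S_prev                            # Eq. (4)
X_cur = np.sqrt(p) * S_cur @ H + noise()                # Eq. (3), block tau

# Differential detection per Eq. (6), using a simple (non-whitened) metric:
z_hat = min(range(4), key=lambda l: np.linalg.norm(X_cur - codebook[l] @ X_prev))
print("sent", z, "decoded", z_hat)                      # agrees with high probability at this SNR
```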


From Eq. No. (5) it is apparent that the matrices V_l should be unitary, otherwise the product S_τ = V_{Z_τ} V_{Z_{τ−1}} · · · V_{Z_1} can go to zero, infinity, or both (in different spatial and temporal directions).


In general, the number of unitary M×M matrices in the set {V_l} can be quite large. For example, if rate R=8 is desired with M=4 transmit antennas (even larger rates are quite possible, as shown later), then the number of matrices is L=2^{RM}=2^{32}≈4×10^9, and the pairwise error between any two signals can be very small. This huge number of signals calls into question the feasibility of computing a figure of merit, ζ, where






ζ = (1/2) min_{l≠l′} |det(V_l − V_{l′})|^{1/M},
and lessens its usefulness as a performance criterion. We therefore consider a different, though related, criterion (to be discussed below).


The large number of signals imposes a large computational load when decoding via an exhaustive search. For high rates, it is possible to construct a random set with some structure. But, again, we have no efficient decoding method. To design sets that are huge, effective, and yet still simple, so that they can be decoded in real-time, it is explained below how the Cayley transform can be used to parameterize unitary matrices.


2. The Stiefel Manifold


The space of M×M complex unitary matrices is referred to as the Stiefel manifold. This is the space from which will be selected the subset of unitary matrices to which data (to be transmitted) will be mapped and from which received data will be mapped back. The Stiefel manifold is highly nonlinear and nonconvex, and can be parameterized by M² real free parameters.


3. The Cayley Transform


The Cayley transform of a complex M×M matrix Y is defined to be

(I_M + Y)^{−1}(I_M − Y),  (7)

where IM is the M×M identity matrix and Y is assumed to have no eigenvalues at −1 so that the inverse exists (hereafter the M subscript on I will be dropped). Note that I−Y, I+Y, (I−Y)−1 and (I+Y)−1 all commute so there are other equivalent ways to write this transform.


Compared with other parameterizations of unitary matrices, the parameterization with the Cayley transform is not “too nonlinear” (to be discussed below) and it is one-to-one and easily invertible. The Cayley transform also maps the complicated Stiefel manifold of unitary matrices to the space of skew-Hermitian matrices. Skew-Hermitian matrices are easy to characterize since they form a linear vector space over the real numbers (the real linear combination of any number of skew-Hermitian matrices is skew-Hermitian). And this handy feature will be used below for easy encoding and decoding.
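A quick numerical check of these two properties, written as an illustrative sketch rather than anything prescribed by the patent:

```python
# Check: the Cayley transform of iA, with A Hermitian (so iA is skew-Hermitian),
# is unitary, and iA itself is skew-Hermitian.
import numpy as np

rng = np.random.default_rng(1)
M = 3
B = rng.normal(size=(M, M)) + 1j*rng.normal(size=(M, M))
A = (B + B.conj().T) / 2                                   # a random Hermitian matrix
V = np.linalg.solve(np.eye(M) + 1j*A, np.eye(M) - 1j*A)    # V = (I+iA)^{-1}(I-iA)

print(np.allclose(V.conj().T @ V, np.eye(M)))  # True: V is unitary
print(np.allclose((1j*A).conj().T, -(1j*A)))   # True: iA is skew-Hermitian
```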


It should be recognized that the Cayley transform is the matrix generalization of the scalar transform






v = (1 − ia)/(1 + ia)
that maps the real line to the unit circle. This map is also called a bilinear map and is often used in complex analysis. The Cayley transform (Eq. No. 7) maps matrices with eigenvalues inside the unit circle to matrices with eigenvalues in the right-half-plane.


An important observation to be made is that of Full Diversity. A set of unitary matrices {V_0, . . . , V_L} is fully-diverse, i.e., |det(V_l − V_{l′})| is nonzero for all l ≠ l′, if and only if the set of its skew-Hermitian Cayley transforms {Y_0, . . . , Y_L} is fully-diverse. Moreover, we have

V_l − V_{l′} = 2(I + Y_l)^{−1}[Y_{l′} − Y_l](I + Y_{l′})^{−1}.  (8)


Thus, to design a fully-diverse set of unitary matrices into which will be mapped the data that is to be transmitted, we can design a fully-diverse set of skew-Hermitian matrices and then employ the Cayley transform. This design technique is used in an example below.


4. Cayley Differential Codes


Because the Cayley transform maps the nonlinear Stiefel manifold to the linear space of skew-Hermitian matrices (and vice-versa) it is convenient to do two things: (1) encode data onto a skew-Hermitian matrix; and then (2) apply the Cayley transform to get a unitary matrix. It is most straightforward to encode the data linearly.


Again, a Cayley Differential ("CD") code, A, parameterizes a unitary matrix V that satisfies the Cayley transform

V = (I + iA)^{−1}(I − iA),  (again, 1)

where the CD code, i.e., matrix A, is Hermitian and is given by










A = Σ_{q=1}^{Q} α_q A_q,  (9)
and where α1, . . . , αQ are real scalars (chosen from a set A with r possible values) (in other words, mapped based upon the data to be transmitted) and where Aq are fixed M×M complex Hermitian matrices.


The code, i.e., the set of unitary matrices {V}, is completely determined by the set of matrices A1, . . . , AQ, which can be thought of as Hermitian basis matrices. Each individual codeword, i.e., each unitary matrix V, on the other hand, is determined by our choice of the scalars α1, . . . , αQ. Since each αq may take on r possible values (i.e., the set A from which values for αq are taken has r elements), and the code occupies M channel uses, the transmission rate is R=(Q/M)log₂r. Finally, since an arbitrary M×M Hermitian matrix is parameterized by M² real variables, we have the constraint

Q ≤ M²  (10)

Below, as a consequence of the preferred decoding algorithm, a more stringent constraint on Q will be suggested.
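As a trivial illustration of the rate relation just given, the following snippet checks R=(Q/M)log₂r for the example codes that appear later in Sections 8.3 through 8.7 (the tuples below are taken from those examples):

```python
# Rate check R = (Q/M) log2(r) for the example codes discussed in Sections 8.3-8.7.
from math import log2

examples = [  # (M, Q, r, expected R)
    (2, 4, 8, 6),    # Section 8.3
    (4, 16, 2, 4),   # Sections 8.4-8.5
    (4, 16, 4, 8),   # Section 8.6
    (8, 64, 4, 16),  # Section 8.7
]
for M, Q, r, R in examples:
    assert (Q / M) * log2(r) == R
```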


The discussion of how to choose Q and design the Aq's and the set A follows after the discussion of how to decode α1, . . . , αQ at the receiver.


5. Decoding the CD Codes


An important property of the CD codes is the ease with which the receiver may form a system of linear equations in the variables {αq}. To see this, it is useful to substitute the Cayley transform (Eq. No. (1)) into the fundamental receiver equation (Eq. No. (6)),










X_τ = V_{Z_τ} X_{τ−1} + W_τ − V_{Z_τ} W_{τ−1}
    = (I + iA)^{−1}(I − iA) X_{τ−1} + W_τ − (I + iA)^{−1}(I − iA) W_{τ−1},
implying that

(I + iA)X_τ = (I − iA)X_{τ−1} + (I + iA)W_τ − (I − iA)W_{τ−1},

or












X_τ − X_{τ−1} = (1/i) A (X_τ + X_{τ−1}) + (I + iA)W_τ − (I − iA)W_{τ−1},  (11)
which is linear in A. Since A is in turn linear in the data {αq}, (Eq. No. 11) is linear in {αq}.


Consider first the maximum-likelihood estimation of the {αq}. Using (Eq. No. 11) and noting that the additive noise (I+iA)W_τ − (I−iA)W_{τ−1} has independent columns with covariance 2(I+iA)(I−iA)=2(I+A²) shows that the maximum likelihood ("ML") decoder is








α̂_ml = arg min_{{αq}} ‖ (I + iA)^{−1} ( X_τ − X_{τ−1} − (1/i) A (X_τ + X_{τ−1}) ) ‖²;
or, more explicitly,











α̂_ml = arg min_{{αq}} ‖ (I + i Σ_{q=1}^{Q} α_q A_q)^{−1} ( X_τ − X_{τ−1} − (1/i) Σ_{q=1}^{Q} α_q A_q (X_τ + X_{τ−1}) ) ‖².  (12)
This decoder is not quadratic in {αq} and so may be difficult to solve. However, if we ignore the covariance of the additive noise in (Eq. No. 11) and assume that the noise is simply spatially white, then we obtain the linearized ML decoder











α̂_lin = arg min_{{αq}} ‖ X_τ − X_{τ−1} − (1/i) Σ_{q=1}^{Q} α_q A_q (X_τ + X_{τ−1}) ‖²  (13)
We call the decoder “linearized” because the system of equations obtained in solving (Eq. No. 13) for unconstrained {αq} is linear.


Because (Eq. No. 13) is quadratic in {αq}, a simple approximate solution for {αq} chosen from a fixed set can use nulling and canceling. An exact solution without an exhaustive search can use sphere decoding.
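One possible realization of the linearized decoder, shown here only as a sketch: Eq. No. (13) is rewritten as a real least-squares problem in the unknowns {αq}, solved unconstrained, and then quantized entrywise to the set A. This rounding step is a crude stand-in for the nulling/cancelling or sphere-decoding steps named above; the helper name and interfaces are assumptions, not the patent's.

```python
# Sketch: linearized decoding of Eq. (13) by unconstrained least squares + rounding.
import numpy as np

def linearized_decode(X_cur, X_prev, A_basis, A_set):
    """Estimate alpha_1..alpha_Q from two consecutive received blocks."""
    y = (X_cur - X_prev).reshape(-1)                       # left side of Eq. (13)
    # Column q holds vec( (1/i) * A_q * (X_cur + X_prev) ); the residual in Eq. (13)
    # is linear in the real unknowns alpha_q.
    B = np.column_stack([((1/1j) * Aq @ (X_cur + X_prev)).reshape(-1)
                         for Aq in A_basis])
    alpha_ls, *_ = np.linalg.lstsq(
        np.vstack([B.real, B.imag]),
        np.concatenate([y.real, y.imag]), rcond=None)      # real least-squares solve
    # Quantize each unconstrained estimate to the nearest element of A.
    return np.array([A_set[np.argmin(np.abs(A_set - a))] for a in alpha_ls])
```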


6. The Number of Independent Equations


Nulling and canceling explicitly requires that the number of equations be at least as large as the number of unknowns. Sphere decoding does not have this hard constraint, but it benefits from more equations because the computational complexity grows exponentially in the difference between the number of unknowns and equations. If this difference is not very large, sphere decoding is still feasible.


7. Design of the CD Codes


Although we have introduced the CD code










A = Σ_{q=1}^{Q} α_q A_q,  (again, 9)
we have not yet specified Q, nor have we explained how to design the Hermitian basis matrices A1, . . . , AQ or choose the discrete set A from which the αq are selected. We now address these issues.


7.1. Choice of Q


To make the set of all possible CD codes as rich as possible, we should make the number of degrees of freedom Q as large as possible. Though the parameter Q can be any size, it has been found that restricting Q as follows,

Q≦K(2M−K); where K=min(M,N),  (14)

makes it possible to strike a good balance between information content and performance versus computational load. Therefore, as a general practice, we can take Q at its upper limit in (Eq. No. 14),

Q=min(N,M)*max(2M−N,M).  (15)

If sphere decoding is used, we sometimes exceed this limit (yielding more unknowns than equations; see the examples in Section 8), but Q ≤ M² should be obeyed.


We are left with how to design A1, . . . , AQ and how to choose the discrete set A. If the rates being considered are reasonably small (for example, R<4), then the criterion of maximizing |det(V_l − V_{l′})| for all l′ ≠ l is tractable. Recall that any set of matrices {V_l} for which this determinant is nonzero for all l′ ≠ l is said to be fully diverse.


It has been shown above (in the discussion of (Eq. No. 8)) that a set of unitary matrices is fully diverse if and only if the corresponding Cayley-transformed set of skew-Hermitian matrices is fully-diverse. Since









A′ − A = Σ_{q=1}^{Q} A_q (α′_q − α_q),
by considering α and α′ that differ in only one coordinate q, we see that it is necessary (but not sufficient) for A1, . . . , AQ to be nonsingular. Some examples of full diversity for small rates and small number of antennas are shown below.


At high rates, however, it is preferred not to pursue the full-diversity criterion. The reasons include: first, the criterion becomes intractable because of the number of matrices involved; and second, the performance of the set may not be governed so much by its worst-case pairwise |det(Vl−Vl′)|, but rather by how well the matrices are distributed throughout the space of unitary matrices. One reason why group sets do not perform very well at high rates is because they lack the required statistical structure of a good high rate set.


7.2. Choice of A


At high rates, a CD code A = Σ_{q=1}^{Q} α_q A_q should resemble samples from a Cauchy random matrix distribution. We look first at the implications of the example where there is one transmit antenna, M=1. In this case the optimal strategy is standard DPSK.


For M=1, we are limited by (Eq. No. 14) to Q=1, and there is no loss of generality in setting A1=1 to get










v = (1 − iα_1)/(1 + iα_1),  α_1 = −i (1 − v)/(1 + v)  (16)
To get rate R=(Q/M)log₂r with M=Q=1, we need A to have r=2^R points. Standard DPSK puts these points uniformly around the unit circle at angular intervals of 2π/r, with the first point at angle π/r. (The location of the first point does not affect the set's performance in any way, but it helps us avoid a formal singularity in the inversion formula (Eq. No. 16) at v=−1.) For a point at angle θ on the unit circle,









α = −i (1 − e^{iθ})/(1 + e^{iθ}) = −tan(θ/2).  (17)
For example, for r=2 (D-BPSK, i.e., differential binary PSK), we have V={e^{πi/2}, e^{−πi/2}}. Plugging these values into (Eq. No. 17) yields A={−1, 1}. For r=4 (D-QPSK, i.e., differential quadrature PSK), we have A = {−1−√2, 1−√2, −1+√2, 1+√2} = {−2.4142, −0.4142, 0.4142, 2.4142} (where the points are arranged in increasing order). For r=8, A = {−5.0273, −1.4966, −0.6682, −0.1989, 0.1989, 0.6682, 1.4966, 5.0273}.


We see that the points rapidly spread themselves out as r increases, thus reflecting the long tail of the Cauchy distribution (p(a)=1/(π(1+a²))). We denote by A the image of the function (Eq. No. 17) applied to the set θ ∈ {π/r, 3π/r, 5π/r, . . . , (2r−1)π/r}. In the limit as r→∞, the fraction of points in A less than some x is given by the cumulative Cauchy distribution evaluated at x. The set A may thus be regarded as an r-point discretization of a scalar Cauchy random variable.
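The listed sets can be reproduced directly from (Eq. No. 17); the following illustrative snippet does so for r=2, 4, and 8:

```python
# Reproduce the r = 2, 4, 8 sets listed above: A is the image of
# theta in {pi/r, 3pi/r, ...} under alpha = -tan(theta/2), Eq. (17).
import numpy as np

for r in (2, 4, 8):
    thetas = np.pi * (2*np.arange(r) + 1) / r
    print(r, np.sort(-np.tan(thetas / 2)).round(4))
# r=2 -> [-1.  1.]
# r=4 -> [-2.4142 -0.4142  0.4142  2.4142]
# r=8 -> [-5.0273 -1.4966 -0.6682 -0.1989  0.1989  0.6682  1.4966  5.0273]
```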


While this argument tells us how to choose the set A as a function of r when Q=M=1, it does not directly show us how to choose A when M>1. Nevertheless, the {αq} are chosen as discretized scalar Cauchy random variables for any Q and M. To complete the code construction, {A1, . . . , AQ} should be chosen appropriately, and we present a criterion in the next section.


7.3. Choice of {Aq}


We shift our attention away from the final distribution on A and express our design criterion in terms of V. For a given A1, . . . , AQ and A, we define a distance criterion for the resulting set of matrices V to be











ξ(V) = (1/M) E log |det[(V − V′)(V − V′)*]| = (2/M) E log |det(V − V′)|,  (18)
where V′ = (I + iA′)^{−1}(I − iA′), A′ = Σ_{q=1}^{Q} α′_q A_q, and the expectation is over α1, . . . , αQ and α′1, . . . , α′Q chosen uniformly from A such that α ≠ α′. Although ξ(V) is often negative, it is a measure of the expected "distance" between the random matrices V and V′.


To choose the Aq's, we therefore propose the optimization problem











arg max_{A_q = A_q^*, q=1,…,Q} ξ(V).  (19)
Our choices of Aq and A affect the distance criterion through the distribution pV(·) that they impose on the V matrices. It will now be shown that this criterion is maximized when V and V′ are independently chosen isotropic matrices. Such maximization can be done via gradient-ascent techniques, e.g., Optimization: Theory and Practice, Gordon Beveridge and Robert Shechter, McGraw-Hill, 1970.


We interpret (Eq. No. 18) as a measure of the average distance between matrices in the set. If the set A and A1, . . . , AQ are chosen such that V (namely, the Cayley transform (Eq. No. 1)) is approximately isotropically distributed when A is sampled uniformly, then the average distance should be large.
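For concreteness, the expectation in (Eq. No. 18) can be estimated by Monte Carlo sampling, which is how one might evaluate a candidate {Aq} inside a design loop; the sketch below is an assumed implementation, not the patent's.

```python
# Monte-Carlo estimate of the distance criterion xi(V) in Eq. (18).
import numpy as np

def xi_criterion(A_basis, A_set, n_samples=2000, rng=None):
    """Estimate xi(V) for a candidate basis {Aq} and scalar set A."""
    rng = rng or np.random.default_rng(0)
    Q = len(A_basis)
    M = A_basis[0].shape[0]
    I = np.eye(M)
    cayley = lambda A: np.linalg.solve(I + 1j*A, I - 1j*A)
    total, count = 0.0, 0
    while count < n_samples:
        a, ap = rng.choice(A_set, size=Q), rng.choice(A_set, size=Q)
        if np.array_equal(a, ap):              # expectation excludes alpha == alpha'
            continue
        V = cayley(sum(x*Aq for x, Aq in zip(a, A_basis)))
        Vp = cayley(sum(x*Aq for x, Aq in zip(ap, A_basis)))
        total += (2.0 / M) * np.log(abs(np.linalg.det(V - Vp)))
        count += 1
    return total / n_samples
```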


We use (Eq. No. 8), and the fact that matrices commute inside the determinant function, to write the optimization as a function of A and A′,











arg max_{A_q = A_q^*, q=1,…,Q} [ log 4 − (1/M) E log det(I + A²) − (1/M) E log det(I + A′²) + (1/M) E log det(A − A′)² ]  (20)
where A = Σ_{q=1}^{Q} α_q A_q and A′ = Σ_{q=1}^{Q} α′_q A_q. For a set with α1, . . . , αQ and α′1, . . . , α′Q chosen from the r-point set A, we interpret the expectation as uniform over A such that α ≠ α′.


It is occasionally useful, especially when r is large, to replace the discrete set from which αq and αq′ are chosen (A) with independent scalar Cauchy distributions. In this case, since the sum of two independent Cauchy random variables is scaled-Cauchy, our criterion simplifies to











arg max_{A_q = A_q^*, q=1,…,Q} [ 2 log 4 − (2/M) E log det(I + A²) + (1/M) E log det A² ]  (21)
where






A = Σ_{q=1}^{Q} α_q A_q
and the expectation is over α1, . . . αQ chosen independently from a Cauchy distribution.


7.4. CD Code


We now summarize the design method for a CD code with M transmit and N receive antennas, and target rate R.


(i) Choose Q≦min(N,M)*max(2M−N,M). This inequality is a hard limit for decoding by nulling/canceling (e.g., G. D. Golden, G. J. Foschini, R. A. Valenzuela, and P. W. Wolniansky, “Detection algorithm and initial laboratory results using V-BLAST space-time communication architecture,” Electronic Letters, Vol. 35, pp. 14–16, January 1999, or G. J. Foschini, G. D. Golden, R. A. Valenzuela, and P. W. Wolniansky, “Simplified processing for high spectral efficiency wireless communication employing multi-element arrays,” J. Sel. Area Comm., vol. 17, pp. 1841–1852, November 1999) and Q is typically chosen to make it an equality. But the inequality is a soft limit for sphere decoding (e.g., U. Fincke and M. Pohst, “Improved methods for calculating vectors of short length in a lattice, including a complexity analysis,” Mathematics of Computation, vol. 44, pp. 463–471, April 1985, or M. O. Damen, A. Chkeif, and J. -C. Belfiore, “Lattice code decoder for space-time codes,” IEEE Comm. Let., pp. 161–163, May 2000) and we may choose Q as large as M2 even if N<M.


(ii) Since R=(Q/M)log₂r, set r=2^{MR/Q}. Let A be the r-point discretization of the scalar Cauchy distribution obtained as the image of the function (Eq. No. 17) applied to the set {θ} = {π/r, 3π/r, 5π/r, . . . , (2r−1)π/r}.


(iii) Choose a set {Aq} that solves the optimization problem (Eq. No. 20).


The following is to be noted.


A. The solution to (Eq. No. 20) is highly nonunique: simply reordering the {Aq} gives another solution, as does changing the signs of the {Aq}, since the set A is symmetric about the origin.


B. It does not appear that (Eq. No. 20) has a simple closed-form solution for general Q, M, and N, but presented below is a special case where a closed-form solution appears.


C. Numerical methods, e.g. gradient-ascent methods, can be used to solve (Eq. No. 20). The computation of the gradient of the criterion in (Eq. No. 20) is presented in the Appendix, the entirety of which is hereby incorporated by reference. Since the criterion function is nonlinear and nonconcave in the design variables {Aq}, there is no guarantee of obtaining a global maximum. However, since the code design can be performed off-line and need only be performed once, one can use more sophisticated optimization techniques that vary the initial condition, use second-order methods, use simulated annealing, etc. Below it is shown that the CD codes obtained with a gradient search tend to have very good performance.


D. The entries of {Aq} in (Eq. No. 20) are unconstrained other than that the final matrix must be Hermitian. Appealing to symmetry arguments, however, we have found it beneficial to constrain the Frobenius norm of all the matrices in {Aq} to be the same. It is preferred, both for the criterion function (Eq. No. 20) and for the ultimate set performance, that the correct Frobenius norm of the basis matrices be chosen. With the correct Frobenius norm, choosing an initial condition for the {Aq} in the gradient search becomes easier.


The gradient with respect to the Frobenius norm has a simple closed form, which we now give; it can be used to solve for the optimal norm. Let √γ be a multiplicative factor used to multiply every Aq; we solve for the optimal γ>0 by maximizing the criterion function







arg max_{γ>0} ξ(V),
that is









arg max_γ [ 2 log 4 − (2/M) E log det(I + γA²) + (1/M) E log det(γA²) ]
  = arg max_γ [ log γ − (2/M) E log det(I + γA²) ]
The optimal γ therefore sets the gradient of this last equation to zero:









0 = 1/γ − (2/M) E tr[ (I + γA²)^{−1} A² ]
  = 1/γ − (2/M) E tr[ (I + γA²)^{−1} (1/γ)(I + γA² − I) ]
  = (1/γ)( 1 − (2/M) E tr[ I − (I + γA²)^{−1} ] )
  = (1/γ)( −1 + (2/M) E tr (I + γA²)^{−1} )
The equation








−1 + (2/M) E tr (I + γA²)^{−1} = 0
can readily be solved numerically for γ.
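One assumed way to carry out that numerical solution is bisection on γ, with the expectation replaced by a Monte Carlo average over draws of A = Σ_q α_q A_q; the sketch below is illustrative only.

```python
# Sketch: solve  -1 + (2/M) E tr (I + gamma*A^2)^(-1) = 0  for gamma by bisection.
import numpy as np

def optimal_gamma(A_basis, A_set, n_samples=500, lo=1e-4, hi=1e4, rng=None):
    rng = rng or np.random.default_rng(0)
    M = A_basis[0].shape[0]
    Q = len(A_basis)
    As = [sum(a*Aq for a, Aq in zip(rng.choice(A_set, size=Q), A_basis))
          for _ in range(n_samples)]           # Monte-Carlo draws of A
    def g(gamma):                              # left-hand side of the gradient condition
        return -1.0 + (2.0/M) * np.mean(
            [np.trace(np.linalg.inv(np.eye(M) + gamma*(A @ A))).real for A in As])
    for _ in range(100):                       # g(gamma) is decreasing in gamma
        mid = np.sqrt(lo * hi)                 # geometric midpoint suits the wide bracket
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return np.sqrt(lo * hi)
```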


E. The ultimate rate of the code depends on the number of signals sent, namely Q, and the size r of the set A from which α1, . . . , αQ are chosen. The code rate in bits/channel use is









R = (Q/M) log₂ r.  (22)
We generally choose r to be a power of two.


F. The design criterion (Eq. No. 20) depends explicitly on the number of receive antennas N through the choice of Q. Hence, the optimal codes, for a given M, are different for different N.


G. The variable Q is essentially also a design variable. The CD code performance is generally best when Q is chosen as large as possible. For example, a code with a given Q and r is likely to perform better than another code of the same rate that is obtained by halving Q and squaring r. Nevertheless, it is sometimes advantageous to choose a small Q to design a code of a specific rate.


H. If r is chosen a power of two, a standard gray-code assignment of bits to the symbols of the set A may be used.


I. The dispersion matrices {Aq} are Hermitian and, in general, complex.


8. Examples of CD Codes and Performance


Example simulations for the performance of CD codes for various numbers of antennas and rates follow. The channel is assumed to be quasi-static, where the fading matrix between the transmitter and receiver is constant (but unknown) between two successive channel uses. Two error events of interest include block errors, which correspond to errors in decoding the M×M matrices V1, . . . , VL, and bit errors, which correspond to errors in decoding α1, . . . αQ. The bits to be transmitted are mapped to αq with a gray code and therefore a block error will correspond to only a few bit errors. In some examples, we compare the performance of linearized likelihood (sphere decoding) with true maximum likelihood and nulling/cancelling.


8.1. Simple example: M=2, R=1


For M=2 transmit antennas and rate R=1, the set has L=4 elements. In this case, it turns out that no set can have ζ, defined as







ζ = (1/2) min_{l≠l′} |det(V_l − V_{l′})|^{1/M},
larger than ζ = √(2/3) ≈ 0.8165. The optimal set corresponds to a tetrahedron whose corners lie on the surface of a three-dimensional unit sphere, and one representation of it is given by the four unitary matrices











V_1 = [ √(1/3)+i√(2/3), 0; 0, √(1/3)−i√(2/3) ],  V_2 = [ −√(1/3), √(2/3); −√(2/3), −√(1/3) ],

V_3 = [ −√(1/3), −√(2/3); √(2/3), −√(1/3) ],  V_4 = [ √(1/3)−i√(2/3), 0; 0, √(1/3)+i√(2/3) ]  (23)
There are many equivalent representations, but it turns out that this particular choice can be constructed as a CD code with Q=r=2, and the basis matrices are











A_1 = (1/(2√2)) [ √3+1, −i(√3−1); i(√3−1), −(√3+1) ]

and

A_2 = (1/(2√2)) [ √3+1, i(√3−1); −i(√3−1), −(√3+1) ].  (24)
The matrices (Eq. No. 23) are generated as the Cayley transform (Eq. No. 1) of A = A_1 α_1 + A_2 α_2, with α_1, α_2 ∈ {−1, 1}.


For comparison, we may consider the set based on orthogonal designs for M=2 and R=1 given by











V_1 = (1/√2) [ 1, 1; −1, 1 ],  V_2 = (1/√2) [ −1, 1; −1, −1 ],  V_3 = (1/√2) [ 1, −1; 1, 1 ],  V_4 = (1/√2) [ −1, −1; 1, −1 ]  (25)
which has ζ = 1/√2 ≈ 0.7071, or the set








V_l = [ e^{2πi/4}, 0; 0, e^{2πi/4} ]^{l−1},  l = 1, . . . , 4,
which also has ζ=0.7071. Since we are more interested in high rate examples, we do not plot the performance of the CD code (Eq. No. 23); however, simulations show that the performance gain over (Eq. No. 25) is approximately 0.75 dB at high signal to noise ratio (“SNR”). This small example shows that there are good codes within the CD structure at low rates. In this case, the best R=1 code has a CD structure.


8.2. CD Code Using Orthogonal Designs: M=2


Recall from the full-diversity observation above (in the discussion of (Eq. No. 8)) that a set of unitary matrices is fully-diverse if and only if its Cayley transform set of skew-Hermitian matrices is fully-diverse. For M=2 transmit antennas, a famous fully-diverse set is the orthogonal design of Alamouti, namely









OD = [ x, y; −y*, x* ]  (26)
Orthogonal designs are readily seen to be fully-diverse since









det(OD_1 − OD_2) = det[ x_1−x_2, y_1−y_2; −(y_1−y_2)*, (x_1−x_2)* ] = |x_1−x_2|² + |y_1−y_2|².
We require that OD be skew-Hermitian, implying that









OD = [ iα_1, α_2+iα_3; −α_2+iα_3, −iα_1 ] = i [ α_1, −iα_2+α_3; iα_2+α_3, −α_1 ] = iA,  (27)
where the αq's are real. Thus, we may define a CD code with basis matrices











A_1 = [ 1, 0; 0, −1 ],  A_2 = [ 0, −i; i, 0 ],  A_3 = [ 0, 1; 1, 0 ]  (28)
that generates a fully-diverse set. It can be noted that A1, A2, and A3 (the Pauli matrices) form a basis for the real vector space of traceless Hermitian matrices (i times the real Lie algebra su(2)). Using (Eq. No. 8) yields

det(V − V′) = 4 det(I + iA)^{−1} det(i(A′ − A)) det(I + iA′)^{−1},

which upon simplification yields










det(V − V′) = 4[ (α_1−α′_1)² + (α_2−α′_2)² + (α_3−α′_3)² ] / [ (1+α_1²+α_2²+α_3²)(1+α′_1²+α′_2²+α′_3²) ].  (29)
For example, by choosing each αq from the set A with r=2 (i.e., αq ∈ {−1, 1}), we get a code with rate R=1.5. The appropriate scaling is γ=1/3. The resulting set of eight matrices (which we omit for brevity and because they are readily derived) has ζ=1/√3.


It is noted that the code (Eq. No. 28) is a closed-form solution to (Eq. No. 20) for M=2 and Q=3 because it is a local maximum of the criterion.
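The claims above for this small code can be checked numerically; the following illustrative sketch builds the eight matrices from (Eq. No. 28) with αq ∈ {−1, 1} and scaling γ=1/3, verifies full diversity, and computes ζ.

```python
# Numerical check of the M=2, Q=3 code built from the Eq. (28) basis matrices.
import numpy as np
from itertools import product

A1 = np.array([[1, 0], [0, -1]], dtype=complex)
A2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
A3 = np.array([[0, 1], [1, 0]], dtype=complex)
gamma = 1.0 / 3.0
I = np.eye(2)

def V_of(alphas):
    A = np.sqrt(gamma) * (alphas[0]*A1 + alphas[1]*A2 + alphas[2]*A3)
    return np.linalg.solve(I + 1j*A, I - 1j*A)             # Cayley transform, Eq. (1)

codebook = [V_of(a) for a in product([-1, 1], repeat=3)]   # 8 matrices, rate R = 1.5
dets = [abs(np.linalg.det(V - Vp))
        for i, V in enumerate(codebook) for Vp in codebook[i+1:]]
zeta = 0.5 * min(dets) ** (1/2)                            # M = 2
print(min(dets) > 0, zeta)   # fully diverse; the text reports zeta = 1/sqrt(3)
```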


8.3. CD Code vs. OD: M=N=2


For a higher-rate example, we examine another code for M=2, but we choose N=2 and R=6. FIG. 1 shows the performance of a CD code with Q=4. The code is








A_1 = [ 0.1785, 0.0510+0.1340i; 0.0510−0.1340i, 0.0321 ],

A_2 = [ −0.1902, 0.1230+0.0495i; 0.1230−0.0495i, −0.0512 ],

A_3 = [ −0.2350, 0.0515−0.0139i; 0.0515+0.0139i, 0.1142 ],

A_4 = [ 0.0208, 0.1143−0.1532i; 0.1143+0.1532i, 0.0220 ].
In FIG. 1, the solid line is the block error rate ("bler" in the FIG.) for the CD code with sphere decoding, and the dashed line is the differential two-antenna orthogonal design with maximum likelihood decoding. To get R=6, choose α1, . . . , α4 ∈ A with r=8; the distance criterion (Eq. No. 18) for this code is ξ = −1.46. Also included in the figure is the two-antenna differential orthogonal design with the same rate. The CD code obeys the constraint (Eq. No. 14) and therefore can be decoded very quickly using the sphere decoder. A maximum likelihood decoder would have to search over 2^{RM}=2^{12}=4096 matrices. It is to be noted that FIG. 1 is not intended to illustrate absolute superior performance; rather, it illustrates relatively superior performance.


8.4. Comparison with Another Nongroup Code: M=4, N=1, R=4


There are not many performance curves easily available for existing codes for M=R=4 over an unknown channel, but the publication by A. Shokrollahi, B. Hassibi, B. Hochwald, and W. Sweldens, “Representation theory for high-rate multiple-antenna code design,” submitted to IEEE Trans. Info. Theory, 2000, http://mars.bell-labs.com, has a nongroup code for N=1 that appears in Table 4 and FIG. 9 of that paper. FIG. 2 compares it to a CD code with the same parameters.


For FIG. 2, the CD code has Q=16, and achieves R=4 by choosing r=2. The 4×4 matrices A1, . . . , A16 are:







A_1 = [ 0−0.2404i, 0.0004−0.0552i, −0.1191−0.1226i, −0.1851+0.0590i;
        −0.0004−0.0552i, 0+0.1088i, −0.0254+0.0269i, −0.0037−0.0552i;
        0.1191−0.1226i, 0.0254+0.0269i, 0−0.2123i, −0.0070+0.0782i;
        0.1851+0.0590i, 0.0037−0.0552i, 0.0070+0.0782i, 0+0.0803i ]

A_2 = [ 0+0.0256i, 0.0680−0.0044i, 0.0602−0.0450i, 0.1834−0.0525i;
        −0.0680−0.0044i, 0+0.0612i, 0.1373+0.0840i, −0.0514−0.0203i;
        −0.0602−0.0450i, −0.1373+0.0840i, 0−0.0521i, 0.2104+0.1243i;
        −0.1834−0.0525i, 0.0514−0.0203i, −0.2104+0.1243i, 0+0.0676i ]

A_3 = [ 0−0.1983i, −0.0603+0.0351i, 0.0916+0.0494i, 0.1784−0.0136i;
        0.0603+0.0351i, 0+0.0609i, −0.0190+0.0868i, −0.1614−0.0575i;
        −0.0916+0.0494i, 0.0190+0.0868i, 0+0.2219i, −0.0621+0.0777i;
        −0.1784−0.0136i, 0.1614−0.0575i, 0.0621+0.0777i, 0−0.0196i ]

A_4 = [ 0+0.0149i, 0.2168−0.0384i, −0.0268+0.0702i, −0.0569−0.0114i;
        −0.2168−0.0384i, 0+0.0201i, −0.1388+0.0965i, 0.1276+0.0793i;
        0.0268+0.0702i, 0.1388+0.0965i, 0−0.0022i, 0.1576−0.0771i;
        0.0569−0.0114i, −0.1276+0.0793i, −0.1576−0.0771i, 0+0.0541i ]

A_5 = [ 0−0.1976i, −0.1267+0.0316i, −0.0818−0.1269i, −0.0751−0.1148i;
        0.1267+0.0316i, 0−0.0754i, −0.0671−0.1447i, 0.1276−0.0364i;
        0.0818−0.1269i, 0.0671−0.1447i, 0+0.0004i, −0.0336−0.0754i;
        0.0751−0.1148i, −0.1276−0.0364i, 0.0336−0.0754i, 0−0.1435i ]

A_6 = [ 0−0.0397i, 0.0041−0.0431i, 0.0143+0.0019i, −0.0985−0.1415i;
        −0.0041−0.0431i, 0+0.0350i, −0.1616+0.1164i, −0.0870+0.2128i;
        −0.0143+0.0019i, 0.1616+0.1164i, 0+0.0788i, 0.0720+0.0829i;
        0.0985−0.1415i, 0.0870+0.2128i, −0.0720+0.0829i, 0+0.0251i ]

A_7 = [ 0+0.1418i, 0.0244+0.1003i, 0.0575−0.0581i, −0.0472−0.0349i;
        −0.0244+0.1003i, 0−0.1984i, −0.0059−0.0304i, −0.0735−0.2520i;
        −0.0575−0.0581i, 0.0059−0.0304i, 0+0.1012i, −0.1113+0.0231i;
        0.0472−0.0349i, 0.0735−0.2520i, 0.1113+0.0231i, 0+0.0742i ]

A_8 = [ 0−0.0335i, −0.0190−0.1533i, 0.0112−0.0098i, −0.0781−0.1095i;
        0.0190−0.1533i, 0−0.0862i, 0.0116+0.1090i, −0.1356−0.1393i;
        −0.0112−0.0098i, −0.0116+0.1090i, 0+0.0421i, 0.1032−0.1622i;
        0.0781−0.1095i, 0.1356−0.1393i, −0.1032−0.1622i, 0+0.1190i ]

A_9 = [ 0+0.1257i, 0.0192+0.0658i, 0.0312−0.0430i, −0.0170+0.0031i;
        −0.0192+0.0658i, 0+0.3621i, −0.0893+0.0286i, −0.0588+0.1090i;
        −0.0312−0.0430i, 0.0893+0.0286i, 0−0.1343i, −0.0469−0.1561i;
        0.0170+0.0031i, 0.0588+0.1090i, 0.0469−0.1561i, 0−0.0202i ]

A_10 = [ 0−0.1220i, −0.0534+0.0239i, −0.0291−0.0339i, 0.0094−0.0649i;
         0.0534+0.0239i, 0−0.0337i, −0.1484+0.0921i, 0.0813−0.0134i;
         0.0291−0.0339i, 0.1484+0.0921i, 0+0.0159i, −0.1116−0.1673i;
         −0.0094−0.0649i, −0.0813−0.0134i, 0.1116−0.1673i, 0+0.3019i ]

A_11 = [ 0−0.0848i, −0.0902+0.0471i, −0.0578+0.0636i, 0.0540−0.0904i;
         0.0902+0.0471i, 0+0.0301i, 0.1250+0.0087i, 0.0597−0.0539i;
         0.0578+0.0636i, −0.1250+0.0087i, 0+0.2837i, −0.0252+0.2114i;
         −0.0540−0.0904i, −0.0597−0.0539i, 0.0252+0.2114i, 0+0.0332i ]

A_12 = [ 0+0.0781i, 0.1675+0.0064i, −0.0349−0.0324i, 0.1939−0.0375i;
         −0.1675+0.0064i, 0−0.1216i, 0.0977+0.0318i, −0.0768−0.1414i;
         0.0349−0.0324i, −0.0977+0.0318i, 0−0.1522i, −0.0964+0.0526i;
         −0.1939−0.0375i, 0.0768−0.1414i, 0.0964+0.0526i, 0+0.0508i ]

A_13 = [ 0−0.0117i, 0.0628−0.0848i, 0.1047−0.1017i, 0.0316+0.0272i;
         −0.0628−0.0848i, 0−0.0322i, −0.0848+0.0371i, 0.1228+0.0154i;
         −0.1047−0.1017i, 0.0848+0.0371i, 0−0.1907i, −0.2330−0.0132i;
         −0.0316+0.0272i, −0.1228+0.0154i, 0.2330−0.0132i, 0−0.1408i ]

A_14 = [ 0+0.1587i, −0.0038−0.0996i, −0.1055−0.1272i, 0.1282−0.1698i;
         0.0038−0.0996i, 0−0.0848i, −0.0883−0.0334i, −0.0280+0.1595i;
         0.1055−0.1272i, 0.0883−0.0334i, 0−0.0480i, 0.0030−0.0765i;
         −0.1282−0.1698i, 0.0280+0.1595i, −0.0030−0.0765i, 0+0.0256i ]

A_15 = [ 0−0.0574i, 0.1189−0.0981i, −0.0998−0.0472i, −0.0315−0.0113i;
         −0.1189−0.0981i, 0+0.0527i, 0.0116+0.1028i, 0.1104−0.0912i;
         0.0998−0.0472i, −0.0116+0.1028i, 0+0.2906i, 0.1081+0.0020i;
         0.0315−0.0113i, −0.1104−0.0912i, −0.1081+0.0020i, 0−0.1787i ]

A_16 = [ 0−0.0315i, 0.0136+0.0632i, −0.0392+0.1217i, −0.2722+0.0216i;
         −0.0136+0.0632i, 0+0.0899i, −0.0040−0.0596i, 0.1057−0.0382i;
         0.0392+0.1217i, 0.0040−0.0596i, 0+0.1765i, 0.0531−0.0535i;
         0.2722+0.0216i, −0.1057−0.0382i, −0.0531−0.0535i, 0+0.0908i ]
In FIG. 2, ξ = −1.46. The nongroup code, which has its origin in a group code, performs better, but the difference is very small. Observe that Q = M² = 16 > 2MN − N² = 7, and therefore the inequality (Eq. No. 14) is not satisfied; but it does not matter in this case because the decoding for both codes is true maximum likelihood (rather than sphere decoding or nulling/cancelling). This example is not very practical because maximum likelihood decoding involves a search over 2^{RM}=2^{16}=65,536 matrices. However, this same CD code is used in the next example where, by increasing the number of receive antennas to N=2, we are able to solve the linearized likelihood with sphere decoding.


8.5. Linearized vs. Exact ML: M=4, N=2, R=4


By increasing the number of receive antennas in the previous example to N=2, we may linearize the likelihood and compare the performance with the true maximum likelihood. FIG. 3 shows the results. In FIG. 3, the solid lines are the block error rate (“bler” in the FIG.) and the bit error rates (“ber” in the FIG.) for the CD code with sphere decoding, and the dashed lines are the block/bit error rates with maximum likelihood decoding. The same CD code as in the example of FIG. 2 is used.


In the example of FIG. 3, observe that Q = 16 > 2MN − N² = 12, and therefore the inequality (Eq. No. 14) is still not obeyed; but because it is almost obeyed, the sphere decoder of the linearized likelihood searches over only 16−12=4 dimensions. With r=2, this search is over 2^4=16 quantities, which is a negligible burden. Compare this burden with the true maximum likelihood (65,536 matrices). FIG. 3 shows that the performance loss for linearizing the likelihood is approximately 1.3 dB at high SNR. While the performance of linearized maximum likelihood is slightly worse than true maximum likelihood, the next figure shows that the performance of nulling/cancelling is much worse than either.


8.6 Sphere decoding vs. nulling/cancelling: M=N=4, R=8



FIG. 4 shows the performance of a CD code for M=4 transmit and N=4 receive antennas for rate R=8 with linearized-likelihood decoding. In FIG. 4, the solid lines are the block and bit error rates for sphere decoding and the dashed lines are for nulling/cancelling. The performance advantage of sphere decoding is dramatic. As in the previous example of FIG. 3, Q=16, but to achieve R=8 we choose r=4. Again, the explicit description of A1, . . . , A16 is omitted for brevity and because they are readily derived; ξ=−1.36.


Also plotted in FIG. 4 is a comparison of the same CD code with nulling/cancelling decoding. We see that sphere decoding is significantly better. True maximum likelihood decoding is not realistic in this example because there are 2^{RM}=2^{32}≈4×10^9 matrices in the codebook.


8.7 High-Rate Example: M=8, N=12, R=16


Some of the original V-BLAST experiments use eight transmit and twelve receive antennas to transmit more than 20 bits/second/Hz. FIG. 5 shows that high rates with reasonable decoding complexity are also within reach of the CD codes. In FIG. 5, the solid lines are the block and bit error rates for sphere decoding.


Plotted in FIG. 5 are the block and bit error rates for R=16; here Q=64 and r=4. The CD matrices are again omitted for brevity and because they are readily derived; ξ = −1.48. We note that because M=8, the effective set size of the set of unitary matrices is L=2^{RM}=2^{128}≈3.4×10^38, yet we may still easily sphere decode the linearized likelihood.


9. Recap


The Cayley differential codes we have introduced do not require channel knowledge at the receiver, are simple to encode and decode, apply to any combination of one or more transmit and one or more receive antennas, and have excellent performance at very high rates. They are designed with a probabilistic criterion: they maximize the expected log-determinant of the difference between matrix pairs.


The CD codes make use of the Cayley transform that maps the nonlinear Stiefel manifold of unitary matrices to the linear space of skew-Hermitian matrices. The transmitted data is broken into substreams α1, . . . , αQ and then linearly encoded in the Cayley transform domain. We showed that α1, . . . , αQ appear linearly at the receiver and can be decoded by nulling/cancelling or sphere decoding by ignoring the data dependence of the additive noise. Additional channel coding across α1, . . . , αQ or from block to block can be combined with a CD code to lower the error probability even further.


Finally, we choose αq's from a set A designed to help make the final A matrix behave, on average, like a Cauchy random matrix.
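Claims 11 and 23 fix only the mapping d = −tan(θ/2) for building this scalar set; the particular phase grid used below (r equally spaced angles, offset so that θ never hits π, where the tangent diverges) is our own assumption for illustration. Since the tangent of a uniformly distributed angle is Cauchy distributed, this choice is consistent with the Cauchy-like behavior sought above.

```python
import numpy as np

# Hedged illustration of the scalar set A built via d = -tan(theta/2)
# (claims 11 and 23).  The phase grid is an assumption of ours.
def scalar_set(r):
    thetas = (2 * np.arange(r) + 1) * np.pi / r   # assumed equally spaced phases, avoiding theta = pi
    return -np.tan(thetas / 2)

print(np.round(scalar_set(2), 3))   # [-1.  1.]
print(np.round(scalar_set(4), 3))   # approximately [-0.414 -2.414  2.414  0.414]
```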


Applicants hereby incorporate by reference the entirety of their internet-published paper, “Cayley Differential Unitary Space-Time Codes”, available at http://mars.bell-labs.com/cm/ms/what/mars/papers/cayley/.


The invention may be embodied in other forms without departing from its spirit and essential characteristics. The described embodiments are to be considered only non-limiting examples of the invention. The scope of the invention is to be measured by the appended claims. All changes which come within the meaning and equivalency of the claims are to be embraced within their scope.


Appendix

Gradient of Criterion (Eq. No. 20)


In all the simulations presented in this paper the maximization of the design criterion function in (Eq. No. 20), needed to design the CD codes, is performed using a simple constrained-gradient-ascent method. In this section, we compute the gradient of (Eq. No. 20) that this method requires. More sophisticated optimization techniques that we do not consider, such as Newton-Raphson, scoring, and interior-point methods, can also use this gradient.
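A schematic of the kind of constrained-gradient-ascent loop meant here is sketched below. It is our own outline, not the patent's procedure: the step size, iteration count, and the Frobenius-norm renormalization used to enforce the constraint are assumptions, and grad_xi stands for a routine that assembles the gradient from the formulas derived in the remainder of this appendix.

```python
import numpy as np

# Schematic constrained gradient ascent on the design criterion (Eq. No. 20).
# grad_xi(A_basis) must return one (Hermitian) gradient matrix per A_q, assembled
# from the formulas derived below; the renormalization step is an assumed way of
# enforcing the design constraint and is not specified by the patent.
def gradient_ascent(A_basis, grad_xi, step=1e-2, iters=500, total_norm=1.0):
    A_basis = [Aq.copy() for Aq in A_basis]
    for _ in range(iters):
        grads = grad_xi(A_basis)
        A_basis = [Aq + step * G for Aq, G in zip(A_basis, grads)]
        A_basis = [(Aq + Aq.conj().T) / 2 for Aq in A_basis]    # project back to Hermitian matrices
        scale = total_norm / np.sqrt(sum(np.linalg.norm(Aq, 'fro') ** 2 for Aq in A_basis))
        A_basis = [scale * Aq for Aq in A_basis]                # assumed normalization constraint
    return A_basis
```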


From (Eq. No. 20), the criterion function is

$$
\frac{1}{M}\,E\,\log\det\bigl[(V-V')(V-V')^{*}\bigr]
=\log 4-\frac{1}{M}\,E\,\log\det\bigl(I+A^{2}\bigr)-\frac{1}{M}\,E\,\log\det\bigl(I+A'^{2}\bigr)+\frac{1}{M}\,E\,\log\det\bigl(A'-A\bigr)^{2}
\tag{A.1}
$$
where A = Σ_{q=1}^{Q} Aqαq and A′ = Σ_{q=1}^{Q} Aqα′q. We are interested in the gradient of this function with respect to the matrices A1, . . . , AQ. To compute the gradient of a real function f(Aq) with respect to the entries of the Hermitian matrix Aq, we use the formulas

$$
\Bigl[\frac{\partial f(A_q)}{\partial\,\mathrm{Re}\,A_q}\Bigr]_{j,k}
=\lim_{\delta\to 0}\frac{1}{\delta}\Bigl[f\bigl(A_q+\delta\,(e_j e_k^{T}+e_k e_j^{T})\bigr)-f(A_q)\Bigr],\qquad j\neq k
\tag{A.2}
$$

$$
\Bigl[\frac{\partial f(A_q)}{\partial\,\mathrm{Im}\,A_q}\Bigr]_{j,k}
=\lim_{\delta\to 0}\frac{1}{\delta}\Bigl[f\bigl(A_q+i\,\delta\,(e_j e_k^{T}-e_k e_j^{T})\bigr)-f(A_q)\Bigr],\qquad j\neq k
\tag{A.3}
$$

$$
\Bigl[\frac{\partial f(A_q)}{\partial A_q}\Bigr]_{j,j}
=\lim_{\delta\to 0}\frac{1}{\delta}\Bigl[f\bigl(A_q+\delta\, e_j e_j^{T}\bigr)-f(A_q)\Bigr]
\tag{A.4}
$$
where ej is the M-dimensional unit column vector with a one in the jth entry and zeros elsewhere.
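Before differentiating, the identity behind (Eq. No. A.1) can be spot-checked numerically for a single pair of Hermitian matrices; the expectation in (A.1) merely averages this pointwise identity. The check below is our own illustration and is not part of the derivation.

```python
import numpy as np

# Pointwise check of the identity underlying (A.1) for one pair (A, A'):
# (1/M) log det[(V-V')(V-V')*] = log 4 - (1/M) log det(I+A^2)
#                                      - (1/M) log det(I+A'^2)
#                                      + (1/M) log det((A'-A)^2).
rng = np.random.default_rng(1)
M = 3
def rand_hermitian():
    X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    return (X + X.conj().T) / 2

A, Ap = rand_hermitian(), rand_hermitian()
I = np.eye(M)
cayley = lambda H: np.linalg.solve(I + 1j * H, I - 1j * H)
V, Vp = cayley(A), cayley(Ap)

lhs = np.log(np.linalg.det((V - Vp) @ (V - Vp).conj().T).real) / M
rhs = (np.log(4)
       - np.log(np.linalg.det(I + A @ A).real) / M
       - np.log(np.linalg.det(I + Ap @ Ap).real) / M
       + np.log(np.linalg.det((Ap - A) @ (Ap - A)).real) / M)
print(np.isclose(lhs, rhs))   # True
```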


To apply (Eq. No. A.2) to the second term in (Eq. No. A.1), we compute

$$
\begin{aligned}
\log\det\Bigl(I+\bigl(A+(e_j e_k^{T}+e_k e_j^{T})\,\alpha_q\,\delta\bigr)^{2}\Bigr)
&=\log\det\Bigl(I+A^{2}+\bigl[A(e_j e_k^{T}+e_k e_j^{T})+(e_j e_k^{T}+e_k e_j^{T})A\bigr]\alpha_q\,\delta+O(\delta^{2})\Bigr)\\
&=\log\det\Bigl[(I+A^{2})\Bigl(I+(I+A^{2})^{-1}\bigl[A(e_j e_k^{T}+e_k e_j^{T})+(e_j e_k^{T}+e_k e_j^{T})A\bigr]\alpha_q\,\delta+O(\delta^{2})\Bigr)\Bigr]\\
&=\log\det(I+A^{2})+\operatorname{tr}\log\Bigl(I+(I+A^{2})^{-1}\bigl[A(e_j e_k^{T}+e_k e_j^{T})+(e_j e_k^{T}+e_k e_j^{T})A\bigr]\alpha_q\,\delta+O(\delta^{2})\Bigr)\\
&=\log\det(I+A^{2})+\operatorname{tr}\Bigl[(I+A^{2})^{-1}\bigl[A(e_j e_k^{T}+e_k e_j^{T})+(e_j e_k^{T}+e_k e_j^{T})A\bigr]\Bigr]\alpha_q\,\delta+O(\delta^{2})\\
&=\log\det(I+A^{2})+\Bigl(\bigl[(I+A^{2})^{-1}A\bigr]_{k,j}+\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}+\bigl[A(I+A^{2})^{-1}\bigr]_{k,j}+\bigl[A(I+A^{2})^{-1}\bigr]_{j,k}\Bigr)\alpha_q\,\delta+O(\delta^{2})\\
&=\log\det(I+A^{2})+4\,\operatorname{Re}\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}\,\alpha_q\,\delta+O(\delta^{2}).
\end{aligned}
$$
The last equality follows because (I+A^2)^{-1} and A commute and A is Hermitian. We may now apply (Eq. No. A.2) to obtain

$$
\Bigl[\frac{\partial\,E\log\det(I+A^{2})}{\partial\,\mathrm{Re}\,A_q}\Bigr]_{j,k}
=4\,E\,\operatorname{Re}\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}\,\alpha_q,\qquad j\neq k.
$$
The gradient with respect to the imaginary components of Aq is handled in a similar way to obtain

$$
\begin{aligned}
\log\det\Bigl(I+\bigl(A+(e_j e_k^{T}-e_k e_j^{T})\,\alpha_q\, i\,\delta\bigr)^{2}\Bigr)
&=\log\det(I+A^{2})+\operatorname{tr}\Bigl[(I+A^{2})^{-1}\bigl[A(e_j e_k^{T}-e_k e_j^{T})+(e_j e_k^{T}-e_k e_j^{T})A\bigr]\Bigr]\alpha_q\, i\,\delta+O(\delta^{2})\\
&=\log\det(I+A^{2})+\Bigl(\bigl[(I+A^{2})^{-1}A\bigr]_{k,j}-\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}+\bigl[A(I+A^{2})^{-1}\bigr]_{k,j}-\bigl[A(I+A^{2})^{-1}\bigr]_{j,k}\Bigr)\alpha_q\, i\,\delta+O(\delta^{2})\\
&=\log\det(I+A^{2})+4\,\operatorname{Im}\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}\,\alpha_q\,\delta+O(\delta^{2}),
\end{aligned}
$$

which yields

$$
\Bigl[\frac{\partial\,E\log\det(I+A^{2})}{\partial\,\mathrm{Im}\,A_q}\Bigr]_{j,k}
=4\,E\,\operatorname{Im}\bigl[(I+A^{2})^{-1}A\bigr]_{j,k}\,\alpha_q,\qquad j\neq k.
$$
The gradient with respect to the diagonal elements is

$$
\Bigl[\frac{\partial\,E\log\det(I+A^{2})}{\partial A_q}\Bigr]_{j,j}
=2\,E\,\bigl[(I+A^{2})^{-1}A\bigr]_{j,j}\,\alpha_q.
$$
The third term in (Eq. No. A.1) has the same derivative as the second term, with A′ and α′q in place of A and αq.
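The closed forms above are easy to validate by finite differences; the snippet below is our own spot check of the real-part derivative of the second term, applying definition (A.2) directly to f(A) = log det(I + A^2) with αq = 1 (i.e., differentiating with respect to A itself).

```python
import numpy as np

# Finite-difference check of the real-part gradient of log det(I + A^2)
# against the closed form 4*Re[(I+A^2)^{-1} A]_{j,k} (our illustration).
rng = np.random.default_rng(2)
M, j, k = 4, 0, 2
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = (X + X.conj().T) / 2
I = np.eye(M)
f = lambda H: np.log(np.linalg.det(I + H @ H).real)

delta = 1e-6
P = np.zeros((M, M)); P[j, k] = P[k, j] = 1.0          # perturbation e_j e_k^T + e_k e_j^T
numeric  = (f(A + delta * P) - f(A)) / delta
analytic = 4 * np.real(np.linalg.solve(I + A @ A, A))[j, k]
print(np.isclose(numeric, analytic, rtol=1e-4))        # True
```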


For the fourth term, note that

$$
A'-A=\sum_{q=1}^{Q}A_q\,\beta_q
$$
where βq = α′q − αq. Therefore,

$$
\begin{aligned}
\log\det\bigl((A'-A)+(e_j e_k^{T}+e_k e_j^{T})\,\delta\,\beta_q\bigr)^{2}
&=\log\det\Bigl[(A'-A)\Bigl(I+(A'-A)^{-1}(e_j e_k^{T}+e_k e_j^{T})\,\delta\,\beta_q\Bigr)\Bigr]^{2}\\
&=\log\det(A'-A)^{2}+2\operatorname{tr}\log\Bigl(I+(A'-A)^{-1}(e_j e_k^{T}+e_k e_j^{T})\,\delta\,\beta_q\Bigr)+O(\delta^{2})\\
&=\log\det(A'-A)^{2}+2\operatorname{tr}\Bigl[(A'-A)^{-1}(e_j e_k^{T}+e_k e_j^{T})\Bigr]\,\delta\,\beta_q+O(\delta^{2})\\
&=\log\det(A'-A)^{2}+2\Bigl(\bigl[(A'-A)^{-1}\bigr]_{k,j}+\bigl[(A'-A)^{-1}\bigr]_{j,k}\Bigr)\,\delta\,\beta_q+O(\delta^{2})\\
&=\log\det(A'-A)^{2}+4\operatorname{Re}\bigl[(A'-A)^{-1}\bigr]_{j,k}\,\delta\,\beta_q+O(\delta^{2}).
\end{aligned}
$$
Hence,

$$
\Bigl[\frac{\partial\,E\log\det(A'-A)^{2}}{\partial\,\mathrm{Re}\,A_q}\Bigr]_{j,k}
=4\,E\,\operatorname{Re}\bigl[(A'-A)^{-1}\bigr]_{j,k}\,\beta_q,\qquad j\neq k.
$$
For brevity, the computation of the derivatives with respect to the imaginary and diagonal components of Aq is omitted. The results are

$$
\Bigl[\frac{\partial\,E\log\det(A'-A)^{2}}{\partial\,\mathrm{Im}\,A_q}\Bigr]_{j,k}
=4\,E\,\operatorname{Im}\bigl[(A'-A)^{-1}\bigr]_{j,k}\,\beta_q,\qquad j\neq k,
$$

and

$$
\Bigl[\frac{\partial\,E\log\det(A'-A)^{2}}{\partial A_q}\Bigr]_{j,j}
=2\,E\,\bigl[(A'-A)^{-1}\bigr]_{j,j}\,\beta_q.
$$
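Putting the pieces together, a Monte-Carlo routine such as the following (our own sketch) can serve as the grad_xi routine assumed in the gradient-ascent outline earlier in this appendix. The packaging of the real, imaginary, and diagonal derivatives into a single Hermitian matrix per Aq, and the sampling of the scalar pairs, are assumptions; the patent supplies only the per-entry formulas above.

```python
import numpy as np

# Monte-Carlo assembly of the gradient of the criterion (A.1) with respect to one
# basis matrix A_q, combining the closed forms derived above (our sketch).
# Off-diagonal entries carry the factor 4, diagonal entries the factor 2, and the
# whole expression is divided by M, as in (A.1).
def grad_wrt_Aq(A_basis, q, scalar_set, num_samples=1000, rng=None):
    rng = rng or np.random.default_rng()
    M = A_basis[0].shape[0]
    I = np.eye(M)
    G = np.zeros((M, M), dtype=complex)
    used = 0
    for _ in range(num_samples):
        a  = rng.choice(scalar_set, size=len(A_basis))   # assumed: independent draws from the scalar set
        ap = rng.choice(scalar_set, size=len(A_basis))
        if np.allclose(a, ap):
            continue                                     # beta_q = 0 contributes nothing and (A'-A) is singular
        A  = sum(x * Aq for x, Aq in zip(a,  A_basis))
        Ap = sum(x * Aq for x, Aq in zip(ap, A_basis))
        B  = np.linalg.solve(I + A @ A,   A)             # (I + A^2)^{-1} A
        Bp = np.linalg.solve(I + Ap @ Ap, Ap)            # (I + A'^2)^{-1} A'
        D  = np.linalg.inv(Ap - A)                       # (A' - A)^{-1}
        G += -B * a[q] - Bp * ap[q] + D * (ap[q] - a[q]) # second, third, and fourth terms of (A.1)
        used += 1
    G /= max(used, 1)
    grad = (4 * G - 2 * np.diag(np.diag(G))) / M         # 4/M off the diagonal, 2/M on it
    return (grad + grad.conj().T) / 2                    # keep the result exactly Hermitian
```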
Claims
  • 1. A method of wireless differential communication, the method comprising: generating a plurality of baseband signals based on Cayley-encoded input data; modulating said baseband signals on a carrier to form carrier-level signals; and transmitting said carrier-level signals from at least one antenna.
  • 2. The method of claim 1, wherein each baseband signal is also based on a previous baseband signal that corresponds to a previously-transmitted carrier-level signal.
  • 3. The method of claim 2, wherein said transmitting step transmits from a multiple antenna array; each baseband signal includes one or more sequences, in time, of complex numbers, each sequence to be transmitted from a respective antenna of said multiple antenna array; and each baseband signal is representable as a transmission matrix in which each column corresponds to one of said sequences and represents a respective antenna and in which each row represents a respective time segment.
  • 4. The method of claim 3, wherein said generating step generates each transmission matrix based upon said input data and a previous transmission matrix representing previously transmitted baseband signals.
  • 5. The method of claim 4, wherein said input data is a set from a plurality of sets providing a highest transmission quality out of said plurality of sets.
  • 6. The method of claim 4, wherein, for a total number of bits of data to be transmitted, said generating step includes: breaking said total number of bits into same-sized chunks; mapping each of said chunks to take a value from a predetermined set of real values to obtain a scalar set; determining a Cayley-Differential (“CD”) code based upon said scalar set; and determining said transmission matrix based on said CD code.
  • 7. The method of claim 6, wherein said step of determining said CD code includes calculating said CD code, V, according to the following equation: V = (I + iA)^{-1}(I − iA)
  • 8. The method of claim 7, wherein said set, {Aq}, of fixed M×M complex Hermitian matrices is determined by maximizing over {Aq} according to:
  • 9. The method of claim 8, wherein said {Aq} is stored in memories of both a transmitter and a corresponding receiver before data to be transmitted is transmitted.
  • 10. The method of claim 6, wherein: said total number of bits equals R*M, where R is the rate in bits per channel use and M is the number of transmit antennas; said number of chunks is Q, where Q represents the number of degrees of freedom; and each chunk is R*M/Q bits in size.
  • 11. The method of claim 6, wherein said predetermined set of real values, hereafter referred to as A, is determined according to the following equation: d=−tan(θ/2)
  • 12. The method of claim 11, wherein said A is stored in memories of both a transmitter and a corresponding receiver before data to be transmitted is transmitted.
  • 13. The method of claim 6, wherein said generating step includes calculating said transmission matrix, S_τ, according to the following equation: S_τ = V_{z_τ} S_{τ−1}
  • 14. A method of wireless differential communication, the method comprising: receiving receive carrier-level signals, each of which is formed from at least one transmit carrier-level signal transmitted from at least one transmitter antenna passing through a channel, using at least one receiver antenna; demodulating said receive carrier-level signals to recover a plurality of Cayley-encoded receive baseband signals; and processing said receive baseband signals to obtain data represented thereby.
  • 15. The method of claim 14, wherein each receive baseband signal depends on said data as encoded therein and a previously-received receive baseband signal that corresponds to a previously-transmitted carrier-level signal.
  • 16. The method of claim 15, wherein each receive baseband signal includes one or more receive sequences, in time, of complex numbers; and wherein each receive baseband signal is representable as a reception matrix in which each column corresponds to one of said receive sequences and represents a respective receiver antenna and in which each row represents a respective time segment.
  • 17. The method of claim 16, wherein said transmit carrier-level signals were formed from a plurality of transmit baseband signals, each transmit baseband signal including one or more transmit sequences, in time, of complex numbers, each transmit sequence having been transmitted from a respective antenna of a multiple antenna array, each transmit baseband signal being representable as a transmission matrix in which each column corresponds to one of said transmit sequences and represents a respective transmit antenna and in which each row represents a respective time segment, each transmission matrix being based on a transmission matrix representing a previously transmitted transmit baseband signal and input data.
  • 18. The method of claim 15, wherein said processing step includes searching a predetermined set of real scalar values to assemble a set {αq} that minimizes one of the following equations:
  • 19. The method of claim 18, wherein said {Aq} is stored in the memories of both the receiver and a corresponding transmitter before said data to be transmitted is transmitted.
  • 20. The method of claim 18, wherein R*M bits of data are received, where R is the rate in bits per channel use and M is the number of transmit antennas, the method further comprising: mapping each element of said {αq} into corresponding R*M/Q bits, where Q represents the number of degrees of freedom, to get Q chunks; and reassembling said Q chunks of said R*M/Q bits to produce the R*M bits of data that were transmitted.
  • 21. A machine implemented method of wireless differential communication, the machine implemented method comprising: generating a set, {Aq}, of fixed M×M complex Hermitian matrices for which ξ(V) satisfies a maximization criterion and ξ(V) is defined by the following equation: ξ(V) = (1/M) E log det[(V − V′)(V − V′)*], where V = (I + iA)^{-1}(I − iA), V′ = (I + iA′)^{-1}(I − iA′),
  • 22. The method of claim 21, further comprising: storing said {Aq} in memories of both a receiver and a corresponding transmitter before data to be transmitted is transmitted.
  • 23. A machine implemented method of wireless differential communication, the machine implemented method comprising: generating a set, A, of real-valued scalars according to the following equation: d=−tan(θ/2)
  • 24. The method of claim 23, further comprising: storing said A in memories of both a receiver and a corresponding transmitter before data to be transmitted is transmitted.
  • 25. A machine operable to perform the method of claim 1.
  • 26. A computer-readable medium having code portions embodied thereon that, when read by a processor, cause said processor to perform the method of claim 1.
  • 27. A machine operable to perform the method of claim 14.
  • 28. A computer-readable medium having code portions embodied thereon that, when read by a processor, cause said processor to perform the method of claim 14.
  • 29. A machine operable to perform the method of claim 21.
  • 30. A computer-readable medium having code portions embodied thereon that, when read by a processor, cause said processor to perform the method of claim 21.
  • 31. A machine operable to perform the method of claim 23.
  • 32. A computer-readable medium having code portions embodied thereon that, when read by a processor, cause said processor to perform the method of claim 23.
CONTINUING APPLICATION INFORMATION

The present application is a Continuation-In-Part (“CIP”) under 35 U.S.C. § 120 of U.S. patent application Ser. No. 09/356,387, filed Jul. 16, 1999, now U.S. Pat. No. 6,724,842, the entirety of which is hereby incorporated by reference. The present application also claims priority under 35 U.S.C. § 119(e) upon Provisional U.S. Patent Application Ser. No. 60/269,838, filed Feb. 20, 2001, the entirety of which is hereby incorporated by reference.

US Referenced Citations (5)
Number Name Date Kind
3835392 Mahner et al. Sep 1974 A
6724842 Hochwald et al. Apr 2004 B1
6801579 Hassibi et al. Oct 2004 B1
20040228271 Marzetta Nov 2004 A1
20050105644 Baxter et al. May 2005 A1
Related Publications (1)
Number Date Country
20020163892 A1 Nov 2002 US
Provisional Applications (1)
Number Date Country
60269838 Feb 2001 US
Continuation in Parts (1)
Number Date Country
Parent 09356387 Jul 1999 US
Child 10077849 US