Training-based channel estimation for multiple-antennas

Information

  • Patent Grant
  • Patent Number
    9,264,122
  • Date Filed
    Thursday, December 18, 2014
  • Date Issued
    Tuesday, February 16, 2016
Abstract
The burden of designing multiple training sequences for systems having multiple transmit antennas is drastically reduced by employing a single sequence from which the necessary multiple sequences are developed. The single sequence is selected to create sequences that have an impulse-like autocorrelation function and zero cross correlations. A sequence of any desired length Nt can be realized for an arbitrary number of channel taps, L. The created sequences can be restricted to a standard constellation (that is used in transmitting information symbols) so that a common constellation mapper is used for both the information signals and the training sequence. In some applications a training sequence may be selected so that it is encoded with the same encoder that is used for encoding information symbols. Both block and trellis coding are possible in embodiments that employ this approach.
Description
BACKGROUND OF THE INVENTION

This relates to space-time coding, and more particularly, to channel estimation in space-time coding arrangements.


Space-Time coding (STC) is a powerful wireless transmission technology that enables joint optimized designs of modulation, coding, and transmit diversity modules on wireless links. A key feature of STC is that channel knowledge is not required at the transmitter. While several non-coherent STC schemes have been invented that also do not require channel information at the receiver, they suffer performance penalties relative to coherent techniques. Such non-coherent techniques are therefore more suitable for rapidly fading channels that experience significant variation within the transmission block. However, for quasi-static or slowly varying fading channels, training-based channel estimation at the receiver is commonly employed, because it offers better performance.


For single transmit antenna situations, it is known that a training sequence can be constructed that achieves a channel estimation with minimum mean squared error (optimal sequences) by selecting symbols from an Nth root-of-unity alphabet of symbols e^{i2πk/N}, k = 0, 1, 2, . . . , (N−1),
when the alphabet size N is not constrained. Such sequences are the Perfect Roots-of-Unity Sequences (PRUS) that have been proposed in the literature, for example, by W. H. Mow, “Sequence Design for Spread Spectrum,” The Chinese University Press, Chinese University of Hong Kong. 1995. The training sequence length, Nt, determines the smallest possible alphabet size. Indeed, it has been shown that for any given length Nt, there exists a PRUS with alphabet size N=2 Nt, and that for some values of Nt smaller alphabet sizes are possible. It follows that a PRUS of a predetermined length might employ a constellation that is other than a “standard” constellation, where a “standard” constellation is one that has a power of 2 number of symbols. Binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), and 8-point phase shift keying (8-PSK) are examples of a standard constellation. Most, if not all, STC systems employ standard constellations for the transmission of information.
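By way of illustration (and not as part of the patented description), the root-of-unity alphabet and the impulse-like autocorrelation property that defines a PRUS can be sketched in a few lines of Python; the length-4 sequence below is merely a convenient, well-known perfect binary example, and numpy is assumed to be available.

```python
# Hedged sketch: N-th root-of-unity alphabet and a periodic-autocorrelation
# check; the example sequence is illustrative, not one prescribed by the patent.
import numpy as np

def roots_of_unity(N):
    """Alphabet {exp(i*2*pi*k/N) : k = 0, 1, ..., N-1}."""
    return np.exp(1j * 2 * np.pi * np.arange(N) / N)

def periodic_autocorrelation(s):
    """R(m) = sum_k s(k) * conj(s(k-m mod Nt)), for m = 0 .. Nt-1."""
    s = np.asarray(s, dtype=complex)
    return np.array([np.vdot(np.roll(s, m), s) for m in range(len(s))])

alphabet = roots_of_unity(4)               # QPSK-like alphabet {1, i, -1, -i}
s = alphabet[[0, 0, 0, 2]]                 # [1, 1, 1, -1], a known perfect sequence
print(np.round(periodic_autocorrelation(s), 12))   # impulse-like: [4, 0, 0, 0]
```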


Another known approach for creating training sequences constrains the training sequence symbols to a specific (standard) constellation, typically, BPSK, QPSK, or 8-PSK in order that the transmitter and receiver implementations would be simpler (a single mapper in the transmitter and an inverse mapper in the receiver—rather than two). In such a case, however, optimal sequences do not exist for all training lengths Nt. Instead, exhaustive searches must be carried out to identify sub-optimal sequences according to some performance criteria. Alas, such searches may be computationally prohibitive. For example, in the third generation TDMA proposal that is considered by the industry, 8-PSK constellation symbols are transmitted in a block that includes 116 information symbols, and 26 training symbols (Nt=26). No optimal training sequence exists for this value of Nt and constellation size and number of channel taps to estimate, L.


When, for example, two transmit antennas are employed, a training sequence is needed for each antenna, and ideally, the sequences should be uncorrelated. One known way to arrive at such sequences is through an exhaustive search in the sequences space. This space can be quite large. For example, when employing two antennas, and a training sequence of 26 symbols for each antenna, this space contains 8^(2×26) sequences. For current computational technology, this is a prohibitively large space for exhaustive searching. Reducing the constellation of the training sequence to BPSK (from 8-PSK) reduces the search to 2^(2×26) sequences, but that is still quite prohibitively large; and the reduction to a BPSK sequence would increase the achievable mean squared error. Moreover, once the two uncorrelated sequences are found, a generator is necessary for each of the sequences, resulting in an arrangement (for a two antenna case) as shown in FIG. 1, which includes transmitter 10 that includes information encoder 13 that feeds constellation mapper 14 that drives antennas 11 and 12 via switches 15 and 16. To provide training sequences, transmitter 10 includes sequence generator 5 followed by constellation mapper 6 that feeds antenna 11 via switch 15, and sequence generator 7 followed by constellation mapper 8 that feeds antenna 12 via switch 16.


SUMMARY OF THE INVENTION

An advance in the art is achieved with an approach that drastically reduces the problem of designing multiple training sequences for systems having multiple transmit antennas, by employing a single sequence from which the necessary multiple sequences are developed. The single sequence is selected to develop the multiple sequences that have impulse-like autocorrelation functions and zero cross correlations. A sequence of any desired length Nt can be realized for an arbitrary number of channel taps, L.


In one approach, a sequence having an impulse-like autocorrelation function is restricted advantageously to a standard constellation (that is used in transmitting information symbols) so a common constellation mapper is used for both the information signals and the training sequence.


In another approach, a training sequence is selected so that it is encoded with the same encoder that is used for encoding information symbols. Both block and trellis coding are possible in embodiments that employ this approach.





BRIEF DESCRIPTION OF THE DRAWING


FIG. 1 shows a prior art arrangement of a two-antenna transmitter and a one antenna receiver, where training sequences are independently generated for the two transmitting antennas;



FIG. 2 shows an arrangement where the training sequences employ the same constellation mapper that is employed in mapping space-time encoded information symbols;



FIG. 3 presents a block diagram of an arrangement where a single encoder generates the training sequences for the two transmitter antennas;



FIG. 4 shows one encoding realization for encoder 9 of FIG. 3;



FIG. 5 shows another encoding realization for encoder 9 of FIG. 3;



FIG. 6 presents a block diagram of an arrangement where a single encoder generates both the training sequences and the information symbols for the two transmitter antennas;



FIG. 7 shows one encoding realization for encoder 9 of FIG. 6;



FIG. 8 shows another encoding realization for encoder 9 of FIG. 6;



FIG. 9 shows yet another encoding realization for encoder 9 of FIG. 6; and



FIG. 10 shows the constellation of an 8-PSK encoder realization for encoder 9.





DETAILED DESCRIPTION

The following mathematical development focuses on a system having two transmit antennas and one receive antenna. It should be understood, however, that a skilled artisan could easily extend this mathematical development to more than two transmit antennas, and to more than one receive antenna.



FIG. 2 shows an arrangement of a transmitter with two transmit antennas 11 and 12 that transmit signals s1 and s2, respectively, and a receiver with receive antenna 21, with channels h1 (from antenna 11 to antenna 21) and h2 (from antenna 12 to antenna 21) therebetween. Channels h1 and h2 can each be expressed as a finite impulse response (FIR) filter with L taps. Thus, the signal received at antenna 21 at time k, y(k), can be expressed as










y(k) = Σ_{i=0}^{L−1} h1(i) s1(k−i) + Σ_{i=0}^{L−1} h2(i) s2(k−i) + z(k),  (1)








where z(k) is noise, which is assumed to be AWGN (additive white Gaussian noise).


The input sequences s1 and s2 belong to a finite signal constellation, and it can be assumed, without loss of generality, that they are transmitted in data blocks that consist of Ni information symbols and Nt training symbols. If Nt training symbols are employed to estimate the L taps of a channel in the case of a single antenna, then for a two antenna case such as shown in FIG. 1, one needs to employ 2Nt training symbols to estimate the 2L unknown coefficients (of h1 and h2).


When a training sequence of length Nt is transmitted, the first L−1 received signals are corrupted by the preceding symbols. Therefore, the useful portion of the transmitted Nt sequence is from L to Nt. Expressing equation (1) in matrix notation over the useful portion of a transmitted training sequence yields










y = Sh + z = [S1(L, Nt)  S2(L, Nt)] [h1(L); h2(L)] + z,  (2)








where y and z are vectors with (Nt−L+1) elements, S1(L, Nt) and S2(L, Nt) are convolution matrices of dimension (Nt−L+1)×L, and h1(L) and h2(L) are of dimension L×1; that is,












Si(L, Nt) = [ si(L−1)    si(L−2)    …    si(0)
              si(L)      si(L−1)    …    si(1)
              ⋮          ⋮          ⋱    ⋮
              si(Nt−1)   si(Nt−2)   …    si(Nt−L) ],  (3)

and

hi(L) = [hi(0)  hi(1)  …  hi(L−1)]^T, for i = 1, 2.  (4)








If the convolution matrix is to have at least L rows, Nt must be at least 2L−1. In the context of this disclosure, the S matrix is termed the "training matrix" and, as indicated above, it is a convolution matrix that relates to signals received solely in response to training sequence symbols; i.e., not corrupted by signals sent prior to the sending of the training sequence.
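For concreteness, the training matrix of equation (3) can be formed as in the following sketch; this is only an illustration under the 0-indexing used above (the helper name and the example symbols are assumptions, not material from the patent), with numpy assumed.

```python
# Hedged sketch of the training (convolution) matrix of equation (3):
# row k-(L-1) is [s(k), s(k-1), ..., s(k-L+1)] for k = L-1 .. Nt-1.
import numpy as np

def training_matrix(s, L):
    s = np.asarray(s)
    Nt = len(s)
    assert Nt >= 2 * L - 1, "Nt must be at least 2L-1 so that S has at least L rows"
    return np.array([s[k::-1][:L] for k in range(L - 1, Nt)])

# Example with placeholder symbols: Nt = 7, L = 3 gives an (Nt-L+1) x L = 5 x 3 matrix.
S = training_matrix([1, -1, 1, 1, 1, -1, 1], L=3)
print(S.shape)   # (5, 3)
```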


The linear least-squares channel estimate, ĥ, assuming S has full column rank, is











ĥ = [ĥ1; ĥ2] = (S^H S)^{−1} S^H y,  (5)








where the (•)H and (•)−1 designate the complex-conjugate transpose (Hermitian) and the inverse, respectively. For zero mean noise, the channel estimation mean squared error is defined by

MSE = E[(h−ĥ)^H (h−ĥ)] = 2σ^2 tr((S^H S)^{−1}),  (6)

where tr(•) denotes the trace of a matrix. The minimum MSE, MMSE, is equal to











MMSE = 2σ^2 L / (Nt − L + 1),  (7)








which is achieved if and only if











S^H S = [S1^H S1  S1^H S2; S2^H S1  S2^H S2] = (Nt − L + 1) I2L  (8)








where I2L is the 2L×2L identity matrix. The sequences s1 and s2 that satisfy equation (8) are optimal sequences. Equation (8) effectively states that the optimal sequences have an impulse-like autocorrelation function (e.g., S1^H S1 corresponds to the identity matrix, I, multiplied by a scalar) and zero cross-correlations.
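The optimality test of equation (8) and the estimator of equation (5) are straightforward to express numerically; the sketch below is only an illustration (the helper names, sequence choices and tolerance are assumptions, not material from the patent).

```python
# Hedged sketch: checks the condition S^H S = (Nt - L + 1) I_2L of equation (8)
# for a candidate pair (s1, s2) and forms the least-squares estimate of eq. (5).
import numpy as np

def conv_matrix(s, L):
    """(Nt - L + 1) x L convolution ("training") matrix of equation (3)."""
    s = np.asarray(s)
    return np.array([s[k::-1][:L] for k in range(L - 1, len(s))])

def satisfies_eq8(s1, s2, L, tol=1e-9):
    S = np.hstack([conv_matrix(s1, L), conv_matrix(s2, L)])
    return np.allclose(S.conj().T @ S, (len(s1) - L + 1) * np.eye(2 * L), atol=tol)

def ls_channel_estimate(y, s1, s2, L):
    """Equation (5): h_hat = (S^H S)^{-1} S^H y, split into (h1_hat, h2_hat)."""
    S = np.hstack([conv_matrix(s1, L), conv_matrix(s2, L)])
    h = np.linalg.solve(S.conj().T @ S, S.conj().T @ y)
    return h[:L], h[L:]
```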


A straightforward approach for designing two training sequences of length Nt each, to estimate two L-tap channels (i.e., two channels having L unknowns each, or a total of 2L unknowns), is to design a single training sequence s of length Nt′ (Nt′=Nt+L) that estimates a single channel with L′=2L taps (i.e., a single channel having 2L unknowns). Generalizing, Nt′=Nt+(n−2)L, where n is the number of antennas. One can thus view the received signal as

y=S(L′,Nt′)h(L′)+z  (9)

where S is a convolution matrix of dimension (Nt′−L′+1)×L′. Again, for optimality, the imposed requirement is that

S^H(L′,Nt′) S(L′,Nt′) = (Nt′−L′+1) I2L,  (10)

and once the sequence s is found, the task is to create the subsequences s1 and s2 from the found sequence s. Preferably, the subsequences s1 and s2 can be algorithmically generated from sequence s. Conversely, one may find subsequences s1 and s2 that satisfy the requirements of equation (8) and are such that sequence s can be algorithmically generated. This permits the use of a single training signal generator that, through a predetermined algorithm (i.e., coding), develops the subsequences s1 and s2. Both approaches lead to the embodiment depicted in FIG. 3, where information signals are applied to encoder 13 that generates two streams of symbols that are applied to constellation mapper 14 via switches 15 and 16. Generator 5 creates a training sequence that is applied to encoder 9, and encoder 9 generates the subsequences s1 and s2 that are applied to constellation mapper 14 via switches 15 and 16.


Actually, once we realized that the complexity of the training sequence determination problem can be reduced by focusing on the creation of a single sequence from which a plurality of sequences that meet the requirements of equation (8) can be generated, it became apparent that there is no requirement for s to be longer than s1 and s2.



FIG. 4 presents one approach for generating optimal subsequences s1 and s2 that meet the requirements of equation (8) and that can be generated from a single sequence. In accordance with FIG. 4, generator 5 develops a sequence s of length Nt/2, and encoder 9 develops therefrom the sequences s1 = −s|s and s2 = s|s, where the "|" symbol stands for concatenation; e.g., sequence s1 comprises sequence −s concatenated with, or followed by, sequence s. Thus, during the training sequence, antenna 11 transmits the sequence −s during the first Nt/2 time periods, and the sequence s during the last Nt/2 time periods. Antenna 12 transmits the sequence s during both the first and last Nt/2 time periods.
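A minimal sketch of this FIG. 4 construction follows (illustrative only; the base sequence is a placeholder and the function name is an assumption).

```python
# Hedged sketch of the FIG. 4 encoding: s1 = -s | s for antenna 11 and
# s2 = s | s for antenna 12, where s is the length-Nt/2 base sequence.
import numpy as np

def fig4_training_pair(s):
    s = np.asarray(s)
    return np.concatenate([-s, s]), np.concatenate([s, s])

s1, s2 = fig4_training_pair([1, 1, 1, -1])   # hypothetical base sequence, Nt = 8
print(s1)   # [-1 -1 -1  1  1  1  1 -1]
print(s2)   # [ 1  1  1 -1  1  1  1 -1]
```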


In response to the training sequences transmitted by antennas 11 and 12, receiving antenna 21 develops the signal vector y (where the elements of the vector y are the signals received from antennas 11 and 12). Considering the received signal during the first Nt/2 time periods as y1 and during the last Nt/2 time periods as y2, and employing only the useful portion of the signal (that is, the portions not corrupted by signals that are not part of the training sequence) one gets










[y1; y2] = [−S  S; S  S] [h1; h2] + [z1; z2]  (11)








where S is a convolution matrix of dimension (Nt−L+1)×L. In accordance with the principles disclosed herein, the FIG. 3 receiver multiplies the received signal in processor 25 by the conjugate-transpose matrix S^H, yielding











[r1; r2] = [−S^H  S^H; S^H  S^H] [y1; y2] = [2 S^H S  0; 0  2 S^H S] [h1; h2] + [z̄1; z̄2],  (12)

where

[z̄1; z̄2] = [−S^H  S^H; S^H  S^H] [z1; z2].  (13)








If the sequence s is such that S^H S = (Nt−L+1) IL, then










[r1; r2] = 2(Nt−L+1) [h1; h2] + [z̄1; z̄2].  (14)








If the noise is white, then the linear processing at the receiver does not color it, and the channel transfer functions correspond to

h1 = r1/(2(Nt−L+1)),
h2 = r2/(2(Nt−L+1))  (15)

with a mean squared error, MSE, that achieves the lower bound expressed in equation (7); to wit,









MSE = 2σ^2 L / (Nt − L + 1)  (16)
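The receiver processing of equations (12)-(15) amounts to two matched-filter products followed by a scaling; the sketch below is illustrative only (the indexing of the "useful" samples and the generic normalization by 2S^H S are assumptions that reduce to the simple division of equation (15) when S^H S = (Nt−L+1)IL).

```python
# Hedged sketch of equations (12)-(15): r1 = S^H (y2 - y1), r2 = S^H (y1 + y2),
# then invert the diagonal factor 2 S^H S (which equals 2(Nt - L + 1) I_L for
# an optimal base sequence, giving equation (15)).
import numpy as np

def conv_matrix(s, L):
    s = np.asarray(s)
    return np.array([s[k::-1][:L] for k in range(L - 1, len(s))])

def estimate_channels_fig4(y1, y2, s, L):
    """y1, y2: useful received samples of the two half-blocks; s: base sequence."""
    S = conv_matrix(s, L)
    r1 = S.conj().T @ (y2 - y1)          # eq. (12): 2 S^H S h1 + noise
    r2 = S.conj().T @ (y1 + y2)          # eq. (12): 2 S^H S h2 + noise
    G = 2 * (S.conj().T @ S)
    return np.linalg.solve(G, r1), np.linalg.solve(G, r2)
```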







The above result can be generalized to allow any matrix U to be used to encode the training sequence, s, so that










[y1; y2] = U [h1; h2] + [z1; z2]  (17)








as long as U^H U = 2I for a two-antenna case, and U^H U = KI for a K-antenna case.


Whereas FIG. 4 presents a method for developing sequences s1 and s2 of length Nt from a sequence s that is Nt/2 symbols long, FIG. 5 presents a method for developing sequences s1 and s2 of length Nt from a sequence s that is 2Nt symbols long, which consists of a sequence d1=[s(0) s(1) . . . s(Nt−1)] followed by a sequence d2=[s(Nt) s(Nt+1) . . . s(2Nt−1)]. In accordance with this approach, s1=d1|−{tilde over (d)}2* and s2=d2|{tilde over (d)}1*. The sequence {tilde over (d)}1 corresponds to the sequence d1 with its elements in reverse order, and the sequence {tilde over (d)}1* corresponds to the sequence d1 with its elements in reverse order and converted to their respective complex conjugates.
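A sketch of this FIG. 5 construction follows (illustrative; the even split of s into d1 and d2 reflects the description above, and the function name is an assumption).

```python
# Hedged sketch of the FIG. 5 encoding: s = d1 | d2, then
# s1 = d1 | -{tilde d2}* and s2 = d2 | {tilde d1}*, where {tilde d}* denotes
# the sequence reversed and complex-conjugated.
import numpy as np

def fig5_training_pair(s):
    s = np.asarray(s, dtype=complex)
    half = len(s) // 2
    d1, d2 = s[:half], s[half:]
    s1 = np.concatenate([d1, -np.conj(d2[::-1])])
    s2 = np.concatenate([d2,  np.conj(d1[::-1])])
    return s1, s2
```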


The FIG. 5 encoding is very similar to the encoding scheme disclosed by Alamouti in U.S. Pat. No. 6,185,258, issued Feb. 6, 2001, except that (a) the Alamouti scheme is symbol-centric whereas the FIG. 5 encoding is sequence-centric, and (b) the Alamouti scheme does not have the concept of a reverse order of a sequence (e.g., {tilde over (d)}1*). See also E. Lindskog and A. Paulraj, "A Transmit Diversity Scheme for Channels With Intersymbol Interference," ICC, 1:307-311, 2000. An encoder 9 that is created for developing training sequences s1 and s2 in accordance with FIG. 5 can be constructed with a control terminal that is set to 1 during transmission of information and set to another value (e.g., 0, or to Nt to indicate the length of the generated block) during transmission of the training sequence, leading to the simplified transmitter realization shown in FIG. 6. More importantly, such an arrangement leads to a simplified receiver because essentially the same decoder is used for both the information signals and the training signals.


With a signal arrangement as shown in FIG. 5, the signal captured at antenna 21 of receiver 20 is










[y1; y2] = [−{tilde over (D)}2*  {tilde over (D)}1*; D1  D2] [h1; h2] + [z1; z2]  (18)








where the matrices Di and {tilde over (D)}i (for i=1,2) are convolution matrices for di and {tilde over (d)}i, respectively, of dimension (Nt−L+1)×L. Recalling equation (8), MMSE is achieved if and only if D^H D, where D denotes the matrix of equation (18), has zeros off the diagonal; i.e.,

−{tilde over (D)}1^T {tilde over (D)}2* + (D2*)^T D1 = 0  (19)
and
−{tilde over (D)}2^T {tilde over (D)}1* + (D1*)^T D2 = 0  (20)

and identity matrices on the diagonal; i.e.,

{tilde over (D)}2^T {tilde over (D)}2* + (D1*)^T D1 = 2(Nt−L+1) IL  (21)
and
{tilde over (D)}1^T {tilde over (D)}1* + (D2*)^T D2 = 2(Nt−L+1) IL.  (22)


Various arrangements that interrelate sequences d1 and d2 can be found that meet the above requirement. By way of example (and not by way of limitation), a number of simple choices that satisfy these conditions follow.


(1) (D1*)^T D1 = (Nt−L+1) IL, {tilde over (D)}1 = D1, and D2 = D1. To show that equation (21) holds, one may note that {tilde over (D)}2^T {tilde over (D)}2* (the first term in the equation) becomes D1^T D1*; but if (D1*)^T D1 is a diagonal matrix then so is {tilde over (D)}2^T {tilde over (D)}2*. Thus, according to this training sequence embodiment, one needs to only identify a sequence d1 that is symmetric about its center, with an impulse-like autocorrelation function, and set d2 equal to d1. This is shown in FIG. 7.


(2) (D1*)^T D1 = (Nt−L+1) IL, and {tilde over (D)}2 = D1. To show that equation (21) holds, one may note that the {tilde over (D)}2^T {tilde over (D)}2* first term in the equation also becomes D1^T D1*. Thus, according to this training sequence embodiment, one needs to only identify a sequence d1 with an impulse-like autocorrelation function, and set d2 equal to {tilde over (d)}1. This is shown in FIG. 8.


(3) (D1*)^T D1 = (Nt−L+1) IL, and {tilde over (D)}2* = D1. To show that equation (21) holds, one may note that the {tilde over (D)}2^T {tilde over (D)}2* first term in the equation becomes (D1*)^T D1. Thus, according to this training sequence embodiment, one needs to only identify a sequence d1 with an impulse-like autocorrelation function, and set d2 equal to {tilde over (d)}1*. This is shown in FIG. 9.
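The following is a numerical check of my own (not from the patent) that illustrates choice (3): with d2 = {tilde over (d)}1*, the cross-correlation conditions (19)-(20) vanish for an arbitrary d1, while conditions (21)-(22) additionally require d1 to have an impulse-like autocorrelation; the random 8-PSK d1 below is a placeholder.

```python
# Hedged sketch: verify that choice (3), d2 = reverse(conj(d1)), zeroes the
# off-diagonal (cross-correlation) terms of equations (19)-(20) for any d1.
import numpy as np

def conv_matrix(s, L):
    s = np.asarray(s)
    return np.array([s[k::-1][:L] for k in range(L - 1, len(s))])

rng = np.random.default_rng(0)
L, Nt = 3, 8
d1 = np.exp(1j * 2 * np.pi * rng.integers(0, 8, Nt) / 8)   # arbitrary 8-PSK symbols
d2 = np.conj(d1[::-1])                                      # choice (3)

D1, D2   = conv_matrix(d1, L), conv_matrix(d2, L)
D1t, D2t = conv_matrix(d1[::-1], L), conv_matrix(d2[::-1], L)   # "tilde" matrices

eq19 = -D1t.T @ np.conj(D2t) + np.conj(D2).T @ D1
eq20 = -D2t.T @ np.conj(D1t) + np.conj(D1).T @ D2
print(np.allclose(eq19, 0), np.allclose(eq20, 0))           # True True
```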


Training Sequences Employing Trellis Coding


Consider a trellis code with m memory elements and outputs from a constellation of size C, over a single channel with memory L−1. To perform joint equalization and decoding one needs a product trellis with 2^m C^(L−1) states. For a space-time trellis code with m memory elements, n transmit antennas and one receive antenna, over a channel with memory (L−1), one needs a product trellis with 2^m C^(n(L−1)) states.


The receiver can incorporate the space-time trellis code structure in the channel model to create an equivalent single-input, single-output channel, heq, of length m+L. The trellis, in such a case, involves C^(m+L−1) states. The approach disclosed herein uses a single training sequence at the input of the space-time trellis encoder to directly estimate the heq used by the joint space-time equalizer/decoder. The channel heq that incorporates the space-time code structure typically has a longer memory than the channels h1 and h2 (in a system where there are two transmitting antennas and one receiving antenna).


To illustrate, assume an encoder 30 as depicted in FIG. 10 that employs an 8-PSK constellation of symbols to encode data from a training sequence generator into a sequence s of symbols taken from the set e^{i2πp_k/8}, p_k = 0, 1, 2, . . . , 7, where the training sequences s1 and s2 are algorithmically derived within encoder 30 from sequence s. Specifically, assume that s1(k) = s(k), and that s2(k) = (−1)^{p_{k−1}} s(k−1), which means that s2(k) = s(k−1) when s(k−1) is an even member of the constellation (e^{i0}, e^{iπ/2}, e^{iπ}, and e^{i3π/2}), and s2(k) = −s(k−1) when s(k−1) is an odd member of the constellation.
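The mapping just described can be written compactly; the sketch below is illustrative only (the handling of the very first output symbol is an assumption, since s2(0) would depend on an unspecified s(−1), so values are produced from k = 1 onward).

```python
# Hedged sketch of the derivation above: s(k) = exp(i*2*pi*p_k/8),
# s1(k) = s(k) and s2(k) = (-1)^{p_{k-1}} s(k-1), for k = 1 .. Nt-1.
import numpy as np

def derive_antenna_sequences(p):
    p = np.asarray(p)                          # 8-PSK phase indices, 0..7
    s = np.exp(1j * 2 * np.pi * p / 8)
    s1 = s[1:]                                 # s1(k) = s(k)
    s2 = ((-1.0) ** p[:-1]) * s[:-1]           # s2(k) = (-1)^{p_{k-1}} s(k-1)
    return s1, s2

s1, s2 = derive_antenna_sequences([0, 3, 2, 7, 4])   # hypothetical phase indices
```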


With such an arrangement, the received signal at time k can be expressed as














y(k) = Σ_{i=0}^{L−1} h1(i) s(k−i) + Σ_{i=0}^{L−1} h2(i) (−1)^{p_{k−i−1}} s(k−i−1) + z(k)
     = Σ_{i=0}^{L} heq(i) s(k−i) + z(k),  (23)

where

heq(i, k) = h1(0)                               for i = 0
          = h1(i) + (−1)^{p_{k−i}} h2(i−1)      for 0 < i < L
          = (−1)^{p_{k−L}} h2(L−1)              for i = L.  (24)








A block of received signals (corresponding to the useful portion of the training sequence block) can be expressed in matrix form by

y=Sheq+z  (25)

where









S = [ s(L)       s(L−1)     …   s(0)
      s(L+1)     s(L)       …   s(1)
      ⋮          ⋮               ⋮
      s(Nt−1)    s(Nt−2)    …   s(Nt−L−1) ],   heq = [heq(0, L); heq(1, L+1); . . . ; heq(L, Nt−1)]  (26)








and following the principles disclosed above, it can be realized that when the training sequence is properly selected so that S^H S is a diagonal matrix, i.e., S^H S = (Nt−L) IL+1 (the (L+1)×(L+1) identity matrix scaled by Nt−L), an estimate of heq, that is, ĥeq, is obtained from











ĥeq = S^H y / (Nt − L).  (27)








If the training sequence were to comprise only the even constellation symbols, e^{i2πk/8}, k = 0, 2, 4, 6, per equation (24), the elements of ĥeq would correspond to

heq^even = [h1(0), h1(1)+h2(0), h1(2)+h2(1), . . . h1(L−1)+h2(L−2), h2(L−1)].  (28)

If the training sequence were to comprise only the odd constellation symbols, e^{i2πk/8}, k = 1, 3, 5, 7, the elements of ĥeq would correspond to

heq^odd = [h1(0), h1(1)−h2(0), h1(2)−h2(1), . . . h1(L−1)−h2(L−2), −h2(L−1)].  (29)

If the training sequence were to comprise a segment of only even constellation symbols followed by only odd constellation symbols (or vice versa), then channel estimator 22 within receiver 20 can determine the heq^even coefficients from the segment that transmitted only the even constellation symbols, and can determine the heq^odd coefficients from the segment that transmitted only the odd constellation symbols. Once both heq^even and heq^odd are known, estimator 22 can obtain the coefficients of h1 from










[h1(0), h1(1), h1(2), . . . , h1(L−1)] = (heq^even + heq^odd)/2  (30)








and the coefficients of h2 from










[h2(0), h2(1), h2(2), . . . , h2(L−1)] = (heq^even − heq^odd)/2.  (31)
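Equations (30) and (31) can be applied entrywise once the two equivalent-channel estimates are available; the sketch below is illustrative (the entry selection simply reflects the structure of equations (28)-(29), and the function name is an assumption).

```python
# Hedged sketch of equations (30)-(31): per (28)-(29) each equivalent-channel
# estimate has L+1 entries; half the sum isolates h1 (first L entries) and
# half the difference isolates h2 (last L entries).
import numpy as np

def split_equivalent_channels(heq_even, heq_odd):
    avg_sum  = (np.asarray(heq_even) + np.asarray(heq_odd)) / 2
    avg_diff = (np.asarray(heq_even) - np.asarray(heq_odd)) / 2
    return avg_sum[:-1], avg_diff[1:]          # (h1 estimate, h2 estimate)
```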








What remains, then, is to create a single training sequence s of length Nt where one half of it (the s^even portion) consists of only even constellation symbols (even sub-constellation), and another half of it (the s^odd portion) consists of only odd constellation symbols (odd sub-constellation). The sequences s1 and s2 of length Nt are derived from the sequence s by means of the 8-PSK space-time trellis encoder. The sequences s1 and s2 must also meet the requirements of equation (8). Once s^even is found, s^odd can simply be

s^odd = α·s^even, where α = e^{iπk/4} for any k = 1, 3, 5, 7.  (32)

Therefore, the search for sequence s is reduced from a search in the space of 8^Nt sequences to a search for s^even in the space of 4^(Nt/2) sequences such that, when concatenated with s^odd that is computed from s^even as specified in equation (32), it yields a sequence s that has an autocorrelation function that is, or is close to being, impulse-like.


For a training sequence of length Nt=26, with an 8-PSK space-time trellis encoder, we have identified the 12 training sequences specified in Table 1 below.











TABLE 1
sequence #   α            s^even
 1           exp(i5π/4)   −1 1 1 1 1 −1 −i −1 1 1 −1 1 1
 2           exp(i3π/4)   1 1 −1 1 i i 1 −i i −1 −1 −1 1
 3           exp(iπ/4)    1 −1 −1 −i i −i 1 1 1 −i −1 1 1
 4           exp(iπ/4)    1 −1 −1 −i 1 −1 1 −i −i −i −1 1 1
 5           exp(iπ/4)    1 i 1 1 i −1 −1 i 1 −1 1 i 1
 6           exp(i3π/4)   1 i 1 i −1 −1 1 −1 −1 i 1 i 1
 7           exp(i7π/4)   1 −i 1 1 −i −1 −1 −i 1 −1 1 −i 1
 8           exp(i5π/4)   1 −i 1 −1 i 1 −1 −i 1 1 1 −i 1
 9           exp(i3π/4)   −1 1 1 1 −1 −1 −i −1 −1 1 −1 1 1
10           exp(i7π/4)   −1 i −1 −i 1 −i i i 1 i 1 −i 1
11           exp(iπ/4)    −1 −i −1 i 1 i −i −i 1 −i 1 i 1
12           exp(i3π/4)   −1 −i −1 i −1 i −i −i −1 −i 1 i 1










Construction of Training Sequence


While the above-disclosed materials provide a very significant improvement over the prior art, there is still the requirement of selecting a sequence s1 with an impulse-like autocorrelation function. The following discloses one approach for identifying such a sequence without having to perform an exhaustive search.


A root-of-unity sequence with alphabet size N has complex roots of unity elements of the form










e^{i2πk/N}, k = 0, 1, 2, . . . , (N−1).







As indicated above, the prior art has shown that perfect roots-of-unity sequences (PRUS) can be found for any training sequence of length Nt, as long as no constraint is imposed on the value of N. As also indicated above, however, it is considered disadvantageous to not limit N to a power of 2. Table 2 presents the number of PRUSs that were found to exist (through exhaustive search) for different sequence lengths when N is restricted to 2 (BPSK), 4 (QPSK), or 8 (8-PSK). Cell entries in Table 2 with "-" indicate that a sequence does not exist, and blank cells indicate that an exhaustive search for a sequence was not performed.


























TABLE 2
Nt=     2     3     4     5     6     7     8     9     10    11    12    13    14    15    16    17    18
BPSK                8
QPSK    8           32                      128                                             6144
8-PSK   16          128                     512














A sequence s of length Nt is called L-perfect if the corresponding training matrix S of dimension (Nt−L+1)×L satisfies equation (8). Thus, an L-perfect sequence of length Nt is optimal for a channel with L taps. It can be shown that the length Nt of an L-perfect sequence from a 2^p-alphabet can only be equal to











Nt = 2(L + i)        for L odd
   = 2(L + i) − 1    for L even,   for i = 0, 1, . . . ,  (33)








which is a necessary, but not sufficient, condition for L-perfect sequences of length Nt. Table 3 shows the minimum necessary Nt for L=2, 3, . . . 10, the size of the corresponding matrix S, and the results of an exhaustive search for L-perfect sequences (indicating the number of such sequences that were found). Cell entries marked "x" indicate that sequences exist, but the number of such sequences is not known.


















TABLE 3
L       2       3       4       5       6       7       8       9       10
Nt      3       6       7       10      11      14      15      18      19
S       2 × 2   4 × 3   4 × 4   6 × 5   6 × 6   8 × 7   8 × 8   10 × 9  10 × 10
BPSK    4       8       8
QPSK    16      64      64                      128             x
8-PSK   64      512     512                     x               x









It is known that with a PRUS of a given length, NPRUS, one can estimate up to L=NPRUS unknowns. It can be shown that a training sequence of length Nt is also an L-perfect training sequence if

Nt=kNPRUS+L−1 and k≧1.  (34)

Accordingly, an L-perfect sequence of length kNPRUS+L−1 can be constructed by selecting an NPRUS sequence, repeating it k times, and circularly extending it by L−1 symbols. Restated and amplified somewhat, for a selected PRUS of a given NPRUS, i.e.,

sp(NPRUS)=[sp(0)sp(1) . . . sp(NPRUS−1)],  (35)

the L-perfect sequence of length kNPRUS+L−1 is created from a concatenation of k sp(NPRUS) sequences followed by the first L−1 symbols of sp(NPRUS), or from a concatenation of the last L−1 symbols of sp(NPRUS) followed by k sp(NPRUS) sequences.
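A sketch of this construction follows (illustrative; the length-4 PRUS is just a convenient example, not one mandated by the patent, and the function name is an assumption).

```python
# Hedged sketch of equations (34)-(35): repeat a PRUS k times and circularly
# extend it by L-1 symbols, by appending the first L-1 symbols (or, per the
# alternative in the text, prepending the last L-1 symbols).
import numpy as np

def l_perfect_from_prus(sp, k, L, prepend=False):
    sp = np.asarray(sp)
    body = np.tile(sp, k)
    if prepend:
        return np.concatenate([sp[len(sp) - (L - 1):], body])
    return np.concatenate([body, sp[:L - 1]])

s = l_perfect_from_prus([1, 1, 1, -1], k=2, L=3)   # length k*N_PRUS + L - 1 = 10
print(len(s))                                      # 10
```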


To illustrate, assume that the number of channel "taps" that need to be estimated, L, is 5, and that a QPSK alphabet is desired to be used. From the above it is known that NPRUS must be equal to or greater than 5, and from Table 2 it is known that the smallest NPRUS that can be found for QPSK that is larger than 5 is NPRUS=8. Employing equation (34) yields














Nt = kNPRUS + L − 1
   = k·8 + 5 − 1
   = 12, 20, 28, . . .   for k = 1, 2, 3, . . .  (36)







While an L-perfect training sequence cannot be constructed from PRUS sequences for values of Nt other than values derived by operation of equation (34), it is known that, nevertheless, L-perfect sequences may exist. The only problem is that it may be prohibitively difficult to find them. However, in accordance with the approach disclosed below, sub-optimal solutions can be created quite easily.


If it is given that the training sequence is Nt long, one can express this length by

Nt=kNPRUS+L−1+M,where M>0  (37)

In accord with our approach, select a value of NPRUS≧L that minimizes M, create a sequence of length kNPRUS+L−1 as disclosed above, and then extend that sequence by adding M symbols. The M added symbols can be found by selecting, through an exhaustive search, the symbols that lead to the lowest estimation MSE. Alternatively, select a value of NPRUS≧L that minimizes M′ in the equation,

Nt=kNPRUS+L−1−M′,where M′>0  (38)

create a sequence of length kNPRUS+L−1 as disclosed above, and then drop the last (or first) M′ symbols.
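A sketch of the truncation alternative of equation (38) follows (illustrative; the choice of k and the placeholder PRUS are assumptions, and the symbol-appending alternative of equation (37), which needs an exhaustive search over the M added symbols, is not shown).

```python
# Hedged sketch around equation (38): build the k*N_PRUS + L - 1 base sequence
# with the smallest k that reaches the target Nt, then drop the last M' symbols.
import numpy as np

def truncated_training_sequence(sp, L, Nt):
    sp = np.asarray(sp)
    k = 1
    while k * len(sp) + L - 1 < Nt:
        k += 1
    base = np.concatenate([np.tile(sp, k), sp[:L - 1]])
    return base[:Nt]                     # M' = len(base) - Nt symbols dropped

s = truncated_training_sequence([1, 1, 1, -1], L=3, Nt=9)
print(len(s))   # 9
```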


The receiver shown in FIG. 2 includes the channel estimator 22, which takes the received signal and multiplies it by S^H as appropriate; see equation (12), above.

Claims
  • 1. A method comprising: developing a plurality of symbol streams from an information stream;mapping symbols of said symbol streams onto a standard constellation to create a plurality of mapped streams;applying respective ones of the mapped streams to different ones of a plurality of antennas during a first time period;generating n training sequences from a training sequence, where n is an integer greater than one;mapping said n training sequences onto the standard constellation to create n mapped training sequences, where said n training sequences have a common length; andapplying respective ones of the n mapped training sequences to different ones of said antennas during a second time period;wherein the training sequence is a root sequence composed of Nth roots of unity, where N is a power of 2 integer that is not smaller than L, where L is an integer and corresponds to a number of channel taps of a channel represented by a filter with L channel taps from each of said plurality of antennas to a receiver antenna.
  • 2. The method as recited in claim 1 where said n training sequences have an impulse-like autocorrelation function and zero cross correlation.
  • 3. The method as recited in claim 1 wherein said training sequence is a sequence s, and generating said n training sequences includes creating a first training sequence that equals −s concatenated with s, and creating a second training sequence that is equal to s concatenated with s.
  • 4. The method as recited in claim 1 wherein said root sequence consists of subsequence d2 that has at least L symbols concatenated to subsequence d1 that has at least L symbols, a first training sequence of said n training sequences is sequence −{tilde over (d)}2* concatenated to sequence d1, where sequence {tilde over (d)}2* corresponds to sequence d2 with its elements in reverse order and each element converted to its complex conjugate; and a second training sequence of said n training sequences is sequence −d1 concatenated to sequence d2.
  • 5. The method as recited in claim 1 wherein generating said n training sequences includes, generating a first training sequence as a sequence −d concatenated to sequence {tilde over (d)}*, where sequence d is the root sequence, and sequence {tilde over (d)}* corresponds to sequence d with its elements in reverse order and each element in sequence {tilde over (d)}* is the complex conjugate of its corresponding element in sequence d; andgenerating a second training sequence as the sequence d concatenated to a sequence d*, where each element in the sequence d* is the complex conjugate of its corresponding element in the sequence d.
  • 6. A method comprising: developing a plurality of symbol streams from an information stream;mapping symbols of said symbol streams onto a standard constellation to create a plurality of mapped streams;applying respective ones of the mapped streams to different ones of a plurality of antennas during a first time period;generating n training sequences from a training sequence, where n is an integer greater than one;mapping said n training sequences onto the standard constellation to create n mapped training sequences, where said n training sequences have a common length; andapplying respective ones of the n mapped training sequences to different ones of said antennas during a second time period;wherein a first training sequence of said n training sequences is sequence −d concatenated to sequence d*, where sequence d is a root sequence and each element in sequence d* is the complex conjugate of its corresponding element in sequence d; andwherein a second training sequence of said n training sequences is a sequence {tilde over (d)} concatenated to sequence d*, and where the sequence {tilde over (d)} corresponds to sequence d with its elements in reverse order.
  • 7. A transmitter comprising: a plurality of transmitting antennas;a first encoder responsive to an applied information stream, said first encoder developing a plurality of symbol streams;a constellation mapper responsive to said symbol streams, to map symbols of each of said symbol streams onto a standard signal constellation to create a plurality of mapped streams, said constellation mapper coupled to supply respective ones of the mapped streams to the transmitting antennas;a training generator to supply to a second encoder a training sequence of symbols;said second encoder configured to develop n training sequences of symbols using said training sequence, said n training sequences being different; andsaid constellation mapper is configured to map said symbols of said n training sequences onto a constellation and is coupled to supply said transmitting antennas respective n mapped training sequences;wherein the training sequence is a root sequence composed of Nth roots of unity, where N is a power of 2 integer that is not smaller than L, where L is an integer and corresponds to a number of channel taps of a channel represented by a filter with L channel taps from each of said plurality of antennas to a receiver antenna.
  • 8. The transmitter as recited in claim 7 where said n training sequences have an impulse-like autocorrelation function and zero cross correlation.
  • 9. The transmitter as recited in claim 7 wherein said training generator creates a sequence s, and said second encoder creates the first training sequence that equals −s concatenated with s, and a second training sequence that is equal to s concatenated with s.
  • 10. The transmitter as recited in claim 7 wherein said root sequence consists of subsequence d2 that has at least L symbols concatenated to subsequence d1 that has at least L symbols, a first training sequence of said n training sequences, is sequence −{tilde over (d)}2* concatenated to sequence d1, where sequence {tilde over (d)}2* corresponds to sequence d2 with its elements in reverse order and each element converted to its complex conjugate; and a second training sequence of said n training sequences is sequence −d1 concatenated to sequence d2.
  • 11. The transmitter as recited in claim 7 wherein, said training sequence is the root sequence;a first training sequence of said n training sequences is sequence {tilde over (d)}* concatenated to sequence −d, where sequence d is the root sequence, and sequence {tilde over (d)}* corresponds to sequence d with its elements in reverse order and each element in sequence {tilde over (d)}* is the complex conjugate of its corresponding element in sequence d; anda second training sequence of said n training sequences is sequence d* concatenated to sequence d, where each element in sequence d* is the complex conjugate of its corresponding element in sequence d.
  • 12. A transmitter comprising: a plurality of transmitting antennas;a first encoder responsive to an applied information stream, said first encoder developing a plurality of symbol streams;a constellation mapper responsive to said symbol streams, to map symbols of each of said symbol streams onto a standard signal constellation to create a plurality of mapped streams, said constellation mapper coupled to supply respective ones of the mapped streams to the transmitting antennas;a training generator to supply to a second encoder a training sequence of symbols;said second encoder configured to develop n training sequences of symbols using said training sequence, said n training sequences being different; andsaid constellation mapper is configured to map said symbols of said n training sequences onto a constellation and is coupled to supply said transmitting antennas respective n mapped training sequences;wherein said training sequence is a root sequence;a first training sequence of said n training sequences is sequence −d concatenated to sequence d*, where sequence d is the root sequence and each element in sequence d* is the complex conjugate of its corresponding element in sequence d; anda second training sequence of said n training sequences is sequence {tilde over (d)} concatenated with the sequence d*, and where sequence {tilde over (d)} corresponds to sequence d with its elements in reverse order.
RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 12/777,421, filed May 11, 2010, now U.S. Pat. No. 8,942,335, which is a continuation of U.S. patent application Ser. No. 11/190,403, filed Jul. 27, 2005 (now U.S. Pat. No. 7,746,945 (reissued as reissue Pat. No. RE44,827)), which is a continuation of U.S. patent application Ser. No. 09/956,648, filed Sep. 20, 2001, now U.S. Pat. No. 6,959,047, which claims benefit from provisional application No. 60/282,647, filed Apr. 9, 2001, all of which applications are incorporated herein by reference.

US Referenced Citations (23)
Number Name Date Kind
5642379 Bremer Jun 1997 A
5666378 Marchetto Sep 1997 A
6205127 Ramesh Mar 2001 B1
6369758 Zhang Apr 2002 B1
6424679 Dabak et al. Jul 2002 B1
6449314 Dabak et al. Sep 2002 B1
6643338 Dabak et al. Nov 2003 B1
6674817 Dolle et al. Jan 2004 B1
6741643 McGibney May 2004 B1
6788661 Ylitalo et al. Sep 2004 B1
7006579 Kuchi et al. Feb 2006 B2
7139324 Ylitalo et al. Nov 2006 B1
7200182 Dabak et al. Apr 2007 B2
7203249 Raleigh et al. Apr 2007 B2
7272192 Lindskog et al. Sep 2007 B2
7366266 Debak et al. Apr 2008 B2
7469018 Ionescu Dec 2008 B2
7613259 Debak et al. Nov 2009 B2
7701916 Debak et al. Apr 2010 B2
20020067309 Baker et al. Jun 2002 A1
20020111142 Klimovitch Aug 2002 A1
20050157683 Ylitalo et al. Jul 2005 A1
20050157684 Ylitalo et al. Jul 2005 A1
Related Publications (1)
Number Date Country
20150103940 A1 Apr 2015 US
Provisional Applications (1)
Number Date Country
60282647 Apr 2001 US
Continuations (3)
Number Date Country
Parent 12777421 May 2010 US
Child 14575859 US
Parent 11190403 Jul 2005 US
Child 12777421 US
Parent 09956648 Sep 2001 US
Child 11190403 US