SYSTEM AND ALGORITHM FOR MULTIPATH MITIGATION

Information

  • Patent Application
  • 20130329840
  • Publication Number
    20130329840
  • Date Filed
    June 11, 2013
  • Date Published
    December 12, 2013
Abstract
A system and method for optimally combining multi-path signals is presented. A first signal is received that traveled a first path from a transmitter to a receiving location and a second signal is received that traveled a different second path from the transmitter to the same receiving location. The paths are different so that the first and second signals contain the same signal data but the first signal has a first distortion that is different than a second distortion in the second signal. According to an objective function, the method adaptively generates a first weight value and a second weight value. The first and second weight values are applied to the respective first and second signals to produce respective first and second weighted signals. The first and second weighted signals are linearly combined producing a combined signal with a combined signal degradation.
Description
BACKGROUND

1. Field of Invention


The current invention relates generally to apparatus, systems and methods for communicating. More particularly, the apparatus, systems and methods relate to wireless communication. Specifically, the apparatus, systems and methods provide for improved reception of signals by analysing two or more signals at the same time and dynamically weighting the signals to produce a better resulting signal.


2. Description of Related Art


In digital radio communication systems a signal is transmitted from a transmitting antenna to a receiving antenna via a channel. The channel comprises open space, generally containing objects such as the earth and its topographic features (mountains, oceans) as well as buildings, vehicles and other man-made obstructions, in addition to atmospheric gases, and is characterized by several parameters and effects. The primary result of propagation through the channel is an expansion of the signal wave-front (energy) in and along multiple directions, including directions other than the nominally desired direction corresponding to a path between the receiver and transmitter.


The use of antennas having directivity (radiation patterns) reduces the propagation (respectively collection) of energy in (respectively, from) undesired directions. However, as is well known, significant energy can reach the receiving antenna after traveling along paths other than the direct path between the transmitter and receiver. Indeed, in some applications the direct path, also referred to as the line-of-sight path, may not exist at all and the received energy is actually carried by a superposition of waves that have been reflected, refracted and generally scattered during propagation.


The multiplicity of propagation paths and resulting effects on the output of the receiving antenna is referred to as “multipath”. The effects of multipath are determined by the linear superposition (addition) of the multiple electromagnetic waves (or more precisely, electromagnetic fields) at the receiver antenna. This superposition can result in partial cancellation of the received field at the antenna and thus a reduction in received signal energy. This is the well-known and familiar fading process.


In addition, when the respective components of the multipath ensemble arrive at the antenna with distinct time delays (that is, having traversed different path lengths and thus having different propagation times), the components combine in a manner that may result in distortion of the signal. This is also a well-known phenomenon known as frequency selective fading, a term that arises from an analysis of the effect of the differential propagation delays of the respective waves (dispersion) in the frequency domain. What is needed is a better way of receiving multipath signals.


SUMMARY

The preferred embodiment of the invention is a method for optimally combining multi-path signals. The method begins by receiving a first signal that traveled a first path from a transmitter to a receiving location and receiving a second signal that traveled a second path from the transmitter to the same receiving location. The second path is different than the first path so that the first signal contains signal data and has a first distortion that is different than a second distortion in the second signal. The second signal contains the same signal data as the first signal. The first distortion and the second distortion can correspond to different time intervals of the signals or other parameters.


According to an objective function, the method adaptively generates a first weight value and a second weight value. The first weight value and the second weight value can be generated so that a combined signal to noise ratio (SNR) of the combined signal (introduced below) is maximized. The first weight value is applied to the first signal to produce a first weighted signal. Similarly, the second weight value is applied to the second signal to produce a second weighted signal.


The first weighted signal and the second weighted signal are linearly combined, to produce a combined signal with a combined signal degradation. The combined signal degradation is less than the first degradation and the combined signal degradation is less than the second degradation.


Another configuration of the preferred embodiment is a system for optimally combining multi-path signals. The system includes a first channel sub-processor logic, a second channel sub-processor logic, an objective function and adaption and combining logic. The first channel sub-processor logic receives a first signal that traveled a first path from a transmitter to the first channel sub-processor. The first signal contains signal data and has a first distortion. The second channel sub-processor logic receives a second signal that traveled a second path from a transmitter to the first channel sub-processor. The second path is different than the first path, and the second signal contains the same signal data and has a second distortion that is different than the first distortion.


The adaption and combining logic adaptively generates according to the objective function a first weight value and adaptively generates according to the objective function a second weight value. The first channel sub-processor logic applies the first weight value to the first signal to produce a first weighted signal. Similarly, the second channel sub-processor logic applies the second weight value to the second signal to produce a second weighted signal. The adaption and combining logic linearly combines the first weighted signal and the second weighted signal to produce a combined signal with a combined signal degradation. The combined signal degradation is less than the first degradation and the combined signal degradation is less than the second degradation.





BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

One or more preferred embodiments that illustrate the best mode(s) are set forth in the drawings and in the following description. The appended claims particularly and distinctly point out and set forth the invention.


The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate various example methods, and other example embodiments of various aspects of the invention. It will be appreciated that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. One of ordinary skill in the art will appreciate that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of another element may be implemented as an external component and vice versa. Furthermore, elements may not be drawn to scale.



FIGS. 1 and 2 are schematic drawings illustrating a system model of the preferred embodiment.



FIG. 3 is a schematic drawing showing a configuration of the preferred embodiment system in which tap weights are adjusted to maximize the quality of combined output.



FIG. 4 is a schematic drawing illustrating an example system using 4 taps per branch; and



FIG. 5 is a graph illustrating the adaptation of the tap coefficients in one embodiment of the system of FIG. 4.



FIG. 6 illustrates a preferred embodiment of a method for optimally combining multi-path signals.





Similar numbers refer to similar parts throughout the drawings.


DETAILED DESCRIPTION

Before referring to specific figures below, the preferred embodiment of an algorithm to more accurately receive and detect multipath signals will be described. When there are multiple receiving antennas available for reception of the signal of interest there is an opportunity to realize a significant improvement in system performance by optimally combining the respective antenna outputs. It is customary to refer to the respective receiving antennas and corresponding sub-channels (propagation paths) by which energy reaches them as diversity “branches” in recognition of the fact that the impairment (fading or dispersion) on the different branches is usually uncorrelated from branch to branch, or at least not strongly correlated, resulting in a diversification of the signal quality that reduces the probability of a simultaneously poor signal condition at all branch outputs. More specifically, when the multiple antenna outputs exhibit distinct fading characteristics, as is usually the case, one may apply weights to the respective antenna outputs and then combine (add) them together so as to maximize the signal level of the weighted sum relative to the combined noise. Since the desired signal and the corresponding noise in each branch (at each antenna) receives the same weight it is necessary to choose the weights so that they optimize the signal output while controlling the noise output level in some manner. In particular, as is well known, it is desirable to maximize the signal-to-noise ratio (SNR) of the composite weighted combination of the multiple branch outputs. If the signals from the respective antennas also exhibit temporal (time domain) distortion (frequency selective fading) one may further process the respective antenna output signals with appropriate filters that are adapted to compensate for the distortion before performing the weighted combining. This more general case subsumes the pure weighted combining as a special case. In either case, the result of combining the respective antenna output signals is to mitigate the effect of multipath. Increases in radio link margin, or range, are thereby realized.
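For illustration only, the textbook weighted-combining idea described above can be sketched as follows. This is a minimal, maximal-ratio-style example that assumes the per-branch complex gains and noise variances are known quantities supplied by the caller; the adaptive algorithm described later does not require that knowledge.

import numpy as np

def combine_branches(branch_samples, branch_gains, noise_vars):
    """Weighted linear combining of diversity branch outputs (illustrative sketch).

    branch_samples: (N, T) complex baseband samples, one row per branch
    branch_gains:   (N,) complex channel gains, assumed known for this sketch
    noise_vars:     (N,) noise variances per branch
    """
    # Classical maximal-ratio weights: conjugate gain divided by noise power.
    weights = np.conj(branch_gains) / noise_vars
    combined = weights @ branch_samples          # weighted sum over the branches
    # With these weights the combined SNR is the sum of the per-branch SNRs.
    combined_snr = np.sum(np.abs(branch_gains) ** 2 / noise_vars)
    return combined, combined_snr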


It is also to be noted that the algorithm for adaptation and optimization of the composite signal-to-noise ratio at the output of a diversity combiner is not limited in its applicability to the case of antenna (spatial) diversity. It is also applicable to systems in which the diversity branches are realized by other physical mechanisms that produce multiple signal outputs that exhibit a diversity of qualities (as measured, for example, by the respective branch signal-to-noise ratios) and which are available for weighted combining. Such multiplicity of signals may be realized, for example, by transmitting the signal on two or more distinct radio carrier frequencies (frequency diversity) or in different time intervals (time diversity). In all cases the objective is to adapt the weighting that is applied so as to maximize the combined signal quality. It is further a desirable feature of all such diversity combining systems to be able to track the relative quality of the respective branches and to control the weighting process in a manner that corresponds to the tracked relative qualities.



FIG. 1 illustrates the preferred embodiment of a system 1 that can process signals transmitted by a single transmitter and received from different paths (multipath signals) that have different distortion (impairment, addition of noise, the reduction or amplification of signal level, the time domain distortion of the signal or other degradation). However, each multipath signal carries the same signal data. For example, a transmitter 3 can transmit a signal along different channels CH-1, CH-2 . . . CH-N as illustrated. Signals on the channels CH-1, CH-2 . . . CH-N can arrive at their respective antennas AN-1, AN-2 . . . AN-N at different times, with different phases and/or with other different properties. The signals are then processed by a multipath mitigation logic 5. Each signal from each antenna AN-1, AN-2 . . . AN-N is processed in a respective channel sub-processor logic CS-1, CS-2 . . . CS-N. As discussed in detail later, the channel sub-processor logics CS-1, CS-2 . . . CS-N can apply weight values to their corresponding signals so as to maximize the signal quality when two or more of the signals output from the channel sub-processor logics CS-1, CS-2 . . . CS-N are combined by an adaption and combining logic 7. Maximizing the signal data quality increases the chances that the useful signal data that was originally transmitted is recovered. As discussed in detail below, the adaption and combining logic 7 runs an algorithm that determines each of the weights to be applied to the signals being processed by each of the channel sub-processor logics CS-1, CS-2 . . . CS-N.


“Logic”, as used herein, includes but is not limited to hardware, firmware, software and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. For example, based on a desired application or needs, logic may include a software controlled microprocessor, discrete logic like an application specific integrated circuit (ASIC), a programmed logic device, a memory device containing instructions, or the like. Logic may include one or more gates, combinations of gates, or other circuit components. Logic may also be fully embodied as software. Where multiple logics are described, it may be possible to incorporate the multiple logics into one physical logic. Similarly, where a single logic is described, it may be possible to distribute that single logic between multiple physical logics.



FIG. 2 illustrates an example system 10 that has two diversity branches. This system includes a propagation channel 12 and respective sub-channels (paths) 14 by which energy propagates between the transmitter 16 and the two receiving antenna apertures 18. It will be clear to one of ordinary skill in the art that the specific realization of the respective receiving antennas can take many different forms all of which provide a means of collecting signal energy and providing an output corresponding to the respective branch. As described above, the diversity branches may also be realized by other physical means. In the following detailed description it is assumed that there are two branches and that the fading condition and additive noise in the two branches are statistically independent. This is a generally occurring scenario. However, as will be understood by those of ordinary skill in the art, neither the fading/multipath processes nor the noise processes are required to be uncorrelated or independent in order for the system to provide a benefit. Also, two channels are chosen for ease of explanation and in other embodiments more than two channels may be used.


The system 10 further includes a dual receiver front end 20 and a multipath mitigation logic 22 similar to the multipath mitigation logic 5 of FIG. 1. The system 10 can include a demodulator 24 that can be used to recover the original signal.



FIG. 3 illustrates one configuration of the preferred embodiment of a multipath mitigation system 30. In particular, this figure illustrates example components used to add weights to different multipath signals using taps as understood by one of ordinary skill in the art. Signals x1 and x2 are received (for example at two different antennas) from two different paths so that one signal has one or more different properties (phase change, delay, etc.) than the other signal, but both signals carry the same information. These signals can be complex baseband samples from the two (in this example) branches. The noise processes are denoted by w1 and w2.


Each signal respectively is weighted by a weight α0 and β0 by using multipliers as shown. If other weights are desired then the signals can also be passed through delay elements 32, 33 and additional weights α1 . . . αL and β1 . . . βL can be added. In operation, the tap weights are to be adjusted to maximize the quality of the combined output according to certain metrics as discussed below. The weighted signals are input to filters 34, 35 that can be equalizer filters. The filtered signals y1[n], y2[n] are then combined in a diversity combiner 40.
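As a rough illustration of the forward signal path in FIG. 3, the following sketch applies complex tap weights to delayed baseband samples on each branch and sums the two equalized outputs. The tap values and filter lengths here are placeholders chosen for the example, not values taken from the patent.

import numpy as np

def branch_output(x, taps):
    """Apply a tapped-delay-line (FIR) weighting to one branch.

    x:    1-D array of complex baseband samples for the branch
    taps: complex tap weights [a0, a1, ..., aL] (a0 only in the single-tap case)
    """
    # np.convolve implements sum_k taps[k] * x[n-k], i.e. the delay-line weighting.
    return np.convolve(x, taps, mode="full")[: len(x)]

def diversity_combine(x1, x2, taps1, taps2):
    """Equalize each branch with its own taps and linearly combine them."""
    y1 = branch_output(x1, taps1)
    y2 = branch_output(x2, taps2)
    return y1 + y2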


The embodiment of FIG. 3 runs an adaptation algorithm based on a set of norms that can be defined independently of the data-specific signal (since the latter is not demodulated). One such measure is






P0 = ∥y1 − y2∥


This measure is related to the objective function which is simply the output SNR, which is maximized by driving ∥y1+y2∥ toward a maximum while holding the noise power constant. A direct maximization of the output SNR is described later.


Minimization of ∥y1−y2∥ will drive the two equalized signals toward a match. Of course, it may be necessary to ensure the matched value is not zero. In addition, except for the single tap (L=0) case, the signals must also be driven toward their distortion-free characteristics. For the first, it is sufficient to constrain the equalizer tap weight vector to a fixed non-zero norm. This will be discussed further below. For the second, to equalize (and prevent) any distortion of the signal components, the system 30 can attempt to match the spectral characteristics of the signal to the nominal characteristics of the undistorted signal. This may be done by correlation matching, as follows.
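Purely as an illustration of the two ideas in this paragraph, the matching measure and the fixed-norm constraint, a small sketch might look like the following; the update step itself is shown later, so this only evaluates P0 and renormalizes the stacked tap vector, with a unit target norm assumed here for the example. The correlation matching itself is defined next.

import numpy as np

def matching_measure(y1, y2):
    """P0 = ||y1 - y2||: how far apart the two equalized branch outputs are."""
    return np.linalg.norm(y1 - y2)

def renormalize_taps(taps1, taps2, target_norm=1.0):
    """Constrain the concatenated equalizer tap vector to a fixed non-zero norm."""
    stacked = np.concatenate([taps1, taps2])
    scale = target_norm / np.linalg.norm(stacked)
    return taps1 * scale, taps2 * scale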


Define the autocorrelations





ρk(1) = ⟨ yn(1) yn+k(1) ⟩





ρk(2) = ⟨ yn(2) yn+k(2) ⟩





and cross correlations





γk = ⟨ yn(1) yn+k(2) ⟩


The angle bracket notation denotes an average. As is well known, there are several types of the required averages that are commonly realized in digital signal processing applications, including time averages, ensemble averages, recursively estimated averages or moving (finite time) averages. The latter are of course suitable for online processing while the others are more appropriate for theoretical evaluations.


A distortion metric can be based on the correlations as follows






Dk = | γk − √(ρ0(1) ρ0(2)) ck |


where the sequence {ck} takes on predetermined target (auto-)correlation values. A useful prescription for the sequence {ck} is the delta sequence, which corresponds to a condition of zero ISI.
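As an illustration of the correlation-matching idea, the sketch below estimates the required averages as finite time averages (one of the averaging options mentioned above) and evaluates Dk. A conjugated correlation is assumed here for the complex baseband samples, and the delta-sequence target is noted in the trailing comment.

import numpy as np

def lag_correlation(a, b, k):
    """Time-average estimate of <a[n] b[n+k]*> over the available samples."""
    n = len(a) - k
    return np.mean(a[:n] * np.conj(b[k:k + n]))

def distortion_metric(y1, y2, k, c_k):
    """Dk = | gamma_k - sqrt(rho0(1) * rho0(2)) * c_k |  (correlation matching)."""
    rho0_1 = lag_correlation(y1, y1, 0).real
    rho0_2 = lag_correlation(y2, y2, 0).real
    gamma_k = lag_correlation(y1, y2, k)
    return abs(gamma_k - np.sqrt(rho0_1 * rho0_2) * c_k)

# For the delta-sequence target, c_k = 1 at k = 0 and c_k = 0 otherwise.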


Stochastic Adaptation


FIG. 4 illustrates one example system 50 that performs multipath mitigation using 4 taps per branch. The system includes channel registers 52, 53, tap update logic 56 and an adder (diversity combiner). The tapped delay structure communicates with the tap update processor by making available the tap coefficients and the register contents (real and imaginary parts of each—thus, in the working example there are 16 data values passed between each channel register and the processor).


The tap update process can be computed in 7 steps (a rough sketch of these steps follows the list):

    • 1. The vectors I and Q are formed by concatenating the real and imaginary vectors from the respective channels (this is an 8 element result)
    • 2. The vector Γ=[I*(p1+p4)+Q*(p2−p3); I*(p3−p2)+Q*(p1+p4)] is formed (16 elements)
    • 3. The quantity q=(p1)2+(p2)2+(p3)2+(p4)2+2*p1*p4−2*p2*p3 is computed
    • 4. The vector ΓT=Γ−q*γ is formed (γ is the current tap coefficient vector arranged as a 16 element array containing the real taps of channel 1, the real taps of channel 2, the imaginary taps of channel 1 and the imaginary taps of channel 2, in that order).
    • 5. The vector ΓT is normalized by dividing by its length (norm), which is the square root of the sum of the squares. The square root can be approximate.
    • 6. The taps are updated by the equation γ ← γ + ε·ΓT, where ε is the adaptation step size.
    • 7. The taps are normalized.

      FIG. 5 illustrates the adaptation of the tap coefficients.
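The seven update steps listed above can be sketched roughly as follows for the two-branch, 4-taps-per-branch example. The step-size value and the exact ordering of the real and imaginary blocks inside the 16-element arrays are assumptions made for illustration.

import numpy as np

def tap_update(I1, Q1, I2, Q2, gamma, p1, p2, p3, p4, eps=0.01):
    """One iteration of the 7-step tap update (illustrative sketch).

    I1, Q1, I2, Q2: length-4 real register contents for each channel
    gamma: 16-element tap vector [Re taps ch1, Re taps ch2, Im taps ch1, Im taps ch2]
    p1..p4: current partial outputs
    """
    # 1. Concatenate the real and imaginary register vectors (8 elements each).
    I = np.concatenate([I1, I2])
    Q = np.concatenate([Q1, Q2])
    # 2. Form the 16-element vector Gamma.
    Gamma = np.concatenate([I * (p1 + p4) + Q * (p2 - p3),
                            I * (p3 - p2) + Q * (p1 + p4)])
    # 3. Compute q.
    q = p1**2 + p2**2 + p3**2 + p4**2 + 2 * p1 * p4 - 2 * p2 * p3
    # 4. Project: Gamma_T = Gamma - q * gamma.
    Gamma_T = Gamma - q * gamma
    # 5. Normalize Gamma_T by its norm (an exact square root is used here).
    norm = np.linalg.norm(Gamma_T)
    if norm > 0:
        Gamma_T = Gamma_T / norm
    # 6. Update the taps.
    gamma = gamma + eps * Gamma_T
    # 7. Renormalize the taps.
    return gamma / np.linalg.norm(gamma)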


Derivation of Algorithm Tap Update Equations

The respective inputs to the two filter arms are I1+jQ1 and I2+jQ2. The filter contents at time n are, in a delay line of L+1 stages,











I̅1 = [ I1[n]  I1[n−1]  I1[n−2]  …  I1[n−L] ]

Q̅1 = [ Q1[n]  Q1[n−1]  Q1[n−2]  …  Q1[n−L] ]

I̅2 = [ I2[n]  I2[n−1]  I2[n−2]  …  I2[n−L] ]

Q̅2 = [ Q2[n]  Q2[n−1]  Q2[n−2]  …  Q2[n−L] ]


The tap weights are






ᾱ1 = λ̄1 + jμ̄1

ᾱ2 = λ̄2 + jμ̄2


Define vectors










λ̄ = [ λ̄1 ; λ̄2 ]        μ̄ = [ μ̄1 ; μ̄2 ]

I̅ = [ I̅1 ; I̅2 ]        Q̅ = [ Q̅1 ; Q̅2 ]



The partial outputs are






p1 = λ̄T I̅

p2 = λ̄T Q̅

p3 = μ̄T I̅

p4 = μ̄T Q̅


The composite (combined complex) output is






y = (p1 + p4) + j(p2 − p3)


With this notation the weights are conjugated in the implementation so that the output of the respective filters can be written as dot products:






y1 = ᾱ1* · z̄1

y2 = ᾱ2* · z̄2

z̄1 = I̅1 + jQ̅1

z̄2 = I̅2 + jQ̅2


The total power in the output signal is






Ps = |y|² = (p1 + p4)² + (p2 − p3)²


The stochastic gradient is





∇λ = 2(p1 + p4) I̅ + 2(p2 − p3) Q̅

∇μ = 2(p1 + p4) Q̅ − 2(p2 − p3) I̅


These vectors are used to drive the tap weights according to the following equation in which ε is a step size (gain) parameter that controls the speed of adaptation and the variance of the tap weights. Larger values of ε give faster adaptation and the ability to track a more rapidly varying channel while smaller values result in lower variation of the tap weights about the optimal values.





λn+1n+ε∇λ





μn+1n+ε∇μ


The above equations are quite general. Any objective function for which the gradient may be computed, or approximated, may be used. In general, multiple objectives may be optimized by computing the respective gradients and combining them by a weighted addition. The weights may be chosen so as to give greater priority to certain objective functions, such as the SNR. Thus, the algorithm offers a great deal of flexibility in how a system can be implemented.


An example of a useful auxiliary objective function is the lag correlation corresponding to a lag of s samples, where s is the number of samples between zeros of the baseband autocorrelation function. This objective function leads to minimized intersymbol interference in systems having zero autocorrelation at multiples of the baud interval, such as Nyquist pulsed systems. Thus, it is desired to minimize the lag correlation |ρs|². The appropriate gradients, shown below, then need to be calculated.










ρs = ⟨ y[n] y[n+s]* ⟩ = σ + jφ
   = ⟨ [ (p1[n] + p4[n]) + j(p2[n] − p3[n]) ] · [ (p1[n+s] + p4[n+s]) − j(p2[n+s] − p3[n+s]) ] ⟩

σ = ⟨ (p1[n] + p4[n])(p1[n+s] + p4[n+s]) + (p2[n] − p3[n])(p2[n+s] − p3[n+s]) ⟩

φ = ⟨ −(p1[n] + p4[n])(p2[n+s] − p3[n+s]) + (p2[n] − p3[n])(p1[n+s] + p4[n+s]) ⟩

|ρs|² = |σ + jφ|² = σ² + φ²

∇|ρs|² = 2σ∇σ + 2φ∇φ


The stochastic gradients are





λσ=(p1[n]+p4[n])I[n+s]+(p1[n+s]+p4[n+s])I[n]+(p2[n]−p3[n])Q[n+s]+(p2[n+s]−p3[n+s])Q[n]





λφ=(p1[n]+p4[n])Q[n+s]−(p2[n+s]−p3[n+s])I[n]+(p2[n]−p3[n])I[n+s]+(p1[n+s]+p4[n+s])Q[n]
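Purely as an illustration of the quantities just defined, the following sketch evaluates the instantaneous σ and φ terms and the corresponding λ-gradients at lag s. The container types used for the partial outputs and the delay-line vectors are arbitrary choices made for this example.

import numpy as np

def lag_terms(p, I, Q, n, s):
    """Instantaneous sigma/phi terms and their lambda-gradients at lag s (sketch).

    p: dict with arrays "p1".."p4", indexed by time
    I, Q: sequences of the delay-line vectors I[n], Q[n] at each time index
    """
    a = p["p1"][n] + p["p4"][n]
    b = p["p2"][n] - p["p3"][n]
    c = p["p1"][n + s] + p["p4"][n + s]
    d = p["p2"][n + s] - p["p3"][n + s]
    sigma = a * c + b * d                      # real part of y[n] * conj(y[n+s])
    phi = -a * d + b * c                       # imaginary part
    grad_lam_sigma = a * I[n + s] + c * I[n] + b * Q[n + s] + d * Q[n]
    grad_lam_phi = a * Q[n + s] - d * I[n] + b * I[n + s] + c * Q[n]
    return sigma, phi, grad_lam_sigma, grad_lam_phi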


Extension of the Update Algorithm to Further Reduce Intersymbol Interference Effects

The algorithm can be extended to drive additional metrics to desired values. The next metric will be the autocorrelation of the output, yn. In particular, one can define the metric:






ds = |ρs|² = |E{ yn yn+s* }|²


which can be driven toward a minimum. Note that this is a zero-ISI condition at lag equal to s samples. Define the vectors:











x̄n(1) = [ xn(1)  xn−1(1)  …  xn−L(1) ]

x̄n(2) = [ xn(2)  xn−1(2)  …  xn−L(2) ]


comprising the current L+1 samples from the two sub-arrays at time n, and the conjugate tap vectors










ᾱ = [ α0*  α1*  …  αL* ]

β̄ = [ β0*  β1*  …  βL* ]


These are of course complex. It will prove to be useful to define the concatenation of the tap vectors,







σ̄ = [ ᾱ ; β̄ ] = σ̄1 + jσ̄2


and the signal vectors,








ūn = [ x̄(1) ; x̄(2) ]


so that the output of the combiner is given as:







yn = σ̄* ūn


yields









ρs = E{ yn yn+s* }
   = σ̄* E{ ūn ūn+s* } σ̄
   = σ̄* [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] σ̄
   = σ̄* M σ̄
   = σ̄1* M σ̄1 + σ̄2* M σ̄2 + j σ̄1* M σ̄2 − j σ̄2* M σ̄1


where one can identify






M = [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ]

Rs(1) = E{ x̄n(1) x̄n+s(1)* }

Rs(2) = E{ x̄n(2) x̄n+s(2)* }

Cs(12) = E{ x̄n(1) x̄n+s(2)* }

Cs(21) = E{ x̄n(2) x̄n+s(1)* }



and note that






Rs(1) = R−s(1)*

Rs(2) = R−s(2)*

Cs(12) = C−s(21)*


One can calculate the gradient of







ds = |ρs|² = |E{ yn yn+s* }|²

ds = Re{ρs}² + Im{ρs}²

∂ds/∂σ1,i = 2 Re{ρs} ∂Re{ρs}/∂σ1,i + 2 Im{ρs} ∂Im{ρs}/∂σ1,i

∂ds/∂σ2,i = 2 Re{ρs} ∂Re{ρs}/∂σ2,i + 2 Im{ρs} ∂Im{ρs}/∂σ2,i



The objective can be symmetrized since one knows that ds = d−s and that Im{ρ−s} = −Im{ρs}, and so it can be written:














∂ds/∂σ1,i = Re{ρs} ∂Re{ρs + ρ−s}/∂σ1,i + Im{ρs} ∂Im{ρs − ρ−s}/∂σ1,i

In the equations below, Ms denotes the correlation matrix M = [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] identified above, and M−s denotes the corresponding matrix of −s-lag correlations [ R−s(1)  C−s(12) ; C−s(21)  R−s(2) ].

Re{ρs} = Re{ σ̄* Ms σ̄ } = Re{ (σ̄1* − jσ̄2*) Ms (σ̄1 + jσ̄2) }

Im{ρs} = Im{ (σ̄1* − jσ̄2*) Ms (σ̄1 + jσ̄2) }

∂Re{ρs}/∂σ1,i = Re{ ēi* Ms (σ̄1 + jσ̄2) + (σ̄1 + jσ̄2)* Ms ēi }
             = ēi* Re{ Ms (σ̄1 + jσ̄2) + conj{Ms} (σ̄1 − jσ̄2) }
             = ēi* Re{ Ms (σ̄1 + jσ̄2) + conj{M−s} (σ̄1 − jσ̄2) }

∂Re{ρ−s}/∂σ1,i = ēi* Re{ M−s (σ̄1 + jσ̄2) + conj{Ms} (σ̄1 − jσ̄2) }







thus

















∂Re{ρs + ρ−s}/∂σ1,i = ēi* Re{ M−s + Ms } σ̄1 − ēi* Im{ M−s + Ms } σ̄2
                    = ēi* Re{ [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] + [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] } σ̄1 − ēi* Im{ [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] + [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] } σ̄2

where the matrix







[ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] + [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ]

appearing in the last line is Hermitian.


Similarly






















∂Im{ρs}/∂σ1,i = Im{ ēi* Ms (σ̄1 + jσ̄2) + (σ̄1 + jσ̄2)* Ms ēi }
             = ēi* Im{ Ms (σ̄1 + jσ̄2) + conj{Ms} (σ̄1 − jσ̄2) }
             = ēi* Im{ Ms (σ̄1 + jσ̄2) + conj{M−s} (σ̄1 − jσ̄2) }

∂Im{ρ−s}/∂σ1,i = ēi* Im{ M−s (σ̄1 + jσ̄2) + conj{Ms} (σ̄1 − jσ̄2) }


thus

















∂Im{ρs − ρ−s}/∂σ1,i = ēi* Im{ [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] − [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] } σ̄1 − ēi* Re{ [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] − [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] } σ̄2













where the matrix







[ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] − [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ]




that appears is now skew Hermitian.


In a similar fashion it is computed














∂ds/∂σ2,i = Re{ρs} ∂Re{ρs + ρ−s}/∂σ2,i + Im{ρs} ∂Im{ρs − ρ−s}/∂σ2,i

∂Re{ρs}/∂σ2,i = ∂/∂σ2,i Re{ (σ̄1* − jσ̄2*) Ms (σ̄1 + jσ̄2) }
             = Re{ −j ēi* Ms (σ̄1 + jσ̄2) + (σ̄1* − jσ̄2*) Ms j ēi }
             = ēi* Re{ −j Ms (σ̄1 + jσ̄2) + conj{ (−j) Ms* (σ̄1 + jσ̄2) } }
             = ēi* Re{ −j Ms (σ̄1 + jσ̄2) + j conj{ Ms* (σ̄1 + jσ̄2) } }
             = ēi* Im{ Ms (σ̄1 + jσ̄2) − conj{M−s} (σ̄1 − jσ̄2) }

∂Re{ρ−s}/∂σ2,i = ēi* Im{ M−s (σ̄1 + jσ̄2) − conj{Ms} (σ̄1 − jσ̄2) }



thus

















∂Re{ρs + ρ−s}/∂σ2,i = ēi* Im{ M−s + Ms } σ̄1 + ēi* Re{ M−s + Ms } σ̄2
                    = ēi* Im{ [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] + [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] } σ̄1 + ēi* Re{ [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] + [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] } σ̄2
















and




















∂Im{ρs}/∂σ2,i = ∂/∂σ2,i Im{ (σ̄1* − jσ̄2*) Ms (σ̄1 + jσ̄2) }
             = Im{ −j ēi* Ms (σ̄1 + jσ̄2) + (σ̄1* − jσ̄2*) Ms j ēi }
             = ēi* Im{ −j Ms (σ̄1 + jσ̄2) + conj{ (−j) Ms* (σ̄1 + jσ̄2) } }
             = ēi* Im{ −j Ms (σ̄1 + jσ̄2) + j conj{ Ms* (σ̄1 + jσ̄2) } }
             = ēi* Re{ −Ms (σ̄1 + jσ̄2) + conj{M−s} (σ̄1 − jσ̄2) }

∂Im{ρ−s}/∂σ2,i = ēi* Re{ −M−s (σ̄1 + jσ̄2) + conj{Ms} (σ̄1 − jσ̄2) }

















thus

















∂Im{ρs − ρ−s}/∂σ2,i = −ēi* Re{ Ms − M−s } σ̄1 + ēi* Im{ Ms − M−s } σ̄2
                     = −ēi* Re{ [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] − [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] } σ̄1 + ēi* Im{ [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] − [ Rs(1)*  Cs(21)* ; Cs(12)*  Rs(2)* ] } σ̄2
















Let

















ρs = λ + jμ

H = M + M*

S = M − M*

Then from all of the above data we can write











∇σ1 ds = Re{ρs} ∇σ1 Re{ρs + ρ−s} + Im{ρs} ∇σ1 Im{ρs − ρ−s}
       = λ Re{H} σ̄1 − λ Im{H} σ̄2 + μ Im{S} σ̄1 + μ Re{S} σ̄2

∇σ2 ds = λ Im{H} σ̄1 + λ Re{H} σ̄2 − μ Re{S} σ̄1 + μ Im{S} σ̄2





The complex gradient is then











∇σ1 + j∇σ2 = λ H σ̄1 + μ S σ̄2 + jλ H σ̄2 − jμ S σ̄1
           = λ H (σ̄1 + jσ̄2) − jμ S (σ̄1 + jσ̄2)
           = (λ H − jμ S)(σ̄1 + jσ̄2)
           = [ (λ − jμ) M + (λ + jμ) M* ] σ̄

∇σ1 = Re{ λ H σ̄ } − Re{ jμ S σ̄ } = Re{ λ H σ̄ } + Im{ μ S σ̄ }

∇σ2 = Im{ λ H σ̄ } − Im{ jμ S σ̄ } = Im{ λ H σ̄ } − Re{ μ S σ̄ }
















Returning to









ρs = E{ yn yn+s* } = σ̄* E{ ūn ūn+s* } σ̄
   = σ̄* [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] σ̄
   = σ̄* M σ̄
   = σ̄1* M σ̄1 + σ̄2* M σ̄2 + j σ̄1* M σ̄2 − j σ̄2* M σ̄1
   = σ̄1* M σ̄1 + σ̄2* M σ̄2 + j σ̄1* M σ̄2 − j conj{ σ̄1* M* σ̄2 }

2λ = σ̄1* H σ̄1 + σ̄2* H σ̄2 + σ̄1* 2 Re{ j (M − conj{M*}) } σ̄2
   = σ̄1* H σ̄1 + σ̄2* H σ̄2 − σ̄1* 2 Im{ M + M* } σ̄2
   = σ̄1* H σ̄1 + σ̄2* H σ̄2 − σ̄1* 2 Im{H} σ̄2

2jμ = σ̄1* S σ̄1 + σ̄2* S σ̄2 + j σ̄1* 2 Re{ M − conj{M*} } σ̄2
    = σ̄1* S σ̄1 + σ̄2* S σ̄2 + j σ̄1* 2 Re{ M − M* } σ̄2
    = σ̄1* S σ̄1 + σ̄2* S σ̄2 + j σ̄1* 2 Re{S} σ̄2







Let us compute













∂ρs/∂σ1,i = ēi* M σ̄1 + σ̄1* M ēi + j ēi* M σ̄2 − j σ̄2* M ēi
          = ēi* M σ̄1 + conj{ ēi* M* σ̄1 } + j ēi* M σ̄2 + conj{ j ēi* M* σ̄2 }
          = ēi* M (σ̄1 + jσ̄2) + conj{ ēi* M* (σ̄1 + jσ̄2) }

∂ρs*/∂σ1,i = ēi* M* (σ̄1 + jσ̄2) + conj{ ēi* M (σ̄1 + jσ̄2) }






Thus








∂λ/∂σ1,i = 2 Re{ ēi* H σ̄ }

∂μ/∂σ1,i = 2 Im{ ēi* S σ̄ }







Similarly












∂ρs/∂σ2,i = ēi* M σ̄2 + σ̄2* M ēi − j ēi* M σ̄1 + j σ̄1* M ēi
          = ēi* M σ̄2 + conj{ ēi* M* σ̄2 } − j ēi* M σ̄1 + conj{ j ēi* M* σ̄1 }
          = −j ēi* M (σ̄1 + jσ̄2) + conj{ −j ēi* M* (σ̄1 + jσ̄2) }

∂ρs*/∂σ2,i = −j ēi* M* (σ̄1 + jσ̄2) + conj{ −j ēi* M (σ̄1 + jσ̄2) }






so that









∂λ/∂σ2,i = 2 Im{ ēi* H σ̄ }

∂μ/∂σ2,i = −2 Re{ ēi* S σ̄ }




Hessian




















∇σ̄1 = Re{ λ H σ̄ } − Re{ jμ S σ̄ } = Re{ λ H σ̄ } + Im{ μ S σ̄ }

∇σ̄2 = Im{ λ H σ̄ } − Im{ jμ S σ̄ } = Im{ λ H σ̄ } − Re{ μ S σ̄ }

J = ∂²ds / (∂σb ∂σa)
  = (∂λ/∂σb) Re{ ēa* H σ̄ } + λ Re{ ēa* H ēb } + (∂μ/∂σb) Im{ ēa* S σ̄ } + μ Im{ ēa* S ēb }

∂ρs/∂σ1,i = ēi* [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] (σ̄1 + jσ̄2) + (σ̄1 + jσ̄2)* [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] ēi
          = ēi* ( [ Rs(1)  Cs(12) ; Cs(21)  Rs(2) ] (σ̄1 + jσ̄2) + [ Rs(1)T  Cs(21)T ; Cs(12)T  Rs(2)T ] (σ̄1 − jσ̄2) )

















The preferred embodiment of the invention contains several useful features and components. For example, the plurality of received branch outputs, such as antenna outputs, are independently detected by radio frequency processing means including filters, amplifiers, frequency converters and analog to digital converters, to produce a digital baseband (I and Q) sample stream for each branch. Aspects of the preferred embodiment may be implemented using digital signal processing (DSP) techniques, however, it should be understood that such processing may be realized in various ways, including equivalent analog processing. The two or more receiving path outputs can be connected to a channel estimation and combiner optimization algorithm that adaptively determines suitable branch weights to be applied to the respective branches in a linear weighted combining process. The algorithm adapts the weights according to an objective function, usually the SNR of the combined output. The adaptation in the preferred embodiment is controlled by a stochastic gradient which is computed (updated) at regular intervals. In the preferred embodiment, the algorithm operates without knowledge of the signal data and requires no demodulation or decoding of the signal, nor is a training interval or sequence required. Thus, the algorithm is able to operate as a “blind” adaptive algorithm.
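The overall blind operation described in this paragraph might be organized as in the sketch below: a single-tap, two-branch example in which the weights are updated from a stochastic gradient of the output power at regular block intervals and then renormalized, with no training sequence or demodulation. The block length, step size and initial weights are arbitrary illustrative values, not parameters specified by the patent.

import numpy as np

def blind_combiner(branch_iq, eps=0.01, block=64):
    """Blind adaptive two-branch combiner loop (illustrative sketch only).

    branch_iq: (2, T) complex baseband samples from the two receiver branches
    """
    w = np.array([1.0 + 0j, 1.0 + 0j]) / np.sqrt(2)   # single-tap weights, fixed norm
    out = np.empty(branch_iq.shape[1], dtype=complex)
    for start in range(0, branch_iq.shape[1], block):
        x = branch_iq[:, start:start + block]
        y = np.conj(w) @ x                             # weighted combination
        out[start:start + len(y)] = y
        # Stochastic gradient of the average output power with respect to the weights.
        grad = (x @ np.conj(y)) / x.shape[1]
        w = w + eps * grad
        w = w / np.linalg.norm(w)                      # hold the weight norm constant
    return out, w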


Additionally, the algorithm allows the speed of adaptation, the update rate and the variance of the estimated weights to be balanced by adjusting a parameter of the stochastic gradient algorithm, namely, the step size parameter. The algorithm may be applied, in one embodiment, as a simple SNR maximizing algorithm that is scalable to any number of branches. The algorithm and system may also be extended, by incorporation of additional objective functions, such as a measure of intersymbol interference (ISI), so that it further adapts to dispersion effects (frequency selective fading induced by differential delay of multipath components) in addition to fading effects. The additional processing to mitigate such dispersion effects includes an adaptive filter for each branch. The filter may be realized as a transversal filter with “taps” that are spaced in time by a suitable sample interval and which are further provided with adjustable weights, or gains, generally complex (that is, having amplitude and phase adjustment capability), that are adaptively computed by the channel estimation and combiner optimization algorithm. When the algorithm is extended to measure ISI, the resulting improvement to the signal quality includes a reduction in intersymbol interference that further increases the quality of the symbol “eye pattern”.
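As a sketch of the transversal-filter structure and the ISI measure mentioned here, the function below filters each branch with its own complex taps, combines the outputs, and reports the squared lag-s correlation of the combined signal. The tap values and the lag are placeholders supplied by the caller for illustration.

import numpy as np

def combined_output_and_isi(x1, x2, taps1, taps2, s):
    """Per-branch transversal filtering, combining, and an ISI proxy (sketch).

    taps1, taps2: complex tap vectors for the two branch equalizers
    s: lag (in samples) at which the combined output should decorrelate
    """
    y = (np.convolve(x1, taps1, mode="full")[: len(x1)]
         + np.convolve(x2, taps2, mode="full")[: len(x2)])
    # |rho_s|^2 for the combined output; driving this toward zero reduces ISI.
    rho_s = np.mean(y[:-s] * np.conj(y[s:]))
    return y, abs(rho_s) ** 2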


Example methods may be better appreciated with reference to flow diagrams. While for purposes of simplicity of explanation, the illustrated methodologies are shown and described as a series of blocks, it is to be appreciated that the methodologies are not limited by the order of the blocks, as some blocks can occur in different orders and/or concurrently with other blocks from that shown and described. Moreover, less than all the illustrated blocks may be required to implement an example methodology. Blocks may be combined or separated into multiple components. Furthermore, additional and/or alternative methodologies can employ additional, not illustrated blocks.



FIG. 6 illustrates a method 600 for optimally combining multi-path signals. The method 600 begins by receiving a first signal, at 602, that traveled a first path from a transmitter to a receiving location and receiving, at 604, a second signal that traveled a second path from the transmitter to the same receiving location. The second path is different than the first path so that the first signal contains signal data and has a first distortion that is different than a second distortion in the second signal. The second signal contains the same signal data as the first signal. The first distortion and the second distortion can correspond to different time intervals or other parameters.


According to an objective function, the method adaptively generates a first weight value and a second weight value, at 606. The first weight value and the second weight value can be generated so that a combined signal to noise ratio (SNR) of the combined signal (introduced below) is maximized. The first weight value is applied to the first signal, at 608, to produce a first weighted signal. Similarly, the second weight value is applied to the second signal, at 610, to produce a second weighted signal.


The first weighted signal and the second weighted signal are linearly combined, at 612, to produce a combined signal with a combined signal degradation. The combined signal degradation is less than the first degradation and the combined signal degradation is less than the second degradation.


In another embodiment the method 600 can adaptively generate a third weight value and a fourth weight value so that an intersymbol interference ratio (ISI) is minimized. The third weight value is applied to the first signal and the fourth weight value is applied to the second signal. The third weight value can also be applied to the first signal after time domain filtering the first signal and the fourth weight value can also be applied to the second signal after time domain filtering the second signal.


In one implementation, the method 600 can enforce various constraints. For example, the method 600 can constrain the first weight value and constrain the second weight value so that at least one parameter of the combined signal is optimized. These constraints can ensure that a noise power of the combined signal is held below a threshold value. The constraints can be based, at least in part, on the norm of a weighted vector. The norm of the weighted vector can be held to a constant value. The constant value can be a noise value and the objective function can have a goal of maximizing the power of the combined signal.


Other embodiments of the method 600 may perform other useful actions and contain other features. For example, the method could constrain the first weight value and the second weight value so that the sum of the square of the first weight value plus the sum of the square of the second weight value is a constant. The first weight value and the second weight value can be periodically updated based, at least in part, on the constant. The objective function is based, at least in part, on a stochastic gradient value. The method 600 can use an objective function that does not require training over a sequence of values nor a demodulation of the combined signal to extract information in the combined signal.


In the foregoing description, certain terms have been used for brevity, clearness, and understanding. No unnecessary limitations are to be implied therefrom beyond the requirement of the prior art because such terms are used for descriptive purposes and are intended to be broadly construed. Therefore, the invention is not limited to the specific details, the representative embodiments, and illustrative examples shown and described. Thus, this application is intended to embrace alterations, modifications, and variations that fall within the scope of the appended claims.


Moreover, the description and illustration of the invention is an example and the invention is not limited to the exact details shown or described. References to “the preferred embodiment”, “an embodiment”, “one example”, “an example”, and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in the preferred embodiment” does not necessarily refer to the same embodiment, though it may.

Claims
  • 1. A method for optimally combining multi-path signals comprising: receiving a first signal that traveled a first path from a transmitter to a receiving location, wherein the first signal contains signal data and has a first distortion;receiving a second signal that traveled a second path from the transmitter to the receiving location, wherein the second path is different than the first path, and wherein the second signal contains the same signal data and has a second distortion that is different than the first distortion;adaptively generating according to an objective function a first weight value and adaptively generating according to the objective function a second weight value;applying the first weight value to the first signal to produce a first weighted signal;applying the second weight value to the second signal to produce a second weighted signal;linearly combining the first weighted signal and the second weighted signal to produce a combined signal with a combined signal degradation, wherein the combined signal degradation is less than the first degradation and the combined signal degradation is less than the second degradation.
  • 2. The method of claim 1 further comprising: constraining the first weight value and the second weight value so the that the sum of the square of the first weight value plus the sum of the square of the second weight value is a constant; andperiodically updating the first weight value and the second weight value based, at least in part, on the constant and the objective function.
  • 3. The method of claim 1 wherein the adaptively generating further comprises: adaptively generating the first weight value and the second weight value so that a combined signal to noise ratio (SNR) of the combined signal is maximized.
  • 4. The method of claim 1 wherein the adaptively generating further comprises: adaptively generating third weight value and a fourth weight value so that an intersymbol interference ratio (ISI) is minimized;applying the third weight value to the first signal; andapplying the fourth weight value to the second signal.
  • 5. The method of claim 4 further comprising: applying the third weight value to the first signal after time domain filtering the first signal; andapplying the fourth weight value to the second signal after time domain filtering the second signal.
  • 6. The method of claim 4 wherein the adaptively generating further comprises: adaptively generating the first weight value and the second weight value so that a combined signal to noise ratio (SNR) of the combined signal is maximized.
  • 7. The method of claim 1 wherein the objective function is based, at least in part, on a stochastic gradient value.
  • 8. The method of claim 1 further comprising: constraining the first weight value and constraining the second weight value so that at least one parameter of the combined signal is optimized.
  • 9. The method of claim 1 further comprising: constraining the first weight value and constraining the second weight value so that a noise power of the combined signal is held below a threshold value.
  • 10. The method of claim 1 further comprising: periodically updating the first weight value and the second weight value at a periodic time interval.
  • 11. The method of claim 1 wherein the adaptively generating according to an objective function does not require any of the group of: training over a sequence of values and demodulation of the combined signal to extract information in the combined signal.
  • 12. The method of claim 1 further comprising: constraining the first weight value is based, at least in part, by a norm of a weighted vector.
  • 13. The method of claim 12 wherein the norm of the weighted vector is held to a constant value.
  • 14. The method of claim 13 wherein the constant value is a noise value, and wherein the objective function has a goal of maximizing the power of the combined signal.
  • 15. The method of claim 1 wherein the first distortion and the second distortion correspond to different time intervals.
  • 16. A system for optimally combining multi-path signals comprising: a first channel sub-processor logic configured to receive a first signal that traveled a first path from a transmitter to the first channel sub-processor, wherein the first signal contains signal data and has a first distortion;a second channel sub-processor logic configured to receive a second signal that traveled a second path from a transmitter to the first channel sub-processor, wherein the second path is different than the first path, and wherein the second signal contains the same signal data and has a second distortion that is different than the first distortion;an objective function;adaption and combining logic configured to adaptively generate according to the objective function a first weight value and adaptively generate according to the objective function a second weight value;wherein the first channel sub-processor logic is configured to apply the first weight value to the first signal to produce a first weighted signal, wherein the second channel sub-processor logic is configured to apply the second weight value to the second signal to produce a second weighted signal, wherein the adaption and combining logic is configured to linearly combine the first weight signal and the second weighted signal to produce a combined signal with a combined signal degradation, wherein the combined signal degradation is less than the first degradation and the combined signal degradation is less than the second degradation.
  • 17. The system of claim 16 wherein the first channel sub-processor logic further comprises: a first time domain filter configured to filter the first weighted signal before the first weight signal and the second weighted signal are combined to produce the combined signal; and wherein the first channel sub-processor logic further comprises:a second time domain filter configured to filter the second weighted signal before the first weight signal and the second weighted signal are combined to produce the combined signal.
  • 18. The system of claim 17 wherein the first time domain filter further comprises: a first adjustable tap coefficient configured to reduce signal distortions in the first weighted signal before the first weight signal and the second weighted signal are combined to produce a combined signal; and wherein the second time domain filter further comprises:a second adjustable tap coefficient configured to reduce signal distortions in the first weighted signal before the first weight signal and the second weighted signal are combined to produce a combined signal.
  • 19. The system of claim 18 in which the first time domain filter and the second time domain filter are configured to align the first multiplied signal and the second multiplied signal to compensate for differences in delay between the first multiplied signal and the second multiplied signal.
  • 20. The system of claim 18 wherein the adaption and combining logic is configured to adaptively generate according to the objective function the first weight value and the second weight value base, at least in part, on one or more of the group of: the first weighted signal, the second weighted signal, the combined signal, the first tap coefficient and the second tap coefficient.
CROSS REFERENCE TO RELATED APPLICATION

This application claims priority from U.S. Provisional Application Ser. No. 61/689,757, filed Jun. 11, 2012; the disclosure of which is incorporated herein by reference.

Provisional Applications (1)
Number Date Country
61689757 Jun 2012 US